© 2026 Greg T. Chism · MIT License

Graph Neural Networks — Interactive Explorer

Watch message passing propagate through graphs — see how GNNs learn node, edge, and graph-level representations


Graph
Dataset
Karate = social, Molecule = chemistry
Architecture
GCN Layers
Aggregation
How neighbor messages are combined
Playback
Round 0 / 3
Speed
What's happening?
Select a graph and press Play to watch message passing. Each round, every node collects messages from its neighbors and updates its own feature representation.
Key Concepts
What is a graph? A set of nodes (entities) connected by edges (relationships). Graphs appear everywhere: molecules (atoms + bonds), social networks (people + friendships), citation networks (papers + references), and knowledge bases (concepts + relations).
Message passing: the core GNN operation. In each layer, every node receives the feature vectors of its direct neighbors as "messages," aggregates them, and updates its own representation. After k layers, each node's representation reflects its k-hop neighborhood.
Aggregation: messages from all neighbors are combined using a permutation-invariant function — mean, sum, or max pooling — so the result doesn't depend on the arbitrary order in which neighbors are listed. This is what makes GNNs well-defined on graphs.
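The three aggregators above can be sketched in a few lines of NumPy (the feature values here are made up for illustration):

```python
import numpy as np

# Hypothetical 2-D features for three neighbors of some node v.
neighbors = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
    [3.0, 1.0],
])

agg_sum  = neighbors.sum(axis=0)   # [4.0, 3.0]
agg_mean = neighbors.mean(axis=0)  # [4/3, 1.0]
agg_max  = neighbors.max(axis=0)   # [3.0, 2.0]

# Permutation invariance: shuffling the neighbor order leaves the result unchanged.
shuffled = neighbors[[2, 0, 1]]
assert np.allclose(shuffled.mean(axis=0), agg_mean)
```

The final assertion is the key property: any reordering of the rows gives the same aggregate, so the node's update cannot depend on how its neighbors happen to be listed.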
Graph Convolution (GCN): extends convolution from regular grids (such as images) to irregular graphs. The GCN update rule for a single node is h_v^(l+1) = σ(Σ_{u∈N(v)} W·h_u^(l) / √(deg(v)·deg(u))). The normalization by degree prevents high-degree nodes from dominating and keeps gradients stable.
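A minimal NumPy sketch of this per-node rule; the adjacency-dict representation, the name `gcn_node_update`, and the ReLU choice are illustrative, not the explorer's actual code:

```python
import numpy as np

def gcn_node_update(v, h, adj, W, act=lambda x: np.maximum(x, 0.0)):
    """One GCN update for node v: sum of degree-normalized, weighted
    neighbor features, followed by an activation. `h` maps node -> feature
    vector, `adj` maps node -> list of neighbors, `W` is the layer weights."""
    deg = {u: len(adj[u]) for u in adj}
    msg = sum((W @ h[u]) / np.sqrt(deg[v] * deg[u]) for u in adj[v])
    return act(msg)
```

On a triangle graph with unit features and W = I, every degree is 2, so each of the two incoming messages is scaled by 1/√4 and the node's update is again the all-ones vector.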
Why graphs matter: molecules are naturally graphs, and the same atoms bonded in different arrangements (isomers) can have completely different chemical properties. GNNs achieve state-of-the-art results in drug discovery, protein structure prediction (AlphaFold uses graph attention), and traffic prediction (Google Maps) because they respect graph structure.
Message Passing — Karate Club · Round 0 of 3
Graph visualization 8 nodes · D3 rendered
① Select node ② Collect messages ③ Aggregate ④ Update
Resting node Active node Sending neighbor Updated node
Press Play to start message passing. The highlighted node will collect messages from its neighbors (orange), aggregate them, and update its feature vector.
Graph Convolution — Node Features Across Layers
Input Graph + Node Features
Graph with feature bars rendered by D3
Feature Transformation (Layer-by-Layer)
L0 → L1 → L2 → L3 feature embeddings per layer rendered by D3
GCN Update Rule (Kipf & Welling, 2017)
H^(l+1) = σ( D̃^(−1/2) Ã D̃^(−1/2) H^(l) W^(l) )
à = A + I (adjacency + self-loops)  ·  D̃ = degree matrix of à  ·  W = learnable weight matrix  ·  σ = activation (ReLU)
Each GCN layer performs one round of neighborhood aggregation. With 2 layers, each node sees its 2-hop neighborhood. The normalization by degree prevents high-degree nodes from dominating the aggregation.
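The matrix form of this layer can be sketched directly from the update rule: add self-loops (Ã = A + I), normalize symmetrically by D̃^(−1/2), transform, and apply the activation. A minimal NumPy version (ReLU assumed as σ):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D_tilde^{-1/2} A_tilde D_tilde^{-1/2} H W),
    where A_tilde = A + I adds self-loops and D_tilde is its degree matrix."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)                 # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D_tilde^{-1/2}
    return np.maximum(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking this function twice gives each node a view of its 2-hop neighborhood, matching the layer slider above.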
GNN Applications — Three Core Tasks
Node Classification
Social Network
Predict community membership
Karate Club graph color = community
Community A Community B
Graph Classification
Molecule
Predict molecular property
Atom/bond graph node = atom, edge = bond
Carbon (C) Nitrogen (N) Oxygen (O)
Citation Network
Predict new citations
Paper citation graph dashed = predicted links
Existing edge - - - Predicted edge
GNNs solve three fundamental graph learning tasks: node-level (classify each node), edge-level (predict or score edges), and graph-level (classify or regress over the whole graph). Each builds on the same message-passing foundation.
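The graph-level task needs one extra step the other two don't: a readout that pools all node embeddings into a single graph vector. Like neighbor aggregation, the readout must be permutation-invariant. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def graph_readout(H, mode="mean"):
    """Pool per-node embeddings H (n_nodes x d) into one graph-level vector.
    Like neighbor aggregation, the readout must be permutation-invariant."""
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "sum":
        return H.sum(axis=0)
    if mode == "max":
        return H.max(axis=0)
    raise ValueError(f"unknown readout mode: {mode}")

# The other two tasks work on the un-pooled embeddings:
#   node-level: classify each row of H directly;
#   edge-level: score a pair, e.g. via the dot product h_u @ h_v.
```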
Selected Node
Node ID
Degree
Layer
Feature vector h_v
Incoming Messages
v?
v?
v?
h_u for each neighbor u ∈ N(v)
Aggregated Message
MEAN of neighbor features
Combined neighbor signal before weight transform
Updated Feature
h_v^(l+1)
σ(W · AGG({h_u : u ∈ N(v)}))
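The panel's update formula with MEAN aggregation can be sketched as follows (tanh is chosen here as an example σ; the explorer may use a different activation):

```python
import numpy as np

def node_update(h_neighbors, W, act=np.tanh):
    """sigma(W . AGG({h_u})): mean-aggregate the incoming neighbor
    messages, apply the learnable transform W, then a nonlinearity."""
    agg = np.mean(h_neighbors, axis=0)  # MEAN of neighbor features
    return act(W @ agg)
```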