Commit e4f7dd7

Merge pull request #105 from yuehhua/doc
Doc
2 parents 778ea82 + 868f9a3

File tree

7 files changed: +109 additions, -6 deletions


README.md

Lines changed: 0 additions & 1 deletion
@@ -19,7 +19,6 @@ Suggestions, issues and pull requests are welcome.
 
 ```
 ]add GeometricFlux
-]add GraphSignals@0.1.1
 ```
 
 ## Features

docs/make.jl

Lines changed: 3 additions & 1 deletion
@@ -3,7 +3,9 @@ using GeometricFlux
 
 makedocs(
     sitename = "GeometricFlux",
-    format = Documenter.HTML(),
+    format = Documenter.HTML(
+        canonical = "https://yuehhua.github.io/GeometricFlux.jl/stable"
+    ),
     modules = [GeometricFlux],
     pages = ["Home" => "index.md",
              "Get started" => "start.md",

docs/src/abstractions/gn.md

Lines changed: 26 additions & 0 deletions

# Graph network block

A graph network (GN) block is a generic model for graph neural networks. It prescribes an update order: edges first, then nodes, then the global state, with one update function for each level. The three update functions return their default values as follows:

```
update_edge(gn, e, vi, vj, u) = e
update_vertex(gn, ē, vi, u) = vi
update_global(gn, ē, v̄, u) = u
```

Information propagation between levels is achieved by aggregate functions. Three aggregate functions, `aggregate_neighbors`, `aggregate_edges` and `aggregate_vertices`, are defined to aggregate states.

The GN block is realized as an abstract type, `GraphNet`. Users can subtype `GraphNet` to customize a GN block, so a GN block is defined as a layer in a GNN. `MessagePassing` is a subtype of `GraphNet`.

## Update functions

`update_edge` is the first update function applied and acts on edge states. It takes the edge state `e`, the state `vi` of node `i`, the state `vj` of node `j` and the global state `u`, and is expected to return a feature vector for the new edge state. `update_vertex` updates node states; it takes the aggregated edge state `ē`, the state `vi` of node `i` and the global state `u`, and is expected to return a feature vector for the new node state. `update_global` updates the global state with information aggregated from edges and nodes; it takes the aggregated edge state `ē`, the aggregated node state `v̄` and the global state `u`, and is expected to return a feature vector for the new global state. Users can define their own behavior by overriding these update functions.
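
To make this concrete, here is a minimal sketch of a custom GN block; the layer name `EdgeUpdater` and its weight field are hypothetical, not part of the package. It subtypes `GraphNet` and overrides only the edge update, leaving the node and global updates at their identity defaults:

```julia
using GeometricFlux

# Hypothetical GN block: computes new edge states from the endpoint
# node states; node and global updates keep their defaults.
struct EdgeUpdater <: GraphNet
    W::Matrix{Float32}  # assumed trainable weight matrix
end

# Override the edge update: concatenate the endpoint states and
# apply a linear map to produce the new edge state.
GeometricFlux.update_edge(gn::EdgeUpdater, e, vi, vj, u) = gn.W * vcat(vi, vj)
```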

## Aggregate functions

The aggregate function `aggregate_neighbors` aggregates the states of the edges incident to a node `i` into node-level information. `aggregate_edges` aggregates all edge states into global-level information, and `aggregate_vertices` aggregates all vertex states into global-level information. Aggregate operations are selected by passing them to the `propagate` function:

```
propagate(gn, fg::FeaturedGraph, naggr=nothing, eaggr=nothing, vaggr=nothing)
```

`naggr`, `eaggr` and `vaggr` choose the operations used by `aggregate_neighbors`, `aggregate_edges` and `aggregate_vertices`, respectively. Available aggregate operations are given by the following symbols: `:add`, `:sub`, `:mul`, `:div`, `:max`, `:min` and `:mean`.
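
For instance, a sketch of a full GN pass (assuming `gn` is an instance of a `GraphNet` subtype and `fg` is a `FeaturedGraph`): sum edge states into each node, and average edge and vertex states for the global update:

```julia
# Hypothetical call: :add for aggregate_neighbors,
# :mean for aggregate_edges and aggregate_vertices.
propagate(gn, fg, :add, :mean, :mean)
```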

docs/src/abstractions/msgpass.md

Lines changed: 39 additions & 0 deletions

# Message passing scheme

The message passing scheme is a popular GNN scheme in many frameworks. It exploits the connectivity between neighbors and forms a general approach for spatial graph convolutional neural networks. It comprises two user-defined functions and one aggregate function. A message function processes information from edge states and from the states of a node and its neighbors. The messages for each node are then aggregated by the aggregate function to provide node-level information to the update function. The update function takes the current node state and the aggregated message and produces a new node state.

The message passing scheme is realized as an abstract type, `MessagePassing`. Any subtype of `MessagePassing` is a message passing layer, which by default uses the following message and update functions:

```
message(mp, x_i, x_j, e_ij) = x_j
update(mp, m, x) = m
```

`mp` denotes a message passing layer. `message` accepts the state `x_i` of node `i`, the state `x_j` of a neighbor node `j`, and the corresponding edge state `e_ij` for edge `(i,j)`. The default message function returns the neighbor state `x_j` for each neighbor of node `i`. `update` takes the aggregated message `m` and the current node state `x`, and by default returns `m`.

## Message function

A message function accepts a feature vector `x_i` representing the state of node `i`, feature vectors `x_j` for the states of its neighbors and the corresponding edge states `e_ij`. `message` is expected to output a vector: the message. Users can override `message` in a customized message passing layer to provide the desired behavior.
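
As an illustration, here is a minimal sketch of a customized layer; the name `WeightedConv` and its field are hypothetical. It subtypes `MessagePassing` and overrides `message` to transform each neighbor state before aggregation:

```julia
using GeometricFlux

# Hypothetical message passing layer with a trainable weight matrix.
struct WeightedConv <: MessagePassing
    W::Matrix{Float32}
end

# Override the message function: each neighbor state x_j is mapped
# through W; the edge state e_ij is ignored in this sketch.
GeometricFlux.message(mp::WeightedConv, x_i, x_j, e_ij) = mp.W * x_j
```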

## Aggregate messages

Messages from the message function are aggregated by an aggregate function, and the aggregated message is passed to the update function for node-level computation. The aggregate function is chosen as follows:

```
propagate(mp, fg::FeaturedGraph, aggr::Symbol=:add)
```

The `propagate` function invokes the whole message passing layer. `fg` is the input to the layer, and `aggr` selects the aggregate function used by `propagate`; for example, `:add` aggregates by summing all messages (see the sketch after the list below for an example call).

The following values of `aggr` are the available aggregate functions:

* `:add`: sum of all messages
* `:sub`: negative of the sum of all messages
* `:mul`: product of all messages
* `:div`: inverse of the product of all messages
* `:max`: maximum of all messages
* `:min`: minimum of all messages
* `:mean`: average of all messages
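
For example (a sketch, assuming `mp` is a `MessagePassing` layer and `fg` is a `FeaturedGraph`), switching from summing to averaging messages is a one-argument change:

```julia
# Hypothetical calls: the default :add sums messages; :mean averages them.
propagate(mp, fg)         # aggregate messages by :add
propagate(mp, fg, :mean)  # aggregate messages by :mean
```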

## Update function

An update function takes the aggregated message `m` and the current node state `x` as arguments, and is expected to output a vector: the new node state for the next layer. Users can override `update` in a customized message passing layer to provide the desired behavior.
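
Continuing the hypothetical `WeightedConv` sketch above, an update function that mixes the aggregated message with the current node state could be provided as:

```julia
# Override the update function: combine the aggregated message m with
# the current node state x (here a simple residual-style sum).
GeometricFlux.update(mp::WeightedConv, m, x) = m .+ x
```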

docs/src/basics/layers.md

Lines changed: 14 additions & 1 deletion

# Building graph neural networks

Building a GNN is as simple as building a neural network in Flux. The syntax here is the same as in Flux: `Chain` is used to stack layers into a GNN. A simple example:

```
model = Chain(GCNConv(adj_mat, feat=>h1),
              GCNConv(adj_mat, h1=>h2, relu))
```

`GCNConv` constructs a graph convolutional layer. Its first argument `adj_mat` is the representation of a graph in the form of an adjacency matrix. The first layer maps the feature dimension from `feat` to `h1`; the second layer then maps `h1` to `h2`. The default activation function is the identity if none is specified.
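
To make the dimensions concrete, here is a sketch with hypothetical sizes; it assumes features are stored one column per node, following the usual Flux convention:

```julia
using Flux, GeometricFlux

adj_mat = Float32[0 1 1; 1 0 1; 1 1 0]  # a toy 3-node graph
feat, h1, h2 = 8, 16, 4                  # hypothetical dimensions

model = Chain(GCNConv(adj_mat, feat=>h1),
              GCNConv(adj_mat, h1=>h2, relu))

X = rand(Float32, feat, 3)  # node feature matrix, feat × num_nodes
Y = model(X)                # expected size: h2 × num_nodes
```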

## Customize layers

Customizing your own GNN layers works the same way as customizing layers in Flux. You may want to refer to the [Flux documentation](https://fluxml.ai/Flux.jl/stable/models/basics/#Building-Layers-1).
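
As a reminder of the Flux pattern (a generic sketch, not a GeometricFlux API), a layer is just a struct holding parameters together with a call method:

```julia
using Flux

# A plain Flux-style layer: parameters live in a struct, the forward
# pass is a call method. Flux.@functor marks the fields as trainable.
struct Affine
    W
    b
end

Affine(in::Integer, out::Integer) = Affine(randn(Float32, out, in), zeros(Float32, out))

(a::Affine)(x) = a.W * x .+ a.b

Flux.@functor Affine
```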

docs/src/basics/passgraph.md

Lines changed: 26 additions & 2 deletions

# Graph passing

A graph is an input data structure for a graph neural network. Passing a graph into a GNN layer can follow different strategies. If the graph remains fixed across samples, that is, all samples share the same graph structure, a static graph is used. Otherwise, a graph can be carried within a `FeaturedGraph` to provide a variable graph to the GNN layer. Users have the flexibility to pick the approach adequate for their needs.

## Static graph

A static graph is used to avoid redundant computation while passing through layers. A static graph can be set in a graph convolutional layer in advance, so that the graph held by the layer is used for computation. An adjacency matrix `adj_mat` representing the graph is passed to a graph convolutional layer as follows:

```
GCNConv(adj_mat, feat=>h1, relu)
```

`Simple(Di)Graph`, `SimpleWeighted(Di)Graph` and `Meta(Di)Graph`, provided by LightGraphs, SimpleWeightedGraphs and MetaGraphs, respectively, are acceptable for passing to a layer as a static graph. An adjacency list of type `Vector{Vector}` is also accepted.
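
For instance, a sketch with a hypothetical LightGraphs graph in place of the adjacency matrix (`feat` and `h1` are assumed to be defined dimensions):

```julia
using LightGraphs, GeometricFlux

g = SimpleGraph(3)  # a toy 3-node graph
add_edge!(g, 1, 2)
add_edge!(g, 2, 3)

layer = GCNConv(g, feat=>h1, relu)  # static graph held by the layer
```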

## Variable graph

Variable graphs are supported through `FeaturedGraph`. Each `FeaturedGraph` contains its own graph structure and features. A `FeaturedGraph` is fed directly to a graph convolutional layer or a graph neural network, so that each set of features is learned on its own graph structure. An adjacency matrix `adj_mat` is given to construct a `FeaturedGraph` as follows:

```
FeaturedGraph(adj_mat, features)
```

`Simple(Di)Graph`, `SimpleWeighted(Di)Graph` and `Meta(Di)Graph`, provided by LightGraphs, SimpleWeightedGraphs and MetaGraphs, respectively, are acceptable for constructing a `FeaturedGraph`. An adjacency list is accepted as well.
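
A sketch of the variable-graph workflow (sizes hypothetical; the layer is assumed to accept and return a `FeaturedGraph`):

```julia
using GeometricFlux

adj_mat = Float32[0 1 1; 1 0 1; 1 1 0]  # a toy 3-node graph
features = rand(Float32, 8, 3)           # 8 features per node

fg = FeaturedGraph(adj_mat, features)
layer = GCNConv(8=>16, relu, cached=false)  # cache off; see the next section
fg′ = layer(fg)  # graph structure travels with the features
```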

## Cached graph in layers

When a variable graph is supplied via `FeaturedGraph`, a GNN layer no longer needs a static graph. Besides leaving the static graph out of the layer's arguments, remember to turn off the cache mechanism. The cache mechanism is designed to store a static graph in order to reduce computation: a cached graph is fetched from the layer, computation is performed, and each time the currently computed graph is assigned back to the layer. This assignment operation is not differentiable, so the cache mechanism must be turned off:

```
GCNConv(feat=>h1, relu, cached=false)
```

This ensures the layer functions as expected.

docs/src/index.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ GeometricFlux is a framework for geometric deep learning/machine learning.
 * It supports CUDA GPU with CUDA.jl
 * It integrates with the JuliaGraphs ecosystem.
 * It supports generic graph neural network architectures (e.g. message passing scheme and graph network block)
-* It contains built-in GNN benchmark datasets (WIP)
+* It contains built-in GNN benchmark datasets (provided by GraphMLDatasets)
 
 ## Installation
 
0 commit comments
