Gated Graph Sequence Neural Networks



Authors: Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel (ICLR 2016).



Abstract



Introduction


There are two settings for feature learning on graphs:

  1. Learning a representation of the input graph.

  2. Learning representations of the internal state during the process of producing a sequence of outputs.




We demonstrate the capabilities of this general model in experiments on bAbI tasks (Weston et al., 2015) and on graph algorithm learning tasks.

We then present an application to the verification of computer programs. When attempting to prove properties such as memory safety (i.e., that there are no null pointer dereferences in a program), a core problem is to find mathematical descriptions of the data structures used in a program.

Following Brockschmidt et al. (2015), we phrase this as a machine learning problem: learning to map from a set of input graphs, representing the state of memory, to a logical description of the data structures that have been instantiated. Whereas Brockschmidt et al. (2015) relied on a large amount of hand-engineered features, we show that this hand-engineering can be replaced with a GGS-NN at no cost in accuracy.




Let us cover all three kinds of networks:

  1. Graph Neural Networks.
  2. Gated Graph Neural Networks.
  3. Gated Graph Sequence Neural Networks.


Graph Neural Networks:


Propagation Model
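In the original GNN formulation of Scarselli et al., node representations are updated repeatedly with the same propagation step until they reach (approximately) a fixed point, which is guaranteed to exist when the update is a contraction map. Below is a minimal sketch of that outer loop only; the propagate function, tolerance, and iteration cap are illustrative assumptions, not part of the paper.

import numpy as np

def gnn_fixed_point(h0, propagate, tol=1e-5, max_iters=100):
    """Iterate a (contraction-map) propagation step until the node states
    stop changing, approximating the fixed point used by the original GNN."""
    h = h0
    for _ in range(max_iters):
        h_next = propagate(h)                  # one application of the update
        if np.max(np.abs(h_next - h)) < tol:   # close enough to a fixed point
            return h_next
        h = h_next
    return h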
Output Model and Learning



Gated Graph Neural Networks:


Node Annotations
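In a GGNN, problem-specific node annotations x_v (for example, flags marking the source and goal nodes in a reachability task) are copied into the first dimensions of each node's hidden state, and the remaining dimensions are padded with zeros. A minimal sketch, assuming the annotations are given as a NumPy array of shape (num_nodes, annotation_dim):

import numpy as np

def init_node_states(annotations, hidden_dim):
    """Set h_v^(1) = [x_v, 0]: copy the annotations into the hidden state
    and zero-pad up to the chosen hidden dimension."""
    num_nodes, annotation_dim = annotations.shape
    h = np.zeros((num_nodes, hidden_dim))
    h[:, :annotation_dim] = annotations
    return h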


Propagation Model
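Instead of running to a fixed point, a GGNN unrolls propagation for a fixed number of steps and uses GRU-style gating at each step: every node first aggregates messages from its incoming and outgoing edges, then updates its state with update and reset gates. The following is a minimal sketch of one step, assuming a single edge type, dense adjacency matrices, and illustrative parameter names; the full model also has biases and per-edge-type weight matrices.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_propagation_step(h, A_in, A_out, params):
    """One GGNN propagation step (single edge type, dense adjacency).

    h      -- (num_nodes, d) current node states
    A_in   -- (num_nodes, num_nodes) adjacency over incoming edges
    A_out  -- (num_nodes, num_nodes) adjacency over outgoing edges
    params -- dict of (d, d) weight matrices (illustrative names)
    """
    # a_v: messages aggregated from neighbours in both edge directions
    a = A_in @ h @ params["W_in"] + A_out @ h @ params["W_out"]

    # GRU-style update of the node states
    z = sigmoid(a @ params["W_z"] + h @ params["U_z"])        # update gate
    r = sigmoid(a @ params["W_r"] + h @ params["U_r"])        # reset gate
    h_tilde = np.tanh(a @ params["W_h"] + (r * h) @ params["U_h"])
    return (1.0 - z) * h + z * h_tilde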
Output Models
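GGNNs support per-node outputs (a function of each node's final state) as well as a single graph-level output. For the graph-level case the paper uses a soft-attention readout of the form h_G = tanh( sum_v sigmoid(i(h_v, x_v)) * tanh(j(h_v, x_v)) ), where i and j are small neural networks. A minimal sketch, with score_net and transform_net standing in for i and j:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_level_output(h_final, annotations, score_net, transform_net):
    """Soft-attention readout over nodes: each node contributes a gated
    vector, and the gated contributions are summed into one graph vector."""
    hx = np.concatenate([h_final, annotations], axis=1)  # [h_v, x_v] per node
    gates = sigmoid(score_net(hx))       # per-node attention gates (network i)
    values = np.tanh(transform_net(hx))  # per-node contributions  (network j)
    return np.tanh(np.sum(gates * values, axis=0))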



Gated Graph Sequence Neural Networks:
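A GGS-NN produces a sequence of outputs by chaining GGNN-style steps: at each output step, one network produces that step's output from the current node annotations, while a second network predicts the annotations to feed into the next step, so intermediate state is carried from one output to the next through the annotations. A rough skeleton of this unroll, with output_ggnn and annotation_ggnn as stand-ins for the two per-step networks:

def ggsnn_forward(annotations, A_in, A_out, output_ggnn, annotation_ggnn,
                  num_output_steps):
    """Skeleton of a GGS-NN unroll: produce one output per step and update
    the node annotations used by the following step."""
    outputs = []
    x = annotations
    for _ in range(num_output_steps):
        outputs.append(output_ggnn(x, A_in, A_out))    # output o^(k)
        x = annotation_ggnn(x, A_in, A_out)            # annotations X^(k+1)
    return outputs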



Applications:


The paper evaluates these models on the bAbI tasks and graph algorithm learning tasks mentioned in the introduction, and applies the GGS-NN to the program verification problem of inferring logical descriptions of heap data structures from graphs representing the state of memory.


For more information and the mathematical details, please refer to the Gated Graph Sequence Neural Networks paper (Li et al., 2016).

