Experiments. We performed a set of experiments to test different properties of GAM. First, we tested the generality of GAM by applying our approach to Multilayer Perceptrons (MLP), Convolutional Neural Networks (CNN), Graph Convolutional Networks (GCN) [15], and Graph Attention Networks (GAT) [35].² Next, we tested the robustness of GAM when faced with noisy graphs, and evaluated GAM and GAM* both with and without a provided graph, comparing them with state-of-the-art methods.

4.1 Graph-based Classification

Datasets. We obtained three public datasets from Yang et al. [38]: Cora [19], Citeseer [5], and Pubmed [25], which have become the de facto standard for evaluating graph node classification algorithms. We used the same train/validation/test splits as ▇▇▇▇ et al. [39], which have also been used by the methods we compare against. In these datasets, graph nodes represent research publications and edges represent citations. Each node is represented as a feature vector whose components correspond to words: for Cora and Citeseer the elements are binary indicators of whether the corresponding term appears in the publication, while for Pubmed they are real-valued tf-idf scores. The goal is to classify each publication according to its main topic, which belongs to a provided set of topics. In each case we are given true labels for only a small subset of nodes. Dataset statistics are shown in Table 4 in Appendix A. A minimal sketch of this data representation is given after the footnote below.

² MLPs and CNNs are common in many SSL problems, and GCN and GAT achieve state-of-the-art performance on the three datasets commonly used in recent graph-based SSL work.
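To make the data layout concrete, the following is a minimal, self-contained sketch (not the paper's code) of how such a citation dataset can be represented: a sparse node-feature matrix (binary bag-of-words as in Cora/Citeseer, or tf-idf as in Pubmed), a sparse citation adjacency matrix, and boolean masks marking the small labeled training subset and the validation/test nodes. All sizes, densities, and variable names are illustrative assumptions using randomly generated placeholder data; sparse matrices are used because both the bag-of-words features and the citation graph are extremely sparse.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Illustrative sizes only (roughly Cora-scale: ~2.7k publications, ~1.4k terms, 7 topics).
num_nodes, vocab_size, num_classes = 2708, 1433, 7

# Node features derived from term counts.
term_counts = sp.random(num_nodes, vocab_size, density=0.01, format="csr", random_state=0)
features_binary = (term_counts > 0).astype(np.float32)   # binary indicators (Cora/Citeseer style)

# Simple tf-idf weighting of the same counts (Pubmed style).
df = np.asarray((term_counts > 0).sum(axis=0)).ravel()    # document frequency per term
idf = np.log(num_nodes / np.maximum(df, 1))
features_tfidf = term_counts.multiply(idf).tocsr()

# Citation graph: symmetric sparse adjacency matrix over the nodes (random placeholder edges).
rows = rng.integers(0, num_nodes, size=10000)
cols = rng.integers(0, num_nodes, size=10000)
adj = sp.coo_matrix((np.ones_like(rows, dtype=np.float32), (rows, cols)),
                    shape=(num_nodes, num_nodes))
adj = ((adj + adj.T) > 0).astype(np.float32)              # symmetrize, drop duplicate edges

# Topic labels for all nodes, but only a small subset is treated as labeled for training,
# mirroring the semi-supervised splits used in this line of work.
labels = rng.integers(0, num_classes, size=num_nodes)
train_mask = np.zeros(num_nodes, dtype=bool)
train_mask[:140] = True                                    # first 140 nodes marked as labeled
val_mask = np.zeros(num_nodes, dtype=bool)
val_mask[140:640] = True                                   # 500 validation nodes
test_mask = np.zeros(num_nodes, dtype=bool)
test_mask[-1000:] = True                                   # 1000 test nodes
```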