Model catalog
Neo4j Graph Analytics for Snowflake is in Public Preview and is not intended for production use.
In this chapter, we will demonstrate how to inspect and manage the models you have trained.
When training models using Neo4j Graph Analytics for Snowflake, the models are stored on a per-user basis. This means that a Snowflake user can only interact with models that they themselves have created via training.
Using these endpoints requires setting up grants for the application, your consumer role, and your environment. Please see the Getting started page for details.
We also assume that the application name is the default, Neo4j_Graph_Analytics. If you chose a different app name during installation, replace the name accordingly.
Checking if a model exists
To check whether a model exists for the current user, we invoke the model_exists procedure:
CALL Neo4j_Graph_Analytics.graph.model_exists('a_model_name')
This yields a single row like

| MODEL_EXISTS |
|--------------|
| TRUE         |
or similar with value FALSE if the model does not exist for the current user.
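Downstream logic often branches on this result. As a minimal sketch, assuming you have fetched the single-row result through a Snowflake client of your choice (fetching itself is out of scope here), the returned value can be normalized to a Python boolean:

```python
def model_exists_from_result(value) -> bool:
    """Normalize a MODEL_EXISTS column value to a Python bool.

    Depending on the driver and its settings, a BOOLEAN column may be
    surfaced as a native bool or as the string 'TRUE'/'FALSE'.
    """
    if isinstance(value, bool):
        return value
    return str(value).strip().upper() == "TRUE"


# Example: guard a (hypothetical) training helper so an existing
# model is not retrained by accident.
# if not model_exists_from_result(row[0]):
#     train_model('a_model_name')
```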
Listing models and their metadata
To list all available models and inspect their metadata, such as the training configuration and metrics, we invoke:
SELECT Neo4j_Graph_Analytics.graph.show_models()
This yields, for example:
| MODELNAME | INFO |
|-----------|------|
| a_model_name | { "metrics": { "test_acc": 0.47403684258461, "test_f1_macro": 0.46957120299339294, "test_f1_micro": 0.47403684258461, "train_acc": 0.5744920969009399, "train_f1_macro": 0.568971574306488, "train_f1_micro": 0.5744920969009399 }, "compute": { "activation": "relu", "aggregator": "mean", "class_weights": true, "dropout": 0.1, "epochs_per_checkpoint": 1, "epochs_per_val": 0, "eval_batch_size": 886, "hidden_channels": 256, "layer_normalization": true, "learning_rate": 0.001, "make_undirected": true, "num_epochs": 1, "num_samples": [ 20, 20 ], "random_seed": 2119823670, "split_ratios": { "TEST": 0.2, "TRAIN": 0.6, "VALID": 0.2 }, "target_label": "movie", "target_property": "genre", "train_batch_size": 886 } } |
The result contains one row per model, with the model name in the MODELNAME column and the model metadata as a JSON document in the INFO column. In this case, the only model present is "a_model_name". The metadata contains the training configuration under the "compute" key. It includes explicit user settings as well as default values for omitted settings. Moreover, when available, quality metrics collected during training appear under the "metrics" key.
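Because the INFO column is a JSON document, it is often convenient to parse it client-side, for example to compare metrics across training runs. A minimal Python sketch, using an abbreviated copy of the sample INFO value above (retrieving the row itself via a Snowflake client is out of scope here):

```python
import json

# Abbreviated INFO cell, fetched as a string; values are taken from
# the sample output above.
info_json = """
{
  "metrics": {"test_acc": 0.47403684258461, "train_acc": 0.5744920969009399},
  "compute": {"target_label": "movie", "target_property": "genre", "num_epochs": 1}
}
"""

info = json.loads(info_json)
metrics = info.get("metrics", {})  # may be absent if no metrics were collected
config = info["compute"]

print(f"test accuracy: {metrics['test_acc']:.3f}")  # → test accuracy: 0.474
print(f"target: {config['target_label']}.{config['target_property']}")  # → target: movie.genre
```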