Fast Random Projection

Neo4j Graph Analytics for Snowflake is in Public Preview and is not intended for production use.

Introduction

Fast Random Projection, or FastRP for short, is a node embedding algorithm in the family of random projection algorithms. These algorithms are theoretically backed by the Johnson-Lindenstrauss lemma, according to which one can project n vectors of arbitrary dimension into O(log(n)) dimensions and still approximately preserve pairwise distances among the points. In fact, a linear projection chosen in a random way satisfies this property.

Such techniques therefore allow for aggressive dimensionality reduction while preserving most of the distance information. The FastRP algorithm operates on graphs, in which case we care about preserving similarity between nodes and their neighbors. This means that two nodes that have similar neighborhoods should be assigned similar embedding vectors. Conversely, two nodes that are not similar should not be assigned similar embedding vectors.

The Neo4j Graph Analytics for Snowflake implementation of FastRP extends the original algorithm[1] in several ways:

The FastRP algorithm initially assigns random vectors to all nodes using a technique called very sparse random projection [2]. Starting with random vectors (node projections) and iteratively averaging over node neighborhoods, the algorithm constructs a sequence of intermediate embeddings e_n^(i) for each node n. More precisely,

    e_n^(i) = avg_{m ∈ N(n)} e_m^(i-1)

where N(n) is the set of neighbors of n and e_n^(0) = r_n is the node’s initial random vector.

The embedding e_n of node n, which is the output of the algorithm, is a combination of the vectors and embeddings defined above:

    e_n = w_0 * normalize(r_n) + Σ_{i=1}^{k} w_i * normalize(e_n^(i))

where normalize is the function that divides a vector by its L2 norm, w_0 is the value of nodeSelfInfluence, and w_1, w_2, …, w_k are the values of iterationWeights. We will return to node self influence later on.

Therefore, each node’s embedding depends on a neighborhood of radius equal to the number of iterations. This way FastRP exploits higher-order relationships in the graph while still being highly scalable.
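The iterative neighborhood averaging and weighted combination described above can be sketched in a few lines of NumPy. This is a toy dense-matrix version for intuition, not the product implementation; `fastrp_sketch` and its argument names are made up for illustration:

```python
import numpy as np

def normalize(x):
    # Divide each row vector by its L2 norm; all-zero rows stay zero.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return np.divide(x, norms, out=np.zeros_like(x), where=norms > 0)

def fastrp_sketch(adj, r, iteration_weights, node_self_influence=0.0):
    # adj: (n, n) adjacency matrix; r: (n, d) initial random projections.
    adj = np.asarray(adj, dtype=float)
    r = np.asarray(r, dtype=float)
    deg = adj.sum(axis=1, keepdims=True)
    # Row-normalizing the adjacency matrix implements neighborhood averaging.
    avg = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)

    e = r.copy()
    out = node_self_influence * normalize(r)   # w_0 * normalize(r_n)
    for w in iteration_weights:                # weights w_1 ... w_k
        e = avg @ e                            # e^(i): average of neighbors' e^(i-1)
        out += w * normalize(e)
    return out
```

Note that two nodes with identical neighborhoods receive identical embeddings under this sketch (when node self influence is zero), which is exactly the similarity-preservation property described above.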

Node properties

Most real-world graphs contain node properties which store information about the nodes and what they represent. The FastRP algorithm in Neo4j Graph Analytics for Snowflake extends the original FastRP algorithm with a capability to take node properties into account. The resulting embeddings can therefore represent the graph more accurately.

The node property aware aspect of the algorithm is configured via the parameters featureProperties and propertyRatio. Each node property in featureProperties is associated with a randomly generated vector of dimension propertyDimension, where propertyDimension = embeddingDimension * propertyRatio. Each node is then initialized with a vector of size embeddingDimension formed by concatenation of two parts:

  1. The first part is formed like in the standard FastRP algorithm,

  2. The second one is a linear combination of the property vectors, using the property values of the node as weights.

The algorithm then proceeds with the same logic as the standard FastRP algorithm, and therefore outputs arrays of size embeddingDimension. The last propertyDimension coordinates of each embedding (the "property part" below) capture information about the property values of nearby nodes, and the remaining embeddingDimension - propertyDimension coordinates (the "topology part") capture information about the presence of nearby nodes.

[0, 1, ...        | ...,   N - 1, N]
 ^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^
  topology part   |  property part
                  ^
           property ratio
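A hedged sketch of this two-part initialization, assuming Achlioptas-style very sparse random projections for the topology part (`init_vectors` is a hypothetical name; the real implementation differs):

```python
import numpy as np

def init_vectors(node_features, embedding_dimension, property_ratio, rng):
    # node_features: (n, num_props) matrix of property values per node.
    n, num_props = node_features.shape
    property_dimension = int(embedding_dimension * property_ratio)
    topology_dimension = embedding_dimension - property_dimension

    # Topology part: very sparse random projection entries sqrt(3) * {+1, 0, -1}
    # with probabilities {1/6, 2/3, 1/6} (Achlioptas-style [2]).
    topology_part = np.sqrt(3) * rng.choice(
        [1.0, 0.0, -1.0], size=(n, topology_dimension), p=[1 / 6, 2 / 3, 1 / 6])

    # Property part: one random vector per property; each node combines them
    # linearly, weighted by its own property values.
    property_vectors = rng.standard_normal((num_props, property_dimension))
    property_part = node_features @ property_vectors

    return np.concatenate([topology_part, property_part], axis=1)
```

A node whose feature values are all zero gets a zero property part, but still receives a topology part, so the concatenated vector has the full embeddingDimension length.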

Tuning algorithm parameters

To improve the embedding quality when using FastRP on one of your graphs, you can tune the algorithm parameters. This process of finding the best parameters for your specific use case and graph is typically referred to as hyperparameter tuning. We will go through each of the configuration parameters and explain how they behave.

For statistically sound results, it is a good idea to reserve a test set excluded from parameter tuning. After selecting a set of parameter values, the embedding quality can be evaluated using a downstream machine learning task on the test set. By varying the parameter values and studying the precision of the machine learning task, it is possible to deduce the parameter values that best fit the concrete dataset and use case. To construct such a set you may want to use a dedicated node label in the graph to denote a subgraph without the test data.

Embedding dimension

The embedding dimension is the length of the produced vectors. A greater dimension offers greater precision, but is more costly to operate over.

The optimal embedding dimension depends on the number of nodes in the graph. Since the amount of information the embedding can encode is limited by its dimension, a larger graph will tend to require a greater embedding dimension. A typical value is a power of two in the range 128 - 1024. A value of at least 256 gives good results on graphs on the order of 10^5 nodes, but in general increasing the dimension improves results. Increasing the embedding dimension will, however, increase memory requirements and runtime linearly.

Normalization strength

The normalization strength is used to control how node degrees influence the embedding. Using a negative value will downplay the importance of high degree neighbors, while a positive value will instead increase their importance. The optimal normalization strength depends on the graph and on the task that the embeddings will be used for. In the original paper, hyperparameter tuning was done in the range of [-1,0] (no positive values), but we have found cases where a positive normalization strength gives better results.
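For intuition, the effect can be sketched as scaling each node's initial random vector by its degree raised to the normalization strength (a simplified illustration; `scale_by_degree` is a hypothetical name):

```python
import numpy as np

def scale_by_degree(r, degrees, normalization_strength):
    # Each node's initial random vector is scaled by degree ** strength,
    # so negative strengths shrink the vectors of high-degree nodes and
    # positive strengths amplify them.
    return r * (degrees[:, None] ** normalization_strength)
```

With a strength of -1, for example, a degree-4 node contributes an initial vector a quarter the size of a degree-1 node's.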

Iteration weights

The iteration weights parameter controls two aspects: the number of iterations, and their relative impact on the final node embedding. The parameter is a list of numbers: each number adds one iteration, and its value is the weight applied to that iteration’s intermediate embedding.

In each iteration, the algorithm will expand across all relationships in the graph. This has some implications:

  • With a single iteration, only direct neighbors will be considered for each node embedding.

  • With two iterations, direct neighbors and second-degree neighbors will be considered for each node embedding.

  • With three iterations, direct neighbors, second-degree neighbors, and third-degree neighbors will be considered for each node embedding. Direct neighbors may be reached twice, in different iterations.

  • In general, the embedding corresponding to the i-th iteration contains features depending on nodes reachable with paths of length i. If the graph is undirected, then a node reachable with a path of length L can also be reached with a path of length L+2k, for any non-negative integer k.

  • In particular, a node may reach back to itself on each even iteration (depending on the direction in the graph).

It is good to have at least one non-zero weight in an even and in an odd position. Typically, using at least a few iterations, for example three, is recommended. However, too high a value will incorporate nodes far away from the source node, which may be uninformative or even detrimental. The intuition here is that the further the projections reach from the node, the less specific the neighborhood becomes. Of course, a greater number of iterations will also take more time to complete.
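The even/odd behavior on undirected graphs can be checked with powers of the row-normalized adjacency matrix. On a path graph (which is bipartite, so odd-length walks never return to their start) odd powers have a zero diagonal, while even powers let every node reach back to itself:

```python
import numpy as np

# Path graph 0 - 1 - 2, row-normalized so each step averages over neighbors.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
avg = adj / adj.sum(axis=1, keepdims=True)

one_step = avg                               # odd iteration: no walk back to self
two_steps = np.linalg.matrix_power(avg, 2)   # even iteration: self is reachable
```

This is why a weight in an odd position and a weight in an even position capture genuinely different structural information.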

Node Self Influence

Node Self Influence is a variation of the original FastRP algorithm.

How much a node’s embedding is affected by the intermediate embedding at iteration i is controlled by the i-th element of iterationWeights. This can also be seen as how much the initial random vectors, or projections, of nodes that can be reached in i hops from a node affect the embedding of the node. Similarly, nodeSelfInfluence behaves like an iteration weight for a 0-th iteration, that is, the amount of influence the projection of a node has on the embedding of the same node.

A reason for setting this parameter to a non-zero value is if your graph has low connectivity or a significant number of isolated nodes. For isolated nodes combined with propertyRatio = 0.0, the embedding contains only zeros; using node properties along with node self influence can instead produce more meaningful embeddings for such nodes. This can be seen as producing fallback features when graph structure is (locally) missing. Moreover, sometimes a node’s own properties are simply informative features and are good to include even if connectivity is high. Finally, node self influence can be used for pure dimensionality reduction to compress node properties used for node classification.

If node properties are not used, using nodeSelfInfluence may also have a positive effect, depending on other settings and on the problem.
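To see why this helps isolated nodes: with no neighbors, every intermediate embedding is zero, so the final embedding reduces to the self-influence term of the combination formula above. A minimal sketch (the function name is made up for illustration):

```python
import numpy as np

def isolated_node_embedding(r_n, node_self_influence):
    # With no neighbors every averaged term vanishes, leaving only
    # w_0 * normalize(r_n) from the combination formula.
    norm = np.linalg.norm(r_n)
    return node_self_influence * (r_n / norm if norm > 0 else r_n)
```

With nodeSelfInfluence = 0 (the default) this term vanishes too, which is how all-zero embeddings for isolated nodes arise.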

Orientation

Choosing the right orientation when creating the graph may have the single greatest impact. The FastRP algorithm is designed to work with undirected graphs, and we expect this to be the best in most cases. If you expect only outgoing or incoming relationships to be informative for a prediction task, then you may want to try using the orientations NATURAL or REVERSE respectively.

Weighted graphs

By default, the algorithm treats the graph relationships as unweighted. You can specify a relationship weight with the relationshipWeightProperty parameter to instruct the algorithm to compute weighted averages of the neighboring embeddings.
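Conceptually, a weighted iteration replaces the plain neighborhood average with a weight-normalized one. A sketch under these assumptions (not the product code; `weighted_average_step` is a hypothetical name):

```python
import numpy as np

def weighted_average_step(weights, e):
    # weights: (n, n) relationship-weight matrix, 0 where no relationship.
    # Each node's next embedding is the weight-normalized average of its
    # neighbors' current embeddings.
    totals = weights.sum(axis=1, keepdims=True)
    avg = np.divide(weights, totals, out=np.zeros_like(weights), where=totals > 0)
    return avg @ e
```

A neighbor connected by a weight of 3.0 thus pulls the embedding three times as strongly as a neighbor connected by a weight of 1.0.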

Syntax

Run FastRP.
CALL Neo4j_Graph_Analytics.graph.fast_rp(
  'X64_CPU_L',        (1)
  {
    'project': {...}, (2)
    'compute': {...}, (3)
    'write':   {...}  (4)
  }
);
1 Compute pool selector.
2 Project config.
3 Compute config.
4 Write config.
Table 1. Parameters
Name | Type | Default | Optional | Description
computePoolSelector | String | n/a | no | The selector for the compute pool on which to run the FastRP job.
configuration | Map | {} | no | Configuration for graph project, algorithm compute and result write back.

The configuration map consists of the following three entries.

For more details on the Project configuration below, refer to the Project documentation.
Table 2. Project configuration
Name | Type
nodeTables | List of node tables.
relationshipTables | Map of relationship types to relationship tables.

Table 3. Compute configuration
Name | Type | Default | Optional | Description
mutateProperty | String | 'fast_rp' | yes | The node property that will be written back to the Snowflake database.
propertyRatio | Float | 0.0 | yes | The desired ratio of the property embedding dimension to the total embeddingDimension. A positive value requires featureProperties to be non-empty.
featureProperties | List of String | [] | yes | The names of the node properties that should be used as input features. All property names must exist in the projected graph and be of type Float or List of Float.
embeddingDimension | Integer | n/a | no | The dimension of the computed node embeddings. Minimum value is 1.
iterationWeights | List of Float | [0.0, 1.0, 1.0] | yes | Contains a weight for each iteration. The weight controls how much the intermediate embedding from the iteration contributes to the final embedding.
nodeSelfInfluence | Float | 0.0 | yes | Controls for each node how much its initial random vector contributes to its final embedding.
normalizationStrength | Float | 0.0 | yes | The initial random vector for each node is scaled by its degree to the power of normalizationStrength.
randomSeed | Integer | n/a | yes | A random seed which is used for all randomness in computing the embeddings.
relationshipWeightProperty | String | null | yes | Name of the relationship property to use for weighted random projection. If unspecified, the algorithm runs unweighted.

The number of iterations is equal to the length of iterationWeights.

It is required that iterationWeights is non-empty or nodeSelfInfluence is non-zero.

For more details on the Write configuration below, refer to the Write documentation.
Table 4. Write configuration
Name | Type | Default | Optional | Description
nodeLabel | String | n/a | no | Node label in the in-memory graph from which to write a node property.
nodeProperty | String | 'fast_rp' | yes | The node property that will be written back to the Snowflake database.
outputTable | String | n/a | no | Table in Snowflake database to which node properties are written.

Examples

In this section we will show examples of running the FastRP node embedding algorithm on a concrete graph. The intention is to illustrate what the results look like and to provide a guide to how to make use of the algorithm in a real setting. We will do this on a small social network graph of a handful of nodes connected in a particular pattern. The example graph looks like this:

Visualization of the example graph
The following SQL statement will create the example graph tables in the Snowflake database:
CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.PERSONS (NODEID STRING, AGE INT);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.PERSONS VALUES
  ('Dan',   18),
  ('Annie', 12),
  ('Matt',  22),
  ('Jeff',  51),
  ('Brie',  45),
  ('Elsa',  65),
  ('John',  64);

CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.KNOWS (SOURCENODEID STRING, TARGETNODEID STRING, WEIGHT FLOAT);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.KNOWS VALUES
  ('Dan',   'Annie', 1.0),
  ('Dan',   'Matt',  1.0),
  ('Annie', 'Matt',  1.0),
  ('Annie', 'Jeff',  1.0),
  ('Annie', 'Brie',  1.0),
  ('Matt',  'Brie',  3.5),
  ('Brie',  'Elsa',  1.0),
  ('Brie',  'Jeff',  2.0),
  ('John',  'Jeff',  1.0);

This graph represents seven people who know one another. A relationship property weight denotes the strength of the relationship between two persons.

Run job

To run the query, a setup of grants for the application, your consumer role, and your environment is required. Please see the Getting started page for more on this.

We also assume that the application name is the default Neo4j_Graph_Analytics. If you chose a different app name during installation, please replace it with that.

The following will run the algorithm, and stream results:
CALL Neo4j_Graph_Analytics.graph.fast_rp('CPU_X64_XS', {
    'project': {
        'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
        'nodeTables': [ 'PERSONS' ],
        'relationshipTables': {
            'KNOWS': {
                'sourceTable': 'PERSONS',
                'targetTable': 'PERSONS'
            }
        }
    },
    'compute': {
        'mutateProperty': 'embedding',
        'embeddingDimension': 4
    },
    'write': [{
        'nodeLabel': 'PERSONS',
        'outputTable': 'EXAMPLE_DB.DATA_SCHEMA.PERSONS_EMBEDDING',
        'nodeProperty': 'embedding'
    }]
});
Table 5. Results
JOB_ID JOB_START JOB_END JOB_RESULT

job_6f57b3e10a604422850630117caf0de7

2025-04-30 11:57:12.598000

2025-04-30 11:57:21.348000

{ "fast_rp_1": { "computeMillis": 32, "configuration": { "concurrency": 2, "embeddingDimension": 4, "featureProperties": [], "iterationWeights": [ 0, 1, 1 ], "jobId": "249f0d00-f957-426b-b15f-7b67dc898784", "logProgress": true, "mutateProperty": "fast_rp", "nodeLabels": [ "" ], "nodeSelfInfluence": 0, "normalizationStrength": 0, "propertyRatio": 0, "relationshipTypes": [ "" ], "sudo": false }, "mutateMillis": 2, "nodeCount": 7, "nodePropertiesWritten": 7, "preProcessingMillis": 14 }, "project_1": { "graphName": "snowgraph", "nodeCount": 7, "nodeMillis": 189, "relationshipCount": 9, "relationshipMillis": 326, "totalMillis": 515 }, "write_node_property_1": { "exportMillis": 2004, "nodeLabel": "FASTRP_PERSONS", "nodeProperty": "fast_rp", "outputTable": "EXAMPLE_DB.DATA_SCHEMA.PERSONS_EMBEDDING", "propertiesExported": 7 } }

The returned result contains information about the job execution and result distribution. Additionally, the embedding for each of the nodes has been written back to the Snowflake database. We can query it like so:

SELECT * FROM EXAMPLE_DB.DATA_SCHEMA.PERSONS_EMBEDDING;
Table 6. Results
NODEID | FAST_RP
Annie | [1.0129523, -1.4094763, -0.64521426, 0.14176996]
Brie | [0.8979494, -1.0018919, -0.8030941, -0.14222366]
Dan | [1.1513789, -0.89023, -0.21288107, -0.06492126]
Elsa | [0.71804917, -0.9413747, -0.90776074, -0.060044557]
Jeff | [1.0402749, -1.3812861, -0.9326958, -0.08068423]
John | [0.9855986, -1.393847, -0.9855986, 0]
Matt | [0.9290148, -1.3734311, -0.550267, 0.10201697]

The results of the algorithm are not very intuitively interpretable, as the node embedding format is a mathematical abstraction of the node within its neighborhood, designed for machine learning programs. What we can see is that the embeddings have four elements (as configured using embeddingDimension) and that the numbers are relatively small (they all fit in the range of [-2, 2]). The magnitude of the numbers is controlled by the embeddingDimension, the number of nodes in the graph, and by the fact that FastRP performs Euclidean normalization on the intermediate embedding vectors.

Due to the random nature of the algorithm the results will vary between the runs. However, this does not necessarily mean that the pairwise distances of two node embeddings vary as much.
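Downstream tasks typically compare embeddings by distance or cosine similarity rather than by inspecting individual coordinates, and it is these pairwise comparisons that tend to be more stable across runs. For example, using the values from Table 6:

```python
import numpy as np

# Embedding values copied from Table 6 above.
annie = np.array([1.0129523, -1.4094763, -0.64521426, 0.14176996])
jeff = np.array([1.0402749, -1.3812861, -0.9326958, -0.08068423])
dan = np.array([1.1513789, -0.89023, -0.21288107, -0.06492126])

def cosine(a, b):
    # Cosine similarity; values near 1.0 mean very similar embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(annie, jeff))
print(cosine(annie, dan))
```

On this tiny four-dimensional example all embeddings point in broadly similar directions, so the similarities are all high; on larger graphs with a larger embeddingDimension the spread is much wider.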


1. Chen, Haochen, Syed Fahad Sultan, Yingtao Tian, Muhao Chen, and Steven Skiena. "Fast and Accurate Network Embeddings via Very Sparse Random Projection." arXiv preprint arXiv:1908.11512 (2019).
2. Achlioptas, Dimitris. "Database-friendly random projections: Johnson-Lindenstrauss with binary coins." Journal of Computer and System Sciences 66, no. 4 (2003): 671-687.