K-Nearest Neighbors

Neo4j Graph Analytics for Snowflake is in Public Preview and is not intended for production use.

Introduction

The K-Nearest Neighbors algorithm computes a distance value for all node pairs in the graph and creates new relationships between each node and its k nearest neighbors. The distance is calculated based on node properties.

The input to this algorithm is a homogeneous graph; any node label or relationship type information in the graph is ignored. The graph does not need to be connected; in fact, existing relationships between nodes are ignored, except by the random walk sampler if that initial sampling option is used. New relationships are created between each node and its k nearest neighbors.

The K-Nearest Neighbors algorithm compares given properties of each node. The k nodes where these properties are most similar are the k-nearest neighbors.

The initial set of neighbors is picked at random and verified and refined in multiple iterations. The number of iterations is limited by the configuration parameter maxIterations. The algorithm may stop earlier if the neighbor lists only change by a small amount, which can be controlled by the configuration parameter deltaThreshold.

The implementation is based on Efficient k-nearest neighbor graph construction for generic similarity measures by Wei Dong et al. Instead of comparing every node with every other node, the algorithm selects possible neighbors based on the assumption that the neighbors-of-neighbors of a node are most likely already among the nearest ones. The algorithm thus scales quasi-linearly with the node count, instead of quadratically.

Furthermore, the algorithm only compares a sample of all possible neighbors on each iteration, assuming that eventually all possible neighbors will be seen. This can be controlled with the configuration parameter sampleRate:

  • A valid sample rate must be between 0 (exclusive) and 1 (inclusive).

  • The default value is 0.5.

  • The parameter is used to control the trade-off between accuracy and runtime-performance.

  • A higher sample rate will increase the accuracy of the result.

    • The algorithm will also require more memory and will take longer to compute.

  • A lower sample rate will increase the runtime-performance.

    • Some potential nodes may be missed in the comparison and may not be included in the result.

When a newly encountered neighbor has the same similarity as the least similar neighbor already known, randomly selecting which node to keep reduces the risk of some neighborhoods never being explored. This behavior is controlled by the configuration parameter perturbationRate.
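The tie-breaking idea behind perturbationRate can be sketched as follows. This is an illustrative sketch, not the library's code; the function name and structure are assumptions:

```python
import random

# Sketch of the perturbationRate tie-breaking idea: a strictly better
# candidate always replaces the worst known neighbor, while an equally
# similar candidate replaces it only with the given probability.
# (Illustrative only; names and structure are not the library's code.)
def should_replace(worst_known_similarity, candidate_similarity,
                   perturbation_rate, rng=random):
    if candidate_similarity > worst_known_similarity:
        return True
    if candidate_similarity == worst_known_similarity:
        # Replace on ties with probability perturbation_rate.
        return rng.random() < perturbation_rate
    return False
```

With perturbationRate at 0 (the default), a tie never triggers a replacement; at 1, it always does.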

The output of the algorithm consists of new relationships between nodes and their k-nearest neighbors. Similarity scores are expressed via relationship properties.

Similarity metrics

The similarity measure used in the KNN algorithm depends on the type of the configured node properties. KNN supports both scalar numeric values and lists of numbers.

Scalar numbers

When a property is a scalar number, the similarity is computed as follows:

similarity(a, b) = 1 / (1 + |a - b|)
Figure 1. one divided by one plus the absolute difference

This gives us a number in the range (0, 1].
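As an illustrative sketch (not the library's implementation), the scalar metric can be written directly from the formula:

```python
# Sketch of the scalar similarity metric described above:
# one divided by one plus the absolute difference.
def scalar_similarity(a: float, b: float) -> float:
    return 1.0 / (1.0 + abs(a - b))

# Identical values give exactly 1.0; the score approaches 0 as the
# difference grows, so the result always falls in the range (0, 1].
print(scalar_similarity(24, 24))  # 1.0
print(scalar_similarity(73, 67))  # 1 / (1 + 6) = 0.142857...
```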

List of integers

When a property is a list of integers, similarity can be measured with either the Jaccard similarity or the Overlap coefficient.

Jaccard similarity
J(A, B) = |A ∩ B| / |A ∪ B|
Figure 2. size of intersection divided by size of union
Overlap coefficient
O(A, B) = |A ∩ B| / min(|A|, |B|)
Figure 3. size of intersection divided by size of minimum set

Both of these metrics give a score in the range [0, 1] and no normalization needs to be performed. Jaccard similarity is used as the default option for comparing lists of integers when the metric is not specified.
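A minimal sketch of both metrics, under the assumption that the integer lists can be treated as sets (order and duplicates ignored):

```python
# Sketch of the two list-of-integers metrics, treating the lists as
# sets. (Assumption: duplicates and element order do not matter.)
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)  # intersection over union

def overlap(a, b):
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))  # intersection over smaller set

# Two lists sharing two of four distinct elements:
print(jaccard([1, 2, 3], [2, 3, 4]))  # 2 / 4 = 0.5
print(overlap([1, 2, 3], [2, 3, 4]))  # 2 / 3 = 0.666...
```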

List of floating-point numbers

When a property is a list of floating-point numbers, there are three alternatives for computing similarity between two nodes.

The default metric used is that of Cosine similarity.

Cosine similarity
cosine(A, B) = (A · B) / (‖A‖ ‖B‖)
Figure 4. dot product of the vectors divided by the product of their lengths

Notice that the above formula gives a score in the range of [-1, 1] . The score is normalized into the range [0, 1] by doing score = (score + 1) / 2.
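A sketch of cosine similarity together with that normalization step (illustrative only, not the library's implementation):

```python
import math

# Sketch of cosine similarity plus the [-1, 1] -> [0, 1] normalization.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def normalize(score):
    # score = (score + 1) / 2, as described above.
    return (score + 1) / 2

print(normalize(cosine([1.0, 2.0], [2.0, 4.0])))  # parallel vectors -> 1.0
print(normalize(cosine([1.0, 0.0], [0.0, 1.0])))  # orthogonal vectors -> 0.5
```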

The other two metrics are the Pearson correlation score and normalized Euclidean similarity.

Pearson correlation score
pearson(A, B) = cov(A, B) / (σ(A) σ(B))
Figure 5. covariance divided by the product of the standard deviations

As above, the formula gives a score in the range [-1, 1], which is normalized into the range [0, 1] similarly.
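The Pearson score and its normalization can be sketched in the same way (illustrative, not the library's code):

```python
import math

# Sketch of the Pearson correlation score: covariance divided by the
# product of the standard deviations, then normalized into [0, 1].
def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (std_a * std_b)

def normalize(score):
    return (score + 1) / 2

# Perfectly linearly correlated vectors score 1.0 after normalization:
print(normalize(pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])))  # 1.0
```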

Euclidean similarity
distance(A, B) = √( Σᵢ (Aᵢ − Bᵢ)² )
Figure 6. the root of the sum of the square difference between each pair of elements

The result of this formula is a non-negative value, but it is not necessarily bounded to the [0, 1] range. To bound the number into this range and obtain a similarity score, we return score = 1 / (1 + distance), i.e., we perform the same normalization as in the case of scalar values.
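A sketch of the normalized Euclidean similarity (illustrative only):

```python
import math

# Sketch of normalized Euclidean similarity: the raw distance is
# unbounded, so it is squashed into (0, 1] via 1 / (1 + distance),
# mirroring the scalar-number normalization.
def euclidean_similarity(a, b):
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + distance)

print(euclidean_similarity([1.0, 2.0], [1.0, 2.0]))  # identical vectors -> 1.0
print(euclidean_similarity([0.0, 0.0], [3.0, 4.0]))  # distance 5 -> 1/6
```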

Multiple properties

Finally, when multiple properties are specified, the similarity of two neighbors is the mean of the similarities of the individual properties: the simple mean of the per-property scores, each of which is in the range [0, 1], giving a total score also in the [0, 1] range.

The validity of this mean is highly context dependent, so take care when applying it to your data domain.
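The combination step is just a simple mean of the per-property scores; a minimal sketch:

```python
# Sketch of combining per-property similarities: each individual score
# is already in [0, 1], and the final score is their simple mean.
def combined_similarity(per_property_scores):
    return sum(per_property_scores) / len(per_property_scores)

# e.g. an age similarity of 1.0 and an embedding similarity of 0.5:
print(combined_similarity([1.0, 0.5]))  # 0.75
```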

Node properties and metrics configuration

The node properties and metrics to use are specified with the nodeProperties configuration parameter. At least one node property must be specified.

This parameter accepts one of:

Table 1. nodeProperties syntax

a single property name

nodeProperties: 'embedding'

a Map of property keys to metrics

nodeProperties: {
    embedding: 'COSINE',
    age: 'DEFAULT',
    lotteryNumbers: 'OVERLAP'
}

list of Strings and/or Maps

nodeProperties: [
    {embedding: 'COSINE'},
    'age',
    {lotteryNumbers: 'OVERLAP'}
]

The available metrics by type are:

Table 2. Available metrics by type
Type             Metrics
List of Integer  JACCARD, OVERLAP
List of Float    COSINE, EUCLIDEAN, PEARSON

For any property type, DEFAULT can also be specified to use the default metric. For scalar numbers, there is only the default metric.

Initial neighbor sampling

The algorithm starts off by picking k random neighbors for each node. There are two options for how this random sampling can be done.

Uniform

The first k neighbors for each node are chosen uniformly at random from all other nodes in the graph. This is the classic way of doing the initial sampling. It is also the algorithm’s default. Note that this method does not actually use the topology of the input graph.
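The uniform sampler can be sketched as below. This is an illustration of the idea, not the library's code; the function name is an assumption:

```python
import random

# Sketch of the uniform initial sampler: each node starts with k
# neighbors drawn uniformly at random from all other nodes, ignoring
# the graph's topology entirely.
def uniform_initial_neighbors(node, all_nodes, k, rng=random):
    candidates = [n for n in all_nodes if n != node]
    return rng.sample(candidates, k)

nodes = ["Alice", "Bob", "Carol", "Dave", "Eve"]
# A fixed seed makes the sample reproducible for this illustration.
print(uniform_initial_neighbors("Alice", nodes, 2, random.Random(42)))
```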

Random Walk

From each node we take a depth-biased random walk and choose the first k unique nodes visited on that walk as the initial random neighbors. If after some internally defined O(k) number of steps of a random walk, k unique neighbors have not been visited, the remaining neighbors are filled in using the uniform method described above. The random walk method makes use of the input graph’s topology and may be suitable when good similarity scores are more likely between topologically close nodes.

The random walk is biased towards depth in the sense that it is more likely to move further away from its previously visited node, rather than go back to it or to a node equidistant from it. The intuition behind this bias is that subsequent iterations of comparing neighbors-of-neighbors will likely cover the extended (topological) neighborhood of each node.

Syntax

Run K-Nearest Neighbors.
CALL Neo4j_Graph_Analytics.graph.knn(
  'X64_CPU_L',        (1)
  {
    'project': {...}, (2)
    'compute': {...}, (3)
    'write':   {...}  (4)
  }
);
1 Compute pool selector.
2 Project config.
3 Compute config.
4 Write config.
Table 3. Parameters
Name Type Default Optional Description

computePoolSelector

String

n/a

no

The selector for the compute pool on which to run the KNN job.

configuration

Map

{}

no

Configuration for graph project, algorithm compute and result write back.

The configuration map consists of the following three entries.

For more details on the Project configuration below, refer to the Project documentation.
Table 4. Project configuration
Name Type

nodeTables

List of node tables.

relationshipTables

Map of relationship types to relationship tables.

Table 5. Compute configuration
Name Type Default Optional Description

mutateProperty

String

'similarity'

yes

The relationship property that will be written back to the Snowflake database.

mutateRelationshipType

String

'SIMILAR_TO'

yes

The relationship type used for the relationships written back to the Snowflake database.

nodeProperties

String or Map or List of Strings / Maps

n/a

no

The node properties to use for similarity computation along with their selected similarity metrics. Accepts a single property key, a Map of property keys to metrics, or a List of property keys and/or Maps, as above. See Node properties and metrics configuration for details.

topK

Integer

10

yes

The number of neighbors to find for each node. The K-nearest neighbors are returned. This value cannot be lower than 1.

sampleRate

Float

0.5

yes

Sample rate to limit the number of comparisons per node. Value must be between 0 (exclusive) and 1 (inclusive).

deltaThreshold

Float

0.001

yes

Value as a percentage to determine when to stop early. If fewer updates than the configured value happen, the algorithm stops. Value must be between 0 (exclusive) and 1 (inclusive).

maxIterations

Integer

100

yes

Hard limit to stop the algorithm after that many iterations.

randomJoins

Integer

10

yes

The number of random attempts per node to connect new node neighbors based on random selection, for each iteration.

initialSampler

String

"uniform"

yes

The method used to sample the first k random neighbors for each node. "uniform" and "randomWalk", both case-insensitive, are valid inputs.

randomSeed

Integer

n/a

yes

The seed value to control the randomness of the algorithm. Note that concurrency must be set to 1 when setting this parameter.

similarityCutoff

Float

0

yes

Filters nodes with similarity below this threshold out of the list of K-nearest neighbors.

perturbationRate

Float

0

yes

The probability of replacing the least similar known neighbor with an encountered neighbor of equal similarity.

For more details on the Write configuration below, refer to the Write documentation.
Table 6. Write configuration
Name Type Default Optional Description

sourceLabel

String

n/a

no

Node label in the in-memory graph for start nodes of relationships to be written back.

targetLabel

String

n/a

no

Node label in the in-memory graph for end nodes of relationships to be written back.

outputTable

String

n/a

no

Table in Snowflake database to which relationships are written.

relationshipType

String

'SIMILAR_TO'

yes

The relationship type that will be written back to the Snowflake database.

relationshipProperty

String

'similarity'

yes

The relationship property that will be written back to the Snowflake database.

The KNN algorithm does not read any relationships, but the values for relationshipProjection or relationshipQuery are still used and respected for graph loading.

The results are the same as running write mode on a named graph.

To get a deterministic result when running the algorithm:

  • the concurrency parameter must be set to one

  • the randomSeed must be explicitly set.

Examples

In this section we will show examples of running the KNN algorithm on a concrete graph. With the Uniform sampler, KNN samples initial neighbors uniformly at random, and doesn’t take into account graph topology. This means KNN can run on a graph of only nodes, without any relationships. Consider the following graph of five disconnected Person nodes.

Visualization of the example graph
CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.PERSONS (NODEID STRING, AGE INT);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.PERSONS VALUES
  ('Alice', 24),
  ('Bob',   73),
  ('Carol', 24),
  ('Dave',  48),
  ('Eve',   67);

In this example, we want to use the K-Nearest Neighbors algorithm to compare people based on either their age or a combination of all provided properties.

With the node and relationship tables in Snowflake, we can now project them as part of an algorithm job. In the following examples we will demonstrate using the K-Nearest Neighbors algorithm on this graph.

Run job

Running a KNN job involves three steps: Project, Compute, and Write.

To run the query, there is a required setup of grants for the application, your consumer role and your environment. Please see the Getting started page for more on this.

We also assume that the application name is the default Neo4j_Graph_Analytics. If you chose a different app name during installation, please replace it with that.

The following will run the algorithm, and stream results:
CALL Neo4j_Graph_Analytics.graph.knn('CPU_X64_XS', {
    'project': {
        'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
        'nodeTables': [ 'PERSONS' ],
        'relationshipTables': {}
    },
    'compute': {
        'nodeProperties': ['AGE'],
        'topK': 1,
        'mutateProperty': 'score',
        'mutateRelationshipType': 'SIMILAR'
    },
    'write': [{
        'outputTable': 'EXAMPLE_DB.DATA_SCHEMA.PERSONS_SIMILARITY',
        'sourceLabel': 'PERSONS',
        'targetLabel': 'PERSONS',
        'relationshipType': 'SIMILAR',
        'relationshipProperty': 'score'
    }]
});
Table 7. Results
JOB_ID JOB_START JOB_END JOB_RESULT
 job_df2be9e531014fa186cdabd9c3c1099f
 2025-04-29 19:40:25.960000
 2025-04-29 19:40:31.701000
 {
    "knn_1": {
      "computeMillis": 70,
      "configuration": {
        "concurrency": 2,
        "deltaThreshold": 0.001,
        "initialSampler": "UNIFORM",
        "jobId": "b74ad39a-fa2d-4db0-be0a-0862518cf2ad",
        "logProgress": true,
        "maxIterations": 100,
        "mutateProperty": "score",
        "mutateRelationshipType": "SIMILAR",
        "nodeLabels": [
          "*"
        ],
        "nodeProperties": {
          "AGE": "LONG_PROPERTY_METRIC"
        },
        "perturbationRate": 0,
        "randomJoins": 10,
        "relationshipTypes": [
          "*"
        ],
        "sampleRate": 0.5,
        "similarityCutoff": 0,
        "sudo": false,
        "topK": 1
      },
      "didConverge": true,
      "mutateMillis": 436,
      "nodePairsConsidered": 128,
      "nodesCompared": 5,
      "postProcessingMillis": 0,
      "preProcessingMillis": 6,
      "ranIterations": 2,
      "relationshipsWritten": 5,
      "similarityDistribution": {
        "max": 1.000007629394531,
        "mean": 0.4671443462371826,
        "min": 0.04999995231628418,
        "p1": 0.04999995231628418,
        "p10": 0.04999995231628418,
        "p100": 1.0000073909759521,
        "p25": 0.14285731315612793,
        "p5": 0.04999995231628418,
        "p50": 0.14285731315612793,
        "p75": 1.0000073909759521,
        "p90": 1.0000073909759521,
        "p95": 1.0000073909759521,
        "p99": 1.0000073909759521,
        "stdDev": 0.4363971449375242
      }
    },
    "project_1": {
      "graphName": "snowgraph",
      "nodeCount": 5,
      "nodeMillis": 249,
      "relationshipCount": 0,
      "relationshipMillis": 0,
      "totalMillis": 249
    },
    "write_relationship_type_1": {
      "exportMillis": 1978,
      "outputTable": "EXAMPLE_DB.DATA_SCHEMA.PERSONS_SIMILARITY",
      "relationshipProperty": "score",
      "relationshipType": "SIMILAR",
      "relationshipsExported": 5
    }
 }

The returned result contains information about the job execution and result distribution. Additionally, the similarity score for each of the nodes has been written back to the Snowflake database. We can query it like so:

SELECT * FROM EXAMPLE_DB.DATA_SCHEMA.PERSONS_SIMILARITY ORDER BY SCORE DESC;

Which shows the computation results as stored in the database:

Table 8. Results
SOURCENODEID  TARGETNODEID  SCORE
Alice         Carol         1.0
Carol         Alice         1.0
Bob           Eve           0.14285714285714285
Eve           Bob           0.14285714285714285
Dave          Eve           0.05

Most of the procedure configuration parameters are left at their default values. To produce the same result on every invocation, randomSeed can be set explicitly, with concurrency set to 1. The topK parameter is set to 1 to return only the single nearest neighbor for every node. Notice that the similarity between Dave and Eve is very low. Setting the similarityCutoff parameter to 0.10 would filter the relationship between them, removing it from the result.
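The effect of a similarityCutoff of 0.10 on the result rows above can be sketched as a simple filter (illustrative only; this assumes scores at or above the cutoff are kept):

```python
# Result rows from Table 8: (source, target, score).
rows = [
    ("Alice", "Carol", 1.0),
    ("Carol", "Alice", 1.0),
    ("Bob",   "Eve",   0.14285714285714285),
    ("Eve",   "Bob",   0.14285714285714285),
    ("Dave",  "Eve",   0.05),
]

# Sketch of similarityCutoff: drop pairs whose score falls below it.
cutoff = 0.10
kept = [row for row in rows if row[2] >= cutoff]
print(kept)  # the Dave -> Eve relationship (score 0.05) is filtered out
```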