Filtered K-Nearest Neighbors
Introduction
The Filtered K-Nearest Neighbors algorithm extends the K-Nearest Neighbors algorithm with filtering capabilities on source nodes, target nodes, or both. It computes distance values for filtered node pairs in the graph and creates new relationships between each node and its k nearest neighbors from the filtered set. The distance is calculated based on node properties.
Like the standard K-Nearest Neighbors algorithm, the input is a homogeneous graph where any node label or relationship type information is ignored. The graph does not need to be connected. In fact, existing relationships between nodes will be ignored—apart from random walk sampling if that initial sampling option is used. New relationships are created between each node and its k nearest neighbors from the filtered candidate set.
The Filtered K-Nearest Neighbors algorithm compares given properties of each node. With filtering applied, the k nodes whose properties are most similar are selected from the filtered candidate set as the k-nearest neighbors.
The algorithm operates similarly to the standard KNN, using the same iteration-based refinement process. The initial set of neighbors is picked at random from the filtered candidate set and verified and refined in multiple iterations. The number of iterations is limited by the configuration parameter maxIterations. The algorithm may stop earlier if the neighbor lists only change by a small amount, which can be controlled by the configuration parameter deltaThreshold.
The particular implementation is based on Efficient k-nearest neighbor graph construction for generic similarity measures by Wei Dong et al., with additional filtering capabilities. Instead of comparing every node with every other node, the algorithm selects possible neighbors based on the assumption that the neighbors-of-neighbors of a node are most likely already the nearest ones. The algorithm scales quasi-linearly with respect to the node count, instead of being quadratic.
Furthermore, the algorithm only compares a sample of all possible neighbors on each iteration, assuming that eventually all possible neighbors will be seen.
This can be controlled with the configuration parameter sampleRate:
- A valid sample rate must be between 0 (exclusive) and 1 (inclusive). The default value is 0.5.
- The parameter is used to control the trade-off between accuracy and runtime performance.
- A higher sample rate will increase the accuracy of the result, but the algorithm will also require more memory and take longer to compute.
- A lower sample rate will increase the runtime performance, but some potential neighbors may be missed in the comparison and may not be included in the result.
When encountered neighbors have equal similarity to the least similar already known neighbor, randomly selecting which node to keep can reduce the risk of some neighborhoods not being explored. This behavior is controlled by the configuration parameter perturbationRate.
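As a sketch, the tie-breaking behavior described above could look like the following Python (the parameter name perturbationRate comes from this document; the surrounding update loop and the function itself are illustrative assumptions, not the actual implementation):

```python
import random


def keep_candidate(candidate_score, worst_known_score, perturbation_rate, rng=random.random):
    """Decide whether an encountered neighbor replaces the least similar known neighbor.

    A hypothetical sketch: strictly better candidates always win, and ties are
    broken randomly with probability perturbation_rate.
    """
    if candidate_score > worst_known_score:
        return True
    if candidate_score == worst_known_score:
        # With perturbation_rate = 0 ties are never replaced;
        # with perturbation_rate = 1 they always are.
        return rng() < perturbation_rate
    return False
```

With perturbation_rate set to 0 (the default shown in the example output later in this document), equal-scoring candidates never displace a known neighbor.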
The output of the algorithm is new relationships between nodes and their k-nearest neighbors. Similarity scores are expressed via relationship properties.
For more information on the base algorithm, see the K-Nearest Neighbors documentation.
Types of Filtering
The Filtered K-Nearest Neighbors algorithm operates in a world of source nodes, target nodes, and the relationships between them that hold a similarity score or distance.
Just like for the K-Nearest Neighbors algorithm, the output with filtering is new relationships between nodes and their k-nearest neighbors. Similarity scores are expressed via relationship properties.
Filtered K-Nearest Neighbors gives you control over nodes on either end of the relationships, saving you from having to filter a large result set on your own, and enabling better control over output volumes.
Source node filtering
For some use cases, you will want to restrict the set of nodes that can act as source nodes, or the type of node that can act as source node. This is source node filtering. You want the best scoring relationships that originate from these particular nodes or this particular type of node.
A source node filter can be specified by providing a node label in the compute configuration, or by providing specific node IDs along with a source node table.
Target node filtering
Just like for source nodes, you sometimes want to restrict the set of nodes or type of node that can act as target node, i.e., target node filtering. The best scoring relationships for a given source node are computed where the target node is from a specified set or of a specified type.
A target node filter can be specified by providing a node label in the compute configuration, or by providing specific node IDs along with a target node table.
Seeding for target node filtering
A further use case for target node filtering is that you absolutely want to produce k results. You want to fill a fixed size bucket with relationships. You hope that there are enough high scoring relationships found by the algorithm, but as an insurance policy, you can seed your result set with arbitrary relationships to "guarantee" a full bucket of k results.
Just like the K-Nearest Neighbors algorithm is not guaranteed to find k results, the Filtered K-Nearest Neighbors algorithm is not strictly guaranteed to find k results either. But you will increase your odds massively if you employ seeding. In fact, with seeding, the only time you would not get k results is when there are not k target nodes in your graph.
Now, the quality of the arbitrary padding results is unknown. How does that square with the similarityCutoff parameter? Here we have chosen semantics where seeding overrides the similarity cutoff: you risk getting results whose similarity score is below the cutoff, but you are guaranteed to get at least k of them.
Seeding is a boolean property you switch on or off (default is off).
You can mix and match source node filtering, target node filtering, and seeding to achieve your goals.
Similarity metrics
The similarity measure used in the Filtered KNN algorithm depends on the type of the configured node properties. Filtered KNN supports both scalar numeric values and lists of numbers, using the same metrics as the standard KNN algorithm.
Scalar numbers
When a property is a scalar number, the similarity between two nodes is one divided by one plus the absolute difference of their values, i.e., score = 1 / (1 + |a - b|). This gives us a number in the range (0, 1].
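A minimal Python sketch of this scalar metric, using the 1 / (1 + |a - b|) normalization (the same normalization the document later applies to Euclidean distances):

```python
def scalar_similarity(a: float, b: float) -> float:
    # Similarity of two scalar values: 1 / (1 + |a - b|), which lies in (0, 1].
    return 1.0 / (1.0 + abs(a - b))


# Identical values give the maximum similarity of 1.0:
print(scalar_similarity(24, 24))  # 1.0
# Distant values approach, but never reach, 0:
print(scalar_similarity(24, 67))  # 1/44
```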
List of integers
When a property is a list of integers, similarity can be measured with either the Jaccard similarity or the Overlap coefficient.
- Jaccard similarity: size of intersection divided by size of union (Figure 2)
- Overlap coefficient: size of intersection divided by size of minimum set (Figure 3)
Both of these metrics give a score in the range [0, 1], so no normalization needs to be performed. Jaccard similarity is used as the default option for comparing lists of integers when the metric is not specified.
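Both set metrics can be sketched in a few lines of Python (illustrative only; the empty-set convention of returning 1.0 is an assumption, not taken from this document):

```python
def jaccard(a, b):
    # Size of the intersection divided by size of the union.
    sa, sb = set(a), set(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 1.0  # empty-set convention assumed


def overlap(a, b):
    # Size of the intersection divided by size of the smaller set.
    sa, sb = set(a), set(b)
    smaller = min(len(sa), len(sb))
    return len(sa & sb) / smaller if smaller else 1.0  # empty-set convention assumed


print(jaccard([1, 2, 3], [2, 3, 4]))  # 2 shared out of 4 distinct -> 0.5
print(overlap([1, 2, 3], [2, 3, 4]))  # 2 shared, smaller set has 3 elements -> 2/3
```

Note that the Overlap coefficient is always at least as large as the Jaccard similarity for the same pair of sets, since the smaller set is never larger than the union.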
List of floating-point numbers
When a property is a list of floating-point numbers, there are three alternatives for computing the similarity between two nodes. The default metric is Cosine similarity.
- Cosine similarity: dot product of the vectors divided by the product of their lengths (Figure 4)
Notice that the above formula gives a score in the range [-1, 1]. The score is normalized into the range [0, 1] by computing score = (score + 1) / 2.
The other two metrics are the Pearson correlation score and Normalized Euclidean similarity.
- Pearson correlation score: covariance divided by the product of the standard deviations (Figure 5)
As above, the formula gives a score in the range [-1, 1], which is normalized into the range [0, 1] in the same way.
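The Pearson score with the same (score + 1) / 2 normalization can be sketched as follows (illustrative only):

```python
import math


def pearson_similarity(a, b):
    # Covariance divided by the product of the standard deviations,
    # then shifted from [-1, 1] into [0, 1].
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    stds = math.sqrt(sum((x - mean_a) ** 2 for x in a)) * \
           math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return (cov / stds + 1) / 2


print(pearson_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # perfectly correlated, score near 1
print(pearson_similarity([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # perfectly anti-correlated, score near 0
```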
- Euclidean similarity: the root of the sum of the squared differences between each pair of elements (Figure 6)
The result of this formula is a non-negative value, but it is not necessarily bounded to the [0, 1] range. To bound the number into this range and obtain a similarity score, we return score = 1 / (1 + distance), i.e., we perform the same normalization as in the case of scalar values.
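This normalization can be sketched directly (illustrative only):

```python
import math


def euclidean_similarity(a, b):
    # Root of the sum of squared differences, normalized by 1 / (1 + distance).
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + distance)


print(euclidean_similarity([1.0, -1.0], [1.0, -1.0]))  # identical vectors -> 1.0
print(euclidean_similarity([0.0, 0.0], [3.0, 4.0]))    # distance 5 -> 1/6
```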
Multiple properties
Finally, when multiple properties are specified, the similarity of the two neighbors is the mean of the similarities of the individual properties, i.e., the simple mean of the numbers, each of which is in the range [0, 1], giving a total score also in the [0, 1] range.
The validity of this mean is highly context dependent, so take care when applying it to your data domain.
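Combining per-property scores is just an arithmetic mean; the example scores below are made up for illustration:

```python
def combined_similarity(per_property_scores):
    # Simple mean of the per-property similarities, each already in [0, 1].
    return sum(per_property_scores) / len(per_property_scores)


# e.g. a scalar AGE similarity of 1.0 and an EMBEDDING cosine similarity of 0.8:
print(combined_similarity([1.0, 0.8]))  # mean of the two scores, 0.9
```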
Node properties and metrics configuration
The node properties and metrics to use are specified with the nodeProperties configuration parameter. At least one node property must be specified. This parameter accepts one of:
- a single property name
- a Map of property keys to metrics, e.g. nodeProperties: { embedding: 'COSINE', age: 'DEFAULT', lotteryNumbers: 'OVERLAP' }
- a List of Strings and/or Maps, e.g. nodeProperties: [ {embedding: 'COSINE'}, 'age', {lotteryNumbers: 'OVERLAP'} ]
The available metrics by type are:
type | metric |
---|---|
List of Integer | JACCARD, OVERLAP |
List of Float | COSINE, PEARSON, EUCLIDEAN |
For any property type, DEFAULT can also be specified to use the default metric. For scalar numbers, there is only the default metric.
Configuring filters and seeding
You should consult K-Nearest Neighbors configuration for the standard configuration options.
The source node filter can be specified in one of two ways:
- Using sourceNodeFilter with a node label string
- Using sourceNodeFilter with a list of node IDs, combined with sourceNodeTable specifying the table containing those nodes
The target node filter can be specified in one of two ways:
- Using targetNodeFilter with a node label string
- Using targetNodeFilter with a list of node IDs, combined with targetNodeTable specifying the table containing those nodes
Seeding can be enabled with the seedTargetNodes configuration parameter in the compute configuration. It defaults to false.
Initial neighbor sampling
The algorithm starts off by picking k random neighbors for each node from the filtered candidate set. There are two options for how this random sampling can be done.
- Uniform: The first k neighbors for each node are chosen uniformly at random from all nodes matching the filter criteria. This is the classic way of doing the initial sampling, and it is the algorithm's default. Note that this method does not actually use the topology of the input graph.
- Random Walk: From each node we take a depth-biased random walk and choose the first k unique nodes we visit on that walk that match the filter criteria as our initial random neighbors. If after some internally defined O(k) number of steps in a random walk, k unique neighbors matching the filter have not been visited, we fill in the remaining neighbors using the uniform method described above. The random walk method makes use of the input graph's topology and may be suitable if it is more likely to find good similarity scores between topologically close nodes.
The random walk is biased towards depth in the sense that it is more likely to move further away from its previously visited node than to go back to it or to a node equidistant to it. The intuition behind this bias is that subsequent iterations of comparing neighbors-of-neighbors will likely cover the extended (topological) neighborhood of each node.
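The random-walk sampling idea can be sketched as follows. This is a simplified illustration, not the actual implementation: the step budget (here 3 * k as a stand-in for the internally defined O(k) bound) and the unbiased walk are assumptions.

```python
import random


def initial_neighbors(node, adjacency, candidates, k, max_steps=None, rng=random):
    """Pick up to k initial neighbors for `node` by walking the graph,
    keeping only visited nodes in the filtered candidate set, and filling
    any shortfall uniformly at random (a simplified sketch)."""
    if max_steps is None:
        max_steps = 3 * k  # stand-in for the internally defined O(k) budget
    chosen, current = [], node
    for _ in range(max_steps):
        neighbors = adjacency.get(current, [])
        if not neighbors:
            break  # dead end: fall back to uniform sampling below
        current = rng.choice(neighbors)
        if current in candidates and current != node and current not in chosen:
            chosen.append(current)
            if len(chosen) == k:
                return chosen
    # Uniform fallback for the remaining slots.
    remaining = [c for c in candidates if c != node and c not in chosen]
    chosen += rng.sample(remaining, min(k - len(chosen), len(remaining)))
    return chosen
```

On a graph with relationships, the walk prefers topologically reachable candidates; on a graph of only nodes (as in the examples below), every neighbor comes from the uniform fallback.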
Syntax
This section covers the syntax used to execute the Filtered K-Nearest Neighbors algorithm.
CALL Neo4j_Graph_Analytics.graph.knn_filtered(
'CPU_X64_XS', (1)
{
['defaultTablePrefix': '...',] (2)
'project': {...}, (3)
'compute': {...}, (4)
'write': {...} (5)
}
);
1 | Compute pool selector. |
2 | Optional prefix for table references. |
3 | Project config. |
4 | Compute config. |
5 | Write config. |
Name | Type | Default | Optional | Description |
---|---|---|---|---|
computePoolSelector | String | | no | The selector for the compute pool on which to run the Filtered KNN job. |
configuration | Map | | no | Configuration for graph project, algorithm compute and result write back. |
The configuration map consists of the following three entries.
For more details on the Project configuration below, refer to the Project documentation.
Name | Type |
---|---|
nodeTables | List of node tables. |
relationshipTables | Map of relationship types to relationship tables. |
Name | Type | Default | Optional | Description |
---|---|---|---|---|
mutateProperty | String | | yes | The relationship property that will be written back to the Snowflake database. |
mutateRelationshipType | String | | yes | The relationship type used for the relationships written back to the Snowflake database. |
nodeProperties | String or Map or List of Strings / Maps | | no | The node properties to use for similarity computation along with their selected similarity metrics. Accepts a single property key, a Map of property keys to metrics, or a List of property keys and/or Maps, as above. See Node properties and metrics configuration for details. |
topK | Integer | | yes | The number of neighbors to find for each node. The K-nearest neighbors are returned. This value cannot be lower than 1. |
sampleRate | Float | | yes | Sample rate to limit the number of comparisons per node. Value must be between 0 (exclusive) and 1 (inclusive). |
deltaThreshold | Float | | yes | Value as a percentage to determine when to stop early. If fewer updates than the configured value happen, the algorithm stops. Value must be between 0 (exclusive) and 1 (inclusive). |
maxIterations | Integer | | yes | Hard limit to stop the algorithm after that many iterations. |
randomJoins | Integer | | yes | The number of random attempts per node to connect new node neighbors based on random selection, for each iteration. |
initialSampler | String | | yes | The method used to sample the first k random neighbors for each node; see Initial neighbor sampling for the available methods. |
randomSeed | Integer | | yes | The seed value to control the randomness of the algorithm. Note that concurrency must be set to 1 when a random seed is used. |
similarityCutoff | Float | | yes | Filter out from the list of K-nearest neighbors nodes with similarity below this threshold. |
perturbationRate | Float | | yes | The probability of replacing the least similar known neighbor with an encountered neighbor of equal similarity. |
sourceNodeFilter | String or List of Integer | | yes | Node label string to filter source nodes, or list of node IDs (requires sourceNodeTable). Only nodes matching this filter will be used as source nodes in the similarity computation. |
sourceNodeTable | String | | yes | Fully qualified table name for source nodes when using node ID filtering. Required when sourceNodeFilter contains node IDs. |
targetNodeFilter | String or List of Integer | | yes | Node label string to filter target nodes, or list of node IDs (requires targetNodeTable). Only nodes matching this filter will be considered as potential neighbors. |
targetNodeTable | String | | yes | Fully qualified table name for target nodes when using node ID filtering. Required when targetNodeFilter contains node IDs. |
seedTargetNodes | Boolean | | yes | Whether to seed the target nodes to guarantee k results per source node. When enabled, if fewer than k neighbors are found above the similarity cutoff, additional neighbors are added to reach k neighbors, potentially including nodes below the cutoff. |
For more details on the Write configuration below, refer to the Write documentation.
Name | Type | Default | Optional | Description |
---|---|---|---|---|
sourceLabel | String | | no | Node label in the in-memory graph for start nodes of relationships to be written back. |
targetLabel | String | | no | Node label in the in-memory graph for end nodes of relationships to be written back. |
outputTable | String | | no | Table in Snowflake database to which relationships are written. |
relationshipType | String | | yes | The relationship type that will be written back to the Snowflake database. |
relationshipProperty | String | | yes | The relationship property that will be written back to the Snowflake database. |
The Filtered KNN algorithm does not read any relationships, but the relationshipTables entry must still be present in the project configuration; it can be an empty map.
To get a deterministic result when running the algorithm, configure randomSeed and run with a concurrency of 1.
Examples
In this section, we will show examples of running the Filtered KNN algorithm on a concrete graph. With the Uniform sampler, Filtered KNN samples initial neighbors uniformly at random from the filtered set, and doesn’t take into account graph topology. This means Filtered KNN can run on a graph of only nodes without any relationships.
Consider the following graph of five nodes with different labels - some are Person nodes and some are Robot nodes.
CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON (NODEID VARCHAR, AGE NUMBER, EMBEDDING ARRAY);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON SELECT 'alice', 24, ARRAY_CONSTRUCT(1.0::FLOAT, -1.0);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON SELECT 'carol', 24, ARRAY_CONSTRUCT(3.0, 5.0);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON SELECT 'eve', 67, ARRAY_CONSTRUCT(5.0, 3.0);
CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.NODES_ROBOT (NODEID VARCHAR, AGE NUMBER, EMBEDDING ARRAY);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.NODES_ROBOT SELECT 'bob', 73, ARRAY_CONSTRUCT(2.0::FLOAT, 2.0);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.NODES_ROBOT SELECT 'dave', 48, ARRAY_CONSTRUCT(4.0, 5.0);
When constructing the embedding arrays above, we need to ensure that the first value in the first row's array is a floating-point type in Snowflake. If we did not append the ::FLOAT cast, the values would be stored as NUMBER and the EMBEDDING property would be treated as a list of integers rather than a list of floating-point numbers.
In this example, we want to use the Filtered K-Nearest Neighbors algorithm to compare nodes based on their age and embedding properties, with filtering applied to source and/or target nodes.
With the node tables in Snowflake we can now project them as part of an algorithm job. In the following examples, we will demonstrate using the Filtered KNN algorithm on this graph.
Filtering using node labels
Running a Filtered KNN job with label-based filtering involves three steps: Project, Compute and Write.
To run the query, there is a required setup of grants for the application, your consumer role and your environment. Please see the Getting started page for more on this.
We also assume that the application name is the default Neo4j_Graph_Analytics. If you chose a different app name during installation, please replace it with that.
CALL Neo4j_Graph_Analytics.graph.knn_filtered('CPU_X64_XS', {
'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
'project': {
'nodeTables': ['NODES_PERSON', 'NODES_ROBOT'],
'relationshipTables': {}
},
'compute': {
'nodeProperties': ['AGE', 'EMBEDDING'],
'topK': 1,
'sourceNodeFilter': 'nodes_person'
},
'write': [{
'outputTable': 'SIMILARITY_OUTPUT',
'sourceLabel': 'nodes_person',
'targetLabel': 'nodes_person'
}]
});
JOB_ID | JOB_STATUS | JOB_START | JOB_END | JOB_RESULT |
---|---|---|---|---|
job_abc123def456ghi789 | SUCCESS | 2025-10-22 14:30:15.123000 | 2025-10-22 14:30:20.456000 | { "knn_filtered_1": { "computeMillis": 38, "configuration": { "concurrency": 6, "deltaThreshold": 0.001, "initialSampler": "UNIFORM", "maxIterations": 100, "nodeLabels": [ "*" ], "nodeProperties": { "AGE": "LONG_PROPERTY_METRIC", "EMBEDDING": "COSINE" }, "perturbationRate": 0, "randomJoins": 10, "relationshipTypes": [ "*" ], "resultProperty": "similarity", "resultRelationshipType": "SIMILARITY", "sampleRate": 0.5, "seedTargetNodes": false, "similarityCutoff": 0, "sourceNodeFilter": "NodeFilter[label=PERSONS]", "targetNodeFilter": "NodeFilter[NoOp]", "topK": 1 }, "didConverge": true, "nodePairsConsidered": 173, "nodesCompared": 7, "ranIterations": 2, "similarityDistribution": { "max": 0.9979286193847655, "mean": 0.9040115356445313, "min": 0.5665702819824219, "p1": 0.5665702819824219, "p10": 0.5665702819824219, "p100": 0.9979248046875, "p25": 0.9598922729492188, "p5": 0.5665702819824219, "p50": 0.9977455139160156, "p75": 0.9979248046875, "p90": 0.9979248046875, "p95": 0.9979248046875, "p99": 0.9979248046875, "stdDev": 0.16936039641075756 } }, "project_1": { "graphName": "snowgraph", "nodeCount": 7, "nodeMillis": 491, "relationshipCount": 0, "relationshipMillis": 0, "totalMillis": 491 }, "write_relationship_type_1": { "exportMillis": 1847, "outputTable": "EXAMPLE_DB.DATA_SCHEMA.OUTPUT", "relationshipProperty": "similarity", "relationshipType": "SIMILARITY", "relationshipsExported": 2 } } |
The returned result contains information about the job execution. Notice that only 3 relationships were exported because we filtered to only use Person nodes as source nodes. The similarity scores have been written back to the Snowflake database.
We can query the results:
SELECT * FROM EXAMPLE_DB.DATA_SCHEMA.SIMILARITY_OUTPUT ORDER BY SCORE DESC;
SOURCENODEID | TARGETNODEID | SCORE |
---|---|---|
alice | carol | 0.8234 |
carol | alice | 0.8234 |
eve | carol | 0.6745 |
In this example, we used sourceNodeFilter: 'nodes_person' to filter source nodes to only those in the NODES_PERSON table. Only Person nodes (alice, carol, eve) appear as source nodes in the results, while Robot nodes (bob, dave) are excluded from being source nodes.
Filtering using node IDs and tables
The Filtered KNN algorithm also supports filtering by specific node IDs. When using node ID filtering, you must specify the table containing those nodes.
CALL Neo4j_Graph_Analytics.graph.knn_filtered('CPU_X64_XS', {
'project': {
'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
'nodeTables': ['NODES_PERSON','NODES_ROBOT'],
'relationshipTables': {}
},
'compute': {
'topK': 1,
'nodeProperties': ['AGE', 'EMBEDDING'],
'sourceNodeFilter': [42,44,46],
'sourceNodeTable': 'EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON',
'targetNodeFilter': [43,45],
'targetNodeTable': 'EXAMPLE_DB.DATA_SCHEMA.NODES_ROBOT'
},
'write': [{
'outputTable': 'SIMILARITY_OUTPUT_IDS',
'sourceLabel': 'NODES_PERSON',
'targetLabel': 'NODES_PERSON'
}]
});
JOB_ID | JOB_STATUS | JOB_START | JOB_END | JOB_RESULT |
---|---|---|---|---|
job_xyz789abc012def345 | SUCCESS | 2025-10-22 14:35:22.789000 | 2025-10-22 14:35:27.123000 | { "knn_filtered_1": { "computeMillis": 40, "configuration": { "concurrency": 6, "deltaThreshold": 0.001, "initialSampler": "UNIFORM", "maxIterations": 100, "nodeLabels": [ "*" ], "nodeProperties": { "AGE": "LONG_PROPERTY_METRIC", "EMBEDDING": "JACCARD" }, "perturbationRate": 0, "randomJoins": 10, "relationshipTypes": [ "*" ], "resultProperty": "similarity", "resultRelationshipType": "SIMILARITY", "sampleRate": 0.5, "seedTargetNodes": false, "similarityCutoff": 0, "sourceNodeFilter": [ 42, 44, 46 ], "sourceNodeTable": "EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON", "targetNodeFilter": [ 43, 45 ], "targetNodeTable": "EXAMPLE_DB.DATA_SCHEMA.NODES_ROBOT", "topK": 1 }, "didConverge": true, "nodePairsConsidered": 136, "nodesCompared": 5, "ranIterations": 2, "similarityDistribution": { "max": 0.19166755676269528, "mean": 0.13277800877888998, "min": 0.019999980926513672, "p1": 0.019999980926513672, "p10": 0.019999980926513672, "p100": 0.19166743755340576, "p25": 0.019999980926513672, "p5": 0.019999980926513672, "p50": 0.1866673231124878, "p75": 0.19166743755340576, "p90": 0.19166743755340576, "p95": 0.19166743755340576, "p99": 0.19166743755340576, "stdDev": 0.07977222975785117 } }, "project_1": { "graphName": "snowgraph", "nodeCount": 5, "nodeMillis": 415, "relationshipCount": 0, "relationshipMillis": 0, "totalMillis": 415 }, "write_relationship_type_1": { "exportMillis": 1779, "outputTable": "EXAMPLE_DB.DATA_SCHEMA.OUTPUT", "relationshipProperty": "similarity", "relationshipType": "SIMILARITY", "relationshipsExported": 0 } } |
In this example, we specified node IDs for both the source and target filters. The sourceNodeFilter contains IDs [42, 44, 46] from the NODES_PERSON table, and the targetNodeFilter contains IDs [43, 45] from the NODES_ROBOT table. Notice that 0 relationships were exported; this happens when the specified node IDs don't exist in the tables, or when there are no valid similarity pairs between the filtered source and target nodes.
Memory Estimation
Before running the Filtered KNN algorithm, you can estimate the memory requirements.
CALL Neo4j_Graph_Analytics.graph.estimate_knn_filtered('CPU_X64_XS', {
'project': {
'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
'nodeTables': ['NODES_PERSON','NODES_ROBOT'],
'relationshipTables': {}
},
'compute': {
'topK': 1,
'nodeProperties': ['AGE', 'EMBEDDING'],
'sourceNodeFilter': [42,44,46],
'sourceNodeTable': 'EXAMPLE_DB.DATA_SCHEMA.NODES_PERSON',
'targetNodeFilter': [43,45],
'targetNodeTable': 'EXAMPLE_DB.DATA_SCHEMA.NODES_ROBOT'
},
'write': [{
'outputTable': 'SIMILARITY_OUTPUT_EST',
'sourceLabel': 'NODES_PERSON',
'targetLabel': 'NODES_PERSON'
}]
});
JOB_ID | JOB_STATUS | JOB_START | JOB_END | JOB_RESULT |
---|---|---|---|---|
job_est456def789ghi012 | SUCCESS | 2025-10-22 14:40:05.234000 | 2025-10-22 14:40:07.567000 | { "arguments": { "node_count": 5, "node_label_count": 2, "node_property_count": 4, "relationship_count": 0, "relationship_property_count": 0, "relationship_type_count": 0 }, "estimation": { "bytes_total": 432978 }, "recommendation": { "pool_selector": "CPU_X64_XS" } } |
The estimation provides the expected memory requirements in bytes, helping you choose an appropriate compute pool for your workload.