Betweenness Centrality
Neo4j Graph Analytics for Snowflake is in Public Preview and is not intended for production use.
Introduction
Betweenness centrality is a way of detecting the amount of influence a node has over the flow of information in a graph. It is often used to find nodes that serve as a bridge from one part of a graph to another.
The algorithm calculates shortest paths between all pairs of nodes in a graph. Each node receives a score, based on the number of shortest paths that pass through the node. Nodes that more frequently lie on shortest paths between other nodes will have higher betweenness centrality scores.
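Formally, the betweenness centrality of a node v is given by the standard definition

\[
c_B(v) \;=\; \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}},
\]

where \(\sigma_{st}\) is the number of shortest paths from node s to node t, and \(\sigma_{st}(v)\) is the number of those paths that pass through v.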
Betweenness centrality is implemented for graphs without weights or with positive weights. The Neo4j Graph Analytics for Snowflake implementation is based on Brandes' approximate algorithm for unweighted graphs. The implementation requires O(n + m) space and runs in O(n * m) time, where n is the number of nodes and m the number of relationships in the graph.
For more information on this algorithm, see:

- Ulrik Brandes, "A Faster Algorithm for Betweenness Centrality" (2001).
- Ulrik Brandes and Christian Pich, "Centrality Estimation in Large Networks" (2007).
Considerations and sampling
The Betweenness Centrality algorithm can be very resource-intensive to compute. Brandes' approximate algorithm computes single-source shortest paths (SSSP) for a set of source nodes. When all nodes are selected as source nodes, the algorithm produces an exact result. However, for large graphs this can potentially lead to very long runtimes. Thus, approximating the results by computing the SSSPs for only a subset of nodes can be useful. In Neo4j Graph Analytics for Snowflake we refer to this technique as sampling, where the size of the source node set is the sampling size.
There are two things to consider when executing the algorithm on large graphs:
- A higher parallelism leads to higher memory consumption, as each thread executes SSSPs for a subset of source nodes sequentially.
  - In the worst case, a single SSSP requires the whole graph to be duplicated in memory.
- A higher sampling size leads to more accurate results, but also to a potentially much longer execution time.
Changing the values of the configuration parameters concurrency and samplingSize, respectively, can help to manage these considerations.
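For example, switching to an approximate run only requires setting samplingSize in the compute part of the configuration map; the value 100 below is an arbitrary illustration, and the full invocation is shown under Syntax below. The concurrency parameter can be adjusted in the same way.

'compute': {
    'mutateProperty': 'score',
    'samplingSize': 100
}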
Sampling strategies
Brandes defines several strategies for selecting source nodes. The Neo4j Graph Analytics for Snowflake implementation is based on the random degree selection strategy, which selects nodes with a probability proportional to their degree. The idea behind this strategy is that such nodes are likely to lie on many shortest paths in the graph and thus have a higher contribution to the betweenness centrality score.
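Written as a formula, and assuming a single draw picks node v with probability proportional to its degree, the selection probability is

\[
P(v) \;=\; \frac{\deg(v)}{\sum_{u \in V} \deg(u)}.
\]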
Syntax
CALL Neo4j_Graph_Analytics.graph.betweenness(
    'CPU_X64_L',          (1)
    {
        'project': {...}, (2)
        'compute': {...}, (3)
        'write': {...}    (4)
    }
);
1. Compute pool selector.
2. Project config.
3. Compute config.
4. Write config.
| Name | Type | Default | Optional | Description |
|---|---|---|---|---|
| computePoolSelector | String | | no | The selector for the compute pool on which to run the Betweenness Centrality job. |
| configuration | Map | | no | Configuration for graph project, algorithm compute and result write back. |
The configuration map consists of the following three entries.
For more details on the project configuration below, refer to the Project documentation.
| Name | Type |
|---|---|
| nodeTables | List of node tables. |
| relationshipTables | Map of relationship types to relationship tables. |
| Name | Type | Default | Optional | Description |
|---|---|---|---|---|
| mutateProperty | String | | yes | The node property that will be written back to the Snowflake database. |
| samplingSize | Integer | | yes | The number of source nodes to consider for computing centrality scores. |
| samplingSeed | Integer | | yes | The seed value for the random number generator that selects start nodes. |
| relationshipWeightProperty | String | | yes | Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted. |
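The relationshipWeightProperty parameter enables a weighted computation. As a sketch, and assuming the WEIGHT column of the example FOLLOWS table further down is projected as a relationship property of the same name, the compute entry of the configuration map could look like this:

'compute': {
    'mutateProperty': 'score',
    'relationshipWeightProperty': 'WEIGHT'
}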
For more details on the write configuration below, refer to the Write documentation.
| Name | Type | Default | Optional | Description |
|---|---|---|---|---|
| nodeProperty | String | | yes | The node property that will be written back to the Snowflake database. |
Examples
In this section we will show examples of running the Betweenness Centrality algorithm on a concrete graph. The intention is to illustrate what the results look like and to provide a guide on how to make use of the algorithm in a real setting. We will do this on a small social network graph of a handful of nodes connected in a particular pattern. The example graph looks like this:

CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.USERS (NODEID STRING);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.USERS VALUES
('Alice'),
('Bob'),
('Carol'),
('Dan'),
('Eve'),
('Frank'),
('Gale');
CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.FOLLOWS (SOURCENODEID STRING, TARGETNODEID STRING, WEIGHT DOUBLE);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.FOLLOWS VALUES
('Alice', 'Carol', 1.0),
('Bob', 'Carol', 1.0),
('Carol', 'Dan', 1.0),
('Carol', 'Eve', 1.3),
('Dan', 'Frank', 1.0),
('Eve', 'Frank', 0.5),
('Frank', 'Gale', 1.0);
With the node and relationship tables in Snowflake, we can now project them as part of an algorithm job. In the following examples we will demonstrate using the Betweenness Centrality algorithm on this graph.
To run the queries in this section, there is a required setup of grants for the application, your consumer role and your environment. Please see the Getting started page for more on this.
We also assume that the application name is the default Neo4j_Graph_Analytics. If you chose a different app name during installation, please replace it with that.
Run job
Running a Betweenness Centrality job involves three steps: Project, Compute and Write.
CALL Neo4j_Graph_Analytics.graph.betweenness('CPU_X64_XS', {
    'project': {
        'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
        'nodeTables': [ 'USERS' ],
        'relationshipTables': {
            'FOLLOWS': {
                'sourceTable': 'USERS',
                'targetTable': 'USERS'
            }
        }
    },
    'compute': {
        'mutateProperty': 'score'
    },
    'write': [{
        'nodeLabel': 'USERS',
        'outputTable': 'EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY',
        'nodeProperty': 'score'
    }]
});
| JOB_ID | JOB_START | JOB_END | JOB_RESULT |
|---|---|---|---|
| job_18ec2c6bfa744a9d866acffc86e6ecba | 2025-04-29 11:41:43.604000 | 2025-04-29 11:41:50.077000 | { "betweenness_1": { "centralityDistribution": { "max": 8.000061035156248, "mean": 2.714292253766741, "min": 0, "p50": 3, "p75": 5.0000152587890625, "p90": 8.000045776367188, "p95": 8.000045776367188, "p99": 8.000045776367188, "p999": 8.000045776367188 }, "computeMillis": 12, "configuration": { "concurrency": 2, "jobId": "0705b31c-8c41-4237-a543-5021c1fc675e", "logProgress": true, "mutateProperty": "betweenness", "nodeLabels": [ "*" ], "relationshipTypes": [ "*" ], "sudo": false }, "mutateMillis": 1, "nodePropertiesWritten": 7, "postProcessingMillis": 27, "preProcessingMillis": 8 }, "project_1": { "graphName": "snowgraph", "nodeCount": 7, "nodeMillis": 183, "relationshipCount": 7, "relationshipMillis": 602, "totalMillis": 785 }, "write_node_property_1": { "exportMillis": 2336, "nodeLabel": "USERS", "nodeProperty": "score", "outputTable": "EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY", "propertiesExported": 7 } } |
The returned result contains information about the job execution and result distribution. Additionally, the centrality score for each of the seven nodes has been written back to the Snowflake database. We can query it like so:
SELECT * FROM EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY;
Which shows the computation results as stored in the database:
| NODEID | SCORE |
|---|---|
| Alice | 0.0 |
| Bob | 0.0 |
| Carol | 8.0 |
| Dan | 3.0 |
| Eve | 3.0 |
| Frank | 5.0 |
| Gale | 0.0 |
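To list the strongest bridge nodes first, the written-back results can be sorted by score with plain Snowflake SQL over the output table created above:

SELECT NODEID, SCORE
FROM EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY
ORDER BY SCORE DESC;

In our example graph, this puts 'Carol' and 'Frank' at the top, reflecting their roles as bridges in the graph.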
Sampling
Betweenness Centrality can be very resource-intensive to compute.
To help with this, it is possible to approximate the results using a sampling technique.
The configuration parameters samplingSize and samplingSeed are used to control the sampling.
We illustrate this on our example graph by approximating Betweenness Centrality with a sampling size of two.
The seed value is an arbitrary integer; using the same value will yield the same results between different runs of the procedure.
CALL Neo4j_Graph_Analytics.graph.betweenness('CPU_X64_XS', {
    'project': {
        'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
        'nodeTables': [ 'USERS' ],
        'relationshipTables': {
            'FOLLOWS': {
                'sourceTable': 'USERS',
                'targetTable': 'USERS'
            }
        }
    },
    'compute': {
        'mutateProperty': 'score',
        'samplingSize': 2,
        'samplingSeed': 4
    },
    'write': [{
        'nodeLabel': 'USERS',
        'outputTable': 'EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY_SAMPLED',
        'nodeProperty': 'score'
    }]
});
SELECT * FROM EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY_SAMPLED;
| NODEID | SCORE |
|---|---|
| Alice | 0.0 |
| Bob | 0.0 |
| Carol | 4.0 |
| Dan | 2.0 |
| Eve | 2.0 |
| Frank | 2.0 |
| Gale | 0.0 |
Here we can see that the 'Carol' node has the highest score, followed by a three-way tie between the 'Dan', 'Eve', and 'Frank' nodes. We are only sampling from two nodes, where the probability of a node being picked for the sampling is proportional to its outgoing degree. The 'Carol' node has the maximum degree and is the most likely to be picked. The 'Gale' node has an outgoing degree of zero and is very unlikely to be picked. The other nodes all have the same probability to be picked.
With our selected sampling seed of 4, we seem to have selected either the 'Alice' or the 'Bob' node, as well as the 'Carol' node. We can see that because either of 'Alice' and 'Bob' would add four to the score of the 'Carol' node, and each of 'Alice', 'Bob', and 'Carol' adds one to each of 'Dan', 'Eve', and 'Frank'.
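Written out, and using 'Alice' to stand in for whichever of the 'Alice' and 'Bob' nodes was sampled, the sampled scores decompose into per-source contributions as

\[
c(\text{Carol}) = 4 + 0 = 4, \qquad
c(\text{Dan}) = c(\text{Eve}) = 1 + 1 = 2, \qquad
c(\text{Frank}) = 1 + 1 = 2,
\]

where the first term in each sum is the contribution from the 'Alice' source and the second is the contribution from the 'Carol' source.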
To increase the accuracy of our approximation, the sampling size could be increased.
In fact, setting the samplingSize to the node count of the graph (seven, in our case) will produce exact results.
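As a sketch, the exact scores from the first example could thus also be obtained through the sampling parameters; the output table name USERS_CENTRALITY_EXACT below is only an illustration:

CALL Neo4j_Graph_Analytics.graph.betweenness('CPU_X64_XS', {
    'project': {
        'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
        'nodeTables': [ 'USERS' ],
        'relationshipTables': {
            'FOLLOWS': {
                'sourceTable': 'USERS',
                'targetTable': 'USERS'
            }
        }
    },
    'compute': {
        'mutateProperty': 'score',
        'samplingSize': 7
    },
    'write': [{
        'nodeLabel': 'USERS',
        'outputTable': 'EXAMPLE_DB.DATA_SCHEMA.USERS_CENTRALITY_EXACT',
        'nodeProperty': 'score'
    }]
});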