Eigenvector Centrality
Glossary
 Directed

Directed trait. The algorithm is well-defined on a directed graph.
 Directed

Directed trait. The algorithm ignores the direction of the graph.
 Directed

Directed trait. The algorithm does not run on a directed graph.
 Undirected

Undirected trait. The algorithm is well-defined on an undirected graph.
 Undirected

Undirected trait. The algorithm ignores the undirectedness of the graph.
 Heterogeneous nodes

Heterogeneous nodes fully supported. The algorithm has the ability to distinguish between nodes of different types.
 Heterogeneous nodes

Heterogeneous nodes allowed. The algorithm treats all selected nodes similarly regardless of their label.
 Heterogeneous relationships

Heterogeneous relationships fully supported. The algorithm has the ability to distinguish between relationships of different types.
 Heterogeneous relationships

Heterogeneous relationships allowed. The algorithm treats all selected relationships similarly regardless of their type.
 Weighted relationships

Weighted trait. The algorithm supports a relationship property to be used as weight, specified via the relationshipWeightProperty configuration parameter.
 Weighted relationships

Weighted trait. The algorithm treats each relationship as equally important, discarding the value of any relationship weight.
Introduction
Eigenvector Centrality is an algorithm that measures the transitive influence of nodes. Relationships originating from high-scoring nodes contribute more to the score of a node than connections from low-scoring nodes. A high eigenvector score means that a node is connected to many nodes that themselves have high scores.
The algorithm computes the eigenvector associated with the largest absolute eigenvalue. To compute that eigenvalue, the algorithm applies the power iteration approach. Within each iteration, the centrality score for each node is derived from the scores of its incoming neighbors. In the power iteration method, the eigenvector is L2-normalized after each iteration, leading to normalized results by default.
The PageRank algorithm is a variant of Eigenvector Centrality with an additional jump probability.
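The power iteration described above can be sketched in a few lines of Python. This is a simplified, unweighted illustration of the technique, not the GDS implementation:

```python
import math

def eigenvector_centrality(nodes, edges, max_iterations=20, tolerance=1e-7):
    """Unweighted power iteration sketch: each node's new score is the sum
    of its incoming neighbors' previous scores, L2-normalized per iteration."""
    scores = {n: 1.0 / len(nodes) for n in nodes}  # uniform start vector
    for _ in range(max_iterations):
        new = {n: 0.0 for n in nodes}
        for source, target in edges:
            new[target] += scores[source]  # contribution from an incoming neighbor
        norm = math.sqrt(sum(v * v for v in new.values())) or 1.0
        new = {n: v / norm for n, v in new.items()}  # L2-normalize
        if all(abs(new[n] - scores[n]) < tolerance for n in nodes):
            return new  # converged: all scores changed less than the tolerance
        scores = new
    return scores
```

On a directed cycle a → b → c → a, all nodes end up with the same normalized score, 1/√3; a node with no incoming relationships ends up at 0, as noted in the Considerations section.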
Considerations
There are some things to be aware of when using the Eigenvector Centrality algorithm:

- Centrality scores for nodes with no incoming relationships will converge to 0.
- Due to missing degree normalization, high-degree nodes have a very strong influence on their neighbors' scores.
Syntax
This section covers the syntax used to execute the Eigenvector Centrality algorithm in each of its execution modes. We are describing the named graph variant of the syntax. To learn more about general syntax variants, see Syntax overview.
CALL gds.eigenvector.stream(
graphName: String,
configuration: Map
)
YIELD
nodeId: Integer,
score: Float
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| graphName | String | | no | The name of a graph stored in the catalog. |
| configuration | Map | | yes | Configuration for algorithm-specifics and/or graph filtering. |
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| nodeLabels | List of String | | yes | Filter the named graph using the given node labels. Nodes with any of the given labels will be included. |
| relationshipTypes | List of String | | yes | Filter the named graph using the given relationship types. Relationships with any of the given types will be included. |
| concurrency | Integer | | yes | The number of concurrent threads used for running the algorithm. |
| jobId | String | | yes | An ID that can be provided to more easily track the algorithm's progress. |
| logProgress | Boolean | | yes | If disabled the progress percentage will not be logged. |
| maxIterations | Integer | | yes | The maximum number of iterations of Eigenvector Centrality to run. |
| tolerance | Float | | yes | Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and the algorithm returns. |
| relationshipWeightProperty | String | | yes | Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted. |
| sourceNodes | List of Node or Number | | yes | The nodes or node ids to use for computing Personalized Eigenvector Centrality. |
| scaler | String or Map | | yes | The name of the scaler applied to the final scores. |
| Name | Type | Description |
| --- | --- | --- |
| nodeId | Integer | Node ID. |
| score | Float | Eigenvector score. |
CALL gds.eigenvector.stats(
graphName: String,
configuration: Map
)
YIELD
ranIterations: Integer,
didConverge: Boolean,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
centralityDistribution: Map,
configuration: Map
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| graphName | String | | no | The name of a graph stored in the catalog. |
| configuration | Map | | yes | Configuration for algorithm-specifics and/or graph filtering. |
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| nodeLabels | List of String | | yes | Filter the named graph using the given node labels. Nodes with any of the given labels will be included. |
| relationshipTypes | List of String | | yes | Filter the named graph using the given relationship types. Relationships with any of the given types will be included. |
| concurrency | Integer | | yes | The number of concurrent threads used for running the algorithm. |
| jobId | String | | yes | An ID that can be provided to more easily track the algorithm's progress. |
| logProgress | Boolean | | yes | If disabled the progress percentage will not be logged. |
| maxIterations | Integer | | yes | The maximum number of iterations of Eigenvector Centrality to run. |
| tolerance | Float | | yes | Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and the algorithm returns. |
| relationshipWeightProperty | String | | yes | Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted. |
| sourceNodes | List of Node or Number | | yes | The nodes or node ids to use for computing Personalized Eigenvector Centrality. |
| scaler | String or Map | | yes | The name of the scaler applied to the final scores. |
| Name | Type | Description |
| --- | --- | --- |
| ranIterations | Integer | The number of iterations run. |
| didConverge | Boolean | Indicates if the algorithm converged. |
| preProcessingMillis | Integer | Milliseconds for preprocessing the graph. |
| computeMillis | Integer | Milliseconds for running the algorithm. |
| postProcessingMillis | Integer | Milliseconds for computing the centralityDistribution. |
| centralityDistribution | Map | Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values. |
| configuration | Map | The configuration used for running the algorithm. |
CALL gds.eigenvector.mutate(
graphName: String,
configuration: Map
)
YIELD
nodePropertiesWritten: Integer,
ranIterations: Integer,
didConverge: Boolean,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
mutateMillis: Integer,
centralityDistribution: Map,
configuration: Map
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| graphName | String | | no | The name of a graph stored in the catalog. |
| configuration | Map | | yes | Configuration for algorithm-specifics and/or graph filtering. |
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| mutateProperty | String | | no | The node property in the GDS graph to which the score is written. |
| nodeLabels | List of String | | yes | Filter the named graph using the given node labels. |
| relationshipTypes | List of String | | yes | Filter the named graph using the given relationship types. |
| concurrency | Integer | | yes | The number of concurrent threads used for running the algorithm. |
| jobId | String | | yes | An ID that can be provided to more easily track the algorithm's progress. |
| maxIterations | Integer | | yes | The maximum number of iterations of Eigenvector Centrality to run. |
| tolerance | Float | | yes | Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and the algorithm returns. |
| relationshipWeightProperty | String | | yes | Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted. |
| sourceNodes | List of Node or Number | | yes | The nodes or node ids to use for computing Personalized Eigenvector Centrality. |
| scaler | String or Map | | yes | The name of the scaler applied to the final scores. |
| Name | Type | Description |
| --- | --- | --- |
| ranIterations | Integer | The number of iterations run. |
| didConverge | Boolean | Indicates if the algorithm converged. |
| preProcessingMillis | Integer | Milliseconds for preprocessing the graph. |
| computeMillis | Integer | Milliseconds for running the algorithm. |
| postProcessingMillis | Integer | Milliseconds for computing the centralityDistribution. |
| mutateMillis | Integer | Milliseconds for adding properties to the in-memory graph. |
| nodePropertiesWritten | Integer | The number of properties that were written to the in-memory graph. |
| centralityDistribution | Map | Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values. |
| configuration | Map | The configuration used for running the algorithm. |
CALL gds.eigenvector.write(
graphName: String,
configuration: Map
)
YIELD
nodePropertiesWritten: Integer,
ranIterations: Integer,
didConverge: Boolean,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
writeMillis: Integer,
centralityDistribution: Map,
configuration: Map
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| graphName | String | | no | The name of a graph stored in the catalog. |
| configuration | Map | | yes | Configuration for algorithm-specifics and/or graph filtering. |
| Name | Type | Default | Optional | Description |
| --- | --- | --- | --- | --- |
| nodeLabels | List of String | | yes | Filter the named graph using the given node labels. Nodes with any of the given labels will be included. |
| relationshipTypes | List of String | | yes | Filter the named graph using the given relationship types. Relationships with any of the given types will be included. |
| concurrency | Integer | | yes | The number of concurrent threads used for running the algorithm. |
| jobId | String | | yes | An ID that can be provided to more easily track the algorithm's progress. |
| logProgress | Boolean | | yes | If disabled the progress percentage will not be logged. |
| writeConcurrency | Integer | | yes | The number of concurrent threads used for writing the result to Neo4j. |
| writeProperty | String | | no | The node property in the Neo4j database to which the score is written. |
| maxIterations | Integer | | yes | The maximum number of iterations of Eigenvector Centrality to run. |
| tolerance | Float | | yes | Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and the algorithm returns. |
| relationshipWeightProperty | String | | yes | Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted. |
| sourceNodes | List of Node or Number | | yes | The nodes or node ids to use for computing Personalized Eigenvector Centrality. |
| scaler | String or Map | | yes | The name of the scaler applied to the final scores. |
| Name | Type | Description |
| --- | --- | --- |
| ranIterations | Integer | The number of iterations run. |
| didConverge | Boolean | Indicates if the algorithm converged. |
| preProcessingMillis | Integer | Milliseconds for preprocessing the graph. |
| computeMillis | Integer | Milliseconds for running the algorithm. |
| postProcessingMillis | Integer | Milliseconds for computing the centralityDistribution. |
| writeMillis | Integer | Milliseconds for writing result data back. |
| nodePropertiesWritten | Integer | The number of properties that were written to Neo4j. |
| centralityDistribution | Map | Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values. |
| configuration | Map | The configuration used for running the algorithm. |
Examples
All the examples below should be run in an empty database. The examples use Cypher projections as the norm. Native projections will be deprecated in a future release. 
In this section we will show examples of running the Eigenvector Centrality algorithm on a concrete graph. The intention is to illustrate what the results look like and to provide a guide in how to make use of the algorithm in a real setting. We will do this on a small web network graph of a handful of nodes connected in a particular pattern. The example graph looks like this:
CREATE
  (home:Page {name:'Home'}),
  (about:Page {name:'About'}),
  (product:Page {name:'Product'}),
  (links:Page {name:'Links'}),
  (a:Page {name:'Site A'}),
  (b:Page {name:'Site B'}),
  (c:Page {name:'Site C'}),
  (d:Page {name:'Site D'}),
  (home)-[:LINKS {weight: 0.2}]->(about),
  (home)-[:LINKS {weight: 0.2}]->(links),
  (home)-[:LINKS {weight: 0.6}]->(product),
  (about)-[:LINKS {weight: 1.0}]->(home),
  (product)-[:LINKS {weight: 1.0}]->(home),
  (a)-[:LINKS {weight: 1.0}]->(home),
  (b)-[:LINKS {weight: 1.0}]->(home),
  (c)-[:LINKS {weight: 1.0}]->(home),
  (d)-[:LINKS {weight: 1.0}]->(home),
  (links)-[:LINKS {weight: 0.8}]->(home),
  (links)-[:LINKS {weight: 0.05}]->(a),
  (links)-[:LINKS {weight: 0.05}]->(b),
  (links)-[:LINKS {weight: 0.05}]->(c),
  (links)-[:LINKS {weight: 0.05}]->(d);
This graph represents eight pages, linking to one another.
Each relationship has a property called weight, which describes the importance of the relationship.
MATCH (source:Page)-[r:LINKS]->(target:Page)
RETURN gds.graph.project(
  'myGraph',
  source,
  target,
  { relationshipProperties: r { .weight } }
)
Memory Estimation
First off, we will estimate the cost of running the algorithm using the estimate procedure.
This can be done with any execution mode.
We will use the write mode in this example.
Estimating the algorithm is useful to understand the memory impact that running the algorithm on your graph will have.
When you later actually run the algorithm in one of the execution modes, the system will perform an estimation.
If the estimation shows that there is a very high probability of the execution going over its memory limitations, the execution is prohibited.
To read more about this, see Automatic estimation and execution blocking.
For more details on estimate in general, see Memory Estimation.
CALL gds.eigenvector.write.estimate('myGraph', {
writeProperty: 'centrality',
maxIterations: 20
})
YIELD nodeCount, relationshipCount, bytesMin, bytesMax, requiredMemory
| nodeCount | relationshipCount | bytesMin | bytesMax | requiredMemory |
| --- | --- | --- | --- | --- |
| 8 | 14 | 696 | 696 | "696 Bytes" |
Stream
In the stream execution mode, the algorithm returns the score for each node.
This allows us to inspect the results directly or post-process them in Cypher without any side effects.
For example, we can order the results to find the nodes with the highest Eigenvector score.
For more details on the stream mode in general, see Stream.
The following runs the algorithm in stream mode:
CALL gds.eigenvector.stream('myGraph')
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
| name | score |
| --- | --- |
| "Home" | 0.7465574981728249 |
| "About" | 0.33997520529777137 |
| "Links" | 0.33997520529777137 |
| "Product" | 0.33997520529777137 |
| "Site A" | 0.15484062876886298 |
| "Site B" | 0.15484062876886298 |
| "Site C" | 0.15484062876886298 |
| "Site D" | 0.15484062876886298 |
The above query runs the algorithm in stream mode as unweighted.
Below, one can find an example for weighted graphs.
Stats
In the stats execution mode, the algorithm returns a single row containing a summary of the algorithm result.
For example, Eigenvector Centrality stats returns a centrality histogram which can be used to monitor the distribution of centrality scores across all computed nodes.
This execution mode does not have any side effects.
It can be useful for evaluating algorithm performance by inspecting the computeMillis return item.
In the examples below we will omit returning the timings.
The full signature of the procedure can be found in the syntax section.
For more details on the stats mode in general, see Stats.
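To make the summary concrete, a naive version of such a distribution can be sketched in Python. This is only an illustration of what min, max, mean and nearest-rank percentiles over the computed scores look like, not how GDS computes its histogram internally:

```python
def centrality_summary(scores):
    """Naive min/max/mean/percentile summary over a list of centrality scores."""
    values = sorted(scores)

    def pct(p):
        # Simple nearest-rank percentile over the sorted score list.
        return values[min(len(values) - 1, int(p / 100 * len(values)))]

    return {
        "min": values[0],
        "max": values[-1],
        "mean": sum(values) / len(values),
        "p50": pct(50), "p90": pct(90), "p99": pct(99),
    }
```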
CALL gds.eigenvector.stats('myGraph', {
maxIterations: 20
})
YIELD centralityDistribution
RETURN centralityDistribution.max AS max
| max |
| --- |
| 0.7465591431 |
Mutate
The mutate execution mode extends the stats mode with an important side effect: updating the named graph with a new node property containing the score for that node.
The name of the new property is specified using the mandatory configuration parameter mutateProperty.
The result is a single summary row, similar to stats, but with some additional metrics.
The mutate mode is especially useful when multiple algorithms are used in conjunction.
For more details on the mutate mode in general, see Mutate.
The following runs the algorithm in mutate mode:
CALL gds.eigenvector.mutate('myGraph', {
maxIterations: 20,
mutateProperty: 'centrality'
})
YIELD nodePropertiesWritten, ranIterations
| nodePropertiesWritten | ranIterations |
| --- | --- |
| | |
Write
The write execution mode extends the stats mode with an important side effect: writing the score for each node as a property to the Neo4j database.
The name of the new property is specified using the mandatory configuration parameter writeProperty.
The result is a single summary row, similar to stats, but with some additional metrics.
The write mode enables directly persisting the results to the database.
For more details on the write mode in general, see Write.
The following runs the algorithm in write mode:
CALL gds.eigenvector.write('myGraph', {
maxIterations: 20,
writeProperty: 'centrality'
})
YIELD nodePropertiesWritten, ranIterations
| nodePropertiesWritten | ranIterations |
| --- | --- |
| | |
Weighted
By default, the algorithm considers the relationships of the graph to be unweighted.
To change this behaviour, we can use the relationshipWeightProperty configuration parameter.
If the parameter is set, the associated property value is used as the relationship weight.
In the weighted case, the previous score of a node sent to its neighbors is multiplied by the normalized relationship weight.
Note that negative relationship weights are ignored during the computation.
In the following example, we use the weight property of the input graph as the relationship weight property.
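The weighted update can be sketched as a single power-iteration step in Python. This is an illustrative sketch under the assumption that each weight is normalized by the sum of its source's positive outgoing weights; negative weights are skipped, mirroring the note above:

```python
def weighted_step(scores, weighted_edges):
    """One weighted power-iteration step (sketch, before L2 normalization).

    `weighted_edges` maps (source, target) pairs to weights. Negative
    weights are ignored; each remaining weight is divided by the sum of its
    source's positive outgoing weights before the score is distributed.
    """
    out_weight = {}
    for (source, _), weight in weighted_edges.items():
        if weight > 0:  # negative relationship weights are ignored
            out_weight[source] = out_weight.get(source, 0.0) + weight
    new = {n: 0.0 for n in scores}
    for (source, target), weight in weighted_edges.items():
        if weight > 0:
            new[target] += scores[source] * (weight / out_weight[source])
    return new
```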
The following runs the algorithm in stream mode using relationship weights:
CALL gds.eigenvector.stream('myGraph', {
maxIterations: 20,
relationshipWeightProperty: 'weight'
})
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
| name | score |
| --- | --- |
| "Home" | 0.8328163407319487 |
| "Product" | 0.5004775834976313 |
| "About" | 0.1668258611658771 |
| "Links" | 0.1668258611658771 |
| "Site A" | 0.008327591469710233 |
| "Site B" | 0.008327591469710233 |
| "Site C" | 0.008327591469710233 |
| "Site D" | 0.008327591469710233 |
As in the unweighted example, the "Home" node has the highest score.
In contrast, the "Product" node now has the second highest score instead of the fourth highest.
We use stream mode to illustrate running the algorithm as weighted; however, all the algorithm modes support the relationshipWeightProperty configuration parameter.

Tolerance
The tolerance configuration parameter denotes the minimum change in scores between iterations.
If all scores change less than the configured tolerance, the iteration is aborted and the algorithm is considered converged.
Note that setting a higher tolerance leads to earlier convergence, but also to less accurate centrality scores.
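The convergence check itself is simple; a minimal Python sketch of the rule described above:

```python
def has_converged(old_scores, new_scores, tolerance):
    # The result is considered stable once every node's score has moved by
    # less than the configured tolerance between two consecutive iterations.
    return all(abs(new_scores[n] - old_scores[n]) < tolerance
               for n in old_scores)
```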
The following runs the algorithm in stream mode using a high tolerance value:
CALL gds.eigenvector.stream('myGraph', {
maxIterations: 20,
tolerance: 0.1
})
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
| name | score |
| --- | --- |
| "Home" | 0.7108273818583551 |
| "About" | 0.3719400001993262 |
| "Links" | 0.3719400001993262 |
| "Product" | 0.3719400001993262 |
| "Site A" | 0.14116155811301126 |
| "Site B" | 0.14116155811301126 |
| "Site C" | 0.14116155811301126 |
| "Site D" | 0.14116155811301126 |
We are using tolerance: 0.1, which leads to slightly different results compared to the stream example.
However, the computation converges after three iterations, and we can already observe a trend in the resulting scores.
Personalized Eigenvector Centrality
Personalized Eigenvector Centrality is a variation of Eigenvector Centrality which is biased towards a set of sourceNodes.
By default, the power iteration starts with the same value for all nodes: 1 / |V|.
For a given set of source nodes S, the initial value of each source node is set to 1 / |S|, and to 0 for all remaining nodes.
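The difference to the default variant lies only in the start vector; a minimal sketch of the initialization rule above:

```python
def initial_scores(nodes, source_nodes=None):
    """Start vector for the power iteration (sketch).

    Default: every node starts at 1 / |V|. Personalized: each source node
    starts at 1 / |S|, and every other node at 0.
    """
    if not source_nodes:
        return {n: 1.0 / len(nodes) for n in nodes}
    sources = set(source_nodes)
    return {n: (1.0 / len(sources) if n in sources else 0.0) for n in nodes}
```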
The following example shows how to run Eigenvector Centrality centered around 'Site A' and 'Site B'.
MATCH (siteA:Page {name: 'Site A'}), (siteB:Page {name: 'Site B'})
CALL gds.eigenvector.stream('myGraph', {
maxIterations: 20,
sourceNodes: [siteA, siteB]
})
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
| name | score |
| --- | --- |
| "Home" | 0.7465645391567868 |
| "About" | 0.33997203172449453 |
| "Links" | 0.33997203172449453 |
| "Product" | 0.33997203172449453 |
| "Site A" | 0.15483736775159632 |
| "Site B" | 0.15483736775159632 |
| "Site C" | 0.15483736775159632 |
| "Site D" | 0.15483736775159632 |
Scaling centrality scores
Internally, centrality scores are scaled after each iteration using L2 normalization.
As a consequence, the final values are already normalized.
This behavior cannot be changed, as it is part of the power iteration method.
However, to normalize the final scores as part of the algorithm execution, one can use the scaler configuration parameter.
A description of all available scalers can be found in the documentation for the scaleProperties procedure.
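For instance, a min-max scaler maps the smallest score to 0 and the largest to 1, with all other scores falling linearly in between; a minimal Python sketch of that transformation:

```python
def minmax_scale(scores):
    """Min-max scaling sketch: minimum maps to 0.0, maximum to 1.0."""
    low, high = min(scores.values()), max(scores.values())
    span = (high - low) or 1.0  # avoid division by zero when all scores are equal
    return {n: (v - low) / span for n, v in scores.items()}
```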
The following runs the algorithm in stream mode and returns normalized results:
CALL gds.eigenvector.stream('myGraph', {
scaler: "MINMAX"
})
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
| name | score |
| --- | --- |
| "Home" | 1.0 |
| "About" | 0.312876962110942 |
| "Links" | 0.312876962110942 |
| "Product" | 0.312876962110942 |
| "Site A" | 0.0 |
| "Site B" | 0.0 |
| "Site C" | 0.0 |
| "Site D" | 0.0 |
Comparing the results with the stream example, we can see that the relative order of scores is the same.