This section describes the Eigenvector Centrality algorithm in the Neo4j Graph Algorithms library.

Eigenvector Centrality is an algorithm that measures the **transitive** influence or connectivity of nodes.

Relationships to high-scoring nodes contribute more to the score of a node than connections to low-scoring nodes. A high score means that a node is connected to other nodes that have high scores.

The Eigenvector Centrality algorithm is experimental and not officially supported.


Eigenvector Centrality was proposed by Phillip Bonacich in his 1986 paper, *Power and Centrality: A Family of Measures*. It was the first of the centrality measures to consider the transitive importance of a node in a graph, rather than only its direct importance.

Eigenvector Centrality can be used in many of the same use cases as the PageRank algorithm.

This sample will explain the Eigenvector Centrality algorithm, using a simple graph:

The following will create a sample graph:

```
MERGE (home:Page {name:'Home'})
MERGE (about:Page {name:'About'})
MERGE (product:Page {name:'Product'})
MERGE (links:Page {name:'Links'})
MERGE (a:Page {name:'Site A'})
MERGE (b:Page {name:'Site B'})
MERGE (c:Page {name:'Site C'})
MERGE (d:Page {name:'Site D'})
MERGE (home)-[:LINKS]->(about)
MERGE (about)-[:LINKS]->(home)
MERGE (product)-[:LINKS]->(home)
MERGE (home)-[:LINKS]->(product)
MERGE (links)-[:LINKS]->(home)
MERGE (home)-[:LINKS]->(links)
MERGE (links)-[:LINKS]->(a)
MERGE (a)-[:LINKS]->(home)
MERGE (links)-[:LINKS]->(b)
MERGE (b)-[:LINKS]->(home)
MERGE (links)-[:LINKS]->(c)
MERGE (c)-[:LINKS]->(home)
MERGE (links)-[:LINKS]->(d)
MERGE (d)-[:LINKS]->(home)
```

The following will run the algorithm and stream results:

```
CALL algo.eigenvector.stream('Page', 'LINKS', {})
YIELD nodeId, score
RETURN algo.asNode(nodeId).name AS page, score
ORDER BY score DESC
```

The following will run the algorithm and write back results:

```
CALL algo.eigenvector('Page', 'LINKS', {write: true, writeProperty:"eigenvector"})
YIELD nodes, iterations, loadMillis, computeMillis, writeMillis, dampingFactor, write, writeProperty
```

Name | Eigenvector Centrality |
---|---|
Home | 31.45819 |
About | 14.40379 |
Product | 14.40379 |
Links | 14.40379 |
Site A | 6.572370000000001 |
Site C | 6.572370000000001 |
Site D | 6.572370000000001 |
Site B | 6.572370000000001 |

As we might expect, the *Home* page has the highest Eigenvector Centrality because it has incoming links from all other pages.
We can also see that it’s not only the number of incoming links that is important, but also the importance of the pages behind
those links.
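Conceptually, these scores can be approximated by power iteration: every node starts with the same score, and on each pass a node's new score is the sum of the scores of the nodes linking to it. The following Python sketch mirrors the sample graph above; it is illustrative only and is not the library's implementation.

```python
# Illustrative power-iteration sketch of Eigenvector Centrality on the
# sample graph above; not the library's implementation.
edges = [
    ("Home", "About"), ("About", "Home"),
    ("Home", "Product"), ("Product", "Home"),
    ("Home", "Links"), ("Links", "Home"),
    ("Links", "Site A"), ("Site A", "Home"),
    ("Links", "Site B"), ("Site B", "Home"),
    ("Links", "Site C"), ("Site C", "Home"),
    ("Links", "Site D"), ("Site D", "Home"),
]
nodes = sorted({n for edge in edges for n in edge})
scores = {n: 1.0 for n in nodes}

for _ in range(25):
    # Each node's new score is the sum of the scores of nodes linking to it.
    new = {v: sum(scores[u] for u, w in edges if w == v) for v in nodes}
    # L2-normalize every iteration so the values do not overflow.
    norm = sum(s * s for s in new.values()) ** 0.5
    scores = {n: s / norm for n, s in new.items()}

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # Home ends up with the highest score
```

The relative ordering matches the results above: *Home* first, then *About*, *Product*, and *Links*, then the four sites. The absolute numbers differ because this sketch normalizes on every iteration.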

By default, the scores returned by the Eigenvector Centrality algorithm are not normalized.
We can specify a normalization using the `normalization` parameter.
The algorithm supports the following options:

- `max` - divide all scores by the maximum score
- `l1norm` - normalize scores so that they sum up to 1
- `l2norm` - divide each score by the square root of the squared sum of all scores
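As a quick sketch, the three options correspond to the following arithmetic. The raw values are taken from the unnormalized results above; this is an illustration of the formulas, not the library's code.

```python
# Sketch of the three normalization options applied to a list of raw scores;
# illustrative arithmetic only, not the library's implementation.
raw = [31.45819, 14.40379, 14.40379, 14.40379]

max_scores = [s / max(raw) for s in raw]       # max: highest score becomes 1.0
l1_scores = [s / sum(raw) for s in raw]        # l1norm: scores sum to 1
l2_denominator = sum(s * s for s in raw) ** 0.5
l2_scores = [s / l2_denominator for s in raw]  # l2norm: unit Euclidean length

print(max_scores[0])  # 1.0
```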

The following will run the algorithm and stream results using `max` normalization:

```
CALL algo.eigenvector.stream('Page', 'LINKS', {normalization: "max"})
YIELD nodeId, score
RETURN algo.asNode(nodeId).name AS page, score
ORDER BY score DESC
```

Name | Eigenvector Centrality |
---|---|
Home | 1.0 |
About | 0.45787090738532643 |
Product | 0.45787090738532643 |
Links | 0.45787090738532643 |
Site A | 0.2089239717860437 |
Site C | 0.2089239717860437 |
Site D | 0.2089239717860437 |
Site B | 0.2089239717860437 |

The default label and relationship-type projection has a limitation of 2 billion nodes and 2 billion relationships. Therefore, if our projected graph contains more than 2 billion nodes or relationships, we will need to use huge graph projection.

Set `graph:'huge'` in the config:

```
CALL algo.eigenvector('Page','LINKS', {graph:'huge'})
YIELD nodes, iterations, loadMillis, computeMillis, writeMillis, dampingFactor, writeProperty;
```

If label and relationship-type are not selective enough to describe your subgraph to run the algorithm on, you can use Cypher statements to load or project subsets of your graph. This can also be used to run algorithms on a virtual graph. You can learn more in the Section 2.2, “Cypher projection” section of the manual.

Set `graph:'cypher'` in the config:

```
CALL algo.eigenvector(
'MATCH (p:Page) RETURN id(p) as id',
'MATCH (p1:Page)-[:LINKS]->(p2:Page) RETURN id(p1) as source, id(p2) as target',
{graph:'cypher', iterations:5, write: true}
)
```

The following will run the algorithm and write back results:

```
CALL algo.eigenvector(label:String, relationship:String,
{write: true, writeProperty:'eigenvector', concurrency:4})
YIELD nodes, loadMillis, computeMillis, writeMillis, write, writeProperty
```

Name | Type | Default | Optional | Description |
---|---|---|---|---|
label | string | null | yes | The label to load from the graph. If null, load all nodes. |
relationship | string | null | yes | The relationship-type to load from the graph. If null, load all relationships. |
concurrency | int | available CPUs | yes | The number of concurrent threads. |
weightProperty | string | null | yes | The property name that contains weight. If null, treats the graph as unweighted. Must be numeric. |
defaultValue | float | 0.0 | yes | The default value of the weight in case it is missing or invalid. |
write | boolean | true | yes | Specifies if the result should be written back as a node property. |
graph | string | 'heavy' | yes | Use 'heavy' when describing the subset of the graph with the label and relationship-type parameters. Use 'cypher' when describing the subset with Cypher node and relationship statements. |
normalization | string | null | yes | The type of normalization to apply to the results. Valid values are `max`, `l1norm` and `l2norm`. |

Name | Type | Description |
---|---|---|
nodes | int | The number of nodes considered. |
writeProperty | string | The property name written back to. |
write | boolean | Specifies if the result was written back as a node property. |
loadMillis | int | Milliseconds for loading data. |
computeMillis | int | Milliseconds for running the algorithm. |
writeMillis | int | Milliseconds for writing result data back. |

The following will run the algorithm and stream results:

```
CALL algo.eigenvector.stream(label:String, relationship:String,
{concurrency:4})
YIELD nodeId, score
```

Name | Type | Default | Optional | Description |
---|---|---|---|---|
label | string | null | yes | The label to load from the graph. If null, load all nodes. |
relationship | string | null | yes | The relationship-type to load from the graph. If null, load all relationships. |
concurrency | int | available CPUs | yes | The number of concurrent threads. |
weightProperty | string | null | yes | The property name that contains weight. If null, treats the graph as unweighted. Must be numeric. |
defaultValue | float | 0.0 | yes | The default value of the weight in case it is missing or invalid. |
graph | string | 'heavy' | yes | Use 'heavy' when describing the subset of the graph with the label and relationship-type parameters. Use 'cypher' when describing the subset with Cypher node and relationship statements. |
normalization | string | null | yes | The type of normalization to apply to the results. Valid values are `max`, `l1norm` and `l2norm`. |

Name | Type | Description |
---|---|---|
nodeId | long | Node ID |
score | float | Eigenvector Centrality weight |

The Eigenvector Centrality algorithm supports the following graph types:

- ✓ directed, unweighted
- ✗ directed, weighted
- ✓ undirected, unweighted
- ✗ undirected, weighted