Harmonic Centrality
Glossary

- Directed: Directed trait. The algorithm is well-defined on a directed graph.
- Directed: Directed trait. The algorithm ignores the direction of the graph.
- Directed: Directed trait. The algorithm does not run on a directed graph.
- Undirected: Undirected trait. The algorithm is well-defined on an undirected graph.
- Undirected: Undirected trait. The algorithm ignores the undirectedness of the graph.
- Heterogeneous nodes: Heterogeneous nodes fully supported. The algorithm has the ability to distinguish between nodes of different types.
- Heterogeneous nodes: Heterogeneous nodes allowed. The algorithm treats all selected nodes similarly regardless of their label.
- Heterogeneous relationships: Heterogeneous relationships fully supported. The algorithm has the ability to distinguish between relationships of different types.
- Heterogeneous relationships: Heterogeneous relationships allowed. The algorithm treats all selected relationships similarly regardless of their type.
- Weighted relationships: Weighted trait. The algorithm supports a relationship property to be used as weight, specified via the relationshipWeightProperty configuration parameter.
- Weighted relationships: Weighted trait. The algorithm treats each relationship as equally important, discarding the value of any relationship weight.
Harmonic centrality (also known as valued centrality) is a variant of closeness centrality that was invented to solve the problem the original formula had when dealing with unconnected graphs. As with many of the centrality algorithms, it originates from the field of social network analysis.
History and explanation
Harmonic centrality was proposed by Marchiori and Latora in Harmony in the Small World while trying to come up with a sensible notion of "average shortest path".
They suggested a different way of calculating the average distance to that used in the Closeness Centrality algorithm. Rather than summing the distances of a node to all other nodes, the harmonic centrality algorithm sums the inverse of those distances. This enables it to deal with infinite values.
The raw harmonic centrality for a node u is calculated using the following formula:

raw harmonic centrality(u) = sum of 1 / d(u, v) over all nodes v ≠ u

That is, for every node v (excluding u) we compute its minimum distance d(u, v) from u and sum up the inverse of those distances. If there is no path from u to v, 1 / d(u, v) is treated as 0.
Similar to Closeness Centrality, we can also calculate a normalized harmonic centrality value for a node with the following formula:

normalized harmonic centrality(u) = (sum of 1 / d(u, v) over all nodes v ≠ u) / (n - 1)

Here, we divide the raw value by the number of nodes minus one to normalize the returned value. In this formula, ∞ values are handled cleanly. The Neo4j GDS Library calculates normalized harmonic centrality values.
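As a quick worked illustration (a made-up four-node graph, not the example graph used later on this page): if a node u is at distance 1 from one node, at distance 2 from another, and has no path to the third, its normalized harmonic centrality is

normalized harmonic centrality(u) = (1/1 + 1/2 + 0) / (4 - 1) = 1.5 / 3 = 0.5

The unreachable node simply contributes 0 to the sum, which is exactly the failure mode of the original closeness formula that harmonic centrality avoids.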
Use-cases - when to use the Harmonic Centrality algorithm
Harmonic centrality was proposed as an alternative to closeness centrality, and therefore has similar use cases.
For example, we might use it if we’re trying to identify where in the city to place a new public service so that it’s easily accessible for residents. If we’re trying to spread a message on social media we could use the algorithm to find the key influencers that can help us achieve our goal.
Syntax
This section covers the syntax used to execute the Harmonic Centrality algorithm in each of its execution modes. We are describing the named graph variant of the syntax. To learn more about general syntax variants, see Syntax overview.
CALL gds.closeness.harmonic.stream(
graphName: String,
configuration: Map
)
YIELD
nodeId: Integer,
score: Float
Name | Type | Default | Optional | Description
---|---|---|---|---
graphName | String |  | no | The name of a graph stored in the catalog.
configuration | Map |  | yes | Configuration for algorithm-specifics and/or graph filtering.
Name | Type | Default | Optional | Description
---|---|---|---|---
nodeLabels | List of String | ['*'] | yes | Filter the named graph using the given node labels. Nodes with any of the given labels will be included.
relationshipTypes | List of String | ['*'] | yes | Filter the named graph using the given relationship types. Relationships with any of the given types will be included.
concurrency | Integer | 4 [1] | yes | The number of concurrent threads used for running the algorithm.
jobId | String | Generated internally | yes | An ID that can be provided to more easily track the algorithm’s progress.
logProgress | Boolean | true | yes | If disabled the progress percentage will not be logged.

1. In a GDS Session the default is the number of available processors.
Name | Type | Description
---|---|---
nodeId | Integer | Node ID.
score | Float | Harmonic centrality score.
CALL gds.closeness.harmonic.stats(
graphName: String,
configuration: Map
)
YIELD
centralityDistribution: Map,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
configuration: Map
Name | Type | Default | Optional | Description
---|---|---|---|---
graphName | String |  | no | The name of a graph stored in the catalog.
configuration | Map |  | yes | Configuration for algorithm-specifics and/or graph filtering.
Name | Type | Default | Optional | Description
---|---|---|---|---
concurrency | Integer | 4 [2] | yes | The number of concurrent threads used for running the algorithm.
jobId | String | Generated internally | yes | An ID that can be provided to more easily track the algorithm’s progress.
logProgress | Boolean | true | yes | If disabled the progress percentage will not be logged.

2. In a GDS Session the default is the number of available processors.
Name | Type | Description
---|---|---
centralityDistribution | Map | Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values.
preProcessingMillis | Integer | Milliseconds for preprocessing the graph.
computeMillis | Integer | Milliseconds for running the algorithm.
postProcessingMillis | Integer | Milliseconds for computing the statistics.
configuration | Map | The configuration used for running the algorithm.
CALL gds.closeness.harmonic.mutate(
graphName: String,
configuration: Map
)
YIELD
centralityDistribution: Map,
preProcessingMillis: Integer,
computeMillis: Integer,
mutateMillis: Integer,
nodePropertiesWritten: Integer,
configuration: Map
Name | Type | Default | Optional | Description
---|---|---|---|---
graphName | String |  | no | The name of a graph stored in the catalog.
configuration | Map |  | yes | Configuration for algorithm-specifics and/or graph filtering.
Name | Type | Default | Optional | Description
---|---|---|---|---
mutateProperty | String | n/a | no | The node property in the GDS graph to which the score is written.
nodeLabels | List of String | ['*'] | yes | Filter the named graph using the given node labels.
relationshipTypes | List of String | ['*'] | yes | Filter the named graph using the given relationship types.
concurrency | Integer | 4 [3] | yes | The number of concurrent threads used for running the algorithm.
jobId | String | Generated internally | yes | An ID that can be provided to more easily track the algorithm’s progress.
logProgress | Boolean | true | yes | If disabled the progress percentage will not be logged.

3. In a GDS Session the default is the number of available processors.
Name | Type | Description
---|---|---
centralityDistribution | Map | Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values.
preProcessingMillis | Integer | Milliseconds for preprocessing the graph.
computeMillis | Integer | Milliseconds for running the algorithm.
mutateMillis | Integer | Milliseconds for adding properties to the projected graph.
nodePropertiesWritten | Integer | Number of properties written to the projected graph.
configuration | Map | The configuration used for running the algorithm.
CALL gds.closeness.harmonic.write(
graphName: String,
configuration: Map
)
YIELD
centralityDistribution: Map,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
writeMillis: Integer,
nodePropertiesWritten: Integer,
configuration: Map
Name | Type | Default | Optional | Description
---|---|---|---|---
graphName | String |  | no | The name of a graph stored in the catalog.
configuration | Map |  | yes | Configuration for algorithm-specifics and/or graph filtering.
Name | Type | Default | Optional | Description
---|---|---|---|---
writeProperty | String | n/a | no | The node property in the Neo4j database to which the score is written.
nodeLabels | List of String | ['*'] | yes | Filter the named graph using the given node labels. Nodes with any of the given labels will be included.
relationshipTypes | List of String | ['*'] | yes | Filter the named graph using the given relationship types. Relationships with any of the given types will be included.
concurrency | Integer | 4 [4] | yes | The number of concurrent threads used for running the algorithm.
jobId | String | Generated internally | yes | An ID that can be provided to more easily track the algorithm’s progress.
logProgress | Boolean | true | yes | If disabled the progress percentage will not be logged.
writeConcurrency | Integer | value of 'concurrency' | yes | The number of concurrent threads used for writing the result to Neo4j.

4. In a GDS Session the default is the number of available processors.
Name | Type | Description
---|---|---
centralityDistribution | Map | Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values.
preProcessingMillis | Integer | Milliseconds for preprocessing the graph.
computeMillis | Integer | Milliseconds for running the algorithm.
postProcessingMillis | Integer | Milliseconds for computing the statistics.
writeMillis | Integer | Milliseconds for writing result data back.
nodePropertiesWritten | Integer | Number of properties written to Neo4j.
configuration | Map | The configuration used for running the algorithm.
Examples
All the examples below should be run in an empty database. The examples use Cypher projections as the norm. Native projections will be deprecated in a future release.
In this section we will show examples of running the Harmonic Centrality algorithm on a concrete graph. The intention is to illustrate what the results look like and to provide a guide on how to make use of the algorithm in a real setting. We will do this on a small user network graph of a handful of nodes connected in a particular pattern. The example graph looks like this:
CREATE (a:User {name: "Alice"}),
(b:User {name: "Bob"}),
(c:User {name: "Charles"}),
(d:User {name: "Doug"}),
(e:User {name: "Ethan"}),
(a)-[:LINK]->(b),
(b)-[:LINK]->(c),
(d)-[:LINK]->(e)
MATCH (source:User)-[r:LINK]->(target:User)
RETURN gds.graph.project(
'graph',
source,
target,
{},
{ undirectedRelationshipTypes: ['*'] }
)
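Before running the algorithm, you can optionally verify the projection by listing it from the graph catalog. This step is not part of the original walkthrough; it is a minimal sketch using the standard gds.graph.list catalog procedure with the graph name chosen above:

CALL gds.graph.list('graph')
YIELD graphName, nodeCount, relationshipCount

Because the relationships were projected as undirected, each LINK relationship is counted in both directions, which is why the memory estimation below reports a relationshipCount of 6.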
In the following examples we will demonstrate using the Harmonic Centrality algorithm on this graph.
Memory Estimation
First off, we will estimate the cost of running the algorithm using the estimate procedure. This can be done with any execution mode. We will use the stream mode in this example. Estimating the algorithm is useful to understand the memory impact that running the algorithm on your graph will have. When you later actually run the algorithm in one of the execution modes the system will perform an estimation. If the estimation shows that there is a very high probability of the execution going over its memory limitations, the execution is prohibited. To read more about this, see Automatic estimation and execution blocking. For more details on estimate in general, see Memory Estimation.
CALL gds.closeness.harmonic.stream.estimate('graph', {})
YIELD nodeCount, relationshipCount, bytesMin, bytesMax, requiredMemory
nodeCount | relationshipCount | bytesMin | bytesMax | requiredMemory
---|---|---|---|---
5 | 6 | 1368 | 1368 | "1368 Bytes"
Stream
CALL gds.closeness.harmonic.stream('graph', {})
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS user, score
ORDER BY score DESC
user | score
---|---
"Bob" | 0.5
"Alice" | 0.375
"Charles" | 0.375
"Doug" | 0.25
"Ethan" | 0.25
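To see where these numbers come from, consider Bob: in the undirected projection he is at distance 1 from both Alice and Charles and has no path to Doug or Ethan, so with n = 5 his normalized score is

(1/1 + 1/1 + 0 + 0) / (5 - 1) = 0.5

Alice is one hop from Bob and two hops from Charles, giving (1/1 + 1/2 + 0 + 0) / 4 = 0.375, while Doug and Ethan each reach only one other node at distance 1, giving 1 / 4 = 0.25.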
Stats
CALL gds.closeness.harmonic.stats('graph', {})
YIELD centralityDistribution
| centralityDistribution |
| --- |
| {max=0.5000038147, mean=0.3500003815, min=0.25, p50=0.375, p75=0.375, p90=0.5000019073, p95=0.5000019073, p99=0.5000019073, p999=0.5000019073} |
Mutate
CALL gds.closeness.harmonic.mutate('graph', {mutateProperty: 'harmonicScore'})
YIELD nodePropertiesWritten, centralityDistribution
nodePropertiesWritten | centralityDistribution
---|---
5 | {max=0.5000038147, mean=0.3500003815, min=0.25, p50=0.375, p75=0.375, p90=0.5000019073, p95=0.5000019073, p99=0.5000019073, p999=0.5000019073}
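The mutated score exists only on the in-memory graph, not in the Neo4j database. As a minimal sketch of how you could inspect it afterwards, assuming a GDS version that provides the gds.graph.nodeProperty.stream catalog procedure, you could stream the harmonicScore property (the name used in the mutate call above) back from the graph catalog:

CALL gds.graph.nodeProperty.stream('graph', 'harmonicScore')
YIELD nodeId, propertyValue
RETURN gds.util.asNode(nodeId).name AS user, propertyValue AS harmonicScore
ORDER BY harmonicScore DESC

To persist the scores to the database instead, use the write mode documented in the Syntax section above.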