Max Flow

Given a source node, a target node, and relationships with capacity constraints, the max-flow algorithm assigns a flow to each relationship so as to maximize the total transport from source to target.

The flow is a scalar property on each relationship and must satisfy:

  • Flow into a node equals flow out of it, at every node other than the source and target (conservation)

  • Flow along a relationship never exceeds its capacity
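These two conditions can be checked mechanically for any candidate flow assignment. The following is a minimal sketch, not part of this product's API; the graph and flow values are purely illustrative:

```python
# Check the two max-flow constraints for a candidate flow assignment.
# Graph and numbers are illustrative only.

# capacity[(u, v)] = capacity of the relationship u -> v
capacity = {("s", "a"): 4, ("a", "t"): 4, ("s", "t"): 2}
# flow[(u, v)] = flow assigned to the relationship u -> v
flow = {("s", "a"): 3, ("a", "t"): 3, ("s", "t"): 2}

def is_feasible(capacity, flow, source, target):
    # Capacity constraint: flow never exceeds capacity and is non-negative.
    if any(flow[e] > capacity[e] or flow[e] < 0 for e in flow):
        return False
    # Conservation: inflow equals outflow at every node except source/target.
    nodes = {n for e in capacity for n in e}
    for n in nodes - {source, target}:
        inflow = sum(f for (u, v), f in flow.items() if v == n)
        outflow = sum(f for (u, v), f in flow.items() if u == n)
        if inflow != outflow:
            return False
    return True

print(is_feasible(capacity, flow, "s", "t"))  # True: both constraints hold
```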

Syntax

This section covers the syntax used to execute the Max Flow algorithm.

Run MaxFlow.
CALL Neo4j_Graph_Analytics.graph.maxflow(
  'CPU_X64_XS',                    (1)
  {
    ['defaultTablePrefix': '...',] (2)
    'project': {...},              (3)
    'compute': {...},              (4)
    'write':   {...}               (5)
  }
);
1 Compute pool selector.
2 Optional prefix for table references.
3 Project config.
4 Compute config.
5 Write config.
Table 1. Parameters
| Name                | Type   | Default | Optional | Description |
|---------------------|--------|---------|----------|-------------|
| computePoolSelector | String | n/a     | no       | The selector for the compute pool on which to run the Max Flow job. |
| configuration       | Map    | {}      | no       | Configuration for graph project, algorithm compute and result write back. |

The configuration map consists of the following three entries.

For more details on the project configuration below, refer to the Project documentation.
Table 2. Project configuration
| Name               | Description |
|--------------------|-------------|
| nodeTables         | List of node tables. |
| relationshipTables | Map of relationship types to relationship tables. |

Table 3. Compute configuration
| Name                   | Type                                          | Default             | Optional | Description |
|------------------------|-----------------------------------------------|---------------------|----------|-------------|
| sourceNodes            | List of String or Integer, String, or Integer | n/a                 | no       | Source nodes, given as nodes or node ids, from which flow enters the network. |
| sourceNodesTable       | String                                        | n/a                 | no       | The name of the table containing the source nodes. |
| targetNodes            | List of String or Integer, String, or Integer | n/a                 | no       | Target nodes, given as nodes or node ids, at which flow leaves the network. |
| targetNodesTable       | String                                        | n/a                 | no       | The name of the table containing the target nodes. |
| nodeCapacityProperty   | String                                        | n/a                 | yes      | If set, each node with the given property is limited in the total flow it can pass through, based on the property value. Leave undefined for nodes without restrictions. |
| capacityProperty       | String                                        | n/a                 | no       | Name of the relationship property to use as capacity. |
| resultRelationshipType | String                                        | 'FLOW_RELATIONSHIP' | yes      | The relationship type used for the relationships written back to the Snowflake database. |
| resultProperty         | String                                        | 'flow'              | yes      | The relationship property that will be written back to the Snowflake database. |

For more details on the write configuration below, refer to the Write documentation.
Table 4. Write configuration
| Name                 | Type   | Default             | Optional | Description |
|----------------------|--------|---------------------|----------|-------------|
| sourceLabel          | String | n/a                 | no       | Node label in the in-memory graph for start nodes of relationships to be written back. |
| targetLabel          | String | n/a                 | no       | Node label in the in-memory graph for end nodes of relationships to be written back. |
| outputTable          | String | n/a                 | no       | Table in the Snowflake database to which relationships are written. |
| relationshipType     | String | 'FLOW_RELATIONSHIP' | yes      | The relationship type that will be written back to the Snowflake database. |
| relationshipProperty | String | 'flow'              | yes      | The relationship property that will be written back to the Snowflake database. |

Example

In this section we will show examples of running the Max Flow algorithm on a concrete graph. The intention is to illustrate what the results look like and to provide a guide for how to use the algorithm in a real setting. We will do this on a small supply graph of a handful of nodes, connected in a particular pattern. The example graph looks like this:

Visualization of the example graph
The following SQL statements will create the example graph tables in the Snowflake database:
CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.LOCATIONS (NODEID VARCHAR, STORAGE FLOAT);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.LOCATIONS VALUES
('A', 3),
('B', 5),
('C', NULL),
('D', 50),
('E', 10),
('F', NULL);

CREATE OR REPLACE TABLE EXAMPLE_DB.DATA_SCHEMA.ROUTES (SOURCENODEID VARCHAR, TARGETNODEID VARCHAR, CAPACITY FLOAT);
INSERT INTO EXAMPLE_DB.DATA_SCHEMA.ROUTES VALUES
  ('A', 'F',  10),
  ('A', 'B',  3),
  ('A', 'E',  7),
  ('B', 'C',  1),
  ('C', 'D',  4),
  ('C', 'E',  6),
  ('F', 'D',  3);

The graph stores a set of locations and routes connecting them.
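As an independent cross-check on what the job should produce, the maximum flow on this small graph can be computed with a textbook Edmonds-Karp implementation in plain Python. This sketch is separate from the Snowflake procedure and only mirrors the ROUTES data above:

```python
from collections import deque

# The ROUTES table from above as (source, target, capacity) triples.
routes = [("A", "F", 10), ("A", "B", 3), ("A", "E", 7),
          ("B", "C", 1), ("C", "D", 4), ("C", "E", 6), ("F", "D", 3)]

def max_flow(edges, source, target):
    # Residual capacities, including zero-capacity reverse edges.
    residual, adj = {}, {}
    for u, v, c in edges:
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent, queue = {source: None}, deque([source])
        while queue and target not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if target not in parent:
            return total
        # Find the bottleneck along the path and push that much flow.
        path, v = [], target
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        total += bottleneck

print(max_flow(routes, "A", "D"))  # 4 -- matches the totalFlow of 4.0 reported below
```

The only routes that reach D are A→F→D (capacity 3 at the F→D leg) and A→B→C→D (capacity 1 at the B→C leg), so the maximum flow is 4.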

Run job

To run the query, a setup of grants for the application, your consumer role, and your environment is required. Please see the Getting started page for more on this.

We also assume that the application name is the default Neo4j_Graph_Analytics. If you chose a different app name during installation, please replace it with that.

The following will run a Max Flow job:
CALL Neo4j_Graph_Analytics.graph.maxflow('CPU_X64_XS', {
    'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
    'project': {
        'nodeTables': [ 'LOCATIONS' ],
        'relationshipTables': {
            'ROUTES': {
                'sourceTable': 'LOCATIONS',
                'targetTable': 'LOCATIONS'
            }
        }
    },
    'compute': {
        'sourceNodesTable': 'LOCATIONS', 'sourceNodes': 'A',
        'targetNodesTable': 'LOCATIONS', 'targetNodes': 'D'
    },
    'write': [{
        'sourceLabel': 'LOCATIONS',
        'targetLabel': 'LOCATIONS',
        'outputTable': 'FLOWS'
    }]
});
Table 5. Results
JOB_ID JOB_STATUS JOB_START JOB_END JOB_RESULT

job_cec5b6b71a2d4d8dad94f4a553422d69

SUCCESS

2026-01-15 12:58:36

2026-01-15 12:58:44

 {
    "maxflow_1": {
        "computeMillis": 75,
        "configuration": {
            "concurrency": 6,
            "resultRelationshipType": "FLOW_RELATIONSHIP",
            "resultProperty": "flow",
            "nodeLabels": [
                "*"
            ],
            "relationshipTypes": [
                "*"
            ],
            "capacityProperty": "CAPACITY",
            "sourceNode": "A",
            "sourceNodesTable": "EXAMPLE_DB.DATA_SCHEMA.LOCATIONS",
            "targetNodes": "D",
            "targetNodesTable": "EXAMPLE_DB.DATA_SCHEMA.LOCATIONS",
            "totalFlow": 4.0
        }
    },
    "project_1": {
        "graphName": "snowgraph",
        "nodeCount": 6,
        "nodeLabels": ...,
        "nodeMillis": 270,
        "relationshipCount": 7,
        "relationshipMillis": 153,
        "relationshipTypes": ...,
        "totalMillis": 423
    },
    "write_relationship_type_1": {
        "outputTable": "EXAMPLE_DB.DATA_SCHEMA.FLOWS",
        "relationshipProperty": "flow",
        "relationshipType": "FLOW_RELATIONSHIP",
        "rowsWritten": 5,
        "writeMillis": 1303
    }
}

The returned result contains information about the job execution and result distribution. The flows written back can be queried from the output table like so:

SELECT * FROM EXAMPLE_DB.DATA_SCHEMA.FLOWS;
Table 6. Results
| SOURCENODEID | TARGETNODEID | FLOW |
|--------------|--------------|------|
| A            | B            | 1.0  |
| A            | F            | 3.0  |
| B            | C            | 1.0  |
| C            | D            | 1.0  |
| F            | D            | 3.0  |
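These rows can be sanity-checked against the two constraints from the start of this page. A small Python sketch over the table contents, again independent of the Snowflake procedure:

```python
# The FLOWS table rows from above and the original ROUTES capacities.
flows = {("A", "B"): 1.0, ("A", "F"): 3.0, ("B", "C"): 1.0,
         ("C", "D"): 1.0, ("F", "D"): 3.0}
capacities = {("A", "F"): 10, ("A", "B"): 3, ("A", "E"): 7,
              ("B", "C"): 1, ("C", "D"): 4, ("C", "E"): 6, ("F", "D"): 3}

# Capacity constraint: every flow value stays within its relationship capacity.
assert all(0 <= f <= capacities[e] for e, f in flows.items())

# Conservation: inflow equals outflow at every intermediate node.
for n in {"B", "C", "E", "F"}:
    inflow = sum(f for (u, v), f in flows.items() if v == n)
    outflow = sum(f for (u, v), f in flows.items() if u == n)
    assert inflow == outflow, n

# Total flow out of the source equals total flow into the target.
out_of_source = sum(f for (u, v), f in flows.items() if u == "A")
into_target = sum(f for (u, v), f in flows.items() if v == "D")
print(out_of_source, into_target)  # 4.0 4.0
```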

Using node capacity constraints

If there is a restriction on how much flow specific nodes can output or receive, this can be modeled using the nodeCapacityProperty parameter. For example, source facilities might have a cap on the amount of products they can produce, and similarly target facilities might have constraints on the amount of products they can store. In the example below, we pass the STORAGE node property as the value of the nodeCapacityProperty parameter to model these additional requirements.
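Node capacities can be understood via the classic node-splitting reduction: each node v becomes a pair v_in → v_out joined by an internal edge whose capacity is the node's capacity, after which an ordinary max-flow solver applies unchanged. The sketch below applies this reduction to the example graph; it is an illustration of the standard technique, not the procedure's internal implementation, and it assumes the source node's own capacity also caps its output (consistent with the job result below):

```python
from collections import deque

# ROUTES and the STORAGE column of LOCATIONS from the example tables above.
routes = [("A", "F", 10), ("A", "B", 3), ("A", "E", 7),
          ("B", "C", 1), ("C", "D", 4), ("C", "E", 6), ("F", "D", 3)]
storage = {"A": 3, "B": 5, "C": None, "D": 50, "E": 10, "F": None}

def split(edges, node_cap):
    # Replace each node v by v_in -> v_out; the internal edge carries the
    # node capacity (unbounded when the node has no restriction).
    out = [(v + "_in", v + "_out", float("inf") if c is None else c)
           for v, c in node_cap.items()]
    out += [(u + "_out", v + "_in", c) for u, v, c in edges]
    return out

def max_flow(edges, source, target):
    # Textbook Edmonds-Karp over a residual-capacity map.
    residual, adj = {}, {}
    for u, v, c in edges:
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    while True:
        parent, queue = {source: None}, deque([source])
        while queue and target not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if target not in parent:
            return total
        path, v = [], target
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        total += bottleneck

print(max_flow(split(routes, storage), "A_in", "D_out"))  # 3
```

With A's storage capped at 3, every unit must pass through the internal A_in → A_out edge, so the total drops from 4 to 3, matching the totalFlow of 3.0 in the job result below.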

The following will run a Max Flow job using STORAGE as a node capacity constraint:
CALL Neo4j_Graph_Analytics.graph.maxflow('CPU_X64_XS', {
    'defaultTablePrefix': 'EXAMPLE_DB.DATA_SCHEMA',
    'project': {
        'nodeTables': [ 'LOCATIONS' ],
        'relationshipTables': {
            'ROUTES': {
                'sourceTable': 'LOCATIONS',
                'targetTable': 'LOCATIONS'
            }
        }
    },
    'compute': {
        'sourceNodesTable': 'LOCATIONS', 'sourceNodes': 'A',
        'targetNodesTable': 'LOCATIONS', 'targetNodes': 'D',
        'nodeCapacityProperty': 'STORAGE'
    },
    'write': [{
        'sourceLabel': 'LOCATIONS',
        'targetLabel': 'LOCATIONS',
        'outputTable': 'RESTRICTED_FLOWS'
    }]
});
Table 7. Results
JOB_ID JOB_STATUS JOB_START JOB_END JOB_RESULT

job_fyi5b6b81a2d4d8dad94f4a553433d69

SUCCESS

2026-01-15 13:04:28

2026-01-15 13:04:36

{
    "maxflow_1": {
        "computeMillis": 68,
        "configuration": {
            "concurrency": 6,
            "resultRelationshipType": "FLOW_RELATIONSHIP",
            "resultProperty": "flow",
            "nodeLabels": [
                "*"
            ],
            "relationshipTypes": [
                "*"
            ],
            "capacityProperty": "CAPACITY",
            "nodeCapacityProperty": "STORAGE",
            "sourceNode": "A",
            "sourceNodesTable": "EXAMPLE_DB.DATA_SCHEMA.LOCATIONS",
            "targetNodes": "D",
            "targetNodesTable": "EXAMPLE_DB.DATA_SCHEMA.LOCATIONS",
            "totalFlow": 3.0
        }
    },
    "project_1": {
        "graphName": "snowgraph",
        "nodeCount": 6,
        "nodeLabels": ...,
        "nodeMillis": 274,
        "relationshipCount": 7,
        "relationshipMillis": 152,
        "relationshipTypes": ...,
        "totalMillis": 449
    },
    "write_relationship_type_1": {
        "outputTable": "EXAMPLE_DB.DATA_SCHEMA.RESTRICTED_FLOWS",
        "relationshipProperty": "flow",
        "relationshipType": "FLOW_RELATIONSHIP",
        "rowsWritten": 5,
        "writeMillis": 1319
    }
}