Import

This section describes how to perform batch imports of data into Neo4j using the command line tool neo4j-admin import.

There are two ways to import data from CSV files into Neo4j: via neo4j-admin import or LOAD CSV.

If you want to do batch imports of large amounts of data into a Neo4j database from CSV files, use the import command of neo4j-admin. This command can only be used to load data into a previously unused database, and it can only be performed once per database. By default, the data is imported into a database named neo4j, but a different name can be specified with the --database option.

If you wish to import small to medium-sized CSV files into an existing database, use LOAD CSV. LOAD CSV can be run as many times as needed and does not require an empty database.

However, using the import command of neo4j-admin is generally faster since it is run against a stopped and empty database. This section describes the neo4j-admin import option. For information on LOAD CSV, see the Cypher Manual → LOAD CSV.

These are some things you will need to keep in mind when creating your input files:

  • Fields are comma-separated by default but a different delimiter can be specified.

  • All files must use the same delimiter.

  • Multiple data sources can be used for both nodes and relationships.

  • A data source can optionally be provided using multiple files.

  • A separate file with a header that provides information on the data fields must be the first specified file of each data source.

  • Fields without corresponding information in the header will not be read.

  • UTF-8 encoding is used.

  • By default, the importer will trim extra whitespace at the beginning and end of strings. Quote your data to preserve leading and trailing whitespace.

Indexes and constraints

Indexes and constraints are not created during the import. Instead, you will need to add these afterwards (see Cypher Manual → Indexes).
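For example, after importing the movie data used in the examples below, you could add an index on movie titles and a uniqueness constraint on person IDs. This is only a sketch, using labels and properties from the examples later in this section; the exact Cypher syntax depends on your Neo4j version:

CREATE INDEX FOR (m:Movie) ON (m.title);
CREATE CONSTRAINT ON (a:Actor) ASSERT a.personId IS UNIQUE;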

If you wish to see in-depth examples of using the command neo4j-admin import, refer to the Tutorials → Neo4j Admin import.

1. Syntax

The syntax for importing a set of CSV files is:

neo4j-admin import [--verbose]
                   [--cache-on-heap[=<true/false>]]
                   [--high-io[=<true/false>]]
                   [--ignore-empty-strings[=<true/false>]]
                   [--ignore-extra-columns[=<true/false>]]
                   [--legacy-style-quoting[=<true/false>]]
                   [--multiline-fields[=<true/false>]]
                   [--normalize-types[=<true/false>]]
                   [--skip-bad-entries-logging[=<true/false>]]
                   [--skip-bad-relationships[=<true/false>]]
                   [--skip-duplicate-nodes[=<true/false>]]
                   [--trim-strings[=<true/false>]]
                   [--additional-config=<path>]
                   [--array-delimiter=<char>]
                   [--bad-tolerance=<num>]
                   [--database=<database>]
                   [--delimiter=<char>]
                   [--id-type=<STRING|INTEGER|ACTUAL>]
                   [--input-encoding=<character-set>]
                   [--max-memory=<size>]
                   [--processors=<num>]
                   [--quote=<char>]
                   [--read-buffer-size=<size>]
                   [--report-file=<path>]
                   --nodes=[<label>[:<label>]...=]<files>...
                   [--nodes=[<label>[:<label>]...=]<files>...]...
                   [--relationships=[<type>=]<files>...]...
Example 1. Import data from CSV files

Assume that we have formatted our data according to the CSV header format, so that we have it in six different files:

  1. movies_header.csv

  2. movies.csv

  3. actors_header.csv

  4. actors.csv

  5. roles_header.csv

  6. roles.csv

The following command will import the three datasets:

neo4j_home$ bin/neo4j-admin import --nodes import/movies_header.csv,import/movies.csv \
--nodes import/actors_header.csv,import/actors.csv \
--relationships import/roles_header.csv,import/roles.csv
Example 2. Import data from CSV files using regular expression

Assume that we want to include a header and then multiple files that match a pattern, e.g. files containing numbers. In this case a regular expression can be used. It is guaranteed that groups of digits will be sorted in numerical order, as opposed to lexicographic order.

For example:

neo4j_home$ bin/neo4j-admin import --nodes import/node_header.csv,import/node_data_\d+\.csv
Example 3. Import data from CSV files using a more complex regular expression

If a regular expression pattern contains a comma, which is also the delimiter between files in a group, the pattern can be quoted to preserve it.

For example:

neo4j_home$ bin/neo4j-admin import --nodes import/node_header.csv,'import/node_data_\d{1,5}.csv'

If you import into a database that has not explicitly been created prior to the import, it must be created afterwards in order to be used.
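For example, if the data was imported with --database=moviedb, a user with the appropriate privileges can create the database afterwards by running the following against the system database in Neo4j Enterprise Edition (a sketch; the database name moviedb is only an example):

CREATE DATABASE moviedb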

2. Options

Table 1. neo4j-admin import options

Option                          Description
--verbose                       Enable verbose output.
--cache-on-heap                 Allow allocating memory for the cache on heap (Advanced).
--high-io                       Assume the target storage subsystem supports parallel IO with high throughput.
--ignore-empty-strings          Treat empty string fields from the input as null.
--ignore-extra-columns          Ignore columns that are not specified in the header.
--legacy-style-quoting          Interpret backslash-escaped quotes as inner quotes.
--multiline-fields              Allow fields to span multiple lines.
--normalize-types               Normalize property types to Cypher types.
--skip-bad-entries-logging      Skip logging of bad entries detected during import.
--skip-bad-relationships        Skip relationships that refer to missing node IDs.
--skip-duplicate-nodes          Skip nodes that have the same ID/group.
--trim-strings                  Trim whitespace from strings.
--additional-config             Path to a configuration file with additional configuration options.
--array-delimiter               Delimiter for array values in CSV data.
--bad-tolerance                 Number of bad entries tolerated before the import fails.
--database                      Name of the database to import into.
--delimiter                     Delimiter between values in CSV data.
--id-type                       How node IDs are interpreted: STRING, INTEGER, or ACTUAL.
--input-encoding                Character set of the input data.
--max-memory                    Maximum memory used for data structures and caching.
--processors                    Maximum number of processors used by the importer (Advanced).
--quote                         Quotation character for values in CSV data.
--read-buffer-size              Size of each buffer for reading input data.
--report-file                   File in which to store the report of the import.
--nodes                         Node CSV header and data files.
--relationships                 Relationship CSV header and data files.

Some of the options below are marked as Advanced. These options should not be used for experimentation.

For more information, please contact Neo4j Professional Services.

--verbose

Enable verbose output.

--cache-on-heap[=<true/false>] Advanced

Determines whether or not to allow allocating memory for the cache on heap.

If false, then caches will still be allocated off-heap, but the additional free memory inside the JVM will not be allocated for the caches.

Use this to have better control over the heap memory.

Default: false

--high-io[=<true/false>]

Ignore environment-based heuristics, and specify whether the target storage subsystem can support parallel IO with high throughput.

Typically this is true for SSDs, large raid arrays and network-attached storage.

Default: false

--ignore-empty-strings[=<true/false>]

Determines whether or not empty string fields, such as "", from the input source are ignored (treated as null).

Default: false

--ignore-extra-columns[=<true/false>]

Determines whether or not columns that are not specified in the header should be ignored during the import.

Default: false

--legacy-style-quoting[=<true/false>]

Determines whether or not a backslash-escaped quote, e.g. \", is interpreted as an inner quote.

Default: false

--multiline-fields[=<true/false>]

Determines whether or not fields from the input source can span multiple lines, i.e. contain newline characters.

Setting --multiline-fields=true can severely degrade performance of the importer. Therefore, use it with care, especially with large imports.

Default: false

--normalize-types[=<true/false>]

Determines whether or not to normalize property types to Cypher types, e.g. int becomes long and float becomes double.

Default: true

--skip-bad-entries-logging[=<true/false>]

Determines whether or not to skip logging bad entries detected during import.

Default: false

--skip-bad-relationships[=<true/false>]

Determines whether or not to skip importing relationships that refer to missing node IDs, i.e. relationships whose start or end node ID/group refers to a node that was not specified by the node input data.

Skipped relationships will be logged, containing at most the number of entities specified by --bad-tolerance, unless otherwise specified by the --skip-bad-entries-logging option.

Default: false

--skip-duplicate-nodes[=<true/false>]

Determines whether or not to skip importing nodes that have the same ID/group.

In the event of multiple nodes within the same group having the same ID, the first encountered will be imported, whereas consecutive such nodes will be skipped.

Skipped nodes will be logged, containing at most the number of entities specified by --bad-tolerance, unless otherwise specified by the --skip-bad-entries-logging option.

Default: false

--trim-strings[=<true/false>]

Determines whether or not strings should be trimmed of whitespace.

Default: false

--additional-config=<config-file-path>

Path to a configuration file that contains additional configuration options.

--array-delimiter=<char>

Determines the array delimiter within a value in CSV data.

  • ASCII character — e.g. --array-delimiter=";".

  • \ID — unicode character with ID, e.g. --array-delimiter="\59".

  • U+XXXX — unicode character specified with 4 HEX characters, e.g. --array-delimiter="U+20AC".

  • \t — horizontal tabulation (HT), e.g. --array-delimiter="\t".

For horizontal tabulation (HT), use \t or the Unicode character ID \9.

A Unicode character ID can be used if it is prepended by \.

Default: ;

--bad-tolerance=<num>

Number of bad entries before the import is considered failed.

This tolerance threshold is about relationships referring to missing nodes. Format errors in input data are still treated as errors.

Default: 1000

--database=<name>

Name of the database to import into.

Default: neo4j

--delimiter=<char>

Determines the delimiter between values in CSV data.

  • ASCII character — e.g. --delimiter=",".

  • \ID — unicode character with ID, e.g. --delimiter="\44".

  • U+XXXX — unicode character specified with 4 HEX characters, e.g. --delimiter="U+20AC".

  • \t — horizontal tabulation (HT), e.g. --delimiter="\t".

For horizontal tabulation (HT), use \t or the Unicode character ID \9.

A Unicode character ID can be used if it is prepended by \.

Default: ,

--id-type=<STRING|INTEGER|ACTUAL>

Each node must provide a unique ID in order to be used for creating relationships during the import.

Possible values are:

  • STRING — arbitrary strings for identifying nodes.

  • INTEGER — arbitrary integer values for identifying nodes.

  • ACTUAL — actual node IDs. (Advanced)

Default: STRING
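For example, if all node files identify their nodes with integer values, as in the ID spaces example later in this section, the importer could be invoked as follows (a sketch; the file names are placeholders):

neo4j_home$ bin/neo4j-admin import --id-type=INTEGER --nodes import/persons_header.csv,import/persons.csv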

--input-encoding=<character-set>

Character set that input data is encoded in.

Default: UTF-8

--max-memory=<size>

Maximum memory that neo4j-admin can use for various data structures and caching to improve performance.

Values can be plain numbers, such as 10000000, or 20G for 20 gigabytes. The value can also be specified as a percentage of the available memory, for example 70%.

Default: 90%

--processors=<num> Advanced

Max number of processors used by the importer.

Defaults to the number of available processors reported by the JVM. A certain minimum number of threads is needed, so for that reason there is no lower bound for this value.

For optimal performance, this value shouldn’t be greater than the number of available processors.

--quote=<char>

Character to treat as quotation character for values in CSV data.

Quotes can be escaped as per RFC 4180 by doubling them, for example "" would be interpreted as a literal ".

You cannot escape using \.

Default: "

--read-buffer-size=<size>

Size of each buffer for reading input data.

It has to at least be large enough to hold the biggest single value in the input data. Value can be a plain number or byte units string, e.g. 128k, 1m.

Default: 4m

--report-file=<filename>

File in which to store the report of the csv-import.

Default: import.report

The location of the import log file can be controlled using the --report-file option. If you run large imports of CSV files that have low data quality, the import log file can grow very large. For example, CSV files that contain duplicate node IDs, or that attempt to create relationships between non-existent nodes, could be classed as having low data quality. In these cases, you may wish to direct the output to a location that can handle the large log file.

If you are running on a UNIX-like system and you are not interested in the output, you can get rid of it altogether by directing the report file to /dev/null.
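For example (a sketch reusing the movie data from Example 1):

neo4j_home$ bin/neo4j-admin import --nodes import/movies_header.csv,import/movies.csv --report-file=/dev/null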

If you need to debug the import, it might be useful to collect the stack trace. This is done by using the --verbose option.

--nodes=[<label>[:<label>]…​=]<files>…​

Node CSV header and data.

  • Multiple files will be logically seen as one big file from the perspective of the importer.

  • The first line must contain the header.

  • Multiple data sources like these can be specified in one import, where each data source has its own header.

  • Files can also be specified using regular expressions.

--relationships=[<type>=]<files>…​

Relationship CSV header and data.

  • Multiple files will be logically seen as one big file from the perspective of the importer.

  • The first line must contain the header.

  • Multiple data sources like these can be specified in one import, where each data source has its own header.

  • Files can also be specified using regular expressions.

@<arguments-file-path>

File containing all arguments, used as an alternative to supplying all arguments on the command line directly.

Each argument can be on a separate line, or multiple arguments per line and separated by space.

Arguments containing spaces must be quoted.
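For example, a hypothetical arguments file import/args.txt could contain:

--nodes=import/movies_header.csv,import/movies.csv
--nodes=import/actors_header.csv,import/actors.csv
--relationships=import/roles_header.csv,import/roles.csv

and be used like this (a sketch):

neo4j_home$ bin/neo4j-admin import @import/args.txt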

Heap size for the import

Set the maximum heap size to a value appropriate for the import. This is done by defining the HEAP_SIZE environment variable before starting the import. For example, 2G is an appropriate value for smaller imports.

For imports on the order of magnitude of 100 billion entities, 20G is an appropriate value.
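For example, on a UNIX-like system the heap size could be set for the duration of the import like this (a sketch; adjust the value to the size of your import):

neo4j_home$ export HEAP_SIZE=2G
neo4j_home$ bin/neo4j-admin import --nodes import/movies_header.csv,import/movies.csv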

Record format

If your import data will result in a graph that is larger than 34 billion nodes, 34 billion relationships, or 68 billion properties, you will need to configure the importer to use the high_limit record format. This is achieved by setting the parameters dbms.record_format=high_limit and dbms.allow_upgrade=true in a configuration file, and supplying that file to the importer with --additional-config. The format is printed in the debug.log file.

The high_limit format is available for Enterprise Edition only.
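For example, a minimal additional configuration file, here hypothetically named high-limit.conf, could contain:

dbms.record_format=high_limit
dbms.allow_upgrade=true

and be supplied to the importer like this (a sketch):

neo4j_home$ bin/neo4j-admin import --additional-config=high-limit.conf --nodes import/movies_header.csv,import/movies.csv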

3. CSV header format

The header file of each data source specifies how the data fields should be interpreted. You must use the same delimiter for the header file and for the data files.

The header contains information for each field, with the format <name>:<field_type>. The <name> is used for properties and node IDs. In all other cases, the <name> part of the field is ignored.

4. Node files

Files containing node data can have an ID field, a LABEL field as well as properties.

ID

Each node must have a unique ID if it is to be connected by any relationships created in the import. The IDs are used to find the correct nodes when creating relationships. Note that the ID has to be unique across all nodes in the import; even for nodes with different labels. The unique ID can be persisted in a property whose name is defined by the <name> part of the field definition <name>:ID. If no such property name is defined, the unique ID will be used for the purpose of the import but not be available for reference later. If no ID is specified, the node will be imported but it will not be able to be connected by any relationships during the import.

LABEL

Read one or more labels from this field. Like array values, multiple labels are separated by ;, or by the character specified with --array-delimiter.

Example 4. Define nodes files

We define the headers for movies in the movies_header.csv file. Movies have the properties movieId, year and title. We also specify a field for labels.

movieId:ID,title,year:int,:LABEL

We define three movies in the movies.csv file. They contain all the properties defined in the header file. All the movies are given the label Movie. Two of them are also given the label Sequel.

tt0133093,"The Matrix",1999,Movie
tt0234215,"The Matrix Reloaded",2003,Movie;Sequel
tt0242653,"The Matrix Revolutions",2003,Movie;Sequel

Similarly, we also define three actors in the actors_header.csv and actors.csv files. They all have the properties personId and name, and the label Actor.

personId:ID,name,:LABEL
keanu,"Keanu Reeves",Actor
laurence,"Laurence Fishburne",Actor
carrieanne,"Carrie-Anne Moss",Actor

5. Relationship files

Files containing relationship data have three mandatory fields and can also have properties. The mandatory fields are:

TYPE

The relationship type to use for this relationship.

START_ID

The ID of the start node for this relationship.

END_ID

The ID of the end node for this relationship.

The START_ID and END_ID refer to the unique node ID defined in one of the node data sources, as explained in the previous section. None of these fields take a name. If, for example, <name>:START_ID or <name>:END_ID is defined, the <name> part will be ignored.

Example 5. Define relationships files

In this example we assume that the two nodes files from the previous example are used together with the following relationships file.

We define relationships between actors and movies in the files roles_header.csv and roles.csv. Each row connects a start node and an end node with a relationship of relationship type ACTED_IN. Notice how we use the unique identifiers personId and movieId from the nodes files above. The name of the character that the actor plays in the movie is stored as a role property on the relationship.

:START_ID,role,:END_ID,:TYPE
keanu,"Neo",tt0133093,ACTED_IN
keanu,"Neo",tt0234215,ACTED_IN
keanu,"Neo",tt0242653,ACTED_IN
laurence,"Morpheus",tt0133093,ACTED_IN
laurence,"Morpheus",tt0234215,ACTED_IN
laurence,"Morpheus",tt0242653,ACTED_IN
carrieanne,"Trinity",tt0133093,ACTED_IN
carrieanne,"Trinity",tt0234215,ACTED_IN
carrieanne,"Trinity",tt0242653,ACTED_IN

6. Properties

For properties, the <name> part of the field designates the property key, while the <field_type> part assigns a data type (see below). You can have properties in both node data files and relationship data files.

Data types

Use one of int, long, float, double, boolean, byte, short, char, string, point, date, localtime, time, localdatetime, datetime, and duration to designate the data type for properties. If no data type is given, this defaults to string.

To define an array type, append [] to the type. By default, array values are separated by ;. A different delimiter can be specified with --array-delimiter.

Boolean values are true if they match exactly the text true. All other values are false.

Values that contain the delimiter character need to be escaped by enclosing them in double quotation marks, or by using a different delimiter character with the --delimiter option.
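For example, an array property can be declared by appending [] to the type in the header and separating the values with the array delimiter in the data file. This is a hypothetical fragment; the skills field and its values are examples only:

personId:ID,name,skills:string[],:LABEL
keanu,"Keanu Reeves",acting;producing,Actor

The skills property of the imported node becomes an array of two string values, since the values are separated by the default array delimiter ;.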

Example 6. Header format with data types

This example illustrates several different data types specified in the CSV header.

:ID,name,joined:date,active:boolean,points:int
user01,Joe Soap,2017-05-05,true,10
user02,Jane Doe,2017-08-21,true,15
user03,Moe Know,2018-02-17,false,7
Special considerations for the point data type

A point is specified using the Cypher syntax for maps. The map allows the same keys as the input to the Cypher Manual → Point function. The point data type in the header can be amended with a map of default values used for all values of that column, e.g. point{crs: 'WGS-84'}. Specifying the header this way allows you to have an incomplete map in the value position in the data file. Optionally, a value in a data file may override default values from the header.

Example 7. Property format for point data type

This example illustrates various ways of using the point data type in the import header and the data files.

We are going to import the name and location coordinates for cities. First, we define the header as:

:ID,name,location:point{crs:WGS-84}

We then define cities in the data file.

  • The first city’s location is defined using latitude and longitude, as expected when using the coordinate system defined in the header.

  • The second city uses x and y instead. This would normally lead to a point using the coordinate reference system cartesian. Since the header defines crs:WGS-84, that coordinate reference system will be used.

  • The third city overrides the coordinate reference system defined in the header, and sets it explicitly to WGS-84-3D.

:ID,name,location:point{crs:WGS-84}
city01,"Malmö","{latitude:55.6121514, longitude:12.9950357}"
city02,"London","{y:51.507222, x:-0.1275}"
city03,"San Mateo","{latitude:37.554167, longitude:-122.313056, height: 100, crs:'WGS-84-3D'}"

Note that all point maps are within double quotation marks " in order to prevent the enclosed , character from being interpreted as a column separator. An alternative approach would be to use --delimiter='\t' and reformat the file with tab separators, in which case the " characters are not required.

:ID name    location:point{crs:WGS-84}
city01  Malmö   {latitude:55.6121514, longitude:12.9950357}
city02  London  {y:51.507222, x:-0.1275}
city03  San Mateo   {latitude:37.554167, longitude:-122.313056, height: 100, crs:'WGS-84-3D'}
Special considerations for temporal data types

The format for all temporal data types must be defined as described in Cypher Manual → Temporal instants syntax and Cypher Manual → Durations syntax. Two of the temporal types, Time and DateTime, take a time zone parameter, which might be common to all or many of the values in the data file. It is therefore possible to specify a default time zone for Time and DateTime values in the header, for example: time{timezone:+02:00} and datetime{timezone:Europe/Stockholm}. If no default time zone is specified, the default time zone is determined by the db.temporal.timezone configuration setting. The default time zone can be explicitly overridden by the values in the data file.

Example 8. Property format for temporal data types

This example illustrates various ways of using the datetime data type in the import header and the data files.

First, we define the header with two DateTime columns. The first one defines a time zone, but the second one does not:

:ID,date1:datetime{timezone:Europe/Stockholm},date2:datetime

We then define dates in the data file.

  • The first row has two values that do not specify an explicit timezone. The value for date1 will use the Europe/Stockholm time zone that was specified for that field in the header. The value for date2 will use the configured default time zone of the database.

  • In the second row, both date1 and date2 set the time zone explicitly to be Europe/Berlin. This overrides the header definition for date1, as well as the configured default time zone of the database.

1,2018-05-10T10:30,2018-05-10T12:30
2,2018-05-10T10:30[Europe/Berlin],2018-05-10T12:30[Europe/Berlin]

7. Using ID spaces

By default, the import tool assumes that node identifiers are unique across node files. In many cases, the ID is only unique within each entity file, for example when our CSV files contain data extracted from a relational database and the ID field is taken from the primary key column of the corresponding table. To handle this situation, we define ID spaces. ID spaces are defined in the ID field of node files using the syntax ID(<ID space identifier>). To reference an ID of an ID space in a relationship file, we use the syntax START_ID(<ID space identifier>) and END_ID(<ID space identifier>).

Example 9. Define and use ID spaces

Define a Movie-ID ID space in the movies_header.csv file.

movieId:ID(Movie-ID),title,year:int,:LABEL
1,"The Matrix",1999,Movie
2,"The Matrix Reloaded",2003,Movie;Sequel
3,"The Matrix Revolutions",2003,Movie;Sequel

Define an Actor-ID ID space in the header of the actors_header.csv file.

personId:ID(Actor-ID),name,:LABEL
1,"Keanu Reeves",Actor
2,"Laurence Fishburne",Actor
3,"Carrie-Anne Moss",Actor

Now use the previously defined ID spaces when connecting the actors to movies.

:START_ID(Actor-ID),role,:END_ID(Movie-ID),:TYPE
1,"Neo",1,ACTED_IN
1,"Neo",2,ACTED_IN
1,"Neo",3,ACTED_IN
2,"Morpheus",1,ACTED_IN
2,"Morpheus",2,ACTED_IN
2,"Morpheus",3,ACTED_IN
3,"Trinity",1,ACTED_IN
3,"Trinity",2,ACTED_IN
3,"Trinity",3,ACTED_IN

8. Skipping columns

IGNORE

If there are fields in the data that we wish to ignore completely, this can be done using the IGNORE keyword in the header file. IGNORE must be prepended with a :.

Example 10. Skip a column

In this example, we are not interested in the data in the third column of the nodes file and wish to skip over it. Note that the IGNORE keyword is prepended by a :.

personId:ID,name,:IGNORE,:LABEL
keanu,"Keanu Reeves","male",Actor
laurence,"Laurence Fishburne","male",Actor
carrieanne,"Carrie-Anne Moss","female",Actor

If all your superfluous data is placed in columns located to the right of all the columns that you wish to import, you can instead use the command line option --ignore-extra-columns.
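For example, if the extra values had instead been placed in a trailing column with no corresponding field in the header, the same result could be achieved with --ignore-extra-columns (a sketch; the header declares only the first three columns):

personId:ID,name,:LABEL
keanu,"Keanu Reeves",Actor,"male"
laurence,"Laurence Fishburne",Actor,"male"
carrieanne,"Carrie-Anne Moss",Actor,"female"

neo4j_home$ bin/neo4j-admin import --ignore-extra-columns --nodes import/actors_header.csv,import/actors.csv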

9. Import compressed files

The import tool can handle files compressed with zip or gzip. Each compressed file must contain a single file.

Example 11. Perform an import using compressed files
neo4j_home$ ls import
actors-header.csv  actors.csv.zip  movies-header.csv  movies.csv.gz  roles-header.csv  roles.csv.gz
neo4j_home$ bin/neo4j-admin import --nodes import/movies-header.csv,import/movies.csv.gz --nodes import/actors-header.csv,import/actors.csv.zip --relationships import/roles-header.csv,import/roles.csv.gz

10. Resuming a stopped or cancelled import

An import that is stopped or fails before completing can be resumed from a point closer to where it was stopped. An import can be resumed from the following points:

  • Linking of relationships

  • Post-processing