Import

neo4j-admin database import writes CSV data into Neo4j’s native file format as fast as possible.
Starting with version 5.26, Neo4j also provides support for the Parquet file format.

You should use this tool when:

  • Import performance is important because you have a large amount of data (millions/billions of entities).

  • The database can be taken offline and you have direct access to one of the servers hosting your Neo4j DBMS.

  • The database is either empty or its content is unchanged since a previous incremental import.

  • The CSV data is clean/fault-free (nodes are not duplicated and relationships' start and end nodes exist). This tool can handle data faults but performance is not optimized. If your data has a lot of faults, it is recommended to clean it using a dedicated tool before import.

Other methods of importing data into Neo4j might be better suited to non-admin users.

Change Data Capture does not capture any data changes resulting from the use of neo4j-admin database import. See Change Data Capture → Key considerations for more information.

Overview

The neo4j-admin database import command has two modes both used for initial data import:

  • full — used to import data into a non-existent empty database.

  • incremental — used when import cannot be completed in a single full import, by allowing the import to be a series of smaller imports.

The user running neo4j-admin database import must have WRITE capabilities into server.directories.data and server.directories.log.

This section describes the neo4j-admin database import command.

For information on LOAD CSV, see the Cypher Manual → LOAD CSV. For in-depth examples of using the command neo4j-admin database import, refer to the Tutorials → Neo4j Admin import.

These are some things you need to keep in mind when creating your input files:

  • Fields are comma-separated by default but a different delimiter can be specified.

  • All files must use the same delimiter.

  • Multiple data sources can be used for both nodes and relationships.

  • A data source can optionally be provided using multiple files.

  • A separate file with a header that provides information on the data fields must be the first file specified for each data source.

  • Fields without corresponding information in the header are not read.

  • UTF-8 encoding is used.

  • By default, the importer trims extra whitespace at the beginning and end of strings. Quote your data to preserve leading and trailing whitespaces.

Indexes and constraints

Indexes and constraints are not created during the import. Instead, you have to add these afterward (see Cypher Manual → Indexes).
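
For example, after the import completes, indexes and constraints could be added with Cypher. A minimal sketch (the labels, properties, and schema names are illustrative):

CREATE INDEX movie_title_index FOR (m:Movie) ON (m.title);
CREATE CONSTRAINT actor_id_unique FOR (a:Actor) REQUIRE a.personId IS UNIQUE;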

Starting from Neo4j 5.24, you can use the --schema option to create indexes and constraints during the import process. The option is available in the Enterprise Edition and works only for the block format. See Provide indexes and constraints during import for more information.

Full import

Syntax

The syntax for importing a set of CSV files is:

neo4j-admin database import full [-h] [--expand-commands] [--verbose] [--auto-skip-subsequent-headers[=true|false]]
                                 [--ignore-empty-strings[=true|false]] [--ignore-extra-columns[=true|false]]
                                 [--legacy-style-quoting[=true|false]] [--normalize-types[=true|false]]
                                 [--overwrite-destination[=true|false]] [--skip-bad-entries-logging[=true|false]]
                                 [--skip-bad-relationships[=true|false]] [--skip-duplicate-nodes[=true|false]] [--strict
                                 [=true|false]] [--trim-strings[=true|false]] [--additional-config=<file>]
                                 [--array-delimiter=<char>] [--bad-tolerance=<num>] [--delimiter=<char>]
                                 [--format=<format>] [--high-parallel-io=on|off|auto] [--id-type=string|integer|actual]
                                 [--input-encoding=<character-set>] [--input-type=csv|parquet]
                                 [--max-off-heap-memory=<size>] [--quote=<char>] [--read-buffer-size=<size>]
                                 [--report-file=<path>] [--schema=<path>] [--threads=<num>] --nodes=[<label>[:
                                 <label>]...=]<files>... [--nodes=[<label>[:<label>]...=]<files>...]...
                                 [--relationships=[<type>=]<files>...]... [--multiline-fields=true|false|<path>[,
                                 <path>] [--multiline-fields-format=v1|v2]] <database>

Description

Initial import into a non-existent empty database.

Parameters

Table 1. neo4j-admin database import full parameters
Parameter Description Default

<database>

Name of the database to import. If the database into which you import does not exist prior to importing, you must create it subsequently using CREATE DATABASE.

neo4j

Some of the options below are marked as Advanced. These options should not be used for experimentation.

For more information, please contact Neo4j Professional Services.

Options

Starting from Neo4j 5.26, the importer also supports the Parquet file format. An additional parameter --input-type=csv|parquet has been introduced to explicitly specify whether to use CSV or Parquet for the importer. If not defined, the default value will be CSV. The examples for CSV can also be used with Parquet.

Table 2. neo4j-admin database import full options
Option Description Default

--additional-config=<file>[1]

Configuration file with additional configuration.

--array-delimiter=<char>

Delimiter character between array elements within a value in CSV data. Also accepts TAB and e.g. U+20AC for specifying a character using Unicode.

  • ASCII character — e.g. --array-delimiter=";".

  • \ID — Unicode character with ID, e.g. --array-delimiter="\59".

  • U+XXXX — Unicode character specified with 4 HEX characters, e.g. --array-delimiter="U+20AC".

  • \t — horizontal tabulation (HT), e.g. --array-delimiter="\t".

For horizontal tabulation (HT), use \t or the Unicode character ID \9.

Unicode character ID can be used if prepended by \.

;

--auto-skip-subsequent-headers[=true|false][2]

Automatically skip accidental header lines in subsequent files in file groups with more than one file.

false

--bad-tolerance=<num>

Number of bad entries before the import is aborted. The import process is optimized for error-free data. Therefore, cleaning the data before importing it is highly recommended. If you encounter any bad entries during the import process, you can set the number of bad entries to a specific value that suits your needs. However, setting a high value may affect the performance of the tool.

1000

--delimiter=<char>[2]

Delimiter character between values in CSV data. Also accepts TAB and e.g. U+20AC for specifying a character using Unicode.

  • ASCII character — e.g. --delimiter=",".

  • \ID — Unicode character with ID, e.g. --delimiter="\44".

  • U+XXXX — Unicode character specified with 4 HEX characters, e.g. --delimiter="U+20AC".

  • \t — horizontal tabulation (HT), e.g. --delimiter="\t".

For horizontal tabulation (HT), use \t or the Unicode character ID \9.

Unicode character ID can be used if prepended by \.

,

--expand-commands

Allow command expansion in config value evaluation.

--format=<format>

Name of database format. The imported database will be created in the specified format or use the format set in the configuration. Valid formats are standard, aligned, high_limit, and block.

-h, --help

Show this help message and exit.

--high-parallel-io=on|off|auto

Ignore environment-based heuristics and indicate if the target storage subsystem can support parallel IO with high throughput or auto detect. Typically this is on for SSDs, large raid arrays, and network-attached storage.

auto

--id-type=string|integer|actual

Each node must provide a unique ID. This is used to find the correct nodes when creating relationships.

Possible values are:

  • string — arbitrary strings for identifying nodes.

  • integer — arbitrary integer values for identifying nodes.

  • actual — (advanced) actual node IDs.

string

--ignore-empty-strings[=true|false]

Whether or not empty string fields ("") from the input source are ignored, that is, treated as null.

false

--ignore-extra-columns[=true|false][2]

If unspecified columns should be ignored during the import.

false

--input-encoding=<character-set>[2]

Character set that input data is encoded in.

UTF-8

--input-type=csv|parquet

Introduced in 5.26 File type to import from. Can be csv or parquet. Defaults to csv.

csv

--legacy-style-quoting[=true|false]

Whether or not a backslash-escaped quote e.g. \" is interpreted as an inner quote.

false

--max-off-heap-memory=<size>

Maximum memory that neo4j-admin can use for various data structures and caching to improve performance.

Values can be plain numbers, such as 10000000, or 20G for 20 gigabytes. It can also be specified as a percentage of the available memory, for example 70%.

90%

--multiline-fields=true|false|<path>[,<path>][2]

Changed in 5.26 In v1, whether or not fields from an input source can span multiple lines, i.e. contain newline characters. Setting --multiline-fields=true can severely degrade the performance of the importer. Therefore, use it with care, especially with large imports. In v2, this option will specify the list of files that contain multiline fields. Files can also be specified using regular expressions.

null

--multiline-fields-format=v1|v2[2]

Introduced in 5.26 Controls the parsing of input source that can span multiple lines, i.e. contain newline characters. When set to v1, the value for --multiline-fields can only be true or false. When set to v2, the value for --multiline-fields should be the list of files that contain multiline fields.

null

--nodes=[<label>[:<label>]…​=]<files>…​

Node CSV header and data.

  • Multiple files will be logically seen as one big file from the perspective of the importer.

  • The first line must contain the header.

  • Multiple data sources like these can be specified in one import, where each data source has its own header.

  • Files can also be specified using regular expressions.

It is possible to import files from AWS S3 buckets, Google Cloud storage buckets, and Azure buckets using the appropriate URI as the path.

--normalize-types[=true|false]

When true, non-array property values are converted to their equivalent Cypher types. For example, all integer values will be converted to 64-bit long integers.

true

--overwrite-destination[=true|false]

Delete any existing database files prior to the import.

false

--quote=<char>[2]

Character to treat as quotation character for values in CSV data.

Quotes can be escaped as per RFC 4180 by doubling them, for example "" would be interpreted as a literal ".

You cannot escape using \.

"

--read-buffer-size=<size>

Size of each buffer for reading input data.

It has to be at least large enough to hold the biggest single value in the input data. The value can be a plain number or a byte units string, e.g. 128k, 1m.

4194304

--relationships=[<type>=]<files>…​

Relationship CSV header and data.

  • Multiple files will be logically seen as one big file from the perspective of the importer.

  • The first line must contain the header.

  • Multiple data sources like these can be specified in one import, where each data source has its own header.

  • Files can also be specified using regular expressions.

It is possible to import files from AWS S3 buckets, Google Cloud storage buckets, and Azure buckets using the appropriate URI as the path.

--report-file=<path>

File in which to store the report of the csv-import.

The location of the import log file can be controlled using the --report-file option. If you run large imports of CSV files that have low data quality, the import log file can grow very large. For example, CSV files that contain duplicate node IDs, or that attempt to create relationships between non-existent nodes, could be classed as having low data quality. In these cases, you may wish to direct the output to a location that can handle the large log file.

If you are running on a UNIX-like system and you are not interested in the output, you can get rid of it altogether by directing the report file to /dev/null.

If you need to debug the import, it might be useful to collect the stack trace. This is done by using the --verbose option.

import.report

--schema=<path>

Introduced in 5.24 Enterprise edition Path to the file containing the Cypher commands for creating indexes and constraints during data import.

--skip-bad-entries-logging[=true|false]

When set to true, the details of bad entries are not written in the log. Disabling logging can improve performance when the data contains lots of faults. Cleaning the data before importing it is highly recommended because faults dramatically affect the tool’s performance even without logging.

false

--skip-bad-relationships[=true|false]

Whether or not to skip importing relationships that refer to missing node IDs, i.e. either start or end node ID/group referring to a node that was not specified by the node input data.

Skipped relationships will be logged, containing at most the number of entities specified by --bad-tolerance, unless otherwise specified by the --skip-bad-entries-logging option.

false

--skip-duplicate-nodes[=true|false]

Whether or not to skip importing nodes that have the same ID/group.

In the event of multiple nodes within the same group having the same ID, the first encountered will be imported, whereas consecutive such nodes will be skipped.

Skipped nodes will be logged, containing at most the number of entities specified by --bad-tolerance, unless otherwise specified by the --skip-bad-entries-logging option.

false

--strict[=true|false]

Introduced in 5.6 Whether or not the lookup of nodes referred to from relationships is checked strictly. If disabled, most, but not all, relationships referring to non-existent nodes will be detected. If enabled, all such relationships will be found, but at the cost of lower performance.

false Changed in 5.8

--threads=<num>

(advanced) Max number of worker threads used by the importer. Defaults to the number of available processors reported by the JVM. A certain minimum number of threads is needed, so there is no lower bound for this value. For optimal performance, this value should not be greater than the number of available processors.

20

--trim-strings[=true|false][2]

Whether or not strings should be trimmed of whitespace.

false

--verbose

Enable verbose output.

1. See Tools → Configuration for details.

2. Ignored by Parquet import.

Heap size for the import

Set the maximum heap size to a value appropriate for the import. This is done by defining the HEAP_SIZE environment variable before starting the import. For example, 2G is an appropriate value for smaller imports.

If doing imports in the order of magnitude of 100 billion entities, 20G will be an appropriate value.
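
A minimal sketch of how this could look on a Unix-like shell (file names and database name are placeholders):

export HEAP_SIZE=2G
bin/neo4j-admin database import full --nodes=import/nodes_header.csv,import/nodes.csv --relationships=import/rels_header.csv,import/rels.csv neo4j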

Record format

If your import data results in a graph that is larger than 34 billion nodes, 34 billion relationships, or 68 billion properties, you will need to configure the importer to use the block format. This is achieved by using the format option of the import command and setting the value to block:

bin/neo4j-admin database import full --format=block

The block format is available in Enterprise Edition only.

Providing arguments in a file

All options can be provided in a file and passed to the command using the @ prefix. This is useful when the command line becomes too long to manage. For example, the following command:

bin/neo4j-admin database import full @/path/to/your/<args-filename> mydb
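
The referenced arguments file could contain, for example (a sketch with illustrative file names; one argument per line):

--nodes=import/movies_header.csv,import/movies.csv
--nodes=import/actors_header.csv,import/actors.csv
--relationships=import/roles_header.csv,import/roles.csv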

For more information, see Picocli → AtFiles official documentation.

Using both a multi-value option and a positional parameter

When using both a multi-value option, such as --nodes and --relationships, and a positional parameter (for example, in --additional-config neo4j.properties --nodes 0-nodes.csv mydatabase), the --nodes option acts "greedy" and the next value, in this case mydatabase, is pulled in via the nodes converter.

This is a limitation of the underlying library, Picocli, and is not specific to Neo4j Admin. For more information, see Picocli → Variable Arity Options and Positional Parameters official documentation.

To resolve the problem, use one of the following solutions:

  • Put the positional parameters first. For example, mydatabase --nodes 0-nodes.csv.

  • Put the positional parameters last, after -- following the final value of the last multi-value option. For example, --nodes 0-nodes.csv -- mydatabase.
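
For example, both workarounds could look like the following sketch (reusing the file names from above):

bin/neo4j-admin database import full mydatabase --additional-config neo4j.properties --nodes 0-nodes.csv
bin/neo4j-admin database import full --additional-config neo4j.properties --nodes 0-nodes.csv -- mydatabase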

Importing from a cloud storage

The --nodes and --relationships options can also import files from AWS S3 buckets (from Neo4j 5.19), Google Cloud storage buckets (from Neo4j 5.21), and Azure buckets (from Neo4j 5.24). For more information, see Importing files from a cloud storage.

Examples

If importing to a database that has not explicitly been created before the import, it must be created subsequently in order to be used.
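
For example, once the import has finished, the database could be created from the system database (the database name is illustrative):

CREATE DATABASE mydb;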

Import data from CSV files

Assume that you have formatted your data as per CSV header format so that you have it in six different files:

  1. movies_header.csv

  2. movies.csv

  3. actors_header.csv

  4. actors.csv

  5. roles_header.csv

  6. roles.csv

The following command imports the three datasets:

bin/neo4j-admin database import full --nodes import/movies_header.csv,import/movies.csv \
--nodes import/actors_header.csv,import/actors.csv \
--relationships import/roles_header.csv,import/roles.csv

Provide indexes and constraints during import

Starting from Neo4j 5.24, you can use the --schema option to provide Cypher commands for creating indexes and constraints during the initial import process. It currently works only for the block format and full import.

The Cypher script should contain only CREATE INDEX or CREATE CONSTRAINT commands, which are parsed and executed. The file uses ';' as the statement separator.

For example:

CREATE INDEX PersonNameIndex FOR (i:Person) ON (i.name);
CREATE CONSTRAINT PersonAgeConstraint FOR (c:Person) REQUIRE c.age IS :: INTEGER

List of supported indexes and constraints that can be created by the import tool:

  • RANGE

  • LOOKUP

  • POINT

  • TEXT

  • FULL-TEXT

  • VECTOR

For example:

bin/neo4j-admin database import full neo4j --nodes=import/movies.csv --nodes=import/actors.csv --relationships=import/roles.csv --schema=import/schema.cypher

Import data from CSV files using regular expression

Assume that you want to include a header and then multiple files that match a pattern, e.g. containing numbers. In this case, a regular expression can be used. It is guaranteed that groups of digits will be sorted in numerical order, as opposed to lexicographic order.

For example:

bin/neo4j-admin database import full --nodes import/node_header.csv,import/node_data_\d+\.csv

Import data from CSV files using a more complex regular expression

If a regular expression pattern contains a comma, which is also the delimiter between files in a group, the pattern can be quoted to preserve it.

For example:

bin/neo4j-admin database import full --nodes import/node_header.csv,'import/node_data_\d{1,5}.csv' databasename

Importing files from a cloud storage

The following examples show how to import data stored in a cloud storage bucket using the --nodes and --relationships options.

Neo4j uses the AWS SDK v2 to call the APIs on AWS using AWS URLs. Alternatively, you can override the endpoints so that the AWS SDK can communicate with alternative storage systems, such as Ceph, Minio, or LocalStack, using the system variables aws.endpointUrls3, aws.endpointUrlS3, or aws.endpointUrl, or the environment variables AWS_ENDPOINT_URL_S3 or AWS_ENDPOINT_URL.

  1. Install the AWS CLI by following the instructions in the AWS official documentation — Install the AWS CLI version 2.

  2. Create an S3 bucket and a directory to store the backup files using the AWS CLI:

    aws s3 mb --region=us-east-1 s3://myBucket
    aws s3api put-object --bucket myBucket --key myDirectory/

    For more information on how to create a bucket and use the AWS CLI, see the AWS official documentation — Use Amazon S3 with the AWS CLI and Use high-level (s3) commands with the AWS CLI.

  3. Verify that the ~/.aws/config file is correct by running the following command:

    cat ~/.aws/config

    The output should look like this:

    [default]
    region=us-east-1
  4. Configure the access to your AWS S3 bucket by setting the aws_access_key_id and aws_secret_access_key in the ~/.aws/credentials file and, if needed, using a bucket policy. For example:

    1. Use the aws configure set command to set your aws_access_key_id and aws_secret_access_key IAM credentials from AWS, and verify that the ~/.aws/credentials file is correct:

      cat ~/.aws/credentials

      The output should look like this:

      [default]
      aws_access_key_id=this.is.secret
      aws_secret_access_key=this.is.super.secret
    2. Additionally, you can use a resource-based policy to grant access permissions to your S3 bucket and the objects in it. Create a policy document with the following content and attach it to the bucket. Note that both resource entries are important to be able to download and upload files.

      {
          "Version": "2012-10-17",
          "Id": "Neo4jBackupAggregatePolicy",
          "Statement": [
              {
                  "Sid": "Neo4jBackupAggregateStatement",
                  "Effect": "Allow",
                  "Action": [
                      "s3:ListBucket",
                      "s3:GetObject",
                      "s3:PutObject",
                      "s3:DeleteObject"
                  ],
                  "Resource": [
                      "arn:aws:s3:::myBucket/*",
                      "arn:aws:s3:::myBucket"
                  ]
              }
          ]
      }
  5. Run the neo4j-admin database import command to import your data from your AWS S3 storage bucket. The example assumes that you have data stored in the myBucket/data folder in your bucket.

    bin/neo4j-admin database import full --nodes s3://myBucket/data/nodes.csv --relationships s3://myBucket/data/relationships.csv newdb
  1. Ensure you have a Google account and a project created in the Google Cloud Platform (GCP).

    1. Install the gcloud CLI by following the instructions in the Google official documentation — Install the gcloud CLI.

    2. Create a service account and a service account key using Google official documentation — Create service accounts and Creating and managing service account keys.

    3. Download the JSON key file for the service account.

    4. Set the GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT environment variables to the path of the JSON key file and the project ID, respectively:

      export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json"
      export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID
    5. Authenticate the gcloud CLI with the e-mail address of the service account you have created, the path to the JSON key file, and the project ID:

      gcloud auth activate-service-account service-account@example.com --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project=$GOOGLE_CLOUD_PROJECT

      For more information, see the Google official documentation — gcloud auth activate-service-account.

    6. Create a bucket in the Google Cloud Storage using Google official documentation — Create buckets.

    7. Verify that the bucket is created by running the following command:

      gcloud storage ls

      The output should list the created bucket.

  2. Run the neo4j-admin database import command to import your data from your Google storage bucket. The example assumes that you have data stored in the myBucket/data folder in your bucket.

    bin/neo4j-admin database import full --nodes gs://myBucket/data/nodes.csv --relationships gs://myBucket/data/relationships.csv newdb
  1. Ensure you have an Azure account, an Azure storage account, and a blob container.

    1. You can create a storage account using the Azure portal.
      For more information, see the Azure official documentation on Create a storage account.

    2. Create a blob container in the Azure portal.
      For more information, see the Azure official documentation on Quickstart: Upload, download, and list blobs with the Azure portal.

  2. Install the Azure CLI by following the instructions in the Azure official documentation — Azure official documentation.

  3. Authenticate the neo4j or neo4j-admin process against Azure using the default Azure credentials.
    See the Azure official documentation on default Azure credentials for more information.

    az login

    Then you should be ready to use Azure URLs in either neo4j or neo4j-admin.

  4. To validate that you have access to the container with your login credentials, run the following commands:

    # Upload a file:
    az storage blob upload --file someLocalFile --account-name accountName --container someContainer --name remoteFileName --auth-mode login

    # Download the file
    az storage blob download --account-name accountName --container someContainer --name remoteFileName --file downloadedFile --auth-mode login

    # List container files
    az storage blob list --account-name accountName --container someContainer --auth-mode login
  5. Run the neo4j-admin database import command to import your data from your Azure blob storage container. The example assumes that you have data stored in the myStorageAccount/myContainer/data folder in your container.

    bin/neo4j-admin database import full --nodes azb://myStorageAccount/myContainer/data/nodes.csv --relationships azb://myStorageAccount/myContainer/data/relationships.csv newdb

Incremental import

Incremental import supports block format starting from Neo4j 5.20.

Incremental import allows you to incorporate large amounts of data in batches into the graph. You can run this operation as part of the initial data load when it cannot be completed in a single full import. In addition, you can update your graph by importing data incrementally, which performs better than inserting the same data transactionally.

Incremental import requires the use of --force and can be run on an existing database only.

You must stop your database if you want to perform the incremental import within one command.

If you cannot afford a full downtime of your database, split the operation into several stages:

  • prepare stage (offline)

  • build stage (offline or read-only)

  • merge stage (offline)

The database must be stopped for the prepare and merge stages. During the build stage, the database can be left online but put into read-only mode. For a detailed example, see Incremental import in stages.

It is highly recommended to back up your database before running the incremental import because, if the merge stage fails, is aborted, or crashes, it may corrupt the database.

Syntax

The syntax for importing a set of CSV files incrementally is:

neo4j-admin database import incremental [-h] [--expand-commands] --force [--verbose] [--auto-skip-subsequent-headers
                                        [=true|false]] [--ignore-empty-strings[=true|false]] [--ignore-extra-columns
                                        [=true|false]] [--legacy-style-quoting[=true|false]] [--normalize-types
                                        [=true|false]] [--skip-bad-entries-logging[=true|false]]
                                        [--skip-bad-relationships[=true|false]] [--skip-duplicate-nodes[=true|false]]
                                        [--strict[=true|false]] [--trim-strings[=true|false]]
                                        [--additional-config=<file>] [--array-delimiter=<char>] [--bad-tolerance=<num>]
                                        [--delimiter=<char>] [--high-parallel-io=on|off|auto]
                                        [--id-type=string|integer|actual] [--input-encoding=<character-set>]
                                        [--input-type=csv|parquet] [--max-off-heap-memory=<size>] [--quote=<char>]
                                        [--read-buffer-size=<size>] [--report-file=<path>] [--schema=<path>]
                                        [--stage=all|prepare|build|merge] [--threads=<num>] --nodes=[<label>[:
                                        <label>]...=]<files>... [--nodes=[<label>[:<label>]...=]<files>...]...
                                        [--relationships=[<type>=]<files>...]... [--multiline-fields=true|false|<path>[,
                                        <path>] [--multiline-fields-format=v1|v2]] <database>

Description

Incremental import into an existing database.

Usage and limitations

The incremental import command can be used to add:

  • New nodes with labels and properties.

    Note that you must have node property uniqueness constraints in place for the property key and label combinations that form the primary key, i.e. that uniquely identify nodes; a constraint sketch is shown at the end of this section. Otherwise, the command will throw an error and exit. For more information, see CSV header format.

  • New relationships between existing or new nodes.

The incremental import command cannot be used to:

  • Add new properties to existing nodes or relationships.

  • Update or delete properties in nodes or relationships.

  • Update or delete labels in nodes.

  • Delete existing nodes and relationships.

The importer works well on standalone servers. In clustering environments with multiple copies of the database, the updated database must be reseeded.
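
A minimal sketch of such a uniqueness constraint, assuming Person nodes that are uniquely identified by a uuid property (the constraint name, label, and property are illustrative):

CREATE CONSTRAINT person_uuid_unique FOR (p:Person) REQUIRE p.uuid IS UNIQUE;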

Parameters

Table 3. neo4j-admin database import incremental parameters
Parameter Description Default

<database>

Name of the database to import. If the database into which you import does not exist prior to importing, you must create it subsequently using CREATE DATABASE.

neo4j

Options

Table 4. neo4j-admin database import incremental options
Option Description Default

--additional-config=<file>[3]

Configuration file with additional configuration.

--array-delimiter=<char>

Delimiter character between array elements within a value in CSV data. Also accepts TAB and e.g. U+20AC for specifying a character using Unicode.

  • ASCII character — e.g. --array-delimiter=";".

  • \ID — Unicode character with ID, e.g. --array-delimiter="\59".

  • U+XXXX — Unicode character specified with 4 HEX characters, e.g. --array-delimiter="U+20AC".

  • \t — horizontal tabulation (HT), e.g. --array-delimiter="\t".

For horizontal tabulation (HT), use \t or the Unicode character ID \9.

Unicode character ID can be used if prepended by \.

;

--auto-skip-subsequent-headers[=true|false][4]

Automatically skip accidental header lines in subsequent files in file groups with more than one file.

false

--bad-tolerance=<num>

Number of bad entries before the import is aborted. The import process is optimized for error-free data. Therefore, cleaning the data before importing it is highly recommended. If you encounter any bad entries during the import process, you can set the number of bad entries to a specific value that suits your needs. However, setting a high value may affect the performance of the tool.

1000

--delimiter=<char>[4]

Delimiter character between values in CSV data. Also accepts TAB and e.g. U+20AC for specifying a character using Unicode.

  • ASCII character — e.g. --delimiter=",".

  • \ID — Unicode character with ID, e.g. --delimiter="\44".

  • U+XXXX — Unicode character specified with 4 HEX characters, e.g. --delimiter="U+20AC".

  • \t — horizontal tabulation (HT), e.g. --delimiter="\t".

For horizontal tabulation (HT), use \t or the Unicode character ID \9.

Unicode character ID can be used if prepended by \.

,

--expand-commands

Allow command expansion in config value evaluation.

--force

Confirm incremental import by setting this flag.

-h, --help

Show this help message and exit.

--high-parallel-io=on|off|auto

Ignore environment-based heuristics and indicate if the target storage subsystem can support parallel IO with high throughput or auto detect. Typically this is on for SSDs, large raid arrays, and network-attached storage.

auto

--id-type=string|integer|actual

Introduced in 5.1 Each node must provide a unique ID. This is used to find the correct nodes when creating relationships.

Possible values are:

  • string — arbitrary strings for identifying nodes.

  • integer — arbitrary integer values for identifying nodes.

  • actual — (advanced) actual node IDs.

string

--ignore-empty-strings[=true|false]

Whether or not empty string fields ("") from the input source are ignored, that is, treated as null.

false

--ignore-extra-columns[=true|false][4]

If unspecified columns should be ignored during the import.

false

--input-encoding=<character-set>[4]

Character set that input data is encoded in.

UTF-8

--input-type=csv|parquet

Introduced in 5.26 File type to import from. Can be csv or parquet. Defaults to csv.

csv

--legacy-style-quoting[=true|false]

Whether or not a backslash-escaped quote e.g. \" is interpreted as an inner quote.

false

--max-off-heap-memory=<size>

Maximum memory that neo4j-admin can use for various data structures and caching to improve performance.

Values can be plain numbers, such as 10000000, or 20G for 20 gigabytes. It can also be specified as a percentage of the available memory, for example 70%.

90%

--multiline-fields=true|false|<path>[,<path>][4]

Changed in 5.26 In v1, whether or not fields from an input source can span multiple lines, i.e. contain newline characters. Setting --multiline-fields=true can severely degrade the performance of the importer. Therefore, use it with care, especially with large imports. In v2, this option will specify the list of files that contain multiline fields. Files can also be specified using regular expressions.

null

--multiline-fields-format=v1|v2[4]

Introduced in 5.26 Controls the parsing of input source that can span multiple lines, i.e. contain newline characters. When set to v1, the value for --multiline-fields can only be true or false. When set to v2, the value for --multiline-fields should be the list of files that contain multiline fields.

null

--nodes=[<label>[:<label>]…​=]<files>…​

Node CSV header and data.

  • Multiple files will be logically seen as one big file from the perspective of the importer.

  • The first line must contain the header.

  • Multiple data sources like these can be specified in one import, where each data source has its own header.

  • Files can also be specified using regular expressions.

It is possible to import files from AWS S3 buckets, Google Cloud storage buckets, and Azure buckets using the appropriate URI as the path.

--normalize-types[=true|false]

When true, non-array property values are converted to their equivalent Cypher types. For example, all integer values will be converted to 64-bit long integers.

true

--quote=<char>[4]

Character to treat as quotation character for values in CSV data.

Quotes can be escaped as per RFC 4180 by doubling them, for example "" would be interpreted as a literal ".

You cannot escape using \.

"

--read-buffer-size=<size>

Size of each buffer for reading input data.

It has to be at least large enough to hold the biggest single value in the input data. The value can be a plain number or a byte units string, e.g. 128k, 1m.

4194304

--relationships=[<type>=]<files>…​

Relationship CSV header and data.

  • Multiple files will be logically seen as one big file from the perspective of the importer.

  • The first line must contain the header.

  • Multiple data sources like these can be specified in one import, where each data source has its own header.

  • Files can also be specified using regular expressions.

It is possible to import files from AWS S3 buckets, Google Cloud storage buckets, and Azure buckets using the appropriate URI as the path.

--report-file=<path>

File in which to store the report of the csv-import.

The location of the import log file can be controlled using the --report-file option. If you run large imports of CSV files that have low data quality, the import log file can grow very large. For example, CSV files that contain duplicate node IDs, or that attempt to create relationships between non-existent nodes, could be classed as having low data quality. In these cases, you may wish to direct the output to a location that can handle the large log file.

If you are running on a UNIX-like system and you are not interested in the output, you can get rid of it altogether by directing the report file to /dev/null.

If you need to debug the import, it might be useful to collect the stack trace. This is done by using the --verbose option.

import.report

--schema=<path>[5]

Introduced in 5.24 Path to the file containing the Cypher commands for creating indexes and constraints during data import.

--skip-bad-entries-logging[=true|false]

When set to true, the details of bad entries are not written in the log. Disabling logging can improve performance when the data contains lots of faults. Cleaning the data before importing it is highly recommended because faults dramatically affect the tool’s performance even without logging.

false

--skip-bad-relationships[=true|false]

Whether or not to skip importing relationships that refer to missing node IDs, i.e. either start or end node ID/group referring to a node that was not specified by the node input data.

Skipped relationships will be logged, containing at most the number of entities specified by --bad-tolerance, unless otherwise specified by the --skip-bad-entries-logging option.

false

--skip-duplicate-nodes[=true|false]

Whether or not to skip importing nodes that have the same ID/group.

In the event of multiple nodes within the same group having the same ID, the first encountered will be imported, whereas consecutive such nodes will be skipped.

Skipped nodes will be logged, containing at most the number of entities specified by --bad-tolerance, unless otherwise specified by the --skip-bad-entries-logging option.

false

--stage=all|prepare|build|merge

Stage of incremental import.

For incremental import into an existing database use all (which requires the database to be stopped).

For semi-online incremental import run prepare (on a stopped database) followed by build (on a potentially running database) and finally merge (on a stopped database).

all

--strict[=true|false]

Introduced in 5.6 Whether or not the lookup of nodes referred to from relationships is checked strictly. If disabled, most, but not all, relationships referring to non-existent nodes will be detected. If enabled, all such relationships will be found, but at the cost of lower performance.

false Changed in 5.8

--threads=<num>

(advanced) Max number of worker threads used by the importer. Defaults to the number of available processors reported by the JVM. A certain minimum number of threads is needed, so there is no lower bound for this value. For optimal performance, this value should not be greater than the number of available processors.

20

--trim-strings[=true|false][4]

Whether or not strings should be trimmed of whitespace.

false

--verbose

Enable verbose output.

3. See Tools → Configuration for details.

4. Ignored by Parquet import.

5. The --schema option is available in this version but not yet supported. It will be functional in a future release.

Using both a multi-value option and a positional parameter

When using both a multi-value option, such as --nodes and --relationships, and a positional parameter (for example, in --additional-config neo4j.properties --nodes 0-nodes.csv mydatabase), the --nodes option acts "greedy" and the next value, in this case mydatabase, is pulled in via the nodes converter.

This is a limitation of the underlying library, Picocli, and is not specific to Neo4j Admin. For more information, see Picocli → Variable Arity Options and Positional Parameters official documentation.

To resolve the problem, use one of the following solutions:

  • Put the positional parameters first. For example, mydatabase --nodes 0-nodes.csv.

  • Put the positional parameters last, after -- following the final value of the last multi-value option. For example, --nodes 0-nodes.csv -- mydatabase.

Examples

There are two ways of importing data incrementally.

Incremental import in a single command

If downtime is not a concern, you can run a single command with the option --stage=all. This option requires the database to be stopped.

neo4j@system> STOP DATABASE db1 WAIT;
...
bin/neo4j-admin database import incremental --stage=all --nodes=N1=../../raw-data/incremental-import/b.csv db1

Incremental import in stages

If you cannot afford a full downtime of your database, you can run the import in three stages.

  1. prepare stage:

    During this stage, the import tool analyzes the CSV headers and copies the relevant data over to the new increment database path. The import command is run with the option --stage=prepare and the database must be stopped.

    1. Using the system database, stop the database db1 with the WAIT option to ensure a checkpoint happens before you run the incremental import command. The database must be stopped to run --stage=prepare.

      STOP DATABASE db1 WAIT
    2. Run the incremental import command with the --stage=prepare option:

      bin/neo4j-admin database import incremental --stage=prepare --nodes=N1=../../raw-data/incremental-import/c.csv db1
  2. build stage:

    During this stage, the import tool imports the data, deduplicates it, and validates it in the new increment database path. This is the longest stage and you can put the database in read-only mode to allow read access. The import command is run with the option --stage=build.

    1. Put the database in read-only mode:

      ALTER DATABASE db1 SET ACCESS READ ONLY
    2. Run the incremental import command with the --stage=build option:

      bin/neo4j-admin database import incremental --stage=build --nodes=N1=../../raw-data/incremental-import/c.csv db1
  3. merge stage:

    During this stage, the import tool merges the new data with the existing data in the database. It also updates the affected indexes and upholds the affected property uniqueness constraints and property existence constraints. The import command is run with the option --stage=merge and the database must be stopped. It is not necessary to include the --nodes or --relationships options when using --stage=merge.

    1. Using the system database, stop the database db1 with the WAIT option to ensure a checkpoint happens before you run the incremental import command.

      STOP DATABASE db1 WAIT
    2. Run the incremental import command with the --stage=merge option:

      bin/neo4j-admin database import incremental --stage=merge db1

CSV header format

The header file of each data source specifies how the data fields should be interpreted. You must use the same delimiter for the header file and the data files.

The header contains information for each field, with the format <name>:<field_type>. The <name> is used for properties and node IDs. In all other cases, the <name> part of the field is ignored.

Incremental import

When using incremental import, you must have node property uniqueness constraints in place for the property key and label combinations that form the primary key, i.e. that uniquely identify nodes. For example, when importing nodes with a Person label that are uniquely identified by a uuid property key, the format of the header should be uuid:ID{label:Person}.

This is also true when working with multiple groups. For example, you can use uuid:ID(Person){label:Person}, where the relationship CSV data can refer to different groups for its :START_ID and :END_ID, just like the full import method.

Node files

Files containing node data can have an ID field, a LABEL field, and properties.

ID

Each node must have a unique ID if it is to be connected by any relationships created in the import. Neo4j uses the IDs to find the correct nodes when creating relationships. Note that the ID has to be unique across all nodes within the group, regardless of their labels. The unique ID is persisted in a property whose name is defined by the <name> part of the field definition <name>:ID. If no such property name is defined, the unique ID will be used for the import but not be available for reference later. If no ID is specified, the node will be imported, but it will not be connected to other nodes during the import. When a property name is provided, that property type can be configured globally via the --id-type option (as for Property data types).
From Neo4j 5.1, you can specify a different value type for the IDs stored in a group's node property by using the id-type option in the header, e.g. id:ID(MyGroup){label:MyLabel, id-type: int}. This ID type overrides the global --id-type option. For example, the global id-type can be string, but the nodes in this group will have their IDs stored as int type in their ID properties. For more information, see Storing a different value type for IDs in a group.
From Neo4j 5.3, a node header can also contain multiple ID columns, where the relationship data references the composite value of all those columns. This also implies using string as id-type. For each ID column, you can specify to store its values as different node properties. However, the composite value cannot be stored as a node property. For more information, see Using multiple node IDs.

LABEL

Read one or more labels from this field. Like array values, multiple labels are separated by ;, or by the character specified with --array-delimiter. Introduced in 5.25 The max length of label names for block format is 16,383 characters.

Example 1. Define node files

You define the headers for movies in the movies_header.csv file. Movies have the properties movieId, year, and title. You also specify a field for labels.

movieId:ID,title,year:int,:LABEL

You define three movies in the movies.csv file. They contain all the properties defined in the header file. All the movies are given the label Movie. Two of them are also given the label Sequel.

tt0133093,"The Matrix",1999,Movie
tt0234215,"The Matrix Reloaded",2003,Movie;Sequel
tt0242653,"The Matrix Revolutions",2003,Movie;Sequel

Similarly, you also define three actors in the actors_header.csv and actors.csv files. They all have the properties personId and name, and the label Actor.

personId:ID,name,:LABEL
keanu,"Keanu Reeves",Actor
laurence,"Laurence Fishburne",Actor
carrieanne,"Carrie-Anne Moss",Actor

Relationship files

Files containing relationship data have three mandatory fields and can also have properties. The mandatory fields are:

TYPE

The relationship type to use for this relationship. Introduced in 5.25 The max length of relationship type names for block format is 16,383 characters.

START_ID

The ID of the start node for this relationship.

END_ID

The ID of the end node for this relationship.

The START_ID and END_ID refer to the unique node ID defined in one of the node data sources, as explained in the previous section. None of these take a name, e.g. if <name>:START_ID or <name>:END_ID is defined, the <name> part will be ignored. Nor do they take a <field_type>, e.g. if :START_ID:int or :END_ID:int is defined, the :int part does not have any meaning in the context of type information.

Example 2. Define relationships files

This example assumes that the two node files from the previous example are used together with the following relationships file.

You define relationships between actors and movies in the files roles_header.csv and roles.csv. Each row connects a start node and an end node with a relationship of relationship type ACTED_IN. Notice how you use the unique identifiers personId and movieId from the nodes files above. The name of the character that the actor is playing in this movie is stored as a role property on the relationship.

:START_ID,role,:END_ID,:TYPE
keanu,"Neo",tt0133093,ACTED_IN
keanu,"Neo",tt0234215,ACTED_IN
keanu,"Neo",tt0242653,ACTED_IN
laurence,"Morpheus",tt0133093,ACTED_IN
laurence,"Morpheus",tt0234215,ACTED_IN
laurence,"Morpheus",tt0242653,ACTED_IN
carrieanne,"Trinity",tt0133093,ACTED_IN
carrieanne,"Trinity",tt0234215,ACTED_IN
carrieanne,"Trinity",tt0242653,ACTED_IN

Property data types

For properties, the <name> part of the field designates the property key, while the <field_type> part assigns a data type. You can have properties in both node data files and relationship data files. Introduced in 5.25 The max length of property keys for block format is 16,383 characters.

Use one of int, long, float, double, boolean, byte, short, char, string, point, date, localtime, time, localdatetime, datetime, and duration to designate the data type for properties. By default, types (except arrays) are converted to Cypher types. See Cypher Manual → Property, structural, and constructed values.

This behavior can be disabled using the option --normalize-types=false. Normalizing types can require more space on disk, but avoids Cypher converting the type during queries. If no data type is given, this defaults to string.

To define an array type, append [] to the type. By default, array values are separated by ;. A different delimiter can be specified with --array-delimiter. Arrays are not affected by the --normalize-types flag. For example, if you want a byte array to be stored as a Cypher long array, you must explicitly declare the property as long[].
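
For instance, an array property could be declared and populated like this (a small sketch; the header and values are illustrative):

:ID,name,skills:string[]
dev01,"Alice",Cypher;Java;Python
dev02,"Bob",Go;Rust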

Boolean values are true if they match exactly the text true. All other values are false. Values that contain the delimiter character need to be escaped by enclosing in double quotation marks, or by using a different delimiter character with the --delimiter option.

Example 3. Header format with data types

This example illustrates several different data types specified in the CSV header.

:ID,name,joined:date,active:boolean,points:int
user01,Joe Soap,2017-05-05,true,10
user02,Jane Doe,2017-08-21,true,15
user03,Moe Know,2018-02-17,false,7

Special considerations for the point data type

A point is specified using the Cypher syntax for maps. The map allows the same keys as the input to the Cypher Manual → Point function. The point data type in the header can be amended with a map of default values used for all values of that column, e.g. point{crs: 'WGS-84'}. Specifying the header this way allows you to have an incomplete map in the value position in the data file. Optionally, a value in a data file may override default values from the header.

Example 4. Property format for point data type

This example illustrates various ways of using the point data type in the import header and the data files.

You are going to import the name and location coordinates for cities. First, you define the header as:

:ID,name,location:point{crs:WGS-84}

You then define cities in the data file.

  • The first city’s location is defined using latitude and longitude, as expected when using the coordinate system defined in the header.

  • The second city uses x and y instead. This would normally lead to a point using the coordinate reference system cartesian. Since the header defines crs:WGS-84, that coordinate reference system will be used.

  • The third city overrides the coordinate reference system defined in the header and sets it explicitly to WGS-84-3D.

:ID,name,location:point{crs:WGS-84}
city01,"Malmö","{latitude:55.6121514, longitude:12.9950357}"
city02,"London","{y:51.507222, x:-0.1275}"
city03,"San Mateo","{latitude:37.554167, longitude:-122.313056, height: 100, crs:'WGS-84-3D'}"

Note that all point maps are within double quotation marks " in order to prevent the enclosed , character from being interpreted as a column separator. An alternative approach would be to use --delimiter='\t' and reformat the file with tab separators, in which case the " characters are not required.

:ID name    location:point{crs:WGS-84}
city01  Malmö   {latitude:55.6121514, longitude:12.9950357}
city02  London  {y:51.507222, x:-0.1275}
city03  San Mateo   {latitude:37.554167, longitude:-122.313056, height: 100, crs:'WGS-84-3D'}

Special considerations for temporal data types

The format for all temporal data types must be defined as described in Cypher Manual → Temporal instants syntax and Cypher Manual → Durations syntax. Two of the temporal types, Time and DateTime, take a time zone parameter that might be common between all or many of the values in the data file. It is therefore possible to specify a default time zone for Time and DateTime values in the header, for example, time{timezone:+02:00} and datetime{timezone:Europe/Stockholm}. If no default time zone is specified, the default time zone is determined by the db.temporal.timezone configuration setting. The default time zone can be explicitly overridden in the values in the data file.
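
For instance, the database-wide default time zone could be set in neo4j.conf (a sketch):

db.temporal.timezone=Europe/Stockholm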

Example 5. Property format for temporal data types

This example illustrates various ways of using the datetime data type in the import header and the data files.

First, you define the header with two DateTime columns. The first one defines a time zone, but the second one does not:

:ID,date1:datetime{timezone:Europe/Stockholm},date2:datetime

You then define dates in the data file.

  • The first row has two values that do not specify an explicit timezone. The value for date1 will use the Europe/Stockholm time zone that was specified for that field in the header. The value for date2 will use the configured default time zone of the database.

  • In the second row, both date1 and date2 set the time zone explicitly to be Europe/Berlin. This overrides the header definition for date1, as well as the configured default time zone of the database.

1,2018-05-10T10:30,2018-05-10T12:30
2,2018-05-10T10:30[Europe/Berlin],2018-05-10T12:30[Europe/Berlin]

Using ID spaces

By default, the import tool assumes that node identifiers are unique across node files. In many cases, the ID is unique only across each entity file, for example, when your CSV files contain data extracted from a relational database and the ID field is pulled from the primary key column in the corresponding table. To handle this situation you define ID spaces. ID spaces are defined in the ID field of node files using the syntax ID(<ID space identifier>). To reference an ID of an ID space in a relationship file, you use the syntax START_ID(<ID space identifier>) and END_ID(<ID space identifier>).

Example 6. Define and use ID spaces

Define a Movie-ID ID space in the movies_header.csv file.

movieId:ID(Movie-ID),title,year:int,:LABEL
1,"The Matrix",1999,Movie
2,"The Matrix Reloaded",2003,Movie;Sequel
3,"The Matrix Revolutions",2003,Movie;Sequel

Define an Actor-ID ID space in the header of the actors_header.csv file.

personId:ID(Actor-ID),name,:LABEL
1,"Keanu Reeves",Actor
2,"Laurence Fishburne",Actor
3,"Carrie-Anne Moss",Actor

Now use the previously defined ID spaces when connecting the actors to movies.

:START_ID(Actor-ID),role,:END_ID(Movie-ID),:TYPE
1,"Neo",1,ACTED_IN
1,"Neo",2,ACTED_IN
1,"Neo",3,ACTED_IN
2,"Morpheus",1,ACTED_IN
2,"Morpheus",2,ACTED_IN
2,"Morpheus",3,ACTED_IN
3,"Trinity",1,ACTED_IN
3,"Trinity",2,ACTED_IN
3,"Trinity",3,ACTED_IN

Using multiple node IDs

From Neo4j 5.3, a node header can also contain multiple ID columns, where the relationship data references the composite value of all those columns. This also implies using string as id-type.

For each ID column, you can specify to store its values as different node properties. However, the composite value cannot be stored as a node property.

Incremental import doesn’t support the use of multiple node identifiers. This functionality is only available with a full import.

Example 7. Define multiple IDs as node properties

You can define multiple ID columns in the node header. For example, you can define a node header with two ID columns.

nodes_header.csv
:ID,:ID,name
nodes.csv
aa,11,John
bb,22,Paul

Now use both IDs when defining the relationship:

relationships_header.csv
:START_ID,:TYPE,:END_ID
relationships.csv
aa11,WORKS_WITH,bb22

Example 8. Define multiple IDs stored in ID spaces

Define a MyGroup ID space in the nodes_header.csv file.

nodes_header.csv
personId:ID(MyGroup),memberId:ID(MyGroup),name
nodes.csv
aa,11,John
bb,22,Paul

Now use the defined ID space when connecting John with Paul, and use both IDs in the relationship.

relationships_header.csv
:START_ID(MyGroup),:TYPE,:END_ID(MyGroup)
relationships.csv
aa11,WORKS_WITH,bb22

Storing a different value type for IDs in a group

From Neo4j 5.1, you can control the ID type of the node property that will be stored by defining the id-type option in the header, for example, :ID{id-type:long}. The id-type option in the header overrides the global --id-type value provided to the command. This way, you can have property values of different types for different groups of nodes. For example, the global id-type can be a string, but some nodes can have their IDs stored as long type in their ID properties.

Example 9. Import nodes with different ID value types

persons_header.csv
id:ID(GroupOne){id-type:long},name,:LABEL
persons.csv
123,P1,Person
456,P2,Person
games_header.csv
id:ID(GroupTwo),name,:LABEL
games.csv
ABC,G1,Game
DEF,G2,Game
Import the nodes
neo4j_home$ bin/neo4j-admin database import full --nodes persons_header.csv,persons.csv --nodes games_header.csv,games.csv --id-type string

The id property of the nodes in the persons group will be stored as long type, while the id property of the nodes in the games group will be stored as string type, as the global id-type is a string.

Importing data that spans multiple lines

The --multiline-fields option allows fields from an input source to span multiple lines, i.e. contain newline characters. For example:

bin/neo4j-admin database import full --nodes import/node_header.csv,import/node_data.csv --multiline-fields=true databasename

Where import/node_data.csv contains multiline fields, such as:

id,name,birthDate,birthYear,birthLocation,description
1,John,October 1st,2000,New York,This is a multiline
description

Setting --multiline-fields=true can severely degrade the performance of the importer. Therefore, use it with care, especially with large imports.

Starting from 5.26, you can optionally set the --multiline-fields-format option to control how input sources with multiline fields are parsed. Possible values are:

  • v1 - the default format, which uses the current processing method for multiline fields.

  • v2 - a more efficient processing method that requires text fields to be quoted. For v2, the --multiline-fields option must be set to a list of files (regular expressions are allowed) that contain multiline fields.

Both formats have the restriction that the entirety of every row must be able to fit into the buffer (default is 4m). The --multiline-fields-format option is available in the full and incremental import modes.

For example:

bin/neo4j-admin database import full --nodes import/node_header.csv,import/node_data.csv --multiline-fields=true --multiline-fields-format=v1 databasename

Where import/node_data.csv contains multiline fields, such as:

id,name,birthDate,birthYear,birthLocation,description
1,John,October 1st,2000,New York,This is a multiline
description

And with the v2 format:

bin/neo4j-admin database import full --nodes import/node_header.csv,import/node_data.csv --multiline-fields=import/node_data.csv --multiline-fields-format=v2 databasename

Where import/node_data.csv contains multiline fields, such as:

id,name,birthDate,birthYear,birthLocation,description
1,"John","October 1st",2000,"New York","This is a multiline
description"

Skipping columns

IGNORE

If there are fields in the data that you wish to ignore completely, this can be done using the IGNORE keyword in the header file. IGNORE must be prepended with a :.

Example 10. Skip a column

In this example, you are not interested in the data in the third column of the nodes file and wish to skip over it. Note that the IGNORE keyword is prepended by a :.

personId:ID,name,:IGNORE,:LABEL
keanu,"Keanu Reeves","male",Actor
laurence,"Laurence Fishburne","male",Actor
carrieanne,"Carrie-Anne Moss","female",Actor

If all your superfluous data is placed in columns located to the right of all the columns that you wish to import, you can instead use the command line option --ignore-extra-columns.

Importing compressed files

The import tool can handle files compressed with zip or gzip. Each compressed file must contain a single file.

Example 11. Perform an import using compressed files
neo4j_home$ ls import
actors-header.csv  actors.csv.zip  movies-header.csv  movies.csv.gz  roles-header.csv  roles.csv.gz

neo4j_home$ bin/neo4j-admin database import full --nodes import/movies-header.csv,import/movies.csv.gz --nodes import/actors-header.csv,import/actors.csv.zip --relationships import/roles-header.csv,import/roles.csv.gz
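
Compressed inputs like these could be produced with standard tools, for example (a sketch on a Unix-like shell):

# gzip compresses the file in place, producing movies.csv.gz
gzip import/movies.csv
# zip -j creates an archive that contains the single file actors.csv
zip -j import/actors.csv.zip import/actors.csv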

Resuming a stopped or canceled import

An import that is stopped or fails before completing can be resumed from a point closer to where it was stopped. An import can be resumed from the following points:

  • Linking of relationships

  • Post-processing