Run an import job

Once the job specification is ready, upload its JSON file to your Cloud Storage bucket. Then, in the Google Cloud console, go to Dataflow → Create job from template.
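The upload can also be scripted with the Cloud Storage client library. The following is a minimal sketch, assuming the google-cloud-storage package is installed and application default credentials are configured; the bucket and file names are placeholders.

```python
# Minimal sketch: upload the job specification JSON to Cloud Storage.
# Bucket name and object/file paths below are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-import-bucket")      # hypothetical bucket
blob = bucket.blob("neo4j/job-spec.json")       # destination object path
blob.upload_from_filename("job-spec.json")      # local spec file
print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```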

To create a job, specify the following fields (a programmatic equivalent is sketched after this list):

  • Job name — A human-friendly name for the job.

  • Regional endpoint — Must match the region of the Cloud Storage bucket containing the configuration and source files. If you are running the full example from these docs, set the regional endpoint to one of the US regions (for example, us-central1).

  • Dataflow template — Select Google Cloud to Neo4j.

  • Path to job configuration file — The JSON job specification file (from a Google Cloud Storage bucket).

  • Optional Parameters > Options JSON — Values for variables used in the job specification file.
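The same fields can be supplied when launching the Flex Template through the Dataflow API instead of the console. The sketch below assumes the google-api-python-client package; the template path and parameter names (jobSpecUri, optionsJson) are assumptions drawn from the template's published metadata, so verify them before use. The project, region, and bucket names are placeholders.

```python
# Minimal sketch: launch the Google Cloud to Neo4j Flex Template via the
# Dataflow v1b3 API. Names marked as placeholders/assumptions must be
# checked against the template metadata for your environment.
from googleapiclient.discovery import build

PROJECT = "my-project"      # placeholder project ID
REGION = "us-central1"      # must match the bucket's region

dataflow = build("dataflow", "v1b3")
response = dataflow.projects().locations().flexTemplates().launch(
    projectId=PROJECT,
    location=REGION,
    body={
        "launchParameter": {
            "jobName": "neo4j-import",
            # Assumed public path of the Google Cloud to Neo4j template.
            "containerSpecGcsPath": (
                f"gs://dataflow-templates-{REGION}/latest/flex/Google_Cloud_to_Neo4j"
            ),
            "parameters": {
                # Assumed parameter names mirroring the console fields.
                # One of the connection-metadata parameters described
                # below must also be added here.
                "jobSpecUri": "gs://my-import-bucket/neo4j/job-spec.json",
                "optionsJson": '{"myVariable": "myValue"}',  # optional
            },
        }
    },
).execute()
print(response["job"]["id"])
```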

The connection metadata is specified either as a secret or as a plain-text JSON file. Exactly one of these two options must be provided (a sketch for creating the secret follows this list).

  • Optional Parameters > Path to the Neo4j connection metadata — The JSON connection information file (from a Google Cloud Storage bucket).

  • Optional Parameters > Secret ID for the Neo4j connection metadata — The ID of the secret containing the JSON connection information (from Google Secret Manager).
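If you opt for the secret, it can be created with the Secret Manager client library. A minimal sketch, assuming the google-cloud-secret-manager package; the project and secret names are placeholders, and the payload must be the connection metadata JSON prepared earlier in this guide (the single key shown is illustrative only).

```python
# Minimal sketch: store the Neo4j connection JSON in Secret Manager and
# pass the resulting secret version ID to the Dataflow job form.
import json
from google.cloud import secretmanager

PROJECT = "my-project"  # placeholder project ID
client = secretmanager.SecretManagerServiceClient()

secret = client.create_secret(
    request={
        "parent": f"projects/{PROJECT}",
        "secret_id": "neo4j-connection",  # placeholder secret name
        "secret": {"replication": {"automatic": {}}},
    }
)
version = client.add_secret_version(
    request={
        "parent": secret.name,
        # Replace with your actual connection metadata JSON.
        "payload": {"data": json.dumps({"server_url": "neo4j+s://example"}).encode()},
    }
)
print(version.name)  # use this as the Secret ID parameter
```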

[Image: Dataflow job example]