Load JSON Data with LOAD DATA

Data from JSON files can be loaded using the LOAD DATA command. For more information on the LOAD DATA command, refer to LOAD DATA. To see a few examples of using LOAD DATA with JSON files, refer to Load JSON Files Examples.

JSON LOAD DATA

Syntax

LOAD DATA [LOCAL] INFILE 'file_name'
[REPLACE | SKIP { CONSTRAINT | DUPLICATE KEY } ERRORS]
INTO TABLE tbl_name
FORMAT JSON
subvalue_mapping
[SET col_name = expr,...]
[WHERE expr,...]
[MAX_ERRORS number]
[ERRORS HANDLE string]
subvalue_mapping:
( {col_name | @variable_name} <- subvalue_path [DEFAULT literal_expr], ...)
subvalue_path:
{% | [%::]ident [::ident ...]}
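As a sketch of the syntax above, the following loads a nested subvalue into a column (the table, column, and file names here are hypothetical):

```sql
-- Hypothetical table and input file for illustration.
CREATE TABLE keypairs(`key` TEXT, `val` TEXT);

-- Suppose keypairs.json contains values such as:
--   {"key": "1", "value": {"v": "a"}}
LOAD DATA LOCAL INFILE 'keypairs.json'
INTO TABLE keypairs
FORMAT JSON
(`key` <- `key`,
 `val` <- `value`::v);
```

Here `value::v` is a subvalue_path that descends into the nested object, and `%` would refer to the entire JSON value.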

Semantics

Error Logging and Error Handling are discussed in the LOAD DATA page.

Extract specified subvalues from each JSON value in file_name. Assign them to specified columns of a new row in tbl_name, or to variables used for a column assignment in a SET clause. If a specified subvalue cannot be found in an input JSON value, assign the DEFAULT clause literal instead. Discard rows that do not match the WHERE clause.
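For example, a hypothetical load that combines DEFAULT, a SET clause fed by a variable, and a WHERE filter might look like this (the table, columns, and file are assumptions for illustration):

```sql
-- Assign total_cents to a variable, convert it to dollars in SET,
-- default a missing discount subvalue to '0', and discard rows
-- whose computed total is not positive.
LOAD DATA LOCAL INFILE 'orders.json'
INTO TABLE orders
FORMAT JSON
(id <- id,
 @total_cents <- total_cents,
 discount <- discount DEFAULT '0')
SET total = @total_cents / 100
WHERE total > 0;
```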

To specify the compression type of an input file, use the COMPRESSION clause. See LOAD DATA for more information.

The file named by file_name must consist of concatenated UTF-8 encoded JSON values, optionally separated by whitespace. Newline-delimited JSON is accepted, for example.
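For instance, both of the following layouts are valid input, since values may be separated by newlines or by any other whitespace:

```json
{"key": "a", "value": 1}
{"key": "b", "value": 2} {"key": "c", "value": 3}
```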

Non-standard JSON values like NaN, Infinity, and -Infinity must not occur in file_name.

If file_name ends in .gz or .lz4, it will be decompressed.

JSON LOAD DATA supports a subset of the error recovery options allowed by CSV LOAD DATA. Their behavior is as described under Load CSV Files with LOAD DATA.
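A hedged sketch of these options together, using hypothetical table and file names and an arbitrary error-handle string:

```sql
-- Skip rows that would raise duplicate-key errors, abort the load
-- after 100 errors, and record failures under a named error handle.
LOAD DATA LOCAL INFILE 'events.json'
SKIP DUPLICATE KEY ERRORS
INTO TABLE events
FORMAT JSON
(id <- id, payload <- payload)
MAX_ERRORS 100
ERRORS HANDLE 'events_load';
```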

Like CSV LOAD DATA, JSON LOAD DATA allows you to use globbing to load data from multiple files.
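For example, a glob pattern in file_name loads every matching file (the path and table here are hypothetical):

```sql
-- Load every .json file in the directory in a single statement.
LOAD DATA INFILE '/data/events/*.json'
INTO TABLE events
FORMAT JSON
(id <- id, payload <- payload);
```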

Writing to multiple databases in a transaction is not supported.

LOAD DATA from an AWS S3 Source

JSON files that are stored in an AWS S3 bucket can be loaded via a LOAD DATA query without a pipeline.

LOAD DATA S3 '<bucket name>'
CONFIG '{"region" : "<region_name>"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE <table_name>
FORMAT JSON
(<col_name> <- <subvalue_path>, ...);

For more information on loading data from an AWS S3 bucket, refer to the LOAD DATA documentation.

Last modified: April 8, 2025
