Paste a JSON array or newline-delimited JSON (NDJSON) into the input panel, or upload a .json file.
Select the compression codec: Snappy (fast, default), Gzip (high compression), Zstd (balanced), or Uncompressed.
Click "Convert" or press ⌘↵ to run the DuckDB WASM engine and generate the Parquet file in memory.
Review the inferred schema — column names and data types — displayed below the output.
Click "Download .parquet" to save the file locally for use with Spark, Athena, BigQuery, or DuckDB.
Powered by DuckDB WASM — a full analytical query engine running in your browser via WebAssembly
Supports standard JSON arrays and newline-delimited JSON (NDJSON / JSON Lines) input formats
Automatic schema inference: maps JSON types to Parquet physical types (INT32, INT64, DOUBLE, BOOLEAN, BYTE_ARRAY); see the schema sketch after this list
Three compression codecs (Snappy for fast read/write, Gzip for maximum compression, Zstd for a balanced trade-off), plus an uncompressed option
Displays inferred schema with column names and physical types before download
Shows output file size and compression ratio compared to the raw JSON input
Generates standard Parquet files compatible with Apache Spark, AWS Athena, Google BigQuery, and Pandas (see the round-trip check after this list)
Runs entirely in your browser using WebAssembly — your data never leaves your machine
No file size limit imposed by the tool (limited only by available browser memory)
Keyboard shortcut ⌘↵ to convert instantly
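For the schema display and the size comparison, DuckDB's DESCRIBE statement exposes the inferred column names and types, and the compression ratio is plain arithmetic over byte lengths. A small sketch, assuming the `converted` table and Parquet bytes from the pipeline example above (the function name and output format are illustrative):

```typescript
import type { AsyncDuckDBConnection } from '@duckdb/duckdb-wasm';

// Print the inferred schema and the JSON-to-Parquet compression ratio.
// Assumes a table named "converted" was created as in the earlier sketch.
async function reportSchemaAndRatio(
  conn: AsyncDuckDBConnection,
  jsonText: string,
  parquetBytes: Uint8Array,
): Promise<void> {
  // DESCRIBE returns one row per column, including column_name and column_type.
  const schema = await conn.query(`DESCRIBE converted`);
  const rows = schema.toArray() as Array<{ column_name: string; column_type: string }>;
  for (const row of rows) {
    console.log(`${row.column_name}: ${row.column_type}`);
  }

  // Ratio of raw JSON bytes (UTF-8 encoded) to compressed Parquet bytes.
  const jsonBytes = new TextEncoder().encode(jsonText).byteLength;
  console.log(`compression ratio: ${(jsonBytes / parquetBytes.byteLength).toFixed(2)}x`);
}
```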
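As a quick compatibility check, the generated file can be read straight back with DuckDB itself, one of the consumers listed above. A hedged sketch; the file name `check.parquet` and the helper name are hypothetical:

```typescript
import * as duckdb from '@duckdb/duckdb-wasm';

// Round-trip check: register the generated bytes as a virtual file and
// count the rows DuckDB can read back out of it.
async function verifyParquet(db: duckdb.AsyncDuckDB, bytes: Uint8Array): Promise<void> {
  await db.registerFileBuffer('check.parquet', bytes);
  const conn = await db.connect();
  const res = await conn.query(`SELECT COUNT(*) AS n FROM read_parquet('check.parquet')`);
  const rows = res.toArray() as Array<{ n: bigint }>;
  console.log(`rows round-tripped: ${Number(rows[0].n)}`);
  await conn.close();
}
```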