The Reprompt dashboard lets you upload a CSV file of places and enrich them in bulk without writing any code. You can upload files with up to 1 million rows and the platform automatically splits them into batches for you.
Navigate to Datasets → Upload dataset to get started. Drag and drop your CSV file onto the upload area, or click to browse. The file must be under 200 MB, include a header row, and contain at least a place name plus either an address or coordinates.
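A minimal CSV that satisfies these requirements might look like the following (the column names and all values are illustrative placeholders; you'll map the columns to platform fields on the next screen):

```csv
id,name,full_address,latitude,longitude,phone
1,Example Cafe,"123 Main St, Springfield, IL 62701",39.7817,-89.6501,+1-555-0100
2,Sample Books,"456 Oak Ave, Springfield, IL 62702",39.8017,-89.6437,+1-555-0101
```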
Map Your Columns
After uploading, the dashboard parses your file and shows the Map CSV Columns screen. It auto-detects columns where possible, and shows a green “All required columns mapped” banner when the essential fields are set.
Essential Fields
These fields drive enrichment accuracy. You need either a full_address string (preferred) or the individual address components (house, street, city, state, postalCode, country).
| Field | Required | Description |
|---|---|---|
| id | Yes | Unique identifier for each place |
| name | Yes | Business or place name |
| full_address | Recommended | Full street address as a single string (preferred over separate components) |
| latitude | Recommended | Latitude coordinate |
| longitude | Recommended | Longitude coordinate |
Additional Options
Expand Additional Options to map every column your CSV has. The more fields you map, the better your enrichment results.
| Field | Description |
|---|---|
| phone | Phone number |
| website | Official website URL |
| category | Place category |
| house | House/building number |
| street | Street name |
| city | City |
| postalCode | Postal / zip code |
| state | State or province |
| country_code | ISO country code |
| country | Country name |
If you don’t provide a full_address, make sure to map the individual address components (house, street, city, state, postalCode, country) so the platform can resolve your places accurately.
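If your source data only has separate components, you can also concatenate them into a full_address column yourself before uploading. A minimal sketch, assuming your row keys match the field names in the table above:

```python
def add_full_address(row):
    """Join address components into a single full_address string,
    combining house + street with a space and the remaining parts
    with commas, skipping any component that is missing or empty."""
    house_street = " ".join(
        p for p in (row.get("house", ""), row.get("street", "")) if p
    )
    rest = [row.get(k, "") for k in ("city", "state", "postalCode", "country")]
    row["full_address"] = ", ".join(p for p in [house_street, *rest] if p)
    return row
```

Apply this to each row before writing the CSV, then map the new column to full_address during upload.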
Map every field you have in your CSV. Unmapped columns are ignored during enrichment, which reduces match quality. For example, if your CSV has a phone_number column, make sure it’s mapped to the phone field.
Preview and Import
After mapping, the dashboard shows a Preview of your data. Click Select All to include every row, then review the import summary at the bottom.
The dashboard tells you exactly how many rows you’re importing and how many datasets (batches) it will create. In this example, 1,290,391 rows are automatically split into 259 datasets.
Click Import 1,290,391 rows as 259 datasets to start the upload. The platform handles all the splitting for you; there's no need to manually chunk your file.
You can upload CSV files with up to 1,000,000 rows in a single upload. The dashboard automatically splits them into batches of ~5,000 rows each.
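The dataset count is just the row count divided by the batch size, rounded up. For the example above (using the approximate 5,000-row batch size; actual batch sizes may vary slightly):

```python
import math

rows = 1_290_391
batch_size = 5_000  # approximate batch size used by the dashboard

# Round up: a final partial batch still becomes its own dataset.
batches = math.ceil(rows / batch_size)
print(batches)  # 259, matching the import summary above
```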
Run Enrichment
After importing, your datasets appear in the sidebar and the main list. Select the datasets you want to enrich by checking the boxes, then click Enrich N datasets… at the top.
In the enrichment dialog, select Enrich Place under Select Columns, then click Process N places across N datasets to start.
Splitting files over 200 MB
If your CSV exceeds the 200 MB file size limit, split it into smaller files before uploading. There are many free CSV splitter tools available online, or you can use a command-line utility like csv-split from npm. Upload each resulting file separately through the dashboard.
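If you'd rather not install a separate tool, a short script can do the split while repeating the header row in every output file. A sketch (the output directory, file naming, and 50,000-row chunk size are arbitrary choices):

```python
import csv
from pathlib import Path

def split_csv(src, rows_per_file=50_000, out_dir="chunks"):
    """Split src into numbered CSV files of at most rows_per_file data
    rows each, writing the header row at the top of every output file."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    written = []
    with open(src, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        out, writer, part, count = None, None, 0, 0
        for row in reader:
            if writer is None or count >= rows_per_file:
                if out:
                    out.close()
                part += 1
                out = open(Path(out_dir) / f"part_{part:03d}.csv",
                           "w", newline="")
                writer = csv.writer(out)
                writer.writerow(header)
                written.append(out.name)
                count = 0
            writer.writerow(row)
            count += 1
        if out:
            out.close()
    return written
```

Each resulting file can then be uploaded separately through the dashboard.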
Monitoring Progress
After importing, each auto-split batch appears as a separate dataset in the sidebar under Datasets. You can track enrichment progress for each one independently.
- Pending - jobs are queued and waiting to start
- In Progress - jobs are actively being enriched
- Completed - enrichment finished; results are available
- Failed - some jobs failed; you can reprocess them
See Batch Processing for details on monitoring, pagination, and reprocessing failed items via the API.
Best Practices
- Map every column - The more fields you map, the higher the enrichment accuracy. Don’t leave columns unmapped if they contain useful data.
- Include coordinates - latitude and longitude help disambiguate places with similar names at different locations.
- Give datasets descriptive names - The dataset name helps you find results later. Name them by geography, date, or purpose (e.g. “texas” or “Q1 2025 Store Locations”).
- Start with a sample - Upload a small subset (100–1,000 rows) first to verify column mapping before processing your full dataset.
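One way to produce such a sample while keeping the header row (the 1,000-row default here is simply the upper end of the suggested range):

```python
import csv

def sample_csv(src, dst, n=1000):
    """Copy the header plus the first n data rows of src into dst."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader, writer = csv.reader(fin), csv.writer(fout)
        writer.writerow(next(reader))  # header row
        for i, row in enumerate(reader):
            if i >= n:
                break
            writer.writerow(row)
```

Upload the sample, confirm the column mapping and enrichment results look right, then proceed with the full file.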
Large Uploads and Processing Time
Uploading and enriching large datasets (hundreds of thousands or millions of rows) can take a significant amount of time - both for the upload itself and for enrichment to complete across all batches.
If you’re working with very large files or need a faster transfer method, you can use an S3 bucket to exchange data with Reprompt instead of uploading through the browser. Contact the Reprompt team to set up S3-based ingestion for your account.
Next Steps