Syndicated datasets

Syndicated data is similar to manually stored data, but the system automates most of the tasks associated with uploading files, managing the data schema, and maintaining databases. When a dataset is created as syndicated data, it is lazy-loaded: certain parts of the dataset are only populated when it is first accessed.
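
The documentation does not specify the underlying data model, but the lazy-loading behaviour can be pictured roughly as follows. This TypeScript sketch is purely illustrative; the type and field names (SyndicatedDataset, approxSizeBytes, contentsReady) are assumptions, not the system's actual API.

```typescript
// Illustrative shapes only; the real system's types are not documented here.
interface ColumnSchema {
  name: string;
  type: string;
}

interface SyndicatedDataset {
  name: string;              // inferred from the file system
  approxSizeBytes: number;   // inferred from the file system
  schema?: ColumnSchema[];   // unset until the file is downloaded and analyzed
  contentsReady: boolean;    // true once a local copy has been processed
}

// Registering a syndicated dataset records only the cheap metadata;
// the schema and contents are deferred until first access.
function registerSyndicatedDataset(name: string, sizeBytes: number): SyndicatedDataset {
  return { name, approxSizeBytes: sizeBytes, contentsReady: false };
}
```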

The database name and approximate size can be inferred from the file system, but specifics such as the database schema and contents require the system to download a copy of the file.

This download happens the first time the data is accessed or viewed through the user interface. Requests to view the data or inspect its metadata remain pending until the file has been downloaded and analyzed. This typically takes a few seconds, but depends on the response time of the third-party system and the size of the file. Once the data has been downloaded and analyzed, the dataset behaves much like a regular dataset, although certain features, such as editing the data schema and managing the database, are unavailable.
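
A rough sketch of that first-access flow, reusing the SyndicatedDataset shape from the sketch above. downloadFile and analyzeFile are hypothetical stand-ins for the system's internal download and analysis steps, not documented functions.

```typescript
// Hypothetical stubs standing in for the real download and analysis work.
async function downloadFile(name: string): Promise<Uint8Array> {
  return new Uint8Array(); // placeholder; latency depends on the third-party system
}

async function analyzeFile(file: Uint8Array): Promise<ColumnSchema[]> {
  return []; // placeholder; duration depends on the size of the file
}

// A view or metadata request stays pending until download and analysis
// finish; subsequent requests use the already-populated dataset.
async function viewDataset(ds: SyndicatedDataset): Promise<ColumnSchema[]> {
  if (!ds.contentsReady) {
    const file = await downloadFile(ds.name);
    ds.schema = await analyzeFile(file);
    ds.contentsReady = true;
  }
  return ds.schema ?? [];
}
```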

The system self-manages the databases and typically keeps only the latest version available. This means that if the source file changes frequently, the generated database is regenerated just as often and will frequently need to be re-transferred to the browser.
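
One way to picture the keep-only-the-latest behaviour; the version counter and in-memory registry below are assumptions made for illustration, not the system's actual storage mechanism.

```typescript
// Illustrative bookkeeping: a newer generated database replaces the
// older one outright, so at most one version is stored per dataset.
interface GeneratedDatabase {
  datasetName: string;
  sourceVersion: number; // increases whenever the source file changes
  bytes: Uint8Array;
}

const storedDatabases = new Map<string, GeneratedDatabase>();

function storeLatest(db: GeneratedDatabase): void {
  const current = storedDatabases.get(db.datasetName);
  if (!current || db.sourceVersion > current.sourceVersion) {
    storedDatabases.set(db.datasetName, db); // the older version is discarded
  }
}
```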

When the source file is deleted, the associated dataset and any stored databases are deleted as well.
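
The deletion cascade, again as an illustrative sketch against the hypothetical registries from the sketches above.

```typescript
// Deleting the source file removes the dataset record and any
// generated databases along with it.
const datasets = new Map<string, SyndicatedDataset>();

function onSourceFileDeleted(name: string): void {
  datasets.delete(name);         // the associated dataset
  storedDatabases.delete(name);  // any stored/generated databases
}
```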