What is the computational overhead of ingesting source data into Molecula and keeping it in sync?
The compute needed to abstract your source data into Molecula's format is not negligible, but it has been carefully optimized. For example, a 300GB data set of CSV files (over 1B records) takes about 20 minutes to load using a single 32-core VM. Once the initial load is complete, only a small fraction of that compute is needed to pick up data and schema changes in the original data source and apply them to the feature store. The sync lag typically stays between 5ms and 2000ms behind the original data source.
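To put the bulk-load figure in perspective, here is a back-of-the-envelope calculation based purely on the numbers quoted above (300GB, over 1B records, 20 minutes, 32 cores); the derived rates are simple arithmetic, not separately published benchmarks:

```python
# Derived ingest rates from the quoted benchmark:
# 300 GB of CSV (over 1B records) loaded in ~20 minutes on one 32-core VM.
data_gb = 300
records = 1_000_000_000
load_seconds = 20 * 60  # 20 minutes

gb_per_second = data_gb / load_seconds       # sustained data throughput
records_per_second = records / load_seconds  # sustained record throughput
records_per_core = records_per_second / 32   # rough per-core rate

print(f"{gb_per_second:.2f} GB/s")           # 0.25 GB/s
print(f"{records_per_second:,.0f} records/s")
print(f"{records_per_core:,.0f} records/s per core")
```

Roughly 0.25 GB/s and about 830,000 records per second overall, or on the order of 26,000 records per second per core, which is the scale to expect when sizing an initial load of a comparably shaped data set.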