Yay! Excited to see DataChain on the front page :)<p>Maintainer and author here. Happy to answer any questions.<p>We built DataChain because DVC couldn't fully handle data transformations and versioning directly in S3/GCS/Azure without copying the data.<p>The "DBT for unstructured data" analogy fits DataChain very well, since it transforms data (using Python, not SQL) inside the storage itself (S3, not a DB). Happy to talk more!
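A minimal sketch to make that concrete: from_storage, filter, map, and save are the real entry points, but the bucket path, the UDF, and the dataset name below are made up for illustration:

    from datachain import Column, DataChain, File

    def n_bytes(file: File) -> int:
        # Toy Python UDF; in practice this could call a model or an API.
        return file.size

    chain = (
        DataChain.from_storage("s3://my-bucket/docs/")      # reads file entries in place
        .filter(Column("file.size") > 0)                    # Column overloads comparisons
        .map(n_bytes=n_bytes, params=["file"], output=int)  # adds a computed column
        .save("docs-dataset")                               # persists a versioned dataset
    )

The files stay in the bucket; only the computed signals and references are stored.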
Cool! Does this assume the unstructured data already has a corresponding metadata file?<p>My most common use cases involve getting PDFs or HTML files, and I have to parse the metadata to store along with the embedding.<p>Would I have to run a process to extract file metadata into JSONs for every embedding/chunk? Would keys be created based off the document title + chunk number?<p>Very interested in this because documents from clients are subject to random changes and I don't have very robust systems in place.
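For concreteness, a sketch of the per-chunk records I build today; the Chunk shape and the naive splitter are just stand-ins for my actual parsing code:

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        key: str        # document title + chunk number, e.g. "q3-report#0"
        title: str
        chunk_no: int
        text: str

    def chunk_document(title: str, text: str, size: int = 1000) -> list[Chunk]:
        # Naive fixed-size splitter standing in for real chunking logic.
        pieces = [text[i:i + size] for i in range(0, len(text), size)]
        return [
            Chunk(key=f"{title}#{n}", title=title, chunk_no=n, text=piece)
            for n, piece in enumerate(pieces)
        ]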
It took me a minute to grok what this was for, but I think I like it.<p>It doesn't really replace any of the tooling we use to wrangle data at scale (like Prefect or Dagster or Temporal), but as a local library it seems to be excellent. I think what confused me most was the comparison to dbt.<p>I like the from_* utils, the magic of the Column class operator overloading, and how chains can be used as datasets. Love how easy checkpointing is too. Will give it a go.
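E.g. the Column overloading and chains-as-datasets bits read something like this; the "docs-dataset" name and "n_bytes" column are invented, and I'm going off the docs for from_dataset/filter/glob:

    from datachain import Column, DataChain

    # A saved chain is addressable as a dataset by name, so a later chain
    # can pick up where a previous run checkpointed.
    pdfs = (
        DataChain.from_dataset("docs-dataset")
        .filter((Column("n_bytes") > 1024) & Column("file.path").glob("*.pdf"))
    )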
> It is made to organize your unstructured data into datasets and wrangle it at scale on your local machine.<p>How does one wrangle terabytes of data on a local machine?
> Datachain does not abstract or hide the AI models and API calls, but helps to integrate them into the postmodern data stack.<p>I'm not sure if this term <i>postmodern data stack</i> was invented for the purposes of this copy. Probably not. But terms like this don't really engender a lot of faith that this isn't yet another piece of the now decades-long hype cycle that data engineering products face.