TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Will data model/schema/structure variance widen or narrow over time?

1 point by mattewong, almost 3 years ago
As Andrew Ng recently pointed out, "Every hospital has its own slightly different format for electronic health records" and "you might have 10,000 manufacturers building 10,000 custom AI models" (https://spectrum.ieee.org/andrew-ng-data-centric-ai). In other words, every firm has a different data model.

As data volume and uses continue to explode, what will happen to data *structures*? Will data model / schema / structure variance get wider or narrower?

Argument for narrowing: standardization becomes too powerful to resist. Example: Ford standardizing auto parts.

Argument for widening: everyone competes for an "edge" in extracting value from data. Generating the "edge" means proprietary models. Proprietary models require not only bespoke "black boxes," but also proprietary ways of normalizing/standardizing data and related schemas. Furthermore, even if the model were not shared, each industry participant is not going to wait for some common "standard" to reflect the latest evolution of how it thinks about its data model.

Which trend will win?
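
To make the "every hospital has its own format" point concrete, here is a minimal sketch of the integration cost that schema variance imposes. All field names and record shapes below are invented for illustration, not taken from any real EHR standard: each data source needs its own adapter, and every new proprietary schema adds another.

```python
# Two hypothetical record formats for the same underlying fact,
# illustrating schema variance. Field names are invented.
hospital_a = {"patient_id": "A-17", "dob": "1980-04-02", "hb_a1c": 6.1}
hospital_b = {"mrn": "B/9904", "birth_date": "02/04/1980", "labs": {"HbA1c": 6.1}}

def normalize_a(rec: dict) -> dict:
    # Adapter for hospital A's flat schema.
    return {"id": rec["patient_id"], "hba1c": rec["hb_a1c"]}

def normalize_b(rec: dict) -> dict:
    # Adapter for hospital B's nested schema.
    return {"id": rec["mrn"], "hba1c": rec["labs"]["HbA1c"]}

# One adapter per source: the per-schema cost grows linearly (or worse,
# if sources must also exchange data pairwise) with schema variance.
unified = [normalize_a(hospital_a), normalize_b(hospital_b)]
print([r["hba1c"] for r in unified])
```

The "narrowing" argument amounts to betting that a shared schema eventually replaces these per-source adapters; the "widening" argument bets that each participant keeps evolving its own record shapes faster than any standard can track.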

no comments
