The temporal aspect seems to be largely ignored by mainstream DB development. Some fields do address it: in BI you'll have slowly changing dimensions, and the event sourcing pattern promises a time-machine view of data.

Since I'm writing a history-aware application at the moment, I recently looked into different patterns for this and am trying a mixed strategy: a SQL DB used as a document store plus a single event log that accumulates changes (sketched below). It's a lean approach, a few hundred lines of Python for the data access layer. What always gets ugly is the validation, which your application must take care of itself.

I wish there were more hands-on material on the subject. Some resources dive deep into bi-temporal modeling, but I feel the schemas can get complex (= expensive) very fast.
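A minimal sketch of that mixed strategy, assuming SQLite; the `documents`/`events` tables and all column names are hypothetical illustrations, not the commenter's actual schema:

```python
import json
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: one table holds the current documents as JSON,
# plus a single append-only event log that accumulates every change.
conn = sqlite3.connect("app.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS documents (
        doc_id TEXT PRIMARY KEY,
        body   TEXT NOT NULL            -- current state, serialized JSON
    );
    CREATE TABLE IF NOT EXISTS events (
        event_id    INTEGER PRIMARY KEY AUTOINCREMENT,
        doc_id      TEXT NOT NULL,
        recorded_at TEXT NOT NULL,      -- ISO-8601 UTC timestamp
        change      TEXT NOT NULL       -- the change payload, serialized JSON
    );
""")

def save(doc_id, change):
    """Apply a change to a document and append it to the event log."""
    now = datetime.now(timezone.utc).isoformat()
    with conn:  # one transaction, so state and log stay consistent
        row = conn.execute(
            "SELECT body FROM documents WHERE doc_id = ?", (doc_id,)
        ).fetchone()
        doc = json.loads(row[0]) if row else {}
        doc.update(change)  # naive merge; app-level validation belongs here
        conn.execute(
            "INSERT OR REPLACE INTO documents (doc_id, body) VALUES (?, ?)",
            (doc_id, json.dumps(doc)),
        )
        conn.execute(
            "INSERT INTO events (doc_id, recorded_at, change) VALUES (?, ?, ?)",
            (doc_id, now, json.dumps(change)),
        )

def as_of(doc_id, timestamp):
    """Replay the log to reconstruct a document as of a past time."""
    doc = {}
    for (change,) in conn.execute(
        "SELECT change FROM events WHERE doc_id = ? AND recorded_at <= ? "
        "ORDER BY event_id", (doc_id, timestamp)
    ):
        doc.update(json.loads(change))
    return doc
```

Replaying the whole log on a historical read is linear in the number of changes; the `documents` table exists precisely so current-state reads can skip the replay.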
There have been numerous triple stores. Mozilla/Firefox even used one as its core backend for a while before ripping it out and replacing it with SQLite. Why do you think those failed, and how is Datomic going to avoid the same failures?

That is, I've seen this pitched as the solution to all our data woes numerous times now; why is this time different?
Couldn't you just model your data in a traditional RDBMS with a timestamp as well? In fact, for mutable data whose old versions you'd like to keep, this is pretty standard. A simple design would be a separate person_locations table that maps a person to a location (see the sketch below).

For everything else, the standard RDBMS table could be considered a 'snapshot' of the Datomic values.

I'm still not sure what benefit this has over a traditional DB. Perhaps I'll just have to wait for the next post.
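A minimal sketch of that design, again assuming SQLite; the person_locations schema, the `valid_from` column, and the sample data are hypothetical:

```python
import sqlite3

# Hypothetical schema: each row is one fact plus the time it became true,
# so history is appended rather than overwritten.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person_locations (
        person_id  INTEGER NOT NULL,
        location   TEXT NOT NULL,
        valid_from TEXT NOT NULL    -- when this fact became true
    );
""")

conn.executemany(
    "INSERT INTO person_locations VALUES (?, ?, ?)",
    [(1, "Boston", "2010-01-01"),
     (1, "New York", "2012-06-15")],
)

# Current location: the latest row per person.
current = conn.execute("""
    SELECT location FROM person_locations
    WHERE person_id = 1
    ORDER BY valid_from DESC LIMIT 1
""").fetchone()

# 'Snapshot' as of a past date: the same query, bounded by that date.
as_of_2011 = conn.execute("""
    SELECT location FROM person_locations
    WHERE person_id = 1 AND valid_from <= '2011-01-01'
    ORDER BY valid_from DESC LIMIT 1
""").fetchone()

print(current, as_of_2011)  # ('New York',) ('Boston',)
```

The as-of query is just the current-state query with one extra bound, which is what makes the snapshot view cheap to express in a plain RDBMS.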
"Datomic is so different than regular databases that your average developer will probably chose to ignore it."<p>Playing to the <i>Well I'm obviously above average</i> gut feeling. Cute.