By time, they’re talking about the writing style of a specific time period.<p>Feels like a clickbait title. Of course language model weights encode different writing styles. The fact that you can lift out a vector to restyle writing is more interesting, but that’s nothing newly discovered here either. It should be obvious that this is possible, given that you can prompt ChatGPT to change its writing style.
I think I like time. Though spectral, indeterminate, presently a fixture, essential moments last forever but occur daily. Why would any network encode time if it were all just a crystal vase?
Sooo… if I’m reading this right, it’s possible to force an AI into extrapolating into the future. As in, it’ll answer as if its training data came from future years.<p>Obviously this isn’t time travel, but more of a zeitgeist extrapolation.<p>I would expect that if an AI was made to answer like it’s from December 2024, it would talk a lot about the US election but it wouldn’t know who won — just that a “race is on.”<p>This could have actual utility: predicting trends, fads, new market opportunities, etc…
Well, I think this could become one of the most underestimated ideas in LLM development.<p>To be honest, it’s a relatively obvious idea to make vectors from timestamps and feed them to LLMs, but for some strange reason nobody has done this before, and it looks like it has mostly gone unnoticed in the NN community.
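For concreteness, here’s a toy sketch of what “make vectors from timestamps” could mean — a sinusoidal timestamp embedding in the style of transformer positional encodings, which could be added to the token embeddings as extra conditioning. Everything here (the function name, the dimension, the year normalization) is my own illustrative choice, not anything from the paper:

```python
import math
import numpy as np

def timestamp_embedding(year, dim=8, base=10000.0):
    """Sinusoidal embedding of a year, analogous to positional
    encodings. The normalization below is an arbitrary assumption."""
    t = (year - 2000) / 50.0  # crude normalization to a small range
    # one (sin, cos) pair per frequency, dim // 2 frequencies total
    freqs = [base ** (-2 * i / dim) for i in range(dim // 2)]
    emb = []
    for f in freqs:
        emb += [math.sin(t * f), math.cos(t * f)]
    return np.array(emb)

e_2015 = timestamp_embedding(2015)
e_2024 = timestamp_embedding(2024)
# distinct years map to distinct, smoothly varying vectors that a
# model could condition on during training
```

The appeal of a sinusoidal scheme over a learned lookup table is that it interpolates (and extrapolates) smoothly between years the model never saw.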
I think a more general way to think about it would be to fine-tune on any data and take a weight difference. For example, if we wanted geography vectors, we would fine-tune on geography data and then take the difference from the base weights. Add that difference to any other model with the same architecture, and you have a geography-capable LLM.