With all the recent advancements in LLMs and transformers, has the goal of parsing natural languages and representing them as an AST been achieved?<p>Or is this task still considered to be a hard one?<p>LLMs seem to understand text much better than any previous technology, so anaphora resolution, complex tenses, POS choice, rare constructs, and cross-language boundaries all don't seem to be hard issues for them.<p>There are so many research papers published on LLMs and transformers now, covering all kinds of applications, but none of them seem to quite get there.
It feels like it's sort of its own thing. LLMs are really good at morphing or fuzzy matching.<p>An interesting example – I had a project where I needed to parse out addresses and dates in a document. However, the address and date formats were not standardized across documents. Using LLMs was way easier than trying to regex or pattern match across the text.<p>But if you're trying to take a text document and break it down into some sort of structured output, the outcome using LLMs will be much more variable.
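A toy sketch of why the pattern-matching route gets unwieldy: without a standard format you end up enumerating every date variant by hand, and any layout you didn't anticipate silently falls through. The format list and sample strings here are hypothetical, just to illustrate the brittleness.

```python
from datetime import datetime

# Every new document layout means another format string added by hand.
CANDIDATE_FORMATS = [
    "%Y-%m-%d",      # 2021-03-05
    "%d/%m/%Y",      # 05/03/2021
    "%B %d, %Y",     # March 5, 2021
    "%d %b %Y",      # 5 Mar 2021
]

def parse_date(text):
    """Try each known format in turn; return None if nothing matches."""
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None  # an unanticipated format is simply missed
```

An LLM prompted with "extract the date from this text" absorbs that variation without the hand-maintained list, which is the trade-off described above.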
No. Word2Vec takes in words and converts them to high-dimensional vectors. The cosine distance between vectors generally indicates similarity of meaning, and vector differences can capture relationships: for example, [father]-[mother] is close in distance to [male]-[female].<p>There's nothing like an abstract syntax tree, nor anything programmatic in the traditional meaning of programming, going on inside the math of an LLM. It's all just weights and wibbly-wobbly / timey-wimey <i>stuff</i> in there.
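The vector arithmetic above can be sketched in a few lines. These 3-dimensional "embeddings" are hypothetical values picked only to make the analogy work; real word2vec vectors have hundreds of dimensions and are learned from a corpus.

```python
import math

# Hypothetical toy vectors, not learned embeddings.
vectors = {
    "father": [0.9, 0.8, 0.1],
    "mother": [0.9, 0.1, 0.8],
    "male":   [0.2, 0.9, 0.1],
    "female": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The analogy from the comment: father - male + female ≈ mother.
offset = [f - m + w for f, m, w in
          zip(vectors["father"], vectors["male"], vectors["female"])]

# Which vocabulary word lies closest to the offset vector?
best = max(vectors, key=lambda w: cosine(vectors[w], offset))
```

Note that nothing here resembles a parse tree; the "relationship" exists only as geometry in the embedding space, which is the commenter's point.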
I think it’s useful to draw a Chomsky-esque distinction here between understanding and usefulness.<p>I think LLMs haven’t advanced our understanding of how human language syntax/semantics work, but they’ve massively advanced our ability to work with it.