> I’m going to write down what I hear, not what you say.<p>There is a mysterious boundary here that separates <i>meaning</i> from <i>expression</i>. I've been working on an idea that could move this boundary from the inside of the human mind into the realm of software. I call it the Story Empathizer.<p>story = meaning + expression<p>When we express ourselves in writing, we encode meaning into text. The result is a subset of all possible written text: intentionally written text. I call that subset "story".<p>The real challenge is to do this process backwards. To read a story - and see more than text - we need to unravel the original intentions of the writer. In doing so, we define a boundary between meaning and expression.<p>But what if we could take that boundary, and put it into software? Traditionally, we do this to the extreme: limiting ourselves to "context-free grammar", we minimize expression to zero, factoring it out of the equation entirely. Source code always means precisely what it says. All of the what and how is carefully preserved, but the <i>why</i> is nowhere to be found. Mystery averted. Natural language is full of mysterious why, locked away from syntax parsers in a cage of ambiguity.<p>If we can define that ambiguity, then we answer the why, and draw a boundary between meaning and expression. With that boundary, we could read more story, separating its meaning from its expression. We could go backwards, too: rewriting the meaning in terms of another story's expression. That's the process I call "empathizing".<p>Of course, all of this is so abstract, I'm struggling to tie it back down to reality.
There's a lot to criticize about the current crop of LLMs, but one area where I've found them to shine is in transcribing and summarizing meetings (I use Copilot in Teams).<p>Wherever I've worked, as the meeting organizer you're implicitly the designated notetaker. I'm not smart enough to take notes and listen simultaneously, so I've always felt at a disadvantage when organizing meetings. Copilot summaries have been a godsend for me, as I'm able to focus 100% of my attention on the discussion knowing that the salient points will be captured for us by Copilot. And because I'm attending the meeting, I can verify the accuracy of the summaries. So far, having used it in countless meetings, I've never found a fabrication.
I like to repeat out loud what I heard, but in my own words. I believe it uncovers assumptions made by the speaker and lets newcomers learn about them. Sometimes, though, I feel like I am calling people out when the language used was particularly obfuscated. I think it does require some level of empathy and honesty in the organization.<p>From another point of view, it seems like if we had a shared document automatically created from the meeting transcription/summary, it would reduce the friction and other biases. (Oh look, the LLM is dumb, it wrote that the project is over budget and we depend on risky integrations)