Semantic note-taking

42 points by luu about 1 year ago

3 comments

thomastjeffery about 1 year ago
> I'm going to write down what I hear, not what you say.

There is a mysterious boundary here that separates *meaning* from *expression*. I've been working on an idea that could move this boundary from the inside of the human mind into the realm of software. I call it the Story Empathizer.

story = meaning + expression

When we express ourselves in writing, we encode meaning into text. The result is a subset of all possible written text: intentionally written text. I call that subset "story".

The real challenge is to do this process backwards. To read a story - and see more than text - we need to unravel the original intentions of the writer. In doing so, we define a boundary between meaning and expression.

But what if we could take that boundary, and put it into software? Traditionally, we do this to the extreme: limiting ourselves to "context-free grammar", we minimize expression to zero, factoring it out of the equation entirely. Source code always means precisely what it says. All of the what and how is carefully preserved, but the *why* is nowhere to be found. Mystery averted. Natural language is full of mysterious why, locked away from syntax parsers in a cage of ambiguity.

If we can define that ambiguity, then we answer the why, and draw a boundary between meaning and expression. With that boundary, we could read more story, separating its meaning from its expression. We could go backwards, too: rewriting the meaning in terms of another story's expression. That's the process I call "empathizing".

Of course, all of this is so abstract, I'm struggling to tie it back down to reality.
maroonblazer about 1 year ago
There's a lot to criticize about the current crop of LLMs, but one area where I've found it to shine is in transcribing and summarizing meetings (I use Copilot in Teams).

Wherever I've worked, as the meeting organizer you're implicitly the designated notetaker. I'm not smart enough to take notes and listen simultaneously, so I have always felt at a disadvantage when organizing meetings. Copilot summaries have been a godsend for me, as I'm able to focus 100% of my attention on the discussion knowing that the salient points will be captured for us by Copilot. And because I'm attending the meeting, I can verify the accuracy of the summaries. So far, having used it in countless meetings, I've never found a fabrication.
[Comment #40436994 not loaded]
adelpozo about 1 year ago
I like to repeat out loud what I heard, but in my own words. I believe it uncovers assumptions made by the speaker and lets newcomers learn about them. Sometimes, though, I feel like I am calling people out when the language used was particularly obfuscated. I think it does require some level of empathy and honesty in the organization.

From another point of view, it seems like if we had a shared document automatically created from the meeting transcription/summary, it would reduce the friction and other biases. (Oh look, the LLM is dumb, it wrote that the project is over budget and we depend on risky integrations.)