
Apple's 3B LLM Outperforms GPT-4

4 points by shubham_saboo about 1 year ago

2 comments

Someone about 1 year ago

> Apple has released ReALM (Reference Resolution As Language Modeling)

Interesting use of the word “released”. As far as I can tell, they published a paper (https://arxiv.org/abs/2403.20329), but didn’t release their model, their training set, or their code.

All we have is a rough description of the approach and numbers measuring how well it works.

I don’t know whether reproducing their results would be easy or difficult, but they certainly don’t make it easy.

Edit: other articles use ‘reveal’, not ‘release’. That’s a bit better, but IMO still a bit too optimistic.
rany_ about 1 year ago

> Apple has released ReALM (Reference Resolution As Language Modeling), a new method for improving how AI understands references made during conversations and to items displayed on a screen or operating in the background. Imagine asking your phone to “call the top pharmacy on the list” without specifying which one – ReALM aims to figure out exactly what you mean.

Seems like the GPT-4 they’re comparing against is GPT-4 Vision. It’s still impressive as it is; there’s no need for the clickbait...
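To make the quoted description concrete: the core idea of “reference resolution as language modeling” is to serialize what is on screen into plain text and let a language model pick the referent. The sketch below is an illustration of that idea only, not Apple’s implementation (none was released); ScreenEntity, build_prompt, and query_llm are all hypothetical names.

    # Minimal sketch, not Apple's code: the paper (https://arxiv.org/abs/2403.20329)
    # was published without model, data, or implementation. This just illustrates
    # the general idea the quote describes: serialize on-screen entities into a
    # text prompt so an ordinary language model can resolve a reference like
    # "call the top pharmacy on the list". All names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ScreenEntity:
        index: int   # position on screen, top to bottom
        kind: str    # e.g. "business_name", "phone_number"
        text: str    # the visible text of the entity

    def build_prompt(entities: list[ScreenEntity], request: str) -> str:
        # Flatten the screen into numbered lines so the model can answer
        # with an index: reference resolution recast as plain text modeling.
        screen = "\n".join(f"{e.index}. [{e.kind}] {e.text}" for e in entities)
        return (
            "On-screen entities, top to bottom:\n"
            f"{screen}\n\n"
            f'User request: "{request}"\n'
            "Reply with the index of the entity the request refers to."
        )

    entities = [
        ScreenEntity(1, "business_name", "Walgreens Pharmacy (0.3 mi)"),
        ScreenEntity(2, "business_name", "CVS Pharmacy (1.1 mi)"),
        ScreenEntity(3, "phone_number", "(555) 010-2368"),
    ]
    prompt = build_prompt(entities, "call the top pharmacy on the list")
    # resolved = query_llm(prompt)  # hypothetical call to whatever model you use
    print(prompt)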