The popular press seems to do a bad job covering AI-related developments. Twitter is too scattered and academic papers are too narrow.<p>So where do working professionals get seasoned, mature coverage of this space? What would be the AI equivalent of the Economist, AnandTech, or Tom’s Hardware?
I've been trying to do this on my blog. Here's my AI tag: <a href="https://simonwillison.net/tags/ai/" rel="nofollow noreferrer">https://simonwillison.net/tags/ai/</a>
The recent progress in “AI” is almost entirely due to advances in large language models (LLMs).<p>In some respects the hype is real; LLMs perform extraordinarily well on a wide range of previously hard tasks.<p>On the other hand, people seem to equate these advances with “strong AI” (or AGI). We are one step closer, sure, but the calculator was also a step forward.<p>We’ve created a mirror of all (most) human knowledge, queryable via natural language. People look into this mirror and see themselves, sometimes things greater than themselves.<p>This mirror tricks us into thinking the machine will soon replace us. It’s so accurate, why would it not?<p>Fortunately, it’s just a mirror, and we’re the bear in the woods seeing its reflection for the first time. Scared and ready to fight.<p>If you focus on the technology (LLMs) and treat with caution anyone hyping “AI” in general, you have a filter for what’s real and what should be questioned.
I follow these on YouTube:<p>* Matt Wolfe: <a href="https://www.youtube.com/@mreflow">https://www.youtube.com/@mreflow</a><p>* MattVidPro AI: <a href="https://www.youtube.com/@MattVidPro">https://www.youtube.com/@MattVidPro</a><p>* Two Minute Papers: <a href="https://www.youtube.com/@TwoMinutePapers">https://www.youtube.com/@TwoMinutePapers</a><p>* Dr Alan D. Thompson: <a href="https://www.youtube.com/@DrAlanDThompson">https://www.youtube.com/@DrAlanDThompson</a><p>* Curious Refuge: <a href="https://www.youtube.com/@curiousrefuge">https://www.youtube.com/@curiousrefuge</a>
I had the same problem, so I ended up classifying all HN posts because I believe HN is my most relevant and trustworthy source for tech news. Example: <a href="https://www.kadoa.com/hacksnack/6194542d-2157-4e3c-8321-a43728423d36" rel="nofollow noreferrer">https://www.kadoa.com/hacksnack/6194542d-2157-4e3c-8321-a437...</a><p>This was more of an experiment for a personalizable HN feed, but I'll fully productize it if there is enough interest.
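Classifying posts by topic can be surprisingly simple to prototype. A minimal sketch of the idea, assuming nothing about Kadoa's actual (presumably LLM-based) classifier: the topic names and keyword lists below are my own illustrative choices.

```python
# Toy topic classifier for HN story titles via keyword matching.
# Categories and keywords are illustrative, not Kadoa's real taxonomy.

TOPIC_KEYWORDS = {
    "ai": ["llm", "gpt", "diffusion", "neural", "openai", "machine learning"],
    "security": ["vulnerability", "exploit", "cve", "breach"],
    "hardware": ["gpu", "chip", "risc-v", "semiconductor"],
}

def classify(title: str) -> list[str]:
    """Return every topic whose keywords appear in the lowercased title."""
    lowered = title.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in lowered for word in words)]

print(classify("OpenAI releases a new GPT model"))        # ['ai']
print(classify("GPU vulnerability affects ML training"))  # ['security', 'hardware']
```

A real personalized feed would replace the keyword table with an embedding or LLM call, but the filtering loop around it stays the same shape.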
I find Zvi Mowshowitz to be the best source available (<a href="https://thezvi.substack.com/" rel="nofollow noreferrer">https://thezvi.substack.com/</a>). He’s in the X-risk camp (and so am I) but seems to have a clear view of some AI things being exciting, others being dangerous, and still others being irrelevant.<p>I don’t trust most AI-positive sources because they almost never have anything negative to say at all, so they’re clearly in it to hype AI and not to inform anyone of true things. I don’t trust Gary Marcus’s opinion for a similar reason.
Following AI-related RSS feeds from the ML research arms of the big companies is nice. Not as hypey as the popular press, and you get more technical detail. Most mail clients support RSS nowadays, so you can even get notified when a new post is made, instead of periodically checking on Mondays or something. IBM, Microsoft, Meta (I don't think they have RSS though), Nvidia, OpenAI, and Google all have great blogs that cover their work.<p>Another good resource is the YouTube channel "Two Minute Papers." It sometimes has a lot of hype, but it does a good job of showcasing recent work.
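If your client doesn't handle RSS, rolling your own poller is a few lines of stdlib Python. A minimal sketch: the feed content below is a made-up sample, not any company's real feed, and a real poller would fetch the XML over HTTP instead of using an inline string.

```python
# Minimal sketch: extract post titles and links from an RSS 2.0 feed.
# SAMPLE_RSS stands in for XML you would fetch from a research blog's feed URL.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example ML Research Blog</title>
    <item><title>Scaling Laws Revisited</title><link>https://example.com/1</link></item>
    <item><title>Efficient Fine-Tuning</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def latest_posts(rss_xml: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title", ""), item.findtext("link", ""))
            for item in root.iter("item")]

for title, link in latest_posts(SAMPLE_RSS):
    print(f"{title} -> {link}")
```

Diff the returned pairs against what you saw last run and you have a bare-bones notifier.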
Surprised no one has mentioned AK’s daily paper digest: <a href="https://huggingface.co/papers" rel="nofollow noreferrer">https://huggingface.co/papers</a><p>He also posts summaries on Twitter, or at least he used to but my Twitter account is glitched and I can’t see Tweets anymore
In general: any new research from OpenAI that is subsequently quickly replicated by the actual open source community (e.g. Latent Consistency Models, for a recent example). They have put out research that hasn't gone anywhere in the past - which is why I suggest waiting until the open source community or other AI startups start to copy them on that specific research.<p>Another group worth watching is the members of the `CompVis` group that originally developed VQGAN and Latent Diffusion models. Although I'm uncertain how much of the team remains, as many seem to have realized they can do more research (and make more money) by working at the various research labs popping up.
Those outlets cover mature industries. And as Charles Kettering (inventor of the electric starter motor) tells us: "You can’t plan industries". Of course, no one is yet out in front preaching the settled science, so just enjoy the chaos and the behind-the-scenes look at the birth of an industry, or use an aggregator.<p><a href="https://allainews.com" rel="nofollow noreferrer">https://allainews.com</a><p><a href="https://nuse.ai" rel="nofollow noreferrer">https://nuse.ai</a><p><a href="https://news.bensbites.co/newest" rel="nofollow noreferrer">https://news.bensbites.co/newest</a>
<a href="https://trendingpapers.com/" rel="nofollow noreferrer">https://trendingpapers.com/</a><p>Pure numbers: the top trending papers surface. Rankings are a function of PageRank (citations, and the importance of the papers doing the citing), the authors' previous body of work, etc.<p>The filters help select a sub-area (NLP, Computer Vision, etc.) and slice by what's really new (released over the last week, last 3 months, last 6, etc.).<p>The tool is designed to solve exactly this problem.
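The PageRank idea behind such rankings is worth seeing concretely: a paper's score depends not just on how many papers cite it, but on the scores of the citing papers. A minimal power-iteration sketch, with a made-up three-paper citation graph; trendingpapers.com's actual weighting (author history, recency) is not reproduced here.

```python
# Power-iteration PageRank over a directed graph {node: [cited nodes]}.
# Illustrative only; the citation graph at the bottom is invented.

def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Return a PageRank score per node; scores sum to 1."""
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new = {node: (1 - damping) / n for node in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        # Dangling nodes (no outgoing citations) spread their rank uniformly.
        dangling = damping * sum(rank[node] for node in nodes
                                 if not links.get(node))
        for node in nodes:
            new[node] += dangling / n
        rank = new
    return rank

# Paper C is cited by both A and B, so it ranks highest.
citations = {"A": ["C"], "B": ["C"], "C": []}
ranks = pagerank(citations)
print(max(ranks, key=ranks.get))  # prints: C
```

The damping factor (0.85 is the classic choice) models a reader occasionally jumping to a random paper instead of following citations.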
Twitter is good but you <i>have</i> to filter it. Start by following AI researchers doing real work, not pundits. However that's not enough by itself. It is essential to use the "mute words" feature to aggressively prune any and all tweets about politics and other crap. You're left with a much denser stream of people's informed thoughts about the latest research as it comes out.
I was looking for something similar as well since the initial Stable Diffusion launch, but I've ended up just using the HN front page for that, with good results. Most things end up here, even if not driven by hype, and the discussions that follow are usually "good enough," albeit with quite a few off-topic comments.
Zvi does a very detailed roundup of the weekly news, with an AI x-risk lens:<p><a href="https://thezvi.substack.com/" rel="nofollow noreferrer">https://thezvi.substack.com/</a><p>Pretty thorough (though verbose) if you just want to stay on top of developments.
On the AI policy side: <a href="https://aipoliticalpulse.substack.com/" rel="nofollow noreferrer">https://aipoliticalpulse.substack.com/</a>
Well, given that industry people and “experts” confidently predicted that what happened in 2023 could only happen hundreds of years from now, or not at all… I would say that the “hype” has as much veracity as anything else. The simple fact is that no matter how carefully you curate your news, you will have no idea what's coming. And coming soon. Most people haven't wrapped their heads around it yet, but the plain and obvious truth is that technology, especially AI, must be slowed, paused, or regulated in some way, because too much change too fast is very dangerous.<p>Please don't comment about previous eras of technology and change. Nothing even comes close to comparing.
At this point it's better to look for specific people rather than a single source. Some suggestions:<p>Gary Marcus: <a href="https://garymarcus.substack.com/" rel="nofollow noreferrer">https://garymarcus.substack.com/</a><p>Timnit Gebru and Dr. Emily Bender: <a href="https://www.dair-institute.org/" rel="nofollow noreferrer">https://www.dair-institute.org/</a><p>Alex Hanna, Mystery AI Hype Theater: <a href="https://www.buzzsprout.com/2126417" rel="nofollow noreferrer">https://www.buzzsprout.com/2126417</a><p>Dr. Émile P. Torres: <a href="https://www.xriskology.com/" rel="nofollow noreferrer">https://www.xriskology.com/</a>