
AI Canon

518 points by nihit-desai · almost 2 years ago

36 comments

jhp123 · almost 2 years ago
If you click the domain on this submission, you'll see loads of articles from a16z on the topic of generative AI.

Click back a couple years and you'll find this page: https://news.ycombinator.com/from?site=a16z.com&next=29816846 with submissions like "DAOs, a Canon": https://news.ycombinator.com/item?id=29440901
lwneal · almost 2 years ago
This is a fine list, but it only covers a specific type of generative AI. Any set of resources about AI in general has to at least include the truly canonical Norvig & Russell textbook [1].

Probably also canonical are Goodfellow's Deep Learning [2], Koller & Friedman's PGMs [3], the Krizhevsky ImageNet paper [4], the original GAN [5], and arguably also the AlphaGo paper [6] and the Atari DQN paper [7].

[1] https://aima.cs.berkeley.edu/
[2] https://www.deeplearningbook.org/
[3] https://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193
[4] https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
[5] https://arxiv.org/abs/1406.2661
[6] https://www.nature.com/articles/nature16961
[7] https://www.nature.com/articles/nature14236
negamax · almost 2 years ago
I am sorry, but I am not a believer in a16z anything after their massive crypto token scams and wealth extraction. We all need to move away from these companies that continue to bloat in private and then have a big payday as a public company.
alanpage · almost 2 years ago
And why should we trust their judgement about anything, after they put money into Adam Neumann's new company (after the WeWork debacle)?

CNBC: https://www.cnbc.com/2022/08/15/a16z-to-invest-in-adam-neumanns-new-residential-real-estate-company.html
TradingPlaces · almost 2 years ago
Came for everyone roasting a16z. Was not disappointed.
ryanSrich · almost 2 years ago
Looking at these comments, I can't think of another VC that has burned as much goodwill among technical people as a16z has. Don't get me wrong, it's well deserved, but it's just surprising how universal it seems to be (at least in this thread).
sharemywin · almost 2 years ago
Build AI or just invest in chip makers?

https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/

"Over the last year, we've met with dozens of startup founders and operators in large companies who deal directly with generative AI. We've observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven't yet achieved large commercial scale.

In other words, the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven't captured most of it."
whywhywhywhy · almost 2 years ago
Getting whiplash from the 90-degree handbrake turn the crypto grifters have taken into being AI grifters.
moinnadeem · almost 2 years ago
I would hold off on the skepticism for the moment.

I know the authors of the blog post quite well. Say what you will about the firm, but one of the authors has been investing in machine learning since 2016, and another has a PhD in CS (including a SIGCOMM Test of Time award!).

I come from a strong ML background (multiple publications, PhD dropout), and I would say that the canon is actually quite good.
rdli · almost 2 years ago
I was an early member of the CNCF community (circa 2016), and at the time I thought "wow, things are moving quickly." Lots of different tech was being introduced to solve similar problems -- I distinctly remember multiple ways of templating K8S YAML :-).

Now that I'm spending time learning AI, it feels the same -- but the innovation pace feels at least 10x faster than the evolution of the cloud-native ecosystem.

At this point, there's a reasonable degree of convergence around the core abstractions you should start with in the cloud-native world, and an article written today on this would probably be fine a year from now. I doubt this is the case in AI.

(Caveat: I've only been learning about the space for about 4 weeks, so maybe it's just me!)
zeroxfe · almost 2 years ago
> Andrej Karpathy was one of the first to clearly explain (in 2017!) why the new AI wave really matters.

Geoff Hinton had been saying this well before 2017. I remember his talks at Google ~2013ish.
Imnimo · almost 2 years ago
The trick here is that they get to put their own think pieces alongside actually influential work and pretend that the two deserve to share a stage.
davidhunter · almost 2 years ago
I hope Tyler Cowen can ask Marc Andreessen how AI works so that we can all learn something from the master.
xpe · almost 2 years ago
> Research in artificial intelligence is increasing at an exponential rate.

Probably in the blundering sense of "exponential", meaning a lot. But what are some specific numbers? (such as publications)
nologic01 · almost 2 years ago
Nvidia is 25% up on "AI guidance", a16z publishes "AI Canon".

It's settled -> AI is the new crypto.
dpflan · almost 2 years ago
Looking at the authors, was this created by experts in AI? Is it sufficient to truly be a "canon"?
uptownfunk · almost 2 years ago
Wow, why so much hate against a16z? There's a really funny clip about Marc on the Rogan podcast where he is like "I have to come on Rogan, there's so much clout" or something to that effect. Rogan was immediately like "igghh".
mark_l_watson · almost 2 years ago
Well, that is a good list. I would guess that I have only previously read the content from about 15% of the links, oh well!

Like everyone else, starting about a year and a half ago I have found it really difficult to stay up to date.

I try to dive deep on a narrow topic for several months and then move on.

I am just wrapping up a dive into GPT+LangChain+LlamaIndex applications. I am now preparing to drop most follows on social media for GPT+LangChain+LlamaIndex and try to find good people and companies to follow for LLM+Knowledge Graphs (something I tried 3 years ago, but the field was too new).

I find that when I want to dive into something new, the best starting point is finding the right people who post links to the best new papers, etc.
gist · almost 2 years ago
Pompous to use the word 'canon' to describe what amounts to a bunch of links and thoughts/opinions. It implies the authors of the various articles are the authoritative sources/experts with whom there is no point disagreeing.
boringg · almost 2 years ago
Anyone else feel like we've seen peak a16z at this point?
SilverBirch · almost 2 years ago
From the people who brought you web3. Look where the crowd is going, run to the front, and shout "Follow me!"
oh_sigh · almost 2 years ago
A16Z: Friendship ended with Blockchain. Now AI is my best friend.

What's the last investment A16Z was actually ahead of the curve on? I guess it isn't important, since from their position they don't rely on being ahead of the curve in order to make good investments; they make their investments good through their network and funding abilities.
para · almost 2 years ago
It's a good resource, but I hardly think a16z is the team to host the "AI Canon".
dbs · almost 2 years ago
I was quite surprised to see Sequoia getting involved in crypto fiascos.

My data sample is very small, but I have a pretty good track record of shifting career focus in the last 20 years. In particular, 2000 and 2008 were two HUGE shifts for me, as the writing was on the wall before the crisis hit. The common theme driving the change: too much competition. I jumped out of areas where there was still tremendous growth to be seen but no serious money to be made.

I'm calling a third.
WoahNoun · almost 2 years ago
It's just a list of links with no real substance. Don't they have some crypto scams to attend to?
seydor · almost 2 years ago
> Research in artificial intelligence is increasing at an exponential rate.

...but then most of the list is transformers & Stable Diffusion.

Anyway, oobabooga and automatic1111 are doing more to spread AI than many of those papers.
king_magic · almost 2 years ago
It's almost offensive how this "AI Canon" leaves out landmark AI results from... what, before the 2020s? (or 2017, I suppose) Honestly, it reads like something generated by ChatGPT.
natural219 · almost 2 years ago
Seems like a good list; I enjoyed the comic explainer of Stable Diffusion and learned a thing. Thank you to the authors and a16z for publishing this :).
kvetching · almost 2 years ago
Perfect timing, I was just prescribed Vyvanse + Adderall.
zxienin · almost 2 years ago
Looking at the content list, all I need now is a brain-AI interface that "uploads" all of it to my brain's neural net.

</cheeky>
yellow_postit · almost 2 years ago
I'd buy a copy of these all bound into a nice book, as a point-in-time industry collectible.
ssn · almost 2 years ago
Hope this is an "in progress" article.

Not a single resource or pointer mentioning "ethics"?
mirekrusin · almost 2 years ago
Why does everybody (including this a16z dude) underestimate/not mention:

1. Quality of input data - for language models that are currently set up to be force-fed with any incoming data instead of real training (see 2.), this is the greatest gain you can get for your money - models can't distinguish between truth and nonsense; they're forced to follow training-data auto-completion regardless of how stupid or sane it is.

2. Evaluation of input data by the model itself - self-evaluating during training what is nonsense and what makes sense/is worthy of learning, based on the knowledge gathered so far, dealing with biases in this area, etc.

Current training methods equate things like first-order logic with any kind of nonsense - having in their defense only quantity, not quality.

But there are many widely repeated things that are plainly wrong. Simplifying this thought - if there weren't, there would be no further progress in humankind. We constantly reexamine assumptions and come up with new theories, leaving solid axioms untouched - why not teach this approach/hardcode it into LLMs?

Those two aspects seem to be problems with large gains, yet nobody seems to be discussing them.

Align training towards common/self sense and good/own judgement, not unconditional alignment towards input data.

If fine-tuning works, why not start training with first principles - dictionary, logic, base theories like sets and categories, an encyclopedia of facts (omitting historic facts, which are irrelevant at this stage), etc. - taking snapshots at each stage so others can fork their own training trees. Maybe even stop calling fine-tuning fine-tuning; they're just learning stages.
Let researchers play with paths on those trees and evaluate them to find something more optimal, find optimal network sizes for each step, allow models to gradually grow in size, etc.

To rephrase it a bit - we're saying that base models learned on large data work well when fine-tuned. Why not check whether base models trained on first principles, then continued on concepts that depend recursively on previously learned first principles, are efficient - did anybody try?

As a concrete example - you want an LLM to be good at math? Tokenize digits, teach it to do base-10 math, teach it addition, subtraction, multiplication, division, exponentiation, all known basic math operations/functions, then grow from that.

You want it to do good code completion? Teach it BNF, parsing, ASTs, interpreting, then code examples with simple output, then more complex code (GitHub stuff).

Training LLMs should start with teaching a tiny model ASCII, numbers, basic ops on them, then slowly introducing words instead of symbols ("is" instead of "="), then forming basic phrases, then basic sentences, basic language grammar, etc. - everything in the software 2.0 way - just throw in examples that have expected output and do back-propagation/gradient descent on it.

Training has to have a way of gradually growing the model size in an (ideally) optimal way.
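The staged-training idea in the comment above can be sketched structurally: a curriculum of stages, each trained on top of the previous checkpoint, with a snapshot saved after every stage so that others can fork the training tree at any point. This is only an illustrative skeleton, not an endorsed method: `train_stage` is a stand-in for a real gradient-descent loop, and the stage names are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Checkpoint:
    """A forkable snapshot taken after one curriculum stage."""
    name: str
    parent: Optional[str]       # stage this checkpoint was trained from
    skills: Tuple[str, ...]     # stand-in for the learned weights

def train_stage(base: Checkpoint, stage: str) -> Checkpoint:
    # Placeholder for a real training loop: the new checkpoint keeps
    # everything the parent learned and adds the current stage.
    return Checkpoint(name=stage, parent=base.name,
                      skills=base.skills + (stage,))

def run_curriculum(stages: List[str]) -> Dict[str, Checkpoint]:
    """Train the stages in order, snapshotting after each one so a
    later researcher can fork from any point in the tree."""
    snapshots: Dict[str, Checkpoint] = {}
    current = Checkpoint(name="init", parent=None, skills=())
    for stage in stages:
        current = train_stage(current, stage)
        snapshots[stage] = current
    return snapshots

# A hypothetical first-principles ordering, echoing the comment:
curriculum = ["ascii-and-digits", "base-10-arithmetic",
              "basic-grammar", "logic-and-sets", "encyclopedia"]
snaps = run_curriculum(curriculum)
```

Because every snapshot records its parent, alternative training paths form a tree rather than a single lineage, which is what would let researchers compare curricula or grow model size between stages.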
boeingUH60 · almost 2 years ago
These hucksters have found the next thing to latch on to, I see.
orsenthil · almost 2 years ago
Usually this is written as an awesome-list in a GitHub repo.
personjerry · almost 2 years ago
Nice, later this afternoon I'll have ChatGPT read these and summarize them for me.