TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Turing awardees republished key methods and ideas without credit

101 points by Luc, over 1 year ago

19 comments

SeanLuke, over 1 year ago
We published a paper a while back in a genetic programming conference, and before the paper had been published (only the title was announced) Schmidhuber guessed that we had not cited his work. He very publicly put us through the wringer for not citing him as the inventor of the concept we were examining. In fact we had *not* cited him: but among our dozen or so previous examples, we had cited two *others* which long predated his work and were the actual seminal papers. He had failed to cite them himself.
erostrate, over 1 year ago
List of "famous" ML people not to waste time on:

- Gary Marcus
- Juergen Schmidhuber
- Pedro Domingos
- Max Tegmark
- Eliezer Yudkowsky

Some context for people unfamiliar with ML research: the author, Schmidhuber, is well known for claiming that he should get credit for many ML ideas. Most ML researchers think that:

- He doesn't deserve the credit he claims, in most if not all cases.
- There are a few cases where his papers should have been cited and weren't. That's fairly common.
- People do not get much credit for formulating an abstract idea in a paper or implementing it on a toy problem. Credit belongs to whoever actually makes it work.
- Credit assignment in ML is not perfect but roughly works.
YeGoblynQueenne, over 1 year ago
This whole comment section is full of absolutely unacceptable ad hominem attacks on Schmidhuber, from people who most likely haven't even read any of the works in question and are certainly not showing any of the "intellectual curiosity" this site is supposed to be about.

Anyone who cares about academic integrity should at least not attack someone complaining of plagiarism. That sort of attack is the academic equivalent of blaming the victim. If even half of Schmidhuber's accusations have a basis, that is still a major academic scandal of epic proportions.
harveywi, over 1 year ago
LeCuna (noun): An empty space in a list of citations where the works of Jürgen Schmidhuber should appear.
Imnimo, over 1 year ago
Just from the title I knew exactly which awardees we were talking about, and who was going to be doing the talking.
P-NP, over 1 year ago
I can see some angry comments here, but so far I have not seen any facts that refute his claims. Once I spent a long time reviewing a related paper on Hacker News, and I think he is right about disputes B1, B2, B5, H2, H4, H5. I'd have to study the others more closely:

B: Priority disputes with Dr. Bengio (original date v Bengio's date):
B1: Generative adversarial networks or GANs (1990 v 2014)
B2: Vanishing gradient problem (1991 v 1994)
B3: Metalearning (1987 v 1991)
B4: Learning soft attention (1991-93 v 2014) for Transformers etc.
B5: Gated recurrent units (2000 v 2014)
B6: Auto-regressive neural nets for density estimation (1995 v 1999)
B7: Time scale hierarchy in neural nets (1991 v 1995)

H: Priority disputes with Dr. Hinton (original date v Hinton's date):
H1: Unsupervised/self-supervised pre-training for deep learning (1991 v 2006)
H2: Distilling one neural net into another neural net (1991 v 2015)
H3: Learning sequential attention with neural nets (1990 v 2010)
H4: NNs program NNs: fast weight programmers (1991 v 2016) and linear Transformers
H5: Speech recognition through deep learning (2007 v 2012)
H6: Biologically plausible forward-only deep learning (1989, 1990, 2021 v 2022)

L: Priority disputes with Dr. LeCun (original date v LeCun's date):
L1: Differentiable architectures / intrinsic motivation (1990 v 2022)
L2: Multiple levels of abstraction and time scales (1990-91 v 2022)
L3: Informative yet predictable representations (1997 v 2022)
L4: Learning to act largely by observation (2015 v 2022)
hiddencost, over 1 year ago
Schmidhuber really needs to stop. He&#x27;s been beating this drum for decades and he&#x27;s wrong.
RcouF1uZ4gsC, over 1 year ago
Reading the article and some of the links makes me feel like the author, Jürgen Schmidhuber, is the academic version of the patent troll.

It sounds like he published some theoretical musings back in the 1990s, without any real practical implementation that did anything useful, and since then has run around accusing the AI researchers who actually produced the concrete research and techniques in use today of plagiarism.
oldesthacker, over 1 year ago
The machine learning field as a whole has a huge credit assignment problem. This post seems to encourage other ML researchers to come out with their own priority disputes. Tomas Mikolov just aired his grievances:

> I wanted to popularize neural language models by improving Google Translate. I did start collaboration with Franz Och and his team, during which time I proposed a couple of models that could either complement the phrase-based machine translation, or even replace it. I came up (actually even before joining Google) with a really simple idea to do end-to-end translation by training a neural language model on pairs of sentences (say French - English), and then use the generation mode to produce translation after seeing the first sentence. It worked great on short sentences, but not so much on the longer ones. I discussed this project many times with others in Google Brain - mainly Quoc and Ilya - who took over this project after I moved to Facebook AI. I was quite negatively surprised when they ended up publishing my idea under the now famous name "sequence to sequence", where not only was I not mentioned as a co-author, but in fact my former friends forgot to mention me also in the long Acknowledgement section, where they thanked personally pretty much every single person in Google Brain except me. This was the time when money started flowing massively into AI and every idea was worth gold. It was sad to see the deep learning community quickly turn into some sort of Game of Thrones. Money and power certainly corrupt people...

Reddit post: "Tomas Mikolov is the true father of sequence-to-sequence" https://www.reddit.com/r/MachineLearning/comments/18jzxpf/d_tomas_mikolov_is_the_true_father_of/
33a, over 1 year ago
Somehow I knew it was Schmidhuber before I clicked the link.
qzw, over 1 year ago
Shouldn't the title be "Turing Awardees Accused of Republishing..." rather than how it currently reads?
daveguy, over 1 year ago
It's nice of Schmidhuber to point out the quality papers with theoretical advancements and actual validations that fix the intellectual problems with his weak musings.
gexaha, over 1 year ago
I wonder what Schmidhuber's colleagues think of all of this.
senderista, over 1 year ago
This guy has been the biggest blowhard in AI for years if not decades.
renecito, over 1 year ago
Shocking! Successful people getting credited for someone else's work.
gedy, over 1 year ago
As an outsider to this space, it seems suspect that all of the people he has a beef with (like LeCun) will gladly cite others and predecessors, but not Schmidhuber.

Has he explained his reasons for thinking there is some conspiracy? Otherwise it reflects badly on his assessment of himself, or possibly his mental state.
pavel_lishin, over 1 year ago
Cue the next HBomberguy video, please.
cs702, over 1 year ago
Copying and pasting Urban Dictionary's definitions of "to schmidhuber" [a], "schmidhuber" [b], and "schmidhubered" [c]:

---

to schmidhuber

When you publicly claim that someone else's idea remotely resembling your own was stolen from you.

---

schmidhuber

1) To interject for a moment and explain how one's recent popular idea is a few transformations away from your 1991 paper.

2) To miraculously produce fifty years of relevant literature after someone claims to trace the origin of an idea in a particular work.

---

schmidhubered

Being "schmidhubered" looks something like this:

1) Invent something brilliant that no one cares about. Experience derision.

2) That thing becomes popular years later. Someone else is given credit for inventing it. That person appears in the New York Times and is declared the smartest person alive.

3) Go on a campaign explaining the situation and how you are the rightful inventor and thus the rightful Smartest Person Alive.

4) Everyone accuses you of being a sore loser and no one takes you seriously.

5) A verb is named after you.

---

[a] https://www.urbandictionary.com/define.php?term=to+schmidhuber

[b] https://www.urbandictionary.com/define.php?term=schmidhuber

[c] https://www.urbandictionary.com/define.php?term=schmidhubered
1024core, over 1 year ago
Life would be so boring without Schmidhuber.