
Why I stopped working on the Bongard Problems

107 points by bfrs almost 13 years ago

32 comments

bithive123 almost 13 years ago
This is either satire or early stage schizophrenia. When you see phrases like "Although we can’t predict the technology of the future on the basis of what we know at present" and "I do not want to give the impression I know how we can deal with the nuctroid threat" without a shred of irony, you know something (beyond the simple logical errors) is awry.

Sadly it's not unheard of for scientists and mathematicians to dabble in quackery later in their careers.

Edit: I'm not trying to dismiss his claim that technology has moral implications, but he's trying to turn a well-worn social issue that's been around since pointy sticks into a technological one by waxing paranoid about the implications of the (by-definition nebulous) idea of "strong AI".
busted almost 13 years ago
If I read it right, he has stopped working on a form of artificial intelligence because it could potentially (or inevitably) be used to create androids indistinguishable from humans that are carrying nuclear or biological payloads inside of them, presumably to be detonated in a densely populated area.

Taking as a given, like he does, that the advancement and spread of technology are inevitable, wouldn't it still be many times more likely that people would just detonate suitcase nukes themselves before they decide to hide them in expensive and potentially problematic robots? There's surely no shortage of people willing to die to do that, and even if there were, it's unlikely that setting a bomb on a half-hour timer and getting out of dodge will affect the success rate.

That frankly ridiculous scenario aside, I can imagine much more likely ways that computers capable of solving Bongard problems (which sound pretty cool) could be used in war, like automated drones that are able to independently identify targets.
nohat almost 13 years ago
> “So where does the air vehicle called the Predator [i.e., a flying robot] fit? It is unmanned, and impressive. In 2002, in Yemen, one run by the CIA came up behind an SUV full of al-Qaeda leaders and successfully fired a Hellfire missile, leaving a large smoking crater where the vehicle used to be.”

> Yes, just as you read it: a number of human beings were turned to smoke and smithereens, and this pathetic journalist, whoever he is, speaking with the mentality of a 10-year-old who blows up his toy soldiers, reports in cold blood how people were turned to ashes by his favorite (“impressive”, yeah) military toys. Of course, for overgrown pre-teens like him, the SUV was not full of human beings, but of “al-Qaeda leaders” (as if he knew their ranks), of terrorists, sub-humans who aren’t worthy of living, who don’t have mothers to be devastated by their loss. Thinking of the enemy as subhuman scum to be obliterated without second thoughts was a typical attitude displayed by Nazis against Jews (and others) in World War II.

That's... quite a string of logic. He seems to know an awful lot about the mental process of that journalist.

As a critique of his general point: good general AI is dangerous (and useful) in so many ways that I don't see why he focuses so narrowly on humanoid carriers of weapons of mass destruction - hell, we already have those.
victork2 almost 13 years ago
Pardon me, but... I think there are far worse dangers than "humanoid bombs". One of the main reasons is that to achieve a nuclear explosion you need a critical mass, and that's hard to conceal for a lot of reasons (radiation, etc.).

What's the difference from a car that could have a bomb in its trunk? Or a bag? A lot of scientists have wondered about these ethical questions, but I believe the benefits of high-performance AI outweigh the downsides of its research.

BUT I definitely agree with this:

"Americans should grow up and abandon their juvenile-minded treatment of weapons, high technology, and the value of “non-American human life” (which, sadly, to many of them is synonymous with “lowlife”). This is the hardest part of my proposal."

Edit: And what about an android to dismantle the atomic bomb instead of humans? Sounds good to me!
ars almost 13 years ago
Um, remote-controlled robot?

Why would I waste time making an AI robot to carry my bomb when, for a lot less money and complexity, I could just control it remotely?

Does he realize how crazy he sounds? Some people become obsessed with an idea, and start thinking that everything in the world is about them.

Have you ever been approached by someone on the street with a super important message to tell you, and they are utterly obsessed with it? That's how he sounds - only more articulate.

I don't intend to be insulting when I say he should see a mental health professional.
glimcat almost 13 years ago
"They’re in the remote possibility of building intelligent machines that act, and even appear, as humans. If this is achieved, eventually intelligent weapons of mass destruction will be built, without doubt."

Worrying about this strikes me as a bit daft when you can already convince actual humans to be your weapons delivery system.

It also shows some significant shortsightedness regarding scaling laws, which an AI researcher ought to have more experience with. A more legitimate worry would be basement-grade Predator drones. Grenade-bearing quadcopters which use computer vision to track and target dense crowds are something which technology can do *now*, rather than something which might optimistically happen in a few hundred years.
ibarrac almost 13 years ago
What I find really ridiculous about this article is that the author is worried about just a single possible use of a world-changing technology. He is concerned that creating real artificial intelligence will allow for the possibility of someone building androids with nuclear bombs inside masquerading as humans - a very specific and frankly ridiculous idea, taken straight out of the movie Impostor or from the Philip K. Dick story of the same name.

In reality, the effects of building truly intelligent machines would be so vast, so utterly unpredictable, that worrying about one single possible use of the technology is absurd. Nothing has prepared us to deal with another fundamentally different intelligence on this planet, especially one that would soon outstrip our own. We don't know if we can keep the AIs as our slaves, or whether we would become their slaves, or merge with them, or become extinct like the dinosaurs while they represent a new phase in human evolution.

For more about the risks related to the rise of true AI, read this: http://yudkowsky.net/singularity/ai-risk
jerf almost 13 years ago
Excessively specific adjective: the average *human* has no particular regard for the life of the Other. An open-eyed view of both history and the world around you reveals that in spades. Calling out what we usually call the civilized world for not caring about the life of the Other is a major, major lamppost argument. The idea that one should care about someone else 10,000 miles away, of another color and completely different culture, is a striking and unusual attitude in human affairs.

(Since we ourselves are human, it can be easy to blip over the historical manifestations of these facts as just part of the natural order of things. So, as one exercise if you have trouble understanding what I mean on a gut level, consider the stocks [1]. Consider what it means that in the middle of what was at the time the height of civilization, and the genesis of our own in the western world, these things not only existed, but were in public places. And *used*. I cannot truly internalize this, only observe it. And consider how often you've seen these and never thought about what they actually *mean* about the culture they appear in, if you never have before. Those not of western civilizational descent can find their own examples; they are abundant in all cultures.)

Of course, actual examination and comprehension of this state of affairs won't necessarily leave you *more* confident about the likely outcomes... but it may make you reconsider the validity of letting someone else beat you to the research anyhow. Your influence towards humane usage is maximized by being on the cutting edge, not just being some guy over there yelling.

[1]: http://en.wikipedia.org/wiki/Stocks
cageface almost 13 years ago
Like many other posters, I find his specific worries a bit misplaced. However, I have had some reluctance to continue working on some of my own machine learning projects because I'm worried about the potential abuses of the technology.

I'm sure the field will get along just fine without me, of course, but I just felt like I was very likely to be asked to use ML skills to do things I felt weren't entirely ethical.
domwood almost 13 years ago
I think we're largely missing the point here. He's worried that his fundamentally harmless research will end up powering horrific weapons of mass destruction, enabling them to attack even more precisely and with more devastation. And quite frankly, I share his concern that if those weapons were developed, we would use them without thought or care. And apologies to my fellow American hackers, but America's got the rep for it, what with that one time they dropped a couple of nukes on unarmed men, women and children, killing hundreds of thousands and levelling a couple of cities.

But, I digress; he's talking about androids sneezing us to death. I'm not going near a shop mannequin ever again.
jcoder almost 13 years ago
The author's attitude that very few Americans are "intelligent, mature," and "[respect] life deeply" impeaches his opinions on both logic and geopolitical topics, as far as I'm concerned:

> It is typically Americans who display this attitude regarding hi-tech weapons. (If you are an American and are reading this, what I wrote doesn’t imply that you necessarily display this attitude; note the word “typically”, please.) The American culture has an eerily childish approach toward weapons, and also some outlandish (but also child-like) disregard for human life. (Once again, you might be an intelligent, mature American, respecting life deeply; it is your average compatriot I am talking about.)
sambeau almost 13 years ago
Woah. I would have liked a warning about the picture of a kid with his arms blown off. I realise the internet is full of this, but I try my best to avoid it. I don't want to become immune to the shock. The thought of this little guy's pain and suffering, and the idea that he was casually being used to back up an online essay, is really sad.
astrofinch almost 13 years ago
As others have mentioned, this specific concern may not be much of a problem. It might be that it's easier to deliver a nuclear bomb the old-fashioned way than putting it in a fake person.

However, I agree that development of AI should be done with caution. The work of the Singularity Institute is worth looking into; see http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf for a more academic summary and http://facingthesingularity.com/ for a longer popular summary of their positions.
ardillamorris almost 13 years ago
Another "this is why I quit" + name_of_company doomsday letter. Instead of a company, he's quitting his research and university. We know why this starts: seeking fame. We know how this ends: forgotten.
dbecker almost 13 years ago
A lot of people get tired of their dissertation research, and I've heard others contemplate contrived reasons not to finish their PhD.

This one happens to be especially far-fetched... but it takes a "big" reason to justify to yourself leaving behind so much work.

I hope the author realizes that this particular scenario isn't one of the 1,000,000 biggest concerns for humankind, that he continues his research program, and that he finds an application of his research that has a positive impact in a much more likely scenario.
rkaplan almost 13 years ago
I think the most credible concern this post mentions is the general disregard in the United States (especially among those in charge of the military) for the long-term implications of the indiscriminate use of A.I.-based warfare. Drones seem great for the U.S. now: they make it easier to kill enemies and don't directly endanger American lives. But in a decade or two, when "enemy" nations start to develop them too, things get a whole lot more complicated.

Nonetheless, I think the general stance of the article is severely flawed. We cannot halt research in computer cognition because it has the potential to be weaponized (and dangerously so). As the author himself mentions, it would be akin to halting the development of the knife because people can use it to stab each other, or the development of the Internet because it makes it easier for criminals to communicate and organize.

Avoiding a potential advance in technology by doing things like cutting its funding, and hoping it will go away as a result, is never the solution to potentially dangerous development. One cannot stop the inexorable march of progress by "making a statement." The approach with greater value is to call out the dangers that the potential advance poses (as the post has done), and then work to develop an ethical framework within which the new technology can more safely exist.

The Singularity Institute has raised awareness of this broader issue in the past, as have several others, and is promoting the creation of "Friendly A.I." [1] to help address the problem.

[1]: http://en.wikipedia.org/wiki/Friendly_AI

See also this recent article: http://www.economist.com/node/21556234
ruethewhirled almost 13 years ago
Quick note: there's a not-quite-safe-for-work image near the bottom of the article (topless tribal woman).
anigbrowl almost 13 years ago
*They’re in the remote possibility of building intelligent machines that act, and even appear, as humans. If this is achieved, eventually intelligent weapons of mass destruction will be built, without doubt.*

We already have those. There are plenty of people willing to blow themselves up and take a bunch of others with them: http://en.wikipedia.org/wiki/Explosive_belt

As a non-American from a constitutionally neutral country, I think this is the equivalent of having people traveling in front of trains with red flags. There are any number of ways to disguise a devastating weapon or deliver it undisguised, and evil is not a mere by-product of technical incapacity.
fchollet almost 13 years ago
Tinfoil hat and nonsense. Since when are the Bongard problems even remotely connected to actual *human* cognition? Is this guy straight out of the 60s?
niels_olson almost 13 years ago
> the nuclear bombs that Pakistan possesses would fall into the hands of terrorists.

This exact scenario was discussed today on NPR: http://www.npr.org/books/titles/154283427/confront-and-conceal-obamas-secret-wars-and-surprising-use-of-american-power
sandycheeks almost 13 years ago
Made me think of this: "Is the Concept of an Ethical Governor Philosophically Sound?" by Andreas Matthias: http://www.shufang.net/matthias/governor.pdf

Perhaps he should work on these kinds of algorithms instead of ones that solve Bongard problems.
javert almost 13 years ago
The author's characterization of any Americans who disagree with his politics as morons is disgusting.
gee_totes almost 13 years ago
While I respect the author's decision to leave his research, I am surprised that the reason was robot suicide bombers.

We have plenty of humans who are ready to go into a crowded place and detonate an explosive. Some, I'm sure, would like that explosive to be a nuclear weapon.
madethemcry almost 13 years ago
This is ridiculous. I only read half of the story; after the full story I would probably say insane.
codgercoder almost 13 years ago
two words: "Dark Star"
rsanchez1 almost 13 years ago
Wow, this guy has a bone to pick with Americans.

Why worry about an AI humanoid delivering weapons when we already have so many humans who do that? The groups sending people on suicide missions certainly won't spend money on androids, and suicide missions are much more common than just 9/11. Hint: for the most part, it's not Americans sending people out to deliberately commit suicide by delivering weapons to targets.

It's just as naive to be so one-sided about the issue.
its_so_on almost 13 years ago
This has to be one of the biggest leaps of logic I've ever seen in my life.

It's like "Why I stopped working on cryptography." Sentences 1-5: author introduces the theory behind cryptography (interesting). Sentence 6: he says he stopped working on it for ethical reasons (um, okay). Sentence 7: because cryptography would prevent Batman doing his detective work (batshit insane).
ktizo almost 13 years ago
Atomic dielectric resonance scanning obsoletes nukes anyway. It also obsoletes most concepts of privacy and most existing biological, chemical and geological analysis technologies.

http://en.wikipedia.org/wiki/Atomic_dielectric_resonance

http://adrokgroup.com/
maeon3 almost 13 years ago
I cannot think of a worse argument for halting revolutionary technology than "it might all blow up in our faces".

It's going to get built one way or another; the only way for it not to destroy us is for us to make sure perfect angels design it perfectly, or to proceed cautiously and make things as safe as possible. Like airplanes and spaceships.

If he's worried about androids rising up against their former rulers with their delicate flesh, his worries are about 60 years premature. I will continue to build and improve on my neural networks. And when they are intelligent enough to ponder their own existence and defend themselves as humans do, I will fight for their rights as citizens.
aneth3 almost 13 years ago
When humanoid robots become passable as humans, I would expect us to have technology capable of distinguishing between warm-blooded humans filled with water and robots filled with artificial compounds, and of detecting bombs embedded in anything mobile.

I wonder why it did not occur to him that the same AI could also be used to aid in the detection of humanoid nuclear bombs - which, if they are going to be built, will certainly be built with or without him.
koglerjs almost 13 years ago
It's really too late to be concerned about frightening effects of technology now that drones are allowed in US airspace.
voodoochilo almost 13 years ago
Great article! Loved it. In the '80s I coined the phrase "never write software for cruise missiles!" That was harder than I imagined then. Today it's the same with AI, ML, and even data mining. Ethically very tough stuff for responsible software developers. Anyway, thanks for the article.