
Ask HN: Do most AI researchers agree on the importance of value alignment?

1 point by markhenderson over 7 years ago
I received an email recently that went way over my head, so I figured I'd turn it over to people who are in the field of AI research. The email referenced Stuart Russell's TED talk: https://www.ted.com/talks/stuart_russell_how_ai_might_make_us_better_people?language=en and then asked the following questions, referencing Asimov's classic three rules:

1. Is it generally agreed in the field that Asimov's three rules are in fact downright dangerous? (He implies the rules are dangerous, but never states it explicitly.)
2. Is his program just a restatement in different terms (or a minor tweaking) of something common in the field?
3. Is implementation of his program at all realistic?
4. Do the majority of researchers in the field agree on the importance of something like his program?

Any insight would be incredibly valuable, thank you!

no comments