
Workers at Google DeepMind Push Company to Drop Military Contracts

31 points by hardmaru 9 months ago

4 comments

nunez 9 months ago
A) This is absolutely not gonna happen; fed contracts are huge, sticky money.

B) Everything can be used for good and bad. "AI" can help fighter pilots make decisions during a mission just as much as "AI" can help Air Force researchers find the next breakthrough in materials science.

C) Defense has absolutely insane amounts of data. (Satellite images, for example.) This is great for the advancement of AI in general.
coffe2mug 9 months ago
Wondering... why does this happen only at Google (Alphabet)? Don't the employees have a duty to honor the employment contracts they accepted? Why and how do MS or Oracle or Amazon never have these problems?
YeGoblynQueenne 9 months ago
>> But as the AI race heated up, DeepMind was drawn more tightly into Google proper. A bid by the lab's leaders in 2021 to secure more autonomy failed, and in 2023 it merged with Google's other AI team—Google Brain—bringing it closer to the heart of the tech giant

The way I understand it, what happened was that Reinforcement Learning (RL) went out of fashion at the same time that LLMs became wildly popular. DeepMind was all about RL, so their needs and wants were basically sidelined in favour of the new New Big Thing in AI™.

The reason, of course, that RL "fell out of fashion", as I say, is the continuing failure of RL approaches to work convincingly and reliably in the real world. RL (basically Deep RL, since that's all anyone's doing these days) works great in simulation, but there are two big problems with it.

The first one is generalisation, or lack thereof. RL doesn't generalise. You can train an RL agent in one environment and it will learn to solve the environment perfectly, if sometimes awkwardly, but if you take the same agent and put it in a different environment, even one from the same domain, it will basically die [1,2].

The second problem is that RL agents rely on a model of the dynamics of the environment, and those are not easy to come by: only humans are able to create robust, useful models of real-world environments. There are of course model-free RL approaches that learn a model by interaction with an environment, but those only work in virtual environments, for the simple reason that you can't learn real-world dynamics by model-free interaction with the physical world without dying many thousands of times.

So it looks like it's RL out, LLMs in, at Google as in everything else, and I guess we'll see what the Next Big Thing in AI™ is going to be after LLMs, and who is going to make their fortune with it.

[1] https://robertkirk.github.io/2022/01/17/generalisation-in-reinforcement-learning-survey.html

[2] I can't find that paper, if it was a paper, but there was a story about moving the paddle in Breakout a few pixels away and thereby causing an RL agent to fail.
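To make the generalisation point above concrete, here is a minimal sketch (an illustration added here, not code from the thread; it assumes a recent gymnasium release, and the map size, seeds, and hyperparameters are arbitrary choices): tabular Q-learning is trained on one FrozenLake map, and the resulting greedy policy is then evaluated on a second random map from the same domain.

    # Illustrative sketch only: train on one map, evaluate on another.
    # Assumes the `gymnasium` package is installed.
    import numpy as np
    import gymnasium as gym
    from gymnasium.envs.toy_text.frozen_lake import generate_random_map

    def train(env, episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
        # Tabular Q-learning with epsilon-greedy exploration.
        q = np.zeros((env.observation_space.n, env.action_space.n))
        for _ in range(episodes):
            s, _ = env.reset()
            done = False
            while not done:
                a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(q[s]))
                s2, r, terminated, truncated, _ = env.step(a)
                done = terminated or truncated
                # TD(0) update; the bootstrapped value is zeroed at episode end.
                q[s, a] += alpha * (r + gamma * np.max(q[s2]) * (not done) - q[s, a])
                s = s2
        return q

    def success_rate(env, q, episodes=100):
        # Fraction of episodes in which the greedy policy reaches the goal.
        wins = 0
        for _ in range(episodes):
            s, _ = env.reset()
            done, r = False, 0.0
            while not done:
                s, r, terminated, truncated, _ = env.step(int(np.argmax(q[s])))
                done = terminated or truncated
            wins += int(r > 0)
        return wins / episodes

    # Two random 4x4 maps: same domain, different layouts.
    train_env = gym.make("FrozenLake-v1", desc=generate_random_map(size=4, seed=0), is_slippery=False)
    test_env = gym.make("FrozenLake-v1", desc=generate_random_map(size=4, seed=1), is_slippery=False)
    q = train(train_env)
    print("trained map:", success_rate(train_env, q))
    print("unseen map:", success_rate(test_env, q))

On a typical run the first number is near 1.0 and the second near 0.0: the agent has learned its one environment rather than the task, which is the "it will basically die" behaviour the comment describes.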
know-how 9 months ago
Only a child brings their activism to work. Do the job you were hired to do; it is your livelihood and how you survive. Their priorities are all mixed up, but I can't fault them too much. Such people were raised by others who told them that their feelings matter more than objective reality.