
Humans progressively feel agency over events triggered before their actions

113 points by kvee over 1 year ago

15 comments

sstevenshang over 1 year ago
Since the animation was "based on the history of the players' past movements and on the beginning of their current movement", the human subjects did in fact have agency over the animation. Their brains probably figured out the pattern: "if I move the mouse towards a subject, it will explode", which changes the game a little but nonetheless gives the players agency.
sho_hn over 1 year ago
This is interesting in an abstract-intellectual sense, but doesn't really surprise me.

If I understood the abstract right, they made a program learn where a user clicks, then used this to anticipate where the user would click next and begin visual feedback prior to the actual click. Users either didn't think twice or understood that the system was anticipating them and rolled with it.

This is what I would expect. Extrapolation and prediction of how a system will evolve based on past experience (if I roll this ball off the table it will fall; it was me who set it in motion) is something humans master as young children.

From the submission title I expected it would be about something more like this: https://www.theatlantic.com/health/archive/2019/09/free-will-bereitschaftspotential/597736/

A wrong prediction, as it turns out ... :-)
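For the curious, the anticipation loop described above can be made concrete. Below is a minimal, hypothetical sketch: extrapolate the cursor's current heading and start the feedback animation at whichever target falls inside a narrow confidence cone. The cone threshold and the pure-heading heuristic are assumptions for illustration; the paper's actual model learns from each player's movement history rather than applying a fixed geometric rule.

    import math

    def predict_target(cursor, velocity, targets, cone_deg=15.0):
        """Guess which on-screen target the cursor is heading toward by
        comparing its direction of motion against the direction to each
        target; return a target only if it lies inside a narrow cone."""
        speed = math.hypot(velocity[0], velocity[1])
        if speed == 0:
            return None  # not moving: nothing to anticipate
        best, best_angle = None, cone_deg
        for t in targets:
            dx, dy = t[0] - cursor[0], t[1] - cursor[1]
            dist = math.hypot(dx, dy)
            if dist == 0:
                return t  # cursor is already on a target
            cos = (velocity[0] * dx + velocity[1] * dy) / (speed * dist)
            angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
            if angle < best_angle:
                best, best_angle = t, angle
        return best  # the caller would start feedback here, pre-click

    # Moving right from the origin: the rightward target wins the cone test.
    print(predict_target((0, 0), (1.0, 0.1), [(100, 5), (0, 100)]))  # (100, 5)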
clbrmbr over 1 year ago
Does anyone else sometimes have the experience where you feel like you know everything that's going to be said or happen, as if life were a movie you saw so many times in grade school that you know every line, but forgot about the film until just now?

I feel like this is another situation where the sense of time can be reversed.

This may derive from consciousness actually being a slow integration of diverse parallel processes.
Frummy over 1 year ago
My interpretation is that rather than a lack of agency, the paper demonstrates a perfect analogy of what agency turns out to be in real life: not our willpower or decision-making in the minutiae, but rather a general pattern of our behavior over time. Removing the explicit causal link for the click, making it instead implicit in an algorithm over their past behavior, is still a "reflection" of their will. Whether or not they happen to identify with the mimicked behavior is less significant. I think it's fitting because in real life the noise of minute events can detract from "us", but over time our values and character are revealed through actions and results.

Let's extend the concept. Deploying AI agents in the future, say with my values and simulated life experience, they would (maybe) act independently of me but according to my instructions or general desires and values. It's not far removed from an employer instructing their employees, or a parent instructing a child. Or, removing instruction, perhaps just an expectation of a certain type of maneuvering in the world. Or, presuming no expectation at all, say I have an AI copycat without knowing about it, acting like me but in another setting: there would be remnants of my will in how it acts. Like information theory or energy laws, since it has information replicating me that it uses to act in the world, it's like a lack of entropy; my will is preserved and extended. Disclaimer: none of what was just written makes too much sense, and there are many what-ifs.
ggm over 1 year ago
There are things I have seen in pop-sci which go to the minimum possible signal delay between some sense of things in the world and the brain being told, and the continuous model we operate as brains, which has to integrate over those inputs.

So, believing that is a tenable view of things, I can believe that in this model we maintain, we can assign 'agency' to actions which other parts of the model predict "are going to happen", based on mismatches between actual signal delay, "computed" delay, and synthesized interior world-view delay. Events can happen in real-world time and lag into the system. Events can lag into the system in a fully integrated manner, but we can have a computed sense of their likely outcome based on our internal predictive model.

Measurement across this would be complicated. I don't know that I think ML is going to be the best path, if it actually drives to some "wrong" assumptions about where delay is and where "agency" is being inferred.

Agency in gross time, where we choose to press a button and therefore cause things to happen, and where we can choose not to press the button at the last moment and have them not (yet) happen, is different from a sense of agency over things which are happening, and which we sense internally against our world model, distinctly from when we get input signals about them.
smeej over 1 year ago
Humans are used to dealing with natural intelligences all the time, in the form of other humans. Other humans often get a sense of what we're going to do next based on what we've done before and move to our next step alongside or slightly before us. This kind of experience-based cooperative alignment arguably even has an evolutionary advantage.

The fact that a computer can do it now too doesn't make it a novel experience for us. Most of us have known since we were kids that we can affect the actions of other entities by establishing a pattern.
skybrian over 1 year ago
This sounds similar to how autocomplete works. If it guesses right then it feels like writing the word yourself, and if it guesses wrong, you fix it, and maybe complain a bit.
frans over 1 year ago
I recall a study with a setup similar to what's described in this article, but with an important change. In the experiment, when the equipment predicted a subject's choice to push button 'A', they ingeniously manipulated the outcome (perhaps through some neural stimulation?), causing the subject to choose button 'B' instead.

What's fascinating is how participants consistently rationalized their choices as products of their own free will, despite the external influence. This suggests that our conscious mind might often act as a 'spokesperson', justifying actions initiated by our subconscious.

Can anyone remember this and perhaps post a link to that study?
eyelidlessness over 1 year ago
The events in the study are triggered after the participants' actions, both in terms of accounting for past actions as such, and in terms of anticipating the continuation of present actions.
mellosouls over 1 year ago
I'm surprised to see no reference in the paper to the Bereitschaftspotential or Libet's famous (and criticized) experiments, at least to give some context on how the findings here relate or differ.

https://en.wikipedia.org/wiki/Benjamin_Libet
m3kw9 over 1 year ago
Is agency a feeling we have as a way to stay sane about the fact that we are just reacting to everything?
isaacfrond over 1 year ago
Sounds a lot like Scott Aaronson's free will challenge.

A user types a 'random' sequence of Ts and Fs.

A computer can predict about 70% of them correctly, though, just by counting 5-grams.

Here the task is the opposite: make the prediction even easier.
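As a rough illustration of how little machinery that takes, here is a minimal sketch of such an online 5-gram counter. The 4-symbol context, the majority-vote tie-break, and the toy input string are assumptions for illustration, not details of Aaronson's actual demo:

    from collections import defaultdict

    def predict_rate(seq):
        """Online prediction of a T/F sequence: at each position, guess the
        symbol most often seen after the preceding 4 characters so far,
        then reveal the true symbol and update the counts."""
        counts = defaultdict(lambda: {"T": 0, "F": 0})
        correct = 0
        for i, ch in enumerate(seq):
            context = seq[max(0, i - 4):i]  # last 4 symbols (shorter at the start)
            c = counts[context]
            guess = "T" if c["T"] >= c["F"] else "F"  # majority vote, ties -> 'T'
            correct += guess == ch
            c[ch] += 1  # update only after guessing, keeping the test honest
        return correct / len(seq)

    # Human-typed "random" strings tend to over-alternate, a regularity
    # the counter quickly exploits.
    print(predict_rate("TFTFFTTFTFTFFTFTTFFTFTFTTFFT"))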
barrenko over 1 year ago
I believe Ted Chiang has a short story with a similar "device".
apienx over 1 year ago
"Control-seizing is a more fundamental principle from which intelligence emerges, not vice versa."

(quote from a slide in Alex Wissner-Gross's "A new equation for intelligence" @TED)
spacebacon over 1 year ago
Same clinical mouse trap, new clinical budget target.