Since the animation was “based on the history of the players’ past movements and on the beginning of their current movement”, the human subjects did in fact have agency over the animation. Their brains probably figured out the pattern: “if I move the mouse towards a target, it will explode”, which changes the game a little but nonetheless gives the players agency.
This is interesting in an abstract-intellectual sense, but it doesn't really surprise me.

If I understood the abstract right, they made a program learn where a user clicks, then used this to anticipate where the user would click next and begin the visual feedback prior to the actual click. Users either didn't think twice about it or understood that the system was anticipating them and rolled with it.

This is what I would expect. Extrapolating and predicting how a system will evolve based on past experience (if I roll this ball off the table it will fall; it was me who set it in motion) is something humans master as young children.

From the submission title I expected it would be about something more like this: https://www.theatlantic.com/health/archive/2019/09/free-will-bereitschaftspotential/597736/

A wrong prediction, as it turns out ... :-)
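To make that anticipation loop concrete, here is a minimal sketch of one way such a predictor could work, combining the cursor's current heading with a per-target click history. The class name, scoring rule, and confidence cutoff are all invented for illustration; this is an assumption-laden sketch, not the paper's implementation.

```python
from collections import Counter
import math

class ClickAnticipator:
    """Guess which on-screen target the user is about to click (illustrative only)."""

    def __init__(self):
        self.history = Counter()  # how often each target was clicked before

    def record_click(self, target):
        self.history[target] += 1

    def predict(self, cursor, velocity, targets):
        """Score each target by how well the current motion points at it
        (cosine of the angle) plus how often it was clicked in the past."""
        best, best_score = None, -math.inf
        speed = math.hypot(velocity[0], velocity[1]) or 1e-9
        total_clicks = max(1, sum(self.history.values()))
        for name, (tx, ty) in targets.items():
            dx, dy = tx - cursor[0], ty - cursor[1]
            dist = math.hypot(dx, dy) or 1e-9
            heading = (velocity[0] * dx + velocity[1] * dy) / (speed * dist)
            prior = self.history[name] / total_clicks
            score = heading + prior
            if score > best_score:
                best, best_score = name, score
        # Arbitrary confidence cutoff: only anticipate when the motion is
        # clearly aimed at a target the user has favored before.
        return best if best_score > 1.0 else None

# If predict() returns a target, the UI would start that target's animation
# now, shortly before the actual click arrives.
anticipator = ClickAnticipator()
anticipator.record_click("red_square")
guess = anticipator.predict(cursor=(100, 100), velocity=(5, 0),
                            targets={"red_square": (300, 100), "blue_dot": (100, 300)})
print(guess)  # "red_square": heading ~1.0 plus prior 1.0 clears the cutoff
```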
Does anyone else sometimes have the experience where you feel like you know everything that's going to be said or happen, as if life were a movie you saw so many times in grade school that you know every line, but had forgotten about the film until just now?

I feel like this is another situation where the sense of time can be reversed.

It probably derives from consciousness actually being a slow integration of diverse parallel processes.
My interpretation is that rather than a lack of agency, the paper demonstrates a perfect analogy for what agency turns out to be in real life: not our willpower or decision-making in the minutiae, but rather a general pattern of our behavior over time. Removing the explicit causal link for the click and making it implicit, based on an algorithm over their past behavior, is still a “reflection” of their will. Whether or not they happen to identify with the mimicked behavior is less significant. I think it's fitting because in real life the noise of minute events can detract from “us”, but over time our values and character are revealed through actions and results.

Let's extend the concept. If AI agents were deployed in the future with, say, my values and simulated life experience, they would (maybe) act independently of me but according to my instruction or general desires and values. That's not far removed from an employer instructing their employees, or a parent instructing a child. Or, removing instruction, perhaps just an expectation of a certain type of maneuvering in the world. Or, presuming no expectation at all, let's say I have an AI copycat without knowing about it, acting like me but in another setting: there would still be remnants of my will in how it acts. As in information theory or energy conservation, since it carries information replicating me and uses that information to act in the world, my will is preserved and extended, like a local lack of entropy.
Disclaimer: none of what was just written makes too much sense, and it rests on many what-ifs.
There are things I have seen in pop-sci that get at the minimum possible signal delay between something happening in the world, the brain being told about it, and the continuous model we operate as brains, which has to integrate over those inputs.

So, assuming that is a tenable view of things, I can believe that in this model we maintain, we can assign 'agency' to actions that other parts of the model predict "are going to happen", based on mismatches between the actual signal delay, the "computed" delay, and the synthesized interior world-view delay. Events can happen in real-world time and lag into the system. Events can lag into the system in a fully integrated manner, yet we can have a computed sense of their likely outcome based on our internal predictive model.

Measurement across this would be complicated. I don't know that ML is going to be the best path if it actually drives to some "wrong" assumptions about where the delay is and where "agency" is being inferred.

Agency in gross time, where we choose to press a button and therefore cause things to happen, and where we can choose not to press the button at the last moment and have them not (yet) happen, is different from a sense of agency over things that are already happening, which we sense internally against our world model, distinctly from when we get input signals about them.
Humans are used to dealing with natural intelligences all the time, in the form of other humans. Other humans often get a sense of what we're going to do next based on what we've done before and move to our next step alongside or slightly before us. This kind of experience-based cooperative alignment arguably even has an evolutionary advantage.

The fact that a computer can do it now too doesn't make it a novel experience for us. Most of us have known since we were kids that we can affect the actions of other entities by establishing a pattern.
This sounds similar to how autocomplete works. If it guesses right, it feels like writing the word yourself; if it guesses wrong, you fix it and maybe complain a bit.
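For concreteness, here is a toy prefix completer along those lines, with a made-up word list. Real autocomplete engines are far more sophisticated; this only shows the skeleton of the guess-and-accept loop the comment describes.

```python
import bisect

# Tiny, invented vocabulary; a real system would use frequency data and context.
WORDS = sorted(["agency", "agent", "animation", "anticipate", "autocomplete"])

def complete(prefix):
    """Return the first word in the sorted list that starts with prefix, or None."""
    i = bisect.bisect_left(WORDS, prefix)
    if i < len(WORDS) and WORDS[i].startswith(prefix):
        return WORDS[i]
    return None

print(complete("anti"))  # "anticipate": the right guess feels like you typed it yourself
print(complete("agj"))   # None: the guess fails and you just keep typing
```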
I recall a study with a setup similar to what's described in this article, but with an important change. In the experiment, when the equipment predicted a subject's choice to push button 'A', they ingeniously manipulated the outcome (perhaps through some neural stimulation?), causing the subject to choose button 'B' instead.

What's fascinating is how participants consistently rationalized their choices as products of their own free will, despite the external influence. This suggests that our conscious mind might often act as a 'spokesperson', justifying actions initiated by our subconscious.

Can anyone remember this and perhaps post a link to that study?
The events in the study are triggered after the participants' actions, both in terms of accounting for their past actions as such and in terms of anticipating the continuation of their present actions.
I'm surprised to see no reference in the paper to the Bereitschaftspotential or Libet's famous (and criticized) experiments, at least to give some context on how the findings here relate or differ.

https://en.wikipedia.org/wiki/Benjamin_Libet
Sounds a lot like Scott Aaronson's free-will challenge.

A user types a 'random' sequence of Ts and Fs.

A computer can nonetheless predict about 70% of them correctly, just by counting 5-grams.

Here the task is the opposite, which makes the prediction even easier.
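Here is a minimal sketch of the counting trick that comment describes, reconstructed from the description rather than from Aaronson's actual demo: guess each keystroke from how often T or F followed the same recent context earlier in the user's own sequence (the four preceding keys plus the predicted key form the 5-gram).

```python
from collections import defaultdict

def predict_sequence(keys, context_len=4):
    """Guess each keystroke from the (context_len)-gram preceding it,
    learning only from the user's own earlier keystrokes. Returns the hit rate."""
    counts = defaultdict(lambda: {"T": 0, "F": 0})
    correct = 0
    for i, actual in enumerate(keys):
        ctx = keys[max(0, i - context_len):i]  # the last few keystrokes
        stats = counts[ctx]
        guess = "T" if stats["T"] >= stats["F"] else "F"
        if guess == actual:
            correct += 1
        stats[actual] += 1                     # learn only after guessing
    return correct / len(keys)

# A human who falls into strict alternation is caught almost immediately:
print(predict_sequence("TF" * 50))  # 0.97
```

Humans trying to "be random" fall into patterns like over-alternation, which is exactly what a context counter exploits; the 70% figure in the comment is for real people trying their best to defeat it.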
"Control-seizing is a more fundamental principle from which intelligence emerges, not vice versa"<p>(quote from a slide in Alex Wissner-Gross's A new equation for intelligence @TED)