There are things I've seen in pop-science coverage about the minimum possible signal delay between something happening in the world and the brain being told about it, and about the continuous model we operate as brains, which has to integrate over those delayed inputs.

So, taking that as a tenable view: within this model we maintain, we can assign 'agency' to actions which other parts of the model predict "are going to happen", based on mismatches between the actual signal delay, the "computed" delay, and the delay of the synthesized interior world-view. Events can happen in real-world time and lag into the system. Events can lag into the system in a fully integrated manner, yet we can still have a computed sense of their likely outcome from our internal predictive model. (There's a toy sketch of this comparator idea at the end.)

Measuring across all of this would be complicated. I don't know that ML is going to be the best path, particularly if it drives toward "wrong" assumptions about where the delay sits and where "agency" is being inferred.

Agency in gross time, where we choose to press a button and thereby cause things to happen, and where we can choose not to press it at the last moment and have them not (yet) happen, is different from a sense of agency over things which are already happening, and which we sense internally against our world model before the input signals about them arrive.
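To make the mismatch idea concrete, here's a minimal toy in Python. It is not a model of any real neuroscience: the comparator rule, the function name infer_agency, and every delay constant (INTERNAL_LATENCY_MS, SENSORY_DELAY_MS, TOLERANCE_MS) are invented for illustration. It just shows one way "agency" could fall out of comparing a forward prediction's timing against a lagged sensory signal.

    # Toy comparator: "agency" is credited when an internal forward model
    # predicts an outcome ahead of the lagged sensory signal confirming it.
    # Every constant here is invented for illustration, not a measured value.

    INTERNAL_LATENCY_MS = 20   # assumed lag for the forward model to emit its prediction
    SENSORY_DELAY_MS = 80      # assumed lag from world event to "the brain being told"
    TOLERANCE_MS = 150         # assumed window in which prediction and signal still match

    def infer_agency(action_time_ms: float, event_time_ms: float) -> bool:
        """True if the comparator would credit the event to our own action."""
        predicted_at = action_time_ms + INTERNAL_LATENCY_MS  # model runs ahead of the world
        sensed_at = event_time_ms + SENSORY_DELAY_MS         # the lagged input signal
        # Credit agency only when the prediction arrived first and the signal
        # landed roughly when the model said it would; a big mismatch breaks it.
        return predicted_at <= sensed_at and (sensed_at - predicted_at) <= TOLERANCE_MS

    # Self-caused: the world event follows our action almost immediately.
    print(infer_agency(action_time_ms=1000, event_time_ms=1005))  # True

    # External: the event has no action of ours nearby in time.
    print(infer_agency(action_time_ms=1000, event_time_ms=1400))  # False

The interesting knob is TOLERANCE_MS: widening or narrowing it moves where agency gets "wrongly" inferred, which is roughly the measurement problem above.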