Is it hard, though? Or is the hard part ethics, i.e. personhood? Because the two terms are conflated, consciousness becomes hard: you cannot accept an account of what consciousness is without also having to define ethics. Drop the idea that consciousness is sufficient or required for personhood in favor of something more behaviourally consistent, like cuteness or power, and things become clearer.<p>There is a part of you which simulates social interaction by learning models of the various other agents it has inferred the existence of. As you would expect from something hunting for agents based on indirect clues, we know this part struggles with accidentally assigning agency to things which clearly lack consciousness, e.g. that damned sharp rock you stepped on twice.
This part of you can simulate only a finite number of such agents at a time, so as a whole it will focus on predicting the actions of the agents it observes most often. It is also why we would expect it to collapse groups of people you only interact with as a group into a single "them". Very often the most significant agent to simulate is you, so one of the models being simulated is you. This is what generates the perception of consciousness, and why it is you yet feels separate. It predicts the cognitive bias toward mind-body duality while maintaining the perception of consciousness. A part of you is constantly trying to explain your own actions, but critically, while we would expect it to be good at providing a socially acceptable explanation, we would not expect it to be all that good at predicting what you will actually do, or even at explaining why you did something. See the split-brain examples: <a href="https://www.youtube.com/watch?v=wfYbgdo8e-8&ab_channel=CGPGrey" rel="nofollow">https://www.youtube.com/watch?v=wfYbgdo8e-8&ab_channel=CGPGr...</a>. The model also predicts that it should be possible to damage this part of the brain and lose the sensation of consciousness while retaining primary function as a human. Which raises no ethical problems, as the person still remains cute.<p>Further, when predicting or explaining the actions of the modelled agents, this social simulator is fairly robust, but it can have chaotic points, i.e. points where imperceptibly tiny differences in the inputs result in drastically different outcomes, and the model/language has a name for these. When the social simulator concludes that such a point exists, we call it a choice, and we do so regardless of whether we know the agent is a machine: Deep Blue chose to move its knight instead of its queen, and you chose to accept or disbelieve this.
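To make the "chaotic point" idea concrete: a standard chaotic system (the logistic map at r = 4, purely an illustrative stand-in here, not anything from neuroscience) shows how two inputs that differ imperceptibly can produce drastically different trajectories, while most of the dynamics still looks deterministic and predictable step to step.

```python
# Two starting points differing by 1e-10: indistinguishable to any
# observer, yet the trajectories eventually diverge to order-1
# differences. A predictor of this system would be robust almost
# everywhere, but near such sensitive regions its prediction breaks
# down -- the analogue of a "choice" in the comment's model.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# Early steps: the difference is still microscopic.
print(abs(a[5] - b[5]))
# Later steps: the difference grows to the scale of the map itself.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point of the sketch is only that "chaotic point" is a well-defined dynamical notion, not hand-waving: outside those points the simulator's predictions work; at them, prediction fails and the language labels the outcome a choice.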
Specifically, we call these points choices, or, once made, decisions, whenever the model expects one to be there. This is why one person will call what they did a choice while others may not. It is why one person can know that person X will do y and be right, while X thinks they are choosing between y and z and chooses y. You may not be the best predictor of your own actions, and if you have or had kids, you know this.<p>In our case, the social simulator is strongly connected to language, and it will use language to perform simulations, providing predictions, explanations, and social manipulations. However, our ability to simulate the actions of animals shows that consciousness is not limited to language.<p>Remember that whenever we reason using language, we generally get far worse results than when we do not restrict ourselves to reasoning in language. If you have ever experienced the Zone when programming or doing math, or anything really, then you know the deeply disturbing feeling of the social simulator suddenly starting to chatter and trying to weigh in on problems it has jack shit ability in: going, in practice, from smart and non-conscious (ego dissolution, mostly ignoring the output of the social simulator, putting it in a sleep mode if you will) to conscious and incompetent. Programming and math highlight this, because you can't argue with a compiler.<p>This model of consciousness and free will isn't perfect by any means, but it's the best one I know, mostly because it does not add magic while still explaining the perception of it, along with most of the contradictions between perception and physical reality as we know it.<p>It predicts the cognitive bias toward mind-body duality and the cognitive bias toward free will; we needed words to communicate these. It resolves the paradoxes around free will in their entirety while predicting the perception of free will, notably including the "thinking I will do x" then doing y problem.
It predicts that we may have "decided" on something before we are consciously aware of the decision, consciousness being only a weak input to choices, not the decision maker after all. And if another person's model of you does not place you at a chaotic point, that does not mean your model of you, i.e. your consciousness, didn't.
It predicts that we would be constantly simulating ourselves yet can be surprisingly bad at predicting our own actions, and, even worse, that trying to subvocally reason yourself into changing behaviour by thinking "I will do this instead" would be utterly useless. The social simulator is expected to provide the outcomes of actions in a social context; decisions are then taken based on those. But a hypothetical discussion with yourself does not produce a prediction it will use; that's just practice. Meaning: if you want to convince yourself not to have another slice of pizza, thinking "I chose to be on a diet" is useless, but imagining meeting a nice girl who flirts and then looks disgusted at your waistline might be strong enough to make you want to hurl. In short, it predicts how to strengthen the influence of what you perceive as conscious will. It also makes it possible that the output from the social simulator could be severed, and that we could create people who live in the Zone while being socially oblivious. (Not autism.)<p>It predicts that if you want to build a consciousness from scratch, what you need is a system designed to infer the existence of, and predict the interactions with, other agents; one with a very limited output bandwidth, whose input is direct environment observation with time-delayed or no feedback on the agents' internal state. Train it on the feedback signal of some other system/agent that uses its predictions to optimize some score in an environment with multiple agents, not all of whom are interacting. The consciousness so made won't feel like a person deserving of rights, but that isn't necessary, as we didn't tie ethical personhood to consciousness.
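A toy structural sketch of the architecture just described, with every name hypothetical and no claim that this is more than a skeleton: it encodes only the stated constraints: a finite number of simultaneously modelled agents (keeping the most-observed ones), models built purely from observed behaviour with no access to the agents' internal state, and "self" treated as just another agent being modelled.

```python
from collections import Counter

class AgentModel:
    """Predicts an agent's next action from observed behaviour only --
    never from the agent's internal state (the no-feedback constraint)."""
    def __init__(self, name):
        self.name = name
        self.action_counts = Counter()

    def observe(self, action):
        self.action_counts[action] += 1

    def predict(self):
        # Limited-bandwidth output: a single most-likely action.
        if not self.action_counts:
            return None
        return self.action_counts.most_common(1)[0][0]

class SocialSimulator:
    """Maintains models for at most `capacity` agents, focusing on the
    agents observed most often (the finite-simulation constraint)."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.models = {}
        self.seen = Counter()

    def observe(self, agent, action):
        self.seen[agent] += 1
        if agent not in self.models:
            if len(self.models) >= self.capacity:
                # Evict the least-observed modelled agent, but only if
                # the newcomer has been seen more often than it.
                evict = min(self.models, key=lambda a: self.seen[a])
                if self.seen[evict] >= self.seen[agent]:
                    return  # newcomer not yet significant enough to model
                del self.models[evict]
            self.models[agent] = AgentModel(agent)
        self.models[agent].observe(action)

    def predict(self, agent):
        model = self.models.get(agent)
        return model.predict() if model else None

sim = SocialSimulator(capacity=2)
# "self" is just another agent being modelled, per the argument above.
for _ in range(5):
    sim.observe("self", "eat pizza")
for _ in range(3):
    sim.observe("alice", "wave")
sim.observe("bob", "frown")  # seen too rarely to displace the others

print(sim.predict("self"))  # the model of "self" predicts its habit
print(sorted(sim.models))   # only the most-observed agents are kept
```

The missing piece, deliberately, is the training signal: in the full proposal, these predictions would feed some other system optimizing a score in a multi-agent environment, and that outer loop is what would shape the models.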