> “Think about it this way,” says Dr. Zakaria Neemeh, a philosopher from the University of Memphis. “When I feel happiness, my brain will create a distinctive pattern of complex neural activity. This neural pattern will perfectly correlate with my conscious feeling of happiness, but it is not my actual feeling. It is just a neural pattern that represents my happiness. That’s why a scientist looking at my brain and seeing this pattern should ask me what I feel, because the pattern is not the feeling itself, just a representation of it.”

> As a result, we can’t

I’m not a professional philosopher, but I think that if you replace “when I feel happiness, my brain will …” with “when my brain …, I feel happiness”, the conclusion changes. How they arrived at that premise is unclear.
The summary doesn't really explain everything, and the actual paper [1] is gargantuan, filled with math I don't understand and with assumptions that seem questionable to me. That said, I *think* it can be summarized as: "everyone's brain runs its own operating system." Observing a pattern of neurons doesn't tell you what that pattern means within the context of an individual's brain, just as seeing a bare list of bytes doesn't tell you which CPU or OS it's meant to run on.

The paper goes on and on about relativity and observers, but I think that's the gist of it. It makes sense, and it seems applicable to machine learning. If you train an ML algorithm on two slightly different sets of data (the equivalent of our life experiences), the resulting models can give very similar answers when queried, yet their underlying structures will be wildly different (see the sketch at the end of this comment). In fact, we have trouble understanding how and why some models produce the results they do; backtracking through a model is like trying to recover the original string from its hash. Explainable-AI methods exist, but it's basically impossible to trace the step-by-step decision path all the way back to the source.

My guess is that the human brain is a giant learning algorithm with some guaranteed universal inputs, such as the biological makeup of our brains and physical phenomena like the force of gravity, plus countless unique experiences. So even though we can observe the various neural connections objectively, we have no idea what a particular pattern means in the context of that person's brain model. I'd bet that any similarities in neural patterns observed to date stem from the fact that we all share equivalent sensory inputs.

Hmm. If this is true, then in theory, once we figure out the fundamental human learning algorithm, we could implant a chip into a fetus to record every sensory input (and monitor the brain for the result of any quantum randomness that pops up), and then re-create your consciousness perfectly later in life. Just a thought.

[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2021.704270/full
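To make the ML analogy concrete, here's a minimal sketch (numpy only; every function and parameter name below is mine, not from the paper): two tiny networks are trained on slightly different noisy samples of the same underlying function, from different random initializations. Their predictions end up very close while their weights look nothing alike.

    # Minimal sketch: same "world" (y = sin x), slightly different
    # "experiences" (different samples and initializations).
    # Expectation: similar behaviour, very different internals.
    import numpy as np

    def make_data(seed, n=200):
        """Noisy samples of the same underlying function, y = sin(x)."""
        r = np.random.default_rng(seed)
        x = r.uniform(-3, 3, size=(n, 1))
        y = np.sin(x) + r.normal(scale=0.05, size=(n, 1))
        return x, y

    def train(x, y, hidden=32, steps=5000, lr=0.05, seed=0):
        """Full-batch gradient descent on a 1-hidden-layer tanh net (MSE loss)."""
        r = np.random.default_rng(seed)
        w1 = r.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
        w2 = r.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
        for _ in range(steps):
            h = np.tanh(x @ w1 + b1)            # forward pass
            pred = h @ w2 + b2
            err = pred - y                      # gradient of MSE/2 w.r.t. pred
            gw2 = h.T @ err / len(x); gb2 = err.mean(0)
            dh = (err @ w2.T) * (1 - h**2)      # backprop through tanh
            gw1 = x.T @ dh / len(x); gb1 = dh.mean(0)
            w1 -= lr * gw1; b1 -= lr * gb1
            w2 -= lr * gw2; b2 -= lr * gb2
        return (w1, b1, w2, b2)

    def predict(params, x):
        w1, b1, w2, b2 = params
        return np.tanh(x @ w1 + b1) @ w2 + b2

    # Two "life experiences": same world, different samples, different inits.
    xa, ya = make_data(seed=1)
    xb, yb = make_data(seed=2)
    pa = train(xa, ya, seed=10)
    pb = train(xb, yb, seed=20)

    xt = np.linspace(-3, 3, 100).reshape(-1, 1)
    out_gap = np.abs(predict(pa, xt) - predict(pb, xt)).mean()
    w_gap = np.abs(pa[0] - pb[0]).mean()
    print(f"mean output difference: {out_gap:.4f}")  # typically small
    print(f"mean weight difference: {w_gap:.4f}")    # much larger

One caveat on the design: hidden units can be permuted without changing the function the network computes, so even identical training data with different initializations produces different-looking weights. If anything that strengthens the point, since the internal pattern alone doesn't tell you what it means.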
It’s interesting to me that I just know that consciousness exists before matter and energy. Or that matter and energy occur within consciousness.

Either you know or you don’t. No amount of thinking will take you there, and no philosophy can show you what’s just plainly true once you know it.

There was a time when I was trapped in my mind, trying to reason about these things, but now I see the world differently. I don’t know when it happened. I sure hope more people can come to this knowing too.