Making the hard problem of consciousness easier

62 points by hheikinh, almost 4 years ago

13 comments

hliyan, almost 4 years ago
The problem of consciousness (or the quality of something being able to *experience* itself and things around it) may never be solved. Even if the reductionist approach reveals some fundamental field or particle that gives rise to consciousness (which is very unlikely), it just shifts the problem from the brain's neural network into that phenomenon.

An old Julian Jaynes analogy comes to mind: if you're a flashlight, you will never be able to understand light, because wherever you look, light will be there. By definition you're unable to look at something that is dark. You perceive the world as being bathed in perpetual light.

The closest we might get may be a hand-wavy form of panpsychism, with some probable connection to quantum fluctuations.
tgv, almost 4 years ago
It sounds premature to me. "Big" science (CERN, the Human Genome Project, etc.) was only possible and sensible because the subject matter was well understood, and getting more information about it required apparatus and manual labor beyond the reach of a normal lab.

The consciousness problem, however, is very poorly understood. The article contains some hand-waving pointing at extremely large groups of neurons, jumping to irrelevant details such as their "anatomical footprint." But even the function of small groups of neurons is not understood, nor the interaction between them. How "big science" can get meaningful results then is not clear to me.

> and change the sociology of scientific practice in general

Right then.
codeulike, almost 4 years ago
<i>adversarial collaboration rests on identifying the most diagnostic points of divergence between competing theories, reaching agreement on precisely what they predict, and then designing experiments that directly test those diverging predictions.</i><p>omg why has no-one thought of doing this before
wildermuthn, almost 4 years ago
The "hard problem of consciousness" is not essentially about consciousness. It's just the most obvious form of the problem to conscious beings.

The "hard problem of applesauce" is the same essential problem. While it is true that we will one day fully understand the molecular, atomic, and quantum composition of applesauce, and thus be able to create artificial applesauce without a single apple, we still won't be able to explain the existence of existence.

Consciousness, applesauce, electrons, and existence itself have no reason to exist, and yet they do. That's the hard problem.

Consciousness is simply the closest we get to that mystery, insofar as we only get close to anything through consciousness itself.
2snakes, almost 4 years ago
Consciousness has multiple levels. At a quantal level it is referred to as the nonphysical third form of reality besides mass and energy, related volumetrically through the normalization of electron characteristics; neurologically and psychologically, it is higher consciousness.

Consciousness is that which draws distinctions to distinguish reality and is aware of itself.

There is that which is distinguished, that from which it is distinguished, and the consciousness that distinguishes. This is perceptual, conceptual, and existential, and uses the variables of Intent (atomic stability, organic life, and conscious awareness), Content (mass of the subatomic/atomic/universe; energy, i.e. photonic, electric, magnetic; and consciousness, i.e. individual, group, and cosmic), and Extent (space, time, and consciousness-awareness).

There is an ebook called Reality Begins with Consciousness, from brainvoyage.com, by the authors Ed Close and Vernon Neppe, that is interesting. Here is a shorter introduction: https://www.amazon.com/Consciousness-Primary-Perspectives-Advancement-Postmaterialist-ebook/dp/B07SLJFLBY/
jdonaldson, almost 4 years ago
It&#x27;s probably easier to think of consciousness as &quot;learned&quot;. That is, we have these brains that can extract patterns from noisy or sparse inputs, and the notion of a &quot;self&quot; is just something it learned to recognize. The rest of the &quot;common sense&quot; that guides our course through the world is basically just adaptation and learnings of how to preserve that self in a given environment.<p>It&#x27;s interesting to think of situations where the self becomes subordinate... family, sex, certain types of anger, etc. There are some old and deep patterns in the brain that can override the control of &quot;self&quot;... and they roughly correspond to very primitive parts of the brain that govern simpler and essential fight or flight mechanisms.
callesgg, almost 4 years ago
While arguably unfalsifiable, to me Joscha Bach's explanation of consciousness covers all the bases.

It is a logical explanation that accounts for consciousness without the need for magic. Whether or not it covers what you want from an explanation of consciousness, I can't tell.

That it is not more accepted seems strange, but it is fairly new, and people in philosophy are famously slow when it comes to change. So I guess it makes sense.
archibaldJ, almost 4 years ago
The problem I have with these definitions (and the accompanying theories) is that they are not practical. At best they are fun abstractions to wrap your head around, at worst pretentious and misguided.

To advance the field of consciousness, I believe at the current stage we should always treat consciousness as a black box, and ask questions around it with practical engineering implications. Perhaps two categories of questions:

Category 1: qualia/perception

These would be human-centric questions related to experiments with altered states of mind. Here is one for example:

Why is it that under the effect of THC, certain stimuli and actions [1] can reliably slow down the perception of time, while certain stimuli (e.g. the soft humming of the aircon) tend to normalize time perception for some individuals?

[1]: e.g. start the stopwatch app, hold the phone at arm's length, stare at the millisecond digit, and slowly move the phone closer to you.

What can we say about the neural activations (and subsequently, oscillations) of individuals who are able to alter time perception more easily (even in the presence of normalizing stimuli), and how can this ability be learnt or unlearnt?

Understanding of the above phenomenon could be used to design the calibration phase of a BCI device so that preprocessing, signal processing, etc. can be customized to deliver a smoother user experience.

Category 2: data/computation

One of the key characteristics of biological systems that exhibit consciousness appears to be a cybernetics-oriented ability that involves orchestrating (often function-specific) modules (e.g. in human brains) to accomplish (often highly abstracted?) tasks.

Perhaps we can take inspiration from mindfulness practices (and other consciousness-centric activities) and study the brain and how its modules work together, to come up with architectures, models, etc. that (going one step above spiking neural networks?) mimic the cybernetic nature of consciousness: for the integration of loosely coupled things, e.g. in transfer learning, as well as for systems that involve a lot of feedback loops.

Perchance such biomimetics would help us get a better idea of how type- (and category-) theoretical aspects of things can be introduced to engineer highly fault-tolerant and energy-efficient systems that employ millions of pretrained models like GPT-3 at the lower level and are constantly self-learning for general-purpose tasks.
codeflo, almost 4 years ago
Not strictly related to the article, but I've yet to be convinced that most of the talk around this supposed "hard problem" is anything more than an attempt to reintroduce Cartesian dualism in more scientific-sounding terms. "Quantum consciousness" and whatnot.
baxrob, almost 4 years ago
<a href="https:&#x2F;&#x2F;www.susanblackmore.uk&#x2F;consciousness-an-introduction&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.susanblackmore.uk&#x2F;consciousness-an-introduction&#x2F;</a>
qualudeheart, almost 4 years ago
I never quite understood what qualia are.
midjji, almost 4 years ago
Is it hard, though? Or is the hard part ethics, i.e. personhood? Because the terms are conflated, that makes consciousness hard, since it means you cannot accept what consciousness is without needing to also define ethics. Drop the idea that consciousness is sufficient or required for personhood in favor of something more behaviorally consistent, like cuteness or power, and things become clearer.

There is a part of you which simulates social interaction by learning models of various other agents it has inferred the existence of. As can be expected from something which is looking for agents based on indirect clues, we know this part does struggle with accidentally assigning agency to things which clearly lack consciousness, i.e. that damned sharp rock you stepped on twice. This part of you is capable of simulating a finite number of simultaneous such agents at a time, meaning it will focus, as a whole, on being able to predict the actions of the agents most often observed. It is also why we would expect it to replace groups of people you only interact with as a group with a "them". It is also very common that the most significant agent to simulate would be you. Hence one of the models being simulated is you. This is what generates the perception of consciousness, and why it is you yet separate. It predicts the cognitive bias of mind-body duality, yet maintains the perception of consciousness. A part of you is constantly trying to explain your own actions, but critically, while we would expect it to be good at providing a socially acceptable explanation, we do not expect it to be all that good at predicting what you will actually do, or even at explaining why you did something. See split-brain examples: https://www.youtube.com/watch?v=wfYbgdo8e-8&ab_channel=CGPGrey. It also makes the prediction that it should be possible to damage this part of the brain and lose the sensation of consciousness yet retain primary function as a human. Which raises no ethical problems, as the person still remains cute.

Further, when predicting/explaining the actions of the modelled, this social simulator is fairly robust, but it can have chaotic points, i.e. points where imperceptibly tiny differences in the inputs result in drastically different outcomes. The model/language has a name for these: when the social simulator concludes that such a point exists, we call these points choices (or, once made, decisions), and we do so regardless of our awareness of whether the agent is a machine or not, as in "Deep Blue chose to move his knight instead of the queen", or "you chose to accept/disbelieve this". This is the reason why one person will call what they did a choice while others may not. It is why one person can know that person X will do y and be right, while person X thinks they are choosing between y and z and chose y. You may not be the best predictor of your own actions, and if you have or have had kids, you know this.

In our case, the social simulator is strongly connected to language, and it will use language to perform simulations, providing predictions, explanations, and social manipulations. However, our ability to simulate the actions of animals shows that consciousness is not limited by language.

Remember, whenever we reason using language, we generally get far worse results compared to when we do not restrict ourselves to reasoning using language. If you have ever experienced the Zone when programming or doing math, or anything really, then you know the deeply disturbing feeling of the social simulator suddenly starting to chatter and trying to weigh in on problems it has jack shit ability in: in practice going from smart and non-conscious/ego dissolution (mostly ignoring the output of the social simulator, putting it in a sleep mode if you will) to conscious and stupid. Programming and math highlight this, because you can't argue with a compiler.

This model of consciousness and free will isn't perfect by any means, but it's the best one I know, mostly because it does not try to add magic into things, while explaining the perception of it and most of the contradictions between perception and physical reality as we know it.

It predicts the cognitive bias towards mind-body duality, and the cognitive bias towards free will. We needed words to communicate these. It resolves the paradoxes around free will in their entirety, while predicting the perception of free will, notably including the "thinking I will do x" then doing y problem. It predicts that it might be the case that we have "decided" on something before we are consciously aware of the decision, consciousness only being a weak input to choices, not the decision maker after all. And if another's model of you does not put you in a chaotic point, that does not mean your model of you, i.e. your consciousness, wasn't in one. It predicts that we would be constantly simulating ourselves, yet can be surprisingly bad at predicting our own actions, and, even worse, that trying to subvocally reason yourself into changing behaviour by thinking "I will do this instead" would be utterly useless. The social simulator is expected to provide the outcome of actions in a social context; decisions are then taken based on them. But when you are having hypothetical discussions, that does not make a prediction it will use; that's just practice. Meaning: if you want to convince yourself not to have another slice of pizza, thinking "I choose to be on a diet" is useless, but imagining meeting a nice girl flirting and then looking disgusted at your waistline might be strong enough to make you want to hurl. In short, it predicts how to strengthen the influence of what you perceive as conscious will. It makes it possible that the output from the social simulator could be severed, and that we could create people who live in the Zone while being socially oblivious. (Not autism.)

It predicts that if you want to build a consciousness from scratch, what you need is a system designed to infer the existence of, and predict the interactions with, other agents, with a very limited output bandwidth, whose input is direct environment observation plus time-delayed or no feedback on the agents' internal state; trained on the feedback signal of some other system/agent using its predictions to optimize some score in an environment with multiple agents, not all of whom are interacting. The consciousness so made won't feel like a person deserving of rights, but that isn't necessary, as we didn't tie ethical personhood to consciousness.
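[Editor's note: a minimal toy sketch of the "predict other agents from observation only" setup described in that last paragraph. Everything below — numpy, the per-agent logistic models, the scripted environment, and all names — is invented for illustration; the comment does not specify an implementation, and this simplifies it to direct prediction-error training.]

  import numpy as np

  rng = np.random.default_rng(0)

  # Scripted "agents" whose actions depend on a hidden bias the
  # modeller never observes -- only the shared state and the action.
  def agent_action(state, hidden_bias):
      return 1 if state + hidden_bias > 0 else 0

  class SocialSimulator:
      """One tiny logistic model per tracked agent, capped at a
      fixed number of slots (the 'limited bandwidth')."""
      def __init__(self, slots=2, lr=0.1):
          self.w = np.zeros((slots, 2))  # per-slot weight and bias
          self.lr = lr

      def predict(self, slot, state):
          z = self.w[slot, 0] * state + self.w[slot, 1]
          return 1.0 / (1.0 + np.exp(-z))  # P(agent acts)

      def update(self, slot, state, observed_action):
          # Outcome-only feedback: we see what the agent did, never
          # why; one gradient step on the log-loss.
          err = observed_action - self.predict(slot, state)
          self.w[slot, 0] += self.lr * err * state
          self.w[slot, 1] += self.lr * err

  sim = SocialSimulator()
  hidden_biases = [0.5, -0.5]  # internal state of the two agents
  for _ in range(2000):
      state = rng.normal()
      for slot, bias in enumerate(hidden_biases):
          sim.update(slot, state, agent_action(state, bias))

  # The simulator now predicts both agents from observation alone.
  print([round(sim.predict(s, 0.3), 2) for s in range(2)])

Nothing here is conscious, of course; the point is only that "model other agents from the outside, with no access to their internals" is a concrete, trainable objective.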
bobmaxup, almost 4 years ago
Why is it always the same people who expound the "hard problem" of consciousness? It is tiring to see authors like Koch on everything.