
Consciousness is a recurrent neural network

50 points by xcodevn over 8 years ago
One of the problems with consciousness is that you know that you are conscious, but you can't know if others are. Consciousness is like seeing a dog in front of you, but only you can see it.

Let us begin with the example of seeing a dog: when you see a dog, photons from the dog reach your eyes and are converted into neural signals; then a neural network *recognizes* that what you're seeing is a dog.

How do you *know* that you're seeing a dog?

I believe we can explain this in the same way as seeing a dog. But in this case, what you "see" is not photons but signals from within the brain itself. In other words, the brain takes its own current states (signals) as its input (possibly no different from the brain taking signals created by your eyes as its input). And as you may already know, this is similar to a recurrent neural network.

In this way, consciousness is a concept the brain learns about the inner processes of the brain itself, just as "dog" is a concept we learn by seeing many dogs!

The stream of consciousness is the result of the brain trying to model itself while continuously receiving signals from itself!

We can push this further and ask: is a brain of any kind (dog, cat, whale) conscious?

My idea is that there is a threshold at which a brain becomes complicated enough to model its own signals.
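A network taking its own current state as part of its input is exactly the defining loop of a recurrent network. A minimal sketch of that recurrence, as an illustration only (plain NumPy; the sizes and random weights are arbitrary, not a claim about brains):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: external input (the "photons") and the hidden state
# (the network's own internal signals).
n_in, n_hidden = 4, 8

# W_x maps external input into the state; W_h feeds the network's own
# previous state back in -- the recurrence the post describes.
W_x = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))

def step(h_prev, x):
    """One time step: the new state depends on the external input AND
    on the network's own prior state."""
    return np.tanh(W_x @ x + W_h @ h_prev)

h = np.zeros(n_hidden)
for _ in range(5):
    x = rng.normal(size=n_in)  # a stream of external signals
    h = step(h, x)             # the state folds back into itself

print(h.shape)  # (8,)
```

Without `W_h` this would be a plain feed-forward map from input to state; the feedback term is the only structural difference, which is why the post's "brain takes its own signals as input" maps so directly onto an RNN.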

15 comments

hprotagonist over 8 years ago
You need to study some actual neuroscience.

Some things to remember:

- Neurons are not linear.
- Neurons are not time-invariant.
- Neurons are not causal.
- Neurons and actual anatomical connectivity do not map well to electronics-inspired wiring diagrams (see: dendritic arbor).
- Anatomical structure does not imply functional connectivity.
- Functional connectivity does not imply anatomy!
- Synaptic junctions respond more or less well to different neurotransmitters, all of which are continuously present, at different times. We don't know why.
- So far, what we have observed about the anatomy and physiology of the brain does not look like an RNN.
- The relationship between the meat architecture and the phenomena it hosts may not be as 1:1 as you'd like. Opinions vary.

My favorite analogy about relating brains to consciousness is this: "If you believe that brains are like computers (which you shouldn't, but just for the sake of argument, let's), then even if you really and truly produce a full map of the brain, what you've _got_ is the spec sheet for an x86 processor. What you _want_ is the user manual for Mac OS."
wbhart over 8 years ago
Can't the same argument be used to show that consciousness is a whole bunch of things that it clearly is not? Arguing that consciousness has certain attributes, and that it shares those attributes with something else, doesn't make consciousness an example of that something else.

I think there is a general trend towards thinking of consciousness as an emergent phenomenon which exists when a system is complex enough and has certain attributes, such as memory, information-processing capability, etc.

But none of these definitions enlighten me as to when or why such a system might become self-aware.
andybak over 8 years ago
I have a tongue-in-cheek theory:

Those who think there's no "hard problem of consciousness", or who hand-wave it away with purely materialist explanations, probably aren't conscious.

Folks, we have p-zombies in our midst...
lproven over 8 years ago
My thinking is similar. Long ago, I outlined it on a mailing list as follows:

I'd like to posit a progression of animal awareness. (In the full knowledge that there is no "tree" or "hierarchy" of evolution; the progression is merely a convenient way of presenting some data.)

1. Single-celled animals, such as amoebae and /Paramecium/. Many of these display simple taxic responses: they move towards light, away from heat, and towards or away from certain chemicals - they pursue concentration gradients. In other words, a single cell can display what could be called "voluntary" movement; it does not follow programmed paths but responds to its environment. You can watch a Paramecium under a microscope, swimming through a world of bits of plant and mineral matter in water. If it bumbles into something, it recoils and sets off in another direction. If it catches the scent of something that might be food, it changes direction and sets off in pursuit. It's much like watching a much bigger animal, like a mouse, explore an unfamiliar environment. Surprisingly like.

Similar behaviours can be observed in all sorts of small animals, like collembolans and nematodes.

Small animals - even single-celled ones - interact with their environment, responding to stimuli in ways that are more than a simple, determinate pattern. They are not like a clockwork mouse or toy that always follows the same path.
rayalez over 8 years ago
Great explanation, makes perfect sense to me! My own theory is pretty much the same. Just as you can experience other internal sensations in your body when nerves send signals that are recognized by the brain, the brain can recognize its own signals.

I highly recommend reading Gödel, Escher, Bach (if you don't have time, just read the introduction to get the general idea). In this book the author explains how meaning arises when things (like language, or math equations, or neurons in the brain) "mirror" things in the real world, when their structure can be "mapped" onto some other structure (a so-called "isomorphism").

He says that the brain "mirrors" the world around it as it builds a world model. But the brain itself is part of the world, so it builds a model of itself as well. Neurons recognizing/observing/experiencing other neurons.

Just as you can see a dog, you can "see" your own brain state.

I've also heard a nice quote somewhere: "Consciousness is simply what it feels like to have a brain." You can close your eyes and feel the position of your body, you can feel your stomach being full, and you can feel your brain thinking.
Kinnard over 8 years ago
You'll really appreciate this research on the evolution of self-awareness and mirror neurons: https://www.edge.org/conversation/the-neurology-of-self-awareness
unlikelymordant over 8 years ago
When I saw the title, I expected this post to be somebody's "stoner philosophy" of what consciousness is. But I really like this idea. If true, it means consciousness is naturally emergent as soon as networks can be efficiently trained to represent complex enough functions. And we might not be too far away.

How do you define consciousness? Like the voice in your head? Are you saying that voice is essentially a predicted copy of you, i.e. what you predict you would do in the current situation? I think this could explain why we have a conscious and an unconscious mind: the unconscious is the actual brain, and the conscious is just our prediction of what we would do in the current situation.
SuperPaintMan over 8 years ago
I can't tell if this is ironic or not.
MrQuincle over 8 years ago
On a train, so summarized:

+ You have forward-inverse models, e.g. by Wolpert.
+ You have a sequential winner-take-all process, e.g. see Baars.
+ You have homing in on the on/off switch: search for the claustrum and Francis Crick.

The challenge, of course, is to know what the brain knows and learns about itself. Oscillations at the alpha, beta, and gamma level have, as far as I have seen, no place in current networks. I find it suspicious that we don't reproduce this behaviour. Are we sure that it is nonfunctional?
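The forward-inverse pairing mentioned above can be made concrete in a toy setting: a forward model predicts the sensory consequence of a motor command, while an inverse model picks the command expected to reach a goal. Everything below (the 1-D dynamics, the gain, the function names) is invented for the sketch and is not Wolpert's actual formulation:

```python
# Toy 1-D motor system: applying command u moves the state by GAIN * u.
GAIN = 2.0

def plant(state: float, u: float) -> float:
    """The real 'body': the actual consequence of command u."""
    return state + GAIN * u

def forward_model(state: float, u: float) -> float:
    """The brain's prediction of what command u will do
    (here an exact copy of the plant, i.e. a perfect model)."""
    return state + GAIN * u

def inverse_model(state: float, target: float) -> float:
    """Pick the command expected to reach the target state."""
    return (target - state) / GAIN

state, target = 0.0, 10.0
u = inverse_model(state, target)     # choose a command
predicted = forward_model(state, u)  # predict its outcome
actual = plant(state, u)             # what really happens
print(u, predicted, actual)  # 5.0 10.0 10.0
```

The interesting cases are when `forward_model` and `plant` disagree: the prediction error is then a signal the system has about itself, which is the hook back to the thread's self-modelling theme.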
amelius over 8 years ago
This explanation feels a little tautological to me. We knew this already, because physics doesn't care what is a brain and what is a dog. All physics knows is that there are atoms, and whether they belong to a brain or a dog doesn't really matter. It's all the same.

And this theory fails to explain how we can feel pleasure or pain, and it fails to predict whether an artificial neural network can feel pleasure or pain.
maliniakh over 8 years ago
I have the very same theory about this and can't imagine it being anything else. Yet I hadn't come across such an explanation until now.
danruckus over 8 years ago
F u feeder the triple Danielle rose Simpson and Douglas Paul Hacker no game but what we leave in place so we are not a monitored brain amount all things to a joint that which should not be a common cause obstacle
vorotato over 8 years ago
The map is not the territory.
lproven over 8 years ago
11. Such symbolic communication is not unprecedented among wild animals. Social mammals such as meerkats have vocal calls which can indicate the type of threat a scout has perceived. Many primates do similar things. Different groups use different sounds; these are not inherited actions, they are learned, or else genetically similar groups in varying locations would use the same noises. Wild animals use symbolic communication to manipulate the behaviours of others, sometimes even in an altruistic fashion, favouring kin over themselves.

12. As previously discussed, chimps have been observed to lie, meaning that chimps are not only able to model the behaviours of other chimps in their troupe, they also model the mental states of those others. This is not to say that a butterfly with eyespots on its wings is consciously "lying" to predators, but when a more complex animal such as a chimp gives false information to other chimps, I think that what it's doing is certainly trying to manipulate another's mind, implying that it knows it has a mind.

What I'm trying to demonstrate here is that there is a fairly simple, steady, observable and demonstrable increase in the sophistication of animal awareness of the world. Few aspects of human cognition are unique to humans; just about everything we do except writing (a recent human innovation, not an evolutionary one) various animals do too. Animals can be shown to possess and perform just about every mental trick that we do, from symbolic manipulation to abstract thought. Cognition is not a uniquely human behaviour, and neither is self-awareness. We're just better at it. It's a difference of degree, not of kind.

Now, this being so (and I think it is unarguable, but I welcome attempts), and the basic aspects of stimulus/response being readily demonstrable right down to single cells, what I want to ask is this:

Where is the step from simple reflex action to perception/thought/response?

Even in humans, functional MRI has shown that the cerebral impulses governing physical actions arise before the conscious mind is aware of them. Whereas we do undoubtedly reason things out and act on them, in much of the basic action of the human brain the conscious mind is merely a spectator, watching what's going on "beneath" it and then rationalising after the event that it "decided" to do that.

Thinking is not, I submit, some special event in the brain. It's merely a slightly more sophisticated version of the very simple environmental modelling that even small crustaceans like woodlice do. Right down at the level of animals that have no brain, merely a small loop of nerve tissue around the mouth with more ganglia than elsewhere, animals take a step back from simple direct-wired stimulus->response, filter the incoming signals, form a model of what's going on, and act upon it. This, I submit, is the simplest kind of "mind", and the difference between it and us is that we have an awful lot more neurons and much more complex neural networks in between "in" and "out". It is a difference of degree, not of kind. Purely quantitative, not qualitative.

A woodlouse "sees" in exactly the same way as we do. There's no deep difference. Many insects and birds and fish see colour better than we primates do; they can see more colours, and more differences over a greater range. The bigger the brain, the more complex the pattern analysis, and the bigger the patterns that can be identified.

What happens, though, is still the same: a sensor detects a stimulus and sends an action potential down an axon to a ganglion, where it triggers a cascade of other action potentials that propagate across a network of neurons until they either elicit a response or not.

The difference is that in humans, the cascades are bigger than they are in other animals, except whales, dolphins, elephants and the like. In at least some of the great apes (chimps and orangs), some of the impulses originate in circuits whose job is to monitor the activity of the rest of the brain; there are circuits given over to modelling the activities of the rest of the brain, and there are circuits given over to modelling the model. The senses include awareness of brain activity: a feedback loop. The brain's model includes a model of the brain's model.

Where, in this model, do "qualia" occur? Where is the great marvellous miracle over which so much paper and so many innocent electrons are expended?

To me, it all seems fairly simple and clear. I don't understand why there is so much debate.
hasenj over 8 years ago
The problem with this line of thinking is that any data model for anything has absolutely no intrinsic meaning. It's just an encoding.

To give the simplest example: you can model a color in many different encodings: 'red', 'hsl(0, 100%, 50%)', 'rgb(255, 0, 0)', etc.

Any "thing" can be encoded in an infinite number of different ways. The same applies to the state of your mind. Why should one encoding give rise to consciousness?

Arguably, the manner in which water flows within a sewage pipe network could be interpreted as encoding some kind of information. Would you argue that some particular arrangement of water flow within a pipe network could give rise to consciousness?

I think there's a difference between a system modelling itself and becoming conscious of itself.

Any system can model itself. We can do it easily with computers. Arguably the Linux kernel has a model of itself, its hardware, its inputs, etc. That doesn't make it conscious.
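The encoding point above can be checked directly: the same color expressed as a name, an RGB triple, and an HSL triple, with none of the three more "intrinsically red" than the others. A small sketch using only the standard library (note `colorsys` returns HLS ordering, not HSL):

```python
import colorsys

# One color, three encodings: a name, an RGB triple, and an HSL triple.
name = "red"
rgb = (255, 0, 0)

# colorsys works in the [0, 1] range and returns (hue, lightness, saturation).
h, l, s = colorsys.rgb_to_hls(*(c / 255 for c in rgb))
hsl = (round(h * 360), f"{round(s * 100)}%", f"{round(l * 100)}%")

print(rgb)  # (255, 0, 0)
print(hsl)  # (0, '100%', '50%')
```

All three values round-trip into each other by mechanical formulas, which is exactly the comment's point: the information is representation-independent, so nothing about any one encoding could be the seat of meaning.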