[Fis] "the mother of information"

GUEVARA ERRA RAMON MARIANO guevara.erra at gmail.com
Tue Jan 22 13:25:04 CET 2019


Dear colleagues,

I have some comments on the question by Krassimir. In our paper we talked
about consciousness, but I think the results can also be interpreted in a
wider sense.

Indeed, it also seems to me that a person is not more or less conscious with
open eyes than with closed eyes. There is simply more sensory input with the
eyes open, and presumably more information processing.

So, going back to our paper: we measured the information content in the
brain network and saw that in some states there is more information content
than in others. Now, if you are unconscious in a medical sense, say you have
fainted or are in a coma, the information content is very low. But the same
happens if you switch off part of the sensory input. In both cases, what you
are measuring is information processing.
In other words, our measure is good at revealing the amount of information
processing in large-scale brain networks. Incidentally, it serves to
contrast conscious and unconscious states, since consciousness is related to
information processing. But not only that: it also serves to contrast states
with different sensory input, as in the eyes-open/eyes-closed case, even
when both appear to be conscious states. (A rough sketch of the kind of
computation involved is given below.)
It would be interesting to see results from an experiment where subjects
undergo sensory deprivation.
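To make this concrete, here is a minimal sketch of one way such a measure
can be computed, purely as an illustration and not as the exact pipeline of
our paper: define connectivity by thresholding pairwise correlations between
channels, count the connected pairs, and take the entropy as the logarithm
of the number of configurations compatible with that count. The threshold
and the use of plain correlations are assumptions made only for this example.

import numpy as np
from scipy.special import gammaln

def network_entropy(signals, threshold=0.7):
    # signals: array of shape (n_channels, n_samples), e.g. an EEG segment
    n = signals.shape[0]
    corr = np.corrcoef(signals)            # pairwise correlations
    iu = np.triu_indices(n, k=1)           # upper triangle = all channel pairs
    total_pairs = len(iu[0])
    connected = int(np.sum(np.abs(corr[iu]) > threshold))
    # ln of the binomial coefficient C(total_pairs, connected):
    # the number of ways the connected pairs could be arranged
    return (gammaln(total_pairs + 1)
            - gammaln(connected + 1)
            - gammaln(total_pairs - connected + 1))

# e.g. compare segments recorded with eyes open and eyes closed:
# S_open = network_entropy(eeg_open); S_closed = network_entropy(eeg_closed)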

Regarding consciousness, I don't know of a method to quantify it
behaviorally. Actually, even the definition is elusive. Without a
behavioral quantification, all we can do is rely on an empirical,
medical use of the concept and say "this state is more conscious than that
state".

I agree with Karl: this question, whether something is alive or not, is very
important, and it is perhaps related to the question of being conscious or
not. These may be examples of "major evolutionary transitions" (Maynard
Smith and Szathmary). In this sense I have a comment. There seems to be a
belief in certain communities that intelligence and/or consciousness will
appear as a result of the accumulation of processing units, in networks of
sufficient complexity. So an artificial intelligence could emerge if we have
a very large and complex set of artificial neurons (it could even be a
simulation; it doesn't have to be physical). I disagree with this optimism
on historical grounds. There was a similar wave of optimism after the
Miller-Urey experiment on the origin of life, a long time ago, and look
where we are now. As far as I know, a self-replicating artificial cell still
cannot be created from inorganic molecules. I think this is the case
because, of the enormous number of possibilities offered by molecular
combinations, chemical reactions, and so on, only a few qualify as "alive".
And the more complex the system, the more combinations there are (the toy
calculation below makes this growth concrete). It is the selection of the
correct combinations that is difficult. One could say the same about the
brain, where in this case the units are neurons. There is a nice argument
about this in one of Penrose's books: the cerebellum and the cerebral cortex
have numbers of neurons of the same order of magnitude, yet we don't tend to
believe that the cerebellum is the material basis of consciousness.
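As a toy calculation of how fast the space of combinations grows (the
exponent here is only a crude stand-in for the real space of molecular or
neuronal configurations):

for n in (5, 10, 20, 40):
    pairs = n * (n - 1) // 2
    # number of possible binary connection patterns among n units
    print(n, "units ->", 2 ** pairs, "patterns")
# roughly 1.0e3, 3.5e13, 1.6e57 and 6.4e234 patterns, respectively;
# only a vanishing fraction of such configurations could count as "alive"
# or "conscious", which is why selection, not sheer accumulation, is the
# hard part.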

Best,
Ramon