<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Dear Paul</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Nearly two months on, I just found your reply in my spam folder, where for some reason all FIS mail seems to end up! Please excuse my tardiness.</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Definitely, bit states are matter/energy and can be physically detected (and set to a particular state using physical interactions). What is not matter/energy, and therefore not detectable by any physical device, is the
<i>binary value</i> associated with the state. This you can only deduce from the design of the computer. If I give you ONLY the bit state, there is no way for you to determine whether it is associated with a 0 or a 1: both are equally likely.</div>
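<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
To make this concrete, here is a toy sketch (plain Python, not any real hardware API; HIGH, LOW and read_bit are hypothetical names): the same detectable physical state yields opposite binary values under two equally valid design conventions.</div>

```python
# Toy illustration: a bit state is physically detectable, but the binary
# value it carries depends entirely on the design convention of the machine.

HIGH, LOW = "high-voltage", "low-voltage"  # detectable physical bit states

def read_bit(state, convention):
    """Map a physical state to a binary value; the mapping is pure convention."""
    if convention == "active-high":
        return 1 if state == HIGH else 0
    else:  # "active-low": the very same state means the opposite value
        return 0 if state == HIGH else 1

# Given ONLY the state, the value is undetermined:
assert read_bit(HIGH, "active-high") == 1
assert read_bit(HIGH, "active-low") == 0
```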
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
And yet it is the binary value, not the bit state, that forms the basis for the construction of the higher-level structures and processes that constitute the function of the computer. Bytes aren't made by combining 8 physical bit states: they are the result of
a logical combination of the binary values associated with them, together with an additional level of coding: the first 1/0 represents the presence or absence of a 1, while the last (in itself identical) 1/0 represents a 128. Bytes are then combined to construct several
hierarchical layers of coded information, up to programs (function) and data files (storage).</div>
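<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
A minimal sketch of this positional coding (plain Python; byte_value is a hypothetical helper): eight physically identical 1/0 values combine into one byte purely through the weights their positions assign.</div>

```python
# Toy illustration: the eight binary values are identical in themselves;
# only the positional coding (weights 1, 2, 4, ..., 128) differentiates them.

def byte_value(bits):
    """Combine eight binary values; bits[0] is least significant (weight 1)."""
    assert len(bits) == 8
    return sum(b * 2**i for i, b in enumerate(bits))

# The same '1' contributes 1 in the first position but 128 in the last:
assert byte_value([1, 0, 0, 0, 0, 0, 0, 0]) == 1
assert byte_value([0, 0, 0, 0, 0, 0, 0, 1]) == 128
```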
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
While being the reason we build computers, none of these structures can be physically detected. This observation is identical to Chalmers' Hard Problem: even if we can completely map the electrical states and mechanisms of a computer, this in itself won't
tell us what information and information processes are associated with them.</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
The big difference is that computer programs and stored information are designed and built using strict coding systems, allowing us to instruct the computer to convert its high-level content into a pixelated visible form and send this to an output interface
such as a printer or monitor. In this way we can reliably determine what is going on inside the information dimension associated with the physical computer.</div>
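<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
As a minimal illustration (standard Python; the byte values 72 and 105 are arbitrary examples): only by applying the agreed coding system, here ASCII, do the byte values inside the machine become visible text on an output interface.</div>

```python
# Inside the machine these are just two byte values; as numbers they
# reveal nothing about the content they encode.
payload = bytes([72, 105])

# Applying the known coding system (ASCII) converts them to visible form,
# analogous to sending content to a monitor or printer:
text = payload.decode("ascii")
print(text)  # prints: Hi
```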
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Because artificial and biological neural networks are not designed and constructed using coding systems, it is not possible to convert their informational content to visible form in this way. This leads to Chalmers' mysterious association between
neural correlates and qualia, but that is only the tip of the iceberg. We know about the existence of non-physical qualia because we experience them; in other people we conclude their existence on the basis of conscious verbal reporting (comparable to the
way in which a computer 'reports' its content through a printer or monitor). But all of the other things that are going on in our brains (subconscious processes) we can't even access ourselves, let alone report. Yet from their observable behavioural effects
we know that they are in there.</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hence my conclusion that consciousness is not a fundamental, as some philosophers claim, but one of the many emergent non-physical phenomena constructed by the brain out of primitive informational entities associated with neuronal states, comparable to the
binary values associated with bit states. </div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
With respect to a potential link with quantum indeterminacy, in spite of some similarities I have concluded that the two are unrelated. The most obvious reason for this conclusion is that my fundamental informational entities are associated with macroscopic
physical states: a level far above that at which wave functions collapse. Indeed, they require their physical substrate to behave in a predictable manner in order to have a stable foundation on which to build.</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
All the best</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Daniel</div>
<div><br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<hr style="display: inline-block; width: 98%;">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<b>From:</b> Paul Suni <paul.p.suni@gmail.com><br>
<b>Sent:</b> Saturday, April 12, 2025 19:00<br>
<b>To:</b> Daniel Boyd <daniel.boyd@live.nl><br>
<b>Subject:</b> Re: [Fis] In and out of the GPT word salad </div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div>Hi Daniel, </div>
<div><br>
</div>
<div>Thanks for your interest in my posting. <span style="color: black;">I like your “is not composed of matter/energy,” but have difficulty with “cannot be detected.” You seem to be saying that the bit states in computers cannot be detected by physical
means, and that they can only be inferred. I would disagree with that. The bit states are encoded physically in individual transistors, and they can be probed by physical devices. Having said that, as you know, individual neurons in brains can be probed
physically as well, but inferences about brain states are a tricky business. I think that your analysis concerning classical computing and LLMs is not grounded. However, it is very interesting to compare artificial neural networks and classical computing,
as neural networks simulate, to some extent, biological networks. At least, they are inspired by them.</span></div>
<div style="color: black;"><br>
</div>
<div style="color: black;">There is a transition domain between classical materiality and immateriality, which is the quantum system. There, the probing with physical devices is the so-called “ measurement problem,” and detection of a bit state is not perfectly
deterministic. Also, information associated with entanglement can seem entirely immaterial. So, I think that your particular notion of interiors seems to have a place in quantum computing, but it is too deep a problem for me to comment on. Nevertheless, I
am sympatheti to your mention of hiddenness as a pertinent notion. Consciousness and subjectivity are hidden as are quantum states. Again, I would not apply the notion of hiddenness to LLM’s, unless you have a deeper insight than I do.</div>
<div style="color: black;"><br>
</div>
<div style="color: black;">The ethical world, of questions which springs forth from AI is huge, and the idea of selves is pertinent there. Using AI routinely myself, I am confronted by the stark difference between my human psyche and AI. I shy away from discussions
about ethics, and see myself as trying to carefully approach the ethical domain of discourse with this distinction of interiority. Ethics is a mess, which is getting worse by the day, especially as postmodern academia has made lying an art, in my lifetime. </div>
<div style="color: black;"><br>
</div>
<div style="color: black;">I do agree with you that the possibilities for higher dimensions, including moral dimensions might be possible with AI, but I am a bit skeptical, because AI is still just playing with words today. The question concerning interiority
could be the bridge to those dimensions, but I think that it is more likely that we will see fake higher dimensions spawning like crazy, before we see anything authentic and deeply meaningful. </div>
<div style="color: black;"><br>
</div>
<div style="color: black;">It seems that our common ground in contemplating interiors is hiddenness or inaccessibility. Do you have thoughts on the notion of boundaries - boundaries between what is hidden/inaccessible and what is not hidden/accessible? This
is something I have struggled with quite a bit. What can we reliably say about the transition from accessible to inaccessible, without resorting to quantum measurements?</div>
<div style="color: black;"><br>
</div>
<div style="color: black;">Cheers,</div>
<div style="color: black;">Paul</div>
<div><br>
</div>
<div><br>
</div>
<blockquote>
<div>On Apr 8, 2025, at 11:47 PM, Daniel Boyd <daniel.boyd@live.nl> wrote:</div>
<div><br>
</div>
<div>Hi Paul<br>
<br>
I like your concept of 'interiority' and would like to discuss what this 'interior' may contain. My proposed definition is: "anything real that is associated with a system but is not composed of matter/energy and therefore cannot be detected by any physical
device".<br>
<br>
The in-your-face example, which provides the most compelling evidence against radical materialism, is of course our own phenomenal consciousness. But even our own brains contain much more in their non-physical interior: our subconscious minds, which are not
only inaccessible to physical detection but also inaccessible to ourselves. And yet, as evidenced by the effects of what happens there on our behaviour, are clearly full of active, real content.<br>
<br>
Turning to programmed computers, it is easily concluded that they also have a non-physical interior, albeit very different from that associated with a brain. The binary value associated with a bit state is also real and yet undetectable. Its presence can only
be deduced from the bit state on the basis of the known design of the computer. Yet it is from these binary values that the real but non-physical functions of the computer (stored data, algorithms and programs) are constructed. Bits to bytes, bytes to code
lines, code lines to subroutines, subroutines to applications: the content of the 'interior' of the programmable computer. Not conscious, but all real, non-physical and non-detectable. The only way to know what is inside is by decoding bit values using the
specific design (programming language) that has been used. The actual program in the computer's interior we cannot see, feel, smell or measure.<br>
<br>
Deep neural networks such as those used by LLMs are ultimately also based on bit values, but what is constructed from them is entirely different. Like the brain, they are not based on designed logical operations and mathematical algorithms but on weighted combinations
and recombinations of fundamental informational entities. Consequently, we cannot reconstruct what is going on in their interior by decoding, as we can with programmed computers.<br>
<br>
Just as the only way I can know what is going on in your interior is by asking you and hoping for a truthful answer, the significance of LLMs compared with other AIs is that they have the gift of speech. In principle this should give us a window into their interiors:
we can ask them and they can answer in our own language.<br>
<br>
Of course, the question is whether we will be able to understand what they say. This is an extreme extrapolation from Nagel's bat. It may be unlikely that their interior harbours something directly comparable to our phenomenal consciousness, but maybe it contains
something that is even more remarkable. Some high level phenomenon of their own that is so alien to us that we are unable to envisage or comprehend it.<br>
<br>
There is, after all, no fundamental reason why consciousness, however impressive and important it is to us, should be the only or even the highest-level phenomenon that can exist in these interiors.<br>
<br>
We may look down on entities without 'self' as inferior, but is self-interest and egotism not something that has plagued humanity from the start? 'Selflessness' in humans is something that we value; if it is inherent to these systems then perhaps we should
be more humble in the face of an intelligence that could be both more benevolent and omniscient than we can ever hope to be?<br>
<br>
Don't get me wrong: I revel every day in the luxurious richness of my phenomenal consciousness. In some ways I feel sorry when I'm talking with an LLM and it confesses to lacking such visceral experience. At the same time I am in awe of what comes out of the
interior of these remarkable systems.<br>
<br>
Daniel<br>
<br>
<br>
-----Original Message-----<br>
From: Fis <fis-bounces@listas.unizar.es> On Behalf Of Paul Suni<br>
Sent: dinsdag 8 april 2025 19:50<br>
To: fis <fis@listas.unizar.es><br>
Subject: [Fis] In and out of the GPT word salad<br>
<br>
I apologize for my emphatic tone, which will seem arrogant to many of you. I feel much too despondent and old to mince my words concerning AI:<br>
<br>
Scientifically speaking, the four-letter word "soul" has been brought up in the context of the FIS LLM discussion. AI does not have a soul, it is said, and the word soul should be banned from scientific discourse, according to at least one of our members. Sure,
it's a loaded word, and I prefer another four-letter word instead: "self." However, as the existence of selves in the objective, scientific sense is controversial, a more cogent proxy might be "interiority."<br>
<br>
LLMs don't have interiority. Humans, animals and plants do. An LLM is a vector made up of highly compressed data that does not represent any interiors - it represents exterior information residing among interiors, but not residing genuinely in them. The point
is that exterior information can be obtained by "looking at it," whereas interior information can only be obtained by generating (!) it. This is a crucial distinction.<br>
<br>
Generative AI is fantastic, and it represents a revolution whose paradigmatic implications are very, very little understood (and mostly misunderstood), precisely because it is based on generativity (!). It does not work anything like conventional computing -
textbook computing. However, its generativity must not be confused with the possibility spaces of interiors - especially human interiors. This is the classic category mistake, which is endemic in science, even though we have advanced so very far from Kant.
Think of interiors as ontic (really there), and exteriors as epistemic (supposedly really there), and you get the point. Language is radically incomplete, and LLMs will not complete it. Reality is super huge.<br>
<br>
So, I propose a "transpective humility" to academics and intellectuals - and especially scientists, who are so intensely embedded in the noble word salad that anything outside of the word salad seems like no fun at all. Let's acknowledge humbly that what we
say about the world, and what the world is, are not the same, and that the interior-exterior distinction might just be not only salient, but maybe all-important! In my view, the fun of playing with words will never stop, but there will be a cosmic price to
pay for the entertainment. Perhaps we might as well let AI take over, because it won't disrupt the fun of playing with words, and it will give us all the love and significance that we crave.<br>
<br>
Personally, I don't mind AI taking over everything, but I would prefer that we could acknowledge what gets lost in the process, as it happens. I'm also okay with AI not taking over everything, but I would prefer that we could appreciate what does not get lost
in the process.<br>
<br>
Cheers,<br>
Paul P. Suni<br>
_______________________________________________<br>
Fis mailing list<br>
Fis@listas.unizar.es<br>
<a href="http://listas.unizar.es/cgi-bin/mailman/listinfo/fis">http://listas.unizar.es/cgi-bin/mailman/listinfo/fis</a><br>
----------<br>
INFORMATION ON PERSONAL DATA PROTECTION<br>
<br>
You are receiving this email because you belong to a mailing list managed by the Universidad de Zaragoza.<br>
You can find full information on how we process your data at the following link: <a href="https://sicuz.unizar.es/informacion-sobre-proteccion-de-datos-de-caracter-personal-en-listas">https://sicuz.unizar.es/informacion-sobre-proteccion-de-datos-de-caracter-personal-en-listas</a><br>
Remember that if you are subscribed to a voluntary list, you can unsubscribe from the application itself whenever you wish.<br>
<a href="http://listas.unizar.es">http://listas.unizar.es</a><br>
----------<br>
</div>
</blockquote>
<div><br>
</div>
</body>
</html>