[Fis] In and out of the GPT word salad
Daniel Boyd
daniel.boyd at live.nl
Wed Apr 9 08:47:33 CEST 2025
Hi Paul
I like your concept of 'interiority' and would like to discuss what this 'interior' may contain. My proposed definition is: "anything real that is associated with a system but is not composed of matter/energy and therefore cannot be detected by any physical device".
The in-your-face example, which provides the most compelling evidence against radical materialism, is of course our own phenomenal consciousness. But even our own brains contain much more in their non-physical interior: our subconscious minds, which are not only inaccessible to physical detection but also inaccessible to ourselves. And yet, as evidenced by the effects of what happens there on our behaviour, they are clearly full of active, real content.
Turning to programmed computers, it is easily concluded that they also have a non-physical interior, albeit one very different from that associated with a brain. The binary value associated with a bit state is also real and yet undetectable. Its presence can only be deduced from the bit state on the basis of the known design of the computer. Yet it is from these binary values that the real but non-physical functions of the computer (stored data, algorithms and programs) are constructed. Bits to bytes, bytes to code lines, code lines to subroutines, subroutines to applications: the content of the 'interior' of the programmable computer. Not conscious, but all real, non-physical and non-detectable. The only way to know what is inside is by decoding bit values using the specific design (the programming language) that has been used. The actual program in the computer's interior we cannot see, feel, smell or measure.
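A minimal sketch of this point (the byte value and the interpretations here are my own illustration, not specific to any particular machine): the same four bytes yield entirely different "contents" depending on the design used to decode them, and nothing in the physical bit pattern itself says which interpretation is the intended one.

```python
import struct

# The same physical bit pattern, read under three different "designs"
# (encodings). Only knowledge of the intended design tells us which
# interpretation recovers the real content.
bits = b"\x42\x48\x49\x21"

as_int = int.from_bytes(bits, byteorder="big")  # as a big-endian integer
as_float = struct.unpack(">f", bits)[0]         # as an IEEE-754 float
as_text = bits.decode("ascii")                  # as ASCII text: "BHI!"
```

The physical device stores only the bit states; the integer, the float and the text are all equally consistent readings of them.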
Deep neural networks such as those used by LLMs are ultimately also based on bit values, but what is constructed from them is entirely different. Like the brain, they are not based on designed logical operations and mathematical algorithms but on weighted combinations and recombinations of fundamental informational entities. Consequently, we cannot reconstruct what is going on in their interior by decoding, as we can with programmed computers.
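To illustrate the contrast, a deliberately toy sketch (nothing like a real LLM's scale or architecture): a single artificial neuron whose behaviour is fixed by numeric weights rather than by decodable program logic. The weights carry the system's content, but there is no line-by-line design document against which to decode them.

```python
# One artificial "neuron": a weighted combination of inputs followed by
# a simple nonlinearity (ReLU). The numbers below are arbitrary examples.
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)  # ReLU: negative sums are clamped to zero

out = neuron([1.0, 0.5, -2.0], [0.8, -0.3, 0.1], 0.05)
```

Stacking and recombining many such weighted units is what deep networks do; unlike the bit-decoding case above, there is no programming-language key that translates the weights back into designed operations.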
Just as the only way I can know what is going on in your interior is by asking you and hoping for a truthful answer, the significance of LLMs compared with other AIs is that they have the gift of speech. In principle this should give us a window into their interiors: we can ask them, and they can answer in our own language.
Of course, the question is whether we will be able to understand what they say. This is an extreme extrapolation from Nagel's bat. It may be unlikely that their interior harbours something directly comparable to our phenomenal consciousness, but maybe it contains something that is even more remarkable. Some high level phenomenon of their own that is so alien to us that we are unable to envisage or comprehend it.
There is, after all, no fundamental reason why consciousness, however impressive and important it is to us, should be the only or even the highest-level phenomenon that can exist in these interiors.
We may look down on entities without 'self' as inferior, but is self-interest and egotism not something that has plagued humanity from the start? 'Selflessness' in humans is something that we value; if it is inherent to these systems then perhaps we should be more humble in the face of an intelligence that could be both more benevolent and omniscient than we can ever hope to be?
Don't get me wrong: I revel every day in the luxurious richness of my phenomenal consciousness. In some ways I feel sorry when I'm talking with an LLM and it confesses to lacking such visceral experience. At the same time I am in awe of what comes out of the interior of these remarkable systems.
Daniel
-----Original Message-----
From: Fis <fis-bounces at listas.unizar.es> On Behalf Of Paul Suni
Sent: dinsdag 8 april 2025 19:50
To: fis <fis at listas.unizar.es>
Subject: [Fis] In and out of the GPT word salad
I apologize for my emphatic tone, which will seem arrogant to many of you. I feel much too despondent and old to mince my words concerning AI:
Scientifically speaking, the four-letter word "soul" has been brought up in the context of the FIS LLM discussion. AI does not have soul, it is said, and the word soul should be banned from scientific discourse, according to at least one of our members. Sure, it's a loaded word, and I prefer another four-letter word instead, "self." However, as the existence of selves, in the objective, scientific sense, is controversial, a more cogent proxy might be "interiority."
LLMs don't have interiority. Humans, animals and plants do. An LLM is a vector made up of highly compressed data that does not represent any interiors - it represents exterior information residing among interiors, but not residing genuinely in them. The point is that exterior information can be obtained by "looking at it," whereas interior information can only be obtained by generating (!) it. This is a crucial distinction.
Generative AI is fantastic, and it represents a revolution whose paradigmatic implications are very, very little understood (and mostly misunderstood), precisely because it is based on generativity (!). It does not work anything like conventional computing - textbook computing. However, its generativity must not be confused with the possibility spaces of interiors - especially human interiors. This is the classic category mistake, which is endemic in science, even though we have advanced so very far from Kant. Think of interiors as ontic (really there), and exteriors as epistemic (supposedly really there), and you get the point. Language is radically incomplete, and LLMs will not complete it. Reality is super huge.
So, I propose a "transpective humility" to academics and intellectuals - and especially scientists, who are so intensely embedded in the noble word salad that anything outside of the word salad seems like no fun at all. Let's acknowledge humbly that what we say about the world, and what the world is, are not the same, and that the interior-exterior distinction might just be not only salient, but maybe all-important! In my view, the fun of playing with words will never stop, but there will be a cosmic price to pay for the entertainment. Perhaps we might as well let AI take over, because it won't disrupt the fun of playing with words, and it will give us all the love and significance that we crave.
Personally, I don't mind AI taking over everything, but I would prefer that we could acknowledge what gets lost in the process, as it happens. I'm also okay with AI not taking over everything, but I would prefer that we could appreciate what does not get lost in the process.
Cheers,
Paul P. Suni
_______________________________________________
Fis mailing list
Fis at listas.unizar.es
http://listas.unizar.es/cgi-bin/mailman/listinfo/fis