[Fis] Limits of Formal Systems
Carlos Gershenson
cgershen at gmail.com
Mon Feb 5 16:43:48 CET 2024
In the 1920s, David Hilbert's program attempted to get rid once and for all of the paradoxes in mathematics that had arisen from the work of Cantor, Russell, and others. Even as John von Neumann, who worked with Hilbert in Göttingen, was avidly trying to demonstrate that mathematics was complete, consistent, and decidable, Kurt Gödel proved in the early 1930s that any consistent formal system expressive enough for arithmetic is incomplete and cannot prove its own consistency, while Alan Turing proved in 1936 the undecidability of the Entscheidungsproblem (for which he proposed the "Turing Machine", laying the theoretical basis for computer science).
Digital computers have enabled us to study concepts and phenomena for which we previously lacked the proper tools, as they process far more information than our limited brains can manipulate. These include intelligence, life, and complexity.
Even though computers have served us greatly as "telescopes for complexity", the limits of formal systems are becoming ever more evident as we attempt to model and simulate complex phenomena in all their richness, which implies emergence, self-organization, downward causation, adaptation, multiple scales, semantics, and more.
Can we go beyond the limits of formal systems? Well, we actually do it somehow. It is natural to adapt to changing circumstances, so we can say that our "axioms" are flexible. Moreover, we are able to simulate this process in computers. Similar to an interpreter or a compiler, we can define a formal system where some aspects of it can be modified/adapted. And if we need more adaptation, we can generalize the system so that a constant becomes a variable (similar to oracles in Turing Machines). Certainly, this has its limits, but our adaptation is also limited: we cannot change our physics or our chemistry, although we have changed our biology with culture and technology.
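To make the interpreter/compiler analogy concrete, here is a toy sketch (all names are hypothetical, and this is only an illustration, not a rigorous construction): a string-rewriting system whose rules are ordinary data, so the system can modify its own "axioms" at runtime, the way a constant can be generalized into a variable.

```python
# Toy illustration: a rewriting system whose "axioms" (rules) are data
# that the system itself can change. All names are hypothetical.

class AdaptiveSystem:
    def __init__(self, rules):
        # rules: list of (pattern, replacement) string-rewriting pairs
        self.rules = list(rules)

    def step(self, s):
        # Apply the first rule that matches; identity if none matches.
        for pattern, replacement in self.rules:
            if pattern in s:
                return s.replace(pattern, replacement, 1)
        return s

    def adapt(self, pattern, replacement):
        # "Axioms" are flexible: insert a new rule at runtime that
        # takes precedence over the old ones.
        self.rules.insert(0, (pattern, replacement))

system = AdaptiveSystem([("ab", "b")])
print(system.step("aab"))   # "ab"
system.adapt("ab", "c")
print(system.step("aab"))   # "ac" -- the same input, new behavior
```

The point of the sketch is only that when rules are represented as data rather than hard-wired, adaptation becomes an ordinary operation of the system, much as an interpreter treats programs as data.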
Could it be that the problem lies not in the models we have, but in the modeling itself? We tend to forget the difference between our models and the modeled, between the map and the territory, between epistemology and ontology; simply because our language does not make a distinction between phenomena and our perceptions of them. When we say "this system is complex/alive/intelligent", we assume that these are inherent properties of the phenomenon we describe, forgetting that the moment we name anything, we are already simplifying and limiting it. It is clear that models/descriptions will never be as rich as the modeled/phenomena, and that is the way it should be. As Arbib wrote, “a model that simply duplicates the brain is no more illuminating than the brain itself”. [1]
Still, perhaps we're barking up the wrong tree. We also tend to forget the difference between computability in theory (in the Church-Turing sense) and computability in practice (what digital computers actually do). There are functions that are not Turing-computable which we can nevertheless compute in practice, while there are Turing-computable functions for which there is not enough time in the universe to finish the computation. So maybe we are focusing on theoretical limits when we should be more concerned with practical limits.
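The gap between theoretical and practical computability can be illustrated with Ackermann's function, which is Turing-computable (indeed total) yet grows so fast that modest inputs are already out of reach in practice. A minimal sketch:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100000)  # the recursion nests deeply

@lru_cache(maxsize=None)
def ack(m, n):
    # Ackermann's function: total and Turing-computable,
    # but not primitive recursive; its values explode.
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9
print(ack(3, 3))  # 61
# ack(4, 2) equals 2**65536 - 3: provably computable in theory, yet the
# naive recursion above would not finish in the lifetime of the universe.
```

So "computable" in the Church-Turing sense guarantees nothing about feasibility: the theoretical and the practical limits are different limits.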
As you can see, I have many more questions than answers, so I would be very interested in what everyone thinks about these topics.
I'll just share an idea I've been playing with recently, although it might not lead anywhere. For lack of a better name, let's call them "multi-axiom systems". For example, in geometry we know that if we change the 5th axiom (the parallel postulate), we can go from Euclidean to other geometries. We can define a "multi-axiom geometry", so that we can switch between different versions of the 5th axiom for different purposes. In a similar way, we could define a multi-axiom system that contains several different formal systems. We know we cannot have universal computation, completeness, and consistency all at once. But then, in first-order logic we can have completeness and consistency. In second-order logic we have universal computation but not completeness. In paraconsistent logics we sacrifice consistency but gain other properties. If we consider a multi-axiom system that includes all of these, and perhaps more, then in theory we could have all these nice properties in the same system, just not at the same time. Would that be useful? Of course, we would need to find rules that determine when to change the axioms. Just to relate this idea to last month's topic (it was motivated by Stu's and Andrea's paper [2]): if we want to model evolution, we can have "normal" axioms at short timescales (and thus predictability), but at longer (evolutionary) timescales we can shift axiom sets, so that the "rules" of biological systems can change towards a new configuration where we can again use "normal" axioms.
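As a loose illustration of the switching idea (a toy sketch, not a formal construction; all names are hypothetical), one can think of several axiom sets coexisting as data, with a meta-rule that decides which set is active in a given context, as with the 5th axiom of geometry:

```python
# Toy "multi-axiom system": several axiom sets coexist, and a
# meta-rule selects which one is active. All names are hypothetical.

AXIOM_SETS = {
    "euclidean":  {"parallel_postulate": "exactly one parallel through a point"},
    "hyperbolic": {"parallel_postulate": "infinitely many parallels through a point"},
    "elliptic":   {"parallel_postulate": "no parallels through a point"},
}

def select_axioms(curvature):
    # Meta-rule: pick the geometry whose 5th axiom fits the context.
    if curvature == 0:
        return AXIOM_SETS["euclidean"]
    return AXIOM_SETS["hyperbolic"] if curvature < 0 else AXIOM_SETS["elliptic"]

print(select_axioms(0)["parallel_postulate"])
print(select_axioms(-1)["parallel_postulate"])
```

The hard part, of course, is not the switching mechanism itself but finding principled meta-rules (here, "curvature" stands in for whatever contextual signal would trigger the shift).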
[1] Michael A. Arbib, The Metaphorical Brain 2: Neural Networks and Beyond (1989)
[2] Stuart Kauffman, Andrea Roli. Is the Emergence of Life an Expected Phase Transition in the Evolving Universe? https://arxiv.org/abs/2401.09514v1
Carlos Gershenson
SUNY Empire Innovation Professor
Department of Systems Science and Industrial Engineering
Thomas J. Watson College of Engineering and Applied Science
State University of New York at Binghamton
Binghamton, New York 13902 USA
https://tendrel.binghamton.edu