<div dir="auto">Dear all<div dir="auto"><br></div><div dir="auto">I think it would be helpful to discourage use of the term "artificial intelligence" to describe this technology. Much better to call it "<b>artificial anticipation</b>" as a description of what it actually does.</div><div dir="auto"><br></div><div dir="auto">In that vein, it's a terrible loss that Loet Leydesdorff is no longer with us, not least because he had a profound understanding of anticipation based on the work of Robert Rosen and Daniel Dubois. But the work is there and it tells us how anticipation works. Current AI is pretty close to this. </div><div dir="auto"><br></div><div dir="auto">"AI" or "AA" is not a database. It has a different architecture. It is a new technology, built in a new way. We see very little technology that is genuinely "new". Even the web is basically a distributed database (it gives back what people put in). </div><div dir="auto"><br></div><div dir="auto">Do we anticipate like AI? Well, I recommend reading Leydesdorff to help answer that (kind of, yes). </div><div dir="auto"><br></div><div dir="auto">How are we different in our anticipation? That's a better question than the one about intelligence or wisdom. </div><div dir="auto"><br></div><div dir="auto">Do intelligence and consciousness derive from the anticipation in nature? To me, that is a question about evolutionary biology and physiological function, which science is only beginning to address. </div><div dir="auto"><br></div><div dir="auto">Will we ever get artificial intelligence? Maybe Stafford Beer and Gordon Pask were on to something when they were trying to make computers out of ponds or chemicals. A lot depends on how we change our epistemology to interpret the computations of nature. 
</div><div dir="auto"><br></div><div dir="auto">Best wishes</div><div dir="auto"><br></div><div dir="auto">Mark</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 8 Nov 2023, 07:46 Louis Kauffman, <<a href="mailto:loukau@gmail.com">loukau@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Dear Eric,<div>The boundary between our dreams and our actualities is vague.</div><div>We do not yet actually have AI.</div><div>And when we get it, it will no longer be artificial.</div><div>AI = ~ AI.</div><div>The present LLMs are nowhere near doing creative mathematics.</div><div>It is not enough to mimic rationality to do creative mathematics.</div><div>When the rules are all given and a search space is specified,</div><div> then computers can look for and find mathematical proofs that humans would not find without them.</div><div>This has been done and it will be done spectacularly in the future.</div><div>This will be exciting but we (the mathematicians) are designers of these games.</div><div>We will always be happy to see the machines go forward into more and more possibilities.</div><div><br></div><div>The key concepts here are compresence and coalescence.</div><div>As we work with technologies we are no longer alongside them, we are coalesced with them.</div><div>I use my glasses by putting them on and becoming the world view that happens in SEEING THROUGH them.</div><div>And then “I” have lost “my” objectivity.</div><div>It was never mine.</div><div>Best,</div><div>Lou</div><div>P.S. Please note that I write in such a way that it is tempting to imagine arguing with my point of view.</div><div>But I do not have the point of view. You have the point of view. 
And when you argue with “me” you are arguing with yourself.</div><div>My intent is to write down points of view until they become absurd and turn into other points of view. </div><div>I trick you into participating, but you should know that I am doing this.</div><div>You trick me into responding.</div><div>Knowing will accelerate the process.</div><div><br></div><div><br></div><div><div><blockquote type="cite"><div>On Nov 7, 2023, at 3:55 AM, Eric Werner <<a href="mailto:eric.werner@oarf.org" target="_blank" rel="noreferrer">eric.werner@oarf.org</a>> wrote:</div><br><div>
<div><p>Dear Lou,</p><p>The boundary between rationality and hucksterism is vague. LLMs
may mimic rationality enough to outperform most mathematicians. I
think you are overemphasizing implementation over function when
regarding LLMs. Two systems may exhibit functioning rationality
yet have very different instantiations/implementations. So too
with so many other mental states and processes. <br>
</p><p>Best,</p><p>Eric <br>
</p>
<div>On 11/7/23 5:57 AM, Louis Kauffman
wrote:<br>
</div>
<blockquote type="cite">
Dear Plamen,
<div>You are hoping for AI language programs that can
actually engage in reason.</div>
<div>They do not yet exist.</div>
<div>We do not yet have AI in this sense.</div>
<div>It is the right goal and it can come when there is a
proper synthesis of the non-publicized formal system handling
and theorem proving systems and the </div>
<div>language generation systems. The present language
generation systems produce language on the basis of
most-probable-word generation from a big database of human texts.
This is not artificial intelligence, but it is being huckstered
as such, alas. We can do better, and we shall do better if the
world survives.</div>
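Lou's description of "most probable word generation" can be sketched minimally. The following toy bigram model is purely illustrative (the corpus and function names are invented for this sketch; real LLMs use neural networks over tokens, not raw word counts), but it shows the core idea of choosing a continuation by conditional frequency in a corpus:

```python
# Toy sketch of most-probable-word generation (not any actual LLM):
# estimate which word most often follows a given word in a corpus.
from collections import Counter, defaultdict

# Invented miniature "database of human texts".
corpus = "the cat sat on the mat the cat ran on the grass".split()

# Count how often each word follows each preceding word (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequent continuation of `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("the"))  # "cat" (2 of the 4 continuations)
print(most_probable_next("on"))   # "the"
```

Chaining such choices produces fluent-looking text with no reasoning involved, which is exactly the distinction Lou draws between language generation and intelligence.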
<div>Best,</div>
<div>Lou</div>
<div><br>
</div>
<div>
<div>
<blockquote type="cite">
<div>On Oct 27, 2023, at 7:40 AM, Dr. Plamen L.
Simeonov <<a href="mailto:plamen.l.simeonov@gmail.com" target="_blank" rel="noreferrer">plamen.l.simeonov@gmail.com</a>>
wrote:</div>
<br>
<div>
<div dir="ltr">
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">Thank
you, Pedro, for this smart introduction of a new
aspect.</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">In
particular, I am convinced that we urgently need AI help
in human patent and civil law, with its many
subfields, to achieve true justice.</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">The
current situation in many countries is that law
courts are simply stuck in backlogs of cases, and the many
decision loops depend on an obsolete hierarchy and on
freedom of interpretation by smart lawyers and "lawmakers",
i.e. parliament/congress representatives, which often does
not mean justice as the people at the base
understand it. In my view this is one of the reasons
why modern societies degrade: the lack of operative
justice. </div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">I
know a German professor and inventor who tried to build
an AI-based patent-law proof engine. But his invention
got stuck on the need for unambiguous syntax and
semantics in the legal texts given to the
engine for binary processing. This "AI law machine"
would be a great invention, but it would certainly
make generations of lawyers and politicians
unemployed, which I wholeheartedly welcome ;-)</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763"><br>
</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">By
the way, coming back briefly to my earlier essay on AI
"wisdom" today: I think that the best way to avoid and
kill tyranny these days is perhaps to invent and switch
to a new "own" coded language and ignore all the
narratives bombarding us with the globalists'
transhumanist propaganda. So we can leave them using
conventional English as they wish. The more
people move to this new "Dumbledore" invented coded
language, the less power the unelected tyrants will
have over us. What do you think?</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763"><br>
</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">Best,</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763"><br>
</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763">Plamen</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763"><br>
</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763"><br>
</div>
<div class="gmail_default" style="font-family:arial,sans-serif;color:#073763"><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Oct 27, 2023
at 2:16 PM Pedro C. Marijuán <<a href="mailto:pedroc.marijuan@gmail.com" target="_blank" rel="noreferrer">pedroc.marijuan@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<div>
<div>Dear List, (I have just seen Plamen's;
I could rephrase some of the below for the sake of
the argument, but it would become too long. And
about the server--Karl-- and also Marcus, yes,
something is happening; I cannot access it
either. I will check.)<br>
</div>
<div><br>
</div>
<div>Then, regarding the ongoing exchanges
on Wisdom, I was reminded of the TURING TEST (<span lang="en">from wiki: if a machine can
engage in a conversation with a human without
being detected as a machine, it has demonstrated
human intelligence). The test was applauded or
seriously considered decades ago, but now it is
just a bygone, obsolete item. Any domestic AI
system passes the test. In my case, I disliked
that test when I first met it (late 70s). I
considered it a symbol of the very
superficial "theorizing" in those new fields...
so I changed gears and finally focused on
"natural intelligence". <br>
</span></div>
<div><span lang="en"><br>
</span></div>
<div><span lang="en">Regarding
wisdom, we take it as an exclusively human
quality, and seemingly binary: either yes or no.
Humans have wisdom, machines don't. But as in
the case of intelligence, it is probably graded.
For "formal" intelligence, an IQ gradation
was easily established some time ago, not quite
perfect, but very widely used everywhere.
Then, how could an IQ of wisdom be established?
Really difficult... it is the ages-old
divergence between the analytical and the
integrative, the reductionist versus the
holistic. <br>
<div><span lang="en"><br>
</span></div>
<div><span lang="en">My take is
that around Large Language Models a pretty small
but noticeable enough portion of wisdom has been
achieved; see for instance the quotation
below. I am lightly cooperating in the AI
field of "sentiment analysis", and have high hopes
that it can contribute to an improved
rationalization of human social emotions, the
study of which is painfully in disarray in
psychology and sociology. No wonder the awful mental
state of many people in a number of societies...
There is a wonderful quotation from philosopher
Ortega y Gasset about that (but unfortunately I
cannot locate it).</span><span lang="en"><br>
</span></div>
<div><span lang="en"><br>
</span></div>
<div><span lang="en">All the
best--Pedro</span><br>
<span lang="en"><span><b><br>
</b></span></span></div>
<div><span lang="en"><span><b>Theory of Mind for
Multi-Agent Collaboration via Large Language
Models</b>. From Huao Li et al. , at: <a href="https://urldefense.com/v3/__https://arxiv.org/abs/2310.10701__;!!D9dNQwwGXtA!WGQRbH1p47y_3QmXG5cnkavkcLaI6dQneyi1TygmW_kNa1lYM_Mf8gzFCzkD_vh6TMhRW5t-xmMIP2ud1gTy45FYyBUR$" target="_blank" rel="noreferrer">https://arxiv.org/abs/2310.10701</a></span></span></div>
<div><span lang="en"><br>
</span></div>
<div><span lang="en">"In this
study, we assessed the ability of recent large
language models (LLMs) to conduct embodied
interactions in a team task. Our results
demonstrate that LLM-based agents can handle
complex multi-agent collaborative tasks at a
level comparable with the state-of-the-art
reinforcement learning algorithm. <b>We
also observed evidence of emergent
collaborative behaviors and high-order Theory
of Mind capabilities</b> among LLM-based
agents. These findings confirm the potential
intelligence of LLMs in formal reasoning, world
knowledge, situation modeling and social
interactions. Furthermore, we discussed two
systematic failures that limit the performance
of LLM-based agents and proposed a
prompt-engineering method that <b>mitigates
these failures by incorporating an explicit
belief state about world knowledge</b> into
the model input."</span></div>
<div><span lang="en"><br>
</span></div>
<div><span lang="en"><br>
</span></div>
<div><span lang="en"><span></span></span>El 27/10/2023 a las
12:36, Eric Werner escribió:<br>
</div>
<blockquote type="cite"><p>Dear Yixin,</p><p>As you know from my different
responses regarding Wisdom and Meta-AI
(Artificial Wisdom) I am of a rather split
opinion: </p><p>On the one hand, the poetic emotional
side of me sees the necessary inclusion of an
ethics of fairness for all living creatures. I
am skeptical, like you, that AI can achieve this
consistently. I am worried about the
ramifications of using AI systems in a
military-governmental decision making process. <br>
</p><p>On the other hand, it may well come
about that Meta-AI is possible. Such a system
poses questions and creates new problems that it
then solves. Such a Meta-AI system could
rapidly explore different combinations of
explicit and implicit theoretical assumptions,
leading to new theories about nature and the
world. It could then propose new experiments
that confirm or disconfirm its theories or
hypotheses. It could see long-range
relationships, logical and mathematical, across
different specialized theories or mental
frameworks. Meta-AI is one of the founding
cornerstones of General AI. It presupposes that
reasoning, and not just parroting, can be learned
in some way. <br>
</p><p>Some more thoughts on Wisdom: <br>
</p>
<div dir="ltr">
<div>
<h1>Human wisdom is distributed and
contradictory<br>
</h1>
</div>
<ul>
<li><b>AI models can contain
all of human wisdom </b>- including
conflicting Wisdom<br>
</li>
<li><b>Conflicting Wisdom:</b></li>
<ul>
<li>One society's Wisdom may be
another society's doom<br>
</li>
</ul>
<li><b>Realpolitik of human
wisdom</b></li>
<ul>
<li>As soon as limited resources
come in, we get conflict</li>
</ul>
<li><b>Imagine 10 people </b>on
land that supports 10 people, provided they all
share what they find among themselves</li>
<ul>
<li>If they are greedy, it reduces
the population</li>
<li>It depends on whether they really
need 10 to find the food for 10. If five
are sufficient to survive on the same land
with less stress, then there’s a
temptation to get rid of or disadvantage the
other five</li>
<li>Improved search or
intelligence algorithms, whether genetic
or soft, can lead to finding more resources</li>
<li>Sharing knowledge leads to
greater distributed productivity, and more
can join the community</li>
</ul>
<li><b>The life-and-death
struggle</b></li>
<ul>
<li>Imagine another group of 10
comes into the same area that supports
only 10. Then we get conflict. They may
cooperate, but half have to die because of
limited resources.</li>
<li>Same holds for university
positions</li>
<li>Same holds for a limited
resources in well-to-do societies versus
less able societies</li>
<li>Taking advantage of one side's
ability against the other</li>
</ul>
<li><b>Power Creates Laws to
Perpetuate Power </b><br>
</li>
<ul>
<li>Speech is regulated to prevent
thought and action that may lead to a change
of the status quo of power</li>
<li>Servants must be servile <br>
</li>
<li>Those in power must pretend to
be generous to the extent that the servant
does not rebel</li>
<li>The good master (wants to be
seen as Wise, knowing what is good for the
underlings)<br>
</li>
<li>The parasite must not kill its
host, unless or until it can jump to
another host</li>
<li>A parasite of a parasite leads
to a hierarchy of parasites <br>
</li>
</ul>
<li><b>Limited Resources
Disturb the Ideal of Fairness and Absolute
Wisdom</b><br>
</li>
<ul>
<li>As soon as limited resources
come into play the ideal no longer works</li>
<li>The group with more power in
the given environment can win the
resources</li>
<li>With limited resources, there
can be no compromise after a certain point
of sharing</li>
</ul>
</ul>
Thus my ambivalence concerning Wisdom.</div>
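Eric's 10-people thought experiment can be made concrete with a small sketch. The numbers and the specific hoarding assumption below are mine, purely for illustration, not part of Eric's argument:

```python
# Toy sketch of the limited-resources thought experiment: land that
# yields food for `capacity` people, under full sharing vs. greed.
def surviving(population, capacity, share):
    """People fed: sharing splits the yield; greed wastes part of it."""
    if share:
        # Full sharing: the land feeds everyone up to its capacity.
        return min(population, capacity)
    # Greedy case (illustrative assumption): half the group hoards a
    # double portion, so the same yield feeds that many fewer people.
    fed = capacity - population // 2
    return max(fed, 0)

print(surviving(10, 10, share=True))   # 10: everyone survives
print(surviving(10, 10, share=False))  # 5: greed reduces the population
```

The sketch reproduces the two bullet points above: with full sharing the land's capacity is the only limit, while greed (or a second group of 10 arriving) pushes part of the population below subsistence.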
<div dir="ltr"><br>
</div>
<div dir="ltr">Best wishes,</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Eric<br>
</div>
<br>
<br>
<div>On 10/25/23 2:12 PM, 钟义信 wrote:<br>
</div>
<blockquote type="cite">
<div>Dear Eric,</div>
<div><br>
</div>
<div>Many mysteries remain
in wisdom. This is one of the reasons that the
concept of AI does not involve wisdom, and
therefore AI is able to solve problems but is
unable to define problems. </div>
<div><br>
</div>
<div>Wisdom is creative in nature but
AI is not. It is my belief that humans can
build up AI but cannot build up AW (artificial
wisdom).</div>
<div><br>
</div>
<div>Wisdom can be owned only by humans,
not by any machine. Do you agree?
Please give your comments on the point.</div>
<div><br>
</div>
<div>Best regards,</div>
<div><br>
</div>
<div>Yixin <br>
<br>
----------<br><p>Sent from a mobile device<br>
<br>
</p>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div></blockquote></div><br></div></div>_______________________________________________<br>
Fis mailing list<br>
<a href="mailto:Fis@listas.unizar.es" target="_blank" rel="noreferrer">Fis@listas.unizar.es</a><br>
<a href="http://listas.unizar.es/cgi-bin/mailman/listinfo/fis" rel="noreferrer noreferrer" target="_blank">http://listas.unizar.es/cgi-bin/mailman/listinfo/fis</a><br>
----------<br>
INFORMATION ON THE PROTECTION OF PERSONAL DATA<br>
<br>
You are receiving this email because you belong to a mailing list managed by the Universidad de Zaragoza.<br>
You can find all the information on how we process your data at the following link: <a href="https://sicuz.unizar.es/informacion-sobre-proteccion-de-datos-de-caracter-personal-en-listas" rel="noreferrer noreferrer" target="_blank">https://sicuz.unizar.es/informacion-sobre-proteccion-de-datos-de-caracter-personal-en-listas</a><br>
Remember that if you are subscribed to a voluntary list, you can unsubscribe from the application itself whenever you wish.<br>
<a href="http://listas.unizar.es" rel="noreferrer noreferrer" target="_blank">http://listas.unizar.es</a><br>
----------<br>
</blockquote></div>