[Fis] Fis Digest, Vol 105, Issue 12 Human Wisdom vs Meta-Intelligence

Eric Werner eric.werner at oarf.org
Tue Nov 7 10:55:54 CET 2023


Dear Lou,

The boundary between rationality and hucksterism is vague. LLMs may 
mimic rationality well enough to outperform most mathematicians. I think 
you are overemphasizing implementation over function when it comes to 
LLMs. Two systems may exhibit functioning rationality yet have very 
different instantiations/implementations. So too with so many other 
mental states and processes.
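
To make the function point concrete, here is a deliberately tiny sketch of
the "most probable next word" mechanism you describe below -- my own toy
Python example, not a claim about how any real LLM is implemented. Even this
crude instantiation mimics a sliver of linguistic competence, and a
transformer realizes a functionally similar input-output mapping with a
radically different implementation:

    from collections import Counter, defaultdict

    # Toy "most probable next word" generator: a bigram table over a tiny corpus.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(word, length=6):
        """Greedily emit the most probable next word at each step."""
        out = [word]
        for _ in range(length):
            if word not in following:
                break
            word = following[word].most_common(1)[0][0]
            out.append(word)
        return out

    print(" ".join(generate("the")))  # e.g. "the cat sat on the cat sat"

Whether the mapping is realized by a bigram table, a transformer, or a brain
is a separate question from whether the behavior counts as rational.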

Best,

Eric

On 11/7/23 5:57 AM, Louis Kauffman wrote:
> Dear Plamen,
> You are hoping for AI language programs that can actually engage in 
> reasoning.
> They do not yet exist.
> We do not yet have AI in this sense.
> It is the right goal, and it can come when there is a proper synthesis 
> of the non-publicized formal-system-handling and theorem-proving systems 
> with the language generation systems. The present language generation 
> systems produce language on the basis of most-probable-next-word 
> generation from a big database of human texts. This is not artificial 
> intelligence, but it is being huckstered as such, alas. We can do 
> better and we shall do better if the world survives.
> Best,
> Lou
>
>> On Oct 27, 2023, at 7:40 AM, Dr. Plamen L. Simeonov 
>> <plamen.l.simeonov at gmail.com> wrote:
>>
>> Thank you, Pedro, for this smart introduction of a new aspect.
>> In particular, I am convinced that we urgently need AI help, 
>> especially in human patent and civil law with its many 
>> subfields, to achieve true justice.
>> The current situation in many countries is that law courts are simply 
>> stuck in backlogs of cases, and the many decision loops depend on an 
>> obsolete hierarchy and on the freedom of interpretation enjoyed by smart 
>> lawyers and "lawmakers", i.e. parliament/congress representatives, which 
>> often does not mean justice as the people at the grassroots understand 
>> it. In my view this is one of the reasons why modern societies degrade: 
>> the lack of operative justice.
>> I know a German professor and inventor who tried to make an AI-based 
>> patent-law proof engine. But his invention got stuck on the need for 
>> unambiguous syntax and semantics of the law LLMs that were to be given 
>> to the engine for binary processing. This "AI law machine" would be a 
>> great invention, but it would certainly make generations of lawyers 
>> and politicians unemployed, which I wholeheartedly welcome ;-)
>>
>> By the way, coming back briefly to my earlier essay on AI "wisdom" 
>> today: I think that the best way to avoid and kill tyranny these days 
>> is perhaps to invent and switch to a new "own" coded language and 
>> ignore all the narratives bombarding us with the globalists' 
>> transhumanist propaganda. We can leave them using conventional 
>> English as they wish. The more people move to this newly invented 
>> "Dumbledore" coded language, the less power the unelected tyrants 
>> will have over us. What do you think?
>>
>> Best,
>>
>> Plamen
>>
>>
>>
>>
>> On Fri, Oct 27, 2023 at 2:16 PM Pedro C. Marijuán 
>> <pedroc.marijuan at gmail.com> wrote:
>>
>>     Dear List, (I have just seen Plamen's; I could rephrase some of
>>     the below for the sake of the argument, but it would become too
>>     long. And about the server--Karl-- and also Marcus, yes something
>>     is happening, I cannot access it either. I will check).
>>
>>     Then, regarding the ongoing exchanges on Wisdom, I was reminded
>>     of the TURING TEST (from wiki: if a machine can engage in a
>>     conversation with a human without being detected as a machine, it
>>     has demonstrated human intelligence). The test was applauded or
>>     seriously considered decades ago, but now it is just a bygone,
>>     obsolete item. Any domestic AI system passes the test. In my case,
>>     I disliked that test when I first met it (late 70s). I
>>     considered it a symbol of the very superficial "theorizing" in
>>     those new fields... so I changed gears and finally focused on
>>     "natural intelligence".
>>
>>     Regarding wisdom, we take it as an exclusively human quality, and
>>     seemingly binary: either yes or no. Humans have wisdom, machines
>>     don't. But as in the case of intelligence, it is probably
>>     graded. For "formal" intelligence, an IQ gradation was easily
>>     established some time ago, not quite perfect, but it was very widely
>>     used everywhere. Then, how could an IQ of wisdom be established?
>>     Really difficult... it is the ages-old divergence between the
>>     analytical and the integrative, the reductionist versus the
>>     holistic.
>>
>>     My take is that around Large Language Models a pretty small but
>>     noticeable portion of wisdom has been achieved; see for
>>     instance the quotation below. I am lightly cooperating in
>>     the AI field of "sentiment analysis", and have high hopes that it
>>     can contribute to an improved rationalization of human social
>>     emotions, the study of which is painfully in disarray in Psychology
>>     and Sociology. No wonder the awful mental state of many people in
>>     a number of societies... There is a wonderful quotation from
>>     philosopher Ortega y Gasset about that (but unfortunately I cannot
>>     locate it).
>>
>>     All the best--Pedro
>>
>>     *Theory of Mind for Multi-Agent Collaboration via Large Language
>>     Models*. From Huao Li et al., at:
>>     https://arxiv.org/abs/2310.10701
>>
>>     "In this study, we assessed the ability of recent large language
>>     models (LLMs) to conduct embodied interactions in a team task.
>>     Our results demonstrate that LLM-based agents can handle complex
>>     multi-agent collaborative tasks at a level comparable with the
>>     state-of-the-art reinforcement learning algorithm. *We also
>>     observed evidence of emergent collaborative behaviors and
>>     high-order Theory of Mind capabilities* among LLM-based agents.
>>     These findings confirm the potential intelligence of LLMs in
>>     formal reasoning, world knowledge, situation modeling and social
>>     interactions. Furthermore, we discussed two systematic failures
>>     that limit the performance of LLM-based agents and proposed a
>>     prompt-engineering method that *mitigates these failures by
>>     incorporating an explicit belief state about world knowledge*
>>     into the model input."
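>>
>>     (The last sentence above describes a simple prompt-engineering
>>     pattern: make the agent's belief state explicit and prepend it to
>>     the model input. A minimal, hypothetical Python sketch of that
>>     general pattern -- my own illustration, not the authors' code; the
>>     name build_prompt and its arguments are invented here:)
>>
>>         # Hypothetical sketch of "explicit belief state" prompting.
>>         def build_prompt(belief_state, task):
>>             # Serialize what the agent currently believes about the world...
>>             beliefs = "\n".join(f"- {k}: {v}" for k, v in belief_state.items())
>>             # ...and prepend it, so the model need not re-infer it from
>>             # the dialogue history before acting.
>>             return f"Known world state:\n{beliefs}\n\nTask: {task}"
>>
>>         prompt = build_prompt(
>>             {"rooms_searched": ["kitchen"], "teammate_location": "lobby"},
>>             "Decide which room to search next and explain why.",
>>         )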
>>
>>
>>     On 27/10/2023 at 12:36, Eric Werner wrote:
>>>
>>>     Dear Yixin,
>>>
>>>     As you know from my different responses regarding Wisdom and
>>>     Meta-AI (Artificial Wisdom) I am of a rather split opinion:
>>>
>>>     On the one hand, the poetic emotional side of me sees the
>>>     necessary inclusion of an ethics of fairness for all living
>>>     creatures. I am skeptical, like you, that AI can achieve this
>>>     consistently. I am worried about the ramifications of using AI
>>>     systems in a military-governmental decision making process.
>>>
>>>     On the other hand, it may well come about that Meta-AI is
>>>     possible. Such a system poses questions and creates new problems
>>>     that it then solves. Such a Meta-AI system could rapidly
>>>     explore different combinations of explicit and implicit
>>>     theoretical assumptions, leading to new theories about nature
>>>     and the world. It could then propose new experiments that
>>>     confirm or disconfirm its theories or hypotheses. It could see
>>>     long-range relationships, logical and mathematical, across
>>>     different specialized theories or mental frameworks. Meta-AI is
>>>     one of the founding cornerstones of General AI. It presupposes
>>>     that reasoning, and not just parroting, can be learned in some way.
>>>
>>>     Some more thoughts on Wisdom:
>>>
>>>
>>>       Human wisdom is distributed and contradictory
>>>
>>>       * *AI models can contain all of human wisdom* - including
>>>         conflicting Wisdom
>>>       * *Conflicting Wisdom:*
>>>           o One society's Wisdom may be another society's doom
>>>       * *Realpolitik of human wisdom*
>>>           o As soon as limited resources come in, we get conflict
>>>       * *Imagine 10 people* on land that supports 10 people if
>>>         they all share what they find among the 10
>>>           o If they are greedy, it reduces the population
>>>           o It depends on whether they really need 10 to find the food
>>>             for 10. If five are sufficient to survive on the same
>>>             land with less stress, then there’s a temptation to get
>>>             rid of or disadvantage the other five
>>>           o Increased search or intelligence algorithms, whether
>>>             genetic or soft, can lead to finding more resources
>>>           o Sharing knowledge leads to greater distributed
>>>             productivity, and more can join the community
>>>       * *The life and death struggle*
>>>           o Imagine another group of 10 comes into the same area
>>>             that supports only 10. Then we get conflict. They may
>>>             cooperate, but half have to die because of limited resources.
>>>           o Same holds for university positions
>>>           o Same holds for limited resources in well-to-do
>>>             societies versus less able societies
>>>           o Taking advantage of one side's ability against the other
>>>       * *Power Creates Laws to Perpetuate Power*
>>>           o Speech is regulated to prevent thought and action that may
>>>             lead to a change in the status quo of power
>>>           o Servants must be servile
>>>           o Those in power must pretend to be generous to the extent
>>>             that the servant does not rebel
>>>           o The good master (wants to be seen as Wise, knowing what
>>>             is good for the underlings)
>>>           o The parasite must not kill its host, unless or until it
>>>             can jump to another host
>>>           o A parasite of a parasite leads to a hierarchy of parasites
>>>       * *Limited Resources Disturb the Ideal of Fairness and
>>>         Absolute Wisdom*
>>>           o As soon as limited resources come into play the ideal no
>>>             longer works
>>>           o The group with more power in the given environment can
>>>             win the resources
>>>           o With limited resources, there can be no compromise after
>>>             a certain point of sharing
>>>
>>>     Thus my ambivalence concerning Wisdom.
>>>
>>>     Best wishes,
>>>
>>>     Eric
>>>
>>>
>>>     On 10/25/23 2:12 PM, 钟义信 wrote:
>>>>     Dear Eric,
>>>>
>>>>     Many mysteries remain in wisdom. This is one of
>>>>     the reasons that the concept of AI does not involve wisdom, and
>>>>     therefore AI is able to solve problems but is unable to define
>>>>     problems.
>>>>
>>>>     Wisdom is creative in nature but AI is not. It is my belief
>>>>     that humans can build up AI but cannot build up AW (artificial
>>>>     wisdom).
>>>>
>>>>     Wisdom can only be owned by humans, not by any machine. Do
>>>>     you think so? Please give your comments on this point.
>>>>
>>>>     Best regards,
>>>>
>>>>     Yixin
>>>>
>>>>     ----------
>>>>
>>>>     This email was sent from a mobile device
>>>>

