[Fis] Fis Digest, Vol 105, Issue 12 Human Wisdom vs Meta-Intelligence

Emanuel Diamant emanl.245 at gmail.com
Tue Nov 7 13:45:37 CET 2023


Dear Louis, Dear FIS colleagues, 

 

Usually, I don’t take part in FIS discussions (my views are very different from those my FIS colleagues hold), but I would like to support Lou’s last post. I expressed similar views in my recent conference presentation. If you have time, please take a look (the Springer version will be published in January 2024; meanwhile, I direct you to the ResearchGate version). 

 

Emanuel Diamant, BICA’s fears and troubles: GPT-based AI tools are its friends or foes?

Submitted to BICA*AI 2023 Conference, October 13-15, Ningbo, China.

https://www.researchgate.net/publication/371573729

 

Best regards, 

Emanuel.

 

----------------------------------------------------------------- 

From: Fis <fis-bounces at listas.unizar.es> On Behalf Of Louis Kauffman
Sent: Tuesday, 7 November 2023 6:57
To: Dr. Plamen L. Simeonov <plamen.l.simeonov at gmail.com>
Cc: "Pedro C. Marijuán" <pedroc.marijuan at gmail.com>; fis at listas.unizar.es
Subject: Re: [Fis] Fis Digest, Vol 105, Issue 12 Human Wisdom vs Meta-Intelligence

 

Dear Plamen,

You are hoping for AI language programs that can actually engage in reason.

They do not yet exist.

We do not yet have AI in this sense.

It is the right goal, and it can come when there is a proper synthesis of the non-publicized formal-system-handling and theorem-proving systems with the language generation systems. The present language generation systems produce language on the basis of most probable word generation from a large database of human texts. This is not artificial intelligence, but it is being huckstered as such, alas. We can do better, and we shall do better if the world survives.
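
(As a minimal illustration of what "most probable word generation" means, here is a toy Python sketch; the lookup table, words, and probabilities are invented for illustration and stand in for a learned model, they are not taken from any real system.)

import random

# Toy stand-in for a language model: a table mapping a context (the previous
# words) to a probability distribution over candidate next words.
toy_model = {
    ("the",): {"cat": 0.5, "dog": 0.3, "world": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
}

def next_word(context, model):
    # Look up the conditional distribution for this context and sample from it.
    distribution = model[tuple(context)]
    words = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word(["the"], toy_model))         # e.g. "cat"
print(next_word(["the", "cat"], toy_model))  # e.g. "sat"

A real system does the same kind of conditional sampling, but over a distribution estimated from a very large corpus of human text rather than a hand-written table; there is no theorem proving or formal reasoning in the loop.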

Best,

Lou

 

On Oct 27, 2023, at 7:40 AM, Dr. Plamen L. Simeonov <plamen.l.simeonov at gmail.com <mailto:plamen.l.simeonov at gmail.com> > wrote:

 

Thank you, Pedro, for this smart introduction of a new aspect.

In particular, I am convinced that we urgently need AI help, especially in patent and civil law with their many subfields, to achieve true justice.

The current situation in many countries is that law courts are simply stuck in case backlogs, and the many decision loops depend on an obsolete hierarchy and on the freedom of interpretation enjoyed by smart lawyers and "lawmakers", i.e. parliament/congress representatives, which often does not mean justice as the people at the base understand it. In my view this is one of the reasons why modern societies degrade: the lack of operative justice.

I know a German professor and inventor who tried to build an AI-based patent-law proof engine. But his invention got stuck on the need for unambiguous syntax and semantics in the legal language models given to the engine for binary processing. This "AI law machine" would be a great invention, but it would certainly make generations of lawyers and politicians unemployed, which I wholeheartedly welcome ;-)

 

By the way, coming back briefly to my earlier essay on AI "wisdom": I think that the best way to avoid and kill tyranny these days is perhaps to invent and switch to a new "own" coded language and ignore all the narrative bombarding us with the globalists' transhumanist propaganda. We can leave them using conventional English as they wish. The more people move to this new, "Dumbledore"-invented coded language, the less power the unelected tyrants will have over us. What do you think?

 

Best,

 

Plamen

 

 

 

 

On Fri, Oct 27, 2023 at 2:16 PM Pedro C. Marijuán <pedroc.marijuan at gmail.com <mailto:pedroc.marijuan at gmail.com> > wrote:

Dear List, (I have just seen Plamen's; I could rephrase some of the below for the sake of the argument, but it would become too long. And about the server--Karl--and also Marcus: yes, something is happening, I cannot access it either. I will check.)

 

Then, regarding the ongoing exchanges on Wisdom, I was reminded of the TURING TEST (from wiki: if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence). The test was applauded or seriously considered decades ago, but now it is just a bygone, obsolete item. Any domestic AI system passes the test. In my case I disliked that test when I first met it (late 70s). I considered it a symbol of the very superficial "theorizing" in those new fields... so I changed gears and finally focused on "natural intelligence". 

 

Regarding wisdom, we take it as an exclusively human quality, and seemingly binary: either yes or no. Humans have wisdom, machines don't. But as in the case of intelligence, it is probably graded. For "formal" intelligence, an IQ gradation was established long ago; not quite perfect, but very widely used everywhere. Then, how could an IQ of wisdom be established? Really difficult... it is the age-old divergence between the analytical and the integrative, the reductionist versus the holistic. 

 

My take is that around Large Language Models a pretty small but noticeable portion of wisdom has been achieved; see for instance the quotation below. I am lightly cooperating in the AI field of "sentiment analysis", and have high hopes that it can contribute to an improved rationalization of human social emotions, the study of which is painfully in disarray in Psychology and Sociology. No wonder the awful mental state of many people in a number of societies... There is a wonderful quotation from philosopher Ortega y Gasset about that (but unfortunately I cannot locate it).

 

All the best--Pedro

Theory of Mind for Multi-Agent Collaboration via Large Language Models. From Huao Li et al., at: https://arxiv.org/abs/2310.10701

 

"In this study, we assessed the ability of recent large language models (LLMs) to conduct embodied interactions in a team task. Our results demonstrate that LLM-based agents can handle complex multi-agent collaborative tasks at a level comparable with the state-of-the-art reinforcement learning algorithm. We also observed evidence of emergent collaborative behaviors and high-order Theory of Mind capabilities among LLM-based agents. These findings confirm the potential intelligence of LLMs in formal reasoning, world knowledge, situation modeling and social interactions. Furthermore, we discussed two systematic failures that limit the performance of LLM-based agents and proposed a prompt-engineering method that mitigates these failures by incorporating an explicit belief state about world knowledge into the model input."

 

 

On 27/10/2023 at 12:36, Eric Werner wrote:

Dear Yixin,

As you know from my different responses regarding Wisdom and Meta-AI (Artificial Wisdom) I am of a rather split opinion: 

On the one hand, the poetic emotional side of me sees the necessary inclusion of an ethics of fairness for all living creatures. I am skeptical, like you, that AI can achieve this consistently. I am worried about the ramifications of using AI systems in a military-governmental decision making process. 

On the other hand, it may well come about that Meta-AI is possible. Such a system poses questions and creates new problems that it then solves. Such a Meta-AI system could rapidly explore different combinations of explicit and implicit theoretical assumptions, leading to new theories about nature and the world. It could then propose new experiments that confirm or disconfirm its theories or hypotheses. It could see long-range relationships, logical and mathematical, across different specialized theories or mental frameworks. Meta-AI is one of the founding cornerstones of General AI. It presupposes that reasoning, and not just parroting, can be learned in some way.

Some more thoughts on Wisdom: 


Human wisdom is distributed and contradictory


*	AI models can contain all of human wisdom - including conflicting Wisdom
*	Conflicting Wisdom:

*	One society's Wisdom may be another society's doom

*	Realpolitik of human wisdom

*	As soon as limited resources come in, we get conflict

*	Imagine 10 people on land that supports 10 people if they all share what they find among each other

*	If they are greedy, it reduces the population
*	It depends on whether they really need 10 to find the food for 10. If five are sufficient to survive on the same land with less stress, then there’s a temptation to get rid of or disadvantage the other five
*	Improved search or intelligence algorithms, whether genetic or soft, can lead to finding more resources
*	Sharing knowledge leads to greater distributed productivity, and more can join the community

*	The life and death struggle

*	Imagine another group of 10 comes in to the same area that supports only 10. Then we get conflict. They may cooperate but half have to die because of limited resources.
*	Same holds for university positions
*	Same holds for limited resources in well-to-do societies versus less able societies
*	Taking advantage of one side's ability against the other

*	Power Creates Laws to Perpetuate Power 

*	Speech is regulated to prevent thought and action that may lead to a change in the status quo of power
*	Servants must be servile 
*	Those in power must pretend to be generous to the extent that the servant does not rebel
*	The good master (wants to be seen as Wise, knowing what is good for the underlings)
*	The parasite must not kill its host, unless or until it can jump to another host
*	A parasite of a parasite leads to a hierarchy of parasites 

*	Limited Resources Disturb the Ideal of Fairness and Absolute Wisdom

*	As soon as limited resources come into play the ideal no longer works
*	The group with more power in the given environment can win the resources
*	With limited resources, there can be no compromise after a certain point of sharing

Thus my ambivalence concerning Wisdom.

 

Best wishes,

 

Eric

 

On 10/25/23 2:12 PM, 钟义信 wrote:

Dear Eric,

 

There are many mysteries remaining in wisdom. This is one of the reasons that the concept of AI does not involve wisdom, and therefore AI is able to solve problems but is unable to define problems. 

 

Wisdom is creative in nature, but AI is not. It is my belief that humans can build AI but cannot build AW (artificial wisdom).

 

Wisdom can only be possessed by humans, not by any machine. Do you agree? Please comment on this point.

 

Best regards,

 

Yixin  

----------

Sent from my mobile device


--------------Original Message--------------
From: "Eric Werner" <eric.werner at oarf.org>;
Sent: Wednesday, October 25, 2023, 7:00 PM
To: "钟义信" <zyx at bupt.edu.cn>;
Cc: "Joseph Brenner" <joe.brenner at bluewin.ch>; "fis" <fis at listas.unizar.es>;
Subject: Re: [Fis] Fis Digest, Vol 105, Issue 12 Human Wisdom vs Meta-Intelligence
-----------------------------------

Dear Yixin, 

 

 

The Relativity and Realpolitik of Human Wisdom:

 

Once we relativize Wisdom to human beings and exclude it from artificial intelligent systems, then wisdom will vary across different human beings. A grandmother may have a different kind of wisdom than a grandfather. It will vary across cultures. 

 

Another problem is that wisdom can be for the good of all or for the good of a few if it’s restricted or if it’s malevolent. 

 

“Knowing how” is a kind of strategic wisdom that can be transferred from one person to another, or from one AI model to another AI model, robot, or human. 

 

Setting the boundary between what is beneficial for all human beings and what is not may in itself create an inherent contradiction. This is especially so if there is conflict between groups of humans, or animals, or even AI systems. 

It may be that a meta-AI system could be better at differentiating in a neutral way between human needs because of its inherent nonhuman neutrality. This, of course, has its problems as well. Indeed, the very creation of the AI model may be beset with bias, as is seen in the conflicts between leftist and rightist AI models. Determining neutrality may be impossible in a social setting of diverse beliefs. 

 

What may be perceived as good for one group of humans may be disastrous for another group of human beings. This is seen clearly in the relationship between humans and animals where what is good for humans is not always good for say a pig or a cow or a duck or a chicken. 

 

Thus even the notion of being good for all human beings may be beset with problems that are potentially insurmountable, especially in the political world. 

 

So that is the realpolitik and relativity of human wisdom.

 

Best wishes,

 

Eric 

 

 

Sent from my iPad





On Oct 25, 2023, at 12:25 PM, 钟义信  <mailto:zyx at bupt.edu.cn> <zyx at bupt.edu.cn> wrote:

 

Dear Joe, Eric, and colleagues,

 

For the simplicity of my reply, I will just emphasize one point: all study carried out by humans should be based on a human-centered stance. Otherwise, humans' research would lead to the extinction of humans themselves. That would be meaningless.

 

Best regards, 

----------

Sent from my mobile device


--------------Original Message--------------
From: "Joseph Brenner" <joe.brenner at bluewin.ch>;
Sent: Wednesday, October 25, 2023, 5:17 PM
To: "Eric Werner" <eric.werner at oarf.org>; "钟义信" <zyx at bupt.edu.cn>; "fis" <fis at listas.unizar.es>;
Subject: Re: Re: [Fis] Fis Digest, Vol 105, Issue 12 Human Wisdom vs Meta-Intelligence
-----------------------------------

Dear Yixin, Dear Eric, 

 

I very much welcome your complexification of the notion of wisdom/intelligence. First of all, it eliminates the flavor of omnipotence which accompanies some discourse on Artificial Intelligence.

 

One now needs to define further the characteristics of Human Centered Wisdom (what Yixin has been talking about all along) so that the same mistakes are not made in discussing Artificial Human Centered Wisdom.

 

My suggestion would be to look at the kinds of logic ("Eastern" or "Western") that are most applicable to/in the two domains. Are we sure, however, that all our objectives can be achieved by reference to problem solving? Of course, living with unsolved problems simply carries out an additional iteration or recursion step, but it might be worthwhile if this were recognized explicitly.

 

Eric concludes "It seems AHCW is more restrictive than AMI". I agree, but suggest it should be said that AHCW is also more restrictive than HCW.

 

Best wishes,

Joseph 

----Original Message----
From : eric.werner at oarf.org <mailto:eric.werner at oarf.org> 
Date : 24/10/2023 - 10:54 (E)
To : zyx at bupt.edu.cn <mailto:zyx at bupt.edu.cn> , fis at listas.unizar.es <mailto:fis at listas.unizar.es> 
Subject : Re: [Fis] Fis Digest, Vol 105, Issue 12 Human Wisdom vs Meta-Intelligence

Dear Yixin,

Just had some clarifying thoughts while taking a shower (embodied intelligence 😉)

You state: "In the context of technical study, wisdom means the ability to define the problem, which should be good for all humans if solved, and intelligence means the ability to solve the problem defined by wisdom."  

To clarify:

1.	 Let me define the ability to define the problem as Meta-Intelligence MI
2.	And define the ability to define the problem, which should be good for all humans if solved, as Human-Centered-Wisdom HCW
3.	Define intelligence as the ability to solve the problem defined by Meta-Intelligence or Human-Centered-Wisdom 

Under these definitions, Artificial Human Centered Wisdom AHCW will be a different challenge than Artificial Meta Intelligence AMI 

Given the right technology AMI may well be achievable and may give different answers than Artificial Human Centered Wisdom, if the latter is even achievable.

I think this clarifies the differences in understanding of wisdom and the capacity to intelligently solve the problems posed by the different types of Wisdom. It seems AHCW is more restrictive than AMI. 

Best wishes,

Eric 

On 10/24/23 9:26 AM, Eric Werner wrote: 

Dear Yixin, 

I am getting a better understanding of what you mean by wisdom. Thank you for your patience! 

This morning I had some thoughts described below. 

You state: "In the context of technical study, wisdom means the ability to define the problem, which should be good for all humans if solved, and intelligence means the ability to solve the problem defined by wisdom." 

In mathematics and other sciences, there is the difference between proving theorems and discovering a theorem. Many bright mathematicians make their name by proving theorems. Others like Gödel in his proof of the incompleteness theorem (inherent limits of the axiomatic method) linked together very different concepts-methods (Cantor's diagonal method and arithmetization) to come up with a wonderful result.  Proving is commonplace compared to coming up with a concept. 

Missing from the parrot-like LLMs is true reasoning and questioning. 

However, I am not convinced that an artificial intelligent-rational system would not be able to formulate its own questions, create new concepts, and find new methods of solving its own conundrums. 

Here are the other earlier thoughts of this morning:


Can wisdom be learned?


*	Artificial wisdom AW
*	Social wisdom SW
*	Artificial Social Wisdom ASW
*	Embodied AI, Embodied AW
*	Artificial Ethics AE
*	Human wisdom HW as generated by experience

*	Rare 
*	There but for the grace of God go I
*	We often cannot understand someone until we are in their shoes and experience their situation 
*	Examples: Growing old, living in a different country or culture or region, learning or knowing a different subject, being in a war zone 
*	You have to know two or more subjects to interrelate them 

*	Artificial rationality AR
*	Understanding requires

*	Information 

*	State 
*	Intention-Strategic
*	Value - Emotional Info

*	Operators 

*	Transform information 
*	This gives the dynamics to rational thought

*	Ability or capacities 
*	Intelligence 

*	Circular?? Rational inference
*	Questioning and reasoning in self dialogue

*	Can intelligence be learned?

*	Seems to require basic competencies-capacities
*	Reasoning 
*	Social 
*	Emotional 
*	Wisdom (circular)

*	How organized is the brain?

*	Inherent competencies 
*	Modular capacities of the brain

*	Linguistic, visual, auditory, semantic, pragmatic, motor

*	Wisdom Requires 

*	Experience
*	Capacities 
*	Reasoning 

*	Dynamic
*	Self reflection 

 

Hope this clarifies my thoughts somewhat. 

In summary, I am inclined to view Artificial Wisdom AW as a very real possibility. It is an open question whether the parrot-like LLMs will ever achieve AW, but a hybrid might. 

Kind regards,

Eric

On 10/24/23 3:58 AM, 钟义信 wrote: 

Dear Eric, 

 

I am also very worried about the military uses of AI. This is an issue of technical ethics and needs strong cooperation among all governments.  

 

We, as scientists and professors, have the responsibility to promote the study of technical ethics in AI. At the same time, we have to pay more attention to the technical study of AI itself. 

 

I agree with you on the characteristics of wisdom: fairness, kindness, love, for all humans, for all life, and, all in all, for the living and development of all people. 

 

In the context of technical study, wisdom means the ability to define the problem, which should be good for all humans if solved, and intelligence means the ability to solve the problem defined by wisdom. 

 

Keeping in mind the difference between wisdom and intelligence mentioned above, it is believed that intelligence can be simulated by machines whereas wisdom cannot. In other words, AI cannot be creative, in the sense of being unable to define a problem that would be good for all humans if solved. I wonder if you agree or not. 

 

Best regards, 

 

 



Prof. Yixin ZHONG


AI School, BUPT 

Beijing 100876, China 

 

 

  

  

  

------------------ Original ------------------ 

From:  "Eric Werner" <eric.werner at oarf.org <mailto:eric.werner at oarf.org> >; 

Date:  Mon, Oct 23, 2023 05:33 PM 

To:  "钟义信" <zyx at bupt.edu.cn <mailto:zyx at bupt.edu.cn> >; "fis" <fis at listas.unizar.es <mailto:fis at listas.unizar.es> >; 

Subject:  Re: [Fis] Re: Fis Digest, Vol 105, Issue 12 

  


 


Dear Yixin, Ma 

 

Thank you all, Krassimir, Marcus, Pedro, Yixin, for your thoughtful contributions thinking about wisdom, human nature, and AI. Recently, seeing the uses of AI in weapons systems already being designed and produced by corporations that sell to governments made me hesitate about what we are doing. We need a deep discussion about artificial intelligence in a social, industrial, governmental, and military context. 


AI in love and war


We walk lightly along the edge of a deep ravine, 

where can be seen 

the results of passions played. 

Oh, I loved too much, 

and by such, by such 

is happiness thrown away. 

I had wooed not as I should 

a creature made of clay 

When the angel woos the clay 

he'd lose  his wings

at the dawning of the day 

 

(Adapted from a poem 'On Raglan Road' by Patrick Kavanagh) 

*	Wisdom in the wide human sense

*	Fairness
*	Kindness
*	Love
*	For all humans 
*	For all life

*	Military uses of AI

*	Goal directed
*	Antagonistic
*	Cooperative
*	Destructive
*	Murderous 
*	Anti-human
*	Financially motivated

*	An AI model is like a child

*	It can be molded to the wishes of the user
*	At the same time, it’s like a mother that responds to every wish
*	It is an all knowing God
*	Connected to a robotic system, it can heal, but it can also murder
*	AI is a child of humankind
*	All too human
*	A savior and genocidal

*	What will we do?

Kind regards, 

 

Eric 

 

Sent from my iPhone 

On 10/22/23 9:43 AM, zyx at bupt.edu.cn <mailto:zyx at bupt.edu.cn>  wrote: 

Dear Eric, 

 

You proposed a number of points which are interesting and important. Thank you very much! 

 

I would like to discuss at least some of them, not now but a few days later, because my notebook was troublesome the day before yesterday. 

 

Best wishes, 

 

Yixin 





Sent from my phone 



-------- Original Message -------- 
From: Eric Werner <eric.werner at oarf.org> 
Date: Thursday, October 19, 2023, 5:56 PM 
To: 钟义信 <zyx at bupt.edu.cn>, fis <fis at listas.unizar.es> 
Subject: Re: [Fis] Fis Digest, Vol 105, Issue 12 

Dear Yixin,

Can you be more specific about what you mean by "change the paradigm used in AI"? It might help to give a specific example. 

*At present AI systems certainly behave as if they are goal directed. 

*AI systems appear to have wisdom in that they can propose wise courses of action

* What do you mean by "pure formalism"?  It seems one of the powers of formalism is to understand AI and human intelligence. 

* It seems AI systems exhibit human-like wisdom when they offer advice or guide the actions of a virtual assistant or self-driving car. They react based on the circumstances and goals of the other, at least to an extent. 

* Why can't a machine understand human goals and purposes if it gains a model of those from human data? 

* Why can't an AI system have intentions? 

My overall problem is understanding your specific criticism of the present AI paradigm. This notion seems to me to need a clearer definition. 

How would you overcome the present AI paradigm, and what specifically is different when you want to "change the paradigm used in AI"?

This is not a criticism; it is a real question in trying to understand you. At present I just don't see the difference between the present AI paradigm and your new AI paradigm. 

Best wishes,

Eric 

 

 

On 10/19/23 8:48 AM, 钟义信 wrote: 

Dear Krassimir, Dear Eric, and Dear Colleagues, 

 

The discussion is going on well thanks to all your efforts. 

 

Here are a few points I would like to mention (or re-mention). 

 

(1) The purpose of the "Declaration on Paradigm Change in AI" is to make an appeal for changing the paradigm used in AI.  

 

(2) There may be different understandings of the concept of paradigm. However, the paradigm of a scientific discipline has been re-defined as the scientific worldview and the associated methodology, because the scientific worldview and its methodology, taken as a whole, are the only factor that can determine whether a scientific discipline needs a "revolution" (Kuhn's language). 

 

(3) The major result of "paradigm change in AI" is to change the methodology used in AI, including the principles of "pure formalism" and "divide and conquer". This is because the former principle leads to ignoring meaning and value and thus to the loss of understanding ability and explaining ability, while the latter leads to the loss of a general theory for AI. Note that "no explaining ability" and "no general theory" are the most typical and most worrying problems of current AI. 

 

(4) There is a difference between human intelligence and human wisdom. One of the functions of human wisdom is to find the to-be-solved problem, which must be meaningful for the human purpose of improving living and development. The function of human intelligence, in turn, is to solve the problem defined by human wisdom. 

 

(5) Human intelligence can be simulated by machines. But human wisdom cannot be simulated by machines, because a machine is a non-living being that has no purpose of its own and cannot understand human purposes. No purpose means no wisdom. 

 

I wonder if you agree or not. Comments are welcome! 

 

Best regards, 

 

 

 



Prof. Yixin ZHONG


AI School, BUPT 

Beijing 100876, China 

 

 

  

  

  

------------------ Original ------------------ 

From:  "Krassimir Markov" <itheaiss at gmail.com <mailto:itheaiss at gmail.com> >; 

Date:  Thu, Oct 19, 2023 03:32 AM 

To:  "fis" <fis at listas.unizar.es <mailto:fis at listas.unizar.es> >; 

Subject:  Re: [Fis] Fis Digest, Vol 105, Issue 12 

  

Dear Yixin, Eric and FIS colleagues, 

Let me present some thoughts about 

The “Intelligence” Paradigm

For those who are not familiar with the concepts of "paradigm" and "paradigm shift", I would recommend texts from Wikipedia that explain it clearly enough.

I myself maintain a neutral position in the dispute between Popper and Kuhn regarding the development of science. Both theses have their grounds, but at different levels and stages. In fact, in this case, the law of quantitative accumulation, which leads to qualitative changes, applies. Obviously, in a number of cases the paradigm shift happens in leaps and bounds, while in others it happens smoothly and barely perceptibly.

For example, the accumulation of sufficient observations and evidence regarding the shape of the earth required a shift to a new paradigm: from the "Earth is flat" paradigm to the "Earth is not flat" paradigm.

Sometimes opposing paradigms can coexist, not negating each other, but complementing each other. For example, this is the case with Euclid's fifth postulate (the parallel postulate).

The postulate has long been considered self-evident or inevitable, but no proof was ever found. Eventually, it was discovered that reversing the postulate gave valid, albeit different, geometries. A geometry where the parallel postulate does not hold is known as non-Euclidean geometry.

With regard to the paradigm of "intelligence" we have a similar situation. We have at least two opposing paradigms based on two opposing postulates.

The first, let's call it the "flat intelligence postulate", was well articulated by Yixin in his post:

"Intelligence is the ability to solve problems, but not the ability to detect and define problems, the latter of which is one of the faculties of wisdom."

The second, let's call it the "non-flat intelligence postulate", will sound unifying: "Intelligence is both the ability to solve problems and the ability to detect and define problems" (Eric), but in different directions in the hierarchy of intelligences (KM)". This is how we arrive at the idea of cybernetic systems, where there is a controller and a controlled, but the controller is connected to the environment from which it receives controlling influences and is, in practice, both "controller" and "controlled", but in different aspects of the system.

 



 

 

 

To be continued ...

 

 

On Wed, 18 Oct 2023 at 15:07, <fis-request at listas.unizar.es <mailto:fis-request at listas.unizar.es> > wrote: 

Send Fis mailing list submissions to 
        fis at listas.unizar.es <mailto:fis at listas.unizar.es>  

To subscribe or unsubscribe via the World Wide Web, visit 
        http://listas.unizar.es/cgi-bin/mailman/listinfo/fis 
or, via email, send a message with subject or body 'help' to 
        fis-request at listas.unizar.es <mailto:fis-request at listas.unizar.es>  

You can reach the person managing the list at 
        fis-owner at listas.unizar.es <mailto:fis-owner at listas.unizar.es>  

When replying, please edit your Subject line so it is more specific 
than "Re: Contents of Fis digest..." 
Today's Topics: 

   1. Re: Paradigm AI - I guess we call it Genius (Eric Werner) 



---------- Forwarded message ---------- 
From: Eric Werner <eric.werner at oarf.org <mailto:eric.werner at oarf.org> > 
To: Karl Javorszky <karl.javorszky at gmail.com <mailto:karl.javorszky at gmail.com> > 
Cc: "钟义信" <zyx at bupt.edu.cn <mailto:zyx at bupt.edu.cn> >, fis <fis at listas.unizar.es <mailto:fis at listas.unizar.es> > 
Bcc: 
Date: Wed, 18 Oct 2023 14:07:13 +0200 
Subject: Re: [Fis] Paradigm AI - I guess we call it Genius 

Dear Karl,

Thank you for bringing this important point to my attention. Here are some thoughts:


I guess we call it Genius 


*	Difference between generating and understanding or reading
*	Super intelligence requires genius or generational understanding
*	Generative intelligence
*	Creative intelligence
*	Compositional intelligence
*	Formative intelligence
*	Evolutional intelligence
*	Restricting intelligence to problem-solving dismisses creative acts of composition in science and the arts
*	Think of Heinz Kohut’s formation of the self in psychology versus Freudian reactive psychology
*	It’s the difference between discovering a theorem, and proving the theorem
*	It’s the difference between school-boy problem-solving, and Newton
*	Some psychologists think of intelligence in relationship to testing people for their ability to cope in educational institutions. They want to see if they are college material or not. 
*	With future AI systems we are talking about Newton-level intelligence, not college-level intelligence
*	Kantian synthetic intelligence 
*	We had better be ready for that! If not, we have some real problems. 
*	That is why making these systems social and cooperative is so essential.

We may quickly reach a point where the compositional, creative intelligence of artificial models is so powerful that we will not be able to understand them. Not just how they work; we already don't understand how they work now. But their reasoning and new outputs, such as, for example, mathematical insights. Imagine a system that can reason and develop 2,000 years of mathematics in a few minutes. It is precisely this overarching linking of knowledge that makes for real intelligence such as that of Leibniz or Newton. The old-school model of psychological testing of intelligence uses a definition of intelligence that is too limiting for AI models. AI models are not your everyday student. 

Best wishes,

Eric

On 10/18/23 12:59 PM, Karl Javorszky wrote: 

Dear Eric,

 

Your statement: „The essence of general intelligence is the ability to not only solve an externally given problem but to be creative and find and define problems” is at variance with the accepted delineations of concepts in the trade of psychology. Rohracher [1] defined it in 1969 (and to my knowledge, no one has disputed this wording): “Intelligence is the degree of efficiency [of the CNS] while solving new problems.”

What you refer to is subsumed variously under: creativity, alertness, curiosity, vitality, spontaneity. 

There is consensus in the epistemology of psychology that there can exist no final, conclusive, all-encompassing theory of personality (in which intelligence and adaptability/curiosity would or would not be separated as concepts), because if such an ultimate, final, true theory of personality existed, it would negate the axiomatic rule that one can always learn something new, at least about oneself. There is, by definition, no end to introspection and philosophy. One can always come up with a new theory of personality, and one cannot rule out that a new theory of personality would be more reasonable, truer, and more conclusive than anything that has existed before.

Psychologists see theories about mind and soul in the same way believers see their God. It is impossible to recognize all features of God, let alone to insist that one has a correct reading.

So, if you decide not to distinguish between efficiency of solving new problems and ability and tendency towards finding new problems to solve, you are free to do so. Established use of words splits the two personality traits.

I have prepared a statement about the key word “otherwise”. The word is needed to scale the efficiency of mental processes while solving new problems (aka ‘intelligence’) by scaling the diversity/similarity properties of alternatives. To be able to efficiently choose between alternatives, one needs alternatives that are different from each other. The task is to find such collections of symbols that are alternatives to each other, not through machinations by humans, but as members of a symbol collection. This task is not easy to solve while using the symbol set in the traditional, Sumerian ways only. One needs to assume that symbols have their own properties, by their nature, immanent to them. 

Due to the two-messages-per-week rule, the contribution shall come next week.

Karl

[1] Rohracher, H.: Einführung in die Psychologie, Urban & Schwarzenberg, Wien 1951

 

Am Mi., 18. Okt. 2023 um 12:01 Uhr schrieb Eric Werner <eric.werner at oarf.org>: 

Dear Yixin,

Thank you for your comments! 

To your point (2): The essence of general intelligence is the ability not only to solve an externally given problem, but to be creative and to find and define problems; for example, given a knowledge of mathematics and physics and data, to generate new mathematics and new insights into the nature of the world. 

To your point (3): Biotechnology and AI are somewhat independent fields. AI can help genome research and decoding genomes. But once genomes are decoded, that information can be used to construct more general AI models. When I say "architecture" I mean the architecture of the human brain encoded in the human genome. This architectural information can be used to guide the structuring of AI models to be more potent and more human-like. And AI may well help in the process of structuring its future versions. That is what I meant by self-referencing. 

To the more general point, formalization of social information can help guide the improvement of AI models to be more social and to have greater abilities in an AI-robot social setting. 

All the best,

Eric 

On 10/18/23 9:16 AM, 钟义信 wrote: 

Dear Eric, 

 

Thank you for the interesting talk on "Paradigm AI" from which I learned a lot. 

 

As a discussant, may I propose some of my understanding. Comments are welcome. 

 

(1) I appreciate your idea, saying that "the physics paradigm PPD does not fit well with the AI paradigm" and "the information paradigm PID is a better fit". This is the valuable common basis, between you and me, concerning PPD, PID, and AI. 

 

(2) How to define the concept of intelligence? This is a very difficult problem. To my own understanding, the following short statement may serve as one of the candidates: Intelligence is the ability to solve problems but not the ability to find and define problems, the latter of which is one of the abilities of wisdom. 

 

(3) The paradigm for AI can be used as the paradigm for biotechnology, with certain simplifications and specializations. This judgement is not based on their "structure/architecture", but on their "information function", which is the basic function in both AI and biotechnology: to seek opportunities for "living (or solving the problem)" and to avoid "danger (or failing at problem solving)". 

 

Once again, comments and criticisms are most welcome. 

 

 

Best regards, 

 

 



Prof. Yixin ZHONG


AI School, BUPT 

Beijing 100876, China 

 

 

  

  

  

------------------ Original ------------------ 

From:  "Eric Werner" <eric.werner at oarf.org <mailto:eric.werner at oarf.org> >; 

Date:  Tue, Oct 17, 2023 02:32 AM 

To:  "fis" <fis at listas.unizar.es <mailto:fis at listas.unizar.es> >; 

Subject:  [Fis] Paradigm AI 

  

Here are some brief thoughts on Paradigms and AI, in response to what I presume was written by Yixin Zhong, since I cannot read Chinese. 


Paradigm AI


*	I agree that the physics paradigm PPD doesn’t fit well with the AI paradigm, and that the information paradigm PID is a better fit
*	Artificial intelligence systems don’t necessarily learn from human beings. In unsupervised learning they learn from data and not from humans.
*	The problem then really becomes how to define what intelligence is: which of the following is it?

*	Rational inference
*	Summarizing large amounts of text and data
*	Making new predictions based on scientific theories and available data
*	Developing new theories that explain the data in a more succinct way, and making new predictions
*	Developing new technologies independently of human input
*	Planning and executing the actions and intentions of a robot
*	Having social intelligence
*	Being cooperative with a human being in achieving a task 
*	Interrelating two disciplines, such as physics and mathematics, to make new discoveries
*	Understanding genomes in ways that human beings cannot
*	Designing new organisms by designing their genomes

*	I agree that the language of a new paradigm such as artificial intelligence will develop slowly, step by step, in conjunction with its use, both conceptually and experimentally.
*	In a new paradigm an entire new language is created as the paradigm is developed
*	The language evolves in concert with a new ontology suggested by the paradigm

*	It is an ontology of objects, technologies, actions, and strategies

*	What will be particularly interesting, is the linking of the paradigm of artificial intelligence with the paradigm of biotechnology

*	Biotechnology and AI will truly link the human brain with the artificial brain
*	The genome of the natural brain will be reflected in the architecture of the artificial brain
*	Hence by using AI to decode the genome of the natural brain, it will be self-reflected in the design of the developing artificial brain 
*	This will bring unprecedented social and rational functionality to the artificial brain 
*	Note that the biotech-genome paradigm also is founded on the information paradigm.

Thank you Yixin Zhong for your input and emphasizing the intimate relationship of information and AI paradigms. 

Best wishes,

Eric 

-- 
Dr. Eric Werner 
Oxford Advanced Research Foundation 
https://oarf.org






