[Fis] CONCLUDING THE SESSION

钟义信 zyx at bupt.edu.cn
Wed Dec 6 01:46:30 CET 2023


Dear Marcus, my friend,


I would like to provide you with some of the materials for the "further discussion". However, the FIS discussion platform places a very strict limit on the capacity (or length) of materials. Therefore, I can only provide the materials through ordinary e-mail, rather than the FIS platform. Sorry for that.




Best regards,

YIXIN ZHONG
Prof. at AI School, BUPT
Beijing 100876, China

------------------ Original ------------------
From:  "MarcusAbundis"<55mrcs at gmail.com>;
Date:  Mon, Dec 4, 2023 05:08 PM
To:  "钟义信"<zyx at bupt.edu.cn>; 
Cc:  "Eric Werner"<eric.werner at oarf.org>; "pedroc.marijuan"<pedroc.marijuan at gmail.com>; "fis"<fis at listas.unizar.es>; 
Subject:  Re: [Fis] CONCLUDING THE SESSION

 

Dearest Yixin,


Thank you for your suggestion of 'further discussion' — to be honest, I am unsure how to respond. I already noted specific material (missing from this exchange) that I think could lead to significantly expanded discussion . . . but that material must be provided by you and/or Eric (as session leaders). I would happily comment on that material for 'further discussion'.


Beyond this, my 28 November post to Pedro included my own initial thoughts on a specific AI paradigm shift/approach/method. I continue to develop this material as the next step in building on an 'a priori theory of meaning' — I copy below my latest *rough* notes. So THAT is *my* contribution . . . but this contribution (of mine) incited no serious discussion from you or others. I accept this typical non-response as an FIS fact, and leave it at that, without complaint. Still, if YOU wish to comment on my rough notes, I would happily follow your lead (it is YOUR session). Otherwise, I am not sure what added material we should use to support `further discussion'.


Lastly, respectfully, I remain unsure of how to share material with you (in the PRC). I am sensitive to 'censorship issues', but I am unaware of acceptable (to PRC) ways of sharing material with you and others, such as papers (currently on Google Drive) or videos (currently on YouTube). I have asked about this in the past, but I have received no guidance. Any thoughts you have to offer are appreciated.


Sincerely, 
Marcus
===
ENTROPY— A Simplified Scientific Base for Super-Intelligence
by Marcus Abundis, Bön Informatics
DRAFT Paper — ver. 4Dec23
(? May 2024 NeurIPS DEADLINE: 10pt, 8 pages max, + 1 reference page)

ABSTRACT: This paper poses a top-down, science-based approach to Super-Intelligence, versus more typical complex/fragmented anthropic and statistical bottom-up methods. It uses Shannon’s signal entropy, Boltzmann’s thermodynamic entropy, and Darwin’s evolution by means of natural selection to frame Super-Intelligence.

INTRODUCTION — base issues, key terms, central goal, and method
Pondering the advent of Super-Intelligence (SI) raises many issues. First, defining human intelligence (HI) is itself quite daunting, with many roles seen in diverse individuals and cultures across the globe \cite{gardner}—often blind to other `intelligences'. Second, a core from which SI arises must be named—with `general intelligence' (GI) as a likely prelude. But if defining HI is already so elusive, how do we hope to define `cosmic GI'? Third, SI risks must be noted \cite{bostrom}. These are a few issues raised in exploring SI. Nick Bostrom defines SI as `any intellect that greatly exceeds the cognitive performance of humans in virtually all domains', which does not clarify the matter but poses a base proposition. It omits needed detail on SI's advent, which this study targets.

This paper names a scientific base for general and super intelligence, to also frame related risks and challenges. But as many mixed SI, GI, HI, and `base intelligence' views already exist, I first define my terms. These initial terms are expanded over the paper's course.

	Key Terms:
Foremost, simply defining GI is a crucial first step to mark `first principles’, without which this and similar studies cannot truly proceed. As such \ldots

General Intelligence (GI) — is knowledge of how things generally work and fall-apart, `material functioning' in the cosmos, sans `logical gaps'. The cosmos and GI hold myriad direct contiguous functions. But science infers narrow measurable-and-repeatable roles, omitting `uncontrolled variables' for repeatable and verifiable results. GI is the ideal science pursues as Natural Philosophy. But for now, GI marks nearly-fanciful perfect functional knowledge of the cosmos—essentially, Kant's `das Ding an sich' \cite{kant}.

\begin{quote}
The price of understanding is always abstraction, neglecting most of a staggeringly complex world to understand one tiny fragment . . . But whenever a theory is successful, it is also easy to forget its limitations. \cite[p.~23]{wagner14}
\end{quote}

Super-Intelligence (SI) — is knowledge of how things might work and fall-apart, as `creative functioning'. Creative knowledge first appears as partial GI, which SI then grows via latent functions that `test' GI rules. For example, one may imagine the Sun swelling to engulf the Earth as a future event, or see birds in flight transcribed as a 747 jumbo jet. SI surpasses manifest material reality, pointing toward future (often human) material possibilities. `Regular science' offers no such formal-creative narrative.

Knowledge — for GI/SI is `a grasp' of direct cosmic events, by indirect `referential' means: 1) direct events held in an abstract informatic form, 2) often processed toward targeted effects, 3) materially tested in environs—a base `agent' stimulus-process-response. Conversely, non-agent particles, atoms, etc. are energy-matter directly driving environs, which agents must survive. Ideally, agent references (genomic code, mind as memory, etc.) are jointly processed. For example, genomic shifts may yield `longer legs', but one must instinctually/willfully use the `new legs' for new effects/knowledge. In turn, that joint work frames an agent's sensorium and afforded `habitats'—Kantian \cite{kant} `bounded phenomena' as the root of all knowledge.

Human Intelligence (HI) — mixes instinct, thought, myth, and fact, with creative-to-dull and solitary-to-social traits, alongside GI and SI clues, all making HI hard to typify. But it also implies adaptive plasticity, where agents evince partial GI as `survival', via references. The functional effectiveness-and-efficiency of agent references sets one's habitat. Next, extending one's references (via genomic, mind, or SI `tools') may also extend one's habitat—driving a so-called Anthropocene that, in fact, typifies currently mixed HI.

How humans extend their sensorium/habitat (via instinct, thinking, myth, fact, etc.) is a fascinating topic \cite{brown91}, but too erratic for framing GI/SI. HI is thus seldom referenced herein, departing from most other AI views, as they are essentially anthropic in character.

	Central Goal:
The above implies that better knowledge maps come closer to full GI and SI, making `better reference maps' this paper's central goal. Here, all agents map partial GI as `survival', where ruin is the sole alternative—with humans being `not so different' from other agents. But differing degrees of `how' and `how many' references one maps-and-uses for `adaptive intelligence' have many facets—with humans differing greatly from other agents.

As such, `better SI reference maps' is too simplistic a goal, as it involves at least two facets: 1) base intelligence meets most regular functional needs, while 2) adaptive intelligence abides eternal cosmic shifts. `Better mapping of regular and creative functioning' thus comes closer to usefully detailing our full GI and SI central goal. But a regular/creative (`science contra art') split view leaves us with an antithetical `paradox'. Hence, we must ask: what one method can we use to resolve this dualist split, for a unified GI/SI approach?

	Central Method:
`One method' to cover all GI/SI goals starts with the above sense of `how things generally work and fall-apart', respectively: Shannon's signal entropy and Boltzmann's thermodynamic entropy.
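For orientation, the two entropies the method invokes have standard textbook forms (these formulas are not stated in the draft itself and are added here only as a reference point):

```latex
\[
  H(X) = -\sum_{i} p_i \log_2 p_i
  \qquad \text{(Shannon signal entropy, in bits)}
\]
\[
  S = k_B \ln W
  \qquad \text{(Boltzmann thermodynamic entropy)}
\]
```

where $p_i$ is the probability of symbol $i$ in a signal source, $k_B$ is Boltzmann's constant, and $W$ counts the microstates of a system. Roughly, and in the draft's own terms, Shannon's measure grades ordered `working' signal relations, while Boltzmann's grades dispersal, or how things `fall apart'.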



FURTHER DETAIL IS ALREADY COVERED IN THE EARLIER — 'A Simplified A Priori Theory of Meaning'.
 