[Fis] [covid-19-therapy] More evidence for coming AI threats

Dr. Plamen L. Simeonov plamen.l.simeonov at gmail.com
Fri Mar 31 11:53:41 CEST 2023


The last word is the most characteristic.
If diversity of computing architectures and operating systems were embedded
as a regulation requirement, such machine pandemics would not happen.
Is it really so important that all systems should interoperate and "feel"
their neighbourhood all the time, and not just on demand using standardized
safe protocols?
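The monoculture point can be made concrete with a toy sketch (platform names and shares below are hypothetical, purely for illustration): an exploit against one platform compromises every host running it, so the damage of one "machine pandemic" is capped by that platform's share of the population.

```python
# Toy model of the monoculture argument: an exploit against one platform
# compromises every host running it, so total damage equals that
# platform's share of the host population.
def compromised_fraction(platform_shares, exploited):
    """Fraction of all hosts compromised by an exploit against one platform.
    `platform_shares` maps platform name -> fraction of hosts running it."""
    return platform_shares.get(exploited, 0.0)

monoculture = {"os_a": 1.0}
diverse = {"os_a": 0.4, "os_b": 0.35, "os_c": 0.25}
print(compromised_fraction(monoculture, "os_a"))  # 1.0 -- everything falls
print(compromised_fraction(diverse, "os_a"))      # 0.4 -- damage is capped
```

Under a diversity requirement no single exploit reaches the whole population, which is exactly why biological pandemics are slowed by genetic variation.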

On Fri, Mar 31, 2023 at 9:44 AM Keith Henson <hkeithhenson at gmail.com> wrote:

> https://en.wikipedia.org/wiki/SQL_Slammer
>
> https://www.wired.com/2003/07/slammer
>
> Slammed! | WIRED
>
> Yet it started with a single killer packet. The tiny worm hit its
> first victim at 12:30 am Eastern standard time. ... was doubling every
> 8.5 seconds. ... in a single, one-way packet. Microsoft's ...
>
> 8.5 seconds.
>
> Best wishes,
>
> Keith
>
> On Thu, Mar 30, 2023 at 11:11 PM Keith Henson <hkeithhenson at gmail.com>
> wrote:
> >
> > I don't know if you are familiar with
> > https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
> >
>
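The 8.5-second doubling figure quoted above implies startling time scales. A minimal sketch of the arithmetic, assuming idealized exponential growth (the ~75,000-host target is the commonly cited Slammer infection count, not a number from this thread):

```python
# Idealized worm spread at a fixed doubling time: N(t) = N0 * 2**(t / T).
import math

def time_to_infect(targets, doubling_time_s=8.5, initial=1):
    """Seconds for an idealized worm, doubling every `doubling_time_s`
    seconds, to grow from `initial` to `targets` infected hosts."""
    return doubling_time_s * math.log2(targets / initial)

# One infected host reaching ~75,000 vulnerable servers:
print(round(time_to_infect(75_000)))  # about 138 seconds, roughly 2.3 minutes
```

Real spread flattens once the vulnerable population saturates, which is why Slammer's growth stalled within minutes rather than continuing to double, but the early phase is far too fast for any human response.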

No, but I know Bostrom's book "Superintelligence". I think the classic AI
reference is Asimov's "I, Robot", with its clearly defined moral rules
for AI.
The other derivatives of the so-called "strong" and "weak" AI lines came
later, with the debates on the works of other AI classics like Minsky,
Chomsky and Searle, who naturally extended Turing's machine and von
Neumann's cellular automata paradigms.

> > Recent concerns here:
> >
> >
> https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
> >
>

I see. Interesting, but still quite theoretical. Nevertheless, it's good to
consider these concerns.
One particular phrase in this reference about AGI lethality rings a bell
with me:

*AGI will not be upper-bounded by human ability or human learning speed.
Things much smarter than human would be able to learn from less evidence
than humans require to have ideas driven into their brains*; there are
theoretical upper bounds here, but those upper bounds seem very high.

What does "much smarter than human" mean here? Do we hesitate to make
decisions because we have "slow" brains, or because we weigh and wait for
more evidence in order to make the correct decision? Just imagine a
fast-thinking robot *alone* on duty at the red nuke button. Would you
trust such a system to take care of your safety without a human
supervisor? Don't we also learn from "less evidence"? It depends on what
task you have for the robot. Speaking about AGI in general does not make
sense.


> > Years ago I contributed to this with a warning that you did not want
> > to make an AI by blind copying a human brain.  Brains have
> > psychological mechanisms that can be turned on by environmental
> > conditions.
> > Scarcity, for example, turns on behaviors leading to war.
>

This is important. But scarcity of what? Information?

> >
> > An AI that went to war with the human race because it saw scarcity in
> > the future could be a total disaster.
>

You probably mean "resources" here. Either way, it will be a disaster if
such automation is assigned to a bunch of circuits alone.

Best,

Plamen

>
> > Best wishes,
> >
> > Keith
> >
> > On Thu, Mar 30, 2023 at 6:47 PM Dr. Plamen L. Simeonov
> >
> > <plamen.l.simeonov at gmail.com> wrote:
> > >
> > > Thank you Paul for this response.
> > > I'll comment on it shortly. Brian Ford can ignore my opinion.
> > > If he wishes, he can leave the COVID-19 Therapy group.
> > > I am only its moderator who contributes from time to time.
> > > Of course, I also take care of the content/contributions, incl. my own.
> > > In my view this one about AI + gene therapy + chemtrails + nuke
> missiles + .....  is serious.
> > >
> > > On Fri, Mar 31, 2023 at 12:58 AM Paul Werbos <paul.werbos at gmail.com>
> wrote:
> > >>
> > >>
> > >> On Thu, Mar 30, 2023, 2:17 PM Dr. Plamen L. Simeonov <
> plamen.l.simeonov at gmail.com> wrote:
> > >>>
> > >>> BRIAN J. FORD, YOU CAN SIMPLY IGNORE WHAT COMES NEXT BELOW AS WHITE
> NOISE.
> > >>
> > >> It demonstrates confusion but it  us not white noise and not safely
> ignored. More like a bad dream than white noise.
> > >
> > >
> > > It is a nightmare already, in the situation of war (with ourselves).
> > >
> > >> Musk is  not alone by far by being worried about big changes coming
> now, but he does not know what they are because he does not understand the
> technology.
> > >
> > >
> > > I am not sure, Paul. He knows enough, at least through his project
> Neuralink (https://neuralink.com), where all this is heading.
> > >>
> > >>
> > >> I do.
> > >
> > >
> > > I do that too, since I went through all the ups and downs of the three
> waves of AI up to this moment, including the expert systems hype of the 1980s
> and the failed Japanese Fifth Generation Computer Systems project.
> > >>
> > >> The one page abstract attached gives a concise review with citations
> of what happened so far with Artificial General Intelligence (AGI) and what
> the next stages are. This was written for my plenary talk last year, to
> accept IEEE's Rosenblatt Award, the highest technical field award for the
> WCCI technologies discussed in the abstract.
> > >
> > >
> > > This is a very good and useful outline.
> > >
> > >> Vint Cerf is widely known as a founder of the internet,
> > >
> > >
> > > There are also other folks who can be considered as founders of the
> Internet, like D. Katz (https://www.rfc-editor.org/pdfrfc/rfc1377.txt.pdf)
> > > as well as Postel, Reynolds and others (
> http://www.icir.org/rfc/rfc-std.html), let alone Tim Berners-Lee, the www
> inventor.
> > > I know this story quite well, since I was involved in the development
> and testing of another open source protocol in the 1990s, XTPX, which was
> supposed to replace TCP and IP over ATM. But DARPA was stronger.
> > >
> > >> for the protocols which created that foundation, but the machine
> learning revolution which is being built on that foundation uses the new
> fundamental design principles reviewed in the abstract.
> > >
> > >
> > > Before AGI, there were three other generations of AI, each one with
> its advantages and drawbacks, but all of them failed so far because of
> naive designs, lack of technological advancement and wrong expectations,
> until recently (around 2015-2017) when the new AI hype emerged under a
> well-organized marketing scheme. I am not afraid of that kind of technology
> as it is, but you know that even a pencil in the hands of a fraudulent
> person can become a weapon. AI can be just as intelligent (and sly) as the
> programmer who has written the code and the data it is fed and "taught" to
> process. What about a hacker, or an army of them, creating viruses to block
> and damage critical infrastructure? How many train derailments with
> explosions of chemical cargo have happened in the US this year, 4-5-6? How
> do these accidents compare to those of the same kind that have happened
> there since 2000? There is a war in Ukraine, but not only there now.
> > > In the Baltic Sea the Nord Stream 2 pipeline was destroyed. And before
> all that we had a COVID-19 pandemic all over the world. "Good morning,
> Vietnam!"
> > >
> > >>
> > >> I concluded that plenary talk (recorded by IEEE) by saying:
> > >> If you meet anyone who claims to be an expert who says he knows it
> will end in disaster, you have met a liar. If you meet one who says he
> knows it will end well, you have met another liar. No one on earth knows,
> because it depends on choices you have to make -- design choices.
> > >
> > >
> > > Unfortunately, there can be different design choices, some of which
> are destructive. Computer viruses and malware, for instance. They can come
> from domestic and external sources. Stephen Hawking said around 2000 that
> he considered them the worst threat to humanity. And we have a billionaire
> software guru who plays with mRNA vaccines now. What do you think is going
> to come next? Would the Manhattan Project have been initiated on time if
> Einstein had not written his letter to FDR?
> > >
> > >>
> > >> Not just legal  choices but substantive design choices.
> > >
> > >
> > > Exactly, and software is not always bug-free. Sometimes
> gaps/deadlocks/loopholes in protocols and procedures are closed and
> disentangled only after years.
> > >>
> > >>
> > >> I am very worried right now because I still see little real focus on
> the real design issues out there.
> > >
> > >
> > > I consider regulation as a design issue. Without it nothing is safe.
> > > Everything needs regulation: medications, weapons, software/AI, before
> even using it in order for the world to be safe.
> > >
> > >
> > >>
> > >> If the only forces at work are the ones which dominate discussions
> now, then we really will all die.
> > >
> > >
> > > I regard this warning as stimulating. It can ignite the discussion on
> these important issues in the end.
> > >
> > > Best,
> > >
> > > Plamen
> > >
> > >
> > >>>
> > >>>
> > >>>
> https://www.foxnews.com/tech/ai-expert-warns-elon-musk-signed-letter-doesnt-enough-literally-everyone-earth-will-die
> > >>>
> > >>>
> https://www.unilad.com/news/microsoft-bing-ai-threatening-users-goading-it-914620-20230323
> > >>>
> > >>> Best,
> > >>>
> > >>> Plamen
> > >>>
> > >>> --
> > >>> You received this message because you are subscribed to the Google
> Groups "Covid-19 Therapy" group.
> > >>> To unsubscribe from this group and stop receiving emails from it,
> send an email to covid-19-therapy+unsubscribe at googlegroups.com.
> > >>> To view this discussion on the web visit
> https://groups.google.com/d/msgid/covid-19-therapy/CAMBikj6aVKnm0E16YEYTrKe4rzP%3DZuqL1kA8h-%3DLYmsSF7CJQw%40mail.gmail.com
> .
>