[Fis] FW: LECTURE RESPONSES UP TO 16.01.20. Addition

Joseph Brenner joe.brenner at bluewin.ch
Sat Jan 18 08:55:15 CET 2020


Dear All,

In line with my preferred ‘philosophy’ of avoiding unnecessary categorial
separations, I will add Gordana’s note, and perhaps other ones, to the first
group of responses, where useful.

Thank you and best wishes.

 

J

 

A.10 17.01 Gordana’s note reminds us that concepts, such as ‘fact’, that we
have tended to accept without question conceal significant problems of
information and disinformation. For Democrats, it is a fact that Trump
blocked access to important witnesses and documents. For Republicans, this
fact does not exist publicly; there is only the fact that they wish to deny
legitimacy to impeachment. Are these facts data, information and/or
knowledge? I suggest simply a combination of information and disinformation.
The social use of lies/disinformation does not seem to me to require a major
new theoretical principle: selfishness and the will to power will do very
well, thank you.

 

Gordana also makes an important connection, in the social domain, between
lies and the communication system. I hope Loet will comment on this and tell
us at what point in the hierarchy of contingency disinformation enters. In
his note he refers to multiple meanings, but here we are talking about false
ones. Again, is there some ‘marker’ for the latter?

 

As to her last sentence, I do not have a literature reference for the
following at the moment; apologies. For me, mutual confirmation is a widely,
I suspect consciously, used mechanism to support doubtful theories of all
kinds, including and especially in some domains of science: A supports B; B
supports C; and C supports A, ‘proving’ A, a bootstrap as Gordana suggests.
Finally, her reference is informative, but it takes us into the domain of art.
Someone asked earlier if a painting was not in some way ‘disinformation’.
These are, for me, relatively easy questions, for which the touchstone is
intentionality.

 

  _____  

From: Gordana Dodig-Crnkovic [mailto:gordana.dodig-crnkovic at mdh.se] 
Sent: Friday, 17 January 2020 23:26
To: Joseph Brenner
Subject: Re: [Fis] LECTURE RESPONSES UP TO 16.01.20

 

Dear Joseph,

Thank you for leading this important discussion.

I have been totally swamped with teaching and not able to contribute, but I
was following and reflecting on these things. Perhaps you follow the Aeon
newsletter. If not, this related article might interest you:

 
https://aeon.co/essays/how-to-tell-fact-from-fiction-in-fiction-and-other-forms-of-lies?utm_source=Aeon+Newsletter&utm_campaign=49e7fadf66-EMAIL_CAMPAIGN_2020_01_16_12_28&utm_medium=email&utm_term=0_411a82e59d-49e7fadf66-68641345

The other day I was thinking about the relation between data (as “facts”),
information (as “facts”) and knowledge (as “facts”), and also about the role
of representation in the process of producing common “facts”.

This could be connected to cognitive science and the process of the
construction of “facts”. The easier case is The Construction of Scientific
Facts (Latour).

The more interesting case is the characterization of a lie (in the article)
and the distinction between misinformation and disinformation. One can ask:
what, in a distributed information system such as a state, leads to
situations where lies become a prominent part of the communication system?
It should be possible to study this through historical examples, and I guess
it probably has a function for that system. What might that be? A simple
explanation is that the network is hierarchically organized and the person
at the top of the hierarchy is trying to keep power. But it must also be the
case that the people around them support it. Why do they do that? There are
examples of hierarchical structures, like a house of cards, which are in a
delicate balance and may look impressive while in balance. But under a very
slight disturbance they can fall apart, and the hierarchy is gone. Is there
any chance of understanding the mechanisms that support such social systems
on theoretical grounds? They present some sort of bootstrap in which the
hierarchy is reproduced through the mutual support of its parts.

Best wishes,

Gordana

 

 

From: Fis <fis-bounces at listas.unizar.es> on behalf of Joseph Brenner
<joe.brenner at bluewin.ch>
Date: Friday, 17 January 2020 at 10:49
To: "fis at listas.unizar.es" <fis at listas.unizar.es>
Cc: "pcmarijuan.iacs at aragon.es" <pcmarijuan.iacs at aragon.es>
Subject: [Fis] LECTURE RESPONSES UP TO 16.01.20

 

Dear Friends and Colleagues,

 

As our discussion is taking place, the governments of Russia and the United
States are being recognized as the sources of massive amounts of
disinformation. Their objective in part was the election and is now the
re-election of Trump as President of the United States. My personal view is
that this disinformation is linked to and supports extreme right-wing
economic, political and fundamentalist belief systems. I was therefore glad
to see the agreement, in the responses to my Lecture, that disinformation is
not only an important topic for further discussion in the context of the
Foundations of Information Science, but one which some members of our group
may be in a position to counter. 

 

I will present my comments in two groups: the first, today, on the responses
with which I agree; the second, in a week or so, on the responses with which
I disagree. As people who have followed my work might expect, there is no
absolute separation between these categories. I will also try to give some
sense of the dynamics of our dialogue. The comments will be indexed simply
to facilitate (cross-)reference.

 

I look forward to your responses as the basis for the next phase of the
discussion.

 

Thank you and best wishes,

 

Joseph

 

A. AGREEMENT

A.1 02.01 Jose Javier emphasized the public origins of disinformation, its
relation to ‘intelligence’ and its use by totalitarian regimes, exactly
along the lines above. I tend to agree with his suggestion that, more than
having a structure close to, if not identical to, that of information,
disinformation IS information, BAD information. Using the ‘dis’ as an
oversimplified distinction in structure tends to gloss over the pernicious
operation of information with the properties we decry. Perhaps another term
would convey the idea better or more simply.

 

A.2 02.01 I am grateful to Stan for pointing to the distinction between
concepts and examples. I suggest we keep this comment in mind in our further
discussion.

 

A.3 03.01 Mark J. responded to a comment by Stan with which I disagreed,
namely, that there was something less primary about disinformation. Stan
interposes an actor, a ‘searcher’ who analyzes information and ‘reports’ on
it, presumably distorting it in the process. In any case, I think we agree
that disinformation is a process – one of disinforming. I disagree with
Mark, however, regarding agents and a consequent theory of agency (which has
value in other respects). My view is that one cannot have the intentionality
to disinform without an agent, and this can be an individual present at the
start of the process. I suggest to Stan here that his two-step process is
not absolutely necessary: the agent, the intentional ‘disinformer’, takes
whatever material is available and mis-uses, or should I say dis-uses, it.
Mark’s point about the necessity of avoiding even the smell (or stink) of an
‘informational police’ is certainly correct, but what is an acceptable
alternative? Actually, police are simply tools of government, good or bad.
We are talking here about responsible collective interactions as FIS members
with – whom? We don’t know yet.

 

A.4 06.01 Karl makes a number of useful points. The most important is the
confusion, in English, between disinformation as the suppression or
non-dissemination of inconvenient data or theories, and disinformation as
the dissemination of false data. The confusion is not present in French,
where the second is désinformation, nor, as I learn, in German and
Hungarian. So what do we do? Push for the use of désinformation in English,
which requires explanation, or something else? In any case, both share the
notion of intentionality.

 

Finally, I liked the way Karl pats us all on the back, but the objective
remains also to get some message to someone outside the group.  

 

A.5 05.01 Pedro makes several important restatements of the biological
origins of the equivalents of disinformation. (Until further notice, pace
Karl, we have to continue to use this form.) The exacerbation of
disinformation by the new technologies also needs to be emphasized. What we
have not yet identified is the distinguishing ‘marker’ of disinformation. It
is a ‘fake’, as Pedro says, qua content, but it looks exactly like the
information people are used to getting. It can easily mimic credibility and
authority, even if one has to be really ignorant not to see through it (45%
of the U.S. population?).

 

The question of balance is more interesting and possibly more encouraging.
Balance in seeing and presenting views alternative to one’s own is an
indication of openness and tolerance – at last, a few positive words! There
is little balance in disinformation. In fact, perhaps the absence of pros
and cons in the same message might be a marker for disinformation.

 

A.6 09.01 This is perhaps the most serious, clear and urgent call so far, by
Terry, for an adequate intellectual level of analysis of disinformation and
related issues. Terry’s most important point, which I hope will receive
further discussion as soon as possible, concerns the role IS4SI has to play
and before what audience.

 

I wish to emphasize that I was not making some kind of Luddite argument
against cybernetic approaches (my note on Terry’s of 10.01); I only wished
to ensure that they have adequate grounding in non-computational science and
philosophy. I hope, or even ask, that Terry could expand on his ideas on an
informational immune system. This would be the next step after the
identification of the ‘markers’ for the dis-ease.

 

A.7 09.01 More than just a supplementary question, I feel this note of
Mark’s goes to the heart of the technological vs. non-technological aspects
of the problem. I note his consideration of information and disinformation
in tandem, at the level of institutions and systems. (For me, a husband and
wife, or two friends or colleagues, are systems.) I also agree that
over-application of computer systems to institutions may result in
information loss. In the context of this discussion, it may actually be a
tool employed by people who intend to disinform.

 

A.8 08.01 Mark’s follow-up note points to one success of AI that I suggest
be followed up separately. More specifically, he asks if machine learning
“might ground an insight” into the functional difference between information
and disinformation, along the lines of Terry’s considerations. My response
is the same: if yes, fine, but human learning is also required, and must be
supported. If not, there will be no one left but machines, and I am sure
they will be able to pursue ‘truth’ on their own.

 

A.9 15.01 In supporting Terry, Pedro again calls for a role for IS4SI,
initially internally, with the formation of a special working group. Its
purpose would be to see what contributions, both theoretical and applied, we
might make. However, it is not the computer world as such that will result
in ‘voting every afternoon’, in Arbib’s fuite en avant (headlong rush), when
only X% (Pedro, please give us the right number) of the members of FIS have
responded to this Lecture.

 

The point about authentication, however, joins the previous ones about
markers and functional differences, and constitutes a first real
mini-consensus on a target for further work.

 

 


