[Fis] LECTURE RESPONSES UP TO 16.01.20

Joseph Brenner joe.brenner at bluewin.ch
Fri Jan 17 10:48:14 CET 2020


Dear Friends and Colleagues,

 

As our discussion is taking place, the governments of Russia and the United
States are being recognized as the sources of massive amounts of
disinformation. Their objective was, in part, the election and is now the
re-election of Trump as President of the United States. My personal view is
that this disinformation is linked to and supports extreme right-wing
economic, political and fundamentalist belief systems. I was therefore glad
to see the agreement, in the responses to my Lecture, that disinformation is
not only an important topic for further discussion in the context of the
Foundations of Information Science, but one which some members of our group
may be in a position to counter. 

 

I will present my comments in two groups: the first, today, covers the
responses with which I agree; the second, in a week or so, the responses with
which I disagree. As people who have followed my work might expect, there is no
absolute separation between these categories. I will also try to give some
sense of the dynamics of our dialogue. The comments are indexed simply to
facilitate (cross-)reference.

 

I look forward to your responses as the basis for the next phase of the
discussion.

 

Thank you and best wishes,

 

Joseph

 

A. AGREEMENT

A.1 02.01 Jose Javier emphasized the public origins of disinformation, its
relation to ‘intelligence’ and its use by totalitarian regimes, exactly
along the lines above. I tend to agree with his suggestion that, more than
having a structure close to, if not identical to, that of information,
disinformation IS information, BAD information. Using the ‘dis’ as an
oversimplified distinction in structure tends to gloss over the pernicious
operation of information with the properties we decry. Perhaps another term
would convey the idea better or more simply.

 

A.2 02.01 I am grateful to Stan for pointing to the distinction between
concepts and examples. I suggest we keep this comment in mind in our further
discussion.

 

A.3 03.01 Mark J. responded to a comment by Stan with which I disagreed,
namely, that there was something less primary about disinformation. Stan
interposes an actor, a ‘searcher’ who analyzes information and ‘reports’ on
it, presumably distorting it in the process. In any case, I think we agree
that disinformation is a process – one of disinforming. I disagree with Mark
however regarding agents and a consequent theory of agency (which has value
in other respects). My view is that one cannot have the intentionality to
disinform without an agent, and this can be an individual present at the
start of the process. I suggest to Stan here that his two-step process is
not absolutely necessary: the agent, the intentional ‘disinformer’, takes
whatever material is available and mis-, or should I say dis-, uses it. Mark’s
point about avoiding even the smell (or stink) of an ‘informational
police’ is certainly correct, but what is an acceptable alternative?
Actually, police are simply tools of government, good or bad. We are talking
here about responsible collective interactions as FIS members with – whom?
We don’t know yet.

 

A.4 06.01 Karl makes a number of useful points. The most important is the
confusion, in English, between disinformation as the suppression or
non-dissemination of inconvenient data or theories, and disinformation as
the dissemination of false data. The confusion is not present in French,
where the second is désinformation, nor, as I learn, in German and
Hungarian. So what do we do? Push for the use of desinformation in English,
which requires explanation, or something else? In any case, both senses share
the notion of intentionality.

 

Finally, I liked the way Karl pats us all on the back, but the objective
remains also to get some message to someone outside the group.  

 

A.5 05.01 Pedro makes several important restatements of the biological
origins of the equivalents of disinformation. (Until further notice, pace
Karl, we have to continue to use this form.) The exacerbation of
disinformation due to the new technologies also needs to be emphasized. What
we have not yet identified is the distinguishing ‘marker’ of disinformation.
It is a ‘fake’ as Pedro says qua content, but it looks exactly like the
information people are used to getting. It can easily mimic credibility and
authority, even if one has to be really ignorant not to see through it (45%
of the U.S. population?).

 

The question of balance is more interesting and possibly more encouraging.
Balance in seeing and presenting other, alternative views to one’s own is an
indication of openness and tolerance - at last, a few positive words! There
is little balance in disinformation. In fact, the absence of pros and cons in
the same message might itself be a marker for disinformation.
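
Purely to make this suggested marker concrete, here is a minimal Python sketch
of how one might operationalize it. This is my own illustrative assumption, not
anything proposed by Pedro, and the cue-word list is an invented placeholder,
not a validated lexicon:

# Naive 'balance marker': flag messages that show no counter-argument cues at all.
# The cue list below is hypothetical and for illustration only.
COUNTER_CUES = {"however", "although", "on the other hand",
                "admittedly", "granted", "critics argue"}

def balance_score(message: str) -> int:
    """Count cues signalling that an alternative view is acknowledged."""
    text = message.lower()
    return sum(text.count(cue) for cue in COUNTER_CUES)

def lacks_balance(message: str) -> bool:
    """True when the message shows no trace of pros-and-cons balance."""
    return balance_score(message) == 0

print(lacks_balance("X is a total disaster. Everyone knows it."))              # True
print(lacks_balance("Although X has problems, critics argue it also helps."))  # False

Such a crude count would of course miss irony and manufactured 'both-sidesism';
it is meant only to show where a check for the proposed marker would begin.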

 

A.6 09.01 This is perhaps the most serious, clear and urgent call so far, by
Terry, for an adequate intellectual level of analysis of disinformation and
related issues. Terry’s most important point, which I hope will receive
further discussion as soon as possible, concerns the role IS4SI has to play
and before what audience.

 

I wish to emphasize that I was not making some kind of Luddite argument
against cybernetic approaches (my note about Terry’s of 10.01); I only wished
to ensure that they have adequate grounding in non-computational science and
philosophy. I hope, or even ask, that Terry could expand on his ideas on an
informational immune system. This would be the next step after the
identification of the ‘markers’ for the dis-ease.

 

A.7 09.01 More than just a supplementary question, I feel this note of
Mark’s goes to the heart of the technological vs. the non-technological
aspects of the problem. I note his consideration of information and
disinformation in tandem, at the level of institutions and systems. (For me,
a husband and wife, or two friends or colleagues, are systems.) I also agree
that over-application of computer systems to institutions may result in
information loss. In the context of this discussion, it may actually be a
tool employed by people who intend to disinform.

 

A.8 08.01 Mark’s follow-up note points to one success of AI that I suggest
be followed up separately. More specifically, he asks whether machine learning
“might ground an insight” into the functional difference between information
and disinformation, along the lines of Terry’s considerations. My response
is the same: if yes, fine, but human learning is also required, and must be
supported. If not, there will be no one left but machines, and I am sure
they will be able to pursue ‘truth’ on their own.   
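
To make Mark’s question concrete, here is a minimal sketch of the kind of text
classifier machine learning would offer as a starting point. Everything in it
is my own illustrative assumption: the four example messages and their labels
are invented, and any serious attempt would need a large, carefully curated and
contested corpus rather than surface word statistics.

# Toy bag-of-words classifier: information (0) vs. disinformation (1).
# Texts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The report weighs the evidence for and against the policy.",
    "Officials released the figures together with their methodology.",
    "They are hiding the truth from you. Everyone knows it.",
    "Share this before they delete it. The real numbers are being suppressed.",
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The study lists both benefits and drawbacks."]))

Whether such surface regularities capture a genuinely functional difference,
rather than a merely stylistic one, is exactly the question Mark and Terry
raise; the sketch only shows where a machine-learning attempt would begin.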

 

A.9 15.01 In supporting Terry, Pedro again calls for a role for IS4SI,
initially internally with the formation of a special working group. Its
purpose would be to see what both theoretical and applied contributions we
might make. However, it is not the computer world as such that will result
in ‘voting every afternoon’, in Arbib’s fuite en avant (headlong rush forward),
when only X% (Pedro, please give us the right number) of the members of FIS
have responded to this Lecture.

 

The point about authentication, however, joins the previous ones about
markers and functional differences, and constitutes a first real
mini-consensus on a target for further work.

 

 


