by Giulio Montanaro

Of the 318 pages that make up Radical Technologies, there is one passage in particular that the publisher could have used to summarize the author's message:

A literal, straightforward description of daily life already sounds like the conspiracy theory of a paranoid schizophrenic: we are surrounded by powerful but invisible forces that control us by means of devices scattered throughout the house and even applied to our bodies; these forces are industriously compiling detailed dossiers on each of us, passing their contents to obscure intermediaries who are accountable to no one and who use everything they know to determine the structure of the opportunities we are offered, or, worse, of those we are not offered. [Editorial translation from the Italian edition]

Radical Technologies (which, surprisingly, we find listed among the fundamental readings in the digital ethics courses of some local academies) calls for a deeper reflection on the supposedly inescapable dependence of humans on technology. In this essay Adam Greenfield issues yet another warning to a society ever more radically transfigured by its interaction with the digital, an essay that lends itself to interesting parallels with everyday life and raises equally relevant questions about it.

[Cover image: Adam Greenfield, Tecnologie radicali]

How far will the digital avant-garde go, this child of accelerationism, cousin of cyber-communism and essential socio-political trait d'union in the transition from technocracy to transhumanism? How long before immigration becomes a humanly unavoidable requirement for slowing down the process of automation and robotization? Even here in Italy, we mean. Japan is already addressing the issue, and Japan is the country with the most similarities to the Belpaese, from demographics to credit to cultural identity. Moreover, without workers, who will oppose the arch-capitalist, slave-owning Power of the future, and how?

That is, one wonders whether it will still be possible to act in our own defense. If nanotechnologies left free to float through our capillaries can paralyze limbs or create virtual realities in the brain, and if subcutaneous implants powered by the body's own energy soon become a sine qua non for accessing one's money (and more), it is likely that, very soon, the concept of freedom will be downgraded to a form of autonomy and independence granted by technology.

Closing with questions: what are the differences between the COVID false-positive model and the anodyne, so-called false-criminal model used in the artificial intelligence metrics driving predictive policing? We will focus on this last question by analyzing the concept of Red Boxing, or intelligent security.

Red Boxing closely resembles the predictive policing foretold by Philip K. Dick in Minority Report, except that, in reality, the predictions are carried out not by individuals with special psychic abilities, like Dick's Precogs, but by artificial intelligences. The main human and ethical issue inherent in this form of technology is its innately discriminatory nature: from the earliest experiments conducted in America, blatant violations have in fact emerged related to what English speakers label as "bias".

Here in Italy, strangely in contrast to America, predictive policing only began to take its first steps last year. In April 2021, in the Veneto region, always a forerunner in the introduction of new forms of technological despotism, the police of the city of Caorle introduced on an experimental basis Pelta Suite, an intelligent-security system designed to contain predatory crime. Such technology, as we can see, already belongs to "the design of everyday life", as per the subtitle of Radical Technologies.

"Smart security" is an artificial intelligence that processes local crime data provided by police officers with the aim of providing predictions about potential high-crime areas and subjects. Like any artificial intelligence, its operation is based on machine learning algorithms and is done through a fourfold process involving collection, screening, inspection and action. Smart security artificial intelligences are informed by the probabilistic model of Bayesian inference criterion. This statistical approach interprets probabilities as levels of confidence in the occurrence of an event, rather than as frequencies, proportions or similar concepts.


Essentially, the artificial intelligence searches for clusters of activity and for networks of relationships among them: past events, first translocated into the sacrosanct big data and then transformed into probabilistic models built on information supplied, in the interest of millions, by single individuals to an artificial intelligence. The result is what Greenfield correctly calls path dependence: a system's tendency to evolve in ways predetermined by decisions already made in the past.
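A toy simulation, with invented numbers and no connection to any real system, can make the mechanism of path dependence visible: if patrols are allocated in proportion to previously recorded incidents, two areas with the same underlying crime level never escape the imbalance introduced by the very first data.

```python
# Toy model of path dependence in data-driven patrolling. Both areas have the
# same true incident rate, but recording an incident requires being there to
# record it, and presence follows past records. The initial imbalance is
# reproduced round after round instead of being corrected.

TRUE_RATE = 0.10        # identical real incident rate per patrol in both areas
TOTAL_PATROLS = 100     # patrols available in each round

recorded = {"area A": 6.0, "area B": 4.0}   # arbitrary starting imbalance

for round_number in range(1, 6):
    total = sum(recorded.values())
    updated = {}
    for area, past_records in recorded.items():
        patrols = TOTAL_PATROLS * past_records / total   # allocation follows the past
        updated[area] = past_records + patrols * TRUE_RATE
    recorded = updated
    print(f"round {round_number}:",
          {area: round(count, 1) for area, count in recorded.items()})
```

Area A keeps receiving sixty percent of the patrols and therefore keeps producing sixty percent of the records, even though nothing distinguishes it from area B except the first decision.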

A phenomenon that faithfully portrays the scientistic reductionism which increasingly characterizes the contemporary world and invalidates any real assessment of reality. And which, in the case of predictive policing, as the author himself reminds us, will produce plenty of "anodyne" false positives. A model reminiscent, for many, of that of the supposed COVID positives. A model, above all, that prefigures an administration of security and justice based on algorithmic conjectures worked out from past events and carried out by third parties, and therefore de facto totally dissociated from the reality of the potential accused.
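The parallel with the false positives rests on a base-rate effect that a few lines of arithmetic make explicit. The numbers below are purely hypothetical assumptions, chosen only to show the shape of the problem: when the condition being screened for, an infection or a criminal intent, is rare, even a seemingly accurate tool flags mostly healthy or innocent people.

```python
# Base-rate arithmetic with hypothetical numbers: positive predictive value
# computed via Bayes' theorem.

prevalence  = 0.01   # assumed: 1% of the screened population has the condition
sensitivity = 0.90   # assumed: probability a true case is flagged
specificity = 0.95   # assumed: probability a non-case is cleared

true_positives  = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)

# Probability that a flagged person actually has the condition.
ppv = true_positives / (true_positives + false_positives)
print(f"share of flagged subjects who really are 'positive': {ppv:.0%}")
# With these numbers: about 15%, i.e. roughly 85 out of every 100 people
# flagged do not have the condition at all.
```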

Practically speaking, people will go from having to prove that they do not carry a virus of relatively low lethality to having to prove that they do not intend to do what they are accused of intending. Innocent or healthy subjects will be tagged as criminals or lepers: subjects unjustly, but more importantly illegally, persecuted first and prosecuted later. A fitting expression of the model of freedom and democracy on which the society of Cognitive Capitalism is to be shaped, well depicted by the "customer communication" of cognitive agents. Citizens judged to be at high criminogenic risk will in fact receive home visits, from predictive policing or cognitive policing, whatever we call it, and will be warned about the risks of their potential future acts. That is, if they are not directly charged for them.

In Minority Report, John Anderton at one point turns to Dr. Iris Hineman, the inventor of predictive policing, and asks her whether it is possible to fake precognition. Dr. Hineman's answer lends itself as an ideal closing remark: "If a series of genetic errors and a completely mad science can be called invention, then I invented predictive policing."

Born in Padua in 1980, passionate about languages, history and philosophy. He has been writing since a very young age and has contributed to the press since 1999. He worked in the electronic music industry, distinguishing himself as a talent scout and agent for some of the most important artists of the last 15 years. He has had experience in fashion and textiles and has lived in nine different cities. He currently lives in Tunisia.