The trail behind us: perfume and data

The Wall Street Journal columnist Peggy Noonan noted:

Privacy is tied to our person. It has to do with intimate aspects – the mechanisms of our head and heart, the functioning of our mind – and the boundary between these things and the outside world (2013).

Yet as we move, we leave a trail. Once it was a trail of perfume; today an aura of data is added to it: a procession that never leaves us, from waking until sleep. A 24/7 theft of privacy.

The targets include many categories of subjects, typically consumers, workers, and young people. In 2018, U.S. companies spent approximately $19 billion acquiring and interpreting consumer data (Matsakis, 2019).

This data is dual-use:

Social platforms gather a huge wealth of personal data. One can delude oneself that they are free; instead, it is an exchange in which we give personal information in return for a service. As the saying goes, “if you are not paying for the product, you are the product”. There are no free lunches: what is posted on social media is used to profile users and target messaging efficiently. Even if we consider it a positive thing to receive ads and posts that interest us, we must also be aware of what is done with our personal data and control its use through informed consent (Rossi, 2019).

But here another thorny issue arises. Your consent was most likely given when you accepted an app’s terms of service. But that information is complex: according to Time, reading all the terms of use one encounters would take a person 76 hours a year. The New York Times, after reading 150 of them, called them “an incomprehensible disaster” (Pasley, 2020).

For the tech companies, the agenda of our day has been transmuted into a data agenda.

When you wake up and glance at the news on your smartphone, traces remain. And what about what happened to Geoffrey Fowler of the Washington Post? He discovered that overnight his iPhone was sending data to dozens of companies (Pasley, 2020).

To get to work, we use what the New York Times has called “essentially a smartphone with wheels”. Anyone who books a company car through the app will be greeted, on entering the vehicle, by their favorite kind of music playing on the radio: it was recorded in memory and learned, so the journey stays within a comfort zone. Did you buy a car instead? It can record the driver’s weight, the number of passengers, the speed, and the roads traveled.

Once at the office, here is what can happen: the unpleasant surprise of discovering you are being profiled by the company or, at least, the fear of suffering the fate of employees of other companies – for example, those of the Swedish fashion chain H&M, with its alleged illegal processing of employee data by means of hidden records. The information reportedly concerned sensitive aspects such as employees’ health and private lives. As stated by the Hamburg Data Protection Authority, which opened proceedings against H&M, the quantitative and qualitative extent of the processing of employee data, accessible to all company management, shows a complete profiling of employees that has no comparison (Domenici, 2020).

This troubling episode brings to mind another recent scandal. As reported by The Wall Street Journal (Copeland, 2019), in the Nightingale project Google, together with the healthcare provider Ascension, secretly collected data (laboratory results and medical diagnoses, with names and dates of birth) from the medical records of patients in 21 U.S. states: their complete medical histories. Google’s role was to provide artificial intelligence (AI) systems to read electronic medical records and identify a patient’s condition more quickly. The purpose: more efficient interoperability in the American hospital network (Porro, 2019). Efficiency vs. ethics.

Then there is the outburst of a friend whose son is applying to U.S. universities, where – he says – even email etiquette may be checked. In fact, American universities use data such as the amount of time an email remains open and the links clicked to evaluate a candidate’s degree of motivation (Pasley, 2020).

A Northeastern University study (Ren et al., 2019) – the largest on smart TVs – showed that next-generation connected televisions transfer sensitive user data to companies such as Google, Netflix and Facebook, even if users do not subscribe to any of their services. In the case of Netflix, the data is sent even when the app is not active. Among the many implications: third parties can know whether we are at home, and, based on the preferences chosen on the device, we can be hit with targeted and effective advertising. Security and privacy are at risk, as the FBI also warns, lest we be spied on and end up caught in the web of cybercrime (Lavalle, 2019).

The violation of personal data is a never-ending story: ToTok, similar in name to the more famous (and equally problematic) TikTok, presented itself as a messaging app like WhatsApp, Telegram or Signal, but was allegedly a spying tool run by the government of the United Arab Emirates (Rijtano, 2019). Behind the app, in fact, there was reportedly Pax AI, a company specialized in data mining, in turn connected to DarkMatter, an Abu Dhabi company engaged in both white-hat and black-hat hacking: it protects the security and privacy of its customers while spying on their competitors. ToTok was a very effective surveillance tool because it tracked conversations, movements, contacts and appointments, the sound recorded by the microphone, and the images captured by the camera of the smartphone on which it was installed.

The issue of privacy – the protection of personal information, and the value and cost of such protection – belongs to the literature of the information economy. From this perspective, privacy has not only a juridical value but also an economic one.

In 2010 Zuckerberg had attempted to lower the value and the cost of protecting personal data by stating that privacy was no longer a “social norm”.

This is the techno-capitalist speaking: maximizing his profits means exploiting the innate sociability of human beings and their tendency toward self-exposure; personal data (that is, users’ lives) are turned into an economic resource to be extracted through the surreptitious offer of a user-friendly life, through the rhetoric of sharing and a playful component, the make-up of this strategy. But Zuckerberg’s sweeping pro-sharing statements proved wrong: that optimism has today given way to a complete reversal in favor of privacy. And he himself, less than ten years after that strong declaration, performed an about-face of the same scale when, in 2019 in San Jose, on the stage of the F8 conference (the convention for the social network’s developers), he appeared beneath equally strong words in large letters: “The future is private” (Fiocca, 2019). Among the good intentions in favor of privacy for 2020 is the protection of users through the new “Off-Facebook Activity” feature, which allows users to limit the information shared with third-party sites and apps. To confirm the new course, in January Facebook celebrated “Data Privacy Day” to reassure citizens of how sensitive techno-capitalism is to the issue of privacy. It is a pity that the celebrations were crossed by the Avast scandal: the antivirus, available for free on the market, recorded the activities of registered users and transferred the information to an Avast subsidiary, Jumpshot, which packaged data sets to be sold to giants such as Google, Yelp, Microsoft, McKinsey, Pepsi, Home Depot and various marketing companies. All, of course, to strengthen sales channels, to segment and profile the market, to optimize services, and so on (D’Elia, 2020).

These are fundamental aspects in the face of increasingly pervasive AI practices, including much-debated facial recognition, now widely applied even in countries with non-authoritarian governments.

The EU already has adequate privacy laws: the General Data Protection Regulation (GDPR, EU Regulation 2016/679) has put data protection at the center of the debate for AI as well (Art. 22). Concerns and recommendations on ethical issues also emerged in June 2019 from a report of the European Commission’s High-Level Expert Group on AI: the EU should regulate invasive practices such as biometric identification (for example, facial recognition), the use of lethal autonomous weapons systems (such as military robots), and the profiling of children. Shortly thereafter, as soon as she took office, the President of the Commission placed the regulation of AI in Europe among her priorities. The proposals are contained in the White Paper on artificial intelligence and in a European strategy for data. The Commission aims at a “human-centric” approach whereby AI systems are used in ways that respect EU law and fundamental rights. Remote biometric identification and other high-risk AI systems are to be allowed only for reasons of “substantial public interest” and must be “transparent, traceable and guarantee human control in sensitive sectors such as health, police and transport”. The data strategy aims to create a single data market in which personal and non-personal data (including confidential and sensitive data) are secure, and to which businesses and public administrations have access in order to create and innovate.

We stand on a ridge where sometimes conflicting needs meet: scientific progress and ethical values, including the right to privacy. An example in plain sight is the AI technology deployed against Covid-19. Today this example can stand as the icon of the relationship between the urgent need to protect collective health and the protection of the private sphere. A Global China video shows a drone inviting an elderly Tibetan woman to wear a mask to protect herself from the coronavirus (Marino, 2020). The use of such a drone compromises privacy through facial recognition, tracking, georeferencing, and profiling. Hence the trade-off:

A life-saving predictive power for which it is worth giving up pieces of privacy.

This is the first pandemic that AI has faced: a pilot case. The question is where to draw the border between social protection against systemic risks and the individual protection of privacy. For Europe, the White Paper offers a key to reading it.