
Face Recognition and its Impact on Privacy

“Almost everyone in the world will be identifiable” – this quote comes from an investor presentation by the US company Clearview AI, which specializes in face recognition using artificial intelligence (AI). The company is also developing tools to scan car license plates and to analyse people’s behaviour in public. [1]

Obviously, this poses a threat to all of our individual privacy and, hence, a serious problem for our society as a whole. In this blog post, we would like to explain how facial recognition works, where it is used, and at the same time point out the associated dangers.

How Face Recognition Works

Facial recognition is a biometric technique used to identify individuals based on their visual facial characteristics. There are two-dimensional and three-dimensional recognition techniques. Either way, the entire process typically involves three stages:

1. Capturing Characteristics

Two-dimensional recognition

Based on frontal or profile pictures, the characteristic features of a face are identified. These include, for example, the distance between the eyes, the distance between forehead and chin, the contours of the ears and lips, and many other features. The advantage of two-dimensional over three-dimensional recognition is that research on these methods is very advanced and huge amounts of image data on individuals are available online, especially on social networks.
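As an illustration of this first stage, the freely available Python library face_recognition can extract such 2D landmarks from a photo. The following is a minimal sketch under the assumption that an example image portrait.jpg contains one visible face; the landmark names follow the library’s documentation:

```python
import face_recognition

# Load an example portrait (hypothetical file name).
image = face_recognition.load_image_file("portrait.jpg")

# Extract the 2D landmark points of every detected face.
for landmarks in face_recognition.face_landmarks(image):
    left_eye = landmarks["left_eye"]    # list of (x, y) pixel coordinates
    right_eye = landmarks["right_eye"]

    # Characteristic features are simple geometric measures,
    # for example the distance between the centres of both eyes.
    lx = sum(x for x, _ in left_eye) / len(left_eye)
    ly = sum(y for _, y in left_eye) / len(left_eye)
    rx = sum(x for x, _ in right_eye) / len(right_eye)
    ry = sum(y for _, y in right_eye) / len(right_eye)
    eye_distance = ((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5
    print(f"Distance between the eyes: {eye_distance:.1f} px")
```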

Three-dimensional recognition

Three-dimensional face recognition is based on optical measurement methods that use image sequences to capture surfaces in three dimensions. In theory, this produces more accurate representations of faces. In practice (i.e. for surveillance), however, these methods are not necessarily superior to two-dimensional face recognition, since they require more complex measuring instruments and the methods themselves are not yet as mature.

2. Transforming Faces into Data

After a face has been analysed using two- or three-dimensional techniques, the obtained information is transformed into data and a unique digital “faceprint” is created. This means that only a numerical set of measured data is saved, rather than an actual image of the face.
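In practice, such a faceprint is typically a fixed-length vector of numbers. As a brief sketch, the face_recognition library used above encodes each detected face as a 128-dimensional vector (again assuming a hypothetical portrait.jpg):

```python
import face_recognition

# Hypothetical example image containing one face.
image = face_recognition.load_image_file("portrait.jpg")

# The "faceprint" is just a 128-dimensional numerical vector,
# not an image of the face itself.
faceprint = face_recognition.face_encodings(image)[0]
print(faceprint.shape)  # (128,)
print(faceprint[:5])    # e.g. [-0.12  0.08  0.05 ...]
```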

3. Face Recognition

This facial data can then be stored in large databases and new images uploaded to social networks, for example, can be automatically matched to people in those databases through facial recognition. The matching process is typically done with machine learning algorithms or artificial intelligence. [2]
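Conceptually, this matching step is a nearest-neighbour search over stored faceprints. A minimal sketch, again based on the face_recognition library with hypothetical file names; the 0.6 threshold is that library’s default matching tolerance:

```python
import face_recognition

def faceprint(path):
    """Load an image and return the faceprint of the first face found."""
    return face_recognition.face_encodings(
        face_recognition.load_image_file(path))[0]

# A toy "database": faceprints of already identified people.
database = {"alice": faceprint("alice.jpg"), "bob": faceprint("bob.jpg")}

# A newly uploaded photo to be matched against the database.
unknown = faceprint("upload.jpg")

# Compare the new faceprint with every stored one; the smaller
# the distance, the more similar the faces.
names = list(database)
distances = face_recognition.face_distance(
    [database[n] for n in names], unknown)

best = min(range(len(names)), key=lambda i: distances[i])
if distances[best] < 0.6:  # typical matching threshold
    print(f"Match: {names[best]} (distance {distances[best]:.2f})")
else:
    print("No match in the database")
```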

Where Face Recognition Is Used


Before we go into more detail on critical large-scale projects regarding facial recognition and privacy, we would like to point out that facial recognition has long since found its way into our everyday lives.

  • Apple offers the possibility to unlock your iPhone using facial recognition, log in to apps, and make purchases.
  • Snapchat’s (and other companies’) filters rely on facial recognition. Otherwise, the software would not be able to recognize your face to apply the filters.
  • Meta (i.e. Facebook & Instagram) maintains very large databases to automatically tag people in uploaded pictures.
  • Google uses face recognition in Google Photos to automatically sort the images in your gallery.

These are just selected examples to illustrate how widespread face recognition already is. What seem to be helpful and sometimes amusing tools at the same time lead to the collection of huge amounts of data about all of us by global corporations and government institutions.

Why Face Recognition Is a Privacy Issue

In this section, we would like to show why face recognition poses a serious threat to privacy.

The Case of China’s Social Credit System

China’s controversial social credit system is a national initiative to establish a record system so that businesses, individuals and government institutions can be tracked and evaluated for “trustworthiness”. Using digital surveillance measures, the behavior of the population is systematically controlled and evaluated in order to induce socially “desired” behavior. The system only works if the entire public space is digitally monitored by surveillance cameras and facial recognition.

Subsequently, undesirable behavior is sanctioned, for example by no longer granting loans, no longer being able to purchase train or airline tickets, and/or publicly stigmatizing the person. The system is currently still being tested in various forms in individual regions and is aimed at complete control of individuals in public (and private) space. [3]

The Case of Berlin Südkreuz

From 2017 to 2019, a pilot project at Berlin Südkreuz (the third-largest train station in Berlin) tested the use of facial recognition. In two test phases, the usefulness of facial recognition for police investigations was evaluated. In addition to identifying wanted suspects, unattended luggage and conspicuous behavior patterns were also analyzed.

Although data protection advocates and civil rights activists rightly pointed out the software’s poor and impractical detection rate on top of its obvious privacy intrusions, Deutsche Bahn and the then German interior minister considered the test a success. Consequently, it was decided to invest several hundred million euros in the expansion of these methods in public places, especially train stations. Legal concerns, especially the apparent violation of the European General Data Protection Regulation (GDPR), were not sufficiently taken into account. [4]

The Case of Clearview AI & PimEyes

The company Clearview AI probably owns one of the largest private image databases worldwide and works like a search engine for photos. As The New York Times revealed, Clearview AI has collected more than 3 billion images from Facebook, YouTube and millions of other websites. Based on this ever-growing data set, the company tries to make almost everyone on earth identifiable within seconds. [1, 5]

Such a data set is a serious societal threat because, in contrast to potentially useful applications, it would almost certainly drive mass surveillance by states as well as companies. Besides many obvious threats to democracy and privacy, the existence of mass surveillance has been shown to change the way we think and behave, the so-called “chilling effect”. You can find more information on this in our blog. [6]

The concrete dangers to one’s social life become more obvious in the case of the European company PimEyes. Similarly to Clearview AI, PimEyes, which claims to have aggregated a data set of over 900 million faces, offers a “search engine for faces” in which each and every image is tagged by facial recognition AI. [7]

However, as opposed to Clearview AI, which offers its service primarily to government authorities and thereby potentially “only” strengthens state surveillance, PimEyes is available to virtually anyone. This has even deeper privacy implications, as well as potential for discrimination in the labour market, in social life and even in the legal sphere. [7]

How We Can Protect Our Privacy

Individual Measures

There are certain basic measures anyone can apply, particularly while using social media. However, this issue is much bigger than individual choices and behaviour.

In Short:
  • For one, be thoughtful of what you post online or share with others privately. This is of course a good idea in general, but particularly so in the case of images depicting yourself (and others).
  • Secondly, try to limit automated image tagging (especially of faces) wherever possible, such as in your cloud storage solution or smartphone image galleries.
  • Lastly, advocate for stricter laws surrounding facial recognition algorithms. The usage of AI in such contexts is an invasive development, which needs to be guided by broad discussions in science, society and of course politics.

The implementation and usage of facial recognition algorithms is an omnipresent development, and while general statements like “think before you upload” are still relevant, they of course also play into the previously discussed chilling effect. Using your own face in the public online world (such as social media) can be an expression of freedom and liberty and should be left as a decision to every individual. It should not be anyone’s intention to tell others what they should or should not have done with images of themselves.

That said, whenever you share or publish imagery that includes others (this also applies to uploading it to a personal cloud storage that uses image tagging, like Google Drive), it is a good idea to ask the affected persons for permission and act accordingly.

The truth is that limiting exploitative facial recognition is ultimately a matter of policy and law. Such questions will be an important topic in both politics and society over the next few years.

ViOffice Kaŝi & the Fawkes Algorithm

An algorithm called “Fawkes”, developed by scientists from the SAND Lab at the University of Chicago, sets out to offer individual protection against facial recognition AI by adding small alterations to a given photo. These alterations, barely noticeable to the human eye, try to “poison” facial recognition models of a person. [8, 9]

This means that any facial recognition AI trying to learn what you look like from such cloaked images learns a wrong version of you, rendering the resulting model much less effective. The “poisoned” model is thus much less likely to recognise you when it is confronted with your real face in the future (for example via IP-connected surveillance cameras). However, the limitations of such measures should be considered before publishing any photos altered by them. [8, 9]
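To make the idea more tangible, here is a deliberately simplified toy sketch of targeted cloaking, not the actual Fawkes implementation: a perturbation limited to a tiny per-pixel budget is optimized so that a face encoder (here a hypothetical linear stand-in) maps your photo close to a different person’s faceprint. All images, the encoder and the step sizes are made-up placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face encoder: a fixed linear projection from
# pixel space (64x64 grayscale) to a 128-dimensional "faceprint".
W = rng.normal(size=(128, 64 * 64)) / 64.0

def embed(img):
    return W @ img.flatten()

original = rng.uniform(0, 1, size=(64, 64))  # your photo (toy data)
target = rng.uniform(0, 1, size=(64, 64))    # a different person (toy data)
target_print = embed(target)

epsilon = 0.03  # maximum per-pixel change, keeps the cloak nearly invisible
cloaked = original.copy()

# Gradient descent on ||embed(cloaked) - target_print||^2; for the
# linear toy encoder the gradient is analytic: 2 * W^T (W x - t).
for _ in range(200):
    grad = 2 * W.T @ (embed(cloaked) - target_print)
    cloaked -= 0.001 * grad.reshape(64, 64)
    # Project back into the allowed pixel budget around the original.
    cloaked = np.clip(cloaked, original - epsilon, original + epsilon)
    cloaked = np.clip(cloaked, 0, 1)

print("max pixel change:", np.abs(cloaked - original).max())  # <= epsilon
print("distance to target before:",
      np.linalg.norm(embed(original) - target_print))
print("distance to target after:",
      np.linalg.norm(embed(cloaked) - target_print))
```

A model trained on such cloaked images associates your identity with a shifted faceprint, which is the intuition behind the poisoning effect described above.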

While the Chicago scientists made Fawkes relatively simple to use on local computers, there are situations where no appropriate computer is at hand, particularly in the age of smartphones. For such scenarios, we developed the easy-to-use “Kaŝi” web application. Using the aforementioned Fawkes algorithm in the background, Kaŝi provides image cloaking on the go. We decided to provide Kaŝi as a free public service hosted on our own infrastructure. However, since Kaŝi, just like Fawkes itself, is Free and Open Source Software (FOSS), anyone can host the web application on their own server.

Sources

  1. Harwell, Drew (2022): Facial recognition firm Clearview AI tells investors it’s seeking massive expansion beyond law enforcement, in: The Washington Post. Online at: https://www.washingtonpost.com/technology/2022/02/16/clearview-expansion-facial-recognition/ [16.02.2022]
  2. Kaspersky: Facial Recognition – Definition and Explanation. Online at: https://www.kaspersky.de/resource-center/definitions/what-is-facial-recognition
  3. Campbell, Charlie (2019): How China is Using “Social Credit Scores” to Reward and Punish its Citizens, in: Time. Online at: https://time.com/collection/davos-2019/5502592/china-social-credit-score/
  4. Krempl, Stefan (2019): Deutsche Bahn – More Surveillance with Facial Recognition at Train Stations, in: heise online. Online at: https://www.heise.de/newsticker/meldung/Bahn-Mehr-Ueberwachung-mit-Gesichtserkennung-an-Bahnhoefen-4522296.html [12.09.2019]
  5. Hill, Kashmir (2020): The Secretive Company That Might End Privacy as We Know It, in: The New York Times. Online at: https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html [18.01.2020]
  6. Richards, N. (2013): The Dangers of Surveillance, in: Harvard Law Review, 126(7), 1934-1965.
  7. Laufer, Daniel & Meineck, Sebastian (2020): A Polish company is abolishing our anonymity, in: netzpolitik.org. Online at: https://netzpolitik.org/2020/pimeyes-face-search-company-is-abolishing-our-anonymity/ [10.07.2020]
  8. Shan, S., Wenger, E., Zhang, J., Li, H., Zheng, H., Zhao, B. (2020): Image “Cloaking” for Personal Privacy. Online at: https://sandlab.cs.uchicago.edu/fawkes/
  9. Shan, S., Wenger, E., Zhang, J., Li, H., Zheng, H., Zhao, B. (2020): Fawkes – Protecting Privacy against Unauthorized Deep Learning Models, in: Proceedings of the USENIX Security Symposium 2020. Online at: http://people.cs.uchicago.edu/%7Eravenben/publications/abstracts/fawkes-usenix20.html

Pascal founded ViOffice together with Jan in the fall of 2020. He mainly takes care of marketing, finance and sales. After his degrees in political science, economics and applied statistics, he continues to work in scientific research. With ViOffice, he wants to provide access to secure software from Europe for everyone and especially support non-profit associations in their digitalization.


Jan is co-founder of ViOffice. He is responsible for the technical implementation and maintenance of the software. His interests lie in particular in the areas of security, data protection and encryption.

In addition to his studies in economics, later in applied statistics and his subsequent doctorate, he has years of experience in software development, open source and server administration.