It seems we must choose what is worth more to us as a society: the well-being of children or the safety of our private data.
On 29 July 2022, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a Joint Opinion on the Proposal for a Regulation to prevent and combat child sexual abuse. The Proposal aims to impose obligations concerning the detection, removal, reporting and blocking of known and new online child sexual abuse material (CSAM). While child sexual exploitation must be stopped, the EDPB stated that only strictly necessary and proportionate information should be retained in these cases, since the rights to private life and data protection must be upheld. The two bodies also warned that using artificial intelligence (hereinafter: AI) to scan users’ communications could generate errors, which may lead to false accusations.
While privacy is a serious issue that affects all of us, it must be noted that more than one million reports of child sexual abuse were made in the European Union in 2020. So what gives? How can we ensure that every child is protected from sexual exploitation, that perpetrators are found and content is removed, while protecting ourselves from becoming completely transparent and vulnerable? Is AI the answer to these problems, or is it too unreliable?
To understand the gravity of the situation, we must first look at what AI entails in this case: detecting new CSAM requires a different type of technology from the hash-matching used for known material, namely classifiers and artificial intelligence. Their error rates, however, are far from negligible. The Impact Assessment Report accompanying the Proposal indicates that there are technologies for the detection of new CSAM whose precision rate can be set at 99.9%, but at that precision they identify only 80% of the total CSAM in the relevant data set. The Joint Opinion states that when artificial intelligence algorithms are applied to images or text, bias and discrimination can occur because certain population groups are underrepresented in the data used to train the algorithm. To combat this negative effect, biases should be identified, measured and reduced to an acceptable level so that the detection systems remain effective in identifying predators. Despite the fear surrounding possible false arrests, in my opinion AI is our best shot at catching as many predators as possible and preventing further harm to our children.
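To make these figures concrete, here is a minimal back-of-the-envelope sketch of what 99.9% precision and 80% recall imply at scale. The scanning volume and CSAM prevalence below are illustrative assumptions of mine, not figures from the Impact Assessment Report:

```python
# Illustrative arithmetic only: what 99.9% precision and 80% recall mean
# at scale. The two volumes below are hypothetical assumptions, not
# figures from the Impact Assessment Report.

total_scanned = 1_000_000_000   # assumed: items scanned per year
actual_csam = 100_000           # assumed: true CSAM items among them

precision = 0.999               # share of flagged items that truly are CSAM
recall = 0.80                   # share of true CSAM the classifier catches

true_positives = recall * actual_csam             # CSAM correctly flagged
flagged_total = true_positives / precision        # everything flagged
false_positives = flagged_total - true_positives  # innocent items flagged
missed_csam = actual_csam - true_positives        # CSAM that slips through

print(f"Flagged items:       {flagged_total:,.0f}")
print(f"  correctly flagged: {true_positives:,.0f}")
print(f"  falsely flagged:   {false_positives:,.0f}")
print(f"Missed CSAM:         {missed_csam:,.0f}")
```

Even under these optimistic assumptions, dozens of innocent items are flagged for review and one in five abuse items slips through undetected, which is precisely the tension the Joint Opinion highlights.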
To decide whether AI should be used to combat CSAM, I propose that we weigh the advantages and disadvantages of its use, and consider which rights are infringed when CSAM is detected, and which are infringed when we do not look for it at all.
Several fundamental rights of victims are infringed: the rights to life, health, personal freedom and security, as well as the right not to be subjected to torture or other inhuman, cruel or degrading treatment, as guaranteed by the UDHR and other international instruments. The United States Supreme Court decision and the lower court decisions in United States v. Lanier show that, in the US interpretation, sexual abuse violates a recognized right to bodily integrity encompassed by the liberty interest protected by the Fourteenth Amendment.
The problem, however, is that uncontrolled access to anyone’s and everyone’s data under the guise of investigating online abuse could easily strengthen surveillance capitalism, render our data completely visible and cause privacy essentially to cease to exist. At present, personal data is protected by several laws in the EU, most importantly the GDPR.
If children’s right not to be exposed to sexual exploitation online can be upheld in any way other than giving up data protection, then restricting privacy is not necessary. We are currently trying to implement measures to stop online child abuse in all its forms, but they yield few results. I wholeheartedly agree that data protection rights should be restricted only to the most essential degree. Because these two issues are so intertwined and difficult to balance, we could adopt a dedicated policy for cases where CSAM is searched for in a person’s data sphere. I firmly believe that such a solution could be found, but it would require establishing new agencies that deal specifically with the data protection aspects of such cases. We may well need agencies dedicated to AI in the near future: they could review false accusations and errors, and could help develop a practical framework in which AI becomes a tool we can use against predators. As for the data protection side of the issue, as Mark Zuckerberg famously stated more than ten years ago, privacy is no longer the social norm anyway.
These are all questions for future generations of thinkers, who may yet develop newer technologies and safer practices that make balancing these two sets of human rights possible. Artificial intelligence is a rapidly growing field, so we can hope that one day we will be able to strike the balance between preventing false arrests and saving children. Developing legal frameworks on a global scale would also help set down clear guidelines on what AI may be used for and what error rates are acceptable. If we had proper laws governing AI, we would not have to choose which is more important: the fundamental rights of innocents or the protection of our privacy. I therefore encourage all jurists to start an open dialogue about the possibilities of creating a comprehensive framework for AI, in order to ensure a brighter future. This may start with setting up an agency that reports on the practical use of AI in finding CSAM; a model law at the highest possible level could then follow, ensuring that the law catches up with technology in time.
__________________________________________________________
The views expressed above belong to the author and do not necessarily represent the views of the Centre for Social Sciences.