Deep opacity and AI: A threat to XAI and to privacy protection mechanisms
In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell (2025)

Abstract

It is known that big data analytics and AI pose a threat to privacy, and that some of this threat is due to a "black box problem" in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does ("shallow opacity"), 2) the analysts do not know what the system does ("standard black box opacity"), or 3) the analysts cannot possibly know what the system might do ("deep opacity"). If the agents, data subjects as well as analytics experts, operate under opacity, then these agents cannot provide the justifications for judgments that are necessary to protect privacy, e.g., they cannot give "informed consent" or guarantee "anonymity." It follows from these points that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. So I conclude that big data analytics makes the privacy problems worse and the remedies less effective. As a positive note, I provide a brief outlook on technical ways to handle this situation.

Author's Profile

Vincent C. Müller
Universität Erlangen-Nürnberg
