AADM: Auditing Automated Decision-making
Automated decision-making systems are increasingly used to make assessments or predictions about people. Admittedly, this trend could lead to less biased and more accurate decisions: unlike humans, algorithms hold no personal prejudices and cannot lie about the reasons that led to a specific decision. At the same time, these decision-making systems often operate as black boxes that offer little insight into how a decision was reached, especially when machine learning is involved.
For this reason, both the public and the private sector are making increasing efforts to render these technologies more explainable through transparency and fairness requirements (see the European General Data Protection Regulation). Both requirements, however, presuppose some way of auditing algorithms, and our understanding of such auditing processes remains very limited.
This six-month research project aims to address this crucial gap by elaborating a conceptual map of existing and proposed methods for auditing algorithms. The underlying ethical and legal principles, as well as the latent policies shaping the auditing debate, will be identified in order to describe a comprehensive model of socially acceptable and ethically ideal forms of auditing, together with the exemptions and further work required to realise them in practice.