Registration is closed!
Theme: Responsible AI and statistical techniques in the audit
Join us for the 2025 Audit Analytics Summit, where professionals, academics, and thought leaders interested in audit data analytics come together to share ideas. In particular, the conference will focus on advances in statistics, machine learning, and (responsible) artificial intelligence (AI) in auditing, as well as the auditing of these techniques.
This year's summit will be an intimate conference experience, featuring:
- Research papers: Explore cutting-edge papers addressing critical topics in audit analytics and data science.
- Networking drinks: Connect with colleagues, exchange ideas, and build professional networks.
Don't miss out on this valuable opportunity to advance your knowledge and engage with the auditing community. To register, please fill out the registration form at the bottom of this page. If you want to present your work, please indicate this in your registration. We encourage presentations of both published and unpublished work.
We look forward to welcoming you to Amsterdam!
Organized by: Nyenrode Business University & Utrecht University
Date: June 20, 2025
Time: 12:00 - 18:00 (with drinks afterwards)
Location: Nyenrode Business Universiteit, Keizersgracht 285, 1016 ED, Amsterdam
Entrance fee: Free
Program
12:00 - 13:00    Walk-in and Lunch
13:00 - 13:15     Opening by Prof. dr. Ruud Wetzels - Nyenrode Business Universiteit
13:15 - 13:45     dr. Mirko Schäfer - Utrecht University
Title: From Impact Assessment to Oversight
Abstract: As the EU's Artificial Intelligence Act (AI Act) approaches implementation, we can observe a shift away from merely raising awareness of responsible and ethical AI development and use. Article 27 of the AI Act mandates fundamental rights impact assessments (FRIAs). Several such FRIAs have already been developed and are in use, notably in the Netherlands, where the Fundamental Rights & Algorithms Impact Assessment (FRAIA), developed at Utrecht University, was introduced in 2021. Drawing from our experience with FRAIA, this presentation will discuss how impact assessments and good practices for governing AI systems and algorithms achieve much more than mere compliance: increasing data and AI literacy, adapting the competences and capacities of organisations, enabling the contestation of AI systems and algorithms, and constituting accountability.
13:45 - 14:15     Fré Vink - Auditdienst Rijk
Title: Why the Privacy - Fairness Trade-Off Doesn't Exist
Abstract: People often think they must choose between measuring fairness and protecting privacy by excluding special categories of personal data. The choice is difficult, as both fairness and privacy are very valuable, and it is not without risk either: not conducting fairness measurements can lead to bias and discrimination, while collecting special categories of personal data creates privacy risks, especially if the data is stolen or accidentally leaked. The risk is heightened because the AI Act provides an exception to the GDPR, allowing the collection of special categories of personal data for high-risk AI systems, which might lead to large-scale collection of sensitive data. In this presentation, we will argue that this choice doesn't have to exist. In recent work (Van der Steen et al., 2025), we demonstrated the combination of privacy and fairness in decision trees, using a combination of fairness metrics, differential privacy, and a trusted third party owning the data. I will elaborate on the implications of this method for companies and the government.
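One ingredient of the combination the abstract mentions, differential privacy, can be illustrated with a minimal sketch: a trusted third party holding the sensitive data releases an aggregate statistic with noise from the Laplace mechanism. This is a generic textbook construction for illustration only, not the method of Van der Steen et al. (2025); the function name and parameters are hypothetical.

```python
import random

def dp_positive_rate(y_pred, epsilon, sensitivity=1.0):
    """Release the positive-prediction rate with differential privacy.

    The count of positive predictions (sensitivity 1 under add/remove
    of one record) is perturbed with Laplace(0, sensitivity/epsilon)
    noise, generated as the difference of two exponentials.
    """
    true_count = sum(y_pred)
    noise = (random.expovariate(epsilon / sensitivity)
             - random.expovariate(epsilon / sensitivity))
    return (true_count + noise) / len(y_pred)
```

Smaller values of `epsilon` give stronger privacy but noisier statistics, which is the quantitative face of the trade-off the talk argues can be managed rather than avoided.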
14:15 - 14:30     Coffee and Tea Break
14:30 - 15:00     dr. Lukas Snoek - Nationale Politie
Title: Using Meta-Models to Evaluate Accuracy, Bias, and Fairness of AI Applications
Abstract: Trustworthy AI models should be accurate, fair, and robust, especially in high-stakes domains like law enforcement and medicine. Data science offers a rich repertoire of tools to quantitatively evaluate models, such as cross-validation, an abundance of (fairness) metrics, and sensitivity tests. These tools, however, often lack the rigor and parsimony associated with the field of statistics. In this talk, I outline a novel statistical framework for quantitatively evaluating AI models. This framework is based on "meta-models", in which a single statistical model is used to quantify how accurate, fair, and robust an AI model is -- offering parsimony, uncertainty quantification, and results that are understandable for lay people. I conclude with my perspective on the future of AI auditing, which is both causal and, of course, Bayesian.
15:00 - 15:30     Syed Yawir Ali - Reanda Netherlands
Title: Collaboration of Audit, ESG and Gen AI
Abstract: The convergence of Audit, Environmental, Social, and Governance (ESG) considerations, and Generative AI (Gen AI) presents a transformative opportunity to redefine business assurance. This presentation explores the synergistic potential of their collaboration. We delve into how Gen AI can revolutionize traditional audit processes, enhancing efficiency and depth in analyzing financial data, while simultaneously enabling more robust and data-driven ESG reporting and assurance. By fostering a collaborative approach, organizations can leverage Gen AI to not only streamline compliance but also gain deeper insights into their sustainability performance, ultimately driving greater transparency, stakeholder trust, and long-term value creation.
15:30 - 16:00     Lotte Mensink - Nyenrode Business Universiteit
Title: Enhancing Efficiency and Flexibility in Audits through Bayesian Optional Stopping
Abstract: When auditors use statistical sampling, they typically plan their sample size before data collection. This sample size is determined, among other factors, by the expected deviation. If the expected deviation does not align with the observed deviation in the sample, auditors end up with suboptimal sample sizes. This study introduces Bayesian optional stopping as a solution to this problem. With this method, auditors do not need to determine the sample size before collecting data, but they can monitor the strength of the evidence as the data comes in and stop sampling once sufficient evidence has been gathered. Bayesian optional stopping can enhance efficiency and flexibility in audit sampling, enabling auditors to save valuable resources.
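As a rough illustration of the idea (a generic beta-binomial setup, not necessarily the model used in the study), an auditor can update a posterior on the misstatement rate after every sampled item and stop as soon as the posterior probability that the rate lies below the tolerable rate exceeds a chosen threshold:

```python
from math import comb

def beta_cdf(x, a, b):
    """CDF of Beta(a, b) at x for integer a, b, via the binomial identity."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

def optional_stopping(errors, tolerable_rate=0.05, threshold=0.95):
    """errors: iterable of 0/1 sample results (1 = misstatement found).

    Returns the sample size at which sampling can stop, or None if the
    evidence threshold was never reached.
    """
    a, b = 1, 1  # uniform Beta(1, 1) prior on the misstatement rate
    for n, e in enumerate(errors, start=1):
        a, b = a + e, b + (1 - e)  # Bayesian update per observation
        if beta_cdf(tolerable_rate, a, b) >= threshold:
            return n  # sufficient evidence gathered; stop sampling
    return None
```

With a uniform prior, no observed misstatements, a 5% tolerable rate, and a 95% threshold, this sketch stops after 58 items; the key point is that the stopping moment adapts to the observed deviations instead of being fixed in advance.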
16:00 - 16:15     Coffee and Tea Break
16:15 - 16:45     Jurriaan Parie - Algorithm Audit
Title: A Public Standard for Auditing Risk Profiling Algorithms
Abstract: Over the years, lessons have been learnt from Dutch scandals involving risk profiling algorithms. Investigations conducted by consultants, academics, and NGOs have contributed to a growing body of public knowledge from which best practices emerge. These insights are now encapsulated in a public standard for risk profiling, which is currently being formalized as a Dutch standard through the standardization organization NEN. This presentation explores the interplay between the qualitative principles of law and ethics and the quantitative methodologies of statistics and data analytics. Specifically, we shed light on how empirical approaches can help interpret and contextualize open legal norms under EU non-discrimination law. Examples are drawn from a recent bias analysis conducted in collaboration with the Dutch Executive Agency for Education (DUO), in which aggregated statistics on the migration background of 300,000+ students were analyzed. We discuss whether bias testing inevitably leads to the feared 'battle of numbers', or whether it can play a critical role in fostering meaningful democratic oversight of AI.
16:45 - 17:15     Federica Picogna - Nyenrode Business Universiteit
Title: How to Choose a Fairness Measure: A Decision-Making Workflow for Auditors
Abstract: Artificial Intelligence (AI) is increasingly used for decision-making across various domains, often involving binary outcomes. While AI systems can improve efficiency and objectivity, they may also worsen societal biases, disadvantaging and discriminating against unprivileged groups. To mitigate possible discrimination risks, regulations such as the AI Act require AI systems to uphold fairness, with auditors responsible for ensuring compliance. However, the auditing process is complicated by the need to select among multiple definitions of fairness and a variety of fairness measures. To support auditors in this task, we developed a decision-making workflow that guides them in choosing the most appropriate fairness measure, and thus the most suitable definition of fairness, for their specific audit. To facilitate practical use, we integrated this workflow into the open-source software JASP for Audit. In this presentation, we demonstrate its application using a case study: the COMPAS case.
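To illustrate why the choice of measure matters, here is a minimal sketch of two widely used fairness measures (generic textbook definitions; the JASP for Audit workflow itself is a graphical tool and is not reproduced here):

```python
def positive_rate(y_pred, group, g):
    """Fraction of positive predictions within group g."""
    preds = [p for p, gr in zip(y_pred, group) if gr == g]
    return sum(preds) / len(preds)

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between groups 1 and 0."""
    return positive_rate(y_pred, group, 1) - positive_rate(y_pred, group, 0)

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between groups 1 and 0."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == 1]
        return sum(preds) / len(preds)
    return tpr(1) - tpr(0)
```

The two measures encode different definitions of fairness (equal selection rates versus equal error rates among the truly positive) and can disagree on the same predictions, which is exactly the kind of choice the decision-making workflow is designed to support.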
17:15 - 17:20     Closing words by dr. Koen Derks - Nyenrode Business Universiteit
17:20 - 19:00     Networking drinks