Join a cutting-edge project at the intersection of AI and privacy regulation. Contribute to the development of solutions that ensure compliance with data protection laws and safeguard individual privacy in AI systems.

Background
In today’s digital age, the risk of information leakage in machine learning applications is a significant concern for regulators worldwide. The General Data Protection Regulation (GDPR) includes the right to be forgotten, allowing individuals to request the deletion of their personal data from datasets. 

The LEAKPRO project, funded by Vinnova, addresses these issues by creating a platform to evaluate and mitigate these risks. This initiative involves collaboration between industry and public sector partners, including AstraZeneca, Sahlgrenska, and Region Halland, alongside AI Sweden and RISE. The project aims to support various attack scenarios and align with the upcoming AI Act, providing scalable solutions for systematic and reproducible testing of AI in real-world settings.

Problem Statement 
AI models have been shown to memorize training data, which raises a critical question: when an individual exercises their right to be forgotten under the GDPR, must the AI model be retrained to ensure their data is removed? Or is it sufficient to demonstrate that the individual is not identifiable, that is, that the probability of determining whether their data was part of the training set is not significantly higher than chance? A minimal illustration of this identifiability test is sketched below.
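
The sketch below is a simplified, hypothetical example of the kind of test the question refers to (a loss-threshold membership inference attack); it is not the LeakPro framework or any specific audit procedure from the project, and all data, thresholds, and variable names are illustrative assumptions. If such an attack cannot separate training members from non-members much better than chance, the "not significantly higher than chance" condition is plausibly met for that model and dataset.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data: first half used for training ("members"), second half held out ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem, X_non, y_non = X[:1000], y[:1000], X[1000:], y[1000:]

model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual example."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_mem = per_example_loss(model, X_mem, y_mem)
loss_non = per_example_loss(model, X_non, y_non)

# Attack: guess "member" when the per-example loss falls below a threshold.
threshold = np.median(np.concatenate([loss_mem, loss_non]))
tpr = np.mean(loss_mem < threshold)   # members correctly flagged
fpr = np.mean(loss_non < threshold)   # non-members wrongly flagged
advantage = tpr - fpr                 # 0 = chance level, 1 = perfect attack

print(f"attack advantage over chance: {advantage:.3f}")

An advantage near zero suggests the model does not reveal membership for this (toy) setup; a large advantage indicates memorization that a privacy risk audit would need to flag.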

Thesis Project Description
This thesis project is ideal for a student interested in the regulation and business models for AI, focusing on privacy risk audits, proactive privacy risk mitigation, and reproducible testing for traceability of training data in AI systems.

Key Responsibilities

  • Literature Review: Conduct a comprehensive literature review on privacy risk in AI systems.
  • Regulatory Framework: Review relevant regulations, including the EU AI Act, GDPR, and other data protection laws. Review ongoing court cases and regulatory sandbox studies.
  • Methodology: Analyse how the LeakPro testing framework can be used for privacy risk audits in relation to requirements under the EU AI Act, the GDPR, and other data protection legislation.
  • Reporting: Document your work in a scientific report. Optionally, publish your results as an article.

Qualifications

  • Strong interest in AI, data protection, and regulatory compliance.
  • Background in law or engineering.
  • Ability to work independently and collaboratively within a team.

Terms
Scope: 30 hp (one semester of full-time study), with a flexible starting date.
Location: Luleå. 
Benefits: A scholarship of 30,000 SEK is granted upon approval of the final report.

We welcome your application!
For questions and further information regarding this project opportunity contact Rickard Brännvall, rickard.brannvall@ri.se, +46 730-753 713. Last application date: November 30, 2024.

Keywords: Law, Computer Science, Industrial Economics, Data Privacy, AI Regulation and Compliance

First day of employment: According to agreement
Salary: According to agreement
City: Luleå
County: Norrbottens län
Country: Sweden
Reference number: 2024/304
Contact: Rickard Brännvall, +46 730-753 713
Last application date: 30 Nov 2024, 11:59 PM CET