Join a cutting-edge project at the intersection of AI and privacy regulation. Contribute to the development of solutions that ensure compliance with data protection laws and safeguard individual privacy in AI systems.

Background
In today’s digital age, the risk of information leakage in machine learning applications is a significant concern for regulators worldwide. The General Data Protection Regulation (GDPR) includes the right to be forgotten, allowing individuals to request the deletion of their personal data from datasets. 

The LEAKPRO project, funded by Vinnova, addresses these issues by creating a platform to evaluate and mitigate these risks. This initiative involves collaboration between industry and public sector partners, including AstraZeneca, Sahlgrenska, and Region Halland, alongside AI Sweden and RISE. The project aims to support various attack scenarios and align with the upcoming AI Act, providing scalable solutions for systematic and reproducible testing of AI in real-world settings.

Problem Statement 
AI models have been demonstrated to memorize training data, which raises a critical question: when an individual exercises their right to be forgotten under GDPR, must the AI model be retrained to ensure their data is removed? Alternatively, is it sufficient to demonstrate that the individual is not identifiable, i.e. that the probability of determining whether their data was part of the training set is not significantly higher than chance?
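
The second question is, in essence, a membership inference test: an auditor tries to decide, better than chance, whether a given individual's record was used to train the model. As an illustration only, the sketch below shows one simple variant of such a test, a loss-threshold attack on synthetic data, reporting how far the attack's AUC departs from the 0.5 chance level. All data, model choices, and thresholds here are hypothetical, and the code does not represent the LeakPro API.

# Minimal sketch of a loss-threshold membership inference test
# (illustrative only; hypothetical synthetic data, not the LeakPro API).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for personal data: "members" were used for training, "non-members" were not.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_train, y_train = X[:1000], y[:1000]   # members
X_out, y_out = X[1000:], y[1000:]       # non-members

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(model, X, y):
    # Cross-entropy loss per record; memorized records tend to have lower loss.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_members = per_example_loss(model, X_train, y_train)
loss_non_members = per_example_loss(model, X_out, y_out)

# Attack score: lower loss suggests the record was more likely a training member.
scores = np.concatenate([-loss_members, -loss_non_members])
labels = np.concatenate([np.ones(len(loss_members)), np.zeros(len(loss_non_members))])

auc = roc_auc_score(labels, scores)
print(f"Membership inference AUC: {auc:.3f} (0.5 = chance level)")
# An AUC close to 0.5 supports the claim that individuals are not identifiable
# better than chance; a substantially higher AUC indicates a privacy risk.

How such empirical evidence maps onto the legal standard of identifiability under GDPR, and onto the requirements of the AI Act, is precisely the kind of question this thesis is expected to examine.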

Thesis Project Description
This thesis project is ideal for a student interested in the regulation and business models for AI, focusing on privacy risk audits, proactive privacy risk mitigation, and reproducible testing for traceability of training data in AI systems.

Key Responsibilities

  • Literature Review: Conduct a comprehensive literature review on privacy risk in AI systems.
  • Regulatory Framework: Review relevant regulations, including the EU AI Act, GDPR, and other data protection laws. Review ongoing court cases and regulatory sandbox studies.
  • Methodology: Analyse how the LeakPro testing framework can be used for privacy risk audits in relation to the requirements of the EU AI Act, GDPR, and other data protection legislation.
  • Reporting: Document your work in a scientific report. Optionally, publish your results as an article.

Qualifications

  • Strong interest in AI, data protection, and regulatory compliance.
  • Background in law or engineering.
  • Ability to work independently and collaboratively within a team.

Terms
Scope: 30 hp, one semester of full-time study, with a flexible starting date.
Location: You are expected to be at a RISE office regularly during the thesis period, preferably a few days each week, primarily in Luleå, Gothenburg, or Kista, with some flexibility.
Benefits: A scholarship of 30,000 SEK is granted upon approval of the final report.

We welcome your application!
For questions and further information regarding this project opportunity, contact Rickard Brännvall, rickard.brannvall@ri.se, +46 730-753 713. Last application date: November 30, 2024.

Keywords: Law, Computer Science, Industrial Economics, Data Privacy, AI Regulation and Compliance

Start date: By agreement
Salary: By agreement
Location: Luleå, Gothenburg, Kista, or flexible
County: Norrbotten County
Country: Sweden
Reference number: 2024/304
Contact:
  • Rickard Brännvall, +46 730-753 713
Last application date: 2024-11-30