Join a cutting-edge project to advance AI testing methodologies, ensuring robust and reliable privacy risk assessments of machine learning systems.

Background
In today’s digital age, the risk of information leakage in machine learning applications is a significant concern for regulators worldwide. Handling personal data demands stringent privacy protection measures, as mandated by regulations such as Europe’s GDPR, which includes the right to be forgotten. Data breaches can lead to loss of trust, legal liability, and harm to individuals. Membership inference and attribute inference attacks are notable threats in which an adversary extracts sensitive information about individuals from a trained model.
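
To make the threat concrete, below is a minimal sketch of the classic loss-threshold membership inference attack (Yeom et al., 2018), which guesses that a data point was part of the training set when the model's loss on it is unusually low. The dataset, model, and threshold heuristic are illustrative assumptions for this posting, not part of the LEAKPRO tool.

```python
# A minimal, self-contained sketch of a loss-threshold membership
# inference attack (Yeom et al., 2018). Dataset, model, and threshold
# heuristic are illustrative assumptions, not LEAKPRO code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: one half trains the model ("members"), one half is held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def per_example_loss(model, X, y):
    """Cross-entropy of the true label under the model's predicted probabilities."""
    probs = model.predict_proba(X)
    p_true = np.clip(probs[np.arange(len(y)), y], 1e-12, 1.0)
    return -np.log(p_true)

# The attack guesses "member" whenever the loss falls below a threshold;
# overfitted models tend to assign members unusually low loss.
loss_in = per_example_loss(model, X_in, y_in)
loss_out = per_example_loss(model, X_out, y_out)
threshold = loss_in.mean()

tpr = (loss_in < threshold).mean()   # members correctly flagged
fpr = (loss_out < threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

In a real audit the threshold would be calibrated on shadow models or held-out data rather than on the training set itself; this sketch only illustrates the principle.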

The LEAKPRO project, funded by Vinnova, is developing an open-source tool to assess the risk of information leakage in trained models. This initiative involves collaboration between industry and public sector partners, including AstraZeneca, Sahlgrenska, and Region Halland, alongside AI Sweden and RISE.

Problem Statement
Regulatory compliance will require that quantitative risk assessments and tests are reliable, robust to their underlying assumptions, and reproducible. This includes demonstrating confidence bands, consistently identifying vulnerable data points, and ensuring the robustness of privacy-preserving methods, all while remaining computationally feasible for organizations with constrained resources.

Thesis Project Description
This Master’s thesis project focuses on the robustness and reliability of machine learning inference attacks. The student will reproduce and evaluate privacy attacks documented in the literature and develop methodologies to quantify their uncertainty when they are used in privacy risk audits.

Key Responsibilities

  • Literature Review: Conduct a comprehensive literature review on ML inference attacks.
  • Implementation: Implement selected ML inference attacks and obtain risk benchmarks.
  • Attack Evaluation: Evaluate the reproducibility and robustness of the selected inference attacks.
  • Uncertainty Analysis: Assess the performance, scalability, and feasibility of uncertainty quantification for the different tests, and make recommendations for real-world application (see the sketch after this list).
  • Reporting: Document your work in a scientific report. Optionally, publish your code open source.
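
As a pointer for the uncertainty analysis, the sketch below shows one standard way to put confidence bands on an attack metric: a percentile bootstrap over the attack's per-example scores. The synthetic scores and the helper name are illustrative assumptions, not a prescribed method for the thesis.

```python
# A hedged sketch of percentile-bootstrap confidence bands for an
# attack's AUC; the score arrays stand in for the output of any
# membership inference attack, and the helper name is illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_ci(member_scores, nonmember_scores, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the attack AUC."""
    labels = np.r_[np.ones(len(member_scores)), np.zeros(len(nonmember_scores))]
    scores = np.r_[member_scores, nonmember_scores]
    aucs = []
    for _ in range(n_boot):
        # Resample audit points with replacement to mimic sampling variability.
        idx = rng.integers(0, len(scores), len(scores))
        if labels[idx].min() == labels[idx].max():
            continue  # degenerate resample: only one class present
        aucs.append(roc_auc_score(labels[idx], scores[idx]))
    return tuple(np.quantile(aucs, [alpha / 2, 1 - alpha / 2]))

# Synthetic attack scores where members score slightly higher on average.
members = rng.normal(0.6, 0.2, 500)
nonmembers = rng.normal(0.5, 0.2, 500)
lo, hi = bootstrap_auc_ci(members, nonmembers)
print(f"95% bootstrap CI for attack AUC: ({lo:.3f}, {hi:.3f})")
```

Reporting an interval rather than a point estimate is one way to meet the reproducibility and confidence-band requirements described in the problem statement.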

Qualifications

  • Strong programming skills, particularly for machine learning and data analysis
  • Background in computer science, mathematics, or engineering physics; a talent for mathematical modelling and statistical analysis will be considered a plus.
  • Interest in and basic knowledge of cyber security, data protection, and privacy regulation are also a plus.

Terms
Scope: 30 credits (hp), one semester of full-time work, with a flexible start date.
Location: You are expected to be at a RISE office regularly during the thesis period, preferably a few days each week, primarily in Luleå, Gothenburg, or Kista, with some flexibility.
Benefits: A scholarship of 30,000 SEK is granted upon approval of the final report.

We look forward to your application!
For questions and further information regarding this project opportunity contact Rickard Brännvall, rickard.brannvall@ri.se, +46 730-753 713. Last application date: November 30, 2024.

Keywords: Computer Science, Data Privacy, AI Testing and Regulation, Deep Learning, Model Risk

Start date: According to agreement
Form of salary: According to agreement
Location: Luleå, Gothenburg, Kista or Flexible
County: Norrbottens län
Country: Sweden
Reference number: 2024/303
Contact
  • Rickard Brännvall, +46 730-753 713
Last application date: 2024-11-30