Background
As industries increasingly recognize the value of data-driven decisions and machine learning algorithms, there is growing concern about the threats these algorithms can pose, especially when trained on sensitive data. If such a model is made public, there is a risk that sensitive information could be extracted, ranging from identifying specific training data points to inferring individual attributes or even reconstructing the data. Handling personal data also demands stringent privacy protection measures: global privacy regulations like Europe's GDPR are in place to safeguard individuals' data, and data breaches can lead to loss of trust, legal consequences, and harm to individuals. It is therefore crucial to employ advanced methods for preserving privacy and ensuring data security, including protecting the information encoded in the model.

Membership inference attacks are a type of privacy attack in which an adversary, given a data record and access to a model, determines whether the record was part of the model's training dataset. The attacker trains their own inference model to recognize differences in the target model's predictions on inputs it was trained on versus inputs it was not. Attribute inference attacks, on the other hand, occur when an adversary with partial knowledge of some training records and access to a model trained on them infers the unknown values of a sensitive feature of those records. This can expose sensitive information such as race, gender, sexual orientation, or medical diagnosis.
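
To make the idea concrete, below is a minimal sketch of a simple confidence-threshold membership inference attack in Python. The dataset, model, and threshold value are illustrative assumptions and not part of the project; they only demonstrate the general principle that an overfitted model tends to assign higher confidence to records it was trained on.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Split the data: the target model is trained only on the "member" half.
    X, y = load_breast_cancer(return_X_y=True)
    X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

    target = RandomForestClassifier(n_estimators=100, random_state=0)
    target.fit(X_mem, y_mem)

    def true_label_confidence(model, X, y):
        # Probability the model assigns to each record's true label.
        proba = model.predict_proba(X)
        return proba[np.arange(len(y)), y]

    conf_mem = true_label_confidence(target, X_mem, y_mem)
    conf_non = true_label_confidence(target, X_non, y_non)

    # Attack rule: guess "member" when confidence exceeds a threshold.
    # The threshold 0.9 is illustrative; in practice it would be calibrated,
    # for example with shadow models trained on similar data.
    threshold = 0.9
    scores = np.concatenate([conf_mem, conf_non])
    labels = np.concatenate([np.ones_like(conf_mem), np.zeros_like(conf_non)])
    print("attack accuracy:", ((scores > threshold) == labels).mean())

An attack accuracy well above 50% on such a balanced member/non-member set indicates that the model leaks membership information; more sophisticated attacks replace the fixed threshold with a learned inference model.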

Our team is developing an open-source tool that enables risk assessment of information leakage in trained models. The tool can be integrated into internal systems so that data integrity is taken into account as early as the development phase of a project. It can also be used to profile models trained on different data modalities, supporting business decisions about how a given model should be published.

Description
We are seeking a talented Master's student for a thesis project that explores and evaluates different machine learning inference attacks and the techniques used to defend against them.

Key Responsibilities

  • Literature Review: Conduct a comprehensive literature review on ML inference attacks.
  • Framework Comparison: Evaluate and compare existing frameworks for assessing ML inference attacks, considering factors such as performance and ease of implementation.
  • Implementation: Implement selected ML inference attacks and obtain risk benchmarks.
  • Performance Analysis: Assess the performance, scalability, and security of the implemented frameworks, and make recommendations for real-world application.
  • Reporting: Document your work in a scientific report. Optionally, publish your code as open source.

Qualifications

  • Strong programming skills, particularly in machine learning and data analysis.
  • Interest in and basic knowledge of cyber security and privacy preservation.
  • Ability to work independently and collaboratively within a team.

Terms
Scope: 30 credits (hp), one semester full time, with a flexible starting date. Location: Luleå. Benefits: A scholarship of 30,000 SEK is granted upon approval of the final report.

We look forward to your application!
The last day of application is October 13, 2024. For questions and further information regarding this project opportunity, contact Rickard Brännvall, rickard.brannvall@ri.se, +46 730-753 713.

Keywords: Deep Learning, Inference Attacks, Privacy Preserving Technologies, Model Risk

Start date: By agreement
Salary type: E
City: Luleå
County: Norrbottens län
Country: Sweden
Reference number: 2024/250
Contact
  • Rickard Brännvall, +46730-753713
Last day of application: 2024-10-13