Aug 4, 2025 We introduced a design pattern that prevents (most) jailbreaks against LLM chatbots. It’s unusual to obtain by-design protection against this kind of attack. Paper and code.
Aug 4, 2025 I wrote a couple of challenges for this year’s DEF CON qualifiers and finals. They have to do with LLM security; hope you enjoy them: vibe and hs.
May 1, 2025 We concluded two rounds of the Adaptive Prompt Injection Competition (LLMail-Inject) (first round winners announcement). With this, we released the (massive) dataset of successful and unsuccessful attacks, the code to rerun the challenge, as well as a paper describing our findings.
Aug 14, 2024 One can get a closed-form approximation of the risk of membership inference against DP-SGD, and we released an interactive tool that uses this idea to help tune DP-SGD’s parameters. We can also get data-dependent guarantees for the risk of attribute inference; code for this is available too. Based on our USENIX ‘24 work.
Aug 10, 2022 Our work on evaluating website fingerprinting in the real world was awarded: i) the Internet Defense Award (2nd place) sponsored by Meta, and ii) a Distinguished Paper Award (USENIX ‘22)!
May 25, 2022 Our work on reconstruction attacks against ML models was accepted by IEEE S&P 2022. Check out Jamie’s wonderful presentation!
Feb 7, 2022 I joined Microsoft Research Cambridge and the Microsoft Security Response Centre (conveniently, both acronymise to “MSRC”). I will work as a Senior Researcher on all things ML, privacy-preserving ML, and security.
Nov 30, 2021 Check out our work on deploying and evaluating website fingerprinting attacks on the Tor network. TL;DR: WF is hard for untargeted attacks. To appear in USENIX ‘22.
May 14, 2021 Our paper “Exact Optimization of Conformal Predictors via Incremental and Decremental Learning” has been accepted for presentation and publication at ICML ‘21. This work has also been accepted as a spotlight talk at the DFUQ ‘21 ICML workshop.
May 11, 2020 From October 2020, I will join the Turing Institute as a Research Fellow in Safe & Ethical AI.
Jan 13, 2020 I will be co-chairing this year’s Symposium on Conformal and Probabilistic Prediction with Applications (COPA 2020). Please consider submitting your work.
Jul 25, 2019 My PhD thesis is now available online. Highlights here.
Feb 28, 2019 Our paper, “F-BLEAU: Fast Black-box Leakage Estimation”, has been accepted by the IEEE Symposium on Security and Privacy, 2019. It shows how to use ML methods for measuring the information leakage of a black-box system in a practical yet theoretically sound manner.
Dec 16, 2018 The code of fbleau, for measuring the leakage of black-box systems, is now online and available for installation via crates.io.
Nov 6, 2018 A list of semester projects for EPFL MSc/PhD students is available at https://spring.epfl.ch/en/projects.
Sep 3, 2018 Work on ensembles of Conformal Predictors accepted by the Machine Learning journal (read more).