
I’m currently in the second year of my Master’s program, working as a student research assistant at Saarland University under Prof. Dr. Michael Hahn and Prof. Dr. Vera Demberg.
I've worked on several projects covering different aspects of large language models, from the expressivity of transformers and state-space models to the mechanistic interpretability of in-context learning. Recently, we finished a project aimed at understanding which theoretical limitations of the transformer architecture persist in pre-trained language models (PLMs).
I’ve also begun my thesis with Yihong Liu at the Schütze Lab at LMU Munich, trying to figure out why better cross-lingual alignment fails to translate into better cross-lingual transfer.
After graduation, I plan to pursue a PhD that joins my interests in multilingual NLP, mechanistic interpretability, and efficient and robust NLP to make multilingual LLMs more interpretable and robust. I look forward to turning the Left-Behinds and the Scraping-Bys[1] into, at the very least, Hopefuls and Rising Stars of the field.
In my free time, I perform stand-up comedy, mix drinks (like espresso martinis and blue lagoons) for my friends, and squat 1.5× my body weight at the gym.
Feel free to hit me up to talk about work or hobbies; I’m always happy to chat :)
- 1. Joshi et al. (2020). “The State and Fate of Linguistic Diversity and Inclusion in the NLP World.” ACL 2020.