Tony Wang's Personal Website

Welcome to my personal website. I am currently at the US AI Safety Institute, on leave from my PhD at MIT. At the Safety Institute, I work on the design, implementation, execution, and analysis of frontier model evaluations. The overall goal of my work and research is to enable humanity to realize the benefits of AGI while adequately managing its risks.

Research Interests

Much of my previous work and thinking has been on adversarial robustness. I've studied the phenomenon both in simplified toy settings and in the setting of superhuman game-playing agents.

At the moment, I'm working on robustness in both the vision and language domains. My key focus is developing techniques that improve robustness against unrestricted adversaries. In the vision domain, I'm particularly interested in how we can make progress on something like the Unrestricted Adversarial Examples Challenge. In the language domain (where most of my effort currently goes), I want to answer the following question:

What are the core difficulties with preventing jailbreaks in language models, and how can these difficulties be overcome?

My current take is that the answer involves scalable oversight, together with techniques like relaxed adversarial training, representation engineering, and stateful defenses.

A north star for my research is to develop techniques that could let us reliably instill Asimov’s laws (at least the first two) into AGI systems.

Contact

If you would like to chat about anything I’ve mentioned on this site, feel free to contact me at twang6 [at] mit [dot] edu.

Some links: Twitter, Google Scholar, CV.