Last updated June 2025

Hello, Reader! My name is James Lucassen and this is my blog. This page contains biographical information about me personally, and a pile of links to the rest of my Internet Presence.

Bio

I grew up in suburban New York, and was sort of a STEM kid as far back as I can remember. In middle school I first became interested in philosophy after binge-watching Crash Course Philosophy one summer. After that I realized I didn’t know what I wanted to do with my life, so I started thinking. And after a bit of wandering around and trying on different worldviews, in mid high school I found Effective Altruism. EA was built around many of the same principles I had decided on for myself; the ideas and the people just kind of clicked, and I promptly dove down the EA rabbit hole.

I went to college at Harvey Mudd, originally for engineering, planning to work on climate tech. In freshman year I read The Case for Strong Longtermism and became interested in reducing existential risk. I was originally very dismissive of X-risk from AI, but figured I should look into it a little bit as due diligence, so I picked up a copy of Superintelligence. After reading it I couldn’t think of anything wrong with the arguments, but the worldview had such wild implications that my gut didn’t really buy it yet. So I spent most of the Pandemic Years just ruminating about whether AI X-risk was real or not. I also read the Sequences during the pandemic, and came back to school with my gut mostly convinced. So I switched my major to CS, and started working on AI stuff.

Since then, I’ve been wandering the AI Safety world working on various projects, gradually gathering 1) evidence about what kind of work will help reduce AI X-risk and 2) the skills necessary to contribute to that kind of work. I participated in the ELK competition and won a prize, which made me want to give full-time AI safety research a shot. I spent a summer as an intern under Evan Hubinger, where I learned a lot more about how to do alignment theory and about my particular style of research. I spent a year working on the Consequentialist Cognition Project at MIRI, where I learned a ton about [REDACTED] and how to [REDACTED]. Then I spent about a year doing independent research and leading two research projects at SPAR, and then another year in Pittsburgh working on AI security for the DoD. I’m currently at Redwood Research, working 70% on unsupervised monitoring and 30% on everything else needed to make AI go well.

I live in Berkeley with my lovely girlfriend Lauren. In my free time I enjoy soft acrobatics, board games with friends, and reading a mixed bag of good nonfiction, good fiction, and truly awful fiction. Some of my favorite fiction books are Unsong, Anathem, A Fire Upon the Deep, There Is No Antimemetics Division, The Dark Forest, and Martial World. Favorite nonfiction includes the Sequences, GEB, Inventing Temperature, Nonviolent Communication, and The Dictator’s Handbook.

If you want to get a sense of what it’s like to interact with me as a person, I’ve been told I talk the same way I write. I think my dominant character traits are curious, earnest, analytical, playful, and laid-back. Some other quick indicators I can think of are that my Myers-Briggs is consistently inconsistent at (I|E)NT(J|P), and my preferred Magic: The Gathering colors are Jeskai.

As of the last update to this page, I am not bound by any commitments which prevent me from disclosing their existence.