Last updated Oct 2024
Hello, Reader! My name is James Lucassen and this is my blog. On this page, you’ll find some biographical information about me and a pile of links to the rest of my Internet Presence.
Bio
I grew up in suburban New York, and was sort of a STEM kid as far back as I can remember. I first became interested in philosophy in middle school, after binge-watching Crash Course Philosophy one summer, and started working out my own philosophical views. After a bit of wandering around and trying on different worldviews, I found Effective Altruism in mid high school. EA was built around many of the same principles I had already decided on for myself; the ideas and the people just clicked, and I dove down the EA rabbit hole.
I went to school at Harvey Mudd, originally for engineering, planning to work on climate tech. In freshman year I became interested in existential risk after reading The Case for Strong Longtermism. I was originally very skeptical of X-risk from AI, but figured I should look into it as a matter of due diligence, and picked up a copy of Superintelligence. While I couldn’t find anything wrong with the arguments, the worldview had such wild implications that my gut didn’t really buy it yet. So I spent most of the Pandemic Years just ruminating about whether AI X-risk was real or not. I also read the Sequences during the pandemic, and came back to school with my gut mostly convinced. I switched my major to CS and started working on AI stuff.
Since then, I’ve been wandering the AI Alignment world, working on various projects and gradually gathering 1) evidence about what kind of work will help reduce AI X-risk, and 2) the skills necessary to contribute to that kind of work. I participated in the ELK competition and won a prize, which made me want to give alignment research a shot. I spent a summer as an intern under Evan Hubinger, where I learned a lot more about how to do alignment theory and about my particular style of research. I spent a year working on the Consequentialist Cognition Project at MIRI, where I learned a ton about [REDACTED] and how to [REDACTED]. Then I spent about a year doing independent research and leading two research projects at SPAR.
I’m now living in Pittsburgh, PA, doing AI cybersecurity research at the Software Engineering Institute at Carnegie Mellon. Every so often I come back to the Bay to visit my lovely girlfriend Lauren. In my free time I enjoy soft acrobatics, board games with friends, and reading a mixed bag of good nonfiction, good fiction, and truly awful fiction. Some of my favorite fiction books are Unsong, Anathem, A Fire Upon the Deep, There Is No Antimemetics Division, The Dark Forest, and The Slow Regard of Silent Things. Favorite nonfiction includes the Sequences, GEB, Inventing Temperature, Nonviolent Communication, and The Dictator’s Handbook.
If you want to get a sense of what it’s like to interact with me as a person, I’ve been told multiple times that I write exactly how I talk. I think (or at least I’d like to think) my dominant character traits are curiosity, earnestness, and an analytical bent. Some other quick indicators I can think of: my Myers-Briggs is consistently inconsistent at (I|E)NT(J|P), and my preferred Magic: The Gathering colors are Jeskai.
Links
- GitHub
- LessWrong
- <3 🙂
- The Public Part of my Notion
- Goodreads
- Effective Altruism Forum
- Metaculus
As of the last update to this page, I am not bound by any commitments which prevent me from disclosing their existence.