Evaluating Stability of Unreflective Alignment

This post has an accompanying SPAR project! Apply here if you're interested in working on this with me. Huge thanks to Mikita Balesni for helping me implement the MVP. Regular-sized thanks to Aryan Bhatt, Rudolph Laine, Clem von Stengel, Aaron Scher, Jeremy Gillen, Peter Barnett, Stephen Casper, and David Manheim for helpful comments. 0. Key Claims Most alignment work today doesn't aim for alignment that is stable under value-reflection. I…

Research Retrospective, Summer 2022

Context: I keep wanting one place to refer to the research I did in Summer 2022, and the two LessWrong links are kind of big and clunky. So here we go! Figured I'd add some brief commentary while I'm at it, mostly just so this isn't a totally empty linkpost. In Summer 2022 I did AI alignment research at MIRI under Evan Hubinger's mentorship. It was a lot like SERI MATS, but…

In Search of Strategic Clarity

Context: quickly written up, less original than I expected it to be, but hey that's a good sign. It all adds up to normality. The concept of "strategic clarity" has recently become increasingly important to how I think. It doesn't really have a precise definition that I've seen - as far as I can tell it's mostly just used to point to something roughly like "knowing what the fuck is…

Optimization and Adequacy in Five Bullets

Context: Quite recently, a lot of ideas have sort of snapped together into a coherent mindset for me. Ideas I was familiar with, but whose importance I didn't intuitively understand. I'm going to try and document that mindset real quick, in a way I hope will be useful to others. Five Bullet Points By default, shit doesn't work. The number of ways that shit can fail to work absolutely stomps…

What I Got From EAGx Boston 2022

Epistemic Status: partially just processing info, partially publishing for feedback, partially encouraging others to go to EAG conferences by demonstrating how much value I got out of my first one. The following is a summary of what I took away from EAGx Boston overall - it's synthesized from a bunch of bits and pieces collected in 30-minute conversations with 19 really incredible people, plus some readings that they directed me…

Unfinished Thoughts on ELK

Epistemic Status: posting for mostly internal reasons - to get something published even if I don't have a complete proposal yet, and to see if anything new crops up while summarizing my thoughts so far. For context, ELK is a conceptual AI safety research competition by ARC, more info here. In this post I will document some ideas I've considered, showing the general thought process, strategy, obstacles, and current state…

Moravec’s Paradox Comes From The Availability Heuristic

Epistemic Status: very quick one-thought post, may very well be arguing against a position nobody actually holds, but I haven't seen this said explicitly anywhere so I figured I would say it. Setting Up The Paradox According to Wikipedia: Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. (https://en.wikipedia.org/wiki/Moravec's_paradox) I…

True EA Alignment is Overrated

Epistemic Status: simple thought, basically one key insight, just broadcasting because I think people will find it useful. Among the EA folks I talk to, there's a fairly common recurring worry about whether or not they're "truly aligned". In other words, EAs tend to worry about whether they're really motivated to do good in the world, or if they're secretly motivated by something else that leads to EA-like behavior as…

Which Things Are Worth Memorizing?

Epistemic status: one or two hours' worth of thought, reasoning feels pretty clean from the inside. I think I'd be willing to bet on this producing a small effect, but nothing too important. As far as I can tell, there are two reasons to memorize things - fast access and idea generation. Information stored in your brain is accessible very quickly, about 10x or 100x faster than retrieval from the…

Quick Reminder That Your Ability to Do Good Is Ridiculous

Epistemic status: definitely old hat, certainly not my original thoughts, but, you know, it's worth reiterating every now and then. The Effective Altruism movement in general tends to have problems with feelings of powerlessness. People feel that even if they are on a high-impact track, they still won't be able to make a dent in the global scale of problems. Even worse, people feel that if they aren't a genius…