Evaluating Stability of Unreflective Alignment

This post has an accompanying SPAR project! Apply here if you're interested in working on this with me. Huge thanks to Mikita Balesni for helping me implement the MVP. Regular-sized thanks to Aryan Bhatt, Rudolph Laine, Clem von Stengel, Aaron Scher, Jeremy Gillen, Peter Barnett, Stephen Casper, and David Manheim for helpful comments. 0. Key Claims Most alignment work today doesn't aim for alignment that is stable under value-reflection[1]. I…

In Search of Strategic Clarity

Context: quickly written up, less original than I expected it to be, but hey that's a good sign. It all adds up to normality. The concept of "strategic clarity" has recently become increasingly important to how I think. It doesn't really have a precise definition that I've seen - as far as I can tell it's mostly just used to point to something roughly like "knowing what the fuck is…

Optimization and Adequacy in Five Bullets

Context: Quite recently, a lot of ideas have sort of snapped together into a coherent mindset for me. Ideas I was familiar with, but whose importance I didn't intuitively understand. I'm going to try and document that mindset real quick, in a way I hope will be useful to others. Five Bullet Points By default, shit doesn't work. The number of ways that shit can fail to work absolutely stomps…

What I Got From EAGx Boston 2022

Epistemic Status: partially just processing info, partially publishing for feedback, partially encouraging others to go to EAG conferences by demonstrating how much value I got out of my first one. The following is a summary of what I took away from EAGx Boston overall - it's synthesized from a bunch of bits and pieces collected in 30-minute conversations with 19 really incredible people, plus some readings that they directed me…

Unfinished Thoughts on ELK

Epistemic Status: posting for mostly internal reasons - to get something published even if I don't have a complete proposal yet, and to see if anything new crops up while summarizing my thoughts so far. For context, ELK is a conceptual AI safety research competition by ARC, more info here. In this post I will document some ideas I've considered, showing the general thought process, strategy, obstacles, and current state…

Moravec’s Paradox Comes From The Availability Heuristic

Epistemic Status: very quick one-thought post, may very well be arguing against a position nobody actually holds, but I haven't seen this said explicitly anywhere so I figured I would say it. Setting Up The Paradox According to Wikipedia: Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. (https://en.wikipedia.org/wiki/Moravec's_paradox) I…