AI Predictions 2026

Hey man, it's still January; these still count as New Year's predictions. 2025 Predictions in Hindsight: At the start of 2025 I made some predictions via the Sage AI 2025 Forecasting Survey. Let's see how I did and what we can learn: RE-bench is at 1.13, I predicted 1.1. SWE-bench Verified is at 80.9%, I predicted 87%. Cybench is at 82%, I predicted 78%. OSWorld is at 72.6%, I predicted 50%. FrontierMath is at…

How to Tell if You’ve Instilled a False Belief in Your LLM

In the spirit of better late than never - this has been sitting in drafts for a couple months now. Big thanks to Aryan Bhatt for helpful input throughout. Thanks to Abhay Sheshadri for running a bunch of experiments on other models for me. Thanks to Rowan Wang, Stewy Slocum, Gabe Mukobi, Lauren Mangla, and assorted Claudes for feedback. Summary: It would be useful to be able to make LLMs…

Evaluating Stability of Unreflective Alignment

This post has an accompanying SPAR project! Apply here if you're interested in working on this with me. Huge thanks to Mikita Balesni for helping me implement the MVP. Regular-sized thanks to Aryan Bhatt, Rudolph Laine, Clem von Stengel, Aaron Scher, Jeremy Gillen, Peter Barnett, Stephen Casper, and David Manheim for helpful comments. 0. Key Claims: Most alignment work today doesn't aim for alignment that is stable under value-reflection.¹ I…

Research Retrospective, Summer 2022

Context: I keep wanting one place to refer to the research I did in Summer 2022, and the two LessWrong links are kind of big and clunky. So here we go! Figured I'd add some brief commentary while I'm at it, mostly just so this isn't a totally empty linkpost. Summer 2022: I did AI Alignment research at MIRI under Evan Hubinger's mentorship. It was a lot like SERI MATS, but…

What I Got From EAGx Boston 2022

Epistemic Status: partially just processing info, partially publishing for feedback, partially encouraging others to go to EAG conferences by demonstrating how much value I got out of my first one. The following is a summary of what I took away from EAGx Boston overall - it's synthesized from a bunch of bits and pieces collected in 30-minute conversations with 19 really incredible people, plus some readings that they directed me…

Unfinished Thoughts on ELK

Epistemic Status: posting for mostly internal reasons - to get something published even if I don't have a complete proposal yet, and to see if anything new crops up while summarizing my thoughts so far. For context, ELK is a conceptual AI safety research competition by ARC, more info here. In this post I will document some ideas I've considered, showing the general thought process, strategy, obstacles, and current state…

Moravec’s Paradox Comes From The Availability Heuristic

Epistemic Status: very quick one-thought post, may very well be arguing against a position nobody actually holds, but I haven't seen this said explicitly anywhere so I figured I would say it. Setting Up The Paradox: According to Wikipedia, "Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources" (https://en.wikipedia.org/wiki/Moravec's_paradox). I…