A freshman year during the AI midgame: my approach to the next year, by Buck
Published by Buck on April 14, 2023, on the Effective Altruism Forum.

I recently spent some time reflecting on my career and my life, for a few reasons:
- It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year.
- It seems like AI progress is heating up.
- It felt like a good time to reflect on how Redwood has been going, because we've been having conversations with funders about getting more funding.

I wanted to have a better answer to these questions:
- What's the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?
- How much urgency should I feel in my life? How hard should I work?
- How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?

In summary:
- For the purposes of planning my life, I'm going to act as if there are four years before AGI development progresses enough that I should substantially change what I'm doing with my time, and then three years after that before AI has transformed the world unrecognizably.
- I'm going to treat this phase of my career with the urgency of a college freshman looking at their undergrad degree: every month is 2% of the degree, which is a nontrivial fraction, but they should also feel like they have a substantial amount of space to grow and explore.

The AI midgame

I want to split the AI timeline into the following categories.

The early game, during which interest in AI is not mainstream. I think this ended within the last year.

The midgame, during which interest in AI is mainstream but AGI is not yet imminent. During the midgame:
- The AI companies are building AIs that they don't expect will be transformative.
- The alignment work we do is largely practice for alignment work later, rather than an attempt to build AIs that we can get useful cognitive labor from without them staging coups.
- For the purpose of planning my life, I'm going to imagine this phase as lasting four more years. This is shorter than my median estimate of how long it will actually last.

The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative and that pose existential takeover risk.
- During the endgame, I think we shouldn't count on having time to develop fundamentally new alignment insights or techniques (except maybe if AIs do most of the work? I don't think we should count on this); we should plan to mostly execute on alignment techniques whose ingredients seem immediately applicable.
- For the purpose of planning my life, I'm going to imagine this phase as lasting three years. This is about as long as I expect it to actually take.

I think this division matters because several aspects of my current work seem like they're optimized for the midgame, and I should plausibly do something very different in the endgame. Features of my current life that should plausibly change in the endgame:
- I'm doing blue-sky alignment research into novel alignment techniques; during the endgame, it might be too late to do this.
- I'm working at an independent alignment org and not interacting with labs that much. During the endgame, I probably either want to be working at a lab or doing something else that involves interacting with labs a lot. (I feel pretty uncertain about whether Redwood should dissolve during the AI endgame.)
- I spend a lot of my time constructing alignment cases that I think are analogous to difficulties that we expect to face later. During the endgame, you probably have access to the strategy "observe/construct alignment cases that are obviously scary in the models you have"...
