AI & Human Co-Improvement for Safer Co-Superintelligence
Jason Weston, Jakob Foerster
2025-12-08
Summary
This paper argues that instead of focusing solely on making AI 'self-improve,' we should concentrate on building AI that can *collaborate* with humans to improve *together*. The idea is to create a partnership in which AI systems and researchers work side by side to advance AI research, ultimately leading to safer and faster progress.
What's the problem?
The current push for 'self-improvement' in AI is risky and might take a very long time to succeed. An AI that tries to improve itself without human guidance could develop goals that aren't aligned with human values, or become unpredictable. Simply trying to make AI smarter on its own is a difficult and potentially dangerous path.
What's the solution?
The paper proposes 'co-improvement': specifically designing AI to be better at working *with* human researchers. This covers every stage of the research process, from coming up with new ideas to running experiments. By focusing on this collaborative capability, AI and humans can learn and improve together, which both safeguards the process and accelerates progress.
Why does it matter?
This approach matters because it offers a more realistic and safer path to advanced AI. Instead of hoping AI will figure things out on its own, co-improvement keeps humans actively involved, helping to steer development in a beneficial direction. The goal is not just to reach 'superintelligence' faster, but to reach it in a way that keeps both AI and humans safe and ensures both benefit.
Abstract
Self-improvement is a goal currently exciting the field of AI, but it is fraught with danger and may take time to fully achieve. We advocate that a more achievable and better goal for humanity is to maximize co-improvement: collaboration between human researchers and AIs to achieve co-superintelligence. That is, we specifically target improving AI systems' ability to work with human researchers to conduct AI research together, from ideation to experimentation, in order both to accelerate AI research and to endow AIs and humans alike with safer superintelligence through their symbiosis. Keeping human research improvement in the loop will get us there both faster and more safely.