Our goal is to automate software engineering and research towards safe superintelligence

A safe transition to a post-AGI world requires the technical ability to steer AGI, a coordinated effort to steer it towards the right objectives, and society-wide robustness against adversarial actors using AGI. Quite uncomfortably, nobody knows how to reliably achieve these outcomes.

Here, we summarize our thinking on three priority subproblems we want to help address:

  1. AI Alignment

    Alignment is hard

    If robustly aligning superintelligence with human intent turns out to be harder than building it, we may want to use AI to autonomously design, develop, and evaluate new alignment techniques. On the first iteration, human researchers can review a shortlist of proposed approaches, allowing us to make use of imperfectly aligned superhuman systems. From there, the aligned model could iteratively develop and align stronger models autonomously (a conceptual sketch of this loop follows the list below).

  2. Capitalist safety

    Marrying the capitalist race and safety

    Capitalism is a wonderful optimizer for efficiency and progress, and it could become a force for solving technical safety problems if the race track is built so that competing requires solving them. Many companies have adopted voluntary safety policies (for example, here is our AGI Readiness Policy), but such frameworks would be most impactful if written into law, ideally internationally.

  3. Security standards

    Improving security standards

    Security is the Achilles' heel of hard-earned technological advantage. AI labs and large tech companies frequently fall victim to leaks and IP theft. Any organization pursuing AGI should be required to adopt cyber-, physical-, and information-security standards comparable to those in the defense and nuclear industries.
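
A minimal sketch of the bootstrapping loop from priority 1, under the assumption that "propose techniques", "review", and "train a successor" can be treated as distinct steps. Every name here (Model, propose_techniques, human_review, train_successor) is a hypothetical placeholder rather than real research infrastructure; the point is only the control flow: humans review the first shortlist, and each aligned generation then vets the techniques used to align the next.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Model:
    """Hypothetical stand-in for a frontier model at some capability level."""
    capability: int
    alignment_techniques: List[str] = field(default_factory=list)

    def propose_techniques(self) -> List[str]:
        # The model autonomously designs candidate alignment techniques.
        return [f"technique-{self.capability}.{i}" for i in range(3)]

    def evaluate(self, candidates: List[str]) -> List[str]:
        # An already-aligned model vets its successor's proposals.
        return candidates[:2]


def human_review(candidates: List[str]) -> List[str]:
    """First-iteration safeguard: human researchers review the shortlist."""
    return candidates[:1]


def train_successor(model: Model, techniques: List[str]) -> Model:
    """Train a stronger model, aligned using the vetted techniques."""
    return Model(capability=model.capability + 1, alignment_techniques=techniques)


def bootstrap_alignment(initial: Model, generations: int) -> Model:
    model = initial
    for g in range(generations):
        candidates = model.propose_techniques()
        # Humans review the first shortlist; later rounds are delegated to
        # the previously aligned model.
        vetted = human_review(candidates) if g == 0 else model.evaluate(candidates)
        model = train_successor(model, vetted)
    return model


if __name__ == "__main__":
    final = bootstrap_alignment(Model(capability=1), generations=3)
    print(final.capability, final.alignment_techniques)
```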


If you have thoughts on our approach, we would be delighted to hear from you at safety@magic.dev, and if our priorities resonate with you, we invite you to apply to join us.