Research at MAR
We conduct fundamental research to understand and develop AI systems that are safe, beneficial, and interpretable.
Our mission
Building AI that benefits humanity
Our research is guided by the belief that the most capable AI systems should also be the safest. We pursue this through a combination of empirical research, theoretical analysis, and close collaboration with the broader AI safety community.
We publish our findings openly and engage with policymakers, academics, and other organizations to ensure that AI development proceeds in a way that is safe and beneficial for everyone.
Focus areas
Core research directions
Language Understanding
Advancing how AI systems comprehend and reason about natural language, from basic understanding to complex multi-step reasoning.
Safety & Alignment
Ensuring AI systems behave as intended, remain helpful, and avoid harmful outputs through robust alignment techniques.
Interpretability
Understanding the internal workings of neural networks to build more transparent and trustworthy AI systems.
Efficiency
Developing methods to reduce computational requirements while maintaining or improving model capabilities.
Join our research team
We are looking for exceptional researchers to help us advance the frontier of safe, capable AI. If you are passionate about building beneficial AI, we would love to hear from you.
View open positions