About Us
FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response. Founded in July 2022, we have grown quickly to 30+ staff. We are uniquely positioned to conduct technical research at a scale surpassing academia while leveraging the research freedom of a non-profit. Our work is published at top conferences (e.g. NeurIPS, ICLR, ICML) and cited by leading media outlets such as the Financial Times, Nature News and MIT Technology Review.
FAR.AI uses three prongs working together to improve AI safety:
- FAR.Research - we conduct cutting-edge AI safety research in-house and award grants to support the wider research community.
- FAR.Futures - we bring together key policymakers, researchers and companies to drive change, through initiatives such as the San Diego Alignment Workshop and the Guaranteed Safe AI research roadmap written with Yoshua Bengio.
- FAR.Labs - we host a co-working space in Berkeley to help incubate other AI safety organizations, currently housing 40 members.
About FAR.Research
We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Once the core research problems are solved, we scale the solutions into a minimum viable prototype, demonstrating their validity to AI companies and governments to drive adoption.
We aim to rapidly grow our team, at varying levels of seniority, especially in the following areas:
- Evals and red-teaming: Conducting pre- and post-release adversarial evaluations of frontier models (e.g. Claude 4 Opus, ChatGPT Agent, GPT-5); developing novel attacks to support this work; and exploring new threat models (e.g. persuasion, tampering risks).
- Infrastructure: Maintaining GPU compute infrastructure to support experiments with open-weight models and developing new tooling to allow our research teams to scale their fine-tuning and post-training workflows to frontier open-weight models.
We are also seeking more senior candidates in the following research areas:
- Mitigating AI deception: Studying when lie detectors induce honesty or evasion, and developing mitigations for deception and sandbagging.
- Adversarial Robustness: Building a rigorous science of security and robustness for AI, from demonstrating that superhuman systems can be vulnerable, to scaling laws for robustness and jailbreaking constitutional classifiers.
- Mechanistic Interpretability: Finding issues with Sparse Autoencoders, probing deception using AmongUs, understanding learned planning in Sokoban, and interpretable data attribution.
FAR.AI is one of the largest independent AI safety research institutes, and is rapidly growing with the goal of diversifying and deepening our research portfolio. If you are a senior researcher with a strong vision for a new research direction, we would welcome the opportunity to hear your pitch.
About the Role
We organize our team as Members of Technical Staff, with significant overlap between scientist and engineer roles. As an engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You can contribute to open-source codebases such as PyTorch, HuggingFace Transformers and Accelerate. You will receive engineering mentorship via code review, pair programming and regular 1-to-1s. Alongside the scientists, you will be involved in writing up results and will be credited as an author on papers.
You are encouraged to develop your research taste, proposing novel directions and joining a research pod that suits your interests. You are welcome to take time to study and to attend conferences free of charge. Our technical team is organized into research pods to provide organizational continuity while allowing each pod to pivot across varied research projects.
Beyond FAR.AI, you can work with national AI safety institutes, frontier model developers and top academics.
About You
It is essential that you:
- Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
- Have experience with at least one object-oriented programming language (preferably Python).
- Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
- Common ML frameworks like PyTorch or TensorFlow.
- Natural language processing or reinforcement learning.
- Operating system internals and distributed systems.
- Publications or open-source software contributions.
- Basic linear algebra, calculus, probability, and statistics.
We encourage applications from strong software engineers who are new to ML, and from academics without industrial experience in software engineering.
Logistics
If based in the USA, you will be an employee of FAR.AI, a 501(c)(3) research non-profit. Outside the USA, you will be an employee of an employer-of-record (EoR) organization on behalf of FAR.AI.
- Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.
- Hours: Full-time (40 hours/week).
- Compensation: $100,000-$190,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.
- Application process: A 72-minute programming assessment, two interviews with members of our technical staff, and a paid work trial lasting up to 1 week. If you are not available for a work trial we may be able to find alternative ways of testing your fit.
If you have any questions about the role, please do get in touch at ...@far.ai.
Otherwise, the best way to ensure a proper review of your skills and qualifications is to apply directly via the application form.
Please don't email us to share your resume (it won't have any impact on our decision). Thank you!