Cognitive Collective


Research Engineer (Safety and Alignment), Post-Training



Menlo Park, CA, USA · New York, NY, USA
Posted on Tuesday, April 30, 2024

Character’s mission is to empower everyone with AGI. Our vision is to put our technology in people’s hands so they can use Character.AI at any moment of any day.

Character.AI is one of the world’s leading personal AI platforms. Founded in 2021 by AI pioneers Noam Shazeer and Daniel De Freitas, Character.AI is a full-stack AI company with a globally scaled direct-to-consumer platform. As of 2023 that platform was #2 in the space in user engagement. Character.AI is uniquely centered around people, letting users personalize their experience by interacting with AI “Characters.” The company achieved unicorn status in 2023 and was named Google Play’s AI App of the Year.

Noam co-invented the key tech powering LLMs and was recently named to TIME100’s Most Influential People in AI list. TIME called him “one of the most important and impactful people of the space’s past, present, and future.” Daniel created and led LaMDA, the breakthrough conversational tech project currently powering Bard.

As a Safety and Alignment Research Engineer on the Post-Training team, you’ll build tools to align our models and ensure they meet the highest standards of safety in the real world.

As ever more powerful AI models are deployed, tools to align and steer them become increasingly important. Your work will directly contribute to our groundbreaking advancements in AI, helping shape an era where technology is not just a tool, but a companion in our daily lives. At Character.AI, your talent, creativity, and expertise will not just be valued—they will be the catalyst for change in an AI-driven future.

About the role

The Post-Training team is responsible for developing our powerful pretrained language models into intelligent, engaging, and aligned products.

As a Post-Training Researcher focused on Safety and Alignment, you will work across teams and across our technical stack to improve model performance. You will get to shape the conversational experience enjoyed by millions of users per day, partnering closely with our Policy, Research, and Data teams and deploying your changes directly to the product.

Example projects:

  • Develop and apply preference alignment algorithms to guide model generations.

  • Train classifiers to identify model failure modes and adversarial usage.

  • Work with annotators and red-teamers to produce useful datasets for alignment.

  • Invent new techniques for guiding model behavior.
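As one illustration of the first project area, a widely used preference alignment method is Direct Preference Optimization (DPO), which trains a policy model on pairs of chosen and rejected responses without a separate reward model. The sketch below is our own minimal, self-contained example of the per-pair DPO loss (not Character.AI's implementation; the function name and inputs are illustrative):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for a single preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy and under a frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference model does.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid drives the margin to be large and positive.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy slightly prefers the chosen response relative to the reference:
print(round(dpo_loss(-12.0, -15.0, -13.0, -14.0), 4))  # → 0.5981
```

In practice the log-probabilities would come from a transformer's token-level outputs and the loss would be averaged over a batch; the scalar version above only shows the shape of the objective.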

Job Requirements

  • Ability to write clear, clean production-facing and training code

  • Experience working with GPUs (training, serving, debugging)

  • Experience with data pipelines and data infrastructure

  • Strong understanding of modern machine learning techniques (reinforcement learning, transformers, etc.)

  • Track record of exceptional research or creative applied ML projects

Nice to Have

  • Experience developing safety systems for UGC/consumer content platforms

  • Experience working on LLM alignment

  • Publications in relevant academic journals or conferences in the fields of machine learning or recommendation systems

Character is an equal opportunity employer and does not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. We value diversity and encourage applicants from a range of backgrounds to apply.