RemoteOtter

Open Source AI Engineer - Evals - Remote

Posted 3 weeks ago

Overview

The Opportunity: AI is rapidly transforming the world. Whether it’s developing the next generation of human-level intelligence, enhancing voice assistants, or enabling researchers to analyze genetic markers at scale, AI is increasingly integrated into our daily lives. Arize is the leading AI observability and evaluation platform, helping AI teams discover issues, diagnose problems, and improve the results of their AI applications. We are here to build world-class software that helps make AI applications work better.

In Short

  • Build LLM Eval Frameworks: Design, architect, and open-source new libraries, pipelines, and APIs that make it simpler to evaluate LLM output quality, consistency, and reliability at scale.
  • Define Metrics and Benchmarks: Curate golden datasets and develop robust benchmark metrics that guide data scientists and AI practitioners in optimizing their AI tasks.
  • Collaborate with the Community: Partner closely with the broader AI open source ecosystem, gather feedback, review pull requests, and steer the direction of the project to address real developer needs.
  • Prototype and Iterate Rapidly: Experiment with state-of-the-art LLM techniques, turning research into practical developer tooling.
  • Improve Observability and Debugging: Integrate with our existing platform to surface deeper insights on LLM behavior—help teams quickly diagnose and fix issues such as hallucinations or bias.
  • Educate and Evangelize: Write blog posts, white papers, tutorials, and documentation to help developers succeed with our open source tools and grow the LLM eval community.
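In minimal form, the eval-framework work described above amounts to scoring model outputs against a curated golden dataset. The sketch below is purely illustrative; the names (`EvalExample`, `run_eval`, `exact_match`) are hypothetical and do not reflect Arize's actual APIs:

```python
from dataclasses import dataclass

@dataclass
class EvalExample:
    """One golden-dataset record: an input prompt plus the expected answer."""
    prompt: str
    expected: str

def exact_match(output: str, expected: str) -> float:
    """Simplest eval metric: 1.0 if normalized strings match, else 0.0."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_eval(model, dataset, metric=exact_match) -> float:
    """Run `model` (any callable prompt -> str) over the golden dataset
    and return the mean metric score."""
    scores = [metric(model(ex.prompt), ex.expected) for ex in dataset]
    return sum(scores) / len(scores) if scores else 0.0

# Usage with a stub "model" standing in for an LLM call:
golden = [
    EvalExample("What is 2 + 2?", "4"),
    EvalExample("Capital of France?", "Paris"),
]
stub_model = lambda prompt: "4" if "2 + 2" in prompt else "Rome"
score = run_eval(stub_model, golden)  # 0.5: one of two answers matches
```

Real frameworks swap `exact_match` for richer metrics (LLM-as-judge, semantic similarity, hallucination checks), but the dataset-plus-metric loop stays the same.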

Requirements

  • Open Source Champion: You believe collaboration and community-driven development unlock the best innovations.
  • Creative Problem Solver: You enjoy tackling ambiguous challenges and finding elegant technical solutions.
  • Data & Metrics Driven: You value empirical results, enjoy creating or refining evaluation metrics, and iterate based on real-world feedback.
  • Technically Curious: You’re always learning—exploring new LLM architectures, prompt engineering strategies, or emerging library standards.
  • Builder Mindset: You relish the process of taking ideas from initial prototypes to production-ready solutions that delight users.

Benefits

  • Shape the Future of AI Evaluation: Be at the forefront of designing new ways to measure and improve next-generation LLMs.
  • High Impact, Real Ownership: Join a team that values autonomy and speed. You’ll drive major initiatives from day one and see your work used by developers worldwide.
  • Fully Remote, Flexible Environment: We are a fully remote company with offices in the Bay Area and NYC for those who prefer in-person collaboration.
  • Cutting-Edge Challenges: Our platform already helps analyze millions of AI predictions daily, giving you the chance to refine your evaluation tooling on real, large-scale production workloads.
  • Work With a Talented, Passionate Team: Collaborate closely with top engineers who are dedicated to making AI more transparent, reliable, and impactful.
