LLM Evaluation Engineer

Remote, USA Full-time
About the CompanyThirdLaw is building the control layer for AI in the enterprise. As companies rush to adopt LLMs and AI agents, they face new safety, compliance, and operational risks that traditional observability tools were never designed to detect. Metrics like latency or cost don’t capture when a model makes a bad decision, leaks sensitive data, or behaves unpredictably. We help IT and Security teams answer the foundational question: "Is this OK?"—and take real-time action when it’s not. Backed by top-tier venture firms and trusted by forward-looking enterprise design partners, we’re building the infrastructure to monitor, evaluate, and control AI behavior in real-world environments—at runtime, where it matters.If you're excited to build systems that help AI work as intended—and stop it when it doesn’t—we’d love to meet you. About the RoleYou’ll build the evaluation layer in the ThirdLaw platform—the part of the system that decides whether an LLM prompt, response, tool call, or agent behavior is acceptable. This includes designing and tuning guardrails, classifiers, and semantic judgment systems that operate in real-time. You'll integrate foundation models, similarity search, rules engines, and prompt templates to power high-precision, low-latency policy enforcement.This is not an experimental ML research role—it’s a product-critical engineering role. You’ll work with structured trace data, foundation models, and real-world constraints to build AI safety systems that actually ship. If you’ve built LLM-powered tools and care deeply about how they behave, this is your chance to help define what “trustworthy AI” means in enterprise environments. What You’ll Do• Design and build real-time evaluation logic that determines whether LLM prompts or outputs violate enterprise policies.• Implement evaluation strategies using a mix of semantic similarity, foundation model scoring, rule-based systems, and statistical checks. • Integrate model outputs with downstream enforcement actions (e.g. redaction, escalation, blocking). • Prototype, tune, and productize small language models and prompt templates for classification, labeling, or scoring. • Collaborate with data infrastructure engineers to connect evaluation logic with ingestion and storage layers. • Build tools to observe, debug, and improve evaluator performance across real-world data distributions.• Define abstractions for reusable evaluation components that can scale across use cases. Who We're Looking ForRequired• 7+ years of experience in ML systems or AI engineering roles, with at least 1–2 years working directly with LLMs, NLP pipelines, or semantic search. • Deep understanding of foundation models (e.g. OpenAI, Claude, Mistral, Llama) and how to work with them via APIs or open source. • Hands-on experience with vector search (e.g. FAISS, Qdrant, Weaviate) and embeddings pipelines. • Proven ability to implement real-time or near-real-time evaluation logic using semantic similarity, classifier scoring, or structured rules.• Strong in Python, with familiarity using libraries like Hugging Face Transformers, LangChain, and PyTorch or TensorFlow. • Ability to reason about model behavior, test prompt configurations, and debug complex decision logic in production. Nice-to-Have• Experience with OpenTelemetry, Model Context Protocol (MCP), or structured tracing of multi-agent or multi-model pipelines. • Experience with red-teaming, AI risk taxonomies, or safety audits for LLM-based systems. • Based in or willing to spend time in the San Francisco Bay Area for in-person collaboration.Why Apply? Our team is small and focused, valuing autonomy and real impact over titles and management. We need strong technical skills, a proactive mindset, and clear written communication, as much of our work is asynchronous. If you're organized, take initiative, and want to work closely with customers to shape our products, you'll fit in well here. Finally, we pay market cash compensation and generally above-market equity. The compensation package for this role is benchmarked using Carta TotalCompensation and reflects real-time market data for our company’s size, this role’s level, and your geographic location.We have well-designed and generous benefits. Apply tot his job
Apply Now

Similar Jobs

Python LLM Engineer - Lithuania

Remote, USA Full-time

GenAI / LLM Engineer - Remote (should be able to work on PST time zones)

Remote, USA Full-time

LLM Engineer

Remote, USA Full-time

LLM Developer | Travoom | Remote (United States)

Remote, USA Full-time

Remote Live Chat Agent PT/FT in Brainerd, MN

Remote, USA Full-time

Live Chat and Email Support Agent Job

Remote, USA Full-time

Chat Support Agent (Remote) - Entry Level, No Degree Required - 15 - 18 per Hour

Remote, USA Full-time

Class Action Employment Attorney

Remote, USA Full-time

Experienced Data Entry and e-Filing Specialist for Remote Legal Services Support – Utilizing Advanced Technology for Process Efficiency

Remote, USA Full-time

Docket & Litigation Support Specialist (National Law Firm)

Remote, USA Full-time

Remote Notary Scheduler / Coordinator (11am-8pm PST)

Remote, USA Full-time

Don't See a Role For You? Submit an application anyway!

Remote, USA Full-time

Online Cybersecurity Operations Analyst

Remote, USA Full-time

Principal Systems Engineer -Architect (CRDN), Mounds View, MN

Remote, USA Full-time

Sr. Manager, Sales Operations, Remote

Remote, USA Full-time

[Hiring] Medical Dosimetrist @WVUH West Virginia University Hospitals

Remote, USA Full-time

cybersecurity engineer sr. (Hybrid Seattle)

Remote, USA Full-time

Genetic Counselor, Patient Counseling

Remote, USA Full-time

fulltime React developer ( remote till pandemic) Richardson TX

Remote, USA Full-time

Assistant Construction Project Manager (Wind / Renewables - Nationwide Opportunities!)

Remote, USA Full-time
Back to Home