#3341
governance fit into NAVEX's existing product and data stack. You will design an "AI core services" layer with modular, secure, reusable components and reference architectures that product teams can build against. Your work will cover LLM routing, retrieval-augmented generation (RAG), agent orchestration, tool integration, evaluation and observability, and governance, so that teams can ship agentic features that are secure, auditable, and scalable. If you want to shape the foundational blueprint for enterprise-grade agentic AI systems that drive crucial compliance and ethics decisions, this role is for you.
You'll thrive in this role surrounded by an engaged, collaborative team deeply committed to your success. Join us and help shape what's next!
What you'll get:
Meaningful Purpose. Your work helps organizations operate with integrity and protect their people, at a scale few companies can match.
High-Performance Environment. We move with urgency, set ambitious goals, and expect excellence. You'll be trusted with real ownership and supported to do the best work of your career.
Candid, Supportive Culture. We communicate openly, challenge ideas (not people), and value teammates who embrace bold thinking and continuous improvement.
Growth That Matters. You can count on authentic feedback, strong accountability, and leaders invested in your success so you can achieve real growth.
Rewards for Results. We provide clear, competitive compensation designed to recognize measurable outcomes and real impact.
What you'll do:
Define the agentic reference architecture and core services: establish reusable architecture patterns and standards for scalability, interoperability, composability, and governance; maintain architectural artifacts related to orchestration and retrieval
Own technical design choices for agent building blocks: define the agent runtime shape, including orchestration mechanisms, model routing, semantic caching, tool usage, and multi-agent communication patterns, as well as the APIs and contracts that application teams can consume
Architect retrieval and knowledge grounding mechanisms (RAG) so that agents can use explicit knowledge sources, combining parametric and non-parametric memory to improve factuality and provide provenance
Co-design and implement governance mechanisms with security and data governance partners (including data lineage, content safety, bias and skew mitigation, and access control), informed by recognized AI governance standards and the NIST AI Risk Management Framework
Architect least-privilege tool access, approval gates and human-in-the-loop designs, and strict output-validation boundaries to mitigate common LLM application risks
Define standards for versioning, evaluation, observability, and continuous improvement, including how telemetry is captured and how changes are released safely
Drive architectural decisions for deterministic AI release control, rollback mechanisms, and evaluation-gated deployment strategies
Participate in design reviews, threat modeling, and compliance architecture assessments
Communicate architectural decisions and tradeoffs to engineering leadership and cross-functional stakeholders
Ensure architecture remains modular and provider-flexible, avoiding vendor lock-in while accelerating delivery
What you'll bring:
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
12+ years' experience in software architecture or systems engineering, with evidence of designing and shipping production systems at scale and producing architecture documentation and standards that teams actually follow
Shipped at least one production LLM or agent system. A strong proxy: you have shipped a production AI application using retrieval plus a basic evaluation and guardrail pipeline, and can explain tradeoffs around model selection, retrieval quality, and observability
Deep understanding of LLM-based architectures, agentic systems, and AI orchestration patterns including multi-step planning, tool use, and knowledge systems that help agents improve
Cloud-native and platform engineering competence: comfort with cloud-native architecture and reusable platform components, including APIs, orchestration, and operational considerations, particularly AWS services such as Bedrock, Lambda, and Step Functions
Governance mindset: ability to translate AI risk governance needs (accountability mechanisms, lifecycle evaluation, traceability, reliability) into enforceable technical controls and documentation
Practical agent-patterns literacy: understanding how tool-using agents work (reasoning and acting loops) and what can go wrong in multi-step plans
Experience designing multi-tenant, governed AI platforms at production scale
Culture Agility. Comfort working in a fast-paced, candid environment that values innovation, healthy debate, and follow-through
Fuel Performance and Outcomes. Leverage your job competencies and champion NAVEX's core values
Our side of the deal: