and managing ground truth alignment to ensure moderation consistency and scalability.
Responsibilities:
- Design end-to-end enforcement strategies for policy execution (machine + human).
- Lead the creation of machine-executable policy frameworks and signal logic.
- Own end-to-end machine moderation guidance creation and iteration across policies and business lines.
- Ensure alignment between policy intent and machine-enforceable guidance.
- Optimize policy-specific moderation guidance for accuracy, efficiency, and consistency.
- Own the Ground Truth Golden Set framework and RCA processes.
- Collaborate closely with Algo, Tech, Governance, and Policy Ops for model readiness.
- Manage Agent QA support, RAG operations, and escalation workflows.
Qualifications
Minimum Qualifications:
- 3-5 years of experience in engineering, auto-moderation, or related areas.
- Experience working with machine learning teams or on algorithm design preferred.
- Strong understanding of content moderation workflows (human & automated).
- Strong analytical mindset and ability to translate policy into executable logic.
- Proven ability to lead complex workflows and stakeholder programs.
Preferred Qualifications:
- Experience with digital advertising and platform ecosystems.