Model Policy Lead, Video Policy - Trust & Safety

TikTok

4.5 (6 reviews)

Dublin, Ireland

Why you should apply for a job to TikTok:

  • 4.5/5 in overall job satisfaction
  • 4.5/5 in supportive management
  • 100% say women are treated fairly and equally to men
  • 100% would recommend this company to other women
  • 100% say the CEO supports gender diversity
  • Ratings are based on anonymous reviews by Fairygodboss members.
  • Employee well-being is supported via hybrid work, short-term counseling through our EAP and a premium subscription to Headspace.
  • We embrace diversity across all dimensions and provide employees with 9 employee resource groups globally, including our WOMEN ERG.
  • Comprehensive parental leave policy as well as fertility treatment through healthcare providers with a $20,000 lifetime maximum.

    Position summary

    You will ensure that enforcement policies - and the quality standards that meet or improve human reviewer precision - are consistently iterated across all machine enforcement pathways, maintaining unified and transparent enforcement standards.

    You will lead policy governance across four model enforcement streams central to TikTok's AI moderation systems:

    1. At-Scale Moderation Models (ML Classifiers) - Own policy alignment and quality monitoring for high-throughput classifiers processing hundreds of millions of videos daily. These models rely on static training data and operate without prompt logic - requiring careful threshold setting, false positive/negative analysis, and drift tracking.
    2. At-Scale AI Moderation (LLM/CoT-Based) - Oversee CoT-based AI moderation systems handling millions of cases per day. Your team produces Chain-of-Thought (CoT) decision logic, structured labeling guidelines, and dynamic prompts to interpret complex content and provide a policy assessment. Your team also manages accuracy monitoring, labeling frameworks, and precision fine-tuning.
    3. Model Change Management - Ensure consistent enforcement across human and machine systems as policies evolve. You will lead the synchronization of changes across ML classifiers, AI models, labeling logic, and escalation flows to maintain unified, up-to-date enforcement standards.
    4. Next-Bound AI Projects (SOTA Models) - Drive development of high-accuracy, LLM-based models used to benchmark and audit at-scale enforcement. These projects are highly experimental and sit at the forefront of LLM application in real-world policy enforcement and quality validation.

    Together, these streams define TikTok's model-led enforcement infrastructure. Your role is to close the quality gap - ensuring that scale does not come at the cost of precision, and that every AI decision reflects a consistent, up-to-date, and defensible application of policy.

    This is a high-impact leadership role that requires strong policy intuition, data fluency, and a deep curiosity about how AI technologies shape the future of Trust and Safety. You'll work closely with stakeholders across Product, Engineering, Responsible AI, Ops, and Policy.

    Responsibilities

    • Lead a team of Policy Analysts responsible for model governance across ML classifiers and LLM-based AI moderation systems;
    • Translate human moderation policies into model-readable logic - including Chain-of-Thought Decision Trees, labeling frameworks, and prompt design standards;
    • Own model performance tracking through key enforcement metrics, and drive root-cause analysis (RCA) cycles to identify and close quality gaps;
    • Oversee policy alignment for large-scale classifiers and LLM moderation, ensuring enforcement consistency across hundreds of millions of daily content reviews;
    • Build and maintain labeling systems for CoT-based AI models, including quality testing, iteration workflows, and resource planning;
    • Lead cross-system change management, ensuring that policy iterations are reflected consistently across human reviewers, classifiers, and AI models;
    • Guide the development of next-bound SOTA models, defining policy goals, labeling requirements, and use-case applications;
    • Partner with Engineering, Product, Ops, and Policy to align on enforcement strategy, rollout coordination, and long-term model enforcement and detection priorities.

    Qualifications

    Minimum Qualifications:

    • You have at least 5 years of experience in Trust & Safety, ML governance, moderation systems, or related policy roles;
    • You have experience managing or mentoring diverse, global, small to medium-sized teams;
    • You have a proven ability to lead complex programs with global cross-functional stakeholders;
    • You have a strong understanding of AI/LLM systems, including labeling pipelines and CoT-based decision logic;
    • You are comfortable working with quality metrics and enforcement diagnostics - including FP/FN tracking, RCAs, and precision-recall tradeoffs;
    • You are a confident self-starter with excellent judgment who can balance multiple trade-offs to develop principled, enforceable, and defensible policies and strategies. You communicate persuasively, translating complex challenges into simple, clear language and winning over cross-functional partners in a dynamic, fast-paced, and often uncertain environment;
    • You have a bachelor's or master's degree in artificial intelligence, public policy, politics, law, economics, behavioral sciences, or a related field;

    Preferred Qualifications:

    • Experience working in a start-up, or being part of new teams in established companies
    • Experience in prompt engineering
