#R154397
…prompt injection.
Externally, the role should have strong networks with AI vendors and industry experts.
This role also ensures information security of Mars AI solutions by bringing AI and security expertise to:
Identifying and remediating AI risks, scanning for emerging technology developments and risks, and recommending guardrails and mitigations.
Leading an enterprise team to assess all Mars AI use cases using a risk-based approach and the enterprise Responsible Use of AI Policy, including appropriate escalation processes, monitoring, and continuous process improvement.
Collaborating across Mars businesses, data & analytics teams, cybersecurity, and the AI Responsibility Council to determine Associate guidelines and share them through learning and communications.
Responsibilities:
Impact Assessment
Lead and manage the AI Impact Assessment process, ensuring it remains robust and scalable, delivers on SLAs, and is continuously measured and improved.
Design business processes to ensure seamless workflow with security and governance processes and business units (e.g., expand impact assessment to all AI and embed lower-risk use cases into business units).
Enhance automation of the impact assessment workflow.
Manage AI inbox emails and escalations.
Infuse AI Info Security throughout Mars:
Understand technology developments, proactively recommend Mars stance on AI developments, and work with Cybersecurity to implement changes.
Educate Associates through the AI Responsibility SharePoint and Risk Navigator, incorporating updates related to audit actions, emerging technologies (e.g., Agentic and physical AI), and evolving risk categories, as well as through AI Community calls and Responsible Use of AI training.
Collaborate with the Platforms, Tools & Tech / Digital Experiences team and vendors to update AI Responsibility guidelines and standards in response to advancements in models, platforms, and AI capabilities.
Integrate tech guidelines into the Impact Assessment process, including AI monitoring, AI security, and platform-specific rules, ensuring consistency and compliance. Design and implement IT controls for AI.
Provide strategic guidance on emerging technologies, such as Agentic AI, physical AI, and quantum computing, helping teams navigate novel risks and opportunities.
Work closely with Legal, IP/MP, DPIA teams to evolve the Impact Assessment process in line with regulatory and policy changes.
Understand regulatory developments (e.g., EU AI Act, NIST frameworks) and proactively update internal processes to maintain compliance and readiness.
Tracking and admin
Report out key metrics for impact assessments and other measurable activities (e.g., attestations, trainings).
Maintain budgets, SOWs, POs, and Demand Creation.
Qualifications:
1. Education & Professional Qualification
Technical degree
Significant experience in digital technologies governance
2. Knowledge/Experience
Strong technical experience in AI/ML lifecycle management, Agentic AI, AI and model governance, security, validation, and monitoring
Deep familiarity with emerging AI regulations, standards, and risk frameworks
Experience collaborating with cross-functional executive stakeholders (Risk, Legal, Compliance, IT, Data, Security)
Ability to communicate complex AI risks clearly to both technical and non-technical audiences
Strong follow-through, execution, and attention to detail in complex environments with multiple stakeholders
Track record of developing external networks to stay connected with the latest trends and to build best practices and strategies
A history of developing strong internal networks, with the ability to inspire others and help them build capabilities
Ability to set and achieve goals and manage multiple projects and priorities
Excellent ability to communicate (oral and written) and create presentations/visual materials
Business process design capabilities
#TBdigital
#hybrid