and task allocation. You will be at the center of a cross-functional initiative to improve how data signals are captured, transformed, and used to guide critical assignments within a scaled delivery framework.
This role requires fluency in building data-driven allocation systems, deep comfort with experimentation and analysis, and the ability to translate real-world challenges into technical solutions using machine learning and statistical modeling.
Job Description
- Optimization Modeling: Design and develop data-driven models and frameworks that support intelligent assignment, prioritization, and resource utilization across operational workflows.
- Feature Engineering: Build and maintain pipelines that transform raw signals, such as behavioral data, labeling-task attributes, and performance metrics, into structured features for decision-making.
- Project Planning and Execution: Develop and manage project plans, timelines, and budgets for data science initiatives.
- Stakeholder Management: Communicate project progress, challenges, and results to stakeholders and senior management.
Qualifications
Minimum Qualifications
- At least 5 years of experience applying machine learning, statistical modeling, or optimization to operational or business challenges.
- Strong skills in data wrangling, feature development, and exploratory analysis.
- Proficiency in Python and SQL; experience with large-scale or distributed data systems.
- Track record of owning end-to-end data projects, from requirements gathering to implementation.
Preferred Qualifications
- Background in decision science, resource allocation models, or workflow optimization.
- Experience working within structured operational environments such as service delivery, content review, or quality control systems.
- Understanding of experimentation infrastructure, including A/B testing and metric design.
- Ability to communicate technical insights clearly to both technical and business audiences.