Cloud Native Engineer - ARK Large Model Platform

TikTok

Singapore

Why you should apply for a job to TikTok:

  • 4.5/5 in overall job satisfaction
  • 4.5/5 in supportive management
  • 100% say women are treated fairly and equally to men
  • 100% would recommend this company to other women
  • 100% say the CEO supports gender diversity
  • Ratings are based on anonymous reviews by Fairygodboss members.
  • Employee well-being is supported via hybrid work, short-term counseling through our EAP and a premium subscription to Headspace.
  • We embrace diversity across all dimensions and provide employees with 9 employee resource groups globally, including our WOMEN ERG.
  • Comprehensive parental leave policy as well as fertility treatment through healthcare providers with a $20,000 lifetime maximum.
  Job ID: #7345655290304629030

    Position summary

    That's how we drive impact - for ourselves, our company, and the communities we serve.
    Join us.

    About the Team
    The Applied Machine Learning (AML) - Enterprise team provides machine learning platform products on Volcano Engine. These include a cloud-native resource scheduling system that intelligently orchestrates tasks and jobs to minimise the cost of every experiment and maximise resource utilisation, rich modelling tools including customised machine learning tasks and a web IDE, and multi-framework, high-performance model inference services.

    In 2021, we released this machine learning infrastructure to the public through Volcano Engine, giving more enterprises access to lower computation costs, a lower barrier to machine learning engineering, and deeper development of AI capabilities.

    Responsibilities
    You will be responsible for developing the Ark Large Model Platform on Volcano Engine: researching systematic solutions for implementing and applying large models across industries, striving to reduce the IT cost of large model applications, and meeting users' ever-growing demand for intelligent interaction that improves how they live and communicate.

    • Maintain a large-scale AI cluster and develop state-of-the-art machine learning platforms to support a diverse group of stakeholders.
    • Tackle highly challenging tasks, including but not limited to delivering efficient training and inference for large language models, managing effective distributed training jobs across clusters with more than 10,000 nodes and GPUs, and building highly reliable ML systems with strong scalability.
    • Work across various aspects of LLMOps (Large Language Model Operations), including resource scheduling, task orchestration, model training, model inference, model management, dataset management, and workflow orchestration.
    • Investigate cutting-edge technologies related to large language models, AI, and machine learning at large, such as state-of-the-art distributed training systems with heterogeneous hardware, GPU utilization optimization, and the latest in hardware architecture.
    • Employ a variety of technological and mathematical analyses to enhance cluster efficiency and performance.

    Qualifications

    Minimum Qualifications

    • B.Sc. or higher degree in Computer Science or a related field from an accredited, reputable institution, with 5 years of R&D experience in cloud computing or large-scale model systems.
    • Experience in Golang/C++/CUDA development, with a solid understanding of Linux systems and popular cloud platforms such as Volcano Engine, AWS, and Azure.
    • Profound knowledge of cloud-native orchestration technologies such as Kubernetes, coupled with experience in large-scale cluster maintenance, job scheduling optimization, and cluster efficiency improvement, as well as a strong grasp of foundational areas of computer science, including computer networking, Linux file systems, object storage services, and both SQL and NoSQL databases.
    • Experience in developing ML or MLOps platforms, including distributed machine learning model training, model fine-tuning, and deployment.
    • Self-motivated, eager to innovate, and collaborative, with consistently high standards for coding and documentation quality.

    Preferred Qualifications:

    • Familiarity with High-Performance Computing (HPC) stacks, including but not limited to computing with CUDA/OpenCL, networking with NCCL/MPI/RDMA/DPDK, and model compilation with MLIR/TVM/Triton/LLVM.
    • Experience in large language model (LLM) training and development, including large-scale foundation model training guided by scaling laws, efficient fine-tuning techniques such as LoRA/P-Tuning/RLHF, model inference optimization, and model structure transformations for optimizations such as sparsity/MoE/long context.

    TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
