Our systems serve millions of queries every day with incredibly low latency, drawing every ounce of compute from our hardware. As part of this group, you will have the chance to bring Intelligence to billions of users across the world and to make a difference in people's lives. You will work on optimizing language, vision, and speech models with billions of parameters using state-of-the-art technologies, and make them run at the scale of Apple.
Description
Work alongside the Foundation Model Research team to optimize inference for cutting-edge model architectures. Work closely with product teams to build production-grade solutions that launch models serving millions of customers in real time. Build tools to understand inference bottlenecks across different hardware platforms and use cases. Mentor and guide engineers across the organization.
Minimum Qualifications
Demonstrated experience in leading and driving complex, ambiguous projects.
Experience with high-throughput services, particularly at supercomputing scale.
Proficient in running applications in the cloud (AWS, Azure, or equivalent) using Kubernetes and Docker.
Familiar with GPU programming using CUDA and with popular machine learning frameworks such as PyTorch or TensorFlow.
Preferred Qualifications