#JR1990932
deployment of new deep learning models easier and more accessible to more data scientists.
What you'll be doing:
In this role, you will build from first principles the infrastructure solutions needed to deliver Triton Inference Server. You will apply software design skills to define the processes and best practices for continuous integration, testing, and releasing builds, while ensuring cross-platform compatibility of Triton Inference Server across a wide range of operating systems and hardware architectures. Using your expertise, you will influence how we design our customer-facing technology and tools to enable an optimized pipeline for building and deploying our product. Extensive collaboration with cross-functional teams to integrate pipelines from deep learning frameworks and components is essential to ensuring seamless deployment and inference of deep learning models across Triton Inference Server.
What we need to see:
Master's or PhD degree, or equivalent experience
3+ years of experience in Computer Science, computer architecture, or a related field
Ability to work in a fast-paced, agile team environment
Excellent Bash and Python programming and software design skills, including debugging, performance analysis, and test design
Strong background in DevOps, CI/CD tools, and cloud computing
Understanding and knowledge of complex applications built on both on-prem and cloud infrastructure, across operating systems and cloud services
Experience administering, monitoring, and deploying systems and services on GitHub and cloud platforms, including supporting other technical teams in monitoring the platform's operating efficiency and responding as needs arise
Knowledge of distributed systems programming
Ways to stand out from the crowd:
Experience designing or architecting new and existing VM/container-based clusters (design patterns, reliability, and scaling) to manage Linux/Windows servers with horizontal scalability
Experience driving efficiencies in software architecture, creating metrics, implementing infrastructure as code, and making other automation improvements
Background deploying cloud-native services using modern technologies such as Docker and Kubernetes, optimizing software for scalable and efficient deployment in cloud environments
Experience contributing to a large open-source deep learning community: use of GitHub, bug tracking, branching and merging code, OSS licensing issues, handling patches, etc.
Excellent problem-solving abilities spanning multiple software layers (storage systems, kernels, and containers), as well as collaborating within an agile team environment to prioritize deep learning-specific features and capabilities within Triton Inference Server, employing advanced troubleshooting and debugging techniques to resolve complex technical issues
NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most experienced and hard-working people in the world working for us. Are you creative and autonomous? Do you love a challenge? If so, we want to hear from you. Come help us build the real-time, efficient computing platform driving our success in the dynamic and quickly growing field of Deep Learning and Artificial Intelligence.
The base salary range is 120,000 USD - 230,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.