#JR1990926
make them more easily consumable by users (via improved scalability, reliability, cleaner abstractions, etc.).
What you will be doing:
Develop benchmarks and end-to-end customer applications running at scale, instrumented for performance measurement, tracking, and sampling, to measure and optimize the performance of meaningful applications and services.
Construct carefully designed experiments to analyze, study, and develop critical insights into performance bottlenecks and dependencies from an end-to-end perspective.
Develop ideas on how to improve end-to-end system performance and usability by leading changes in the HW or SW (or both).
Collaborate with external CSPs during the full life cycle of cluster deployment and workload optimization to understand and drive standard methodologies.
Collaborate with AI researchers, developers, and application service providers to understand difficulties and requirements, project future needs, and share best practices.
Work with a diverse set of LLM workloads and their application areas, such as health care, climate modeling, pharmaceuticals, financial futures, and genomics/drug discovery, among others.
Develop the modeling framework and TCO analysis needed to enable efficient exploration and sweeps of the architecture and design space.
Develop the methodology needed to drive the engineering analysis that informs the architecture, design, and roadmap of DGX Cloud.
What we need to see:
8+ years of proven experience
Ability to work with large-scale parallel and distributed accelerator-based systems
Expertise in optimizing the performance of AI workloads on large-scale systems
Experience with performance modeling and benchmarking at scale
Strong background in computer architecture, networking, storage systems, and accelerators
Familiarity with popular AI frameworks (PyTorch, TensorFlow, JAX, Megatron-LM, TensorRT-LLM, vLLM, among others)
Experience with AI/ML models and workloads, in particular LLMs
Understanding of DNNs and their use in emerging AI/ML applications and services
Bachelor's or Master's degree in Engineering (preferably Electrical Engineering, Computer Engineering, or Computer Science) or equivalent experience
Proficiency in Python and C/C++
Expertise with at least one public CSP's infrastructure (GCP, AWS, Azure, OCI, ...)
Ways to stand out from the crowd:
Very high intellectual curiosity; confidence to dig in as needed; not afraid of confronting complexity; able to pick up new areas quickly
Proficiency in CUDA and XLA
Excellent interpersonal skills
A PhD is nice to have
With competitive salaries and a generous benefits package (https://www.nvidiabenefits.com), we are widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our best-in-class engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you!
The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.