NVIDIA is looking for a Principal Engineer to join our Distributed Machine Learning team, focused on GPU-accelerated Apache Spark. Data scientists often apply machine learning (ML) and deep learning (DL) algorithms over large datasets to train AI models. To accelerate and scale model training, libraries such as XGBoost, RAPIDS cuML, PyTorch, and TensorFlow have been extended for distributed training on GPU-accelerated compute clusters. NVIDIA plans to work with open source communities to make GPU-accelerated distributed ML/DL even more widely applicable and easier to use. We aim to address the key limitations of existing solutions, including performance and usability, so that data scientists can build AI models that achieve business goals faster, more reliably, and at lower cost. Come join NVIDIA to design and develop GPU-accelerated distributed machine learning solutions.
What you’ll be doing:
Design and develop new user-friendly APIs and libraries to optimally use existing DL/ML frameworks in GPU-enabled Spark clusters for distributed DL/ML training and inference at scale.
Design and develop GPU-accelerated ML libraries for distributed training and inference on Spark clusters, e.g., by improving our existing spark-rapids-ml open source library.
Demonstrate superior performance of developed solutions on industry-standard benchmarks and datasets.
Make technical contributions to enhance the capabilities of open source projects such as RAPIDS, XGBoost, spark-rapids-ml, and Apache Spark.
Work with NVIDIA partners and customers to deploy distributed ML algorithms in the cloud or on-premises.
Keep up with published advances in distributed ML systems and algorithms.
Provide technical mentorship to a team of engineers.
What we need to see:
BS, MS, or PhD in Computer Science, Computer Engineering, or closely related field (or equivalent experience).
12+ years of work or research experience in software development.
5+ years of experience as a technical lead in distributed machine learning and/or deep learning.
3+ years of open source development experience.
3+ years of hands-on experience with Spark MLlib, XGBoost, and/or PyTorch.
Knowledge of the internals of Apache Spark MLlib.
Experience with Kubernetes, YARN, Spark, and/or Ray for distributed ML orchestration.
Proven technical skills in designing, implementing and delivering high-quality distributed systems.
Excellent programming skills in C++, Scala, and Python.
Familiarity with agile software development practices.
Ways to stand out from the crowd:
Familiarity with NVIDIA libraries such as RAPIDS cuML, Spark-RAPIDS, and NVTabular.
Familiarity with NVIDIA GPUs and CUDA.
Familiarity with Horovod, Petastorm, and other current or past distributed training libraries.
Experience working with multi-functional teams across organizational boundaries and geographies.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most brilliant and talented people in the world working for us. If you are passionate about what you do, creative and autonomous, we want to hear from you!
The base salary range is 272,000 USD - 419,750 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.