DataDirect Networks

Staff Machine Learning Engineer - Infinia AI Performance

Job Locations CN-Shanghai, Beijing, Remote
Job Post Information: Posted Date 3 weeks ago (2/7/2025 9:56 AM)
ID
2025-5135
# of Openings
1
Location : Name Linked
China
Posting Location : Country (Full Name)
China
Posting Location : City
Shanghai, Beijing, Remote
Job Function
Engineering
Worker Type
Regular Full-Time Employee

Overview

This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous vehicles, government, academia, research, and manufacturing.

 

"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC 

 

“The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI Storage in high performance environments” - Marc Hamilton, VP, Solutions Architecture & Engineering | NVIDIA 

 

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. 

 

Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. 

 

Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage. 

Job Description

Staff Machine Learning Engineer - Infinia AI Performance

 

We are seeking a talented and experienced Staff Machine Learning Engineer to help us optimize training, inference, and Retrieval-Augmented Generation (RAG) pipelines for high-performance AI applications. You will lead the development of connectors to open-source data-streaming frameworks such as Mosaic Streaming, Ray Data, and tf.data, and of inference optimizations such as KV caching and LoRAX. You will guide a talented organization of engineers focused on an advanced end-to-end data platform for ingestion, transformation, preparation, and streaming for high-performance AI applications. Collaborating closely with software developers, product teams, and partners, you will lead experiments with state-of-the-art models using open-source tools and cloud platforms.
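To illustrate the data-streaming pattern this role centers on, here is a minimal pure-Python sketch of shard-by-shard batch streaming, the core idea behind tools like Mosaic Streaming and Ray Data (all function names and the in-memory shards are hypothetical stand-ins; a real pipeline would read shards from object storage and hand batches to a framework dataloader):

```python
def shard_reader(shard):
    """Yield samples from one shard (hypothetical in-memory stand-in
    for reading a shard fetched from remote object storage)."""
    for sample in shard:
        yield sample

def stream_batches(shards, batch_size):
    """Stream samples shard by shard and group them into fixed-size
    batches, so training never holds the full dataset in memory."""
    batch = []
    for shard in shards:
        for sample in shard_reader(shard):
            batch.append(sample)
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # emit the final partial batch
        yield batch

# Usage: two toy "shards", streamed in batches of 3.
shards = [[0, 1, 2, 3], [4, 5, 6]]
batches = list(stream_batches(shards, 3))
```

The design point is that the reader and the batcher are decoupled: shards can be downloaded, shuffled, or prefetched independently of how the consumer batches them, which is what makes these pipelines scale.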

 

Key Responsibilities:

  • Design and implement the integration of data ingestion and streaming pipelines with open-source tools such as Ray Data, Mosaic Streaming, tf.data, and the PyTorch DataLoader.
  • Design optimizations for training, such as asynchronous checkpointing, and for inference, such as KV caching and LoRAX.
  • Guide the integration of MLflow with DDN’s Infinia product for comprehensive experiment tracking, model versioning, and deployment.
  • Drive the implementation and scaling of Retrieval-Augmented Generation (RAG) pipelines to enhance generative model performance.
  • Stay abreast of the latest developments in AIOps, AI frameworks, optimization, and accelerated execution.
  • Identify and implement solutions to optimize training and inference pipeline performance, runtime, and resource utilization on Infinia.
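The RAG work in the responsibilities above reduces to retrieving the passages most similar to a query and prepending them to the model prompt. A minimal sketch of the retrieval step, using toy embeddings and brute-force cosine similarity (a production pipeline would use a real embedding model and a vector database; the corpus and vectors here are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=2):
    """Return the top-k (score, passage) pairs from an embedded corpus.
    `corpus` maps passage text -> its (toy) embedding vector."""
    scored = sorted(
        ((cosine(query_vec, vec), text) for text, vec in corpus.items()),
        reverse=True,
    )
    return scored[:k]

# Usage: toy 3-dimensional embeddings; the top hits augment the prompt.
corpus = {
    "DDN builds AI storage":  [1.0, 0.1, 0.0],
    "Cats sleep a lot":       [0.0, 1.0, 0.2],
    "Infinia accelerates IO": [0.9, 0.2, 0.1],
}
hits = retrieve([1.0, 0.0, 0.0], corpus, k=2)
prompt = "Context: " + "; ".join(t for _, t in hits) + "\nQuestion: ..."
```

Scaling this step, replacing the brute-force scan with an approximate-nearest-neighbor index while keeping retrieval quality, is exactly the kind of optimization the role calls for.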

 

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or related fields.
  • 4+ years of experience in machine learning operations (MLOps) or related roles.
  • Proven expertise in building and scaling AI/ML pipelines.
  • Strong understanding of machine learning frameworks and libraries (TensorFlow, PyTorch, NVIDIA NeMo, vLLM, TensorRT-LLM).
  • Experience in deploying open-source vector databases at scale.
  • Solid understanding of cloud infrastructure (AWS, GCP, Azure) and distributed computing.
  • Proficiency with containerization tools (Docker, Kubernetes) and infrastructure as code.
  • Excellent problem-solving and troubleshooting skills, with attention to detail and performance optimization.
  • Strong communication and collaboration skills.

 

Preferred Qualifications:

  • Implementation-level understanding of ML frameworks, data loaders and data formats.
  • Experience with scaling RAG pipelines and integrating them with generative AI models.
  • Experience in operationalizing AI/ML models in production environments.

This position requires participation in an on-call rotation to provide after-hours support as needed.
