DataDirect Networks

Senior DevOps Engineer

Job Locations: France, UK, Italy, Poland, Israel
Posted Date: 12/24/2024 12:34 PM
ID: 2024-5058
# of Openings: 1
Location: Remote, Mainland Europe
Posting Location (Country): Germany
Posting Location (City): France, UK, Italy, Poland, Israel
Job Function: Engineering
Worker Type: Regular Full-Time Employee

Overview

This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries spanning life sciences, healthcare, financial services, autonomous vehicles, government, academia, research, and manufacturing.

 

"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC 

 

“The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI Storage in high performance environments” - Marc Hamilton, VP, Solutions Architecture & Engineering | NVIDIA 

 

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. 

 

Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. 

 

Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage. 

Job Description

We are looking for a Senior DevOps Engineer with demonstrated networking experience to join our RED Storage engineering team.

 

About the Team:  

 
Our DevOps team is at the forefront of innovation, driving high-performance and highly scalable emerging technologies with Infinia Software Defined Storage (SDS). Infinia SDS is designed to support multiple client protocols, delivering robust, enterprise-grade storage solutions for cutting-edge use cases. With a global presence of expert developers, QA, and DevOps engineers, we are transforming the way software-defined storage operates in both on-premises and cloud environments. 

 

Why Join Us? 

 

We provide a unique blend of on-premises and cloud DevOps: beyond managing infrastructure, you will deploy critical applications on Kubernetes and shape infrastructure as code (IaC) from the ground up. Here's a glimpse of the exciting work we do:

 

  • Infrastructure as Code: Continuously evolve our infrastructure with Terraform, Python, and Bash, building cloud infrastructure on GCP with plans to expand to other cloud providers. 
  • On-Premises Excellence: Deploy self-sustaining infrastructure that delivers mission-critical applications faster while enhancing flexibility and performance. 
  • Build and Release Acceleration: Reduce build times by integrating Nexus/JFrog Artifactory, streamlining build workflows, and connecting tools to self-managed artifact repositories. 
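As a sketch of the kind of build-and-release workflow described above, here is a minimal GitHub Actions pipeline that builds a container image and pushes it to a self-managed artifact repository, using a registry-backed build cache to cut build times. All names, URLs, and secrets are illustrative placeholders, not DDN's actual configuration:

```yaml
# Hypothetical workflow: build an image and publish it to a self-managed
# registry (e.g. JFrog Artifactory or Nexus). Endpoints are examples only.
name: build-and-publish
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to self-managed registry
        uses: docker/login-action@v3
        with:
          registry: artifacts.example.internal
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: artifacts.example.internal/team/app:${{ github.sha }}
          # Registry-backed layer cache: later builds reuse unchanged layers.
          cache-from: type=registry,ref=artifacts.example.internal/team/app:buildcache
          cache-to: type=registry,ref=artifacts.example.internal/team/app:buildcache,mode=max
```

The registry-backed cache is one common way to speed up CI builds: each run pulls previously built layers from the artifact repository instead of rebuilding them from scratch.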

 

Cutting-Edge Tools We Use 

 

Our infrastructure and tooling reflect our commitment to modern DevOps practices. Here’s how we empower engineers to stay at the top of their game: 

 

  • Kubernetes and ArgoCD: Orchestrate scalable deployments with Kubernetes and leverage ArgoCD for automated and declarative application management. 
  • Packer + MaaS: Create custom bare-metal images with Packer and deploy them seamlessly using MaaS (Metal as a Service) for automated bare-metal provisioning. 
  • K3s + KubeVirt: Build a self-provisioning VM environment to empower developers and QA engineers. 
  • MinIO & Harbor: Utilize MinIO for S3 storage services, and Harbor for container image management. 
  • GitHub Enterprise (GHE) + Actions: Collaborate using GHE, with DevOps leading the development and maintenance of workflow actions for CI/CD pipelines. 
  • Docker: Build containerized environments for both cloud and on-prem applications. 
  • Quay & GCS Buckets: Manage external artifacts with Quay and Google Cloud Storage (GCS). 
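To illustrate the declarative application management mentioned above, here is a minimal ArgoCD Application manifest that deploys a Helm chart from a Git repository and keeps the cluster in sync with it. Repository URLs, chart paths, and names are placeholders for illustration only:

```yaml
# Hypothetical ArgoCD Application: GitOps-style deployment of a Helm chart.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.example.com/team/example-app.git  # placeholder
    targetRevision: main
    path: charts/example-app
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `automated` sync enabled, ArgoCD treats the Git repository as the source of truth: any difference between the cluster and the manifest is reconciled automatically.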

 

What You’ll Work On 

 

  • Self-Deployable Environments: Our DevOps engineers are building a seamless, self-service environment to empower developers and QA teams. 
  • Bare-Metal & Cloud Integration: Get hands-on experience with bare-metal provisioning using MaaS, while deploying infrastructure in hybrid models with GCP and expanding to other cloud providers. 
  • Streamlining Build Systems: We integrate tools like Nexus/JFrog Artifactory to accelerate builds and optimize workflows, ensuring smoother development and faster delivery. 
  • Constant Innovation: We believe in continuous improvement, constantly enhancing infrastructure-as-code to make deployments faster, more reliable, and easily replicable. 

 
 
Required Skills: 

 
All of the following skills are mandatory: 

  • Application Deployments within Kubernetes using tools like Helm or ArgoCD. 
  • Docker
  • Configuration Management with Ansible or similar tools. 
  • Monitoring and Metrics using Prometheus, Grafana, or equivalent. 
  • Bash Scripting
  • CI/CD Pipeline Development using GitHub/GitLab, Jenkins, and related tools (e.g., YAML, Groovy). 
  • Basic Linux Proficiency for underlying infrastructure management. 

Preferred Skills: 

The following skills are desirable but not mandatory: 

  • Linux Administration, Networking, and Troubleshooting. 
  • On-prem Kubernetes Deployment and Administration.
  • Virtualization
  • Python or Golang programming knowledge. 
