Senior Production SRE Engineer - Storage

NVIDIA

Remote, Italy

Job posting number: #7292513 (Ref:JR1990595)

Posted: November 4, 2024

Job Description

Site Reliability Engineering (SRE) is an engineering discipline focused on designing, building, and maintaining large-scale production systems with high efficiency and availability. It spans software and systems engineering practices, storage, data management, and services. SREs are highly specialized, with expertise across domains such as systems, networking, storage, coding, database management, capacity management, continuous delivery and deployment, and open-source cloud-enabling technologies like Kubernetes, containers, and virtualization. Their responsibilities include ensuring reliable storage solutions, managing data efficiently, and providing related services that support the overall stability and performance of production systems.

SRE at NVIDIA ensures that our internal and external facing GPU cloud services deliver the reliability and uptime promised to users, while enabling developers to change existing systems through careful preparation and planning, with close attention to capacity, latency, and performance. SRE is also a mindset and a set of engineering approaches to running and optimizing production systems. Much of our software development focuses on eliminating manual work through automation, performance tuning, and improving the efficiency of production systems. Because SREs are responsible for the big picture of how our systems relate to each other, we use a breadth of tools and approaches to tackle a broad spectrum of problems. Practices such as limiting time spent on reactive operational work, blameless postmortems, and proactive identification of potential outages drive the iterative improvement that is key to product quality and to interesting, dynamic day-to-day work.

SRE's culture of diversity, intellectual curiosity, problem-solving, and openness is important to its success. Our organization brings together people with a wide variety of backgrounds, experiences, and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while striving to build an environment that provides the support and mentorship needed to learn and grow.

What You Will Be Doing:

  • Assist in the design, implementation, and support of large-scale storage clusters, including monitoring, logging, and alerting.

  • Work with AI/ML workloads to capture and correlate behavior in large clusters and workflows, which are otherwise hard to understand.

  • Work closely with peers on the team to improve the lifecycle of services, from inception and design through deployment, operation, and refinement.

  • Support services before they go live through activities such as system design consulting, developing software and frameworks, capacity management, and launch reviews.

  • Maintain services once they are live by measuring and monitoring availability, latency, and overall system health, including leveraging machine learning models.

  • Scale systems sustainably through mechanisms like AI/ML and automation, and evolve systems by pushing for changes that improve reliability and velocity.

  • Practice sustainable incident response and blameless postmortems.

  • Be part of an on-call rotation to support production systems.

What We Need To See:

  • BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics) or equivalent experience.

  • 5+ years of practical experience.

  • Background with algorithms, data structures, complexity analysis, software design, and maintaining large-scale Linux-based systems.

  • Experience in one or more of the following: C/C++, Java, Python, Go, Perl, or Ruby, as well as AI/ML frameworks and methodologies.

  • Good knowledge of infrastructure configuration management tools like Ansible, Chef, Puppet, and Terraform.

  • Experience in using observability and tracing tools such as InfluxDB, Prometheus, and the Elastic Stack.

Ways to stand out from the crowd:

  • A demonstrated SRE mindset and customer-first approach, with a focus on customer satisfaction and a passion for ensuring customer success. Experience with Git, code review, pipelines, and CI/CD.

  • Interest in crafting, analyzing, and fixing large-scale distributed systems. Strong debugging skills and a systematic problem-solving approach to identifying complex problems.

  • Thrive in collaborative environments and enjoy working with various teams. Experience in using or running large private and public cloud systems based on Kubernetes, OpenStack, and Docker. Flexible in adapting to different working styles.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and talented people on the planet working for us. If you're creative and autonomous, we want to hear from you!


More Info

Application Deadline: Open Until Filled
Employer Location: NVIDIA, Santa Clara, California, United States