Bengaluru, Karnataka, India
Principal Engineer, Big Data Platform

Company Description

At Western Digital, our vision is to power global innovation and push the boundaries of technology to make what you once thought impossible, possible. At our core, Western Digital is a company of problem solvers. People achieve extraordinary things given the right technology. For decades, we've been doing just that. Our technology helped people put a man on the moon.

We are a key partner to some of the largest and highest-growth organizations in the world. From energizing the most competitive gaming platforms, to enabling systems that make cities safer and cars smarter and more connected, to powering the data centers behind many of the world's biggest companies and the public cloud, Western Digital is fueling a brighter, smarter future.

Binge-watched any shows, used social media, or shopped online lately? You'll find Western Digital supporting the storage infrastructure behind many of these platforms. And that flash memory card that captures and preserves your most precious moments? That's us, too.

We offer an expansive portfolio of technologies, storage devices, and platforms for businesses and consumers alike. Our data-centric solutions comprise the Western Digital, G-Technology, SanDisk, and WD brands.

Today's exceptional challenges require your unique skills. It's You & Western Digital. Together, we're the next BIG thing in data.

ABOUT THE ADVANCED ANALYTICS OFFICE (AAO) AND BIG DATA PLATFORM (BDP)

AAO's mission is to accelerate analytics solutions at scale across the enterprise to rapidly capture business value. These solutions target key business metrics such as reducing manufacturing cost, improving capital efficiency, reducing time-to-market for new products, improving operational efficiency, and improving customer experience. They are built using cutting-edge Industry 4.0 technologies and are delivered through a platform approach to enable rapid scaling. The solutions span AI/ML for improving manufacturing yield, quality, equipment uptime, and adaptive testing; operations research for capacity and scheduling optimization; digital twins for inventory and logistics optimization; and product telematics for customer fleet management.

The Big Data Platform (BDP) team provides self-service data and application platforms, enabling services to scale rapidly and make an ever-increasing business impact. You will have the opportunity to partner in making remarkable things happen across Western Digital's more than a dozen factories around the globe, its global product development teams, customer solutions, and supporting operations such as Finance, Supply Chain, Procurement, and Sales.

Job Description

As a hands-on container and infrastructure engineer, you will be responsible for designing, implementing, and supporting our global hybrid-cloud container platform built on Kubernetes, Google Cloud Platform (GCP) Anthos, and AWS.

The candidate should have expertise in building virtualized storage, network, and compute platforms for large-scale, high-availability factory manufacturing workloads. Proven experience setting up continuous integration pipelines for source code using Bitbucket, Jenkins, Terraform, Ansible, and similar tools is required, as is the ability to build continuous deployment pipelines using Docker, Artifactory, Spinnaker, and similar tools, with strong advocacy of DevOps principles. The candidate should be passionate about developing and delivering on modern software-as-a-service (SaaS) design principles. This position requires partnering with various Western Digital manufacturing, engineering, and IT teams to understand factory-critical workloads and design solutions.
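To give a flavor of the kind of pipeline work described above, here is a minimal, illustrative Python sketch of a CI step that validates and plans Terraform changes. It assumes the `terraform` binary is on PATH and that infrastructure code lives in a hypothetical `infra/` directory; neither detail is specified in this posting, and real pipelines here would run inside Jenkins or similar tooling.

```python
#!/usr/bin/env python3
"""Illustrative CI step: validate and plan Terraform changes."""
import subprocess
import sys

INFRA_DIR = "infra/"  # hypothetical location of the Terraform configs


def run(*args: str) -> None:
    """Run a terraform subcommand, failing the pipeline on any error."""
    subprocess.run(["terraform", *args], cwd=INFRA_DIR, check=True)


def main() -> None:
    run("init", "-input=false")                  # fetch providers/modules non-interactively
    run("validate")                              # catch syntax and type errors early
    run("plan", "-input=false", "-out=tfplan")   # save the plan for a later apply stage
    print("Terraform plan succeeded; artifact: infra/tfplan")


if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)  # propagate failure to the CI runner (e.g., Jenkins)
```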

The BDP team's self-service data and application platforms also enable machine learning (ML) capabilities for the engineering and data science community. The ideal candidate is passionate about working with various cloud tools to meet service level agreements (SLAs), versatile enough to experiment with a fail-fast approach when adopting new technologies, and a natural troubleshooter. Clear, professional communication with internal customers, external vendors, and co-workers is expected.

 

Job Responsibilities

- Work in a global team to design, implement, and operate our global hybrid cloud container platform (Kubernetes)
- Define, develop, and maintain customizations/integrations between various Kubernetes OSS tooling (ingress, Helm, operators, observability)
- Perform application deployment of container applications to Kubernetes environments using CI/CD workflow tooling (see the sketch after this list)
- Manage AWS cloud infrastructure setup for services such as EC2, S3, EKS, AWS Lambda, API Gateway, etc.
- Document common work tasks to be added to a shared knowledge base
- Work closely with other business development teams to help them design and deploy their applications
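As an illustration of the deployment responsibility above, the following sketch uses the official `kubernetes` Python client to create a Deployment. The name `hello-web`, the namespace, and the image are hypothetical; in practice, deployments on this platform would flow through CI/CD tooling such as Spinnaker or ArgoCD rather than an ad hoc script.

```python
"""Illustrative sketch: deploying a container app with the kubernetes client."""
from kubernetes import client, config

config.load_kube_config()  # in-cluster code would use config.load_incluster_config()

labels = {"app": "hello-web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web", namespace="default"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # two Pods for basic availability
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.27",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment hello-web created")
```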

 

Qualifications


Required Qualifications:

- BS/MS in Computer Science, Information Technology, Computer Information Systems, or equivalent working experience in the IT field
- 10+ years of experience handling enterprise-level infrastructure for storage, memory, network, compute, and virtualization using VMware vSphere
- Proven experience setting up continuous integration pipelines for source code using Bitbucket, Jenkins, Terraform, and Ansible, and continuous deployment pipelines using Artifactory, ArgoCD, and Spinnaker
- Proven experience in and deep understanding of Kubernetes architecture, including the control plane and Kubernetes networking models: CNI (Container Network Interface) plugins (such as Calico and Flannel), service mesh architectures (Istio, Linkerd), and ingress controllers
- Expertise in resource allocation, scaling with Pods, fine-tuning cluster performance, and configuring and managing persistent storage in Kubernetes
- Strong focus on securing Kubernetes clusters, including implementing best practices for secrets management using tools like HashiCorp Vault (see the sketch after this list)
- Proven experience with end-to-end observability in Kubernetes environments using monitoring tools such as Prometheus and Grafana and logging solutions like Splunk
- Strong understanding of network architecture and network virtualization, including bandwidth management, latency troubleshooting, and capacity planning to ensure optimal data flow and resource allocation
- Expertise in deploying and managing AWS services like EMR, Redshift, and RDS, and scaling AI and ML solutions on platforms like AWS Bedrock and SageMaker
- Passion for developing and delivering on modern software-as-a-service (SaaS) design principles using Docker/Kubernetes
- Hands-on Python and Unix shell scripting, with strong advocacy of DevOps principles
- Strong troubleshooting skills and a strong appetite for learning new technology
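For the secrets-management item above, here is a minimal sketch of fetching a secret from HashiCorp Vault with the `hvac` Python client instead of baking credentials into manifests. The Vault address, environment variables, and secret path are all hypothetical stand-ins, not values prescribed by this posting.

```python
"""Illustrative sketch: reading a KV v2 secret from HashiCorp Vault."""
import os

import hvac

vault = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.com:8200"),
    token=os.environ["VAULT_TOKEN"],  # in Kubernetes, prefer the Kubernetes auth method
)
assert vault.is_authenticated()

# Read version-2 KV data at a hypothetical path holding registry credentials.
resp = vault.secrets.kv.v2.read_secret_version(path="bdp/registry-creds")
creds = resp["data"]["data"]  # KV v2 nests the payload under data.data
print("Fetched keys:", sorted(creds))
```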


Preferred Qualifications:

- Certification in Kubernetes
- Proven experience or certification in one of the major cloud providers, such as AWS or GCP
- Deep understanding of all AWS or GCP offerings for cloud computing and generative AI solutions, including Bedrock or Vertex AI services
- Deep understanding of services like EMR, RDS (Aurora DB), Kafka, and Redshift to support large-scale data processing (see the sketch after this list)
- Understanding of MLOps tools for AI and machine learning, such as Dataiku
- Deep familiarity with data service solutions such as Elasticsearch, Kafka, Redis, and NiFi
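As a small illustration of the data-streaming item above, this sketch publishes a record to Kafka using the `kafka-python` library. The broker address, topic name, and payload fields are hypothetical examples, not part of the role's actual stack as described here.

```python
"""Illustrative sketch: publishing factory telemetry to a Kafka topic."""
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One hypothetical measurement from a manufacturing tool.
producer.send("factory-telemetry", {"tool_id": "T-101", "yield_pct": 97.3})
producer.flush()  # block until the record is acknowledged
```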

Additional Information

Western Digital thrives on the power of diversity and is committed to an inclusive environment where every individual can thrive through a sense of belonging, respect, and contribution. We are committed to giving every qualified applicant and employee an equal opportunity. Western Digital does not discriminate against any applicant or employee based on their protected class status and complies with all federal and state laws against discrimination, harassment, and retaliation, as well as the laws and regulations set forth in the "Equal Employment Opportunity is the Law" poster.
