Sr. Data Engineer
Ford
Data Architecture: Design, build, and optimize GCP-based data pipelines (streaming/batch) using services like BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Composer. Architect data lakes/warehouses for structured and unstructured data.
Pipeline Development: Implement ETL/ELT workflows and integrate data from heterogeneous sources (APIs, databases, streaming platforms) using tools like Spark, Apache Beam, DBT, and Kafka.
GCP Expertise: Proficiency in BigQuery, Dataflow, Dataproc, Cloud SQL, Pub/Sub, and logging and monitoring.
Programming: Strong coding skills in Python, SQL, and Java, and familiarity with Apache Spark.
DevOps & IaC: Experience with Terraform, Kubernetes, and CI/CD pipelines for infrastructure automation.
Data Modeling: Working knowledge of relational, NoSQL (Bigtable, Firestore), and vector databases.
Governance & Compliance: Enforce data security, encryption, and governance standards (GDPR, CCPA) using GCP tools like Cloud KMS and IAM policies.
Collaboration: Partner with data scientists, DevOps, and business teams to deploy AI/ML models, BI dashboards (Power BI, Looker, Qlik), and real-time analytics solutions.
Operational Excellence: Automate CI/CD pipelines, monitor system performance, and troubleshoot issues using Terraform, Jenkins, and GCP monitoring tools.
As a senior engineer, ensure the data pipelines built by teams are robust, scalable, and meet the highest standards.
Execute data governance and data quality projects.
Serve as a role model for data engineering experts, with a strong emphasis on a bias toward action.
Develop data engineering pipelines and frameworks in GCP, focused on modularity and data craftsmanship.
Serve as a hands-on individual contributor for log data management, analytics, and integration with observability and AIOps platforms.
Democratize a "Data as a Service" API for all interfacing systems.
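The ETL/ELT responsibilities above can be sketched as a minimal extract/transform/load flow. The event schema, field names, and in-memory source below are illustrative assumptions, not this role's actual stack; a production pipeline would express the same shape in Apache Beam or Spark running on Dataflow or Dataproc.

```python
# Illustrative batch ETL sketch (hypothetical schema and field names):
# extract raw JSON event records, validate/transform them, and load an
# aggregate result -- the same extract/transform/load shape a Beam or
# Spark pipeline implements at scale.
import json
from collections import Counter

RAW_EVENTS = [  # stand-in for an API / Pub/Sub / database source
    '{"user": "a", "event": "click"}',
    '{"user": "b", "event": "view"}',
    'not-json',                          # malformed record to be filtered out
    '{"user": "a", "event": "click"}',
]

def extract(lines):
    """Parse each raw line, skipping records that fail validation."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # a real pipeline would route this to a dead-letter sink
        if "event" in record:
            yield record

def transform(records):
    """Aggregate event counts (a typical warehouse-facing rollup)."""
    return Counter(r["event"] for r in records)

def load(aggregates):
    """Emit rows in the shape a BigQuery load job might receive."""
    return [{"event": k, "count": v} for k, v in sorted(aggregates.items())]

rows = load(transform(extract(RAW_EVENTS)))
# rows == [{"event": "click", "count": 2}, {"event": "view", "count": 1}]
```

The three stages are kept as separate functions so each can be unit-tested in isolation, mirroring how modular pipeline steps are composed in frameworks like Beam or DBT.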