We're looking for a Data Engineer based in Tulsa, OK, to join a growing team. Candidates must have experience with Azure and Databricks.
Key Responsibilities:
Build and optimize data pipelines using Databricks and Databricks SQL for data ingestion, transformation, and analytics.
Use Delta Lake for reliable, high-performance data storage.
Develop in Python, PySpark, and T-SQL for scalable data processing and ETL workflows (a brief sketch of this kind of workflow follows the list below).
Automate workflows in Databricks with job scheduling and performance monitoring.
Utilize Azure DevOps for version control and CI/CD pipelines.
Work with Azure Synapse for big data integration and analytics.
Create dashboards in Power BI using data from Azure and Databricks.
Use Azure Data Factory for data movement and orchestration.
Support legacy ETL processes with SSIS.
Apply strong skills in data modeling, query optimization, and pipeline troubleshooting.
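To give a flavor of the day-to-day work, here is a minimal PySpark sketch of an ingestion-to-Delta step. The landing path (/mnt/landing/raw_sales/), target table (sales_curated), and column names are hypothetical placeholders for illustration, not part of an actual codebase.

```python
# Minimal sketch: ingest raw files, apply light transformations, write to Delta Lake.
# Paths, table names, and columns are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Ingest raw CSV files from a (hypothetical) landing zone
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/landing/raw_sales/")
)

# Basic transformation: standardize types and drop records with missing amounts
curated = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Persist to a Delta Lake table for reliable, high-performance downstream analytics
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("sales_curated")
)
```

In Databricks, a step like this would typically run as a scheduled job with monitoring and alerting enabled, which ties into the workflow automation responsibilities above.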
Key Technologies:
Azure, Databricks, Python, SQL, SSIS
ETL pipelines
If you’re excited about shaping the future of data engineering and working with cutting-edge cloud technology, we encourage you to apply!