Gurgaon, Haryana
Data Engineering Consultant, Analyst or Lead

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.


Primary Responsibilities:

- Participate in the scrum process and deliver stories/features according to the schedule
- Collaborate with the team, architects and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by the team
- Understand the product architecture and features being built, and come up with product improvement ideas and POCs
- Be able to learn and adapt to new data technologies
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:

Engineering degree or equivalent experience

Experience:
- Deep experience in data analysis, including source data analysis, data profiling and mapping
- Good experience in building data pipelines using ADF/Azure Databricks
- Hands-on data migration experience from legacy systems to new solutions, such as from on-premises clusters to the cloud
- Proven hands-on experience with large-scale data warehouse DevOps and implementation of big data, Apache Spark and Azure Cloud
- Large-scale data processing using PySpark on the Azure ecosystem
- Implementation of a self-service analytics platform and ETL framework using PySpark on Azure

Tools/Technologies:
- Programming languages: Python, PySpark
- Cloud technologies: Azure (ADF, Databricks, WebApp, Key Vault, SQL Server, Function App, Logic App, Synapse, Azure Machine Learning, DevOps)
- Expert skills in Azure data processing tools (Azure Data Factory, Azure Databricks)
- Solid proficiency in SQL and complex queries
- Knowledge of US healthcare industry/pharmacy data is an added advantage
- Proven problem-solving skills
- Proven communication skills


Preferred Qualifications:

Knowledge/Experience on Azure Synapse and Power BI


At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and to enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
