Data Engineer V
ICONMA, LLC
Our client, a banking company, is looking for a Data Engineer V for their Toronto, ON (hybrid) location. Responsibilities:
+ Reporting to the Manager, the Data Engineer (DE) supports Fraud data migration and Business Intelligence projects by designing and building data management and analytics solutions, ensuring high-quality, trusted data is available and accessible to the Fraud Performance Management team to make informed business decisions about registrants. This includes developing reporting and analytics systems and self-serve reports.
+ The DE is responsible for working with data owners, product owners, and various stakeholders from business, IT, and external teams to ensure successful delivery of data projects.
Scope and Complexity:
+ The DE is responsible for developing and maintaining data pipelines on the Azure data platform and for designing and managing data repositories, data lakes, and data warehouses.
+ The DE supports data governance and data quality strategies at the operational level, maintaining artifacts on data assets including the data dictionary, lineage, ownership, and business rules.
+ The DE works closely with other data engineers, data analysts (DA), business analysts (BA), product owners, IT, and business SMEs. Specifically, for the data migration project, the DE designs the source-to-target mapping, extracts data from the current systems, and transforms and loads it into the new cloud-based solution (a PySpark sketch of this pattern follows this list).
+ The DE also works closely with the Application and QA teams to validate the data, ensuring quality and alignment with the data architecture.
+ Work together with other data engineers, data analysts, business analysts, business SMEs, records analysts, and privacy analysts to understand data needs and create effective, secure data workflows.
+ Responsible for designing, building, and maintaining secure and compliant data processing pipelines using Microsoft Azure data services and frameworks including, but not limited to, Azure Databricks, Azure Data Factory, Azure Data Lake Storage (ADLS), and PySpark.
+ Build databases, data marts, or data warehouses, and perform data migration work.
+ Build reporting and analytical tools that utilize the data pipeline and provide actionable insight into key business performance metrics.
+ Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Microsoft Azure Cloud.
+ Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and/or Azure Blob Storage.
+ Using Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations.
+ Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data (a validation sketch also follows this list).
+ Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
+ Monitor and resolve data pipeline problems to ensure consistency and availability of the data.
+ Identify, design, and implement internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
+ Adapt and learn new technologies per business requirements.
+ Ensure compliance with data governance, privacy and security policies.
+ Foster and maintain an organizational culture that promotes equity, diversity and inclusion, mutual respect, teamwork, and service excellence.
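The source-to-target migration work described above can be illustrated with a minimal PySpark sketch of one extract-transform-load step, as it might run on Azure Databricks. Every path, table, and column mapping below is a hypothetical placeholder, not a detail from this role:

    # Minimal sketch of one source-to-target ETL step in PySpark on Azure
    # Databricks. All paths and column mappings are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("fraud-data-migration").getOrCreate()

    # Extract: read a raw source extract landed in ADLS by an upstream job.
    source = spark.read.parquet(
        "abfss://raw@examplestore.dfs.core.windows.net/fraud/cases/")

    # Transform: apply the source-to-target mapping (renames, casts,
    # derived and audit columns).
    target = (
        source
        .withColumnRenamed("case_id", "fraud_case_id")      # mapping rule
        .withColumn("opened_date", F.to_date("opened_ts"))  # type conversion
        .withColumn("load_ts", F.current_timestamp())       # audit column
        .select("fraud_case_id", "opened_date", "status", "load_ts")
    )

    # Load: write to the curated zone as Delta, partitioned for downstream reads.
    (target.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("opened_date")
        .save("abfss://curated@examplestore.dfs.core.windows.net/fraud/cases/"))

In practice, a step like this would typically be parameterized per source system and orchestrated by an Azure Data Factory pipeline.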
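Similarly, here is a minimal sketch of the validation and cleansing responsibility, again with hypothetical column names and rules; rows failing basic integrity checks are quarantined for review rather than silently dropped:

    # Minimal sketch of a validation and cleansing pass over the curated
    # table from the previous sketch; columns and rules are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.format("delta").load(
        "abfss://curated@examplestore.dfs.core.windows.net/fraud/cases/")

    # Cleanse: normalize strings and drop duplicate business keys.
    cleaned = (df
        .withColumn("status", F.upper(F.trim(F.col("status"))))
        .dropDuplicates(["fraud_case_id"]))

    # Validate: split out rows that violate basic integrity rules so they
    # can be reviewed instead of silently discarded.
    rule = F.col("fraud_case_id").isNotNull() & F.col("opened_date").isNotNull()
    valid = cleaned.filter(rule)
    rejected = cleaned.filter(~rule)

    print(f"valid={valid.count()}, rejected={rejected.count()}")
    (rejected.write.format("delta").mode("append")
        .save("abfss://quarantine@examplestore.dfs.core.windows.net/fraud/cases/"))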
Requirements:
+ Advanced experience programming ETL code and building and optimizing data pipelines, architectures, and datasets using Microsoft Azure technologies including Spark.
+ Experience with data migration projects within a Microsoft and Azure environment.
+ Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
+ Strong analytical skills related to working with unstructured datasets.
+ A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
+ Ability to plan, prioritize, and manage workload within a time-sensitive environment.
+ Cloud experience – MS Azure: Azure Event Hub, ADF, ADLS, Azure SQL, ASP
+ Frameworks – Spark and Spark optimization (a short optimization sketch follows this list)
+ Working knowledge of Bitbucket, GitHub, Confluence, and JIRA; experience with DevOps pipelines, CI/CD, and related tools
+ Confluent Kafka, Redis, AKS, MQ, OAuth, Postman
+ Experience with data migration projects – moving data from on-prem to cloud environments
+ Jira/Bitbucket/Confluence (collaboration) – 5 years
+ Azure cloud experience – 5 years
+ ETL – 5 years
+ Building and managing CI/CD pipelines – 5 years
+ Spark – 5 years
+ Confluent (Kafka vendor) – 1 year
+ Postman – yes
+ OAuth – 1 year
+ Degree/Certifications Required: A bachelor’s degree in computer science or another quantitative field, plus relevant experience.
+ Years of experience: 7 years
+ Background: Minimum 7 years’ experience working with relational and distributed databases, query authoring (SQL), and working familiarity with a variety of databases.
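As an illustration of the Spark optimization experience called for above, here is a minimal sketch of one common technique: broadcasting a small lookup table so a join avoids shuffling the large side. Paths and column names are hypothetical:

    # Minimal sketch of a common Spark optimization: a broadcast join.
    # Paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.getOrCreate()
    events = spark.read.parquet("/data/fraud_events")        # large fact table
    merchants = spark.read.parquet("/data/merchant_lookup")  # small dimension

    # Broadcasting ships the small table to every executor, so the large
    # table is joined in place instead of being shuffled across the cluster.
    joined = events.join(broadcast(merchants), on="merchant_id", how="left")
    joined.explain()  # the physical plan should show a BroadcastHashJoin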
Why Should You Apply?
+ Health Benefits
+ Referral Program
+ Excellent growth and advancement opportunities
As an equal opportunity employer, ICONMA provides an employment environment that supports and encourages the abilities of all persons without regard to race, color, religion, gender, sexual orientation, gender identity or expression, ethnicity, national origin, age, disability status, political affiliation, genetics, marital status, protected veteran status, or any other characteristic protected by federal, state, or local laws.