Bengaluru, Karnataka, India
20 days ago
Lead Software Engineer

Company Description

When you’re one of us, you get to run with the best. For decades, we’ve been helping marketers from the world’s top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon’s best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified™. Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. For more information, visit epsilon.com/apac or our LinkedIn page.

Job Description

About the Data team:

Data, and this team, are at the heart of everything we do. Our premium data assets empower the team to drive desirable outcomes for leading brands across industries. Armed with high volumes of transactional data, digital expertise and unmatched data quality, the team plays a key role in improving all our product offerings. Our data artisans are keen on embracing the latest in technology and trends, so there's always room to grow and something new to learn here.

Why we are looking for you

- Lead, design and code solutions using Big Data/Hadoop/database technologies to ensure application access, enabling data-driven decision making for the company's multi-faceted ad serving operations.
- Work closely with Engineering resources across the globe to ensure enterprise data solutions and assets are actionable, accessible and evolving in lockstep with the needs of the ever-changing business model.
- The ideal candidate can lead in the areas of solution design, code development, quality assurance, data processing, cross-team communication, project management, and application maintenance.
- You have hands-on experience in Kafka, Flume, Spark, Java/Scala, Hadoop, HDFS, Hive and SQL to work with the Epsilon Marketplace team.
- You have hands-on experience in coding languages like Python and Scala, and in fine-tuning Spark jobs.
- You have exposure to Airflow, Docker containers, and NoSQL databases like HBase.

What you will enjoy in this role

- As part of the Data Pipeline team, you will process billions of records per day from multiple regions/data centers.
- Process ad-server data into the storage layer where further analytics are done.
- Work on Big Data technologies like Flume, Kafka and Spark, and load the aggregated/processed data into HDFS.
- Own a key area for the team: pipelining data.
- Build intraday (hourly, 15-minute, 5-minute) aggregations of jobs performed, using Spark Structured Streaming.
- Work on data assets that are used to measure the performance and efficacy of the defined solution, as well as to feed business analytics and data mining.
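To illustrate the intraday aggregations mentioned above: Spark Structured Streaming groups events into fixed time windows before aggregating. A minimal sketch of that tumbling-window bucketing logic in plain Python (names like `window_start` and the `campaign_a` key are hypothetical, purely for illustration; a real job would express this with Spark's `window()` grouping over a Kafka source):

```python
from collections import defaultdict
from datetime import datetime

def window_start(ts: datetime, minutes: int) -> datetime:
    """Floor a timestamp to the start of its tumbling window."""
    bucket = (ts.minute // minutes) * minutes
    return ts.replace(minute=bucket, second=0, microsecond=0)

def aggregate(events, minutes=5):
    """Count (timestamp, key) events per (window, key) pair,
    the way a windowed streaming aggregation would."""
    counts = defaultdict(int)
    for ts, key in events:
        counts[(window_start(ts, minutes), key)] += 1
    return dict(counts)

# Three events: two fall in the 10:00-10:05 window, one in 10:05-10:10.
events = [
    (datetime(2024, 1, 1, 10, 2), "campaign_a"),
    (datetime(2024, 1, 1, 10, 4), "campaign_a"),
    (datetime(2024, 1, 1, 10, 7), "campaign_a"),
]
agg = aggregate(events, minutes=5)
```

The same grouping with `minutes=15` or `minutes=60` yields the 15-minute and hourly rollups; in production the streaming engine additionally handles late data and watermarking, which this sketch omits.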

What you will do

- Take end-to-end ownership and lead the team, delivering the product on time with high quality
- End-to-end development of automated receipt of anonymized data
- End-to-end development of log data processing
- Data center to data center replication
- Data processing using Flume, Kafka, Spark jobs, Airflow, Docker, etc.
- Migration of production data assets to downstream consuming systems
- Disaster Recovery and Business Continuity implementations
- Workload performance management and tuning
- Ensuring that coded solutions meet functional business requirements for ad serving and measurement
- Application-specific controls and scheduling
- Custom solution building for syndicated and third-party datasets

Qualifications

- Bachelor's Degree in Computer Science or an equivalent degree is required.
- 7–12 years of data engineering experience around database marketing technologies and data management, with technical understanding in these areas.
- Strong hands-on experience with open-source components like Kafka, Flume, Spark, Hadoop, HDFS, Hive, Java/Scala and SQL; experience with scripting, Python preferred.
- Minimum of 2 years of lead experience running teams.
- Ability to handle complex products.
- Strong understanding of data structures and algorithms.
- Strong understanding of Disaster Recovery and Business Continuity solutions.
- Experience with scheduling applications like Airflow and Oozie with complex interdependencies.
- Experience with Docker and Kubernetes a plus.
- Familiarity with complex data lake environments that span OLTP, MPP and Hadoop platforms.
- Ability to diagnose and troubleshoot problems quickly.
- Good experience working with geographically and culturally diverse teams.
- Excellent written and verbal communication skills.