Must Have Skills (Top 3 technical skills only):
- Designing, building, and maintaining ETL feeds for new and existing data sources (illustrated in the sketch below)
- 2+ years of development experience in Java (or Scala), Python, SQL, and shell scripting
- 2+ years of development experience in Hadoop: HDFS, Spark, Hive, Pig, HBase, MapReduce, and Sqoop
- 3+ years of development experience in Hadoop, especially Spark programming
- Experience with scheduled and real-time data ingestion techniques using big data tools
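For illustration only, a minimal PySpark sketch of the kind of batch ETL feed described above. The source path, table name, and column names are hypothetical and not part of this posting:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical feed: all names and paths below are illustrative only.
    spark = (SparkSession.builder
             .appName("orders-etl-feed")
             .enableHiveSupport()
             .getOrCreate())

    # Extract: read a delimited file landed on HDFS.
    raw = spark.read.csv("hdfs:///landing/orders/", header=True, inferSchema=True)

    # Transform: de-duplicate and stamp the load date.
    cleaned = (raw
               .dropDuplicates(["order_id"])
               .withColumn("load_dt", F.current_date()))

    # Load: append to a partitioned, Parquet-backed Hive table.
    (cleaned.write
     .mode("append")
     .partitionBy("load_dt")
     .format("parquet")
     .saveAsTable("analytics.orders"))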
Nice to have skills (Top 2 only):
- 2+ years of development experience in Java (or Scala), Python, SQL, and shell scripting
- Experience in developing REST services is a plus (see the sketch below)
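A minimal sketch of a REST service in Python, assuming Flask; the endpoint and response shape are hypothetical:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical endpoint exposing the status of a data feed.
    @app.route("/feeds/<feed_name>/status")
    def feed_status(feed_name):
        # A real service would query a metadata store; this returns a stub.
        return jsonify({"feed": feed_name, "status": "ok"})

    if __name__ == "__main__":
        app.run(port=8080)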
Detailed Job Description:
- Designing, building, and maintaining ETL feeds for new and existing data sources
- 2+ years of development experience in Hadoop: HDFS, Spark, Hive, Pig, HBase, MapReduce, and Sqoop
- 3+ years of development experience in Hadoop, especially Spark programming
- 2+ years of development experience in Java (or Scala), Python, SQL, and shell scripting
- Experience with scheduled and real-time data ingestion techniques using big data tools (see the streaming sketch below)
- Experience in developing REST services is a plus
- Developing and implementing data ingestion and transformation workflows to and from Hadoop
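For the real-time ingestion point above, a minimal Spark Structured Streaming sketch. It assumes a Kafka source, which the posting does not name; the broker, topic, and paths are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("realtime-ingest").getOrCreate()

    # Assumed Kafka source; the posting only says "big data tools".
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())

    # Land the raw payloads on HDFS as Parquet, with checkpointing for recovery.
    query = (events.selectExpr("CAST(value AS STRING) AS payload")
             .writeStream
             .format("parquet")
             .option("path", "hdfs:///raw/events/")
             .option("checkpointLocation", "hdfs:///checkpoints/events/")
             .start())

    query.awaitTermination()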
Minimum years of experience: 5+
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute:
- Developing and implementing data ingestion and transformation workflows to and from Hadoop
- Ensuring ongoing accuracy of ETL data feeds, monitoring for changes in the source systems that may alter inbound data (see the schema-drift sketch below)
- Documenting all metadata (data source, field type, definition, etc.) for each field and table created
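One way to monitor for source-system changes is a scheduled schema check. A minimal PySpark sketch; the expected schema and table name are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Hypothetical expected schema for one feed, as documented in its metadata.
    expected = {"order_id": "string", "amount": "double", "load_dt": "date"}

    # Compare against the live table's column names and types.
    actual = dict(spark.table("analytics.orders").dtypes)

    # Flag columns that were added, dropped, or retyped upstream.
    drift = {name: (expected.get(name), actual.get(name))
             for name in set(expected) | set(actual)
             if expected.get(name) != actual.get(name)}

    if drift:
        print("Schema drift detected:", drift)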
Interview Process (Is face to face required?): Yes
Does this position require Visa independent candidates only? Yes