PALO ALTO, CA, USA
7 days ago
Hadoop Expert
Job seekers, please send resumes to resumes@hireitpeople.com

Some background on Hadoop:

 

MapReduce

            The data-processing module in Hadoop, used to develop Cloud platforms

            PaaS (Platform as a Service)

            Candidates typically have Pig in their background
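As a rough illustration of the map/reduce model this module is built on, here is a minimal word-count sketch in plain Java, with no Hadoop dependencies. The class name and the streams-based aggregation are illustrative only, not Hadoop's actual Mapper/Reducer API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {
    // "Map" phase: split each line into words (emitting one record per word);
    // "reduce" phase: group by word and sum the counts per key.
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = wordCount(Arrays.asList("hadoop map reduce", "map reduce map"));
        System.out.println(counts.get("map"));    // 3
        System.out.println(counts.get("reduce")); // 2
    }
}
```

In real MapReduce the same split/shuffle/aggregate steps run in parallel across a cluster, with the framework handling partitioning and fault tolerance.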

 

Storm/Kafka

            The real-time processing module in the Hadoop ecosystem

            MUST HAVE HBase/Cassandra experience

            MUST HAVE Kafka experience
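To illustrate the kind of real-time pipeline these roles work on, here is a minimal sketch in plain Java: a BlockingQueue stands in for a Kafka topic, and the consumer loop stands in for a Storm bolt keeping a running aggregate. All names here are hypothetical; this is not the Storm or Kafka API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StreamSketch {
    private static final String POISON = "__STOP__"; // sentinel marking end of stream

    // Consume events from the queue and keep running counts per event type,
    // the way a bolt might aggregate events flowing off a topic.
    static Map<String, Integer> consume(BlockingQueue<String> topic) throws InterruptedException {
        Map<String, Integer> counts = new HashMap<>();
        while (true) {
            String event = topic.take();          // blocks until an event arrives
            if (POISON.equals(event)) break;
            counts.merge(event, 1, Integer::sum); // update the running aggregate
        }
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> topic = new ArrayBlockingQueue<>(16);
        // Producer thread stands in for an upstream event source.
        Thread producer = new Thread(() -> {
            try {
                for (String e : new String[]{"click", "view", "click", "view", "view", POISON}) {
                    topic.put(e);
                }
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        Map<String, Integer> counts = consume(topic);
        producer.join();
        System.out.println(counts.get("click")); // 2
        System.out.println(counts.get("view"));  // 3
    }
}
```

Real Storm/Kafka deployments add partitioning, replication, and at-least-once delivery on top of this basic produce/consume/aggregate loop.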

 

Falcon

            The module in Hadoop for data management

            ETL and Pig experience required for this module

 

Positions Available

 

**For these Member of Technical Staff roles, Hadoop developers won't be a fit for any of my positions. They are users of Hadoop, not the people who actually build it. That's why we usually look for people who have built distributed systems from the ground up and have solid Java or C++ development experience.**

Member of Technical Staff (MapReduce)

Apache Hadoop MapReduce is among the most popular open-source data processing systems in the world. We are looking for senior folks with experience in large-scale, distributed systems to help drive Hadoop MapReduce even further. Your primary focus will be scale, performance, and scheduling in Apache Hadoop MapReduce.
  
Requirements:

• Experience designing and developing large-scale, distributed systems, with a strong understanding of scaling, performance, and scheduling

• Hands-on programmer, strong in data structures, algorithms, and programming practices; Java experience desirable

• Experience using MapReduce or other parallel programming techniques

• Experience using or developing cloud platforms such as AWS, OpenStack, or Torque/Maui/Moab

• Experience using multi-tenancy features such as Linux containers and cgroups

• Experience using projects in the Apache Hadoop ecosystem such as Pig, Hive, and HBase is a big plus

• Experience contributing to open source projects is desirable

• Ability to work in an agile and collaborative setup within an engineering team

• Strong oral and written communication skills

Member of Technical Staff (Storm/Kafka)

 

Key responsibilities:

• Drive architecture, design, and implementation of Apache Storm core components in collaboration with the Apache Storm open source community

• Work on complex architecture for real-time processing on Hadoop clusters running on thousands of nodes across data centers

• Understand partner/customer requirements for integration with their existing event stream technologies and frameworks

• Work with product management and quality assurance teams to ensure delivery of high quality products

  
Requirements:

• BS/MS in Computer Science

• Passionate about programming; clean coding habits, attention to detail, and a focus on quality

• 4+ years of hands-on software design, implementation, and test experience, with a strong understanding of distributed and large-scale systems

• Experience with Apache Hadoop, YARN, Storm, Kafka, ActiveMQ

• Strong software engineering skills: modular design, data structures, and algorithms

• Deep knowledge of system architecture, including process, memory, storage, and networking management, is highly desired

• Experience with Java/C++, concurrent programming, test-driven development, and related areas

• Strong communication skills

  
Big pluses:

• Working knowledge of Hadoop or other big data solutions

• Recognized contributions to open source projects outside of work

• Experience with NoSQL databases such as Cassandra and HBase

• Experience with Scala or Clojure

 

Member of Technical Staff (Oozie)

 

Hortonworks is looking for passionate software engineers for the Data Management development team within the Hortonworks Data Platform. This team is responsible for the components within the Hadoop ecosystem for managing data and moving data into and out of Hadoop, specifically Oozie, Sqoop, and Flume. This position will focus initially on Oozie, but will eventually expand to include the other components.

Candidates should be experienced engineers who want to be part of taking Apache Oozie, Flume, Sqoop, and other ETL frameworks to the next level of functionality, stability, and enterprise readiness. To be successful in this position you will need to work well with others in an open source community, share ideas, review designs, and collaborate to achieve optimal results. You must also be passionate about building quality into software from the earliest stages of the development lifecycle until final delivery of a commercial-quality product.
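As a sketch of what a workflow scheduler like Oozie does at its core, here is a minimal dependency-ordered action runner in plain Java using Kahn's topological sort. The class and method names are hypothetical; this is not Oozie's actual engine:

```java
import java.util.*;

public class WorkflowSketch {
    // Run actions in dependency order, the way a workflow engine
    // sequences the jobs of a DAG. Each map entry lists the actions
    // that must complete before the key action may start.
    static List<String> run(Map<String, List<String>> deps) {
        Map<String, Integer> indegree = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (String action : deps.keySet()) indegree.putIfAbsent(action, 0);
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            for (String dep : e.getValue()) {
                indegree.putIfAbsent(dep, 0);
                indegree.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : indegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String action = ready.poll();
            order.add(action); // a real engine would launch the job here
            for (String next : dependents.getOrDefault(action, List.of())) {
                if (indegree.merge(next, -1, Integer::sum) == 0) ready.add(next);
            }
        }
        if (order.size() != indegree.size())
            throw new IllegalStateException("cycle in workflow definition");
        return order;
    }

    public static void main(String[] args) {
        // extract -> transform -> load: a toy ETL pipeline definition
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("extract", List.of());
        deps.put("transform", List.of("extract"));
        deps.put("load", List.of("transform"));
        System.out.println(run(deps)); // [extract, transform, load]
    }
}
```

Oozie layers much more on top of this (XML workflow definitions, time- and data-triggered coordinators, retries, and failure transitions), but dependency-ordered execution of actions is the heart of it.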

REQUIREMENTS:

• An MS degree in computer science or equivalent experience in industry

• Advanced Java programming skills with a good grasp of key computer science fundamentals, including algorithms, data structures, and multi-threading

• Advanced C++ can be a substitute for Java

• 3-8 years of relevant hands-on software engineering experience doing system software design and development, including distributed and large-scale systems

• Experience with development of data management software, including experience in distributed systems, workflow and scheduling systems, and/or ETL/ELT

• Experience with enterprise schedulers such as Oozie, Quartz, Azkaban, or other similar solutions is highly desirable

• Experience with the Hadoop ecosystem is a plus

• Ability to coordinate across teams, including QA, doc writers, support, sales, etc.

• Ability to interact with customers in pre-sales, planning, joint development, and support situations

• Strong oral and written communication skills


 

 
