• 5+ years of Linux or Unix system administrator experience.
• 3+ years of Hadoop experience in setup, configuration, and management.
• 11+ years with networking/monitoring concepts and tools in a multi-platform data center environment.
• Design and implement data storage, schemas, and partitioning systems appropriate to Hadoop and related technologies such as HBase, Hive, and Pig.
• Identify, assess, and recommend appropriate solutions to advise customers on cluster requirements and limitations, applying industry best practices and expertise in emerging technologies, risk mitigation, and continuity planning to address backup and recovery.
• Possess advanced Linux and Hadoop system administration skills, including networking, shell scripting, and system automation.
• Provide enterprise-level information technology recommendations and solutions in support of customer requirements.
• Use customer-defined data sources and prototype processes to satisfy the proof of concept.
• Develop design patterns for specific data processing jobs.
• Test various scenarios for optimized cluster performance and reporting.
• Prepare and deliver presentations to communicate the deployment process for the proof of concept.
• Serve as a point of contact between internal and external customers and program management.
• Ensure accurate documentation of technical specifications.
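As context for the partitioning duty above, Hive lays out a partitioned table on HDFS as key=value subdirectories. A minimal sketch of that layout (the helper function and paths are hypothetical, not part of this posting):

```python
# Hypothetical illustration of Hive-style partition directories on HDFS.
# Hive stores each partition of a table under key=value subdirectories,
# e.g. /warehouse/logs/dt=2024-01-15/region=us
def partition_path(table_root, **partition_keys):
    """Build the HDFS directory path for one partition of a Hive table."""
    parts = "/".join(f"{key}={value}" for key, value in partition_keys.items())
    return f"{table_root}/{parts}"

print(partition_path("/warehouse/logs", dt="2024-01-15", region="us"))
# /warehouse/logs/dt=2024-01-15/region=us
```

Choosing partition keys that match common query filters (such as date) lets Hive prune whole directories at query time instead of scanning the full table.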
Skills Requirements:
Hadoop experience with the setup, configuration, or management of a multi-node (50 to 100 nodes) Hadoop cluster, specifically with Cloudera's Hadoop distribution. Experience with Hadoop technologies such as Pig, Hive, and HBase. Experience with Kerberos and securing Hadoop clusters. Experience with systems monitoring tools (e.g., Nagios) to help tune, configure, and administer the cluster.
Certification:
Certified Linux System Administrator (e.g., Red Hat Linux)