Our client needs a consultant with a Red Hat 6 certification, a strong Linux background, and Hadoop exposure. Please submit a consultant today; the client will interview tomorrow.
Position: Hadoop Admin
Location: New York, NY
Duration: 6 Months Contract
Interview: Phone and Onsite/Skype
Critical skills: Hadoop Admin, RHEL 6, and a strong Linux background
(Don't be confused by the full JD; the critical skills listed above are mandatory.)
Description:
- The main responsibility of this position is to install, configure, and manage Hadoop clusters.
- Oversee and assist with major project initiatives, as well as provide support for Hadoop servers in development, QA/UAT, and production environments.
- This position requires in-depth knowledge of the RHEL 6 operating system; superior troubleshooting skills; understanding of system bottlenecks such as memory, CPU, OS, storage, and networks; and Hadoop skills like HBase, Hive, and Pig.
- Ability to deploy a Hadoop cluster, add and remove nodes, track jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure the cluster, and take backups.
- Familiarity with open source configuration management and deployment tools such as Puppet or Chef and Linux scripting.
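The add/remove-nodes duty above is typically driven by Hadoop's include/exclude host lists. As a minimal sketch, not a definitive procedure, decommissioning a DataNode starts with an exclude file referenced from hdfs-site.xml (the file paths here are assumptions; actual locations vary by distribution):

```xml
<!-- hdfs-site.xml (assumed path: /etc/hadoop/conf/hdfs-site.xml) -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

Adding a node's hostname to the exclude file and running `hdfs dfsadmin -refreshNodes` begins decommissioning; once the node's blocks have been re-replicated elsewhere, it can be safely removed from the cluster.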
Responsibilities
- Hadoop Administration: Provide day-to-day operational support for the Hadoop environment, which is based in regional data center locations. Ensure the integrity of all systems is maintained and that system versions are kept current and at appropriate maintenance levels. Monitor performance and perform required maintenance and upgrades as appropriate. Make recommendations for operational improvements. Provide ongoing support and troubleshooting. Develop and promote technical standards and support migration efforts to the future Hadoop infrastructure – 75%.
- Incident Support: Provide incident support for all Hadoop clusters, including after-hours. Monitor cluster health and remediate identified problems. Monitor the various work queues and complete incoming requests in a timely manner. Participate in root cause analysis bridges and provide technical assistance when required – 20%.
- Capacity Management: Frequently monitor and review system capacities. Perform hardware, software, and operating system tuning when applicable. Provide short- and long-term capacity forecasting and perform the necessary planning to ensure adequate system resources are always available and functioning properly – 5%.
Qualifications
Education
- 4-year degree from an accredited college
- RHCSA or RHCE Certification, or equivalent experience
Experience
- 5+ years Linux System Administrator experience
- 3+ years Hadoop experience in the areas of setup, configuration, management, and disaster recovery. Pivotal HD experience is a plus.
- 8+ years in Networking/Monitoring concepts and tools in a multi-platform data center environment
- Strong knowledge of Hadoop server architecture
Competencies/Skills
- Design and implement data storage, schemas, and partitioning schemes appropriate to Hadoop and related technologies such as HBase, Hive, and Pig
- Identify, assess, and recommend appropriate solutions; advise the customer on cluster requirements and limitations by applying industry best practices and expertise in emerging technologies, risk mitigation, and continuity planning to address backup and recovery
- Possess advanced Linux and Hadoop system administration skills, including networking, shell scripting, and system automation
- Provide enterprise-level information technology recommendations and solutions in support of customer requirements
- Use customer-defined data sources and prototype processes to satisfy proof-of-concept requirements
- Develop design patterns for specific data processing jobs
- Test various scenarios for optimized cluster performance and reporting
- Responsible for implementation and ongoing administration of Hadoop infrastructure
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Working with data delivery teams to set up new Hadoop users: setting up Linux accounts and Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance as well as creation and removal of nodes
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performances and capacity planning
- Monitor Hadoop cluster connectivity and security
- Manage and review Hadoop log files
- HDFS management, monitoring, support and maintenance
- Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability
- Collaborating with infrastructure and application teams to install operating system and Hadoop updates, patches, version upgrades when required
- Point of Contact for Vendor escalation
- General operational expertise such as good troubleshooting skills, understanding of system capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks.
- Familiarity with open source configuration management and deployment tools such as Puppet or Chef and Linux scripting
- Knowledge of troubleshooting core Java applications is a plus
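The user-onboarding responsibility above (Linux accounts, Kerberos principals, HDFS access checks) can be sketched as a checklist. The script below only prints the commands an admin might review before running them on a real cluster; the username, realm, and paths are hypothetical examples, not part of the JD:

```shell
#!/bin/sh
# Sketch: emit (not execute) typical commands for onboarding a new Hadoop user.
# All names below are hypothetical.
NEW_USER=alice               # hypothetical Linux/Hadoop user
KERB_REALM=EXAMPLE.COM       # assumed Kerberos realm

ONBOARD_CMDS="useradd -m ${NEW_USER}
kadmin -q \"addprinc -randkey ${NEW_USER}@${KERB_REALM}\"
hdfs dfs -mkdir -p /user/${NEW_USER}
hdfs dfs -chown ${NEW_USER}:${NEW_USER} /user/${NEW_USER}
hdfs dfs -ls /user/${NEW_USER}"

# Print the checklist for review before running on a live cluster.
printf '%s\n' "$ONBOARD_CMDS"
```

The final `hdfs dfs -ls` acts as a smoke test that the new user's HDFS home directory exists and is readable; Hive, Pig, and MapReduce access would be verified separately with small test jobs.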
Manju Shree
Sr IT Recruiter
IDC Technologies, Inc., 1851 McCarthy Boulevard, Suite 116, Milpitas, CA 95035, USA
Phone: 408-418-5779 ext 252 | Fax: 408-608-6088
Email: manju@idctechnologies.com | Web: www.idctechnologies.com
You received this message because you are subscribed to the Google Groups "SureShotJobs" group.