Remote - Hadoop Admin/System Engineer

Cybertec Inc

Posted: 2021-12-03 11:30:03

Job location: Washington, District of Columbia, United States

Job type: all

Job industry: Engineering

Job description

Hi, hope you are doing well. I have an urgent opening for a Hadoop Admin.

Role: Hadoop Admin
Duration: 6+ months, then conversion to full time
Location: 100% remote
Visa status: USC/GC
Years of experience: 5+ years as a Hadoop Admin and 8+ years overall
Job title: Hadoop Admin/System Engineer
Required technologies: Hadoop, Cloudera, Linux; healthcare background
Interview process: 2 rounds
Availability for video screening:
MAKE SURE TO INCLUDE A LINKEDIN PROFILE

Job Description:
We are currently seeking resources with expertise in Hadoop infrastructure design, configuration, installation, security, and ongoing support.

Responsibilities
- Understanding of standard methodologies for maintaining large-scale Hadoop clusters
- Implement and support backup and recovery strategies on Hadoop clusters based on current processes
- Install, configure, and maintain high availability of fully secured end-to-end environments
- Proactively find opportunities to implement automation and monitoring solutions (a minimal monitoring sketch follows this description)
- Propagate knowledge and operational readiness through documentation and mentorship
- Perform analysis of competing technologies and products
- Build out platforms across on-premise, private-cloud, public-cloud, and hybrid architectures, leveraging the latest technologies

Required Experience
- 5+ years of experience architecting, administering, configuring, installing, and maintaining Big Data technologies, with emphasis on Hadoop (CDH or HDP)
- 5+ years of hands-on Linux administration (log searching, troubleshooting, tuning)
- Expert understanding of Hadoop ecosystem technologies (Apache Spark, Impala, HDFS, Hive, Cloudera Manager, Sentry/Ranger, HBase, Solr, Kudu, etc.)
- Advanced knowledge of specific operating systems (Linux), servers, and shell scripting
- Demonstrated experience with databases (e.g., SQL, NoSQL, Hive, in-memory, HBase)
- Install and configure Hadoop clusters with full security (Kerberos, TLS, encryption at rest)
- Effective communication skills to partner closely with citizen analysts and data scientists
- Plan and execute major platform software and operating system upgrades and maintenance across physical environments
- Ensure proper resource utilization within globally used multi-tenant clusters
- Review performance stats and query execution/explain plans; recommend changes for tuning
- Create and maintain detailed, up-to-date technical documentation
- Ability to work in a fast-paced, team-oriented environment
- Understand network optimization and DR strategies

Minimum Education, Experience, & Specialized Knowledge Required
- Degree in Information Systems, Computer Science, or a related field, or equivalent working experience
- Ability to complete the full software development lifecycle and deliver in an Agile/Scrum environment, leveraging Continuous Integration/Continuous Delivery
- Strong interpersonal skills, including a positive, solution-oriented attitude
- Passionate, flexible, and innovative in using tools, experience, and other resources to deliver against challenging and constantly changing business requirements
- Able to interface with various solution/business areas to understand requirements and support development

This role supports the Big Data Factory (BDF), including the commercialization HDSC project.

Responsibilities:
- SME in Human Data Science Cloud: expert in building and supporting all aspects of HDSC
- SME in Cloudera Data Platform
- Support Data Science and Analytics teams on complex code deployment, debugging, and performance optimization problems
- Become the SME for the Big Data and Hadoop data science stack, including CDSW, Jupyter, Conda, RStudio, etc.
- Proficient with robust shell or scripting tasks and coding; languages such as Python will be used, and Scala is a plus
- Troubleshoot and debug Hadoop ecosystem run-time issues
- Troubleshoot and debug VMware, Triton, and other virtualization/container management systems
- Communicate and integrate with a wide set of teams, including hardware, network, Linux kernel, JVM, Big Data vendors, and cloud vendors
- Automate deployment and rollback of packages and applications for container technologies
- Work with operating system internals, file systems, disk/storage technologies, and storage protocols
- Lead projects, set accurate expectations for the scope of work and the time required to complete it, and communicate status effectively through the completion of work
- Assist with management and configuration of Hadoop clusters
- Work with enterprise security solutions such as LDAP, AD, and/or Kerberos
- Consult on database administration and design
- Work with and develop on the underlying infrastructure for Big Data solutions (clustered/distributed computing, storage, data center networking)
- Experience in, or a good understanding of, other data lake technologies
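
As a concrete illustration of the automation and monitoring responsibilities above: a minimal health-check sketch in Python, assuming an unsecured Hadoop 3.x NameNode reachable at a placeholder host. The /jmx endpoint and the FSNamesystemState metric names are standard Hadoop; the hostname and alerting behavior here are illustrative only, not part of this posting.

import json
import urllib.request

# Placeholder NameNode web UI address (Hadoop 3.x default port 9870).
NAMENODE_URL = "http://namenode.example.com:9870"

def fetch_fsnamesystem_state(base_url):
    """Read the FSNamesystemState MBean from the NameNode's /jmx endpoint."""
    qry = "Hadoop:service=NameNode,name=FSNamesystemState"
    with urllib.request.urlopen(base_url + "/jmx?qry=" + qry) as resp:
        return json.load(resp)["beans"][0]

def main():
    state = fetch_fsnamesystem_state(NAMENODE_URL)
    live = state["NumLiveDataNodes"]
    dead = state["NumDeadDataNodes"]
    used_pct = 100.0 * state["CapacityUsed"] / state["CapacityTotal"]
    print("live datanodes: %d, dead datanodes: %d, capacity used: %.1f%%"
          % (live, dead, used_pct))
    # Exit nonzero so a cron job or monitoring agent can raise an alert.
    if dead > 0:
        raise SystemExit("ALERT: %d dead datanode(s)" % dead)

if __name__ == "__main__":
    main()

On a fully secured cluster of the kind this role describes (Kerberos, TLS), the equivalent check would go over SPNEGO-authenticated HTTPS or through the Cloudera Manager API rather than a plain HTTP call.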
