
Staff DevOps Engineer - Big Data Analytics & Cloud Infrastructure

Big Data Team - Relocation assistance available

The Big Data team plays a critical and strategic role in ensuring that ServiceNow can exceed the availability and performance SLAs of the ServiceNow Platform-powered customer instances deployed across the ServiceNow cloud and the Azure cloud. Our mission is to:

Deliver state-of-the-art monitoring, analytics, and business insights by employing new tools, Big Data systems, and AI and Machine Learning methodologies that improve efficiency across a variety of functions in the company, including Cloud Operations, Customer Support, Product Usage Analytics, and Product Upsell Opportunities, enabling a significant impact on both top-line and bottom-line growth.

The Big Data team is responsible for:

  • Collecting, storing, and providing real-time access to large amounts of data
  • Providing real-time analytics tools and reporting capabilities for various functions, including monitoring, alerting, and troubleshooting
  • Machine Learning and anomaly detection
  • Capacity planning
  • Data analytics and deriving business insights

Role Responsibilities

  • Maintain and support Big Data infrastructure on the ServiceNow cloud and Azure
  • Automate deployment, maintenance, and monitoring activities
  • Implement Hadoop cluster security
  • Perform capacity planning
  • Tune performance of various Hadoop components
  • Enforce data governance policies
  • Help with various Big Data and cloud automation projects
  • Deploy code and maintain Big Data systems
  • Perform on-call production monitoring and support for Big Data infrastructure and applications in the ServiceNow cloud and Azure cloud

To be successful in this role, you have:

  • Expert-level experience in a Hadoop administration role (preferably Cloudera CDP)
  • Expert-level experience working on Azure or AWS
  • Experience performing Hadoop and Azure/AWS performance tuning
  • Experience with Ansible, Terraform, Puppet, and similar technologies
  • Experience with CI/CD automation leveraging Docker and Kubernetes orchestration
  • In-depth knowledge of Hadoop components such as Spark Streaming, HDFS, HBase, YARN, Hive, Impala, Atlas and Kudu
  • Experience securing the Hadoop stack with Sentry, Ranger, LDAP, and Kerberos KDC
  • In-depth knowledge of CentOS 7.x and shell scripting
  • Working knowledge of Java and Python
  • Ability to learn quickly in a fast-paced, dynamic team environment
  • Highly effective communication and collaboration skills
  • MS degree in Computer Science or equivalent experience required
  • 7+ years of overall experience, with at least 2 years in Big Data-related positions