
Principal Infrastructure & Operations Engineer

About the role

As a Principal Infrastructure & Operations Engineer you will be part of the Support/DevOps Engineering Team that supports many of the Optum/UHG analytics organisations. The team's focus is on increasing our operational maturity, managing compliance and vulnerabilities, and creating solutions through automation and emerging technologies. This role works closely with one of our newest customers, Healthcare Economics (HCE), providing technical leadership and expertise as they begin migrating their legacy analytics tech stack to newer, more modern open-source solutions that give them greater opportunity for growth.

Primary Responsibilities

  • Provide technical leadership and support for HCE infrastructure platforms
  • Design, plan and build technical solutions
  • Maintain and support Linux server infrastructure
  • Work with and support container orchestration technologies such as Kubernetes and Docker
  • Work with automated pipelines and apply Agile and DevOps best practices
  • Work with Apache Spark using R or Python
  • Ensure integration of multiple analytic platforms so that our data science toolkit is easier for end users to work with
  • Address performance and scalability issues and perform necessary capacity planning to meet new business initiatives
  • Provide guidance on best practices and frameworks in areas of automation, security, workload placement, release and change management
  • Provide hands-on support to our data scientist and data engineer users
  • Work alongside the wider Infrastructure Team and provide support and mentoring
  • Present solutions and designs to senior leadership and technical groups

Required Qualifications

  • Undergraduate degree or equivalent experience.
  • Experience with Linux system administration and support in an enterprise IT or service provider organisation
  • Experience building Docker containers, supporting Kubernetes environments and working with Helm charts
  • Experience building and maintaining automated pipelines as code
  • Proficiency with a scripting language such as R or Python
  • Experience with Apache Spark and other distributed computing solutions
  • Experience with GitHub/GitLab

Preferred Qualifications

  • Proficiency in R and R package management
  • Experience working with data science and analytics technologies and tooling
  • Ability to quickly learn new technologies and adapt learning into successful implementations.
  • Self-motivated, resourceful, creative, innovative, results driven, and adaptable with solid problem solving and analytical skills
  • Excellent verbal and written communication and interpersonal skills, with the ability to communicate effectively with both business and technical teams
  • Experience working in an Agile/Scrum environment