Software Developer / Engineer / Architect

Data Pipeline Operations Software Engineering Manager

Groupon’s mission is to become the daily habit in local commerce and fulfill our purpose of building strong communities through thriving small businesses by connecting people to a vibrant, global marketplace for local services, experiences and goods. In the process, we’re positively impacting the lives of millions of customers and merchants globally. Even with thousands of employees spread across multiple continents, we still maintain a culture that inspires innovation, rewards risk-taking and celebrates success. If you want to take more ownership of your career, then you're ready to be part of Groupon.

 

Are you ready to help unlock the value of Groupon’s Data? Are you passionate about transforming existing organizations? Interested in preventing issues rather than reacting to them? Do you have a deep love for Data Science and want to apply it to deliver predictive analytics, higher-quality code changes, and as much operational efficiency as possible?

 

Groupon is hiring a leader for its Pipeline Reliability Engineering Team. This is an opportunity to lead Software Design Engineers within Groupon’s Data Pipeline Operations organization. In this role, you will own the operational aspects of running all of Groupon’s Data Pipelines, which provide business insights and reports consumed across the company, all the way up to our executive team. You’ll also own migrating your systems to the public Cloud, as well as defining and building that infrastructure.

 

Obligations and short-term deliverables include:

  • Review on-call shifts and allocations, and devise a plan for more equitable time allocation across all team members.
  • Perform a review of commonly occurring issues.
  • Reduce noise levels by addressing those issues.
  • Build a Jenkins-based, AWS-hosted CI/CD pipeline to prevent commonly occurring issues.
  • Review the production environment and address gaps in metrics reporting, monitoring, and alerting.
  • Review pre-production environments and address gaps that prevent more thorough validation, including proper metrics reporting, monitoring, and alerting.
  • Get involved with the Cloud migration and ensure all team members have an opportunity to contribute to the effort.
  • Build career plans for direct reports, ensuring all have growth paths and are challenged on a regular basis.
  • Work on employee satisfaction and retention.

Long term:

  • Focus on prevention rather than being reactive to issues.
  • Utilize data to inform preventive improvements that eliminate entire classes of ongoing remediation.
  • Whatever else you identify as worth exploring.

Qualifications:

  • 6 or more years of experience engineering distributed, real-time data processing at scale.
  • Experience building and owning CI/CD pipelines.
  • Experience with Airflow, Kafka, Spark, and the Hadoop ecosystem.
  • Excellent leadership and interpersonal skills.
  • Strong analytical skills.
  • Involvement in open-source product/technology development is a strong plus.
  • Proven ability to adapt to a dynamic project environment and lead multiple projects at a time.
  • Proven ability to collaborate with application development and other cross functional teams.
  • Ability to coach and provide mentorship to junior team members.
  • Ability to recruit and build strong engineering teams.
  • Ability to evaluate risks and make sound decisions in a dynamic environment.
  • BS in Computer Science or other technical field.