
Data Ops Engineer

Discovery Inc
Sterling, VA 20164

Location: Sterling, Virginia, United States
Req ID: 29835


Our Team

As Discovery Communications' portfolio continues to grow around the world and across platforms, the Global Technology & Operations team is building media technology and IT systems that meet the world-class standard for which Discovery is known. GT&O builds, implements, and maintains the business systems and technology that are critical for delivering Discovery's products, while articulating the long-term technology strategy that will enable Discovery's growing pay-TV, digital terrestrial, free-to-air, and online services to reach more audiences on more platforms.

From Amsterdam to Singapore and from satellite and broadcast operations to SAP, we are driving Discovery forward on the leading edge of technology.

The Global Data Analytics team enables Discovery Communications to turn data into action. Using big data platforms, data warehousing and business intelligence technology, audience data, advanced analytics, data science, visualization, and self-service analytics, this team supports company efforts to increase revenue, drive ratings, and enhance consumer engagement.

The Role

We are looking for a well-rounded software developer to join our DataOps team. The right candidate for this role will have a passion for developing tools and automation to deploy, maintain, and troubleshoot distributed services and applications on cloud and on-premises infrastructure.

Working within the Global Data Engineering and Advanced Analytics team, the role will provide technical direction and implementation to build scalable infrastructure that enables digital transformation. Using modern cloud and DevOps tools and open-source technologies, you will be responsible for architecting and implementing the new end-to-end CI/CD pipeline for data and analytics applications.

You will interface with functional areas throughout the enterprise, identifying opportunities to re-engineer technology processes, improve efficiency, and reduce costs.

A strong background in development is necessary, not only to build and improve internal tooling but also to improve selected open-source tools.


Responsibilities

  • Develop tools and frameworks for distributed systems, services, and applications
  • Serve as an expert in cloud technologies (AWS required) to drive automation and best practices
  • Contribute to all phases of the development life cycle and collaborate with system architects on application infrastructure
  • Develop reliable and scalable systems for monitoring/alerting and access management of production systems
  • Leverage modern tools and techniques to develop clean, efficient, and reusable code
  • Identify and address design, development, and delivery performance bottlenecks in preproduction/development environments, continually improving applications
  • Write documentation for both internal and external consumers, covering design artifacts, code, and fixes
  • Collaborate with DevOps teams to automate software deployment, including deployment of immutable application infrastructure
  • Administer AWS services, including IAM users, roles, and policies, as well as AD integration


Qualifications

  • Bachelor's degree in Computer Science, Information Technology, Information Systems, or a similar field
  • Excellent knowledge in Linux internals, virtualization, DevOps tools, and cloud technologies
  • Hands-on experience building systems and tools using cloud-native technologies in AWS
  • Strong foundation in Infrastructure as Code and configuration management using tools like Terraform, SaltStack, Ansible, Chef or Puppet
  • Minimum of 5 years of experience in enterprise solution development
  • Strong experience developing in Bash and Python
  • Advanced knowledge of AWS big data and analytics platform services, such as EMR, Redshift, Data Pipeline, and Kinesis, including system design, build, and deployment
  • Advanced knowledge of HashiCorp Terraform, including best practices for enterprise, working with custom providers, module registry/module sharing, advanced HCL and interpolation functions, and running Terraform in a shared environment
  • Experience in automated deployment, installation and configuration of applications on Linux systems, including the development and improvement of the tools for doing so
  • Advanced knowledge of Linux system administration, on systems such as Red Hat/CentOS and Debian, including networking, init/systemd, process monitoring/tracing, and system resource monitoring/troubleshooting
  • Extensive experience working with version control and repository management systems such as Git
  • Experience working with containerization approaches such as Docker
  • Experience working with Ansible, or other configuration management frameworks, for automation and configuration


Posted: 2021-09-18 Expires: 2021-10-20
