Our Company
At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.
What You’ll Do
- As an experienced Analytics Ops Engineer at Teradata, you will design, implement, and maintain our data pipelines and analytics infrastructure. You will leverage modern DataOps and automation technologies such as Docker, Kubernetes, and Airflow to ensure seamless data flow and availability. Experience with air-gapped configurations is a significant advantage.
- Your role will be crucial in optimizing our data operations, ensuring high availability and performance of our data services. By automating workflows and managing infrastructure, you will enable our teams to make data-driven decisions more efficiently.
- Success in this role means delivering robust and scalable data pipelines, maintaining high system uptime, and effectively automating data processes. You will be recognized for your ability to integrate various data sources and optimize performance, contributing to the overall success of our analytics platform.
Who You’ll Work With
You will be part of a dedicated and innovative data operations team that collaborates closely with data scientists, analysts, and other stakeholders to build and maintain a resilient data infrastructure. The team plays a vital role in ensuring the reliability and efficiency of our data services, and your contributions will help maintain the integrity and performance of our analytics platform. This position reports to the Data Integration Team Lead.
What Makes You a Qualified Candidate
Non-negotiable qualifications:
- Minimum of 5 years of experience in data engineering, analytics operations, or a related field.
- Strong SQL skills with experience in writing complex queries and optimizing database performance.
- Hands-on experience with Kubernetes and Docker for container orchestration and management.
- Proficiency in using Apache Airflow for workflow automation and scheduling.
- Experience with air-gapped configurations.
What You’ll Bring
- Proficiency in a programming language such as Python, Java, or JavaScript.
- Proficiency with CI/CD tools such as GitLab.
- Previous experience in a DevOps role is a plus.
- Proficiency in Linux administration and shell scripting.
- Knowledge of specific Linux distributions, particularly SUSE Linux Enterprise Server (SLES).
- Experience with cloud platforms such as AWS or GCP is a plus.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Experience with automation scripts and tools to streamline data processing and deployment workflows.
- Knowledge of data integration techniques and ensuring data consistency and reliability.
- Ability to set up monitoring tools and troubleshoot issues related to data pipelines and infrastructure.
- Familiarity with big data technologies like Hadoop or Spark is a plus.
- Strong documentation skills to maintain comprehensive records of data workflows, infrastructure, and processes.
Why We Think You’ll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.