Senior AI Engineer - 220082

Full Time
Remote

Karnataka, India | Telangana, India | Gurugram, Haryana, India | Maharashtra, India

Posted within the last 24 hours

Our Company

At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

Who You'll Work With

This position sits within the Data Intelligence Platform team, a group focused on building next-generation AI-assisted data services as part of Teradata's core platform. Our team operates at the intersection of cloud infrastructure, data engineering, and applied AI — shipping highly available, multi-tenant services that power intelligent query routing and data discovery at scale.

Our platform responsibilities include:

  • Designing and operating highly available microservices for data catalog ingestion and serving
  • Building AI-assisted query generation and routing services across heterogeneous data sources
  • Deploying and managing the lifecycle of services on Kubernetes (K8s) across AWS, Azure, GCP, and on-prem
  • Developing data pipelines for catalog extraction, normalization, and semantic enrichment
  • Operating centralized observability: monitoring, alerting, and distributed tracing for all platform services
  • Providing DevOps tooling and CI/CD pipelines to support continuous delivery

What You’ll Do

We are building a new service to collect and normalize data catalogs from diverse data sources — including relational databases, data lakes, data warehouses, and streaming systems — and expose them to an AI agent that dynamically constructs and routes queries to the appropriate source. This is a greenfield initiative that requires strong engineering judgment, a systems-thinking mindset, and experience shipping production-grade services.

You will be a core contributor on this project, working from architecture to implementation — designing ingestion pipelines, building the catalog API layer, and collaborating with the AI/ML team to surface the right metadata signals for intelligent query generation.

Responsibilities

  • Design, build, and operate a highly available data catalog collection service that ingests schema and metadata from heterogeneous data sources (RDBMS, data lakes, streaming platforms, APIs)
  • Develop robust data pipelines for catalog extraction, normalization, lineage tracking, and semantic tagging to power AI-driven query routing
  • Build and maintain RESTful and/or gRPC APIs that expose catalog data to an AI query agent
  • Deploy and manage services on Kubernetes (K8s), including Helm chart authoring, autoscaling configuration, and multi-cluster operations
  • Ensure service reliability through SLO definition, circuit breakers, retry logic, and distributed tracing
  • Integrate with open-source and cloud-native technologies such as Apache Kafka, Spark, dbt, Apache Atlas, and OpenMetadata
  • Collaborate with AI/ML engineers to design and iterate on the metadata schema and query routing interface
  • Participate in on-call rotations and contribute to incident response, postmortems, and reliability improvements
  • Contribute to CI/CD pipelines, infrastructure-as-code (Terraform / Helm), and automated testing frameworks

What Makes You a Qualified Candidate

  • 3+ years of software engineering experience building and operating production services
  • Proficiency in one or more of Rust, Go, Python, or Java, with a preference for Go or Python for backend services
  • Hands-on experience with data pipeline development: ingestion, transformation, and metadata management at scale
  • Solid understanding of RESTful API design principles and service-to-service communication patterns
  • Experience deploying and operating services on Kubernetes (K8s) in production cloud environments
  • Familiarity with at least one major public cloud platform: AWS, Azure, or GCP
  • Strong knowledge of relational and non-relational database systems and their schema/catalog semantics
  • Experience with distributed messaging systems such as Apache Kafka or AWS Kinesis
  • Proficiency with Git, code review workflows, and agile development practices
  • Excellent troubleshooting skills and comfort operating in Linux environments

What You Will Bring

  • Experience with data catalog or metadata management tools such as Apache Atlas, OpenMetadata, DataHub, or Collibra
  • Familiarity with semantic search, vector databases, or LLM-based query generation systems
  • Experience designing or integrating AI/ML model APIs into production backend services
  • Knowledge of data governance, lineage tracking, and schema registry patterns
  • Experience with infrastructure-as-code tools
  • Background in multi-tenant SaaS platform engineering
  • Contributions to open-source data or infrastructure projects
Why We Think You’ll Love Teradata

We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are committed to actively working to foster an inclusive environment that celebrates people for all of who they are.


© 2026 Teradata. All Rights Reserved. | Privacy | Terms of Use | Tracking Consent | www.teradata.com