

Data Engineer (Kafka, Cloud)

Location: Pune, Maharashtra, India

Notice

This position is no longer open.

Requisition Number: 209504

External Description:

Role: Data Engineer (Java, Kafka, Cloud)

Job Location: Mumbai / Pune

Role Description: 

This position is for a Data Engineer who will specialize in data ingestion (Kafka/Spark) projects. Strong experience with Kafka, Java, and Spark in a hybrid cloud environment is a key competency for this role.

Minimum Requirements: 

  • Total IT experience of 3+ years.
  • Minimum 2-3 years of relevant experience with Kafka, HDFS, Hive, MapReduce, and Oozie.
  • Data integration experience required.
  • Experience building and developing data ingestion pipelines.
  • At least 1 year of hands-on Spark development experience.
  • Strong hands-on experience with Core Java and other programming languages such as Scala or Python.
  • Working experience with Kafka and its integration with cloud services.
  • Excellent understanding of object-oriented design and patterns.
  • Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files.
  • Good knowledge of optimization and performance tuning.
  • Working knowledge of an IDE (Eclipse or IntelliJ).
  • Experience with a shared code repository (VSS, SVN, or Git).
  • Experience with a software build tool (Ant, Maven, or Gradle).
  • Knowledge of web services (SOAP, REST).
  • Good experience with basic SQL and shell scripting.
  • Databases: Teradata, DB2, PostgreSQL, MySQL, Oracle (one or more required).
  • Knowledge of a public cloud platform such as AWS, Azure, or Google Cloud.
  • Hands-on experience with cloud storage such as S3, Blob Storage, or Google Cloud Storage.
  • Ability to work with and enhance predefined frameworks.
  • Ability to communicate effectively with customers.
  • Experience with, or an understanding of, promoting big data applications into production.
  • Willingness to travel to customer locations and core GDC sites (Pune/Mumbai/Manila) when required.

Nice to have Experience:

  • Working experience with Apache NiFi.
  • Exposure to XML and JSON processing.
  • Awareness of big data security and related technologies.
  • Experience with web servers: Apache Tomcat or JBoss.
  • At least one project delivered to production is preferred.
  • Understanding of DevOps tools such as Git, Jenkins, and Docker.
  • Experience securing cloud infrastructure.


CountryEEOText_Description:

Why We Think You’ll Love Teradata

We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.

City: Pune

State: Maharashtra

Community / Marketing Title: Data Engineer (Kafka, Cloud)

Job Category: Consulting

Company Profile:

Our Company

At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

Location_formattedLocationLong: Pune, Maharashtra IN
