Data Engineer (Java, Kafka)

Location: Pune, Maharashtra IN


Requisition Number: 210172

External Description:

Role: Data Engineer (Java, Kafka, Cloud) 

Job Location: Mumbai/Pune 

Role Description: 

This position is for a Data Engineer who will specialize in data ingestion (Kafka/Spark) projects. Strong experience with Kafka, Java, and Spark in a hybrid cloud environment is a key competency for this position.
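For candidates gauging the Kafka-with-Java competency described above, day-to-day ingestion work typically starts with wiring up producer configuration. The following is a minimal, illustrative sketch only; the broker address, topic name, and serializer class names are assumptions for the example, not details of this role's actual stack:

```java
import java.util.Properties;

public class ProducerConfigSketch {

    /**
     * Builds a baseline Kafka producer configuration.
     * The broker address below is a hypothetical local default.
     */
    static Properties baseConfig() {
        Properties props = new Properties();
        // Assumption: a single local broker for illustration.
        props.put("bootstrap.servers", "localhost:9092");
        // Standard string serializers shipped with kafka-clients.
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Wait for acknowledgement from all in-sync replicas.
        props.put("acks", "all");
        return props;
    }

    public static void main(String[] args) {
        // Print one setting to show the config is populated.
        System.out.println(baseConfig().getProperty("acks"));
    }
}
```

In a real pipeline this `Properties` object would be passed to a `KafkaProducer` from the `kafka-clients` library; the sketch stops short of that so it stays self-contained.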

Minimum Requirements: 

  • Total IT experience of 3+ years
  • Minimum 2-3 years of relevant experience with Kafka, HDFS, Hive, MapReduce, and Oozie
  • Data integration experience required
  • Hands-on experience developing data ingestion pipelines
  • At least 1 year of working experience in Spark development
  • Good hands-on experience with Core Java and other programming languages such as Scala or Python
  • Working experience with Kafka and its integration with cloud services
  • Excellent understanding of object-oriented design and patterns
  • Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files
  • Good knowledge of optimization and performance tuning
  • Working knowledge of at least one IDE (Eclipse or IntelliJ)
  • Experience working with a shared code repository (VSS, SVN, or Git)
  • Experience with at least one software build tool (Ant, Maven, or Gradle)
  • Knowledge of web services (SOAP, REST)
  • Good experience with basic SQL and shell scripting
  • Databases: Teradata, DB2, PostgreSQL, MySQL, Oracle (one or more required)
  • Must have knowledge of a public cloud platform such as AWS, Azure, or Google Cloud
  • Must have hands-on experience with cloud storage such as S3, Azure Blob Storage, or Google Cloud Storage
  • Should be able to work with and enhance predefined frameworks
  • Should be able to communicate effectively with customers
  • Must have experience with, or an understanding of, promoting Big Data applications into production
  • Willingness to travel to customer locations and core GDC sites (Pune/Mumbai/Manila) when required

Nice-to-have Experience:

  • Working experience with Apache NiFi
  • Exposure to XML and JSON processing
  • Awareness of Big Data security and related technologies
  • Experience with web servers: Apache Tomcat or JBoss
  • Preferably, at least one project the candidate has worked on should be in production
  • Understanding of DevOps tools such as Git, Jenkins, Docker, etc.
  • Experience securing cloud infrastructure




City: Pune

State: Maharashtra

Community / Marketing Title: Data Engineer (Java, Kafka)

Job Category: Consulting

Company Profile:

Teradata helps businesses unlock value by turning data into their greatest asset. We’re the cloud data analytics platform company, built for a hybrid multi-cloud reality, solving the world's most complex data challenges at scale. Collectively, we endeavor to serve equal parts innovator and contributor. Because our mission isn’t just about the collection of data – it’s about revolutionizing the future of transportation to save lives, optimizing energy costs to make the planet a cleaner place, and using data to predict and identify cancer risks.



© 2021, Teradata. All rights reserved. Teradata is an Equal Opportunity Employer.