

Python & PySpark

Location: Pune, Maharashtra, India


Requisition Number: 214426


Role: Python & PySpark

Job Location: Mumbai / Pune / Bangalore

Role Description:

This position is for a Data Engineer who will specialize in data ingestion (Kafka/Spark) projects. Strong experience with Kafka, Java, and Spark in a hybrid cloud environment is a key competency for this position; a sketch of a typical ingestion pipeline follows below.
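
To make the day-to-day work concrete, here is a minimal sketch of the kind of pipeline this role builds: PySpark Structured Streaming reading a Kafka topic and landing records in cloud storage. The broker, topic, bucket, and checkpoint paths are all hypothetical, and the sketch assumes the spark-sql-kafka connector and (for the s3a:// paths) the hadoop-aws connector are on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-ingestion-sketch")
    .getOrCreate()
)

# Subscribe to a Kafka topic (hypothetical broker and topic names).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "ingest-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
decoded = events.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("topic"),
    col("timestamp"),
)

# Land micro-batches as Parquet in cloud storage (hypothetical bucket);
# the checkpoint location is what lets the sink restart without data loss.
query = (
    decoded.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/raw/events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .start()
)

query.awaitTermination()

In practice such jobs add schema enforcement, dead-letter handling, and monitoring; the sketch only shows the skeleton of source, transformation, and sink.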

Minimum Requirements:
• Total IT experience of 3+ years.
• Minimum 2-3 years of relevant experience with Kafka, HDFS, Hive, MapReduce, and Oozie.
• Data integration experience.
• Experience building and developing data ingestion pipelines.
• At least 1 year of hands-on Spark development experience.
• Good hands-on experience with Core Java and other programming languages such as Scala or Python.
• Working experience with Kafka and its integration with cloud services.
• Excellent understanding of object-oriented design and design patterns.
• Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files.
• Good knowledge of optimization and performance tuning.
• Working knowledge of an IDE (Eclipse or IntelliJ).
• Experience working with a shared code repository (VSS, SVN, or Git).
• Experience with a software build tool (Ant, Maven, or Gradle).
• Knowledge of web services (SOAP, REST).
• Good experience with basic SQL and shell scripting.
• Databases: Teradata, DB2, PostgreSQL, MySQL, Oracle (one or more required).
• Knowledge of a public cloud platform such as AWS, Azure, or Google Cloud.
• Hands-on experience with cloud storage such as S3, Azure Blob Storage, or Google Cloud Storage (see the batch sketch after this list).
• Able to work with and enhance predefined frameworks.
• Able to communicate effectively with customers.
• Experience with, or an understanding of, promoting big data applications into production.
• Willingness to travel to customer locations and core GDC sites (Pune/Mumbai/Manila) when required.
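
As referenced in the cloud storage item above, here is a minimal batch sketch combining the basic SQL and cloud storage requirements in PySpark. The s3a:// paths and column names are hypothetical (they mirror the streaming sketch above), and s3a access assumes the hadoop-aws connector is configured.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-sql-sketch").getOrCreate()

# Read the raw landed records from cloud storage (hypothetical path).
raw = spark.read.parquet("s3a://example-bucket/raw/events/")

# Basic SQL over a temporary view: count events per day.
raw.createOrReplaceTempView("events")
daily = spark.sql("""
    SELECT to_date(timestamp) AS event_date, COUNT(*) AS event_count
    FROM events
    GROUP BY to_date(timestamp)
""")

# A small performance-tuning touch: coalesce to avoid many tiny output
# files, then write back to cloud storage partitioned by date.
(
    daily.coalesce(1)
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-bucket/curated/daily_counts/")
)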


Nice-to-Have Experience:
• Working experience with Apache NiFi.
• Exposure to XML and JSON processing (a JSON-parsing sketch follows this list).
• Awareness of big data security and related technologies.
• Experience with web servers: Apache Tomcat or JBoss.
• Preferably, at least one of the candidate's projects has been promoted to production.
• Understanding of DevOps tools such as Git, Jenkins, and Docker.
• Experience securing cloud infrastructure.
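
As referenced in the JSON item above, here is a small sketch of JSON processing in PySpark using from_json with a DDL schema string; the payloads and field names are invented for illustration.

from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col

spark = SparkSession.builder.appName("json-parsing-sketch").getOrCreate()

# Two in-line JSON payloads standing in for Kafka message values.
payloads = spark.createDataFrame(
    [('{"id": 1, "name": "sensor-a"}',), ('{"id": 2, "name": "sensor-b"}',)],
    ["value"],
)

# Parse the JSON string column into typed fields, then flatten the struct.
parsed = payloads.select(
    from_json(col("value"), "id INT, name STRING").alias("record")
).select("record.*")

parsed.show()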

 


Job Category: Consulting


Our Company

Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today.

The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment.

