Big Data Application Support Engineer

Location: Bangalore, Karnataka IN


This position is no longer open.

Requisition Number: 205866

Position Title: Managed Services II

External Description:

Skill: Big Data Application Support

Job Location: Powai, Mumbai/ Magarpatta City, Pune / Bangalore 

Role Description: 

Providing application support for Think Big customers on Hadoop platforms. These customers typically have 24/7 contracts, so the successful applicant must be prepared to work in shifts and to be on-call to support customer site(s) per contractual obligations.

Minimum Requirements:

  • 6-8 years of experience managing and supporting large-scale production Hadoop environments (configuration management, monitoring, and application performance tuning) in any of the Hadoop distributions (Apache, Hortonworks, Cloudera, MapR, IBM BigInsights, Pivotal HD)
  • 6+ years of experience in applications support engagements on large-scale systems (Java/J2EE, any ETL tool, strong knowledge of SQL queries and Unix shell scripting, BI operations, analytics support)
  • Experience in Hadoop ecosystem components and related tools such as:
    • YARN
    • Spark
    • Hive
    • Impala
    • HBase
    • Sqoop
    • Pig
    • Python
    • Anaconda
    • Shiny
    • RStudio
    • TensorFlow
    • Keras
    • Kafka
    • Streaming tools – NiFi
    • IPython/Jupyter
    • GitLab
    • Scala
    • Flume
    • Solr
    • Ranger
    • Atlas
    • Mahout
    • Data pipeline tools
    • ZooKeeper
    • Oozie
  • Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files.
  • Root-cause analysis of job failures and data quality issues, and providing solutions.
  • Working understanding of the software development lifecycle and the ability to communicate incident and project status, issues, and resolutions.
  • Experience with incident management, ServiceNow, JIRA, and the change management process.
  • 6+ years of experience with scripting languages (shell, SQL, Python); must be proficient in shell scripting.
  • Experience developing/supporting RESTful applications.
  • Working knowledge of Linux operating system required.
  • Strong written and verbal communication skills.
  • ITIL Knowledge.
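As a flavor of the routine shell-scripting and log-triage work this role involves, here is a minimal, hypothetical sketch: scanning an application log for failed job IDs to include in an incident ticket. The log path, format, and job names are illustrative only and do not come from any specific Hadoop distribution.

```shell
# Hypothetical log triage: collect the IDs of failed jobs from an
# application log. A sample log is created here so the sketch is
# self-contained; in practice the path would come from the customer
# environment.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2020-05-01 02:10:11 INFO  job_1588300000_0001 SUCCEEDED
2020-05-01 02:15:42 ERROR job_1588300000_0002 FAILED: container exit 143
2020-05-01 03:01:05 ERROR job_1588300000_0003 FAILED: quota exceeded
EOF

# Field 4 of each line is the job ID; keep only lines marked FAILED.
FAILED_JOBS=$(grep 'FAILED' "$LOG" | awk '{print $4}')
echo "$FAILED_JOBS"

rm -f "$LOG"
```

From here a support engineer would typically pull the matching container logs and raise or update the incident in ServiceNow/JIRA.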

Preferred Qualifications:

  • Database support or application DBA – Oracle, DB2, MySQL, PostgreSQL
  • Knowledge of Storm, Accumulo.
  • Knowledge of ETL tools – Talend, Informatica, DataStage.
  • Development, implementation or deployment experience in the Hadoop ecosystem
  • Working experience with one of the Scheduling tools (Control-M, JCL, Unix/Linux-cron etc.)
  • Proficiency in Hive internals (including HCatalog), Sqoop, Pig, Oozie, and Flume/Kafka.
  • Proficiency with at least one of the following: Java, Python, Perl, Ruby, C or Web-related development
  • Development or operational knowledge of NoSQL technologies such as HBase, MongoDB, Cassandra, Accumulo, etc.
  • Development or Operational knowledge on Web or cloud platforms like Amazon S3, EC2, Redshift, Rackspace, OpenShift, etc.
  • Development/scripting experience on Configuration management and provisioning tools e.g. Puppet, Chef
  • Web/Application Server & SOA administration (Tomcat, JBoss, etc.)
  • Handle deployment methodologies and code and data movement between Dev, QA, and Prod environments (deployment groups, folder copy, data copy, etc.)
  • Should be able to articulate and discuss the principles of performance tuning on Hadoop
  • Develop and produce daily/ weekly operations reports and metrics as required by IT management
  • Experience with any of the following will be an added advantage:
    • Hadoop integration with large-scale distributed DBMSs such as Teradata, Teradata Aster, Vertica, Greenplum, Netezza, DB2, Oracle, etc.
    • Data modeling, or the ability to understand data models
    • Knowledge of Business Intelligence and/or Data Integration (ETL) solution delivery techniques, models, processes, and methodologies
    • Exposure to data acquisition, transformation, and integration tools such as Talend, Informatica, etc., and BI tools such as Tableau, Pentaho, etc.
    • Linux Administrator certification


City: Pune

State: Maharashtra

Community / Marketing Title: Big Data Application Support Engineer

Job Category: Consulting

Company Profile:

Considering COVID-19, we are still hiring but conducting virtual interviews to keep our candidates and employees safe. Many roles will be temporarily remote or work from home to comply with current safety regulations. These roles will be required to be in the office once it is safe or restrictions are lifted. Read more on our response here: Teradata Response to COVID-19 

With all the investments made in analytics, it’s time to stop buying into partial solutions that overpromise and underdeliver. It’s time to invest in answers. Only Teradata leverages all of the data, all of the time, so that customers can analyze anything, deploy anywhere, and deliver analytics that matter most to them. And we do it at scale, on-premises, in the Cloud, or anywhere in between.

We call this Pervasive Data Intelligence. It’s the answer to the complexity, cost, and inadequacy of today’s analytics. And it's the way Teradata transforms how businesses work and people live through the power of data throughout the world. Join us and help create the era of Pervasive Data Intelligence.



© 2020, Teradata. All rights reserved. Teradata is an Equal Opportunity Employer.