Location: Prague, Praha CZ
Requisition Number: 202478
Position Title: Glob Deliv Ctr Consultant (II)
Data Engineers define the architecture and design of data ingestion and ETL/ELT processing to meet functional and non-functional requirements and objectives. Data Engineers are generally expected to be “T-shaped” professionals who have a broad understanding of the data acquisition and integration space and are able to weigh the pros and cons of different architectures and approaches.
A Data Engineer works on implementing complex big data projects with a focus on ingesting, parsing, integrating, and managing large sets of structured and unstructured data. Data Engineers embrace the challenge of dealing with petabytes of structured and unstructured data and associated metadata daily. Data Engineers are involved in the design of big data solutions, leveraging their experience with Teradata and Hadoop-based technologies such as Hive, Cassandra, and MapReduce. Strong communication skills are required to work in a team environment and with business sponsors of programs.
Key Areas of Responsibility:
- Clarify the client’s business problem for data analysis
- Translate the business requirements into technical requirements
- Design and develop code, scripts, and data pipelines that leverage structured and unstructured data
- Data modelling, database design and the design of non-relational data structures
- Data ingestion pipelines and ETL processing, including low-latency data acquisition and stream processing
- Oversee the design, evaluation, and selection of major databases and metadata structures
- Develop project deliverable documentation
- Size and estimate new projects, and support new business development
Qualifications:
- Proven expertise in production software development
- Experience with large structured and unstructured data sets and associated metadata, and with distributed computing (Teradata, Hadoop, Hive, HBase, Pig, etc.)
- Proficiency in SQL, NoSQL, relational database design and methods
- Hadoop ecosystem technologies (e.g., Hive)
- Message bus and broker technologies, real-time data pipeline and streaming technologies (e.g., Kafka, Kylo, Listener)
- High-performance data processing platforms (e.g., Spark)
- Scripting languages and related libraries (e.g., Python/pandas)
- Linux/Unix experience
- Experience with Avro, Thrift
- Messaging technologies (e.g., JMS with ActiveMQ, RabbitMQ, JBoss)
- Experience with structured and unstructured data, and metadata
- Work with the appropriate project management methodology (Agile or Waterfall) based upon customer and project requirements
- Excellent verbal and written communication skills
- Knowledge of Architecture Principles
Benefits:
- 25 days of holiday a year
- Private medical care
- Meal vouchers in the amount of 110 CZK/day (Teradata contributes 83 CZK/day)
- Company contribution to the pension fund (up to 3% of monthly gross salary)
- Life insurance
- Company sick-pay assistance (25% of your gross base salary) in addition to local statutory benefits
- Employee referral program (USD 4,000)
- Sports and activities membership program
- Employee stock purchase program
- Contribution of USD 300 toward a mobile phone of your choice (every 2 years)
Community / Marketing Title: Data Engineer
Job Category: Consulting
With all the investments made in analytics, it’s time to stop buying into partial solutions that overpromise and underdeliver. It’s time to invest in answers. Only Teradata leverages all of the data, all of the time, so that customers can analyze anything, deploy anywhere, and deliver analytics that matter most to them. And we do it at scale, on-premises, in the Cloud, or anywhere in between.
We call this Pervasive Data Intelligence. It’s the answer to the complexity, cost, and inadequacy of today’s analytics. And it’s how Teradata transforms the way businesses work and people live through the power of data, around the world. Join us and help create the era of Pervasive Data Intelligence.