Software Engineer

Location: Lazio, Italy

Notice

This position is no longer open.

Requisition Number: 204179

External Description:

Location: Rome, IT

As a Big Data Software Engineer you will provide technical skills in a team that designs and develops path-breaking, large-scale Big Data and Analytics solutions. You will help our clients adopt state-of-the-art technologies and tools, covering the whole ecosystem from infrastructure and foundation components (such as Hadoop, HDFS, Hive, YARN, Kafka, Flume, Spark, Sqoop, and NoSQL databases) to those related to Data Governance, Data Pipelines, and Analytics tools.

You will also be asked to design and develop code, scripts, and data pipelines that integrate structured and unstructured data from multiple sources, using shell scripts, Java, Scala, Python, and related languages.

Additionally, as a member of our Consulting team, you will help Teradata establish thought leadership in the Big Data space by contributing to white papers and technical commentary. This position offers the opportunity to be at the forefront of the growing Big Data and Analytics market. The successful candidate possesses knowledge of both past and modern data architecture patterns, has a strong sense of the art of the possible, and can balance it with the requirements of large organizations to build reliable data environments. The right candidate is excited about working hands-on as well as providing strategic guidance to some of the largest and most well-known organizations.

Responsibilities

Main Responsibilities:

Your main responsibilities are to deliver on our commitments and to help organizations make Big Data and Analytics technologies work for them. You will be an invaluable member of our delivery teams, working with our most ambitious customers. You will have the chance to learn and work with multiple technologies and thought leaders in the Big Data space.

Knowledge, Skills and Abilities

  • Excellent interpersonal, verbal, and written communication skills, with good exposure to working in a cross-cultural environment. You may be asked to present topics to small audiences.
  • Background in software development, continuous integration, tooling, and software architectures, whether in enterprise environments, system integration, or science-related fields. Proficiency in Java, Scala, Python, SQL, Ruby, Clojure, or similar languages. You may also have some experience with Tableau, Shiny, R, JavaScript, Elastic, and so forth.
  • Experience with relational database systems, data warehouses, and other OLAP systems, as well as in delivering enterprise-level ETL pipelines, data workflows, migrations, and lifecycles, is also required.

Job Qualifications: 

Required

  • Hands-on experience with the Hadoop stack and related Big Data technologies: Hive, Spark, YARN, Kafka, Flume, HBase, Sqoop. Knowledge and experience of structured, semi-structured, and unstructured data.
  • A background in software development, continuous integration, tooling, software architectures, and software development patterns, whether in enterprise environments, system integration, or science-related fields. Experience programming in Java, Scala, Python, and/or SQL.
  • Experience with Unix/Linux (Bash scripting, ssh, networking).
  • Experience with Agile methodologies (e.g. Scrum, Kanban).
  • Knowledge of SQL, relational database design, and methods for efficiently retrieving data.
  • Knowledge of NoSQL databases (HBase, Cassandra, MongoDB). 
  • Knowledge of Search technologies (Solr, ELK Stack).
  • Strong analytical skills and creative problem solving.
  • Verbal and written communication skills.
  • Strong team player capable of working in a demanding environment.
  • Ability to work effectively in a multi-national environment.

Desired

  • Experience with Git and CI tools (e.g. Jenkins, GitLab, etc.).
  • Experience with automation tools such as Ansible, Terraform, Puppet, or Chef.
  • Knowledge of cloud services from AWS, Azure, GCP (e.g. EMR, S3, AWS Lambda, Google Cloud DataProc, HDInsight,…).
  • Good understanding of Data Science fundamentals.

Education And Experience

  • Two or more years of experience in relevant roles.
  • Computer Science, Mathematics, Physics, Engineering, or other relevant degree.

Complementary Information

We offer you a far-from-average place to work: we are inspiring and passionate people. We may work hard at times, but always within a dynamic, relaxed, and collaborative culture. We offer you the chance to join a rapidly expanding organization with ambitious growth targets, where you can really make a difference and shape the future.

The position is based in Italy. However, we are often required to spend time on-site with our customers, mainly local but also international ones, so as a Teradata team member you will be required to participate in travel-based work. Business-level Italian and English are mandatory.

City: Rome

State: Lazio

Community / Marketing Title: Software Engineer

Job Category: Consulting

Company Profile:

Our Company

At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.
