Vacancy description

As a Data Engineer at Kyndryl CDO, you will help ingest and transform our internal and third-party data into tangible business value by analyzing information, creating data models, developing data ingestion pipelines, and providing ongoing support and maintenance. You will work with best-in-class ingestion and visualization tools and technologies, along with the most flexible and scalable deployment options available on IBM Cloud and Azure Cloud.



The role requires creating and maintaining data pipeline architecture and optimizing data flows based on requirements defined by stakeholders. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with data and creating data systems from the ground up. The Big Data Engineer will work with software developers, database architects, data analysts, and data scientists on data projects and will ensure an optimal data delivery architecture that is consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

  • Create and maintain optimal data ingestion pipeline architecture based on requirements.
  • Recommend and implement ways to improve data reliability, efficiency, and quality.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'big data' and cloud technologies.
  • Work with Kyndryl stakeholders, including the Project Managers, Product, Data, Infrastructure, and Design teams, to assist with data-related technical issues and support their data needs.
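The extract-transform-load work described above can be sketched in miniature. This is a hypothetical Python example using only the standard library; the CSV source, table, and column names are invented for illustration and are not part of the role's actual stack:

```python
import csv
import io
import sqlite3

# Illustrative raw source data (invented for this sketch).
RAW_CSV = """order_id,amount,currency
1,19.99,EUR
2,5.00,EUR
3,12.50,USD
"""

def extract(text):
    """Read raw CSV rows from the source system."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Keep EUR orders and convert amounts to integer cents."""
    return [
        (int(r["order_id"]), round(float(r["amount"]) * 100))
        for r in rows
        if r["currency"] == "EUR"
    ]

def load(records, conn):
    """Write the cleaned records into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount_cents INTEGER)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount_cents) FROM orders").fetchone()[0]
print(total)  # 1999 + 500 = 2499
```

Keeping extract, transform, and load as separate functions is what makes a pipeline testable and lets each stage be swapped (e.g. a cloud object store for the CSV, a warehouse for SQLite) without touching the others.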

Job requirements

Required Professional and Technical Expertise:

 

  • 2+ years of experience in a Data Engineer role.
  • Experience developing data ingestion pipelines on Big Data, Cloud, and Cognitive technologies.
  • Strong technical ability to understand, design, write, and debug complex code, and to perform code reviews.
  • Advanced working SQL knowledge and experience with relational databases, including working familiarity with a variety of databases such as IBM Db2, PostgreSQL, SQL Server, etc.
  • Experience with ETL tools such as IBM DataStage, Informatica PowerCenter, Azure Data Factory, or Azure Synapse Analytics.
  • Programming experience and skills in SQL, Python, Scala, Java, Shell, etc.
  • Experience working with IBM Cloud Object Storage, Amazon S3, or Azure Blob Storage.
  • Experience building and optimizing IBM or Azure cloud-based data ingestion pipelines, architectures, and data sets.
  • Experience performing root-cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytical and data-wrangling skills for working with structured and unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
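As a sense of the analytical SQL fluency listed above, here is a hedged sketch using window functions on an in-memory SQLite database (assuming a SQLite build with window-function support, version 3.25+); the table and data are invented for illustration:

```python
import sqlite3

# Illustrative sales table (names and numbers are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, rep TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EU", "ana", 300), ("EU", "ben", 500), ("US", "cam", 200), ("US", "dee", 700)],
)

# Top rep per region, using a RANK() window function partitioned by region.
top_reps = conn.execute("""
    SELECT region, rep, amount FROM (
        SELECT region, rep, amount,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rk
        FROM sales
    ) WHERE rk = 1
    ORDER BY region
""").fetchall()
print(top_reps)  # [('EU', 'ben', 500), ('US', 'dee', 700)]
```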
Nice to have

Preferred Professional and Technical Expertise:

  • Experience in DevOps, DataOps, and CloudOps, including CI/CD pipelines and code deployment in Kubernetes or OpenShift containers.
  • Experience with AI and machine learning models, API development, and microservices architecture.
  • API/microservices development is a bonus, e.g. Node.js, Spring Boot, etc.
  • Working knowledge of Big Data distributions such as IBM, Cloudera, Hortonworks, MapR, etc.
  • Working knowledge of message queuing and stream processing, including Kafka, Spark, AWS/Azure native services, etc.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets using Hadoop technologies such as Spark, Hive, Sqoop, Oozie, etc.
  • Experience with data visualization tools such as IBM Cognos, Microsoft Power BI, SAP BusinessObjects, Tableau, Qlik, etc.
Benefits
  • Growth opportunities, ongoing training in different accounting processes, work experience across different countries, and an excellent work environment.
  • Personal & Career Development
  • Rich education offerings
  • Additional Days Off
  • Flexible Working Conditions
  • 100% Paid Sick Leave
  • Critical Illness Insurance, Life & Disability insurance, Medical Center
Job description
Work experience:
Work experience is not required
Salary range:
1600 - 2500 EUR
Date of expiry:

About the company

Our company is an information technology infrastructure services provider that designs, builds, manages and develops large-scale information systems. Our global base of customers includes 75 of the Fortune 100 companies. With more than 90,000 skilled professionals operating from over 100 countries, we are committed to the success of our customers, collaborating with them and helping them to…