• Job code: QR8176
  • Dev Database/DWH/BI

Senior Data Engineer (Kafka/Spark)

Amsterdam

For our client in Amsterdam we are looking for a Senior Data Engineer (Kafka/Spark).


Department:

Within the Data & Analytics (D&A) department, we need a strong center of excellence for data and data analytics, which is the driving force behind a data-driven culture, data governance and quality frameworks, our AI and BI function, and a close connection with IT on a future-proof data infrastructure. The goal of the Data Engineering Team is to provide data pipelines that ensure the continuity, safety, and integrity of the delivered data.


The team consists of data engineers, a data steward, and a business analyst within D&A. Its primary purpose is to establish robust data pipelines that retrieve data from internal and external sources, store it as a raw (replayable) archive, and eventually send it to the Core Data Platform, which is responsible for data storage. The team works closely with various departments, including AI, BI, IoT, and Data Governance.


What will you do?

As a Data Engineer at one of the world's premier airports, you will help us build a production-grade cloud environment. In this role, you will work alongside our Core Data Platform team to provide the technical foundations for integrating the different data sources into our main Core Data Platform. You will be responsible for the design, implementation, testing, and maintenance of data products on Microsoft Azure, and you will work closely with various internal business stakeholders and multiple data providers. You see yourself as a senior engineer with extensive programming experience and the willingness to help the team achieve its goals.


Your main activities will be:

- Identify the integration requirements of several data sources;

- Design, implement, and test cloud-based (Azure) data-intensive applications;

- Collaborate with other Lead Engineers, Data Architects, and Enterprise Architects on technical solutions;

- Contribute improvements to the Data Platform;

- Convert data product prototypes into data products.


Your added value:

As a next-generation professional you have a clear vision, courage, and focus. Innovation and agile working are an absolute priority. You improve yourself every day so that you can be the best. You are introspective and resilient, and you believe that creating a good atmosphere is essential for a successful, respectful, and welcoming workplace. We are still a small team, so we will be looking for a real 'click'. Your natural approachability enables you to connect with people. You are passionate about your work, curious, and open to new developments, with a mindset keyed to possibilities and opportunities. If you meet the requirements below, we look forward to receiving your application!


Requirements:

- You have a BSc or MSc in a relevant field (e.g. Computer Science, Mathematics);

- You have at least four years of experience running data-driven solutions, including the deployment and management of data pipelines in production;

- Strong Java/Scala experience; we prefer someone with 2+ years of hands-on experience with at least one of the two;

- Professional experience with Kafka (or Azure Event Hubs);

- Professional experience with Spark, implementing both batch and streaming pipelines;

- You recognize the importance of logging and monitoring (we currently use Datadog, Splunk, and Sentry);

- You have at least one year of experience with Microsoft Azure (AWS or GCP experience is fine too);

- You are comfortable working with the latest DevOps technologies, such as Kubernetes and Docker;

- Professional experience with Scrum;

- You are fluent in English. Knowledge of the Dutch language is a plus.

Apply