Role: Big Data Engineer
Location: Warsaw, Poland (hybrid work)
Keywords: Big Data, Python, Spark, PySpark, Hadoop, Kafka, Data Lake, DevOps, data pipelines, data engineering, data modelling
Our client, a multinational technology company, is searching for an experienced Big Data Engineer to join its global team in Warsaw.
If you're looking for interesting work, plenty of perks, and the opportunity to develop both personally and professionally, then look no further.
The company can offer you:
• A competitive salary and benefits package (yearly bonus, copyrights, shares)
• A challenging job working with new technologies in a young, professional, multicultural environment
• A flexible working environment, with remote work
Your responsibilities would include:
• Creating a Data Lake for one of our client's international divisions
• Building pipelines using Python, Hadoop, Kafka etc.
• Developing connections to new data sources
You should have:
• At least 3 years' experience with the Big Data technology stack and data engineering
• Experience as a Hadoop developer, covering Spark, Hive, Kafka, etc.
• Experience with data formats and source interfaces
• Exposure to DevOps culture, including Git and Jenkins
• Previous experience using SQL and relational databases
• Fluent English language skills (spoken and written) as well as strong communication skills
If you have any questions, please do not hesitate to get in touch on +44 2920 020 652 or at email@example.com.
I look forward to your application and the chance to speak with you about the next big step in your career!