Scala/Spark Big Data Developer
Correct Context is looking for a Scala/Spark Big Data Developer for Comscore, in Poland and the surrounding region.
Comscore is a global leader in media analytics, revolutionizing insights into consumer behavior, media consumption, and digital engagement.
Comscore leads in measuring and analyzing audiences across diverse digital platforms. You will work with cutting-edge technology, play a vital role as a trusted partner delivering accurate data to global businesses, and collaborate with industry leaders like Facebook, Disney, and Amazon, helping to empower businesses in the digital era across the media, advertising, e-commerce, and technology sectors.
We offer:
- Real big data projects (PB scale) 🚀
- An international team (US, PL, IE, CL) 🌎
- A small, independent team working environment 🧑💻
- High influence on your working environment
- A hands-on environment
- Flexible work time ⏰
- Fully remote or in-office work in Wroclaw, Poland 🏢
- 14,000 - 20,000 PLN net/month B2B 💰
- Private healthcare (PL) 🏥
- Multikafeteria (PL) 🍽️
- Free parking (PL)🚗
As a Big Data Developer, you'll:
- Design, implement, and maintain petabyte-scale Big data pipelines using Scala, Spark, Kubernetes, and many other technologies
- Optimize – working with Big data is very specific: depending on the process it may be IO- or CPU-bound, and we need to figure out faster ways of doing things. At least an empirical grasp of computational complexity is required, because in Big data even simple operations become costly once multiplied by the size of the dataset
- Conduct Proof of Concept (PoC) for enhancements
- Write clean, performant Big data Scala code
- Cooperate with other Big data teams
- Work with technologies like AWS, Kubernetes, Airflow, EMR, Hadoop, Linux/Ubuntu, Kafka, and Spark
- Use Slack and Zoom for communication
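The point above about complexity at scale can be made concrete with a back-of-the-envelope sketch. The numbers below (a trillion records, one microsecond saved per record) are illustrative assumptions, not figures from this posting:

```scala
// Back-of-the-envelope: why per-record cost matters at petabyte scale.
// The record count and per-record saving are assumed for illustration.
object CostAtScale {
  // CPU days saved by shaving `savedSecondsPerRecord` off each record.
  def savedDays(records: Long, savedSecondsPerRecord: Double): Double =
    records * savedSecondsPerRecord / 86400.0

  def main(args: Array[String]): Unit = {
    val records = 1000000000000L // 1e12 records, plausible at PB scale
    val saved   = 1e-6           // one microsecond shaved per record
    // 1e12 * 1e-6 s = 1e6 s, i.e. roughly 11.6 days of CPU time
    println(f"CPU time saved: ${savedDays(records, saved)}%.1f days")
  }
}
```

A micro-optimization that is invisible on a laptop-sized dataset can translate into days of cluster time (and real money) at this scale, which is why the role emphasizes understanding where each job is IO- or CPU-bound.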
The candidate must have:
- 2+ years of experience with Linux
- Solid knowledge of Linux (bash, threads, IPC, filesystems); being a power user is strongly desired – understanding how the OS works lets you benefit from performance optimizations in production as well as in daily workflows
- 1+ years of experience with Spark, primarily using Scala for Big data processing (including an understanding of how Spark works and why)
- A strong drive to push projects forward, improve things, and take calculated risks – backed by concrete examples
- Great communication skills (you can drive end-to-end projects, and guide dev team members)
- Professional working proficiency in English (both oral and written)
- Understanding of HTTP API communication patterns (HTTP/REST/RPC) and the underlying protocol
- Good software debugging skills – not just print statements, but also using a debugger
- Deep understanding of at least one technical area (let us know which one, and prepare the story of your biggest battle with it)
- A good working knowledge of Git
If you don't have all the qualifications, but you're interested in what we do and you have a solid Linux understanding -> let's talk!
The recruitment process for the Scala/Spark Big Data Developer position has the following steps:
- Technical survey – 10 min
- Technical screening – 30 min video call
- Technical interview – 60 min video call
- Technical/Managerial interview – 60 min video call
- Final interview (Technical/Managerial) – 30 min video call
Department: Comscore
Location: Ruska 3, Wrocław (near pl. Solny)
Remote status: Fully Remote
Products
We strongly believe that working on a specific product for a longer period of time has many benefits. A person working in a stable environment, with a deep understanding of the product and technology, can produce great ideas and improvements that benefit everybody. No switching projects every three months, no new manager every three months, no new customer every three months.
Craftsmanship
Every day we learn something. It's great when you can not only learn but also use that knowledge to build great things and grow. We are big fans of excellence and technical craftsmanship: no excuses to skip CI, tests, QA, or best practices. It's true that shipping is the most important thing, but in the end it's your name on it, so let's do it the right way.
Stable and Flexible Environment to Grow
As practice shows, small teams achieve the best results, and we are big fans of that approach. It's easier to agree on holidays and working hours, and to plan your time to pick your children up from school. It's easier to agree on technical solutions and to communicate. With all that, plus a bunch of extras like private healthcare, training, flexible working hours, fully remote or office options, B2B employment, and lawyer/accountant support, we create a stable and safe environment for you to grow – not only technically, but also as a person.
Technical Autonomy
We trust you. All teams have the autonomy to pick the best solutions, the best software, and the best approach to problem solving. No crazy managers telling you that MongoDB is a must or that everybody has to work in vim. We want to support you on the road to technical excellence and remove artificial blockers. All structures are as flat as possible, and all decision making is as close to the work as possible. We hope you'll enjoy that.
Workplace & culture
We really love to make working software products. We are small teams focused on specific products. Each of us contributes to the projects we like the most, and with trust, independence, autonomy, a hands-on approach, and great software craftsmanship, we shape the future.
About Correct Context
Correct Context is about bringing together the best IT developers, UX and UI designers, product people, and ambitious product companies to build and grow great products that matter.