
Big Data Engineer (with Scala and Spark)

Bucharest, Romania

About us: 

Grid Dynamics is the engineering services company known for transformative, mission-critical cloud solutions for retail, finance and technology sectors. We architected some of the busiest e-commerce services on the Internet and have never had an outage during the peak season. Founded in 2006 and headquartered in San Ramon, California with offices throughout the US and Eastern Europe, we focus on big data analytics, scalable omnichannel services, DevOps, and cloud enablement. 

Role overview:

We are looking for an experienced and technology-proficient Big Data Engineer to join our team! 

Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. On this project, we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets. 

Project description: 

The Advertising Platforms group makes it possible for people around the world to easily access informative and imaginative content on their devices while helping publishers and developers promote and monetize their work. Today, our technology and services power Search Ads advertising for some of the biggest search and news providers. Our platforms are highly performant, deployed at scale, and set new standards for enabling effective advertising while protecting user privacy. You will develop distributed systems to establish, refine, and automate our anti-fraud processes across our advertising surfaces. 

Responsibilities: 

  • Running big data analytics and building large-scale data infrastructure
  • Detecting meaningful data patterns
  • Assuring the integrity of our data
  • Measuring fraud activity and its impact on campaign and user performance
  • Analyzing the results of mitigations against fraud

Requirements: 

  • Strong knowledge of Scala
  • In-depth knowledge of Hadoop and Spark
  • Processing and computation frameworks: Kafka, Spark (DataFrame/SQL API)
  • DB engines: Oracle, Postgres, Teradata, Cassandra
  • Experience implementing data lakes, data warehouses, or analytics systems is an advantage
  • Understanding of the best practices in data quality and quality engineering
  • Ability to quickly learn new tools and technologies
  • Proficiency in English is required
  • Good understanding of distributed computing technologies, approaches, and patterns

We offer: 

  • Work on bleeding-edge projects with a team of experienced and motivated engineers
  • Flexible working hours
  • Specialization courses
  • 24 days of annual leave + 5 additional sick days 
  • Floating Holidays
  • Private medical subscription for employees and their family members
  • Benefits basket with a total value of 650 euro/year gross

Apply to the position
