Our client has built a platform that aggregates and analyzes massive amounts of structured and unstructured data, using machine learning, algorithms, and data visualization to help other companies answer their most important strategic questions.

Your role includes collaborating with software engineers from other teams and serving as the main engineering resource within the data science team. Our active, highly motivated team works on NLP and knowledge discovery to build cutting-edge data processing pipelines.

In our model, you’ll interact directly with the international teams while working alongside our merry band of Beetroots in Ukraine. It gives you the best of both worlds: international experience with an innovative, life-changing company, plus the comforts (and borsch!) of home.

Responsibilities:

  • Design the architecture of, and implement, data and machine learning pipelines;
  • Integrate new data sources;
  • Uphold software engineering best practices.

What we’re looking for:

  • 4+ years of commercial experience with Python;
  • Proficiency in relational databases (e.g. MySQL, Redshift, Aurora);
  • Experience with Spark, Sqoop, AWS services (Spectrum, Glue) and other related tools in the big data ecosystem;
  • Knowledge of data modeling, data storage techniques, data warehousing, and general data architecture;
  • Experience with engineering data pipelines to capture, store, and process unstructured data;
  • BS in Computer Science or a related field;
  • Excellent communication skills & a decent level of English.

Bonus:

  • Familiarity with Go;
  • You love borsch!