Responsible for the design and development of core modules in our Big Data & ML platform services and data-oriented applications and tools (runs on hundreds of servers, hosted on AWS; uses Spark Core/Streaming/SQL, Scala, Java, Python, Snowflake, Presto, AngularJS, Kafka, Kubernetes, Tableau) – our Big Data & ML platform processes billions of records per day through complex processing (CEP, graphs, machine learning) in both batch and real-time modes;
Responsible for research, analysis and performing proof of concepts for new technologies, tools and design concepts;
Involvement in technically mentoring other team members, code reviews, defining and maintaining high development standards;
Lead technical designs (including design documentation); review and challenge technical design and architecture definitions;
Contribute to high-standard development processes (unit-testing and CI/CD oriented), high-quality deployment to production, and production monitoring;
Contribute to our architecture and technology decisions and technical roadmap.
What is important for us?
5+ years of experience working with Java, Scala, Python, or another high-level programming language;
Very strong designer and coder;
Strong understanding of software architecture paradigms and design patterns (data-oriented programming, microservices, OOP);
Experience in the design and development of large-scale distributed systems and distributed programming;
Experience working with Linux operating systems and Bash scripting;
Experience building scalable stream-processing and/or batch ETL pipelines using solutions such as Spark/Spark Streaming;
Strong SQL skills and experience with NoSQL databases (such as HBase, Cassandra, and MongoDB), distributed query engines (such as Presto), and relational databases (such as MySQL and SQL Server);
Experience working with CI/CD tools (e.g., Jenkins, TeamCity);
Bachelor’s degree in Computer Science or a related field;
Extremely passionate about new technologies;
Proactive and willing to take ownership;
Self-starter with a strong work ethic and a passion for problem-solving;
Loves new challenges and is passionate about continually pushing themselves and other R&D members toward operational excellence.
Surprise us with these additional assets:
Experience in the Big Data and BI ecosystem – an advantage;
Experience working with cloud services (AWS, GCP, Azure) – an advantage;
Experience working with Docker and Kubernetes – an advantage;
Knowledge of the Hadoop ecosystem – an advantage;
Experience with machine learning technologies – an advantage;
Experience with development-testing (unit-testing) methodologies – an advantage.