Senior Software Engineer, Data
Sportradar
Vienna, Austria
THE CHALLENGE
We're scaling rapidly and need a senior data engineer who can take ownership of complex, multi-faceted data projects.
You'll be tackling challenges like:
- Scale & Performance Engineering: Processing and analyzing terabytes of advertising data with sub-second query performance while building and maintaining robust ETL pipelines using Spark and AWS services to handle massive data volumes daily
- Data Pipeline Architecture & Development: Designing and building scalable data processing systems, developing backend APIs and microservices (Python or Go), architecting data flows that support both batch and real-time analytics requirements, and managing user-facing dashboards that visualize complex data insights
- Infrastructure & Data Quality Operations: Implementing robust monitoring and alerting systems to detect data quality issues, managing AWS infrastructure using Terraform, implementing CI/CD best practices, and maintaining high coding standards across data processing systems
- Cross-Functional Leadership & Collaboration: Leading large-scale data projects from requirements gathering through delivery, bridging technical implementation with business requirements, mentoring team members, and presenting technical concepts to stakeholders while challenging requirements constructively
- End-to-End Data System Ownership: Taking complete ownership of complex data engineering projects while ensuring high availability and accuracy for both internal stakeholders and external clients, championing clean code principles, and serving as a knowledge leader who helps the team deliver the right data solutions
ABOUT YOU
- 5+ years of data engineering experience with a proven track record of leading complex data projects from conception to delivery
- Exceptional communication skills and experience working in cross-functional teams with analysts, product managers, and business stakeholders
- AWS & Data Engineering: Very strong hands-on experience with AWS services (S3, Lambda, Glue, Athena, Redshift, EMR, etc.) and proficiency with Apache Spark for large-scale data processing
- Backend Development: Strong experience with Python for building data processing services and APIs, plus expert-level SQL for data processing and analytics
- Infrastructure & DevOps: Hands-on experience with Docker, Terraform, and CI/CD pipelines with automation best practices for data systems
- Clean Code Advocate: Strong commitment to writing clean, maintainable, well-documented code with comprehensive testing, plus deep knowledge of analytics/reporting requirements
- Data Architecture: Experience designing scalable data architectures, data modeling, and optimizing data processing workflows
- Dashboard Development: Experience creating and managing analytics dashboards in BI tools (Tableau, Qlik Sense, Quicksuite, Power BI) and building data visualization solutions that present complex insights to stakeholders
- GenAI Tools (beneficial): Experience with context and prompt engineering using GitHub Copilot, Claude, or similar AI assistants
Don't forget to mention EuroTechJobs when applying.