I am currently working on a mandate with one of our US-headquartered Investment Management clients, who are looking to add to their global data engineering team.
They are a technology-led hedge fund whose trading models rely on open-source technologies and various data sources to make trading decisions in the global markets. This is a great role for an experienced data engineer, who will collaborate closely with the investment team and data scientists.
Responsibilities:
- Design and implement scalable ETL solutions (structured and unstructured; batch and streaming)
- Build data pipelines, standardizing and cleansing data for the investment team
- Define and set best-practice standards for data (e.g., data modeling, database design, ETL design, job scheduling and monitoring)
- Manage the entire data flow, from acquisition to transformation into actionable insights, using SQL and Python
- Design and maintain APIs
Required Skills:
- Bachelor's degree in a technology-related field (e.g., Engineering, Computer Science) required; a Master's or PhD would be ideal
- Proficiency in relational and NoSQL database engineering is required
- Solid experience with AWS services
- Strong SQL and Python experience and a good general programming background (Java is also beneficial)
- Experience with Apache Airflow, Kafka, Docker, or Kubernetes would be a plus
- Familiarity with the securities markets and exchanges is a plus but not required
- Experience developing highly scalable distributed systems using open-source technologies such as Spark, Dask, or Hadoop is a plus
- Good communicator with strong analytical and strategic thinking skills
They are open to relocating the chosen candidate to Austria. If you are based in Austria, or are interested in the opportunity to relocate there, and you meet the criteria for this role, please apply below.