Key Responsibilities:
- Create a dynamic metadata repository and related UI which captures technical and business metadata
- Develop research data pipelines to support portfolio optimization
- Build a data ingestion framework to reduce implementation timelines
- Collaborate with the company's infrastructure management team to build an elastic "analytics sandbox" environment supporting various DataOps methodologies
- Incorporate data pipeline orchestration tools
- Develop data architecture that captures quality metadata across the data lineage graph and implements health checks
- Help automate the creation of data access APIs and microservices
This role will allow you not only to learn and leverage skills in Python, SQL, distributed storage and execution platforms, statistical process control, graph databases, and DataOps methodologies, but also to broaden your financial knowledge across asset classes, markets, and instruments.
Requirements:
Degree in Computer Science/Engineering, Master's preferred
2+ years Python
1+ years developing web-based user interfaces
2+ years programming SQL queries and stored procedures
2+ years using Agile or DevOps methodologies
2+ years with NoSQL databases and storage, graph databases, and data pipeline/workflow orchestration tools
2+ years developing cloud microservices with containers
Location: Nashville, TN