Job Overview:
We are looking for a skilled Data Engineer to build and improve data processing systems that handle both batch and real-time data. You will work with tools such as Spark, Kafka, and Snowflake.
Main Duties:
- Develop and optimize data pipelines for smooth data flow.
- Work closely with team members to ensure projects are completed successfully.
- Identify and fix performance issues in the systems.
- Use your knowledge of data engineering to manage our Reference Data System with modern technologies.
Must-Have Skills:
- Experience building and managing data pipelines using Kafka, Snowflake, and Python.
- Good understanding of distributed data processing and streaming.
- Familiarity with Snowflake for data loading and performance tuning.
- Proficiency in Python for handling and automating data tasks.
- Knowledge of the Kafka ecosystem (Confluent and Kafka Streams).
- Understanding of SQL, data modeling, and ETL/ELT processes.
Plus:
- Knowledge of areas such as trade processing, settlement, reconciliation, and back/middle-office functions in financial markets (equities, fixed income, derivatives, and FX).
- Strong understanding of trade lifecycle events, different order types, allocation rules, and settlement processes.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.