Company: RiskSpan (riskspan.com)

Location: Remote; candidates must be based in 🇮🇳 India

Data Engineer - SQL, DBT, Snowflake and Python Experience

RiskSpan is a leading source of analytics, modeling, data and risk management solutions for the Consumer and Institutional Finance industries. We solve business problems for clients such as banks, mortgage-backed and asset-backed securities issuers, equity and fixed-income portfolio managers, servicers, and regulators that require our expertise in the market risk, credit risk, operational risk, and information technology domains.

This position is based in India, working remotely for a US client; you must live in India to be considered. The candidate must be a strong data engineer with solid SQL, DBT, Snowflake, and Python experience.

We are seeking a highly skilled and motivated Data Engineer to join our team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to enable efficient data processing, analysis, and reporting. Your expertise in DBT, Snowflake, SQL, and Python will be essential in building and optimizing data pipelines that ensure the availability, scalability, and reliability of our data ecosystem. Experience with Matillion or any other data extraction tool is a plus.

Responsibilities:

- Collaborate with cross-functional teams to understand data requirements and translate them into effective data solutions.

- Design and implement DBT test cases as part of the validation framework.

- Experience creating DBT singular and generic test cases (see the first sketch after this list).

- Experience working with Matillion tools is a plus.

- Experience converting stored procedures into DBT models (see the second sketch after this list).

- Design and implement robust and scalable data pipelines using Snowflake, SQL, and Python to extract, transform, and load (ETL) data from various sources.

- Optimize data workflows for performance, reliability, and maintainability.

- Develop and maintain data models, ensuring data accuracy, consistency, and integrity.

- Work with large datasets, both structured and unstructured, to derive meaningful insights.

- Identify and address data quality issues through data profiling, validation, and cleansing techniques.

- Monitor and troubleshoot data pipelines to ensure data availability and accuracy.

- Continuously improve data processes and automation to streamline data operations.

- Collaborate with data scientists and analysts to support their data needs and facilitate data-driven decision-making.
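
For illustration, here is a minimal sketch of a DBT singular test of the kind this role involves. The model and column names (fct_loan_balances, loan_id, current_balance) are hypothetical, not drawn from any actual RiskSpan project:

```sql
-- tests/assert_no_negative_balances.sql
-- A DBT singular test: dbt runs this query during `dbt test`,
-- and the test fails if it returns any rows.
-- Model and column names are hypothetical examples.
select
    loan_id,
    current_balance
from {{ ref('fct_loan_balances') }}
where current_balance < 0
```

A generic test, by contrast, is a parameterized test (such as dbt's built-in unique and not_null) applied to model columns through schema YAML rather than written as a one-off query.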
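Along the same lines, a hedged sketch of what converting a stored procedure into a DBT model can look like: the transformation logic inside a legacy procedure becomes a version-controlled SELECT statement that dbt materializes. All names here (stg_positions, fct_daily_positions) are hypothetical:

```sql
-- models/fct_daily_positions.sql
-- Hypothetical example: the aggregation a stored procedure once
-- wrote into a reporting table, expressed as a DBT model instead.
{{ config(materialized='table') }}

select
    account_id,
    as_of_date,
    sum(market_value) as total_market_value
from {{ ref('stg_positions') }}
group by account_id, as_of_date
```

Because the model is plain SQL under dbt's control, it picks up dependency ordering, documentation, and the testing framework sketched above, which is the usual motivation for migrating procedures.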

Qualifications:

- Bachelor’s degree in Computer Science, Information Technology, or a related field (Master's preferred).

- Strong experience with DBT and Snowflake for data warehousing and analytics.

- Proficiency in SQL for querying and manipulating data.

- Extensive programming experience in Python for building data pipelines and automating data processes.

- Solid understanding of data modeling concepts and database design principles.

- Familiarity with data integration, transformation, and ETL techniques.

- Experience with version control systems (e.g., Git, Azure DevOps) and CI/CD pipelines is a plus.

- Excellent problem-solving skills and ability to work in a dynamic, collaborative environment.

- Strong communication skills to collaborate effectively with technical and non-technical stakeholders.