Job Description:
Required Skills & Experience:
· Strong experience designing and maintaining production-grade batch data pipelines
· Proficiency in SQL for complex data transformation and analytical workloads
· Hands-on experience with Azure Databricks, Apache Spark, and Python
· Experience working with Azure Synapse Analytics and Azure Data Lake Storage Gen2
· Solid understanding of data modeling, relational databases, and data warehousing concepts
· Experience optimizing large-scale ETL and data processing workflows
· Familiarity with Git-based source control and Agile delivery methodologies
· Strong communication and collaboration skills for cross-functional alignment
Responsibilities:
· Design, build, and maintain scalable data pipelines and workflows using Azure Databricks and Spark
· Ingest and process structured and unstructured data from internal and external data sources
· Develop and optimize SQL queries for data transformation, validation, and analysis
· Build intermediate data processing solutions using SAS programs and SSIS packages where applicable
· Troubleshoot, debug, and resolve complex data processing and performance issues
· Collaborate with stakeholders, analysts, and engineering teams to align data solutions with business goals
· Ensure adherence to data quality, security, and governance standards
· Support Agile delivery through sprint participation, documentation, and continuous improvement