Develop, deploy, and maintain ETL/ELT data pipelines using Azure Databricks (notebooks, jobs, clusters) and other Azure services such as Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and Azure Synapse.
Extract data from multiple sources, transform it (batch or streaming), and load it into data lakes or warehouses; ensure pipeline reliability and scalability.
Write complex SQL queries, use PySpark or Scala for transformations, optimise performance of Spark jobs, tune Databricks clusters for cost & speed.
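To make the transformation work concrete, here is a minimal, pure-Python sketch of the kind of batch transformation such a job performs: filter invalid rows, derive a column, then aggregate. In a Databricks job the same logic would be written with PySpark DataFrame operations (noted in the comments); the record fields, threshold, and discount logic are hypothetical, purely for illustration.

```python
# Pure-Python sketch of a typical batch transformation. Each step's
# PySpark DataFrame equivalent is noted in a comment. Field names and
# the business rules are hypothetical.
from collections import defaultdict

def transform(orders):
    # PySpark: df.filter(df.amount > 0)
    valid = [o for o in orders if o["amount"] > 0]

    # PySpark: df.withColumn("net", df.amount * (1 - df.discount))
    for o in valid:
        o["net"] = o["amount"] * (1 - o["discount"])

    # PySpark: df.groupBy("region").agg(F.sum("net").alias("total_net"))
    totals = defaultdict(float)
    for o in valid:
        totals[o["region"]] += o["net"]
    return dict(totals)

orders = [
    {"region": "EU", "amount": 100.0, "discount": 0.10},
    {"region": "EU", "amount": -5.0,  "discount": 0.00},  # dropped by the filter
    {"region": "US", "amount": 200.0, "discount": 0.25},
]
print(transform(orders))  # {'EU': 90.0, 'US': 150.0}
```

The list comprehensions and loops here map one-to-one onto the `filter` / `withColumn` / `groupBy().agg()` chain a PySpark version would use; Spark simply distributes the same steps across a cluster.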
Implement data modelling, data architecture patterns, support data governance, data quality and metadata management.
Monitor production systems, define alerts, troubleshoot issues, manage job failures, and ensure pipeline availability.
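"Manage job failures" typically means a retry-with-backoff-then-alert pattern. Below is a hedged sketch of that pattern in plain Python; `run_pipeline` and `send_alert` are hypothetical stand-ins for a real job launcher (e.g. a Databricks Jobs run) and an alerting hook, not actual APIs.

```python
# Retry a pipeline run with exponential backoff; alert only once
# retries are exhausted. run_pipeline and send_alert are hypothetical
# callables supplied by the caller.
import time

def run_with_retries(run_pipeline, send_alert, max_attempts=3, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return run_pipeline()
        except Exception as exc:
            if attempt == max_attempts:
                send_alert(f"pipeline failed after {attempt} attempts: {exc}")
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

# Usage sketch: a flaky job that fails twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient error")
    return "ok"

alerts = []
print(run_with_retries(flaky, alerts.append, base_delay=0.01))  # ok
```

Alerting only after the final attempt keeps transient failures (a brief storage hiccup, a throttled API) from paging anyone, while persistent failures still surface quickly.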
Collaborate with cross-functional teams (business analysts, data scientists, cloud infrastructure, security) and translate business requirements into technical solutions.
Stay current with Azure and Databricks features, evaluate new tools, improve architecture and cost-efficiency.
High-demand, future-proof role
Cloud + Data Engineering are among the fastest-growing IT fields. Databricks is becoming a standard for enterprise data platforms.
Hands-on with cutting-edge tech
You’ll work with Spark, Lakehouse architecture, streaming, big data, AI/ML integrations.
Excellent career progression
A typical path: Senior Data Engineer → Data Architect → Cloud Architect → Data Engineering Manager
Multi-cloud opportunities
Skills transfer well to Databricks on AWS and GCP, Synapse, Snowflake, and similar platforms.
Strong visibility & impact
Your work directly supports business analytics, reporting, AI, and revenue decisions.