Develop and maintain ETL pipelines using PySpark / SQL
Work with Delta Lake for scalable and reliable data storage
Optimize data pipeline performance & query execution
Integrate data from multiple sources (batch & streaming)
Manage Databricks clusters, jobs & notebooks
Collaborate with data scientists & analysts to deliver insights
Implement data governance, quality & security controls
Deploy solutions on cloud platforms (Azure / AWS / GCP)
High demand for cloud & data engineering skills
Opportunity to work on Big Data & AI transformation
Exposure to Azure/AWS/GCP technologies
Fast career growth to Data Architect / Lead Engineer
Opportunities in global IT companies and product firms
Work with advanced technologies like Delta Lake, MLflow
Gain strong expertise in modern data lakehouse architecture
Learn scalable data processing frameworks (PySpark)
Hands-on experience with automation & DevOps for data pipelines
Chance to contribute to analytics & ML workflows
Health & medical insurance coverage
Provident fund, bonuses & performance incentives
Hybrid/Remote work options (in many companies)
Paid leaves, training & cloud certification support
Stock options at some product-based companies