Responsibilities
- Design, develop, and maintain scalable and robust data pipelines on Databricks.
- Collaborate with data scientists and analysts to understand their data requirements and deliver solutions that meet them.
- Optimize and troubleshoot existing data pipelines for performance and reliability.
- Ensure data quality and integrity across various data sources.
- Implement data security and compliance best practices.
- Monitor data pipeline performance and perform maintenance and updates as needed.
- Document data pipeline processes and technical specifications.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in data engineering.
- Proficiency with Databricks and Spark.
- Strong SQL skills and experience with relational databases.
- Experience with big data technologies (e.g., Hadoop, Kafka).
- Knowledge of data warehousing concepts and ETL processes.
- Excellent problem-solving and analytical skills.
Skills
- Databricks
- Apache Spark
- SQL
- Python
- Data Warehousing
- ETL
- Hadoop
- Kafka
Job Category: Big Data
Job Type: Full Time
Job Location: Chennai
Total Experience: 5+ Years