Job description
I&I demands - Data Engineers

Location: Bangalore

Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field
- Experience: Minimum of 7-9 years of experience in PySpark / Python with AWS

Position Overview:
As a Data Engineer specializing in PySpark and Python within an AWS environment, you will be responsible for designing, developing, and maintaining robust data pipelines. You will work closely with data scientists and analysts to ensure data availability, reliability, and accessibility for analytics and machine learning projects.

Key Responsibilities:
- Design, develop, and optimize scalable data pipelines using PySpark and Python.
- Collaborate with cross-functional teams to gather requirements and understand data needs.
- Implement data ingestion processes from diverse sources (e.g., APIs, databases, flat files).
- Manage and optimize AWS services such as S3, Redshift, Glue, and EMR.
- Ensure data quality and integrity by implementing validation and cleansing processes.
- Monitor and troubleshoot data workflows, ensuring timely and accurate data delivery.
- Document data processes and maintain clear communication with stakeholders.
- Stay current on industry trends and best practices in data engineering and AWS technologies.
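As a flavor of the validation and cleansing work this role involves, here is a minimal sketch in plain Python. The actual role uses PySpark, where the same rules would be expressed as DataFrame filters and column transformations; all field names and rules below are hypothetical examples, not part of the posting.

```python
# Hypothetical validation/cleansing step: drop records missing a required id
# and normalize the email field. Field names and rules are illustrative only.

def cleanse_records(records):
    """Return a new list with invalid records dropped and emails normalized."""
    cleaned = []
    for rec in records:
        if not rec.get("id"):          # validation: required key must be present
            continue
        rec = dict(rec)                # avoid mutating the caller's data
        email = (rec.get("email") or "").strip().lower()
        rec["email"] = email or None   # cleansing: normalize, empty -> None
        cleaned.append(rec)
    return cleaned

raw = [
    {"id": 1, "email": "  Alice@Example.COM "},
    {"id": None, "email": "bob@example.com"},   # dropped: missing id
    {"id": 3, "email": ""},
]
print(cleanse_records(raw))
```

In PySpark, the equivalent would typically be a `filter` on the id column followed by `withColumn` transformations, letting Spark apply the rules in parallel across the cluster.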
Skills Required
Key Skills
- PYSPARK
- PYTHON
- AWS