Join our seasoned team of Big Data, Cloud, and Analytics professionals to take your career to the next level. You will be trained in technologies such as Amazon Web Services, SAS, Hortonworks Hadoop, and Google Cloud Platform.
Responsibilities and Duties
- Contribute to job and data flow design on platforms such as BigQuery, Redshift, Hadoop, Snowflake, Teradata, and Oracle
- Develop jobs using one or more of the following: HiveQL, Sqoop, Python, Spark SQL, SAS
- Automate data management jobs using various frameworks
- Ingest data into Hadoop or cloud-based data lakes from various source systems
Education and Skills
- Bachelor’s degree or higher in an Engineering, IT, Math, or Science-related field
- 6+ years of experience, including 3+ years in professional services (customer-facing) roles and 3+ years in the Hadoop/Spark ecosystem
- Experience with Databricks is a strong plus
- Hands-on experience with one or more of Spark SQL, Scala, Python, or HiveQL
- Hands-on experience with Airflow, NiFi, or another data orchestration tool
- Good understanding of data engineering project delivery
- Experience with REST/JSON or SOAP/XML APIs and the Java ecosystem, including debugging and monitoring tools
- Strong verbal and written communication skills are a must
Job Category: Data Engineer