- Build data-intensive solutions on AWS/GCP/Azure to help customers transition to cloud
- Collaborate with data scientists to design and deploy solutions on cloud
- Design data layouts using object storage (S3/Cloud Storage) or databases to enable Spark and other processes to access data efficiently
- Keep abreast of the latest technologies and apply them to solve big data problems
- Use containers and container orchestration to manage workloads
- Learn from and contribute to knowledge-sharing sessions
- Automate solutions to improve overall productivity and reduce the cost footprint of resources
- Identify and solve performance and scalability issues
- Understand business problems and propose technical solutions
- Work with a global development team to develop production solutions
- Sound understanding of cloud platforms and their service offerings (at least one of AWS/GCP/Azure)
- Strong programming background in Python and SQL
- Strong understanding of data engineering and data warehousing concepts
- Knowledge of object storage like S3 and designing data layouts for efficient data access
- Familiarity with Spark and big data technologies such as Hive, BigQuery, EMR, Dataproc, and Dataflow will be given extra consideration
- Experience with container technology (Docker) is a plus.
- Prior experience with data visualization technologies such as SAS VA or Tableau is a plus
- Education: Master's degree in Computer Science or equivalent
- Travel up to 25% is required
| Job Category | Data Engineer |