Job Description



Code: 5631
Job title: Cloud-PaaS-AWS-Amazon Web Services (L3)
Location: São Paulo, São Paulo
Region: Other
Employment type: PJ (Pessoa Jurídica – independent contractor)
Professional level:
Academic level: High school diploma
Shift/Hours:
Skills: Essential
• Experience working in a cloud environment – AWS
• Hands-on experience with AWS services: S3, AWS Lambda, Athena, Step Functions, AWS Glue, AWS DynamoDB (AWS serverless technologies)
• Understanding of ETL pipelines
• Capable of requirements gathering
• Experience with Big Data technologies – e.g. Hadoop, Hive, Spark
• Experience with MPP (Massively Parallel Processing) databases helpful – e.g. Teradata, Netezza
• Awareness of Big Data challenges – large table sizes (e.g. depth/width), even distribution of data
• Programming experience – SQL, Python, PySpark
• Data modelling experience/awareness – Third Normal Form, dimensional modelling
• Data pipelining skills – data blending, etc.
• Visualisation experience – Tableau, Power BI, etc.
• Data management experience – e.g. data quality, security
• Development/delivery methodologies – Agile, SDLC
• Experience working in a geographically distributed team

2. Professional Knowledge / Skills:

AWS services: S3, AWS Lambda, Athena, Step Functions, AWS Glue, AWS DynamoDB (AWS serverless technologies), Big Data
3. Mandatory Requirements: The candidate must be able to work in a multi-vendor environment.
Category:
Base compensation: -   -
Benefits: 0
Job summary: Same as the Essential skills, professional knowledge, and mandatory requirements listed above.

