My client is based in the beautiful city of Johannesburg.
They are currently working on a hybrid model, i.e. 2 days in the office per week.
Duties And Responsibilities
- Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Glue, Step Functions, Kafka CC, PySpark, DynamoDB, Redshift, Lambda, Delta Lake and Python.
- Analyze, re-architect and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services and Kafka CC.
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, PySpark, Scala and Kafka CC (a minimal PySpark sketch follows this list).
- Design and implement data engineering, ingestion and curation functions on the AWS cloud using AWS-native services or custom programming.
- Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.
- Design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
- Interface with other technology teams to extract, transform and load data from a wide variety of data sources using SQL, AWS big data technologies and Kafka CC.
- Create and support real-time data pipelines built on AWS technologies including Glue, Lambda, Step Functions, PySpark, Athena and Kafka CC.
- Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
- Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.
- Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
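To give a flavour of the hands-on work (referenced in the pipeline bullet above), below is a minimal, illustrative PySpark sketch of a batch pipeline: read raw JSON events from S3, curate them, and write partitioned Parquet for downstream consumption. The bucket names, paths and column names are hypothetical and for illustration only, not the client's actual environment, and the job assumes a Spark environment with S3 access is already configured.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job name; any Spark 3.x environment with S3 access will do.
spark = SparkSession.builder.appName("orders-curation-example").getOrCreate()

# Ingest: raw JSON events landed in an S3 raw zone (path is illustrative).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Curate: drop malformed rows, normalise the timestamp, deduplicate, and
# derive a load_date column to partition the output by.
curated = (
    raw.dropna(subset=["order_id", "order_ts"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("load_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Consume: write partitioned Parquet to a curated zone that Athena,
# Redshift Spectrum or similar tools can query (path is illustrative).
(
    curated.write
           .mode("overwrite")
           .partitionBy("load_date")
           .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()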
Desired Skills
- AWS
- PySpark
- ETL
- 5 to 10 years' experience