- Building high-quality, sustainable data pipelines and ETL processes to extract data from a variety of APIs and ingest it into cloud-based services.
- Efficiently developing complex SQL queries to aggregate and transform data for the analytics team and general users.
- Maintaining accurate, error-free databases and data lake structures.
- Conducting quality assessments and integrity checks on both new and existing queries and processes.
- Monitoring existing solutions and working proactively to rapidly resolve errors and identify potential problems before they occur.
- Using data visualization tools such as Power BI, SSRS, Tableau, and Looker to develop high-quality dashboards and reports.
- Consulting with a variety of stakeholders to gather new project requirements and transform them into well-defined tasks and targets.
1. The right candidate will be an innovative and adaptable data expert with a strong desire to succeed.
2. You’ll have demonstrated experience working in a high-performing business intelligence or data warehouse environment, excellent communication skills, and a passion for problem solving and learning new technologies.
3. Working at the Company, you’ll be exposed to a wide variety of tasks, tools, and programming languages, so the desire and ability to constantly learn new skills is essential.
4. We’re looking for people who are passionate about data, with an emphasis on quality programming and building the best solution possible.
- 3–5 years of practical experience in data/analytics, with at least 1 year in an engineering/BI role.
- At least 1 year of practical experience working on data pipelines or analytics projects with languages such as Python, Scala, or Node.js.
- At least 2 years of practical experience working on data pipelines or analytics projects with SQL/NoSQL databases (ideally in a Hadoop-based environment).
- Strong knowledge of and practical experience working with at least four of the following AWS services: S3, EMR, ECS/EC2, Lambda, Glue, Athena, Kinesis/Spark Streaming, Step Functions, CloudWatch, DynamoDB.
- Strong experience working with data processing and ETL systems such as Oozie, Airflow, Azkaban, Luigi, or SSIS.
- Experience developing solutions in a Hadoop stack using tools such as Hive, Spark, Storm, Kafka, Ambari, and Hue.
- Ability to work with large volumes of both raw and processed data in a variety of formats, including JSON, ORC, Parquet, and CSV.
- Ability to work in a Linux/Unix environment (predominantly via EMR and the AWS CLI / Hadoop file system).
- Experience with DevOps solutions such as Jenkins, GitHub, Ansible, Docker, and Kubernetes.
- Minimum of undergraduate-level qualifications in a technical discipline such as Computer Science, Data Science, Analytics, Machine Learning, or Statistics; postgraduate qualifications will be highly regarded.
- Demonstrated experience and expertise in setting up and maintaining cloud data solutions and AWS infrastructure will be highly regarded.
- Strong knowledge of cloud-based data security, encryption, and protection methods will also be highly regarded.
- Language skills: business-level English and business-level Japanese.