About me


Rogier Dikkes

My name is Rogier Dikkes. Since 2012 I have worked in various Linux-related roles.

Since 2012 I have specialised in helping organisations build Big Data clusters and container and virtualisation environments, in the E-Learning, HPC Compute and Energy sectors. This has mainly been in systems programmer/platform engineer roles, using Agile/DevOps/SRE methods to achieve the best results.

History

2022: Available for freelance positions

2021 - Present: Vandebron Tech Lead Platform Engineering

Formally assigned as one of the tech leads within Vandebron. As part of the tech lead team, we managed the architectural backlog and promoted standards across the teams. Many of the responsibilities assigned to this position were already part of the platform position: for example, managing infrastructure costs, infrastructure architecture, quarterly planning and the team backlog, taking the lead in projects, building a vision for the team, promoting and facilitating the DevOps culture, and streamlining the agile processes.

This position was held in addition to the platform engineer position.

2019 - Present: Vandebron DevOps Platform Engineer

The platform I built in my role as Big Data DevOps platform engineer came to be used by multiple teams, and with this my role expanded to technical lead of the team. In this role I additionally managed costs and forecasts, architected the infrastructure, made quarterly and yearly planning, and managed the team backlog. During this period I introduced Kanban to the team and was responsible for streamlining the process around it. The team consisted of 3-4 platform engineers. Infrastructure was managed as code, and scripts were written in Bash, Python and Golang.

In this role I helped set up the open API, built and architected the cloud network and infrastructure, improved operational stability and monitoring, helped develop an SRE mindset in the company, and made cost optimisations, alongside daily operations and changes to the platform.

For the Kubernetes environment I built various Helm templates, including ones for Cassandra with Medusa and Cassandra Reaper. The cluster ran various workloads such as Spark, Kafka, Cassandra, Jenkins, Airflow, NiFi, and dozens of in-house applications written in Scala, Node.js and TypeScript.

Infrastructure as code was implemented with Ansible, Terraform, Packer, Docker, Helm and other tools.

Oct 2020 - Oct 2021: Project Lead Kubernetes migration

Responsible for the migration from the Mesos environment to Kubernetes.

In this role I was project lead for a team of 8 to 15 engineers migrating our DC/OS environment to a Kubernetes environment.

During the first stage:

  • Made cost estimations of possible solutions
  • Set up PoCs for Kubernetes clusters
  • Wrote a business case for the Kubernetes solution
  • Helped configure and build tools in Golang to migrate data stores such as Cassandra, MongoDB and CockroachDB to the new environment
  • Provided teams with training in Kubernetes and gave demos

Among the tools built in Golang were an IPVS solution and a Mesos seed provider for our Cassandra Kubernetes cluster.

In the second stage:

  • Assisted teams in planning releases to DTAP environments
  • Created playbooks with overviews of tasks per team
  • Reported regularly to management
  • Handled the roadmap planning for multiple teams
  • Ran refinement sessions

The entire project incurred only 5 minutes of downtime, caused by switching over single-instance databases.

2018 - 2019: Vandebron Big Data Platform Engineer

As DevOps platform engineer in the Big Data team I was responsible for building and maintaining the team's infrastructure. Thanks to its flexible scaling and the performance gains achieved on this platform, it was adopted by multiple teams. It made it easy to scale and manage their microservices and to build data platforms and data processing pipelines.

In my role for the Big Data team I supported the team by managing tools such as Spark, Cassandra, Kafka, NiFi, Elastic and Hyperledger (blockchain). The general infrastructure used Traefik, Mesos, DC/OS, ZooKeeper, Prometheus, Grafana, MongoDB, CockroachDB, Postgres, HashiCorp Vault, Redis, ELK, Zipkin and Jenkins. This was built on top of AWS, on which I built spot instance Spark autoscaling. In AWS I managed the cloud infrastructure based on EC2, DynamoDB, S3, networking and ELBs. Configuration management was done with Ansible, Packer, Docker and Terraform.

2013 - 2018: SURFsara

At SURFsara I was part of two operational teams: the HPC Cloud team and the Scalable Data Analytics team.

With the HPC Cloud facility, SURFsara offered self-service, dynamically scalable and fully configurable IaaS HPC systems to the Dutch academic community. The environment used OpenNebula, Ansible, Ceph, Prometheus and many other applications in support of the HPC Cloud. Within this environment we offered GPU, compute and storage resources. The HPC Cloud had multiple petabytes of Ceph storage, which I set up as the block storage for the IaaS HPC Cloud. In my last year at SURFsara I became the technical lead for the HPC Cloud infrastructure.

SURFsara operated one of the largest Hadoop clusters for scientific research in Europe, with multiple petabytes of storage. The Scalable Data Analytics team was responsible for running Elasticsearch and Hadoop/YARN/Spark clusters. My main responsibility was the unified compute infrastructure based on Mesos and Aurora.

From June 2014 until December 2014 I was part of the Grid Compute operational team at SURFsara. The grid is a transnational distributed infrastructure of compute clusters and storage systems.

2012 - 2013: UP learning

E-learning hosting of Moodle, Mahara, BigBlueButton and Kaltura environments.

2012 - 2012: UNC

E-learning hosting set up for a training course.
