🤔 Problem

The Keycloak Kubernetes pod fails to start. This can happen during a realm migration when existing Keycloak users reference entities, such as clients, that no longer exist in the latest realm configuration.

The Keycloak container keeps restarting, and manual actions on the pod are impossible.
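
As a first check, the crash loop can be confirmed from outside the pod; the namespace and pod name below are placeholders:

    # List the Keycloak pods and check the restart count
    kubectl -n <namespace> get pods
    # Show the logs of the previous, crashed container instance
    kubectl -n <namespace> logs <keycloak-pod> --previous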

🌱 Solution

A manual migration is necessary. To perform it, the pod must first be brought into a stable state.

Edit the Keycloak deployment configuration from:

command: [ "/bin/bash" ]
args: [ "-c",
  "/opt/keycloak/bin/kc.sh start \
  -Dkeycloak.migration.action=import \
  -Dkeycloak.migration.provider=dir \
  -Dkeycloak.migration.dir=/opt/keycloak/imports \
  -Dkeycloak.migration.strategy=IGNORE_EXISTING"]

to:

command: [ "/bin/bash" ]
args: [ "-c", "/opt/keycloak/bin/kc.sh start"]

This will start Keycloak in its previous stable state, without attempting an import.
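
If the Deployment is managed directly with kubectl rather than through ArgoCD (see the ArgoCD notes at the end), the change can be applied in place; the namespace and deployment name are placeholders:

    # Open the Deployment for editing and replace the args as shown above
    kubectl -n <namespace> edit deployment <keycloak-deployment>
    # Watch the rollout until the pod reaches a stable state
    kubectl -n <namespace> rollout status deployment/<keycloak-deployment>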

Remove the old references either from the Keycloak admin console or directly in the configuration files (lissi-cloud-realm.json or lissi-cloud-users-0.json).
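
As a rough way to locate such references, the exported files can be searched for the identifier of the removed entity; the client ID below is a hypothetical example:

    # Find every occurrence of a client that was removed from the realm
    grep -n "old-client-id" lissi-cloud-realm.json lissi-cloud-users-0.json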

Modify the configuration files from inside the container:

  1. Connect to the Keycloak container and open a bash shell (see the kubectl sketch after this list)

  2. Export the current configuration

    cd /opt/keycloak
    bin/kc.sh export --dir <dir> --users different_files
  3. Create a migration folder and copy the user configuration file

    mkdir migration
    cp <dir>/lissi-cloud-users-0.json migration/
  4. Copy the realm configuration in the migration folder

    cp imports/lissi-cloud-realm.json migration/
  5. Sync the user configuration file with the realm configuration file, i.e. remove or update the user entries (for example clientRoles) that reference clients no longer present in the realm configuration (see the sketch after this list)

  6. Run the migration script

    bin/kc.sh start \
      -Dkeycloak.migration.action=import \
      -Dkeycloak.migration.provider=dir \
      -Dkeycloak.migration.dir=/opt/keycloak/migration
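
For step 1, a shell can be opened in the running container; the namespace and pod name are placeholders:

    kubectl -n <namespace> exec -it <keycloak-pod> -- bash

For step 5, assuming the usual Keycloak export layout (clients listed under "clients" in the realm file, user client roles keyed by client ID under "clientRoles") and that jq is available (it may need to be run outside the container after copying the files out with kubectl cp), stale client references can be listed like this:

    # Clients that still exist in the realm configuration
    jq -r '.clients[].clientId' migration/lissi-cloud-realm.json | sort -u > realm-clients.txt
    # Clients referenced by user client roles
    jq -r '.users[].clientRoles // {} | keys[]' migration/lissi-cloud-users-0.json | sort -u > user-clients.txt
    # Lines unique to user-clients.txt are references to clients that no longer exist
    comm -13 realm-clients.txt user-clients.txt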

Edit the Keycloak deployment configuration back to its original import command
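
Once the import arguments are restored, you can watch the pod come back up cleanly; the namespace is a placeholder:

    # The pod should now start without entering a crash loop
    kubectl -n <namespace> get pods -w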

Stabilise the Keycloak container with ArgoCD

  1. Deactivate the Auto-Sync policy (see the argocd CLI sketch after this list)

  2. Edit the deployment configuration from the ArgoCD admin console

  3. Follow the steps described earlier

  4. Reactivate the Auto-Sync policy
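
If the argocd CLI is preferred over the admin console, the Auto-Sync policy can be toggled as sketched below; the application name is a placeholder:

    # Disable automated sync so manual edits to the Deployment are not reverted
    argocd app set <keycloak-app> --sync-policy none
    # ...perform the manual migration steps described above...
    # Re-enable automated sync afterwards
    argocd app set <keycloak-app> --sync-policy automated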