Troubleshooting Keycloak CrashLoopBackOff

🤔 Problem

The Keycloak Kubernetes pod fails to start. This can happen after a realm migration, when existing Keycloak users still refer to entities (such as clients) that no longer exist in the latest realm configuration.

The Keycloak container keeps restarting, so manual actions on that pod are impossible.

🌱 Solution

A manual migration is necessary. To perform it, however, the pod must first be brought into a stable state.

Edit the Keycloak deployment configuration from

    command: [ "/bin/bash" ]
    args: [ "-c",
      "/opt/keycloak/bin/kc.sh start \
      -Dkeycloak.migration.action=import \
      -Dkeycloak.migration.provider=dir \
      -Dkeycloak.migration.dir=/opt/keycloak/imports" ]

to

    command: [ "/bin/bash" ]
    args: [ "-c", "/opt/keycloak/bin/kc.sh start" ]

This starts Keycloak in its previous stable state, without running the import on startup.

Remove the old references either from the Keycloak admin console or in the configuration files (lissi-cloud-realm.json or lissi-cloud-users-0.json).
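Before removing anything, it helps to list which references are actually stale. A minimal sketch, assuming Keycloak's dir-export layout (users carry a `clientRoles` map keyed by clientId, the realm file lists `clients`); the sample data below is made up for illustration:

```python
def stale_clients(users_export: dict, realm_export: dict) -> list:
    """Return clientIds referenced by users but absent from the realm."""
    valid = {c["clientId"] for c in realm_export.get("clients", [])}
    referenced = set()
    for user in users_export.get("users", []):
        # dict keys of clientRoles are the referenced clientIds
        referenced.update(user.get("clientRoles", {}))
    return sorted(referenced - valid)

# Hypothetical, minimal stand-ins for lissi-cloud-realm.json and
# lissi-cloud-users-0.json
realm = {"clients": [{"clientId": "account"}]}
users = {"users": [{"username": "alice",
                    "clientRoles": {"account": ["view-profile"],
                                    "removed-client": ["admin"]}}]}

print(stale_clients(users, realm))  # → ['removed-client']
```

Every clientId this returns must be cleaned up in the admin console or in the files.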

To modify the configuration files from inside the container:

  1. Connect and run a bash in the Keycloak container

  2. Export the current configuration

    cd /opt/keycloak
    bin/kc.sh export --dir <dir> --users same_file
  3. Create a migration folder and copy the user configuration file

    mkdir migration
    cp <dir>/lissi-cloud-users-0.json migration/
  4. Copy the realm configuration in the migration folder

    cp imports/lissi-cloud-realm.json migration/
  5. Sync the user configuration file with the realm configuration file

  6. Run the migration script

    bin/kc.sh start \
      -Dkeycloak.migration.action=import \
      -Dkeycloak.migration.provider=dir \
      -Dkeycloak.migration.dir=migration
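Step 5 above, syncing the user file with the realm file, amounts to deleting role mappings for clients that no longer exist in the realm. A minimal sketch, again assuming Keycloak's dir-export layout (the sample data is illustrative; in practice, json.load() the real files from the migration folder):

```python
import json

def prune_stale_client_roles(users_export: dict, realm_export: dict) -> dict:
    """Drop clientRoles entries that point to clients missing from the realm."""
    valid = {c["clientId"] for c in realm_export.get("clients", [])}
    for user in users_export.get("users", []):
        roles = user.get("clientRoles", {})
        user["clientRoles"] = {cid: r for cid, r in roles.items() if cid in valid}
    return users_export

# Hypothetical stand-ins for migration/lissi-cloud-realm.json and
# migration/lissi-cloud-users-0.json
realm = {"clients": [{"clientId": "account"}]}
users = {"users": [{"username": "alice",
                    "clientRoles": {"account": ["view-profile"],
                                    "removed-client": ["admin"]}}]}

pruned = prune_stale_client_roles(users, realm)
print(json.dumps(pruned["users"][0]["clientRoles"]))  # only "account" remains
```

After writing the pruned users file back, the import in step 6 should no longer hit the dangling references.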

Finally, edit the Keycloak deployment configuration back to its original state.

Stabilise the Keycloak container with ArgoCD

  1. Deactivate the Auto-Sync policy

  2. Edit the deployment configuration from the ArgoCD admin console

  3. Follow the steps described earlier

  4. Reactivate the Auto-Sync policy
