1. Prerequisites

1.1 Access to the Docker Images and Helm Chart

To set up the Lissi Agent using Kubernetes, you need:

  1. Read access to the Docker images (access token provided by our Lissi Team):

    1. milissi.azurecr.io/lissi-keycloak-theme-lissi

    2. milissi.azurecr.io/lissi-agent

    3. milissi.azurecr.io/lissi-agent-ui

    4. milissi.azurecr.io/lissi-tails-server

  2. The Lissi Agent Helm chart

1.2 Kubernetes Secrets

Sensitive information such as passwords and keys needs to be provided as Kubernetes secrets before installing the Lissi Agent Helm chart.

| Secret Name | Keys | Values | Description |
|---|---|---|---|
| acapy | acapyAdminKey | A 32-character alphanumeric string is recommended | Authorization API key to access the AcaPy endpoints |
| acapy | agentWalletEncryptionKey | A 32-character alphanumeric string is recommended | AcaPy wallet encryption key |
| acapy-db | password | Any random secure password. If the database is externally provisioned, the password must match that of the database user you wish to connect with | AcaPy wallet database password |
| keycloak-credentials | dbPassword | Any random secure password. If the database is externally provisioned, the password must match that of the database user you wish to connect with | Keycloak database access password |
| keycloak-credentials | keycloakPassword | Any random secure password | Password for the Keycloak admin user, used to access the Keycloak admin web UI exposed at https://YOUR.DOMAIN/auth |
| mediator | apiKey | The API key will be provided by the Lissi Agent Team alongside the configuration files | |
| mongodb | password | Any random secure password. If the database is externally provisioned, the password must match that of the database user you wish to connect with | MongoDB access password |
| regcred | .dockerconfigjson | Follow this tutorial to set your .dockerconfigjson secret with the provided token, or follow the steps outlined below | This secret is used to authorize the user to pull images from the private ACR registry |
| acapy-webhook | apiKey | A 32-character alphanumeric string is recommended | Adds a security layer to communication with the AcaPy webhook endpoint |
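As an alternative to the bundled YAML file described below, individual secrets can also be created directly with kubectl. The following sketch creates the acapy secret; the openssl command merely produces values in the recommended 32-character alphanumeric format, and the generated values are placeholders:

# generate 32-character alphanumeric values (16 random bytes, hex-encoded)
ACAPY_ADMIN_KEY=$(openssl rand -hex 16)
WALLET_ENCRYPTION_KEY=$(openssl rand -hex 16)

kubectl create secret generic acapy \
    --from-literal=acapyAdminKey=$ACAPY_ADMIN_KEY \
    --from-literal=agentWalletEncryptionKey=$WALLET_ENCRYPTION_KEY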

Bundle secrets in a single file

There are multiple ways to manage Kubernetes secrets. One option is to bundle all secrets in a single YAML file as depicted below, separating the individual secrets with three hyphens (---).

secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mediator
type: Opaque
data:
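  # values under data must be base64-encoded; use stringData instead of data for plain-text values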
  apiKey: afdas13fh3d813hdasf=

---

To apply all secrets to a Kubernetes instance, execute the following terminal command.

kubectl apply -f secrets.yaml
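Afterwards you can verify that all secrets listed in the table above were created:

kubectl get secrets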

Create a regcred Secret

To ensure that images hosted on milissi.azurecr.io can be pulled as expected, a secret holding the auth string for Azure's container registry needs to be created. This can be done by creating a JSON file similar to the one depicted below. The auth string must contain the base64-encoded user name and corresponding token.

docker-auth.json:

{
    "auths": {
        "milissi.azurecr.io": {
            "auth": "asdfasdf1235asdföpkjlöasjdfasdf="
        }
    }
}
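The auth value is the base64 encoding of the user name and token joined by a colon. It can be generated as follows; the user name and token below are placeholders, use the credentials provided by the Lissi Team:

echo -n '<USERNAME>:<ACCESS_TOKEN>' | base64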

Once the JSON file has been created, the secret itself can be created with the following terminal command.

kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=<path/to/.docker/docker-auth.json> \
    --type=kubernetes.io/dockerconfigjson
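To confirm that the secret was stored correctly, it can be inspected with the following command; the .dockerconfigjson value is displayed base64-encoded:

kubectl get secret regcred --output=yaml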

1.3 Register a Domain Name

Register a domain name that points to the public IP address of your Kubernetes cluster. The UI and the controller will be accessible from this domain at https://YOUR.DOMAIN/ and https://YOUR.DOMAIN/ctrl/ respectively.
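To verify the setup, compare the public address exposed by your ingress with the IP address your domain resolves to; the exact resource to inspect depends on how your cluster exposes traffic:

kubectl get ingress          # shows the public ADDRESS assigned by the ingress controller
nslookup YOUR.DOMAIN         # should resolve to the same IP address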

2. Install the Helm Chart

The installation of the Lissi Agent Helm chart follows two steps:

  1. Customization of the values.yaml

  2. Installation of the Helm chart.

2.1 Customization of the values.yaml

The provided configuration files contain two values files, values.yaml and values-template.yaml, that set the desired values for a specific Kubernetes deployment. When values-template.yaml is passed to the helm install command, it overrides the duplicated values present in values.yaml.

The following table provides an overview of the available configuration options.

Lissi Agent

| Keys | Description |
|---|---|
| publicDomain | The domain name from which the Lissi Agent will be accessible. Note that the subcharts share the same public domain; therefore it is important to use the anchor &domain before the actual value, e.g. publicDomain: &domain example.domain.com |
| didCommEndpoint | Endpoint used to establish connections to other agents. It is possible to use the same value as publicDomain. |
| controller.createDefaultTenant | Boolean that indicates whether a default tenant should be created by the Lissi Agent. |
| controller.corsAllowedOrigins | List of allowed CORS origins, or "*" to allow all origins. Defaults to https://<publicDomain> if left empty. |
| controller.sendServerExceptionMessagesToClients | By default, no error message is returned to clients in the context of server errors. Set this to true if server error messages should be shared with clients. |
| mongo.persistentVolume.name | Name of the persistent volume you wish to attach to the pod. Stores the Lissi Agent's MongoDB documents. |
| mongo.persistentVolume.storageClass | This field can be used to match an existing storage policy. |
| mongo.persistentVolume.storage | Requested storage capacity for the MongoDB documents. |
| mongo.persistentVolume.staticPersitentVolume.enabled | Boolean that indicates whether the persistent volume should be statically provisioned. |
| mongo.persistentVolume.staticPersitentVolume.csi | In case of static persistent volume provisioning, describes the existing static volume configuration. |
| mongo.db.externalProvisioning | Boolean that indicates whether a MongoDB database pod should be created or the controller should connect to an existing MongoDB. |
| mongo.db.protocol | Protocol used for the connection to MongoDB. The protocol is used to build the MongoDB connection string. Only override this value if the database is provisioned externally. |
| mongo.db.queryString | Query string part of the MongoDB connection string. Only override this value if the database is provisioned externally. |
| mongo.db.host | Address from which MongoDB can be reached. Only override this value if the database is provisioned externally. |
| mongo.db.port | Port of the MongoDB. Can be left empty in case the port should not be specified. Only override this value if the database is provisioned externally. |
| mongo.db.username | Name of the user used to access the database. In case the database is externally provisioned, the user must already exist and requires the right to create new databases. |
| ingress.tls.secretName['tls-secret'].hosts | List of the domain names you would like to create TLS/SSL certificates for. |
| purgeRoutines.enabled | Indicates whether purge routines are enabled. If not provided, default values are applied. |
| purgeRoutines.dataLifespan | Indicates how long user-sensitive information is stored before removal by the purge routine. If not provided, default values are applied. |
| purgeRoutines.periodicity | Indicates how often to run the purge routines. If not provided, default values are applied. |
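As an illustration of the &domain anchor, a values-template.yaml excerpt could look as follows. The exact nesting of the subchart values is taken from the tables in this section, and example.domain.com is a placeholder:

publicDomain: &domain example.domain.com   # anchor shared with the subcharts
didCommEndpoint: *domain                   # may also point to a dedicated endpoint

keycloak:
  publicDomain: *domain

acapy:
  publicDomain: *domain

tails-server:
  apiEndpoint: *domain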

MongoDB Connection String

The MongoDB connection string is assembled from the different values in values.yaml as well as the password stored in the K8s secret:

<MONGODB_PROTOCOL>://<MONGODB_USERNAME>:<MONGODB_PASSWORD>@<MONGODB_HOST>:<MONGODB_PORT>/<SPRING_DATA_MONGODB_DATABASE>?<MONGODB_QUERY_STRING>
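For illustration, with protocol mongodb, user lissi, the password from the mongodb secret, host mongo, port 27017, database lissi-agent, and query string authSource=admin (all placeholder values), the assembled connection string would be:

mongodb://lissi:S3cretPassw0rd@mongo:27017/lissi-agent?authSource=admin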

Tails-Server

| Keys | Description |
|---|---|
| tails-server.apiEndpoint | The same public domain must be used to access the revocation registry; for that reason it is recommended to keep the *domain anchor. |
| tails-server.persistentVolume.name | Name of the persistent volume you wish to attach to the pod. Stores the revocation registry. |
| tails-server.persistentVolume.storage | Requested storage capacity for the revocation registry. |
| tails-server.persistentVolume.storageClass | This field can be used to match an existing storage policy. |
| tails-server.persistentVolume.staticPersistentVolume.enabled | Boolean that indicates whether the persistent volume should be statically provisioned. |
| tails-server.persistentVolume.staticPersistentVolume.csi | In case of static persistent volume provisioning, describes the existing static volume configuration. |
| tails-server.logLevel | Log level of the tails server. |
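The staticPersistentVolume.csi block (available for the mongo, tails-server, and acapy persistent volumes) describes an already existing volume. A minimal sketch, assuming the block follows the standard Kubernetes CSI volume source fields; the driver, volume ID, and capacity are placeholders:

tails-server:
  persistentVolume:
    name: tails-server-pv
    storage: 5Gi                           # placeholder capacity
    staticPersistentVolume:
      enabled: true
      csi:
        driver: disk.csi.azure.com         # placeholder: CSI driver of your storage provider
        volumeHandle: <EXISTING_VOLUME_ID> # placeholder: ID of the pre-provisioned volume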

Keycloak

| Keys | Description |
|---|---|
| keycloak.publicDomain | The same public domain must be used to access Keycloak; for that reason it is recommended to keep the *domain anchor. |
| keycloak.keycloakPersistentVolume | Name of the persistent volume you wish to attach to the pod. Stores the realms and users as well as the client application. |
| keycloak.storageClass | This field can be used to match an existing storage policy. |
| keycloak.storage | Requested storage capacity for the Keycloak database. |
| keycloak.db.externalProvisioning | Boolean that indicates whether a new Postgres database pod should be created or the controller should connect to an existing Postgres database. |
| keycloak.db.host | Address of the Postgres database. Only override this value if the database is provisioned externally. |
| keycloak.db.port | Port of the Postgres database. |
| keycloak.db.name | Name of the database that should be used. Only override this value if the database is provisioned externally. The database must already exist. |
| keycloak.db.username | Name of the user used to access the database. In case the database is externally provisioned, the user must already exist and requires read and write rights for the database specified by keycloak.db.name. |
| keycloak.customKeycloak.realm | Boolean value that activates a custom Keycloak configuration defined in the realm ConfigMap (see Section 2.2.3 Custom Keycloak Configuration). |
| keycloak.customKeycloak.user | Boolean value that activates a custom Keycloak configuration defined in the users ConfigMap (see Section 2.2.3 Custom Keycloak Configuration). |
| keycloak.cache | Distributed caching between nodes is enabled by default. Providing the value "local" disables it. |
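For an externally provisioned Keycloak database, the relevant values could be set as in the following sketch; host, database name, and user name are placeholders, and the password itself comes from the keycloak-credentials secret:

keycloak:
  db:
    externalProvisioning: true
    host: postgres.internal.example.com   # placeholder address
    port: 5432
    name: keycloak                        # database must already exist
    username: keycloak                    # user must already exist with read/write rights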

AcaPy

| Keys | Description |
|---|---|
| acapy.publicDomain | The same public domain must be used to access AcaPy; for that reason we recommend keeping the *domain anchor. |
| acapy.acapyLabel | The label is the SSI agent name that users see when connecting to your SSI agent. |
| acapy.acapyImageUrl | The image URL is used by SSI wallets to show users a logo of your company when they are connecting to your SSI agent. |
| acapy.persistentVolume.name | Name of the persistent volume you wish to attach to the pod. Stores the AcaPy wallet's encrypted information. |
| acapy.persistentVolume.storageClass | This field can be used to match an existing storage policy. |
| acapy.persistentVolume.storage | Requested storage capacity for the AcaPy wallet. |
| acapy.persistentVolume.staticPersistentVolume.enabled | Boolean that indicates whether the persistent volume should be statically provisioned. |
| acapy.persistentVolume.staticPersistentVolume.csi | In case of static persistent volume provisioning, describes the existing static volume configuration. |
| acapy.ledgerPoolName | Name of the ledger that AcaPy should connect to. |
| acapy.ledgerGenesisFile | Use this field to describe the ledger that AcaPy should connect to. |
| acapy.db.externalProvisioning | Boolean that indicates whether a new Postgres database pod should be created or the controller should connect to an existing Postgres database. |
| acapy.db.host | Address of the Postgres database. Only override this value if the database is provisioned externally. |
| acapy.db.port | Port of the Postgres database. |
| acapy.db.username | Name of the user used to access the database. In case the database is externally provisioned, the user must already exist and requires the right to create new databases. |
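The branding-related AcaPy values map directly to what users see in their wallet when connecting; a short sketch with placeholder values:

acapy:
  acapyLabel: "Example Organization"              # agent name shown to users when connecting
  acapyImageUrl: "https://example.com/logo.png"   # company logo shown in SSI wallets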

2.2.3 Custom Keycloak Configuration

It is possible to import a custom JSON-formatted Keycloak configuration by leveraging ConfigMaps.

Custom Realm
  1. Create a file charts/keycloak/templates/realm-config.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: realm-config
    data:
      lissi-cloud-realm.json: |
        {
          /** Your custom realm JSON description **/
        }

  2. In the parent Helm Chart set keycloak.customKeycloak.realm = true

  3. To update an existing deployment with the latest changes

    1. Set the migration strategy to OVERWRITE_EXISTING in charts/keycloak/templates/deployment.yaml of the Keycloak chart, or

    2. Run a manual migration directly from the Keycloak container (cf. Keycloak Migration)

Custom Users
  1. Create a file charts/keycloak/templates/users-config.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: users-config
    data:
      lissi-cloud-users-0.json: |
        {
          /** Your custom users JSON description **/
        }

  2. In the parent Helm Chart set keycloak.customKeycloak.users = true

  3. To update an existing deployment with the latest changes

    1. Set the migration strategy to OVERWRITE_EXISTING in charts/keycloak/templates/deployment.yaml of the Keycloak chart, or

    2. Run a manual migration directly from the Keycloak container (cf. Keycloak Migration)
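Before installing, you can check that the custom ConfigMaps render as expected by previewing the chart output. This uses standard Helm tooling and is not specific to this chart:

helm template [NAME] ./ -f ./values-template.yaml | grep -B 2 -A 10 'name: realm-config'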

2.2 Installing the Helm Chart

Once the values-template.yaml file is configured as desired and you are connected to your cluster, open a terminal, navigate to the root of the configuration files, and run the following command.

helm install [NAME] ./ -f ./values-template.yaml [ADDITIONAL_FLAGS]

More information on Helm chart installation and flags here: https://helm.sh/docs/helm/helm_install/
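After the installation you can follow the rollout and, later on, apply changes to the values with a standard Helm upgrade:

kubectl get pods --watch                                   # wait until all pods are Running and Ready
helm status [NAME]                                         # shows the status of the release
helm upgrade [NAME] ./ -f ./values-template.yaml           # apply later changes to the values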

2.3 Test the Lissi Agent

After a couple of minutes, the agent is fully set up and accessible through the registered domain name https://{DOMAIN}/. To log in to the Lissi Agent, use the username lissi and the password lissi. On your first login, you are requested to change the password to a secure password of your choice.
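As a quick smoke test from the command line, you can check that the UI and the controller respond over HTTPS (paths as registered in Section 1.3):

curl -I https://YOUR.DOMAIN/
curl -I https://YOUR.DOMAIN/ctrl/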