Securing Kubernetes: Implementing Container Signature Verification with Cosign and Kyverno



    In our last technical series blog, we explained how to leverage both Cosign and Kyverno to make it easier and faster to sign and check the container images that are used on Kubernetes clusters. That way, you can protect your infrastructure from both accidental errors and malicious attacks. This time, we’re fulfilling our promise to dive into the actual code we use to do it.

    Verifying container image signatures is quickly becoming an essential part of a secure Kubernetes cluster. With container signature verification, you can be sure that only images from trusted sources, like your organization’s CI/CD environment, are allowed to be deployed to your clusters – adding another hurdle for threat actors or insider threats. This includes not only the container images that your organization creates in-house, but also the many third-party Kubernetes components that are regularly deployed to a cluster. While there are a few different projects that support container image signature verification, Virtru’s SRE team chose Kyverno.

    Kyverno is a powerful Kubernetes policy engine designed to generate, validate, and mutate Kubernetes resources based on user-created policies. With its many features and ease of use, we deemed it the best fit out of the available options.

    Using Kyverno, you can verify signatures from a variety of sources, including your own self-hosted public/private key pair and KMS keys from different cloud providers. In this example, we’ll be showing how we use a combination of Cosign, Terraform, Kyverno, and a GCP KMS key on a Google Kubernetes Engine (GKE) cluster to verify our container images.


    After Kyverno is deployed to your cluster, perhaps using a GitOps tool like ArgoCD, you’ll need a Kyverno policy to verify that all Pods contain images signed with a specified key. To enable verification of signatures from other projects, the policy would look something like:

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: external-secrets-policy
    spec:
      validationFailureAction: enforce
      webhookTimeoutSeconds: 30
      failurePolicy: Fail
      background: true
      rules:
        - name: external-secrets-policy
          match:
            any:
              - resources:
                  kinds:
                    - Pod
          verifyImages:
            - imageReferences:
                - "ghcr.io/external-secrets/external-secrets:*"
              attestors:
                - entries:
                    - keyless:
                        subject: "https://github.com/external-secrets/external-secrets*"
                        issuer: "https://token.actions.githubusercontent.com"
    This policy leverages the cosign keyless signatures that the external-secrets project creates for its published container images. If an image from the “ghcr.io/external-secrets/external-secrets” repository is deployed to the cluster without a keyless signature issued by “https://token.actions.githubusercontent.com”, the Pod will fail to deploy. The next example uses a private Google Artifact Registry OCI repository, but other container registries, like GCR and ECR, are also compatible with cosign signatures:

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: virtru-apps-policy
    spec:
      validationFailureAction: enforce
      background: false
      webhookTimeoutSeconds: 30
      failurePolicy: Fail
      rules:
        - name: check-image
          match:
            any:
              - resources:
                  kinds:
                    - Pod
          verifyImages:
            - image: "us-docker.pkg.dev/project123/apps/*"
              key: "gcpkms://projects/project123/locations/us/keyRings/test_keyring/cryptoKeys/cosign_key/versions/1"
    This policy requires that any container image deployed from the “us-docker.pkg.dev/project123/apps” container registry must have an associated signature created with the “cosign_key” KMS key from the GCP project “project123”. Kyverno policies hook into the Kubernetes admission controller, so whenever a Kubernetes resource is created, the policy verification check must pass before Kubernetes will create the resource. Thus, if an image from this registry is deployed to the cluster without an associated KMS signature, the Pod will fail to deploy.
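    Before Kyverno ever checks a signature, it first decides whether a policy applies by matching the image reference against the policy’s wildcard patterns. As a rough illustration (a Python sketch, not Kyverno’s actual Go implementation, and with simplified glob semantics), that matching step looks something like:

```python
from fnmatch import fnmatchcase

# Patterns taken from the two example policies above.
POLICY_IMAGE_PATTERNS = [
    "us-docker.pkg.dev/project123/apps/*",
    "ghcr.io/external-secrets/external-secrets:*",
]

def policy_applies(image: str) -> bool:
    """Return True if any policy pattern covers this image reference."""
    return any(fnmatchcase(image, pattern) for pattern in POLICY_IMAGE_PATTERNS)

print(policy_applies("us-docker.pkg.dev/project123/apps/api:v1.2.3"))  # True
print(policy_applies("docker.io/library/nginx:latest"))                # False
```

    Only images that match a pattern are subjected to signature verification; everything else is admitted without a signature check, which is why scoping these patterns carefully matters.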

    Workload Identity

    The best way to give Kyverno access to GCP resources is to use Workload Identity on your GKE clusters. Once Workload Identity is enabled on the cluster, you’ll need to bind the Kyverno service account on your Kubernetes cluster to a GCP service account with the proper roles.

    module "kyverno-workload-identity" {
      source              = "terraform-google-modules/kubernetes-engine/google//modules/workload-identity"
      use_existing_k8s_sa = true
      name                = "kyverno"
      namespace           = "kyverno"
      project_id          = "project123"
      roles = [
        "roles/cloudkms.viewer",
        "roles/cloudkms.verifier",
      ]
    }

    The above Terraform module will create a GCP service account named “kyverno” in the project123 project, grant it the “roles/cloudkms.viewer” and “roles/cloudkms.verifier” roles, and create a binding between the Kubernetes Kyverno service account and the GCP Kyverno service account via the “roles/iam.workloadIdentityUser” role. Your organization will need to decide how best to scope the permissions of the GCP service account so that it has only the minimal permissions needed for the GCP resources it must access.
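    Under the hood, Workload Identity ties the two identities together with an IAM member string in a fixed format. A small illustration of how the names from this example line up (the member format is the standard GKE Workload Identity form):

```python
# Identifiers from the module above.
project_id = "project123"
namespace = "kyverno"
k8s_sa = "kyverno"

# The GCP service account the module creates:
gcp_sa = f"{k8s_sa}@{project_id}.iam.gserviceaccount.com"

# The member granted roles/iam.workloadIdentityUser on that service account,
# which lets the Kubernetes service account impersonate it:
member = f"serviceAccount:{project_id}.svc.id.goog[{namespace}/{k8s_sa}]"

print(gcp_sa)   # kyverno@project123.iam.gserviceaccount.com
print(member)   # serviceAccount:project123.svc.id.goog[kyverno/kyverno]
```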

    KMS code

    If your organization would like to sign the container images that your developers build and deploy to your clusters, the KMS key must be in a specific format to be compatible with cosign. The cosign CLI has a built-in subcommand to create this key:

    > cosign generate-key-pair --kms gcpkms://projects/project123/locations/global/keyRings/test_keyring/cryptoKeys/cosign_key

    Using the gcloud CLI, we can inspect the format of the key that cosign created:

    > gcloud kms keys describe projects/project123/locations/global/keyRings/test_keyring/cryptoKeys/cosign_key

    createTime: '2022-01-06T01:47:10.360168836Z'
    destroyScheduledDuration: 86400s
    name: projects/project123/locations/global/keyRings/test_keyring/cryptoKeys/cosign_key
    purpose: ASYMMETRIC_SIGN
    versionTemplate:
      algorithm: EC_SIGN_P256_SHA256
      protectionLevel: SOFTWARE
    While creating a key with the cosign CLI is sufficient, best practice is to define the key in an infrastructure-as-code (IaC) tool like Terraform:

    module "kms" {
      source  = "terraform-google-modules/kms/google"
      version = "~> 2.2"

      project_id     = "project123"
      location       = "us"
      keyring        = "test_keyring"
      keys           = ["cosign_key"]
      purpose        = "ASYMMETRIC_SIGN"
      key_algorithm  = "EC_SIGN_P256_SHA256"
      set_owners_for = ["cosign_key"]
      owners = [
        # IAM members that own the key, e.g. the CI service account that
        # signs images (value illustrative):
        # "serviceAccount:ci-signer@project123.iam.gserviceaccount.com",
      ]
    }

    Notice that the “purpose” of “ASYMMETRIC_SIGN” and the “key_algorithm” of “EC_SIGN_P256_SHA256” match the specifications of the key that was created manually with the cosign CLI. Signing a container pushes an object (the signature) to your OCI registry that links a specific key, in this case a GCP KMS key, to a specific container image digest. With this KMS key, you can sign container images manually using cosign, or with a CI/CD system like Argo Workflows or GitHub Actions.
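    Conceptually, what gets signed is a small “simple signing” JSON payload that pins the image digest; the signature over the SHA-256 hash of that payload is what cosign pushes to the registry. A rough sketch of the payload’s shape (the digest here is contrived for illustration; see the cosign documentation for the authoritative format):

```python
import hashlib
import json

# Hypothetical image and digest for illustration; cosign signs the real
# manifest digest of the pushed image.
image_ref = "us-docker.pkg.dev/project123/apps/myapp"
digest = "sha256:" + hashlib.sha256(b"example-manifest").hexdigest()

# Simple-signing payload in roughly the shape cosign uses.
payload = {
    "critical": {
        "identity": {"docker-reference": image_ref},
        "image": {"docker-manifest-digest": digest},
        "type": "cosign container image signature",
    },
    "optional": None,
}

# The KMS key signs the SHA-256 hash of the serialized payload.
serialized = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
payload_hash = hashlib.sha256(serialized).hexdigest()
print(payload_hash)
```

    Because the payload embeds the manifest digest rather than a mutable tag, re-tagging or re-pushing a different image under the same name invalidates the signature, which is exactly the property the Kyverno policy relies on.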

    Now that you've assembled all the pieces, combining cosign signatures with Kyverno signature verification, you can be sure that unsigned and unapproved images will be blocked from deploying to your clusters!

