ODM Deployment & Traffic Exposure

Published: April 2, 2026

Phase 2: ODM Deployment & Traffic Exposure

In this final phase, we deploy the ODM workload. The configuration must bridge the gap between the Kubernetes Service (ClusterIP) and the AWS ALB, ensuring traffic remains encrypted across the boundary.

2.1 Configure Helm Repository

Before generating the deployment manifests, add the IBM Helm repository to your local client. This allows Helm to locate the ibm-odm-prod chart.

# 1. Add the IBM Helm Repo
helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm

# 2. Update to ensure you have the latest chart versions
helm repo update

# 3. Verify the chart is available (Target: 25.1.0 for ODM 9.5.0.1)
helm search repo ibm-odm-prod
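If the repository is reachable, the search should list the chart. Illustrative output (the description column is abbreviated and may differ):

NAME                    CHART VERSION   APP VERSION   DESCRIPTION
ibm-helm/ibm-odm-prod   25.1.0          9.5.0.1       IBM Operational Decision Manager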
Tip: Chart Available in Artifactory

To run these commands against Artifactory, the chart must already be uploaded there. For instructions on uploading the chart into Artifactory, refer to Solution Overview/Prepare.

Unlike classic Helm repositories, OCI registries do not use helm repo add. Instead, you authenticate with the registry and reference the oci:// URL directly.

# 1. Log into Artifactory
helm registry login artifactory.gym.lan:8443 --insecure

# 2. Verify access by pulling the chart locally 
helm pull oci://artifactory.gym.lan:8443/<Artifactory_Repo_Name>/ibm-odm-prod \
    --version 25.1.0 \
    --insecure-skip-tls-verify

# 3. Untar ibm-odm-prod-25.1.0.tgz
tar -xvzf ibm-odm-prod-25.1.0.tgz
Tip: Reference Documentation

For a complete list of available configuration parameters, default values, and architectural details, refer to the official IBM ODM Production Helm Chart README.
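You can also inspect those defaults locally before building your overrides. A quick sketch, using either the classic repo reference or the chart directory extracted from the OCI pull above:

# Print the chart's default values from the classic repo
helm show values ibm-helm/ibm-odm-prod --version 25.1.0 | less

# Or from the extracted chart directory
helm show values ./ibm-odm-prod | less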

2.2 Helm Chart Configuration

To satisfy the strict OPA policies and network requirements, we must construct a specific values.yaml file. This file overrides the default “insecure” settings of the chart.

Constructing the Values File

Select the configuration that matches your deployment phase.

Use Case: Deployment into the EKS Pilot environment.
Key Features: External Oracle Database, Image Digests, Strict Security Contexts.

Create a file named values-prod.yaml:

Download values-prod.yaml

# values-prod.yaml

# 1. License & Auth
license: true
usersPassword: "<SET_ADMIN_PASSWORD>"

# 2. Image Config (Internal Mirror)
image:
  repository: artifactory.internal.corp/odm-repo
  pullSecrets:
    - internal-registry-secret
  # Note: Global tag is commented out to force component-level digests
  # tag: "9.5.0.1"

# 3. Component Digests (Required for 'container-image-must-have-digest' policy)
# You must obtain the SHA256 digest from your Artifactory for each image.
decisionCenter:
  tagOrDigest: "sha256:6a0eb1f874ba52918bcd8e2c3acde2d3e428685cad7e5996e0c1227e88d3de0b"
decisionRunner:
  tagOrDigest: "sha256:6f0643013e18d848199a73f38c5f6f854c1226ae7702c8294b835b74aa561782"
decisionServerConsole:
  tagOrDigest: "sha256:f4c778a388535330ce5d5612d6325d5522cedb70f0cb7895fa7f015a38e5bb9c"
decisionServerRuntime:
  tagOrDigest: "sha256:ab03e4e35923c674a090456f6869963a6d29e8f94117061ff11d383cc8c9369a"

# 4. Architecture: External Database (Required for 'psp-fsgroup' policy)
internalDatabase:
  persistence:
    enabled: false # Disable internal DB

externalDatabase:
  type: "oracle" # or "postgresql"
  url: "jdbc:oracle:thin:@//<ORACLE_SERVER_ADDRESS>:1521/freepdb1""
  # References the secret created in Prereqs section
  secretCredentials: "odm-db-secret"

# 5. Security Contexts (Native v9.5 Features)
customization:
  runAsUser: 1001
  # NATIVE FIX: Satisfies psp-seccomp
  seccompProfile:
    type: RuntimeDefault
  # NATIVE FIX: Satisfies must-have-appid
  labels:
    applicationid: "ODM-PILOT"

# 6. Ingress Configuration (AWS ALB Specific)
service:
  type: ClusterIP
  enableRoute: false
  # NATIVE FIX: Satisfies "Host cannot be empty" policy
  hostname: "odm.internal.corp"

  ingress:
    enabled: true
    host: "odm.internal.corp"
    class: alb
    
    # NOTE: When using AWS ACM, we do not use k8s TLS secrets. 
    # However, if OPA strictness requires a TLS block to be present in the YAML, 
    # leave these empty or define a dummy secret. 
    # Usually, the 'allow-http: false' annotation satisfies OPA.
    tlsSecretRef: "" 
    tlsHosts: []

    annotations:
      # 1. Controller Class
      kubernetes.io/ingress.class: alb
      
      # 2. Network Configuration
      alb.ingress.kubernetes.io/scheme: internet-facing
      # 'ip' mode routes traffic directly to Pod IPs (bypassing NodePort)
      # This is faster and required for some sticky session configurations
      alb.ingress.kubernetes.io/target-type: ip
      
      # 3. Encryption & Certificates (AWS ACM)
      # Reference your ACM Certificate ARN here
      alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:123456789012:certificate/xxxx-xxxx-xxxx"
      
      # 4. Backend Security (Re-encryption)
      # Tells ALB to speak HTTPS to the Pods (required for OPA compliance inside cluster)
      alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
      # Ensure Health Checks also use HTTPS so pods don't fail readiness
      alb.ingress.kubernetes.io/healthcheck-protocol: "HTTPS"
      
      # 5. OPA Compliance
      # Explicitly disables HTTP to satisfy 'ingress-https-only' policy
      kubernetes.io/ingress.allow-http: "false"
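The component digests in section 3 of this file must come from your own registry. A minimal sketch for looking one up, assuming skopeo is available and that your mirror uses an image path like the one shown (substitute your actual host, path, and tag):

# Print the manifest digest for a mirrored image
skopeo inspect --format '{{.Digest}}' \
    docker://artifactory.internal.corp/odm-repo/odm-decisioncenter:9.5.0.1

# Alternative with docker: the digest appears in RepoDigests after a pull
docker pull artifactory.internal.corp/odm-repo/odm-decisioncenter:9.5.0.1
docker inspect --format '{{index .RepoDigests 0}}' \
    artifactory.internal.corp/odm-repo/odm-decisioncenter:9.5.0.1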
Important: AWS ALB & OPA "TLS" Constraints

Standard OPA policies (ingress-https-only) often check if the spec.tls list is populated in the Ingress YAML.

In AWS ALB: You typically use the certificate-arn annotation instead of a Kubernetes Secret, leaving spec.tls empty.

If OPA blocks this configuration: You may need to create a “dummy” self-signed secret and reference it in tlsSecretRef just to satisfy the OPA regex check, even though the ALB ignores it in favor of the ARN.
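If you do end up needing such a placeholder secret (or the real self-signed secret used by the lab configuration below), a minimal sketch with openssl and kubectl, assuming the namespace and hostnames from this guide:

# Generate a self-signed certificate for the ODM hostname (adjust the CN)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout odm-tls.key -out odm-tls.crt \
    -subj "/CN=odm.internal.corp"

# Store it as a TLS secret and reference its name in tlsSecretRef
kubectl -n odm-pilot create secret tls odm-tls-secret \
    --cert=odm-tls.crt --key=odm-tls.key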

Use Case: Sandbox testing where an external database is not available.
Key Features: Internal PostgreSQL or Oracle (Accepts known OPA violation for DB only), Image Digests, Self-Signed Ingress.

Create a file named values-lab.yaml:

Download values-lab.yaml

# values-lab.yaml

# 1. License & Auth
license: true
usersPassword: "odmAdminPassword123!"

# 2. Image Config (Lab Artifactory)
image:
  # Comment out or remove the repository line to use the default IBM registry (cp.icr.io)
  repository: artifactory.gym.lan:8443/docker-local
  pullSecrets:
    - internal-registry-secret
  # Do not use tag for lab deployments - use digest instead
  # tag: "9.5.0.1"

# 3. Component Digests (From Lab Artifactory)
decisionCenter:
  tagOrDigest: "sha256:6a0eb1f874ba52918bcd8e2c3acde2d3e428685cad7e5996e0c1227e88d3de0b"
decisionRunner:
  tagOrDigest: "sha256:6f0643013e18d848199a73f38c5f6f854c1226ae7702c8294b835b74aa561782"
decisionServerConsole:
  tagOrDigest: "sha256:f4c778a388535330ce5d5612d6325d5522cedb70f0cb7895fa7f015a38e5bb9c"
decisionServerRuntime:
  tagOrDigest: "sha256:ab03e4e35923c674a090456f6869963a6d29e8f94117061ff11d383cc8c9369a"

# 4. Architecture: Internal Database
# Note: This WILL fail 'psp-fsgroup' checks. Acceptable for Lab only.
internalDatabase:
  # Digest for dbserver image
  tagOrDigest: "sha256:9106481ba539808ea9fed4b7d3197e91732748bc2170e862b729af8cc874f5db"
  persistence:
    enabled: true
    useDynamicProvisioning: true
    storageClassName: "local-path"
  runAsUser: 26
# Uncomment ONE of the externalDatabase blocks below to use an "external" DB in the lab setting; ensure the active internalDatabase block above is commented out first
# internalDatabase:
#   persistence:
#     enabled: false # Disable internal DB
# externalDatabase:
#   type: "postgresql"
#   serverName: "postgres.postgres.svc.cluster.local"
#   databaseName: "odm_db"
#   port: "5432"
#   # References the secret created in Prereqs section
#   secretCredentials: "odm-db-secret"
# externalDatabase:
#   type: "oracle"
#   url: "jdbc:oracle:thin:@//oracle-db.oracle.svc.cluster.local:1521/freepdb1"
#   # References the secret created in Prereqs section
#   secretCredentials: "odm-db-secret"

# 5. Security Contexts
customization:
  runAsUser: 1001
  seccompProfile:
    type: RuntimeDefault
  labels:
    applicationid: "ODM-LAB"

# 6. Ingress Configuration for lab using nginx ingress controller
service:
  type: ClusterIP
  enableRoute: false
  hostname: "odm.my-haproxy.gym.lan"

  ingress:
    enabled: true
    host: "odm.my-haproxy.gym.lan"
    tlsSecretRef: "odm-tls-secret"
    tlsHosts:
      - "odm.my-haproxy.gym.lan"
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

2.3 Kustomize Workarounds

While ODM v9.5 resolves many security configurations natively, critical gaps remain that cannot be fixed via values.yaml alone:

  1. Group ID Enforcement: The Helm chart templates explicitly define runAsUser but ignore runAsGroup and supplementalGroups. This causes the pods to fail the psp-pods-allowed-user-ranges policy. We inject these fields (1001) into every Deployment.
  2. Test Job Compliance: The odm-test-connection Job generated by the chart is unconfigurable via values.yaml. It lacks Resource Limits, Security Contexts, and Image Digests. We patch this job to add all missing security fields.
  3. Image Pull Policy (Init Containers): The OPA policy requires imagePullPolicy: Always for all containers. The Helm chart defaults Init Containers to IfNotPresent with no option to override. We use a JSON Patch to forcibly update the pull policy on every container in the manifest.

Create Patch Files

Define the following files in your overlay directory. Note that we use distinct patch files for each component to ensure explicit targeting and avoid accidentally modifying infrastructure components (such as the database).

File 1: security-patch.yaml

Target: Application Deployments (Decision Center, Runner, Console, Runtime).
Purpose: Injects the mandatory Group IDs required by the “Restricted” OPA policy.

Download security-patch.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: .*
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        supplementalGroups: [1001]
File 2: job-security-patch.yaml

Target: The Database Connection Test Job.
Purpose: This job is “unconfigurable” in the standard chart. We must patch it to enforce Image Digests, Resource Limits, and strict Security Contexts.

Download job-security-patch.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: .*
spec:
  template:
    spec:
      # Pod Level Security
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        supplementalGroups: [1001]
        seccompProfile:
          type: RuntimeDefault

      containers:
      - name: odm-lab-odm-test
        imagePullPolicy: Always
        # CRITICAL: The Helm chart does not support digests for this specific job.
        # Hardcode the mirrored Runtime image and its SHA256 digest here, or comment
        # this line out and use the default cp.icr.io image below instead.
        image: artifactory.internal.corp/odm-repo/odm-decisionserverruntime@sha256:ab03e4e35923c674a090456f6869963a6d29e8f94117061ff11d383cc8c9369a
        # Use the following for default image from IBM Container Registry
        #image: cp.icr.io/cp/cp4a/odm/odm-decisionserverruntime@sha256:ab03e4e35923c674a090456f6869963a6d29e8f94117061ff11d383cc8c9369a

        # Fix "container-must-have-limits-and-requests"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi

        # Fix "privilege-escalation", "capabilities", "readonlyrootfilesystem"
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
          seccompProfile:
            type: RuntimeDefault
File 3: security-patch-dc.yaml

Target: Decision Center Deployment.
Purpose: Injects the mandatory runAsGroup: 1001 and supplementalGroups into the Decision Center pods, as the Helm chart template ignores these values for this component.

Download security-patch-dc.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: odm-lab-odm-decisioncenter
spec:
  template:
    spec:
      automountServiceAccountToken: false
      # Pod-level settings (keep these)
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        supplementalGroups: [1001]

      # NEW: Explicitly inject into the Main Container
      containers:
      - name: odm-decisioncenter
        securityContext:
          seccompProfile:
            type: RuntimeDefault

      # NEW: Explicitly inject into the Init Containers
      initContainers:
      - name: init-folder-readonlyfs
        securityContext:
          seccompProfile:
            type: RuntimeDefault
File 4: security-patch-runner.yaml

Target: Decision Runner Deployment.
Purpose: Injects the mandatory runAsGroup: 1001 and supplementalGroups into the Decision Runner pods to satisfy the “Restricted” OPA policy.

Download security-patch-runner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: odm-lab-odm-decisionrunner
spec:
  template:
    spec:
      automountServiceAccountToken: false
      # Pod-level settings (keep these)
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        supplementalGroups: [1001]

      # NEW: Explicitly inject into the Main Container
      containers:
      - name: odm-decisionrunner
        securityContext:
          seccompProfile:
            type: RuntimeDefault

      # NEW: Explicitly inject into the Init Containers
      initContainers:
      - name: init-folder-readonlyfs
        securityContext:
          seccompProfile:
            type: RuntimeDefault
File 5: security-patch-console.yaml

Target: Decision Server Console Deployment.
Purpose: Injects the mandatory runAsGroup: 1001 and supplementalGroups into the Rule Execution Server Console pods.

Download security-patch-console.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: odm-lab-odm-decisionserverconsole
spec:
  template:
    spec:
      automountServiceAccountToken: false
      # Pod-level settings (keep these)
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        supplementalGroups: [1001]

      # NEW: Explicitly inject into the Main Container
      containers:
      - name: odm-decisionserverconsole
        securityContext:
          seccompProfile:
            type: RuntimeDefault

      # NEW: Explicitly inject into the Init Containers
      initContainers:
      - name: init-folder-readonlyfs
        securityContext:
          seccompProfile:
            type: RuntimeDefault
File 6: security-patch-runtime.yaml

Target: Decision Server Runtime Deployment.
Purpose: Injects the mandatory runAsGroup: 1001 and supplementalGroups into the Rule Execution Server Runtime pods.

Download security-patch-runtime.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: odm-lab-odm-decisionserverruntime
spec:
  template:
    spec:
      automountServiceAccountToken: false
      # Pod-level settings (keep these)
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        supplementalGroups: [1001]

      # NEW: Explicitly inject into the Main Container
      containers:
      - name: odm-decisionserverruntime
        securityContext:
          seccompProfile:
            type: RuntimeDefault

      # NEW: Explicitly inject into the Init Containers
      initContainers:
      - name: init-folder-readonlyfs
        securityContext:
          seccompProfile:
            type: RuntimeDefault
File 7: force-pull-policy.yaml

Target: All ODM Deployments and Jobs.
Purpose: A JSON Patch that overrides the imagePullPolicy for all containers (including Init Containers, which cannot be configured via Helm). This forces the policy to Always, satisfying strict image currency requirements.

Download force-pull-policy.yaml

# force-pull-policy.yaml
# Forces Main Container
- op: replace
  path: /spec/template/spec/containers/0/imagePullPolicy
  value: Always
# Forces First Init Container (e.g. init-folder-readonlyfs)
- op: replace
  path: /spec/template/spec/initContainers/0/imagePullPolicy
  value: Always
# Forces Second Init Container (e.g. init-decisionrunner)
# Note: Uncomment this entry when using the internal database in a lab setting, so the extra init container also satisfies the OPA constraint
# - op: replace
#   path: /spec/template/spec/initContainers/1/imagePullPolicy
#   value: Always
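Whether the second entry is needed depends on how many init containers the rendered manifests actually contain. A quick check against the odm-raw.yaml generated in section 2.4 below:

# List init containers per pod spec in the rendered manifest
grep -n -A 2 "initContainers:" odm-raw.yaml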

Create Kustomization Manifest

Create the kustomization.yaml file to link the patches to the generated resources.

Important: We use explicit name targeting to ensure we do not accidentally patch the Database Deployment (if running locally) or other infrastructure components.

Note: The names below assume your Helm Release Name is odm-lab. If you use odm-pilot, update the names accordingly (e.g., odm-pilot-odm-decisioncenter).

Download kustomization.yaml

resources:
  - odm-raw.yaml

patches:
  # Generic Security Patch (Group IDs + Token)
  # Targets ALL deployments via regex name
  - path: security-patch.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: "odm-lab-odm-.*" # "odm-pilot-odm-.*"

  # Specific patches (One per file)
  - path: security-patch-dc.yaml
  - path: security-patch-runner.yaml
  - path: security-patch-console.yaml
  - path: security-patch-runtime.yaml

  # Test Job Patch
  - path: job-security-patch.yaml
    target:
      group: batch
      version: v1
      kind: Job
      name: odm-lab-odm-test # "odm-pilot-odm-test"

  # Force Pull Policy (JSON Patch)
  # Applies to all ODM deployments
  - path: force-pull-policy.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: "odm-lab-odm-.*" # "odm-pilot-odm-.*"
Tip: Lab Workaround for Non-AppArmor Hosts (RHEL/CentOS/Rocky)

If you are running this deployment in a lab environment based on RHEL, CentOS, or Rocky Linux, your kernel likely uses SELinux instead of AppArmor.

Including the container.apparmor.security.beta.kubernetes.io annotations (which are mandatory for the restricted EKS environment) will cause your lab pods to hang in a Blocked state with the error: Cannot enforce AppArmor: AppArmor is not enabled on the host.

Action: Run the following command to strip these annotations from all YAML files in your current directory before applying the configuration:

sed -i '/container.apparmor.security.beta.kubernetes.io/d' *.yaml
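To preview which files the command will touch before editing them in place:

grep -l "container.apparmor.security.beta.kubernetes.io" *.yaml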

2.4 Deploying the Workload

Run the build pipeline to generate the patched manifests and apply them to the cluster.

# 1. Render Helm Template
# (Change values-prod.yaml to values-lab.yaml if needed)
helm template odm-lab ibm-helm/ibm-odm-prod \
  --version 25.1.0 \
  --kube-version 1.28.0 \
  -f values-prod.yaml > odm-raw.yaml

# 2. Apply Patches & Deploy
kubectl -n odm-pilot apply -k .
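Once applied, you can watch the rollout converge. A sketch assuming the odm-lab release name used above (substitute your release name and namespace as needed):

# Watch the pods come up
kubectl -n odm-pilot get pods -w

# Or block until a specific deployment is ready
kubectl -n odm-pilot rollout status deployment/odm-lab-odm-decisioncenter --timeout=300s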

Alternative: Deploying via Jenkins. If you deploy via Jenkins instead, skip the manual instructions above and follow these steps:

  1. Install Config File Provider Plugin
    1. Go to Manage Jenkins → Plugins → Available Plugins
    2. Search for Config File Provider Plugin and install it
    3. Once it is installed, restart Jenkins
  2. Add the values.yaml file into Jenkins
    1. Go to Manage Jenkins → Managed Files → Add new Config
    2. Select Custom file and set the ID to odm-values-lab
    3. Select Next
    4. Set the Name to odm-values-lab
    5. Paste the following into Content then press Submit

Download jenkins-values.yaml

license: true
usersPassword: "__ODM_ADMIN_PASS__"

image:
  pullSecrets:
    - icr-secret
  pullPolicy: Always

decisionCenter:
  tagOrDigest: "__DC_DIGEST__"

decisionRunner:
  tagOrDigest: "__DR_DIGEST__"

decisionServerConsole:
  tagOrDigest: "__DSC_DIGEST__"

decisionServerRuntime:
  tagOrDigest: "__DSR_DIGEST__"

externalDatabase:
  type: "postgresql"
  serverName: "odm-postgres.__NAMESPACE__.svc.cluster.local"
  databaseName: "odm_db"
  port: "5432"
  secretCredentials: "odm-db-secret"

customization:
  runAsUser: 1001
  seccompProfile:
    type: RuntimeDefault
  labels:
    applicationid: "ODM"

service:
  type: ClusterIP
  enableRoute: false
  hostname: "__ODM_HOSTNAME__"

  ingress:
    enabled: true
    host: "__ODM_HOSTNAME__"
    tlsSecretRef: "__ODM_TLS_SECRET__"
    tlsHosts:
      - "__ODM_HOSTNAME__"
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  3. Paste Code into Jenkins Pipeline
    1. Paste the following code into the Jenkins pipeline:

Download jenkins-lab.groovy

pipeline {
    agent any

    environment {
        KUBECTL = '/var/tmp/kubectl'
        NAMESPACE = 'odm-jenkins'
        HELM_RELEASE = 'odm-lab'
        HELM = '/var/tmp/helm'
        YOUR_EMAIL = 'user@example.com'
    }

    stages {

        stage('Check Cluster') {
            steps {
                script {
                    sh "chmod u+x ${KUBECTL}"
                    withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                        sh "${KUBECTL} get nodes"
                    }
                }
            }
        }

        stage('Create Namespace') {
            steps {
                script {
                    withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                        sh """
                        ${KUBECTL} get namespace ${NAMESPACE} >/dev/null 2>&1 || ${KUBECTL} create namespace ${NAMESPACE}
                        """
                    }
                }
            }
        }

        stage('Deploy Postgres (Lab)') {
            steps {
                script {
                    withCredentials([usernamePassword(
                        credentialsId: 'odm-db-credentials',
                        usernameVariable: 'DB_USER',
                        passwordVariable: 'DB_PASS'
                    )]) {
                        withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                            sh """
echo "Creating Postgres secret..."

${KUBECTL} -n ${NAMESPACE} create secret generic postgres-secret \
  --from-literal=POSTGRES_DB=odmdb \
  --from-literal=POSTGRES_USER=${DB_USER} \
  --from-literal=POSTGRES_PASSWORD=${DB_PASS} \
  --dry-run=client -o yaml | ${KUBECTL} apply -f -

echo "Deploying Postgres securely..."

cat <<EOF | ${KUBECTL} -n ${NAMESPACE} apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odm-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: odm-postgres
  template:
    metadata:
      labels:
        app: odm-postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        envFrom:
        - secretRef:
            name: postgres-secret
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: odm-postgres
spec:
  selector:
    app: odm-postgres
  ports:
  - port: 5432
    targetPort: 5432
EOF

${KUBECTL} -n ${NAMESPACE} rollout status deployment/odm-postgres --timeout=180s
"""
                        }
                    }
                }
            }
        }
        stage('Create ICR Pull Secret') {
          steps {
            script {
              withCredentials([
                string(credentialsId: 'icr-entitlement-key', variable: 'ICR_KEY')
              ]) {
                withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                  sh '''
        echo "Creating docker-registry secret for cp.icr.io..."
        
        NAMESPACE="odm-jenkins"
        SECRET_NAME="icr-secret"
        
        ${KUBECTL} -n ${NAMESPACE} create secret docker-registry ${SECRET_NAME} \
          --docker-server=cp.icr.io \
          --docker-username=cp \
          --docker-password=${ICR_KEY} \
          --docker-email=${YOUR_EMAIL} \
          --dry-run=client -o yaml | ${KUBECTL} apply -f -
        '''
                }
              }
            }
          }
        }
        stage('Setup Helm') {
            steps {
                sh '''
                set -e
        
                echo "Checking if Helm is already installed..."
        
                if [ ! -f "${HELM}" ]; then
                    echo "Helm not found. Installing Helm..."
        
                    curl -fsSL https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz -o helm.tar.gz
                    tar -xzf helm.tar.gz
                    mv linux-amd64/helm ${HELM}
                    chmod +x ${HELM}
        
                    echo "Helm installed successfully."
                else
                    echo "Helm already installed. Skipping download."
                fi
        
                echo "Helm version:"
                ${HELM} version
        
                echo "Adding/Updating IBM Helm repo..."
                ${HELM} repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm || true
                ${HELM} repo update
                '''
            }
        }
        stage('Prepare Values + Deploy ODM') {
            steps {
                script {
                    withCredentials([
                        string(credentialsId: 'odm-admin-pass', variable: 'ODM_ADMIN_PASS'),
                        string(credentialsId: 'dc-digest', variable: 'DC_DIGEST'),
                        string(credentialsId: 'dr-digest', variable: 'DR_DIGEST'),
                        string(credentialsId: 'dsc-digest', variable: 'DSC_DIGEST'),
                        string(credentialsId: 'dsr-digest', variable: 'DSR_DIGEST')
                    ]) {
                        configFileProvider([configFile(fileId: 'odm-values-lab', variable: 'TEMPLATE_PATH')]) {
                            withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                                sh '''
                                    set -e
        
                                    export NAMESPACE=odm-jenkins
                                    export ODM_HOSTNAME=odm.my-haproxy.gym.lan
                                    export ODM_TLS_SECRET=odm-tls-secret
        
                                    echo "Generating values file with secrets..."
        
                                    sed \
                                      -e "s|__ODM_ADMIN_PASS__|$ODM_ADMIN_PASS|g" \
                                      -e "s|__DC_DIGEST__|$DC_DIGEST|g" \
                                      -e "s|__DR_DIGEST__|$DR_DIGEST|g" \
                                      -e "s|__DSC_DIGEST__|$DSC_DIGEST|g" \
                                      -e "s|__DSR_DIGEST__|$DSR_DIGEST|g" \
                                      -e "s|__NAMESPACE__|$NAMESPACE|g" \
                                      -e "s|__ODM_HOSTNAME__|$ODM_HOSTNAME|g" \
                                      -e "s|__ODM_TLS_SECRET__|$ODM_TLS_SECRET|g" \
                                      "$TEMPLATE_PATH" > "$WORKSPACE/odm-values-lab.yaml"
                                      
                                    echo "Checking rendered values file..."
                                    grep __DC_DIGEST__ $WORKSPACE/odm-values-lab.yaml || echo "DC digest replaced"
                                    grep __ODM_ADMIN_PASS__ $WORKSPACE/odm-values-lab.yaml || echo "Admin pass replaced"

        
                                    echo "Values file created:"
                                    ls -l "$WORKSPACE/odm-values-lab.yaml"
        
                                    echo "Deploying ODM via Helm..."
        
                                    /var/tmp/helm upgrade --install odm-lab ibm-helm/ibm-odm-prod \
                                        --version 25.1.0 \
                                        --namespace ${NAMESPACE} \
                                        -f "$WORKSPACE/odm-values-lab.yaml"
                                '''
                            }
                        }
                    }
                }
            }
        }



    }

    post {
        success {
            echo "FULL ODM stack deployed successfully."
        }
        failure {
            echo "ODM deployment failed."
        }
    }
}
  4. Add credentials
  1. Go to Manage Jenkins → Credentials → Global → Add Credentials
  2. Add the ICR/IBM entitlement key
    • Kind: Secret Text
    • Scope: Global
    • Secret: <your IBM entitlement key>
    • ID: icr-entitlement-key
  3. Add the DB (postgres) login credentials
    • Kind: Username/Password
    • Scope: Global
    • Username: odm
    • Password: odm123
    • ID: odm-db-credentials
  4. Add the odm login credentials for helm deployment
    • Kind: Secret Text
    • Scope: Global
    • Secret: odmAdminPassword123!
    • ID: odm-admin-pass
  5. Add the Decision Center Digest credentials
    • Kind: Secret Text
    • Scope: Global
    • Secret: sha256:6a0eb1f874ba52918bcd8e2c3acde2d3e428685cad7e5996e0c1227e88d3de0b
    • ID: dc-digest
  6. Add the Decision Runner Digest credentials
    • Kind: Secret Text
    • Scope: Global
    • Secret: sha256:6f0643013e18d848199a73f38c5f6f854c1226ae7702c8294b835b74aa561782
    • ID: dr-digest
  7. Add the Decision Server Console Digest credentials
    • Kind: Secret Text
    • Scope: Global
    • Secret: sha256:f4c778a388535330ce5d5612d6325d5522cedb70f0cb7895fa7f015a38e5bb9c
    • ID: dsc-digest
  8. Add the Decision Server Runtime Digest credentials
    • Kind: Secret Text
    • Scope: Global
    • Secret: sha256:ab03e4e35923c674a090456f6869963a6d29e8f94117061ff11d383cc8c9369a
    • ID: dsr-digest
  5. Run the Jenkins Pipeline
Tip: Verification

After deployment, verify that the ALB has successfully registered the targets.

kubectl get ingress -n odm-pilot
# Look for the ADDRESS field (e.g., k8s-odmpilot-xxxx.us-east-1.elb.amazonaws.com)

Navigate to https://odm.internal.corp/decisioncenter (substituting the hostname you configured). If the page loads securely, end-to-end encryption is functioning correctly.
Log in as odmAdmin with the password you set in the values file (e.g., odmAdminPassword123!).
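A shell-level check complements the browser test; a quick sketch (the -k flag is only needed when the certificate is self-signed, as in the lab):

# Inspect the response headers over HTTPS (adjust the hostname)
curl -kI https://odm.internal.corp/decisioncenter

# Confirm plain HTTP is not served when allow-http is false
curl -I http://odm.internal.corp/decisioncenter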