Deployment Readiness
Successful deployment of IBM ODM into a strictly governed EKS environment requires specific infrastructure pillars to be established prior to installation. This section outlines the necessary tooling, external services, and cluster configurations required to satisfy the Restricted OPA Gatekeeper policies.
Prerequisite Checklist
Before initiating the deployment pipeline, ensure the following prerequisites are met.
1. Workstation & Tooling
The automation bastion or engineer’s workstation requires connectivity to the target EKS cluster and the following CLI tools:
- Helm 3 (v3.10+ recommended) for chart management.
- Kubectl configured with the correct context.
- Kustomize (v4+ or built-in via `kubectl apply -k`) for manifest post-rendering.
- OpenShift CLI (`oc`) or Skopeo (optional): recommended for manual image mirroring if an automated enterprise pipeline is not available.

If Helm is not already installed, the official installer script can be used:

```shell
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

2. Database Strategy
The standard containerized database included with the product requires a privileged filesystem group (`fsGroup: 26`), which is incompatible with the target environment's OPA policies.
Please select ONE of the following compliant strategies for this deployment:
Option A: External Oracle Database (Production Recommended)
Utilize a managed database service or a self-managed Oracle database on EC2 to completely offload persistence management and security compliance from the Kubernetes cluster.
- Requirement: Oracle Database 19c or 23ai.
- Configuration: Ensure the database is accessible from the EKS Node Security Group (port 1521).
- Credentials: Obtain the Endpoint URL, Port, Oracle Service Name (preferred over SID), Username, and Password.
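Once the credentials are in hand, reachability from inside the cluster can be verified with a throwaway pod before the installation is attempted. The hostname, namespace, and image below are placeholders; substitute your actual database endpoint.

```shell
# Throwaway pod to confirm the Oracle listener is reachable from the
# cluster network. Hostname and namespace are illustrative.
kubectl run oracle-conn-test --rm -it --restart=Never \
  --namespace odm-pilot \
  --image=busybox:1.36 \
  -- nc -zv oracle-db.internal.corp 1521
```

If this times out, revisit the EKS Node Security Group rules for port 1521 before proceeding.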
Option B: Compliant Internal Container (Pilot/PoC Alternative)
For non-production pilot environments, you may provide your own approved PostgreSQL container image to run within the same namespace.
- Requirement: The container image must be security-hardened and configured to run as a non-root user (UID > 1001) to pass OPA checks.
- Configuration: You are responsible for defining the deployment and storage resources for this custom database container.
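As a sketch of what "compliant" means in practice, the fragment below shows pod- and container-level security settings that typically satisfy restricted Gatekeeper policies. The UID/GID (50001, chosen to satisfy the UID > 1001 rule) and the image path are assumptions to adapt to your hardened build.

```shell
# Write an illustrative securityContext fragment for the custom
# PostgreSQL deployment. UID/GID 50001 and the image reference are
# assumptions; substitute your hardened image and approved IDs.
cat <<'EOF' > custom-postgres-securitycontext.yaml
securityContext:              # pod-level
  runAsNonRoot: true
  runAsUser: 50001
  runAsGroup: 50001
  seccompProfile:
    type: RuntimeDefault
containers:
  - name: postgres
    image: artifactory.gym.lan:8443/docker-local/hardened-postgres:16
    securityContext:          # container-level
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
EOF
```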
3. Supply Chain (Image Mirroring)
The target environment prohibits direct access to public registries (cp.icr.io). All container images must be staged in the client’s internal trusted registry (e.g., Artifactory).
An IBM entitlement key is required, whether you mirror the images or install without mirroring. It can be obtained from the IBM Container Library (My IBM).
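If images will be pulled directly from the Entitled Registry rather than mirrored, the key is supplied to the cluster as a standard docker-registry pull secret. The username for the Entitled Registry is always `cp`; the secret and namespace names below are illustrative.

```shell
# Create an image pull secret from the entitlement key (exported
# here as IBM_ENTITLEMENT_KEY). Secret/namespace names are examples.
kubectl create secret docker-registry ibm-entitlement-key \
  --namespace odm-pilot \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password="${IBM_ENTITLEMENT_KEY}"
```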
Enterprise Pipeline Integration
If the organization utilizes a centralized image ingestion process, please configure the pipeline to pull from the IBM Entitled Registry using the source details below.
- Source Registry: `cp.icr.io`
- Source Namespace: `cp/cp4a/odm`
- Tag: `9.5.0.1`
Required Images:
| Component | Image Name |
|---|---|
| Decision Center | odm-decisioncenter |
| Decision Runner | odm-decisionrunner |
| Decision Server Console | odm-decisionserverconsole |
| Decision Server Runtime | odm-decisionserverruntime |
| Database Init Utility | dbserver |
In the absence of an automated ingestion pipeline, we recommend using the OpenShift CLI (oc) or Skopeo. These tools efficiently copy multi-architecture manifest lists between registries without requiring intermediate disk storage or Docker daemons.
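A minimal mirroring sketch with Skopeo is shown below. It assumes the entitlement key is exported as `IBM_ENTITLEMENT_KEY` and that images land under the same `docker-local` repository used for the chart; adjust the destination path to your registry layout.

```shell
# Mirror the five ODM 9.5.0.1 images. --all copies every architecture
# in each manifest list, with no local Docker daemon or disk staging.
SRC=cp.icr.io/cp/cp4a/odm
DST=artifactory.gym.lan:8443/docker-local
TAG=9.5.0.1
for img in odm-decisioncenter odm-decisionrunner \
           odm-decisionserverconsole odm-decisionserverruntime dbserver; do
  skopeo copy --all \
    --src-creds "cp:${IBM_ENTITLEMENT_KEY}" \
    "docker://${SRC}/${img}:${TAG}" \
    "docker://${DST}/${img}:${TAG}"
done
```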
Retrieve Chart and Upload to Artifactory
```shell
# 1. Add the IBM Helm repo
helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm

# 2. Update to ensure you have the latest chart versions
helm repo update

# 3. Verify the chart is available (target: 25.1.0 for ODM 9.5.0.1)
helm search repo ibm-odm-prod

# 4. Pull the chart
helm pull ibm-helm/ibm-odm-prod --version 25.1.0
```

Push Chart to Artifactory
```shell
# 1. Log in to Artifactory
helm registry login artifactory.gym.lan:8443 --insecure
# Enter the Artifactory username and password when prompted

# 2. Push the chart to Artifactory
helm push ibm-odm-prod-25.1.0.tgz oci://artifactory.gym.lan:8443/docker-local --insecure-skip-tls-verify
```

4. Cluster Configuration
The Kubernetes namespace must be prepared with the necessary secrets and networking definitions.
- Namespace: Create a dedicated namespace (e.g., `odm-pilot`).
- TLS Secret: A pre-provisioned Kubernetes Secret (type `kubernetes.io/tls`) containing the valid certificate and private key for the Ingress controller.
  - Requirement: The certificate Subject Alternative Name (SAN) must match the intended Ingress host (e.g., `odm.internal.corp`).
  - Note: In a production environment, this secret is typically provisioned by the organization's PKI automation (e.g., Cert-Manager or Venafi). For Lab/Pilot implementation steps, see the Implementation Methodology section.
- Storage Class (Conditional):
- If using Option B (Internal Container): A Storage Class must be identified to provision the Persistent Volume for the database.
- If using Option A (External Oracle, e.g., RDS for Oracle): No Storage Class is required for the database layer.
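For a lab or pilot run, the namespace and TLS prerequisites above can be satisfied manually. The certificate file names below are illustrative and assume the SAN matches `odm.internal.corp`; in production the secret would come from PKI automation instead.

```shell
# Create the dedicated namespace and the pre-provisioned TLS secret.
kubectl create namespace odm-pilot
kubectl create secret tls odm-ingress-tls \
  --namespace odm-pilot \
  --cert=odm.internal.corp.crt \
  --key=odm.internal.corp.key
```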
Ensure that the Namespace does not have any legacy PodSecurityPolicies attached that might conflict with the OPA Gatekeeper constraints. The solution relies entirely on the OPA constraints for security governance.
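A quick way to confirm this state, assuming Gatekeeper is already installed (it registers every constraint kind under the `constraints` category):

```shell
# PodSecurityPolicy was removed in Kubernetes 1.25; on a current EKS
# cluster this should report that the resource type does not exist.
kubectl get psp 2>/dev/null || echo "No PodSecurityPolicy API present (expected)"

# List the active Gatekeeper constraints governing the cluster.
kubectl get constraints
```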
If you are attempting to reproduce this environment in a local lab (e.g., k3s or Minikube) and need to manually install OPA Gatekeeper to simulate the constraint layer, please refer to the Lab Setup Guide: OPA Gatekeeper Configuration.