Phase 1: Foundation & Prerequisite Configuration
Before deploying the ODM application logic, the infrastructure foundation must be secured. This phase covers provisioning the database, generating internal TLS assets for end-to-end encryption, and creating the necessary Kubernetes secrets.
1.0 Create namespace
Create the dedicated namespace for the ODM deployment. It is recommended to set your current context to this namespace to simplify subsequent commands.
```bash
# 1. Create the namespace
kubectl create namespace odm-pilot

# 2. Set as current context (optional but recommended)
kubectl config set-context --current --namespace=odm-pilot
```

1.1 Database Provisioning
ODM requires a robust persistence layer. We utilize Oracle on EC2 (19c or 23ai).
- Provision Oracle: Ensure the instance is deployed in private subnets reachable by the EKS cluster.
- Configure Security Groups:
- Inbound Rule: Allow TCP/1521 from the EKS Cluster Security Group.
- Outbound Rule: Allow return traffic.
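The inbound rule can be scripted with the AWS CLI. This is a sketch with placeholder security-group IDs (sg-0oracleXXXXXXXXX and sg-0eksXXXXXXXXXXX are not real values); note that security groups are stateful, so return traffic on established connections is permitted automatically.

```bash
# Allow TCP/1521 from the EKS cluster security group to the Oracle instance
# (replace both --group-id and --source-group with your actual SG IDs)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0oracleXXXXXXXXX \
  --protocol tcp \
  --port 1521 \
  --source-group sg-0eksXXXXXXXXXXX
```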
Create PostgreSQL in a separate namespace to simulate a production-like setup with AWS RDS.
Create New Namespace (PostgreSQL)

Run the following from a node inside your cluster or the bastion (e.g. [clouduser@my-k3s-server-0 ~]):

```bash
kubectl create namespace postgres
```

Note: You can check namespaces with:

```bash
kubectl get ns
```

Create PostgreSQL Secret

```bash
kubectl -n postgres create secret generic postgres-secret \
  --from-literal=POSTGRES_DB=postgres \
  --from-literal=POSTGRES_USER=postgres \
  --from-literal=POSTGRES_PASSWORD='StrongPassword123!'
```

Create persistent volume storage
```bash
mkdir -p ~/k3s/postgres
cd ~/k3s/postgres
vi postgres-pvc.yaml
```

Paste into postgres-pvc.yaml:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Save the file and apply it.

```bash
kubectl apply -f postgres-pvc.yaml
```

Expected output:

```
persistentvolumeclaim/postgres-pvc created
```

Create the deployment yaml
```bash
vi postgres-deployment.yaml
```

Paste the following into the yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secret
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
```

Save the file and apply it.

```bash
kubectl apply -f postgres-deployment.yaml
```

Expected output:

```
deployment.apps/postgres created
```

Check that the Postgres pod is running
```bash
kubectl -n postgres get pods -w
```

Expect one of the following states:

```
postgres-xxxxx   Pending
postgres-xxxxx   ContainerCreating
postgres-xxxxx   Running
```

Eventually you should see Running.

Check that the volume is bound:

```bash
kubectl -n postgres get pvc
```

Note: if the pod gets stuck, run:

```bash
kubectl -n postgres describe pod postgres-xxxxx
```

Create the service file (once the pod is running):

```bash
vi postgres-service.yaml
```

Paste:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```

Apply the yaml:

```bash
kubectl apply -f postgres-service.yaml
```

Expected output:

```
service/postgres created
```

Verify that the service is running:

```bash
kubectl -n postgres get svc
```

You should now have postgres-pvc.yaml, postgres-deployment.yaml, and postgres-service.yaml under the ~/k3s/postgres directory.

Connect to the database to create the schema and user
```bash
kubectl exec -it -n postgres \
  $(kubectl get pods -n postgres -l app=postgres --field-selector=status.phase=Running -o jsonpath="{.items[0].metadata.name}") \
  -- psql -U postgres
```
The database host for the internal PostgreSQL deployment will be postgres.postgres.svc.cluster.local on port 5432. Use this for configuring the ODM application connection string in the subsequent steps.
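Before wiring ODM to this host, you can sanity-check DNS resolution and connectivity with a throwaway client pod (the pod name pg-check and the postgres:16 image are illustrative choices):

```bash
kubectl run pg-check --rm -it --restart=Never --image=postgres:16 -- \
  pg_isready -h postgres.postgres.svc.cluster.local -p 5432
# A healthy service reports "... - accepting connections"
```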
Set up Environment Variables & Check Cluster in Jenkins Pipeline
Set KUBECTL, NAMESPACE, HELM_RELEASE, and HELM.
```groovy
environment {
    KUBECTL      = '/var/tmp/kubectl'
    NAMESPACE    = 'odm-jenkins'
    HELM_RELEASE = 'odm-lab'
    HELM         = '/var/tmp/helm'
}
```

Check Cluster Connection

```groovy
stages {
    stage('Check Cluster') {
        steps {
            script {
                sh "chmod u+x ${KUBECTL}"
                withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                    sh "${KUBECTL} get nodes"
                }
            }
        }
    }
}
```

Create New Namespace (PostgreSQL)
Add Stage to Jenkins file under Check Cluster Stage
```groovy
stage('Create Namespace') {
    steps {
        script {
            withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                sh """
                    ${KUBECTL} get namespace ${NAMESPACE} >/dev/null 2>&1 || \
                    ${KUBECTL} create namespace ${NAMESPACE}
                """
            }
        }
    }
}
```

Create Postgres ODM Secret in Jenkins
Inside your Jenkins Deployment, go to Manage Jenkins → Credentials → Global → Add Credentials
Kind: Username with password
Scope: Global
Username: admin
Password: admin123
ID: odm-db-credentials

Select Create.
Deploy Postgres (Lab) Stage

Add this stage to the Jenkins file under the Create Namespace stage.
```groovy
stage('Deploy Postgres (Lab)') {
    steps {
        script {
            withCredentials([usernamePassword(
                credentialsId: 'odm-db-credentials',
                usernameVariable: 'DB_USER',
                passwordVariable: 'DB_PASS'
            )]) {
                withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                    sh """
                        echo "Creating Postgres secret..."
                        ${KUBECTL} -n ${NAMESPACE} create secret generic postgres-secret \
                            --from-literal=POSTGRES_DB=odmdb \
                            --from-literal=POSTGRES_USER=${DB_USER} \
                            --from-literal=POSTGRES_PASSWORD=${DB_PASS} \
                            --dry-run=client -o yaml | ${KUBECTL} apply -f -

                        echo "Deploying Postgres securely..."
                        cat <<EOF | ${KUBECTL} -n ${NAMESPACE} apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odm-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: odm-postgres
  template:
    metadata:
      labels:
        app: odm-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          envFrom:
            - secretRef:
                name: postgres-secret
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: odm-postgres
spec:
  selector:
    app: odm-postgres
  ports:
    - port: 5432
      targetPort: 5432
EOF

                        ${KUBECTL} -n ${NAMESPACE} rollout status deployment/odm-postgres --timeout=180s
                    """
                }
            }
        }
    }
}
```
Here are instructions to deploy Oracle Database 23ai Free into a dedicated oracle namespace in a k3s lab.
- Create the Deployment Manifest: create a file named oracle-lab.yaml.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: oracle
  labels:
    # Optional: label to help with OPA exclusion if needed later
    name: oracle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-data
  namespace: oracle
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracle-db
  namespace: oracle
  labels:
    app: oracle-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracle-db
  template:
    metadata:
      labels:
        app: oracle-db
    spec:
      # Oracle 23ai runs as UID 54321 by default
      securityContext:
        fsGroup: 54321
        runAsUser: 54321
        runAsGroup: 54321
      containers:
        - name: oracle
          image: container-registry.oracle.com/database/free:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1521
          env:
            # The password for SYS, SYSTEM, and PDBADMIN
            - name: ORACLE_PWD
              value: "StrongPassword123"
          volumeMounts:
            - name: oracle-data
              mountPath: /opt/oracle/oradata
          resources:
            requests:
              memory: "2Gi"
              cpu: "1000m"
            limits:
              memory: "4Gi"
              cpu: "2000m"
          # Health check to ensure the DB is open before ODM tries to connect
          startupProbe:
            exec:
              command: ["/opt/oracle/checkDBStatus.sh"]
            initialDelaySeconds: 30
            periodSeconds: 15
            failureThreshold: 40  # Wait up to 10 mins for first boot
      volumes:
        - name: oracle-data
          persistentVolumeClaim:
            claimName: oracle-data
---
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  namespace: oracle
spec:
  selector:
    app: oracle-db
  ports:
    - protocol: TCP
      port: 1521
      targetPort: 1521
```

- Deploy to K3s
Run the apply command:
```bash
kubectl apply -f oracle-lab.yaml
```

Monitor the Startup: Oracle takes a while (5-10 minutes) to initialize the database files on the first run. Watch the logs until you see DATABASE IS READY TO USE!.

```bash
kubectl logs -f -n oracle -l app=oracle-db
```

The database connection URL for the internal Oracle deployment will be jdbc:oracle:thin:@//oracle-db.oracle.svc.cluster.local:1521/freepdb1.
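Instead of tailing logs, you can also block until the startup probe reports the pod Ready (the 900s timeout is an arbitrary allowance for the slow first boot):

```bash
kubectl wait pod -n oracle -l app=oracle-db \
  --for=condition=Ready --timeout=900s
```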
1.2 Schema & User Setup
The ODM data source requires specific privileges to initialize the schema on first startup. Connect to your database (the RDS/EC2 instance via a bastion host, or the in-cluster lab pod) and execute the following SQL commands:
1. (Lab ONLY) Log into the Container via SQL*Plus: We connect as SYS (Superuser) to the default Pluggable Database (FREEPDB1).
```bash
# Get the pod name
export ORACLE_POD=$(kubectl get pod -n oracle -l app=oracle-db -o jsonpath="{.items[0].metadata.name}")

# Exec into SQL*Plus
# Note: Connection string is host:port/ServiceName
kubectl exec -it -n oracle $ORACLE_POD -- sqlplus sys/StrongPassword123@//localhost:1521/FREEPDB1 as sysdba
```

2. Run the Setup SQL: Paste this block directly into the SQL> prompt:
Replace ODM_USER and StrongPassword123 with the username and password of your choice.
```sql
-- 1. Create User/Schema
CREATE USER ODM_USER IDENTIFIED BY "StrongPassword123" DEFAULT TABLESPACE USERS QUOTA UNLIMITED ON USERS;

-- 2. Grant Schema Privileges
GRANT CREATE SESSION TO ODM_USER;
GRANT CREATE TABLE TO ODM_USER;
GRANT CREATE VIEW TO ODM_USER;
GRANT CREATE SEQUENCE TO ODM_USER;
GRANT CREATE TRIGGER TO ODM_USER;

-- 3. Grant XA Recovery Privileges (Required for ODM/Liberty)
GRANT SELECT ON sys.dba_pending_transactions TO ODM_USER;
GRANT SELECT ON sys.pending_trans$ TO ODM_USER;
GRANT SELECT ON sys.dba_2pc_pending TO ODM_USER;
GRANT EXECUTE ON sys.dbms_xa TO ODM_USER;

-- 4. Verify
SELECT username FROM dba_users WHERE username = 'ODM_USER';

-- 5. Exit
exit;
```

For the PostgreSQL lab database, connect with psql instead and run the following:

```sql
-- 1. Create the dedicated ODM user
CREATE USER odm WITH PASSWORD 'StrongPassword123!';

-- 2. Create the database
CREATE DATABASE odm_db OWNER odm;

-- 3. Grant privileges (Required for table creation)
GRANT ALL PRIVILEGES ON DATABASE odm_db TO odm;

-- 4. (Optional) If using a specific schema
\c odm_db
CREATE SCHEMA odm_rules AUTHORIZATION odm;
```

1.3 Internal TLS Certificate Generation
To satisfy the OPA policy requiring HTTPS traffic at the cluster boundary, the Kubernetes Ingress resource must be configured with a valid TLS secret, which allows the Ingress Controller to terminate HTTPS at that boundary.
For this document, we will generate a self-signed certificate using OpenSSL.
Generate the Certificate and Key (PEM):
The Common Name (/CN) in the certificate must match the exact Fully Qualified Domain Name (FQDN) that users will type into their browser. Choose the pattern that fits your deployment phase:

- In a Pilot (Dummy Hostname): To bypass corporate DNS/Route53 ticket queues for a quick PoC, use a dummy hostname (e.g., /CN=odm.local.test). You will later route traffic to the AWS ALB by updating your local machine's hosts file.
- In Production (AWS ALB with DNS): Use the Route53 CNAME or Alias record created for the application (e.g., /CN=odm.internal.corp).
  - Important: Do not use the raw AWS ALB hostname (e.g., *.elb.amazonaws.com) as the CN. Browser security policies will reject the certificate if it identifies the load balancer hardware rather than the application service name.
- In a Local Lab: Use the pattern odm.<proxy> (e.g., /CN=odm.my-haproxy.gym.lan). This ensures the browser accepts the certificate when traffic is routed through your lab's local load balancer.
If you choose the Pilot (Dummy Hostname) route, anyone who needs to access the ODM web interface must map the dummy hostname to the ALB’s IP addresses in their local /etc/hosts (Mac/Linux) or C:\Windows\System32\drivers\etc\hosts (Windows) file.
Modifying this file requires local Administrator or root privileges. If the operations team or stakeholders testing the Pilot do not have admin rights on their corporate workstations, the dummy hostname workaround will fail for them. In that scenario, you must provision a real Route53/Corporate DNS record.
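For illustration, the resulting hosts entry might look like the following, where 203.0.113.10 stands in for one of the ALB's resolved IP addresses (look them up with nslookup against the ALB DNS name):

```
# Added to /etc/hosts (Mac/Linux) or C:\Windows\System32\drivers\etc\hosts (Windows)
203.0.113.10   odm.local.test
```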
```bash
# 1. Generate a self-signed certificate and private key (Pilot Example)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout odm-lab.key \
  -out odm-lab.crt \
  -subj "/CN=odm.local.test/O=Pilot/C=US"

# 2. Verify the content (files exist, and the CN/validity match expectations)
ls -l odm-lab.key odm-lab.crt
openssl x509 -in odm-lab.crt -noout -subject -dates
```

1.4 Creating Kubernetes Secrets
With the database credentials defined and the keystore generated, inject them into the cluster as Kubernetes Secrets.
Database Credentials Secret:
The ODM application requires a Kubernetes Secret to authenticate with the database. Choose the option matching your deployment strategy.
Target: Pilot environment using external Oracle.
Create a secret containing the credentials for your external Oracle instance.
```bash
kubectl create secret generic odm-db-secret \
  --namespace odm-pilot \
  --from-literal=db-user='ODM_USER' \
  --from-literal=db-password='StrongPassword123' \
  --from-literal=db-name=freepdb1 \
  --from-literal=db-server=oracle.cxxxxx.us-east-1.rds.amazonaws.com
# for the lab Oracle deployment, use `oracle-db.oracle.svc.cluster.local` as db-server
```

Note: Ensure the secret name (odm-db-secret) matches the secretCredentials field in your values-prod.yaml.
Target: Sandbox / Local Lab using the internal containerized database.
Create a simple secret for the internal PostgreSQL container.
```bash
kubectl create secret generic odm-db-secret \
  --namespace odm-pilot \
  --from-literal=db-user=odm \
  --from-literal=db-password='StrongPassword123!'
```

Note: Ensure the secret name (odm-db-secret) matches the secretCredentials field in your values-lab.yaml.
Target: Sandbox / Local Lab using the “external” postgres database.
Create a simple secret for the PostgreSQL database running in the postgres namespace.
```bash
kubectl create secret generic odm-db-secret \
  --namespace odm-pilot \
  --from-literal=db-user='odm' \
  --from-literal=db-password='StrongPassword123!' \
  --from-literal=db-server='postgres.postgres.svc.cluster.local'
```

Note: Ensure the secret name (odm-db-secret) matches the secretCredentials field in your values-lab.yaml.
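Whichever variant you created, you can confirm the keys landed in the secret without echoing the password into your shell history; a quick sketch:

```bash
# List the data keys stored in the secret (values are base64-encoded)
kubectl get secret odm-db-secret -n odm-pilot -o jsonpath='{.data}'

# Decode a single non-sensitive key to spot-check
kubectl get secret odm-db-secret -n odm-pilot \
  -o jsonpath='{.data.db-user}' | base64 -d
```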
Ingress TLS Secret (Non-AWS / Lab Environments Only): If you are deploying in a local lab or using an in-cluster Ingress controller like NGINX, this secret will be referenced by the Ingress resource (tlsSecretRef) to enable HTTPS.
If you are deploying to AWS EKS and using the AWS Application Load Balancer (ALB), do not create this Kubernetes secret. AWS ALBs terminate TLS using certificates stored securely in AWS Certificate Manager (ACM), not Kubernetes secrets.
Instructions for importing your certificate into ACM and passing the Certificate ARN to the Helm chart are provided in the Helm Configuration section later in this guide.
```bash
# Create a standard Kubernetes TLS secret (For NGINX / Local Labs)
kubectl create secret tls odm-tls-secret \
  --namespace odm-pilot \
  --key odm-lab.key \
  --cert odm-lab.crt
```

In a production environment, avoid creating secrets from literals in the CLI history. Use an External Secrets Operator (ESO) to sync these values from AWS Secrets Manager or HashiCorp Vault.
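As a sketch of the ESO approach, an ExternalSecret resource can materialize odm-db-secret from AWS Secrets Manager. The store name (aws-secrets-manager), the remote key (prod/odm/db), and its property names are assumptions for illustration; adapt them to your ESO setup.

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: odm-db-secret
  namespace: odm-pilot
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager   # assumed ClusterSecretStore name
    kind: ClusterSecretStore
  target:
    name: odm-db-secret         # the Secret ESO creates and maintains
  data:
    - secretKey: db-user
      remoteRef:
        key: prod/odm/db        # assumed Secrets Manager entry
        property: username
    - secretKey: db-password
      remoteRef:
        key: prod/odm/db
        property: password
```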
1.5 Create Image Pull Secret
Kubernetes requires authentication credentials to pull container images. Depending on your environment constraints (Lab vs. Restricted Production), the source registry and credentials will differ.
Target Environment: Pilot / Air-Gapped / OPA-Enforced
In strict environments where public internet access is blocked or OPA forbids public registries, you must pull from the internal location where you mirrored the images (e.g., Artifactory).
Action: Create a secret using your internal registry credentials.
```bash
# Replace with your internal registry details
kubectl create secret docker-registry internal-registry-secret \
  --docker-server=artifactory.internal.corp:8443 \
  --docker-username=<SERVICE_ACCOUNT_USER> \
  --docker-password=<SERVICE_ACCOUNT_TOKEN> \
  --docker-email=admin@internal.corp \
  -n odm-pilot
```

Target Environment: Sandbox / POC with Internet Access
If you are working in a lab with direct internet access and no strict OPA registry constraints, you can pull directly from IBM.
Action: Create a secret using your IBM Entitlement Key.
```bash
# 1. Get your key from myibm.ibm.com/products-services/containerlibrary
# 2. Create the secret
kubectl create secret docker-registry internal-registry-secret \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<YOUR_IBM_ENTITLEMENT_KEY> \
  --docker-email=user@example.com \
  -n odm-pilot
```

Inside your Jenkins Deployment, navigate to:
Manage Jenkins → Credentials → Global → Add Credentials
Fill in the fields below:
| Field | Value |
|---|---|
| Kind | Secret Text |
| Scope | Global |
| Secret | YOUR_IBM_ENTITLEMENT_KEY |
| ID | icr-entitlement-key |
Click Create.
Add the following stage after the Deploy Postgres (Lab) stage
```groovy
stage('Create ICR Pull Secret') {
    steps {
        script {
            withCredentials([
                string(credentialsId: 'icr-entitlement-key', variable: 'ICR_KEY')
            ]) {
                withKubeConfig(credentialsId: 'k3s-kubeconfig') {
                    sh '''
                        echo "Creating docker-registry secret for cp.icr.io..."
                        # NAMESPACE is inherited from the pipeline environment block
                        SECRET_NAME="icr-secret"
                        ${KUBECTL} -n ${NAMESPACE} create secret docker-registry ${SECRET_NAME} \
                            --docker-server=cp.icr.io \
                            --docker-username=cp \
                            --docker-password=${ICR_KEY} \
                            --docker-email=dummy@example.com \
                            --dry-run=client -o yaml | ${KUBECTL} apply -f -
                    '''
                }
            }
        }
    }
}
```

Whichever option you choose, ensure the secret name used in the kubectl create command exactly matches the value in your values.yaml file under image.pullSecrets.
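As a sketch, the corresponding values.yaml fragment might look like this; the exact key path depends on your chart version, so verify it against the chart's values schema:

```yaml
image:
  pullSecrets:
    - internal-registry-secret   # must exactly match the secret name created above
```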