Oracle SOA Suite on Kubernetes
Author: Michel Schildmeijer | Category: Oracle

In previous posts, I explained how to transform a traditional application server, such as WebLogic, into a containerized platform based on Docker containers managed by a Kubernetes cluster. So far, however, I had not looked at how a large and complex environment such as Oracle SOA Suite could fit into a container-based strategy. The short answer: you more or less lift and shift the current platform to run as Kubernetes-managed containers.
There are multiple ways to run a product such as Oracle SOA Suite in Kubernetes. Here's how I did it.
Oracle SOA Suite on OKE
You can use the standard PaaS service Oracle provides: the SOA Cloud service. However, my implementation is based on IaaS, on the Oracle Cloud Infrastructure, where I configured the Kubernetes Cluster as described in my previous posts. This can also be done on an on-premises infrastructure.
Ingredients
The following parts are needed to set up a SOA Suite domain based on version 12.2.1.4:
- Docker Engine
- Kubernetes base setup
- Oracle Database Image
- Oracle SOA Suite docker image (either self-built or from the Oracle Container Registry)
- GitHub
- A lot of patience
Set up the SOA Suite repository
A SOA Suite installation requires a repository database, Oracle or another supported flavor, to dehydrate SOA instance data and to store metadata from composite deployments. I used a separate namespace to set up a database in Kubernetes.
The database I created uses the image container-registry.oracle.com/database/enterprise:12.1.0.2. I reused the database YAML from before, but I had to add an ephemeral storage limit, because after the first deployment Kubernetes reported exhausted ephemeral storage. I solved it as follows:
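The exact ephemeral-storage fix is not shown in the original post; the sketch below shows the kind of resource request/limit that addresses this type of eviction (container name and sizes are illustrative assumptions):

```yaml
# Fragment of the database deployment spec (illustrative values).
# Requesting and limiting ephemeral storage prevents the pod from being
# evicted when node-local scratch space fills up.
spec:
  containers:
    - name: database
      image: container-registry.oracle.com/database/enterprise:12.1.0.2
      resources:
        requests:
          ephemeral-storage: "4Gi"
        limits:
          ephemeral-storage: "8Gi"
```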

- Create a namespace for the database
- Create a secret to be able to pull images from the container registry
```shell
kubectl create secret docker-registry ct-registry-pull-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username=********* \
  --docker-password=********* \
  --docker-email=mschildmeijer@qualogy.com
```
- Apply the database deployment to Kubernetes. You can log in to the pod to watch the progress of the database creation:
```shell
kubectl get pods -n database-namespace
NAME                        READY   STATUS    RESTARTS   AGE
database-7b45749f44-kjr97   1/1     Running   0          6d

kubectl exec -ti database-7b45749f44-kjr97 -n database-namespace -- /bin/bash
```
Or use:
```shell
kubectl logs database-7b45749f44-kjr97 -n database-namespace
```

So far so good. The only thing left to do is create a service to expose the database:
```shell
kubectl expose deployment database --type=LoadBalancer --name=database-svc -n database-namespace
```

Repository Creation with RCU
To create the repository, running a temporary pod from the SOA Suite image and executing RCU inside it was sufficient:
```shell
kubectl run rcu --generator=run-pod/v1 \
  --image container-registry.oracle.com/middleware/soasuite:12.2.1.4 \
  --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "regsecret"}] } }' \
  -- sleep infinity
```
And run RCU from it:

```shell
kubectl exec -ti rcu -- /bin/bash

/u01/oracle/oracle_common/bin/rcu \
  -silent \
  -createRepository \
  -databaseType ORACLE \
  -connectString 130.61.65.56:1521/OraPdb.my.domain.com \
  -dbUser sys \
  -dbRole sysdba \
  -useSamePasswordForAllSchemaUsers true \
  -selectDependentsForComponents true \
  -schemaPrefix FMW1 \
  -component SOAINFRA \
  -component UCSUMS \
  -component ESS \
  -component MDS \
  -component IAU \
  -component IAU_APPEND \
  -component IAU_VIEWER \
  -component OPSS \
  -component WLS \
  -component STB
```
Nevertheless, this is not completely silent, as you still have to type in the passwords manually.
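To make the run fully non-interactive, RCU in silent mode can read the passwords from standard input instead; a sketch with placeholder passwords and the same connect string as above:

```shell
# RCU -silent reads the SYS password and the (shared) schema password
# from stdin, one per line. The passwords here are placeholders.
printf '%s\n' 'SysPassword1' 'SchemaPassword1' | \
  /u01/oracle/oracle_common/bin/rcu -silent -createRepository \
    -databaseType ORACLE \
    -connectString 130.61.65.56:1521/OraPdb.my.domain.com \
    -dbUser sys -dbRole sysdba \
    -useSamePasswordForAllSchemaUsers true \
    -selectDependentsForComponents true \
    -schemaPrefix FMW1 -component SOAINFRA -component MDS
```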
In newer versions, there is a create-rcu-schema.sh script, located in the Oracle GitHub repository, which makes it easier to create your repository. Its main flags:
- -s = schema name
- -t = domain type
- -p = rcu secret

```shell
./create-rcu-schema.sh -s soa1 -t soaosb \
  -d oracle-db.wls-fr.svc.cluster.local:1521/devpdb.k8s \
  -p fmwoci -n wls-soa -q **** -r ***** -o .
```

Creation of the SOA domain
I used the WebLogic Kubernetes Operator Git repository to create my SOA domain and changed it to fit my needs.
General steps to take:
- Install the WebLogic Kubernetes Operator
- Create persistent volumes and claims (PV/PVC)
- Create a domain namespace
- Create secrets:
  * RCU secrets
  * WebLogic domain secrets
- Use the scripts and tools provided by Oracle
- Roll out the domain
Install the WebLogic Operator
I used Helm to do this. In the GitHub repository, there are charts available at kubernetes/charts/weblogic-operator. In the chart's values .yaml, specify which namespaces need to be managed.
The SOA domain needs to be managed:
```yaml
domainNamespaces:
  - "default"
  - "domain-namespace-soa"
```
Use the latest operator image:

```yaml
# image specifies the docker image containing the operator code.
image: "oracle/weblogic-kubernetes-operator:3.0.1"
```
And install:
```shell
helm install kubernetes/charts/weblogic-operator \
  --name weblogic-operator \
  --namespace weblogic-operator-namespace \
  --set "javaLoggingLevel=FINE" --wait
```
Persistent volumes and claims (PV/PVC)
When running a WebLogic domain in Kubernetes pods, two different models can be chosen:
- Domain on image: all artifacts are stored in the container
- Domain on a persistent volume: domain artifacts are stored statefully, outside the container
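I chose the persistent-volume model. Conceptually, the PV/PVC pair generated by the sample scripts boils down to a hostPath-backed pair along these lines (a simplified sketch; names and storage class are assumptions derived from the inputs used later in this post):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: soa-domain1-soasuite-pv
spec:
  storageClassName: soa-domain1-soasuite-storage-class
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /u01/soapv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: soa-domain1-soasuite-pvc
  namespace: domain-namespace-soa
spec:
  storageClassName: soa-domain1-soasuite-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```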
In the Git repository, there are some ready-to-go scripts for creating PVs:
```
kubernetes/samples/scripts/create-weblogic-domain-pv-pvc/
  create-pv-pvc-inputs.yaml
  create-pv-pvc.sh
  pvc-template.yaml
  pv-template.yaml
```
Now, provide your own specifics in the input file, such as:
```yaml
# The version of this inputs file. Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1

# The base name of the pv and pvc
baseName: soasuite

# Unique ID identifying a domain.
# If left empty, the generated pv can be shared by multiple domains.
# This ID must not contain an underscore ("_"), and must be lowercase and unique
# across all domains in a Kubernetes cluster.
domainUID: soa-domain1

# Name of the namespace for the persistent volume claim
namespace: domain-namespace-soa

# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'.
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH

# The server name or IP address of the NFS server to use for the persistent storage.
# The following line must be uncommented and customized if weblogicDomainStorageType is NFS:
#weblogicDomainStorageNFSServer: nfsServer

# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set to the
# path of the domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, weblogicDomainStorageNFSServer should be
# set to the IP address or name of the NFS server, and this value should be set to the
# exported path on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not
# affected by this setting; that is determined when you create your domain.
# The following line must be uncommented and customized:
weblogicDomainStoragePath: /u01/soapv

# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain

# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 20Gi
```
And run it:
```shell
./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o soapv -e
```
Here, -o specifies a local output directory for the generated YAML files, and -e tells the script to also create the Kubernetes objects from them.
Secrets
Create WebLogic domain access:
```shell
kubectl -n domain-namespace-soa \
  create secret generic domain1-soa-credentials \
  --from-literal=username=weblogic \
  --from-literal=password=*****
```
Create SOA repository access:
```shell
./create-rcu-credentials.sh -u fmw1_opss -p qualogy123 -a sys -q qualogy123 \
  -d soa-domain-3 -n domain-namespace-soa -s opss-secret
secret "opss-secret" created
secret "opss-secret" labeled
```
Do this for all the SOA Suite schemas.
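Repeating this by hand for every schema is tedious; a loop along these lines covers them all (schema suffixes and secret names follow the pattern used above but are my assumptions; the echo makes it a dry run):

```shell
# Dry run: print one create-rcu-credentials.sh invocation per SOA Suite schema.
# Remove the "echo" to actually create the secrets.
for schema in opss iau iau_viewer iau_append wls soainfra mds stb; do
  # ${schema//_/} strips underscores, so iau_viewer -> iauviewer-secret
  echo ./create-rcu-credentials.sh \
    -u "fmw1_${schema}" -p qualogy123 -a sys -q qualogy123 \
    -d soa-domain-3 -n domain-namespace-soa \
    -s "${schema//_/}-secret"
done
```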
Domain rollout
Because domain rollout is a complicated process, it is all wrapped in a pod that runs several jobs to configure a domain.
The only thing you have to do is fill in some inputs in a .yaml file; a shell script then creates a job that applies all your values.
A few important ones:
```yaml
# Port number for admin server
adminPort: 7001
# Name of the Admin Server
adminServerName: admin-server
# Unique ID identifying a domain.
# This ID must not contain an underscore ("_"), and must be lowercase and unique
# across all domains in a Kubernetes cluster.
domainUID: soa-domain-1   # no underscores!
# Home of the WebLogic domain
# If not specified, the value is derived from the domainUID as /shared/domains/<domainUID>
domainHome: /u01/domains/soa-domain-1
# Determines which WebLogic Servers the operator will start up
# Legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
serverStartPolicy: IF_NEEDED
# Cluster name
clusterName: soa-cluster-1
# Number of managed servers to generate for the domain
configuredManagedServerCount: 3
# Number of managed servers to initially start for the domain
initialManagedServerReplicas: 2
# Base string used to generate managed server names
managedServerNameBase: soa-ms
# Port number for each managed server
managedServerPort: 8001
# WebLogic Server Docker image.
# The operator requires WebLogic Server 12.2.1.3.0 with patch 29135930 applied.
# The existing WebLogic Docker image, `store/oracle/weblogic:12.2.1.3`, was updated
# on January 17, 2019, and has all the necessary patches applied; a `docker pull`
# is required if you already have this image.
# Refer to [WebLogic Docker images](../../../../../site/weblogic-docker-images.md)
# for details on how to obtain or create the image.
image: container-registry.oracle.com/middleware/soasuite:12.2.1.3
# Image pull policy
# Legal values are "IfNotPresent", "Always", or "Never"
imagePullPolicy: IfNotPresent
# Name of the Kubernetes secret to access the Docker Store to pull the WebLogic Server Docker image
# The presence of the secret will be validated when this parameter is enabled.
imagePullSecretName: ct-registry-pull-secret
# Boolean indicating if production mode is enabled for the domain
productionModeEnabled: true
# Name of the Kubernetes secret for the Admin Server's username and password
# The name must be lowercase.
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-credentials
weblogicCredentialsSecretName: domain1-soa-credentials
# Whether to include server .out in the pod's stdout.
# The default is true.
includeServerOutInPodLog: true
# The in-pod location for domain log, server logs, server out, and node manager log files
# If not specified, the value is derived from the domainUID as /shared/logs/<domainUID>
logHome: /u01/domains/logs/soa-domain-3
# Port for the T3Channel of the NetworkAccessPoint
t3ChannelPort: 30012
# Public address for T3Channel of the NetworkAccessPoint. This value should be set to the
# Kubernetes server address, which you can get by running "kubectl cluster-info". If this
# value is not set to that address, WLST will not be able to connect from outside the
# Kubernetes cluster.
# Name of the domain namespace
namespace: domain-namespace-soa
# Java Option for WebLogic Server
javaOptions: -Dweblogic.StdoutDebugEnabled=false
# Name of the persistent volume claim
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-sample-pvc
persistentVolumeClaimName: soa-domain1-soasuite-pvc
# Mount path of the domain persistent volume.
domainPVMountPath: /u01/domains
# Mount path where the create domain scripts are located inside a pod
#
# The `create-domain.sh` script creates a Kubernetes job to run the script (specified in the
# `createDomainScriptName` property) in a Kubernetes pod to create a WebLogic home. Files
# in the `createDomainFilesDir` directory are mounted to this location in the pod, so that
# a Kubernetes pod can use the scripts and supporting files to create a domain home.
createDomainScriptsMountPath: /u01/weblogic
#
# RCU configuration details
#
# The schema prefix to use in the database, for example `SOA1`. You may wish to make this
# the same as the domainUID in order to simplify matching domains to their RCU schemas.
rcuSchemaPrefix: FMW1

# The database URL
rcuDatabaseURL: 130.61.65.56:1521/ORAPDB.MY.DOMAIN.COM

# The Kubernetes secrets containing the database credentials
rcuCredentialsSecret: opss-secret
rcuCredentialsSecret: iau-secret
rcuCredentialsSecret: iauviewer-secret
rcuCredentialsSecret: iauappend-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: soainfra-secret
rcuCredentialsSecret: mds-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: wlsruntime-secret
rcuCredentialsSecret: stb-secret
```
Execute the script:
```shell
./create-domain.sh -i create-domain-inputs-soa.yaml -o wlssoa -e -v
```
You can follow the script using the logs:
```shell
kubectl logs -f soa-domain-2-create-fmw-infra-sample-domain-job-572r6 -n domain-namespace-soa
```
This is basically what it takes. Next time, I will dive deeper into how to manage a SOA Suite domain in Kubernetes.
Unfortunately, at first the internal domain configuration did not complete successfully. This could be the issue described in MOS Doc ID 2284797.1.
Update: the fix
As I expected, the issue above was caused by how I chose the password: it has to follow the structure described in the MOS document. So there was no actual Kubernetes issue.
Stopping and starting
Stopping and starting a WebLogic SOA domain managed by Kubernetes works a bit differently from the conventional way of doing it with scripts or via the node manager. Stopping and starting is handled by the WebLogic Kubernetes Operator, using kubectl patch commands with JSON patch syntax to update a policy on the so-called replicas. A replica in this case is a running pod with a Managed Server or the Admin Server.

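The original command is missing from the post; with operator 3.x it likely resembled the following patch of the Domain resource, which tells the operator to start the Admin Server (a sketch; the domain name is assumed from the inputs above):

```shell
# Set the domain-level start policy so the operator brings up the
# Admin Server pod (operator 3.x values: NEVER, IF_NEEDED, ADMIN_ONLY).
kubectl patch domain soa-domain-1 -n domain-namespace-soa \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "ADMIN_ONLY"}]'
```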
The above command tells the operator to bring up the Admin Server pod as one replica. After a while, you can access the console and log in as usual.
For more standardization, there are some scripts that make life easier:

Example of stopping an entire WebLogic domain:

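The example snippet is missing from the post; with operator 3.x, stopping an entire domain means setting the domain-level start policy to NEVER, either by patching the Domain resource directly or via the sample lifecycle scripts shipped in the operator repository (a sketch; the domain name is an assumption):

```shell
# Option 1: patch the Domain resource directly; the operator then
# shuts down all server pods of the domain.
kubectl patch domain soa-domain-1 -n domain-namespace-soa \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER"}]'

# Option 2: use the sample lifecycle script from the operator repo
# (kubernetes/samples/scripts/domain-lifecycle).
./stopDomain.sh -d soa-domain-1 -n domain-namespace-soa
```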