Installing the server in an OpenShift/Kubernetes cluster
You can install a containerized version of the UrbanCode™ Deploy server in an OpenShift/Kubernetes cluster. The UrbanCode Deploy server is installed using a Helm chart.
Prior to installing UrbanCode Deploy, the following prerequisites must be met.
- Kubernetes 1.9+, the kubectl and oc CLIs, and Helm/Tiller 2.9.1
- Install and set up the Helm CLI. Be sure to set HELM_VERSION=v2.9.1.
- Image and Helm chart - The UrbanCode Deploy server image and Helm chart can be accessed either through the Entitled Registry and public Helm repository, or by downloading a Passport Advantage archive (PPA) and loading the image and Helm chart into your own image registry and Helm repository.
- Entitled Registry
- The public Helm chart repository can be accessed at https://github.com/HCL/charts/tree/master/entitled. Directions for accessing the UrbanCode Deploy server chart are discussed in the Installing the Chart section below.
- Get a key to the Entitled Registry:
- Log in to MyHCL Container Software Library with the HCLid and password that are associated with the entitled software.
- In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
- An imagePullSecret must be created so that the cluster can authenticate and pull images from the Entitled Registry. After this secret is created, specify its name as the value of the image.secret parameter in the values.yaml file that you provide to 'helm install ...'. Note: Secrets are namespace scoped, so the secret must be created in every namespace in which you plan to install UrbanCode Deploy. The following example creates a Docker registry secret that accesses the Entitled Registry with an entitlement key:
oc create secret docker-registry entitledregistry-secret --docker-username=cp --docker-password=<EntitlementKey> --docker-server=cp.icr.io
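For reference, the secret created above would then be named in your values file. This is a minimal sketch; the image.secret parameter path is the one mentioned in the note above:

```yaml
image:
  # Name of the imagePullSecret created with 'oc create secret docker-registry'
  secret: entitledregistry-secret
```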
- Passport Advantage archive
- Download the PPA archive for your operating system from the HCL Passport Advantage website.
- Load the image and Helm chart into your cluster by using the cloudctl catalog load-archive command.
- Database - UrbanCode Deploy requires a database. The database may be running in your cluster or outside of it. This database must be configured as described in Installing the server database before you install this Helm chart. The database parameters used to connect to the database are required properties of this Helm chart. The Apache Derby database type is not supported when running the UrbanCode Deploy server in a Kubernetes cluster.
- Secret - A Kubernetes Secret object must be created to store the initial UCD administrator password and the password used to access the database mentioned above. These passwords are retrieved during Helm chart installation. The secret can be named 'HelmReleaseName-secrets', where 'HelmReleaseName' is the release name that you give when installing this Helm chart. Alternatively, create a secret with any name and pass that name as the value of the 'secret.name' Helm chart parameter.
- Through the oc/kubectl CLI, create a Secret object in the target namespace. First, generate the base64-encoded values for the initial UCD administrator password and the database password:
echo -n 'admin' | base64
YWRtaW4=
echo -n 'MyDbpassword' | base64
TXlEYnBhc3N3b3Jk
Create a file named secret.yaml with the following contents, using your Helm release name and the base64-encoded values.
apiVersion: v1
kind: Secret
metadata:
  name: MyRelease-secrets
type: Opaque
data:
  initpassword: YWRtaW4=
  dbpassword: TXlEYnBhc3N3b3Jk
Create the Secret using oc apply.
oc apply -f ./secret.yaml
Delete or shred the secret.yaml file.
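The secret-creation steps above can also be sketched as a small script. This is an illustration only, not part of the product tooling; the release name and passwords are the example values used above:

```python
import base64

def b64(value: str) -> str:
    """Base64-encode a UTF-8 string, as required by a Secret's data fields."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

def secret_yaml(release: str, admin_password: str, db_password: str) -> str:
    """Render a Kubernetes Secret manifest matching the example above."""
    return (
        "apiVersion: v1\n"
        "kind: Secret\n"
        "metadata:\n"
        f"  name: {release}-secrets\n"
        "type: Opaque\n"
        "data:\n"
        f"  initpassword: {b64(admin_password)}\n"
        f"  dbpassword: {b64(db_password)}\n"
    )

# Using the example values from the steps above.
print(secret_yaml("MyRelease", "admin", "MyDbpassword"))
```

The rendered manifest can be written to secret.yaml and applied with oc apply as described above; remember to delete or shred the file afterwards.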
- JDBC drivers - A PersistentVolume (PV) that contains the JDBC drivers required to connect to the database configured above must be created. You must do one of the following:
- Create a persistent storage volume - Create a PV, copy the JDBC drivers to the PV, and create a PersistentVolumeClaim (PVC) that is bound to the PV. For more information on Persistent Volumes and Persistent Volume Claims, see the Kubernetes documentation. Sample YAML to create the PV and PVC is provided below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ucd-ext-lib
  labels:
    volume: ucd-ext-lib-vol
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  nfs:
    server: 192.168.1.17
    path: /volume1/k8/ucd-ext-lib
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ucd-ext-lib-volc
spec:
  storageClassName: ""
  accessModes:
  - "ReadOnlyMany"
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      volume: ucd-ext-lib-vol
- Dynamic volume provisioning - If your cluster supports dynamic volume provisioning, you may use it to create the PV and PVC. However, the JDBC drivers still need to be copied to the PV. To copy the JDBC drivers to your PV during the chart installation process, first write a bash script that copies the JDBC drivers from a location accessible from your cluster to ${UCD_HOME}/ext_lib/. Next, store the script, named script.sh, in a YAML file describing a ConfigMap. Finally, create the ConfigMap in your cluster by running a command such as: oc create configmap <map-name> <data-source>
Below is an example ConfigMap YAML file that copies a MySQL .jar file from a web server by using wget.
kind: ConfigMap
apiVersion: v1
metadata:
  name: user-script
data:
  script.sh: |
    #!/bin/bash
    echo "Running script.sh..."
    if [ ! -f ${UCD_HOME}/ext_lib/mysql-jdbc.jar ] ; then
      echo "Copying file(s)..."
      wget http://hostname/ucd-extlib/mysql-jdbc.jar
      mv mysql-jdbc.jar ${UCD_HOME}/ext_lib/
      echo "Done copying."
    else
      echo "File ${UCD_HOME}/ext_lib/mysql-jdbc.jar already exists."
    fi
Note: The script must be named script.sh.
Additionally, you may manually create a PersistentVolume and PersistentVolumeClaim and use a script contained in a ConfigMap to copy the drivers into the PersistentVolume.
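As a concrete form of the generic oc create configmap command mentioned above, either of the following could create the example ConfigMap. The file name user-script.yaml is a placeholder for wherever you saved the YAML:

```shell
# From a YAML file that defines the ConfigMap:
oc create -f user-script.yaml

# Or build the ConfigMap directly from a local script.sh file:
oc create configmap user-script --from-file=script.sh
```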
Example setup scripts to create the Persistent Volume, Persistent Volume Claim, and ConfigMap are included in the Helm chart under the pak_extensions/pre-install/persistentStorageAdministration directory.
- A PersistentVolume that holds the appdata directory for the UrbanCode Deploy server is required. If your cluster supports dynamic volume provisioning, you do not need to create a PersistentVolume (PV) or PersistentVolumeClaim (PVC) before installing this chart. If your cluster does not support dynamic volume provisioning, either ensure that a PV is available or create one before installing this chart. You can optionally create the PVC to bind it to a specific PV, or you can let the chart create a PVC and bind to any available PV that meets the required size and storage class. Sample YAML to create the PV and PVC is provided below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ucd-appdata-vol
  labels:
    volume: ucd-appdata-vol
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.1.17
    path: /volume1/k8/ucd-appdata
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ucd-appdata-volc
spec:
  storageClassName: ""
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      volume: ucd-appdata-vol
Example setup scripts to create the Persistent Volume and Persistent Volume Claim are included in the Helm chart under pak_extensions/pre-install/persistentStorageAdministration directory.
PodSecurityPolicy Requirements
If you are running on OpenShift, skip this section and continue to the SecurityContextConstraints Requirements section below.
This chart requires a PodSecurityPolicy to be bound to the target namespace prior to installation. Choose either a predefined PodSecurityPolicy or have your cluster administrator create a custom PodSecurityPolicy for you.
The predefined PodSecurityPolicy named ibm-restricted-psp has been verified for this chart. If your target namespace is bound to this PodSecurityPolicy, you can proceed to install the chart.
This chart also defines a custom PodSecurityPolicy which can be used to finely control the permissions and capabilities needed to deploy this chart. You can enable this custom PodSecurityPolicy by using the Cluster Console user interface or the supplied instructions and scripts in the pak_extensions pre-install directory.
- From the user interface, you can copy and paste the following snippets to enable the custom PodSecurityPolicy.
- Custom PodSecurityPolicy definition:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    kubernetes.io/description: "This policy is based on the most restrictive policy, requiring pods to run with a non-root UID, and preventing pods from accessing the host."
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  name: ibm-ucd-prod-psp
spec:
  allowPrivilegeEscalation: false
  forbiddenSysctls:
  - '*'
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  hostNetwork: false
  hostPID: false
  hostIPC: false
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
- Custom ClusterRole for the custom PodSecurityPolicy:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ibm-ucd-prod-clusterrole
rules:
- apiGroups:
  - extensions
  resourceNames:
  - ibm-ucd-prod-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
- RoleBinding for all service accounts in the current namespace. Replace {{ NAMESPACE }} in the template with the actual namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ibm-ucd-prod-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-ucd-prod-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:{{ NAMESPACE }}
- From the command line, you can run the setup scripts included under pak_extensions.
As a cluster administrator, the pre-install scripts and instructions are located at:
- pre-install/clusterAdministration/createSecurityClusterPrereqs.sh
As a team admin/operator, the namespace-scoped scripts and instructions are located at:
- pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
Red Hat OpenShift SecurityContextConstraints Requirements
The UCD server image runs as user 1001, which requires you to allow the anyuid SCC for the namespace in which you are running. You can configure this by using the following command:
$ oc adm policy add-scc-to-group anyuid system:serviceaccounts:${YOUR_NAMESPACE}
This chart requires a SecurityContextConstraints resource to be bound to the target namespace prior to installation. To meet this requirement, there may be cluster-scoped as well as namespace-scoped pre and post actions that need to occur.
The predefined SecurityContextConstraints resource named ibm-restricted-scc has been verified for this chart. If your target namespace is bound to this SecurityContextConstraints resource, you can proceed to install the chart.
This chart also defines a custom SecurityContextConstraints resource, which can be used to finely control the permissions and capabilities needed to deploy this chart. You can enable this custom SecurityContextConstraints resource by using the supplied instructions or scripts in the pak_extensions/pre-install directory.
- From the user interface, you can copy and paste the following snippet to enable the custom SecurityContextConstraints.
- Custom SecurityContextConstraints definition:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations: {}
  name: ibm-ucd-prod-scc
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities: []
allowedFlexVolumes: []
defaultAddCapabilities: []
defaultPrivilegeEscalation: false
forbiddenSysctls:
- "*"
fsGroup:
  type: MustRunAs
  ranges:
  - max: 65535
    min: 1
readOnlyRootFilesystem: false
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsNonRoot
seccompProfiles:
- docker/default
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: MustRunAs
  ranges:
  - max: 65535
    min: 1
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
priority: 0
- From the command line, you can run the setup scripts included under pak_extensions/pre-install.
As a cluster admin, the pre-install instructions are located at:
- pre-install/clusterAdministration/createSecurityClusterPrereqs.sh
As a team admin, the namespace-scoped instructions are located at:
- pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
Resources Required
- 4GB of RAM, plus 4MB of RAM for each agent
- 2 CPU cores, plus 2 cores for each 500 agents
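The sizing guidance above can be expressed as a quick calculation. This is a sketch of the stated rule of thumb only, not an official sizing tool; it assumes "2 cores for each 500 agents" rounds up to the next block of 500:

```python
import math

def required_resources(agents: int) -> tuple[float, int]:
    """Estimate server requirements from the guidance above:
    4 GB of RAM plus 4 MB per agent, and 2 CPU cores plus
    2 cores for each 500 agents (rounded up)."""
    ram_gb = 4 + agents * 4 / 1024          # 4 MB per agent, expressed in GB
    cores = 2 + 2 * math.ceil(agents / 500)
    return ram_gb, cores

# For example, an environment with 1000 agents:
ram, cores = required_resources(1000)
print(f"{ram:.2f} GB RAM, {cores} cores")
```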
This task installs the containerized version of the UrbanCode Deploy server in an OpenShift/Kubernetes cluster.
- Installing the Chart
Add the Entitled Registry helm chart repository to the local client.
$ helm repo add entitled https://raw.githubusercontent.com/HCL/charts/master/repo/entitled/
Get a copy of the values.yaml file from the Helm chart so that you can update it with the values used by the installation.
$ helm inspect values entitled/ibm-ucd-prod > myvalues.yaml
Edit the file myvalues.yaml to specify the parameter values to use when installing the UCD server instance. The configuration section lists the parameter values that can be set.
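As an illustration only, the database portion of myvalues.yaml might look like the following. The parameter names shown here are hypothetical placeholders; confirm the actual names against the chart's configuration section before use:

```yaml
# Hypothetical parameter names; check the chart's configuration
# section for the real ones.
database:
  type: mysql
  hostname: db.example.com
  port: 3306
  name: ucd_db
```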
To install the chart into the namespace 'ucdtest' with the release name my-ucd-release, using the values from myvalues.yaml:
$ helm install --namespace ucdtest --name my-ucd-release --values myvalues.yaml entitled/ibm-ucd-prod --tls
Tip: List all releases by using helm list --tls. The --tls argument may not be required.
- Verifying the Chart
After the Helm installation completes, see the instructions (from NOTES.txt within the chart) for chart verification. The instructions can also be viewed by running the command: helm status my-ucd-release --tls.
- Upgrading the Chart
Check here for information about upgrading the chart.
- Uninstalling the Chart
To uninstall/delete the my-ucd-release release:
$ helm delete --purge my-ucd-release --tls
The command removes all the Kubernetes components associated with the chart and deletes the release.
Access your containerized instance of UrbanCode Deploy.
- Accessing the server in an OpenShift/Kubernetes cluster
This topic contains the instructions that are used to access and start the UrbanCode Deploy server in the Kubernetes cluster.
Parent topic: Installing the server in a Kubernetes cluster
Related information
Helm chart configuration parameters