Cloud Pak for Data Installation
Installing Multicloud Object Gateway
Links associated with this section
Multicloud Object Gateway is required for the following CP4D components:
- watsonx Assistant
- Watson Discovery
- Watson Speech services
Install the OpenShift Data Foundation operator
Open the console of your OCP instance. The console URL can be retrieved with the following command:
oc whoami --show-console
https://console-openshift-console.apps.wxai.cpdu8vscs.ibmworkshops.com
Open the URL returned in a browser window and log in as the kubeadmin user with the password from the generated auth directory.
As of this writing, the NooBaa backing store in ODF 4.14.x and higher uses extended file attributes to store file metadata, which is not supported on EFS. These instructions therefore install ODF 4.13. Future ODF releases may add support for EFS.
Procedure to install the OpenShift Data Foundation operator (adapted from the Red Hat documentation)
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
- Set the following options on the Install Operator page:
- Update Channel as stable-4.13.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
- If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
- If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps for ODF Operator
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify if the Data Foundation dashboard is available.
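If you prefer the command line, here is a quick sketch of an equivalent check (the exact CSV name varies by release):
oc get csv -n openshift-storage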
Creating a standalone Multicloud Object Gateway
You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation.
Prerequisites
- Ensure that the OpenShift Data Foundation Operator is installed.
Procedure
- In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
- Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation operator and then click Create StorageSystem.
- In the Backing storage page, select the following:
- Select Multicloud Object Gateway for Deployment type.
- Select the Use an existing StorageClass option.
- Click Next.
- In the Review and create page, review the configuration details:
- To modify any configuration settings, click Back.
- Click Create StorageSystem.
Verification steps
Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
Verifying the state of the pods
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list and verify that the pods are in the Running state.
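The same checks can be done from the command line. A minimal sketch, assuming the MCG-only deployment created the default NooBaa resource in openshift-storage:
# NooBaa (Multicloud Object Gateway) health
oc get noobaa -n openshift-storage
# All storage pods should be Running or Completed
oc get pods -n openshift-storage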
Installing the CP4D CLI
Download binary
The CP4D CLI can be downloaded from the following link:
https://github.com/IBM/cpd-cli/releases/download/v13.1.4/cpd-cli-linux-EE-13.1.4.tgz
This binary is for Linux. If you are running from a Windows machine, use the Linux binary from within Windows Subsystem for Linux.
As of this writing, the latest version of the CLI is 13.1.4.
wget https://github.com/IBM/cpd-cli/releases/download/v13.1.4/cpd-cli-linux-EE-13.1.4.tgz
Extract the files:
tar xzvf cpd-cli-linux-EE-13.1.4.tgz
This will create the following directory:
cpd-cli-linux-EE-13.1.4-109
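Optionally, add the extracted directory to your PATH so cpd-cli can be run from anywhere. A minimal sketch, assuming you extracted the archive in the current directory:
export PATH=$(pwd)/cpd-cli-linux-EE-13.1.4-109:$PATH
cpd-cli version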
Configure environment variables
The following creates a CPD environment variable file. The heredoc delimiter is quoted so that the variable references inside are written to the file literally rather than being expanded now:
cat <<'EOF' > cpd_vars_48.sh
#===============================================================================
# Cloud Pak for Data installation variables
#===============================================================================
# ------------------------------------------------------------------------------
# Client workstation
# ------------------------------------------------------------------------------
# Set the following variables if you want to override the default behavior of the Cloud Pak for Data CLI.
#
# To export these variables, you must uncomment each command in this section.
# export CPD_CLI_MANAGE_WORKSPACE=<enter a fully qualified directory>
# export OLM_UTILS_LAUNCH_ARGS=<enter launch arguments>
# ------------------------------------------------------------------------------
# Cluster
# ------------------------------------------------------------------------------
export OCP_URL=<enter your Red Hat OpenShift Container Platform URL>
export OPENSHIFT_TYPE=self-managed
export IMAGE_ARCH=amd64
export OCP_USERNAME=kubeadmin
export OCP_PASSWORD=<enter your password>
export SERVER_ARGUMENTS="--server=${OCP_URL}"
# export OCP_TOKEN=<enter your token>
# Uncomment one of the following to use the CPDM_OC_LOGIN and OC_LOGIN shortcuts below:
# export LOGIN_ARGUMENTS="--username=${OCP_USERNAME} --password=${OCP_PASSWORD}"
# export LOGIN_ARGUMENTS="--token=${OCP_TOKEN}"
export CPDM_OC_LOGIN="cpd-cli manage login-to-ocp ${SERVER_ARGUMENTS} ${LOGIN_ARGUMENTS}"
export OC_LOGIN="oc login ${OCP_URL} ${LOGIN_ARGUMENTS}"
# ------------------------------------------------------------------------------
# Projects
# ------------------------------------------------------------------------------
export PROJECT_CERT_MANAGER=ibm-cert-manager
export PROJECT_LICENSE_SERVICE=ibm-license-server
export PROJECT_SCHEDULING_SERVICE=ibm-scheduler
export PROJECT_CPD_INST_OPERATORS=cpd-operators
export PROJECT_CPD_INST_OPERANDS=cpd
# export PROJECT_IBM_EVENTS=<enter your IBM Events Operator project>
# export PROJECT_PRIVILEGED_MONITORING_SERVICE=<enter your privileged monitoring service project>
# export PROJECT_CPD_INSTANCE_TETHERED=<enter your tethered project>
# export PROJECT_CPD_INSTANCE_TETHERED_LIST=<a comma-separated list of tethered projects>
# ------------------------------------------------------------------------------
# Storage
# ------------------------------------------------------------------------------
export STG_CLASS_BLOCK=nfs-client
export STG_CLASS_FILE=nfs-client
# ------------------------------------------------------------------------------
# IBM Entitled Registry
# ------------------------------------------------------------------------------
export IBM_ENTITLEMENT_KEY=<enter your IBM entitlement API key>
# ------------------------------------------------------------------------------
# Private container registry
# ------------------------------------------------------------------------------
# Set the following variables if you mirror images to a private container registry.
#
# To export these variables, you must uncomment each command in this section.
# export PRIVATE_REGISTRY_LOCATION=<enter the location of your private container registry>
# export PRIVATE_REGISTRY_PUSH_USER=<enter the username of a user that can push to the registry>
# export PRIVATE_REGISTRY_PUSH_PASSWORD=<enter the password of the user that can push to the registry>
# export PRIVATE_REGISTRY_PULL_USER=<enter the username of a user that can pull from the registry>
# export PRIVATE_REGISTRY_PULL_PASSWORD=<enter the password of the user that can pull from the registry>
# ------------------------------------------------------------------------------
# Extra stuff
# ------------------------------------------------------------------------------
export NOOBAA_ACCOUNT_CREDENTIALS_SECRET=noobaa-admin
export NOOBAA_ACCOUNT_CERTIFICATE_SECRET=noobaa-s3-serving-cert
# ------------------------------------------------------------------------------
# Cloud Pak for Data version
# ------------------------------------------------------------------------------
export VERSION=4.8.4
# ------------------------------------------------------------------------------
# Components
# ------------------------------------------------------------------------------
#export COMPONENTS=ibm-cert-manager,ibm-licensing,scheduler,cpfs,cpd_platform,ws,wml,openpages,wkc,watson_assistant
export COMPONENTS=ibm-cert-manager,ibm-licensing,scheduler,cpfs,cpd_platform
# export COMPONENTS_TO_SKIP=<component-ID-1>,<component-ID-2>
# ------------------------------------------------------------------------------
# watsonx Orchestrate
# ------------------------------------------------------------------------------
# export PROJECT_IBM_APP_CONNECT=<enter your IBM App Connect in containers project>
# export AC_CASE_VERSION=<version>
# export AC_CHANNEL_VERSION=<version>
EOF
You will need to open the resultant cpd_vars_48.sh file and update the following vars:
OCP_URL
OCP_PASSWORD
IBM_ENTITLEMENT_KEY
By default we set our storage classes to the nfs-client storage class. Your mileage may vary.
The OCP_URL can be pulled with this command:
oc cluster-info
Kubernetes control plane is running at https://api.wxai.cpdu8vscs.ibmworkshops.com:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The kubeadmin password can be found in the build directory for the cluster under auth/kubeadmin-password
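To display the password so you can paste it into cpd_vars_48.sh, a quick sketch (the build directory path is an assumption; substitute your cluster's build directory):
cat /path/to/cluster-build-dir/auth/kubeadmin-password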
Login to the cluster with cpd-cli
Source the env file
source cpd_vars_48.sh
Log in with cpd-cli
cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
Add the entitlement key
The key can be retrieved from here
cpd-cli manage add-icr-cred-to-global-pull-secret \
--entitled_registry_key=${IBM_ENTITLEMENT_KEY}
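To confirm the entitlement key landed in the global pull secret, a quick sketch that checks for the cp.icr.io registry entry:
oc get secret pull-secret -n openshift-config -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | grep -o 'cp.icr.io'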
Create required secrets for Services
If we're planning on installing the following services, we need to create the Multicloud Object Gateway secrets they require:
- watsonx Assistant
- Watson Discovery
The names of all NooBaa account credential secrets can be retrieved with the following command:
oc get secrets -n openshift-storage | grep noobaa
Running from our Linux bastion host, we need to export the following secret names:
export NOOBAA_ACCOUNT_CREDENTIALS_SECRET=noobaa-admin
export NOOBAA_ACCOUNT_CERTIFICATE_SECRET=noobaa-s3-serving-cert
Create the secrets that watsonx Assistant will use to connect to Multicloud Object Gateway:
cpd-cli manage setup-mcg \
--components=watson_assistant \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--noobaa_account_secret=${NOOBAA_ACCOUNT_CREDENTIALS_SECRET} \
--noobaa_cert_secret=${NOOBAA_ACCOUNT_CERTIFICATE_SECRET}
Verify the secrets were created
oc get secrets --namespace=${PROJECT_CPD_INST_OPERANDS} \
noobaa-account-watson-assistant \
noobaa-cert-watson-assistant \
noobaa-uri-watson-assistant
This should return:
NAME TYPE DATA AGE
noobaa-account-watson-assistant Opaque 2 33s
noobaa-cert-watson-assistant Opaque 1 32s
noobaa-uri-watson-assistant Opaque 3 29s
Wash, rinse, repeat for Watson Discovery
cpd-cli manage setup-mcg \
--components=watson_discovery \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--noobaa_account_secret=${NOOBAA_ACCOUNT_CREDENTIALS_SECRET}
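As with watsonx Assistant, you can verify that the Discovery secrets were created. A sketch; the exact secret names generated for Discovery may differ from this pattern:
oc get secrets --namespace=${PROJECT_CPD_INST_OPERANDS} | grep discovery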
Installing CP4D
Important links:
Red Hat OpenShift Serverless Knative Eventing is a requirement for watsonx Assistant: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.8.x?topic=software-installing-red-hat-openshift-serverless-knative-eventing
Preparing the cluster nodes
We need to patch the cluster to re-enable the default OperatorHub catalog sources, since the initial cluster creation with our scripts assumed a private registry and disabled them.
oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": false}]'
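To confirm the patch took effect and the default catalog sources are back, a quick sketch:
oc get operatorhub cluster -o jsonpath='{.spec.disableAllDefaultSources}{"\n"}'
oc get catalogsource -n openshift-marketplace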
This process is derived from the following link:
https://www.ibm.com/docs/en/cloud-paks/cp-data/4.8.x?topic=settings-changing-process-ids-limit
To ensure that some services can run correctly, you might need to increase the process IDs limit setting on the OpenShift® Container Platform.
Log in to your bastion node, log in to the cluster, and run the following:
oc apply -f - << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpd-pidslimit-kubeletconfig
spec:
  kubeletConfig:
    podPidsLimit: 16384
  machineConfigPoolSelector:
    matchExpressions:
    - key: pools.operator.machineconfiguration.openshift.io/worker
      operator: Exists
EOF
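Applying a KubeletConfig triggers a rolling update of the worker nodes. A sketch to confirm the object exists and check the worker pool rollout:
oc get kubeletconfig cpd-pidslimit-kubeletconfig
oc get mcp worker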
Authorizing instances
This step creates all required projects, creates the NamespaceScope operator in the operators project, and binds the applied role to the service account of the NamespaceScope operator.
cpd-cli manage authorize-instance-topology \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
This was successful
Installation of shared components
These two tasks come from here: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.8.x?topic=cluster-installing-shared-components
If you are unable to access GitHub, add the following flags to the next two commands, as well as to any apply-olm command below:
--case_download=true \
--from_oci=true
cpd-cli manage apply-cluster-components \
--release=${VERSION} \
--license_acceptance=true \
--cert_manager_ns=${PROJECT_CERT_MANAGER} \
--licensing_ns=${PROJECT_LICENSE_SERVICE}
Install the scheduler
cpd-cli manage apply-scheduler \
--release=${VERSION} \
--license_acceptance=true \
--scheduler_ns=${PROJECT_SCHEDULING_SERVICE}
Setup instance topology
cpd-cli manage setup-instance-topology \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--license_acceptance=true \
--block_storage_class=${STG_CLASS_BLOCK}
Installation of primary CPD components
This installs the operators in the operators project for the instance.
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=${COMPONENTS}
Installing Red Hat OpenShift Serverless Knative Eventing
Red Hat OpenShift Serverless Knative Eventing and IBM Events are required for watsonx Assistant
Authorize the projects where the software will be installed to communicate
cpd-cli manage authorize-instance-topology \
--release=${VERSION} \
--cpd_operator_ns=ibm-knative-events \
--cpd_instance_ns=knative-eventing
Install the IBM Events Operator in the ibm-knative-events project
cpd-cli manage setup-instance-topology \
--release=${VERSION} \
--cpd_operator_ns=ibm-knative-events \
--cpd_instance_ns=knative-eventing \
--license_acceptance=true
Finally, install OpenShift Serverless Knative Eventing:
cpd-cli manage deploy-knative-eventing \
--release=${VERSION} \
--block_storage_class=${STG_CLASS_BLOCK}
Installing the Cloud Pak for Data control plane and services
This installs the operands in the operands project for the instance:
cpd-cli manage apply-cr \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=${COMPONENTS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
This is a big one, so it might take a little while. When it completes, verify the operands are up:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
This should return a Completed status.
The routes should be created and ready at this point. You can retrieve the web URL for the client with the following command:
cpd-cli manage get-cpd-instance-details \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--get_admin_initial_credentials=true
Generate a cpd-cli Profile
Log in to the CP4D web UI with the credentials retrieved from get-cpd-instance-details and go to your Profile and settings page in the Cloud Pak for Data client.
In the upper right-hand corner, click API key -> Generate new key.
Copy the generated key.
Collect the web client URL and export it with the following command
export CPD_PROFILE_URL=$(oc get route cpd --namespace=${PROJECT_CPD_INST_OPERANDS} | tail -1 | awk '{print "https://"$2}')
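If you prefer, the same host can be pulled with a jsonpath query instead of tail/awk (an equivalent sketch):
export CPD_PROFILE_URL=https://$(oc get route cpd --namespace=${PROJECT_CPD_INST_OPERANDS} -o jsonpath='{.spec.host}')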
We'll set our profile-name to the cluster name.
Set the following vars:
export API_KEY=<key you copied above>
export CPD_ADMIN_USER=cpadmin
export LOCAL_USER=<local user name>
export CPD_PROFILE_NAME=wxai
Create a local user configuration to store your username and API key by using the config users set command.
cpd-cli config users set ${LOCAL_USER} \
--username ${CPD_ADMIN_USER} \
--apikey ${API_KEY}
Create a profile to store the Cloud Pak for Data URL and to associate the profile with your local user configuration by using the config profiles set command.
cpd-cli config profiles set ${CPD_PROFILE_NAME} \
--user ${LOCAL_USER} \
--url ${CPD_PROFILE_URL}
You can now run cpd-cli commands with this profile as shown in the following example.
cpd-cli service-instance list \
--profile=${CPD_PROFILE_NAME}
Installing our Cartridges
Source the env file
source cpd_vars_48.sh
Login with cpd-cli
cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
Apply necessary Security Constraints
The apply-db2-kubelet command makes the following changes to the cluster nodes:
allowedUnsafeSysctls:
- "kernel.msg*"
- "kernel.shm*"
- "kernel.sem"
cpd-cli manage apply-db2-kubelet
This might take a bit as the workers will be getting bounced.
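A sketch to block until the worker pool finishes updating before moving on (the timeout value is arbitrary):
oc wait mcp/worker --for=condition=Updated --timeout=60m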
Watsonx Assistant
By default, the deployment size for WA is Production. If we want to go with something larger, we need to deploy with an options file.
install-options.yml
################################################################################
# watsonx Assistant parameters
################################################################################
watson_assistant_size: Production
watson_assistant_bigpv: false
watson_assistant_analytics_enabled: true
The default Production size is more than suited for our purposes in this case.
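If you did want to deploy with the options file, the CP4D convention is to place install-options.yml in the cpd-cli work directory and add a parameter-file flag to the apply-cr command below. The flag name and in-container path here are assumptions based on that convention; verify them against the docs for your release:
--param-file=/tmp/work/install-options.yml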
Apply the olm
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=watson_assistant
And run the install command
cpd-cli manage apply-cr \
--components=watson_assistant \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
Validate the installation
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=watson_assistant
Create an instance of WA
Set the INSTANCE_NAME environment variable to the unique name that you want to use as the display name for the service instance. We're just going to call this wa-instance.
export INSTANCE_NAME="wa-instance"
You should have followed the steps above to generate a profile. The example profile we created is called wxai.
export CPD_PROFILE_NAME="wxai"
Set the INSTANCE_VERSION env var to the version that corresponds to the version of CP4D. As of this writing and this guide, we are using 4.8.4. The Service instance version must match the release of CP4D.
export INSTANCE_VERSION=4.8.4
Create the assistant-instance.json payload file:
cat << EOF > ./assistant-instance.json
{
"addon_type": "assistant",
"display_name": "${INSTANCE_NAME}",
"namespace": "${PROJECT_CPD_INST_OPERANDS}",
"addon_version": "${INSTANCE_VERSION}",
"create_arguments": {
"deployment_id": "${PROJECT_CPD_INST_OPERANDS}-wa",
"parameters": {
"serviceId": "assistant",
"url": "https://wa-store.${PROJECT_CPD_INST_OPERANDS}.svc.cluster.local:443/csb/v2/service_instances",
"watson": true
}
}
}
EOF
Set the PAYLOAD_FILE environment variable to the fully qualified name of the JSON payload file
export PAYLOAD_FILE=/path/to/wherever/this/file/is/assistant-instance.json
Create the service instance from the payload file:
cpd-cli service-instance create \
--profile=${CPD_PROFILE_NAME} \
--from-source=${PAYLOAD_FILE}
Validating that the service instance was created
cpd-cli service-instance status ${INSTANCE_NAME} \
--profile=${CPD_PROFILE_NAME} \
--output=json
Watson Discovery
Apply the olm
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=watson_discovery
And then apply the CR
cpd-cli manage apply-cr \
--components=watson_discovery \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
Validate the installation
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=watson_discovery
Create an instance of WD
TBD
OpenPages
Run the following command to create the required OLM objects for OpenPages in the operators project for the instance:
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=openpages
Create the custom resource for OpenPages
cpd-cli manage apply-cr \
--components=openpages \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--license_acceptance=true
Validate the installation
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=openpages
Create a service instance
Watson Studio
Run the following command to create the required OLM objects for Watson Studio in the operators project for the instance:
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=ws
Create the custom resource
cpd-cli manage apply-cr \
--components=ws \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
Validate the installation
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=ws
Watson Machine Learning
Apply the olm
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=wml
Apply the CR
cpd-cli manage apply-cr \
--components=wml \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
Verify the installation
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=wml
IBM Knowledge Catalog
Run the following command to create the required OLM objects for IBM Knowledge Catalog in the operators project for the instance:
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=wkc
We're using default options for this installation, so kick off the following CR
cpd-cli manage apply-cr \
--components=wkc \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
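As with the other services, validate the installation with the same get-cr-status pattern used above:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=wkc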
Watsonx Orchestrate
IBM App Connect
IBM App Connect is a requirement for Watsonx Orchestrate
Make sure to set these vars in cpd_vars_48.sh.
We are using CP4D 4.8.4, so the following vars are valid:
export PROJECT_IBM_APP_CONNECT=appconnect
export AC_CASE_VERSION=11.3.0
export AC_CHANNEL_VERSION=v11.3
Now source the cpd_vars_48.sh file:
source ./cpd_vars_48.sh
Download the IBM App Connect case file
curl -sSLO https://github.com/IBM/cloud-pak/raw/master/repo/case/ibm-appconnect/${AC_CASE_VERSION}/ibm-appconnect-${AC_CASE_VERSION}.tgz
You should now have ibm-appconnect-11.3.0.tgz in your home directory.
Extract the file with:
tar xvf ibm-appconnect-${AC_CASE_VERSION}.tgz
Login to the cluster with cpd-cli
cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
Create the project for App Connect
oc new-project ${PROJECT_IBM_APP_CONNECT} --display-name 'IBM App Connect'
Create the catalog source for the App Connect operator
oc patch \
--filename=ibm-appconnect/inventory/ibmAppconnect/files/op-olm/catalog_source.yaml \
--type=merge \
-o yaml \
--patch="{\"metadata\":{\"namespace\":\"${PROJECT_IBM_APP_CONNECT}\"}}" \
--dry-run=client \
| oc apply -n ${PROJECT_IBM_APP_CONNECT} -f -
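Before moving on, you can confirm the catalog source was created (the source name, appconnect-operator-catalogsource, matches the subscription below):
oc get catalogsource -n ${PROJECT_IBM_APP_CONNECT}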
Create the operator group for the App Connect operator:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: appconnect-og
namespace: ${PROJECT_IBM_APP_CONNECT}
spec:
targetNamespaces:
- ${PROJECT_IBM_APP_CONNECT}
upgradeStrategy: Default
EOF
Now create the subscription for the AC operator
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ibm-appconnect-operator
namespace: ${PROJECT_IBM_APP_CONNECT}
spec:
channel: ${AC_CHANNEL_VERSION}
config:
env:
- name: ACECC_ENABLE_PUBLIC_API
value: "true"
installPlanApproval: Automatic
name: ibm-appconnect
source: appconnect-operator-catalogsource
sourceNamespace: ${PROJECT_IBM_APP_CONNECT}
EOF
Now wait for the operator to come online
oc wait csv \
--namespace=${PROJECT_IBM_APP_CONNECT} \
--selector=operators.coreos.com/ibm-appconnect.${PROJECT_IBM_APP_CONNECT}='' \
--for='jsonpath={.status.phase}'=Succeeded
Installing Watsonx Orchestrate
Once the App Connect operator is up and running, we can move on to installing watsonx Orchestrate.
Source the cpd_vars_48.sh file:
source ./cpd_vars_48.sh
Login to the cluster with cpd-cli
cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
Create an instance of App Connect for watsonx Orchestrate
cpd-cli manage setup-appconnect \
--appconnect_ns=${PROJECT_IBM_APP_CONNECT} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--release=${VERSION} \
--components=watsonx_orchestrate \
--file_storage_class=${STG_CLASS_FILE}
Set the CPD_USERNAME environment variable to your Cloud Pak for Data username:
This should be the cpd admin user you created when you ran the tasks here
export CPD_USERNAME=cpadmin
Create the required watsonx Assistant service instances for watsonx Orchestrate.
cpd-cli manage setup-wxo-assistant \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--create_assistants=true \
--user=${CPD_USERNAME} \
--auth_type=apikey
Enter the API key you generated when you created the cpd-cli profile.
Apply the olm
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
--components=watsonx_orchestrate
When that command returns, apply the Custom Resource
cpd-cli manage apply-cr \
--components=watsonx_orchestrate \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
Verify that the CR was applied successfully with this command:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=watsonx_orchestrate
As of version 4.8.4 of CP4D, deploying the CR for watsonx Orchestrate can sometimes get hung up on a MongoDB pod error. This can hold up the entire installation, as documented here.
If the deployment is hanging, run the following command to verify the issue:
oc exec -it -c mongodb-agent mongodb-wo-mongo-ops-manager-db-0 -- /opt/scripts/readinessprobe
This may return
panic: Get "[https://172.30.0.1:443/api/v1/namespaces/cpd-instance-1/pods/mongodb-wo-mongo-ops-manager-db-0](https://172.30.0.1/api/v1/namespaces/cpd-instance-1/pods/mongodb-wo-mongo-ops-manager-db-0)": dial tcp 172.30.0.1:443: i/o timeout
Resolve the error by running the following:
# Update opsmanager labels to allow access
oc patch opsmanager mongodb-wo-mongo-ops-manager --type merge --patch '{"spec":{"applicationDatabase":{"podSpec":{"podTemplate":{"metadata":{"labels":{"wo.watsonx.ibm.com/external-access":"true"}}}}}}}'
# Delete existing sts to force re-rollout:
oc delete sts mongodb-wo-mongo-ops-manager-db
Troubleshooting the watsonx Orchestrate installation
If the apply-cr command exhausts its retries without succeeding, there may be an issue with MongoDB.
This issue does not show a clear error in the pod or in the opsmanager CR. To confirm that the mongodb-wo-mongo-ops-manager-db pod error has occurred in the cluster, run the following command:
oc exec -it -c mongodb-agent mongodb-wo-mongo-ops-manager-db-0 -- /opt/scripts/readinessprobe
If the error has occurred in the cluster, the following is displayed:
panic: Get "https://172.30.0.1:443/api/v1/namespaces/cpd-instance-1/pods/mongodb-wo-mongo-ops-manager-db-0": dial tcp 172.30.0.1:443: i/o timeout
Cause
This error occurs in certain clusters when the MongoDB readinessprobe calls to the Kubernetes API are not supported by the network traffic.
Solution
To resolve this error, run the following commands:
# Update opsmanager labels to allow access
oc patch opsmanager mongodb-wo-mongo-ops-manager --type merge --patch '{"spec":{"applicationDatabase":{"podSpec":{"podTemplate":{"metadata":{"labels":{"wo.watsonx.ibm.com/external-access":"true"}}}}}}}'
# Delete existing sts to force re-rollout:
oc delete sts mongodb-wo-mongo-ops-manager-db
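After deleting the StatefulSet, the operator re-creates the pods. A sketch to confirm the pod comes back and the readiness probe now passes, assuming the MongoDB pods run in the instance project (re-using the probe command from above):
oc get pods -n ${PROJECT_CPD_INST_OPERANDS} | grep mongodb-wo-mongo-ops-manager-db
# switch to the instance project first if needed: oc project ${PROJECT_CPD_INST_OPERANDS}
oc exec -it -c mongodb-agent mongodb-wo-mongo-ops-manager-db-0 -- /opt/scripts/readinessprobe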