Upgrading Self-Managed Shipa

❗️

Make sure to replace [...] in the commands that follow with your admin email and password. These should be the current values for your cluster. Additionally, you may need to adjust the target namespace if you did not install Shipa into the shipa-system namespace.

❗️

Make sure you check for version specific upgrade instructions before attempting the general upgrade process!

General upgrade process

In most cases, upgrading your self-managed Shipa cluster is as simple as running the following:

# Update your list of available Shipa versions
helm repo update shipa-charts

# Show the latest Shipa chart version
helm search repo shipa-charts/shipa

# Show the installed version within your cluster
helm list -n shipa-system -l name=shipa -o yaml | grep chart

export ADMIN_EMAIL=[...]

export ADMIN_PASSWORD=[...]

# Upgrade the install, passing any additional configuration as needed, including license
helm upgrade shipa shipa-charts/shipa \
  --set auth.adminUser=$ADMIN_EMAIL --set auth.adminPassword=$ADMIN_PASSWORD \
  --namespace shipa-system --timeout=1000s --wait

Once the upgrade completes, it will usually take some time for the new shipa-api pod to become ready. Once all pods are back in a running state and replaced pods have finished terminating, run the following to update the resources managed by the Shipa API (note: make sure you provide every framework attached to shipa-cluster; you can check which frameworks are attached to a cluster with the shipa cluster list command):

shipa cluster update shipa-cluster -k shipa-framework -k ...
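If you are unsure which frameworks are attached, one pattern is to list them first and build one -k flag per framework; a sketch in which the framework names are illustrative placeholders:

```shell
# Show clusters and their attached frameworks
shipa cluster list

# Build one -k flag per attached framework (names below are illustrative)
FRAMEWORKS="shipa-framework my-second-framework"
K_ARGS=""
for fw in ${FRAMEWORKS}; do
  K_ARGS="${K_ARGS} -k ${fw}"
done
shipa cluster update shipa-cluster ${K_ARGS}
```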

After running this, you will likely see some of the resources in the shipa namespace restart. Once all the new pods are ready and terminated pods have been removed, you are ready to use your new version of Shipa.
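While waiting, you can watch the rollout settle; a minimal sketch, assuming Shipa's managed resources run in the shipa namespace and using an arbitrary 5-minute timeout:

```shell
# Namespace where Shipa-managed resources run (adjust as needed)
SHIPA_NAMESPACE="shipa"

# Inspect the pods as they restart
kubectl get pods -n "${SHIPA_NAMESPACE}"

# Or block until every pod reports Ready (timeout is an arbitrary choice)
kubectl wait --for=condition=ready --timeout=5m pod --all -n "${SHIPA_NAMESPACE}"
```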

Troubleshooting a failed upgrade

In addition to any error messages from Helm, it is possible that there will be messages in the logs of any pods used for Helm hooks. To view these logs you can run something like the following:

kubectl logs --tail 1000 --namespace shipa-system -l 'shipa-hook'

This is especially useful if Helm gives the following message:

Error: UPGRADE FAILED: pre-upgrade hooks failed: job failed: BackoffLimitExceeded

Note that the "hook" pods are not automatically deleted if the Helm upgrade fails, but you can safely remove them as follows:

kubectl delete po -n shipa-system -l shipa-hook

Upgrading to 1.7.0+ from 1.6.3 or Prior

📘

External MongoDB Recommended

It is highly recommended that you use an external MongoDB instance for Shipa. The Shipa installed MongoDB instance is for demonstration purposes.

If you are currently running a Shipa chart version below 1.7.0 (e.g., 1.6.3), want to upgrade to version 1.7.0 or later, and use the Shipa-installed (internal) MongoDB instance, there are additional steps you will need to take regarding MongoDB. If you used the default configuration for MongoDB, i.e. a StatefulSet with one replica, Helm may be able to upgrade your MongoDB instance automatically. It is recommended that you first take a backup of your database using mongodump or similar. To attempt the automatic upgrade, provide the name of the persistent volume claim currently used by MongoDB:

export ADMIN_EMAIL=[...]

export ADMIN_PASSWORD=[...]

# Get PVC name for current MongoDB installation 
EXISTING_PVC="$(kubectl get pvc -n shipa-system -l app=mongodb-replicaset \
  -o jsonpath='{.items[].metadata.name}')"

helm upgrade shipa shipa-charts/shipa \
  --set auth.adminUser=$ADMIN_EMAIL --set auth.adminPassword=$ADMIN_PASSWORD \
  --set mongodb.persistence.existingClaim=${EXISTING_PVC} \
  --namespace shipa-system --timeout=1000s --wait
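For the backup recommended above (taken before running the upgrade), a sketch that mirrors the manual migration steps later in this page, assuming the default chart's app=mongodb-replicaset label and mongodb-replicaset container name; the backup path is an arbitrary choice:

```shell
# Locate the internal MongoDB pod via the default chart label
MONGO_POD="$(kubectl get po -n shipa-system -l app=mongodb-replicaset \
  -o jsonpath='{.items[0].metadata.name}')"
BACKUP_FILE="/tmp/pre-upgrade-backup.gzip"

# Dump the shipa database inside the pod, then copy the archive out
kubectl exec -n shipa-system "${MONGO_POD}" -c mongodb-replicaset -- \
  mongodump -d shipa --gzip --archive="${BACKUP_FILE}"
kubectl cp -n shipa-system "${MONGO_POD}:${BACKUP_FILE}" "${BACKUP_FILE}" -c mongodb-replicaset
```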

❗️

Note that you must always use the same value for mongodb.persistence.existingClaim for all future upgrades as well in order to avoid losing data, unless specifically told otherwise.

If the Helm upgrade fails due to an inability to upgrade the MongoDB instance, you will need to take measures to manually migrate your data to a new MongoDB instance. Double check the output of any "hook" pods using the method described in "Troubleshooting a failed upgrade" above.

Alternatively, you can continue using the deprecated MongoDB version, but keep in mind that support for it will be removed in a future version of Shipa. To use the deprecated version, set tags.legacyMongoReplicaset=true and tags.defaultDB=false.
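These flags are ordinary Helm overrides, so they can be added to the usual upgrade command; for example (credentials exported as in the general upgrade process):

```shell
# Keep the deprecated internal MongoDB (support will be removed in a future release)
MONGO_FLAGS="--set tags.legacyMongoReplicaset=true --set tags.defaultDB=false"

helm upgrade shipa shipa-charts/shipa \
  --set auth.adminUser=$ADMIN_EMAIL --set auth.adminPassword=$ADMIN_PASSWORD \
  ${MONGO_FLAGS} \
  --namespace shipa-system --timeout=1000s --wait
```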

📘

Upgrading Versions 1.6.1 and Below

In Shipa 1.6.2 and onwards, the Shipa API moved from HTTP/HTTPS ports 8080/8081 to the more standard ports 80/443.

Steps

  • Update the Shipa CLI to CLI Version 1.7.0+
  • Upgrade to Shipa 1.6.2+
  • Re-add Shipa Targets for your Control Plane
  • Upgrade Connected Clusters
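Those steps map to commands roughly as follows; a sketch in which the target name and address are illustrative placeholders and your usual Helm overrides are elided:

```shell
# 1. Update the Shipa CLI
INSTALL_URL="https://storage.googleapis.com/shipa-client/install.sh"
curl -s "${INSTALL_URL}" | bash

# 2. Upgrade the control plane (add your usual --set overrides)
helm upgrade shipa shipa-charts/shipa --namespace shipa-system --timeout=1000s --wait

# 3. Re-add the target so the CLI uses the new 80/443 ports
#    (target name and address below are illustrative)
shipa target add shipa-controlplane shipa.example.com

# 4. Upgrade each connected cluster
shipa cluster list
shipa cluster upgrade CLUSTERNAME
```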

Manually upgrading MongoDB instance

If the automatic upgrade process does not work for you, the following steps export your data from the existing MongoDB instance and import it into a new one.

export MONGO_NAMESPACE="$(kubectl get po -A -l app=mongodb-replicaset -o jsonpath='{.items[0].metadata.namespace}')"
export MONGO_POD="$(kubectl get po -A -l app=mongodb-replicaset -o jsonpath='{.items[0].metadata.name}')"
export MONGO_PVC="$(kubectl get pvc -n ${MONGO_NAMESPACE} -l app=mongodb-replicaset -o jsonpath='{.items[0].metadata.name}')"
export SHIPA_DEPLOYMENT="$(kubectl get deployments.apps -n ${MONGO_NAMESPACE} -l app.kubernetes.io/instance=shipa -o name | grep -e '.*-api$')"
export SHIPA_RELEASE="$(kubectl get deployments.apps -n ${MONGO_NAMESPACE} -l app.kubernetes.io/instance=shipa -o jsonpath='{.items[0].metadata.annotations.meta\.helm\.sh\/release-name}')"

if [[ -z "${MONGO_NAMESPACE}" || -z "${MONGO_POD}" || -z "${MONGO_PVC}" || -z "${SHIPA_DEPLOYMENT}" || -z "${SHIPA_RELEASE}" ]]; then
  echo "[ERROR] Could not pull required cluster information."
  exit 1
fi

# Stop Shipa API
kubectl scale ${SHIPA_DEPLOYMENT} --replicas=0 -n ${MONGO_NAMESPACE}
sleep 15

# Export data
kubectl exec -it -n ${MONGO_NAMESPACE} ${MONGO_POD} -c mongodb-replicaset -- mongodump -d shipa --gzip --archive=/tmp/mongobackup.gzip
kubectl cp -n ${MONGO_NAMESPACE} ${MONGO_POD}:/tmp/mongobackup.gzip /tmp/mongobackup.gzip -c mongodb-replicaset
if [[ ! -s /tmp/mongobackup.gzip ]]; then
  echo "[ERROR] Backup is missing or empty. Expected locally at /tmp/mongobackup.gzip"
  exit 1
fi
if ! gunzip --test /tmp/mongobackup.gzip; then
  echo "[ERROR] Backup appears to be corrupt."
  exit 1
fi

# Delete mongo components
kubectl delete svc -n ${MONGO_NAMESPACE} -l app=mongodb-replicaset
kubectl delete statefulsets.apps -n ${MONGO_NAMESPACE} -l app=mongodb-replicaset
kubectl delete configmaps -n ${MONGO_NAMESPACE} -l app=mongodb-replicaset
kubectl delete persistentvolumeclaims -n ${MONGO_NAMESPACE} -l app=mongodb-replicaset
sleep 15

# Helm upgrade (provide all your override values here)
helm upgrade ${SHIPA_RELEASE} shipa-charts/shipa -n ${MONGO_NAMESPACE} --timeout=15m ...

# Wait for MongoDB to be ready
kubectl wait --for=condition=ready --timeout=5m po -l app.kubernetes.io/name=mongodb -n ${MONGO_NAMESPACE}

# Import data
export MONGO_POD="$(kubectl get po -n ${MONGO_NAMESPACE} -l app.kubernetes.io/name=mongodb -o jsonpath='{.items[0].metadata.name}')"
kubectl cp -n ${MONGO_NAMESPACE} /tmp/mongobackup.gzip ${MONGO_POD}:/tmp/mongobackup.gzip -c mongodb
kubectl exec -it -n ${MONGO_NAMESPACE} ${MONGO_POD} -c mongodb -- mongorestore -d shipa --gzip --archive=/tmp/mongobackup.gzip

# Restart the Shipa API
kubectl scale ${SHIPA_DEPLOYMENT} --replicas=1 -n ${MONGO_NAMESPACE}

Shipa Client [CLI] Upgrades

It is good practice to update the self-managed Shipa CLI whenever you upgrade your Shipa version.

curl -s https://storage.googleapis.com/shipa-client/install.sh | bash

You can consult the Compatibility Matrix in the Install Shipa Client [CLI] documentation section to make sure you have the minimum required CLI version.

Shipa Connected Cluster Updates

It is good practice to update your connected clusters when upgrading Shipa to take advantage of new features and bug fixes.

shipa cluster list
shipa cluster upgrade CLUSTERNAME
