You can restore Red Hat 3scale API Management components by restoring the backed up 3scale operator resources. You can also restore databases such as MySQL and Redis.
After the data has been restored, you can scale up the 3scale operator and deployment.
You installed and configured Red Hat 3scale API Management. For more information, see Installing 3scale API Management on OpenShift and Red Hat 3scale API Management.
You backed up the 3scale operator, and databases such as MySQL and Redis.
Ensure that you restore 3scale on the same cluster where it was backed up.
If you want to restore 3scale on a different cluster, ensure that the original backed-up cluster and the cluster where you want to restore the operator use the same custom domain.
You can restore the Red Hat 3scale API Management operator resources, and both the Secret and APIManager custom resources (CRs), by using the following procedure.
You backed up the 3scale operator.
You backed up the MySQL and Redis databases.
You are restoring the database on the same cluster where it was backed up.
If you are restoring the operator to a different cluster than the one you backed up from, install and configure OADP with nodeAgent enabled on the destination cluster. Ensure that the OADP configuration is the same as it was on the source cluster.
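As an illustration of what that configuration might look like, the following DataProtectionApplication fragment enables the node agent. The resource name, plugin list, and uploader type shown here are assumptions; adapt them to match the configuration of the source cluster.

```yaml
# Illustrative only: a DataProtectionApplication with nodeAgent enabled.
# The name, defaultPlugins, and uploaderType values are assumptions and
# must match the OADP configuration of the source cluster.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
    nodeAgent:
      enable: true
      uploaderType: kopia
```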
Delete the 3scale operator custom resource definitions (CRDs) along with the threescale namespace by running the following command:
$ oc delete project threescale
"threescale" project deleted successfully
Create a YAML file with the following configuration to restore the 3scale operator:
restore.yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: operator-installation-restore
  namespace: openshift-adp
spec:
  backupName: operator-install-backup (1)
  excludedResources:
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - csinodes.storage.k8s.io
    - volumeattachments.storage.k8s.io
    - backuprepositories.velero.io
  itemOperationTimeout: 4h0m0s
(1) The name of the backup of the 3scale operator.
Restore the 3scale operator by running the following command:
$ oc create -f restore.yaml
restore.velero.io/operator-installation-restore created
Manually create the s3-credentials Secret object by running the following command:
$ oc apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: threescale
stringData:
  AWS_ACCESS_KEY_ID: <ID_123456> (1)
  AWS_SECRET_ACCESS_KEY: <ID_98765544> (2)
  AWS_BUCKET: <mybucket.example.com> (3)
  AWS_REGION: <us-east-1> (4)
type: Opaque
EOF
(1) Replace <ID_123456> with your AWS access key ID.
(2) Replace <ID_98765544> with your AWS secret access key.
(3) Replace <mybucket.example.com> with your target bucket name.
(4) Replace <us-east-1> with the AWS region of your bucket.
Scale down the 3scale operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
deployment.apps/threescale-operator-controller-manager-v2 scaled
Create a YAML file with the following configuration to restore the Secret:
restore-secret.yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: operator-resources-secrets
  namespace: openshift-adp
spec:
  backupName: operator-resources-secrets (1)
  excludedResources:
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - csinodes.storage.k8s.io
    - volumeattachments.storage.k8s.io
    - backuprepositories.velero.io
  itemOperationTimeout: 4h0m0s
(1) The name of the backup of the Secret.
Restore the Secret by running the following command:
$ oc create -f restore-secret.yaml
restore.velero.io/operator-resources-secrets created
Create a YAML file with the following configuration to restore APIManager:
restore-apimanager.yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: operator-resources-apim
  namespace: openshift-adp
spec:
  backupName: operator-resources-apim (1)
  excludedResources: (2)
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - csinodes.storage.k8s.io
    - volumeattachments.storage.k8s.io
    - backuprepositories.velero.io
  itemOperationTimeout: 4h0m0s
(1) The name of the backup of the APIManager CR.
(2) The resources that you do not want to restore.
Restore the APIManager by running the following command:
$ oc create -f restore-apimanager.yaml
restore.velero.io/operator-resources-apim created
Scale up the 3scale operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale
deployment.apps/threescale-operator-controller-manager-v2 scaled
Restoring a MySQL database re-creates the following resources:
The Pod, ReplicationController, and Deployment objects.
The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).
The MySQL dump, which the example-claim PVC contains.
Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted.
You restored the Secret and APIManager custom resources (CRs).
Scale down the Red Hat 3scale API Management operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
deployment.apps/threescale-operator-controller-manager-v2 scaled
Create the following script to scale down the 3scale components:
$ vi ./scaledowndeployment.sh
for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
  oc scale deployment/$deployment --replicas=0 -n threescale
done
Scale down all the 3scale component deployments by running the script:
$ ./scaledowndeployment.sh
deployment.apps.openshift.io/apicast-production scaled
deployment.apps.openshift.io/apicast-staging scaled
deployment.apps.openshift.io/backend-cron scaled
deployment.apps.openshift.io/backend-listener scaled
deployment.apps.openshift.io/backend-redis scaled
deployment.apps.openshift.io/backend-worker scaled
deployment.apps.openshift.io/system-app scaled
deployment.apps.openshift.io/system-memcache scaled
deployment.apps.openshift.io/system-mysql scaled
deployment.apps.openshift.io/system-redis scaled
deployment.apps.openshift.io/system-searchd scaled
deployment.apps.openshift.io/system-sidekiq scaled
deployment.apps.openshift.io/zync scaled
deployment.apps.openshift.io/zync-database scaled
deployment.apps.openshift.io/zync-que scaled
Delete the system-mysql Deployment object by running the following command:
$ oc delete deployment system-mysql -n threescale
Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
deployment.apps.openshift.io "system-mysql" deleted
Create the following YAML file to restore the MySQL database:
restore-mysql.yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-mysql
  namespace: openshift-adp
spec:
  backupName: mysql-backup (1)
  excludedResources:
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - csinodes.storage.k8s.io
    - volumeattachments.storage.k8s.io
    - backuprepositories.velero.io
    - resticrepositories.velero.io
  hooks:
    resources:
      - name: restoreDB
        postHooks:
          - exec:
              command:
                - /bin/sh
                - '-c'
                - >
                  sleep 30

                  mysql -h 127.0.0.1 -D system -u root
                  --password=$MYSQL_ROOT_PASSWORD <
                  /var/lib/mysqldump/data/dump.sql (2)
              container: system-mysql
              execTimeout: 80s
              onError: Fail
              waitTimeout: 5m
  itemOperationTimeout: 1h0m0s
  restorePVs: true
(1) The name of the backup of the MySQL database.
(2) The path from which the data is restored.
Restore the MySQL database by running the following command:
$ oc create -f restore-mysql.yaml
restore.velero.io/restore-mysql created
Verify that the PodVolumeRestore restore is completed by running the following command:
$ oc get podvolumerestores.velero.io -n openshift-adp
NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE
restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m
restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m
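If you prefer not to re-run the verification command manually, you can wrap the check in a small shell helper. The following function is a hypothetical convenience sketch, not part of the official procedure: it repeatedly runs any command you pass until that command prints Completed or a timeout expires. The restore name in the usage comment is the one created earlier in this procedure.

```shell
#!/bin/sh
# Hypothetical helper: run a command repeatedly until it prints "Completed"
# or the timeout (in seconds) expires. Illustrative only.
wait_for_completed() {
  cmd=$1
  timeout=${2:-300}
  interval=${3:-10}
  waited=0
  while [ "$waited" -lt "$timeout" ]; do
    if [ "$($cmd)" = "Completed" ]; then
      echo "Completed"
      return 0
    fi
    sleep "$interval"
    waited=$((waited + interval))
  done
  echo "timed out after ${timeout}s" >&2
  return 1
}

# Example usage with the restore created in this procedure:
# wait_for_completed "oc get restore restore-mysql -n openshift-adp -o jsonpath={.status.phase}" 1800 30
```

Because the helper only compares the command output against the string Completed, the same function works for Restore and PodVolumeRestore resources alike.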
Verify that the additional PVC has been restored by running the following command:
$ oc get pvc -n threescale
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m
example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m
mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m
system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m
system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m
You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore.
You restored the Red Hat 3scale API Management operator resources, and the Secret and APIManager custom resources (CRs).
You restored the MySQL database.
Delete the backend-redis deployment by running the following command:
$ oc delete deployment backend-redis -n threescale
Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
deployment.apps.openshift.io "backend-redis" deleted
Create a YAML file with the following configuration to restore the Redis database:
restore-backend.yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-backend
  namespace: openshift-adp
spec:
  backupName: redis-backup (1)
  excludedResources:
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - csinodes.storage.k8s.io
    - volumeattachments.storage.k8s.io
    - backuprepositories.velero.io
  itemOperationTimeout: 1h0m0s
  restorePVs: true
(1) The name of the backup of the Redis database.
Restore the Redis database by running the following command:
$ oc create -f restore-backend.yaml
restore.velero.io/restore-backend created
Verify that the PodVolumeRestore restore is completed by running the following command:
$ oc get podvolumerestores.velero.io -n openshift-adp
NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE
restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m
You can scale up the Red Hat 3scale API Management operator and any deployment that was manually scaled down. After a few minutes, the 3scale installation should be fully functional, and its state should match the backed-up state.
You restored the 3scale operator resources, and both the Secret and APIManager custom resources (CRs).
You restored the MySQL and back-end Redis databases.
Ensure that no deployments are scaled up and that no extra pods are running.
Some system-mysql or backend-redis pods might run detached from deployments after restoration. You can remove them after the restoration is successful.
Scale up the 3scale operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale
deployment.apps/threescale-operator-controller-manager-v2 scaled
Verify that the 3scale operator was deployed by confirming that its pod is running:
$ oc get pods -n threescale
NAME READY STATUS RESTARTS AGE
threescale-operator-controller-manager-v2-79546bd8c-b4qbh 1/1 Running 0 2m5s
Create the following script to scale up the deployments:
$ vi ./scaledeployment.sh
for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
  oc scale deployment/$deployment --replicas=1 -n threescale
done
Scale up the deployments by running the following script:
$ ./scaledeployment.sh
deployment.apps.openshift.io/apicast-production scaled
deployment.apps.openshift.io/apicast-staging scaled
deployment.apps.openshift.io/backend-cron scaled
deployment.apps.openshift.io/backend-listener scaled
deployment.apps.openshift.io/backend-redis scaled
deployment.apps.openshift.io/backend-worker scaled
deployment.apps.openshift.io/system-app scaled
deployment.apps.openshift.io/system-memcache scaled
deployment.apps.openshift.io/system-mysql scaled
deployment.apps.openshift.io/system-redis scaled
deployment.apps.openshift.io/system-searchd scaled
deployment.apps.openshift.io/system-sidekiq scaled
deployment.apps.openshift.io/zync scaled
deployment.apps.openshift.io/zync-database scaled
deployment.apps.openshift.io/zync-que scaled
Get the 3scale-admin route to log in to the 3scale UI by running the following command:
$ oc get routes -n threescale
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None
zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None
zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None
zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None
zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None
zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None
In this example, 3scale-admin.apps.custom-cluster-name.openshift.com is the 3scale-admin URL.
Use the URL from this output to log in to 3scale as an administrator and verify that the data from the backup is available.