
You can restore Red Hat 3scale API Management components by restoring the backed up 3scale operator resources. You can also restore databases such as MySQL and Redis.

After the data has been restored, you can scale up the 3scale operator and deployment.

Prerequisites
  • You installed and configured Red Hat 3scale API Management. For more information, see Installing 3scale API Management on OpenShift and Red Hat 3scale API Management.

  • You backed up the 3scale operator, and databases such as MySQL and Redis.

  • Ensure that you are restoring 3scale on the same cluster where it was backed up.

  • If you want to restore 3scale on a different cluster, ensure that the cluster where the backup was taken and the cluster where you want to restore the operator use the same custom domain.

Restoring the 3scale API Management operator, secrets, and APIManager

You can restore the Red Hat 3scale API Management operator resources, and both the Secret and APIManager custom resources (CRs) by using the following procedure.

Prerequisites
  • You backed up the 3scale operator.

  • You backed up the MySQL and Redis databases.

  • You are restoring the database on the same cluster where it was backed up.

    If you are restoring the operator to a cluster other than the one that you backed up from, install and configure OADP with nodeAgent enabled on the destination cluster. Ensure that the OADP configuration is the same as it was on the source cluster; a minimal sketch of the nodeAgent configuration follows this list.
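
    The following is a minimal sketch of the nodeAgent section of a DataProtectionApplication (DPA) custom resource for the destination cluster. The DPA name is a placeholder, and the uploader type is an assumption that must match the uploader used for the backups:

    Example DataProtectionApplication excerpt
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_name> (1)
      namespace: openshift-adp
    spec:
      configuration:
        nodeAgent:
          enable: true (2)
          uploaderType: kopia (3)
    1 Replace <dpa_name> with the name of your DataProtectionApplication.
    2 The nodeAgent must be enabled for Velero to run pod volume restores.
    3 This example assumes the kopia uploader, which matches the UPLOADER TYPE shown in the verification steps later in this procedure.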

Procedure
  1. Delete the existing 3scale operator resources along with the threescale namespace by running the following command:

    $ oc delete project threescale
    Example output
    "threescale" project deleted successfully
  2. Create a YAML file with the following configuration to restore the 3scale operator:

    Example restore.yaml file
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: operator-installation-restore
      namespace: openshift-adp
    spec:
      backupName: operator-install-backup (1)
      excludedResources:
      - nodes
      - events
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      - csinodes.storage.k8s.io
      - volumeattachments.storage.k8s.io
      - backuprepositories.velero.io
      itemOperationTimeout: 4h0m0s
    1 Restoring the 3scale operator backup.
  3. Restore the 3scale operator by running the following command:

    $ oc create -f restore.yaml
    Example output
    restore.velero.io/operator-installation-restore created
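
    Velero processes the restore asynchronously. Optionally, check the restore phase and wait until it reports Completed; the following command is a sketch that reads the phase from the Restore custom resource:

    $ oc get restores.velero.io operator-installation-restore -n openshift-adp -o jsonpath='{.status.phase}'
    Example output
    Completed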
  4. Manually create the s3-credentials Secret object by running the following command:

    $ oc apply -f - <<EOF
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: s3-credentials
      namespace: threescale
    stringData:
      AWS_ACCESS_KEY_ID: <ID_123456> (1)
      AWS_SECRET_ACCESS_KEY: <ID_98765544> (2)
      AWS_BUCKET: <mybucket.example.com> (3)
      AWS_REGION: <us-east-1> (4)
    type: Opaque
    EOF
    1 Replace <ID_123456> with your AWS access key ID.
    2 Replace <ID_98765544> with your AWS secret access key.
    3 Replace <mybucket.example.com> with your target bucket name.
    4 Replace <us-east-1> with the AWS region of your bucket.
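
    Optionally, confirm that the Secret was created in the threescale namespace; the following output is a sketch and the AGE value differs:

    $ oc get secret s3-credentials -n threescale
    Example output
    NAME             TYPE     DATA   AGE
    s3-credentials   Opaque   4      15s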
  5. Scale down the 3scale operator by running the following command:

    $ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
    Example output
    deployment.apps/threescale-operator-controller-manager-v2 scaled
  6. Create a YAML file with the following configuration to restore the Secret:

    Example restore-secret.yaml file
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: operator-resources-secrets
      namespace: openshift-adp
    spec:
      backupName: operator-resources-secrets (1)
      excludedResources:
      - nodes
      - events
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      - csinodes.storage.k8s.io
      - volumeattachments.storage.k8s.io
      - backuprepositories.velero.io
      itemOperationTimeout: 4h0m0s
    1 Restoring the Secret backup.
  7. Restore the Secret by running the following command:

    $ oc create -f restore-secret.yaml
    Example output
    restore.velero.io/operator-resources-secrets created
  8. Create a YAML file with the following configuration to restore APIManager:

    Example restore-apimanager.yaml file
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: operator-resources-apim
      namespace: openshift-adp
    spec:
      backupName: operator-resources-apim (1)
      excludedResources: (2)
      - nodes
      - events
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      - csinodes.storage.k8s.io
      - volumeattachments.storage.k8s.io
      - backuprepositories.velero.io
      itemOperationTimeout: 4h0m0s
    1 Restoring the APIManager backup.
    2 The resources that you do not want to restore.
  9. Restore the APIManager by running the following command:

    $ oc create -f restore-apimanager.yaml
    Example output
    restore.velero.io/operator-resources-apim created
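
    At this point, three Restore resources exist in the openshift-adp namespace. Optionally, confirm that each restore reached the Completed phase before scaling the operator back up; this command is a sketch that prints the name and phase of every restore:

    $ oc get restores.velero.io -n openshift-adp -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
    Example output
    operator-installation-restore	Completed
    operator-resources-apim	Completed
    operator-resources-secrets	Completed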
  10. Scale up the 3scale operator by running the following command:

    $ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale
    Example output
    deployment.apps/threescale-operator-controller-manager-v2 scaled
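
    Optionally, verify that the operator deployment is ready again before you restore the databases; the following output is a sketch and the AGE value differs:

    $ oc get deployment threescale-operator-controller-manager-v2 -n threescale
    Example output
    NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
    threescale-operator-controller-manager-v2   1/1     1            1           3m10s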

Restoring a MySQL database

Restoring a MySQL database re-creates the following resources:

  • The Pod, ReplicationController, and Deployment objects.

  • The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).

  • The MySQL dump, which the example-claim PVC contains.

Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted.

Prerequisites
  • You restored the Secret and APIManager custom resources (CRs).

Procedure
  1. Scale down the Red Hat 3scale API Management operator by running the following command:

    $ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
    Example output:
    deployment.apps/threescale-operator-controller-manager-v2 scaled
  2. Create the following script to scale down the 3scale component deployments:

    $ vi ./scaledowndeployment.sh
    Example script:
    for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
        oc scale deployment/$deployment --replicas=0 -n threescale
    done
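
    If the script is not already executable, make it executable before running it:

    $ chmod +x ./scaledowndeployment.sh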
  3. Scale down all the 3scale component deployments by running the following script:

    $ ./scaledowndeployment.sh
    Example output:
    deployment.apps.openshift.io/apicast-production scaled
    deployment.apps.openshift.io/apicast-staging scaled
    deployment.apps.openshift.io/backend-cron scaled
    deployment.apps.openshift.io/backend-listener scaled
    deployment.apps.openshift.io/backend-redis scaled
    deployment.apps.openshift.io/backend-worker scaled
    deployment.apps.openshift.io/system-app scaled
    deployment.apps.openshift.io/system-memcache scaled
    deployment.apps.openshift.io/system-mysql scaled
    deployment.apps.openshift.io/system-redis scaled
    deployment.apps.openshift.io/system-searchd scaled
    deployment.apps.openshift.io/system-sidekiq scaled
    deployment.apps.openshift.io/zync scaled
    deployment.apps.openshift.io/zync-database scaled
    deployment.apps.openshift.io/zync-que scaled
  4. Delete the system-mysql Deployment object by running the following command:

    $ oc delete deployment system-mysql -n threescale
    Example output:
    Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
    deployment.apps.openshift.io "system-mysql" deleted
  5. Create the following YAML file to restore the MySQL database:

    Example restore-mysql.yaml file
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: restore-mysql
      namespace: openshift-adp
    spec:
      backupName: mysql-backup (1)
      excludedResources:
        - nodes
        - events
        - events.events.k8s.io
        - backups.velero.io
        - restores.velero.io
        - csinodes.storage.k8s.io
        - volumeattachments.storage.k8s.io
        - backuprepositories.velero.io
        - resticrepositories.velero.io
      hooks:
        resources:
          - name: restoreDB
            postHooks:
              - exec:
                  command:
                    - /bin/sh
                    - '-c'
                    - >
                      sleep 30
    
                      mysql -h 127.0.0.1 -D system -u root
                      --password=$MYSQL_ROOT_PASSWORD <
                      /var/lib/mysqldump/data/dump.sql (2)
                  container: system-mysql
                  execTimeout: 80s
                  onError: Fail
                  waitTimeout: 5m
      itemOperationTimeout: 1h0m0s
      restorePVs: true
    1 Restoring the MySQL backup.
    2 The path to the MySQL dump file from which the data is restored.
  6. Restore the MySQL database by running the following command:

    $ oc create -f restore-mysql.yaml
    Example output
    restore.velero.io/restore-mysql created
Verification
  1. Verify that the PodVolumeRestore restore is completed by running the following command:

    $ oc get podvolumerestores.velero.io -n openshift-adp
    Example output:
    NAME                    NAMESPACE    POD                     UPLOADER TYPE   VOLUME                  STATUS      TOTALBYTES   BYTESDONE   AGE
    restore-mysql-rbzvm     threescale   system-mysql-2-kjkhl    kopia           mysql-storage           Completed   771879108    771879108   40m
    restore-mysql-z7x7l     threescale   system-mysql-2-kjkhl    kopia           example-claim           Completed   380415       380415      40m
  2. Verify that the additional PVC has been restored by running the following command:

    $ oc get pvc -n threescale
    Example output:
    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    backend-redis-storage   Bound    pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d   1Gi        RWO            gp3-csi        <unset>                 68m
    example-claim           Bound    pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54   1Gi        RWO            gp3-csi        <unset>                 57m
    mysql-storage           Bound    pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896   1Gi        RWO            gp3-csi        <unset>                 68m
    system-redis-storage    Bound    pvc-04dadafd-8a3e-4d00-8381-6041800a24fc   1Gi        RWO            gp3-csi        <unset>                 68m
    system-searchd          Bound    pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9   1Gi        RWO            gp3-csi        <unset>                 68m
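  3. Optionally, spot-check the restored data inside the database pod. The following command is a sketch: replace <system_mysql_pod> with the pod name from the PodVolumeRestore output, and note that it assumes the MYSQL_ROOT_PASSWORD environment variable is set in the system-mysql container, as in the restore hook:

    $ oc rsh -n threescale <system_mysql_pod> \
        sh -c 'mysql -h 127.0.0.1 -D system -u root --password="$MYSQL_ROOT_PASSWORD" -e "SHOW TABLES;"'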

Restoring the back-end Redis database

You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore.

Prerequisites
  • You restored the Red Hat 3scale API Management operator resources, Secret, and APIManager custom resources.

  • You restored the MySQL database.

Procedure
  1. Delete the backend-redis deployment by running the following command:

    $ oc delete deployment backend-redis -n threescale
    Example output:
    Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
    
    deployment.apps.openshift.io "backend-redis" deleted
  2. Create a YAML file with the following configuration to restore the Redis database:

    Example restore-backend.yaml file
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: restore-backend
      namespace: openshift-adp
    spec:
      backupName: redis-backup (1)
      excludedResources:
        - nodes
        - events
        - events.events.k8s.io
        - backups.velero.io
        - restores.velero.io
        - resticrepositories.velero.io
        - csinodes.storage.k8s.io
        - volumeattachments.storage.k8s.io
        - backuprepositories.velero.io
      itemOperationTimeout: 1h0m0s
      restorePVs: true
    1 Restoring the Redis backup.
  3. Restore the Redis database by running the following command:

    $ oc create -f restore-backend.yaml
    Example output
    restore.velero.io/restore-backend created
Verification
  • Verify that the PodVolumeRestore restore is completed by running the following command:

    $ oc get podvolumerestores.velero.io -n openshift-adp
    Example output:
    NAME                    NAMESPACE    POD                     UPLOADER TYPE   VOLUME                  STATUS      TOTALBYTES   BYTESDONE   AGE
    restore-backend-jmrwx   threescale   backend-redis-1-bsfmv   kopia           backend-redis-storage   Completed   76123        76123       21m

Scaling up the 3scale API Management operator and deployment

You can scale up the Red Hat 3scale API Management operator and any deployment that was manually scaled down. After a few minutes, the 3scale installation should be fully functional, and its state should match the backed-up state.

Prerequisites
  • You restored the 3scale operator resources, and both the Secret and APIManager custom resources (CRs).

  • You restored the MySQL and back-end Redis databases.

  • Ensure that there are no scaled-up deployments or extra pods running. Some system-mysql or backend-redis pods might still be running detached from deployments after the restoration; you can remove them after the restoration is successful.

Procedure
  1. Scale up the 3scale operator by running the following command:

    $ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale
    Example output
    deployment.apps/threescale-operator-controller-manager-v2 scaled
  2. Verify that the 3scale operator was deployed by checking that the operator pod is running. Run the following command:

    $ oc get pods -n threescale
    Example output
    NAME                                                         READY   STATUS    RESTARTS   AGE
    threescale-operator-controller-manager-v2-79546bd8c-b4qbh   1/1     Running   0          2m5s
  3. Create the following script to scale up the deployments:

    $ vi ./scaledeployment.sh
    Example script file:
    for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
        oc scale deployment/$deployment --replicas=1 -n threescale
    done
  4. Scale up the deployments by running the following script:

    $ ./scaledeployment.sh
    Example output:
    deployment.apps.openshift.io/apicast-production scaled
    deployment.apps.openshift.io/apicast-staging scaled
    deployment.apps.openshift.io/backend-cron scaled
    deployment.apps.openshift.io/backend-listener scaled
    deployment.apps.openshift.io/backend-redis scaled
    deployment.apps.openshift.io/backend-worker scaled
    deployment.apps.openshift.io/system-app scaled
    deployment.apps.openshift.io/system-memcache scaled
    deployment.apps.openshift.io/system-mysql scaled
    deployment.apps.openshift.io/system-redis scaled
    deployment.apps.openshift.io/system-searchd scaled
    deployment.apps.openshift.io/system-sidekiq scaled
    deployment.apps.openshift.io/zync scaled
    deployment.apps.openshift.io/zync-database scaled
    deployment.apps.openshift.io/zync-que scaled
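
    Optionally, wait for each deployment to finish rolling out before you check the routes; this loop is a sketch that reuses the same deployment list as the script:

    $ for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
          oc rollout status deployment/$deployment -n threescale --timeout=300s
      done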
  5. Get the 3scale-admin route to log in to the 3scale UI by running the following command:

    $ oc get routes -n threescale
    Example output
    NAME                         HOST/PORT                                                                   PATH   SERVICES             PORT      TERMINATION     WILDCARD
    backend                      backend-3scale.apps.custom-cluster-name.openshift.com                         backend-listener     http      edge/Allow      None
    zync-3scale-api-b4l4d        api-3scale-apicast-production.apps.custom-cluster-name.openshift.com          apicast-production   gateway   edge/Redirect   None
    zync-3scale-api-b6sns        api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com             apicast-staging      gateway   edge/Redirect   None
    zync-3scale-master-7sc4j     master.apps.custom-cluster-name.openshift.com                                 system-master        http      edge/Redirect   None
    zync-3scale-provider-7r2nm   3scale-admin.apps.custom-cluster-name.openshift.com                           system-provider      http      edge/Redirect   None
    zync-3scale-provider-mjxlb   3scale.apps.custom-cluster-name.openshift.com                                 system-developer     http      edge/Redirect   None

    In this example, 3scale-admin.apps.custom-cluster-name.openshift.com is the 3scale-admin URL.

  6. Use the URL from this output to log in to the 3scale Admin Portal as an administrator. Verify that the data that was available when you took the backup has been restored.