
You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using the fully qualified domain name (FQDN) of the cluster.

Accessing VMs by using the cluster FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Configuring a DNS server for secondary networks

The Cluster Network Addons Operator (CNAO) deploys a Domain Name System (DNS) server and monitoring components when you enable the deployKubeSecondaryDNS feature gate in the HyperConverged custom resource (CR).

Prerequisites
  • You installed the OpenShift CLI (oc).

  • You configured a load balancer for the cluster.

  • You logged in to the cluster with cluster-admin permissions.

Procedure
  1. Create a load balancer service to expose the DNS server outside the cluster by running the oc expose command according to the following example:

    $ oc expose -n kubevirt-hyperconverged deployment/secondary-dns --name=dns-lb \
      --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'
  2. Retrieve the external IP address by running the following command:

    $ oc get service -n kubevirt-hyperconverged
    Example output
    NAME       TYPE             CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
    dns-lb     LoadBalancer     172.30.27.5    10.46.41.94      53:31829/UDP     5s
  3. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
  4. Enable the DNS server and monitoring components according to the following example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec:
      featureGates:
        deployKubeSecondaryDNS: true
      kubeSecondaryDNSNameServerIP: "10.46.41.94" (1)
    # ...
    1 Specify the external IP address exposed by the load balancer service.
  5. Save the file and exit the editor.

  6. Retrieve the cluster FQDN by running the following command:

    $ oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'
    Example output
    openshift.example.com
  7. Point to the DNS server by using one of the following methods:

    • Add the kubeSecondaryDNSNameServerIP value to the resolv.conf file on your local machine, as shown in the sketch after this procedure.

      Editing the resolv.conf file overwrites existing DNS settings.

    • Add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example:

      vm.<FQDN>. IN NS ns.vm.<FQDN>.
      ns.vm.<FQDN>. IN A 10.46.41.94
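
A minimal sketch of the resolv.conf option, assuming the external IP address 10.46.41.94 from the earlier example, adds a single nameserver entry to the file:

    nameserver 10.46.41.94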

Connecting to a VM on a secondary network by using the cluster FQDN

You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster.

Prerequisites
  • You installed the QEMU guest agent on the VM.

  • The IP address of the VM is public.

  • You configured the DNS server for secondary networks.

  • You retrieved the fully qualified domain name (FQDN) of the cluster.

Procedure
  1. Retrieve the network interface name from the VM configuration by running the following command:

    $ oc get vm -n <namespace> <vm_name> -o yaml
    Example output
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              interfaces:
                - bridge: {}
                  name: example-nic
    # ...
          networks:
          - multus:
              networkName: bridge-conf
            name: example-nic (1)
    1 Note the name of the network interface.
  2. Connect to the VM by using the ssh command:

    $ ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>
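
For example, assuming a hypothetical user named cloud-user and the example values shown earlier (the example-nic interface, the example-vm VM in the example-namespace namespace, and the openshift.example.com cluster FQDN), the command resembles the following:

    $ ssh cloud-user@example-nic.example-vm.example-namespace.vm.openshift.example.com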
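
If the connection fails, you can verify that the FQDN resolves by querying the secondary DNS server directly. The following sketch assumes the dig utility is installed and reuses the same example values and the load balancer IP address from the DNS server configuration:

    $ dig +short example-nic.example-vm.example-namespace.vm.openshift.example.com @10.46.41.94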