You can use the default pod network with OKD Virtualization. To do so, you must use the masquerade binding method. Do not use masquerade mode with non-default networks. For secondary networks, use the bridge binding method.
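As a sketch of the bridge binding method on a secondary network, the interface references a Multus network attachment definition instead of the pod network. The interface name and the network attachment definition name below are assumptions for illustration:

```yaml
kind: VirtualMachine
spec:
  domain:
    devices:
      interfaces:
      - name: secondary            # illustrative interface name
        bridge: {}                 # bridge binding for a secondary network
  networks:
  - name: secondary
    multus:
      networkName: a-bridge-network  # assumed NetworkAttachmentDefinition name
```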
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
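A minimal sketch of that DHCP configuration by using cloud-init; the eth0 device name and the cloudinitdisk volume name are assumptions for illustration:

```yaml
volumes:
- name: cloudinitdisk
  cloudInitNoCloud:
    networkData: |
      version: 2
      ethernets:
        eth0:
          dhcp4: true   # acquire the IPv4 address over DHCP
```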
Edit the interfaces spec of your virtual machine configuration file:
kind: VirtualMachine
spec:
domain:
devices:
interfaces:
- name: red
masquerade: {} (1)
ports: (2)
- port: 80
networks:
- name: red
pod: {}
1 | Connect using masquerade mode |
2 | Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80. |
Ports 49152 and 49153 are reserved for use by the libvirt platform; incoming traffic to these ports is dropped. |
Create the virtual machine:
$ oc create -f <vm-name>.yaml
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
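If the default address block conflicts with your network, the CIDR can be overridden on the pod network. A minimal sketch, assuming an interface named red; the CIDR value shown is illustrative:

```yaml
networks:
- name: red
  pod:
    vmIPv6NetworkCIDR: fd10:100:2::2/120  # assumed custom IPv6 block
```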
The OKD cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack.
In a new virtual machine configuration, include an interface with masquerade mode and configure the IPv6 address and default gateway by using cloud-init.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: example-vm-ipv6
...
interfaces:
- name: red
masquerade: {} (1)
ports:
- port: 80 (2)
networks:
- name: red
pod: {}
volumes:
- cloudInitNoCloud:
networkData: |
version: 2
ethernets:
eth0:
dhcp4: true
addresses: [ fd10:0:2::2/120 ] (3)
gateway6: fd10:0:2::1 (4)
1 | Connect using masquerade mode. |
2 | Allows incoming traffic on port 80 to the virtual machine. |
3 | The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . |
4 | The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . |
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
If you create a virtual machine from the OKD Virtualization web console wizard, select the required binding method from the Networking screen.
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type | List of available binding methods. For the default pod network, masquerade is the only recommended binding method. |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: example-vm
namespace: default
spec:
running: false
template:
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
special: vm-windows
name: vm-windows
spec:
template:
metadata:
labels:
special: vm-windows
spec:
domain:
clock:
timer:
hpet:
present: false
hyperv: {}
pit:
tickPolicy: delay
rtc:
tickPolicy: catchup
utc: {}
cpu:
cores: 2
devices:
disks:
- disk:
bus: sata
name: pvcdisk
interfaces:
- masquerade: {}
model: e1000
name: default
features:
acpi: {}
apic: {}
hyperv:
relaxed: {}
spinlocks:
spinlocks: 8191
vapic: {}
firmware:
uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
machine:
type: q35
resources:
requests:
memory: 2Gi
networks:
- name: default
pod: {}
terminationGracePeriodSeconds: 3600
volumes:
- name: pvcdisk
persistentVolumeClaim:
claimName: disk-windows
Create a service from a running virtual machine by first creating a Service
object to expose the virtual machine.
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by setting the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.
You can define which IP family to use for single-stack, or the order of IP families for dual-stack, by setting the spec.ipFamilies field.
The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.
This procedure presents an example of how to create, connect to, and expose a Service object of type ClusterIP as a virtual machine-backed service.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: vm-ephemeral
namespace: example-namespace
spec:
running: false
template:
metadata:
labels:
special: key (1)
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
1 | Add the label special: key in the spec.template.metadata.labels section. |
Labels on a virtual machine are passed through to the pod. The labels in the spec.template.metadata.labels section, such as special: key, are applied to the pod and can therefore be matched by the selector of the Service object.
Save the virtual machine YAML to apply your changes.
Edit the Service YAML to configure the settings necessary to create and expose the Service object:
apiVersion: v1
kind: Service
metadata:
name: vmservice (1)
namespace: example-namespace (2)
spec:
ports:
- port: 27017
protocol: TCP
targetPort: 22 (3)
selector:
special: key (4)
type: ClusterIP (5)
1 | Specify the name of the service you are creating and exposing. |
2 | Specify namespace in the metadata section of the Service YAML that corresponds to the namespace you specify in the virtual machine YAML. |
3 | Add targetPort: 22 , exposing the service on SSH port 22 . |
4 | In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file. |
5 | In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate. |
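As a sketch of that substitution, a NodePort variant of the same service might look like the following. The service name and the nodePort value are assumptions; omit nodePort to let the cluster assign one from its node port range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vmservice-nodeport       # assumed name
  namespace: example-namespace
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22               # SSH port on the virtual machine
    nodePort: 30017              # assumed; must fall in the cluster node port range
  selector:
    special: key
  type: NodePort
```

Traffic sent to port 30017 on any cluster node is then forwarded to port 22 of the virtual machine.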
Save the Service YAML to store the service configuration.
Create the ClusterIP service:
$ oc create -f <service_name>.yaml
Start the virtual machine. If the virtual machine is already running, restart it.
Query the Service object to verify that it is available and is configured with type ClusterIP.
Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.
$ oc get service -n example-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m
As shown in the output, vmservice is running. The TYPE displays as ClusterIP, as you specified in the Service YAML.
Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: vm-connect
namespace: example-namespace
spec:
running: false
template:
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
Run the oc create command to create a second virtual machine, where file.yaml is the name of the virtual machine YAML:
$ oc create -f <file.yaml>
Start the virtual machine.
Connect to the virtual machine by running the following virtctl command:
$ virtctl -n example-namespace console <new-vm-name>
Run the ssh command to authenticate the connection, where 172.30.3.149 is the cluster IP of the service and fedora is the user name of the virtual machine:
$ ssh fedora@172.30.3.149 -p 27017
You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.