Prerequisites to be installed: the NMState Operator (which provides the NodeNetworkConfigurationPolicy API) and OpenShift Virtualization.
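For reference, here is a minimal sketch of installing the NMState Operator through OLM and enabling it with an NMState custom resource. The namespace, channel, and package names below are the commonly documented defaults, so verify them against the OperatorHub catalog in your cluster before applying; OpenShift Virtualization installs the same way from its own operator.

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nmstate
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nmstate
  namespace: openshift-nmstate
spec:
  targetNamespaces:
    - openshift-nmstate
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: stable # verify the channel name against your catalog
  name: kubernetes-nmstate-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
# The operator only deploys its nmstate handlers once an NMState CR exists.
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate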
Setting Up the VLAN Configuration
In my homelab, I have a VLAN created and tagged on the same switch port that supplies the ethernet connection to the OCP cluster.

I have bonds set up for my OpenShift cluster, not only to emulate a more enterprise-like setup but also to normalize the device naming. If bond0 is always the uplink, it hides which physical interfaces it fronts, which lets me keep the upstream configurations identical and paper over differences in hardware. Check out the setup here: https://stephennimmo.com/2025/03/22/home-lab-openshift-using-agent-based-installer/
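For context, the bond itself is defined with its own NNCP. Here is a minimal sketch, assuming two member NICs named eno1 and eno2 and active-backup mode; the member names, bond mode, and DHCP setting are placeholders for whatever your hosts actually use.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-nncp
spec:
  nodeSelector: {} # Apply to all nodes
  desiredState:
    interfaces:
      - name: bond0
        type: bond
        state: up
        ipv4:
          enabled: true
          dhcp: true # assumes the machine network hands out addresses via DHCP
        link-aggregation:
          mode: active-backup # placeholder; use the mode your switch supports
          port:
            - eno1 # placeholder member NICs
            - eno2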
OpenShift Host Configurations with NodeNetworkConfigurationPolicy
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond-vlan-4-with-ovs-bridge-nncp
spec:
  nodeSelector: {} # Apply to all nodes
  desiredState:
    interfaces:
      - name: bond0.4
        type: vlan
        state: up
        vlan:
          base-iface: bond0
          id: 4
      - name: br-vlan4
        type: ovs-bridge
        state: up
        ipv4:
          dhcp: false
          enabled: true
          address:
            - ip: 10.4.0.1
              prefix-length: 24
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: false
            fail-mode: standalone
          port:
            - name: bond0.4
- Why do you need allow-extra-patch-ports: true? If the kubernetes-nmstate pod restarts for any reason, it may reapply all the NNCPs. That would remove any patch ports connected to the configured OVS bridge that are not defined in the NNCP, disconnecting every Pod and VM attached to it. See https://issues.redhat.com/browse/OCPBUGS-37542
The NNCP is set up at the host level by the cluster administrators. Once it is in place, a NetworkAttachmentDefinition can be applied at the namespace level to give VMs in that namespace access to the VLAN.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan4-network
  namespace: vm-examples
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "vlan4-network",
    "type": "bridge",
    "bridge": "br-vlan4",
    "ipam": {
      "type": "whereabouts",
      "range": "10.4.0.16/28",
      "gateway": "10.4.0.1"
    }
  }'
With the NAD in place, it can then be referenced from the VM.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel9-vm001
  namespace: vm-examples
  labels:
    app: rhel9-vm001
    kubevirt.io/dynamic-credentials-support: 'true'
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: rhel9-vm001
      spec:
        sourceRef:
          kind: DataSource
          name: rhel9
          namespace: openshift-virtualization-os-images
        storage:
          resources:
            requests:
              storage: 30Gi
  runStrategy: RerunOnFailure
  template:
    metadata:
      annotations:
        vm.kubevirt.io/flavor: small
        vm.kubevirt.io/os: rhel9
        vm.kubevirt.io/workload: server
      labels:
        kubevirt.io/domain: rhel9-vm001
        kubevirt.io/size: small
        network.kubevirt.io/headlessService: headless
    spec:
      architecture: amd64
      domain:
        cpu:
          cores: 1
          sockets: 2
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
              macAddress: '02:67:d6:00:00:01'
            - bridge: {}
              model: virtio
              name: vlan4-interface
          rng: {}
        features:
          acpi: {}
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        machine:
          type: pc-q35-rhel9.4.0
        memory:
          guest: 4Gi
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: vlan4-network
          name: vlan4-interface
      terminationGracePeriodSeconds: 180
      volumes:
        - dataVolume:
            name: rhel9-vm001
          name: rootdisk
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              password: otdt-fh0j-kbts
              chpasswd: { expire: False }
          name: cloudinitdisk
Once the VM is started, the interfaces pick up their respective IPs and all is good: the default interface gets its pod network address through masquerade, and the bridge-bound VLAN interface receives its whereabouts-assigned address from the DHCP server that KubeVirt runs inside the virt-launcher pod.
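For a quick check, the assigned addresses also show up in the VirtualMachineInstance status. The fragment below is purely illustrative, with example values, just to show where to look (guest-reported details require the qemu-guest-agent to be running in the VM).

# Illustrative fragment of the VirtualMachineInstance status for rhel9-vm001.
# The addresses and the second MAC are example values, not real output.
status:
  interfaces:
    - name: default
      ipAddress: 10.128.2.45 # pod network address (example)
      mac: '02:67:d6:00:00:01'
    - name: vlan4-interface
      ipAddress: 10.4.0.18 # example address from the 10.4.0.16/28 whereabouts range
      mac: '02:67:d6:00:00:02' # hypothetical; this interface has no fixed macAddress in the spec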

Running with a Static IP Setup
To run with a static IP, we can set up a different NAD that uses the static IPAM plugin.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: static-net-191
  namespace: vm-examples
spec:
  config: |
    {
      "cniVersion": "0.3.0",
      "name": "static-net-191",
      "type": "bridge",
      "bridge": "br-vlan4",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "10.4.0.123/24",
            "gateway": "10.4.0.1"
          }
        ]
      }
    }
Then reference the NAD from a VM. Note that the static IPAM plugin hands out exactly the addresses listed in the NAD, so a NAD like this is effectively tied to a single VM; reusing it across VMs would assign the same IP twice.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel9-static-ip-vm
  namespace: vm-examples
  labels:
    app: rhel9-static-ip-vm
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: rhel9-static-ip-vm
      spec:
        sourceRef:
          kind: DataSource
          name: rhel9
          namespace: openshift-virtualization-os-images
        storage:
          resources:
            requests:
              storage: 30Gi
  runStrategy: RerunOnFailure
  template:
    metadata:
      annotations:
        vm.kubevirt.io/flavor: small
        vm.kubevirt.io/os: rhel9
        vm.kubevirt.io/workload: server
      labels:
        kubevirt.io/domain: rhel9-static-ip-vm
    spec:
      architecture: amd64
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - macAddress: '02:67:d6:00:00:0d'
              masquerade: {}
              model: virtio
              name: default
            - bridge: {}
              macAddress: '02:67:d6:00:00:0e'
              model: virtio
              name: static-vlan-interface
          rng: {}
        features:
          acpi: {}
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        machine:
          type: pc-q35-rhel9.4.0
        memory:
          guest: 4Gi
        resources: {}
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: static-net-191
          name: static-vlan-interface
      volumes:
        - dataVolume:
            name: rhel9-static-ip-vm
          name: rootdisk
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              password: redhat
              chpasswd: { expire: False }
          name: cloudinitdisk
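If the guest image does not pick the address up automatically, or you prefer to manage addressing from inside the guest, the same static IP can also be pushed in with cloud-init network data. Below is a minimal sketch of the cloudinitdisk volume, assuming the bridge-bound NIC shows up as eth1 inside the guest (the device name depends on the image); the default route stays on the pod network interface.

      volumes:
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              password: redhat
              chpasswd: { expire: False }
            # networkData is applied by cloud-init inside the guest; eth1 is an
            # assumed device name and 10.4.0.123/24 matches the static NAD above.
            networkData: |
              version: 2
              ethernets:
                eth1:
                  addresses:
                    - 10.4.0.123/24
          name: cloudinitdisk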