Open vSwitch (OVS) is an open source, production-quality, multilayer virtual switch. Open vSwitch is designed for massive network automation through programmatic extension, while still supporting standard protocols and management interfaces (for example LACP, NetFlow, sFlow, IPFIX, RSPAN, CLI, and 802.1ag). It is designed to support distribution across multiple physical servers, similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V.
In this article we explore the configuration required to use an Open vSwitch bridge in your KVM virtualization infrastructure. KVM is a full virtualization solution for Linux on x86 hardware that allows you to run multiple operating systems sharing a single set of hardware resources. The kernel component of KVM has been included in Linux since version 2.6.20.
Most of the operations in this article were performed on Enterprise Linux operating systems – RHEL / Rocky / AlmaLinux – and Fedora. For Debian-based systems such as Ubuntu, a few modifications may be required to use an Open vSwitch bridge with KVM virtual machines.
1) Install Open vSwitch on Linux system
The first step is the installation of Open vSwitch packages on your Linux system. We have earlier guides that can be used for the installation.
Install Open vSwitch on Ubuntu / Debian
Use the following link to access the Open vSwitch installation guide for Debian-based Linux systems:
Install Open vSwitch on Fedora
On Fedora, the Open vSwitch packages are available in the default repositories:
sudo dnf install openvswitch
Accept the prompt to start the installation process:
....
Transaction Summary
==================================================================================================================================================================
Install 4 Packages
Total download size: 8.1 M
Installed size: 27 M
Is this ok [y/N]: y
Start and enable the openvswitch service by running the command below in your terminal:
$ sudo systemctl enable --now openvswitch
Created symlink /etc/systemd/system/multi-user.target.wants/openvswitch.service → /usr/lib/systemd/system/openvswitch.service.
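On Debian / Ubuntu the service unit ships as openvswitch-switch, so the equivalent command on those systems would be:
sudo systemctl enable --now openvswitch-switch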
Confirm the service is running:
$ systemctl status openvswitch
● openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
Active: active (exited) since Sat 2025-03-15 00:28:07 EAT; 23s ago
Process: 3307 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3307 (code=exited, status=0/SUCCESS)
Mar 15 00:28:07 fed.mylab.io systemd[1]: Starting Open vSwitch...
Mar 15 00:28:07 fed.mylab.io systemd[1]: Started Open vSwitch.
Confirm that ovs-vsctl is able to talk to the Open vSwitch daemon:
$ ovs-vsctl show
c5b16204-1afa-4ecf-8292-188fa32bc5f6
ovs_version: "3.3.0"
2) Create OVS Bridge on Linux
In this section we walk through creating an Open vSwitch bridge on your Linux system.
Configuration on RHEL / CentOS / Fedora:
Install Network scripts and disable NetworkManager:
sudo dnf -y install network-scripts
sudo systemctl disable --now NetworkManager
sudo systemctl enable network
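The legacy network-scripts package is deprecated on recent Enterprise Linux releases and may not be available. If you prefer to keep NetworkManager, a rough nmcli equivalent (a sketch, assuming the NetworkManager-ovs plugin and the same br-ex / eno1 / 192.168.20.10 values used below) would be:
sudo dnf -y install NetworkManager-ovs
sudo systemctl restart NetworkManager
# Bridge, its internal port and the interface that carries the IP address
sudo nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex
sudo nmcli conn add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex
sudo nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex \
  master ovs-port-br-ex con-name ovs-if-br-ex ipv4.method manual \
  ipv4.addresses 192.168.20.10/24 ipv4.gateway 192.168.20.1 ipv4.dns 8.8.8.8
# Attach the physical NIC to the bridge
sudo nmcli conn add type ovs-port conn.interface eno1 master br-ex con-name ovs-port-eno1
sudo nmcli conn add type ethernet conn.interface eno1 master ovs-port-eno1 con-name ovs-if-eno1
The rest of this section follows the network-scripts approach.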
Create an Open vSwitch Bridge
Back up the current interface configuration:
cp /etc/sysconfig/network-scripts/ifcfg-eno1 /root/
Configure the primary interface as shown below, replacing eno1 with the name of your physical interface.
$ sudo vim /etc/sysconfig/network-scripts/ifcfg-eno1
DEVICE=eno1
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
Create Open vSwitch Bridge configuration file:
$ sudo vim /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
USERCTL=yes
PEERDNS=yes
IPV6INIT=no
IPADDR=192.168.20.10
NETMASK=255.255.255.0
GATEWAY=192.168.20.1
DNS1=192.168.20.1
DNS2=8.8.8.8
Add the eno1 physical interface to the br-ex bridge in Open vSwitch:
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eno1
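Before restarting networking, you can quickly confirm the port was attached; eno1 should appear in the output of:
sudo ovs-vsctl list-ports br-ex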
Restart the network service after making the changes:
sudo systemctl restart network.service
List available OVS bridges:
$ sudo ovs-vsctl show
325c62c0-dafd-4abc-a949-aa6233000ca4
Bridge br-ex
Port eno1
Interface eno1
Port br-ex
Interface br-ex
type: internal
ovs_version: "3.3.0"
You can also view IP address information using the command:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
link/ether b4:2e:99:c9:b0:72 brd ff:ff:ff:ff:ff:ff
inet6 fe80::b62e:99ff:fec9:b072/64 scope link
valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 02:7a:a4:c1:b2:70 brd ff:ff:ff:ff:ff:ff
4: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether b4:2e:99:c9:b0:72 brd ff:ff:ff:ff:ff:ff
inet 192.168.20.10/24 brd 192.168.20.255 scope global br-ex
valid_lft forever preferred_lft forever
inet6 fe80::3cbd:feff:fef9:ed42/64 scope link
valid_lft forever preferred_lft forever
We can confirm the bridge is active and ready for use. The same steps can be used to create additional OVS bridges.
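For instance, a second, isolated bridge with no physical uplink (the name br-int here is only an example) can be created and verified with:
sudo ovs-vsctl add-br br-int
sudo ovs-vsctl show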
Configuration on Debian / Ubuntu
Sample OVS configuration template (/etc/network/interfaces) for Debian-based Linux systems:
# All data traffic flows over the br-ex network
auto br-ex
allow-ovs br-ex
# IP configuration of the OVS Bridge
iface br-ex inet static
address 192.168.20.10
netmask 255.255.255.0
gateway 192.168.20.1
dns-nameservers 8.8.8.8
ovs_type OVSBridge
ovs_ports eth1
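The physical uplink can also be declared in the same interfaces file so ifupdown brings it up together with the bridge. A sketch based on the openvswitch-switch packaging documentation, assuming eth1 is the uplink:
allow-br-ex eth1
iface eth1 inet manual
    ovs_bridge br-ex
    ovs_type OVSPort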
Add the eth1 physical interface to the br-ex bridge in Open vSwitch:
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth1
3) Install KVM packages on your Linux Server
With OVS installed and configured, the next step is installing KVM packages on the Linux server. Run the commands below for your distribution.
# Install on RHEL / Rocky / AlmaLinux / Fedora
sudo dnf -y install virt-install libvirt qemu-kvm virt-top libguestfs-tools
sudo systemctl enable --now libvirtd
# Install on Ubuntu / Debian
sudo apt update
sudo apt -y install qemu-kvm libvirt-daemon-system virt-top libvirt-daemon bridge-utils libosinfo-bin virtinst libguestfs-tools
Confirm that kernel modules are loaded:
$ lsmod | grep kvm
kvm_intel 315392 0
kvm 847872 1 kvm_intel
irqbypass 16384 1 kvm
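If the kvm module is not listed, check whether the CPU exposes hardware virtualization extensions; a non-zero count means VT-x or AMD-V is available:
grep -Ec '(vmx|svm)' /proc/cpuinfo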
Confirm service status:
systemctl status libvirtd
4) Enable IP Routing
We need to enable IP routing to direct inbound traffic correctly to the OVS bridge we’ve created.
On the KVM host, run the following commands to enable the IP routing kernel feature:
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter = 2"|sudo tee -a /etc/sysctl.conf
Load the settings:
$ sudo sysctl -p
net.ipv4.ip_forward = 1
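You can confirm both values took effect with:
sudo sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter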
5) Create a Virtual Machine on the Bridge
Finally, we can create a VM that uses the bridge we’ve created.
List available builder templates:
$ sudo virt-builder -l
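The list is quite long, so you can narrow it down to the template family you want, for example:
sudo virt-builder -l | grep -i centos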
I’ll create a 10GB OS disk image from the centosstream-9 template:
$ sudo virt-builder centosstream-9 --format qcow2 \
--size 10G -o /var/lib/libvirt/images/centosstream-9.qcow2 \
--root-password StrongRootPassword
[ 0.8] Downloading: http://builder.libguestfs.org/centosstream-9.xz
########################################################################################################################################################## 100.0%
[ 10.5] Planning how to build this image
[ 10.5] Uncompressing
[ 13.4] Resizing (using virt-resize) to expand the disk to 10.0G
[ 30.0] Opening the new disk
[ 34.1] Setting a random seed
[ 34.2] Setting passwords
[ 35.0] Finishing off
Output file: /var/lib/libvirt/images/centosstream-9.qcow2
Output size: 10.0G
Output format: qcow2
Total usable space: 9.3G
Free space: 8.0G (85%)
Then create the VM from the image above using the virt-install command:
sudo virt-install \
--name centosstream-9 \
--ram 2048 \
--disk path=/var/lib/libvirt/images/centosstream-9.qcow2 \
--vcpus 1 \
--os-variant centos-stream9 \
--network=bridge:br-ex,model=virtio,virtualport_type=openvswitch \
--graphics none \
--serial pty \
--console pty \
--boot hd \
--import
Where:
- /var/lib/libvirt/images/centosstream-9.qcow2 is the path to the image created
- br-ex is the name of the OVS bridge to use in the VM
- virtualport_type=openvswitch defines virtualport type used as OVS
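If you later want to attach the same OVS bridge to an existing VM instead, the equivalent libvirt interface definition (a sketch you can add with virsh edit, adjusting the bridge name as needed) looks like this:
<interface type='bridge'>
  <source bridge='br-ex'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>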
Expected console output from the virt-install command:
....
[ OK ] Started Hardware RNG Entropy Gatherer Wake threshold service.
[ OK ] Started Hardware RNG Entropy Gatherer Daemon.
[ OK ] Started QEMU Guest Agent.
[ 4.649459] lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
[ 4.668659] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[ 4.669680] input: PC Speaker as /devices/platform/pcspkr/input/input5
[ 4.689814] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
[ OK ] Started NTP client/server.
[ OK ] Started OpenSSH ecdsa Server Key Generation.
[ OK ] Started OpenSSH ed25519 Server Key Generation.
[ 4.764387] iTCO_vendor_support: vendor-support=0
[ 4.766207] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[ 4.767124] iTCO_wdt: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
[ 4.769471] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[ OK ] Started Authorization Manager.
Starting firewalld - dynamic firewall daemon...
[ 5.013674] intel_pmc_core intel_pmc_core.0: initialized
[ OK ] Started System Security Services Daemon.
[ OK ] Reached target User and Group Name Lookups.
Starting Login Service...
[ OK ] Started OpenSSH rsa Server Key Generation.
[ OK ] Reached target sshd-keygen.target.
[ OK ] Started Login Service.
[ OK ] Started firewalld - dynamic firewall daemon.
[ OK ] Reached target Network (Pre).
Starting Network Manager...
[ OK ] Started Network Manager.
[ OK ] Reached target Network.
Starting OpenSSH server daemon...
Starting Permit User Sessions...
Starting Dynamic System Tuning Daemon...
Starting Network Manager Wait Online...
Starting Hostname Service...
[ OK ] Started Permit User Sessions.
Starting Hold until boot process finishes up...
Starting Terminate Plymouth Boot Screen...
[ OK ] Started Command Scheduler.
[ OK ] Started OpenSSH server daemon.
[ 6.225202] IPv6: ADDRCONF(NETDEV_UP): enp1s0: link is not ready
localhost login: root
[root@localhost ~]#
Update root password:
[root@localhost ~]# passwd
Changing password for user root.
New password: <enter-new-password>
Retype new password: <confirm-new-password>
passwd: all authentication tokens updated successfully.
Confirm that the network interface is connected and UP:
[root@localhost ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:5e:69:24 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe5e:6924/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Let’s configure the VM’s IP address on the interface mapped to the host OVS bridge:
# vim /etc/sysconfig/network-scripts/ifcfg-enp1s0
NAME="enp1s0"
DEVICE="enp1s0"
ONBOOT="yes"
NETBOOT="yes"
BOOTPROTO="none"
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
DEFROUTE="yes"
IPADDR=192.168.20.12
PREFIX=24
GATEWAY=192.168.20.1
DNS1=8.8.8.8
DNS2=8.8.4.4
Run the command below to restart the interface:
# ifdown enp1s0 && ifup enp1s0
Connection 'enp1s0' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/9)
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/10)
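Alternatively, since CentOS Stream 9 manages interfaces with NetworkManager, the same settings could be applied with nmcli instead of editing the ifcfg file (assuming the connection is named enp1s0; confirm with nmcli con show):
nmcli con mod enp1s0 ipv4.method manual ipv4.addresses 192.168.20.12/24 \
  ipv4.gateway 192.168.20.1 ipv4.dns "8.8.8.8 8.8.4.4"
nmcli con up enp1s0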
Check IP address information:
# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:5e:69:24 brd ff:ff:ff:ff:ff:ff
inet 192.168.20.12/24 brd 192.168.20.255 scope global noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe5e:6924/64 scope link
valid_lft forever preferred_lft forever
Let’s try pinging an outside host:
[root@localhost ~]# ping -c 2 google.com
PING google.com (142.250.185.110) 56(84) bytes of data.
64 bytes from fra16s49-in-f14.1e100.net (142.250.185.110): icmp_seq=1 ttl=118 time=5.06 ms
64 bytes from fra16s49-in-f14.1e100.net (142.250.185.110): icmp_seq=2 ttl=118 time=5.28 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 5.064/5.173/5.282/0.109 ms
List running VMs:
$ sudo virsh list
To destroy and remove the VM, we’ll run:
$ sudo virsh destroy centosstream-9
Domain 'centosstream-9' destroyed
$ sudo virsh undefine centosstream-9
Domain 'centosstream-9' has been undefined
$ sudo rm -rf /var/lib/libvirt/images/centosstream-9.qcow2
$ sudo virsh list --all
Id Name State
--------------------
Conclusion
In this guide we installed Open vSwitch and configured a bridge for use by KVM virtual machines. We hope this guide was useful. If you encounter any issues while using this article in your setup, please drop us a comment and we’ll be happy to help where we can.