Here at DornerWorks, we work on ways to help our customers quickly and easily leverage open source software in their embedded products, and that includes the Xen-based Virtuosity hypervisor. As a quick recap, a hypervisor is software that creates multiple virtual machines (VMs) running on a common set of hardware resources. Each VM runs its own software in isolation from the other VMs, enabling use cases like running multiple operating systems side by side on a single target.
Another historic use case for hypervisor-based systems is re-using software binaries across different hardware targets. Because virtual machines are logical software constructs, they are far more customizable than physical hardware. In theory, a hypervisor can create a VM that replicates the target hardware, complete with the I/O devices and other interfaces, that the software binary was originally written to run on. In other words, with virtualization, instead of making the software fit the machine, you can make the (virtual) machine fit the software.
Of course, as with everything in embedded software, the devil is in the details, and there are always tradeoffs to be made. Fully emulating an arbitrary hardware target would require processing overhead and code complexity in Xen that embedded products usually cannot bear, which is why the Xen on ARM port actually does away with most of the emulation support that Xen on x86 provides. The Xen on ARM approach is for software running in a VM to be supported directly by the underlying hardware. Where that is not possible, the software should be modified to work appropriately as a guest, i.e. paravirtualized, which results in a new software binary, defeating the goal of binary re-use. Xen on ARM also allows devices to be passed through to a VM, granting that VM direct access to the device. The VM can then access the device as if it were running natively, without Xen. Since modern systems on chip (SoCs), like the Xilinx Zynq UltraScale+ MPSoC (ZUS+), have many I/O devices of the same type, this creates the opportunity for identical software binaries running in different VMs to access different instances of a given type of I/O device.
As a simple example, the ZUS+ has two Universal Asynchronous Receiver/Transmitter (UART) controllers, UART0 and UART1, at base addresses 0xFF000000 and 0xFF010000, respectively. A simple UART driver just needs access to those registers to poll for incoming characters and to transmit outgoing ones. The guest’s configuration file, typically suffixed with .cfg, specifies which physical addresses should be passed through to the guest. The Xen management tool, XL, and Xen itself also support a configuration option that sets up the MMU’s stage-2 translation to map an arbitrary address. That capability allows the redirection, or aliasing, of the address the software in a VM thinks it is accessing to a different physical address. In this scenario, one VM’s .cfg file could be configured to pass the UART0 address through un-aliased, and the other VM’s .cfg file could alias the UART0 address the software uses to UART1’s address instead. The end effect is that the UART driver code can be identical, both instances accessing what they see as the UART0 base address of 0xFF000000, but through the aliasing of the memory access, one guest actually uses UART0 while the other uses UART1.
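As a rough sketch, the two guest .cfg files might look like the following. This assumes xl’s iomem syntax, where each entry is a machine frame number and page count in hex 4 KiB frames, with an optional @gfn suffix giving the guest frame at which to map it; double-check the exact syntax against your Xen version’s xl.cfg documentation.

```
# Guest A's .cfg: pass UART0 (machine address 0xFF000000) through
# un-aliased, so guest frame ff000 is backed by machine frame ff000.
iomem = [ "ff000,1" ]

# Guest B's .cfg: alias guest accesses to 0xFF000000 onto UART1
# (0xFF010000), so guest frame ff000 is backed by machine frame ff010.
iomem = [ "ff010,1@ff000" ]
```

With these two fragments in place, the same polled UART driver binary, hard-coded to 0xFF000000, would reach UART0 from Guest A and UART1 from Guest B.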
A more complicated example would be to do the same for the four Gigabit Ethernet Modules (GEMs) by creating one software binary that drives GEM0 and using VM configuration to alias accesses to GEM1 through GEM3 for the various VMs. Ethernet drivers are more complex than UART drivers because of the higher throughput involved, and they require the use of DMA and interrupts. Peripheral-driven DMA is supported by Xen on the ZUS+ through the System MMU (SMMU), also known as an I/O MMU. Each GEM instance also has its own set of interrupts, which the ZUS+ hardware supports via the Generic Interrupt Controller v2 (GICv2). The GICv2 allows the system to be configured so that all physical interrupts to the A53 cluster are initially handled by Xen, which then raises a virtual interrupt request (vIRQ) to the destination VM based on current state and various configuration data.
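A guest .cfg fragment for this GEM scenario might look like the sketch below. The frame numbers, interrupt number, and device-tree node path here are assumptions drawn from typical ZynqMP device trees (GEM0 at 0xFF0B0000, GEM1 at 0xFF0C0000, GEM1 on SPI 59), not verified values; confirm them against your board’s device tree and the xl.cfg documentation before use.

```
# Guest .cfg sketch: give this VM GEM1 while its driver binary still
# uses GEM0's base address (guest frame ff0b0 backed by machine ff0c0).
iomem = [ "ff0c0,1@ff0b0" ]

# Route GEM1's physical interrupt to this guest. 91 = SPI 59 + 32 is an
# assumed number from the ZynqMP device tree.
irqs = [ 91 ]

# Have the SMMU translate GEM1's DMA accesses using this guest's memory
# map. The node path is an assumption; use the one from your device tree.
dtdev = [ "/amba/ethernet@ff0c0000" ]
```

Note that while the register addresses can be aliased, the interrupt number cannot: the guest receives the same IRQ number as the physical one, which is exactly the limitation discussed next.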
Unfortunately, Xen and XL do not support a mapping where the vIRQ number is different from the physical IRQ that triggers it… yet. All it would take is for some company looking to make Xen work better for embedded products to make the necessary software updates to XL and Xen to enable it. Stay tuned for updates.