BHYVE VM

Required packages:
  - sysutils/vm-bhyve
  - sysutils/bhyve-firmware (UEFI firmware, needed for the loader="uefi" guest below)
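
Install both with pkg(8):

# pkg install vm-bhyve bhyve-firmware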

ZFS prerequisites

First create the necessary ZFS dataset, then enable and initialize vm-bhyve:

  1. # zfs create -o mountpoint=/vm zroot/vm
  2. # sysrc vm_enable="YES"
  3. # sysrc vm_dir="zfs:zroot/vm"
  4. # vm init
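
A quick sanity check that the dataset mounted where vm-bhyve expects it:

# zfs list -o name,mountpoint zroot/vm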

PCIe device discovery

Now, find all PCIe devices you'd like to pass through.
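
pciconf(8) lists every PCI device; piping through grep narrows the output (the NVIDIA pattern is just an example):

$ pciconf -lv
$ pciconf -lv | grep -B1 -A4 NVIDIA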

GPU Example:

vgapci0@pci0:1:0:0: class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1b06 subvendor=0x1043 subdevice=0x85e4
    vendor     = 'NVIDIA Corporation'
    device     = 'GP102 [GeForce GTX 1080 Ti]'
    class      = display
    subclass   = VGA
hdac0@pci0:1:0:1:   class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x10ef subvendor=0x1043 subdevice=0x85e4
    vendor     = 'NVIDIA Corporation'
    device     = 'GP102 HDMI Audio Controller'
    class      = multimedia
    subclass   = HDA

Specifically vgapci0@pci0:1:0:0: and hdac0@pci0:1:0:1: (the GPU and its HDMI audio function).

Take the last three numbers of each selector (the PCI bus/slot/function) and construct the following line: pptdevs="1/0/0 1/0/1"
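
For example, with the devices above, /boot/loader.conf gains these entries (vmm_load ensures vmm.ko is loaded at boot, which the reservation requires):

vmm_load="YES"
pptdevs="1/0/0 1/0/1"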

A note on /boot/loader.conf variables, from vmm(4):

"""
PCI PASSTHROUGH
     When the hardware supports VT-d, and vmm.ko has been loaded at boot time,
     PCI devices can be reserved for use by the hypervisor.  Entries
     consisting of the PCI bus/slot/function are added to the pptdevs
     loader.conf(5) variable.  Additional entries are separated by spaces.
     Host PCI devices that match an entry will be assigned to the hypervisor
     and will not be probed by FreeBSD device drivers.  See the EXAMPLES
     section below for sample usage.

     A large number of PCI device entries may require a string longer than the
     128-character limit of loader.conf(5) variables.  The pptdevs2 and
     pptdevs3 variables can be used for additional entries.
"""

REBOOT

Reboot the host - # shutdown -r now

If the passthrough devices do not appear afterwards, enter the BIOS and confirm VT-d (AMD-Vi on AMD systems) is enabled.

# vm passthru ;: to confirm that the passthrough slots work

DEVICE     BHYVE ID     READY        DESCRIPTION
...
ppt0       1/0/0        Yes          GP102 [GeForce GTX 1080 Ti]
ppt1       1/0/1        Yes          GP102 HDMI Audio Controller
...
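
If READY shows No, the device was not reserved at boot; re-check pptdevs. Reserved devices also show up attached to the ppt driver:

$ pciconf -l | grep ppt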

Add networking bridges for the guest
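
With vm-bhyve this is a virtual switch; the guest config below attaches to one named public. A minimal sketch (em0 is an assumption, substitute your host NIC):

# vm switch create public
# vm switch add public em0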

$ ifconfig ;: to confirm the bridge device (it appears as vm-public) works

Create vm-bhyve config
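
The file below is a guest configuration; saving it under /vm/.templates (e.g. as windows.conf, a name chosen here) lets vm create reuse it as a template: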

loader="uefi"                                                   #
graphics="yes"                                                  # 1)
xhci_mouse="yes"                                                # 2)
                                                                #
cpu="8"                                                         # 3)
cpu_sockets="1"                                                 # 4)
cpu_cores="4"                                                   # 5)
cpu_threads="2"                                                 # 6)
                                                                #
memory=8G                                                       # 7)
                                                                #
ahci_device_limit="8"                                           # 8)
                                                                #
network0_type="e1000"                                           # 9)
network0_switch="public"                                        #
                                                                #
passthru0="1/0/0=6:0"                                           # 10)
passthru1="1/0/1=6:1"                                           # 11)
                                                                #
disk0_type="nvme"                                               # 12)
disk0_name="disk0.img"                                          # 13)
#disk0_opts="maxq=16,qsz=8,ioslots=1,sectsz=512,ser=ABCDEFGH"   # 14)
                                                                #
utctime="no"                                                    # 15)
                                                                #
vnc_password="VNCPASSWORDHERE"                                  # 16)
  1. enables the VNC framebuffer on PCIe bus 7
  2. enables the VNC mouse on PCIe bus 8
  3. CPU topology. 8 vCPUs are presented to Windows, built from the next three values:
  4. 1 socket (or the VM will attempt to put each core on its own socket, which can confuse apps)
  5. 4 cores (physical topological cores) (See: https://unix.stackexchange.com/questions/406137/why-does-top-show-a-different-number-of-cores-than-cpuinfo)
  6. 2 threads operating on each core. This makes 8 = 1 * 4 * 2
  7. Amount of RAM to allot the VM
  8. put up to 8 disks on a single AHCI controller. Without this, each added disk pushes the network devices that follow onto higher slot numbers, which makes Windows see them as new interfaces
  9. e1000 works out-of-the-box, but ideally this should be changed to virtio-net with drivers installed inside the guest (See: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.248-1/)
  10. passthruN is how you declare a passthrough device: first the bus/slot/function triplet from $ pciconf -lv or # vm passthru, then the guest PCI slot and function (slot:function) you'd like it to occupy. See Notes for more info on which guest slot numbers are safe to use.
  11. The second function of the GPU (the HDMI audio controller), passed through on the same guest slot, function 1.
  12. Disk setup. The emulated NVMe type works well for most people.
  13. Disk name, anything, ending with .img
  14. Windows specific disk options. Not needed for a Linux guest.
  15. Windows expects to get localtime. Not needed for a Linux guest.
  16. VNC password to be used along with the framebuffer and mouse. Many VNC applications will not let you connect to a passwordless server, so put something here.

Create VM
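
The config above can double as a template. Assuming it was saved as /vm/.templates/windows.conf, create the guest and its disk image (the 100G size is only an example):

# vm create -t windows -s 100G VM_NAME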

If you are not installing from an ISO, skip this step. Otherwise boot the installer (vm-bhyve looks for the ISO under /vm/.iso; # vm iso URL downloads one there) - # vm install VM_NAME WIN10_ISO_NAME.iso

We bring the bridge down so that Windows setup doesn't lock you into a Microsoft account. Yes, this is really the only way. It's unbelievable. - # ifconfig vm-public down

Log into the VM via VNC (vm-bhyve assigns a port starting at 5900; # vm list shows the address for a running guest).

Once setup is done and you're past the forced Microsoft account screen, bring the bridge back up - # ifconfig vm-public up

Post Setup
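
vm-bhyve logs the full bhyve device map for each guest; the entry below comes from /vm/VM_NAME/vm-bhyve.log: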

Jun 21 20:55:04:  [bhyve devices: -s 0,hostbridge -s 31,lpc -s 4:0,nvme,/vm/VM_NAME/disk0.img,maxq=16,qsz=8,ioslots=1,sectsz=512,ser=ABCDEFGH -s 5:0,virtio-net,tap0,mac=58:9c:fc:03:ad:28 -s 6:0,passthru,1/0/0 -s 6:1,passthru,1/0/1]

Locate the PCI triplets from earlier (1/0/0 and 1/0/1). Notice that they're attached to bhyve arguments which describe their guest PCIe locations as 6:0 and 6:1. If GPU passthrough is not working, confirm in this log that no guest slot is described twice.

Almost every additional PCI device should be passed on its own slot number (e.g. 7:0), but a device with multiple functions (like a GPU, with a video function and an audio function) should share one slot across its functions. Outside that case, sharing a slot is rare.

If this VM needs to be tweaked after it's running, edit /vm/VM_NAME/VM_NAME.conf

After the guest is up and stable and has VirtIO drivers for networking (and drivers for everything else), switch the NIC emulation from e1000 to virtio-net:
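
A sketch of the change; # vm configure VM_NAME opens the guest config in $EDITOR. Edit the line:

network0_type="virtio-net"

Then restart the guest so Windows sees the new device.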

Notes

The default slots used by vm-bhyve, which should NOT be reused for passthrough (all visible in the log above and the config callouts): 0 (hostbridge), 4 (disks), 5 (network), 7 (VNC framebuffer), 8 (xhci mouse), and 31 (lpc).

Windows sometimes struggles with high PCIe slot numbers, which is why the example keeps the GPU on slot 6.