In Windows 11 Pro for Workstations, a Mellanox ConnectX-4 MCX455A-ECAT network adapter (PCIe 3.0 x16, 100GbE / EDR IB VPI) supports SMB Direct, client- and server-side SMB Multichannel, and RDMA (Remote Direct Memory Access):
PS C:\> Set-SmbClientConfiguration -EnableMultiChannel $true

Confirm
Are you sure you want to perform this action?
Performing operation 'Modify' on Target 'SMB Client Configuration'.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):
PS C:\> Set-SmbServerConfiguration -EnableMultiChannel $true

Confirm
Are you sure you want to perform this action?
Performing operation 'Modify' on Target 'SMB Server Configuration'.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):
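To verify the change on both sides, the SMB configuration and the adapter's RDMA state can be queried; a brief sketch (output omitted, not part of the original steps):

PS C:\> Get-SmbClientConfiguration | Select-Object EnableMultiChannel
PS C:\> Get-SmbServerConfiguration | Select-Object EnableMultiChannel
PS C:\> Get-NetAdapterRdma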
Have a look at the TrueNAS disk speed benchmark over a 100Gbps Ethernet network, from Windows 11 Pro for Workstations with SMB Direct, client/server SMB Multichannel and RDMA enabled:
In Windows Server 2022, with an SMB shared folder on Storage Spaces:
Run Windows PowerShell as the Administrator user; RDMA Capable is True for both the SMB client and server interfaces (the query cmdlets are sketched after the banner below):
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
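The cmdlets that report the RDMA Capable column are not shown above; a minimal sketch of the usual queries (output omitted):

PS C:\> Get-SmbClientNetworkInterface
PS C:\> Get-SmbServerNetworkInterface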
Have a look at the Windows Server 2022 disk speed benchmark over a 100Gbps Ethernet network, from Windows 11 Pro for Workstations with SMB Direct, client/server SMB Multichannel and RDMA enabled:
Modify the config.plist file under the EFI folder -> OC folder, changing the default screen resolution from 1080P (1920x1080@32) to 2K (2560x1440@32). See the Configuration.pdf file in the KVM-Opencore release for more information.
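For reference, the resolution lives under the UEFI -> Output section of config.plist; a minimal fragment as a sketch (all sibling keys omitted, see Configuration.pdf for the full schema):

<key>UEFI</key>
<dict>
    <key>Output</key>
    <dict>
        <key>Resolution</key>
        <string>2560x1440@32</string>
    </dict>
</dict>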
Connect with Apple Remote Desktop, so the exact same Apple keyboard mapping can be used:
The current KVM-Opencore build has no support set up for an audio device or a GPU accelerator.
Also tried to set up Parsec in macOS on Proxmox, without success.
The MikroTik CRS504-4XQ-IN cloud switch can handle FOUR QSFP28 100Gbps ports, equal to 16 x 25Gbps of bandwidth.
To set up single link mode, only the first QSFP28 sub-interface needs to be configured, while the remaining three sub-interfaces should remain enabled. For example, connect a Mellanox MCX455A-ECAT ConnectX-4 InfiniBand/Ethernet adapter card (EDR IB 100Gbps and 100GbE, single-port QSFP28, PCIe 3.0 x16) to the switch using an ONTi DAC QSFP28 100Gbps cable.
Change FEC Mode to fec91.
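On RouterOS this can also be done from the CLI; a minimal sketch, assuming the 100Gbps link sits on the first QSFP28 sub-interface:

[admin@MikroTik] > /interface ethernet set qsfp28-1-1 fec-mode=fec91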
Ethernet Forward Error Correction (FEC) is a technique used to improve the reliability of data transmission over Ethernet networks by detecting and correcting errors in data packets. The two most common types of FEC used in Ethernet networks are CL74 and CL91.
CL74 and CL91 refer to two different types of FEC codes, each with its own characteristics and performance. Here’s a brief comparison between the two:
Code Rate: CL91 has a higher code rate of 91.6%, which means that only 8.4% of the data transmitted is used for error correction.
In addition, set up the switch ports connected to an ONTi QSFP28 40Gbps to 4x SFP+ breakout cable:
$ ssh -l admin MikroTik.local
[admin@MikroTik] > /interface ethernet set qsfp28-1-1 auto-negotiation=no speed=10G-baseCR
[admin@MikroTik] > /interface ethernet set qsfp28-1-2 auto-negotiation=no speed=10G-baseCR
[admin@MikroTik] > /interface ethernet set qsfp28-1-3 auto-negotiation=no speed=10G-baseCR
[admin@MikroTik] > /interface ethernet set qsfp28-1-4 auto-negotiation=no speed=10G-baseCR
Speed test
On the iperf3 server, run four instances listening on four ports to handle connections in parallel:
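A minimal sketch of such a test; the port numbers and the server address 192.168.0.10 used on the client side are placeholders, not values from the original setup:

# server side: one iperf3 listener per port
$ iperf3 -s -p 5201 &
$ iperf3 -s -p 5202 &
$ iperf3 -s -p 5203 &
$ iperf3 -s -p 5204 &

# client side: four parallel streams against each listener
$ iperf3 -c 192.168.0.10 -p 5201 -P 4 -t 30 &
$ iperf3 -c 192.168.0.10 -p 5202 -P 4 -t 30 &
$ iperf3 -c 192.168.0.10 -p 5203 -P 4 -t 30 &
$ iperf3 -c 192.168.0.10 -p 5204 -P 4 -t 30 &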
A Synology NAS DS920+ has two built-in 1Gbps Ethernet adapters and two 2.5Gbps USB 3.0 Ethernet adapters. Now set up network bonding / link aggregation on them.
Enable Network Link Aggregation Mode
Pick the network devices for the bond
Setup network
Accept the network interface change after bonding
Two network adapters have been set up with DHCP-allocated addresses:
ens18 192.168.0.246
ens19 192.168.0.105
Create a Network Link Aggregation Interface
After the Link Aggregation Interface bond1 is created, the original IP addresses of the two network adapters ens18 and ens19 are gone; bond1 holds the ONLY network interface address for TrueNAS.
After a default installation, only a tiny amount of disk space is given to the Proxmox root volume. An insufficient disk space issue came up after several VMs were installed and backups were made, as installation image files (*.ISO) and backups are all put into the root volume.
In storage.cfg:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  # open                 0
  LV Size                <3.58 TiB

  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV Status              available
  # open                 2
  LV Size                8.00 GiB

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV Status              available
  # open                 1
  LV Size                <112.25 GiB
The solution is to reclaim the space held by the pve/data volume (LVM does not yet support reducing thin pools in size, so the volume has to be removed and recreated), and then to increase the size of the pve/root volume.
Back up all VMs, then remove the pve/data volume:
root@pve:~# lvremove pve/data
Removing pool pve/data will remove 7 dependent volume(s). Proceed? [y/n]: y
Using the disk space just released, increase the size of the pve/root volume, in this case by 20% of the current FREE space:
root@pve:~# lvextend -l +20%FREE /dev/pve/root
  Size of logical volume pve/root changed from <112.25 GiB (28735 extents) to <851.09 GiB (217878 extents).
  Logical volume pve/root successfully resized.

root@pve:~# resize2fs /dev/pve/root
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/pve/root is mounted on /; on-line resizing required
old_desc_blocks = 15, new_desc_blocks = 107
The filesystem on /dev/pve/root is now 223107072 (4k) blocks long.
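Before the thin-pool conversion further below, pve/data has to exist again as a regular logical volume; that recreation step is not shown here, but a minimal sketch (the 80%FREE sizing is an assumption, not the original value) would be:

root@pve:~# lvcreate -l 80%FREE -n data pve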
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  126
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.73 TiB
  PE Size               4.00 MiB
  Total PE              976498
  Alloc PE / Size       967446 / 3.69 TiB
  Free  PE / Size       9052 / <35.36 GiB
  VG UUID               UEsIZR-TBsz-UYlP-u2FO-2AbC-uq4d-vcb35f
Convert pve/data into a thin pool; the pool metadata volume is usually sized at about 1% of the pve/data volume:
root@pve:~# lvconvert --type thin-pool --poolmetadatasize 36G pve/data
  Reducing pool metadata size 36.00 GiB to maximum usable size <15.88 GiB.
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  WARNING: Converting pve/data to thin pool's data volume with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert pve/data? [y/n]: y
  Converted pve/data to thin pool.
Verify the current disk volumes; the pve/root volume now has more space:
In the motherboard BIOS, under Chipset (e.g. on an ASRock Z790 Riptide WiFi), enable VT-d (Intel Virtualization Technology for Directed I/O), which provides IOMMU (Input Output Memory Management Unit) services, and SR-IOV (Single Root I/O Virtualization), a technology that allows a physical PCIe device to present itself multiple times on the PCIe bus.
Enable SR-IOV for the Mellanox network adapter, e.g. the Mellanox ConnectX-4 MCX455A-ECAT PCIe 3.0 x16 100GbE VPI EDR IB, in the same motherboard BIOS.
Add Proxmox No Subscription URL:
root@pve:~# cat /etc/apt/sources.list
deb http://ftp.au.debian.org/debian bookworm main contrib

deb http://ftp.au.debian.org/debian bookworm-updates main contrib

# security updates
deb http://security.debian.org bookworm-security main contrib

# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
and run a package update:
root@pve:~# apt update
and install all build tools:
root@pve:~# apt install build-* dkms
Set/Pin Proxmox kernel version:
root@pve:~# proxmox-boot-tool kernel pin 6.8.4-2-pve
Setting '6.8.4-2-pve' as grub default entry and running update-grub.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.8.4-2-pve
Found initrd image: /boot/initrd.img-6.8.4-2-pve
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Adding boot menu entry for UEFI Firmware Settings ...
done
and verify the Proxmox kernel version:
root@pve:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.
Then install the matching Proxmox kernel headers:

root@pve:~# apt install proxmox-headers-6.8.4-2-pve
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  proxmox-headers-6.8.4-2-pve
0 upgraded, 1 newly installed, 0 to remove and 39 not upgraded.
Need to get 13.7 MB of archives.
After this operation, 97.0 MB of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 proxmox-headers-6.8.4-2-pve amd64 6.8.4-2 [13.7 MB]
Fetched 13.7 MB in 1s (23.8 MB/s)
Selecting previously unselected package proxmox-headers-6.8.4-2-pve.
(Reading database ... 70448 files and directories currently installed.)
Preparing to unpack .../proxmox-headers-6.8.4-2-pve_6.8.4-2_amd64.deb ...
Unpacking proxmox-headers-6.8.4-2-pve (6.8.4-2) ...
Setting up proxmox-headers-6.8.4-2-pve (6.8.4-2) ...
Download the Linux i915 driver with SR-IOV support for the Linux kernel:
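As a sketch, assuming the out-of-tree i915-sriov-dkms project is the driver used here (the repository URL and steps are an assumption; follow that project's README for the exact DKMS and kernel command line setup):

root@pve:~# git clone https://github.com/strongtz/i915-sriov-dkms.git
root@pve:~# cd i915-sriov-dkms
# register and build the module with dkms against the pinned 6.8.4-2-pve kernel,
# then enable the iGPU virtual functions as described in the project's README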
Add a PCI device to the Ubuntu VM and choose one of the virtual GPU functions; enable Primary GPU and PCI Express in the options; choose none for Display and host for Processors:
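The same can be done from the Proxmox shell with qm set; a minimal sketch, where the VM ID 101 and the virtual function address 0000:00:02.1 are placeholders for this host:

root@pve:~# qm set 101 --hostpci0 0000:00:02.1,pcie=1,x-vga=1
root@pve:~# qm set 101 --vga none --cpu host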
Set up a remote desktop connection to Ubuntu:
root@nucleus:~# apt install ubuntu-desktop
root@nucleus:~# apt install xrdp
root@nucleus:~# systemctl enable xrdp
Fix the Remote Desktop audio over HDMI issue with the xrdp-installer script, enabling sound redirection:
terrence@nucleus:~$ ./xrdp-installer-1.5.1.sh -s
then reboot the VM.
Now the audio device becomes the xrdp input / output.
Windows Server 2022
Setting up Windows Server 2022 in Proxmox is similar to Windows 11. There were a few issues, such as the GPU:
just disable the GPU device and then enable it again, and it will work correctly.
There is also no sound after installation, but the Windows Audio Service can be enabled and Remote Audio chosen:
then audio over HDMI to the remote desktop can work.
In addition, User Auto Logon can be set up after Windows Server 2022 startup; a registry sketch follows the license check below. And check the Windows license by running:
PS C:\Users\Administrator> slmgr -dlv
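For the auto logon part, the usual approach is the Winlogon registry values; a minimal sketch, where the user name and password are placeholders (note that DefaultPassword is stored in plain text):

PS C:\Users\Administrator> $winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
PS C:\Users\Administrator> Set-ItemProperty -Path $winlogon -Name AutoAdminLogon -Value '1'
PS C:\Users\Administrator> Set-ItemProperty -Path $winlogon -Name DefaultUserName -Value 'Administrator'
PS C:\Users\Administrator> Set-ItemProperty -Path $winlogon -Name DefaultPassword -Value '<password>'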
Now the Ubuntu, Windows 11 and Windows Server 2022 VMs, all running in Proxmox, can be accessed over remote desktop: