Fix network object name already existed issue in Windows

When renaming a network adapter in Windows:

PS C:\> Rename-NetAdapter -Name Ethernet -NewName Mellanox

an error is thrown: Rename-NetAdapter : {Object Exists} An attempt was made to create an object and the object name already existed.

The workaround is (a command-line alternative is sketched after the list):

  1. Open Device Manager in the Windows Control Panel
  2. Under the View menu, enable Show hidden devices
  3. Uninstall the old network adapter that still carries the old name
  4. Then rename the network adapter again
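
As an alternative to steps 1-3, hidden (ghost) network devices can also be listed and removed from PowerShell. This is a sketch only: the instance ID below is a placeholder, and pnputil /remove-device requires a reasonably recent Windows build.

PS C:\> # List network-class PnP devices that are no longer present (ghost devices)
PS C:\> Get-PnpDevice -Class Net | Where-Object Status -eq 'Unknown' | Format-Table FriendlyName, InstanceId
PS C:\> # Remove the stale adapter by its instance ID (placeholder shown), then rename again
PS C:\> pnputil /remove-device "PCI\VEN_15B3&DEV_1013\..."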

Rename Network in Windows


How to enable SMB Direct client/server in Windows 11 Pro for Workstations

In Windows 11 Pro for Workstations, set up a Mellanox ConnectX-4 MCX455A-ECAT PCIe x16 3.0 100GbE VPI EDR IB network adapter to support SMB Direct, client- and server-side SMB Multichannel, and RDMA (Remote Direct Memory Access):

Windows 11 Pro for Workstations

Open Windows Terminal as Administrator.

Enable SMB Direct:

PS C:\> Enable-WindowsOptionalFeature -Online -FeatureName SMBDirect
Path :
Online : True
RestartNeeded : False

Enable SMB Multichannel on the client-side:

PS C:\> Set-SmbClientConfiguration -EnableMultiChannel $true
Confirm
Are you sure you want to perform this action?
Performing operation 'Modify' on Target 'SMB Client Configuration'.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"):

PS C:\> Get-SmbClientConfiguration
...
EnableMultiChannel : True
...

Enable SMB Multichannel on the server-side:

PS C:\> Set-SmbServerConfiguration -EnableMultiChannel $true
Confirm
Are you sure you want to perform this action?
Performing operation 'Modify' on Target 'SMB Server Configuration'.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"):

PS C:\> Get-SmbServerConfiguration
...
EnableMultiChannel : True
...

Enable RDMA for a specific interface:

PS C:\> Enable-NetAdapterRDMA Mellanox
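
To confirm RDMA is actually enabled on the adapter, query the adapter's RDMA state as well (a quick check; the adapter name Mellanox matches the rename above):

PS C:\> Get-NetAdapterRdma -Name Mellanox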

Verify the current state of the SMB Direct optional feature:

PS C:\> Get-WindowsOptionalFeature -Online -FeatureName SMBDirect
FeatureName : SmbDirect
DisplayName : SMB Direct
Description : Remote Direct Memory Access (RDMA) support for the SMB 3.x file sharing protocol
RestartRequired : Possible
State : Enabled
CustomProperties :
PS C:\> Get-SmbClientNetworkInterface
Interface Index RSS Capable RDMA Capable Speed IpAddresses Friendly Name
--------------- ----------- ------------ ----- ----------- -------------
22 True True 100 Gbps {fe80::708:c529:1bcb:2432, 192.168.68.67} Mellanox

PS C:\> Get-SmbServerNetworkInterface
Scope Name Interface Index RSS Capable RDMA Capable Speed IpAddress
---------- --------------- ----------- ------------ ----- ---------
* 22 True True 100 Gbps fe80::708:c529:1bcb:2432
* 22 True True 100 Gbps 192.168.68.67
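
Once a share from the server is mounted, whether multichannel and RDMA are actually used for the connection can be checked from the client (a sketch using the standard SMB cmdlets):

PS C:\> Get-SmbMultichannelConnection
PS C:\> Get-SmbConnection | Select-Object ServerName, ShareName, Dialect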

Have a look at the TrueNAS disk speed benchmark over a 100Gbps Ethernet network, from Windows 11 Pro for Workstations with SMB Direct, client/server SMB Multichannel and RDMA enabled:

TrueNAS disk speed benchmark

In Windows Server 2022, with an SMB shared folder on Storage Spaces:

Windows Server 2022

Run Windows PowerShell as Administrator; RDMA Capable is True for both the SMB client and server interfaces:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows

PS C:\Users\Administrator> Get-SmbServerNetworkInterface
Scope Name Interface Index RSS Capable RDMA Capable Speed IpAddress
---------- --------------- ----------- ------------ ----- ---------
* 5 True True 100 Gbps fe80::9ee0:7f4c:5128:863b
* 5 True True 100 Gbps 192.168.68.66

PS C:\Users\Administrator> Get-SmbClientNetworkInterface
Interface Index RSS Capable RDMA Capable Speed IpAddresses Friendly Name
--------------- ----------- ------------ ----- ----------- -------------
5 True True 100 Gbps {fe80::9ee0:7f4c:5128:863b, 192.168.68.66} Mellanox

Have a look at the Windows Server 2022 disk speed benchmark over a 100Gbps Ethernet network, from Windows 11 Pro for Workstations with SMB Direct, client/server SMB Multichannel and RDMA enabled:

Windows Server 2022 disk speed benchmark


Fixing TrueNAS SMB IP binding

After changing network settings, the TrueNAS IP address has been updated. When modifying the SMB configuration, an error like:

smb_update.bindip.0: IP address [192.168.0.51] is not a configured address for this server

is thrown.

To reset and clean up the already-bound IP, log in to the TrueNAS console and run:

root@TrueNAS[~]# midclt call smb.update '{"bindip": []}'
{
  "id": 1,
  "netbiosname": "TRUENAS",
  "netbiosalias": [],
  "workgroup": "IGLOO STUDIO",
  "description": "TrueNAS Server",
  "unixcharset": "UTF-8",
  "loglevel": "MINIMUM",
  "syslog": false,
  "aapl_extensions": true,
  "localmaster": true,
  "guest": "nobody",
  "filemask": "",
  "dirmask": "",
  "smb_options": "",
  "bindip": [],
  "cifs_SID": "S-1-5-21-2487580926-3122677641-100607549",
  "ntlmv1_auth": false,
  "enable_smb1": false,
  "admin_group": null,
  "next_rid": 0,
  "multichannel": true,
  "netbiosname_local": "TRUENAS"
}

then the SMB configuration can be modified successfully.
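
To double-check that the binding list is now empty, the current SMB configuration can be read back through the same middleware client (smb.config is the read-only counterpart of smb.update):

root@TrueNAS[~]# midclt call smb.config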


Running MacOS on Proxmox

Following the instructions to run the latest MacOS, Sonoma 14.6.1, on Proxmox - The Definitive Guide to Running MacOS in Proxmox.

Proxmox - MacOS

A few additional notes:

  • You can log in with an Apple ID
  • Modify the config.plist file under the EFI folder -> OC folder to change the default screen resolution from 1080p 1920x1080@32 to 2K 2560x1440@32. More information is in the Configuration.pdf file in the KVM-Opencore release

Connect with Apple Remote Desktop, so the exact same Apple keyboard mapping can be used:

Proxmox - Apple Remote Desktop

The current KVM-Opencore release has no support set up for an audio device or GPU acceleration.

Also tried to set up Parsec in MacOS on Proxmox, without success.

Proxmox - Parsec

Setup MikroTik CRS504-4XQ-IN and run a speed test

The MikroTik CRS504-4XQ-IN cloud switch can handle FOUR QSFP28 100Gbps ports, equal to 16 x 25Gbps of bandwidth.

MikroTik - Interfaces

To set up single link mode, only the first QSFP28 sub-interface needs to be configured, while the remaining three sub-interfaces should remain enabled. For example, connect a Mellanox MCX455A-ECAT ConnectX-4 InfiniBand/Ethernet adapter card (EDR IB 100Gbps and 100GbE, single-port QSFP28, PCIe 3.0 x16) to the switch using an ONTi DAC QSFP28 100Gbps cable.

Change the FEC mode to fec91.
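
On the switch this can be set per QSFP28 sub-interface from the RouterOS console (a sketch; the interface name qsfp28-1-1 matches the single link setup above):

[admin@MikroTik] > /interface ethernet set qsfp28-1-1 fec-mode=fec91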

Ethernet Forward Error Correction (FEC) is a technique used to improve the reliability of data transmission over Ethernet networks by detecting and correcting errors in data packets. The two most common types of FEC used in Ethernet networks are CL74 and CL91.

CL74 and CL91 refer to two different types of FEC codes, each with its own characteristics and performance. Here’s a brief comparison between the two:

Code rate: CL91 has a higher code rate of 91.6%, which means that only 8.4% of the transmitted data is used for error correction.

In addition, set up the switch ports connected to an ONTi QSFP28 40Gbps to 4x SFP+ breakout cable:

$ ssh -l admin MikroTik.local

[admin@MikroTik] > /interface ethernet set qsfp28-1-1 auto-negotiation=no speed=10G-baseCR
[admin@MikroTik] > /interface ethernet set qsfp28-1-2 auto-negotiation=no speed=10G-baseCR
[admin@MikroTik] > /interface ethernet set qsfp28-1-3 auto-negotiation=no speed=10G-baseCR
[admin@MikroTik] > /interface ethernet set qsfp28-1-4 auto-negotiation=no speed=10G-baseCR

Speed test

On the iperf3 server, listen on 4 ports to handle connections in parallel:

$ iperf3 -s -p 5201 & iperf3 -s -p 5202 & iperf3 -s -p 5203 & iperf3 -s -p 5204 &

In a MacBook Pro with WiFi-6 connection, run:

$ iperf3 -c MikroTik.local -p 5201 -P 4 -t 1000

In a Mac Studio with 10Gbps Ethernet connection, run:

$ iperf3 -c MikroTik.local -p 5202 -P 8 -t 1000 -B 192.168.0.104

In a Windows 11 PC with 100Gbps Ethernet connection, run:

$ iperf3 -c MikroTik.local -p 5203 -P 2 -t 1000

Check the speed on the switch console:

MikroTik - Speed

and in a graph:

MikroTik - Performance


Setup network bond in Synology NAS

A Synology DS920+ NAS has two built-in 1Gbps Ethernet adapters and two 2.5Gbps USB 3.0 Ethernet adapters. Now set up network bonding / link aggregation on them.

  • Enable Network Link Aggregation Mode

Synology - Network Link Aggregation Mode

  • Pick the network devices to include in the bond

Synology - Physical devices

  • Set up the network

Synology - Physical devices network setup

  • Accept the network interface change after bonding

Synology - Bond warning message

  • Network bonded

Synology - Network bonded

  • Network bond service order

Synology - Network bond service order
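
Once the bond is up and SSH is enabled on the NAS, the bonding status can also be inspected from the shell (a sketch, assuming the first bond gets the default name bond0):

$ cat /proc/net/bonding/bond0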

Setup TrueNAS Network and Link Aggregation

Two network adapters have been set up with DHCP-allocated addresses:

  1. ens18 192.168.0.246
  2. ens19 192.168.0.105
  • Create a Network Link Aggregation Interface

TrueNAS - Link Aggregation

After the Link Aggregation interface bond1 is created, the original IP addresses of the two network adapters ens18 and ens19 are gone; bond1 holds the ONLY network interface address for TrueNAS.

TrueNAS - Network

Samba setup and share in TrueNAS

With TrueNAS installed, a few setup steps are needed to let users access shared Samba directories.

  • Enable SMB Samba service in TrueNAS

TrueNAS - Enable SMB service

TrueNAS - SMB Settings

  • Create a user account with a home directory and all access permissions

TrueNAS - Create user

  • Share the TrueNAS pool that was created

TrueNAS - Sharing

Then the shared directory on TrueNAS can be accessed from both Windows and Mac:

TrueNAS - Access

Make sure there is NO lock icon on the folder (the folder can be unlocked in the MacOS Finder).

Increase, decrease and resize Proxmox disk volume

After a default installation, only a tiny amount of disk space is given to the Proxmox root volume. An insufficient disk space issue came up after several VMs were installed and backups were made, as installation image files (*.ISO) and backups are all put into the root volume.

In storage.cfg:

root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

Run lvdisplay:

root@pve:~# lvdisplay
--- Logical volume ---
LV Name data
VG Name pve
# open 0
LV Size <3.58 TiB

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV Status available
# open 2
LV Size 8.00 GiB

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV Status available
# open 1
LV Size <112.25 GiB

The solution is to shrink the pve/data volume and grow pve/root; since reducing thin pools in size is not supported yet, pve/data has to be removed and recreated, and then the size of the pve/root volume can be increased.

Back up all VMs, then remove the pve/data volume:

root@pve:~# lvremove pve/data
Removing pool pve/data will remove 7 dependent volume(s). Proceed? [y/n]: y

Using the disk space just released, increase the size of the pve/root volume, by 20% of the current FREE space in this case:

root@pve:~# lvextend -l +20%FREE /dev/pve/root
Size of logical volume pve/root changed from <112.25 GiB (28735 extents) to <851.09 GiB (217878 extents).
Logical volume pve/root successfully resized.

root@pve:~# resize2fs /dev/pve/root
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/pve/root is mounted on /; on-line resizing required
old_desc_blocks = 15, new_desc_blocks = 107
The filesystem on /dev/pve/root is now 223107072 (4k) blocks long.

Create a new pve/data volume:

root@pve:~# lvcreate -L2920G -ndata pve
Logical volume "data" created.

Check free disk space:

root@pve:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 126
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <3.73 TiB
PE Size 4.00 MiB
Total PE 976498
Alloc PE / Size 967446 / 3.69 TiB
Free PE / Size 9052 / <35.36 GiB
VG UUID UEsIZR-TBsz-UYlP-u2FO-2AbC-uq4d-vcb35f

Convert the new pve/data volume to a thin pool with a metadata volume, usually sized at about 1% of the pve/data volume:

root@pve:~# lvconvert --type thin-pool --poolmetadatasize 36G pve/data
Reducing pool metadata size 36.00 GiB to maximum usable size <15.88 GiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
WARNING: Converting pve/data to thin pool's data volume with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert pve/data? [y/n]: y
Converted pve/data to thin pool.
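
With the thin pool recreated, the VMs backed up earlier can be restored onto it (a sketch; the backup file name below is a placeholder for an actual vzdump archive):

root@pve:~# qmrestore /var/lib/vz/dump/vzdump-qemu-100-YYYY_MM_DD-HH_MM_SS.vma.zst 100 --storage local-lvm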

Verify the current disk volumes; the pve/root volume has more space now:

root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.8M 3.2G 1% /run
/dev/mapper/pve-root 841G 71G 736G 9% /
tmpfs 16G 46M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 192K 114K 74K 61% /sys/firmware/efi/efivars
/dev/nvme3n1p2 1022M 12M 1011M 2% /boot/efi
/dev/fuse 128M 20K 128M 1% /etc/pve
tmpfs 3.2G 0 3.2G 0% /run/user/0
root@pve:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 100M 0 part
├─sda2 8:2 0 16M 0 part
├─sda3 8:3 0 120.3G 0 part
├─sda4 8:4 0 625M 0 part
└─sda5 8:5 0 102.5G 0 part
nvme2n1 259:0 0 3.6T 0 disk
nvme4n1 259:1 0 3.6T 0 disk
nvme0n1 259:2 0 3.6T 0 disk
nvme1n1 259:3 0 3.6T 0 disk
nvme5n1 259:4 0 3.6T 0 disk
nvme3n1 259:5 0 3.7T 0 disk
├─nvme3n1p1 259:6 0 1007K 0 part
├─nvme3n1p2 259:7 0 1G 0 part /boot/efi
└─nvme3n1p3 259:8 0 3.7T 0 part
├─pve-swap 252:0 0 8G 0 lvm [SWAP]
├─pve-root 252:1 0 854.7G 0 lvm /
├─pve-data_tmeta 252:2 0 15.9G 0 lvm
│ └─pve-data-tpool 252:4 0 2.9T 0 lvm
│ ├─pve-data 252:5 0 2.9T 1 lvm
│ ├─pve-vm--100--disk--0 252:6 0 240G 0 lvm
│ ├─pve-vm--101--disk--0 252:7 0 4M 0 lvm
│ ├─pve-vm--101--disk--1 252:8 0 240G 0 lvm
│ ├─pve-vm--101--disk--2 252:9 0 4M 0 lvm
│ ├─pve-vm--102--disk--0 252:10 0 4M 0 lvm
│ ├─pve-vm--102--disk--1 252:11 0 240G 0 lvm
│ └─pve-vm--102--disk--2 252:12 0 4M 0 lvm
└─pve-data_tdata 252:3 0 2.9T 0 lvm
└─pve-data-tpool 252:4 0 2.9T 0 lvm
├─pve-data 252:5 0 2.9T 1 lvm
├─pve-vm--100--disk--0 252:6 0 240G 0 lvm
├─pve-vm--101--disk--0 252:7 0 4M 0 lvm
├─pve-vm--101--disk--1 252:8 0 240G 0 lvm
├─pve-vm--101--disk--2 252:9 0 4M 0 lvm
├─pve-vm--102--disk--0 252:10 0 4M 0 lvm
├─pve-vm--102--disk--1 252:11 0 240G 0 lvm
└─pve-vm--102--disk--2 252:12 0 4M 0 lvm

root@pve:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 2.85t 3.19 0.32
root pve -wi-ao---- <854.69g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi-a-tz-- 240.00g data 10.38
vm-101-disk-0 pve Vwi-a-tz-- 4.00m data 14.06
vm-101-disk-1 pve Vwi-a-tz-- 240.00g data 19.50
vm-101-disk-2 pve Vwi-a-tz-- 4.00m data 1.56
vm-102-disk-0 pve Vwi-a-tz-- 4.00m data 14.06
vm-102-disk-1 pve Vwi-a-tz-- 240.00g data 8.98
vm-102-disk-2 pve Vwi-a-tz-- 4.00m data 1.56


Proxmox and SR-IOV support for Intel GPU and Mellanox network adapter

  • Enable VT-d (Intel Virtualization Technology for Directed I/O), which provides IOMMU (Input Output Memory Management Unit) services, and SR-IOV (Single Root I/O Virtualization), a technology that allows a physical PCIe device to present itself multiple times through the PCIe bus, in the motherboard BIOS under Chipset, e.g. on an ASRock Z790 Riptide WiFi.

  • Enable SR-IOV for the Mellanox network adapter, e.g. Mellanox ConnectX-4 MCX455A-ECAT PCIe x16 3.0 100GbE VPI EDR IB, in the same motherboard BIOS.

  • Add the Proxmox No-Subscription repository URL:

root@pve:~# cat /etc/apt/sources.list
deb http://ftp.au.debian.org/debian bookworm main contrib

deb http://ftp.au.debian.org/debian bookworm-updates main contrib

# security updates
deb http://security.debian.org bookworm-security main contrib

# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

and run a package update:

root@pve:~# apt update

and install all build tools:

root@pve:~# apt install build-* dkms
  • Set/Pin Proxmox kernel version:
root@pve:~# proxmox-boot-tool kernel pin 6.8.4-2-pve
Setting '6.8.4-2-pve' as grub default entry and running update-grub.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.8.4-2-pve
Found initrd image: /boot/initrd.img-6.8.4-2-pve
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Adding boot menu entry for UEFI Firmware Settings ...
done

and verify the Proxmox kernel version:

root@pve:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
6.8.4-2-pve

Pinned kernel:
6.8.4-2-pve
  • Install the Proxmox kernel headers package:
root@pve:~# apt install proxmox-headers-6.8.4-2-pve
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
proxmox-headers-6.8.4-2-pve
0 upgraded, 1 newly installed, 0 to remove and 39 not upgraded.
Need to get 13.7 MB of archives.
After this operation, 97.0 MB of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 proxmox-headers-6.8.4-2-pve amd64 6.8.4-2 [13.7 MB]
Fetched 13.7 MB in 1s (23.8 MB/s)
Selecting previously unselected package proxmox-headers-6.8.4-2-pve.
(Reading database ... 70448 files and directories currently installed.)
Preparing to unpack .../proxmox-headers-6.8.4-2-pve_6.8.4-2_amd64.deb ...
Unpacking proxmox-headers-6.8.4-2-pve (6.8.4-2) ...
Setting up proxmox-headers-6.8.4-2-pve (6.8.4-2) ...
  • Download the Linux i915 driver with SR-IOV support for the Linux kernel:
root@pve:~# git clone https://github.com/strongtz/i915-sriov-dkms

then change into the cloned repository and run:

root@pve:~/i915-sriov-dkms# cat VERSION
2024.07.24

root@pve:~/i915-sriov-dkms# dkms add .
Creating symlink /var/lib/dkms/i915-sriov-dkms/2024.07.24/source -> /usr/src/i915-sriov-dkms-2024.07.24

and build and install the i915-sriov-dkms Linux kernel module:

root@pve:~/i915-sriov-dkms# dkms install -m i915-sriov-dkms -v $(cat VERSION) --force
Sign command: /lib/modules/6.8.4-2-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Certificate or key are missing, generating self signed certificate for MOK...

Building module:
Cleaning build area...
make -j20 KERNELRELEASE=6.8.4-2-pve -C /lib/modules/6.8.4-2-pve/build M=/var/lib/dkms/i915-sriov-dkms/2024.07.24/build.......
Signing module /var/lib/dkms/i915-sriov-dkms/2024.07.24/build/i915.ko
Cleaning build area...

i915.ko:
Running module version sanity check.
- Original module
- Installation
- Installing to /lib/modules/6.8.4-2-pve/updates/dkms/
depmod...

and enable the i915-sriov-dkms module with up to a maximum of 7 VFs (Virtual Functions) on the Linux kernel command line:

root@pve:~# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX_DEFAULT
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7"
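
After editing GRUB_CMDLINE_LINUX_DEFAULT, regenerate the boot configuration so the new parameters take effect on the next boot (a sketch, assuming the default GRUB-based boot setup):

root@pve:~# update-grub
root@pve:~# update-initramfs -u
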
  • Reboot Proxmox and check the VGA devices:
root@pve:~# lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.1 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.2 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.3 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.4 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.5 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.6 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.7 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)

The first VGA device, 00:02.0, is the REAL GPU. The other 7 are virtual functions.
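
If the virtual functions do not show up after a reboot, depending on the i915-sriov-dkms version the VF count may also need to be set explicitly through sysfs (a hedged sketch; the PCI address 00:02.0 and the count of 7 match the setup above):

root@pve:~# echo 7 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs
root@pve:~# lspci | grep VGA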

Verify IOMMU has been enabled:

root@pve:~# dmesg | grep -i iommu
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.4-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7
[ 0.036486] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.4-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7
[ 0.036515] DMAR: IOMMU enabled
[ 0.090320] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.244292] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[ 0.270125] iommu: Default domain type: Translated
[ 0.270125] iommu: DMA domain TLB invalidation policy: lazy mode
[ 0.303735] DMAR: IOMMU feature fl1gp_support inconsistent
[ 0.303735] DMAR: IOMMU feature pgsel_inv inconsistent
[ 0.303736] DMAR: IOMMU feature nwfs inconsistent
[ 0.303736] DMAR: IOMMU feature dit inconsistent
[ 0.303737] DMAR: IOMMU feature sc_support inconsistent
[ 0.303737] DMAR: IOMMU feature dev_iotlb_support inconsistent
[ 0.304175] pci 0000:00:02.0: Adding to iommu group 0
[ 0.304544] pci 0000:00:00.0: Adding to iommu group 1
[ 0.304554] pci 0000:00:01.0: Adding to iommu group 2
[ 0.304562] pci 0000:00:01.1: Adding to iommu group 3
[ 0.304570] pci 0000:00:06.0: Adding to iommu group 4
[ 0.304582] pci 0000:00:14.0: Adding to iommu group 5
[ 0.304589] pci 0000:00:14.2: Adding to iommu group 5
[ 0.304598] pci 0000:00:15.0: Adding to iommu group 6
[ 0.304607] pci 0000:00:16.0: Adding to iommu group 7
[ 0.304614] pci 0000:00:17.0: Adding to iommu group 8
[ 0.304630] pci 0000:00:1a.0: Adding to iommu group 9
[ 0.304641] pci 0000:00:1b.0: Adding to iommu group 10
[ 0.304651] pci 0000:00:1c.0: Adding to iommu group 11
[ 0.304662] pci 0000:00:1c.1: Adding to iommu group 12
[ 0.304671] pci 0000:00:1c.2: Adding to iommu group 13
[ 0.304681] pci 0000:00:1c.3: Adding to iommu group 14
[ 0.304692] pci 0000:00:1c.4: Adding to iommu group 15
[ 0.304703] pci 0000:00:1d.0: Adding to iommu group 16
[ 0.304721] pci 0000:00:1f.0: Adding to iommu group 17
[ 0.304728] pci 0000:00:1f.3: Adding to iommu group 17
[ 0.304735] pci 0000:00:1f.4: Adding to iommu group 17
[ 0.304742] pci 0000:00:1f.5: Adding to iommu group 17
[ 0.304750] pci 0000:01:00.0: Adding to iommu group 18
[ 0.304758] pci 0000:02:00.0: Adding to iommu group 19
[ 0.304765] pci 0000:03:00.0: Adding to iommu group 20
[ 0.304781] pci 0000:04:00.0: Adding to iommu group 21
[ 0.304791] pci 0000:05:00.0: Adding to iommu group 22
[ 0.304801] pci 0000:06:00.0: Adding to iommu group 23
[ 0.304810] pci 0000:07:00.0: Adding to iommu group 24
[ 0.304834] pci 0000:08:00.0: Adding to iommu group 25
[ 0.304845] pci 0000:09:00.0: Adding to iommu group 26
[ 0.304857] pci 0000:0a:00.0: Adding to iommu group 27
[ 0.304866] pci 0000:0b:00.0: Adding to iommu group 28
[ 4.659395] pci 0000:00:02.1: DMAR: Skip IOMMU disabling for graphics
[ 4.659438] pci 0000:00:02.1: Adding to iommu group 29
[ 4.664441] pci 0000:00:02.2: DMAR: Skip IOMMU disabling for graphics
[ 4.664479] pci 0000:00:02.2: Adding to iommu group 30
[ 4.667692] pci 0000:00:02.3: DMAR: Skip IOMMU disabling for graphics
[ 4.667727] pci 0000:00:02.3: Adding to iommu group 31
[ 4.671096] pci 0000:00:02.4: DMAR: Skip IOMMU disabling for graphics
[ 4.671129] pci 0000:00:02.4: Adding to iommu group 32
[ 4.673545] pci 0000:00:02.5: DMAR: Skip IOMMU disabling for graphics
[ 4.673572] pci 0000:00:02.5: Adding to iommu group 33
[ 4.676357] pci 0000:00:02.6: DMAR: Skip IOMMU disabling for graphics
[ 4.676402] pci 0000:00:02.6: Adding to iommu group 34
[ 4.679192] pci 0000:00:02.7: DMAR: Skip IOMMU disabling for graphics
[ 4.679221] pci 0000:00:02.7: Adding to iommu group 35
  • Enable Virtual GPU for Windows 11 VM in Proxmox

Add a PCI device to the Windows 11 VM and choose one virtual GPU:

Proxmox - Windows 11, PCI device

Enable Primary GPU and PCI Express in options:

Proxmox - Windows 11, Primary GPU and PCI Express

Choose none for Display and host for Processors in the Windows 11 VM options:

Proxmox - Windows 11, Display

Proxmox - Windows 11, Display none

Start the Windows 11 VM and log in with Microsoft Remote Desktop (https://apps.microsoft.com/detail/9wzdncrfj3ps); a virtual GPU is available now. Run Task Manager and check the CPU and GPU load:

Proxmox - Windows 11, Performance
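
The virtual GPU can also be confirmed from PowerShell inside the VM (a quick check using the standard WMI video controller class):

PS C:\> Get-CimInstance Win32_VideoController | Select-Object Name, Status, DriverVersion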

  • Enable Virtual GPU for Ubuntu VM in Proxmox

Ubuntu Desktop version 22.04.4.

Add a PCI device to the Ubuntu VM and choose one virtual GPU; enable Primary GPU and PCI Express in the options; choose none for Display and host for Processors:

Proxmox - Ubuntu

Set up a remote desktop connection to Ubuntu:

root@nucleus:~# apt install ubuntu-desktop

root@nucleus:~# apt install xrdp

root@nucleus:~# systemctl enable xrdp
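
The service can then be started and its status checked (a quick sanity check):

root@nucleus:~# systemctl start xrdp
root@nucleus:~# systemctl status xrdp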

Proxmox - Remote Desktop

  • Fix the Remote Desktop audio over HDMI issue with the xrdp-installer script, which enables sound redirection:
terrence@nucleus:~$ ./xrdp-installer-1.5.1.sh -s

then reboot the VM.

Proxmox - Ubuntu Remote Desktop

Now the audio device becomes the xrdp input / output.

  • Windows Server 2022

The Windows Server 2022 setup in Proxmox is similar to Windows 11. There are a few issues, such as with the GPU:

Proxmox - Windows Server 2022, GPU

Just disable the GPU and then enable it again, and it will work correctly.

There is also no sound after installation, but the Windows Audio service can be enabled and Remote Audio chosen:

Proxmox - Windows Server 2022, Sound

then audio over HDMI to the remote desktop works.

In addition, user auto-logon can be set up after Windows Server 2022 startup, and the Windows license can be checked by running:

PS C:\Users\Administrator> slmgr -dlv
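
A hedged sketch of setting up auto-logon through the well-known Winlogon registry keys (the user name and password values below are placeholders, and note that DefaultPassword is stored in plain text):

PS C:\Users\Administrator> $winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
PS C:\Users\Administrator> Set-ItemProperty -Path $winlogon -Name AutoAdminLogon -Value '1'
PS C:\Users\Administrator> Set-ItemProperty -Path $winlogon -Name DefaultUserName -Value 'Administrator'
PS C:\Users\Administrator> Set-ItemProperty -Path $winlogon -Name DefaultPassword -Value '<password placeholder>'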

Now the Ubuntu, Windows 11 and Windows Server 2022 VMs, all running in Proxmox, can be accessed over remote desktop:

Proxmox - Ubuntu and Windows
