Set up network bond in Synology NAS

A Synology NAS DS920+ with two built-in 1Gbps Ethernet adapters and two 2.5Gbps USB 3.0 Ethernet adapters. Now set up network bonding / link aggregation across them.

  • Enable Network Link Aggregation Mode

Synology - Network Link Aggregation Mode

  • Pick network devices for the bond

Synology - Physical devices

  • Set up the network

Synology - Physical devices network setup

  • Accept the network interface change after bonding

Synology - Bond warning message

  • Network bonded

Synology - Network bonded

  • Network bond service order

Synology - Network bond service order

Set up TrueNAS Network and Link Aggregation

Two network adapters have been set up with DHCP-allocated addresses:

  1. ens18 192.168.0.246
  2. ens19 192.168.0.105

  • Create a Network Link Aggregation Interface

TrueNAS - Link Aggregation

After the Link Aggregation Interface bond1 is created, the IP addresses of the two original network adapters ens18 and ens19 are gone. bond1 now carries the ONLY network interface address for TrueNAS.

TrueNAS - Network
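
To double-check the bond from the shell (a sketch assuming TrueNAS SCALE, which uses the standard Linux bonding driver), the kernel's bonding status file shows the mode and the member interfaces:

$ cat /proc/net/bonding/bond1

It should list the bonding mode and both ens18 and ens19 as slave interfaces.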

Samba setup and share in TrueNAS

With TrueNAS installed, a few setup steps let users access shared Samba directories.

  • Enable the SMB (Samba) service in TrueNAS

TrueNAS - Enable SMB service

TrueNAS - SMB Settings

  • Create a user account with a home directory and all access permissions

TrueNAS - Create user

  • Share the TrueNAS pool created earlier

TrueNAS - Sharing

Then the shared directory on TrueNAS can be accessed from both Windows and Mac:

TrueNAS - Access

Make sure there is NO lock icon on the folder (the folder can be unlocked in macOS Finder).
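
As a quick check from a Linux or macOS terminal, smbclient can list the shares the TrueNAS SMB service exposes (a sketch; the hostname truenas.local and the user terrence are assumptions):

$ smbclient -L //truenas.local -U terrence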

Increase, decrease and resize Proxmox disk volume

After a default installation, only a small amount of disk space is given to the Proxmox root volume. An insufficient disk space issue shows up after several VMs are installed and backups are made, since installation image files (*.iso) and backups all go into the root volume.

In storage.cfg:

root@pve:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content iso,vztmpl,backup

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images

Run lvdisplay:

root@pve:~# lvdisplay
--- Logical volume ---
LV Name data
VG Name pve
# open 0
LV Size <3.58 TiB

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV Status available
# open 2
LV Size 8.00 GiB

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV Status available
# open 1
LV Size <112.25 GiB

The solution is to shrink the pve/data volume and then grow the pve/root volume. Since LVM does not support reducing thin pools in size yet, pve/data has to be removed and recreated at a smaller size.

Back up all VMs, then remove the pve/data volume:

root@pve:~# lvremove pve/data
Removing pool pve/data will remove 7 dependent volume(s). Proceed? [y/n]: y
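
For the backup step, the vzdump CLI is one option (a sketch; the VM IDs and target storage are assumptions, and the backups must NOT land on the pool about to be removed):

root@pve:~# vzdump 100 101 102 --storage local --mode stop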

With the disk space just released, increase the size of the pve/root volume, in this case by 20% of the current FREE space:

root@pve:~# lvextend -l +20%FREE /dev/pve/root
Size of logical volume pve/root changed from <112.25 GiB (28735 extents) to <851.09 GiB (217878 extents).
Logical volume pve/root successfully resized.

root@pve:~# resize2fs /dev/pve/root
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/pve/root is mounted on /; on-line resizing required
old_desc_blocks = 15, new_desc_blocks = 107
The filesystem on /dev/pve/root is now 223107072 (4k) blocks long.

Create a new pve/data volume:

root@pve:~# lvcreate -L2920G -ndata pve
Logical volume "data" created.

Check free disk space:

root@pve:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 126
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <3.73 TiB
PE Size 4.00 MiB
Total PE 976498
Alloc PE / Size 967446 / 3.69 TiB
Free PE / Size 9052 / <35.36 GiB
VG UUID UEsIZR-TBsz-UYlP-u2FO-2AbC-uq4d-vcb35f

Convert pve/data into a thin pool with a metadata volume (usually sized around 1% of the pve/data volume):

root@pve:~# lvconvert --type thin-pool --poolmetadatasize 36G pve/data
Reducing pool metadata size 36.00 GiB to maximum usable size <15.88 GiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
WARNING: Converting pve/data to thin pool's data volume with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert pve/data? [y/n]: y
Converted pve/data to thin pool.
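
At this point it is worth confirming that the recreated thin pool matches what the local-lvm entry in storage.cfg expects (a sketch using standard LVM and Proxmox commands):

root@pve:~# lvs pve/data    # the attr column should start with twi- (thin pool)
root@pve:~# pvesm status    # local-lvm should be listed as active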

Verify the current disk volumes; the pve/root volume has more space now:

root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.8M 3.2G 1% /run
/dev/mapper/pve-root 841G 71G 736G 9% /
tmpfs 16G 46M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 192K 114K 74K 61% /sys/firmware/efi/efivars
/dev/nvme3n1p2 1022M 12M 1011M 2% /boot/efi
/dev/fuse 128M 20K 128M 1% /etc/pve
tmpfs 3.2G 0 3.2G 0% /run/user/0

root@pve:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 100M 0 part
├─sda2 8:2 0 16M 0 part
├─sda3 8:3 0 120.3G 0 part
├─sda4 8:4 0 625M 0 part
└─sda5 8:5 0 102.5G 0 part
nvme2n1 259:0 0 3.6T 0 disk
nvme4n1 259:1 0 3.6T 0 disk
nvme0n1 259:2 0 3.6T 0 disk
nvme1n1 259:3 0 3.6T 0 disk
nvme5n1 259:4 0 3.6T 0 disk
nvme3n1 259:5 0 3.7T 0 disk
├─nvme3n1p1 259:6 0 1007K 0 part
├─nvme3n1p2 259:7 0 1G 0 part /boot/efi
└─nvme3n1p3 259:8 0 3.7T 0 part
├─pve-swap 252:0 0 8G 0 lvm [SWAP]
├─pve-root 252:1 0 854.7G 0 lvm /
├─pve-data_tmeta 252:2 0 15.9G 0 lvm
│ └─pve-data-tpool 252:4 0 2.9T 0 lvm
│ ├─pve-data 252:5 0 2.9T 1 lvm
│ ├─pve-vm--100--disk--0 252:6 0 240G 0 lvm
│ ├─pve-vm--101--disk--0 252:7 0 4M 0 lvm
│ ├─pve-vm--101--disk--1 252:8 0 240G 0 lvm
│ ├─pve-vm--101--disk--2 252:9 0 4M 0 lvm
│ ├─pve-vm--102--disk--0 252:10 0 4M 0 lvm
│ ├─pve-vm--102--disk--1 252:11 0 240G 0 lvm
│ └─pve-vm--102--disk--2 252:12 0 4M 0 lvm
└─pve-data_tdata 252:3 0 2.9T 0 lvm
└─pve-data-tpool 252:4 0 2.9T 0 lvm
├─pve-data 252:5 0 2.9T 1 lvm
├─pve-vm--100--disk--0 252:6 0 240G 0 lvm
├─pve-vm--101--disk--0 252:7 0 4M 0 lvm
├─pve-vm--101--disk--1 252:8 0 240G 0 lvm
├─pve-vm--101--disk--2 252:9 0 4M 0 lvm
├─pve-vm--102--disk--0 252:10 0 4M 0 lvm
├─pve-vm--102--disk--1 252:11 0 240G 0 lvm
└─pve-vm--102--disk--2 252:12 0 4M 0 lvm

root@pve:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 2.85t 3.19 0.32
root pve -wi-ao---- <854.69g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi-a-tz-- 240.00g data 10.38
vm-101-disk-0 pve Vwi-a-tz-- 4.00m data 14.06
vm-101-disk-1 pve Vwi-a-tz-- 240.00g data 19.50
vm-101-disk-2 pve Vwi-a-tz-- 4.00m data 1.56
vm-102-disk-0 pve Vwi-a-tz-- 4.00m data 14.06
vm-102-disk-1 pve Vwi-a-tz-- 240.00g data 8.98
vm-102-disk-2 pve Vwi-a-tz-- 4.00m data 1.56

Proxmox and SR-IOV support for Intel GPU and Mellanox network adapter

  • Enable VT-d (Intel Virtualization Technology for Directed I/O), which provides IOMMU (Input Output Memory Management Unit) services, and SR-IOV (Single Root I/O Virtualization), a technology that allows a physical PCIe device to present itself multiple times on the PCIe bus. Both are under Chipset in the motherboard BIOS, e.g. on an ASRock Z790 Riptide WiFi.

  • Enable SR-IOV for the Mellanox network adapter, e.g. a Mellanox ConnectX-4 MCX455A-ECAT PCIe 3.0 x16 100GbE VPI EDR IB, in the same motherboard BIOS.

  • Add the Proxmox No-Subscription repository URL:

root@pve:~# cat /etc/apt/sources.list
deb http://ftp.au.debian.org/debian bookworm main contrib

deb http://ftp.au.debian.org/debian bookworm-updates main contrib

# security updates
deb http://security.debian.org bookworm-security main contrib

# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

and run a package update:

root@pve:~# apt update

and install all build tools:

root@pve:~# apt install build-* dkms

  • Set/Pin Proxmox kernel version:
root@pve:~# proxmox-boot-tool kernel pin 6.8.4-2-pve
Setting '6.8.4-2-pve' as grub default entry and running update-grub.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.8.4-2-pve
Found initrd image: /boot/initrd.img-6.8.4-2-pve
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Adding boot menu entry for UEFI Firmware Settings ...
done

and verify the Proxmox kernel version:

root@pve:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
6.8.4-2-pve

Pinned kernel:
6.8.4-2-pve

  • Install the Proxmox kernel headers package:
root@pve:~# apt install proxmox-headers-6.8.4-2-pve
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
proxmox-headers-6.8.4-2-pve
0 upgraded, 1 newly installed, 0 to remove and 39 not upgraded.
Need to get 13.7 MB of archives.
After this operation, 97.0 MB of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 proxmox-headers-6.8.4-2-pve amd64 6.8.4-2 [13.7 MB]
Fetched 13.7 MB in 1s (23.8 MB/s)
Selecting previously unselected package proxmox-headers-6.8.4-2-pve.
(Reading database ... 70448 files and directories currently installed.)
Preparing to unpack .../proxmox-headers-6.8.4-2-pve_6.8.4-2_amd64.deb ...
Unpacking proxmox-headers-6.8.4-2-pve (6.8.4-2) ...
Setting up proxmox-headers-6.8.4-2-pve (6.8.4-2) ...

  • Download the Linux i915 driver with SR-IOV support for the Linux kernel:
root@pve:~# git clone https://github.com/strongtz/i915-sriov-dkms

and change into the cloned repository and run:

root@pve:~/i915-sriov-dkms# cat VERSION
2024.07.24

root@pve:~/i915-sriov-dkms# dkms add .
Creating symlink /var/lib/dkms/i915-sriov-dkms/2024.07.24/source -> /usr/src/i915-sriov-dkms-2024.07.24

and build and install the i915-sriov-dkms Linux kernel module:

root@pve:~/i915-sriov-dkms# dkms install -m i915-sriov-dkms -v $(cat VERSION) --force
Sign command: /lib/modules/6.8.4-2-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Certificate or key are missing, generating self signed certificate for MOK...

Building module:
Cleaning build area...
make -j20 KERNELRELEASE=6.8.4-2-pve -C /lib/modules/6.8.4-2-pve/build M=/var/lib/dkms/i915-sriov-dkms/2024.07.24/build.......
Signing module /var/lib/dkms/i915-sriov-dkms/2024.07.24/build/i915.ko
Cleaning build area...

i915.ko:
Running module version sanity check.
- Original module
- Installation
- Installing to /lib/modules/6.8.4-2-pve/updates/dkms/
depmod...

and enable the i915-sriov-dkms module with up to a maximum of 7 VFs (Virtual Functions) on the Linux kernel command line:

root@pve:~# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX_DEFAULT
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7"
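
After editing /etc/default/grub, the GRUB config has to be regenerated, and the virtual functions are created through sysfs on each boot. A minimal sketch following the i915-sriov-dkms README, assuming the iGPU sits at PCI address 00:02.0:

root@pve:~# update-grub
root@pve:~# apt install sysfsutils
# create 7 VFs on the iGPU at every boot
root@pve:~# echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" >> /etc/sysfs.conf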

  • Reboot Proxmox, then check the VGA devices:
root@pve:~# lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.1 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.2 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.3 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.4 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.5 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.6 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:02.7 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)

The first VGA device 00:02.0 is the REAL GPU. The other 7 are SR-IOV virtual functions.

Verify IOMMU has been enabled:

root@pve:~# dmesg | grep -i iommu
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.4-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7
[ 0.036486] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.4-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7
[ 0.036515] DMAR: IOMMU enabled
[ 0.090320] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.244292] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[ 0.270125] iommu: Default domain type: Translated
[ 0.270125] iommu: DMA domain TLB invalidation policy: lazy mode
[ 0.303735] DMAR: IOMMU feature fl1gp_support inconsistent
[ 0.303735] DMAR: IOMMU feature pgsel_inv inconsistent
[ 0.303736] DMAR: IOMMU feature nwfs inconsistent
[ 0.303736] DMAR: IOMMU feature dit inconsistent
[ 0.303737] DMAR: IOMMU feature sc_support inconsistent
[ 0.303737] DMAR: IOMMU feature dev_iotlb_support inconsistent
[ 0.304175] pci 0000:00:02.0: Adding to iommu group 0
[ 0.304544] pci 0000:00:00.0: Adding to iommu group 1
[ 0.304554] pci 0000:00:01.0: Adding to iommu group 2
[ 0.304562] pci 0000:00:01.1: Adding to iommu group 3
[ 0.304570] pci 0000:00:06.0: Adding to iommu group 4
[ 0.304582] pci 0000:00:14.0: Adding to iommu group 5
[ 0.304589] pci 0000:00:14.2: Adding to iommu group 5
[ 0.304598] pci 0000:00:15.0: Adding to iommu group 6
[ 0.304607] pci 0000:00:16.0: Adding to iommu group 7
[ 0.304614] pci 0000:00:17.0: Adding to iommu group 8
[ 0.304630] pci 0000:00:1a.0: Adding to iommu group 9
[ 0.304641] pci 0000:00:1b.0: Adding to iommu group 10
[ 0.304651] pci 0000:00:1c.0: Adding to iommu group 11
[ 0.304662] pci 0000:00:1c.1: Adding to iommu group 12
[ 0.304671] pci 0000:00:1c.2: Adding to iommu group 13
[ 0.304681] pci 0000:00:1c.3: Adding to iommu group 14
[ 0.304692] pci 0000:00:1c.4: Adding to iommu group 15
[ 0.304703] pci 0000:00:1d.0: Adding to iommu group 16
[ 0.304721] pci 0000:00:1f.0: Adding to iommu group 17
[ 0.304728] pci 0000:00:1f.3: Adding to iommu group 17
[ 0.304735] pci 0000:00:1f.4: Adding to iommu group 17
[ 0.304742] pci 0000:00:1f.5: Adding to iommu group 17
[ 0.304750] pci 0000:01:00.0: Adding to iommu group 18
[ 0.304758] pci 0000:02:00.0: Adding to iommu group 19
[ 0.304765] pci 0000:03:00.0: Adding to iommu group 20
[ 0.304781] pci 0000:04:00.0: Adding to iommu group 21
[ 0.304791] pci 0000:05:00.0: Adding to iommu group 22
[ 0.304801] pci 0000:06:00.0: Adding to iommu group 23
[ 0.304810] pci 0000:07:00.0: Adding to iommu group 24
[ 0.304834] pci 0000:08:00.0: Adding to iommu group 25
[ 0.304845] pci 0000:09:00.0: Adding to iommu group 26
[ 0.304857] pci 0000:0a:00.0: Adding to iommu group 27
[ 0.304866] pci 0000:0b:00.0: Adding to iommu group 28
[ 4.659395] pci 0000:00:02.1: DMAR: Skip IOMMU disabling for graphics
[ 4.659438] pci 0000:00:02.1: Adding to iommu group 29
[ 4.664441] pci 0000:00:02.2: DMAR: Skip IOMMU disabling for graphics
[ 4.664479] pci 0000:00:02.2: Adding to iommu group 30
[ 4.667692] pci 0000:00:02.3: DMAR: Skip IOMMU disabling for graphics
[ 4.667727] pci 0000:00:02.3: Adding to iommu group 31
[ 4.671096] pci 0000:00:02.4: DMAR: Skip IOMMU disabling for graphics
[ 4.671129] pci 0000:00:02.4: Adding to iommu group 32
[ 4.673545] pci 0000:00:02.5: DMAR: Skip IOMMU disabling for graphics
[ 4.673572] pci 0000:00:02.5: Adding to iommu group 33
[ 4.676357] pci 0000:00:02.6: DMAR: Skip IOMMU disabling for graphics
[ 4.676402] pci 0000:00:02.6: Adding to iommu group 34
[ 4.679192] pci 0000:00:02.7: DMAR: Skip IOMMU disabling for graphics
[ 4.679221] pci 0000:00:02.7: Adding to iommu group 35

  • Enable Virtual GPU for Windows 11 VM in Proxmox

Add a PCI device to the Windows 11 VM and choose one virtual GPU:

Proxmox - Windows 11, PCI device

Enable Primary GPU and PCI Express in options:

Proxmox - Windows 11, Primary GPU and PCI Express

Choose none for Display and host for Processors in the Windows 11 VM options:

Proxmox - Windows 11, Display

Proxmox - Windows 11, Display none

Start the Windows 11 VM and log in with Microsoft Remote Desktop, https://apps.microsoft.com/detail/9wzdncrfj3ps; a virtual GPU is available now. Run Task Manager and check the CPU and GPU load:

Proxmox - Windows 11, Performance

  • Enable Virtual GPU for Ubuntu VM in Proxmox

Ubuntu Desktop version 22.04.4.

Add a PCI device to the Ubuntu VM and choose one virtual GPU; enable Primary GPU and PCI Express in options; choose none for Display and host for Processors:

Proxmox - Ubuntu

Set up a remote desktop connection to Ubuntu:

root@nucleus:~# apt install ubuntu-desktop

root@nucleus:~# apt install xrdp

root@nucleus:~# systemctl enable xrdp
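
xrdp listens on TCP port 3389 by default; a quick sanity check after enabling the service (a sketch):

root@nucleus:~# systemctl start xrdp
root@nucleus:~# ss -tlnp | grep 3389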

Proxmox - Remote Desktop

  • Fix the Remote Desktop audio over HDMI issue with the xrdp-installer script, which enables sound redirection:
terrence@nucleus:~$ ./xrdp-installer-1.5.1.sh -s

then reboot the VM.

Proxmox - Ubuntu Remote Desktop

Now the audio device shows as xrdp input / output.

  • Windows Server 2022

Windows Server 2022 setup in Proxmox is similar to Windows 11. There are a few issues, such as the GPU:

Proxmox - Windows Server 2022, GPU

Just disable the GPU and then re-enable it; it will work correctly.

There is also no sound after installation, but enabling the Windows Audio service and choosing Remote Audio fixes that:

Proxmox - Windows Server 2022, Sound

then audio over HDMI to the remote desktop works.

In addition, User Auto Logon can be set up for Windows Server 2022 startup. Check the Windows license by running:

PS C:\Users\Administrator> slmgr -dlv

Now the Ubuntu, Windows 11 and Windows Server 2022 VMs, all running in Proxmox, can be accessed over remote desktop:

Proxmox - Ubuntu and Windows

The solution for making an old Intel 10Gbps network adapter work in Windows 11

Buy an old Intel 10Gbps network adapter, X520, X540 …, from AliExpress (https://aliexpress.com/), and install the old Intel network adapter driver for Windows 10 to make it work in Windows 11. The example installs the version 25.0 Intel network adapter driver, https://www.intel.com/content/www/us/en/download/18293/29648/intel-network-adapter-driver-for-windows-10.html:

How to enable SMB Multichannel in Windows 11

The network adapter is required to support RSS (Receive Side Scaling).

RSS (Receive Side Scaling)

  • Open PowerShell as administrator in Windows 11, then enable SMB Multichannel (it should be enabled by default):
PS C:\> Set-SmbClientConfiguration -EnableMultiChannel $true

Confirm
Are you sure you want to perform this action?
Performing operation 'Modify' on Target 'SMB Client Configuration'.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"):

Check which network interfaces show “RSS Capable = True”:

PS C:\> Get-SmbClientNetworkInterface

Interface Index RSS Capable RDMA Capable Speed IpAddresses Friendly Name
--------------- ----------- ------------ ----- ----------- -------------
17 True False 20 Gbps {} X710-1-WFP Native MAC Layer LightWeight Filter-0000
8 False False 10 Gbps {} X710-1
13 False False 10 Gbps {} X710-2
26 True False 20 Gbps {fe80::923a:90de:dedd:ef44, 192.168.0.98} NIC-Team

  • Verify whether there are any active SMB connections:
PS C:\> Get-SmbConnection

ServerName ShareName UserName Credential Dialect NumOpens
---------- --------- -------- ---------- ------- --------
Synology NAS Drive RIPTIDE\terrence MicrosoftAccount\terrence.miao@mail.net 3.1.1 2

  • Copy a large file to an SMB device, e.g., a Synology NAS which also has SMB Multichannel enabled, then verify that SMB Multichannel is working:
PS C:\> Get-SmbMultichannelConnection -IncludeNotSelected

Server Name Selected Client IP Server IP Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- --------- --------- ---------------------- ---------------------- ------------------ -------------------
Synology True 192.168.0.98 192.168.0.112 26 5 False False
Synology False 192.168.0.98 192.168.0.34 26 4 False False
Synology False 192.168.0.98 192.168.196.140 26 7 False False

192.168.0.98 is the Windows 11 network address after network teaming; 192.168.0.112 and 192.168.0.34 are Synology NAS network addresses.

With and without multichannel

How to upgrade Synology NAS network from 1Gbps to 2.5Gbps

A Synology NAS DS920+ comes with two 1Gbps Ethernet adapters. There is an affordable and easy way to upgrade its gigabit network path to 2.5Gbps.

Log in to the Synology NAS Admin UI and open Control Panel -> Network -> Network Interface

Installation before

  • Get a 2.5Gbps USB 3.0 Ethernet adapter with a Realtek RTL8156 / RTL8156B / RTL8156BG chipset, e.g., the UGREEN 2.5Gbps USB-C Ethernet Adapter:

UGREEN 2.5Gbps USB-C Ethernet Adapter

  • Find out the architecture name of the CPU in the NAS. For example, the Synology DS920+ is equipped with an Intel Celeron J4125 CPU; the architecture name of this processor is Geminilake.

  • Go to the driver releases site https://github.com/bb-qq/r8152/releases and download the latest version, e.g. r8152-geminilake-2.17.1-1_7.2.spk. For Synology DSM 7.2 and above, use packages with the suffix _7.2.

  • Log in to the Synology Admin UI, then go to Package Center -> Manual Install and choose the driver package downloaded in the step above.

Package installation

Installation warning

Installation confirmation

  • The installation will fail the very first time.

Installation failed

  • Then ssh into the NAS, and run the following command:
$ sudo install -m 4755 -o root -D /var/packages/r8152/target/r8152/spk_su /opt/sbin/spk_su

and also enable multiple identical USB devices, since SAME products have the SAME serial number:

$ sudo bash /var/packages/r8152/scripts/install-udev-rules

Installation fix

$ sudo bash /var/packages/r8152/scripts/install-udev-rules
Updating Hardware Database Index...
UDEV rules have been installed to /usr/lib/udev/rules.d
lrwxrwxrwx 1 root root 50 May 24 17:13 /usr/lib/udev/rules.d/51-usb-r8152-net.rules -> /var/packages/r8152/scripts/51-usb-r8152-net.rules

and continue / retry the installation.

  • Reboot NAS.
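
After the reboot, the ssh shell can confirm the kernel picked the adapters up (a sketch using generic Linux commands; the exact interface names are assumptions):

$ dmesg | grep -i r8152
$ ip link    # the new USB interfaces should appear alongside the built-in eth0 / eth1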

  • Log in to the Synology Admin UI, go to Package Center -> Installed -> RTL8152/RTL8153 driver and check the newly installed Realtek network adapter driver is running:

Running

  • Open Control Panel -> Network -> Network Interface and check the new network interfaces LAN 3 and LAN 4 have been turned on, with MTU / jumbo frame set to 9000:

New network interface
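
With jumbo frames on, an end-to-end check is to ping the NAS with the don't-fragment flag and a payload sized for a 9000-byte MTU (a sketch from a Linux client; 8972 = 9000 minus 20 bytes IP and 8 bytes ICMP headers, and the NAS address is an assumption):

$ ping -M do -s 8972 192.168.0.244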

Bind to the new USB network adapter's address and run an iperf3 network performance test:

$ iperf3 -c 192.168.0.244 -B 192.168.0.229
Connecting to host 192.168.0.244, port 5201
[ 5] local 192.168.0.229 port 46171 connected to 192.168.0.244 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 281 MBytes 2.36 Gbits/sec 0 450 KBytes
[ 5] 1.00-2.00 sec 281 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 2.00-3.00 sec 280 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 3.00-4.00 sec 281 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 4.00-5.00 sec 281 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 5.00-6.00 sec 281 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 6.00-7.00 sec 281 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 7.00-8.00 sec 280 MBytes 2.35 Gbits/sec 0 450 KBytes
[ 5] 8.00-9.00 sec 281 MBytes 2.36 Gbits/sec 0 450 KBytes
[ 5] 9.00-10.00 sec 281 MBytes 2.36 Gbits/sec 0 670 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.74 GBytes 2.35 Gbits/sec 0 sender
[ 5] 0.00-10.05 sec 2.74 GBytes 2.34 Gbits/sec receiver

iperf Done.
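
The test above assumes an iperf3 server is already listening on the other end, started on the NAS beforehand (a sketch; on Synology, iperf3 is typically available via Docker or a community package):

$ iperf3 -s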

How to team network (link aggregation) in Windows 11

An Intel Ethernet Converged Network Adapter X710 comes with two 10Gbps ports, which can be teamed together for link aggregation.

  • Install the optional feature Server Manager in Windows 11

Server Manager

  • Open Windows PowerShell as administrator, then run:
PS C:\> New-NetSwitchTeam -Name "NIC-Team" -TeamMembers "X710-1","X710-2"

Network Connections

A new network interface is created, with a combined speed of 20Gbps.

Network Status

Network Details

To remove the network team, run:

PS C:\> Remove-NetSwitchTeam -Name "NIC-Team"

Step by step: root OnePlus 5T

The OnePlus 5T was first announced in Nov 2017. 7 years later, it has been upgraded to Android 10.0.1 and is still robust and fast.

NOTE: Before you take on this brave journey, make sure to back up all important files on the phone first!

About phone

  • In Settings -> System -> Developer options, enable Advanced reboot, OEM unlocking, USB Debugging

Developer options

  • Connect the phone to the computer over USB, then run:
$ adb devices
List of devices attached
9b26c76 device

$ adb reboot bootloader

  • Wait for the phone to reboot into Bootloader mode, then run:
$ fastboot flashing unlock

  • On the phone, confirm “UNLOCK THE BOOTLOADER”. After unlocking, your phone WILL BE RESET, like a factory hard reset. ALL APPS AND DATA ARE GONE. The Android system will be reinstalled.

  • Go to the OnePlus Smartphone Software Update site and download the latest OnePlus 5T update on Windows, https://oneplus.net/in/support/softwareupdate

  • Unzip the OnePlus 5T update on Windows

  • In the phone's Settings, search for USB Preferences and select USE USB FOR File transfer

USB Preferences

  • On Windows, in File Explorer, copy the OnePlus5TOxygen_43_OTA_069_all_2010292144_76910d123e3940e5/boot.img file to the ONEPLUS A5010 -> Internal shared storage -> Download directory on the phone

  • On the phone, download and install the latest version of Magisk, https://github.com/topjohnwu/Magisk

  • Run Magisk, select Magisk Install, https://topjohnwu.github.io/Magisk/install.html

  • Select and patch the boot.img file under the /Download directory

Magisk

Magisk select and patch

Magisk patch boot.img

  • A patched file magisk_patched-27000_nplRf is successfully generated. On Windows, in File Explorer, copy it to a local directory

  • On Windows, run:

$ fastboot flash boot magisk_patched-27000_nplRf.img

NOTE: Always patch the boot image on the SAME device where you run Magisk.

Now OnePlus 5T has been officially ROOTED!
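
A quick way to confirm root from the computer (a sketch; Magisk will show a superuser prompt on the phone the first time):

$ adb shell su -c id

It should report uid=0(root).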

NOTE: There is NO need to install TWRP (Team Win Recovery Project), https://twrp.me, a customised recovery application for Android devices, on the OnePlus 5T.