
Thursday, October 9, 2025

Why does a shut down Dell server consume 50W?

Question: Why does a shut down Dell server consume 50W?
 
Short Answer: Because some hardware components, such as the iDRAC management controller and the standby circuitry of the power supplies, still draw power as long as the server remains connected to power.

Longer Story with details 

I have a Dell PowerEdge R620 with iDRAC7 in my home lab, and here is the total home power consumption in two scenarios:

  1. shutdown server still connected to power (531 Watts)
  2. server fully disconnected from the power (475 Watts)

Scenario 1: shutdown server still connected to power

 
Scenario 2: server fully disconnected from the power

The difference between the two scenarios above is roughly 50 W. Why?

Let's dive deeper. 
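If you want to verify that the iDRAC stays alive while the server itself is powered off, you can query it over IPMI. Below is a minimal sketch, assuming IPMI over LAN is enabled on the iDRAC; <idrac-ip>, <user> and <password> are placeholders for your environment.

# Check the chassis power state and the current power reading reported by iDRAC7
ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> chassis power status
ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> dcmi power reading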

Sunday, September 28, 2025

FortiGate Configuration Backup via REST API

One of my customers would like to back up the FortiGate configuration as part of a DRBC (Disaster Recovery and Business Continuity) solution.

FortiGate supports a REST API, so it is a great fit: periodically fetch the configuration, store it in a file directory, and leverage the Veeam Backup and Replication solution to back up the FortiGate configurations with the company's standard protection process.

In this blog post I document all the customer-specific design factors and also a solution prototype showing how to fulfill these factors and back up the FortiGate configuration into a file directory.

I personally prefer the *nix way over Windows; therefore, I will leverage Linux, Docker, and PowerShell to get information from the FortiGate security appliance and put it into a file directory. The Docker solution could be leveraged on Windows operating systems as well.

If you are interested in details, read on.
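As a teaser, the core of the prototype boils down to a single authenticated HTTPS call. Here is a minimal sketch using curl (the full solution uses PowerShell in Docker); it assumes you have created a REST API administrator token with read access and that your FortiOS version exposes the configuration backup monitor endpoint under the path shown below.

# Fetch the full FortiGate configuration and store it with a timestamp
# <fortigate-ip> and <api-token> are placeholders for your environment
curl -sk -H "Authorization: Bearer <api-token>" \
  "https://<fortigate-ip>/api/v2/monitor/system/config/backup?scope=global" \
  -o "fortigate-config-$(date +%Y%m%d-%H%M%S).conf"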

Thursday, September 25, 2025

David Pasek’s version of Greg Ferro’s 11 rules of design

Design documentation is not literature; it is a technical tool. The goal is clarity, precision, and usability. Here are 11 rules to guide you when writing a design document.

Greg Ferro's Eleven Rules of Design Documentation

Here is Greg Ferro’s approach to writing network design documentation. The “world” of networks is too big and varied for a single document to cover more than one or two projects, but here are some rules for writing a detailed design document.

Wednesday, September 24, 2025

My IT Infrastructure Tips & Tricks - tmux

tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. Tmux is available on Linux and BSD systems.

Let's dive into TMUX usage ...
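Before diving into details, here is the minimal workflow I use most often (standard tmux commands; the session name is just an example):

tmux new -s work        # start a new named session
# ... do your work, then press Ctrl-b d to detach; the programs keep running
tmux ls                 # list running sessions
tmux attach -t work     # reattach to the session, even from another terminal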

Saturday, September 20, 2025

ZeroEcho: Open Source, Future-Ready Cryptography for Java

What is ZeroEcho?

ZeroEcho is an open-source cryptography toolkit for Java. It builds on trusted providers such as Bouncy Castle (especially for post-quantum algorithms) and organizes them into a coherent, safe, and scriptable framework.

It is designed for developers, researchers, and practitioners who want to build cryptographic workflows that are:

  • Secure today with classical algorithms, and
  • Resilient tomorrow with post-quantum standards. 

Get Started

📂 Repository: https://gitea.egothor.org/Egothor/ZeroEcho

📖 Documentation: https://www.egothor.org/javadoc/zeroecho/lib/

Source: https://www.linkedin.com/pulse/zeroecho-open-source-future-ready-cryptography-java-leo-galambos-pgu2e/ 

Wednesday, August 27, 2025

What is GPON?

GPON stands for Gigabit Passive Optical Network.

It’s a type of fiber-optic broadband technology used by internet service providers (ISPs) to deliver high-speed internet, TV, and phone services to homes and businesses.

Thursday, August 21, 2025

Signi.com & Electronic Signatures

Foundation – eIDAS Signature Levels

Under EU law (eIDAS 910/2014), electronic signatures can be:

  • SES – Simple Electronic Signature (basic: typed name, click-to-sign, tickbox).

  • AdES – Advanced Electronic Signature (cryptographically bound to the signer, integrity-protected).

  • QES – Qualified Electronic Signature (requires a qualified certificate + secure signing device; legally equivalent to handwritten signature in the EU).

👉 Signi supports SES, AdES, and in certain cases QES (e.g. with BankID or qualified certificates).

Tuesday, August 12, 2025

How to set PERC H310 Mini to HBA mode and use disks directly?

H310/H710/H710P/H810 Mini & Full Size IT Crossflashing

Original Source: https://fohdeesha.com/docs/perc.html

This guide allows you to crossflash 12th gen Dell Mini Mono & full size cards to LSI IT firmware. Mini Mono refers to the small models that fit in the dedicated "storage slot" on Dell servers. Because iDRAC checks the PCI vendor values of cards in this slot before allowing the server to boot, the generic full-size PERC crossflashing guides do not apply. This guide however solves that issue. Technical explanation for those curious. The following cards are supported:

  • H310 Mini Mono
  • H310 Full Size
  • H710 Mini Mono
  • H710P Mini Mono
  • H710 Full Size
  • H710P Full Size
  • H810 Full Size

Saturday, August 9, 2025

Garage Keyboard

Hardware 

  • Arduino NANO CH340 clone
  • Membrane keypad for Arduino, 3 x 4 matrix
  • Expansion kit with solderless breadboard and jumper wires

E-Shop: https://dratek.cz/ 

Training videos: 

  • Arduino Basics
    • https://www.youtube.com/watch?v=6OR7STWnIaE
    • https://www.youtube.com/watch?v=fJWR7dBuc18 
  • Arduino + keyboard: https://www.youtube.com/watch?v=afl15UdQiaw

 


Sunday, July 13, 2025

How to connect a Tuya device to Node-RED

Here is the process to get the Device ID and Local Key for a Tuya device.

  1. Create a Tuya Developer Account
    • Go to https://iot.tuya.com and register for a developer account. 
  2. Create a Cloud Project
  3. Link Tuya App Account
    • In your cloud project, navigate to the "Devices" tab and select "Link Tuya App Account." You'll typically scan a QR code with your Immax NEO PRO app (or Tuya Smart/Smart Life app) to authorize the link.
  4. Get Device ID
    • Once linked, your devices from the app should appear under the "Devices" tab in your cloud project. Note down the "Device ID" for each Tuya device you want to control. 
  5. Create API Subscription
    • Go to "Cloud" > "Cloud Services"
    • Subscribe to
      •  IoT Core Services
  6. Still within the "Cloud Services" section, after subscribing, click on "My Service"
    • For each of the services you just subscribed to, click "View Details"
    • Go to the "Authorized Projects" tab 
    • Ensure your specific cloud project is listed and authorized here. If not, you may need to click "Add Authorization" and select your project.
  7. Get Local Key
    • Go to "Cloud" -> "API Explorer."
    • Under "Smart Home Device Control" (or similar), look for an option like "Query Device Details in Bulk" or "Get Device Specification Attribute."
      • Device Management > Query Device Details 
    • Input your Device ID and submit the request.
      • The "Local Key" should be in the JSON response.
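As an alternative to clicking through the API Explorer, the community tuyapi CLI can pull the Device IDs and Local Keys of all linked devices in one go. This is only a sketch under the assumption that Node.js is installed and that the @tuyapi/cli package and its wizard command still work with your cloud project's Access ID/Secret:

npm install -g @tuyapi/cli     # community Tuya CLI (package name is an assumption, verify before use)
tuya-cli wizard                # prompts for the cloud project's Access ID/Secret and one device ID,
                               # then prints all linked devices with their id and local key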

 


Sunday, July 6, 2025

Converting a file from MKV to MP4 with ffmpeg

To convert a file from MKV to MP4 with ffmpeg, use the following command:

ffmpeg -i vstup.mkv -codec copy vystup.mp4

If the MKV file contains codecs that are not compatible with MP4 (e.g. some subtitle or audio codecs), you can transcode instead:

ffmpeg -i vstup.mkv -c:v libx264 -c:a aac -strict experimental vystup.mp4
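If you have a whole directory of MKV files, a simple shell loop (my own sketch, using the stream-copy variant above) converts them all:

for f in *.mkv; do
  ffmpeg -i "$f" -codec copy "${f%.mkv}.mp4"
done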

Thursday, July 3, 2025

How to install and configure network printer and scanner in Linux Mint

For sustainability reasons, I like to keep using old laptop/printer/scanner devices.

This blog post is focused on Printer and Scanner.

I have a Canon MX350, so the runbooks for installing and using the printer and scanner were tested only with this model.
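As a rough outline of what the runbooks cover, the packages below are what Linux Mint typically needs for printing and scanning with a PIXMA such as the MX350. Treat it as a sketch; exact package names may differ between Mint releases.

sudo apt install cups system-config-printer printer-driver-gutenprint   # printing stack + PIXMA drivers
sudo apt install sane-utils xsane simple-scan                            # scanning stack (SANE pixma backend)
scanimage -L                                                             # verify the scanner is detected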

Saturday, June 28, 2025

How to Install and Configure NVIDIA Graphics Card in FreeBSD

 

[SKIP - NOT USED] Install driver for NVIDIA Graphics Card

pkg install nvidia-driver
sysrc kld_list+="nvidia nvidia-modeset"
sysrc linux_enable="YES" 

[SKIP - NOT USED] Configure the NVIDIA driver in a configuration file

cat >> /usr/local/etc/X11/xorg.conf.d/20-nvidia.conf << EOF
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"
    BusID     "pci0:0:1:0"  
EndSection
EOF

[SKIP - NOT USED] NVIDIA configuration (it creates /etc/X11/xorg.conf)

pkg install nvidia-xconfig
nvidia-xconfig

Tuesday, June 17, 2025

How to get VMs with specific custom attribute?

Here is the one-liner to list VMs with the custom attribute "Last Backup" ...

Get-VM | Select-Object Name, @{N='LastBackup';E={($_.CustomFields | Where-Object {$_.Key -match "Last Backup"}).Value}} | Where-Object {$_.LastBackup -ne $null -and $_.LastBackup -ne ""}

and here is another one to count the number of such VMs ...

Get-VM | Select-Object Name, @{N='LastBackup';E={($_.CustomFields | Where-Object {$_.Key -match "Last Backup"}).Value}} | Where-Object {$_.LastBackup -ne $null -and $_.LastBackup -ne ""} | Measure-Object | Select-Object Count

 

How to get all VMs restarted by VMware vSphere HA? The PowerCLI one-liner below will do the magic ...

Get-VIEvent -MaxSamples 100000 -Start (Get-Date).AddDays(-1) -Type Warning | Where {$_.FullFormattedMessage -match "restarted"} | select CreatedTime,FullFormattedMessage | sort CreatedTime -Descending | Format-Table


Sunday, June 15, 2025

How to compress PDF file in Linux

I'm using Linux Mint with xsane for scanning documents on my old but still good Canon MX350 printer/scanner. Scans are saved as huge PDF documents (for example 50 MB) and I would like to compress them to consume much less disk space.

Install Ghostscript

apt install ghostscript

Compress the file input.pdf

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output_compressed.pdf input.pdf

Let's break down these options

  • -sDEVICE=pdfwrite: Tells Ghostscript to output a PDF file.
  • -dCompatibilityLevel=1.4: Sets the PDF version. Version 1.4 is quite old but widely compatible and often allows for good compression. You can try 1.5 or 1.6 for slightly more modern features and potentially better compression in some cases.
  • -dPDFSETTINGS=/ebook: This is the main compression control. As mentioned, /ebook usually gives a good balance.
  • -dNOPAUSE -dQUIET -dBATCH: These make Ghostscript run silently and non-interactively.
  • -sOutputFile=output_compressed.pdf: Specifies the name of the compressed output file.
  • input.pdf: original 50 MB PDF.

Lossy compression (322x, from 50 MB to 155 KB) without any visible degradation is well worth it to keep cloud (Google Drive) storage costs low.
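If you scan regularly, you can wrap the same Ghostscript call in a small loop (my own sketch) to compress every PDF in the current directory:

for f in *.pdf; do
  gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
     -dNOPAUSE -dQUIET -dBATCH \
     -sOutputFile="${f%.pdf}_compressed.pdf" "$f"
done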


Sunday, June 1, 2025

My VIM configuration file

My preferred editor on Unix-like systems is vi or vim. vi is everywhere, and Vim is improved for scripting and coding.

Below is my VIM config file /home/dpasek/.vimrc

 syntax on  
 filetype plugin indent on  

 " Show line numbers  
 set number  

 " Show relative line numbers (optional, good for motions like 5j/5k)  
 " set relativenumber  
 " Highlight matching parentheses  
 set showmatch  

 " Enable auto-indentation  
 set smartindent  
 set autoindent  

 " Use spaces instead of tabs, and set width (adjust to taste)  
 set expandtab  
 set tabstop=4  
 set shiftwidth=4  
 set softtabstop=4  

 " Show line and column in status line  
 set ruler  

 " Show partial command in bottom line  
 set showcmd  

 " Show a vertical line at column 80 (optional)  
 set colorcolumn=80  
 
 " Disable VIM mouse handling and keep it to terminal  
 set mouse=  

 " Enable persistent undo (requires directory)  
 set undofile  
 set undodir=~/.vim/undodir  
 
 " Make backspace behave sanely  
 set backspace=indent,eol,start  
 
 " Enable searching while typing  
 set incsearch  
 set hlsearch     " Highlight all matches  
 set ignorecase    " Case insensitive search...  
 set smartcase     " ...unless capital letter used  
 
 " Status line always visible  
 set laststatus=2  

 

Sunday, May 18, 2025

VMware VCF's SDDC Backup over sftp

You can do a native VCF SDDC Manager backup via SFTP protocol. SFTP is a file transfer protocol that operates over the SSH protocol. When using SFTP for VMware VCF's backup, you're effectively using the SSH protocol for transport.

For VCF older than 5.1, you have to allow the ssh-rsa algorithm for host key and user authentication on your SSH server.

This is configurable in the SSH daemon configuration (/etc/ssh/sshd_config); your backup server should have the following lines to allow the ssh-rsa algorithm for host key and user authentication.

# add ssh-rsa to the list of acceptable host key algorithms
HostKeyAlgorithms +ssh-rsa
 
# allow the ssh-rsa algorithm for user authentication
PubkeyAcceptedAlgorithms +ssh-rsa
 
 
This should not be necessary for SDDC Manager in VCF 5.1 and later.
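In either case, after editing sshd_config you have to reload the SSH daemon, and it is worth verifying the handshake from another machine. A minimal check, assuming a systemd-based backup server (adjust the service name, user, and host to your environment):

sudo systemctl reload sshd                                   # apply the sshd_config change
sftp -o HostKeyAlgorithms=+ssh-rsa backupuser@backupserver   # test that an ssh-rsa SFTP session is accepted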
 

Friday, May 9, 2025

RaspberryPi - GPIO control over Web Interface

How to use RaspberryPi inputs and outputs? The easiest way is to use the GPIO pins directly on the RaspberryPi board.

Hardware

The Raspberry Pi has 8 freely accessible GPIO ports which can be controlled. In the following picture they are colored green.

GPIO ports

Attention! GPIO pins are 3.3V and do not tolerate 5V! The maximum current is 16 mA! It would be possible to use more of them by changing the configuration.

Software

First you need to install the lighttpd (or Apache) server and PHP5:
sudo groupadd www-data
sudo apt-get install lighttpd
sudo apt-get install php5-cgi
sudo lighty-enable-mod fastcgi
sudo adduser pi www-data
sudo chown -R www-data:www-data /var/www
In the lighttpd configuration you need to add:

fastcgi.server = ( ".php" => ((
    "bin-path" => "/usr/bin/php5-cgi",
    "socket" => "/tmp/php.socket"
)))

Now you need to restart lighttpd:
sudo /etc/init.d/lighttpd force-reload

This will run our webserver with PHP.

Now we get to the actual GPIO control. The ports can be used as input and output. Everything needs to be done as root.

First you need to make the port accessible:
echo "17" > /sys/class/gpio/export

Then we set whether it is an input (in) or output (out):
echo "out" > /sys/class/gpio/gpio17/direction

Set the value like this:
echo 1 > /sys/class/gpio/gpio17/value

Read the status:
cat /sys/class/gpio/gpio17/value

This way we can control GPIO directly from the command line. If we use the www interface for control, we need to set the rights for all ports so that they can be controlled by a user other than root.
chmod 666 /sys/class/gpio/gpio17/value
chmod 666 /sys/class/gpio/gpio17/direction
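To keep the web interface simple, it can help to wrap the sysfs commands above in one small helper script that the PHP page (or anything else) can call. This is just a sketch following the same sysfs approach; the script name and path are made up:

#!/bin/sh
# /usr/local/bin/gpiosh <pin> <in|out|0|1|read> - tiny wrapper around /sys/class/gpio
PIN=$1; CMD=$2; GPIO=/sys/class/gpio
[ -d "$GPIO/gpio$PIN" ] || echo "$PIN" > "$GPIO/export"      # export the pin if not already exported
case "$CMD" in
  in|out) echo "$CMD" > "$GPIO/gpio$PIN/direction" ;;        # set direction
  0|1)    echo "$CMD" > "$GPIO/gpio$PIN/value" ;;            # set output value
  read)   cat "$GPIO/gpio$PIN/value" ;;                      # read the current value
esac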

Saturday, May 3, 2025

How to create a template on XCP-ng with XenOrchestra

"In this post I will show you how to create a template in XenOrchestra and using an image we created and customized ourself. " ... full blog post is available at https://blog.bufanda.de/how-to-create-a-template-on-xcp-ng-with-xenorchestra/

Thursday, February 13, 2025

VMware vSAN ESA on Cisco UCS - TCP Connection Half Open Drop Rate

During the investigation of high disk response times in one VM using vSAN storage, I saw a strange vSAN metric (TCP Connection Half Open Drop Rate).

What is it?

I have opened a support ticket with VMware Support (2025-02-13) and started my own troubleshooting in parallel.

 

vSAN ESA - TCP Connection Half Open Drop issue

Here is the screenshot of vSAN ESA - Half Open Drop Rate over 50% on some vSAN Nodes ...

vSAN ESA - Half Open Drop Rate over 50% on some vSAN Nodes

Physical infrastructure schema

Here is the physical infrastructure schema of VMware vSAN ESA cluster ...

The schema of Physical infrastructure

Virtual Networking schema

Here is the virtual networking schema of a VMware vSphere ESXi host (vSAN node) participating in the vSAN ESA cluster ...

Virtual Networking of ESXi Host (vSAN Node)

vSAN Cluster state

  • ESX01 dcserv-esx05 192.168.123.21 (agent)  [56% half-open drop]
  • ESX02 dcserv-esx06 192.168.123.22 (backup) [98% half-open drop]
  • ESX03 dcserv-esx07 192.168.123.23 (agent)  [54% half-open drop]
  • ESX04 dcserv-esx08 192.168.123.24 (agent)  [0% half-open drop]
  • ESX05 dcserv-esx09 192.168.123.25 (master) [0% half-open drop] but once per some time (hour or so) 42% - 49% drop
  • ESX06 dcserv-esx10 192.168.123.26 (agent)  [0% half-open drop]

Do I have a problem? I’m not certain, but it doesn’t appear to be the case.

I have seen high virtual disk latency on a VM (a Docker host with a single NVMe vDisk) with a storage load of less than 12,000 IOPS (IOPS limit set to 25,000), so that was the reason why I checked the vSAN ESA infrastructure deeper and found the TCP Half Open Drop "issue".

High vDisk (vNVMe) response times in first week of February

However, IOmeter in a Windows server with a single SCSI vDisk on the SCSI0:0 adapter is able to generate almost 25,000 IOPS @ 0.6 ms response time with a 28.5KB-100%_read-100%_random storage pattern and 12 workers (threads).

12 workers on SCSI vDisk - we see performance of 25,000 IOPS @ 0.6 ms response time
 
It is worth mentioning that approximately 2,600 IOPS (512B I/O size) to 1,400 IOPS (1MB I/O size) per storage worker is an "artificial" throughput limit not only of vSAN but of any shared enterprise storage, and for good reason (the explanation of the reason is another topic). It is therefore essential to use more workers (threads, outstanding I/Os) to achieve higher performance/throughput. Below is the performance result of a single worker (thread) with a 4KB I/O size.
 
Single worker (thread) with 4KB I/O size
 
So, let's use more workers (more storage threads = leveraging a higher queue depth = higher parallelization) and test how many 28.5KB IOPS we can achieve on a single vDisk.
 
With 64 workers (IOmeter_64workers_28.5KB_IO_100%_read_100%_random) I can generate 108,000 IOPS @ 0.6 ms response time.
 
64 workers on SCSI vDisk we see performance of 108,000 IOPS @ 0.6 ms response time
 
It is important to mention that all the tests above were done on a SCSI vDisk on a PVSCSI adapter, which has a queue depth of 256, so performance can theoretically scale up to 256 workers if the storage subsystem allows it.
 
However, if we use 128 workers (IOmeter_64workers_28.5KB_IO_100%_read_100%_random) we can see that the storage subsystem does not handle it: performance of 98,000 IOPS is even lower than with 64 workers, and the response time increases to 1.3 ms.
 
128 workers on SCSI vDisk we see performance of 98,300 IOPS @ 1.3 ms response time
 
If we use the same storage workload with 128 workers (IOmeter_64workers_28.5KB_IO_100%_read_100%_random) but with an NVMe vDisk instead of a SCSI vDisk, we can see that the storage subsystem can handle 108,000 IOPS @ 1.2 ms, but it is still worse performance quality than 64 workers on a SCSI vDisk from the response time perspective (1.2 ms vs 0.6 ms).
 
128 workers on NVMe vDisk we see performance of 108,000 IOPS @ 1.2 ms response time
 
If we test 64 workers on NVMe vDisk we see performance of 110,000 IOPS @ 0.6 ms response time.
 
64 workers on NVMe vDisk we see performance of 110,000 IOPS @ 0.6 ms response time
 
Anyway, all the tests above show pretty good storage performance on a vSAN ESA cluster experiencing the TCP Connection Half Open Drop Rate.

Network Analysis - packet capturing

What is happening on a vSAN node (dcserv-esx06) in maintenance mode with all vSAN storage migrated out of the node?

[root@dcserv-esx06:/usr/lib/vmware/vsan/bin] pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'src host 192.168.123.22 and tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'
The name of the uplink is vmnic4.
The session capture point is UplinkRcvKernel,UplinkSndKernel.
pktcap: The output file is -.
pktcap: No server port specifed, select 30749 as the port.
pktcap: Local CID 2.
pktcap: Listen on port 30749.
pktcap: Main thread: 305300921536.
pktcap: Dump Thread: 305301452544.
pktcap: The output file format is pcapng.
pktcap: Recv Thread: 305301980928.
pktcap: Accept...
reading from file -pktcap: Vsock connection from port 1032 cid 2.
, link-type EN10MB (Ethernet), snapshot length 65535
09:19:52.104211 IP 192.168.123.22.52611 > 192.168.123.23.2233: Flags [SEW], seq 2769751215, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 401040956 ecr 0], length 0
09:20:52.142511 IP 192.168.123.22.55264 > 192.168.123.23.2233: Flags [SEW], seq 3817033932, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 1805625573 ecr 0], length 0
09:21:52.182787 IP 192.168.123.22.57917 > 192.168.123.23.2233: Flags [SEW], seq 2055691008, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 430011832 ecr 0], length 0
09:22:26.956218 IP 192.168.123.22.59456 > 192.168.123.23.2233: Flags [SEW], seq 3524784519, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 2597182302 ecr 0], length 0
09:22:52.225550 IP 192.168.123.22.60576 > 192.168.123.23.2233: Flags [SEW], seq 3089565460, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 378912106 ecr 0], length 0
09:23:52.397431 IP 192.168.123.22.63229 > 192.168.123.23.2233: Flags [SEW], seq 2552721354, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 2409421282 ecr 0], length 0
09:24:52.436734 IP 192.168.123.22.12398 > 192.168.123.23.2233: Flags [SEW], seq 3269754737, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3563144147 ecr 0], length 0
09:25:52.476565 IP 192.168.123.22.15058 > 192.168.123.23.2233: Flags [SEW], seq 1510936927, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 1972989571 ecr 0], length 0
09:26:52.515032 IP 192.168.123.22.17707 > 192.168.123.23.2233: Flags [SEW], seq 262766144, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3787605572 ecr 0], length 0
09:27:52.554904 IP 192.168.123.22.20357 > 192.168.123.23.2233: Flags [SEW], seq 2099691233, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 2472387791 ecr 0], length 0
09:28:52.598409 IP 192.168.123.22.23017 > 192.168.123.23.2233: Flags [SEW], seq 1560369055, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 688302913 ecr 0], length 0
09:29:52.641938 IP 192.168.123.22.25663 > 192.168.123.23.2233: Flags [SEW], seq 394113563, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3836880073 ecr 0], length 0
09:30:52.682276 IP 192.168.123.22.28221 > 192.168.123.23.2233: Flags [SEW], seq 4232787521, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 830544087 ecr 0], length 0
09:31:52.726506 IP 192.168.123.22.30871 > 192.168.123.23.2233: Flags [SEW], seq 3529232466, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3037414646 ecr 0], length 0
09:32:52.768689 IP 192.168.123.22.33520 > 192.168.123.23.2233: Flags [SEW], seq 3467993307, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3716244554 ecr 0], length 0
09:33:52.809641 IP 192.168.123.22.36184 > 192.168.123.23.2233: Flags [SEW], seq 2859309873, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 1556603624 ecr 0], length 0
09:34:52.849282 IP 192.168.123.22.38830 > 192.168.123.23.2233: Flags [SEW], seq 891574849, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 226049490 ecr 0], length 0
09:35:52.889434 IP 192.168.123.22.41487 > 192.168.123.23.2233: Flags [SEW], seq 1629372626, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 100385827 ecr 0], length 0
09:36:52.931192 IP 192.168.123.22.44140 > 192.168.123.23.2233: Flags [SEW], seq 3898717755, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3230029896 ecr 0], length 0
09:37:52.972758 IP 192.168.123.22.46788 > 192.168.123.23.2233: Flags [SEW], seq 3798420138, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 1400467195 ecr 0], length 0
09:38:53.013565 IP 192.168.123.22.49449 > 192.168.123.23.2233: Flags [SEW], seq 1759807546, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 1072184991 ecr 0], length 0
09:39:53.055394 IP 192.168.123.22.52096 > 192.168.123.23.2233: Flags [SEW], seq 2996482935, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3573008833 ecr 0], length 0
09:40:53.095123 IP 192.168.123.22.54754 > 192.168.123.23.2233: Flags [SEW], seq 103237119, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3275581229 ecr 0], length 0
09:41:53.136593 IP 192.168.123.22.57408 > 192.168.123.23.2233: Flags [SEW], seq 2105630912, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 1990595855 ecr 0], length 0
09:42:53.178033 IP 192.168.123.22.60054 > 192.168.123.23.2233: Flags [SEW], seq 4245039293, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 296668711 ecr 0], length 0
09:43:38.741557 IP 192.168.123.22.62070 > 192.168.123.23.2233: Flags [SEW], seq 343657957, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3406471577 ecr 0], length 0
09:43:53.219844 IP 192.168.123.22.62713 > 192.168.123.23.2233: Flags [SEW], seq 452468561, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3555078978 ecr 0], length 0
09:44:53.264107 IP 192.168.123.22.11779 > 192.168.123.23.2233: Flags [SEW], seq 3807775128, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3836709718 ecr 0], length 0
09:45:53.306117 IP 192.168.123.22.14431 > 192.168.123.23.2233: Flags [SEW], seq 3580778695, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3478626421 ecr 0], length 0
09:46:53.348438 IP 192.168.123.22.17083 > 192.168.123.23.2233: Flags [SEW], seq 1098229669, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 2219974257 ecr 0], length 0
09:47:53.386992 IP 192.168.123.22.19737 > 192.168.123.23.2233: Flags [SEW], seq 1338972264, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 708281300 ecr 0], length 0
09:48:53.426861 IP 192.168.123.22.22389 > 192.168.123.23.2233: Flags [SEW], seq 3973038592, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3153895628 ecr 0], length 0
09:49:53.469640 IP 192.168.123.22.25046 > 192.168.123.23.2233: Flags [SEW], seq 2367639206, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3155172682 ecr 0], length 0
09:50:53.510996 IP 192.168.123.22.27703 > 192.168.123.23.2233: Flags [SEW], seq 515312838, win 65535, options [mss 8960,nop,wscale 9,sackOK,TS val 3434645295 ecr 0], length 0

How does TCP SYN/SYN-ACK behave between DCSERV-ESX06 and other vSAN nodes?

The ESXi command to sniff TCP SYN from DCSERV-ESX06 (192.168.123.22) to DCSERV-ESX07 (192.168.123.23) is

pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'src host 192.168.123.22 and dst host 192.168.123.23 and tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'

Command to sniff TCP SYN-ACK is

pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'src host 192.168.123.23 and dst host 192.168.123.22 and tcp[tcpflags] & (tcp-syn|tcp-ack) = (tcp-syn|tcp-ack)' 

Here are observations and screenshots from the sniffing exercise.

No new TCP connections have been initiated between DCSERV-ESX06 (backup vSAN node) and DCSERV-ESX05 (agent vSAN node) in some limited sniffing time (several minutes).

Between DCSERV-ESX06 (192.168.123.22, backup vSAN node) and DCSERV-ESX07 (192.168.123.23, agent vSAN node) a new TCP connection is established (SYN/SYN-ACK) every minute.

No new TCP connections have been initiated between DCSERV-ESX06 (192.168.123.22, backup vSAN node) and DCSERV-ESX08 (192.168.123.24, agent vSAN node) in some limited sniffing time (several minutes).

No new TCP connections have been initiated between DCSERV-ESX06 (192.168.123.22, backup vSAN node) and DCSERV-ESX09 (192.168.123.25, agent vSAN node) in some limited sniffing time (several minutes).

No new TCP connections have been initiated between DCSERV-ESX06 (192.168.123.22, backup vSAN node) and DCSERV-ESX10 (192.168.123.26, agent vSAN node) in some limited sniffing time (several minutes).

Interesting observation

A new TCP connection between DCSERV-ESX06 (192.168.123.22, backup vSAN node) and DCSERV-ESX07 (192.168.123.23, agent vSAN node) is usually established (SYN/SYN-ACK) every minute.

Why is this happening only between DCSERV-ESX06 (backup node) and DCSERV-ESX07 (agent node) and not with the other nodes? I do not know.

Further TCP network troubleshooting

The next step is to collect TCP SYN, TCP SYN/ACK, TCP stats, and NET stats on DCSERV-ESX06 (the most "problematic" vSAN node) and DCSERV-ESX10 (a non-"problematic" vSAN node) into files. I will capture data for one hour (60 minutes) to be able to compare the number of SYN and SYN/ACK packets against the TCP and network statistics.

Capturing of TCP SYN

timeout -t 3600 pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0' > /tmp/dcserv-esx06_tcp-syn.dump

timeout -t 3600 pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0' > /tmp/dcserv-esx10_tcp-syn.dump

Capturing of TCP SYN/ACK

timeout -t 3600 pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'tcp[tcpflags] & (tcp-syn|tcp-ack) = (tcp-syn|tcp-ack)' > /tmp/dcserv-esx06_tcp-syn_ack.dump

timeout -t 3600 pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'tcp[tcpflags] & (tcp-syn|tcp-ack) = (tcp-syn|tcp-ack)' > /tmp/dcserv-esx10_tcp-syn_ack.dump

Capturing of TCP Statistics

for i in $(seq 60); do { date; vsish  -e get /net/tcpip/instances/defaultTcpipStack/stats/tcp; }  >> /tmp/dcserv-esx06_tcp_stats; sleep 60; done

for i in $(seq 60); do { date; vsish  -e get /net/tcpip/instances/defaultTcpipStack/stats/tcp; }  >> /tmp/dcserv-esx10_tcp_stats; sleep 60; done 

Capturing of Network Statistics

net-stats captures for 60 minutes: 30-second intervals x 120 iterations = 3,600 seconds = 60 minutes

for i in $(seq 120); do { date; net-stats -A -t WwQqihVv -i 30; } >> /tmp/dcserv-esx06_netstats ; done

for i in $(seq 120); do { date; net-stats -A -t WwQqihVv -i 30; } >> /tmp/dcserv-esx10_netstats ; done

Output Files Comparison

ESX06
tcpdump
15:48:32.422347 - 16:48:16.542078: 199 TCP SYN
15:49:16.434140 - 16:48:46.533262: 199 TCP SYN/ACK

Fri Mar  7 15:49:10 UTC 2025
tcp_statistics
   connattempt:253432751
   accepts:3996127
   connects:8341861
   drops:4778493
   conndrops:247894569
   minmssdrops:0
   closed:257671058

Fri Mar  7 16:48:10 UTC 2025
tcp_statistics
   connattempt:253587720
   accepts:3997071
   connects:8345071
   drops:4781004
   conndrops:248047267
   minmssdrops:0
   closed:257827086

tcp_statistics difference
   connattempt:154969
   accepts:944
   connects:3210
   drops:2511
   conndrops:152698
   minmssdrops:0
   closed:156028

ESX10
tcpdump
15:49:44.554242 - 16:49:16.544940: 179 TCP SYN
15:50:16.441776 - 16:49:54.142493: 185 TCP SYN/ACK

Fri Mar  7 15:50:49 UTC 2025
tcp_statistics
   connattempt:826534
   accepts:2278888
   connects:3105348
   drops:1414905
   conndrops:74
   minmssdrops:0
   closed:3338137

Fri Mar  7 16:49:49 UTC 2025
tcp_statistics
   connattempt:826864
   accepts:2279789
   connects:3106579
   drops:1415439
   conndrops:74
   minmssdrops:0
   closed:3339470

Difference
   connattempt:330
   accepts:901
   connects:1231
   drops:534
   conndrops:0
   minmssdrops:0
   closed:1333
 

What does it mean? I don't know. I have a VMware support case opened and am waiting for their analysis.

There were various calls with various parts of VMware support, but here is the first meaningful response from VMware support (2025-04-03, 50 days after opening the support ticket):

Your capture is highly filtered and many details are missing. Please consider the following points when collecting the capture:

  1. Use the pktcap-uw command and capture in .pcap format. Collecting all the data in a single file will help us trace packets to specific connections.
  2. Capture all TCP packets, not just SYN/SYN-ACK. Half-open drops are usually caused by RESET packets
  3. TCP uses the same set of statistics for the entire network stack. Therefore, we must collect packets from all vmk interfaces in the default network stack, or from a common uplink.

You can use a command similar to below one:

pktcap-uw --vmk <vmk> --proto 0x6 --dir 2 -o <file.pcap>
pktcap-uw --uplink <vmnic> --proto 0x6 --dir 2 -o <file.pcap>

OK. No problem. Let's do a packet capture of everything going through the uplink used by vSAN.

My vSAN ESA vmkernel interface is pinned to vmnic4, therefore I used the following command:

cd /vmfs/volumes/MY-DATASTORE
pktcap-uw --uplink vmnic4 --proto 0x6 --dir 2 -o netdump.pcap

It is good to monitor datastore usage, as it dumps 30 GB of network traffic in 4 minutes.
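A trivial way to keep an eye on the dump from a second SSH session (the ESXi busybox shell has no watch command, so a loop has to do):

while true; do ls -lh /vmfs/volumes/MY-DATASTORE/netdump.pcap; sleep 30; done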

Another meaningful communication with VMware support (2025-05-08 - 85 days after opening a support ticket)

VMware support asked me for another packet capture. They want the packet capture not only from the uplink used for vSAN traffic (vmnic4), but also from uplinks vmnic0, vmnic1, and vmnic5, which carry vSphere management traffic.

Below is the one-liner I used to capture network traffic and split it into ~2 GB (2,000 MB) files as requested by VMware support.

cd /vmfs/volumes/MY-DATASTORE 
 
timeout -t 360 pktcap-uw --uplink vmnic0 --proto 0x6 --dir 2 -o - | tcpdump-uw -r - -w vmnic0-pcap -C 2000 & \
timeout -t 360 pktcap-uw --uplink vmnic1 --proto 0x6 --dir 2 -o - | tcpdump-uw -r - -w vmnic1-pcap -C 2000 & \
timeout -t 360 pktcap-uw --uplink vmnic4 --proto 0x6 --dir 2 -o - | tcpdump-uw -r - -w vmnic4-pcap -C 2000 & \
timeout -t 360 pktcap-uw --uplink vmnic5 --proto 0x6 --dir 2 -o - | tcpdump-uw -r - -w vmnic5-pcap -C 2000 &

Explanation of the one-liner above:

  • timeout -t 360 : limits packet capturing to 6 minutes to keep the overall packet capture size below 30 GB
  • -o - : sends raw pcap data to stdout.
  • tcpdump-uw -r - : reads the pcap data from stdin.
  • -w vmnicX-pcap : base name of the output file(s), one per uplink.
  • -C 2000 : splits output files every 2,000 MB (~2 GB).

I've sent this new packet capture to VMware Support again and waited for their response.

Another meaningful communication with VMware support (2025-05-15 - 92 days after opening a support ticket)

VMware response ...

Hello David,

Etcd is the misbehaving application. Looks like some of the hosts (100.68.81.23 and 100.68.81.21) dont have etcd configured and this host is trying to reach them. Can you help check why this configuration is missing on some of the hosts.

34 0.087251 0.000057000 100.68.81.23 100.68.81.22 2380 → 58192 [RST, ACK] Seq=0 Ack=2589825032 Win=0 Len=0 34
35 0.087370 0.000119000 100.68.81.23 100.68.81.22 2380 → 58193 [RST, ACK] Seq=0 Ack=1816019462 Win=0 Len=0 35
38 0.093287 0.000060000 100.68.81.21 100.68.81.22 2380 → 58194 [RST, ACK] Seq=0 Ack=3524013708 Win=0 Len=0 38
39 0.093407 0.000120000 100.68.81.21 100.68.81.22 2380 → 58195 [RST, ACK] Seq=0 Ack=2552292164 Win=0 Len=0 39
42 0.186674 0.000065000 100.68.81.23 100.68.81.22 2380 → 58196 [RST, ACK] Seq=0 Ack=428680618 Win=0 Len=0 42
43 0.186793 0.000119000 100.68.81.23 100.68.81.22 2380 → 58197 [RST, ACK] Seq=0 Ack=1113298373 Win=0 Len=0 43
46 0.193167 0.000056000 100.68.81.21 100.68.81.22 2380 → 58198 [RST, ACK] Seq=0 Ack=1739165024 Win=0 Len=0 46
47 0.193286 0.000119000 100.68.81.21 100.68.81.22 2380 → 58199 [RST, ACK] Seq=0 Ack=3827463043 Win=0 Len=0 47
50 0.286874 0.000073000 100.68.81.23 100.68.81.22 2380 → 58201 [RST, ACK] Seq=0 Ack=1641220058 Win=0 Len=0 50
51 0.286874 0.000000000 100.68.81.23 100.68.81.22 2380 → 58200 [RST, ACK] Seq=0 Ack=1825411290 Win=0 Len=0 51

./var/run/log/etcd.log:1556:2025-02-13T12:59:27Z Wa(4) etcd[28532348]: health check for peer 7312e1f21f195833 could not connect: dial tcp 100.68.81.21:2380: connect: connection refused
./var/run/log/etcd.log:1557:2025-02-13T12:59:30Z Wa(4) etcd[28532348]: health check for peer 5c34e4f236d566f0 could not connect: dial tcp 100.68.81.23:2380: connect: connection refused
./var/run/log/etcd.log:1558:2025-02-13T12:59:30Z Wa(4) etcd[28532348]: health check for peer 5c34e4f236d566f0 could not connect: dial tcp 100.68.81.23:2380: connect: connection refused
./var/run/log/etcd.log:1560:2025-02-13T12:59:32Z Wa(4) etcd[28532348]: health check for peer 7312e1f21f195833 could not connect: dial tcp 100.68.81.21:2380: connect: connection refused
./var/run/log/etcd.log:1561:2025-02-13T12:59:32Z Wa(4) etcd[28532348]: health check for peer 7312e1f21f195833 could not connect: dial tcp 100.68.81.21:2380: connect: connection refused
./var/run/log/etcd.log:1562:2025-02-13T12:59:35Z Wa(4) etcd[28532348]: health check for peer 5c34e4f236d566f0 could not connect: dial tcp 100.68.81.23:2380: connect: connection refused

My thought process ...
 
Interesting. Why is there any ETCD in my vSphere/vSAN deployment? AFAIK, ETCD is only used when vSphere with Tanzu (TKG, Supervisor Cluster, Workload Management) is enabled. But this is not my case. I have pure vSphere with vSAN enabled.
 
I was thinking about how I could help VMware support check why the ETCD configuration is missing on some of the hosts. Well, I think there should not be any ETCD in my deployment at all. So, let's check the ETCD status on all 6 ESXi hosts in my cluster.
 
I used the following three commands on each ESXi host ...
 
ls -la /var/run/log/etcd.log    # Does the etcd log file exist?
tail -f /var/run/log/etcd.log   # What is the last etcd.log entry?
ps | grep etcd                  # Is the etcd process running on the ESXi host?
 
... and summarized the findings below.

DCSERV-ESX05
etcd process: not running
last log entry: 2025-01-23T04:57:01Z In(6) etcd[19020602]: started streaming with peer 28f1baf9f89e1c97 (writer)

DCSERV-ESX06
etcd process: is running ... Why?
last log entry: 2025-05-15T21:05:20Z Wa(4) etcd[44266208]: health check for peer 5c34e4f236d566f0 could not connect: dial tcp 100.68.81.23:2380: connect: connection refused
 
DCSERV-ESX07
etcd process: not running
last log entry: 2024-12-18T17:26:22Z In(6) etcd[8404413]: started streaming with peer 549aa92459681df0 (writer)
 
DCSERV-ESX08
etcd process: not running
last log entry: 2024-11-25T15:26:45Z In(6) etcd[2115318]: stopped peer 71ecff499039aa21
 
DCSERV-ESX09
etcd process: is running ... Why?
last log entry: 2025-05-15T21:11:53Z Db(7) etcd[25597540]: start time = 2025-05-15 21:11:53.01956 +0000 UTC m=+20117.190157001, time spent = 120µs, remote = 100.68.81.25:28729, response type = /etcdserverpb.Cluster/MemberList, request count = -1, request size = -1, response count = -1, response size = -1, request content =
 
DCSERV-ESX10
etcd process: not running
last log entry: none, log file empty : -rw-------    1 root     root             0 Nov 21 15:35 /var/run/log/etcd.log

What does this all mean?

ETCD is running on two ESXi hosts: DCSERV-ESX06 and DCSERV-ESX09
 
TCP Connection Half Open Drop Rate is observed on three ESXi hosts: DCSERV-ESX05 (~55%), DCSERV-ESX06 (~98%), DCSERV-ESX07 (~55%)

The only common denominator is DCSERV-ESX06.
 
It does not seem to correlate.
 
I would like to get answers to the following questions:
  • Why is ETCD running on two ESXi hosts when I have just vSphere and vSAN? There is no Tanzu (aka VMware vSphere Kubernetes Service) enabled.
  • I realized that the two running ETCDs could be associated with two vCLS pods, and when consulting ChatGPT, I got the following answers:
    • In 8.0.2 and newer, VMware started shifting vCLS to “vCLS Pods”, running containers inside the VM, using a small internal container runtime.
    • VMware uses ETCD inside these pods as part of the vCLS control plane
    • vCLS Pods communicate over port 2380, which is etcd’s peer port

I will share my findings and thoughts with VMware support and wait for their response, because we cannot trust ChatGPT and vendor support is the main authority for their product. 

Another meaningful communication with VMware support (2025-05-23 - 100 days after opening a support ticket)

VMware response ...

just to follow up on previous mail

I checked this internally, etcd can run even if WCP/TKG isn't in use, this could be a 3 etcd node cluster, so may not be running on some hosts,

The number of half open drops are increasing because the connection requests are being denied by the other host as the service is not currently running on them.

Can you send me the output of the below command on the vcenter

/usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status

Can you also upload a full vcenter log bundle along with the host logs

What is the command /usr/lib/vmware/clusterAgent/bin/clusterAdmin?

The clusterAdmin tool in VMware ESXi is a command-line utility used for managing and administering vSphere clustering functionality, particularly vSphere HA (High Availability) and DRS (Distributed Resource Scheduler) operations at the host level. This tool is part of the cluster agent infrastructure that runs on each ESXi host and handles communication between the host and vCenter Server for cluster-related operations. 

Primary Functions: 

  • Managing cluster membership and host participation in vSphere clusters
  • Configuring and troubleshooting vSphere HA settings on individual hosts
  • Handling cluster state information and synchronization
  • Managing resource pool configurations and DRS policies
  • Performing cluster-related diagnostic operations


Common Use Cases:

  • Troubleshooting cluster connectivity issues
  • Manually triggering cluster reconfiguration operations
  • Checking cluster agent status and health
  • Resetting cluster configuration when hosts become disconnected
  • Diagnosing HA or DRS failures


Typical Usage: The tool is usually invoked with various subcommands and parameters, such as:

  • Status checking operations
  • Configuration reset commands
  • Cluster membership management
  • Resource allocation adjustments

This utility is primarily intended for VMware support engineers and advanced administrators who need to perform low-level cluster troubleshooting or maintenance operations that aren't available through the vSphere Client interface. It's part of the internal clustering infrastructure and should be used carefully, typically only when directed by VMware support or when following specific troubleshooting procedures.

Well, that's the case. The VMware support engineer (TSE) was asking for command outputs, so here are the outputs from all ESXi hosts in the vSphere/vSAN cluster ...

dcserv-esx05

[root@dcserv-esx05:~] /usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status
{
   "state": "hosted",
   "cluster_id": "5bab0e84-305e-4966-ae6e-b9386c6b19f3:domain-c2051",
   "is_in_alarm": false,
   "alarm_cause": "",
   "is_in_cluster": true,
   "members": {
      "available": false
   }

}
[root@dcserv-esx05:~]

dcserv-esx06

[root@dcserv-esx06:~] /usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status
{
   "state": "hosted",
   "cluster_id": "5bab0e84-305e-4966-ae6e-b9386c6b19f3:domain-c2051",
   "is_in_alarm": true,
   "alarm_cause": "Timeout",
   "is_in_cluster": true,
   "members": {
      "available": false
   }

}
[root@dcserv-esx06:~]

dcserv-esx07

[root@dcserv-esx07:~] /usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status
{
   "state": "hosted",
   "cluster_id": "5bab0e84-305e-4966-ae6e-b9386c6b19f3:domain-c2051",
   "is_in_alarm": false,
   "alarm_cause": "",
   "is_in_cluster": true,
   "members": {
      "available": false
   }

}
[root@dcserv-esx07:~]

dcserv-esx08

[root@dcserv-esx08:~] /usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status
{
   "state": "standalone",
   "cluster_id": "",
   "is_in_alarm": false,
   "alarm_cause": "",
   "is_in_cluster": false,
   "members": {
      "available": false
   }
}
[root@dcserv-esx08:~]

dcserv-esx09

[root@dcserv-esx09:~] /usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status
{
   "state": "hosted",
   "cluster_id": "5bab0e84-305e-4966-ae6e-b9386c6b19f3:domain-c2051",
   "is_in_alarm": false,
   "alarm_cause": "",
   "is_in_cluster": true,
   "members": {
      "available": true
   },

   "namespaces": [
      {
         "name": "root",
         "up_to_date": true,
         "members": [
            {
               "peer_address": "dcserv-esx09.dcserv.cloud:2380",
               "api_address": "dcserv-esx09.dcserv.cloud:2379",
               "reachable": true,
               "primary": "yes",
               "learner": false
            }
         ]
      }
   ]
}

[root@dcserv-esx09:~]

dcserv-esx10

[root@dcserv-esx10:~] /usr/lib/vmware/clusterAgent/bin/clusterAdmin cluster status
{
   "state": "standalone",
   "cluster_id": "",
   "is_in_alarm": false,
   "alarm_cause": "",
   "is_in_cluster": false,
   "members": {
      "available": false
   }
}
[root@dcserv-esx10:~]

It seems to me that the output above means the following:

  • 4 nodes (dcserv-esx05, dcserv-esx06, dcserv-esx07, dcserv-esx09) are in cluster
  • only dcserv-esx09 has members available
  • dcserv-esx06 is in alarm state and alarm cause is Timeout
    • all other nodes are not in alarm state
  • when I check whether etcd is running (ps | grep etcd), it runs only on the following two ESXi hosts
    • dcserv-esx06, dcserv-esx09

The VMware TSE mentioned that ... "etcd can run even if WCP/TKG isn't in use, this could be a 3 etcd node cluster". However, I see the etcd service running only on two of six ESXi hosts. The TSE believes there should be 3 nodes running. This leads to the following questions ...

Q1: What is the purpose of 3-node ETCD in vSphere/vSAN cluster?

Q2: Why only 2-nodes are running?

Anyway, I do not understand the /usr/lib/vmware/clusterAgent/bin/clusterAdmin tool. It is a low-level VMware internal tool. So let's wait for the next VMware Support follow-up.

System logs from vCenter along with the host logs have been exported and uploaded to the VMware support case. I'm looking forward to seeing whether this helps VMware support identify the root cause.

Another meaningful communication with VMware support (2025-06-12 - 120 days after opening a support ticket)

The VMware Support team opened a PR (Problem Report) with the VMware Engineering team.

They asked me to run ... 

python3 dkvs-cleanup.py -d ignore -w skip -s norestart

... however, I never got an email notification about the support case and just received an email that my case was closed and that I could fill in a survey about my experience with the case.

To be honest, my experience was far from perfect, and I was not able to re-open the closed ticket.

I give up, because it seems that the problem does not have any negative business impact and I have no more energy to find the root cause. 

I have opened another VMware support ticket (2025-07-30, 168 days after opening the original support ticket) and realized that I had not run the suggested command.

I have stored the provided script at https://github.com/davidpasek/vmware-gss-scripts/blob/main/dkvscleanup32.py

Below is the output from the provided script ...

 [root@dcserv-esx05:/tmp] python /tmp/dkvscleanup32.py -d ignore -w skip -s norestart  
 Traceback (most recent call last):  
  File "/tmp/dkvscleanup32.py", line 9, in <module>  
   import psycopg2  
 ModuleNotFoundError: No module named 'psycopg2'  
 [root@dcserv-esx05:/tmp]  

This does not lead anywhere and the support case is closed, so the root cause is unknown; but since it does not have any business impact, I give up.


Wednesday, February 12, 2025

VMware vs OpenStack

Here are screenshots from a Canonical webcast.

Feature comparison

OpenStack technological stack

System containers (LXD) vs Application Containers (Docker)

Thursday, January 30, 2025

vSphere 8 consumption GUI

Source: https://www.linkedin.com/posts/katarinawagnerova_vsphere-kubernetes-vms-ugcPost-7213567854271492099-ygOq?utm_source=share&utm_medium=member_ios

Infrastructure & Application Monitoring with Checkmk

Source: https://checkmk.com/ 


docker container run -dit -p 8080:5000 -p 8000:8000 --tmpfs /opt/omd/sites/cmk/tmp:uid=1000,gid=1000 -v monitoring:/omd/sites --name monitoring -v /etc/localtime:/etc/localtime:ro --restart always checkmk/check-mk-cloud:2.3.0p24
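With the port mapping above, the Checkmk UI should come up at http://localhost:8080/cmk/ (the image creates a site named cmk). As far as I remember, the initial cmkadmin password is printed in the container log, so something like the following should reveal it (verify against the Checkmk documentation):

docker container logs monitoring 2>&1 | grep -i password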
 
 

VCF - nested ESX

Source: https://mhvmw.wordpress.com/2024/12/29/part-iii-beginners-guide-using-nested-esxi-hosts-for-a-vcf-5-2-1-home-lab/

 

Shodan - Search Engine for the Internet of Everything

Search Engine for the Internet of Everything

https://www.shodan.io/


Shodan is the world's first search engine for Internet-connected devices. Discover how Internet intelligence can help you make better decisions.

Network Monitoring Made Easy

Within 5 minutes of using Shodan Monitor you will see what you currently have connected to the Internet within your network range and be set up with real-time notifications when something unexpected shows up.

ČRa new data center

Source: https://www.cra.cz/tiskove-centrum/datova-centra/cra-se-stanou-jednickou-mezi-provozovateli-datovych-center-ziskaly-uzemni-rozhodnuti-pro-nove-dc

CRA will become the leader among data center operators; they have obtained the zoning permit for the new DC

České Radiokomunikace (CRA) are finishing preparations for one of the most ambitious digital infrastructure projects in the Czech Republic, a new data center. Another important step has been achieved: CRA obtained the zoning permit. Within two years, one of the largest facilities of its kind, not only in the Czech Republic but also in Europe, will be built at the Praha Zbraslav site, with a capacity of over 2,500 server racks and a power input of 26 megawatts.

"The main attributes of our project are innovation, sustainability, efficiency, reliability and security. Our goal is to bring to the Czech Republic large companies that until now could not use data center services here for capacity reasons, given their size or the occupancy of existing facilities," explains Miloš Mastník, CEO of České Radiokomunikace. "We now have a valid zoning permit, which means we can move forward again with the final preparations," adds Miloš Mastník.

The data center will cover 5,622 m², with a building measuring 320 × 45 meters, and will be built on revitalized land where three CRA medium-wave radio transmitters originally stood. It will offer a capacity of 2,500 server positions (racks) with a 26 MW power input fed from two independent routes for secure data storage and management. The premises can be adapted to the specific needs of individual customers. Each room will also have its own office and storage space, making the center a comprehensive solution for companies' technology needs.

The data center will meet the strictest technological and environmental standards. It will be fully powered from renewable sources, specifically from solar panels placed on the building's roof. Thanks to its strategic location, an innovative cooling system with a GWP value below 10, the use of waste heat, and optimized power capacity, operational efficiency will be at a top level with a PUE (Power Usage Effectiveness) of 1.25. For example, panel floors will be used for better air distribution and hygiene standards, which will improve cooling and at the same time allow a power load of up to 20 kW per rack without additional cooling reinforcement.

CRA plans to achieve LEED Gold certification and comply with ASHRAE standards; the project is being developed in line with ESG principles.

The project has received support from the Ministry of Industry and Trade, which signed a memorandum of understanding with CRA. The memorandum sets out a framework for cooperation between the state and CRA, within their powers and applicable regulations, with the aim of supporting digital transformation, technology research and development, and securing the infrastructure necessary for further economic growth.

CRA already operates eight data centers in the Czech Republic, for example at Žižkov, Strahov and Cukrák in Prague, as well as in Brno, Ostrava, Pardubice and Zlín. Interest in renting capacity keeps growing, so CRA opened a new data hall this spring at the Cukrák transmitter site, purchased the Lužice data center, and is preparing the modernization and expansion of the DC Tower at Žižkov.

The Zbraslav data center is to be completed in 2026 in cooperation with the parent company Cordiant Digital Infrastructure. CRA plans to obtain the building and other necessary permits from the various regulatory authorities in spring 2025. The construction itself will take approximately 24 months. Thanks to the already existing infrastructure, including fiber-optic connectivity, road access and available power, the project can be realized quickly.

 

Tarsnap - Online backups for the truly paranoid

Source: http://www.tarsnap.com/

 

NAS Performance: NFS vs. SMB vs. SSHFS | Jake’s Blog

Source: https://blog.ja-ke.tech/2019/08/27/nas-performance-sshfs-nfs-smb.html 

NAS Performance: NFS vs. SMB vs. SSHFS

This is a performance comparison of the three most useful protocols for network file shares on Linux with the latest software. I have run sequential and random benchmarks and tests with rsync. The main reason for this post is that I could not find a proper test that includes SSHFS.

NAS Setup

The hardware side of the server is based on a Dell mainboard with an Intel i3-3220, a fairly old 2-core / 4-thread CPU. It also does not support the AES-NI extensions (which would increase AES performance noticeably), so the encryption happens completely in software.

As storage, two HDDs in BTRFS RAID1 were used; it does not make a difference though, because the tests are staged to almost always hit the cache on the server, so only the protocol performance counts.

I installed Fedora 30 Server on it and updated it to the latest software versions.

Everything was tested over a local Gigabit Ethernet Network. The client is a quadcore desktop machine running Arch Linux, so this should not be a bottleneck.

SSHFS (also known as SFTP)

Relevant package/version: OpenSSH_8.0p1, OpenSSL 1.1.1c, sshfs 3.5.2

OpenSSH is probably running anyway on all servers, so this is by far the simplest setup: just install sshfs (FUSE-based) on the clients and mount it. It is also encrypted by default with ChaCha20-Poly1305. As a second test I chose AES128, because it is the most popular cipher; disabling encryption is not possible (without patching ssh). Then I added some mount options (suggested here) for convenience and ended up with:

sshfs -o Ciphers=aes128-ctr -o Compression=no -o ServerAliveCountMax=2 -o ServerAliveInterval=15 remoteuser@server:/mnt/share/ /media/mountpoint

NFSv4

Relevant package/version: Linux Kernel 5.2.8

The plaintext setup is also easy: specify the exports, start the server and open the ports. I used these options on the server: (rw,async,all_squash,anonuid=1000,anongid=1000)
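For reference, the corresponding /etc/exports line could look roughly like this (the client subnet is an assumption, not from the original post):

/mnt/share 192.168.1.0/24(rw,async,all_squash,anonuid=1000,anongid=1000)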

And mounted with: mount.nfs4 -v nas-server:/mnt/share /media/mountpoint

But getting encryption to work can be a nightmare: first, setting up Kerberos is more complicated than the other solutions, and then there is dealing with idmap on both server and client(s)… After that you can choose from different levels; I set sec=krb5p to encrypt all traffic for this test (most secure, slowest).

SMB3

Relevant package/version: Samba 4.10.6

The setup is mostly done by installing, creating the user DB, adding a share to smb.conf and starting the smb service. Encryption is disabled by default; for the encrypted test I set smb encrypt = required on the server globally. It then uses AES128-CCM (visible in smbstatus).

ID mapping on the client can simply be done as a mount option; I used this complete mount command:

mount -t cifs -o username=jk,password=xyz,uid=jk,gid=jk //nas-server/media /media/mountpoint

Test Methodology

The main test block was done with the flexible I/O tester (fio), written by Jens Axboe (current maintainer of the Linux block layer). It has many options, so I made a short script to run reproducible tests:

#!/bin/bash
OUT=$HOME/logs

fio --name=job-w --rw=write --size=2G --ioengine=libaio --iodepth=4 --bs=128k --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-write.log
sleep 5
fio --name=job-r --rw=read --size=2G --ioengine=libaio --iodepth=4 --bs=128K --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-read.log
sleep 5
fio --name=job-randw --rw=randwrite --size=2G --ioengine=libaio --iodepth=32 --bs=4k --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-randwrite.log
sleep 5
fio --name=job-randr --rw=randread --size=2G --ioengine=libaio --iodepth=32 --bs=4K --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-randread.log

The first two are classic sequential read/write tests, with a 128 KB block size and a queue depth of 4. The last two are small 4 KB random reads/writes, but with a 32-deep queue. The direct flag means direct IO, to make sure that no caching happens on the client.

For the real-world tests I used rsync in archive mode (-rlptgoD) and its built-in measurements:

rsync --info=progress2 -a sshfs/TMU /tmp/TMU

Synthetic Performance

Sequential

sequential read diagram

Most are maxing out the network; the only one falling behind in the read test is SMB with encryption enabled. Looking at the CPU utilization reveals that it uses only one core/thread, which causes a bottleneck here.

sequential write diagram

NFS handles the compute-intensive encryption better with multiple threads, but uses almost 200% CPU and gets a bit weaker in the write test.

SSHFS provides surprisingly good performance with both encryption options, almost the same as NFS or SMB in plaintext! It also puts less stress on the CPU, with up to 75% for the ssh process and 15% for sftp.

Random

4K random read diagram

On small random accesses NFS is the clear winner, and is very good even with encryption enabled. SMB is almost the same, but only without encryption. SSHFS is quite a bit behind.

4K random write diagram

NFS is still the fastest in plaintext, but again has a problem when combining writes with encryption. SSHFS is getting more competitive, even the fastest of the encrypted options, and overall in the middle.

random read latency diagram, random write latency diagram

The latency mostly resembles the inverse of the IOPS/bandwidth. The only notable point is the pretty good (low) write latency with encrypted NFS, which gets most requests done a bit faster than SSHFS in this case.

Real World Performance

This test consists of transferring a folder with rsync from/to the mounted share and a local tmpfs (RAM-backed). It contains the installation of a game (Trackmania United Forever) and is about 1.7 GB in size with 2,929 files total, so an average file size of 600 KB, but not evenly distributed.

mixed read diagram mixed write diagram

All in all, no big surprises here: NFS is fastest in plaintext, SSHFS is fastest with encryption. SMB is always somewhat behind NFS.

Conclusion

In trusted home networks NFS without encryption is the best choice on Linux for maximum performance. If you want encryption I would recommend SSHFS; it is a much simpler setup (compared to Kerberos), more CPU efficient and often only slightly slower than plaintext NFS. Samba/SMB is also not too far behind, but only really makes sense in a mixed (Windows/Linux) environment.

Thanks for reading, I hope it was helpful.