Pages

Monday, December 23, 2024

Write FreeBSD disk image to USB disk (/dev/da1)

Here is the dd command 

dd if=FreeBSD-14.2-RELEASE-amd64-memstick.img of=/dev/da1 bs=1M conv=sync status=progress
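
Before trusting the stick, it is worth verifying the write. The sketch below is not from the original note; it demonstrates the same dd write plus a cmp verification against throwaway files instead of the real /dev/da1, so it is safe to run anywhere.

```shell
# Demo of the dd write + verify pattern on temp files (stand-ins for the
# memstick image and /dev/da1), so nothing real gets overwritten.
tmpdir=$(mktemp -d)
img="$tmpdir/freebsd.img"     # stand-in for FreeBSD-14.2-RELEASE-amd64-memstick.img
target="$tmpdir/usb.img"      # stand-in for /dev/da1
# fabricate a 1 MiB "image"
dd if=/dev/urandom of="$img" bs=1024 count=1024 2>/dev/null
# same flags as the real write (status=progress omitted to keep output clean)
dd if="$img" of="$target" bs=1M conv=sync 2>/dev/null
# byte-for-byte verification
if cmp -s "$img" "$target"; then verify=ok; else verify=failed; fi
echo "verify: $verify"
rm -rf "$tmpdir"
```

On the real device, the equivalent check is cmp with -n set to the image size, comparing the image file against /dev/da1, since conv=sync pads the last block.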

Saturday, December 14, 2024

How to activate Windows 11?

Go to https://github.com/massgravel/Microsoft-Activation-Scripts and follow the instructions.

Or download Windows LTSC (Long-Term Servicing Channel) from https://massgrave.dev/windows_ltsc_links#win11-iot-enterprise-ltsc-2024 

Windows LTSC is intended for IoT and does not require a product key.

Tuesday, December 3, 2024

A problem occurred while reading the OVA File, TypeError - RHEL, CentOS 9

Are you deploying vCenter from a Red Hat workstation by any chance?

If so, try installing the libnsl package via the command dnf install libnsl, then try deploying again!

vCenter Server 8.0 appliance deployment fails on RHEL 9: while performing vCenter Server 8.0 deployment using the UI installer on the RHEL 9 operating system, the deployment wizard fails with an error message: A problem occurred while reading the OVA File: TypeError: Cannot read properties of undefined (reading 'length').

On the RHEL operating system, install the libnsl package using the command

dnf install libnsl

Make sure the required repositories are configured before running the command.

Source: https://www.dell.com/support/manuals/cs-cz/vmware-esxi-8.x/vmware_8.x_rn_pub/known-issues?guid=guid-ea80ce97-07db-402f-a99c-36109663f276&lang=en-us


#VCSA, #OVF, #OVA

Friday, November 29, 2024

VMware vSAN ESA - storage performance testing

I have just finished my first VMware vSAN ESA Plan, Design, Implement project and had a chance to test vSAN ESA performance. Every storage system should be stressed and tested before being put into production. VMware's software-defined hyperconverged storage (vSAN) is no different. It is even more important, because the server's CPU, RAM, and network are leveraged to emulate enterprise-class storage.

Sunday, November 17, 2024

ESXi update from CLI

Step 1: upload the VMware-ESXi-8.0U3b-24280767-depot.zip file to a datastore accessible by the host.

esxcli software sources profile list -d /vmfs/volumes/[datastore]/VMware-ESXi-8.0U3b-24280767-depot.zip

esxcli software profile update -d "/vmfs/volumes/[datastore]/VMware-ESXi-8.0U3b-24280767-depot.zip" -p ESXi-8.0U3b-24280767-standard
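
A hedged wrapper for the two steps above. The datastore name is an assumption (adjust for your host), and DRY_RUN=1 only prints the commands, so the sketch can be reviewed before running on a real ESXi host.

```shell
# Wrapper sketch for the ESXi depot update. DATASTORE is an assumed name;
# with DRY_RUN=1 every command is printed instead of executed.
DATASTORE="datastore1"   # assumption: substitute your datastore name
DEPOT="/vmfs/volumes/$DATASTORE/VMware-ESXi-8.0U3b-24280767-depot.zip"
PROFILE="ESXi-8.0U3b-24280767-standard"
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

# 1) list the image profiles contained in the depot
run esxcli software sources profile list -d "$DEPOT"
# 2) apply the standard profile
run esxcli software profile update -d "$DEPOT" -p "$PROFILE"
```

Set DRY_RUN=0 on the host itself once the printed commands look right.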


Saturday, November 16, 2024

VMware vCenter (VCSA) Update via shell command software-packages.py

Online update

cd /usr/lib/applmgmt/support/scripts

./software-packages.py stage --url --acceptEulas

./software-packages.py list --staged

./software-packages.py validate

./software-packages.py install

ISO update

Download the VCSA patch, which should end with FP.iso, from support.broadcom.com by selecting VC and the version.

Upload the file to a datastore and map it to the VCSA VM through the CD/DVD Drive option.

Patch the VCSA from CLI.

Run the following commands

software-packages.py stage --iso

software-packages.py list --staged

software-packages.py install --staged

Reboot the VCSA VM.

This should patch the VCSA.
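
The ISO sequence above, collected into one hedged sketch. The script path is the one from the online-update note; DRY_RUN=1 prints instead of executes, since the commands only make sense on a real VCSA shell with the FP.iso attached.

```shell
# Sketch of the VCSA ISO patch sequence. Assumes the FP.iso is already
# mapped to the VCSA CD/DVD drive; DRY_RUN=1 only prints the commands.
SOFTWARE_PACKAGES=/usr/lib/applmgmt/support/scripts/software-packages.py
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run "$SOFTWARE_PACKAGES" stage --iso       # stage packages from the mounted ISO
run "$SOFTWARE_PACKAGES" list --staged     # review what was staged
run "$SOFTWARE_PACKAGES" install --staged  # install, then reboot the VCSA VM
```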

Thursday, November 14, 2024

Mount SFTP share via sshfs

#!/bin/bash

sshfs david.pasek@gmail.com@sftp.virtix.cloud:./ ~/mnt/sftp -p 55022
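
A small guard around this mount is handy in a login script. This sketch is an assumption, not part of the original note: it creates the mount point if missing and leaves the sshfs line commented out so it runs without network access.

```shell
# Prepare the sshfs mount point; the real mount line stays commented out
# so this sketch is side-effect free apart from creating the directory.
MNT="${MNT:-$HOME/mnt/sftp}"

prepare_mountpoint() {
    mkdir -p "$1"   # -p: no error if it already exists
    [ -d "$1" ]
}

prepare_mountpoint "$MNT" && echo "mount point ready: $MNT"
# real mount, guarded against double-mounting (uncomment on the workstation):
# mountpoint -q "$MNT" || sshfs david.pasek@gmail.com@sftp.virtix.cloud:./ "$MNT" -p 55022
```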

Wednesday, November 13, 2024

OpenNebula - VMware Alternative

Web Admin Management Interface (SunStone) is at https://[IP]:2616

Main Admin User Name: oneadmin

Default network is 172.16.100.0/24

Repo: https://downloads.opennebula.io/repo/

Tuesday, November 12, 2024

Linux Remote Desktop based on open-source | ThinLinc by Cendio


https://www.cendio.com/

Keywords: RDP

Monitoring VMware vSphere with Zabbix

Source: https://vmattroman.com/monitoring-vmware-vsphere-with-zabbix/

Zabbix is an open-source monitoring tool designed to oversee various components of IT infrastructure, including networks, servers, virtual machines, and cloud services. It operates using both agent-based and agentless monitoring methods. Agents can be installed on monitored devices to collect performance data and report back to a centralized Zabbix server.

Zabbix provides comprehensive integration capabilities for monitoring VMware environments, including ESXi hypervisors, vCenter servers, and virtual machines (VMs). This integration allows administrators to effectively track performance metrics and resource usage across their VMware infrastructure.

In this post, I will show you how to set up Zabbix monitoring for VMware vSphere infrastructure.

How to prevent ESXi waiting for removed NFS datastores during boot?

When ESXi refuses to boot for 1-2 hours because it keeps trying to mount NFS datastores that were removed long ago:

1. Restart the ESXi host

2. Press Shift+O during boot

3. Append jumpstart.disable=restore-nfs-volumes to the end of the boot options line

4. Confirm with Enter

Backup and restore ESXi host configuration data using command line

Source: https://vmattroman.com/backup_and_restore_esxi_host_configuration_data_using_command_line/

In some cases we need to reinstall an ESXi host. To avoid time-consuming server setup, we can quickly back up and restore the host configuration. There are three possible ways to achieve this: the ESXi command line, vSphere CLI, or PowerCLI.


In this article I will show how to back up and restore host configuration data using the ESXi command line.

Monday, November 4, 2024

New SKUs / pricing (MSRP) for VMware available

A new price book is out, effective November 11, 2024:

The Essentials Plus SKU (VCF-VSP-ESPL-8) is going EOL as of November 11th; therefore, Enterprise Plus is coming back.

There is also a price adjustment for VVF.

Item Number          Description                                                Price per core per year (MSRP, USD)
VCF-CLD-FND-5        VMware Cloud Foundation 5                                  $350.00
VCF-CLD-FND-EDGE     VMware Cloud Foundation Edge - For Edge Deployments Only   $225.00
VCF-VSP-ENT-PLUS     VMware vSphere Enterprise Plus - Multiyear                 $120.00
VCF-VSP-ENT-PLUS-1Y  VMware vSphere Enterprise Plus 1YR                         $150.00
VCF-VSP-FND-1Y       VMware vSphere Foundation 1-Year                           $190.00
VCF-VSP-FND-8        VMware vSphere Foundation 8, Multiyear                     $150.00
VCF-VSP-STD-8        VMware vSphere Standard 8                                  $50.00

What is ESXi Core Dump Size?

ESXi host Purple Screen of Death (PSOD) happens when VMkernel experiences a critical failure. This can be due to hardware issues, driver problems, etc. During the PSOD event, the ESXi hypervisor captures a core dump to help diagnose the cause of the failure. Here’s what happens during this process.

Sunday, November 3, 2024

List devices

esxcli storage core device list

Get S.M.A.R.T information

[root@esx24:~] esxcli storage core device smart get -d t10.NVMe____KINGSTON_SNVS1000GB_____________________55FA224178B72600

[root@esx24:~] esxcli storage core device smart get -d eui.0000000001000000e4d25c0f232d5101


The output looks like this


VMware KB: https://knowledge.broadcom.com/external/article/314303/displaying-smart-data-for-nvme-devices-i.html

Saturday, November 2, 2024

Install and set up the Xfce window manager on CentOS 9

sudo dnf -y groupinstall "Xfce"

Optional: if the multi-user console is currently the default target, the following configures the system to boot into the graphical interface by default.
sudo systemctl set-default graphical.target

echo "xfce4-session" > $HOME/.xsession

chmod +x $HOME/.xsession

sudo reboot
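
The .xsession steps above can be sketched against a throwaway HOME, so the sketch is safe to run without touching your real session files (the temp directory is only for the demo):

```shell
# Write and mark executable an .xsession that starts Xfce, using a temp
# directory as a stand-in for $HOME.
demo_home=$(mktemp -d)
printf 'xfce4-session\n' > "$demo_home/.xsession"
chmod +x "$demo_home/.xsession"
echo "created: $demo_home/.xsession"
```

On the real system, replace "$demo_home" with "$HOME" and reboot as shown above.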

Applications are defined in the directory /usr/share/applications.
Every application has its own definition file with the file extension .desktop.

For example, you can create a file chrome.desktop with the following content:

[Desktop Entry]
Version=1.0
Name=Chrome
Comment=Google Chrome
Exec=/opt/google/chrome/chrome
Icon=/opt/google/chrome/product_logo_64.png
Terminal=false
Type=Application
Categories=Network;WebBrowser;
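
The entry above can be installed and sanity-checked with a short script. Here it is written to a temp directory for a safe demo; on a real system the target would be /usr/share/applications/chrome.desktop (root required).

```shell
# Install the chrome.desktop entry (into a temp dir here) and sanity-check
# that the keys every launcher needs are present.
appdir=$(mktemp -d)
cat > "$appdir/chrome.desktop" <<'EOF'
[Desktop Entry]
Version=1.0
Name=Chrome
Comment=Google Chrome
Exec=/opt/google/chrome/chrome
Icon=/opt/google/chrome/product_logo_64.png
Terminal=false
Type=Application
Categories=Network;WebBrowser;
EOF
for key in Name Exec Type; do
    grep -q "^$key=" "$appdir/chrome.desktop" || echo "missing key: $key"
done
echo "desktop entry written to $appdir/chrome.desktop"
```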


Thursday, October 31, 2024

pktcap-uw and tcpdump-uw

List VMs and their uplinks.

netdbg vswitch instance list

Capture DHCP traffic (UDP ports 67 and 68) on the vmnic1 interface and pipe it to tcpdump to filter the DHCP communication.

pktcap-uw --uplink vmnic1 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - udp port 67 or udp port 68

14:45:46.375602 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:50:56:99:fe:6a (oui Unknown), length 300
14:45:46.376233 IP 192.168.4.5.bootps > 192.168.4.178.bootpc: BOOTP/DHCP, Reply, length 307


Filter TCP Open Connections

This is the tcpdump command to display attempts to open TCP connections (TCP SYN packets) from IP address 192.168.123.22:

pktcap-uw --uplink vmnic4 --capture UplinkRcvKernel,UplinkSndKernel -o - | tcpdump-uw -r - 'src host 192.168.123.22 and tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'

Explanation of the filter expression:

  • 'src host 192.168.123.22' → Filter packets from the source IP 192.168.123.22.
  • 'tcp[tcpflags] & tcp-syn != 0' → Match packets where the SYN flag is set.
  • 'tcp[tcpflags] & tcp-ack == 0' → Ensure the ACK flag is not set (to exclude SYN-ACK responses).

(The usual -n and -i <interface> tcpdump options do not apply here, because tcpdump-uw reads the pktcap-uw stream from stdin via -r -.)


Wednesday, October 30, 2024

Colossus AI Supercluster with over 100,000 @NVIDIA H100 GPUs

Source: https://nvidianews.nvidia.com/news/spectrum-x-ethernet-networking-xai-colossus

Inside the @xai Colossus AI Supercluster with over 100,000 @NVIDIA H100 GPUs. If you want to see why the @Supermicro_SMCI liquid-cooled cluster is awesome, then check this one out.

What is the video about?

100,000 GPUs in the datacenter

2 CPUs and 8 GPUs in a 4U server chassis
8 servers per rack
So 64 GPUs per rack

1,563 racks in the datacenter

Liquid cooling.

Sunday, October 27, 2024

How to resize the disk of a Fedora guest VM in VMWare ESXi

Source: https://serverfault.com/questions/422930/how-to-resize-the-disk-of-a-fedora-guest-vm-in-vmware-esxi/422972#422972

This is a bit of a cut'n'paste of a document I wrote for internal use, and although it probably over-answers your question, I thought I'd put it here in case it's of use to you or others.

  1. Log in to the machine as root (or sudo each of the following commands) and enter fdisk -l, you should see something like this;

    Disk /dev/sda: 21.1 GB, 21xxxxxxxxx bytes
    255 heads, 63 sectors/track, 5221 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14        2610    20860402+  8e  Linux LVM
    

    In this case I've altered the values but as you can see this machine has a single ~20GB root virtual disk with two partitions, sda1 and sda2, sda2 is our first LVM 'physical volume', see how LVM uses a partition type of '8e'.

  2. Now type pvdisplay, you'll see a section for this first PV (sda2) like this;

      --- Physical volume ---
    PV Name               /dev/sda2
    VG Name               rootvg
    PV Size               19.89 GB / not usable 19.30 MB
    Allocatable           yes (but full)
    PE Size (KByte)       32768
    Total PE              636
    Free PE               0
    Allocated PE          636
    PV UUID               PgwRdY-EvCC-b5lO-Qrnx-tkrd-m16k-eQ9beC
    

    This shows that this second partition (sda2) is mapped to a 'volume group' called 'rootvg'.

  3. Now we can increase the size of the virtual disk using the usual vSphere Client by selecting the VM, choosing 'edit settings', then selecting 'Hard Disk 1'. You can then increase the 'Provisioned Size' number (so long as there are no snapshots in place) and select OK. This will take a few seconds to complete.

  4. If you then switch back to the Linux VM and enter

    echo "- - -" > /sys/class/scsi_host/hostX/scan
    

    where the X character is likely to be zero, it will perform a SCSI bus rescan; then run fdisk -l, you should see something like;

    Disk /dev/sda: 42.2 GB, 42xxxxxxxxx bytes
    255 heads, 63 sectors/track, 5221 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14        2610    20860402+  8e  Linux LVM
    

    You'll see that the disk size has increased, in this case to ~40GB from ~20GB but that the partition table remains the same.

  5. We now need to create a new LVM partition, type parted, you should see something like this;

    GNU Parted 1.8.1
    Using /dev/sda
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted)
    

    You'll now need to create a new partition for the extra new space, type 'p' to see the current partition table such as this;

    Model: VMware Virtual disk (scsi)
    Disk /dev/sda: 42.9GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    
    Number  Start   End     Size    Type     File system  Flags
     1      32.3kB  107MB   107MB   primary  ext3         boot
     2      107MB   21.5GB  21.4GB  primary               lvm
    

    Then type mkpart, then select 'p' for 'Primary', for file system type enter 'ext3', for start enter a number a little higher than the combination of both 'sizes' listed above (i.e. 107MB + 21.4GB, so say 21.6GB), for end type the size of the disk (i.e. in this case 42.9GB). Once you press enter it will create this new primary partition, type 'p' to show the new partition table, you should see something like;

    Model: VMware Virtual disk (scsi)
    Disk /dev/sda: 42.9GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    
    Number  Start   End     Size    Type     File system  Flags
     1      32.3kB  107MB   107MB   primary  ext3         boot
     2      107MB   21.5GB  21.4GB  primary               lvm
     3      21.5GB  42.9GB  21.5GB  primary               ext3
    

    You'll see that the new partition started after the first two and fills the available space, unfortunately we had to set it to a type of 'ext3', so let's change that.

  6. Type 't' (this is the fdisk way of changing the partition type; in parted you would use set 3 lvm on instead), then the partition number (in our case 3 as it's the third partition), then for the 'hex code' enter '8e'. Once you've done this, type 'p' again and you should see it change to 'Linux LVM';

    Disk /dev/sda: 42.9 GB, 42949672960 bytes
    255 heads, 63 sectors/track, 5221 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        1          13      104391   83  Linux
    /dev/sda2           14        2610    20860402+  8e  Linux LVM
    /dev/sda3         2611        5221    20972857+  8e  Linux LVM
    
  7. Now we need to create a new LVM 'physical volume' in this new partition, type pvcreate /dev/sda3, this should then create a new LVM PV called /dev/sda3, type pvdisplay to check;

    --- Physical volume ---
    PV Name               /dev/sda3
    VG Name              
    PV Size               20.00 GB / not usable 1.31 MB
    Allocatable           no
    PE Size (KByte)       0
    Total PE              0
    Free PE               0
    Allocated PE          0
    PV UUID               gpYPUv-XdeL-TxKJ-GYCa-iWcy-9bG6-tfZtSh
    

    You should see something similar to above.

  8. Now we need to extend the 'rootvg Volume Group', or create a new one for non-root 'volume group', type vgdisplay to list all 'volume groups', here's an example;

    --- Volume group ---
    VG Name               rootvg
    System ID
    Format                lvm2
    Metadata Areas        2
    Metadata Sequence No  19
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                8
    Open LV               8
    Max PV                0
    Cur PV                2
    Act PV                2
    VG Size               21.3 GB
    PE Size               32.00 MB
    Total PE              1276
    Alloc PE / Size       846 / 26.44 GB
    Free  PE / Size       430 / 13.44 GB
    VG UUID               tGM4ja-k6es-la0H-LcX6-1FMY-6p2g-SRYtfY
    
    • If you want to extend the 'rootvg Volume Group' type vgextend rootvg /dev/sda3, once you press enter you should see a message saying the 'volume group' has been extended.

    • If you wanted to create a new 'volume group' you'll need to use the vgcreate command, probably best to call me for help with that.

    Once extended enter vgdisplay again to see that the 'rootvg' 'volume group' has indeed been extended such as here;

    --- Volume group ---
    VG Name               rootvg
    System ID
    Format                lvm2
    Metadata Areas        2
    Metadata Sequence No  19
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                8
    Open LV               8
    Max PV                0
    Cur PV                2
    Act PV                2
    VG Size               39.88 GB
    PE Size               32.00 MB
    Total PE              1276
    Alloc PE / Size       846 / 26.44 GB
    Free  PE / Size       430 / 13.44 GB
    VG UUID               tGM4ja-k6es-la0H-LcX6-1FMY-6p2g-SRYtfY
    

    You can see the 'VG Size' is as expected.

  9. Now we need to extend the 'logical volume', type lvdisplay to show our 'logical volumes', you'll see something like;

    --- Logical volume ---
    LV Name                /dev/rootvg/var
    VG Name                rootvg
    LV UUID                NOP1jF-09Xt-LkX5-ai4w-Srqb-xGka-nYbI2J
    LV Write Access        read/write
    LV Status              available
    # open                 1
    LV Size                3.00 GB
    Current LE             320
    Segments               3
    Allocation             inherit
    Read ahead sectors     auto
    currently set to       256
    Block device           253:2
    

    If we want to expand the /var file system from 3GB to 10GB then type lvextend -L 10G /dev/rootvg/var, now type lvdisplay again, you'll see the 'logical volume' has grown to 10GB;

    --- Logical volume ---
    LV Name                /dev/rootvg/var
    VG Name                rootvg
    LV UUID                NOP1jF-09Xt-LkX5-ai4w-Srqb-xGka-nYbI2J
    LV Write Access        read/write
    LV Status              available
    # open                 1
    LV Size                10.00 GB
    Current LE             320
    Segments               3
    Allocation             inherit
    Read ahead sectors     auto
    currently set to     256
    Block device           253:2
    
  10. Now the last thing we need to do is to grow the actual file system, this doesn't have to use all of the newly added space by the way. Enter df -h to show the current filesystems, here's an example;

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/rootvg-root
                          2.0G  1.4G  495M  74% /
    /dev/mapper/rootvg-home
                          248M  124M  113M  53% /home
    /dev/mapper/rootvg-var
                          3.0G  1.1G  1.8G  30% /var
    /dev/mapper/rootvg-usr
                          3.0G  936M  1.9G  34% /usr
    /dev/mapper/rootvg-opt
                          3.0G  811M  2.0G  29% /opt
    

    If we want to expand the /var file system from 3GB to 10GB then type resize2fs /dev/mapper/rootvg-var (or on CentOS maybe xfs_growfs /dev/mapper/rootvg-var, or similar commands depending on the type of file system). When you press enter the actual filesystem will grow, this may take time, enter df -h once completed to check.

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/rootvg-root
                          2.0G  1.4G  495M  74% /
    /dev/mapper/rootvg-home
                          248M  124M  113M  53% /home
    /dev/mapper/rootvg-var
                          9.88G  1.1G  8.2G  12% /var
    /dev/mapper/rootvg-usr
                          3.0G  936M  1.9G  34% /usr
    /dev/mapper/rootvg-opt
                          3.0G  811M  2.0G  29% /opt
    

You're now finished!
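
For quick reference, here is the whole flow above condensed into one sketch. Device and volume names are the example's (/dev/sda3, rootvg, /dev/rootvg/var); substitute your own. DRY_RUN=1 prints each command instead of running it, so the recap is safe to execute as-is.

```shell
# Dry-run recap of the disk grow procedure; set DRY_RUN=0 on a real VM
# only after double-checking every device and LV name.
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run sh -c 'echo "- - -" > /sys/class/scsi_host/host0/scan'  # rescan the SCSI bus
run pvcreate /dev/sda3                                      # new LVM physical volume
run vgextend rootvg /dev/sda3                               # grow the volume group
run lvextend -L 10G /dev/rootvg/var                         # grow the logical volume
run resize2fs /dev/mapper/rootvg-var                        # grow the filesystem (xfs_growfs on XFS)
```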