AlmaLinux 9 & CentOS 9 Stream builds can now be created for oVirt Nodes

When PR https://github.com/oVirt/ovirt-node-ng-image/pull/146 lands, you will be able to build oVirt Nodes using AlmaLinux 9 (new) and CentOS 9 Stream (fixed).

There are still a few issues with AlmaLinux 9, caused not by AlmaLinux itself but by the oVirt engine force-enabling a number of CentOS 9 Stream repositories, which breaks the system. You can work around this by disabling those repositories in /etc/yum.repos.d/ as they appear; bugs will be filed against the engine.
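If you hit this, the dnf config-manager plugin can disable a repository in place. A minimal sketch, assuming dnf-plugins-core is installed; the repo ID below is hypothetical, so substitute the IDs that dnf repolist actually reports:

dnf repolist enabled                                               # find the offending repo IDs
sudo dnf config-manager --set-disabled centos-stream-appstream    # hypothetical repo ID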

VS Code on AlmaLinux 9

Step 1. Add the signing key


sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

Step 2. Add the repository


echo -e "[vscode]\nname=packages.microsoft.com\nbaseurl=https://packages.microsoft.com/yumrepos/vscode/\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\nmetadata_expire=1h" | sudo tee /etc/yum.repos.d/vscode.repo

Step 3. Refresh the dnf metadata


sudo dnf check-update --refresh

Step 4. Install VS Code


sudo dnf install code -y

Microsoft Edge on AlmaLinux 9

Step 1. Import the key


sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

Step 2. Add the repository


sudo dnf config-manager --add-repo https://packages.microsoft.com/yumrepos/edge

Step 3. Refresh the dnf metadata


sudo dnf check-update --refresh

Step 4. Install Edge


sudo dnf install microsoft-edge-stable

Microsoft Edge should now be available to use, either from the CLI or via the GNOME desktop.
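To start it from the command line, the package provides a microsoft-edge-stable binary:

microsoft-edge-stable &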

Compacting WSL hard disk

Like many people, I am a heavy user of WSL, and of Linux under Windows in general. It’s the only real option I have at work and it’s quite a reasonable proposition.

That being said, the WSL VHDX files can grow as you do Linux work, and while you can (and should) clean up inside the Linux environment, the freed space is not reflected back to Windows.

So how do you compact your WSL file?

  1. Shut down your WSL system.

    wsl.exe --list --verbose # note the verbose is required to get the state

    wsl.exe --terminate Ubuntu-24.04
  2. Shrink the disk using diskpart

    diskpart


    You need to select the VHDX file for your WSL instance. It is typically found under your AppData folder; in my case it was the path below.


    I copied the VHDX file location as a path; the easiest way to get the full filename and path is to right-click the file in Explorer and choose ‘Copy as path’.

    DISKPART> select vdisk file="C:\Users\geoff\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu24.04LTS_79rhkp1fndgsc\LocalState\ext4.vhdx"

  3. Compact the vdisk

    DISKPART> compact vdisk

  4. The results.

    Comparing the VHDX file size in Explorer before and after the compact, I’ve freed up nearly 6 GB.
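As an aside, if you’d rather stop every WSL distro at once before compacting, instead of terminating a single instance, there’s a flag for that:

wsl.exe --shutdown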

Getting WSL2 just right

Recent changes to WSL2 by Microsoft have made using Linux on Windows even more comfortable. I’ll describe some of the options and their use below.

Firstly, do you have WSL2 installed? If not, then this will help: https://learn.microsoft.com/en-us/windows/wsl/install-manual#step-1---enable-the-windows-subsystem-for-linux

In order to best use WSL, you of course need to have a distribution installed. Ubuntu is one of the easiest and most common to install.

wsl --list --online           #check what distros are available
wsl --install -d Ubuntu-24.04 #latest at the time of writing

Now that you have a distro installed, we have something to configure. There are two configuration files that customise the distribution experience under WSL2: wsl.conf and .wslconfig.

wsl.conf contains per-distribution settings, whereas .wslconfig configures global settings for the WSL2 environment.

wsl.conf is stored in the /etc directory within the distribution.

.wslconfig is stored in your %UserProfile% folder.
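If the file doesn’t exist yet, you can create it from PowerShell; Notepad will offer to create it on first save:

notepad "$env:USERPROFILE\.wslconfig"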

.wslconfig

The .wslconfig file is in .ini format, with the GA features found under the [wsl2] section. There is also an [experimental] section for unreleased options.

Note: Not all options may be available to you, as they are Windows OS and WSL version dependent. You can reasonably assume that if you are running Windows 11 22H2 or higher, most of the options described below are available. This is not the complete list, just the ones I have found quite useful.

GA features that I find useful

Accessible via the [wsl2] section of the .wslconfig file

Key | Value | Notes
memory | memory size (MB, GB) | Default is 50% of the Windows memory. I find it useful to constrain the memory (in conjunction with the experimental memory release features below).
processors | number | Default is the same as present in Windows.
localhostForwarding | true/false | Default is true; this allows your WSL2 application to be accessible via localhost:port.
nestedVirtualization | true/false | Allows nesting inside WSL2. Windows 11+.
networkingMode | NAT / mirrored | The default is NAT; mirrored turns on mirrored networking mode. Mirrored mode is a great addition for many of my use cases.
firewall | true/false | The Hyper-V firewall can filter WSL network traffic.
dnsTunneling | true/false | See the experimental section.

Experimental (though very useful) features

Accessible via the [experimental] section of the .wslconfig file.

Key | Value | Notes
autoMemoryReclaim | disabled / gradual / dropcache | Default is disabled, but gradual and dropcache can dramatically return memory to Windows. I default to gradual.
sparseVHD | true/false | When set to true, new VHDs are created as sparse, saving considerable disk (with all the usual overprovisioning caveats). I’m using sparse by default, but then again I’ve been using sparse filesystems for many years.
useWindowsDnsCache | true/false | If you have dnsTunneling turned on, this option lets you use or ignore what the Windows DNS may have cached.
hostAddressLoopback | true/false | If networkingMode is set to mirrored, the loopback address 127.0.0.1 can be used to reach both the host and the container, depending on where the listening resource is running (Windows or WSL2). This is a great option if you want better sharing between Windows and a WSL2 distro; for example, I’ve had a MongoDB client on Windows talking to MongoDB in WSL2 Ubuntu.
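Putting the two tables together, here is a sketch of the kind of .wslconfig I run. The values are illustrative only; size them for your own machine:

[wsl2]
# Constrain WSL2 memory instead of the 50% default
memory=8GB
processors=4
networkingMode=mirrored

[experimental]
# Gradually release unused memory back to Windows
autoMemoryReclaim=gradual
# Create new VHDs as sparse files
sparseVHD=true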

wsl.conf

As I mentioned above, the /etc/wsl.conf within the distribution controls some interesting behaviours, especially at distro launch.

[boot]

systemd=true

This solves the problem where you’re reliant on systemd resources within your WSL2 distro. I normally have it turned on.
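Note that wsl.conf is read at distro boot, so changes only take effect after the distro is restarted:

wsl.exe --terminate Ubuntu-24.04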

[automount]

Key | Value | Notes
enabled | true/false | Allows Windows fixed drives to be automatically mounted under /mnt (or wherever the root key points). I have this enabled by default.
root | /mnt | Where the mounts occur for auto-mounted drives.
mountFsTab | true/false | Allows /etc/fstab to be processed at WSL distro boot time. Great for getting those SMB/NFS mounts going. I have this set to true as I use a lot of NFS in my test environment.

Some things to note.

Windows disks are mounted using DrvFs and are by default case sensitive. You can override this behaviour for all drives or for individual drives. More information is available at Per-directory case sensitivity and WSL – Windows Command Line (microsoft.com).

[network]

Key | Value | Notes
generateHosts | true/false | WSL will generate an appropriate /etc/hosts file based on the Windows environment. I generally set this to true (the default).
generateResolvConf | true/false | WSL will generate an appropriate list of DNS resolvers. I generally set this to true (the default).
hostname | string | Sets the hostname to be used within the distro. The default is the Windows hostname, but this is useful if you run multiple WSL instances.
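Pulling the settings above into one file, a sketch of a complete /etc/wsl.conf might look like this (the hostname is a hypothetical example):

[boot]
systemd=true

[automount]
enabled=true
root=/mnt
mountFsTab=true

[network]
generateHosts=true
generateResolvConf=true
# Hypothetical hostname for this instance
hostname=ubuntu-wsl01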

Milestone 0.1.5 for the Hyper-V Ansible collection

Milestone 0.1.5 has been achieved for the gocallag.hyperv collection, which you can find on Ansible Galaxy at Ansible Galaxy – gocallag.hyperv.

Release notes for 0.1.5 can be found on GitHub at Release Milestone 0.1.5 · gocallag/hyperv (github.com)

Further information on upcoming milestones and their associated features can be found at Milestones – gocallag/hyperv (github.com)

What’s Changed

  • Added a feature to vm_info to provide search capability by name
  • Closed Issue 2 – feature to allow vm_info to search by power state and name
  • BugFix: use ConvertTo-Json / ConvertFrom-Json to avoid a loop in Exit-Json (limiting depth)
  • Closed test issue: added a verifier for vm_info module checks and cleanups
  • Closed test issue: added basic asserts for switch_info testing via Molecule
  • Closed test issue: added verifier asserts for the switch module as part of Molecule testing
  • Closed test issue: added assert-based testing for the vm module as part of Molecule
  • BugFix, Issue 12: vm_info fails the Molecule verifier on the running-VM check

Feedback / Requests welcome.

Hyper-V force shutdown VM

Let’s face it, sometimes a Hyper-V VM gets stuck in a funny state and you can’t shut it down from the UI. Fear not, you can easily force it down using PowerShell.

Firstly, get the GUID of the VM.

$VMID = (Get-VM '<name of the VM from Hyper-V Manager or Get-VM>').Id

Then, find the process that’s running that VM.

$VMProcess = (Get-WMIObject Win32_Process | ? {$_.Name -match 'VMWP' -and $_.CommandLine -match $VMID}) # vmwp.exe is the worker process that runs each VM

Then you can force stop that process.

Stop-Process ($VMProcess.ProcessId) -Force

Hey presto, the VM should be down.
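If you want to double-check, Get-VM should now report the VM as Off:

Get-VM '<name of the VM>' | Select-Object Name, State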

Hyper-V set display resolution

With all the migration away from VMware to other platforms, Hyper-V is making quite a comeback. Hyper-V is a lot better than the good (aka bad) old days, but you still need to know how to handle certain quirks.

In this case I needed to change the resolution of my Hyper-V windows guest to something higher as it was stuck on the default 1152×864.

It’s simple to fix: just shut down your VM in Hyper-V Manager (or use PowerShell) and, when the system is down, open a PowerShell window and use:

Set-VMVideo -VMName "<Name of VM in Manager>" -HorizontalResolution 1920 -VerticalResolution 1080 -ResolutionType Single

Then, when you power the VM back on and connect via the Hyper-V display option, you will have a VM with the larger screen resolution.
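If you’d rather drive the whole thing from PowerShell, a sketch of the full sequence (the VM name is a placeholder) looks like this:

# Shut down the guest, change the resolution, then power it back on
Stop-VM -Name "<Name of VM in Manager>"
Set-VMVideo -VMName "<Name of VM in Manager>" -HorizontalResolution 1920 -VerticalResolution 1080 -ResolutionType Single
Start-VM -Name "<Name of VM in Manager>"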

Exploring oVirt: A Journey Beyond VMware

The recent changes to VMware by Broadcom have left many of us pondering alternatives for our home labs. As a former user of the free ESXi, I found myself in a predicament when that option started disappearing.

Enter oVirt, an open-source project that serves as the upstream counterpart to Red Hat Virtualization (RHV). As someone familiar with Red Hat products at work, I was intrigued by oVirt’s potential. Interestingly, Red Hat itself is planning to exit the RHV space, which seems like a bold move given the industry landscape. However, oVirt remains open-source and, hopefully, resilient. Oracle also utilizes oVirt for its OLVM product.

The oVirt documentation is a mixed bag—sometimes comprehensive, sometimes lacking. When you encounter issues, consider raising a GitHub defect. As part of my contribution to the community, I’ll do my best to address them.

So, how does one dive into the world of oVirt?

  1. Hypervisor-Only Node: Like ESXi, oVirt allows you to create a hypervisor-only node. This minimalist setup is familiar and straightforward.
  2. Self-Hosted ovirt-engine: Think of this as the vCenter equivalent in the oVirt ecosystem. It manages your oVirt environment. While the documentation can be verbose and occasionally outdated, the following steps should help you get started:
    • Choose Your Path: Opt for the oVirt Node and self-hosted ovirt-engine setup. It’s my personal favorite and promises an engaging experience.
    • Storage Connection: I’ll be connecting my oVirt Hypervisors to my QNAP NAS via NFS. Simplicity wins in my home lab.

Remember, oVirt is an adventure—a chance to explore beyond the familiar VMware landscape. Let’s embark on this journey together! 

Getting the media

Head off to oVirt Node | oVirt to download the ‘hypervisor-only’ ISO. I chose from the 4.5.5 released ISOs and picked the CentOS 9 version.

Install the Hypervisor

Fortunately, the hypervisor install is very simple; it’s just another Anaconda-based install ISO. You can find detailed instructions at Installing oVirt as a self-hosted engine using the command line | oVirt, and when you’re done, you can log on to the node’s console.

Deploying the self-hosted engine

So, how do you deploy the self-hosted ovirt-engine, that is, the oVirt engine appliance hosted on the oVirt Node you just built? It’s a simple command, but it will take a while to execute. It downloads and installs an RPM that contains the appliance OVA, powers it on, patches it, and then installs the ovirt-engine into the new virtual machine.

The ovirt-engine will then perform extra configuration of your oVirt node and as part of the last step it will copy the ovirt-engine VM to your shared storage. You’ll see the important parts in the process detailed below.

Firstly, before you start, make sure the oVirt Node is defined in your DNS, and make sure the oVirt engine (manager) name is also in DNS.

Start tmux, and then run the installer.
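For reference, the command is the standard hosted-engine deploy (the --4 is the IPv4-only option mentioned below); running it inside tmux means a dropped SSH session won’t kill the install:

tmux
hosted-engine --deploy --4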

There are lots of questions to answer, but they’re mostly self-explanatory. Note: the --4 option passed to the command sets up IPv4 only.

Here is how I responded to the questions. Note: both the VM name and the node name must resolve in the DNS server that you nominate.

The setup script has just about everything it needs at this stage. I’ve called out some steps that will take a while to perform.

This step takes a while: the RPM contains the base OVA for the ovirt-engine appliance, and it’s a big RPM.

And this takes even longer.

Once the OVA is finally available, it gets deployed and powered on. The tool then installs the ovirt-engine on the new VM and applies all the patches. This will take another long while.

Then the oVirt engine gets installed and configured.

Note: Once the oVirt Engine starts, it will reach back into your oVirt Node and perform a range of extra configuration on it.

The installer will then prompt you for shared storage options to add to the oVirt Node. This is required because the installer will move the oVirt Engine VM from the oVirt Node’s local disk to shared storage for HA purposes.

In my case I chose NFS.

At this point, the installer asks the oVirt Engine to create the new storage domain. The oVirt Engine will talk to VDSM on the oVirt node to configure the shared storage.

Once the storage domain has been created, the installer will create the final oVirt Engine VM and copy the disk image from the local hard drive to the shared storage domain. You have the option to increase the disk size of the appliance; I left it at the default.

This will also take a while depending on your infrastructure.

Eventually you will get to the end of the script and you’ll have an operational self-hosted oVirt Engine running on your oVirt node.

Voila!
