<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Scaling VMs on OpenShift Virtualization Training</title><link>/docs/scaling-vms/</link><description>Recent content in Scaling VMs on OpenShift Virtualization Training</description><generator>Hugo</generator><language>en</language><atom:link href="/docs/scaling-vms/index.xml" rel="self" type="application/rss+xml"/><item><title>VM disk images</title><link>/docs/scaling-vms/vm-images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/docs/scaling-vms/vm-images/</guid><description>&lt;p>If we want to run VMs at scale, it makes sense to manage a set of base images for these VMs. It is not very convenient
to spin up each VM and install its requirements by hand. There are several ways to distribute VM images in our cluster.&lt;/p>
&lt;ul>
&lt;li>Distribute images as ephemeral container disks using a container registry
&lt;ul>
&lt;li>Be aware of the non-persistent root disk&lt;/li>
&lt;li>Depending on the disk size, this approach may not be the best choice&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Create a namespace (e.g., &lt;code>vm-images&lt;/code>) with pre-provisioned PVCs containing base disk images
&lt;ul>
&lt;li>Each VM would then use CDI to clone the PVC from the &lt;code>vm-images&lt;/code> namespace to the local namespace&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
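&lt;p>As a sketch of the second approach, a CDI &lt;code>DataVolume&lt;/code> can clone a base disk PVC from the shared namespace into the local namespace; the PVC name &lt;code>rhel9-base&lt;/code>, the target disk name and the requested size are assumptions for illustration:&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: my-vm-root-disk        # hypothetical name for the cloned disk
spec:
  source:
    pvc:
      namespace: vm-images     # shared namespace holding the base disk images
      name: rhel9-base         # assumed name of the base image PVC
  storage:
    resources:
      requests:
        storage: 30Gi          # must be at least the size of the source PVC
&lt;/code>&lt;/pre>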
&lt;p>At the end of this section, we will have two PVCs containing base disks in our namespace:&lt;/p></description></item><item><title>Virtual machine pools</title><link>/docs/scaling-vms/vm-pools/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/docs/scaling-vms/vm-pools/</guid><description>&lt;p>A VirtualMachinePool tries to ensure that a specified number of virtual machines are always in a ready state.&lt;/p>
&lt;p>However, the virtual machine pool does not maintain any state or provide guarantees about the maximum number of VMs
running at any given time. For instance, the pool may initiate new replicas if it detects that some VMs have entered
an unknown state, even if those VMs might still be running.&lt;/p>
&lt;h2 id="using-a-virtual-machine-pool">Using a virtual machine pool&lt;/h2>
&lt;p>Using the custom resource VirtualMachinePool, we can specify a template for our VM. A VirtualMachinePool consists of a
VM specification just like a regular VirtualMachine. This specification resides in &lt;code>spec.virtualMachineTemplate.spec&lt;/code>.
Besides the VM specification, the pool requires some additional metadata like labels to keep track of the VMs in the pool.
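&lt;/p>
&lt;p>A minimal sketch of such a pool; the name, replica count, label and container disk image are assumptions for illustration:&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: web-pool
spec:
  replicas: 3
  selector:
    matchLabels:
      kubevirt.io/vmpool: web-pool         # must match the template labels below
  virtualMachineTemplate:
    metadata:
      labels:
        kubevirt.io/vmpool: web-pool       # label the pool uses to track its VMs
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/vmpool: web-pool
        spec:
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 128Mi
          volumes:
            - name: containerdisk
              containerDisk:
                image: quay.io/kubevirt/cirros-container-disk-demo:latest
&lt;/code>&lt;/pre>
&lt;p>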
This metadata resides in &lt;code>spec.virtualMachineTemplate.metadata&lt;/code>.&lt;/p></description></item><item><title>Virtual machine replica sets</title><link>/docs/scaling-vms/vm-replica-sets/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/docs/scaling-vms/vm-replica-sets/</guid><description>&lt;p>Just like a VirtualMachinePool, a VirtualMachineInstanceReplicaSet resource tries to ensure that a specified number of virtual machines
are always in a ready state. The VirtualMachineInstanceReplicaSet is very similar to the Kubernetes ReplicaSet&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
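&lt;p>A minimal sketch of such a replica set; the name, replica count, label and container disk image are assumptions for illustration:&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: web-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      kubevirt.io/vmreplicaset: web-replicaset   # must match the template labels below
  template:
    metadata:
      labels:
        kubevirt.io/vmreplicaset: web-replicaset
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo:latest
&lt;/code>&lt;/pre>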
&lt;p>However, the VirtualMachineInstanceReplicaSet does not maintain any state or provide guarantees about the maximum number of VMs
running at any given time. For instance, the VirtualMachineInstanceReplicaSet may initiate new replicas if it detects that some VMs have entered
an unknown state, even if those VMs might still be running.&lt;/p></description></item></channel></rss>