Note: This is a cross post of an original collaboration between Jason Langer and me. It is also posted on his blog at virtuallanger.com.
The idea for this blog post came up during a conversation I was having with Tim Antonowicz (blog / twitter) and Jason Langer (blog / twitter) about Ravello Systems and their recent participation at the Virtualization Field Day 5 (blog) event held in Boston June 24th through the 26th. The key point of the conversation focused on Ravello Systems' ability to run hypervisors on top of existing cloud providers (currently supported on AWS and Google). With their HVX platform you can run nested ESXi hosts, something that up to this point had not been possible. This got us thinking: are the days of running your own hardware at home for lab purposes numbered? Can the case now be made that running your lab in the cloud is more cost effective?
In an attempt to answer those questions, Jason Langer and I have put together a list of requirements and assumptions about how a home lab may be used and what features/use cases it needs to support. For example, if you are currently using your home lab to provide other services (Plex, file storage, etc.) for your household, this might not translate well to Ravello Systems. Whereas the need to hone your skills on VMware features like vMotion/DRS would translate to both solutions.
Besides requirements and assumptions we compiled a few home lab builds that are popular among the VMware community to provide a few infrastructure examples. These examples will assist us in providing a pricing comparison between purchasing your own gear and paying for CPU time in one of the backend cloud providers Ravello Systems currently supports.
If you want to get a better understanding of Ravello Systems and the solution they provide, the links below will take you to the VFD5 presentations:
- Ravello Systems Company Overview
- Ravello Systems Demo and Use Cases
- Ravello Systems Technology Deep Dive
- Ravello Systems Nested Virtualization
Requirements and Assumptions
- Both physical and virtual labs require a 3-node cluster design
- VMware vSphere will be the hypervisor of choice
- VMware vCenter Server will run as a virtual machine inside the environment
- VMware vSphere features such as vMotion and DRS need to be supported
- Intel NUC based physical design will leverage 16GB of RAM per host
- Micro-ATX based physical design will leverage 32GB of RAM per host
- Ravello virtual lab designs will be based on current available CPU/memory offerings
- As electrical pricing varies from state to state, we will assume $25.00 a month as the cost of running each of the physical lab designs.
- Physical lab design will assume net new purchase of all gear
- Physical lab will only be used for VMware educational purposes. No sharing the infrastructure to provide home "services" (Plex, photo storage, etc.)
- Physical lab will be fully depreciated over a 3 year term.
- Time spent in physical or virtual lab will be calculated at 12 hours per week.
Physical Lab Designs
Mini Lab with Intel NUC
Our first physical lab design is based on the small form factor of Intel's NUC platform. The Intel NUC has seen good traction in the VMware home lab scene thanks to its small footprint and lower cost of entry. While this design is more economical than the Micro-ATX build, there are some trade-offs. On overall compute sizing you will be limited to 16GB of RAM per host and one socket/two cores of CPU at up to 2.6GHz. You will also be confined to a single 1GbE network uplink. With this in mind, you will still be able to leverage the needed vSphere technologies to build functioning vSphere hosts.
To round out the rest of the Intel NUC build we will be leveraging the local storage options: a 128GB mSATA SSD and a 500GB 7200RPM Western Digital disk drive in each host. Layering VMware VSAN on top of the NUCs will provide decent storage performance and capacity. For some additional storage "oomph," a two-bay Synology unit combined with two Samsung 250GB SSDs has been added to allow for the use of NFS/iSCSI in the environment. Tying it all together is a Cisco SG300-10 ten-port Gigabit switch. This will provide connectivity for the baseline of ports (five: three for the NUCs, two for the Synology) with some additional room to grow. The SG300 line of switches from Cisco is great for home lab use, as it supports additional features such as VLAN routing and jumbo frames.
Below is the full build list and the corresponding links to the items on NewEgg.com. At the time of this writing (July 2015) the total build cost came in at $2,788.30 plus tax and shipping:
- 3 x Intel NUC BOXD54250WYKH
- 3 x 2 x 8GB Crucial Memory Kits
- 3 x 128GB Crucial mSATA SSD for NUC
- 3 x 500GB 7200 RPM WD Scorpio Black HDD for NUC
- 3 x SanDisk 16GB USB Thumb Drive
- 1 x Synology DS214+ 2 Bay NAS
- 2 x Samsung EVO 850 250GB SSD’s
- 1 x Cisco SG300-10 10 port Gigabit Switch
Micro ATX Lab (Baby Dragon)
Next up on our list is probably one of the most popular VMware home lab configurations, the "Baby Dragon". Type that into your favorite search engine along with "VMware lab" and you will see several blog posts and pictures of this build from different sites. With that in mind, I won't spend too much time rehashing the content that is already out there, but will instead provide a quick high-level overview.
The key element of this lab design is the use of Micro-ATX motherboards (loaded with features) and small form factor (i.e. Micro-ATX) compute cases. The combination of SuperMicro boards and Lian Li cases seems to lead the charge here (heck, I even built one a few years back). On the compute side, the benefit of this build (for me at least) is that each host can support up to 32GB of RAM. From there, the additional motherboard features are just the icing on the cake:
- IPMI for remote KVM/management
- Dual onboard 1GbE links
- 6 x SATA 6.0Gb/s ports
- Onboard RAID
From there we have mated the motherboard with an Intel 3.1GHz quad-core Xeon processor and 32GB of RAM from Crucial. A Lian Li case will provide the home for the components, and power is provided by a SeaSonic 400W fanless power supply. As these hosts will not be leveraging local storage, we again looked to a Synology unit for NFS/iSCSI presentation; we will be using a four-bay unit loaded with four 256GB Transcend 370S SSDs. And just like the Intel NUC build, a Cisco SG300-10 ten-port Gigabit switch will provide the needed plumbing. This will still get us by, though eight of the ten ports will be consumed by the vSphere hosts and the Synology unit.
Below is the full build list and the corresponding links to the items on NewEgg.com. As you can imagine, this build raises the price tag a bit over the Intel NUC configuration. However, coming in at $3,626.63 (or almost $1K more) plus tax and shipping, this lab could provide all the tools to play with and test even the most complicated of VMware software implementations. Full build list:
- 3 x SuperMicro MBD-X10SLH-F-O uATX Motherboard
- 3 x Intel Xeon E3-1220V3 Haswell 3.1GHz Server Processor
- 6 x Crucial 16GB (2 x 8GB) ECC Unbuffered Server Memory
- 3 x Lian Li PC-V351B MicroATX Computer Case
- 3 x SeaSonic Platinum Series 400W Fanless PSU
- 3 x SanDisk 16GB USB Thumb Drive
- 1 x Synology DS415+ 4 Bay NAS
- 4 x Transcend 370S 256GB SSD
- 1 x Cisco SG300-10 10 port Gigabit Switch
Ravello Systems Virtual Lab Design
For the Ravello Systems based lab we followed the same design requirements as laid out for the physical labs. We will be leveraging three ESXi hosts as the foundation of the environment, but will also be using three additional "support" servers. The support servers will consist of a virtual machine for vCenter, NFS storage, and finally a general purpose Windows workstation VM. Details for the cloud-based lab virtual machines are as follows:
- vCenter Server Appliance – 2 vCPU, 8GB of RAM
- Windows Management Server – 1 vCPU, 4GB of RAM
- 3 x ESXi Hosts – 4 vCPU, 8GB of RAM per virtual machine
- NFS Server – 2 vCPU, 8GB of RAM
Logical illustration of the lab layout using Ravello Systems canvas:
I did create an NFS server to emulate shared storage, and I could have made the Management Workstation a VM riding on the cluster itself, but I figured why bother consuming those resources.
I found the setup to be very straightforward. You simply drag a component onto the canvas and change the properties for it. In a sense it is very much like Visio. This is not a lesson in how to use Ravello (we will save that for another post), so I will not delve into the weeds. Suffice it to say that I was pleasantly surprised by the power of this platform and its ease of use.
For our "CloudLab" setup, once I clicked Update I was presented with the following dialog box. That's right: $1.9946 per hour for 17 CPUs, 42GB of RAM, and 1.2TB of storage.
That’s it. No upfront costs, no parts to order, no screws to turn, no electricity to be consumed (at least not in my home).
- Ravello was created by the same team that created the KVM hypervisor. Their deep expertise in this technology enabled them to create Inception, the only way to run nested ESXi on AWS or Google Cloud.
- With Ravello and their HVX (Inception), you are able to create complex designs in a cloud platform.
- Ability to leverage an overlay network for complex networking scenarios.
- KVM-like capabilities, thanks to the VNC-based console utility that is integrated into the platform.
- RAM is limited to 8GB per VM; but since I have no hardware cost, if I need more resources I can scale out.
- I am not sure what the upper limit of NICs is. I stopped trying after adding 11.
- I am not sure what the upper limit of Drives is. I stopped trying to add after hitting 14.
- Limited to 4 vCPUs per VM; but as with the RAM, since I have no hardware cost, if I need more resources I can scale out.
Cost Comparison/Breakdown Between Physical and Virtual Designs
For a cost breakdown we are going to have to make a lot of assumptions. First, I am going to assume that I will be able to amortize my purchased lab over a 3-year period. I will also assume that your "cloud" lab will not be powered on 24/7, to achieve the greatest possible savings. As stated in the assumptions, we will base this illustration on 12 hours of use per week.
| Lab | Cost of Equip | Amort Mo Cost | Est Power Cost | Mo Usage Charge | Total |
|---|---|---|---|---|---|
| Intel NUC Lab | $2,788.30 | $77.45 | $25.00 | – | $102.45 |
| Baby Dragon Lab | $3,626.63 | $100.74 | $25.00 | – | $125.74 |
| Ravello "Cloud" Lab | – | – | – | $103.72 | $103.72 |
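The figures above can be reproduced with a quick back-of-the-envelope calculation. This is just a sketch of the math as stated in the assumptions: a 36-month amortization term, $25.00/month of electricity per physical lab, and 12 hours of lab time per week billed at Ravello's quoted $1.9946/hour.

```python
# Back-of-the-envelope monthly cost comparison for the three lab options.

AMORT_MONTHS = 36          # 3-year depreciation term from the assumptions
POWER_PER_MONTH = 25.00    # assumed electricity cost for a physical lab
HOURS_PER_WEEK = 12        # assumed lab usage
WEEKS_PER_YEAR = 52
RAVELLO_RATE = 1.9946      # $/hour quoted for the 17 CPU / 42GB RAM design

def physical_monthly(equipment_cost):
    """Amortized hardware cost plus estimated power, per month."""
    return equipment_cost / AMORT_MONTHS + POWER_PER_MONTH

def ravello_monthly():
    """Hourly usage charge averaged over a month (52 weeks / 12 months)."""
    return RAVELLO_RATE * HOURS_PER_WEEK * WEEKS_PER_YEAR / 12

print(f"Intel NUC lab:   ${physical_monthly(2788.30):.2f}/month")
print(f"Baby Dragon lab: ${physical_monthly(3626.63):.2f}/month")
print(f"Ravello lab:     ${ravello_monthly():.2f}/month")
```

Running this yields the $102.45, $125.74, and $103.72 monthly totals shown in the table, which is what makes the three options such a close race.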
If anyone is hesitant about the premise of the “Software Defined Datacenter”, Ravello is a great example of what can be done in strictly software these days.
Due to the limitations of their implementation, namely the 8GB RAM limit per VM, our comparison is not quite apples to apples. I would therefore say that if you are new to virtualization, or looking to build a homelab, you should strongly consider Ravello as a viable option.
If you already have a lab, you might already have more resources available to you. Even so, you may have some of the newer applications in the stack that you do not have enough resources for, or maybe your current hardware will not support the new features in vSphere 6; either way, Ravello is a very viable option to augment your current homelab.
Another option that I am exploring is keeping my on-prem homelab pristine, and instead testing newer features in the cloud. Once I have vetted them out, I can then promote the change in my on-prem homelab.