I use my home lab both to learn about new technologies and to challenge my own biases about how things “should” be done. I have always booted my ESXi hosts from SATA drives and never gave much thought to whether there was a better way. Then I read this post by Bob Plankers over at The Lone Sysadmin and thought “Now that just makes sense, and especially for a home lab!”
My logic here is pretty simple:
- I have had ESXi boot drives fail in the past. That means a 60-minute round trip to a Best Buy (my closest but not preferred option) and a price approximately equal to what I would spend to buy dinner for the family. That can get me a decent pair of pizzas, but it makes for a pretty mundane hard drive. I can replace 2 SD cards for about $10 with a 10-minute round trip to a Walgreens that is open 24 hours.
- SATA drives draw a lot more power, generate a lot more heat, and produce a lot more noise. Booting from an SD card uses very little power, generates no noticeable heat as far as I can tell, and is completely silent.
- I really don’t need all of the space that a SATA drive provides for a boot drive, so I end up with either wasted space or isolated local datastores on the hosts. Neither of these options appeals to me. If I am going to have local SATA drives in my hosts I will use them not for booting ESXi, but as dedicated components in a Virtual SAN solution (a future post once my new NICs arrive and I build it). An SD card has all the space I need for booting ESXi!
Now I know that SD cards are not the most reliable storage components out there, but this is a home lab built with consumer-grade components to begin with. I’m not relying on this infrastructure to pay my bills. If you are considering using SD cards for your production environment, make sure you use a solution that is fully supported by both VMware and your server vendor!
Question #1 – “What hardware should I use?”
My servers are really just glorified workstations in a 4U rackmount chassis. Unlike an enterprise-class server designed with ESXi in mind, my hosts do not have onboard SD slots to boot from. I needed a solution that would let me boot from an SD card using one of my motherboard’s onboard SATA connections, and I wanted easy access for replacing the SD card should it fail.
A quick search on Amazon made it clear that I had plenty of options to choose from. I decided on an SD-to-SATA adapter in a 2.5” drive form factor. This adapter can actually hold 2 SD cards and automatically stripes the pair as RAID 0. You cannot disable the RAID 0 though, so I could not put in 2 SD cards and then use my motherboard’s built-in RAID capabilities to mirror them with RAID 1. Not a deal killer by any means, but something I will keep in mind if I ever revisit this design or build a new host.
To provide easy access to the drive I purchased a removable hard drive bay that fits into a PC expansion slot, as it was less expensive than many of the traditional hot-swappable drive bays that mount in the front of a system. Here again I ran into an unexpected issue: the SD-to-SATA adapter has a gap at the end where you insert the SD cards, and the drive bay has a small metal piece that presses against the end of a drive when you close it to fully seat the drive in the bay. I soon realized that I needed 2 SD cards installed to provide the surface area for that mechanism to work. Frustrating, but again not a deal killer.
Which brings me to the final component: the SD cards themselves. I went with SanDisk 8GB Ultra SD cards rated Class 10. The speed class rating indicates the minimum sequential speed an SD card can be written at, so a Class 10 card should sustain at least 10 MB/s. This is why I went with Class 10 cards, and since I was buying bundled 3-packs at a cost of less than $4 per SD card, I decided to buy 2 of the packs to have some spare SD cards lying around just in case. This was a lucky move on my part, as it meant I could still proceed with my project when I learned about the drive bay mechanism issue.
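If you want to sanity-check that a card actually delivers its rated class before trusting it as a boot device, a quick sequential write test will do. Here is a minimal sketch in Python; the mount path is a hypothetical placeholder, so point it at wherever your card is actually mounted before running it.

```python
# Rough sequential-write speed check for an SD card.
# A Class 10 card should sustain at least 10 MB/s.
import os
import time

TARGET = "/media/sdcard/speedtest.bin"  # hypothetical mount point; adjust to your card
CHUNK = b"\0" * (4 * 1024 * 1024)       # write in 4 MiB chunks
TOTAL_MB = 256                          # total amount of data to write

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                # force the data onto the card, not just the OS cache
elapsed = time.time() - start

os.remove(TARGET)                       # clean up the test file
print(f"~{TOTAL_MB / elapsed:.1f} MB/s sequential write")
```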
So with their powers combined all of the above items formed: BOOT DRIVE!
Notice that there is a 4-pin Molex power connector that converts to a SATA power connection and also splits off for the drive bay’s LED status light. Why a Molex connection instead of just a SATA power connection? I have no idea. All I know is that the LED status light will not work unless you use the included power adapter. I can live with that.
Question #2 – “Once installed will it work?”
I pulled one of my hosts out of the rack and looked at the available PC expansion slots. Where should I install the drive bay?
Dear PCI slot, you have a purpose in life once more!
Installing the drive bay was literally as easy as installing any PC expansion card, and testing the bay for ease of access to the drive worked like a charm!
Finally I came to the big moment and installed ESXi. The RAID 0 pair of SD cards showed up as a single 14.8 GiB device, roughly 15.9 GB in capacity, which is about what I would expect from two 8 GB cards striped into one device once you account for overhead.
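The numbers line up once you remember that card vendors rate capacity in decimal gigabytes while ESXi reports binary gibibytes. A quick back-of-the-envelope check in Python:

```python
# Why two "8 GB" cards in RAID 0 show up as ~14.8 GiB in ESXi.
# Vendors rate capacity in decimal gigabytes (10**9 bytes);
# ESXi displays binary gibibytes (2**30 bytes).

cards = 2
card_bytes = 8 * 10**9            # one 8 GB card in bytes

total_bytes = cards * card_bytes  # RAID 0 stripes the full capacity of both cards
gib = total_bytes / 2**30

print(f"{total_bytes / 10**9:.1f} GB decimal = {gib:.1f} GiB binary")
# -> 16.0 GB decimal = 14.9 GiB binary; the last ~0.1 GiB goes to
# partitioning/RAID overhead, landing at the 14.8 GiB ESXi reported.
```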
In conclusion…
The installation of ESXi seemed significantly faster using the SD cards as the boot device; I timed it at 2 minutes and 34 seconds. Since I have never timed how long ESXi takes to install to my SATA drives, I don’t know how much of an improvement that is, if any. I did time how long it takes to boot my ESXi host from a SATA device compared to the SD cards: the SATA boot took 4 minutes and 55 seconds, while the SD boot took 4 minutes and 42 seconds…
That was one hell of a disappointment! But I didn’t take this project on for a performance increase, so I can’t complain.
In the end I do feel that this little project was worth the $150 in parts it took to upgrade all 3 of my hosts to boot from SD cards. Like I wrote at the beginning of this post, if a SATA boot drive fails I would have to drive to a nearby Best Buy and pay $45 for a new drive. Or I could pull one of my old drives off the shelf, but then my identical hosts would start to drift apart, and that just annoys me. Plus I got some experience using host profiles in vSphere 6 when rebuilding my hosts (a very cool feature, I must say).
Now I just have to start work on my Virtual SAN project! Oh home lab! You keep me off of the streets and out of trouble!