After careful consideration I decided what to buy for my first homelab server, as explained in part 1. Having used corporate laptops as my daily driver for the better part of the last two decades, I seriously can't remember when I last built a PC, so using a barebones system was the safest possible option. Here is what I wound up buying.

Bill of materials

Quantity  Part
1         Shuttle XPC cube SH370R8
1         Intel Core i9-9900
4         Crucial CT32G4DFD8266 DDR4 DIMMs 32GB
1         WD Black NVMe SSD SN750 1TB
1         Samsung 860 EVO 1TB
1         Sandisk Ultra Fit USB 3.1 32GB Black
1         Inno3D GeForce GTX 1650 Single Slot
1         Mellanox ConnectX-3 MCX311A
1         Noctua A9 PWM 46.44 CFM 92 mm

Case

This machine is based on a Shuttle XPC cube SH370R8 barebones PC. At 14.2 liters the case is more than roomy enough; in fact, there is plenty of space left to stuff this machine with additional storage if needed.

Storage

Because I use this machine both as a lab server and for the occasional game, I fitted it with a WD SN750 NVMe drive on which I installed Windows. A Samsung 860 EVO 1TB drive serves as a datastore for VMware ESXi, and I boot ESXi itself from a USB thumb drive.
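As a quick sanity check after installing ESXi to the USB drive, something like the following from the ESXi shell should show the 860 EVO as a detected device and the VMFS datastore created on it as mounted (a rough sketch, not a verbatim capture of my setup):

    # List detected storage devices; the Samsung 860 EVO should appear here
    esxcli storage core device list

    # List mounted filesystems; the VMFS datastore created on the 860 EVO
    # should show up here as mounted
    esxcli storage filesystem list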

Memory

The lab workloads require loads of RAM, so I installed the maximum of 128GB (4 × 32GB DDR4 DIMMs).

CPU

With that much RAM I chose the CPU with the most cores available on an LGA 1151v2 socket, which leaves a reasonably healthy core/thread to memory ratio. Because of the motherboard in this barebones PC I had no alternative to an Intel CPU, even though an AMD Ryzen 9 might have been the better choice performance-wise. Having said that, I feel more comfortable running ESXi on an Intel CPU, as this is what most of the homelab community uses.
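For reference, once ESXi is running you can confirm the core/thread count and the installed memory from the ESXi shell; the commands below are the standard way to do that (the i9-9900 is an 8-core / 16-thread part):

    # Show CPU packages, cores and threads
    esxcli hardware cpu global get

    # Show the amount of physical memory ESXi sees
    esxcli hardware memory get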

Networking

The built-in Intel I211 gigabit NICs are great to begin with, because I will run most labs in a nested vSphere environment. The NICs turned up in ESXi without loading any additional drivers, which is excellent. For some use cases I also want 10GbE networking. Fellow vExpert Wouter Kursten gave me a really good tip to source a Mellanox ConnectX-3 MCX311A NIC from AliExpress, which was a really good deal compared to other stores. The gigabit connections go to a Ubiquiti EdgeRouter that I use as my home office L3 switch, and for 10GbE connectivity I'll get something like a MikroTik CRS305.
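NIC detection is easy to verify from the ESXi shell, and since most labs will run as nested ESXi VMs, those nested hosts also need hardware virtualization exposed to them. Both are sketched below as a rough outline rather than a full walkthrough:

    # List physical NICs with their driver, speed and link status;
    # the onboard I211 ports and the ConnectX-3 should all appear here
    esxcli network nic list

For the nested ESXi guests, enabling "Expose hardware assisted virtualization to the guest OS" in the vSphere client (or the equivalent .vmx setting) lets them run 64-bit VMs of their own:

    # .vmx equivalent of the vSphere client checkbox
    vhv.enable = "TRUE"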

Graphics

As this machine has only two expansion slots, and I already used one of them for the 10GbE NIC, that left just one slot for a GPU. Luckily I was able to find an Inno3D GeForce GTX 1650 Single Slot GPU, which has since become impossible to find anywhere online. This card strikes a good balance between performance and the space constraint.

Cooling

The case itself has a very nice heatpipe cooling system which does a great job. I was a bit disappointed by the stock case fan, though, which made quite a bit of noise under heavy CPU load, so I replaced it with a Noctua NF-A9 PWM 92mm fan. Ideally I'd also replace the PSU fan, but I don't feel comfortable opening up a PSU.

The build

So 'the build' is somewhat overrated as a title, because building this system simply meant installing the CPU, memory, and storage, which is more like Lego for grown-ups than anything else. I do have some nice pictures of the build process. I'm really satisfied with the end result and am already working on a second node.


Rudolf Kleijwegt

I am an experienced IT professional with over 20 years of hands-on experience designing, deploying, and maintaining IT infrastructure in both enterprise and service provider environments. My skills span across Linux and Windows and a multitude of server applications, allowing me to excel in a wide range of IT roles. Currently, my primary focus is on Software Defined DataCenter and DevOps. I am passionate about staying up to date with the latest trends in the industry to achieve superior outcomes.

