I have been enjoying 3D printing projects recently. I saw a little control board for changing audio levels and triggering hotkeys while playing games, and decided to build one. The printing took a good long while, and I had to edit some of the printed parts to work with the components I could currently find on Amazon. I will post the parts list below. The soldering was straightforward, and the project came with a PDF with good instructions. This also turned into a good opportunity for me to use the new Wiring Pencil, which worked surprisingly well.
Diode soldering jig
Diodes In!
Diodes Soldered!
Sliders In!
Messy Wires
All Over Wires!
Tie Wires Together!
Cleaner
Coming Together Clean
Together Without Knobs and Buttons
Ta-da!
For hardware, I am using a Teensy; over its USB connection the Teensy can act as a USB keyboard, a MIDI device, a joystick, or a serial device. The project comes with a premade Arduino file to run it as a MIDI controller. I had not worked with MIDI input like this before, but it seemed like the best path forward compared to trying to emulate a keyboard and hitting odd key combinations, or the alternative of writing something that outputs serial data and then finding, or writing, a daemon on my PC to listen to that device.
For software, I looked at several programs to use the keys and sliders with, like VoiceMeeter. While that worked overall, it was very inflexible and had a giant interface full of things I didn’t want to use. Then I found Midi-Mixer, a passion project by a single dev, and it is EXACTLY what I needed. The sliders can control per-app volume, which is easy to select, and the buttons can be programmed for anything! All of it easily through a GUI instead of conf files like some other open source projects.
Overall I am enjoying the finished project. It sits next to my keyboard and allows easy changing of levels while playing games. I added little rubber feet I had lying around so the plastic housing doesn’t slide around on the desk.
While taking photos and uploading them places, like this blog, I get the photos in .heic format from the iPhone and then need to convert them into JPEG for WordPress. There are a few paid tools and some questionable freeware out there to do it, but I wanted to use open source tools. ImageMagick is an open source tool that can do the conversion, but it requires the command line, so I found the registry keys needed to add a right-click context menu to convert the images!
The context menu also only shows up when a .heic file is selected, which is a nice way to do it. How to install:
Install ImageMagick (link); the version I got was “ImageMagick-7.0.11-4-Q16-HDRI-x64-dll.exe”
Copy the following lines into a text document
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\SystemFileAssociations\.heic\Shell\convertojpeg]
@="Convert To JPEG"
[HKEY_CLASSES_ROOT\SystemFileAssociations\.heic\Shell\convertojpeg\command]
@="\"C:\\Windows\\System32\\cmd.exe\" /C magick.exe mogrify -verbose -format jpg \"%1\""
Name it install_imagemagick.reg (or really anything.reg)
Double-click that file in File Explorer and accept the prompt to import it into the registry
After the import you should be able to right-click a .heic photo and choose “Convert To JPEG”. I did not need to restart, log out, or restart Explorer. I am calling cmd.exe first instead of the program directly because that lets the command find magick.exe through the PATH, so you can update ImageMagick later without the registry entry pointing at a specific file.
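If you just want to run the same conversion by hand, say for a whole folder at once, the command from the registry entry works fine in a terminal. A minimal sketch, assuming magick.exe is on your PATH and you are in the folder with the photos:
magick mogrify -verbose -format jpg *.heic
Mogrify writes the .jpg files next to the originals and leaves the .heic files in place.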
I have been using Google OAuth in some of my projects at work for a while. A recent request was to add custom user-agent strings to the different apps, for the people doing analytics on which apps are using the authentication servers. I have some functions that make custom HTTP GET calls using the Bearer token we get from the OAuth flow, and the library also makes its own calls behind the scenes. I was able to add a user-agent to my calls easily, but the under-the-hood calls the library makes kept coming up as “Google-HTTP-Java-Client/1.34.2 (gzip)”. I tried a few different approaches, searched online at the same time, and didn’t see anyone talking about this. Below is a quick block to put into your app if you want to set the user-agent.
These are the current versions of the OAuth library and the HTTP client I have been using to do auth.
For my setup, I have the OAuth servlet that initializes the OAuth flow, then a second servlet which handles the callback, as documented here. Inside “class OauthCallback extends AbstractAuthorizationCodeCallbackServlet”, I added the following ConnectionFactory in the override of the initializeFlow() function. Replace “myApp-v1.0.1” with your app name. Hope this helps someone!
@Override
protected final AuthorizationCodeFlow initializeFlow() throws IOException {
    ConnectionFactory connectionFactory = url -> {
        HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
        httpURLConnection.setRequestProperty("user-agent", "myApp-v1.0.1");
        return httpURLConnection;
    };
    return new AuthorizationCodeFlow.Builder(BearerToken.authorizationHeaderAccessMethod(),
            new NetHttpTransport.Builder().setConnectionFactory(connectionFactory).build(),
            new JacksonFactory(),
            .... (code removed);
}
One piece that sits at the heart of my homelab is the NAS. This is actually the same NAS I wrote about years ago; looking back on that post brought back memories of the previous system and Server 2008 that I didn’t recall. In the last year I have added several drives and a new network card to this box, and I thought that, along with my experience running FreeNAS, now TrueNAS Core, for 8+ years, was worth discussing.
When I built out that box, I had 5x3TB drives, around $125 each. Now those same drives are $40. The rough rule of thumb I was always told is 1GB of RAM for ZFS for every TB of storage you have, so I maxed the mini-ITX motherboard out at 16GB of RAM to get as close as I could. This let me run basic services, and I ran a few small VMs/jails on the box. That did cut into the RAM I had available, but it was a nice feature; it allowed me to run the Unifi controller without another system running. Back then, Raspberry Pis came with 256MB of RAM, making them not ideal for running too many services. I later ended up moving all of those onto dedicated Raspberry Pis and then onto VM hosts.
These 5 disks served me well for a while; every year or two a drive would die, and it got cheaper and cheaper to replace them. I use this NAS for backups from my Windows desktop and MacBook. Time Machine backups over the network work very well with TrueNAS. I ended up building a smaller version of this box for my parents’ home, and one for my sister; you can run the OS off a USB drive with one or two small hard drives in a box like an Intel NUC, and have it always back up their PCs. Reminding people to “plug in that USB drive” to back up never seems to stick. TrueNAS offers one-click updates with optional automatic check-in, which makes keeping the system up to date easy.
There have been reports of recent corruption with 12.0, but I have not seen that. There was also a bug where you could get a banner saying “THIS IS A TESTING RELEASE NOT FOR PRODUCTION” on a production branch, so that is fun. These days those backups, and my Veeam backups, go to the NAS. I tried to use it as an iSCSI and then an NFS target, but the IO was a bit too much for these old spinning drives. Now I use vSAN, as mentioned, which has performed well for VMs; that leaves the NAS as dumb storage for Veeam. Veeam is a good product that makes it very easy to back up VMs; I will probably write an article on it later. The software has a free 10-VM backup license for homelabs.
Please note and enjoy how the back and front of this box state different specs
In 2020 I was using a high percentage of the storage for backups and VMs, and was pondering an upgrade. I didn’t want to throw down enough money to build a whole new system, and I liked this case a lot, so I started to look at what I could add to it. I was using 5 drives, but the case technically supports 7, with 2 on the bottom. The issue was that I didn’t have enough SATA ports to add more to the system. This brings me to one of the scariest, worst, best cards I have bought. This card adds 4 SATA ports through a mini PCI-E connection. It actually works really well, with the drives coming up like any other; it gives you 1 PCI-E lane at roughly 2.5Gb/s for my version. Two of the now 7 drives in my RAIDZ2 (RAID 6) pool hang off it, and for over a year it has worked well. The other thing I added to the box was a 10Gb network card; I did a push a while ago to move most of the homelab server gear to 10Gb, and this box was part of that. TrueNAS is built on FreeBSD and has good hardware compatibility, so I got an old Intel X520 for compatibility and ease. I have seen it get near 5Gb/s, averaging closer to 2Gb/s with writes.
First of all, yes, the card is at a slight angle, but it works fine and is secure, so we will ignore that. I also used this time to upgrade the CPU. If you look for 7-year-old CPUs on eBay, they are actually not that much money. I went from the Celeron the system had when I bought it to an i5-4590. With the new CPU (and after breaking a leg on the stock cooler) I ordered a new CPU cooler. That turned into an issue because they sent me the AMD mounting version instead of the Intel one. You can see the very, very tiny clearance between the CPU cooler and the chipset heatsink. I also had this system in the office for a while, since adding disks to a ZFS pool means destroying the pool and rebuilding it; I had to move all the data off to another system, destroy the array, then move it all back. Dynamically adding disks has always been a ZFS dream that is perpetually just around the corner. Hopefully with OpenZFS 2.0, and the merging of the Linux and FreeBSD code bases, we will get shiny new features like that.
Crazy SATA card working
Wire mess of the HDD tray
Overall the system has worked well for the last 8 or so years. I have about 4TB free, roughly 30% of the pool, and I could probably clean it out more if I tried. I also have been using OneDrive to back up critical things like family photos, which slightly lowers my need for the system. The homelab AD has all the machines automount a chunk of storage as a shared drive, which makes normal home things and transferring files easier. I will continue to run this and see how vSAN works for me going forward. I am a bit wary of vSAN running into issues on the consumer-level gear I have, so having a whole backup of my VMs on the NAS gives me some peace of mind.
The years of using FreeNAS/TrueNAS were a good jumping-off point, as we recently got new NetApp appliances at work and I was tasked with learning them. NetApp ONTAP uses very similar concepts: instead of zvols you have FlexVols, and instead of datasets you have FlexGroups. NetApp also does some odd things like using RAID 4, or RAID 4 with added protection (RAID-DP), instead of a traditional RAID 5/RAID-Z. If you work for a company that has a NetApp and want to learn more about it, I would push you to get the NetApp Simulator. It is a VM image that contains a virtual NetApp to play around with. It’s much better to break a virtual NetApp than a production one.
Over the holidays I got the parts to put together a MiSTer FPGA system (project home, sub-reddit). This is an open source project which lets you run classic game consoles and classic computers in hardware on an FPGA. Instead of normal emulation, where software pretends to be the CPU/GPU/hardware the original code would run on, this project uses a Field Programmable Gate Array that can reconfigure itself into that hardware. By doing this, the system can get very, very close to 100% accurate running of these old systems. Each system is implemented as a “core” which is loaded onto the FPGA to run software. The community around the MiSTer project is growing, and there are efforts to get systems like the N64 and PSX working on the platform; but the MiSTer project standardized a while ago on one FPGA, which may not be up to the task once those new cores are done, because of their size and complexity.
There are many nice features that have been built out for the project over the years. Because the project standardized on the DE10-Nano FPGA board, there are many add-on boards you can get for it, from additional RAM to VGA outputs. The FPGA has an ARM CPU that manages the base system; it supports Wi-Fi cards and Bluetooth and has automatic updating features. With the IO board that most people who use the project get, you can click a button to reboot the system, or another to go back to the main menu and select the core you want to run. I have a standard IO board, USB hub, and the 256MB RAM add-on. The documentation for the open source project is actually good, with it all centered around the GitHub wiki. There are automated installers for the SD card you need for the initial ARM-side setup.
I was most interested in one of the completed cores: a 486DX (project home) with Sound Blaster and everything you need to run DOS/Windows 3.1/Windows 95. Having played many games as a child in that environment, on a 386SX, I was excited to give it a try.
Hardware
When I was thinking of getting the parts for the project, I looked on Thingiverse to see if anyone had put up a case; there are several. The one that caught my eye had an embedded keyboard in it (link, updated case), but it had a note saying an update to the case was coming soon and to hold off on printing. The estimate for printing the case was around 24 hours, and I didn’t want to do it twice, so I waited. I reached out to the creator, who worked away over the holiday season to get the update out. Another user and I were chatting with him in the comments about printing it, and he graciously put up the design before all the instructions were done so the two of us could start printing.
USB Board, with input against the case
This is the largest thing I have printed on the printer, with my print bed holding up to 220mm and the case coming in at ~210mm. It printed great. I used PETG instead of PLA plastic for added heat resistance. After that, it was a matter of screwing parts together and making a tiny circuit board to support the normal buttons on the top of the case. I ran into a small problem with the updated USB board I have: its input was blocked by the side of the case. The creator had a different revision of the USB board and thus hadn’t tested with my version. I ordered some cables online, checked the pinouts, and made my own header-to-USB cable; after that it was smooth sailing.
I ordered a collection of M3 screws to have different lengths on hand; that is the size the case was built around. I also had some headless (grub) screws, which I was able to use internally to hold some of the boards in. I will put a full list of the parts I ordered below, including the headers for the MiSTer IO board, which took a bit of research to find.
The USB board and the MiSTer FPGA itself both need 5V power. The USB board came with a Y cable to break out a single power brick to the two boards, but it was not designed for them to be this far apart; usually the USB board stacks directly under the FPGA, while in this case they sit several inches apart. I ended up getting a 1ft extension cable to make up the difference. While that worked, I then got a 2.1×5.5mm barrel connector and socket to put on the back of the case, so now there is a nice flush place on the back to plug in power for the USB board. I am using an SD card right now for all my storage; the 128GB it gives me is fine to get started. I have seen people with setups that have a SATA SSD in the case with a USB adapter, and this case supports one in the spot under the FPGA. If you load the system up with a ton of classic games and systems, that may be needed.
Circuit board
Internal, no head, screws
Mid construction photos
Setup and Software
For setup I used the MiSTer “Mr Fusion” installer from Windows. I popped in a 128GB micro SD card, and a few minutes later it was ready to go. The first time it is set up and has internet access, it takes about 10 minutes to download all the “updates”, which amount to every core registered with the project. The Wi-Fi and Bluetooth dongles were automatically detected; I just had to enter Wi-Fi credentials.
I think the case came out nicely, and I have been having fun installing things on it and playing with it. While the 2GB virtual hard drive I gave Windows 95 lives on the SD card and gets decent read/write speeds there, the FPGA 486 at 90MHz still struggles a bit with Windows 95. People are working on getting the performance better, and improvements like the recently added L2 cache help. With the click of a button I can swap over to Windows 3.1 on a different virtual drive and load up my DOS collection. One of the benefits of the MiSTer project, as mentioned, is the ARM management layer: I can add files to an ISO, then SCP it to the system (example below). You can also use any size SD card for all your images, and when you want a new virtual hard drive, it’s a few clicks away; mounting those images is straightforward. Windows 3.1 and 95 are supposed to be able to open a null modem connection to the host and transfer files/browse the internet that way, but I have yet to get this working.
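For reference, copying an image over from my desktop looks roughly like the line below. This is a sketch, assuming SSH is enabled on the MiSTer; the IP address is made up, and while /media/fat is the standard SD card mount, the exact games folder will depend on how your cores are organized.
scp dos_games.iso root@192.168.1.50:/media/fat/games/AO486/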
Completed project photos
After all the posts I have done on here recently, I couldn’t stop at just playing around with the 486. I also got the Mac Plus side of the house running. You can run it with 512KB, 1MB, or 4MB of RAM; it has a 20MB HDD and 2 floppy drives. There is also a Turbo mode, which we obviously need because turbo! And because classic Macs can be slow…
An example of one of the Core’s menus
Mac OS 6 running
Altogether it is a fun project I continue to play with. I like being able to play with classic systems like a Commodore 64 without them using up space in my small apartment. The ease of loading software also makes for a very enjoyable experience. If anyone has experience with this, or has questions, feel free to comment below!
Parts List
I tend to get packs of things when working on a project like this; I can use them later and it gives me options with several sizes. I did not include the MiSTer board and IO board since there are many sellers of those standard parts, but I did include the USB and Bluetooth adapters because they have been proven to work.
After deciding it was time to move to VMware and attempt to use vSAN instead of Storage Spaces Direct (S2D), I wanted to research the hardware I had and see if it would work on ESXi 7.0. Of course, I did not thoroughly read all of the changes vSphere 7.0 has brought. The holiday was approaching and I was going to use that time to do my migration. I had read up on vSAN and knew I needed cache drives, so I bought a few small (250GB) NVMe drives to put into each system. Getting those drives installed took a day because I needed to create a custom 3D printed mount. That would give me a good speed boost for my storage no matter what. Having recently upgraded to 10Gb networking, I already had HP and SolarFlare 10Gb network cards. The time came and I copied all of the VMs I had in Microsoft VHDX format to my NAS (which wasn’t getting changed), then unplugged the first hypervisor and attempted an ESXi 7.0 install.
One hardware change I should note: I am using USB 3.0 128GB thumb drives for the ESXi OS. This also allowed me to leave the original Windows drive untouched, allowing for easy rollback if this turned into a nightmare. I put the ESXi 7.0 disk into the first system AND! Error, no network card found… I started searching online and quickly found a lot of people pointing to this article. ESXi 7.0 cut a ton of network driver support; everything from the Realtek motherboard NIC to the 10Gb SolarFlare card would not be supported, with no way around it (I tried). It comes down to this: 6.x had a compatibility layer where Linux drivers could be used if there were no native drivers, and 7.0 removes it. I then got an ESXi 6.7 installer (VMware doesn’t let you just download older versions on a random account, but Dell still hosts their version) and installed that. Everything came online and started working. Now that I knew that was the one thing blocking me, I installed all my systems with 6.7 while I waited for the 3 new Supermicro AOC-STGN-i2S Rev 2.0 Intel 82599 2-Port 10GbE SFP+ cards I ordered. Using the Intel 82599 chipset, they have wide support. Two ports are nice, and the 2.0 revision of the card is compact, allowing them to fit into my cases. So far I recommend them; they are also around $50 on eBay, which is not bad.
I played with a few of the systems, but decided to wait until the new network cards arrived a few days later to initialize vSAN and copy all of the data back over. I used this guide, from the same author as the other post about ESXi 7.0 changes, to configure the disks in the system how I wanted them. At one point I thought I was stuck, but I just had to have VMware rescan the drives. I set up a vSphere appliance on one of the hosts. This gives me all the cluster functionality and a single webpage to manage all the hosts. Here I can also create a “Distributed Switch”, which is a virtual switch template that can be applied to each of the hosts. I can set the VLANs I have, and how I want them to work, in one place, then deploy it to all the systems easily (a rough PowerCLI equivalent is sketched after this paragraph). This works as long as all your hosts have identical network configurations. After watching a YouTube video or two on vSAN setup I went ahead and set it up. The setup was straightforward, the drives reported healthy, and I was ready to put some data on it.
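That PowerCLI sketch, for anyone who prefers scripting it over the web UI I actually used; the vCenter address, datacenter, names, and VLAN IDs are all made up:
# create the distributed switch, attach every host, and define port groups once
Connect-VIServer vcenter.lab.local
$vds = New-VDSwitch -Name "Lab-DVS" -Location (Get-Datacenter -Name "Homelab")
Get-VMHost | ForEach-Object { Add-VDSwitchVMHost -VDSwitch $vds -VMHost $_ }
New-VDPortgroup -VDSwitch $vds -Name "Servers-VLAN20" -VlanId 20
New-VDPortgroup -VDSwitch $vds -Name "Mgmt-VLAN10" -VlanId 10
Each port group carries its VLAN tag, so defining them here is what replaces configuring the same VLANs by hand on every host.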
A small flag about vSAN: it uses a fair amount of RAM to manage itself and track which system has what. I was seeing about 10-12GB of RAM used on each of my hosts, which only have 32GB to begin with. There are guides online for this, and I believe it can be tweaked; it has to do with how large your cache drive is and your total storage. Not a big deal, but if you are running a full cluster it is something to be aware of.
Migrating the old VMs from their Hyper-V disk images to VMware was not too difficult. I used qemu-img to convert from VHDX to VMDK. The VMDK images qemu-img creates are the desktop (hosted) flavor of the format; VMware’s desktop products create slightly different disk images than the server products. I then uploaded these VMDKs onto the vSAN and used vmkfstools in the ESXi shell to convert those images to the server format (both steps are sketched below). The Windows systems noticed the change, did a hardware reset, and worked right away. The Linux systems (mostly CentOS 8) would not boot under any of the SCSI controllers VMware offers. After reading online, and a bit of guessing, I booted them with the IDE controller, which appeared to be the only one dracut had modules for. Once the systems were online I could run updates, and the new kernel versions generated new initrd images. Because those images were built on the new virtual hardware, they included the SCSI controller modules, and the VMs could then be switched from IDE to SCSI.
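The two conversion steps looked roughly like this; the VM name and datastore paths are placeholders, the first command runs wherever the exported VHDX files live, and the second runs in the ESXi shell after the hosted VMDK has been uploaded:
# convert the Hyper-V disk to a hosted-format VMDK
qemu-img convert -p -f vhdx -O vmdk webserver.vhdx webserver-hosted.vmdk
# on the ESXi host: clone the hosted VMDK into a server-format disk
vmkfstools -i /vmfs/volumes/vsanDatastore/webserver/webserver-hosted.vmdk /vmfs/volumes/vsanDatastore/webserver/webserver.vmdk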
So far, other than the hardware changes that needed to happen, moving to VMware has worked out well. I am using a VMware Users Group license, https://www.vmug.com/, which is perfect for homelabs and doesn’t break the bank. I am starting to experiment with some of the newer or more advanced VMware features that I have not used before. We spoke of vSAN; I have also set up DRS (Distributed Resource Scheduler, which allows VMs to move between hosts as resources are needed), and I want to set up a key manager server to play with VM encryption and virtual TPMs.
Now that I am off of that… unsupported… Storage Spaces Direct configuration, updates are much easier. I can put a host into maintenance mode, which moves any running VMs, then reboot it, and once it’s back online things reshuffle. This does mean I need enough space on the cluster for 1/3 of it to be off at a time, but that is ok. I am running 32GB of RAM, with 2 empty DIMM slots in each system, so when the time comes I can inexpensively add more RAM.
If you or your work has a NetApp subscription, there is a NetApp Simulator, which is a cool OVA you can deploy on VMware to learn NetApp-related things. I was using that at work to learn how to do day-to-day management of NetApps. Another neat VM image that comes as an OVA, which I found recently, is Nextcloud’s appliance. They have a single OVA with a great flow for taking you through configuring their product.
Overall the VMware setup has been as easy as I thought it could be. Coming from a workplace that runs its management systems without a lot of access, it has been nice having vSphere 7.0. It automatically checks in online and lets me know when there are updates for different parts of the system.
For the last year I have been running Microsoft Hyper-V on Server 2019. Due to mounting issues I have moved over to VMware vSphere; this first post will discuss my Hyper-V setup and my feedback on it, then the next post will cover my migration and new setup. When I started building out my home setup I was studying for a Windows Server certification for work; with that, and about half of the virtual machines I had at home being Windows, Hyper-V was the choice for hypervisor. One feature that stood out to me was Dynamic Memory on Hyper-V, because my home setup was not that large, as well as automatic virtual machine activation (Microsoft Doc). Later, I was attempting to run Storage Spaces Direct (S2D; SSD would be a confusing acronym, so Storage Spaces Direct goes by S2D), except my setup was not supported, which made me run a… not recommended configuration… more on that soon; and I kept having issues with the Hyper-V management tools. I decided it was time to migrate from Hyper-V and S2D to VMware vSphere and vSAN.
(Please note for feedback I am discussing Windows Server 2019 here, and VMware vSphere 7.0)
Selecting a Hypervisor
I wanted to briefly go over a bit more of my thought process when selecting a hypervisor, beyond the reasons I already mentioned. I started the search a while ago, looking for a Type-1 hypervisor to run on an Intel NUC with a few Windows and Linux VMs. Being a homelab, I thought I would look at free options.
Having used Proxmox years ago with a ton of issues, I wanted to steer clear of it (looking back this probably was not fair; it had been several years since I last used it and I believe it has gotten better). I had also used Citrix Hypervisor (formerly XenServer) with many issues, including a storage array killing itself randomly on one reboot. One of the requirements I gave myself was to have a real management system; I did not want to run KVM on random Linux hosts. That brought me to the 2 big ones, VMware and Microsoft. VMware has a lot of licensing around different features, but was the system I knew better; I could get a VMware Users Group membership for homelabs, and that would take care of the licensing. On the other hand, with me studying for Windows Server tests, and the book covering different Windows Server and Hyper-V features, I thought I would give Hyper-V a try. The following are the things I liked about it, and then what turned me away from it.
Great Things About Hyper-V
I want to give a fair overview of my year-plus running Hyper-V. There are some great features. Dynamic memory allows you to run modern OSes with an upper and lower limit on memory, and most of the time, while the VMs are idling, your memory footprint is very low (a one-line example is below). Another great feature is the earlier-mentioned automatic activation: as long as your Hyper-V host (Windows Server 2019 Datacenter, not the free Hyper-V Server) is activated, it can pass that activation to your guests and allow you to run Server 2012+. All of the services run on Windows, so out of the box you get all the benefits there, such as creating group policies for your servers and using them to do a lot of your fleet management. I recently started using Windows Admin Center, which gives you a single view of all your Windows systems and allows you to update them all in one place. Hyper-V works well if you have a single node and want to do basic things with it; when you move to clustering and advanced storage, Hyper-V starts to give you a lot of issues.
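That one-line example of enabling dynamic memory on a VM, with the VM name and the limits made up:
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
The VM then floats between the minimum and maximum based on demand, which is what keeps the idle footprint low.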
Hyper-V Manager on Server 2016 (and 2019)
General Hyper-V Issues
To dive more into the woes I was having with Hyper-V: some of it is my own doing, some of it is the tools. Even before I was running S2D, I was running several Hyper-V boxes, each with its own storage. I will go into my issues with S2D soon. Hyper-V’s management tools are not good. You have several options for how you will manage the systems; the first and easiest is Hyper-V Manager. This is a simple program that lets you manage a Hyper-V system 1-1. I say 1-1 because if you have VMs that are part of a failover cluster, you can connect to them here to view them, but that is it. Hyper-V Manager only allows you to manage VMs that live on one hypervisor with no redundancy; for casual use, it works. I use it for my primary AD host because I don’t want anything fancy going on with that box: when I need to start everything from scratch, I need AD and DNS to come up cleanly.
Maybe you have outgrown one-off server management and want to move your systems into a cluster. Now it’s time for Failover Cluster Manager. You add all the servers into a failover cluster together and get through the checks you have to pass. Then there is a wizard to migrate your VMs from Hyper-V Manager into Failover Cluster Manager. One requirement is storage that every box in the cluster can use, either S2D or iSCSI (you can do things like Fibre Channel, but I was not going to do that). I used the tool, and it said all of a VM’s files had been moved onto shared iSCSI storage that all the machines could use. Should be good, right? Things seemed to be working. Then I would move certain VMs to other hosts and they would fail, just some of them. It came down to an ISO, one of the HDD hibernation files, or a checkpoint (the Microsoft version of a snapshot) still living on one of the hosts, and the UI NOT mentioning this. Thus, when the VM tried to load on another system, a file it needed was not there and it could not load. Failover Cluster Manager is also fairly simplistic and does not give you a ton of tools. Again, Windows Admin Center adds some nice info on a standard cluster, but it is not fantastic, leaving you to dig through PowerShell to try to manage your failover cluster.
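I eventually hunted down those stray files with a bit of PowerShell; something along these lines, run on each host, lets you eyeball any path that is not on the shared storage (a rough sketch, not exactly what I ran):
# virtual disks and any mounted ISOs, with the paths they actually live at
Get-VM | Get-VMHardDiskDrive | Select-Object VMName, Path
Get-VM | Get-VMDvdDrive | Where-Object { $_.Path } | Select-Object VMName, Path
# config, checkpoint, and smart paging locations for each VM
Get-VM | Select-Object Name, ConfigurationLocation, SnapshotFileLocation, SmartPagingFilePath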
On occasion the Virtual Machine Management service, which is in charge of managing the VMs and provides the interface to monitor, modify, and access them, would lock up. Hyper-V Manager and Failover Cluster Manager would show no status for the VMs, and I would have to restart the service. These minor issues stacked up over time.
To manage Hyper-V remotely (meaning from any other system) you need to set up Windows Remote Management, WinRM. This system uses unencrypted HTTP by default. Encryption can be turned on with a few commands, but it creates a cert based on your hostname and IP address. If you have more than one IP, OR you are in a failover cluster, you will be spending a lot of time customizing these certificates, because the automatic setup only gets a cert for your host; when that node becomes the failover cluster manager, it needs the cluster’s virtual IP and hostname in the cert as well. I had to create different certs for that virtual interface and put them on the different nodes manually; there are people in the Microsoft support forums talking about this. Here is an example, in case it helps anyone, of creating a cluster listener after manually creating a cluster cert.
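Something along these lines on each node; a sketch only, with a placeholder cluster name, and a self-signed cert standing in for whatever cert you actually trust:
# a cert that covers the cluster's virtual name (self-signed here just for illustration)
$cert = New-SelfSignedCertificate -DnsName "hvcluster.lab.local" -CertStoreLocation Cert:\LocalMachine\My
# an HTTPS WinRM listener bound to that cert
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * -HostName "hvcluster.lab.local" -CertificateThumbprint $cert.Thumbprint -Force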
There is also System Center Virtual Machine Manager, another package you can purchase from Microsoft to manage Hyper-V. Having dealt with System Center for managing Windows systems at work, I did not want to touch that at all. Hyper-V has a lot going for it, and the underlying code running VMs works well 99% of the time. I wish Microsoft put more time into growing the tools you use to manage it. Parts of the process, like setting up networking on different nodes, could be much smoother in comparison with VMware Distributed Switching. I installed one of the systems with Windows Server Core (no GUI) to learn more about that. If your primary interface needs a VLAN for management, this is a painful experience: you have to create the Hyper-V virtual switch, attach your management interface to it, and assign the VLAN, all from within PowerShell (a sketch of which is below). If you need to do it, this is a good resource. Things like this, and the WinRM issues, make Hyper-V feel unpolished even after being on the market for years.
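That Server Core dance is roughly the following; the adapter name, switch name, VLAN ID, and addresses are all placeholders:
# create the virtual switch and let the management OS share the physical NIC
New-VMSwitch -Name "vSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
# tag the management vNIC with the management VLAN
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "vSwitch" -Access -VlanId 10
# re-apply IP settings to the new vEthernet interface
New-NetIPAddress -InterfaceAlias "vEthernet (vSwitch)" -IPAddress 10.0.10.5 -PrefixLength 24 -DefaultGateway 10.0.10.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (vSwitch)" -ServerAddresses 10.0.10.2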
S2D Issues
I wanted to put these systems into a failover cluster, allowing them to move VMs between each other as needed, but for that I needed shared storage. I attempted to use iSCSI from my FreeNAS box; alas, with 7 old spinning drives, the speed was not great with more than a few VMs. Then I thought: I had some spare SATA SSDs, and I could use S2D for shared storage. For those who have not attempted to set up S2D, your drives have to be NVMe or sit behind an internal HDD controller; the system will refuse to work with any configuration it does not like. With most of my systems being small form factor PCs, and me only using a few SATA drives, I got USB 3.1 4-bay SATA enclosures. Not optimal, but decent speed, and it lets me add a good number of drives to each system without the expense of a full RAID or SAS controller.
S2D refused to work with these drives. I believe it came down to the controller the USB enclosure uses not reporting something the system wanted, and the drives showing up as Removable also made Windows refuse. There are commands you can run, like the one below, that enable more disk types to work, but I could not get my drives to show up.
(Get-Cluster).S2DBusTypes=4294967295
PowerShell command to enable all disk types in Storage Spaces Direct, from this article
Then I had an idea, an evil and terrible and great idea. I created 3 VMs, one on each Hyper-V box, then gave each VM its server’s 3 disks in full. Now I had 3 VMs, each with 3 drives (and a separate OS drive) to run S2D. To the VM OS, it looked like it had 3 SCSI HDDs, which S2D was happy to use. I put these three Windows Servers into a failover cluster together and set up S2D. Overall the setup was not too bad. If you have Windows Admin Center configured, it is much easier to set up and use Storage Spaces Direct than with the GUI in Windows Server. There are a ton of PowerShell commands for configuring S2D and you will probably end up using a bunch of them.
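In case it helps anyone attempting the same sin, the disk passthrough and S2D enablement boil down to something like this; disk numbers and VM names are placeholders:
# on each Hyper-V host: take the physical disk offline and hand it to the storage VM
Set-Disk -Number 2 -IsOffline $true
Add-VMHardDiskDrive -VMName "s2d-node1" -ControllerType SCSI -DiskNumber 2
# inside the three storage VMs, once they are in their own failover cluster
Enable-ClusterStorageSpacesDirect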
This worked! The systems were in a failover cluster of their own, and my main failover cluster that ran the VMs could use it as shared storage. If you use Windows Admin Center you can get nice stats from the storage cluster about the sync status of the disks. Every time one of the storage nodes reboots, the cluster needs to re-sync itself. There are different resiliency levels you can set S2D to; I set it to keep 2 additional copies of each piece of data, which means each node has a full copy. This uses a lot of space, but I can have 1 node run everything (which ended up being overkill).
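Creating a volume with that resiliency is one command; a sketch, with the friendly names and size made up:
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 500GB
The -PhysicalDiskRedundancy 2 setting is the three-way mirror: the volume survives losing two copies at the cost of 3x the raw space.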
This setup ran decently for a while; other than the small VM overhead, it was fast and it worked. The issues arose when the second Tuesday of the month came around and I needed to do patching. The storage cluster was sitting on top of the hypervisors, and it didn’t really understand that. I often ran into problems where I would shut down one of the storage nodes to patch it, patch the host, and then the other 2 nodes would lock up or declare that all storage was lost. This would occur even when I preemptively moved the main node role and prepped for the restart. With storage dropping out from under all the VMs, they would die and need to be manually rebooted or repaired. After a few months of this I started to look for a new setup.
All in all, I ended up running about 5 Windows VMs and 5 Linux VMs on Hyper-V for over a year with good uptime. One benefit of Hyper-V is that you get the hardware compatibility of Windows, which is vast. The big downside of Hyper-V is the tooling around it: at times it seems unfinished, at other times buggy. My next post will be about the migration and my experience with vSphere 7.0!
Early in my career I got a few Cisco certs; the study material was available to me, and I thought it would be an interesting thing to learn. At this point I have had a CCNP for almost 10 years, and I still enjoy messing with networking even if it is not my day-to-day job. While I historically have used Cisco a lot, there are many other brands out there these days with good gear, some even low-power enough that I can run it at home and not worry about the power bill. Below is my current home setup; it has changed a lot over time, and this is more of a snapshot than a proper design document. That is what homelabs are for, right? Messing around with things.
Firewall
The firewall I am running is one I have mentioned on here before. The system itself is an OLD Dell OptiPlex 990, released in early 2011 and soon to have its 10th birthday! Idling at ~30 watts, it works well for what I need with a 2nd-gen i7 and 8GB of RAM. I added a 4-port Intel gigabit Ethernet card to it, which allows for more ports and hardware offloading of a lot of IP tasks.
I looked around at different firewall OS options. pfSense is the obvious one, but I found its interface lacking (I use Palo Alto Networks firewalls at work, and that interface/flow is more what I am used to). OPNsense is a bit better, but still leaves something to be desired on the UI side. Then I tried a Home License of Sophos XG. It is free as long as you stay at 4 cores or fewer and 6GB or less of RAM; you are given an “evaluation license” until 2099-12-31, and if it runs out I will ask for an extension. For more than a year I have been enjoying it: the interface is slick, and you get the enterprise auto-patching built in. In the time I have run it, there has been 1 zero-day attack on the product, and it was immediately patched without me having to log in. I use it as my home firewall between VLANs and as a DHCP server, and I also have IPsec and SSL VPNs for when I am away from home. The system does DNS for the house (on the less secure VLANs; AD does the others) and allows block lists to be used, like a Pi-hole but built into the product.
There are a few things it does a little oddly, but I enjoy not having to write weird config files on the backend of some Linux/BSD box to make my firewall work. I have it hooked into AD for auth; that way I can log in with a domain admin account, and users who have domain accounts can VPN into home. It has been VERY stable, and usually only reboots when I tell it to do an update, or that one time the ~10-year-old PC blew a power supply.
Cross Room Link
At the start of the year, I was running a Ubiquiti Wi-Fi mesh at home; it got decent speeds and let me avoid running wires across the apartment. The access points used were these models, link. They were only 2×2 802.11AC Wave 1 and got decent speeds (around 400Mbps), but being in a New York City apartment, I would sometimes get interference, even on 5GHz. The interference would cause issues when playing games or transferring files. The bigger issue was that my desk, with its bunch of computers, and the firewall were on opposite sides of this link, meaning any traffic bound for a different VLAN had to cross the Wi-Fi link and come back. On top of that, I basically HAVE to use 5GHz; I did a site survey with one of the APs and the LEAST used 2.4GHz channel near me was 79% utilized…
Anyway, I started looking around for a replacement. I always thought fiber could be the way to go since it’s small, and if I could get a white jacket on it, it would blend in with the wall. I spent a few weeks emailing and calling different vendors trying to find someone who would do a single run of white-jacketed fiber. Keep in mind this was early 2020, with Covid starting up. Lots of places could not do orders of 1, or their website would say they could and later they would say they couldn’t and refund me. Finally I found blackbox.com; I have no affiliation with them, they just did the job quickly and I appreciate that. I got a 50 meter or so run and was able to install it with the switches below.
Switches
Now that I had the fiber, I needed some small switches I could run at home. After looking at what others have on Reddit and www.servethehome.com, I found the Ruckus ICX 7150-C12P: a switch with 14 1Gb/s Ethernet ports and 2 1/10Gb SFP+ ports. The switch is compact, fan-less, and has 150 watts of PoE! I can run access points and cameras off of it without other power supplies. I have learned to check, before buying this sort of gear off eBay, whether I can get the newest firmware; Cisco and HPE love to put it behind a wall that requires an active support contract. Not only does Ruckus NOT do that, they have firmware available for their APs that allows them to run without a controller; more on that later.
I ended up buying 1 of the Ruckus switches “used”, but it came sealed in the box. Then I got another one broken, after seeing some people online mention they can overheat and kill their power supply if kept somewhere without proper ventilation. The unit is fan-less, but the tradeoff is that nothing can sit on top of it, because it needs to vent. I was able to get that one for around $40, then a new power supply for $30; all in, I spent $70 for a layer 3 switch with 10Gb ports! Now I have these 2 units on opposite sides of the room, in a switch stack. This way they act as one and I only need to manage “one switch”.
With the Ubiquiti gear no longer acting as a Wi-Fi link, which I have written about before, I only had one of the APs running. As mentioned, that access point was only 2×2 and 802.11AC Wave 1. I was pondering getting a new Wi-Fi 6 access point, and while looking around, someone on Reddit (again) suggested looking at Ruckus access points. Their antenna design is very good, and with their “Unleashed” firmware you get features similar to running a Ubiquiti controller. After looking at the prices I had to decide if I wanted to go Ubiquiti with Wi-Fi 6 and wait for their access points to come out, or get something equally priced but more enterprise level, like an 802.11AC Wave 2 access point (a Ruckus R510 or R610 off eBay).
I recently had a bad experience with some Ubiquiti firmware, then all of a sudden they killed Ubiquiti Video with very little warning, and some of the more advanced things I want to do are either minimally documented or not documented at all with Ubiquiti. One could argue that I am used to enterprise gear, and Ubiquiti is more “pro-sumer” than enterprise; thus, I should not be upset at the lack of enterprise features. That made me decide to try something new. I ended up getting a Ruckus R610 off eBay and loading the “Unleashed” firmware on it. I can say the speed and coverage are much better than with the older access point. It is 3×3 802.11AC Wave 2, and with most of my devices still being 802.11AC, I figured that was a good call.
One feature of the Unleashed firmware is that it can manage all your Ruckus hardware. The web management portal has a place to attach your switches as well and do some management of them there. I have been hesitant to do this; coming from a traditional CLI switch management background, I have yet to try it.
Unleashed Home Screen
I was able to PoE boot the AP just like I did with Ubiquiti; converting the firmware was easy, and there are many guides on YouTube for it. The UI does not have the same polish that Ubiquiti does, but the controller is in the AP itself, which is very nice. There is a mobile app, but it is fairly simplistic. The web interface allows for auto updating and can natively connect to Active Directory, making it very easy to manage authentication.
There are 3 wireless networks in the home. One is the main one for guests, with their 6-year-old unpatched Android phones; it has a legacy name and a meh password, so I don’t have to reset the Wi-Fi settings on some smart light switches. This is where all the IoT junk lives. There is another with a better password that connects to the same VLAN; I am slowly moving things in the house over to it, so at least the key is more secure. Then there is the 802.1X wireless network, which is not broadcast. When a user authenticates with their domain creds, I send them to a different VLAN depending on the user and device. This is mostly used for trusted devices like our laptops, and the iPad when I want to do management things. For my domain account, this network puts me on the management network.
10Gb/s Upgrade!
The latest upgrade I have embarked on is 10Gb/s networking. I moved my active VM storage off of the NAS to Storage Spaces Direct for performance. While the NAS has worked well for years, the 7 3TB disks do not give fantastic IOPS when different VMs are doing a lot of transactions. After lots of thought and trials I went with Storage Spaces Direct, which I will write about later. The main point is that it allows all the hypervisors to have shared storage and keep it in sync, and to do that they need good interconnects. This setup is the definition of lab-do-not-do-in-prod, with 3 nodes, each with 3 SSDs over USB 3. I knew that with USB 3 my theoretical bottleneck was 5Gb/s, which is much better than the 1Gb/s I had before, which also had to be shared with all the server and other traffic.
First I had to decide how I would lay out the 10Gb/s network; while the ICX 7150 has 2 10Gb/s ports, 1 is in use for the link between the switches. After looking around and comparing my needs/wants/power/loudness-the-significant-other-would-put-up-with, I got a MikroTik CRS309-1G-8S+IN. I wasn’t super excited to use MikroTik, since their security history is not fantastic, but I didn’t want to pay a ton or have a loud switch. I run the switch with the layer 2 firmware and put its management interface on a cut-off VLAN; that way it is very limited in what it can do.
After that I got HP 10Gb/s server cards and tried a SolarFlare S7120. Each had their ups and downs. The HPs are long and would not fit into some of the slim desktops I had, but when they did fit, like in the Dells, they worked right away without issues. The SolarFlares are shorter cards, which is nice, but most of them ship with a firmware that will not work on some motherboards or newer operating systems. For those you need to find a system they work in, boot to Windows (perhaps an older version), then flash them with a tool off the SolarFlare website. After that they work great. I upgraded the 3 main hypervisors and the NAS. I have seen the hypervisors hit 6.1Gb/s when syncing Storage Spaces; with memory caching I can get over the rated disk speed.
That is the general layout of the network at this point. I am using direct attach cables for most of the systems. I did order some “genuine” Cisco 10Gb/s SFP+ modules off Amazon for ~$20; I didn’t believe they would be real, but I had someone I know who works at Cisco look them up, and they are real. Old stock shipped to Microsoft in 2012 or so, but genuine parts. The Ruckus switches and these NICs do not care which brand the SFPs are, so I figured I would get one I knew. The newer Intel NICs will not work with non-Intel SFPs, so look out.
To summarize: everything comes in from my ISP to the Sophos XG box, which connects to a port on one of the Ruckus switches. Those two Ruckus switches have a fiber link between them. One of the SFP+ ports on a Ruckus switch goes to an SFP+ port on the MikroTik switch, and all the hypervisors hang off that MikroTik switch with SFP+ DACs. Desktops, video game consoles, and APs all attach to 1Gb/s Ethernet ports on the Ruckus switches. I have tried to label all the ports as best I can to make managing everything easier. I’m sure this will evolve more with time, but for my apartment right now, 10Gb/s networking with a Ruckus R610 AP has been working very well.
I have started a transition from Hyper-V and Storage Spaces Direct to VMware vSphere and vSAN. I apologize that the order of these blog posts is all over the place. Part of the transition is upgrading the hardware on some of the hosts I have, including getting 250GB NVMe drives for the vSAN cache. I started the migration with one of the desktops that run in the cluster, a Lenovo ThinkCentre M710s. After finding the small slot the NVMe drive goes in, I realized there is a manufacturer bracket you are supposed to have to install an NVMe drive. Since I do not have that, and do not want to pay for it, I spent a good bit more than an hour on the first day of the migration designing this bracket and 3D printing it. Then, while that was printing, I realized one of the feet on the system had gone missing, so I made a small one of those too.
This post is just a quick update and a preview of more to come.
I recently attempted to boot a Dell Precision M6800 into ESXi 7.0u1 to test some functionality before going to prod. Unfortunately this was met with “Invalid Partition Table”; switching between UEFI and BIOS boot didn’t seem to fix it, giving “No boot device available” instead. After searching online I found this, https://communities.vmware.com/t5/ESXi-Discussions/quot-Invalid-Partion-Table-quot-Error-booting-ESXi-7-from-USB/m-p/1823852, which had comments such as “just dont run on a laptop”, which was not very helpful. I spent a chunk of time playing with the partitions and seeing how they were configured. I noticed that when I went into the UEFI setup on the laptop it said it couldn’t find any file systems available, yet when I loaded Windows or Linux on the system, the UEFI could see those boot partitions. I tried updating the firmware like Dell recommended, with no change. I then realized the ESXi 7.0 image uses FAT16 for the EFI partition, while every other EFI partition I have seen is FAT32.
I copied the files and folders out of the boot partition, reformatted it as FAT32 instead of FAT16, marked it as the EFI type (ESP in GParted), and moved the files back. The system booted fine the first time, with ESXi running happily. If you need to boot ESXi on a Dell M6800, M4800, or similar, give that a try; a rough command-line version of what I did in GParted is below. If this worked or didn’t work for you, leave a comment below.
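That rough sequence, from a Linux live environment; it assumes the installer USB is /dev/sdb and its EFI partition is /dev/sdb1, so check with lsblk first:
# back up the contents of the EFI partition
mkdir /tmp/esp-backup
mount /dev/sdb1 /mnt && cp -a /mnt/. /tmp/esp-backup/ && umount /mnt
# reformat as FAT32 and flag it as an EFI System Partition
mkfs.fat -F 32 /dev/sdb1
parted /dev/sdb set 1 esp on
# copy the boot files back
mount /dev/sdb1 /mnt && cp -a /tmp/esp-backup/. /mnt/ && umount /mnt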