
Homelab Token Ring

For the LAN Before Time, my retro rack, I wanted to mix the most diverse set of CPUs, OSes, and networking I could find. There are not a ton of networking standards out there, as Ethernet took over so quickly. One that has always interested me is Token Ring, standardized as IEEE 802.5 and pushed mostly by IBM as a competitor to Ethernet. Token Ring went through many transitions in its time on the scene, from speed changes to connector changes, lasting from the mid 1980s through the 1990s.

Connectors

Photo creative commons from Wikipedia

The protocol started at 4mb/s (megabits per second), with the computer having a DB9 connector going to a giant four-pin plug (the IBM Data Connector).

Later 16mb/s was added. Most of the cards you will find are 4/16 cards.

The physical connector and connection speed are independent: you can run 4mb/s or 16mb/s over either the DB9 or RJ45 connector.

The cards started in the ISA era and continued into the PCI era. The connector also evolved to a standard RJ45, with adapters available to go between the older connectors and newer ones. Later cards would include both DB9 and RJ45 connectors. With RJ45, only the middle four pins are used, wired straight through, so normal straight-through Ethernet cables work.

In the last updates to the protocol, 100mb/s Token Ring was added, but by the time that came out Ethernet had taken much of the market share. Finally, in 2001, a 1000mb/s standard was created, but Wikipedia says no devices ever shipped for it.

MAUs

Unlike Ethernet, Token Ring cannot connect two computers directly. You need to go through a Multistation Access Unit, or MAU. These units control ports going in and out of the ring, and can be thought of like an Ethernet hub or switch. The ring itself also needs a terminator; later models contained internal terminators if put into a specific mode. There are MAUs with the old large IBM connector and newer ones with RJ45, plus adapters between any of these connection types for networks in transition.

My MAU Journey

I picked up two MAUs of the same model, the ODS/Motorola 877. These are great units after some hardware tweaks, and I would recommend them. While they are the same model with the same firmware revision, they are branded differently: Motorola bought ODS (Optical Data Systems), the company that made them. The first one I got has ODS branding and a spot for two switches to control the mode and speed of the MAU. The second one is Motorola branded on the case, but not the board, and is missing the cutout in the case for switches.

From what I could learn working on it, looking at documentation for other MAUs, and asking Claude, the device can work in three modes:

  • RING: Normal Token Ring operation; requires an external RI/RO loopback cable to close the ring. Use this when daisy-chaining multiple MAUs together; all active lobe ports are part of the ring.
  • STAR: Each port operates independently (not a true ring); used for certain troubleshooting or special configurations.
  • LOOP: Internally connects Ring In to Ring Out, self-terminating the ring without external cables; perfect for a single standalone MAU.

The MAUs were designed to have a switch to go between modes. Neither of mine did; both had a physical soldered-in jumper setting their mode. The Motorola one didn't have a hole in the case for a switch to exist, but the PCB is the same. I removed the soldered jumper and replaced it with a standard PC jumper pin so I could easily change modes when I wanted to. Bridging the top and middle pins put it into LOOP mode, which is what I needed: internal termination, good for simple 4-port usage, and where I will leave both units most of the time. Before that, the MAU was in RING mode without termination; each device would join the ring for 10 or so seconds, not hear anything else on the ring, and then disconnect. This MAU appears to automatically switch between 4mb/s and 16mb/s, and I never moved the speed jumper.

I made two modifications to these devices. The first was the jumper change mentioned above. The second was power: they come with a FGG 2P power connector onto an RJ45 plug and need 12V, and I wanted to just be able to use a wall plug. I first tried to get that connector, but after finding it tiny and hard to work with, I replaced the port in the device with a standard barrel plug.

Token Ring Drivers

One difficult part of buying Token Ring cards on eBay is that you never know if you can find all the drivers. The card I have is a later-model PCI card, a Thomas Conrad TC4048. Thomas Conrad seems to have been an interesting company, putting out different network cards over the 80s and 90s before Ethernet took off. It is easy to find their Token Ring and ARCNET cards online. Finding their drivers, on the other hand, proved to be difficult.

Driver Hunting

I found this archive.org ISO: https://archive.org/details/pwork-297. It contains a TON of drivers for devices in the 90s and lists the TC4048 as one of them. I downloaded the image, installed the driver AND… Windows 98 said it had all the tc4048 files it needed except a "tc4048.dos". I then found https://www.minuszerodegrees.net/software/Compaq/allfiles.txt, a site which has every HP/Compaq driver that used to be on their site, and is much easier to search. There were several TC4048 items.

I found an archive at https://ftp.zx.net.nz/pub/archive/ftp.compaq.com/pub/softpaq/sp19501-20000/ and downloaded sp19859.exe, which expanded to include "DOSNDIS" and "OS2NDIS" folders. I knew Compaq rebranded this card, so I yoloed: the Thomas Conrad drivers from different vendors had similar files with different names, but they were the exact same size and appeared to be the same, so I hoped renaming one would just work. I renamed "DOSNDIS/CPQTRND.DOS" to "tc4048.dos" and put it with the drivers I got from the archive.org image. That made progress; the error messages now complained that "svrapi.dll" was missing in C:\Windows\. I found that file in C:\Windows\System32 and just copied it up one directory…
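
In the end the whole fix boiled down to a rename and a copy, roughly like this from a DOS prompt (paths as they were on my machine):

C:\> ren CPQTRND.DOS TC4048.DOS
C:\> copy C:\WINDOWS\SYSTEM32\SVRAPI.DLL C:\WINDOWS\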

And magically that worked! I had a 16mb/s connection working between the Cisco 3825 (core) and the Windows 98 PC (edge)! The core of my retro network is a Cisco router; I purchased this Cisco 3825 a while back because it's the last one that supports Token Ring, but new enough to have a 1gb/s uplink port to my core network. This lets me host some retro VLANs internally and firewall them off for security (since none of these systems have gotten patches for decades). I can play with Novell NetWare and host a file share of games for the retro systems on this network as well; even a legacy network moves files a lot more easily than a ton of floppy disks. I leave this router off most of the time because it's a bit power hungry and loud. I have written about it before, and it also hosts my dial up connections.

I now had the Cisco 3825 with a Token Ring card and the Windows 98 PC joining a ring and communicating! I have watched a bunch of clabretro's videos on Token Ring, and with the Thomas Conrad drivers I saw the same odd interaction between Windows and the ring that he saw with his cards. When the computer boots, it tries to join the ring, and the system stays at the Windows startup screen an extra-long time as it tries to enter the ring. The system will also wait at shutdown as it attempts to leave the ring. If the Token Ring card is not plugged in, you get a message about failing to connect after a prolonged startup.

Future Token Ring Plans

I plan to play with Token Ring a bit more both as a standard networking technology alongside the Ethernet network I have. Now that I have two working MAUs I want to experiment with linking them over the ST fiber connectors they have and getting a Token Ring connection over fiber. I am pondering learning FPGAs by building a Token Ring to Ethernet bridge using an FPGA connected to an ISA Token Ring card. I just find it interesting and it would push my FPGA skills; the project would need to translate the headers of Token Ring at layer 2 to Ethernet headers.

Token Ring is just the layer 1 and layer 2 technology; above that we run standard TCP/IP, which made getting started much easier than it would have been with another protocol like AppleTalk or IPX. Once the physical connection was up and devices could enter the ring, I was able to use standard Cisco commands to create a routable DHCP pool for Token Ring, along the lines of the sketch below.
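
As an illustration only (the interface numbering and addressing here are invented for the example, not copied from my router config):

interface TokenRing0/0
 ip address 192.168.50.1 255.255.255.0
 ring-speed 16
!
ip dhcp pool TOKENRING
 network 192.168.50.0 255.255.255.0
 default-router 192.168.50.1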

10″ Homelab Rack

I am working on a project that involves Intel vPro and AMD DASH. I am hoping to automate vPro and DASH systems with Ansible the same way you can with iDRAC and iLO for OS-level development. I needed a few Tiny/Mini/Micro PCs to have the actual system APIs available. After stacking a bunch of them on the shelf next to my desk, I wanted a nicer way to organize them. I tried 3D printing a stand for one, and that worked for a time; then I decided to stack them. First I tried 3D modeling and printing little shelves that could fit them. This didn't work great; I made the tolerances too tight, making them hard to snap together. Then I decided to pivot to a 10″ Homelab rack.

There are many of these out there that come as full kits, but those can go for $100+, and I have a 3D printer… so where is the fun in that…

I found this model: https://www.printables.com/model/329801-10-ah-rail-based-servernetwork-rack/files. You buy the metal posts separately, and this is the frame that holds the posts together. That brings the cost to two sets of posts (about $10 each) plus some filament; I got these posts. That is more my style! There are many 10″ rack 3D models available. Many of them are BIG, engineered with handles so you can carry the systems around; some are engineered to be 100% 3D printed with no metal posts. I wanted a rack with the strength of the metal posts, but simple.

I printed this, and while it was printing saw someone had made a beefed-up version with thicker corner pieces: https://www.printables.com/model/666403-10-in-mini-rack. The frame is a bit flimsy; instead of the frame holding in the gear, the gear is more holding together the frame. I wish now that I had printed the beefier one instead, but it's working as is. I printed all these parts with 3 walls and 50% infill. More than likely overkill, but I was just using PLA and wanted extra strength.

I went with 4U rack posts; I don't need more than that now, didn't want it to get too big, and for a $20 investment I can always change the posts out later. Each of the mini PCs is 1U, giving me 4 "slots".

I printed this to hold the HP models I have: https://www.printables.com/model/585091-10-inch-rackmount-for-mini-hp-prodesk-elitedesk-g1. The mounts were strong, but I wanted to support the back of the mini PCs as well, and found someone made a short remix for that: https://www.printables.com/model/841903-10-forward-or-reverse-rackmount-for-hp-elitedeskpr. The systems' airflow is mostly front to back, and there are gaps between the systems for airflow.

I have an Intel NUC I also want in the rack. The NUCs are a little taller than 1U, so all the 3D models to mount them are 2U. I didn't want to use up 2U for it, so I put it up top where it can stick out. I found this nice shelf and printed it: https://www.printables.com/model/1002978-geeekpi-10-in-rack-shelf. I used this one specifically because it had adjustable rear supports.

The little rack is working well for me, sitting on a shelf and holding what it needs. The power supplies for these systems run down the shelf the rack is on to a power strip. Since these are development systems, I skipped having a UPS. Once I got the rack posts, the whole rack came together in a day and was easy to assemble. This was a better solution than any other home-grown one, because I get access to the entire ecosystem of 10″ standard rack parts!

Step-By-Step Setting Up Networking for Virtualization on OpenShift 4.19 for a Homelab

As we continue our OpenShift journey toward getting virtualization working, we have a vanilla node already set up, and now we need to get networking configured. The examples here are from OpenShift 4.19.17.

Networking in OpenShift is conceptually two parts that connect. The first part is host-level networking: the CoreOS OpenShift host itself. The second is how pods connect into that networking. Usually the network path runs through your network interface card (NIC), to the Container Networking Interface (CNI), then to your pod. Here we will be using a meta plugin called Multus that connects between the NIC and the CNI. Redhat has a good post about it.

Host Level Networking

This part of the networking stack is straightforward if you are used to Linux system networking, and it is set up the same way: treat the CoreOS node like any other Linux system. The big decision to make in the beginning is how many interfaces you will have.

Networking diagram without sub interface

If you have one interface and plan on using virtualization, are you going to use VLANs? If so, you may want to move the IP off of the primary interface and onto a VLAN sub-interface. This moves your traffic from untagged to tagged for your network infrastructure.

Another reason is a bug in the Mellanox driver (mlx5e), where ConnectX-4 and ConnectX-5 cards can decide you are double VLAN encapsulating and start automatically stripping VLAN tags. The solution is to move all traffic to sub-interfaces. You will see an error in dmesg/journalctl like: mlx5e_fs_set_rx_mode_work:843:(pid 146): S-tagged traffic will be dropped while C-tag vlan stripping is enabled

With the IP moved, that frees the interface up for other VLANs as well. If you deployed network settings via a MachineConfig, you would have to override them there. A sketch of the end state is below.
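
Here is what that end state can look like, written as the kind of NMState policy covered in the next section (the NIC name eno1, VLAN ID 2, and DHCP are assumptions for the example):

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: move-ip-to-vlan2
spec:
  desiredState:
    interfaces:
      - name: eno1              # physical NIC no longer carries an IP
        type: ethernet
        state: up
        ipv4:
          enabled: false
      - name: eno1.2            # tagged sub-interface now holds the host IP
        type: vlan
        state: up
        vlan:
          base-iface: eno1
          id: 2
        ipv4:
          enabled: true
          dhcp: true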

Networking diagram with sub interface

The rest of the configuration will be done via the NMState Operator and native Openshift.

NMState VLAN and Linux Bridge Setup

NMState is a declarative policy system for NetworkManager. It allows you to set policies, like you would with Windows Group Policy or Puppet, that tell each host how its network should be configured. You can filter down to specific hosts (I do that for testing, to only apply to one host) or deploy rules for your whole fleet, assuming nodes are all configured the same way. It's also possible to use labels on your hosts to specify which rules go to which hosts.

NMState can also be used to configure port bonding and other network configurations you may need. After configuration, you get a screen that tells you the state of that policy on all the servers it applies to. Each policy sets one or more NetworkManager configurations; if you have multiple NICs and want to configure all of them, you can do it in one policy, but it may be worth breaking the policies apart for more granularity.

Another way to go about this section is to SSH into each node and use a tool such as nmtui to manually set the networking. I like NMState because I get a screen showing all my networking is set correctly on each node, and it updates to make sure it stays that way. I put an example of setting up port bonding below.

  • Go to the OpenShift web console; if you need to set up OpenShift, I suggest checking out either my SNO guide or HA Guide.
  • Click Operators -> OperatorHub, search for NMState, and install the Kubernetes NMState Operator.
  • Once installed, you will need to create an “instance” of NMState for it to activate.
  • Then there will be new options under the Networking section on the left. We want NodeNetworkConfigurationPolicy. Here we create policies for how networking should be configured per host, like Group Policy or Puppet configurations.
  • At the NodeNetworkConfigurationPolicy screen, click “Create” -> “With YAML”.
  • We need to create a new sub-interface off of our eno1 main interface for our new VLAN, then a Linux Bridge off that interface for our VMs to attach to.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan19-with-bridge           <-- Change This
spec:
  desiredState:
    interfaces:
      - name: eno1.19             <-- Change This
        type: vlan
        state: up
        ipv4:
          enabled: false
        vlan:
          base-iface: eno1
          id: 19                     <-- Change This
      - name: br19                   <-- Change This
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eno1.19       <-- Change This
              vlan: {}
  • Important things here:
    • Change the 19s to whichever VLAN ID you want to use.
    • “ipv4: enabled: false” says we want an interface here, but we are not giving it host-level IP networking on our OpenShift node.
    • Remove the <– Change This comments.
    • You MUST leave the “vlan: {}” on the bridge port or it will not work; it tells the bridge to pass VLAN tags through untouched, since we are processing VLANs in the kernel via sub-interfaces.

Now we have this configuration, with a secondary interface off of our NIC, and an internal Linux Bridge for the VMs.

The great thing about doing this configuration via NMState is that it applies to all your nodes unless you put a filter in, and you get a centralized status showing whether each node could deploy the config.

Here is an example from my Homelab, with slightly different VLAN IDs than we have been discussing. You can see all three nodes have successfully taken the configuration.
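
The same status is visible from a terminal; nncp and nnce are the short names kubernetes-nmstate registers for its policy and per-node enactment objects:

$ oc get nncp   # NodeNetworkConfigurationPolicy: overall status of each policy
$ oc get nnce   # NodeNetworkConfigurationEnactment: per-node success/failure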

OpenShift VM Network Configuration

Kubernetes and OpenShift use NetworkAttachmentDefinitions (NADs) to define how pods can connect to host-level networking or to the CNI. We have created the VLANs and bridges we need on our host systems; now we need to create NetworkAttachmentDefinitions to allow our VMs or other pods to attach to the bridges.

  • Go to “Networking” -> “NetworkAttachmentDefinitions”.
  • Click “Create NetworkAttachmentDefinition”.
  • This can be done via the UI or via YAML; first we will do the UI, then YAML.
  • Before entering the name, make sure you are in the Project / Namespace you want to be in; NADs are Project / Namespace locked. This is nice because you can have different projects for different groups to have VMs, and limit which networks they can reach.
  • Name: This is what the VM Operator will select, so make it easy to understand. I use “vlan#-purpose”, for example “vlan2-workstations”.
  • Network Type: Linux Bridge.
  • Bridge Name: what was set above, in that example “br19”, no quotes.
  • VLAN tag number: Leave this blank; we are processing VLAN data at the kernel level, not as an overlay.
  • MAC spoof check: whether MAC addresses are checked on the wire. This feature lets a network admin pin certain MAC addresses and only allow traffic from those. I usually turn this off.
  • Click “Create”.

The alternative way to do a NAD is via YAML, here is an example block:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan19-data-integration
  namespace: default
spec:
  config: |-
    {
        "cniVersion": "0.3.1",
        "name": "vlan19-data-integration",
        "type": "bridge",
        "bridge": "br19",
        "ipam": {},
        "macspoofchk": false,
        "preserveDefaultVlan": false
    }

You can verify the NAD was created successfully by checking the NetworkAttachmentDefinitions list, or from the CLI as shown below. Your networking is now ready. Next post, we will discuss getting storage set up.
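
From a terminal, the same check looks like this (net-attach-def is the registered short name; the namespace matches the YAML above):

$ oc get net-attach-def -n default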

Additional NodeNetworkConfigurationPolicy YAMLs

NIC Bonding / Teaming

Use mode 4 (802.3ad/LACP) if your switch supports link aggregation; otherwise mode 1 (active-backup) is the safest fallback.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-config
spec:
  desiredState:
    interfaces:
      - name: bond0
        type: bond
        state: up
        ipv4:
          enabled: false
        link-aggregation:
          # mode=1 active-backup
          # mode=2 balance-xor
          # mode=4 802.3ad
          # mode=5 balance-tlb
          # mode=6 balance-alb
          mode: 802.3ad
          options:
            miimon: '140'
          port:
            - eno1
            - eno2

Useful Links

https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md

https://medium.com/@tcij1013/how-to-configure-bonded-vlan-interfaces-in-openshift-4-18-0bcc22f71200


Step-By-Step Getting Started with High Availability OpenShift 4.19 for a Homelab

Last post looked at getting started with a SNO (Single Node OpenShift) system. Next we will look at a multi-node, or multi-master, OpenShift build. This runs the core etcd service on more than one node, allowing the cluster to survive a single node failure. Some services, like the virtual machine services, need to run on a master as well; having more than one relieves pressure on that system. With SNO, if your master does not start, the entire cluster cannot start. In addition, SNO upgrades will always introduce downtime as the single master reboots.

Master nodes do run more services than a simple worker. If you are running a small cluster with 3 nodes, you may want to decide if the extra overhead on the second and third nodes is worth it, or if you want to run leaner with SNO plus extra workers. In my experience with vanilla OpenShift, masters use about 20GB more RAM than worker nodes with no additional services on them.

I have a 3 node cluster that I was migrating from VMware and wanted to run HA. This allows me to do no downtime upgrades, with the three nodes sharing the control role.

My Setup

I am installing onto 3 HP EliteDesk 800 G5s, each with an Intel 9700 and 96GB of RAM (they can go to 128GB when RAM prices aren't insane). I have a dual 10gb/s NIC in each for networking, since I will be running Ceph. This is the same Homelab cluster I have had for a bit. These machines aren't too expensive; they have 8 cores each, can go to 128GB of RAM, and have several PCIe and NVMe slots. I have used this guide to install OpenShift 4.17-4.20.

Installation Steps for HA OpenShift

Any line starting with $ is a terminal command to use. The whole process takes about an hour: 30 minutes or so to collect binaries and prep your config files, a minute or two to create the ISO, then 30 minutes of the cluster sitting there and installing.

One important thing to say up front to those who have not used OpenShift or Kubernetes before: there is one IP that all the applications share. The web server looks at the incoming request, specifically WHICH DNS NAME YOU CONNECTED TO, and routes your traffic that way. You can have 100% of the setup right and still get "Application is not available" when you browse to the IP while trying to access the console. This means the system is working! You just need to connect via the correct DNS name, as the sketch below illustrates.
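
A quick illustration from any client machine (the hostname and ingress VIP are the examples used later in this guide; --resolve fakes the DNS entry for a single request):

$ curl -k https://192.168.4.7
(returns the "Application is not available" catch-all page)
$ curl -k --resolve console-openshift-console.apps.cluster1.example.com:443:192.168.4.7 https://console-openshift-console.apps.cluster1.example.com
(returns the console, because the request now carries a hostname the router knows)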

  1. Prerequisites: Start by going to the same place as in the original post to get a pull secret and the binaries you will need for the install, including openshift-install and oc.
  2. I am on Fedora 42 and needed to run sudo dnf install nmstate. This is required to transform the configs in agent-config.yaml into the configs that get injected into the installation ISO.
  3. Make a folder, called something like “ha-openshift”, and put all the binaries in there.
  4. Config Files: Before, we had only install-config.yaml; now we will have that AND agent-config.yaml.
  5. Below is an install-config.yaml; I will call out things you will want to change for your setup:
    • apiVersion: v1
      baseDomain: example.com
      compute:
      - architecture: amd64
        hyperthreading: Enabled
        name: worker
        platform: {}
        replicas: 0
      controlPlane:
        architecture: amd64
        hyperthreading: Enabled
        name: master
        platform: {}
        replicas: 3
      metadata:
        name: cluster1
      networking:
        clusterNetwork:
        - cidr: 10.131.0.0/16
          hostPrefix: 23
        machineNetwork:
        - cidr: 192.168.4.0/24
        networkType: OVNKubernetes
        serviceNetwork:
        - 172.30.0.0/16
      platform:
        baremetal:
          apiVIPs:
          - 192.168.4.5
          ingressVIPs:
          - 192.168.4.7
      pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3Blbn==","email":"not-my-real-email@gmail.com"}}}'
      sshKey: ssh-rsa AAAAB
    • The “baseDomain” is the main domain to use; your hosts will be master0.<baseDomain>, and the cluster name will be <metadata.name>.<baseDomain>. Make sure you put in what you want here because you can’t change it later. This is how users will reference the cluster.
    • Under compute (workers) and controlPlane, you put how many worker and master nodes you want. This is a big difference between SNO and HA: we are saying 3 masters instead of 1.
    • metadata.name is the sub name of this exact cluster. You can have multiple clusters at, let's say, “example.com”; setting this will make the cluster apps.cluster1.example.com. (Yes, the DNS names get long with OpenShift.)
    • clusterNetwork and serviceNetwork will be used internally for backend services; only change these if you are worried about the presets conflicting with your IP space.
    • machineNetwork.cidr is the IP space your nodes will live on; set this to your DHCP network. Some of the IPs below need static reservations in your DHCP server, while the worker and master nodes can have general pool DHCP addresses. We are assuming DHCP here; you can statically assign IPs, but it's more work and not something I am going to cover right here.
    • platform.baremetal.apiVIPs is where the API for your cluster will live; this is an additional IP the HA masters will hand back and forth to give the appearance of a single control plane.
    • platform.baremetal.ingressVIPs is another IP that will be handed back and forth, and will be the HTTPS front door for applications.
  6. agent-config.yaml; I will call out things you will want to change:
    • apiVersion: v1alpha1
      kind: AgentConfig
      rendezvousIP: 192.168.4.10
      hosts:
        - hostname: hv1
          role: master
          rootDeviceHints:
            serialNumber: "AA22122369"
          interfaces:
            - name: enp1s0f0
              macAddress: 0c:c4:7b:1e:42:14
            - name: enp1s0f1
              macAddress: 0c:c4:7b:1e:42:15
          networkConfig:
            interfaces:
              - name: bond0.4
                type: vlan
                state: up
                vlan:
                  base-iface: bond0
                  id: 4
                ipv4:
                  enabled: true
                  address:
                    - ip: 192.168.4.10
                      prefix-length: 24
                  dhcp: false
              - name: bond0
                type: bond
                state: up
                mac-address: 0c:c4:7b:1e:42:14
                ipv4:
                  enabled: false
                ipv6:
                  enabled: false
                link-aggregation:
                  mode: 802.3ad
                  options:
                    miimon: "150"
                  port:
                    - enp1s0f0
                    - enp1s0f1
            dns-resolver:
              config:
                server:
                  - 192.168.3.5
            routes:
              config:
                - destination: 0.0.0.0/0
                  next-hop-address: 192.168.4.1
                  next-hop-interface: bond0.4
                  table-id: 254
        - hostname: hv2
          role: master
          rootDeviceHints:
            serialNumber: "AA22628"
          interfaces:
            - name: enp1s0f0
              macAddress: 0c:c4:7b:1f:06:e2
            - name: enp1s0f1
              macAddress: 0c:c4:7b:1f:06:e3
          networkConfig:
            interfaces:
              - name: bond0.4
                type: vlan
                state: up
                vlan:
                  base-iface: bond0
                  id: 4
                ipv4:
                  enabled: true
                  address:
                    - ip: 192.168.4.20
                      prefix-length: 24
                  dhcp: false
              - name: bond0
                type: bond
                state: up
                mac-address: 0c:c4:7b:1f:06:e2
                ipv4:
                  enabled: false
                ipv6:
                  enabled: false
                link-aggregation:
                  mode: 802.3ad
                  options:
                    miimon: "150"
                  port:
                    - enp1s0f0
                    - enp1s0f1
            dns-resolver:
              config:
                server:
                  - 192.168.3.5
            routes:
              config:
                - destination: 0.0.0.0/0
                  next-hop-address: 192.168.4.1
                  next-hop-interface: bond0.4
                  table-id: 254
        - hostname: hv3
          role: master
          rootDeviceHints:
            serialNumber: "203129F9D7"
          interfaces:
            - name: enp1s0f0
              macAddress: 0c:c4:7b:1f:03:c2
            - name: enp1s0f1
              macAddress: 0c:c4:7b:1f:03:c3
          networkConfig:
            interfaces:
              - name: bond0.4
                type: vlan
                state: up
                vlan:
                  base-iface: bond0
                  id: 4
                ipv4:
                  enabled: true
                  address:
                    - ip: 192.168.4.30
                      prefix-length: 24
                  dhcp: false
              - name: bond0
                type: bond
                state: up
                mac-address: 0c:c4:7b:1f:03:c2
                ipv4:
                  enabled: false
                ipv6:
                  enabled: false
                link-aggregation:
                  mode: 802.3ad
                  options:
                    miimon: "150"
                  port:
                    - enp1s0f0
                    - enp1s0f1
            dns-resolver:
              config:
                server:
                  - 192.168.3.5
            routes:
              config:
                - destination: 0.0.0.0/0
                  next-hop-address: 192.168.4.1
                  next-hop-interface: bond0.4
                  table-id: 254
    • rendezvousIP is the IP of the node in charge of the setup. You pick one of the masters; it will wait for all the other masters/workers to be online, check they are ready, install them, then install itself.
    • The rest of the config is a three-times-repeated (one per host) setup of each host. Per host, you will want to change: the hostname, the rootDeviceHints serial number (so the installer picks the right disk), the interface names and MAC addresses, and the static IP on the bond0.4 sub-interface.
  7. DNS Entries: Having created those two files, you know what your DNS should be. It's time to go into your location's DNS servers and enter addresses just like in the original post. These entries can be made at any time before you start the installation. In the end you should have 1 IP for ingress, 1 for the API, then one per node.
    • api.cluster1.example.com -> apiVIPs, in my config 192.168.4.5
    • api-int.cluster1.example.com -> apiVIPs, in my config 192.168.4.5
    • *.apps.cluster1.example.com -> ingressVIPs, in my config 192.168.4.7
    • master0.cluster1.example.com -> node1 IP, in my config hv1, so I put 192.168.4.10
    • master1.cluster1.example.com -> node2 IP, in my config hv2, so I put 192.168.4.20
    • master2.cluster1.example.com -> node3 IP, in my config hv3, so I put 192.168.4.30
  8. Image Creation:
  9. $ mkdir ocp
  10. $ cp *.yaml ocp
  11. $ ./openshift-install --dir ./ocp/ agent create image
  12. This will create an ocp/agent.x86_64.iso
  13. Installation: Boot that ISO on all servers. The image will use the hardware you specified in agent-config.yaml and DNS lookups to identify each node. Make sure the systems' NTP is working and their time looks correct, then check that each node can reach the following hosts (see the curl sketch after this list):
    • registry.redhat.io 
      quay.io 
      cdn01.quay.io 
      api.openshift.com 
      access.redhat.com
  14. The stack should now install. The main server will show a screen with the state of the other masters, and when they are all ready, it will proceed with the install. This can easily take 30 minutes, and the screen on the rendezvous server can be slow to update.
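
A minimal sketch of that reachability check, run on each node over SSH (hostnames from the list above):

$ for h in registry.redhat.io quay.io cdn01.quay.io api.openshift.com access.redhat.com; do curl -sSo /dev/null -w "%{http_code} $h\n" "https://$h" || echo "FAIL $h"; done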

With any luck, all the nodes will reboot and you will have a running stack you can access at your console URL; here that would be console-openshift-console.apps.cluster1.example.com. Each node should show a normal Linux boot sequence, then a login prompt with that node's name and IP address(es). In this learning experience, feel free to restart the installation; the system will wipe the machines again.

In the ha-openshift folder, under the ocp subfolder, there will be an auth folder. That holds the kubeadmin password and kubeconfig files to authenticate to the cluster. The kubeadmin password can be used to log in to oauth at console-openshift-console.apps.cluster1.example.com. The kubeconfig file can be used with the oc command downloaded from Redhat; running $ ./oc --kubeconfig ./ocp/auth/kubeconfig get nodes will show the nodes and their status from your installation machine.

Properly installed cluster example: 
~/homelab_openshift $ ./oc --kubeconfig ./ocp/auth/kubeconfig get nodes
NAME   STATUS   ROLES                         AGE   VERSION
hv1    Ready    control-plane,master,worker   44d   v1.32.9
hv2    Ready    control-plane,master,worker   44d   v1.32.9
hv3    Ready    control-plane,master,worker   44d   v1.32.9

This is an example of a successfully upgraded cluster, running the standard OpenShift oc get nodes command. Note: the version shown is the Kubernetes version being run, not the OpenShift version.

I will continue this series with posts about Networking, Storage, and VM setup for OpenShift.

Troubleshooting

The install process for OpenShift has a big learning curve. You can make it a bit easier by using Redhat's web installer, but that puts some requirements on the system that a Homelab usually can't hit; the agent-based installer bypasses those checks. Once you get your configs dialed in, I have found it easy to reinstall a stack, but getting the configs right the first few times is tough. The installer also does not do a ton to make it easier on you. If something goes wrong, the biggest indicators I have found are: memory usage when SSHed into the installer, the journalctl logs in the installer, and, about 8-10 minutes into a good install, the DVD image starting to read a lot of data, with constant activity on the indicator for a few minutes (that is CoreOS being written to the disk).

Random things to check in a failing install:

  • SSH into a node using the SSH key in the install-config.yaml, run $ sudo journalctl and scroll to the bottom to see what’s going on, or just run $ sudo journalctl -f.
    • You may see something like:
      • “failing to pull image”: It can’t hit Redhat, or your pull secret expired
      • “ip-10-123-123-132.cluster.local node not recognized”: DNS entries need to be updated
  • If the system successfully reboots after an install but you are not seeing the console start, SSH into a node using the SSH key in the install-config.yaml and run $ top. If your RAM usage is about:
    • 1GB, Kubernetes is failing to start, this could be a DNS or image download issue.
    • around 8GB, the core systems are attempting to come online, but something is stopping them, such as an issue with the api or apps DNS names.
    • 12-16+GB of RAM used, the system should be online.
  • Worth repeating for those who haven't used OpenShift before: internal routing is done via the DNS name in your request. If you go to the ingress VIP via its IP, you will get "Application is not available". This is good! Everything is up; you just need to navigate to the correct URL.

Footnotes

Helpful examples: https://gist.github.com/thikade/9210874f322e72fb9d7096851d509e35

Step-By-Step Getting started with Single Node OpenShift (SNO) for a Homelab

Preface 

I will explain why OpenShift after the tutorial, for those interested. I have some information for those completely new to OpenShift and Kubernetes (shorthand "K8s"); feel free to jump to "Installation Steps for Single Node OpenShift" for the steps. This guide walks you through a Single Node OpenShift installation, and should take about 1-2 hours to have a basic system up and running.

In later posts I will go over networking, storage, and the rest of the parts you need to set up. I spoke to some of their engineers, and they were confused when I said this system is not easy to install and that they need an easy installation disc like VMware or Microsoft have.

It is worth noting at this point that OKD exists. OKD is the upstream (well, moving upstream), open-source version of OpenShift. You are more bleeding edge, but you get MOST of the stack without any licensing; almost like CentOS was to Redhat Enterprise Linux, except more upstream than in line. There are areas where that is not true, and other hurdles to using it, but I am going to make another post about that.

Single Node OpenShift vs High Availability

There are two main ways to run OpenShift. The first is SNO, Single Node OpenShift: there is no high availability, and everything runs on 1 master node, which is also your worker node. You CAN attach more worker servers to a SNO system, but if that main system goes down, you lose control of the cluster. The other mode is HA, where you have at least 3 nodes in your control plane. For production you would usually want HA, and I will have an article about that in the future; for now I will just install SNO.

Big Changes to Keep in Mind From VMware

A quick note to all the administrators coming from VMware or other solutions: OpenShift runs on top of CoreOS, an immutable OS based on Redhat and ostree. The way OpenShift finds out which config to apply to your node is via DHCP and DNS. These are HARD REQUIREMENTS for your environment. The installation will fail, and you will have endless problems, if you do not have DHCP + DNS set up correctly; trust me, I have been there.

K8s Intro 101

For those who haven't used Kubernetes before (me, a few weeks ago), here are some quick things to learn. A cluster has "master" nodes and "worker" nodes: masters orchestrate, workers run pods. Master nodes can also be worker nodes.

OpenShift by default cannot run VMs. We will install the Virtualization Operator (operators are like plugins), which gives us the bits we need to run virtualization. OpenShift has the OpenShift Virtualization Operator; OKD has KubeVirt. The OpenShift Virtualization Operator IS KubeVirt with a little polish on it, supported by Redhat.

Homelab SNO Installation 

OpenShift is built to expect a minimum of 2 disks. One will hold the core OS and the containers that you want to run; the other will be storage for VMs and container data. By default the installer does not support partitioning the disk, forcing you to have 2 disks. I wrote a script that injects partitioning data into the SNO configuration; the current SNO configuration does not seem to have another easy way to add this. The script, Openshift-Scripts/add_parition_rule.sh at main · daberkow/Openshift-Scripts, needs to be run right after "openshift-install" in Step 18. Run it with "$ ./add_parition_rule.sh ./ocp/bootstrap-in-place-for-live-iso.ign ./ocp/bootstrap-in-place-for-live-iso-edited.ign", then use "./ocp/bootstrap-in-place-for-live-iso-edited.ign" for Step 20.

I am running on an HP ProDesk 600 G5 Mini with an Intel 9500T, 64GB of RAM, and a 1TB NVMe drive. You need any computer you can install an OS onto, with at least 100GB of storage and probably 32GB of RAM. Redhat CoreOS is a lot more accepting of random hardware than VMware ESXi is.

Installation Steps for Single Node OpenShift

OpenShift has several ways to do an installation: you can use their website and the Assisted Installer, or create an ISO with all the details baked in. This time we will go over creating a custom ISO with an embedded ignition file.

The following steps are for a Mac or Linux computer. The main commands you will use to interact with your cluster are `kubectl` and `oc`; `oc` is the OpenShift client and a superset of the features in the standard `kubectl` command. Those tools have Windows builds; the `openshift-install` binary does not, so we can't install with just Windows. You can try WSL for the install, but it always gave me issues. The Linux system needs to be RHEL 8+/Fedora/Rocky 8+ or Ubuntu 20.10+ because of the requirement for Podman.

As mentioned, DHCP + DNS are very important for OpenShift. We need to plan our cluster DOMAIN and CLUSTER NAME. For this I will use "cluster1" as the cluster and "example.com" as the domain. Our example node IP will be 192.168.2.10. When I put a $ at the start of a line, that is a terminal command.

  1. First, we will set up DNS, which is a big requirement for OpenShift; to do that you need a static IP address. Give the system a reservation or a static IP address for your environment.
  2. Now go and make the following addresses point to that IP; because we are on a single node, these can all point to one IP. Note this is for SNO; for larger clusters you need different hosts and VIPs for these IPs.
    1. api.cluster1.example.com -> 192.168.2.10
    2. api-int.cluster1.example.com -> 192.168.2.10
    3. *.apps.cluster1.example.com -> 192.168.2.10
    4. The two api addresses are used for K8s API calls; *.apps is a wildcard under which all the apps within the cluster are accessed. These applications use the hostname of the web request to figure out where the traffic should go, thus everything has to be done via DNS name and not IP.
    5. Note: The wildcard for the last entry is needed for some services to work; you can add entries individually, but it becomes a lot of work. Wildcards cannot be used in a hosts file, which means you do need proper DNS. There is a footnote listing all the DNS entries you may need if you want to run out of a hosts file.
  3. Go to Download Red Hat Openshift | Red Hat Developer
  4. Sign up for a Redhat Developer account and click “Deploy in your datacenter”. 
  5. Click “Run Agent-based Installer locally”. 
  6. Download the OpenShift installer, your “pull secret”, and a command line tool.
  7. Open a terminal and make a “sno” folder wherever you want. 
  8. Install Podman on your platform; if that's Windows, that means within WSL2, not on the Windows host.
  9. Copy/extract the openshift-install, oc, and kubectl commands to that folder.
  10. $ export OCP_VERSION=latest-4.19
  11. $ export ARCH=x86_64
  12. $ export ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep $ARCH | grep iso | cut -d\" -f4)
  13. $ curl -L $ISO_URL -o rhcos-live-fresh.iso
    • I used “rhcos-live-fresh.iso” for the clean ISO, then copied it every time I needed to start over, I found this easier than redownloading. 
  14. $ cp rhcos-live-fresh.iso rhcos-live.iso 
  15. Create a text file called “install-config.yaml”, copy the following and edit for your setup: 

    • apiVersion: v1
      baseDomain: example.com
      compute:
      - name: worker
        replicas: 0
      controlPlane:
        name: master
        replicas: 1
      metadata:
        name: openshift
      networking:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        machineNetwork:
        - cidr: 192.168.2.0/24
        networkType: OVNKubernetes
        serviceNetwork:
        - 172.30.0.0/16
      platform:
        none: {}
      bootstrapInPlace:
        installationDisk: /dev/nvme0n1
      pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3BllBFa…0M4NjNSaEo0RmNXZw==","email":"danisawesome@example.com"}}}'
      sshKey: |
        ssh-rsa AAAAB3QQe/… /h3Pss= dan@home

Note: I have removed most of my pull secret and SSH key.

  • baseDomain: This is your main domain 
  • clusterNetwork: The internal network used by the system, DO NOT TOUCH 
  • machineNetwork: Network your system will have a NIC on, change this to your network 
  • serviceNetwork: Another internally used network, DO NOT TOUCH 
  • installationDisk: The disk to install to
  • pullSecret: Insert the secret downloaded from Redhat in Step 6
  • sshKey: The public key of your local account's SSH key; this will be used for auth later
  16. $ mkdir ocp
  17. $ cp install-config.yaml ocp
  18. $ ./openshift-install --dir=ocp create single-node-ignition-config
    • Optional, to operate off a single disk:
    • $ ./add_parition_rule.sh ./ocp/bootstrap-in-place-for-live-iso.ign ./ocp/bootstrap-in-place-for-live-iso-edited.ign
  19. $ alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data -w /data quay.io/coreos/coreos-installer:release'
  20. $ coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso
  21. Boot rhcos-live.iso on your computer; the install will take 20 or more minutes, then the system should reboot
  22. If everything works, the system will reboot, then after 10 or so minutes of the system loading pods, https://console-openshift-console.apps.cluster1.example.com/ should load from your client computer. The login will be stored in your sno/ocp/auth folder.
Openshift login screen
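
If you would rather watch progress from a terminal than refresh the console URL, the installer can wait on the cluster for you (run from the sno folder; exact output wording varies by version):

$ ./openshift-install --dir=ocp wait-for install-complete
(blocks until the cluster is up, then prints the console URL and the kubeadmin password; the same credentials live in ocp/auth/)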

Many caveats here: if your install fails to progress, you can SSH in with the SSH key you set in the install-config.yaml file; that is the only way to get in. Check journalctl to see if there are issues. It's probably DNS. You can put the host names above into the hosts file of the installer, and then, after the reboot, into the host itself, to boot without needing DNS.

You CAN build an x86_64 image using an ARM Mac. You can also create an ARM OpenShift installer to run on a VM on a Mac. The steps are very similar for an ARM Mac, except the aarch64 binaries live at mirror.openshift.com/pub/openshift-v4/aarch64/clients/ocp/latest-4.18/ and you use "export ARCH=aarch64". Be careful on an ARM Mac to use the x86_64 installer when targeting an x86_64 server, and an aarch64 installer for ARM VMs. Otherwise you will get "ERROR: release image arch amd64 does not match host arch arm64" and have to go to "ERROR: release image arch amd64 does not match host arch arm64 – Simon Krenger" to find out why.

Hopefully this helps someone. I think OpenShift and OKD could be helpful for a lot of people looking for a hypervisor, but the docs and getting-started materials are hard to wrap your head around. I plan to make a series of posts to help people get going. Feel free to drop a comment if this helps, or if something isn't clear.

DNS SNO Troubles

This section is optional, for those who would like to run a stack without external DNS. It can lead to the stack being odd; if you don't need this, you may not want to do it. All of this was tested on 4.19.17.

The issue you run into here is the way DNS works in OpenShift: pods are given CoreDNS entries, plus a copy of your host's resolv.conf. If you want to start an OpenShift system completely air-gapped, with no external DNS, you still need the entries we stated in other articles, mainly: api.<cluster>.<domain>, api-int.<cluster>.<domain>, *.apps.<cluster>.<domain>, and master0.<cluster>.<domain>. Wildcard lookups cannot live in a hosts file. Luckily, OpenShift ships with dnsmasq installed on all the hosts.

Our flow for DNS will be: the host itself runs dnsmasq and points to itself for DNS. It has to point to itself by its public IP, because that resolv.conf file will be passed onto pods; if you put 127.0.0.1 there, pods will inherit it and fail to reach DNS. dnsmasq then points to your external DNS servers. That way, all lookups hit dnsmasq first and can be filtered to the outside, roughly like the sketch below.
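
A minimal sketch of the resulting dnsmasq config (the file name, IPs, and cluster/domain names are example assumptions; the script below generates the real thing):

# /etc/dnsmasq.d/sno.conf
address=/apps.cluster1.example.com/192.168.1.10        # covers the *.apps wildcard
host-record=api.cluster1.example.com,192.168.1.10
host-record=api-int.cluster1.example.com,192.168.1.10
host-record=master0.cluster1.example.com,192.168.1.10
server=192.168.3.5                                     # forward everything else upstream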

When installing OpenShift there are two environments: the install environment itself, then the OS after reboot. We need these entries to be in both.

I have created a script; it is used like the partition script from earlier in this post. To use it, create your ignition files with openshift-install, then run $ ./add_dns_settings.sh ./ocp/bootstrap-in-place-for-live-iso.ign ./ocp/bootstrap-in-place-for-live-iso-edited.ign and install with that edited ignition file.

This lets you set all the settings you need, plus a static IP for the host that will run the single node. When installing this way, you will need to add some hosts file entries to your client, because the DNS entries don't exist outside the cluster, and DNS names are how OpenShift routes traffic internally. Adding the line below to your client's hosts file, with cluster and domain changed, should be enough to connect:

192.168.1.10 console-openshift-console.apps.<cluster>.<domain> oauth-openshift.apps.<cluster>.<domain> 

Backstory About Why OpenShift

After all the recent price hikes by Broadcom for VMware, my work, like many, has been looking for alternatives. Not only do high hypervisor costs make existing clusters expensive, they make it hard to grow clusters. We already run a lot of Kubernetes and wanted a new system we could slot in, allowing K8s and VMs to run side by side (without paying the thousands and thousands per node that Broadcom wants). I was tasked with looking at alternatives; we were already planning on going with OpenShift, as our dev team had started using it, but it doesn't hurt to see what else is out there. The requirements were: it had to be on-prem, be able to segment data by VLAN, run VMs with no outside connectivity (more on that later), and have shared storage. There were more, but those were the general guidelines. For testing, the first thing I installed was Single Node OpenShift (SNO), and that's what I will start going over here. It does the job decently well, but the ramp-up is rough. Gone are the nice VMware installers; welcome to writing YAML files.

The other big players were systems like Hyper-V, Nutanix, Proxmox, Xen Orchestra, and KVM. We are not a big Microsoft shop and a lot of our devs had a bad experience with Hyper-V, so we scratched that one; Hyper-V also doesn't seem all that loved by Microsoft for on-prem, which turned us away. I investigated Nutanix, but they have a specific group of hardware they want to work with, and a very specific disk configuration where each server needs 3+ SSDs to run the base install. I did not want to deal with that, so we moved on before even piloting it. Proxmox is a community favorite, but we didn't want to use it for production networks and thought getting it past security teams at our customers would be difficult. Xen Orchestra is getting better, but in testing it had some rough spots, and getting the cluster manager going gave some difficulty. This left raw KVM, and that was a non-starter because we want users to easily be able to manage the cluster.

Without finding a great alternative, and with the company already wanting to push forward on Redhat OpenShift, I started diving into what it would take to get VMs where we needed them to be. What I generally found is that there is a working solution here, one Redhat is quickly iterating on. It is NOT 1:1 with VMware. You are running VMs within pods in a K8s cluster. That means you get the flexibility of K8s and the ability to set things up how you want, along with the troubles and difficulties of it. Like Linux, the great thing about K8s is there are 1000 ways to do anything; that is also its greatest weakness.

Footnotes / Reading Materials 

DNS Entries needed for normal use:

Chapter 2. Installing OpenShift on a single node | Installing on a single node | OpenShift Container Platform | 4.18 | Red Hat Documentation 

SNO on OCP-V – OpenShift Examples 

Red Hat OpenShift Single Node – Assisted Installer – vMattroman 

Fedora CoreOS VMware Install and Basic Ignition File Example – Virtualization Howto 

https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_bare_metal/user-provisioned-infrastructure#installation-user-infra-machines-advanced_vardisk_installing-restricted-networks-bare-metal

butane/docs/config-openshift-v4_18.md at main · coreos/butane 

Some useful information for networking: Deploying Single Node Openshift (SNO) on Bare Metal — Detailed Cookbook | by Reishit Kosef | Medium 

Offline installs 

https://hackmd.io/@johnsimcall/Sk1gG5G6o

Ruckus H550 Odd Recovery, & Wifi Upgrades

I have a bad habit of buying networking gear when my work/life gets hectic. In a recent time of chaos I decided I should upgrade a Ruckus H510 AP I have to an H550. I saw one on eBay, gave an offer, and it was accepted! When the unit came, it still had the config of a news company, so I had to factory reset it as mentioned in previous articles. The odd thing is it would not come up for me to access. I could see the "Configure.Me" Wifi network, but when I joined it, nothing. I tried going to the web page and got nothing; I set the default IP and couldn't contact it via wired either. I then took out Wireshark and started looking at what happened when I joined its Wifi.

It was looking for assets from 10.154.231.125? (Later I would find others mentioning this: https://community.ruckuswireless.com/t5/Apps-and-SPoT/Wrong-IP-on-mobile-device-using-unleashed-Configure-Me-xxxxxx/m-p/25722.) The AP was handing out other IP information and failing to serve a web page on the wired port. I set my laptop's IP to 10.154.231.124/24, and I was able to connect and flash the Unleashed firmware like normal. I hope this helps someone out there.

One other thing I didn't realize… The H550 is taller than the H510… and I had a shelf right above the spot where it was mounted. So I did the right thing and mounted the new one upside down, so the taller part goes down and doesn't hit the shelf.

While I was getting more Wifi 6 goodness into my home (I have been running mostly Wifi 5, 802.11ac Wave 2), I thought I would look for deals. I saw someone selling an R550 for parts. The listing said that when turned on it had a red light. I know these access points can take a good 3-5 minutes to start, and while they do, they have a red light on… What are the chances that this access point just has a bad firmware image, or that nothing is wrong with it and this person just didn't wait…

I offered $50 for the "broken" access point that usually goes for $250, and they accepted! I waited for the device to arrive, and later that week I plugged it in, powered it up, and… it booted fine! I flashed it over to Unleashed, and suddenly I had another great Wifi 6 access point.

I have now moved almost all my access points to Wifi 6, which means I can go above firmware 200.15 (the last release for the Wifi 5 systems), but I still haven't, since some places still recommend staying there. And I am pondering setting up the old H510 as a small access point and Ethernet port at my workbench.

Mellanox SX6012 Homelab Upgrade

For the last few years, I have been using a Mikrotik CRS309-1G-8S+, a small, low power, 8 port 10gb/s switch. It worked well for me; one of the main things I liked about it was the low power usage. There are always discussions on different homelab forums about which switch to use. Some people like to use Arista or Cisco gear. I enjoy that gear and use it at work, but with my small, low power homelab, an Arista switch would triple my power usage (a lot of them idle at 200-300 watts). There are nice features on those switches, but to get them they carry whole small computers as the management plane, and then power-hungry chips for switching.

The time came when I wanted to upgrade past this small Mikrotik switch. 8x10gb/s ports were great for a while, but 1 was the uplink to the home core switch; then, running vSAN, I wanted 2 ports per host, and I have 4 hosts. While not urgent, I started to search for a bigger switch. Mikrotik has some bigger offerings, also low power, but a lot of them were $400-$600+ to get to 12+ 10gb/s ports.

One place I like to browse periodically is the ServeTheHome forums, where homelab users talk about many different homelab things, including networking. Many users seem to be interested in the Mellanox SX6012 or SX6036. These switches are discontinued from Mellanox (now Nvidia), making them fairly inexpensive on eBay.

The SX6012 is a 12 port, 40gb/s switch, capable of using 40gb breakout cables. That means each 40gb/s port can become 4x10gb/s ports. The switch is technically an Infiniband switch, which can take an optional Ethernet license. There are some switches sold with the license, along with guides online to enable that part of the switch. Apparently, there are also people on eBay who can “assist you” in licensing the switch for $50. Since the switch is no longer supported, I think a lot of the eBay buyers are homelab people going through the guided process of configuring the switch with a license. The switch was reported to be “not that loud”, which is true after some fan setting tweaks; it also idles at 30 watts thanks to a low power PowerPC chip. This made it a go-to for me. Plenty of ports to upgrade into over time, and a low power budget.

In looking at the switch, one thing that was heavily mentioned was the different editions of it. There are 12 and 36 port versions, along with Mellanox-branded vs other OEM sub-branded versions. For example, a Dell/EMC branded switch will come with different features than an HPe switch, or one branded by Mellanox itself. I wanted the 12-port version because (in theory, according to online) it has slightly lower power draw. The 36-port version is supposed to be a bit quieter (having more room to cool), but I also saw some firmware hacks to lower the fan noise. I saw one SX6012 unit which had the black front bezel (apparently that makes it the Mellanox brand) sitting on eBay with an expensive Buy It Now, or Make Offer. While they still go for around $250, I gave an offer a good amount lower, and they took it! Score!

Flash forward a few days; I got the switch from the seller, powered it up, and was met with a dreaded bootloader… The OS had been wiped from the switch completely… along with everything on the flash. After a brief moment of dread, I thought about finding one of the guides online for managing these switches. Those guides are not just about enabling features like Ethernet; they also show you how to load different firmware revisions and where to currently find them. The Mellanox firmware itself was behind a support portal which got folded into Nvidia. However, these switches were also sold under Dell/EMC/HP brands, and some of those brands still provide the firmware packages. There are community scripts which can take in an HP firmware package and convert it to a Mellanox or other brand firmware package.
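For anyone else staring at the same empty bootloader: I won’t reproduce the whole recovery guide here, but the first step is standard U-Boot-style network loading over TFTP. The shape of it looks something like the below; this is a rough sketch, the prompt, addresses, and image name are all placeholders, and the exact flash/install commands vary by guide and firmware version:

U-Boot> setenv ipaddr 192.168.1.10         (temporary IP for the switch; placeholder)
U-Boot> setenv serverip 192.168.1.20       (your TFTP server; placeholder)
U-Boot> tftpboot 0x2000000 recovery.img    (pull an image into RAM; name is a placeholder)
(then boot/flash it per whichever community guide you are following)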

Mellanox port mgmt

After a slow TFTP image load, I got the switch online. This allowed me to get a GUI and more easily load the follow-up firmware packages. After many reboots (which can be heard throughout the house with the fans ramping to 100%), and a few upgrades later, I had the switch in a good place on the last available firmware for it. For the last several months the switch has quietly been working well for me. I have one QSFP to SFP+ adapter for the 10gb coming in from my core switch. Then I have 2 QSFP -> SFP+ breakout cables going to the small cluster I am running. This means I am running on this one switch, without high availability, right now. If I want to reboot or patch the switch, I need to shut down my VMware cluster. One benefit to an out-of-support switch without firmware updates… you have no firmware updates to do!

The CLI is similar to Cisco’s; like many other switch vendors, Mellanox follows that near-universal CLI style. The hardest part of getting the switch going for me was figuring out the command to set a QSFP port to breakout mode. Once that is done, it creates 4 virtual sub-ports which you configure with vlans and such. The UI showed the ports as single ports, even with the breakout cable attached, until I went into the CLI and set breakout mode.
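For reference, here is roughly what that looks like in the MLNX-OS CLI. This is a sketch from memory and forum guides, so treat the exact syntax as approximate; port 1/12 and vlan 10 are just examples:

switch (config) # interface ethernet 1/12 module-type qsfp-split-4 force
switch (config) # interface ethernet 1/12/1 switchport access vlan 10

After the split, the single 40gb/s port shows up as four sub-ports (1/12/1 through 1/12/4), and each one is configured like any other port.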

With this switch working well, I moved the old 8x10gb/s Mikrotik switch over to be my new 10gb core switch. The current flow is Internet in -> Sophos XG Firewall on a Dell Optiplex 5050 -> Ruckus ICX7150 POE switch for Wifi and a few wired ports -> 8 port 10gb/s Mikrotik -> Mellanox SX6012. The house can run with just the firewall and the Ruckus switch (which powers all the Wifi APs). The Mikrotik is near the router, and also feeds a Cat5e run (19 meters) already in the wall that goes up to the attic and gives 10gb/s to a NAS and AP up there. (I know 10gb RJ45 is supposed to be Cat6; this line was run before I was here, tested fine, and has been working well the whole time.) Then the Mikrotik switch has an SFP+ doing a longer fiber run to where my little homelab rack is. The whole system is a glorified “router on a stick”, with the firewall doing all the routing between vlans.

This setup has been working well, has plenty of room for expansion, and achieved my goal of being fast with relatively low power use. I have the management for the switches on a disconnected vlan that only certain authenticated machines can connect to. This makes me feel better about it not getting security updates.

Mellanox at 29w

Currently I have 4 small Dell Optiplex systems as my homelab cluster, along with the Mellanox switch. All together the rack idles around 130 watts. Together the systems have about 20 physical cores (not counting hyper-threaded logical cores) and 288GB of RAM. It can certainly spike up if I start a bunch of heavy workloads, but I continue to find it very impressive.

Ruckus Unleashed ICX Management Stuck at “Connecting”

I have a mostly Ruckus and Mikrotik network stack at home. For the longest time, Ruckus Unleashed has had the ability to manage ICX switches; but every time I went to add my switch to the Unleashed interface it would hang at “Connecting…”. After a bunch of troubleshooting, I figured out why it was not working.

Unleashed likes to automatically adopt blank switches; if your switch is already configured, you may have the same issue. The issue is that Unleashed cannot use an ICX switch with an enable password. I had to run:

SSH@switch(config)# no aaa authentication enable default radius local

Then suddenly, if I ran “# show log”, I could see Unleashed adding settings to the switch. Unleashed seems to use SSH as the main mechanism for setup, then adds a read-only SNMP string to the switch. Hope this helps someone!
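Putting the whole fix together, it was one short SSH session on the switch (FastIron syntax; “switch” is just my hostname):

SSH@switch# configure terminal
SSH@switch(config)# no aaa authentication enable default radius local
SSH@switch(config)# exit
SSH@switch# write memory
SSH@switch# show log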

Homelab HCI Storage Adventures

I have written before about storage for my homelab. I have a NAS; and then for the VMware cluster, I had USB 3.0 attached 3.5″ hard drive bays. The hard drive bays shared a single USB 3.0 5 gbps connection. And since storage has come down in price, these held SATA SSDs. Having (at the time) 4 SATA SSDs sharing a single USB 3.0 connection was not ideal; not only because of the single pipe, but because of the overhead of USB. When the vSAN on these disks hit anything more than an idle IOPS number, latency would go through the roof. That was the main item I was attempting to correct.

Having used “disk shelves” before at work, I thought I would try to make a compact version for my homelab. I figured all I need is a way to connect the SSDs over external SAS, an eSAS HBA, and some power. This project ended up going on for far too long and ended with a much simpler solution.

I started where any good project does: finding the general parts I would use for the project. I came across this adapter. It allows you to put six 2.5″ drives into a single 5.25″ DVD bay. Each drive gets its own SATA connection, and it even has fans on the back to cool them. I started designing the case around that. Then I found this little adapter to go from 2 internal SAS cables to external SAS. My thought was that externally I would have eSAS into my “server”, then convert each SAS connection into 4 SATA connections inside.

Now I needed to start creating a case to 3D print. Every other eSAS enclosure I found online was HUGE; I wanted something small that could fit the power supply and the connections I needed. This went through many… many… iterations.

Some of the prints didn’t come out great; I spent some time getting the printer dialed in.

This was a bad path I went down; I was hoping to cut down on plastic and thought I could have levels standing on columns. This turned into much more of a mess (and hard to get to stay in the right position) than just waiting for the big prints to finish.

Next I had to figure out power. Each drive I had can pull up to 1.5 amps at 5 volts, so six drives means up to 9 amps on the 5 volt rail (45 watts). That is a good amount of power on one rail. I thought I could use a standard PC power supply, with a cable to turn it on with a switch, but those PSUs were big and made the design bulkier. The next idea was to just use a wall power supply: a 5 volt one with enough amps. Also, I planned to only use the 4 drives per unit I had, so at least at first, I could cut the amp requirement down.

Now I ran into a new problem. The fans on the drive holder ran off the 12 volt line of the SATA power cable. The drives only needed 5 volts, but the fans needed 12. I got a voltage converter and wired it in, and added a switch so the whole unit could be turned off and on.

Finally, it was time to add the HBA (not a RAID controller) to the Dell Optiplex and bring the drives up. This is where everything fell apart. The Optiplexs REALLY didn’t want to start with the HBA controllers. I ordered MANY off eBay to try. Older gen, newer gen, different chipsets… Sometimes they would see SOME of the drives on start-up; sometimes if I bounced the enclosure they would see the drives; but there was no consistency. One of the HBAs wouldn’t allow the desktop to boot at all when the card was in. Someone online mentioned that if you put tape over one of the pins on the front of the PCI Express connector (reportedly an SMBus pin), the PC can’t read the bus ID it doesn’t understand, and this allows it to boot. I couldn’t believe it when that worked! It still had issues seeing the drives, but interesting nonetheless.

After all of this, I decided it was too much hassle and I wanted something more reliable. I did what I should have done from the start… used the ports the systems already had. I went from 4 SATA SSDs to 3 SATA SSDs and 2 NVMe drives: one in the onboard NVMe slot, and another in the PCIe x4 slot that I had free. I tried a PCIe card that allows 4 NVMe drives via PCIe bifurcation, but that is a newer feature which only a few systems support, and these Optiplexs don’t, in either PCIe port. I also want to flag: even though the chipset in these says it supports 128GB of RAM, and 32GB DIMMs work fine in them, the max on the Optiplex 5050 and 5060 is 64GB. I also added a small Noctua fan to the front of the case for additional airflow.

In the end, each of the VMware nodes has 3 roughly 1TB SATA SSDs, plus 2 NVMe drives: one for vSAN cache, one for normal storage. I am booting the nodes off a USB drive in the back; not the most supported config, but it has been working well for me. The machines have a dual 10gb NIC in the x16 slot, and the secondary NVMe in the x4 slot.

VMware EAM Failing and Not Allowing Upgrades

I was attempting to upgrade my homelab, which I pushed to VMware vSphere 8.0 because of… YOLO… and after a recent 8.0.1 update I was no longer able to upgrade individual ESXi hosts. I had already updated vCenter to the latest version; now I wanted to upgrade the hosts. That is my normal course of action: vCenter, then hosts, as recommended. When I went to upgrade the hosts I was told:

"Health check fails to retrieve data about service 'vSphere ESX Agent Manager' on '3 Node And Friends'. Verify that the service 'vSphere ESX Agent Manager' is running and try again."

This had me SSHing into the appliance and looking at logs. (To quickly mention: EAM = “vSphere ESX Agent Manager”.) Here are some of the fun errors I was getting in “/var/vmware/eam/eam.log”:

  • “Re-login to vCenter because method: currentTime of managed object: null::ServiceInstance:ServiceInstance failed due to expired client session: null”
  • “failed to authenticate extension com.vmware.vim.eam to vCenter”

Some older guides mentioned unregistering EAM and then re-registering it. This broke my install even further, and I ended up reverting to a snapshot. (Always snapshot before upgrades…) When I reverted back to before the vCenter upgrade, I realized that EAM had actually been failing before the vCenter upgrade; except now I had EAM back in my extension list, both on https://vcenter/mob/?moid=ExtensionManager and in vCenter, which had been missing after I followed the guide saying to un-register it.

Now that I had the plugin registered again, I found this KB, and this person’s blog, very helpful. I ran the recommended commands:

# Make a working directory for the exported certificate and key
mkdir /certificate

# Export the vpxd-extension certificate and private key from the VECS store
/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store vpxd-extension --alias vpxd-extension --output /certificate/vpxd-extension.crt

/usr/lib/vmware-vmafd/bin/vecs-cli entry getkey --store vpxd-extension --alias vpxd-extension --output /certificate/vpxd-extension.key

# Re-register the EAM extension with vCenter using that certificate
python /usr/lib/vmware-vpx/scripts/updateExtensionCertInVC.py -e com.vmware.vim.eam -c /certificate/vpxd-extension.crt -k /certificate/vpxd-extension.key -s vcenter.my.domain -u Administrator@vsphere.local
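If I remember the procedure right, the KB also has you restart the EAM service afterward so it picks up the re-registered certificate; from the appliance shell that is:

service-control --stop vmware-eam
service-control --start vmware-eam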

And then EAM suddenly showed happy, and the log started showing useful things:

2023-06-06T16:53:37.573Z |  INFO | vim-monitor | ExtensionSessionRenewer.java | 190 | [Retry:Login:com.vmware.vim.eam:f86509907b4cb7c6] Re-login to vCenter because method: currentTime of managed object: null::ServiceInstance:ServiceInstance failed due to expired client session: null
2023-06-06T16:53:37.573Z |  INFO | vim-monitor | OpId.java | 37 | [vim:loginExtensionByCertificate:443bbd7c03dce9c6] created from [Retry:Login:com.vmware.vim.eam:f86509907b4cb7c6]
2023-06-06T16:53:37.947Z |  INFO | vim-async-2 | OpIdLogger.java | 35 | [vim:loginExtensionByCertificate:443bbd7c03dce9c6] Completed.

That’s it! Now I can run updates again! If anyone has the same issue, drop a line in the comments. I hope this isn’t a big new vSphere 8.0 issue. I had upgraded this appliance from 7.0, and perhaps that, or a certificate issue, caused the problem.

Below is some of my eam.log to help people:

2023-06-06T02:20:29.728Z | ERROR | vlsi | DispatcherImpl.java | 468 | Internal server error during dispatch
com.vmware.vim.binding.eam.fault.EamServiceNotInitialized: EAM is still loading from database. Please try again later.
        at com.vmware.eam.vmomi.EAMInitRequestFilter.handleBody(EAMInitRequestFilter.java:57) ~[eam-server.jar:?]
        at com.vmware.vim.vmomi.server.impl.DispatcherImpl$SingleRequestDispatcher.handleBody(DispatcherImpl.java:373) [vlsi-server.jar:?]
        at com.vmware.vim.vmomi.server.impl.DispatcherImpl$SingleRequestDispatcher.dispatch(DispatcherImpl.java:290) [vlsi-server.jar:?]
        at com.vmware.vim.vmomi.server.impl.DispatcherImpl.dispatch(DispatcherImpl.java:246) [vlsi-server.jar:?]
        at com.vmware.vim.vmomi.server.http.impl.CorrelationDispatcherTask.run(CorrelationDispatcherTask.java:58) [vlsi-server.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_362]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_362]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
2023-06-06T02:20:31.769Z |  INFO | vim-monitor | ExtensionSessionRenewer.java | 190 | [Retry:Login:com.vmware.vim.eam:9ae94019eb8cb9a2] Re-login to vCenter because method: currentTime of managed object: null::ServiceInstance:ServiceInstance failed due to expired client session: null
2023-06-06T02:20:31.769Z |  INFO | vim-monitor | OpId.java | 37 | [vim:loginExtensionByCertificate:b63ca4cf0b995a54] created from [Retry:Login:com.vmware.vim.eam:9ae94019eb8cb9a2]
2023-06-06T02:20:34.775Z |  INFO | vim-async-2 | OpIdLogger.java | 43 | [vim:loginExtensionByCertificate:b63ca4cf0b995a54] Failed.
2023-06-06T02:20:34.775Z |  WARN | vim-async-2 | ExtensionSessionRenewer.java | 227 | [Retry:Login:com.vmware.vim.eam:9ae94019eb8cb9a2] Re-login failed, due to:
com.vmware.eam.security.NotAuthenticated: Failed to authenticate extension com.vmware.vim.eam to vCenter.
        at com.vmware.eam.vim.security.impl.SessionManager.convertLoginException(SessionManager.java:329) ~[eam-server.jar:?]
        at com.vmware.eam.vim.security.impl.SessionManager.lambda$loginExtension$4(SessionManager.java:154) ~[eam-server.jar:?]
        at com.vmware.eam.async.remote.Completion.onError(Completion.java:86) [eam-server.jar:?]
        at com.vmware.eam.vmomi.async.FutureAdapter.setException(FutureAdapter.java:81) [eam-server.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$ClientFutureAdapter.setException(MethodInvocationHandlerImpl.java:731) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$RetryingFuture.fail(MethodInvocationHandlerImpl.java:578) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$RetryingFuture$RetryActionImpl.proceed(MethodInvocationHandlerImpl.java:625) [vlsi-client.jar:?]
        at com.vmware.eam.vim.security.impl.ExtensionSessionRenewer.retry(ExtensionSessionRenewer.java:149) [eam-server.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$RetryingFuture.setException(MethodInvocationHandlerImpl.java:541) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.setResponse(ResponseImpl.java:239) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpExchangeBase.parseResponse(HttpExchangeBase.java:286) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpExchange.invokeWithinScope(HttpExchange.java:54) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.TracingScopedRunnable.run(TracingScopedRunnable.java:24) [vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpExchangeBase.run(HttpExchangeBase.java:60) [vlsi-client.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_362]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_362]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
Caused by: com.vmware.vim.binding.vim.fault.InvalidLogin: Cannot complete login due to an incorrect user name or password.
        at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_362]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_362]
        at java.lang.Class.newInstance(Class.java:442) ~[?:1.8.0_362]
        at com.vmware.vim.vmomi.core.types.impl.ComplexTypeImpl.newInstance(ComplexTypeImpl.java:174) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.types.impl.DefaultDataObjectFactory.newDataObject(DefaultDataObjectFactory.java:25) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.ComplexStackContext.<init>(ComplexStackContext.java:30) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl$UnmarshallSoapFaultContext.parse(UnmarshallerImpl.java:167) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl$UnmarshallSoapFaultContext.unmarshall(UnmarshallerImpl.java:105) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl.unmarshalSoapFault(UnmarshallerImpl.java:92) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl.unmarshalSoapFault(UnmarshallerImpl.java:86) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.SoapFaultStackContext.setValue(SoapFaultStackContext.java:41) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseUnmarshaller.processNextElement(ResponseUnmarshaller.java:127) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseUnmarshaller.unmarshal(ResponseUnmarshaller.java:70) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.unmarshalResponse(ResponseImpl.java:284) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.setResponse(ResponseImpl.java:241) ~[vlsi-client.jar:?]
        ... 7 more
2023-06-06T02:20:34.777Z | ERROR | vim-monitor | VcListener.java | 124 | An unexpected error in the changes polling loop
com.vmware.eam.EamRemoteSystemException: Unexpected error communicating with the vCenter server.
        at com.vmware.eam.vim.server.impl.VimRoot.rootOperation(VimRoot.java:106) ~[eam-server.jar:?]
        at com.vmware.eam.vim.server.impl.VimRoot.currentTime(VimRoot.java:78) ~[eam-server.jar:?]
        at com.vmware.eam.vc.VcListener.main(VcListener.java:140) ~[eam-server.jar:?]
        at com.vmware.eam.vc.VcListener.call(VcListener.java:118) [eam-server.jar:?]
        at com.vmware.eam.vc.VcListener.call(VcListener.java:58) [eam-server.jar:?]
        at com.vmware.eam.async.impl.AuditedJob.call(AuditedJob.java:58) [eam-server.jar:?]
        at com.vmware.eam.async.impl.FutureRunnable.run(FutureRunnable.java:55) [eam-server.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_362]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_362]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
Caused by: com.vmware.vim.binding.vim.fault.NotAuthenticated: The session is not authenticated.
        at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_362]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_362]
        at java.lang.Class.newInstance(Class.java:442) ~[?:1.8.0_362]
        at com.vmware.vim.vmomi.core.types.impl.ComplexTypeImpl.newInstance(ComplexTypeImpl.java:174) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.types.impl.DefaultDataObjectFactory.newDataObject(DefaultDataObjectFactory.java:25) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.ComplexStackContext.<init>(ComplexStackContext.java:30) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl$UnmarshallSoapFaultContext.parse(UnmarshallerImpl.java:167) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl$UnmarshallSoapFaultContext.unmarshall(UnmarshallerImpl.java:105) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl.unmarshalSoapFault(UnmarshallerImpl.java:92) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl.unmarshalSoapFault(UnmarshallerImpl.java:86) ~[vlsi-core.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.SoapFaultStackContext.setValue(SoapFaultStackContext.java:41) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseUnmarshaller.processNextElement(ResponseUnmarshaller.java:127) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseUnmarshaller.unmarshal(ResponseUnmarshaller.java:70) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.unmarshalResponse(ResponseImpl.java:284) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.setResponse(ResponseImpl.java:241) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpExchangeBase.parseResponse(HttpExchangeBase.java:286) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpExchange.invokeWithinScope(HttpExchange.java:54) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.TracingScopedRunnable.run(TracingScopedRunnable.java:24) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpExchangeBase.run(HttpExchangeBase.java:60) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpProtocolBindingBase.executeRunnable(HttpProtocolBindingBase.java:229) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.http.impl.HttpProtocolBindingImpl.send(HttpProtocolBindingImpl.java:114) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$CallExecutor.sendCall(MethodInvocationHandlerImpl.java:693) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$CallExecutor.executeCall(MethodInvocationHandlerImpl.java:674) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.completeCall(MethodInvocationHandlerImpl.java:371) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.invokeOperation(MethodInvocationHandlerImpl.java:322) ~[vlsi-client.jar:?]
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.invoke(MethodInvocationHandlerImpl.java:195) ~[vlsi-client.jar:?]
        at com.sun.proxy.$Proxy51.currentTime(Unknown Source) ~[?:?]
        at com.vmware.eam.vim.server.impl.VimRoot.rootOperation(VimRoot.java:101) ~[eam-server.jar:?]
        ... 9 more
2023-06-06T02:20:34.778Z |  INFO | vim-monitor | VcListener.java | 125 | Full stack trace: com.vmware.eam.EamRemoteSystemException: Unexpected error communicating with the vCenter server.
        at com.vmware.eam.vim.server.impl.VimRoot.rootOperation(VimRoot.java:106)
        at com.vmware.eam.vim.server.impl.VimRoot.currentTime(VimRoot.java:78)
        at com.vmware.eam.vc.VcListener.main(VcListener.java:140)
        at com.vmware.eam.vc.VcListener.call(VcListener.java:118)
        at com.vmware.eam.vc.VcListener.call(VcListener.java:58)
        at com.vmware.eam.async.impl.AuditedJob.call(AuditedJob.java:58)
        at com.vmware.eam.async.impl.FutureRunnable.run(FutureRunnable.java:55)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
Caused by: (vim.fault.NotAuthenticated) {
   faultCause = null,
   faultMessage = null,
   object = ManagedObjectReference: type = ServiceInstance, value = ServiceInstance, serverGuid = f0ee8343-1721-4676-9069-1a837625c60b,
   privilegeId = ,
   missingPrivileges = null
}
        at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at java.lang.Class.newInstance(Class.java:442)
        at com.vmware.vim.vmomi.core.types.impl.ComplexTypeImpl.newInstance(ComplexTypeImpl.java:174)
        at com.vmware.vim.vmomi.core.types.impl.DefaultDataObjectFactory.newDataObject(DefaultDataObjectFactory.java:25)
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.ComplexStackContext.<init>(ComplexStackContext.java:30)
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl$UnmarshallSoapFaultContext.parse(UnmarshallerImpl.java:167)
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl$UnmarshallSoapFaultContext.unmarshall(UnmarshallerImpl.java:105)
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl.unmarshalSoapFault(UnmarshallerImpl.java:92)
        at com.vmware.vim.vmomi.core.soap.impl.unmarshaller.UnmarshallerImpl.unmarshalSoapFault(UnmarshallerImpl.java:86)
        at com.vmware.vim.vmomi.client.common.impl.SoapFaultStackContext.setValue(SoapFaultStackContext.java:41)
        at com.vmware.vim.vmomi.client.common.impl.ResponseUnmarshaller.processNextElement(ResponseUnmarshaller.java:127)
        at com.vmware.vim.vmomi.client.common.impl.ResponseUnmarshaller.unmarshal(ResponseUnmarshaller.java:70)
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.unmarshalResponse(ResponseImpl.java:284)
        at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.setResponse(ResponseImpl.java:241)
        at com.vmware.vim.vmomi.client.http.impl.HttpExchangeBase.parseResponse(HttpExchangeBase.java:286)
        at com.vmware.vim.vmomi.client.http.impl.HttpExchange.invokeWithinScope(HttpExchange.java:54)
        at com.vmware.vim.vmomi.client.http.impl.TracingScopedRunnable.run(TracingScopedRunnable.java:24)
        at com.vmware.vim.vmomi.client.http.impl.HttpExchangeBase.run(HttpExchangeBase.java:60)
        at com.vmware.vim.vmomi.client.http.impl.HttpProtocolBindingBase.executeRunnable(HttpProtocolBindingBase.java:229)
        at com.vmware.vim.vmomi.client.http.impl.HttpProtocolBindingImpl.send(HttpProtocolBindingImpl.java:114)
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$CallExecutor.sendCall(MethodInvocationHandlerImpl.java:693)
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$CallExecutor.executeCall(MethodInvocationHandlerImpl.java:674)
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.completeCall(MethodInvocationHandlerImpl.java:371)
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.invokeOperation(MethodInvocationHandlerImpl.java:322)
        at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.invoke(MethodInvocationHandlerImpl.java:195)
        at com.sun.proxy.$Proxy51.currentTime(Unknown Source)
        at com.vmware.eam.vim.server.impl.VimRoot.rootOperation(VimRoot.java:101)
        ... 9 more

2023-06-06T02:20:34.778Z |  INFO | vim-monitor | VcListener.java | 131 | Retrying in 10 sec.