
Homelab: Overview

I am starting a series about my homelab and how it is all laid out. I have written this article a few times, with months in between; each time the setup changes, but we seem to be at a stable-ish point where I will start this series. Since I wrote this whole article and am now editing it a while later, I will mark in italics and underline where present-me is filling in. I think it will give a neat split of the growth over the last year or so I have been working on this. Or it will make it illegible; we will see. My home setup gives me a good chance to test out different operating systems and configs in a domain environment before using that tech elsewhere, like at work.

Hypervisor

Starting off with virtualization technology: I settled a while ago on Microsoft Hyper-V instead of ESXi. The main reasons were that I already had Windows Server, and that Hyper-V allows for Dynamic Memory, allocating a range of memory to a VM. When something like an AD controller is idling it doesn't need much memory; when it starts up it may, and Dynamic Memory allows me to take that into account. I will say one place that has bit me later is file storage, but that will be a later post.

The setup is technically "router on a stick", where the Sophos XG firewall functions as the router, and the rest of the devices hang off of that. The Sophos XG machine is an old Dell Optiplex 990 (almost 10 years old!) with an Intel quad NIC in it, so it can do hardware offloading for most of the traffic. I intend to do posts for networking, hypervisors, file storage, domain, and more; thus I will not get too in the weeds right now on the particulars.

The file storage is a FreeNAS box recently updated to 7, with 3TB HDDs. I have had this box for over 6 years (I just looked it up in November 2020; one of the drives has 55257 hours, or 6.3 years, of run time on it); it is older but has worked well for me so far.

The network backbone is a new switch I really like; I was able to get 2 of them off eBay. They were broken, but I was able to repair them; more on that later as well. They are Brocade (now Ruckus) ICX7150-12P switches: 12 1Gb/s PoE ports, 2 additional 1Gb/s uplink ports, and 2 1/10Gb/s SFP/SFP+ ports. These switches can run at layer 3, but I have the layer 2 firmware on them currently. They have a fiber connection between them; before that I was using 2 Unifi APs in a bridge. That didn't work fantastically, however, because A. I am in NY, B. they were only 2×2 802.11AC Wave 1, and C. I am in NY. I custom ordered (so the significant other would not get mad) a white 50m fiber cable to go around the wall of the apartment.

With SSDs in the hypervisor boxes (I call them HV# for short) and iSCSI storage for VMs as well, which VMs are on which host doesn't particularly matter. Flash forward 6 months or so since that first sentence was written: I still use the NAS for backups, but the hypervisors are now running Storage Spaces Direct and doing shared storage. This lets the hypervisors move VMs around during patching, or pause less critical ones during a system update. The Intel NUC and small Dell Inspiron are much underpowered compared to the mid-tower hypervisors, so they usually run only 1 or 2 things. The NUC runs the primary (older) domain controller, and that is it. It is an older NUC that I got about 7 years ago, so it's not that fast. The "servers" in the hypervisor failover cluster are a Lenovo and 2 Dell Optiplex 5050s. I like these Dells because they go for about $200 on eBay, while having an Intel i5-7600, supporting 64GB of RAM, and having expansion slots for things like 10Gb SFP+ cards. These machines also idle at about 30 watts, which makes the power bill more reasonable.

Some of the services I run include:

  • 2 Domain Controllers (Server 2016 and 2019)
    • Including Routing and Access service for RADIUS and 802.1x on wifi and wired
  • Windows Admin Center Server (Windows Server 2019)
  • Windows Bastion (This box does Windows Management) (Server 2019)
  • Veeam Server (Server 2019)
  • Unifi Controller/Unifi Video for security camera (Ubuntu)
  • 3 Elastic Search boxes for ELK (CentOS 8)
  • Linux Bastion (CentOS 8)
  • Foreman Server (CentOS 8)
  • LibreNMS (This I grew to really like) (CentOS 8)
  • Nessus Server (CentOS 8)
  • Jira Server (CentOS 8)

That is the general overview; I will spend the next while diving into each bit and discussing how it is configured and what I learned in doing that.

Red Hat/CentOS 7-8 PKI/CAC/Smart Card SSH Login with Active Directory and SSSD

I was experimenting with integrating CentOS with my home Active Directory (AD) cluster. I wanted centralized user management and, as a stretch goal, to get PKI login working for Smart Card auth. I have used winbind before to connect CentOS 6 to Active Directory; that configuration was a bit annoying. These days with CentOS/RHEL 7 and 8 we have SSSD, which is more straightforward. For all the following tests I used Putty-CAC (link), a Windows app that allows GSSAPI and Smart Card auth.

SSSD Config

I will start off with my experience, then follow up with a how-to. For this article I already have AD configured to support Smart Card auth, and have stored the Smart Card public key for my user; I will follow up with an article about that configuration. Active Directory integration is straightforward and easy. One setting you can enable is hiding the domain names from the username, which allows the users to feel native to the system. Using users and groups is easy; I made a group to which I gave sudo access. When using Smart Cards you will need to put NOPASSWD in the sudo entry for that group, because Smart Card users usually do not have passwords. Usually... You can use Smart Card auth with Active Directory AND a password, as long as you do not set "Smart card is required for interactive logon". If you do check that box, AD sets a random password on the backend for that user.

After setup, with this config we store the authorized_keys in AD under the attribute altSecurityIdentities. The main tool to debug Smart Card auth is sss_ssh_authorizedkeys, which lets you have the system attempt to pull a user's ssh key on demand. A big warning about SSSD: it loves to cache information. If you run that command, then make changes to your sssd.conf or AD, and re-run sss_ssh_authorizedkeys, it will fail because it is caching the failed lookup from before. My recommended command as root between tests where it may be caching is:

systemctl stop sssd && rm -rf /var/lib/sss/db/* && rm -rf /var/lib/sss/mc/* && systemctl start sssd
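
If you would rather not wipe the database files, a lighter option is the sss_cache tool that ships with SSSD. It marks cached entries as expired rather than deleting them; in my experience the full reset above is sometimes still needed, but this is a gentler first try:

sss_cache -E    # mark all cached SSSD entries as expired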

SSSD Setup

1. Set up hostnamectl (make sure your host knows what its name is supposed to be) and DNS. For SSSD to work well, the system needs to be able to find itself in DNS; you can set up SSSD to auto-register with dynamic DNS (more on that later, and there is an example below).
2. Install Packages
     - Ubuntu
       apt -y install realmd sssd sssd-tools libnss-sss libpam-sss adcli samba-common-bin oddjob oddjob-mkhomedir packagekit    
     - CentOS
       sudo yum install realmd sssd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation       

At this point, running "# realm discover your_domain_fqdn" will list the services your domain needs for users to log in. Usually the main program you need to enable is oddjobd, which creates home directories when users log in. Note: for these examples I find it easier to use a real domain than to substitute it out; I will use my home test domain "home.ntbl.co" here.
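
For example, a quick sketch of step 1 plus the discovery check; "centos1" is a hypothetical hostname, swap in your own:

hostnamectl set-hostname centos1.home.ntbl.co
realm discover home.ntbl.co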

3. systemctl enable oddjobd
4. systemctl start oddjobd
5. realm join -U admin_user_on_domain home.ntbl.co
6. vim /etc/sudoers.d/winadmins
Add the line "%domain\ admins@home.ntbl.co ALL=(ALL) ALL", where "domain admins" is a group I have in AD, and "home.ntbl.co" is my domain. This setup does not support Smart Card login with sudo, since you need NOPASSWD for that sudo login; for example, "%domain\ admins@home.ntbl.co ALL=(ALL) NOPASSWD:ALL". You can create a sub sudo file like I did here, or use visudo to edit sudo and have it syntax checked.


7. Below is my /etc/sssd/sssd.conf without Smart Card auth setup.

 [sssd]
 domains = home.ntbl.co
 config_file_version = 2
 services = nss, pam
  
 [domain/home.ntbl.co]
 ad_domain = home.ntbl.co
 krb5_realm = HOME.NTBL.CO
 realmd_tags = manages-system joined-with-adcli
 cache_credentials = True
 id_provider = ad
 krb5_store_password_if_offline = True
 default_shell = /bin/bash
 ldap_id_mapping = True
 use_fully_qualified_names = false
 fallback_homedir = /home/%u@%d
 access_provider = ad
  
 dyndns_update = true
 dyndns_refresh_interval = 43200
 dyndns_update_ptr = true
 dyndns_ttl = 3600 

Setting "use_fully_qualified_names" to false changes your username from "dan@home.ntbl.co" to "dan". Not a requirement, but a nice quality-of-life setting. The bottom section adds dynamic DNS, which will push your IP to AD DNS. Windows does dynamic DNS updates by default, and unless the systems are statically assigned (or even if they are), this can be a nice feature. Now "systemctl stop sssd" and "systemctl start sssd", then you should be able to log in with your AD account.
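
Before trying an actual login, you can sanity check that lookups work through NSS; assuming a user named "dan" (swap in your own, and use the fully qualified name if you left use_fully_qualified_names on):

getent passwd dan    # should print the AD user with the fallback_homedir path
id dan               # should list the user's AD groups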

GSSAPI

Before getting into Smart Card auth, I wanted to briefly mention GSSAPI. This is a method for doing auth between systems. It allows Windows clients to one-click login to SSH by passing an auth token from your Windows session right to SSH. If you set up SSSD and enable GSSAPIAuthentication in /etc/ssh/sshd_config, then you can use an app like Putty-CAC to SSH with GSSAPI. I have found this usually works with SSSD by just setting GSSAPI to yes. If you just want to admin Linux from AD and have no other requirements, I would suggest you look into this for your environment because it is so easy. If you are going to follow the rest of the guide, make sure to turn GSSAPI back off, or it will log you in automatically and you may think it's Smart Card auth working; that fooled me for a few minutes.
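
For reference, a minimal sketch of that sshd_config change (both are standard sshd options; restart sshd after editing):

GSSAPIAuthentication yes
GSSAPICleanupCredentials yes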

Smart Card Auth

For all of my tests I used the following Smart Card: Amazon link. I think these other cards would work as well, and they are cheaper, but I have not personally tried them: Amazon link. I may write an article later about setting up these cards; if you are interested, write a comment below.

Add Certs to AD

You need the Smart Card's public key data in SSH authorized_keys format. This guide will show you how to get that string from Putty-CAC. You have to enjoy when a .gov site tells you to go to user NoMoreFood and get security software; the open source world is great.

In Active Directory, open Active Directory Users and Computers and turn on Advanced Features via the View menu. Then select the user you want to add ssh keys for, and select the "Attribute Editor" tab. You will find an entry near the top called "altSecurityIdentities"; add the line that would usually be in ~/.ssh/authorized_keys there. It should look like "ssh-rsa key_stuff".
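
If you prefer a shell to the GUI, the same attribute can be written with OpenLDAP's ldapmodify. This is just a hedged sketch: it assumes the OpenLDAP client tools with SASL/GSSAPI support are installed, that you have a Kerberos ticket for a domain admin, and you will need to adjust the DN and key to your own user:

kinit admin_user@HOME.NTBL.CO
ldapmodify -H ldap://home.ntbl.co -Y GSSAPI <<EOF
dn: CN=dan,CN=Users,DC=home,DC=ntbl,DC=co
changetype: modify
add: altSecurityIdentities
altSecurityIdentities: ssh-rsa AAAA...rest_of_your_key
EOF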

Configuring SSSD for Cert Auth

To add Smart Card auth to SSSD, just add the following to your sssd.conf, merging these sections with the ones from above.

[sssd]
services = nss, pam, ssh, sudo

[pam]
pam_cert_auth = True

[domain/home.ntbl.co]
enumerate = True
ldap_user_extra_attrs = altSecurityIdentities:altSecurityIdentities
ldap_user_ssh_public_key = altSecurityIdentities
ldap_use_tokengroups = True

Now restart sssd. If you run "sss_ssh_authorizedkeys dan" (with dan replaced with your name), then you SHOULD get a key back if everything is set up correctly. If you do not get a key back, use the command below to reset sssd and reload. If you still do not get a key, then you will need to edit settings in sssd.conf and continue to tweak:

systemctl stop sssd && rm -rf /var/lib/sss/db/* && rm -rf /var/lib/sss/mc/* && systemctl start sssd

I will say this does seem to take some trial and error. /var/log/sssd/ has some good logs that can help point you in the correct direction if you are running into issues. One quick note: you may see people online say "use the command 'sss_ssh_authorizedkeys -debug 4 home.ntbl.co' to debug the command." This command does not have a debug flag; what that actually does is consume the -d argument, which is domain, then try to parse the rest. You end up with key lookup attempts on domain "ebug" for user "4". Sadly, sss_ssh_authorizedkeys is not very verbose and debugging it is a bit of a pain; do not listen to people who mention the above debug command. At least on CentOS/RHEL 7 and 8, it does not work.

As long as you are getting a key back from the above command, you can wire it into SSH. Edit /etc/ssh/sshd_config with the following. Note some sites say AuthorizedKeysCommandUser should be root, and some say it should be nobody; I err on the side of lesser permissions and set it to:

 AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
 AuthorizedKeysCommandUser nobody
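
After editing, it does not hurt to syntax check before restarting sshd (the binary may live at /usr/sbin/sshd if it is not in your path):

sshd -t    # prints nothing when the config is clean
systemctl restart sshd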

Hope something here has helped someone, feel free to drop a comment.

iOS/macOS On-Demand IPsec VPN with Sophos XG

Having a small home lab, I wanted to be able to set up internal services and then access them on the go. While I could set up an L2TP or SSL VPN and connect whenever I wanted to use these services, I thought I would give On-Demand VPN via an iOS/macOS configuration a try. Little did I know the world of hurt I was entering. I will start with the settings you need to get it working, since a lot of people just want that. Then I will talk about the crazy and painful road I went down before finding 1, just 1, set of settings that seems to work. If you have any questions, thoughts, or success stories please comment below!

Fun fact: I will be calling the protocol IPsec here. That is what the original RFC called it, what the original working group was called, and the capitalization they used. Sophos agrees and uses that capitalization, while Cisco, and Microsoft depending on which web page you are on, may call it IPSEC or IPSec or IPsec.

On-Demand VPN gives you the ability to set certain websites or IPs, and when your phone or laptop attempts to connect to them, the machine silently brings an IPsec tunnel online and uses it for that traffic. This allows you to run services at home, and to users (your mom or cat or whomever) it looks like just another website. Apple has 1 big requirement: you have to use certificate-based auth. You cannot use a pre-shared key/password. Also up front, to save you a few days of trying things: iOS and macOS will NOT check your certificate store for your VPN endpoint (Sophos XG) certificate. It HAS to ship with the firmware, or you will get the fantastic and descriptive "Could not validate the server certificate." Believe it or not, that is one of the most descriptive errors you will get here. There are some posts on the Apple support forums from Apple engineers saying the root CA has to already be on the device. If anyone gets it to work with their own, let me know.

Sophos XG Setup

I am using Sophos XG v18 with a Home license, backed by AD running on a Dell Optiplex for this guide (don't worry, it has a cool Intel NIC in it). To set up the IPsec server in Sophos XG, first we need to make 2 certificates. Log in to the admin portal, then on the bottom left select "Certificates". You need 2 certificates. The first is our "local certificate" (we will call it Cert-A); this is the cert used for the server (Sophos) end. As previously mentioned, this has to be a real signed cert. I ended up forwarding a subdomain on my site to the firewall, and then using Let's Encrypt to create a cert for that URL. I used this site, https://hometechhacker.com/letsencrypt-certificate-dns-verification-noip/, to guide me in creating the cert on my laptop, then I uploaded that to the Sophos firewall. This will require you to have access to your domain's DNS settings or be able to host a web file.

The second cert (Cert-B) is for the client; Sophos will call it "The Remote Cert". This is to auth to the firewall, and it can just be a locally generated cert. All devices will share this cert; the devices will use their username and password combination to identify the user. I used an email as the cert ID. Note this email does not have to exist; I just made one up on my domain so I will know what this cert is. Once created, go back to the main Certificates page and download the client/remote certificate. I suggest putting an encryption password on it, since the Apple tools seem to freak out if that is missing. But ALSO the password for this cert will be in clear text in your config, so don't make it a password you care about. These certs all need to be rotated at least once a year with the newer requirements; Let's Encrypt is every 90 days, and I intend on automating that on one of the Linux machines I have.
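
For the Let's Encrypt piece, the guide linked above walks through it in detail; as a rough sketch of the DNS-challenge flow I used ("vpn.example.com" is a stand-in for your own subdomain):

certbot certonly --manual --preferred-challenges dns -d vpn.example.com
# certbot prints a TXT record to create at _acme-challenge.vpn.example.com;
# once validated, the cert and key land under /etc/letsencrypt/live/vpn.example.com/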

Note - March 9, 2022: As of Sophos XG 18.5.1 you can no longer export the private keys of self-signed certs. For Cert-B you may want to use another CA, either a locally signed OpenSSL cert or a Microsoft CA. - Thanks for the heads up Florian!
Self-signed client end cert

Now that we have our 2 certificates, let's go over to "VPN" on the left-hand navigation. I have tried many settings in the main "IPsec Connections," and none of them have worked for me. I get fun and generic errors from the Mac of "received IKE message with invalid SPI (759004) from other side" or "PeerInvalidSyntax: Failed to process IKE SA Init packet (connect)".

Click the "Sophos Connect Client" tab; the back end of this client is just a well-configured IPsec connection. Fill in the form, from the external interface you want to use, to selecting "Digital certificate" as your auth method, followed by the "Local certificate", which is the Let's Encrypt one (Cert-A). "Remote certificate" is the one we will load on your device (Cert-B).

Now you select which users you want to have access to this. I have Active Directory backing my system, so I can select the AD users who have logged in to the User Portal before. This is a trick to Sophos XG you may need: if you use AD and a user doesn't show up, that means they need to log in to the User Portal first.

Select an IP range to give these clients. I suggest something outside any of your normal ranges; then you can set the firewall rules and know no other systems are getting caught in them. Once you are happy, or have filled in other settings you want like DNS servers, click "Apply". After a second it will activate; you can download the Windows and Mac client here, or follow along to make a profile.

Apple Configuration

To create a configuration file you need to download Apple Configurator 2, https://apps.apple.com/us/app/apple-configurator-2/id1037126344, onto a Mac. I know what you are thinking: 2.1 stars, Apple must love enterprises. Download that from the store and open it up. If you do not have a Mac, I attached a template that you can edit as a text document down below. This profile needs a Name, as well as an Identifier. The identifier is used to track this config uniquely; if you update the profile, your device will override old configs instead of merging. You will see on the left there are LOTS of options you can set; the only 2 we need are "Certificates" and "VPN".

Starting with Certificates, click into that section, then hit the Plus in the top right. Upload the cert we exported from Sophos (Cert-B) earlier for the end device, and enter the password for it. Again note, this password is in plain text in the config file.

Now for the VPN section. Click the Plus in the top right again to make a new profile, and name the connection anything. Set the Connection Type to "IPsec". (IKEv2 is IPsec but a newer version; I will get into some of this later, after our config is done and I can rant.) Server is your Sophos XG URL. Account and password can be entered here to ease setup, or you can leave one or both blank to make the user enter them when they import the config. You can leave the user/password fields blank (it will give you a yellow triangle, but that is fine) and then give it out widely and not have your creds in it... For "Machine Authentication" you want "Certificate"; you will see that in selecting "Certificate", all of a sudden the On-Demand area appears. For "Identity Certificate" select the one we uploaded before. Finally we can enable "Enable VPN On Demand" and select the IPs or URLs you want to trigger the VPN.

Once that is done, save the profile and open it on a Mac, or use the configuration tool to upload it to an iOS device. That should be it! Your devices should be able to start the connection if you ask, and if you go to one of the websites it should auto-VPN. Make sure you have firewall rules in Sophos XG for this new IP range, or that can block you from being able to access things.

A small note from my tinkering with the On-Demand profile: if you go to Safari on an iOS device, it will connect when you visit a website that is in the configuration. If you use a random app, such as an SSH application, I didn't find it always bringing the tunnel up, and at times it had to be started manually. Something to look out for; a nice part of the IPsec tunnel is that it starts quickly.

Now that the config is done, I want to mention some of the other things I have learned in tinkering with this for several days. The only way I got it to work is using that Sophos Connect area, and the other big undocumented thing is you have to use a publicly trusted cert for the Sophos end. I found 1 Apple engineer mention this on their forum, and a TON of people talking about how they couldn't get the tunnel to work with their private CA. I have tried uploading a CA and injecting it in different places with different privileges on the Mac, and never could get it to work. The Let's Encrypt cert immediately worked.

For IPsec v1, aka IKEv1, Apple uses the BSD program racoon on the backend to manage the connection; using the "Console" app you can find its logs. For IKEv2, it seems Apple wrote their own client around 2016-2018, and there are a lot of reports online that it just doesn't work at all with cert-based auth. All the guides about it working stop around 2016. You can find earlier ones, or people using pre-shared keys, but selecting pre-shared keys doesn't allow us to do an On-Demand VPN. The bug has been reported for a while: https://github.com/lionheart/openradar-mirror/issues/6082. If you try to do this, you can expect A LOT of "An unexpected error has occurred" from the VPN client. Even looking at the Wireshark traffic didn't lend any help in tuning Sophos to give the IKEv2 client something it would accept. If someone figures out how to get that to work in this setup, please let me know.

Now that everything is set up, you can host things yourself. I give the auto-connecting VPN fewer rights than when I do a full tunnel on my laptop, but it allows for things like Jira to be hosted, and then mobile clients can easily connect.

Template

For your cert to work in the template it needs to be converted. Sophos will give you a .p12 file for your cert; use the following command to get the version that needs to be in the .mobileconfig file. You'll at minimum want to edit the cert area and put yours in there, set the password for the cert, and set any URLs you need.

openssl enc -a -in user.p12 -out user.enc

Transferring Files To The Macintosh SE over Serial/Kermit

The Mac journey continues with me searching for a way to transfer files from my modern PC/Mac onto the old Macintosh SE I was recently restoring; a way without constantly removing the SD card from the SCSI2SD adapter and mounting it in an emulator. After reading a lot of different pages and hitting dead ends, or methods that involved a lot of hardware, time, or monetary investment, I found an old reliable way to transfer files.

One of the methods I looked at was an ethernet LAN adapter for the Mac SE; the issue I saw was that some of them were expensive, and a lot of them required more RAM than the 1MB my SE had. I then turned to the serial ports available on the back of the machine. The Mac does not come with a lot of software to help in this endeavor, so I used the SCSI2SD adapter to load the initial setup; after that I could use that software to transfer files.

I ended up using the Kermit protocol, the same protocol I used to transfer software to the Compaq Portable II. The project was run by Columbia University for many years. While they have since transferred it to be an open source project, the original project files are still on their FTP server, which offers everything from DOS to Mac to C64 binaries. ftp://columbia.edu/kermit hosts all the files; for archival purposes I also uploaded a clone of that folder to archive.org: https://archive.org/details/kermit_202008 . Kermit is not fast, being serial, and the Mac can't support anything over 57600 baud; but it offers compatibility with almost every OS at this point. Get ready to experience what dialup was like all over again.

Required Hardware:

  • Serial adapter for the modern computer, if your system doesn't have one built in
  • RS-232 to mini DIN 8 cable (I used this one)

To start the connection, I will be using a modern Mac as the server (a modern Mac being a 2012 Macbook Air), and a USB serial cable to connect to the Mac SE as the client. Using homebrew on the Mac, you can install "c-kermit". Once that is installed, search for your serial device under /dev/; mine is /dev/tty.usbserial1420. Please note wherever you start kermit will be the home folder for file transfers; I suggest making a folder somewhere that you will drop files to transfer.
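
The install and device hunt look roughly like this (the tty.usbserial name varies by adapter; mine enumerated as tty.usbserial1420):

brew install c-kermit
ls /dev/tty.*    # look for your USB serial adapter in the list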

Server

$ kermit

> set port /dev/tty.usbserial1420

> set carrier-watch off   # Assume there is no carrier signal

> set speed 57600          # Or whatever the speed has to be

> connect

Get ftp://columbia.edu/kermit/mac/mackermit.hqx and get it onto your Mac SE through some means. I transferred the whole "mac" folder from Columbia's FTP server onto my Mac SE. I would suggest a SCSI2SD adapter for this initial transfer. You may be able to use a floppy, but you may hit issues depending on your model of SE. Mine has an 800kb floppy drive, so attempts at writing floppies from a modern PC usually end with it not reading them; cheap modern floppy drives work at 1.44mb, and the tracks won't align. Once you have the Kermit app on the Mac, open it up.

Select "Settings" at the top, then "Communications". Here you can set the speed to the max supported speed of 57600, over the default 9600 baud. Both of these are terribly slow... but there is nothing we can do about that. Make sure to select the Phone or Serial port based on which you are using; I used the Phone port.

Sorry for the odd quality, capturing a CRT isn’t the easiest

Afterwards, click the "File-Transfer" menu at the top, then "Set Directory" to set where the transferred files should end up. Then open the same "File-Transfer" menu again and choose "Get file from server"; here you can type in a filename that exists in the folder you opened Kermit in on the server.

Taking photos of CRTs is not the easiest…

Now be prepared to wait for a while… Eventually the files will be in the folder you selected and you are good to go!

A few things to look out for: if you have an older Mac SE like the one here and it only has 1MB of RAM, that means you can only run Mac OS 6. (https://www.lowendmac.com/oldmac/compact3.html) I may upgrade this system in the future to its max, which I believe is 4MB, but for now I am stuck with 6. This also means I can only use DiskCopy 4.2, and a good amount of classic apps will not work on Mac OS 6. The biggest issue is there are a lot of archives that are in DiskCopy 6 format, which I can't load on the system.

The first thing I thought I would do is extract the archive in an old Mac VM on my modern computer, then transfer the files onto the Mac SE. Here I ran into a lot of issues with the file types. If you want to go down a weird rabbit hole: the classic Macs used an odd 4-letter code for the file type, and another 4 letters for which program created it, http://livecode.byu.edu/helps/file-creatorcodes.php . The Mac mostly ignores file extensions. There are programs such as ResEdit (which comes on the provided SCSI2SD disk image I used in restoration) where you can edit these attributes, but it usually leads to weird outcomes. Kermit tends to bring files over as "text". StuffIt seems to do a decent job of just looking at the file extension and allowing you to expand it; then those files are the correct type. This whole issue is something to look out for, doubly so on a System 6 machine that can not run DiskCopy 6.

Otherwise stick to websites that say they backup with DiskCopy 4, or get more RAM… Then have fun with the system! Write that novel you have always wanted to write without distraction.

Email Alerts on Different Platforms

Different pieces of network gear I have have given me many problems in getting email alerts working, so I thought I would document them. All of these systems use a service gmail address I made on free/public gmail to send alerts to me.

Sophos and LibreNMS gave me no problems; if you have issues with them, drop a comment below and I can post my settings.

Ruckus AP

The trick to getting Ruckus Unleashed working: I used "smtp.gmail.com" and port 587. The issue I ran into is that the service email I use to send alerts had a long password; Ruckus Unleashed v200.8 supports a maximum of 32 character passwords. I would also mention it dumps the password raw into the logs, so make an account you don't care much about.

Unifi Controller

After digging through logs and getting lots of "There was an error sending the test email to x@gmail.com. Failed to send email for unknown reasons.", I found one post that mentioned a fix for the console log of "fail to send email: api.err.SmtpSendFailed". You need to once again use smtp.gmail.com and port 587, but since port 587 is STARTTLS rather than implicit SSL, you need to counterintuitively UNCHECK "Enable SSL".
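
If you want to confirm from a shell that the mail server is reachable and offering STARTTLS on 587 (handy for separating network problems from controller problems), openssl can act as a test client; if you get a certificate dump back, the STARTTLS negotiation worked:

openssl s_client -starttls smtp -connect smtp.gmail.com:587 -crlf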

Windows Server DNSSEC Error 9110

TL;DR: Check that your Domain Controllers are in the correct OU and that the Microsoft Key Distribution Service is running.

I ran into an issue recently when DNSSEC signing a DNS zone, where Windows Server 2019 gave a very vague error, and would only display that error after a 10 minute timeout. This made iterating very slow, since every change I made meant a 10 minute wait. Every guide to setting up DNSSEC mentioned right-clicking the zone, then clicking sign, and as long as you select the defaults it should just work. On another domain that is exactly what happened, and it just worked; except for the one original domain that kept timing out.

In setting a custom DNSSEC signing policy, I noticed that there were different keystores, each of which gave a different error. This made me think it was something to do with the specific one I was using. It was time to troubleshoot the service itself, not DNSSEC.

I got a list of the services from a known good, and signing, domain controller, then compared that to the bad one to see what was different. Part way down the list I noticed that the Microsoft Key Distribution Service was failing to start, and when I tried to start it, there was an error:

Group Key Distribution Service cannot connect to the domain controller on local host Status 0x80070020.

Checking the Event Log showed an issue finding the Domain Controllers on the network (error above), which was weird because it is a Domain Controller... In looking at where this system was placed in the domain tree, I saw it had been moved from the original OU for domain controllers to another place. I dragged it back, after applying all the GPOs that were on that other OU to the original Domain Controllers OU. Then I held my breath, hit start on the Key Distribution Service, and it started right away.

After that, DNSSEC signed with no issues. Long story short: don't move your DCs, it'll only end in pain. And to the one other person on the internet who has seen this problem and never solved it, 5+ years ago, https://www.reddit.com/r/sysadmin/comments/3dedwm/dnssec_will_not_sign/ there is your answer!
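
As a quick way to confirm a zone is actually serving signatures after all this, you can ask for the RRSIG records with dig from any Linux box; a sketch using my home.ntbl.co test domain, with "dc1" standing in for one of your DCs:

dig +dnssec soa home.ntbl.co @dc1.home.ntbl.co    # look for RRSIG records in the answer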

Building a PDP-8 Kit

A bit ago I picked up a PDP-8 replica kit, the PiDP-8; a kit can be picked up here: http://obsolescence.wixsite.com/obsolescence/pidp-8-get-one . They are under $200, and I always found older computers interesting, so I thought I would give it a shot. I also find the Digital Equipment Corporation an interesting tale of a far gone computing era. (There were t-shirts with the DEC logo on Amazon, but they are gone now.)


The kit itself is a little smaller than the original control panel (photo from the creator's blog above). This is not a real PDP-8; it is a front panel with a Raspberry Pi on the back of it. The Raspberry Pi runs an image from the user forums (which are incredibly helpful, as well as a nice community) that boots very quickly and dives right into the modified emulator. The design is wonderful and just uses the pinout on the Pi.

I got the kit, then ended up moving across the country and did not set it up for several months. When I got to building the kit (the 2015 version, pictured above) it was 2016, and instructions were up for both versions. There are not many differences except the switches and how they are mounted; my version needed me to remove pins from each switch, then mount each on a rod to keep them aligned. The 2016 version also has more authentic looking switches. I got the switch rod put together with no difficulties.

Then it was down to soldering the trillions of LEDs, well it felt that way, onto the PCB that came with the kit. Small soldering is not my favorite thing, so this took a bit; but in the end it was done and I was happy with it.

I wanted to test my soldering skills, or lack thereof. I plugged the Pi in and started the image. A few of the lights dimly came up; the rest were just dead. Darn, this means somewhere it's broken. I did some traces with a multi-meter and couldn't find the fault. Then I realized that while it was plugged in, the one integrated circuit that handles the LEDs was getting very hot. I emailed Oscar, who made the project; he quickly responded, said it sounded like the integrated circuit was dead, and offered to mail one within the next day or so.

He was extremely helpful and kind, and I got the new chip a few days later. I had to go to Radioshack (I was surprised I could find one! And it's no longer there a few months later) to get a desoldering wick. I hadn't used one before, but it helped me remove the old chip. I soldered the new chip in and powered it up. Instantly it all came online! I wanted to check all the LEDs, to verify whether the OS was keeping some off or the circuit was bad, so I got a diagnostic program that was written for this system. It did indeed show there was an error, and after resoldering a small point everything was working!

Now that the system worked and fit in the box, it was time to paint the switches! I covered half of them with painters tape and painted some brown, then later did another coat. Then I did the white ones, so they were not off-white or marked with the red dots.

After it dried, I cut a hole in the side of the case so that I could access the USB ports of the Pi. I just had a tiny hobby hacksaw and a drill; these were not the best tools to cut the hole, but it worked out. I also put electrical tape over the edges of the hole to cover up my handiwork. Then I mounted the PCB in the box, with wooden blocks for support. I got some velcro with tape on the back to attach the front panel; that way I can remove it whenever I want for service and easily reattach it.

I got a power switch that is inline with a USB cable, so I have a switch to power the device on and off. Then I thought the blinky lights were neat, so I mounted it on my wall for now. It boots directly into OS/8, and while idling does a little light show.


The project came out well, and I am excited for Oscar to release the PDP-11 clone he has been working on in the background. I haven't spent that much time programming it, but it is nice to have a piece of computer history above my desk. A big part of this project has been the awesome community over at the forum, https://groups.google.com/forum/#!forum/pidp-8, and the kindness of the project owner and his willingness to help. Oscar's blog has some cool stuff as well: http://obsolescenceguaranteed.blogspot.com/ .

Next I want to do an Altair 8800 kit while I wait for the PDP-11 version! https://www.altairduino.com.

Update: The Altair-Duino! Altair 8800 Kit

Using Kermit to Serial Transfer Files

I recently restored my Compaq Portable II. (If you haven't read Open (Link), about the forming of Compaq, I would suggest it; I highly enjoyed it.) I thought I would transfer some files to the 286 via serial, instead of taking the Compact Flash card it runs on out of its adapter and copying new software over that way.

I started my journey thinking I would use the old Microsoft InterLnk software that came with MS-DOS 6.22, and then perhaps a virtual machine on my laptop to serve the files. The laptop I had on hand was a Macbook Pro; I thought I could run an MS-DOS VM, then hook the USB serial adapter up to the VM, allowing MS-DOS to see it as COM1. This turned into a giant headache. VMware and Virtualbox (I tried both) kept giving me errors; they really didn't like the USB serial adapter (who does). After a few hours of playing with it, I made a silly decision: the easier way would be to write a new InterLnk server in Java and let my modern OS talk to DOS directly.

I spent some time configuring two VMware Fusion VMs to have virtual serial lines going to named pipes. Then I had socat interconnect the two pipes and log the traffic (I put the command below if anyone is interested). On one hand, I found it interesting researching and debugging serial communications using virtual machines. On the other hand, after more time than I care to admit, I didn't see a clear pattern in the serial data; I was getting the data going between the systems, but not in a super clear format. Wireshark has spoiled me. I finally decided it was time to try another plan. A quick detour to try to decompile the app made me more confused than ever, and we were back searching for a new method of connection.

socat -v -x GOPEN:/Users/Dan/DOS/pipe GOPEN:/Users/Dan/DOS/pipe2

After researching different methods of serial file transfer, such as XMODEM, YMODEM, and Kermit, I thought I would give Kermit a try. I have used XMODEM for dead Cisco devices, and thought Kermit would be easiest to serve from my Mac. The protocol is packet-based, which lets it speed up and slow down transfers as the transfer goes on.

To configure the server on the laptop I used Homebrew, installing with:

brew install c-kermit

Then I loaded the app onto the Portable via the Compact Flash card. I got Kermit for DOS from http://www.bttr-software.de/freesoft/comm1.htm . Version 4 for DOS works mostly the same as version 9 from Homebrew. The app lets you change directory to the folder you want to operate out of and set your connection settings, which I have below. Then you hit the command "receive", or just "r", on one side to receive; the other side then pushes whichever file you want.

Client

set port com1           # Or COM2 or whatever the port is

set carrier-watch off   # Assume there is no carrier signal

set speed 57600         # Or whatever the speed has to be

connect

Server

set port /dev/tty.usbserial

set carrier-watch off   # Assume there is no carrier signal

set speed 57600          # Or whatever the speed has to be

connect
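
Once both sides are connected, the transfer itself is two commands; escape back to the Kermit prompt on each side first. A sketch, with WIN30.ZIP standing in for whatever file is sitting in the folder you started the server from:

receive           # on the Compaq (DOS Kermit)

send WIN30.ZIP    # on the Mac (C-Kermit)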

I selected 57600 because I was not sure if the Compaq could handle the next jump, to 115200. This page has a lot of info about setting up the program; there are a lot of dials and knobs that can be adjusted: http://www.columbia.edu/kermit/k95faq.html

You can bring up a serial connection between the two and just type messages, but in the end I remembered how slow serial links were. I ended up powering off the system, pulling the Compact Flash card, and loading most files that way. It sort of felt like cheating, but when transferring the Windows 3.0 install files was going to be a 15+ minute affair, I went to the good ol' USB 3.0 Compact Flash reader.

Building a Tiny Classic Mac Part 1

I saw online someone who made a tiny Mac (The Verge) and thought it looked like a neat project to attempt. I started by selecting the original Macintosh as the template I wanted to emulate. Several people had made 3D models of the original Macintosh over on thingiverse.com; I used a combination of those and other sources online, including photos, to make a cleaned up model for myself in Sketchup. After having that model, I went about breaking down how I would make it.

I have recently been using laser cutters for fun at TechShop, so I made the body of the machine out of clear acrylic, then 3D printed a face plate that was glued onto the acrylic case. After that, it was painted with several coats of spray paint. I left the back door off so that I could work on installing the electronics and setting up the software. That will be another article later.

The first unit I made was for myself, then two more for friends. The original one never got painted; I thought the clear body was neat and showed off the internals. It also gave me a good model to hold when working with the other, opaque units.

Clear Mini Mac

Mini Mac v1

Each unit had a little screen that connected to a Raspberry Pi via a ribbon cable, then a USB port in the front where the old unit had a keyboard port. The back had an ethernet port for updating the system itself, audio out, and a micro-USB port for power. One of the hardest parts of the project was finding a ribbon cable that could handle the frequencies and work between the screen and the Raspberry Pi. A lot of the GPIO ribbon cables online actually flip which wire is in the 1 position with its neighbor; my solution was a 6 inch IDE extension cable. The cable can handle high frequencies, as well as fit the pinout perfectly.


Example Painted Side

After testing several different paint colors, I ended up using Rust-Oleum Ivory Bisque semi-gloss as the beige shade. All the sides were glued together except the back, which was held on by tiny 3D printed brackets that the back panel screws into. This allows access to the inside without breaking glue somewhere. Originally I was going to attempt to put a little handle on it, but that increased the complexity; in the end the top is flat.

All the laser cutting and 3D files I used are tracked with Git over at https://github.com/daberkow/minimacparts . I will put a few photos of the clear unit below, and of the final unit; later I will post another article about the electronics and software that run it. There are also photos of the many, many attempts at different sized bodies and painted side panels. My original model was almost exactly 1/3rd scale; then I had to make it a tiny bit bigger because of the screen I used.

Standard disclaimer that I do not own or hold any rights for the Macintosh name, or Apple logo. I do this as a fan for fun.

Parts:

  • Screen, JBtek® Latest Version 4 inch IPS Display (Super TFT) 480×320 (Amazon)
  • Screen Cable, IDE Extension Cable, (Amazon)
  • Audio Cable, 3.5mm right angle cable (Amazon)
  • USB Extension cable, with 90 degree plug so that it fits in the case (Amazon)
  • Micro USB extension for power, with 90 degree head (Amazon)
  • For ethernet I made my own cable, it had a RJ45 head and a RJ45 keystone for the back

 

Updated Windows Sudo

Recently I updated my Windows sudo program and added a command for Super Conduit, which is what I call some tweaks that you can make to a Windows Vista+ system. This allows someone to copy sudo.exe to a system's system32 folder; then, after running "sudo cmd", you can run "sudo /write" to add ls, ifconfig, and superc as options in the command line.

Superc has options of enable, disable, and show, making it easy to run. 🙂

Newest build is always here https://github.com/daberkow/win_sudo/raw/master/sudo/sudo/bin/Release/sudo.exe