Moving From Dropwizard to Spring Boot

Where I work, we have historically used Dropwizard, a Java framework for creating web apps, a lot. This framework went head-to-head with Spring Boot, but in the last few years it seems to have dropped out of favor with the community. Nonetheless, I thought for a new project I would dive into it to get more acquainted with it. After trying to get two of the basic things I needed working, I ended up giving up on Dropwizard and pivoting to Spring Boot instead.

The first big library I tend to use with Java projects, especially web apps, is jOOQ. It generates simple objects and gives you many ways to interact with your database. The best feature for me is that a Gradle plugin can scan your database and then generate all the corresponding Java objects automatically. Not only does this save you from handwriting a bunch of SQL queries, it also means that when you update the database (probably using something like Flyway), your objects get updated automatically. Now when you compile your program, if you forgot to account for that new field somewhere when editing an object, you get a compilation error instead of the application silently failing SQL queries in production.
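
As a rough illustration of why I like this, a query through jOOQ’s generated classes looks something like the sketch below. The EPISODE table class, its TITLE field, and the getTitle() accessor are hypothetical placeholders standing in for whatever the code generator produces from your schema.

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

import java.sql.Connection;
import java.sql.DriverManager;

// Hypothetical generated class; the jOOQ Gradle plugin would create this
// (and the matching record type) from an "episode" table in your schema.
import static com.example.generated.Tables.EPISODE;

public class JooqSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/podcasts", "user", "password")) {
            DSLContext ctx = DSL.using(conn, SQLDialect.POSTGRES);

            // Type-safe query: if the "title" column is renamed and the classes
            // are regenerated, this stops compiling instead of failing at runtime.
            ctx.selectFrom(EPISODE)
               .where(EPISODE.TITLE.like("%History%"))
               .fetch()
               .forEach(rec -> System.out.println(rec.getTitle()));
        }
    }
}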

Dropwizard does not natively support jOOQ, so I went looking for a library to add the support I needed. I found benjamin-bader's droptools library, and it seemed to do what I needed. I got it wired in, and soon everything was working! I could read objects and, with one or two lines, edit them from web requests. Wonderful. Then Dropwizard did a major update: version 3.0 was created to keep the javax namespace, and 4.0 was created to move to the jakarta namespace. These versions also moved a bunch of the internals of the Dropwizard libraries around, meaning supporting libraries like droptools would need to be updated.

That’s when I saw droptools had not been updated for 3 years… I decided I would open a GitHub issue. After not hearing anything for a few days, I started tinkering with it myself. I got an updated build working for Dropwizard 3.0 and opened a pull request back to the main repo. In doing this I realized that with the Dropwizard 3.0 and 4.0 split, we would need at least two versions of the library at any one time. On top of that, jOOQ 3.16 was the last release to support Java 11, and jOOQ 3.18 was out as the main community-supported branch. That means four versions are needed: two with Dropwizard 3.0 and two with 4.0, each paired with jOOQ 3.16 and 3.18. I rewrote the build pipeline from the Travis CI setup the repo had to GitHub Actions, and got all four versions compiling with some regex to make the code edits that were needed. I then used my earlier article to publish these four artifacts to Maven Central.

This allowed me to update to Dropwizard 4.0 and the jakarta namespace.

Next, I needed to get basic authentication working. My plan was to use Google OAuth as the login mechanism. I do not feel like writing my own auth for a side project, and of the providers out there (Google, Facebook, Twitter, GitHub) I thought Google had the most coverage of people with the least surveillance factor. It is easy enough to get set up with a developer account and get the client ID and secret I needed for OAuth.

Now I had to wire up OAuth on the application side. This is not too hard; I have done it many times with applications at work, but there I am usually using internal libraries. Heading over to the Dropwizard docs didn’t give me exactly what I wanted. They are pretty sparse, and when it comes to setting this up, they mention how to do OAuth but then say you need to write your own Authenticator and Authorizer for it. I don’t want to do that. I have done it before for servlet-based apps, but this is supposed to be a fun project, and on the open internet I want a supported auth library. I went searching for an example of how to use the OAuth system and could not find anything that got me what I wanted.

Then I remembered using Pac4J with other Java frameworks before; it is a security library that supports many login methods and many web frameworks. Dropwizard is listed as supported! But the last time the Dropwizard module was touched by a human, and not a bot, was over a year ago, and that was just for a small CI fix… I decided to try to get it working anyway!

The dropwizard-pac4j library is what I needed, and there is a dropwizard-pac4j-demo which walks you through setting everything up! I got the demo working and added Google login support, which wasn’t there by default. Then I spent a day trying to get this auth working on Dropwizard 3.0 or 4.0; I didn’t want to start on the older 2.x framework only to get stuck later. I downloaded dropwizard-pac4j and the demo locally and started editing them to update the dependencies and try to get everything onto the jakarta namespace.

This is where the dependency hell came in. dropwizard-pac4j-demo depends on dropwizard-pac4j, which makes sense. dropwizard-pac4j sets a lot of your project versions based on what it has in it. After updating a ton of dependencies to try to get it to compile, it came down to DropwizardTestSupport.java failing to run because it relies on jax-rs-pac4j. jax-rs-pac4j is still in the javax namespace and hasn’t been touched by a human in 6 months or more. That library would need to be updated because it links directly to the main Jetty server project, which has a dependency on jakarta.servlet.SingleThreadModel in ServletHolder.java, a class that has been deprecated and removed (discussion). I could not get the demo project to load with any combination of dependencies; they all wanted this Jetty 11 file, which should have had jakarta.servlet.SingleThreadModel removed, but doesn’t.

I went back and tried to move to Dropwizard 3.0, returning to the javax namespace, but that opened up a bunch of similar issues and a ton of conflicting dependencies across the different versions of code dropwizard-pac4j needed. I have my code on GitHub if anyone wants to continue this journey, or in case things are in a better place in the future.

With all of that, I thought I would go and check the documentation for Spring Boot. There is a giant page with in-depth, step-by-step instructions on how to get Google or GitHub auth working in your app. There is a night-and-day difference in support and thoroughness between the Dropwizard docs and Spring Boot’s. Seeing that, I decided to change my plans and move away from Dropwizard. Many on the Java subreddit will debate Spring Boot vs. Quarkus; for me, having only used servlets with embedded Tomcat in the past, starting with the popular Spring Boot makes the most sense.
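
For comparison, this is roughly all the Java the Spring Boot side needs; a minimal sketch based on their docs, with the Google client ID and secret supplied through application properties (spring.security.oauth2.client.registration.google.client-id and .client-secret), so treat the exact setup as an assumption about your project.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Minimal sketch: require login for every request and delegate to the
// OAuth 2.0 login flow that Spring Boot auto-configures from properties.
@Configuration
public class SecurityConfig {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .oauth2Login(Customizer.withDefaults());
        return http.build();
    }
}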

A cup of java with audio waves behind it, from Bing image creator

PodcastFeedHandler and Java-LAME

As I have mentioned in previous posts, I am working on a side project where I work with podcast feeds. Part of the idea is to act as a sort of middleware for podcasts. Give the application your podcasts, and you will be able to do things like search transcripts and, if a podcast has a long intro on each episode, remove it. Finding where someone mentioned something in an episode is hard to do by just scrolling through it; if computer transcripts are 95% accurate, that is better than nothing. As for trimming: when you go through a big backlog of podcasts, hearing the same one-minute intro every 30 minutes takes up a good chunk of time.

Since I use Java a lot for work and am comfortable there, I wanted to build this project with Dropwizard and React. The first bit of the project has been working on the audio recognition engine, which will be a whole post in and of itself. After that I needed to start gathering the supporting libraries I wanted. I tend to try to keep as much of my code native to the language I am using as possible, which means doing as much in Java itself as possible. There are a ton of libraries that call out to FFMPEG or a command-line app to handle the feeds; I don’t want to do that. If a side effect of this project is helping the community and writing some additional libraries, that is a win in my book.

PodcastFeedHandler

The first library I needed was one that could read AND write podcast feeds. With this app being middleware, we need to be able to do both. I found MarkusLewis‘s Podcast-Feed-Library, which works great for reading feeds in but does not support writing. I took a look at his library and architected mine similarly, except I added the ability to take your feed object and write it out again. In the end I made https://github.com/daberkow/PodcastFeedHandler. This library is written entirely in Java with no dependencies; using Java 11, I have all the native XML parsing I need. The rest of my project is in Java 17, but I thought others might find 11 useful. I am not sure there is anything in the library that would prevent me from going lower, but it’s 2023, and 11 is already an older LTS at this point. An exciting part of that sub-project was getting Maven publishing working. Now I can publish for my domain of ntbl.co.
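
This is not the library’s actual API, just a sketch of the kind of JDK-only parsing it leans on; everything needed to pull episode titles out of an RSS feed ships with Java 11 (the feed URL below is a placeholder).

import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.InputStream;
import java.net.URL;

// Read a podcast RSS feed using only the XML support built into the JDK.
public class FeedSketch {
    public static void main(String[] args) throws Exception {
        URL feedUrl = new URL("https://example.com/feed.xml"); // placeholder feed

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Harden the parser against XXE, since feeds are untrusted input.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        DocumentBuilder builder = factory.newDocumentBuilder();

        try (InputStream in = feedUrl.openStream()) {
            Document doc = builder.parse(in);
            NodeList titles = doc.getElementsByTagName("title");
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }
}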

This project also got me used to using GitHub Actions. I have used CircleCI before but thought I would try GitHub Actions, as they give you unlimited runtime for public repositories. Thanks Microsoft! I have the library built, signed, and uploaded via Actions. I wanted to make sure the library performed as I wanted, so I reached out to JetBrains to get an open-source maintainer license for IntelliJ. They kindly approved me!

Java-LAME

The next part of the project was parsing and fingerprinting the audio to search for duplicate segments. I will get more into that at a later time. To be able to fingerprint, I needed the audio in WAV/PCM format. Podcasts tend to be MP3 or AAC files. There are a ton of libraries to convert media in Java, except most of them have an external FFMPEG dependency. That is something I wanted to avoid. With 100 percent native Java code, I can more easily create the workers that will handle these duties; they can run anywhere Java runs, instead of dragging along external dependencies.

I found nwaldispuehl‘s java-lame, a copy of the fantastic native Java port of LAME, described as “This java port of LAME 3.98.4 was created by Ken Händel for his ‘jump3r – Java Unofficial MP3 EncodeR’ project: http://sourceforge.net/projects/jsidplay2/”. The library hasn’t been updated in a while but does everything I need. It can convert MP3s, but it needs a file location passed in before converting to a byte array, and I do not want to have to write to disk. The workflow would be: download the podcast, store it on disk, read it from disk, convert. We should be able to do all of this in memory. Doing these operations in memory also means the workers do not need a bunch of scratch disk space, which is a plus. It is more memory intensive but cuts down on disk usage. In 2023, I would rather have a slightly more memory-intensive application than do a ton of extra reads/writes to SSDs.

Throughout this project I have been thinking about how to minimize bottlenecks if I use it a lot, or have friends using the web app, and it is constantly reading and writing audio files. I forked the GitHub repo for java-lame and added paths to allow in-memory MP3 feeding and processing. This lets me add an S3 client to the workers and work on those files natively without ever writing to disk.
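
The fork’s real method names may differ, so the decoder below is only a placeholder interface; the point of the sketch is the shape of the in-memory path, with MP3 bytes coming in (from S3 or anywhere else) and PCM bytes going out, and no temp files in between.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the in-memory workflow; Mp3Decoder is a stand-in for the fork's
// decoder, not its actual API.
public class InMemoryDecodeSketch {

    interface Mp3Decoder {
        byte[] decodeToPcm(InputStream mp3Stream) throws IOException;
    }

    // Worker flow: fetch the MP3 from object storage as bytes, decode it
    // entirely in memory, then hand the PCM straight to the fingerprinter.
    static byte[] decodeInMemory(byte[] mp3Bytes, Mp3Decoder decoder) throws IOException {
        try (InputStream in = new ByteArrayInputStream(mp3Bytes)) {
            return decoder.decodeToPcm(in);
        }
    }
}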

This library has a bunch more functionality than I am using. It was a full LAME port, including the command-line system and processing; I am planning to remove that as I go to shrink the library. I also want to rework some of the core WAV/PCM conversion to compress in memory, and add functions to handle chunking the files and processing them piece by piece.

I took a This American Life episode, 1 hour in length, 67MB as an MP3. Converting it to the WAV/PCM I needed created a 678MB file, about a 10x size difference. Compressing that data losslessly with standard ZIP compression got the file down to 437MB, about 65% of the size of the original WAV/PCM. I can retrieve the ZIP data as a stream, and since this is audio I am not jumping around, so that works well for me. 678MB for a file doesn’t sound so bad; a worker then just needs 1GB of RAM or so to process it, right? My worry is other podcasts. Shows like Dan Carlin’s Hardcore History can easily be 5 hours per episode; that is a 200+MB MP3, and then 2-3GB of RAM to process one episode. If I can take 35% off for relatively small compute overhead, I want to.
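
The compression step itself needs nothing beyond java.util.zip; a minimal sketch, with the PCM bytes deflated in memory and read back later as a sequential stream for fingerprinting:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Compress decoded PCM with DEFLATE (the algorithm inside ZIP) and expose
// it again as a stream, since fingerprinting reads the audio front to back.
public class PcmCompression {

    static byte[] compress(byte[] pcm) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        Deflater deflater = new Deflater(Deflater.BEST_SPEED); // favor CPU over ratio
        try (DeflaterOutputStream out = new DeflaterOutputStream(bos, deflater)) {
            out.write(pcm);
        }
        return bos.toByteArray();
    }

    static InputStream decompressAsStream(byte[] compressed) {
        return new InflaterInputStream(new ByteArrayInputStream(compressed));
    }
}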

I will post more as I go through the project. If these libraries or the blog have helped anyone, feel free to drop a comment! I always appreciate it when people do.

(The photo is something I threw together on Bing Image Creator; it’s Java with audio 😊)

GameCube Mods

I have… many… older video game consoles. One thing I like to do after they have ended support and entered old age is add the available mods to them. This gives them updates and features after official support ends, such as downloading discs I own to a memory card or hard drive. Many times, older console CD/DVD drives will start to die, and with the way some systems (the Xbox especially) cryptographically pair the motherboard and CD/DVD drive, you can never replace the drive. Having a GameCube and recently seeing the amazing work of Maciej Kobus on PicoBoot, I had to give it a go. PicoBoot uses a Raspberry Pi Pico (an Arduino competitor) to jump into the boot process of the GameCube and load Swiss, the GameCube software manager.

The other piece of hardware that started me down this whole path was the LaserBear BlueRetro replacement controller board. This board replaces the board the controllers plug into and allows you to use modern Bluetooth controllers instead! You can pair any Xbox/PS3-5/Nintendo controller with Bluetooth to the console, and when you do, the controller port lights up blue!

This introduced me to the BlueRetro project, an awesome project which aims to let you use those modern controllers on classic consoles! There are many sellers using this open-source code to make products, many of them on AliExpress and other stores. The most impressive thing is that the adapters tend to be reasonably priced, from many vendors with good reviews!

The LaserBear mod is straightforward, and they include a great guide. It involves removing the old controller board, placing the new one’s ribbon into the slot, and then moving two power wires. Very straightforward, no soldering.

On the flip side is the PicoBoot install. I have not used a Raspberry Pi Pico before; I am more familiar with Arduinos and older microcontrollers. The code-uploading method is very neat: you hold a button and the device mounts as a drive on your computer; you copy the binary file onto the drive, and when you eject the device it writes the payload. The next part of the install involves soldering, and the soldering is a bit tiny. The install is only five wires, but you are working on a small board, with wiring that cannot be very long because of how the mod hooks into the boot process.

Luckily there are many guides on YouTube on how to do this. I had it working on the first try, and in the end stuck it behind where the BlueRetro lives. For PicoBoot to load, you also need an SD card adapter for the GameCube; those are available cheap on eBay/Amazon/your local mod shop.

The PicoBoot ended up a little too close to the controller board for my liking, so I added electrical tape to the top of the Pico to make sure no contact was made between these lovers. This was a fun afternoon, and now I can get a longer life out of this little guy. I also looked at an HDMI cable for the GameCube; the model of GameCube I have allows for digital out, but those cables are expensive, so I am using the analog out right now.

Publishing Java Libraries to Maven Central with GitHub Actions and Gradle (Gradle 7/8 in 2023)

Intro

I recently started a new, grander project for my spare time. The project involves working with podcast feeds, and I was going to use this as an opportunity to use a framework I haven’t used before, Dropwizard. I found a Java library that almost did what I needed in MarkusLewis’ Podcast Feed Library, except that library only reads feeds, and I want to be able to read AND write. I decided to make my own, and I wanted to host the library so anyone else can use it if they want. I created the repo and got a basic version working. This is the repo, which can be referenced as an example.

I am using GitHub Actions as my CI/CD pipeline, and I thought I should easily be able to host the final jar files there for Gradle. Turns out, this is sort of true… If you host your library on GitHub itself, as this doc goes over, you can easily upload and host the packages; except there is no unauthenticated access to them. No matter what, an end user has to authenticate with Gradle/Maven before downloading the assets. Instead of dealing with that (especially for a public repo), I thought I would try getting my package into Maven Central. Once I figured out the process, and found out how to publish with up-to-date Gradle, it was straightforward. I thought I would document it for the greater internet, and my future self (I have already used it). I know others have documented this as well, but I wanted to do it with Gradle instead of Maven, with GitHub Actions doing all the work.

Throughout this guide there are items you need to record to bring to the next step; I have underlined the important ones.

Steps:

  • Setup Repo
  • Register For Maven/Sonatype
  • Setup GPG for Repo
  • Configure GitHub
  • Publishing

Setup Repo

Set up a normal GitHub repo and a blank Gradle project. More on the repo/Gradle config later.

Register for Maven/Sonatype

Sonatype is the company that runs Maven Central. They allow free hosting and registration for Java libraries; the main requirement is that each file be under 1GB in size. This adventure starts over at their Jira, where you register for an account. This Jira account will be your credentials for all future interactions with Maven Central, so make it secure and use a long password! Once you have an account, use the above link to go to their Jira again and create an issue. This ticket grants you permission to publish to https://s01.oss.sonatype.org/; you can also log in there with the credentials created for Jira. You will have to verify either your GitHub account or your domain before publishing. A bot handles all of this, and I had it done in 20 minutes or so. I had a domain I wanted to use, and there is a guide on how to go through that process.

Once you register a group ID, you can use this account to publish anything under that ID; for example, I registered my domain of ntbl.co (making the group ID, in Java terms, “co.ntbl”). First I published the library above, then I added a fork of the Java-LAME library; I submitted a ticket for the second library just to be sure, and the bot tells you that you are already good to go.

Setup GPG for the Repo

One requirement for posting assets to Maven Central is to GPG sign the packages. This means we need to generate a key, then upload the private portion of the key to GitHub secrets and the public portion to a public key server. Below are the commands to do this; the key ID is an example one of mine, so you will need to replace it with yours:

gpg --gen-key
gpg --list-keys
gpg --export --export-options backup --output public.gpg  co.ntbl.podcastfeedhandler
gpg --export-secret-keys --export-options backup --output private.gpg co.ntbl.podcastfeedhandler
gpg --export-ownertrust > trust.gpg
gpg --list-secret-keys --keyid-format LONG
gpg --export-secret-keys --armor 3F6F38BA13BEBB6941F823DCEFAAE414FF016215
gpg --keyserver keys.openpgp.org --send-keys 3F6F38BA13BEBB6941F823DCEFAAE414FF016215

Line 1 creates the key; for the name you enter the full project name, for example co.ntbl.podcastfeedhandler (group ID plus the project root name). The email can be any email you have. Then comes the passphrase, which will be used later and uploaded to GitHub secrets; I suggest using a password generator and making it long, since you should never have to actually type it in. Next, the exports are for backing up the key in case the system you created it on dies and the data in the GPG instance is lost. You shouldn’t generally need them after this is set up, but it felt like best practice.

Record the output of the 7th line (the armored --export-secret-keys); that will need to be added to the GitHub secrets in the next step.

Configuring GitHub

The last command publishes the public key to a global keyserver which is checked against; if this publish is not done, verification of the package will fail.

The two items we need to upload to GitHub for GPG are the password added when the key was generated, and the private key we got from the 7th command.

Go to your GitHub repo, then go to the Settings tab. Using the left-hand navigation, go to “Secrets and variables”, and the “Actions” submenu.

We need to create 4 secrets; these need to be kept secret:

  • GPG_SIGNING_KEY – The private key; copy the text from the “--export-secret-keys --armor” command, which formats it correctly. The string should start with “-----BEGIN PGP PRIVATE KEY BLOCK-----”
  • GPG_SIGNING_PASSPHRASE – The password added when generating the key
  • OSSRH_TOKEN – This is the password you set for Sonatype’s Jira
  • OSSRH_USERNAME – The Sonatype Jira username

Below is a minimal example build.gradle for your project. I removed a lot of the normal extra things you would add to a build.gradle; to see a full example, visit this GitHub repo.

Gradle

plugins {
    id 'java-library'
    id 'signing'
    id 'maven-publish'
}

group = 'co.ntbl'
version = '0.1.2-SNAPSHOT'
rootProject.description = 'Read and Write Podcast feeds from Java.'

sourceCompatibility = 11
targetCompatibility = 11

tasks.register('createProperties') {
    doLast {
        new File("$projectDir/src/main/resources/version.properties").withWriter { w ->
            Properties p = new Properties()
            p['version'] = project.version.toString()
            p.store w, null
        }
    }
}

classes {
    dependsOn createProperties
}

jar {
    manifest {
        attributes(
                "Class-Path": "co.ntbl.podcastfeedhandler",
                "Main-Class": "PodcastFeedHandler",
                "Implementation-Title": project.name,
                "Implementation-Version": version,
                "Implementation-Vendor": "Daniel Berkowitz",
                "Build-Jdk": org.gradle.internal.jvm.Jvm.current(),
                "Gradle-Version": GradleVersion.current().toString()
        )
    }
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
    from {
        configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) }
    }
}

java {
    withJavadocJar()
    withSourcesJar()
}

ext.admin = System.getenv("MAVEN_USERNAME")

signing {
    required { admin }
    def signingKey = System.getenv("GPG_SIGNING_KEY")
    def signingPassword = System.getenv("GPG_SIGNING_PASSPHRASE")
    useInMemoryPgpKeys(signingKey, signingPassword)
    sign publishing.publications
}

repositories {
    mavenCentral()
}

dependencies {
...
}

//
// MAVEN
//

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java

            pom {
                name = 'PodcastFeedHandler'
                description = rootProject.description
                url = 'https://github.com/daberkow/PodcastFeedHandler'
                licenses {
                    license {
                        name = 'MIT License'
                        url = 'https://github.com/daberkow/PodcastFeedHandler/blob/main/LICENSE'
                        distribution = 'repo'
                    }
                }
                developers {
                    developer {
                        id = 'daberkow'
                        name = 'Daniel Berkowitz'
                        email = 'dansberkowitz@gmail.com'
                    }
                }
                scm {
                    connection = 'scm:git:git://github.com/daberkow/PodcastFeedHandler.git'
                    developerConnection = 'scm:git:ssh://git@github.com:daberkow/PodcastFeedHandler.git'
                    url = 'https://github.com/daberkow/PodcastFeedHandler'
                }
            }
        }
    }
    repositories {
        maven {
            name = "OSSRH"
            if (admin) {
                credentials {
                    username = System.getenv("MAVEN_USERNAME")
                    password = System.getenv("MAVEN_PASSWORD")
                }
            }
            def releasesRepoUrl = 'https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/'
            def snapshotsRepoUrl = 'https://s01.oss.sonatype.org/content/repositories/snapshots/'
            url = version.endsWith('SNAPSHOT') ? snapshotsRepoUrl : releasesRepoUrl
        }
    }
}

A few things to point out. Under publishing you need to enter all the information for this repository/project. If you have another publishing section in your Gradle file, you will need to condense them together; having multiple leads to Gradle getting confused and usually using the first one it sees. You will also see some variables such as “MAVEN_USERNAME”; these get the values of our secrets during the GitHub Actions publish process, which we will go over next. I take the version and check whether it ends with “SNAPSHOT” to decide whether we should publish to the snapshot repo or the release repo.

I also am using the build.gradle version as the canonical version. This variable could live in a Gradle settings file or a properties file, but for ease I have it in the build file. I want one location for the version; having multiple leads to more confusion during releases. The createProperties task creates a properties file that is added to the build, giving the code itself a way to see which version it is. There are more elaborate ways to do this, but it works for me. This task does need the resources folder to exist under “src/main”; if your project does not have one, the easiest way to add an “empty folder” to Git is to create the “resources” folder and then add the following .gitignore to it. This makes sure the contents of the folder are never committed.

# Ignore everything in this directory
*
# Except this file
!.gitignore
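
On the Java side, the generated file can then be read off the classpath with a few lines; a minimal sketch, assuming the resource name written by the createProperties task above:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Reads the version.properties file that the createProperties Gradle task
// writes into src/main/resources, so the running code knows its own version.
public final class Version {

    public static String get() {
        Properties props = new Properties();
        try (InputStream in = Version.class.getResourceAsStream("/version.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // Fall through and return the default below.
        }
        return props.getProperty("version", "unknown");
    }
}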

Requirements for posting to Maven Central include: sources, checksums, Javadocs, and signed packages. I am using useInMemoryPgpKeys, part of the signing plugin, to sign in GitHub Actions. I have seen others use sign configuration.packages instead of sign publishing.publications; I found that not to work in many trials.

GitHub Actions

In your repository, create a .github folder, then a workflows folder inside it. Below is my publish.yml, which is also available here. This file is currently set to publish when a new release is tagged; you can also change this to trigger on commits or something else.

name: Publish package to the Maven Central Repository and GitHub Packages
on:
  release:
    types: [published]
jobs:
  publish-release:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout latest code
        uses: actions/checkout@v3

      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          distribution: adopt
          java-version: 11
      - name: Validate Gradle wrapper
        uses: gradle/wrapper-validation-action@e6e38bacfdf1a337459f332974bb2327a31aaf4b
      - name: Publish package
        uses: gradle/gradle-build-action@67421db6bd0bf253fb4bd25b31ebb98943c375e1
        with:
          arguments: publish
        env:
          MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
          MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
          GPG_SIGNING_PASSPHRASE: ${{ secrets.GPG_SIGNING_PASSPHRASE }}

Here we convert GitHub secrets to local environment variables. Note the change in name from OSSRH_USERNAME to MAVEN_USERNAME and OSSRH_TOKEN to MAVEN_PASSWORD; this is simply to make the variables clearer, and they can be whatever you wish. We also validate the Gradle wrapper for this final build. Another note: in my setup we are not passing assets from earlier builds into this publish stage, we are rebuilding the jar completely; depending on the size of your job, this may or may not make sense. If you have all of this set up correctly, you should be able to commit the code, tag a release with “0.0.1-SNAPSHOT” (or any version ending in SNAPSHOT), and it should publish to the snapshot repo.

Publishing

Now that we have working snapshot releases, we need to do a full release. This involves using the credentials created with the Sonatype Jira account earlier to log into the Nexus panel. When you are ready, go to GitHub and mark a new release with a version that does not end in SNAPSHOT. The GitHub Action should finish successfully, yet your asset will not be up at https://repo1.maven.org/maven2/ yet. Head over to https://s01.oss.sonatype.org/ and click “Log In” in the top right.

Select “Staged Repositories” on the left. Note: this server seems to be very busy during the day, doubly so if it is a weekday. You will frequently see “There was an error communicating with the server: request timed out”. Come back later or keep hitting refresh.

Clicking a repository will allow you to browse the contents and make sure it looks how you want it to. When you are ready, you click “Close” at the top of the pane to finalize this version. Closing the repository starts all the checks on it: making sure the GPG signatures, sources, Javadoc, and checksums are there. If they are not, you will get an error and be forced to Drop the release and try again. You will also get a vulnerability scan, including dependencies, sent to your email on file.

After the repo successfully closes, you can click Release! This is another stage where you can get many timeouts and be forced to wait until the server is less busy. After it successfully releases, it takes about 30 minutes to show up in the global Maven repo.

Selecting “Repositories” on the left allows you to browse the global Snapshots and Releases repositories; I have found this screen updates more quickly than other places for seeing whether your assets are starting to propagate, including faster than the main Maven repo.

After about 30 minutes, your release should start to show up in Maven Search, although it can take longer. Another popular place to check packages is mvnrepository; I have found that site seems to take about a day to find new packages.

I hope this guide can help someone (and probably my future self); feel free to drop a comment if it helps or if something is unclear!

Footnotes / Useful links

https://theoverengineered.blog/posts/publishing-my-first-artifact-to-maven-central-using-github-actions

Ender 3 Pro Safety PSA

I have had an Ender 3 Pro for over 3 years. A year and a half ago I replaced the main controller board with the v4.2.7 silent board. I was surprised how much that lowered the noise from the 3D printer. I have also added more parts like the auto leveling bed probe; doing this had me compiling my own firmware to make sure all the add-ons worked.

Everything was great until recently, when I noticed a bad smell coming from the printer. I had gotten an enclosure over the holidays and thought it might just be that I wasn’t used to the smell being so concentrated. After it continued through a few more prints, I started searching to see if anyone else had this happen; it was a much worse smell than the normal PLA smell. After searching the Ender 3 subreddits, I found posts talking about how the terminals can melt.

I opened up the controller compartment and was shocked to see how badly the terminals had been melting! Apparently the wires are tinned, and over time, with the movement of the printer, they work their way out of the terminal. This leads to the power arcing and melting the plastic. The suggested fix is to replace the terminals (or the board), and then install “ferrules” on the ends of the cleaned cables.

I contacted Creality for support; they redirected me to Amazon, where I opened an email chain that Amazon never responded to. The v4.2.7 board was $30, so I bought a new one rather than deal with all of that. When I installed the ferrules and the new board, the printer came right back up. Inserting my SD card, which still had the last firmware I built on it, brought the printer 100% back.

Quick Game Review: Firewatch

Firewatch is a game I have had on my virtual shelf for a long time. I recently got a Steam Deck and figured I would give the game a try; it is even “verified” for the Steam Deck. I went in not knowing anything about the story/game/art style. This is a story-driven game that, through a quick opening scene, puts you in the shoes of the main character, who is spending a summer in an old fire tower.

There are moments where the game feels action-y, like you have to get somewhere quickly, but in the end this game is similar to others that fall into the “walking simulator” genre. Most of the game you are going point to point, with the story unfolding as you go. I went through the whole game in a little over 3 hours, and I HIGHLY enjoyed it. The story was great and takes your actions/chats into account, changing the outcome as you go. There is also a way to play through the game with creator commentary on: you play the game again like normal, but around the map you see stands that are commentary points. Shortly after finishing the game for the first time, I started another playthrough with this on. You know a game has to be good when, years after it was created, it still has an active subreddit.

I would recommend this game, but parts of the story are sad, so be prepared for that. At its standard price of $20 USD I think it is worth it, and this game also goes on sale regularly. I was excited to see that the developer was planning another game, a new one set in Egypt. Then it turned out the studio got bought by Valve and everyone is now on different projects.

(Art from fans on the subreddit)

Adding Content Security Policy (CSP) Support to Embedded Tomcat 10

Continuing the series on hardening embedded Tomcat in Java to meet Nessus security scans, I am back with an example of adding a Content Security Policy to your app. There are ways to provide CSP policies in a more standard Tomcat server, but with an embedded server it can be more difficult.

I have used an embedded Tomcat server for years to build applications. The following example uses Tomcat 10, but the principle is the same for Tomcat 9. The main difference in the Tomcat 9 to 10 transition is moving from the javax namespace to jakarta. With more and more libraries, such as jOOQ, moving to more modern Java versions, and some of the new Java versions offering good performance improvements out of the box, it may be time for everyone to move to the jakarta namespace (even if that means leaving some libraries, such as Google OAuth, behind).

In my recent example project going over how to use Pac4J for OAuth with Tomcat 10, I have added an example of what the FilterBase class would look like. You then need to initialize the filter where you are starting the Tomcat thread. That will add the needed header to all the web requests your application processes.
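
As a rough sketch (not the exact class from my repo, and using a plain jakarta.servlet.Filter rather than Tomcat’s FilterBase), the filter plus its registration on the embedded Tomcat context looks something like this; the policy string is just an example and should be tightened for your app.

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;

import org.apache.catalina.Context;
import org.apache.tomcat.util.descriptor.web.FilterDef;
import org.apache.tomcat.util.descriptor.web.FilterMap;

import java.io.IOException;

// Adds a Content-Security-Policy header to every response from embedded Tomcat 10.
public class CspFilter implements Filter {

    // Example policy only; adjust for the scripts and styles your app actually loads.
    private static final String POLICY = "default-src 'self'; frame-ancestors 'none'";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (response instanceof HttpServletResponse) {
            ((HttpServletResponse) response).setHeader("Content-Security-Policy", POLICY);
        }
        chain.doFilter(request, response);
    }

    // Call this on the Context you created where you start the embedded Tomcat server.
    public static void register(Context context) {
        FilterDef def = new FilterDef();
        def.setFilterName("cspFilter");
        def.setFilter(new CspFilter());
        context.addFilterDef(def);

        FilterMap map = new FilterMap();
        map.setFilterName("cspFilter");
        map.addURLPattern("/*");
        context.addFilterMap(map);
    }
}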

Pac4J Integration with Embedded Tomcat 10 using Generic OAuth via Keycloak

(I will ramble for a bit; if you just want the guide, jump below.) I want to start a series more around programming than the other articles I have put up here. I know everyone here knows me as the good-looking hardware-hacking guy, but most of my time at work is spent on programming and systems automation. I haven’t used the programming tag on this blog in a while, and I want to start this new series by discussing upgrading to Tomcat 10.

I have for years been using an embedded Tomcat + servlet backend with a jQuery frontend for the different small webapps I have made. I know that sounds very old to anyone who learned webdev in the recent past. I am using Dropwizard and React more and more these days, but that does leave my legacy projects on this old framework. While not the newest or flashiest thing, it does perform well, with some systems handling hundreds to thousands of calls a second. (My hope is to get approval to open source some of them soon.) With the changes in the Java universe (after Oracle bought Sun, decided to ruin everyone’s fun, and caused splintering), I had to start moving from the traditional javax servlet namespace that Tomcat 9 and earlier used to Tomcat 10’s jakarta namespace. Migrating the servlets themselves was not so bad. The first big issue arose around Google’s OAuth library. I have used this library for a long time; it provides the easy-ish ability to connect to any OAuth server you want (I have specific ones at work I use) for authentication. Recently Google, doing what they do, marked this library as Maintenance Mode Only, stating they would only do emergency security fixes; overall, it is abandoned. Not what you want to hear from your authentication library. They also are not planning to move it over to the jakarta namespace, leaving me stuck on Tomcat 9 for as long as it has support. That should be a long time, since many big companies and projects are right where I was, and the plan is for 9 and 10 to be developed together for a long time. It does mean that I cannot use the newer features of Java, though. From this I knew I had to start looking at other options.

Every time I have to work on auth systems, it is a maze, and once I get it working, I want it to stay working for a while, where I hopefully don’t have to touch it. There are not a ton of OAuth libraries for Java backend systems, and I wanted one that I knew had community support and would last for a while. That brought me to the popular PAC4J project. A lot of the guides I found for using it were around javax and/or using PAC4J to integrate with Google Auth, Facebook Auth, or other specific auth systems. I want to be able to use the systems at work, or a more generic provider such as Keycloak. I spent a good amount of time bringing different bits of information together to get a fully working PAC4J 5.7 + Tomcat 10 + servlets setup. I posted it on GitHub, and below is a guide on how to configure Keycloak in Docker to demonstrate it. This took a fair amount of time to put together; I hope it helps you out there. If it does, please give the post a like or star the repo, it pushes me to keep doing these tutorials.

Keycloak Setup

To start at the beginning, Keycloak is a webserver that works as an Identity Provider (IDP). It has its own database for users and groups, or it can link into many other systems such as Google, Facebook, and many more. When hooking up to it, you can decide to use SAML or OpenID. I tend to use OAuth, which OpenID Connect builds on top of (I think that’s the right way around). This guide will use Keycloak as our IDP, then connect to it with PAC4J. I do believe PAC4J has a more native and easier-to-use Keycloak integration, but where is the fun in that?

Keycloak ships a Docker image for development out of the box. You can optionally attach persistent storage so users and groups survive deleting and rebuilding the container. Using this container with local storage is not a good setup for production; you should back the container with a database like Postgres instead. But for our home testing this setup is fine and will work well. I use persistent storage so I can have a local Keycloak that can be updated without recreating everything. More information about the Keycloak Docker container can be found here.

Setting up Keycloak with persistent storage

docker volume create keycloak
# We need to set the keycloak user to be the owner of that folder
docker run --rm --entrypoint chown -v keycloak:/keycloak alpine -R 1000:0 /keycloak
docker run -p 8081:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -v keycloak:/opt/keycloak/data/h2 quay.io/keycloak/keycloak:20.0.2 start-dev --http-relative-path /auth

Setting up Keycloak WITHOUT persistent storage

docker run -p 8081:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:20.0.2 start-dev --http-relative-path /auth

After a minute you should be able to browse to “http://127.0.0.1:8081/auth” and get:

Keycloak can use OpenID Connect as its connector, which is more of a superset of OAuth itself. For this guide I want to go over using plain OAuth, so we will tweak some of the Keycloak configs to make it work how we want, as a generic OAuth provider.

  • Login with the username and password of “admin” that we set in creating the container
  • Click the top-left dropdown where it says “master” and create a new realm; we will call it “example”
  • Now that we are in your new realm, go to “Users” on the left
    • Create a new user; let’s name this new user “Jon” (or whatever your heart desires)
    • Once created, click the “Credentials” tab and set a password; I disable “temporary password” because this is not a production auth system
  • Click “Client Scopes” in the top left
    • Create client scope
    • Name “openid”
    • Type “Default”
    • Save
    • “Mappers” tab
    • Add predefined mapper
    • I did “full name”, “email”, “username”, “groups”
    • Note: I found the add screen here buggy; if you page to the next list of mappers, saving doesn’t work, so do one page at a time
  • Click “Clients” in the top left
    • Create client
    • Set the Client ID to “example-client”; I like to enable “Always display in console”, then click “Next”
    • Enable “Client Authentication”, then save.
      • “Client Authentication” enables “confidential access type”, this is the classic OAuth where you need a client secret to access the server
    • For this demo we need to add “http://127.0.0.1:8080/oauth/redirect*” to the “Valid redirect URIs”, and save
    • Go up to “Client scopes” and add the “openid” scope as a “Default” type
    • Note: I have found with PAC4J + Keycloak specifically, if you do not add the openid scope, you will get through auth, but when PAC4J goes to get user information you will get an error of WARN org.keycloak.events type=USER_INFO_REQUEST_ERROR, realmId=59d04435-daf4-4ca7-8623-195769911c0e, clientId=null, userId=null, ipAddress=172.17.0.1, error=access_denied, auth_method=validate_access_token. You may also see WARN org.keycloak.services KC-SERVICES0091: Request is missing scope 'openid' so it's not treated as OIDC, but just pure OAuth2 request if your client is not requesting the openid scope.
    • Go to the “Credentials” tab, view the “Client secret” and save that for later

At this point going to http://127.0.0.1:8081/auth/realms/example/.well-known/openid-configuration will give you the OAuth endpoints we need.
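
A quick way to sanity-check that endpoint from Java, using only the JDK 11 HttpClient (a small sketch; the URL matches the realm created above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetch the OIDC discovery document from the local Keycloak realm and print it;
// the authorization and token endpoint URLs we need are all in this JSON.
public class DiscoveryCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:8081/auth/realms/example/.well-known/openid-configuration"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}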

Using The Example Code

Clone down the repository over at my GitHub. The README.md should contain what you need to get going! After adding your client secret from above, if you followed this guide, the demo should work for you, with http://127.0.0.1:8080/ being the web app and http://127.0.0.1:8081/auth being the auth server.

In the end you get a simple screen like this one that lets you play around with the functionality and experiment with the code.

Bitbucket: Convert From Standalone ElasticSearch to Embedded OpenSearch

At work I maintain random stacks of software, and sometimes help people with other stacks that they maintain. Recently I was asked to help bring an Atlassian Bitbucket stack up to date. In the past, Atlassian always included a built-in ElasticSearch (ES) server. This was used to index code in Bitbucket and allow searching. It is not a hard requirement for the server to function, but it is important for the user experience.

When an environment moves from Bitbucket Server to Bitbucket Enterprise, you are supposed to go to a standalone ES over the embedded one for performance. I don’t know if people elsewhere commonly do this, but the stacks I have seen have just continued to use the embedded version. Admittedly, these are smaller instances; at scale I would understand it. That was until recently, when, due to a licensing change, Atlassian could no longer embed an up-to-date ElasticSearch. For a while they decided the best way to move forward was to keep bundling the last version from before the licensing change (I think 7.10).

This works until you have an infosec team use Nessus and find you have an out-of-date ES sitting around when 7.16, or the 8.0 branch, is out. Because of all that, this one stack had moved to a standalone ES cluster. We also now had to install the Atlassian security plugin into ES; this was not a simple task, and the plugin only supports a few versions of ES, none of which were current. At least then we were in a BETTER spot with security.

Fast forward a few months of this mess, and Atlassian moved Bitbucket from ElasticSearch to OpenSearch. OpenSearch is Amazon’s fork of ElasticSearch at version 7.10.2, created to get around the new licensing terms. Normally, if you were still using the embedded version of ES, your next Bitbucket upgrade would move you to OpenSearch. Because this stack had already moved to a standalone instance, it did not migrate over. We were now in the worst of both worlds: off the supported path and unable to get back on it. If you search the Atlassian documentation there are guides on how to move to a standalone instance, but not back. A big catch I found is that the embedded version uses default passwords that are not easy to find, which makes migrating back harder.

Migrating Back

Below are some notes I have on migrating back. Hopefully they help someone.

There are two main folders we will work in. One is your Atlassian Bitbucket installation folder for the current version, which I will call %atlassian-install%; the other is your Bitbucket data folder, which carries over between versions as you upgrade, which we will call %bitbucket-home%. (Note: I did all this on Linux, but I am writing the variables that way because it is easy.)

Default %atlassian-install% is /opt/atlassian/bitbucket/7.21.7, or your current version. Default %bitbucket-home% is /var/atlassian/application-data/bitbucket, but I tend to move that to /opt.

Under %atlassian-install%/opensearch/plugins/opensearch-security/securityconfig/internal_user.yml are the details Bitbucket needs to connect to this OpenSearch instance. The default password is “bitbucket-changeit”. To create a new hash of a password, use %atlassian-install%/opensearch/plugins/opensearch-security/tools/hash.sh; this file does not come with execute permissions on Linux, so you will need to add them.

Go into %bitbucket-home%/shared/bitbucket.properties if you have one (this file is created as you migrate between versions or databases) and remove any legacy ElasticSearch username/password/URL settings, for example plugin.search.elasticsearch.baseurl or plugin.search.config.baseurl, as shown in the documentation. The properties file overrides settings you have in the instance/database. You may have a systemd service file to automatically start Bitbucket; if it runs start-bitbucket.sh with -ns or --no-search to use a standalone search instance, remove the no-search option.

Now start Bitbucket and go to Administration -> Troubleshooting and support tools -> System Information; you will see that Search failed to connect. Go to Administration -> Server settings, then enter your new search information there. If you just removed ElasticSearch and started OpenSearch with the server, all you have to do is make sure the port is right (by default 7992, I believe), then make sure the username is “bitbucket” and the password is “bitbucket-changeit”. If you get a connection error, it may be that you have to set up a TLS trust between Bitbucket and OpenSearch, but that is outside the scope of this guide.

Below is the default %bitbucket-home%/shared/search/config/opensearch.yml

cluster.name: bitbucket_search
node:
  name: bitbucket_bundled

network.host: _local_
discovery.type: single-node

path:
  logs: ${BITBUCKET_HOME}/log/search
  data: ${BITBUCKET_HOME}/shared/search/data

action.auto_create_index: false

http.port: 7992
transport.tcp.port: 7993

# The OpenSearch security plugin stores its configuration in an index in the cluster itself. On startup if the
# security index doesn't exist yet, setting this to true will cause the security plugin to read the yml files and
# configure the index using the contents of the files.
plugins.security.allow_default_init_securityindex: true

# Using the yml files with default initialisation, we create a bitbucket user and give it the all_access in-built role.
# However, access to the REST API is disabled by default even for the all_access role so we need to explicitly give
# it permission here so that the bitbucket user can access the OpenSearch REST API.
plugins.security.restapi.roles_enabled: ["all_access"]

# Mandatory TLS setup for transport layer
plugins.security.authcz.admin_dn:
  - CN=BITBUCKET
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.pemcert_filepath: bitbucket.pem
plugins.security.ssl.transport.pemkey_filepath: bitbucket-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem

# Logs audit events to bitbucket_search_server.json
plugins.security.audit.type: log4j
plugins.security.audit.config.log4j.logger_name: audit
plugins.security.audit.config.log4j.level: INFO
Optiplex 5050 Back view

Dell Optiplex 5050 Micro Windows Server Installation

Recently I was able to pick up some Dell Optiplex 5050 Micros for $60 on eBay. These are tiny machines, with an Intel i5-7500T (4-core/4-thread) CPU, 8GB of RAM, and a 256GB SSD. For $60 they needed a power supply, but those are easy to come by. My plan was to replace my aging Intel NUC that runs the core domain services for the house (AD, RADIUS, CA), and perhaps the aging firewall too, if I could figure out how to get a second NIC into the system; more on that later.

My philosophy when running a standalone network (even with internet access) is to have at least one of your Domain Controllers (DCs) be a physical box at all times. An alternative is a dedicated hypervisor with local disks, but anyone who has tried to start a VM manually on VMware knows how painful it can be without any interface to the system other than the command line. In addition, these days it is easy to make all the DCs virtual, but if you ever have to cold boot your environment, you run into not having DNS. Without DNS, things like vCenter and vSAN can’t come up cleanly, and there are more and more knock-on effects. Having a physical machine allows you to bring DNS and core services up first, then start all the other services that rely on your domain.

The first task I had was to get one of the Optiplex 5050s ready for Windows Server. I started with upgrading the RAM to 16GB, because I had it lying around. After that, since this was an eBay purchase, I updated the firmware/BIOS and ran diagnostics before it touched the home network. The seller was nice enough to install Windows 10 Pro on the machine, which has a license in the BIOS, but I formatted the drive before ever booting that install. People are generally nice, but who knows what was in that image. After getting Windows Server 2022 installed I hit my first issue: Server 2022 does not have a driver for the Intel I219-V NIC that is in this chassis.

I tried getting the drivers from the Dell site, but Windows refused to use them because they were for Windows 10 and not the Server edition. My fix was to go update the driver manually, choose “Browse my computer for drivers” so I could pick from a list, then select the “Intel” “Intel(R) Ethernet Connection (2) I219-V” driver. I had a USB Ethernet dongle that worked to get online and at least be able to see that driver. Now the box is happily online. The main issue with this technique is that I keep getting an “Optional” Windows Update for an updated driver that never seems to install. I think that is because I installed the Dell driver, but it never runs correctly.

Another thing I try to do with most systems, especially the systems in charge of security, is get Virtualization Based Security running. This is a newer Windows feature where core elements that need to maintain secrets are run in tiny Hyper-V containers. The user never sees it, but it gives the system added protection. If you run “msinfo32”, you can get an output of its status. Most of the time, you need to enable virtualization support in the chipset, then add the system feature of “Host Guardian Hyper-V Support”. On older systems (Windows Server 2019) and desktops, I think it is just called “Hyper-V”; then you get these features enabled.

On paper this machine is 78% faster than the Intel i5-3427U, and that makes a world of difference. The old system took a while to boot and a while to back up, which is what spurred me to upgrade. This system feels amazingly fast for $60. Keep in mind that it cost less than a Raspberry Pi 4, has an Intel CPU, and I didn’t have to wait the years Raspberry Pis take right now!

I have the main DC run domain services, DNS, Network Policy Server (RADIUS), and certificate services. For the first two, I just had to install Domain Services and DNS and the system started acting in that role. For NPS, I exported the config from the old DC, installed the service on the new one, and imported the config. As a reminder, Domain Services has to be installed first; if you already have NPS/Certificate Services installed and then try to add Domain Services, it will tell you it can’t install. For Certificate Services, I added a new CA, stopped the old one’s service, and removed it as an enrollment agent in ADSI. My 802.1X and other certs given out by GPO are short-lived, around 90 days; I will wait for the old ones to expire and systems to naturally get newer certs.

With the second system, I thought I would try some hardware hacking. My hope was to repurpose it as a firewall, replacing my aging Dell Optiplex 990 from 2011. To do this I would want to add at least one more NIC to the system. I ordered a 1Gb Ethernet NIC that goes where the WLAN chip goes. At first it did not show up in Linux and I was worried. It turns out the system BIOS had “WLAN” disabled, and enabling that turned on that PCIe channel; then the card showed up. Mounting the Ethernet port in the extra serial blank this system has made it look very clean and easy. I had to tuck the wire away as it came from the front of the unit to the back and had the SATA drive sitting on it. After playing with it a good amount (removing the card, reseating it, putting electrical tape under it), I was able to get the link up, but not reliably at 1Gb/s; it tended to drop down to 100Mb/s a lot when coming up. While things like loosening the screw holding it down and putting electrical tape under it helped, the system was not reliable enough for me to feel comfortable using it for homelab production. I also shaved down the connectors at the end of the card; with them being that large, the screw couldn’t easily get between them. That did not help much.

In the end, I am enjoying the one system as a new DC, and eventually I will figure out what I want to do with the other one. With an NVMe slot and SATA internally, in addition to being able to go up to 32GB of RAM on a low power budget, they are very capable little machines.