Software

Bitbucket: Convert From Standalone ElasticSearch to Embedded OpenSearch

At work I maintain random stacks of software, and sometimes help people with other stacks that they maintain. Recently I was asked to help bring an Atlassian Bitbucket stack up to date. In the past Atlassian has always included a built-in Elasticsearch (ES) server, used to index code in Bitbucket and make it searchable. It is not a hard requirement for the server to function, but it is important for user experience.

When an environment moves from Bitbucket Server to Bitbucket Data Center, you are supposed to move to a standalone ES instance instead of the embedded one for performance. I don't know if people elsewhere commonly do this, but the stacks I have seen just continued to use the embedded version. Admittedly these are smaller instances; at scale I would understand it. That was fine until recently, when, due to a licensing change, Atlassian could no longer embed an up-to-date Elasticsearch. For a while they decided the best way forward was to keep bundling the last version from before the licensing change (I think 7.10).

This works until an infosec team runs Nessus and finds an out-of-date ES sitting around while 7.16 and the 8.0 branch are out. Because of all that, this one stack had moved to a standalone ES cluster. We also had to install the Atlassian security plugin into ES; that was not a simple task, and the plugin only supports a few versions of ES, none of which were current. But at least we were in a BETTER spot with security.

Now fast forward a few months of this mess, and Atlassian moved Bitbucket from Elasticsearch to OpenSearch. OpenSearch is Amazon's fork of Elasticsearch at version 7.10.2, created to get around the new licensing terms. Normally, if you were still using the embedded ES, your next Bitbucket upgrade would move you to OpenSearch. Because this stack had already moved to a standalone instance, it did not migrate over. We were now in the worst of both worlds: off the supported path, and unable to get back onto it. If you search the Atlassian documentation there are guides on how to move to a standalone instance, but not back. A big catch I found is that the embedded version uses default passwords that are not easy to find, which makes migrating back harder than it needs to be.

Migrating Back

Below are some notes I have on migrating back. Hopefully they help someone.

There are two main folders we will work in: your Atlassian Bitbucket installation folder for the current version, which I will call %atlassian-install%, and your Bitbucket data folder, which carries over between versions as you upgrade, which I will call %bitbucket-home%. (Note: I did all this on Linux; I am just using the %...% notation because it is easy.)

The default %atlassian-install% is /opt/atlassian/bitbucket/7.21.7 (or whatever your current version is). The default %bitbucket-home% is /var/atlassian/application-data/bitbucket, but I tend to move that to /opt.

%atlassian-install%/opensearch/plugins/opensearch-security/securityconfig/internal_user.yml holds the details Bitbucket needs to connect to this OpenSearch instance. The default password is "bitbucket-changeit". To create a new hash of a password, use %atlassian-install%/opensearch/plugins/opensearch-security/tools/hash.sh; note that it does not ship with execute permissions on Linux, so you need to add them first.
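
For example, something like this (a minimal sketch; the -p flag is how the stock OpenSearch hash tool takes a password, and you may need to point JAVA_HOME at a JRE first):

ATLASSIAN_INSTALL=/opt/atlassian/bitbucket/7.21.7
# The bundled hash tool does not ship with execute permissions on Linux
chmod +x "$ATLASSIAN_INSTALL"/opensearch/plugins/opensearch-security/tools/hash.sh
# Generate a hash for the new password and paste the output into internal_user.yml
"$ATLASSIAN_INSTALL"/opensearch/plugins/opensearch-security/tools/hash.sh -p 'MyNewPassword'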

Go into %bitbucket-home%/shared/bitbucket.properties if you have one (this file is created as you migrate between versions or databases) and remove any legacy Elasticsearch username/password/URL settings, for example plugin.search.elasticsearch.baseurl or plugin.search.config.baseurl as shown in the documentation. The properties file overrides the settings stored in the instance/database. You may also have a systemd service file that automatically starts Bitbucket; if it runs start-bitbucket.sh with -ns or --no-search to support a standalone search instance, remove the no-search option.
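
As a sketch of what to look for (property names vary between Bitbucket versions as noted above, and the unit file path and ExecStart line are assumptions about a typical setup):

# Remove lines like these from %bitbucket-home%/shared/bitbucket.properties, if present:
plugin.search.elasticsearch.baseurl=https://search.example.com:9200
plugin.search.elasticsearch.username=bitbucket
plugin.search.elasticsearch.password=<old password>

# In the systemd unit (e.g. /etc/systemd/system/bitbucket.service), drop the no-search flag:
# Before: ExecStart=/opt/atlassian/bitbucket/7.21.7/bin/start-bitbucket.sh --no-search
# After:  ExecStart=/opt/atlassian/bitbucket/7.21.7/bin/start-bitbucket.sh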

Now start Bitbucket and go to Administration -> Troubleshooting and support tools -> System Information; you will see that search failed to connect. Go to Administration -> Server settings and enter your new search information there. If you just removed Elasticsearch and started OpenSearch with the server, all you have to do is make sure the port is right (7992 by default, I believe), the username is "bitbucket", and the password is "bitbucket-changeit". If you get a connection error, you may have to set up a TLS trust between Bitbucket and OpenSearch, but that is outside the scope of this guide.
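
To sanity-check the bundled OpenSearch outside of Bitbucket, a quick curl from the Bitbucket host should answer (one assumption: whether it is http or https depends on whether HTTP-layer TLS is enabled in your opensearch.yml):

# Ask the bundled OpenSearch for cluster health with the default credentials
curl -u bitbucket:bitbucket-changeit http://localhost:7992/_cluster/health?pretty
# If HTTP TLS is enabled, use https plus -k (or a proper CA bundle) instead
curl -k -u bitbucket:bitbucket-changeit https://localhost:7992/_cluster/health?pretty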

Below is the default %bitbucket-home%/shared/search/config/opensearch.yml

cluster.name: bitbucket_search
node:
  name: bitbucket_bundled

network.host: _local_
discovery.type: single-node

path:
  logs: ${BITBUCKET_HOME}/log/search
  data: ${BITBUCKET_HOME}/shared/search/data

action.auto_create_index: false

http.port: 7992
transport.tcp.port: 7993

# The OpenSearch security plugin stores its configuration in an index in the cluster itself. On startup if the
# security index doesn't exist yet, setting this to true will cause the security plugin to read the yml files and
# configure the index using the contents of the files.
plugins.security.allow_default_init_securityindex: true

# Using the yml files with default initialisation, we create a bitbucket user and give it the all_access in-built role.
# However, access to the REST API is disabled by default even for the all_access role so we need to explicitly give
# it permission here so that the bitbucket user can access the OpenSearch REST API.
plugins.security.restapi.roles_enabled: ["all_access"]

# Mandatory TLS setup for transport layer
plugins.security.authcz.admin_dn:
  - CN=BITBUCKET
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.pemcert_filepath: bitbucket.pem
plugins.security.ssl.transport.pemkey_filepath: bitbucket-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem

# Logs audit events to bitbucket_search_server.json
plugins.security.audit.type: log4j
plugins.security.audit.config.log4j.logger_name: audit
plugins.security.audit.config.log4j.level: INFO

Stable Diffusion image: "computer with redhat logo on screen, in a field with mountains and a dinosaur in the background"

Clean Tenable Nessus Scans for RHEL 7 with Podman

Tenable Nessus plugins 137561, 138032, and 142002 can misfire depending on your YUM repo configuration, leading to three medium findings that should not be there.

Plugins:

  • RHEL 7 : OpenShift Container Platform 4.3.25 containernetworking-plugins (RHSA-2020:2443) (Plugin 137561)
  • RHEL 7 : OpenShift Container Platform 4.2.36 containernetworking-plugins (RHSA-2020:2592) (Plugin 138032)
  • RHEL 7 / 8 : OpenShift Container Platform 4.6.1 package (RHSA-2020:4297) (Plugin: 142002)

If you have a RHEL 7 stack that uses Podman and does not have the default redhat.repo file, you will have installed packages that have newer versions available in the OpenShift repos. Normally this would be fine: the Nessus scanner is supposed to check whether you have the OpenShift repos enabled and, if not, accept the latest RHEL 7 OS versions as good. That check fails, however, if you are missing the RHEL 7 OS repos; the OS repo HAS to be enabled too, or the check shows as failing. This situation can easily happen on an air-gapped system, or a system on Satellite where you are not using the default repo in redhat.repo. Luckily the baseurl does not matter; as long as the repo ID is rhel-7-server-rpms (and I put the name= line in there for good measure), the check will come back clean.

/etc/yum.repos.d/redhat.repo

[rhel-7-server-rpms]
name = Red Hat Enterprise Linux 7 Server (RPMs)
baseurl = file:///opt/rhel_7_x86_64_os/
enabled = 1
gpgcheck = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

Before (or after) setting that, you will need to disable the YUM Red Hat Subscription Manager plugin, or the next time you run yum it will wipe your redhat.repo and reload it from subscription-manager. To do this, edit /etc/yum/pluginconf.d/subscription-manager.conf and set enabled=0. You should also run subscription-manager config --rhsm.manage_repos=0.
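
A quick sketch of both steps (the sed assumes the stock enabled=1 line in the plugin config):

# Stop the subscription-manager yum plugin from regenerating redhat.repo
sed -i 's/^enabled\s*=\s*1/enabled=0/' /etc/yum/pluginconf.d/subscription-manager.conf
# Tell subscription-manager itself to leave repo management alone
subscription-manager config --rhsm.manage_repos=0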

Below are examples of the errors you can see from Nessus.

Remote package installed : containernetworking-plugins-0.8.3-3.el7_8
Should be                : containernetworking-plugins-0.8.6-1.rhaos4.2.el7

OR

Remote package installed : runc-1.0.0-69.rc10.el7_9
Should be                : runc-1.0.0-81.rhaos4.6.git5b757d4.el7

The default thinking may be: it says I need to update to the OpenShift packages, so it makes sense to install the OpenShift repos. And if you go get a Red Hat developer account to debug this, you will see the OpenShift repos there; that is because the developer account comes with a lot of entitlements, including OpenShift. But if you add the OpenShift repos to a bunch of systems, you may become liable for OpenShift licenses, or get errors because those systems do not have the entitlements. The key is that the installed packages end in ".el7_8"/".el7_9" instead of ".rhaos4.2"; this is a plugin misclassification, not a need for updates.

Note: The image is a random AI generated one: Stable Diffusion Image, “computer with redhat logo on screen, in a field with mountains and a dinosaur in the background”. I think they are fun.

Missing Email Alerts from LibreNMS

I realized recently that I haven't been getting any alerts from LibreNMS, including when I rebooted devices for patching. After going to the "Alert Transport" page and attempting to send a test message, I got "SMTP Error: Could not authenticate." Others seem to have hit this recently as well. (Link)

It turns out that after May 31st (although for me it seemed more like June 6th, 2022), Google disabled simple password logins for Gmail accounts. You need to enable two-factor auth, then create an app-specific password for LibreNMS. This was a good quick guide on how to do that. Since LibreNMS sends alerts when something is wrong but has no alert that alerting itself is working, it may be worth going and checking your setup if you use LibreNMS with Gmail.
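
If you configure SMTP through config.php, the app-specific password just drops in where the account password used to go. A sketch using the documented LibreNMS SMTP options (example values; these can also be managed through the web UI):

// config.php SMTP alerting settings for Gmail with an app password
$config['email_backend']       = 'smtp';
$config['email_from']          = 'me@gmail.com';
$config['email_smtp_host']     = 'smtp.gmail.com';
$config['email_smtp_port']     = 587;
$config['email_smtp_secure']   = 'tls';
$config['email_smtp_auth']     = true;
$config['email_smtp_username'] = 'me@gmail.com';
$config['email_smtp_password'] = 'abcdabcdabcdabcd'; // 16-character app password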

Computer Vision for Datacenter Auditing

I am going to start a series of posts on random ideas I have had but not had time to fully implement. The first in this series is an idea I worked on about three years ago (November 2019) for auditing a datacenter as well as mapping systems' physical locations to their logical ones on the network.

The core of the idea is to use cameras in a datacenter (these could be security cameras) to see the servers in the racks, then use that data to map out the datacenter and save administrators from having to perform these actions manually. The process begins by training a machine vision model on what a server looks like. Most of the time at work I am dealing with Dell servers, so I thought that was a good starting point. To keep the model generic enough, I was just attempting to train it on what a 1U server looks like versus what a 2U server looks like.

At this point I needed A LOT of photos of servers in different lighting and at different angles. I took a bunch myself of racks I had as a seed set, then turned to the web. Where could I get a large assortment of photos of Dell servers in different configurations and lighting? The homelab section on Reddit! People post their home setups and what they have all the time. I went through and downloaded several hundred photos of different people's setups. Another place to get photos was eBay, where a lot of sellers put up photos of servers in different settings; the downside is that a lot of people reuse the same photos again and again. I don't think the internet has figured out yet what the copyright rules are for using photos from online to train a model.

I researched a bunch of different techniques and was playing around with OpenCV, but then found a tutorial that seemed to be in line with what I was looking to do. (This one is also good, and covers very similar material.) I also looked into different image processing models and played around with several.

Now that I had the photos, I downloaded labelImg (GitHub – tzutalin/labelImg), a graphical image annotation tool for labeling object bounding boxes in images. With it, you go through each photo, select the item you are trying to learn, and label it. This took a while; it is fully manual work. A lot of the photos from the web had multiple servers in them, and each one needed to be selected, which proved to be one of the more time-consuming parts of the project. I also had to manipulate some photos so the rectangular bounding boxes could fit the servers, even when the photos were taken at weird angles.

I then had to pick some of the photos to be the training set and others to be the testing set. With everything marked and that metadata ready, I converted it from XML to CSV using the xml_to_csv.py provided in the example repos above, and that was then fed into TensorFlow. The only system I had to start with, other than a laptop, was a CentOS 7 server; this proved to be very annoying because some dependencies, such as protobuf, were not available at new enough versions and had to be custom compiled.

It was time to let the model train for a while and see what it could learn. Several important things were learned in this process. First, if you have GPUs, make sure you have a TensorFlow build that is compiled and ready to use them; the speed difference with and without them is kind of crazy. More RAM and GPU also do a lot to speed up the process. At first I was playing with this on just a laptop, which didn't have the GPU drivers for CUDA, and it was taking DAYS to work on the model. Later I switched to using GPUs I had in a server, and this greatly increased the iteration cycle speed.

Right off the bat it was able to recognize a decent percentage of the servers in the photos I presented to it! I do think a lot of the photos I tested it on were taken in fairly ideal conditions, with good lighting and camera angles, which may paint a rosier picture than real-world use would. To improve the model I can always find more photos and keep training. At this point I was able to get the model to recognize about 80% of the servers in the racks I showed it. Another factor that could help in the future is the evolution of cameras: a lot of places are replacing 720p/1080p cameras with 4K cameras, and the more resolution the system has to work with, the better.

The next step I wanted to take was matching physical location to logical. The idea is this: I can find the regions in a photo or video where servers are, and each server, through its iDRAC/IPMI, lets me blink the front chassis identify light. So, one host at a time, automation sends the command to blink the front chassis light (and perhaps some lights on the HDDs), then scans for which region of the photo has started to blink!
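
A rough sketch of that blink-one-at-a-time loop over plain IPMI (hostnames, credentials, and the 30-second identify window are made up for illustration, and IPMI-over-LAN has to be enabled on the iDRACs):

# Blink each host's chassis identify light in turn so the camera side can
# associate the blinking region in the frame with that host
for host in r01-u01-idrac r01-u03-idrac r01-u05-idrac; do
    echo "Blinking ${host}..."
    ipmitool -I lanplus -H "${host}" -U root -P 'calvin' chassis identify 30
    sleep 35   # give the vision side time to spot the blink before moving on
done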

This is the idea I have slowly worked on for the last little while; I have prototypes of most of it working but have not had a lot of time to put into it. The hope is that we could use existing cameras to get the footage needed to map the datacenters we already have, and then perhaps in the future port the system to something like HoloLens or an Apple/Meta AR headset. Once we have that mapping, we can draw out the physical servers and their locations in the racks on a webpage and make it easier for people working in a datacenter to find the boxes they need, hopefully one day letting people click a server on a webpage and connect to its controller without a human painstakingly going to each box to do the mapping. Of course, all of this could also be solved by a team labeling each server, but where is the fun in that?

CentOS 8 Migration

I have a pipeline that creates live images to network boot different systems. Historically it has been based on CentOS. A little while ago I moved it to CentOS 8 because I had some newer hardware that was not supported by CentOS 7's older kernel. Everything was working well until recently, when CentOS 8 went end of life and I could no longer rely on the CentOS 8 Docker containers.

So the journey began for a new EL8 base. I wanted to stay on EL8 instead of switching to Stream because all the other systems I run are EL8 (CentOS 8 or RHEL 8) and I wanted to keep compatibility. I also didn't want to do a new build of the image, have things break, and not realize it was because of an upstream CentOS Stream change. On top of that, I had been using the CentOS 8 Docker container, which seems to have been pulled, so that forced me to make this change now.

My first thought was Oracle Linux. It has been around for a while, is ALMOST drop-in compatible, and can be used without going out and getting licenses (unlike RHEL). (There are some small silly things, like the EPEL package being called "oracle-epel-release-el8" instead of "epel-release".) This led to nothing but issues. I replaced all the repos I had in the image creation stage with Oracle Linux ones, and then every build I got a ton of "nothing provides module(platform:el8)" lines for any package that used yum/dnf modules. I spent a chunk of time on this, finding no real answers, and the one Oracle support page that looked like it could help said I needed to buy a support contract. Classic Oracle. At one point I thought it had something to do with Commit – rpms/centos-release – 89457ca3bf36c7c29d47c5d573a819dd7ee054fe – CentOS Git server, where a line in os-release confuses dnf, but that line was there. Oracle also doesn't seem to have a kickstart URL repo, which is needed for this sort of network boot; they want the end user to set that repo up themselves, which may be the source of my issues. This also touched on the issue in Disable Modular Filtering in Kickstart Repos – Red Hat Customer Portal, but I wasn't even getting to a base OS install, after which I could have changed how dnf processes modules.
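
For reference, the "module(platform:el8)" that dnf is looking for is normally provided by the PLATFORM_ID line in /etc/os-release; on a healthy EL8 box it looks roughly like this (values from a Rocky 8 machine, other EL8 rebuilds differ only in naming):

# /etc/os-release (excerpt)
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
ID="rocky"
PLATFORM_ID="platform:el8"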

In my searches I did find this nice script to get bash variables for OS and version. https://unix.stackexchange.com/a/6348
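
For systems new enough to ship /etc/os-release, a much shorter approach also works; this is my own minimal sketch rather than the script from that answer:

# Pull the distro ID and version into shell variables
if [ -r /etc/os-release ]; then
    . /etc/os-release
    OS="${ID}"            # e.g. "rocky", "ol", "centos"
    VER="${VERSION_ID}"   # e.g. "8.6"
fi
echo "Detected ${OS} ${VER}"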

Then I figured I would try either AlmaLinux or Rocky Linux. They both came out around the time Red Hat said CentOS 8 was going away. Looking into both projects, they are backed by big players like AWS and Equinix, which made me feel a bit better about it. I had heard a bit more about Rocky and its support, so I tried that. I dropped in the new repos and kickstart location, and everything just worked… Even things that were an issue when playing with Oracle Linux went away; for example, epel-release was once again called what it should be.

In the end so far it seems to be happy! We will see if any other small differences pop up and bite me…

Below is an example of the top of the kickstart I am using. If anyone is interested in more detail on how I create live images, leave a comment and I can do a post on it:

lang en_US.UTF-8
keyboard us
timezone Europe/Brussels --isUtc
auth --useshadow --enablemd5
selinux --disable
network --device=eno1 --bootproto=dhcp
skipx
part / --size 4096 --fstype ext4
part /opt --size 4096 --fstype ext4
firewall --disabled

url --url=https://download.rockylinux.org/pub/rocky/8/BaseOS/x86_64/kickstart/

# Root password
rootpw --iscrypted <Insert encrypted password here>

repo --name=baseos --baseurl=https://download.rockylinux.org/pub/rocky/8/BaseOS/x86_64/os/ --install
repo --name=extras --baseurl=https://download.rockylinux.org/pub/rocky/8/extras/x86_64/os/ --install
repo --name=appstream --baseurl=https://download.rockylinux.org/pub/rocky/8/AppStream/x86_64/os/ --install

Migrating Chrome Plugins from Manifest v2 to v3 Impressions

Google has recently decided that soon everyone will need to migrate Chrome plugins from Manifest v2 to Manifest v3. The one big change for me, other than some new syntax, is that you can no longer inject arbitrary scripts into webpages. A lot of the Manifest v3 changes are around the security context for plugins, which is good to see. In the past I could append to a page's <script> data and have the page execute that script in its own context; now all of that processing has to take place within the plugin itself instead of on the page. You can still add to pages, but it has to be more static content rather than dynamic code.
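
For context, the headline manifest changes look roughly like this (a trimmed sketch, not my actual plugin's manifest; the file names are placeholders):

{
  "manifest_version": 3,
  "name": "Example Plugin",
  "version": "1.0",
  "background": { "service_worker": "background.js" },
  "action": { "default_popup": "popup.html" },
  "permissions": ["storage", "scripting"],
  "host_permissions": ["https://example.com/*"],
  "content_scripts": [
    { "matches": ["https://example.com/*"], "js": ["content.js"] }
  ]
}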

One change this creates for you is which browser context you are working in. When running on the page, you can directly hit all aspects of the page and make AJAX requests under the user's context. Now, any scripting you want done has to happen in the plugin itself, and if you want to access a non-public asset, the plugin needs the user to log in to it directly. If you attempt to inject scripts onto the page, you will get a CORS error stating they come from a different context.

For the main plugin I dabble with and work on, the API I access is open, which lets me not worry too much about which browser context I am working in. If it were an authed API, I would have to worry about having the user authenticate to the plugin itself. I moved all the logic from a split model, where the plugin did some of the work and then handed high-level data to scripts injected into the webpages, to doing all the work in the plugin and then injecting the final results (HTML) and the assets I want to change onto the page. In the end, this leads to a cleaner solution and centralizes all the logic.

A big added benefit I saw to switching from Manifest v2 to v3 is in the security review process that runs when you upload an updated plugin: you get approved faster than in the past. My updated plugin was approved in around a day (note that the plugin I work on is relatively small).

Hardening Embedded Apache Tomcat 9

I was recently working to make sure some of my web apps could pass a Tenable Nessus security scan. Since I tend to use the same embedded Tomcat setup for a lot of the apps, I kept hitting similar findings. I had to do a bit of digging to find some of these answers, so I thought I would document them. If anyone else has helpful tips for embedded Tomcat, please feel free to comment!

Apache Tomcat Default Files

Apache Tomcat Default Files | Tenable®

The main issue behind this finding is that the 404 page the app presents includes the Tomcat version number. That can be a problem because if there is a vuln in that version, you can be targeted more easily.

final Tomcat tomcat = new Tomcat();

// Grab the default host and attach an ErrorReportValve that hides the
// stack trace and the Tomcat version string on error pages
var host = (StandardHost) tomcat.getHost();
var errorReportValve = new org.apache.catalina.valves.ErrorReportValve();
errorReportValve.setShowReport(false);      // no exception report in the response
errorReportValve.setShowServerInfo(false);  // no "Apache Tomcat/x.y.z" banner
host.addValve(errorReportValve);

errorReportValve.setProperty("errorCode.0", "empty.html");

The line above can be used if you want to serve a custom page (empty.html here) for errors instead of Tomcat's default.

Source: https://stackoverflow.com/a/59967152

Web Application Potentially Vulnerable to Clickjacking

Web Application Potentially Vulnerable to Clickjacking | Tenable®

This finding is because the application is not sending the proper X-Frame-Options or Content-Security-Policy headers.

final Tomcat tomcat = new Tomcat();

final Context ctx = tomcat.addContext("/", MY_FILE_LOC);

// Define Tomcat's built-in HttpHeaderSecurityFilter, which adds the
// X-Frame-Options, X-XSS-Protection, and X-Content-Type-Options headers
FilterDef httpHeaderSecurityFilter = new FilterDef();
httpHeaderSecurityFilter.setFilterName("httpHeaderSecurity");
httpHeaderSecurityFilter.setFilterClass("org.apache.catalina.filters.HttpHeaderSecurityFilter");
httpHeaderSecurityFilter.addInitParameter("antiClickJackingEnabled", String.valueOf(Boolean.TRUE));
httpHeaderSecurityFilter.addInitParameter("antiClickJackingOption", "DENY"); // X-Frame-Options: DENY
httpHeaderSecurityFilter.addInitParameter("xssProtectionEnabled", String.valueOf(Boolean.TRUE));
httpHeaderSecurityFilter.addInitParameter("blockContentTypeSniffingEnabled", String.valueOf(Boolean.TRUE));
httpHeaderSecurityFilter.setAsyncSupported(String.valueOf(Boolean.TRUE));

// Map the filter to every request path and register both with the context
FilterMap httpHeaderSecurityFilterMap = new FilterMap();
httpHeaderSecurityFilterMap.setFilterName("httpHeaderSecurity");
httpHeaderSecurityFilterMap.addURLPattern("/*");
httpHeaderSecurityFilterMap.setDispatcher("REQUEST");

ctx.addFilterDef(httpHeaderSecurityFilter);
ctx.addFilterMap(httpHeaderSecurityFilterMap);

Source: https://github.com/jiaguangzhao/base/blob/905aaf4111f4779e236043ff423951672ade848a/src/main/java/com/example/base/aop/configure/TomcatConfigure.java

CentOS/RHEL 8 Auto Login Fix

I have a PXE environment that requires systems to boot up, automatically log in, and start a program on boot. All of a sudden this stopped working after years of running fine. It took me a while to figure out, so I figured I would post it in case anyone else runs into this.

I have been doing auto login the recommended systemd way for a while, as shown at https://wiki.archlinux.org/title/Getty. I copied /lib/systemd/system/getty@.service to /etc/systemd/system/getty@tty1.service, then edited it with sed in the build pipeline. In the end the line was:

ExecStart=-/usr/bin/agetty --noclear %I $TERM --autologin username

This worked for YEARS, then suddenly stopped. While investigating, I saw another file being written next to mine at /etc/systemd/system/getty@tty1.servicee, with another e added to the end of service, making it servicee. After a lot of playing around and looking at other guides, I figured out that an update to systemd/agetty now cares that all options come before the terminal argument. Changing the line to the following fixed it.

ExecStart=-/usr/bin/agetty --noclear --autologin username %I $TERM 

Homelab: 802.1x 2021

One technology I have played around with a little at work but wanted to get a better handle on is 802.1X. I took and passed the Cisco ISE cert a few years back and have used it with other services at work, but for the home setup I mostly wanted to be able to put different wireless devices onto different VLANs based on device and user. Windows Server makes this possible natively with Network Policy Server (NPS).

An example of me playing with Network Policy Server

NPS is, at the end of the day, a RADIUS server in Windows. It gives you conditions and a rules system to respond to different RADIUS requests, as well as a way to set up accounting. It is fairly simple compared to something like ISE, which can also do posture and profiling for devices, but as a quick free solution it works well for home. You can say that if a client is authenticating over something like wireless, accept these methods, versus if it is a wired or switch login, accept other forms. Instead of going point by point through how to set it up, which you can find elsewhere online, I want to cover some high-level edge cases you may run into. First, NPS needs Windows Server with the Desktop Experience; if you run member servers or domain controllers as Server Core to simplify the environment, it will not work. NPS also does not do HA easily. You can run multiple servers, export the config from one, and import it on another (see the sketch below), but there is no good system for dynamically syncing them (unless you call random people's PowerShell scripts a good system).
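
The manual export/import itself is simple enough; a sketch from an elevated prompt (paths are examples, and exportPSK=YES puts your RADIUS shared secrets in the XML, so protect the file):

rem On the primary NPS server: dump the full configuration, secrets included
netsh nps export filename="C:\temp\nps-config.xml" exportPSK=YES
rem Copy the file over, then on the secondary server:
netsh nps import filename="C:\temp\nps-config.xml"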

One good reason to use NPS is the simple AD integration: users can authenticate with their domain credentials and easily get access. Or do as I do, which is really too much for home (or possibly anywhere): set up a domain CA, have a GPO that issues certs for each machine, then use cert-based auth via 802.1X, all deployed via GPO. If anyone has questions about this I am happy to answer, but there are many places online that cover each of those configs and how to do them. Another place to integrate RADIUS besides 802.1X for wired and wireless is network device login. I use RADIUS for the stack of Ruckus switches I have at home (2 is considered a stack (like when you run k3s as a "cluster" of 1)).

This is one of those Windows services that works well but has not been touched in YEARS by Microsoft, like WSUS or any other service that is actually useful. To back up this point, I installed several old versions of Windows Server I had laying around in ESXi. Lesson one: the web console doesn't work well with some of the legacy mouse support. Second, you may need the legacy VMware Tools ISO (VMware Tools support for Windows 2000, Windows XP, and Windows Server 2003 (81466)). The internet seems to say NPS came out in Server 2008.

https://social.microsoft.com/Forums/getfile/51145/

Converting .heic on Windows With Open Source Tools and a Context Menu Shortcut

While taking photos and uploading them places, like this blog, I get the photos from the iPhone in .heic format and then need to convert them to JPEG for WordPress. There are a few paid tools and some questionable freeware out there to do it, but I wanted to use open source tools. ImageMagick is an open source tool that can do the conversion, but it requires the command line, so I found the registry keys needed to add a right-click context menu that converts the images!

The context menu also only shows up when you select a .heic file, which is a nice way to do it. How to install:

  • Install ImageMagick (link), the version I got was “ImageMagick-7.0.11-4-Q16-HDRI-x64-dll.exe”
  • Copy the following lines into a text document
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\SystemFileAssociations\.heic\Shell\convertojpeg]
@="Convert To JPEG"

[HKEY_CLASSES_ROOT\SystemFileAssociations\.heic\Shell\convertojpeg\command]
@="\"C:\\Windows\\System32\\cmd.exe\" /C magick.exe mogrify -verbose -format jpg \"%1\""

  • Name it install_imagemagick.reg (or really anything.reg)
  • Open that file in file explorer

After installing, you should be able to right-click a .heic photo and choose "Convert To JPEG". I did not need to restart, log out, or restart Explorer. I am calling cmd.exe first instead of the program directly because that lets you update ImageMagick without needing to link directly to its executable path.
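
If you ever want the plain command line instead of the context menu, the conversion the registry entry runs boils down to this (example file name):

rem Writes IMG_1234.jpg next to the original .heic file
magick mogrify -verbose -format jpg IMG_1234.heic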