Random Technology

Sonnet Labs Firmware Poking

A few weeks ago I received the Sonnet Labs Sonnet One, long after we thought the Kickstarter was simply dead. A pleasant surprise, until I found it shipped without any instructions and the setup link didn’t go anywhere. I ended up writing up what I could piece together from the web into an unofficial manual of sorts, but then decided I wanted to know a bit more about the firmware currently on it.

I found bits like the FCC manual, which didn’t have much (https://fccid.io/2AN8Z-SONNET/Users-Manual/user-manual-4003660). Then a comment from the Kickstarter’s past mentioned that a draft manual had been posted (https://www.dropbox.com/s/avmes7rhanx2vli/Sonnet%20User%20Manual%20v0.4.pdf), which gave some actual guidance about the device. Finally, one user wrote on a forum, where he was beta testing the device early, that the devs had given him SSH access to control the backend of the device.

Now I was interested! You can get SSH and control the whole device? How does one get that? First I looked at the code of the web app that was running, wanting to see if there were any admin pages that were simply hidden from view. Nothing big stood out, except that the code on the device and the code on GitHub looked a bit different from each other, and there is not even a README explaining how to get the repo up and working.

Running nmap against the device displayed only the few ports we knew it had to have open: web server, DNS, and DHCP.
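
If you want to repeat the scan, something like this is all it takes (192.168.47.1 is the device’s usual IP, per the manual below; the port choices are just the obvious ones):

    # TCP scan of every port on the Sonnet's default IP
    nmap -p- 192.168.47.1

    # DNS and DHCP are UDP services, so check those separately
    sudo nmap -sU -p 53,67 192.168.47.1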

That’s when I saw a reference to http://repo-test.sonnetlabs.com (backed up on archive.org), a place where all the different firmware versions have been stored. Some are marked “stable”, some “beta” or “alpha”. With a bit of searching around I found a site that walked through easily extracting OpenWRT firmware. After a quick brew install on my Mac, I was able to binwalk the file and open the firmware that I seemed to have on my device, versus one of the beta ones. Looking around, it’s a fairly standard OpenWRT firmware with some tweaks for the long range radios. It also has services like dropbear for SSH. The beta/alpha releases are missing one line that the stable ones have: “option enable ‘0’” in the /etc/config/dropbear file, which disables SSH.
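
A minimal sketch of the extraction, assuming you grabbed one of the images from the repo (the filename here is made up):

    # binwalk is available through Homebrew on macOS
    brew install binwalk

    # -e extracts any filesystems binwalk recognizes (squashfs, in this case)
    binwalk -e sonnet-firmware-stable.bin

    # binwalk drops the results in _<filename>.extracted; from there,
    # look at the dropbear config to see whether SSH is disabled
    cat _sonnet-firmware-stable.bin.extracted/squashfs-root/etc/config/dropbear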

Looking further, there are some init scripts the system starts: one for the app, and another for the radios. The code is actually decently documented.

If we (the internet) had a build of the firmware with SSH enabled, it would make checking on the mesh much easier, since the UI seems to have no indication of what is going on with it. There appears to be a backend app for managing the mesh; the property syntax below looks a lot like wpantund’s wpanctl, though that is just a guess. This is used in the startup of the radios:

    ${MESH_CONTROLLER_CMD} reset
    echo "mesh: using channel ${channel}"
    ${MESH_CONTROLLER_CMD} set NCP:Channel ${channel}
    ${MESH_CONTROLLER_CMD} set Network:Name Sonnet
    ${MESH_CONTROLLER_CMD} set Network:PANID 0x4700
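
For reference, on a stock OpenWRT system with shell access, turning dropbear back on is quick with uci; this is purely hypothetical here, since the whole problem is that we have no shell to run it from:

    # enable the (anonymous) dropbear config section and restart the service
    uci set dropbear.@dropbear[0].enable='1'
    uci commit dropbear
    /etc/init.d/dropbear restart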

Most of the operations for the app seem to be handled by /usr/bin/sonnet_server. The web part of the app is in /usr/share/sonnet_app, where it has all of its node modules set up, plus cordova.js for some offline functionality. This code is quite different from the one on GitHub, which makes the timeline confusing. If they shipped around October, they had to have the firmware more or less finalized a while before that. How do we have a code base with some material from August 30th, and then this firmware dated 18-Jun-2019?

Anyway, that was a bit of playing around with it. What I would like is for Sonnet Labs to put out a firmware build for us with SSH on and everything else stock, then add documentation to the repo, and perhaps an easy way to iterate on the code and flash it to the device. After that, the community could help make the app better and perhaps put a UI on the actual mesh part; or Sonnet Labs could update their product themselves. The fact that http://repo.sonnetlabs.com has a bunch of 0 byte files last updated May 2018 does not fill me with hope…

Sonnet Labs, Sonnet One Unofficial Manual

Today a device I forgot I ordered arrived: the Sonnet Labs One, a mesh point that promises to connect two places over lower frequency radio to allow large mesh networks. It works by taking one of the units and connecting it to your local wifi. The other one, according to the box, can then be up to a mile away in a city, and up to 10 miles out in the middle of nowhere. When it arrived I unpacked it, and all that was in the box was the device, a micro USB cable, and a charger. Where are the instructions? I checked the other unit (you need at least 2, or have to hope your town has other people who got them years ago on Kickstarter or Indiegogo; yes, they did both) and that also didn’t have instructions.

Ah, the top of the box says sonnetlabs.com/start, perfect! 404 page not found, with a few other links, which also 404. Looking at the Indiegogo and Kickstarter pages, most people are still waiting for theirs, so I guess I am too early for the website? The sonnetlabs website also shows an early mockup of the device and nothing resembling what it looks like now. I started digging through comments and looking at their GitHub, pieced together some aspects of the device, and figured I would start a manual since so far none exists. At this point, the project seems to have maybe 1 or 2 people working to fulfill it. I would be happy to chat with the devs.

Turning on the device:

  • The first flap covers a port for an antenna, the second has the micro USB charging port and a micro SD card slot, and the last has a USB host port to charge your devices
  • Plug in the included charger into the middle micro USB port
  • Hold the Orange button for about 3 seconds and a green light should appear on the top of the unit

Connecting to the device, and to the internet:

  • When the unit is on, you should see a new Wifi network nearby named Sonnet-XXXX; connect to this with the password “sonneteer”
    • Note: I had issues with my Mac doing this and had to manually enter the credentials
  • The web server that hosts the app takes a minute to start; if you try to go to it too soon, it will fail to load
  • In a browser go to https://app.sonnetlabs.com; the IP is usually 192.168.47.1, but when I went to the IP directly the settings menu seemed broken, so it looks like they hard coded that URL in some places
  • There will be an error about the certificate, this is because the device made it, accept to continue, you may have to hit “Details” or “Advanced”
  • You should see the login screen, click “Register New User”
    • This creates a user for sending messages and using the basic aspects of the device. Fun fact: all this data is stored locally in your browser, so the users don’t really matter to the device. That does secure your chat message data, in the sense that you need to use that exact browser on that exact device to ever see those messages again. If you change the address you browse to, or the browser, or anything else, you have to re-register a user.
    • The Administrative password, set later, is actually stored on the device and persists
  • Put in a name, username, and password
  • Once you see “Registration Successful” go back to the login screen and login
    • Note: Hitting the “Enter” key on the password field doesn’t trigger a login; you need to click “login” (at least in Firefox)
  • Now you are at the default page, “Contacts”, go to the “Settings Page”
  • At the bottom of the “Settings” page there are “Set New Password” and “New password” fields; you have to put something in here to get admin access to the device. That isn’t made clear, but it is required. This one is saved to the device.
    • Note: This password is displayed in plain view on screen when entered later in the UI, so don’t make it something you don’t want people to see
  • Once you set a password there, you will get a real “Admin” page
  • Here you can click “Wifi” and start the process to connect to your own wifi network
  • This took a minute, then mine displayed a green check mark and I could reach the internet through my own connection, even though I was bouncing through the device
  • Note: at this time, I have gotten the two devices to use the “Chat” function, but not the mesh internet functions

Using Chat:

I was able to get chat to work between two of the devices: setting up one, then turning on the other seems to auto pair them. At this time I cannot find any user interface to confirm/configure/see anything about the mesh. But Chat worked… so that’s something.

  • Login to the web interface
  • Go to “Contacts” on BOTH devices
  • Click the + in the top right
  • Select “From Network”
  • If both sessions are online, you should see the other user
  • Then, if you can, click their name and this will send a request to add them
  • On the second device, there should be a red dot on the + in the top right of “Contacts” screen, go to the +
  • Select “Requests” and approve the request
  • Then you can chat. I haven’t done much testing on how much message caching the devices do, but in my first test one device missed a message because the window timed out and was “offline” again
  • Note: If you see “Offline” as a bar at the top of the Window, your browser has disconnected from the device itself, clicking “Offline” should reconnect

I have not gotten QR codes to work, even with a very clear photo.

I took some screenshots of other screens. If you want more info, or have more info, please leave a comment!

Additional Info:

I found a bunch of info poking around online, here are some notes:

Github for the app the device runs, but an OLD build: https://github.com/SonnetLabs/sonnet-webapp/

People talking about the device: https://community.gotennamesh.com/t/sonnet-devices-beta/4328/22

From what I can tell at the address above, a user states he was working with the devs and got SSH access to the device. I believe the image he had was a dev build with SSH enabled, and the normal image we all have on the production units has it disabled.

The creators posted an early draft of a manual a while ago; copy below for archiving: https://www.dropbox.com/s/avmes7rhanx2vli/Sonnet%20User%20Manual%20v0.4.pdf

The FCC registered manual, very light: https://fccid.io/2AN8Z-SONNET/Users-Manual/user-manual-4003660

And the last laugh: there is a subdomain under sonnetlabs.com whose Digital Ocean server now belongs to someone else, so it hilariously redirects. I give you lithium.sonnetlabs.com

FusionIO ioMemory VSL4 on CentOS/RHEL 7.5

With CentOS 7.5, the ioMemory VSL 4.3.3 kernel module would no longer load, and I could not get it to recompile from source either. I tried a bunch of things, including moving my CentOS 7.5 box to the EL7 4.17 kernel to see if that helped me compile from source; no luck. Then I found a forum post, https://forums.servethehome.com/index.php?threads/centos-7-fusionio-users-do-not-upgrade-to-kernel-3-10-0-862-2-3-el7-yet.19760/, where they speak of patching VSL 3.2.15. Using this and some experimenting, I got VSL 4.2.1 to work with my system. This method may work for some later versions, but 4.3.3 had other code changes that kept it from compiling, so I used 4.2.1. Below are the steps to get a working ioMemory VSL 4.2.1 on CentOS 7.5; comments on whether it worked for you are welcome.
Note: I did all these steps as my user, and not as root. My card is a FusionIO ioMemory SX350
  1. Before we begin, I am assuming you have GCC and kernel-devel; if you do not, go to https://wiki.centos.org/HowTos/I_need_the_Kernel_Source to get the kernel source and the RPM build parts.
  2. Go to link.sandisk.com, register, and login
  3. Browse to the software download page, https://link-app.sandisk.com/Home/SoftwareDownload
  4. Download the “iomemory-vsl4-4.2.1.1137-1.0.src.rpm” source package
  5. In a terminal, change to the directory you downloaded it to
  6. Extract the contents of the RPM to disk
    1. rpm2cpio iomemory-vsl4-4.2.1.1137-1.0.src.rpm | cpio -idmv
      • rpm2cpio writes the archive to stdout, so it needs to be piped through cpio to actually extract the files

  7. That gives us some metadata and a tar; extract the contents of the tar.gz file
    1. tar xvzf iomemory-vsl4-4.2.1.1137.tar.gz

  8. Change directory to the kernel modules folder
    1. cd iomemory-vsl4-4.2.1.1137/root/usr/src/iomemory-vsl4-4.2.1/

  9. Using your favorite file editor, edit kblock.c
    1. vim kblock.c

  10. Edit line 2592
    1. Before
      1. elevator_exit(q->elevator);

    2. After
      1. elevator_exit(q, q->elevator);

  11. Save the file, and quit the text editor
  12. Compile the kernel module
    1. make modules

  13. If that completes without errors, then install
    1. sudo make modules_install

  14. Let’s add the module to the running kernel
    1. sudo modprobe iomemory_vsl4

    2. If you have issues, you may need to do “sudo modprobe -r iomemory_vsl4” to force a reload of the module if one was already present
  15. You should now have the fio in /dev/, or after installing the utils from the Sandisk site, see the card under “fio-status”
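
A few quick commands to sanity check the result (fio-status comes from the utils package on the SanDisk site; the rest is stock):

    # confirm the module is loaded
    lsmod | grep iomemory

    # watch for attach errors from the driver
    dmesg | grep -i fio

    # full card status, once the utils are installed
    sudo fio-status -a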

iDrac6 Recovery Through TFTP and Serial

The History:
This week a Dell PowerEdge R510’s iDrac completely died on me; I attempted repairs with several utilities that Dell provides on their site, and all of them ended in failure. I thought it might have been because I upgraded the iDrac from an old version to the latest without also upgrading the components it communicates with, like the BIOS or NIC firmware. After upgrading everything, the iDrac still was not working. After a few days of messing with it, and piecing together several sites, I found out how to force the iDrac into recovery mode and do a TFTP repair, writing a new image to it.

The symptoms:
The system used the Windows iDrac Updater, which stated the update had completed successfully. I then remotely told the system to reboot; it shut down and never came back up. When I physically went to the server, it was at the BIOS start screen stating “Error Communicating with iDrac. Press F1 to continue, or F2 for System Setup.” On restarting the server I found that “System Services” were disabled. The system would then go through its normal boot sequence, but when it tried to communicate with the iDrac it would fail and restart the server. After that restart it would allow a full boot, but would give that same “Press F1 to continue, or F2 for System Setup” message. Thus the server would not boot without physical intervention at the machine.

This is a Dell PowerEdge R510, I attempted to upgrade the iDrac from 1.3.* to 1.6.5.

The Fix:
We need to get to the iDrac’s serial recovery mode, and then we can recover the system.

  1. Reboot the system, and after it resets itself for not being able to reach the iDrac, go into “System Setup” with the F2 key
  2. Hit down until you select “Serial Communication”, enter that menu
  3. Set the following settings:
    • Serial System Setup Settings
    • Serial Communication : On With Console Redirection via COM2
    • Serial Port Address : Serial Device 1=COM1, Serial Device2=COM2
    • External Serial Connector : Serial Device 1
      • This could be Remote Access Device, but that gave me problems (I may have had a bad serial cable)
    • Failsafe Baud Rate : 115200
      • For the 11G servers this is the default baud rate
    • Remote Terminal Type : VT100/VT220
    • Redirect After Boot : Enable
  4. Then reboot the system. I got Windows to start by manually hitting F1
  5. At this point you need to go to support.dell.com, look up the downloads for your system, then under “Embedded Server Management” find “iDRAC6 Monolithic Release 1.97” (or whatever version is newest)
  6. There are several versions, for my system I got “iDRAC6_1.97_A00_FW_IMG.exe (50 MB)”
  7. After downloading, running this file will extract “firmimg.d6” and a readme file.
    • The readme has no useful information in it, it just tells you to search for the user guide
  8. The “firmimg.d6” file needs to be placed on a TFTP server that the iDrac can hit
  9. Using Putty on Windows I connected to COM2 at 115200 baud; this is the iDrac being redirected. Connect to your system’s COM2 however you can
    • Note: all of this is being done on the server itself, nothing on another machine; I had TFTP running on this same Windows system
  10. Hitting enter should show a recovery menu
    • Unfortunately I did not save pictures of the recovery screen, some of the next menu options may not be the exact wording
  11. I had DHCP on the network my iDrac was sitting on, so I hit 9 to get an IP address; this can also be set manually
  12. Hit 7 to change the TFTP server IP address
  13. Now hit the option that says “Firmware Upgrade”, this will go to the TFTP server specified, download the firmware, and reinstall all pieces of the iDrac from that file. It takes about 5 minutes.
  14. Keep in mind you are in your OS, for me Windows, while the iDrac and its system upgrades and reboots
  15. After it reboots successfully the recovery console stops getting data. I was next to the server; when the iDrac reboots, the fans go to full speed then calm back down, and that is how I could tell it restarted
  16. Now you can use the RACADM commands if open manage/iDrac tools are installed, or reboot and you should see “System Services” back online, then you can change the IP of the iDrac like normal
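
For that last step, a small sketch of what the RACADM check looks like from the OS once the iDrac tools are installed (the IP addresses are placeholders):

    REM confirm the iDrac responds and dump its current info
    racadm getsysinfo

    REM set a static IP, netmask, and gateway on the iDrac NIC
    racadm setniccfg -s 192.168.1.120 255.255.255.0 192.168.1.1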

Everything should work now and the world is happy!

Update (September 2020): I wanted to signal boost some of the comments below, if you have a 12th gen system with a SD card slot then the following may be the best path forward. Thanks Simon!

Just want to add to this for anyone who comes across similar issues on a PowerEdge R720/R720XD – if the amber light on the rear of the server is flashing then put the firmimg.d7 file on a FAT formatted SD card and put the SD card in the slot at the back. The flashing light should turn solid and 5 or so minutes later iDRAC should be back up and running.

Updated Windows Sudo

Recently I updated my Windows sudo program and added a command for Super Conduit, which is what I call some tweaks you can make to a Windows Vista+ system. The idea is you copy sudo.exe to a system’s System32 folder; then, after running “sudo cmd”, you can run “sudo /write” to add ls, ifconfig, and superc as options on the command line.

Superc has enable, disable, and show options, making it easy to run. 🙂
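
A quick sketch of what a session looks like, assuming sudo.exe is already sitting in System32:

    REM open an elevated command prompt
    sudo cmd

    REM install the ls, ifconfig, and superc helper commands
    sudo /write

    REM show the current Super Conduit state, then turn it on
    superc show
    superc enable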

Newest build is always here https://github.com/daberkow/win_sudo/raw/master/sudo/sudo/bin/Release/sudo.exe

Super Conduit

Due to the high latency of the lines between my work’s offices, file transfers can be slow. There are settings on Windows Vista+ systems that allow the TCP window to grow, permitting much higher utilization on these lines. I call it Super Conduit. This may be possible on *nix systems, but the way this tweak works, each side tells the other that it will be using it. That means both sides have to be at least the Windows Vista kernel (Server 2008 works); it also means that Linux file servers will not work, since they are just Linux machines serving SMB. This should be done over wired connections, because the packet loss on wireless hurts these connections more than anything else.

With the “autotuninglevel” change, I have seen a line go from about 1 megabit per second to 150-200 megabits per second.

WARNING: The Windows Vista/7 IP stack cannot handle this setting alongside normal connections, meaning once this is done the internet usually stops working until the setting is reversed. Windows 8+ seems to have no problems with this setting and the internet; it just makes Win 8/8.1 more awesome than it already is, which is pretty awesome.

  1. Log in under an administrator account on the Windows machine
  2. Open ‘cmd’ as an administrator
    1. The title bar should read “Administrator: C:\Windows\System32\cmd.exe”
  3. “netsh interface tcp show global” will show the current settings of your machine
  4. “netsh interface tcp set global autotuninglevel=experimental” enables the majority of what you need for faster transfers, all you will get back in response is “Ok.”
  5. Another setting I have used in the past is “netsh interface tcp set global ecncapability=enabled”; this adds a flag to the packets that tells routers “I don’t care if I get slowed down, please don’t drop me completely”. The problem you run into with large TCP windows is that one dropped packet shrinks the window a lot and slows the connection, making it a lot more spiky. This command doesn’t always help, but setting it hasn’t hurt in the past.
  6. The “rss” receive-side scaling state should be set to enabled, which should be the default. This allows the receiver to handle these types of connections.
  7. When you are done with your transfer, just run “netsh interface tcp set global autotuninglevel=normal”
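
Putting it together, the whole dance from an elevated prompt looks like this (all commands straight from the steps above):

    netsh interface tcp show global
    netsh interface tcp set global autotuninglevel=experimental
    netsh interface tcp set global ecncapability=enabled

    REM ...do the big transfer...

    netsh interface tcp set global autotuninglevel=normal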


Troubleshooting Notes:

Windows 7 seems to act oddly when you first enable this setting, so I would enable it and then restart the machine. I believe sessions already in progress do not pick up the new setting.


YAY MATH:

http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/

Default window size: 65536 bytes * 8 = 524288 bits

73ms latency between cross-country offices: 524288 bits / 0.073 seconds = 7,182,027 bits per second of throughput, theoretically. That is 897,753 B/s, max.

This setting increases that window size to something much larger, and thus gives better speeds. The one interesting downside is that since the TCP window is big, a lost packet causes TCP to shrink the window to a much smaller size, forcing it to climb again.

That is a 1GB link going across the country.

VM Experimentation

I am the type of programmer/IT person who likes to do all my system experimentation inside a virtual machine. That way, if I break something, I can easily roll back the virtual machine or just delete it. As seen in my last post, I recently built a new NAS. The original plan was to turn my old server into a Proxmox or ESXi box; the downside to that plan, I found out quickly, is that the old box used DDR2, and at this point DDR2 memory is quite expensive. That, along with my worry about the old box’s power usage, made me decide to give another solution a try.

After researching around I found my local Fry’s Electronics had the Intel NUC in stock. This is a tiny tiny PC that can take up to 16GB of RAM, has an Intel Core i5, and only uses 17 watts. The box also has Intel vPro; what is vPro, you ask? vPro allows you to remotely manage the system, so I can remote into it without buying a fancy management card, remotely power the box on and off, or mount a virtual CD. Not bad for a ~$300 box. The model I got, the DC53427, is a last-gen i5, so it was a little cheaper, at the cost of having only 1 USB 3.0 port. It came with a VESA mount so the NUC could be attached to the back of a monitor, which was a nice feature. I got a USB 3.0 enclosure for 2 older 500GB hard drives and used those as my storage. I installed Proxmox on the system since my work has been using that software more and more, and this was a chance for me to learn it.

A quick note about Proxmox for those who have not used it; I came from a VMWare background, so my work was my first experience with Proxmox. It is a free system; the company offers paid subscriptions for patches and such, and without one the web page nags you once when you log in and you just dismiss the message. The software is a wrapper around KVM and some other Linux virtualization technologies. It can handle Windows and Linux guests without a problem. The interface is completely web based, with a Java virtual console; if you don’t update to the latest patches, the Java console can break with Java 7 Update 51. The software works well enough. There are still some areas that need improvement: in VMWare, if you want to make a separate virtual network you can use their interface; on Proxmox, that’s when you go to the Linux console and start creating virtual bridges. But once I got everything working, it worked well. I don’t know how long I will keep it before trying another system, but for now it is nice. Since the system relies on KVM, it can do features like dynamic memory allocation: if a VM is allocated 6GB of RAM but is only using 1GB, it will only take 1GB at that time. KVM can also deduplicate memory, so if two VMs are running the same OS, identical pages are stored in memory only once, freeing up more memory.

I ran into one problem during the install of Proxmox: the NUC is so fast that it would start to boot before the USB 3.0 hard drives had been mounted. After searching around everywhere I found a fix at http://forum.proxmox.com/threads/12922-Proxmox-Install-on-USB-Device; adding a delay in the GRUB boot loader allows enough time for the system to mount the LVM disks correctly and then start. At first I just went to the GRUB boot menu, hit “e”, and added “rootdelay=10”, making the line “linux /vmlinuz-2.6.32-17-pve root=/dev/mapper/pve-root ro rootdelay=10 quiet”. After the system loaded I went into /boot and added the same entry to the real GRUB config. Now I had an Intel NUC with 1TB of storage and 16GB of RAM. I could have used the NAS over iSCSI, but that was a lot of config I didn’t want to do; also, I was setting up some databases on the system and didn’t want the overhead of the NAS’s RAIDZ2 at this time.
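
To make the change permanent, the usual route on a GRUB2/Debian-based system like Proxmox is /etc/default/grub rather than editing the generated file directly; a minimal sketch:

    # in /etc/default/grub, add rootdelay to the kernel command line:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

    # then regenerate the real config under /boot
    update-grub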

I have been using it for a few weeks, and it’s a nice little box. It never makes an audible amount of noise (although it does sit next to its louder brother, the NAS). Down the road, if I want more power, I can always get another NUC and put Proxmox into clustered mode. These boxes keep going down in price and up in power, so this setup can grow with my needs.

NAS Migrations 2013

For years I used Windows Server 2008 for my home files; having TechNet, I used Windows Server 2008 and then later 2008 R2. While this was nice, it was using software RAID across a random assortment of drives that were mirroring (RAID 1 style) between themselves. I originally went with this for the ease that Windows brings to things, but in the end, with it mainly being a file server, it just sat there mostly idle.

Fast-forward to this November, with space running out, I decided it was time to get a new system and replace the aging AMD Windows Server.

I wanted RAID 5 or 6, so that I was not losing as much space as with the RAID 10s I had been using. I also wanted the system to be less maintenance than a Windows Server that needs patching every month. I had recently heard good things about FreeNAS (freenas.org) from reddit.com/r/homelab; after seeing all the features of ZFS, I decided on RAID 6 with ZFS, also known as RAIDZ2.
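
FreeNAS builds the pool for you through the web UI, but under the hood a five-disk RAIDZ2 amounts to something like this (the pool and disk names are examples):

    # five-disk RAIDZ2 pool: any two drives can fail without data loss
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4

    # check pool health
    zpool status tank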

At first I looked at the HP Microservers, http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=5379860#!tab=features, yet after looking at what you got for the price, I decided I wanted to build the new system myself.

The first challenge was finding a small case that could hold the number of hard drives I wanted, at least 5, without having a large footprint. After some searching I came across the LIAN LI PC-Q25B, http://www.newegg.com/Product/Product.aspx?Item=N82E16811112339; while not a cheap case, it offered a 5 hard drive tray while not being that large. This suited my purposes nicely.

Next I had to find which CPU I wanted; since I was hoping to keep the cost of the system down, I looked at the AMD processors available. I was disappointed to see cheap Intel processors beating or matching far more expensive AMD chips. AMD would throw in extras to sweeten the deal, such as a decent GPU on the chip; however, this was a NAS, and I did not need all that extra stuff sitting there using power.

My final selection was an Intel Pentium G3220, http://www.newegg.com/Product/Product.aspx?Item=N82E16819116950; this part offers decent performance and is one of the latest Haswell chips, which would allow me to upgrade the system down the road if need be. The part also uses the latest socket, meaning it could handle the larger memory sizes available, while letting me use the MicroATX board the case required.

I threw in 16GB of RAM (if you haven’t looked, ZFS eats memory; the rule of thumb is about 1GB of memory per TB of storage just to idle) and five 3TB hard drives. I got the hard drives from different batches, so if something similar to Seagate’s 7200.11 drive failures happened again (http://www.theinquirer.net/inquirer/news/1050374/seagate-barracuda-7200-drives-failing) I would be protected.

Now that you know the hardware, I will talk a little about my experience with FreeNAS. The system is easy to install and has a nice interface. ZFS and the terminology FreeNAS uses take a little getting used to, but the wiki clears up a lot about what the different options do. I started the box on 9.1.0 and have updated to the latest 9.2.1; you can do updates through the web interface, and in that short time they have fixed a lot of little bugs, cleaned up the interface, and added new features. A nice new feature is the ability to make “Jails”; these are lightweight FreeBSD containers rather than full VMs, so they run on the system at little cost. I tend not to use them, because when I use a VM to develop I tend to need a decent amount of memory, and my FreeNAS with ZFS uses 12GB of the 16GB doing nothing. But a nice feature nonetheless. FreeNAS also has some plugins that are a few clicks away; I installed Plex so I could stream media easily over the home network. FreeNAS uses jails to run its plugins, creating a separate jail for each, which keeps your host’s data isolated from your plugins.

In the end, I am very happy with the box and its performance; my roommate and I have been able to sustain 100MB/s writes to it.

A quick side note: Plex is also a fantastic piece of software. You load it on a PC or NAS, point it at your media, and sit back. It scans through all your media and gets all the metadata automatically. Then you can stream with the web interface, or through a DLNA device on your network. There are also iPhone and Android apps that let you stream without setting up weird port forwarding: just a very slick and well-made product.

Java Windows Shortcut Library (Parsing and Creating!)

Recently I have been working on a project that involves extracting a bunch of files from zips. The problem I faced was that all the shortcuts within the zips were hard coded to locations, making it impossible for me to move the extracted zip data wherever I might want. I wanted a native library that could read and modify Windows Shortcuts so I could drop my zip data anywhere; my project is in Java, and its instant cross compatibility was needed. I know all my clients have Java installed, so that dependency was not an issue. Looking around on the internet, I found several options, including the popular https://github.com/jimmc/jshortcut. The downside to this popular jShortcut library is that you need a DLL; why you need a DLL to write a binary file, I am not sure. More specifically, you need a DLL for your PC’s instruction set, ick! After searching the far reaches of GitHub and getting to the end of my rope, I found https://github.com/kactech/jshortcut, written 5 years ago and not exactly popular on GitHub, and thought I would give it a try. IT’S AMAZING! With no dependencies, and just a single include, you can write, modify, and create new Windows Shortcuts! There is example code included, and it couldn’t be easier to use. I just wanted to make sure anyone who has had the same problem knows about this great library.

How To Remove Branding From a Dell OEM Server

NOTE: This is for Dell OEM systems only, run at your own risk.

Recently I RMAed motherboards for non-branded Dell servers. The problem I ran into is that I was getting branded system boards back when I had originally had non-branded ones. The non-branded BIOSes would just be blank with a progress bar instead of having the Dell logo. I ended up spending more time and energy talking to Dell again, trying to get boards to my specifications. I was told by several Dell engineers that, unfortunately, there was no way to fix this other than the factory setting the board up.

Well, they were wrong, and because I didn’t find this anywhere online I am going to detail the instructions. Note: this is ONLY for people who need to un-brand Dell OEM systems; I have done this with 12th generation servers and nothing else.

  1. Remove the old motherboard, and install the new motherboard into the chassis
  2. Now the first thing Dell training says is to set the service tag on the system now, DO NOT DO THIS YET
    • If you set the service tag, the unbranding tool will not work. If you have already set the service tag, more than likely by booting to DOS and using ftp://ftp.dell.com/utility/asset_a209.com, you can still fix this: boot back to DOS and use the tool again with “asset_~1 /s /d”. This is an undocumented feature that removes the service tag of the box.
  3. Start up any version of Windows that is at least Windows Vista. I used Windows 8 because you can get a 90 day evaluation for free, and that is enough for me to do the work I need on the box before handing it over.
  4. Go to Support.Dell.com and look up the box by its service tag to get to the OEM support site. If you don’t have the service tag, look up the generic version and get the URL; currently for an R720 it looks like this: http://www.dell.com/support/troubleshooting/us/en/04/Product/poweredge-r620. Now if you replace “poweredge” with “oth” you get the OEM version: http://www.dell.com/support/troubleshooting/us/en/04/Product/oth-r620
  5. Go to Drivers and Downloads, and find the download for “Identity Module”, I had to switch the OS selector to “Windows Server 2008 x64” to find it. Then hit “Download File”
  6. Now it will offer ~3 different files, one will be similar to “R620_Identity-Module_Application_WCPFW_WN32_1.01_A00.EXE”, stating “Identity-Module_Application”, download this file.
  7. Run this in Windows; it will ask if you are sure, just say yes. It can take up to 5 minutes, MAKE SURE NOTHING INTERRUPTS THE SERVER DURING THIS TIME.
  8. Reboot the server; it will come up with the branding again, then give a special message once it gets past POST, similar to “modifying branding”
  9. The system will reboot again, and the branding is gone
  10. Now boot to the DOS bootable drive (USB works well) and set the service tag for the system, as sketched below.
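
For that last step, the same asset tool from earlier should do it. This is a hedged guess at the syntax, since the tool is barely documented; given that “/s /d” deletes the service tag, setting one looks like:

    REM set the service tag from DOS (the tag value is a placeholder)
    asset_~1 /s ABC1234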

Now your OEM box that was impossible to unbrand has been unbranded.