<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>A Geek with Guns - Me</title>
    <link rel="self" type="application/atom+xml" href="https://www.christopherburg.com/tags/me/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://www.christopherburg.com"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-02-13T13:00:00-06:00</updated>
    <id>https://www.christopherburg.com/tags/me/atom.xml</id>
    <entry xml:lang="en">
        <title>Disabling Homebrew in Dinosaur OS</title>
        <published>2026-02-13T13:00:00-06:00</published>
        <updated>2026-02-13T13:00:00-06:00</updated>
        
        <author>
          <name>
            Christopher Burg
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://www.christopherburg.com/blog/disabling-homebrew-in-dinosaur-os/"/>
        <id>https://www.christopherburg.com/blog/disabling-homebrew-in-dinosaur-os/</id>
        
        <content type="html" xml:base="https://www.christopherburg.com/blog/disabling-homebrew-in-dinosaur-os/">&lt;p&gt;Since I &lt;a href=&quot;&#x2F;blog&#x2F;introducing-dinosaur-os&#x2F;&quot;&gt;released Dinosaur OS&lt;&#x2F;a&gt; last September, I&#x27;ve had to make very few changes to my image. This is a testament to the overall stability of Bluefin, the image upon which Dinosaur OS is based. But a couple of weeks ago I started receiving error notifications whenever the automatic update service ran. The initial error was actually due to a Flatpak problem: the developer of a package I installed uploaded a new version with the same version number as my currently installed version, which caused the Flatpak update program to fail. I managed to fix that, but the service was still displaying an error because it wasn&#x27;t able to update Homebrew.&lt;&#x2F;p&gt;
&lt;p&gt;Homebrew is a package manager that was originally written for macOS. It was released back when I used macOS, so I tried it, discovered that it was a train wreck, and opted to use &lt;a href=&quot;https:&#x2F;&#x2F;www.macports.org&#x2F;&quot;&gt;MacPorts&lt;&#x2F;a&gt; instead. Homebrew had a number of bizarre design decisions. The biggest was that it wanted to install packages at a system level. Normally that&#x27;s not a problem for a package manager, but Homebrew tied the system-level directory to your user account&#x27;s user ID number. Effectively, Homebrew installed packages at a system level but allowed only a single user account to use or modify them. There was an option to install packages into your home directory, but a number of packages failed to run when you did that.&lt;&#x2F;p&gt;
&lt;p&gt;When it was announced that Homebrew was available for Linux, I dismissed it entirely. Why would I want a poorly designed package manager on a system that already has a plethora of very good package managers? My experience with Homebrew was so bad that I initially intended to remove it from Dinosaur OS. I ultimately decided that enough time had passed that I should give Homebrew another chance. My latest experience mirrored my previous experience.&lt;&#x2F;p&gt;
&lt;p&gt;Homebrew on Linux suffers the same problem as it does on macOS. It doesn&#x27;t support systems with multiple user accounts well. When Bluefin installs Homebrew, it creates the &lt;code&gt;&#x2F;home&#x2F;linuxbrew&#x2F;&lt;&#x2F;code&gt; directory and sets its user and group IDs to 1000. Bluefin is based on Fedora and by default the first user account created on a Fedora system has the user and group IDs of 1000. All packages installed with Homebrew are installed into the &lt;code&gt;&#x2F;home&#x2F;linuxbrew&#x2F;&lt;&#x2F;code&gt; directory. This means Homebrew on Bluefin is configured so that only the very first user account created on the system can use it.&lt;&#x2F;p&gt;
&lt;p&gt;This is fine for most users, but I&#x27;m not most users. I have two user accounts on my system. The first is my administrator account; the second is a regular user account that I use for my day to day tasks. Administrator rights are required to create new user accounts, so obviously I created my administrator account first. This means the account I actually use day to day, which has the user and group IDs of 1001 (the default on Fedora systems for the second user account created), can&#x27;t use Homebrew.&lt;&#x2F;p&gt;
&lt;p&gt;There are ways around this. I could change the ownership of &lt;code&gt;&#x2F;home&#x2F;linuxbrew&#x2F;&lt;&#x2F;code&gt; to user and group ID 1001. Homebrew on Bluefin is set up by the &lt;code&gt;brew-setup.service&lt;&#x2F;code&gt; systemd service, which sets the permissions, so I could instead change that unit file in my image to set the user and group IDs to 1001. Either option would allow my day to day user account to use Homebrew-installed packages, but would prevent my administrator account from using them. The bottom line is that Homebrew is a poorly designed package manager.&lt;&#x2F;p&gt;
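&lt;p&gt;The first workaround is a single command (a sketch based on my setup; the 1001 IDs belong to my day to day account, so adjust them for yours):&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;# Hand the Homebrew directory to the account with user and group IDs 1001.
sudo chown -R 1001:1001 &#x2F;home&#x2F;linuxbrew&#x2F;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;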
&lt;p&gt;I chose a third option: ignore Homebrew entirely. There was no downside to this option at first, but a few weeks ago changes were made to Bluefin&#x27;s automatic updater that caused me to reexamine my decision. As noted at the beginning of this article, I started receiving notifications that the automatic update service failed. Checking journalctl showed me that the source of the error was the update utility, &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ublue-os&#x2F;uupd&quot;&gt;UUPD&lt;&#x2F;a&gt;, being unable to upgrade Homebrew. The failure occurred because &lt;code&gt;&#x2F;home&#x2F;linuxbrew&#x2F;&lt;&#x2F;code&gt;, which normally contains the brew executable used to install and update packages, was empty (I didn&#x27;t investigate why it was empty since I was already done with Homebrew).&lt;&#x2F;p&gt;
&lt;p&gt;Fortunately, disabling and removing Homebrew from Bluefin is straightforward. Homebrew is installed by the &lt;code&gt;brew-setup.service&lt;&#x2F;code&gt; systemd service, which is enabled by default on Bluefin. Disabling the service prevents it from automatically installing Homebrew, so Dinosaur OS disables it. I also add a script, &lt;code&gt;&#x2F;usr&#x2F;libexec&#x2F;remove-brew&lt;&#x2F;code&gt;, to the image, which removes Homebrew from a system where it&#x27;s already installed. The script has to be run manually, which makes Dinosaur OS nondestructive: it won&#x27;t automatically remove Homebrew from a system where it&#x27;s already installed. It also means Homebrew can be installed again by either starting or enabling &lt;code&gt;brew-setup.service&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
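&lt;p&gt;The equivalent manual steps look like this (a sketch; &lt;code&gt;remove-brew&lt;&#x2F;code&gt; only exists on Dinosaur OS, and disabling the service only prevents future installs):&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;# Stop Bluefin from automatically installing Homebrew.
sudo systemctl disable --now brew-setup.service
# On Dinosaur OS, remove an existing Homebrew installation manually.
sudo &#x2F;usr&#x2F;libexec&#x2F;remove-brew
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;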
&lt;p&gt;I still had the problem where UUPD would throw an error because it was unable to update Homebrew (which was now missing entirely). UUPD on Bluefin accepts arguments that disable update modules though. The final change I made to Dinosaur OS was adding &lt;code&gt;--disable-module-brew&lt;&#x2F;code&gt; to the ExecStart line of &lt;code&gt;uupd.service&lt;&#x2F;code&gt;, which is activated periodically by a timer. &lt;code&gt;uupd.service&lt;&#x2F;code&gt; is a system unit file, which means the user cannot edit it directly. Therefore, if you&#x27;re running Dinosaur OS and install Homebrew, your Homebrew packages won&#x27;t be automatically updated by &lt;code&gt;uupd.service&lt;&#x2F;code&gt;. The best way to change this behavior is to copy &lt;code&gt;&#x2F;usr&#x2F;lib&#x2F;systemd&#x2F;system&#x2F;uupd.service&lt;&#x2F;code&gt; to &lt;code&gt;&#x2F;etc&#x2F;systemd&#x2F;system&#x2F;uupd.service&lt;&#x2F;code&gt; and remove &lt;code&gt;--disable-module-brew&lt;&#x2F;code&gt; from the ExecStart line.&lt;&#x2F;p&gt;
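&lt;p&gt;The override boils down to three commands (a sketch; units in &lt;code&gt;&#x2F;etc&#x2F;systemd&#x2F;system&#x2F;&lt;&#x2F;code&gt; take precedence over those in &lt;code&gt;&#x2F;usr&#x2F;lib&#x2F;systemd&#x2F;system&#x2F;&lt;&#x2F;code&gt;):&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;sudo cp &#x2F;usr&#x2F;lib&#x2F;systemd&#x2F;system&#x2F;uupd.service &#x2F;etc&#x2F;systemd&#x2F;system&#x2F;uupd.service
# Remove --disable-module-brew from the ExecStart line, then reload systemd.
sudoedit &#x2F;etc&#x2F;systemd&#x2F;system&#x2F;uupd.service
sudo systemctl daemon-reload
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;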
&lt;p&gt;Overall I still like Bluefin a lot. I agree with most of the design decisions and appreciate that it&#x27;s been easy for me to change the decisions I dislike. I continue to run Dinosaur OS on my desktop systems and haven&#x27;t faced any catastrophic problems. If you want to create your own image based on Bluefin and want an example image to get started, check the &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ChristopherBurg&#x2F;dinosaur-os&quot;&gt;Dinosaur OS repository&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Upgrades</title>
        <published>2025-10-15T12:00:00+00:00</published>
        <updated>2025-10-15T12:00:00+00:00</updated>
        
        <author>
          <name>
            Christopher Burg
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://www.christopherburg.com/blog/upgrades/"/>
        <id>https://www.christopherburg.com/blog/upgrades/</id>
        
        <content type="html" xml:base="https://www.christopherburg.com/blog/upgrades/">&lt;p&gt;I&#x27;ve been more silent as of late than usual. This is because I&#x27;ve been spending a lot of my free time upgrading my network infrastructure in anticipation of a major Internet upgrade. Most of this involved backend work that you readers won&#x27;t notice. Suffice it to say that I made major edits to my Ansible playbooks and redid several parts of my home network.&lt;&#x2F;p&gt;
&lt;p&gt;A bit more than five years ago my wife and I moved to this house in rural Wisconsin. We loved everything about the property except for one thing: the only Internet option was DSL. I had DSL when I was in college and wasn&#x27;t looking forward to going back to it after having enjoyed cable Internet for a decade, but the property was good enough that I was willing to make the sacrifice. One upside was that the DSL was decent at 20 Mbps down and 1.5 Mbps up. I was also able to get a static IP address so I could continue self-hosting my services.&lt;&#x2F;p&gt;
&lt;p&gt;Eventually Starlink became available in my area so I signed up. It was a significant upgrade. I regularly got 200 Mbps down and between 15 and 20 Mbps up. The two downsides were that my Starlink connection went offline during severe storms and I couldn&#x27;t get a static IP address. Therefore, I kept the DSL so I had a backup when the weather was bad and could continue hosting my services. My home network had two gateways and each client was assigned the appropriate gateway from my DHCP servers. My self-hosted services used the DSL gateway and everything else used the Starlink gateway unless there was severe weather. When there was severe weather, I ran an Ansible script that rebuilt my DHCP servers&#x27; configurations to assign every client the DSL gateway. Each client would start using the DSL connection when their DHCP lease expired and renewed (I use short leases for this reason). I admit it wasn&#x27;t the most elegant solution, but it was good enough for how rarely the Starlink connection went offline.&lt;&#x2F;p&gt;
&lt;p&gt;When I first bought this house, the DSL was provided by CenturyLink. CenturyLink are a shower of bastards. Whenever the DSL went offline, I had to suffer through a minimum of five phone transfers to get to a tech who could actually fix the issue. Eventually CenturyLink sold its DSL business to Brightspeed. Brightspeed somehow managed to be worse.&lt;&#x2F;p&gt;
&lt;p&gt;About a year ago a contractor buried fiber down my road. I expected to receive some notice that an ISP would be providing fiber service in my area, but no such notice ever arrived. I searched high and low for the ISP that owned the fiber but found nothing. A few months ago Brightspeed announced that my static IP address would change. My history with Brightspeed told me that the changeover wouldn&#x27;t go well, so I searched for the owner of the buried fiber once again. This time I found the ISP: Lakeland Communications. I gave them a call and they confirmed that they provide fiber to my area. The only downside was the cost of installing the fiber from the road to my house. My house is far from the road, so the connection wasn&#x27;t cheap (though it was reasonable considering the distance).&lt;&#x2F;p&gt;
&lt;p&gt;I was able to push off Brightspeed&#x27;s static IP address assignment, which turned out to be a blessing. To say the static IP address change went poorly would be an understatement. They managed to fuck it up completely. I had no Internet connectivity after the change. I spent a total of two hours on the phone with their technical support, all of whom are worthless, to no avail. Fortunately, Lakeland was scheduled to complete the fiber installation two days after that so I was only without my self-hosted services for about 48 hours.&lt;&#x2F;p&gt;
&lt;p&gt;Lakeland Communications proved to be competent and easygoing. Because I self-host services including e-mail, I expected to need to sign up for a business account (which is what I&#x27;ve always had to do with other ISPs), but was told that I could self-host from a residential connection without any issue. They were also more than happy to let me use my router instead of theirs, which made reconfiguring my network as easy as changing the static IP address in my UniFi Network Controller. The speeds are good enough that I now have to upgrade my router, an old Ubiquiti Security Gateway 3P, as well as my Wi-Fi access points.&lt;&#x2F;p&gt;
&lt;p&gt;Now when you access this site, it should download significantly faster. I can finally make full use of a number of my self-hosted services too. It&#x27;s nice to have these capabilities again after five years of DSL restricting me to making sparse use of my services when I&#x27;m not home.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Introducing Dinosaur OS</title>
        <published>2025-09-05T12:00:00+00:00</published>
        <updated>2025-09-05T12:00:00+00:00</updated>
        
        <author>
          <name>
            Christopher Burg
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://www.christopherburg.com/blog/introducing-dinosaur-os/"/>
        <id>https://www.christopherburg.com/blog/introducing-dinosaur-os/</id>
        
        <content type="html" xml:base="https://www.christopherburg.com/blog/introducing-dinosaur-os/">&lt;p&gt;Along with &lt;a href=&quot;&#x2F;blog&#x2F;the-thinkpad-t16-gen-4-amd-accepts-96-gb-of-ram&#x2F;&quot;&gt;my new laptop&lt;&#x2F;a&gt;, I decided to try a new operating system. I&#x27;ve been running Fedora Workstation since I bought my ThinkPad P52s in 2018. It&#x27;s a great distribution. However, I&#x27;ve been interested in immutable Linux distributions for a long time. Unfortunately, getting the proprietary Nvidia driver (the P52s has an Nvidia GPU) working on Fedora Silverblue without having to disable SecureBoot was a hurdle I didn&#x27;t want to jump over. Fortunately, my new laptop has an AMD processor and integrated GPU so I don&#x27;t have to deal with Nvidia&#x27;s shenanigans anymore. I originally intended to run Fedora Silverblue, but then I came across &lt;a href=&quot;https:&#x2F;&#x2F;projectbluefin.io&#x2F;&quot;&gt;Bluefin&lt;&#x2F;a&gt;, a &lt;a href=&quot;https:&#x2F;&#x2F;universal-blue.org&#x2F;&quot;&gt;Universal Blue&lt;&#x2F;a&gt; image. Universal Blue images are based on Silverblue and meant to serve as foundations for building your own images.&lt;&#x2F;p&gt;
&lt;p&gt;Immutable Linux distributions differ from typical distributions in a number of ways. The biggest difference, which is in the name, is that the base system is immutable. This means you can&#x27;t install packages in the normal manner. Silverblue has support for overlay packages, but installing overlay packages on an immutable distribution can cause headaches down the road (especially when performing major version upgrades). The advantage is that upgrades are simple and rolling back to a previous version is as simple as rebooting. Rebasing to other immutable images is also simple (as is rolling back if you decide you don&#x27;t like the new image).&lt;&#x2F;p&gt;
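&lt;p&gt;To give a flavor of how simple these operations are, here&#x27;s what rolling back and rebasing look like on the command line (a sketch using the standard rpm-ostree and bootc tools):&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;# Mark the previous deployment as the default for the next boot.
sudo rpm-ostree rollback
# Rebase to a different image, then reboot into it.
sudo bootc switch ghcr.io&#x2F;christopherburg&#x2F;dinosaur-os
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;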
&lt;p&gt;I chose Bluefin over Silverblue for a few reasons. First, Silverblue uses the Fedora flatpak repository by default and all of the included flatpaks are sourced from that repository. I prefer to use Flathub because the packages are up-to-date. Bluefin uses Flathub. Second, Bluefin includes &lt;a href=&quot;https:&#x2F;&#x2F;distrobox.it&#x2F;&quot;&gt;Distrobox&lt;&#x2F;a&gt;. Silverblue includes Toolbox, which I find inferior to Distrobox. Third, Bluefin includes Bazaar as its graphical Flatpak manager. Silverblue, like Fedora Workstation, still uses GNOME Software, which is so buggy that I end up managing flatpaks via the command line when I use Workstation. Fourth, dinosaurs. Bluefin&#x27;s theme involves dinosaurs and dinosaurs are cool (hence the name of my modification of Bluefin).&lt;&#x2F;p&gt;
&lt;p&gt;There&#x27;s just one problem with Bluefin (Silverblue has this problem too). It doesn&#x27;t include libvirt. I rely heavily on libvirt. It&#x27;s my virtual machine manager of choice. Installing libvirt on Fedora Workstation was a simple dnf command away. But Bluefin is immutable so dnf isn&#x27;t a lot of help. Bluefin does offer a solution out of the box in the form of the Bluefin Developer Experience (DX). Bluefin includes an easy way to rebase to Bluefin DX. The problem with Bluefin DX is that it includes a lot of tools I don&#x27;t want to use such as Docker, Visual Studio Code, and Incus. This led me down a rabbit hole of learning how to make my own modified version of Bluefin.&lt;&#x2F;p&gt;
&lt;p&gt;Modifying Bluefin or any other Universal Blue image is dead simple. There&#x27;s a &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ublue-os&#x2F;image-template&quot;&gt;handy template&lt;&#x2F;a&gt; that you can fork to get started. From there you can modify the Bluefin build process to add, remove, or modify whatever you want. The result of my efforts is &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ChristopherBurg&#x2F;dinosaur-os&quot;&gt;Dinosaur OS&lt;&#x2F;a&gt;. Dinosaur OS is basically Bluefin with libvirt added. There are a few other minor modifications too, but the inclusion of libvirt is the main difference. Switching from Bluefin to Dinosaur OS is as simple as issuing the &lt;code&gt;sudo bootc switch ghcr.io&#x2F;christopherburg&#x2F;dinosaur-os&lt;&#x2F;code&gt; command. Once it completes downloading and staging the image, reboot your computer and you&#x27;ll be in my modified version of Bluefin.&lt;&#x2F;p&gt;
&lt;p&gt;The instructions to get started are all in the template&#x27;s README.md file. But there are two files that will contain the lion&#x27;s share of your changes. The first is the Containerfile. The second is &lt;code&gt;&#x2F;build_files&#x2F;build.sh&lt;&#x2F;code&gt;. I&#x27;ve organized my repository since my initial release, but most of the changes are made by the shell scripts in &lt;code&gt;&#x2F;build_files&#x2F;&lt;&#x2F;code&gt;. For example, &lt;code&gt;&#x2F;build_files&#x2F;base&#x2F;00-install-libvirt.sh&lt;&#x2F;code&gt; executes dnf to install the libvirt packages. It also adds the libvirt group to the image so you can add user accounts to the group, and enables a systemd service that fixes some SELinux permission issues. None of this was my original idea. I pieced together how &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ublue-os&#x2F;bluefin&quot;&gt;Bluefin DX&lt;&#x2F;a&gt; installs libvirt and made those modifications in my image.&lt;&#x2F;p&gt;
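&lt;p&gt;To give an idea of what such a build script looks like, here&#x27;s a simplified sketch of the libvirt step (not the exact contents of my script; the package set is illustrative, so check the repository for the real thing):&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;#!&#x2F;usr&#x2F;bin&#x2F;bash
set -euo pipefail
# Install the libvirt packages into the image.
dnf install -y libvirt virt-manager
# Create the libvirt group so user accounts can be added to it later.
groupadd -f libvirt
# Enable the libvirt daemon.
systemctl enable libvirtd.service
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;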
&lt;p&gt;Now that it&#x27;s configured, my GitHub repository rebuilds the image once a day. This pulls in the updates from Bluefin, which in turn pulls in the updates from its base Universal Blue image. Dinosaur OS downloads new images automatically so when I reboot my computer, I boot into the latest image.&lt;&#x2F;p&gt;
&lt;p&gt;I don&#x27;t expect anybody to run Dinosaur OS themselves. It&#x27;s custom tailored to my use case. My hope is that it can work as a template or example for anybody who wants to make their own image. The coolest thing about image based distributions is that it&#x27;s trivial to make a bespoke image that fits your use cases. The coolest thing about using Universal Blue as the foundation is that it automates most of the work.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>If You&#x27;re Reading This</title>
        <published>2025-08-30T12:00:00+00:00</published>
        <updated>2025-08-30T12:00:00+00:00</updated>
        
        <author>
          <name>
            Christopher Burg
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://www.christopherburg.com/blog/if-you-re-reading-this/"/>
        <id>https://www.christopherburg.com/blog/if-you-re-reading-this/</id>
        
        <content type="html" xml:base="https://www.christopherburg.com/blog/if-you-re-reading-this/">&lt;p&gt;If you&#x27;re reading this, I&#x27;ve configured my new reverse proxy correctly. I&#x27;ve spent the last couple of days performing a major overhaul of my network infrastructure. This overhaul gave me the opportunity to rethink a few of my servers. The biggest rework was my reverse proxy. For years I&#x27;ve been using Nginx to provide TLS connections for my various self-hosted services. Many of my Nginx configuration files were nightmares to read and edit because of the complexity of some of the services I self-host. It was finally bad enough that I started searching for alternatives. I eventually came across &lt;a href=&quot;https:&#x2F;&#x2F;caddyserver.com&#x2F;&quot;&gt;Caddy&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Caddy fucks. Consider the Nginx reverse proxy configuration for this blog, which is one of the simplest configurations on my reverse proxy server:&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;map $http_upgrade $connection_upgrade {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    default upgrade;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    &amp;#39;&amp;#39; close;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;}
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;server {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    listen *:80;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    server_name www.christopherburg.com;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    include conf.d&#x2F;certbot.conf;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    location &#x2F; {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        return 301 https:&#x2F;&#x2F;www.christopherburg.com$request_uri;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    }
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;}
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;server {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    listen *:443 ssl http2;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    server_name www.christopherburg.com;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    autoindex off;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    access_log &amp;lt;path to the access logs&amp;gt;;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    ssl_certificate &amp;lt;path to the certificate&amp;gt;;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    ssl_certificate_key &amp;lt;path to the certificate key&amp;gt;;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    ssl_trusted_certificate &amp;lt;path to the trust chain&amp;gt;;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    # A bunch of TLS configurations to ensure old, weak ciphers aren&amp;#39;t used.
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    include conf.d&#x2F;ssl.conf;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    location &#x2F; {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_pass http:&#x2F;&#x2F;&amp;lt;actual server hosting site&amp;gt;;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        include conf.d&#x2F;headers.conf;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header X-Forwarded-Port $server_port;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header X-Forwarded-Scheme $scheme;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header X-Forwarded-Proto $scheme;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header X-Real-IP $remote_addr;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header Host $host;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_request_buffering off;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_read_timeout 86400s;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        client_max_body_size 0;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        # Websocket
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_http_version 1.1;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header Upgrade $http_upgrade;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;        proxy_set_header Connection $connection_upgrade;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    }
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now look at the Caddy equivalent:&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;www.christopherburg.com {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    reverse_proxy &amp;lt;actual server hosting site&amp;gt;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Caddy handles most of the heavy lifting. It ensures TLS is enabled, automatically pulls certificates for your sites via Let&#x27;s Encrypt, automatically redirects HTTP to HTTPS, and has sane defaults for all the reverse proxy configurations. Writing the Ansible playbook to build my reverse proxy took me about an hour and that&#x27;s with zero Caddy experience beforehand.&lt;&#x2F;p&gt;
&lt;p&gt;With this change comes another. I finally shut down the old WordPress blog. Its URL, blog.christopherburg.com, now redirects here. Setting up this redirect in Caddy was as simple as:&lt;&#x2F;p&gt;
&lt;pre class=&quot;z-code&quot;&gt;&lt;code&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;blog.christopherburg.com {
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;    redir https:&#x2F;&#x2F;www.christopherburg.com&#x2F;blog&#x2F;
&lt;&#x2F;span&gt;&lt;span class=&quot;z-text z-plain&quot;&gt;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Caddy is working so well that I&#x27;m wondering when the other shoe will drop. When will a weird edge case rear its ugly head and bring me hours of frustration as I try desperately to fix it? I don&#x27;t know the answer to that. Until it does appear though, I&#x27;m very impressed with Caddy. If you&#x27;re self-hosting websites, I strongly encourage you to check it out.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Diet Cleanup</title>
        <published>2025-07-20T12:00:00+00:00</published>
        <updated>2025-07-20T12:00:00+00:00</updated>
        
        <author>
          <name>
            Christopher Burg
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://www.christopherburg.com/blog/diet-cleanup/"/>
        <id>https://www.christopherburg.com/blog/diet-cleanup/</id>
        
        <content type="html" xml:base="https://www.christopherburg.com/blog/diet-cleanup/">&lt;p&gt;The diet of the average American is revolting. It seems to be made up primarily of carbohydrates. While it&#x27;s trendy and easy to blame this on the food pyramid (the base of which is carbohydrates and top includes more carbohydrates), the average American diet is even worse than that. The side effect of this diet is easy for anybody with eyes to see. Everywhere you go is populated with obese people. The upside for us Americans is that there&#x27;s an easy starting point for cleaning up our diet: cut back on the carbohydrates.&lt;&#x2F;p&gt;
&lt;p&gt;I&#x27;ve gone through several iterations of diet cleanup. My first one was simple. I all but completely stopped drinking carbonated high fructose corn syrup (commonly referred to as pop or soda). The only thing I didn&#x27;t eliminate entirely was root beer. I&#x27;ll drink two or three in a year. But the only things I regularly drink are water, milk, tea, and coffee. Simply cutting out soda can remove a tremendous amount of empty calories.&lt;&#x2F;p&gt;
&lt;p&gt;My previous cleanup efforts focused on two things. The first was my main weakness when it comes to food: salty snacks like potato chips, pretzels, etc. Like my efforts with soda, I didn&#x27;t completely eliminate those junk foods from my diet. I prefer moderation over elimination. The second was eating more vegetables. Each week I&#x27;d cut up a bunch of vegetables, put them into containers, and eat them with meals throughout the week. I ended up abandoning that effort because it resulted in some unpleasant digestive side effects. I still eat vegetables, but not as many as I was.&lt;&#x2F;p&gt;
&lt;p&gt;At this point my diet is pretty decent. I&#x27;ve eliminated most of the common American problems and am now focusing on tweaking things. My latest efforts have focused on further increasing protein intake. Recommendations for protein intake vary. I&#x27;ve seen numbers as low as 0.8 g per kilogram of body weight and as high as 3.1 g per kilogram of body weight. I&#x27;m aiming for a daily intake between 1.5 and 2 g per kilogram of body weight (for an 80 kg person, that works out to 120 to 160 g of protein per day). Part of my diet being pretty decent is that I&#x27;m consuming enough calories to maintain (I&#x27;m neither losing nor gaining) body weight. I want to keep it that way. In order to accomplish that, I need to increase the amount of protein I take in per calorie. Another thing I want to do is increase my intake of dietary fiber. I, like most Americans, consume an inadequate amount.&lt;&#x2F;p&gt;
&lt;p&gt;The first change I made was to what I call mobile meals. I need to drive into the office two days a week. I have three options for lunch on those days: eat out, pack a lunch, or fast. Eating out is expensive and eating healthy at a restaurant is challenging (actually damn near impossible). I&#x27;m already maintaining body weight so I&#x27;m not interested in fasting. That leaves the option of packing a lunch. On one of those two days I also have martial arts classes in the evening so I leave home in the morning and don&#x27;t return until late at night. That means I need to pack dinner too. Since the dinner will be sitting in a lunch box all day, I also need something that will keep without refrigeration and doesn&#x27;t require cooking (admittedly I could bring my small camp stove to cook on the go, but I&#x27;m lazy).&lt;&#x2F;p&gt;
&lt;p&gt;My wife and I buy half a cow every year from a friend of ours who raises beef cattle. When you buy half a cow, you end up with &lt;em&gt;a lot&lt;&#x2F;em&gt; of ground beef. This gives me a great option for mobile meals: homemade beef sticks. I bought a dehydrator at the beginning of this year specifically for this. Unlike store-bought beef sticks, homemade ones don&#x27;t need to be loaded up with salt, sugar, and other common ingredients I want to avoid. While they do require refrigeration (we don&#x27;t use curing salts in ours) for long-term storage, they easily survive the day without it. If you don&#x27;t have a dehydrator and a supply of ground beef, summer sausage also works well (but you&#x27;ll need to keep the ingredient list in mind when buying from a grocery store). I also buy mixed nuts from the grocery store, which are a decent source of protein along with other nutrients. My mobile meals typically consist of a sandwich, homemade beef sticks, and mixed nuts. I might toss in a protein bar too (all of the ones I&#x27;ve tried taste like ass though so I only eat them when I need to in order to meet my protein intake goal).&lt;&#x2F;p&gt;
&lt;p&gt;The second change I made was to breakfast. I developed an overnight oats recipe that consists of rolled oats; vanilla protein powder; a mix of chia, flax, and hemp seeds; collagen; creatine; milk; and nonfat Greek yogurt. When I pull it out of the fridge, I mix in a bunch of berries too. This ends up being a huge intake of both protein and fiber. It&#x27;s also easy to prepare. On Sunday I toss all of the dry ingredients into mason jars. Every evening I add the milk and yogurt to a jar, stir the contents up, and place the jar in the fridge.&lt;&#x2F;p&gt;
&lt;p&gt;There are three key points I want you to take away from this post. First is the importance of diet. It&#x27;s one of my three pillars of fitness along with exercise and sleep. All three pillars are weak for most Americans, but the diet pillar is probably the easiest to improve quickly because there is an obvious strategy: reduce carbohydrate intake. Second is improving your diet in stages. You don&#x27;t need to drastically change your entire diet immediately. Most people who do this fail long term. Instead, start with easy targets. Cut down on the biggest culprits like soda and candy. Then address the next biggest culprits. Continue this over time (the timeframe can be months or years) until you develop a diet that is pretty decent. Once your diet is pretty decent you can tweak it for specific goals. Third is moderation. You don&#x27;t need to completely eliminate things like soda, candy, and junk food. You can, but simply cutting down the amount you consume over time is also an effective strategy.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>A Fresh Start</title>
        <published>2024-10-29T01:53:00+00:00</published>
        <updated>2024-10-29T01:53:00+00:00</updated>
        
        <author>
          <name>
            Christopher Burg
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://www.christopherburg.com/blog/a-fresh-start/"/>
        <id>https://www.christopherburg.com/blog/a-fresh-start/</id>
        
        <content type="html" xml:base="https://www.christopherburg.com/blog/a-fresh-start/">&lt;p&gt;Did you come to this site from my old blog? If so, welcome. If not, welcome all the same.&lt;&#x2F;p&gt;
&lt;p&gt;For the two of you who read my old blog, you noticed the almost complete lack of new content appearing in the last few years. Part of this is because after blogging regularly for 10 years, I got sick of creating new content constantly. I also decided to pull back from online discourse because things have become so divisive in the United States that it&#x27;s damn near impossible to hold a civil conversation about anything but the weather (and even that can be controversial).&lt;&#x2F;p&gt;
&lt;p&gt;Why the return? I still like writing and a blog is a good excuse to do so. It also has the benefit of providing me the delusion that I have an audience. Why the new site? If you&#x27;re not aware, the old blog is&#x2F;was run on WordPress. I&#x27;ve long wanted to move away from WordPress for a number of reasons. The biggest is that it&#x27;s a nightmare to maintain. I self-host almost everything and maintaining a self-hosted WordPress site required maintaining the web server the site runs on, which involved maintaining PHP and MariaDB, as well as the WordPress software itself. The WordPress developers have also implemented a number of interface changes that I don&#x27;t like. Although most of them can be undone with add-ons, add-ons are also the single biggest security issue facing WordPress sites. I also don&#x27;t need a lot of the features. For example, comments. See my remark about discourse above. If you want to comment on something I write, you can do it on your own website.&lt;&#x2F;p&gt;
&lt;p&gt;Static site generators are the hip thing and there are several strong technical reasons for that. The foremost is that it&#x27;s easy to host a static site. You don&#x27;t need PHP or a database. The entire site can be committed to a git repository for easy version control. I also like that the writing tool is separate from the server. In 2018, WordPress moved to its new Gutenberg interface for writing articles and it&#x27;s shit. I installed an add-on to disable it. With a static site, I use my preferred text editor to write articles in Markdown. It&#x27;s easy and clean.&lt;&#x2F;p&gt;
&lt;p&gt;I&#x27;m writing this article in vim like a civilized person. The site you&#x27;re reading was generated with &lt;a href=&quot;https:&#x2F;&#x2F;www.getzola.org&#x2F;&quot;&gt;Zola&lt;&#x2F;a&gt;, which I chose because it&#x27;s written in Rust and Rust is the single greatest programming language on Odin&#x27;s blood-soaked planet. Also, I found a theme I like called &lt;a href=&quot;https:&#x2F;&#x2F;duckquill.daudix.one&#x2F;&quot;&gt;Duckquill&lt;&#x2F;a&gt;. As dumb as it sounds, I spent a lot of time looking for a theme I liked. It has dark mode too, so your eyes won&#x27;t get blasted if you use that like I do.&lt;&#x2F;p&gt;
&lt;p&gt;What about the old site? I&#x27;m going to keep it running for a while longer mostly to create a post that gives everyone still subscribed to my RSS feed a heads-up that the site is moving. For the reasons I mentioned above, I don&#x27;t want to maintain that old WordPress site forever, so I used the excellent &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lonekorean&#x2F;wordpress-export-to-markdown&quot;&gt;wordpress-export-to-markdown&lt;&#x2F;a&gt; application (and the also excellent &lt;a href=&quot;https:&#x2F;&#x2F;distrobox.it&#x2F;&quot;&gt;Distrobox&lt;&#x2F;a&gt; to isolate that NPM garbage in an easily destroyable container) to migrate all of my old WordPress articles to this site. They are available in the &lt;a href=&quot;&#x2F;archive&#x2F;&quot;&gt;Archive&lt;&#x2F;a&gt;. The archive is crude. None of the links from my old blog will work because the URL format changed in the move from WordPress to this site. The old tags are also missing. They&#x27;re embedded in the Markdown source code, but don&#x27;t show on the site. There are probably a bunch of other issues with the archived articles due to the automated way I ported them. I have zero motivation to go through 10 years of articles and correct them, so they are as-is. The search at the top should help you find an old article if, for some reason, you&#x27;re looking for one.&lt;&#x2F;p&gt;
</content>
        
    </entry>
</feed>
