<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2026-03-09T21:09:31+00:00</updated><id>/feed.xml</id><title type="html">Justin O’Conner</title><subtitle>Software Architect, Coffee Drinker, Kind of a Dork</subtitle><author><name>Justin O&apos;Conner</name></author><entry><title type="html">Bitmap Fonts in MonoGame</title><link href="/blog/2026/03/09/bitmap-fonts-in-monogame.html" rel="alternate" type="text/html" title="Bitmap Fonts in MonoGame" /><published>2026-03-09T08:35:43+00:00</published><updated>2026-03-09T08:35:43+00:00</updated><id>/blog/2026/03/09/bitmap-fonts-in-monogame</id><content type="html" xml:base="/blog/2026/03/09/bitmap-fonts-in-monogame.html"><![CDATA[<div style="display: flex; justify-content: center; padding: 10px 10px 10px 30px; min-width: 100px; max-height: 400px;">
    <img src="https://monogame.net/images/logo_dark.svg" />
</div>

<p><br /></p>

<p>This is a quick update chronicling my journey with fonts in <a href="https://monogame.net">MonoGame</a>. Like the XNA Framework before it, MonoGame ships with decently robust support for creating sprite fonts from <code class="language-plaintext highlighter-rouge">.ttf</code> files as part of its content pipeline. You can then use those fonts to render text to the display very easily.</p>

<p>However, also like XNA, it’s notorious for making those fonts look like absolute garbage - especially at lower resolutions.</p>

<h2 id="the-problem-with-stock-spritefont">The Problem with Stock <code class="language-plaintext highlighter-rouge">SpriteFont</code></h2>

<p>MonoGame’s <code class="language-plaintext highlighter-rouge">SpriteFont</code> has serious limitations. It makes several assumptions about how the font should be rasterized, and most of them can’t be disabled in the font’s definition file. It applies some pretty fuzzy anti-aliasing, and the smaller you try to render the font, the worse the effect gets. Take a look at this example:</p>

<p><img src="/images/monogame_blurryfont.png" /></p>

<p>I put this together using <a href="https://fonts.google.com/specimen/Roboto">Roboto Medium</a> at 12pt, rendered onto a 480x270 internal surface. <small>(The actual display has been scaled up to 960x540.)</small></p>

<p>You’ll notice that it looks… wrong. Obviously it’s a competently rasterized font, but it almost looks like you’re reading it through a magnifying glass, or like it’s been scaled up using a non-integer method. Alas, no - that’s just how fonts look in MonoGame.</p>

<p>The sprite font configuration (an XML manifest in a <code class="language-plaintext highlighter-rouge">.spritefont</code> file) is very straightforward:</p>

<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">&lt;?xml version="1.0" encoding="utf-8"?&gt;</span>
<span class="nt">&lt;XnaContent</span> <span class="na">xmlns:Graphics=</span><span class="s">"Microsoft.Xna.Framework.Content.Pipeline.Graphics"</span><span class="nt">&gt;</span>
    <span class="nt">&lt;Asset</span> <span class="na">Type=</span><span class="s">"Graphics:FontDescription"</span><span class="nt">&gt;</span>
        <span class="nt">&lt;FontName&gt;</span>Roboto-Medium.ttf<span class="nt">&lt;/FontName&gt;</span>
        <span class="nt">&lt;Size&gt;</span>12<span class="nt">&lt;/Size&gt;</span>
        <span class="nt">&lt;Spacing&gt;</span>0<span class="nt">&lt;/Spacing&gt;</span>
        <span class="nt">&lt;UseKerning&gt;</span>true<span class="nt">&lt;/UseKerning&gt;</span>
        <span class="nt">&lt;Style&gt;</span>Regular<span class="nt">&lt;/Style&gt;</span>
        <span class="nt">&lt;CharacterRegions&gt;</span>
            <span class="nt">&lt;CharacterRegion&gt;</span>
                <span class="nt">&lt;Start&gt;</span><span class="ni">&amp;#32;</span><span class="nt">&lt;/Start&gt;</span>
                <span class="nt">&lt;End&gt;</span><span class="ni">&amp;#126;</span><span class="nt">&lt;/End&gt;</span>
            <span class="nt">&lt;/CharacterRegion&gt;</span>
        <span class="nt">&lt;/CharacterRegions&gt;</span>
    <span class="nt">&lt;/Asset&gt;</span>
<span class="nt">&lt;/XnaContent&gt;</span>
</code></pre></div></div>

<p>If you’re looking for <a href="https://docs.monogame.net/articles/getting_to_know/whatis/content_pipeline/CP_SpriteFontSchema.html">additional properties</a> to configure the font’s rasterization further… too bad. There aren’t any.</p>

<p>Now, if you’re rendering at higher resolutions, like 720p and up, the blurry effect is far less noticeable. In fact, smoothing might even be desirable at those scales; 12pt font isn’t actually very big on a 720p display. <strong>However</strong>, what if you don’t want any smoothing at all? What if you’re making a retro game where the chunky pixels are a feature?</p>

<h2 id="introducing-bitmap-fonts">Introducing Bitmap Fonts</h2>

<p>I am, of course, not the first person to make this observation about MonoGame. It’s a relatively well-loved and proven framework, so tons of developers have identified common faults in it and its abandoned predecessor, and they’ve sought to correct them. One such improvement is the <a href="https://www.monogameextended.net/docs/about/introduction/">MonoGame.Extended</a> library. It brings implementations of several common idioms, but most relevant to this discussion, it provides support and extensions for <strong>bitmap fonts</strong>.</p>

<p>So what is a <code class="language-plaintext highlighter-rouge">BitmapFont</code> as compared to a <code class="language-plaintext highlighter-rouge">SpriteFont</code>? Well, in both cases the font gets rasterized to a texture by a tool ahead of time. Then, to draw a string at runtime, portions of that texture are rendered to the screen, glyph by glyph, by the framework. It knows all the texture coordinates, character spacings, kerning pairs and so on. The most pertinent difference is that <strong>you have full control over how the texture gets produced</strong> for a <code class="language-plaintext highlighter-rouge">BitmapFont</code>, and the framework doesn’t do anything to the glyphs aside from render them to the display directly.</p>
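<p>To make that concrete, here’s roughly what a glyph-by-glyph draw loop does with the font data. This is an illustrative Python sketch with made-up glyph metrics - not MonoGame.Extended’s actual implementation - but the bookkeeping is the same:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from dataclasses import dataclass

@dataclass
class Glyph:
    x: int          # texture X of the glyph's rectangle
    y: int          # texture Y of the glyph's rectangle
    width: int      # rectangle width in pixels
    height: int     # rectangle height in pixels
    xadvance: int   # how far the pen moves after drawing this glyph

# Hypothetical metrics for illustration; real values come from the font file.
glyphs = {
    "H": Glyph(0, 0, 5, 7, 6),
    "i": Glyph(6, 0, 1, 7, 2),
}
kerning = {("H", "i"): -1}  # extra pixels between specific pairs of glyphs

def measure_string(text):
    """Advance the pen exactly as a draw loop would and return the width."""
    pen_x, prev = 0, None
    for ch in text:
        glyph = glyphs[ch]
        if prev is not None:
            pen_x += kerning.get((prev, ch), 0)
        # A real renderer would copy the (glyph.x, glyph.y, glyph.width,
        # glyph.height) region of the font texture to pen_x here.
        pen_x += glyph.xadvance
        prev = ch
    return pen_x
</code></pre></div></div>

<p>Here, <code class="language-plaintext highlighter-rouge">measure_string("Hi")</code> comes out to 7: an advance of 6 for the <code class="language-plaintext highlighter-rouge">H</code>, minus 1 for the kerning pair, plus 2 for the <code class="language-plaintext highlighter-rouge">i</code>.</p>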

<p>Sounds great! You just need to produce a font texture using some kind of tool and write nice, chunky text pixels to the screen… right?</p>

<h2 id="most-fonts-are-not-pixel-fonts">Most Fonts Are Not Pixel Fonts</h2>

<p>Yeah, about that. It turns out that the overwhelming majority of fonts are absolutely not meant to be rendered at low resolutions, and even then definitely not without some kind of smoothing. The <a href="https://www.monogameextended.net/docs/features/fonts/bitmapfont/">documentation for MonoGame.Extended</a> recommends an ancient tool called BMFont for rasterizing your font. In BMFont, you’re asked to provide the size of your font in <strong>pixels</strong>. I don’t know as much about computer font rendering as I should, but I do know that font “points” (a point is 1/72 of an inch, so its size in pixels depends on the display’s DPI) are not the same as pixels. Still, if you’re anything like me, you’ll experiment with a few different pixel sizes on your desired fonts. If you’re targeting low resolutions with a chunky pixel aesthetic, you might want your font to be somewhere between 8 and 12 pixels tall. Here’s what Arial looks like at 12px in BMFont’s preview:</p>

<p><img src="/images/monogame_arial_bitmap.png" /></p>

<p>And here’s what it looks like in-game:</p>

<p><img src="/images/monogame_arial_test.png" /></p>

<p>At first glance, it’s not bad! It’s appropriately chunky and the fuzzy anti-aliasing is completely absent. Unfortunately, it falls apart when you start looking at the details. For example, look at the <code class="language-plaintext highlighter-rouge">0</code> (zero). It looks like a <code class="language-plaintext highlighter-rouge">D</code>. Note the strange spacing on the lowercase <code class="language-plaintext highlighter-rouge">z</code> and uppercase <code class="language-plaintext highlighter-rouge">X</code>. See how the <code class="language-plaintext highlighter-rouge">*</code> (asterisk) is garbled and missing the top prong? How the <code class="language-plaintext highlighter-rouge">-</code> (minus) character is too short? None of these flaws are acceptable for a “production” font. The smaller you go, the more garbled the font gets.</p>

<p>In fact, it turns out that going below 12px with almost every font I’ve tried produces completely illegible garbage. Here’s Arial again at 8px:</p>

<p><img src="/images/monogame_arial_8px_test.png" /></p>

<p>That’s not a squished image <small>(I mean, probably not - I don’t know how you’re reading this)</small>. It’s just mangled. Unfortunately, it seems most fonts won’t work straight out of the box, even with <code class="language-plaintext highlighter-rouge">BitmapFont</code>.</p>

<h2 id="putting-in-the-work">Putting in the Work</h2>

<p>Remember how I said that with a <code class="language-plaintext highlighter-rouge">BitmapFont</code>, you have full control over how the texture gets produced? For better or worse, it’s true. From here, you basically have two options to make a polished pixel font:</p>

<h3 id="modify-the-bmfont-output">Modify the BMFont Output</h3>

<p>If you followed the instructions in the MonoGame.Extended docs, BMFont produces two files. The first is a <code class="language-plaintext highlighter-rouge">.fnt</code> file whose format you specified in the Export Options. The second is a texture whose format was also specified there. MonoGame.Extended (MGE) wants you to output the <code class="language-plaintext highlighter-rouge">.fnt</code> file as binary and the texture as <code class="language-plaintext highlighter-rouge">.png</code>. However, if you instead output the <code class="language-plaintext highlighter-rouge">.fnt</code> as XML, the MGE content pipeline extension can still parse it but you can also edit the output after it’s produced.</p>

<p>Combined with your ability to edit the <code class="language-plaintext highlighter-rouge">.png</code> file in an external editor, you can hand-correct the mistakes made by the BMFont rasterizer. It won’t be fun, though. BMFont packs the texture pretty tightly and out of order, and the <code class="language-plaintext highlighter-rouge">.fnt</code> file is dense and precise. Any changes you make will need to be painstakingly accounted for in the texture coordinates and offsets it produced for you. You can give yourself some space to work within by adding spacing between the characters in BMFont (in the Export Options), but you’ll still need to be meticulous about it.</p>

<p>Or…</p>

<h3 id="design-your-own-font">Design Your Own Font</h3>

<p>When you read through the XML <code class="language-plaintext highlighter-rouge">.fnt</code> file, you’ll realize that although the spec is <em>dense</em>, it’s not complicated. Here’s an excerpt from our Arial 12px font produced by BMFont earlier:</p>

<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">&lt;?xml version="1.0"?&gt;</span>
<span class="nt">&lt;font&gt;</span>
  <span class="nt">&lt;info</span> <span class="na">face=</span><span class="s">"Arial"</span> <span class="na">size=</span><span class="s">"12"</span> <span class="na">bold=</span><span class="s">"0"</span> <span class="na">italic=</span><span class="s">"0"</span> <span class="na">charset=</span><span class="s">""</span> <span class="na">unicode=</span><span class="s">"1"</span> <span class="na">stretchH=</span><span class="s">"100"</span> <span class="na">smooth=</span><span class="s">"0"</span> <span class="na">aa=</span><span class="s">"1"</span> <span class="na">padding=</span><span class="s">"0,0,0,0"</span> <span class="na">spacing=</span><span class="s">"1,1"</span> <span class="na">outline=</span><span class="s">"0"</span><span class="nt">/&gt;</span>
  <span class="nt">&lt;common</span> <span class="na">lineHeight=</span><span class="s">"12"</span> <span class="na">base=</span><span class="s">"9"</span> <span class="na">scaleW=</span><span class="s">"256"</span> <span class="na">scaleH=</span><span class="s">"256"</span> <span class="na">pages=</span><span class="s">"1"</span> <span class="na">packed=</span><span class="s">"0"</span> <span class="na">alphaChnl=</span><span class="s">"4"</span> <span class="na">redChnl=</span><span class="s">"0"</span> <span class="na">greenChnl=</span><span class="s">"0"</span> <span class="na">blueChnl=</span><span class="s">"0"</span><span class="nt">/&gt;</span>
  <span class="nt">&lt;pages&gt;</span>
    <span class="nt">&lt;page</span> <span class="na">id=</span><span class="s">"0"</span> <span class="na">file=</span><span class="s">"arial_12_0.png"</span> <span class="nt">/&gt;</span>
  <span class="nt">&lt;/pages&gt;</span>
  <span class="nt">&lt;chars</span> <span class="na">count=</span><span class="s">"191"</span><span class="nt">&gt;</span>
    <span class="nt">&lt;char</span> <span class="na">id=</span><span class="s">"32"</span> <span class="na">x=</span><span class="s">"197"</span> <span class="na">y=</span><span class="s">"26"</span> <span class="na">width=</span><span class="s">"3"</span> <span class="na">height=</span><span class="s">"1"</span> <span class="na">xoffset=</span><span class="s">"-1"</span> <span class="na">yoffset=</span><span class="s">"11"</span> <span class="na">xadvance=</span><span class="s">"3"</span> <span class="na">page=</span><span class="s">"0"</span> <span class="na">chnl=</span><span class="s">"15"</span> <span class="nt">/&gt;</span>
    <span class="nt">&lt;char</span> <span class="na">id=</span><span class="s">"33"</span> <span class="na">x=</span><span class="s">"232"</span> <span class="na">y=</span><span class="s">"17"</span> <span class="na">width=</span><span class="s">"1"</span> <span class="na">height=</span><span class="s">"7"</span> <span class="na">xoffset=</span><span class="s">"1"</span> <span class="na">yoffset=</span><span class="s">"2"</span> <span class="na">xadvance=</span><span class="s">"3"</span> <span class="na">page=</span><span class="s">"0"</span> <span class="na">chnl=</span><span class="s">"15"</span> <span class="nt">/&gt;</span>
    <span class="nt">&lt;char</span> <span class="na">id=</span><span class="s">"34"</span> <span class="na">x=</span><span class="s">"163"</span> <span class="na">y=</span><span class="s">"26"</span> <span class="na">width=</span><span class="s">"3"</span> <span class="na">height=</span><span class="s">"2"</span> <span class="na">xoffset=</span><span class="s">"0"</span> <span class="na">yoffset=</span><span class="s">"2"</span> <span class="na">xadvance=</span><span class="s">"3"</span> <span class="na">page=</span><span class="s">"0"</span> <span class="na">chnl=</span><span class="s">"15"</span> <span class="nt">/&gt;</span>
    <span class="nt">&lt;char</span> <span class="na">id=</span><span class="s">"35"</span> <span class="na">x=</span><span class="s">"251"</span> <span class="na">y=</span><span class="s">"9"</span> <span class="na">width=</span><span class="s">"4"</span> <span class="na">height=</span><span class="s">"7"</span> <span class="na">xoffset=</span><span class="s">"0"</span> <span class="na">yoffset=</span><span class="s">"2"</span> <span class="na">xadvance=</span><span class="s">"5"</span> <span class="na">page=</span><span class="s">"0"</span> <span class="na">chnl=</span><span class="s">"15"</span> <span class="nt">/&gt;</span>
    <span class="c">&lt;!-- ... --&gt;</span>
  <span class="nt">&lt;/chars&gt;</span>
  <span class="nt">&lt;kernings</span> <span class="na">count=</span><span class="s">"59"</span><span class="nt">&gt;</span>
    <span class="nt">&lt;kerning</span> <span class="na">first=</span><span class="s">"32"</span> <span class="na">second=</span><span class="s">"65"</span> <span class="na">amount=</span><span class="s">"-1"</span> <span class="nt">/&gt;</span>
    <span class="nt">&lt;kerning</span> <span class="na">first=</span><span class="s">"121"</span> <span class="na">second=</span><span class="s">"46"</span> <span class="na">amount=</span><span class="s">"-1"</span> <span class="nt">/&gt;</span>
    <span class="c">&lt;!-- ... --&gt;</span>
  <span class="nt">&lt;/kernings&gt;</span>
<span class="nt">&lt;/font&gt;</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">&lt;info&gt;</code> element contains metadata about the font. <code class="language-plaintext highlighter-rouge">&lt;common&gt;</code> has information about the font as a whole - <code class="language-plaintext highlighter-rouge">lineHeight</code> is how many pixels tall the font is, <code class="language-plaintext highlighter-rouge">base</code> is where the base line of the font falls within that line height, <code class="language-plaintext highlighter-rouge">scaleW</code> and <code class="language-plaintext highlighter-rouge">scaleH</code> are the size of the output texture, and so on.</p>

<p>The character definitions are even more straightforward.</p>

<ul>
  <li>The <code class="language-plaintext highlighter-rouge">id</code> refers to the character’s integer Unicode representation (e.g. <code class="language-plaintext highlighter-rouge">32</code> is space, <code class="language-plaintext highlighter-rouge">33</code> is <code class="language-plaintext highlighter-rouge">!</code>, <code class="language-plaintext highlighter-rouge">34</code> is <code class="language-plaintext highlighter-rouge">"</code>, etc.) and the <code class="language-plaintext highlighter-rouge">x</code>/<code class="language-plaintext highlighter-rouge">y</code> coordinates are texture coordinates for this glyph.</li>
  <li>Combined with the <code class="language-plaintext highlighter-rouge">width</code> and <code class="language-plaintext highlighter-rouge">height</code>, this defines a rectangle for the glyph that will be used to render it to the display from the texture.</li>
  <li>I’m not entirely sure how the <code class="language-plaintext highlighter-rouge">xoffset</code> and <code class="language-plaintext highlighter-rouge">yoffset</code> work - presumably they offset the character on the display by the given amounts when rendering - but you can configure BMFont to rasterize the font such that the offsets are always zero anyway.</li>
  <li><code class="language-plaintext highlighter-rouge">xadvance</code> defines how many pixels to advance forward after rendering this character.</li>
  <li><code class="language-plaintext highlighter-rouge">page</code> refers to which texture (defined in the <code class="language-plaintext highlighter-rouge">&lt;pages&gt;</code> section) this character comes from.
    <ul>
      <li>This is only important if you select <em>a ton</em> of characters for your font. Otherwise it’ll be 0.</li>
    </ul>
  </li>
  <li><code class="language-plaintext highlighter-rouge">chnl</code> has something to do with the RGBA layout of the characters as defined in BMFont (alongside the <code class="language-plaintext highlighter-rouge">alphaChnl</code>, <code class="language-plaintext highlighter-rouge">redChnl</code>, <code class="language-plaintext highlighter-rouge">greenChnl</code>, and <code class="language-plaintext highlighter-rouge">blueChnl</code> settings in the <code class="language-plaintext highlighter-rouge">&lt;common&gt;</code> element).
    <ul>
      <li>I didn’t bother to fully wrap my head around this; I just followed what BMFont did based on the presets I selected. You can apparently mask multiple characters into a texture by layering them in the RGB channels, but this was never pertinent for my use case.</li>
    </ul>
  </li>
</ul>

<p>With this knowledge in mind, it’s not complex to build your own font following this spec. You may not be able to pack the texture as optimally as BMFont (although you could always lean on a tool like TexturePacker to do it for you, if you need to optimize it), and you’ll definitely want to write a script or a tool to produce the XML <code class="language-plaintext highlighter-rouge">.fnt</code> file, but it’s pretty straightforward. You’re more-or-less just defining the texture regions for each glyph in your texture.</p>
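<p>As a starting point for that script, here’s a minimal sketch that emits a BMFont-style XML <code class="language-plaintext highlighter-rouge">.fnt</code> from a glyph table using Python’s <code class="language-plaintext highlighter-rouge">xml.etree.ElementTree</code>. The metrics and file names are placeholders, and it only writes the attributes discussed in this post - verify that MGE’s pipeline accepts the output before building on it:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import xml.etree.ElementTree as ET

def build_fnt(face, line_height, base, page_file, glyphs):
    """Build a minimal BMFont-style XML tree.

    `glyphs` maps character ids to dicts of the per-char attributes
    (x, y, width, height, xoffset, yoffset, xadvance)."""
    font = ET.Element("font")
    ET.SubElement(font, "info", face=face, size=str(line_height))
    ET.SubElement(font, "common", lineHeight=str(line_height), base=str(base),
                  scaleW="128", scaleH="128", pages="1")  # 128x128 texture assumed
    pages = ET.SubElement(font, "pages")
    ET.SubElement(pages, "page", id="0", file=page_file)
    chars = ET.SubElement(font, "chars", count=str(len(glyphs)))
    for cid, attrs in glyphs.items():
        ET.SubElement(chars, "char", id=str(cid), page="0", chnl="15",
                      **{k: str(v) for k, v in attrs.items()})
    return ET.ElementTree(font)

# e.g. build_fnt("MyPixelFont", 8, 7, "mypixelfont_0.png",
#                {65: dict(x=0, y=0, width=4, height=7,
#                          xoffset=0, yoffset=0, xadvance=5)}).write("mypixelfont.fnt")
</code></pre></div></div>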

<p>Drawing your own font isn’t as hard as you might imagine, depending on how complex you need it to be. You basically just need to draw characters in a grid onto a PNG file. A tool like Aseprite will make it quite simple, especially if you’re creating a monospaced font. Variable width fonts are more difficult, of course, and if you try to get the kerning right too, you’ll spend quite a lot of time tweaking it.</p>

<p>Now, if you’re making a text-heavy game, the font is one of the most important visual elements you have. It’s worth putting in the time to get it right. If not, you can probably get away without kerning and with minimal tweaking. Either way, once you’ve got your font prototyped, it’s worth setting up a script to produce the XML automatically. You should absolutely test the font frequently in a real scenario - rendered by the <code class="language-plaintext highlighter-rouge">BitmapFont</code> extensions in MGE - to make sure your texture coordinates, offsets, and other settings are working properly.</p>

<h3 id="tricks-and-workarounds">Tricks and Workarounds</h3>

<p>Personally, I have already drawn a few pixel fonts in PICO-8 and Picotron and I found that workflow to be better than drawing them in Aseprite. These low-resolution environments are perfect for building and testing a small pixel font since you can iterate and test basically instantly. As a proof-of-concept, I wrote a cart in Picotron to export fonts in BMFont format:</p>

<p><img src="/images/monogame_picotron_test.png" /></p>

<p>As far as I can tell, this is a pixel-perfect recreation of Picotron’s variable-width and mono fonts. If you want to see it in action, you can run <code class="language-plaintext highlighter-rouge">load -u #picotron_bmfont_exporter</code>. It exports the texture to the <code class="language-plaintext highlighter-rouge">/desktop</code> folder alongside the XML <code class="language-plaintext highlighter-rouge">.fnt</code> spec. (If you use this tool, make sure to remove the <code class="language-plaintext highlighter-rouge">pod</code> annotation from the top of the XML document when you take it outside of the fantasy environment.)</p>

<p>Going into detail on how to build fonts in Picotron or PICO-8 is outside the scope of this article, but thankfully it’s a well-documented and supported process (<code class="language-plaintext highlighter-rouge">load #font_snippet</code> in PICO-8 will get you started).</p>

<p>Alternatively, you can also find fonts which are designed to be rendered very small. This is obviously the path of least resistance. Several pixel font packs on <a href="https://itch.io">itch.io</a>, for example, come pre-rasterized and with XML specs already written.</p>

<h2 id="fonts-are-hard">Fonts are Hard</h2>

<p>Turns out it’s not easy to get a pretty, well-formed pixel font appropriate for low resolutions. It takes some work, especially if you’re trying to go smaller than 12px. I imagine this explains some of the reasoning behind the trade-offs inherent to the stock <code class="language-plaintext highlighter-rouge">SpriteFont</code>: they might not be pretty, but they’re legible and very easy to create.</p>

<p>Thankfully, the tools are out there to make your perfect chunky pixel font look great. I hope this little journal helped you learn something new about fonts in MonoGame - it certainly took a bit of discovery for me to get the results I was looking for.</p>

<p>If you’re interested in picking up this adventure where I left off, one piece of software I never looked into was <a href="https://fontforge.org/">FontForge</a>. Supposedly, it gives you full control over the design and implementation of TrueType fonts, so perhaps the most “complete” solution would be to build your font from the ground up as one which scales to any size. That way, if you need multiple font sizes in your game, you’re not stuck having to scale or re-draw your original font.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="gaming" /><category term="development" /><category term="monogame" /><category term="font" /><summary type="html"><![CDATA[Pixel fonts in stock MonoGame are a pain. Bitmap fonts can help, but you have to put in the work.]]></summary></entry><entry><title type="html">The Aspirational Mirror</title><link href="/blog/2025/08/05/the-aspirational-mirror.html" rel="alternate" type="text/html" title="The Aspirational Mirror" /><published>2025-08-05T12:16:43+00:00</published><updated>2025-08-05T12:16:43+00:00</updated><id>/blog/2025/08/05/the-aspirational-mirror</id><content type="html" xml:base="/blog/2025/08/05/the-aspirational-mirror.html"><![CDATA[<h2 id="babys-first-game">Baby’s First Game</h2>

<div style="display: flex; justify-content: center; padding: 10px 10px 10px 30px; min-width: 100px; max-height: 400px;">
    <img src="/images/babys-first-game.png" />
</div>

<p>I wrote my first game when I was 12.</p>

<p>I had grown up on video games—my parents tell me I was playing Mario before I was 3 years old—and I always wanted to make my own. But, to make video games, I had to learn how they were made. My 12-year-old self managed to figure out that I needed to learn programming, and games were usually written in C++. So, naively, I assumed that if I could just learn C++, I could make games. My parents and I were at a bookstore one weekend and I spotted a book: <em>Sams Teach Yourself C++ in 21 Days</em>. “Wow,” I thought. “Just 21 days? I’ll be making games in no time!” I begged and pleaded for them to buy it for me, and after some discussion, they did.</p>

<p>I was lucky enough to grow up in a very technology-rich household, much to the chagrin of my parents’ credit ratings. They loved having hi-fi stereos, game consoles, and of course, multimedia PCs… whether they could afford them or not. But as a result, I had grown up surrounded by games: NES, SNES, Genesis, PlayStation, and PC games. My dad was also particularly proficient (for the time) with this stuff, and he taught me how to play MUDs via Telnet. Eventually, he introduced me to EverQuest; it lit up my young brain like a Christmas tree.</p>

<p>So, 12-year-old me sat down in front of the computer with an outdated (even at the time) C++ book and started reading. I don’t know what the curriculum is today, but at my school in 2000, I had only just started learning algebra (and even then, only because I opted into an accelerated learning path). So when I first got started, I didn’t really even understand the concept of <em>variables</em>. Instead I did what most starting programmers used to do: I typed out all the examples and started tweaking them.</p>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/djgpp.png" />
    <figcaption>
        <div><p><small><em>Very much not state of the art, even in 2000</em></small></p></div>
    </figcaption>
</div>

<p>I noticed that all the examples in the book focused on text-based output. That wasn’t exactly what I wanted; where were the flashy graphics and explosions? But, since I had some experience with MUDs, I knew I could make a text-based RPG. I did all my editing in a DOS-based environment called <a href="https://www.delorie.com/djgpp/">DJGPP</a>, because that’s what the book shipped with on the included CD. My code was probably one long <code class="language-plaintext highlighter-rouge">int main()</code> function full of the kind of cyclomatic complexity that would make your head explode.</p>

<p>Yet, there it was. After reading (parts of) the book and really committing to the task, I had <em>debatably</em> learned to program. It took longer than 21 days, that’s for sure, but I made my first game. It had an enemy that could attack you, you could attack it back, and the stats all worked. If you defeated the enemy, you got a little “congratulations” and then the program ended. That’s it. Baby’s first game.</p>

<h2 id="obsession">Obsession</h2>

<p>The itch to write more games hit me hard. I sent my game to all my friends, who undoubtedly never bothered unzipping it but told me it was cool anyway. Then, I started thinking about what comes next. I had grand plans in my mind of starting a game studio in my garage with my friends and acquaintances. Some of them played music, some of them drew, and some had expressed interest in level editors for games like Half-Life. It felt like we had all the ingredients.</p>

<p>My friends and I had LAN sleepovers where we’d play games, of course, but also have little game dev jams. I discovered DarkBASIC, a BASIC interpreter stapled onto a DirectX helper API, and it blew my mind. Not only was BASIC so much easier than C++, but DarkBASIC also had all these graphics and sound commands built in, and it even loaded Half-Life maps straight out of Hammer! All the pieces were there.</p>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/dbpro_cover.jpg" />
    <figcaption>
        <div><p><small><em>DarkBASIC Pro</em></small></p></div>
    </figcaption>
</div>

<p>My friends and I put together several stupid little games over the next few years. We tried building an FPS, we did some experimenting with a Final Fantasy-style RPG, we had a trivia puzzle game, and we even started a Castlevania-style action platformer. This was an era before Unity, Unreal, or Godot had commoditized game toolkits. As the only programmer for most of these projects, I had to learn a lot. Game engines, realtime simulations, asset pipelines, data structures—as the games got more complex, my skills had to grow to accommodate.</p>

<p>I dabbled in OpenGL, DirectX, various open source game engines, and learned a little bit of 3D math to try and make it all make sense. I wrote code in BASIC, C, C++, C#, Lua, and probably more. We <em>found</em> copies of 3DS Max and Photoshop, discovered Blender, and tooled around with RPG Maker 2000. There was so much to explore, so much to learn, and so much time to spend on it. After all, we were relatively privileged teenagers. What else did we have to do? Schoolwork? Psh.</p>

<h2 id="the-real-world">The Real World</h2>

<p>All told, we probably started and worked on 15 different games over 6 years. We only really “finished” a few, and they were all messy hobby efforts full of copyrighted assets and bad ideas, but that’s how you forge a real skillset. You learn by doing new things… and we did a lot of new things.</p>

<p>By the time we graduated high school, I had become a legitimate programmer. Even for a couple years following, we kept going, getting more and more professional and developing our talents. I think if we’d managed to get better a little faster, take it a little more seriously, and make a few different decisions, we could’ve really achieved that dumb little childhood dream of starting a game studio in the garage.</p>

<p>Unfortunately, the real world is harsh to naivety. We all went off to college after high school, and our interests wavered as they had to compete with things like… you know, relationships, jobs, other hobbies, stuff like that. Suddenly we had responsibilities. I also hit a severe bout of depression for a few years which made it hard to do much of anything at all.</p>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/sad_programmer.png" />
    <figcaption>
        <div><p><small><em>Me, basically</em></small></p></div>
    </figcaption>
</div>

<p>By the time I was 21, not even a decade after I started making games, I had stalled out. When I was 22, I decided it was time to “get a real job.” Until then, I had been going to college and working various part time jobs, splitting an apartment with my then-girlfriend (now wife). I considered getting a job in the game industry, but by 2010 <a href="https://en.wikipedia.org/wiki/Erin_Hoffman">it was widely known just how miserable the working conditions were</a>. I made the painful decision to let game development remain a hobby and start a traditional IT career instead.</p>

<h2 id="a-real-job">A Real Job</h2>

<p>Never let it be said that I don’t commit. When I decided to get a “real job,” I went at it pretty hard. Thanks to my prior game programming experience (and some SQL fundamentals in college), I entered the industry as a Software Engineer II. From there I made my way up the ladder for 15 years, learning and growing along the way, until suddenly I found myself as a Senior Director of Engineering.</p>

<p>Being a professional engineer, leader, and mentor taught me more than I could possibly write here. I could fill a pretty fat book with the lessons I learned, decisions and mistakes I made, values I came to hold dear, and so on. The most important thing I took from it, though… is that I’m worthy of being a leader. Even as far back as my “garage game studio” days, I doubted whether I was the right person to lead such a thing. I never saw myself as a leader.</p>

<p>Even once I made my way into a career leadership position, I still didn’t see myself as a leader. I was just the guy who did what needed to be done, whether that was setting direction for a team to align with business goals or working out the architecture for a new project. It took a long time of gradual self-reflection for me to realize being a leader isn’t about any of that shit you read on LinkedIn. It’s about being trustworthy and responsible, about being decisive and knowledgeable, and about putting your team over yourself.</p>

<p>I’ve met too many so-called leaders who fail at this. Miserably.</p>

<h2 id="reflections">Reflections</h2>

<p>At time of writing, I recently lost my “real job.” Don’t worry, I’ll be fine - but for the first time in 15 years, all the noise and anxiety of my day job has gone quiet. I’ve finally had a chance to stop and think about that decision I made so long ago.</p>

<p>At various points since that decision, I’ve worked on games in my spare time. I still write down all my ideas. I’ve made multiple concerted efforts to spend nights and weekends on game projects, and while I don’t have much worth uploading here to my site, I constantly feel that itch in the back of my mind: make video games. It’s the one aspiration on which I’ve never really wavered.</p>

<p>Of course, tech evolves fast. To keep up with my career, I’ve had to devote many long nights and weekends to working late, being on call, or just keeping up with the latest advancements. I devoted so much of myself to my day job that, by the end of the day, my brain was empty and I didn’t want to sit in front of an IDE for another 4-6 hours. Hell, sometimes I just wanted to sit down and watch a movie with my wife.</p>

<p>Some particularly grind-minded individuals would probably call those excuses. Others would accuse me of not <em>really</em> enjoying game development, if I don’t feel like doing it after a 10 hour work day. I’m sure there’s some small amount of truth to those statements - that I could be <em>more passionate</em> - but like most gatekeepers, they hold everyone to an unreasonable standard to make themselves feel better. It’s reasonable to be <em>human</em>.</p>

<p><strong>But what if now is the time?</strong></p>

<h2 id="the-exceptions-which-prove-the-rule">The Exceptions Which Prove The Rule</h2>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/Balatro_cover.jpg" />
    <figcaption>
        <div><p><small><em>I've got about 50 hours in this game...</em></small></p></div>
    </figcaption>
</div>

<p>If you haven’t already, you should read <a href="https://localthunk.com/blog/balatro-timeline-3aarh">LocalThunk’s timeline of his development on Balatro</a>. It tells the story of a man who made games as a hobby for a long time, ended up quitting his job to work on his passion project, and found massive success.</p>

<p>Or how about <a href="https://en.wikipedia.org/wiki/Vampire_Survivors#Development">the story of poncle</a> (Luca Galante), developer of Vampire Survivors? A man who made games both as a hobby and professionally, ended up quitting his job to work on his passion project, and found massive success.</p>

<p>Go back far enough and you might even recall the story of Notch, and a little game called Minecraft. He turned out to be a real piece of garbage, but few can deny the wild success of his passion project. How did he make it happen? Well, he quit his job to work on it.</p>

<p>If you were starry-eyed enough, like me, you might read stories like these and think, “Wow, all I need to make it as an indie dev is a passion project, hard work, and some luck! I can do this!” You might even be correct, technically… but it’s not what you think. Here’s my estimated breakdown:</p>

<ul>
  <li>0.1% passion</li>
  <li>0.2% hard work</li>
  <li>99.7% luck</li>
</ul>

<p>For every LocalThunk, poncle, and Notch, there are 1,000 aspiring indie developers whose games rot in the back catalog of Steam. Some of those games are trash, certainly, but some of them are fantastic gems. They just haven’t sold. No streamers have played their game yet. No publishers were interested in picking them up. Most of those games will fade into the ether, and their developers will end up back at their day jobs (or worse).</p>

<p>Balatro, Vampire Survivors, and Minecraft are exceptions which prove the rule: the indie game market is <em>unimaginably overcrowded</em>. Your chance of success in this market is almost zero. Are you financially dependent on your game being successful? Don’t become an indie dev. It’s more akin to playing the lottery than working a day job.</p>

<p>That’s the general guidance, anyway. Hell, even several successful indie developers have said the same thing. After reading <a href="https://www.penny-arcade.com/news/post/2011/03/16/slam-bolt-scrappers">this article about Slam Bolt Scrappers on Penny Arcade</a> back in 2011, I reached out to the studio head, Eitan, for advice. He responded (props to him) and more-or-less told me exactly what I already knew, deep down.</p>

<blockquote>
  <p>I wouldn’t recommend quitting your day job unless you really have a game plan for it.</p>
</blockquote>

<p>I only learned while doing research for this article that his studio, Fire Hose Games, ran out of money and went out of business last year. Their final game, <a href="https://store.steampowered.com/app/1457320/Techtonica/#app_reviews_hash">Techtonica</a>, has a <code class="language-plaintext highlighter-rouge">Mixed</code> rating on Steam. Their socials are silent… it appears they petered out unceremoniously.</p>

<p>I can’t say anything. I didn’t buy the game either.</p>

<h2 id="existential-conflict">Existential Conflict</h2>

<p>The raging existential conflict which has agonized me for 15 years is this: you miss all the shots you don’t take, but it’s super risky to take the shot. I can never be successful if I don’t try, but it’s extraordinarily likely I won’t be successful even if I do. It’s gambling.</p>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/missed_shots.jpeg" />
    <figcaption>
        <div><p><small><em>I don't even watch The Office, but...</em></small></p></div>
    </figcaption>
</div>

<p>Am I willing to gamble away my livelihood for the chance to fulfill my childhood dream?</p>

<p>One of the recurring elements among all the stories I told earlier - LocalThunk, poncle, and even Notch - is that they worked on their games in their spare time. Nights and weekends, while working a day job, until some threshold was crossed where they began to see a legitimate chance of success. Eitan from Fire Hose warned me 14 years ago not to quit my day job.</p>

<p>Earlier I said that it’s reasonable to be human. Maybe, though, if I really want this so bad… it’s not. Perhaps, until I can find the energy and focus within myself to power through my day job for 8-10 hours and then stare at the IDE for another 4-6, I don’t have what it takes. Am I really “obsessed” with game development if I only want to work on it while my brain is fresh? Do I even deserve it?</p>

<p>I don’t know. I’ve never known, and it’s part of why I originally made the decision. It gnaws at me constantly.</p>

<h2 id="so-now-what">So Now What?</h2>

<p>A small peek behind the curtain: this article took me a long time to write. I decided to take a few weeks off work between jobs—my wife and I just moved 4,000 miles across the entire continent of North America and I wasn’t, originally, going to get a break to rest. Starting this article is one of the first things I did after we moved into our new place. Other posts have taken longer, but none have been so personal.</p>

<p>This has been my way of reminding myself why I decided, in 2010, to get into traditional IT instead of trying to be an indie game developer. It’s been me painfully navigating the same decision yet again. As a kid, all I ever wanted was to start a game studio with my friends and make games for a living. How naive and selfish.</p>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/game_designers.jpg" />
    <figcaption>
        <div><p><small><em>The recipe for success is to tighten up the graphics on level 3</em></small></p></div>
    </figcaption>
</div>

<p>Still, despite all the advice to the contrary, all the obvious signs that the industry is flooded, and all the common sense and risk assessment telling me it’s stupid, there’s a huge part of my inner self tied up in the desire to make games. To get something creative out there that people enjoy and that leaves an impact on their lives in some way… that’s what drives me.</p>

<p>The tech industry right now is <em>so far abstracted</em> from making products people want to use. It’s all caught up in hype cycles and massive bubbles; meanwhile, the products we actually depend on day-to-day are rotting away or being hollowed out to make room for AI. I hate it. I just want to make cool things that make people happy.</p>

<p>I don’t yet know where I’m going to end up. Professionally and personally, it’s a time of major change for me at what might end up being one of the worst possible times in history to make major changes. All I know is that wherever I land, it’s going to be somewhere that I can meaningfully contribute to, and perhaps even lead, a product that actually impacts someone.</p>

<p>That’s all I want: to make a positive impact on the world, however small. Is that outmoded now? I guess I’ll find out. Check back here in a few months to see if I’m starting a GoFundMe to help pay my bills…</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="gaming" /><category term="development" /><category term="indie" /><category term="startup" /><summary type="html"><![CDATA[A reflection on my history with game development, and the impact of peering into an alternate universe mirror showing what could have been.]]></summary></entry><entry><title type="html">The Self-Betrayal of Vibe Coding</title><link href="/blog/2025/03/17/the-self-betrayal-of-vibe-coding.html" rel="alternate" type="text/html" title="The Self-Betrayal of Vibe Coding" /><published>2025-03-17T06:11:52+00:00</published><updated>2025-03-17T06:11:52+00:00</updated><id>/blog/2025/03/17/the-self-betrayal-of-vibe-coding</id><content type="html" xml:base="/blog/2025/03/17/the-self-betrayal-of-vibe-coding.html"><![CDATA[<div style="display: flex; justify-content: center; padding: 10px 10px 10px 30px; min-width: 100px; max-height: 400px;">
    <img src="/images/times_square.jpg" />
    <figcaption>
        <div><p><small><em>Times Square <a href="https://commons.wikimedia.org/w/index.php?curid=10048145">by Francisco Diez, CC BY 2.0</a></em></small></p></div>
    </figcaption>
</div>

<p>Like most software engineers in the futuristic year of 2025, I sometimes use LLMs like ChatGPT to help me write code. Especially for tedious, but well-solved problems like writing common regular expressions, I give my brain a break as a little treat. The LLM acts as a glorified search engine and strings together something which is good enough for what I need in the moment. It’s a win/win: I didn’t need to exert any real brain power, and the problem got solved. Really, that’s what software engineers are. We’re problem solvers. And the problem got solved.</p>

<p>Right?</p>

<p>… Right?</p>

<p>Yeah, you know where this is going.</p>

<p><em>(<strong>Edit 2025/07/30</strong>: I don’t usually edit my articles after the fact, but the previous version of this one contained a particularly vitriolic admonition of the start-up/venture capital game which probably wasn’t entirely fair. This new version tones down the bile a bit.)</em></p>

<h2 id="what-is-vibe-coding">What is Vibe Coding?</h2>

<p>If you aren’t aware already, let me introduce you to an idea which has gained popularity lately: <em>vibe coding</em>. To summarize, it’s a term for when someone isn’t actually writing any code; they’re simply prompting an LLM to generate code for them based on a description of their intent. Then, they copy/paste that code into their editor and tweak stuff until it builds and produces the results they’re looking for. Repeat ad nauseum until the task is complete.</p>

<p>The vibe coder has completed their work without ever needing to engage the engineering part of their brain. They probably “produced” more lines of code in less time than another developer who wrote it all themselves. However, perhaps controversially, I’m going to argue that if you’re a professional developer, you need to stop <strong>solving problems</strong> and instead become a <strong>problem solver</strong>. Let’s talk about the difference.</p>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/sad_programmer.png" />
    <figcaption>
        <div><p><small><em>This guy is definitely a professional</em></small></p></div>
    </figcaption>
</div>

<p>First, we’re going into this conversation with a few assumptions. Most importantly, note that I specifically call out <em>professional developers</em>. A hobbyist or general user who needs a script or a custom application gets no judgement from me, no matter how they achieve their goals. This (one-way) conversation is between me and engineers working for companies, producing software on a daily basis. Presumably, that software is used by a customer or stakeholder of some sort to create value, and they are paid for their labor, making them a professional.</p>

<p>Another foundational principle here is the idea that software engineering is a discipline worthy of refinement. I don’t want to engage in this conversation with tech bros who lack respect for the trade which pays their bills. Those people are often incapable of having a good faith discussion on this topic because their fundamental philosophies do not align with long-term goals. Their objectives are usually very short-term, lack an overarching strategy, and originate from a mindset geared toward playing a very different game. It goes without saying that vibe coding is very popular with the sorts of people who see programming as a necessary evil on the path to getting bought by Amazon.</p>

<h2 id="the-difference-between-solving-problems-and-being-a-problem-solver">The Difference Between Solving Problems and Being a Problem Solver</h2>

<p>You know who gets left in the dust when hype bubbles burst? You and I. Thankfully, we can pick ourselves back up, get back on the job market, and go apply our skills elsewhere as engineers—as problem solvers. Unless you’re not a problem solver.</p>

<p>Using an LLM to <em>vibe code</em> certainly takes more skill than browsing the internet. You’ve got to have some intuition on how programming might work, and enough sense to generally grok the syntax of whatever language you’re using. You need enough to make minor changes, at least. <strong>But don’t fool yourself</strong>. Software engineering, architecture, infrastructure, data science, etc. are all extraordinarily complex and nuanced fields. They’re packed full of decades of hard-learned lessons forged in the real pressure of projects where millions of dollars are at stake. Your own experience comes from passively absorbing the mentorship of your senior colleagues, making your own mistakes, active learning, contributing to projects, and slowly diamondizing.</p>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/the_thinker.jpg" />
    <figcaption>
        <div><p><small><em>Use your noodle</em></small></p></div>
    </figcaption>
</div>

<p>The most important experience, though, is problem solving. It’s a mindset like a muscle, and you have to train it. Anyone can <em>vibe code</em> because everyone can more-or-less wrap their head around what a programmer does on the surface: write instructions to make a computer do something. The ability to proficiently solve complex problems—to break down confusing, nuanced, sparsely detailed, or misleading issues and turn them into achievable and actionable objectives—isn’t something an LLM can do for you. Your value as an engineer isn’t your ability to write code, it’s your ability to be a problem solver. You can only exercise the muscle if you use it.</p>

<p>Interestingly, you’ll find that being a problem solver is important in more than just programming. It’s a skill applicable to your entire life and across every career. Employers need problem solvers because someone has to figure out what to do when unexpected things happen. Science needs problem solvers to discover the undiscovered. The world needs problem solvers to navigate the unprecedented.</p>

<p>Vibe coding robs you of valuable problem solving experience, because you’re not solving any problems. The LLM is. In order to survive in this industry, you can’t be disposable. You need to be indispensable. <strong>You need to be a problem solver</strong>.</p>

<h2 id="prompts--problems">Prompts != Problems</h2>

<p>I’ve had a couple of discussions very similar to this with colleagues and… let’s call them “industry pundits.” Frequently, there are two counterpoints to my perspective on vibe coding.</p>

<ol>
  <li><strong>Good faith argument</strong>: successfully writing prompts to coerce the LLM into giving you the results you need is similar to writing code, and therefore contributes to one’s problem solving experience.</li>
  <li><strong>Bad faith argument</strong>: the tech industry is in a post-principle state; no one cares about any of this stuff anymore. The only thing that matters is slinging code well enough to get to the next round of funding, or to keep operations running, or whatever.</li>
</ol>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/firehose.jpg" />
    <figcaption>
        <div><p><small><em>Firehose <a href="https://commons.wikimedia.org/w/index.php?curid=1569313">by Jip Valentijn, CC BY-SA 2.0</a></em></small></p></div>
    </figcaption>
</div>

<p>Let me be very grim and serious for a moment. We’re going to talk about #1 later, but anyone who tries to legitimately pitch #2 to me? They can fellate a firehose. Like so many of the problems plaguing the world at time of writing, the toxic idea that we’re in a post-discipline world is a dangerous lie. It comes from <strong>a)</strong> people actively invested in devaluing your skills to drive down their labor costs; <strong>b)</strong> grifters looking to maximize their short-term gains at the expense of overall customer satisfaction or long-term sustainability of the business; or <strong>c)</strong> hawkers looking to increase the perceived value of AI. The degradation of principles in the tech industry is directly correlated to the <a href="https://en.wikipedia.org/wiki/Enshittification">enshittification</a> of the products and services it offers, which is itself inseparable from a market ruled by the FAANG-like goliaths terrified they can’t keep the financial line going up forever. I won’t be gaslit by these people, and neither should you.</p>

<p>We good? Okay. Let’s talk about prompts vs. problems.</p>

<p>Perhaps the biggest difference between solving a problem yourself and crafting a prompt is in how you engage your brain. Engineering is a declarative action. Compare it to building a bookshelf. You understand what the bookshelf should look like, what kinds of tools and supplies you need to build it, and—if you know what you’re doing—viable dimensions and construction methods that are likely to result in a stable piece of furniture. Then, you express your will upon the world directly by crafting the supplies, your knowledge, and the tools together to produce a bookshelf. You’ve exercised your spatial reasoning, planning, tool discipline and technique, and so on.</p>

<p>Engineering <em>should</em> be just like that: you know what the software needs to do, what sorts of platforms and languages are appropriate to express the requirements, and with experience, the design patterns and architectures that are likely to result in a stable piece of software. Then, you express your will upon the computer directly by crafting syntax, your knowledge, and infrastructure together to produce the application. You’ve exercised your problem solving, planning, architecture and design acumen, and so on.</p>

<p>Now imagine you’re trying to tell an LLM to build a bookshelf. What skills are you exercising? You’ll learn the particular quirks of that model and how to phrase prompts with enough detail that it understands what sort of bookshelf you’re looking for. Then, if you’re lucky, you end up with a bunch of mismatched prefab pieces that you have to assemble like you went dumpster diving at Ikea.</p>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/fjaellbo-shelf.png" />
    <figcaption>
        <div><p><small><em>I have two FJÄLLBO shelves from Ikea myself, you know.</em></small></p></div>
    </figcaption>
</div>

<p>As I’ve already alluded to, if you’re a hobbyist and you just need a dang bookshelf, <em>there’s nothing wrong with an Ikea bookshelf</em>. But if you’re a professional furniture manufacturer and all you do is bash pieces of prefab furniture together and call it a product worthy of sale to a client? It doesn’t cut it. You’re scamming your customers, and you never learned how to actually build bookshelves. You don’t know how to plan for materials because the pieces were prefabricated, you can’t figure out how to fix problems because you don’t understand how they’re made, and you don’t have any tool discipline or technique so you’re highly likely to chop off your fingers trying to cut the wood.</p>

<p><small>(Before you take this all too literally and tell me that furniture companies source their prefab pieces from multiple manufacturers and resell them as a cohesive unit, yes. I understand that. However, someone had to understand how a bookshelf is made well enough to know that the pieces would fit together, look good together, where to source them from, how to forecast supply from their vendors to meet the demand for their downstream product, and so on. If you want to keep abusing the metaphor, though, and you think you’re the reseller instead of the manufacturer of the pieces, then try this. Pretend that you’re building a SaaS website. You don’t write your own database, web server, application framework, infrastructure framework, or any of that. But you still have to understand how to build your service’s bespoke value-generating logic and how to utilize those “prefab” components to do it.)</small></p>

<h2 id="who-are-they">Who Are They?</h2>

<p>If vibe coding is so detrimental and generates such poor problem solvers, you might be asking who are these so-called <em>vibe coders</em> then? Do they even exist, or are they just a figment of the imagination in the minds of AI tycoons? No, it’s indisputable that they exist. If you’re working in the industry today, chances are very good you know one or more of them.</p>

<p>Sometimes, they’re burnt out or distracted, but obviously need to keep their job. So they turn to LLMs as a way to keep producing code without having to engage. Other times, they agree with the (toxic) mindset that the only priority is churning out code without much regard to any of the principles or disciplines of engineering. You know, the stuff we worked for decades to figure out was absolutely necessary for the longevity of software.</p>

<p>The vibe coders that worry me the most, though, are the new developers entering the industry now. As an unregulated, uncertified discipline, software engineering is <em>incredibly</em> vulnerable to cargo cult practices and cynically-motivated fads. These newbies are getting fed all kinds of counterproductive codswallop by an industry that absolutely does not have their best interests in mind. One of the more public meta-voices of the tech industry, The Primeagen, <a href="https://www.youtube.com/watch?v=1Se2zTlXDwY">reported on</a> how the overreliance on LLMs is stripping junior developers of the critical foundational “muscle memory” they need to succeed.</p>

<p>Ten years ago, we used to accuse bad developers of just copy/pasting code from Stack Overflow. Vibe coders are a similar phenomenon, except now they’re not even getting exposed to the community interactions around those posts. But so long as IT remains a lucrative career, there will always be people looking to get in without doing the prerequisite work. Just like with Stack Overflow, LLMs are a crutch, and some people never get back on their own two feet.</p>

<h2 id="good-vibrations">Good Vibrations</h2>

<p><em>Vibe coding</em> doesn’t teach you to be a good programmer, engineer, teammate, or problem solver. It only teaches you how to be a good LLM-wrangler, and no one arguing in favor of vibe coding today is arguing that it generates quality code. They’re not even arguing that it’s easy; most vibe coders acknowledge that they have to spend a decent chunk of time fixing issues it creates. They’re only arguing that it’s <em>good enough</em>, and I think that’s demonstrably false. That doesn’t mean you should necessarily avoid LLMs wholesale. My point is that they should be just that: tools in your broad arsenal of problem solving strategies. At least for now, an LLM can’t be a problem solver, and you can’t be a problem solver if you rely too heavily on them.</p>

<p>I still have hope. There’s still time for us to influence new devs to engage with the principles of engineering. We can still hold ourselves accountable to being the best versions of ourselves. I also know that this “race to the bottom” era will end as soon as the consequences start piling up. Investing in yourself is never a bad investment; you’ll want to be primed for the rebound instead of lagging behind.</p>

<p>Perhaps most importantly, don’t betray yourself. Problem solving is as much a creative expression as it is a scientific one, and LLMs are still purely derivative. This is your advantage in the AI-obsessed industry we find ourselves part of today.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="development" /><category term="ai" /><category term="quality" /><summary type="html"><![CDATA[An assessment of so-called "vibe coding" and a warning to its practitioners.]]></summary></entry><entry><title type="html">Nostalgia</title><link href="/blog/2023/11/12/nostalgia.html" rel="alternate" type="text/html" title="Nostalgia" /><published>2023-11-12T10:17:35+00:00</published><updated>2023-11-12T10:17:35+00:00</updated><id>/blog/2023/11/12/nostalgia</id><content type="html" xml:base="/blog/2023/11/12/nostalgia.html"><![CDATA[<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
    <img src="/images/nes-console.jpg" />
    <figcaption>
        <div><p><small><em>Nintendo Entertainment System</em></small></p></div>
    </figcaption>
</div>

<p>Nostalgia has a lot of haters. <em>“Move on,”</em> people say. <em>“Things now are better than they were. You think you do [want old stuff], <a href="https://www.youtube.com/watch?v=0Wrw3c2NjeE">but you don’t</a>. You’re just wearing rose-tinted glasses hiding the flaws of all those things you used to enjoy.”</em> Kind of like obesity, it gets attributed to personal weakness or lack of discipline. Being nostalgic is childish, right? It’s a yearning for a time which has passed and for a world that no longer exists.</p>

<p>Frankly, I can see their point. I don’t miss VHS tapes getting eaten by a screwy VCR, or having to wait for them to rewind. I don’t miss internet speeds measured in bits per second rather than megabits. I don’t miss the Cold War. I don’t miss the times when various diseases which are now curable were death sentences. Culture, society, science, technology—we’ve made so many amazing advancements in innumerable areas that we have the amazing luxury of getting to take for granted today.</p>

<p>And yet… it’s weird. I actually do miss the era before streaming video services had completely taken over and then fractured, leaving my ability to access media at the whims of Hollywood execs. I fondly recall that brief window before the Internet was a massive advertising duopoly transforming misinformation and anger into profit. I remember a time when politics weren’t <em>entirely</em> a red vs. blue clown show. Beyond all that, though, I kind of miss being bored—or perhaps I miss having to make the most out of my limited options.</p>

<h2 id="the-argument-for-nostalgia">The Argument for Nostalgia</h2>

<p>I’m certain this is not a new idea—there are areas of academic study entirely dedicated to nostalgia—but I would wager that it’s not always an “overfondness of the past.” Instead, I posit that nostalgia is frequently a manifestation of dissatisfaction with the state and direction of culture, entertainment, or even society itself.</p>

<p>Before we proceed, let me be clear on the term <em>nostalgia</em> in the context of this article. There are, to me, two definitions of nostalgia which refer to related, but slightly different, phenomena. First is in the same vein as déjà vu: a reflexive feeling one experiences circumstantially, evoking the wistful and sentimental aspects referenced in e.g. the <a href="https://www.merriam-webster.com/dictionary/nostalgia">Merriam-Webster definition</a> of the word. In this post, however, I want to focus on something more specific: the desire to evoke that feeling intentionally.</p>

<p>As a feeling, nostalgia might seem unattractive. For many, it carries elements of melancholy and ennui. This is the dissatisfaction I alluded to earlier: contrasting how things <strong>were</strong> with how they <strong>are</strong>.</p>

<ul>
  <li><em>“Star Wars used to be good, now it’s bad. I miss when it was good.”</em></li>
  <li><em>“Video games used to be full of secrets and fun stuff to unlock, now they’re all microtransactions and Skinner boxes. I miss when they were more straightforward.”</em></li>
  <li><em>“Politics used to be much more balanced and professional, now it’s a bunch of jokers appealing to the lowest common denominator. I miss when politicians worked for the people.”</em> <small>(Yes, I hear you out there. Yeah, you, the cynical person screaming at their screen that politicians have always been garbage. I’m aware, thanks.)</small></li>
</ul>

<p>The common factor here is the desire for a previous state. However, those who actively choose to immerse themselves in nostalgia aren’t focusing on those negative aspects. Instead, they’re purposefully evoking a happiness found in that previous state. Those who seek nostalgia as a feeling are hoping to experience some combination of sentimental reflection and wistful comfort in the way things used to be as a coping mechanism against the way things are today. To put it another way, many of those who indulge in nostalgia are using it to process the trauma of living in the modern world.</p>

<h2 id="nostalgia-as-a-vice">Nostalgia as a Vice</h2>

<p>Of course there are those who take their nostalgia too far. Some folks will get lost in the past, losing sight of the real world and creating a fantasy for themselves. They craft a version of their ideal world which never really existed, whose scale and scope depends on financial constraint more than self-restraint. Overuse of any sort of coping mechanism is a problem—some people drink too much, some folks spend every non-working hour in an MMO, and others drown themselves in nostalgia. This cannot be held against nostalgia exclusively unless one is willing to start moralizing all coping mechanisms, and I suspect that’s a glass house which can’t withstand too many thrown stones.</p>

<p>It could perhaps be argued that nostalgia is a particularly addictive feeling. An innate emotional reaction with strong psychological effects which can be triggered simply by exposing oneself to the correct stimuli is indeed powerful. It’s a double-edged sword; it’s accessible <small>(<em>for the most part</em>, though hyper-monetizing nostalgic media is becoming more common)</small>, effective, and in most cases harmless. But it’s also habit-forming in the same way people can literally <a href="https://www.psychologytoday.com/us/blog/where-science-meets-the-steps/201403/are-you-addicted-unhappiness">become addicted to misery</a>. It’s an unfortunate truth that most of us have some kind of vice. Thankfully, I imagine immersing oneself in nostalgia is healthier than snorting coke, though self-destructive behavior of any kind is still a problem.</p>

<p>Perhaps that’s the only important takeaway here: nostalgia is often referred to as a weakness, an immaturity, or a refusal to let go. I think, for those seeking it out, it’s just like any other hobby, legal vice, or habit. It has its positives and negatives, but often it’s a reactionary defense created to help shield the seeker from some kind of distress (in the scientific sense).</p>

<h2 id="responding-to-change">Responding to Change</h2>

<p>In my examples earlier, I said that I don’t miss the inconvenience of rewinding VHS tapes. However, I <em>do</em> miss what those tapes represent for me. They’re emblematic of a time before the very concept of <em>ownership</em> was being eroded. When you owned a VHS tape of Star Wars, you <strong>owned</strong> it. You could watch it over and over, on your own terms, so long as you had a working TV, VCR, and tape. DVDs were largely the same, but Blu-Rays started introducing draconian DRM schemes which often made it challenging to watch the movie without a stable internet connection or a new enough player. Now, much of the media created in the past few years will likely never see a physical release at all, forever to be locked away behind a monthly subscription which will <a href="https://www.theverge.com/2022/1/14/22884263/netflix-price-increases-2021-us-canada-all-plans-hd-4k">keep going up in price</a> for the foreseeable future. Don’t you kind of miss owning things?</p>

<p>In fact, no one enjoyed rewinding VHS tapes. It was tedious, frustrating, and another point of failure in a system with too many moving parts. But, the instant gratification and expanded capacity of DVDs made entertainment slightly less special. Instead of watching a couple episodes of a TV show at a time, you could catch up on entire seasons with only a few disc swaps. Now, with streaming, you can binge entire shows without ever leaving your couch. Combined with the crippling social media addictions many of us have, well… don’t you often find yourself tuning out the show you just put on? It becomes background noise while you scroll on your phone instead. Don’t you kind of miss being bored?</p>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 200px;">
    <img src="/images/echo-dot.jpg" />
    <figcaption>
        <div><p><small><em>Amazon Echo Dot</em></small></p></div>
    </figcaption>
</div>

<p>Smart home devices are another example: to get the weather, you used to have to read the newspaper or catch the forecast on TV at the right time. Now you can just ask the friendly little puck to tell it to you… so long as you’re fine with it also listening to every word you say and selling that data to advertisers. Don’t you miss the days before every device in your house was spying on you and boiling you down to advertising metrics?</p>

<p>Each of these examples is part of the nostalgia puzzle. They represent subtle but substantial changes in the ways we experience our lives. I’ve focused primarily on media, but in many ways, fundamental assumptions about the way the world works are being challenged across the board. We’re losing rights, privacy, ownership, and livelihood in a measured erosion, and it’s impossible not to feel that on some level. Some people cope by seeking out nostalgia.</p>

<h2 id="rejection">Rejection</h2>

<p>Nostalgia is a rejection of the consequences of what we’re told is progress. It’s intentionally putting oneself backwards in time in order to, just for a moment, escape the “dystopic future” it sometimes feels like we’re in. It’s immersing oneself in a context before all of the very real pressure being exerted on us by the numbing manufactured serotonin hits of social media, the glamorous but soulless games full of psychologically manipulative microeconomies, the dishonest Twitter performances of two-faced politicians getting rich and fat off our division, and so on. Often, it’s allowing oneself to mentally relax within a period of one’s life that was, in some tangible or intangible way, “better” than it is now.</p>

<p>Truthfully, <em>that’s</em> the danger of nostalgia. It’s not the feeling itself, nor the seeking of it, which is harmful. It’s the retreating into the past and giving up on the <strong>now</strong>. Responsible nostalgia is healthy escapism, but too often nostalgia becomes an excuse to not try and make things better today. It’s easier to just wistfully look backward than to sternly march forward, even though the things we want from the past could, theoretically, exist today in harmony with the very real progress we’ve made. We’ve just got to fight for it ourselves rather than wish for a golden time that never actually existed.</p>

<h2 id="so-wait-are-you-pro--or-anti-nostalgia">So, Wait… Are You Pro- or Anti-Nostalgia?</h2>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 200px;">
    <img src="/images/cypher.png" />
    <figcaption>
        <div><p><small><em>"Ignorance is Bliss"</em></small></p></div>
    </figcaption>
</div>

<p>My stance is simple: excessive nostalgia in a population is a symptom that something is wrong. Things will never be perfect, and therefore nostalgia will always exist. We, as people, love to focus on the negatives of today and the positives of yesterday. However, I propose that an <em>abundance</em> of it is a strong indicator that something is fundamentally damaged in society. Like with Neo in The Matrix, it’s a sensation you feel almost subconsciously; a reaction to a stimulus you weren’t even fully aware of. And when you’re choosing how to react to it, you can either choose to escape the Matrix to fight for a better real world, or you can be Cypher, longing so hard for the memory of what used to be that you’d do anything to have it back.</p>

<p>This is where I arrived after reflecting on my own nostalgia. I’ve spent far too much money on 30-year-old video games, dusty old computers, movies from my childhood recorded to a format so outdated they might as well be etched in stone tablets, and keyboards emulating “<a href="https://justinoconner.me/blog/2017/03/06/keyboard-review-unicomp-ultra-classic.html">the good old days</a>.” I even bought a 50-pound hunk of glass, plastic, and analog electronics they used to call a <em>CRT TV</em>—look it up in your history books next to the dinosaurs. And while I will continue to indulge in my escapism from time to time, I recognize that there is a line across which lies a dangerous place full of comfortable wistfulness I could uselessly wallow in for the rest of my life.</p>

<p>So, is nostalgia dangerous? Probably not. Is it healthy? Also probably not. It’s like any other vice or habit: take it in moderation, and use it responsibly. Remember that you can’t go back in time, that the fog of the past makes everything seem simpler than it was, and that no matter what, you still live in the <em>now</em> and have a responsibility to work toward a better <em>today</em>. Maybe if all us nostalgic fools work hard enough to make a better world, future generations will create a lot fewer of us.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="culture" /><category term="entertainment" /><category term="nostalgia" /><summary type="html"><![CDATA[What if nostalgia isn't necessarily a weakness, as it's often framed, but instead a reaction to the direction of culture?]]></summary></entry><entry><title type="html">The Cyborg Strategy</title><link href="/blog/2022/03/26/the-cyborg-strategy.html" rel="alternate" type="text/html" title="The Cyborg Strategy" /><published>2022-03-26T04:25:50+00:00</published><updated>2022-03-26T04:25:50+00:00</updated><id>/blog/2022/03/26/the-cyborg-strategy</id><content type="html" xml:base="/blog/2022/03/26/the-cyborg-strategy.html"><![CDATA[<p>Imagine you live in a dystopian cyberpunk future full of technological implants capable of enhancing your abilities beyond the capacity of a normal human. You can get bionic eye implants to improve your sight, ear replacements to give you the sensitive hearing of a dog, artificial blood and organs to supercharge your athletic performance, and a potent cocktail of drugs which unlock and amplify your mind. Eagerly, you snap up all these opportunities. Before long, over half your body is cybernetic. At what point after these “upgrades” are you no longer yourself? Where is the threshold across which you become something else entirely?</p>

<p>Over three years ago now, I wrote an article titled <a href="/blog/2018/06/12/the-inertial-deathblow.html">The Inertial Deathblow</a> where I theorized that having decision makers mired deep in their own inertia is a critical hindrance to a competitive business. Time has only strengthened my experience here—since the original article, I’ve seen some very promising projects get flushed because of a few myopic leaders. Once those folks left or were removed from decision-making roles, I watched as newer and better solutions flourished.</p>

<p>One thing continually struck me as odd, though. I alluded to this at the end of that first piece: the “correct” option is usually to build on top of the old stuff… <strong><em>sort of</em></strong>. However, despite it being “correct,” it’s not what I saw happen. Let me show you why it didn’t happen that way, and at the same time you’ll come to understand what I mean when I say “sort of.”</p>

<h2 id="legacy-resists-change">Legacy Resists Change</h2>

<p>Instead of happening <em>within</em> the old code, all the cool new stuff got created entirely <em>around</em> the old code. Sometimes modules would get broken off to live somewhere near the old behemoth and speak its arcane, dead language, but most of the time the developers of the shiny modern systems avoided it like the plague. If you’ve worked in software engineering for long, you’ll already understand why this happened, but if not, let me explain.</p>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 200px; max-width: 300px;">
<img src="/images/katamari.jpg" />
</div>

<p>The first iteration of any system is almost always a monolith, especially when it’s being written by a start-up team and/or by newer developers. Monoliths are not, inherently, bad. In fact, from an architecture perspective, I’d wager all solutions should begin as “trivial monolith” and only become more complex as necessary from there. However, monolithic applications have some key weaknesses. One of the most impactful is the tendency for them to slowly acquire their own gravity. As they grow, and as things get bolted onto them, they act as Katamaris, growing bigger and sucking up more and more things until eventually they become their own little planets with complex ecosystems. As soon as you run into a snag, stuff starts breaking.</p>

<p>The sheer force which can be exerted by a large monolithic application is unmatched. Skilled developers can defend against it, but most shops don’t exclusively employ folks trained in the art of anti-gravity. These systems can create vast wells of tribal knowledge, which makes them resistant to training and documentation. Since all the application logic lives in one place, devs tend to store all the data in one place too, creating monolithic databases. These behemoths are huge single points of failure, often only mitigated by vast, expensive backups and wasteful failover clusters. Straddling all this is the inertia created by having such a large, tightly-knit system. Whatever technical debt it generates, the system was built as if it were a single infallible unit and, by its very nature, has tightly interwoven its components, creating ubiquitous dependencies which become nearly impossible to unwind.</p>

<p>As a result of this tendency, it becomes very challenging to change the monolith. Often, logic within it depends so strongly on other pieces of itself that it cannot be meaningfully reused anyway—so developers don’t. They just write new code which does functionally the same thing any time they need to interact with a piece of the legacy system. However, as hinted earlier, these monolithic systems have their own arcane language. They often do not present any sort of API at all; the monolithic logic directly manipulates its components. So, if your external software wants to interoperate with the system, it must either:</p>

<ol>
  <li>Directly manipulate the monolith’s components as well, causing the new software to take a transitive dependency on the monolith anyway. It must “play nice” with the monolith and access the system the same way, but without being able to use the same code modules due to a spider web of internal dependencies. Or…</li>
  <li>Throw away the old system and reimplement it from scratch. Or…</li>
  <li>Carefully operate on the monolith, excising and enhancing pieces of it to create an API where one never existed before.</li>
</ol>

<p>It’s this third option which we’re going to focus on as the superior strategy.</p>

<h2 id="a-cybernetic-organism">A Cybernetic Organism</h2>

<p>Perhaps the most important lesson to learn about monolithic systems is that they were almost never intended to be so gravitational. Do not make the mistake of hating the original developers—instead, try to understand them. Put yourself in their mindset. Why would they have created the application/service/system the way they did? What forces were acting on them which kept them from making better decisions? When were these decisions made and how does that affect the context? Answering these questions will help you avoid making the same mistakes, sure, but it will also inevitably lead you to a dire conclusion.</p>

<blockquote>
  <p>All software, long-enough lived, will eventually become a monolith.</p>
</blockquote>

<p>It is crucial that you understand this; otherwise, it cannot be countered. All designers, developers, and stakeholders on a piece of software must keenly grasp this truth. The mitigation requires cooperation from everyone, from the architects and product owners through the most entry-level engineers.</p>

<p>Armed with this knowledge, you will come to understand a universal law: <strong>monoliths are a part of real-world software systems and must be handled</strong>. They emerge from well-intentioned systems after years of bolt-on additions, quick ‘n dirty hacks, inexperienced developers making poor or uninformed decisions, and late night crunch time. Whether it was designed as a monolith or was once a tiny little microservice, the unyielding entropy which bloats and contorts software in the production world nudges it ever closer to its fate.</p>

<p>So, how do you handle monoliths? Many junior devs will argue for scrapping the whole thing and starting from scratch. This works for trivial applications, unimportant systems, or for companies with ludicrous amounts of expendable capital, absolutely. But, most commonly, an application which became a monolith evolved that way because it was too integral to the business. Even if you tried to run the new system in parallel with the old, the chance of introducing critical regressions at that scale of change nears 100%. Most stakeholders aren’t going to accept that level of risk. <small>Of course, risk aversion and risk management are all pieces of the puzzle which agile development and DevOps try to address… but that’s another topic for another time.</small></p>

<p>No, the answer is not to scrap the monolith. You must instead replace its organs with cybernetic enhancements.</p>

<h2 id="living-tissue-over-a-metal-endoskeleton">Living Tissue Over a Metal Endoskeleton</h2>

<p>Fundamentally, all units within a piece of software can be replaced… eventually. In most programming languages, any given unit of logic has a few kinds of inputs:</p>

<ul>
  <li>“Global” state (e.g. global variables, static fields)</li>
  <li>Module-level state (e.g. instance fields/properties)</li>
  <li>Local state (e.g. parameters to the unit, variables in the unit scope)</li>
</ul>

<p>…and a few outputs:</p>

<ul>
  <li>Side-effects on shared (global or module) states</li>
  <li>I/O operations</li>
  <li>Return values</li>
</ul>

<p>Well-written code will likely be very careful about how these inputs and outputs flow through a unit, usually tending toward a functional style, but sadly not all code is well-written. Legacy, monolithic code very often tries to manipulate any state it can legally touch, all across the system, without any consideration to how those side-effects might affect anything else. This makes the proposition of trying to replace any given unit a daunting one. Often, it can feel impossible to unwind the various layers of dependencies a method might take. However, <strong>it is possible</strong>. You just have to be willing to accept some compromises and commit to a vision. And, in the spirit of agile, make sure each iteration is bringing its own value, because you never know when conditions will change and your vision will get shelved.</p>
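<p>To make those lists concrete, here is a minimal C# sketch of a single unit exhibiting each category. All of the names here (<code class="language-plaintext highlighter-rouge">Config</code>, <code class="language-plaintext highlighter-rouge">Uploader</code>, <code class="language-plaintext highlighter-rouge">Send</code>) are invented for illustration, not taken from any real system:</p>

```csharp
using System;

// Hypothetical names for illustration only.
public static class Config
{
    // "Global" state: a static field any unit in the program can read or mutate.
    public static int MaxRetries = 3;
}

public class Uploader
{
    // Module-level state: an instance field scoped to this object.
    private int _attempts;

    // Local state: the 'payload' parameter and the 'attempt' variable below.
    public int Send(string payload)
    {
        int attempt = 0;
        while (attempt < Config.MaxRetries)          // input: global state
        {
            attempt++;
            _attempts++;                             // output: side-effect on module state
            Console.WriteLine($"sending {payload}"); // output: an I/O operation
        }
        return _attempts;                            // output: return value
    }
}
```

<p>Legacy monolithic code tends to lean hard on the global-state input and the shared-state side-effect output. The more a unit can be reshaped to take only parameters and produce only a return value, the easier it becomes to excise later.</p>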

<h3 id="thought-exercise">Thought Exercise</h3>

<p>Let’s take a close look at a fairly trivial example. Pretend this code lives in a monolithic three-tier MVC service (<strong>WARNING: intentionally bad code ahead</strong>):</p>

<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// This code is intentionally very bad in so many ways</span>
<span class="k">public</span> <span class="n">AssetOperationEvent</span> <span class="nf">CreateAsset</span><span class="p">(</span><span class="n">AssetViewModel</span> <span class="n">asset</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">using</span> <span class="p">(</span><span class="kt">var</span> <span class="n">repo</span> <span class="p">=</span> <span class="n">Repositories</span><span class="p">.</span><span class="n">CreateRepository</span><span class="p">&lt;</span><span class="n">AssetRepo</span><span class="p">&gt;())</span>
    <span class="p">{</span>
        <span class="kt">var</span> <span class="n">newAsset</span> <span class="p">=</span> <span class="k">this</span><span class="p">.</span><span class="n">AssetConverter</span><span class="p">.</span><span class="nf">MapViewModelToModel</span><span class="p">(</span><span class="n">asset</span><span class="p">);</span>
        <span class="kt">var</span> <span class="n">img</span> <span class="p">=</span> <span class="n">repo</span><span class="p">.</span><span class="nf">LoadImage</span><span class="p">(</span><span class="n">asset</span><span class="p">.</span><span class="n">Image</span><span class="p">);</span>
        <span class="n">img</span> <span class="p">=</span> <span class="n">ImageHelpers</span><span class="p">.</span><span class="n">Validation</span><span class="p">.</span><span class="nf">SizeLimiter</span><span class="p">(</span><span class="n">img</span><span class="p">);</span>     <span class="c1">// throws if invalid</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">newAsset</span><span class="p">.</span><span class="n">Name</span><span class="p">.</span><span class="nf">Trim</span><span class="p">().</span><span class="n">Length</span> <span class="p">&gt;</span> <span class="m">0</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">repo</span><span class="p">.</span><span class="nf">SaveImage</span><span class="p">(</span><span class="n">newAsset</span><span class="p">,</span> <span class="n">img</span><span class="p">);</span>
            <span class="n">newAsset</span><span class="p">.</span><span class="n">ImageData</span> <span class="p">=</span> <span class="n">img</span><span class="p">;</span>
            <span class="n">repo</span><span class="p">.</span><span class="nf">SaveAsset</span><span class="p">(</span><span class="n">newAsset</span><span class="p">);</span>
            <span class="k">return</span> <span class="n">AssetOperationEvent</span><span class="p">.</span><span class="nf">Successful</span><span class="p">(</span><span class="n">newAsset</span><span class="p">);</span>
        <span class="p">}</span>
        <span class="k">else</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">InvalidOperationException</span><span class="p">(</span><span class="s">"Name is required"</span><span class="p">);</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>If you’ve been tasked with creating a new Asset Management REST API, but the old asset-related functionality in the monolithic application must persist, how would you approach it? Would you…</p>

<ol>
  <li>Move the asset creation logic into a library which can be shared by both the API and the monolith?</li>
  <li>Rip the logic out of the monolith entirely, put it into the API, and have the monolith call the API?</li>
  <li>Change the front-end to call into the new API instead of the monolithic back-end when creating assets?</li>
  <li>Rewrite the asset creation code in the API, have it integrate with the same tables as the monolith, and maintain both code paths?</li>
</ol>

<p>Seriously, stop reading for a minute and decide which one you’d choose. I’ll wait.</p>

<hr />

<p>Have you decided? Good. Let’s break down the pros and cons of each of these options.</p>

<p><strong>Option 1</strong>: <span style="background-color: rgba(50,50,0,5);"><em>Move the asset creation logic into a library which can be shared by both the API and the monolith</em></span><br />
If you picked this one, you’re going to end up unwinding spaghetti for a long time. Your library is going to have to introduce a new transitionary DTO between <code class="language-plaintext highlighter-rouge">AssetViewModel</code> and <code class="language-plaintext highlighter-rouge">Asset</code>, know about the repository class (and therefore the database connection), contain all of the image validation (which isn’t necessarily only used here), discover and reimplement the implied business rules enforced by exceptions that could be thrown by your transitive dependencies, and so on. Going down this road is a path to pain. That being said, it’s also one of the only options which keeps you from needing to rewrite the business logic, which lessens the chance of regressions.</p>

<p><strong>Option 2</strong>: <span style="background-color: rgba(50,50,0,5);"><em>Rip the logic out of the monolith entirely, put it into the API, and have the monolith call the API</em></span><br />
This might seem like the most straightforward approach at first glance, but you’re still saddled with needing to understand and reimplement various layers of business rules which are not immediately obvious. On top of that, you’ve also introduced another network hop—calls to create an asset now must travel from the client, to the service, to the API, and back again. Perhaps a single additional hop both ways isn’t so bad, but when this gets out of hand, <a href="https://www.youtube.com/watch?v=gfh-VCTwMw8">it can be catastrophic</a>. This isn’t even considering that, assuming your new API is following some semblance of best practices, you’ll also need to worry about service-to-service authentication and authorization.</p>

<p><strong>Option 3</strong>: <span style="background-color: rgba(50,50,0,5);"><em>Change the front-end to call into the new API instead of the monolithic back-end when creating assets</em></span><br />
Perhaps the “cleanest” of the four options (in the sterile sense), this option still requires that your new API reimplement all of the business rules, and as we’ve found, this is not trivial. It can also create fragmentation on your front-end application as those developers hack in branching paths for each of the operations your API supports vs. the ones it doesn’t yet. However, it removes the need for multiple back-end code paths, and doesn’t introduce an extra network hop.</p>

<p><strong>Option 4</strong>: <span style="background-color: rgba(50,50,0,5);"><em>Rewrite the asset creation code in the API, have it integrate with the same tables as the monolith, and maintain both code paths</em></span><br />
Frankly, it turns out that most of the other options are going to have you rewriting a significant portion of the asset creation code anyway. Unfortunately, this option introduces the highest amount of additional maintenance cost. Now you’ve got two separate, completely distinct logical paths trying to implement the same business rules against the same backing tables. They must target the same tables, or else operations against the API won’t show up in the monolithic application and vice versa. Any deviance between them could violate business rules and cause hard-to-diagnose errors in either system. Maybe the most damning disadvantage of this approach is that it doesn’t do anything to discourage the monolith. Since both code paths continue to exist, the default one will continue to be the monolith’s version. Your new logic will always just be a copycat.</p>

<hr />

<p>None of these options are ideal. Everyone has their particular philosophy which probably leads them down one path over another. Personally, I propose that any approach which results in supporting and maintaining two code paths is unacceptable, no matter the cost otherwise. The risk of divergence is so high, and it can manifest in so many subtle ways (and even if you did have unit tests, those would diverge as well), that almost anything else is better. Which option you chose isn’t the most important part, though.</p>

<p>The important part is that they all have something in common. They each create a new component which does the same thing, but better in some way. One way or another, you’ll wind up moving the capability out of the old thing and hooking up all the connective tissue to the new. That’s why I call this <em>the Cyborg Strategy</em>: it’s an approach to maintaining or enhancing legacy systems by slowly upgrading them, unit by unit, towards a fundamentally different architecture.</p>
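<p>As a hedged sketch of what a first “surgery” might look like, consider excising just the implicit “name is required” rule from the <code class="language-plaintext highlighter-rouge">CreateAsset</code> method above into a standalone, dependency-free unit. The class name <code class="language-plaintext highlighter-rouge">AssetRules</code> is my own invention, not part of the original code:</p>

```csharp
using System;

// Hypothetical first "cybernetic" unit: the business rule that was buried
// inside CreateAsset, now standalone and callable from both the monolith
// and a new REST API without dragging repositories or image helpers along.
public static class AssetRules
{
    public static bool HasValidName(string name) =>
        !string.IsNullOrWhiteSpace(name);

    public static void EnsureValidName(string name)
    {
        if (!HasValidName(name))
            throw new InvalidOperationException("Name is required");
    }
}
```

<p>The monolith’s <code class="language-plaintext highlighter-rouge">CreateAsset</code> could then call <code class="language-plaintext highlighter-rouge">AssetRules.EnsureValidName(newAsset.Name)</code> in place of its inline check, and a new API can reuse the exact same rule, so the two code paths cannot drift apart on this particular rule. (Note one deliberate compromise: <code class="language-plaintext highlighter-rouge">IsNullOrWhiteSpace</code> tolerates a <code class="language-plaintext highlighter-rouge">null</code> name, where the original inline <code class="language-plaintext highlighter-rouge">Trim()</code> check would have crashed.)</p>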

<h2 id="looking-in-the-mirror">Looking In the Mirror</h2>

<p>The important part of this strategy isn’t just ripping functionality out from a monolith and replacing it with a newer, better, modular component. It’s the idea that over time, more and more pieces of the system will work this way, all while maintaining the veneer of the old system. At the beginning of this article I asked, “where is the threshold across which you become something else entirely?” The philosophy behind this approach is that eventually, you’ll reflect back on the old code and realize that it’s not the same application anymore—like our hypothetical cyborg looking in the mirror and realizing they’re no longer truly human.</p>

<div style="float: left; padding: 10px 10px 10px 30px; min-width: 200px; max-width: 300px;">
<img src="/images/terminator-mirror.jpg" />
</div>

<p>I’ve made this whole thing sound pretty amazing, but this strategy is not without its downsides. The act of surgically removing the “organs” of your old code gets easier as more and more units are removed, but early on it can be a massive investment fraught with compromises. You’ll often find yourself taking less desirable routes—like in our trivial example above, you might need to reimplement small pieces and find ways to keep it all working together without bringing down the house of cards.</p>

<p>These compromises are acceptable so long as you’re continuing to bring value with each surgery, but what about when you can’t? What about those scenarios so intertwined or fragile that you can only remove large, interlocked chunks at a time? In these situations, I’ll cite Martin Fowler, from his article <em><a href="https://martinfowler.com/articles/is-quality-worth-cost.html">Is High Quality Software Worth the Cost?</a></em>:</p>

<blockquote>
  <p>The annoying thing is that the resulting crufty code both makes developers’ lives harder, and costs the customer money. … High internal quality reduces the cost of future features, meaning that putting the time into writing good code actually reduces cost.</p>
</blockquote>

<p>Code that can only be operated on in massive, expensive chunks is absolutely <em>overgrown</em> with cruft. At some point, the value proposition for that surgery overwhelms any counter-arguments. There’s no way that your legacy code could simultaneously be so easy to maintain and update that it remains cheap to do so, yet so hard to split apart and modularize that it must be tackled in huge projects. They’re mutually exclusive.</p>

<p>The mechanics of pitching this value to the business are out of scope here—a topic for another article. Maybe I’ll write that one in 2025. Nevertheless, you will find yourself at a point where your path leads you here: removing and replacing components of a legacy application with augmented pieces in order to evolve it toward a better architecture over time. Your objective is to look back on yourself and your team and realize that the application you started with is gone, replaced by the cybernetically enhanced version you’ve been piecing together over months or years.</p>

<h2 id="the-sacrifice-">The Sacrifice 👍</h2>

<p>Once you’ve finally created a cyborg, what should you do with it? It still wears the face of the application it used to be, and probably still hosts some amount of the original logic, but most of the truly valuable pieces have been modularized. Ultimately, there are three possible fates for this wayward hybrid.</p>

<ol>
  <li><strong>Continue supporting it until deprecation</strong>. There is a strong argument for keeping it alive, serving its original purpose until the business no longer needs it. You’ll continue paying technical debt on those leftover bits, but eventually, conditions and requirements will change enough that the app can be retired. New systems can come in and take advantage of the APIs you’ve created in a harmonious way, the extracted logic continuing to provide value for years.</li>
  <li><strong>Transplant everything that remains into a new, more suitable form</strong>. By the time you’ve reached this stage, the facade of what the app once was is now vestigial. All of the value generation occurs elsewhere. There is even more value to be gained by “finishing the job” and moving the leftovers into a more accommodating form, essentially just continuing the augmentation you’ve been doing until almost 100% of the old code is gone.</li>
  <li><strong>Sacrifice the old shell</strong>. This is a good opportunity to revisit the original business needs served by this hybrid system. You may find that it’s time to cast aside the potentially ancient assumptions that fed into its creation and start fresh with a new set. After all, the valuable parts have been removed and can now be leveraged by the new shell.</li>
</ol>

<p>Which option is correct for you depends on the requirements, the application, your roadmap, and more. You’ve painstakingly bought yourself these opportunities. Choose wisely.</p>

<h2 id="finale">Finale</h2>

<p>Did I abuse the whole “cyborg” metaphor too much? Obviously <em>I</em> don’t think so, but I’m biased. I wrote the damn thing.</p>

<p>I’m aware that these ideas aren’t new. Though I haven’t read them yet, I know there are books <a href="https://www.amazon.com/dp/0134757599">by Martin Fowler</a> and <a href="https://www.amazon.com/dp/0131177052">Michael Feathers</a> that cover these topics in much greater detail. Still, I wanted to share my philosophy and mindset on how best to break down monolithic legacy applications. It’s a topic about which I have a lot of passion and, I believe, some unique perspectives.</p>

<p>I truly hope this helps someone wrap their head around an often overwhelming and difficult problem, and gives them some hope that it can be done. If you want additional reading, I recommend checking out everything I’ve linked within the article—there are some amazing resources out there on this topic, since it’s something almost every engineer runs into during their career.</p>

<p>Next time, I’m going to follow up on another of my teased articles and discuss how certain hiring practices and team structures feed into failure and promote the development of these nasty monoliths.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="quality" /><category term="development" /><category term="operations" /><summary type="html"><![CDATA[Transforming legacy software with... cybernetics?]]></summary></entry><entry><title type="html">Musings on Game Design 1: Difficulty vs. Challenge</title><link href="/blog/2021/01/10/musings-on-game-design-01.html" rel="alternate" type="text/html" title="Musings on Game Design 1: Difficulty vs. Challenge" /><published>2021-01-10T18:45:50+00:00</published><updated>2021-01-10T18:45:50+00:00</updated><id>/blog/2021/01/10/musings-on-game-design-01</id><content type="html" xml:base="/blog/2021/01/10/musings-on-game-design-01.html"><![CDATA[<div style="float: left; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 400px;">
<img src="/images/doom-eternal-marauder.jpg" />
</div>

<p>Doom Eternal is a strange game. Despite being a huge fan of the Doom series since I was a kid, and even though I greatly enjoyed Doom (2016), I bounced off of Eternal for a while. I loved the Heavy Metal aesthetic (and soundtrack), found the weapons incredibly satisfying, and connected well with the new movement mechanics, yet something didn’t click with me at first. The hype died down pretty quick and I was left frustrated, racking up a lot of deaths.</p>

<p>The game is <em>hard</em>. I started playing on the “normal mode”, <em>Hurt Me Plenty</em>. By the fourth level, I cranked it down to easy mode (<em>I’m Too Young to Die</em>) and still kept getting wrecked. Sure, I was making more progress, but I could tell something wasn’t right. My initial reaction was that the game was just too damn difficult. I’ve played difficult games before, but Doom Eternal felt different. I put the game down for a while.</p>

<h2 id="speedruns-to-the-rescue">Speedruns to the Rescue</h2>

<p>My inability to enjoy the game didn’t stop me from watching its speedruns. Specifically, I started tuning into <a href="https://www.twitch.tv/byteme">Byte’s stream</a> and watched his Ultra Nightmare 100% attempts. He’s not the best Eternal player out there, but I found his style entertaining and had enjoyed his Doom (2016) run at AGDQ 2020. At first, I was just wowed by his mastery over the mechanics. While watching, though, I started to pick up on some patterns.</p>

<p>He used the flame belch (lovely name, right?) far more than I did, and on the weakest enemies. He never ran out of ammo, somehow, while I was always low. He seemed to throw freeze grenades at heavy demons and then move away from them instead of closing the gap to kill them. Noticing all this inspired me to get back into the game and start experimenting more with the tools at my disposal.</p>

<p>Instantly, I was 1000% better at the game and beat it, no problem, on Ultra Nightmare… is what I hoped would happen. Instead, I still got my ass kicked repeatedly on easy mode. But I was <strong>learning</strong>. I figured out that the flame belch doesn’t spawn armor shards <em>just because</em>. It spawns them as a renewable way to refill your armor, and armor is better than health. The chainsaw regenerates a charge of fuel because you’ll quickly run out of ammo if you don’t replenish it; the ammo limits are significantly lower than the previous game for a reason. The freeze grenade is a crowd control tool, offering you an opportunity to get some distance and reevaluate your situation.</p>

<p>Perhaps most importantly, I started having fun again. Where before the game felt difficult to the point of frustration, now it felt like it was challenging me to do better and learn more about its mechanics. I kept improving until eventually I felt comfortable moving back up to normal mode, where I promptly got stomped again. But I didn’t feel too bad about it this time. I <em>understood</em> the game on a much deeper level than I did initially, and knew that success was within reach. I just had to get good, basically.</p>

<h2 id="difficulty-and-challenge-are-not-the-same">Difficulty and Challenge Are Not the Same</h2>

<p>I’ll be honest: the “correct” way for my Doom Eternal story to end would be for me to say that by understanding it, I was able to beat it. It would feel like the natural conclusion. But no, I actually never beat it. Don’t get me wrong: I got much further, and I didn’t stop playing because of that original frustration this time. I probably will eventually circle back around and finish the game—like a lot of folks, I’m spoiled for choice when it comes to entertainment, and other things grabbed my attention.</p>

<p>Despite having not finished it, my experience with Eternal left an impression on me about the contrast between <em>difficulty</em> and <em>challenge</em>. People tend to use the words interchangeably, but I think there’s a very clear distinction between them. Indeed, a challenging game is difficult, but a difficult game isn’t necessarily challenging.</p>

<p><strong>Difficulty</strong> in a game (or in most activities, for that matter) is when something requires a great deal of investment to achieve. The difficult action is non-trivial to complete successfully. Critically, however, there is no implication that the investment be tied to skill, talent, or practice. Something can be difficult to do simply because it’s very time-consuming.</p>

<ul>
  <li>Hitting the max level in Final Fantasy VII is difficult, but it’s not challenging. If you spend enough time grinding, you’ll get there, and the grinding is mindless busywork (i.e. it requires little thought or input).</li>
  <li>Finishing Dragon Quest II is difficult. Many of the enemies in the end game can kill your party in a single action. Luck is more of a factor here than anything else.</li>
  <li>Many roguelike games (e.g. Nethack) are incredibly difficult to complete. The difficulty lies in deep randomness, but challenge can be found in strategies to mitigate it.</li>
  <li>Battletoads is a difficult and challenging game, but many facets of its difficulty originate from unrealistic and unfair mechanics which can only be addressed by rote memorization.
    <ul>
      <li>This is true of many older arcade- and NES-era action games. These games were designed to keep you playing even when the quantity of game content was low. This is especially true for arcade games, where the goal is to keep you pumping quarters into the machine.</li>
    </ul>
  </li>
  <li>Hundreds of mobile games (and, increasingly, big budget retail releases) are very difficult, but can be made easier through purchases or time-gated resources. This difficulty cannot be overcome through skill; it requires an investment of time and/or money.</li>
</ul>

<p>I don’t mean to imply a lack of challenge in these games. Going back to the Final Fantasy VII example, a skilled player with a good strategy will complete the game more quickly than one without. But ultimately, the game can be completed, more-or-less, by anyone.</p>

<p><strong>Challenge</strong>, on the other hand, is when the action cannot be accomplished successfully until you perform it better. “Better” usually means past a designated threshold of skill, but can sometimes be on a sliding scale of judgement based on the goals of the game (think the scoring system in Devil May Cry). Some difficult games can be brute-forced, while by definition a challenging game cannot—unless the repetition results in skill improvement, of course, but then we just call it practice.</p>

<ul>
  <li>Doom Eternal turned out to be a very challenging game for me. Until you understand the rules of the game and gain some skill at the necessary tactics and mechanics, you will reach a point where you can no longer progress.</li>
  <li>Dark Souls is famous for being a highly challenging game.</li>
  <li>The mainline Mario games often offer a natural scale of selectable and/or optional challenge, but even at the lowest levels, almost the entire game design is built around challenge.
    <ul>
      <li>The required challenge may not be very high, but it’s there nonetheless, especially for newer players who possess less baseline skill. Not everyone is intimately familiar with the hand-eye coordination and mental mapping necessary to move a character in 3D space with a controller.</li>
    </ul>
  </li>
</ul>

<p>To put it succinctly, challenging games are literally encouraging you to get better at them through their design.</p>

<h2 id="challenge-is-not-necessarily-fun">Challenge Is Not Necessarily Fun</h2>

<p>So far, this article has painted a fairly negative picture of games that possess a high level of <em>difficulty</em> without necessarily having <em>challenge</em>. I even said that once I discovered the challenge of Doom Eternal, I started having more fun. The truth, though, is that the fun factor of a game is not exclusively tied to its challenge or difficulty.</p>

<p>Let’s look at the Final Fantasy VII example again. It’s a highly regarded classic, often topping people’s “best games of all time” lists, but as I said earlier: the game can be beaten by pretty much anyone. This means it has no challenge, right? Well, maybe not in the core game, but players seeking a challenge actually have several options.</p>

<div style="float: right; padding: 10px 10px 10px 30px; min-width: 100px; max-width: 300px;">
<a href="/images/ff7-bradygames-rubyweapon.png" target="_blank"><img src="/images/ff7-bradygames-rubyweapon.png" /></a>
<figcaption>
    <div><p><small><em>The BradyGames Strategy Guide's Blurb for Ruby Weapon</em></small></p></div>
</figcaption>
</div>

<ul>
  <li>They can take on the optional superbosses, whose defeat requires more than just grinding XP and reaching max level.
    <ul>
      <li>Ruby Weapon and Emerald Weapon are much more challenging fights than anything in the core game.</li>
      <li>Today, the age of the game plus the drastically increased availability of internet access has made strategies for killing them commonplace.</li>
      <li>Back in the day, though, everything was hearsay. They were shrouded in rumors, and even the strategy guides didn’t have great tips for beating them. Finding someone who actually had beaten one was genuinely impressive.</li>
    </ul>
  </li>
  <li>They can just force themselves to play more intelligently and optimize their gameplay more.
    <ul>
      <li>For example, rather than just leveling up every time you lose to a boss, these players might retry several times with different tactics.</li>
    </ul>
  </li>
  <li>They can enforce their own gameplay restrictions.
    <ul>
      <li>This is a niche audience, to be sure, but it’s one that enjoys taking otherwise “easy” games and making them challenging through intrinsic objectives such as not equipping Materia or shooting for the lowest possible character level at the end of the game. Speedrunning falls into this category as well.</li>
    </ul>
  </li>
</ul>

<p>But what if you just <em>don’t want to be challenged</em>? Isn’t it valid to want to enjoy an experience like Final Fantasy, with its intricate story and elaborate cutscenes and grand adventure, without having to get good? What about players with disabilities, or those with little capacity to practice a video game for long periods of time? Many games are tailored around the experience of playing them without necessarily requiring a show of skill to “earn” that experience, and that’s okay too.</p>

<p>Not every game needs to be challenging. Presenting an obstacle to overcome is only one of the many tools in the design toolbox that can make something fun. Sometimes, challenge is a barrier to enjoyment. Take Dark Souls, for example. I’ve never beaten it. Hell, I’ve barely made it past the starting area. Despite that, I still enjoy the game’s atmosphere, lore, and art. For me, it’s the perfect Let’s Play game. I’m not interested in putting in the effort of gaining the skill necessary to see more, and the pleasure I get from overcoming the challenges doesn’t outweigh the frustration I feel with myself when I continually fail. Since I enjoy the presentation of the game, though, and find its gameplay loop interesting (in theory), I also enjoy watching others play it and discuss it.</p>

<h2 id="every-game-has-its-place">Every Game Has Its Place</h2>

<p>The intangible elements that make a game fun are extremely hard to pin down. Plenty of awesome games are challenging, tons of amazing games are easy, and games exist on every point in the diamond-shaped spread between ease, difficulty, accessibility, and esotericism. There is a place for all games, whether they challenge you to be better or not. It’s very interesting to me how often I see difficulty conflated with challenge, however. I think the industry would do well to learn the difference and better identify where designers are substituting <em>challenge</em> for plain <em>difficulty</em>.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="gaming" /><category term="gamedesign" /><category term="musing" /><category term="doom" /><summary type="html"><![CDATA[Is "difficult" the same as "challenging" in games?]]></summary></entry><entry><title type="html">A Follow-up: World of Warcraft Classic</title><link href="/blog/2019/08/24/a-follow-up-world-of-warcraft-classic.html" rel="alternate" type="text/html" title="A Follow-up: World of Warcraft Classic" /><published>2019-08-24T17:16:50+00:00</published><updated>2019-08-24T17:16:50+00:00</updated><id>/blog/2019/08/24/a-follow-up-world-of-warcraft-classic</id><content type="html" xml:base="/blog/2019/08/24/a-follow-up-world-of-warcraft-classic.html"><![CDATA[<p><img src="/images/wowclassic.jpg" alt="WoW Classic" /></p>

<p>Over three years ago, <a href="/blog/2016/04/14/is-wow-afraid-of-legacy.html">I wrote an article</a> about my suspicion that the World of Warcraft team was afraid of creating their own legacy servers. In it, I suggested that they were not only worried about the engineering effort and maintenance costs behind running two vastly different versions of the game, but also, on some level, fearful of just how positive a response those servers might receive.</p>

<p>Well, if my hypothesis was correct, Blizzard must be absolutely shitting themselves with terror right now. It turns out all they needed was a little push from the community, because in late 2017 they finally announced WoW Classic. Since that announcement, the hype around the release has only gotten stronger. Now that we’re mere days away from release, the original World of Warcraft—which Blizzard’s top brass seemed so confident <a href="https://www.youtube.com/watch?v=0Wrw3c2NjeE">we didn’t actually want</a>—is experiencing the kind of resurgence other companies can’t even pay for.</p>

<p>Since the initial run of servers went live for character reservations on August 12th, Blizzard has added (or will add before launch) over a dozen additional servers. One can assume they were added out of necessity; it seems that even once they conceded to actually offering a Classic version of WoW, they still couldn’t believe anyone actually wanted to play it. Now, the numbers speak for themselves. The servers are filling up before the majority of players have even logged in for the first time, and most of those players likely wouldn’t have bothered reserving characters ahead of time anyway. By all accounts (<a href="https://old.reddit.com/r/classicwow/comments/ct08c7/welcome_to_the_rclassicwow_subreddit_ama_with_the/exi4fyg/?context=3">including their own</a>), WoW Classic is more massive than Blizzard expected.</p>

<p>I won’t spend this article gloating about how right the community was about the demand for Classic. While I intend to play it for a good, long time, everyone will eventually reach a point with it where the nostalgia has worn off. Myself included. We’ll have seen the expansive world, relived the peaks and valleys of joy and pain, and remembered both how we loved and why we hated old-school MMO design. We’ll be ready to move on. I hope Blizzard learns a few things from this massive initial success, however, and eventually releases Burning Crusade and Wrath of the Lich King-era servers as well.</p>

<h2 id="a-link-to-the-past">A Link to the Past</h2>

<p>Instead, I want to congratulate Blizzard from an engineering perspective. They have managed to achieve something near-miraculous with Classic: they brought the future back to the past. For those unaware, Blizzard essentially had three options to bring a legacy version of WoW to market:</p>

<ol>
  <li>Stand up the old server software and release the 1.12 client from 2006 and let the players deal with the known bugs, exploits, security holes, and lack of modern integrations (e.g. with the Battle.net client).</li>
  <li>Use the 2006 code, but fix all known exploits and security issues and add the necessary integrations.</li>
  <li>Take the modern code and re-implement the 1.12 logic and systems.</li>
</ol>

<p>It’s option #3 that I didn’t consider in my original article. Actually, that’s not entirely true—the idea passed through my mind as I wrote it, but I discarded it. It seemed so absurd, so unrealistic, and so expensive that I assumed it would be fiscally irresponsible. That’s the option they ended up going with. I only have myself to blame for this lack of foresight; after all, I even said before that Blizzard has vast resources and talent at its disposal. Their engineers considered the available options and chose the one which made sense to them.</p>

<p>But why? Why risk the millions of possible regressions that come with trying to force a newer system to emulate such a complex old system, especially one where the nuances of the system are a critical part of the experience? There are a few talks from Blizzard engineers on this topic, but perhaps the most appropriate one is the “<a href="https://www.youtube.com/watch?v=hhKkP8LryYM">Restoring History: Creating WoW Classic</a>” panel from Blizzcon 2018. In this presentation, the engineers don’t give a strong reasoning behind their choice of direction. They briefly gloss over some pros and cons, but I imagine most folks at that panel didn’t care too much about the engineering behind Classic. So, we’re left to pick from the crumbs and make assumptions.</p>

<h2 id="blizzard-a-gaming-enterprise">Blizzard: A Gaming Enterprise</h2>

<p>Blizzard Entertainment, the game studio underneath the parent company Activision-Blizzard, has over 5000 employees. They are a $700 million business. Obviously, this is massive. Compared to their size at WoW’s launch in 2004—around 400 people—the scale of their operations has grown by an order of magnitude. They’re playing in the big leagues now, and they’re on the same turf as many major corporations. As a result, it only makes sense to think of them from a corporate perspective, and what’s the most important thing to corporate IT? Infrastructure.</p>

<p>Infrastructure maintenance, stability, and growth are among the foremost concerns of corporate engineering teams. The rise of platforms like Kubernetes, DevOps and SRE roles, and cloud computing has demonstrated how pivotal infrastructure is to the success and resiliency of a company. Blizzard is surely no different. While it’s impossible to know what their infrastructure looks like today, I guarantee it’s nothing at all like how it was 15 years ago.</p>

<p>Teams move away from old infrastructure because they outgrow it, sure, but they also move away because of the flaws. Lack of scalability, technical debt, security holes, dead technologies; any of these alone are perfectly valid reasons to update your infrastructure. I imagine Blizzard, a company with zero experience in MMO infrastructure in 2004, had all these and many more. To a team operating at this scale, returning to the ancient “broken” infrastructure would be poisonous to the entire ecosystem. The entire platform is at risk if a single component is weak.</p>

<p>Infrastructure was almost certainly the reason Blizzard chose to use the modern WoW code as a starting point. The team simply could not take the risk of running ancient servers, designed for hardware and networks that are no longer available, on their systems. They couldn’t expose their customers to the Internet of 2019 with a client from 2004. The cost/benefit analysis was likely very conclusive: it’s cheaper and safer to just throw programmers at the problem.</p>

<h2 id="eventual-consistency">Eventual Consistency</h2>

<p>Throw programmers at the problem they did. A year passed between the initial announcement and the first playable demo at Blizzcon 2018. I presume they waited until they knew how to solve the problems before announcing the project, so it’s reasonable to say that by late 2017 they had chosen to re-implement Classic’s logic on the modern architecture and had learned how to import the old data structures. This means that the year between Blizzcon 2017 and 2018 was almost certainly spent entirely on game systems.</p>

<p>This alone was not enough. Famously, Classic had a lengthy closed beta period and several public stress tests where players were finding bugs left and right. Many of them were indeed regressions introduced by the modern engine. Especially interesting were the defects found that were also defects in the original game. The above-linked panel from Blizzcon 2018 demonstrates one of them: a missing texture on a lamp, defective for 15 years—long enough that, like most unfixed bugs, it became a feature. Of course, peering through the fog of time is hard for even the best of us, so many of these reported “bugs” were actually intended behavior.</p>

<p>Indeed, Blizzard could not have simply left the recreation of 15-year-old logic to fading memories and speculation, so they created an internal version of the game as it truly existed in 2006. At a small scale, without being publicly exposed, they were able to use the old 1.12 server as a rubric of sorts. It served as the authoritative guide for the nuances of the old systems.</p>

<p>In a very intelligent move, Blizzard used a combination of internal testing, player testing, and the original systems to check their re-implementation. This allowed them to use their modern infrastructure for the live game, while still eventually ending up at a ruleset very close to 1.12, even down to reproducing bugs and long-surpassed architectural limitations. While I have no doubt that more bugs will be found when the game goes live on August 26th, this “eventual consistency” approach is, in hindsight, certainly the correct approach for a company like Blizzard.</p>

<h2 id="grats">Grats!</h2>

<p>So, congratulations are in order. Blizzard has managed to satisfy two camps of people notoriously difficult to satisfy: ops teams and players. While it remains to be seen how stable the launch is, the beta and stress test periods have already shown us just how good a job they’ve done with the authenticity of the systems. What seemed to me like an absurdity turned out to be the perfect solution. I think there’s a lesson to be learned in there somewhere.</p>

<p>Also, to whomever spoke up at Blizzard and pushed hard for the investment into Classic, I specifically offer you my thanks. Without you, the company may still be telling us just how much we think we do, but we don’t.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="gaming" /><category term="warcraft" /><category term="followup" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">The Inertial Deathblow</title><link href="/blog/2018/06/12/the-inertial-deathblow.html" rel="alternate" type="text/html" title="The Inertial Deathblow" /><published>2018-06-12T23:20:15+00:00</published><updated>2018-06-12T23:20:15+00:00</updated><id>/blog/2018/06/12/the-inertial-deathblow</id><content type="html" xml:base="/blog/2018/06/12/the-inertial-deathblow.html"><![CDATA[<p>In my career up to this point, I’ve had the pleasure (and sometimes displeasure) of working with a huge swath of individuals from all walks of life. I’ve worked with fresh-faced kids straight out of college with a ton of passion; grizzled veterans; smoke-blowers with few actual skills; flighty wizards who could fix anything in a day, if you could just get them to focus—the list goes on. However, nothing makes me more sad than the folks so bogged down in their own inertia that they’ve lost their ability to be creative. Here’s how brainstorming sessions go with these people:</p>

<blockquote>
  <p><em>You:</em> “Let’s consider using Docker for this service.” <br />
<em>Them:</em> “No, we can’t do that.” <br />
<em>You:</em> “Why not?” <br />
<em>Them:</em> “Because, we’ve never done that before. Besides, Docker isn’t proven technology.”</p>
</blockquote>

<p>It makes me want to pull my hair out when someone dismisses an idea out of hand like this. Solving problems in the software industry is all about choosing the right tool for the job. Innumerable factors go into that choice, but you <em>must</em> be willing to consider any solution, or you’re not doing your due diligence. To be clear, I’m not trying to dismiss the benefits of using familiar solutions, or rock-solid and time-tested tech stacks. It’s just… those are <em>considerations</em> that factor into the decision you make, not road blocks themselves.</p>

<p>Technology advances. New stacks have benefits that old ones lacked; after all, they were put together by someone to solve a problem. If it’s popular and well-supported, it’s solving that problem for a lot of people. True, you can solve pretty much any problem with any tech stack. There are shops out there supporting petabytes of data and thousands of users on top of ancient mainframes and COBOL. However, as a business grows, the problems grow and shift alongside it. Sometimes you have to give up the old comfortable tools and move on to something that is better suited to solving the problems you have <em>today</em> instead of the problems you had <em>five years ago</em>.</p>

<p>Let’s put aside some of the other reasons you should consider moving away from aging tech stacks, like the inability to hire anyone. I want to focus on this: if a company gets enough of those inertially-bound people in decision-making roles, it’s a deathblow for the business. The company’s problem-solvers are stuck in the past, which means the business can’t grow. Any attempts to grow are met with…</p>

<ol>
  <li><strong>Resistance</strong>. <em>“Our servers can’t handle that!”</em></li>
  <li><strong>Failure</strong>. <em>“We wrote 20,000 lines of code over the weekend to reinvent the wheel, but there were some bugs, so it didn’t work at all and we were down for 4 hours.”</em></li>
  <li><strong>Sluggishness</strong>. <em>“I estimate about 6.5 months to complete this project. What do you mean our competitors did it in two weeks?! There’s no way!”</em></li>
  <li><strong>High cost</strong>. <em>“We’ll need to buy two new VM hosts so we can spin up the six new servers dedicated to this service. They’ll be idle 75% of the time, but we’ll really want all six servers at peak load, so we need six.”</em></li>
  <li><strong>Resignation</strong>, the worst one of all. <em>“Sure, whatever, we’ll get it done. Just hack some shit together and ship it. It doesn’t matter, the architecture is fundamentally broken anyway.”</em></li>
</ol>

<p>I wish I were making these up, but I’m drawing from (exaggerated) real-world experience here. At no point in these conversations did anyone else recommend making changes to the tech stack to strike a better balance between the available hardware and the current-day workload requirements. Any time I brought it up, it was shot down. Why? “We can’t do <em>that</em>. We’ve never done it before. It’s not proven technology.”</p>

<p>Innovation is the lifeblood of companies in competitive markets, and inertially-bound people are the enemy of innovation. Look, I get it. I really do. Change can’t happen overnight, and it’s often expensive in terms of manpower, capital expense, or both. But, if you don’t make room for it, you’re going to quickly end up as the last horseback cavalry unit in the era of unmanned drone strikes.</p>

<p>Anyway, enough ranting for now. Next time, I’m going to dive into a little more depth and talk about some of the causes of career inertia, how to avoid it yourself, and how to make those judgment calls on when to go with something new instead of leaning on an old stack. Spoiler alert: despite everything I just said, 90% of the time the correct answer <em>actually is</em> to build onto the old stack… sort of. You’ll see what I mean.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="quality" /><category term="development" /><category term="operations" /><summary type="html"><![CDATA[The story of how putting inertia-bound individuals into leadership spells doom for your team, and potentially your business as a whole.]]></summary></entry><entry><title type="html">The Quiet Death of the 9-to-5 Job</title><link href="/blog/2018/01/30/the-quiet-death-of-the-9-to-5-job.html" rel="alternate" type="text/html" title="The Quiet Death of the 9-to-5 Job" /><published>2018-01-30T07:49:44+00:00</published><updated>2018-01-30T07:49:44+00:00</updated><id>/blog/2018/01/30/the-quiet-death-of-the-9-to-5-job</id><content type="html" xml:base="/blog/2018/01/30/the-quiet-death-of-the-9-to-5-job.html"><![CDATA[<p>I was on LinkedIn the other day when one of the suggested articles caught my attention. It was titled “The 9-to-5 job has disappeared.” While it turned out to actually just be a summary of a single point of a Wall Street Journal article of much more substance, it got me thinking. I scrolled down into the comments and found a wonderful gem from a man with one of the most corporate BS-sounding job titles I’d ever heard:</p>

<figure>
  <blockquote>
    <span>&ldquo;</span>The 9-to-5 job has disappeared and depending on your role or vocation, that is OK.  If success measurements are based on outcomes, then the “time” you spend ‘at work’ shouldn’t be a measure.  On the other hand, if expectations are appropriately established on the front-end, burning out should not be the result.  There a few presumptions of reasonableness in my statement.<span>&rdquo;</span>
  </blockquote>
  <figcaption style="text-align:right;">&mdash; A "Client Partner/Trusted Advisor to C-Suite Executives"</figcaption>
</figure>

<p>Let’s put his <em>impressive</em> title aside and focus instead on the comment. This man—let’s call him Bob—suggests that the amount of “time” you spend ‘at work’ doesn’t matter if your success is measured by outcomes. To translate back into English, he’s saying that you shouldn’t be worried about your hours, you should be worried about <em>results</em>. Hours aren’t a consideration when you’re focused on the “outcomes”—i.e. getting stuff done. Bob implies that so long as it’s understood you’ll be working a crap-load, then you’ve got no reason to get burnt out by long hours.</p>

<p>Now, I can sit here all day and break apart his argument, but I want to put emphasis on the idea that ol’ Bobby here expects you to not burn out. After all, your expectations were established along with your success metrics. You <em>are</em> focused on outcomes, right? In his model of the workplace, burnout is caused by coming up short in your “success measurements.” Clearly, given his wording and punctuation choices, he doesn’t put a lot of weight behind the “time” you spend ‘at work’.</p>

<h2 id="the-problem-with-results-as-the-only-metric">The Problem with Results as the Only Metric</h2>

<p>As a software engineer, I work in an industry notorious for long hours, no overtime pay, scope creep, and poor planning. Let me tell you, Bob: burnout happens whether your expectations were set properly or not. It occurs even if your success measurements are overflowing with outcomes—it’s not the same as being dissatisfied. You burn out because you’ve stressed yourself to the limit repeatedly. You burn out because you forget what your wife’s face looks like. You burn out because you catch yourself working out how to sleep and shower in the office to save time. To put it another way, you burn out because you consistently have too much work to do in too little time.</p>

<p>Especially in software, the 9-to-5 job died a quiet death a long time ago. There are too many eager kids willing to burn the candle at both ends to prove themselves, because programming is their passion; too many old neckbeards living at work because they don’t have a life to go home to. These people set the expectation. Putting in extra time goes from being a thing you do when you’re dedicated, passionate, and invested in the project/product/business, to being the norm. It stops sounding like “that Nathan is a hell of a worker” and becomes “look at <em>that guy</em> going home at 5 while the rest of us are here until 7 or later every day.” Before you know it, <em>not</em> working 60-hour weeks is getting counted against you in your annual evaluations.</p>

<p>Combine an over-working culture with project mismanagement and you’ll have the murder weapon used to kill the 9-to-5 job. I’ll be talking at length in a later article about how mismanagement leads to burnout. However, it’s not hard to see why only working 8 hours a day isn’t sufficient anymore. We live in a culture where “you’re lucky to even have a job.” There are limited advancement opportunities and tons of competition for your position. Employers are keen to encourage this; after all, it keeps everyone on their toes.</p>

<p>Call me old-fashioned, but I believe that people are more productive and put out higher-quality work when you give them some time to have a life outside their job. Some people live to work, others work to live. However, like most things in life, the sweet spot is a balance somewhere between the two. You enjoy your job and are passionate about it, but you go home and mentally pack it away for the rest of the night so that you can be the master of your own destiny for a while. Maybe you’ll squander that time watching Netflix and eating popcorn or maybe you’ll invest it into new skills, hobbies, or your family. Either way, it’s time where your only obligations are to yourself and your loved ones. This time helps you decompress and subconsciously analyze the problems you faced at work. It leads to the fabled “ah-hah!” moments in the shower where your brain has finally put the pieces together for you. Without this time, you’re going to run out of gas eventually.</p>

<h2 id="hes-dead-jim">He’s Dead, Jim</h2>

<p>So, Bob, you’re right. The 9-to-5 job has died, and we were all so focused on our <em>outcomes</em> that we didn’t notice or care. The worst part is, we’ve bred a culture where humble-bragging about how much you work is a show of dedication. If you truly enjoy working yourself down to the bone (either literally or proverbially), you’ve got my respect. However, for whatever reason—let’s avoid political, economic, or ideological arguments here—this has now become the standard. Anything less is insufficient. I can’t foresee it getting better any time soon, but the best way to improve the culture is to not participate in it. Don’t let your <em>success measurements</em> cloud your perception of what’s truly important to <em>you</em>. And most importantly, don’t let guys like Bob tell you otherwise.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="quality" /><category term="development" /><category term="crunch" /><summary type="html"><![CDATA[Reading this article will synergize your success metrics with your outcomes.]]></summary></entry><entry><title type="html">Small Update on Topre</title><link href="/blog/2017/06/05/small-update-on-topre.html" rel="alternate" type="text/html" title="Small Update on Topre" /><published>2017-06-05T05:44:15+00:00</published><updated>2017-06-05T05:44:15+00:00</updated><id>/blog/2017/06/05/small-update-on-topre</id><content type="html" xml:base="/blog/2017/06/05/small-update-on-topre.html"><![CDATA[<p><img src="/images/realforce87u.jpg" alt="Realforce 87u" /></p>

<p>On a whim after I wrote my Realforce RGB article, I decided to sell that keyboard and pick up a <a href="https://www.amazon.com/Realforce-87U-Tenkeyless-55g-Black/dp/B00MV84Y2Y">Realforce 87u 55g</a> instead. I kept thinking that my experience with the Realforce RGB <em>couldn’t</em> be the norm for Topre keyboards. They’re so loved by many mechanical keyboard enthusiasts… surely they’re not all just misguided? After doing a bit of YouTube research into the sound profile of other Topre boards, I decided to give cup rubber another chance. I’m glad I did.</p>

<p>I don’t have time to write a full review of the Realforce 87u, so I’ll just give some quick impressions. Probably the best praise I can give the board is this: it feels exactly like I expected Topre to feel. It’s soft, tactile, and gives a solid “thock” on every depression while still remaining fairly quiet—overall it’s just a joy to type on.</p>

<p>I still have some minor concerns about the quality of the keyboard, given the price point. The case is still plastic: a top and bottom shell clamped and screwed together. There is a seam between the two halves, much like the plastic case of the Unicomp Ultra Classic I reviewed a while ago, but it’s nowhere near as flimsy as that. The cable feels cheap and weak—worse than on the Realforce RGB—but at least the cable gutter works properly. Despite all of these shortcomings, the Realforce 87u still manages to feel much more like a premium keyboard than the Realforce RGB did. There’s no flex, no creak or squeak, and the switches are largely consistent. The upstroke noise that bothered me so much on the RGB is almost completely absent on the 87u. Overall I have a much more positive opinion of this board than the other.</p>

<p>Once again, however, the elephant in the room rears his ugly mug. Not only is the Realforce 87u expensive, it’s <em>more</em> expensive than the RGB despite the smaller form factor and complete lack of backlight. Topre is an amazing switch when well-executed, there’s no denying it. At $270+ it just can’t compete. The switches themselves are great, but these keyboards are built like $50 Microsoft rubber dome keyboards: very sturdy and reliable, but still plastic and with some annoyances one shouldn’t have to wrangle at over $200.</p>

<p>Ultimately, the decision to buy Topre comes down to whether you can justify spending so much on what are, in my opinion, quickly diminishing returns. For $200 you can buy some of the highest quality keyboards ever manufactured, and at $300 you can get a <a href="https://1upkeyboards.com/">custom-built mechanical keyboard</a> made from carbon fiber plates and $1-per-unit Zealios switches. Topre promises an unparalleled tactile typing experience, but without the rest of the package to back it up, it’s hard to recommend such a barebones keyboard.</p>]]></content><author><name>Justin O&apos;Conner</name></author><category term="blog" /><category term="keyboard" /><category term="review" /><category term="topre" /><category term="followup" /><summary type="html"><![CDATA[A quick revisit to the land of cup rubber with the Realforce 87u.]]></summary></entry></feed>