Project needs dictated a quick trip out east, so I squeezed in a stop at J's place for some video games, brew tasting, and quality time with the fam.
The NFC championship started as I got to the airport and the Patriots-Chiefs game finished as I was getting to J's place. The games were exciting enough that the dude next to me took out a light while celebrating a Goff interception.
On the flight I finished off the last chapter of Fire Emblem Fates/Revelation. The chapter was, of course, a two-parter with a multi-stage final boss. I guess my squad build was good enough; I was able to vanquish Anankos without any retries.
Compared to previous endgames, this one was actually somewhat easy. My gold production engine ensured I could raise the level cap on a handful of characters, putting them well above anything that spawned in. And while grinding levels yields diminishing returns, the daily refresh of challenge levels throughout the campaign meant I could get into the endgame with topped out characters.
The endgame segments were nothing too exciting; a simple map with constantly respawning enemies that you have to fend off while plinking away at Anankos. Naturally, most skills are completely useless against the boss - he can't be insta-killed, poisoned, attribute-sealed, etc. Anankos hit hard enough to two-hit all but my tanks, so they were on damage dealing while everyone else performed crowd control. Astra - five sequential hits at partial damage - was especially clutch as it did plenty of regular damage and built up defensive shields for paired units.
The story ended with the usual "everybody's happy", but this one is super happy because the kingdoms you choose between in the first two branches instead make peace with each other.
After an uneventful landing and beltway drive, I got to J's place to start on the Divinity: Original Sin endgame that we had queued up from our previous remote session. We had actually witnessed most of the game's plot in our final few hours of play time, having neglected the giant bloodstone quest that unlocks 90% of the rooms in Homestead/End of Time.
To our dismay, the Homestead areas we opened would have been rather useful if we had been able to access them when the designers intended (though they, of course, clearly also intended for you to be able to shoot yourself in the foot). High level vendors, respec stations, an explosion of miniature imps... all would have been very helpful throughout the playthrough. Then again, we didn't really have much difficulty with any part of the game that didn't involve extremely obscure puzzles.
So we quickly caught up on the plot, which was somewhat disappointing after the quirky and inventive side quests sprinkled throughout the game. Two ancient generals (surprisingly reincarnated as the main characters) failed to defend an artifact and released some sort of dragon upon the world. Pretty standard fantasy fare.
After reading that the endgame was about an hour, we fought our way through several hours of pre-bosses and puzzles to get to the mighty void dragon.
The void dragon battle was pretty epic; much like Fire Emblem, it featured an extremely tough boss and a continuous stream of minions. It was nice to have a final fight that required burning most of the items we'd saved up over the course of the game. It was not an easy fight, but in the end we prevailed...
... on our second try. Deep into the first attempt, the dragon wailed on an NPC ally and forced an instant failure that kicked off a strange and probably-not-as-clever-as-originally-thought game over screen. Windows, smh.
Update from September of 2024: omg I just realized this was probably a play on the DOS acronym collision. I'm not sure that makes it any better, but it makes a bit more sense.
Our tasting adventures took us to Rocket Frog and Crooked Run. I was pretty optimistic; both breweries had board games available and Rocket Frog even had the Pipeline Masters on. The lineups looked good - IPAs plus a few good-looking porters and such. Crooked Run had a taco shop inside the tasting room with some pretty decent street tacos. On the downside, neither of us much liked any of the beers we tried. There were too many hazy brews and the IPAs tasted flat and flavorless. The stouts and porters weren't terrible, but they were just heavy enough to not be worth more than a 4oz pour. Suffice it to say, rather than filling the growler we instead stopped at Wegmans for mix and match sixers.
The work portion of the trip was pretty much the usual - we capped a nine-hour meeting at Black Flag.
We've stepped up our equipment game with the unlockables that you get throughout the main quest. The helo with rocket pods is particularly useful.
Like the last installment, the game's pace seems just about right. There are just enough outposts to liberate, side quests to do, and main quest segments to push through. And with a healthy dose of unlockables and collectibles, you always feel like you're doing something worthwhile, even if it's strafing elk.
PUBG
The squad got its first Vikendi chicken dinner a little while back. We're still getting the occasional royale session in.
Everyone's Gone to the Rapture
Since it was a free ps+ game, I downloaded Everyone's Gone to the Rapture. I thought it'd be a good one to play with Jes on a cold weeknight; a story-driven game that doesn't demand dual stick mastery. It certainly was a dialogue-driven walking simulator with some eerie qualities. But it was a bit slow/sparse to really keep our attention.
Deep Rock Galactic
The lolbaters squad is doing some Deep Rock Galactic - a (still in development?) co-op space-dwarfing game.
Basically, you drop in to a cavern and explore destructible terrain filled with hostile creatures to ultimately accomplish some sort of mining objective. Once you've finished your task, you scram to the drop pod. Stuff you pick up on the way helps with equipment and leveling.
There are character classes with distinct combat abilities and tools that help you get around the mines.
Inevitably, an alien swarm is unleashed on you and it becomes a full-on shooter for a couple minutes.
Ubermosh
CattleDecapitation's xmas Steam gift to the 'baters was Ubermosh, a simple arcadey survival shooter with great music.
You have a gun and use a sword to slice bullets. Yup.
There are a few other classes as well.
Viscera Cleanup Detail
Mark's gift was basically what happens in the aftermath of an ubermosh.
I guess if people play Farming Simulator and Flight Simulator and Desert Bus, there's a niche here. For most of us, it's just work and trolling your squadmates.
Boardgames
We finally finished Charterstone. It was a pretty amazing experience with a rather dark ending - at least, the ending we got. Through heavy minmaxing, I managed to edge Corey out by a few VPs for the victory. Hooray, I am [spoilers]!
Gloomhaven has been on a bit of a hiatus, so we're all jonesing to get back into it. I'm making the Quartermaster class work well, I think.
Outings
In spite of all the travel and cozying up by the gaming rigs, we've gotten out a couple times. Jessica took me to David Sedaris, and there was a thing at Second Chance.
Renovation and redecoration
We upgraded the bed in the master and had the bedrooms recarpeted.
Ear surgery, take two
After draining and waiting on Kaf's ear, it was nearly back to normal when it suddenly filled up with blood again.
So we took him for urgent surgery and sutures. Fingers crossed that it's a permanent fix.
Back at DL4J
I kind of meandered and stalled a bit after my first foray into the Java Deep Learning library DL4J. One of the main blocking issues was matrix math exceptions way down in the ND4J math library that I assumed were due to my network not being set up correctly. The stack traces were pretty intimidating and I didn't have the source loaded into my IDE. I stumbled upon the answer when trying a fresh problem - some critical portion of the code was simply not thread safe. Or so I ascertained from toggling my asynchronous GUI updater. Huh. Could have avoided a lot of headscratching.
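Presumably the fix is just to never touch the network from two threads at once. A minimal sketch of that confinement is below; the ModelWorker wrapper and its method names are mine, not DL4J's.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Consumer;

    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.dataset.DataSet;

    // Hypothetical wrapper: confine every call into the network to a single thread
    // so an asynchronous GUI updater can't race the training loop down in ND4J.
    public class ModelWorker {
        private final ExecutorService modelThread = Executors.newSingleThreadExecutor();
        private final MultiLayerNetwork net;

        public ModelWorker(MultiLayerNetwork net) {
            this.net = net;
        }

        // Training steps queue up behind each other on the model thread.
        public void fitAsync(DataSet batch) {
            modelThread.submit(() -> net.fit(batch));
        }

        // The GUI asks for a snapshot instead of calling net.output() itself.
        public void previewAsync(INDArray features, Consumer<INDArray> onReady) {
            modelThread.submit(() -> onReady.accept(net.output(features)));
        }
    }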
Possibly repeating myself here, but I was experimenting with the library partially to have fun and partially to come up with a neat graphics processing tool. A few examples:
Define a network to train on input/output images to apply a photography postprocessing technique - hue/contrast/sharpness/vignette. Maybe even after classifying the image type. The beauty here is that I have decades of input data (raw photos and postprocessed results) - see the data-loading sketch after this list.
Going back to the neural style transfer stuff, there is a use case for campy artistic effects. E.g. I recently edited an image set for slideware that amounted to contrast/saturation changes and a subtle mix of filters. To do this automatically would be amazing.
Other interesting stuff like automatic grain reduction or moving around the color space - colorizing images, recoloring images, intelligent balance of RGB to monochrome.
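For the first example, the data prep is conveniently simple: each training sample is just a raw photo paired with the postprocessed version I already produced by hand. A rough sketch of that pairing using DataVec's NativeImageLoader follows; the PhotoPairLoader name and the 256x256 resize are placeholders, not decisions.

    import java.io.File;
    import java.io.IOException;

    import org.datavec.image.loader.NativeImageLoader;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.dataset.DataSet;
    import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;

    public class PhotoPairLoader {

        // One training example: the raw photo is the input, the hand-edited
        // photo is the target output.
        public static DataSet loadPair(File rawPhoto, File editedPhoto) throws IOException {
            NativeImageLoader loader = new NativeImageLoader(256, 256, 3); // resize to a fixed size
            INDArray input = loader.asMatrix(rawPhoto);    // shape [1, 3, 256, 256]
            INDArray target = loader.asMatrix(editedPhoto);

            // Scale pixel values from [0, 255] into [0, 1] in place.
            ImagePreProcessingScaler scaler = new ImagePreProcessingScaler();
            scaler.transform(input);
            scaler.transform(target);

            return new DataSet(input, target);
        }
    }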
I started again with the Mona Lisa drawer (x, y -> rgb), expanding the input to an n-by-n grid that would train on an output image. The network would quickly memorize the image and produce it regardless of input, as one might expect. I then threw randomized snippets of images and noise at an n-by-n input, hoping that I could train a network to be a passthrough - produce whatever image was given to it - as a baseline for making graphical mods.
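For reference, the single-pixel version is just a tiny fully connected network: two inputs (x, y) and three outputs (r, g, b). Something like the sketch below, give or take the layer sizes, and assuming a recent DL4J release (the updater API has shifted between versions):

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.deeplearning4j.nn.weights.WeightInit;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Nesterovs;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class MonaLisaDrawer {

        // (x, y) pixel coordinates scaled to [0, 1] in, (r, g, b) in [0, 1] out.
        public static MultiLayerNetwork build(double learningRate, double momentum) {
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .seed(42)
                    .weightInit(WeightInit.XAVIER)
                    .updater(new Nesterovs(learningRate, momentum))
                    .list()
                    .layer(0, new DenseLayer.Builder().nIn(2).nOut(64)
                            .activation(Activation.RELU).build())
                    .layer(1, new DenseLayer.Builder().nIn(64).nOut(64)
                            .activation(Activation.RELU).build())
                    .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                            .nIn(64).nOut(3)
                            .activation(Activation.SIGMOID) // keeps the output in [0, 1]
                            .build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            return net;
        }
    }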
While I never got close to a passthrough (more on that in a second), it was a crash course in network configuration, which I had previously been simply lifting from examples. I found that learning rate and momentum are serious business and depend on input/network size. You can easily underflow your weight values and basically dead-end your training. I frequently had to adjust these values by a factor of 10 and visually check for changing/learning output. Loss function, too, has a significant impact on network behavior.
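In practice the tuning loop was a manual grid search. In code it amounts to something like this fragment, leaning on the hypothetical MonaLisaDrawer.build helper from the sketch above, with trainingData standing in for whatever DataSet is in play:

    // Step the learning rate by factors of 10 and eyeball which setting actually
    // produces changing, non-degenerate output instead of a dead-ended network.
    double[] learningRates = {1e-1, 1e-2, 1e-3, 1e-4, 1e-5};
    for (double lr : learningRates) {
        MultiLayerNetwork net = MonaLisaDrawer.build(lr, 0.9);
        for (int epoch = 0; epoch < 50; epoch++) {
            net.fit(trainingData);
        }
        System.out.println("lr=" + lr + " score=" + net.score());
        // ...then render a preview image from net.output(...) and look at it.
    }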
Activation functions are important and application specific. In the most general sense, they tolerate positive and negative numbers differently. I eventually went from representing my color space with [-1.0, 1.0] to [0.0, 1.0].
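Concretely, the choice shows up in the output layer: TANH yields values in [-1.0, 1.0], SIGMOID in [0.0, 1.0], so the output activation and the color encoding have to agree. Using the same imports as the network sketch above:

    // The output activation has to match the color representation:
    //   Activation.TANH    -> values in [-1.0, 1.0]
    //   Activation.SIGMOID -> values in [0.0, 1.0]
    OutputLayer rgbOut = new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
            .nIn(64).nOut(3)
            .activation(Activation.SIGMOID) // paired with a [0.0, 1.0] color encoding
            .build();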
After many unsuccessful attempts, I found out I was aiming to create an autoencoder:
An autoencoder is often trained using one of the many variants of backpropagation (such as conjugate gradient method, steepest descent, etc.). Though these are often reasonably effective, there are fundamental problems with the use of backpropagation to train networks with many hidden layers. Once errors are backpropagated to the first few layers, they become minuscule and insignificant. This means that the network will almost always learn to reconstruct the average of all the training data. Though more advanced backpropagation methods (such as the conjugate gradient method) can solve this problem to a certain extent, they still result in a very slow learning process and poor solutions. This problem can be remedied by using initial weights that approximate the final solution. The process of finding these initial weights is often referred to as pretraining.
Aha, I was indeed recreating the average of all training data. So maybe an autoencoder implementation will give me an image recreation capability I can tweak to accomplish graphics processing.
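If I go down that road, a first cut would probably be a plain dense autoencoder over flattened patches, trained to reproduce its own input. A sketch with the patch size and layer widths as arbitrary placeholders (same imports as the network configuration above):

    // Flatten an n-by-n RGB patch, squeeze it through a bottleneck, and train the
    // reconstruction against the input itself (the input doubles as the label).
    int patchPixels = 32 * 32 * 3;

    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(42)
            .weightInit(WeightInit.XAVIER)
            .updater(new Nesterovs(1e-3, 0.9))
            .list()
            .layer(0, new DenseLayer.Builder().nIn(patchPixels).nOut(512)
                    .activation(Activation.RELU).build())   // encoder
            .layer(1, new DenseLayer.Builder().nIn(512).nOut(128)
                    .activation(Activation.RELU).build())   // bottleneck
            .layer(2, new DenseLayer.Builder().nIn(128).nOut(512)
                    .activation(Activation.RELU).build())   // decoder
            .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                    .nIn(512).nOut(patchPixels)
                    .activation(Activation.SIGMOID)         // pixels in [0.0, 1.0]
                    .build())
            .build();

    MultiLayerNetwork autoencoder = new MultiLayerNetwork(conf);
    autoencoder.init();
    // Training step: autoencoder.fit(new DataSet(patch, patch));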