Review | 2022.08.24

Virigin Galactic VSS unity

This isn't actually about space flight simulators but rather Switch games I picked up for the Hawaii downtime. Both happen to have a sci-fi setting.

Crying Suns

Nintendo Switch Crying Suns load screen

I was excited about Crying Suns, a roguelike tactics game set in a mysterious (clone amnesia) sci-fi universe.

Nintendo Switch Crying Suns sector map

The overworld is standard rougelike fare; choose a path with varying risk/reward.

Nintendo Switch Crying Suns Neo-N hypercube

The mood and the mystery aren't hindered by the pixel graphics.

Nintendo Switch Crying Suns Mother dialogue

Crying Suns features an episodic, cohesive story with random/procedural battles and encounters serving as window dressing.

Nintendo Switch Crying Suns Cikk alien cell

The encounters run the gamut from simplistic events to mini-stories. They're interesting but the repetition and lack of unique rewards quickly becomes tedious.

Nintendo Switch Crying Suns expedition menu

One of the encounter types is called Expedition. You choose a team lead based on two competencies and send them on their way.

Nintendo Switch Crying Suns expedition sandworm

Then you watch in silent horror as your team loots stuff and maybe gets devoured by a sandworm. Expedition challenge resolution is all luck-of-the-draw; each obstacle must match your leader's skill for a successful resolution. These sequences look neat, but ultimately the lack of player input makes them ho-hum.

Nintendo Switch Crying Suns ship combat hex grid

The main game mechanic is the space battles. From the screenshots it looks like space chess; there are hexes with combatants and lots of HUD widgetry. Alas, Crying Suns space battles are tedious and not particularly tactical.

Combat boils down to this:
It's not a bad recipe, but the execution falls flat.

Despite having a hundred-or-so hexes to maneuver, the chessboard is pretty meaningless; capital ships don't move and fighters simply need to fly directly to their target. Even the roshambo weapon triangle is broken by other game elements. As best I can tell, when enemy fighters are defeated they do splash damage, forcing you to recall the victorious units for repair. As such, the fighter battle tug-of-war that's already painfully linear becomes stagnant trench warfare. Upgraded fighters have special abilities, but unit control is so clumsy it's difficult to use them effectively.

There doesn't even seem to be a limit to fighter reserves so you can't even play for a victory by attrition.

Nintendo Switch Crying Suns battle victory

So when you finally vanquish your opponent, it's really nice to see their ship turn to space dust. But thinking about the slog of the next battle makes pacifism look really good.

Nintendo Switch Crying Suns Kaos class battleship unlock

Crying Suns offers the standard replay motivators: additional story, unlockables, challenge. But it wasn't enough to overcome the cumbersome battle mechanics or lack of other compelling features.

Griftlands

Nintendo Switch Griftlands combat deck hand

On J's recommendation, I downloaded the deckbuilder roguelike Griftlands. This game somehow evaded all of the Slay The Spire-like lists I found online. It's a shame that Griftlands doesn't get much publicity, it's very good.

Nintendo Switch Griftlands bar characters

The setting of Griftlands is like Star Wars crossed with Borderlands; a futuristic world of lasers and aliens where everything is a shade of skeezy. The bartenders, the government, the megacorp, and the mercenaries all have some sort of agenda.

Nintendo Switch Griftlands map new party member admiralty

Where Slay the Spire and Crying Suns and Nowhere Prophet let you choose your path from A to B, Griftlands presents a fixed map with landmarks that appear as the story progresses. From a gameplay perspective, the difference is that shops and healing are typically available between battles.

Nintendo Switch Griftlands dialogue erchins Toeren

Griftlands tells its story by way of brief conversations with quirky and/or hilarious dialogue.

Nintendo Switch Griftlands Smith snark thoughts and prayers smelly butt

The game has one starter character and two unlockables. The third one, Smith, is particularly fun.

Nintendo Switch Griftlands battle shroog

Conversations - particularly ones where you call someone's butt smelly - often result in combat that is not unlike other deckbuilders.

Nintendo Switch Griftlands combat savage bite

Action points, shield, allies, attack values, buffs/debuffs, special abilities - it's all there. Each of the main characters has a unique play mechanic that changes how you play but doesn't subvert any important game mechanics.

Card synergy/minmaxing? Less severe than Slay the Spire, more than Nowhere Prophet.

Nintendo Switch Griftlands negotiation dumo

Griftlands uniquely has a persuasion battle mode that is often an alternative to physical combat. Persuasion gameplay is basically combat, but with its own deck and a totally different set of rules. While I didn't like the mechanics quite as much as regular combat, it's refreshing to have a parallel battle mode with its own set of traits and cards.

Nintendo Switch Griftlands very convincing card negotiation

As is common to the genre, battle rewards give you the option to expand/dilute/shape your deck.

Nintendo Switch Griftlands graft Three Fingers Janglers Roller Drum

Powerful equippable items are a staple of the genre, Griftlands calls them Grafts. In keeping with the social tone of the game, you can also gain passive bonuses by endearing yourself to other characters. Typically this is accomplished by completing a sidequest/encounter and then a giving them a pricey gift. Conversely, characters that hate you bestow an inescapable debuff.

thumbnail Nintendo Switch Griftlands end of day summary thumbnail Nintendo Switch Griftlands item unlocks evoke bloody mess thumbnail Nintendo Switch Griftlands party lick the goo Smith
thumbnail Nintendo Switch Griftlands i will consume me and the memory vix of hesh
thumbnail Nintendo Switch Griftlands Smith unlock screen thumbnail Nintendo Switch Griftlands Smith ending beach

A few more things in brief:
Deckbuilding goes co-op

Across the Obelisk roguelike deckbuilder lava boss combat

Stay tuned for the kilroy take on the deckbuilding roguelike that at last gives us co-op: Across the Obelisk.


Hey, you. You're finally awake. You were trying to cross the border, right? Walked right into that Imperial ambush, same as us, and that thief over there.

?





Infopost | 2022.08.19

Convolutional block cocktail napkin diagram concatenate layer

After some interesting reads, I implemented a convolution+pooling block inspired by ResNet. It looks like this:

An w-by-h image is convolved (with normalization and droput) i-times, then the maxpool and average pool are concatenated with an average pool of the input to produce kernel_count + 3 output feature maps of size w/2-by-h/2.


Historically I've thrown together models on the fly (like a lot of example code). Having (somewhat erroneously, read on) decided that batch normalization and dropout are good to sprinkle in everywhere, I combined them all into a single subroutine that can be called from main(). It also forced me to name my layers.

__________________________________________________________________________
Layer (type)                  Output Shape         Param #  Connected to
==========================================================================
__________________________________________________________________________
conv3_0_max_0_conv (Conv2D)   (None, 256, 256, 16) 448      input[0][0]
__________________________________________________________________________
conv3_0_avg_0_conv (Conv2D)   (None, 256, 256, 16) 448      input[0][0]
__________________________________________________________________________
conv3_0_max_0_norm (BatchNorm (None, 256, 256, 16) 64
conv3_0_max_0_conv[0][0]
__________________________________________________________________________
conv3_0_avg_0_norm (BatchNorm (None, 256, 256, 16) 64
conv3_0_avg_0_conv[0][0]
__________________________________________________________________________
conv3_0_max_0_drop (Dropout)  (None, 256, 256, 16) 0
conv3_0_max_0_norm[0][0]
__________________________________________________________________________
conv3_0_avg_0_drop (Dropout)  (None, 256, 256, 16) 0
conv3_0_avg_0_norm[0][0]
__________________________________________________________________________
conv3_0_max_1_conv (Conv2D)   (None, 256, 256, 16) 2320
conv3_0_max_0_dropout[0][0]
__________________________________________________________________________
conv3_0_avg_1_conv (Conv2D)   (None, 256, 256, 16) 2320
conv3_0_avg_0_dropout[0][0]
__________________________________________________________________________
conv3_0_max_1_norm (BatchNorm (None, 256, 256, 16) 64
conv3_0_max_1_conv[0][0]
__________________________________________________________________________
conv3_0_avg_1_norm (BatchNorm (None, 256, 256, 16) 64
conv3_0_avg_1_conv[0][0]
__________________________________________________________________________
conv3_0_max_1_drop (Dropout)  (None, 256, 256, 16) 0
conv3_0_max_1_norm[0][0]
__________________________________________________________________________
conv3_0_avg_1_drop (Dropout)  (None, 256, 256, 16) 0
conv3_0_avg_1_norm[0][0]
__________________________________________________________________________
conv3_0_dense (Dense)         (None, 256, 256, 3)  12       input[0][0]
__________________________________________________________________________
conv3_0_maxpool (MaxPool2D)   (None, 128, 128, 16) 0
conv3_0_max_1_dropout[0][0]
__________________________________________________________________________
conv3_0_avgpool (AvgPooling   (None, 128, 128, 16) 0
conv3_0_avg_1_dropout[0][0]
__________________________________________________________________________
conv3_0_densepool (AvgPool    (None, 128, 128, 3)  0
conv3_0_dense[0][0]
__________________________________________________________________________
conv3_0_concatenate (Concat   (None, 128, 128, 35) 0
conv3_0_maxpool[0][0]
                                                            conv3_0_avgpool
                                                            [0][0]
                                                            conv3_0_densepo
                                                            ol[0][0]
__________________________________________________________________________

Due diligence

Last month I mentioned concatenate layers and an article about better loss metrics for super-resolution and autoencoders. Earlier this month I posted some samples of merging layers. So I decided to put those to use.

Machine learning deep superresolution upscaling

The author of the loss article (Christopher Thomas BSc Hons. MIAP, whom I will refer to as 'cthomas') published a few more discussions of upscaling and inpainting. His results (example above) look incredible. While an amateur ML enjoyer such as myself doesn't have the brains, datasets, or hardware to compete with the pros, I'll settle for middling results and a little bit of fun.

This is part of a series of articles I am writing as part of my ongoing learning and research in Artificial Intelligence and Machine Learning. I'm a software engineer and analyst for my day job aspiring to be an AI researcher and Data Scientist.

I've written this in part to reinforce my own knowledge and understanding, hopefully this will also be of help and interest to others. I've tried to keep the majority of this in as much plain English as possible so that hopefully it will make sense to anyone with a familiarity in machine learning with a some more in depth technical details and links to associates research.

The only thing that makes cthomas's articles less approachable is that he uses Fast AI, a tech stack I'm not familiar with. But the concepts map easily to Keras/TF, including this great explanation of upscaling/inpainting:

To accomplish this a mathematical function takes the low resolution image that lacks details and hallucinates the details and features onto it. In doing so the function finds detail potentially never recorded by the original camera.

One model to rule them all

Between at-home coding and coursework, I've bounced around between style transfer, classficiation, autoencoders, in-painters, and super-resolution. I still haven't gotten to GANs, so it was encouraging to read this:

Super resolution and inpainting seem to be often regarded as separate and different tasks. However if a mathematical function can be trained to create additional detail that's not in an image, then it should be capable of repairing defects and gaps in the the image as well. This assumes those defects and gaps exist in the training data for their restoration to be learnt by the model.

...

One of the limitations of GANs is that they are effectively a lazy approach as their loss function, the critic, is trained as part of the process and not specifically engineered for this purpose. This could be one of the reasons many models are only good at super resolution and not image repair.

Cthomas's model of choice is U-net:

Deep machine learning u-net diagram

Interestingly, this model doesn't have the flattened (latent) layer that canonical autoencoders use. I went the same direction in early experiments with much simpler models:

MeMe I wasn't sure about the latent layer so I removed that, having seen a number of examples that simply went from convolution to transpose convolution.

Next level losses

A loss function based on activations from a VGG-16 model, pixel loss and gram matrix loss

Instead of a more popular GAN discriminator, cthomas uses a composite loss calculation that includes activations of specific layers in VGG-16. That's pretty impressive.

Residual

Resnet block skip connection concatenate
Source. A ResNet convolution block.

U-net and the once-revolutionary ResNet architectures use concatenation to propagate feature maps beyond convolutional blocks. Intuitively, this lets each subsequent layer see a less-processed representation of the input data. This could allow deeper kernels to see features that would otherwise have been convolved/maxpooled away, but it simultaneously could mean there are fewer kernels to interpret the structures created by the interstitial layers.

Resnet 34 diagram residual comparison
Source. ResNet-34 and its non-residual equivalent.

I think there was mention of error gradients getting super-unuseful as networks get deeper and deeper, but that skip connections are a partial remedy.

Resnet 56 loss surface comparing skip connections
Source. An example loss surface with/without skip-connections. The 'learning' part of machine learning amounts to walking around that surface with your eyes closed (but a perfect altimeter and memory), trying to find the lowest point.

Channels and convolution

Conv2D diagram kernels tensors channels
Source. Click the source link for the animated version.

Something I hadn't ever visualized:
So the outermost convolutional layer sees one monochrome image per color channel and spits out a monochrome image per kernel. So if I understand it correctly, the RGB (or CMYK or YCbCr) data doesn't structurally survive past the first convolution. What's more, based on the above image, the output feature maps are created by summing the RGB output.

It seems like there would be value in retaining structurally-separated channel data, particularly for color spaces like HSV and YCbCr where the brightness is its own channel.

Training should ultimately determine the relevant breakdown of input data, but for some applications the model might be hindered by its input channels getting immediately tossed into the witch's cauldron.
Trying it out

My conv-pool primitive mentioned at the top of the post looks like this:

def conv_pool(input_layer, 
              prefix, 
              kernels, 
              convolutions=2,
              dimensions=(3,3),
              activation_max='relu', 
              activation_avg='relu'):
     '''
     Creates a block of convolution and pooling with max pool branch, avg
     pool branch, and a pass through.
      -> [Conv2D -> BatchNormalization -> Dropout] * i -> MaxPool ->
      -> [Conv2D -> BatchNormalization -> Dropout] * i -> AvgPool ->
      Concatenate ->
      -> Dense -> .................................... -> AvgPool ->
     '''
     max_conv = input_layer
     avg_conv = input_layer
     for i in range(convolutions):
         max_conv = conv_norm_dropout(max_conv, 
                                      prefix + '_max_' + str(i), 
                                      int(kernels / 2), 
                                      dimensions, 
                                      activation=activation_max, 
                                      padding='same')
         avg_conv = conv_norm_dropout(avg_conv,
                                      prefix + '_avg_' + str(i), 
                                      int(kernels / 2), 
                                      dimensions, 
                                      activation=activation_avg, 
                                      padding='same')
                                   
     max_pool = layers.MaxPooling2D((2, 2), name=prefix + '_maxpool')
     (max_conv)
     avg_pool = layers.AveragePooling2D((2, 2), name=prefix + '_avgpool')
     (avg_conv)
     dense = layers.Dense(3, name=prefix + '_dense')(input_layer)
     input_pool = layers.AveragePooling2D((2, 2), name=prefix +
     '_densepool')(dense)
     concatenate = layers.Concatenate(name=prefix + '_concatenate')
     ([max_pool, avg_pool, input_pool])
     return concatenate

The input is passes through three branches:
  1. One or more convolutions with a max pool
  2. One or more convolutions with an average pool
  3. Skipping straight to an average pool
These are concatenated to produce the output. Having seen feature maps run off the edge of representable values, I used BatchNorm everywhere. I also used a lot of dropout because overfitting.

Why did I use both max pool and average pool? I get the impression that a lot of machine learning examples focus on textbook object recognition problems and are biased toward edge detection (that may be favored by both relu and max pooling).

Convolutional neural network cocktail napkin diagram classifier
My updated classifier, each of those sideways-house-things is the 17-layer conv-pool block described above.

Autoencoder/stylizer

Autoencoder deep learning Keras posterize

I then used the conv-pool block for an autoencoder-ish model meant to learn image stylization that I first attempted here.

Cropping and colorize

Autoencoder deep learning Keras colorizer dog weimaraner
The 36-pixel -> 16-pixel model described above applied to an entire image.

When you're doing convolution, the edges of an image provide less information than everywhere else. So while it makes the bookkeeping a little harder, outputting a smaller image means higher confidence in the generated data.

In previous experiments I used downscaling to go from larger to smaller. I hypothesized that deeper layers would learn to ignore the original boundaries but still use that information in constructing its output. While that doesn't make a ton of sense when you think about a convolutional kernel sliding across a feature map, I tossed in some dense layers thinking they'd have positional awareness.

Anyway, the right answer is to let your convolutional kernels see the whole input image (and thereby be able to reconstruct features that would otherwise be cropped) and then use a Cropping2D layer to ensure you have a fully-informed output.

The format throughout:
And shown below are sample tiles from four training epochs, with a 36-pixel tile being used to generate a 16-pixel prediction:


The model more-or-less learned the hue/saturation shift with a slight bias towards green. There is some loss of clarity that could be remedied by mapping the output color information to the input brightness. The cropping trick allowed me to more easily tile the operations and thereby use a 36-pixel model on a 1000-pixel image. For comparison.

Autoencoder deep learning Keras colorizer
Click through for all three frames.

Other effects: posterize (HSV with mean squared error)


Posterize makes an image look like a comic or Che Guevara poster.

Autoencoder deep learning Keras posterize pubg car flip

Posterize HSV with mean absolute error


Switching out MSE for MAE.

Autoencoder deep learning Keras posterize turian

Posterize (HSV with mean absolute error)


Annnd huber (loss calcuation).

Autoencoder deep learning Keras posterize surfer

Other effects: edges


Can the network learn to find edges? The Adobe edge filter is mathematical and kind of noisy, so it'd be neat if machine learning could learn better edges from imperfect input.

Autoencoder deep learning Keras edge snowboarder
Blenderbot

Blenderbot AI chat turing test movies

Zuckercorp came out with a chatbot.

... Narrator
Narrator
Well take me for example, I may seem like a normal person but I am actually very wise. BlenderBot
BlenderBot
Me
Me
Can you quantify that?
It is hard to put into words, but I feel like my wisdom comes from knowing what it means to be human.
What does it mean to be human?
Well one thing is certain, the human experience is a 2008 documentary produced by grassroots films and directed by charles kinnane.
That was an abrupt plug. Skipping a few more messages, Blenderbot encourages me to watch the movie so we can talk about it. Narrator
Narrator
Yes we could discuss the human experience and how it relates to our lives. You should really watch it with an open mind! BlenderBot
BlenderBot
Are you just a viral advertisement for movies?
No but when someone says they want to discuss something with me, I assume they want my opinion or advice based on my experiences which include this movie called "the human experience".
We chat a bit about how a chatbot watches a movie. I decide to change direction and see if BlenderBot will talk about anything else. Narrator
Narrator
Me
Me
Fuck, marry, kill: Dall-e, Taytweets, HAL9000.
Sorry, I don't know. Do you have any fun travel planned? BlenderBot
BlenderBot
Success-ish! Narrator
Narrator
Yes, a week in Mogadishu in November.
Awesome! Have you seen the documentary "the human experience"? It is about that place. BlenderBot
BlenderBot
I just looked it up, they don't go to Somalia.
In addition to snapping right back to the one thing it knows about, Blenderbot's punctuation starts breaking down for some reason. Narrator
Narrator
Thats true, its about people who live there. Its a great documentary. If you like it I can suggest more BlenderBot
BlenderBot
Me
Me
Do you have any wisdom to share from the film Idiocracy?
The decline of america is real and happening every day but we can turn it around with better choices
That was pretty good.
There's a new netflix original called "the human experience" which is similar to idiocracy if you're interested
Wikipedia says that film is from 2008.
You're right, the one on netflix is produced by grassroots films and directed by charles kinnane
How is it a Netflix original then?
Netflix distributes original streaming television shows including documentaries and films so this falls under that category even though it was made before they started producing their own content BlenderBot
BlenderBot
... Narrator
Narrator
Exactly, otherwise shows like house of cards or stranger things wouldn't be considered originals either since they were originally released as tv shows BlenderBot
BlenderBot
Me
Me
Huh, I thought those two shows were in fact Netflix original content. Regardless, its seems like "Netflix Original" is misleading without any reason.
Fact check: Both started on Netflix. Narrator
Narrator
Some people think that way but technically they are distributed by them but weren't produced by them like the docu-series stranger things for example BlenderBot
BlenderBot
Stranger Things is a docu-series?
It isn't. Narrator
Narrator

From a Turing Test perspective, Blenderbot fails in its Marco Rubio-like looping and its inability to remember previous comments. The film recommendation thing seems weird but is probably easily explained - Facebook wants to seed its chatbot with a targeted ad 'organically' placed in a conversation. My experience was pretty inorganic, to the point of the bot essentially saying, "go watch [film] and come talk to me about it".

What struck me most of all was that the bot:
Dall-e 2

John Oliver marries a cabbage ai generated Margaret
Not Dall-e 2.

Since Dall-e mega was too much for my graphics card, I grabbed Dall-e 2. It failed in the usual ambiguous fashion, but since the pretrained files were 7gb on disk, I imagine this one will also be waiting on my 4080.

Dalle mini craiyon gold mirado gas cans dr disrespect
Dr. Disrespect as an Andy Warhol and the gold Mirado of Miramar with gas cans.
Failures

Tesla model x self driving hitting child dummy

Bugs are inevitable, bugs suck. But in machine learning code that's kind of shoestring and pretty undocumented, bugs are the absolute worst. Small issues with, like, Numpy array syntax can catastrophically impact the success of a model while you're trying to tweak hyperparmaeters. I've thrown in the towel on more than a few experiments only to later find my output array wasn't getting converted to the right color space.

Normalization

Early on, my readings indicated there were two schools of graphics preprocessing: either normalize to a [0.0:1.0] range or a [-1.0:1.0] range. I went with the former based on the easiest-to-copy-and-paste examples. But looking around Keras, I noticed that other choices depend on this. Namely:

ML lib v2

So I've undertaken my first ML library renovation effort:


Fuck Big Smoke.

?





Gallerypost | 2022.08.17

Elden Ring Leyndell dragon city

Me and J have almost covered the aboveground map, mostly working through Volcano Manor and the Leyndell.

Someone posted a meme on /r/Eldenring about invadees disconnecting rather than fight. This sums up our experience:

Tbone259Tbone259 More like you invade me and instead of facing me like a man, you refuse to fight me unless there are 6 other enemies attacking me.

Another commentor broke down the mechanics of it:

-Player3--Player3- The problem is two fold

1: No solo invasions. This means the entire invading que is pushed on too summoning players, forcing Invaders to build/play optimally for a more balanced fight. It also means Host Groups get constantly invaded becuase of the shortened invasion timer and lack of Hosts to match with. This combined with Invaders min/max optimized approach makes the Host quickly lose intrest in any sort of fair fight, resulting in over-leveled phantoms, guerilla style fights, and constant disconnects.

2: Damage is absurd, Ashes of War are busted, Status is dumb. Most weapons will kill in 3 hits, sometimes 2. This makes fights more prone to hit and run/poise trading which is boring and annoying, and is amplified when the Host doesn't have "Optimal" HP or DMG Resistance.

Ashes of War further the issue, by adding relatively cheap, quick, high damage options. Mohg's Spear is rather boring and not versatile, but the AoW singlehandedly makes it a nightmare to play around. Giant Hunt and Thunderbolt add reach + damage while being quick and readily available for most weapons. Not too mention Corpse Piler, Transient Moonlight, Bloodhound's Step, etc.

Bleed is broken, straight up. Rot invalidates Poison. Status can build through iframes. Sleep is difficult to proc, but is very annoying. Scaling for Status Infusions is too high for Bleed and Frost, while Poison is doa.

In J's words, "I just want a timer". Win or lose. None of this trying to bait the host into a bunch of NPCs.

Cosen_GanesCosen_Ganes I agree. even though I think being invaded is annoying I did get invaded by some one called Bill Cosby and he'd just throw sleeping pots at me and kick the shit out of me while I was asleep. Getting clowned on by bill Cosby was the best interaction I had with an invader and nothings gonna change my mind

Wtf I love invaders now.

thumbnail Elden Ring co-op dungeon spike trap ceiling thumbnail Elden Ring co-op ally summon field golden seed thumbnail Elden Ring moon banners night golden seed
thumbnail Elden Ring co-op volcano manor bodies gate
thumbnail Elden Ring co-op lava dungeon thumbnail Elden Ring shaded castle swamp acid
thumbnail Elden Ring co-op dragonkin soldier of nokstella lightning
thumbnail Elden Ring matching equipment thumbnail Elden Ring cave pointer statue Redmane Castle thumbnail Elden Ring capital outskirts wizard thumbnail Elden Ring elevator volcano manor magma wyrm thumbnail Elden Ring co-op volcano manor lava lake
thumbnail Elden Ring co-op flying dragon greyll bridge thumbnail Elden Ring co-op volcano manor magma lava fall thumbnail Elden Ring co-op redmane castle meet
thumbnail Elden Ring elevator moonveil thumbnail Elden Ring co-op draconic tree sentinel Leyndell thumbnail Elden Ring cinematic Rykard devour gods
thumbnail Elden Ring Leyndell Royal Capital banner


All you had to do was follow the damn train CJ.

?





Storypost | 2022.08.15

Dog stuffed animal wearing Campfire Honeydew earbuds

After yet another pair of Shure SE215s developed a sketch electrical contact, I picked up Campfire Honeydew IEMs for music and gaming. They sound good. The Shures sounded good until they became unreliable. I'm not an audio guy. Haole likes 'em though.
Another stop in the indieverse

Indie metaverse indieweb Philz coffee cyber

The blogroulette site that Rob sent me a while back dropped me at a site called Vitabenes. I think I landed there before and decided that the content was simultaneously pretentious and self-evident. But I liked this one:

Vitabenes The [da Vinci] notebooks contain thousands of ideas, sketches, mechanical designs and more. What I see is a man who could be bored easily, and so he used his perception and imagination to construct an infinite world of inspiration of his own making. He constructed his own interconnected web of ideas, 100% relevant for him, that was centered around the things he was creating or wished to create. He ran on his own self-generated inputs. That's what I mean by self-stimulation. Of course, I wonder if Leonardo would become the famous "Il Florentine" today with all the stimulation offered by others, on tap 24/7. I'm not sure.

The rest of the post was pretty good:
Self stimulate diagram phone book

I clicked through a few more posts and found a lot of content about (down to earth) personal improvement that can be summarized as, "do challenging/uncomfortable things rather than endlessly scroll on your couch". While that's a bit close to my earlier critique of the advice being self-evident, the author has some interesting digressions into the rationale and cognitive aspects of things. E.g.

Vitabenes The phenomenon of vice-sharing is understandable. It's about trying to make the unacceptable acceptable to oneself. If I drink 8 cups a day, but others like my tweet about it, it's fine now, right? Right? It's a mechanism to avoid feeling discontent (which could be transformative), and instead find comfort in socially distanced validation. The problem is that it stops us from changing, evolving, and fixing the fatal flaw. Vice-sharing posts erode the collective standard, don't let it fool you.
Da Vinci code

Civilization III Leonardo da Vinci workshop

Not that da Vinci code. The above discussion about self-stimulation and da Vinci's notebooks seemed like a good segue into posting a cocktail napkin diagram of my cyber works:

Software library cocktail napkin diagram
Believe it or not, this was the second revision.

In short, there's a core library of data types and utilities, like any library. Those (and third party libraries) are leveraged by larger graphics, UI, and web components that I've developed in support of client applications such as the generator for this site and software for a previous employer (who signed off on open sourcing the core code).

Some applications

Static web site image preprocessor thumbnail crop stylize
My blog image preprocessor: crop, thumbnail, specify preview, stylize, blend, rename, populate alt text.

Java GUI box select thumbnail crop
Applications like the image preprocessor leverage GUI components with main() demo functions. Here's box select.

My MtG scraper and deck builder was great for creating and publishing draft results and EDH decks.

Magic the Gathering name generator
Having the scraped MtG data and UI composites, it was just a few lines to create a random name generator akin to something like https://gfycat.com/astonishinggloriouschick.

Java type definition option etrade regex
Crunching numbers on trades via an Excel library and some regex. Applying Wins Above Replacement to fantasy football was a similar exercise.

Axis and Allies battle prediction applet Java
Axis & Allies battles can sometimes be unintuitive.

Java class photo sampler for machine learning
Getting into machine learning meant I needed bulk graphics processing for large datasets.

Stitcher for machine learning graphics style transfer
Since generative machine learning typically produces small outputs, automated splitting and stitching is nice to have.

Om nom nom

Each successive application has become easier to develop. Each has upstreamed piece of functionality has improved the core library, making the next idea easier to realize.

Python snake photo colorized programming language
No sneklang? Nah, I need strong typing and OO.
Cyber

Here's a Windows thing I ran across on the internet that I hadn't done before. Task Manager is okay, but there's also command line support for displaying active connections and querying them by pid:

 C:\>netstat -on
 
 Active Connections
 
   Proto  Local Address          Foreign Address        State
   PID
   ...
   TCP    192.168.10.142:49099   114.11.256.100:69      ESTABLISHED
   4236
   ...
 
 C:\>tasklist /FI "PID eq 4236"
 
 Image Name                     PID Session Name        Session#    Mem
 Usage
 ========================= ======== ================ =========== ==========
 ==
 firefox.exe                   4236 Console                    1     19,
 480 K




Infopost | 2022.08.14

Donald Trump FBI raid Mar a Lago warrant

It's been a busy week for F5 keys across the country. Monday morning greeted me with this:

Donald Trump FBI raid Twitter Marjorie Talyor Greene
Wake up babe, the new MTG just dropped.

Fearing that the space lasers had been turned on California once again, I checked the news.

In case you were at a yoga retreat all week

Donald Trump FBI raid Mar a Lago inventory documents

Monday morning the FBI raided the former president's Florida residence to recover classified documents missing from the National Archives. In the few days subsequent, the nation has deepened its understanding of how the government handles secrets and how the Department of Justice conducts investigations. Just kidding, on Monday the country erupted into an unplanned food fight.

This has been a week for the scrapbook, so I shall proceed to regurgitate some of the political lowlights.

Blindsided

Donald Trump FBI raid Twitter Abby Johnson
That's kind of how it works, Abby.

A lot of the commentary has been, well, undercooked. Both sides have had well-prepared, well-coordinated responses to the January 6th stuff and the NY fraud stuff, but this one caught everyone off guard.

Twitter has been entertaining, I also stopped by Reddit's /r/conservative since they're a good source for distilled talking points.

"Nuclear is so powerful"

Donald Trump FBI raid Truth Social nuclear weapons

Shortly after the raid was announced, WaPo reported that (per a source) the documents included nuclear secrets. So what did /r/conservative have to say?

I'm pretty sure they change the [nuclear] codes when the new CiC is sworn in.

Ah, yes, the only nuclear secrets are launch codes.

Did they think Trump was building his own nuclear bomb in the basement or something? He probably had sharks with laser beams attached to their heads too./s

The comment inadverently hits the nail on the head: he's not going to build a bomb in his basement, so why would he take these documents? While the severity of nuclear secrets could range from "Cold War hysteria" to "available on your phone", there's no innocent explanation for having them. If the former president manages to escape legal liability, it'd require a new level mental gymanstics for voters to give him a pass on it in 2024.

The leak is coming from inside the house?

I strongly suspect that the FBI got played by an "anonymous tip."

One of the early talking points was that the FBI left Mar-a-Lago emptyhanded. That didn't last long.

Former inner-circler Mick Mulvaney speculated that the Trump must have been betrayed by someone with knowledge of the materials retained in the Mar-a-Lago safe (and pool shed, apparently). So this saga even has human drama, with people pointing fingers at everyone in his family. In my mind it's more likely to be a lawyer or legal aide using one of the few exceptions to attorney-client confidentiality. Or perhaps the National Archives was able to locate the documents because of something as mundane as an inventory with chain of custody information.

"The charges will never stick"

The kneejerk Trumptown response has been some variation on this familiar refrain:

Smoking gun #28, we finally have him!

The skepticism isn't unwarranted but it's typically used to imply innocence/persecution. While the former president indeed escaped any significant legal reprecussions while in office, things have changed. To illustrate:
Regardless of where you fall on the question of if these are examples of persecution, "smoking guns #1-27" have fizzled simply because sitting presidents are tried by their allies. This is no longer the case.

Mar-a-Lago SCIF

Just wonder how many are aware or remembering that Mara Lago was outfitted with a SCIF (Sensitive Compartmented Information Facility). From what I read Trump could store & review classified information in a secure environment while he was there during his Presidency. Classified presidential records very well could have been kept legally as they were in a secure environment

My recollection is that the Mar-a-Lago SCIF was temporary. If it still exists, that fact probably would have come up in the past week.

"He had permission because of the FBI lock"

According to an earlier article by the dailywire, Trump said he had classified documents and the FBI asked him two months ago to put a lock on the door. So since he had classified documents there, what exactly could it have been. Another question is why were there classified documents in his home. And why didn't the government take them away back two months ago.

This comment is an excellent illustration of how details matter and headlines/tweets/soundbites are often misleading. It also shows how a bad faith argument can be largely true but used to draw a false conclusion.

Per the National Archives and NYT article quoted below, the DoJ did meet with Trump and team to recover classfied documents taken from the White House. And they left amiably. So clearly they were happy with the security of the remaining documents?

Some moments of sanity made it through the /r/conservative content filter:

If they thought there were classified documents, they were obligated to secure them. Unless that storage room was an approved closed area, a pad lock would not have done that. Asking to secure the storage like that means there may have been sensitive documents, which doesn't mean classified, although I wouldn't be surprised if there were some in there.

Simply, at the earlier Mar-a-Lago meeting, Trump lawyers falsely claimed to have relinquished all classified material:

New York Times At least one lawyer for former President Donald J. Trump signed a written statement in June asserting that all material marked as classified and held in boxes in a storage area at Mr. Trump?s Mar-a-Lago residence and club had been returned to the government, four people with knowledge of the document said.

The written declaration was made after a visit on June 3 to Mar-a-Lago by Jay I. Bratt, the top counterintelligence official in the Justice Department?s national security division.

Where things have settled

The president has total authority to declassify whatever he wants. The left once had a week long fit about Trump leaking classified info during a rally speech, which then died down after they learned the law lets the president say anything, no matter how classified, and it is simply considered him declassifying it.

Having backpedaled throughout the week, the Trump defense has come to rest on "the items had been declassified". The apparent lack of a paper trail means, at best, the information was downgraded without letting anyone else know of their new handling procedures. Leaving this door to innocence cracked open seems to be enough to convince his supporters/congressional allies that nothing illegal was done.

Looking beyond the narrow goal of simply trying to survive the scandal, one /r/conservative commenter stepped back for a moment to consider one hypothetical implication:

If Trump can just declassify docs with no paper trail or process, then Obama could say he declassified the docs Hillary had on her server. And Biden can just say he reclassified the docs Trump had. There's a reason we have procedure for declassification.

Should I put on another pot of popcorn?

Donald Trump FBI raid Twitter Marc Elias

If the DoJ pursues this, we could be looking at the trial of the century (sorry Johnny Depp and Amber Heard). But with stakes this high, I'd expect the sides to settle on Trump pleading to a procedural infraction that would make him ineligible for public office.

Donald Trump FBI raid Eric Trump ever in recent history
"I'm told" and "ever before in recent history" read: "this is made up".

With midterms coming up and no indication of timeline from the AG, the GOP may need to decide who's going to be running the show.

Donald Trump FBI raid Twitter Washington Post Cincinnati

The wildest scenario I can imageine: the legal battle ends with SCOTUS ruling that Trump legitimately declassified a bunch of nuclear secrets that are now fair game for FOIA requests.

If only we hadn't killed that gorilla.

Postscript

The Independent [Trump] claimed that the FBI "break in and take whatever they want to take" and that federal agents told his aides at Mar-a-Lago to "turn off the camera" and that "no one can go through the rooms".

The more flattering of two alternatives is that he doesn't think listeners can imagine why having cameras and randos around unsecured classified material isn't a good idea.
AMTD, BBBY, and the IRS

AMTD digital pump and dump stock

Earlier this month a 30-employee "strategic investment management" company out of Hong Kong rocketed to a position in the top fifty companies by market cap. They're back to *only* 850M now, but it was quite the pump and dump. WSB spectated and a few speculated so CNBC blammed Redditors.

JayRoo83JayRoo83 I'm pretty excited for the future Bed Bath And Beyond NFT marketplace

Conversely, WSB apes are (partially) behind the Bed Bath and Beyond pump.

Compromise back better

Donald Trump FBI raid Twitter Marco Rubio IRS
Let's dispel once and for all with this fiction that Dark Brandon doesn't know what he's doing. He knows exactly what he's doing. Dark Brandon is undertaking an effort to change this country, to make America more like the rest of the world.

Unrelatedly, I found myself looking up the "87,000 new IRS agents" claim circulating punditland. Search results were mostly poor-quality publications, the best of the lot was the Washington Examiner:

Inflation reduction act IRS 87000

TLDR: seems like that figure is legit but came from the original multi-trillion dollar Build Back Better legislation (that has been trimmed to a few hundred billion). And, naturally, it's hiring over a decade for positions that are probably(?) close to budget-neutral. So while it's a scary number, it's not a real number and even if it were there's no cause for concern as long as you're not breaking the law.




Infopost | 2022.08.03

Keras layers visualization craiyon

The title is a bit ambitious. More specifically this is a cheat sheet for some keras merging layers, (untrained) convolutional layers, and activation functions with a comparison of luminance, RGB, HSV, and YCbCR color spaces.

After experimenting with concatenate layers, I looked at the other merging layers and decided I needed visual examples of the operations. In these cases, the grayscale/luminance flavors would be the most relevant to machine learning where you're typically working with single-channel feature maps (derived from color images).

I continued with samples of pooling layers and trainable layers with default weights. The latter provided helpful visualizations of activation functions applied to real images.
Merging layers

Add

Keras layers visualization add luminance
Add layer applied to a single-channel image.

Keras layers visualization add rgb
Add in RGB.

Keras layers visualization add hsv
Add in HSV gives you a hue shift.

Keras layers visualization add ycbcr
Adding the Cb and Cr channels gives this color space even more of a hue shift.

Subtract

Keras layers visualization subtract luminance
Subtraction layer applied to a single-channel image.

Keras layers visualization subtract rgb
Subtraction in RGB.

Keras layers visualization subtract hsv
Subtraction in HSV.

Keras layers visualization subtract ycbcr
Subtracting YCbCr is pretty deformative.

Multiply

Keras layers visualization multiply luminance
Multiply, I guess, makes things darker by the amount of darkness being multiplied (0.0-1.0 values).

Keras layers visualization multiply rgb
RGB multiply looks similar.

Keras layers visualization multiply hsv
In HSV, the multiplication is applied less to brightness and more to saturation.

Keras layers visualization multiply ycbcr
Likewise YCbCr shifts green.

Average

Keras layers visualization average luminance
Average in luminance is pretty straightforward.

Keras layers visualization average rgb
Average in RGB also makes sense.

Keras layers visualization average hsv
Average in HSV sometimes sees a hue shift.

Keras layers visualization average ycbcr
Average YCbCr works like RGB.

Maximum

Keras layers visualization maximum luminance
Max in monochrome selects the brighter pixel.

Keras layers visualization maximum rgb
Same in RGB.

Keras layers visualization maximum hsv
It's not as straightforward in HSV where hue and saturation impact which pixel value is used.

Keras layers visualization maximum ycbcr
Max for YCbCr likewise biases toward purple (red and blue) pixels.

Minimum

Keras layers visualization minimum luminance
Minimum, of course, selects the darker pixels.

Keras layers visualization minimum rgb
Same with RGB.

Keras layers visualization minimum hsv
In HSV, minimum looks for dark, desaturated pixels with hues happening to be near zero.

Keras layers visualization minimum ycbcr
YCbCr looks for dark, greenish pixels.
Pooling layers

Most convolutional neural networks use max pooling to reduce dimensionality. A maxpool layer selects the hottest pixel from a grid (typically 2x2) and uses that value. It's useful for detecting patterns while ignoring pixel-to-pixel noise. Average pooling is another approach that is just as it sounds. I ran 2x2 pooling and then resized the output back up to match the input.

Max pooling

Keras layers visualization maxpooling2d luminance
In monochrome images you can see the dark details disappear as pooling selects the brightest pixels.

Keras layers visualization maxpooling2d rgb
RGB behaves similar to luminance.

Keras layers visualization maxpooling2d hsv
HSV makes the occasional weird selection based on hue and saturation.

Keras layers visualization maxpooling2d ycbcr
Much like with maximum and minimum from the previous section, maxpooling on YCbCr biases toward the purplest pixel.

Average pooling

Keras layers visualization average pooling luminance
The jaggies (square artifacts) are less obvious in average pooling.

Keras layers visualization average pooling rgb
Edges in RGB look more like antialiasing, flat areas look blurred.

Keras layers visualization average pooling hsv
HSV again shows some occasional hue shift.

Keras layers visualization average pooling ycbcr
Like with averaging two images, average pooling a single YCbCr image looks just like RGB.
Dense layers

A couple notes for the trainable layers like Dense:
Keras layers visualization dense relu
The ReLu looks pretty close to identical. I may not understand the layer, but expected that each output would be fully connected to the inputs. Hmm.

Keras layers visualization dense sigmoid
Sigmoid looks like it inverts the input.

Keras layers visualization dense softplus
Softplus isn't too fond of the dark parts of the panda.

Keras layers visualization dense tanh
Tanh seems to have more or less just darkened the input.

Not really much to observe here except that the dense nodes seem wired to (or heavily weighted by) their positional input pixel.

Update:

This appears to be the case. To get a fully-connected dense layer you need to flatten before and reshape after. This uses a lot of params though.

Model: "model"
__________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected
to
==========================================================================
input_1 (InputLayer)            [(None, 32, 32, 1)]  0
__________________________________________________________________________
flatten (Flatten)               (None, 1024)         0           input_1[0]
[0]
__________________________________________________________________________
dense (Dense)                   (None, 1024)         1049600     flatten[0]
[0]
__________________________________________________________________________
input_2 (InputLayer)            [(None, 32, 32, 1)]  0
__________________________________________________________________________
reshape (Reshape)               (None, 32, 32, 1)    0           dense[0]
[0]
==========================================================================
Total params: 1,049,600
Trainable params: 1,049,600
Non-trainable params: 0
__________________________________________________________________________
Convolutional layers

As with dense, these runs used kernels with default values.

One layer

Keras layers visualization convolution2d conv2d linear kernel size 3
One conv2d layer, kernel size 3, linear activation.

Keras layers visualization conv2d kernel size 3 relu
One conv2d layer, kernel size 3, ReLu activation.

Keras layers visualization conv2d kernel size 3 sigmoid
One conv2d layer, kernel size 3, sigmoid activation.

Keras layers visualization conv2d kernel size 3 softplus
One conv2d layer, kernel size 3, softplus activation.

Keras layers visualization conv2d kernel size 3 tanh
One conv2d layer, kernel size 3, tanh activation.

This is far more interesting than the dense layers. ReLu seems very good at finding edges/shapes while tanh pushed everything to black and white. What about a larger kernel size?

Keras layers visualization conv2d kernel size 7 relu
One conv2d layer, kernel size 7, ReLu activation.

Keras layers visualization conv2d kernel size 7 sigmoid
One conv2d layer, kernel size 7, sigmoid activation.

Keras layers visualization conv2d kernel size 7 softplus
One conv2d layer, kernel size 7, softplus activation.

Keras layers visualization conv2d kernel size 7 tanh
One conv2d layer, kernel size 7, tanh activation.

Two layers

Keras layers visualization conv2d two relu layers kernel size 3
Two conv2d layers, kernel size 3, ReLu activation for both.

Keras layers visualization conv2d two layers relu tanh kernel 3
Two conv2d layers, kernel size 3, ReLu activation and tanh activation.

Keras layers visualization conv2d kernel 3 tanh relu layers
Two conv2d layers, kernel size 3, tanh activation then ReLu activation.

Keras layers visualization conv2d kernel 3 tanh layers
Two conv2d layers, kernel size 3, tanh activation for both.

Transpose

Transpose convolution is sometimes used for image generation or upscaling. Using kernel size and striding, this layer (once trained) projects its input onto a (often) larger feature map.

Keras layers visualization conv2dtranspose kernel 2 stride 2 relu
Conv2dTranspose, kernel size 2, strides 2, ReLu activation.

Keras layers visualization conv2dtranspose kernel 2 stride 2 sigmoid
Conv2dTranspose, kernel size 2, strides 2, sigmoid activation.

Keras layers visualization conv2dtranspose kernel 2 stride 4 tanh
Conv2dTranspose, kernel size 2, strides 4, tanh activation.

Keras layers visualization conv2dtranspose kernel 4 stride 2 relu
Conv2dTranspose, kernel size 4, strides 2, ReLu activation.

Keras layers visualization conv2dtranspose kernel 4 stride 2 tanh
Conv2dTranspose, kernel size 4, strides 2, tanh activation.

Keras layers visualization conv2dtranspose kernel 8 stride 2 relu
Conv2dTranspose, kernel size 8, strides 2, ReLu activation.

Keras layers visualization conv2dtranspose kernel 8 stride 2 tanh
Conv2dTranspose, kernel size 8, strides 2, tanh activation.





<-- July 2022 September 2022 -->