From Corona to SpaceX to the Floyd protests, there's a lot going on and I'm just sitting here isolating. If only I had a 5G bioshield I could go ride some flat waves, maybe next birthday. Still, I can't complain about a beer and cheese of the month club and some home office-friendly dinosaur slippers. Thanks, Jes!
The Year of Zoom continues
Yay Jori I'm so happy for you guys! In real life, it apparently looked something like this:
And Gage, Derrick, and spouses played some Jack Box on Zoom with us.
Now before you say anything, because Gage always gives me crap about references, I saw fit to use the handle Vas Referens and needed an image to match. That one part is a brain, so the handle is about as intellectual as they come.
(I took some liberties with the above image's original depiction of the LA skyline).
CattleDecapitation rated Bosch above average. In these quarantine times, I decided to give it a watch. Two seasons in, I'd call it above average. It has a lot of semi-subtle detective novel tropes regarding the protagonist, his relationship with his family, and his maverick personality. Any sort of discussion with federal agencies amounts to shouting about jurisdiction and need-to-know. Zzzzz. On the other hand, it's not all CSI-y and has refreshingly down-to-earth (sometimes gritty) material.
On the plus side, the borrowed actors and plotlines from The Wire are neat. On the minus side, there's handwavey stuff that keeps the plot from going off the rails but is central to its progression. Specifically for season two, the lack of followup to the major mid-season incident and all the "politics makes x happen" are hard to buy.
I wouldn't have made this video, but Cattle wanted a screenshot.
RIP PUBG
Cattle has decided to quit PUBG and I am inclined to agree.
A little history...
PUBG has done a few things over the years to keep interest in the game. A few examples:
Removing all-chat
Open chat in the queue was rife with... well exactly what you'd expect from internet randos. Still, it was something to do. And I'm pretty sure you could turn it off. Plane rides were a lot more fun with Music Guy (the guy who would play music on all-chat).
Cosmetics
The obligatory money maker. Cosmetics meant they added an XP/grinding system. That's sure to keep players around!
Anti-cheat
The prevalence of cheating forced Blue Hole to add anti-cheat software and, more recently, phone verification. Naturally, having a level playing field is pretty darn important to keeping people engaged. They didn't really succeed here, but they made progress.
New maps
After Erangel and Miramar came Sanhok, Vikendi, and Karakin. New scenery, new tactics, new game speed. These were good!
New weapons
Guns were steadily added, not making a huge difference but they were nice to have. The introduction of the Panzerfaust had the potential to be a major change but Blue Hole played it safe and left it somewhat underpowered.
New vehicles
Cars, trucks, and bikes speckle the PUBG landscape. And then there was an airdropped APC that was very fun to use. And an ultralight. And then an automated Vikendi train. Each of these added variation to the experience regardless of how well they were used.
Events
For a brief period there were multi-day events that served as the groundwork for the deathmatch mode. Their coolness ranged from 'meh' to the awesomeness of vehicles and smoke grenades only. Then they stopped doing them. Why???
Map makeovers
The older maps got visual refreshes. This probably didn't matter since serious players use low graphics settings.
Map selection
The ability to select your map came and went. It should have remained.
MMR
Blue Hole added a few beta MMR tests. These made a substantial difference to how fun the game was, since it's more fun to play with people at your skill level.
Speed changes
Game tweaks and map additions changed how quickly the game progressed from drop to final circle. A lot of players asked for quicker matches.
Weather
Rain and fog were neat effects that had an impact on play.
And there are some things they never tried:
Trios. If you had three, you were going against full squads.
Persistent MMR. I'm not sure what was wrong with matchmaking.
Game modifiers, similar to events. Shotguns only, no armor, boats spawn with level 3 gear, footsteps muted, deathfog (you take damage when outside a vehicle with a gun equipped), the list is endless. Making these optional and avoiding hated ones is always a concern, but enabling a variety of play styles could have kept things fresh.
And now, this...
Now they've made two major changes. The garbage AIs that were introduced with PUBG Mobile are now in the PC version. Theoretically it's nice to have easier targets from which you can farm loot before encountering real players in the later stages of the game. But it feels like the kiddie pool.
And, like with most e-sports, Blue Hole has added a ranked mode based on their experiments with MMR. I'll describe this one in story form:
Ever the optimist, I think, "Ranked mode, great! We'll get placed in whatever is below bronze league and it'll be like MMR on stims." Cattle and I set out to play our five pre-rank matches. First red flag: squads only. So we play as a two-man squad, further diminishing our chances to get any ranking points. Second red flag: "BTW queue times will be upwards of seven minutes". Okay, at least they're honest, is this reflective of all ranked mode or just the pre-rank?
We play four pre-rank matches. All Sanhok. Everyone on the server hides until late. No big deal. Not fun, but rip the band-aid off. Through bad luck and lack of skill, we do pretty bad, which is kind of what we want. Pre-rank match five: Erangel. Okay. We're on the plane with 16 other players. What? You put us on the second-biggest map with a handful of players? We laugh that it might be the bronze pre-league. As you might imagine, it's pretty quiet til the end. We play it right except that everyone is level 3'd and we're still a two-man squad.
Get back to the ranked menu: "You've played 4/5 pre-rank matches." To quote the player mantra that Blue Hole adopted, "Fix PUBG".
Except this time I think we're done.
More Payday
We didn't play a ton of Payday 2 the first time around. I think the late-night marathon attempt to beat Hoxton's Revenge had us jumping back to the relative low tension of PUBG. We started where we left off: a successful Hoxton run. Our squad this time was a little different, but we managed a full stealth run after only a few dozen restarts. Anyway, it's a great map if you have the game and haven't played it.
Of course, all the stealth makes you want some action. As such, we're doing the highly-recommended Firestarter mission. Three segments, stealth optional.
We got through all three on the lowest difficulty having started but not ended in stealth. The next time around we bumped it up to hard and still couldn't manage a full stealth run, but felt a bit better since the combat was much more perilous.
No, I'm not nerding out about the newest Persona game, I'm nerding out about site meta! (There's the thin connection that it's about people, and in the video game, when a character attacks, they sometimes shout "persona" because it's a jrpg.) Anyway, I wanted to finish the chat bubble-style html implementation of conversations that started here:
I did the screen grab after I started coding, so I had broken the link to the 'me' avatar, but you get the idea.
Conversation element
So some things that weren't great about the initial implementation:
Stretching text boxes to accommodate a 256px avatar.
In-line chat bubbles; traditionally they're offset toward their side of the screen.
(Unseen) support for just two people.
With some colspan-fu, I implemented lazy avatar placement and bubbles that take up 2/3 of a center area. Supporting multiple participants made the logic more complicated, but like the thumbnail/gallery view, I hope to have done it once and never have to touch it again. I also followed the left/right paradigm where I'm on the left and others are on the right, except when I'm not in the conversation.
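In sketch form, the row logic looks something like this (illustrative Java, not the actual generator; class and CSS names are made up):

// Illustrative sketch of the bubble-row layout, not the site's actual code.
// A row is avatar | bubble | spacer, mirrored when the speaker is on the
// right; the bubble spans two of the three center columns (~2/3 of center).
public class ChatRow {
    static String getHTML(String avatarHtml, String bubbleHtml, boolean onLeft) {
        String avatar = "<td class=\"chat-avatar\">" + avatarHtml + "</td>";
        String bubble = "<td class=\"chat-bubble\" colspan=\"2\">" + bubbleHtml + "</td>";
        String spacer = "<td class=\"chat-spacer\"></td>"; // empty cell pushes the bubble to its side
        return onLeft
                ? "<tr>" + avatar + bubble + spacer + "</tr>"
                : "<tr>" + spacer + bubble + avatar + "</tr>";
    }
}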
And here's how it looks:
KO
I always say the absence of evidence is not the evidence of absence.
What?
Connie
Simply because you don't have evidence that something does exist does not mean you have evidence of something that doesn't exist.
What?
What country are you from?
What?
'What' ain't no country I ever heard of! They speak English in 'What'?
What?
KO
English, motherfucker! Do you speak it?
Yeah.
Connie
So you understand the words I'm saying to you!
Yeah.
Well, what I'm saying is that there are known knowns and that there are known unknowns. But there are also unknown unknowns; things we don't know that we don't know.
What?
Say what again! Say 'what' again! I dare you! I double dare you, motherfucker! Say 'what' one more time!
Supporting pseudos/names/avatars meant actually compiling an initial list of recurring characters in this quiet corner of life. From there, I addressed the conversation text coloring challenge by hashing people's IDs to an RGB value. So:
By brightening and darkening #adcbb8 to values closer to black and white, I came up with a per-person color palette. Of course, at these limits, a lot of them look similar, but they're different enough for the purpose.
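The hash-to-palette step, roughly (a simplified sketch; the real code surely differs, and the bit-spreading scheme is a guess):

import java.awt.Color;

// Sketch: hash a handle to a stable base color, then blend toward white and
// black for the light/dark variants.
public class HandleColors {
    static Color baseColor(String handle) {
        int h = handle.hashCode();
        return new Color((h >> 16) & 0xff, (h >> 8) & 0xff, h & 0xff);
    }

    static Color blend(Color c, Color target, double t) {
        return new Color(
                (int) (c.getRed() + (target.getRed() - c.getRed()) * t),
                (int) (c.getGreen() + (target.getGreen() - c.getGreen()) * t),
                (int) (c.getBlue() + (target.getBlue() - c.getBlue()) * t));
    }

    public static void main(String[] args) {
        Color base = baseColor("vas_referens");      // hypothetical handle
        Color light = blend(base, Color.WHITE, 0.7); // bubble background
        Color dark = blend(base, Color.BLACK, 0.6);  // text/border
        System.out.printf("#%06x #%06x%n", light.getRGB() & 0xffffff, dark.getRGB() & 0xffffff);
    }
}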
The big retrofit
And since everyone (well not everyone yet) had an avatar and a color scheme, I figured I might as well compile username mentions. Some things, like code changes, are easy. Others are sweat equity. This was the latter. Naturally, I automated finding names in historical markup files, but QA was all manual labor. Here is a list of people I now hate:
People whose names are verbs, specifically Rob and Chase.
People with the same name. Ryan and Ryan. Jamie and Jamie.
People whose pseudos are overloaded like EA and Mouse.
Looking forward, I'm just adding the markup 'c/' to denote a person's handle. All of this would have paid off recently when I was asked for neat images of certain people for an album.
Next
Chris, it's Fonse. Wtf, man?
Anonymous
Yeah, my husband has a neat, slightly-insulting persona, where's mine?
Anonymous2
Chris
They're coming, I promise!
I added handles for the people most mentioned these past 20 years. I still have some ground to cover.
Oh yeah
Of course, going over old stuff, I found some things to fix - not the least of which was broken external links. I added a few images to posts where I was storage-constrained, including interpretive depictions of my technology progression:
What does a responsible reopening of society look like? I'm not sure anyone knows. Other than a few trips to work, I've gently transitioned to the front porch and then a ride out to get a milkshake and check out the gross red tide water and Windansea.
For Jon's postponed bachelor party, we gathered around a virtual poker table from something like 9pm to 2am.
BL3: Revenge of the Cartels
Gearbox released another (free?) mini-DLC that is pretty Scarface-y. The downside was that on release the boss was bugged to be invincible.
Divinity
As mentioned in a previous installment, J and I left the starter island and have been adventuring on the Reaper's Coast. Having run away from a bunch of higher level enemies, I put some thought into why I think the Divinity formula works.
Divinity = Fallout + Fire Emblem + Sam and Max
Fallout - Divinity largely duplicates the RPG elements that, obviously, aren't exclusive to the Bethesda series. A roleplaying system would feel very different without skills and perks, and Divinity brings those with a perk system that is even more game-changing. More specific to Fallout, D:OS captures the unforgiving element that lets you wander into battles totally underleveled. It also has a similar overworld/cave system where each area is carefully planned and brimming with puzzles and lore.
Fire Emblem - the combat is tactical in both positioning and actions. A doorway can be the difference between a won battle and total disaster.
Sam and Max - D:OS has more than a little intelligent and often zany dialogue. The writing is largely original but features callouts to literature and pop culture that are a bit more subtle than, say, Borderlands. And though it's not the most enjoyable experience, Divinity rewards you for scanning everything, sort of like clicking on everything on each screen of Day of the Tentacle.
Of course, it's give and take. Divinity doesn't require the twitch skill of non-VATS Fallout combat. Its tactics system (particularly once blinks are unlocked) is only mildly positional. And it's not quite the guided, on-rails experience of an early LucasArts toon game.
It shouldn't go without mentioning that cheesing battles is both fun and shameful. The AI is subtle enough that it's hard to know what's going to happen from moment to moment, but with a few attempts at a harsh encounter you can typically find ways to abuse the scenario. These often involve exploiting the simultaneous states of being in battle and out of battle/conversation, depending on the character.
High points
While cheesing battles is a guilty pleasure, sometimes a good plan comes together. In the graveyard there's a sorcerer dog (right?) that summons an undead hulk that can oneshot anyone who isn't a tank. The dog himself is pretty strong, but we whittled his physical armor down; at that point I cast Shackles of Pain.
It's maybe good for some damage, but sometimes you just cast it because you have an action point left. The enemies did their actions as we idly watched while thinking about our next move. Then the XP award popped up and we exited battle. Mental rewind - what??? I burst out laughing. The undead hulk had oneshot my character, doing so much damage that it also killed the shackled summoner. Battle over by pure accident.
There's a particularly challenging battle in the Blackpits where you have both inextinguishable fire that heals enemies and a very suicidal NPC to protect. Looking online for tips, J found that people were calling the battle impossible and broken. Using the borrowed tactic of teleporting the NPC into a tent and placing a crate to block the door, we managed to succeed in our first attempt.
GBES
GBES rode again with a backyard exploration that featured Rouleur to-go crowlers...
Kilroy had a little technical debt built up. This, combined with coffee and covid, resulted in a Saturday that was something of a blur but ended with solved problems and new problems. First...
Issue: Tag pages are supposed to display an image and sometimes a text snippet from each associated post. Somewhere along the way, the tag pages and actual posts stopped agreeing on the preview image, and there were broken links.

Fix: Generating everything anew helped with this. I also changed the treatment of external content (linked images). Since the perl markup days I had been truncating the often-lengthy source filenames; now they're hashed. On the downside, I didn't propagate web traffic information to the new naming scheme.
Issue: In spite of having a markup language that ties into html generation, site infrastructure, and image processing, I was using 'blockquote' tags for quotes and 'pre' for code. This isn't terrible, but my markup processing was adding unnecessary line breaks inside them. For the sake of consistent parsing and future formatting, it makes sense to go full custom markup.

Fix: This was really just a matter of adding markup tags and changing existing stuff. I got a bonus trip down memory lane where I found what was left of the postprocessed Blogger scrape.
Issue: Spacing between elements was a bit haphazard. For example, gallery markup elements (a bunch of thumbnails) had a large vertical margin whereas images did not. I had been inconsistently trying to address it in the markup files.

Fix: My markup model is, in short, a sequence of content containers, each initialized with its content (local image, remote image, gallery, etc.) and rendered by calling a getHTML() that generates the final content. Adding a 'self_spaced' value to each element did the trick, letting the code that concatenates the getHTML() results decide when a 'br' is necessary - roughly like the sketch below.
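A stripped-down sketch of the container model (names approximate):

import java.util.List;

// Each piece of content is a container that knows how to render itself and
// whether it supplies its own vertical spacing.
abstract class ContentElement {
    abstract String getHTML();
    boolean selfSpaced() { return false; } // e.g. galleries override to true
}

class PageBuilder {
    // The concatenator decides where a 'br' is needed: only between two
    // neighbors that don't space themselves.
    static String concatenate(List<ContentElement> elements) {
        StringBuilder html = new StringBuilder();
        ContentElement prev = null;
        for (ContentElement e : elements) {
            if (prev != null && !prev.selfSpaced() && !e.selfSpaced()) {
                html.append("<br>\n");
            }
            html.append(e.getHTML()).append('\n');
            prev = e;
        }
        return html.toString();
    }
}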
Issue: My thumbnail algorithm was not handling panoramas very well. Note to self: scale using the shortest side rather than the longest side.

Fix: Used the shortest side for scaling and added a check for images smaller than the thumbnail. This flowed into a bonus improvement wherein I scale the original image differently based on the thumbnail size; it really just required articulating that a thumbnail should be the most interesting x% of the photo, adjusted for the output size. Something like the sketch below.
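My reading of the scaling fix, sketched (hypothetical helper, not the real pipeline):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Scale so the SHORT side matches the thumbnail, then center-crop the long
// side; a panorama keeps a usable slice instead of shrinking to a sliver.
public class Thumbnailer {
    static BufferedImage thumbnail(BufferedImage src, int thumbSize) {
        int shortSide = Math.min(src.getWidth(), src.getHeight());
        // Don't upscale images that are already smaller than the thumbnail.
        double scale = shortSide <= thumbSize ? 1.0 : (double) thumbSize / shortSide;
        int w = (int) Math.round(src.getWidth() * scale);
        int h = (int) Math.round(src.getHeight() * scale);
        BufferedImage scaled = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = scaled.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        int cw = Math.min(thumbSize, w), ch = Math.min(thumbSize, h);
        return scaled.getSubimage((w - cw) / 2, (h - ch) / 2, cw, ch);
    }
}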
Issue: Monochrome images weren't being processed correctly; I think this might have been an argb/rgba thing, because it looked like one channel was getting dropped.
Issue: The hot/top sidebar item was calculated from current information. That is, it would read all logs up to the present and list the all-time top visits plus recent ones weighted by age. This clearly presents a problem when re-generating or updating old posts. To work around it, I had previously added the logic, "if hot/top exists in the destination html, re-use it". It's klugy. Not only did this create some annoying corner cases in generating html, it meant I had recent data on old posts (ones that pre-dated hot/top).

Fix: Each time the code generates a hot/top list, the log parser simply ignores content and hits that occurred after the specified date (see the sketch below). It's kind of a 'duh' approach, but it relied on the previous work of enumerating all content sitewide. It also means hot/top starts around 2014, when I started pulling server logs.
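A minimal sketch of the cutoff (hypothetical types):

import java.time.LocalDate;
import java.util.List;

// When regenerating a page as of some date, drop hits from after that date
// before computing hot/top.
record Hit(String path, LocalDate date) {}

class HotTop {
    static List<Hit> hitsAsOf(List<Hit> allHits, LocalDate asOf) {
        return allHits.stream()
                .filter(h -> !h.date().isAfter(asOf))
                .toList();
    }
}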
Issue: There are a lot of tags, each with their own html file (because js and php are yuck), which creates flow and file size constraints.

Fix: I moved 'related tags' (a band-aid for irregular tagging) to the navigation bar. This required finally creating a home link - the kilroy/bpf graphic in glorious ascii form.
Web site coding flows in and out of graphics coding. I picked up where I left off, kind of.
I was thinking about histograms and color spaces and wondered how many of the 16,777,216 representable rgb colors an image actually uses. With the idea of style transfer, filters, and posterizing in mind, I thought about how I might map the colors of one image onto another.
General implementations probably exist in Adobe products, and it's the foundation for:
Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range images in a medium that has a more limited dynamic range.
For more examples and why I hate tone mapping, visit /r/shittyHDR. On the other hand, tone mapping!
One of the main applications (other than making boring photos look different) is to stylize a photo into a different/compressed color scheme. I started simple: take a photo (top left), take a few colors (top right), replace each pixel with the closest intensity value from the new color space. That is, I wanted the output image to have the darks still be darks and lights be lights but use whatever color is closest in the allotted space. Since the intensity for #ff0000 is the same as #00ff00, there is a secondary matching of the closest RGB delta once the intensity is matched.
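The matching step, sketched (assuming intensity = R + G + B, which is what makes #ff0000 and #00ff00 equal as described; the real code may differ):

import java.awt.Color;
import java.util.List;

// For each input color, find the palette color with the closest intensity;
// break intensity ties by the smallest per-channel RGB delta.
public class PaletteMatch {
    static int intensity(Color c) { return c.getRed() + c.getGreen() + c.getBlue(); }

    static int rgbDelta(Color a, Color b) {
        return Math.abs(a.getRed() - b.getRed())
                + Math.abs(a.getGreen() - b.getGreen())
                + Math.abs(a.getBlue() - b.getBlue());
    }

    static Color nearest(Color in, List<Color> palette) {
        Color best = palette.get(0);
        for (Color p : palette) {
            int dp = Math.abs(intensity(p) - intensity(in));
            int db = Math.abs(intensity(best) - intensity(in));
            if (dp < db || (dp == db && rgbDelta(p, in) < rgbDelta(best, in))) {
                best = p;
            }
        }
        return best;
    }
}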
I quickly found that jpg compression creates a wide range of in-between colors, even from a very contrasty source. E.g.
JPG: 438 intensities for 764928 values.
BMP: 51 intensities for 764928 values.
Even the bitmap flavor had what was probably antialiasing. Good? Bad? Well, it can be manipulated.
Here's a crazy one. Using the RGB values from Super Mario Bros in jpg form, a Horizon screen can be almost exactly reproduced. Honestly, I'm 50% sure I miscoded something, though I should say the algo used the reddish ground that was chopped from the screenshot. But if I take that bafflingly large SMB color space and remove all values that aren't highly represented, mapping Aloy to Mario produces the reduced color space you'd expect. Same code, so... I guess it wasn't buggy.
I hit go on a few photos, mapping a->b and b->a. Toning changes can be ever so slight since the color palettes are large.
Taking a step back, my approach was to match an input color to one that has an identical intensity, choosing the closest actual value if possible. This could result in a lot of identical input/output, and some oddballs where you're forced to a very different color because that exact intensity isn't represented. The algorithm could be softened up.
But what about shifting gears and instead of primarily matching intensity we primarily match hue? You could have lights replacing darks but the colors will be as close to the original as possible.
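The hue-first variant, sketched (using java.awt.Color's RGBtoHSB; ties and saturation handling are glossed over):

import java.awt.Color;
import java.util.List;

// Match on hue instead of intensity: lights may replace darks, but the
// replacement color stays as close to the original hue as possible.
public class HueMatch {
    static float hue(Color c) {
        return Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null)[0];
    }

    static float hueDelta(Color a, Color b) {
        float d = Math.abs(hue(a) - hue(b));
        return Math.min(d, 1f - d); // hue wraps around the color wheel
    }

    static Color nearestByHue(Color in, List<Color> palette) {
        Color best = palette.get(0);
        for (Color p : palette) {
            if (hueDelta(p, in) < hueDelta(best, in)) best = p;
        }
        return best;
    }
}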
Again, jpg minimizes any transformative effect until we start pruning the outliers. When you do, you see a bush pop up in Vale's knee pucks, Laguna Seca runoff matching [?] boxes, and Red Bull being recreated with the sky.
The experiment was fun, even if it's a bit of a rabbit hole.
I've done a little more work with my graphics library, following a few threads:
Recent efforts to auto-process photos for web publishing.
Expansion of my code base, including use of Java parallel streams.
Image stylization. Historically I've been experimenting with neural networks, but there's a lot that can be done with algorithms and rngs.
Important: you have to click through on a lot of these images to see what's going on. Even then, keep in mind it's a 50% compressed jpg scaled down to about 1000 pixels on the longest side.
Sampling
Rather than clicking through a file chooser every time, I finally started building a directory of sample photos and screenshots. They're primarily portrait-ish but have a decent variety in most other graphical aspects.
Implementation fundamentals
I didn't have a set goal here, but rather meandered about, implementing bits of functionality based on three primary image processing components:
Canvas: a raster image sort of like BufferedImage, but with the features and interoperability I need.
Selection: a collection of coordinates that resembles selections in Photoshop or Paint. This is how I might, say, define a box for a median filter or an arbitrary selection of similar values.
Brush: a transform that occurs on a selection of a Canvas. This would be like a median filter (box, circle, or other) or an assignment to a given value.
The selection + brush operations can be parallelized easily using parallelStream().
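Bare bones, the three abstractions and the parallel apply look something like this (simplified from the real thing; interface names are shorthand):

import java.awt.Point;
import java.util.List;

interface Canvas {
    int get(int x, int y);            // packed argb
    void set(int x, int y, int argb);
    int width();
    int height();
}

interface Selection {
    List<Point> coordinates();        // the pixels this selection covers
}

interface Brush {
    int apply(Canvas src, Point p);   // new value for one pixel
}

class Apply {
    // Each output pixel depends only on the source canvas, so the per-pixel
    // work parallelizes trivially.
    static void run(Canvas src, Canvas dst, Selection sel, Brush brush) {
        sel.coordinates().parallelStream()
                .forEach(p -> dst.set(p.x, p.y, brush.apply(src, p)));
    }
}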
Basic selection shapes include a circle, box, diamond, and a (box + diamond) star that approximates a circle with no floating point garbage.
Additionally, there are content-sensitive shapes that might grow and shrink based on, say, the similarity of their contents. Side note: jpg compression makes single-pixel selections look bad.
*
* * x * *
* * x x o x x * *
* x x o o o o o x * x * * *
* x o o o o o o o o x o x x x *
* x o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o o o x *
* x o o o o o o o o o o o o o o x *
* x o o o o o o o x x o x x x *
* x x x o x x x * * x * * *
* * * x * * * *
*
Sometimes you need to dump a selection as text to debug.
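For reference, one integer-only reading of the box + diamond star (illustrative; the real shape code may differ):

// A point is in the star if it's inside the box OR inside a diamond whose
// taxicab radius is ~1.5r - all integer math, no floating point garbage.
public class StarSelection {
    static boolean contains(int cx, int cy, int r, int x, int y) {
        int dx = Math.abs(x - cx), dy = Math.abs(y - cy);
        return (dx <= r && dy <= r) || (dx + dy <= (3 * r) / 2);
    }

    public static void main(String[] args) {
        // Dump the shape as text, like the debug output above.
        int r = 5;
        for (int y = -8; y <= 8; y++) {
            StringBuilder row = new StringBuilder();
            for (int x = -8; x <= 8; x++) {
                row.append(contains(0, 0, r, x, y) ? "o " : ". ");
            }
            System.out.println(row);
        }
    }
}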
So if I wanted to do a median filter, I could define that brush and then apply it to the selection shape I wanted. As you can see above, a rounder filter can preserve some detail, such as Nick Valentine's eyes.
And while a naive median filter distorts edges into simple shapes, by using a content-sensitive selection, an average filter can provide an airbrush effect. And by content-sensitive, here I mean it only operates on regions that have similar contents.
Edges
Deltas
Edges are both simple and complex. Dragging a box filter across an image is pretty standard, but it has its shortcomings.
So if I wanted to make this image of a helicopter dog pop more, it'd be neat to trace the outline of his head and ears. (Side note: this disgusts me as a photo enthusiast; photos shouldn't be subjected to instagram filters.) Since Kafcopter is the color of cement, we only see definition on part of his schnozz and can't easily get our outline. Depth of field also shows that blurred edges largely escape a naive edge filter.
For 'delta' I primarily used the absolute per-channel (R/G/B) difference between given pixels in a sample. For the edge value, I experimented with the max delta between any pixels in a given area as well as the average delta. Of course, these yield different ranges of values that then have to be applied in some way (lighten/darken).
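As a sketch, here's the per-pixel edge value using a max-delta variant (center pixel versus its neighborhood; names and details are illustrative):

import java.awt.Color;
import java.awt.image.BufferedImage;

public class EdgeDelta {
    // Absolute per-channel difference between two packed pixels.
    static int channelDelta(int argbA, int argbB) {
        Color a = new Color(argbA), b = new Color(argbB);
        return Math.abs(a.getRed() - b.getRed())
                + Math.abs(a.getGreen() - b.getGreen())
                + Math.abs(a.getBlue() - b.getBlue());
    }

    // Max delta between the center pixel and anything within `radius`.
    static int edgeValue(BufferedImage img, int x, int y, int radius) {
        int center = img.getRGB(x, y), max = 0;
        for (int j = Math.max(0, y - radius); j <= Math.min(img.getHeight() - 1, y + radius); j++) {
            for (int i = Math.max(0, x - radius); i <= Math.min(img.getWidth() - 1, x + radius); i++) {
                max = Math.max(max, channelDelta(center, img.getRGB(i, j)));
            }
        }
        return max;
    }
}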
Sampling radius can change what you find and increase the effective edge width.
Almost every sample from an image has a delta of some sort, so truncating low-end values reduces the noise introduced to the image. Taking this further, I might consider an edge to be a 0/1 type of thing: do I set a threshold and move edges to black/white? Maybe, though being a bit less drastic and binning the results provides a smoother output. In the above example, the top right is the basic edge values; the lower left bins them into four levels, with each level getting assigned pink/yellow/green.
Applying a median filter before looking for edges seems to increase smoothness and reduce the very thick edges that result from dense patches of contrast.
Tracing
The question remains: how do I find edges that exist between similarly-colored surfaces? One idea that I haven't implemented is to normalize contrast by the average value in the region. This would ensure that any region would have edges and elevate the mild edges to match the more distinct ones. I tried out another approach: tracing. Iteratively land somewhere on the image, find the highest delta, then greedily follow the next point of highest delta until some limit is reached. On the plus side, it gives an "edge/no edge" result, rather than a "maybe this is an edge". But, like any heuristic, it can be fooled.
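The tracer, sketched (reusing channelDelta from the edge sketch above; the step limit and threshold are arbitrary knobs):

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class EdgeTracer {
    // Greedily follow the strongest unvisited neighbor until the path hits
    // its step limit or the best remaining delta drops below the threshold.
    static List<Point> trace(BufferedImage img, Point start, int maxSteps, int minDelta) {
        List<Point> path = new ArrayList<>();
        Set<Point> visited = new HashSet<>();
        Point cur = start;
        for (int step = 0; step < maxSteps; step++) {
            path.add(cur);
            visited.add(cur);
            Point best = null;
            int bestDelta = minDelta; // don't follow weak edges
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    Point n = new Point(cur.x + dx, cur.y + dy);
                    if (visited.contains(n) || n.x < 0 || n.y < 0
                            || n.x >= img.getWidth() || n.y >= img.getHeight()) continue;
                    int d = EdgeDelta.channelDelta(img.getRGB(cur.x, cur.y), img.getRGB(n.x, n.y));
                    if (d > bestDelta) { bestDelta = d; best = n; }
                }
            }
            if (best == null) break; // edge/no edge: nothing strong enough left
            cur = best;
        }
        return path;
    }
}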
Stylization
So in addition to having a blob/amoeba that grows by eating up similar pixels, I have a snake. It behaves similarly, but each added pixel cannot border two already in the set. I wasn't sure what it'd lead to, but it was a pretty simple variation on the amoeba. Averaging the selections yielded by the snake gave a pixelated or canvasy texture.
With random color overlays it's easier to see the snake paths. In addition to wanting to eat between i and j pixels, the snake can grow outwardly any number of times after it's done.
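The growth rule, sketched (the similarity test is omitted and the growth order is randomized - both guesses at the real behavior):

import java.awt.Point;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class Snake {
    // Count 8-connected neighbors of p that are already in the selection.
    static int neighborsInSet(Set<Point> set, Point p) {
        int count = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                if ((dx != 0 || dy != 0) && set.contains(new Point(p.x + dx, p.y + dy)))
                    count++;
        return count;
    }

    // A candidate joins only if it borders fewer than two selected pixels,
    // which keeps the shape thin and wandering instead of blob-like.
    static Set<Point> grow(Point start, int targetSize, Random rng) {
        Set<Point> snake = new LinkedHashSet<>();
        snake.add(start);
        Point head = start;
        while (snake.size() < targetSize) {
            List<Point> candidates = new ArrayList<>();
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    Point n = new Point(head.x + dx, head.y + dy);
                    if (!snake.contains(n) && neighborsInSet(snake, n) < 2) candidates.add(n);
                }
            if (candidates.isEmpty()) break; // snake boxed itself in
            head = candidates.get(rng.nextInt(candidates.size()));
            snake.add(head);
        }
        return snake;
    }
}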
As with all photo editing, a light touch yields more pleasing results.
Histograms are a common thing for photo editing, going back to the darkroom and ensuring your print had max black and almost max white. This can equally apply to filter effects, so I implemented a very basic pixel value (brightness) correction. Using the delta to 0x000000 and 0xffffff, the algorithm 'stretches' each pixel toward either end. The middle is the median of the existing histogram; then, as you walk away from the middle, you increasingly add to the pixel value until at the end you're adding the rest of the gap.
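The stretch itself, per brightness value (a sketch; min, max, and median come from the image's histogram):

// Walk values away from the median: the farther from the middle, the more of
// the remaining gap to 0 or 255 gets added. v == max lands on 255, v == min
// lands on 0, and v == median is untouched.
public class HistogramStretch {
    static int stretch(int v, int median, int min, int max) {
        if (v >= median) {
            double t = (max == median) ? 0 : (double) (v - median) / (max - median);
            return (int) Math.round(v + t * (255 - max));
        } else {
            double t = (median == min) ? 0 : (double) (median - v) / (median - min);
            return (int) Math.round(v - t * min);
        }
    }
}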
The results were pretty easy to test: take an image, mess up the histogram, run the algorithm on it. In this case, it was given max white, but no max black pixels. The correction appears to lose some saturation, but that's not a difficult thing to fix.
And the other way: in addition to not regaining saturation, the restored image is contrastier than the original.
Lastly, if the image is center-biased without either end. Saturation, again.
The road ahead
I have a lot of brushes to code up. Then effects to stack.