This one is all about plateaus and precipices. And other things.
Ukraine - precipice
Global security and politics are a bit out of scope here, but I ran into some interesting stuff during nap time reading. News headlines at the moment are all about Russia potentially reprising its 2014 Crimea invasion with another attack on Ukraine.
These headlines make me think back to when zillennials were convinced that the Soleimani thing meant they would definitely be drafted for a war with Iran. Having been inundated by headlines about Saddam, the Kims, and Putin himself over the years, it feels like these things are rarely more than posturing/distraction. But from the perspective of media and dramanauts, pre-scheduled military exercises are far less exciting than "guaranteed nuclear war".
But like I said, this post isn't about "is he or isn't he", it's more of a discussion of global security and politics. And while I don't really know the first thing about either, I have played Twilight Struggle so I'm something of a foreign policy expert.
One thing that came up: I didn't realize how heavily propagandized NATO is in Russia. Russian media seems to treat this mutual defense pact as some sort of offensive menace, as if the NATO member states could/would band together and march on Moscow. Somehow this idea has escaped the tiny amount of scrutiny required to dispel it completely.
So while I don't expect anything to come of this crisis, various legitimate entities in Russia have laid claim to Ukraine and expressed a desire to use it as a buffer against the west. That's rather funny in the context of their fear of NATO - their solution to NATO encirclement is to create a significantly larger border with it.
On the homefront, I caught a Steve Inskeep interview with a Bush-era ambassador who is now working at a think tank. The dude was livid that Biden wasn't rushing forces to eastern Europe. That might be the play, but he didn't even acknowledge that Cold War brinkmanship may be exactly what Putin needs to justify a preemptive invasion. Then again, there's always politicization to consider, i.e. maybe his criticism was just a disingenuous attempt to make the administration look weak.
This all seems fascinating and extremely wasteful in equal measure.
/u/AdvancedAdvance
Time to stop fucking around threatening Russia with sanctions or military action and use some threats with some teeth -- banning Russia for life from EuroVision
/u/weber_md
That and an Adidas embargo will bring them to their knees.
/u/TheBladeRoden
Flood the region with chairs to kill the local squatting industry.
I happened upon a pretty interesting Reddit comment that connected a bunch of dots between this and various news items over the past decade. The author explains that it's a synopsis and up to the reader to validate to their satisfaction. Emphases mine, spelling errors left as-is:
xlDirteDeedslx
Putin sold gas extremely cheap to mobster Dmytro Firtash in Ukraine from the Russian state owned gas company Gazprom. Firtash sold that gas on to Ukraine and Europe for a huge markup. Firtash used the money to corrupt politics, buy out businesses in Ukraine, and install people in power sympathetic to Russia. Firtash also kicked money back from the sales to Putin and his Oligarchs in various ways so they could profit from the state owned Gazprom gas off the books.
Trumps later campaign manager Paul Manafort worked in Ukraine to revamp the image of Viktor Yanukovych. Yanukovych was a politician, crook, and Putin puppet. Manafort helped to get him elected as leader of Ukraine and Firtash provided the money for him to do so. So basically Manafort and Firtash helped install a Putin puppet leader in Ukraine with discount Russian gas proceeds. Something Manafort himself would repeat in the US years later. Yanukovych proceeded to steal about 1 billion from the people of Ukraine. So basically Russia was in full control there.
Yanukovych was ousted as leader of Ukraine during their revolution in 2014. He refused to sign agreements that would bring Ukraine closer to the West and the people had grown tired of his corruption. He fled before the Parliament could vote to impeach and replace him. Once a new president was chosen they wanted to go after corruption and much of that was in the gas industry that was funding these Russian puppet leaders and corrupt businessmen.
Biden's son Hunter got a job at one of the major gas companies in Ukraine (Burisma) and Joe worked towards getting corrupt officials prosecuted there. They (Western Governments) were going after the politicians and funding behind the corruption more or less. Biden/Obama administration could get nowhere because Firtash had bought out the judicial system including Viktor Shokin the head prosecutor of Ukraine. Biden threatened to withhold 1 billion in aid to Ukraine unless Shokin was removed as prosecutor, Ukraine of course quickly did so.
This opened the door for the corrupt politicians and business owners puppeted by Russia to be prosecuted. Firtash got arrested on an unrelated bribery charge and he wanted a stay of extradition to the US from Trump. To get it he offered Guliani manufactured dirt on the Bidens and sent two men on his payroll Lev Parnas and Igor Fruman to accomplish it. Parnas and Fruman were in the states funneling dirty Russian money to Republican politicians.
They all ended up getting caught because the whole quid pro quo phone call got leaked. So the whole thing was because Firtash wanted revenge against the Bidens for dismantling his corrupt empire that let him buy out everyone in Ukraine. Guliani tried to paint it as Joe Biden got the prosecutor fired in Ukraine because Hunter was corrupt and Joe didn't want him to be investigated. In reality the prosecutor was fired because he wouldn't investigate criminals because he was bought out by Firtash and Russia. Putin took Ukraine via corruption and now that's ended he wants to take it by force.
This is a summary and accurate to the best of my knowledge. Best way to learn about all this is research the names above especially Dmytro Firtash.
Edit - Just remember this is a summary people, not an investigative report. To learn the full story and details RESEARCH, lots of articles out there.
Some of that discussion pointed to a book I've heard referenced over the years (Dugin's The Foundations of Geopolitics). Because nap time, I finally went so far as to read the Wikipedia page. Published in 1997, the book - well, as wiki quotes - "reads like a to-do list for Putin's behaviour on the world stage".
Russia should use its special services within the borders of the United States to fuel instability and separatism, for instance, provoke "Afro-American racists". Russia should "introduce geopolitical disorder into internal American activity, encouraging all kinds of separatism and ethnic, social and racial conflicts, actively supporting all dissident movements - extremist, racist, and sectarian groups, thus destabilizing internal political processes in the U.S. It would also make sense simultaneously to support isolationist tendencies in American politics".
I remember this came up during that Maria Butina thing. And bringing it back to current events:
Ukraine should be annexed by Russia because "Ukraine as a state has no geopolitical meaning, no particular cultural import or universal significance, no geographic uniqueness, no ethnic exclusiveness, its certain territorial ambitions represents an enormous danger for all of Eurasia and, without resolving the Ukrainian problem, it is in general senseless to speak about continental politics". Ukraine should not be allowed to remain independent, unless it is cordon sanitaire, which would be inadmissible.
A JHU SAIS professor and former State Department dude wrote an insightful piece in The Atlantic.
But the stakes are higher for Russia. It can temporarily insulate itself from economic sanctions, but the cost of war with Ukraine will eventually be even more instability at home. The secret police can poison, imprison, or kill dissident leaders such as Alexei Navalny, but it will have a lot more difficulty massacring crowds of angry mothers of wounded or dead soldiers. A Russia isolated from the West and punished by economic sanctions will become, more than it already is, a kind of vassal state to China, and Russian diplomats and soldiers know that the Chinese are unsentimental in their treatment of their dependents and satellites.
The Western reaction thus far has been prudent and effective. The United States has led effectively, and President Joe Biden has behind him a remarkably bipartisan consensus. The administration has made appropriate threats, prepared appropriate sanctions, and begun taking the most important step, delivering anti-tank and antiaircraft missiles to the willing hands of Ukrainian soldiers.
Far Cry - plateau
J and I have hit the Ubi plateau - that part of every open world Ubi game where the difficulty curve and newness have both flattened. Luckily it plays well and has fun characters, so while we're not going to grind out every hostage rescue, we'll probably do all the major side tasks.
Like a lot of games, many of FC6's side tasks are formulaic; variety and intrigue are carried by the main quest. That being said, some of the treasure hunts are pretty neat. The Emerald Skull and Goonies-like Sword-Crossed Lovers hunts were both excellent.
Playoffs - what's after the precipice? Oh yeah, demise.
The memes were good but alas, the Raiders were bested by the Bengals.
Stonks - precipice?
Is this the correction? Who can even say. Friday looked grim. Today the major indexes dropped several percent only to finish the day in the green? Wild times.
While last week was pretty bloody, the sideshow was Netflix's 20% pullback after earnings. On Friday NFLX dropped from $500 to $400 and had crazy-high IV; 0DTE $395 puts were selling for $4.50. I could not resist. There was no second leg down, but toward the end of the day it looked like I might be the proud owner of some NFLX at $395. A last minute pump triggered my $50 exit price and made for a fun day of F5ing.
Home - plateau
Roof's done, no falls off this plateau. Except for that one fall through the scaffolding, but that was months ago.
Luckily the veranda roof was not immediately destroyed by the 1-3' tsunami that I was warned of just in time.
Tsunamis notwithstanding, it's good weather for getting outside.
The Unnatural Ones have twice failed our smash and grab scenario, despite the superb Cragheart/Cloak of Phasing combination.
Following my last post's extensive discussion of neural graphics editing, I threw together the GUI and business logic to make it work nicely. And now, some banter:
KO
Yeesh.
But then again Em is gawaii neoi.
You are good.
You lost me on neoi... Gwaii is ghost. What's neoi?
Connie
Gwaii is good.
Neoi is daughter.
Bwahaha!!! Okay I got the pronunciation down.
She's a gwaii neoi, but Liam is a kwaii jai.
KO
Hahahah.
And CR is gwai lo.
[Ghost emoji]
[Eyes emoji]
ME3 - precipice
Adrea Shepard is onto the final stretch of the ME trilogy and its Season 8-tier ending. I never did see the 'extended' ending that EA/BioWare was quick to release when the fan response was "wait, wtf?"
Citadel DLC
I read the GameFAQs guides just to make sure I didn't miss anything big; they all recommended playing the Citadel DLC just before the final series of missions. Centered on some shore leave hijinks, the DLC is campy and lighthearted and brings together characters from all three games. I think the faqs are wrong. It'd be better to play it before the main character deaths that occur mid-game.
Jan 1's covid spike looked bad, but case rates have just kept going. As such, here's a long post that covers fractals, autoencoders, style transfer, and some vidya.
Burning ship
Burning ship fractal, looking like it sailed up the Seine and parked in front of a big gothic cathedral.
I've never done a fractals exercise before, but wanted to give it a go to see if it could be used for graphics stuff. Based on the name and pseudocode, Burning Ship looked like a fun/easy one to try. There are some pretty cool fractal algorithms that do recursive computations, but this one is just an (x, y) -> value computation. The code amounts to something like this:
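(A minimal Python sketch of the escape-time loop; the viewport defaults and grayscale mapping are stand-ins for my own helper types.)

```python
import numpy as np

def burning_ship(cx: float, cy: float, max_iter: int = 100) -> float:
    """Escape-time value in [0, 1] for the point (cx, cy)."""
    zx, zy = 0.0, 0.0
    for i in range(max_iter):
        # The Burning Ship twist: take absolute values before squaring.
        zx, zy = abs(zx), abs(zy)
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:  # escaped the radius-2 circle
            return i / max_iter
    return 1.0  # assumed bounded

def render(width: int, height: int, center=(-0.5, -0.5), zoom=1.0):
    """Map each pixel through a zoom/center viewport (my Location type is
    basically this pair) and collect grayscale values (RGBPixel in my
    version applies a palette instead)."""
    span = 3.0 / zoom
    img = np.zeros((height, width))
    for py in range(height):
        for px in range(width):
            cx = center[0] + (px / width - 0.5) * span
            cy = center[1] + (py / height - 0.5) * span
            img[py, px] = burning_ship(cx, cy)
    return img
```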
Non-standard types in my real version: RGBPixel is just a personal implementation of an RGB value with helpers. Location (of the viewer) is just a glorified struct with values capturing zoom and center, and I'm not entirely sure I calculated these correctly.
"Zoom and center?" Yeah so the fractal plots a fixed image where you can infinitely pan and zoom to see interesting stuff. Like this:
It was a fun mini-project, but has pretty limited application to image stylization without a lot of code/creativity. There is some implementation left to the user in terms of converting a 0.0-1.0 value to an RGB value, so I opted for the closest I could come up with to the colors of an actual burning ship.
Got tiling going (to drag an autoencoder over a larger image). Here.
Applied the technique to reproduce graphics filters. Here.
For everything south of here, I used various implementations of tiling - from hard checkerboard stitch to edge feathering. So if you see harsh edges, it's just because I didn't throw as many cycles at that image.
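The feathered variant boils down to overlapping tiles cross-faded by a weight ramp. A rough numpy sketch, where model_fn and the stride handling are simplified stand-ins:

```python
import numpy as np

def feathered_tiles(image, model_fn, tile=224, overlap=32):
    """Run a tile-sized model across a larger image, blending the overlaps.

    image:    (H, W, 3) float array
    model_fn: anything mapping a (tile, tile, 3) array to the same shape,
              e.g. a wrapped Keras predict call
    Assumes H and W line up with the stride; real code handles the ragged edge.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    weight = np.zeros((h, w, 1))

    # Per-tile weights ramp from ~0 at the tile border to 1 past the overlap,
    # so neighboring tiles cross-fade instead of hard-stitching.
    r = np.linspace(0.0, 1.0, tile)
    edge = np.clip(np.minimum(r, 1.0 - r) * (tile / overlap), 0.0, 1.0)
    feather = np.maximum(np.minimum.outer(edge, edge)[..., None], 1e-3)

    step = tile - overlap
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            out[y:y + tile, x:x + tile] += model_fn(image[y:y + tile, x:x + tile]) * feather
            weight[y:y + tile, x:x + tile] += feather
    return out / np.maximum(weight, 1e-8)
```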
Input image; compare it to the mildly stylized AE-generated image above.
My second AE implementation was pretty straightforward (code below) and didn't require the model to do much work to remember images. I trained it with adam and MSE. Notice it's not really an autoencoder because there's no latent/flat layer.
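A Keras sketch of that model - layer counts and kernel sizes here are illustrative rather than the exact ones I used:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_passthrough_ae(tile=128):
    """Conv stack in, conv stack out, no flattened latent in the middle -
    which is why it barely has to work to reproduce the input."""
    inp = keras.Input(shape=(tile, tile, 3))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_passthrough_ae()
# model.fit(tiles, tiles, ...)  # identity-style training on image tiles
```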
Model #5 attempted to introduce a convolutional bottleneck by putting two eight-kernel layers of sizes 7 and 13 in the middle of the network. The idea was to create kernels that are estimations of large features and let the smaller/more numerous kernels downstream reconstruct that as it saw fit. Once again, it's pretty close to the source image.
Along the way I added tweaks to support non-RGB color spaces (individually) for input and output. Model #6 stacked convolutional layers of 3, 5, 7, 9, 5, 3 with some dense layers and noise/dropout. Again it looks like a non-blocky, compressed image (which is autoencodery!).
Using model #6 in HSV mode, I swapped the MSE loss calculation out for Huber, which is like MSE for small errors but linear for large ones, so big differences aren't punished quite so hard. I think this one was trained pretty lightly, but it shows a less-perfect attempt at image reconstruction.
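The swap itself is one line; the delta below is just a guess at a reasonable threshold:

```python
from tensorflow import keras

# Huber is quadratic below delta and linear above it, so a badly-missed pixel
# costs proportionally less than it would under MSE.
model.compile(optimizer="adam", loss=keras.losses.Huber(delta=0.1))
```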
But what about real bottlenecking?
Autoencoders are all about shrinking the input, squeezing it through latent space, and transpose convolving it up to the input size. For whatever reason, I haven't had much success with this. I tried the standard convolve-maxpool-2x2-repeat to shrink the input space and allow the inner kernels to 'see' more of the image. Then it was reconstructed using transpose convolution. My output resembled a checkerboard that seemed to correlate with the latent layer, so like the transpose layers weren't doing much?
I switched things up a bit and did a single maxpool 8x8, no flattening, and an 8x transpose convolution to restore the original dimensionality. Once again, the output looked like a bad upscale.
I traded out the maxpool-transpose strategy for strided convolution and upsampling followed by convolutions (in place of transpose). The output was less pixelated, but still it seemed like the generative half of the network wasn't being particularly helpful.
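For reference, that last variant looked roughly like this in Keras (filter counts are illustrative). Swapping Conv2DTranspose for upsampling plus a plain convolution is the standard fix for checkerboard artifacts, which show up when the transpose kernel size isn't a multiple of the stride:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_bottlenecked_ae(tile=128):
    """Strided convs down, UpSampling2D + Conv2D back up."""
    inp = keras.Input(shape=(tile, tile, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)  # bottleneck-ish
    x = layers.UpSampling2D(2, interpolation="bilinear")(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2, interpolation="bilinear")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```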
Back to variations on a convolutional theme
Some output from the models discussed in the earlier section.
Input modification
While bottlenecking the autoencoder is one way to force it to learn rather than remember, giving it dirty input is also useful. While supervised learning often involves finding, buying, or making labeled datasets, this was simply a matter of throwing my substantial Java graphics library at manipulating the 'good' images.
Finding the right amount and type of artifact generation is not straightforward, so I just WAGged it with two approaches (sketched below):
Median filtering and desaturating various rectangular areas in the image.
Introducing small artifacts like areas of noise, unfilled rectangles, and other color manipulations.
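A rough numpy sketch of the corrupter - the real one lives in the Java library, and the rectangle sizes/counts here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng()

def corrupt(img):
    """Dirty up a clean (H, W, 3) float tile in [0, 1].
    Assumes tiles bigger than ~32 px; a blur/median pass would slot in too."""
    out = img.copy()
    h, w, _ = img.shape
    # Approach 1: desaturate a random rectangle.
    x, y = rng.integers(0, w // 2), rng.integers(0, h // 2)
    bw, bh = rng.integers(16, w // 2), rng.integers(16, h // 2)
    patch = out[y:y + bh, x:x + bw]
    out[y:y + bh, x:x + bw] = 0.5 * patch + 0.5 * patch.mean(axis=2, keepdims=True)
    # Approach 2: sprinkle small noise rectangles.
    for _ in range(rng.integers(1, 6)):
        nx, ny = rng.integers(0, w - 8), rng.integers(0, h - 8)
        out[ny:ny + 8, nx:nx + 8] = rng.random((8, 8, 3))
    return np.clip(out, 0.0, 1.0)

# The training pairs then become dirty -> clean:
# model.fit(np.stack([corrupt(t) for t in tiles]), tiles, ...)
```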
Top image is a partially blurred/desaturated version of the one used previously. The rest of the images are autoencoder outputs, using the same models from before.
The results showed that the autoencoder was trying to fix those areas and perhaps would be more successful with more/better training. On the other hand, this wasn't too far off some of the upscaling/inpainting examples I've seen, GAN-based ones excepted. GANs seem to do a lot better at inventing plausible images but often create something that looks right until you realize, say, the rose petal has thorns. That said, generating a rose petal with thorns sounds pretty neat. Well, it's not the most sophisticated symbolism, but I'm not going to judge the visual metaphors of an hours-old artificial intelligence.
I found that my more successful implementations were bottoming out (loss-wise) after a few hours of training. I mentioned earlier that I experimented with Huber loss since I'm not so much looking for (nor computationally equipped for) photorealistic image reproduction. I think it'd be neat to have a content-aware style filter, so a single pixel deviating from its expected value need not be punished with squared error.
And with more thought, it felt like in a small image with 65k pixels, I might be reaching a point of trading loss in one place for another. Beyond Huber, I had a few thoughts on this. One consideration is that losses can be whatever you want so it would be reasonable to supplement MSE or Huber with a more forgiving loss computation, e.g. those same metrics run on a median-filtered image might allow spatially-close guesses to be rewarded even if they're not pixel-perfect.
I ran into a wall when I looked to do anything more complex than adding precanned loss functions. It's probably not too difficult, but between effort level, Python-tier documentation, and miserable search results, I simply did not go very far down that path. And even when I did the simple thing of setting loss to MSE + KL divergence (a measure of how much extra entropy one distribution introduces over another), running the autoencoder later gave me an exception that the custom loss function wasn't captured in the model (and somehow needed for a forward pass).
I'm sure a lot of this is just my unfamiliarity with the framework. I'll keep at it, since I think loss calculation can be pretty important to align with what you want out of your network. Ideas (one is sketched after the list):
Already mentioned, median filter the predicted/true matrices to not punish pixel-by-pixel error quite so hard.
Weight pixel brightness accuracy over hue/saturation.
Supplement an MAE or Huber loss with MSE on sampled areas.
Use KL to encourage the network to either simplify or complicate.
Fiddle with coefficients of multi-parameter losses.
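As a concrete example, here's roughly what the brightness-weighting idea looks like as a custom Keras loss, along with the fix for the serialization exception above. The channel weights are made up:

```python
import tensorflow as tf
from tensorflow import keras

def brightness_weighted_mse(y_true, y_pred):
    """Weight the V channel over H/S; assumes the model runs in HSV mode."""
    weights = tf.constant([0.5, 0.5, 2.0])  # hypothetical per-channel weights
    return tf.reduce_mean(tf.square(y_true - y_pred) * weights, axis=-1)

model.compile(optimizer="adam", loss=brightness_weighted_mse)

# The "custom loss not captured in the model" exception at reload time is the
# standard Keras serialization gotcha; the function has to be re-supplied:
# keras.models.load_model("ae.h5",
#     custom_objects={"brightness_weighted_mse": brightness_weighted_mse})
```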
Style transfer
The input and three output images, manually combined to produce the image at the top of the post. The style images used to produce each are shown. This wasn't particularly well-baked; you can see the per-tile variation that could be normalized with more passes.
To recap neural style transfer: it's a nontraditional machine learning technique that feeds a content image and a style image through a popular classification network (VGG), then iteratively modifies a generated image to minimize its mismatch with the content features of one and the style statistics of the other. It was one of those popular medium.com things five or so years ago, but isn't quite as exciting as the hype.
I modded the neural style transfer demo code that works on 224x224 images to instead apply the algorithm to as many tiles as needed to cover a larger image.
I let the algorithm try a sampling of different input areas from the style image to let it decide which part it could best optimize.
I decided style transfer is an interesting, content-sensitive method for photo stylization, but requires postprocessing to blend it nicely. It's also time/compute-intensive and thereby only suitable for sparing use. In contrast, styling with traditional deep learning methods is costly to train but quick to execute.
The style transfer image generation technique chooses various so-called 'style layers' from the VGG19 convolutional blocks. I previously pulled on the thread of randomly dropping some of these layers just to see what would happen. I didn't go far down that path, but autoencoder training gave me some time to dive back in. My variation on the previous idea was to use a completely random selection of convolutional layers to be the style component.
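The layer roulette is simple enough. A sketch, assuming the usual VGG19 layer naming and the demo code's content layer:

```python
import random
from tensorflow.keras.applications import VGG19

# Let chance pick this run's style layers from VGG19's conv layers. The
# b2c2b5c2-style output names below come straight from the chosen layers.
vgg = VGG19(weights="imagenet", include_top=False)
conv_names = [layer.name for layer in vgg.layers if "conv" in layer.name]
style_layers = sorted(random.sample(conv_names, k=random.randint(2, 5)))
content_layers = ["block5_conv2"]  # the demo code's usual content layer
print(style_layers)  # e.g. ['block2_conv2', 'block5_conv2']
```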
Results
Using that Horizon Zero Dawn screenshot from the autoencoder section (first image below) with a brush style input, I got a pretty wide variety of results. Note the tiling created some hard seams that could be fixed with additional iterations.
To see the layers used, look at the image names, e.g. b2c2b5c2 used block2_conv2 and block5_conv2.
Switching it up to the Kanagawa wave and Vale exiting the 'Screw:
Some additional run parameters:
Looked at four different style tiles to find the best one.
Descended on each style tile for 14 iterations.
Content was weighted about 100 times more than style.
Again, as standalone images these are fairly meh, but even with some mild blending back to the original image you can produce neat stylizations:
ME3:LE - Leviathan, Omega, and Tuchanka
Spoilers to follow...
Leviathan DLC
So I had a pretty good idea that Leviathan was about meeting the Reapers' makers, but for some reason I was pikachufaced by them being huge, Reaper-shaped, and aquatic. The confrontation/conversation was pretty neat if you ignore that part of the premise is negotiating them out of complacence and into being a minor war asset. And while the Leviathan/Reaper/Prothean lore is all worth listening to, this DLC and the Javik one exhaust the great mysteries of the ME universe (except, of course, why the council refuses to believe anything). Some of it could have been left to speculation.
Leviathan could have been the ME2 plot; track down the Reaper progenitors to find out how to survive the cycle.
Omega DLC
I guess Cerberus (it's always them) took Omega to do reapery stuff and Aria wants it back. Kind of a blah plot, it's amazing how the galaxy's Q-Anon can manage to take over colonies, lawless strongholds, and even the Citadel. Aria reclaiming her throne is yet another, "hey guys, half the galaxy is being destroyed, do we really have time for this?" I can successfully tell myself the ending invalidates it all anyway, so I might as well enjoy it. So while this story arc isn't particularly important or interesting, the setting and characters are pretty great.
The story is a bit heavy-handed at driving home the idea that Nyreen Kandros is the anti-Aria, but she's otherwise a neat frenemy in her short screen time.
Speaking of heavy-handed. "Hey, how do we make this villain seem smart? Like *real* smart? Oh I know, show him playing chess!"
Act II
Tuchanka is saved, Victus is vindicated, and Mordin is a damned hero :'(
And yeah, the Indoctrinated Illusive Man managed to pull a January 6th on the Citadel. It's not nonsensical, but more and more I feel the series would have been a lot better off if Cerberus had remained a minor plot arc in ME1. Instead, they are the Cersei to the Reapers' White Walkers, except this Cersei is absolutely everywhere and seems to have infinitely deep pockets despite how failure-prone she is.
In contrast, the Geth-Quarian and Krogan-Everyone arcs are pretty solid.
Gloomhaven
A Party Has No Name has gotten a few sessions in recently. Not liking the early-game Spellweaver, I was happy to hit my easy retirement goal (200 wingwangs) and unlock the Quartermaster. He's always prepared and kind of looks like a Bee. So he's Prepared Bee. He's trustworthy, loyal, helpful ... you get the picture.
The party hit some difficulties with early scenarios playing on hard. We dialed it back to normal and have beaten one, breezed through another, and gotten wrecked by a boss.
Also a clencher: the week 18 de facto playoff matchup was, in the words of NFL Twitter, "Best season finale ever?" Early-game momentum swings, then a Chargers comeback capped by Herbert taking hits from Crosby on downs 1-3 and getting it done on 4th and long. There were plenty of referee antics as well.
With both teams converting FGs in OT, the tie was within reach. The Raiders ran the ball (effectively) and weren't hurrying. They needed to get down the field and not give up a punt or turnover. The Chargers looked to bend but not break - then again they were having trouble with Jacobs all game.
Perhaps Bisaccia and Staley exchanged glances across the field and decided they could mutually accept guaranteed playoffs.
The timeout
With a 40-second play clock, it really doesn't matter whether the next play starts with 38 or 34 seconds on the game clock; if they run and the ball stays in bounds, they can let the clock run down to zero after the play in either scenario.
Were the Chargers trying to force the Raiders to run a play to get the ball back? Absolutely not. For one, if that had been the case, Los Angeles would have called a timeout immediately after the second-down run. It did not. Furthermore, it had nothing to gain by getting the ball back. Staley's team is in a vulnerable position given the field position and doesn't want to incentivize the Raiders to try to score.
Indeed, a timeout with 38 seconds meant the Raiders could run/kneel on third down and let the game end unless the Chargers used their second timeout. After the game Staley explained that it was to change personnel, and he perhaps signaled that by waiting for the play clock to run down. Then again, even the casual NFL fan has seen some pretty bizarre effects of fog of war in weird situations that no team prepares for. Normally it's a PAT question or on which side of the two-minute warning to call timeout. This one was considerably less conventional, with the added ambiguity of what Staley was intending with his timeout call.
Cris Collinsworth quickly drew the wrong conclusion and thereby stoked the controversy, because of course.
After the timeout, Jacobs picked up a first down and got his team in position for a sub-50 yarder or (with the first down) the ability to kneel the game out.
The decision
deutschdachs
"Well we could tie for the meme and play a team who's been in the past two Super Bowls ORrrr we could win this game and play a team whose last playoff win came during the Cold War and against the Houston Oilers..."
Bisaccia decided to wind the clock down to two seconds and attempt the game-winner. It was mathematically riskier than a kneeldown, but I presume there was a Bengals/Chiefs calculation that had been thought through before the game.
The NFL subreddit is pretty good and the game/post-game threads had some spicy takes. Neutrals very much wanted the tie to happen. Part of the rationale was that this would be the last chance for the tie dilemma to happen - the NFL will have these games play simultaneously starting next year. That's how it's been done in soccer since a similar controversy and it generally works. It won't be quite so straightforward in a league/sport with a clock that stops, turn-based play, and high-bandwidth communication between the sideline and players.
Going in, I would have thoroughly enjoyed a 'Tiegate' scenario where both teams very obviously settled for a 0-0 tie. Al Davis and John Madden would probably have enjoyed the unique situation where "Just win, baby" actually needs to consider winning the game and winning the season. I imagine Chucky would have enjoyed his team embarrassing the NFL after it leaked his "locker room emails" while investigating the misdeeds of another organization. Goodell would probably have disqualified both teams, because the spirit of competition outweighs the unforeseen technicalities (except when tucking the ball is considered an incomplete pass).
After 69:58 of game time, kneeling it out felt wrong. That game, and specifically Justin Herbert, changed my mind.
Epilogue
In honor of this historic matchup, I'll post the Go Charge Go copypasta in its entirety:
hello. chargers need to win. i think they cand o that by applying more pressure on both sides of the ball. by dosng this they should draft good people that can do that. joey bosa is a good start but not good enough. i will alszso review their draft picks here. as i said joey bosa is a good but he was not th e best. they should have got jeremy tunsil. then in the next round they got hunter henery. i think he is goo d because he is a tight end like rob gronkowski who is also good. then they got max tuerk. this was a bad pick. they did not neeed a center. then they gotr joshua perry. this was an excellent pick. the biggest seal of the draft. he will mbe amazing. then they got drew paser a punter. this is a pure sit pick because he is a punter and it is late. i hate the chragrers for pickig this person. bten i forgive them for getting donovan clark. this is a good person. tunsil would be bette rthough. now that he have established the good of the draft picks we should see were thesee fit in. bosa can start riigh away. he is ready for it. hutner henery can aalso sart. i also beleve that max tuerk can start as well as s joshua perry. but not tat punter. ia also belueve donovan clark can do well bcause he is ctaully my second vousin. i do not if he can start though. but what about the quaerter back situation. phillip rivers is not that great abd he is aaging. if they want an aging old qb they should get peyont manning. i beleive theuy should have drafte dpaxton lynch. joey bosa is good btt he coudl ahve waited untitl the second round. paxtont lunch could have sstarted. i hate seeing him cry drafted by the broncoss. the chargers also signed lotes of udfass. whcih mean s undrafted free agent. will go go througg each one. first ikechi ariguzp. he is ok but we dont need old. then manuel aspiralla. we do not need cb, bad choice. then ben becjwith. this is a good signing. he can start. then cameron clemmons. anither good one. he can start. then titus davis. we dfinitely need wrs so this is okay. then nick dubar.nhhe is okay.then jahwan edwards. we do need rb bcause we signed dameraco murray. bad choice.then we signed erioc frohnhapefle. h won the chad pennington award is good. then curtis grant. def need ildb good. then brock hekking. this is good but i also would have rather brock oswwieler because we dont need rivers. then gordon hill. he is a good peosn. then josh lambo. another dumb punter. stop.then johhny lowder milk. we do need s so good. then ryan mueller. he has two positions which is weird. dont like. then brian parker. he was good. rhen dreamius smith. also good. thenf inally we have cole stoudt qb. this is the taelnt we need. the future of our franchise. rivers is old and bad. this is the toung yalent we want. then tyrell williams. good complement to stoudy. then again demetrious wilson. good backup to stoudy. by doing all ive said we can definitely became a great teama gain. use the plaeyrs well. go charge go
Changing the calendar number from 2021 to 2022 means there's data to be analyzed.
Omicron
Per Jes, hopefully the case rates mean we get to immunity faster. I can't imagine stats like these before vaccines were available; all that masking and "two weeks to flatten the curve" and "shelter in place" probably saved a lot of lives.
Weaving through web traffic logs
I did some coding on a rainy day, electing to hit an issue I had been ignoring for some time. To track hot/top posts and images (see nav bar) I parse server logs for hits. It's not all that interesting, but I was unfairly counting certain images because they don't require a click-through to be seen.
Fixing the issue involved adding a parser for the HTTP request referrer, which led down a small rabbit hole I'll get to in a second. But while I was checking out the logs I found some neat injection attack requests. One used an escape sequence in the request, another was this:
Capturing the referrer data meant I could toss everything into a hashset and see if there's anything interesting (a parsing sketch follows the breakdown). The breakdown:
Search engine queries accounted for most referred hits.
Another big one was those sites that crawl the web for images and attach autogenerated text to them in hopes of getting page rank and clicks. E.g. this one used a Viscera Cleanup Detail screencap. Most referred to outsourced diagrams I posted for shower plumbing and VR4 maintenance. Also a fair number of these sites are no longer valid; if I had to say, they disappear as soon as they're caught googlebombing.
Similarly, I borrowed an image for a convolutional autoencoder post that was then featured here. That might be what's linked from a Colorado State University student forum, but I can't access it.
And to round it out, this blog post seems to have been written by a human and includes a kinda-funny Borderlands tierlist that I posted above.
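For the curious, the referrer pass is basically a regex over combined-format log lines. A simplified sketch - my real parser also does the hot/top counting:

```python
import re

# Pull the Referer field out of combined-format access log lines and bucket
# the unique values. The file name and format details are my setup's.
REF = re.compile(r'"(?:GET|POST|HEAD) \S+[^"]*" \d{3} \S+ "(?P<ref>[^"]*)"')

referrers = set()
with open("access.log") as f:
    for line in f:
        m = REF.search(line)
        if m and m.group("ref") not in ("-", ""):
            referrers.add(m.group("ref"))

for ref in sorted(referrers):
    print(ref)
```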
Navigation
While I was knuckles-deep in the code, I tried using iframes to do the navigation bar found on the right. Currently the hot/top, features, and monthly links are generated with each post. On the plus side, the hot/top information for each page remains fixed for that point in time - if I iframed a shared nav page, it'd just be whatever is the latest. On the minus side, embedding the full nav pane on each post is a fair chunk of storage (on aggregate) and it means old months won't have links to future months.
Anyway, in experimenting with iframes, I quickly ran into the frustrating limitation that they require a fixed height. With a little digging, I found that in CSS you can specify "height: 100%", but that would simply scale it to the browser height and then add a scroll bar. Gross. I could specify a massive fixed height. Also gross.
What's the standard solution? Like everything with web development, it's a Javascript band-aid. No thanks.
Using iframe tag the content inside the tag is displayed with a default size if the height and width are not specified. thou the height and width are specified then also the content of the iframe tag is not displayed in the same size of the main content. It is difficult to set the size of the content in the iframe tag as same as the main content. So its need to be dynamically set the content size when the content of iframe is loaded on the web page. Because its not possible to set the height and width all the time while the same code execute with a different content.
There is a way to make it dynamically by using some attribute of JavaScript.
Lunch Lady
Cattle's Steam gift to everyone was Lunch Lady. It's a co-op stealth game that really captures the childhood experience where you're breaking into school at night to find the final exam answers (spread throughout the campus) but need to avoid the alien/zombie lunch lady that patrols at night.
It's good for one or two sphincter-pinching rounds.
Of my four leagues, only the Dominicas made it through to the championship. It's not looking promising, but I believe in Travis Kelce.
Active/wheel trading
Okay yeah I realized later I could have made the inside and outside bets make sense.
Now that 2021 is in the books, I have some investment performance data to look at.
A couple of quick caveats:
I switched to OpenOffice and it's frustratingly unable to show x-axis dates correctly. I won't bore you with the threads I chased down that confirmed it is an issue.
I parsed the basic data provided by my broker, which doesn't include open/close date information. Wanting to make some cumulative plots, I used option expiration dates to indicate the closure of the position (and accumulation of funds). In a lot of cases (especially selling options), this just means everything is shifted right by a week or two.
Nominal P/L isn't all that helpful but I don't have a great day-to-day measure of what funds I dedicate to this strategy. If it helps at all, I allocated 60-80k for this exercise at the beginning of the year.
I truncated the boring/insignificant parts of some of the charts below.
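For reference, the cumulative series boils down to a groupby on expiration date. A pandas sketch with hypothetical column names (every broker's export spells these differently):

```python
import pandas as pd

# Hypothetical column names; adjust to whatever the broker CSV actually uses.
trades = pd.read_csv("broker_2021.csv", parse_dates=["expiration"])
daily = trades.groupby("expiration")["amount"].sum().sort_index()
# Funds "arrive" at expiration, which is what shifts everything right a bit.
daily.cumsum().plot(title="Cumulative P/L")
```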
YOLOs vs theta gang
Cumulative profit and loss on options purchased (not my preferred strategy) for the year. SPX is shown in the background, not to scale but rather to illustrate any correlation to market change.
Buying options hasn't been a large part of my strategy. I'll buy them as a hedge (volatility, inverses) and occasionally when I think I know what's going to happen, e.g. ERIC and RIDE. I'm actually pretty happy I ended the year in the black for these.
Cumulative premium collected on options sold (very much my preferred strategy) for the year. SPX is again shown in the background, as reference information.
Okay let's trade the echocardiogram out for a nice, smooth growth curve. Here's the cumulative premium collected for options sales - the compound interest you don't get from buy-and-hold.
I've mentioned it before, but wheel trading has downsides. The first is limited gains, something I experienced vividly with AMC. The other is that your wheel stock might plunge in value. More on that shortly.
Tickers
Here's an exhaustive list of aggregate option purchase performance by ticker:
Ticker   Gain/Loss    Per transaction
AAPL     $83.94       $41.97
AMD      $458.88      $114.72
BB       $337.83      $112.61
CLX      $295.97      $295.97
COIN     -$327.12     -$14.87
COST     $514.41      $171.47
CRSR     $192.47      $192.47
DOG      -$205.37     -$68.46
ERIC     $749.73      $374.87
GLD      -$874.57     -$291.52
GME      $336.38      $67.28
INDA     -$427.57     -$427.57
IXC      $664.86      $664.86
IYR      -$387.57     -$387.57
MJ       $521.74      $173.91
MRNA     $785.97      $785.97
NFLX     -$661.03     -$661.03
NVDA     $1,588.92    $794.46
OIH      $226.27      $32.32
OLN      -$227.55     -$227.55
PFE      -$812.63     -$270.88
PLTR     $167.83      $55.94
RIDE     $1,614.97    $230.71
RKT      $216.34      $72.11
RWM      -$1,499.25   -$499.75
SMH      $254.97      $254.97
SPCE     $617.72      $77.22
SPXS     -$202.55     -$202.55
SPXU     -$683.69     -$113.95
SPY      -$760.62     -$152.12
SQQQ     -$314.66     -$62.93
SWBI     -$315.60     -$157.80
TWM1     -$258.08     -$129.04
TWTR     $306.94      $153.47
TZA      -$313.54     -$313.54
UCO      $425.37      $141.79
UDN      -$152.58     -$152.58
USO      $476.79      $119.20
UVXY     -$748.75     -$124.79
UVXY1    -$921.14     -$307.05
VXX      -$376.54     -$376.54
XLE      $473.72      $94.74
XME      -$130.52     -$130.52
XOM      $279.81      $69.95
XOP      $224.44      $112.22
Total    $1,967.47    $11.37
As the cumulative plot indicated, buying options was very much a mixed bag. You see some SQQQ, UVXY, and VXX that expired worthless but would have softened the blow of a major correction that would have destroyed my wheel trades. My RIDE DD paid off modestly and I closed my Ericsson LEAPs before the share price returned to 10ish.
And here's the sold call/put premiums:
Ticker   Gain/Loss    Per transaction
AAPL     $609.40      $121.88
AMC      $1,763.03    $88.15
AMD      $643.84      $80.48
BB       $3,648.77    $140.34
COIN     $3,526.80    $352.68
CRSR     $1,461.36    $243.56
CVX      $112.48      $112.48
ERX      $331.37      $110.46
GME      $6,748.33    $1,349.67
HAS      $224.48      $224.48
MJ       $875.49      $62.54
MSFT     $162.48      $162.48
NFLX     $162.48      $162.48
NVDA     $829.95      $414.98
OIH      $3,126.76    $260.56
RKT      $1,747.09    $109.19
SOFI     $1,778.86    $68.42
SPCE     $4,961.69    $160.05
TSM      $271.44      $90.48
TUP      $84.48       $84.48
TZA      $125.96      $62.98
UCO      $394.44      $131.48
URA      $178.96      $89.48
USO      $150.44      $50.15
UVXY     $506.80      $72.40
XLE      $1,046.09    $61.53
XOM      $374.88      $62.48
XOP      $347.96      $173.98
Total    $37,593.99   $145.15
On the theta gang side, I largely stuck to my favorite stonks. GME worked out nicely and I kept tapping OIH and XLE as the economy recovered. COIN had great premiums but wasn't for the faint of heart.
So those were just options premiums. When you include exercise/assignment, you get the full story:
Ticker   Gain/Loss    Per transaction
AAPL     $748.22      $93.53
AKAM     -$1,534.16   -$1,534.16
AMC      $1,835.89    $87.42
AMD      $1,010.27    $77.71
BB       $3,986.60    $137.47
CLX      $295.97      $295.97
COIN     $7,512.14    $227.64
COST     $1,208.65    $302.16
CRSR     $1,653.83    $236.26
DIS      -$1,710.26   -$1,710.26
ERIC     $1,227.28    $409.09
ERX      $634.99      $127.00
FAZ      -$144.56     -$144.56
GLD      -$1,237.06   -$206.18
GME      $14,118.08   $1,283.46
HAS      $224.48      $224.48
INDA     -$427.57     -$427.57
IXC      $664.86      $664.86
IYE      $207.41      $103.71
IYR      -$387.57     -$387.57
LOW      $223.97      $223.97
MJ       $2,166.75    $120.38
MRNA     $2,022.32    $1,011.16
MSFT     $162.48      $162.48
NFLX     -$498.55     -$249.27
NIO      $220.55      $220.55
NVDA     $3,719.64    $743.93
OIH      $9,102.24    $455.11
PFE      -$690.23     -$138.05
QQQ      -$1,760.09   -$1,760.09
RGR      -$2,419.43   -$806.48
RIDE     $1,614.97    $230.71
RIVN     -$187.29     -$187.29
RKT      $1,834.17    $91.71
RWM      -$1,433.30   -$358.33
SMH      $254.97      $254.97
SOFI     $1,287.78    $47.70
SPCE     $5,131.91    $128.30
SPLK     $192.97      $192.97
SPXS     -$202.55     -$202.55
SPXU     -$683.69     -$113.95
SPY      -$760.62     -$152.12
SQQQ     -$343.18     -$57.20
SWBI     -$315.60     -$157.80
TSM      -$657.51     -$164.38
TUP      $158.45      $79.23
TZA      -$560.41     -$140.10
UCO      $1,090.72    $155.82
UGL      $139.47      $139.47
URA      $958.91      $239.73
USO      $627.23      $89.60
UVXY     -$5,870.31   -$419.31
VCAIX    $189.32      $189.32
VXX      -$1,895.32   -$473.83
XLE      $2,597.41    $112.93
XOM      $805.60      $73.24
XOP      $884.30      $176.86
Total    $46,445.84   $95.57
Looking at the big picture (options + trades), it's been a learning year. Bearish and volatility bets/hedges lost money, as hedges often do. Some earnings plays (Akamai and Disney) left me holding a position I closed at a loss. But the wheel trading premiums look even better when you add the gains from moving shares.
Wheel trading isn't great in a bear market, so I won't pretend this year's returns were anything but a product of JPow's transitory money printer. That being said, plenty of tickers lost value, specifically some of the IPOs/SPACs I liked (CRSR, LZ, RKT, FIGS, SOFI).
So how did the wheel do against tickers that ended the year with substantial unrealized losses?
I YOLOed into BB back during the GME squeeze. It was good for volatility at the time, but my remaining shares are sitting at a $3,400 unrealized loss. On the plus side, I pulled in $3,900 from premiums and exercised calls. So (tax notwithstanding) it's a few hundred free shares of RIM and a little extra.
It's not such a rosy picture with Virgin Galactic. The dilution and commercial launch delay happened all at once and I'm bagholding a bunch of shares to the tune of $24,000. I think it has long term hold potential, but selling calls and puts this past summer only made the lesson about $5,000 less painful.
If I could do it all over again, I'd have stayed on the GME theta train.