Infopost | 2021.01.02

Blacks Beach San Diego sunset panorama with surfer

It's 2021 now. Not too different from 2020 but people are more optimistic.
Investing

Investment performance vs index

A new calendar year is a chance to look back at the year I started actively investing. Etrade provides a comparison to indexes, but it isn't particularly useful here due to the inseparable inclusion of ESPP shares.

Luckily, I can do simple maths:

Type       Cost          Proceeds       P/L        P/L %   Notes
-----------------------------------------------------------------------------
Stocks:     67,731.58 ->  70,417.55 =   2,685.97    3.9%
Options:    54,759.95 ->  56,314.35 =   1,554.40    2.8%
ETFs:      105,840.48 -> 100,117.95 =  -5,722.53   -5.4%   FU Barclays
Bonds:      83,766.56 ->  93,714.51 =   9,947.95   11.8%   Doesn't include divs
Mutuals:                                                   [All long term]
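
(P/L % is just proceeds / cost - 1; e.g. the options row: 56,314.35 / 54,759.95 - 1 ≈ 2.8%.)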

It's probably not useful to look at long-term holds, so I didn't include those.

As I suspected at various points throughout the year, individual bond investments gave me the best gains. And these are just the ones I bought and sold - bonds that gained ~10% and had a low coupon value. It also doesn't account for interest payments (I can't find those in Etrade for the life of me). ETFs were something of a mixed bag since I traded regular market index-types and inverses. My ETF bottom line is dominated by the Barclays oil fund that I bought just before crude bottomed out. I went in with the intent of holding it until the inevitable rebound (petrol and plastics simply aren't going away). Unfortunately, Barclays decided to liquidate the fund and I was effectively forced to sell low.

I guess SPY finished 15% up from the start of the year, so if that's the benchmark I am a terrible investor. Then again, if you told me in March that I could net 10% on the year safely or 15% while risking a pandemic market, I'd have taken the former. Since the year was about cashing out long positions and learning how to stonks, it's been one for setting a baseline rather than being compared to one.

Should I just hold SPY until retirement because stonks always go up? Well:

Warren Buffett meme fake quote investing
Sunset session

Backlit sunset surf Blacks Beach San Diego

Black's is supposed to be firing tomorrow, but I took the 500mm down this evening to get some sunset shots. I'm not super into the silhouettes and focus was troublesome (backlit haze), but it was worth experimenting.

Gallery: San Diego surf photos at Blacks Beach, including sunset silhouettes
Krieg and Zane

Borderlands PS4 Krieg good evil

On the Borderlands 3 front, J and I finished the Krieg DLC and then switched to our alt characters (Zane and Moze, respectively) to level them, play through the DLC, and check out their new skill trees.

Borderlands PS4 Amara ECHO

Having a fourth skill tree is somewhere between 'awesome' and 'game-changing', simply because there are so many ways to create synergy between skills. Really the only disappointment was a lack of class mod and relic diversity.

Borderlands 3 PS4 Anathema the Relentless spectating Amara

The new(ish) takedown isn't easy. Like, squishies take minutes to kill with builds that can rip through Penn and Teller with relative ease. Aside from the difficulty, the fatal flaw for me was the platform jumping, which really sucks when there are no revives. Still, J's Amara build made short work of the bosses after I fell to my death.

We're going to take a second pass at Krieg in a bit, but have since switched to...
Bloodborne

Bloodborne PS4 coop pistols

Continuing a saga of Souls-like games that started with Nioh, we're roaming a monster-filled gothic metropolis.

Bloodborne PS4 atmosphere screenshot cooperator

It's a very atmospheric game. Dark and magical isn't my preferred genre, but I can still appreciate the style.

Bloodborne PS4 ladder screenshot gothic architecture

Like Nioh, the maps are meant to be played through multiple times. Instead of save points, you unlock shortcuts so that when the boss beats you, it's not a 45-minute return trip.

Bloodborne PS4 Yarnham mob pitchforks torches not awesome

The combat is... not easy. One of the main combat mechanics involves blasting enemies with your pistol when they do a strong attack to stagger them. Tens of levels in, I still can't get the timing - or really recognize the incoming attack. With insta-death hits and combos, I probably would not last very long in Bloodborne without my coop buddy. Even with him (and sometimes an internet rando/guru), the game can be pretty frustrating but that's only because I haven't played Battletoads in a long time.
Socially distant

Golfing

It's mostly video games, coding, streaming, and house projects. Still. But there was some golf.
Style non-transfer

Weimaraner puppy neural network deep learning posterize effect tiling

Continuing down the road of photo augmentation that doesn't rely on that one style transfer algorithm, I tried to train a network to mimic the posterize artistic filter. My dataset amounted to a sampling of images from the archive with the posterize filter run on them en masse.
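
With PIL, that en-masse step would look something like this (a sketch; the bit depth and paths are placeholders, and my actual settings may have differed):

from PIL import Image, ImageOps

# Build a training label by posterizing the source image to 3 bits/channel
# (the bit depth here is a placeholder, not my actual setting).
img = Image.open('archive/surfer.jpg')
label = ImageOps.posterize(img.convert('RGB'), 3)
label.save('posterized/surfer.jpg')

Starting with a monochrome network, I pretty much just stacked some convolution and dense layers, omitting any maxpooling or transpose convolution: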

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 192, 192, 16)      800
_________________________________________________________________
gaussian_noise (GaussianNois (None, 192, 192, 16)      0
_________________________________________________________________
dense (Dense)                (None, 192, 192, 96)      1632
_________________________________________________________________
dropout (Dropout)            (None, 192, 192, 96)      0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 192, 192, 16)      38416
_________________________________________________________________
dense_1 (Dense)              (None, 192, 192, 96)      1632
_________________________________________________________________
dropout_1 (Dropout)          (None, 192, 192, 96)      0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 192, 192, 16)      13840
_________________________________________________________________
batch_normalization (BatchNo (None, 192, 192, 16)      64
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 192, 192, 16)      2320
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 192, 192, 1)       145
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 192, 192, 1)       2
=================================================================
Total params: 58,851
Trainable params: 58,819
Non-trainable params: 32
_________________________________________________________________
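
For the curious, here's a minimal Keras sketch that reproduces the summary above. The kernel sizes fall out of the param counts (e.g. 7*7*1*16 + 16 = 800 for the first layer), but the activations, noise stddev, and dropout rates are guesses:

from tensorflow.keras import layers, models

model = models.Sequential([
    # 7x7 conv on a 192x192 monochrome tile; 'same' padding keeps 192x192
    layers.Conv2D(16, 7, padding='same', activation='relu',
                  input_shape=(192, 192, 1)),
    layers.GaussianNoise(0.1),            # stddev is a guess
    layers.Dense(96, activation='relu'),  # Dense acts on the channel axis
    layers.Dropout(0.25),                 # rate is a guess
    layers.Conv2D(16, 5, padding='same', activation='relu'),
    layers.Dense(96, activation='relu'),
    layers.Dropout(0.25),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.BatchNormalization(),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.Conv2D(1, 3, padding='same', activation='relu'),
    layers.Conv2D(1, 1, activation='sigmoid'),  # guess: [0, 1] pixel output
])
model.compile(optimizer='adam', loss='mse')  # color-space losses come up below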

For each of these, I ran a set number of epochs (whatever worked out to 4-6 hours), spitting out sample tiles along the way.


Input / actual / prediction


Applying the trained network to hard-edged tiles of a complete image gives an output that isn't too far off from the answer key (input, actual, prediction below):


In principle, a machine learning approach has the potential to be more content-sensitive in its application of lame instagram-y filters. The key, however, may be properly training it to identify where the hard-coded algorithm does well and where it does poorly. And what about training it on orthogonal output data to see if it can find a happy medium between, say, posterizing and crappy HDRization? Well, a lot of this rests on moving out of monochrome.

Color space

I'd been thinking about color spaces for a while; one of the training examples I'd seen used YCbCr (luma/brightness, blue-delta, red-delta, with green implied). Notionally, when you treat the loss as RGB distance from the desired output, you're penalizing color difference. That approach isn't unreasonable, but in many cases the most critical thing to get right is the brightness of the output. So using YCbCr or HSV (hue, saturation, value) seems like a good way to prioritize getting brightness correct. In image stylization, style often comes from differences in hue or saturation, so there's a case to be made for writing a loss function that emphasizes the brightness component even more.
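
As a rough sketch of what that could look like as a custom Keras loss; the per-channel weights here are made-up illustrations, assuming an HSV tensor with V in the last channel:

import tensorflow as tf

# Hypothetical channel-weighted MSE for HSV images: penalize errors in
# V (brightness) more than errors in H or S. Weights are illustrative.
def brightness_weighted_mse(y_true, y_pred):
    weights = tf.constant([0.5, 0.5, 2.0])  # H, S, V
    return tf.reduce_mean(tf.square(y_true - y_pred) * weights)

model.compile(optimizer='adam', loss=brightness_weighted_mse)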

Neural network deep learning posterize surfer

Here's the posterize 'answer key'.

RGB


RGB isn't too far off, but isn't guessing colors very well.

Neural network deep learning posterize surfer RGB tiling

YCbCr


YCbCr similarly finds the contour, but wants everything to look puke green.

Neural network deep learning posterize surfer YCbCr tiling

HSV


HSV seems to replicate contrast the best.

Neural network deep learning posterize surfer HSV tiling

Images/numpy

Trying various ML applications left me with a utils file that grew like mold and was about as organized. Having finally understood all(?) the needs I might have for preprocessing an image file into a numpy array, I wrote the following, which probably demonstrates how unwilling I am to look up proper pythonese:

from PIL import Image
import numpy as np


def file_to_np_array(size, infile, outfile=None, color='HSV', mode='none'):
    input_image = Image.open(infile)
    if outfile is not None:
        output_image = Image.open(outfile)
        return image_to_np_array(size, input_image, outimage=output_image,
                                 color=color, mode=mode)
    else:
        return image_to_np_array(size, input_image, color=color, mode=mode)


def image_to_np_array(size, inimage, outimage=None, color='HSV', mode='none'):
    if mode == 'none':
        # 'none' expects the image(s) to already be size x size
        if inimage.width != size or inimage.height != size:
            raise ValueError('None mode using image size: ' +
                             str(inimage.width) + 'x' + str(inimage.height) +
                             ' for size length ' + str(size))
        i = inimage.convert(color)
        if outimage is not None:
            if outimage.width != size or outimage.height != size:
                raise ValueError('None mode using image size [truncated]')
            o = outimage.convert(color)
        else:
            o = None
    elif mode == 'scale':
        raise NotImplementedError('Scale not yet implemented')
    elif mode == 'sample':
        # 'sample' takes a random size x size crop, same box for both images
        if outimage is not None:
            if (inimage.width != outimage.width or
                    inimage.height != outimage.height):
                raise ValueError('Input/output image different size [truncated]')
        # get_random_crop_dimensions() lives elsewhere in the utils file
        box = get_random_crop_dimensions(inimage, size, size)
        i = inimage.crop(box).convert(color)
        if outimage is not None:
            o = outimage.crop(box).convert(color)
        else:
            o = None
    else:
        raise ValueError('Undefined mode: ' + str(mode))

    # Normalize to [0, 1] floats for the network
    if o is not None:
        return (np.array(i) / 255.0, np.array(o) / 255.0)
    else:
        return (np.array(i) / 255.0, None)


def np_array_to_image(array, incolor='HSV', outcolor='RGB'):
    array = array * 255.0
    array = array.astype(np.uint8)
    if incolor == 'L':
        return Image.fromarray(array[:, :], incolor).convert(outcolor)
    else:
        return Image.fromarray(array, incolor).convert(outcolor)
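
A hypothetical round trip through the helpers (filenames and the 192 tile size are placeholders):

# Crop a random 192x192 training pair from an input/posterized image pair,
# then convert an array back into a viewable RGB image.
x, y = file_to_np_array(192, 'archive/surfer.jpg',
                        outfile='posterized/surfer.jpg',
                        color='HSV', mode='sample')
img = np_array_to_image(x, incolor='HSV', outcolor='RGB')
img.save('sanity_check.png')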

Beyond posterizing

Neural network deep learning edge emphasis CBR motorcycle wheelie

I tried that model on a hodgepodge of other filter effects, and it seems like it might work for an arbitrary transform. I still need to refine the stitching, though.
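
The current stitch is the naive version: chop the image into hard-edged tiles, predict each, and paste the outputs back in place. Something like this sketch (model and the 192-pixel tile come from above; the seams between tiles are exactly what needs refining):

import numpy as np

def stitch_predict(image_array, model, tile=192):
    # image_array: (H, W, C) floats in [0, 1]
    h, w = image_array.shape[:2]
    out = np.zeros((h, w, model.output_shape[-1]), dtype=np.float32)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image_array[y:y + tile, x:x + tile]
            # predict one tile and paste it back; edges won't blend
            out[y:y + tile, x:x + tile] = model.predict(patch[np.newaxis])[0]
    return out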
Vignettes de Battlegrounds Pt II


In this episode: a drive in the gold Mirado gone wrong, opponents feeling the bite of the circle, an am-bush, more creative physics, and a glider drive-by.
Fantasy finale: Tannehilled

d'san andreas da bears - Medieval Gridiron -
Week   Opponent                        Result
  1    Danville Isotopes               110.8 - 72.5    W (1-0)
  2    Screaming Goat Battering Rams   119.9 - 105.9   W (2-0)
  3    Nogales Chicle                  106.5 - 117.8   L (2-1)
  4    Britons Longbowmen              122.9 - 105.1   W (3-1)
  5    Toronto Tanto                   105.0 - 108.2   L (3-2)
  6    Only Those Who Stand            108.2 - 66.7    W (4-2)
  7    San Francisco Seduction         121.7 - 126.4   L (4-3)
  8    LA Boiling Hot Tar              116.2 - 59.4    W (5-3)
  9    SD The Rapier                   135.0 - 90.8    W (6-3)
 10    West Grove Wankers              72.9 - 122.8    L (6-4)
 11    SF Lokovirus                    127.9 - 87.1    W (7-4)
 12    Danville Isotopes               154.7 - 98.9    W (8-4)
 13    Screaming Goat Battering Rams   136.9 - 84.5    W (9-4)
P-1    Bye                             99.2
P-2    Screaming Goat Battering Rams   112.0 - 125.4   L
P-3    Britons Longbowmen              86.4 - 125.8    L (4th)

Covid-20 - Password is Taco -
Week   Opponent                                    Result
  1    Black Cat Cowboys                           155.66 - 78.36    W (1-0)
  2    [Random UTF characters resembling an EQ]    115.50 - 115.74   L (1-1)
  3    Circle the Wagons                           100.42 - 90.02    W (2-1)
  4    Staying at Mahomes                          123.28 - 72.90    W (3-1)
  5    Robocop's Posse                             111.32 - 134.26   L (3-2)
  6    KickAssGreenNinja                           65.10 - 84.02     L (3-3)
  7    Ma ma ma my Corona                          118.22 - 84.20    W (4-3)
  8    Kamaravirus                                 118.34 - 109.94   W (5-3)
  9    C. UNONEUVE                                 117.80 - 90.16    W (6-3)
 10    Pug Runners                                 98.90 - 77.46     W (7-3)
 11    Bravo Zulus                                 116.34 - 45.50    W (8-3)
 12    Forget the Titans                           92.84 - 125.14    L (8-4)
 13    [Random UTF characters resembling an EQ]    135.20 - 72.52    W (9-4)
P-1    Bye                                         129.30
P-2    Ma ma ma my Corona                          127.42 - 104.46   W
P-3    Forget the Titans                           114.84 - 115.72   L (2nd)

Dominicas - Siren -
Week   Opponent                Result
  1    TeamNeverSkipLegDay     136.24 - 107.50   W (1-0)
  2    Dem' Arby's Boyz        94.28 - 102.02    L (1-1)
  3    JoeExotic'sPrisonOil    127.90 - 69.70    W (2-1)
  4    Daaaaaaaang             138.10 - 108.00   W (3-1)
  5    Alpha Males             86.20 - 76.12     W (4-1)
  6    SlideCode #Jab          71.60 - 53.32     W (5-1)
  7    G's Unit                109.20 - 92.46    W (6-1)
  8    WeaponX                 113.14 - 85.40    W (7-1)
  9    Chu Fast Chu Furious    128.28 - 59.06    W (8-1)
 10    NY Giants LARP          75.24 - 75.06     W (9-1)
 11    HitMeBradyOneMoTime     107.42 - 89.22    W (10-1)
 12    TeamNeverSkipLegDay     132.78 - 140.84   L (10-2)
 13    Dem Arby's Boyz         97.62 - 63.52     W (11-2)
P-1    Bye                     94.12
P-2    G's Unit                118.56 - 142.52   L
P-3    TeamNeverSkipLegDay     78.62 - 94.44     L (4th)




Related - internal

Some posts from this site with similar content.

Post
2020.12.12

On lock

The covid surge that everyone expected after Thanksgiving has hit. Jes is busy at work. I can get by with games, streaming, jogging, and taking the dog out.
Post
2020.11.29

Mods

Since it was just the two-ish of us, Jes and I went to the Lodge for Thanksgiving lunch.
Post
2020.12.06

Edges and corners

Taking the 500mm out for some surf shots. Tweaking neural style transfer.

Related - external

Risky click advisory: these links are produced algorithmically from a crawl of the subsurface web (and some select mainstream web). I haven't personally looked at them or checked them for quality, decency, or sanity. None of these links are promoted, sponsored, or affiliated with this site. For more information, see this post.

timdettmers.com

Machine Learning PhD Applications - Everything You Need to Know | Tim Dettmers

This blog post explains how to proceed in your PhD applications from A to Z and how to get admitted to top school in deep learning and machine learning.
polukhin.tech

Lightweight Neural Network Architectures | Andrii Polukhin

As the field of Deep Learning continues to grow, the demand for efficient and lightweight neural networks becomes increasingly important. In this blog post, we will explore six lightweight neural network architectures.
coen.needell.org

ResMem and M3M

In my last post on computer vision and memorability, I looked at an already existing model and started experimenting with variations on that architecture. The most successful attempts were those that use Residual Neural Networks. These are a type of deep neural network built to mimic specific visual structures in the brain. ResMem, one of the new models, uses a variation on ResNet in its architecture to leverage that optical identification power towards memorability estimation. M3M, a...

Created 2024.03 from an index of 147,616 pages.