Infopost | 2022.08.03

Keras layers visualization craiyon

The title is a bit ambitious. More specifically, this is a cheat sheet for some Keras merging layers, (untrained) convolutional layers, and activation functions, with a comparison of luminance, RGB, HSV, and YCbCr color spaces.

After experimenting with concatenate layers, I looked at the other merging layers and decided I needed visual examples of the operations. For these, the grayscale/luminance flavors are the most relevant to machine learning, where you're typically working with single-channel feature maps (derived from color images).

I continued with samples of pooling layers and trainable layers with default weights. The latter provided helpful visualizations of activation functions applied to real images.

Merging layers
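
To reproduce these merges, something like the sketch below works. It's a minimal reconstruction rather than the exact harness: the file names are placeholders, and the 256x256 size and [0, 1] normalization are my assumptions. Add, tf.image.rgb_to_hsv, and tf.image.rgb_to_grayscale are real Keras/TensorFlow calls; TensorFlow has no YCbCr conversion, so that one would need PIL's convert("YCbCr") or a manual matrix.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def load_image(path, size=(256, 256)):
    # Decode to RGB and normalize to [0, 1].
    img = tf.io.decode_image(tf.io.read_file(path), channels=3)
    return tf.image.resize(img, size) / 255.0

a = load_image("image_a.jpg")  # placeholder file names
b = load_image("image_b.jpg")

# Optional color space conversions before merging.
a_hsv, b_hsv = tf.image.rgb_to_hsv(a), tf.image.rgb_to_hsv(b)
a_lum, b_lum = tf.image.rgb_to_grayscale(a), tf.image.rgb_to_grayscale(b)

# A two-input model wrapping a single merging layer.
x1 = keras.Input(shape=(256, 256, 3))
x2 = keras.Input(shape=(256, 256, 3))
merged = layers.Add()([x1, x2])  # swap in Subtract, Multiply, Average, ...
model = keras.Model([x1, x2], merged)

out = model.predict([a[np.newaxis], b[np.newaxis]])[0]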

Add

Keras layers visualization add luminance
Add layer applied to a single-channel image.

Keras layers visualization add rgb
Add in RGB.

Keras layers visualization add hsv
Add in HSV gives you a hue shift.

Keras layers visualization add ycbcr
Adding the Cb and Cr channels gives this color space even more of a hue shift.

Subtract

Keras layers visualization subtract luminance
Subtraction layer applied to a single-channel image.

Keras layers visualization subtract rgb
Subtraction in RGB.

Keras layers visualization subtract hsv
Subtraction in HSV.

Keras layers visualization subtract ycbcr
Subtracting in YCbCr deforms the image quite a bit.

Multiply

Keras layers visualization multiply luminance
Multiply, I guess, makes things darker: with 0.0-1.0 pixel values, each image scales the other toward black.

Keras layers visualization multiply rgb
RGB multiply looks similar.

Keras layers visualization multiply hsv
In HSV, the multiplication is applied less to brightness and more to saturation.

Keras layers visualization multiply ycbcr
Likewise, YCbCr shifts toward green.

Average

Keras layers visualization average luminance
Average in luminance is pretty straightforward.

Keras layers visualization average rgb
Average in RGB also makes sense.

Keras layers visualization average hsv
Average in HSV sometimes sees a hue shift.

Keras layers visualization average ycbcr
Average YCbCr works like RGB.

Maximum

Keras layers visualization maximum luminance
Max in monochrome selects the brighter pixel.

Keras layers visualization maximum rgb
Same in RGB.

Keras layers visualization maximum hsv
It's not as straightforward in HSV, where hue and saturation affect which pixel value is selected.

Keras layers visualization maximum ycbcr
Max for YCbCr likewise biases toward purple (red and blue) pixels.

Minimum

Keras layers visualization minimum luminance
Minimum, of course, selects the darker pixels.

Keras layers visualization minimum rgb
Same with RGB.

Keras layers visualization minimum hsv
In HSV, minimum favors dark, desaturated pixels whose hues happen to be near zero.

Keras layers visualization minimum ycbcr
YCbCr favors dark, greenish pixels.

Pooling layers

Most convolutional neural networks use max pooling to reduce dimensionality. A maxpool layer selects the hottest pixel from a grid (typically 2x2) and passes that value along. It's useful for detecting patterns while ignoring pixel-to-pixel noise. Average pooling is another approach that works just as it sounds. I ran 2x2 pooling and then resized the output back up to match the input.
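
Here's roughly what that pooling-then-resize comparison looks like in code, assuming img is a float32 RGB tensor in [0, 1] with shape (256, 256, 3):

import tensorflow as tf
from tensorflow.keras import layers

x = img[tf.newaxis, ...]  # pooling layers expect a batch dimension

pooled_max = layers.MaxPooling2D(pool_size=2)(x)      # (1, 128, 128, 3)
pooled_avg = layers.AveragePooling2D(pool_size=2)(x)

# Resize back up to the input size; "nearest" keeps the blocky
# artifacts visible instead of smoothing them away.
max_vis = tf.image.resize(pooled_max, (256, 256), method="nearest")[0]
avg_vis = tf.image.resize(pooled_avg, (256, 256), method="nearest")[0]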

Max pooling

Keras layers visualization maxpooling2d luminance
In monochrome images you can see the dark details disappear as pooling selects the brightest pixels.

Keras layers visualization maxpooling2d rgb
RGB behaves similarly to luminance.

Keras layers visualization maxpooling2d hsv
HSV makes the occasional weird selection based on hue and saturation.

Keras layers visualization maxpooling2d ycbcr
Much like with maximum and minimum from the previous section, maxpooling on YCbCr biases toward the purplest pixel.

Average pooling

Keras layers visualization average pooling luminance
The jaggies (square artifacts) are less obvious in average pooling.

Keras layers visualization average pooling rgb
Edges in RGB look more like antialiasing; flat areas look blurred.

Keras layers visualization average pooling hsv
HSV again shows some occasional hue shift.

Keras layers visualization average pooling ycbcr
Like with averaging two images, average pooling a single YCbCr image looks just like RGB.

Dense layers

A note on the trainable layers like Dense: these runs used default (untrained) weights, so what you're mostly seeing is the activation function applied through a freshly-initialized layer.

Keras layers visualization dense relu
The ReLU output looks pretty close to identical. I may not understand the layer, but I expected each output to be fully connected to the inputs. Hmm.

Keras layers visualization dense sigmoid
Sigmoid looks like it inverts the input.

Keras layers visualization dense softplus
Softplus isn't too fond of the dark parts of the panda.

Keras layers visualization dense tanh
Tanh seems to have more or less just darkened the input.

Not really much to observe here except that the dense nodes seem wired to (or heavily weighted by) their positional input pixel.

Update:

This appears to be the case: Keras applies Dense along the last axis of its input, so a single-channel image just gets one scalar weight (plus bias) applied per pixel. To get a fully-connected dense layer you need to Flatten before and Reshape after. That uses a lot of params, though.

Model: "model"
__________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected
to
==========================================================================
input_1 (InputLayer)            [(None, 32, 32, 1)]  0
__________________________________________________________________________
flatten (Flatten)               (None, 1024)         0           input_1[0]
[0]
__________________________________________________________________________
dense (Dense)                   (None, 1024)         1049600     flatten[0]
[0]
__________________________________________________________________________
input_2 (InputLayer)            [(None, 32, 32, 1)]  0
__________________________________________________________________________
reshape (Reshape)               (None, 32, 32, 1)    0           dense[0]
[0]
==========================================================================
Total params: 1,049,600
Trainable params: 1,049,600
Non-trainable params: 0
__________________________________________________________________________
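
For reference, here's a sketch of that flatten-dense-reshape arrangement. It's single-input; the unused input_2 in the summary above presumably belongs to the side-by-side comparison harness.

from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(32, 32, 1))
x = layers.Flatten()(inp)   # (None, 1024)
x = layers.Dense(1024)(x)   # 1024*1024 weights + 1024 biases = 1,049,600
out = layers.Reshape((32, 32, 1))(x)

model = keras.Model(inp, out)
model.summary()
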
Convolutional layers

As with Dense, these runs used kernels with default (untrained) weights.
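
A sketch of one of these runs; filters=1 and "same" padding are my assumptions, and the kernel weights come from Keras's default glorot_uniform initializer rather than training:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(256, 256, 1))
out = layers.Conv2D(filters=1, kernel_size=3, padding="same",
                    activation="relu")(inp)  # or sigmoid, softplus, tanh
model = keras.Model(inp, out)

# img: single-channel float32 array, shape (256, 256, 1), assumed
# loaded elsewhere and normalized to [0, 1].
vis = model.predict(img[tf.newaxis, ...])[0]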

One layer

Keras layers visualization convolution2d conv2d linear kernel size 3
One conv2d layer, kernel size 3, linear activation.

Keras layers visualization conv2d kernel size 3 relu
One conv2d layer, kernel size 3, ReLU activation.

Keras layers visualization conv2d kernel size 3 sigmoid
One conv2d layer, kernel size 3, sigmoid activation.

Keras layers visualization conv2d kernel size 3 softplus
One conv2d layer, kernel size 3, softplus activation.

Keras layers visualization conv2d kernel size 3 tanh
One conv2d layer, kernel size 3, tanh activation.

This is far more interesting than the dense layers. ReLU seems very good at finding edges/shapes while tanh pushes everything to black and white. What about a larger kernel size?

Keras layers visualization conv2d kernel size 7 relu
One conv2d layer, kernel size 7, ReLU activation.

Keras layers visualization conv2d kernel size 7 sigmoid
One conv2d layer, kernel size 7, sigmoid activation.

Keras layers visualization conv2d kernel size 7 softplus
One conv2d layer, kernel size 7, softplus activation.

Keras layers visualization conv2d kernel size 7 tanh
One conv2d layer, kernel size 7, tanh activation.

Two layers

Keras layers visualization conv2d two relu layers kernel size 3
Two conv2d layers, kernel size 3, ReLU activation for both.

Keras layers visualization conv2d two layers relu tanh kernel 3
Two conv2d layers, kernel size 3, ReLU activation then tanh activation.

Keras layers visualization conv2d kernel 3 tanh relu layers
Two conv2d layers, kernel size 3, tanh activation then ReLU activation.

Keras layers visualization conv2d kernel 3 tanh layers
Two conv2d layers, kernel size 3, tanh activation for both.

Transpose

Transpose convolution is sometimes used for image generation or upscaling. Through its kernel size and striding, this layer (once trained) projects its input onto an (often) larger feature map.
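
A sketch of these runs, again with default (untrained) weights and an assumed single-channel input. With the default "valid" padding, kernel size 2 and strides 2 double each spatial dimension:

from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(256, 256, 1))
out = layers.Conv2DTranspose(filters=1, kernel_size=2, strides=2,
                             activation="relu")(inp)  # -> (512, 512, 1)
model = keras.Model(inp, out)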

Keras layers visualization conv2dtranspose kernel 2 stride 2 relu
Conv2DTranspose, kernel size 2, strides 2, ReLU activation.

Keras layers visualization conv2dtranspose kernel 2 stride 2 sigmoid
Conv2DTranspose, kernel size 2, strides 2, sigmoid activation.

Keras layers visualization conv2dtranspose kernel 2 stride 4 tanh
Conv2DTranspose, kernel size 2, strides 4, tanh activation.

Keras layers visualization conv2dtranspose kernel 4 stride 2 relu
Conv2DTranspose, kernel size 4, strides 2, ReLU activation.

Keras layers visualization conv2dtranspose kernel 4 stride 2 tanh
Conv2DTranspose, kernel size 4, strides 2, tanh activation.

Keras layers visualization conv2dtranspose kernel 8 stride 2 relu
Conv2DTranspose, kernel size 8, strides 2, ReLU activation.

Keras layers visualization conv2dtranspose kernel 8 stride 2 tanh
Conv2DTranspose, kernel size 8, strides 2, tanh activation.



