The Women's World Cup and Gold Cup have offered some exciting and anxiety-inducing soccer.
It was time to drain and refill the pool.
I concreted in the light housing.
The GBES event this month was Gravity Heights. Finally, an alternative to Karl Strauss in the Sorrento Area, dog friendly too.
The pizza is phenomenal.
I was shocked and dismayed to find that Lightning Jack's closed.
Risk of Rain 2
Otherwise, the man cave has been a great place to stay cool and get in some PC and PS4 gaming.
The lolbaters/RoR squad has made some progress on unlockables and secrets. We managed to find the Gilded Coast and Aurelionite on our first attempt:
Borderlands GOTY
J and I blasted through the Borderlands GOTY DLC. We were impressed, once again, by the ambition of the Knoxx DLC and the hilarity of the Claptrap saga.
Fight for Sanctuary
From GOTY we rolled right into the Borderlands 2.5 DLC. We mostly played it for the story; our OP8 builds were pretty dominant. It was a pretty good ride, though it was a bit self-congratulatory (paging Mass Effect Citadel DLC). The dialogue was funny, as always, though I'm not sure I'll be able to handle the new Claptrap voice actor for a full game.
Between some great new releases and a crew of attendees, this was a pretty fun E3.
Borderlands
Without a doubt, Gearbox/2K conducted a masterful exhibition of Borderlands 3. They checked all the boxes:
A tight, playable demo.
A theater portion where the finer points of the game are disclosed.
A wide variety of hype material - adverts inside and outside, props, a number of different flashy videos.
Good, non-limited swag.
There were ample demo stations and staff, and no media-only restrictions or such nonsense.
The game plays like its predecessor, but they seem to have evolved most aspects of the experience. More skills, more items, more convenience features, more game modes.
Ultimately it seems the game's reception will hinge on how well Gearbox executed on the story and how much variety is offered by the interplanetary journey.
Exhibitors
Once again, Microsoft bought out a theater separate from the convention center. It was a quality production but they didn't have much content.
After Keanu's Cyberpunk introduction (which was strangely reminiscent of Peter Dinklage's Destiny performance), there was plenty of hype around CDPR. We did not brave that line.
After a zombie-filled reveal last year, I was hoping to get my hands on Dying Light 2. Unfortunately, Techland only had an appointment-only(ish) conference room with a cinematic demo.
Final Fantasy
Jeff - along with most other attendees - was excited about Final Fantasy VII Remake.
Others
As usual, there were plenty of smaller titles that were easy to get our hands on.
G4M3R DR1NKS!!!
It wouldn't be E3 without new energy drink brands that will soon be out of business or new flavors from existing brands that should have never been conceived. There were a few powder additive options this time around as well as champagne-flavored Bang with creatine.
Swag
It also wouldn't be E3 without collectibles, photobooths, and swag.
13-0
On Tuesday we stepped out to Yard House to catch the WWC match. Rumor has it Dr. Disrespect was also there, prior to his infamous incident.
Accommodations
We had an Airbnb Tuesday night. Pool, pizza, Mario Party. Perfect.
I bought a wood chipper because I have lots of long, 0.5"-5" thick branches with leaves to dispose of - enough that taking them to yard waste recycling is not an attractive proposition. A lot of chipper/shredders seem to have a separate feed for chipping (pictured above), one small enough that it appears to require thinning branches out before passing them through. I don't know how true this is, but it's why I went for the single, conical feed where I could theoretically feed a bushy tree branch. Importantly, having usable mulch wasn't a goal of mine, so I was willing to risk the chunkier output that might come with a single chipping function.
So I bought the Power King 7hp for about $1000.
Assembly was a reasonably straightforward hour or so. The machine is basically a Kohler engine strapped to a chipping chassis of noticeably lesser build quality. On a positive note, no welds or fittings appeared unsound, but the thing got the absolute minimum-effort paint job - breathing on it wrong will scratch the orange paint right off. While it's not particularly concerning for my low-mobility purposes, the wheels are held onto the axle with just a cotter pin, so they slide left and right as you pull the thing.
As many others did, I immediately removed the 'OSHA curtain', as it looked like it would inhibit the feeding process. I later found that it does indeed have the purpose of blocking much of what the PK throws back out of the hopper - so eye protection is critical, curtain or no.
Usage
Using the PK involves a learning curve. I jammed it a half-dozen times in my first few hours of use and have since gotten considerably better at it.
Volume matters. If you dump a bag of leaves in the hopper, the PK won't throw them out fast enough and you'll have a jam. This is not a major usability concern (for me) unless you're unaware of it.
Branch diameter. I put a piece of e-tape (see above) on the hopper measuring the advertised 3" to avoid feeding anything too big. Maybe, on a good day, if you feed the PK a 3" piece of balsa wood that's been in an autoclave for 30 years it'll handle it. Practically, I've found that you max out at 1.5" or so. If this didn't cover 70% of the branches I'd be chipping, I would have returned the thing straight away. But I'm still rather salty about its disparity from the advertised capabilities.
You have to feed it slowly. That is, take a long branch that isn't bushier than, say, 2', and feed it slowly into the PK, thick-end first. You'll listen to the engine to know how fast to feed it and get the hang of it pretty quickly. The PK pulls branches in far faster than it can chip them, so you have to control the autofeed until you're getting down to the 0.5" diameter range where you let go and find out if the branch was too bushy.
Always watch the hopper and exhaust. The hopper will sometimes have debris that bounces up and down and never gets sucked through; this means the exhaust is clogged. Leaves and wet branches sometimes pass through the PK and get caught on the exhaust spout, causing everything else to bounce back into the chipper chamber. Most times you can clear them without opening the exhaust, then check that the hopper debris has passed through. In this vein, it's best not to angle the exhaust pipe.
Problem solving requires removing two nuts for the hopper, exhaust, and/or the shaft cover. A quick-release mechanism for these would significantly improve the user experience, but after getting used to the limitations of the PK, it's not too bad. Manually turning the shaft to expel branches caught between the chipper drum and chamber works most times, but if it's really bad you'll just strip the shaft.
At one point I was using a drill and sawzall to unstick the chipper. Another design flaw - there are bolts sticking out where you have to manually turn the shaft, so your wrench can only hit it from certain positions and for a very small rotation.
A short story from graphics to parallelism to lambda to MapReduce to Hadoop to Spliterator to ParallelStream.
My graphics library has been growing slowly, but the building blocks are there for slightly interesting things. Solved things, but not always so customizable when bought off the shelf. Elegant designs have won out for the most part, and that's brought creeping overhead. So you get to a thumbnail algorithm that takes seconds to complete. For my baseline image, it was 3.61 seconds (we'll come back to this at the very end). That's not bad for hitting go on a blog post (which chugs on ftp, so who cares about some slow graphics processing) or a background batch operation, but not great for the realtime case.
Of course in the modern era you want to farm graphics processing out to the GPU. In addition to the difficulty of connecting the CUDA layer (in Windows), there are some conceptual concerns that kept me from driving down this path. For one, there's overhead moving data for incremental graphics operations. And while GPUs are superb, there are limited gains vs an x-core high clock rate CPU when you're talking about general purpose code (with branching) on a single image. There's absolutely a use case for shipping stuff off to my still-awesomeish graphics card, but I don't think photo processing is it - with the side comment that the GPU is used by the JVM for the actual rendering.
Having some experience with distributed processing, I know that applying a pixel transform to 800x600 = 480,000 pixels is a large and very parallelizable task. For example, converting RGB to luminance is the same operation, with zero state or dependencies, executed 480,000 times. It's also a prime counterexample to my conclusions from the above paragraph, but let's suspend disbelief because this is about Java parallelization, not throwing my hands up at Windows/CUDA and installing Ubuntu then refactoring code into something that works with a graphics API.
The old school approach to the luminance conversion would be:
convert(int pixels[][])
    foreach (pixel)
        value = ((pixel >> 16) & 0xff) * 0.2126 +
                ((pixel >> 8) & 0xff) * 0.7152 +
                (pixel & 0xff) * 0.0722;
        pixel = value | value << 8 | value << 16;
Simple, sequential.
The functional programming approach would be to map the function to the data. Your lambda function is just:
convert(int pixel)
    value = ((pixel >> 16) & 0xff) * 0.2126 +
            ((pixel >> 8) & 0xff) * 0.7152 +
            (pixel & 0xff) * 0.0722;
    return value | value << 8 | value << 16;
...
convert->map(pixels[][]) // or however your language's syntax prefers
... and the means by which it is applied to the data is left unspecified.
The unspecified means of application left me wondering if there was a way to leverage this for parallelism. In Java. I quickly found that 1.8 had introduced the Spliterator interface. The silly portmanteau name (plus the context by which I got to it) led me to believe this might be the way to iterate over elements in a distributed way. It was even more encouraging that Collection had implemented the interface, so Lists and Sets could be natively split. The docs and StackOverflows had examples like:
Spliterator it = list.spliterator();
it.forEachRemaining(lambdaFunction);
Oh, independently/sequentially, not independently/concurrently. Damn. What I was learning is well-articulated here:
Spliterator itself does not provide the parallel programming behaviour - the reader might be interested in experimenting with a Collection type holding a vastly larger number of objects and then look to implement a parallel processing solution to exploit the use of the Spliterator. A suggestion is that such a solution may incorporate the fork/join framework and/or the Stream API.
Spliterator doesn't do much unless you want to add the code to kick off threads/processes to leverage its real contribution: chopping up the work nicely and with thread safety. And so this is a solution to the luminance problem, but it's a bad one. To get parallelism, I would need to explicitly split my data into a specific number of subsets and then kick off threads for each. This is laborious, verbose, and full of overhead. I want to parallelize, but more than that I want to map the function, that is, semantically convey that the data is processed independently.
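For the record, the do-it-yourself version would look something like this - a minimal sketch, assuming the image is stored as a List of pixel rows and that convert() is the per-pixel transform from above:

import java.util.List;
import java.util.Spliterator;

// Convert one row in place using the per-pixel convert() from above.
static void convertRow(int[] row) {
    for (int i = 0; i < row.length; i++) {
        row[i] = convert(row[i]);
    }
}

// Split the image's rows roughly in half and chew through each half on its own
// thread. This is exactly the boilerplate I was hoping to avoid writing.
static void convertAll(List<int[]> rows) throws InterruptedException {
    Spliterator<int[]> left = rows.spliterator();
    Spliterator<int[]> right = left.trySplit();     // may be null if it won't split

    Thread worker = new Thread(() -> {
        if (right != null) {
            right.forEachRemaining(row -> convertRow(row));
        }
    });
    worker.start();

    left.forEachRemaining(row -> convertRow(row));  // this thread handles its half
    worker.join();                                  // wait for the spawned half
}

Two threads is hardly a framework, and generalizing this to N threads is the kind of bookkeeping I didn't want to own.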
This led to a brief digression into MapReduce - the concept of mapping a function and then performing a lassoing of results. That led to Hadoop, an Apache platform for farming out large MapReduce and similar distributed processes. Neat, but overkill for this.
Things were looking grim, but then I at last checked up on another type that had popped up in some of my queries. Stream. You know, that vague supertype that your grandpappy used for IO and de/serialization. Turns out this isn't it. InputStream and OutputStream and their many descendants are actually just subclasses of Object. Stream was introduced in 1.8.
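The example floating around was more or less the standard Java tutorial snippet (their Person/roster types, not mine):

// Average age of the male members of a roster, computed over a parallel stream.
double average = roster
    .parallelStream()
    .filter(p -> p.getGender() == Person.Sex.MALE)
    .mapToInt(Person::getAge)
    .average()
    .getAsDouble();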
Okay so they're pulling a parallel stream from the roster Collection and performing a bunch of steps on it. That actually looked good.
filter() calls a member function of each element in the stream. This is great because it means they support member calls.
mapToInt() seems to be collecting data for postprocessing.
average() and getAsDouble() are operating on the collected data.
The code is straightforward and the documentation seemed to indicate it'd be a parallel effort with width determined by the platform. It was worth a try, so I decided to see if I could feed my thumbnailer code into Stream. Pardon my switching of example horse mid-stream, so to speak, but this one had that nice 3.61-second benchmark.
The baseline code just took a user-specified number of areas and did a bunch of computations on them. The area size would be inversely proportional to the number of areas, so we're talking generally the same amount of work regardless of the client code.
foreach (area)
    area.computeInterest(); // Does a lot of pixel math.
I fumbled a bit with how to back into the area.computeInterest() call:
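Picture something in this spirit (illustrative only; assuming computeInterest() returns void, map() has nothing to map and the compiler objects):

// Roughly the kind of thing I was trying: map() wants a function that returns a
// value, but computeInterest() is a void member call, so this doesn't even compile.
areas.parallelStream().map(area -> area.computeInterest());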
This and similar formulations gave me errors that seemed to understand how badly I was abusing their interface. I calmed down, read a bit more, and - oh, there's a forEach:
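Something like this (a sketch; areas/Area stand in for the thumbnailer's actual types):

// forEach is the terminal operation that fits: no return value needed, just run
// the member call on every element, potentially in parallel.
areas.parallelStream().forEach(area -> area.computeInterest());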