is8ac

You can find my main pages at www.isaacleonard.com and https://twitter.com/is8ac/

moonlit-tulip:

yumantimatter:

earlgraytay:

bygodstillam:

ninja-kitty-more-like-no:

wiz-witch:

pastel-player:

thepoweroffriendshipgivesumoney:

over-the-misty-mountains:

jabberwockypie:

uberniftacular:

copperbadge:

cluegrrl:

selkierot:

deckerbunny:

teal-deer:

callmebliss:


The Mandalorian and Ghost of Tsushima

Which is really just Lone Wolf and Cub, so, yeah

Good Eats and Yakuza: Like a Dragon and I am totally here for it.

Man v. Food and Animal Crossing?

Truth Seekers Solitaire… that’s just sad.

Animal Crossing: Supernatural. 

Oh dear. 

Taskmaster: Animal Crossing

It’s a beautiful day in Regency England, and you are a horrible goose.

@lady-byleth talk to me about this Untamed / Three Houses crossover

Forged in Fire + Assassin’s Creed II.
Cool

DuckTales and PuyoPuyo Tetris… PLEASE.

Last show I watched was Nailed It, and I recently played the DuckTales video game…

… That actually sounds fun

Sky: the Clone Wars

The Witcher 3: Cold Case Files

Hades and the Princesses of Power

Stardew Alchemist - oh man I want this now

Fate/Arknights: Unlimited Blade Works

NieR: Shuumatsu Ryokou


Amid the desolate remains of a once-thriving city, only the rumbling of a motorbike breaks the cold winter silence. Its riders, 2B and 9S, are the last survivors in the war-torn city. Scavenging old military sites for food and parts, the two androids explore the wastelands and speculate about the old world to pass the time. 2B and 9S each occasionally struggle with the looming solitude, but when they have each other, sharing the weight of being two of the last humans becomes a bit more bearable. Between 9S’s clumsy excitement and 2B’s calm composure, their dark days get a little brighter with shooting practice, new books, and snowball fights on the frozen battlefield. Among a scenery of barren landscapes and deserted buildings, NieR: Shuumatsu Ryokou tells the uplifting tale of two androids and their quest to find hope in a bleak and dying world.


Or alternatively:


NieR: Shuumatsu Ryokou tells the story of Chito and Yuuri, and their battle to reclaim the machine-driven dystopia overrun by powerful machines.
Humanity has been driven from the Earth by mechanical beings from another world. In a final effort to take back the planet, the human resistance sends a force of android soldiers to destroy the invaders. Now, a war between machines and androids rages on… A war that could soon unveil a long-forgotten truth of the world.

nostalgebraist:

[Attention conservation notice: machine learning framework shop talk / whining that will read like gibberish if you are lucky enough to have never used a thing called “tensorflow”]

I’ve probably spent 24 solid hours this week trying (for “fun,” not work) to get some simple tensorflow 1.x code to run on a cloud TPU in the Google-approved manner.

By which I mean, it runs okay albeit slowly and inefficiently if I just throw it in a tf.Session() like I’m used to, but I wanted to actually utilize the TPU, so I’ve been trying to use all the correct™ stuff like, uh…

…“Datasets” and “TFRecords” containing “tf.Examples” (who knew serializing dicts of ints could be so painful?) and “Estimators” / “Strategies” (which do overlapping things but are mutually exclusive!) and “tf.functions” with “GradientTapes” because the “Strategies” apparently require lazily-defined eagerly-executed computations instead of eagerly-defined lazily-executed computations, and “object-based checkpoints” which are the new official™ thing to do instead of the old Saver checkpoints except the equally official™ “Estimators” do the old checkpoints by default, and oh by the way if you have code that just defines tensorflow ops directly instead of getting them via tf.keras objects (which do all sorts of higher-level management and thus can’t serve as safe drop-in equivalents for “legacy” code using raw ops, and by “legacy” I mean “early 2019″) then fuck you because every code example of a correct™ feature gets its ops from tf.keras, and aaaaaaaaaaaaaargh!!
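(For anyone lucky enough not to have lived through this, the “lazily-defined eagerly-executed versus eagerly-defined lazily-executed” jab can be sketched in plain Python. This is a toy illustration of the two execution styles only; none of the names below are real tensorflow API.)

```python
import operator

# TF 1.x style: eagerly DEFINED, lazily EXECUTED.
# You describe the whole computation up front as a graph of ops;
# nothing actually runs until you hand it to a Session-like runner.
class LazyOp:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs  # graph construction only

    def run(self):
        # execution is deferred to this call (the stand-in for Session.run)
        return self.fn(*(i.run() if isinstance(i, LazyOp) else i
                         for i in self.inputs))

# TF 2.x / tf.function style: lazily DEFINED, eagerly EXECUTED.
# You write an ordinary function that runs immediately; the framework
# only "defines" it (traces/compiles) behind the scenes on first call.
def eager_add_mul(a, b, c):
    return (a + b) * c

graph = LazyOp(operator.mul, LazyOp(operator.add, 2, 3), 4)
print(graph.run())             # deferred graph finally executes: 20
print(eager_add_mul(2, 3, 4))  # runs immediately: 20
```

The “Strategies” complaint is that code written in the first style has to be turned inside-out into the second before the new distribution machinery will accept it.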

This solidifies the impression I got last time I tried trusting Google and using fancy official™ tensorflow features.  That was with “tensorflow-probability,” a fancy new part of tensorflow which had been officially released and included cool stuff like Bayesian keras layers… which were impossible to save to disk and then load again… and this was a known issue, and the closest thing to an official reaction was from a dev who’d moved off the project and was now re-implementing the same thing in some newly-or-differently official™ tensorflow tentacle called “tensor2tensor,” and was like “uh yeah the version here doesn’t work, you can try tensor2tensor if you want”

(I still don’t know what “tensor2tensor” is.  I refuse to learn what “tensor2tensor” is.  They’re not going to get me again, dammit)

I don’t know whether the relevant category is “popular neural net frameworks,” or “large open-sourced projects from the big 5 tech companies,” or what, but there’s a certain category of currently popular software that is frustrating in this distinctive way.  (Cloud computing stuff that doesn’t involve ML is often kind of like this too.)  There’s a bundle of frustrating qualities like:

  • They keep releasing new abstractions that are hard to port old code into, and their documentation advocates constantly porting everything to keep up

  • The new abstractions always have (misleading) generic English names like “Example” or “Estimator” or “Dataset” or “Model,” giving them a spurious aura of legitimacy and standardization while also fostering namespace collisions in the user’s brain

  • The thing is massive and complicated but never feels done or even stable – a hallmark of such software is that there is no such thing as “an expert user” but merely “an expert user ca. 2017″ and the very different “an expert user ca. 2019,” etc

  • Everything is half-broken because it’s very new, and if it’s old enough to have a chance at not being half-broken, it’s no longer official™ (and possibly even deprecated)

  • Documentation is a chilly API reference plus a disorganized, decontextualized collection of demos/tutorials for specific features written in an excited “it’s so easy!” tone, lacking the conventional “User’s Manual” level that strings the features together into mature workflows

  • Built to do really fancy cutting-edge stuff and also to make common workflows look very easy, but without a middle ground, so either you are doing something very ordinary and your code is 2 lines that magically work, or you’re lost in cryptic error messages coming from mysterious middleware objects that, you learn 5 hours later, exist so the code can run on a steam-powered deep-sea quantum computer cluster or something

Actually, you know what it reminds me of, in some ways?  With the profusion of backwards-incompatible wheel-reinventing features, and the hard-won platform-specific knowledge you just know will be out of date in two years?  Microsoft Office.  I just want to make a neural net with something that doesn’t remind me of Microsoft Office.  Is that too much to ask?

This is why I began leaving TF in late 2018.

IMO, TF 1.3 was the last good version. Back then, it was a simple and elegant compute DAG compiler/runtime. It abstracted over different hardware types and data storage. It was simple. For a while after the Python API began to go downhill, I used the autogenerated Go bindings. They at least preserved a simple abstraction.
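To illustrate what I mean by “compute DAG compiler/runtime,” here’s a toy sketch in plain Python. Real TF 1.x handled tensors, devices, gradients, and graph compilation, but the core abstraction was roughly this shape; the backend dict here is made up purely for illustration of how the runtime could swap hardware implementations under one graph:

```python
import operator

# Toy of the old TF 1.x model: a compute DAG you build once, then hand
# to a runtime that decides how (and on what hardware) to execute it.
class Const:
    def __init__(self, value):
        self.value = value

    def eval(self, backend):
        return self.value

class Op:
    def __init__(self, name, *inputs):
        self.name, self.inputs = name, inputs  # upstream nodes in the DAG

    def eval(self, backend):
        args = [node.eval(backend) for node in self.inputs]
        return backend[self.name](*args)  # the runtime picks the implementation

# One "device": a plain-Python backend. A GPU/TPU backend would just be
# another dict mapping the same op names to different kernels.
cpu_backend = {"add": operator.add, "mul": operator.mul}

graph = Op("add", Op("mul", Const(2.0), Const(3.0)), Const(1.0))
print(graph.eval(cpu_backend))  # → 7.0
```

The same graph object can be evaluated against any backend that supplies the op names, which is the hardware abstraction the old versions got right.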

Now I hand-code all my ML work in plain Rust with occasional Vulkan compute shaders if I really need the performance. (Although I’m not doing traditional gradient descent, so it’s not really TF’s fault that I don’t use it.)

nostalgebraist:

(Until looking over my tag just now, I had this impression I had said almost this exact thing before, but it looks like I’d only done so much more implicitly and reservedly than I remembered, and anyway not recently, so … )

I really get a lot of value out of it when other people read Almost Nowhere and say things about it.  I would be really happy if more people did this.

—————

I’m pretty nervous about feeling like I’m begging for attention and validation here, cf. the way I started off this post with the parenthetical above and even now am derailing it anew with this sentence.

In particular, I have this intuition that “when I do things that people actually like, they become self-advertising” – I didn’t have to write posts like this about TNC, or for that matter about my own nonfiction effortposts here, and if I want a comparable level of interest in AN then (the line of thinking goes) I should just keep writing and making it as good as possible, and “if I build it they will come.”

However, AN is really in a somewhat different situation than those other things.  It is a relatively long story – I can imagine it being 2x the wordcount of Floornight by the end – that I am creating very slowly over a number of years, with more care and deliberateness than I’ve applied in the past.

I feel confident that I will complete the whole thing within (to set a goofy upper bound) the next ten years, but I expect it to take at least another 1-2 years, possibly more.  I know it’s hard to get people interested in a WIP, or in very piecemeal occasional updates that don’t build an exciting sense of momentum.  I know people want to read complete things, and “read it when it’s done” might still be the best option even though it means you’ll be reading it in (could well be) 5 years when you and I and the world are five years older and god only knows what’s happened in the interim.  Just, that’s what that option looks like.

—————

And I realize that was an uninviting downer of an advertisement inasmuch as it was an advertisement at all, so here’s another one.

The reason I make posts like this is that I’m extremely proud of Almost Nowhere.  Like, distinctly prouder of it than any other creative or quasi-creative thing I’ve ever made, as far as I can tell.

I can’t say it’s strictly better than my previous novels, since they’re all doing different things and can’t be usefully compared like substitutes for one another.  But when I re-read the earlier novels, there are parts I like and parts I don’t, there are things I cringe at, places where I think “ugh, I took the easy road” or “oh, I feel bad about this chapter, should I skip it?”

Yet when I re-read AN, as I do every so often, I just feel this sense of pure glee over the whole thing, even parts I wrote 2 or 3 years ago: I like each chapter individually, I like every character and plot thread and theme and verbal motif, I like virtually every sentence.  It feels like what I imagine an actor or animator might feel watching their own demo reel, curated to string together only the peaks of their output without anything else.  I’m inordinately pleased with what I’ve done here.  (Admittedly some of this comes easier with something incomplete, as endings are uniquely hard to pull off for writers in general and me in particular, but still.)

So, if you tend to like things I like, much less things I make, you might really like this one.  FWIW.

Know that I am eagerly awaiting each new chapter of Almost Nowhere. It is as you say: every character and plot thread, theme and verbal motif, almost universally, they are deserving of life.

Please, finish it quickly. Please, take the time necessary to continue to produce an excellent work.