Status 6-Mar-23

I've been a little quiet on here the past few weeks -- first a slight overbusyness, followed by a slight crash. All things in cycles. As I think I've mentioned before, when I need little snippets of extra time, writing these shards is usually the first piece of compression. (I tend to write them in the morning before starting the day job, which means they can easily get squeezed by having excess miscellaneous tasks or being slightly behind from needing more rest. If it's a choice between writing these and making coffee, I know on which side my bread is buttered.)

(Sick fish don't help, either.)

One quick share today: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit

Choice quote:

OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The blog stated that “since our research is free from financial obligations, we can better focus on a positive human impact,” and that all researchers would be encouraged to share "papers, blog posts, or code, and our patents (if any) will be shared with the world."

Choice because a) it's nice to remember some of the shitheads backing this, but more saliently, b) 'since our research is free from financial obligations, we can better focus on a positive human impact' is so on the money (hah) that it hurts.

I refer you back to Filtered for AI Apocalypse, specifically:

This is the sort of thing that bothers me here -- the economic incentives at work (and not out of nowhere: specifically courted by those making these things) do not align -- remotely! -- with responsible creation of these products. The $ numbers at work are staggering, and if you're on the hook for that amount of money, I don't feel like you're going to be too prudent about the limits you set on these things if you can get away with them.

Which links in well to this Margins piece, talking about why voice assistants never quite stuck the landing as they might have:

ZIRP

Of course I'm going to go there: ZIRP played a huge role. Just think about every cycle we saw over the past decade. There would be some new technology like Blockchain, IoT, AI, VR, (and I'm sure I'm missing a few), and instantly everyone had to pretend it would change everything. All of these technological advancements could’ve been implemented gradually, finding product-market fit and building solid businesses from there. But instead, every startup in the space had to spout off big ideas to then be force-fed capital like a goose bred for foie gras. Then they’d never live up to their potential and be pushed into the trough of disillusionment. Startups that tried to grow responsibly would be blitzscaled into oblivion.

10x not 10%

Then you had the big tech companies. As each one sat comfortably on its own monopolized territory, gradual innovation simply didn't make any economic sense. If you're churning out profits, the incremental benefit from steady growth built on a new innovation would be a boring distraction. I still remember reading (and at the time buying into) the head of Google X saying that it was easier to create a 10x innovation than build a 10% improvement. People had to make statements like that because the 10% improvement would never get you the resources or promotions.

Voice couldn't simply be a cool feature that just gave you sports scores and told you the weather, and then evolve into something grander. Amazon is the flywheel king of losing money on certain things in order to build larger network effects, but it was mid-2010s blasphemy to simply have made money by…selling their speakers.

Enshittification by market autophagy.