technology

Filtered for AI Apocalypse

On top of that, these products are extremely expensive to run. ChatGPT burns millions of dollars a day in computing power, while charging a per-access-token price for companies to plug into its models. OpenAI expects $200 million in revenue in 2023, claims that they’ll be making a billion a year in revenue in 2024, but as their AI becomes more widely adopted into other products, so will the cost of providing that service, and I see nothing about how this company could possibly be profitable. On top of that, OpenAI will be handing over 75% of its profits until Microsoft has recouped its $10 billion investment. To survive and perform even its most basic tasks, OpenAI must constantly consume information and burn capital, and as it grows in complexity, so will its technological demands, ethical concerns and genuine threats to society. ... As said last week, tech companies are incentivized to grow at all costs, even if said costs involve them acting in reckless, unethical ways. A ChatGPT-powered search engine marketed by a massive tech company as a search engine is one that users are likely to believe the answers of, meaning that when said search engine gives patently incorrect and “unhinged” results, we are likely to see situations of misinformation that dwarf the damage caused by Cambridge Analytica and Facebook. Microsoft rushed out Bing AI because they wanted their shareholders to see them as constantly growing, despite the fact that it is both regularly wrong and actively manipulating users. ... And the reason that these risks are so significant is that both Microsoft and Google were desperate to show that they will never, ever stop growing. They did not slowly and methodically roll out these products with constant reminders of their fallibility - they branded and marketed them as the future, as the new way in which we request and process information.
Microsoft and Google’s customers are victims of rich people playing with toys, hoping to find ways to please rich people that invest the money of other rich people.

Ed Zitron: The Rotten AI

This is the sort of thing that bothers me here -- the economic incentives at work (and not out of nowhere: specifically courted by those making these things) do not align -- remotely! -- with responsible creation of these products. The $ numbers at work are staggering, and if you're on the hook for that amount of money, I don't feel like you're going to be too prudent about the limits you set on these things if you can get away with it.

I refer you back also to BRIAN BLESSED PIGEON (undeterred).

AI (which we’ll use to describe the new wave of natural language systems despite the fact that it’s basically a branding exercise invoking benign superconsciousnesses from mid-20th Century SF to sell an interactive probability map) is going to disrupt our lives. That’s a given. In exactly what ways remains to be seen. At the moment the AI chat systems we’re talking about are the image of a persuasive liar: smooth-talking fabricators of plausible nonsense without the hindrance of conscience or a residual allegiance to truth. In fact it’s worse than it appears - they’re number generators emitting the expression of statistical likelihood dressed up as meaning. Unchecked, they are misogynistic, racist conspiracists gleefully plagiarising or collaging slabs of coherence, their entire being an intricate patchwork of human self-expression stitched into the shape of consciousness.

Nick Harkaway: Not The Future

Tom Scott: I tried using AI. It scared me.

BRIAN BLESSED PIGEON (undeterred)

I finally did some prodding at ChatGPT last week. I wrote a little bit before about some of my problems with the way these things are trained, though the problem is slightly less acute (though still present) with text-based models.

My first prompt was to have it write a bio for Olivia, where it confidently declared she'd worked on several games that she hadn't, had attended a university several thousand miles away from the one she had, and had won several awards that she hadn't. The most correct thing about it came in the form of a sentence lifted verbatim from her website.

My experiments progressed to having it construct several dramatic scenarios: rewriting scenes from Shakespeare, but replacing characters with pigeons, or Brian Blessed, or setting them on a submarine.

BRIAN BLESSED PIGEON: (undeterred) And might I add, your beauty shines brighter than the moon in the sky at night!

(Strictly speaking, I'd asked it to change the dove in a previous draft into Brian Blessed, but I guess it technically followed the instructions.)

Or having it apply for a job as an assistant, but as a pigeon:

As a pigeon, I am highly adaptable and able to learn quickly. My ability to navigate my surroundings with precision and agility would make me an excellent researcher, helping you to gather information and inspiration for your stories. Additionally, my keen sense of observation allows me to pick up on small details that others may miss, which would be valuable in the editing and proofreading process.

I also have a unique perspective that would bring fresh ideas to the table. My avian nature allows me to see the world from a different angle and provide a unique perspective in the creative process.

I then tried teaching it to GM a TTRPG for me, which... did actually seem sorta possible in a slightly unexpected way, with steady coaching of the right prompts. It struggled with numbers, and I am suspicious that it was just picking favourable outcomes for dice rolls and telling me that I had rolled an X. But if anything, that's a legitimate -- if not actually desired -- approach to the problem.

Act as the game master in a single player improvised tabletop roleplaying game set in the world of Buffy the Vampire Slayer. Answer as the game master. I am the only player. Do not write explanations. Start with character creation. ... Refer to me as if I’m Willow; don’t talk about her in the third person. However, I am not actually Willow, I am just playing the character of Willow. ... When I talk in character, I will use speech marks “Like this.” When I talk in character, don’t repeat my speech in your reply, just paraphrase what I’ve said instead. Other characters should still talk to me in direct speech.

(Some of the prompts that went into setting this up. I'd actually like to give this another go.)

My overall feeling from this play was that it was great for just that -- play. Seeing ChatGPT as a vector for entertainment felt good; its applications beyond that feel much more tenuous.


Dan Hon:

Bard made a factual error in its first demo. A point here is that the demo confidently asserts that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system”, and, whooeee let me interpret it this way: there’s no way this demo would’ve gotten out without going past a bunch of (human) eyes checking it was okay. It clearly got past all those eyes. One reason why, I’m pretty sure, is that LLM-generated text is super easy to glaze over entirely because it’s so predictable and we’re already primed to predict what it’s going to say. Which is a major point of the criticism of these models: they’re great until (which is frequently) they assert something that we don’t check. Which people didn’t check sufficiently. And now you’ve got AIgg on your face, big disgrace, etc.

Things That Caught My Attention s14e14

Interconnected:

Look, I know the code isn’t rocket science. I know I could do this, eventually, and you could probably smash this out without looking, but I don’t really know React - I can’t write it idiomatically - and I don’t know about webcams in the browser, and I don’t know about the MediaStream API.

So this was a day of work in 10 minutes.

What it meant was that I could spend that day integrating hand pose detection and noodling with the actual micro-interactions. And now I have opinions about all of that!

Now, none of that Copilot-supplied code remains in my app.

What happened was that it helped me frame my problem. I was able to rapidly explore the edges of my knowledge, and figure out how to structure my questions and what I need to learn. My learning requirement is not obviated obviously…

Interconnected: AI-generated code helps me learn and makes experimenting faster

(I also read something recently about a well-known concept artist using AI art as a means of noodling at ideas/exploring surprising conceptual spaces, but I can't track that piece down again right now.)

Having played with it some, I can see the magnetic draw and fascination ChatGPT exerts. The problem is that I think a big part of that comes from this very superficiality -- its ability to bluster confidently. Which is, again, great for entertainment value, and absolutely lethal for the uses I think most people imagine for it.

Setting aside my concerns around how this is implemented and rolled out – which are MASSIVE, and honestly I am more than a little scared by the ramp-up in recent days to make this ubiquitous because dear God are we not ready for that, technically or socially – but setting that aside, the value here for me seems to be for uses where unexpected or unreliable outputs are baked into the intent. So, creative experimentation, generative play, or rapid prototyping.

But I feel that, while they exist, those use cases are much more niche, limited in appeal to a much smaller number of people. Which is not what the companies shipping these things are banking on at all -- they're banking, literally, on it being the next big money spinner, and working to capture the next big information market. Unfortunate, then, that the information that market is selling is rather spoiled.

Grab Bag -- The Moon, Mars, and Murmurations

A grab bag of links today.

I don't really have words for this one, just go ahead and take a look yourself. Orion flies far beyond the Moon, returns an instantly iconic photo

A very different scale of 'wonder of nature'. I love how, to the eye, they seem like a great, shifting single organism. Sunset murmuration in Nottinghamshire caught on camera

More science news. The last paragraph here is, uh, certainly something that felt like a weird twist in the tale, but is apparently not that unexpected. Researchers discover two new minerals on meteorite grounded in Somalia

More space. The Verge links to a TikTok by Lizzie Philip going over the (wonderful, amazing, ridiculous) plan to retrieve samples from Mars. A simple plan.

I'll quote from this one in more detail. It harks back to Birdwatch. Manifesting History

Musk’s narrative follows a similar path because up until alarmingly recently…it worked. As I’ve related above, Musk has been an irascible shithead for many years, but his overwhelming clout with the media meant that he could, effectively push through any idea his little mind desired. A flamethrower? Sure. $420 Tequila? Of course. Landing humans on Mars? He said 2022, but everybody was fine with saying “within five years” or “2029.”

Musk has gotten away with a mixture of half-truths and outright lies enough times that he believed that he had the popularity to do anything, another condition afflicted upon those with billions of dollars. When he bought Twitter, I truly think that he believed everybody would be behind him, because up until that point most of the media had been. Kara Swisher gave an interview in May about how smart Elon was. Jessica Lessin of The Information described the acquisition as “like watching a business school case study on how to make money on the internet.” Hell, he was able to con banks and investors into raising $13 billion for him. Musk still had the ability to manipulate the media - and still does, in the sense that he can still get a bunch of stories about literally anything he does - but couldn’t change the reality that he did not have a plan for the website that he tied his entire financial future to.

That’s why he seems so utterly pathetic. Musk may have had no plan, but he also appears to have never considered the eventuality that most people would dislike his choices. For someone supposedly tuned into “the future,” he continually fails to adapt to his changing circumstances, picking and losing fights and taking that as proof that his cause is just rather than his ideas being bad. And now his closest allies are wobbling sycophants like David Sacks, who accidentally ended up on the right side of the antitrust debate in an attempt to kiss up to his boss.

Yeah.

And finally, your one-stop-shop for Twitter nonsense: https://twitterisgoinggreat.com/

Dreams from our machines

A piece I read a few weeks back: The Dark Side of Frictionless Technology.

A good read for a bunch of reasons, but one thing I'd noted down to come back around to was this:

I’m obviously not against innovation or building great things. But in this newsletter, I’ve criticized Silicon Valley’s impulse toward Builder Brain: “a particular line of thinking, one that seems to run the risk of missing the root cause of a problem in service of a more exciting solution.” I see it in the jargony, scam-riddled pyramid schemes and hype cycles surrounding Web3; I see it in the technology industry’s reluctance to embrace the Right to Repair movement; and I see it in so much of the pompous arrogance of tech founders and investors who think they can waltz into an industry with little expertise or understanding and disrupt it.

(Relevant right now for no reason in particular...)

It does come back to what I was talking a bit about yesterday in Birdwatch:

As for Musk himself, well, he really is showing himself up, isn't he? Musk a) believes he is smarter than everyone else and b) holds a very simplistic (and additionally wrong) view of the world. Having all that money and power has kept him in so much of a bubble that this has never really been challenged.

So that means, when he looks at a problem in a sphere about which he knows nothing and comes up with a simplistic solution, he believes that the only reason that no one else has done that already is down to a failure of their intelligence. NOT, as is the truth, down to a failure of his intelligence and the fact that the domain he's gazing at is more complicated, and actually other people do understand it a lot better than him.

This connects together in 'technosolutionism' -- a term I've been using for a few years without being quite clear as to its provenance -- the (implied to be mistaken) belief that technology is what can solve our big problems.

Now, that's not to suggest that technology can't solve problems. It's the idea that technology alone is all we have been lacking in the pursuit of solving some big, extant problems. Any problem we have is one we just haven't found the right technology for yet.

Which is obviously bunkum. The whole 'NFTs in gaming' thing fits into that -- the idea that the reason 'gamers' don't have the ability to transfer digital items that they own between games and ecosystems is because we haven't been able to do that, technologically. Ignoring, y'know, all the social, legal, business model, etc. problems with that.

(I think cryptocurrencies fall foul of this also, albeit in a different way. There are ways in which they are technologically distinct from traditional banking and currencies. But treating them as technological solutions and therefore infallible because they are technological solutions, is nonsense. Traditional banking is also riddled with technology; however, so many of the problems cryptocurrency purports to solve are social or political problems of trust and regulation.)


I haven't revisited this yet, but my mind does regularly return to this essay -- also in The Atlantic, actually -- from Debbie Chachra: Why I Am Not a Maker.


Instead we ask for dreams from our machines
and I’m no technophobe but
They’re just not made for this –

to give us visions in the smoke
to dream beyond the glass and draw for us some meaning from the gaps
in our sandcastle models of a world.

Birdwatch

Not the good kind, sadly. Things at Twitter continue to be a mess. I've deactivated my account for the time being; that almost definitely won't stick on a permanent basis this time around, but I'm testing the water.

Lots of people shopping around for their new platform of record. Lots of platforms jockeying to be the next contender. Maybe there'll be a clear winner, maybe not, but I expect a degree of fragmentation, which might be a good thing. I think it's naive to expect to just be able to pop up a new Twitter [the good bits], without either bringing the bad bits over as well or taking on new bad bits.

Part of the problem is sheer scale -- if you really want a 'global public square' (which I still think has been shown to be a pretty terrible idea), then it needs a massive content moderation effort. Which takes a massive amount of time and resources to effect. Which is one of the reasons it's tended to be bundled alongside commercial interests and advertising. But bundling the putative 'global public square' with commercial interests is even worse, etc.

Dan Hon:

So. Mastodon.social may be too big. We may reasonably expect it to collapse and splinter into smaller instances. Maybe that is okay. Maybe it doesn’t need to be that big, and that means maybe we are not yet ready for the global public city, because if we don’t have the tools to do this well-enough at scale in a way that isn’t funded by extractive advertising, then… maybe we shouldn’t do it and try to achieve it and we’ll fail until we’ve figured out different, better ways.

https://newsletter.danhon.com/archive/s13e23-colliders-speedrunning-benevolent/

As for Musk himself, well, he really is showing himself up, isn't he? Musk a) believes he is smarter than everyone else and b) holds a very simplistic (and additionally wrong) view of the world. Having all that money and power has kept him in so much of a bubble that this has never really been challenged.

So that means, when he looks at a problem in a sphere about which he knows nothing and comes up with a simplistic solution, he believes that the only reason that no one else has done that already is down to a failure of their intelligence. NOT, as is the truth, down to a failure of his intelligence and the fact that the domain he's gazing at is more complicated, and actually other people do understand it a lot better than him.

(I expect the number of times acting under this mistaken belief actually works out for you if you are a person of immense wealth and power is non-zero. If nothing else, many of the normal rules don't apply to you, and sometimes, ignorance of or the ability to ignore those rules can let you get away with solutions that on their own would not be viable. Which does not make this approach on the whole remotely smart or sensible. And, unfortunately, when you do get outliers where this works out, those people take it as evidence of their brilliance (while ignoring all the incidents that would disconfirm it).)

And then you get crap like this going on.

Birdfall

It's really striking to watch how fast you can hollow something out.

Look, I'm not going to defend Twitter -- the website, the corporation, the idea -- but you don't have to like something to recognise an act of vandalism against it.

I'm genuinely undecided just how much of this is calculated and how much guided by delusional beliefs. To be clear, when I say 'calculated', I don't mean 'thought through with regards to its intended effect', because I really don't think any of it has been. I mean 'done for something other than the stated reason'.

The ultimatum feels like it might fall into this category. For anyone reading this who somehow missed it: Musk sent an email to all staff remaining at Twitter, telling them they had to click a 'yes' button on a Google Form, committing to 'Twitter 2.0' and 'going hardcore mode'. If they didn't... they were out (with apparently three months of severance pay).

There's a lot going on in this. Not least the fact that he gave them less than two days and did this via a bloody Google Form.

It is not a smart or good move. It was on the BBC front page at about 3am that Twitter was closing its offices to all employees until Monday. The football World Cup, apparently a major infrastructure event for Twitter in years past requiring some serious effort to keep the site running, starts this weekend. It seems that the office lockout is motivated by the fact that, because they made this opt-in, they don't actually know for sure who they need to kick out and who was just on vacation, etc.

Early reports suggest that ~75% of post-layoff employees have chosen not to take that truly tempting offer.

Look, I don't think this is tactical genius or whatever. But I think there's a good chance this is about laundering responsibility. Badly, it must be said, but I still read it as a hedged attempt at that. A way for Musk to shift the blame onto those who just weren't willing to commit to the apparent capitalist good values of abusive overwork for nebulous and possibly non-existent reward amid a background of threats and bullying. I think this is about image management and saving face (with the certain kind of crowd who don't just laugh about how ludicrous that idea is).

This also tracks with some of his recent messages, for instance celebrating -- celebrating! -- that Twitter has hit another all-time high in terms of user activity.

Well, yeah. But if you think, against all logic, reason, and common sense, that that's being driven by some positive, repeatable force that you can leverage to your advantage...

Well, you might just be Elon Musk.

The misery machine

It sounds like Musk is on course to actually buy Twitter, barring any further chicanery or legal shenanigans. (I'm planning for the fact that it will happen, but also assuming nothing, given who this involves and his track record on actually ever doing anything.)

I know this is more complicated for a lot of people, and I'm not going to hand wave it away and be like 'and nothing of value was lost', because that collapses a lot of different things that people do get out of Twitter. But also, I do think the world would be net-better-off without it at this stage.

Sure, Twitter was (comparatively) nice once. The problem is... well, the problems are many. Twitter gets held up as this bastion of free expression, the new public square, etc. etc., and it's obviously not any of those things. What makes it seem like it might be is that it's a) an early-mover social media site that didn't set walls between different groups of users and b) generally very poor at all the horrible business model stuff that social media sites cleave to. That latter point particularly does not mean it's not doing all those things -- just that it's been sufficiently, historically bad at it that it sometimes seems like these factors don't impact how it operates.

Twitter is not the public square. We are sorely lacking for those kinds of third spaces that aren't owned or commoditised by private interests, but a) it is owned and commoditised by private interests, b) public squares aren't meant to be that honkingly big and undelineated. It sucks that we're in a position where this is presented as a credible third space while being really almost but not quite entirely unlike one.

If Twitter goes away, either de facto or de jure, I think the world as a whole will be better off for it (with the caveat that there's no one clear replacement for the role that it plays -- I think that's a good thing but obviously this argument doesn't really work if some new horrible interest just picks up where it left off).

Once again: that's not to dismiss the fact that it would still suck for a lot of people. People who depend on or have various needs met by Twitter will lose out -- I'm not denying that. But part of the point is that... we don't get to choose what happens to Twitter, even collectively. Because it's not our space, and that's precisely the problem.

I'd be sad to see it go (I'm using 'go' here to mean 'change irrevocably if a significant enough number of influential people abandon the platform' rather than, like, be shut down, though I guess that's technically a possibility), but mostly out of a sense of nostalgia for what I remember it being once upon a time. Honestly, Twitter for me these days is mostly a misery machine.

"If anything good comes out of web3, I expect it will emerge despite the technologies rather than as a result of them"

When I went to look up what “web3” even was, I found no end of articles talking about how one company or another was doing something with web3, or how some venture capital firm was setting up a web3 fund, or how all the problems with the current web were going to be solved by web3… but very few that would actually succinctly describe what the term even meant. ... This definitely set off the first alarm bells for me: it’s concerning to me when people are trying extremely hard to get people to buy into some new idea but aren’t particularly willing (or even able) to describe what it is they’re doing.
...
In a lot of ways, people are also tying themselves to the technology in ways that I haven’t really seen before. You don’t see a lot of people pick a type of data model—say a linked list—and say “okay, how can I solve [x problem] with a linked list?” But that’s exactly what’s happening in web3: “How can I solve selling real estate with a blockchain?” “How can I solve voting integrity with a blockchain?” And inevitably some of these people are more tied to the idea of blockchains than they are to solving their chosen problems in a good way.
...
As far as specific projects, if anything good comes out of web3, I expect it will emerge despite the technologies rather than as a result of them. There are all kinds of people trying to solve very real problems, but they are putting all their eggs in the one basket: a type of datastore that’s often very expensive and inefficient, and which introduces complexities around decentralization, immutability, and privacy that many projects will find impossible to overcome.

Why you can’t rebuild Wikipedia with crypto at The Verge

Excellent interview with Molly White, creator of the site Web3 Is Going Just Great.