Meet Me Halfway

Meetings (in my experience, and for me right now, at least) tend to be about four things.

Solving problems (or perhaps 'making decisions')

Generating ideas

Discovery (i.e. figuring out the shape of a thing -- maybe what your problems are, without trying to solve them yet)

Transferring information

Some of these can be accomplished by means other than a meeting, but often meetings are the best way of doing the thing in question (for the immediate local version of the thing -- all of the above are sometimes better as meetings, and sometimes better other ways). Even transferring information, the poster child for 'this could have been an email', is sometimes genuinely best as a meeting (though I think that requires specific intention about why it's better that way).

Each of these demands slightly different approaches from meeting participants. Obviously, an all-hands is very different from a writers' room, but even two meetings consisting of the same three people should function very differently if they're seeking to transfer information or solve problems.

I think bad meetings, really, come when we conflate these functions. Or worse, don't think about them at all. Fundamentally, if you're setting up a meeting to generate ideas, you want to design it differently than if your goal is to solve problems. Likewise, for me as a participant, I need to be in different headspaces, and do different prep, for each of these things.

Basic example: for problem solving, I might want to bring a strategic frame of mind. It's useful to be able to weigh an approach and find issues with it, be critical, look for tradeoffs. If you bring that same critical lens to idea generation, it can be stifling and counterproductive.

(I do think there is some flex in this -- you can have meetings that mix categories that still work well. But I think that flexibility scales inversely with the number of participants. Three of you can probably manage a meeting that does three of these functions. Six should really just stick to one.)

Here are some ways I think about preparing for meetings of different functions:

Problem Solving: What are our guiding principles for making these decisions? Who are the stakeholders? What criteria are we trying to satisfy? (And then logistical stuff like 'who is responsible for capturing the outcomes and following up on next actions?')

Discovery: What questions should I be asking? What does success look like? What isn't in this picture -- what are we missing? Could I explain this clearly to someone else? What information am I missing to be able to do that?

Information Transfer: If I'm giving out information -- making sure I have my material organised and know what I'm actually trying to communicate. If I'm receiving information -- come prepared to take notes so I can engage with the material.

(If there's nothing at the level of note-taking, then, yes, maybe this could have been an email. But even then -- sometimes there are emotional management components at work as well -- telling people something to their face, and giving them the opportunity to ask questions. Even if 100% of the output is captured in writing anyway and no one actually does want to ask any follow-up questions, the decision to do things that way is sometimes important.)

Idea Generation: I try to prime myself ahead of time, as I would for solo idea generation. Reading some related and random stuff. Looking for sources of creative entropy. Warming up with some exercises. Making sure I have scrap paper at hand to scribble during the thing itself.

Fractals, Scenic Time, Dramatic Writing, Weak Default

I've been reading *Meander, Spiral, Explode: Design and Pattern in Narrative*, as recommended in this excellent talk by Christopher Morrison. This is fantastic -- one of my favourite writing-focused books I've read for a while. I strongly recommend it.

This has had me thinking (more than usual) about design and pattern in *games* narrative. I have already been preoccupied by some of the ways in which games lack the affordances of writing in other mediums. A lot of core dramatic techniques that we take as given in those require intentional design choices to show up in games.

A big one is the tendency for games to use contiguous time -- with the player not breaking continuity of experience with the character they are playing (not just in time, but in perspective overall). *Meander, Spiral, Explode* contrasts 'scenic time' (time unfolds on the page proportionately with the speed of time passing in the story world), 'summary' (the page affords proportionally less space to the events taking place in the story world -- an afternoon passing in a sentence, for instance), and 'dilation' (a lot of space on the page proportionate to the time elapsing in the story world -- a frozen moment).

Games most often seem to unfold in 'scenic time' -- or at least, the speed at which time passes in the presentation to the player remains consistently proportionate to time unfolding in the story world, certainly as far as the narrative is concerned. We might show some sort of summary or shortcut, but most often when the player character is also experiencing that in the world, e.g. by sleeping or waiting. That's not quite summary as it appears in prose, in terms of effect, and the proportion of passing time feels consistent there, to me.

(It is more complicated than that -- time between plot beats can be wildly long or short, depending on how long a player delays a given objective. But I think, generally speaking, *as far as the narrative is concerned*, those tend not to be significant.)

Even the basic notion of having *scenes*, a given in most other forms of dramatic writing, arises in games only out of specific narrative design choices. They might just not be in the envelope for a given game, in which case, a lot of assumptions and advice about writing go out the window! For instance: the notion of tight framing -- jumping into a scene as late as possible, and cutting as early as possible. Even the notion of cutting!

(I said 'most other forms of dramatic writing' -- I'd perhaps flip that to say that not all games seek to be dramatic writing to begin with, and some are in fact Something Else, whereas dramatic writing tends to be the default state for a lot of other mediums that look somewhat similar at a glance -- TV, film, novels, theatre, etc. It becomes a problem when your default assumption is that games *are* dramatic writing and you use lenses and tools for your writing that are rooted in *dramatic* writing specifically.)

There are, pretty obviously, examples for all of the above where games *absolutely* have those affordances. A game that incorporates prose writing has access to many of the affordances of prose, of course. But rather than disproving it, I think that *is* my point -- games have access to these only as a result of design choices; in other mediums, they are the default position.

I think games writing is, if it has any kind of default state, fractal. You are replicating the same structures at multiple scales. Telling many stories that all tell one story.

This is the root, in my mind, of why games treat narrative design as a (somewhat) distinct and specialised discipline, in a way that other mediums don't. Other mediums have a strong default set of design choices (so strong that they are often not perceived as design choices at all), and don't deviate from them without good reason. As far as games have *any* kind of default, it's a weak one; you're not quite figuring things out from scratch every time, but there are design decisions that *need* to be made (separate from the story design decisions required of all comparable writing), which benefit from specialised knowledge.

Then, in the writing itself, you've got to understand what you're working with this time around -- the lenses and tools that you need to use for this specific project. And here, a toolbox geared solely around dramatic writing may work against you more than for you. (And I think this encapsulates the major stumbling block for non-games writers picking up games projects. You can be the best at dramatic writing that there is, but those might not be the tools you actually need here. And the real arse of it is when *that isn't obvious*, so you don't know there's a problem to disentangle.)

(The original thought that inspired this wander was about economy in writing. I learned to write through short fiction and games -- two forms that really emphasise *economy* in how much you are writing. Write short, okay? It's something I have to unlearn a bit when thinking about other things, e.g. long-form prose -- where brevity isn't necessarily the goal.

Coming from that starting point, I have internalised thinking of economy in writing as a strict virtue -- which it isn't. There are all kinds of effects that contraindicate economy that are valid and good. Prose can be meandering or all the rest and still be very good. One of the reasons this doesn't tend to work in commercial games is that, in those other mediums, *the writing isn't competing against other elements of the work*. That's a constant tradeoff in games -- what else is the player doing? (which means you can't rely on the writing having their full attention), and what else do they *want to do?* (are you just holding them up from getting on with the next level, which is what they really want). Which means you need to think about economy of writing as part of building the desired player experience. (And recognise this continually when it might actually be *faster* to write more.)

(Which is not to say that in other mediums the work as a whole doesn't have to compete to hold the audience's attention -- it does. It's just that the work doesn't have to balance how it competes with *other aspects of itself* for their attention.))

New rule of thumb for these posts -- when I start double-nesting parentheses, it's time to stop.

The 'show up' heuristic

Sometimes, showing up is enough. More than enough -- it's all you really need to do.

I've been pushing myself back into regular exercise lately -- something which is vitally important for me and which is usually one of the first casualties of fatigue, busyness, and all their sorry ilk. What this really means is forming new (old) habits -- finding ways to make these activities just *happen*, without each individual instance of them being a fresh negotiation you have to conduct with yourself.

(It's not just a question of motivation -- though there is that. I do get injured easily, and I have struggled with fatigue, a lot, over the past couple of years. Beyond the obvious of 'just feeling tired', it really messes with my brain. Getting overwhelmed, overstimulated, or overexhausted tend to entirely mess up my ability to rest, and send my body into stress overload. Which is a very unpleasant feedback loop that it can be a lot of effort to escape from. So, on days where I'm 'off nominal', the question of whether or not I should push myself is very real. Usually, the answer is 'yes'. But only usually.)

I like pushing myself, I like making progress. But the thing above all is recognising when you can just 'show up' -- defining what it means to do the thing without necessarily pushing yourself past your limits for that day, but also without copping out.

As well as motivation, and the bigger questions I mentioned, this also gives me a shortcut around *faff*, which disproportionately disincentivises me from doing something. If I'm reluctant to go for a run, or to the gym, half the reason -- more than half! -- is usually the thought of having to get changed and get out the door. That sometimes seems like an arbitrarily massive barrier to get over, and not only do you have to do that, you *then* have to actually do the entire thing!

So on those days, I move the goalposts. I ask myself 'can I just show up?' and define what that looks like. For running, showing up might just mean 'putting on my kit, getting out the door, and getting moving'. If I do that, I've shown up. If I then want to bail, immediately, for any reason, that's fine. I've made a good enough effort at the thing for today.

(This is not every time, either -- and those other times I still get to push myself.)

A more concise way to express this, I realise now (as I so often do with these posts) is: it's easy to accidentally make mental shortcuts about what you're doing. 'Going for a run' for me had become, mentally, 'go out and run three miles'. But for me, personally, and my goals, that's not *actually* what I care about on any given day. It's 'get out running a few times a week'. This 'show up' heuristic helps drag me back to that -- what I'm *really* trying to do. What I really care about doing.

The power of this, really, is in repetition. If I can build that habit (system), and do it regularly, the amount of progress I make each time matters less -- that incremental, steady progress is going to take me places. (This is how I approach many things that I care about progressing in.) By showing up and doing it, even 'badly' (relative to my ideal output), I maintain that cadence. (And, as ever, when you zoom out over the right timescale, the 'good' and 'bad' days are hard to delineate.)

And through repetition, you lower the perceived energy cost of doing the thing. You smooth off the edges. Your gear is ready at hand, familiar. You've learned to glide past the foibles and frictions. Your 'bad' day now is 100x better than the 'good' days of your past.

And onwards.

Carrier Signal Jan 2024

Last post here: about 260 days ago. Talking about structure, flow, and processes (especially creative ones). I am still thinking about these things, which is good, but did not remember those specific thoughts, which was differently good.

I've been wanting to get back to mark-making on here for quite a while. I feel like I hobbled my way into the end of last year -- the whole back half of it, really -- without much in the way of spare capacity. So, I was thinking, but less (outside of the immediate thinking needed to do the things in front of me) than I like to.

I've been recovering a bit of that capacity lately -- and have been having the opposite problem, where I find stuff spiralling out of my brain because it's too 'full'. I had set aside some time to try to write up at least one of those, but may not, in fact, get to that today. It's been a full brain day.

This venue actually isn't *that* appropriate for a blog any more. I intend (at some mythical point where I have the time to effect it) to move off Squarespace (good, but too expensive as a recurring cost relative to what I need it for). But: it'll do for right now.

I'm wary of declaring my intent to get back to blogging and mark-making and then not doing so. I don't have a clear space for it in my routine right now, even as the urge has progressively built. So this is very much a carrier signal, or perhaps a dial tone. It creates the space for a message, communication, without requiring that it follow.

Structures and flow

A good structure is something you can trust. It relieves you from the burden of remembering and keeping track of everything. If you can trust the system, you can let go of the attempt to hold everything together in your head and you can start focusing on what is important. ... A good structure enables flow, the state in which you get so completely immersed in your work that you lose track of time and can just keep on going as the work becomes effortless.

How to Take Smart Notes, Sönke Ahrens (2017)

That does describe the bulk of my approach to, well, anything really. Devising and deploying structures that I can rely on.

That said, I did recently, separately, read a push-back on the idea of 'flow' -- or at least the idea of flow being something 'effortless'.

Against flow

The argument there was that it's become somewhat of a pernicious myth that we should be striving for a state of joy where work doesn't feel like work. When, even for masters of their disciplines, very often it does feel like work.

Much of that rang true to me, also. Things can still be hard, feel like pushing uphill, and be a lot of conscious, active effort. But sometimes, it can feel light, like all the pieces are moving themselves.

Status 2-May-23

Been quiet on here for quite a while. Just shy of two months, in fact! Well, I aten't dead, of course -- it's just been a long slew of sickness and busyness.

I don't think I have any particular news or updates to share -- things have mostly continued as they do, without any particular distinction -- but I wanted to start rebuilding the habit of posting things here.

(One thing I find interesting is that, having dipped on doing short, regular posts, I've lost the supporting habits that feed into it. e.g. when I was posting regularly, I'd pretty constantly be having and logging little ideas for other posts. The doing-and-the-logging are the things that stimulate the thinking (or, certainly, prime the brain to recognise the right shape of ideas when they crop up.))

I've been at Failbetter for more than a year now, which is strange. It's a classic case of the passage of time where the current state feels like it has been so forever, while also feeling like it only began five minutes ago.

The year, so far, seems to be following a similar pattern to 2022 -- a disciplined and mostly relaxing January, followed by a series of escalations that just knock everything off nominal (in this case: insomnia, sickness, holidays. Which is probably better than last year's case of 'just far too much work').

Spring has also firmly sprung since my last update. The bare trees and bushes are covered over with foliage, which mostly seem to serve to conceal screaming sparrows. There are two magpies nesting near our house (one of whom has a tic a bit like mine -- I have called him 'Squeaky Beaky'). I've spotted a few blackcaps -- the first time I've ever seen them! And I've started watching the skies for the arrival of the swifts...

Status 6-Mar-23

I've been a little quiet on here the past few weeks -- first slight overbusyness, followed by slight crash. All things in cycles. As I think I've mentioned before, when I need little snippets of extra time, writing these shards is usually the first piece of compression. (I tend to write them in the morning before starting the day job, which means that it can easily get squeezed by having excess miscellaneous tasks or being slightly behind from needing more rest. If it's a choice between writing these and making coffee, I know on which side my bread is buttered.)

(Sick fish don't help, either.)

One quick share today: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit

Choice quote:

OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The blog stated that “since our research is free from financial obligations, we can better focus on a positive human impact,” and that all researchers would be encouraged to share “papers, blog posts, or code, and our patents (if any) will be shared with the world.”

Choice because a) it's nice to remember some of the shitheads backing this, but more saliently, b) 'since our research is free from financial obligations, we can better focus on a positive human impact' is so on the money (hah) that it hurts.

I refer you back to Filtered for AI Apocalypse, specifically:

This is the sort of thing that bothers me here -- the economic incentives at work (and not out of nowhere: specifically courted by those making these things) do not align -- remotely! -- with responsible creation of these products. The $ numbers at work are staggering, and if you're on the hook for that amount of money, I don't feel like you're going to be too prudent about the limits you set on these things if you can get away with them.

Which links in well to this Margins piece, talking about why voice assistants never quite stuck the landing as they might have:

ZIRP

Of course I'm going to go there: ZIRP played a huge role. Just think about every cycle we saw over the past decade. There would be some new technology like Blockchain, IoT, AI, VR, (and I'm sure I'm missing a few), and instantly everyone had to pretend it would change everything. All of these technological advancements could’ve been implemented gradually, finding product-market fit and building solid businesses from there. But instead, every startup in the space had to spout off big ideas to then be force-fed capital like a goose bred for foie gras. Then they’d never live up to their potential and be pushed into the trough of disillusionment. Startups that tried to grow responsibly would be blitzscaled into oblivion.

10x not 10%

Then you had the big tech companies. As each one sat comfortably on its own monopolized territory, gradual innovation simply didn't make any economic sense. If you're churning out profits, the incremental benefit from steady growth built on a new innovation would be a boring distraction. I still remember (and at the time buying into it) reading about the head of Google X saying that it was easier to create a 10x innovation than build a 10% improvement. People had to make statements like that because the 10% improvement would never get you the resources or promotions.

Voice couldn't simply be a cool feature that just gave you sports scores and told you the weather, and then evolve into something grander. Amazon is the flywheel king of losing money on certain things in order to build larger network effects, but it was mid-2010s blasphemy to simply have made money by…selling their speakers.

Enshittification by market autophagy.

Chekhov's Gun Bang Bang

Chekhov's gun is a writing principle that states that everything in a story should be there for a reason. Chekhov's famous example is that if we're told a rifle is hanging on the wall in chapter one, then someone needs to fire it in chapter two, otherwise why tell us there's a rifle there? Chekhov tells us not to waste time with details that aren't important. Talk about only what is in service to the story, no matter how irrelevant it may seem at the start.

Another Year Wiser: Orbital Operations for 19 February 2023

Chekhov's Gun is about lots of things. First and foremost, yes, as the quote says, it's about narrative economy – you often only have so much space and attention to play with, so, as Strunk and White might have it, 'omit needless words'.

In fact, specific sensibilities when it comes to narrative economy vary between mediums, commercial intent, and even just writers and their own voices. I think you've got a lot more latitude in your average novel than your average comic or TV show. That's partly because the latter two, at least in the commercial shapes we tend to talk about most often, have specific, formalist constraints (page count, episode length).

It's also down to the dependencies of including an element. A prose writer can just toss out a bunch of details without any marginal cost or wider implications. In comics, someone's got to draw that. In TV, it might mean costuming, propmaking, casting. In video games, concepting, modelling, rigging, animating, capturing, or all manner of thing, depending on what you're asking for.

This is where narrative economy collides with actual economy, which is very important.

(The same can be said even of prose or 'cheaper', marginally speaking, forms of writing. Even if I can produce an arbitrary amount of prose without issue, I may have a limited time assigned for it given what makes financial sense for that piece of work to look like. Even if I can write faster than your average bear, it might be incumbent on me not to do too much if there are specific formalist expectations of the content, or even just because then it becomes time intensive to edit or QA.)


But there are some other useful aspects to Chekhov's Gun. The inverse of its formulation, for instance: if you need to have someone fire a rifle in chapter two, you need to show that there is a rifle in chapter one.

This is, in some ways, 'earning' that moment (though there is more breadth to what 'earning a beat' can be). But also it's just about making the story feel intentional and inevitable. Often, it can mean surprising the audience ('a character just pulled a gun!'), but making that surprise feel concrete -- resting on information that was available to them ('of course! the gun was on the wall the whole time!'). Though outside of Chekhov's Gun, this can be earned in softer ways ('of course that character carries a gun!').

I'm reminded also of Sanderson's Law of Magic, which is interesting even if I have mixed feelings about it. Paraphrasing from memory: the level to which you are allowed to use a magic system in a work of fantasy to solve plot problems (e.g. a character using magic to get out of a fix) is proportional to the level of hard, established rules of that system. It's fine to have handwavy magic that behaves in ways that are inconsistent or hard for the audience to pin down, but if you then use it to get characters out of trouble, you're creating a situation where you're not bringing the audience along with you. It becomes effectively deus ex machina, or even 'Sonic Screwdriver Syndrome'.

(I'm increasing levels of tangent deep now, but one of the things that made me care less for Doctor Who than I already did was watching a two-parter where the first episode ended dramatically with the Doctor et al. surrounded by approaching Cybermen. Oh no! How will they escape?! And then the second episode begins with the Doctor whipping out the screwdriver and doing... something. I'm not even sure what. It felt like the most baseless manufactured tension and a waste of everybody's time.)

Anyway, Chekhov's Gun here is partly about explaining at leisure what will be dramatically important in haste later.


Last thing on Chekhov's Gun: I think it's also paying off the promise of what you're putting out there. Not just in the strict narrative economy sense, but: if you're explaining some cool facet of the world or something about a character that seems specifically interesting... you're making a kind of promise to the audience that they'll see that play out somehow. Not because that's an unnecessary detail, but because you're getting them excited about something that sounds cool, and much of the time it's better to pay that off than not.


But above all, as the quote says, this is a principle, not a prescription. I can imagine a reductive, CinemaSins-esque analysis of a work that points to all the details that it includes but doesn't pay off (or that it pays off without setting up). Which is not... how this works. This is a shorthand description of what tends to make stories connect up in exciting ways, not the be-all and end-all of how to make them.

Status 20-Feb-23

Monday, functional if brain-looping. Two cups of coffee continues to be too much for a baseline. A good weekend, where we ended up repotting a bunch of plants and roasting a shoulder of lamb. Organic things.

I've never cooked a joint of lamb before, and ended up cobbling together various recipes to suit what we had on hand/I was aiming for. Roughly:

  • Salted the lamb the day before
  • Pasted fresh rosemary and thyme with English mustard, olive oil, and a little more salt. Slashed the meat, then rubbed it with the mixture.
  • The butcher had already packed the joint with some long rosemary sprigs.
  • Put carrots and parsnips in a cast iron pan, poured in red wine, balsamic, and olive oil, and laid the lamb on top.
  • Cooked for 4 hours at 130°C, then for another 30m at 170°C or so.
  • Rested for 30m. Made gravy with the juices.
  • Served it with some (separately) roasted vegetables and Nigella's salt and vinegar potatoes.

The lamb came out very nicely, albeit not 'melt off the bone' done. Doing it over again, I think I'd roast for more like 6 hours at the lower temperature (for a 2kg joint like the one we had).

The vegetables under it didn't come out too well, either. Too tough, and a little too acerbic for my tastes. Probably didn't need vinegar on top of the wine. Though the longer cooking time might take care of the rest.

I served it with a bottle of Bordeaux, purely by virtue of Googling what sort of wine would go well with lamb shoulder. (To be fair, it did.)

Lamb shoulder is a joint that, particularly when slow roasted to a melting tenderness until it needs no more than a spoon to take it from the bone, requires wines with an essential freshness as part of its make up to slice through the fatty meat. Rioja and red Bordeaux are classic combinations but look too to Chilean carmenère or Chianti from Italy.

https://www.thewinesociety.com/food-wine-matcher/meat-course/lamb-shoulder

All in all, an excellent time.

Filtered for AI Apocalypse

On top of that, these products are extremely expensive to run. ChatGPT burns millions of dollars a day in computing power, while charging a per-access-token price for companies to plug into its models. OpenAI expects $200 million in revenue in 2023, claims that they’ll be making a billion a year in revenue in 2024, but as their AI becomes more widely adopted into other products, so will the cost of providing that service, and I see nothing about how this company could possibly be profitable. On top of that, OpenAI will be handing over 75% of its profits until Microsoft has recouped its $10 billion investment. To survive and perform even its most basic tasks, OpenAI must constantly consume information and burn capital, and as it grows in complexity, so will its technological demands, ethical concerns and genuine threats to society. ... As said last week, tech companies are incentivized to grow at all costs, even if said costs involve them acting in reckless, unethical ways. A ChatGPT-powered search engine marketed by a massive tech company as a search engine is one that users are likely to believe the answers of, meaning that when said search engine gives patently incorrect and “unhinged” results, we are likely to see situations of misinformation that dwarf the damage caused by Cambridge Analytica and Facebook. Microsoft rushed out Bing AI because they wanted their shareholders to see them as constantly growing, despite the fact that it is both regularly wrong and actively manipulative users. ... And the reason that these risks are so significant is that both Microsoft and Google were desperate to show that they will never, ever stop growing. They did not slowly and methodically roll out these products with constant reminders of their fallibility - they branded and marketed them as the future, as the new way in which we request and process information. 
Microsoft and Google’s customers are victims of rich people playing with toys, hoping to find ways to please rich people that invest the money of other rich people.

Ed Zitron: The Rotten AI

This is the sort of thing that bothers me here -- the economic incentives at work (and not out of nowhere: specifically courted by those making these things) do not align -- remotely! -- with responsible creation of these products. The $ numbers at work are staggering, and if you're on the hook for that amount of money, I don't feel like you're going to be too prudent about the limits you set on these things if you can get away with them.

I refer you back also to BRIAN BLESSED PIGEON (undeterred).

AI (which we’ll use to describe the new wave of natural language systems despite the fact that it’s basically a branding exercise invoking benign superconsciousnesses from mid-20th Century SF to sell an interactive probability map) is going to disrupt our lives. That’s a given. In exactly what ways remains to be seen. At the moment the AI chat systems we’re talking about are the image of a persuasive liar: smooth-talking fabricators of plausible nonsense without the hindrance of conscience or a residual allegiance to truth. In fact it’s worse than it appears - they’re number generators emitting the expression of statistical likelihood dressed up as meaning. Unchecked, they are misogynistic, racist conspiracists gleefully plagiarising or collaging slabs of coherence, their entire being an intricate patchwork of human self-expression stitched into the shape of consciousness.

Nick Harkaway: Not The Future

Tom Scott: I tried using AI. It scared me.

More Obsidian things

My love of Obsidian only grows. I was running a Session 0 for a new Monster of the Week campaign last night. I was taking notes on people's characters as they introduced them, all together at the bottom of my session notes doc. "Dang," I then thought. "What I really want is a separate note for each character, so I can link them for each of their histories."

I was expecting a straightforward but mildly annoying copy-paste situation, but...

It turns out that Obsidian has a built-in command that just pulls an entire section out into its own note. The fact that I was using headers for each character meant that it pulled through note titles automatically, too. I was: very pleased.

The note-linking feature was really useful here as well -- previously in MOTW character creation, I've drawn a little paper chart of the connections as we went. Too many players for that this time around, but using note links let me set 'outbound' histories between the characters. And the backlinks sidebar lets me see which other characters have history links into that character.

I also unearthed another feature that I was sure would be there, but I couldn't figure out what it would be called. It turned out it was rather simple: embedding. There are a few notes where I want a live-updating preview of content from another note, rather than having to manually replicate that information whenever it changes in one place. Boom. It's there.
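For anyone who hasn't poked at this, the syntax involved is minimal -- linking and embedding differ by a single character. (The note and heading names here are just placeholder examples, not from my vault.)

```markdown
Shares a history with [[Some Character]].    <- links to the note "Some Character"

![[Some Character]]                          <- embeds that whole note, live-updating

![[Some Character#History]]                  <- embeds just the "History" section
```

Backlinks need no syntax at all: any note linked with `[[...]]` automatically lists the notes pointing at it in its backlinks sidebar.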

I really love Obsidian.

See also Daily Notes, Daily Notes and Possum Ravens, Obsidian sync, and Experimenting with Obsidian PKM

Throwing stones

I thought I'd written about intuition amplifiers on here before, but it seems not. Most of this is just in reference to this Matt Webb post.

I’d love to build intuition amplifiers as cyborg prostheses.

Like, here’s an old one: flipping a coin.

You know when you don’t know the right course of action so you get a coin and then it lands heads and you think, oh I wish that had been tails. In the process some internal signal too faint to discern has been lifted to awareness.

His post then goes on to explore that for other concepts, like dowsing, where there's some exterior mechanism to -- in theory, at least -- amplify some internal signal and let you reject or accept that input.

I was reminded of this because last night we started watching Twin Peaks. I watched it about seven years ago now, and remember very little (characters and some big moments; almost nothing in the way of plot beats other than that).

In episode 2, Agent Cooper explains that he has access to 'a deductive technique involving mind-body coordination operating hand in hand with the deepest level of intuition'.

https://www.youtube.com/watch?v=REeCZokP9tc

and then goes on to throw rocks at a bottle, like this https://www.youtube.com/watch?v=7SjPBSm29zY. The setup being that hitting the bottle (and perhaps breaking it) is significant.

This is a) just very strange, b) a fantastic way of doing a bunch of recap exposition on various characters, and c) is obviously an intuition amplifier.

Twin Peaks is just as weird as I remember. It feels like nothing other than a surrealist soap opera, with something strange and dark buried in the sand beneath and occasionally poking through. I am yet to watch the third series, so I look forward to experiencing... whatever that entails.

Good work, 47. I'm sure they'll pay your invoice soon.

I've still got a stack of games I want to finish that I started over Christmas, but I've been preoccupied recently by Hitman Freelancer.

Freelancer is a new game mode that layers onto the existing Hitman: World of Assassination, itself a rebranding of Hitman 1–3, which came out between 2016 and last year. (Wait, I just checked, and Hitman 3 came out in 2021! What is time?)

Freelancer adds a roguelike (or roguelite? I can never remember the difference) mode to Hitman gameplay: a gameplay loop designed to be (hopefully) enjoyably repeated, with atypically punishing (though often not as punishing as it appears) failure states. In Hitman's case, that means you can't save and load in missions arbitrarily, you can lose equipment by failing missions (or entire runs), and the elements of the missions vary procedurally.

This is fantastic for Hitman. It's genuinely transformed the game for the better, and it was good to start with. There were always some odd filigrees to the Hitman design space that I felt they circled round without ever actually paying off.

The maps are big and intricate, and set up with lots of elements that are 'redundant' in the context of the core story missions -- areas that are underused, interaction points or equipment that's sort of by-the-by.

Through play, you unlock a ton of different equipment, but almost none of it is better than what you begin the game with. The game is also scored around being silent, sneaky, and evasive, which makes loud weaponry seem fairly pointless. (In practice, you could play that way and it could feel pleasantly transgressive, but it felt like working against the design intention even though they were literally giving you these weapons).

They've done various other game modes over the years which have nibbled at this. The Elusive Target model is very cool, though its appeal waned for me. In ETs, there's a real-time-limited special contract on a target. You can't save during the mission; if you fail, that's it; if you complete any objective, you're 'locked in' to that run. But it must have been quite dev-intensive to do new ones, since they all required some unique design. And the level of failure penalty ('you have permanently failed this') encouraged an extremely cautious play style, which isn't actually that fun.

Other game modes also touched on these elements, but Freelancer feels like it finally makes it all make sense. They've retroactively made the game that fills out the design space they had already built.

I'm also not sure they could have, like, skipped right to this. Part of the success of Hitman comes from the intricacy of its maps and their little clockwork tessellations. Also the breadth and number of them. By taking the somewhat unusual step of updating the maps from each game to bring them into the next one (so Hitman 3, until it was all merged, would let you play Hitman and Hitman 2 maps if you owned them), they've given themselves a surprisingly large stable of very broad and deep maps that support a lot of replay. I think, if you sat down to design Hitman Freelancer as the baseline, you just couldn't make those same design decisions first time around. At least not sensibly.

BRIAN BLESSED PIGEON (undeterred)

I finally did some prodding at ChatGPT last week. I wrote a little bit before about some of my problems with the way these things are trained, though the problem is slightly less acute (though still present) with text-based models.

My first prompt was to have it write a bio for Olivia, where it confidently declared she'd worked on several games that she hadn't, had attended a university several thousand miles away from the one she had, and had won several awards that she hadn't. The most correct thing about it came in the form of a sentence lifted verbatim from her website.

My experiments progressed to having it construct several dramatic scenarios, and to having it rewrite scenes from Shakespeare -- replacing characters with pigeons, or Brian Blessed, or setting them on a submarine.

BRIAN BLESSED PIGEON: (undeterred) And might I add, your beauty shines brighter than the moon in the sky at night!

(Strictly speaking, I'd asked it to change the dove in a previous draft into Brian Blessed, but I guess it technically followed the instructions.)

Or having it apply for a job as an assistant, but as a pigeon:

As a pigeon, I am highly adaptable and able to learn quickly. My ability to navigate my surroundings with precision and agility would make me an excellent researcher, helping you to gather information and inspiration for your stories. Additionally, my keen sense of observation allows me to pick up on small details that others may miss, which would be valuable in the editing and proofreading process.

I also have a unique perspective that would bring fresh ideas to the table. My avian nature allows me to see the world from a different angle and provide a unique perspective in the creative process.

I then tried teaching it to GM a TTRPG for me, which... did actually seem sorta possible in a slightly unexpected way, with steady coaching of the right prompts. It struggled with numbers, and I suspect it was just picking favourable outcomes for dice rolls and telling me that I had rolled an X. But if anything, that's a legitimate -- if not actually desired -- approach to the problem.

Act as the game master in a single player improvised tabletop roleplaying game set in the world of Buffy the Vampire Slayer. Answer as the game master. I am the only player. Do not write explanations. Start with character creation. ... Refer to me as if I’m Willow; don’t talk about her in the third person. However, I am not actually Willow, I am just playing the character of Willow. ... When I talk in character, I will use speech marks “Like this.” When I talk in character, don’t repeat my speech in your reply, just paraphrase what I’ve said instead. Other characters should still talk to me in direct speech.

(Some of the prompts that went into setting this up. I'd actually like to give this another go.)

My overall feeling from this play was that it was great for just that -- play. Seeing ChatGPT as a vector for entertainment felt good; its applications beyond that feel much more tenuous.


Dan Hon:

Bard made a factual error in its first demo. A point here is that the demo confidently asserts that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system”, and, whooeee let me interpret it this way: there’s no way this demo would’ve gotten out without going past a bunch of (human) eyes checking it was okay. It clearly got past all those eyes. One reason why, I’m pretty sure, is that LLM-generated text is super easy to glaze over entirely because it’s so predictable and we’re already framed to predict what it’s going to say. Which is a major point of the criticism of these models: they’re great until (which is frequently) they assert something that we don’t check. Which people didn’t check sufficiently. And now you’ve got AIgg on your face, big disgrace, etc. Things That Caught My Attention s14e14

Interconnected:

Look, I know the code isn’t rocket science. I know I could do this, eventually, and you could probably smash this out without looking, but I don’t really know React - I can’t write it idiomatically - and I don’t know about webcams in the browser, and I don’t know about the MediaStream API.

So this was a day of work in 10 minutes.

What it meant was that I could spend that day integrating hand pose detection and noodling with the actual micro-interactions. And now I have opinions about all of that!

Now, none of that Copilot-supplied code remains in my app.

What happened was that it helped me frame my problem. I was able to rapidly explore the edges of my knowledge, and figure out how to structure my questions and what I need to learn. My learning requirement is not obviated, obviously… Interconnected: AI-generated code helps me learn and makes experimenting faster

(I also read something recently about a well-known concept artist using AI art as a means of noodling at ideas/exploring surprising conceptual spaces, but I can't track that piece down again right now.)

Having played with it some, I can see the magnetic draw and fascination ChatGPT inculcates. The problem is that I think a big part of that comes from this very superficiality -- its ability to bluster confidently. Which is, again, great for entertainment value, and absolutely lethal for the uses I think most people imagine for it.

Setting aside my concerns around how this is implemented and rolled out – which are MASSIVE, and honestly I am more than a little scared by the ramp-up in recent days to make this ubiquitous because dear God are we not ready for that, technically or socially – but setting that aside, the value here for me seems to be for uses where unexpected or unreliable outputs are baked into the intent. So, creative experimentation, generative play, or rapid prototyping.

But I feel that, while they exist, those use cases are much more niche, limited in appeal to a much smaller number of people. Which is not what the companies shipping these things are banking on at all -- they're banking, literally, on it being the next big money spinner, and working to capture the next big information market. Unfortunate, then, that the information that market is selling is rather spoiled.

Filtered for Simplicity

Simplicity

Simplicity is difficult, after all, no less than complexity. Both require taste and skill. Neither is less artificial or more natural than the other. Both are necessary for good writing. And when either becomes a forced regimen, exclusive of the other, the results can be only hideous. Good writing is produced not by forsaking the beautiful for the sublime or the exorbitant for the restrained, but by finding new ways of orchestrating the interplay between them.

How to Write English Prose


Much of good design is refusing to do what’s bad.

Designing a New Old Home: Materials and Hardware


Elegance is a popular place. Dieter Rams, a massively influential product designer and successor to Bauhaus, popularized the functionalist school of thought, invoked famously in the designs of Apple products. In his Ten Principles for Good Design, Mr. Rams states rule number 10 is “Good Design is as little design as possible”, explaining it’s “less but better”:

“because it concentrates on the essential aspects and the products are not burdened with inessential”

In Praise of Messy Design


Counterpoints: Messiness and Complexity

First off, that last link in the previous section: In Praise of Messy Design

Yet, when I meet with other experienced game designers, we find ourselves admitting to fascination with decidedly inelegant game designs, and envious of those designers that permit themselves this kind of latitude. ... Maybe I enjoy messy design because it allows for more variety. When you cut away the unnecessary and create only the ‘critical’ gameplay and systems, you are following a design path to its logical endpoint. There only needs to be one Threes because it is already perfect; you can derive variations, but you cannot improve on it (sorry 2048). There is only one perfect toothbrush handle.


A previous post on here, It's Complicated:

There's a growing tendency to treat simplicity as a proxy for correctness

which links out to more of Sarris's writing, also.

Status 7-Feb-23

Still in patchy blog mode. We've got a couple of sick fish, which means some additional morning maintenance tasks, which puts pressure on the time I normally set aside to write these.

One of the fish, Hawthorne, has been inside all winter, which will probably be the case for each such season in the future. But this time around, it was all rather ad hoc, which means we haven't had the time to get a proper tank sorted. Which means regular manual water changes, scooping a few bucketfuls of the stuff out each day (trying not to catch or squash the fish) and replacing it with water of the right temperature.

Goldfish are, apparently, prodigious waste-producers. Their excretions (and any uneaten food that's not been cleaned out promptly) break down to produce ammonia, which is toxic to fish. (Looking it up now, it can burn their gills and make it hard for them to breathe, to the point of being fatal.) In a properly set up tank or pond, a biological filter converts the ammonia to nitrites, and then onwards into nitrates. Aquatic plants can then help manage levels of nitrates (though the occasional water change would still be necessary).

We don't have any hard-working bacteria at the moment, so we have to do it.

What adds to this is that we had to bring another fish inside over the weekend. Chompers has been looking peaky for quite a while. We've had him checked out before, and not found anything conspicuously wrong, but he's clearly been struggling in the cold, as Hawthorne was. We've put off bringing him inside, because it didn't seem sensible to try to support two goldfish in our temporary tank for multiple months. But we've conceded defeat now, before Chompers gets any more sideways.

We had a bit of a false spring here over the weekend, and even though the temperature's dropped again now, the sun is out, and the birds seem suitably confused.

It also reminded me of the small seasons: https://smallseasons.guide/, where we're at 'Ground thaws, fish appear under ice.' Last year was the first where I really felt attuned to the (macro) seasons in anything other than gross weather terms.

I'm looking forward to spring in earnest. Birds are already singing unsociably early. I think we're due for the joyful irritation of 4am wrens soon. Chiffchaffs and swifts slowly filtering in.

Status 3-Feb-23

Patchy blog this week. Some busy days, and one day out sick, and I've not quite been caught up enough today to spare the attention.

Please enjoy this quote about carnivorous vulture bees via The Whippet, which is a fabulous and interesting newsletter to which you should subscribe.

Via The Whippet 163:

Most bees have saddle bag–like structures on their legs for carrying pollen, but vulture bees have much smaller leg baskets, which they use for carrying meat back to their hives. ("They had little chicken baskets," said Quinn McFrederick, a UCR entomologist.) To gather their hauls, vulture bees have a unique set of teeth they use to slice bits of meat. Once in the hive, the vulture bees store the meat chunks in small wax pots, leave them there for two weeks to cure, and then feed it to their larvae.

https://arstechnica.com/science/2021/11/carnivorous-vulture-bees-have-acidic-microbiomes-to-better-digest-their-carrion/

Daily Notes and Possum Ravens

I mentioned last year that I'd started using Obsidian's daily notes feature and was finding it really valuable. That continues. I saw an article recently (which I now can't find) about someone else's approach to daily notes. They'd write down interesting things, and then at the end of the day, roll them up into a month-level note, zibaldone style.

For me, daily notes have been most useful for stuff that I want to organise/have in my head in a strictly temporally local context (i.e. really today) and then have expire, rather than worrying about storing and filing lots of daily notes. Usually, that's little snippets of stuff I'll need later but don't want to file long-term, or ephemeral tasks which are useful to turn towards when I have a spare moment, but really don't matter enough to be worth sending to any actual to-do lists.

But there are occasionally things that it would be useful to tuck away somewhere but don't fit neatly within my existing filing system. When they're more involved, I've often renamed the note at the end of the day to be more descriptive of the content. That's not entirely satisfying, but it does make rediscovery/disposal after the fact easier.

But this zibaldone approach appeals, for little snippets of stuff that I like but don't want to extensively file. Here's one I kept from a few days ago:

From https://en.wikipedia.org/wiki/Ravens_of_the_Tower_of_London

Another story concerns the two ravens named "James Crow" and "Edgar Sopper". James Crow, who was a much-loved and long-lived raven, had died. After noticing the commotion surrounding the other raven's death, Edgar Sopper decided he could "play dead" in order to bring more attention to himself. His trick was so convincing that the ravenmaster fully believed that Edgar Sopper had died. When the ravenmaster picked up the "corpse", Edgar bit the man's finger and "flapped off croaking huge raven laughs". Likewise, "Merlin" has since been known for eliciting a commotion from visitors by occasionally playing dead.

Birds!

Sociotechnological Factors

I finally finished The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger (Marc Levinson). It was a good read, but took me several months longer than it should have.

I feel like the level of information was slightly different from what I was looking for (lots of more detailed figures and values than my strict level of interest), and it took me a while to find my rhythm -- reading fast through the denser sections and slowing for the bits I was more drawn by. But it's a very interesting history of how containerised shipping came to be and the forces -- economic and otherwise -- that led it to transform the global economy writ large.

One of the striking things about it to me was that, like most stories of technology, it's really a story about labour rights, regulation, and politics -- and balancing those against innovation. There was some heavy industry and national protectionism around how shipping was handled (not just literal ships, but trucks and railroads also), with heavy locks on rates, jobs, and processes, in ways that directly worked against growth. I talked about this a bit in Nexus of unlikely forces.

(I'm not going to get into the moral or essential angles on 'growth', here. It is a concept that we tend to take as an unalloyed good, or some natural force or economic absolute, rather than the policy choice that it is. The pursuit of growth in this way is a major crucible of why The World is So Messed Up. But for the purposes of discussing this, I'm thinking internally within the system then and the system now.)

Unions and industry conferences controlled rates, jobs, and working conditions in ways that massively limited what new things were possible. They also ensured jobs and sustainable conditions [for their own industry], at least for very specific subsets of people. The story of the shipping container is really the story of the individuals and companies (Malcom McLean chief among them) that found a way to skirt those regulations and established ways of doing things enough to drive a wedge in and make the whole thing come apart at the seams.

It's not consumer-facing in the same way, but it parallels a lot of modern tech company stories, where they establish a company in illegal or loss-leading ways, build customer goodwill to the point where the old way of doing things just seems completely mad, then turn the dials as they try to actually make the business model work.

What's interesting to me, though, is the tipping point. There was a point where, to use the phrase people are so fond of using about ML models these days, 'the genie was out of the bottle'. Once the established systems and protections started to come apart, many of the unions, conferences, governments, and ports fought it tooth and nail. They were right to be wary -- in direction, if not in magnitude -- of the impact it would have on their respective concerns. But there was a definite tipping point where the ball was rolling, and those who continued resisting set themselves up to become the economic losers of the situation.

There were various ports who eventually relented and made major investments in the infrastructure needed to handle containers... only to find that the ship had, so to speak, sailed. Container shipments depended on a lower frequency of higher-volume port stops, and the early mover ports tended to be the ones around which the value accreted.

Again, I'm not trying to draw a moral stance on this. The container reshaped the entire global economy in ways that none of its proponents actually predicted either. Drawing the through-line from there to our current problems with infinite growth probably suggests that, for all the neat things it has given us, it probably wasn't the greatest move. But it's interesting to look at this story through the lens of technology and sociotechnological factors of today.

“Any change in technology,” the economist Joel Mokyr observed, “leads almost inevitably to an improvement in the welfare of some and to a deterioration in that of others.” That was as true of the container as of other technologies, but on an international scale. Containerization did not create geographical disadvantage, but it has arguably made it a more serious problem.

Fractal, Longitudinal Selves

Good quote from Kieron Gillen's newsletter that resonated with me:

But that it’s nonsense doesn’t make it useless. It’s made me think about time, perspective and who I was at each of these points. One of the things that has struck me as I’ve aged is that old people never actually explain what they mean by “you’ll see things different when you’re older. You’ll have a different perspective.” That only comes across as patronising, because it is patronising… but they’re also not really saying what they mean.

What being older does is provide a variety of perspectives – not just who you met, but from your own moving identity. You have been a different person at different places, and seen the world from where you stood. It’s not about the final perspective being correct – but an awareness that there are multiple perspectives one can inhabit, and they were (mainly) true and necessary responses to where you were in that moment. Unless they’re being a total shithead (which is entirely possible) what they’re really saying is “you will see things differently when you’re older because you will have seen things differently at different points, and then have a different understanding based on all those people you’ve been.”

The danger for the old is that, because they have had all these multiplicities of experience, they think their multiplicities are all there are. That is fatal.

245: the juvenilia jamboree

This is the fractal version of something I've held for a long time, which is utterly obvious but for me has explained so much of various people's behaviour over time. Your worldview is not universal. Worldview here being the internal model of/interpretive lens you hold up to the world based on the aggregate of your life experience.

Again, I think this is utterly obvious to a lot of people. But so much that I see in other people's behaviour that I think of as wrong or destructive (in the minor or the major key) comes down to them just not operating as if this is true, and treating their own perspective as a monolith.

(This is not the only thing that leads to those behaviours, of course -- it's perfectly possible to understand this and still do bad things or things badly. But it's a common 'thoughtless' thing, as far as I can see.)

The temporal aspect to this is important, too. I remember hearing some years ago (I think also on the Hidden Brain podcast, as referenced yesterday) about a study where participants were asked for their forward- and backward-looking perspectives on themselves over time. They were asked:

  • How much have you changed vs you a decade ago? (Answers generally boiling down to 'oh wow SO MUCH!')
  • How much do you think you'll change in the next decade? (Answers generally amounting to 'oh, not much, really'.)

Those answer patterns were consistent across all age groups. Generally, we seem to think we're 'finished' at whatever age we happen to be. We underrate how much we will continue to grow and change, despite the preponderance of past evidence for the fact that we will. Understanding this has been hugely important to me. I feel in continuity with my past and future selves, but without feeling I am the 'correct' form of George, either.

(And the change talked about doesn't mean a strictly negative reckoning, either. We can have compassion for our past (or even future) selves even if we don't like them, or wish they'd done things differently, or had access to (self-)knowledge or insights that we have in the present.)