Infinite Bicycles for the Mind

Information theory has this concept of “surprise”. If a baby says “goo goo gaa gaa” all the time, you’d be very surprised if the next words out of its mouth were “all religions, arts and sciences are branches of the same tree.” Woah there, little Einstein…

You form a prediction of what the baby is likely to say next. More “goo goo”: less surprising. Philosophical observations: more surprising.
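Information theory makes this precise: the surprisal of an event with probability p is −log₂ p bits, so the rarer the utterance, the bigger the surprise. A toy sketch in TypeScript, with probabilities I’ve made up purely for illustration:

```ts
// Surprisal in bits: the rarer the event, the bigger the surprise.
function surprisal(p: number): number {
  return -Math.log2(p);
}

// Made-up probabilities for the baby's next utterance.
console.log(surprisal(0.9)); // "goo goo" -> ~0.15 bits, barely surprising
console.log(surprisal(0.0001)); // little Einstein -> ~13.3 bits, very surprising
```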

When coding with generative AI there’s also an element of surprise. Say you’ve never coded before and decide to vibe a dashboard page. Perhaps the layout it generates, a sidebar on the left and a table on the right, is what you expected, because you’ve used many web apps and that’s how they typically look. It amazes you that it was able to create the page at all, but the choices it made were unsurprising.

But then you feel brave enough to look at the code and you’re in another world, with onState and familiar words like “table” repeated but in angle brackets, and lots of punctuation. It’s very surprising and highly unpredictable. If you were to attempt to change some of the code there’s a high chance it would break.

But take someone like me who is familiar with programming React web apps. I’d ask it to generate a dashboard and there’s a certain familiarity I’d have with the code. I’d say “yes, it created some components and called this one Table and put them in files ending in ts, just like I expected.” There’s low surprise and high predictability.
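To make that concrete, here’s a hypothetical sketch of the kind of Table component a generator might produce. The prop names and row shape are my own inventions, not any particular tool’s output:

```tsx
// A plausible generated Table component: typed rows mapped to <tr> elements.
type Row = { id: number; name: string; status: string };

export function Table({ rows }: { rows: Row[] }) {
  return (
    <table>
      <tbody>
        {rows.map((row) => (
          <tr key={row.id}>
            <td>{row.name}</td>
            <td>{row.status}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```

To a novice that’s angle-bracket soup. To a React developer it’s exactly what the word “dashboard” predicts.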

So far my usage of generative AI in my programming career has been as a souped-up autocomplete. Starting several years ago I could type on and my editor would suggest onClick. I’d press tab, and with three keystrokes I’d typed seven letters. Today with Copilot, I can type on, hit the tab key, and I’ve generated onClick={() => signOut()}>Sign Out</button>. If that’s most of what I intended to write, that’s saved me a lot of typing.
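In context, that completion might land in a component like this; signOut here is a stand-in for whatever auth helper the app actually uses:

```tsx
// The completion in place: typing "on" plus tab yields the whole handler and label.
declare function signOut(): void; // assumed to exist elsewhere in the app

export function SignOutButton() {
  return <button onClick={() => signOut()}>Sign Out</button>;
}
```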

Via Copilot learning what I’m likely to write and me learning what Copilot is likely to suggest, I’m able to code more quickly than before. Because I have an expectation of what I want written, the output is completely predictable and unsurprising to me. I just get it done faster.

Compare this to when I’ve asked ChatGPT about why objects emit infrared light, a topic I know little about. What it writes for me is quite surprising because I didn’t know exactly what to expect. It tells me “Infrared photons are emitted by electrons when they lose energy due to thermal motion.” Is this true? I’m not quite sure.

I’m staring at concepts like “energy states” and weird symbols and familiar words like “heat” used in an unfamiliar context. Sound familiar? I’ve entered a world I don’t really understand. The surprise is high, and I’ve no idea if it’s accurate or what it’s going to say next. If I were to change a word there’s a good chance I’d introduce a mistake.

Back to entropy, at least the informational kind. When using generative AI I have a level of familiarity with the subject I’m getting it to output. For me, this means I can gauge a generated React component well but a paragraph on quantum physics poorly. The React component code will hopefully leave me unsurprised, which lets me move faster toward what I’ve already planned. The physics lesson will be surprising and unpredictable as I have little idea what to expect next, and I’ll gain slightly more familiarity by asking follow-up questions.

I think it’s helpful to know which of the two modes you are in.

Are you in a familiar mode wanting to accelerate to a known destination?

Or are you in an exploratory mode, in uncharted seas not knowing whether to expect a reef or safe waters over the horizon?

In your familiar mode the entropy is low. You type a few keys and get lots of predictable signal out. What you see is anticipated, and if there is any code you didn’t bargain for you can fix it or generate anew. The vibe is low surprise, and you can keep rolling the dice until you get what you expect.

In your exploratory mode the entropy starts high. The goal is to inform your mind or deliver the output elsewhere. It’s bespoke Wikipedia on command, with a similar risk of inaccuracies. The output is possibly wrong and you won’t know how to directly fix it, though over time you can learn what tends to break. The vibe is high surprise.

Both modes are marvels. They’re infinite bicycles for the mind. Just ask yourself whether you’re on a path you’ve ridden before.