
Featured Image. Credit CC BY-SA 3.0, via Wikimedia Commons

Suhail Ahmed

This 1970s AI Could Write Poems – Sort Of

AIHistory, EarlyAI, RetroAI, TechThrowback, VintageComputing


In a decade better known for synthesizers, space probes, and shag carpets, a handful of researchers tried something stranger: they taught small, rule-bound programs to write verse. The outputs were clumsy but oddly moving – like messages washed ashore from a machine mind learning to speak. Behind the scenes, early algorithms stitched words using probability and grammar, not meaning, yet sometimes they stumbled into lines that felt uncannily human. Those sparks raised a thorny question that still lingers: if a poem makes you feel something, how much does its author need to understand? Today’s vast language models overshadow those toy systems, but the 1970s prototypes set the stage, one shuffled noun phrase at a time.

The Hidden Clues

The Hidden Clues (image credits: unsplash)

What made those vintage verses feel alive were the tiny irregularities – the awkward enjambments, the sudden, stark images that looked almost intentional. Early programs shuffled word lists and grammar templates, then nudged choices with simple probabilities, a far cry from today’s deep networks. Yet readers found patterns where none were planned, the same way we spot animals in clouds or faces on the Moon. That interpretive habit turned clatter into cadence, and output into art, at least on a forgiving day.
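The template-and-word-list approach is easy to sketch. The following is a minimal illustration in modern Python, not a reconstruction of any specific 1970s program; the word lists and templates are invented for the example:

```python
import random

# Tiny hand-built vocabularies, standing in for the curated word lists of the era.
NOUNS = ["moon", "engine", "river", "clock", "sparrow"]
VERBS = ["hums", "forgets", "devours", "mirrors", "sleeps"]
ADJS = ["iron", "pale", "restless", "hollow", "electric"]

# Grammar templates: each slot either names a word list or is a literal word.
TEMPLATES = [
    ("the", "ADJ", "NOUN", "VERB"),
    ("a", "NOUN", "VERB", "the", "ADJ", "NOUN"),
]

LISTS = {"NOUN": NOUNS, "VERB": VERBS, "ADJ": ADJS}

def line(rng: random.Random) -> str:
    """Fill one randomly chosen template with randomly chosen words."""
    template = rng.choice(TEMPLATES)
    words = [rng.choice(LISTS[slot]) if slot in LISTS else slot for slot in template]
    return " ".join(words)

rng = random.Random(7)
poem = "\n".join(line(rng) for _ in range(4))
print(poem)
```

Every line is grammatical by construction, yet no part of the program represents what any word means – the reader supplies that.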

It’s tempting to dismiss the whole enterprise as a parlor trick, but the clues hidden in those lines mattered. They showed that structure alone can evoke emotion, even when semantics is paper-thin. They also revealed a counterintuitive lesson: randomness, carefully constrained, can mimic risk-taking and surprise – two qualities many poets chase by hand.

From Ancient Tools to Modern Science

From Ancient Tools to Modern Science (image credits: unsplash)

The seeds of machine poetry sprouted from mid‑century ideas: information theory’s playful “fake English,” context‑free grammars, and early computational linguistics. By the 1970s, researchers adapted these tools to small computers with memory measured in kilobytes, not gigabytes. They leaned on modular rules – subject, verb, object – and basic statistical tricks to choose words that “fit” just enough. The result was text that walked like language but couldn’t yet think in it.

These systems foreshadowed today’s pipelines more than they’re given credit for. A modern model predicts the next token; a 1970s script picked the next word from a tiny menu with weighted dice. Both rely on patterns in data; the difference is scale, representation, and learning. If today’s engines are orchestras, those earlier programs were a tin whistle – limited in range but capable of a clear, surprising note.
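The "weighted dice" picture can be made concrete. Here is a hedged sketch in Python; the frequency table is invented for illustration and the mechanism is simply weighted random choice, the statistical trick the era's scripts implemented by hand:

```python
import random

# Hypothetical frequency counts for candidate next words - the "tiny menu".
menu = {"sea": 5, "silence": 3, "machine": 2}

def next_word(menu: dict, rng: random.Random) -> str:
    """Pick one word, with probability proportional to its count."""
    words = list(menu)
    weights = [menu[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(next_word(menu, rng))
```

A modern language model does something structurally similar at each step, except its "menu" covers tens of thousands of tokens and its weights are learned rather than tabulated.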

Inside the Machine

Inside the Machine (image credits: unsplash)

Under the hood, poets-by-code typically used one of two strategies: rule trees or stochastic chains. Rule trees enforced a skeletal grammar and slotted words into templates, while chains used the recent word history to predict likely next words. Neither approach understood context beyond a narrow window, so the systems repeated themselves, contradicted earlier lines, or drifted into nonsense. Ironically, those flaws sometimes read like intentional avant‑garde choices.
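The stochastic-chain strategy is essentially an order-1 Markov chain: record which words follow which in a source text, then walk those links. A minimal sketch, with a toy corpus invented for the example:

```python
import random
from collections import defaultdict

# Toy corpus standing in for a single author's pages.
corpus = "the sea remembers the shore and the shore forgets the sea".split()

# Build the chain: map each word to every word observed directly after it.
chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

def generate(start: str, n: int, rng: random.Random) -> str:
    """Walk the chain from `start`, stopping early at a dead end."""
    out = [start]
    for _ in range(n - 1):
        followers = chain.get(out[-1])
        if not followers:  # no observed successor: the chain drifts to a halt
            break
        out.append(rng.choice(followers))
    return " ".join(out)

rng = random.Random(3)
print(generate("the", 8, rng))
```

Because the window is only one word wide, the output can loop or contradict itself – exactly the repetition and drift described above.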

Constraints did the heavy lifting. By forcing syllable counts, rhyme positions, or word categories, the machine could hit formal targets while still varying content. Think of it as rails on a mountain road: the guardrails didn’t drive the car, but they prevented the worst crashes, letting serendipity deliver scenic views.
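Formal constraints like syllable targets can be enforced by generate-and-test: produce random candidate lines and keep only those that hit the target. This sketch uses a crude vowel-group heuristic for syllable counting (real systems often used lookup tables); the word list and target are invented for illustration:

```python
import random
import re

def syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; imperfect, but consistent.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

WORDS = ["ember", "cold", "river", "glass", "sings", "under", "stone", "light"]

def line_with_syllables(target: int, rng: random.Random, tries: int = 10000) -> str:
    """Rejection sampling: random lines until one sums to the target count."""
    for _ in range(tries):
        cand = rng.sample(WORDS, rng.randint(2, 5))
        if sum(syllables(w) for w in cand) == target:
            return " ".join(cand)
    raise RuntimeError("no line found within the try budget")

rng = random.Random(1)
print(line_with_syllables(5, rng))
```

The constraint never chooses words itself; it only rejects lines that miss the formal target, which is precisely the guardrail-not-driver role described above.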

Human Hands in the Loop

Human Hands in the Loop (image credits: unsplash)

Despite the hype, early machine poetry often depended on careful human curation. Researchers would generate dozens or hundreds of outputs, then select the handful that felt coherent, funny, or strange in the right way. Editing added cohesion, cut dead ends, and emphasized accidental motifs that the code never intended. In effect, people and programs co‑authored the poems, even if the machine got top billing.

This collaboration wasn’t cheating; it was a recognition of the tool’s limits and strengths. The machine provided combinatorial reach and surprise; the human supplied taste, context, and narrative glue. That workflow looks familiar today when artists prompt models repeatedly, harvest the best takes, and stitch them into a finished piece. The partnership predates the prompt era by decades.

Why It Matters

Why It Matters (image credits: wikimedia)

The 1970s experiments reframed creativity as a system problem: if structure and chance can spark feeling, then inspiration isn’t mystical – it’s engineered. That view challenged romantic notions of authorship and forced critics to ask what we value in a poem: intent, craft, or impact. It also seeded ideas that later matured into computational creativity, generative art, and interactive storytelling. The throughline from rule lists to neural nets is less a leap than a staircase.

Compared with traditional poetry, machine‑assisted verse traded depth for scale and surprise. Where a human revises to refine meaning, an algorithm explores possibilities at speed and lets selection stand in for reflection. That contrast explains why early outputs felt both shallow and electric. They were sketches of what might be possible once the models learned to carry context and nuance.

Global Perspectives

Global Perspectives (image credits: unsplash)

On both sides of the Atlantic, writers experimenting with constraints treated computers as comrades in invention, not rivals. In research labs and art schools from the United States to Europe, small groups hacked together programs that permuted word sets, sampled corpora, or simulated conversational turns. The cultural reactions varied: some readers embraced the novelty; others saw it as hollow mimicry. Either way, the discourse expanded, pulling poets, programmers, and philosophers into the same room.

What emerged was a pragmatic pluralism. Machine verse lived alongside concrete poetry, sound poetry, and performance, another tool for bending language. A few practical takeaways stand out:

– Early memory ceilings forced radical minimalism.
– Rule constraints doubled as creative prompts.
– Audience expectations shaped judgments as much as outputs.

Those lessons still travel well.

Surprising Data Points

Surprising Data Points (image credits: unsplash)

For all their limitations, the numbers behind the era tell a story. Many systems ran on machines with memory so small that a single modern image file would have swallowed it whole. Training materials were tiny – sometimes bespoke word lists or a single author’s pages – so overfitting wasn’t a risk, it was the point. And yet, observers reported moments of genuine aesthetic response, a reminder that perception does heavy lifting in art.

Consider a few anchors:

– Programs often relied on word frequencies from narrow sources rather than broad corpora.
– Storage lived on tape or floppy disks, demanding compact grammars.
– Processing times pushed authors to value quick heuristics over deep analysis.

Viewed together, these constraints shaped a style – spiky, fragmentary, and oddly memorable.

The Future Landscape

The Future Landscape (image credits: unsplash)

Fast‑forward to today, and the landscape looks like science fiction next to the 1970s toolkit. Transformer models map long‑range dependencies and style, while fine‑tuning and prompting steer tone with precision the old scripts never had. Still, the core questions persist: authorship, meaning, and the ethics of training. As systems grow stronger, credit and consent have become central, not optional footnotes.

There’s also a quiet movement back to small, interpretable models. Researchers want systems you can open up, tinker with, and understand line by line – the way you could in the 1970s. Expect more hybrids that combine transparent constraints with learned style, plus renewed interest in preserving early digital art so we can trace the lineage clearly. The future might be enormous, but it will keep borrowing wisdom from the tiny past.

How You Can Engage

How You Can Engage (image credits: unsplash)

If the idea of machine‑made verse intrigues you, there are simple ways to dive in. Explore constraint‑based writing yourself – set a rhyme scheme, restrict your vocabulary, or imitate a form, then see how far structure carries you. Try a small, local text generator and compare its outputs before and after you tweak rules; noticing the changes is a fast lesson in how models “think.” You’ll develop an eye for where human judgment adds magic.

Support preservation efforts for early digital literature and art, since the history is fragile and easily lost. Back open, interpretable tools that let students and creators learn by building from scratch. Most of all, read machine‑assisted poems alongside human‑crafted ones and ask what each does best. You might find that the tension between them is where new art happens – what would you have guessed?
