On automating programming
Max Rottenkolber <max@mr.gy>
Saturday, 17 January 2026

Ok, large language models are all the rage right now. How do they work? What can they do? Are they Turing complete?
So, deep learning and neural networks are what LLMs are based on. We can think of a neural network as an acyclic, directed graph of logic gates. So it is like a hand-written program, but without loops. Not yet Turing complete.
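To make that concrete, here is a toy sketch in Python (the gates and the wiring are made up for illustration): a half adder as a fixed gate graph, evaluated as straight-line code with no loops.

```
# A toy "mesh": an acyclic, directed graph of logic gates, evaluated
# in a fixed order -- a hand-written program without loops.
# (Hypothetical example; gate choice and wiring are made up.)

GATES = {
    "and":  lambda a, b: a & b,
    "or":   lambda a, b: a | b,
    "xor":  lambda a, b: a ^ b,
    "nand": lambda a, b: 1 - (a & b),
}

# Each node: (gate, input wire, input wire). Wires 0 and 1 are the
# inputs; node i defines wire 2+i. No node reads a later wire, so the
# graph is acyclic.
circuit = [
    ("xor", 0, 1),   # wire 2: the sum bit of a half adder
    ("and", 0, 1),   # wire 3: the carry bit
]

def evaluate(circuit, inputs):
    wires = list(inputs)
    for gate, a, b in circuit:   # straight-line evaluation, no cycles
        wires.append(GATES[gate](wires[a], wires[b]))
    return wires

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", evaluate(circuit, (a, b))[2:])
```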
Deep learning, what does that mean? Well, an untrained LLM is a directed graph of logic gates, but we don't specify which gates; they are chosen randomly. What we architect is called a transformer: a configuration of a graph of randomized logic gates that responds well to being trained. Training just means we compare two random mutations of the logic gate mesh and reinforce whichever mutation's output we like better. Then we build a wide and deep mesh out of copies of that graph configuration, linking the directed gate-graphs in series. That is what deep refers to.
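Another toy sketch, taking the mutate-and-compare description above literally (real training adjusts continuous weights by gradient descent rather than flipping discrete gates, but the shape of the loop is similar): random gates in, a spec to score against, keep the better mutant.

```
import random

# Toy "training": start from a mesh whose gates are chosen at random,
# repeatedly flip one gate, and keep whichever variant matches the
# spec better. (A deliberately crude sketch.)

GATES = {"and": lambda a, b: a & b, "or": lambda a, b: a | b,
         "xor": lambda a, b: a ^ b, "nand": lambda a, b: 1 - (a & b)}

WIRING = [(0, 1), (0, 1)]                 # fixed architecture: two gates
SPEC = {(0, 0): (0, 0), (0, 1): (1, 0),   # spec: a half adder,
        (1, 0): (1, 0), (1, 1): (0, 1)}   # (sum, carry) per input pair

def run(gates, inputs):
    wires = list(inputs)
    for g, (a, b) in zip(gates, WIRING):
        wires.append(GATES[g](wires[a], wires[b]))
    return tuple(wires[2:])

def score(gates):
    return sum(run(gates, i) == o for i, o in SPEC.items())

names = list(GATES)
mesh = [random.choice(names) for _ in WIRING]    # untrained: random gates
while score(mesh) < len(SPEC):
    mutant = list(mesh)
    mutant[random.randrange(len(mutant))] = random.choice(names)
    if score(mutant) >= score(mesh):             # reinforce the better one
        mesh = mutant
print("trained mesh:", mesh)                     # converges to ['xor', 'and']
```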
So the width of our mesh is the input width of the program, and we can think of its depth as its logical complexity? Still not Turing complete, though. We are missing the tape.
Ok, so what we do is this: instead of running our input through the mesh just once, we evaluate the mesh iteratively, and on each iteration we feed it more and more of its own output.
mesh(truncate(input + output)) := output | halt
| iteration | input | output |
|---|---|---|
| 1 | query | -comp |
| 2 | ery-comp | utati |
| 3 | omputati | on-re |
| 4 | ation-re | spons |
| 5 | -respons | e<halt> |
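Here is that loop sketched in Python, with the mesh stubbed out as a lookup table that reproduces exactly the run above (the window width is 8, matching the truncation in the table; a real mesh would be a trained network, not a lookup).

```
WIDTH = 8
MESH = {                       # input window -> next output chunk
    "query":    "-comp",
    "ery-comp": "utati",
    "omputati": "on-re",
    "ation-re": "spons",
    "-respons": "e<halt>",
}

def truncate(text):
    return text[-WIDTH:]       # keep only the most recent WIDTH characters

window, transcript = "query", ""
while True:
    output = MESH[window]      # mesh(truncate(input + output)) := output | halt
    if output.endswith("<halt>"):
        transcript += output[:-len("<halt>")]
        break
    transcript += output
    window = truncate(window + output)
print(transcript)              # -computation-response
```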
Suddenly we have unbounded iteration, and a kind of temporary memory proportional to the width of the mesh. That sounds Turing complete and rather like any regular old program.
program(memory) := memory | halt
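That shape, concretely, with a made-up step function:

```
# An ordinary program in the same shape: a step function applied to
# memory until it signals a halt. (The step function is hypothetical.)

def step(memory):
    memory = memory + [len(memory)]    # some state update
    return memory, len(memory) >= 5    # (new memory, halt?)

memory, halted = [], False
while not halted:                      # program(memory) := memory | halt
    memory, halted = step(memory)
print(memory)                          # [0, 1, 2, 3, 4]
```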
So what gives? Well, the mesh/model is programmed automatically via the training process. I think the idea might be: hey, we have these specs, and we can test whether an implementation matches the specs, so let's automate the synthesis of (or train) the program. It is not a new idea. We already write programs that program (compilers), and we have experimented with genetic optimization. It might, however, be the right time for the combination of the two.
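The old idea in miniature, as a hypothetical sketch: a spec given as input/output examples, and a synthesizer that blindly generates candidate programs in a tiny expression language until one passes. A genetic optimizer would add mutation and crossover on top of this.

```
import random

# Program synthesis as search: generate random candidate expressions
# until one satisfies the spec. (The spec and the expression language
# are made up for illustration.)

SPEC = [(0, 1), (1, 3), (2, 5), (3, 7)]   # target behavior: f(x) = 2*x + 1

def random_expr(depth=0):
    if depth > 2 or random.random() < 0.4:
        return random.choice(["x", "1", "2", "3"])
    op = random.choice(["+", "*", "-"])
    return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"

def passes(expr):
    return all(eval(expr, {"x": x}) == y for x, y in SPEC)

candidate = random_expr()
while not passes(candidate):
    candidate = random_expr()
print(candidate)                          # e.g. ((2 * x) + 1)
```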
There are other interesting aspects of deep neural networks as general purpose runtimes for automated program synthesis. One is that they seem to be undebuggable. Then again, why debug something when your whole spiel is automating its creation?
On performance, I suppose its optimization too is automated, but at a lower level the current transformer architecture does seem amenable to parallelism in width and pipelining in depth, and we are already building hardware specialized for the evaluation (and training?) of LLMs.
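A toy simulation of those two axes, assuming dense layers with a relu nonlinearity (the shapes are arbitrary): each layer is one vectorized matrix product, so all units across the width are computed in parallel, while successive inputs march through the layers one stage per tick, pipelined across the depth.

```
import numpy as np

rng = np.random.default_rng(0)
WIDTH, DEPTH = 4, 3
layers = [rng.standard_normal((WIDTH, WIDTH)) for _ in range(DEPTH)]

def layer(i, x):
    return np.maximum(layers[i] @ x, 0.0)   # width-parallel matmul + relu

inputs = [rng.standard_normal(WIDTH) for _ in range(5)]
stages = [None] * DEPTH                     # one input in flight per layer
for tick in range(len(inputs) + DEPTH):
    # Advance the pipeline, deepest stage first so nothing is overwritten.
    for i in reversed(range(1, DEPTH)):
        stages[i] = None if stages[i - 1] is None else layer(i, stages[i - 1])
    stages[0] = layer(0, inputs[tick]) if tick < len(inputs) else None
    if stages[-1] is not None:              # one finished result per tick
        print("tick", tick, "->", np.round(stages[-1], 2))
```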
So is programming changed forever, or even obsolete? I think in practice much programming is empirical, in that we do not prove programs correct, but merely test whether known inputs produce the outputs we expect. Considering this, I find automated program synthesis at least intriguing.

These days, however, this is not at all what I hear programmers talk about. Instead, it seems programmers are using pre-trained instances of LLMs to synthesize programs, which they then modify manually to satisfaction.
My kneejerk reaction is that this amounts to holding the hammer wrong; I outlined above what I think to be the correct grip. However, at least subjectively, LLMs appear to be used successfully that way. Without probing too deeply into why that is, I will just note that there must be a very human aspect to programming, one surely complemented by a model trained on, and providing efficient access to, our entire public written record, of which many existing programs are part. And then, just avoiding the terror of a blank page can be seen as success.
However, I do question the novelty of the programs emitted by static LLMs. It is my understanding that to truly synthesize a program we should train a model to solve the target problem. We could, of course, train a model to train a model, but that just makes my point.
I am skeptical of the practice of manually modifying a machine-generated, human-readable program. That is just calling the human a robot, not actual automation. Consequently, the future programs created by automation should not be read by humans, but merely observed.