commit 3a74310adea1e6ea4814791e973b6b5cca435657
Author: Max Rottenkolber
Date:   Mon Jan 19 12:03:00 2026 +0100

    publish blog/automated-programming

diff --git a/blog/automated-programming.meta b/blog/automated-programming.meta
index df380d7..78c2f90 100644
--- a/blog/automated-programming.meta
+++ b/blog/automated-programming.meta
@@ -3,4 +3,4 @@
 :index-p nil
 :index-headers-p nil
 :date "Saturday, 17 January 2026")
-:publication-date nil
\ No newline at end of file
+:publication-date "2026-01-17"
\ No newline at end of file

commit 35f2917e1385796cd2f9b0ff198a5a67330b40d6
Author: Max Rottenkolber
Date:   Mon Jan 19 11:52:34 2026 +0100

    execution model

diff --git a/blog/automated-programming.mk2 b/blog/automated-programming.mk2
index 44afc02..d8ff706 100644
--- a/blog/automated-programming.mk2
+++ b/blog/automated-programming.mk2
@@ -30,6 +30,14 @@ feed it more and more of its own output.
 mesh(truncate(input + output)) := output | halt
 #
 
+#table Execution model. #
+| iteration | input    | output
+| 1         | query    | -comp
+| 2         | ery-comp | utati
+| 3         | omputati | on-re
+| 4         | ation-re | spons
+| 5         | -respons | e<_halt_>
+
 Suddenly we have unbounded iteration, and a kind of temporary memory
 proportional to the width of the mesh. That sounds Turing complete
 and rather like any regular old program.
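The execution model added in the commit above can be poked at directly. Below is a toy simulation of `mesh(truncate(input + output)) := output | halt`, where a plain lookup table stands in for the trained mesh; the `MESH` dict and the width of 8 are illustrative, chosen only to reproduce the rows of the table:

```python
# Toy simulation of: mesh(truncate(input + output)) := output | halt
# A dict stands in for the trained mesh; it maps each window of the
# last WIDTH characters to the next output chunk. A missing key means
# the mesh halts.

WIDTH = 8  # width of the mesh = size of the temporary memory

MESH = {
    "query":    "-comp",
    "ery-comp": "utati",
    "omputati": "on-re",
    "ation-re": "spons",
    "-respons": "e",     # final chunk; the next window has no entry
}

def run(mesh, query, width=WIDTH):
    window, transcript = query, query
    while window in mesh:                 # halt when no continuation
        out = mesh[window]
        transcript += out
        window = (window + out)[-width:]  # truncate(input + output)
    return transcript

print(run(MESH, "query"))  # query-computation-response
```

Each pass through the loop reproduces one row of the table: `window` is the input column, `out` the output column, and the final window ("response") has no entry, which plays the role of halt.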
commit 5bcb69780168284a03e4d5aab1715b53952745a3
Author: Max Rottenkolber
Date:   Sat Jan 17 14:11:02 2026 +0100

    blog/automated-programming

diff --git a/blog/automated-programming.meta b/blog/automated-programming.meta
new file mode 100644
index 0000000..df380d7
--- /dev/null
+++ b/blog/automated-programming.meta
@@ -0,0 +1,6 @@
+:document (:title "On automating programming"
+:author "Max Rottenkolber "
+:index-p nil
+:index-headers-p nil
+:date "Saturday, 17 January 2026")
+:publication-date nil
\ No newline at end of file
diff --git a/blog/automated-programming.mk2 b/blog/automated-programming.mk2
new file mode 100644
index 0000000..44afc02
--- /dev/null
+++ b/blog/automated-programming.mk2
@@ -0,0 +1,94 @@
+#media#
+DSCN0326_blog.jpg
+
+Ok large language models are all the rage right now.
+How do they work? What can they do? Are they Turing complete?
+
+So deep learning, neural networks, is what LLMs are based on.
+A neural network we can just think of as an acyclic, directed graph of logic gates.
+So it is like a hand-written program, but without loops. Not yet Turing complete.
+
+Deep learning, what does that mean? Well, an untrained LLM is a
+directed graph of logic gates, but we don't specify which gates.
+They are chosen randomly.
+We architected what is called a transformer, a configuration of
+a graph of randomized logic gates that responds well to being trained.
+Training just means we compare two random mutations of the
+logic gate mesh, and reinforce whichever mutation’s output we liked better.
+And then we build a wide and deep mesh out of copies of that graph
+configuration; we link the directed gate-graphs in series.
+That's what deep refers to.
+
+So the width of our mesh is the input width of the program,
+and the depth we can think of as its logical complexity?
+Still, not Turing complete though. We are missing the tape.
+
+Ok so what we do is this: instead of running our input through the
+mesh just once we evaluate the mesh iteratively, and on each iteration we
+feed it more and more of its own output.
+
+#code#
+mesh(truncate(input + output)) := output | halt
+#
+
+Suddenly we have unbounded iteration, and a kind of temporary memory
+proportional to the width of the mesh. That sounds Turing complete
+and rather like any regular old program.
+
+#code#
+program(memory) := memory | halt
+#
+
+So what gives? Well, the mesh/model is programmed automatically via the
+training process. I think the idea might be, hey we have these specs,
+and we can test if an implementation matches the specs, let’s automate
+the synthesis of (or train) the program. It’s not a new idea. We already
+write programs that program (compilers), and we have experimented with
+genetic optimization. It might however be the right time for the
+combination of the two.
+
+There are other interesting aspects of deep neural networks as general
+purpose runtimes for automated program synthesis. One is that they seem
+to be undebuggable. Then again, why debug something when your whole
+spiel is automating its creation?
+
+On performance, I suppose its optimization too is automated,
+but at a lower level the current transformer architecture does seem
+to be amenable to parallelism in width and pipelining in depth,
+and we are already working on hardware specialized for evaluation
+(and training?) of LLMs.
+
+So is programming changed forever, or even obsolete? I think in
+practice much programming is empirical in that we do not
+prove programs correct, but merely test whether the outputs are
+acceptable for known inputs. Considering this I find automating
+program synthesis at least intriguing.
+
+#media#
+DSCN0324_blog.jpg
+
+These days this is not at all what I hear programmers talk about, however.
+Instead it seems programmers are using pre-trained instances of LLMs to
+synthesize programs which they then modify manually to satisfaction.
+
+My kneejerk reaction is that this amounts to holding the hammer
+wrong; I outlined above what I think to be the correct grip.
+However, at least subjectively, LLMs do appear to be used
+successfully that way. Without probing too deeply into why that is,
+I will just note that there must be a very human aspect to
+programming that surely is complemented by a model trained on,
+and providing efficient access to, our entire public written record,
+which many existing programs are part of.
+And then, just avoiding the terror of a blank page can be
+seen as success.
+
+However, I do question the novelty of the programs emitted by
+static LLMs. It is my understanding that to truly synthesize a
+program we should train a model to solve a target problem.
+We could of course train a model to train a model, but that
+is just making my point.
+
+I am skeptical regarding the practice of manually modifying a
+machine-generated, human-readable program. This is just calling
+the human a robot, and not actual automation.
+Consequently, the future programs created by automation should
+not be read by humans, but merely observed.
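A footnote on the training story the post tells (compare two random mutations of the mesh, reinforce whichever output we liked better): that procedure is essentially random hill-climbing, a crude stand-in for gradient-based training. A minimal sketch, where the parameter vector, the made-up `TARGET`, and the `loss` function are all illustrative, not a real network:

```python
import random

random.seed(0)

# The behaviour we want the mesh to have; purely illustrative.
TARGET = [0.2, -0.5, 0.9]

def loss(params):
    # How far the mesh's output is from what we liked.
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, scale=0.1):
    # A random mutation of the "logic gate mesh".
    return [p + random.uniform(-scale, scale) for p in params]

# "Randomly chosen gates": start from arbitrary parameters.
params = [0.0, 0.0, 0.0]
for _ in range(2000):
    a, b = mutate(params), mutate(params)
    # Reinforce whichever mutation we liked better, keeping the
    # current parameters if neither is an improvement.
    params = min(params, a, b, key=loss)

# loss(params) is now close to zero: the loop has "trained" the
# parameters toward TARGET without ever computing a gradient.
```

The loop never computes a derivative, which is the point of the post's framing: given only a way to compare two candidates, synthesis of the program is automated. Real training replaces the compare-two-mutations step with backpropagation, but the outer shape is the same.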