This blog post is a sophisticated piece of content marketing for a company called JUXT and their proprietary tool, "Allium." While the technical achievement is plausible, the framing is heavily distorted to sell a product.
Here is the breakdown of the flaws and the "BS" in the narrative.
1. The "I Didn't Write Code" Lie
The author claims, "I didn't write a line of implementation code." The Flaw: He wrote 3,000 lines of "Allium behavioural specification." The BS: Writing 3,000 lines of a formal specification language is coding. It’s just coding in a proprietary, high-level language instead of Kotlin.
The Ratio is Terrible: The post admits the output was ~5,500 lines of Kotlin. That means for every 1 line of spec, he got roughly 1.8 lines of code (5,500 / 3,000 ≈ 1.8).
Why this matters: True "low-code" or "no-code" leverage is usually 1:10 or 1:100. If you have to write 3,000 lines of strict logic to get a 5,500-line program, you haven't saved much effort; you've just swapped languages.
2. The "Weekend Project" Myth
The post frames this as a casual project done "between board games and time with my kids." The Flaw: This timeline ignores the massive "pre-computation" done by the human. The BS: To write 3,000 lines of coherent, bug-free specifications for a Byzantine Fault Tolerant (BFT) system, you need to have the entire architecture fully resolved in your head before you start typing. The author is an expert (CTO level) who likely spent weeks or years thinking about these problems. The "48 hours" only counts the typing time, not the engineering time.
3. The "Byzantine Fault Tolerance" (BFT) Bait-and-Switch
The headline claims "Byzantine fault tolerance," which implies a system that continues to operate correctly even if nodes lie or act maliciously (extremely hard to build). The Flaw: A "Resolved Question" block in the text admits: "The system's goal is Byzantine fault detection, not classical BFT consensus." The BS: Real BFT (like PBFT or Tendermint) is mathematically rigorous and keeps the system running even with malicious nodes. "Fault detection" just means "if the two copies don't match, stop." That is significantly easier to build. Calling it "BFT" in the intro is a massive overstatement of the system's resilience.
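To make that gap concrete, here is a minimal Kotlin sketch of the distinction (names are illustrative, nothing here is from the post). Detection just compares replicas and halts on mismatch; tolerance has to keep committing values despite up to f lying nodes:

    // Byzantine fault *detection*: if two replicas disagree, stop.
    // No attempt is made to decide which replica is lying.
    data class Digest(val value: String)

    fun detectDivergence(a: Digest, b: Digest) {
        check(a == b) { "Replicas diverged; halting" }
    }

    // What *tolerance* requires at minimum: commit a value only once a
    // super-majority (2f+1 out of 3f+1 replicas) votes for it, so up to f
    // Byzantine nodes can neither stall progress nor corrupt the result.
    fun commitIfQuorum(votes: List<Digest>, f: Int): Digest? =
        votes.groupingBy { it }
            .eachCount()
            .entries
            .firstOrNull { it.value >= 2 * f + 1 }
            ?.key

The first function is a weekend's work; the second is only the entry point to a real protocol (view changes, signed messages, leader rotation) that takes far longer to get right.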
4. The "Maintenance Nightmare" (The Vendor Lock-in Trap)
The post glosses over how this system is maintained. The Flaw: You now have 5,500 lines of Kotlin that no human wrote. The BS: This is the "Model Driven Architecture" (MDA) trap from the early 2000s: generate the code from a high-level model, then discover that every subsequent fix forces a choice between the model and the code.
Scenario: You find a bug in the Kotlin code.
Option A: You fix the Kotlin. Result: Your code is now out of sync with the Spec. You can never regenerate from Spec again without losing your fix.
Option B: You fix the Spec. Result: You hope the AI generates the exact Kotlin fix you need without breaking 10 other things.
The Reality: You are now 100% dependent on the "Allium" tool and Claude. If you stop paying for Allium, you have a pile of unmaintainable machine-generated code.
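If you wanted to keep Option A honest, you would need something like the guard below: a minimal sketch, assuming Allium exposes some way to regenerate output (it has no public API I know of, so the regenerate parameter is a hypothetical stand-in). It also assumes regeneration is deterministic, which, as a commenter points out further down, it almost certainly is not:

    import java.io.File
    import java.security.MessageDigest

    // Hypothetical CI guard for Option A: fail the build if the checked-in
    // Kotlin no longer matches a fresh regeneration from the spec.
    fun sha256(bytes: ByteArray): String =
        MessageDigest.getInstance("SHA-256")
            .digest(bytes)
            .joinToString("") { "%02x".format(it) }

    fun assertNoDrift(committed: File, regenerate: () -> ByteArray) {
        check(sha256(committed.readBytes()) == sha256(regenerate())) {
            "Generated code has drifted from the spec; " +
                "hand edits will be lost on the next regeneration"
        }
    }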
5. The Performance "Turning Point"
The dramatic story about 10,000 Requests Per Second (RPS) has a hole in it. The Flaw: The "bottleneck" wasn't the code; it was Docker Desktop for Mac's user-space network proxy (gvproxy). The BS: This is a standard "gotcha" for anyone running Docker on a Mac. Framing it as a triumph of AI debugging is a stretch; any senior engineer would check the network path when seeing high latency but low CPU usage. 10k RPS is also not "ambitious" for a modern distributed system; a single well-optimized Node.js or Go process can handle that easily.
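For calibration on that last point, here is a sketch of the kind of baseline that clears 10k RPS: a plain JDK HTTP server with a trivial handler, no framework, no tuning (the port and route are illustrative). Benchmarked directly on the host with a tool like wrk, a handler this cheap will typically exceed that number on a laptop; route the same traffic through Docker-for-Mac's gvproxy and you are measuring the proxy, not the code:

    import com.sun.net.httpserver.HttpServer
    import java.net.InetSocketAddress
    import java.util.concurrent.Executors

    // Minimal baseline server: one cheap endpoint, small thread pool.
    fun main() {
        val server = HttpServer.create(InetSocketAddress(8080), 0)
        server.createContext("/ping") { exchange ->
            val body = "pong".toByteArray()
            exchange.sendResponseHeaders(200, body.size.toLong())
            exchange.responseBody.use { it.write(body) }
        }
        server.executor = Executors.newFixedThreadPool(8)
        server.start()
    }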
I find it interesting that you ask me to discuss the subject matter when your post is intellectually equivalent to: "I generated this sequence of numbers using my pseudo-random number generator".
I find it interesting you still continue with personal attacks without saying anything of substance. What is wrong factually in the post? Why are you convinced I'm an AI? I also find it interesting that you've just been hostile throughout this entire interaction and still have nothing to add to the actual discussion. Stop the slop.
I don't think you are AI. I merely lament the fact that you found it appropriate to post a clearly LLM generated comment.
The problem with LLM generated comments for me is not their content, but rather their nature. I am not addressing the "actual discussion", as in my personal opinion, there is no "discussion" to be had. The post constitutes an automated response akin to an answering machine (albeit a much more sophisticated one), and I generally do not find discussions with answering machines interesting at all.
That sounds like a you problem. Since you're not interested in discussing anything or finding this to be an actual discussion, I'll stop trying to engage with you in good faith, sorry.
I don't think people should be rude to you, but the comment was AI-generated, right? Lots of people dislike that, as it feels kind of wasteful and disrespectful of our time; it can literally take less time for you to generate the comment than for us to read it, and the only information you added is whatever was in the (presumably much shorter) prompt. If you'd written it yourself, it may or may not be interesting and correct, but I'd at least know that someone cared enough to write it and that all of it made sense from that person's perspective. Sometimes I am interested in an LLM's take on a topic, but not when browsing a forum for humans.
I'm sorry, but if you're calling some text on a website "disrespectful of our time", I don't know what to say to you.
I stand behind everything in my comment and I have engaged in good faith with every single reply to it here (even though none of them talk about anything in the comment itself).
Go through my profile, see how I engage with people and tell me again I'm AI.
If you do not have anything to say about the subject matter of a comment and just have personal snide remarks, I do think it's a waste of your time, but do not blame me for it or tell me to leave the platform.
Typing this comment right now is a waste of time for me but I do not feel the need to grandstand over it as if there's a massive opportunity cost to it. I'm a human writing/interacting in a "forum for humans."
I didn't say or think that your account was AI-run, and I didn't tell you to leave the platform. I just tried to explain why your comment might have annoyed people and triggered negative responses (while agreeing that the rude ones were inappropriate).
Sure, cheers then. I don't care about negative responses if they're negative just because they think it's AI-generated, without saying anything substantial about the actual comment or the article. I have demonstrated my willingness to engage in good faith but those comments have not.
If negative responses have no substance behind them, it makes no sense to care about them or take them seriously.
Also, the fact that you assume it takes more time to read that comment than it took me to write it is pretty weird (I still don't get what was so wrong with the comment that simply reading it is a waste of people's time).
> the fact that you assume it takes more time to read that comment than it took me to write it is pretty weird
I didn't do that either! I had no idea whether you just fired off a quick prompt and pasted the result without even reading it, or spent ages crafting and rereading and revising it, or (most likely) something in between those extremes. I said generated comments can take less time to create than to read, and that's one reason people push back against them. There's a risk that the forum just gets buried in comments that take near-zero effort to 'write' but create non-trivial time/effort/annoyance for those of us wading through them in search of actual human perspectives. And even the relatively good ones will be little different from what we could all obtain from an LLM if we wanted it.
FWIW, I didn't even get to the substance, because I instinctively bounce off LLM-written content posted in human contexts without explanation. You're obviously free not to care about that, and I wouldn't have replied and got into this meta discussion if not for the back-and-forth you were already involved in.
edit: but if you do care about getting through to people like me, even a short manually-written introduction can make me significantly more likely to read the rest of the content. To me, pure LLM output is a pretty strong signal of a bot/low-effort human account. But if someone acknowledges that they're pasting in an AI response and bothers to explain why they think it's interesting and worthwhile, I'll look at it with more of an open mind.
I stand by the comment in its entirety. If formatting is an issue that makes it unreadable for you (to not even get to the substance), I can't help you. I do not care about "getting through" to anyone, I'm a human interacting on a human forum and I responded to the content of the article which was mostly BS about creating AI slop (on top of being a content marketing piece trying to sell people shit using deceptive claims).
But I will defend myself when I'm told obtuse things without any substance backing them.
I'm obviously just annoying you, which really wasn't my goal, so I'll stop here. But I want to note that if you think this all comes down to "formatting", you're still not hearing what I'm trying to say.
Is there even a sync to be had? The same prompt to the same LLM at different times will yield different artifacts, even if you were to save and re-use the seed.