The Clojure Bait-and-Switch: A Review of Functional Design

A Java developer’s honest reckoning with Uncle Bob’s Functional Design: Principles, Patterns, and Practices

What I Hoped For

I picked up this book with a specific hunger. As a Java developer working in a C# ecosystem, I wanted the “meat.” I wanted to understand the transition from object-oriented dioramas to functional pipelines. I wanted the architectural vocabulary of the functional world — Functors, Monoids, Monads — and more importantly, I wanted to understand how those abstractions solve real engineering problems: side effects, concurrency, state management.

To be clear: I had no desire to wade through category theory proofs. But I did expect a working engineer’s definition of these terms, because they are not just academic decoration. They describe structural patterns that recur constantly in functional systems. Without them, you are building without a blueprint.

What I found instead was, largely, a Clojure manual.

The Missing Vocabulary — and Why It Matters

Uncle Bob explicitly states early on that this isn’t a book about mathematics. Fair enough. But by omitting the vocabulary entirely — without even a working definition — the book creates a gap it never fills.

Consider what these terms actually mean in practice:

A Functor is simply something you can map over while preserving structure. Java’s Stream is a functor.

A Monad is a pattern for sequencing computations that carry context — errors, nullability, asynchrony. Java’s Optional is a monad. CompletableFuture is a monad. You are already using monads. You just haven’t had a name for them.

A Monoid is a type with an associative combining operation and an identity value. String concatenation. Integer addition. List merging. These are everywhere.
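To make those shapes concrete, here is a quick sketch of how each one already appears in the Java standard library. These are not examples from the book; the names are mine:

```java
import java.util.List;
import java.util.Optional;

public class FunctionalShapes {

    // Functor: map over a structure while preserving it (a List in, a List out).
    static List<Integer> lengths(List<String> words) {
        return words.stream().map(String::length).toList();
    }

    // Monad: flatMap sequences computations that carry context (here, absence).
    static Optional<Integer> parsePort(String raw) {
        return Optional.ofNullable(raw)
                .map(String::trim)
                .flatMap(s -> s.matches("\\d+")
                        ? Optional.of(Integer.parseInt(s))
                        : Optional.empty());
    }

    // Monoid: an associative combining operation (+) with an identity value (0).
    static int sum(List<Integer> xs) {
        return xs.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(lengths(List.of("foo", "quux"))); // [3, 4]
        System.out.println(parsePort(" 8080 "));             // Optional[8080]
        System.out.println(parsePort(null));                 // Optional.empty
        System.out.println(sum(List.of(1, 2, 3)));           // 6
    }
}
```

Nothing exotic: `map`, `flatMap`, and `reduce` with an identity are the three abstractions, wearing their everyday Java clothes.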

This is precisely where the book fails its target audience. The side-effect problems Uncle Bob is trying to solve — race conditions, unpredictable state, non-deterministic behaviour — are exactly the problems these abstractions were invented to address. Naming them would have given Java and C# developers a direct bridge: “You already have these shapes in your standard library. Here is what to call them, and here is what that unlocks.” Instead, that bridge is never built.

The Clojure Problem

A significant portion of the book is built around Clojure code examples. This is not inherently wrong — Clojure is a thoughtful choice. It is a hosted language that runs on the JVM, so the ideas are theoretically portable even if the syntax is not.

Uncle Bob does include Java examples — but they are uniformly written in hard-line OO style. He never attempts to rewrite the same logic in a Java-functional style: no streams pipeline where there was a loop, no Optional chain where there was a null check, no Function composition where there was a strategy hierarchy. The functional idioms that Java has gradually acquired are entirely absent from his Java code. So the reader is left with Clojure demonstrating the functional ideal and Java demonstrating the OO problem — but never Java demonstrating the functional solution. That bridge is assumed rather than constructed, and for most readers it simply does not appear.
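Here is a sketch of the kind of rewrite the book never performs. The `Customer` type is my own invention, not an example from the book; the point is the same logic in both styles:

```java
import java.util.List;

public class JavaFunctionalBridge {

    record Customer(String name, String email) {}

    // The OO-era shape: a loop with an embedded null check and early return.
    static String firstEmailDomainLoop(List<Customer> customers) {
        for (Customer c : customers) {
            if (c.email() != null && c.email().contains("@")) {
                return c.email().substring(c.email().indexOf('@') + 1);
            }
        }
        return "unknown";
    }

    // The same logic as a stream pipeline: no mutation, no explicit null branch.
    static String firstEmailDomainPipeline(List<Customer> customers) {
        return customers.stream()
                .map(Customer::email)
                .filter(e -> e != null && e.contains("@"))
                .findFirst()
                .map(e -> e.substring(e.indexOf('@') + 1))
                .orElse("unknown");
    }

    public static void main(String[] args) {
        var customers = List.of(new Customer("Bea", "not-an-email"),
                                new Customer("Ada", "ada@example.com"));
        System.out.println(firstEmailDomainLoop(customers));     // example.com
        System.out.println(firstEmailDomainPipeline(customers)); // example.com
    }
}
```

That second method is the bridge the book leaves unbuilt: functional shape, familiar language.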

If you are not interested in learning Clojure, a substantial portion of this book becomes effectively inert. The Java examples offer no rescue — they model the problem, never the solution. You end up doing all the translation work yourself, which for a book selling architectural guidance, is an odd burden to place on the reader.

The Repetition of “Mutation Is Evil”

The book follows a predictable rhythm. Whether discussing SOLID principles, the Command Pattern, or State Machines, the conclusion is always the same: mutable state is the root of all evil. He returns to this point chapter after chapter, approaching it from different angles but never fundamentally advancing the argument.

To his credit, by the end I found myself persuaded. The case for immutability — particularly in concurrent systems where mutable shared state is the source of race conditions and deadlocks — is not a trivial one. It is a genuine engineering insight.

But the implementation gap is real. In Clojure, deep recursion can be made safe, not because the JVM optimizes tail calls, but because the language offers the recur special form*. In standard Java, no equivalent exists. Every recursive call adds a frame to the stack. Follow Uncle Bob’s functional advice too literally in Java, and you will encounter StackOverflowError in production. The book does not acknowledge this. For a practical guide, that silence is a serious omission.

*Tail Call Optimization (TCO) means that when a function’s final action is a call, the runtime reuses the current stack frame rather than creating a new one, making recursion as safe as iteration. The JVM does not perform TCO, so Clojure cannot offer it automatically either; recur is an explicit opt-in that the compiler rewrites into a loop, and it covers only self-recursion.
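A minimal demonstration of the hazard, with a loop-based rewrite that mirrors what Clojure's recur compiles down to (the numbers are illustrative, not from the book):

```java
public class DeepRecursion {

    // Textbook-functional: tail-recursive in shape, but the JVM still pushes
    // a frame per call, so a large n blows the stack.
    static long sumRecursive(long n, long acc) {
        if (n == 0) return acc;
        return sumRecursive(n - 1, acc + n); // tail call, NOT optimized by the JVM
    }

    // The safe Java translation: the same accumulator logic as a loop,
    // which is essentially what Clojure's recur produces.
    static long sumLoop(long n) {
        long acc = 0;
        while (n > 0) {
            acc += n;
            n--;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(10_000_000L)); // fine, even at ten million
        try {
            sumRecursive(10_000_000L, 0);
        } catch (StackOverflowError e) { // caught here only to demonstrate the failure
            System.out.println("recursive version overflowed, as predicted");
        }
    }
}
```

The two methods compute the same sum; only one of them survives a deep input on a stock JVM.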

The Question the Book Should Have Asked

The most interesting idea in the book is one Uncle Bob gestures at but never fully prosecutes: the Gang of Four design patterns are, in large part, workarounds for the absence of first-class functions.

Think about it carefully. The Strategy Pattern is a first-class function. The Command Pattern is a first-class function. The Visitor Pattern is a function dispatched over a type. In a language where functions are values — where you can pass them, return them, compose them — you do not need these patterns as formal constructs. You just write a function.
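A sketch of that collapse, using a hypothetical discount strategy of my own (not code from the book):

```java
import java.util.function.Function;

public class StrategyCollapse {

    // The GoF way: an interface, a concrete class per behaviour, and wiring.
    interface DiscountStrategy {
        double apply(double price);
    }

    static class HolidayDiscount implements DiscountStrategy {
        public double apply(double price) { return price * 0.8; }
    }

    static double checkoutOO(double price, DiscountStrategy strategy) {
        return strategy.apply(price);
    }

    // The functional way: the strategy IS a function value you pass in.
    static double checkoutFn(double price, Function<Double, Double> discount) {
        return discount.apply(price);
    }

    public static void main(String[] args) {
        System.out.println(checkoutOO(100.0, new HolidayDiscount())); // 80.0
        System.out.println(checkoutFn(100.0, p -> p * 0.8));          // 80.0
    }
}
```

The interface, the implementing class, and the instantiation ceremony all dissolve into a single lambda.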

This is not a minor observation. It means that much of what Java developers call “good design” — the careful application of patterns, the construction of elaborate interface hierarchies — is infrastructure built to compensate for a language limitation. Functional languages do not need the scaffolding because they have the capability directly.

Which raises the harder question: if we had treated code as composable pipelines from the beginning — simple functions transforming data, composed into larger functions — would we have needed the complex class hierarchies at all? The honest answer is probably no. The hierarchies are not a solution to real-world complexity; they are a solution to the constraints of the language we chose.
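What the pipeline framing looks like in today's Java, using java.util.function.Function composition; the toy slugify pipeline here is my own illustration:

```java
import java.util.function.Function;

public class Pipelines {

    // Three small, single-purpose functions...
    static final Function<String, String> trim = String::trim;
    static final Function<String, String> lower = s -> s.toLowerCase();
    static final Function<String, String> hyphenate = s -> s.replaceAll("\\s+", "-");

    // ...composed into a larger behaviour. No class hierarchy, just plumbing.
    static final Function<String, String> slugify =
            trim.andThen(lower).andThen(hyphenate);

    public static void main(String[] args) {
        System.out.println(slugify.apply("  Functional Design  ")); // functional-design
    }
}
```

Each stage is independently testable and reusable; the composition is the design.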

Uncle Bob circles this insight throughout the book but never states it plainly. It deserved its own chapter.

The Verdict

Functional Design is an evangelism book masquerading as an architecture guide. It will convince you that immutability is worth pursuing — and that conviction is valuable. But it stops well short of telling you how to get there from a typed, OO-dominant codebase.

What is missing is not category theory. What is missing is a conversion guide: here are the functional shapes already present in Java and C#, here is what they are called, here is how to compose them, and here is how to introduce them incrementally into a codebase that does not yet speak this language.

That book would be genuinely essential. This one is a useful, if frustrating, first step.

Is GenAI Making Code Generators Obsolete?

For a long time, being a Java developer meant being a professional typist. We all know the drill: you create a simple data object, and suddenly you are staring at a 200-line file filled with getters, setters, equals, hashCode, and constructors. It is the Java Tax.

To stay sane, we turned to magic tools. Lombok became the industry standard to hide the clutter. Before that, we had Orika, Dozer, or ModelMapper to handle the tedious job of copying data from one object to another, eventually leading many of us to MapStruct. They were lifesavers. But as we move into a world where Generative AI writes the code for us, I have started wondering: are these libraries becoming more of a burden than a benefit?

Why we invited them to the party

Let us be fair—we did not add these dependencies because we loved them. We added them because manual mapping is error-prone. If you have 50 fields in a User object and you forget to map zipCode to the DTO, that is a bug. Tools like MapStruct or the older Orika solved that. Lombok solved the wall-of-text problem. They gave us a way to keep our source code clean, our fingers from cramping, and our working memory free of a gazillion IntelliJ shortcuts.

The hidden cost of Magic

But these tools are not free. I am not talking about money, but about the architectural cost.

Every time you add a library, you are adding a learning curve. A junior dev cannot just read the Java code. They have to understand how a specific annotation processor or reflection-based mapper works. Then there is the security side. Every external dependency is another door left open for a potential vulnerability. We all remember how the entire industry spent some sleepless nights when the Log4j disaster hit back in late 2021. It was a wake-up call that even the most trusted, invisible utilities can become a massive liability overnight.

The biggest headache, though, is long-term maintenance. We have all been stuck on an old JDK because a core code-generation library was not updated to support the new Java module system or bytecode changes. When a library like Orika falls out of favor and stops being maintained, it becomes an anchor that prevents your entire stack from migrating forward. You end up trapped by the very tool that was supposed to save you time, sinking weeks of migration work into a dependency you already know you will have to abandon.

Enter GenAI – The middleman we did not know we needed

Here is where it gets interesting. If I can ask an AI to write a standard Java mapper between these two classes, it happens in two seconds. No library needed. No annotation processor. Just plain, boring, readable Java.

If the AI is doing the heavy lifting, why do we need the magic anymore? Plain Java is universal. It does not break when you upgrade your IDE or your JDK. It adds no third-party attack surface. It just works. By using AI to generate the ugly code, we get the transparency of Plain Old Java without the manual labor.
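For scale, here is the kind of hand-rolled mapper in question. The `User` and `UserDto` types are hypothetical, and this is exactly the boring, searchable code an AI can produce on demand:

```java
public class UserMapper {

    // Hypothetical domain and DTO types, for illustration only.
    record User(String name, String email, String zipCode) {}
    record UserDto(String name, String email, String zipCode) {}

    // The "boring" hand-written mapper: explicit, debuggable, zero dependencies.
    static UserDto toDto(User user) {
        return new UserDto(user.name(), user.email(), user.zipCode());
    }

    public static void main(String[] args) {
        var dto = toDto(new User("Ada", "ada@example.com", "12345"));
        System.out.println(dto);
    }
}
```

If a field is forgotten, the compiler or a one-line test catches it, and there is no annotation processor between you and the stack trace.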

The one reason to keep Lombok around

However, there is a catch. Even if an AI writes all my getters and setters, I still have to look at them. This is where Lombok might actually survive.

Lombok’s real superpower is not that it writes the code. It is that it hides the code. A class with five fields and a @Data annotation is much easier for a human to scan than a class with 100 lines of boilerplate. In a world of AI-generated code, the signal-to-noise ratio becomes the most important thing. We need to see the logic, not the plumbing.

So, where do we go from here?

Are we reaching a point where we should stop reaching for MapStruct or ModelMapper by default? If an AI can generate a standard, searchable, and debuggable Java class for us, is the risk of a third-party dependency still worth it?

I am starting to think that Plain Old Java might be making a comeback, powered by AI. We might finally get the best of both worlds: the safety of standard code and the speed of automation.

Investigating Record-Based Domain Models

I recently worked on updating our backend application to adhere to hexagonal architecture principles. It was a challenge at times, since there were no established rules about what the API should expose, but it gave me an opportunity to propose some initial guidelines for better organization and readability.

As the only Java developer on a C# team, I often face criticism about whether I am following best practices, or about the perceived unreadability of Java code. Given my multi-year experience in the field, I believe many of these concerns stem from differences in preference between Java and C# developers, or from a lack of proficiency in Java. To keep things simple and readable, I’m seeking feedback from experienced Java developers.

Currently, I’m delving into Functional Design: Principles, Patterns, and Practices by Uncle Bob. Inspired by the book, I have proposed an ArchUnit rule mandating that all newly created domain models be Java records, which provide immutability and less boilerplate via automatically generated constructors, accessors, toString(), equals(), and hashCode().

Upon having this idea challenged by various LLMs, we (the LLMs and I) have concluded that these models should be decorated with “Wither” methods (Lombok’s @With annotation) to minimize the need for instantiating the record with all its fields when only a single value is modified. For example, when changing the status, this approach reduces boilerplate code and lowers the risk of errors.

However, there were some challenges in making “Wither” methods work effectively. During our team meeting, we identified potential issues, which I will explain using a simplified code example:

@With
public record Bandwidth(int lowerLimit, int upperLimit) {

  public Bandwidth {
    if (lowerLimit > upperLimit) {
      throw new IllegalArgumentException(
          "lowerLimit must not be greater than upperLimit");
    }
  }

  public Bandwidth setNewLimits(int lowerLimit, int upperLimit) {
    return this.withLowerLimit(lowerLimit).withUpperLimit(upperLimit);
  }
}

When I run the following test, it throws an IllegalArgumentException, even though the object’s final state would satisfy the business rule, which dictates that the lower limit must not exceed the upper limit.

@Test
void updateLimits_createsNewInstance() {
  var bandwidth = new Bandwidth(10, 20);
  var result = bandwidth.setNewLimits(30, 50);
  assertThat(result).isNotEqualTo(bandwidth); // stupid assertion, but let’s run with it for now
}

The IllegalArgumentException is thrown by the intermediate instance created by .withLowerLimit(30): it carries the new lower limit (30) but still the old upper limit (20), so the canonical constructor’s check fails before .withUpperLimit(50) ever runs. To address this, I considered adding another rule: if more than one field of the object is being updated, it should happen through a single new instantiation, not chained Wither methods.
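A sketch of that rule, shown here without Lombok so it runs standalone: multi-field updates go through the canonical constructor once, so the invariant is only ever checked against the final state, while single-field withers could remain for genuine one-field changes.

```java
public record Bandwidth(int lowerLimit, int upperLimit) {

    // Compact canonical constructor: the single place the invariant lives.
    public Bandwidth {
        if (lowerLimit > upperLimit) {
            throw new IllegalArgumentException(
                "lowerLimit must not be greater than upperLimit");
        }
    }

    // Multi-field update: one instantiation, no invalid intermediate object.
    public Bandwidth setNewLimits(int lowerLimit, int upperLimit) {
        return new Bandwidth(lowerLimit, upperLimit);
    }
}
```

With this version, new Bandwidth(10, 20).setNewLimits(30, 50) succeeds, because no (30, 20) intermediate is ever constructed.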

But fundamentally, this raised the question: should we abandon the idea of simplifying domain model instantiation and remove the @With annotation? Or should we take a different approach by defining the @Builder annotation with “toBuilder=true” instead? This would allow us to implement setNewLimits as follows:

public Bandwidth setNewLimits(int lowerLimit, int upperLimit) {
  return this.toBuilder()
    .lowerLimit(lowerLimit)
    .upperLimit(upperLimit)
    .build();
}

However, I dislike that with this approach, even updating a single attribute leads to more boilerplate code. I’m eager to hear your thoughts and receive feedback from other experienced engineers, as I am certain I’m not the first person to encounter this challenge.