The Great Library Merge: A Journal on Taming Complexity
From Many to One: Our Journey in Merging Repos to Conquer Complexity
In the world of software engineering, growth is a good problem to have. But with growth in services, code, and people comes its inevitable companion: complexity. At Swym, we undertook a significant project to tackle this head-on by merging our numerous internal Clojure library repositories into a single, unified repo.
This wasn't just a technical exercise in tidying up. It was a strategic move to simplify our systems, reduce cognitive overhead for our engineers, and pave the way for a more efficient future. This is the story of that project—our motivations, our strategy, our risks, and the lessons we learned along the way.
The Tipping Point: Why We Had to Change
Like many growing companies, our architecture evolved. The initial choice to have separate repositories for different services and libraries made sense at the time, but as we scaled, this separation started to create friction. The real-world trigger for the merge was the sheer complexity involved in what should have been simple tasks: needing to build, deploy, test, and manage numerous components for a single function was becoming a bottleneck. The cost of adding value was becoming too high compared to the value itself.
Our stack is built on Clojure, a powerful and dynamic language running on the JVM. By its nature, Clojure allows for a lot of flexibility through guidelines rather than rigid rules. While this is a strength, it also meant that over time, inconsistencies grew across our many repos. This lack of a singular, enforceable guideline was a constant challenge.
There was also a subtle, forward-looking trigger: a centralized context would be hugely beneficial for the AI tools we are increasingly relying on. While not the sole reason, it was a necessary condition to unlock their full potential. We knew we had to re-evaluate our architectural choices to get ready for the future.
The Game Plan: A "Band-Aid Rip-Off"
We had attempted this before and paused, knowing the problem had merit but that the timing wasn't right. This time, we had a clear strategy. Aravind Baskaran, who spearheaded the initiative, called it the "band-aid rip-off" approach.
The core idea was to make the change swiftly and cleanly, focusing only on simplifying deployment and structure without altering any existing functionality in the first phase. All the code would move, the structure would change, but the underlying logic would remain untouched.
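The mechanics of "all the code moves, the logic stays untouched" can be sketched with git's classic subtree-merge recipe, which folds a standalone repo into a subdirectory of the unified repo while keeping its full commit history. The repo names, paths, and layout below are illustrative stand-ins, not our actual repositories; the sketch builds two throwaway local repos so it can run anywhere.

```shell
#!/usr/bin/env sh
# Illustrative sketch (all names hypothetical): fold a standalone library
# repo into a monorepo subdirectory, preserving its full commit history.
set -e
WORK=/tmp/lib-merge-demo
rm -rf "$WORK" && mkdir -p "$WORK"

# --- stand-in for an existing library repo ---
git init -q -b main "$WORK/lib-foo"
cd "$WORK/lib-foo"
git config user.email dev@example.com && git config user.name dev
echo '(ns lib-foo.core)' > core.clj
git add . && git commit -qm "lib-foo: initial commit"

# --- stand-in for the unified monorepo ---
git init -q -b main "$WORK/monorepo"
cd "$WORK/monorepo"
git config user.email dev@example.com && git config user.name dev
echo '# monorepo' > README.md
git add . && git commit -qm "monorepo: initial commit"

# --- the merge itself: git's subtree-merge recipe ---
git remote add lib-foo "$WORK/lib-foo"
git fetch -q lib-foo
# Record a merge without touching the working tree yet...
git merge --allow-unrelated-histories -s ours --no-commit -q lib-foo/main
# ...then graft the library's tree under libs/lib-foo/ and commit.
git read-tree --prefix=libs/lib-foo/ -u lib-foo/main
git commit -qm "Merge lib-foo into libs/lib-foo (history intact)"

# The library's original commits are now part of the monorepo's history.
git log --oneline | grep "lib-foo: initial commit"
```

Because the library's commits become ancestors of the merge commit, `git log` and `git blame` keep working across the move, which is what makes a "no functionality changes" migration auditable.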
This approach had a few key principles:
No Functionality Changes: This was critical. By ensuring the core logic was the same, we could be confident that, with baseline tests passing, the system would behave as expected.
Active Monitoring is Key: After the merge, we had to be incredibly watchful. We paid close attention to key metrics like build and deploy times—which we expected to be reduced by 50-60%—but also watched for any negative runtime implications.
A Stable Technical Foundation: A key technical goal was to have a solid map of all our component versions—from the JVM and Clojure itself to our Kubernetes environment variables—to ensure a smooth and predictable rollout.
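As a rough illustration of that component-version map, a single manifest checked into the repo root could serve as the source of truth. Every name and version below is hypothetical; the point is the shape, not the values.

```clojure
;; Hypothetical sketch of a single source-of-truth version manifest
;; (component names, versions, and env vars are illustrative only).
{:runtime {:jvm        "21.0.4"
           :clojure    "1.12.0"}
 :build   {:tools-deps "1.12.1530"}
 :deploy  {:kubernetes "1.30"
           :env-vars   {:JVM_OPTS  "-Xmx2g"
                        :LOG_LEVEL "info"}}}
```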
Navigating the Risks: Don't Stop the Train
A project of this scale is not without its risks. The biggest fear was the point of no return. Aravind likened it to "stopping and reversing a large train". An undertaking this large had to be atomic—it was either done or not done, with no room for a "half-done" intermediate state.
Other significant risks included:
Lack of Signals: The fear that even with testing, we might not have adequate signals to detect subtle issues once the changes hit our production Kubernetes environments.
Feature Release Conflicts: We are constantly shipping new features. Trying to coordinate this merge with twenty other feature releases could have created a coordination nightmare.
To mitigate this, the project was given a critical "shelf life". It had to be completed within a specific, tight time window to avoid causing more damage than it solved. If it took too long, the disruption to business requirements would outweigh the benefits. We carefully planned this window to execute the project with minimal disruption, ensuring no other deployments proceeded without a "green flag" from the merge team.
Lessons from the Trenches: Our Key Takeaways
This project was a tremendous learning experience. Here are our biggest takeaways for any team considering a similar endeavor.
1. This is Not a Side Project
Our previous attempts had stalled because the work was treated as a "side experiment". The most significant learning was that for a project of this magnitude to succeed, it cannot be a side project. It has to be the project. It requires focus and commitment to get it done.
2. Empower a Single Driver
While input from many is crucial, the execution needs a single, responsible owner to avoid committee-style negotiations. We found success by letting the person who wanted to take it forward build the proposal, and then enabling them to succeed, knowing they would be responsible for making and fixing mistakes along the way.
3. Embrace Your Tools (and Your Team)
The AI tooling available today significantly reduced the manual grunt work that would have made this project much more arduous in the past. But tools are only part of the equation. The project's success was driven by the team's conviction that this was a necessary pain point to address and by their confidence in the system's scalability and reliability.
4. The Merge is a Foundation for the Future
This project wasn't just about cleaning up the past; it was about building a better future. By bringing our Clojure code together, we now have our best shot at developing and enforcing community-driven guidelines and standards internally. It allows our internal community to grow and build its legacy in terms of how we build software in 2025 and beyond.
The Journey to Simplicity Continues
The great library merge is complete, but it marks a beginning, not an end. Our journey toward simplicity is ongoing. Over the next few weeks and months, we will be closely observing the ripple effects of this change—both good and bad—to continue learning and refining our processes. The immediate next steps involve improving our monitoring signals so we can understand system health without digging into every detail, making our operations even more efficient.
This merge has given us more than just a cleaner codebase; it has provided a simplified, stable foundation. It enables us to finally attack the larger opportunities we had previously put on hold. By taking on this challenge, we've not only improved our day-to-day development reality but have also reaffirmed a core engineering principle: sometimes, to go faster, you first have to simplify.
Reflections from 2026
Looking back from early 2026, the “Great Library Merge” of August 2025 has proven to be a watershed moment for our engineering velocity. The most immediate win was the dramatic reduction in overhead; our build and deployment times dropped, transforming what used to be a sluggish bottleneck into a streamlined, rapid-fire process. However, the most strategic advantage emerged in our AI integration. By consolidating our entire Clojure ecosystem into a single, unified source of truth, we provided our AI agents with a comprehensive, centralized context. This “one-stop shop” for code architecture has made it significantly easier to build agents that understand our entire system’s nuances without getting lost in repository sprawl. What started as a cleanup project has become the essential foundation for our AI-driven future.



