Here at Addepar, our world revolves around data. Our team, the Feeds team, lives at the very heart of this world. We build and maintain the thousands of data pipelines — or 'feeds' — that ingest information from countless banks, brokerages and financial institutions around the globe. Our job is to translate that raw, often messy data into the clean, unified model that powers the entire Addepar platform. It’s a critical function and a daily challenge that ensures our clients have the clarity they need to make the best possible decisions.

For years, we operated with a workflow that was effective, but also a source of significant friction. When daily data exceptions occurred, the responsibility fell to our colleagues on the Operations team to resolve them. These are talented financial analysts hired for their deep domain knowledge and analytical skills. Yet our old process required them to step into the shoes of a software engineer. They had to install developer environments like IntelliJ, learn the basics of Git, and edit Java code files directly just to make operational overrides.

We knew this wasn't sustainable. It created a series of cascading problems that we, both as engineers and as a company, were determined to solve. During our annual hackathon, we made a breakthrough.

The hidden costs of our old system

To understand the breakthrough, you first have to understand the bottlenecks we were living with every day. The issues went beyond the technical; they had a real impact on our people and on our ability to scale.

For our analyst colleagues, the challenges were immense. First, the onboarding process was incredibly long. We had to spend months upskilling new analysts in developer tooling, extending their total onboarding time to around six months. This was time they could have spent honing their financial expertise. Second, the workflow itself was slow. Because analysts were directly editing Java code, any change required the entire feed to be redeployed into production, a process that added an average of 10 minutes to every single fix. Finally, as our company grew globally, this model created a work imbalance. A strategic decision not to install developer tools on analyst computers in India meant our colleagues there were limited in the operational work they could perform.

As engineers, we faced our own set of frustrations. The biggest bottleneck by far was our regression testing process. To ensure that converting a feed from the old Java-based model to our new target architecture didn't introduce errors, we had to run massive tests. For a large feed, a single testing cycle could take a full 24 hours, and we often needed several iterations to get it right. This glacial pace made rapid progress nearly impossible. Furthermore, the "config wizard" tool we had built to help automate these conversions struggled with the wide variety of edge cases found in our feeds, meaning most conversions required a lot of manual engineering work.

The vision and the growing debt

Years ago, work had begun on a much better model: making our data feeds "config-driven." The vision was to separate the operational logic — the maps and overrides — from the core code by storing it in external configuration files. This would allow analysts to make changes through a simple UI, without ever touching a line of Java.
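To make the distinction concrete, here is a minimal sketch of the two models. All names and the file format are hypothetical, chosen only to illustrate the shift: the override data moves out of compiled Java and into an external file that tooling and a UI can manage.

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Map;
import java.util.Properties;

// Hypothetical names throughout; this is not our actual feed framework.
public class OverrideExample {

    // Old model: operational logic compiled into the feed. Changing a single
    // mapping means editing this file and redeploying to production.
    static final Map<String, String> SECURITY_TYPE_OVERRIDES =
            Map.of("EQ", "Equity", "FI", "Fixed Income");

    // Config-driven model: the same mapping lives in an external file that
    // analysts can edit through a UI, with no code change and no redeploy.
    static Properties loadOverrides(String path) throws IOException {
        Properties overrides = new Properties();
        try (FileReader reader = new FileReader(path)) {
            overrides.load(reader);
        }
        return overrides;
    }

    public static void main(String[] args) throws IOException {
        // security_type_overrides.properties might contain:
        //   EQ=Equity
        //   FI=Fixed Income
        Properties overrides = loadOverrides("security_type_overrides.properties");
        System.out.println(overrides.getProperty("EQ")); // prints "Equity"
    }
}
```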

The vision was clear, but progress was slow. Converting each feed was a painstaking manual process. In the years since the project began in 2020, new feeds were still being built in the old format, often because they were subfeeds of a parent that hadn't been converted yet. This created a growing mountain of technical debt. Leading into our 2025 hackathon, 269 new feeds had been built the old way, and out of a total of nearly 500 feeds that needed conversion, we had only managed to complete 90.

We were moving forward, but the finish line was moving faster. The hackathon was our chance to change the race entirely.

Our hackathon breakthrough: Smarter, faster, automated

Simply converting a few more feeds wouldn’t be enough. If we wanted to achieve our goal, we’d have to re-imagine the entire conversion process from the ground up. To do so, we focused on the three biggest bottlenecks and built solutions to shatter them.

First, we tackled the manual overhead of preparing a feed for conversion. Previously, an engineer had to manually define a feed's unique configuration maps for our wizard tool. It was tedious and slow. Our solution was to automate this using AI. We developed a well-crafted prompt for Gemini, allowing us to upload an entire Java constants file. In seconds, Gemini would generate the exact, perfectly formatted input our conversion tool needed. Manual extraction was completely eliminated.
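We won't reproduce the exact prompt or the wizard's input format here, but conceptually the task looks like the sketch below: given a constants file along these (entirely hypothetical) lines, the prompt asks Gemini to emit every map in the structured form the wizard expects.

```java
import java.util.Map;

// A hypothetical constants file of the kind the prompt consumes. Real files
// are far larger, and the declaration styles vary widely between feeds.
public class AcmeBankFeedConstants {

    static final Map<String, String> TRANSACTION_TYPE_MAP = Map.of(
            "BUY", "buy",
            "SEL", "sell",
            "DIV", "dividend");

    static final Map<String, String> CURRENCY_OVERRIDES = Map.of(
            "GBp", "GBX");
}

// From a file like this, Gemini is asked to produce the wizard's input,
// conceptually something like (the real format is omitted here):
//
//   feed: AcmeBankFeed
//   maps:
//     TRANSACTION_TYPE_MAP: { BUY: buy, SEL: sell, DIV: dividend }
//     CURRENCY_OVERRIDES:   { GBp: GBX }
```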

Second, we made our conversion wizard smarter. The tool used to rely on hard-coded patterns to find and transform data maps, which is why it failed on so many edge cases. We re-architected it to use dynamic regex matching. Now, instead of using a fixed library, the tool analyzes the code and auto-generates the specific patterns it needs on the fly. This massively expanded its coverage and ability to handle variation without manual intervention.
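Here is a minimal sketch of that idea, with a deliberately simplified grammar (the real wizard handles many more declaration styles): first discover which maps a source file declares, then generate the extraction pattern for each discovered name on the fly instead of consulting a fixed pattern library.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DynamicMapMatcher {

    // Pass 1: find the names of Map constants declared in the source.
    private static final Pattern DECLARATION =
            Pattern.compile("Map<\\s*String\\s*,\\s*String\\s*>\\s+(\\w+)\\s*=");

    static List<String> extractMapBodies(String javaSource) {
        List<String> bodies = new ArrayList<>();
        Matcher decl = DECLARATION.matcher(javaSource);
        while (decl.find()) {
            String mapName = decl.group(1);
            // Pass 2: auto-generate a pattern specific to this map name,
            // rather than hoping a hard-coded pattern happens to fit.
            Pattern body = Pattern.compile(
                    Pattern.quote(mapName) + "\\s*=\\s*Map\\.of\\(([^;]*)\\);",
                    Pattern.DOTALL);
            Matcher m = body.matcher(javaSource);
            if (m.find()) {
                bodies.add(mapName + " -> " + m.group(1).trim());
            }
        }
        return bodies;
    }

    public static void main(String[] args) {
        String source = """
                static final Map<String, String> TXN_TYPES =
                        Map.of("BUY", "buy", "SEL", "sell");
                """;
        extractMapBodies(source).forEach(System.out::println);
    }
}
```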

Third, and most importantly, we solved the 24-hour testing problem. We built a new local diff tool designed specifically for this purpose. The tool works by taking a snapshot of a feed's entire configuration as Java sees it before the conversion. It then runs the conversion and takes a second snapshot. By comparing these two collections directly, it can verify with high confidence whether the conversion was successful. What used to take 24 hours in a full regression test now takes about 15 minutes on a local laptop. This turned our feedback loop from a day into a coffee break.
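The core of the tool reduces to a simple idea. Here is a minimal sketch with hypothetical keys and flattened types (the real tool snapshots far richer structures): flatten the feed's effective configuration into key/value pairs, convert, flatten again, and report anything that changed. An empty diff means the conversion preserved behavior.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public class ConfigDiff {

    // Returns the keys whose values differ between the two snapshots.
    static Map<String, String> diff(Map<String, String> before,
                                    Map<String, String> after) {
        Map<String, String> changes = new HashMap<>();
        Set<String> allKeys = new HashSet<>(before.keySet());
        allKeys.addAll(after.keySet());
        for (String key : allKeys) {
            String oldValue = before.get(key);
            String newValue = after.get(key);
            if (!Objects.equals(oldValue, newValue)) {
                changes.put(key, oldValue + " -> " + newValue);
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> beforeSnapshot = Map.of(
                "txnType.BUY", "buy",
                "txnType.SEL", "sell");
        Map<String, String> afterSnapshot = Map.of(
                "txnType.BUY", "buy",
                "txnType.SEL", "sale"); // a regression the diff would catch
        System.out.println(diff(beforeSnapshot, afterSnapshot));
        // Prints {txnType.SEL=sell -> sale}; an empty map means success.
    }
}
```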

The results: A better way forward

The impact of these three innovations was staggering. In the several years leading up to the hackathon, a total of 90 feeds had been converted.

During the hackathon, our small team converted 315 feeds.

In just a few days, we more than quadrupled the total number of converted feeds, bringing the count from 90 to 405 and moving our overall conversion ratio from 18% to 82% in a single hackathon. We strategically targeted entire families of subfeeds, so any new feeds built under them will automatically be created in the modern format, putting an end to the growth of that tech debt.

The benefits are already being felt across the company. Once all conversions are complete, the onboarding time for our Operations Analysts is expected to drop from six months to three. Our global teams can now operate on a more level playing field, able to manage hundreds of additional feeds through a UI. And for our fellow engineers, the gains compound: these tools will accelerate every future config conversion, freeing us to focus on building new capabilities for our clients.

This project was more than just an internal process improvement. It was a reaffirmation of our commitment to building an efficient, scalable platform. For our clients, this work translates directly into higher data quality, faster resolution times, and confidence that the engine powering their insights is stronger and more agile than ever before.