arXiv NG: incremental decoupling and search

Martin (our IT Team Lead) and I gave a brief update on the arXiv-NG project at the Open Repositories 2018 conference in Bozeman, MT last week. So it seems like a good time to offer an update here, as well. For a high-level refresher on what we’re up to, check out my earlier post. In this post, I’ll provide a bit more detail about how we’re migrating from a legacy code-base to a more evolvable architecture, illustrated by our recent work on our search interface.

The classic (read: legacy) arXiv platform is complex, and the fact that we are in the midst of re-architecting that system in fairly dramatic ways makes it difficult to provide a visual representation of progress. Here is my best attempt, from the perspective of how data “moves” through the arXiv platform. Each of the polygons represents a notional component of the classic system and/or a separate service or application in the NG architecture.

Notional overview of the arXiv system, depicting how data generally “moves” through the platform. This starts in the submission system with new papers, and flows through to a variety of access and discovery interfaces.

One thing that may jump out here is that the publication process creates a fairly clean division between submission-related components and access- and discovery-related components. This separation of state and concerns is a significant part of the underlying architecture that will carry through in arXiv-NG. On a more local scale, you’ll also notice the separation of “source packages” (the LaTeX sources most authors diligently provide) and “presentation formats” (e.g. PDFs). This distinction will also persist in arXiv-NG, and will be increasingly important as we begin to experiment with HTML as an alternative reading format.

Another thing to notice is that quite a bit of the finished and ongoing work has been “under the hood.” Our search interface was the first reader-facing project to be released, and came on the heels of almost a year of work behind the scenes in other areas.

In an earlier post I wrote about our high-level planning process, and how we’re moving to a new technology stack and a new deployment configuration (Python+Flask+Docker, deployed on Kubernetes in Amazon Web Services). So, how are we actually going about this?

Incremental decoupling

The classic arXiv code-base is what you might call a spaghettilith: a monolithic system in which, with the passage of time, what were once useful abstractions have become a tangle of indirection and entangled concerns. While stable and performant, the platform is exceedingly difficult to develop in this state.

Diagram depicting encapsulation of concerns within separate services (small applications) that communicate via HTTP, in contrast to the classic monolith.

Concerns about evolvability and minimizing complexity for developers are major drivers for our transition to a more service-oriented architecture. In short, our goal is to never have to undertake arXiv-NG 2.0. One way to look at this transition is that we are moving integrations up from deep within the spaghettilith, or from the database or filesystem level, to the level of HTTP.

We think about this as (heuristically) involving five steps:

  1. Prioritize: what can we carve off of the classic system without breaking it? You’ll notice in the overview diagram above that we haven’t even begun to touch the publication process. Instead, we’re starting with things that are relatively easy to isolate.
  2. Identify integrations: because of the way that the classic system was implemented, it can be difficult to determine what bits of data and connectivity are actually required for a component to do its job. In this step, we try to identify the minimum set of inputs and outputs for a component. What columns in what tables in the database does it touch? What parts of the file-system does it manipulate?
  3. Re-engineer: our primary goal in re-implementing classic components in the NG architecture is to achieve a high level of independence while maintaining feature parity. In other words, we want to replicate the expected behavior and performance of the classic system, while setting ourselves up for future success in implementing new features down the road. We do try to make some opportunistic improvements, such as better unicode support.
  4. Deploy locally: most NG components are first deployed on our on-premises infrastructure, both to minimize the number of things that can go wrong and to avoid latency-related issues. However…
  5. Migrate to the cloud: as practicable, we are moving some (eventually all) components to the cloud. This is a topic that deserves its own blog post, so I’ll leave it at that for now.

Finding the balance between lumping and splitting is an ongoing process; sometimes components really do need to co-habitate. The key is keeping our future selves in mind, and designing for change.

Integration patterns

As we carve off bits of the classic system, we leverage three general integration patterns.

In some cases, it is unavoidable that we provide minimal access to the database or file-system layers of the classic system. We call this a “shim,” in that it provides a temporary integration that gives us some space to develop the re-engineered service. In many cases, those bits of the database or file-system under the purview of a service will eventually be carved off from those massive pools of state.
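To make the shim idea concrete, here is a minimal sketch with SQLite standing in for the classic database. The table and column names are illustrative assumptions, not the real classic schema; the point is that the shim exposes only the handful of columns the new service actually needs.

```python
import sqlite3

def get_paper_metadata(conn, paper_id):
    """Read-only data-layer shim: expose only the columns the NG
    service needs from the classic database.

    The table and column names here are illustrative, not the real
    classic schema.
    """
    row = conn.execute(
        "SELECT paper_id, title, abstract FROM metadata WHERE paper_id = ?",
        (paper_id,),
    ).fetchone()
    if row is None:
        return None
    return {"paper_id": row[0], "title": row[1], "abstract": row[2]}
```

Keeping the shim this narrow also documents, in code, exactly which bits of classic state a service depends on, which makes the later carve-off easier.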

API-level integrations involve introducing a lightweight Perl controller in the classic system that provides HTTP-level access to the required data. This is preferable to a data-layer shim: although the Perl implementation of the API will change, the API itself becomes a durable part of the NG architecture.
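From the NG side, consuming such an API is just an HTTP call plus a bit of response parsing. A minimal sketch follows; the endpoint path and response fields are hypothetical stand-ins, not the actual classic API.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint path; the real classic API may differ.
API_BASE = "https://arxiv.org/docmeta"

def metadata_url(paper_id):
    """Build the URL for the (hypothetical) document metadata endpoint."""
    return f"{API_BASE}/{paper_id}"

def parse_metadata(raw_json):
    """Extract just the fields the NG consumer needs from the response.

    Field names are illustrative assumptions.
    """
    data = json.loads(raw_json)
    return {"paper_id": data["paper_id"], "title": data["title"]}

def fetch_metadata(paper_id):
    """Retrieve and parse metadata for one paper over HTTP."""
    with urlopen(metadata_url(paper_id)) as resp:
        return parse_metadata(resp.read())
```

Because the integration happens at the level of URLs and JSON rather than Perl internals, the classic-side implementation can be swapped out later without touching consumers.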

Three heuristic patterns for integrating new services with the classic system: data-layer “shims”, API-based integrations, and publish-subscribe using a notification broker.

A third and final pattern is a publish-subscribe pattern leveraging AWS Kinesis. In this pattern, important messages (e.g. a new paper) are generated by a component in the classic system, and distributed over a “topic” or “stream”. Those messages are consumed by NG components deployed in the cloud. We use this pattern to keep our search index up to date as papers are published or updated.
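The publish side of this pattern can be sketched in a few lines of Python. The stream name and message fields here are illustrative assumptions, and the Kinesis client is passed in so the same function works with boto3's real client (`boto3.client("kinesis")`) or a test stub.

```python
import json

def publish_publication_event(kinesis_client, stream_name, paper_id, version):
    """Publish a 'paper published/updated' notification to a Kinesis stream.

    kinesis_client is any object with a boto3-style put_record method;
    the stream name and message fields are illustrative assumptions.
    """
    payload = json.dumps({"paper_id": paper_id, "version": version}).encode("utf-8")
    return kinesis_client.put_record(
        StreamName=stream_name,
        Data=payload,
        # Partitioning on paper ID keeps updates to one paper in order.
        PartitionKey=paper_id,
    )
```

Consumers on the NG side read the stream independently, so new subscribers (say, a future recommendation service) can be added without touching the classic publisher.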

Case-study: search

As I mentioned above, the search interface was one of the first public-facing components to be re-released as part of arXiv-NG. I think that it nicely illustrates the “incremental decoupling” process that we’re using throughout the project.


One of the reasons that we started with search is that it has minimal integration dependencies.

It already has its own stand-alone data store—a search index—which only needs to be updated from the primary metadata records roughly once per day. It doesn’t touch the file-system. It also left quite a bit of room for improvement: unicode support was fairly terrible, and if your search returned more than 1,000 results you’d be given only an arbitrary subset (yuck). There were also some pretty big performance and evolvability gains to be had by moving from an ancient version of Lucene on-premises to a managed Elasticsearch cluster.


The two main inputs to the search system are (a) metadata about arXiv e-prints, and (b) information about which papers need to be added or updated. We also needed to make sure that we updated references from other parts of the system so that links wouldn’t break.

General container-level architecture for the NG search system, including integrations with the classic system. Notifications about new/updated publications are disseminated via Kinesis. An indexing agent process consumes those notifications, retrieves metadata from a backend API on the classic system, and updates the search index.

We used a combination of API and notification integration patterns here. We introduced a process in the classic system that monitors our database for new or updated papers, and generates notifications on a Kinesis topic. An indexing process monitors that stream, and retrieves metadata from the classic system via a document metadata API, a lightweight controller running in the classic system. That metadata API will become the responsibility of a separate core metadata service at a later stage of the project.
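The per-record logic of the indexing agent can be sketched as follows. The metadata fetch and the index write are injected as callables so the sketch stays independent of the boto3 and Elasticsearch clients the real agent uses; field names are assumptions, not the real schemas.

```python
import json

def handle_notification(record, fetch_metadata, index_document):
    """Process one publication notification from the stream.

    record: a Kinesis-style record whose "Data" field holds JSON bytes.
    fetch_metadata: callable that retrieves metadata for a paper ID
        (e.g. via the classic document metadata API).
    index_document: callable that writes the document into the search
        index (e.g. an Elasticsearch index call).
    Field names here are illustrative assumptions.
    """
    event = json.loads(record["Data"])
    metadata = fetch_metadata(event["paper_id"])
    index_document(doc_id=event["paper_id"], body=metadata)
```

Structuring the agent this way also makes it easy to replay a window of the stream against a fresh index when the mapping changes.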


The new search interface was developed in Python, using the Flask microframework. While our goal was to achieve feature-parity with the classic search interface, we also knew that we didn’t want to replicate some of the usability and accessibility issues that had plagued that component in the past. Another consideration was laying the groundwork for future interfaces, including faceted search, and finding better ways to show users why results were coming up in their searches.

One thing that we didn’t anticipate was just how attached some users had become to what was really just an arbitrary implementation detail of the legacy search system: the “surname_initial” syntax for author names (e.g. “bloggs_j”). Author names were indexed this way in the old system so that we could easily generate name queries from other parts of the site, and in some cases power-users had noticed this detail and begun using it themselves.

Users were also confused by the fact that the name query format is very similar to the format we use for arXiv author identifiers (e.g. “bloggs_j_1”), and so many had come to believe—understandably, but incorrectly—that queries generated from the abstract page and other pages were based on those unambiguous identifiers. Another thing that made those queries seem so precise in the classic system is that they were usually limited to the primary category of the paper the user was viewing. So when we decided not to impose that artificial constraint (since many authors publish across categories), and changed the author name syntax to use commas instead of underscores, there was some understandable distress.

The indexing format for author names in the legacy search index is strikingly similar to the arXiv author ID format, despite having almost nothing to do with each other. This led to quite a bit of confusion when we deprecated the old author query format.

We decided to offer backward compatibility for the underscore syntax, but users will notice that we rewrite the queries using the preferred comma syntax.
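This kind of backward-compatible rewrite can be sketched with a small regular expression. This is a hedged illustration, not our actual query parser: it rewrites “surname_initial” tokens to the comma form while leaving author-ID-like tokens (with a trailing numeric suffix, e.g. “bloggs_j_1”) untouched.

```python
import re

# Matches a legacy author token like "bloggs_j": a surname followed by a
# single-letter initial, but not an author-ID-style token like "bloggs_j_1".
# This pattern is an illustrative sketch, not the real arXiv query parser.
_LEGACY_AUTHOR = re.compile(r"\b([a-z][a-z\-']+)_([a-z])\b(?!_\d)", re.IGNORECASE)

def rewrite_author_query(query):
    """Rewrite legacy 'bloggs_j' author syntax to the 'bloggs, j' form."""
    return _LEGACY_AUTHOR.sub(r"\1, \2", query)
```

Rewriting the query (rather than silently accepting both forms) has the nice side effect of showing users what the preferred syntax now looks like.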

Naturally, this raises questions about how we can provide less ambiguous author queries in the future. Rest assured that some important changes to the underlying author data model are in the pipeline that will allow us to make better use of the arXiv author ID and ORCID IDs for auto-generated search links. In the meantime, however, searching by author ID or ORCID ID is fully supported!


The main search application is deployed on-premises, right alongside the rest of the classic arXiv system. This allowed us to continue using our existing logging and monitoring infrastructure, and reduced the complexity of integrating it with the rest of the public-facing site. On the other hand, the indexing agent—the bit that keeps the search index up to date, and runs separately from the web application that provides the search interface—is deployed as a Docker container on our Kubernetes cluster in AWS. We’ve also deployed a staging version of the search application in Kubernetes, which has been a useful test-case as we build out logging, monitoring, and other infrastructure in that environment.

Next up…

In future blog posts, I’ll flesh out some of the other projects that we’re working on right now, including efforts toward an arXiv API Gateway, and what it means to move arXiv into the world of open source.