arXiv-NG: In-Person Testing for Submission UI

One of our current arXiv-NG projects is refactoring the submission interface. In June, we invited ten people from a variety of disciplines and career stages at Cornell University to the arXiv offices to test-drive the Next Generation submission interface. We used an interactive prototype to simulate the process of submitting a paper to arXiv from start to finish, inviting participants to comment freely on the prototype as they explored the new design.

User-Centered Design Approach

Design principles for arXiv-NG prioritize intuitive use of the interface for as many people as possible. This means that whenever a design decision or change is made, we consider accessibility, visibility of critical-path elements, legibility, and understandability. We use feedback from users to evaluate and improve the design to make sure it meets these goals and priorities. Usability tests provide one type of data to help us make good choices for a more intuitive design, reducing cognitive load and allowing users to focus on their submission rather than on making sense of the interface.

Usability testing isn’t new to arXiv – in recent years we’ve conducted a number of surveys through our usability email list (please feel free to join the list), published a user survey with over 36,000 responses, held a remote-interview usability study for the new moderator interface, and collected feedback on our new search feature to help identify issues and refine functionality.

Conducting in-person usability tests offers a number of advantages. We can observe nonverbal cues (nodding, facial expressions), watch behavior such as the cursor hovering where a user expects to find something (whether or not anything is in that location), ask and answer questions about the intended behavior of an interface element, and pursue "what-if" ideas and suggestions in context. The potential downsides include the time it takes to prepare a high-fidelity mockup and good starting questions, the time required from volunteers and researchers to conduct the test, the logistics of testing (scheduling and finding suitable locations), and finally, compiling and making sense of the results. We feel the trade-off in effort is easily justified by the useful and meaningful input it brings to the design process.

What we were looking for

In keeping with the priorities outlined above, we were particularly interested in identifying general expectations of the process and the interface, places where a submitter might need more information about the process, places where terminology or directions are unclear, and details of the interface that are confusing or take time to understand.

Some takeaways from testing

Testing gave us a very clear list of action items. Some suggestions were small but significant – changing the wording on labels, for example. Others point to more involved projects that may take time to incorporate. Some of our most significant observations included:

  • Several specific labels and terms should be adjusted to help reduce confusion (for example, “forename” changed to “given name” or “first name”).
  • The new progress bar element was regarded as helpful and intuitive, especially with labels for each step.
  • Users referred frequently to new information boxes and help icons during the submission process.
  • Nearly all testers indicated they would welcome more in-context help information, especially for author name formats, file processing, and file size limits.
  • Buttons with a text label, or a label plus an icon, are more useful than icons alone.
  • A document preparation checklist or reference sheet would be useful.
  • Autofill of metadata from TeX source was a top request, mentioned by most participants.

Identify, investigate, improve, iterate

We will continue to test interfaces and new features at all stages – during design and requirements drafting, development, beta testing, and post-launch. Connecting this process to actual arXiv users will help us to refine our work and make a clean, clear interface that is easier to use on all devices.

Are you interested in being a part of an upcoming usability test, beta test, or user survey? Announcements for arXiv usability tests are distributed via the arxiv-usability-l email list – sign up to be notified when new tests are happening. To leave comments and report bugs on published NG features, use the “Feedback” button found on newly built sections of arXiv.