We’re at an exciting point in the evolution of design tooling. Other disciplines have had computer-aided tools that correct grammar, rewrite code, and analyze structural integrity of 3D models for some time. Now, with design tooling that’s open, connected, and extendable, we’ve laid the groundwork necessary to provide similar smart tooling for the design process.
Digital product design tools have traditionally represented design as a series of pixels and rectangles—things that look like the desired outcome, but are actually static images of what an interface will eventually become. Once a design is ‘finished’, an engineer must laboriously translate the pixels into code. Crucially, those pixels carry no semantic information about what they represent.
As a designer and maker of tools for designers, I find this consistently frustrating. While going about our work, it’s normal to say or hear, “I’d like to see our current landing page with a large video instead of that carousel.” The utopian world in my head says that’s the most we should need to reason with machines—that an expression of design intent (via speech, thought, or interpretive dance) transforms the design into what we want. But today’s workflows have us translating densely encoded product representations into graphical representations through constant clicking, dragging, and nudging.
But what if we could systematize the way we observe and reason about our team’s work? Are colors consistent across all of the buttons in our product? Are there anomalies in our typographic scale? How many datepicker widgets do we have? (a fun query for cross-platform products that revolve around the Gregorian calendar)
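Once a design document is available as structured data rather than pixels, questions like these become queries. Here’s a minimal sketch in JavaScript, assuming a hypothetical layer tree where each node has a `name`, an optional `fill` color, and `children` (the tree shape and the `collectButtonFills` helper are illustrative, not any particular tool’s API):

```javascript
// Walk a (hypothetical) layer tree and collect every distinct
// fill color used by layers whose name contains "Button".
function collectButtonFills(node, fills = new Set()) {
  if (node.name && node.name.includes('Button') && node.fill) {
    fills.add(node.fill);
  }
  (node.children || []).forEach(child => collectButtonFills(child, fills));
  return fills;
}

// A toy document: two buttons share a color, one has drifted.
const doc = {
  name: 'Landing Page',
  children: [
    { name: 'Primary Button', fill: '#1A73E8' },
    { name: 'Secondary Button', fill: '#1A73E8' },
    { name: 'Signup Button', fill: '#1B74E9' }, // the anomaly
  ],
};

const fills = collectButtonFills(doc);
console.log([...fills]); // more than one entry means the buttons disagree
```

The same traversal answers the typography and datepicker questions: swap the predicate, keep the walk.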
We find answers to these questions by pouring a big pot of tea, rolling up our sleeves, and counting on our fingers and toes. Or an abacus. Or sprawling spreadsheets. It’s like watching paint dry, sans the satisfaction of having spent your morning in a hardware store. And when the question is inevitably asked again in a day, week, or year, the counting starts all over again.
At the risk of stating the obvious, it doesn’t need to be this way! Software used across other disciplines understands what we’re doing, and provides us with hooks to customize it to augment and amplify our potential.
As I write this, my word processor occasionally red-squiggles words. It happened just now when I misspelled ‘occasionally’. Soon enough I’ll have this article reviewed by friends and an editor, but not before inputting my text to a web service that will start surfacing grammatical and structural improvements. It’ll point out my tendency to misuse punctuation, so my human friends can focus on more compelling tasks.
When I first started writing code, the way to find mistakes was to take a deep breath and see if anything caught on fire while running your program. It was painful and tedious. But in the last few years we’ve been blessed with a series of excellent editor plugins that paint your mistakes with familiar red squiggles. Instead of spelling or grammar violations, they flag the syntax errors that turn working programs into garbage.
More recently, these plugins have evolved to improve code as you type. We can even configure them to fit our engineering department’s code standards. It’s like pair-programming with a human, except your partner is a helpful robot who doesn’t complain about your music choice or force you into small talk about last night’s baseball game.
I’ve been using the definitely-just-made-up term Design Intelligence to describe the trend toward smarter tooling for workflows—tooling that understands the who, what, where, why, and how of our work, and seamlessly aids us in achieving our goals. This tooling works with our design systems, and we can interact with it in an intentional, humane way.
Last year we shared react-sketchapp—a tool for eloquently controlling Sketch’s programming interface, and building custom jigs for our workflow. We’ve done sensible things such as automating the production of template files, and weird things like creating design files from hand-drawings on whiteboards and commands over voice interfaces. We figured out ways to take the input mechanisms we were comfortable with, and output the graphics that our tool referred to.
react-sketchapp, and everything we built with it, lives on the write side of the workflow. While we can read from Sketch files, it’s still hard to instrument the design workflow of a large company, or to gain real-time insight into how everything fits together, when the tooling lives inside individual desktop apps.
Our friends at Figma recently opened up the first layer of their API to the public. Their cloud-based, multiplayer-by-default design tool is the perfect environment for considering Design Intelligence as an agent that, much like a colleague, could truly exist alongside us.
We’ve been having fun playing with it for the past few months, building workflow tools we once only dreamed of. While we’re not ready to open source them just yet, here’s a sneak peek of what’s to come.
This is just the beginning and we’re uncovering opportunity areas each day. Personally, I couldn’t be more excited about the possibilities.
Cover image by Karri Saarinen