The BS-Industrial Complex of Phony A.I.

By Mike Mallazzo

Photo: Laura Ockel/Unsplash

When news broke that McDonald's was buying Israeli "Artificial Intelligence company" Dynamic Yield for $300 million, WIRED editor-in-chief Nicholas Thompson predicted that the sale would either go down as "peak A.I. hype" or "the day big data saved the Big Mac."

As a former Dynamic Yield employee, I think time will prove him right on both counts. If the tech and talent are deployed correctly, Dynamic Yield can pay for itself many times over by helping McDonald's better understand its customers. None of this will be due to artificial intelligence. Yet in my two years at the startup, reporters, analysts, and sometimes even customers seemed determined to call us an A.I. company. For a while, we resisted the label, understanding that our platform wasn't going to make Watson sweat anytime soon. But eventually we gave up and went along with the hype. The market wanted us to be an A.I. company, so we chuckled and called ourselves one.

Dynamic Yield exists in the highly commoditized category of venture-backed "personalization" providers, which combine contextual information with a user's previous actions to try to create the most relevant user experience. Doing this well requires an open-architecture platform that can maintain massive amounts of data and help companies use that data to rapidly test what resonates with a given user. For McDonald's, this will help power everything from something as basic as recommending a McFlurry on a sunny August afternoon to serving a personalized offer that entices a loyal customer who has gone three weeks without treating himself to an Egg McMuffin.
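
To make concrete just how unglamorous this kind of "A.I." usually is, here is a minimal, purely illustrative sketch of the rules-plus-data decisioning that often sits behind personalization platforms. The names, thresholds, and data model are hypothetical assumptions of mine, not Dynamic Yield's actual platform or API.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Customer:
    last_purchase: date                 # most recent visit on record
    usual_order: str = "Egg McMuffin"   # the item this customer orders most often


def recommend(customer: Customer, temperature_f: float, today: date) -> str:
    """Combine context (weather) with past behavior (recency) to pick an offer."""
    days_lapsed = (today - customer.last_purchase).days
    if temperature_f > 80:
        return "Feature a McFlurry on the menu board"
    if days_lapsed > 21:
        return f"Send a personalized discount on the usual {customer.usual_order}"
    return "Show the default seasonal menu"


# A sunny August afternoon vs. a regular who hasn't visited in three weeks.
print(recommend(Customer(date(2019, 7, 20)), temperature_f=88.0, today=date(2019, 8, 15)))
print(recommend(Customer(date(2019, 7, 20)), temperature_f=65.0, today=date(2019, 8, 15)))
```

Swap the hand-tuned thresholds for a model fit to a much larger dataset and you have roughly what gets marketed as machine intelligence in this category.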

We were the perfect startup in that we were ever so slightly better than the competition at solving the really boring and ubiquitous problem of data management for personalization. That problem is not trivial; at the enterprise level, solving it is worth tens of millions of dollars. Apparently, it is also what passes for artificial intelligence these days.

In this way, Dynamic Yield is part of a generation of companies whose core technology, while extremely useful, is powered by artificial intelligence that is roughly as good as a 24-year-old analyst at Goldman Sachs with a big dataset and a few lines of Adderall. For the last few years, startups have shamelessly re-branded rudimentary machine-learning algorithms as the dawn of the singularity, aided by investors and analysts who have a vested interest in building up the hype. Welcome to the artificial intelligence bullshit-industrial complex.

The core feature of a B.S.-industrial complex is that every member of the ecosystem knows about the charade, but is incentivized to keep shoveling. It’s not so much that we reach a point where we convince ourselves our bullshit is true; it’s that the difference between truth and bullshit has become purely semantic. The definition of something, like artificial intelligence, becomes so jumbled that any application of the term becomes defensible.

Let’s break down the key components:

The marketers know it's bullshit. At some point, it probably began innocently enough: A clever product marketer, looking to differentiate a technology that three of his competitors were also hawking, likely started out by declaring that his email capture tool was powered by dragon's tears. When that failed, he said it was powered by artificial intelligence. The next week, customer relationship management solutions became A.I., then sales outreach platforms, and eventually… bodegas. Then it became a demand-side problem. Requests for proposal began to ask how technology vendors "leverage A.I.," while investors began to inquire how incorporating artificial intelligence at scale would reduce churn.

This has long since passed its logical extreme. In press releases, Feedvisor, a price analysis tool for brands that sell on Amazon, markets itself as the "artificial-intelligence, machine-learning, big-data company." This is comically full of buzzwords, but for the purpose of SEO and lower-tier press, I imagine it is pretty effective. While I explicitly made the decision at my current startup to strip A.I. from all branding, I empathize with those who feel they are forced to play along. If the market is pleading with you to be an A.I. company, it's an uphill battle to say you are anything else.

The investors know it’s bullshit. When venture capitalists say they are looking to add “A.I. companies” to their portfolio, what they really want is a technological moat built around access to uniquely valuable data. If it’s beneficial for companies to sprinkle in a little sex appeal and brand this as “A.I.,” there’s no incentive to stop them from doing so.

The pundits know it’s bullshit. First of all, apologies in advance to Anand Sanwal, the founder and CEO of CB Insights, who would likely object to being called a pundit. However, his company’s “A.I. 100” list is one of the bullshit-industrial complex’s greatest art forms.

For context, CB Insights built the premier brand in B2B by being refreshingly genuine. The company explicitly calls out the empty wisdom expressed by thought leaders, investors, and consultants, but even it is not above occasionally pandering to the market to advance its business objectives.

When I worked at Dynamic Yield, I filled out the application that ultimately placed us on the 2018 edition of the A.I. 100. Like almost every other award application, it was an exercise in innovation theatre. At first, I thought this was a laughable long shot, but as I started to look at the past year's winners, a dangerous Neil Patrick Harris "challenge accepted" bubbled up in my head. The rest is history.

About a month after we appeared on the A.I. 100, an analyst from Juniper Research reached out and asked to know more about how we were using "artificial intelligence to transform marketing and reduce ad fraud." I gave him an in-depth demo of our platform and live customer use cases, walking through each capability without once mentioning A.I. The next week, we were listed in his report as one of the top five companies in A.I., alongside Alphabet, IBM, Facebook, and Salesforce.

The journalists know it's bullshit. Two years ago, TechCrunch said that A.I. has become a meaningless term, tech's equivalent of "all-natural." Yet every other TechCrunch funding announcement introduces a startup that is using A.I. to transform the paradigm of its chosen industry.

The press is in a tough spot here. Consumers are clamoring to know about the newest developments in artificial intelligence, but there is minimal objective truth around what is and isn’t A.I.

To help contain the spread of misinformation, I propose that all journalists follow the "Theranos A.I." test. In her New Yorker profile, Elizabeth Holmes described Theranos' core blood-testing technology as a process where: "A chemistry is performed so that a chemical reaction occurs and generates a signal from the chemical interaction with the sample, which is translated into a result, which is then reviewed by certified laboratory personnel."

If a founder sounds anything remotely like this describing her A.I., ask her to try again.

The technologists know it’s bullshit. Fed up with the fog that marketers have created, they’ve simply ditched A.I. and moved on to a new term called “artificial general intelligence.” Every so often, an academic will call out the nonsense, but it does little to move the zeitgeist. More often, the true geeks ignore the noise and build the future.

With the incentives of all players more or less perfectly aligned, conditions are perfect for a flywheel of bullshit to spin faster than it can hit the fan.

So why should anyone care? If the bullshit-industrial complex drives growth, why intervene for the sake of intellectual clarity?

For those of us who cling to the belief that we're not hopelessly at the mercy of technological change, these distinctions fundamentally matter. Presidential hopeful Andrew Yang has built his campaign largely around preparing America to thrive in a post-A.I. world, but if A.I. is an umbrella term used interchangeably to mean anything from basic automation to advertising technology, what exactly is the threat? The future can be shaped by smart government policy, but only if we know what we're legislating against.

Technology policy often exists outside the usual binaries of American politics. In 2016, the lone area of broad bipartisan agreement was that the technology sector was a national treasure that should be free from government regulation. In 2020, the lone bipartisan sentiment may well be that technology monopolies represent a threat to competition, growth, and national security and must be broken up. On the subject of technology, politicians have strong opinions, loosely held.

However, the ideological vacuum that tech policy occupies is all for naught if nobody understands what artificial intelligence even is. If our senators couldn't grasp that Facebook makes money by selling advertising, how can we ever hope they'll understand the technical realities and limitations of A.I.? With potentially tens of millions of jobs in the balance, "A.I." will become whatever the highest-paid lobbying firm convinces our politicians it is. Season Six of Black Mirror practically writes itself.

In college, I had an anthropology professor who opened his class by defining "bullshit" with the single best Balderdash response in the history of mankind: For any given topic, there is a gap between the supply of what we actually know and the demand for what we feel we need to know. Everything that fills this gap is bullshit. This dynamic is at the heart of most grand human folly, from the pseudoscience of 19th-century medicine to mansplaining.

Since the introduction of the iPhone in 2007, tech has been desperately searching for the next technology that will unlock a trillion-dollar market. In Silicon Valley, a decade is an eternity, and 12 years after Steve Jobs said hello to a five-inch box, we're no closer to the new new thing.

In 2015, Facebook made a now largely forgotten bet on VR/AR as the operating platform of the future, declaring that it would be more "ubiquitous than mobile." The jury's out for six more years, but so far, VR is not finding much traction even in pornography, its lowest-hanging-fruit market.

I cling desperately to the notion that voice will become the dominant medium, but I have never used my Alexa for anything other than glorified meteorology. Blockchain and cryptocurrencies have enormous potential, but their only clearly actualized use case is speculation. As a result, tech has pushed all its chips in on A.I., and with good reason.

Even as the BS-industrial complex muddies things up, remarkable things are happening in the field of artificial (general) intelligence. James Holzhauer broke Jeopardy but wouldn't stand much of a chance against Watson. Deep learning algorithms have proven to be better than humans at spotting lung cancer, a development that, if applied at scale, could save more than 30,000 patients per year.

But artificial intelligence isn't yet opening trillion-dollar markets or reshaping the fundamental nature of work. It has yet to reach the heights of previous Silicon Valley deities such as the microprocessor, the internet browser, or the mobile phone. Still, the Valley is banking on A.I. to be its next god, and if there is no god, it becomes necessary to invent him.

In the meantime, there’s another way.

Founded in 2014, x.ai is a technology company that declares itself driven by artificial intelligence and human empathy. The company exploded onto the scene when it released Amy and Andrew Ingram, A.I.-powered personal assistants that promised to magically schedule meetings.

When I was interviewing for a job at the company, I downloaded Amy and found that almost nothing about the technology was magical. Like my fiancée, Amy had little appreciation for my sardonic wit and struggled to understand where, when, or why I wanted a meeting scheduled.

As a result of many cases like mine, x.ai struggled to find product-market fit while resolving the near-infinite edge cases associated with meeting communication, culminating in a Wired profile in which the reporter fumed over his back-and-forth with the agent. In a Hail Mary, the company changed its focus to become a sort of Calendly+, perfecting the UX around meeting scheduling with garnishes of honestly branded artificial intelligence. In doing so, x.ai has broken free of the bullshit-industrial complex and gone back to the magic startup formula of solving a boring and ubiquitous problem with elegant technology.

But until real artificial intelligence arrives, infusing empathy and a deeper understanding of user pain points will remain the true differentiators in crowded markets. There's a beautiful irony in that: in the so-called age of artificial intelligence, companies can win by simply being human.