There’s a bright yellow billboard along I-95 between New York City and Philadelphia. You can’t miss it.
“Potholes suck. AI can fix them.”
That’s something most drivers can get behind. Will it be Boston Dynamics’ robot dogs? Amazon delivery drones? Is some other company building androids with jackhammer arms and stomachs full of asphalt millings? Will they come dressed in bright orange vests?
None of the above. AI — the “intelligent” software locked inside our computer networks — cannot fix potholes. The potholes will be fixed by human road workers.
This billboard, which advertises AI-powered road management startup Vialytics, exemplifies the exaggerated claims and self-importance that have pervaded tech industry promotions of its new favorite toy. This kind of hyperbole, propping up technology as a master of all trades, undermines the human labor necessary in those trades.
“Corporate narratives of AI emphasize its intelligence and convenience, often obscuring the material reality of its infrastructure and the human labor needed for it to function,” write Callum Cant, Mark Graham and James Muldoon in “Feeding the Machine: The Hidden Human Labor Powering AI.”
The authors, based at the Oxford Internet Institute, published the book in August after spending the past decade carrying out thousands of hours of fieldwork and hundreds of interviews to understand the lives of workers who are hidden and exploited by AI.
To be clear, that isn’t the case with Vialytics.
Vialytics, which was founded in Germany and launched its US headquarters last year in Edison, NJ, contracts with city or county public works departments to equip employee vehicles with phones that record road conditions as people drive around. On the back end, AI-powered software analyzes the video, identifying issues, cataloging them by severity and compiling timelines for repairs. The company has around 50 clients across the US.
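The "catalog by severity, then schedule" step can be pictured with a minimal sketch. This is purely illustrative, not Vialytics' actual pipeline or code; the `Defect` class, severity scale and ordering rule are all assumptions made up for the example.

```python
# Hypothetical sketch of severity-based repair scheduling.
# Not Vialytics' real system: the data model and 1-5 severity
# scale here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Defect:
    road: str
    kind: str       # e.g. "pothole", "crack"
    severity: int   # 1 (minor) through 5 (urgent), assumed scale

def repair_schedule(defects):
    """Order detected defects most-urgent first to form a repair timeline."""
    return sorted(defects, key=lambda d: d.severity, reverse=True)

defects = [
    Defect("Main St", "crack", 2),
    Defect("Route 27", "pothole", 5),
    Defect("Oak Ave", "pothole", 3),
]

for d in repair_schedule(defects):
    print(d.severity, d.road, d.kind)
```

The point of the sketch is the division of labor the article describes: software ranks and queues the work, but a human crew still shows up to fill the hole.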
“Our product helps public works departments — which often get forgotten and are largely understaffed — allocate money most effectively … and allows road workers to show up to job sites more prepared,” Andy Kozma, chief revenue officer at Vialytics, told Technical.ly.
Vialytics’ product targets an issue affecting the manual laborers who risk their lives doing the hard and dirty work on the roads. And because of this, per Kozma, it’s actually anti-glitz.
The billboard, then, is just an example of using sensational marketing tactics to catch ever-shortening attention spans. “That was the intent — to use hyperbole. You’ve only got a few seconds to catch someone’s attention,” Kozma said. “And in this case it worked.”
‘Makes one person as effective as five’
Tactics like this are especially effective in today’s environment, where there’s a lot of excitement around anything connected to AI.
Starting with OpenAI’s launch of ChatGPT in late 2022, interest in AI and its utility has exploded. Much of that interest has formed around AI’s ability to perform creative tasks, from writing a research paper to generating digital images.
“I am not generally positive about what AI means for the creative or working class,” said Manu Sporny, a serial entrepreneur currently researching digital credentials that would signal whether an online identity is real or part of an AI network. “It could increasingly gut their ability to make a living.”
AI appears to be skilled in these fields not because the algorithm itself is artistic and imaginative, but because these systems consume gargantuan amounts of data created by humans. As people have begun understanding the nuance of these systems, there has been some pushback to what could be termed AI’s plagiarism of humanity.
AI adoption is typically touted as a measure for improving a business’s bottom line, which sounds good, but in practice usually means decreasing wages, reducing worker hours or cutting jobs altogether.
While Kozma, Vialytics’ chief revenue officer, said its product hasn’t replaced any workers, a testimonial on the company website from customer Edison Public Works in New Jersey says: “Vialytics makes one person as effective as five.”
The media industry has been a poster child for this human-replacement theory. As the new AI chatbots have proven themselves eloquent, some publishers moved quickly to adopt the tech — concurrently with extensive layoffs of their human staff.
But this trend isn’t relegated to media. Companies across a variety of industries are pouring money into AI and acknowledging that the tech will take jobs from humans.
Fake tech claims aren’t new (remember crypto?)
There’s nothing inherently wrong with the pursuit of efficiency.
AI is great at analyzing big data sets and gleaning helpful patterns or even predicting potential problems, plus it can be used to automate tedious tasks to free up human workers for more meaningful work.
“My own response is the feeling that not only do potholes suck, but the job of managing them sucks too, and it would be great if people could direct even some of their attention to things more interesting to them,” Nathan Schneider, a professor at the University of Colorado Boulder and a scholar focused on economic justice in the online economy, told Technical.ly.
“But how do we ensure that any time saved by AI-assisted pothole detection,” he added, “is shared widely, not narrowly?”
“Widely” meaning shared with all workers, not just the C-suite, which would otherwise capture the profit made by cutting a company’s biggest expenditure: human labor.
AI businesses are not always so forthcoming about their products, or the technology’s limitations. The media industry’s adoption of AI has not come without consequences, for example: Dozens of articles have been published with glaring inaccuracies.
“The industry of fake tech has been around for decades, and it’s built on the idea that entrepreneurs need to induce FOMO from investors and users alike,” said Matheus Pagani, cofounder and CEO of Venture Miner, a startup that provides blockchain and AI tech for various industries.
Pagani saw fraught incentives like these several times over in the crypto industry. The ICO boom is a prime example: In 2018, huge amounts of institutional and retail investment were funneled into projects whose marketing materials over-emphasized their utility without the tech to back it up. While a small handful of investors got rich, many others were left holding bags of worthless digital assets.
Vialytics’ Kozma has been working in the AI sector for more than a decade, and he admits overstatements have become a problem, muddying the tech’s potential advantages while frightening workers across industries.
Regulation is slow. Re-education is faster.
OpenAI recently pivoted away from its nonprofit business model. While the company initially rallied around a mission to build AI that will “benefit all of humanity,” many experts believe the move to a for-profit structure is a course correction toward making loads of money.
That shift — while it seems reasonable enough for a business — could exacerbate the use of deceptive marketing practices to entice industries to adopt AI technology.
“A lot of companies profit from misinformation,” said Pagani, the Venture Miner CEO. “They’re incentivized to lie.”
That said, government regulators are cracking down on deceptive claims.
In September, the Federal Trade Commission (FTC) took action against a handful of companies that were using AI claims to deceive and harm consumers. “The cases included in this sweep show that firms have seized on the hype surrounding AI and are using it to lure consumers into bogus schemes, and are also providing AI powered tools that can turbocharge deception,” according to the agency’s press release.
But the government is notoriously slow and — if the questions posed to TikTok’s CEO during his Congressional testimony are any indication — even Luddite. Meanwhile, lawmakers are cutting workers’ rights and social program budgets across the country.
Pagani doesn’t think government regulation will help much. Instead, he thinks we need to focus on education, and fast. Pagani leads developer bootcamps on both blockchain and AI, cutting through the hype by showing students exactly what the tech does.
“I’m fighting against the misinformation with facts,” he said. “I don’t see a remedy other than education.”
It’s a challenging mission, but one adopted by educators of all kinds. AI education is being offered by a number of universities and tech companies and through open online providers like Coursera. Hands-on experimentation with AI tools has proven especially useful for helping people understand the technology’s strengths and weaknesses.
These kinds of programs, Pagani hopes, will help people learn skills to transition them into new jobs — even the most advanced AI of today needs humans to build tools around it or engineer prompts for the best results.
Though Goldman Sachs recently cautioned that massive corporate investments in AI have yet to bear fruit, AI startups are still pulling in a huge share of venture capital. For now, it appears up to the educators to push back on the hype, while inspiring workers to learn new skills and stand up for their rights in a quickly evolving workforce.
Because there’s no putting the artificial intelligence back in the bytecode.