Imagine a business meeting where someone’s presenting a document they wrote with an AI-enabled word processor, to a room full of people equipped with AI text summarizers, which provide AI-written commentary, which gets sent to the attendees as notes, and is then further summarized by AI-enhanced email clients.

Did anything of value just get accomplished?

I’ve spent nearly a year not working in tech, observing how the current AI mania has affected businesses of all sorts. Leaders are full-throated in their pursuit of “efficiency,” whatever that means to them. When I started my tech career in 2003, the pursuit of efficiency led companies to try outsourcing many processes, including software development, to employees or contractors in lower-cost locations. From my observations, those efforts have been a mixed blessing for companies: there are plenty of opportunities for fresh university graduates in, say, India, but the most talented people want to transfer to places like the United States, where pay and opportunities for advancement are much better. Now, the trend is to try cutting humans out of companies altogether. Is it working?

As I unwrap the AI-drawn trinkets from an advent calendar I was given, I smile, knowing that a well-meaning friend wasn’t deterred by the obviously AI-generated artwork on the packaging. The first and most obvious targets of the current AI boom are the low-paid, often offshore, artists who design the assets that promote products — what’s often called “creative”, as a collective noun. Cheap creative has been a hallmark of cheap products and knockoffs for generations; now, someone can run a few prompts and reduce their costs to a tiny fraction of what human workers used to charge. Low-quality “chum” banner ads, emotionless illustrations, product apps and websites done on a minimal budget — all of this creative work can now be done with AI, and although customers might notice, the marketplace of low-budget products welcomes it.

“Content”, as some companies broadly call their written prose, can be generated easily using large language models (LLMs). Human-written articles on content farms are now fed into LLMs’ training data sets, and web searches for topics like household tasks and recipes now return both the content-farm articles and the AI-written articles derived from them. Duolingo, a language learning app that I’ve used for over a decade, is public about using LLMs to create lessons more quickly. The quality and cultural relevance of Duolingo’s lessons continue to be poor — they’ve taught me to say “the cat eats seven strawberries” in many languages, for example — but if the key performance indicator is quantity, then LLMs can help the company hit its metrics. For an extra fee, Duolingo Max will even let me have a near-real-time, LLM-aided conversation with one of its characters in a language of my choice. It’s somewhat useful, but mostly creepy.

It’s fun to laugh at and dismiss the confident-sounding nonsense text that LLMs spit out; one of my old colleagues, Vincent, dedicates much of his Bluesky account to this. I’m concerned, though, that I also see AI-generated creative being produced by local volunteer organizations and even government agencies. The obvious marks of non-human art and writing damage the credibility of anyone who puts a real brand on synthesized artwork. I’ve also talked to recruiters and job-seeking developers who believe in the pervasive myth that the “ATS”, an AI-powered talent-scoring system supposedly used by many companies in the same way, assigns a single numeric score to every applicant’s resume, and that a resume should therefore be written primarily for the ATS to read, not for humans. Some companies now do initial interviews, particularly for entry-level jobs, using fully automated recruiters. If I were invited to such an interview, and I couldn’t spin up an AI bot to speak to the recruiter bot on my behalf, I’d walk away.

On the other hand, well-implemented AI has been incredibly useful since long before ChatGPT went viral in late 2022. Travelers like me would be lost without machine translation; for example, on my most recent visit to Japan, several restaurants had smartphone-based ordering interfaces, and I relied on the translation features built into my phone’s browser, which in turn have used AI models for many years. We’ve all come to accept automated speech recognition and synthesized voices in the products we use; for short text phrases, they listen and speak well, they respond quickly, and they’re far cheaper than hiring professional operators and voice actors. I’ve even seen people try to use their phones or earbuds as real-time translation tools, which doesn’t work particularly well right now, but I’m optimistic that these features will improve over time. Not too long ago, YouTube’s automated captions were hilariously inaccurate; now they’re remarkably good at compensating for low audio quality, background noise, thick accents, and the use of proper names. AI models make those captions possible, and they’re very useful for accessibility and for indexing videos for search.

I also think a lot about the “creepy line,” a phrase coined by Eric Schmidt when he was CEO of Google: the point at which technology becomes too off-putting to keep using. When I was attending public weekly meetings for Jupyter, the open source project I worked on from 2021 to 2024, one person brought an AI note-taking bot that showed up as a separate attendee. When neither the person nor the bot responded to questions about what the bot was doing, the community collectively decided to ban third-party recording bots from listening in on meetings, including the discussions we considered “off the record” and excluded from the published recordings. Earlier this year, I considered interviewing with a company that makes a bot that listens in on meetings and writes incredibly detailed summaries, covering not just the text content of the meeting, but also who spoke for how long and how attentive people on camera seemed to be. The bot lets individual people opt out of its reports, and it makes its own presence known in meetings, but despite these good practices, I felt that such a bot was over the creepy line for me. When I’m given the choice to opt out of recording, I take it, so I felt I wouldn’t be an effective developer of something I wouldn’t want to use myself.

I consider individuals’ AI voice recorders and “smart glasses”, with built-in cameras and microphones, to be over the creepy line. At some point in the near future, I see myself in a situation where I’ll ask someone to remove their prescription spectacles because I don’t want them to be recording me. (Many frames have lights to indicate when they’re recording, but some quick electrical work or a piece of tape will render them useless.) If they refuse, I’ll leave the conversation. People have called me old-fashioned for exercising my right not to consent to the recording of private conversations, but I’m sticking to it.

Facial recognition is so far over the creepy line that even Facebook shut its implementation down in 2021, although, unsurprisingly, they resumed using facial recognition last year. Companies and governments still use it; I opt out of the facial recognition camera at the TSA checkpoint when I take a domestic flight, but when I’m at a private business, they generally have the right to check that I don’t look like someone they’ve previously banned. There’s no way to know how many Ring doorbells in my neighborhood clock my movements as I walk past a stranger’s front door. What’s creepy to me might be useful to someone else; for example, many of my former coworkers learned English as a second language, or had cognitive issues that made it harder to process speech in real time, so AI tools could help them understand speech more quickly, even in a private chat. If someone really has a compelling or compassionate reason to use AI, I’d reconsider where the creepy line lies. AI devices that use locally hosted models, without sending recordings to be archived forever in a corporate data center, might land on the good side of the creepy line.

Is AI a bubble? Yes, in much the same way the web was a bubble 25 years ago. The dot-com bubble burst, a lot of investors lost a lot of money, a lot of talented people lost their jobs, and yet web technologies are more dominant and more lucrative now than they’ve ever been. AI technology predates ChatGPT, and it will continue to improve with time, no matter what happens with investors and data centers. Twenty-five years from now, there will still be AI and LLMs, and technologies derived from both, and there will be ever more uses for them, which will require talented and, one hopes, ethical people in charge. Maybe we’ll even have government regulations to keep them in line. A lot can change in the years ahead.

Author’s note: Although the text above includes em dashes, it was all human-written; no LLM was used in the production of this article. At publication time, I owned shares of Amazon, which owns Ring.
