Let Me
Recently, I read a wonderful article on The Bloggess: No, I do not want AI to “polish” me. You should give it a read if you haven't already. It's brief, hilarious, and well worth your time. Also, I'm going to use it as a jumping-off point for my own complaints, so it will provide useful context.
What Jenny's experience demonstrates well (and what many people who are confused by the adverse reactions to generative AI don't seem to realize) is just how fucking annoying the sales pitch is. And I know I say that as someone who is not at all interested in LLMs, but I believe it would be true even if I were. User interfaces are distorted around these new buttons, pushing aside useful tools and making use of every attention-grabbing dark pattern trick in the book. Four apps that I'm forced to use in my day job have literal blinking and animated buttons placed at the forefront of their UIs. Their constantly-looping motion seems designed in a lab to draw my eye just when I'm trying to form a real thought, distracting me with its pleas to generate something, anything. There it will be, flashing and moving incessantly in our documentation app while I'm trying to explain the novel bug I found in our custom data processing engine that is exacerbated by the esoteric embedded hardware only my company seems to support. There is no prompt I can give to make a model generate that for me, because such a prompt already requires me to extract the necessary information from my own brain.
And then there's my IDE, where the sparkling autocomplete I did not ask for and did not enable pops up not where autocomplete normally does but rather as a ghostly apparition of a line of code I might write in some reality within the infinite multiverse, perhaps even this one! I am obliged, of course, to read what it has spat out to determine if I'm actually so predictable. About one in four times, it seems that I'm doing something about 80% of developers have done before me. More often, the AI predicts an incorrect version of my future, while the real autocomplete gives me a helpful suggestion. Then I find myself in the precarious position of remembering which keyboard shortcut dismisses the suggestion I don't want and which accepts the one I do. Of course, I pretty quickly develop the muscle memory to work around these anti-patterns, or I dig through the settings to disable them altogether. But the number of times it has derailed my train of thought left a lasting impression on me.
These might sound like nit-picky complaints, but it's hard to explain how easily this kind of thing can pull me out of a flow state. And I'm not even someone who needs a distraction-free workspace. I have thrived in open office environments. There's just something different when my tools are the ones doing the distracting.
I can't think of a more perfect analogy than what my new favorite person, Jenny, came up with: it's like a toddler. It's like having to work while a toddler constantly tugs on my pants leg, screaming, "Let me, let me!"
Except I've been jaded by over a decade of corporate life, so it's more like having a product manager reach through my screen and grab me by the shoulders. "Please, please use this feature," they wail, "We're spending so much time and money on it. You've gotta pump those engagement numbers up! Integrate it into your workflows. Then we can lock you in, monetize it, and finally get our bonuses!"
And not only is the UI annoying, but the features themselves are demonstrably not useful. I found this article on HackerNews (I know, I'm sorry), and while reading through the comments (I know, I'm sorry), a couple really stood out to me.
The first is someone who noticed that this "polish" feature actually changed the meaning of the original message. I really want to highlight that, because the Bloggess article focuses on how the writer's personality was removed, which is deeply important, and the message changes are subtle enough that I didn't even notice until they were pointed out. So, not only did the model make the tone more corporate, but it also morphed the message itself into something closer to the corporate average. It implies promises and alludes to excuses. These changes are insidiously slight, and they can be hugely important when interacting with other humans.
Another commenter mentioned that they still write their own emails but have found a use for LLMs in generating summaries that no one will ever read based on transcripts of a meeting that the attendees find pointless. [quick aside: One of the early criticisms I heard about LLMs is that they only excel at writing words no one wants to read. It feels great to have reached the point where this has become an argument of their proponents.] This is a perfect example of the twisted kind of "efficiency" that generative AI is selling to businesses. Here we have a pretty clearly identified process inefficiency, where someone is asked to spend time doing a task that they believe is wasteful. Rather than work to remove this inefficiency (or discover its true goals and make them clear so we can adapt the process into something not so wasteful), we double down on it. We pay tens if not hundreds of thousands of dollars per year on a corporate contract with companies operating so deep in the red that they're barely within the visible spectrum. They then burn eye-watering amounts of energy just to generate documents that nobody wants to write and nobody wants to read. And somehow, along the way, the heads of these companies get cosmically wealthy.
This is modern corporate efficiency, though it doesn't resemble any efficiency I'd recognize based on my education and experience. We end up with many words that may or may not actually convey what was meant, but no one will ever read them so it's okay.
I want to write things that people want to read, and I recognize that not everything I write is a perfect representation of my true self. We make decisions when we write, some consciously and some not, that change how the reader perceives us. In making those decisions, we are always masking to some degree, especially at work. But using LLMs is like actively participating in regression to the mean. Where we're worse, it may bring us up to the average and make us better. Where we're better, it will bring us down to the average and make us worse. More importantly, it largely removes the us in the process. If I use an LLM to write a work email, instead of it just being a version of me wearing my big-boy business pants and writing in complete sentences, I turn myself into a fuzzy JPG of a human. You might be able to recognize that there was the shape of a person there, but the humanity is gone. And corporate life needs more humanity, not less.
I am constantly being informed that the generative AI revolution is inevitable. I am also constantly being bombarded by pleas to use this inevitable thing. I'm not so naive as to think that an essential part of the fabric of our future must get its start with zero advertising. I just don't expect it to be pushed on me like opioid pills from the strung-out kid in an anti-drug commercial.