Wrestling with AI
We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list.
— Why A.I. Isn’t Going to Make Art (The New Yorker 🔒)
As I shared in my recent announcement that I’m looking for my next in-house product role, I have been wrestling with AI. Sometimes it feels empowering, sometimes frustrating, and occasionally dystopian.
I know I’m not the only one struggling with AI. Many of us are trying to make sense of what these tools mean for how we think, create, and contribute. They’re powerful, useful, even delightful. And like all technologies, they come with trade-offs and externalities. Jobs are shifting or disappearing. Spam and noise multiply. People lean on the tools instead of doing their own thinking. It’s hard to know who or what to trust.
As part of grappling with this, I’ve sought out resources to understand AI’s place in our world (and, of course, engaged AI in the process of helping me understand AI). I’ve also had deep conversations with folks who hold vastly different takes on this transition to a post-Generative AI world.
One conversation was with a successful musician collaborating with one of the most well-funded AI research labs, training algorithms to create tools for artists. Their view was that artists should be directly involved in shaping these tools so they serve us rather than replace us. In my other conversations with artists and art professors, the responses spanned the spectrum: from anger and avoidance, including a professor responsible for penalizing students who use AI; to curiosity and experimentation, weaving these tools into their creative process.
I’ve also spoken with members of the modern Luddite movement, who, contrary to the popular use of the term, are not anti-technology. They’re asking vital questions about who owns and controls our tools and challenging the ways technology is used to divide us, devalue us, and disconnect us.
And then there are the product managers and technologists who have learned to embrace AI because their jobs depend on it. Some wrestle with the ethical challenges these technologies raise, while others don’t see it as their role to ask those questions. I’ve been particularly concerned about the latter, like one product leader who proudly told me they’d replaced a full customer-support team with “better performing” AI bots (who, presumably, don’t mind the long hours because they don’t have children at home).
Throughout all these conversations, I’ve been pulling on a thread: in a world with AI everywhere, what is still left for us to do as humans?
I’ve come to my own view. These technologies are here to stay. We can and must use and understand them, and we can and must ask questions about how we use them and about their impacts. They are already shaping us everywhere we look, whether that’s our Google search results, the emails and messages we receive, or the expectations in our jobs. What would it mean to both engage AND ask critical questions? To see what we can create that would not be possible without AI tools, and also to ask what capacities we lose when we outsource our own thinking to AI. To see what dull and tedious work AI can take on, and to ask who and what is harmed by the use of these tools (the “externalities,” in economic speak).
Along this winding road of contemplation, I met up with my friend Cliff Flamer, a fellow coach and winner of the “World’s Best Resume Writer Award” (yes, it’s real, I checked). Unsurprisingly, the subject turned to writing, something AI seems to be rapidly overtaking (and not just in professional contexts). We braved some existential questions about this work, but I left with some hope as well. What we came to was that writing well is about thinking clearly and bringing your own unique opinions, neither of which AI can do for you. The conversation felt worth sharing, so I ended up inviting Cliff to record an episode of the Livelihoods podcast…coming soon!
So I’d love to hear: what are you learning in your own wrestling with AI? What part of your humanness are you still holding onto in this age of AI automation?
To your livelihood,
Nat
—
If you want to explore further, here are a few small experiments in co-creating with AI (mine and others’) that I think you’ll enjoy:
Resonant Way (App): My client and sustainability product manager, Sondra Tosky, has been experimenting with AI “vibe coding” to create free privacy-respecting meditation and relaxation apps.
The Thought Virus (Podcast): My friend Thomas Rudczynski created this short narrative podcast in collaboration with AI, to ask how AI shapes the way we think, or, more specifically, how AI is shaping his thinking through the creation of this podcast.
My own experiments: I’ve used AI for everything from making weird art for friends to writing manifestos on trust in the age of AI. One of my favorite projects was prototyping an AI-powered experience for Thrive Market, a playful attempt to get the hiring team’s attention that went mostly unnoticed, but was still deeply fun to build. And for the philosophy nerds: I co-authored a three-act play that brings two 19th-century Critical Theorists back to life to debate the meaning of creativity in the age of AI. And finally, I'll leave you with this...
AI Transparency Note
Given the topic of this newsletter, I want to practice what ethical use of AI might look like, and I think that includes transparency about its use. This newsletter was primarily written word-for-word by me, Nat Fassler. That said, I did use ChatGPT to help me capture early thoughts that were incorporated into this newsletter, and to offer support around clarity and concision.