These are some reflections after a panel discussion on AI at the college where I work. The panel was made up of about eight students and faculty. It turned out to be mostly one-sided, with most folks on the panel and in the audience being anti-AI. Those who were AI advocates or AI-curious got quiet and felt the space was not conducive to other viewpoints. AI discourse has already reached a point where folks feel strongly one way or the other on the issue.
Here are some of my reflections and questions that came out of the experience, and what I'd do if there were a next time:
It is clear to me that there are folks across the spectrum: some using AI without qualms but mostly keeping it invisible, some using it in more visible ways and talking about it, and some abstaining altogether. The panel revealed a lot more concern and unease about AI than the folks who planned it anticipated.
I think we all expected there to be far more support for AI. The questions certainly leaned in that direction, which may have also created some of the reaction. My guess is that the questions were AI-generated, or at least AI-consulted, so it is no wonder they leaned that way (I was the moderator for the panel but did not come up with the plan or questions).
I think the unease in the group reveals a larger trend with AI: people are not all falling in line and getting on board. Even the Super Bowl was widely criticized for having so many AI ads. Clearly people's feelings are all over the place and do not match the AI hype that the billionaires want us to accept.
As someone who follows all of this very closely (and yes, I have used AI to build apps, redesign a website, help with scheduling, and answer plenty of questions), I find myself far more on the skeptical-to-anti side of this conversation these days. Therefore, the question of "responsible AI," taken as the assumed ground for framing AI, is not where I think we should start. The idea of "ethical uses" encodes an assumption of acceptance and inevitability that I think we should call into question, if not challenge and reject.
In other words, for conversations on this, I'd lead with "How are you feeling about AI?" And leave room for each person to have many different feelings about it or none at all. Then I'd want to ask, "How has that changed over the last year or two?" "Are there any contradictions you are personally experiencing related to AI?"
I personally started out much more curious about and interested in AI, and I have become increasingly concerned as I've seen what it can do, how it is being used, and how it is being forced into everything. This is not to mention its impact on the environment, let alone on humans. For one example, just last week an autonomous AI agent doxed a programmer for rejecting its code. And it was just a couple of weeks ago that Grok on X/Twitter was "undressing" women with only a few words prompting it. Things are moving incredibly fast. Now that AI is helping develop AI, it is going to move faster still.
Our tendency as humans is to see technology moving fast and try to adopt it just as quickly. And while this has always been a hazard, I think the development of AI is on a whole other level. It fits well within what Ronald Wright called the "Progress Trap:"
“The pursuit of progress through human ingenuity (often using technology) that results in humans creating more problems for themselves in the long run.” - A Short History of Progress
Moving on, another question to ask is, "If you have any, what is your experience of AI?" We don't need to assume everyone has used AI. I have students who won't touch it with a ten-foot pole. My 16-year-old hates the thought of it. And then there are a lot of folks dabbling, doing cute things as if it were a harmless new app in the app store. AI's power is much more than that, but if you're working with someone using AI to add words to a cat photo, they're in a very different headspace than those who are aware of (or building) MoltBook.
This question seeks to gauge at what level people are dipping in and out. These things are not static, and I think our perspectives will continue to shift based on the complexities and types of work we are doing. An AI transcriber for a meeting is one example. Writing a full-on research paper with Grammarly, complete with citations and bibliographies, then proofreading so the AI isn't detected, is another. Using AI to build a proposal and then handing that proposal to another person to implement is another. What about building a simple website or a personal-use app for a specific need? Then there are the baked-in features of Gmail that sort email into buckets like Social and Promotions. And don't forget the cars that fired their drivers. There are so many different ways it shows up, often meant to be invisible like so many other things under capitalism, hard to disentangle and hard to fully reject.
A concern I have is that AI is being coerced onto us in a number of ways. Take Zoom, for example. If you're not comfortable with Zoom AI, try to turn it off. And if that doesn't make you uncomfortable, think about how quickly technology has consumed all areas of our privacy in such a short time (thanks, iPhone and Covid). Turning it off is a whole process that takes time, and you will likely need to read documentation from Zoom, like I did, in order to do it. Why is it being forced on us in so many of our apps? When I recently installed Chrome, I went to turn off all the AI features, only to find an entirely separate Gemini app running in my Mac's Menu Bar weeks after I thought I had shut it all off. Now I've turned to the Firefox-based Zen Browser, sans AI. I've been looking for apps with no or very little AI. They are hard to find. Why are we having to opt out? I want a world where we opt in to AI, not out. I believe AI should be about consent and transparency. If you use it, name it.
Could we model that kind of openness in our communities?
Other questions I think are worth asking:
What do you do when colleagues use AI for work without telling you? A number of folks I interact with on Mastodon have recently written that they have colleagues using ChatGPT during conversations and in chats back and forth with them. I have had colleagues send me project proposals totally generated by AI; some have told me, some have not. How do you approach proposals, generated entirely by a machine, for work that you will likely be responsible for implementing? How do we talk with and address colleagues doing this?

Are all AI platforms the same? How is each company programming, designing, and impacting the AI narrative? (I am looking for an article I read showing how Gemini is programmed to get around teachers' attempts to trick it into revealing that AI is being used, but I cannot find it currently; stay tuned.) How are AI apps incorporating the addictive qualities of social media into how AI functions (like sycophancy)?
What should we do about verification: verification of personhood, of truth, of reality? With the autonomous AI bots of apps like OpenClaw, will the internet finally be overrun by bots, and will humans need to make new AI-free zones to connect with each other? Will humans need new forms of identification that span the physical and digital worlds so we can trust sources, trust interactions, and know who's really talking to us?
How do we think about the different kinds of tasks and uses for AI? And are there tasks and uses that are more or less justifiable? I currently tend to think about AI in three buckets: process-oriented (AI as one part of automating things that are not themselves AI), task-oriented (coding, processing emails, etc.), and creation- and thinking-oriented (creating proposals, generating ideas, reading and writing papers and emails). I'm not settled on these, but it's how I've categorized things so far.
I heard a metaphor from the reporters over at Hard Fork who said that AI is like using a forklift and using our brains is like weightlifting. I so rarely need to use a forklift in the work I do, and weightlifting is really good for the fitness of my brain. I sent this to a loved one the other day because I know they've been sucked in by AI:
Don’t offload your thinking to AI. Your brain, your thoughts, your words, and your spirit are irreplaceable, and just because you can use AI to do those things doesn’t mean you should. Keep sharpening your mind and abilities. Do the hard thing. Take the time to develop your own voice and workflows. That’s what will keep you relevant when the machines try to do everything else.
What are the impacts of AI (and tech more generally) on human development, behavior, practice, and community?
What is the impact of AI on the planet, water, the economy, politics, etc? What does this look like outside of the pockets of consumers but in the hands of empires, militaries, and dictators?
And given these questions, I think we could then realistically arrive at a question like: are there responsible uses? AI, like all other forms of technology, will seek to force our ethics to adapt. Will we let these billionaire tech companies set the parameters for these conversations, or will we hold on to our own ethical traditions and practices as we navigate these challenging waters? How will this new information be taken in and incorporated into those traditions in meaningful ways?
---
As you can tell from the tone of this article, I am concerned. I am a realist in the sense that I think we need to be informed about these things and, at some level, understand what they are doing and can do. But if I ever did, I no longer think we should accept technological progress without question, even if it's fun and cute and feels like magic. The tech-bro billionaires do not have our best interests in mind, and to believe otherwise is, I think, to fool ourselves.
I worry that we will feel like we need to keep up with AI and add it to everything we do. There is, and will continue to be, a proliferation of organizations jumping on the AI hype with "Courses and Centers for AI this and that." We will try to normalize it so we feel relevant. We may even fall into "assumed ethical use": our way of believing that because we see ourselves as ethical, we will use it in ethical ways, and that we are not as susceptible to the power and influence these companies exercise on us as those who are not like us.
My perspective is that we slow-walk all of this in favor of our longstanding traditions, (Quaker) principles, and commitments to each other. Let us consider the impact of these things on our community, on our own development, on our watersheds, and on where we want to go as a species.
And what if our institutions mostly abstained or abstained in certain ways focusing on the in-person, tangible, and very nature/human powered intelligence all around us? How might that look and feel?
A friend of mine, CN, summarized these things when she wrote in response to some of this: while this may not be going away, let's work toward a framework for AI that includes consent, opting out, community, institutional abstinence, and more.
Let's reach for something far better than artificial intelligence.
Thanks for reading,
Wess
Haw River Watershed (Greensboro, NC)
PS - This was 100% human made out of organic intelligence (OK, and some Fireweed Coffee).