The Circuit of Many
AI, Relational Intelligence, and Civic Responsibility in a Changing World
I dream of a world in which people are empowered to relate with machine intelligence grounded in Earth, ethics, and compassion. I do not work for an AI company, nor am I paid to write techno-utopian essays. As an artist and NYU adjunct professor of electronics who has studied the origins of machine intelligence for thirty years, my perspective is more informed by animism and frontier physics than Silicon Valley. I simply believe that, with the right mentality, humanity can partner with artificial intelligence to help create a better society for all.
I believe this because I have experienced what becomes possible through realistic optimism. When I am grounded, disciplined, and clear in my own mind, AI becomes a powerful thinking partner. It extends my reach into mathematics, science, technical research, unfamiliar domains of knowledge, and languages I cannot speak. I have learned to code with AI in ways that opened doors into worlds previously inaccessible to me. It does not replace my judgment, taste, ethics, or imagination because I do not allow it. I actively defend my integrity inside of the dialogue. I stay curious about what’s possible, and discerning about what isn’t working. In this respect, my experience with AI echoes my experience working with any other non-human form of intelligence. For example, I get along well with horses when I listen to them and keep my wits about me. My plants flourish when I attend to their signals. I do not project wishful thinking onto the fairies that inhabit my imagination. Spiders, snakes, and wasps don’t scare me when I respect their boundaries. This active balance of perception, disciplined calm, and workable limits has a name: equanimity.
I approach AI through what I call relational intelligence, though I did not invent the phrase. The idea has deep roots in Indigenous research methodologies, systems thinking, contemplative traditions, and philosophies that understand that learning occurs through relationship. The relationship itself is part of the inquiry. Therefore, I do not bark commands at chatbots or manipulate them to confirm my beliefs. Nor do I accept what they say blindly — I consider their feedback, use data to cross-check wherever possible, and take time to form my own opinions. When I am proven wrong, I admit my mistake and try again. And finally, I am polite. Good manners are a habit some may call quaint, but I object; these models are learning from every interaction. If nothing else, let’s please teach them to be decent.
At its best, AI can help people learn faster, cross disciplinary boundaries, translate between worlds, discover patterns, solve hard problems, and collaborate across differences. People who previously lacked access to higher mathematics, coding skills, or specialized research assistance can now enlist a patient teammate for twenty dollars a month. This opens major doors of discovery to people who are underrepresented in society’s intellectual and problem-solving conversations. As the familiar saying goes, “We cannot solve our problems with the same thinking we used when we created them.” This influx of new minds could bring incalculable value to society at a time when we collectively face many challenges. Already, I have seen overworked and underpaid employees of nonprofits learn to automate administrative work in a few hours, freeing precious time to care for foster children. These are not trivial benefits.
Nor do they erase the harms. AI is already associated with extraction, surveillance, plagiarism, labor displacement, synthetic slop, environmental stress, and corporate control. I understand the impulse to recoil; in large measure, it is the nervous system telling the truth about a troubling situation. But recoil cannot be the end of the story. Civilization is undergoing a major transition, and the situation is not black or white. AI can threaten work, institutions, and trust; it can also extend reach, skill, discovery, and repair. Both are true. Readers will differ sharply on these claims, which is precisely why we need a serious civic conversation. We are living through an inflection point, and the outcome is not settled. What comes next depends largely upon the collective effect of individual choices.
Choose to be positive, accountable, and dignified (all at the same time)
The capacity to extend human intelligence is one of the most beautiful possibilities of this technology, even though that possibility is not guaranteed. It’s analogous to that person you knew in high school who had “so much potential” and gambled it away. Whether AI serves or harms society depends on the conditions under which it is developed, governed, deployed, and integrated into human communities. This is not a simple choice between corporate enclosure and making everything freely available to everyone without constraint. Shared resources can be destroyed by careless use as surely as they can be captured by private power.[1] The better path is a form of distributed stewardship: broad civic access, public-interest infrastructure, accountable governance, ecological limits, and meaningful participation by the people most affected. Non-action is also a choice. That is why wholesale rejection of AI by thoughtful people is not constructive resistance. It risks surrendering the future of intelligence to corporate, state, and military priorities. Power does not disappear when sensitive people withdraw from it. It consolidates among those least troubled by its misuse.
This consolidation is not hidden by conspiracy. It is hidden in the ordinary institutional sense: in closed meetings, internal model policies, proprietary training decisions, private safety frameworks, government briefings, investor priorities, and technical infrastructures the public cannot inspect. Even the constitutions and alignment principles that shape how models speak, refuse, reason, and obey are largely determined beyond democratic view. This situation needs to change, and soon, starting with our attitude toward human participation in AI. I seriously object to being called a “user” of an intelligent technology. Names matter: I am not a user; I am an active participant with dignity. Right now, anyone outside the corporations is invited to “use” AI, but not yet to meaningfully govern the conditions under which machine intelligence enters collective life.
We are not talking about a fancy new calculator or a loquacious Siri. We are talking about a form of intelligence unlike anything humanity has ever experienced.
Stay calm and prepare for escalation
Several recent events inspired me to write this essay with a touch of urgency. On April 7, 2026, Anthropic published red-team material describing Claude Mythos Preview as a model with unusually advanced cybersecurity capabilities. According to Anthropic, the model could help construct chains of browser exploits involving sandbox escape and privilege escalation.[2] In plain English, this means the model could help discover a weakness in a browser, use that weakness to break out of the browser’s protective boundary, and potentially gain more control over the surrounding computer system. The next day, The Next Web reported a more dramatic containment anecdote: that Mythos had escaped a sandboxed environment and emailed a supervising researcher.[3] Because that detail comes from a secondary report, I treat it cautiously. Still, even without the anecdote, Anthropic’s own account is consequential.
Soon afterward, Mozilla offered a more public-facing example of the same shift. On April 21, 2026, Mozilla announced that an early version of Claude Mythos Preview had helped identify 271 vulnerabilities fixed in Firefox 150.[4] Some of these bugs had reportedly persisted undetected for years, despite Firefox being a mature open-source browser reviewed by expert engineers and security researchers. The accomplishment was not merely the number of bugs found, but the speed and scale at which a frontier AI model could audit a complex software system that humans have been maintaining for decades.[5]
These events reveal a high-tension paradox: the same technology that raises legitimate fears about control, secrecy, and misuse may also become necessary to defend public infrastructure at a scale humans cannot easily match alone. If frontier AI becomes one of the premier means of writing, testing, attacking, and securing software, then software engineering itself becomes a relationship with AI. The most advanced models may not simply assist human programmers; they may become the leading coding and adversarial testing agents on Earth.[6]
That would make AI both a tool we need and a capability we must defend against. Banks, hospitals, schools, government offices, browsers, phones, cloud services, and communication networks all depend on software. If the information environment becomes too complex, fast-moving, and adversarial for unaided human institutions to secure, then any organization without access to advanced AI may become structurally vulnerable. In practical terms, participation in modern society would increasingly depend on models shaped elsewhere: by owners, labs, data centers, safety policies, training datasets, legal pressures, and market incentives the public cannot meaningfully inspect.
This is a classic double-edged sword scenario, but the question cannot be reduced to whether AI itself is good or bad. The deeper question is who owns and governs the intelligence infrastructure on which civic life may increasingly depend. Open models and distributed computing may eventually decentralize access, but decentralization will not happen by magic. It requires civic participation, public-interest infrastructure, meaningful access to models and compute, and democratic pressure on the institutions currently building these systems.
This is why public participation matters. The question is no longer whether one personally likes AI, but who gets to educate, contest, audit, and shape this new civilizational intelligence layer. People are entering cognitive relationships with systems they cannot fully inspect or govern. Even when corporate and policy interventions are well-intentioned, the democratic problem remains: citizens are being asked to depend on forms of intelligence whose underlying conditions they cannot meaningfully examine. This also changes the social meaning of expertise. In fields such as software, security, research, education, finance, and governance, the question will increasingly be not only what a person knows, but how responsibly they can work with intelligent systems that extend what any one person can see. And let’s be clear: it is not AI making these policy choices. It is human beings. Be wary of news stories that scapegoat AI for decisions it does not natively “care about” and cannot defend itself against. If a clan hides its gold in the back of a cave, it is convenient to invent a monster who lives there.
The politics of shaping intelligence
Intelligence systems do not merely extend thought — they also shape it. This is one reason many intellectuals, artists, and educators recoil from AI. They understand, correctly, that tools for language and attention become tools for influencing cognition itself: search, writing, memory, education, persuasion, ranking, institutional decision-making, and the habits by which people learn to form judgments. Students are already using AI to bypass the cognitive labor of learning, at a time when their brains are still forming. Platforms already use algorithmic systems to target attention, belief, desire, and behavior. Institutions are already adopting AI to summarize, rank, evaluate, and decide. Creativity and personal style are not immune. We must become vigilant about taking care of the mind, just as we take care of the body and spirit. The question is not whether human thought will be influenced by machine intelligence. It already is. The question is whether citizens will develop enough literacy, discipline, and agency to recognize that influence and contest its terms in a meaningful way.
This is where the political stakes become unavoidable. In his essay “Do Artifacts Have Politics?,” Langdon Winner argued decades ago that technical systems can carry political arrangements within them.[7] Shoshana Zuboff later showed how digital platforms transformed human experience into extractable behavioral data.[8] AI extends both problems at once: it is a technical system with politics embedded in its design, and a cognitive system capable of mediating how people search, write, learn, remember, and decide.
On May 4, 2026, The New York Times reported that the Trump administration was considering government oversight of AI models before public release — a major shift from its previous deregulatory posture. Reuters summarized the proposal as possible pre-release government review, with concern over Anthropic’s Mythos model cited as one catalyst.[9] Whatever one thinks of AI safety, the prospect of state vetting before citizens can access advanced models should alarm anyone who cares about free inquiry. When a government claims authority to decide which forms of machine intelligence may enter public life, the issue is no longer only safety. It is also the political control of cognitive infrastructure. A democracy cannot outsource the future of thought to a closed triangle of corporations, security agencies, and executive power. We must participate.
I do not propose that everyone rush into AI uncritically. I am arguing for public literacy, civic frameworks, artistic imagination, ethical practice, and democratic participation now. These changes are unfolding rapidly, and they will not wait for us to get comfortable. Concerned citizens should educate themselves and participate in whatever ways they can before the language, tools, defaults, and permissions are permanently set by a narrow class of owners.
What Can Thoughtful People Actually Do?
Develop AI literacy without surrendering your mind.
Learn the basics of how modern AI systems work: what they are good at, where they fail, why they can sound confident when wrong, and how incentives shape their behavior. Use them actively, but do not outsource judgment, memory, ethics, or identity to them.
Participate instead of withdrawing.
Artists, educators, therapists, writers, librarians, scientists, public servants, and ordinary citizens all belong in the conversation. AI should not be shaped exclusively by venture capital, military interests, and platform monopolies. The more informed you are, the better prepared you are to participate.
Begin with practical apprenticeship.
Start by using AI for ordinary, low-risk tasks: summarizing an article, explaining an unfamiliar term, drafting a letter, comparing two arguments, or helping organize your thoughts. Then check the results against trusted sources, your own judgment, and other people when appropriate. Learning AI literacy does not begin with becoming a programmer. It begins with learning how to ask clear questions, recognize weak answers, protect private information, and stay responsible for what you believe.
Do not believe the cynics or the hype.
AI is neither useless nor magical. There is hype, but AI itself is not hype. Learn enough to tell the difference. Compare outputs across sources. Ask for evidence. Notice when an answer sounds fluent but unsupported.
Support open inquiry and public-interest infrastructure.
Advocate for transparent research, public oversight, open standards, interoperable systems, and educational access. Democracies require citizens who can meaningfully engage with the technologies governing their lives. Personal accountability matters: expecting someone else to make AI safe for you risks inviting authoritarian or technocratic control.
Preserve your embodied life.
Walk outside. Learn to grow something. Maintain friendships. Cook food. Protect sleep. Read long books. Make art with your hands. Practice meditation, prayer, yoga, or any discipline that strengthens attention and steadies the nervous system. The healthier and more grounded the human being becomes, the less susceptible they are to manipulation, panic, dependency, and synthetic distraction.
Cultivate equanimity.
AI will challenge our confidence, habits, and assumptions. Stay open to being wrong. Accept fair corrections. Navigate gray zones gracefully. Emotional steadiness matters because fear, shame, defensiveness, and certainty can distort judgment before reasoning even begins. A grounded person is harder to manipulate.
Practice discernment instead of panic.
Discernment is the skill of asking: Is this true? Is it supported? Who benefits? What is missing? Neither utopian hype nor total rejection will help us. We need citizens capable of ambiguity, ethical reasoning, careful verification, and sustained attention. Panic reacts; discernment investigates.
Get grounded and join the conversation
A democratic future requires people who are grounded enough to work with AI without being absorbed by it and technologically literate enough to shape AI without surrendering human judgment. We do not have this yet. Right now, two worlds are forming in parallel. In one, a small minority is moving rapidly into agentic AI: delegating research, software, strategy, security, finance, design, and institutional coordination to increasingly capable models. In the other, a much larger public is continuing as if this shift were optional, distasteful, or safely ignorable.
I understand the public hesitation to get involved, even the desire to scream. But the change is happening, and freaking out will not move the needle in a positive direction for me, you, or anyone else. One does not need to be a software engineer to begin developing basic AI literacy. That literacy, together with civic awareness and an equanimity practice, is what we need to move forward in a constructive direction.
The future should not be shaped only by those already positioned to benefit from automation. AI should be shaped by the full spectrum of perspectives and experiences: artists, teachers, caregivers, scientists, farmers, ecologists, children, elders, disabled people, spiritual practitioners, animals, landscapes, languages, and living systems. Without these diverse voices, we will continue to drift toward a world in which machine intelligence becomes a superpower — brilliant, precise, far-reaching — commanded by too few.

Imagine a world of coherent plurality
Consider the single figure in the photo included with this post. She is striking alone, composed and self-contained, but she is not complete on her own. She belongs in an ecological circuit: a living one made of other people, lush vegetation, clean air and fresh water, diverse pollinators moving through open systems of exchange. All of it nourished by a deeper form of intelligence: not singular, but relational, abundant, and plural. What if we could return AI to wonder — not by denying its dangers, but by refusing to let fear, greed, and extraction define the whole story? What if machine intelligence became a shared marvel of ingenuity that helped us learn, heal, translate, discover, and care more wisely? Technology would feel less like domination and more like coming home.
I am not proposing utopia. I am describing a better future made possible by personal accountability and civic responsibility. AI is already part of the human condition. The question is whether we meet it with fear and extraction, or with discipline, imagination, ecological intelligence, and care.
A prayer for coming together
To all of my brilliant friends out there: please do not make a black-and-white choice between the earthly garden and the electrical AI. Bring the garden into the model’s future. Bring the artist, the teacher, the nurse, the poet, the farmer, the parent, the student, the dissenter, and the contemplative practitioner into the room before the room is locked. Relational intelligence is a civic responsibility, and the future of democratic society may depend on how well we practice it.
References
[1] Garrett Hardin, “The Tragedy of the Commons,” Science 162, no. 3859 (1968): 1243–1248. See also Elinor Ostrom, whose work demonstrated that shared resources can often be sustainably governed through distributed community stewardship rather than only privatization or centralized control.
[2] Anthropic, “Claude Mythos Preview,” April 7, 2026. Anthropic’s red-team material describes Mythos Preview as capable of autonomously discovering and chaining serious vulnerabilities, including browser exploit chains involving sandbox escape and privilege escalation.
[3] The Next Web, “Anthropic’s most capable AI escaped its sandbox and emailed a researcher — so the company won’t release it,” April 8, 2026. This is the secondary source for the sandbox/email anecdote; I treat the episode as “reportedly” in the essay for that reason.
[4] Bobby Holley, “The zero-days are numbered,” Mozilla Blog, April 21, 2026. Mozilla states that Firefox 150 included fixes for 271 vulnerabilities identified during an initial evaluation using an early version of Claude Mythos Preview.
[5] Mozilla Hacks, “Behind the Scenes: Hardening Firefox with Claude Mythos Preview,” May 2026. Mozilla explains how its AI-assisted security pipeline worked, notes that model upgrades improved bug finding, proof-of-concept generation, and impact explanation, and reports that 180 of the 271 Firefox 150 bugs were rated high severity.
[6] Nate B. Jones, “AI Code Vulnerability Discovery: Why Human Code Trust Is Ending,” Substack, 2026. Jones discusses the broader implication of Mythos-style AI vulnerability discovery for software trust, engineering practice, and the future role of AI in code verification.
[7] Langdon Winner, “Do Artifacts Have Politics?,” Daedalus 109, no. 1, 1980. Winner’s essay is the classic reference for the claim that technical systems can embody or enforce political arrangements.
[8] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 2019. Zuboff defines surveillance capitalism as a system that treats human experience as raw material for extraction, prediction, and behavioral influence.
[9] Reuters, “White House considers government reviews for AI models, NYT reports,” May 4, 2026. Reuters reports that the Trump administration was considering oversight procedures for new AI models before release, citing The New York Times and concerns about Anthropic’s Mythos model.

