AI has entered the workplace with all the subtlety of a wrecking ball. So far, most of the headlines pound home what it’s taking away: jobs, privacy, and the last shreds of human autonomy. From hiring tools that filter out candidates using biased data to productivity trackers that log every keystroke, the technology has largely been cast as the boss’s new weapon.
The funny thing is, AI isn’t brand new. It’s been shaping everyday life for years through tools like spam filters and Netflix recommendations. What’s different now is that generative AI tools like ChatGPT and DALL·E have made its power visible and accessible to millions, while leaps in computing power and data have made these systems far more capable. Add the post-pandemic workplace defined by layoffs, remote monitoring, and wage pressure, and AI has shifted from a background efficiency tool to a frontline driver of economic and political change. The stakes, and the potential for public harm or benefit, are much higher.
And the fear is well founded. In the hands of those profiting from labor cuts, AI becomes a devastating force for exploitation. Built to serve shareholders, it behaves like any other profit-maximizing tool in history.
But that’s not the only possible story. This transformative technology now used to surveil and deskill workers could, in theory, be used to do the opposite: strengthen protections, expedite public services, and expand access to life-saving resources. As youth activist Sneha Revanur contends, “If these algorithms were programmed for good, they could be used for good.”
AI isn’t inherently exploitative; it mirrors the values and intentions of its creators. So the question isn’t just “What will AI do to us?” but “Who decides what AI is for?”

Who Decides What AI Is For
The notion that AI must be deployed for automation and cost-cutting is a choice, not a given. When corporations dictate the goals of AI, they shape its impact. That’s why we have productivity trackers, surveillance algorithms, and efficiency metrics, all in service of profit. The same systems could just as easily be repurposed to track wage theft, streamline benefits access, or flag unsafe conditions.
The myth of AI’s neutrality ignores who holds the real power. As AI computer scientist Timnit Gebru warns, “Right now, only a handful of people and organizations have the power and resources to automate decision-making.” When control rests with a small elite, decisions reflect entrenched hierarchies instead of dismantling them.
The problem isn’t that AI exists; it’s that its trajectory has been set by those who stand to profit from keeping things exactly as they are. If the power to design and deploy these systems were broadened beyond the boardroom, AI could serve entirely different ends. As musician Björk notes, “You always have to figure out the morality [of a new technology], and what it means on every level: socially, personally, and politically.” That shift in control opens the door to AI as a public utility, not a corporate weapon.

AI as a Public Utility
Imagine an open-source AI run by a city agency, a workers’ union, or a nonprofit: one trained on public datasets, governed transparently, and built to solve community problems rather than to maximize shareholder returns.
Barcelona offers a glimpse of what’s possible. Through Decidim, residents can propose, debate, and vote on policies; through the DECODE project, they gain ownership and control of their personal data. These initiatives form a blueprint for ethical, citizen-led technology. Barcelona’s smart-city systems already improve daily life by optimizing traffic flow, coordinating bus networks, and enhancing emergency response in real time.
Applied to workforce issues, this model could match workers to apprenticeships based on skills and interests rather than connections, route surplus food from restaurants to shelters through real-time coordination, or analyze workplace injury data to flag dangerous patterns before accidents happen. In this framework, AI becomes infrastructure for equity.

The Unionized Algorithm
If corporations can train AI to monitor workers, unions can train it to defend them. Warehouse AI could flag unsafe workloads, scheduling systems could catch fair-scheduling violations, and performance-review data could be audited for systemic bias in promotions.
Worker-led tech initiatives already exist. With Coworker.org’s digital tools, Starbucks and REI employees have won tangible victories. In Europe, IG Metall is negotiating transparency and safeguards in algorithmic management. In the U.S., the Athena Coalition has backed the federal Warehouse Worker Protection Act to bring quota transparency, limit surveillance, and improve safety at Amazon and similar employers.
As the Berkeley Labor Center puts it, “Technology is not inherently good or bad, but neither is it neutral; public policy must ensure technology serves and responds to workers’ interests.” By turning AI into a watchdog for labor rights, unions could ensure that technology in the workplace strengthens worker power instead of eroding it.

AI That Cuts Red Tape Instead of Jobs
Across the U.S., public benefits systems are collapsing under understaffing and outdated technology, forcing applicants for unemployment insurance, disability benefits, housing assistance, and immigration aid to wait months, often with devastating consequences.
With public oversight, AI could help break these bottlenecks. In New Jersey, for example, an AI translation assistant developed with Google.org has tripled translation speed for unemployment insurance forms, improving access for Spanish-speaking claimants; the state is now sharing those tools with other states. In Nevada, an AI prescreening tool is reducing backlogs and processing claims with 99.99% accuracy.
AI can translate forms into dozens of languages in seconds, verify supporting documents, and guide applicants through complicated paperwork, escalating only the hardest cases to human staff.
Used well, AI could resolve claims in days, match housing applicants to available units instantly, and streamline aid, all while safeguarding privacy, ensuring transparency, and upholding independent oversight. Still, as journalist Julia Angwin has noted in The New York Times, “AI is not even close to living up to its hype… It’s looking less like an all-powerful being and more like a bad intern whose work is so unreliable that it’s often easier to do the task yourself.” The potential is real, but so is the need for rigorous oversight and human judgment in every system.

AI as Corporate Watchdog
Most workplace AI today monitors employees, measuring keystrokes, tracking breaks, and tallying “idle” minutes. A recent APA study found that more than half of monitored workers feel stressed on the job, underscoring the mental health toll of constant surveillance.

That same analytical power could be redirected toward those with real decision-making authority. AI could analyze hiring and promotion patterns to detect bias, monitor pay data for hidden wage gaps, flag environmental violations, and map lobbying networks. ProPublica’s “Machine Bias” investigation showed how a risk-assessment algorithm used in criminal sentencing rated Black defendants as higher risk than white defendants with similar profiles, exposing deep algorithmic bias. Global Witness has applied AI to identify fossil fuel lobbyists at COP climate talks, revealing whose interests were shaping climate negotiations.
Instead of being the boss’s microscope, AI could be the public’s telescope for corporate behavior, magnifying consequential patterns and keeping them in plain sight. It would not replace journalists, regulators, or advocacy groups, but it could give them faster, deeper insights to act on. The shift would be simple but seismic: move AI from tracking worker “efficiency” to tracking corporate responsibility.

The Mutual Aid Machine
When traditional systems buckle, grassroots tech steps in. Civic hackers, volunteers, and digital makers have repeatedly used AI and mapping tools as lifelines during crises. During the COVID-19 pandemic, Taiwan’s crowd-sourced mask availability map evolved into a real-time, pharmacy-linked distribution system. In the aftermath of earthquakes, volunteer networks used tools like AIDR and MicroMappers to rapidly verify and map critical needs.
Tenant justice has gone high tech as well. Cornell Law students launched Teny, a chatbot that delivers legal guidance on landlord disputes to renters in upstate New York. In NYC, the AI-powered Roxanne the Repair Bot, developed by Housing Court Answers and NYU, uses conversational logic to help tenants document repairs, draft letters, and file legal reports, making justice instantly actionable.
These projects demonstrate how AI can be a lifeline, matching aid faster than bureaucracy and translating rights into real-time tools. The technology already exists. What’s needed is vision, coordination, and trust.

The Clock Is Ticking
We’re in a brief window where AI governance is still open to influence. But corporate uses are hardening quickly. Every system built to monitor workers or cut jobs makes reclaiming AI for the public harder.
We’ve seen what’s possible when technology is guided by values other than profit: Barcelona’s citizen-led platforms, union watchdog systems, tenant justice chatbots, and crisis response networks. These aren’t thought experiments; they’re working models. But without organized pressure, they will remain isolated experiments rather than the new standard.
AI is not destiny. It’s infrastructure. Built for the many, it can be a public utility. Captured by the few, it will be another weapon for exploitation. The time to decide is now, before the cement sets.

Primary Sources:
- European Commission, “DECODE: Giving People Control Over Their Data,” 2020
- ProPublica, “Machine Bias,” 2016
- Google.org, “Helping States Improve Access to Unemployment Benefits,” 2023
- Global Witness, “The Lobbyists at COP26,” 2021
Further Reading:
- Social Europe, “Collective Bargaining Over Algorithms,” 2021
- Athena for All, “Ending Harmful Quotas”
- Barcelona’s Decidim platform and smart city initiatives
- Housing Court Answers and NYU’s tenant justice AI tools



