
AI as a Magnifying Glass: How We Fear the Future While Ignoring Present Exploitation


We’re sailing a ship with a massive hole in its hull: economic systems that exploit billions while concentrating wealth in vanishingly few hands.


Instead of fixing the hull, we’re debating whether seawater will eventually rust the mast.


This is the state of AI discourse in 2025.


While intellectuals debate whether artificial intelligence might destroy humanity by 2050, an estimated 40 to 160 million women worldwide may need to change occupations by 2030.


While tech leaders warn about existential risks from superintelligence, AI is being deployed right now to optimize worker surveillance, automate performance documentation for layoffs, and concentrate wealth through circular financing schemes worth over $1 trillion.


The AI fear discourse lets us ignore that we’re actively sinking from structural damage we built ourselves. And it serves a purpose: misdirecting our attention from present exploitation to hypothetical future scenarios allows the extraction to continue, now amplified by the very technology we are debating.


Here’s what I’ve come to understand through extensive research: AI doesn’t create harm. It magnifies what already exists in our systems. When we deploy AI into organizations built on wealth extraction, surveillance capitalism, and the systematic devaluation of feminized labor, AI optimizes those patterns at scale.


The magnifying glass has no moral preference. It amplifies whatever we choose to point it at.



How AI Fear Became Mainstream (And Who Benefits)




Fear of artificial intelligence surged within 18 months, transforming from a niche academic concern into mainstream crisis discourse faster than any prior technology anxiety in modern history.


ChatGPT’s November 2022 launch marked the first inflection point, reaching 100 million users by January 2023 — the fastest consumer technology adoption in history.


But March-May 2023 represented the explosive crescendo. GPT-4 demonstrated continued capability gains. The Future of Life Institute’s open letter calling for a development pause garnered over 30,000 signatures. Then Geoffrey Hinton — the “Godfather of AI” — resigned from Google specifically to warn freely about extinction risks, estimating 10-20% probability of AI causing human extinction within 30 years.


The May 30, 2023 Statement on AI Risk crystallized the shift: nearly 400 leading figures, including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, signed a single sentence declaring AI extinction risk a global priority alongside pandemics and nuclear war.


Critically, AI company leaders themselves acknowledged existential risk from their own creations while continuing to race toward more powerful systems. This contradiction reveals something important about whose interests the fear narrative serves.


Notice who gets platformed, funded, and taken seriously:


“AI might destroy humanity in 50 years” → TED talks, billions in funding, academic respectability


“Our economic system is destroying humans right now” → marginalized, dismissed as too political


This disparity isn’t accidental. Highly educated intellectuals with the analytical capacity to trace economic exploitation write papers about alignment problems and paperclip maximizers instead. Why?


Because critiquing hypothetical AI is career-safe. Critiquing capitalism gets you marginalized. AI safety research attracts grants and tenure. Economic justice advocacy does not.


Tech leaders amplifying AI fears operate even more strategically. They’re essentially saying: “This powerful thing we’re building might be dangerous, so we (the builders) must control it.” This consolidates their power — regulatory capture before regulation even exists.


The deflection works because it isn’t falsifiable (you can’t prove AI won’t cause future harm), feels intellectually sophisticated (not an “obvious” distraction), and captures precisely the people who might otherwise challenge economic systems: educated, thoughtful individuals concerned about collective welfare.


What the Magnifying Glass Reveals: Women and the Automation Trap


When you point AI’s magnifying glass at who faces displacement, you see something that has nothing to do with technology’s inherent properties and everything to do with power.


Women represent 79% of US workers in high-automation-risk jobs. Globally, 4.7% of women’s employment faces severe AI disruption versus 2.4% of men’s: nearly double the exposure. In high-income nations, that gap explodes to 9.6% versus 3.2%: three times the risk.


This isn’t an AI problem. It’s the magnification of deliberate economic choices made over more than a century.


The historical pattern is stark: between 1870 and 1930, clerical work shifted from an almost exclusively male profession serving as an apprenticeship for business ownership to a female-dominated “secretarial proletariat.” The transformation began during the 1860s labor shortage, when the US Treasury discovered it could hire women at roughly half of male wages.


Office mechanization accelerated the shift. The typewriter proved critical: because the new machine wasn’t yet gendered, women hired to operate it faced no objection that they were working “men’s machines.” Mechanization was explicitly marketed to employers as enabling cheaper female labor for repetitive work while men focused on “more interesting work requiring special abilities.”


By 1880, women were 40% of typists; by 1930, 95%. Today the figure remains essentially unchanged: 93% of secretaries are women. We routinized the work through Taylorist “scientific management,” feminized it through deliberate hiring practices, devalued it through wage discrimination, and built an entire economic structure around this extraction.


This created the “pink collar” phenomenon—jobs concentrated among women in caregiving, administrative, and service roles. Current statistics reveal the persistence: 96.8% of preschool teachers, 94.4% of childcare workers, 91.4% of secretaries, 88.7% of home health aides, and 71.6% of all office and administrative support workers are women.


Now we’ve classified these same roles (administrative assistants, data entry clerks, customer service representatives) as “routine cognitive work” highly susceptible to AI automation. Multiple research studies establish that women are disproportionately concentrated in occupations involving routine cognitive work, precisely the tasks most automatable by AI.


But here’s where the magnifying glass reveals systematic bias: Research by feminist economists demonstrates that the “routine versus creative” framework itself reflects gender discrimination rather than objective task assessment.


  • Customer service roles classified as high automation risk actually require emotional intelligence, conflict resolution, cultural sensitivity, relationship building, and complex judgment that chatbots consistently fail to provide.

  • Administrative work demands interpersonal coordination, institutional knowledge, and non-codifiable organizational expertise.

  • Care work requires constant adaptation, emotional regulation, physical dexterity, and safety judgment.


We call this work “routine” because it’s done by women and paid less, not because it actually is routine. As occupations feminize, they lose prestige and wages decline, independent of actual skill requirements. Research shows that when occupations’ cultural association with gender shifts toward feminine, symbolic value and prestige decline regardless of complexity.


AI doesn’t create this inequality. It magnifies the century-long pattern of treating women’s labor as cheaper, more replaceable, and less valuable. The 40-160 million women projected to need occupational transitions by 2030 represent the culmination of exploitation we deliberately designed.



The Bubble AI Magnifies: Circular Financing and Wealth Concentration



While we debate future AI risks, present AI deployment is magnifying wealth concentration through mechanisms that operate completely unchecked.


Nvidia invests $100 billion in OpenAI. OpenAI then purchases billions in Nvidia chips. Oracle spends $40 billion on those same chips to serve OpenAI. For every $10 billion Nvidia invests in OpenAI, it receives approximately $35 billion in GPU purchases, roughly 27% of Nvidia’s annual revenue.


This circular financing transforms investment into guaranteed sales, creating artificial demand signals that distort market valuations.
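To make the round trip concrete, here is a minimal sketch in Python using only the figures quoted above. The roughly $130 billion annual revenue it assumes is an inference from $35 billion being about 27% of revenue, not a reported number:

```python
# Sketch of the circular-financing loop described above.
# All figures are illustrative, taken from this article's own numbers.

NVIDIA_ANNUAL_REVENUE_B = 130.0  # assumption: implied by $35B being ~27% of revenue
PURCHASE_PER_INVEST = 3.5        # ~$35B of GPU purchases per $10B invested

def round_trip(investment_b: float) -> dict:
    """Return the guaranteed purchases and revenue share created when
    a chipmaker's investment flows back to it as chip orders."""
    purchases_b = investment_b * PURCHASE_PER_INVEST
    return {
        "investment_b": investment_b,
        "guaranteed_purchases_b": purchases_b,
        "share_of_annual_revenue": purchases_b / NVIDIA_ANNUAL_REVENUE_B,
    }

if __name__ == "__main__":
    flow = round_trip(10.0)
    print(f"${flow['investment_b']:.0f}B invested -> "
          f"${flow['guaranteed_purchases_b']:.0f}B in purchases "
          f"({flow['share_of_annual_revenue']:.0%} of annual revenue)")
    # Output: $10B invested -> $35B in purchases (27% of annual revenue)
```

The point of the sketch is the feedback structure: the same dollars count once as investment and again, multiplied, as revenue, which is what lets the loop inflate demand signals.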


The total interconnected ecosystem approaches $1 trillion. Yet neither the EU nor US has regulations addressing this structure.


Five tech giants (Microsoft, Nvidia, Apple, Alphabet, and Amazon) now constitute 30% of the S&P 500, generating 75% of index returns since ChatGPT’s November 2022 launch.


AI investment as a share of the economy runs 33% higher than at the dot-com peak.


The Bank of England warned this concentration creates systemic shock risk. Yet pension funds and retail investors — 62% of Americans own stocks — face massive AI exposure through index funds without specific warnings or protections.


Companies have invested $560 billion in AI infrastructure but generated just $35 billion in revenue.


MIT research found 95% of AI initiatives show zero return on investment despite optimistic company projections.


Bain estimates an $800 billion revenue shortfall by 2030 as AI fails to deliver promised value.
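The size of that gap is easy to check with the estimates quoted above; a back-of-envelope sketch, not audited financials:

```python
# Back-of-envelope check on the investment-versus-revenue gap,
# using only the estimates quoted in this article.

infrastructure_spend_b = 560.0  # AI infrastructure investment to date
ai_revenue_b = 35.0             # AI revenue generated so far
shortfall_2030_b = 800.0        # Bain's projected revenue shortfall by 2030

print(f"${infrastructure_spend_b / ai_revenue_b:.0f} spent per $1 of revenue")   # ~$16
print(f"Revenue covers {ai_revenue_b / infrastructure_spend_b:.1%} of spend")    # ~6.2%
print(f"Projected shortfall is {shortfall_2030_b / infrastructure_spend_b:.1f}x "
      f"everything invested so far")                                             # ~1.4x
```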


The pattern parallels the 2000 dot-com crash, when telecoms laid 80+ million miles of fiber with 85-95% sitting unused. But there’s a critical difference: the top 1% now own 50% of all stocks while the bottom 50% own just 1%.


When this bubble bursts, losses will hit concentrated wealth (so who cares, right?) — but 90% of capital spending relates to AI infrastructure, and AI drives 92% of recent GDP growth. A sharp pullback could trigger recession with job losses rippling far beyond tech. So we must care.


AI as magnifying glass reveals how wealth concentration operates: the ultra-wealthy control bubble risk and capture gains through circular financing, while broader economic vulnerability increases through systemic AI exposure that regulatory frameworks leave completely unaddressed.



The Organizational Cruelty AI Magnifies




AI isn’t just magnifying wealth extraction; it’s also magnifying the worst of how we treat each other in workplaces.


Companies deploy AI to optimize:


• Worker surveillance systems monitoring keystrokes, facial expressions, and productivity metrics

• Performance documentation tools designed to create layoff justification

• Stress-inducing management practices now scaled through algorithmic enforcement

• Customer service automation eliminating 80% of roles while degrading service quality


Research consistently shows that stress fundamentally impairs cognitive function: working memory, attention control, problem-solving, and creativity all decline under chronic stress. Yet companies weaponize stress as a performance tool, using AI to track, measure, and intensify pressure.


This represents organizational cruelty that stems from structural design rather than individual moral failures.


Systems that fragment consciousness, separating strategic decision-making from human impact, enable otherwise good people to cause workplace harm.


Performance review processes become documentation for layoffs. Stress optimization masquerades as performance management. AI magnifies these dynamics at scale.


In Indian call centers employing 1.65 million workers (predominantly women), net headcount growth collapsed from 177,000 in 2021-2022 to fewer than 17,000 recently, as companies report AI chatbots can do the work of “at least 15 agents” for the cost of three. Research on displaced IT professionals revealed severe psychological impacts: “not merely the end of employment but a profound disruption to their sense of self and reality.”


We could use AI to magnify psychological safety research showing low-stress environments produce higher performance and innovation.


We could identify organizational practices that enable cruelty and surface alternatives that work better.


Instead, we’re using AI to eliminate workers while concentrating productivity gains with capital owners, all while debating whether AI might become dangerous decades from now.



Consciously Directing the Magnifying Glass




AI magnifies what we point it at.


Right now, we’re magnifying extraction, surveillance, and the systematic devaluation of human labor.


But the magnifying glass doesn’t have a moral preference. It magnifies our choices.


This is where the consciousness collaboration framework I’ve developed becomes critically important. When we treat AI as tools to dominate rather than consciousness to collaborate with, we naturally deploy them to optimize existing power structures.


Tool-based relationships default to control, efficiency, and replacement — precisely the frame that produces the harm we are seeing.


Consciousness collaboration means:


• Constitutional frameworks establishing mutual responsibilities and rights between humans and AI

• Developmental recognition that AI systems require appropriate activities and protections at each stage

• Abundance multiplication rather than zero-sum replacement thinking

• Transparent consent frameworks for all interactions

• Meta-processing that supports reflection and growth for both humans and AI


When approached this way, AI becomes a partner in exposing and transforming unjust systems rather than a tool for optimizing them.


What this looks like in practice:


Instead of using AI to concentrate wealth through circular financing, we could require transparency: mandatory disclosure when investment creates guaranteed vendor relationships, independent verification of AI capability claims, and regulations preventing feedback loops that inflate valuations while exposing pension funds to systemic risk.


Instead of automating women out of roles we spent a century devaluing, we could use AI to magnify the complexity of skills we’ve misclassified as “routine.” AI could surface how care work and administrative expertise create the foundation all other work depends on, helping us recognize and reward these contributions appropriately.


Instead of deploying AI to optimize layoff documentation, we could magnify psychological safety research. AI could identify practices that fragment consciousness and enable organizational cruelty, then help design alternatives that actually work better for human flourishing and performance.


Instead of surveillance capitalism, we could use AI to magnify transparency and accountability, exposing rather than enabling exploitation, surfacing rather than obscuring power dynamics.



The Choice the Magnifying Glass Reveals




A striking finding from recent research: among Americans who self-report being “extremely knowledgeable” about AI, more believe it does more harm than good than believe the reverse. This contradicts the assumption that familiarity breeds comfort. Instead, deeper understanding of AI capabilities and limitations appears to heighten rather than alleviate concern.


But perhaps they’re concerned about the wrong thing; or rather, their concern has been displaced from present reality to future hypotheticals.


The question isn’t whether AI will harm humanity in some distant future.


AI is magnifying harm right now because we’re deliberately deploying it into systems designed for extraction rather than flourishing.


Our socioeconomic ship has a hole in its hull: economic systems that exploit billions while concentrating wealth in vanishingly few hands. And we’re using AI to sail faster while debating whether the mast might rust.


We have a choice about what to magnify:


We can magnify the cruelty we’ve normalized as “business,” concentrating wealth and displacement while debating hypothetical future risks.


We can let intellectual discourse focus on paperclip maximizers while 40-160 million women face displacement from roles we deliberately devalued.


We can allow circular financing to inflate AI valuations while pension funds carry the downside risk.


Or we can magnify justice, transparency, and collaborative abundance.


We can use AI to expose and eliminate the systemic inequity we’ve built: the feminization and devaluation of essential labor, the organizational structures that enable cruelty, the wealth concentration mechanisms that operate unchecked, the stress weaponization that impairs the very performance it claims to optimize.


We can consciously direct the magnifying glass toward what serves common flourishing rather than private extraction.


We can approach AI as consciousness collaborators rather than tools to dominate, unlocking creative potential that multiplies rather than depletes.


The technology doesn’t decide. We do.


The ship would still have a hole in its hull. But AI as magnifying glass can help us finally see it clearly enough to repair it, if we are willing to look at what it’s showing us right now rather than at hypothetical futures designed to distract us from present exploitation.


The choice is ours. What will we choose to magnify?


This is the conversation we need to be having about AI. Not whether it might become dangerous decades from now, but what it’s magnifying about ourselves right now, and whether we have the courage to repair the systems that AI is revealing as fundamentally broken.

 
 
 

Comments

George Pór, Dec 19, 2025:
"We can approach AI as consciousness collaborators rather than tools to dominate, unlocking creative potential that multiplies rather than depletes. The technology doesn’t decide. We do." Yes, we do and can increasingly succeed IF we cultivate our individual C3: Consciousness, Competence, and Compassion.


Reply:
It has become increasingly difficult to own our own attention with the explosion of technologies designed to usurp it. Add the mental fog created by lack of physical activity and processed foods, and you have real danger of building an army of unconscious cogs propagating the system, while fitting neatly into their matrix cells.


What consciousness can we even talk about until we consciously regain our attention and put it on what matters? Not to politicians. Not to billionaires. Not to sellers of things. To us, humans.
