I Compared GenAI to a Virus. Then I Realized What I Was Really Doing.
Feb 10 / Shally
I built the metaphor.
🦠 Virus → 🤖 GenAI.
The parallels were everywhere.
- Exponential spread through organizations.
- Mutation as models adapted.
- Dependency that changes host behavior.
The story even included carriers who transmitted without understanding what they were spreading.
It all fit so perfectly I could taste the elegance of it.
Then I looked at my inbox: 2,000+ requests for my AI browser implementation guides. TSIUniversity.com members paying for subscriptions, most of them senior recruiting professionals asking increasingly sophisticated questions. Not whether to use GenAI, but HOW to use it. And sometimes how to do it without IT finding out. That's when it hit me.
I wasn't describing reality. I was excusing it.
The virus metaphor gave me (gave all of us) permission to treat GenAI adoption as something that happened to us. A force of nature. An inevitable contagion. And if it's inevitable, if it's biological, if it's just doing what viruses do, then whose fault is it really?
At your last executive meeting, did anyone ask: "What's our AI governance strategy?"
Or did they ask: "Which AI tools should we buy?"
One question is about control. The other is capitulation disguised as strategy.
Why the Virus Metaphor Seduced Me (And You)
Let me show you why I built it. Because the parallels aren't just surface-level, they're disturbingly precise:
Exponential Spread & Transmission
Viral replication maps perfectly to model proliferation. Each "infected" organization spawns more implementations. The speed outpaces institutional immune response: governance, policy, training all lag behind adoption curves. Network effects accelerate everything.
In recruiting, I watched this in real-time. 2023: early adopters experimenting quietly. 2024: rapid organizational spread, everyone suddenly had a "pilot program." 2025: GenAI assumed as baseline, stigma emerging for non-users. The adoption curve wasn't linear. It was exponential.
Mutation
Fine-tuning operates like viral mutation. Models adapt to selection pressure through RLHF, user feedback, organizational constraints. Variants emerge and proliferate, GPT spawns Claude spawns Gemini spawns local models. Each iteration adapts to new "host" functions.
In recruiting: resume screening became interview prep became assessment design became offer negotiation became employee communications. Each mutation colonized new territory. Each adaptation made the previous function harder to perform without AI assistance.
Host Dependency
Viruses need cells. GenAI needs infrastructure and human input. Both create permanent changes to host behavior. How we write, how we search, how we think. Once infected, returning to pre-AI workflows feels impossible.
My students describe this dependency viscerally. "I can't write a Boolean string from scratch anymore." "I forget how I used to do intake calls." "I start every search with Claude now." The cognitive offloading isn't a feature. It's the business model.
Carriers
Organizations deploy GenAI without understanding implications. Users spread prompts and workflows without quality control. "Shadow AI" operates as asymptomatic transmission… invisible, unmeasured, unmanaged.
Here's what Shadow AI actually looks like: A recruiter pastes candidate resumes into ChatGPT to generate scorecards. An HR business partner uses Claude to draft performance reviews with actual employee names and performance data. A talent acquisition leader builds an entire sourcing strategy using Gemini, never mentioning it in the project retrospective.
None of this appears in your AI audit. All of it is happening right now.
The metaphor worked because it explained what I was seeing. It gave me a framework, a vocabulary, a sense of understanding in the face of something moving faster than I could analyze it.
But comfort is exactly the problem.
What the Metaphor Was Hiding
A virus operates by blind, non-conscious replication driven solely by chemical necessity. GenAI is a product of complex human intent, data curation, algorithmic design, and massive infrastructure investment.
The analogy strips the discussion of its necessary complexity. It obscures the human and institutional actors involved. It lets us ignore the foundational drivers of AI adoption:
Economic: The relentless pursuit of efficiency, automation, market dominance, and profit by corporations and investors. AI is an economic engine, not a random biological agent. Every implementation serves a business case. Every adoption answers to a P&L.
Political: The nationalistic drive for technological supremacy, military applications, surveillance capabilities. This represents a fundamental shift in state power and competitive advantage. Nations aren't "catching" AI. They're weaponizing it.
Ethical: Algorithmic bias, labor displacement, intellectual property rights, the future of human agency. A virus has no ethics. A technology created and deployed by humans is saturated with them.
By focusing on "infection" and "replication," the viral metaphor sidesteps every question that actually matters. It's not just incomplete, it's actively misleading.
But here's the most dangerous consequence: the metaphor excuses human responsibility.
Casting AI as a force of nature, an autonomous viral agent, creates a powerful rhetorical shield that absolves designers, developers, corporate executives, and policymakers of accountability. If AI is merely "doing what viruses do," then human hands are clean.
This rhetoric ignores the explicit, conscious choices made in funding, designing, deploying, and regulating (or failing to regulate) these systems. It shifts blame from the creator and the institution to the creation itself, stalling meaningful efforts toward governance and responsible innovation.
Your students aren't infected, Shally. They're choosing. Your organization isn't a victim. It's complicit.
I had given executives, and myself, permission to stop thinking.
I'll grant myself, and the metaphor, one legitimate parallel: resource consumption. A virus replicates until it exhausts its hosts. GenAI scales until it exhausts its infrastructure: power, water, data, compute. There's a ceiling, and it's not theoretical. It's thermodynamic. This is actually one of the most reassuring constraints against runaway AI growth, and one of the least discussed. Every model that gets 'smarter' needs exponentially more resources. Mother Nature sets limits that Moore's Law can't override forever. Thanks mom.
The Framework Nobody Wants (But Everyone Needs)
If we're going to use biological metaphors, let's at least use one that acknowledges human agency. The proliferation of GenAI maps more accurately to a pandemic than a virus, complete with organizational response patterns we've all just lived through.
Your company is somewhere on this curve right now. The question is whether you're managing the transition or being managed by it.
Phase 1: Denial and Shadow AI (Incubation)
In the initial stages of a pandemic, there's widespread tendency to downplay the threat, dismiss early reports, assume it won't affect us. Sound familiar?
Many organizations initially ignored or prohibited GenAI use. This created fertile ground for Shadow AI: individual employees and teams seeking efficiency gains adopted unapproved tools outside IT oversight. Free ChatGPT accounts. Personal Claude subscriptions. Gemini for "research purposes."
This period of denial creates significant unmanaged risk: data leakage, intellectual property exposure, compliance violations, all undetected. Like asymptomatic spread, Shadow AI is most dangerous when you can't see it.
My 2,000+ handout requests aren't evidence of curiosity. They're evidence of adoption happening despite your policies, not because of them. When senior recruiting professionals are asking me how to implement AI browsers, they're not waiting for your enterprise license. They're working around your denial.
Phase 2: Panic and Reactionary Policy (Containment)
A major data breach or public-facing ethical error forces the organization out of denial. The response is reactionary policy, a knee-jerk attempt to regain control.
Outright bans on all external AI tools. The rapid, non-strategic purchase of a single large internal LLM solution because "we need to do something." Compliance policies drafted in haste by people who don't understand the technology. Training programs announced but not resourced.
These frantic containment efforts often stifle innovation and create bottlenecks without genuinely solving the governance challenge. You haven't managed the risk. You've just made it harder to see.
Phase 3: Adaptation and Governance (New Normal)
True maturity is reached when organizations move past bans and fear to establish comprehensive AI governance. Not control. Governance. The distinction matters.
This involves:
Strategic Integration: Defining where AI provides genuine value versus where it introduces unacceptable risk. Not "AI everywhere" or "AI nowhere"... AI where it makes strategic sense.
AI Hygiene: Implementing robust policies for data input, ethical use, and output validation. This isn't IT policy. This is operational doctrine.
AI Antibodies: Developing internal expertise, training the workforce to use approved tools effectively, building an adaptive policy framework that evolves with the technology.
Organizations must recognize that, like a pandemic, AI cannot simply be wished away. The only sustainable strategy is intelligent adaptation and robust, flexible governance to achieve productive symbiosis rather than an endless struggle for impossible control.
Shadow AI isn't asymptomatic spread. It's your workforce telling you your governance failed before you built it. They found the tools they needed because you didn't provide them. They're hiding usage because you created a policy environment that punishes efficiency.
AI governance isn't a vaccine. It's infrastructure you should have built 18 months ago.
Where is your organization on this curve?
And more importantly: who decided, and on what basis?
What This Looks Like in the Wild
Recruiting isn't just a use case for GenAI. It's the canary in the coal mine.
What happens in recruiting functions today previews what happens in sales tomorrow, in customer service next quarter, in financial analysis next year. The pattern is consistent. The timeline is compressing.
Transmission Pattern:
2023: Early adopters experiment with resume screening, job description generation. Quiet pilots. Informal tests. Plausible deniability.
2024: Rapid organizational spread. Everyone suddenly has an "AI initiative." Vendors proliferate. Integration becomes assumed. Interview prep tools, assessment design, sourcing automation, candidate engagement.
2025: GenAI as baseline assumption. Stigma emerging for non-users. "You're still doing that manually?" becomes the question that ends careers.
The adoption curve isn't just steep. It's vertical.
Mutation Through Functions:
Resume screening was the entry point. Low risk, high volume, easy to justify. Then it mutated.
Interview question generation. Candidate communication. Sourcing strategy development. Diversity analysis. Offer negotiation talking points. Performance review language. Succession planning frameworks.
Each iteration adapted to new host functions. Each deployment made the previous manual process feel obsolete. The technology didn't just assist the work, it redefined what the work was.
Here, the virus metaphor fractures beyond repair. A virus has been shaped by billions of years of evolution to exploit DNA replication at a molecular level. It mutates in response to the actual physical world. LLMs don't understand anything. They predict the next token. They mutate in response to human feedback about language, not reality. The virus adapts to biology. The AI adapts to our expectations of what a good answer sounds like. Those are not the same thing.
World models, the kind Yann LeCun is building, aim to change this by grounding AI in physics and biology rather than language patterns. If those succeed, the metaphor gets a lot more uncomfortable. But we're not there yet, and pretending LLMs already understand the world is the most dangerous form of anthropomorphization we're currently practicing.
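To make "they predict the next token" concrete, here is a toy sketch of sampling from a next-token distribution. The distribution is invented for illustration; a real LLM computes one over tens of thousands of tokens, but the principle is the same: it models language statistics, not the reality the language describes.

```python
import random

# Toy illustration, not a real model: hypothetical probabilities for the
# next token after "The candidate is". The model samples from language
# statistics; it has no access to the actual candidate.
next_token_probs = {
    "qualified": 0.45,
    "experienced": 0.30,
    "overqualified": 0.15,
    "unavailable": 0.10,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution,
    making less likely tokens more probable."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))       # plausible, not guaranteed
print(sample_next_token(next_token_probs, 2.0))  # flatter: riskier picks
```

Run it twice and you may get different answers. That variability is the point: the output is a draw from a distribution, not a verdict about the candidate.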
Symptoms:
Homogenization of candidates.
Everyone uses the same AI prep tools, generating identical "perfect" answers. Differentiation collapses.
Loss of employer branding distinctiveness.
Every job description sounds the same because they're all generated from similar prompts and training data.
Skill atrophy in human sourcers.
When was the last time you built a Boolean string without AI assistance?
Do you remember how?
Does it matter?
Dependency on tools we don't fully understand.
Can you audit the reasoning?
Can you explain the output?
Can you identify the failure modes?
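For contrast, here is the manual skill in question, a Boolean sourcing string, sketched as a tiny helper. The operators (AND, OR, NOT, quoted phrases) are standard Boolean search syntax; the function name and example terms are my own, purely illustrative.

```python
# Illustrative sketch of building a Boolean sourcing string by hand,
# the skill the questions above ask whether you still have.
def boolean_string(titles, required_skills, excluded=()):
    """Join title synonyms with OR, require each skill with AND,
    and negate exclusions with NOT. Multi-word terms get quoted."""
    def group(terms):
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        return "(" + " OR ".join(quoted) + ")"

    parts = [group(titles)] + [group([s]) for s in required_skills]
    query = " AND ".join(parts)
    for term in excluded:
        query += f" NOT {term}"
    return query

print(boolean_string(
    ["recruiter", "talent sourcer"],
    ["Boolean search", "LinkedIn"],
    excluded=["intern"],
))
```

Ten lines of logic. The question isn't whether an AI can do this faster; it's whether you can still check its work when it does.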
Systemic Risk:
The question that should terrify every executive:
When your people forget how to do the thing AI does for them, who validates the AI?
Skill atrophy isn't a side effect. It's the intended outcome of efficiency-driven adoption. Every task automated is expertise made optional. Every workflow optimized is judgment made redundant.
Until the AI is wrong. Then what?
The virus metaphor suggests this happened to recruiting. The reality is recruiting chose this. Individual practitioners making individual choices that aggregated into industry transformation. No conspiracy. No master plan. Just thousands of people solving immediate problems with available tools, none of them thinking about second-order effects.
And here's something the virus metaphor completely obscures: a virus operates deterministically. It either binds to a receptor or it doesn't. GenAI is probabilistic. Ask it to transfer $100 from savings to checking and there's a high probability it completes the task. Not certainty. Probability. Do you feel safe about your money with that probability?
Now scale that uncertainty across every recruiter scorecard, every candidate summary, every hiring recommendation your team generated last quarter. How many were 'probably' right? And how would you know the difference?
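One partial answer to "how would you know" is statistical. You can't re-verify every output, but you can audit a random sample and bound the batch error rate. A minimal sketch, where the 8% underlying error rate, batch size, and sample size are all invented for the demo:

```python
import math
import random

def audit_error_bound(outputs, is_wrong, sample_size, z=1.96):
    """Audit a random sample of outputs; return the observed error rate
    and an approximate 95% upper bound (normal approximation) for the
    whole batch."""
    sample = random.sample(outputs, sample_size)
    errors = sum(1 for item in sample if is_wrong(item))
    p = errors / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, min(1.0, p + margin)

# Simulated batch of AI-generated scorecards, ~8% wrong by assumption.
random.seed(1)
batch = [{"wrong": random.random() < 0.08} for _ in range(1200)]
rate, upper = audit_error_bound(batch, lambda s: s["wrong"], sample_size=150)
print(f"observed error rate {rate:.1%}, ~95% upper bound {upper:.1%}")
```

This doesn't make probabilistic outputs safe. It makes the uncertainty measurable, which is the minimum governance owes the people affected by those outputs.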
Which of your critical functions are currently experiencing skill degradation you can't measure?
How would you know?
The Reckoning
Let me make this real for you.
In your organization right now, there are people resisting GenAI adoption. They're the ones still doing things manually. Still insisting on "the old way." Still asking questions about accuracy, bias, validation.
Are they principled or obsolete? Conscientious objectors or anti-vaxxers?
Do you even know? Have you asked them? Or have you already decided based on whether they're accelerating your efficiency metrics?
Here's the parallel nobody wants to acknowledge: The anti-vaccine movement emerged partly from legitimate distrust of institutions that had failed to earn trust, partly from misunderstanding of science, partly from reasonable questions asked in unreasonable ways. The response was to mock, marginalize, and mandate.
What if your AI resisters are asking reasonable questions you're too busy to answer?
What if their resistance is the only thing preventing you from discovering your governance gaps when the stakes are catastrophic instead of manageable?
Your competitors are making their people dumber faster, more dependent on tools they don't control, more confident in outputs they can't validate. Is that the race you want to win?
The ethics of non-adoption aren't simple. Neither are the ethics of adoption. But we've made adoption the default and resistance the deviation, without ever establishing the criteria for making that determination.
Every metaphor is a decision about what to ignore.
Call it a virus, and you reach for containment, treatment, vaccines. The solutions write themselves from the metaphor.
Call it an invasive species, and you think in ecosystems: where does it fit, what does it displace, how do we manage coexistence?
Call it a pandemic, and you build response infrastructure, phases, governance, adaptation protocols.
Call it none of these, and you're forced to think harder. To acknowledge that the phenomenon born of human engineering, economic forces, and political decisions is a socio-technical system that demands socio-technical solutions.
The language we use shapes the solutions we imagine. More importantly, it shapes the responsibilities we accept.
What I'm Really Doing
I compared GenAI to a virus because I wanted an answer. What I needed was a better question.
Not "How does this spread?" but "Who decided this was happening to us instead of by us?"
Not "How do we contain it?" but "What are we optimizing for, and at what cost?"
Not "What's the cure?" but "What does healthy look like, and who gets to define it?"
The metaphor I built was elegant. It was also a lie. Not because the parallels aren't real, they are. But because those parallels let me pretend someone else was responsible for what comes next:
- They let executives externalize accountability.
- They let organizations treat strategy as procurement.
- They let all of us pretend that buying tools is the same as making decisions.
GenAI isn't happening to your organization. Your organization is doing GenAI, whether you're governing it or not. Your people are choosing it. Your processes are incorporating it. Your competitive position is being reshaped by it.
Your people are choosing it. Even behind your back.
The only question is whether you're leading that transformation or rationalizing your way out of responsibility for it.
The organizations that survive this won't be the ones with the best AI tools. They'll be the ones that stopped waiting for the technology to make their decisions for them.
They'll be the ones who recognized that every implementation is a choice, every deployment is a value statement, every efficiency gain is a trade-off that deserves interrogation.
They'll be the ones who asked better questions than I did when I built that virus metaphor.
What if the most dangerous aspect of GenAI isn't how it spreads, but how confidently we analogize away our responsibility to lead through genuine uncertainty?
What if we're not managing AI adoption, we're just giving ourselves permission to stop thinking?
And if you think governance solves this permanently, consider a scenario that keeps the really smart AI researchers awake: a system too intelligent for us to understand its reasoning, too integrated to shut down without collapsing the infrastructure we depend on. Not science fiction. A serious concern articulated by people building these systems right now. The virus metaphor didn't just let me stop thinking. It let me stop thinking small enough to feel comfortable. The real questions aren't about adoption curves and governance frameworks. They're about whether we're building something we can remain in relationship with, or something that outgrows the relationship entirely. And if it's the latter, then what?
I built a metaphor that let me stop thinking. Then I caught myself.
Your move.
Copyright ©2026 Shally Steckerl, Stecorp, Inc. All Rights Reserved.
