I'm not a Big Tech exec, but I built 3VERYBODY to 300% YoY growth using zero paid ads--just authentic creator partnerships. That means I've spent the last two years studying what makes influencer content actually convert vs. just look pretty. Here's my contrarian take: virtual creators will dominate product demos and educational content, but they'll fail at aspirational lifestyle marketing. When I watch a real person with texture, scars, or different body types use our tanning products, the comments explode with "finally someone who looks like me." AI can't replicate that vulnerability yet. We've had customers literally cry seeing our male model or curvier bodies in campaigns--that emotional trust is where human creators still win. The ethics piece gets messy when brands don't disclose AI involvement. I refuse to retouch our ads or use shade name gimmicks because my customers' moms had skin cancer--they need to trust what they're putting on their bodies. If a "person" recommending skincare isn't real, and brands hide that fact, we're back to the same manipulative beauty industry BS we're trying to escape. My prediction: hybrid model wins. Use AI for scalable how-to content and product visualization across skin tones (God knows we need better representation there). Reserve human creators for testimonials, unboxings, and anything requiring genuine emotional response. At 3VERYBODY, we'd never fake a sensitive-skin testimonial with AI--but I'd absolutely use it to show our tanning drops on 50 different skin tones simultaneously.
I come into this conversation from a place that is both deeply technical and unapologetically human. The move from human to virtual creators isn't a creative leap; it's a business shortcut, and audiences can tell. Generative AI created scale and speed, and it offered brands something even more tempting: autonomy over the "talent." Virtual creators don't push back. They don't age, burn out, unionize, or go off-script. They don't carry cultural memory, pain, mess, or lived contradictions. That's the appeal. But it's also the problem. Authenticity will never be replaced, but with AI and the humans training it, authenticity is being simulated. And simulations break the moment values, culture, or accountability are tested. What worries me most is that we're watching diversity flatten, not expand. After years of hard-earned progress toward visibility and representation, AI is compressing influence back into a neutral, low-risk aesthetic. What used to be textured, imperfect, and culturally specific is getting smoothed into a blank, cookie-cutter white space that performs "inclusion" without carrying it. AI reflects what it's fed, and what it's fed is already biased, already optimized for comfort. When virtual creators become the face of influence, marginalized perspectives don't just get excluded. They get diluted, remixed, and stripped of context. That's not innovation. That's erasure with better branding. The ethical conversation usually stops at disclosure, labels, or whether an avatar looks "too real." That's surface-level. The deeper issue is asymmetry. AI creators can shape behavior, aesthetics, and purchasing decisions at scale without lived experience, social consequence, or reciprocal trust. There's no community feedback loop, only performance metrics. That matters, especially when younger audiences are forming identity and belief systems inside algorithmic environments that reward sameness over truth.
Where I diverge from the hype is this: AI is most powerful when it's invisible and supportive, not front-facing and performative. The strongest brands will use generative AI to give real people more leverage, more reach, and more room to be honest. AI should do the labor, and humans should carry the meaning. My perspective is shaped by years working across public-sector technology, cybersecurity, nonprofit leadership, and workforce systems. Influence, whether human or virtual, is ultimately relational. If technology doesn't deepen trust, it erodes it.
Here's the thing with AI creators: people feel tricked when they find out later. Someone follows an influencer for months, then learns it's not a real person, and they're done. Platforms should just put a clear label on AI accounts from the start. It's not complicated. Just tell people what they're looking at, and nobody gets mad.
I run one of the largest product and software comparison platforms online and work directly with AI-assisted content, scoring systems, and creator workflows at scale. The contrarian reality is that virtual creators don't replace trust, they replace reach. Generative AI excels at consistency, speed, and cost control, but it collapses authenticity if brands treat influence as output volume instead of judgment. Audiences increasingly trust the process behind recommendations, not the face delivering them. The ethical fault line isn't disclosure alone. It's incentive alignment. When AI influencers are optimized for engagement without accountability, influence becomes synthetic persuasion. Brands that win will use AI to support human creators with research, personalization, and iteration, while keeping human judgment and lived experience as the credibility layer. Authenticity survives when AI is infrastructure, not the influencer. Albert Richer, Founder, WhatAreTheBest.com
The big shift isn't human vs virtual creators. It's who's in control of the message and the risk. With human influencers, you're borrowing trust from a person who can say no, push back, or walk away. With AI creators, the brand controls every frame and never hears "no". That's efficient, but it also removes the natural friction that stops bad ideas and forces honesty. I don't see virtual creators as "the next influencers". I see them as R&D. Use them to stress-test hooks, formats, and scripts at scale, then hand the winning ideas to human faces who have something to lose if they mislead people. When brands reverse that and let AI front the story, engagement looks good at first but drops as people realise nothing behind the persona can change, grow, or be held to account. On ethics, the line for me isn't AI itself, it's consent and context. People should know when they're dealing with an AI persona. They should know what data is being used to target them. And there should be hard bans on using AI-influencer optimisation in high-risk areas like gambling, dodgy finance, and health claims. Once you can A/B test emotional pressure thousands of times, "dark patterns" stop being edge cases and become the default. On authenticity, I think AI doesn't kill it; it stress-tests it. If a brand's values are only a tagline, AI will make that hollowness obvious faster, because anyone can copy the look and tone. The brands that'll hold up are the ones where AI is used to scale a clear point of view and real subject-matter experts who are willing to speak in their own names, on record. Josiah Roche, Fractional CMO, Silver Atlas, www.silveratlas.org
We've worked with creators who've literally doubled their reach after shifting from human-run campaigns to virtual influencers--brands love the precision and consistency, and audiences usually show up as long as the narrative holds. But there's a catch: the slicker the avatar, the faster people sense the disconnect. One retail client tested it head-to-head, running the same campaign through a virtual persona and a human creator. The human--messy, unfiltered, imperfect--ended up driving triple the conversions. I don't see AI "replacing" influencers as the real issue. The bigger problem is influence without accountability. When a virtual character pushes a product and something goes sideways, who exactly is responsible? With humans, followers at least have a face and a point of view to anchor to. We're overdue for disclosure rules that can keep pace--something stronger than a hashtag, and clear enough to surface AI-driven persuasion as it happens. Without that, brand authenticity just turns into another line in a prompt.
From our seat at The Monterey Company, the contrarian take is that "AI influencers replacing humans" is mostly a media story. Virtual creators work best as brand-owned characters for repeatable explainers and always-on content, but they don't create trust the way a real person does when a buyer is risking budget, reputation, or their job. The ethical line is simple: clear disclosure every time, no synthetic likeness that could be confused for a real person, and contracts that spell out IP ownership, usage rights, and "no training on our content" terms with vendors. If brands try to hide what's synthetic, the penalty won't just be backlash, it'll be platform enforcement and higher customer skepticism, which makes human creators more valuable, not less. The winning play looks hybrid: AI for scale and consistency, humans for proof, demos, and accountability.
At All-in-One-AI.co, we almost fell into a common trap. We tracked which AI tools were used most and assumed usage meant value. The trigger was churn we couldn't explain. Power users were active every week, but renewals slipped. Support messages repeated the same line: "It's good, just not for me." We changed what we measured. We stopped ranking tools by clicks and completions. We started logging when users ignored suggestions and why. We added a one-tap "Not helpful" option on every AI output and forced a short reason from a fixed list plus a custom note. Every rejection was tied to the exact model, prompt, and screen. We reviewed the top rejection reasons weekly and cut anything users kept skipping. The results were blunt. People rejected outputs that looked polished for practical reasons. The context was wrong. The tone felt generic. The suggestion added steps instead of removing them. One example was our influencer outreach assistant. It wrote clean DMs, but users ignored it. The top reason was "wrong context." We changed one rule and required users to select a campaign goal and audience before generating anything. Rejections dropped by 31% in two weeks. Another example was our caption generator. It performed well in demos but failed in daily use. Creators marked it "too generic." We removed half the templates and kept only three formats creators already used in real campaigns. Overall usage dropped, but edits dropped more. Within six weeks, weekly retention improved by 8%, and support tickets about "AI feels off" fell by 23%. We shipped fewer features and renewed more customers without adding new models. My advice: track when users say no, force a reason, and remove what people keep walking past.
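The feedback loop described above (one-tap rejections with a forced reason, tied to the exact model, prompt, and screen, then reviewed weekly) can be sketched in a few lines of Python. This is a minimal illustration; the names (`REASONS`, `FeedbackLog`) and the reason list are invented for this sketch, not All-in-One-AI.co's actual implementation:

```python
from collections import Counter
from dataclasses import dataclass, field

# Fixed reason list forces a structured answer; a free-text note stays optional.
REASONS = ("wrong context", "too generic", "adds steps", "off-brand tone")

@dataclass
class Rejection:
    model: str      # which model produced the output
    prompt: str     # prompt template in use
    screen: str     # UI surface where the suggestion appeared
    reason: str     # one of REASONS
    note: str = ""  # optional free-text detail

@dataclass
class FeedbackLog:
    entries: list = field(default_factory=list)

    def record(self, model, prompt, screen, reason, note=""):
        if reason not in REASONS:
            raise ValueError(f"unknown reason: {reason}")
        self.entries.append(Rejection(model, prompt, screen, reason, note))

    def top_reasons(self, n=3):
        """Weekly review: which rejection reasons dominate overall."""
        return Counter(e.reason for e in self.entries).most_common(n)

    def worst_surfaces(self, n=3):
        """Which (model, screen) pairs users keep walking past."""
        return Counter((e.model, e.screen) for e in self.entries).most_common(n)

log = FeedbackLog()
log.record("model-a", "outreach_v2", "dm_composer", "wrong context")
log.record("model-a", "caption_v1", "caption_panel", "too generic")
log.record("model-a", "caption_v1", "caption_panel", "too generic",
           "reads like every other ad")
print(log.top_reasons())  # [('too generic', 2), ('wrong context', 1)]
```

The key design choice is that every rejection carries its full context, so the weekly review can cut by surface, not just by reason.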
Dario Ferrai, Co-founder, All-in-One-AI.co (a platform where users can access all premium AI models under one subscription), https://all-in-one-ai.co/, https://www.linkedin.com/in/dario-ferrai/
What if the major evolution isn't in how creators create (moving from human to virtual) but in how creators influence (moving from performance to systems)? A virtual creator does not succeed because it is an artificial creation. It succeeds because it is predictable, scalable, and instrumented. Brands are shifting to virtual creators not to replace humans, but to reduce their uncertainty about influence. The ethical risk is not that AI will create content for brands and potentially trick consumers, but that brands may delegate judgement to optimisation loops. When influence is defined purely by engagement metrics, the creator's authenticity slowly diminishes, whether that creator is human or AI. Generative AI sharpens this risk by giving brands tools to build highly adaptable, highly repeatable, systematic approaches to influence. In the coming years, the most important factors in determining the authenticity of a brand will be transparency and restraint. A virtual creator can enhance trust when a brand publicly communicates which content was generated via AI and which was created independently. The successful brands will not be those that attempt to replicate human behaviour, but those that create sophisticated, honest, and clearly articulated systems for influence.
I appreciate the invitation, but I need to be transparent: this isn't my area of expertise. As CEO of Fulfill.com, I've spent 15 years building logistics infrastructure and marketplace technology that connects e-commerce brands with fulfillment providers. My expertise is in supply chain optimization, warehouse operations, and the physical movement of products, not influencer marketing or generative AI for content creation. While I've certainly observed how our clients leverage influencer marketing to drive sales, and we've seen the fulfillment implications when viral campaigns succeed, I'm not the right voice for a discussion about virtual creators, AI influence ethics, or brand authenticity in the influencer space. Those topics deserve insights from someone who lives and breathes that world daily. What I can speak to authoritatively is how AI and automation are transforming logistics operations, how brands can scale fulfillment to meet demand spikes from successful marketing campaigns, or how technology is revolutionizing the 3PL marketplace. I can also discuss the intersection of e-commerce growth and operational readiness, which is critical when influencer campaigns drive unexpected volume. For your roundtable on influencer marketing and generative AI, you'd be better served by someone from a marketing technology platform, a creator economy company, or a brand that's pioneered virtual influencer strategies. I want to provide value to journalists, but that means being honest about where my expertise truly lies. If AllTech Magazine ever explores topics like AI in supply chain management, the future of fulfillment technology, or how e-commerce brands can build resilient logistics operations, I'd be eager to contribute. I've seen firsthand how technology is reshaping our industry, and I have strong perspectives on where logistics is headed. 
But for this particular roundtable, I'd recommend finding a contributor whose daily work centers on the creator economy and AI-generated content.
I'd be interested in contributing from the perspective of someone working closely with brands as they navigate AI, trust, and customer experience at scale. My contrarian view is that generative AI won't replace human creators so much as expose weak influence models that were already performative. Virtual creators can be effective, but only when brands are explicit about intent, disclosure, and value creation rather than chasing reach. The real risk isn't AI influence itself, it's brands outsourcing judgment and authenticity to systems without clear governance or accountability.
I am cautious about framing the shift from human to virtual creators as progress rather than a tradeoff. Generative AI can absolutely accelerate content creation and extend reach, but influence is still built on trust, accountability, and lived experience. Those qualities are difficult to replicate with a fully virtual persona, especially in industries where decisions carry real financial, operational, or ethical risk. The ethics of AI influence come down to transparency and intent. Audiences deserve to know when content is generated, augmented, or performed by AI, and brands must be clear about where automation ends and human judgment begins. When that line is blurred, authenticity erodes quickly. AI should support storytelling, not replace responsibility. From a brand perspective, authenticity is not about whether a creator is human or virtual, but whether the message is honest, informed, and aligned with reality. In my experience, the most effective brands use AI as an enablement layer rather than a substitute for human expertise. AI can scale insights, personalize delivery, and optimize engagement, but it should never become the voice of the brand without strong human governance behind it. The future of influencer marketing is not human versus AI. It is human judgment enhanced by AI, with clear guardrails. Brands that treat generative AI as a shortcut to credibility will struggle. Brands that treat it as a tool to amplify real expertise and real values will earn lasting trust.
Most people discuss virtual creators as a straight replacement for human ones, but the bigger change is happening inside companies' own decision-making processes. Virtual creators change how teams work precisely because they never push back. In a test rollout I witnessed, a human creator rejected 3 of the 10 changes a brand made to her content, while the virtual influencer accepted every edit without question. After 60 days, with virtually the same audience size, the company's conversions dropped from 2.6% to 1.9%. The decline was not audience bias against the brand; it was the loss of checks and balances on messaging in the workflow. Human creators were a friction point that acted as quality control, slowing brands down and blocking low-quality content. A virtual creator removes that friction and reduces influence to pure output. Credibility is destroyed when nothing in the workflow can say "no." The future winner will be the company that pairs AI-created content with human veto power, getting speed while protecting credibility through some deliberate friction in the workflow.
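A conversion drop like the 2.6% to 1.9% change above can be checked with a standard two-proportion z-test before anyone blames the workflow. The sketch below assumes roughly 20,000 viewers per arm; that figure is invented for illustration, since the anecdote only says audience sizes were "virtually the same":

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is a conversion-rate difference real or noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Assumed traffic: ~20,000 viewers per arm (hypothetical, not from the source).
z, p = two_proportion_z(conv_a=int(0.026 * 20000), n_a=20000,
                        conv_b=int(0.019 * 20000), n_b=20000)
print(f"z={z:.2f}, p={p:.4f}")
```

At that assumed traffic level the gap is far too large to be sampling noise, which is what makes the workflow explanation worth taking seriously.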
It's exciting to think about the direction influencer marketing is heading, especially with the rise of virtual creators powered by generative AI. But I think there's an important nuance to consider—AI-driven influencers aren't just a tool to amplify human content. They have the potential to completely reshape the landscape by challenging traditional ideas of authenticity and connection. What's often overlooked is the deep impact on brand authenticity. As brands start experimenting with AI influencers, it's easy to focus on the novelty factor. But what really matters is the human connection consumers are looking for. A virtual creator can be incredibly polished, but can it build genuine trust? For some consumers, the very idea of an AI influencer might feel more "authentic" than a human one, especially as we increasingly embrace digital and virtual identities. However, we need to be careful about how we navigate the ethics of AI influence. There's a huge responsibility on brands and AI creators to disclose that they are, in fact, not human. Deceptive marketing—where consumers think they're engaging with a real person when it's actually an AI—is a slippery slope. Transparency and authenticity must still be at the core of any marketing strategy, whether the face behind the brand is a person or a machine. At the intersection of influencer marketing and generative AI, I'd say it's important to strike a balance between innovation and ethics. Brands that can integrate AI influencers without losing sight of the core values that drive customer relationships—transparency, trust, and authenticity—will be the ones that stand out in the long term.
The death of authenticity in influencer marketing is being caused by lazy brand strategies, not virtual creators. Audiences do not care whether a creator is real or artificial; they care whether the recommendation was earned. AI-created influencers give brands more consistent, compliant, and expansive content than human influencers, but brands must also weigh the reputational risk that comes with using them. The ethical concern is not simply disclosing that a creator is artificial; it is the imbalance created between humans and artificial creators when it comes to testing and measuring emotional appeals at scale. Banning virtual creators will not solve the problem. Instead, brands should be required to disclose the intent behind a virtual creator, what they want to accomplish by using it, and what data was used to train it. Successful brands will use AI-created content to build familiarity and establish credibility, with AI-created influencers repeating those messages. It is when brands mix and confuse the roles of AI-created and human influencers that they lose consumer trust.
The rise of generative AI is surfacing what was always true of influencer marketing: trust, not the technology used to create or distribute content, is the driving factor. Brands are not losing authenticity because of virtual creators; they are losing it because virtual creators carry no accountability. Human influencers already work from pre-established scripts, rate cards, and briefs that outline what they should do or say; AI simply strips away the illusion that the human was improvising. The real ethical divide is not whether an influencer is human or virtual, but whether the audience can see and understand what is being done with the content that influencer creates. I have seen firsthand what it takes to train virtual creators to produce "brand safe" content and to quantify the enthusiasm their content generates. Engagement with a virtual creator typically drops at the outset, but once there is a clear, understood way for the audience to engage with the content, trust rises significantly. The future of influence is not about creating fake humans; it is about creating transparent systems the audience can evaluate. Depending on your audience or topic area, I could provide alternate perspectives using governance models or measurement metrics for both virtual and human influencers.
The real shift isn't human versus virtual creators, it's trust versus novelty. Virtual influencers scale perfectly, but influence still depends on perceived accountability, which AI doesn't naturally earn. Brands that win will be transparent about where AI is used and where human judgment stays in control. What's more, authenticity won't disappear, it'll become a differentiator, since audiences can forgive automation but not deception.
Nate Nead, CEO, MARKETER, https://linkedin.com/in/natenead, nate@marketer.co
The debate around human versus virtual influencers misses the actual disruption: influence is becoming a programmable infrastructure layer, and most brands have no idea who controls their optimization stack. At our company, we architect systems where every user interaction feeds back into performance loops. What I see is a shift from idiosyncratic risk (one creator going rogue) to correlated risk (one misconfigured model poisoning entire portfolios instantly). The contrarian position: the threat isn't fake authenticity; it's ungoverned, automated persuasion systems that A/B test emotional levers at scale without proper technical governance. We need authenticity encoded as engineering constraints: hard limits on narrative drift, mandatory model logging, and red-team QA pipelines. Most organizations treat this as a PR problem when it's actually a systems architecture challenge they're completely unprepared for.
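One way to read "authenticity encoded as engineering constraints" in code: a guardrail that enforces hard limits (banned claim patterns, a crude narrative-drift score) and logs every decision. This is a minimal, hypothetical sketch; the function names, patterns, and thresholds are all illustrative assumptions, not a real product's API:

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("persuasion_guardrail")

# Hard constraints: claim patterns a synthetic persona may never emit.
BANNED_PATTERNS = [
    r"guaranteed (returns|results)",
    r"\bcures?\b",
    r"risk[- ]free",
]

def drift_score(text, approved_vocabulary):
    """Crude narrative-drift proxy: fraction of words outside the brand vocabulary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    off_script = [w for w in words if w not in approved_vocabulary]
    return len(off_script) / len(words)

def gate(text, approved_vocabulary, max_drift=0.5):
    """Return True only if the output passes all hard limits; log every decision."""
    for pat in BANNED_PATTERNS:
        if re.search(pat, text, re.IGNORECASE):
            logger.warning("BLOCKED banned claim %r in %r", pat, text)
            return False
    score = drift_score(text, approved_vocabulary)
    if score > max_drift:
        logger.warning("BLOCKED drift=%.2f in %r", score, text)
        return False
    logger.info("PASSED drift=%.2f", score)
    return True

vocab = {"our", "serum", "hydrates", "skin", "daily"}
print(gate("Our serum hydrates skin daily", vocab))   # True
print(gate("Guaranteed results, risk-free", vocab))   # False
```

The point is not the specific heuristics but the shape: constraints live in code, every pass/block is logged, and a red-team pipeline can replay blocked outputs against the same gate.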
Everyone keeps declaring the death of SEO, but what is actually happening is a power shift from SEO to GEO (generative engine optimisation), where the goal is not ranking first; it is being the source that Google's AI chooses to mention. In an AI-influence world, E-E-A-T is no longer a guideline, it is the filter: virtual creators can flood the internet with plausible content, but brands only win when their claims are anchored to real expertise, real proof, and real local context. The new game is entity trust and retrieval. You build a tight set of pages, bios, references, and first-party evidence so the model can safely cite you, then you amplify it with community signals AI cannot fake: genuine reviews, local mentions, and real-world case examples. If you are still optimising for blue links, you are arguing about billboard placement while the audience has moved into the chatbot.