At this point, LLMs can read and repeat a person's speech pattern or "vibe," but the actual process of decision-making a person uses is not something we can recreate the way we recreate a voice. We can clone the sound of a person, but not the "why" behind their choices: the reasons you are who you are rest on nonlinear, emotional, subconscious triggers. The true innovation is in multimodal RAG (Retrieval-Augmented Generation) and agentic workflows. By creating an AI digital twin of a person from their lifetime of accumulated data (emails, videos, and voice notes), we get an AI behavior model grounded in real-world history instead of a hallucinated personality. We still face a huge issue with the "digital ghost." Cambridge University research already indicates that such "deadbots" can be psychologically damaging to family members of the deceased. There are many ethical issues around data ownership after death and around how these simulations might disrupt the grieving process. We tend to consider only the technical question of whether we "can" create something, without considering the ethical question of whether we "should." Preserving a legacy is a compelling and powerful reason to proceed; however, if we do not institute strict governance and ethical boundaries, we will likely create digital caricatures that ultimately warp our memories of loved ones.
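To make the RAG idea concrete, here is a minimal sketch, assuming a toy personal archive and a bag-of-words scorer standing in for real embeddings; it shows only the retrieval step that grounds a digital twin's answers in documented history rather than a hallucinated one.

```python
# Minimal sketch of the retrieval step in a RAG-style "digital twin":
# before the model answers, the most relevant real artifacts (emails,
# voice-note transcripts) are pulled so replies are grounded in history.
# The corpus and scoring function are toy placeholders, not a pipeline.
from collections import Counter
import math

archive = [
    {"source": "email",      "text": "I turned down the Denver job because family comes first."},
    {"source": "voice_note", "text": "Remind me: never sign a contract without sleeping on it."},
    {"source": "email",      "text": "We always spent August at the lake house with the kids."},
]

def score(query: str, doc: str) -> float:
    """Cosine similarity over bag-of-words counts -- a stand-in for embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Return the k archive items most relevant to the query."""
    return sorted(archive, key=lambda a: score(query, a["text"]), reverse=True)[:k]

# The retrieved snippets would be placed in the model's context window,
# so the twin answers from documented history instead of inventing one.
for hit in retrieve("why did you turn down the job offer"):
    print(hit["source"], "->", hit["text"])
```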
From my experience working with AI systems, speech and decision-making patterns can be modeled with current technology. Capturing the entire nuance of a human personality, however, is an entirely different encoding challenge. Currently, AI captures outputs rather than internalized experiences; what we are creating are encoded behavioral mirrors, not replicated consciousness. The way to keep developing the backend of these systems is through multimodal data training techniques spanning _voice recordings, longitudinal authoring, video files, and biometric signals_. Combining these four modalities with persistent memory architecture in LLMs will create much more robust continuity over time. Ethics remains one of the larger issues around development and long-term usage. Three specific questions need to be asked about digital identity after death: _Who owns a digital identity when the human behind it is deceased? Who approves modifications or adaptations to a digital identity? How do you prevent a digital identity from being manipulated or completely reconstructed?_ Technology will keep expanding; governance alone will determine whether it is developed to benefit the user or not.
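As an illustration of the persistent-memory idea, here is a minimal sketch assuming a simple JSONL log as the store; the file name, schema, and modality tags are invented for the example and do not reflect any particular product's architecture.

```python
# Minimal sketch of a persistent memory layer: each artifact is logged
# with its modality and timestamp, so an LLM session can be re-primed
# with the same accumulated record every time. Path and schema are
# illustrative assumptions only.
import json
import time
from pathlib import Path

MEMORY_LOG = Path("persona_memory.jsonl")  # hypothetical on-disk store

def remember(modality: str, content: str) -> None:
    """Append one record; modality is 'voice', 'text', 'video', or 'biometric'."""
    record = {"ts": time.time(), "modality": modality, "content": content}
    with MEMORY_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

def recall(modality: str | None = None) -> list[dict]:
    """Reload the full record so continuity survives across sessions."""
    if not MEMORY_LOG.exists():
        return []
    records = [json.loads(line) for line in MEMORY_LOG.read_text().splitlines()]
    return [r for r in records if modality is None or r["modality"] == modality]

remember("voice", "Prefers to decide big purchases after a walk.")
remember("text", "Signed emails to close friends with 'Onward.'")
print(len(recall()), "records;", len(recall("voice")), "voice records")
```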
Here's what I've learned, from Google to running AthenaHQ. AI can analyze how someone talks and what they've done, but it misses the emotional side and the long view. We built these AI personas and they were fine for answering simple questions, but they fell apart on anything abstract. Better memory systems might fix that eventually, but not yet. My advice is to be realistic about what's possible now and to be completely upfront about privacy.
Digitizing someone's entire personality with AI? We're not there yet. Too much of our behavior depends on subtle situations and tiny experiences that you can't really code into a machine. At my company CLDY, every time we try to automate how people talk, we run into weird scenarios we never saw coming. Future tech might help, but we have to sort out the huge privacy and ethics mess first.
Full digital preservation isn't here yet, but we're getting better at capturing how people make decisions using AI and conversation data. At Brex, we tried using language models to guess what users wanted for SEO, but the little details and real context were hard to get right. When that happened, we switched to structured data and kept training the models. If you're working on digital preservation, focus on getting permission and being upfront about what you're doing. Keep humans in the loop, because the ethics get messier as the tech improves.
I once tried using AI to save people's career stories. We built a tool, but the AI just couldn't grasp why someone left a good job for a risky idea. It saw the data but not the story. Future AI might get better at that, but privacy will always be a huge challenge. You just have to stay open and be ready to change course.
I've built AI products that create visual stories, and I've seen the same problem over and over. The AI gets the style right but misses the person entirely. When we tried personal video edits, the footage looked good but lacked any real feeling. Better language models might help. I think the way forward is letting people contribute directly to their own profiles and being upfront about how everything works.
Current AI can't capture someone's entire personality, but I'm hopeful after seeing how language models are improving. At Roy Digital, we build custom assistants and learned that storing simple habits and decision logic works much better than trying to encode complex emotions. When we got clear about what data we used and gave people control, the privacy concerns basically disappeared. My advice? Be upfront about data collection and give users real control.
Here's a weird thing I've noticed in my health-tech work. Trying to digitally preserve a person's mind feels a lot like collecting health data. We can grab patterns and preferences, sure, but the actual messiness of how someone thinks is so hard to mimic. AI is getting scarily good at personalizing from huge data sets, though. So honestly, we should get serious about ethics and consent now, before these problems snowball on us.
My name is Nick Mikhalenkov, SEO manager at Nine Peaks Media. I have spent the past 10 years working with AI-driven content and behavioural modelling tools for both SaaS and consumer technology applications, witnessing firsthand how far generative systems can go and where they remain limited. From what I have seen to date, completely encoding your memory, personality, and decision-making into an AI is not practical today. A model can approximate your tone and linguistic style from large amounts of your source material, but memory is multi-dimensional, contextually dependent, and inconsistent on many levels. Even when trained on thousands of data points from one person, LLMs generate tokens using probabilistic functions rather than replicating true cognition, and that gap is significant. Closing it will require advances in persistent personal data graphs, multimodal training (audio, video, and biometrics) and long-term memory architecture; the greater challenge, however, is ethical: consent, ownership of data after death, and the potential for emotional manipulation are tangible issues that must be addressed. The fact that 27% of Gen Z want to digitally preserve the memories and likenesses of family members emphasises the need for industry to put as much focus on creating and enforcing governance frameworks as on building the models themselves.
In my 20 years running a data-driven transportation business I have learned how extremely difficult it is to model structured decision making; trying to account for human nuance is virtually impossible. While machines can replicate behavioural patterns based on data, they cannot replicate an individual's conscious thought processes. Present-day artificial intelligence systems can simulate tone and likely future responses when you provide them with years' worth of previous email correspondence, recorded phone calls, and documented business decisions. In other words, they imitate an individual's mannerisms but do not preserve the original person. Eventually, continuous life-logging and the incorporation of multiple AI modalities such as voice, audio, video, biometric data and contextual metadata will enable far more accurate predictions of behaviour. As more decision trees are mapped out and reinforcement learning is applied, the resulting behavioural simulations become more realistic. The biggest challenge to the successful simulative modelling of humans will be the governance of the models. Who will own the digital identity that is created? How will updates to this digital identity be managed? What are the repercussions if a human's preserved persona is used to attempt to influence a financial or legal transaction? While technological advancements continue at an accelerated pace, ethical frameworks are lagging far behind. It is essential for both organisations and families to establish clear consent protocols regarding the collection, storage and use of data long before any digital preservation systems are implemented.
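To show what "mapping decision trees" from documented business decisions can look like, here is a minimal sketch assuming scikit-learn is available; the features, deal history, and labels are invented for illustration, and a real system would train on years of recorded choices and outcomes.

```python
# Minimal sketch: fit a decision tree to past choices so it imitates the
# documented decision pattern. All data here is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features per past decision: [deal_size_usd_k, known_counterparty, days_to_decide]
X = [[50, 1, 2], [500, 0, 1], [120, 1, 7], [800, 1, 14], [300, 0, 3]]
y = ["accept", "reject", "accept", "accept", "reject"]  # what the person chose

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The tree now reproduces the documented pattern: cautious with unknown
# counterparties, comfortable with larger deals from known partners.
print(model.predict([[400, 0, 2]]))  # "reject" under this toy history
```

The point is the limitation the paragraph describes: the tree replicates the pattern in the data, not the reasoning that produced it.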
It is currently possible to capture the way a person speaks and how they think about everyday decisions. It is not, however, currently possible to capture the complete nuance of how an individual makes decisions based on past experience or emotional state. Artificial intelligence can use data from e-mails, video, recorded voice, and written documents to identify patterns of behaviour and build models from those patterns. It cannot authentically recreate a person's past experiences or their evolving judgement. What can presently be created is a behavioural simulation of a person, not an entire digital human being. An example would be using AI to build a language model that mirrors someone's style of speech for the purpose of responding to e-mails; this has already been done, with limited success (a rough sketch of the approach follows below). Future advancements in long-term memory systems, multimodal learning from video and voice data, and more advanced personalization features will greatly enhance the ability to create realistic digital copies of individuals, but they will never be fully authentic reproductions. The remaining challenges are ethical and practical. Who will own the data? Who has the authority to change it? What if the AI version says something the real person never would have said? And, in a business context, what responsibility does AI bear for decisions made through a digital person? In summary, we are heading toward high-fidelity digital copies of people. We are not approaching digital consciousness; that is a very significant difference.
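The email style-mirroring mentioned above is commonly done with few-shot prompting; the following is a minimal sketch under that assumption, with invented example replies and no specific vendor API implied.

```python
# Minimal sketch of email style mimicry via few-shot prompting: past replies
# are shown to the model so it imitates documented phrasing. Examples are
# invented; the prompt could be sent to any chat-completion model.
PAST_REPLIES = [
    ("Can we push the meeting?", "No problem at all -- Thursday suits me better anyway. Cheers, J."),
    ("Did you see the report?",  "Just read it. Strong work on section 2; let's tighten the intro. Cheers, J."),
]

def build_style_prompt(incoming_email: str) -> str:
    """Assemble a few-shot prompt so the model imitates documented phrasing."""
    shots = "\n\n".join(f"Email: {q}\nReply: {a}" for q, a in PAST_REPLIES)
    return (
        "Reply to the final email in the same voice as the examples.\n\n"
        f"{shots}\n\nEmail: {incoming_email}\nReply:"
    )

print(build_style_prompt("Are you free for a call tomorrow?"))
# The output mirrors surface style (sign-off, cadence) but not the sender's
# actual judgment -- exactly the "limited success" described above.
```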
Creating an AI representation that contains a person's complete personality and memory will be a far more complicated task than most people expect. With enough behavioral data you can map speech patterns, writing styles and decision-making preferences. Memory, however, is not just a collection of stored facts. It also consists of the emotional context in which memories were formed and the contradictions between how a memory was originally formed and how it has changed over time. AI can simulate patterns based on historical data, but it cannot reproduce the lived experience of forming a memory. What may help is building multi-modal models over long periods by stitching together many different types of communication (e.g. audio and video recordings, typed communications, biometric data and decision history). With access to a large amount of information (such as emails, voice notes, SMS and social media posts), we could potentially create a model for approximating an individual's thoughts in particular situations. The key word is approximate. In any case, we would be creating statistics-based replicas of a person, not recreating the real person. The larger issue surrounding this goal is ethical. Who will own the digital version of a person after their death? Can it be modified? Can it be profited from? If family members disagree on how to represent a loved one after death, how would we manage those disagreements? There is also the potential for people to be psychologically affected by interactions with digital representations of deceased family members in ways that have unexpected consequences. What I see happening before we achieve digital immortality is the creation of limited virtual representations of the deceased, intended to preserve their memories and legacy. I do not see digital representations of deceased persons functioning as fully autonomous decision-making entities for many years.
The interest surrounding digital preservation demonstrates a fundamental aspect of our humanity. The desire for immortality is not as critical as the desire for continuity. The drive to have one's voice, story, and perspective last beyond one's passing is deep-seated in our species. However, capturing the intricacies of a person's memory and their associated patterns of decision-making is far more complex than today's headlines suggest. Today's AI systems are able to emulate speech characteristics, preferences, and behaviours because of the abundance of data on individuals. As someone who has studied and researched memory, I know that it is not only a record of experiences; memory also comes from feelings, context, contradictions, and growth, and it is continuously changing. No model currently available can capture that continuously evolving nature. Much thought needs to be given to who will own the digital representation after death. How will consent be validated? Will a digital representation of an individual be able to act autonomously? How will the family of the deceased respond to outputs from the digital representation, and how will they reconcile the emotional content of the output with its statistical basis? There is a wealth of evidence that trust erodes quickly in healthcare and regulatory environments when systems appear more human than they are. Digital preservation is likely to become increasingly sophisticated, and it may blur the line between simulation and identity. The technology can provide an approximate representation of a set of patterns, but not an exact replica of an individual's lived experience. The challenge will be to design systems that clearly articulate the distinction between identity and simulation.
We're not anywhere close to "uploading a person" in the sci-fi sense. What we can do today is build a pretty convincing imitation of someone's voice, writing style, favorite phrases, opinions, and the kinds of choices they tend to make. But that's still a pattern-matcher trained on artifacts, not a full copy of a mind, memories, or inner life. What could make it feel more real is better long-term memory in AI, multimodal training (voice, video, photos, messages), and personal knowledge graphs that keep the "digital you" consistent over time. The closer we get, the bigger the landmines: consent, ownership, and abuse. Who controls your replica after you're gone, and who gets to tweak it, sell it, or weaponize it? And emotionally, it can get messy fast, because a "grief bot" can keep someone stuck in a loop instead of helping them heal. So yeah, it's realistic to preserve a vibe and a voice. It's not realistic to preserve the whole person. Not yet.
As the Head of IP and Data Protection at Municorn, I work at the intersection of AI, technology, intellectual property, and legal compliance, ensuring that innovations such as AI features, data architecture, and product design are legitimate, ethical, and compliant with data protection standards. The growing interest in digital preservation, including the concept of preserving a digital identity or recreating relatives using AI, raises multiple technological and ethical concerns. From the tech perspective, such systems require extensive personal data to function, including voice recordings, images, videos, and texts, as well as behavioral patterns. It is therefore essential to protect that data from breaches, unauthorized use, and manipulation. There is also a risk of inaccurate AI-generated identities distorting, or evolving beyond, the real individuals they represent, which sits between the technical and ethical concerns. On the ethical side, I think consent is the primary and most sensitive challenge. A clearly defined and strictly enforced consent mechanism should be developed, especially for representations of the deceased, determining who has the authority to use them, how long an avatar can exist, and whether it respects the dignity, values, and autonomy of the represented individual. As AI technologies evolve and become more advanced at simulating human identity, governance frameworks should develop accordingly, ensuring that innovation does not compromise fundamental human rights.
We can capture patterns in speech, preferences, and routines, but full human nuance remains out of reach. Memory is reconstructive, and datasets reflect stories more than truth. Personality shifts across contexts, so a model risks becoming a confident caricature. Decision-making also depends on biology, relationships, and chance, which we cannot encode cleanly. Progress will come from multimodal lifelogs, consent-first identity vaults, and models that cite sources for each claim. Retrieval systems with timestamps can separate what was said from what the model infers. The hardest challenges are ownership, posthumous rights, and preventing deepfake misuse at scale. We should treat digital selves like regulated assets, with audits, access controls, and a clear expiration policy.
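One way to implement the timestamped retrieval described above is to attach explicit provenance to every surfaced claim; the following is a minimal sketch under that assumption, with invented data and a deliberately simple schema.

```python
# Minimal sketch of timestamped retrieval: every claim the system surfaces
# carries its source and date, so "what was said" stays distinguishable
# from "what the model infers". Data and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str | None    # None means the model inferred it
    timestamp: str | None

def label(claim: Claim) -> str:
    """Render a claim with an explicit provenance tag."""
    if claim.source:
        return f"[QUOTED {claim.timestamp} | {claim.source}] {claim.text}"
    return f"[MODEL INFERENCE -- no source] {claim.text}"

claims = [
    Claim("I never wanted the house sold.", "email_2019-03-14.eml", "2019-03-14"),
    Claim("She would probably have approved the renovation.", None, None),
]
for c in claims:
    print(label(c))
```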
We need to separate aspiration from reality. Right now, it is not realistic to fully capture and encode the nuance of a person's memory, personality, and decision-making into AI. We can model patterns. We can analyze speech, writing style, preferences, and even behavioral history. But memory is not just stored data. Personality is shaped by emotion, biology, lived experience, and context. Today's systems can simulate patterns based on historical digital data. They cannot recreate consciousness or inner life. What is possible is high-fidelity mimicry. If someone has a large digital footprint including emails, texts, videos, voice recordings, and structured life data, an AI system can approximate how they might respond in familiar scenarios. That is simulation, not preservation. Several innovations could make this more feasible. Multimodal AI that combines text, voice, facial expression, and behavioral data will create more convincing digital replicas. Personal data vaults that let individuals intentionally structure and curate lifelong data would improve continuity (sketched below). Advances in long-term memory architectures in AI models will also help maintain more stable digital identities over time. The bigger issues are ethical, not technical. Consent is critical. Did the person explicitly authorize a digital reconstruction? What about relatives recreating someone who is deceased? Psychological impact is another concern. AI replicas could complicate grief, create emotional dependency, or distort memory. There is also serious misuse risk. Digital likenesses could be altered, monetized, or weaponized. We are already seeing deepfake scams that exploit trust. Generational interest in digital preservation makes sense. Younger generations already live online and see their data as part of their identity. But we should be clear. What we are building are sophisticated echoes, not immortal versions of a person. Before this becomes mainstream, we need clear legal standards around digital identity rights, posthumous data ownership, and AI likeness protection. The technology will keep improving. The guardrails need to improve faster.
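Here is a minimal sketch of the personal data vault idea mentioned above, assuming a simple consent-flag schema that is purely illustrative and not an existing standard; the point is that release of any artifact is gated on explicit, revocable permissions.

```python
# Minimal sketch of a "personal data vault": each artifact carries explicit
# consent flags set by the person while alive. Schema is an assumption.
from dataclasses import dataclass, field

@dataclass
class VaultItem:
    content: str
    allow_training: bool = False        # may this feed a replica model?
    allow_posthumous_use: bool = False  # may it be used after death?

@dataclass
class DataVault:
    owner: str
    items: list[VaultItem] = field(default_factory=list)

    def exportable(self, posthumous: bool = False) -> list[VaultItem]:
        """Return only items the owner consented to release."""
        return [i for i in self.items
                if i.allow_training and (not posthumous or i.allow_posthumous_use)]

vault = DataVault("J. Doe", [
    VaultItem("Wedding toast transcript", allow_training=True, allow_posthumous_use=True),
    VaultItem("Private therapy notes"),  # consent withheld by default
])
print(len(vault.exportable(posthumous=True)))  # 1 -- the therapy notes stay out
```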
We're already replicating personality. Nobody wants to admit the copy isn't you. AI mirrors emotional patterns at 85-95% accuracy. Norisbank's system matches human decisions 95% of the time. Not prediction. Duplication. The tech works. The question is whether the duplicate carries your soul or just your Excel reflexes. Neuralink's brain implants? OpenAI's $250 million on Merge Labs? Direct neural encoding isn't fiction. By 2026, AI computational power will exceed the human brain's. We're building machinery to capture not just what you decide, but how you decide it. The ethical cliff is steep. The UN warned in March 2025 that neurotechnology could hijack your inner monologue. Four U.S. states protect "brain data" by law. Chile rewrote its constitution for mental privacy. Tech races. Law limps behind. The real challenge isn't encoding memory. It's deciding who owns the copy when it disagrees with you.
Capturing a person's full nuance is not realistic today, because human identity is context and contradiction. We can approximate voice, writing style, and preferences from data trails, yet memory is reconstructive. Decision patterns also shift with stress, aging, and new relationships, which models rarely observe. What people want is continuity, but AI mostly delivers plausible imitation. Feasibility improves when we combine multimodal lifelogging, personal knowledge graphs, and privacy-preserving on-device models. Better long-context reasoning, causal modeling, and identity consistency checks will reduce drift. The hardest challenges are consent, ownership, and preventing a "digital self" from being weaponized or monetized. We advise brands and families to treat these systems like estates, with permissions, audits, and revocation baked in.
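An "identity consistency check" of the kind mentioned above can be as simple as scoring candidate outputs against a verified baseline of the person's writing; here is a minimal sketch, assuming word-overlap similarity stands in for a real embedding model and that the threshold is purely illustrative.

```python
# Minimal sketch of an identity consistency check: candidate replies are
# compared against a baseline of the person's verified writing, and low-
# similarity outputs are flagged as drift. Word overlap is a stand-in for
# embeddings; the threshold would be tuned on held-out data.
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

BASELINE = "Keep it simple. Family first. Never borrow to speculate."
DRIFT_THRESHOLD = 0.05  # assumed cutoff, illustrative only

def check_drift(candidate_reply: str) -> bool:
    """Return True when a reply drifts too far from the documented persona."""
    return jaccard(candidate_reply, BASELINE) < DRIFT_THRESHOLD

print(check_drift("Family first, always -- keep the plan simple."))    # False: consistent
print(check_drift("Leverage everything and chase the hot new token."))  # True: flags drift
```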