ChatBotYOU: Ethical or Not?
"Let’s imagine YOU as a ChatBot assistant. Imagine that in an age of AI, where Large Language Models (LLMs) exist, and where sufficient examples of you as a human - your character, your personality - exist, a clone of you could be created for YOU. It can be done, but should it be done? There are now infinite ways to delve into YOU as a being. Many of us have tens of thousands of emails, chat messages, texts, videos, journals, diaries, and audio recordings that could be utilized for AI training. A bot could be created to represent your personal side, your professional side, your real YOU.
The theoretical aspects of this are straightforward; it can be done by 1) Accessing the Data (like emails via the Outlook API), 2) Processing the Data (removing headers, signatures, and other noise), 3) Training the Model (using tools like TensorFlow or PyTorch), 4) Creating a Response Generation Model (fine-tuning until the outputs read in your voice), and 5) Integrating with a Chat Interface (a web-based UI, or even something like a Slack Assistant).
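To make the pipeline concrete, here is a minimal Python sketch of steps 1 and 2, assuming your emails have already been exported as .eml files into a local folder; the folder path, the signature marker, and the quoted-reply pattern are illustrative assumptions, not a fixed recipe. It turns each message into a prompt/response pair and writes them out as JSONL, the kind of file a later fine-tuning step (step 3 and 4, with TensorFlow or PyTorch tooling) could consume.

```python
import json
import re
from email import policy
from email.parser import BytesParser
from pathlib import Path

MAIL_DIR = Path("exported_mail")           # assumption: .eml files exported here
OUT_FILE = Path("chatbotyou_train.jsonl")  # one {"prompt", "response"} pair per line

SIGNATURE_RE = re.compile(r"\n--\s*\n.*", re.DOTALL)  # strip "-- " signature blocks
QUOTED_RE = re.compile(r"^>.*$", re.MULTILINE)        # strip "> quoted reply" lines

def clean_body(text: str) -> str:
    """Remove signatures, quoted replies, and excess blank lines from an email body."""
    text = SIGNATURE_RE.sub("", text)
    text = QUOTED_RE.sub("", text)
    return re.sub(r"\n{3,}", "\n\n", text).strip()

def extract_pair(path: Path) -> dict | None:
    """Turn one .eml file into a training pair: the subject as prompt, your text as response."""
    with path.open("rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    body_part = msg.get_body(preferencelist=("plain",))
    if body_part is None:
        return None
    body = clean_body(body_part.get_content())
    if not body:
        return None
    return {"prompt": msg.get("Subject", ""), "response": body}

def main() -> None:
    pairs = [p for p in map(extract_pair, sorted(MAIL_DIR.glob("*.eml"))) if p]
    with OUT_FILE.open("w", encoding="utf-8") as out:
        for pair in pairs:
            out.write(json.dumps(pair, ensure_ascii=False) + "\n")
    print(f"Wrote {len(pairs)} training pairs to {OUT_FILE}")

if __name__ == "__main__":
    main()
```

The resulting JSONL could then feed the fine-tuning stage, and step 5 would simply wrap the fine-tuned model behind a web UI or a Slack bot; the details of those stages depend entirely on which model and framework you choose.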
We frequently discuss the future of 'Humanity', the realities of 'Death and Life', and the 'Matrix', questioning whether we are already part of something larger that drives us along as part of the greater human story. We ponder brain downloading, acknowledging that with the iPhone, and eventually Neuralink, we are already becoming a kind of human-cyborg hybrid.
Here we are, in this moment, with a test case version to explore, on a very individual level, what the next larger step might be. But do we really want to confront ourselves? As the saying goes, "to thine own self be true", yet as a species, we struggle with self-introspection, self-assessment, and self-truth. If we can barely confront our own reflection, how can we hope to understand and evaluate this path forward?
Every day, it's said thousands to millions of times across the planet: "If only I had more of me, if I could just clone myself, then I could get everything done." Well, here it is – the Cloned You, ChatBotYOU, is alive. It exists in some form, but to what extent have we studied this in the context of our individual and personal selves? Have we leveraged the millions of data points from our personal conversations, musical and theatrical pieces, memories, stories, and professional documents for this purpose?
In my opinion, exploring this further is the only ethical thing to do. The science of the moment should compel us to evaluate these digital versions of ourselves before we are immersed in the more permanent state of a full brain download. We owe it to the physiology and psychology of the human experience to investigate these primary and primal versions of a digital US (or YOU). Investigating the different versions of ourselves, and linking them to parts of our brain and to our roles in community and society, could be crucial for understanding the potential downfalls and realities of a digital human future.
My emotions are a mix of fear and excitement. In my personal life, I would love to understand and compare my genuine responses with those of my ChatBot counterpart. The insights gained from comparing the professional, private, paternal, spousal, and artistic versions of myself would be invaluable. We are an amalgamation of our entire being, and as more of us is captured in our personalized metadata, we edge closer to a new understanding of ourselves.
To some, this concept may seem novel; to others, perhaps outdated. For me, it represents the realization of a future with AI at our fingertips. We can clone ourselves, but the question remains: should we?