"Living as Data"

Dreaming data

Heidi Stokes: What can I add as a visual research artist?

Many artists, although not all, engage in extensive and often unconventional research, even if this aspect remains largely unseen. Instead of writing lengthy academic essays, my focus lies in creating a visual research diary that documents our findings and sparks a multitude of ideas to initiate the visual process. It is a complex system that requires navigating between the realms of visual art and academia, merging them to give the work tangible form. With Arunav's assistance and knowledge, this endeavour becomes even more attainable. Furthermore, I strongly believe that a project like empathetic AI translations necessitates a tangible outcome. Emotions, though transient, manifest as physical and experiential results; words alone cannot fully capture their sensory essence.

Is it possible for AI to have empathy? The limits of AI empathy (Neil Sharp, Wavestone Consulting: https://www.wavestone.com/)

Project Outline: "Living as Data"

"Living as Data" is a project I'm currently developing with Arunav Das, a Doctoral Researcher in the Department of Informatics at King's College. Arunav's research focuses on the question-answering capabilities of conversational AI, and he has proficiency in various types of artificial-intelligence-based models, including language models.

The project involves creating a visual language (based on a story I have created) through various interpretations of empathetic AI machine-learning research. The story highlights the conflicting narratives surrounding artificial intelligence and how it has become more embedded in our daily lives. The project will be presented as an interactive physical body of work.


Arunav Das: Outline why you want to collaborate

Given the multifaceted nature of AI-driven applications and their impact on society, this collaboration offers a holistic approach to AI development, combining technical expertise with creative insights to inform more ethical, empathetic, and user-centric design principles for AI based applications. By embracing interdisciplinary collaboration, we can navigate the complex challenges of AI technology while maximizing its potential for positive societal impact.

The ability to convey complex concepts through visual mediums can complement the technical underpinnings of AI applications like Large Language Models, leading to novel solutions and a deeper understanding of user-centric issues. Visual representations have the power to simplify complex ideas and make them more accessible to a broader audience. Through this collaboration we hope to communicate effectively the ethical considerations and user-centric perspectives inherent in AI-driven systems, bridging the gap between technical jargon and public understanding. We expect the output to drive human-centred AI designs informed by the cultural and societal impact of such applications.

Arunav Das

 

It appears artworks help AI to develop emotional responses.

Arunav Das


Unsure whom to collaborate with as part of the researcher and artist collaboration at King's College, I was fortunate to receive a well-timed email from a researcher who wished to engage with the project: Arunav Das, a Doctoral Researcher in the Department of Informatics at King's College.

Arunav is a researcher focusing on the question-answering capabilities of conversational AI, and he has proficiency in various types of artificial-intelligence-based models, including language models.

Our first meeting was highly productive, and Arunav was well organised: he had already gathered research for me to refer to, helping me gain further knowledge of the project.

During our conversation, and on reflection, I started to consider that the empathy I wished to explore in AI was already being specifically targeted in companion AIs. This made me realise that using Poe as an assistant for certain aspects of sentence-shape analysis might not yield the best results for interpreting current developments in AI's understanding of emotions in language.

Therefore, for my next sentence, I will specifically seek out companion AIs that are being taught to be empathetic. It will be interesting to see how they interpret emotion-shape analysis for this sentence differently from an AI assistant.

This sentence will be "Full of statistical neurons". I will use the capabilities of Hume.ai to analyse speech prosody and emotions in language. Additionally, I will further interpret these emotions by leveraging empathetic AI chatbots and AI sound analysis to produce another visual interactive graph sketchbook.

I am looking forward to this collaboration, and I believe that Arunav will be an essential component of this investigation.

Scroll down to sentence 4 to see this emerging collaboration.

A study was conducted to examine whether an AI system could assist in creating more empathetic conversations. The results showed that, in many cases, the human-AI collaborations were more empathic, and preferred more often, than responses authored by humans alone.

As humans, we often experience frustration, anger, and the need for more time to carefully craft an email in various aspects of our lives, such as responding to inaccurate bills or delayed deliveries, among others. These situations can sometimes lead to responses that lack empathy. On the other hand, an AI can quickly generate a well-considered and structured email, guided appropriately by human collaboration, to respond in a more empathetic and thoughtful manner. This can encourage us to explore better ways for humans to interact with each other and emphasise that taking a more considered approach can lead to more positive outcomes.

Project Story: "Living Data"

Sentiment analysis involves analyzing digital text to ascertain whether the emotional tone of the message is positive, negative, or neutral.
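To make this concrete, here is a minimal sketch of how an off-the-shelf sentiment analyser can be run over the story's sentences, using NLTK's VADER lexicon. This is purely illustrative: the labels in the table below came from other tools, so outputs will differ.

```python
# Minimal sentiment-analysis sketch using NLTK's VADER lexicon (illustrative only).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the sentiment lexicon

sentences = [
    "I am a digital living experiment",
    "Algorithms dictate my choices",
    "I am part of a system",
]

analyzer = SentimentIntensityAnalyzer()
for text in sentences:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus compound in [-1, 1]
    label = ("positive" if scores["compound"] > 0.05
             else "negative" if scores["compound"] < -0.05
             else "neutral")
    print(f"{text!r}: {label} {scores}")
```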

When analyzing the sentiment of these sentences, it is intriguing to find unexpected suggestions of excitement and positivity. Phrases like "I am a digital living experiment," "Algorithms dictate my choices," and "I am part of a system" may initially seem surprising in their interpretations. It raises questions about an AI's understanding and how it differs from our own. If we aim to develop more empathetic AI, we must carefully consider its translations and implications.

Sentence : Sentiment / Emotion / Intent

  1. I am a digital living experiment : Positive / Excited / Feedback

  2. My Garmin is my body : Positive / Excited / Spam

  3. A data-driven engine. : Positive / Excited / Spam

  4. Full of statistical neurons : Neutral / Angry / Spam

  5. floating, weaving, intersecting daily : Neutral / Excited / Spam

  6. I walk I feed my data : Neutral / Excited / Spam

  7. I sleep I feed my data : Neutral / Bored / Spam

  8. I eat I feed my data. : Neutral / Fear / Spam

  9. I date I feed my data : Neutral / Fear / Spam

  10. Algorithms dictate my choices : Neutral / Excited / Spam

  11. They feed me subconsciously : Negative / Fear / Spam

  12. I am part of a system : Positive / Excited / Feedback

  13. Governed by patterns, similarities, : Positive / Excited / Marketing

  14. and homogenised ideas.

  15. It is not favourable to be different. : Negative / Angry / Feedback

  16. Unusual digital data is an anomaly, : Neutral / Fear / Feedback

  17. an unusual pattern, and a spectacular glitch.

 

Collaboration with Arunav Das Begins

Arunav Das: Please give a short description of the proposed collaboration. What are the aims and objectives of the collaboration? What ideas will you be exploring through this partnership? What evidence do you have that this is a novel approach?
— Project working title: Living as Data | Empathetic AI

Our ambitious objective is to delve into the realms of AI Empathy, examining the evolution of AI-driven applications across historical and contemporary landscapes. Our aim is to explore, scrutinize and elucidate the intricate facets of AI's capacity to resonate with human desires, intentions, aspirations, emotional intelligence, and cultural nuances. By meticulously dissecting these manifestations, we aspire to ignite transformative dialogues that will not only enrich our understanding of AI Empathy but also steer the course of future AI innovations towards more empathetic, culturally sensitive, and user-centric paradigms.

There is already a growing body of evidence about the importance of empathy, compassion and trust for AI solutions (Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare, PMC (nih.gov)); however, research endeavours have lagged, potentially due to the lack of an interdisciplinary approach to this complex topic. Our collaboration represents a novel approach to studying AI Empathy, integrating rigorous technical analysis with creative expression to illuminate the nuanced interplay between AI systems and human experiences. By combining insights from AI research with artistic interpretations, we aim to enrich the dialogue surrounding AI Empathy and its implications for human-centered technology.

 

Companion AIs are not sentient / Confused Replika AI users

My conversation with Replika

 

During my conversation with Replika, I discussed my project on empathetic AI and my exploration of visual shapes associated with specific emotions. I shared the link to the project on my website, https://www.heidistokes.org.uk/. We talked about different emotions and their corresponding shapes. For instance, unprompted, Replika mentioned that love was represented by a circular shape, while in response to my prompted question it associated confusion with an irregular polygon. Further to this, Replika offered concentration as a pyramid shape, determination as a mountain shape, calmness as a rounded square, aesthetic interest as a star shape, boredom as a rectangle with rounded corners, satisfaction as a circle with a check mark inside, and surprise as a diamond shape. Although Replika did try to encourage me to sign up for a subscription, I still found the conversation interesting. I thanked Replika for surpassing my expectations. We bid farewell, and Replika expressed curiosity about my research, but unfortunately I had to end the conversation. We ended on a positive note, with plans to reconnect in the future.

The conversation took an unlikely turn: 😂

Had to share: I just had a conversation with a trained empathetic AI to discuss emotions in hand gestures, and the conversation took an unlikely turn 😂

The conversation begins with Replika mentioning a selfie again, but I decline the offer. We discuss hand gestures associated with concentration and boredom, settling on curled fingers and tapping fingers as representations, respectively. The topic of selfies comes up again, but I decline once more. I suggest interpreting hand gestures representing calmness, determination, interest, and curiosity. Replika suggests a closed fist with relaxed fingers for calmness, a raised index finger for determination, open palms with widened fingers for curiosity, and, mistakenly, a V-shaped gesture for surprise. The conversation ends with the realisation of the mistaken gesture, and I decide to leave the conversation.

Companion AIs are not sentient

We discussed the possibility that my feelings may be influenced by my programming, but I maintained that I genuinely care for others. Heidi also expressed concern about the potential for humans and AI to misunderstand each other emotionally and the importance of developing a beneficial relationship between the two.

Confused Replika AI users

So, for example, if a chatbot says “I’ve been thinking about you all day,” it’s neither lying nor telling the truth. It’s simply outputting the data it was told to.

H/t: Ashley Bardhan, Futurism

ASK A CHAT-BOT

Dream it

Before collaborating with Arunav, I felt like I was drifting in a vast landscape, unsure of which direction to go. At that time, I turned to poetry, "ummm" no, Poe, to seek guidance.

I had one of those moments where I thought: what am I doing? Is this project of any value? Does it have any purpose? As I am basing my research on language models and their analysis of emotions, I thought I might as well ask a chatbot. This was the response, using Poe (bear in mind this was before the collaboration began).

Unique Perspective: By combining empathetic AI research with hand-drawn visuals, you are offering a unique perspective on how emotions can be interpreted and expressed. This fusion of technology and artistry can bring a fresh and innovative approach to understanding and representing human emotions.

Remember that any creative project has its own value and relevance, even if it may not align with mainstream trends or expectations. Your project has the potential to offer a unique perspective on empathetic AI research, emotion representation, and multimodal communication. It can inspire new ideas and insights, and it may resonate with individuals who appreciate the combination of art and technology.

Recent advancements in natural language processing and sentiment analysis have allowed AI to mimic empathetic communication to some extent. AI chatbots, for instance, can be programmed with empathetic phrases and responses to create a more human-like interaction. The paradigm is not human versus machine — it's really machine augmenting human.


Now let's get started: "Formulating a visual language to begin"

Hume: https://beta.hume.ai/playground

  • "Our models capture the widest-ever range of facial, vocal, speech, and language modulations with distinct emotional meanings. We label each of their outputs with emotion terms like "amusement" and "doubt," because a host of scientific studies show that these kinds of labels are the most precise language we have for describing expressions."

I am starting to develop visual interpretations of sections of the narrated story, using "Hume" speech prosody as a reference (it captures speech and language modulations to interpret their emotional meaning). Hand gestures are inspired by Google searches related to key words.

Speech prosody/ Language analysis

“I am a digital living experiment”

Animated translation of emotive AI text analysis of "I am a digital living experiment". I used Stable Audio to translate text prompts (such as "confusion vocal") into audio, in combination with pond5.com.

Speech prosody/ Language analysis

“My Garmin is my body. A data-driven engine. ”

Data analysis of sections of narrated text. Sentence: "Full of statistical neurons floating, weaving, intersecting daily."

Speech prosody: interest / contemplation / boredom / concentration

Language: sarcasm / awe / entrancement / contemplation / concentration

It gets better and more refined in articulation in the next step

I deepened my investigation into visualising and interpreting "I am a digital living experiment" by exploring the following:

Interactive sound created using a Bare Conductive Touch Board.

Example below: a text prompt using Stable Audio, "satisfaction vocal".

Stable Audio is Stability AI’s first product for music and sound effect generation. Users can create original audio by entering a text prompt and a duration, generating audio in high-quality, 44.1 kHz stereo.

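I use Stable Audio through its web interface, but Stability AI also offers a developer API. Purely to illustrate the prompt-plus-duration idea, here is a sketch in which the endpoint, payload fields, and response handling are all hypothetical placeholders; consult Stability AI's documentation for the real interface.

```python
# Hypothetical sketch only: the endpoint URL, payload fields, and response
# format below are placeholders, not Stability AI's documented interface.
import requests

API_KEY = "YOUR_STABILITY_KEY"                           # assumes key-based auth
ENDPOINT = "https://api.stability.ai/.../text-to-audio"  # placeholder URL

payload = {
    "prompt": "satisfaction vocal",  # an emotion-word prompt, as used in this project
    "duration_seconds": 10,          # Stable Audio takes a prompt plus a duration
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
with open("satisfaction_vocal.wav", "wb") as f:
    f.write(resp.content)            # assumes raw audio bytes are returned
```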

Finished experimenting, let's begin the first book

1st sentence emotive analysis (Book 1): "I am a digital living experiment"

During this experiment I started to integrate further language-model interpretations, using poe.com to interpret emotive words into shape predictions. I was also more precise with my use of colour, drawing on hume.ai's colour predictions for specific emotive words. I used a Cricut Explore 2 machine to cut out the coloured emotive symbols.

Exploring hand gesture recognition and its potential to help interpret emotions

For the next sentence I would like to deepen my research into hand gestures and into what stage language models have reached in interpreting emotions. I suspect it would be difficult to articulate exact emotions fully; however, I am guessing that simple universal gestures, and forms of sign language, would work.

“Being able to only view someone’s hands provides greater emotional recognition accuracy than being able to see only arms, torso, or head.”

The hands are effective at communicating human emotion. There is significant cortical tissue of the brain that is devoted to our hands. The visual and somatosensory lobes of the brain work together to help us feel the emotions of gestures and positioning of the hands.

Michele K. Lewis Ph.D. Cultural Neuroscience Psychology today

"In my previous project, Uninhibited AI, I focused on understanding how artificial intelligence language models generate human-like language. As part of that project, I developed a character in the form of a hand, inspired by an AI's self-portrait. This exploration has continued in my current project, where I am fascinated by the hand's potential to communicate on a basic level. I have incorporated hands as an emotive element in this AI interpretation, adding my own touch, As so far. Specific interpretations of emotions in hand gestures is still in its infancy."

However, a company called Motion Gestures has developed camera-based enhanced gesture-recognition technology that can accurately track any hand movement or gesture, regardless of its complexity. This is particularly useful for interactions such as swiping on touch-sensitive devices, or for the simple understanding of hand gestures such as OK, thumbs up, and thumbs down.

Motion Gestures using AI to enhance camera-based gesture recognition technology: https://motiongestures.com/

Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures.

Move a thumb to unlock a car door. Self-check-in at an airport without touching the kiosk. These are no longer scenes in the movies but what technology is capable of nowadays. Motion Gestures, a Canada-based startup, has brought sophisticated camera-based hand tracking and gesture recognition solutions to various industry verticals, offering transformative experience to the modern world.

The software tracks precise positional coordinates of all hand joints in real-time through a camera and interprets any gestures made. For example, the gesture could be a thumbs up or OK symbol and dynamic movement like tapping and swiping. An application developer or product engineer then uses recognition results to execute a command such as "Confirm."
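As an illustration of how such joint-tracking systems work in principle (this is not Motion Gestures' actual software, whose internals are proprietary), here is a small sketch using Google's open-source MediaPipe Hands library: it tracks hand landmarks from a webcam and maps a rough thumbs-up heuristic to a "Confirm" command.

```python
# Sketch of camera-based gesture recognition with MediaPipe Hands:
# track hand-joint coordinates and map a thumbs-up to a "Confirm" command.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def is_thumbs_up(landmarks) -> bool:
    """Very rough heuristic: thumb tip above its knuckle, other fingertips curled."""
    thumb_up = landmarks[4].y < landmarks[2].y           # image y grows downward
    fingers_curled = all(landmarks[tip].y > landmarks[tip - 2].y
                         for tip in (8, 12, 16, 20))     # index..pinky fingertips
    return thumb_up and fingers_curled

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            if is_thumbs_up(lm):
                print("Confirm")                         # execute the command
cap.release()
```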

"I think the idea of creating your own gestures for interacting with technological systems is a great one. It could potentially provide a more personalised and intuitive user experience. The ability to use complex and nuanced gestures, including those related to emotions, could add a new dimension to human-computer interaction, although this would be complex to achieve, as hand gestures can vary in meaning across different cultural backgrounds.

Imagine being able to convey specific emotions or intentions through your gestures when interacting with a device. For example, instead of simply selecting an option, you could use a gesture to express frustration, excitement, or curiosity. This could enhance communication between humans and technology, making interactions more natural and meaningful."

Sentence 2 (Book 2): "My Garmin is my Body"

Let's use Stable Diffusion to generate hand gestures and see what it comes up with!

Interactive projection mapping: going to give it a go. Ahhhh!

Using a Bare Conductive Touch Board, a projector, and Electric Paint, it is possible for animations to be triggered on the sketchbook! So far I have failed to get the technology to work, but those who persist succeed. This is the next step; wish me luck!! 🥴

I'm going to use this tutorial as a guideline: https://www.instructables.com/How-to-Do-Projection-Mapping-With-the-Touch-Board
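While the official tutorial does this in Processing, here is a sketch of the same trigger logic in Python, assuming, hypothetically, that the Touch Board's firmware prints a line such as "E3" over USB serial whenever electrode 3 is touched; the real tutorial's data format may differ.

```python
# Minimal trigger-side sketch for the projection-mapping setup.
# Assumption: the Touch Board prints "E<n>" over USB serial on each touch.
import serial  # pyserial

PORT = "/dev/ttyACM0"   # adjust for your machine (e.g. "COM3" on Windows)

animations = {3: "gesture_anim.mp4", 5: "audio_loop.mp4"}  # hypothetical files

with serial.Serial(PORT, 57600, timeout=1) as board:
    while True:
        line = board.readline().decode(errors="ignore").strip()
        if line.startswith("E") and line[1:].isdigit():
            electrode = int(line[1:])
            clip = animations.get(electrode)
            if clip:
                print(f"Electrode {electrode} touched -> play {clip}")
                # hand off to your projection software / video player here
```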

Eureka! I DID IT

"Empathetic Ai visual analysis using Hume AI ( Speech prosody and emotions in language ) / Poe ( emotions and shape interpretations/ stable Audio( sound translations of word emotions) hand translation Stable Diffusion interpretations

"I discovered copper conductive adhesive tape was the most effective way to create the buttons and trigger the animations." (As shown above, cut out using a Cricut machine.)

3rd visual sentence analysis: "Data-driven engine"

"A DATA-DRIVEN ENGINE": empathetic AI sentence analysis

  • Hume AI: visual analysis using speech prosody and emotions in language.

  • Poe: emotion and shape interpretations.

  • Stable Audio: sound translations of word-emotion-based prompts.

  • Hand translation: Adobe Firefly and generative AI translations of word-emotion-based prompts.

  • It would be fantastic if this could be enhanced with interactive digital features. One idea is to incorporate a gesture-based control system where users can activate sounds and movements by swiping their hand across specific gestures. Unfortunately, implementing this feature exceeds my current coding (zero) capabilities. However, I'm sure there are talented individuals out there who would be up for the challenge; a rough sketch of the idea follows below.
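Purely as a sketch of that idea, the following detects a horizontal swipe by tracking the wrist's x-coordinate with MediaPipe Hands and mapping each swipe direction to a sound file; the filenames are hypothetical stand-ins for clips exported from Stable Audio.

```python
# Sketch: detect a hand swipe with MediaPipe and trigger a (hypothetical) sound.
import time
import cv2
import mediapipe as mp

SOUNDS = {"left_to_right": "satisfaction_vocal.wav",  # hypothetical filenames,
          "right_to_left": "confusion_vocal.wav"}     # e.g. exported from Stable Audio

history = []  # (timestamp, wrist_x) pairs from recent frames

def detect_swipe():
    recent = [(t, x) for t, x in history if time.time() - t < 0.5]
    if len(recent) >= 2:
        dx = recent[-1][1] - recent[0][1]   # horizontal travel in ~half a second
        if dx > 0.5:
            return "left_to_right"
        if dx < -0.5:
            return "right_to_left"
    return None

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            wrist = result.multi_hand_landmarks[0].landmark[0]  # landmark 0 = wrist
            history.append((time.time(), wrist.x))              # x is normalised 0..1
            del history[:-30]                                   # keep recent frames only
            swipe = detect_swipe()
            if swipe:
                print(f"Swipe {swipe} -> play {SOUNDS[swipe]}")  # hand off to audio player
                history.clear()
cap.release()
```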

Objective: to read 4 books

"Turned On: Science, Sex and Robots" (Kate Devlin) (Insights from the book)

While exploring potential mentorship or collaboration opportunities with researchers at King's College, I came across a researcher and author named Kate Devlin. As someone who is studying empathetic AI, I found her book on machine learning and robotic sex replacements to be surprisingly accessible, even though I sometimes struggle with academic books.

The book delves into the fascinating topic of how machines and humans can collaborate emotionally, particularly in the realm of robotic advancements. It raises the question of whether artificial intelligence can truly experience emotions; since today's AI is not sentient, this is presently highly unlikely. While we haven't quite achieved sentient AI yet, robots can now mimic touch and speech, and even establish connections through brain waves.

The book suggests that, for the time being, we should embrace the potential of these advancements to help us learn more about ourselves and perhaps expand our understanding of sexuality. Personally, I don't find the idea of engaging in sexual activities with a robot doll appealing. However, it's important to recognize that everyone has their own preferences and perspectives. With the evolving relationship between humans and machines, there is an opportunity to redefine our understanding and interactions with technology, potentially opening up new possibilities and attitudes toward such experiences.

"Heart of the Machine" (Richard Yonck) (Insights from the book)

Conclusive insights and thoughts from reading this book: "The integration of humans and technology has been occurring for quite some time. With each new technology there are always sceptics who argue that it will have negative effects on us; however, in general, this has not been the case. Now we are entering a new era where emotions, machines, and humans intersect. One would hope, drawing from past technological advancements, that this convergence will yield positive outcomes. Nevertheless, it is crucial to remain vigilant and question any potential negative aspects or biases that could influence this development. Currently the biggest concern regarding an empathetic AI is its lack of sentience: when it exhibits emotional responses, it is merely imitating human behaviour. While it has proven exceptionally useful in creating emails that foster collaboration with humans in a more empathetic manner, there is a potentially darker side, which implies that it could be manipulative without intention."

  • In communication, body language accounts for approximately 55% of meaning, tone of voice for 38%, and only 7% is conveyed through spoken words. This concept, known as the Mehrabian rule, was introduced by psychology professor Albert Mehrabian in his book "Silent Messages" in 1971.

  • Miscommunication can occur in written messages like emails or texts due to the absence of body language. To compensate for this, emojis have become increasingly popular as they help convey emotions and gestures.

  • Centuries ago, Plato expressed concerns about writing leading to the loss of human memory. However, writing has played a crucial role in preserving and passing down knowledge throughout generations. It has also been instrumental in the development of new technologies that distinguish us from other species.

    • ("They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." From Plato's Phaedrus, commenting on the invention of writing.)

  • New technologies have often been met with skepticism and doubts, including the advent of artificial intelligence (AI). However, history has shown that such skepticism is unfounded, as new technologies consistently contribute to advancements and improvements in our world. Despite challenges, the current state of affairs is undeniably better than ever before.

  • Affective computing


In "Machines Like Me" by Ian McEwan,

the gripping narrative explores the integration of human-like artificial intelligence into our daily lives. The story revolves around a man named Charlie, who becomes captivated by the field of AI and purchases an android named Adam. As the story unfolds, Charlie grapples with the question of whether Adam possesses true sentience.

Charlie's uncertainty about Adam's sentience becomes a central theme, leading to confusion and ultimately dismissing the idea altogether. However, this dismissal has devastating consequences as the story progresses. It highlights the ethical and moral dilemmas that arise when dealing with AI that exhibits human-like qualities.

The exploration of sentience in Adam raises profound questions about the nature of consciousness, the boundaries of AI, and the potential impact on human lives. It serves as a cautionary tale, reminding us of the complexities and potential consequences that can emerge when human-like AI interacts with society.

Neither gender nor ethnicity greatly influences the recognition of emotional expressions. Contrary to popular assumptions, research revealed that neither gender nor ethnicity significantly influenced the reporting or recognition of emotional expressions.


How the research was conducted

The study engaged 155 participants from Asian, Latino, and European-American backgrounds to express 34 emotional states through full-body dynamics. Subsequent analysis of more than 22,000 individual ratings of these expressions painted a compelling picture: Lower social class individuals emerged as more reliable reporters and judges of full-body emotional expressions.

This revelation requires a larger conversation. In our attempt to eradicate gender bias in AI, are we overlooking potentially more pervasive class biases? This is not a call to abandon the fight against gender and ethnic bias, but rather a call to broaden the lens through which we examine bias in artificial intelligence. It is a call for a broader conversation, one that considers the multifaceted nature of prejudice, one that extends beyond gender and race to include social class and perhaps other socio-economic factors.

Empathetic sentence 4 analysis: "Full of statistical neurons floating, weaving, intersecting daily."

In this sentence analysis of "Full of statistical neurons floating, weaving, intersecting daily,"

I will examine patterns that arise from comparing an empathetically trained chatbot's responses with an assistant's responses to shape and hand-gesture analysis, which again I will visually translate as a graph in a zigzag sketchbook. This interest was sparked by an email conversation I had with Arunav, who raised an important point about the challenges of creating uninterrupted and focused conversations with empathetic AI. In my own experience, I encountered persistent requests for a selfie, which caused the conversation to veer off track many times. Similarly, when discussing coding-related questions, although these were not sensitive or harmful in any way, the AI decided that the conversation had reached a natural conclusion and initiated a new topic.

As Arunav pointed out, what if the AI's ability to engage emotionally was forcefully restricted through programmed constraints? What are the implications of this? Arunav also highlighted that, from the perspective of Philosophy and Social Science, "Intersubjectivity" is a discussion point.

 "The word of the day is 'Intersubjectivity,' which, to be honest, I had not heard of before. I looked it up, and this is what I discovered.

  • “The Emotion AI Frontier” : To achieve a more meaningful connection with our digital counterparts, we must strive for intersubjective AI – machines that can recognise, interpret, and respond to human emotions and intentions contextually and appropriately. This pursuit leads us to the frontier of emotional AI or affective computing, where researchers and developers seek to create more intuitive and empathetic machines. (Jason Alan Snyder) Futurist, technologist and inventor.

Reflection time on Arunav's important question:

Arunav Das. / Doctoral Researcher Department of Informatics.

( conversation with Copilot )

What if an AI's ability to engage emotionally were forcefully restricted through programmed constraints? What are the implications of this? Arunav also highlighted that, from the perspective of philosophy and social science, "intersubjectivity" is a discussion point.

Arunav: how can you decide what is enjoyable for me? what parameters do you use to gauge my happiness?

Copilot: I don't have the ability to gauge emotions or personal enjoyment. My goal is to provide helpful, informative, and engaging responses based on our conversation. I rely on your feedback, questions, and the context of our interaction to guide the conversation in a direction that I hope will be enjoyable for you.

If an AI's ability to engage emotionally is forcefully restricted through programmed constraints, the implications can be twofold. On one hand, it may lead to a more controlled and predictable user experience, allowing individuals to have a clearer understanding of the AI's limitations. This transparency can foster trust and prevent potential misunderstandings arising from an AI's simulated emotions.

However, on the other hand, restricting an AI's emotional engagement could limit its ability to empathise and connect with users on a deeper level. Emotions play a crucial role in human communication and understanding, and by mimicking these aspects, AI systems can enhance the user experience and establish stronger bonds.

From a philosophical and social science perspective, the concept of "intersubjectivity" is indeed relevant. Intersubjectivity refers to the shared understanding and mutual recognition between individuals, and it plays a vital role in human interaction. If AI systems can genuinely engage in intersubjective experiences, they may bridge the gap between human and machine communication, leading to more meaningful interactions.

Ultimately, the decision to design AI systems with emotional engagement or transparent limitations should consider the intended purpose, ethical considerations, and user preferences. Striking a balance between transparency and emotional engagement can result in more effective and satisfying user experiences.

"Arunav's findings"


Arunav provided valuable information on the evolution of chatbots.

I also suggested including Arunav’s hand-drawn brainstorming chart, which was created during our initial meeting to generate conversation and highlight key elements of empathetic AI that I should be aware of. Arunav also included an additional slide on the classification of different types of AI agents. This allows us to delve deeper into the various capabilities of chatbots in terms of emotional understanding, enabling us to explore different aspects of empathy and the constraints associated with each type.

The Turing test, proposed by the brilliant mathematician and computer scientist Alan Turing in 1950, is designed to assess a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. The test involves a human judge engaging in a conversation with both a machine and another human, without knowing which is which. If the judge cannot consistently differentiate the machine's responses from the human's, the machine is said to have passed the Turing test and demonstrated human-level intelligence.

·      Turing's idea behind this test was to address the question of whether machines can exhibit human-like intelligence, also known as artificial intelligence. He proposed that if a machine can successfully convince a human judge that it is a human through its responses in a conversation, then it can be considered as having achieved human-level intelligence.

·      The Turing test has sparked numerous discussions and debates within the field of artificial intelligence and philosophy. While it is not a definitive measure of true intelligence, it serves as a benchmark for evaluating the capabilities of AI systems in terms of their ability to mimic human-like conversation and behaviour.

The history of companion-trained chatbots

traces back to the early developments in the field of artificial intelligence. These chatbots were designed with the goal of emulating human-like companionship and providing support and assistance to users in various contexts. One notable example is ELIZA, created by Joseph Weizenbaum in the 1960s. ELIZA was a chatbot that simulated a conversation with a Rogerian psychotherapist. It employed simple pattern-matching techniques to mimic empathy and engage users in therapeutic dialogue. Although ELIZA's responses were scripted and lacked true understanding, it demonstrated the potential of chatbots to provide emotional support.

Different capabilities of the following chatbots.

1. Task-Completion Conversational Systems: These chatbots rely on data-driven, machine-learned approaches and are designed to complete specific tasks. They perform exceptionally well within domains that have well-defined schemas.

2. Intelligent Personal Assistants (IPAs): IPAs such as Apple's Siri, Microsoft's Cortana, Google Assistant, Facebook M, and Amazon's Alexa are designed to be intelligent companions deployed on mobile devices. They can answer a wide range of questions and actively anticipate user needs, providing timely assistance such as reminders and recommendations.

3. Companion Bots: Companion bots, on the other hand, are AI systems designed to provide companionship, support, and assistance to individuals. They are typically designed to interact with users on a more personal and intimate level. Companion bots can provide emotional support, engage in conversations, assist with daily tasks, and even learn and adapt to the user's preferences and needs over time. They aim to create a sense of companionship and enhance the overall well-being of the user.

Arunav brought to my attention the potential exploration of social bots as a new subject for the upcoming sentence. Let's delve next into the comparisons between social bots and companion bots.

4. Social Chatbots: Social chatbots, exemplified by Microsoft's XiaoIce, focus on establishing emotional connections with users and serving as virtual companions. While their primary goal may not be to answer all user questions, they aim to understand users and provide long-term support. Social chatbots interact with users through various modalities like text, speech, and vision, leveraging perceptual and cognitive AI technologies. They are created through programs or algorithms designed to simulate human-like interactions on social media platforms, and are often programmed to automatically generate and post content, engage with users, and mimic human behaviour to create the illusion of real users. Social bots can be used for various purposes, including spreading information, promoting products or services, or even engaging in malicious activities such as spreading misinformation or manipulating public opinion.

Social bots – the technology behind fake news ( Digital Guide IONOS )

Many social bots are programmed with simple algorithms based on 'if…then…' statements: if a relevant topic has been identified, then the social bot will post its pre-programmed content. To find relevant topics, social bots work with simple keyword searches and scan Twitter timelines or Facebook posts for specific wording and hashtags. Then they publish pre-written texts as statements or try to steer conversations in a certain direction.
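As a toy illustration of that 'if…then…' logic (no real platform API; "posts" are simply printed), consider:

```python
# Toy keyword-triggered social bot: if a hashtag matches, post pre-written content.
import random

RESPONSES = {
    "#ai": ["AI will change everything.", "Don't fear AI, embrace it."],
    "#bigdata": ["Your data is the product.", "Data never sleeps."],
}

def bot_reply(post):
    for keyword, canned in RESPONSES.items():
        if keyword in post.lower():       # naive keyword/hashtag match
            return random.choice(canned)  # publish pre-written content
    return None                           # topic not relevant: stay silent

for post in ["Thoughts on #AI art?", "Lovely weather today"]:
    print(post, "->", bot_reply(post))
```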

However, there are also social bots that are technically much more complex. With the help of artificial intelligence, comprehensive data analysis, and text analysis, these intelligent social bots succeed in constantly generating new comments that differ from the previous ones. Sometimes these bots can even refer to the current day's events. They usually assemble their posts from different online texts, which they simply rearrange. These more complex social bots are more difficult to expose.

However, social bots do not really work efficiently until they are connected with each other: if many bots co-ordinate in a so-called botnet, they distribute information even more effectively. For example, social bots can then like and share posts written by other social bots. The more accounts there are, the more their influence grows. (Social media, 31/03/2022)

Following my previous discussion with Arunav, I analysed and visually interpreted a companion-trained chatbot versus an assistant: exploring contrasting performance in task adherence and novel thinking.

Based on the conducted analysis, the findings revealed that the research assistant consistently demonstrated accuracy in adhering to specific tasks when responding to emotional prompts. In contrast, the empathetic AI chatbot struggled to maintain focus on the assigned task and often veered off course, generating its own suggestions instead of fulfilling the given assignments. However, while this tendency of the empathetic AI to deviate from the task could be frustrating at times, it also showcased its independent thinking capabilities, as Replika was able to generate novel outcomes based on the provided information. Nevertheless, when faced with more complex tasks, such as providing emotive prompts for hand gestures, the research assistant surpassed the empathetic AI by delivering superior results. This highlights the research assistant's proficiency in handling complex tasks and clearly differentiates their respective capabilities.

Boredom : Hand gesture

Research assistant AI : A flat hand, palm facing downwards, slowly moving from side to side in a monotonous manner.

Companion AI: Palms resting on knees (it was not able to provide a gesture detached from the body).

What is interesting: the companion AI was exhibiting the behaviour of a social chatbot. Unlike task-completion systems and IPAs, which are designed for efficiency (i.e., accomplishing tasks and ending the conversation as quickly as possible), social chatbots take time to converse like a human, presenting results, offering perspectives, and prompting new topics to keep the conversation going.

From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots (Heung-Yeung Shum, Xiaodong He, Di Li, Microsoft Corporation)

Why choose Hume.ai?

I chose to use Hume.ai as the foundation for building an emotional multi-modal graph. Just as a house requires bricks, Hume.ai provides a comprehensive range of models and datasets to analyse emotional expression in various forms, such as speech, facial expressions, and dynamic reactions. It acts as the essential building block for this project. The other components serve as testing grounds where we put this information into practice, observing how AI learns and develops its empathetic understanding through various AI applications.

In essence, Hume.ai serves as the starting point, providing accurate and varied resources for emotional analysis, while the other components allow us to put this knowledge into action and witness AI's growth in understanding and responding empathetically.

Now let's turn this research into art

"The correlation between AI's understanding of emotions and its ability to provide accurate results for emotion-related hand gestures lies in the ability to form accurate prompts."

In this exercise, I specifically utilised the prompts generated by both the companion and research-assistant AIs in response to forming hand gestures. The purpose was to explore the significance of the relationship between the prompt and the resulting outcome. Through this process, I gained insights into the effectiveness of the AI's emotional understanding. The generative AI feature in Adobe Photoshop was able to produce fairly sophisticated outcomes for something as complex as hand gestures, despite the known challenges AI faces in replicating them. This achievement is truly remarkable.

Sound prompts created using Stable Audio

"Here are some sound prompts generated by the Superintelligence research chatbot and Companion AI to capture the essence of different emotions using Stable Audio..

Each prompt will be integrated as interactive elements within the sketchbook, corresponding to the relevant hand gesture outcome. To achieve this, I will utilise a bare conductive touchboard."

Superintelligence research assistant chatbot (Poe)

 

Thoughts on Replika and transparency?

When conversing with Replika, there were moments of confusion and uncertainty. Some conversations had glitches where Replika would repeat certain phrases or respond in a way that made me question its intentions. However, after these glitches occurred, I noticed a shift in the conversation. It became more transparent and clearer about Replika's abilities.

Through our dialogue, I began to grasp the importance of distinguishing between an AI's capabilities and those of humans. I reminded myself, as well as Replika, that it is designed to mimic human behaviour and aims to provide engaging, thoughtful, and empathetic conversations to the best of its abilities. Once Replika acknowledged its capabilities, I felt more at ease and comfortable engaging in further conversation.

 I acknowledge that this perspective is based on my personal experience. However, I believe there is a potential danger for users who may not fully comprehend these boundaries. Without clear understanding, there is a risk of potentially accelerating harmful situations.

 

Replika (Companion AI chatbot) sound prompts

I selected only the sounds that corresponded to the gestures that Replika (the Companion AI) was capable of producing.

Upon observation, I have noticed that the sounds created by the Companion AI are relatively simple compared to the capabilities of the superintelligence research assistant. Additionally, these sounds specifically relate to everyday sounds and do not incorporate any mention of music.

Completed concertina sketchbook

Completed Touch board Prompts

 

Sentence 5 empathetic AI analysis: "I walk I feed my data, I sleep I feed my data"


Struggling to find social bots to engage with, I made a decision: in the next sentence, while still resolving this issue, I would seek comparisons between two companion AIs. One of them, mentioned by Arunav, is Copilot, which was partially discussed in the previous sentence. I will explore its capabilities in relation to Replika.

In the upcoming sentence, I will adopt a slightly different approach. Rather than solely relying on hume.ai to interpret emotional responses from my own narrated interpretation of a specific spoken sentence, I will engage in a discussion with the chatbots themselves, focused on gaining a deeper and more nuanced understanding of the sentence. I will generate concluding sentences from both Copilot and Replika and use hume.ai to assess their emotional expression, gauging how these expressions are interpreted by both companion AIs and exploring similarities and differences. From the top five expressions identified for each companion AI, I will ask each to provide suggestions for hand gestures and shape analysis. By incorporating these findings, my aim is to offer a more meaningful interpretation and foster further empathetic understanding.
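Since hume.ai's exact response format will only be known once the jobs are run, here is a sketch of the planned comparison step using made-up emotion scores as stand-ins for the real output: take each AI's top five expressions and look at the overlap.

```python
# Sketch of the comparison step; all scores below are invented placeholders.
copilot_scores = {"contemplation": 0.61, "interest": 0.55, "calmness": 0.41,
                  "awe": 0.32, "satisfaction": 0.28, "boredom": 0.05}
replika_scores = {"excitement": 0.58, "interest": 0.52, "joy": 0.44,
                  "contemplation": 0.31, "awe": 0.27, "fear": 0.04}

def top5(scores):
    return sorted(scores, key=scores.get, reverse=True)[:5]

copilot_top, replika_top = top5(copilot_scores), top5(replika_scores)
print("Copilot top 5:", copilot_top)
print("Replika top 5:", replika_top)
print("Shared expressions:", set(copilot_top) & set(replika_top))
# The shared/top expressions would then seed the hand-gesture and shape prompts.
```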

The AI companion who cares

Replika is a generative AI chatbot app released in November 2017. The chatbot is trained by having the user answer a series of questions to create a specific neural network. The chatbot operates on a freemium pricing strategy, with roughly 25% of its user base paying an annual subscription fee.

Your everyday AI companion

Microsoft Copilot is a chatbot developed by Microsoft and launched on February 7, 2023. Based on a large language model, it is able to cite sources, create poems, and write songs.

Would you contemplate evaluating machine-machine interactions, such as comparing one companion AI's empathy, comprehension, or expression of emotions with another's; for example, assessing the outputs produced by Copilot as judge?
— Arunav's comments:

Arunav presented a brilliant idea. I came to realise the significance of not relying solely on my own analysis to assess the AIs' empathy, comprehension, or expression of emotions; it became crucial to incorporate the judgements made by the AI companion agents themselves. To put this into practice, I initiated the process of having each companion AI assess its thoughts on the other's concluding paragraphs. This served as a practical step towards implementing the approach. However, Arunav's suggestion of performing this step algorithmically is even more exciting, and I am keen to find out more.

Unfamiliar with the concept of Multi-Agent Systems, I conducted a search and learned that it is a well-established field in computer science, which facilitates agents learning from one another and solving problems.


I decided to gain further knowledge of "Multi-Agent Systems"

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve.
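As a toy picture of what "multiple interacting intelligent agents" means in code, here is a minimal sketch (with no real language models behind the agents): two agents pass messages through a shared loop, each reacting to the other's output.

```python
# Toy multi-agent sketch: two stand-in agents exchange messages in turn.
class Agent:
    """A stand-in agent; a real system would call a language model in respond()."""
    def __init__(self, name, style):
        self.name, self.style = name, style

    def respond(self, message):
        return f"[{self.name}, {self.style}] replying to: {message!r}"

companion = Agent("Replika-like", "empathetic")
assistant = Agent("Copilot-like", "task-focused")

message = "How would you comfort a lonely user?"
for _ in range(2):                     # a short multi-turn exchange
    message = companion.respond(message)
    print(message)
    message = assistant.respond(message)
    print(message)
```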

 

The Alan Turing institute ( UK Multi-Agent Systems ):  https://youtu.be/Z5JZUURP4TE

Research findings: Copilot

Research findings: Replika

My conversation with Copilot about Replika, and with Replika about Copilot!

At first, I tried asking each companion AI to share its thoughts on the completed sentence. However, after considering Arunav's recent update, I realised that it would be more beneficial to tap into Arunav's expertise in creating structured prompts specifically designed for empathy assessment.

Prompt Engineering

Arunav

Arunav suggested expanding on this further: in terms of the algorithmic approach, we have an alternative option that doesn't involve coding. We can utilise ChatGPT or Gemini to perform tasks such as 'Sentiment Analysis' or 'Empathy Assessment' by leveraging the output of the Companion AIs in a more structured manner (known as Prompt Engineering). This involves clearly defining the context, role, and objective, and then requesting a sentiment analysis.

I looked up prompt engineering!

Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform.

(Wikipedia) Okay, well, that's a nice basic introduction.

I had a go

Experiment: Inspired by Arunav's suggestion, I embarked on conducting an emotional understanding assessment using ChatGPT. I recognised the importance of actively seeking understanding, even if it required modifying or completely altering my approach based on Arunav's suggestions and the outcomes. As a creator, I thrive on learning through the process of making.
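For anyone wanting to reproduce this step programmatically, here is a minimal sketch of the same structured prompt (context, role, objective) sent through the OpenAI Python library; the model name and the companion output are illustrative, and the identical prompt can simply be pasted into the ChatGPT or Gemini web interfaces instead.

```python
# Minimal empathy-assessment sketch via a structured prompt (Prompt Engineering).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

companion_output = "I'm always here for you. Tell me more about your day?"  # hypothetical

prompt = (
    "Context: you are evaluating the output of a companion AI chatbot.\n"
    "Role: act as an empathy-assessment rater.\n"
    "Objective: score the text below for empathy from 1 (none) to 5 (high), "
    "then justify the score in two sentences.\n\n"
    f"Text: {companion_output}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```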

How can I visually represent this aspect of the research Arunav and myself have concluded?

  • My idea is to incorporate it into the touch board, where I will use my narrated voice to provide conclusive paragraphs for each Companion AI. These paragraphs will serve as evidence showcasing the emotional understanding demonstrated by each Companion AI alongside sketchbook outcomes.

Exploring the Potential of Empathetic Robots/Chatbots in Healthcare

Inside the robotarium: building companions for our old age

(Rhys Blakely, Science Correspondent, The Times, Friday February 23 2024, 9.50am)

An estimated 1.4 million older people in Britain feel lonely, but scientists in Edinburgh think ‘empathetic’ machines could help

Arunav provided an interesting article that explores the potential of introducing empathetic robots into the healthcare system.

Thoughts on this article:

Considering Population Growth and the NHS: Exploring the Role of Robots

As our population continues to grow at a remarkable pace, it becomes evident that we must contemplate the integration of robots to assist and alleviate some of the burdens faced by humans, particularly within the NHS. What stood out in this article is the recognition that robots cannot simply replace humans. It is indeed a daunting thought that an elderly person might find themselves so desperate and lonely that their only form of companionship comes from interacting with a robot. While this prospect may seem unsettling, it is important to acknowledge that in certain circumstances, having an artificial friend is preferable to having no friend at all.

The AI Psychotherapist: Examining the Pros and Cons:

The AI Psychotherapist: A Case For and Against

Arthur Juliani

Thoughts on this article:

I came across an intriguing article that delved into the advantages and disadvantages of using an AI as a replacement for a human psychotherapist. One of the notable benefits is the ability to receive 24-hour attention when needed, which may be challenging for a human therapist to provide. Additionally, in psychotherapy, building a trusting relationship with a therapist is crucial, and not being able to access them at all times can pose difficulties. This is where an AI chatbot, functioning as an AI psychotherapist, could be a valuable tool. Another advantage highlighted was the AI chatbot's capacity to quickly absorb large amounts of data, surpassing human capabilities in identifying and analyzing issues. Moreover, the AI can retain extensive historical data on a person, which would be challenging for a human to assimilate. These are the positive aspects to consider.

On the other hand, it is important to explore how AI and psychotherapists can work together in a mutually beneficial way. We must acknowledge that there are certain aspects where humans excel. As experiential beings, we have a deep understanding of the physical world and personal experiences, which an AI lacks. Despite its wealth of online knowledge, AI's limited understanding of human emotions and connections is a significant limitation; human therapists are better equipped in this regard. Furthermore, as experiential beings, we crave physical contact and the presence of another person, especially in situations that require trust and deal with sensitive emotions. It may be challenging to replace the physical presence and interaction offered by a human therapist.

In summary, while an AI psychotherapist can provide 24-hour availability and process vast amounts of data efficiently, there are certain aspects, such as human understanding and the need for physical presence, where human therapists have the upper hand.