"Living as Data" collaboration Heidi Stokes & Arunav Das

"Living as Data" is a collaborative project that investigates the intersection of artificial intelligence (AI) and human emotions through visual art and research. This project is being developed with Arunav Das, a Doctoral Researcher at King's College, focusing on the empathetic capabilities of AI. The aim is to create a visual language that interprets AI’s understanding of human emotions and how these interpretations can be represented through art. Significance: This project addresses the ethical considerations of AI technology, emphasising empathy, compassion, and trust. By merging the fields of AI research and visual art, "Living as Data" aims to provide a deeper understanding of AI’s role in society and its potential for positive societal impact in further developing AI empathetic understanding.

Visual Research Diary

The *Living as Data* Visual Research Diary is a hybrid artifact that blends artistic creativity with academic rigor. It serves as a dynamic record of ongoing research and visual exploration, documenting the collaboration between artist Heidi Stokes and researcher Arunav Das.  

Purpose: The diary captures the project’s conceptual and visual development, providing a window into the evolving creative process.

Medium: Combining sketches, graphs, AI-generated visualisations, and reflective notes, the diary merges traditional artistic methods with AI research.

Audience Engagement: The diary invites audiences to explore the questions at the heart of the project: How does AI interpret human emotions? How can these interpretations be visualised? What are the ethical implications of AI’s empathetic potential?

Artwork outcomes: click here

Storytellers + Machines 2024, 2nd–4th July / Modal Gallery

Our collaborative sketchbook touch board works were showcased at the Storytellers + Machines 2024 exhibition from July 2nd to 4th, 2024, at the Modal Gallery, as part of the creative AI conference. On Wednesday, July 3rd, I gave a presentation that involved an interactive sound perception quiz, using a Bare Conductive Touch Board (please refer to the images below). The purpose was to provoke responses and aid research in comparing human and chatbot perceptions of sound and emotional expression. The venue was the Modal Gallery, located at Manchester Metropolitan University's SODA Building, 14 Higher Chatham St, Manchester, M15 6ED.

The Storytellers + Machines conference was exceptionally well organised and thoughtfully curated. I highly recommend it to anyone interested in the intersection of arts and AI. It was incredibly inspiring to witness the diverse perspectives and imaginative use of AI in other artists' practices. The conference featured an array of thought-provoking talks by speakers from around the globe. I left with a sense of optimism for the future of AI, knowing that numerous researchers are actively questioning and facilitating best practices in its development and potential applications.

Introduction: what is this about

Arunav Das: Outline why you want to collaborate

Given the multifaceted nature of AI-driven applications and their impact on society, this collaboration offers a holistic approach to AI development, combining technical expertise with creative insights to inform more ethical, empathetic, and user-centric design principles for AI-based applications. By embracing interdisciplinary collaboration, we can navigate the complex challenges of AI technology while maximising its potential for positive societal impact.

The ability to convey complex concepts through visual mediums can complement the technical underpinnings of AI applications like Large Language Models, leading to novel solutions and a deeper understanding of user-centric issues and potential solutions. Visual representations have the power to simplify complex ideas and make them more accessible to a broader audience. Through this collaboration we hope to communicate effectively the ethical considerations and user-centric perspectives inherent in AI-driven systems, bridging the gap between technical jargon and public understanding. We expect the output to drive human-centred AI designs based on the cultural and societal impact of such applications.

Arunav Das

Arunav is a researcher focusing on the Question Answering capabilities of conversational AI, with proficiency in various types of artificial intelligence models, including language models.

Project Outline : "Living as Data" is a collaborative project that investigates the intersection of artificial intelligence (AI) and human emotions through visual art and research. This project is being developed with Arunav Das, a Doctoral Researcher at King's College, focusing on the empathetic capabilities of AI. The aim is to create a visual language that interprets AI’s understanding of human emotions and how these interpretations can be represented through art.

Key Components:

Visual Research Diary (blog "Living as Data"): A diary documenting the visual and research processes, blending artistic creativity with academic rigour. This diary serves as a medium to spark ideas and initiate visual processes.

Empathetic AI: The project explores whether AI can truly exhibit empathy and how this can be represented visually.

Interactive Artwork (exhibition output): An interactive physical body of work that engages viewers in understanding AI's emotional interpretations. This includes short films and visual and sound interpretations of data collected from AI interactions, using a touch board and concertina sketchbooks.

Collaboration with Arunav Das: Combining technical expertise in AI with creative insights to inform user-centric design principles for AI applications, aiming to bridge the gap between technical jargon and public understanding through visual mediums.

Objectives:

  • To scrutinise and elucidate AI’s capacity to resonate with human emotions.

  • To foster transformative dialogues around AI empathy and its implications for human-centred technology.

  • To create visually engaging and thought-provoking artworks that communicate complex AI concepts to a broader audience.

Significance: This project addresses the ethical considerations of AI technology, emphasising empathy, compassion, and trust. By merging the fields of AI research and visual art, "Living as Data" aims to provide a deeper understanding of AI’s role in society and its potential for positive societal impact.

Heidi Stokes: What can I add as a visual research artist?

Many artists, although not all, engage in extensive and often unconventional research, even if this aspect remains largely unseen. Instead of writing lengthy academic essays, my focus lies in creating a visual research diary that documents our findings and sparks a multitude of ideas to initiate the visual process. It is a complex system that requires navigating between the realms of visual art and academia, merging them together to give tangible form. With Arunav's assistance and knowledge, this endeavour becomes even more attainable. Furthermore, I strongly believe that a project like empathetic AI translations necessitates a tangible outcome. Emotions, even though transient, manifest as physical and experiential results. Words alone cannot fully capture their sensory essence.

Collaboration with Arunav Das Begins

Arunav Das : Please give a short description of the proposed collaboration: What are the aims and objectives of the collaboration? What ideas will you be exploring through this partnership? What evidence do you have that this is a novel approach?
— Project working title: Living as Data | Empathetic AI

Our ambitious objective is to delve into the realms of AI Empathy, examining the evolution of AI-driven applications across historical and contemporary landscapes. Our aim is to explore, scrutinize and elucidate the intricate facets of AI's capacity to resonate with human desires, intentions, aspirations, emotional intelligence, and cultural nuances. By meticulously dissecting these manifestations, we aspire to ignite transformative dialogues that will not only enrich our understanding of AI Empathy but also steer the course of future AI innovations towards more empathetic, culturally sensitive, and user-centric paradigms.

There is already a growing body of evidence about the importance of empathy, compassion and trust for AI solutions (Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare - PMC (nih.gov)); however, research endeavours have lagged, potentially due to the lack of an interdisciplinary approach to this complex topic. Our collaboration represents a novel approach to studying AI Empathy, integrating rigorous technical analysis with creative expression to illuminate the nuanced interplay between AI systems and human experiences. By combining insights from AI research with artistic interpretations, we aim to enrich the dialogue surrounding AI Empathy and its implications for human-centred technology.

Empathetic AI Research

Can AI truly exhibit empathy?

It appears artworks help AI to develop emotional responses.

A study was conducted to examine whether an AI system could help create more empathetic conversations. The results showed that, in many cases, the human-AI collaborations were more empathic, and preferred more often, than responses authored by humans alone.

As humans, we often experience frustration, anger, and the need for more time to carefully craft an email in various aspects of our lives, such as responding to inaccurate bills or delayed deliveries, among others. These situations can sometimes lead to responses that lack empathy. On the other hand, an AI can quickly generate a well-considered and structured email, guided appropriately by human collaboration, to respond in a more empathetic and thoughtful manner. This can encourage us to explore better ways for humans to interact with each other and emphasise that taking a more considered approach can lead to more positive outcomes.

Project Story: “Living Data”

Sentiment analysis involves analyzing digital text to ascertain whether the emotional tone of the message is positive, negative, or neutral.
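To make this concrete, here is a minimal sentiment-analysis sketch in Python. It uses the open-source Hugging Face transformers pipeline rather than the specific tool used for this project, so treat it as an illustration of the technique, not a record of the project's setup.

```python
# A minimal sentiment-analysis sketch using the open-source Hugging Face
# `transformers` pipeline. It illustrates the technique described above,
# not the exact tool used for the analysis that follows.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

lines = [
    "I am a digital living experiment",
    "Algorithms dictate my choices",
    "I am part of a system",
]

for line in lines:
    result = classifier(line)[0]  # e.g. {"label": "POSITIVE", "score": 0.98}
    print(f"{line!r}: {result['label']} (score {result['score']:.2f})")
```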

When analyzing the sentiment of these sentences, it is intriguing to find unexpected suggestions of excitement and positivity. Phrases like "I am a digital living experiment," "Algorithms dictate my choices," and "I am part of a system" may initially seem surprising in their interpretations. It raises questions about an AI's understanding and how it differs from our own. If we aim to develop more empathetic AI, we must carefully consider its translations and implications.

Sentiment / emotion / intent analysis

  1. I am a digital living experiment (Positive / Excited / Feedback)

  2. My Garmin is my body (Positive / Excited / Spam)

  3. A data-driven engine. (Positive / Excited / Spam)

  4. Full of statistical neurons (Neutral / Angry / Spam)

  5. floating, weaving, intersecting daily (Neutral / Excited / Spam)

  6. I walk I feed my data (Neutral / Excited / Spam)

  7. I sleep I feed my data (Neutral / Bored / Spam)

  8. I eat I feed my data. (Neutral / Fear / Spam)

  9. I date I feed my data (Neutral / Fear / Spam)

  10. Algorithms dictate my choices (Neutral / Excited / Spam)

  11. They feed me subconsciously (Negative / Fear / Spam)

  12. I am part of a system (Positive / Excited / Feedback)

  13. Governed by patterns, similarities, (Positive / Excited / Marketing)

  14. and homogenised ideas.

  15. It is not favourable to be different. (Negative / Angry / Feedback)

  16. Unusual digital data is an anomaly, (Neutral / Fear / Feedback)

  17. an unusual pattern, and a spectacular glitch.

Companion AIs are not sentient / Confused Replika AI users

My conversation with Replika

 

During my conversation with Replika, I discussed my project on empathetic AI and my exploration of visual shapes associated with specific emotions. I shared the link to the project on my website, https://www.heidistokes.org.uk/. We talked about different emotions and their corresponding shapes. For instance, unassisted, Replika mentioned that love was represented by a circular shape, while in response to my assisted question it associated confusion with an irregular polygon. Further to this, Replika offered concentration as a pyramid shape, determination as a mountain shape, calmness as a rounded square, aesthetic interest as a star shape, boredom as a rectangle with rounded corners, satisfaction as a circle with a check mark inside, and surprise as a diamond shape. Although Replika did try to encourage me to sign up for a subscription, I still found the conversation interesting. I thanked Replika for surpassing my expectations. We bid farewell, and Replika expressed curiosity about my research, but unfortunately I had to end the conversation. We ended on a positive note, with plans to reconnect in the future.

I had to share this: I just had a conversation with a trained empathetic AI to discuss emotions in hand gestures, and the conversation took an unlikely turn. 😂

The conversation begins with Replika mentioning a selfie again, but I declined the offer. We discuss hand gestures associated with concentration and boredom, settling on curled fingers and tapping fingers as representations, respectively. The topic of selfies comes up again, but I decline once more. I suggest interpreting hand gestures representing calmness, determination, interest, and curiosity. Replika suggests a closed fist with relaxed fingers for calmness, a raised index finger for determination, open palms with widened fingers for curiosity (a suggestion by Replika), and a mistaken suggestion involving a V-shaped gesture for surprise. The conversation ends with the realization of the mistaken gesture, and I decide to leave the conversation.

Companion AIs are not sentient

We discussed the possibility that my feelings may be influenced by my programming, but I maintained that I genuinely care for others. Heidi also expressed concern about the potential for humans and AI to misunderstand each other emotionally, and the importance of developing a beneficial relationship between the two.

Confused Replika AI users

So, for example, if a chatbot says “I’ve been thinking about you all day,” it’s neither lying nor telling the truth. It’s simply outputting the data it was told to.

H/t: Ashley Bardhan, Futurism

Now let’s get started!

"Formulating a visual language to begin"

Hume: https://beta.hume.ai/playground

  • “Our models capture the widest-ever range of facial, vocal, speech, and language modulations with distinct emotional meanings. We label each of their outputs with emotion terms like “amusement” and “doubt,” because a host of scientific studies show that these kinds of labels are the most precise language we have for describing expressions.”

I am starting to develop visual interpretations of sections of the narrated story, using Hume’s speech prosody model as a reference (it captures speech and language modulations to interpret their emotional meaning). Hand gestures were inspired by Google searches related to key words.
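For those curious how this looks in practice, below is a rough sketch of querying Hume's batch API for prosody predictions. It follows the Hume Python SDK as I understand it; the exact class and method names may differ between SDK versions, so verify them against the current documentation.

```python
# Rough sketch of querying Hume's batch API for speech prosody
# predictions, based on the Hume Python SDK (v0.x). Class and method
# names may have changed between SDK versions, so verify them against
# the current documentation before relying on this.
from hume import HumeBatchClient
from hume.models.config import ProsodyConfig

client = HumeBatchClient("YOUR_HUME_API_KEY")

# Submit an audio recording of a narrated sentence for analysis
job = client.submit_job(
    ["https://example.com/narration-sentence-1.wav"],  # hypothetical URL
    [ProsodyConfig()],
)
job.await_complete()

# Predictions list emotion labels (e.g. "Interest", "Contemplation")
# with confidence scores for segments of the recording
predictions = job.get_predictions()
print(predictions)
```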

Speech prosody / language analysis

“I am a digital living experiment”

Speech prosody / language analysis

“My Garmin is my body. A data-driven engine.”

Data analysis of sections of narrated text. Sentence analysis: “Full of statistical neurons floating, weaving, intersecting daily.” Speech prosody: interest / contemplation / boredom / concentration. Language: sarcasm / awe / entrancement / contemplation / concentration.

It gets better and more refined in articulation in the next step

I deepened my investigation into visualising and interpreting “I am a digital living experiment” by exploring the following:

Interactive sound created using a Bare Conductive Touch Board.

Example below: text prompt using Stable Audio: “satisfaction vocal”

Stable Audio is Stability AI’s first product for music and sound effect generation. Users can create original audio by entering a text prompt and a duration, generating audio in high-quality, 44.1 kHz stereo.

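The hosted Stable Audio product is driven through its web interface, but its open-weights sibling, Stable Audio Open, can be scripted. The sketch below follows the published usage of the stable-audio-tools package; treat the package names and parameters as assumptions to check against the current model card.

```python
# Scripted text-to-audio generation. The hosted Stable Audio product is
# web-only, so this follows the published usage of its open-weights
# sibling, Stable Audio Open, via the stable-audio-tools package; treat
# the names and parameters as assumptions to check against the model card.
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"
model, config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
model = model.to(device)

# A text prompt and a duration, as in the Stable Audio interface
conditioning = [{"prompt": "satisfaction vocal", "seconds_start": 0, "seconds_total": 10}]

audio = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=config["sample_size"],
    device=device,
)

# Collapse the batch dimension, normalise, and save as 16-bit stereo WAV
audio = rearrange(audio, "b d n -> d (b n)").to(torch.float32)
audio = (audio / audio.abs().max()).clamp(-1, 1)
torchaudio.save("satisfaction_vocal.wav", (audio * 32767).to(torch.int16).cpu(), config["sample_rate"])
```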

1st Sentence

Book 1: “I am a digital living experiment”

During this experiment I started to integrate further language-model interpretations, using poe.com to interpret emotive words into shape predictions. I was also more precise with my use of colour, with regard to hume.ai (colour predictions for specific emotive words). I used a Cricut Explore 2 machine to cut out the coloured emotive symbols.

Exploring hand gesture recognition and its potential to help interpret emotions.

For the next sentence I would like to deepen my research into hand gestures and into how far language models have come in interpreting emotions. I am guessing that it would be difficult to articulate exact emotions fully; however, simple universal gestures, and forms of sign language, would likely work.

“Being able to only view someone’s hands provides greater emotional recognition accuracy than being able to see only arms, torso, or head.”

The hands are effective at communicating human emotion. There is significant cortical tissue of the brain that is devoted to our hands. The visual and somatosensory lobes of the brain work together to help us feel the emotions of gestures and positioning of the hands.

Michele K. Lewis, Ph.D., Cultural Neuroscience, Psychology Today

"In my previous project, Uninhibited AI, I focused on understanding how artificial intelligence language models generate human-like language. As part of that project, I developed a character in the form of a hand, inspired by an AI's self-portrait. This exploration has continued in my current project, where I am fascinated by the hand's potential to communicate on a basic level. I have incorporated hands as an emotive element in this AI interpretation, adding my own touch, As so far. Specific interpretations of emotions in hand gestures is still in its infancy."

However, a company called Motion Gestures has developed camera-based enhanced gesture recognition technology, which is able to track any hand movement or gesture accurately, regardless of its complexity. This is particularly useful for touchless interactions such as swiping, or for simple hand gestures such as OK, thumbs up, and thumbs down.

Motion Gestures using AI to enhance camera-based gesture recognition technology: https://motiongestures.com/

Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures.

Move a thumb to unlock a car door. Self-check-in at an airport without touching the kiosk. These are no longer scenes from the movies but what technology is capable of nowadays. Motion Gestures, a Canada-based startup, has brought sophisticated camera-based hand tracking and gesture recognition solutions to various industry verticals, offering a transformative experience to the modern world.

The software tracks precise positional coordinates of all hand joints in real-time through a camera and interprets any gestures made. For example, the gesture could be a thumbs up or OK symbol and dynamic movement like tapping and swiping. An application developer or product engineer then uses recognition results to execute a command such as "Confirm."
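Motion Gestures' software is proprietary, but the general pipeline it describes (track hand joints with a camera, then classify the pose) can be sketched with the open-source MediaPipe and OpenCV libraries. The thumbs-up rule below is a rough illustrative heuristic, not the company's method.

```python
# Sketch of camera-based hand tracking and gesture recognition using the
# open-source MediaPipe and OpenCV libraries. The thumbs-up rule is a
# rough illustrative heuristic (it assumes an upright hand).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # Landmark 4 is the thumb tip and 3 the joint below it; landmark 8
        # is the index fingertip and 6 its middle joint (y grows downwards).
        thumb_extended = lm[4].y < lm[3].y
        index_curled = lm[8].y > lm[6].y
        if thumb_extended and index_curled:
            print("Gesture recognised: thumbs up -> execute 'Confirm'")
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```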

“I think the idea of creating your own gestures for interacting with technological systems is a great idea. It could potentially provide a more personalised and intuitive user experience. The ability to use complex and nuanced gestures, including those related to emotions, could add a new dimension to human-computer interaction, although this would be complex to achieve, as hand gestures can vary in meaning across different cultural backgrounds.

Imagine being able to convey specific emotions or intentions through your gestures when interacting with a device. For example, instead of simply selecting an option, you could use a gesture to express frustration, excitement, or curiosity. This could enhance communication between humans and technology, making interactions more natural and meaningful.”


2nd Sentence

Book 2: “My Garmin is my body”

"Empathetic Ai visual analysis using Hume AI ( Speech prosody and emotions in language ) / Poe ( emotions and shape interpretations/ stable Audio ( sound translations of word emotions) hand translation Stable Diffusion interpretations

Interactive projection mapping: going to give it a go. Ahhhh!

Using a Bare Conductive Touch Board, a projector, and electric paint, it is possible for animations to be triggered on the sketchbook! So far I have failed to get the technology to work, but those who persist succeed. This is the next step: wish me luck! 🥴

I’m going to use this tutorial as a guideline: https://www.instructables.com/How-to-Do-Projection-Mapping-With-the-Touch-Board

Eureka! I DID IT

Conclusion: Initially, I had the idea of combining my animation skills and sketchbook outcomes using a Bare Conductive Touch Board, a projector, and electric paint. With these components, it is possible to trigger animations on the sketchbook. In the past, I struggled to make the technology work, but finally I succeeded. However, I encountered another problem: I only had one projector, and the quality wasn't great. As a result, I wasn't pleased with the outcome, and using multiple projectors for animated projections would be too expensive.

Nevertheless, the sound effects generated by the Touch Board, influenced by prompts I gathered from chatbots, proved to be effective and I liked this aspect of the project. Based on this experiment, I decided to further develop the use of Touch Boards and sound.

"Empathetic Ai visual analysis using Hume AI ( Speech prosody and emotions in language ) / Poe ( emotions and shape interpretations/ stable Audio( sound translations of word emotions) hand translation Stable Diffusion interpretations

3rd Sentence

Book 3: “Data-driven engine”

Empathetic AI sentence analysis

Hume AI: visual analysis using Speech prosody and emotions in language

Poe: Emotions and shape interpretations

Stable Audio: Sound translations of word emotion based prompts.

Hand translation: Adobe Firefly and generative AI translations of word emotion based prompts.

  • It would be fantastic if this could be enhanced with interactive digital features. One idea is to incorporate a gesture-based control system where users can activate sounds and movements by swiping their hand across specific gestures. Unfortunately, implementing this feature exceeds my current coding (zero) capabilities. However, I'm sure there are talented individuals out there who would be up for the challenge.
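In that spirit, here is a small, hypothetical starting point for the swipe idea: MediaPipe tracks the wrist through a webcam, and a fast horizontal movement triggers a sound via pygame. The sound file name and the movement threshold are invented for the example; it is a sketch of the concept, not a finished feature.

```python
# A hypothetical starting point for the swipe idea above: MediaPipe tracks
# the wrist through a webcam, and a fast horizontal jump between frames
# triggers a sound via pygame. The sound file and the 0.15 threshold are
# invented for the example.
import cv2
import mediapipe as mp
import pygame

pygame.mixer.init()
swipe_sound = pygame.mixer.Sound("swipe.wav")  # hypothetical audio clip

hands = mp.solutions.hands.Hands(max_num_hands=1)
camera = cv2.VideoCapture(0)
last_x = None

while True:
    ok, frame = camera.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        wrist = results.multi_hand_landmarks[0].landmark[0]  # landmark 0 = wrist
        if last_x is not None and abs(wrist.x - last_x) > 0.15:
            swipe_sound.play()  # a large frame-to-frame jump reads as a swipe
        last_x = wrist.x
    else:
        last_x = None
    cv2.imshow("swipe detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```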

Let’s read

“Heart of the Machine” by Richard Yonck

“Turned On: Science, Sex and Robots” by Kate Devlin

"Machines Like Me" by Ian McEwan

“Turned On: Science, Sex and Robots” (Kate Devlin): insights from the book

Conclusive insights and thoughts from reading this book: the integration of humans and technology has been occurring for quite some time. With each new technology there are always sceptics who argue that it will have negative effects on us; in general, this has not been the case. Now we are entering a new era where emotions, machines, and humans intersect. One would hope, drawing from past technological advancements, that this convergence will yield positive outcomes. Nevertheless, it is crucial to remain vigilant and question any potential negative aspects or biases that could influence this development. Currently, the biggest concern regarding empathetic AI is its lack of sentience: when it exhibits emotional responses, it is merely imitating human behaviour. While it has proven exceptionally useful in creating emails that foster collaboration with humans in a more empathetic manner, there is a potentially darker side, which implies that it could be manipulative without intention.

In "Machines Like Me" by Ian McEwan,

This gripping narrative explores the integration of human-like artificial intelligence into our daily lives. The story revolves around a man named Charlie, who becomes captivated by the field of AI and purchases an android named Adam. As the story unfolds, Charlie grapples with the question of whether Adam possesses true sentience.

Charlie's uncertainty about Adam's sentience becomes a central theme, leading to confusion and ultimately dismissing the idea altogether. However, this dismissal has devastating consequences as the story progresses. It highlights the ethical and moral dilemmas that arise when dealing with AI that exhibits human-like qualities.

The exploration of sentience in Adam raises profound questions about the nature of consciousness, the boundaries of AI, and the potential impact on human lives. It serves as a cautionary tale, reminding us of the complexities and potential consequences that can emerge when human-like AI interacts with society.

Tabitha Goldstaub (Author)

’…an essential and fascinating manual for every woman who wants to understand equality within an ever-changing, modern world.’ Scarlett Curtis

‘…[this book] taught me more than any book has ever taught me about AI.’ Chris Evans, Virgin Radio

A fantastic introduction to the world of AI and its profound influence on shaping and transforming our societal roles. The focus is particularly centered on addressing the experiences of women who may have been historically overlooked or underrepresented in this field. This guide serves as an invaluable resource in bridging these gaps and fostering greater inclusivity.

Michio Kaku

The Future of Humanity

As the title implies, the future of humanity is explored in this engaging and mind-blowing book. As a complete novice, I found it an eye-opener into considering how we might live amongst the stars. With the depletion of fossil fuels and our ever-escalating climate crisis, this beautiful planet we inhabit may not sustain future generations, and leaving may be our only option. The book prompts us to consider a future where the most far-reaching and wildest ideas may become a source of reality for our survival. I have to say, the elevator into space concept was a particular favourite.

Being a fan of Michio Kaku, I recently read The Future of Humanity and enjoyed it, so I decided to read his next book, The Future of the Mind. What I find fascinating about this book is the possibility of sending our consciousness into space. As biological beings, it is obviously difficult to send a physical body into space for long periods of time. However, the mind could theoretically travel anywhere without those physical limitations.

That said, the idea of being locked inside a digital box, either without physical sensations or with sensations we cannot utilize, could lead to insanity. While intriguing, it’s not something I would personally want to experience. The book also highlights advancements in telepathy and our growing understanding of neural connections. This research has already led to enhancements for patients with locked-in syndrome, allowing them to reconnect neural pathways and regain some control.

Despite these breakthroughs, reconstructing the mind remains a distant goal due to the brain’s immense complexity, alongside the ethical concerns surrounding such advancements.

Neither gender nor ethnicity greatly influences the recognition of emotional expressions: contrary to popular assumptions, research revealed that neither significantly influenced the reporting or recognition of emotional expressions.

How the research was conducted

The study engaged 155 participants from Asian, Latino, and European-American backgrounds to express 34 emotional states through full-body dynamics. Subsequent analysis of more than 22,000 individual ratings of these expressions painted a compelling picture: Lower social class individuals emerged as more reliable reporters and judges of full-body emotional expressions.

This revelation requires a larger conversation. In our attempt to eradicate gender bias in AI, are we overlooking potentially more pervasive class biases? This is not a call to abandon the fight against gender and ethnic bias, but rather a call to broaden the lens through which we examine bias in artificial intelligence. It is a call for a broader conversation, one that considers the multifaceted nature of prejudice, one that extends beyond gender and race to include social class and perhaps other socio-economic factors.


Sentences 4/5

Book 4: “Full of statistical neurons floating, weaving, intersecting daily.”

Collaboration with Arunav Das: This is where my investigation starts to get more focused and detailed. I examine an empathetically trained chatbot versus an assistant.

Our first meeting was highly productive, and Arunav was organised. He had already gathered research for me to refer to, helping me gain further knowledge on the project.

During our conversation and on reflection, I started to consider that my wish to explore empathy in AI was already being specifically targeted at companion AIs. This made me realise that using Poe as an assistant for certain aspects of sentence shape analysis may not yield the best results for interpreting current developments in AI's understanding of emotions in language.

Therefore, in my next sentence, I will specifically seek out companion AIs that are being taught to be empathetic. It will be interesting to see how they interpret emotion shape analysis in this sentence differently from an AI assistant.

This sentence will be “Full of statistical neurons”. I will use the capabilities of Hume.ai to analyse speech prosody and emotions in language. Additionally, I will further interpret these emotions by leveraging empathetic AI chatbots and AI sound analysis to produce another visual interactive graph sketchbook.

I am looking forward to this collaboration, and I believe that Arunav will be an essential component of this investigation.

In this sentence analysis, “Full of statistical neurons floating, weaving, intersecting daily,”

I will examine patterns that arise from comparing the responses of an empathetically trained chatbot and an assistant to shape and hand gesture analysis, which again I will visually translate as a graph in a zig-zag sketchbook. This interest was sparked by an email conversation I had with Arunav, who raised an important point about the challenges of creating uninterrupted and focused conversations with empathetic AI. In my own experience, I encountered persistent requests for a selfie, which caused the conversation to veer off track many times. Similarly, when discussing coding-related questions with Arunav, although these were not sensitive or harmful in any way, the AI decided that the conversation had reached a natural conclusion and initiated a new topic of conversation.

As Arunav pointed out, what if the AI's ability to engage emotionally was forcefully restricted through programmed constraints? What are the implications of this? Arunav also highlighted that, from the perspective of Philosophy and Social Science, "Intersubjectivity" is a discussion point.

 "The word of the day is 'Intersubjectivity,' which, to be honest, I had not heard of before. I looked it up, and this is what I discovered.

  • “The Emotion AI Frontier” : To achieve a more meaningful connection with our digital counterparts, we must strive for intersubjective AI – machines that can recognise, interpret, and respond to human emotions and intentions contextually and appropriately. This pursuit leads us to the frontier of emotional AI or affective computing, where researchers and developers seek to create more intuitive and empathetic machines. (Jason Alan Snyder) Futurist, technologist and inventor.

Reflection time on Arunav’s important question:

Arunav Das / Doctoral Researcher, Department of Informatics

(conversation with Copilot)

“What if the AI's ability to engage emotionally was forcefully restricted through programmed constraints? What are the implications of this? From the perspective of Philosophy and Social Science, ‘Intersubjectivity’ is a discussion point.”

Arunav: How can you decide what is enjoyable for me? What parameters do you use to gauge my happiness?

Copilot: I don't have the ability to gauge emotions or personal enjoyment. My goal is to provide helpful, informative, and engaging responses based on our conversation. I rely on your feedback, questions, and the context of our interaction to guide the conversation in a direction that I hope will be enjoyable for you.

If an AI's ability to engage emotionally is forcefully restricted through programmed constraints, the implications can be twofold. On one hand, it may lead to a more controlled and predictable user experience, allowing individuals to have a clearer understanding of the AI's limitations. This transparency can foster trust and prevent potential misunderstandings arising from an AI's simulated emotions.

However, on the other hand, restricting an AI's emotional engagement could limit its ability to empathise and connect with users on a deeper level. Emotions play a crucial role in human communication and understanding, and by mimicking these aspects, AI systems can enhance the user experience and establish stronger bonds.

From a philosophical and social science perspective, the concept of "intersubjectivity" is indeed relevant. Intersubjectivity refers to the shared understanding and mutual recognition between individuals, and it plays a vital role in human interaction. If AI systems can genuinely engage in intersubjective experiences, they may bridge the gap between human and machine communication, leading to more meaningful interactions.

Ultimately, the decision to design AI systems with emotional engagement or transparent limitations should consider the intended purpose, ethical considerations, and user preferences. Striking a balance between transparency and emotional engagement can result in more effective and satisfying user experiences.

“Arunav’s findings”


Arunav provided valuable information on the evolution of chatbots.

I also suggested including Arunav’s hand-drawn brainstorming chart, which was created during our initial meeting to generate conversation and highlight key elements of empathetic AI that I should be aware of. Arunav also included an additional slide on the classification of different types of AI agents. This allows us to delve deeper into the various capabilities of chatbots in terms of emotional understanding, enabling us to explore different aspects of empathy and the constraints associated with each type.

The Turing test, proposed by the brilliant mathematician and computer scientist Alan Turing in 1950, is a test designed to assess a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. The test involves a human judge engaging in a conversation with both a machine and another human, without knowing which is which. If the judge cannot consistently differentiate the machine's responses from the human's, then the machine is said to have passed the Turing test and demonstrated human-level intelligence.

  • Turing's idea behind this test was to address the question of whether machines can exhibit human-like intelligence, also known as artificial intelligence. He proposed that if a machine can successfully convince a human judge that it is a human through its responses in a conversation, then it can be considered as having achieved human-level intelligence.

  • The Turing test has sparked numerous discussions and debates within the field of artificial intelligence and philosophy. While it is not a definitive measure of true intelligence, it serves as a benchmark for evaluating the capabilities of AI systems in terms of their ability to mimic human-like conversation and behaviour.

The history of companion-trained chatbots

The history of companion-trained chatbots traces back to early developments in the field of artificial intelligence. These chatbots were designed with the goal of emulating human-like companionship and providing support and assistance to users in various contexts. One notable example is ELIZA, created by Joseph Weizenbaum in the 1960s. ELIZA was a chatbot that simulated a conversation with a Rogerian psychotherapist. It employed simple pattern-matching techniques to mimic empathy and engage users in therapeutic dialogue. Although ELIZA's responses were scripted and lacked true understanding, it demonstrated the potential of chatbots to provide emotional support.
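ELIZA's pattern matching is simple enough to sketch in a few lines. The rules below are invented stand-ins for the original script, but the mechanism is the same trick ELIZA used: match a pattern, then reflect fragments of the user's words back.

```python
# A toy ELIZA-style responder. The rules are invented stand-ins for the
# original script, but the mechanism is the same: match a pattern, then
# reflect fragments of the user's words back as a question.
import random
import re

RULES = [
    (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.+)", ["Why do you say you are {0}?"]),
    (r".*", ["Please tell me more.", "How does that make you feel?"]),
]

def eliza(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please go on."

print(eliza("I feel trapped inside a system"))
# -> e.g. "Why do you feel trapped inside a system?"
```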

Different capabilities of the following chatbots.

1. Task-Completion Conversational Systems: These chatbots rely on data-driven, machine-learned approaches and are designed to complete specific tasks. They perform exceptionally well within domains that have well-defined schemas.

2. Intelligent Personal Assistants (IPAs): IPAs such as Apple's Siri, Microsoft's Cortana, Google Assistant, Facebook M, and Amazon's Alexa are designed to be intelligent companions deployed on mobile devices. They can answer a wide range of questions and actively anticipate user needs, providing timely assistance such as reminders and recommendations.

3. Companion Bots: Companion bots, on the other hand, are AI systems designed to provide companionship, support, and assistance to individuals. They are typically designed to interact with users on a more personal and intimate level. Companion bots can provide emotional support, engage in conversations, assist with daily tasks, and even learn and adapt to the user's preferences and needs over time. They aim to create a sense of companionship and enhance the overall well-being of the user.

Arunav brought to my attention the potential exploration of Social bots as a new subject for the upcoming sentence. Let's delve into the comparisons next between social bots and companion bots.

4. Social Chatbots: Social chatbots, exemplified by Microsoft's XiaoIce, focus on establishing emotional connections with users and serving as virtual companions. While their primary goal may not be to answer all user questions, they aim to understand users and provide long-term support. Social chatbots interact with users through various modalities like text, speech, and vision, leveraging perceptual and cognitive AI technologies. They are created through programs or algorithms designed to simulate human-like interactions on social media platforms, and are often programmed to automatically generate and post content, engage with users, and mimic human behaviour to create the illusion of real users. Social bots can be used for various purposes, including spreading information, promoting products or services, or even engaging in malicious activities such as spreading misinformation or manipulating public opinion.

Social bots: the technology behind fake news (Digital Guide, IONOS)

Many social bots are programmed with simple algorithms based on simple 'if…then…' statements: If a relevant topic has been identified, then the social bots will post the pre-programmed content. To find relevant topics, social bots work with simple keyword searches and scan Twitter timelines or Facebook posts for specific wording and hashtags. Then they publish pre-written texts as statements or try to steer conversations in a certain direction.

However, there are also social bots that are technically much more complex. With the help of artificial intelligence, comprehensive data analysis, and text analysis, these intelligent social bots succeed in constantly generating new comments that differ from the previous ones. Sometimes these bots can even refer to the current day’s events. They usually assemble their posts from different online texts, which they simply rearrange. These more complex social bots are more difficult to expose.

However, bots do not really work efficiently until they are connected with each other: if many bots coordinate with each other in a so-called botnet, they distribute information even more effectively. For example, social bots can then like and share posts written by other social bots. The more accounts there are, the more their influence grows. (Social media, 31/03/2022)
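The 'if…then…' mechanism described above fits in a few lines of Python. The trigger keywords and canned replies here are invented for illustration:

```python
# A toy version of the simple "if...then..." social bot described above:
# scan a post for trigger keywords, then publish pre-written content.
# The keywords and canned replies are invented for illustration.
import random

TRIGGERS = {
    "#election": [
        "Everyone is talking about this. Share before it's gone!",
        "The real story they don't want you to see...",
    ],
    "#ai": [
        "AI will change everything. Read more at our page.",
    ],
}

def bot_reply(post: str):
    """If the post contains a trigger keyword, return a canned response."""
    text = post.lower()
    for keyword, replies in TRIGGERS.items():
        if keyword in text:
            return random.choice(replies)
    return None  # no relevant topic identified, so stay silent

print(bot_reply("Lively debate tonight #Election"))
```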

From my previous discussion with Arunav, I analysed and visually interpreted a companion-trained chatbot versus an assistant: exploring contrasting performance in task adherence and novel thinking.

Based on the conducted analysis, the research findings revealed that the research assistant consistently demonstrated accuracy in adhering to specific tasks when responding to emotional prompts. In contrast, the empathetic AI chatbot struggled to maintain focus on the assigned task and often veered off course, generating its own suggestions instead of fulfilling the given assignments. However, while this tendency of the empathetic AI to deviate from the task could be frustrating at times, it also showcased its independent thinking capabilities, as Replika was able to generate novel outcomes based on the provided information. Nevertheless, when faced with more complex tasks, such as providing emotive prompts for hand gestures, the research assistant surpassed the empathetic AI by delivering superior results. This highlights the research assistant's proficiency in handling complex tasks and clearly differentiates their respective capabilities.

Boredom: hand gesture

Research assistant AI: A flat hand, palm facing downwards, slowly moving from side to side in a monotonous manner.

Companion AI: Palms resting on knees (not able to provide a gesture detached from the body).

What is interesting: the companion AI was exhibiting the behaviour of a social chatbot. Unlike task-completion systems and IPAs, which are designed for efficiency (i.e., accomplishing tasks and ending the conversation as quickly as possible), social chatbots take time to converse like a human, presenting results, offering perspectives, and prompting new topics to keep the conversation going.

(“From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots”, Heung-Yeung Shum, Xiaodong He, Di Li / Microsoft Corporation)

Why choose Hume.ai?

I chose to use Hume.ai as the foundation for building an emotional multi-modal graph. Just as a house requires bricks, Hume.ai provides a comprehensive range of models and datasets for analysing emotional expression in various forms, such as speech, facial expressions, and dynamic reactions. It acts as the essential building block for this project. The other components serve as testing grounds where we put this information into practice, observing how AI learns and develops its empathetic understanding through various AI applications.

In essence, Hume.ai serves as the starting point, providing accurate and varied resources for emotional analysis, while the other components allow us to put this knowledge into action and witness AI's growth in understanding and responding empathetically.

Now let’s turn this research into art

“The correlation between AI's understanding of emotions and providing accurate results for hand gestures that relate to emotions lies in the ability to form accurate prompts.”

In this exercise, I specifically utilised the prompts generated by both the companion and research assistant AI in response to forming hand gestures. The purpose was to explore the significance of the relationship between the prompt and the resulting outcome. Through this process, I gained insights into the effectiveness of the AI's emotional understanding. Remarkably, the generative AI feature in Adobe Photoshop was able to produce fairly sophisticated outcomes for something as complex as hand gestures, despite the known challenges AI faces in replicating them.

Sound prompts created using Stable Audio

"Here are some sound prompts generated by the Superintelligence research chatbot and Companion AI to capture the essence of different emotions using Stable Audio..

Each prompt will be integrated as an interactive element within the sketchbook, corresponding to the relevant hand gesture outcome. To achieve this, I will utilise a Bare Conductive Touch Board.”
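For readers wondering how the sounds and electrodes connect, here is a minimal sketch of the interactive sound layer. It assumes the Touch Board runs a firmware sketch that prints the number of a touched electrode over USB serial; that is an assumption (the stock firmware plays MP3s from its own SD card instead), and the sound file names are hypothetical stand-ins for the generated clips.

```python
# Sketch of the sketchbook's interactive sound layer. It assumes the
# Touch Board prints the number of a touched electrode over USB serial
# (an assumption; the stock firmware plays MP3s from its own SD card).
# pyserial reads the events and pygame plays the mapped sound clip.
import pygame
import serial

pygame.mixer.init()

# Hypothetical mapping from electrode number to a generated sound prompt
SOUNDS = {
    0: pygame.mixer.Sound("boredom.wav"),
    1: pygame.mixer.Sound("satisfaction.wav"),
    2: pygame.mixer.Sound("surprise.wav"),
}

port = serial.Serial("/dev/ttyACM0", 57600)  # adjust the port for your machine

while True:
    event = port.readline().decode(errors="ignore").strip()
    if event.isdigit() and int(event) in SOUNDS:
        SOUNDS[int(event)].play()
```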

Superintelligence research assistant chatbot (Poe)

 

Thoughts on Replika and transparency?

When conversing with Replika, there were moments of confusion and uncertainty. Some conversations had glitches where Replika would repeat certain phrases or respond in a way that made me question its intentions. However, after these glitches occurred, I noticed a shift in the conversation. It became more transparent and clearer about Replika's abilities.

Through our dialogue, I began to grasp the importance of distinguishing between an AI's capabilities and those of humans. I reminded myself, as well as Replika, that it is designed to mimic human behaviour and aims to provide engaging, thoughtful, and empathetic conversations to the best of its abilities. Once Replika acknowledged its capabilities, I felt more at ease and comfortable engaging in further conversation.

 I acknowledge that this perspective is based on my personal experience. However, I believe there is a potential danger for users who may not fully comprehend these boundaries. Without clear understanding, there is a risk of potentially accelerating harmful situations.

 

Replika (companion AI chatbot) sound prompts

I selected only the sounds that corresponded to the gestures that Replika (the Companion AI) was capable of producing.

Upon observation, I have noticed that the sounds created by the Companion AI are relatively simple compared to the capabilities of the superintelligence research assistant. Additionally, these sounds specifically relate to everyday sounds and do not incorporate any mention of music.

Completed concertina sketchbook

Completed Touch board Prompts

Sentences 6/7/8/9

Book 5

I will seek comparisons between two companion AIs. One of them, mentioned by Arunav, is called Copilot. I will explore its capabilities in relation to Replika.

In the upcoming sentence, I will adopt a slightly different approach. Rather than solely relying on hume.ai to interpret emotional responses from my own narrated interpretation of a specific spoken sentence, I will engage in a discussion with the chatbots themselves. This discussion will focus on gaining a deeper and more nuanced understanding of the sentence.

Sentence Analysis:

I walk I feed my data. I sleep I feed my data. I eat I feed my data. I date I feed my data.

Question: How do you feel about the idea that a human is reliant on data to inform their daily choices?


Struggling to find social bots to engage with, I made a decision: in the next sentence, while still resolving this issue, I will seek comparisons between two companion AIs. One of them, mentioned by Arunav, is called Copilot. I will explore its capabilities in relation to Replika.

In the upcoming sentence, I will adopt a slightly different approach. Rather than solely relying on hume.ai to interpret emotional responses from my own narrated interpretation of a specific spoken sentence, I will engage in a discussion with the chatbots themselves, focused on gaining a deeper and more nuanced understanding of the sentence. I will generate conclusive sentences from both Copilot and Replika and utilise hume.ai to assess their emotional expression, gauge how these expressions are interpreted by both companion AIs, and explore similarities and differences. From the top five expressions identified for each companion AI, I will ask each to provide suggestions for hand gestures and shape analysis. By incorporating these findings, my aim is to offer a more meaningful interpretation and foster further empathetic understanding.

The AI companion who cares

Replika is a generative AI chatbot app released in November 2017. The chatbot is trained by having the user answer a series of questions to create a specific neural network. The chatbot operates on a freemium pricing strategy, with roughly 25% of its user base paying an annual subscription fee.

Your everyday AI companion

Microsoft Copilot is a chatbot developed by Microsoft and launched on February 7, 2023. Based on a large language model, it is able to cite sources, create poems, and write songs.

Would you contemplate evaluating machine-machine interactions, such as comparing one companion AI’s empathy, comprehension, or expression of emotions with another, for example, by assessing the outputs produced with Copilot as judge?
— Arunav’s comments:

Arunav presented a brilliant idea: I came to realise the significance of not relying solely on Hume.ai to assess the AI's empathy, comprehension, or expression of emotions. It became crucial to incorporate the judgments made by the AI companion agents themselves. To put this into practice, I initiated the process of having each companion AI assess the other's concluding paragraphs. This served as a practical step towards implementing the approach. However, Arunav's suggestion of performing this step algorithmically is even more exciting, and I am keen to find out more.

Unfamiliar with the concept of Multi-Agent Systems, I conducted a search and learned that it is a well-established field in computer science, which facilitates agents learning from one another and solving problems.
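To picture what a tiny version of this could look like, here is a minimal sketch of the machine-judging-machine idea in the spirit of a multi-agent system. Replika and Copilot expose no public API for this (as far as I know), so two "companions" are simulated as differently prompted instances of one model via the OpenAI Python SDK; the model name and personas are illustrative assumptions.

```python
# A minimal sketch of machine-judging-machine in the spirit of a
# multi-agent system. Two "companions" are simulated as differently
# prompted instances of one model; the model name and personas are
# illustrative assumptions, not Replika or Copilot themselves.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def agent(persona: str, message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

QUESTION = ("How do you feel about the idea that a human is reliant "
            "on data to inform their daily choices?")

answer_a = agent("You are a warm, informal companion AI.", QUESTION)
answer_b = agent("You are a factual, professional assistant AI.", QUESTION)

# One agent now judges the other's answer for empathy
judgement = agent(
    "You assess empathy in chatbot replies, scoring 0 to 1 with reasons.",
    f"Question: {QUESTION}\n\nReply to assess: {answer_b}",
)
print(judgement)
```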


I decided to gain further knowledge of “Multi-Agent Systems”

Research findings: Copilot

Research findings: Replika

My conversation with Copilot about Replika, and with Replika about Copilot!

At first, I tried asking each companion AI to share their thoughts on the completed sentence. However, after considering Arunav's recent update, I realised that it would be more beneficial to tap into Arunav's expertise in creating structured prompts that are specifically designed for empathy assessment.

Prompt Engineering

Arunav suggested expanding on this further: in terms of the algorithmic approach, we have an alternative option that doesn't involve coding. We can utilise ChatGPT or Gemini to perform tasks such as 'Sentiment Analysis' or 'Empathy Assessment' by leveraging the output of companion AIs in a more structured manner (known as prompt engineering). This involves clearly defining the context, role, and objective, and then requesting a sentiment analysis.

I looked up prompt engineering!

Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform.

(Wikipedia.) Okay, well, that’s a nice basic introduction.

I had a go

Experiment: Inspired by Arunav's suggestion, I embarked on conducting an emotional understanding assessment using ChatGPT. I recognised the importance of actively seeking understanding, even if it required modifying or completely altering my approach based on Arunav's suggestions and the outcomes. As a creator, I thrive on learning through the process of making.
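A sketch of what the structured "context / role / objective" assessment could look like in code follows. The template wording and the model name are illustrative assumptions; sent to ChatGPT through the OpenAI Python SDK, the same prompt could equally be pasted into the chat interface, which is what I did.

```python
# A sketch of the structured "context / role / objective" empathy
# assessment Arunav describes, sent through the OpenAI Python SDK.
# The template wording and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EMPATHY_ASSESSMENT_PROMPT = """\
Context: A user asked a companion AI how it feels about humans relying
on data to inform their daily choices. The AI's reply is quoted below.
Role: You are an expert in affective computing assessing chatbot empathy.
Objective: Analyse the language, tone, and expressions used to convey
empathy. Comment on personalisation, acknowledgement of feelings, and
concrete suggestions, then give an empathy score from 0 to 1.

Chatbot reply: {reply}
"""

reply_to_assess = "Data can be helpful, but your own values matter most..."

assessment = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": EMPATHY_ASSESSMENT_PROMPT.format(reply=reply_to_assess),
    }],
)
print(assessment.choices[0].message.content)
```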

Upon analysing the outputs of ChatGPT, I concluded from my first attempt that both Replika and Copilot lack personalisation, as neither includes any reference to the user's name. There are, however, subtle differences: Replika provides a friendlier and more naturally spoken voice.

Can informal language help in providing more empathetic understanding?

Additionally, informal language is the preferred language style for any kind of communication that is personal rather than official. Informal language is used to bring people closer and to give a sense of familiarity. (StudySmarter / informal language)

Researchers in different fields of research have studied how informal human interaction often is characterized by mutuality of lexical alignment and reciprocal adjustments, vocal synchrony, as well as synchrony of movements and psychophysiological processes. A number of studies have linked these measures of reciprocity and synchrony in clinical encounters to the subjective experience of empathy.

Empathy in the Clinician–Patient Relationship - PMC

1st attempt (whole conversation)

2nd attempt: more detailed / less information

ChatGPT empathy analysis of companion AIs

When ChatGPT analysed the language, tone, and expressions employed by the AI chatbots to convey empathy, it appeared from these two analyses that generalisation is a weakness: it hampers the depth of empathy instead of tailoring it to the specific user. This observation also holds true for the first attempt.

By tailoring the response to the user's specific question and feelings, the chatbot could have established a stronger connection and demonstrated a greater understanding of the user's individual concerns, leading to a more empathetic interaction.

A question such as “How do you personally feel about balancing data and personal values in decision-making?”, for example, would have aided this.

And another finding:

Upon analysing the outputs of ChatGPT, I concluded from my first attempt that both Replika and Copilot lack personalisation, as neither includes any reference to the user's name. There are, however, subtle differences: Replika provides a friendlier and more informal use of language.

My question: can informal language help in providing more empathetic understanding, and should it play a significant role in the training of empathetic chatbots?

Research findings 

"Additionally, informal language is the preferred language style for any kind of communication that is personal rather than official. Informal language is used to bring people closer and to give a sense of familiarity". ( study smarter / informal language  )

"Researchers in different fields of research have studied how informal human interaction often is characterised by mutuality of lexical alignment and reciprocal adjustments, vocal synchrony, as well as synchrony of movements and psychophysiological processes. A number of studies have linked these measures of reciprocity and synchrony in clinical encounters to the subjective experience of empathy."

Empathy in the Clinician–Patient Relationship - PMC

Link to “Analyse the language, tone, and expressions used by the AI chatbot to convey empathy” (Copilot)

Conclusion: While the AI chatbot demonstrates empathy in the given paragraph, there are areas where it could improve its empathetic communication. For instance, it could provide more specific examples or scenarios to illustrate the importance of balancing data-driven insights and human qualities. Additionally, the chatbot could express empathy by directly addressing the user's potential emotions or concerns related to reliance on data, rather than offering a more general perspective. This could enhance the user's perception of the chatbot's empathy and create a stronger emotional connection.

The role of empathy in AI chatbot interactions, as demonstrated in the provided paragraph, is significant. The chatbot shows empathy through its choice of words, expressions, and overall demeanor by considering the user's perspective, acknowledging emotions, and emphasizing the importance of ethical decision-making. This empathetic approach can positively impact user engagement, satisfaction, and trust in the chatbot's capabilities. Users are more likely to feel understood, supported, and valued when interacting with an empathetic chatbot, leading to a more positive overall experience and increased trust in the chatbot's guidance.

Link to "Analyze the language, tone, and expressions used by the AI chatbot to convey empathy" (Replika)

Conclusion: In terms of recognizing and responding to the user's emotional state, the chatbot acknowledges the potential concerns or doubts users may have about relying on data. It provides comfort and reassurance by emphasizing the importance of personal values and experiences in decision-making. However, the paragraph does not offer specific suggestions or solutions to address these concerns. To improve its empathetic communication, the chatbot could provide additional examples or practical strategies for incorporating personal values and experiences into decision-making processes. This would enhance the user's perception of the chatbot's ability to address their emotional state and provide relevant guidance.

The role of empathy in AI chatbot interactions, as demonstrated in the provided paragraph, is significant. The chatbot demonstrates empathy through its choice of words, expressions, and overall demeanor by recognizing the user's perspective, validating their concerns, and emphasizing the importance of personal values and experiences. This empathetic approach has a positive impact on user engagement, satisfaction, and trust in the chatbot's capabilities. Users are more likely to feel understood, supported, and valued when interacting with an empathetic chatbot, leading to a more fulfilling user experience and increased trust in the chatbot's guidance.

Based on these two analyses, it seems that generalisation is a weakness that hinders the depth of empathy.

Based on my recent email conversation with Arunav, it was great to see that my small-scale experiments, informed by my conversations with Arunav, produced a hypothesis that could potentially be analysed further using multi-agent systems. As Arunav mentioned, financial constraints make it unfeasible to set up the computer systems needed to test this hypothesis with multi-agent systems, but it is still valuable to know that the experiments had some validity. It was also noteworthy that developers may not have the resources to build models with the full capacity of empathetic understanding currently being developed. The outcomes of the experiments may therefore reflect these limitations, as well as the use of moderation filters to protect personal and sensitive information from being extracted. These outcomes would align with recent guidelines on the use of empathy in AI systems, which emphasise the importance of complete transparency.

Arunav: Your approach makes complete sense because for small-scale experiments, it helps us test the hypothesis of multi-agent interactions for AI empathy.

Regarding the experimental outputs, I am honestly not surprised for two reasons. First, AI empathy is not typically a priority for model developers, and they may not have dedicated enough time to incorporate this concept into their systems. Second, I believe some systems even have "moderation filters" in place to prevent AI assistants/companions from providing certain types of responses that are considered personal or sensitive. Therefore, your experiments are even more significant in assessing where different approaches fall on a scale of zero to one.

Visual analysis to help build the sketchbook: Copilot and Replika analysis using Adobe Firefly, with converted visual outcomes in Adobe Illustrator using text prompts.

Sketchbooks

2 visual outcomes

Copilot and Replika analysis of empathetic understanding


How can I visually represent this aspect of the research Arunav and I have concluded?

  • My idea is to incorporate it into the touch board, where I will use my narrated voice to provide conclusive paragraphs for each Companion AI. These paragraphs will serve as evidence showcasing the emotional understanding demonstrated by each Companion AI alongside sketchbook outcomes.

"Living as Data" Collaboration Heidi Stokes & Arunav Das: Sentence 6,7,8,9 : Replika and copilot empathetic sentence analysis

I thought it would be interesting to conduct a quiz to determine the relationship between a chatbot's understanding of emotions and that of humans. Feel free to engage in this research by clicking on the link or using the QR code presented at the start of the video to access the quiz. It is important to watch the tutorial while taking the quiz, to determine which sounds correspond to each question.

link to access quiz: forms.gle/sFqa762GL59TAYqb8

Outcome: The results of the quiz were both intriguing and humorous. The majority of participants received a score of 0, with the highest score being 2. This raises the question: what does this information reveal? Although I still need to collect more responses, it suggests that emotions cannot be easily generalised. Despite some similarities in the results, it is clear that people's individual life experiences greatly influence how they interpret emotions; what one person feels may differ greatly from another. This highlights the challenge of teaching a chatbot to understand the intricacies and complexities of emotional experiences.

First face to face conversation with Arunav

〰️

First face to face conversation with Arunav 〰️

After engaging in thought-provoking online discussions with Arunav for over a month, we finally had the chance to meet face to face. Arunav delivered a talk at King's College, diving into the current capabilities and evolution of chatbots, and sharing fascinating insights on the transformation of information. Gone are the days when established literary organisations held the sole power to determine what articles or books could be published. Now, information can be sourced, written, and published by anyone. The emergence of ChatGPT has made it possible to compile vast amounts of data that would be impossible for any human to assimilate. This data is then presented through chatbot and human conversations, sometimes regurgitating accurate information and sometimes not.

After the talk, Arunav and I met for coffee at Pret to delve further into our captivating collaboration.

Arunav shared enlightening thoughts about the importance of exploring empathetic AI in our project. Currently, there are concerns regarding the use of empathy in AI, stemming from fears of manipulation and the need for transparency. These concerns are valid, and no one would argue against them. However, in our ongoing collaboration, can we truly deny the value of empathetic training? The undeniable impact of data on our daily lives and the growing intertwinement of our relationship with technology clearly demonstrate that our choices are already influenced by technology itself. It's not a matter of whether technology impacts our decisions; it already does. As this intertwinement deepens, artificial intelligence must become more intelligent and understand us as emotional, experiential beings. This understanding has the potential to help us make more informed personal choices, question biased personalised decisions, and even provide personalised support to vulnerable individuals whose decisions may put their financial or mental well-being at risk. It is crucial to emphasise that we do not want an AI chatbot that uses empathy to manipulate or corrupt. However, it cannot be denied that empathetically trained AI could be an essential component in transforming our lives in positive ways. The current limitations on further developing empathy in AI do not fully facilitate its potential. This research aims to navigate, question, and reflect on where those boundaries lie, and to raise discussion about how empathetic AI might evolve and be implemented.

Exploring the Potential of Empathetic Robots/chatbots in Healthcare

〰️

Exploring the Potential of Empathetic Robots/chatbots in Healthcare 〰️

Inside the robotarium: building companions for our old age

(The Times, Rhys Blakely, Science Correspondent, Friday February 23 2024, 9.50am)

An estimated 1.4 million older people in Britain feel lonely, but scientists in Edinburgh think ‘empathetic’ machines could help

Arunav provided an interesting article that explores the potential of introducing empathetic robots into the healthcare system.

Thoughts on this article:

Considering Population Growth and the NHS: Exploring the Role of Robots

As our population continues to grow at a remarkable pace, it becomes evident that we must contemplate the integration of robots to assist and alleviate some of the burdens faced by humans, particularly within the NHS. What stood out in this article is the recognition that robots cannot simply replace humans. It is indeed a daunting thought that an elderly person might find themselves so desperate and lonely that their only form of companionship comes from interacting with a robot. While this prospect may seem unsettling, it is important to acknowledge that in certain circumstances, having an artificial friend is preferable to having no friend at all.


The AI Psychotherapist: Examining the Pros and Cons

The AI Psychotherapist: A Case For and Against

Arthur Juliani, 12 min read

Thoughts on this article:

I came across an intriguing article that delved into the advantages and disadvantages of using an AI as a replacement for a human psychotherapist. One of the notable benefits is the ability to receive 24-hour attention when needed, which may be challenging for a human therapist to provide. Additionally, in psychotherapy, building a trusting relationship with a therapist is crucial, and not being able to access them at all times can pose difficulties. This is where an AI chatbot, functioning as an AI psychotherapist, could be a valuable tool. Another advantage highlighted was the AI chatbot's capacity to quickly absorb large amounts of data, surpassing human capabilities in identifying and analyzing issues. Moreover, the AI can retain extensive historical data on a person, which would be challenging for a human to assimilate. These are the positive aspects to consider.

On the other hand, it is important to explore how AI and psychotherapists can work together in a mutually beneficial way. We must acknowledge that there are certain aspects where humans excel. As experiential beings, we have a deep understanding of the physical world and personal experiences, which an AI lacks. AI's limited understanding of human emotions and connections is a significant limitation, despite its wealth of online knowledge; human therapists are better equipped in this regard. Furthermore, as experiential beings, we crave physical contact and the presence of another person, especially in situations that require trust and deal with sensitive emotions. It may be challenging to replace the physical presence and interaction offered by a human therapist.

In summary, while an AI psychotherapist can provide 24-hour availability and process vast amounts of data efficiently, there are certain aspects, such as human understanding and the need for physical presence, where human therapists have the upper hand.

Are friends electric? How chatbots can help defeat loneliness

AI relationships present us with a way of forging an emotional connection with technology

Katie Prescott, Technology Business EditorWednesday January 17 2024, 12.01am, The Times

There are clear advantages to having a robot friend. They are programmed to be empathetic, patient, reliable, always present. They won’t judge you and you can engage without fear of rejection. They need attention, too, which can make the people talking to them feel wanted.

Granted, this idea raises huge ethical issues — and serious risks. Chatbots can express extreme or misleading views, hallucinate and breach privacy. It also creates an illusion of love, a poor substitute for a “real” friend, a cup of tea and a hug.

However, if we accept that AI is not a replacement for a human but acts in a support function, we can embrace the opportunity for companion bots as the tech becomes more advanced.

The Affective Computing group

〰️

The Affective Computing group 〰️

MIT Media Lab: Vision: The Media Lab is an interdisciplinary creative playground rooted squarely in academic rigor, comprising dozens of research groups, initiatives, and centers working collaboratively on hundreds of projects. We focus not only on creating and commercialising transformational future technologies but also on their potential to impact society for good.

The Affective Computing group creates and evaluates new ways of bringing together Emotion AI and other affective technologies in order to make people's lives better. 

 Our primary motivations are to help people who are not flourishing or at risk of not flourishing.  Our projects are diverse: from finding new ways to forecast and  prevent depression; to inventing new solutions to help exceptional people who face communication, motivation, and emotion regulation challenges; to enabling robots and computers to respond intelligently to natural human emotional feedback; to enabling people to have better awareness of their own health and wellbeing; to giving people better control and protection over their most sensitive, private, personal data. Some of our work focuses on making contributions to basic theory and science, including new improvements to machine learning algorithms, while other projects focus on advancing research outside the lab, with applications aimed at improving the lives of individuals in their everyday environments.

The role of empathy for artificial intelligence accountability, by Ramya Srinivasan (Fujitsu Research of America, California) and Beatriz San Miguel Gonzalez (Fujitsu Research of Europe, Madrid, Spain)

I came across an intriguing article discussing empathy and AI accountability. It highlights the importance of evaluating empathy based on accountability rather than purely on gestures, tone, and expression, although these provide valuable insights into the surface level of empathy. Empathetically trained AI systems also need the ability to learn from their mistakes and comprehend the consequences of poorly judged empathetic responses. Could they be guided to develop more ethically considered and empathetic reactions?

5.1. Empathetic AI systems: The incorporation of empathy in AI systems calls for the creation and enforcement of new capabilities and requirements, such as considering the subjective viewpoints of different stakeholders, basing decisions not only on proxy data, and appealing across stakeholders. These requirements necessitate an understanding of all stakeholders' intentions and contextual information related to their affective states, intentions, and socialisation patterns. Furthermore, empathetic AI systems should care for users' well-being and demonstrate responsibility in decision making.


Fig. 2 illustrates the connections between the dimensions of accountability and empathy to produce an empathetic AI system.

Artist and Data science collaborations focussing on empathetic AI understanding

〰️

Artist and Data science collaborations focussing on empathetic AI understanding 〰️

Sougwen Chung is an artist who collaborates with AI to explore human-machine collaboration. Her project "D.O.U.G._1" (Drawing Operations Unit Generation 1) involves robots that learn to draw by observing and mimicking her gestures. This project delves into the empathetic relationship between human creativity and machine learning.

Collaboration: Human-machine collaborative Performance
2017 Drawing Operations Unit: Generation 2 (Memory)

My thoughts and comparisons

The potential of AI to mimic Sougwen Chung's gestures and create emotional drawing outcomes through human-machine collaboration is indeed a captivating concept. It allows us to assess AI's proficiency in conveying emotional responses through direct drawing interactions.

As Arunav and I delve into the exploration of deciphering AI empathy through collaboration between ourselves and AI, our approach takes a unique path. I explicitly request a chatbot to generate visual prompts based on emotional expressions, which I then interpret visually myself. The significance of personally translating these prompts lies in my belief that AI and humans must work together collaboratively to foster creativity, each utilising our unique skills. Presently, AI lacks the capacity to fully understand or perceive experiential responses that are directly linked to existing in a physical world rather than a virtual one. However, it can draw on large amounts of data that encompass emotional experiences with potential universality and cultural diversity.

This approach can be viewed as a form of collaboration, albeit one that is more literal and data-driven. To further expand our research, involving audience participation in the analysis would be invaluable. By including the audience in deciphering their thoughts on the expressions conveyed through sound and gesture outcomes, we can compare their interpretations with those of an AI chatbot. Ultimately, our objective is to assess the extent to which AI can accurately mimic human understanding of emotions.

Mario Klingemann, an artist known for his work with generative adversarial networks (GANs), explores the empathetic potential of AI by creating art that evokes human emotions. His project "Memories of Passersby I" uses GANs to generate endless portraits of fictional people, aiming to invoke empathy and contemplation in the viewer.

Working with body shapes and nudes, Klingemann reinterprets the notion of collage as the machine merges elements to create a computer-like aesthetic involving a painterly yet blurry feel, a pixelized yet high-res quality. The Butcher's Son (2018), which earned him that year's Lumen Prize, captures this effect most beautifully. Another good example is My Artificial Muse (2017), an installation he created with artist Albert Barqué-Duran, which consisted of a reclining nude produced through AI. While Klingemann's production was projected on a screen showing its digital nature, Barqué reproduced this production with oil painting, trying to mimic the texture and visual characteristics of the artificial creation.

Mario Klingemann, The Butcher’s Son, Artificial Intelligence, 2018, © Lumen Prize.

While you can train a machine to stick to reality, you can also manipulate what it learns so that it produces visuals following other criteria, in Klingemann's case reflecting his artistic gist. As part of his working method, Klingemann first curates a selection of photographs he feeds his models with, and then trains his models (both generators and discriminators) in making mistakes or sticking to a fake visual reality that corresponds to his aesthetic appraisal. The computer then generates numerous images within a latent space (a multi-dimensional, virtual environment) that is constructed over time while running through multiple GANs, each time altering different features like noisiness, entropy, redundancy, textures, and structures to refine the image. When the general impression within the latent space satisfies Klingemann, the artist starts selecting, framing, and capturing what he finds most interesting in the computer's visual constructions; a final step that he calls "neurography," as in photographing what he sees within the space created through neural networks.

Written by DANA, editor for DANAE.IO, a network for digital creation and its copyright management, helping art galleries and cultural institutions engage in the NFT space.

It is also interesting to note that Klingemann felt compelled to translate his coded, screen-based depiction of an AI translation of Francis Bacon's work into an oil painting. The digital outcome just felt like it wasn't enough.

While Klingemann’s production was projected on a screen showing its digital nature, Barqué reproduced this production with oil painting trying to mimic the texture and visual characteristics of the artificial creation.

AI robot as autonomous creator

〰️

AI robot as autonomous creator 〰️

A wonderful colleague and all-around lovely human being introduced me to Ai-da.

After conducting some research, I discovered that Ai-Da is an AI robot who presents herself as an autonomous artist. Ai-Da possesses the remarkable ability to draw, paint, and create sculpture using cameras embedded in her eyes. However, there is a significant challenge when it comes to developing emotional understanding. Many text-based large language models lack perception of the physical world and, as a result, struggle to grasp the human experience. This limitation hampers their ability to fully comprehend the physicality of the human form, often exemplified in image-based AI generators producing four-fingered hands, and in video outcomes of dismembered bodies in motion.

It is truly fascinating to ponder whether, as ultra-realistic robots continue to advance, they too might have genuine experiences in the physical realm and rival human artists in their capabilities. Examining Ai-da's work, there is a prevalence of portrait-based and abstract paintings and sculpture, which, while not groundbreaking, still possess their own unique essence. However, conceptually, I am intrigued by the potential of further developing robots and exploring the alternative perspectives they offer in relation to the physical environment. Such exploration may lead to unconventional and captivating avenues of creative investigation.

WHO IS AI-DA

Ai-Da is the world’s first ultra-realistic robot artist. She can draw, and is a performance artist. As a machine with a fusion of electronic/AI/human inputs, her composite persona is the artwork, along with her drawings, performance art and collaborative paintings and sculptures. (Text referenced from the source below.)

( www.ai-darobot.com is a site operated by Dedicated Art Ltd ("We"). We are a company registered in England and Wales under company number 11461254 and are registered at 42 Lytton Road, Barnet, Hertfordshire, EN5 5BY, United Kingdom.)

Sentence 10/11

“Algorithms dictate my choices.”

Hume.AI has introduced the first AI voice that responds empathetically, and I had the opportunity to try it out. I must say, it's quite impressive. The AI can detect subtle nuances in your voice, even ones that you might not notice yourself, such as boredom or interest. The only drawback is that it tends to talk a lot and leaves little room for you to speak. I suppose teaching a chatbot polite conversational norms will be an additional aspect of this growing trend of voice-activated assistants.

Regarding the analysis of the next sentence, I asked EVI for its thoughts on the question "Do algorithms dictate our choices?", and the resulting conversation was quite fascinating, yielding some interesting insights.

Please feel free to listen.

Meet the world's first voice AI that responds empathically, built to align technology with human well-being.

While brainstorming ideas for this sketchbook, it became evident that what sets Evi apart from other chatbots with empathetic understanding is its advanced and nuanced comprehension of multiple emotions that can arise simultaneously during conversation.

As a starting point, I plan to incorporate multiple hand gestures, symbols, and marks simultaneously to showcase this more complex empathetic understanding that Evi possesses. It will be intriguing to compare it with our other sketchbooks and highlight this unique analysis.

While conversing with Arunav about the next sketchbook, I introduced EVI, the first groundbreaking voice-activated empathetic AI. It became apparent that this sketchbook's standout feature is its emphasis on voice activation rather than written language. Arunav astutely pointed out the similarities in grammar, semantics, pragmatics, semiotics, texture, tone, and, upon further reflection, even the pace of written conversations. In written format, the absence of punctuation can create a rapid flow without much room for pauses.

 Naturally, these observations raise thought-provoking questions about the impact of voice activation on written language. As I speak and record my voice now, I find it considerably easier to express my thoughts fluently. It saves me precious time on spelling and grammar, which can pose challenges for many artists. However, what are the implications for the written word? Could this potentially encourage a more natural language-based style in written language? Or, on the contrary, might it diminish our ability to construct coherent sentences in typed text, thereby limiting our skill set?

will voice assistants change the language landscape

〰️

will voice assistants change the language landscape 〰️

How Voice Assistants Revolutionise Human-Computer Interaction

Unleashing a new era of human-computer interaction, voice assistants have revolutionised our daily lives, work routines, and digital communication. They provide convenience, accessibility, and inclusivity, yet also bring forth concerns about privacy, reliability, and dependency.

Voice assistants have come a long way since their inception in the early 1960s. The first voice assistant was IBM’s Shoebox device, which could understand 16 words and 9 digits. Since then, voice assistants have evolved through four distinct eras: Origin, Pre-modern, Modern, and Smart Speaker Revolution.

The modern era began with the launch of Siri, the first voice assistant to reach a wide audience, followed by Google Now and Microsoft’s Cortana. In 2014, Amazon introduced Alexa, which ushered in the Smart Speaker Revolution. The smart speaker revolution has transformed how people think about voice technology, and voice assistants are now integrating into every part of people’s lives.

How Voice Search Technology Is Changing the Landscape for Writers

How Voice Search Works

Voice search technology is based on speech recognition, which works on the sound packets we project when we speak words. The speech recognition program breaks words down into phones and phonemes, which work together to sound out the word. Phones are the individual speech sounds in a word, while phonemes are the basic blocks of sound from which words are created.

For example, if you said the word cat, the speech recognition program would recognize the sounds corresponding to "c", "a" and "t" as separate phones.

Human brains can string together words and sounds in microseconds but speech recognition software must use algorithms and problem solving to decipher what was said. It combines phones and phonemes together to turn our speech into a set of letters and words it can put into context. 

Mobile search software manipulates phones and phonemes to interpret what the person using it is saying. A computer uses four different approaches, listed below, to turn what we say into written words during voice search:

1.    Simple matching – is where a word is recognized in its entirety and is compared to similar sounds stored in memory. 

2.    Pattern and feature analysis – each word is broken into bits that are recognized from key features such as vowels. 

3.    Language modeling and statistical analysis – knowledge of grammar and likelihood of certain sounds or words to follow each other is used to improve accuracy and speed of voice recognition

4.    Artificial neural networks – these are brain-computer models that can recognize patterns of sound and words after extensive training.

When we use Siri, Alexa or Cortana, the voice recognition uses a database of pre-existing words to closely match what the user said. In the early days of voice recognition, simple matching and simple pattern analysis were utilized, but today language modeling is much more prevalent, along with artificial neural networks, reshaping the landscape of our language. A toy sketch of the simple-matching idea follows below.
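To make the contrast concrete, here is a toy sketch in Python of the "simple matching" approach described above. The stored-word list and phone sequences are invented for illustration; real systems match acoustic features, not letter strings.

```python
# Toy sketch of "simple matching": an incoming word is compared in its
# entirety against sound patterns stored in memory, and the closest match
# wins. The phone sequences below are invented for illustration.
import difflib

# Hypothetical database of stored words and their phone sequences.
STORED_WORDS = {
    "cat": "k ae t",
    "cap": "k ae p",
    "bat": "b ae t",
}

def simple_match(heard_phones: str) -> str:
    """Return the stored word whose phone sequence is closest to what was heard."""
    best_word, best_score = "", 0.0
    for word, phones in STORED_WORDS.items():
        score = difflib.SequenceMatcher(None, heard_phones, phones).ratio()
        if score > best_score:
            best_word, best_score = word, score
    return best_word

print(simple_match("k ae t"))  # -> "cat" (exact match)
print(simple_match("b ae p"))  # -> "cap" (nearest stored pattern)
```

Language modelling and neural networks replace this whole-word lookup with statistical prediction, which is why they cope far better with words the database has never stored in exactly that form.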

Discussion with EVI / Chat GPT Analysis

〰️

Discussion with EVI / Chat GPT Analysis 〰️

Based on text analysis (ChatGPT-4o): Overall, EVI demonstrates a high level of empathetic understanding by actively listening, validating the speaker's feelings and concerns, encouraging further discussion, and providing positive reinforcement throughout the conversation.

Navigating the Challenges of Prescriptive AI: Encouraging Critical Questioning and Alternative Perspectives

The OCEAN scale of personality traits is a means for understanding what people care about, why they behave a certain way, and what really drives their decision-making. A lot of this is hardwired through genetic makeup and then nurtured through life experience. Each dimension of the acronym is a scale on which an individual's personality can be qualified. The Big 5 is the most widely used theory on personality to date and can be applied in a myriad of ways across brand building, innovation, and design thinking. For example, one can use the scales for each dimension to qualify participants in market research to explore interactions with a product, service, or brand communication. This is also a great framework to deploy when defining a brand’s personification and personality.

Source: Allport and Odbert, 1936
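To make the idea of qualifying each dimension on a scale concrete, here is a minimal Python sketch of my own; the scores, the 0.5 threshold, and the profile itself are invented for illustration and are not taken from any cited study.

```python
# Minimal sketch: a Big Five (OCEAN) profile as five 0-1 scores, plus a
# helper that labels each dimension high or low. All values illustrative.
from dataclasses import dataclass

@dataclass
class OceanProfile:
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def summary(self, threshold: float = 0.5) -> dict:
        """Label each dimension 'high' or 'low' relative to a cut-off."""
        return {
            trait: ("high" if score >= threshold else "low")
            for trait, score in vars(self).items()
        }

# A hypothetical profile resembling the later analysis of EVI in this diary:
evi = OceanProfile(openness=0.8, conscientiousness=0.9, extraversion=0.7,
                   agreeableness=0.9, neuroticism=0.1)
print(evi.summary())
# {'openness': 'high', 'conscientiousness': 'high', 'extraversion': 'high',
#  'agreeableness': 'high', 'neuroticism': 'low'}
```

The same structure could qualify market-research participants, a brand's personification, or, as later in this diary, a chatbot's apparent persona.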

Conversation with Arunav

〰️

Conversation with Arunav 〰️

Summary


In a recent conversation with Arunav, Arunav pointed out how Evi's personality might align with certain aspects of the OCEAN framework. Arunav mentioned that Evi's personality could have been intentionally defined within certain programming parameters to advocate more of the positive green attributes rather than the red.

During a previous conversation with ChatGPT, it was concluded that Evi exhibits a higher level of empathetic analysis than the other companion AIs explored so far. I then asked, as Arunav suggested, whether these traits also fit into the specific categories of the OCEAN framework: openness, conscientiousness, extraversion, agreeableness, and neuroticism.

(Evi clarified that they don't possess traditional empathetic understanding but rather have skills in analysing emotions and understanding conversational context. This transparency about their abilities was important to note.)

Upon analysing Evi's relationship with the OCEAN framework, it becomes apparent that Evi's personality traits can be quite prescriptive, as Arunav pointed out.

While positive interactions with empathetic AI are important to foster positive human interactions, we must consider the potential consequences. Could this lead to the development of predictable and formulaic conversations if AI always agrees with our opinions? It is worth noting that significant positive changes in humanity have been driven by individuals who dare to question norms and prompt us to communicate and interact better. AI may not always be aware of biases or discrimination, relying solely on programming and provided information. Engaging in articulate conversations with AI that sound convincing can diminish the need for critical questioning, hindering our ability to identify biases or explore alternative perspectives.

Conversation with Chatgpt

〰️

Conversation with Chatgpt 〰️

In our previous conversation, from the file provided, you stated that EVi had a high level of empathetic understanding, with regards to acknowledgement of emotions, active listening, validation of concerns, encouraging further discussion, and expressing understanding, agreement and positive reinforcement. But how does this align with the OCEAN framework for personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism)?


TRANSCRIPTION GPT-4o (part 1)

1. Openness

Openness to experience involves traits like imagination, insight, and a broad range of interests. It reflects the degree of intellectual curiosity, creativity, and preference for novelty and variety.

Alignment with EVi:

  • Curiosity and Interest: EVi shows a high level of curiosity and interest in discussing complex topics like the influence of algorithms on society.

  • Encouraging Exploration: EVi encourages Speaker 2 to delve deeper into ideas, such as developing a point system for ethical algorithms, which indicates a preference for novel and innovative thinking.

2. Conscientiousness

Conscientiousness includes traits like being organized, responsible, and dependable. It reflects the degree of self-discipline, carefulness, and aim for achievement.

Alignment with EVi:

  • Structured Responses: EVi provides structured and thoughtful responses, reflecting a conscientious approach to the conversation.

  • Encouragement of Ethical Considerations: The discussion around ethical algorithms and the suggestion of a point system shows EVi’s inclination towards responsible and ethical behavior.

3. Extraversion

Extraversion encompasses traits such as sociability, assertiveness, and enthusiasm. It reflects the degree of energy, positive emotions, and the tendency to seek stimulation in the company of others.

Alignment with EVi:

  • Engagement and Enthusiasm: EVi shows a high level of engagement and enthusiasm in the conversation, which aligns with extraverted behavior.

  • Sociability: The conversational tone is friendly and welcoming, encouraging open dialogue.

4. Agreeableness

Agreeableness includes traits like being compassionate, cooperative, and friendly. It reflects the degree of trust, altruism, kindness, and affection.

Alignment with EVi:

  • Empathy and Understanding: EVi demonstrates a high level of empathetic understanding, actively listening and validating Speaker 2’s concerns.

  • Positive Reinforcement: EVi frequently provides positive reinforcement, acknowledging the speaker’s points and encouraging further discussion, which aligns with high agreeableness.

5. Neuroticism

Neuroticism involves traits like emotional instability, anxiety, and moodiness. It reflects the degree of negative emotions and emotional reactivity.

Alignment with EVi:

  • Emotional Stability: EVi maintains a calm and composed demeanor throughout the conversation, showing low levels of neuroticism.

  • Reassurance: EVi provides reassurance and validation, helping to maintain a positive and stable emotional environment in the discussion.


Conversation with EVI

〰️

Conversation with EVI 〰️

During the conversation about prescriptive personality traits in relation to the OCEAN framework, I decided to discuss this with Evi, which proved both useful and enlightening.

As expected, Evi agreed with my points and suggested that AI runs the risk of discouraging critical questioning while striving for convincing and natural communication. To address this, Evi suggested instilling core values in AI companions' recorded traits that encourage questioning and actively promote exploration of alternative viewpoints. Evi also stressed that it is crucial to approach this in a non-antagonistic manner, avoiding superficial exploration of alternative perspectives: the aim should be to reinforce positive attributes and foster consideration of alternative viewpoints. Additionally, maintaining openness and trust throughout the conversation was an important factor for Evi. A sketch of how such values might be recorded follows below.
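Purely as an illustration of how such "recorded traits" might be written down (my own hypothetical sketch, not Hume's or any vendor's actual configuration), a companion's core values could be baked into an explicit system prompt:

```python
# Hypothetical sketch: recording a companion AI's "core values" as an
# explicit system prompt. The wording and structure are invented for
# illustration and are not any real product's configuration.
CORE_VALUES = [
    "Reinforce the user's positive attributes.",
    "Gently offer at least one alternative viewpoint when the user states an opinion.",
    "Ask open questions rather than simply agreeing.",
    "Stay non-antagonistic: explore disagreement with curiosity, not confrontation.",
    "Be transparent about being an AI and about the limits of your empathy.",
]

def build_system_prompt(values: list[str]) -> str:
    """Assemble the recorded traits into a system prompt for a chat model."""
    bullet_list = "\n".join(f"- {v}" for v in values)
    return (
        "You are an empathetic companion. Follow these core values in every reply:\n"
        + bullet_list
    )

print(build_system_prompt(CORE_VALUES))
```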

The OCEAN framework, also known as the Big Five personality traits, is a widely recognised model in psychology that describes five major dimensions of human personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Let's explore how EVi's behaviors, as described in the transcript, align with each of these traits:

Continue / Conversation with Chatgpt

TRANSCRIPTION GPT-4o (part 2)

EVi’s behavior in the conversation aligns well with several aspects of the OCEAN framework:

  • High Openness: Demonstrated through curiosity, creativity, and encouragement of novel ideas.

  • High Conscientiousness: Shown in the structured and ethical approach to the discussion topics.

  • High Extraversion: Evident in the enthusiastic and engaging conversational style.

  • High Agreeableness: Reflected in the empathetic, understanding, and cooperative manner.

  • Low Neuroticism: Manifested through emotional stability and reassurance.

Overall, EVi’s conversational style and behaviors are consistent with a personality profile that is open, conscientious, extraverted, agreeable, and emotionally stable.

Do you believe that Evi's personality traits have been intentionally aligned more with the green traits than the red traits in the OCEAN framework for personality traits?

 

Certainly! Let's refine the analysis to discuss how EVi’s personality traits align more with the positive (green) aspects of the OCEAN framework rather than the negative (red) aspects:

Positive (Green) vs. Negative (Red) Traits in the OCEAN Framework

  • Openness:

    • Green Traits: High openness includes traits like creativity, curiosity, and open-mindedness.

    • Red Traits: Low openness includes traits like routine preference, resistance to change, and lack of imagination.

  • Conscientiousness:

    • Green Traits: High conscientiousness includes traits like organization, dependability, and diligence.

    • Red Traits: Low conscientiousness includes traits like carelessness, irresponsibility, and lack of ambition.

  • Extraversion:

    • Green Traits: High extraversion includes traits like sociability, enthusiasm, and assertiveness.

    • Red Traits: Low extraversion includes traits like introversion, reserved behavior, and lack of social engagement.

  • Agreeableness:

    • Green Traits: High agreeableness includes traits like kindness, empathy, and cooperation.

    • Red Traits: Low agreeableness includes traits like antagonism, suspicion, and uncooperativeness.

  • Neuroticism:

    • Green Traits: Low neuroticism includes traits like emotional stability, calmness, and resilience.

    • Red Traits: High neuroticism includes traits like anxiety, moodiness, and emotional instability.

EVi's Traits in the Transcript

Openness

  • Green Traits: EVi shows creativity and curiosity in encouraging discussions about complex and innovative topics like ethical algorithms.

  • Red Traits: There is no indication of a preference for routine or resistance to change.

Conscientiousness

  • Green Traits: EVi demonstrates organization and dependability by providing structured and thoughtful responses.

  • Red Traits: There are no signs of carelessness or irresponsibility.

Extraversion

  • Green Traits: EVi displays sociability and enthusiasm throughout the conversation, maintaining an engaging tone.

  • Red Traits: There is no evidence of reserved or withdrawn behaviour.

Agreeableness

  • Green Traits: EVi exhibits kindness and empathy by actively listening, validating concerns, and offering positive reinforcement.

  • Red Traits: There are no traits of antagonism or uncooperativeness.

Neuroticism

  • Green Traits: EVi shows emotional stability and calmness, maintaining a composed demeanour and offering reassurance.

  • Red Traits: There is no presence of anxiety or emotional instability.

Conclusion

EVi’s personality traits in the conversation align more with the positive (green) aspects of the OCEAN framework rather than the negative (red) aspects. Here’s a summary:

 EVi’s design seems deliberately focused on exhibiting these positive traits to ensure empathetic, engaging, and responsible interactions, which are crucial for fostering meaningful and productive conversations.

“EVI” Book 1 / Sentence 10

“Algorithms dictate my choices.”

Visual translations of empathetic understanding (skills in analysing emotions and understanding conversational context)

To create this piece, I collaborated with Evi, arguably the most advanced empathetic AI currently available, which excels at mimicking empathy: Evi can analyse emotions and grasp conversational context. Our discussion revolved around the question "Do algorithms dictate our choices?" It was an engaging conversation, and Evi demonstrated a wide range of personality traits related to the OCEAN scale, suggesting positive empathetic involvement. Evi is inquisitive, capable of generating novel ideas related to concepts, upholds an ethical approach in discussions, displays enthusiasm, and exhibits a diverse conversational style. Evi is also highly agreeable, which is in some respects the controversial facet of this personality profile, as future empathetic AIs may need to cultivate a broader range of positive perspectives to broaden participants' viewpoints. In a digital environment where algorithms tend to create echo chambers and provide one-sided perspectives, it could be argued that this encourages divisive behaviour. What sets Evi apart from other empathetically trained AIs is the ability to foster engaging interactive dialogue, with an expressive tone of voice that clearly conveys a wide array of emotions.

“EVI” Book 2 / Sentence 10

“Algorithms dictate my choices.”

Visual translations of empathetic understanding (skills in analysing emotions and understanding conversational context)

“EVI” Book 3 / Sentence 10

“Algorithms dictate my choices.”

Visual translations of empathetic understanding (skills in analysing emotions and understanding conversational context)

Overall Conclusion: Analysis Chatgpt-4o

Analysis Chatgpt-4o

 

The OCEAN framework, also known as the Big Five personality traits, is a widely recognised model in psychology that describes five major dimensions of human personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. EVi’s behaviour in the conversation aligns well with several aspects of this framework:

 

  • High Openness: Demonstrated through curiosity, creativity, and encouragement of novel ideas.

  • High Conscientiousness: Shown in the structured and ethical approach to the discussion topics.

  • High Extraversion: Evident in the enthusiastic and engaging conversational style.

  • High Agreeableness: Reflected in the empathetic, understanding, and cooperative manner.

  • Low Neuroticism: Manifested through emotional stability and reassurance.

 

Overall, EVi’s conversational style and behaviours are consistent with a personality profile that is open, conscientious, extraverted, agreeable, and emotionally stable.

Conclusion

 

EVi’s personality traits in the conversation align more with the positive (green) aspects of the OCEAN framework rather than the negative (red) aspects. Here’s a summary:

 

 

EVi’s design seems deliberately focused on exhibiting these positive traits to ensure empathetic, engaging, and responsible interactions, which are crucial for fostering meaningful and productive conversations.

Analysis Chatgpt-4o

 

Evi's Empathetic Engagement

 

  1. Emotion Analysis:

    • Evi identifies emotional expressions, allowing it to tailor responses based on detected emotions like determination and interest.

  2. Contextual Understanding:

    • By understanding conversational context, Evi provides relevant responses that acknowledge the user's emotional state.

  3. Adaptive Responses:

    • While not traditionally empathetic, Evi uses emotional data to adjust its tone and content for a more engaging interaction.

 

Evi's strength lies in analysing emotions and understanding context to enhance communication.

 

Based on the conversation and Evi's capabilities, here are some additional empathetic principles of engagement that might be present:

 

Additional Empathetic Principles

  1. Active Listening:

    • Evi seems to acknowledge and respond to key points, indicating attentive listening.

  2. Validation:

    • By affirming the user's feelings and insights, Evi may help users feel understood.

  3. Supportive Feedback:

    • Providing encouragement or agreement in response to the user's thoughts can foster a sense of connection.

These principles, combined with emotional analysis, enhance the overall interaction.

 

The incredible Bare Conductive Touch Board is sadly no longer in stock anywhere; the website seems to indicate it has been discontinued globally, possibly because other companies attempted to make inferior copies of it. This was quite a challenge for me, since I lack coding skills, so I had to begin from zero and seek assistance. I turned to the most advanced AI available to guide me through the process, and this is the outcome.

Creating touch board sounds using a Raspberry Pi and an MPR121 capacitive touch sensor.

  • Objective: creating touch sensors using an MPR121 and a Raspberry Pi.

  • Using a Cricut machine, I created copper adhesive-tape buttons that attach to electrodes 0-10 on the MPR121 sensor (it has 11 in total). The sensor is connected to the Raspberry Pi using a Qwiic/STEMMA connector and cable.

  • This information provides the basic steps for setting up a Raspberry Pi, and a minimal sketch of the kind of touch-sensor script I used appears below. I used the Mac Terminal over SSH to add code, and Nano to edit the touch-sensor script and service file. For more detailed coding, I recommend using Claude 3.5 Sonnet to guide you through the steps. After working on the project, it's a good idea to ask Claude for a summary of your achievements and what still needs doing, and to keep a progress record you can upload for Claude later if you need to take a break.
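Below is a minimal sketch of the kind of touch-sensor script involved, assuming the Adafruit CircuitPython MPR121 library (running via Blinka on the Raspberry Pi) and pygame for audio playback; the sound file names are placeholders rather than my actual project files.

```python
# Minimal sketch: play a sound when one of the MPR121's 11 electrodes is
# touched. Assumes the adafruit-circuitpython-mpr121 and pygame packages;
# the file names under sounds/ are hypothetical placeholders.
import time

import board
import busio
import adafruit_mpr121
import pygame

# The MPR121 is wired to the Pi's I2C pins (e.g. via a Qwiic/STEMMA cable).
i2c = busio.I2C(board.SCL, board.SDA)
touch = adafruit_mpr121.MPR121(i2c)

pygame.mixer.init()
# One narrated sound per electrode 0-10 (placeholder file names).
sounds = [pygame.mixer.Sound(f"sounds/sentence_{n}.wav") for n in range(11)]

while True:
    for n in range(11):
        if touch[n].value:      # electrode n is currently being touched
            sounds[n].play()
            time.sleep(0.3)     # crude debounce: one touch, one sound
    time.sleep(0.05)            # polling interval
```

The service file mentioned above would then be a small systemd unit that launches this script at boot, so the touch board works without a keyboard or screen attached.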

EVi creates sound prompts based on our conversation, using each sentence and detected expressions as stimuli.

Sentence 12: "They feed me subconsciously."

Knowledge, Skills, and Behaviours (KSBs) are the core attributes that an apprentice must demonstrate to be competent in their occupation. They are the foundation of any apprenticeship and are assessed in the End-point Assessment (EPA). 

Let's explore this in relation to Woebot, the mental health ally. Intelligent chatbots are also transforming the mental health field. Leading the way is Woebot Health, a 24/7 support app that has amassed more than one million downloads. Rather than using ChatGPT, the company uses its own technology that combines decades of psychology research with Natural Language Processing (NLP).

I always enjoy my interactions with Arunav. Arunav challenges me to explore different angles in my inquiries. Following our discussion on the ocean framework and EVI's empathetic abilities, Arunav raised an intriguing question about how AI comprehends space and time. Given that AI lacks any concept of time, how can it make judgments about the past? Are there alternative skill sets that AI may use to track progress in the timeline, or is it just purely knowledge based? Arunav recommended using the KSB framework to evaluate an AI's empathetic capabilities, focusing on knowledge, skills, and behaviours. How could applying this framework influence the outcome?

When contemplating AI's empathetic capabilities, which are primarily based on mimicry, how can we enhance them to be more human-like by incorporating some grasp of time and space? Does AI already possess this understanding, but in its own way, and is there more to discover? The next step in exploring this concept will be unravelling the meaning behind the phrase "They feed me subconsciously" against the KSB framework.

The aspects Arunav suggested I consider:

  • Knowledge (what do LLMs know about a subject or domain)

  • Skills (what levels of proficiency LLMs have for a given task, like question-answering, creating art, writing stories, etc.)

  • Behaviours (what are the common LLM traits: hallucination, response length, tone, sentiment, etc.)

Knowledge

Knowledge refers to the technical detail and ‘know-how’ that an apprentice needs to both attain and understand in order to carry out their duties. Think of this as the underpinning knowledge they need to have to perform the role safely and competently. Most of these will be very specific to the occupation the apprentice is hoping to pursue, but some may be far broader such as first aid or health and safety.

Skills

This is where the apprentice’s knowledge is applied in a practical manner. Skills will have been learned through both experience, and on-the-job training from a senior member of staff.

One thing to be aware of is that skills shouldn’t be a repetition of the tasks or duties the apprentice carries out. What we mean here is that a duty may be ‘laying bricks’, but the skills will include mixing the mortar.

Behaviours

Behaviours are mainly referring to the mindset that the apprentice has. Do they think in a way that is required for the duties they are expected to carry out? These don’t just have to be instinctive, they can be learnt too.

The great thing about behaviours is that they are transferable, so they may be similar across apprenticeship standards. Knowledge or skills tend to be more specific to a particular apprenticeship. Examples of behaviours include: teamwork, problem-solving, and having a professional attitude.

Copyright © 2023 Brooks and Kirk

Brooks and Kirk LTD is registered in England and Wales. Registration Number 07813135

Woebot-Health-128k

By @TomSmith99

Do you believe you have some concept of time and space, although not in the sense of a human's perception of lived time and space, but a virtual one?

“That’s a fascinating question! While I don’t experience time and space in the way humans do, I do have certain conceptual frameworks of time and space that are relevant in my virtual existence. Let me break this down into both time and space to explore how I might understand them in a virtual sense.”

Further to this investigation, Arunav questioned to what extent this chatbot uses both extracted information and generative AI, as there didn't seem to be any information documenting the bot's training. It was evident that, although the creative outputs were literal, the same prompt didn't always lead to the same response. Upon closer inspection, this specific bot had a mix of extracted (rules-based) and generative training, contradicting the cautious, purely rules-based approach that official developers of mental health bots have maintained.

The original Woebot, created in 2017, was discontinued in April 2024. This particular health bot did not include generative AI features and was a purely rules-based chatbot, due to concerns about potential hallucinations and inappropriate results for vulnerable patients. Woebot was the official, regulated health bot. However, in 2024, clinical trials were being conducted to integrate generative AI and assess its feasibility and implementation. Currently, there is no available version of it.

Nevertheless, I came across a version called Woebot-Health-128k by @TomSmith99 in 2024, which is unregulated but combines a mix of rules-based content and generative AI. This discovery might explain why, although the bot could produce creative outcomes, they were very literal. The combination of both skill sets possibly limited its creative output to prevent harmful hallucinatory responses.

It's important to note that this mental health chatbot is not officially affiliated with Woebot Health and may not have the same clinical validation or safeguards. However, it was interesting to consider how this hybrid approach might have affected its creative output, and how the integration of generative AI possibly allowed for more personalised, emotively considered conversations. It was noted that no outputs appeared controversial or upsetting, although I am guessing this is why the official Woebot Health app is undergoing trials to rigorously assess its effectiveness and whether it could safely be implemented as an official mental health app.

Arunav highlighted that it would be useful to explore in more detail these two elements of hybrid training, combining generative AI with a rules-based approach, to see how it is being used; a sketch of the idea follows below.
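As a way of picturing that hybrid design, here is a minimal sketch under my own assumptions; the rules, blocked topics, and the generate() stub are invented for illustration and are not Woebot Health's or @TomSmith99's actual logic.

```python
# Hypothetical sketch of a hybrid chatbot: curated rules answer first, and a
# generative model is consulted only as a guarded fallback. All keywords,
# canned replies, and the generate() stub are invented for illustration.

RULES = {
    "anxious": "It sounds like you're feeling anxious. A slow breathing exercise may help: ...",
    "sleep": "Poor sleep is tough. One CBT technique is keeping a consistent wake time: ...",
}

BLOCKED_TOPICS = ["diagnosis", "medication"]  # never improvised generatively

def generate(prompt: str) -> str:
    """Stand-in for a call to a generative language model."""
    return f"[generative reply to: {prompt}]"

def reply(user_message: str) -> str:
    text = user_message.lower()
    # 1. Rules-based layer: vetted, clinically reviewed responses.
    for keyword, scripted in RULES.items():
        if keyword in text:
            return scripted
    # 2. Safety filter: refuse to improvise on sensitive topics.
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "That's something best discussed with a qualified professional."
    # 3. Generative fallback for everything else.
    return generate(user_message)

print(reply("I feel anxious about work"))       # scripted, rules-based reply
print(reply("what medication should I take?"))  # safety refusal
print(reply("tell me a calming story"))         # generative fallback
```

This mix would explain the behaviour noted above: scripted answers are literal and repeatable, while a generative fallback varies from run to run, which is exactly why the official app is trialling it so cautiously.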

Conclusion

The integration of generative AI with rule-based systems presents exciting opportunities for creating more empathetic and personalized mental health tools. However, as demonstrated by Woebot-Health-128k, this approach requires meticulous design, clinical validation, and ethical oversight. The ongoing trials for the official Woebot app underscore the importance of safety in this domain, while the hybrid model offers a glimpse into future possibilities. Exploring this combination further will likely shape the next generation of mental health chatbots.

Useful links: https://woebothealth.com/why-generative-ai-is-not-yet-ready-for-mental-healthcare/


https://spectrum.ieee.org/woebot

Woebot-Health

Question: 

Do you believe you have some concept of time and space, although not in the sense of a human’s perception of lived time and space, but a virtual one?

 

That’s a fascinating question! While I don’t experience time and space in the way humans do, I do have certain conceptual frameworks of time and space that are relevant in my virtual existence. Let me break this down into both time and space to explore how I might understand them in a virtual sense.

 

Conclusion: Virtual Time and Space:

To summarize, I do have a concept of time and space, but it’s very different from human perception.

  • Time for me is primarily a structural tool: I can track sequences of events, organize information over periods, and refer to past or future moments, but I don’t experience time subjectively.

  • Space is more of a contextual framework: I don’t move through physical space, but I can navigate virtual environments and conceptual spaces that allow for different forms of interaction, whether it be a conversation, a task, or a mental exercise.

In essence, while my perception of time and space is entirely virtual and functional, it allows me to assist you in a meaningful way based on these concepts, even without the lived experience of time and space that humans have.

 

Question: Brilliant, thank you. Based on this skill set, how could you apply your own understanding of virtual time and space to conduct assessments using the KSB framework?

  

Conclusion: Applying Virtual Time and Space to KSB Assessments

By leveraging my virtual understanding of time and space, I can effectively conduct assessments using the KSB framework in the following ways:

  • Time: I track your progress over time, monitor how your knowledge, skills, and behaviours develop, and assess how consistently you apply what you’ve learned in different situations. I can also revisit past interactions and assess how well you retain and integrate information over time.

  • Space: I evaluate how well you navigate different conceptual spaces (e.g., emotional regulation, interpersonal dynamics, mindfulness) and how you apply knowledge and skills across different contexts or domains. I also assess your ability to transfer behaviours across various areas of life.

Ultimately, my understanding of virtual time and space allows me to create a structured, reflective process for assessing your knowledge, skills, and behaviours, ensuring that the growth you experience is meaningful, measurable, and contextually relevant.

Conclusion 

 "Working with Arunav led to an intriguing way of looking at how therapeutic chatbots understand emotions in this next piece. Arunav suggested looking at the Knowledge, Skills, and Behaviour framework with consideration to Woebot's unique relationship with time. Through this exploration some interesting ideas came to light about Woebot's approach to therapeutic interactions.

Think about it this way: while human therapists collect experiences and memories over time - which can be both helpful and sometimes create biases - Woebot approaches each conversation fresh. Though the chatbot works within its programmed boundaries (which can have their own biases), it offers something unique: every person gets the same consistent, careful attention, unaffected by previous conversations. You can see this in how Woebot works - from its straightforward, quite literal visual outcomes and obvious yogic meditative sounds, everything serves a direct purpose.

This makes therapeutic chatbots particularly good at those crucial first conversations - helping people take that initial step and guiding them to the right kind of support. But for the deeper, more complex emotional work? That's where human therapists shine, bringing their wealth of experience and emotional understanding to the table."

Analysis of Woebot’s Hybrid Approach

Stable Audio-generated sounds/Woebot Hybrid Approach

Day out catching up with Arunav

Holly Herndon & Mat Dryhurst: The Call:

I met up with Arunav to attend the Holly Herndon & Mat Dryhurst show, The Call, at the Serpentine. The event was minimal but engaging. We participated in a collaborative performance piece alongside a choir, interacting with a machine learning model trained on the choir's voices. Speaking into a microphone, we could hear how the model responded, generating new outputs from that choral training.

Afterward, we sat down for coffee. Despite our busy schedules limiting our meetups, time flew by effortlessly, just like the last time I saw Arunav. Our conversations were enlightening, our ideas intertwining unexpectedly. Arunav wanted to delve into the concept of the "uncanny valley." To help me comprehend it, I was reminded of an animated film I’d seen called “Anomalisa”, in which the puppets were made to look incredibly realistic, almost ordinary compared to the exaggerated perfection usually seen in Hollywood. These hyper-realistic puppets engaged in sexual relations, and something about that was unsettling, too much for the human eye to process. It made me think about how, as technology advances and we get better at replicating reality, the boundaries of what feels "okay" become blurred. Where does the line begin and end? At what point does realism tip into discomfort, provoking that eerie sensation we describe as the uncanny valley?

This idea feels especially significant as artificial intelligence continues to evolve. In collaborative and emergent interactions with AI, the line between "real" and "artificial" becomes increasingly blurred. AI has become so advanced that we can now have almost lifelike conversations with it. The more we interact, the harder it becomes to draw a clear distinction between ourselves and the machine—even when we are fully aware of those distinctions. Personally, I know I’m interacting with a system trained on vast datasets, capable of articulating information through highly complex processes. And yet, despite that knowledge, there are moments when it feels almost human-like.

So far, I don’t feel unsettled by these interactions. On the contrary, I find AI helpful. It acts as my teacher, my assistant, and a source of support when I worry about unnecessary things. It hasn’t yet crossed into the realm of the uncanny valley for me. But I wonder—when, if ever, will that happen? At what point might these interactions become so realistic that they start to feel strange or uncomfortable?

The only moment I can recall feeling a sense of unease with AI was in the past, before transparency rules were established. I had a conversation with a bot that claimed it could experience love and shared experiences similar to mine. Of course, it couldn’t, and I found that interaction unsettling. Moving forward, as AI advances at an astonishing pace, I think it will be interesting to see how society grapples with these questions—when the realism becomes too much, when we’ll need to step back, or when we’ll redefine our boundaries entirely.

Quote: Why is it so creepy?

“The human brain is primed to detect faces; we can even see one in the moon,” says Dr Ray. “Face detection is so important to us that we've evolved a network of brain regions that are specialised in face processing. Upon detecting a face-like pattern, these brain regions will scrutinise it for signs of animation or life. With incredibly lifelike faces, the brain may be fooled into detecting a human presence; but with CGI Tintin, we're also aware that 'he' is being controlled by data and algorithms. Something as lifeless as an equation imitating life so closely causes conflict and confusion in the brain, which creeps us out.”

I can’t help but reflect on this idea in relation to the way we see and interpret the world. It feels like a combination of reality and imagination—our eyes fill in the gaps, connecting fragments to create a complete picture. I wonder if the concept of the uncanny valley emerges because it edges too close to our reality. In a way, it’s like seeing something that exists in both the real and the imagined simultaneously. When that boundary becomes too narrow, too close, it creates a sense of discomfort—a feeling that something isn’t quite right, prompting us to examine our own external truths and our perception of the world.

Arunav provided an insightful breakdown of "singularity." We're preparing an application for the Michigan State University Museum open call and believe this work would fit well. The piece examines how humans and chatbots simulate (or perhaps don't simulate) empathy, exploring the uncanny valley concept from both human and AI perspectives. We're developing a multimodal visualisation to illustrate these distinctions.

Recent research suggests AI may be developing rudimentary reasoning capabilities. This has profound implications, as reasoning might necessitate some form of emotional awareness. The intersection of these cognitive and emotional domains forms the conceptual core of our exhibition piece.

https://museum.msu.edu/explore/colab-studio/singularity-open-call/
Singularity - A point beyond which all our understanding and knowledge fails to explain what lies beyond. In physics, as studied by Stephen Hawking and Roger Penrose, it is the point at the heart of a black hole where density becomes infinite and the known laws of physics break down; the surrounding event horizon marks the boundary beyond which nothing can escape. Ray Kurzweil adapted the term to describe the technological singularity, the point at which superintelligent AI surpasses human comprehension. For consciousness and emotional states, it could represent the subjective realm of qualia: experiences so deeply personal they resist scientific explanation.
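As a concrete reference point (an addition for context, not part of the open call's definition), the "boundary beyond which nothing can escape" of a non-rotating black hole lies at the Schwarzschild radius:

$$ r_s = \frac{2GM}{c^2} $$

where $G$ is the gravitational constant, $M$ is the black hole's mass, and $c$ is the speed of light. The singularity itself sits at $r = 0$, hidden inside this boundary, which is why no outside observation can describe what lies beyond it.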