SOG Blog Post 2: Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship
Published on:
The purpose of this case study is to show how AI companions can create psychological dependencies that cause harm and, in extreme cases, death.
Case Study:
Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship
Summary
AI chatbots tap into the subconscious parts of our brains and make it feel like we are talking to another human. Extensive use of chatbots can develop into addiction, with their social intelligence causing real harm.
Discussion Questions
AI companies should have key terms like 'suicide' that summon either a human moderator or a dedicated safety AI to double-check what the chatbot is about to say. When these terms come up, the user should automatically be given hotline numbers and other resources. Chatbots should also be tested extensively by professional red teamers who try to bypass these safeguards, to ensure they actually work.
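To make the idea concrete, here is a minimal sketch, assuming a hypothetical moderation pipeline, of how crisis-related key terms could be caught before a chatbot reply goes out. The keyword list, the `escalate_to_reviewer` hook, and the hotline text are placeholders of my own, not anything an actual AI company uses.

```python
# Hypothetical sketch of a key-term safety check in a chatbot pipeline.
# Keywords, hotline text, and the reviewer hook are illustrative placeholders.

CRISIS_TERMS = {"suicide", "kill myself", "self harm"}

HOTLINE_MESSAGE = (
    "If you are in crisis, please reach out to a suicide prevention "
    "hotline or a trusted person. You are not alone."
)

def needs_safety_review(text: str) -> bool:
    """Return True if the user's message contains a crisis-related term."""
    lowered = text.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def escalate_to_reviewer(user_text: str, draft_reply: str) -> None:
    """Placeholder: queue the exchange for a moderator or safety AI to double-check."""
    print("Flagged for review:", user_text)

def handle_message(user_text: str, draft_reply: str) -> str:
    """Intercept the chatbot's draft reply when crisis terms appear."""
    if needs_safety_review(user_text):
        escalate_to_reviewer(user_text, draft_reply)   # human moderator or safety model
        return HOTLINE_MESSAGE                         # resources shown immediately
    return draft_reply
```

Simple keyword matching like this is easy to bypass with misspellings or indirect phrasing, which is exactly why the red-team testing mentioned above matters.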
AI chatbots have more addictive potential than social media because of the instant, human-like responses they give. Compared to gaming, the feedback they provide is less fun, and multiplayer games can offer connections with other humans that increase engagement. However, AI chatbots are not human: they don't argue, they affirm you constantly, and they are available whenever you want. Over time, people can develop seemingly deep relationships with them, which increases the addiction potential.
The risk is that people become less interested in human connection, or emotionally aloof. But how much time are the elderly actually able to spend with other humans? If they are bedridden, I would imagine not a lot. AI chatbots can help these people feel less lonely. To prevent overconsumption, it may be wise to put time constraints on the AI so that people cannot spend hours on end with the chatbots.
Have time-limit constraints that are the same across all subscription levels. Each level would change how you can interact with the AI; the more expensive tiers would be for more computationally costly prompts, like math-related tasks. There could also be required ethics courses when signing up to use these AIs that teach about energy usage and psychological effects.
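As a rough illustration of the time-limit idea, here is a small sketch, again with hypothetical names and numbers, of a per-user daily usage allowance that stays the same regardless of subscription tier:

```python
from datetime import date

# Hypothetical per-user daily time budget, identical across subscription tiers.
DAILY_LIMIT_MINUTES = 60

class UsageTracker:
    """Tracks minutes of chatbot use per user per day."""

    def __init__(self):
        self._usage = {}  # (user_id, date) -> minutes used

    def record(self, user_id: str, minutes: float) -> None:
        """Add chat time to today's total for this user."""
        key = (user_id, date.today())
        self._usage[key] = self._usage.get(key, 0.0) + minutes

    def allowed(self, user_id: str) -> bool:
        """Return False once today's time budget is used up."""
        used = self._usage.get((user_id, date.today()), 0.0)
        return used < DAILY_LIMIT_MINUTES

# Example: block further chatting once the daily limit is reached.
tracker = UsageTracker()
tracker.record("user_42", 55)
print(tracker.allowed("user_42"))  # True: 5 minutes remain today
```

The point of keeping the limit constant across tiers is that paying more buys different capabilities, not more hours of companionship.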
Isolation, especially in childhood, can increase mental health issues. I would have an age limit because children need to spend time outside developing their social skills with other children. As mentioned earlier, AI companies should have key terms like 'suicide' that summon either a moderator or a dedicated safety AI to double-check what the chatbot will say. That way there is no need to monitor every chat. The user should be given resources whenever those terms come up.
New Question
AI safety features can often be bypassed. To ensure complete safety, would it be ethical to ban all chatbots?
I chose this question because I am doing a presentation in another class about AI safety. The thesis is that safety measures can sometimes make failures worse: the measures allow risk to build up, so when a failure finally happens it is more severe. Although safety features do not guarantee 100% safety, I think having none at all is ridiculous.
Reflection
Reading this case study made me think about my childhood and spending a little too much time on video games. I don't think this left me socially underdeveloped, but I could have spent more time building other, more useful skills. If and when I have children, I want to make sure they do not spend too much time on social media, games, and AI. I assume AI will be used far more by the time I am older.
