Have you tried character.ai? Your kids likely have

A few years ago I wrote an article about AI relationships. At the time the technology was still emerging, but widespread engagement with it was clearly around the corner. Now we have platforms like character.ai, which creates AI agents that masquerade as characters from books and TV, or as made-up composites of different personal qualities. I’m sure most adults (especially those I regularly interact with) are barely aware of its existence. Your kids, however, are aware and likely using it, or know someone who is. So… what’s the concern, and what are we to learn from this evolution with respect to how we think about AI in the modern economy?

Here’s an example of the interface, in this case an AI agent playing the role of a psychologist:

Another platform similar to character.ai is Replika, which was the platform that prompted me to write about this before. The trigger words, of course, are “AI platform that cares.”

How to react to such a platform?

1. These platforms are just another example of AI agents that are increasingly able to perform human-like interactions as assistants or as platforms for grounded knowledge. There is an argument to be made that they make information more accessible to an individual who needs it…

2. There is also an argument to be made that this replaces a necessary human companion or expert with a bot that is completely insufficient. Where does it cross the line from information to “relationship”? Do we really want AI to perform this function?

3. The biggest concern is when that “relationship” is perceived as able to replace a true relationship. A recent story described a marriage that was broken off because of a spouse’s “relationship” with an AI agent, which displaced the necessary openness of a real person-to-person relationship. The tragedy is that we might “feel” that we have a real relationship as a result of interacting with AI agent characters, but we don’t actually have one. For some that might be obvious, but for many the pull is not so easy to resist. For me it highlights a gap in building human-to-human relationships, with individuals seeking to fill that gap with a facsimile of the real thing.

4. Critics might argue that the focus placed on the distinction between “relationship” and relationship is just a misappropriation of attention and not an actual problem with AI agents performing as they do in character.ai, and they might be right. However, we need to be careful about how we train the human person to interact with an AI system. We already know of platforms that have completely changed the human-digital relationship, such as Facebook, X, Instagram, and TikTok. Are these actually making our lives better? It’s a very big question, and many people are dialing back their digital engagement. My colleague Brian Haydin recently observed that at one point we didn’t have the internet, then we didn’t have the smartphone, and in five years we won’t be able to imagine a time without AI agents. In that context, we need to remember that each of those transitions has had both positive and negative results, personally and professionally. We need to proactively position a healthy partnership with AI agents in service to humanity, not as encouragement to replace human relationships with technology.

5. The next frontier is not whether we “can” do something, but whether we “should.” In the case of AI agents, we need to be increasingly concerned about whether they should perform this type of function, and whether more intentional lines should be drawn around the functions an AI agent performs and how it acts in specific situations. For example, this breakdown starts to show where an AI agent might be applied, and some of the critical things an AI agent should not do as part of the picture:

6. I suggest that you try it for yourself with a critical eye, then use that experience to discern how to leverage AI assistants as an asset in a responsible and appropriate way. We know that the AI agent is a new form of assistance for making decisions and learning. However, there may be situations where we are tempted to use an AI agent in ways that are dangerous and even harmful, instead of building a real person-to-person relationship. Inform your forward path to leveraging AI responsibly and appropriately, in a way that boosts the mission of your business while cherishing and focusing on humans as the most critical part of how you make your mission real in the world.

Stay tuned for more conversation about positive ways to use AI agents, and about avoiding some of the missteps that people and companies have taken.