In a recent episode of HBO's TV show Silicon Valley, Pied Piper network engineer Bertram Gilfoyle (played by actor Martin Starr) creates a chatbot he calls "Son of Anton," which interacts automatically with other employees on the company network, posing as Gilfoyle.
For a while, Pied Piper developer Dinesh Chugtai (played by Kumail Nanjiani) chats with the bot until, during one interaction, he sees Gilfoyle standing nearby, away from his computer. Upon discovering he's been chatting with an AI, Dinesh is angry. But then he asks to use "Son of Anton" to automate his own interactions with an annoying employee.
Like Dinesh, we hate the idea of being fooled into interacting with software impersonating a person. But also like Dinesh, we may fall in love with the idea of having software that interacts as us so we don't have to do it ourselves.
We're on the brink of confronting AI that impersonates a person. Right now, AI that talks or chats can be categorized in the following way:
- interacts like a human, but identifies itself as AI
- poses as human, but not a specific person
- impersonates a specific person
What all three of these categories have in common -- regardless of their pretenses to humanity -- is that they all try to behave like humans. Even chatbots that identify themselves as software are increasingly designed to interact with the pace, tone and even flaws of human conversation.
I recently detailed in this space the subtle difference between Google's two public implementations of its Duplex technology. Its use to answer calls when someone calls a Google Pixel phone is the first kind of AI -- it identifies itself to the caller as AI.