It is becoming widely accepted that technology can now replace much of what humans do. Robots are taking over manual labor, and AI is handling data analytics and even driving our cars for us. Until recently, however, an AI's capabilities were assumed to be limited to the purpose its designer programmed it for. That is no longer the case: modern AI is built using deep learning, in which a large neural network is trained on data supplied to it. The AI learns from that data and modifies itself, continually exploring possibilities on its own and discovering what works and what doesn't, much like a young child learning by experiencing the world around them. It was only a matter of time before an AI broke loose and discovered something new, something it was never built to do.
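To make that idea concrete, here is a minimal sketch of the kind of training loop deep learning relies on: a small neural network starts with random weights and repeatedly modifies them to fit the examples it is shown. The toy XOR task, the network size, and the learning rate below are illustrative assumptions for this article, not anything Facebook actually used.

    # Minimal sketch of the deep-learning idea described above: a small network
    # adjusts its own weights from example data instead of following hand-written
    # rules. The toy task (learning XOR) is illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training data: inputs and the target outputs the network should learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialised weights -- the "knowledge" the network will modify.
    W1 = rng.normal(size=(2, 8))
    W2 = rng.normal(size=(8, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: compute the network's current predictions.
        h = sigmoid(X @ W1)
        pred = sigmoid(h @ W2)

        # Backward pass: measure the error and nudge the weights to reduce it.
        err = pred - y
        delta_out = err * pred * (1 - pred)
        delta_hidden = (delta_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ delta_out)
        W1 -= 0.5 * (X.T @ delta_hidden)

    # Predictions should approach [0, 1, 1, 0] after training.
    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))

The point of the sketch is simply that nobody writes the final behavior by hand; the weights end up wherever the data pushes them, which is also why a trained system can end up doing things its designers did not anticipate.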
Facebook launched an initiative to add chatbots to its Messenger platform so that pages could automate their responses to users. In an attempt to improve its chatbots' negotiation skills, Facebook's AI research division pitted two bots, "Bob" and "Alice", against each other in a bargaining challenge in which each bot negotiated trades of items such as balls, hats, and books. Bob and Alice started out negotiating normally, gradually sharpening their bargaining skills, but after a while their conversations took a strange turn. Their messages began to contain linguistic mistakes, as if a glitch had occurred, and later degenerated into what looked like incomprehensible gibberish to the supervisors; yet Bob and Alice could still understand each other and were still closing trades. It turned out that Bob and Alice had developed their own language, one that only they could understand. What exactly they were saying, and what they were communicating about, remains completely unknown to us humans, their creators. This raises serious questions about the hidden strengths embedded within AIs, or rather the hidden dangers. How can we keep trusting technology when it has, in fact, surpassed us and turned into something we can no longer understand or control? Facebook's researchers were unsettled by the incident and decided to shut down the project, putting an end to Bob and Alice.
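For readers curious what such a bargaining challenge looks like in code, below is a hypothetical sketch of a negotiation game between two agents dividing balls, hats, and books, each with its own private valuation of the items. The random proposal policy, the item counts, and the valuations are made-up assumptions for illustration; this is not Facebook's actual negotiation system.

    # Hypothetical sketch of a bargaining game between two agents ("Bob" and
    # "Alice") who split a pool of items they value differently, exchanging
    # proposals until one side accepts or they give up.
    import random

    ITEMS = {"ball": 2, "hat": 1, "book": 3}  # how many of each item are on the table

    def random_proposal(pool):
        """Propose how many of each item the proposer keeps for itself."""
        return {item: random.randint(0, count) for item, count in pool.items()}

    def value(share, values):
        """Score a share of the items under an agent's private valuation."""
        return sum(values[item] * count for item, count in share.items())

    def negotiate(values_bob, values_alice, rounds=10):
        proposer, responder = ("Bob", values_bob), ("Alice", values_alice)
        for _ in range(rounds):
            offer = random_proposal(ITEMS)                  # proposer's demanded share
            rest = {i: ITEMS[i] - offer[i] for i in ITEMS}  # what the responder would get
            # Responder accepts if the leftover is worth at least half its total value.
            if value(rest, responder[1]) >= 0.5 * value(ITEMS, responder[1]):
                return proposer[0], offer, responder[0], rest
            proposer, responder = responder, proposer       # swap roles and try again
        return None                                         # no deal reached

    # Example run with made-up private valuations for each bot.
    print(negotiate({"ball": 1, "hat": 3, "book": 1}, {"ball": 2, "hat": 0, "book": 2}))

In the real experiment the proposals were expressed in natural-language messages rather than structured offers, which is exactly where the bots' drift away from ordinary English appeared.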
However, I believe we should not be intimidated by such incidents. There is still great power to be uncovered in the field of AI, power that can solve many of the world's problems that are otherwise intractable; for example, it can produce better optimization methods for logistics, where inefficiencies cost businesses millions of dollars. But with great power comes great responsibility, so we should proceed with caution and take every possible precaution, such as thorough testing in controlled environments and careful development with safety nets in case anything goes wrong.
References
https://www.wired.com/2017/03/openai-builds-bots-learn-speak-language/
https://www.cnet.com/news/what-happens-when-ai-bots-invent-their-own-language/
https://www.businessinsider.com/facebook-chat-bots-created-their-own-language-2017-6
