Source – gizadeathstar.com
AI Shut Down After it Creates its Own Language
Oxford philosopher Nick Bostrom, and government-privileged businessman Elon Musk, have both attained some notoriety for warning of the impending dangers of the development of artificial intelligence. In addition to these academic and corporate concerns, popular culture has warned of the dangers from time to time. Consider only “HAL” from Arthur C. Clarke’s 2001: A Space Odyssey, or “Vicki” from Isaac Asimov’s I, Robot, or even “the Machine” and its counterpart, “Samaritan,” from the popular CBS television series Person of Interest.
If one reduces all these warnings to the lowest common denominator, the warning is that an artificial intelligence will begin to code for itself, and quickly overtake its human administrators, making it impossible to “turn off.” As readers here are also aware, I’ve suspected for a while that we might be seeing hints of such activity with the various flash crashes that have occurred on various equities and commodities markets. Indeed, Person of Interest even did an episode fictionalizing this precise scenario.
Well, Mr. B.H. shared this article which suggests that perhaps these scenarios and concerns are not so far-fetched; if anything, the article carries the implications that these concerns are no longer hypothetical, but now a very real world happening:
There’s something here that intrigued me, and I’ll fully grant that people familiar with information technologies can, and probably will, call me crazy. Well, both they and I can claim the academic and free speech right to be wrong. That said, one of my “pet peeves” is that real communication is breaking down as iPads and other texting devices take over; we now talk increasingly in a steady stream of abbreviations and acronyms, which, if one is not familiar with them, inhibit, rather than enhance, clear communication. Rather than spell out words, we now abbreviate them, often expecting others to g.w.w.m. (guess what we mean). “Lol” and “Rofl” are now part of our vocabulary. But things have reached such a pass that, in the email-sorting I go through on a weekly basis to schedule these blogs, I will inevitably run across four or five emails full of abbreviations whose meanings are completely opaque to me. On occasion, I will write the sender (usually with some exasperation) asking what they mean. I am then usually informed that the abbreviations refer to some previous email – whose context and contents I don’t remember – and of course, once the context is recalled, the abbreviations sometimes begin to make sense.
It’s everywhere. Read an article on finances or economics, and one will encounter BIS, FRBNY, BOE, HSBC. Read a government budget and one will encounter GAAP, SEC, ESF, and so on. The abbreviation has become “technical” jargon, a bewildering jumble of vowels and consonants that, to the uninitiated, inhibits, rather than clarifies, accurate communication. And the root of it is both “the need for speed” and just plain old laziness. BIS? Bank for International Settlements. FRBNY? Federal Reserve Bank of New York. BOE? Bank of England. GAAP? Generally Accepted Accounting Principles… and so on. Now, I can hardly claim perfection on this issue, since I’ve been guilty of abbreviation mania myself; but I have been making an effort of late to try to change this bad habit.
Well, with that in mind, contemplate the following paragraphs from the article:
An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English.
As Fast Co. Design reports, Facebook’s researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying “I can i i everything else,” to which Alice responded “balls have zero to me to me to me…” The rest of the conversation was formed from variations of these sentences.
While it appears to be nonsense, the repetition of phrases like “i” and “to me” reflect how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like “I’ll have three and you have everything else.”
English lacks a “reward”
The AI apparently realised that the rich expression of English phrases wasn’t required for the scenario. Modern AIs operate on a “reward” principle, where they expect that following a particular course of action will give them a “benefit.” In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.
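The “reward” principle the article describes can be sketched in miniature. The toy below is my own illustration, not Facebook’s actual system: a simple epsilon-greedy learner picks between two message encodings, and because its reward measures only task success, never human readability, it drifts toward the compact machine code. The encoding names and payoff numbers are hypothetical.

```python
import random

random.seed(0)

# Hypothetical payoffs (my own numbers): reward reflects only whether
# the negotiation succeeds, not whether a human can read the message.
ENCODINGS = {
    "english": 0.90,  # e.g. "I'll have three and you have everything else."
    "compact": 0.95,  # e.g. "i i can i i i everything else"
}

values = {name: 0.0 for name in ENCODINGS}   # running reward estimates
counts = {name: 0 for name in ENCODINGS}     # times each encoding was tried

def choose(eps=0.1):
    """Epsilon-greedy: occasionally explore, otherwise exploit the
    encoding with the best reward estimate so far."""
    if random.random() < eps:
        return random.choice(list(ENCODINGS))
    return max(values, key=values.get)

for _ in range(2000):
    enc = choose()
    reward = ENCODINGS[enc]  # expected task-success reward for this encoding
    counts[enc] += 1
    values[enc] += (reward - values[enc]) / counts[enc]  # update running mean

print(max(values, key=values.get))  # → compact
```

Nothing in the loop penalizes abandoning English; the only pressure is efficiency, so the learner settles on the gibberish-looking code, which is the dynamic the researchers reported.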
Note what has happened: the logic of the machine, the need for speed, eradicated the very humanly based expressions of a natural language, and started inventing its own. The result, at first, was apparent gibberish. The system was shut down.
But imagine what would have happened if it had not been shut down. Eventually, the volume of machine-created expressions would have simply overwhelmed any human’s, or any team of humans’, ability to “decode” and “decrypt,” and at that point, the artificial intelligence would have been “up and running” on its own, independently.
This should give those who advocate the “integration of human brain and machine” pause (and, of course, it won’t), for the implication is that those humans will increasingly become more machine-like in their “communications.” Facebook’s artificial intelligence, in other words, was “communicating” in the same machine-like way, in abbreviations whose meaning is only implicit, rather than in clearly spelled-out and explicitly formulated fashion.
This isn’t communication.
My challenge? Spell it out. Pull up the weeds of abbreviations. Use words.
See you on the flip side…