By Aaron Krumins
Despite all the recent hullabaloo concerning artificial
intelligence, in part fueled by dire predictions made by the likes of
Stephen Hawking and Elon Musk, there have been few breakthroughs in the
field to warrant such fanfare. The
artificial neural networks
that have caused so much controversy are a product of the 1950s and
60s, and remain relatively unchanged since then. The strides forward
made in areas like speech recognition owe as much to improved datasets
(think big data) and faster hardware as to actual changes in AI
methodology. The thornier problems, such as teaching computers to
process natural language and make leaps of logic, remain nearly as
intractable now as they were a decade ago.
This may all be about to change. Last week, the British high priest of
artificial intelligence, Professor Geoffrey Hinton, who was snapped up by Google two years back during its massive acquisition of AI experts,
revealed that
his employer may have found a means of breaking the AI deadlock that
has persisted in areas like natural language processing.
[Image: AI guru Geoffrey Hinton at the Google campus]
The hope comes in the form of a concept called “thought
vectors.” If you have never heard of a thought vector, you’re in good
company. The concept is both new and controversial. The underlying idea
is that by assigning every word a set of numbers (a vector), a
computer can be trained to understand the actual meaning of those words.
Now, you might ask, can’t computers already do that — when I
ask Google the question, “Who was the first president of the United
States?”, it spits back a short bit of text containing the correct
answer. Doesn’t it understand what I am saying? The answer is no. The
current state of the art has taught computers to understand human
language much the way a trained dog understands it when squatting down
in response to the command “sit.” The dog doesn’t understand the actual
meaning of the words, and has only been conditioned to give a response
to a certain stimulus. If you were to ask the dog, “sit is to chair as
blank is to bed,” it would have no idea what you’re getting at.
Thought vectors provide a means to change that: actually
teaching the computer to understand language much the way we do. The
difference between thought vectors and the previous methods used in AI
is in some ways merely one of degree. While a dog maps the word sit to a
single behavior, using thought vectors, that word could be mapped to
thousands of sentences containing “sit.” The result would be
the computer arriving at a meaning for the word more closely resembling
our own.
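The “sit is to chair as blank is to bed” puzzle hints at how word vectors are actually queried: analogies become arithmetic on the vectors themselves. The sketch below is a toy illustration of that idea only; the words, the three “dimensions,” and every number in it are invented for this example and bear no relation to Google’s systems or any real trained embeddings.

```python
import math

# Hypothetical word vectors, hand-made for illustration. In a real system
# these numbers would be learned from thousands of example sentences.
# Invented dimensions: [furniture-ness, action-ness, resting-ness]
embeddings = {
    "sit":   [0.1, 0.9, 0.8],
    "chair": [0.9, 0.1, 0.7],
    "lie":   [0.1, 0.8, 0.9],
    "bed":   [0.9, 0.1, 0.9],
    "run":   [0.0, 1.0, 0.1],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 when two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    target = [bv - av + cv for av, bv, cv in
              zip(embeddings[a], embeddings[b], embeddings[c])]
    # Pick the remaining word whose vector lies closest to the target.
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("sit", "chair", "lie"))  # → bed
```

With these made-up vectors, “sit is to chair as lie is to …” resolves to “bed,” because subtracting “sit” from “chair” isolates a rough furniture direction, and adding it to “lie” lands nearest the “bed” vector. The dog from the earlier analogy has no such arithmetic available; that geometric structure is what the vector representation adds.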
While this sounds fine and dandy, in practice things will
prove more difficult. For instance, there is the issue of irony, when a
word is being used in more than just its literal sense. Taking a crack
at his contemporaries across the pond, Professor Hinton remarked, “Irony
is going to be hard to get, [as] you have to be master of the literal
first. But then, Americans don’t get irony either. Computers are going
to reach the level of Americans before Brits.” While this may provide
some small relief to Hinton and his compatriots, regardless of which
nationality gets bested by computers first, it’s going to come as a
strange awakening when the laptop on the kitchen counter starts talking
back to us.