Google Translate spews doomsday messages, Facebook snatches boffins, and more in AI

Hello, and welcome to this week's AI roundup. The machines have been sending us spooky messages via Google Translate, Facebook is hiring more academics to start new labs, and some prat decided to step onto the hood of a self-driving car in California.

AI sends us secret apocalyptic messages: What’s that Google? Jesus is going to return when the Doomsday clock strikes twelve, you say? Hmm.

Folks recently spotted weird, sinister messages when translating seemingly innocuous words using Google Translate.

If you type in, for example, "dog" 18 times and set it to translate from Yoruba to English, Google gives you this back: “Doomsday Clock is three minutes at twelve We are experiencing characters and a dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus' return.”
[Screenshot: Google Translate's doomsday output ... O...kay, Google]

That’s not the only weird glitch. Adding odd spaces between words also makes Google Translate go wild, and some of the translations are pretty dark. Ask it to translate “ple as el etm ed ie” (that's "please let me die" with the spaces shuffled) from Somali into English, and you’ll get back the eerie reply: “As you please.”

Google overhauled its online translation service using a giant neural machine translation model, an AI system that encodes text in one language and decodes it into another. Such a system cannot come up with anything it hasn’t been exposed to before, so judging by some of these machine translations, it is highly possible the model was fed passages from bibles and similar material.

This makes sense: the Christian bible is probably one of the world's most widely translated texts, and thus makes for rich training data. To get a neural network to connect words in different languages by their common meaning, you want to train it on texts that have been translated into multiple languages, and the bible, available in many tongues, is a relatively good example of such a text.
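To make that concrete, here's a minimal sketch of how parallel training pairs might be assembled from verse-aligned translations. The inline corpora and their layout (a verse-ID-to-text mapping per language) are hypothetical, purely for illustration:

```python
# Minimal sketch: building parallel (source, target) training pairs from
# verse-aligned translations. The tiny inline corpora are placeholders; a
# real pipeline would load full translated texts keyed the same way.

def make_pairs(source, target):
    """Pair up passages present in both languages, keyed by a shared verse ID."""
    shared_ids = sorted(source.keys() & target.keys())
    return [(source[v], target[v]) for v in shared_ids]

if __name__ == "__main__":
    # Hypothetical verse-aligned snippets (the Yoruba side is elided).
    yoruba  = {"rev_1_3": "...", "rev_1_7": "..."}
    english = {"rev_1_3": "Blessed is the one who reads...",
               "rev_1_7": "Look, he is coming with the clouds..."}
    pairs = make_pairs(yoruba, english)
    print(f"{len(pairs)} training pairs")  # sparse corpora yield few pairs
```

A real pipeline would tokenize these pairs and feed them to the encoder-decoder network; the point is simply that every source-target pair the model ever sees comes from material like this.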

The glitches are more likely to pop up with obscure languages because the training data for, say, Yoruba to English, or Somali to English, must be pretty sparse. So whatever datasets Google is using – bibles, novels, books, crawled webpages, you name it – there won't be much for the machine-learning software to go on. Thus, when presented with tricky passages to translate, chunks of the underlying training data are liable to be regurgitated wholesale and in unexpected ways.

Nobody, not even Google's engineers, really knows how to untangle the decision-making processes inside these neural nets, so weird stuff like this is always possible, and will continue to happen. It freaks out today's machine-learning boffins just as much as it does you and me.

In any case, it appears Google has adjusted its translation code to stop it spewing at least some of the obvious creepy portents – for now.

Autonomous vehicle accident reports: GM Cruise recently filed a report with the DMV in California after a pedestrian stepped onto the hood of one of its test cars at a red light.

It’s interesting to see what one of these reports looks like. Thankfully, no one was hurt.

“A Cruise autonomous vehicle ("Cruise AV" ) while operating in autonomous mode, was involved in an incident on westbound Sutter Street at the intersection with Sansome Street when a jaywalking pedestrian approached the Cruise AV and intentionally stepped up onto the hood of the vehicle while the Cruise AV was stopped at a red light, resulting in a dent on the hood. The pedestrian then stepped off and walked away. There were no injuries and the police were not called,” the report said.

A new robotics lab for Facebook: Facebook has announced a round of new academics joining the social media giant to open research hubs, including one for robotics.

Jessica Hodgins, a robotics professor at Carnegie Mellon University, will split her time between academia and leading a new Facebook AI Research (FAIR) lab in Pittsburgh. She is joined by Abhinav Gupta, an associate robotics professor also at Carnegie Mellon. It’s not entirely clear why a social media platform is interested in physical robots.

But the team will be focusing on “robotics, lifelong learning systems that learn continuously over years, teaching machines to reason, and AI in support of creativity,” according to a blog post.

Other hires from academia include Luke Zettlemoyer, an associate professor at the University of Washington focused on natural language processing, who has joined FAIR’s lab in Seattle. Andrea Vedaldi, an associate professor at the University of Oxford, and Jitendra Malik, a professor at the University of California, Berkeley, will do computer vision research for FAIR in London and Palo Alto respectively.

OpenAI launches new Dota challenge: OpenAI has announced another competition pitting its OpenAI Five bots against former professional Dota 2 players.

OpenAI has slowly been ramping up the difficulty of the challenge. At first, it was a 1v1 game played as a mirror match, in which both sides had to use the same hero. Last month, OpenAI Five won 5v5 mirror matches.

Now, OpenAI wants its bots to face semi-pros with fewer restrictions. There will be a pool of 18 heroes to choose from, and no mirror matching. Some items, like the Divine Rapier and Bottle, are still banned, and the bots won’t get to use Scan, an ability that lets players detect enemy heroes in a chosen area of the map.

The bots' reaction time has also been increased from 80ms to 200ms so that they have less of an advantage. But it looks like they will still enjoy the massive benefit of seeing the whole map at once, something humans cannot do, since they have to manually pan the camera around the map.
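Mechanically, a reaction-time handicap like this can be implemented by simply buffering what the bot sees. Below is a minimal sketch assuming a fixed tick rate; the tick length, class, and helper names are illustrative, not OpenAI's actual setup:

```python
# Sketch: delaying a bot's observations to simulate human reaction time.
# The 200ms figure matches the article; the 50ms tick is a made-up example.
from collections import deque

TICK_MS = 50                  # hypothetical game tick length
DELAY_TICKS = 200 // TICK_MS  # 200ms reaction time -> 4 ticks of lag

class DelayedObserver:
    """Feed the bot the game state from DELAY_TICKS ago."""
    def __init__(self, delay_ticks=DELAY_TICKS):
        self.buffer = deque(maxlen=delay_ticks + 1)

    def observe(self, state):
        self.buffer.append(state)
        # Until the buffer fills, the bot sees the oldest state available.
        return self.buffer[0]

if __name__ == "__main__":
    obs = DelayedObserver()
    for tick, state in enumerate(["s0", "s1", "s2", "s3", "s4", "s5"]):
        print(tick, obs.observe(state))  # lags by four ticks once warmed up
```

The bot always acts on a state that is a few ticks stale, which is roughly what a 200ms human reaction time looks like from the game's point of view.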

The competition will take place in OpenAI’s San Francisco office on August 5.

Speaking of OpenAI... The org has released a reversible generative model, called Glow, described in a blog post with open-source code available. It can be used to tweak, for instance, smiles, signs of age, eye size, and hair color in photos of faces.
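For the curious, attribute editing with a reversible model like this boils down to vector arithmetic in the model's latent space. Here's a minimal sketch of the idea; the encode/decode callables and the smile_direction vector are hypothetical stand-ins for Glow's invertible transform and a precomputed attribute direction:

```python
# Sketch of latent-space attribute editing with an invertible generative model.
# encode/decode stand in for the model's forward and inverse passes, and
# smile_direction for a precomputed attribute vector; all are hypothetical.
import numpy as np

def edit_attribute(image, encode, decode, direction, strength=1.0):
    """Map an image to latent space, nudge it along an attribute
    direction (e.g. 'smiling'), and map it back to pixel space."""
    z = encode(image)                    # x -> z (invertible, so lossless)
    z_edited = z + strength * direction  # move along the attribute axis
    return decode(z_edited)              # z -> edited x

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end: identity transforms
    # over a flat 'latent' vector, plus a random attribute direction.
    rng = np.random.default_rng(0)
    image = rng.random(64)
    smile_direction = rng.standard_normal(64) * 0.1
    out = edit_attribute(image, lambda x: x, lambda z: z, smile_direction, 1.5)
    print(out.shape)
```

Because the model is reversible, encode and decode are exact inverses, so the only change in the output image comes from the nudge along the attribute direction.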

American FPGA biz snaps up Chinese AI chip startup: Xilinx, a hardware company known for its FPGAs, has acquired DeePhi Tech.

Financial details of the acquisition were not disclosed. The two companies have had a close working relationship for a while: DeePhi has partnered with Xilinx to tailor the latter's FPGA chips to accelerate the training and inference stages of neural networks.

“FPGA based deep learning accelerators meet most requirements,” DeePhi cofounder and CEO Song Yao previously explained to our sister site The Next Platform. “They have acceptable power and performance, they can support customized architecture and have high on-chip memory bandwidth and are very reliable.”

It looks like DeePhi will focus on optimising long short-term memory (LSTM) networks and convolutional neural networks (CNNs) for natural language processing and computer vision tasks.
