AI Company Accused of Using Humans to Fake Its AI

On Friday, iFlytek was hit with accusations that it used human interpreters to fake output from its supposedly AI-powered simultaneous interpretation tools. In an open letter posted on the Quora-like Q&A platform Zhihu, interpreter Bell Wang said he was one of a team of simultaneous interpreters who translated for the 2018 International Forum on Innovation and Emerging Industries Development on Thursday, an event that claimed to use iFlytek’s automated interpretation service.

While a Japanese professor spoke in English at the conference on Thursday morning, a screen behind him showed both an English transcription of his remarks and what appeared to be a simultaneous Chinese translation credited to iFlytek. Wang claims the Chinese was not a machine translation at all, but a transcription of the interpretation that he and a fellow interpreter were delivering. “I was deeply disgusted,” Wang wrote in the letter.

In the open letter, Wang pointed to two examples to support his claim. First, iFlytek’s tool appeared to struggle with the Japanese professor’s English accent, rendering “Davos Forum” as “Devil’s Forum.” Despite the transcription error, the on-screen Chinese translation was correct and matched what Wang’s partner had interpreted, suggesting it was based on the human interpretation rather than on the flawed machine transcript.

Second, most of the Chinese translation matched what Wang and his partner had interpreted, including a tricky conjunction that Wang believes an AI system would have rendered literally. Despite his concerns, Wang said he did not confront the organizers or iFlytek staff at the event.

Source: sixthtone.com