SAN FRANCISCO, June 4 —  A group of researchers at the Chinese Web services company Baidu has been barred from participating in an international competition for artificial intelligence technology after organisers discovered that the Baidu scientists broke the contest’s rules.

The competition, which is known as the “Large Scale Visual Recognition Challenge”, is organised annually by computer scientists at Stanford University, the University of North Carolina at Chapel Hill and the University of Michigan.

It requires that computer systems created by the teams classify the objects in a set of digital images into 1,000 different categories. The rules of the contest permit each team to run test versions of their programs twice weekly ahead of a final submission as they train their programs to “learn” what they are seeing.

However, on Tuesday, the contest organisers posted a public statement noting that between November and May 30, the Baidu team had used different accounts to submit results to the contest server more than 200 times, “far exceeding the specified limit of two submissions per week.”

Jitendra Malik, a University of California, Berkeley, computer scientist who is a pioneer in the field of computer vision, compared the accusations against Baidu to drug use in the Olympics.

“If you run a 9.5-second 100-meter sprint, but you are on steroids, then how can your result be trusted?” Malik said.

The episode has raised concern within the computer science community, in part because the field of artificial intelligence has historically been plagued by claims that run far ahead of actual science.

Indeed, as early as 1958, when Frank Rosenblatt introduced the first so-called neural network system, a newspaper article about the advance suggested that it might lead to “thinking machines” that could read and write within a single year.

In the 1960s, when John McCarthy, the scientist who coined the term “artificial intelligence”, proposed a new research laboratory to Pentagon officials, he claimed that building a working artificial intelligence system would take a decade.

When that did not happen, the field went through periods of decline in the 1970s and 1980s, which have since been described as “AI winters.”

Now rapid progress in a hot artificial intelligence field known as “deep learning” has touched off a computing arms race among powerful companies like Facebook, Google, IBM, Microsoft and Baidu, and scientists at each company have trumpeted improved performance in vision and speech recognition.

As the companies compete in new services as varied as self-driving cars and online personal assistants that converse with mobile phone users, the technologies have moved from the backwater of academic journals to front-page news.

With that has come controversy. In the past year, technologists and scientists like Elon Musk, founder of Tesla; Stephen Hawking, the celebrated physicist; and Bill Gates, co-founder of Microsoft, have warned that the potential emergence of self-aware computing systems might prove to be an existential threat to humanity.

But artificial intelligence researchers have a more basic concern: that their work will once again fall short of expectations, leading to yet another fallow period for their field.

And the Baidu controversy adds to the fretting.

This year, Baidu announced that it had built a custom supercomputer named Minwa with the intention of dedicating it to the image recognition contest. Baidu researchers subsequently made a series of announcements about the computer’s success, including one touting a result that surpassed the accuracy of an earlier score posted by Google scientists.

On May 4, Baidu posted an article on its technology blog headlined “Baidu Achieves Top Results on Image Recognition Challenge”.

The article has since been removed.

Contest organisers said in a statement that by submitting many slightly different solutions it was possible for Baidu to “achieve a small but potentially significant advantage” and “choose methods for further research”.

Because Baidu had submitted so many more times than was permissible, it would not be possible to fairly compare its results with those of other teams, the statement said. “We therefore requested that they refrain from submitting to the evaluation server or the challenge for the next 12 months,” the judges said.

The computer science community has been buzzing.

“We are all wondering what scenario took place behind this debacle,” said Yann LeCun, a Facebook artificial intelligence researcher and one of the creators of the deep learning field.

“Was it the actions of a lone young researcher under intense pressure to deliver, and under weak oversight by his senior co-authors?”

The Baidu episode raises broader questions about scientific research in an era when the lines have begun to blur between basic science and new technologies that have huge commercial potential.

Image- and speech-recognition technologies are being used to deploy a variety of powerful new services in the Internet and computing markets.

For example, Microsoft is expected to make the improved quality of its speech technology a major selling point in its new Windows 10 operating system, due to be released in July.

A number of computer science researchers said they were concerned about the episode but declined to speak on the record, in part because it is not yet clear what the motive of the Baidu researchers actually was.

Scientists organising the competition, also called the ImageNet Challenge, posted a comment by a Baidu researcher, Ren Wu, who is based in the company’s Silicon Valley research office.

“We apologise for this mistake and are continuing to review the results. We have added a note to our research paper, ‘Deep Image: Scaling Up Image Recognition’, and will continue to provide relevant updates as we learn more,” the statement read.

“We are staunch supporters of fairness and transparency in the ImageNet Challenge and are committed to the integrity of the scientific process.” — New York Times