A mastery of skills is not the same as general intelligence, for humans or machines, according to a Google AI researcher.
Cutting through the hype surrounding artificial intelligence, François Chollet, an AI researcher at Google, has proposed the Abstraction and Reasoning Corpus (ARC), an intelligence test that could shape the course of future AI research. To date, there has been no satisfactory definition of artificial intelligence, nor any meaningful evaluation method, according to his recent paper, “On the Measure of Intelligence.” There are benchmarks, such as mastery of video games, but they measure only narrowly defined skills, not anything that could be called human-like intelligence.
Developers, of course, are not claiming that their chess or Go champions possess general intelligence, but all such work in this field is striving toward that goal, even if only implicitly. And therein lies the problem. Goals and evaluation benchmarks are essential to science, but they need to be explicit — “precise, quantitative definitions and measures of intelligence,” in Chollet’s words. Without a formal definition, AI researchers fall back on vague, commonly used definitions that are conceptual rather than actionable. They cannot be measured objectively without relying on tests “with implicit definitions and biases that stretch back decades.”