Diffbot is an AI that learns by constantly reading the internet, MIT Technology Review reported. It pulls facts from every public web page it can reach, using them to grow more knowledgeable and to communicate that information to humans. Is artificial intelligence really “intelligent”?
According to MIT Technology Review, a new AI is taking a different approach to building up its intelligence. “To collect its facts, Diffbot’s AI reads the web as a human would—but much faster,” the article said.
“Using a super-charged version of the [Google] Chrome browser, the AI views the raw pixels of a web page and uses image-recognition algorithms to categorize the page as one of 20 different types, including video, image, article, event, and discussion thread. It then identifies key elements on the page, such as headline, author, product description, or price, and uses [natural language processing] to extract facts from any text.”
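The pipeline the article describes, categorize the page, identify its key elements, then extract facts from the text, can be illustrated with a toy sketch. This is not Diffbot's actual code: Diffbot classifies pages from their rendered pixels with image-recognition models and uses real NLP, whereas the stand-ins below are crude textual cues and regular expressions, with all function names and markup invented for illustration.

```python
import re

# Hypothetical page types drawn from the article's examples.
PAGE_TYPES = ["article", "video", "image", "event", "discussion"]

def categorize(html: str) -> str:
    """Guess the page type from crude textual cues (a stand-in for
    Diffbot's pixel-level image-recognition step)."""
    lowered = html.lower()
    if "<video" in lowered:
        return "video"
    if 'class="comment"' in lowered:
        return "discussion"
    return "article"  # default for this sketch

def extract_fields(html: str) -> dict:
    """Pull headline and author with regexes, standing in for the
    element-identification and NLP-extraction steps."""
    fields = {}
    headline = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.S)
    if headline:
        fields["headline"] = headline.group(1).strip()
    author = re.search(r'<span class="author">(.*?)</span>', html)
    if author:
        fields["author"] = author.group(1).strip()
    return fields

page = '<h1>AI Reads the Web</h1><span class="author">J. Doe</span><p>...</p>'
print(categorize(page))       # this sketch's rules classify it as an article
print(extract_fields(page))   # headline and author pulled from the markup
```

The real system's two-stage design, classify first, extract second, matters because the page type determines which fields are worth looking for (a price on a product page, an author on an article).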
Diffbot is likely to add fuel to the debate over whether artificial intelligence is truly intelligent or merely copying information and following instructions from its human creators. Before his unfortunate passing in 2018, Dr. Daniel Robinson taught material for The Great Courses on the subject.
Why AI Matters
The concept of artificial intelligence may not make a strong impression on some of us, but the philosophical implications of AI are weighty.
“It requires us to identify ever more accurately and systematically just what it is about our intellectual and mental and moral lives that seem to make these lives so special and beyond the range of simulation or duplication,” said Dr. Robinson, who was a member of the philosophy faculty at Oxford University and Distinguished Professor, Emeritus, at Georgetown University.
“It also gives us a handy way of examining in ever-greater detail just what it is we are doing when we are thinking and problem-solving and attempting to perceive complex patterns.”
Dr. Robinson said that artificial intelligence also matters if we could create a tool like an AI that could handle “the kinds of problems we take to be uniquely human,” whether they be existential, intellectual, rational, or political in nature. As such, an AI would serve as a heuristic for cognition and for problem-solving alike.
One famous critique of artificial intelligence rests on Gödel’s Theorem, named for the logician and mathematician Kurt Gödel.
“Gödel established beyond dispute that in any formal system sufficiently complex to generate an arithmetic, the performance of that system, the achievements of the system itself, would depend on at least one axiom that could not be established or proved or validated within the system itself,” Dr. Robinson said.
“That is, any formal system sufficiently powerful to generate an arithmetic would have a theorem or axiom that had to be found outside that system. You could not establish it or validate it within the system, and thus we get what is referred to as Gödel’s incompleteness theorem.”
Gödel’s Theorem means that any such formal system is incomplete: it depends on an axiom or theorem that can only be established outside the system itself. On that view, every computer, being a formal system, suffers from incompleteness and can’t quite be considered “intelligent.”
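Dr. Robinson’s summary can be stated a bit more formally. The following is a standard textbook formulation of the first incompleteness theorem, not Dr. Robinson’s own wording:

```latex
\textbf{First Incompleteness Theorem.}
If $F$ is a consistent, effectively axiomatized formal system strong
enough to express elementary arithmetic, then there is a sentence $G_F$
in the language of $F$ such that
\[
  F \nvdash G_F
  \quad\text{and}\quad
  F \nvdash \lnot G_F ,
\]
that is, $G_F$ can be neither proved nor refuted within $F$, even though,
on the standard interpretation, $G_F$ is true.
```

The critique of AI then runs: a computer program is such a formal system, so there will always be truths it cannot establish from within itself.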
However, a question remains: Gödel worked out that theorem himself, without external help. What if an AI could somehow do the same? What if it could identify where its operation depends on outside assistance and learn to supply that assistance itself?
This article contains material taught by Dr. Daniel Robinson. Dr. Robinson (1937–2018) was a member of the philosophy faculty at Oxford University, where he had lectured annually since 1991. He was also Distinguished Professor, Emeritus, at Georgetown University, on whose faculty he served for 30 years. He was formerly Adjunct Professor of Psychology at Columbia University, and he also held positions at Amherst College and at Princeton University. Dr. Robinson earned his PhD in Neuropsychology from City University of New York.