Why is Brightwork Research Interested in Evaluating AI Claims?
Executive Summary
- We cover AI, data science, and Big Data claims, which has required us to dig into the foundational areas of these topics.
- This is necessary to fact-check the claims made by entities on these topics.
Introduction
Why does Brightwork Research & Analysis have an interest in AI? We fact-check vendors, and AI claims have become increasingly prevalent among vendors. AI has become one of the predominant ways that companies lie.
Our References for This Article
If you want to see our references for this article and other related Brightwork articles, see this link.
What Prompted Us to Research AI
Our fact-checking efforts forced us into researching AI, which of course connects directly to data science and Big Data. There are two essential things to know about AI.
- First, the AI claims being made are greatly exaggerated and misrepresent the state of the art in AI.
- Second, those who work in the field are reluctant to contradict the claims made by software vendors or luminaries like Bill Gates and Elon Musk. There is much more money to be made in aligning yourself with fictitious claims around AI than in contradicting them or otherwise inserting sanity into the proceedings.
What We Do
Brightwork Research & Analysis is one of the only fact-checking entities in the enterprise software space. Nearly all of the information about AI, data science, and Big Data originates with either software vendors or consulting companies and is funneled through IT analysts or IT media, nearly all of whom receive advertising or paid placements from those same vendors and consulting companies.
There is no quality control in this process, and both IT and non-IT media have already been found to have reported erroneously on many areas of AI, data science, and Big Data. One example is our article How Awful Was the Coverage of the McDonald’s AI Acquisition?
The vendors, consulting companies, and their compliant IT analysts and IT media create a system that leads to tremendous waste, a far higher project failure rate than is necessary, and one bubble after another. Each bubble is rarely analyzed in retrospect, as a new bubble is always created to distract people with the latest unproven item. IT media are paid based upon how well they help promote IT items, which is absolutely counter to accuracy.
We have for years tracked clearly false statements by companies trying to raise money or sell AI, data science or Big Data projects. It is easy for these companies to find willing participants in the analyst and media spheres to serve as message repeaters.
The Reality of AI, Data Science, and Big Data
AI, data science, and Big Data are making real contributions to our lives, but all three are significantly oversold in terms of both their current state and their future state. Much as in the 2008 bubble, a great deal of lying is being performed, because, as this book is written, slapping on the AI label is the easiest way to raise money. The fact that parts of AI, Big Data, and data science are genuine is part of the problem. When something is half true, or true in certain areas, it is far more challenging to determine whether the rest is true. This is particularly so when telling truth from falsehood requires a significant amount of domain expertise or technical knowledge. As a researcher who fact-checks the software industry, I have compiled the evidence to say that the majority of companies in the IT space do not care about what is true. AI claims are a perfect place for unscrupulous entities to operate, because the present projections about AI make it difficult to assertively outline what the limits of AI are.
What Will Follow the Current AI, Data Science, and Big Data Boom
This will inevitably lead to disappointment when it becomes more apparent that these items cannot meet the promises that have been made for them. Enormous amounts of resources have been poured into all three of these items and will continue to be poured in. The big three promise generalized benefits, but the case studies they provide tend to be very narrow and do not demonstrate actual intelligence. Instead, they are robotic automation of a highly restricted process that, like Watson playing Jeopardy!, only appears intelligent when viewed from a distance. The more exposure one gains into each case study, much like learning a magician’s tricks, the more explainable the behavior of the artifact becomes.

At Brightwork Research & Analysis, we use a type of AI to create summaries of articles because it is much faster than doing so manually. We recently added text to speech, courtesy of Google Text-to-Speech, which is based upon a neural network. Voice recognition, like writing software, is also undoubtedly useful, as is grammar checking. I used all of these things to write part of this book.
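For readers curious about what using Google Text-to-Speech looks like in practice, here is a minimal sketch of a call to the Google Cloud Text-to-Speech API in Python. It is an illustrative example only, not a description of our production setup; the voice name, input text, and output filename are arbitrary choices, and the client requires Google Cloud credentials to be configured.

```python
# Minimal sketch: synthesize a short piece of text with Google Cloud Text-to-Speech.
# Requires the google-cloud-texttospeech package and configured Google Cloud credentials.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# The text to read aloud (arbitrary example input).
synthesis_input = texttospeech.SynthesisInput(
    text="This article summary was generated automatically."
)

# A WaveNet voice -- one of the neural-network-based voices mentioned above.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",
)

audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# Write the returned audio bytes to a file.
with open("summary.mp3", "wb") as out:
    out.write(response.audio_content)
```

Useful, certainly; but as the next section argues, nothing in this procedure involves understanding the text being read.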
The Problem with Anthropomorphizing AI
While I leverage these items, I never convince myself that the software I am using is intelligent, or that it will become conscious, or that, like HAL, it will eventually lock me out of the spacecraft once it learns of my plans to unplug it. Instead, all of these technologies run through an automated procedure; unlike a sentient entity, the software has no opinion on what it is doing, because it is not alive and is merely aping its human instructions. A many-layered neural network may develop those instructions, but this only means that the procedure has more layers.
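To make this concrete, the sketch below (in Python with NumPy, using made-up weights) shows all that a "many-layered" network does at inference time: it repeatedly multiplies its inputs by fixed weights and applies a simple function. Adding layers adds more of the same arithmetic, not understanding.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity: negative values become zero.
    return np.maximum(0, x)

# Made-up, fixed weights standing in for a trained network (purely illustrative).
rng = np.random.default_rng(0)
layers = [
    rng.standard_normal((4, 8)),   # layer 1 weights
    rng.standard_normal((8, 8)),   # layer 2 weights
    rng.standard_normal((8, 2)),   # output layer weights
]

def forward(features):
    # "More layers" simply means more of the same arithmetic applied in sequence.
    activation = np.asarray(features, dtype=float)
    for weights in layers[:-1]:
        activation = relu(activation @ weights)
    # Final scores: numbers out, with no comprehension of what they represent.
    return activation @ layers[-1]

print(forward([0.2, -1.3, 0.7, 0.0]))
```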
The same point is made in the following quotation.
“Mimicking the human mind is a daunting task with no guarantee of success. There have been some legendary exceptions, like AT&T’s Bell Labs, Lockheed Martin’s Skunk Works, and Xerox’s Parc, but few companies are willing to support intellectually interesting research that does not have a short term payoff. It is more appealing to make something that is useful and immediately profitable.
I don’t know how long it will take to develop computers that have a general intelligence that rivals humans. I suspect that it will take decades.
Statistical evidence is not sufficient to distinguish between real knowledge and bogus knowledge. Only logic, wisdom, and common sense can do that. Computers cannot assess whether things are truly related or just coincidentally correlated because computers do not understand data in any meaningful way. Computers do not have the human intelligence needed to distinguish between statistical patterns that make sense and those that are spurious. Computers today can pass the Turing Test, but not the Smith Test. The situation is exacerbated if the discovered patterns are concealed inside black boxes that make the models inscrutable. Then no one knows why a computer algorithm concluded that this stock should be purchased, this job applicant should be rejected, this patient given this medication, this prisoner should be denied parole, this building should be bombed.” – The AI Delusion
Listening to Inaccurate Sources
Many people are convinced of the intelligence of various items because they are listening to consulting firms, to luminaries like Elon Musk, to companies trying to sell their products or increase their stock price, and to media entities who sell advertising space to the same companies that need to promote AI, data science, and Big Data. None of these is a reliable source. For those interested in Elon Musk’s track record in prediction, see the site elonmusk.today.
Is Elon Musk someone people should be listening to on AI? Looking at his track record, the answer should be “no.” Elon Musk’s primary skill is as a promoter. Promoters, by their very nature, provide inaccurate information for personal gain.
Something important to note is that a great many AI, data science, and Big Data projects have no ROI. They were sold on the basis of false claims, and companies do not normally go around admitting they were tricked by consulting firms or software vendors. However, in a Forbes article (a typical article that serves roughly 25 ads while it is being read), which was most likely a paid placement (that is, the author paid Forbes to run the article; anyone with enough money can publish anything they want in Forbes), the reader is told not to worry about the ROI of AI. Notice the following comment.
This means looking beyond traditional, cold ROI measures, and looking at the ways AI will enrich and amplify decision-making. Ravi Bapna, professor at the University of Minnesota’s Carlson School of Management, says attitude wins the day for moving forward with AI. In a recent Knowledge@Wharton article, he offers four ways AI means better decisions:
Observe how asking for evidence of ROI is “cold.” What a relief this must be for consulting companies and vendors that have sold AI projects that are failing, as with IBM, which we cover in the article How Many AI Projects Will Fail Due to a Lack of Data?
Now, after the AI project has been sold, it is important not to focus on ROI. In fact, why focus on any type of benefit coming out of the AI project at all? Perhaps employing AI is its own virtue?
AI promotes counter-factual thinking: Data by itself can be manipulated to justify pre-existing notions, or miss variables affecting results. “Counter-factual thinking is a leadership muscle that is not exercised often enough,” says Bapna relates. “This leads to sub-optimal decision-making and poor resource allocation.” Casual analytics encourages counter-factual thinking. “Not answering questions in a causal manner or using the highest paid person’s opinion to make such inferences is a sure shot way of destroying value for your company.” – Forbes
What does this paragraph even mean? Also, is the author proposing that counter-factual thinking is a good thing or a bad thing? It would seem it would have to be bad; why would you want thinking that runs counter to the facts? However, this paragraph seems to propose that it is good. And how is not using the highest-paid person’s opinion counter-factual?
Before we can even get into the topic of AI, the author of this article needs to learn how to write a paragraph, because I cannot tell what he is trying to say. This paragraph cannot be analyzed because it is inscrutable; that is, uninterpretable.
Despite negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments into AI that it would be difficult for them to just stop using them or to stop the development. – Forbes
That is, as soon as something becomes sufficiently promoted, investment in it continues regardless of the actual benefits, because the decision-makers who bought into the claims become captured by them; they are unwilling to admit to others that they received bad information and did not do the work necessary to fact-check the claims on which they based their decision.
The Value We Offer in Evaluating AI
Most of the academic work on these topics is not accessible to non-AI practitioners, and most of the information published around AI by companies and consulting firms is unreliable or exaggerated due to the profit motive. Brightwork Research & Analysis is a research entity with no incentive to either promote or minimize these topics. We have worked with many of the things we cover, ranging from forecasting to speech-to-text, but beyond merely using them, the author brings a researcher’s scalpel to these topics.
Conclusion
There is little fact-checking in the areas of AI, data science, and Big Data. This allows vendors to make any claim they want while being required to provide little to no evidence. The term AI has little meaning on its own, as every AI project uses something more specific than "AI." The term is therefore designed to virtue-wash whatever is actually being used.