Are Gartner and Other IT Analysts Correct That They Are Unbiased?
Executive Summary
- Gartner claims it has no bias, yet it takes billions in revenue from software vendors.
- Gartner stacks the deck in favor of large vendors and has many dimensions of bias, all of which are undeclared.
Introduction
In 2006, InformationWeek wrote an article titled The Credibility of Analysts. In this article, IW brought up an excellent point about IT analysis. This point is rarely discussed when a new Magic Quadrant or Forrester Wave is published or republished by vendors. However, this topic goes to the heart of the profit-maximizing and non-research business model of well-known IT analyst firms. Join us as we analyze the claims from IT analysts regarding their objectivity.
Our References for This Article
If you want to see our references for this article and other related Brightwork articles, see this link.
Notice of Lack of Financial Bias: You are reading one of the only independent sources on Gartner. If you look at the information software vendors or consulting firms provide about Gartner, it is exclusively about using Gartner to help them sell software or consulting services. None of these sources care that Gartner is a faux research entity that makes up its findings and has massive financial conflicts. The IT industry is generally petrified of Gartner and only publishes complimentary information about them. The article below is very different.
- First, it is published by a research entity, not an unreliable software vendor or consulting firm that has no idea what research is.
- Second, no one paid for this article to be written, and it is not pretending to inform you while actually being rigged to sell you software or consulting services, as is the case with a vendor or consulting firm that shares its ranking in some Gartner report. Unlike nearly every other article you will find through Google on this topic, it has had no input from any company's marketing or sales department.
Gartner and Bias
Gartner often claims that it has no bias. The following quotation is a typical example.
Bias is a nonissue at the company, CEO Hall insists. “We wouldn’t have a dollar of revenue from the user community if our objectivity and independence weren’t held in high regard,” he says. Gartner has policies in place meant to ensure objectivity. The ombudsman office reports to Gartner’s general counsel to ensure it’s free from pressure from other parts of the company. And Gartner analysts aren’t allowed to own stock in the companies they cover.
None of those policies are verifiable or auditable. The only policy that Gartner has in place is its ombudsman, which is covered in the article How to Understand the Gartner Ombudsman Best.
The Problem with the Argument Based On Popularity
Second, Hall’s argument is the same as the one the University of Phoenix used to defend itself against charges of poor outcomes for its students. Their logic was as follows: because students chose the University of Phoenix, it must be good.
The argument used by Gartner is circular.
- Gartner has no bias because it is popular.
- Gartner is popular because it has no bias.
Let’s see how some other companies could use this same logic.
- If McDonald’s did not offer healthy food, no one would buy its meals.
- So many wealthy people would not have invested with Bernie Madoff if anything unusual were going on.
- Volkswagen would not have been able to sell its “clean diesel” to so many customers if its cleaning technology were a fraud.
- If political parties were corrupt, no one would vote for them.
The Logical Fallacy of Popularity
This argument is so well known to be false that it is categorized as a logical fallacy: specifically, the fallacy of “argument by number,” which holds that if many people believe something to be true, it must therefore be true. It leaves out the obvious alternative: those people might be deluded or misinformed. Wikipedia explains it as follows:
“This type of argument is known by several names,[1] including appeal to the masses, appeal to belief, appeal to the majority, social justice, appeal to democracy, appeal to popularity, argument by consensus, consensus fallacy, authority of the many, bandwagon fallacy, vox populi,[2] and in Latin as argumentum ad numerum (“appeal to the number”), fickle crowd syndrome, and consensus gentium (“agreement of the clans”). It is also the basis of a number of social phenomena, including communal reinforcement and the bandwagon effect.”
This type of argument should concern anyone using Gartner’s opinions (what Gartner calls its research in court documents) because the fallacy of argument by number is primarily used by those who have no evidence to provide.
Another problem with Hall’s statement appears before one even gets to the argument by number: many people, both in the user community and particularly in the vendor community, do question Gartner’s objectivity. (Hall thus managed to cram two falsehoods into a single sentence of only 22 words.) Vendors such as ZL Technologies and NetScout have sued Gartner. Yet Hall’s statement presents Gartner’s objectivity as something that is never questioned.
Did these lawsuits happen, or didn’t they?
Gartner’s Objectivity is Unquestioned?
So, the presumption that Gartner’s objectivity is not questioned is false. Many vendors would say so publicly, but they fear reprisal from Gartner. Gartner is now so influential, primarily through a corrupt business model and a constant stream of acquisitions, that it functions as a monopoly in the IT analyst space.
Vendors know that Gartner is pay to play, and if they can afford to, they usually play. Vendors publish the latest Magic Quadrant results with aplomb (something they pay Gartner for the privilege of doing). Vendors are looking for any edge they can get in sales pursuits. But that does not necessarily mean they take the MQ results seriously internally.
The History of Media Bias
Historical media analysis tells us that media output tends to align with the interests of those who control it and pay for it. For example, the primary reason that advertising-supported television tends towards so much unchallenging programming is that challenging programming puts viewers in a critical mindset, making them less susceptible to advertising. The best programming to put viewers in the correct mindset for suggestibility is lighthearted fare.
The evidence that cigarette advertising, and the payments behind it, shaped media coverage of the dangers of smoking is now well established. Tobacco was at least suspected of causing cancer before doctor-endorsement ads of the kind described below began running in 1931. Yet such ads, with doctors promoting certain brands, continued to run for decades after the industry knew of the relationship by the late 1950s.[1] Cigarette advertising was shown to directly affect the number of anti-smoking stories publications ran. In effect, for decades, cigarette advertising slowed the dissemination of information about the real risks of smoking.
The story repeats itself today as ExxonMobil has contributed to the Heritage Foundation and the National Center for Policy Analysis (NCPA). Unsurprisingly, both entities have published “misleading and inaccurate information about climate change,” according to Bob Ward, the policy director at the Grantham Research Institute on Climate Change and the Environment at the London School of Economics.
As pointed out by the Stanford School of Medicine,
“Unlike with celebrity and athlete endorsers, the doctors depicted were never specific individuals, because physicians who engaged in advertising would risk losing their license. It was contrary to accepted medical ethics at the time for doctors to advertise, but that did not deter tobacco companies from hiring handsome talent, dressing them up to look like throat specialists, and printing their photographs alongside health claims or spurious doctor survey results. These images always presented an idealized physician—wise, noble, and caring. This genre of ads regularly appeared in medical journals such as the Journal of the American Medical Association, an organization which for decades collaborated closely with the industry. The big push to document health hazards also did not appear until later.”
Evidence of Advertisers Interfering in Media Content
The paper What Do the Papers Sell? A Model of Advertising and Media Bias and other papers on the influence of payments on the media reach a consistent conclusion.
“The regulatory view grew out of evidence that some advertisers seriously interfere with media content. Baker (1994) and Bagdikian (2000) detailed accounts of the history of suppression of news on tobacco-related diseases. Complementing this evidence, Warner and Goldenhar (1989) statistically identify tobacco advertising as causing the reporting bias (for further evidence, see e.g., Kennedy and Bero, 1999). Another more recent case is misreporting on anthropogenic climate change. Boykoff and Boykoff (2004) demonstrate a clear bias in the US quality press over 1988-2002 (see Oreskes, 2004, on the scientific benchmark). Automotive advertising has been signaled as a key explanatory factor: in the US in 2006, automotive advertising alone accounted for $19.8 billion, of which nearly 40% went to newspapers and magazines (Advertising Age, 2007).”
So, there are consistent findings in other media outlets that payments from subjects influence editorial content. More observations can be found in the well-known paper (in media analysis circles at least) Do Ads Influence Editors? Advertising and Bias in the Financial Media:
“For their part, media outlets tend to strongly deny that such a pro-adviser bias exists. For example, a 1996 article in Kiplinger’s Personal Finance printed statements from editors at a number of personal finance publications (including the three in our study) claiming that advertisers have no influence over published content. In this paper, we test for advertising bias within the financial media. Specifically, we study mutual fund recommendations published between January 1997 and December 2002 in five of the top six recipients of mutual fund advertising dollars. Controlling for observable fund characteristics and total family advertising expenditures, we document a positive correlation between a family’s lagged advertising expenditures and the probability that its funds are recommended in each of the personal finance publications in our sample (Money Magazine, Kiplinger’s Personal Finance, and SmartMoney). While we consider several alternative explanations below, the robustness of the correlation leads us to conclude that the most plausible explanation is the causal one, namely, that personal finance publications bias their recommendations—either consciously or subconsciously—to favor advertisers.”
Advertising affects editorial decisions; this is a clear and straightforward finding.[1]
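To make the design of the test in that quotation concrete, the setup it describes can be written, in stylized form (my paraphrase of the described setup, not the paper’s exact specification), as a binary-outcome model:

\[
\Pr(\text{recommended}_{i,p,t} = 1) = F\big(\beta \cdot \text{AdSpend}_{f(i),\,p,\,t-1} + \gamma^{\top} X_{i,t}\big)
\]

where AdSpend is fund family f(i)’s lagged advertising spending in publication p, X collects the controls named in the quotation (observable fund characteristics and total family advertising), F is a binary-choice link such as a probit or logit, and a positive estimate of the coefficient on lagged advertising is the pro-advertiser bias the authors report.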
The Problem with Payments from Industry
This issue is not restricted to companies that create ratings; it applies to all forms of media. Media outlets that are financially independent of those they report on have a much better ability, both in theory and in practice, to stay separate from the industries they cover. Before the stock market crash of 1929, many reporters for financial publications would take direct payments from financial manipulators to write glowing reviews and predictions of stocks. The manipulators would buy shares at a low price and then sell them as soon as the article came out. This was so common that it had a specific name: the “pump and dump.” Similarly corrupt arrangements were behind many financial panics before and since 1929. The corruption of the general financial press and the rating agencies plays a decisive role in financial panics and in a small minority getting very wealthy. Having a prestigious brand name is no protection against corruption; the most prestigious brands have the greatest ability to drive and benefit from financial bubbles.
The Example of Buying Financial Ratings
Anyone who thinks that firms that do this will eventually be made to pay by the “market” only has to look as far as Moody’s, Standard & Poor’s, and Fitch, the three rating agencies that rated billions of dollars in toxic assets as the highest investment grade. They are still doing very well, thank you very much. I have a side interest in economics, and I have found that smaller, lesser-known media outlets that take in very little revenue consistently outperform the larger, better-known outlets that accept advertising. Very naturally, money corrupts the media product.
[1] A French TV executive spoke about his television channel’s purpose with some uncommon candor.
“[French TV channel] TF1’s job is to help a company like Coca-Cola sell its products. For a TV commercial’s message to get through, the viewer’s brain must be receptive. Our programs are there to make it receptive, that is to say to divert and relax viewers between two commercials. What we are selling to Coca-Cola is human brain time.”

What is surprising to most people who buy magazines is that they are not the only “payer” for the product. The consumer buys and pays for the magazine at the newsstand; however, advertisers in effect pay to have the magazine produced. The percentages of overall revenues are shown in the following quotation.

“Mainstream US newspapers generally earn over 50 and up to 80% of their revenue from advertising; in Europe, this percentage lies between 30 and 80%, e.g., averaging 40% in the UK (see e.g., Baker, 1994; Gabszewicz et al., 2001). Overall, advertising exceeds 2% of GDP in the US and a substantial fraction of this becomes media revenue: 17.7% to newspapers, 17.5% to broadcast TV, 7.4% to radio and 4.6% to consumer magazines.” – What Do the Papers Sell? A Model of Advertising and Media Bias (citing Advertising Age, 2007)
Two Customer Bases, Not One
In this way, many media outlets can be seen to have two customer bases: their audience and their advertisers. That is, they receive income from both the buy-side and the sell-side. This is similar to Gartner. However, because advertisers are far more concentrated, they have even more power than the percentage of revenue they contribute would suggest. A single audience member cannot hope to influence the media product. Even ten percent of the audience could not effectively influence the output, because audience members do not coordinate with one another, so their influence is diffused. A single advertiser, however, can influence the media product, and the larger the advertiser, the greater its ability to influence the media entity.
[1] “Evidence linking smoking and cancer appeared in the 1920s. Between 1920 and 1940, a chemist named Angel Honorio Roffo published several articles showing that cancers could be experimentally induced by exposure to tars from burned tobacco. Roffo et al. further showed that cancer could be induced by using nicotine-free tobacco, which means that tar, with or without nicotine, was carcinogenic. Research implicating smoking as a cause of cancer began to mount during the 1950s, with several landmark publications in leading medical journals. The first official U.S. government statement on smoking and health was issued by the Surgeon General Leroy Burney in a televised press conference in 1957, wherein he reported that the scientific evidence supported cigarette smoking as a causative factor in the etiology of lung cancer.”
Financial Bias
In 2006, InformationWeek wrote an article titled The Credibility of Analysts. In this article, IW brought up an interesting point about Gartner and the overall analyst community.
“Research firm executives are well aware of the questions being raised about their business models, but don’t expect changes to be fast or wide-sweeping. The financial stakes are too high — and the incentives for change aren’t compelling enough.”
Yes, research firms make great money. While some may question the absolute lack of transparency in the IT analyst industry, companies keep buying these biased reports, which never disclose how much of the analysts’ funding comes from vendors. So, since not enough people are complaining, why follow normal research rules?
Bias Removal?
I analyze bias removal in Chapter 9 of the book Supply Chain Forecasting Software (a chapter dedicated to the study of bias). This was a fascinating area of study, partially because of the enormous discrepancy between the reality of bias and the interpretation of bias. Humans exhibit bias in many areas, including perception and forecasting.[1] For instance, humans have a well-known optimism bias. Confirmation bias is the selective use of information to support what one already believes to be true and the rejection of information that contradicts one’s hypothesis.
These are the unconscious biases that are part of the human condition. Then, we get into social and institutional biases. In studying financial analysts’ bias for the book Supply Chain Forecasting Software, I found a detailed explanation of how analysts biased their forecasts to achieve career advancement. An excellent example of forecast bias, which is produced by institutional financial incentives, is described below:
“Sell-side analysts are pressured to issue optimistic forecasts and recommendations for several reasons. First, their compensation is tied to the amount of trade they generate for their brokerage firms. Given widespread unwillingness or inability to sell short, more trade will result from a ‘buy’ than from a ‘sell’ recommendation. Second, a positive outlook improves the chances of analysts’ employers winning investment banking deals. Third, being optimistic has historically helped analysts obtain inside information from the firms they cover (underline added). While all these pressures introduce an optimistic bias to analysts’ views, the magnitude of the bias is held in check by reputational concerns. Ultimately, an analyst’s livelihood—the ability to generate trades and attract investment banking business—depends on her credibility.” – Anna Scherbina
“Analysts will set the optimistic bias at an optimal point that balances the benefit of being upbeat against the cost to their reputation.” – Anna Scherbina
The Pressure for Optimistic Forecasts
Here the case is made that financial forecasters must trade off the pressure to produce optimistic forecasts against their reputations. In this way, the forecast of a financial analyst can be seen as less of a forecast and more of a balancing act; analysts attempt to develop numbers that garner favor with the powerful companies from which their investment banks gain business while keeping some semblance of credibility with investors. This “credibility” also determines whether “information channels” are kept open or closed and highlights how political factors can influence a financial analyst’s forecast.
We hear the term “unbiased” quite frequently. However, when one analyzes the output of individuals and institutions, we find bias to be universal. The best that can be hoped for is that financial bias is reduced—but even this is incredibly rare. Here, I would like to borrow from noted philosopher and linguist Noam Chomsky, who said,
“Everyone has a bias, the honest people tell you what their bias is. People that are not honest say they have no bias.”
Gartner’s Self-Ascribed Reputation for Objectivity?
I had several conversations with people at Gartner (years before I decided to write this book), during which they repeatedly stated that they believe Gartner has a reputation for being objective. However, based on my conversations with many people and on publicly available information, this subject is much greyer than Gartner would have it. Questions of bias plague Gartner, but this conversation is held chiefly among those who are the most sophisticated in their understanding of Gartner, usually those who work in marketing at vendors. The controversy regarding Gartner’s bias is not prevalent among investors and software buyers. In fact, among executive decision-makers at software buyers, the topic is not even a minor conversation point.
Gartner Defending Gartner
It would help Gartner’s case if it had defenders other than current or former Gartner analysts. I have observed that the more experienced and savvy the individual, the less convinced they are that Gartner is objective. Those who work for best-of-breed or smaller vendors are the most critical of Gartner’s objectivity.
I can confidently say that this criticism is not sour grapes on the part of these individuals, who work for smaller vendors in the software category in which I specialize. I have used the software of these vendors as well as the software of the larger vendors. In each case, the smaller vendors’ software is far superior to the software offered by the more prominent vendors (which, as stated previously, Gartner ranks higher than these smaller vendors). Vendors that provide “point solutions” (considered a negative, though it should not be) have a legitimate complaint that the Gartner methodologies favor large vendors and vendors with broad suites. I will discuss this topic in detail in Chapter 5: “The Magic Quadrant.”
Gartner’s IT Bias
Gartner tends to tailor its writing in a way that suits the interests of IT. IT’s control of software selection decisions has been increasing for some time, and Gartner’s growth has coincided with this reinforced influence of IT over software selection. This is demonstrated by Gartner’s diminished focus on the application itself and its amplified focus on things like integration, reducing the number of vendors from which purchases are made, and so on. IT simply has different incentives than the business. When I discuss the inability of software the company has just purchased to do the job, the business is all ears, while IT does not want to hear about it. This point is brought out well in the following quote:
“They think it’s better to have fewer software contracts to manage than it is to have the best technology for the business problems they face. Companies should buy the best software for the job, not because it’s software from the vendor they already use. That’s just plain lazy and bad business.” – Christopher Koch, CIO Magazine
Gartner’s IT Versus Business Bias
Gartner’s reports also serve multiple categories of customers among buyers, the two most prominent being IT and the business. IT tends to be pro-Gartner because IT does not like dealing with many vendors, service contracts, and so on. Therefore, IT has been one of the main proponents of purchasing software from fewer firms. This policy alleviates some of the pressure of integrating applications (although much less than generally anticipated)[1] but has numerous downsides concerning implementation success. When a company restricts its buying alternatives to fewer software vendors, the business loses, because no single vendor offers the best solution for more than a small fraction of the software categories. What makes a solution the “best solution” is not only the general functionality of an application; it also includes how buying from a broader set of software vendors makes available solutions better suited to the business’s industry and to the requirements of the particular buying company.
Thus, with a mix of competing interests, decisions are made not necessarily based on evidence or logic but based upon which department or grouping has the most power. For whatever reason, IT has tended to get its way more often than not in software selection during the last several decades.
How Gartner Pushes Buyers Towards More Expensive Solutions
Because Gartner prefers solutions from larger vendors, Gartner also tends to push buyers towards more expensive solutions. This higher expense is not only for software but also for services. For instance, I work as an SAP consultant, and SAP consultants are some of the most expensive IT resources. One aspect of the cost is the hourly billing rate, which is high. Another is that SAP software is complicated to install, so SAP projects are long. The billing rate multiplied by the total number of hours is, of course, the consulting cost. The extent to which the cost of implementing SAP exceeds that of implementing “best-of-breed” solutions is shown in the articles below. In these articles (knowing that I would receive a great deal of negative feedback), I provided an advantage to SAP by setting their software costs to zero.[2]
Regarding the overall costs, the implementation consulting costs far exceed the cost of purchasing the product. (Therefore, it makes little sense to focus simply on the cost of acquiring software. It makes more sense to take a total cost approach, which includes implementation and maintenance costs.) These consulting resources can come from the software vendor or a consulting company. The software that is selected, in large part, determines the cost of the consulting that will follow. The differences in total costs between large and small vendors are quite significant, so placing them in the same category is problematic.

This would be as if Consumer Reports placed Lexus automobiles in the same category as Toyota automobiles.[3] Lexus would outscore all of the Toyota products because Lexus uses upgraded components, paint, engines, and so on. However, a glance at the highway will show that Toyota sells many more cars than Lexus, because price is an important consideration when buying an automobile. This is obvious: differently priced items should be compared in different categories. If I am looking for an automobile and my budget is twenty thousand dollars, I am quite aware that a forty-thousand-dollar car is probably better, but I would not care much, as that car is out of my price range. If the costs of acquisition, implementation, and maintenance were included in some “value matrix,” the matrix would look quite different, and the larger vendors would drop significantly in the rankings.
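To illustrate the total cost point with a small worked example: the figures below are entirely hypothetical (they are not taken from the articles mentioned above), but they show how a total cost comparison can reorder vendors even when the larger vendor's license cost is set to zero, mirroring the handicap described above.

```python
# A worked example with entirely hypothetical figures, illustrating why a total
# cost of ownership (TCO) comparison can reorder vendors relative to a
# comparison that ignores implementation and maintenance costs.

def total_cost(license_fee, hourly_rate, implementation_hours, annual_maintenance, years=5):
    """Acquisition + implementation consulting + maintenance over the period."""
    return license_fee + hourly_rate * implementation_hours + annual_maintenance * years

# Hypothetical large suite vendor: license set to zero, but expensive
# consultants and a long project.
large_vendor = total_cost(license_fee=0, hourly_rate=250,
                          implementation_hours=20_000, annual_maintenance=400_000)

# Hypothetical best-of-breed vendor: a real license fee, but cheaper
# consultants and a much shorter project.
best_of_breed = total_cost(license_fee=500_000, hourly_rate=150,
                           implementation_hours=4_000, annual_maintenance=100_000)

print(f"Large suite vendor 5-year TCO: ${large_vendor:,}")   # $7,000,000
print(f"Best-of-breed 5-year TCO:      ${best_of_breed:,}")  # $1,600,000
```

With these illustrative numbers, the "free" suite is still more than four times as expensive over five years, which is the kind of reordering that a comparison ignoring implementation and maintenance hides.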
While Gartner’s Magic Quadrant rankings do not account for cost, sometimes the vendor descriptions do. For instance, in the Magic Quadrant for Business Intelligence and Analytical Platforms for 2013, which must be one of the longest and most thorough Gartner reports ever written, costs are discussed in several vendor profiles. The following quotations are examples [emphasis is mine]:
- “Licensing cost remains a concern. When references were asked what limits wider deployment, 37.78% indicated the software’s cost, compared with the industry average of 25.4% across all vendor references.”
- “When compared with Actuate, Quiterian has some concerns, including cost, ease of use for business users, ease of use for developers, support quality, and product quality, all ranked below the survey average.”
Understanding Gartner’s Elitist Bias
Gartner has an elitist orientation, and this is demonstrated in several ways. The more I analyzed Gartner and compared it to other IT analysts, the more apparent this characteristic became. I have listed some of the ways in which Gartner is elitist below:
- Gartner’s analysts deal with the most senior members of buyer and vendor companies.
- Gartner shares very little of its research for free, even for promotional purposes. The one exception is its market updates and predictions, which are selectively released as press releases. Gartner is effective at preventing the free dissemination of its research. By comparison, many companies, including many IT analyst firms, maintain a blog that distributes some free analysis. Gartner does not. Its work is tightly controlled unless part of its research is republished through a partner. This subject is one of the most interesting parts of how Gartner operates and will be discussed in detail in several places in this book.
- Many IT analysts sell research by the article. For instance, you can search through Forrester’s website, and if you find an article that you like, you can buy just that article. Gartner does not allow nonsubscribers to search through their research database or, in most cases, to buy single articles. Gartner’s model is to sell subscriptions as a starter or an introduction to their analyst consulting services.
- Gartner seems to have an unlimited number of service levels one can sign up for, each promising more access to analysts and more inside information. This exclusivity seems to increase the perceived value of the research in the eyes of some clients. Gartner is proficient at marketing its upgraded services. For instance, client criticism regarding their current service is quickly deflected by Gartner: whatever the client is dissatisfied with or seeks is offered at the next service level. Gartner’s employees must be trained in this response because I have heard it used as a reflexive defense against criticisms that have nothing at all to do with the level of services purchased from Gartner.
- All of Gartner’s research, consulting, and events are expensive. This means that not only does one have to pay to participate with Gartner, but the price of admission is quite high.
- Gartner’s broadcasting approach does not allow for user commentary, which also means that subscribers cannot read what other subscribers think about the research areas. Most articles published on the web today (particularly technology articles) allow comments. However, the communication in Gartner’s articles is a one-way affair.
Because much of Gartner’s advice to clients is not in a published form, only those with the budgets get to learn what Gartner analysts “really think.”
The Dimensions of Gartner’s Bias
Because they are so numerous, it is difficult to keep track of all of the dimensions of Gartner’s bias. For this reason, we created a graphic that lays out how Gartner’s bias impacts their output.
The Brightwork Graphic of Gartner’s Biases
Trick #1: How Gartner’s Writing Style is Meant to Appear Unbiased
All of the Gartner reports that I have read are well written, and much like The Economist, they keep a consistent tone and writing style even though a report could have been written by any one of Gartner’s analysts. The way to ensure this consistency is to have good internal training for analysts and ensure that a group of internal copyeditors go through all the analysts’ reports. While the writing is consistent, reports from category to category and area to area vary greatly in their thoroughness and content. For instance, within the topic of Magic Quadrants, there is great variability in the amount of text dedicated to vendors based upon the particular Magic Quadrant in question. Secondly, some Magic Quadrant reports will quote survey results, while others will not.
Generally, Gartner reports are written for an executive audience. Most software-oriented people, either developers or implementers, are not the target audience even though the reports cover software in which these groups specialize. The people who spend the most time reading and discussing Gartner’s research are:
- Executives faced with purchasing decisions in companies that implement enterprise software
- Marketing, sales, and executives in the vendor companies
- Investor analysts
Trick #2: Hiding the Actual Math
Gartner’s analytical products are unusual in that they use text to explain the research rather than graphics or numerical tables. Gartner tends to avoid numerical tables, which would allow the reader to see what Gartner is writing about comparatively, and the raw data is rarely provided to the reader (something which will be covered in Chapter 4: “Comparing Gartner to Consumer Reports, the RAND Corporation, and Academic Research”). For instance, Gartner will say how a vendor performed in some survey area but not declare how other vendors performed in that same survey area. This methodology is unusual in comparative research; in academic research or at Consumer Reports, tables are commonly used to compare all the data points, a generally accepted practice that Gartner declines to follow. Gartner thereby prevents the direct comparisons that normal publication guidelines for research would allow. And no wonder: it is not in Gartner’s best interests to declare its findings in black and white, because the companies it rates are also its customers. As a result, Gartner’s research feels more like a liberal arts paper than a research paper, as everything is interpreted for the reader rather than presented to the reader. As a consequence of the verbose prose and the lack of anchoring comparative graphics, it is quite easy to get lost in a lengthy Gartner report, and it is quite natural for the reader to fall back on the single comparison that is offered (for instance, the Magic Quadrant graphic).
Trick #3: Gartner’s Political Sensitivity
A clear political sensitivity comes across in the writing style of Gartner reports and in how information is disclosed to the reader. The writing approach also changes depending on the type of report. For example, the writing in a Magic Quadrant document is dispassionate, while other articles (such as those related to the future outcomes of mergers or technology market predictions) are more opinionated. Gartner is also skilled at writing in a way that rarely appears to promote one vendor over another. Occasionally, I have come across Gartner research reports that are just thinly disguised press releases from a software vendor, but this is not representative of the vast majority of their research reports.
Trick #4: Gartner’s Lack of Transparency
The transparency of the scoring of vendors depends upon the report category. For instance, in all of the Magic Quadrant reports I reviewed, I could never find one that showed the scores for the individual criteria. Providing the criteria scores per vendor would be valuable, as the scores would allow buyers to adjust these reports to their needs and to analyze the research better. For example, if the actual scores per criterion were listed, buyers could alter the weights of the criteria, or eliminate the criteria that are not important to them altogether, which would be preferable to using the weightings Gartner deems relevant. Gartner does not show the criteria scores for a couple of reasons. One: Gartner has to be careful what it writes in its most influential reports because the vendors (which Gartner also counts as customers) review the analytical products. Two: the more oblique the reports, the more the customers must hire Gartner analysts for interpretation.
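As a minimal sketch of what buyers could do if per-criterion scores were published: the vendors, criteria, scores, and weights below are all hypothetical, but they show how trivially a buyer could re-weight or drop criteria and recompute a composite score to match its own priorities.

```python
# A sketch of what buyers could do if Gartner published per-criterion scores.
# All vendor names, criteria, scores, and weights here are hypothetical; the
# point is only the mechanics of re-weighting.

hypothetical_scores = {
    "Vendor A": {"functionality": 3.2, "viability": 4.8, "sales_execution": 4.5},
    "Vendor B": {"functionality": 4.6, "viability": 3.1, "sales_execution": 2.9},
}

def composite(scores, weights):
    """Weighted average of the criterion scores, normalized by the total weight."""
    total_weight = sum(weights.values())
    return sum(scores[criterion] * weight for criterion, weight in weights.items()) / total_weight

# A buyer that cares mostly about functionality sets its own weights
# (criteria it does not care about can simply be weighted zero).
buyer_weights = {"functionality": 0.7, "viability": 0.3, "sales_execution": 0.0}

for vendor, scores in hypothetical_scores.items():
    print(f"{vendor}: {composite(scores, buyer_weights):.2f}")
# Vendor A: 3.68 (3.2*0.7 + 4.8*0.3)
# Vendor B: 4.15 (4.6*0.7 + 3.1*0.3)
```

With these made-up numbers, a functionality-focused buyer would rank Vendor B ahead of Vendor A, even though Vendor A scores higher under a weighting that rewards viability and sales execution, which is precisely the kind of adjustment that is impossible when only the final dot placement is published.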
Alternatively, a table with each company’s criteria and scores is published when the stakes are lower, as in Gartner’s Top 25 Supply Chain Companies report. Why is the difference so stark? In the case of the Top 25 Supply Chain Companies (which, coincidentally, are not large buyers and customers of Gartner), the companies that do not find themselves on the list will not cut their subscriptions. With this less political report, Gartner is much freer to publish the scoring, and therefore it does.
Trick #5: Gartner’s Varying Degrees of Disclosure
At other research entities, a disclosure standard is applied to all research publications. Gartner’s varying degrees of disclosure, which depend upon political considerations, would not be allowed at these other entities.
However, I do not want to give the impression that other research entities ignore political considerations when performing their research. More often than not, research from academics that is politically inconvenient is suppressed, or academic researchers self-censor and do not submit grants for certain types of research, or the funding agency rejects their research grants. Therefore, there are two issues:
- How much disclosure is there on published research?
- How is politically sensitive research filtered out (i.e., not funded) before ever being researched?[1]
In this case, I am discussing the first issue and not the second. The footnote explains why exploring the second point is infeasible for this book.
[1] The issue of research suppression is hugely important. There are multiple cases of academic research being suppressed. Some research is kept private under pretenses such as “national security.” For example, RAND led the research project generally known as the “Pentagon Papers,” which was not published because it contained the truth behind the Vietnam War. The papers were unknown outside of RAND and the Pentagon until they were leaked. They were kept secret from the public and from the President of the United States, who held the top security clearance and had every right to access them. And be aware that this was a massive research project on the history of US involvement in Vietnam. The reason for keeping it from the President was that if he knew the real history of US involvement in Vietnam, and of US subterfuge there, the Pentagon would have been less able to control the interpretation of the conflict and less able to lead the President toward the Pentagon’s preferred conclusions.
Research suppression is more difficult to prove than differing disclosure standards between research publications. On the other side of the coin, a significant portion of research has little to no benefit, as is pointed out by John P. A. Ioannidis, a medical researcher who specializes in analyzing medical research.
“Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.”
Research suppression is also a complicated topic, which is far beyond the scope of this book.
Trick #6: Gartner’s Conclusions
Gartner looks authoritative until you understand that it does not publish information based upon research; instead, it can be seen as an entity that puts its readers last. That is why it is important to consider how Gartner reaches its conclusions.
The graphic and the items listed here are not exhaustive. Most likely, we will be adding to this list in the future.
Are you aware of other dimensions by which Gartner is corrupt? Reach out to us to report what you know, and we may add it to the list if we believe the item qualifies.
Trick #7: Why the Gartner Ombudsman is a Ruse
The Ombudsman may report to Gartner’s general counsel, but it is impossible to ascertain what pressure is brought to bear on the Ombudsman or how vendor complaints are actually remediated. However, Gartner is either confused about what an ombudsman is or has deliberately misled many people as to what an ombudsman is and how one functions.
- An ombudsman who is on Gartner’s payroll as an employee will have no incentive to side with a vendor over Gartner on a complaint.
- In other cases where an ombudsman is used, the process has some transparency; that is, the ombudsman’s deliberations are published. Gartner offers none of this. Therefore, it is difficult to see how Gartner’s ombudsman is an ombudsman in anything but name.
Finally, there is no area of research in which an ombudsman is used. Academic research does not use an ombudsman. This is because academic researchers follow disclosure methods and conflict-of-interest rules (medical research being a frequent exception), whereas with Gartner, research standards are frequently not followed.
Is Gartner Correct that Having an Ombudsman Removes the Disclosure Requirement?
Having an ombudsman does not remove Gartner’s responsibility to report its sales to software vendors. Gartner rates companies that it also takes money from. It is a textbook case of a research conflict of interest.
Gartner analysts may not be allowed to own stock in the companies they cover, but Gartner is. This is covered in the article Why Does Gartner Invest in a Hedge Fund that Invests in Technology?
I want to say that Gartner’s behavior is uniquely dishonest in the IT research space. However, I can’t. IDC, the Yankee Group, Aberdeen, and others follow similar corrupt models, so what they do cannot be classified as research. Upon researching statements by the leaders in these organizations, it is obvious that they are taking advantage of the general population’s lack of understanding of research rules.
Here, the head of IDC echoes the hollow defense of biased research offered by Gartner’s Hall.
Fact-Based Service as Opposed To?
Execs at other major research firms speak with similar convictions. “We provide fact-based advice,” says IDC CEO Kirk Campbell. With its emphasis on hard data, he argues that IDC’s research methodology provides a built-in guard against analyst bias or favoritism. Campbell dismisses any suggestion in IDC reports that vendors have to be paying customers to get fair treatment. “We have an open-door policy,” he says.
This quote is from several years ago; however, it matches similar statements made by White House Press Secretary Sarah Huckabee Sanders, who claimed that the press reports opinions while presenting them as facts. One important way of evaluating a statement is to reverse it and ask whether anyone would ever say the opposite.
So if we reverse Kirk Campbell’s statement, we get the following:
“We provide non-fact-based advice.”
Who would say such a thing? The question is not whether IDC provides fact-based advice. Vendor money causes IDC to promote vendors who can pay over vendors who can’t or don’t, and Campbell’s statement does not address that issue.
The Yankee Group
“Some research execs concede that their firms must do a better job of educating customers and the public about their policies and procedures, given the influence they have over multimillion-dollar buying decisions. One idea is to develop industrywide standards for business practices. “It’s something we should look at,” says Yankee Group CEO Emily Green. “Even the perception of favoritism would hurt us.” Says Forrester’s Kardon, “It would help the whole industry if we had a common set of practices to keep everybody clean.”
This is hilarious because the rules of research are very well established. For instance, one thing the Yankee Group could do is publish all of its funding from vendors. A second thing it could do is publish a complete explanation of its methodology.
The Aberdeen Group
“Aberdeen Group CEO Jamie Bedard last week claimed the high ground. In a progress report on Aberdeen’s business, Bedard wrote to customers, “We promised you that … our research integrity was not for sale.” In an interview, Bedard charged that too many research firms base their advice on just a few interviews–what he calls opinion-based research–rather than on detailed surveys of dozens or hundreds of companies. “I think the industry can do a better job of deep research,” he says.
This is unintentionally hilarious because it conflates two topics:
- Integrity
- Depth of Analysis
Jamie Bedard starts with the first topic but quickly moves to the second, and the two have nothing to do with each other.
Non-Disclosure
There’s an important bit of information that Campbell refuses to share: He won’t disclose how much money IDC takes from tech-vendor clients.
And it is not difficult to guess why that might be.
Next up to bat, we have Forrester’s Brian Kardon, who asserts how much integrity Forrester has.
“Forrester Research focuses on selling its services to users rather than vendors to ensure that most of its revenue doesn’t come from the subjects of its research, says Brian Kardon, chief strategy and marketing officer. That affords the company a lot of freedom. “We routinely slam vendors,” he says. Still, about a third of Forrester’s revenue comes from tech vendors.”
What if the vendors that Forrester “slams” are the ones that won’t pay Forrester? Gartner is well known for doing this. Slamming non-paying vendors is therefore not evidence of a lack of bias; in fact, it would be part of the business model. Payment means positive coverage, while non-payment means negative coverage. What better way to turn non-paying customers into paying customers?
The Importance of Disclosure for Any Research Entity
Software vendors pay IT analysts, and this influences the ratings the vendors receive. SAP alone pays Gartner several million dollars per year, and the larger the vendor, the more it can afford to pay. However, Gartner does not disclose this information anywhere on its website.
- Most IT analysts do not disclose or publish the fact that vendors pay them.
- Their business model is similar to that of the financial rating agencies, except that the rating agencies are paid exclusively by those who want their products rated, whereas both the vendors and the software buyers pay the IT analysts, so the analysts’ income sources are somewhat more balanced.
- Even so, the IT analysts present themselves as having one customer (those who buy their research) when they actually have two, the vendors being the second. Among those vendors, the biggest pay the most, so the research results are slanted in their direction.
Gartner Invests in Hedge Funds?
It turns out that Gartner invests in hedge funds that invest in the technology companies it rates, as explained in the following quotation.
“The firm (Gartner) invests in hedge funds that hold significant stakes in the companies it covers. One such investment is SI Ventures’ SI Venture Fund II. On its Web site, SI Ventures notes a “long-term relationship” with Gartner. SI Ventures helped launch Authentor Systems, which provides network security software. Gartner analysts have been quoted in press releases issued by Authentor supporting the company’s approach to security.”
This is the classic conflict of interest between the investment advisory and investment banking sides of an investment bank.
Gartner readers don’t know whether Gartner praised Authentor Systems because that is its actual view or because Gartner benefits financially if Authentor Systems goes up in value.
Who Owns Gartner?
However, Gartner’s conflict of interest regarding investing goes a step further. One example of this is who owns Gartner. The following quotes cover this.
“Gartner also is partly owned by investment companies that have stakes in tech vendors upon which Gartner is supposed to be casting a neutral eye.
Silver Lake Partners, which owns 33% of Gartner, counts Michael Dell, Bill Gates, Larry Ellison, and other tech-industry shakers among its current or former investors. Hedge fund ValueAct Capital owns more than 16% of Gartner and has owned as much as 11% of MSC Software, which Gartner views as a “challenger” in the market for product life-cycle-management software.”
This gets complex. But if Silver Lake Partners or ValueAct Capital were so predisposed, could they pressure or otherwise influence Gartner to give better ratings to a software vendor in which either of them is invested?
These are problematic ties. Gartner should not have them.
Conclusion
Gartner’s evidence that it is unbiased is simple: trust Gartner. Trust that an ombudsman will keep all of Gartner’s analysts in check. Trust that allowing Gartner to make money from technology investments, while its analysts cannot, still leads to unbiased outcomes. It all comes down to blindly trusting Gartner.
And the Yankee Group, Aberdeen, Forrester, IDC, and others follow the same approach. They are not as prominent as Gartner, but they all refuse to declare who pays them while swearing that none of these payments affect how they write about the vendors or their products.