The Gartner MQ for supply chain planning is intended to provide companies with information of value for evaluating SCP technology vendors. However, there seem to be many flaws in the overall process leading to the final result. It does not truly reflect the strengths and weaknesses of the technology providers because of a lack of in-depth analysis.
- Reason #1 Vendor Claims Are Not Verified
To prepare for the MQ report, vendors are asked to respond to a long RFI, make a 1-hour presentation, and give a 90-minute demo of a very complex application that would require at least half a day to cover properly. That is only one issue; more importantly, the vendors’ responses to the hundreds of questions asked in the RFI, and the claims made in their presentations, are not really verified. Hence vendors are able to give themselves the highest rating in every category and present features with PowerPoint slides rather than actual applications, without any verification.
- Reason #2 What Really Matters is Not Measured
Scalability, the capability to model complex supply chains, and the speed of the application are just a few of the attributes that are neither measured nor verified. For example, a vendor can use Excel modeling in the background, run the model in 2 minutes, and end up with a better ranking than a vendor that uses an actual digital twin of the supply chain and takes 10 minutes to produce accurate results. In both cases, the correctness of the results is unknown to Gartner; so are the modeling capability and how fast the system runs as data grows.
- Reason #3 Contradictory Evaluation of Technology
“S&OE is NOT important in SCP MQ.” So we were told by an analyst! This implies that having S&OE capability in planning does not really give you an advantage! Hence vendors offering both S&OP and S&OE get a lower rating in vision and execution! Almost none of the vendors in the Leaders Quadrant offer S&OE, even though they claim to; yet they are given high vision rankings. Gartner’s own vision of autonomous planning specifically calls for S&OE and a digital twin. Nevertheless, a vendor with the best digital twin representation is given one of the lowest rankings on the vision axis.
- Reason #4 Subjective Weighting of Features
The weights given to the features offered by the vendors in the final evaluation are subjective and decided by the analysts. For example, connectivity with operations, i.e., merging planning and execution, carries little merit. According to an analyst, this is merely an integration issue. Yet Gartner’s own literature and recommendations highly value the merging of planning and execution as part of its vision of autonomous planning. For the supply chain planning MQ, however, it does not seem to be relevant or important.
- Reason #5 Popularity and Sales Metrics Trump Delivery
Gartner uses inquiry data to decide which vendor is more popular. This may be a useful indicator in North America and Europe. However, Gartner does not have any supply chain analysts in China, Japan, Korea, India, or MENA. So, for a vendor with great traction in these regions, Gartner has little understanding of how well it is executing in these fast-growing parts of the world.
Execution ranking is also decided by the number of new logos. Hence a vendor with 10 new logos from small companies is rated higher than one with 2 logos from global multi-billion-dollar companies.
The actual delivery of the technology takes a backseat to the number of deals sold. Some of the more popular recent vendors of S&OP technology do not have sufficiently mature solutions. They are having serious issues delivering, yet Gartner has ranked them very high because of their focus on sales. As far as we know, Gartner relies on fewer than a handful of references supplied by the vendor itself. That is not necessarily a reliable way of knowing how mature the system is, how fast it can be implemented, how responsive the vendor is, and whether it is selling more than it can deliver. Would it not be better for Gartner to make random calls to see how many of the sold accounts are actually up and running, and whether the customers are happy? In our own limited survey, a disproportionate number of companies had a hard time justifying their multi-million-dollar investments with some of the fast-growing technology vendors in the Leaders’ Quadrant.
Single-industry presence seems to earn a higher Vision and Execution ranking than capability and experience across many industries, so we were told by an analyst. Surely a vendor with experience in many industries has a more mature solution, greater potential for growth, and the benefit of cross-industry knowledge transfer!
- Reason #6 Weekly release is preferred to stable software
We were told that frequent software releases, as often as weekly, are a sign of innovation and of new features and functions, and that a higher rating is therefore given to vendors who release frequently! That may be the case! If so, how often do you expect clients to keep changing their production environment just because the vendor ships new features and functions every week? Let alone the data dependencies and maintenance that go with it. We think that level of release frequency can be a sign of immature software.
- Reason #7 Volatility of MQ chart
A vendor can move from one quadrant to another within 6 months. How much more can a vendor do, or fail to do, in such a short time to go from the Leaders Quadrant to the Niche Quadrant or vice versa? A vision, if it is truly a technology direction, cannot change that much unless a vendor invents a new PowerPoint “story” to carry it from one quadrant to the other. And how much more can a vendor sell in such a short time, when sales cycles run almost 9 months, to achieve a move from one quadrant to the other?
- Reason #8 Analyst Understanding of a vendor’s Strength
Unless a vendor is a paying member, only a very limited number of opportunities is given to the vendor to explain and demonstrate its strengths to Gartner. As good as the technology might be, Gartner, with such limited exposure, is not in a position to judge the relative strength of the vendor. Hence its evaluation can be unfair and damaging to the company. To be fair, the vendor is given half an hour to explain to the lead analyst what facts were missed. That is 30 minutes for such a complex application and hundreds of RFI questions!