

 
The rise and rise of voodoo decision making 
Review 7-2022 (Posted: February 9, 2022; Last update: February 9, 2022)
Reference: Jim W. Hall, Robert J. Lempert, Klaus Keller, Andrew Hackbarth, Christophe Mijere, and David J. McInerney, Robust Climate Policies Under Uncertainty: A Comparison of Robust Decision Making and Info-Gap Methods. Risk Analysis, 32(10):1657-1672.
Publication type: Peer-reviewed journal article
Year of publication: 2012
Downloads: https://doi.org/10.1111/j.1539-6924.2012.01802.x
Abstract: This study compares two widely used approaches for robustness analysis of decision problems: the info-gap method originally developed by Ben-Haim and the robust decision making (RDM) approach originally developed by Lempert, Popper, and Bankes. The study uses each approach to evaluate alternative paths for climate-altering greenhouse gas emissions given the potential for nonlinear threshold responses in the climate system, significant uncertainty about such a threshold response and a variety of other key parameters, as well as the ability to learn about any threshold responses over time. Info-gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information. The study finds that the two approaches reach similar but not identical policy recommendations and that their differing attributes raise important questions about their appropriate roles in decision support applications. The comparison not only improves understanding of these specific methods, it also suggests some broader insights into robustness approaches and a framework for comparing them.
Reviewer: Moshe Sniedovich
IFIG perspective: This article was reviewed more than 10 years ago. The reasons for the need for a new review are discussed below. The section "The new local/global novelty of IGDT" is new.
Introduction
After reading a number of recent IGDT publications, including the chapters in the 2019 book "Decision Making under Deep Uncertainty" (see review), I decided to modify the old (2011) review slightly in light of the way the article has been referenced and interpreted by more recent publications on IGDT. It is therefore important to stress that this review was essentially written more than ten years ago.
Before I reviewed this article in 2011 (it was then available for early view), I had reviewed two articles (see Review 6 and Review 15) by the first author that advocated the use of IGDT as a methodology for decision-making under severe uncertainty. In my reviews of those articles I pointed out that, apparently in response to my sustained criticism of IGDT at the time, a brave attempt was made in these two articles to introduce a fundamental "fix" into this theory, so as to correct a deep-rooted flaw in it: in fact, a flaw that renders IGDT a voodoo decision theory par excellence. The main objective of my reviews was to show that these attempts at fixing IGDT's fundamental ills were themselves very problematic, so that rather than providing a remedy, they exacerbated the problem even further!
So, having come across this new article in 2011, I was curious to find out whether my response to the previous attempts to revamp IGDT would be reflected in it. But not only did I not find so much as a remote echo of my critique of the failed attempts at "amending" IGDT, I did not find even an echo of the previous implied admission that such corrections are necessary in IGDT, let alone that they must be stated explicitly.
This seems to suggest that Hall et al. (2012) have given up on attempts to fix IGDT, which means that this new article calls for harsher criticism than the criticism that was directed at the previous articles (see Review 6 and Review 15).
I want to assure all those readers who might find this story a bit complicated that it is in fact quite simple to follow. Simply read on!
However, before I can proceed to unravel the relevant details that will make sense of this story, it is important that I call attention to a no less important fact about this article. This has to do with what seems to be an attempt to "repackage" IGDT, again, I suspect, in an effort to circumvent the censure to which IGDT is so obviously vulnerable! By this I mean that, using clearly modified rhetoric, an attempt is made to dress IGDT in new clothes.
So, what to date has been trumpeted throughout the IGDT literature as IGDT's great forte, namely its capability as a method for seeking robust decisions under severe uncertainty, is (deliberately?) played down in this article. Indeed, in this article IGDT is no longer claimed to provide the decision-maker with a mechanism for ranking decisions! No, not at all!
According to the rhetoric in this article, all that IGDT furnishes the decision-maker is some sort of general approach that gives him some (indeterminate?) counsel about the complicated business of decision-making under uncertainty. In a word, going by the rhetoric of this article, one would be hard pressed to recognize in the "approach" described here the methodology that the entire IGDT literature to date hails as a reliable methodology for seeking robust decisions, to be precise, for seeking those decisions that are (the most) robust to uncertainty.
The trouble is, however, that I am unable to give you a complete assessment of the so-called "info-gap approach" outlined in this paper, because the authors did not bother to specify the details of the performance function. Consequently, it is impossible to validate the results reported in this paper.
And now to the details of the story that is at the center of this review. I propose to unravel it in three stages. In the first I remind the reader of the fundamental flaws that render infogap decision theory a voodoo decision theory. In the second, I explain how in previous articles, attempts were made to find a way around some of these flaws. Following that I examine how the new article deals with, or rather mishandles, this issue.
The fundamental flaw
For the benefit of readers who are not familiar with the "IGDT story", I should point out that the reason I have branded this theory a voodoo decision theory par excellence is essentially its prescription for robust decision-making under severe uncertainty. In fact, you need not even be a risk analyst to see immediately that, like any other "voodoo theory", this theory is of the "too good to be true" ilk. Which means, of course, that it is a theory whose groundless propositions can easily be exposed for what they are by means of simple examples and counterexamples.
Of particular interest to us in this discussion is IGDT's prescription for the treatment of severe uncertainty. This prescription basically instructs the following:

Assess the robustness of a decision by determining how much the point estimate of the uncertainty parameter can be perturbed without violating the performance requirement, ignoring the rest of the uncertainty space.

Of course, not in so many words, but this is what the prescription comes down to.
And to appreciate my claim that you need not be a certified risk analyst to see immediately that it is this prescription that renders this theory a voodoo decision theory par excellence, simply keep in mind that this prescription is given by a theory claiming to provide the means for tackling the severest uncertainty imaginable.
How SEVERE?
To begin with, the uncertainty is claimed to be non-probabilistic and likelihood-free, meaning that it cannot be quantified by means of "conventional" models of uncertainty or by means of fuzziness. The quantification of the uncertainty is therefore austere in the extreme. Indeed, it comprises two elements:
 An uncertainty space, $\overline{\mathscr{U}}$.
 A point estimate $\widetilde{u}$ of the true value of the parameter of interest, $u$.
The immediate implication of the uncertainty being likelihood-free is that there are no grounds to assume that the true value of $u$ is more or less likely to be in the neighborhood of the estimate than in the neighborhood of any other value of $u$ in $\overline{\mathscr{U}}$. And for similar reasons, the estimate must be assumed to be poor, to the effect that it may well be substantially wrong: indeed, no more than a wild guess of the true value of $u$. To illustrate, suppose that the uncertainty space $\overline{\mathscr{U}}$ is this page. This means that, as the uncertainty is likelihood-free, we have not a clue where the true value of $u$ is located on this page. All we know is that it is somewhere on this page.
And even if we were to assume, for argument's sake, that the point estimate of the true value of $u$ is exactly in the middle of this page, then, given that the uncertainty is likelihood-free, this additional stipulation would not alter by one iota the basic fact that we have no clue where on the page the true value of $u$ is.
The implication is then that there are no grounds whatsoever to assume that the true value of $u$ is more/less likely to be in the neighborhood of the estimate than in the neighborhood of any other value of $u$ in $\overline{\mathscr{U}}$.
And all this leads to the inevitable conclusion that to determine the robustness of a decision to uncertainty where the uncertainty is described in these terms, it is imperative to evaluate the performance of the decision over the entire uncertainty space, or over a large sample of points from this space that adequately represents the variability of $u$ over this space.
Yet, in contrast to what other theories and methods for the treatment of severe uncertainty do, this is not what IGDT prescribes doing! To the contrary, IGDT prescribes conducting a robustness analysis only in the neighborhood of this (highly questionable, doubtful etc.) estimate and nowhere else. And so, IGDT's prescription for measuring robustness against severe uncertainty is based on the following question:
How much can this (poor) estimate be perturbed (in all directions) without causing a violation of the performance requirement?

Thus, if the answer is, for instance, 12cm, then any perturbation of size 12cm or less (in any direction) will not violate the performance requirement and will therefore be deemed "acceptable", whereas a slightly larger perturbation (in some direction) will be deemed unacceptable, as it will cause a violation of the performance requirement.
This definition and means of measuring robustness is known universally as the Radius of Stability. For obvious reasons, the Radius of Stability is treated universally (e.g., in the robust control and economics literatures) as a model of local robustness. This means that, by the accepted convention, as a measure of local robustness it cannot be counted on to provide a measure of the global robustness of a system, a decision, or whatever. In other words, it cannot be counted on to indicate how well, or how poorly, a decision performs over the entire uncertainty space. Indeed, it is elementary to devise examples demonstrating that a decision that is locally robust in the neighborhood of the estimate is very fragile globally over $\overline{\mathscr{U}}$, and vice versa.
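The Radius of Stability computation just described is easy to make concrete. The following Python sketch is purely illustrative: the grid, the performance function, and the critical threshold are all invented for the example and come from nowhere in the article. It finds the largest perturbation of the estimate within which the performance requirement holds everywhere, and makes the local character plain: points far from the estimate never influence the result.

```python
import numpy as np

def radius_of_stability(perf, u_est, u_space, r_crit):
    """Largest perturbation radius rho such that every point of u_space
    within distance rho of the estimate u_est satisfies perf(u) >= r_crit.
    Note the local character: once the nearest violating point is found,
    everything farther from u_est is ignored."""
    dist = np.abs(u_space - u_est)
    bad = dist[perf(u_space) < r_crit]   # distances of the violating points
    return np.inf if bad.size == 0 else bad.min()

# Toy setup (all numbers invented for illustration)
u_space = np.linspace(-10.0, 10.0, 2001)        # discrete uncertainty space, step 0.01
perf = lambda u: 5.0 - 0.5 * np.abs(u - 1.003)  # performance, peaks near u = 1
rho = radius_of_stability(perf, u_est=0.0, u_space=u_space, r_crit=3.0)
print(rho)  # -> 3.0: the nearest violating grid point sits at u = -3.0
```

Note that the function says nothing about how the decision performs beyond the radius it returns; that is precisely the "local robustness" limitation discussed above.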
In short, all this goes to show that, as a Radius of Stability model, IGDT's robustness model is the wrong model for the treatment of severe uncertainty of the type that IGDT is claimed to address.
It is extremely important to note in this regard that IGDT in fact has the dubious distinction of being the only decision theory in the trade to propose that the robustness of a decision against severe uncertainty be measured by a Radius of Stability model.
So, to repeat, you need not be a risk analyst to see that a theory purporting to offer a reliable methodology for robust decision-making under uncertainty, while proposing this prescription for the purpose, is fundamentally flawed.
The Fix
In response to my criticism of IGDT, Hall and Harvey (2009) hit on an easy quick fix. Although this paper is riddled with TUIGF, my concern here is only with the following remarkable statement, made immediately after the description of info-gap's regions of uncertainty, the horizon of uncertainty $\alpha$ and the estimate $\widetilde{u}$ of the parameter of interest (color is used for emphasis):
An assumption remains that values of $u$ become increasingly unlikely as they diverge from $\widetilde{u}$.
Hall and Harvey (2009, p. 12)

In other words, this assumption indicates that the estimate $\widetilde{u}$ is the most likely true value of the parameter $u$ and that the likelihood of $u$ being the true value decreases as $u$ deviates from $\widetilde{u}$. As I pointed out in my review, it was clear that Hall and Harvey's (2009) rationale for introducing this assumption was to justify IGDT's fixing on the estimate as the focus of the robustness analysis.
What I particularly wanted readers to take note of was the phrasing of this assumption. I therefore called readers' attention to the word "remains" by raising the following (rhetorical) question:
What exactly are we to make of "remains"? Does it mean that in the context of IGDT, which boasts of being a non-probabilistic, likelihood-free theory, this assumption was the case all along and thus "remains"? If so, how does it square with the "official" IGDT stand on this issue? Or is this a "new" assumption, one appended to the "official" theory? If it is a newly added assumption, then surely this must be made clear. Whatever the case, it must be explained how this assumption tallies with the claim that the uncertainty under consideration is severe.

More on this in Review 6.
A second attempt was made in Hine and Hall (2010). This time the assumption was introduced to get rid of, or at least mitigate, the above-mentioned absurdity in IGDT's prescription for the management of severe uncertainty. Here we read this (color is used for emphasis):
The main assumption is that $u$, albeit uncertain, will to some extent be clustered around some central estimate $\widetilde{u}$ in the way described by $\mathscr{U}(\alpha,\widetilde{u})$, though the size of the cluster (the horizon of uncertainty $\alpha$) is unknown. In other words, there is no known or meaningfully bounded worst case. Specification of the info-gap uncertainty model $\mathscr{U}(\alpha,\widetilde{u})$ may be based upon current observations or best future projections.
Hine and Hall (2010, pp. 2-3)

Although this phrasing is an improvement on Hall and Harvey's (2009) quick fix, just like its predecessor this assumption amounts to a major indictment of IGDT. And, strictly speaking, it is misleading.
Let me explain.
 What the authors refer to here as the "main assumption", thereby creating the impression that it is implicit in IGDT, is not in fact an IGDT assumption. It is not an integral part of IGDT as the theory is described in Ben-Haim (1996, 2001, 2006, 2010). There is no trace of it in any of these publications.
 This "main" assumption is an ad hoc innovation introduced Hine and Hall (2010) in order to justify IGDT's absurd proposition to focus the analysis on a wild guess.
 Interestingly, there is no trace of this "main" assumption in earlier publications on IGDT by the authors.
A detailed critique of the errors, misconceptions, misinformation etc. demonstrated in the wording of this assumption is available in Review 15.
With regard to the current review, the question is then: where is this "main assumption" in Hall et al. (2012)?
Good question!
The new local/global novelty of IGDT
As was pointed out above, Hall et al. (2012) do not attempt to fix the flaw in IGDT, namely the inherently local nature of IGDT's robustness analysis. Instead a new argument appears on the scene (color is used for emphasis):
The uncertainty model is therefore written as a nested family of sets $\mathscr{U}(\alpha,\widetilde{u})$. For small $\alpha$, searching set $\mathscr{U}(\alpha,\widetilde{u})$ resembles a local robustness analysis. However, $\alpha$ is allowed to increase so that in the limit the set $\mathscr{U}(\alpha,\widetilde{u})$ covers the entire parameter space and the analysis becomes one of global robustness. The analysis of a continuum of uncertainty from local to global is one of the novel ways in which info-gap analysis is informative.
Hall et al. (2012, pp. 1661-2)

To appreciate how absurd this claim is, let us examine the IGDT robustness model used in Hall et al. (2012, p. 1662), namely:
$$ \widehat{\alpha}(q,r_{c}) = \max\ \left\{\alpha: \min_{u\in \mathscr{U}(\alpha,\widetilde{u})} R(q,u) \ge r_{c}\right\}\tag{1} $$

where $q$ denotes the decision variable, $u$ denotes the uncertainty parameter, $r_{c}$ denotes the required critical performance level, $R$ denotes the reward function, and $\mathscr{U}(\alpha,\widetilde{u})$ denotes a neighborhood of size $\alpha$ around the point estimate $\widetilde{u}$.
Note that in this setup, $q$, $r_{c}$ and $\widetilde{u}$, are fixed, and $\alpha$ plays the role of a decision variable whose optimal (maximum) value, denoted $\widehat{\alpha}(q,r_{c}) $, is interpreted as the robustness of decision $q$, given $r_{c}$.
Clearly, contrary to the above claim in Hall et al. (2012), in this model $\alpha$ is not free to increase indefinitely. Its largest admissible value is dictated by the performance constraint $\min_{u\in \mathscr{U}(\alpha,\widetilde{u})} R(q,u) \ge r_{c}$.
In short, this robustness model is inherently local in nature because the robustness analysis is conducted in the neighborhood of $\widetilde{u}$.
It seems that Hall et al. (2012) confuse IGDT's robustness model with the trade-off curves that can be generated to display the $\widehat{\alpha}(q,r_{c})$ vs $r_{c}$ relationship. Note, by inspection, that $\widehat{\alpha}(q,r_{c})$ is non-increasing in $r_{c}$, and for a sufficiently small $r_{c}$ the performance constraint under consideration, namely $\min_{u\in \mathscr{U}(\alpha,\widetilde{u})} R(q,u) \ge r_{c}$, is superfluous; hence for such a small value of $r_{c}$ the robustness $\widehat{\alpha}(q,r_{c})$ is unbounded above, and the set $\mathscr{U}(\infty, \widetilde{u})$ covers the entire uncertainty space. But this does not alter the fact that the robustness analysis itself is inherently local in nature.
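The point can be checked numerically. The Python sketch below (the reward function and every number in it are invented for illustration; nothing is taken from the article) computes $\widehat{\alpha}(q,r_{c})$ of model (1) on a discrete uncertainty space for several values of $r_{c}$: the robustness shrinks as $r_{c}$ grows, and once $r_{c}$ is small enough for the constraint to bind nowhere it becomes unbounded. Yet every value is still measured outward from the estimate $\widetilde{u}$.

```python
import numpy as np

def igdt_robustness(reward, u_est, u_space, r_crit):
    """alpha-hat of model (1) on a discrete uncertainty space: the largest
    alpha such that reward(u) >= r_crit for every u within alpha of u_est."""
    dist = np.abs(u_space - u_est)
    bad = dist[reward(u_space) < r_crit]
    return np.inf if bad.size == 0 else bad.min()

# Toy reward of one fixed decision q (invented for illustration)
u_space = np.linspace(-10.0, 10.0, 2001)   # step 0.01
reward = lambda u: 5.0 - 0.5 * np.abs(u)   # reward peaks at the estimate

# Trade-off curve: alpha-hat vs r_c. The curve is non-increasing, and for a
# small enough r_c the constraint binds nowhere, so alpha-hat is unbounded.
curve = {r_c: igdt_robustness(reward, 0.0, u_space, r_c)
         for r_c in (4.0, 2.0, 0.5, -1.0)}
print(curve)
```

Generating such a curve is plain parametric analysis; nothing in it depends on IGDT, which is the point made in the text.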
Finally, it should also be pointed out that in many situations the value of $r_{c}$ is fixed and cannot be changed (e.g., it is dictated by regulations) and therefore the tradeoff curves mentioned above are not relevant.
The bottom line is then that the ability to generate tradeoff curves is not novel at all and is definitely not unique to IGDT. Any model can be subjected to a systematic parametric analysis with respect to its parameters to yield a tradeoff curve. This is not an IGDT novelty. Furthermore, the ability to generate tradeoff curves does not change the way the robustness of a decision is determined by IGDT. The local orientation of IGDT's robustness analysis is a manifestation of the fact that this robustness analysis is a (local) worstcase analysis that is conducted over nested sets in such a manner that a larger neighborhood is analyzed only if all smaller neighborhoods "passed" the worstcase test.
The following examples illustrate the inherent local orientation of IGDT's robustness analysis.
Examples
In all the examples below we deal with a situation where there are two decisions, call them $d'$ and $d''$, the uncertainty is severe, and in particular, the point estimate of the true value of the uncertainty parameter, denoted $\widetilde{u}$, is just a wild guess. The examples differ in the performance profiles of the two decisions.
Example 1
The performance profiles of the decisions are as follows:
 $d'$ satisfies the performance constraint everywhere on the uncertainty space, except at a single point, $u'$ located at a distance $\alpha\,'$ from $\widetilde{u}$.
 $d''$ violates the performance constraint everywhere on the uncertainty space except on a neighborhood of size $\alpha\,''= \alpha\,' + \epsilon$ around $\widetilde{u}$, where $\epsilon$ is small and positive, so $\alpha\,''$ is slightly larger than $\alpha\,'$.
According to IGDT, the robustness of decision $d''$ is equal to $\alpha\,'' = \alpha\,' + \epsilon$, whereas the robustness of decision $d'$ is smaller than $\alpha\,'$. Hence, IGDT regards $d''$ as more robust than $d'$. This absurd ranking is a manifestation of the inherently local orientation of IGDT's robustness analysis.
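Example 1 can be verified numerically. In the Python sketch below, the grid, $\alpha' = 30$, and $\epsilon = 0.5$ are all invented for the illustration: $d'$ violates the constraint at a single point of the space while $d''$ violates it over roughly 70% of the space, yet IGDT assigns $d''$ the larger robustness.

```python
import numpy as np

u_est = 0.0
u_space = np.linspace(-100.0, 100.0, 20001)   # discrete uncertainty space, step 0.01
alpha1 = 30.0                                  # alpha' (invented for illustration)
eps = 0.5                                      # epsilon (invented for illustration)

# d' : constraint satisfied everywhere except at the single point u' = alpha1
ok_d1 = np.abs(u_space - alpha1) > 1e-9
# d'': constraint violated everywhere except within alpha' + eps of the estimate
ok_d2 = np.abs(u_space - u_est) <= alpha1 + eps

def igdt_robustness(ok):
    """Distance from the estimate to the nearest violating point."""
    bad = np.abs(u_space - u_est)[~ok]
    return np.inf if bad.size == 0 else bad.min()

rho1, rho2 = igdt_robustness(ok_d1), igdt_robustness(ok_d2)
print(rho1, ok_d1.mean())  # d' : robustness ~30.0, constraint holds on ~100% of the space
print(rho2, ok_d2.mean())  # d'': larger robustness, constraint holds on ~30% of the space
```

Under these (invented) numbers, `rho2 > rho1`: the globally fragile decision is ranked the more robust of the two, exactly as the example claims.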
Example 2
The performance profiles of the decisions are as follows:
 $d'$ satisfies the performance constraint over 99.9% of the uncertainty space. Its IGDT robustness is equal to $\alpha\,' = 53$.
 $d''$ violates the performance constraint over 99.9% of the uncertainty space. Its IGDT robustness is equal to $\alpha\,''= 107$.
Hence, IGDT regards $d''$ as more robust than $d'$. This absurd ranking is a manifestation of the inherently local orientation of IGDT's robustness analysis.
Example 3
The performance profiles of the decisions are as follows:
 $d'$ satisfies the performance constraint at $\widetilde{u}$.
 $d''$ violates the performance constraint at $\widetilde{u}$.
Here IGDT's robustness of decision $d''$ is equal to $0$, irrespective of how well or badly $d''$ performs with respect to all other possible values of $u$. So for IGDT, the behavior of the decisions at $\widetilde{u}$ is of paramount importance in determining the robustness of decisions, even though the theory does not assume that the true value of the uncertainty parameter is more likely to be in the neighborhood of $\widetilde{u}$ than in the neighborhood of any other point in the uncertainty space! On what basis, then, does IGDT consider $\widetilde{u}$ to be much more "important" than any other point in the uncertainty space? And on what basis does IGDT totally ignore the performance of a decision over the rest of the uncertainty space if the decision violates the performance constraint at $\widetilde{u}$?
Example 4
The performance profiles of the decisions are as follows:
 $d'$ satisfies the performance constraint over 99.99% of the uncertainty space.
 $d''$ violates the performance constraint over 99.99% of the uncertainty space.
In this pathological case IGDT cannot determine which decision is more robust. Note that since we do not know where the estimate $\widetilde{u}$ is located relative to admissible/inadmissible points in the uncertainty space, we cannot rule out the possibility that, according to IGDT's robustness analysis, decision $d''$ will turn out to be more robust than decision $d'$.
Think about it: in this example we face a situation where overall, $d'$ performs almost perfectly practically over the entire uncertainty space, $d''$ performs very badly practically over the entire uncertainty space, yet ... IGDT cannot determine which decision is more robust to the severe uncertainty under consideration. The fact that IGDT is completely paralyzed in the absence of the point estimate $\widetilde{u}$ is a clear indication that its robustness analysis is inherently local in nature.
No amount of rhetoric can change this fact.
To make the "local issue" crystal clear, observe that IGDT's robustness analysis explores the uncertainty space only through nested neighborhoods of the point estimate $\widetilde{u}$: a larger neighborhood is examined only if all smaller neighborhoods pass the worst-case test, so the performance of a decision far from the estimate can never improve its standing.

I stress again that I raise this "old issue" here because this crucial point still eludes prominent members of the DMDU community (see Review 1-2022).
The repackaging of infogap decision theory
The really intriguing question about Hall et al.'s (2012) discussion of IGDT is what I call the "repackaging" of IGDT. In other words, the question that deserves an answer is this: how does the claim in Hall et al. (2012) that IGDT does not provide a strict ranking of decisions square with the standard depictions of IGDT in the IGDT literature to date? To enable readers who are not familiar with this literature to see for themselves, let us compare Hall et al.'s (2012) statement with a sample of statements illustrating the standard depiction of this theory by other info-gap scholars, including the Father of the theory, Prof. Yakov Ben-Haim.
Surely, Hall et al.'s (2012) blatant distortion of what IGDT actually does cannot be an accident. The question is then: what is its object? To date, IGDT's robust-satisficing strategy has been presented as a method that ranks decisions according to their robustness: the larger the robustness, the better. Similarly, the opportune-satisficing strategy has been presented as a method that ranks decisions according to their opportuneness: the smaller the opportuneness, the better. Hence, according to (what is known to the public as) IGDT's robust-satisficing strategy, the best decision is the one whose robustness is the largest (for the desired level of performance, $r_{c}$). Similarly, according to (what is known to the public as) IGDT's opportune-satisficing strategy, the best decision is the one whose opportuneness is the smallest (for the desired level of performance, $r_{w}$).
Are we to conclude from Hall et al. (2012) that this is no longer the case?
I suspect that this distortion may well be another attempt at a quick fix, aimed at getting around IGDT's fundamental ills. And this, by the way, is the norm in the IGDT literature: a repackaging of the rhetoric as a means of glossing over the fundamental ills of the theory itself! (see Review 2-2022).
Remarks
 Prof. Ben-Haim, the Father of info-gap decision theory, is crystal clear about how his theory establishes preferences among decisions. In the 2006 edition of his book on the theory we read:
For given values of critical or windfall reward, $r_{c}$ or $r_{w}$, each immunity function induces a preference ranking on the set of available decisions. More importantly, the immunity functions enable the decision maker to explore the desirability of different options q and different requirements $r_{c}$ and $r_{w}$, and thus to alter earlier preferences.
Ben-Haim (2006, p. 45)

And in the 2010 book we read:
Goals which are satisficed (sub-optimal but good enough) can be achieved by many alternative policies. Choose the most robust from among these alternatives.

Ben-Haim (2010, p. 10)

In short, IGDT's robust-satisficing method gives specific instructions on how to rank alternative decisions, policies, options, etc.: choose the most robust! Can there be any doubt about that? The message is crystal clear, leaving no room for the imagination: contrary to the claim in Hall et al. (2012), IGDT provides a ranking of alternative decisions.

 The point raised in Hall et al. (2012) about the "trade-off" issue has no bearing whatsoever on the fundamental fact that, as dictated by IGDT's robust-satisficing method, for any given value of the critical reward $r_{c}$, the best decision is one that maximizes the robustness function (for the specified value of $r_{c}$). And it is on the basis of this determination that the "trade-off" analysis is then conducted.
 The troubles besetting IGDT's robustness analysis have nothing to do with this "trade-off". The trouble is due to the fact that IGDT's robustness model is a model of local robustness that is being misapplied to the management of severe uncertainty. More on this in Review 1-2022.
The missing performance (reward) function
On Fri, 10 June 2011 16:42:20 +1000 (EST), I requested from one of the co-authors details of the performance function of IGDT's robustness model used in Hall et al. (2012), because this function was not specified for the numerical examples presented in the article. The request was acknowledged and forwarded to another author. On Wed, 13 July 2011 11:17:16 +0100, I was informed by the first author that the reward function was not specified in the article; the rewards were generated by a computer program (MLK DICE) that was reported on elsewhere.
I find this most surprising because the uncertain parameter under consideration (call it $u$) takes only 2662 values; that is, the uncertainty space is discrete and contains 2662 distinct values of $u$. This means that for each of the four strategies under consideration there are only 2662 possible rewards. The implication is that the reward function can easily be completely "specified" (online) by a relatively small spreadsheet, enabling interested users to download and examine it.
But more than this: in view of the fact that the uncertainty space is discrete and manifestly small, determining robustness is trivial; it can easily be done by enumeration. Indeed, given the size of this discrete uncertainty space, it is hard to comprehend why the authors conduct no more than a local robustness analysis! For their money, they could easily have performed a global analysis (over the entire uncertainty space) and come up with far more meaningful results.
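To show just how trivial such a global enumeration is, here is a Python sketch. The figures 2662 points and four strategies come from the article; since the actual reward function was never published, the rewards below are random placeholders, so only the shape of the computation is meaningful, not the numbers it produces.

```python
import numpy as np

# Stand-in data: 2662 discrete values of u and four strategies, as in the
# article. The true rewards were not published, so we draw placeholders.
rng = np.random.default_rng(0)
rewards = rng.normal(loc=5.0, scale=2.0, size=(4, 2662))  # one row per strategy
r_crit = 4.0   # invented performance requirement

# Global robustness by brute-force enumeration: check the requirement at
# every point of the uncertainty space, for every strategy at once.
satisfied = rewards >= r_crit                 # (4, 2662) boolean table
global_fraction = satisfied.mean(axis=1)      # share of the space that passes
worst_case = rewards.min(axis=1)              # worst reward over the whole space
print(global_fraction, worst_case)
```

Four strategies times 2662 points is about ten thousand comparisons, done in a single vectorized pass, which underlines the point that nothing here calls for a merely local analysis.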
Regrettably, the article does not provide such a spreadsheet, so it is impossible to check or reconstruct the results.
Summary and conclusions
 The uncertainty space of the robustness model under consideration in this article consists of 2662 values of a parameter comprising 4 components. Since only four strategies are considered in this case, the robustness issue is trivial to the extent that it can easily be handled by enumeration. This means that both methodologically and practically, it is unclear why a model of local robustness is used here to begin with.
 Furthermore, as the article does not specify the reward function, it is impossible to check the results generated by the proposed local robustness model so as to determine whether these results are consistent with the strategies’ global robustness over the uncertainty space.
 There are no signs, in this article, of the previous attempts to "fix" IGDT (see Review 6 and Review 15).
 Contrary to the claim in the article, IGDT's robustness analysis is inherently local in nature.
 The ability to generate tradeoff curves relating two or more performance measures is not a novelty. It is done routinely and is not unique to IGDT.
 The article misrepresents IGDT's stand with regard to ranking of decisions.
Readers interested in the "local vs global" robustness issue may wish to surf to the old robustness directory. Readers interested in "Voodoo Decision Making", may wish to surf to the old voodoo decision making directory.
Bibliography and links
Articles/chapters
 Sniedovich, M. (2007) The Art and Science of Modeling Decision-Making Under Severe Uncertainty. Journal of Decision Making in Manufacturing and Services, 1(1-2), 111-136. https://doi.org/10.7494/dmms.2007.1.2.111
 Sniedovich, M. (2008) Wald's Maximin Model: A Treasure in Disguise! Journal of Risk Finance, 9(3), 278-291. https://doi.org/10.1108/15265940810875603
 Sniedovich, M. (2008) From Shakespeare to Wald: Modelling worst-case analysis in the face of severe uncertainty. Decision Point, 22, 8-9.
 Sniedovich, M. (2009) A Critique of Info-Gap Robustness Model. In Martorell et al. (eds.), Safety, Reliability and Risk Analysis: Theory, Methods and Applications, pp. 2071-2079. Taylor and Francis Group, London.
 Sniedovich, M. (2010) A bird's view of info-gap decision theory. Journal of Risk Finance, 11(3), 268-283. https://doi.org/10.1108/15265941011043648
 Sniedovich, M. (2011) A classic decision theoretic perspective on worst-case analysis. Applications of Mathematics, 56(5), 499-509. https://doi.org/10.1007/s10492-011-0028-x
 Sniedovich, M. (2012) Black swans, new Nostradamuses, voodoo decision theories and the science of decision-making in the face of severe uncertainty. International Transactions in Operational Research, 19(1-2), 253-281. https://doi.org/10.1111/j.1475-3995.2010.00790.x
 Sniedovich, M. (2012) Fooled by local robustness: an applied ecology perspective. Ecological Applications, 22(5), 1421-1427. https://doi.org/10.1890/12-0262.1
 Sniedovich, M. (2012) Fooled by local robustness. Risk Analysis, 32(10), 1630-1637. https://doi.org/10.1111/j.1539-6924.2011.01772.x
 Sniedovich, M. (2014) The elephant in the rhetoric on info-gap decision theory. Ecological Applications, 24(1), 229-233. https://doi.org/10.1890/13-1096.1
 Sniedovich, M. (2016) Wald's mighty maximin: a tutorial. International Transactions in Operational Research, 23(4), 625-653. https://doi.org/10.1111/itor.12248
 Sniedovich, M. (2016) From statistical decision theory to robust optimization: a maximin perspective on robust decision-making. In Doumpos, M., Zopounidis, C., and Grigoroudis, E. (eds.), Robustness Analysis in Decision Aiding, Optimization, and Analytics, pp. 59-87. Springer, New York.
Research Reports
 Sniedovich, M. (2006) What's Wrong with InfoGap? An Operations Research Perspective
 Sniedovich, M. (2011) Infogap decision theory: a perspective from the Land of the Black Swan
Links