

Reviews of publications on Info-Gap decision theory (IGDT)

Review 7-2022 (Posted: February 9, 2022; Last update: February 9, 2022)


Reference: Jim W. Hall, Robert J. Lempert, Klaus Keller, Andrew Hackbarth, Christophe Mijere, and David J. McInerney (2012). Robust Climate Policies Under Uncertainty: A Comparison of Robust Decision Making and Info-Gap Methods. Risk Analysis, 32(10):1657-1672.
Publication type: Peer-reviewed journal article.
Year of publication: 2012
Downloads: https://doi.org/10.1111/j.1539-6924.2012.01802.x
Abstract: This study compares two widely used approaches for robustness analysis of decision problems: the info-gap method originally developed by Ben-Haim and the robust decision making (RDM) approach originally developed by Lempert, Popper, and Bankes. The study uses each approach to evaluate alternative paths for climate-altering greenhouse gas emissions given the potential for nonlinear threshold responses in the climate system, significant uncertainty about such a threshold response and a variety of other key parameters, as well as the ability to learn about any threshold responses over time. Info-gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information. The study finds that the two approaches reach similar but not identical policy recommendations and that their differing attributes raise important questions about their appropriate roles in decision support applications. The comparison not only improves understanding of these specific methods, it also suggests some broader insights into robustness approaches and a framework for comparing them.
Reviewer: Moshe Sniedovich
IF-IG perspective: This article was reviewed more than 10 years ago. The reasons for the need for a new review are discussed below. The section "The new local/global novelty of IGDT" is new.

Introduction

After reading a number of recent IGDT publications, including the chapters in the 2019 book "Decision Making under Deep Uncertainty" (see review), I decided to modify the old (2011) review slightly in light of the way the article has been referenced and interpreted by more recent publications on IGDT. It is therefore important to stress that this review was essentially written more than ten years ago.

Before I reviewed this article in 2011 (it was then available for early view), I reviewed two articles (see Review 6 and Review 15) by the first author that advocated the use of IGDT as a methodology for decision-making under severe uncertainty. In my reviews of these articles, I pointed out that, apparently in response to my sustained criticism of IGDT at the time, a brave attempt was made in these two articles to introduce a fundamental "fix" into this theory, so as to correct a deep-rooted flaw in it, in fact a flaw that renders IGDT a voodoo decision theory par excellence. The main objective of my reviews was to show that these attempts at fixing IGDT's fundamental ills were themselves very problematic, so that rather than providing a remedy, they exacerbated the problem even more!

So, having come across this new article in 2011, I was curious to find out whether my response to the previous attempts to revamp IGDT would be reflected in it. But not only did I not find so much as a remote echo of my critique of the failed attempts at "amending" IGDT, I did not find even an echo of the previous implied admission that such corrections are necessary and, what is more, that they must be stated explicitly.

This seems to suggest that Hall et al. (2012) have given up on attempts to fix IGDT, which means that this new article calls for harsher criticism than the criticism that was directed at the previous articles (see Review 6 and Review 15).

I want to assure all those readers who might find this story a bit complicated that it is in fact quite simple to follow. Simply read on!

However, before I can proceed to unravel the relevant details that will make sense of this story, it is important that I call attention to a no less important fact about this article. This has to do with what seems to be an attempt to "repackage" IGDT, again, I suspect, in an effort to circumvent the censure that IGDT is so obviously vulnerable to! And by this I mean that, by using a clearly modified rhetoric, an attempt is made to dress IGDT in new clothes.

So, what to date has been trumpeted throughout the IGDT literature as IGDT's great forte, namely its capability as a method for seeking robust decisions under severe uncertainty, is in this article (deliberately?) played down. Indeed, in this article IGDT is no longer claimed to provide the decision-maker with a mechanism for ranking decisions! No, not at all!

According to the rhetoric in this article, all that IGDT furnishes the decision-maker is some sort of general approach that gives him some (indeterminate?) counsel about the complicated business of decision-making under uncertainty. In a word, going by the rhetoric in this article, one would be hard pressed to recognize in the "approach" described here the methodology that the entire IGDT literature to date has hailed as a reliable methodology for seeking robust decisions, to be precise, those decisions that are (the most) robust to uncertainty.

The trouble is, however, that I am unable to give you a complete assessment of the so-called "Info-gap approach" outlined in this paper, because the authors did not bother to specify the details of the performance function. Consequently, it is impossible to validate the results reported on in this paper.

And now to the details of the story that is at the center of this review. I propose to unravel it in three stages. In the first, I remind the reader of the fundamental flaws that render info-gap decision theory a voodoo decision theory. In the second, I explain how, in previous articles, attempts were made to find a way around some of these flaws. Following that, I examine how the new article deals with, or rather mishandles, this issue.

The fundamental flaw

For the benefit of readers who are not familiar with the "IGDT story", I should point out that the reason I have branded this theory a voodoo decision theory par excellence is essentially its prescription for robust decision-making under severe uncertainty. In fact, you need not even be a risk analyst to see immediately that, like any other "voodoo theory", this theory is of the "too good to be true" ilk. Which means, of course, that it is a theory whose groundless propositions can easily be exposed for what they are by means of simple examples and counter-examples.

Of particular interest to us in this discussion is IGDT's prescription for the treatment of severe uncertainty. This prescription basically instructs the following:

IGDT's secret weapon against severe uncertainty: Ignore the severity of the uncertainty! Conduct a local analysis in the neighborhood of a poor estimate that may be substantially wrong, perhaps even just a wild guess. This magic recipe works particularly well in cases where the uncertainty is unbounded!
The obvious flaws in the recipe should not bother you, and definitely should not prevent you from claiming that the method is reliable! To wit (color is used for emphasis):
The management of surprises is central to the “economic problem”, and info-gap theory is a response to this challenge. This book is about how to formulate and evaluate economic decisions under severe uncertainty. The book demonstrates, through numerous examples, the info-gap methodology for reliably managing uncertainty in economic policy analysis and decision making.
Ben-Haim (2010, p. x)

Of course, the prescription is not stated in so many words, but this is what it comes down to.

And to appreciate my claim that you need not be a certified risk-analyst to immediately see that it is this prescription that renders this theory a voodoo decision theory par excellence, simply keep in mind that this prescription is given by a theory claiming to provide the means for tackling the severest uncertainty imaginable.

How SEVERE?

To begin with, the uncertainty is claimed to be non-probabilistic and likelihood-free, meaning that it cannot be quantified by means of "conventional" models of uncertainty or by means of fuzziness. The quantification of the uncertainty is therefore austere in the extreme. Indeed, it comprises two elements:

  • A vast (possibly unbounded) uncertainty space $\overline{\mathscr{U}}$ of possible values of the parameter of interest $u$.
  • A point estimate $\widetilde{u}$ of the true value of $u$.

The immediate implication of the uncertainty being likelihood-free is that there are no grounds to assume that the true value of $u$ is more/less likely to be in the neighborhood of the estimate than in the neighborhood of any other value of $u$ in $\overline{\mathscr{U}}$. And for similar reasons, the estimate must be assumed to be poor, to the effect that it may well be substantially wrong, indeed no more than a wild guess of the true value of $u$. To illustrate, suppose that the uncertainty space $\overline{\mathscr{U}}$ is this page. This means that, as the uncertainty is likelihood-free, we have not a clue where the true value of $u$ is located on this page. All we know is that it is somewhere on this page.

And even if we were to assume, for argument's sake, that the point estimate of the true value of $u$ is exactly in the middle of this page, this additional stipulation, given that the uncertainty is likelihood-free, would not alter even by one iota the basic fact that we have no clue where on the page the true value of $u$ is.

The implication is then that there are no grounds whatsoever to assume that the true value of $u$ is more/less likely to be in the neighborhood of the estimate than in the neighborhood of any other value of $u$ in $\overline{\mathscr{U}}$.

And all this leads to the inevitable conclusion that to determine the robustness of a decision to uncertainty where the uncertainty is described in these terms, it is imperative to evaluate the performance of the decision over the entire uncertainty space, or over a large sample of points from this space that adequately represents the variability of $u$ over this space.

Yet, in contrast to what other theories and methods for the treatment of severe uncertainty do, this is not what IGDT prescribes doing! To the contrary, IGDT prescribes conducting a robustness analysis only in the neighborhood of this (highly questionable, doubtful etc.) estimate and nowhere else. And so, IGDT's prescription for measuring robustness against severe uncertainty is based on the following question:

How much can this (poor) estimate be perturbed (in all directions) without causing a violation of the performance requirement?

Thus, if the answer is for instance 12cm, then any perturbation of size 12cm or less (in any direction) will not violate the performance requirement and will therefore be deemed "acceptable". Whereas, a slightly larger perturbation (in some direction) will be deemed unacceptable as it will cause a violation of the performance requirement.

This definition and means of measuring robustness is known universally as the Radius of Stability. For obvious reasons, the Radius of Stability is treated universally (e.g., in the robust control theory literature, the economics literature, etc.) as a model of local robustness. This means that the accepted convention is that, as a measure of local robustness, it cannot be counted on to provide a measure of the global robustness of a system, a decision, or whatever. In other words, it cannot be counted on to indicate how well, or how poorly, a decision performs over the entire uncertainty space. Indeed, it is elementary to devise examples demonstrating that a decision that is locally robust in the neighborhood of the estimate is very fragile globally over $\overline{\mathscr{U}}$, and vice versa.
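To make the local/global contrast concrete, here is a minimal sketch (Python; the uncertainty space and the two performance profiles are hypothetical, purely for illustration) that computes the radius of stability of two decisions alongside a global measure of robustness:

```python
import numpy as np

# Hypothetical 1-D uncertainty space (the "page") and a wild-guess estimate.
U = np.linspace(0.0, 100.0, 10001)
u_est = 50.0

# Two illustrative performance profiles: True = performance requirement satisfied.
d1_ok = np.abs(U - u_est) > 1.0    # d1 fails only very near the estimate
d2_ok = np.abs(U - u_est) <= 5.0   # d2 holds only very near the estimate

def radius_of_stability(ok):
    """Distance from u_est to the nearest violation of the requirement."""
    bad = np.abs(U - u_est)[~ok]
    return bad.min() if bad.size else np.inf

for name, ok in [("d1", d1_ok), ("d2", d2_ok)]:
    print(f"{name}: radius of stability = {radius_of_stability(ok):.2f}, "
          f"global coverage = {ok.mean():.1%}")
# d1: radius of stability ~ 0.00, global coverage ~ 98.0%
# d2: radius of stability ~ 5.00, global coverage ~ 10.0%
```

On these profiles the radius-of-stability ranking is the exact reverse of the global one: d1 is fragile at the estimate yet satisfies the requirement over practically the entire space, while d2 is the other way around. This is precisely the kind of reversal exhibited in the examples below.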

In short, all this goes to show that, as a radius of stability model, IGDT's robustness model is the wrong model for the treatment of a severe uncertainty of the type that IGDT is claimed to address.

It is extremely important to note in this regard that IGDT in fact has the dubious distinction of being the only decision theory in the trade to propose that the robustness of a decision against severe uncertainty be measured by a Radius of stability model.

So, to repeat, you need not be a risk analyst to see that a theory purporting to offer a reliable methodology for robust decision-making under uncertainty, while proposing this prescription for the purpose, is fundamentally flawed.

The Fix

In response to my criticism of IGDT, Hall and Harvey (2009) have hit on an easy quick fix. Although this paper is riddled with TUIGF, my concern here is only with the following remarkable statement made immediately after the description of info-gap's regions of uncertainty, the horizon of uncertainty $\alpha$ and the estimate $\widetilde{u}$ of the parameter of interest (color is used for emphasis):

An assumption remains that values of $u$ become increasingly unlikely as they diverge from $\widetilde{u}$.
Hall and Harvey (2009, p. 12)

In other words, this assumption indicates that the estimate $\widetilde{u}$ is the most likely true value of the parameter $u$ and that the likelihood of $u$ being the true value decreases as $u$ deviates from $\widetilde{u}$. As I pointed out in my review, it was clear that Hall and Harvey's (2009) rationale for introducing this assumption was to justify IGDT's fixing on the estimate as the focus of the robustness analysis.

What I particularly wanted readers to take note of was the phrasing of this assumption. I therefore called readers' attention to the word remains by raising the following (rhetorical) question:

What exactly are we to make of "remains"? Does it mean that in the context of IGDT, which boasts of being a non-probabilistic, likelihood-free theory, this assumption was the case all along and thus "remains"? If so, how does it square with the "official" IGDT stand on this issue? Or is this a "new" assumption, one that was appended to the "official" theory? If it is a newly added assumption, then surely this must be made clear. Whatever the case, it must be explained how this assumption tallies with the claim that the uncertainty under consideration is severe.

More on this in Review 6

A second attempt was made in Hine and Hall (2010). This time an assumption was introduced to get rid of, or at least mitigate, the above-mentioned absurdity in IGDT's prescription for the management of severe uncertainty. So, here we read this (color is used for emphasis):

The main assumption is that $u$, albeit uncertain, will to some extent be clustered around some central estimate $\widetilde{u}$ in the way described by $\mathscr{U}(\alpha,\widetilde{u})$, though the size of the cluster (the horizon of uncertainty $\alpha$) is unknown. In other words, there is no known or meaningfully bounded worst case. Specification of the info-gap uncertainty model $\mathscr{U}(\alpha,\widetilde{u})$ may be based upon current observations or best future projections.
Hine and Hall (2010, pp. 2-3)

Although this phrasing is an improvement on Hall and Harvey's (2009) quick-fix, just like its predecessor, this assumption amounts to a major indictment of IGDT. And, strictly speaking, it is misleading.

Let me explain.

  • What the authors refer to here as the "main assumption", thereby creating the impression that this assumption is implicit in IGDT, is not in fact an IGDT assumption. It is not an integral part of IGDT -- as the theory is described in Ben-Haim (1996, 2001, 2006, 2010). There is no trace of it in any of these publications.
  • This "main" assumption is an ad hoc innovation, introduced by Hine and Hall (2010) in order to justify IGDT's absurd proposition to focus the analysis on a wild guess.
  • Interestingly, there is no trace of this "main" assumption in earlier publications on IGDT by the authors.

A detailed critique of the errors, misconceptions, misinformation etc. demonstrated in the wording of this assumption is available in Review 15.

With regard to the current review, the question is then: where is this "main assumption" in Hall et al. (2012)?

Good question!

The new local/global novelty of IGDT

As was pointed out above, Hall et al. (2012) do not attempt to fix the flaw in IGDT, namely the inherently local nature of IGDT's robustness analysis. Instead, a new argument appears on the scene (color is used for emphasis):

The uncertainty model is therefore written as a nested family of sets $\mathscr{U}(\alpha,\widetilde{u})$. For small $\alpha$, searching set $\mathscr{U}(\alpha,\widetilde{u})$ resembles a local robustness analysis. However, $\alpha$ is allowed to increase so that in the limit the set $\mathscr{U}(\alpha,\widetilde{u})$ covers the entire parameter space and the analysis becomes one of global robustness. The analysis of a continuum of uncertainty from local to global is one of the novel ways in which info-gap analysis is informative.
Hall et al. (2012, pp. 1661-1662)

To appreciate how absurd this claim is, let us examine the IGDT robustness model used in Hall et al. (2012, p. 1662), namely:

$$ \widehat{\alpha}(q,r_{c}) = \max\ \left\{\alpha: \min_{u\in \mathscr{U}(\alpha,\widetilde{u})} R(q,u) \ge r_{c}\right\}\tag{1} $$

where $q$ denotes the decision variable, $u$ denotes the uncertainty parameter, $r_{c} $ denotes the required critical performance level, $R$ denotes the reward function, and $\mathscr{U}(\alpha,\widetilde{u})$ denotes a neighborhood of size $\alpha$ around the point estimate $\widetilde{u}$.

Note that in this setup, $q$, $r_{c}$ and $\widetilde{u}$, are fixed, and $\alpha$ plays the role of a decision variable whose optimal (maximum) value, denoted $\widehat{\alpha}(q,r_{c}) $, is interpreted as the robustness of decision $q$, given $r_{c}$.

Clearly, contrary to the above claim in Hall et al. (2012), in this model $\alpha$ is not free to increase indefinitely. Its largest admissible value is dictated by the performance constraint $\min_{u\in \mathscr{U}(\alpha,\widetilde{u})} R(q,u) \ge r_{c}$.
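To see this capping mechanism in operation, here is a minimal sketch of model (1) by enumeration on a discretized uncertainty space (Python; the reward function and all numbers are hypothetical stand-ins, since the article's reward function was not published):

```python
import numpy as np

def igdt_robustness(R, q, U, u_est, r_c, alphas):
    """Model (1) by enumeration: the largest alpha such that R(q, u) >= r_c
    holds throughout the neighborhood {u in U : |u - u_est| <= alpha}."""
    best = None
    for alpha in sorted(alphas):
        nbhd = U[np.abs(U - u_est) <= alpha + 1e-9]   # tolerance for float grids
        if nbhd.size and min(R(q, u) for u in nbhd) >= r_c:
            best = alpha   # constraint holds on this neighborhood; try a larger one
        else:
            break          # the first violation caps alpha: larger neighborhoods
                           # are never examined at all
    return best

U = np.linspace(0.0, 100.0, 1001)
R = lambda q, u: -abs(u - 40.05)   # hypothetical reward, peaked just off the estimate
print(igdt_robustness(R, None, U, u_est=50.0, r_c=-15.0,
                      alphas=np.arange(0.0, 60.0, 0.5)))
# -> 5.0: the analysis never examines any u with |u - 50| > 5.5, however
#    well or badly the decision performs there.
```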

In short, this robustness model is inherently local in nature because the robustness analysis is conducted in the neighborhood of $\widetilde{u}$.

It seems that Hall et al. (2012) confuse IGDT's robustness model with the tradeoff curves that can be generated to display the $\widehat{\alpha}(q,r_{c})$ vs $r_{c}$ relationship. Note, by inspection, that $\widehat{\alpha}(q,r_{c})$ is non-increasing in $r_{c}$, and that for a sufficiently small $r_{c}$ the performance constraint under consideration, namely $\min_{u\in \mathscr{U}(\alpha,\widetilde{u})} R(q,u) \ge r_{c}$, is superfluous. Hence, for such a small value of $r_{c}$, the robustness $\widehat{\alpha}(q,r_{c})$ is unbounded above, and the set $\mathscr{U}(\infty, \widetilde{u})$ covers the entire uncertainty space. But this does not alter the fact that the robustness analysis itself is inherently local in nature.
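Indeed, generating the tradeoff curve amounts to nothing more than re-running the same local analysis for a grid of $r_{c}$ values. A sketch, reusing the hypothetical `igdt_robustness`, `R` and `U` from the snippet above:

```python
for r_c in [-70.0, -30.0, -20.0, -10.0, -5.0]:
    a = igdt_robustness(R, None, U, u_est=50.0, r_c=r_c,
                        alphas=np.arange(0.0, 200.0, 0.5))
    print(f"r_c = {r_c:6.1f}  ->  alpha_hat = {a}")
# r_c =  -70.0  ->  alpha_hat = 199.5  (constraint superfluous; capped only
#                                       by the largest alpha tried)
# r_c =  -30.0  ->  alpha_hat = 20.0
# r_c =  -20.0  ->  alpha_hat = 10.0
# r_c =  -10.0  ->  alpha_hat = 0.0
# r_c =   -5.0  ->  alpha_hat = None  (violation at the estimate itself)
```

Each point on the curve is the outcome of the same local analysis around $\widetilde{u}$; varying $r_{c}$ does not make the analysis global.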

Finally, it should also be pointed out that in many situations the value of $r_{c}$ is fixed and cannot be changed (e.g., it is dictated by regulations) and therefore the tradeoff curves mentioned above are not relevant.

The bottom line is then that the ability to generate tradeoff curves is not novel at all and is definitely not unique to IGDT. Any model can be subjected to a systematic parametric analysis with respect to its parameters to yield a tradeoff curve. This is not an IGDT novelty. Furthermore, the ability to generate tradeoff curves does not change the way the robustness of a decision is determined by IGDT. The local orientation of IGDT's robustness analysis is a manifestation of the fact that this robustness analysis is a (local) worst-case analysis that is conducted over nested sets in such a manner that a larger neighborhood is analyzed only if all smaller neighborhoods "passed" the worst-case test.

The following examples illustrate the inherent local orientation of IGDT's robustness analysis.

Examples

In all the examples below we deal with a situation where there are two decisions, call them $d'$ and $d''$, the uncertainty is severe, and in particular, the point estimate of the true value of the uncertainty parameter, denoted $\widetilde{u}$, is just a wild guess. The examples differ in the performance profiles of the two decisions.

Example 1 
The performance profiles of the decisions are as follows:

  • $d'$ satisfies the performance constraint everywhere on the uncertainty space, except at a single point, $u'$ located at a distance $\alpha\,'$ from $\widetilde{u}$.
  • $d''$ violates the performance constraint everywhere on the uncertainty space except on a neighborhood of size $\alpha\,''= \alpha\,' + \epsilon$ around $\widetilde{u}$, where $\epsilon$ is small and positive, so $\alpha\,''$ is slightly larger than $\alpha\,'$.

According to IGDT, the robustness of decision $d''$ is equal to $\alpha\,'' = \alpha\,' + \epsilon$, whereas the robustness of decision $d'$ is smaller than $\alpha\,'$. Hence, IGDT regards $d''$ as more robust than $d'$. This absurd ranking is a manifestation of the inherently local orientation of IGDT's robustness analysis.
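A minimal numeric instantiation of this example (hypothetical numbers; the radius is computed as in the sketch above):

```python
import numpy as np

U = np.linspace(0.0, 100.0, 100001)       # hypothetical uncertainty space
u_est, a1, eps = 50.0, 10.0, 0.5          # wild-guess estimate, alpha', epsilon

d1_ok = np.abs(U - (u_est + a1)) > 1e-9   # d' : fails only at the single point u'
d2_ok = np.abs(U - u_est) <= a1 + eps     # d'': holds only within alpha' + eps

for name, ok in [("d' ", d1_ok), ("d''", d2_ok)]:
    bad = np.abs(U - u_est)[~ok]
    radius = bad.min() if bad.size else np.inf
    print(f"{name}: IGDT robustness = {radius:.1f}, "
          f"constraint satisfied over {ok.mean():.1%} of the space")
# d' : IGDT robustness = 10.0, constraint satisfied over 100.0% of the space
# d'': IGDT robustness = 10.5, constraint satisfied over 21.0% of the space
```

(On the discrete grid the radius of $d'$ evaluates to $\alpha\,'$ itself; on the continuum it is just below $\alpha\,'$.) Either way, IGDT ranks $d''$ above $d'$, exactly as asserted.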

Example 2 
The performance profiles of the decisions are as follows:

  • $d'$ satisfies the performance constraint over 99.9% of the uncertainty space. Its IGDT robustness is equal to $\alpha\,' = 53$.
  • $d''$ violates the performance constraint over 99.9% of the uncertainty space. Its IGDT robustness is equal to $\alpha\,''= 107$.

Hence, IGDT regards $d''$ as more robust than $d'$. This absurd ranking is a manifestation of the inherently local orientation of IGDT's robustness analysis.

Example 3 
The performance profiles of the decisions are as follows:

  • $d'$ satisfies the performance constraint at $\widetilde{u}$.
  • $d''$ violates the performance constraint at $\widetilde{u}$.

Here IGDT's robustness of decision $d''$ is equal to $0$, irrespective of how well/badly $d''$ performs with respect to all other possible values of $u$. So, for IGDT, the behavior of decisions at $\widetilde{u}$ is of paramount importance in determining the robustness of decisions, even though the theory does not assume that the true value of the uncertainty parameter is more likely to be in the neighborhood of $\widetilde{u}$ than in the neighborhood of any other point in the uncertainty space! On what basis, then, does IGDT consider $\widetilde{u}$ to be so much more "important" than any other point in the uncertainty space? And on what basis does IGDT totally ignore the performance of a decision over the uncertainty space if the decision violates the performance constraint at $\widetilde{u}$?
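In terms of the `igdt_robustness` sketch above, this is the degenerate case in which the very first neighborhood already fails (hypothetical numbers):

```python
# R(q, 50.0) = -9.95 < r_c = -5.0: the requirement is violated at u_est itself,
# so the enumeration breaks out immediately and the robustness is None (i.e. 0),
# no matter how the decision performs anywhere else in the uncertainty space.
print(igdt_robustness(R, None, U, u_est=50.0, r_c=-5.0,
                      alphas=np.arange(0.0, 60.0, 0.5)))   # -> None
```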

Example 4 
The performance profiles of the decisions are as follows:

  • $d'$ satisfies the performance constraint over 99.99% of the uncertainty space.
  • $d''$ violates the performance constraint over 99.99% of the uncertainty space.

In this pathological case IGDT cannot determine which decision is more robust. Note that since we do not know where the estimate $\widetilde{u}$ is located relative to admissible/inadmissible points in the uncertainty space, we cannot rule out the possibility that, according to IGDT's robustness analysis, decision $d''$ will turn out to be more robust than decision $d'$.

Think about it: in this example we face a situation where, overall, $d'$ performs almost perfectly over practically the entire uncertainty space, and $d''$ performs very badly over practically the entire uncertainty space, yet ... IGDT cannot determine which decision is more robust to the severe uncertainty under consideration. The fact that IGDT is completely paralyzed in the absence of the point estimate $\widetilde{u}$ is a clear indication that its robustness analysis is inherently local in nature.

No amount of rhetoric can change this fact.


To make the "local issue" crystal clear, observe that

The Local Issue
  • IGDT's robustness model addresses the following question: how much can $\widetilde{u}$ be perturbed (in all directions) without violating the performance constraint over the neighborhood around $\widetilde{u}$ whose size is equal to the size of the perturbation?
  • IGDT's robustness model does not address the following question: How robust is decision $d$ to the severe uncertainty in the true value of $u$?

In other words, IGDT addresses the "small perturbation in $\widetilde{u}$" question. It does not address the "robustness against severe uncertainty" question.

I stress again that I raise this "old issue" here, again, because this crucial point still eludes prominent members of the DMDU community (see Review 1-2022).

The repackaging of info-gap decision theory

The really intriguing question about Hall et al.'s (2012) discussion of IGDT is what I call the "repackaging" of IGDT. In other words, the question that deserves an answer is this: how does the claim in Hall et al. (2012) that IGDT does not provide a strict ranking of decisions square with the standard depictions of IGDT in the IGDT literature to date? To enable readers who are not familiar with this literature to see for themselves, let us compare Hall et al.'s (2012) statement with a sample of statements -- illustrating the standard depiction of this theory -- by other info-gap scholars, including the Father of the theory, Prof. Yakov Ben-Haim.

Hall et al. (2012, p. 2):

Neither Info-gap nor RDM provide a strict ranking of alternative decisions. Rather, both provide decision support, summarizing tradeoffs for decision makers to help inform their judgments about the robustness of alternative decision options.

The IGDT literature:

The best decision is then chosen as the one that is most robust to uncertainty, i.e. is guaranteed to give acceptable outcomes under the greatest degree of uncertainty.
Halpern, Regan, Possingham and McCarthy (2006, p. 3)

The best decision is the one that is most robust to uncertainty, by guaranteeing an acceptable outcome under the greatest degree of uncertainty.
McCarthy and Lindenmayer (2007, p. 554)

When a decision must be made under considerable uncertainty, a robust option may be preferred to one that is less robust even if the latter has considerably better performance under best estimate conditions.
Harvey, Hall, and Peppe (2009, p. 4292)

The robustness can be evaluated even though there is no known worst case. Furthermore, the robustness function generates preferences on the decisions, $q$: a decision which is more robust for achieving aspiration $r_{c}$ is preferred over a decision which is less robust. Robust-satisficing decision making maximizes the robustness and satisfices the reward at the value $r_{c}$, without specifying a limit on the level of uncertainty: $$ \widetilde{q} = \arg\max_{q\in Q} \ \widetilde{\alpha}(q,r_{c})\tag{4} $$ where $Q$ is the set of available decisions.
Davidovitch and Ben-Haim (2010, p. 268)

The robustness function generates a preference ordering on the available decisions: a more robust decision is preferred over a less robust decision. Satisficing means doing well enough, or obtaining an adequate outcome. A satisficing decision strategy seeks a decision whose outcome is good enough, though perhaps sub-optimal. A robust-satisficing decision strategy maximizes the robustness to uncertainty and satisfices the outcome.
Schwartz, Ben-Haim and Dasco (2011, p. 213)

We now have a general mathematical formulation of the problem at hand including a model, which incorporates uncertainties in the preliminary data, and a method to choose the best decision.
Sisso, Shema and Ben-Haim (2010, p. 1035)

As we have noted before, this means that ''bigger is better'' for the robustness function. Consequently, a decision maker will usually prefer decision option $q$ over an alternative decision $q'$ if the robustness of $q$ is greater than the robustness of $q'$ at the same value of critical reward $r_{c}$.
...
...
Let $Q$ be the set of all available or feasible decision vectors $q$. A robust-satisficing decision is one which maximizes the robustness over the set $Q$ of available $q$-vectors and satisfices the performance at the critical level $r_{c}$.
Ben-Haim (2006, p. 45)

The robustness function is based on a satisficing performance requirement. When operating under severe uncertainty, a decision which is guaranteed to achieve an acceptable outcome throughout a large range of uncertain realizations is preferable to a decision which can fail to achieve an acceptable outcome even under small error. In this way the robustness function generates preferences among available decisions. When choosing between two options, the robust-satisficing decision strategy selects the more robust option.
Ben-Haim (2010, p. 8)

The opportuneness function generates preferences among the available decisions. These preferences may not agree with the preferences generated by the robustness function. When considering the choice between two options, the opportune-windfalling decision strategy chooses the more opportune strategy, recognizing that it may be less robust.
Ben-Haim (2010, p. 8)

Goals which are satisficed (sub-optimal but good enough) can be achieved by many alternative policies. Choose the most robust from among these alternatives.
Ben-Haim (2010, p. 10)

Surely, Hall et al.'s (2012) blatant distortion of what IGDT actually does cannot be an accident. The question is then: what is its object? Because, to date, IGDT's robust-satisficing strategy has been presented as a method that ranks decisions according to their robustness: the larger the robustness, the better. Similarly, the opportune-windfalling strategy has been presented as a method that ranks decisions according to their opportuneness: the smaller the opportuneness, the better. Hence, according to (what is known to the public as) IGDT's robust-satisficing strategy, the best decision is the one whose robustness is the largest (for the desired level of performance, $r_{c}$). Similarly, according to (what is known to the public as) IGDT's opportune-windfalling strategy, the best decision is the one whose opportuneness is the smallest (for the desired level of performance, $r_{w}$).

Are we to conclude from Hall et al. (2012) that this is no longer the case?

I suspect that this distortion may well be another attempt at a quick fix, aimed at getting around IGDT's fundamental ills. And this, by the way, is the norm in the IGDT literature: a repackaging of the rhetoric as a means of glossing over the fundamental ills of the theory itself! (see Review 2-2022).

Remarks

The missing performance (reward) function

On Fri, 10 June 2011 16:42:20 +1000 (EST), I requested from one of the co-authors details on the performance function of IGDT's robustness model used in Hall et al. (2012) because this function was not specified for the numerical examples presented in the article. The request was acknowledged and was forwarded to another author. On Wed, 13 July 2011 11:17:16 +0100 I was informed by the first author that the reward function was not specified in the article. The rewards were generated by a computer program (MLK DICE) that was reported on elsewhere.

I find this most surprising because the uncertain parameter under consideration (call it $u$) takes only 2662 values; that is, the uncertainty space is discrete and contains 2662 distinct values of $u$. This means that for each of the four strategies under consideration there are only 2662 possible rewards. The implication is that the reward function could easily be specified completely (online) in a relatively small spreadsheet, enabling interested users to download and examine it.

But more than this, in view of the fact that the uncertainty space is discrete and manifestly small, determining robustness is trivial --- it can easily be done by enumeration. Indeed, given the size of this discrete uncertainty space, it is hard to comprehend why the authors conducted no more than a local robustness analysis! For the same money, they could easily have performed a global analysis (over the entire uncertainty space) and come up with far more meaningful results.
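To illustrate, over a discrete space of 2662 values both the local radius and a global summary are a few lines of code (everything here is hypothetical: random rewards stand in for the unpublished reward table, and the strategy labels are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_u, n_strategies = 2662, 4
rewards = rng.normal(size=(n_strategies, n_u))   # hypothetical reward table
dist = rng.uniform(0.0, 1.0, size=n_u)           # distance of each u from u_est
r_c = -0.5                                       # hypothetical critical reward

ok = rewards >= r_c                              # constraint satisfaction table
for s in range(n_strategies):
    bad = dist[~ok[s]]
    local = bad.min() if bad.size else np.inf    # IGDT radius around the estimate
    print(f"strategy {s}: IGDT radius = {local:.4f}, "
          f"global coverage = {ok[s].mean():.1%}")
# Enumerating all 2662 values takes microseconds; nothing about the size of
# the problem forces the analysis to remain local.
```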

Regrettably, the article does not provide such a spreadsheet, so it is impossible to check/reconstruct the reported results.

Summary and conclusions

Readers interested in the "local vs global" robustness issue may wish to surf to the old robustness directory. Readers interested in "Voodoo Decision Making", may wish to surf to the old voodoo decision making directory.


Disclaimer: This site, its contents and style, are the responsibility of its owner and author, and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.