# Judgment Versus Actuarial Approaches to Prediction {#sec-judgmentVsActuarial}
## Getting Started {#sec-judgmentVsActuarialGettingStarted}
### Load Packages {#sec-judgmentVsActuarialLoadPackages}
```{r}
```
## Approaches to Prediction {#sec-approachesToPrediction}
There are two primary approaches to prediction: human judgment and the actuarial (i.e., statistical) method.
### Human Judgment {#sec-humanJudgment}
Using the judgment method of prediction, all of the gathered information is combined into a prediction in the person's mind.
The person selects, measures, and combines information and produces projections solely according to their experience and judgment.
For instance, a proclaimed "fantasy expert" might use their experience, expertise, and judgment to make a prediction about how each player will perform by using whatever information and data they deem to be important, aggregating all of this information in their mind to make the prediction for each player.
Professional scouts and coaches use judgment when making predictions or selecting players based on their impressions of the players [@DenHartigh2018].
As an example in popular media, in the movie "Trouble with the Curve," a professional scout makes a judgment about how well a baseball hitter will do in the major leagues based on his impressions of the hitter's ability, including the sound of the ball off the player's bat.
### Actuarial/Statistical Method {#sec-actuarialPrediction}
In the actuarial or statistical method of prediction, information is gathered and combined systematically in an evidence-based statistical prediction formula.
The method is based on equations and data, so both are needed.
An example of a statistical method of prediction is the Violence Risk Appraisal Guide [@Rice2013].
The Violence Risk Appraisal Guide is used in an attempt to predict violence and to inform parole decisions.
For instance, the equation might be something like @eq-actuarial:
$$
\scriptsize
\text{violence risk} = \beta_1 \cdot \text{conduct disorder} + \beta_2 \cdot \text{substance use} + \beta_3 \cdot \text{suspended from school} + \beta_4 \cdot \text{childhood aggression} + ...
$$ {#eq-actuarial}
Then, based on their score and the established cutoffs, a person is given a "low risk", "medium risk", or "high risk" designation.
An actuarial formula for projecting a Running Back's rushing yards might be something like @eq-actuarialFantasy:
$$
\scriptsize
\text{rushing yards} = \beta_1 \cdot \text{rushing yards last season} + \beta_2 \cdot \text{age} + \beta_3 \cdot \text{injury history} + \beta_4 \cdot \text{strength of offensive line} + ...
$$ {#eq-actuarialFantasy}
The beta weights in the actuarial model reflect the relative weight to assign each predictor.
For instance, in predicting rushing yards, a player's historical performance is likely the strongest predictor, whereas injury history might be a relatively weaker predictor.
Thus, we might give historical performance a beta of 3 and injury history a beta of 1 to give a player's historical performance three times more weight than the player's injury history in predicting their rushing yards.
For generating the actuarial model, you could obtain the beta weights for each predictor from [multiple regression](#sec-multipleRegression), from [machine learning](#sec-machineLearning), or from prior research on the relative importance of each predictor.
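For instance, here is a minimal sketch in R of how one might estimate the beta weights in @eq-actuarialFantasy using [multiple regression](#sec-multipleRegression); the data frame (`rbData`) and variable names are hypothetical placeholders rather than data provided with this chapter:

```{r}
#| eval: false

# Hypothetical sketch: estimate the beta weights in the rushing-yards formula
# from historical data using multiple regression (lm); `rbData` and its
# variables are placeholders, not data supplied with this chapter
rushingYardsModel <- lm(
  rushingYards ~ rushingYardsLastSeason + age + injuryHistory + offensiveLineStrength,
  data = rbData
)

coef(rushingYardsModel)    # estimated beta weights for each predictor
predict(rushingYardsModel) # actuarial predictions of rushing yards
```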
As an example of using the actuarial approach, Billy Beane, who was the general manager of the Oakland Athletics at the time, wanted to find ways for his team—which had fewer financial resources than its competitors—to compete with teams that had more money to sign players.
Because the team did not have the resources to sign the best players, they had to find other ways to identify the optimal players they could afford.
So, he used statistical formulas that weight variables, such as on-base percentage and slugging percentage, according to their value for winning games, in order to select players.
His approach became well-known based on the Michael Lewis book, "Moneyball: The Art of Winning an Unfair Game", and the eventual movie, "Moneyball".
### Combining Human Judgment and Statistical Algorithms {#sec-combiningHumanJudgmentActuarialPrediction}
There are numerous ways in which humans and statistical algorithms could be involved in making predictions.
On one extreme, humans make all judgments.
On the other extreme, although humans may be involved in data collection, a statistical formula makes all decisions based on the input data, consistent with an actuarial approach.
However, the human judgment and actuarial approaches can be combined in a hybrid way [@Dana2006].
For example, to save time and money, a clinical psychologist might use an actuarial approach in all cases, but might only use a judgment approach when the actuarial approach gives a "positive" test.
Or, the clinical psychologist might use both human judgment and an actuarial approach independently to see whether they agree.
That is, the clinician may make a prediction based on their judgment and might also generate a prediction from an actuarial approach.
The challenge is what to do when the human and the algorithm disagree.
Hypothetically, humans reviewing and adjusting the results from the statistical algorithm could lead to more accurate prediction.
However, human input also could lead to the possibility or exacerbation of biased predictions.
In general, with very few exceptions, actuarial approaches are as accurate or more accurate than "expert" judgment [@Dawes1989; @Aegisdottir2006; @Baird2000; @Grove1996; @Grove2000].
This is also likely true with respect to predicting player performance in sports [@DenHartigh2018].
Moreover, the superiority of actuarial approaches to human judgment tends to hold even when the expert is given more information than the actuarial approach [@Dawes1989].
In addition, actuarial predictions outperform human judgment even when the human is given the result of the actuarial prediction [@Kahneman2011].
Allowing experts to override actuarial predictions consistently leads to lower predictive accuracy [@Garb2019].
There is sometimes a misconception that formulas cannot account for qualitative information.
However, that is not true.
Qualitative information can be scored or coded to be quantified so that it can be included in statistical formulas.
For instance, if an expert scout is able to meaningfully assess a player's [cognitive and motivational factors](#sec-evalCogMotivational) (i.e., the "[X factor](#sec-evalCogMotivational)" or "[intangibles](#sec-evalCogMotivational)"), the scout can score this across multiple players and include these data in the actuarial prediction formula.
For instance, the scout could use a rating scale (e.g., 1 = "poor"; 2 = "fair"; 3 = "good"; 4 = "very good"; 5 = "excellent") to code (i.e., translate) their qualitative judgment into a quantifiable rating that can be integrated with other information in the actuarial formula.
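As a minimal sketch of this coding process in R (the scout ratings below are hypothetical):

```{r}
#| eval: false

# Hypothetical sketch: translate a scout's qualitative ratings into numeric
# scores that can be entered into an actuarial formula
scoutRatings <- c(
  "Player A" = "very good",
  "Player B" = "fair",
  "Player C" = "excellent"
)

ratingLevels <- c("poor", "fair", "good", "very good", "excellent")

# Convert the ordered labels to numeric scores (1 = "poor"; ...; 5 = "excellent")
intangiblesScore <- as.integer(factor(scoutRatings, levels = ratingLevels))

intangiblesScore # 4, 2, 5—now usable alongside other predictors in the formula
```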
That said, the quality of predictions rests on the quality and relevance of the assessment information for the particular prediction decision.
If the assessment data are lousy, it is unlikely that a statistical algorithm (or a human for that matter) will make an accurate prediction:
"Garbage in, garbage out".
A statistical formula cannot rescue inaccurate assessment data.
## Errors in Human Judgment {#sec-errorsInHumanJudgment}
Human judgment is naturally subject to errors.
Common [heuristics](#sec-heuristics), [cognitive biases](#sec-cognitiveBiases), and [fallacies](#sec-fallacies) are described in @sec-cognitiveBias.
Below, I describe a few errors to which human judgment seems particularly prone.
When operating freely, clinicians and medical experts (and humans more generally) tend to overestimate exceptions to the established rules (i.e., the broken leg syndrome).
@Meehl1957 acknowledged that there may be some situations where it is glaringly obvious that the statistical formula would be incorrect because it fails to account for an important factor.
He called these special cases "broken leg" cases, in which the human should deviate from the formula (i.e., broken leg countervailing).
The example goes like this:
> If a sociologist were predicting whether Professor X would go to the movies on a certain night, he might have an equation involving age, academic specialty, and introversion score.
> The equation might yield a probability of .90 that Professor X goes to the movie tonight.
> But if the family doctor announced that Professor X had just broken his leg, no sensible sociologist would stick with the equation.
> Why didn't the factor of 'broken leg' appear in the formula?
> Because broken legs are very rare, and in the sociologist's entire sample of 500 criterion cases plus 250 cross-validating cases, he did not come upon a single instance of it.
> He uses the broken leg datum confidently, because 'broken leg' is a subclass of a larger class we may crudely denote as 'relatively immobilizing illness or injury,' and movie-attending is a subclass of a larger class of 'actions requiring moderate mobility.'
>
> --- Meehl [-@Meehl1957, p. 269–270]
However, people too often think that cases where they disagree with the statistical algorithm are broken leg cases.
People too often think their case is an exception to the rule.
As a result, they too often change the result of the statistical algorithm and are more likely to be wrong than right in doing so.
Because actuarial methods are based on actual population levels (i.e., [base rates](#sec-baseRates)), unique exceptions are not overestimated.
Actuarial predictions are perfectly [reliable](#sec-reliability)—they will always return the same conclusion given an identical set of data.
The human judge is likely to disagree both with others and with themselves when given the same set of symptoms.
The decision by an expert (and by humans in general) is likely to be influenced by past experiences.
Actuarial methods are based on objective algorithms, and past personal experience and personal biases do not factor into any decisions.
Humans give weight to less relevant information, and often give too much weight to singular variables.
Actuarial formulas do a better job of focusing on relevant variables.
Computers are good at factoring in [base rates](#sec-baseRates).
Humans ignore [base rates](#sec-baseRates) ([base rate neglect](#sec-fallaciesBaseRate)).
Computers are better at accurately weighing predictors and calculating unbiased risk estimates.
In an actuarial formula, the relevant predictors are weighted according to their predictive power.
Humans are typically given no feedback on their judgments.
To improve accuracy of judgments, it is important for feedback to be clear, consistent, and timely.
Intuition is a form of recognition-based judgment (i.e., recognizing cues that provide access to information in memory).
Development of strong intuition depends on the quality and speed of feedback, in addition to having adequate opportunities to practice [i.e., sufficient opportunities to learn the cues; @Kahneman2011].
The quality and speed of the feedback tend to benefit anesthesiologists who often quickly learn the results of their actions.
By contrast, radiologists tend not to receive quality feedback about the accuracy of their diagnoses, including their [false-positive](#sec-falsePositive) and [false-negative](#sec-falseNegative) decisions [@Kahneman2011].
In general, many so-called experts are "pseudo-experts" who do not know the boundaries of their competence—that is, they do not know what they do not know; they have the illusion of validity of their predictions and are [overconfident](#sec-cognitiveBiasesOverconfidence) about their predictions [@Kahneman2011].
Yet, many people arrogantly proclaim to have predictive powers, including in low-validity environments such as fantasy football.
Indeed, pundits are more likely to be television guests if they are opinionated, clear, and (overly) confident and make big, bold predictions, because they are more entertaining and their predictions seem more compelling [even though they tend to be less accurate than individuals whose thinking is more complex and less decisive; @Kahneman2011; @Silver2012].
Consider sports pundits like Stephen A. Smith and Skip Bayless who make bold predictions with uber confidence.
[Optimism](#sec-cognitiveBiasesOptimism) and [(over)confidence](#sec-cognitiveBiasesOverconfidence) are valued by society [@Kahneman2011].
Nevertheless, true experts know their limits in terms of knowledge and ability to predict.
::: {.content-visible when-format="html:js"}
Here is a video of sports pundits, Stephen A. Smith and Skip Bayless, making bold statements and incorrect predictions:
![Video of Sports Pundits, Stephen A. Smith and Skip Bayless, Making Bold Statements and Incorrect Predictions. From: <https://www.youtube.com/watch?v=lTjBuEPcLlc>.](images/sportsPundits.mp4){width=100%}\
:::
Intuitions tend to be skilled when a) the environment is regular and predictable, and b) there is opportunity to learn the regularities, cues, and contingencies through extensive practice [@Kahneman2011].
Example domains that meet these conditions supporting intuition include activities such as chess, bridge, and poker, and occupations such as medical providers, athletes, and firefighters.
By contrast, fantasy football and other domains such as stock-picking, clinical psychology, and other long-term forecasts are low-validity environments that are irregular and unpredictable.
In environments that do not have stable regularities, intuition cannot be trusted [@Kahneman2011].
## Humans Versus Computers {#sec-humansVsComputers}
### Advantages of Computers {#sec-advantagesOfComputers}
Here are some advantages of computers over humans, including "experts":
- Computers can process lots of information simultaneously.
Humans can, too, but computers can do so to an even greater degree.
- Computers are faster at making calculations.
- Given the same input, a formula will give the exact same result every time.
Humans' judgment tends to be inconsistent, both across raters and within a rater across time, when trying to make judgments or predictions from complex information [@Kahneman2011].
As noted in @sec-reliabilityVsValidity, [reliability](#sec-reliability) sets the upper bound for [validity](#sec-validity), so unreliable judgments cannot be accurate (i.e., valid).
- Computations by computers are error-free (as long as the computations are programmed correctly).
- Computers' judgments will not be biased by fatigue or emotional responses.
- Computers' judgments will tend not to be biased in the way that humans' [cognitive biases](#sec-cognitiveBias) are.
Computers are less likely to be [overconfident](#sec-cognitiveBiasesOverconfidence) in their judgments.
- Computers can more accurately weight the set of predictors based on large data sets.
Humans tend to give too much weight to singular predictors.
Experts may attempt to be clever and to consider complex combinations of predictors, but doing so often reduces validity [@Kahneman2011].
Simple combinations of predictors often outperform more complex combinations [@Kahneman2011].
### Advantages of Humans {#sec-advantagesOfHumans}
Computers are bad at some things too.
Here are some advantages of humans over computers (as of now):
- Humans can be better at identifying patterns in data (but also can mistakenly identify patterns where there are none—i.e., illusory correlation).
- Humans can be flexible and take a different approach if a given approach is not working.
- Humans are better at tasks requiring creativity and imagination, such as developing theories that explain phenomena.
- Humans have the ability to reason, which is especially important when dealing with complex, abstract, or open-ended problems, or problems that have not been faced before (or for which we have insufficient data).
- Humans are better able to learn.
- Humans are better at holistic, gestalt processing, including facial and linguistic processing.
There *may* be situations in which a human judgment would do better than an actuarial judgment.
One situation where human judgment would be important is when no actuarial method exists for the judgment or prediction.
For instance, when no actuarial method exists for a given diagnosis or outcome (e.g., suicide), the prediction is up to the clinician.
However, we could collect data on the outcomes or on clinicians' judgments to develop an actuarial method that will be more reliable than the clinicians' judgments.
That is, an actuarial method developed based on clinicians' judgments will be more accurate than clinicians' judgments.
In other words, we do not necessarily need outcome data to develop an actuarial method.
We could use the client's data as predictors of the clinicians' judgments to develop a structured approach to prediction that weighs factors similarly to clinicians, but with more [reliable](#sec-reliability) predictions.
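As a rough sketch of this "model of the judge" approach in R, assuming hypothetical data on clients and clinicians' risk ratings:

```{r}
#| eval: false

# Hypothetical sketch: model clinicians' own judgments ("bootstrapping the
# judge") when outcome data are unavailable; `clientData`, `newClients`, and
# the variables are placeholders
judgeModel <- lm(
  clinicianRiskRating ~ priorAttempts + hopelessness + socialSupport,
  data = clientData
)

# The fitted model applies the clinicians' implicit weights consistently,
# yielding perfectly reliable (reproducible) predictions for new clients
predict(judgeModel, newdata = newClients)
```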
Another situation in which human judgment could outperform a statistical algorithm is in true "broken leg" cases, e.g., important and rare events (edge cases) that are not yet accounted for by the algorithm.
Another situation in which human judgment could be preferable is if advanced, complex theories exist.
Computers have a difficult time adhering to complex theories, so clinicians may be better suited.
However, we do not have any of these complex theories in psychology that are accurate.
We would need strong theory informed by data regarding causal influences, and [accurate](#sec-validity) measures to assess them.
However, no theories in psychology are that good.
Nevertheless, predictive accuracy can be improved when considering theory [@Silver2012; @Garb2019].
If the prediction requires complex configural relations that a computer would have a difficult time replicating, a clinician's judgment may be preferred.
Although it is theoretically possible for a person to accurately work through such complex relations, it is highly unlikely.
Holistic pattern recognition (such as language and faces) tends to be better by humans than computers.
But computers are getting better with holistic pattern recognition through machine learning.
In sum, the human seeks to integrate information to make a decision, but is biased.
### Comparison of Evidence {#sec-evidenceOnHumanJudgmentVsActuarialPrediction}
Hundreds of studies have examined clinical versus actuarial prediction methods across many disciplines.
Findings consistently show that actuarial methods are as [accurate](#sec-validity) or more [accurate](#sec-validity) than human judgment/prediction methods.
"There is no controversy in social science that shows such a large body of qualitatively diverse studies coming out so uniformly...as this one" [@Meehl1986, pp. 373–374].
Actuarial methods are particularly valuable for criterion-referenced assessment tasks, in which the aim is to predict specific events or outcomes [@Garb2019].
For instance, actuarial methods have shown promise in predicting violence, criminal recidivism, psychosis onset, course of mental disorders, treatment selection, treatment failure, suicide attempts, and suicide [@Garb2019].
Actuarial methods are especially important to use in low-validity environments (like fantasy football) in which there is considerable uncertainty and unpredictability [@Kahneman2011].
Moreover, actuarial methods are explicit; they can be transparent and lead to informed scientific criticism to improve them.
By contrast, human judgment methods are not typically transparent; human judgment relies on mental processes that are often difficult to specify.
## Why Judgment is More Widely Used Than Statistical Formulas {#sec-whyHumanJudgmentIsMoreWidelyUsed}
Despite actuarial methods being generally more accurate than human judgment, judgment is much more widely used by clinicians.
There are several reasons why actuarial methods have not caught on; one reason is professional traditions.
Experts in any field do not like to think that a computer could outperform them.
Some practitioners argue that judgment/prediction is an "art form" and that using a statistical formula is treating people like a number.
However, using an approach (i.e., human judgment) that systematically leads to less [accurate](#sec-validity) decisions and predictions is an ethical problem.
Some clinicians do not think that group averages (e.g., in terms of which treatment is most effective) apply to an individual client.
This invokes the distinction between nomothetic (group-level) inferences and idiographic (individual-level) inferences.
However, the scientific evidence and probability theory strongly indicate that it is better to generalize from group-level evidence than to throw out all the evidence and take an approach of "anything goes."
Clinicians frequently fall prey to the broken leg fallacy, i.e., thinking that their client is an exception to the algorithmic prediction.
In most cases, deviating from the statistical formula will result in less [accurate](#sec-validity) predictions.
People tend to overestimate the probability of low [base rate](#sec-baseRates) conditions and events.
Another reason why actuarial methods have not caught on is the belief that receiving a treatment is the only thing that matters.
But it is an empirical question which treatment is most effective for whom.
What if we could do better?
For example, we could potentially use a formula to identify the most effective treatment for a client.
Some treatments are no better than placebo; other treatments are actually harmful [@Lilienfeld2007; @Williams2021].
Another reason why judgment methods are more widely used than actuarial methods is that so-called "experts" (and people in general) show [overconfidence](#sec-cognitiveBiasesOverconfidence) in their predictions—clinicians, experts, and humans in general think they are more accurate than they actually are.
We see this when examining their calibration; their predictions tend to be miscalibrated.
For example, things they report with 80% confidence occur less than 80% of the time, an example of [overprecision](#sec-cognitiveBiasesOverconfidence) in their predictions.
Humans will sometimes be correct by chance, and they tend to mis-attribute that to their skill; humans tend to remember the successes and forget the failures.
Another argument against using actuarial methods is that "no methods exist".
In some cases, that is true—actuarial methods do not yet exist for some prediction problems.
However, one can always create an algorithm of the experts' judgments, even if one does not have access to the outcome information.
A model of clinicians' responses tends to be more [accurate](#sec-validity) than clinicians' judgments themselves because the model gives the same outcome with the same input data—i.e., it is perfectly [reliable](#sec-reliability).
Another argument from some clinicians is that, "My job is to understand, not to predict".
But what kind of understanding does not involve predictions?
Accurate predictions help in understanding.
Knowing how people would perform in different conditions is the same thing as good understanding.
## Steps to Conduct Actuarial Approaches {#sec-stepsActuarial}
Here are several steps to conduct actuarial approaches [@DenHartigh2018; @Kahneman2011]:
1. Determine a set of relevant variables to measure
1. Determine how you will combine the variables
- Do some variables have more weight than other variables?
1. Determine how the variables will be scored
    - e.g., a 7-point Likert scale
1. Combine the scores based on the pre-specified formula
1. Use the final score to make your prediction (for selecting players), as illustrated in the sketch below
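Here is a minimal sketch of these steps in R; the players, scores, and weights are made up for illustration:

```{r}
#| eval: false

# Hypothetical sketch of the steps above: pre-specified variables, scoring,
# and weights are combined into a single actuarial score used to rank players
players <- data.frame(
  player         = c("Player A", "Player B", "Player C"),
  historicalPerf = c(6, 4, 7), # scored on a 1-7 scale
  injuryHistory  = c(5, 7, 3), # higher = healthier; 1-7 scale
  intangibles    = c(4, 6, 5)  # scout rating; 1-7 scale
)

# Step 2: pre-specified weights (historical performance weighted most heavily)
weights <- c(historicalPerf = 3, injuryHistory = 1, intangibles = 1)

# Step 4: combine the scores based on the pre-specified formula
players$actuarialScore <- drop(as.matrix(players[, names(weights)]) %*% weights)

# Step 5: use the final score to rank and select players
players[order(players$actuarialScore, decreasing = TRUE), ]
```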
## Challenges of Data-Driven Approaches and How to Address Them {#sec-challengesDataDrivenApproaches}
There are various challenges of data-driven approaches.
First, they are sometimes not interpretable or consistent with theory.
Second, they tend to [overfit](#sec-overfitting) the data.
[Overfitting](#sec-overfitting) is described in @sec-overfitting.
Third, as a result of [overfitting](#sec-overfitting) the data, they tend to show [shrinkage](#sec-shrinkage).
### Shrinkage {#sec-shrinkage}
In general, there is often shrinkage of estimates from a training data set to a test data set.
*Shrinkage* is when variables with stronger predictive power in the original data set tend to show somewhat smaller predictive power (smaller regression coefficients) when applied to new groups.
Shrinkage reflects a model [overfitting](#sec-overfitting)—i.e., when the model explains error variance by capitalizing on chance.
Shrinkage is especially likely when the original sample is small and/or unrepresentative and the number of variables considered for inclusion is large.
To help minimize the extent of shrinkage, it is recommended to apply [cross-validation](#sec-crossValidation).
### Cross-Validation {#sec-crossValidation}
Cross-validation with large, representative samples can help evaluate the amount of shrinkage of estimates, particularly for more complex models such as [machine learning](#sec-machineLearning) models [@Ursenbach2019].
Ideally, cross-validation would be conducted with a separate sample (external cross-validation) to see the generalizability of estimates.
However, you can also do internal cross-validation.
For example, you can perform *k*-fold cross-validation, where you:
- split the data set into *k* groups
- for each unique group:
- take the group as a hold-out data set (also called a test data set)
- take the remaining groups as a training data set
- fit a model on the training data set and evaluate it on the test data set
- after all *k* folds have been used as the test data set, and all models have been fit, you average the estimates across the models, which presumably yields more robust, generalizable estimates (a minimal example follows this list)
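Here is a minimal sketch of *k*-fold cross-validation in R; `mydata`, the outcome, and the predictors are placeholders:

```{r}
#| eval: false

# Hypothetical sketch of k-fold cross-validation "by hand"; `mydata` and the
# model formula are placeholders
set.seed(52242)

k <- 5
folds <- sample(rep(1:k, length.out = nrow(mydata))) # assign each row to a fold

cvResults <- lapply(1:k, function(i) {
  trainData <- mydata[folds != i, ] # training data: all folds except fold i
  testData  <- mydata[folds == i, ] # hold-out (test) data: fold i
  fit <- lm(outcome ~ predictor1 + predictor2, data = trainData)
  
  list(
    coefs = coef(fit), # estimates from this fold's model
    rmse  = sqrt(mean((testData$outcome - predict(fit, newdata = testData))^2)) # out-of-sample error
  )
})

# Average the estimates across the k models
colMeans(do.call(rbind, lapply(cvResults, `[[`, "coefs")))
mean(sapply(cvResults, `[[`, "rmse"))
```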
## Best Actuarial Approaches to Prediction {#sec-bestActuarialApproaches}
The best actuarial models tend to be relatively simple (parsimonious), accounting for one or several of the most important predictors and their optimal weightings, and accounting for the [base rate](#sec-baseRates) of the phenomenon.
[Multiple regression](#sec-multipleRegression) and/or prior literature can be used to identify the weights of various predictors.
Even unit-weighted formulas (formulas whose [predictor variables](#sec-correlationalStudy) are equally weighted with a weight of one) can sometimes generalize better to other samples than complex weightings [@Garb2019; @Kahneman2011].
Differential weightings sometimes capture random variance and [over-fit](#sec-overfitting) the model, thus leading to predictive accuracy [shrinkage](#sec-shrinkage) in [cross-validation](#sec-crossValidation) samples [@Garb2019], as described in @sec-shrinkage.
The choice of [predictor variables](#sec-correlationalStudy) often matters more than their weighting.
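As a minimal sketch in R, a unit-weighted composite can be formed by standardizing each predictor and summing them with equal weights (`mydata` and the predictor names are placeholders):

```{r}
#| eval: false

# Hypothetical sketch of a unit-weighted formula: standardize each predictor
# and give each a weight of one; `mydata` and the predictors are placeholders
predictors <- c("predictor1", "predictor2", "predictor3")

mydata$unitWeightedScore <- rowSums(scale(mydata[, predictors]))
```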
An emerging technique that holds promise for increasing predictive accuracy of actuarial methods is [machine learning](#sec-machineLearning) [@Garb2019].
However, one challenge of some [machine learning](#sec-machineLearning) techniques is that they are like a "black box" and are not transparent, which raises ethical concerns [@Garb2019].
Moreover, [machine learning](#sec-machineLearning) also tends to lead to [overfitting](#sec-overfitting) and [shrinkage](#sec-shrinkage).
[Machine learning](#sec-machineLearning) may be most valuable when the data available are complex and there are many [predictor variables](#sec-correlationalStudy) [@Garb2019], and when the model is validated with [cross-validation](#sec-crossValidation).
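As a minimal sketch of one machine learning approach in R, a cross-validated penalized (lasso) regression can be fit with the `glmnet` package; the predictor matrix `x`, outcome `y`, and new data `xNew` are placeholders:

```{r}
#| eval: false

# Hypothetical sketch: cross-validated lasso regression as one machine
# learning approach that guards against overfitting; assumes the glmnet
# package is installed; `x`, `y`, and `xNew` are placeholders
library("glmnet")

cvFit <- cv.glmnet(x = x, y = y, alpha = 1) # alpha = 1 specifies the lasso

coef(cvFit, s = "lambda.min")                 # predictor weights at the best-fitting penalty
predict(cvFit, newx = xNew, s = "lambda.min") # predictions for new data
```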
## Conclusion {#sec-conclusion-actuarial}
In general, it is better to develop and use structured, actuarial approaches than informal approaches that rely on human judgment or judgment by "so-called" experts.
Actuarial approaches to prediction tend to be as accurate or more accurate than expert judgment.
Nevertheless, in many domains, human judgment tends to be much more widely used than actuarial approaches.
::: {.content-visible when-format="html"}
## Session Info {#sec-judgmentVsActuarialSessionInfo}
```{r}
sessionInfo()
```
:::