Abstract
Modeling decision-making under uncertainty typically relies on quantitative outcomes. Many decisions, however, are qualitative in nature, posing problems for traditional models. Here, we aimed to model uncertainty attitudes in decisions with qualitative outcomes. Participants made choices between certain outcomes and the chance for more favorable outcomes in quantitative (monetary) and qualitative (medical) modalities. Using computational modeling, we estimated the values participants assigned to qualitative outcomes and compared uncertainty attitudes across domains. Our model provided a good fit for the data, including quantitative estimates for qualitative outcomes. The model outperformed a utility function in quantitative decisions. Additionally, we found an association between ambiguity attitudes across domains. Results were replicated in an independent sample. We demonstrate the ability to extract quantitative measures from qualitative outcomes, leading to better estimation of subjective values. This allows for the characterization of individual behavior traits under a wide range of conditions.
Author summary
In the current study, we explored how people make decisions when the outcomes are not easily measured in numbers, such as with choices between medical treatments. Traditional mathematical models, which rely on numerical data, are not designed to handle such decisions, leading to a gap in understanding how people evaluate these qualitative outcomes. Using hierarchical Bayesian modeling, we developed a model that bridges this gap by translating qualitative outcomes into individualized quantitative values, enabling us to understand the underlying decision-making processes better. Our model not only provides a better fit to laboratory data than existing models with qualitative or quantitative outcomes but also allows for meaningful comparisons of how people handle uncertainty across different decision-making scenarios. This approach opens new doors for studying decision-making in areas where traditional methods struggle, offering a more nuanced view of human behavior in complex situations.
Citation: Korem N, Duek O, Jia R, Wertheimer E, Metviner S, Grubb M, et al. (2025) Modeling decision-making under uncertainty with qualitative outcomes. PLoS Comput Biol 21(3): e1012440. https://doi.org/10.1371/journal.pcbi.1012440
Editor: Varun Dutt, Indian Institute of Technology Mandi - Kamand Campus: Indian Institute of Technology Mandi, INDIA
Received: August 23, 2024; Accepted: February 5, 2025; Published: March 3, 2025
Copyright: © 2025 Korem et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data and code are available on the authors' GitHub: https://github.com/KoremNSN/QualMod https://github.com/KoremNSN/QualMod/tree/main/data.
Funding: This study was supported by the Yale Claude D. Pepper Older Americans Independence Center (c), and by NIH grants R21AG049293, R56AG058769, and NSF grant BCS1829439 to IL. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Life is a series of decisions where most outcomes are uncertain. These decisions range from trying a new dish at our favorite restaurant to selecting a life-saving medical treatment. Often, these decisions are quantitative in nature; for example, buying a lottery ticket for $2 with a 1 in 300 million chance of winning $21 million. Many decisions, however, are qualitative, such as choosing between your regular coffee or paying more for premium coffee beans. The ability to compare different qualitative outcomes implies that we can derive some form of comparable subjective value for these outcomes [1]. Here, we aimed to quantify how qualitative outcomes affect uncertainty attitudes.
When presented quantitatively, uncertainty around choice outcomes can be categorized into two components: risk and ambiguity. Risk occurs when the probabilities of potential outcomes are precisely known [2]; ambiguity refers to situations where these probabilities are partially or entirely unknown [3]. Prior research has shown that individuals generally exhibit an aversion to both risk and ambiguity in scenarios involving potential gains [4–7], but that these attitudes vary substantially across individuals and are not strongly correlated with each other [7–13]. Many studies have characterized these attitudes using choices with monetary (or point) outcomes [5,8,10,12,14–16]. While self-report questionnaires have examined risk-taking across various domains [17], only a few studies quantified individual attitudes in non-monetary domains. Importantly, these studies still employed quantitative outcomes, including numbers of M&Ms and milliliters of water [1], months of extended lifespan [18], or milligrams of medication and minutes spent with social partners [19]. Although some earlier studies did examine qualitative decision-making [20,21], there is still no straightforward way to model these kinds of decisions. Current models, such as cumulative prospect theory [22], are not practically equipped to handle qualitative outcomes. As a result, such decisions are typically not addressed in neuroimaging and psychiatric research.
In this study, we estimated risk and ambiguity attitudes in two separate modalities: quantitative (monetary decisions) and qualitative (medical decisions), leveraging computational modeling to extract values from qualitative outcomes and examine how uncertainty attitudes influence decision-making across various domains. Sixty-six in-person and 332 online participants engaged in a task of decision-making under uncertainty (see Table 1 for demographics), where they made choices between a certain outcome and the chance for a more favorable outcome. Our objectives were to (1) estimate subjective values for a range of qualitative outcomes, (2) assess the model’s fit using quantitative outcomes for comparison, and (3) explore how attitudes towards uncertainty vary across different domains. Importantly, uncertainty in both domains was presented quantitatively (e.g., a 50% chance) rather than qualitatively (e.g., high chance). This allowed us to examine the subjective valuation of qualitative outcomes independently from qualitative risk representations.
Results
To test our modeling approach to decision-making with qualitative outcomes, we analyzed choice data from 398 participants over 2 experiments (Table 1). Participants made a series of choices about risky and ambiguous options, which varied in their potential outcomes, the likelihood of obtaining the outcomes, and the level of ambiguity around these likelihoods (see Methods). The task was similar to a task used in multiple previous studies [12], with one critical difference: instead of the monetary outcomes of the original task, here, participants were presented with a hypothetical medical scenario (see Fig 1 and S1 Text) and had to make choices among different available treatments. The potential outcome of each treatment was described verbally (Methods) with no quantitative information (Note, however, that the uncertainty levels associated with each outcome were presented quantitatively). To allow for model validation in a quantitative domain, all participants also made choices about uncertain monetary options (Methods).
Task Design: Participants were presented with choices between an uncertain option and a certain outcome across four scenarios: risky monetary decisions (A), ambiguous monetary decisions (B), risky medical decisions (C), and ambiguous medical decisions (D). In the risky scenarios (A, C), the outcome probabilities were visually represented by red and blue rectangles, and these probabilities were fully disclosed to the participants. In the ambiguous scenarios (B, D), the probability information was partially obscured by a grey rectangle, indicating uncertainty. The outcome probabilities in risky trials were set at 25%, 50%, and 75%, while the levels of ambiguity (indicated by the grey area) were set at 74%, 50%, and 24%. There were four possible outcomes for monetary decisions ($5, $8, $12, and $25) and four potential medical outcomes (E). Each unique pairing of uncertainty and outcome levels was presented to the participants four times.
Modeling approach
We employed hierarchical Bayesian modeling to capture and examine subjects’ choice behavior. In the context of monetary decisions, we used a power utility function to model the subjective value assigned to each option [6] with a linear effect of ambiguity on the perceived probability (Equation 1 – Classic Utility Model) [12]. Hyperpriors were chosen based on previous data [12], with slight risk (mean 0.72) and ambiguity (mean 0.65) aversion (see S2 Text and S1 Table for comparison with less and uninformed hyper-priors). Importantly, the same priors were used for the two independent data sets.
Equation 1 – Classic Utility Model:

SV = (P − β × A/2) × V^α
We modeled the subjective value (SV) using the objective probability (P; for risky options, P = 0.25, 0.5, or 0.75; for ambiguous options P = 0.5, and for the certain gain P = 1), ambiguity level (A; for risky or certain options A= 0; for ambiguous trials A = 0.24, 0.5, or 0.74), and potential winnings (V; 5, 8, 12, 25 dollars for the lotteries and 5 dollars for the sure bet). Risk attitude (α) and ambiguity attitude (β) were incorporated to capture individual differences in risk and ambiguity preferences.
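The subjective-value computation described above can be sketched in a few lines of Python (a minimal illustration assuming the standard form SV = (P − β × A/2) × V^α from [12]; the function name and example parameter values are ours, not from the paper's code):

```python
def subjective_value(p, a, v, alpha, beta):
    """Classic utility model: SV = (P - beta * A/2) * V ** alpha.

    p: objective probability; a: ambiguity level (0 for risky or certain options);
    v: dollar amount; alpha: risk attitude (< 1 = risk averse);
    beta: ambiguity attitude (> 0 = ambiguity averse).
    """
    return (p - beta * a / 2) * v ** alpha

# Example participant at the hyperprior means (alpha = 0.72, beta = 0.65)
sv_sure = subjective_value(1.0, 0.0, 5, 0.72, 0.65)     # certain $5
sv_risky = subjective_value(0.5, 0.0, 12, 0.72, 0.65)   # 50% chance of $12
sv_ambig = subjective_value(0.5, 0.74, 12, 0.72, 0.65)  # same lottery under 74% ambiguity
```

For a risk- and ambiguity-neutral decision maker (alpha = 1, beta = 0) the function reduces to the expected value of the lottery.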
To estimate the probability of choosing the lottery on each trial, we fitted a logistic choice function (equation 2), in which γ is the inverse temperature parameter.
Equation 2:

P(choose lottery) = 1 / (1 + e^(−γ(SV_lottery − SV_certain)))
In addition, we tested a “trembling-hand” logistic choice function (Equation 3), which reduces the dependency between the risk and ambiguity parameters and the slope of the logistic function [23].
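The two choice rules can be sketched as follows (a minimal illustration assuming the standard logistic and lapse-mixture forms; the symbol eps for the trembling-hand rate is our naming):

```python
import math

def p_lottery_logistic(sv_lottery, sv_certain, gamma):
    """Logistic choice function (Equation 2); gamma is the inverse temperature."""
    return 1.0 / (1.0 + math.exp(-gamma * (sv_lottery - sv_certain)))

def p_lottery_trembling(sv_lottery, sv_certain, gamma, eps):
    """Trembling-hand variant (Equation 3): with probability eps the
    participant lapses and responds at random, which decouples the
    logistic slope from the value parameters."""
    return eps / 2.0 + (1.0 - eps) * p_lottery_logistic(sv_lottery, sv_certain, gamma)
```

Because the trembling-hand probability is bounded in [eps/2, 1 − eps/2], occasional lapses no longer force extreme estimates of the slope or of the value parameters.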
Equation 3 - Trembling-Hand Model:

P(choose lottery) = ε/2 + (1 − ε) / (1 + e^(−γ(SV_lottery − SV_certain))), where ε is the lapse (“trembling-hand”) rate
In the medical domain, fitting the utility function is challenging due to the qualitative nature of the outcomes, which lack a quantifiable value (V) term for the model (although uncertainty was still presented quantitatively). To tackle this issue, we extended the model to estimate the subjective value associated with each outcome. Notably, we excluded the risk aversion parameter, as subjective values were individually tailored to each outcome and participant (Equation 4 – Estimated Value Model). Given the ordinal nature of the outcomes, the model employs a truncated normal distribution for each value. See S3 Text, S2 Table, and S1 Equation for a comparison with categorical models.
Equation 4 – Estimated Value Model:

SV = (P − β × A/2) × ν_i, where ν_i = ν_(i−1) + δ_i (with ν_0 = 0) and each increment δ_i is drawn from a truncated normal distribution (δ_i ≥ 0)
In this model, we estimated the subjective value (ν) of each outcome (i) based on the cumulative values of the preceding outcomes. We assumed the outcomes were ordinal (e.g., slight improvement ≤ moderate improvement), and thus, we modeled the subjective value of each outcome as the sum of the subjective values of the preceding outcomes plus an additive value representing the improvement.
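The cumulative construction can be illustrated as follows (a minimal sketch; in the actual hierarchical model each increment has its own truncated-normal prior rather than a fixed value, and the example increments are hypothetical):

```python
def cumulative_values(increments):
    """Build ordinal subjective values: the value of outcome i is the sum
    of the (non-negative) increments of all outcomes up to and including i,
    enforcing v_slight <= v_moderate <= v_major <= v_complete."""
    values, total = [], 0.0
    for delta in increments:
        if delta < 0:
            raise ValueError("increments must be non-negative")
        total += delta
        values.append(total)
    return values

# Hypothetical increments for slight / moderate / major / complete recovery
values = cumulative_values([7.0, 9.0, 7.0, 4.0])  # -> [7.0, 16.0, 23.0, 27.0]
```

Because every increment is constrained to be non-negative, the resulting value sequence is monotonically non-decreasing, which is exactly the ordinal assumption stated above.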
To evaluate the model’s fit, we introduced a baseline model devoid of subject-specific parameters, serving as a ‘straw man’ to establish a reference point for the estimated model’s performance. In the medical domain, we utilized the category level (e.g., 1 for slight improvement, 2 for moderate improvement) as the value (V) in the model (equation 5).
Equation 5 - No-Subjective Parameters:
Extracting quantitative values from qualitative outcomes
We applied the Estimated Value model (equation 4) to the medical decision data to obtain estimated subjective values for the different levels of qualitative outcomes. For comparison, we also applied the No-Subjective Parameters model (equation 5) to the data (see Table 2 for model comparison). Using a Leave-One-Out (LOO) cross-validation method to estimate the out-of-sample predictive fit [24] (Methods), we compared the models. The Estimated Value model demonstrated a superior fit to the data in both samples. Subsequently, we extracted the mean added value for each category assigned by each participant. This approach provided estimates of the added values people attribute to the various outcomes. The mean value for slight improvement was 6.93, with a standard deviation (SD) of 1.79. This value increased by 9.00 (SD 2.62) for moderate improvement, by an additional 7.04 (SD 2.89) for major improvement, and finally by 4.18 (SD 2.41) for complete recovery (Fig 2A). For the online sample, the mean value for slight improvement was 8.63 (SD 2.54), which increased by 12.62 (SD 4.25) for moderate improvement, by an additional 4.66 (SD 2.56) for significant improvement, and finally by an additional 2.37 (SD 1.61) for complete recovery.
The estimated values from the Estimated Value model for each category. Panes A and B show the estimated values for the medical decision-making task (Pane A) and the monetary decision-making task (Pane B) for the in-person sample. Panes C and D show the estimated values in the medical decision-making task (Pane C) and the monetary decision-making task (Pane D) for the online sample. Each colored dot represents an individual participant, while the large black dot indicates the mean estimated value for each category. See S4 Figure for an illustration of the curvature in the monetary domain.
Model validation and comparative analysis
We applied the Estimated Value model to the monetary decision data to facilitate a direct comparison with a classical utility model (equation 1). Using the monetary decision data, we evaluated the effectiveness of the Estimated Value model (equation 4) by comparing it to various models that incorporate an objective estimate of the outcome value. Specifically, using LOO cross-validation, we compared the Estimated Value model against several alternatives: a No-Subjective Parameters model (equation 5), a Classical Utility model (equation 1), and a Classical Utility with a Trembling-Hand choice function model (equation 3). Our analysis revealed that the Estimated Value model outperformed the other models in terms of fit to the data within the monetary domain in both samples (see Table 2). This finding underscores the robustness of the Estimated Value model across different decision-making contexts.
We also derived values for the monetary categories. The mean value for $5 was 7.22 (SD 1.60). This value increased by 4.13 (SD 1.47) for $8, by an additional 6.23 (SD 2.18) for $12, and finally by 8.77 (SD 3.41) for $25 (Fig 2B). For the online sample, the mean value for $5 was 10.91 (SD 2.72), which increased by 4.13 (SD 1.84) for $8, by an additional 5.04 (SD 2.35) for $12, and finally by an additional 3.92 (SD 2.18) for $25.
Simulations
The Estimated Value model has more degrees of freedom; hence, its better fit could result from overfitting [25]. One way to ensure that the results represent a real phenomenon rather than overfitting is through simulation [26]. We simulated data (Methods) to assess the fit of the Estimated Value model in scenarios where it should underperform (low noise) and outperform (high noise). The simulation results demonstrate how noise levels influence model performance. With low noise (0.1), where the data-generating function is close to the utility curve, the Classical Utility model provided a better fit. However, as noise increased to 0.3 and 0.5, the Estimated Value model consistently showed superior performance, regardless of the number of participants. Specifically, at higher noise levels, the Estimated Value model had higher LOO values and weights across different sample sizes. For detailed results, see S5 Text and S3 Table.
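The simulation logic can be sketched as follows (our own minimal illustration: choices are generated from the Classical Utility model with Gaussian noise added to the lottery's subjective value before the logistic choice rule; the noise injection point and parameter values are assumptions, not the paper's exact procedure):

```python
import math
import random

def simulate_trial(p, a, v, alpha, beta, gamma, noise_sd, rng):
    """Generate one choice from a hypothetical utility-model participant."""
    sv_lottery = (p - beta * a / 2) * v ** alpha + rng.gauss(0.0, noise_sd)
    sv_sure = 5 ** alpha  # certain $5 reference option
    p_lottery = 1.0 / (1.0 + math.exp(-gamma * (sv_lottery - sv_sure)))
    return rng.random() < p_lottery  # True = chose the lottery

rng = random.Random(0)  # fixed seed for reproducibility
low_noise = [simulate_trial(0.5, 0.0, 12, 0.72, 0.65, 2.0, 0.1, rng) for _ in range(200)]
high_noise = [simulate_trial(0.5, 0.0, 12, 0.72, 0.65, 2.0, 0.5, rng) for _ in range(200)]
```

At higher noise_sd the generated choices deviate further from the deterministic utility curve, which is the regime where the more flexible Estimated Value model is expected to win the model comparison.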
Cross-domain association between uncertainty attitudes
Finally, we explored how attitudes toward uncertainty vary across different domains. Using robust regression [27,28], we found that ambiguity aversion (β) in the medical domain was strongly and positively associated with the same attitude in the monetary domain (mean slope = 0.76, 89% HDPi [0.63, 0.91]; Fig 3). This association was replicated in the online sample (mean slope = 0.37, 89% HDPi [0.29, 0.44]). This finding suggests that attitudes towards ambiguity are consistent across different domains. Interestingly, the cross-domain association is stronger among individuals with a history of surgery (see S4 Text, S1, S2 and S3 Figs), suggesting that experience with potential outcomes may partially shape ambiguity attitudes. It is important to note that risk attitudes, as operationalized in our models, depend on the outcome levels; that is, the value incorporates the risk preference. Thus, it precludes a straightforward comparison of risk attitudes across domains.
The positive association between ambiguity aversion (β) in the monetary (x-axis) and medical (y-axis) domains in the in-person (Pane A) and online (Pane B) samples. The mean slope of the robust regression and the 89% highest density posterior interval (HDPi) are indicated in the panels.
Discussion
This study aimed to quantify the values of qualitative outcomes and to characterize individual uncertainty attitudes when making choices between ordinal outcomes. Participants made a series of choices between certain outcomes and uncertain, potentially better, outcomes. Through computational modeling, we estimated the values participants assigned to different qualitative outcomes and assessed their attitudes towards ambiguity, which were consistent across two domains. Notably, our model outperformed a classical utility model with access to the objective amounts in the monetary domain. Overall, our model demonstrates a good fit and allows for better estimation of subjective values for both qualitative and quantitative outcomes.
When testing uncertainty attitudes, one challenge that often arises is how to treat qualitative outcomes [29]. Theoretically, this problem stems from the difficulty in comparing outcomes with unknown cardinal values [30]. Methodologically, it complicates the design of experiments and the interpretation of results, as traditional approaches often rely on quantitative measures [31]. Statistically, it poses challenges in modeling and analyzing data due to the lack of a consistent scale for comparison. To address this issue, researchers often quantify an aspect of the outcome (e.g., milligrams of medication) [18,19,32] or treat the outcomes as categories with a fixed distance between them [33]. In this study, we utilized computational modeling to extract estimated values for each outcome, providing insights into the processes underlying participants’ decision-making. This approach allows for the use of different categories without the assumptions of a specific distance between them. By estimating these values, we can examine individual differences and use them in more precise parametric analyses, such as with neural representation of value, enhancing our understanding of the neural mechanisms involved in processing uncertainty. Nevertheless, while this model describes the observed data well, future applications should aim to extend its predictive capabilities to describe behavior more comprehensively. In particular, our experimental design relied on objectively ordinal outcomes (for example, moderate improvement is, by definition, better than slight improvement). A similar modeling approach could be used more generally for inferring values of options that are not inherently better or worse than each other (such as apples and oranges), but this should be empirically tested.
Uncertainty is especially relevant in the medical domain, as it is central to health decisions across the entire continuum of medical care [34,35]. While medical diagnoses and treatments are often described qualitatively, experiments assessing uncertainty attitudes typically quantify these outcomes into measures like years of life [18,36] or milligrams of medication [19]. This simplification abandons the original complex qualitative outcome. In contrast, our model allows for the introduction of complete qualitative outcomes. Nevertheless, our model still uses quantitative probabilities; in future research, it will also be interesting to include qualitative information about outcome likelihood (for example, “a high chance for success”).
The field of healthcare, particularly medical decision-making, is a critical area of application, where ongoing debates address the level of information patients receive and how they use it [35]. The fuzzy-trace theory posits that patients emphasize the gist representation of outcomes they are considering in decision-making despite also processing the verbatim representation simultaneously [37–39]. Our data can be viewed as an attempt to estimate this gist, with our Bayesian approach allowing us to preserve these values as distributions rather than forcing them into point estimates. This method captures a range of possible interpretations, providing a nuanced understanding of patient decision-making. Furthermore, the increased cross-domain association in ambiguity attitudes observed in individuals with prior surgical experience supports the transition from verbatim to gist processing [40]. This finding suggests that, similar to experts, personal experience may facilitate a transition toward greater reliance on gist-based processing, enriching our understanding of how experience shapes the processing of uncertainty in critical health decisions.
Beyond healthcare, our model can be applied in areas such as marketing, where product comparisons—particularly for new products—involve uncertainty. It offers a framework to quantify qualitative outcomes and enhance decision-making in contexts like product introduction [41,42]. For products with subjective attributes, such as comfort or quality, the model can capture consumer perceptions before purchase. For example, the value of the comfort of a new shoe can be modeled through elicited subjective preferences. Additionally, ambiguity aversion revealed in monetary decisions can influence preferences in other domains, such as a tendency to favor established brands over newer alternatives [43], particularly when attribute uncertainty is high. Our model can also adapt to scenarios with unknown probabilities [44]. In these cases, it will provide insight into decision rules by quantifying qualitative aspects and generating more informative payoff matrices.
Our focus in this study on ambiguity attitudes, rather than risk attitudes, warrants an explanation. Technically, in the Estimated Value model, risk attitudes are not quantified separately but are integrated into the estimated values (Equation 4). More broadly, the measure of risk attitude is tied to the outcome value. For example, a participant’s preference for a 50% chance of $12 over a guaranteed $5 reflects how they value different monetary amounts. Similarly, choosing a guaranteed “slight improvement” over a 50% chance of a “moderate improvement” reveals how they value medical outcomes. In both cases, risk is inherently linked to the value of the outcome. In contrast, ambiguity attitudes are assessed by measuring how participants respond to uncertainty within the same domain, in relation to risk, independent of outcome value. This distinction allows us to compare the effect of ambiguity across different domains, such as monetary and medical decisions. Previous studies suggest that subtle experimental manipulations can alter decision attitudes [45] and that ambiguity attitudes are less stable over time [11,33] and lack a known structural correlate [46,47]. Nevertheless, our results indicate that ambiguity attitudes are consistent across domains when studied simultaneously. Additionally, we provide evidence that experience influences this association. Just as perceived risk shapes risk-taking behavior [17], perceived ambiguity can affect the subjective value of outcomes. Future studies should explore disentangling the risk component from the estimated values.
In the monetary domain, our Estimated Value model fits the data better than a classical utility model. While the utility model we used is constrained to follow a specific functional form [48], our estimated model is more flexible and can adapt to the data. Although models like cumulative prospect theory [22] and the Kőszegi-Rabin model [49] account for reference points, they assume stable relationships between outcome values along a continuous curve. These models do not capture how adding new options can shift the perceived value of existing choices, as seen in behavioral phenomena like the decoy effect [50,51]. In the decoy effect, introducing an additional option that is slightly inferior to an existing choice makes that choice more attractive. For example, if participants must choose between $5 for sure or a chance for $25, adding a new $20 option could make the $25 option seem more appealing by comparison. These effects highlight the limitations of classical models, which assume that more data simply refine curve parameters without altering the relationships between outcomes. In contrast, our Estimated Value model can capture these categorical and contextual shifts, providing a more nuanced understanding of how people evaluate options in dynamic decision-making scenarios.
To mitigate the risk of overfitting, we employed a leave-one-out cross-validation procedure to compare the models. Our results emphasize that decisions traditionally considered quantitative, such as monetary choices, are better understood when accounting for subjective, qualitative aspects of the outcomes. The consistent performance of the Estimated Value model across both monetary and medical tasks suggests that individual value curves, even on ordinal scales, play a critical role in shaping decision-making. This finding supports the idea that qualitative elements underpin decision-making processes across domains, regardless of whether the outcomes are explicitly or implicitly defined.
Our simulations also support our modeling approach. We simulated hypothetical participants who made choices based on the Classical Utility Model. As expected, in scenarios with low noise levels, where the data generation function closely mimicked the utility curve, the Classical Utility model outperformed the Estimated Value model. This result highlights how the LOO algorithm effectively penalizes the Estimated Value model for its extra complexity. However, as noise levels increased, leading to a higher deviation from the utility function, the Estimated Value model consistently outperformed the Classical Utility model, regardless of the number of participants. This suggests that the Estimated Value model is more adaptable to noisy data, providing a better fit for real participants’ behavior.
The estimated model may be better equipped to capture phenomena such as the framing effect or range-frequency theory [52–54]. Unlike the utility function, which assumes outcomes are sampled from a single curve describing a person’s behavior, the estimated model allows for unique curvatures for each set of outcomes. This means that the lowest and highest amounts create a frame of reference against which all other outcomes are compared. By learning about the possibilities within this frame, participants adjust their expectations accordingly. This flexibility enables the estimated model to more accurately reflect how people perceive and evaluate different outcomes in varying contexts.
Having both monetary (quantitative) and medical (qualitative) datasets allowed us to assess the Estimated Value model with quantitative data and gain confidence in the model’s ability to extract values in the qualitative dataset. The model’s fit on quantitative data provides evidence that the estimates for the qualitative outcomes represent participants’ true values. While these estimates are on a relative scale, without specific measurement units, they open the possibility for use in future studies to examine value representation in the brain. Additionally, these estimates can be applied to evaluate the values of outcomes across different scenarios. Using the categorical variant, the model can compare discrete outcomes (e.g., comparing apples to oranges, or choosing between an apple and a 50% chance of two oranges). Overall, this approach enhances our understanding of how people perceive and compare various types of outcomes.
To conclude, we present a model capable of assigning quantitative values to qualitative outcomes. The model demonstrates a better fit for both qualitative and quantitative data compared to other potential models. Although more complex than a classical utility function, both model comparisons and simulations suggest that the improved fit is not due to overfitting. This model opens new avenues for exploring the relationships between different domains, outcomes that cannot be objectively quantified, and the representation of value in the brain.
Methods
Study 1 (in-person)
Ethics statement.
The study was approved by the Yale Human Investigation Committee (protocol 0910005795) and followed institutional guidelines.
Participants.
A sample of sixty-six individuals with valid data was drawn from one hundred and one adults (48 females; age range = 18–89; mean 52.97, SD ±22.41) who were screened for the experiment. All participants were screened over the phone to ensure the absence of major medical conditions, including neurological illness and lifetime Axis I psychiatric disorders. Participants provided written consent after a detailed explanation of the study. Ten participants did not complete the study and were not included in the analysis, resulting in a sample of ninety-one participants (failed to complete the task n=4; failed to come to consecutive sessions n=6).
To ensure all participants were cognitively healthy, we administered the Montreal Cognitive Assessment (MoCA) [55], which can detect mild cognitive impairments. Data from participants who scored less than 26 on the test were excluded [56], resulting in a sample of seventy-one cognitively healthy participants (32 females; age range = 18–88; mean 49.68 ±22.3 SD).
Procedure.
Participants in this study came for three sessions completed within one week. All task data reported here were collected on the first session. The MoCA was completed in session 3, along with several other questionnaires. In brief, in session 1, participants completed the decision-making task (reported here) and a reversal reinforcement learning task (not reported here). In session 2, participants completed an fMRI task. Finally, in session 3, participants completed several questionnaires assessing cognitive ability and general IQ. Participants were paid for each session separately and received extra compensation for successfully completing the experiment.
Risk and ambiguity in the monetary domain.
The task was based on a previously developed task [12] used in multiple studies. On each trial, participants chose between a small certain gain ($5) and a lottery that offered a larger amount. The lottery was risky in half of the trials, i.e., with known outcome probability. The risky lotteries were represented as bags containing red and blue chips. The numbers of red and blue chips were indicated by the percentage of a rectangle colored in red and blue and the numbers on the bag. Three different outcome probabilities were used (25%, 50%, and 75%). Dollar amounts ($5, $8, $12, and $25) next to each color (Fig 1a) indicated the amount of money that could be won if a chip of that color was drawn. In the remaining trials, the lottery was ambiguous, i.e., outcome probability was not precisely known. Ambiguity was achieved by occluding part of the bag (Fig 1b), rendering the probability of drawing a chip of a certain color partially unknown. Increasing the occluder size (24%, 50%, and 74%) increases the ambiguity level or the range of possible probabilities for drawing a red or blue chip. Each combination (amount, risk/ambiguity) was repeated four times. On twelve (out of 84) trials, participants were asked to choose between $5 for sure and a chance to win $5. Those trials were used as attention checks. Participants who failed six or more attention checks (n=3) were removed from the analysis in the risk and ambiguity task. In addition, participants who chose the lottery less than two times were omitted from the analysis because their data could not be fitted with a model (n=2). This exclusion was necessary because our choice function requires response variability; a consistent choice of one option provides no data points for model estimation. The final sample included 66 participants. At the end of the experiment, the computer randomly selected one of the trials, and the participants acted out the trials by selecting a chip. 
However, participants did not receive the additional payment; we opted for hypothetical outcomes to make the monetary and medical conditions similar to each other (see below).
Risk and ambiguity in the medical domain.
Participants were presented with a hypothetical scenario in which they were involved in a car accident and, as a result, suffered a spinal injury (S1 Text for more details). They were asked to choose between two medical treatments (Fig 1c and 1d): a known treatment with a known outcome ("slight improvement," parallel to a fixed monetary gain of $5) or an experimental treatment (Fig 1e) with outcomes varying in the level of improvement ("moderate," "major," or "complete recovery"). To align with the monetary task, these outcomes were constructed on an ordinal scale. The likelihood of the outcome of the experimental treatment varied with different levels of risk and ambiguity (parallel to playing a lottery). Outcome probabilities and ambiguity levels were the same as in the monetary task and were presented graphically and verbally. All other aspects of the experimental design matched the monetary task. The order of the monetary and medical tasks was counterbalanced across participants.
Study 2 (online)
This dataset, a secondary analysis of a previously published dataset [33], was used to confirm and replicate the results.
Participants.
Four hundred and four adults were recruited using Amazon Mechanical Turk (mTurk). Seventy-two participants did not complete the study and were excluded, resulting in a final sample of three hundred and thirty-two participants with valid data (212 females; age range = 20–80; mean = 49.36, SD = 14.85). Participants provided consent online after reading a detailed explanation of the study, following Yale Human Investigation Committee guidelines.
Procedure.
Participants completed tasks similar to those described above, following the same procedures as the in-person sample, with two exceptions [33]. First, each lottery was presented twice instead of four times. Second, an additional condition of 100% ambiguity was introduced.
Data simulations.
To assess the performance of the different models, we controlled the data-generating function. If the estimated-value model outperforms the utility function even when the data were generated by the utility model, the estimated-value model is insufficiently penalized for its complexity, indicating a risk of overfitting. Using the utility function (Equation 1), we simulated datasets with varying parameters, controlling noise levels and sample sizes. Each selection in the simulation included noise drawn from a normal distribution with a mean of 0 and a standard deviation determined by the simulation, and each subject had unique risk (α) and ambiguity (β) parameters. To ensure model convergence, we constrained risk attitudes to the range [0.1, 1.6] and ambiguity attitudes to [-1.4, 1.4]. We tested the following (N, noise) combinations: (30, 0.1), (30, 0.3), (30, 0.5), (60, 0.3), (60, 0.5), (120, 0.5), and (300, 0.5). We then evaluated the utility function and the estimated-value model (Equation 4) using LOO scores and weights to compare their ability to fit the simulated data.
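As a concrete illustration, the simulation procedure can be sketched as follows. The utility form and variable names below are assumptions for illustration, using a standard risk-and-ambiguity utility in which ambiguity shrinks the effective probability, not necessarily the paper's exact Equation 1:

```python
import random

random.seed(0)

def utility(value, prob, ambiguity, alpha, beta):
    # Assumed subjective-utility form: ambiguity (A) shifts the effective
    # probability by beta * A / 2; alpha curves the value.
    return (prob - beta * ambiguity / 2) * value ** alpha

def simulate_dataset(n_subjects, noise_sd, trials):
    """Simulate noisy lottery-vs-certain choices for a group of subjects."""
    data = []
    for subj in range(n_subjects):
        alpha = random.uniform(0.1, 1.6)   # risk attitude, constrained as in the text
        beta = random.uniform(-1.4, 1.4)   # ambiguity attitude, constrained as in the text
        for value, prob, amb in trials:
            u_lottery = utility(value, prob, amb, alpha, beta)
            u_certain = utility(5.0, 1.0, 0.0, alpha, beta)  # certain $5 reference
            noisy = u_lottery + random.gauss(0.0, noise_sd)  # per-choice Gaussian noise
            data.append((subj, value, prob, amb, int(noisy > u_certain)))
    return data

# risky trials (no ambiguity) and ambiguous trials (50% central probability)
trials = [(v, p, 0.0) for v in (5, 8, 12, 25) for p in (0.25, 0.50, 0.75)] + \
         [(v, 0.50, a) for v in (5, 8, 12, 25) for a in (0.24, 0.50, 0.74)]
sim = simulate_dataset(30, 0.1, trials)
```

Each simulated dataset of this form would then be fitted by both candidate models and scored with LOO.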
Hierarchical Bayesian modeling
By leveraging hierarchical Bayesian modeling (HBM), we were able to uncover latent variables that offer insights into the underlying mechanisms of decision-making. HBMs allow partial pooling of data across the population: individual data points contribute to both individual- and group-level estimates. This yields more robust and accurate posterior distributions than non-hierarchical models, particularly with small sample sizes or high individual-level variability [57]. A key element of HBM is the hyperprior, a higher-level distribution that sets prior beliefs on the parameters of the model's primary prior distributions. Hyperpriors allow prior knowledge and uncertainty about the parameters to be incorporated, leading to more flexible and robust models. Overall, this method allowed us to compare different models and assess which described the data better.
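The partial-pooling idea can be illustrated with the closed-form shrinkage of a simple normal-normal model; the function and numbers below are hypothetical, chosen only to show how individual estimates are pulled toward the group mean:

```python
from statistics import mean

def partial_pool(estimates, subject_sd, group_sd):
    """Shrink each subject's raw estimate toward the group mean.

    Weights follow the conjugate normal-normal posterior: the noisier the
    individual data (larger subject_sd), the stronger the pooling."""
    group_mean = mean(estimates)
    w = group_sd ** 2 / (group_sd ** 2 + subject_sd ** 2)  # weight on the individual
    return [w * e + (1 - w) * group_mean for e in estimates]

raw = [0.2, 0.9, 1.5]  # hypothetical per-subject risk attitudes
pooled = partial_pool(raw, subject_sd=0.5, group_sd=0.3)
# each pooled estimate lies between its raw value and the group mean
```

In the full hierarchical model, the group mean and group spread are themselves given hyperpriors and estimated jointly with the subject-level parameters, rather than fixed as here.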
Leave-one-out cross-validation
Leave-one-out cross-validation (LOO) splits the data into training and testing sets: the model is trained on all data points except one and evaluated on the held-out point, and the process is repeated across the dataset. We used the 'ArviZ' implementation, which approximates this procedure efficiently without refitting the model [24], to compute the LOO score of each model. Unlike the log-likelihood, LOO measures the expected log pointwise predictive density (ELPD); thus, a higher LOO score indicates a better fit. Additionally, LOO inherently accounts for model complexity through the effective number of parameters, ensuring that models are not rewarded for overfitting.
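How ELPD scores translate into model weights can be sketched with pseudo-BMA weighting, one of the weighting schemes ArviZ offers; the ELPD values below are hypothetical:

```python
import math

def pseudo_bma_weights(elpd):
    """Convert ELPD estimates into relative model weights (pseudo-BMA).

    Higher ELPD means better expected predictive fit and therefore a
    higher weight; weights are normalized to sum to 1."""
    best = max(elpd.values())  # subtract the max for numerical stability
    expd = {name: math.exp(score - best) for name, score in elpd.items()}
    total = sum(expd.values())
    return {name: v / total for name, v in expd.items()}

# hypothetical ELPD estimates for the two candidate models
weights = pseudo_bma_weights({"utility": -412.3, "estimated_value": -398.7})
```

With an ELPD gap this large, nearly all the weight falls on the better-predicting model, which is how the model comparisons in the text should be read.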
Model convergence
All models converged with rHat < 1.01 and an effective sample size > 1000. All analyses were conducted in Python 3.10.14, using the 'PyMC' (version 4.1.7) [58] and 'ArviZ' (version 0.17.1) [59] packages. We used the No-U-Turn Sampler (NUTS) for Markov chain Monte Carlo (MCMC) inference, adhering to PyMC's default settings: 1000 draws, 1000 tuning steps, no thinning, and a target acceptance rate of 0.8. The code can be found at https://github.com/KoremNSN/QualMod/tree/main
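For intuition, the convergence diagnostic can be sketched as the basic (non-split) Gelman-Rubin statistic; ArviZ's rHat is the improved rank-normalized split version, but the core idea, comparing between-chain to within-chain variance, is the same:

```python
from statistics import mean, variance

def gelman_rubin_rhat(chains):
    """Basic (non-split) Gelman-Rubin R-hat for m equal-length chains.

    Values near 1 indicate the chains have mixed and sample the same
    posterior; values well above 1 indicate non-convergence."""
    n = len(chains[0])
    B = n * variance([mean(c) for c in chains])  # between-chain variance
    W = mean(variance(c) for c in chains)        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return (var_hat / W) ** 0.5

mixed = [[0.1, 0.2, 0.3, 0.2], [0.2, 0.1, 0.3, 0.2]]  # chains explore the same region
stuck = [[0.0, 0.1, 0.0, 0.1], [1.0, 1.1, 1.0, 1.1]]  # chains stuck in different regions
```

Here `gelman_rubin_rhat(mixed)` stays near 1 while `gelman_rubin_rhat(stuck)` is far above the 1.01 threshold used in the text.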
Supporting information
S1 Table. Model comparison sensitivity analysis priors.
https://doi.org/10.1371/journal.pcbi.1012440.s003
(DOCX)
S2 Table. Model comparison sensitivity analysis categorical.
https://doi.org/10.1371/journal.pcbi.1012440.s006
(DOCX)
S1 Fig. Posterior distribution of ambiguity attitudes for participants with and without a history of surgery.
https://doi.org/10.1371/journal.pcbi.1012440.s008
(TIF)
S2 Fig. Cross-domain associations of ambiguity attitudes (β) in monetary and medical domains.
https://doi.org/10.1371/journal.pcbi.1012440.s009
(TIF)
S3 Fig. Posterior distribution of cross-domain slopes for participants with and without a history of surgery.
https://doi.org/10.1371/journal.pcbi.1012440.s010
(TIF)
S4 Fig. Estimated values in relation to actual monetary sums.
https://doi.org/10.1371/journal.pcbi.1012440.s011
(TIF)
S3 Table. Model comparisons simulation analysis.
https://doi.org/10.1371/journal.pcbi.1012440.s013
(DOCX)
References
- 1. Levy DJ, Glimcher PW. The root of all value: a neural common currency for choice. Curr Opin Neurobiol. 2012;22(6):1027–38. pmid:22766486
- 2. Glimcher PW. Understanding risk: a guide for the perplexed. Cogn Affect Behav Neurosci. 2008;8(4):348–54. pmid:19033233
- 3. Ellsberg D. Risk, Ambiguity, and the Savage Axioms. The Quarterly Journal of Economics. 1961;75(4):643.
- 4. Camerer C, Weber M. Recent developments in modeling preferences: Uncertainty and ambiguity. J Risk Uncertainty. 1992;5(4):325–70.
- 5. Hsu M, Bhatt M, Adolphs R, Tranel D, Camerer CF. Neural systems responding to degrees of uncertainty in human decision-making. Science. 2005;310(5754):1680–3. pmid:16339445
- 6. Kahneman D, Tversky A. Prospect Theory: An Analysis of Decision under Risk. Econometrica. 1979;47(2):263.
- 7. Tymula A, Rosenberg Belmaker LA, Ruderman L, Glimcher PW, Levy I. Like cognitive function, decision making across the life span shows profound age-related changes. Proc Natl Acad Sci U S A. 2013;110(42):17143–8. pmid:24082105
- 8. Cohen M, Jaffray J-Y, Said T. Experimental comparison of individual behavior under risk and under uncertainty for gains and for losses. Organizational Behavior and Human Decision Processes. 1987;39(1):1–22.
- 9. FeldmanHall O, Glimcher P, Baker AL, Phelps EA. Emotion and decision-making under uncertainty: Physiological arousal predicts increased gambling during ambiguity but not risk. J Exp Psychol Gen. 2016;145(10):1255–62. pmid:27690508
- 10. Huettel SA, Stowe CJ, Gordon EM, Warner BT, Platt ML. Neural signatures of economic preferences for risk and ambiguity. Neuron. 2006;49(5):765–75. pmid:16504951
- 11. Konova AB, Lopez-Guzman S, Urmanche A, Ross S, Louie K, Rotrosen J, et al. Computational Markers of Risky Decision-making for Identification of Temporal Windows of Vulnerability to Opioid Use in a Real-world Clinical Setting. JAMA Psychiatry. 2020;77(4):368–77. pmid:31812982
- 12. Levy I, Snell J, Nelson AJ, Rustichini A, Glimcher PW. Neural representation of subjective value under risk and ambiguity. J Neurophysiol. 2010;103(2):1036–47. pmid:20032238
- 13. Tobler PN, Christopoulos GI, O’Doherty JP, Dolan RJ, Schultz W. Risk-dependent reward value signal in human prefrontal cortex. Proc Natl Acad Sci U S A. 2009;106(17):7185–90. pmid:19369207
- 14. Blankenstein NE, Crone EA, van den Bos W, van Duijvenvoorde ACK. Dealing With Uncertainty: Testing Risk- and Ambiguity-Attitude Across Adolescence. Dev Neuropsychol. 2016;41(1–2):77–92. pmid:27028162
- 15. Peysakhovich A, Karmarkar UR. Asymmetric Effects of Favorable and Unfavorable Information on Decision Making Under Ambiguity. Management Science. 2016;62(8):2163–78.
- 16. Serra D. Decision-making: from neuroscience to neuroeconomics—an overview. Theory Decis. 2021;91(1):1–80.
- 17. Blais A-R, Weber EU. A Domain-Specific Risk-Taking (DOSPERT) scale for adult populations. Judgm Decis Mak. 2006;1(1):33–47.
- 18. Attema AE, Bleichrodt H, L’Haridon O. Ambiguity preferences for health. Health Econ. 2018;27(11):1699–716. pmid:29971896
- 19. Seaman KL, Gorlick MA, Vekaria KM, Hsu M, Zald DH, Samanez-Larkin GR. Adult age differences in decision making across domains: Increased discounting of social and health-related rewards. Psychol Aging. 2016;31(7):737–46. pmid:27831713
- 20. Doyle J, Thomason R. Background to qualitative decision theory. AI Magazine. 1999;20(2):55.
- 21. Dubois D, Godo L, Prade H, Zapico A. Making Decision in a Qualitative Setting: from Decision under Uncertainty to Case-based Decision. 1998. p. 607.
- 22. Tversky A, Kahneman D. Advances in prospect theory: Cumulative representation of uncertainty. J Risk Uncertain. 1992.
- 23. Krefeld-Schwalb A, Pachur T, Scheibehenne B. Structural parameter interdependencies in computational models of cognition. Psychol Rev. 2022;129(2):313–39. pmid:34180694
- 24. Vehtari A, Gelman A, Gabry J. Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. arXiv preprint. 2015;1507.04544.
- 25. Hawkins DM. The problem of overfitting. J Chem Inf Comput Sci. 2004;44(1):1–12. pmid:14741005
- 26. Wilson RC, Collins AG. Ten simple rules for the computational modeling of behavioral data. eLife. 2019;8:e49547. pmid:31769410
- 27. Korem N, Duek O, Ben-Zion Z, Kaczkurkin AN, Lissek S, Orederu T, et al. Emotional numbing in PTSD is associated with lower amygdala reactivity to pain. Neuropsychopharmacology. 2022;47(11):1913–21. pmid:35945274
- 28. Korem N, Duek O, Spiller T, Ben-Zion Z, Levy I, Harpaz-Rotem I. Emotional State Transitions in Trauma-Exposed Individuals With and Without Posttraumatic Stress Disorder. JAMA Netw Open. 2024;7(4):e246813. pmid:38625701
- 29. Staunton H, Willgoss T, Nelsen L, Burbridge C, Sully K, Rofail D, et al. An overview of using qualitative techniques to explore and define estimates of clinically important change on clinical outcome assessments. J Patient Rep Outcomes. 2019;3(1):16. pmid:30830492
- 30. Higashi RT, Kruse G, Richards J, Sood A, Chen PM, Quirk L, et al. Harmonizing Qualitative Data Across Multiple Health Systems to Identify Quality Improvement Interventions: A Methodological Framework Using PROSPR II Cervical Research Center Data as Exemplar. International Journal of Qualitative Methods. 2023;22.
- 31. Kumar G, Basri S, Imam AA, Khowaja SA, Capretz LF, Balogun AO. Data Harmonization for Heterogeneous Datasets: A Systematic Literature Review. Applied Sciences. 2021;11(17):8275.
- 32. Levy DJ, Glimcher PW. Comparing apples and oranges: using reward-specific and reward-general subjective value representation in the brain. J Neurosci. 2011;31(41):14693–707. pmid:21994386
- 33. Xu CY, Dan O, Jia R, Wertheimer E, Chawla M, Fuhrmann-Alpert G, et al. Quantitative vs. Qualitative Outcomes: A Longitudinal Study of Risk and Ambiguity in Monetary and Medical Decision-Making. Res Sq. 2024. pmid:38978608
- 34. Han PKJ. Uncertainty and Ambiguity in Health Decisions. In: Diefenbach MA, Miller-Halegoua S, Bowen DJ, editors. Handbook of Health Decision Science. New York, NY: Springer; 2016. p. 133–144. https://doi.org/10.1007/978-1-4939-3486-7_10
- 35. Reyna VF. A theory of medical decision making and health: fuzzy trace theory. Med Decis Making. 2008;28(6):850–65. pmid:19015287
- 36. Attema AE, Bleichrodt H, L’Haridon O, Peretti-Watel P, Seror V. Discounting health and money: New evidence using a more robust method. J Risk Uncertain. 2018;56(2):117–40. pmid:31007384
- 37. Reyna VF, Brainerd CJ. Fuzzy-trace theory: An interim synthesis. Learning and Individual Differences. 1995;7(1):1–75.
- 38. Reyna VF. A new intuitionism: Meaning, memory, and development in Fuzzy-Trace Theory. Judgm Decis Mak. 2012;7(3):332–59.
- 39. Reyna VF, Müller SM, Edelson SM. Critical tests of fuzzy trace theory in brain and behavior: uncertainty across time, probability, and development. Cogn Affect Behav Neurosci. 2023;23(3):746–72. pmid:36828988
- 40. Edelson SM, Reyna VF. Who Makes the Decision, How, and Why: A Fuzzy-Trace Theory Approach. Med Decis Making. 2024;44(6):614–6. pmid:39056326
- 41. Heijungs R. Selecting the best product alternative in a sea of uncertainty. Int J Life Cycle Assess. 2021;26(3):616–32.
- 42. Chen P, Hitt L, Hong Y, Wu S. Measuring product type and purchase uncertainty with online product ratings: a theoretical model and empirical application. Information Systems Research. 2021;32(4):1470–89.
- 43. Muthukrishnan AV, Wathieu L, Xu AJ. Ambiguity Aversion and the Preference for Established Brands. Management Science. 2009;55(12):1933–41.
- 44. Gaspars-Wieloch H. Newsvendor problem under complete uncertainty: a case of innovative products. Cent Eur J Oper Res. 2017;25(3):561–85. pmid:28855846
- 45. Grubb MA, Li Y, Larisch R, Hartmann J, Gottlieb J, Levy I. The composition of the choice set modulates probability weighting in risky decisions. Cogn Affect Behav Neurosci. 2023;23(3):666–77. pmid:36702993
- 46. Gilaie-Dotan S, Tymula A, Cooper N, Kable JW, Glimcher PW, Levy I. Neuroanatomy predicts individual risk attitudes. J Neurosci. 2014;34(37):12394–401. pmid:25209279
- 47. Grubb MA, Tymula A, Gilaie-Dotan S, Glimcher PW, Levy I. Neuroanatomy accounts for age-related changes in risk preferences. Nat Commun. 2016;7:13822. pmid:27959326
- 48. Schoemaker P. The expected utility model: its variants, purposes, evidence and limitations. Journal of Economic Literature. 1982;20(4):529–63.
- 49. Koszegi B, Rabin M. A Model of Reference-Dependent Preferences. The Quarterly Journal of Economics. 2006;121(4):1133–65.
- 50. Wedell DH, Pettibone JC. Using Judgments to Understand Decoy Effects in Choice. Organizational Behavior and Human Decision Processes. 1996;67(3):326–44.
- 51. Huber J, Payne JW, Puto C. Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis. J CONSUM RES. 1982;9(1):90.
- 52. Gong J, Zhang Y, Yang Z, Huang Y, Feng J, Zhang W. The framing effect in medical decision-making: a review of the literature. Psychol Health Med. 2013;18(6):645–53. pmid:23387993
- 53. Kőszegi B, Rabin M. Reference-Dependent Risk Attitudes. American Economic Review. 2007;97(4):1047–73.
- 54. Lim RG. A Range-Frequency Explanation of Shifting Reference Points in Risky Decision Making. Organizational Behavior and Human Decision Processes. 1995;63(1):6–20.
- 55. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695–9. pmid:15817019
- 56. Rossetti HC, Lacritz LH, Cullum CM, Weiner MF. Normative data for the Montreal Cognitive Assessment (MoCA) in a population-based sample. Neurology. 2011;77(13):1272–5. pmid:21917776
- 57. Kruschke J. Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. 2014.
- 58. Abril-Pla O, Andreani V, Carroll C, Dong L, Fonnesbeck CJ, Kochurov M, et al. PyMC: a modern, and comprehensive probabilistic programming framework in Python. PeerJ Comput Sci. 2023;9:e1516. pmid:37705656
- 59. Kumar R, Carroll C, Hartikainen A, Martín OA. ArviZ: a unified library for exploratory analysis of Bayesian models in Python. J Open Source Softw. 2019;4(33):1143.