## Difference Between Risk and Uncertainty

In day-to-day life there are many circumstances in which we must take risks, which involve exposure to loss or danger. Risk can be understood as the potential for loss. It is not exactly the same as uncertainty, which implies the absence of certainty about the outcome of a particular situation. In some instances uncertainty is inherent in forthcoming events, i.e. there is no way of knowing what will happen next.

We use the terms risk and uncertainty in a single breath, but have you ever wondered about the difference between them? This article should help you understand the difference between risk and uncertainty, so read on.

In the financial glossary, the meaning of risk is not much different. It refers to the uncertainty surrounding the expected returns on an investment, i.e. the probability that the actual returns will not equal the expected returns. Such risk may include the possibility of losing part or all of the investment. The higher the risk, the higher the expected return, because investors are compensated for the additional risk they take on their investments.

The Journal of Risk and Uncertainty features both theoretical and empirical papers that analyze risk-bearing behavior and decision-making under uncertainty. The journal serves as an outlet for important, relevant research in decision analysis, economics, and psychology.

Among the topics covered in the journal are decision theory and the economics of uncertainty, psychological models of choice under uncertainty, risk and public policy, experimental investigations of behavior under uncertainty, and empirical studies of real-world, risk-taking behavior. Articles begin with an introductory discussion explaining the nature of the research and the interpretation and implications of the findings at a level that is accessible to researchers in other disciplines.


In this article, the concepts of risk and uncertainty will be introduced together with the use of probabilities in calculating both expected values and measures of dispersion. In addition, the attitude to risk of the decision-maker will be examined by considering various decision-making criteria, and the usefulness of decision trees will also be discussed.

The basic definition of risk is that the final outcome of a decision, such as an investment, may differ from that which was expected when the decision was taken. We tend to distinguish between risk and uncertainty in terms of the availability of probabilities. Risk is when the probabilities of the possible outcomes are known (such as when tossing a coin or throwing a die); uncertainty is when the randomness of outcomes cannot be expressed in terms of specific probabilities. However, it has been suggested that in the real world it is generally not possible to assign probabilities to potential outcomes, and that the concept of risk is therefore largely redundant. In the artificial scenarios of exam questions, potential outcomes and probabilities will generally be provided, so knowledge of the basic concepts of probability and their use will be expected.
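To make the exam-style calculation concrete, here is a minimal sketch of computing an expected value and a measure of dispersion (standard deviation) when probabilities are known, i.e. under risk in the sense defined above. The payoffs and probabilities are invented for illustration.

```python
# Hypothetical investment with three possible payoffs and known probabilities.
outcomes = [(-100, 0.2), (50, 0.5), (200, 0.3)]  # (payoff, probability)

# Expected value: probability-weighted average of the payoffs.
expected_value = sum(x * p for x, p in outcomes)

# Dispersion: probability-weighted squared deviations from the expected value.
variance = sum(p * (x - expected_value) ** 2 for x, p in outcomes)
std_dev = variance ** 0.5

print(expected_value)  # 65.0
print(std_dev)         # 105.0
```

A risk-neutral decision-maker would compare alternatives on expected value alone; a risk-averse one would also weigh the dispersion around it.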

Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.

The ability of humans to learn changing reward contingencies implies that they perceive, at a minimum, three levels of uncertainty: risk, which reflects imperfect foresight even after everything is learned; (parameter) estimation uncertainty, i.e., uncertainty about outcome probabilities; and unexpected uncertainty, or sudden changes in the probabilities. We describe how these levels of uncertainty evolve in a natural sampling task in which human choices reliably reflect optimal (Bayesian) learning, and how their evolution changes the learning rate. We then zoom in on estimation uncertainty. The ability to sense estimation uncertainty (also known as ambiguity) is a virtue because, besides allowing one to learn optimally, it may guide more effective exploration; but aversion to estimation uncertainty may be maladaptive. Here, we show that participant choices reflected aversion to estimation uncertainty. We discuss how past imaging studies foreshadowed the ability of humans to distinguish the different notions of uncertainty. We also document that the ability of participants to make this distinction depends on sufficient revelation of the payoff-generating model. When we induced structural uncertainty, participants did not become aware of the jumps in our task, and fell back on model-free reinforcement learning.

To correctly gauge estimation uncertainty, two additional statistical properties of the environment ought to be evaluated: risk, or how much irreducible uncertainty would be left even after the best of learning; and unexpected uncertainty, or how likely it is that the environment suddenly changes [5]. The notion of risk captures the idea that, to a certain extent, forecast errors are expected, and therefore should not affect learning. Under unexpected uncertainty, these same forecast errors are indications that learning may have to be re-started because outcome contingencies have changed discretely.

With Bayesian learning, the three notions of uncertainty are tracked explicitly. This is because Bayesians form a model of the environment that delineates the boundaries of risk, estimation uncertainty and unexpected uncertainty. The delineation is crucial: estimation uncertainty tells Bayesians how much still needs to be learned, while unexpected uncertainty leads them to forget part of what they learned in the past.
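The delineation can be illustrated with a simplified stand-in for the kind of model described above (this is not the authors' six-arm model): Beta-Bernoulli updating for a single arm, where a hazard rate stands in for unexpected uncertainty, the posterior variance over the payoff probability measures estimation uncertainty, and the Bernoulli outcome variance is the irreducible risk. The function names and the hazard-rate mechanism are illustrative assumptions.

```python
def update(alpha, beta, reward, h=0.1, prior=(1.0, 1.0)):
    """One Bayesian step for a Bernoulli arm with Beta(alpha, beta) posterior.

    With probability h (the hazard rate) the arm's payoff probability may
    have jumped, so the counts are partially shrunk back toward the prior:
    this is how unexpected uncertainty makes the learner forget part of
    what it learned. Then the observed outcome (0 or 1) is counted.
    """
    alpha = (1 - h) * alpha + h * prior[0]
    beta = (1 - h) * beta + h * prior[1]
    return alpha + reward, beta + (1 - reward)

def uncertainties(alpha, beta):
    """Split the posterior into the levels of uncertainty discussed above."""
    p = alpha / (alpha + beta)                     # posterior mean payoff probability
    estimation = p * (1 - p) / (alpha + beta + 1)  # posterior variance: shrinks with data
    risk = p * (1 - p)                             # irreducible outcome variance
    return p, estimation, risk
```

With a zero hazard rate, estimation uncertainty falls toward zero as observations accumulate while risk does not; a positive hazard rate keeps a floor under estimation uncertainty, which is what keeps the effective learning rate from vanishing.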

This contrasts with model-free reinforcement learning. There, uncertainty is monolithic: it is the expected magnitude of the prediction error [6]. Under reinforcement learning, only the value of a chosen option is updated, on the basis of the reward (or loss) prediction error, i.e., the difference between the received and the anticipated reward (or loss) [7]. No attempt is made to disentangle the different sources of the prediction error. Usually, the learning rate is kept constant. If not, as in the Pearce-Hall algorithm [8], adjustment is based on the total size of the prediction error.
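The monolithic character of model-free learning can be seen in a minimal sketch of the delta rule described above, together with a Pearce-Hall-style learning rate that tracks only the absolute size of the prediction error. Parameter values are illustrative.

```python
def delta_rule(value, reward, lr=0.1):
    """Update the chosen option's value toward the received reward by a
    fixed fraction (lr) of the prediction error."""
    prediction_error = reward - value
    return value + lr * prediction_error

def pearce_hall_rate(rate, prediction_error, gamma=0.3):
    """Pearce-Hall-style adjustment: move the learning rate toward the
    absolute prediction error, with no attempt to attribute the error
    to risk, estimation uncertainty, or unexpected uncertainty."""
    return (1 - gamma) * rate + gamma * abs(prediction_error)
```

Note that only the chosen option's value is updated, and the rate adjustment sees only the total error magnitude: the separate sources of the error are never disentangled.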

Recently, evidence has emerged that, in environments where risk, estimation uncertainty and unexpected uncertainty all vary simultaneously, humans choose as if they were Bayesians [9]. Formally, the experiment that generated this evidence involved a six-arm restless bandit problem. Participants were asked to choose among six options with different risk profiles and differing frequencies of changes in reward (and loss) probabilities. Assuming softmax exploration [10], the Bayesian updating model was shown to provide a significantly improved fit over standard reinforcement learning as well as the Pearce-Hall extension.
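For readers unfamiliar with the softmax exploration assumption, here is a minimal sketch: choice probabilities over the arms are proportional to the exponentiated values, with a temperature parameter controlling how noisy choice is. The values below are invented.

```python
import math

def softmax(values, temperature=1.0):
    """Choice probabilities proportional to exp(value / temperature)."""
    m = max(values)  # subtract the max for numerical stability
    exps = [math.exp((v - m) / temperature) for v in values]
    total = sum(exps)
    return [e / total for e in exps]
```

At low temperature the learner almost always picks the highest-valued arm; at high temperature choices approach uniform, so lower-valued arms are explored more often.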

The finding that humans are Bayesian learners implies that they must have tracked the three levels of uncertainty. Here, we discuss how the levels differentially affected the Bayesian learning rate in our restless bandit task, and how participants could have distinguished between them.

Neural implementation of Bayesian learning would require separate encoding of the three levels of uncertainty. Recent human imaging studies appear to be consistent with this view. The evidence has only been suggestive, however, as no imaging study to date involved independent control of risk, estimation uncertainty and unexpected uncertainty.

The task in [4] involved a bandit with only two arms, however. For our purposes, this entails a number of disadvantages. First, it is impossible to independently track the three levels of uncertainty with only two arms; at a minimum, six arms are needed, which is what the experiment here implements. Indeed, in [4], risk was decreased along with unexpected uncertainty, introducing a confound that masked the full effect of unexpected uncertainty on the learning rate. Second, the two arms in [4] had perfectly negatively correlated reward probabilities, making the task one of reversal learning [17]. This means that outcomes for one arm are fully informative about the other, so exploration carries no opportunity cost.