examples to explain the decision analysis process. In the first lecture, he explained the basis on which we should make decisions. For the analysis process, he presented a framework that includes establishing the context, identifying the alternatives, predicting the consequences, assigning values to the outcomes, and finally choosing the best alternative based on those values. He also explained the different types of models. For decision making, decision trees (which he also called first cousins to flow charts) are very helpful for identifying the different stages of a process.
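To illustrate how that framework plays out in a decision tree, here is a minimal sketch with entirely made-up alternatives and numbers (not from the lecture): each alternative leads to a chance node, we assign values to the possible outcomes, and at the decision node we pick the alternative with the highest expected value.

```python
# Toy decision tree: each alternative is a chance node, represented as a
# list of (probability, outcome_value) pairs. All numbers are hypothetical.
alternatives = {
    "treat": [(0.7, 90), (0.3, 40)],  # 70% chance of a good outcome
    "wait":  [(0.5, 80), (0.5, 60)],  # less risky, but lower upside
}

def expected_value(branches):
    """Expected value at a chance node: sum of probability * outcome value."""
    return sum(p * v for p, v in branches)

# At the decision node, choose the alternative with the highest expected value.
best = max(alternatives, key=lambda name: expected_value(alternatives[name]))
for name, branches in alternatives.items():
    print(name, expected_value(branches))
print("best choice:", best)
```

With these numbers, "treat" has expected value 75 versus 70 for "wait", so the tree recommends treating.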
The next lecture focused more specifically on decision-making techniques using probability. There are two main kinds of probability: prior and posterior. A prior probability represents our knowledge about an unknown quantity before seeing data, whereas a posterior probability is a conditional probability representing our updated knowledge after observing other data. We use probabilities to assign values to different outcomes. While discussing decision trees, we also learned about decision nodes, chance nodes, and terminal nodes. Hidden Markov Models (HMMs) and Bayesian networks were also discussed in the lecture; both are very helpful tools for deciding under uncertainty. The importance of Bayes' theorem is that we can calculate the probability of A given B when we know the prior probabilities of A and B, and the conditional probability of B given A. For example, when we want to find the probability of a disease given a symptom, we can do so by knowing the probability of the symptom given the disease. Dr. Greenes also covered other interesting topics such as odds (the ratio of two probabilities), sensitivity, specificity, and the likelihood ratio. The examples shown and discussed in the lecture were both interesting and informative.
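To make the disease-given-symptom example concrete, here is a small sketch of Bayes' theorem with hypothetical numbers (the prevalence, sensitivity, and false-positive rate below are made up for illustration, not figures from the lecture):

```python
# Bayes' theorem:
#   P(disease | symptom) = P(symptom | disease) * P(disease) / P(symptom)
# All numbers below are hypothetical.
p_disease = 0.01                # prior: prevalence of the disease
p_symptom_given_disease = 0.9   # sensitivity: P(symptom | disease)
p_symptom_given_healthy = 0.05  # false-positive rate: 1 - specificity

# P(symptom) by the law of total probability, over diseased and healthy people:
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Posterior: probability of disease given that the symptom is present.
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))
```

Even with a sensitive test, the posterior stays modest (about 0.154 here) because the disease is rare; this is exactly the kind of update the likelihood ratio (sensitivity divided by the false-positive rate) summarizes.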
Posted by Prabal