πŸ’ MIT 6.3700[6.041SC] Intro to Probability

Math

If you believe your copyrighted material appears in this deck without permission, please contact me at [email protected] with details, and I will promptly remove it upon request.

====================================================

πŸ’ Hi I'm Cherry! πŸ‘‹

This is the ultimate deck on Intro to Probability: it contains literally everything taught in the free MIT 6.041SC Probabilistic Systems Analysis and Applied Probability course (now renamed 6.3700 Intro to Probability), taught by Prof. John Tsitsiklis.

This course is based on the textbook: Bertsekas, Dimitri, and John Tsitsiklis. Introduction to Probability. 2nd ed. Athena Scientific, 2008. ISBN: 9781886529236.

Download the version of this deck that has images for free on Ko-fi.

📖 Contents of the deck, organized by tags 📖:

contents

⭐️ Features ⭐️:

  • Cards in the deck contain plentiful derivations, proofs, images, and context on the back to facilitate a deep understanding of concepts and strongly connected memories
  • Every card is color-coded and math is written in MathJax
  • Every card links to and is tagged with its lecture # in the 6.041SC Probabilistic Systems Analysis and Applied Probability course and the RES.6-012 Intro to Probability resource page.
    • The cards in this deck work with the Clickable Tags addon, which I highly recommend.
  • All cards are ordered so that material that comes earlier in the course shows up as new cards before material that comes later
  • Example practice-problem cards so you practice and learn the procedure of solving problems (highly effective; these require pen & paper and more time than you may be used to, and a few may require a calculator)

✏️ Prerequisites for the course and deck 💭:

  • Calculus
    • A strong calculus foundation is necessary, especially optimization, which is important in statistical inference
  • Multivariable Calculus
    • Mainly just partial derivatives and double/triple integrals for analyzing joint distributions of multiple random variables

❤️ Support 😊:

Has my deck really helped you out? If so, please give it a thumbs up!


Please check out my other ✨ shared decks ✨.
To learn how to create amazing cards like I do, check out my 🍒 3 Rules of Card Creation.

"Buy Me A Coffee"

Sample Data

Text We're performing a binary hypothesis test and our decision rule has the rejection region {{c3::\(R=\{x\:|\: L(x)>\xi\} \)::what is \(R\)}}. If we {{c1::decrease the critical value \(\xi\)::change \(\xi\) how}}, this: {{c2::\(\uparrow\) probability of rejecting \(H_0\) and accepting \(H_1\)::\(\uparrow\) probability of what decision}} {{c2::\(\uparrow\) false rejection probability \(\alpha(R)\)::change \(\alpha(R)\) how}} {{c2::\(\downarrow\) false acceptance probability \(\beta(R)\)::change \(\beta(R)\) how}}
Replace
Answer
Attached If we increase the critical value \(\xi\), this: \(\uparrow\) probability of accepting \(H_0\) and rejecting \(H_1\); \(\downarrow\) false rejection probability \(\alpha(R)\); \(\uparrow\) false acceptance probability \(\beta(R)\)
Overlapping
Subject Clozes
Deck ID 74486892
Summary 1 Visual
Back Extra
Summary 2
Back Extra 2 6.041SC Lecture 24: Classical Inference - II; RES.6-012 Part II: Inference & Limit Theorems
Summary 3 Textbook 9.3: Likelihood Ratio Test
Back Extra 3
Summary 4 Textbook 9.3: Neyman-Pearson Lemma
Back Extra 4
Summary 5 Textbook 9.3: Binary Hypothesis Testing
Back Extra 5
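
The sample card above is about how the critical value \(\xi\) trades off the two error probabilities. As a concrete illustration (mine, not part of the deck), here is a minimal Python sketch for a hypothetical pair of Gaussian hypotheses, \(H_0: X\sim N(0,1)\) versus \(H_1: X\sim N(1,1)\), where the likelihood ratio \(L(x)=e^{x-1/2}\) is increasing in \(x\), so \(L(x)>\xi\) is equivalent to \(x>c=\ln\xi+1/2\):

# Hypothetical Gaussian example (assumed, not from the card): H0: X~N(0,1), H1: X~N(1,1).
import math

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def alpha_beta(xi: float) -> tuple[float, float]:
    c = math.log(xi) + 0.5   # threshold on x equivalent to L(x) > xi
    alpha = 1.0 - phi(c)     # false rejection probability: P(X > c | H0)
    beta = phi(c - 1.0)      # false acceptance probability: P(X <= c | H1)
    return alpha, beta

for xi in (2.0, 1.0, 0.5):   # decreasing critical value
    a, b = alpha_beta(xi)
    print(f"xi = {xi:3.1f}  alpha = {a:.3f}  beta = {b:.3f}")

Decreasing \(\xi\) from 2.0 to 0.5 raises \(\alpha(R)\) from about 0.116 to 0.577 while lowering \(\beta(R)\) from about 0.577 to 0.116, exactly the direction the card states.
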
Text {{c3:: ::deduce the network}}A computer network connects two nodes \(A\) and \(B\) through intermediate nodes \(C, D, E, F,\) as shown above. For every pair of directly connected nodes, say \(i\) and \(j\), there is a given probability \(p_{ij}\) that the link from \(i\) to \(j\) is up (succeeds). We assume {{c2::that link failures/successes are independent of each other::key assumption}}. What is the probability that there is a path connecting \(A\) and \(B\) in which all links are up? {{c1::ask-ans}}
Replace
Answer We can split the big system into subsystems that are either in series or in parallel. Note that \(\mathbf{P}(\text{series subsystem succeeds})=p_1p_2\cdots p_m\) and that \(\begin{align}\mathbf{P}(\text{parallel subsystem succeeds})&=1-\mathbf{P}(\text{parallel subsystem fails})\\&=1-(1-p_1)(1-p_2)\cdots (1-p_m)\end{align}\). So first we find \(\begin{align}\mathbf{P}(C\to B)&= 1-(1-p_{CE}\,p_{EB})(1-p_{CF}\,p_{FB})\\&=1-(1-0.8\cdot 0.9)(1-0.95\cdot 0.85)\\&= 0.946\end{align}\). Then \(\begin{align}\mathbf{P}(A\to B)&=1-(1-p_{AC}\,\mathbf{P}(C\to B))(1-p_{AD}\, p_{DB})\\&= 1 -(1-0.9\cdot 0.946)(1- 0.75\cdot 0.95)\\&= 0.957\end{align}\)
Attached
Overlapping
Subject Clozes
Deck ID 74486892
Summary 1
Back Extra 6.041SC Lecture 3: Independence; RES.6-012 Part I: The Fundamentals
Summary 2 Textbook 1.5: Reliability
Back Extra 2
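
The card's answer above already carries the full series/parallel derivation. As a sanity check (not part of the deck; link probabilities are taken from the answer, and the topology is inferred from its decomposition), here is a short Python sketch that enumerates all \(2^7\) link configurations and sums the probabilities of those containing a working \(A\to B\) path:

# Exhaustive check of the card's answer; probabilities and topology from the answer above.
from itertools import product

p = {"AC": 0.9, "AD": 0.75, "DB": 0.95,
     "CE": 0.8, "EB": 0.9, "CF": 0.95, "FB": 0.85}
links = list(p)

def connected(up: dict) -> bool:
    # A -> B succeeds via A-C then C-E-B or C-F-B, or via A-D-B.
    c_to_b = (up["CE"] and up["EB"]) or (up["CF"] and up["FB"])
    return (up["AC"] and c_to_b) or (up["AD"] and up["DB"])

total = 0.0
for states in product([True, False], repeat=len(links)):
    up = dict(zip(links, states))
    prob = 1.0
    for name, is_up in up.items():
        prob *= p[name] if is_up else 1.0 - p[name]
    if connected(up):
        total += prob

print(f"P(A -> B) = {total:.3f}")   # 0.957, matching the card

Because the subsystems here are disjoint and link states are independent, the brute-force sum agrees with the series/parallel formula, reproducing the card's 0.957.
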
Text Summarize the primary assumptions and goals of {{c1::Bayesian inference::... inference}}. {{c2::ask-ans}}
Replace
Answer It models the unknown quantity of interest \(\Theta\) as a random variable (or finite collection of random variables) and aims to extract information about \(\Theta\) based on observing a collection \(X=(X_1,\ldots,X_n)\) of related random variables called the observations. It assumes that we know: 1. the prior distribution \(p_\Theta\) or \(f_\Theta\), depending on whether \(\Theta\) is discrete or continuous; 2. the conditional distribution \(p_{X|\Theta}\) or \(f_{X|\Theta}\), depending on whether \(\Theta\) is discrete or continuous. After observing \(X=x\), we want to find: \(\to\) the posterior distribution \(p_{\Theta | X}(\theta \:|\: X=x)\) by using the appropriate version of Bayes' rule
Attached
Overlapping
Subject Clozes
Deck ID 74486892
Summary 1 Visual
Back Extra
Summary 2 Clarification
Back Extra 2 While we model \(\Theta\) with its prior distribution \(p_\Theta\) or \(f_\Theta\), we don't know the actual value \(\theta\) it's taking; that's why we use our observations \(X=(X_1,\ldots,X_n)\) to narrow things down and get a more "accurate" posterior distribution of \(\Theta\).
Summary 3 For comparison: Classical inference
Back Extra 3 Instead of modelling the unknown quantity of interest \(\Theta\) as a random variable (or finite collection of random variables), it models the unknown quantity of interest \(\theta\) as a constant (or finite collection of constants), with the goal of determining what that constant is.
Summary 4
Back Extra 4 6.041SC Lecture 21: Bayesian Statistical Inference - I; RES.6-012 Part II: Inference & Limit Theorems
Summary 5 Textbook 8.1: Bayesian Inference
Back Extra 5
Summary 6 Textbook 8.1: Multiparameter Problems
Back Extra 6
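
To make the posterior computation above concrete, here is a small Python sketch with a made-up example (mine, not the deck's): \(\Theta\) is a coin's bias with a discrete uniform prior over three candidate values, the observation \(X\) is the number of heads in \(n\) independent flips, and Bayes' rule gives the posterior \(p_{\Theta|X}(\theta\:|\:X=x)\propto p_\Theta(\theta)\, p_{X|\Theta}(x\:|\:\theta)\):

# Hypothetical discrete Bayesian inference example (assumed, not from the deck).
from math import comb

thetas = [0.3, 0.5, 0.7]               # candidate values of Theta
prior = {t: 1 / 3 for t in thetas}     # prior p_Theta: uniform (an assumption)
n, x = 10, 7                           # observed: 7 heads in 10 flips

def likelihood(x: int, n: int, theta: float) -> float:
    """p_{X|Theta}(x | theta): binomial model for the flips."""
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {t: prior[t] * likelihood(x, n, t) for t in thetas}
z = sum(unnorm.values())               # normalizing constant p_X(x)
posterior = {t: w / z for t, w in unnorm.items()}

for t in thetas:
    print(f"p(theta = {t} | X = {x}) = {posterior[t]:.3f}")

Observing 7 heads in 10 flips moves most of the posterior mass to \(\theta=0.7\) (about 0.68) and away from \(\theta=0.3\) (about 0.02), which is exactly the "narrowing down" described in the clarification field above.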