**Metropolis-Hastings: 2D Random Walk**
🖱 drag Y to scrub step size · click to teleport

A Markov chain wanders across a 2D banana-shaped target density $p(x,y) \propto \exp\left[-2x^2 - 0.5\,(y - x^2)^2\right]$ (a Rosenbrock-style ridge). Each step proposes $x' = x + \sigma\,\mathcal{N}(0, I)$ and accepts with probability $\alpha = \min(1,\, p(x')/p(x))$: the classic **Metropolis-Hastings** rule. The faint heatmap behind the chain is the target's log-density; the bright trail is the last ≈600 samples, fading with age; the white dot is the current state. Drag your cursor (or finger) up and down to scrub the proposal scale $\sigma$ from ≈0.03 (almost stuck: acceptance → 1 but the chain barely moves) to ≈3.5 (huge leaps, almost all rejected). Watch the live acceptance rate in the HUD: theory says random-walk MH is most efficient near the **Roberts–Gelman–Gilks optimum $\alpha^\star \approx 0.234$**; the readout turns green when you're close. Click anywhere to teleport the chain to a fresh random starting point and watch it equilibrate onto the ridge. Idle for a moment and the step size auto-cycles, so the geometry of bad vs. good proposals stays on screen.
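A minimal NumPy sketch of the same sampler; the target density and the $\alpha^\star \approx 0.234$ reference come from the description above, while the step count, seed, and starting point are arbitrary illustration choices:

```python
import numpy as np

def log_p(x, y):
    """Log of the banana target p(x,y) ∝ exp[-2x² - 0.5(y - x²)²]."""
    return -2 * x**2 - 0.5 * (y - x**2) ** 2

def metropolis_hastings(n_steps=10_000, sigma=0.5, seed=0):
    """Random-walk MH: propose x' = x + σ·N(0, I), accept w.p. min(1, p(x')/p(x))."""
    rng = np.random.default_rng(seed)
    state = np.zeros(2)                 # arbitrary starting point
    samples = np.empty((n_steps, 2))
    accepted = 0
    for i in range(n_steps):
        proposal = state + sigma * rng.standard_normal(2)
        # Acceptance test done in log space for numerical stability
        if np.log(rng.uniform()) < log_p(*proposal) - log_p(*state):
            state = proposal
            accepted += 1
        samples[i] = state
    return samples, accepted / n_steps

samples, acc_rate = metropolis_hastings(sigma=0.5)
print(f"acceptance rate: {acc_rate:.3f}")   # tune σ toward α* ≈ 0.234
```

Scrubbing `sigma` here reproduces the HUD experiment: tiny values push the acceptance rate toward 1, large values toward 0, and the efficient middle ground sits near 0.234.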
**Beta-Binomial Conjugate Update**
🖱 click L/R to add tails/heads · drag Y for prior strength

Bayesian inference for a coin's bias $p$, in real time. The prior is $p \sim \mathrm{Beta}(\alpha_0, \beta_0)$ with $\alpha_0 = \beta_0$ controlled by mouseY (top = strong, $\alpha_0 = 30$; bottom = weak, $\alpha_0 = 0.5$). Each click on the LEFT half of the canvas records a tails ($\beta \to \beta + 1$); each click on the RIGHT half records a heads ($\alpha \to \alpha + 1$). Because the Beta family is conjugate to the Bernoulli likelihood, the posterior after observing $H$ heads and $T$ tails is simply $\mathrm{Beta}(\alpha_0 + H,\, \beta_0 + T)$, with density $f(p \mid \alpha, \beta) \propto p^{\alpha-1}(1-p)^{\beta-1}$. The green curve is the posterior, the faint grey curve is the current prior (re-rendered live as you scrub $\alpha_0$), and the dashed orange verticals mark the 95% credible interval $[F^{-1}(0.025),\, F^{-1}(0.975)]$. The solid green vertical marks the mode $(\alpha - 1)/(\alpha + \beta - 2)$. The strip along the bottom shows your H/T sequence. Try it: stack 10 heads in a row with a weak prior and the posterior shoots toward $p = 1$; redo it with a strong symmetric prior and the posterior barely moves. That contrast is what "prior strength" actually means.
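A short sketch of the conjugate update, assuming SciPy's `beta.ppf` for the quantile function $F^{-1}$; the click counts are hypothetical, chosen to mirror the ten-heads experiment:

```python
from scipy.stats import beta

def posterior(alpha0, beta0, heads, tails):
    """Conjugate update: Beta(α0, β0) prior + H heads, T tails → Beta(α0+H, β0+T)."""
    a, b = alpha0 + heads, beta0 + tails
    # Mode (α-1)/(α+β-2) is only defined for α, β > 1
    mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else None
    ci = beta.ppf([0.025, 0.975], a, b)     # 95% credible interval via F⁻¹
    return a, b, mode, ci

# Weak symmetric prior, ten heads in a row: posterior piles up near p = 1
print(posterior(0.5, 0.5, heads=10, tails=0))
# Strong symmetric prior, same data: posterior barely moves off p = 0.5
print(posterior(30, 30, heads=10, tails=0))
```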
**Bayesian Linear Regression: Credible Bands**
🖱 tap to add a point · drag Y for prior · clear/reseed buttons below

Instead of one best-fit line, Bayesian linear regression carries a whole *distribution* of plausible lines. With likelihood $y \mid x, w \sim \mathcal{N}(x^\top w, \sigma^2)$ and Gaussian prior $w \sim \mathcal{N}(0, \tau^2 I)$ on $w = [m, b]$, the posterior is also Gaussian, with $\Sigma_{\mathrm{post}} = \left(\tfrac{1}{\sigma^2} X^\top X + \tfrac{1}{\tau^2} I\right)^{-1}$ and $\mu_{\mathrm{post}} = \tfrac{1}{\sigma^2} \Sigma_{\mathrm{post}} X^\top y$. The solid blue line is $\mu_{\mathrm{post}}$; the shaded band is the 95% credible interval $\hat{y}(x) \pm 1.96\sqrt{\phi(x)^\top \Sigma_{\mathrm{post}}\, \phi(x)}$ where $\phi(x) = [x, 1]^\top$; the 30 faint lines behind it are independent draws $w^{(s)} \sim \mathcal{N}(\mu_{\mathrm{post}}, \Sigma_{\mathrm{post}})$ via a Cholesky factor of $\Sigma_{\mathrm{post}}$. **Click** anywhere in the plot to add a data point and watch the band tighten where you added it. **Move the cursor vertically** to scrub the prior precision $1/\tau^2$: at the top, a strong prior squeezes the band uniformly narrow but biases the slope toward zero; at the bottom, the prior is nearly flat and the data speak for themselves, so the band fans out at the extremes where you have few observations. The dashed gray line is the data-generating ground truth.
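A compact NumPy sketch of the posterior and band computation; the noise and prior values ($\sigma^2 = 0.25$, $\tau^2 = 10$) and the ground-truth line are hypothetical stand-ins, since the sim's actual constants aren't stated:

```python
import numpy as np

def blr_posterior(x, y, sigma2=0.25, tau2=10.0):
    """Posterior over w = [m, b]: Σ = (XᵀX/σ² + I/τ²)⁻¹, μ = Σ Xᵀy / σ²."""
    X = np.column_stack([x, np.ones_like(x)])        # rows are φ(x) = [x, 1]ᵀ
    Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
    mu = Sigma @ X.T @ y / sigma2
    return mu, Sigma

def credible_band(x_grid, mu, Sigma):
    """95% band: ŷ(x) ± 1.96·√(φ(x)ᵀ Σ φ(x))."""
    Phi = np.column_stack([x_grid, np.ones_like(x_grid)])
    mean = Phi @ mu
    half = 1.96 * np.sqrt(np.einsum("ij,jk,ik->i", Phi, Sigma, Phi))
    return mean - half, mean + half

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 15)
y = 0.8 * x - 0.3 + 0.5 * rng.standard_normal(15)    # hypothetical ground truth
mu, Sigma = blr_posterior(x, y)
L = np.linalg.cholesky(Sigma)                        # draws via Cholesky factor
draws = mu + rng.standard_normal((30, 2)) @ L.T      # 30 plausible [m, b] pairs
```

Adding points to `x, y` near one end of the range and recomputing shows the band tightening locally, exactly as clicking does in the sim.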
**p-Hacking: Garden of Forking Paths**
🖱 drag Y to set number of sub-tests K (1 = honest, 20 = pure fishing) · click to burst-sample · [R] reset

Both panels are generated from data where the null hypothesis is **true**: two groups, each drawn from $\mathcal{N}(0,1)$, with $n = 12$ per arm, compared with a two-sided Welch t-test. Under $H_0$ a p-value is itself $\mathrm{Uniform}(0,1)$, so the **left** histogram of an honest analyst running one pre-specified test is flat, and the long-run false-positive rate sits at the nominal $\alpha = 0.05$. The **right** panel shows the same null world, but here the investigator runs $K$ separate sub-analyses on each study (different subgroup splits, exclusions, transformations) and reports the **minimum** p-value. The minimum of $K$ i.i.d. uniforms is $\mathrm{Beta}(1, K)$, so the reported-p distribution collapses toward zero, and the false-positive rate inflates to $1 - (1 - \alpha)^K$: about 23% at $K = 5$ and roughly 64% at $K = 20$. **Drag mouseY** to scrub $K$ from 1 (matches honest) to 20 (pure p-hacking). This is the multiple-comparisons illusion documented in Simmons, Nelson & Simonsohn (2011), "False-Positive Psychology": with enough researcher degrees of freedom, **any** dataset can be made to show 'significant' effects. Bins where $p < \alpha$ are tinted yellow; the dashed line marks the uniform expectation $n_{\mathrm{studies}}/20$ per bin.
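A quick simulation of the same inflation, assuming for simplicity that the $K$ sub-tests are independent (each draws fresh arms), which is exactly the regime where the $\mathrm{Beta}(1,K)$ and $1-(1-\alpha)^K$ formulas hold; in the sim the sub-analyses reanalyze one dataset, so the inflation is similar in spirit but not identical:

```python
import numpy as np
from scipy.stats import ttest_ind

def min_p_study(K, n=12, rng=None):
    """One null study: K Welch t-tests on fresh N(0,1) arms; report the minimum p."""
    rng = rng or np.random.default_rng()
    ps = [ttest_ind(rng.standard_normal(n), rng.standard_normal(n),
                    equal_var=False).pvalue          # equal_var=False → Welch
          for _ in range(K)]
    return min(ps)

rng = np.random.default_rng(0)
for K in (1, 5, 20):
    ps = np.array([min_p_study(K, rng=rng) for _ in range(2000)])
    print(f"K={K:2d}  false-positive rate ≈ {np.mean(ps < 0.05):.2f}"
          f"  (theory {1 - 0.95**K:.2f})")
```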
**Bootstrap Distribution Builder**
🖱 click to add a data point · [M] toggle mean/median · [R] reset

We have a single fixed sample $\{x_1, \dots, x_n\}$, shown as the strip on top. The **bootstrap** repeatedly draws a new sample of size $n$ **with replacement** from those same points and computes a statistic $T^* = T(x_1^*, \dots, x_n^*)$: by default the sample mean $\bar{X}^*$, or the sample median if you toggle. Each frame, one resample is taken (the chosen points pulse gold) and $T^*$ is added to the growing histogram below. The shaded yellow band is the empirical 2.5/97.5 percentile interval, a bootstrap 95% confidence interval for the underlying parameter. For the mean, classical theory predicts $\bar{X}^* \approx \mathcal{N}(\bar{x}, s^2/n)$, and that Normal curve is overlaid in orange once enough resamples accumulate; for the median there is no clean closed form, so we let the bootstrap *be* the answer. **Try this:** click far to the right of the data to add a single outlier. In `mean` mode the histogram shifts and widens dramatically; switch to `median` mode and the distribution barely budges, a live demonstration of why the median is a robust estimator.
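A percentile-bootstrap sketch in NumPy; the sample values and the outlier at 9.0 are made up to reproduce the mean-vs-median experiment:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=5000, seed=0):
    """Percentile bootstrap: resample with replacement, take 2.5/97.5 percentiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    t_star = stat(data[idx], axis=1)          # one statistic T* per resample
    return np.percentile(t_star, [2.5, 97.5])

x = [2.1, 2.4, 1.9, 2.8, 2.2, 2.5, 2.0, 2.6]
print("mean CI:   ", bootstrap_ci(x, np.mean))
print("median CI: ", bootstrap_ci(x, np.median))
# Add one outlier: the mean interval jumps and widens, the median barely moves
x_out = x + [9.0]
print("mean CI with outlier:   ", bootstrap_ci(x_out, np.mean))
print("median CI with outlier: ", bootstrap_ci(x_out, np.median))
```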