Suppose I tell you that only 1% of people with COVID have a body temperature below 97°F. If you take someone's temperature and measure less than 97°F, what is the probability that they have COVID? If your answer is 1%, you have committed the conditional probability fallacy, and you have essentially done what researchers do whenever they use p-values. In reality, these inverse probabilities (i.e., the probability of having COVID given a low temperature, and the probability of a low temperature given COVID) are not the same.
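As a sketch, Bayes' theorem makes the gap concrete. The 1% figure is from the text; the prevalence and the rate of low temperature among healthy people below are invented for illustration:

```python
# Bayes' theorem on the COVID example: Pr(COVID | temp < 97°F) is NOT
# the same as the 1% figure Pr(temp < 97°F | COVID).
p_low_given_covid = 0.01    # given in the text
p_covid = 0.02              # assumed prevalence of COVID
p_low_given_healthy = 0.05  # assumed rate of low temperature among the healthy

# Total probability of measuring a low temperature
p_low = p_low_given_covid * p_covid + p_low_given_healthy * (1 - p_covid)

# Bayes' theorem: Pr(COVID | low temp)
p_covid_given_low = p_low_given_covid * p_covid / p_low
print(f"{p_covid_given_low:.4f}")  # far from 0.01
```

With these (made-up) numbers, the answer is roughly 0.4%, not 1%; with other plausible numbers it could be very different again, which is exactly the point: the inverse probability depends on quantities the 1% figure does not contain.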

In practically every situation in which people use statistical significance, they commit the conditional probability fallacy.

Now if we gather some new data (D), what needs to be examined is the probability of the null hypothesis given that we observed this data, not the inverse! That is, Pr(H0|D) should be compared with a 1% threshold, not Pr(D|H0). In our current methods of statistical testing, we use the latter as a proxy for the former.

By using p-values we effectively act as though we commit the conditional probability fallacy. The two values that are conflated are Pr(H0 | p<α) and Pr(p<α | H0). We conflate the chances of observing a particular outcome under a hypothesis with the chances of that hypothesis being true given that we observed that particular outcome.
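A quick simulation shows how far apart these two probabilities can be. The prior on H0, the effect size, and the sample size below are illustrative assumptions, not values from the text:

```python
# Even with α = 0.05, Pr(H0 | p < α) — the fraction of "significant"
# results where the null is actually true — can be far from 5%.
import math
import random

random.seed(0)

def p_value(n, mu):
    """Two-sided z-test p-value for the mean of n N(mu, 1) draws."""
    xbar = sum(random.gauss(mu, 1) for _ in range(n)) / n
    z = abs(xbar) * math.sqrt(n)
    return 1 - math.erf(z / math.sqrt(2))  # 2 * (1 - Phi(z))

alpha, n_trials = 0.05, 20000
sig, sig_and_null = 0, 0
for _ in range(n_trials):
    h0_true = random.random() < 0.8   # assumption: H0 is true 80% of the time
    mu = 0.0 if h0_true else 0.3      # assumed effect size under H1
    if p_value(25, mu) < alpha:
        sig += 1
        sig_and_null += h0_true

print(f"Pr(p<α | H0) = α = {alpha}")
print(f"Pr(H0 | p<α) ≈ {sig_and_null / sig:.2f}")  # substantially larger than α
```

The first quantity is fixed at 5% by construction; the second depends on the prior and the power, and under these assumptions it comes out several times larger.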

Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis or about the probability that random chance produced the observed data. The p-value is neither.

What alternatives do we have to p-values? Some suggest using confidence intervals to estimate effect sizes. Confidence intervals may have some advantages, but they still suffer from the same fallacies (as nicely explained in Morey et al. 2016). Another alternative is to use Bayes factors as a measure of evidence. Bayesian model comparison has been around for nearly two decades but has not gained much traction, for a number of practical reasons.
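For concreteness, here is a minimal sketch of what a Bayes factor looks like: 60 heads in 100 coin flips (invented data), comparing H0: θ = 0.5 against H1: θ ~ Uniform(0, 1):

```python
# Bayes factor for a binomial experiment. BF01 is the ratio of the
# marginal likelihoods of the data under H0 and H1; the binomial
# coefficient cancels in the ratio.
from math import comb

n, k = 100, 60
m0 = 0.5 ** n                    # likelihood of the data under H0: theta = 0.5
m1 = 1 / ((n + 1) * comb(n, k))  # ∫ θ^k (1-θ)^(n-k) dθ = 1 / ((n+1) C(n,k))
bf01 = m0 / m1
print(f"BF01 ≈ {bf01:.2f}")
```

Here the Bayes factor comes out close to 1 (roughly equal support for both hypotheses), even though the corresponding two-sided p-value hovers near the conventional significance threshold.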

The bottom line is that there is practically no correct way to use p-values. It does not matter whether you understand what a p-value means, or whether you frame it as a decision procedure rather than a method for inference. If you use p-values, you are effectively behaving like someone who confuses conditional probabilities. Science needs a mathematically sound framework for doing statistics.

In future posts I will suggest a simple new framework for quantifying evidence. This framework is based on Bayes factors but makes a basic assumption: that every experiment has a probability of error that cannot be objectively determined. From this basic assumption a method of evidence quantification emerges that is highly reminiscent of p-value testing but is 1) mathematically sound and 2) practical. (In contrast to Bayes factors, it produces numbers that are not extremely large or small.)

- "DeepSpeed: Extreme-scale model training for everyone" (demonstrates training of GPT-3-180b & 1t-parameter models ("The trillion-parameter model has 298 layers of Transformers with a hidden dimension of 17,408 and is trained with sequence length 2,048 and batch size 2,048"), w/open-source code; able to use CPU+GPU RAM simultaneously for 13b-parameter models per node per Pudipeddi et al 2020; sparse attention for saving RAM; approximated Adam gradients for saving bandwidth)
- "GPT-f: Generative Language Modeling for Automated Theorem Proving", Polu & Sutskever 2020b (GPT-2 for Metamath scales & can bootstrap its theorem-proving ability—onward to IMO!)
- Why Tool AIs Want to be Agent AIs
- Unraveling the JPEG
- The First Roman Fonts

Think of an archive library as a bookshelf, with some books on it (the separate .o files).

Some books may refer you to other books (via unresolved symbols), which may be on the same, or on a different, bookshelf.
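A toy model of the analogy (all names invented): the linker starts from the symbols it needs, pulls the book that defines each one off the shelf, and repeats until nothing is left unresolved:

```python
# Each "book" (object file) defines some symbols and references others.
books = {
    "main.o":  {"defines": {"main"}, "needs": {"parse", "log"}},
    "parse.o": {"defines": {"parse"}, "needs": {"log"}},  # on shelf libutil.a
    "log.o":   {"defines": {"log"}, "needs": set()},      # on shelf liblog.a
}

linked, unresolved = set(), {"main"}
while unresolved:
    sym = unresolved.pop()
    # Find the book that defines this symbol.
    book = next((b for b, info in books.items() if sym in info["defines"]), None)
    if book and book not in linked:
        linked.add(book)
        # Its references become unresolved, unless already satisfied.
        defined = {s for b in linked for s in books[b]["defines"]}
        unresolved |= books[book]["needs"] - defined

print(sorted(linked))  # all three books end up pulled in
```

Real linkers differ in important details (e.g. traditional Unix linkers scan archives left to right, so library order on the command line matters), but the pull-in-on-demand loop is the core idea.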

The Not Rocket Science Rule Of Software Engineering: automatically maintain a repository of code that always passes all the tests.

Time passed, that system aged and (as far as I know) went out of service. I became interested in revision control, especially systems that enforced this Not Rocket Science Rule. Surprisingly, only one seemed to do so automatically (Aegis, written by Peter Miller, another charming no-nonsense Australian who is now, sadly, approaching death).

Fantastic post by Jason Crawford (The Roots of Progress)

A major theme of the 19th century was the transition from plant and animal materials to synthetic versions or substitutes, mostly from non-organic sources.

(Ivory, fertilizer, lighting, smelting, shellac)

There are many other biomaterials we once relied on—rubber, silk, leather and furs, straw, beeswax, wood tar, natural inks and dyes—that have been partially or fully replaced by synthetic or artificial substitutes, especially plastics, that can be derived from mineral sources. They had to be replaced, because the natural sources couldn't keep up with rapidly increasing demand. The only way to ramp up production—the only way to escape the Malthusian trap and sustain an exponentially increasing population while actually improving everyone's standard of living—was to find new, more abundant sources of raw materials and new, more efficient processes to create the end products we needed. As you can see from some of these examples, this drive to find substitutes was often conscious and deliberate, motivated by an explicit understanding of the looming resource crisis.

In short, plant and animal materials had become unsustainable.

To my mind, any solution to sustainability that involves reducing consumption or lowering our standard of living is no solution at all. It is giving up and admitting defeat. If running out of a resource means that we have to regress back to earlier technologies, that is a failure—a failure to do what we did in the 19th century and replace unsustainable technologies with new, improved ones that can take humanity to the next level and support orders of magnitude more growth.

free classic literature ebooks

Under general relativity, gravity is not a force. Instead it is a distortion of spacetime. Objects in free-fall move along geodesics (straight lines) in spacetime, as seen in the inertial frame of reference on the right. When standing on Earth we experience a frame of reference that is accelerating upwards, causing objects in free-fall to move along parabolas, as seen in the accelerating frame of reference on the left.
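In coordinates, the "straight lines in spacetime" the paragraph describes are solutions of the geodesic equation, where the Christoffel symbols Γ encode the distortion of spacetime:

```latex
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^{\mu}_{\alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0
```

In flat spacetime the Γ terms vanish and this reduces to ordinary straight-line motion; in the accelerating frame of someone standing on Earth, the same free-fall geodesics appear as parabolas.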

- ScienceClic video (recommended): A new way to visualize General Relativity
- Veritasium video: Why Gravity is NOT a Force

It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.

We might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk.

The risk of an existential catastrophe then looks like an inverted U-shape over time:

There is an analog to this in environmental economics, called the "environmental Kuznets curve." It was theorized that pollution initially rises as countries develop, but, as people grow richer and begin to value a clean environment more, they will work to reduce pollution again. That theory has arguably been vindicated by the path that Western countries have taken with regard to water and air pollution, for example, over the past century.

Carl Sagan was the one who coined the term "time of perils." Derek Parfit called it the "hinge of history."

On the other extreme, humanity is extremely fragile. No matter how high a fraction of our resources we dedicate to safety, we cannot prevent an unrecoverable catastrophe. Perhaps weapons of mass destruction are simply too easy to build, and no amount of even totalitarian safety efforts can prevent some lunatic from eventually causing nuclear annihilation. We might indeed be living in this world; this would be the model's version of Bostrom's "vulnerable world hypothesis," Hanson's "Great Filter," or the "Doomsday Argument."

Perhaps, if we followed this argument to the end, we might reach the counterintuitive conclusion that the most effective thing we can do to reduce the risk of an existential catastrophe is not to invest in safety directly or to try to persuade people to be more long-term oriented, but rather to spend money on alleviating poverty, so that more people are well-off enough to care about safety.

It's been 13 years since Yudkowsky published the sequences, and 11 years since he wrote "Rationality is Systematized Winning".

So where are all the winners?

Immediately after "Rationality is Systematized Winning", Scott Alexander wrote Extreme Rationality: It's Not That Great, claiming that there is "approximately zero empirical evidence that x-rationality has a large effect on your practical success".

The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse.

Spin networks are states of quantum geometry in a theory of quantum gravity, discovered by Lee Smolin and Carlo Rovelli, which is the conceptual ancestor of the imaginary physics of *Schild's Ladder*.

Cool, but also damning?

"Proposed by Michael Spivak in 1965, as an exercise in *Calculus*"