Normalisation corresponds to a sensible notion of /computation/ in most type systems; most theories with a natural number type, for example, will have terms like 1 + 5, (1 + 1) * (1 + 1 + 1), and so on all reduce to 6.
In Abel’s wonderful analogy, normalisation increases /entropy/, in the thermodynamic sense. In physics, entropy is a measure of /disorder/ in a system. A pile of firewood contains a lot of complex organic molecules; lighting it on fire breaks them down into much simpler molecules, reducing the total amount of “order” in the system and converting some of the energy stored in chemical bonds into heat. Energy is conserved, but the entropy of the (wood + oxygen + smoke + ashes) system increases. In our PL-theoretic world, too, normalisation collapses structure: not chemical structure, but syntactic structure.
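To make that concrete, here is a toy normaliser in Haskell (the Term type is my own illustration, not anything from Abel): differently-shaped syntax trees all collapse to the same structureless normal form.

```haskell
-- Toy arithmetic terms: a lot of syntactic "structure"...
data Term = Lit Integer | Add Term Term | Mul Term Term

-- ...which normalisation collapses down to a single numeral.
normalise :: Term -> Integer
normalise (Lit n)   = n
normalise (Add a b) = normalise a + normalise b
normalise (Mul a b) = normalise a * normalise b

main :: IO ()
main = do
  print (normalise (Add (Lit 1) (Lit 5)))                        -- 6
  print (normalise (Mul (Add (Lit 1) (Lit 1))
                        (Add (Lit 1) (Add (Lit 1) (Lit 1)))))    -- 6
```

Two terms with quite different trees end up indistinguishable: the entropy of the expression, so to speak, has gone up.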
Quasiquotes in GHC serve a different use case than macros do: they let you work with /foreign/ languages, whereas macros let you add slight enhancements to your /existing/ language.
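As a minimal sketch (the quoter `str` is my own illustration, built on the real QuasiQuoter record and stringE from the template-haskell library), here is a quasiquoter that embeds raw, uninterpreted text, the simplest possible "foreign language":

```haskell
module Str (str) where

import Language.Haskell.TH (stringE)
import Language.Haskell.TH.Quote (QuasiQuoter (..))

-- Embed the quoted text verbatim as a String literal; no Haskell
-- lexing is applied to its contents.
str :: QuasiQuoter
str = QuasiQuoter
  { quoteExp  = stringE
  , quotePat  = unsupported
  , quoteType = unsupported
  , quoteDec  = unsupported
  }
  where
    unsupported _ = fail "str: only usable in expression position"
```

In a separate module (GHC requires the quoter to be compiled before it is used), enabling QuasiQuotes lets you write [str|she said "hi"|] and get back exactly that string; a quoter for a real foreign language would run an actual parser in quoteExp instead of stringE.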
We use braids and linking number to explain why automobile shades fold into an odd number of loops.
Good recommendations and references.
Surprisingly, given all the time I’ve spent on LessWrong, I hadn’t heard of likelihood ratios / Bayes factors before.
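For the record, the core identity (standard notation, my own summary rather than anything from the linked post) is that posterior odds are prior odds times the likelihood ratio:

\[
  \underbrace{\frac{P(H_1 \mid E)}{P(H_2 \mid E)}}_{\text{posterior odds}}
  =
  \underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{Bayes factor}}
  \times
  \underbrace{\frac{P(H_1)}{P(H_2)}}_{\text{prior odds}}
\]

So evidence that is 90% likely under H_1 but only 20% likely under H_2 carries a Bayes factor of 4.5, turning prior odds of 1:9 into posterior odds of 4.5:9 = 1:2.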
Why are handsome men jerks? Why does Hollywood ruin books? Why do restaurants serve good burgers or good fries but not both? Spurious negative correlations everywhere.
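A quick simulation of the effect (my own sketch, not from the linked piece): draw two /independent/ qualities, keep only the pairs that clear a combined bar, and a negative correlation appears among the survivors.

```haskell
import System.Random (mkStdGen, randoms)

-- Pearson correlation of two equal-length samples.
correlation :: [Double] -> [Double] -> Double
correlation xs ys = cov / (sd xs * sd ys)
  where
    n       = fromIntegral (length xs)
    mean zs = sum zs / fromIntegral (length zs)
    mx      = mean xs
    my      = mean ys
    cov     = sum (zipWith (\x y -> (x - mx) * (y - my)) xs ys) / n
    sd zs   = sqrt (sum [ (z - mean zs) ^ (2 :: Int) | z <- zs ] / n)

main :: IO ()
main = do
  let looks = take 20000 (randoms (mkStdGen 1)) :: [Double]  -- uniform [0,1)
      charm = take 20000 (randoms (mkStdGen 2)) :: [Double]  -- independent of looks
      -- Selection: we only ever "see" people who clear some overall bar.
      dated = [ (l, c) | (l, c) <- zip looks charm, l + c > 1.2 ]
  putStrLn $ "whole population: r = " ++ show (correlation looks charm)
  putStrLn $ "selected sample:  r = " ++ show (correlation (map fst dated) (map snd dated))
```

The first correlation comes out near zero, the second clearly negative: conditioning on "handsome OR nice enough to date" manufactures the jerk effect out of thin air.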