Quotient Types for Programmers

Patrick Collison questions

Three especially interesting to me:

Why are certain things getting so much more expensive?

Spending on healthcare in the US is up 9X in real terms since 1960. K-12 education spending in the US has increased by 2-3X per student per year since 1960. The cost of college in the US has more than doubled (again, in real terms) since 1984. Growth in everything from construction costs to childcare costs is significantly outpacing inflation. Lots more at SSC and from Tyler.

What’s going on? Why are we seeing dramatic declines in costs in many areas of the economy and such steep increases in others? How much of the cost growth is unmeasured improvement in quality and how much is growing inefficiency? How should one predict a priori whether a sector will exhibit increasing or decreasing costs relative to inflation? What do we do about it all?

Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

The Empire State Building was built in 410 days. The Lockheed P-80, the first jet aircraft deployed by the US Air Force, took 143 days from project initiation to first deployment. The Apollo Program lasted 9 years from initiation to moon landing. While a lot happened in the US during World War II, it’s easy to forget how short the period in question was: American involvement lasted 3 years, 8 months, and 23 days.

Meanwhile, a BART extension is delayed more than a year because the wrong networking equipment was installed. (The 16-mile extension will cost circa $2.3 billion and will have taken around 7 years.)

Is the sense that there are fewer contemporary examples of rapid progress justified? If so, what’s going on?

Why are programming environments still so primitive?

In different ways, Mathematica, Genera, and Smalltalk put almost every other programming environment to shame. Atom, Sublime Text, and Visual Studio Code are neat, but they do not represent a great improvement over TextMate circa 2007. Emacs and Vim have advanced even less.

Why can’t I connect my editor to a running program and hover over values to see what they last were? Why isn’t time-traveling debugging widely deployed? Why can’t I debug a function without restarting my program? Why in the name of the good lord are REPLs still textual? Why can’t I copy a URL to my editor to enable real-time collaboration with someone else? Why isn’t my editor integrated with the terminal? Why doesn’t autocomplete help me based on the adjacent problems others have solved?

On Buffon Machines and Numbers

Abstract:

The well-known needle experiment of Buffon can be regarded as an analog (i.e., continuous) device that stochastically “computes” the number 2/pi ~ 0.63661, which is the experiment’s probability of success. Generalizing the experiment and simplifying the computational framework, we consider probability distributions, which can be produced perfectly, from a discrete source of unbiased coin flips. We describe and analyse a few simple Buffon machines that generate geometric, Poisson, and logarithmic-series distributions. We provide human-accessible Buffon machines, which require a dozen coin flips or less, on average, and produce experiments whose probabilities of success are expressible in terms of numbers such as exp(-1), log 2, sqrt(3), cos(1/4), zeta(5). Generally, we develop a collection of constructions based on simple probabilistic mechanisms that enable one to design Buffon experiments involving compositions of exponentials and logarithms, polylogarithms, direct and inverse trigonometric functions, algebraic and hypergeometric functions, as well as functions defined by integrals, such as the Gaussian error function.
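As a rough illustration of the computational model (a toy sketch of my own, not the paper's constructions and not the implementation linked below): a machine consumes fair coin flips and succeeds with some probability, and small combinators build new success probabilities out of old ones.

```haskell
import Control.Monad (liftM2, replicateM)
import System.Random (randomIO)

-- A Buffon "machine": a randomized computation that consumes fair coin
-- flips and succeeds (returns True) with some probability p.
type Buffon = IO Bool

-- The basic source: an unbiased coin, success probability 1/2.
coin :: Buffon
coin = randomIO

-- Product: succeeds iff both machines succeed, so probability p * q.
both :: Buffon -> Buffon -> Buffon
both = liftM2 (&&)

-- Complement: success probability 1 - p.
invert :: Buffon -> Buffon
invert = fmap not

-- Geometric generator: count failures of `m` before its first success.
geometric :: Buffon -> IO Int
geometric m = go 0
  where
    go n = do
      b <- m
      if b then pure n else go (n + 1)

-- Quick check: `both coin coin` should succeed about a quarter of the time.
main :: IO ()
main = do
  xs <- replicateM 10000 (both coin coin)
  print (fromIntegral (length (filter id xs)) / 10000 :: Double)
```

The paper's actual machines compose constructions like these to hit probabilities such as exp(-1) and log 2 with only a handful of flips on average.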

Haskell implementation

Regular Bands

STOKE

STOKE is a stochastic optimizer and program synthesizer for the x86-64 instruction set. STOKE uses random search to explore the extremely high-dimensional space of all possible program transformations. Although any one random transformation is unlikely to produce a code sequence that is desirable, the repeated application of millions of transformations is sufficient to produce novel and non-obvious code sequences. STOKE can be used in many different scenarios, such as optimizing code for performance or size, synthesizing an implementation from scratch, or trading floating-point accuracy for performance. As a superoptimizer, STOKE has been shown to outperform the code produced by general-purpose and domain-specific compilers, and in some cases expert hand-written code.
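As a toy illustration of the random-search idea only (STOKE itself works on x86-64 and uses MCMC with a cost mixing correctness and performance; the instruction set and the greedy acceptance rule below are made up for the sketch):

```haskell
import System.Random (randomRIO)

-- A tiny straight-line instruction set over one integer accumulator.
data Instr = Add Int | Mul Int | Neg deriving Show

run :: [Instr] -> Int -> Int
run prog x0 = foldl step x0 prog
  where
    step acc (Add n) = acc + n
    step acc (Mul n) = acc * n
    step acc Neg     = negate acc

-- Cost = disagreement with the reference on test inputs, plus a small
-- length penalty so the search also prefers shorter programs.
cost :: (Int -> Int) -> [Int] -> [Instr] -> Int
cost ref tests prog =
  sum [abs (ref x - run prog x) | x <- tests] + length prog

randInstr :: IO Instr
randInstr = do
  k <- randomRIO (0 :: Int, 2)
  n <- randomRIO (-3, 3)
  pure $ case k of
    0 -> Add n
    1 -> Mul n
    _ -> Neg

-- One random transformation: insert, delete, or replace an instruction.
mutate :: [Instr] -> IO [Instr]
mutate prog = do
  k   <- randomRIO (0 :: Int, 2)
  i   <- randomRIO (0, max 0 (length prog - 1))
  ins <- randInstr
  pure $ case (k, splitAt i prog) of
    (0, (pre, post))     -> pre ++ ins : post   -- insert
    (1, (pre, _ : post)) -> pre ++ post         -- delete
    (_, (pre, _ : post)) -> pre ++ ins : post   -- replace
    (_, (pre, []))       -> pre ++ [ins]

-- Greedy random search: keep a mutation only when it does not increase
-- the cost (STOKE instead accepts some worse moves, MCMC-style).
search :: (Int -> Int) -> [Int] -> Int -> [Instr] -> IO [Instr]
search _ _ 0 prog = pure prog
search ref tests steps prog = do
  cand <- mutate prog
  let keep = if cost ref tests cand <= cost ref tests prog then cand else prog
  search ref tests (steps - 1) keep

main :: IO ()
main = do
  let ref = \x -> 3 * x + 1
  best <- search ref [-5 .. 5] 20000 []
  print best                       -- best program found
  print (cost ref [-5 .. 5] best)  -- and its cost
```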

Ersatz

A monad for expressing SAT or QSAT problems using observable sharing.

Sturdy

A Haskell library for creating abstract interpreters.

Gödel’s interpretation of intuitionism

On the Dialectica interpretation.

Real Differences between OT and CRDT for Co-Editors

Abstract:

OT (Operational Transformation) was invented for supporting real-time co-editors in the late 1980s and has evolved to become a core technique used in today’s working co-editors and adopted in major industrial products. CRDT (Commutative Replicated Data Type) for co-editors was first proposed around 2006, under the name of WOOT (WithOut Operational Transformation). Follow-up CRDT variations are commonly labeled as “post-OT” techniques capable of making concurrent operations natively commutative, and have made broad claims of superiority over OT solutions, in terms of correctness, time and space complexity, simplicity, etc. Over a decade later, however, CRDT solutions are rarely found in working co-editors, while OT solutions remain the choice for building the vast majority of co-editors. Contradictions between this reality and CRDT’s purported advantages have been the source of much debate and confusion in co-editing research and developer communities. What is CRDT really to co-editing? What are the real differences between OT and CRDT for co-editors? What are the key factors that may have affected the adoption of and choice between OT and CRDT for co-editors in the real world? In this paper, we report our discoveries, in relation to these questions and beyond, from a comprehensive review and comparison study on representative OT and CRDT solutions and working co-editors based on them. Moreover, this work reveals facts and presents evidence that refutes CRDT’s claimed advantages over OT. We hope the results reported in this paper will help clear up common myths, misconceptions, and confusions surrounding alternative co-editing techniques, and accelerate progress in co-editing technology for real world applications.
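For readers who have never seen OT concretely, here is a minimal, textbook-style inclusion transformation for character-wise inserts and deletes (a generic sketch, not taken from the paper; real co-editors also need a tie-break such as a site id for inserts at the same position, plus far more machinery):

```haskell
-- A character-wise text operation, plus an identity for the case where a
-- concurrent delete already removed our target.
data Op = Ins Int Char | Del Int | Nop deriving Show

apply :: Op -> String -> String
apply (Ins p c) s = let (a, b) = splitAt p s in a ++ c : b
apply (Del p)   s = let (a, b) = splitAt p s in a ++ drop 1 b
apply Nop       s = s

-- Inclusion transformation: rewrite operation `a` so it can be applied
-- after concurrent operation `b`, preserving a's intent.
transform :: Op -> Op -> Op
transform (Ins p c) (Ins q _)
  | p <= q    = Ins p c            -- equal positions need a site-id tie-break
  | otherwise = Ins (p + 1) c
transform (Ins p c) (Del q)
  | p <= q    = Ins p c
  | otherwise = Ins (p - 1) c
transform (Del p) (Ins q _)
  | p < q     = Del p
  | otherwise = Del (p + 1)
transform (Del p) (Del q)
  | p < q     = Del p
  | p > q     = Del (p - 1)
  | otherwise = Nop                -- both sites deleted the same character
transform a Nop = a
transform Nop _ = Nop

-- Convergence check: both sites end up with the same document.
main :: IO ()
main = do
  let doc = "abc"
      a   = Ins 1 'X'              -- site 1 inserts
      b   = Del 2                  -- site 2 concurrently deletes 'c'
  print (apply (transform a b) (apply b doc))   -- "aXb"
  print (apply (transform b a) (apply a doc))   -- "aXb"
```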

A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models

The pattern theory of Grenander is a mathematical framework where the patterns are represented by probability models on random variables of algebraic structures. In this paper, we review three families of probability models, namely, the discriminative models, the descriptive models, and the generative models. A discriminative model is in the form of a classifier. It specifies the conditional probability of the class label given the input signal. The descriptive model specifies the probability distribution of the signal, based on an energy function defined on the signal. A generative model assumes that the signal is generated by some latent variables via a transformation. We shall review these models within a common framework and explore their connections. We shall also review the recent developments that take advantage of the high approximation capacities of deep neural networks.
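Roughly, in symbols (the notation below is mine, not necessarily the paper's):

```latex
% discriminative: a classifier, modeling the label given the signal
p_\theta(y \mid x)

% descriptive: an energy-based density over the signal itself
p_\theta(x) = \frac{1}{Z(\theta)}\,\exp\!\big(-E_\theta(x)\big)

% generative: the signal is a transformation of latent variables
z \sim p(z), \qquad x = g_\theta(z) + \epsilon
```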

ISL

isl is a thread-safe, LGPL-licensed C library for manipulating sets and relations of integer tuples bounded by affine constraints.

Taming the C monster: Haskell FFI techniques

Most modern-seeming FFI best practices
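The basic mechanism, for anyone who has not seen it (a minimal example rather than the linked article's techniques): a foreign import declaration binds a C symbol to a Haskell name with C-compatible types.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- Bind cos() from the C math library. `unsafe` skips the safe-call
-- bookkeeping; it is appropriate for short, non-blocking C functions
-- that never call back into Haskell.
foreign import ccall unsafe "math.h cos"
  c_cos :: CDouble -> CDouble

main :: IO ()
main = print (realToFrac (c_cos 0) :: Double)   -- 1.0
```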

Equality Saturation

Optimizations in a traditional compiler are applied sequentially, with each optimization destructively modifying the program to produce a transformed program that is then passed to the next optimization. We present a new approach for structuring the optimization phase of a compiler. In our approach, optimizations take the form of equality analyses that add equality information to a common intermediate representation. The optimizer works by repeatedly applying these analyses to infer equivalences between program fragments, thus saturating the intermediate representation with equalities. Once saturated, the intermediate representation encodes multiple optimized versions of the input program. At this point, a profitability heuristic picks the final optimized program from the various programs represented in the saturated representation. Our proposed way of structuring optimizers has a variety of benefits over previous approaches: our approach obviates the need to worry about optimization ordering, enables the use of a global optimization heuristic that selects among fully optimized programs, and can be used to perform translation validation, even on compilers other than our own. We present our approach, formalize it, and describe our choice of intermediate representation. We also present experimental results showing that our approach is practical in terms of time and space overhead, is effective at discovering intricate optimization opportunities, and is effective at performing translation validation for a realistic optimizer.
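Here is a toy, list-based Haskell sketch of the idea (real equality saturation uses an e-graph / E-PEG so that equivalent subterms are shared instead of enumerated; the expression language, rules, and cost model below are invented for illustration):

```haskell
import qualified Data.Set as Set
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A tiny expression language standing in for a compiler IR.
data Expr = Var String | Lit Int | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Ord, Show)

-- Equality axioms: each returns variants known to be equal to the input.
rewrites :: Expr -> [Expr]
rewrites e = commute e ++ strength e
  where
    commute (Add x y) = [Add y x]
    commute (Mul x y) = [Mul y x]
    commute _         = []
    strength (Mul x (Lit 2)) = [Add x x]   -- x * 2  =  x + x
    strength (Mul x (Lit 1)) = [x]         -- x * 1  =  x
    strength _               = []

-- Apply the axioms at the root and inside every subterm, non-destructively.
variants :: Expr -> [Expr]
variants e = rewrites e ++ inside e
  where
    inside (Add x y) = [Add x' y | x' <- variants x] ++ [Add x y' | y' <- variants y]
    inside (Mul x y) = [Mul x' y | x' <- variants x] ++ [Mul x y' | y' <- variants y]
    inside _         = []

-- Saturate the set of known-equal programs to a fixpoint (bounded so the
-- toy terminates even when the axioms could generate new terms forever).
saturate :: Int -> Set.Set Expr -> Set.Set Expr
saturate 0 seen = seen
saturate n seen
  | Set.size grown == Set.size seen = seen
  | otherwise                       = saturate (n - 1) grown
  where
    grown = Set.union seen (Set.fromList (concatMap variants (Set.toList seen)))

-- Profitability heuristic: pick the cheapest of all equal programs.
cost :: Expr -> Int
cost (Add x y) = 1 + cost x + cost y
cost (Mul x y) = 4 + cost x + cost y
cost _         = 1

optimize :: Expr -> Expr
optimize e = minimumBy (comparing cost) (Set.toList (saturate 10 (Set.singleton e)))

main :: IO ()
main = print (optimize (Mul (Var "x") (Lit 2)))   -- Add (Var "x") (Var "x")
```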

Simplicial Sets

A leisurely introduction

Elements of ∞-Category Theory

Canonicity and normalisation for Dependent Type Theory

We show canonicity and normalization for dependent type theory with a cumulative sequence of universes and a type of Boolean. The argument follows the usual notion of reducibility, going back to Gödel’s Dialectica interpretation and the work of Tait. A key feature of our approach is the use of a proof-relevant notion of reducibility.

Allegedly easier intro to the simply-typed version: http://www.jonmsterling.com/pdfs/gluing-note.pdf

Functorial Operational Semantics (and its Denotational Dual)

Great Notebooks

Logical Relations and Parametricity

A Reynolds Programme for Category Theory and Programming Languages

Abstract

In his seminal paper on “Types, Abstraction and Parametric Polymorphism,” John Reynolds called for homomorphisms to be generalized from functions to relations. He reasoned that such a generalization would allow type-based “abstraction” (representation independence, information hiding, naturality or parametricity) to be captured in a mathematical theory, while accounting for higher-order types. However, after 30 years of research, we do not yet know fully how to do such a generalization. In this article, we explain the problems in doing so, summarize the work carried out so far, and call for a renewed attempt at addressing the problem.
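A small Haskell reminder of what relational parametricity buys (the standard "theorems for free" example, not something from the article): a function of type [a] -> [a] cannot inspect its elements, so it commutes with map.

```haskell
-- Parametricity: a function of this type cannot look at the elements,
-- so it can only rearrange, drop, or duplicate them by position.
reorder :: [a] -> [a]
reorder xs = reverse (take 2 xs) ++ drop 2 xs

-- The free theorem for [a] -> [a] says that for every f:
--     map f . reorder = reorder . map f
check :: Bool
check = map (* 10) (reorder [1 .. 5 :: Int])
        == reorder (map (* 10) [1 .. 5])

main :: IO ()
main = print check   -- True
```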

Kelly betting and Bayesian inference

* Set of hypotheses <-> set of Kelly bettors
* Prior probabilities <-> starting wealth for different bettors
* Making predictions by taking the weighted sum over your hypotheses <-> making predictions by setting up a market, where bettors with more money affect the market price more.
* Updating with Bayes’ rule <-> distributing the winnings
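
A small Haskell sketch of the correspondence (the toy hypotheses and numbers are mine): each hypothesis is a Kelly bettor that stakes its whole bankroll in proportion to its own predictive probabilities, the market price of an outcome is the wealth-weighted mixture, and settling the bets reproduces Bayes’ rule on the wealth shares.

```haskell
-- Each hypothesis assigns probabilities to the outcomes of an experiment.
type Dist = [Double]

-- Toy setting: a coin with outcomes 0 ("heads") and 1 ("tails").
hypotheses :: [Dist]
hypotheses = [[0.5, 0.5], [0.9, 0.1], [0.1, 0.9]]   -- fair, heads-biased, tails-biased

-- Prior probabilities <-> starting wealth of the corresponding bettors.
priors :: [Double]
priors = [1/3, 1/3, 1/3]

-- Market price of an outcome: wealth-weighted average prediction,
-- i.e. the Bayesian mixture (marginal likelihood).
marketPrice :: [Double] -> Int -> Double
marketPrice wealth outcome =
  sum (zipWith (\w h -> w * (h !! outcome)) wealth hypotheses) / sum wealth

-- One round: every bettor stakes its whole bankroll in proportion to its
-- own probabilities; the realized outcome pays 1 / marketPrice, so total
-- wealth is conserved and gets redistributed to the better predictors.
settle :: [Double] -> Int -> [Double]
settle wealth outcome =
  [ w * (h !! outcome) / marketPrice wealth outcome
  | (w, h) <- zip wealth hypotheses ]

-- Observing outcome 0 turns the wealth shares into the Bayesian posterior:
--   share_h  ∝  prior_h * likelihood_h(outcome)
main :: IO ()
main = do
  let wealth' = settle priors 0
  print (map (/ sum wealth') wealth')   -- ≈ [0.333, 0.600, 0.067]
```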

Domain Specific Languages and Towers of Abstraction links

  • Cousot, P. “Constructive Design of a hierarchy of Semantics of a Transition System by Abstract Interpretation,” 2002.
  • Hyland and Power. “The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads,” 2007.
  • Lawvere, F.W. “Functorial Semantics of Algebraic Theories,” 1963.
  • Lawvere, F.W. “Adjointness in Foundations,” 1969.
  • Meijer, Fokkinga and Paterson. “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire,” 1991.
  • Smith, Peter. “The Galois Connection Between Syntax and Semantics,” 2010.

Visualizing Quaternions

Bracketed Paste

One of the least well known, and therefore least used, features of many terminal emulators is bracketed paste mode. When you are in bracketed paste mode and you paste into your terminal, the content will be wrapped in the sequences \e[200~ and \e[201~.

For example, let’s say I copied the string "echo 'hello'\n" from a website. When I paste it into my terminal, it will send "\e[200~echo 'hello'\n\e[201~" to whatever program is running.

I admit this is hard to get excited about, but it turns out that it enables something very cool: programs can tell the difference between stuff you type manually and stuff you pasted.
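
A small Haskell sketch that uses this (the enable/disable sequences \e[?2004h and \e[?2004l are the standard DEC private mode 2004 toggles; the rest is a rough demo, and a real program would read raw bytes instead of a buffered line):

```haskell
import Data.List (isInfixOf)
import System.IO (hFlush, stdout)

pasteStart, pasteEnd :: String
pasteStart = "\ESC[200~"   -- sent by the terminal before pasted content
pasteEnd   = "\ESC[201~"   -- sent after it

main :: IO ()
main = do
  putStr "\ESC[?2004h" >> hFlush stdout     -- enable bracketed paste
  putStrLn "Type or paste a line, then press Enter:"
  input <- getLine
  if pasteStart `isInfixOf` input
    then putStrLn "That came in as a paste."
    else putStrLn "That was typed (or the terminal lacks bracketed paste)."
  putStr "\ESC[?2004l" >> hFlush stdout     -- turn it back off
```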