Quotient Types for Programmers

Patrick Collison questions

Three that are especially interesting to me:

Why are certain things getting so much more expensive?

Spending on healthcare in the US is up 9X in real terms since 1960. Annual K-12 education spending per student in the US has increased 2-3X since 1960. The cost of college in the US has more than doubled (again, in real terms) since 1984. Growth in everything from construction costs to childcare costs is significantly outpacing inflation. Lots more at SSC and from Tyler.

What’s going on? Why are we seeing dramatic declines in costs in many areas of the economy and such steep increases in others? How much of the cost growth is unmeasured improvement in quality and how much is growing inefficiency? How should one predict a priori whether a sector will exhibit increasing or decreasing costs relative to inflation? What do we do about it all?

Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

The Empire State Building was built in 410 days. The Lockheed P-80, the Air Force’s first jet aircraft, took 143 days from project initiation to first deployment. The Apollo Program lasted 9 years from initiation to moon landing. While a lot happened in the US during World War II, it’s easy to forget how short the period in question was: American involvement lasted 3 years, 8 months, and 23 days.

Meanwhile, a BART extension is delayed more than a year because the wrong networking equipment was installed. (The 16-mile extension will cost roughly $2.3 billion and will have taken around 7 years.)

Is the sense that there are fewer contemporary examples of rapid progress justified? If so, what’s going on?

Why are programming environments still so primitive?

In different ways, Mathematica, Genera, and Smalltalk put almost every other programming environment to shame. Atom, Sublime Text, and Visual Studio Code are neat, but they do not represent a great improvement over TextMate circa 2007. Emacs and Vim have advanced even less.

Why can’t I connect my editor to a running program and hover over values to see what they last were? Why isn’t time-traveling debugging widely deployed? Why can’t I debug a function without restarting my program? Why in the name of the good lord are REPLs still textual? Why can’t I copy a URL to my editor to enable real-time collaboration with someone else? Why isn’t my editor integrated with the terminal? Why doesn’t autocomplete help me based on the adjacent problems others have solved?

On Buffon Machines and Numbers

Abstract:

The well-known needle experiment of Buffon can be regarded as an analog (i.e., continuous) device that stochastically “computes” the number 2/pi ~ 0.63661, which is the experiment’s probability of success. Generalizing the experiment and simplifying the computational framework, we consider probability distributions that can be produced perfectly from a discrete source of unbiased coin flips. We describe and analyse a few simple Buffon machines that generate geometric, Poisson, and logarithmic-series distributions. We provide human-accessible Buffon machines, which require a dozen coin flips or less, on average, and produce experiments whose probabilities of success are expressible in terms of numbers such as exp(-1), log 2, sqrt(3), cos(1/4), and zeta(5). Generally, we develop a collection of constructions based on simple probabilistic mechanisms that enable one to design Buffon experiments involving compositions of exponentials and logarithms, polylogarithms, direct and inverse trigonometric functions, algebraic and hypergeometric functions, as well as functions defined by integrals, such as the Gaussian error function.

Haskell implementation
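As a self-contained toy (independent of the linked implementation, and not taken from the paper), here is about the simplest Buffon machine I can think of in Haskell: using only fair coin flips, it draws a geometric variate and succeeds exactly when that variate is even, which happens with the non-dyadic probability 2/3.

```haskell
-- Two toy "Buffon machines" driven only by fair coin flips.
import System.Random (randomIO)

-- An unbiased coin: True ("heads") with probability 1/2.
flipCoin :: IO Bool
flipCoin = randomIO

-- Geometric(1/2) machine: count the tails seen before the first head.
geometric :: IO Int
geometric = go 0
  where
    go n = do
      heads <- flipCoin
      if heads then pure n else go (n + 1)

-- Bernoulli(2/3) machine: succeed iff the geometric value is even.
-- P(even) = sum over even k of (1/2)^(k+1) = 2/3, a non-dyadic
-- probability obtained exactly from unbiased flips.
bernoulliTwoThirds :: IO Bool
bernoulliTwoThirds = even <$> geometric

main :: IO ()
main = do
  trials <- mapM (const bernoulliTwoThirds) [1 .. 100000 :: Int]
  print (fromIntegral (length (filter id trials)) / 100000 :: Double)
```

Running it prints an estimate close to 0.6667; the machines in the paper use richer compositions of the same kind of trick to hit constants like exp(-1) and log 2.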

Regular Bands

STOKE

STOKE is a stochastic optimizer and program synthesizer for the x86-64 instruction set. STOKE uses random search to explore the extremely high-dimensional space of all possible program transformations. Although any one random transformation is unlikely to produce a code sequence that is desirable, the repeated application of millions of transformations is sufficient to produce novel and non-obvious code sequences. STOKE can be used in many different scenarios, such as optimizing code for performance or size, synthesizing an implementation from scratch, or trading accuracy of floating-point computations for performance. As a superoptimizer, STOKE has been shown to outperform the code produced by general-purpose and domain-specific compilers, and in some cases expert hand-written code.
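To make the “repeated random transformations” idea concrete, here is a hedged toy sketch in Haskell (not STOKE, and not x86-64): a straight-line program over a made-up three-op language is repeatedly mutated at random, and a candidate is kept whenever it matches a target function at least as well on a fixed set of test inputs. STOKE’s real search uses an MCMC acceptance rule and a cost that also measures performance; only the skeleton is shown here.

```haskell
-- A toy, cost-guided random search over straight-line programs: not STOKE
-- itself, just an illustration of the "many random transformations" idea.
import System.Random (randomRIO)

-- A tiny, hypothetical instruction set chosen only for the illustration.
data Op = Add Int | Mul Int | Neg deriving (Show)

run :: [Op] -> Int -> Int
run ops x0 = foldl step x0 ops
  where
    step v (Add k) = v + k
    step v (Mul k) = v * k
    step v Neg     = negate v

-- Behaviour we want to synthesize: f x = 2*x + 3.
target :: Int -> Int
target x = 2 * x + 3

-- Cost = total mismatch against the target on a fixed set of test inputs.
cost :: [Op] -> Int
cost ops = sum [abs (run ops x - target x) | x <- [-5 .. 5]]

randomOp :: IO Op
randomOp = do
  c <- randomRIO (0 :: Int, 2)
  k <- randomRIO (-3, 3)
  pure $ case c of
    0 -> Add k
    1 -> Mul k
    _ -> Neg

-- One proposal: overwrite a randomly chosen position with a random op.
mutate :: [Op] -> IO [Op]
mutate ops = do
  i  <- randomRIO (0, length ops - 1)
  op <- randomOp
  pure (take i ops ++ [op] ++ drop (i + 1) ops)

-- Keep a proposal when it is no worse; STOKE instead uses a Metropolis-style
-- acceptance rule so that occasional uphill moves can escape local optima.
search :: Int -> [Op] -> IO [Op]
search 0 ops = pure ops
search n ops = do
  cand <- mutate ops
  search (n - 1) (if cost cand <= cost ops then cand else ops)

main :: IO ()
main = do
  best <- search 20000 (replicate 3 Neg)
  print (best, cost best)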

Ersatz

A monad for expressing SAT or QSAT problems using observable sharing.

Sturdy

A Haskell library for creating abstract interpreters.

Gödel’s interpretation of intuitionism

On the Dialectica interpretation.

Real Differences between OT and CRDT for Co-Editors

Abstract:

OT (Operational Transformation) was invented for supporting real-time co-editors in the late 1980s and has evolved to become a core technique used in today’s working co-editors and adopted in major industrial products. CRDT (Commutative Replicated Data Type) for co-editors was first proposed around 2006, under the name of WOOT (WithOut Operational Transformation). Follow-up CRDT variations are commonly labeled as “post-OT” techniques capable of making concurrent operations natively commutative, and have made broad claims of superiority over OT solutions, in terms of correctness, time and space complexity, simplicity, etc. Over a decade later, however, CRDT solutions are rarely found in working co-editors, while OT solutions remain the choice for building the vast majority of co-editors. Contradictions between this reality and CRDT’s purported advantages have been the source of much debate and confusion in co-editing research and developer communities. What is CRDT really to co-editing? What are the real differences between OT and CRDT for co-editors? What are the key factors that may have affected the adoption of and choice between OT and CRDT for co-editors in the real world? In this paper, we report our discoveries, in relation to these questions and beyond, from a comprehensive review and comparison study on representative OT and CRDT solutions and working co-editors based on them. Moreover, this work reveals facts and presents evidence that refutes CRDT’s claimed advantages over OT. We hope the results reported in this paper will help clear up common myths, misconceptions, and confusions surrounding alternative co-editing techniques, and accelerate progress in co-editing technology for real world applications.
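For readers unfamiliar with OT’s core move, here is a minimal illustrative sketch in Haskell (not drawn from the paper, and far simpler than any production algorithm): each site applies its own insertion first, then applies the remote insertion after transforming it against the local one, and both replicas converge to the same document.

```haskell
-- A toy transform for concurrent single-character insertions.
data Ins = Ins { pos :: Int, ch :: Char } deriving (Show)

-- Adjust operation a so it can be applied after concurrent operation b.
transform :: Ins -> Ins -> Ins
transform a b
  | pos b <= pos a = a { pos = pos a + 1 }
  | otherwise      = a

apply :: Ins -> String -> String
apply (Ins i c) s = take i s ++ [c] ++ drop i s

main :: IO ()
main = do
  let doc = "abc"
      opA = Ins 1 'X'   -- site A inserts 'X' at index 1
      opB = Ins 2 'Y'   -- site B concurrently inserts 'Y' at index 2
  -- Each site applies its own op first, then the transformed remote op;
  -- both arrive at the same document ("aXbYc").
  putStrLn (apply (transform opB opA) (apply opA doc))
  putStrLn (apply (transform opA opB) (apply opB doc))
```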

A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models

The pattern theory of Grenander is a mathematical framework where the patterns are represented by probability models on random variables of algebraic structures. In this paper, we review three families of probability models, namely, the discriminative models, the descriptive models, and the generative models. A discriminative model is in the form of a classifier. It specifies the conditional probability of the class label given the input signal. The descriptive model specifies the probability distribution of the signal, based on an energy function defined on the signal. A generative model assumes that the signal is generated by some latent variables via a transformation. We shall review these models within a common framework and explore their connections. We shall also review the recent developments that take advantage of the high approximation capacities of deep neural networks.
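For concreteness, the three families can be summarized in simplified form, with $x$ the signal, $y$ the class label, $z$ latent variables, and $\theta$ the parameters:

```latex
% Discriminative: model the label conditionally on the signal.
p_\theta(y \mid x)

% Descriptive (energy-based): a distribution over signals defined by an energy function.
p_\theta(x) = \frac{1}{Z(\theta)} \exp\bigl(-E_\theta(x)\bigr)

% Generative: latent variables pushed through a transformation (plus noise).
z \sim p(z), \qquad x = g_\theta(z) + \epsilon
```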

Verification for Machine Learning, Autonomy, and Neural Networks Survey

This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and subcomponents thereof. Autonomy in CPS is enabled by recent advances in artificial intelligence (AI) and machine learning (ML) through approaches such as deep neural networks (DNNs), embedded in so-called learning-enabled components (LECs) that accomplish tasks from classification to control. Recently, the formal methods and formal verification community has developed methods to characterize behaviors in these LECs with eventual goals of formally verifying specifications for LECs, and this article presents a survey of many of these recent approaches.
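As a hedged illustration of one idea that shows up across this space (not taken from the survey), here is a toy Haskell sketch of interval bound propagation: sound interval arithmetic pushed through a one-unit ReLU network yields guaranteed output bounds for an entire box of inputs, which is the kind of statement a verifier can then check against a specification.

```haskell
-- Interval bound propagation through a one-hidden-unit ReLU network.
type Interval = (Double, Double)

-- Sound output interval for w*x + b over an input interval.
affine :: Double -> Double -> Interval -> Interval
affine w b (lo, hi)
  | w >= 0    = (w * lo + b, w * hi + b)
  | otherwise = (w * hi + b, w * lo + b)

relu :: Interval -> Interval
relu (lo, hi) = (max 0 lo, max 0 hi)

main :: IO ()
main = do
  -- Network: y = -3 * relu(2*x - 1) + 2, analyzed over all x in [0, 1].
  let h = relu (affine 2 (-1) (0, 1))
      y = affine (-3) 2 h
  -- Every input in the box is guaranteed to land inside these bounds.
  print y   -- (-1.0, 2.0)
```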

Folk Theorem

OWL

Owl is a parser generator which targets the class of visibly pushdown languages.

Carnap

GitHub

Carnap is a free and open-source Haskell framework for creating and exploring formal languages, logics, and semantics.