Three that are especially interesting to me:
Spending on healthcare in the US is up 9X in real terms since 1960. Annual K-12 education spending per student in the US has increased 2-3X since 1960. The cost of college in the US has more than doubled (again, in real terms) since 1984. Costs of everything from construction to childcare are growing significantly faster than inflation. Lots more at SSC and from Tyler.
What's going on? Why are we seeing dramatic declines in costs in many areas of the economy and such steep increases in others? How much of the cost growth is unmeasured improvement in quality, and how much is growing inefficiency? How should one predict a priori whether a sector will exhibit increasing or decreasing costs relative to inflation? What do we do about it all?
The Empire State Building was built in 410 days. The Lockheed P-80, the first jet aircraft deployed in the Air Force, took 143 days from project initiation to first deployment. The Apollo Program lasted 9 years from initiation to moon landing. While a lot happened in the US during World War II, it's easy to forget how short the period in question was: American involvement lasted 3 years, 8 months, and 23 days.
Meanwhile, a BART extension is delayed more than a year because the wrong networking equipment was installed. (The 16-mile extension will have cost around $2.3 billion and taken roughly seven years.)
Is the sense that there are fewer contemporary examples of rapid progress justified? If so, what's going on?
In different ways, Mathematica, Genera, and Smalltalk put almost every other programming environment to shame. Atom, Sublime Text, and Visual Studio Code are neat, but they do not represent a great improvement over TextMate circa 2007. Emacs and Vim have advanced by even less.
Why can't I connect my editor to a running program and hover over values to see what they last were? Why isn't time-traveling debugging widely deployed? Why can't I debug a function without restarting my program? Why in the name of the good lord are REPLs still textual? Why can't I copy a URL to my editor to enable real-time collaboration with someone else? Why isn't my editor integrated with the terminal? Why doesn't autocomplete help me based on the adjacent problems others have solved?
Abstract:
The well-known needle experiment of Buffon can be regarded as an analog (i.e., continuous) device that stochastically "computes" the number 2/pi ~ 0.63661, which is the experiment's probability of success. Generalizing the experiment and simplifying the computational framework, we consider probability distributions that can be produced perfectly from a discrete source of unbiased coin flips. We describe and analyse a few simple Buffon machines that generate geometric, Poisson, and logarithmic-series distributions. We provide human-accessible Buffon machines, which require a dozen coin flips or less, on average, and produce experiments whose probabilities of success are expressible in terms of numbers such as exp(-1), log 2, sqrt(3), cos(1/4), zeta(5). Generally, we develop a collection of constructions based on simple probabilistic mechanisms that enable one to design Buffon experiments involving compositions of exponentials and logarithms, polylogarithms, direct and inverse trigonometric functions, algebraic and hypergeometric functions, as well as functions defined by integrals, such as the Gaussian error function.
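As a toy illustration of the general idea (mine, not one of the paper's constructions): an exact Bernoulli(1/3) experiment can be driven by nothing but fair coin flips, via rejection. A minimal Python sketch:

    import random

    def coin():
        """One unbiased coin flip: True = heads, False = tails."""
        return random.random() < 0.5

    def bernoulli_one_third():
        """Return True with probability exactly 1/3, using only fair coin flips.

        Flip two coins: HH -> success, HT or TH -> failure, TT -> flip again.
        Conditioned on not retrying, success has probability (1/4)/(3/4) = 1/3.
        """
        while True:
            a, b = coin(), coin()
            if a and b:
                return True
            if a or b:
                return False
            # both tails: discard this round and flip again

    n = 100_000
    print(sum(bernoulli_one_third() for _ in range(n)) / n)   # ~0.3333

The expected number of flips is 8/3 per experiment, in the same "human-accessible" spirit as the machines in the paper.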
STOKE is a stochastic optimizer and program synthesizer for the x86-64 instruction set. STOKE uses random search to explore the extremely high-dimensional space of all possible program transformations. Although any one random transformation is unlikely to produce a code sequence that is desirable, the repeated application of millions of transformations is sufficient to produce novel and non-obvious code sequences. STOKE can be used in many different scenarios, such as optimizing code for performance or size, synthesizing an implementation from scratch, or trading the accuracy of floating-point computations for performance. As a superoptimizer, STOKE has been shown to outperform the code produced by general-purpose and domain-specific compilers, and in some cases expert hand-written code.
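A toy sketch of the underlying idea, stochastic search over program transformations scored by a cost function. Everything here (the tiny instruction set, the target x*7, the greedy acceptance rule) is made up for illustration; real STOKE uses MCMC over actual x86-64 code rather than this greedy loop:

    import random

    OPS = ["shl1", "add_x", "sub_x", "nop"]   # r <<= 1, r += x, r -= x, do nothing
    TESTS = list(range(-10, 11))

    def run(prog, x):
        r = x
        for op in prog:
            if op == "shl1":
                r <<= 1
            elif op == "add_x":
                r += x
            elif op == "sub_x":
                r -= x
        return r

    def cost(prog):
        # Correctness term: distance from the target function x * 7 on the tests,
        # weighted heavily; a size term prefers shorter programs among equals.
        wrong = sum(abs(run(prog, x) - 7 * x) for x in TESTS)
        size = sum(op != "nop" for op in prog)
        return 1000 * wrong + size

    def search(length=6, iters=20000):
        prog = ["nop"] * length
        best = cost(prog)
        for _ in range(iters):
            cand = list(prog)
            cand[random.randrange(length)] = random.choice(OPS)   # random transformation
            c = cost(cand)
            if c <= best:                  # keep sideways and improving moves
                prog, best = cand, c
        return prog, best

    print(search())   # typically converges to a program computing 7*x exactly (cost < 1000)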
A monad for expressing SAT or QSAT problems using observable sharing.
A Haskell library for creating abstract interpreters.
On the Dialectica interpretation.
Abstract:
OT (Operational Transformation) was invented for supporting real-time co-editors in the late 1980s and has evolved to become a core technique used in today's working co-editors and adopted in major industrial products. CRDT (Commutative Replicated Data Type) for co-editors was first proposed around 2006, under the name of WOOT (WithOut Operational Transformation). Follow-up CRDT variations are commonly labeled as "post-OT" techniques capable of making concurrent operations natively commutative, and have made broad claims of superiority over OT solutions in terms of correctness, time and space complexity, simplicity, etc. Over one decade later, however, CRDT solutions are rarely found in working co-editors, while OT solutions remain the choice for building the vast majority of co-editors. Contradictions between this reality and CRDT's purported advantages have been the source of much debate and confusion in co-editing research and developer communities. What is CRDT really to co-editing? What are the real differences between OT and CRDT for co-editors? What are the key factors that may have affected the adoption of and choice between OT and CRDT for co-editors in the real world? In this paper, we report our discoveries, in relation to these questions and beyond, from a comprehensive review and comparison study of representative OT and CRDT solutions and working co-editors based on them. Moreover, this work reveals facts and presents evidence that refutes CRDT's claimed advantages over OT. We hope the results reported in this paper will help clear up common myths, misconceptions, and confusions surrounding alternative co-editing techniques, and accelerate progress in co-editing technology for real-world applications.
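For a feel of what the OT side involves, here is a minimal textbook-style sketch (not taken from the paper) of transforming two concurrent insertions so both replicas converge; deletions and the tie-break by site id for equal positions are omitted:

    # Minimal OT sketch: transform concurrent inserts on a string so that
    # applying them in either order yields the same document.

    def apply_insert(doc, op):
        pos, ch = op
        return doc[:pos] + ch + doc[pos:]

    def transform_insert(op, against):
        """Shift op's position to account for a concurrent insert `against`.
        Real implementations break ties (equal positions) by site id."""
        pos, ch = op
        a_pos, _ = against
        if a_pos <= pos:          # the other insert landed at or before ours
            return (pos + 1, ch)
        return op

    doc = "abc"
    op1 = (1, "X")   # site 1 inserts X at position 1
    op2 = (2, "Y")   # site 2 concurrently inserts Y at position 2

    # Site 1: apply op1, then the transformed op2.
    site1 = apply_insert(apply_insert(doc, op1), transform_insert(op2, op1))
    # Site 2: apply op2, then the transformed op1.
    site2 = apply_insert(apply_insert(doc, op2), transform_insert(op1, op2))

    assert site1 == site2 == "aXbYc"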
The pattern theory of Grenander is a mathematical framework where the patterns are represented by probability models on random variables of algebraic structures. In this paper, we review three families of probability models, namely, the discriminative models, the descriptive models, and the generative models. A discriminative model is in the form of a classifier. It specifies the conditional probability of the class label given the input signal. The descriptive model specifies the probability distribution of the signal, based on an energy function defined on the signal. A generative model assumes that the signal is generated by some latent variables via a transformation. We shall review these models within a common framework and explore their connections. We shall also review the recent developments that take advantage of the high approximation capacities of deep neural networks.
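A schematic of the three families on a made-up toy signal, with arbitrary parameters, just to pin down the shapes of the objects involved (not code from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)          # the observed signal
    W = rng.normal(size=(3, 4))     # toy classifier weights for 3 classes

    # 1. Discriminative: the conditional probability of the label given the signal.
    logits = W @ x
    p_y_given_x = np.exp(logits) / np.exp(logits).sum()

    # 2. Descriptive: a distribution over the signal itself via an energy function,
    #    p(x) proportional to exp(-E(x)); here E is a toy quadratic energy, and the
    #    normalizing constant is intractable in general.
    def energy(x):
        return 0.5 * np.dot(x, x)
    unnormalized_p_x = np.exp(-energy(x))

    # 3. Generative: the signal is produced from latent variables by a transformation
    #    (here a linear toy generator plus noise).
    z = rng.normal(size=2)
    G = rng.normal(size=(4, 2))
    x_generated = G @ z + 0.1 * rng.normal(size=4)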
isl is a thread-safe, LGPL-licensed C library for manipulating sets and relations of integer tuples bounded by affine constraints.
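For illustration only, the kind of object isl manipulates is a set such as { [i, j] : 0 <= i <= j < 4 }. A brute-force Python rendering of that set (isl itself represents such sets symbolically and never enumerates them):

    N = 4
    S = {(i, j) for i in range(N) for j in range(N) if i <= j}   # { [i,j] : 0 <= i <= j < 4 }
    print(sorted(S))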
Most modern-seeming FFI best practices
Optimizations in a traditional compiler are applied sequentially, with each optimization destructively modifying the program to produce a transformed program that is then passed to the next optimization. We present a new approach for structuring the optimization phase of a compiler. In our approach, optimizations take the form of equality analyses that add equality information to a common intermediate representation. The optimizer works by repeatedly applying these analyses to infer equivalences between program fragments, thus saturating the intermediate representation with equalities. Once saturated, the intermediate representation encodes multiple optimized versions of the input program. At this point, a profitability heuristic picks the final optimized program from the various programs represented in the saturated representation. Our proposed way of structuring optimizers has a variety of benefits over previous approaches: our approach obviates the need to worry about optimization ordering, enables the use of a global optimization heuristic that selects among fully optimized programs, and can be used to perform translation validation, even on compilers other than our own. We present our approach, formalize it, and describe our choice of intermediate representation. We also present experimental results showing that our approach is practical in terms of time and space overhead, is effective at discovering intricate optimization opportunities, and is effective at performing translation validation for a realistic optimizer.
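A toy sketch of the idea, shrunk to a single expression and a handful of made-up rewrite rules: rules only add equal forms rather than destructively replacing anything, and a cost function picks the winner once nothing new can be added. Real equality saturation uses e-graphs to share subterms, which this sketch ignores (it only rewrites at the root of each form):

    # Expressions: ("mul", a, b), ("shl", a, b), ("add", a, b), ("var", name), ("const", n)
    X = ("var", "x")

    def rewrites(e):
        """Yield expressions equal to e under a few toy arithmetic identities."""
        if e[0] == "mul" and e[2] == ("const", 2):
            yield ("shl", e[1], ("const", 1))     # a * 2  ==  a << 1
        if e[0] == "mul" and e[2] == ("const", 1):
            yield e[1]                            # a * 1  ==  a
        if e[0] == "add" and e[2] == ("const", 0):
            yield e[1]                            # a + 0  ==  a

    def saturate(e):
        forms, changed = {e}, True
        while changed:
            changed = False
            for f in list(forms):
                for g in rewrites(f):
                    if g not in forms:
                        forms.add(g)
                        changed = True
        return forms

    COSTS = {"mul": 5, "shl": 1, "add": 1, "var": 0, "const": 0}
    def cost(e):
        return COSTS[e[0]] + sum(cost(c) for c in e[1:] if isinstance(c, tuple))

    expr = ("add", ("mul", X, ("const", 2)), ("const", 0))   # (x * 2) + 0
    print(min(saturate(expr), key=cost))   # picks ('shl', ('var', 'x'), ('const', 1))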
We show canonicity and normalization for dependent type theory with a cumulative sequence of universes and a type of Boolean. The argument follows the usual notion of reducibility, going back to Gödel's Dialectica interpretation and the work of Tait. A key feature of our approach is the use of a proof-relevant notion of reducibility.
Allegedly easier intro to the simply-typed version: http://www.jonmsterling.com/pdfs/gluing-note.pdf
In his seminal paper on "Types, Abstraction and Parametric Polymorphism," John Reynolds called for homomorphisms to be generalized from functions to relations. He reasoned that such a generalization would allow type-based "abstraction" (representation independence, information hiding, naturality or parametricity) to be captured in a mathematical theory, while accounting for higher-order types. However, after 30 years of research, we do not yet know fully how to do such a generalization. In this article, we explain the problems in doing so, summarize the work carried out so far, and call for a renewed attempt at addressing the problem.
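As a reminder of what the relational reading buys you, the standard "free theorem" for a polymorphic function r : forall a. [a] -> [a] (a classic consequence of Reynolds-style parametricity, not specific to this article) says that r commutes with map:

    % free theorem for r : forall a. [a] -> [a]
    \forall f : A \to B.\quad \mathrm{map}\, f \circ r_A \;=\; r_B \circ \mathrm{map}\, f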
* Set of hypotheses <-> set of Kelly bettors
* Prior probabilities <-> starting wealth for different bettors
* Making predictions by taking the weighted sum over your hypotheses <-> making predictions by setting up a market, where bettors with more money affect the market price more.
* Updating with Bayes' rule <-> distributing the winnings (a small numerical check of this follows below)
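A quick numerical check of the correspondence, on a made-up two-hypothesis example: give each hypothesis a bankroll equal to its prior, have it stake its whole bankroll across outcomes in proportion to the probabilities it assigns (the Kelly allocation for a log-utility bettor), and pay the whole pot to the winning stakes pro rata. The resulting bankrolls are exactly the Bayesian posterior:

    priors      = {"H1": 0.25, "H2": 0.75}                 # starting bankrolls
    likelihoods = {"H1": {"heads": 0.9, "tails": 0.1},     # each hypothesis's predicted
                   "H2": {"heads": 0.4, "tails": 0.6}}     # outcome probabilities

    # Each bettor splits its entire bankroll across outcomes in proportion
    # to the probabilities it assigns.
    stakes = {h: {o: priors[h] * likelihoods[h][o] for o in ("heads", "tails")}
              for h in priors}

    outcome = "heads"
    pot = sum(sum(s.values()) for s in stakes.values())          # total wealth = 1.0
    staked_on_winner = sum(stakes[h][outcome] for h in priors)

    # The whole pot is paid out to the winning stakes, pro rata.
    wealth_after = {h: pot * stakes[h][outcome] / staked_on_winner for h in priors}

    # Bayes' rule, computed directly, gives the same numbers.
    evidence  = sum(priors[h] * likelihoods[h][outcome] for h in priors)
    posterior = {h: priors[h] * likelihoods[h][outcome] / evidence for h in priors}

    assert all(abs(wealth_after[h] - posterior[h]) < 1e-9 for h in priors)
    print(wealth_after)   # {'H1': 0.42857..., 'H2': 0.57142...}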
One of the least well known, and therefore least used, features of many terminal emulators is bracketed paste mode. When you are in bracketed paste mode and you paste into your terminal, the content will be wrapped by the sequences \e[200~ and \e[201~. For example, let's say I copied the string "echo 'hello'\n" from a website. When I paste into my terminal, it will send "\e[200~echo 'hello'\n\e[201~" to whatever program is running. I admit this is hard to get excited about, but it turns out that it enables something very cool: programs can tell the difference between stuff you type manually and stuff you pasted.
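As a sketch of how a program might exploit this (minimal Python for a POSIX terminal; the \e[?2004h / \e[?2004l enable/disable sequences and the paste markers are the standard ones, everything else is illustrative and naively treats any escape-prefixed input as a paste):

    import sys, termios, tty

    PASTE_START = b"\x1b[200~"
    PASTE_END   = b"\x1b[201~"

    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)
    sys.stdout.write("\x1b[?2004h")        # ask the terminal to bracket pastes
    sys.stdout.flush()
    try:
        tty.setcbreak(fd)                  # unbuffered input, so the escape bytes reach us
        data = sys.stdin.buffer.read(1)
        pasted = data == b"\x1b"           # naive: assume ESC means a paste, not an arrow key
        end = PASTE_END if pasted else b"\n"
        while not data.endswith(end):
            data += sys.stdin.buffer.read(1)
        if pasted and data.startswith(PASTE_START):
            print("\npasted:", data[len(PASTE_START):-len(PASTE_END)])
        else:
            print("\ntyped:", data.rstrip(b"\n"))
    finally:
        sys.stdout.write("\x1b[?2004l")    # turn bracketed paste back off
        sys.stdout.flush()
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)

This is roughly how shells use the feature: pasted text can be inserted literally instead of being executed line by line.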