Bean soup is on the menu in Senate restaurants every day. There are several stories about the origin of that mandate, but none have been corroborated.
FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints.
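For flavor, a minimal sketch of the type-hint-driven style (the routes and the Item model here are made up for illustration; the decorator pattern and automatic validation are the library's own):

```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    # FastAPI parses and validates item_id and q from the type hints
    # alone, and generates OpenAPI docs for the route automatically.
    return {"item_id": item_id, "q": q}

@app.post("/items/")
def create_item(item: Item):
    # The request body is parsed and validated against the pydantic model.
    return item
```

Run it with `uvicorn main:app --reload` and interactive docs appear at /docs.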
Typer is a library for building CLI applications that users will love using and developers will love creating, based on Python 3.6+ type hints.
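Same idea on the command line; a minimal sketch (the greeting command is invented, but typer.run and the type-hint-driven argument parsing are the real interface):

```python
import typer

def main(name: str, count: int = 1, shout: bool = False):
    """Greet NAME, COUNT times."""
    greeting = f"Hello {name}!"
    if shout:
        greeting = greeting.upper()
    for _ in range(count):
        typer.echo(greeting)

if __name__ == "__main__":
    typer.run(main)
```

Arguments, options, and `--help` output all fall out of the function signature.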
In this talk Ed will give a live-coding introduction to normalization by evaluation. He will then show how Graal and Truffle, on the JVM, can be (ab)used to JIT functional languages, and discuss why this seems like a promising direction for evaluating dependently typed languages in particular.
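Not Ed's code, but a minimal sketch of the normalization-by-evaluation idea in Python, for the untyped lambda calculus with de Bruijn indices: evaluate terms into host-language closures, then quote the resulting values back into syntax.

```python
from dataclasses import dataclass
from typing import Callable, List

# --- Syntax: lambda terms with de Bruijn indices ---
@dataclass
class Var:
    idx: int

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fn: object
    arg: object

# --- Semantics: values ---
@dataclass
class VLam:
    fn: Callable          # binders become Python closures

@dataclass
class VNe:
    lvl: int              # a free variable (de Bruijn level) ...
    spine: List           # ... applied to a stack of value arguments

def apply_val(f, a):
    if isinstance(f, VLam):
        return f.fn(a)
    return VNe(f.lvl, f.spine + [a])

def evaluate(t, env):
    if isinstance(t, Var):
        return env[t.idx]
    if isinstance(t, Lam):
        return VLam(lambda a: evaluate(t.body, [a] + env))
    return apply_val(evaluate(t.fn, env), evaluate(t.arg, env))

def quote(v, depth):
    # Read a value back into syntax, generating fresh variables as we
    # go under binders; this readback is the "by evaluation" trick.
    if isinstance(v, VLam):
        return Lam(quote(v.fn(VNe(depth, [])), depth + 1))
    term = Var(depth - v.lvl - 1)   # convert level back to index
    for a in v.spine:
        term = App(term, quote(a, depth))
    return term

def normalize(t):
    return quote(evaluate(t, []), 0)

# (\f. \x. f x) (\y. y)  normalizes to  \x. x
ident = Lam(Var(0))
eta_app = Lam(Lam(App(Var(1), Var(0))))
print(normalize(App(eta_app, ident)))   # Lam(body=Var(idx=0))
```

The Graal/Truffle angle presumably plays the same trick one level down, letting the host JIT specialize the interpreter instead of leaning on Python closures.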
Dbunzli stuff:
Free font, designed to be legible for low-vision readers.
Merchant fees and reward programs generate an implicit monetary transfer to credit card users from non-card (or “cash”) users because merchants generally do not set differential prices for card users to recoup the costs of fees and rewards. On average, each cash-using household pays $149 to card-using households and each card-using household receives $1,133 from cash users every year. Because credit card spending and rewards are positively correlated with household income, the payment instrument transfer also induces a regressive transfer from low-income to high-income households in general. On average, and after accounting for rewards paid to households by banks, the lowest-income household ($20,000 or less annually) pays $21 and the highest-income household ($150,000 or more annually) receives $750 every year. We build and calibrate a model of consumer payment choice to compute the effects of merchant fees and card rewards on consumer welfare. Reducing merchant fees and card rewards would likely increase consumer welfare.
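A toy, back-of-the-envelope version of the mechanism (the 2% fee, 1% reward, and equal spending are made-up round numbers, not the paper's calibration):

```python
# Toy illustration, not the paper's model: two households, one paying
# by card and one by cash, each spending the same amount per year.
S = 20_000   # annual spending per household (made-up round number)
f = 0.02     # merchant fee on card purchases (made-up)
r = 0.01     # rewards rebated to the card household (made-up)

fees = f * S                 # fees generated by the card household's purchases
markup = fees / (2 * S)      # merchants fold fees into one posted price for all

cash_net = -markup * S           # cash household helps pay card fees, gets nothing
card_net = r * S - markup * S    # card household's markup share, offset by rewards

# With differential pricing the card household would bear the full fee
# itself (net f*S - r*S = $200) and the cash household nothing; uniform
# pricing therefore shifts $200/year from the cash to the card household.
print(f"cash household: {cash_net:+.0f}, card household: {card_net:+.0f}")
```

The paper's $149/$1,133 headline numbers come from doing this properly over the actual distributions of spending, card use, fees, and rewards.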
On benchmark performance, GPT-3 seems to be in line with performance predicted by smaller sizes, and doesn’t seem to particularly break or accelerate the trend. Close-to-optimal performance on these benchmarks seems like it’s at least ~3 orders of magnitude compute away (costing around $1B at current prices).
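The arithmetic implied there, assuming today's largest training runs cost on the order of $1M (my assumption, chosen so the post's $1B endpoint works out; the post itself only gives the endpoint):

```python
# Back-of-the-envelope only; the ~$1M per-run baseline is an assumption.
current_run_cost = 1e6        # dollars, order of magnitude
gap = 3                       # orders of magnitude of extra compute
print(f"${current_run_cost * 10**gap:,.0f}")   # $1,000,000,000
```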
While the total amount of food produced has increased dramatically over the past few hundred years, for example, agriculture’s share of the economy has steadily fallen (from over 40% of the English economy in 1600, and an even greater share of total employment, to less than 1% today, with a similar trend repeated worldwide). Innovation has, in a sense, been the victim of its own success. By creating ever more products, sprouting new industries, and diversifying those industries into myriad specialisms, we have shrunk the impact that any single improvement can have.
A “huge project” for a Silicon Valley tech person may be a year or two long; a “huge project” for a researcher may last a decade. Persistence with a difficult problem may require tens of hours for a tech person and hundreds (or thousands) of hours for a researcher, no matter how quickly they try to work. It’s not that the tech people are constitutionally lazy or something like that: in industry, it usually is, in fact, a bad idea to spend many hundreds of hours thinking about a single problem. Better to create an 80/20 solution or try a different approach. But foundational insights often do require more patient, focused thought than heuristics from tech culture would naturally encourage. Living here has changed me deeply. Probably other places would have had a similar (or a better?) effect on similar axes. For example, I like what conversation in Cambridge, MA does to my state of mind. But when I lived in Portland, OR, for example, the environment tended to emphasize a different set of values—community, craft, sustainability, enjoyment. I liked these values, too, but I suspect they would not so naturally reinforce my current work.
Is public opinion about to shift on geoengineering (into the Overton window)?
We argue that the most important statistical ideas of the past half century are: counterfactual causal inference, bootstrapping and simulation-based inference, overparameterized models and regularization, multilevel models, generic computation algorithms, adaptive decision analysis, robust inference, and exploratory data analysis. We discuss common features of these ideas, how they relate to modern computing and big data, and how they might be developed and extended in future decades. The goal of this article is to provoke thought and discussion regarding the larger themes of research in statistics and data science.
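To make one of those ideas concrete, a minimal bootstrap sketch (the data and the choice of statistic are invented for illustration):

```python
import random
import statistics

# Made-up sample; the statistic of interest is its mean.
data = [2.3, 1.9, 3.1, 2.8, 2.2, 3.5, 1.7, 2.9, 3.0, 2.4]

def bootstrap_se(sample, stat=statistics.mean, reps=10_000, seed=0):
    """Standard error of `stat`, estimated by resampling with replacement."""
    rng = random.Random(seed)
    estimates = [stat(rng.choices(sample, k=len(sample))) for _ in range(reps)]
    return statistics.stdev(estimates)

print(f"mean = {statistics.mean(data):.2f} ± {bootstrap_se(data):.2f} (bootstrap SE)")
```

The same resample-and-recompute loop generalizes to medians, regression coefficients, or anything else you can compute from a sample.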
Interesting take on the SSC / NYT kerfuffle.
Could these numbers possibly be accurate? Even if not, I’m sure they’re directionally true and those in the top decile drink a shocking amount.