Teardown of a quartz crystal oscillator and the tiny IC inside

Questioning the (dietary) variety hypothesis

One of the most common, least questioned pieces of dietary advice is the Variety Hypothesis: that a more widely varied diet is better than a less varied diet. I think this is false; most people’s diets are, on the margin, too varied.

Failures of Megaproject Management, or Why Are the Olympics so Unprofitable? | Applied Divinity Studies

Bent Flyvbjerg, author of The Oxford Handbook of Megaproject Management, describes the Iron Law of Megaprojects: “Nine out of ten such projects have cost overruns. Overruns of up to 50 percent in real terms are common.” This bears out empirically for the Olympic Games. According to one analysis, estimated costs are dramatically lower than actual costs, with Athens and Sochi seeing overruns of 400%!

the principle of the Hiding Hand: if people knew the true cost of projects up front, they would never fund them

News that the Transbay Terminal is something like $300 million over budget should not come as a shock to anyone. We always knew the initial estimate was way under the real cost… The first budget is really just a down payment. If people knew the real cost from the start, nothing would ever be approved. The idea is to get going. Start digging a hole and make it so big, there’s no alternative to coming up with the money to fill it in.

An Intuitive Guide to Garrabrant Induction - LessWrong

the Bayesian framework assumes logical omniscience: that we can always perform any computation or feat of logic at no cost. A Bayesian reasoner maintains a list of all possible hypotheses, then updates all of them according to Bayes’ rule upon encountering new evidence. Knowing how a hypothesis interacts with a piece of definite evidence requires logical deduction, but bounded reasoners have finite resources with which to deduce.
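A minimal sketch of what logical omniscience buys you (my own illustration, with made-up hypotheses and likelihoods, not from the post): the reasoner multiplies every prior by a likelihood it can supposedly compute for free, then renormalises.

```python
# Toy logically-omniscient Bayesian reasoner: keep every hypothesis explicitly
# and assume P(evidence | hypothesis) can always be computed at no cost.
def bayes_update(priors, likelihoods):
    """priors: {hypothesis: P(h)}; likelihoods: {hypothesis: P(e | h)}."""
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Example: two hypothetical hypotheses about a coin, updated on seeing heads.
posterior = bayes_update(
    priors={"fair": 0.5, "biased": 0.5},
    likelihoods={"fair": 0.5, "biased": 0.9},  # P(heads | h)
)
print(posterior)  # {'fair': ~0.357, 'biased': ~0.643}
```

The catch the post points at: for logical statements, computing those likelihoods is itself a deduction that a bounded reasoner may not be able to afford.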

The primary contribution of Garrabrant et al. is a computable algorithm that assigns probabilities to logical statements and satisfies the logical induction criterion. We call this algorithm Garrabrant induction.

Against overuse of the Gini coefficient (Vitalik Buterin)

Beating the L1 cache with value speculation

Very interesting technique.

Blum–Shub–Smale machine - Wikipedia

In computation theory, the Blum–Shub–Smale machine, or BSS machine, is a model of computation introduced by Lenore Blum, Michael Shub and Stephen Smale, intended to describe computations over the real numbers. Essentially, a BSS machine is a Random Access Machine with registers that can store arbitrary real numbers and that can compute rational functions over reals in a single time step.
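A rough way to picture it (my sketch; Python’s Fraction gives exact rationals, not the arbitrary reals the model permits): registers hold exact numbers, and one step evaluates a whole rational function with no rounding.

```python
from fractions import Fraction

# Toy machine in the spirit of a BSS machine: exact register contents
# (rationals standing in for reals) and one rational function per time step.
registers = [Fraction(3, 7), Fraction(-2), Fraction(5, 11)]

def step(regs):
    x, y, z = regs
    # One "instruction": a rational function of the registers, computed exactly.
    return [(x * x + y) / (z + 1), y, z]

registers = step(registers)
print(registers[0])  # -979/784, with no rounding error
```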

Real RAM - Wikipedia

In computing, especially computational geometry, a real RAM (random-access machine) is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed-point or floating-point numbers used by most actual computers.
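For contrast, a quick illustration (mine, not Wikipedia’s) of why the idealisation matters: the floating-point numbers actual computers use don’t behave like exact reals.

```python
from fractions import Fraction

# Floating point: 0.1 + 0.2 is not exactly 0.3, since these decimals have
# no finite binary representation.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Exact arithmetic (here rationals) behaves the way the real RAM assumes.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```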

Alien Species Reconsidered: Finding a Value in Non-Natives - Yale E360

Amidst all the concern for the honeybees, it’s easy to forget an important fact about them. They’re not native to the New World.

The earliest records of honeybees in this hemisphere come from English settlers who arrived with hives aboard their ships in the early 1600s. They brought the bees to make honey they could eat and wax they could burn. Over the past four centuries, new stocks of honeybees have arrived at least eight times, from Europe, the Near East, and Africa.

Carbon Neutral Concrete - the numbers

The physics says $70 in, $475 out. The big question now is the “reality factor”: the multiplier for all the overhead and inefficiencies on top of the raw physics. If the reality factor is greater than about 7x, this becomes a commercially unviable proposition.

More than just profitability, can they make a dent in our carbon problem? The scale of our carbon problem is on the order of 30 billion tons of CO2 a year, of which maybe 1 billion tons are due to lime production. Heimdal is currently at the scale of 1 ton per year - nine orders of magnitude away from making a difference. Assuming continuous, Silicon-Valley-ridiculous growth of 50% year over year, it will take them about 50 years to reach a scale where they are actually making a dent in our carbon problem. I wish them good luck.
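A quick back-of-the-envelope check of the two numbers above (my arithmetic):

```python
import math

# Implied break-even "reality factor": revenue per ton over raw physics cost.
physics_cost_usd, revenue_usd = 70, 475
print(revenue_usd / physics_cost_usd)   # ~6.8, hence the ~7x viability threshold

# Years to climb nine orders of magnitude (1 t/yr -> ~1 Gt/yr) at 50%/yr growth.
print(math.log(1e9) / math.log(1.5))    # ~51 years
```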

Launch HN: Heimdal (YC S21) – Carbon neutral cement | Hacker News

Concrete is responsible for 8% of global CO2 emissions. Cement is usually made from mined limestone, which is one of the largest natural stores of carbon dioxide. Using that to make cement is a bit like burning oil. The world is addicted to concrete, so this problem is not going away. We make synthetic limestone using atmospheric CO2, such that when it is used to make cement, the process is carbon neutral.

It looks like a product but is secretly a subscription

The dreaded riddle of “capex and opex”: software, despite widespread pretence, is usually opex, and not all capex vs opex decisions are so rational, or so clear. One of the great conceits of modern business is to pretend that you are investing capital when you are really just cultivating a new stream of expenses. It’s common for people to say that they are “investing in [such and such]” or “building an asset” when really they are just allocating more budget for spending (i.e. opex).

Where does the pressure to capitalise come from? Generally from managers. Most business units have some income and some expenses. If you start capitalising stuff, those expenses disappear and, better yet, even start to appear on your balance sheet as an asset, however nebulous. Now your numbers look really good. Accountants have heard this one before, and that’s why the rules are so strict. A home truth of mine is that software projects in fact look more like a liability than an asset. Software, whether anyone is getting any benefit from it or not, attracts a huge number of very annoying expenses: security vulnerability scrambles, bug fixes and database upgrades, and all of it just lines the pockets of wealthy sysadmins.

What 2026 looks like (Daniel’s Median Future) - LessWrong

The goal is to write out a detailed future history (“trajectory”) that is as realistic (to me) as I can currently manage, i.e. I’m not aware of any alternative trajectory that is similarly detailed and clearly more plausible to me. The methodology is roughly: Write a future history of 2022. Condition on it, and write a future history of 2023. Repeat for 2024, 2025, etc. (I’m posting 2022-2026 now so I can get feedback that will help me write 2027+. I intend to keep writing until the story reaches singularity/extinction/utopia/etc.)

Open sourcing a more precise time appliance - Facebook Engineering

  • Facebook engineers have built and open-sourced an Open Compute Time Appliance, an important component of the modern timing infrastructure.
  • To make this possible, we came up with the Time Card — a PCI Express (PCIe) card that can turn almost any commodity server into a time appliance.
  • With the help of the OCP community, we established the Open Compute Time Appliance Project and open-sourced every aspect of the Open Time Server.

Looks like you can buy a miniature atomic clock for ~$1,000.

What’s interesting to me is how the form factor of timing GPS modules has stayed constant over the years. I started my GNSS timing journey with a used Trimble GPS from the 2000s, and it has the same form factor and pinout as the modern multi-constellation timing GPSes from uBlox. I’ve had a GPS clock going for several years at this point, and without an atomic clock or really any fanciness (just LinuxPPS and Chrony), I see about +/- 380ns, which is pretty good. NTP to the Internet gives me jitter in the range of about 20ms-70ms, about 5 orders of magnitude worse.
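For scale (my arithmetic, using the numbers above): the gap between PPS-disciplined time and Internet NTP really is roughly five orders of magnitude.

```python
import math

pps_jitter_s = 380e-9   # ~+/- 380 ns from GPS PPS + LinuxPPS + chrony
ntp_jitter_s = 20e-3    # low end of the quoted 20-70 ms Internet NTP jitter
print(math.log10(ntp_jitter_s / pps_jitter_s))   # ~4.7 orders of magnitude
```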

Spotting the Intolerant Minority Rule (Part 1) - by Paul Skallas - The Lindy Newsletter

What happens when 95 per cent of people are indifferent, but 5 per cent of people prefer something else? The minority wins. Taleb wrote a classic piece on this phenomenon. Society doesn’t evolve by consensus, voting, majority, committees, verbose meetings, academic conferences, and polling; only a few people suffice to disproportionately move the needle. Once an intolerant minority reaches a tiny percentage of the total population, the majority of the population will naturally succumb to their preferences.

Urban Crude

Los Angeles is the most urban oil field, where the industry operates in cracks, corners, and edges, hidden behind fences, and camouflaged into architecture, pulling oil out from under our feet.

Abstraction and Invariance for Algebraically Indexed Types

Reynolds’ relational parametricity provides a powerful way to reason about programs in terms of invariance under changes of data representation. A dazzling array of applications of Reynolds’ theory exists, exploiting invariance to yield “free theorems”, noninhabitation results, and encodings of algebraic datatypes. Outside computer science, invariance is a common theme running through many areas of mathematics and physics. For example, the area of a triangle is unaltered by rotation or flipping. If we scale a triangle, then we scale its area, maintaining an invariant relationship between the two. The transformations under which properties are invariant are often organised into groups, with the algebraic structure reflecting the composability and invertibility of transformations. In this paper, we investigate programming languages whose types are indexed by algebraic structures such as groups of geometric transformations. Other examples include types indexed by principals (for information flow security) and types indexed by distances (for analysis of analytic uniform continuity properties). Following Reynolds, we prove a general Abstraction Theorem that covers all these instances. Consequences of our Abstraction Theorem include free theorems expressing invariance properties of programs, type isomorphisms based on invariance properties, and nondefinability results indicating when certain algebraically indexed types are uninhabited or only inhabited by trivial programs. We have fully formalised our framework and most examples in Coq.

E-Prime - Wikipedia

E-Prime (short for English-Prime or English Prime, sometimes denoted É or E′) refers to a version of the English language that excludes all forms of the verb to be, including all conjugations, contractions and archaic forms. Bourland and other advocates also suggest that use of E-Prime leads to a less dogmatic style of language that reduces the possibility of misunderstanding or conflict. Kellogg and Bourland describe misuse of the verb to be as creating a “deity mode of speech”, allowing “even the most ignorant to transform their opinions magically into god-like pronouncements on the nature of things”.

Why I love flash frozen food and think you should too (slatestarcodex subreddit comment)

If the grocery store buys frozen Atlantic salmon, why do they defrost it and sell it as fresh? Fortunately, the food industry is aware of the benefits of flash frozen food, and a sizeable amount of the fresh fish, meat, fruits and vegetables sold in Canadian grocery stores has in fact been flash frozen. Unfortunately, consumers have an aversion to frozen food, leading grocery stores to defrost the previously frozen food to make it appear “fresh”.

Is the Quality of Sushi Ruined by Freezing Raw Fish and Squid? A Randomized Double-Blind Trial With Sensory Evaluation Using Discrimination Testing

Evidence from Japan that frozen fish tastes basically as good as unfrozen.

Speeding up atan2f by 50x

atan2 is an important but slow trigonometric function. However, if we’re working with batches of points and willing to live with tiny errors, we can produce an atan2 approximation which is 50 times faster than the standard version provided by glibc. Perhaps more impressively, the approximation produces a result every 2 clock cycles. This is achieved through a bit of maths, modern compiler magic, manual low-level optimizations, and some cool documents from the 50s.
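The article’s real kernel is vectorised C with SIMD and careful compiler coaxing; as a rough sketch of the same shape of trick (argument reduction plus a cheap polynomial; the coefficient below is a textbook approximation, not the article’s), something like this works:

```python
import math

def atan_core(z):
    # Cheap polynomial approximation of atan on [-1, 1]:
    # atan(z) ~= (pi/4)*z + 0.273*z*(1 - |z|), max error roughly 4e-3 rad.
    return (math.pi / 4.0) * z + 0.273 * z * (1.0 - abs(z))

def atan2_approx(y, x):
    """Approximate atan2(y, x): reduce to a ratio with |z| <= 1, fix the quadrant."""
    if x == 0.0 and y == 0.0:
        return 0.0
    if abs(y) <= abs(x):
        r = atan_core(y / x)
        if x < 0.0:
            r += math.pi if y >= 0.0 else -math.pi
        return r
    r = atan_core(x / y)
    return (math.pi / 2.0 - r) if y > 0.0 else (-math.pi / 2.0 - r)

# Worst-case error over a coarse sweep of angles: a few milliradians.
angles = (i / 1000.0 * 2.0 * math.pi - math.pi for i in range(1, 1000))
print(max(abs(atan2_approx(math.sin(t), math.cos(t)) - math.atan2(math.sin(t), math.cos(t)))
          for t in angles))
```

Getting from “mathematically cheap” to “a result every 2 cycles” is where the batching, SIMD and instruction-level analysis in the article come in.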

Particularly interesting is the uiCA (the uops.info Code Analyzer) tool and its execution trace view.

Sand dunes come in two sizes

This is Ralph Bagnold, one of the last generation of the Gentleman Explorer types that did a lot of early geology. Hence, a moderate lunatic.

sand piles tend to come in two modes. The first, ripples, are quite small and rework themselves on really fast time scales. The second, dunes, are quite large (sometimes hundreds and hundreds of meters high). They’re also mobile, but quite slow, which is really an ideal speed when we’re talking about ambulatory hills.

The math is rather fiddly, but it has to do with how air is viscous and slows down to grind against the sand when you’re measuring right up against the ground. Ripples represent the kind of predictable instability and wobble when that happens: chaos in the flow of air becomes a kind of order in the flow of sand. Dunes represent a separate case: a theoretically-unbounded-except-by-the-atmosphere, rich-get-richer scheme where large piles of sand act as sand traps; they get so large because big dunes collect sand faster than open terrain.

Until Mars… the bedforms found there don’t fit this pattern at all. They’re an entire third class of bedform beyond the ripple/dune duality. For one, there’s the size: much too small to be dunes, much too large to be ripples.
