McDonald's sells most of its milkshakes not to children as a dessert but to adults early in the morning. The job to be done of the milkshake was not, as it turned out, delivering the taste of chocolate. Milkshakes were an engaging and enjoyable meal substitute for people who had not had breakfast and needed to consume calories while driving to work.
What EA projects could grow to become megaprojects, eventually spending $100m per year? - EA Forum
EA megaprojects continued - EA Forum
Ben Todd (CEO of 80,000 Hours) says "Effective altruism needs more 'megaprojects'. Most projects in the community are designed to use up to ~$10m per year effectively, but the increasing funding overhang means we need more projects that could deploy ~$100m per year."
Buy a coal mine (suggested by Will MacAskill at EAG2021)
Implement a small pilot of the Nucleic Acid Observatory
Buy promising ML labs
Within the scientific research community, memory information in the brain is commonly believed to be stored in the synapse - a hypothesis famously attributed to psychologist Donald Hebb. However, there is a growing minority who postulate that memory is stored inside the neuron at the molecular (RNA or DNA) level - an alternative postulation known as the cell-intrinsic hypothesis, coined by psychologist Randy Gallistel. In this paper, we review a selection of key experimental evidence from both sides of the argument. We begin with Eric Kandel's studies on sea slugs, which provided the first evidence in support of the synaptic hypothesis. Next, we touch on experiments in mice by John O'Keefe (declarative memory and the hippocampus) and Joseph LeDoux (procedural fear memory and the amygdala). Then, we introduce the synapse as the basic building block of today's artificial intelligence neural networks. After that, we describe David Glanzman's study on dissociating memory storage and synaptic change in sea slugs, and Susumu Tonegawa's experiment on reactivating memories in mice with retrograde amnesia using laser stimulation. From there, we highlight Germund Hesslow's experiment on conditioned pauses in ferrets, and Beatrice Gelber's experiment on conditioning in single-celled organisms without synapses (Paramecium aurelia). This is followed by a description of David Glanzman's experiment on transplanting memory between sea slugs using RNA. Finally, we provide an overview of Brian Dias and Kerry Ressler's experiment on DNA transfer of fear in mice from parents to offspring. We conclude with some potential implications for the wider field of psychology.
Peritext is a novel algorithm for merging versions of a rich-text document. It is a Conflict-free Replicated Data Type (CRDT).
When writers are collaborating asynchronously, it is impossible for an algorithm to always merge edits perfectly. As one example, if two writers are editing the script for a TV show, changes to one episode may require plot changes in future episodes. Since an algorithm cannot do this automatically, human intervention is often necessary to produce the desired final result.
My goal in this post is to convince you that trying to spend as little time as possible on fun socializing, frivolous hobbies, or other leisure is a dangerous impulse. If you notice yourself aiming for the minimum amount of self-care, that's a sign that you should reorient and reprioritize.
Impact is about prioritizing, not agonizing over every hour
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
Listening to audiobooks at 3x speed is born out of a flawed model of learning, and it's the same one that underpins our modern education system. The assumption is that people can acquire knowledge as if it's a substance they can pour into their minds.
A lecture has been well described as the process whereby the notes of the teacher become the notes of the student without passing through the mind of either. (Mortimer Adler)
The smartest people I've met reject the "Water in a Cup" theory. They focus less on consuming as much information as possible and more on cultivating the deepest possible understanding of the ideas that resonate with them most.
Nano ID is quite comparable to UUID v4 (random-based). It has a similar number of random bits in the ID (126 in Nano ID and 122 in UUID), so it has a similar collision probability: for there to be a one-in-a-billion chance of duplication, 103 trillion version 4 IDs must be generated. There are three main differences between Nano ID and UUID v4:
1. Nano ID uses a bigger alphabet, so a similar number of random bits are packed into just 21 symbols instead of 36.
2. The Nano ID code is about 4 times smaller than the uuid/v4 package: 130 bytes instead of 483.
3. Because of memory-allocation tricks, Nano ID is 2 times faster than UUID.
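To make the difference concrete, here is a minimal comparison sketch, assuming the `nanoid` and `uuid` npm packages are installed (the example IDs in the comments are illustrative, since both are random):

```ts
import { nanoid } from 'nanoid';
import { v4 as uuidv4 } from 'uuid';

const shortId = nanoid();  // e.g. "V1StGXR8_Z5jdHi6B-myT" -- 21 characters from a 64-symbol, URL-safe alphabet
const longId = uuidv4();   // e.g. "110ec58a-a0f2-4ac4-8393-c866d813b8d1" -- 36 characters of hex plus dashes

console.log(shortId.length, longId.length); // 21 36
```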
GitHub - ulid/spec: The canonical spec for ulid
UUID can be suboptimal for many use cases because:
- It isn't the most character-efficient way of encoding 128 bits of randomness
- UUID v1/v2 is impractical in many environments, as it requires access to a unique, stable MAC address
- UUID v3/v5 requires a unique seed and produces randomly distributed IDs, which can cause fragmentation in many data structures
- UUID v4 provides no other information than randomness, which can cause fragmentation in many data structures
Instead, herein is proposed ULID: ulid() // 01ARZ3NDEKTSV4RRFFQ69G5FAV
- 128-bit compatibility with UUID
- 1.21e+24 unique ULIDs per millisecond
- Lexicographically sortable!
- Canonically encoded as a 26 character string, as opposed to the 36 character UUID
- Uses Crockford's base32 for better efficiency and readability (5 bits per character)
- Case insensitive
- No special characters (URL safe)
- Monotonic sort order (correctly detects and handles the same millisecond)
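For illustration, here is a minimal TypeScript sketch of that layout: a 48-bit millisecond timestamp followed by 80 bits of randomness, encoded with Crockford's base32 into 10 + 16 = 26 characters. This is only a sketch of the structure, not the reference implementation; it uses Math.random() rather than a cryptographically secure source and omits the monotonic same-millisecond handling listed above.

```ts
// Crockford's base32 alphabet: no I, L, O, or U, so IDs stay unambiguous and URL-safe.
const ENCODING = '0123456789ABCDEFGHJKMNPQRSTVWXYZ';

// Encode the millisecond timestamp as `len` base32 characters, most significant first.
function encodeTime(timeMs: number, len: number): string {
  let out = '';
  for (let i = 0; i < len; i++) {
    out = ENCODING[timeMs % 32] + out;
    timeMs = Math.floor(timeMs / 32);
  }
  return out;
}

// Fill `len` characters with randomness (a real implementation would use a
// cryptographically secure RNG such as crypto.getRandomValues).
function encodeRandom(len: number): string {
  let out = '';
  for (let i = 0; i < len; i++) {
    out += ENCODING[Math.floor(Math.random() * 32)];
  }
  return out;
}

// 10 time characters + 16 random characters = 26, lexicographically sortable by creation time.
function ulid(): string {
  return encodeTime(Date.now(), 10) + encodeRandom(16);
}

console.log(ulid()); // e.g. 01ARZ3NDEKTSV4RRFFQ69G5FAV
```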
Proof-of-Stake and Stablecoins: A Blockchain Centralization Dilemma
I figured it's time to delve into three related concepts that are broader than just Ethereum. The first is about the trade-offs of proof-of-stake as a consensus mechanism in general, the second is the stablecoin centralization problem, and the third is the spectrum of centralization that various smart contract chains use to compete with each other on fees. All three tie together because they affect how truly decentralized a proof-of-stake smart contract blockchain can be compared to the Bitcoin network, and how they can perform relative to each other in hostile or non-hostile regulatory environments.
A friend of mine at SpaceX says that Musk's biggest accomplishment is giving bright minds meaningful work. Are you working on something greater than yourself? Interplanetary life is a wild and audacious goal; it's not easy, and that's precisely what makes it aspirational.
Meanwhile, for tech workers, the deterioration of public infrastructure changes how people view the city: as less of a home and more of a hotel, a temporary gold mine that can be abandoned once riches are made, not a place to invest in.
If you can't wait to leave, you won't engage in community-building, you won't interact with strangers... The result is a low-trust society. It's young people who have no idea what it could be like to live in a place with a healthy social fabric. I went to buy a toothbrush at Walgreens, and even then had to ask an employee to unlock the cabinet that housed a repertoire of $5 items.
I'm glad you asked the question of where the real problem with SF lies, because it seems to me that it extends beyond the immediately vivid taglines of crime and housing. I wonder how happy its population would be even if it was a functional city. At least considering tech, the faction of residents I'm most familiar with, I wonder how much the instrumentalization and endless drive for efficiency that permeates private lives also causes a sort of spiritual malaise. These issues go beyond SF; maybe they're strongest there, but we certainly see the lack of meaning in lives all over the West. Family and community, maybe also religious practice, used to provide for that, while modernity has left a gaping hole in many lives that is yet to be filled. In a place like SF with few children and few old people, thoughts of posterity are scarce. Individuals remain their own center of importance their entire lives. Those problems are much harder to fix.
What decides the destiny of Western man? Credit scores he has only intermittent access to. Regulations he has not read. HR codes he had no part in writing.
- Bulldozer: single actors can do important and meaningful, but potentially risky and disruptive, things without asking for permission.
- Vetocracy: doing anything potentially disruptive and controversial requires getting a sign-off from a large number of different and diverse actors, any of whom could stop it.
Bellard's formula is used to calculate the nth digit of π in base 16. One important application is verifying computations of all digits of π performed by other means.
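For reference, Bellard's 1997 BBP-type series is the following; because every term is a power of two divided by a small linear denominator, the dth hexadecimal digit can be extracted with modular exponentiation without computing the digits before it.

$$
\pi = \frac{1}{2^{6}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{2^{10n}} \left( -\frac{2^{5}}{4n+1} - \frac{1}{4n+3} + \frac{2^{8}}{10n+1} - \frac{2^{6}}{10n+3} - \frac{2^{2}}{10n+5} - \frac{2^{2}}{10n+7} + \frac{1}{10n+9} \right)
$$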