In one way we think a great deal too much of the atomic bomb. "How are we to live in an atomic age?" I am tempted to reply: "Why, as you would have lived in the sixteenth century when the plague visited London almost every year […]; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents. In other words, do not let us begin by exaggerating the novelty of our situation. Believe me, dear sir or madam, you and all whom you love were already sentenced to death before the atomic bomb was invented… It is perfectly ridiculous to go about whimpering and drawing long faces because the scientists have added one more chance of painful and premature death to a world which already bristled with such chances and in which death itself was not a chance at all, but a certainty. […] If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs.
We present a new approach to e-matching based on relational join; in particular, we apply recent database query execution techniques to guarantee worst-case optimal run time. Compared to the conventional backtracking approach that always searches the e-graph "top down", our new relational e-matching approach can better exploit pattern structure by searching the e-graph according to an optimized query plan. We also establish the first data complexity result for e-matching, bounding run time as a function of the e-graph size and output size. We prototyped and evaluated our technique in the state-of-the-art egg e-graph framework. Compared to a conventional baseline, relational e-matching is simpler to implement and orders of magnitude faster in practice.
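The reduction at the heart of the paper is easy to sketch: store each function symbol as a table mapping child e-class IDs to the e-class of the parent e-node, and compile a pattern into a conjunctive query over those tables. The toy Python below (an illustration of the idea, not egg's actual implementation; the relations and IDs are made up) matches the pattern f(g(x), x) with a hash join:

```python
# Toy sketch of e-matching as a relational join. Each function symbol
# gets a table whose rows map child e-class IDs to the e-class ID of
# the parent e-node.

R_f = [  # rows (a, b, root): an e-node f(a, b) lives in e-class `root`
    (1, 2, 5),
    (3, 2, 6),
]
R_g = [  # rows (x, a): an e-node g(x) lives in e-class `a`
    (2, 1),
    (4, 3),
]

# The pattern f(g(x), x) compiles to the conjunctive query
#   Q(root, x) :- R_f(a, x, root), R_g(x, a)
# which we answer with a hash join on the shared variables (x, a).
def ematch_f_g_pattern():
    g_index = set(R_g)  # hash R_g for O(1) membership tests
    for a, x, root in R_f:
        if (x, a) in g_index:
            yield {"x": x, "root": root}

print(list(ematch_f_g_pattern()))  # [{'x': 2, 'root': 5}]
```

The worst-case optimality in the paper comes from answering such queries with generic-join-style algorithms rather than one fixed pairwise join order; this sketch only shows the pattern-to-query reduction.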
The probability that /A/ rolls a higher number than /B/, the probability that /B/ rolls higher than /C/, and the probability that /C/ rolls higher than /A/ are all 5/9.
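The claim is easy to verify by brute force. Here is a quick Python check using one classic set of intransitive dice faces that realizes these odds (the specific face values are an assumption; the dice in the linked article may differ):

```python
from itertools import product
from fractions import Fraction

# One classic set of intransitive dice (assumed face values).
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """Probability that a roll of die x exceeds a roll of die y."""
    wins = sum(a > b for a, b in product(x, y))
    return Fraction(wins, len(x) * len(y))

print(p_beats(A, B), p_beats(B, C), p_beats(C, A))  # 5/9 5/9 5/9
```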
Compares nine recipes
The Capitol Hill Babysitting Cooperative (CHBC) is a cooperative located in Washington, D.C., whose purpose is to fairly distribute the responsibility of babysitting between its members. The co-op is often used as an allegory for a demand-oriented model of an economy. The allegory illustrates several economic concepts, including the paradox of thrift and the importance of the money supply to an economy's well-being. At first, new members of the co-op felt, on average, that they should save more scrip before they began spending. So they babysat whenever the opportunity arose, but did not spend the scrip they acquired. Since babysitting opportunities only arise when other couples want to go out, there was a shortage of demand for babysitting. This illustrates the phenomenon known as the paradox of thrift. The administration's initial reaction to the co-op's recession was to add new rules. But the measures did not resolve the inadequate demand for babysitting. Eventually, the co-op was able to alleviate the issue by giving new members thirty hours' worth of scrip, but only requiring them to return twenty when they left the co-op. Within a few years a new problem arose: there was too much scrip and a shortage of babysitting. As new members joined, more scrip was added to the system until couples had too much, but they were not able to spend it because no one else wanted to babysit. In general, the cooperative experienced regular problems because the administration took in more scrip than it spent, and at times the amount issued to new members added too much scrip to the system.
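A toy simulation makes the paradox of thrift mechanical. In this sketch (every number is an illustrative assumption, not data from the co-op), a couple only spends when its balance exceeds a savings target; since everyone starts below the target and scrip is only earned when someone else spends, activity freezes completely:

```python
import random

# Toy model of the co-op's recession (all numbers are illustrative).
N_COUPLES = 20
START_SCRIP = 2      # scrip each couple starts with
SAVINGS_TARGET = 5   # couples refuse to spend below this balance

balances = [START_SCRIP] * N_COUPLES
nights_out = 0

for night in range(30):
    # Only couples above their savings target are willing to spend.
    spenders = [i for i, b in enumerate(balances) if b > SAVINGS_TARGET]
    sitters = [i for i in range(N_COUPLES) if i not in spenders]
    for s in spenders:
        if not sitters:
            break
        sitter = sitters.pop(random.randrange(len(sitters)))
        balances[s] -= 1       # spender pays one scrip...
        balances[sitter] += 1  # ...which the sitter earns
        nights_out += 1

# No one ever clears the target, so no one spends, so no one can earn:
# total activity is zero despite everyone wanting to trade.
print(nights_out)  # 0
```

Raising every balance above the target (the co-op's eventual fix of issuing more scrip to new members) unfreezes trade at once; issuing too much produces the opposite failure described above, where everyone wants to spend and no one wants to sit.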
AIPs are design documents that summarize Google's API design decisions. They also provide a framework and system for others to document their own API design rules and practices.
Impressions of an Indian graduate student new to America
/This annex describes specifications for recommended defaults for the use of Unicode in the definitions of general-purpose identifiers, immutable identifiers, hashtag identifiers, and in pattern-based syntax. It also supplies guidelines for use of normalization with identifiers./
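As a concrete point of reference, Python 3 bases its identifier syntax on these UAX #31 defaults (an XID_Start character followed by XID_Continue characters, normalized with NFKC per PEP 3131), so the standard library can demonstrate both the syntax and the normalization guidance:

```python
import unicodedata

# Python 3 identifiers follow UAX #31's default identifier syntax.
print("café".isidentifier())   # True
print("2fast".isidentifier())  # False: a digit cannot start an identifier

# The annex also recommends normalizing identifiers so that visually
# identical spellings compare equal; Python uses NFKC.
a = "caf\u00e9"    # precomposed é
b = "cafe\u0301"   # e + combining acute accent
print(a == b)                                # False: different code points
print(unicodedata.normalize("NFKC", a) ==
      unicodedata.normalize("NFKC", b))      # True: same identifier
```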
It states that if voters and policies are distributed along a one-dimensional spectrum, with voters ranking alternatives in order of proximity, then any voting method which satisfies the Condorcet criterion will elect the candidate closest to the median voter. In particular, a majority vote between two options will do so.
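A small worked example (with made-up positions) shows the mechanism: when each voter picks the closer of two candidates, the candidate nearer the median voter carries every pairwise majority vote.

```python
# Toy check of the median voter theorem (positions are made up).
voters = [0.1, 0.3, 0.5, 0.8, 0.9]   # ideal points on a 1-D spectrum
candidates = {"L": 0.2, "M": 0.55, "R": 0.95}

def majority_winner(c1, c2):
    """Pairwise majority vote: each voter picks the closer candidate."""
    v1 = sum(abs(v - candidates[c1]) < abs(v - candidates[c2]) for v in voters)
    return c1 if v1 > len(voters) - v1 else c2

# M sits closest to the median voter (0.5) and beats each rival
# head-to-head, making it the Condorcet winner.
print(majority_winner("M", "L"), majority_winner("M", "R"))  # M M
```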
Excellent video; focuses especially on SO(3) and SU(2).
Like a Taylor series, but better behaved for functions that don't go to infinity: the approximation goes to 0 instead, and often stays close to the function for longer.
The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous".
Links from Steve Omohundro:
I was halfway through a PhD on software testing and verification before joining Anthropic (opinions my own, etc), and I'm /less/ convinced than Eliezer about theorem-proving for AGI safety.
I want to push back against the idea that ANNs are "vectors of floating points" and therefore it's impossible to prove things about them. Many algorithms involve continuous variables and we can prove things about them. Support vector machines are also learning algorithms that are "vectors of floating points", and we have a pretty good theory of how they work. In fact, there already is a sizable body of theoretical results about ANNs, even if it still falls significantly short of what we need. The biggest problem is not necessarily in the "floating points". The problem is that we still don't have satisfactory models of what an "agent" is and what it means for an agent to be "aligned". But we do have some leads. And once we solve this part, there's no reason in principle why it cannot be combined with some (hitherto unknown) theory of generalization bounds for ANNs.
this post seems to argue (reasonably convincingly, in my view) that the space of possible abstractions ("epistemic representations") is discrete rather than continuous, such that any representation of reality sufficiently close to "human compressions" would in fact /be/ using those human compressions, rather than an arbitrarily similar set of representations that comes apart in the limit of strong optimization pressure
"Of course, Andreessen was correct to claim that software was eating the world, but he had the causation backwards. Software's high valuations were not the result of its extraordinary technological promise. Rather, the software sector had become the primary locus of innovation because of its high valuations. Its financial characteristics allowed software to attract growth investment while other sectors no longer could.