Reviving recipes written on ancient Babylonian tablets. A good idea?
I like the Venn diagram from this article.
Key quote:
If it's so absurd to conceive of a human learning a new board game through pure RL, shouldn't we wonder if it's a flawed framework for how AI agents should learn? Does it really make sense to start learning a new skill based only on its reward signal, with neither prior experience nor higher-level instruction?
This is mostly an argument against learning from scratch. More thoroughly covered in Gary Marcus' Innateness, AlphaZero, and Artificial Intelligence.
The author of this piece is also editor of the interesting Skynet Today.
by Stuart Russell
Abstract. Logic and probability theory are two of the most important branches of mathematics and each has played a significant role in artificial intelligence (AI) research. Beginning with Leibniz, scholars have attempted to unify logic and probability. For "classical" AI, based largely on first-order logic, the purpose of such a unification is to handle uncertainty and facilitate learning from real data; for "modern" AI, based largely on probability theory, the purpose is to acquire formal languages with sufficient expressive power to handle complex domains and incorporate prior knowledge. This paper provides a brief summary of an invited talk describing efforts in these directions, focusing in particular on open-universe probability models that allow for uncertainty about the existence and identity of objects.
It seems the real details are in papers about BLOG.
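To make "open-universe" concrete: the number and identity of objects is itself uncertain, so a model first samples how many objects exist and only then samples their attributes. Below is a toy Haskell sketch under that reading; it is my own illustration with a made-up uniform prior, not BLOG's syntax or semantics, and it assumes the random package is available.

```haskell
import System.Random (StdGen, mkStdGen, randomR)

-- Hypothetical toy model: the world contains an unknown number of aircraft,
-- so object existence is itself a random variable.
data Aircraft = Aircraft { speed :: Double } deriving Show

sampleWorld :: StdGen -> [Aircraft]
sampleWorld g0 = go n g1
  where
    (n, g1) = randomR (0, 5 :: Int) g0            -- how many objects exist?
    go :: Int -> StdGen -> [Aircraft]
    go 0 _ = []
    go k g = let (s, g') = randomR (100, 300) g   -- one attribute per object
             in Aircraft s : go (k - 1) g'

main :: IO ()
main = print (sampleWorld (mkStdGen 2024))
```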
San Francisco's Fire Department is one of the few left in the United States that still uses wooden ladders. Each is made by hand at a dedicated workshop. Some have been in rotation for nearly a century. We'll get to the why and how, but hang on: Wouldn't a wooden ladder burn? Yes. They go up in flames.
Don't mess with 'em.
Neither LL nor LR(1) parsers can handle this. In fact, GHC's parser uses a hack: it parses patterns as expressions, and only later checks that the expression it parsed was a valid pattern!
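As a concrete illustration (my own example, not from the linked post): in a do block the same tuple syntax can be either a pattern or an expression, and the parser cannot tell which until it reaches, or fails to reach, the `<-`.

```haskell
-- A do-block statement is either "pat <- expr" or just "expr". The left-hand
-- side can be arbitrarily long, so no fixed lookahead settles which it is;
-- GHC parses the prefix as an expression and re-checks it as a pattern once
-- a "<-" shows up.
main :: IO ()
main = do
  (line1, line2) <- (,) <$> getLine <*> getLine   -- "(line1, line2)" is a pattern here
  print (line1, line2)                            -- ...and an expression here
```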
Michael Arntzenius (rntz) has a lot of interesting writing.
Abstract. A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly with respect to some laws. One way to reduce the development and maintenance effort of bidirectional transformations is to have specialized languages in which the resulting programs are bidirectional by construction, giving rise to the paradigm of bidirectional programming. In this paper, we develop a framework for applicative-style and higher-order bidirectional programming, in which we can write bidirectional transformations as unidirectional programs in standard functional languages, opening up access to the bundle of language features previously only available to conventional unidirectional languages. Our framework essentially bridges two very different approaches of bidirectional programming, namely the lens framework and Voigtländer's semantic bidirectionalization, creating a new programming style that is able to obtain benefits from both.
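For context, the lens framework the abstract refers to boils down to a pair of functions subject to round-trip laws. A minimal Haskell sketch of that classic formulation (standard background, not the paper's applicative/higher-order framework):

```haskell
-- A lens: a forward "get" from source to view and a backward "put" that
-- propagates a modified view back into the source.
data Lens s v = Lens
  { get :: s -> v          -- forward direction
  , put :: s -> v -> s     -- backward direction
  }

-- Example: a pair is the source, its first component is the view.
fstLens :: Lens (a, b) a
fstLens = Lens { get = fst, put = \(_, b) a -> (a, b) }

-- Well-behavedness laws a lens is expected to satisfy:
--   put s (get s)   == s    (GetPut)
--   get (put s v)   == v    (PutGet)

main :: IO ()
main = print (put fstLens ("old", True) "new")   -- ("new",True)
```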
Really nice introduction to topology.