Interdisciplinary team cooks 4000-year-old Babylonian stews

Reviving recipes written on ancient Babylonian tablets. A good idea?

Reinforcement learning’s foundational flaw

I like the Venn diagram from this article.

Key quote:

If it’s so absurd to conceive of a human learning a new board game through pure RL, shouldn’t we wonder if it’s a flawed framework for how AI agents should learn? Does it really make sense to start learning a new skill based only on its reward signal, with neither prior experience nor higher-level instruction?

This is mostly an argument against learning from scratch. More thoroughly covered in Gary Marcus’ Innateness, AlphaZero, and Artificial Intelligence.

The author of this piece is also editor of the interesting Skynet Today.
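
For concreteness, here is a minimal Haskell sketch of the tabular Q-learning update, the textbook embodiment of the “pure RL” setup the quote objects to. The names and the tiny action set are illustrative, not taken from the article; the point is that the value table starts empty (implicitly all zeros) and the only thing that ever adjusts it is the scalar reward.

```haskell
import qualified Data.Map as M

type GameState = Int
type Action    = Int
type QTable    = M.Map (GameState, Action) Double

-- One tabular Q-learning update. The agent observes a transition
-- (s, a, r, s') and nudges its estimate of Q(s, a) toward the reward
-- plus the discounted value of the best known follow-up action.
-- Starting from M.empty means no prior knowledge and no instruction:
-- everything the agent will ever "know" enters through r.
qUpdate :: Double                                -- learning rate
        -> Double                                -- discount factor
        -> [Action]                              -- available actions (toy: [0..3])
        -> QTable
        -> (GameState, Action, Double, GameState)
        -> QTable
qUpdate alpha gamma actions q (s, a, r, s') = M.insert (s, a) new q
  where
    old    = M.findWithDefault 0 (s, a) q
    future = maximum (0 : [ M.findWithDefault 0 (s', a') q | a' <- actions ])
    new    = old + alpha * (r + gamma * future - old)
```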

Unifying Logic and Probability: A New Dawn for AI?

by Stuart Russell

Abstract. Logic and probability theory are two of the most important branches of mathematics and each has played a significant role in artificial intelligence (AI) research. Beginning with Leibniz, scholars have attempted to unify logic and probability. For “classical” AI, based largely on first-order logic, the purpose of such a unification is to handle uncertainty and facilitate learning from real data; for “modern” AI, based largely on probability theory, the purpose is to acquire formal languages with sufficient expressive power to handle complex domains and incorporate prior knowledge. This paper provides a brief summary of an invited talk describing efforts in these directions, focusing in particular on open-universe probability models that allow for uncertainty about the existence and identity of objects.

It seems the real details are in the papers about BLOG (Bayesian Logic).
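
For a rough feel of what “open-universe” means, here is a toy generative model in Haskell (not BLOG syntax and not the paper’s model; the aircraft/radar-blip framing and the uniform priors are purely illustrative). The number of objects is itself sampled (existence uncertainty), and each observation is produced by an object whose identity is never given (identity uncertainty); inference then has to reason about both.

```haskell
import Control.Monad (replicateM)
import System.Random (randomRIO)

-- Toy open-universe generative model: how many objects exist is itself
-- random, and each observation comes from an object of unknown identity.
sampleWorld :: IO (Int, [Double], [Double])
sampleWorld = do
  n         <- randomRIO (1, 5)                   -- unknown number of aircraft
  positions <- replicateM n (randomRIO (0, 100))  -- one latent position each
  blips     <- replicateM 8 $ do                  -- radar blips, origin not observed
                 src   <- randomRIO (0, n - 1)    -- which aircraft produced this blip?
                 noise <- randomRIO (-1, 1)
                 pure (positions !! src + noise)
  pure (n, positions, blips)

main :: IO ()
main = sampleWorld >>= print
```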

Inside San Francisco’s Fire Department, Where Ladders Are Made by Hand

San Francisco’s Fire Department is one of the few left in the United States that still uses wooden ladders. Each is made by hand at a dedicated workshop. Some have been in rotation for nearly a century. We’ll get to the why and how, but hang on: Wouldn’t a wooden ladder burn? Yes. They go up in flames.

Nematodes

Don’t mess with ‘em.

Parsing list comprehensions is hard

Neither LL nor LR(1) parsers can handle this. In fact, GHC’s parser uses a hack: it parses patterns as expressions, and only later checks that the expression it parsed was a valid pattern!
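
To see why, here is a small illustration (mine, not from the linked post): in both comprehensions below the qualifier after the | begins with exactly the same tokens, and the parser cannot know whether (a, b) is a pattern or an expression until it reaches, or fails to reach, the <-, which may be arbitrarily many tokens away.

```haskell
pairs :: [(Int, Int)]
pairs = [(1, 2), (3, 4)]

a, b :: Int
a = 1
b = 2

-- Both qualifiers start with the tokens `( a , b )`.
ys1, ys2 :: [Int]
ys1 = [ a + b | (a, b) <- pairs ]    -- `(a, b)` turns out to be a pattern
ys2 = [ a + b | (a, b) == (1, 2) ]   -- `(a, b)` turns out to be an expression (a guard)

main :: IO ()
main = print (ys1, ys2)
```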

Michael Arntzenius (rntz) has a lot of interesting writing.

Applicative bidirectional programming: Mixing lenses and semantic bidirectionalization

Abstract. A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly with respect to some laws. One way to reduce the development and maintenance effort of bidirectional transformations is to have specialized languages in which the resulting programs are bidirectional by construction—giving rise to the paradigm of bidirectional programming. In this paper, we develop a framework for applicative-style and higher-order bidirectional programming, in which we can write bidirectional transformations as unidirectional programs in standard functional languages, opening up access to the bundle of language features previously only available to conventional unidirectional languages. Our framework essentially bridges two very different approaches of bidirectional programming, namely the lens framework and Voigtländer’s semantic bidirectionalization, creating a new programming style that is able to obtain benefits from both.
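
For readers new to the area, here is the textbook notion of a lens, one half of the pair of approaches the paper bridges: a get/put pair between a source and a view, subject to round-trip laws. This is the standard definition in plain Haskell, not the paper’s combinators.

```haskell
-- A lens: a pair of mappings between a source s and a view v.
data Lens s v = Lens
  { get :: s -> v        -- read the view out of the source
  , put :: s -> v -> s   -- write an updated view back into the source
  }

-- Well-behavedness laws relating the two directions:
--   put s (get s) == s      (GetPut)
--   get (put s v) == v      (PutGet)

-- Example: a lens focusing on the first component of a pair.
fstL :: Lens (a, b) a
fstL = Lens { get = fst
            , put = \(_, b) a -> (a, b) }

main :: IO ()
main = do
  print (get fstL ((1, "x") :: (Int, String)))     -- 1
  print (put fstL ((1, "x") :: (Int, String)) 42)  -- (42, "x")
```

The paper’s other ingredient, Voigtländer’s semantic bidirectionalization, instead derives the backward direction automatically from an ordinary polymorphic forward function.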

Topology as a theory of touching

Really nice introduction to topology.