My somewhat pious belief was that if people focused more on remembering the basics, and worried less about the "difficult" high-level issues, they'd find the high-level issues took care of themselves.
But while I held this as a strong conviction about other people, I never realized it also applied to me. And I had no idea at all how strongly it applied to me. Using Anki to read papers in new fields disabused me of this illusion. I found it almost unsettling how much easier Anki made learning such subjects. I now believe memory of the basics is often the single largest barrier to understanding. If you have a system such as Anki for overcoming that barrier, then you will find it much, much easier to read into new fields.
On October 31, 1832, a young naturalist named Charles Darwin walked onto the deck of the HMS Beagle and realized that the ship had been boarded by thousands of intruders. Tiny red spiders, each a millimeter wide, were everywhere. The ship was 60 miles offshore, so the creatures must have floated over from the Argentinian mainland. "All the ropes were coated and fringed with gossamer web," Darwin wrote.
Spiders can detect electric fields and engage in "ballooning".
Many of the spiders actually managed to take off, despite being in closed boxes with no airflow.
Infinity plus one on general relativity:
General relativity says that "straight" should mean "geodesic", which should mean "free-falling." Einstein's goal was to figure out how to write down a metric for spacetime so that geodesics are the paths someone would follow if they were freely falling. Thus, the basic idea of general relativity is that freely-falling people (and matter and energy) follow geodesics in spacetime, but also that matter and energy bend spacetime itself, which in turn determines how everything travels.
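For concreteness, the two statements in that summary have standard textbook forms; the notation below is the usual one, not taken from the linked post:

```latex
% Freely-falling worldlines x^\mu(\tau) are geodesics of the metric g_{\mu\nu}:
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^{\mu}_{\ \alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0

% Matter and energy bend spacetime (Einstein's field equations):
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}
```

The Christoffel symbols \(\Gamma\) are built from derivatives of the metric, so the second equation fixes the geometry and the first reads off the free-fall paths through it.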
I had heard of gravitational lensing, but had never seen the fantastic photos it produces:
For reasons I don't understand, one of the lenses produces a "cross".
More at the Wikipedia page for Einstein ring, including a double ring and one that looks like the Cheshire cat.
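As an aside (this is standard lensing theory, not from the Wikipedia page): for a point-mass lens sitting directly between us and the source, the ring's angular radius has a simple closed form:

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
```

where \(M\) is the lens mass and \(D_L\), \(D_S\), \(D_{LS}\) are the angular-diameter distances to the lens, to the source, and between the two.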
Finding an effective way to condition on or fuse sources of information is an open research problem, and in this article, we concentrate on a specific family of approaches we call feature-wise transformations. We will examine the use of feature-wise transformations in many neural network architectures to solve a surprisingly large and diverse set of problems; their success, we will argue, is due to being flexible enough to learn an effective representation of the conditioning input in varied settings. In the language of multi-task learning, where the conditioning signal is taken to be a task description, feature-wise transformations learn a task representation which allows them to capture and leverage the relationship between multiple sources of information, even in remarkably different problem settings.
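To make "feature-wise transformation" concrete: in the FiLM-style case the article covers, a small network predicts a per-feature scale and shift from the conditioning input and applies them to the main network's features. Here is a minimal PyTorch sketch of that idea; the class and variable names are my own, not from the article.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: scale and shift each feature
    channel using parameters predicted from a conditioning vector.
    (Illustrative sketch; names here are hypothetical.)"""

    def __init__(self, cond_dim: int, num_features: int):
        super().__init__()
        # Predict one gamma (scale) and one beta (shift) per channel.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_features)

    def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width); cond: (batch, cond_dim)
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        # Broadcast the per-channel parameters over the spatial dimensions.
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return gamma * features + beta

# Example: condition a 64-channel feature map on a 10-dim task embedding.
film = FiLM(cond_dim=10, num_features=64)
x = torch.randn(8, 64, 16, 16)   # image features
z = torch.randn(8, 10)           # conditioning signal (e.g. a task description)
out = film(x, z)                 # same shape as x: (8, 64, 16, 16)
```

Because gamma and beta are one number per channel rather than per spatial location, the conditioning signal modulates what the features represent without needing to know where things are in the input.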
Eric Weinstein on the crisis of late capitalism