Here’s an insight I had about how incentives work in practice, that I’ve not seen explained in an econ textbook/course. There are at least three ways in which incentives affect behaviour: 1) via consciously motivating agents, 2) via unconsciously reinforcing certain behaviour, and 3) via selection effects. I think perhaps 2) and probably 3) are… Continue reading Unconscious Economies
I think there are four natural kinds of problems and learning to identify them helped me see clearly what’s bad with philosophy, good with start-ups, and many things in-between.
Human brains still outperform deep learning algorithms in a wide variety of tasks, such as playing soccer or knowing that it’s a bad idea to drive off a cliff without having to try first (for more formal examples, see Lake et al., 2017; Hinton, 2017; LeCun, 2018; Irpan, 2018). This fact can be taken as evidence for two different hypotheses: 1) In order to develop human-level AI, we have to develop entirely new learning algorithms. At the moment, AI is a deep conceptual problem. 2) In order to develop human-level AI, we basically just have to improve current deep learning algorithms (and their hardware) a lot. At the moment, AI is an engineering problem. This post explores whether insights from neuroscience, in particular the question of whether the brain utilises backpropagation of error, might help resolve this question.
There lies a paradox at the heart of human rationality. On the one hand, the brain seems statistically optimal in a wide range of domains, including motor control (Körding & Wolpert, 2004), regulation of energy consumption (Taylor & Faisal, 2011), and low-level cognitive domains such as perceptual inference (Ernst & Banks, 2002; Feldman, 2009) and visual sampling (Itti & Baldi, 2009). On the other hand, research in behavioural economics has uncovered many well-known instances of irrationality in human behaviour. This post explores whether assuming that the brain optimises the costs of information-processing, in addition to expected reward, might help resolve this paradox.
How do you save the world? Presumably, you build a plan that's grounded in the way the world actually works, and that would look different if the world was different. This sounds straightforward. Unfortunately, I think most plans are not designed like this.
The Copernican revolution was a pivotal event in the history of science. Yet I believe that the lessons most often taught from this period are largely historically inaccurate, and that the most important lessons are basically not taught at all. As it turns out, the history of the Copernican revolution carries important lessons about rationality -- about what it is and is not like to try to figure out how the world actually works. Also, it’s relevant to deep learning, but it’ll take me about 5000 words on renaissance astronomy to make that point.