Christian Wolf, "Links between literature on addressing transformer complexity and work on recurrent neural networks" (Mar 21, 2021). This post addresses some similarities in the ways the Deep Learning field has addressed two seemingly different problems, namely A) recent work on…
Christian Wolf, "Cow-Sharks: exploring the Shape vs. Texture biases in Deep Neural Networks" (Nov 30, 2020). The context for this post is recent papers providing evidence that the prediction performance of deep networks trained for image…
Christian Wolf, "What is translation equivariance, and why do we use convolutions to get it?" (Oct 5, 2020). Multi-layer Perceptrons (MLPs) are standard neural networks with fully connected layers, where each input unit is connected with each…
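The excerpt above only introduces the setup; as a minimal sketch not taken from the post (assuming PyTorch and circular padding so that border effects do not obscure the property), translation equivariance of a convolution can be checked numerically and contrasted with a fully connected layer:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A 3x3 convolution with circular padding, and a fully connected layer on the
# same 8x8 single-channel input, for comparison.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False, padding_mode="circular")
fc = nn.Linear(8 * 8, 8 * 8, bias=False)

x = torch.randn(1, 1, 8, 8)
x_shifted = torch.roll(x, shifts=2, dims=-1)  # translate the input 2 pixels to the right

# Convolution: shifting the input and then convolving gives the same result as
# convolving and then shifting the output (translation equivariance).
conv_then_shift = torch.roll(conv(x), shifts=2, dims=-1)
shift_then_conv = conv(x_shifted)
print("conv equivariance gap:", (conv_then_shift - shift_then_conv).abs().max().item())

# Fully connected layer: no such relationship holds in general.
fc_then_shift = torch.roll(fc(x.flatten(1)).view(1, 1, 8, 8), shifts=2, dims=-1)
shift_then_fc = fc(x_shifted.flatten(1)).view(1, 1, 8, 8)
print("fc equivariance gap:", (fc_then_shift - shift_then_fc).abs().max().item())
```

With a random MLP the gap is large, while for the convolution it is at floating-point precision, which is the property the post's title refers to.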
Christian Wolf, "Generalization and random labels" (Oct 6, 2018). Caveat: I wrote this story initially on Dec. 21st, 2016, but moved it to Medium in Oct. 2018.