1. The bias-variance tradeoff is illusory: one always chooses bias over variance, because the 'human learner' has a natural bias towards the ability to generalize (cf. Gigerenzer, Bengio)
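A toy illustration of this point (the setup is mine, not the author's): a deliberately biased low-degree polynomial fit versus a high-variance high-degree fit on noisy samples of a smooth target. The "choice of bias" is what lets the simple model generalize.

```python
import numpy as np

# Hypothetical setup: noisy samples of a smooth target function.
rng = np.random.default_rng(0)

def make_data(n=15, noise=0.4):
    x = np.linspace(-1, 1, n)
    y = np.sin(2 * x) + rng.normal(0, noise, n)
    return x, y

x_test = np.linspace(-1, 1, 200)
y_true = np.sin(2 * x_test)

# Compare a biased linear fit (degree 1) with a flexible fit (degree 10)
# over many resampled training sets.
errors = {1: [], 10: []}
for _ in range(200):
    x, y = make_data()
    for deg in errors:
        coef = np.polyfit(x, y, deg)
        errors[deg].append(np.mean((np.polyval(coef, x_test) - y_true) ** 2))

mse_low, mse_high = np.mean(errors[1]), np.mean(errors[10])
# The biased model wins here: its bias is precisely what suppresses
# the variance that ruins the flexible model's generalization.
print(mse_low < mse_high)
```

Under this noise level and sample size, the linear model's test error is consistently lower, which is the sense in which one "always chooses the bias".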
2. Learning, before being layered, is mainly sequential: cf. 'AI exponential cognitive growth', etc.: a principle of innovation
3. Symmetry-generalization conjecture: a good learning path is one in which, at step k,
a. the symmetries Sk appear 'naturally'
b. they are sufficient to 'fix' the geometry of the transition k → k + 1
4. Einstein: "Everything Should Be Made as Simple as Possible, But Not Simpler"
5. Example 1: CNNs for images: indeed the simplest natural solution under the constraint of the symmetries of the problem
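A concrete check of why convolution is the 'natural' choice here: a convolution layer is exactly a linear map that commutes with translation, the symmetry of the image domain. A minimal 1-D numpy sketch (function names are mine):

```python
import numpy as np

def circ_conv(x, k):
    """Circular convolution of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([0.5, 0.25, 0.25])

# Convolving a shifted input equals shifting the convolved output:
lhs = circ_conv(np.roll(x, 2), k)   # translate, then convolve
rhs = np.roll(circ_conv(x, k), 2)   # convolve, then translate
print(np.allclose(lhs, rhs))        # translation equivariance
```

This equivariance is the symmetry constraint under which weight sharing across positions (i.e. a CNN) is the simplest solution.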
6. Example 2: Dirac's equation: from
a. model simplicity (first-order derivatives)
b. physical invariance (the Lorentz group)
emerges a new geometry: a Clifford algebra
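The logic of this example can be written out: demanding a first-order equation whose solutions also satisfy the relativistic (Klein-Gordon) dispersion relation forces anticommutation relations on the coefficients, and those relations define a Clifford algebra.

```latex
% First-order ansatz:
(i\gamma^{\mu}\partial_{\mu} - m)\psi = 0
% Applying the conjugate operator must reproduce Klein--Gordon,
% (\partial^{\mu}\partial_{\mu} + m^{2})\psi = 0, which forces
\{\gamma^{\mu}, \gamma^{\nu}\}
  = \gamma^{\mu}\gamma^{\nu} + \gamma^{\nu}\gamma^{\mu}
  = 2\,\eta^{\mu\nu} I
% i.e. the gamma matrices generate the Clifford algebra Cl(1,3):
% the new geometry is not assumed, it is fixed by the two constraints.
```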
7. Example 3: machine reading: I (synonyms) → II x' = x1 + x2 → III x'' = x'1 + x'2 → IV ...
8. Example 4: Gigerenzer's decision tree for classifying incoming heart-attack patients
http://psy2.ucsd.edu/~mckenzie/ToddGigerenzer2000BBS.pdf
I suspect that this 'tree' actually reveals a layered logic
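The 'layered' reading can be made concrete: a fast-and-frugal tree asks one question per level and exits on one branch, so each cue is a sequential layer. A sketch in the spirit of the heart-attack triage tree discussed by Todd & Gigerenzer (cue names and ordering are my paraphrase, not a verified transcription of the paper):

```python
def triage(st_segment_changed, chest_pain_chief_complaint, any_other_cue):
    """Fast-and-frugal tree: one cue per level, one exit per level."""
    if st_segment_changed:                 # layer 1: single dominant cue
        return "coronary care unit"
    if not chest_pain_chief_complaint:     # layer 2: cheap screening cue
        return "regular nursing bed"
    if any_other_cue:                      # layer 3: any further risk cue
        return "coronary care unit"
    return "regular nursing bed"

print(triage(True, False, False))   # layer 1 alone decides
print(triage(False, True, False))   # falls through all three layers
```

Each layer either settles the case or hands it to the next, which is exactly the sequential-then-layered structure conjectured in point 2.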