1. The theory of statistical risk misses the essential point.
2. A paradox: on the one hand, human capacities for generalization appear as the Grail of machine learning; on the other, the field props itself up with a theory that remains at order 0, stuck at the foot of the inductive wall.
3. Real cases where the data constrain the model are rare: almost anything fits, almost always.
4. Dreams of generalization: humans theorize (have priors) and see symmetries.
5. Statistics is a historical fiction: cf. Pascal, Taleb ('Mediocristan'); it is of no practical use for real problems.
6. Statistical reasoning is fundamentally erroneous: at best it pretends to discover a theory that is in fact already known; at worst it raves (overfits).
The MAB (multi-armed bandit) approach, and beyond it Reinforcement Learning, is the only theoretical answer to this worm-eaten foundation.
See also Taleb's convex heuristics, a logic of decision-making.
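To make the bandit alternative concrete, here is a minimal epsilon-greedy MAB sketch: decisions are evaluated by acting, not by fitting a model to passive data. The arm probabilities, epsilon, and step count are illustrative assumptions, not from the post.

```python
import random

def eps_greedy_bandit(probs, steps=10000, eps=0.1, seed=0):
    """Minimal epsilon-greedy multi-armed bandit over Bernoulli arms.

    probs: true (unknown to the agent) success probability of each arm.
    Returns the estimated value of each arm and the average reward earned.
    """
    rng = random.Random(seed)
    counts = [0] * len(probs)    # number of pulls per arm
    values = [0.0] * len(probs)  # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(probs))  # explore a random arm
        else:
            arm = max(range(len(probs)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return values, total / steps

values, avg = eps_greedy_bandit([0.2, 0.5, 0.8])
```

The agent converges on the best arm by interacting with the world rather than by minimizing a statistical risk over a fixed sample, which is the contrast the note draws.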
7. Gigerenzer is one of the very few authors interested in the human capacity for generalization; cf. the 'learning fallacy'.
Do not be confused by the use of the statistical-risk approach in this article: the penalty, or Occam's razor, is only one of the two 'priors' of learning; the second is the search for symmetries.
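The 'penalty' prior mentioned here is the usual regularized-risk term added to the empirical loss. A minimal one-dimensional ridge sketch (the data and the lambda value are illustrative assumptions):

```python
def ridge_1d(xs, ys, lam):
    """Minimize sum_i (y_i - w*x_i)^2 + lam * w^2.

    The lam * w^2 term is the Occam penalty: it shrinks w toward the
    simplest model (w = 0). Closed form: w = sum(x*y) / (sum(x^2) + lam).
    """
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]            # roughly y = 2x
w_unpen = ridge_1d(xs, ys, 0.0)   # close to 2.0
w_pen = ridge_1d(xs, ys, 10.0)    # shrunk toward 0 by the penalty
```

The penalty is one prior; the symmetry search (point 12 below, in the post's own terms) is the other, and nothing in the risk formalism supplies it.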
8. Learning means theory, and more exactly a theoretical unwinding (dévissage): a tower of Representations/Theories {T(k)}.
9. Deep learning is an ersatz of this construction.
10. T(k) encapsulates much more than the data that 'validate' it (cf., for example, the theory of gravitation and the precession of the perihelion of Mercury).
11. In physics, T = L, the Lagrangian
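To unpack "T = L": in physics the theory is summarized by a Lagrangian, and the dynamics follow from the stationary-action principle via the Euler–Lagrange equations (standard physics, stated here for context, not specific to the post):

```latex
S[q] = \int L(q, \dot q, t)\, dt,
\qquad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q} \;-\; \frac{\partial L}{\partial q} \;=\; 0
```

A single compact object, L, thus encodes far more than any finite set of observations, which is the sense of point 10.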
12. Most important is the innovation T(k) -> T(k+1), driven by symmetries that must be guessed.
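One toy reading of 'guessing a symmetry': propose a candidate transform and test empirically whether the current theory T(k) is invariant under it. The function, transforms, and sample points below are illustrative assumptions.

```python
def invariant_under(f, transform, samples, tol=1e-9):
    """Empirically test whether f(transform(x)) == f(x) on the samples,
    i.e. whether `transform` is a plausible guessed symmetry of f."""
    return all(abs(f(transform(x)) - f(x)) <= tol for x in samples)

f = lambda x: x * x  # the current 'theory' T(k): an even law
samples = [-2.0, -0.5, 0.0, 1.0, 3.0]

has_reflection = invariant_under(f, lambda x: -x, samples)        # holds
has_translation = invariant_under(f, lambda x: x + 1.0, samples)  # fails
```

A guess that survives such tests can be promoted into the structure of T(k+1); a guess that fails is discarded. The search itself is not supplied by the data.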