A. \( LM \rightarrow learning \)
It is surprising and very comforting to note that the very popular 'The Lean Startup' by Eric Ries and 'Getting to Plan B' by Mullins & Komisar (M&K) both demonstrate an obvious relationship between learning to learn (L2L) and lean management (LM):
$$ LM \rightarrow learning $$
Maybe because entrepreneurs essentially have skin in the game, as should every learner / modeler.
At the heart of lean management there is the idea that the company is essentially a place of learning; let us count word occurrences in the text:
Learn: 292
Know: 123
Problem: 177
Solv: 38
And, more precisely, the vocabulary of a scientific process:
Hypotheses: 67
Assumption: 73
Theor: 31
Test: 206
Feedback: 86
Valid: 74
Experiment: 171
System: 139
Scien: 47
Fail: 132 (yes, failure is included in the scientific learning package!)
LM is the place of a surprising symmetry:
$$ \begin{array}{ccc}
product & & startup \\
\uparrow \downarrow & \rightarrow & \uparrow \downarrow \\
learning & & customer
\end{array} $$
The traditional causality is: I learn to produce; LM puts forward: I produce to learn!
The Fordist approach is in (product) push mode: Ford produces, the customer buys.
LM is in (informational) pull mode: the startup learns from its customer.
"The learning about how to build a sustainable business is the outcome of those experiments. For startups, that information is much more important than dollars, awards, or mentions in the press, because it can influence and reshape the next set of ideas. "
"For startups, the role of strategy is to help figure out the right questions to ask"
In detail, the functor \( LM \rightarrow Learning \) is:
$$ \begin{array}{ccc}
product & & model \\
\uparrow \downarrow & \rightarrow & \uparrow \downarrow \\
customer & & data
\end{array} $$
More precisely, everything is dynamic: at each time \( t \), LM seeks a minimum viable product (MVP), hence the correspondence
$$ \begin{array}{rcl}
MVP_t & \rightarrow & model_t \\
customer_t & \rightarrow & data_t
\end{array} $$
In fact, a data set \( data_t \) is attached to \( MVP_t \) (resp. \( model_t \)): this is the information available at time \( t \).
B. Overfit, Regularization
The key to LM's endless process is what corresponds in ML to active learning: the sequential acquisition of new data:
$$ model_t \rightarrow data_t \rightarrow model_{t+1} \rightarrow data_{t+1} \rightarrow \dots $$
or in the form of a cycle:
$$ \begin{array}{c}
model \\
\downarrow \uparrow \\
data
\end{array} $$
So what we have here is really a learning path.
The model is at once:
a. A representation of the domain and,
b. A decision function that allows the exploration of the domain in order to acquire new data (a minimal loop is sketched below).
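To make this dual role concrete, here is a minimal active-learning sketch in Python; the toy 'domain' (a hidden threshold) and all names are hypothetical, not taken from Ries. The model is at once a representation (an interval believed to contain the threshold) and a decision function (it queries the point of maximal uncertainty, i.e. bisection):

```python
# A minimal active-learning loop: model_t -> data_t -> model_{t+1} -> ...
# The "domain" is deliberately toy-like: a hidden threshold (think: the price,
# or the feature level, above which customers stop buying). All names are hypothetical.

TRUE_THRESHOLD = 0.62   # unknown to the learner

def query(x: float) -> float:
    """Acquire one data point: the costly interaction with reality.
    (Real customer feedback would of course be noisy.)"""
    return 1.0 if x > TRUE_THRESHOLD else 0.0

# model_t plays two roles:
#  (a) a representation of the domain: an interval [low, high] believed to contain the threshold;
#  (b) a decision function: it proposes the next query where its uncertainty is largest (bisection).
low, high = 0.0, 1.0
data = []                                  # data_t, acquired sequentially

for t in range(10):
    x_next = (low + high) / 2              # decision: point of maximal uncertainty
    y = query(x_next)                      # new data point
    data.append((x_next, y))
    low, high = (low, x_next) if y == 1.0 else (x_next, high)   # model update
    print(f"t={t}  model_t = [{low:.3f}, {high:.3f}]")
```

Each iteration is exactly the cycle above: the current model decides which data point to acquire, and the new data point updates the model.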
This learning diagram is similar to the Multi-Armed Bandit (MAB): at each \( t \), one decides which arm to operate, and one observes the reward.
a. Ex. 1: the A/B testing protocol (sketched below as a two-armed bandit).
b. Ex. 2: H&M in M&K: different styles are tested almost simultaneously, and the most successful one is favored.
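Ex. 1 can be sketched as a two-armed bandit. Here is a minimal Python version using Thompson sampling (one standard MAB strategy, chosen for brevity; Ries does not prescribe any particular one), with made-up variant names and conversion rates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical variants of a page (the "arms"); the true conversion rates
# are made-up numbers, unknown to the learner.
TRUE_RATES = {"A": 0.05, "B": 0.08}

# Thompson sampling: keep a Beta(successes + 1, failures + 1) posterior per arm,
# draw one sample from each posterior, and show the variant with the highest draw.
stats = {arm: {"success": 0, "failure": 0} for arm in TRUE_RATES}

for t in range(5000):                          # 5000 simulated visitors
    draws = {arm: rng.beta(s["success"] + 1, s["failure"] + 1) for arm, s in stats.items()}
    arm = max(draws, key=draws.get)            # decision: which arm to operate at time t
    converted = rng.random() < TRUE_RATES[arm] # observation: the reward
    stats[arm]["success" if converted else "failure"] += 1

for arm, s in stats.items():
    shown = s["success"] + s["failure"]
    print(f"{arm}: shown {shown} times, observed conversion {s['success'] / max(shown, 1):.3f}")
```

The loop allocates more and more visitors to the better-performing variant while still occasionally exploring the other one.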
Obviously exploration is expensive, and the whole question is to stay alive until the (relative) completion of the learning process... this is where the concept of lean / waste (82 occurrences in the text) comes in.
Ries is particularly good at telling his own experience at IMVU. The whole paragraph 'Talking to customers' is a piece of anthology, absolutely hilarious: the confrontation between the engineer and the 'seventeen-year-old girl' announces a tragi-comedy, and was for Ries the revelation that he had basically wasted six months of his time! 'There's obviously something wrong', 'deal breaker', 'utterly / fundamentally flawed'...
Ries: 'Here's the question that bothered me most of all: if the goal of these months was to learn these important insights about customers, why did it take so long? How much of our effort contributed to the essential lessons we needed to learn? Could we have learned those lessons earlier if I had not been focused on making the product "better" by adding features and fixing bugs?'
Here Ries has a seemingly surprising paragraph if one reads it from the vantage point of statistical learning: 'optimization versus learning'. In the context of statistical learning, optimization is almost synonymous with learning.
In line with the 'learning fallacy', I would say we have here a case of the tree hiding the forest: the six months lost developing unnecessary features for IMVU are an example of over-optimization: this 'model' is demolished when confronted with new data.
Carrying the metaphor / functor \( Lean \rightarrow Learn \) further, we get
$$ waste \rightarrow overfit$$
We may be tempted to talk about dynamic regularization: we are looking for the simplest and least expensive model (product) that 'fits' the data.
More precisely, if we compute the difference between:
a. the ex-post reward \( r_t \): not only a measure of how well the product suits the customer, but more generally of the learning rate,
b. and the ex-ante R&D cost \( c_t \),
then the regularization at \( t + 1 \) is done according to \( r_t - c_t \) (a possible formalization is sketched below).
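One purely illustrative way of writing this down (both the penalty form and the update function \( g \) are my assumptions, not something Ries spells out):
$$ model_{t+1} = \arg\min_{m} \; \big[ \, loss(m, data_{t+1}) + \lambda_{t+1} \, cost(m) \, \big], \qquad \lambda_{t+1} = g(r_t - c_t) $$
with \( g \) monotone; one natural reading is that the poorer the net learning \( r_t - c_t \), the heavier the penalty on costly features in the next cycle.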
It is obviously advantageous for this measurement to be as continuous as possible; this is the key message of LM: the time increment, or cycle duration, must be as short as possible. 'The biggest advantage of working in small batches is that quality problems can be identified much sooner'.
The evaluation of \( r_t \) is anything but obvious. As Ries explains at length, growth or other 'vanity metrics' do not prove that \( r_t - c_t > 0 \); 'actionable metrics', according to Ries, are the ones that make it possible to evaluate \( r_t \) correctly.
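As a concrete, entirely made-up illustration of this distinction, a cumulative count can keep rising while the per-cohort conversion rate, a more honest proxy for \( r_t \), keeps falling:

```python
# Entirely made-up numbers: total registered users (vanity) vs per-cohort conversion (actionable).
cohorts = [
    # (month, new_signups, paying_among_them)
    ("Jan", 1000, 50),
    ("Feb", 1500, 60),
    ("Mar", 2200, 66),
    ("Apr", 3000, 60),
]

total_users = 0
for month, signups, paying in cohorts:
    total_users += signups
    print(f"{month}: cumulative users = {total_users:5d}   "
          f"cohort conversion = {paying / signups:.1%}")

# The cumulative count rises every month, while the conversion rate per cohort
# falls from 5.0% to 2.0%: the 'actionable' view of r_t.
```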
Of course, the LM approach gives no guarantee of converging before running out of cash!
C. Symmetries?
But most notably, Ries gives no explicit method to guess the symmetries of the domain.
This part is entirely a human black box, discretionary. Assumptions come from art, not science. 'As long as exploration is cheap and continuous, you can explore randomly' seems to be the cartoon version of Ries's message. For example, in the case of Caroline at HP, nothing is said beyond 'testing'.
When Ries insists on metrics (or post-hoc analysis), it is in fact about representation, and therefore fundamentally about symmetries. For example, in the case of Grockit, the initial assumption itself is revised: 'In fact, over time, through dozens of tests, it became clear that the key to student engagement was to offer them a combination of social and solo features. Students preferred having a choice of how to study.'
But a fully symmetrized $$ social \leftrightarrow solo $$
would have warned against a purely social approach.
Actually, we can argue that Ries gives three heuristics to guess the symmetries:
a. The Five Whys technique has a strong flavor of hierarchical discovery, and in fact targets (ground) symmetries.
b. Transfer learning: notably from manufacturing, and Toyota.
c. Catalog of Pivots:
Zoom-in / out
Customer segment (or need)
Value capture
Engine of growth
...
In all cases, Category Theory is close at hand: isn't 'Pivot' a wonderful intuition of... symmetry?
Of course, M&K's 'analogs and antilogs' is quite in line with Category Theory.
Incidentally, Ries gives examples of \( Learn * customer \) and \( Learn * student \) actions without ever giving a model other than 'testing'. To give a single example, the \( social \leftrightarrow solo \) symmetry seems linked, on the one hand, to concepts like mimicry (cf. for example the recent theory of mirror neurons) and, on the other, to something like a need for intellectual order / compression (cf. Schmidhuber's magical theory of creativity).
Statistical learning is \( Learn * data \), and many methods exist.
But it is especially in the case of Sciences (Mathematics, Physics, Biology, ...) that the action
$$ S = Learn * phenomena $$
manifests itself through a gigantic theoretical and empirical production.
If the action is \( Learn * Object \), then it seems interesting to learn the functor
$$ Learn * X \rightarrow S $$
whatever \( X \) is.
In finance (and beyond, in social science), the functor has been formalized through econophysics. To give an example: the RFIM (Random Field Ising Model) is, according to Bouchaud et al., a plausible paradigm - i.e. a symmetry - cf. e.g. "Crises and collective socio-economic phenomena: simple models and challenges".
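For the curious reader, here is my own minimal, purely illustrative rendering of the mean-field RFIM decision rule used in that literature: each agent chooses \( s_i = \pm 1 \) by comparing an idiosyncratic preference \( h_i \) with a common incentive \( F \) plus an imitation term \( J m \), where \( m \) is the average choice. Sweeping \( F \) up and then down exhibits the hysteresis and abrupt collective shifts that the paper is concerned with (all parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

N, J = 10_000, 1.5                       # number of agents, imitation strength (made-up values)
h = rng.normal(0.0, 1.0, size=N)         # idiosyncratic private preferences
s = -np.ones(N)                          # everyone initially chooses -1

def relax(F, s):
    """Iterate the mean-field best response s_i = sign(h_i + J*m + F) until it stabilizes."""
    for _ in range(500):
        new_s = np.sign(h + J * s.mean() + F)
        new_s[new_s == 0] = 1.0
        if np.array_equal(new_s, s):
            break
        s = new_s
    return s

# Sweep the common incentive F upward, then back down: with strong enough imitation J,
# the average choice m at F = 0 differs between the two sweeps (hysteresis, abrupt shifts).
for label, Fs in [("upward", np.linspace(-2.0, 2.0, 41)), ("downward", np.linspace(2.0, -2.0, 41))]:
    for F in Fs:
        s = relax(F, s)
        if abs(F) < 1e-9:
            print(f"{label} sweep, F = 0: m = {s.mean():+.3f}")
```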
Incidentally, the recent interest of physics in statistical learning (cf. Mézard's 'Physics, statistics and information: the challenge of massive data' in La Jaune et la Rouge, the Mallat site at ENS, "Learning as categorification III", etc.) marks a re-symmetrization:
$$ S \leftrightarrow Learn * X $$
But as Mézard says: "Contrary to what is sometimes said, the irruption of massive data into the study of complex systems is not going to take the place of theory. It is still necessary, and even more difficult, to understand, analyze, and build a model, but the theorist can rely on new and powerful statistical tools."
Conclusion: The Lean Management approach is motivated mainly by the constraint of profitability. This constraint, if it has the merit of bringing the entrepreneur's reflection back to (the objective observation of) reality, is essentially a transfer from the (millennia-old) experimental method of the sciences.
Why not push this transfer / categorification further?