While I was working on the Theory of Statistical Learning, and the concept of consistency, I came across the following popular graph (e.g. from those slides, here in French). The lower curve is the error on the training sample, as a function of the size of the training sample; the upper one is the error on a validation sample. Our learning process is consistent if the two curves converge. I was wondering if…
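To make the idea concrete, here is a minimal sketch (not from the original post) of such a learning curve: a line is fitted by least squares on training samples of increasing size, and the training and validation errors are compared. The simulated model, the sample sizes, and the noise level are all illustrative assumptions; the point is only that the gap between the two errors shrinks as the training sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: y = 2x + 1 + Gaussian noise (assumed model)
n_total = 2000
x = rng.uniform(-1, 1, n_total)
y = 2 * x + 1 + rng.normal(0, 0.5, n_total)

# Fixed validation sample, separate from the training observations
x_val, y_val = x[1000:], y[1000:]

def mse(a, b, xs, ys):
    """Mean squared error of the fitted line y = a*x + b."""
    return float(np.mean((ys - (a * xs + b)) ** 2))

sizes = [10, 50, 200, 1000]
train_err, valid_err = [], []
for n in sizes:
    xs, ys = x[:n], y[:n]
    # Least-squares fit on the first n observations only
    a, b = np.polyfit(xs, ys, 1)
    train_err.append(mse(a, b, xs, ys))
    valid_err.append(mse(a, b, x_val, y_val))

# The gap between validation and training error shrinks with n
gap = [abs(v - t) for t, v in zip(train_err, valid_err)]
print(list(zip(sizes, gap)))
```

With a large training sample, both errors settle near the irreducible noise variance (here 0.25), which is exactly the convergence the graph illustrates.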