lack_of_a_priori_distinctions_wolpert.pdf
David Wolpert
The Lack of A Priori Distinctions Between Learning Algorithms
This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms.
This first paper discusses the senses in which there are no a priori distinctions between learning algorithms.
(The second paper discusses the senses in which there are such distinctions.)
In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are "as many" targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss.
In particular, this is true if A is cross-validation and B is "anti-cross-validation" (choose the learning algorithm with largest cross-validation error).
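A minimal sketch (not from the paper) of the idea behind this claim: on a tiny finite problem with binary labels and zero-one loss, any two learners that agree with the training data have the same OTS error when averaged uniformly over all possible target functions. The five-point input space, the fixed training set, and the two "predict 0 / predict 1 off training set" learners below are illustrative assumptions, not the paper's construction.

```python
# Sketch: average off-training-set (OTS) zero-one error over ALL binary
# targets on a 5-point input space, for two different learners.
from itertools import product

X = list(range(5))                            # tiny input space
train_x = [0, 1, 2]                           # fixed training inputs
ots_x = [x for x in X if x not in train_x]    # off-training-set inputs

def learner_a(train_pairs):
    """Memorize the training data; predict 0 off the training set."""
    seen = dict(train_pairs)
    return lambda x: seen.get(x, 0)

def learner_b(train_pairs):
    """Memorize the training data; predict 1 off the training set."""
    seen = dict(train_pairs)
    return lambda x: seen.get(x, 1)

def avg_ots_error(learner):
    errs = []
    # Enumerate every possible target function f: X -> {0, 1}.
    for labels in product([0, 1], repeat=len(X)):
        f = dict(zip(X, labels))
        h = learner([(x, f[x]) for x in train_x])
        # Zero-one loss measured only on the off-training-set points.
        errs.append(sum(h(x) != f[x] for x in ots_x) / len(ots_x))
    return sum(errs) / len(errs)

print(avg_ots_error(learner_a))   # 0.5
print(avg_ots_error(learner_b))   # 0.5 -- identical once averaged over all targets
```

Both learners come out at 0.5 because, averaged uniformly over targets, the labels off the training set carry no information; this is the flavor of the result, whereas the paper's theorems cover priors over targets and pairs such as cross-validation versus anti-cross-validation.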
This paper ends with a discussion of the implications of these results for computational learning theory.
It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small.
Other implications for "membership queries" algorithms and "punting" algorithms are also discussed.
"Even after the observation of the frequent conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience."