Deborah Pereg, Zuckerman Israeli Postdoctoral Scholar, publishes paper in Neural Networks
The statistical supervised learning framework assumes an input–output set with a joint probability distribution that is reliably represented by the training dataset. The learning system must then output a prediction rule learned from the training dataset’s input–output pairs. In this work, we investigate the relationship between the sample complexity, the empirical risk, and the generalization error based on the asymptotic equipartition property (AEP) (Shannon, 1948). We provide theoretical guarantees for reliable learning under the information-theoretic AEP, with respect to the generalization error and the sample size in different settings.
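As a rough illustration of the AEP itself (not of the paper's results), the property says that for an i.i.d. source, the normalized negative log-probability of a long sample concentrates around the source entropy. The sketch below demonstrates this for a Bernoulli source; the parameter choice and helper name are illustrative only.

```python
import math
import random

random.seed(0)

p = 0.3  # Bernoulli parameter (illustrative choice)
# Source entropy in bits.
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def empirical_rate(n):
    """Return -(1/n) log2 p(x_1, ..., x_n) for one i.i.d. Bernoulli(p) sample."""
    x = [1 if random.random() < p else 0 for _ in range(n)]
    log_prob = sum(math.log2(p) if xi else math.log2(1 - p) for xi in x)
    return -log_prob / n

# By the AEP, the empirical rate concentrates around H as n grows.
for n in (100, 10_000, 1_000_000):
    print(f"n={n}: rate={empirical_rate(n):.4f}, entropy H={H:.4f}")
```

Sequences whose empirical rate falls near H form the "typical set", which carries almost all of the probability mass for large n; this concentration is the mechanism the paper's guarantees build on.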