Abstract
With the deluge of digitized information in the Big Data era, massive datasets are becoming increasingly available for learning predictive models. However, in many practical situations, poorly controlled data acquisition processes may jeopardize the outputs of machine learning algorithms, and selection bias issues are now the subject of much attention in the literature. The present article investigates how to extend Empirical Risk Minimization, the principal paradigm in statistical learning, when the training observations are generated from biased models, i.e., from distributions that differ from that of the test/prediction stage while being absolutely continuous with respect to the latter.
More precisely, we show how to build a “nearly debiased” training statistical population from biased samples and the related biasing functions, following in the footsteps of the approach originally proposed in [46]. Furthermore, we study from a nonasymptotic perspective the performance of minimizers of an empirical version of the risk computed from the statistical population thus created. Remarkably, the learning rate achieved by this procedure is of the same order as that attained in the absence of selection bias. Beyond the theoretical guarantees, we also present experimental results supporting the relevance of the algorithmic approach promoted in this paper.
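To illustrate the reweighting principle underlying such debiasing, here is a minimal toy sketch (not the construction analyzed in the paper): a single biased training sample is drawn according to a known, strictly positive biasing function ω, and the Empirical Risk Minimizer obtained by weighting each observation by 1/ω(Z_i) is compared with the naive, unweighted one. All names (sample_biased, omega, the logistic model) are assumptions introduced for this example only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Target ("test") population: X ~ N(0, 1), P(Y = 1 | X) = sigmoid(2 X).
def sample_target(n):
    x = rng.normal(size=n)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * x)))
    return x.reshape(-1, 1), y

# Known, strictly positive biasing function on Z = (X, Y): positive examples are
# retained far more often, so the biased training data over-represent them while
# remaining absolutely continuous w.r.t. the target distribution.
def omega(x, y):
    return np.where(y == 1, 0.9, 0.2)

# Rejection sampling from the biased distribution proportional to omega(z) dP(z).
def sample_biased(n):
    xs, ys = [], []
    while sum(len(v) for v in ys) < n:
        x, y = sample_target(4 * n)
        keep = rng.uniform(size=len(y)) < omega(x.ravel(), y)
        xs.append(x[keep]); ys.append(y[keep])
    return np.vstack(xs)[:n], np.concatenate(ys)[:n]

X_train, y_train = sample_biased(5000)

# Naive ERM: minimizes the unweighted empirical risk on the biased sample.
naive = LogisticRegression().fit(X_train, y_train)

# Reweighted ERM: each observation gets weight 1 / omega(Z_i), so the weighted
# empirical risk estimates (up to normalization) the risk under the test distribution.
debiased = LogisticRegression().fit(
    X_train, y_train, sample_weight=1.0 / omega(X_train.ravel(), y_train)
)

# Compare both classifiers on an unbiased sample from the target distribution.
X_test, y_test = sample_target(20000)
print("naive test accuracy:   ", naive.score(X_test, y_test))
print("debiased test accuracy:", debiased.score(X_test, y_test))
```

In this simplified single-sample setting with a fully known biasing function, the reweighting reduces to importance weighting; the paper's construction addresses the more general situation of several biased samples and handles the estimation of the normalizing quantities.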


Stephan Clémençon and Pierre Laforgue. “Statistical learning from biased training samples.” Electron. J. Statist. 16(2): 6086–6134, 2022. https://doi.org/10.1214/22-EJS2084