Variable selection is one of the core challenges in applied statistics and machine learning: it aims to identify the factors most influential in explaining the behavior of a statistical model, thereby balancing estimation accuracy against model simplicity. Although classical penalized regression methods such as the Lasso and the Elastic Net have served as pioneering tools in this field, their performance remains highly sensitive to outliers and non-ideal data distributions, which limits their predictive efficiency in real-world applications. To overcome these limitations, recent research has increasingly focused on robust methodologies that employ alternative loss functions, such as the Huber and Tukey losses, which mitigate the influence of atypical observations and enhance estimation stability. In addition, exponential loss functions have recently gained attention due to their flexibility and adaptability to complex data structures.
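To illustrate the contrast between these loss families, the sketch below compares the squared loss with the Huber loss and an exponential-type (Welsch-style) loss; the specific functional forms and the tuning constants (`delta`, `gamma`) are illustrative choices, not definitions taken from this study. The key qualitative difference is that the squared loss grows without bound in the residual, the Huber loss grows only linearly in the tails, and the exponential-type loss is bounded, so a single outlier contributes at most a fixed amount.

```python
import numpy as np

def squared_loss(r):
    # Classical least-squares loss: unbounded, so outliers dominate the fit
    return 0.5 * r**2

def huber_loss(r, delta=1.345):
    # Quadratic near zero, linear in the tails (bounded influence function);
    # delta = 1.345 is a common efficiency-motivated choice, assumed here
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def exp_type_loss(r, gamma=1.0):
    # Exponential-type (Welsch-style) loss: bounded above by gamma/2,
    # so arbitrarily large residuals have a capped contribution
    return 0.5 * gamma * (1.0 - np.exp(-r**2 / gamma))
```

For a residual of 10, for example, the squared loss contributes 50, the Huber loss roughly 12.6, and the exponential-type loss (with `gamma = 1`) essentially its cap of 0.5, which is the sense in which these losses "mitigate the influence of atypical observations."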
Building on these advances, this study proposes a novel framework, referred to as the Penalized Exponential Loss Function (PELF), which integrates the regularization strength of penalized models with the robustness properties of exponential loss functions. The proposed method combines the L1 penalty (Lasso) with an exponential loss function that nonlinearly reduces the impact of large deviations, thereby improving variable selection accuracy while maintaining model reliability.
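The combination described above can be sketched as a penalized objective of the form minimize over beta of sum_i rho(y_i - x_i' beta) + lambda * ||beta||_1, where rho is an exponential-type loss. Since the paper's exact formulation is not reproduced here, the sketch below is a plausible instantiation using a Welsch-style rho, fitted by proximal gradient descent (ISTA) with soft-thresholding for the L1 penalty; the function names, the step-size rule, and the choice of rho are all assumptions for illustration.

```python
import numpy as np

def pelf_objective(beta, X, y, lam, gamma=10.0):
    # Exponential-type loss on residuals plus an L1 penalty on coefficients
    r = y - X @ beta
    return 0.5 * gamma * (1.0 - np.exp(-r**2 / gamma)).sum() + lam * np.abs(beta).sum()

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrink toward zero by t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_pelf(X, y, lam=1.0, gamma=10.0, n_iter=500):
    n, p = X.shape
    # Step size from the largest eigenvalue of X'X; the Welsch psi-function
    # has derivative bounded by 1, so this majorizes the smooth part
    step = 1.0 / np.linalg.eigvalsh(X.T @ X)[-1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # Gradient of the smooth part: -X' (r * exp(-r^2 / gamma));
        # the exponential factor downweights large residuals nonlinearly
        grad = -X.T @ (r * np.exp(-r**2 / gamma))
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

The soft-thresholding step is what yields sparsity (exact zeros in the estimated coefficients), while the exponentially decaying residual weights are what yield robustness, mirroring the two properties the framework is designed to combine.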
The mathematical formulation demonstrates that the proposed framework provides both robustness and sparsity, making it especially suitable for high-dimensional settings and for data contaminated with noise. Consequently, the PELF approach is expected to enhance predictive performance and to broaden the applicability of penalized estimation in advanced practical domains, particularly in medical, economic, and environmental studies, where the presence of outliers is a recurrent challenge.