
Learning From Mistakes: A Multi-level Optimization Framework

Zhang, Li
Garg, Bhanu
Sridhara, Pradyumna
Hosseini, Ramtin
Xie, Pengtao
Department
Machine Learning
Type
Journal article
Date
2025
Language
English
Abstract
Bi-level optimization methods in machine learning are widely used and effective in subdomains such as neural architecture search and data re-weighting. However, most of these methods do not account for variations in learning difficulty, which limits their performance in real-world applications. To address this problem, we propose a framework that imitates the human learning process: learners usually focus more on the topics where they have made mistakes in the past, to deepen their understanding and master the knowledge. Inspired by this effective human learning technique, we propose a multi-level optimization framework, Learning From Mistakes (LFM), for machine learning. We formulate LFM as a three-stage optimization problem: 1) the learner learns; 2) the learner re-learns based on the mistakes made before; and 3) the learner validates its learning. We develop an efficient algorithm to solve this optimization problem and apply it to differentiable neural architecture search and data re-weighting. Extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and related datasets demonstrate the effectiveness of our approach.
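The three stages described in the abstract can be illustrated with a minimal sketch on a toy weighted least-squares problem. This is an illustrative assumption of the learn / re-learn-from-mistakes / validate structure, not the paper's actual multi-level formulation or algorithm; the function names and the error-based re-weighting rule are hypothetical.

```python
def fit(xs, ys, weights, lr=0.1, steps=200):
    """Weighted least-squares fit of y ~ w * x by gradient descent (toy learner)."""
    w = 0.0
    for _ in range(steps):
        grad = sum(wt * 2.0 * (w * x - y) * x for x, y, wt in zip(xs, ys, weights))
        w -= lr * grad / sum(weights)
    return w

def lfm_sketch(train_x, train_y, val_x, val_y):
    n = len(train_x)
    # Stage 1: the learner learns on uniformly weighted training data.
    w1 = fit(train_x, train_y, [1.0] * n)
    # Stage 2: re-learn, up-weighting examples where mistakes were made
    # (here, weights grow with the stage-1 per-example squared error).
    errors = [(w1 * x - y) ** 2 for x, y in zip(train_x, train_y)]
    total = sum(errors) or 1.0  # guard against division by zero
    weights = [1.0 + e / total for e in errors]
    w2 = fit(train_x, train_y, weights)
    # Stage 3: the learner validates its learning on held-out data.
    val_loss = sum((w2 * x - y) ** 2 for x, y in zip(val_x, val_y)) / len(val_x)
    return w2, val_loss
```

In the paper's setting the three stages are nested optimization levels solved jointly with an efficient algorithm; this flat sequential version only conveys the intuition of focusing further learning on past mistakes.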
Citation
L. Zhang, B. Garg, P. Sridhara, R. Hosseini and P. Xie, "Learning From Mistakes: A Multi-level Optimization Framework," in IEEE Transactions on Artificial Intelligence, doi: 10.1109/TAI.2025.3534151
Source
IEEE Transactions on Artificial Intelligence
Keywords
Training, Optimization, Computer architecture, Machine learning, Artificial intelligence, Neural architecture search, Data models, Training data, Noise measurement, Noise
Publisher
IEEE