Replacing random initialization with data-guided projection makes continual learning with pretrained models more stable and effective, especially when there is a large gap between the pretrained model's knowledge and the new tasks.
This paper improves how pretrained models learn continually on new tasks by replacing the randomly initialized projection layer with a data-guided alternative. Instead of drawing the projection at random, the method constructs it selectively from the target data, yielding more stable and expressive representations when learning new classes incrementally without storing old examples (exemplar-free class-incremental learning).
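The summary does not spell out how the projection is built from data. As a minimal illustrative sketch (not the paper's actual method), one common way to make a projection "data-guided" is to take the top singular vectors of the new task's feature matrix instead of a random Gaussian matrix; the function names here are hypothetical:

```python
import numpy as np

def random_projection(dim_in, dim_out, seed=0):
    # Baseline: random Gaussian projection, independent of the data.
    rng = np.random.default_rng(seed)
    return rng.normal(size=(dim_in, dim_out)) / np.sqrt(dim_out)

def data_guided_projection(features, dim_out):
    # Illustrative data-guided variant: build the projection from the
    # top right-singular vectors of the target-task features, so the
    # projected space is aligned with the directions along which the
    # new data actually varies.
    centered = features - features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:dim_out].T  # shape (dim_in, dim_out), orthonormal columns

# Toy usage: 200 feature vectors of dimension 64, projected to 16 dims.
feats = np.random.default_rng(1).normal(size=(200, 64))
W = data_guided_projection(feats, dim_out=16)
projected = feats @ W
print(projected.shape)  # (200, 16)
```

Unlike the random baseline, this projection depends on the incoming task's statistics, which is the intuition behind "selectively building" the layer from target data; the paper's actual construction may differ.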