Multi-Head Adapter Routing for Cross-Task Generalization

Research output: Contribution to conference › Paper › Research › peer-review

Standard

Multi-Head Adapter Routing for Cross-Task Generalization. / Caccia, Lucas ; Ponti, Edoardo ; Su, Zhan ; Pereira, Matheus ; Le Roux, Nicolas ; Sordoni, Alessandro.

2024. Paper presented at 37th Conference on Neural Information Processing Systems - NeurIPS 2023, New Orleans, United States.

Research output: Contribution to conference › Paper › Research › peer-review

Harvard

Caccia, L, Ponti, E, Su, Z, Pereira, M, Le Roux, N & Sordoni, A 2024, 'Multi-Head Adapter Routing for Cross-Task Generalization', Paper presented at 37th Conference on Neural Information Processing Systems - NeurIPS 2023, New Orleans, United States, 10/12/2023 - 16/12/2023.

APA

Caccia, L., Ponti, E., Su, Z., Pereira, M., Le Roux, N., & Sordoni, A. (2024). Multi-Head Adapter Routing for Cross-Task Generalization. Paper presented at 37th Conference on Neural Information Processing Systems - NeurIPS 2023, New Orleans, United States.

Vancouver

Caccia L, Ponti E, Su Z, Pereira M, Le Roux N, Sordoni A. Multi-Head Adapter Routing for Cross-Task Generalization. 2024. Paper presented at 37th Conference on Neural Information Processing Systems - NeurIPS 2023, New Orleans, United States.

Author

Caccia, Lucas ; Ponti, Edoardo ; Su, Zhan ; Pereira, Matheus ; Le Roux, Nicolas ; Sordoni, Alessandro. / Multi-Head Adapter Routing for Cross-Task Generalization. Paper presented at 37th Conference on Neural Information Processing Systems - NeurIPS 2023, New Orleans, United States. 2 p.

Bibtex

@conference{f4e4ce9b6e6641928541cf874e28ff78,
title = "Multi-Head Adapter Routing for Cross-Task Generalization",
abstract = "Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists of pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by only fine-tuning the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits high gradient alignment between training tasks. We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation and propose MHR-μ, which discards routing and fine-tunes the average of the pre-trained adapters on each downstream task. This establishes MHR-μ as an effective method for single-adapter fine-tuning. We also show that MHR-μ can be used as an effective zero-shot transfer method by training the average of the pre-trained adapters for a few additional steps on the multi-task training set: this yields gains of up to 3% in absolute accuracy w.r.t. the baselines.",
author = "Lucas Caccia and Edoardo Ponti and Zhan Su and Matheus Pereira and {Le Roux}, Nicolas and Alessandro Sordoni",
year = "2024",
language = "English",
note = "37th Conference on Neural Information Processing Systems - NeurIPS 2023 ; Conference date: 10-12-2023 Through 16-12-2023",

}
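
The abstract above describes MHR as a finer-grained version of Polytropon's routing: the adapter parameters are split into heads, and each head learns its own mixture over a shared adapter inventory. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released code; it assumes LoRA-style adapters and, for brevity, applies head-wise mixing only to the LoRA A factors. All class, parameter, and variable names are illustrative.

import torch
import torch.nn as nn

class MultiHeadRoutedLoRA(nn.Module):
    """Sketch of multi-head adapter routing over a shared LoRA inventory."""

    def __init__(self, d_in, d_out, n_tasks, n_adapters=8, rank=4, n_heads=4):
        super().__init__()
        assert d_in % n_heads == 0
        self.n_heads = n_heads
        self.head_dim = d_in // n_heads
        # Inventory of LoRA adapters: adapter i has A_i (d_in x rank) and B_i (rank x d_out).
        self.A = nn.Parameter(torch.randn(n_adapters, d_in, rank) * 0.02)
        self.B = nn.Parameter(torch.zeros(n_adapters, rank, d_out))
        # Poly learns one routing vector per task; MHR learns one per (task, head),
        # so each head can combine a different subset of adapter parameters.
        self.routing = nn.Parameter(torch.zeros(n_tasks, n_heads, n_adapters))
        self.base = nn.Linear(d_in, d_out, bias=False)  # frozen backbone projection

    def forward(self, x, task_id):
        # One soft mixture over the adapter inventory per routing head.
        w = torch.softmax(self.routing[task_id], dim=-1)            # (n_heads, n_adapters)
        # Split the rows of each A_i into head-sized blocks and mix block-wise.
        A_blocks = self.A.view(self.A.size(0), self.n_heads, self.head_dim, -1)
        A_mixed = torch.einsum("ha,ahdr->hdr", w, A_blocks)         # (n_heads, head_dim, rank)
        A_mixed = A_mixed.reshape(-1, self.A.size(-1))              # (d_in, rank)
        # Simplification: B factors are mixed with the head-averaged weights.
        B_mixed = torch.einsum("a,ard->rd", w.mean(dim=0), self.B)  # (rank, d_out)
        return self.base(x) + x @ A_mixed @ B_mixed

Usage would look like layer = MultiHeadRoutedLoRA(512, 512, n_tasks=100) followed by layer(torch.randn(8, 512), task_id=3). For the MHR-z variant mentioned in the abstract, one would freeze A and B during few-shot adaptation and train only the routing parameters.
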

RIS

TY - CONF

T1 - Multi-Head Adapter Routing for Cross-Task Generalization

AU - Caccia, Lucas

AU - Ponti, Edoardo

AU - Su, Zhan

AU - Pereira, Matheus

AU - Le Roux, Nicolas

AU - Sordoni, Alessandro

PY - 2024

Y1 - 2024

N2 - Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists of pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by only fine-tuning the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits high gradient alignment between training tasks. We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation and propose MHR-μ, which discards routing and fine-tunes the average of the pre-trained adapters on each downstream task. This establishes MHR-μ as an effective method for single-adapter fine-tuning. We also show that MHR-μ can be used as an effective zero-shot transfer method by training the average of the pre-trained adapters for a few additional steps on the multi-task training set: this yields gains of up to 3% in absolute accuracy w.r.t. the baselines.

AB - Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists of pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by only fine-tuning the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits high gradient alignment between training tasks. We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation and propose MHR-μ, which discards routing and fine-tunes the average of the pre-trained adapters on each downstream task. This establishes MHR-μ as an effective method for single-adapter fine-tuning. We also show that MHR-μ can be used as an effective zero-shot transfer method by training the average of the pre-trained adapters for a few additional steps on the multi-task training set: this yields gains of up to 3% in absolute accuracy w.r.t. the baselines.

M3 - Paper

T2 - 37th Conference on Neural Information Processing Systems - NeurIPS 2023

Y2 - 10 December 2023 through 16 December 2023

ER -

ID: 384258796
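
The MHR-μ variant described in the abstract drops the routing function after multi-task pre-training and fine-tunes a single adapter initialised as the mean of the pre-trained inventory (or trains that average for a few extra multi-task steps for zero-shot transfer). Below is a hedged sketch of the collapse step, reusing the hypothetical MultiHeadRoutedLoRA module sketched above; it is not the authors' implementation.

import torch
import torch.nn as nn

def collapse_to_single_adapter(layer):
    """MHR-mu style collapse: average the adapter inventory into one LoRA adapter
    and discard the routing parameters entirely."""
    with torch.no_grad():
        A_mu = layer.A.mean(dim=0)   # (d_in, rank)
        B_mu = layer.B.mean(dim=0)   # (rank, d_out)
    return nn.ParameterDict({
        "A": nn.Parameter(A_mu.clone()),
        "B": nn.Parameter(B_mu.clone()),
    })

On a downstream task, only this single averaged adapter would be fine-tuned (the base weights stay frozen), and applying it reduces to a plain LoRA update: base(x) + x @ adapter["A"] @ adapter["B"].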