Federated Data-Efficient Instruction Tuning for Large Language Models
This work proposes a federated, data-efficient instruction-tuning approach for LLMs that substantially reduces the amount of data required for tuning while improving the responsiveness of instruction-tuned models to unseen tasks.
Oct 14, 2024