This work proposes a federated, data-efficient instruction-tuning approach for LLMs that significantly reduces the amount of data required for tuning while improving the responsiveness of instruction-tuned LLMs to unseen tasks.
Oct 14, 2024
This survey reviews existing work on multi-modal LLMs (MLLMs) from a data-model co-development perspective and provides a roadmap for the future development of MLLMs.
Jul 11, 2024