LLM fine-tuning
Zero-Order Optimization Enables Memory-Efficient LLM Fine-Tuning
New research introduces learnable direction sampling for zero-order optimization, which estimates gradients from forward passes alone, dramatically reducing the memory required to fine-tune large language models without sacrificing performance.
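To see why zero-order methods save memory, consider the classic two-point (SPSA-style) estimator they build on: the gradient is approximated from two forward passes along a random probe direction, so no backpropagation graph, activations, or gradient buffers are stored. The sketch below illustrates that baseline estimator on a toy problem; it uses plain Gaussian directions, whereas the paper's contribution is to make the direction distribution learnable (the function name `zeroth_order_step` and all hyperparameters here are illustrative, not from the paper).

```python
import numpy as np

def zeroth_order_step(params, loss_fn, lr=0.05, eps=1e-3, rng=None):
    """One SPSA-style zero-order update.

    Estimates the directional derivative from two forward passes along a
    random probe direction z, then steps along z. Only the parameters and
    the probe are kept in memory -- no backward pass is needed.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(params.shape)              # random probe direction
    loss_plus = loss_fn(params + eps * z)              # forward pass 1
    loss_minus = loss_fn(params - eps * z)             # forward pass 2
    proj_grad = (loss_plus - loss_minus) / (2 * eps)   # scalar derivative estimate
    return params - lr * proj_grad * z                 # update along the probe

# Toy usage: minimize a quadratic with forward passes only.
rng = np.random.default_rng(0)
quad = lambda p: float(np.sum(p ** 2))
w = np.ones(4)
for _ in range(2000):
    w = zeroth_order_step(w, quad, rng=rng)
```

Learnable direction sampling, as described in the teaser, would replace the fixed Gaussian draw of `z` with a trained sampling distribution so that probes align better with informative directions.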