Ecosyste.ms: Packages

An open API service providing package, version and dependency metadata of many open source software ecosystems and registries.

conda-forge.org: finetuning-scheduler

The FinetuningScheduler callback accelerates and enhances foundational model experimentation with flexible fine-tuning schedules. Training with the FinetuningScheduler callback is simple and confers a host of benefits:

- it dramatically increases fine-tuning flexibility
- it expedites and facilitates exploration of model tuning dynamics
- it enables marginal performance improvements of fine-tuned models

Fundamentally, the FinetuningScheduler callback enables multi-phase, scheduled fine-tuning of foundational models. Gradual unfreezing (i.e. thawing) can help maximize foundational model knowledge retention while allowing (typically the upper layers of) the model to optimally adapt to new tasks during transfer learning. FinetuningScheduler orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping), by user-specified epoch transitions, or by a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met.

Documentation
-------------

- https://finetuning-scheduler.readthedocs.io/en/stable/
- https://finetuning-scheduler.readthedocs.io/en/latest/
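For context, a minimal usage sketch of the two scheduling modes described above. This assumes the PyTorch Lightning callback integration; `MyLitModule`, the datamodule, and the schedule path are hypothetical placeholders, and the `ft_schedule` keyword follows the project's documentation.

```python
# A minimal sketch, assuming pytorch-lightning and finetuning-scheduler
# are installed. MyLitModule and my_schedule.yaml are hypothetical.
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# Implicit mode (the default): a fine-tuning schedule is generated
# automatically, gradually thawing the model from the top layers down.
trainer = Trainer(callbacks=[FinetuningScheduler()])

# Explicit mode: supply a user-authored YAML schedule, which the project
# notes is typically more computationally efficient than the default.
# trainer = Trainer(
#     callbacks=[FinetuningScheduler(ft_schedule="my_schedule.yaml")]
# )

# trainer.fit(MyLitModule())  # hypothetical LightningModule
```

With either mode, phase transitions are then governed by FTSEarlyStopping criteria, user-specified epoch boundaries, or their composition, as described above; see the linked documentation for the schedule file format.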

purl: pkg:conda/finetuning-scheduler
Keywords: artificial-intelligence, fine-tuning, finetuning, machine-learning, neural-networks, pytorch, pytorch-lightning, superglue, transfer-learning
License: Apache-2.0
Latest release: over 1 year ago
First release: about 2 years ago
Stars: 41 on GitHub
Forks: 2 on GitHub
See more repository details: repos.ecosyste.ms
Last synced: 3 days ago
