Chapter 14. Using Third-Party Models and Hubs
The success of the open source PyTorch framework has led to the growth of supplementary ecosystems. In this chapter, we’ll look at the various sources of pretrained models and the associated tools and resources used to download, instantiate, and run them for inference.
While the PyTorch framework provides the foundation for deep learning, the community has created numerous repositories and hubs of ready-to-use models, making it easier for you to build on and extend existing work rather than start from scratch. I like to call this “standing on the shoulders of giants.”
Since the advent of generative AI, these hubs have exploded in popularity, and workflows that incorporate generative models have grown along with them. As a result, when it comes to using pretrained models, there are many options. You might use them directly for inference, taking advantage of models trained on massive datasets that would be impractical to replicate. Or you might use these models as starting points for fine-tuning, adapting them to specific domains or tasks while retaining their learned features. This can take the form of low-rank adaptation (LoRA), as we’ll discuss in Chapter 20, or transfer learning, in which knowledge from one task is applied to another. Fine-tuning in one form or another has become standard practice, especially when working with limited data or computational resources.
The advantages of using pretrained ...