13 LARGE TRAINING SETS FOR VISION TRANSFORMERS
Why do vision transformers (ViTs) generally require larger training sets than convolutional neural networks (CNNs)?
Each machine learning algorithm and model encodes a particular set of assumptions or prior knowledge, commonly referred to as inductive biases, in its design. Some inductive biases are workarounds that make algorithms computationally more feasible, others are based on domain knowledge, and some are both.
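To make the idea of an inductive bias concrete, the following minimal sketch (assuming PyTorch; the layer sizes and patch size are illustrative choices, not taken from the text) contrasts a convolutional layer, whose local receptive fields and weight sharing are built-in assumptions about images, with a ViT-style patch embedding followed by self-attention, which mixes all patches globally and does not hard-code locality:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)  # one RGB image, 224x224 pixels

# Convolution: local 3x3 receptive fields plus weight sharing across
# positions (translation equivariance) -- strong image-specific biases.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
print(sum(p.numel() for p in conv.parameters()))  # 1,792 parameters

# ViT-style: split the image into 16x16 patches, embed them, and let
# self-attention relate every patch to every other patch. No locality
# or spatial weight-sharing constraint is built into the attention step.
patch_embed = nn.Conv2d(3, 64, kernel_size=16, stride=16)  # 14x14 = 196 patches
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

patches = patch_embed(x).flatten(2).transpose(1, 2)  # shape: (1, 196, 64)
out, _ = attn(patches, patches, patches)             # global mixing of patches
print(out.shape)                                     # torch.Size([1, 196, 64])

Because the attention layer must learn from data the spatial relationships that the convolution gets for free, models built from such layers typically need more training examples to reach comparable performance.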
CNNs and ViTs can be used for the same tasks, including image classification, object detection, and image segmentation. CNNs are mainly composed of convolutional ...