Often, DBAs need to load copious amounts of data quickly — whether it's a nightly data load or a conversion from comma-delimited text files. When a few hundred megabytes of data need to get into SQL Server in a limited time frame, a bulk operation is the way to get the heavy lifting done.
XML's popularity may be growing, but its file sizes seem to be growing even faster. XML's data tags add significant bloat to a data file, sometimes quadrupling the file size or more. For large files, IT organizations are sticking with CSV (also known as comma-delimited) files. For these old standbys, the fastest way to insert the data is with a bulk operation.
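As an illustration, here is a minimal sketch of a bulk load from a CSV file using T-SQL's BULK INSERT statement; the table name and file path are hypothetical:

```sql
-- Hypothetical staging table and file path, for illustration only.
BULK INSERT dbo.SalesStaging
FROM 'C:\loads\sales.csv'
WITH (
    FIELDTERMINATOR = ',',   -- comma-delimited columns
    ROWTERMINATOR   = '\n',  -- one row per line
    FIRSTROW        = 2,     -- skip the header row
    TABLOCK                  -- a table lock is required for minimal logging
);
```

The TABLOCK hint matters: without a table-level lock, SQL Server fully logs the inserts even under the bulk-logged recovery model.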
In SQL Server, bulk operations pump data directly into the data file, and how much of that work is written to the transaction log depends on the database's recovery model:
- Simple recovery model: The transaction log is used only for active transactions (it is truncated at each checkpoint), and bulk operations are minimally logged.
- Bulk-logged recovery model: The bulk operation is minimally logged — only the extent allocations are recorded in the transaction log, not the row-by-row data. One complication with bulk-logged recovery is that if bulk operations are undertaken, point-in-time recovery is not possible for any log backup that contains a bulk operation. To regain point-in-time recovery, the log must be backed up. Because extent allocations are logged for bulk operations, a log backup taken after a bulk operation includes all the data pages from the extents that were added, which results in a large transaction log backup.
- Full recovery model: Bulk operations are fully logged like any other transaction, so they gain no logging performance benefit, but point-in-time recovery is preserved throughout.
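A common pattern that follows from the list above is to switch a database to bulk-logged recovery for the load window, switch back, and then back up the log to restore point-in-time recovery. A sketch, with a hypothetical database name and backup path:

```sql
-- Hypothetical database name and backup path, for illustration only.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- ... run the bulk load here ...

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Back up the log so point-in-time recovery is available again;
-- expect this backup to be large, since it contains the bulk-loaded extents.
BACKUP LOG SalesDB TO DISK = 'C:\backups\SalesDB_log.bak';
```

Switching back to full recovery before the log backup keeps the window of reduced recoverability as short as possible.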