IN THIS CHAPTER
Mass inserts from comma-delimited files
T-SQL ETL processing
Performing bulk operations
Often, DBAs need to load copious amounts of data quickly—whether it's a nightly data load or a conversion from comma-delimited text files. When a few hundred megabytes of data need to get into SQL Server in a limited time frame, a bulk operation is the way to get the heavy lifting done.
XML's popularity may be growing, but its file sizes seem to be growing even faster. XML's data tags add significant bloat to a data file, sometimes quadrupling the file size or more. For very large files, IT organizations are sticking with CSV (also known as comma-delimited) files. For these old standby files, the best way to insert that data is a bulk operation.
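For a plain comma-delimited file, a bulk insert is a one-statement job. The following is a minimal sketch; the file path, staging table, and column layout are hypothetical, and real files usually need their own terminators and header handling:

```sql
-- Hypothetical staging table matching the CSV's column layout
CREATE TABLE dbo.SalesStaging (
    SaleID   INT,
    SaleDate DATE,
    Amount   MONEY
);

-- Load the comma-delimited file in one bulk operation
BULK INSERT dbo.SalesStaging
FROM 'C:\Imports\Sales.csv'
WITH (
    FIELDTERMINATOR = ',',   -- comma-delimited fields
    ROWTERMINATOR   = '\n',  -- one row per line
    FIRSTROW        = 2,     -- skip the header row
    TABLOCK                  -- table lock helps qualify for minimal logging
);
```

The `TABLOCK` hint matters for performance: without it, the load is fully logged row by row on most tables.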
In SQL Server, bulk operations pump data directly to the data file; how much of the operation is recorded in the transaction log depends on the recovery model:
Simple recovery model: No problem with recovery; the transaction log is used for current transactions only.
Bulk-logged recovery model: No problem with recovery; the bulk operation is minimally logged, so only its extent allocations are written to the transaction log, but the bulk-loaded extents themselves are still captured by the next log backup. One complication with bulk-logged recovery is that if bulk operations are undertaken, point-in-time recovery is not possible for the portion of the log that contains them. To regain point-in-time recovery, the log must be backed up. As extent allocations are logged for bulk operations, a log backup after bulk operations ...
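The recovery-model dance around a bulk load can be sketched as follows. The database name, file paths, and staging table are hypothetical; the pattern is switch to bulk-logged, load, switch back, then back up the log to restore point-in-time recovery:

```sql
-- Switch to bulk-logged so the load is minimally logged
ALTER DATABASE Sales SET RECOVERY BULK_LOGGED;

-- Perform the bulk load (hypothetical file and table)
BULK INSERT dbo.SalesStaging
FROM 'C:\Imports\Sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- Return to full recovery once the load completes
ALTER DATABASE Sales SET RECOVERY FULL;

-- This log backup captures the bulk-loaded extents;
-- point-in-time restore is possible again after it completes
BACKUP LOG Sales TO DISK = 'C:\Backups\Sales_log.trn';
```

Scheduling the log backup immediately after the load keeps the window without point-in-time recovery as short as possible.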