10.4. Enterprise Data Processing with the Data Flow
The Data Flow is the core data processing factory of Integration Services packages, where the primary data is handled, managed, transformed, integrated, and cleansed. Think of the Data Flow as a pipeline for data. A house, for example, has a primary water source that branches out to all the different outlets in the house. If a faucet is turned on, water flows out of the faucet while more water flows in from the source. If all the water outlets in the house are turned off, the pressure backs up to the source, and water no longer flows into the house until the pressure is relieved. Conversely, if all the water outlets in the house are opened at once, the source pressure may not be able to keep up with the demand, and the pressure coming out of the faucets will be weaker. Of course, don't try this at home; it may produce other problems!
The Data Flow is appropriately named because the data equates to the water in the plumbing analogy. The data flows from the data sources through the transformations to the data destinations. Beyond the flow itself, the analogy also extends to pressure within the pipeline. For example, while a data source may be able to stream 10,000 rows per second, if a downstream transformation consumes too many server resources, it can apply back pressure on the source and reduce the number of rows coming from the source. Essentially, this creates ...
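To make the back-pressure idea concrete outside of the plumbing analogy, the following is a minimal sketch in Python (not SSIS code, and not how the Data Flow engine is actually implemented) of a producer and consumer connected by a bounded buffer. The buffer size, row counts, and sleep delay are illustrative assumptions; the point is that when the slow "transformation" falls behind, the full buffer blocks the "source," so the effective source rate drops to match the slowest downstream stage.

```python
import queue
import threading
import time

BUFFER_SIZE = 100                 # analogous to a fixed-size data flow buffer
pipeline = queue.Queue(maxsize=BUFFER_SIZE)


def source(total_rows: int) -> None:
    """Stream rows into the pipeline as fast as the buffer allows."""
    for row in range(total_rows):
        # put() blocks when the buffer is full -- this is the back pressure
        pipeline.put(row)
    pipeline.put(None)            # sentinel: no more rows


def transformation() -> None:
    """Consume rows more slowly than the source can produce them."""
    while True:
        row = pipeline.get()
        if row is None:
            break
        time.sleep(0.001)         # simulate a resource-hungry transform


if __name__ == "__main__":
    start = time.time()
    producer = threading.Thread(target=source, args=(5_000,))
    consumer = threading.Thread(target=transformation)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    elapsed = time.time() - start
    print(f"Processed 5,000 rows in {elapsed:.1f}s "
          f"(throughput limited by the slow transformation)")
```

Running this, the source finishes only slightly ahead of the consumer, even though it could emit all 5,000 rows almost instantly on its own; the bounded buffer throttles it, just as a slow transformation throttles a fast data source in the Data Flow.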