Improve ETL Performance with SSIS Parallel Processing

Leveraging parallelism in SSIS packages can greatly improve ETL performance. In this post, we will discuss several methods for improving ETL performance with SSIS parallel processing, in both the control flow and the data flow.

Control Flow Parallelism

SSIS makes it easy to execute different tasks in parallel. Once you determine where the bottleneck is in each of your packages, you can set the child packages to run with a degree of parallelism that leverages the resources available in your SSIS environment at the time of execution. This is a good strategy when you have multiple packages that pull from multiple sources, perform relatively basic transformations, and/or push the data to multiple destinations.
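
For example, if the child packages are deployed to the SSIS catalog, you can queue several of them at once from outside the master package; catalog.start_execution is asynchronous by default, so each package starts running without waiting for the others. The sketch below is one minimal way to do this, assuming a hypothetical folder (ETL), project (NightlyLoad), and package names — adjust the connection string and names to your environment:

```python
# Sketch: queue several child packages in parallel through the SSIS catalog.
# Folder, project, and package names below are hypothetical placeholders.
import pyodbc

PACKAGES = ["LoadCustomers.dtsx", "LoadOrders.dtsx", "LoadInventory.dtsx"]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=SSISDB;Trusted_Connection=yes;",
    autocommit=True)
cur = conn.cursor()

for package in PACKAGES:
    # create_execution returns the execution id via an OUTPUT parameter;
    # the final SELECT surfaces it to the client. start_execution returns
    # immediately, so all three packages end up running side by side.
    row = cur.execute(
        """SET NOCOUNT ON;
           DECLARE @eid BIGINT;
           EXEC [SSISDB].[catalog].[create_execution]
                @folder_name  = N'ETL',
                @project_name = N'NightlyLoad',
                @package_name = ?,
                @execution_id = @eid OUTPUT;
           EXEC [SSISDB].[catalog].[start_execution] @eid;
           SELECT @eid AS execution_id;""",
        package).fetchone()
    print(f"{package} queued as execution {row[0]}")
```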

Also, every package has a property called “MaxConcurrentExecutables.” The default value for this property is -1, which equates to the number of logical processors plus two. You can increase this number to allow a greater degree of parallelism. This can be helpful when no individual executable heavily constrains resources on any one particular system.
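
To make the default concrete, the sketch below works out what -1 means on the current machine, and then shows one way to raise the limit for a single run using dtexec’s /SET option — against a hypothetical package path — rather than editing the package itself:

```python
# Sketch: the effective MaxConcurrentExecutables default, plus a one-off
# override at execution time. The package path is a hypothetical example.
import os
import subprocess

logical_processors = os.cpu_count() or 1
default_executables = logical_processors + 2  # the documented meaning of -1
print(f"{logical_processors} logical processors -> "
      f"default MaxConcurrentExecutables = {default_executables}")

# Override the property for this execution only via dtexec /SET.
subprocess.run(
    ["dtexec", "/F", r"C:\ETL\Master.dtsx",
     "/SET", r"\Package.Properties[MaxConcurrentExecutables];6"],
    check=True)
```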

However, if you plan on changing this number to 999 and kicking off a massive number of packages in parallel, you will probably find that the performance of your packages decreases dramatically instead of improving. This is because the executables become too busy competing for resources to use them efficiently.

Data Flow Parallelism

The same principles from control flow parallelism apply to the data flow. The data flow equivalent of “MaxConcurrentExecutables” is a property called “EngineThreads.” It is set to 10 by default, but this does not mean a data flow always uses 10 processor threads; rather, the data flow will use up to 10 threads to process its items efficiently. Depending on how your package performs, you may find that partitioning your sources and destinations increases the performance of inserting records, but make sure there are no dependencies between the partitions in your data flow. If you cause a deadlock, you will quickly lose whatever performance gain you have achieved.
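
To illustrate the partitioning idea, here is a rough sketch outside of SSIS: each worker copies a non-overlapping key range on its own thread, so no two workers touch the same rows. The table, columns, and connection strings are hypothetical; inside a data flow, the equivalent is multiple source components with range-filtered queries feeding separate destinations.

```python
# Sketch: range-partitioned parallel copy. Non-overlapping key ranges
# keep the workers from blocking or deadlocking each other.
from concurrent.futures import ThreadPoolExecutor
import pyodbc

SRC = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=src;"
       "DATABASE=Stage;Trusted_Connection=yes;")
DST = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=dst;"
       "DATABASE=DW;Trusted_Connection=yes;")
RANGES = [(1, 1_000_000), (1_000_001, 2_000_000), (2_000_001, 3_000_000)]

def copy_range(lo, hi):
    # Each worker opens its own connections: pyodbc connections are not
    # safely shareable across threads, and separate sessions avoid blocking.
    src, dst = pyodbc.connect(SRC), pyodbc.connect(DST)
    try:
        rows = src.cursor().execute(
            "SELECT OrderID, Amount FROM dbo.Orders "
            "WHERE OrderID BETWEEN ? AND ?", lo, hi).fetchall()
        cur = dst.cursor()
        cur.fast_executemany = True  # batch the inserts instead of row-by-row
        cur.executemany(
            "INSERT INTO dbo.OrdersCopy (OrderID, Amount) VALUES (?, ?)",
            [tuple(r) for r in rows])
        dst.commit()
        return len(rows)
    finally:
        src.close()
        dst.close()

with ThreadPoolExecutor(max_workers=len(RANGES)) as pool:
    for copied in pool.map(lambda r: copy_range(*r), RANGES):
        print(f"copied {copied} rows")
```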

The same words of warning from above apply here as well. Forcing too many threads onto too few processors will backfire, because the execution will constantly be context switching. You may also find that replicating the components of a data flow causes poor performance: replicating a cached lookup, for example, will cause memory usage to spike, since each copy builds its own cache of the reference data. Using a conditional split or a cache transform can improve this situation, but you should only use this method if your tables are well partitioned and your transformations are minimal.

Putting It into Practice

There is no surefire way to use parallelism efficiently. You may even find that adding parallelism to your package causes performance to worsen. Make sure your environment is quiet enough to support testing; if you are performing ETL against a system with a lot of transactions occurring, it may be better to create a copy of the data on a server with similar specs. Often the best way to determine which form of parallelism to implement is to try a variety of options, changing the parameters gradually until you find an optimal combination. Finally, don’t chase after seconds unless you absolutely have to. As the data you deal with in your ETL changes, you will find that performance shifts slightly as different amounts and types of data are processed.
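
One way to structure that experimentation is a simple parameter sweep. The sketch below times dtexec runs of a hypothetical master package at several MaxConcurrentExecutables settings; run each setting more than once, since ETL timings drift with the data:

```python
# Sketch: sweep candidate MaxConcurrentExecutables values on a quiet
# server and compare wall-clock times. Package path is hypothetical.
import subprocess
import time

for setting in (2, 4, 6, 8):
    start = time.perf_counter()
    subprocess.run(
        ["dtexec", "/F", r"C:\ETL\Master.dtsx",
         "/SET", rf"\Package.Properties[MaxConcurrentExecutables];{setting}"],
        check=True)
    print(f"MaxConcurrentExecutables={setting}: "
          f"{time.perf_counter() - start:.1f}s")
```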

Series Navigation
  1. Previous: Improving ETL Performance by Disabling Indexes
  2. Next: Improving ETL Performance by Avoiding Asynchronous Transformations
Article written by William Young
Sr. Business Intelligence Expert at Rand Group

William is an experienced Microsoft SQL Server developer and Business Intelligence consultant.