Testing of back-chain resolution fetch times identified two performance enhancements that can be applied easily and without significant risk to the application. Other, more significant enhancements will be deferred to a later release.
Reduce the size of the memory footprint by:
1 - Changing the main loop to request one item at a time.
Although this may appear counter-intuitive, batch fetching is already implemented as one item at a time under the hood. The problem is that the fetched objects accumulate in memory, which significantly increases the amount of state that must be serialised at each checkpoint. Fetching one item at a time means that only one item is held in memory at any point. This improvement is shown in orange in the diagram; the original is shown in blue.
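The difference can be sketched as follows. This is a minimal illustration, not the actual application code: `fetch_item` is a hypothetical stand-in for the real fetch call, and the generator stands in for the "one at a time" main loop.

```python
def fetch_item(item_id):
    # Hypothetical fetch call; returns a payload for the given id.
    return {"id": item_id, "payload": b"x" * 1024}

def resolve_batch(ids):
    # Original approach: every fetched object is accumulated in a list,
    # so the whole collection is part of the state serialised at each
    # checkpoint.
    return [fetch_item(i) for i in ids]

def resolve_one_at_a_time(ids):
    # Proposed approach: yield each item as it arrives, so at most one
    # fetched object is live (and therefore checkpointed) at a time.
    for i in ids:
        yield fetch_item(i)

# Usage: the caller processes each item and lets it be reclaimed before
# the next fetch, keeping the in-memory footprint at a single item.
for item in resolve_one_at_a_time(range(3)):
    pass  # process item
```

The total number of fetches is unchanged; only the lifetime of each fetched object shrinks, which is what reduces the serialised checkpoint size.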
2 - Remove duplicate SecureHash objects
This is a smaller optimisation that can be applied on top of the "one at a time" change (above). The optimisation is to use a hash collection to prevent holding multiple copies of SecureHash items in memory. The yellow line on the graph shows the optimisation applied on top of the one above.
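The dedup idea can be sketched as interning: a hash collection keyed by the digest hands back one canonical object per value. This is a hypothetical illustration only; the `SecureHash` class and `intern_hash` helper below are stand-ins, not the real implementation.

```python
class SecureHash:
    # Stand-in for the real SecureHash type: wraps an immutable digest.
    def __init__(self, digest: bytes):
        self.digest = digest

# Hash collection mapping each digest to its single canonical object.
_interned: dict = {}

def intern_hash(digest: bytes) -> SecureHash:
    # Return the canonical SecureHash for this digest, creating it on
    # first sight; later lookups share the same object rather than
    # holding a fresh copy in memory.
    existing = _interned.get(digest)
    if existing is None:
        existing = SecureHash(digest)
        _interned[digest] = existing
    return existing

# Duplicate digests now resolve to one shared object.
a = intern_hash(b"\x01" * 32)
b = intern_hash(b"\x01" * 32)
assert a is b
```

Because every duplicate collapses onto one instance, the memory cost becomes proportional to the number of distinct hashes rather than the number of references to them.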
Both of the above optimisations are safe to deploy standalone and will not affect version compatibility.