Scheduling

Once you have decided on your approach for identifying and reassessing cases in batch processing, you must arrange for that batch processing to execute.

Broadly, you can run batch processing either:

- on a pre-determined schedule; or
- on demand, in response to notifications (on-screen or in the application logs) that bulk reassessment processing is required.

The Dependency Manager batch suite is amenable to being executed on a pre-determined schedule: if there have been no system-wide changes to data (written to the batch precedent change set), the batch suite will quickly determine that there are no cases to reassess. If you use such a pre-determined schedule, you can ignore the on-screen and application log messages that advise that bulk reassessment processing is required.
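The fast-exit behavior that makes a fixed schedule safe can be sketched as follows. Note that `count_unprocessed_precedent_changes` and the in-memory change set are hypothetical stand-ins for illustration, not real product APIs.

```python
# Sketch: why a fixed schedule is safe for the Dependency Manager batch suite.
# All names below are hypothetical stand-ins, not real product APIs.

def count_unprocessed_precedent_changes(change_set):
    """Stub: number of system-wide data changes awaiting batch processing."""
    return len(change_set)

def scheduled_run(change_set, log):
    """Entry point invoked by the scheduler (e.g. nightly)."""
    if count_unprocessed_precedent_changes(change_set) == 0:
        # No system-wide changes published: the suite identifies no cases
        # and completes quickly, so running it on a schedule is cheap.
        log.append("no changes - nothing to reassess")
        return
    log.append("changes found - identifying and reassessing cases")

log = []
scheduled_run([], log)                     # quiet night: fast exit
scheduled_run(["rate-table-update"], log)  # change published: full run
```

Because an empty change set costs almost nothing to check, scheduling the suite more frequently than changes are published wastes little capacity.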

It is not recommended to execute top-down case identification algorithms on a pre-determined schedule, because those algorithms identify cases to reassess regardless of whether any system-wide data changes have been published.

If you are executing batch processing that uses the chunked batch processing architecture (such as the Dependency Manager batch suite, the CREOLEBulkCaseChunkReassessmentStream batch process, or the recommended way to implement your own top-down case identification/reassessment batch process), then you have some flexibility in how you manually execute the chunker and streamer processes.

Tip: If you have configured your chunker process to automatically perform streamer processing once the case identification phase is over and the case reassessment phase has begun, and you wish to run multiple parallel streamer processes to spread the reassessment load across your physical machines, then you should start your streamer processes before starting your chunker process. The streamer processes will simply wait until the chunker process has completed its case identification phase and the case reassessment phase has begun.

If you start your streamer processes after the chunker process, then in a situation where the chunker identifies only a few cases, it is possible for some of the streamers (including the chunker process itself) to complete reassessment of all the identified cases, at which point the overall batch processing completes. The remaining streamer processes then have no work to do, but will wait until the chunker process is next run, which could be quite some time later; from an operational perspective, these streamer processes are simply hanging and must be terminated manually, which is not ideal under normal operational procedures.
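The coordination described in the tip can be sketched with threads: streamers started first simply block until the chunker signals that the case identification phase is over, then pull chunks of work until none remain. This is an illustrative in-memory model; in the real architecture the chunker/streamer coordination is database-backed, and idle streamers wait rather than exit.

```python
# Sketch of the start-order tip: streamers started before the chunker wait
# until case identification completes. All names are illustrative.
import queue
import threading

chunks = queue.Queue()
identification_done = threading.Event()
reassessed = []
lock = threading.Lock()

def streamer():
    identification_done.wait()      # idle until the reassessment phase begins
    while True:
        try:
            chunk = chunks.get_nowait()
        except queue.Empty:
            return                  # reassessment phase complete
        with lock:
            reassessed.extend(chunk)  # reassess each case in the chunk

def chunker(case_ids, chunk_size=2):
    # Case identification phase: write chunks of case identifiers.
    for i in range(0, len(case_ids), chunk_size):
        chunks.put(case_ids[i:i + chunk_size])
    identification_done.set()       # reassessment phase begins
    streamer()                      # the chunker itself also streams

# Start the streamer processes first, then the chunker, per the tip above.
streamers = [threading.Thread(target=streamer) for _ in range(3)]
for s in streamers:
    s.start()
chunker(list(range(10)))
for s in streamers:
    s.join()
```

Starting the streamers first costs nothing (they block until work exists), whereas starting them late risks them arriving after all the work is done.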

If you are executing the Dependency Manager batch suite, then you must run the PerformBatchRecalculationsFromPrecedentChangeSet streamed batch process once per dependent type. You can choose the order of these runs - for example, you may decide that it is more urgent to have your cases reassessed in response to a system-wide data change than it is to have advice recalculated.
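The once-per-dependent-type requirement, with an ordering that prioritizes case reassessment over advice recalculation, might be driven like this. The dependent-type names and the `run_streamed_batch` helper are hypothetical illustrations, not product identifiers.

```python
# Sketch: one run of PerformBatchRecalculationsFromPrecedentChangeSet per
# dependent type, in an order you choose. The dependent-type names and the
# run_streamed_batch helper are hypothetical.

def run_streamed_batch(process_name, dependent_type, executed):
    """Stub standing in for submitting one streamed batch run."""
    executed.append((process_name, dependent_type))

# Cases are reassessed before advice is recalculated, reflecting a
# judgment that case reassessment is the more urgent of the two.
dependent_types_in_priority_order = ["CASE", "ADVICE"]

executed = []
for dependent_type in dependent_types_in_priority_order:
    run_streamed_batch("PerformBatchRecalculationsFromPrecedentChangeSet",
                       dependent_type, executed)
```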

If your batch run includes both a top-down case identification/reassessment algorithm and a run of the Dependency Manager batch suite (see Driving the Identification of Affected Cases), then typically you should run the top-down case identification/reassessment algorithm first so that your priority cases are identified and reassessed.

If your batch run includes the execution of the ApplyProductReassessmentStrategy batch process (see Reassessment Strategy) then typically there are no ordering constraints - but note that cases which previously could not be reassessed only become reassessable (and identifiable by the Dependency Manager batch suite) once the ApplyProductReassessmentStrategy batch process has completed.
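Taken together, the ordering guidance above can be sketched as one possible run plan: the top-down algorithm first (so priority cases are reassessed early), and ApplyProductReassessmentStrategy before the Dependency Manager suite (so that newly reassessable cases are identifiable). The step names are illustrative labels, not invocation commands, and other orderings are permissible.

```python
# Sketch of one possible overall batch-run ordering, combining the guidance
# above. Step names are illustrative labels, not invocation commands.

def plan_batch_run(uses_top_down, uses_reassessment_strategy):
    steps = []
    if uses_top_down:
        # Run first so that priority cases are identified and reassessed early.
        steps.append("top-down identification/reassessment")
    if uses_reassessment_strategy:
        # Completing this first lets previously non-reassessable cases be
        # identified by the Dependency Manager batch suite in the same run.
        steps.append("ApplyProductReassessmentStrategy")
    steps.append("Dependency Manager batch suite")
    return steps

plan = plan_batch_run(uses_top_down=True, uses_reassessment_strategy=True)
```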

If you are planning to publish multiple changes to system-wide data (see Bulk Reassessment for Multiple Simultaneous Changes), then you may choose to hold off on manually running your preferred approach to case identification/reassessment (or suspend your regular batch schedule, if you have one) until all those system-wide data changes are published. In this way, each case will only be identified and reassessed once in response to the combined system-wide data changes.