Because the builds depend on one another, the builds in an operational
data store job must run in sequence.
All jobs are defined under the
Jobs folder, and the operational
data store jobs are organized by supported product. Most of the
operational data store jobs reuse the same ETL job for data sources that
share a data structure, so the jobs have a similar structure, as follows:
- init node: For getting the resource groups with the specified
category in the data source and caching the result.
- preparevar node: For populating the variables for the current
resource group before running a build.
- The other nodes constitute a loop and run in sequence for each resource
group.
- hasMore node: A condition node for determining
whether there are more resource groups. If there are, the
next loop iteration starts; otherwise, the job finishes. A sketch of this
flow follows the list.
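The following Python sketch illustrates the node sequence as a loop. It is an illustration only: the data, helper names, and stand-in data source are hypothetical, since the actual jobs are defined in the ETL tool rather than in code.

```python
# Hypothetical sketch of the job flow: init -> (preparevar -> builds -> hasMore) loop.
def get_resource_groups(data_source, category):
    # init node: get the resource groups with the specified category
    # and cache the result (here, a filtered list).
    return [g for g in data_source if g["category"] == category]

def prepare_variables(group):
    # preparevar node: populate the variables for the current group.
    return {"GROUP_NAME": group["name"]}

def run_build(build, variables):
    # Stand-in for running one build with the prepared variables.
    print(f"running {build} with {variables}")

def run_ods_job(data_source, category):
    groups = get_resource_groups(data_source, category)  # init
    index = 0
    while index < len(groups):                           # hasMore check
        variables = prepare_variables(groups[index])     # preparevar
        # Builds depend on one another, so they run in sequence.
        for build in groups[index]["builds"]:
            run_build(build, variables)
        index += 1                                       # next loop iteration

run_ods_job(
    [{"category": "db", "name": "orders", "builds": ["extract", "transform", "load"]}],
    "db",
)
```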
Some jobs have a node named
SetFinishedTime, which
records the result of the job in the
config.ETL_INFO table in the data warehouse.
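As a rough illustration of what such a node records, the following sketch inserts a job result row through Python's standard sqlite3 module. The table name config.ETL_INFO comes from the documentation, but the column names, status value, and in-memory database are assumptions for illustration; the real node writes to the data warehouse through the ETL tool.

```python
import sqlite3
from datetime import datetime, timezone

# Stand-in for the warehouse connection; the real table lives in the
# config schema of the data warehouse (config.ETL_INFO).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE etl_info (job_name TEXT, status TEXT, finished_time TEXT)"
)

def set_finished_time(conn, job_name, status):
    # SetFinishedTime node: record the job result; the column names here
    # are hypothetical stand-ins for whatever config.ETL_INFO uses.
    conn.execute(
        "INSERT INTO etl_info (job_name, status, finished_time) VALUES (?, ?, ?)",
        (job_name, status, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

set_finished_time(conn, "ods_orders_job", "finished")
```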