Configuring the Director
Of all the configuration files needed to run Bacula, the Director's is
the most complicated, and the one that you will need to modify the most often
as you add clients or modify the FileSets.
For a general discussion of configuration files and resources, including the
data types recognized by Bacula, please see the
Configuration chapter of this manual.
The Director resource types may be one of the following:
Job, JobDefs, Client, Storage, Catalog, Schedule, FileSet, Pool, Director, or
Messages. We present them here in the most logical order for defining them:
- Director -- to define the Director's
name and its access password used for authenticating the Console program.
Only a single Director resource definition may appear in the Director's
configuration file. If you have either /dev/random or bc on your
machine, Bacula will generate a random password during the configuration
process, otherwise it will be left blank.
- Job -- to define the backup/restore Jobs
and to tie together the Client, FileSet and Schedule resources to be used
for each Job.
- JobDefs -- optional resource for
providing defaults for Job resources.
- Schedule -- to define when a Job is to
be automatically run by Bacula's internal scheduler.
- FileSet -- to define the set of files
to be backed up for each Client.
- Client -- to define what Client is to be
backed up.
- Storage -- to define on what physical
device the Volumes should be mounted.
- Pool -- to define the pool of Volumes
that can be used for a particular Job.
- Catalog -- to define in what database to
keep the list of files and the Volume names where they are backed up.
- Messages -- to define where error and
information messages are to be sent or logged.
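To see how these resources fit together, the skeleton below sketches a
bacula-dir.conf with one resource of each type in the order just described.
All names are illustrative placeholders, and the directives required inside
each resource are elided:
Director  { Name = bacula-dir       ... }   # one and only one
Job       { Name = "ClientA-backup" ... }
JobDefs   { Name = "DefaultJob"     ... }
Schedule  { Name = "WeeklyCycle"    ... }
FileSet   { Name = "Full Set"       ... }
Client    { Name = clienta-fd       ... }
Storage   { Name = DLTDrive         ... }
Pool      { Name = Default          ... }
Catalog   { Name = MyCatalog        ... }
Messages  { Name = Standard         ... }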
The Director Resource
The Director resource defines the attributes of the Directors running on the
network. In the current implementation, there is only a single Director
resource, but the final design will contain multiple Directors to maintain
index and media database redundancy.
- Director
-
Start of the Director resource. One and only one director resource must be
supplied.
- Name = <name>
-
The director name used by the system administrator. This directive is
required.
- Description = <text>
-
The text field contains a description of the Director that will be displayed
in the graphical user interface. This directive is optional.
- Password = <UA-password>
-
Specifies the password that must be supplied for the default Bacula Console
to be authorized. The same password must appear in the Director
resource of the Console configuration file. For added security, the password
is never actually passed across the network but rather a challenge response
hash code created with the password. This directive is required. If you have
either /dev/random or bc on your machine, Bacula will generate a
random password during the configuration process, otherwise it will be left
blank and you must manually supply it.
- Messages = <Messages-resource-name>
-
The messages resource specifies where to deliver Director messages that are
not associated with a specific Job. Most messages are specific to a job and
will be directed to the Messages resource specified by the job. However,
there are a few messages that can occur when no job is running. This
directive is required.
- Working Directory = <Directory>
-
This directive is mandatory and specifies a directory in which the Director
may put its status files. This directory should be used only by Bacula but
may be shared by other Bacula daemons. However, please note, if this
directory is shared with other Bacula daemons (the File daemon and Storage
daemon), you must ensure that the Name given to each daemon is
unique so that the temporary filenames used do not collide. By default
the Bacula configure process creates unique daemon names by postfixing them
with -dir, -fd, and -sd. Standard shell expansion of the Directory is done when the configuration file is read so that values such
as $HOME will be properly expanded. This directive is required.
If you have specified a Director user and/or a Director group on your
./configure line with --with-dir-user and/or
--with-dir-group the Working Directory owner and group will
be set to those values.
- Pid Directory = <Directory>
-
This directive is mandatory and specifies a directory in which the Director
may put its process Id file. The process Id file is used to shutdown
Bacula and to prevent multiple copies of Bacula from running simultaneously.
Standard shell expansion of the Directory is done when the
configuration file is read so that values such as $HOME will be
properly expanded.
Typically on Linux systems, you will set this to: /var/run. If you are
not installing Bacula in the system directories, you can use the Working
Directory as defined above. This directive is required.
- Scripts Directory = <Directory>
-
This directive is optional and, if defined, specifies a directory in
which the Director will look for the Python startup script DirStartup.py. This directory may be shared by other Bacula daemons.
Standard shell expansion of the directory is done when the configuration
file is read so that values such as $HOME will be properly
expanded.
- QueryFile = <Path>
-
This directive is mandatory and specifies a directory and file in which
the Director can find the canned SQL statements for the Query
command of the Console. Standard shell expansion of the Path is
done when the configuration file is read so that values such as $HOME will be properly expanded.
- Maximum Concurrent Jobs = <number>
-
where <number> is the maximum number of total Director Jobs that
should run concurrently. The default is set to 1, but you may set it to a
larger number.
Please note that the Volume format becomes much more complicated with
multiple simultaneous jobs, consequently, restores can take much longer if
Bacula must sort through interleaved volume blocks from multiple simultaneous
jobs. This can be avoided by having each simultaneously running job write to
a different volume or by using data spooling, which will first spool the data
to disk simultaneously, then write each spool file to the volume in
sequence.
There may also still be some cases where directives such as Maximum
Volume Jobs are not properly synchronized with multiple simultaneous jobs
(subtle timing issues can arise), so careful testing is recommended.
At the current time, there is no configuration parameter to limit the
number of console connections; a maximum of five simultaneous console
connections is permitted.
For more details on getting concurrent jobs to run, please see
Running Concurrent Jobs in the Tips chapter
of this manual.
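As a sketch of the combination discussed above, the Director below allows
several Jobs at once while each Job spools its data so that blocks are not
interleaved on the Volume. The directive names are real; the resource names
and values are only illustrative:
Director {
  Name = bacula-dir
  ...
  Maximum Concurrent Jobs = 4   # run up to four Jobs at once
}
Job {
  Name = "ClientA-backup"
  ...
  Spool Data = yes              # despool to the Volume one job at a time
}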
- FD Connect Timeout = <time>
-
where time is the time that the Director should continue
attempting to contact the File daemon to start a job, and after which
the Director will cancel the job. The default is 30 minutes.
- SD Connect Timeout = <time>
-
where time is the time that the Director should continue
attempting to contact the Storage daemon to start a job, and after which
the Director will cancel the job. The default is 30 minutes.
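For example, a site that prefers to give up quickly on unreachable clients
might shorten both timeouts. The values below are only illustrative (the
default for each is 30 minutes):
Director {
  ...
  FD Connect Timeout = 3 min   # cancel the job if the File daemon cannot be
                               # contacted within three minutes
  SD Connect Timeout = 5 min   # likewise for the Storage daemon
}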
- DirAddresses = <IP-address-specification>
-
Specify the ports and addresses on which the Director daemon will listen
for Bacula Console connections. Probably the simplest way to explain
this is to show an example:
DirAddresses = {
    ip = { addr = 1.2.3.4; port = 1205; }
    ipv4 = {
        addr = 1.2.3.4; port = http; }
    ipv6 = {
        addr = 1.2.3.4;
        port = 1205;
    }
    ip = {
        addr = 1.2.3.4
        port = 1205
    }
    ip = { addr = 1.2.3.4 }
    ip = { addr = 201:220:222::2 }
    ip = {
        addr = bluedot.thun.net
    }
}
where ip, ipv4, ipv6, addr, and port are all keywords. Note that the address
can be specified either as a dotted quadruple, in IPv6 colon notation, or as
a symbolic name (only in the ip specification). Also, the port can be
specified as a number or as the mnemonic value from the /etc/services file.
If a port is not specified, the default will be used. If an ip section is
specified, the resolution can be made either by IPv4 or IPv6. If ipv4 is
specified, then only IPv4 resolutions will be permitted, and likewise with
ipv6.
Please note that if you use the DirAddresses directive, you must
not use either a DirPort or a DirAddress directive in the same
resource.
- DirPort = <port-number>
-
Specify the port (a positive integer) on which the Director daemon will
listen for Bacula Console connections. This same port number must be
specified in the Director resource of the Console configuration file. The
default is 9101, so normally this directive need not be specified. This
directive should not be used if you specify a DirAddresses (note plural)
directive.
- DirAddress = <IP-Address>
-
This directive is optional, but if it is specified, it will cause the
Director server (for the Console program) to bind to the specified IP-Address, which is either a domain name or an IP address specified as a
dotted quadruple in string or quoted string format. If this directive is not
specified, the Director will bind to any available address (the default).
Note, unlike the DirAddresses specification noted above, this directive only
permits a single address to be specified. This directive should not be used if you
specify a DirAddresses (note plural) directive.
The following is an example of a valid Director resource definition:
Director {
  Name = HeadMan
  WorkingDirectory = "$HOME/bacula/bin/working"
  Password = UA_password
  PidDirectory = "$HOME/bacula/bin/working"
  QueryFile = "$HOME/bacula/bin/query.sql"
  Messages = Standard
}
The Job Resource
The Job resource defines a Job (Backup, Restore, ...) that Bacula must
perform. Each Job resource definition contains the name of a Client and
a FileSet to backup, the Schedule for the Job, where the data
are to be stored, and what media Pool can be used. In effect, each Job
resource must specify What, Where, How, and When or FileSet, Storage,
Backup/Restore/Level, and Schedule respectively. Note, the FileSet must
be specified for a restore job for historical reasons, but it is no longer used.
Only a single type (Backup, Restore, ...) can be specified for any
job. If you want to backup multiple FileSets on the same Client or multiple
Clients, you must define a Job for each one.
Note, you define only a single Job to do the Full, Differential, and
Incremental backups since the different backup levels are tied together by
a unique Job name. Normally, you will have only one Job per Client, but
if a client has a really huge number of files (more than several million),
you might want to split it into two Jobs, each with a different FileSet
covering only part of the total files.
- Job
-
Start of the Job resource. At least one Job resource is required.
- Name = <name>
-
The Job name. This name can be specified on the Run command in the
console program to start a job. If the name contains spaces, it must be
specified between quotes. It is generally a good idea to give your job the
same name as the Client that it will backup. This permits easy
identification of jobs.
When the job actually runs, the unique Job Name will consist of the name you
specify here followed by the date and time the job was scheduled for
execution. This directive is required.
- Enabled = <yes|no>
-
This directive allows you to enable or disable automatic execution
of a Job via the scheduler.
- Type = <job-type>
-
The Type directive specifies the Job type, which may be one of the
following: Backup, Restore, Verify, or Admin. This
directive is required. Within a particular Job Type, there are also Levels
as discussed in the next item.
- Backup
-
Run a backup Job. Normally you will have at least one Backup job for each
client you want to save. Normally, unless you turn off cataloging, almost
all the important statistics and data concerning files backed up will be
placed in the catalog.
- Restore
-
Run a restore Job. Normally, you will specify only one Restore job
which acts as a sort of prototype that you will modify using the console
program in order to perform restores. Although certain basic
information from a Restore job is saved in the catalog, it is very
minimal compared to the information stored for a Backup job -- for
example, no File database entries are generated since no Files are
saved.
- Verify
-
Run a verify Job. In general, verify jobs permit you to compare the
contents of the catalog to the file system, or to what was backed up. In
addition to verifying that a tape that was written can be read, you can
also use verify as a sort of tripwire intrusion detection.
- Admin
-
Run an admin Job. An Admin job can be used to periodically run catalog
pruning, if you do not want to do it at the end of each Backup Job.
Although an Admin job is recorded in the catalog, very little data is saved.
- Level = <job-level>
-
The Level directive specifies the default Job level to be run. Each
different Job Type (Backup, Restore, ...) has a different set of Levels
that can be specified. The Level is normally overridden by a different
value that is specified in the Schedule resource. This directive
is not required, but must be specified either by a Level directive
or as an override specified in the Schedule resource.
For a Backup Job, the Level may be one of the following:
- Full
-
When the Level is set to Full, all files in the FileSet, whether or not
they have changed, will be backed up.
- Incremental
-
When the Level is set to Incremental, all files specified in the FileSet
that have changed since the last successful backup of the same Job
using the same FileSet and Client will be backed up. If the Director
cannot find a previous valid Full backup then the job will be upgraded
into a Full backup. When the Director looks for a valid backup record
in the catalog database, it looks for a previous Job with:
- The same Job name.
- The same Client name.
- The same FileSet (any change to the definition of the FileSet, such as
adding or deleting a file in the Include or Exclude sections, constitutes a
different FileSet).
- The Job was a Full, Differential, or Incremental backup.
- The Job terminated normally (i.e. did not fail or was not canceled).
If the Director cannot find a previous Job meeting all of the above
conditions, it will upgrade the Incremental to a Full save. Otherwise, the
Incremental backup will be performed as requested.
The File daemon (Client) decides which files to backup for an
Incremental backup by comparing the start time of the prior Job (Full,
Differential, or Incremental) against the time each file was last
"modified" (st_mtime) and the time its attributes were last
"changed" (st_ctime). If the file was modified or its attributes
changed on or after this start time, it will then be backed up.
Some virus scanning software may change st_ctime while
doing the scan. For example, if the virus scanning program attempts to
reset the access time (st_atime), which Bacula does not use, it will
cause st_ctime to change and hence Bacula will backup the file during
an Incremental or Differential backup. In the case of Sophos virus
scanning, you can prevent it from resetting the access time (st_atime),
and hence changing st_ctime, by using the --no-reset-atime option. For
other software, please see its manual.
When Bacula does an Incremental backup, all modified files that are
still on the system are backed up. However, any file that has been
deleted since the last Full backup remains in the Bacula catalog, which
means that if between a Full save and the time you do a restore, some
files are deleted, those deleted files will also be restored. The
deleted files will no longer appear in the catalog after doing another
Full save. However, to remove deleted files from the catalog during an
Incremental backup is quite a time consuming process and not currently
implemented in Bacula.
In addition, if you move a directory rather than copy it, the files in
it do not have their modification time (st_mtime) or their attribute
change time (st_ctime) changed. As a consequence, those files will
probably not be backed up by an Incremental or Differential backup which
depend solely on these time stamps. If you move a directory, and wish
it to be properly backed up, it is generally preferable to copy it, then
delete the original.
- Differential
-
When the Level is set to Differential,
all files specified in the FileSet that have changed since the last
successful Full backup of the same Job will be backed up.
If the Director cannot find a
valid previous Full backup for the same Job, FileSet, and Client,
then the Differential job will be upgraded into a Full backup.
When the Director looks for a valid Full backup record in the catalog
database, it looks for a previous Job with:
- The same Job name.
- The same Client name.
- The same FileSet (any change to the definition of the FileSet, such as
adding or deleting a file in the Include or Exclude sections, constitutes a
different FileSet).
- The Job was a FULL backup.
- The Job terminated normally (i.e. did not fail or was not canceled).
If the Director cannot find a previous Job meeting all of the above
conditions, it will upgrade the Differential to a Full save. Otherwise, the
Differential backup will be performed as requested.
The File daemon (Client) decides which files to backup for a
differential backup by comparing the start time of the prior Full backup
Job against the time each file was last "modified" (st_mtime) and the
time its attributes were last "changed" (st_ctime). If the file was
modified or its attributes were changed on or after this start time, it
will then be backed up. The start time used is displayed after the Since on the Job report. In rare cases, using the start time of the
prior backup may cause some files to be backed up twice, but it ensures
that no change is missed. As with the Incremental option, you should
ensure that the clocks on your server and client are synchronized or as
close as possible to avoid the possibility of a file being skipped.
Note, on versions 1.33 or greater Bacula automatically makes the
necessary adjustments to the time between the server and the client so
that the times Bacula uses are synchronized.
When Bacula does a Differential backup, all modified files that are
still on the system are backed up. However, any file that has been
deleted since the last Full backup remains in the Bacula catalog, which
means that if between a Full save and the time you do a restore, some
files are deleted, those deleted files will also be restored. The
deleted files will no longer appear in the catalog after doing another
Full save. However, to remove deleted files from the catalog during a
Differential backup is quite a time consuming process and not currently
implemented in Bacula. It is, however, a planned future feature.
As noted above, if you move a directory rather than copy it, the
files in it do not have their modification time (st_mtime) or
their attribute change time (st_ctime) changed. As a
consequence, those files will probably not be backed up by an
Incremental or Differential backup which depend solely on these
time stamps. If you move a directory, and wish it to be
properly backed up, it is generally preferable to copy it, then
delete the original. Alternatively, you can move the directory, then
use the touch program to update the timestamps.
Every once in a while, someone asks why we need Differential
backups as long as Incremental backups pick up all changed files.
There are possibly many answers to this question, but the one
that is the most important for me is that a Differential backup
effectively merges
all the Incremental and Differential backups since the last Full backup
into a single Differential backup. This has two effects: 1. It gives
some redundancy since the old backups could be used if the merged backup
cannot be read. 2. More importantly, it reduces the number of Volumes
that are needed to do a restore, effectively eliminating the need to read
all the volumes on which the preceding Incremental and Differential
backups since the last Full were written.
For a Restore Job, no level needs to be specified.
For a Verify Job, the Level may be one of the following:
- InitCatalog
-
does a scan of the specified FileSet and stores the file
attributes in the Catalog database. Since no file data is saved, you
might ask why you would want to do this. It turns out to be a very
simple and easy way to have a Tripwire-like feature using Bacula. In other words, it allows you to save the state of a set of
files defined by the FileSet and later check to see if those files
have been modified or deleted and if any new files have been added.
This can be used to detect system intrusion. Typically you would
specify a FileSet that contains the set of system files that
should not change (e.g. /sbin, /boot, /lib, /bin, ...). Normally, you
run the InitCatalog level verify one time when your system is
first set up, and then once again after each modification (upgrade) to
your system. Thereafter, when you want to check the state of your
system files, you use a Verify level = Catalog. This
compares the results of your InitCatalog with the current state of
the files.
- Catalog
-
Compares the current state of the files against the state previously
saved during an InitCatalog. Any discrepancies are reported. The
items reported are determined by the verify options specified on
the Include directive in the specified FileSet (see the FileSet resource below for more details). Typically this command will
be run once a day (or night) to check for any changes to your system
files.
Please note! If you run two Verify Catalog jobs on the same client at
the same time, the results will certainly be incorrect. This is because
Verify Catalog modifies the Catalog database while running in order to
track new files.
- VolumeToCatalog
-
This level causes Bacula to read the file attribute data written to the
Volume from the last Job. The file attribute data are compared to the
values saved in the Catalog database and any differences are reported.
This is similar to the Catalog level except that instead of
comparing the disk file attributes to the catalog database, the
attribute data written to the Volume is read and compared to the catalog
database. Although the attribute data including the signatures (MD5 or
SHA1) are compared, the actual file data is not compared (it is not in
the catalog).
Please note! If you run two Verify VolumeToCatalog jobs on the same
client at the same time, the results will certainly be incorrect. This
is because the Verify VolumeToCatalog modifies the Catalog database
while running.
- DiskToCatalog
-
This level causes Bacula to read the files as they currently are on
disk, and to compare the current file attributes with the attributes
saved in the catalog from the last backup for the job specified on the
VerifyJob directive. This level differs from the Catalog
level described above by the fact that it doesn't compare against a
previous Verify job but against a previous backup. When you run this
level, you must supply the verify options on your Include statements.
Those options determine what attribute fields are compared.
This command can be very useful if you have disk problems because it
will compare the current state of your disk against the last successful
backup, which may be several jobs old.
Note, the current implementation (1.32c) does not identify files that
have been deleted.
- Verify Job = <Job-Resource-Name>
-
If you run a verify job without this directive, the last job run will be
compared with the catalog, which means that you must immediately follow
a backup by a verify command. If you specify a Verify Job Bacula
will find the last job with that name that ran. This permits you to run
all your backups, then run Verify jobs on those that you wish to be
verified (most often a VolumeToCatalog) so that the tape just
written is re-read.
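A sketch of such a verify Job follows. The resource names are placeholders,
and it assumes a Backup job named ClientA-backup has already been defined:
Job {
  Name = "VerifyClientA"
  Type = Verify
  Level = VolumeToCatalog       # re-read the tape just written
  Verify Job = "ClientA-backup" # compare against the last run of that job
  Client = clienta-fd
  FileSet = "Full Set"
  Storage = DLTDrive
  Pool = Default
  Messages = Standard
}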
- JobDefs = <JobDefs-Resource-Name>
-
If a JobDefs-Resource-Name is specified, all the values contained in the
named JobDefs resource will be used as the defaults for the current Job.
Any value that you explicitly define in the current Job resource, will
override any defaults specified in the JobDefs resource. The use of
this directive permits writing much more compact Job resources where the
bulk of the directives are defined in one or more JobDefs. This is
particularly useful if you have many similar Jobs but with minor
variations such as different Clients. A simple example of the use of
JobDefs is provided in the default bacula-dir.conf file.
- Bootstrap = <bootstrap-file>
-
The Bootstrap directive specifies a bootstrap file that, if provided,
will be used during Restore Jobs and is ignored in other Job
types. The bootstrap file contains the list of tapes to be used
in a restore Job as well as which files are to be restored.
Specification of this directive is optional, and if specified, it is
used only for a restore job. In addition, when running a Restore job
from the console, this value can be changed.
If you use the Restore command in the Console program to start a
restore job, the bootstrap file will be created automatically from
the files you select to be restored.
For additional details of the bootstrap file, please see
Restoring Files with the Bootstrap File chapter
of this manual.
- Write Bootstrap = <bootstrap-file-specification>
-
The writebootstrap directive specifies a file name where Bacula
will write a bootstrap file for each Backup job run. This
directive applies only to Backup Jobs. If the Backup job is a Full
save, Bacula will erase any current contents of the specified file
before writing the bootstrap records. If the Job is an Incremental
or Differential
save, Bacula will append the current bootstrap record to the end of the
file.
Using this feature permits you to constantly have a bootstrap file that
can recover the current state of your system. Normally, the file
specified should be a mounted drive on another machine, so that if your
hard disk is lost, you will immediately have a bootstrap record
available. Alternatively, you should copy the bootstrap file to another
machine after it is updated. Note, it is a good idea to write a separate
bootstrap file for each Job backed up including the job that backs up
your catalog database.
If the bootstrap-file-specification begins with a vertical bar
(|), Bacula will use the specification as the name of a program to which
it will pipe the bootstrap record. It could for example be a shell
script that emails you the bootstrap record.
On versions 1.39.22 or greater, before opening the file or executing the
specified command, Bacula performs
character substitution as in the RunScript
directive. To automatically manage your bootstrap files, you can use
this in your JobDefs resources:
JobDefs {
  Write Bootstrap = "%c_%n.bsr"
  ...
}
For more details on using this file, please see the chapter entitled
The Bootstrap File of this manual.
- Client = <client-resource-name>
-
The Client directive specifies the Client (File daemon) that will be used in
the current Job. Only a single Client may be specified in any one Job. The
Client runs on the machine to be backed up, and sends the requested files to
the Storage daemon for backup, or receives them when restoring. For
additional details, see the
Client Resource section of this chapter.
This directive is required.
- FileSet = <FileSet-resource-name>
-
The FileSet directive specifies the FileSet that will be used in the
current Job. The FileSet specifies which directories (or files) are to
be backed up, and what options to use (e.g. compression, ...). Only a
single FileSet resource may be specified in any one Job. For additional
details, see the FileSet Resource section of
this chapter. This directive is required.
- Messages = <messages-resource-name>
-
The Messages directive defines what Messages resource should be used for
this job, and thus how and where the various messages are to be
delivered. For example, you can direct some messages to a log file, and
others can be sent by email. For additional details, see the
Messages Resource Chapter of this manual. This
directive is required.
- Pool = <pool-resource-name>
-
The Pool directive defines the pool of Volumes where your data can be
backed up. Many Bacula installations will use only the Default
pool. However, if you want to specify a different set of Volumes for
different Clients or different Jobs, you will probably want to use
Pools. For additional details, see the Pool Resource
section of this chapter. This directive is required.
- Full Backup Pool = <pool-resource-name>
-
The Full Backup Pool specifies a Pool to be used for Full backups.
It will override any Pool specification during a Full backup. This
directive is optional.
- Differential Backup Pool = <pool-resource-name>
-
The Differential Backup Pool specifies a Pool to be used for
Differential backups. It will override any Pool specification during a
Differential backup. This directive is optional.
- Incremental Backup Pool = <pool-resource-name>
-
The Incremental Backup Pool specifies a Pool to be used for
Incremental backups. It will override any Pool specification during an
Incremental backup. This directive is optional.
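Taken together, these three directives let a single Job write each backup
level to its own set of Volumes. A sketch (the pool names are placeholders
and must be defined as Pool resources):
Job {
  Name = "ClientA-backup"
  ...
  Pool = Default                        # used if no level-specific pool applies
  Full Backup Pool = MonthlyPool
  Differential Backup Pool = WeeklyPool
  Incremental Backup Pool = DailyPool
}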
- Schedule = <schedule-name>
-
The Schedule directive defines what schedule is to be used for the Job.
The schedule in turn determines when the Job will be automatically
started and what Job level (i.e. Full, Incremental, ...) is to be run.
This directive is optional, and if left out, the Job can only be started
manually using the Console program. Although you may specify only a
single Schedule resource for any one job, the Schedule resource may
contain multiple Run directives, which allow you to run the Job at
many different times, and each run directive permits overriding
the default Job Level, Pool, Storage, and Messages resources. This gives
considerable flexibility in what can be done with a single Job. For
additional details, see the Schedule Resource
Chapter of this manual.
- Storage = <storage-resource-name>
-
The Storage directive defines the name of the storage services where you
want to backup the FileSet data. For additional details, see the
Storage Resource Chapter of this manual.
The Storage resource may also be specified in the Job's Pool resource,
in which case the value in the Pool resource overrides any value
in the Job. This Storage resource definition is not required in either
the Job resource or the Pool, but it must be specified in
one or the other. If it is not, a configuration error will result.
- Max Start Delay = <time>
-
The time specifies the maximum delay between the scheduled time and the
actual start time for the Job. For example, a job can be scheduled to
run at 1:00am, but because other jobs are running, it may wait to run.
If the delay is set to 3600 (one hour) and the job has not begun to run
by 2:00am, the job will be canceled. This can be useful, for example,
to prevent jobs from running during day time hours. The default is 0
which indicates no limit.
- Max Run Time = <time>
-
The time specifies the maximum allowed time that a job may run, counted
from when the job starts (not necessarily the same as when the
job was scheduled). This directive is implemented in version 1.33 and
later.
- Max Wait Time = <time>
-
The time specifies the maximum allowed time that a job may block waiting
for a resource (such as waiting for a tape to be mounted, or waiting for
the storage or file daemons to perform their duties), counted from
when the job starts (not necessarily the same as when the job was
scheduled). This directive is implemented only in version 1.33 and
later.
- Incremental Max Wait Time = <time>
-
The time specifies the maximum allowed time that an Incremental backup
job may block waiting for a resource (such as waiting for a tape to be
mounted, or waiting for the storage or file daemons to perform their
duties), counted from when the job starts (not necessarily
the same as when the job was scheduled). Please note that if there is a
Max Wait Time it may also be applied to the job.
- Differential Max Wait Time = <time>
-
The time specifies the maximum allowed time that a Differential backup
job may block waiting for a resource (such as waiting for a tape to be
mounted, or waiting for the storage or file daemons to perform their
duties), counted from when the job starts (not necessarily
the same as when the job was scheduled). Please note that if there is a
Max Wait Time it may also be applied to the job.
- Prefer Mounted Volumes = <yes|no>
-
If the Prefer Mounted Volumes directive is set to yes (default
yes), the Storage daemon is requested to select either an Autochanger or
a drive with a valid Volume already mounted in preference to a drive
that is not ready. If no drive with a suitable Volume is available, it
will select the first available drive.
If the directive is set to no, the Storage daemon will prefer
finding an unused drive, otherwise, each job started will append to the
same Volume (assuming the Pool is the same for all jobs). Setting
Prefer Mounted Volumes to no can be useful for those sites particularly
with multiple drive autochangers that prefer to maximize backup
throughput at the expense of using additional drives and Volumes. As an
optimization, when using multiple drives, you will probably want to
start each of your jobs one after another with approximately 5 second
intervals. This will help ensure that each night, the same drive
(Volume) is selected for the same job, otherwise, when you do a restore,
you may find the files spread over many more Volumes than necessary.
- Prune Jobs = <yes|no>
-
Normally, pruning of Jobs from the Catalog is specified on a Client by
Client basis in the Client resource with the AutoPrune directive.
If this directive is specified (not normally) and the value is yes, it will override the value specified in the Client resource. The
default is no.
- Prune Files = <yes|no>
-
Normally, pruning of Files from the Catalog is specified on a Client by
Client basis in the Client resource with the AutoPrune directive.
If this directive is specified (not normally) and the value is yes, it will override the value specified in the Client resource. The
default is no.
- Prune Volumes = <yes|no>
-
Normally, pruning of Volumes from the Catalog is specified on a Client
by Client basis in the Client resource with the AutoPrune
directive. If this directive is specified (not normally) and the value
is yes, it will override the value specified in the Client
resource. The default is no.
- RunScript {<body-of-runscript>}
-
This directive is only implemented in version 1.39.22 and later.
The RunScript directive behaves more like a resource in that it
requires opening and closing braces around a number of directives
that make up the body of the runscript.
The specified Command (see below for details) is run as an
external program before or after the current Job. This is optional.
The following options may be specified in the body
of the RunScript:
Options             Value                Default   Information
Runs On Success     Yes/No               Yes       Run command if JobStatus is successful
Runs On Failure     Yes/No               No        Run command if JobStatus isn't successful
Runs On Client      Yes/No               Yes       Run command on client
Runs When           Before|After|Always  Never     When to run commands
Abort Job On Error  Yes/No               Yes       Abort job if script returns something
                                                   different from 0
Command                                            Path to your script
Any output sent by the command to standard output will be included in the
Bacula job report. The command string must be a valid program name or name
of a shell script.
In addition, the command string is parsed then fed to the execvp() function,
which means that the path will be searched to execute your specified
command, but there is no shell interpretation, as a consequence, if you
invoke complicated commands or want any shell features such as redirection
or piping, you must call a shell script and do it inside that script.
Before submitting the specified command to the operating system, Bacula
performs character substitution of the following characters:
%% = %
%c = Client's name
%d = Director's name
%e = Job Exit Status
%i = JobId
%j = Unique Job id
%l = Job Level
%n = Job name
%s = Since time
%t = Job type (Backup, ...)
%v = Volume name
The Job Exit Status code %e expands to one of the following values:
- OK
- Error
- Fatal Error
- Canceled
- Differences
- Unknown term code
Thus if you use it on a command line, you will need to enclose
it within some sort of quotes.
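As an illustration of the substitution characters, the sketch below passes
the job name, level, and exit status to a notification script;
/usr/local/bin/bacula-notify.sh is a hypothetical script you would supply
yourself:
RunScript {
  RunsWhen = After
  RunsOnSuccess = yes
  RunsOnFailure = yes
  RunsOnClient = no
  # %e may expand to multi-word values such as "Fatal Error",
  # so it is enclosed in escaped quotes.
  Command = "/usr/local/bin/bacula-notify.sh %n %l \"%e\""
}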
You can use the following shortcuts:
Keyword                RunsOnSuccess  RunsOnFailure  AbortJobOnError  RunsOnClient  RunsWhen
Run Before Job                                       Yes              No            Before
Run After Job          Yes            No                              No            After
Run After Failed Job   No             Yes                             No            After
Client Run Before Job                                Yes              Yes           Before
Client Run After Job   Yes            No                              Yes           After
Examples:
RunScript {
  RunsWhen = Before
  AbortJobOnError = No
  Command = "/etc/init.d/apache stop"
}

RunScript {
  RunsWhen = After
  RunsOnFailure = yes
  Command = "/etc/init.d/apache start"
}
Special Windows Considerations
In addition, for a Windows client on version 1.33 and above, please take
careful note that you must ensure a correct path to your script. The
script or program can be a .com, .exe or a .bat file. However, if you
specify a path, you must also specify the full extension. Unix like
commands will not work unless you have installed and properly configured
Cygwin in addition to and separately from Bacula.
The command can be anything that cmd.exe or command.com will recognize
as an executable file. Specifying the executable's extension is
optional, unless there is an ambiguity (e.g. ls.bat vs. ls.exe).
The System %Path% will be searched for the command. (Under the
environment variable dialog you have both System Environment and
User Environment; we believe that only the System environment will be
available to bacula-fd if it is running as a service.)
System environment variables can be referenced with %var% and
used as either part of the command name or arguments.
ClientRunBeforeJob = "\"C:/Program Files/Software
Vendor/Executable\" /arg1 /arg2 \"foo bar\""
The special characters &<>()@^| will need to be quoted if they are
part of a filename or argument.
If someone is logged in, a blank "command" window running the commands
will be present during the execution of the command.
Some Suggestions from Phil Stracchino for running on Win32 machines with
the native Win32 File daemon:
- You might want the ClientRunBeforeJob directive to specify a .bat
file which runs the actual client-side commands, rather than trying
to run (for example) regedit /e directly.
- The batch file should explicitly 'exit 0' on successful completion.
- The path to the batch file should be specified in Unix form:
ClientRunBeforeJob = "c:/bacula/bin/systemstate.bat"
rather than DOS/Windows form:
ClientRunBeforeJob =
"c:\bacula\bin\systemstate.bat"
INCORRECT
For Win32, please note that there are certain limitations:
ClientRunBeforeJob = "C:/Program Files/Bacula/bin/pre-exec.bat"
Lines like the above do not work because of limitations in
cmd.exe, which is used to execute the command.
Bacula prefixes the string you supply with cmd.exe /c . To test that
your command works you should type cmd /c "C:/Program Files/test.exe" at a
cmd prompt and see what happens. Once the command is correct insert a
backslash (\) before each double quote ("), and
then put quotes around the whole thing when putting it in
the director's .conf file. You either need to have only one set of quotes
or else use the short name and don't put quotes around the command path.
Below is the output from cmd's help as it relates to the command line
passed to the /c option.
If /C or /K is specified, then the remainder of the command line after
the switch is processed as a command line, where the following logic is
used to process quote (") characters:
- If all of the following conditions are met, then quote characters
on the command line are preserved:
- no /S switch.
- exactly two quote characters.
- no special characters between the two quote characters,
where special is one of:
&<>()@^|
- there are one or more whitespace characters between
the two quote characters.
- the string between the two quote characters is the name
of an executable file.
- Otherwise, old behavior is to see if the first character is
a quote character and if so, strip the leading character and
remove the last quote character on the command line, preserving
any text after the last quote character.
The following example of the use of the Client Run Before Job directive was
submitted by a user:
You could write a shell script to back up a DB2 database to a FIFO. The shell
script is:
#!/bin/sh
# ===== backupdb.sh
DIR=/u01/mercuryd
mkfifo $DIR/dbpipe
db2 BACKUP DATABASE mercuryd TO $DIR/dbpipe WITHOUT PROMPTING &
sleep 1
Then add the following line to the Job resource in the bacula-dir.conf file:
Client Run Before Job = "su - mercuryd -c \"/u01/mercuryd/backupdb.sh '%t'
'%l'\""
When the job is run, you will get messages from the output of the script
stating that the backup has started. Even though the command being run is
backgrounded with &, the job will block until the "db2 BACKUP DATABASE"
command, thus the backup stalls.
To remedy this situation, the "db2 BACKUP DATABASE" line should be changed to
the following:
db2 BACKUP DATABASE mercuryd TO $DIR/dbpipe WITHOUT PROMPTING > $DIR/backup.log
2>&1 < /dev/null &
It is important to redirect the input and outputs of a backgrounded command to
/dev/null to prevent the script from blocking.
- Run Before Job = <command>
-
The specified command is run as an external program prior to running the
current Job. This directive is not required, but if it is defined, and if the
exit code of the program run is non-zero, the current Bacula job will be
canceled.
Run Before Job = "echo test"
it's equivalent to :
RunScript {
Command = "echo test"
RunsOnClient = No
RunsWhen = Before
}
Lutz Kittler has pointed out that using the RunBeforeJob directive can be a
simple way to modify your schedules during a holiday. For example, suppose
that you normally do Full backups on Fridays, but Thursday and Friday are
holidays. To avoid having to change tapes between Thursday and Friday when
no one is in the office, you can create a RunBeforeJob that returns a
non-zero status on Thursday and zero on all other days. That way, the
Thursday job will not run, and on Friday the tape you inserted on Wednesday
before leaving will be used.
- Run After Job = <command>
-
The specified command is run as an external program if the current
job terminates normally (without error or without being canceled). This
directive is not required. If the exit code of the program run is
non-zero, Bacula will print a warning message. Before submitting the
specified command to the operating system, Bacula performs character
substitution as described above for the RunScript directive.
An example of the use of this directive is given in the
Tips Chapter of this manual.
See the Run After Failed Job if you
want to run a script after the job has terminated with any
non-normal status.
- Run After Failed Job = <command>
-
The specified command is run as an external program after the current
job terminates with any error status. This directive is not required. The
command string must be a valid program name or name of a shell script. If
the exit code of the program run is non-zero, Bacula will print a
warning message. Before submitting the specified command to the
operating system, Bacula performs character substitution as described above
for the RunScript directive. Note, if you wish your script
to run regardless of the exit status of the Job, you can use this:
RunScript {
  Command = "echo test"
  RunsWhen = After
  RunsOnFailure = yes
  RunsOnClient = no
  RunsOnSuccess = yes    # default, you can drop this line
}
An example of the use of this directive is given in the
Tips Chapter of this manual.
- Client Run Before Job = <command>
-
This directive is the same as Run Before Job except that the
program is run on the client machine. The same restrictions apply to
Unix systems as noted above for the RunScript.
- Client Run After Job = <command>
-
This directive is the same as Run After Job except that it is run on
the client machine. Note, please see the notes above in RunScript
concerning Windows clients.
- Rerun Failed Levels = <yes|no>
-
If this directive is set to yes (default no), and Bacula detects that
a previous job at a higher level (i.e. Full or Differential) has failed,
the current job level will be upgraded to the higher level. This is
particularly useful for laptops, which may often be unreachable; if
a prior Full save has failed, you may wish the very next backup to be a Full
save rather than whatever level it is started as.
There are several points that must be taken into account when using this
directive: first, a failed job is defined as one that has not terminated
normally, which includes any running job of the same name (you need to
ensure that two jobs of the same name do not run simultaneously);
secondly, the Ignore FileSet Changes directive is not considered
when checking for failed levels, which means that any FileSet change will
trigger a rerun.
- Spool Data = <yes|no>
-
If this directive is set to yes (default no), the Storage daemon will
be requested to spool the data for this Job to disk rather than write it
directly to tape. Once all the data arrives or the spool files' maximum sizes
are reached, the data will be despooled and written to tape. When this
directive is set to yes, the Spool Attributes is also automatically set to
yes. Spooling data prevents tape shoe-shine (start and stop) during
Incremental saves. This option should not be used if you are writing to a
disk file.
- Spool Attributes = <yes|no>
-
The default is set to no, which means that the File attributes are sent
by the Storage daemon to the Director as they are stored on tape. However,
if you want to avoid the possibility that database updates will slow down
writing to the tape, you may want to set the value to yes, in which
case the Storage daemon will buffer the File attributes and Storage
coordinates to a temporary file in the Working Directory, then when writing
the Job data to the tape is completed, the attributes and storage coordinates
will be sent to the Director.
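A typical tape-writing Job might therefore enable both directives, for
example (the Job name is a placeholder):
Job {
  Name = "ClientA-backup"
  ...
  Spool Data = yes        # spool job data to disk before writing it to tape
  Spool Attributes = yes  # set implicitly by Spool Data = yes; shown for clarity
}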
- Where = <directory>
-
This directive applies only to a Restore job and specifies a prefix to
the directory name of all files being restored. This permits files to
be restored in a different location from which they were saved. If Where
is not specified or is set to slash (/), the files will be restored to
their original location. By default, we have set Where in the example
configuration files to be /tmp/bacula-restores. This is to prevent
accidental overwriting of your files.
- Replace = <replace-option>
-
This directive applies only to a Restore job and specifies what happens
when Bacula wants to restore a file or directory that already exists.
You have the following options for replace-option:
- always
-
when the file to be restored already exists, it is deleted and then
replaced by the copy that was backed up.
- ifnewer
-
if the backed up file (on tape) is newer than the existing file, the
existing file is deleted and replaced by the backup.
- ifolder
-
if the backed up file (on tape) is older than the existing file, the
existing file is deleted and replaced by the backup.
- never
-
if the backed up file already exists, Bacula skips restoring this file.
- Prefix Links=<yes|no>
-
If a Where path prefix is specified for a recovery job, apply it
to absolute links as well. The default is No. When set to Yes,
while restoring files to an alternate directory, any absolute
soft links will also be modified to point to the new alternate
directory. Normally this is what is desired -- i.e. everything is
self-consistent. However, if you wish to later move the files to their
original locations, all files linked with absolute names will be broken.
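Putting the restore-related directives together, a prototype Restore job
along the lines discussed above might look like the following sketch (all
names are placeholders; as noted earlier, the FileSet is required for
historical reasons but no longer used):
Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = clienta-fd
  FileSet = "Full Set"
  Storage = DLTDrive
  Pool = Default
  Messages = Standard
  Where = /tmp/bacula-restores  # restore under this prefix, not in place
  Replace = ifnewer             # only overwrite files older than the backup
}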
- Maximum Concurrent Jobs = <number>
-
where <number> is the maximum number of Jobs from the current
Job resource that can run concurrently. Note, this directive limits
only Jobs with the same name as the resource in which it appears. Any
other restrictions on the maximum concurrent jobs such as in the
Director, Client, or Storage resources will also apply in addition to
the limit specified here. The default is set to 1, but you may set it
to a larger number. We strongly recommend that you read the WARNING
documented under Maximum Concurrent Jobs in the
Director's resource.
- Reschedule On Error = <yes|no>
-
If this directive is enabled, and the job terminates in error, the job
will be rescheduled as determined by the Reschedule Interval and
Reschedule Times directives. If you cancel the job, it will not
be rescheduled. The default is no (i.e. the job will not be
rescheduled).
This specification can be useful for portables, laptops, or other
machines that are not always connected to the network or switched on.
- Reschedule Interval = <time-specification>
-
If you have specified Reschedule On Error = yes and the job
terminates in error, it will be rescheduled after the interval of time
specified by time-specification. See the time
specification formats in the Configure chapter for details of
time specifications. If no interval is specified, the job will not be
rescheduled on error.
- Reschedule Times = <count>
-
This directive specifies the maximum number of times to reschedule the
job. If it is set to zero (the default) the job will be rescheduled an
indefinite number of times.
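Combining the three Reschedule directives, a job for an intermittently
connected machine might be sketched as follows (the values are only
illustrative):
Job {
  Name = "Laptop-backup"
  ...
  Reschedule On Error = yes
  Reschedule Interval = 30 minutes   # wait half an hour between attempts
  Reschedule Times = 5               # give up after five retries
}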
- Run = <job-name>
-
The Run directive (not to be confused with the Run option in a
Schedule) allows you to start other jobs or to clone jobs. By using the
cloning keywords (see below), you can backup
the same data (or almost the same data) to two or more drives
at the same time. The job-name is normally the same name
as the current Job resource (thus creating a clone). However, it
may be any Job name, so one job may start other related jobs.
The part after the equal sign must be enclosed in double quotes,
and can contain any string or set of options (overrides) that you
can specify when entering the Run command from the console. For
example storage=DDS-4 .... In addition, there are two special
keywords that permit you to clone the current job. They are level=%l
and since=%s. The %l in the level keyword permits
entering the actual level of the current job and the %s in the since
keyword permits putting the same time for comparison as used on the
current job. Note, in the case of the since keyword, the %s must be
enclosed in double quotes, and thus they must be preceded by a backslash
since they are already inside quotes. For example:
run = "Nightly-backup level=%l since=\"%s\" storage=DDS-4"
A cloned job will not start additional clones, so it is not
possible to recurse.
- Priority = <number>
-
This directive permits you to control the order in which your jobs run
by specifying a positive non-zero number. The higher the number, the
lower the job priority. Assuming you are not running concurrent jobs,
all queued jobs of priority 1 will run before queued jobs of priority 2
and so on, regardless of the original scheduling order.
The priority only affects waiting jobs that are queued to run, not jobs
that are already running. If one or more jobs of priority 2 are already
running, and a new job is scheduled with priority 1, the currently
running priority 2 jobs must complete before the priority 1 job is run.
The default priority is 10.
If you want to run concurrent jobs you should
keep these points in mind:
- To run concurrent jobs, you must set Maximum Concurrent Jobs = 2 in five
or six distinct places: in bacula-dir.conf in the Director, the Job, the
Client, and the Storage resources; in bacula-fd.conf in the FileDaemon (or
Client) resource; and in bacula-sd.conf in the Storage resource. If any
one is missing, it will throttle the jobs to one at a time. You may, of
course, set the Maximum Concurrent Jobs to more than 2.
- Bacula concurrently runs jobs of only one priority at a time. It
will not simultaneously run a priority 1 and a priority 2 job.
- If Bacula is running a priority 2 job and a new priority 1 job is
scheduled, it will wait until the running priority 2 job terminates even
if the Maximum Concurrent Jobs settings would otherwise allow two jobs
to run simultaneously.
- Suppose that bacula is running a priority 2 job and a new priority 1
job is scheduled and queued waiting for the running priority 2 job to
terminate. If you then start a second priority 2 job, the waiting
priority 1 job will prevent the new priority 2 job from running
concurrently with the running priority 2 job. That is: as long as there
is a higher priority job waiting to run, no new lower priority jobs will
start even if the Maximum Concurrent Jobs settings would normally allow
them to run. This ensures that higher priority jobs will be run as soon
as possible.
If you have several jobs of different priority, it may not be best to start
them at exactly the same time, because Bacula must examine them one at a
time. If Bacula starts a lower priority job first, then it will run
before your high priority jobs. If you experience this problem, you may
avoid it by starting any higher priority jobs a few seconds before lower
priority ones. This ensures that Bacula will examine the jobs in the
correct order, and that your priority scheme will be respected.
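As a brief sketch, the common pattern of backing up the catalog only after
the regular backups have finished can be expressed with a higher Priority
number (the job name is a placeholder):
Job {
  Name = "BackupCatalog"
  ...
  Priority = 11   # waits until the default priority 10 jobs have run
}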
- Write Part After Job = <yes|no>
-
This directive is only implemented in version 1.37 and later.
If this directive is set to yes (default no), a new part file
will be created after the job is finished.
It should be set to yes when writing to devices that require mount
(for example DVD), so you are sure that the current part, containing
this job's data, is written to the device, and that no data is left in
the temporary file on the hard disk. However, on some media, like DVD+R
and DVD-R, a lot of space (about 10 MB) is lost every time a part is
written. So, if you run several jobs one after another, you could set
this directive to no for all jobs, except the last one, to avoid
wasting too much space, but to ensure that the data is written to the
medium when all jobs are finished.
This directive is ignored with tape and FIFO devices.
The following is an example of a valid Job resource definition:
Job {
  Name = "Minou"
  Type = Backup
  Level = Incremental       # default
  Client = Minou
  FileSet = "Minou Full Set"
  Storage = DLTDrive
  Pool = Default
  Schedule = "MinouWeeklyCycle"
  Messages = Standard
}
The JobDefs Resource
The JobDefs resource permits all the same directives that can appear in a Job
resource. However, a JobDefs resource does not create a Job, rather it can be
referenced within a Job to provide defaults for that Job. This permits you to
concisely define several nearly identical Jobs, each one referencing a JobDefs
resource which contains the defaults. Only the changes from the defaults need to
be mentioned in each Job.
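The following sketch, modeled on the example in the default
bacula-dir.conf, shows a JobDefs resource and a Job that references it; all
names are placeholders:
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = DLTDrive
  Messages = Standard
  Pool = Default
  Priority = 10
}

Job {
  Name = "ClientA-backup"
  JobDefs = "DefaultJob"
  Client = clienta-fd    # only the differences from the defaults are given
}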
The Schedule Resource
The Schedule resource provides a means of automatically scheduling a Job as
well as the ability to override the default Level, Pool, Storage and Messages
resources. If a Schedule resource is not referenced in a Job, the Job can only
be run manually. In general, you specify an action to be taken and when.
- Schedule
-
Start of the Schedule directives. No Schedule resource is
required, but you will need at least one if you want Jobs to be
automatically started.
- Name = <name>
-
The name of the schedule being defined. The Name directive is required.
- Run = <Job-overrides> <Date-time-specification>
-
The Run directive defines when a Job is to be run, and what overrides if
any to apply. You may specify multiple run directives within a
Schedule resource. If you do, they will all be applied (i.e.
multiple schedules). If you have two Run directives that start at
the same time, two Jobs will start at the same time (well, within one
second of each other).
The Job-overrides permit overriding the Level, the Storage, the
Messages, and the Pool specifications provided in the Job resource. In
addition, the FullPool, the IncrementalPool, and the DifferentialPool
specifications permit overriding the Pool specification according to
what backup Job Level is in effect.
By the use of overrides, you may customize a particular Job. For
example, you may specify a Messages override for your Incremental
backups that outputs messages to a log file, but for your weekly or
monthly Full backups, you may send the output by email by using a
different Messages override.
Job-overrides are specified as keyword=value, where the
keyword is Level, Storage, Messages, Pool, FullPool, DifferentialPool,
or IncrementalPool, and the value is as defined on the respective
directive formats for the Job resource. You may specify multiple
Job-overrides on one Run directive by separating them with one or
more spaces or with a trailing comma. For example (a combined Run
directive is shown after this list):
- Level=Full
-
backs up all files in the FileSet, whether or not they have changed.
- Level=Incremental
-
backs up all files that have changed since the last backup.
- Pool=Weekly
-
specifies to use the Pool named Weekly.
- Storage=DLT_Drive
-
specifies to use DLT_Drive for the storage device.
- Messages=Verbose
-
specifies to use the Verbose message resource for the Job.
- FullPool=Full
-
specifies to use the Pool named Full if the job is a full backup, or is
upgraded from another type to a full backup.
- DifferentialPool=Differential
-
specifies to use the Pool named Differential if the job is a
differential backup.
- IncrementalPool=Incremental
-
specifies to use the Pool named Incremental if the job is an
incremental backup.
- SpoolData=yes|no
-
tells Bacula to request the Storage daemon to spool data to a disk file
before putting it on tape.
- WritePartAfterJob=yes|no
-
tells Bacula to request the Storage daemon to write the current part file to
the device when the job is finished (see the
Write Part After Job directive in the Job
resource). Please note that this directive is implemented
only in version 1.37 and later. The default is yes. We strongly
recommend that you keep this set to yes; otherwise, when the last job
has finished, one part will remain in the spool file and the restore may
or may not work.
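Putting several of these overrides together (the resource names are hypothetical), a single Run directive might read:
Schedule {
  Name = "MonthlyFull"
  Run = Level=Full Pool=Monthly Storage=DLT_Drive Messages=Verbose 1st sun at 3:05
}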
Date-time-specification determines when the Job is to be run. The
specification is a repetition, and as a default Bacula is set to run a job at
the beginning of the hour of every hour of every day of every week of every
month of every year. This is not normally what you want, so you must specify
or limit when you want the job to run. Any specification given is assumed to
be repetitive in nature and will serve to override or limit the default
repetition. This is done by specifying masks or times for the hour, day of the
month, day of the week, week of the month, week of the year, and month when
you want the job to run. By specifying one or more of the above, you can
define a schedule to repeat at almost any frequency you want.
Basically, you must supply a month, day, hour, and minute at which the Job
is to be run. Of these four items, day is special in that you may either
specify a day of the month such as 1, 2,
... 31, or you may specify a day of the week such as Monday, Tuesday, ...
Sunday. Finally, you may also specify a week qualifier to restrict the
schedule to the first, second, third, fourth, or fifth week of the month.
For example, if you specify only a day of the week, such as Tuesday, the
Job will be run every hour of every Tuesday of every month. That is, the
month and hour remain set to their defaults of every month and all
hours.
Note, by default with no other specification, your job will run at the
beginning of every hour. If you wish your job to run more than once in any
given hour, you will need to specify multiple Run specifications, each
with a different minute (see the TenMinutes example below).
The date/time to run the Job can be specified in the following way in
pseudo-BNF:
<void-keyword> = on
<at-keyword> = at
<week-keyword> = 1st | 2nd | 3rd | 4th | 5th | first |
second | third | fourth | fifth
<wday-keyword> = sun | mon | tue | wed | thu | fri | sat |
sunday | monday | tuesday | wednesday |
thursday | friday | saturday
<week-of-year-keyword> = w00 | w01 | ... w52 | w53
<month-keyword> = jan | feb | mar | apr | may | jun | jul |
aug | sep | oct | nov | dec | january |
february | ... | december
<daily-keyword> = daily
<weekly-keyword> = weekly
<monthly-keyword> = monthly
<hourly-keyword> = hourly
<digit> = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 0
<number> = <digit> | <digit><number>
<12hour> = 0 | 1 | 2 | ... 12
<hour> = 0 | 1 | 2 | ... 23
<minute> = 0 | 1 | 2 | ... 59
<day> = 1 | 2 | ... 31
<time> = <hour>:<minute> |
<12hour>:<minute>am |
<12hour>:<minute>pm
<time-spec> = <at-keyword> <time> |
<hourly-keyword>
<date-keyword> = <void-keyword> <weekly-keyword>
<day-range> = <day>-<day>
<month-range> = <month-keyword>-<month-keyword>
<wday-range> = <wday-keyword>-<wday-keyword>
<range> = <day-range> | <month-range> |
<wday-range>
<date> = <date-keyword> | <day> | <range>
<date-spec> = <date> | <date> <date-spec>
<day-spec> = <day> | <wday-keyword> |
<day-range> | <wday-range> |
<week-keyword> <wday-keyword> |
<week-keyword> <wday-range> |
<daily-keyword>
<month-spec> = <month-keyword> | <month-range> |
<monthly-keyword>
<date-time-spec> = <month-spec> <day-spec> <time-spec>
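To see how a concrete Run line decomposes under this grammar, consider the following (hypothetical) directive:
Run = Level=Full 1st sun at 1:05
# Level=Full   Job-override, not part of the date-time grammar
# 1st          <week-keyword>
# sun          <wday-keyword>
# at 1:05      <time-spec> = <at-keyword> <time>
# (no <month-spec> is given, so the default of every month applies)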
Note, the Week of Year specification wnn follows the ISO standard definition
of the week of the year, where Week 1 is the week in which the first Thursday
of the year occurs, or alternatively, the week that contains the 4th of
January. Weeks are numbered w01 to w53. For Bacula, w00 is the week that
precedes the first ISO week (i.e., it contains any days of the year that fall
before the first Thursday); w00 is not defined by the ISO specification. A week
begins on Monday and ends on Sunday.
An example Schedule resource named WeeklyCycle that runs a Full job
each Sunday at 1:05am and an Incremental job Monday through
Saturday at 1:05am:
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 1:05
  Run = Level=Incremental mon-sat at 1:05
}
An example of a possible monthly cycle is as follows:
Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full Pool=Monthly 1st sun at 1:05
  Run = Level=Differential 2nd-5th sun at 1:05
  Run = Level=Incremental Pool=Daily mon-sat at 1:05
}
The first of every month:
Schedule {
  Name = "First"
  Run = Level=Full on 1 at 1:05
  Run = Level=Incremental on 2-31 at 1:05
}
Every 10 minutes:
Schedule {
  Name = "TenMinutes"
  Run = Level=Full hourly at 0:05
  Run = Level=Full hourly at 0:15
  Run = Level=Full hourly at 0:25
  Run = Level=Full hourly at 0:35
  Run = Level=Full hourly at 0:45
  Run = Level=Full hourly at 0:55
}
Internally, Bacula keeps a schedule as a bit mask. Each schedule has six
masks and a minute field. The masks are hour, day of the month (mday),
month, day of the week (wday), week of the month (wom), and week of the year
(woy). The schedule is initialized with all the bits of each of these masks
set, which means that the job will run at the beginning of every hour. When
you specify a month for the first time, the month mask is cleared and the bit
corresponding to your selected month is set. If you specify a second
month, the bit corresponding to it is also added to the mask. Thus when
Bacula checks the masks to see if the bits are set corresponding to the
current time, your job will run only in the two months you have set. Likewise,
if you set a time (hour), the hour mask is cleared, the hour you
specify is set in the bit mask, and the minutes are stored in the
minute field.
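For instance (the schedule name is hypothetical), a Run line that names a month, a day, and a time narrows three of the masks:
Schedule {
  Name = "JanuaryFull"
  Run = Level=Full jan 1 at 2:05
  # jan     clears the month mask and sets only the January bit
  # 1       clears the mday mask and sets only the bit for day 1
  # at 2:05 clears the hour mask, sets hour 2, and stores 5 in the minute field
}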
For any schedule you have defined, you can see how these bits are set by doing
a show schedules command in the Console program. Please note that the
bit mask is zero based, and Sunday is the first day of the week (bit zero).
Kern Sibbald
2007-01-13