# Check for Prime Time Enforcement. Sites with a mixed user base can use
# this option to enforce separate scheduling policies at different times
# during the day. Refer to the INTERACTIVE_QUEUES description for the current
# prime-time scheduling policy. If ENFORCE_PRIME_TIME is set to "0", the
# non-prime-time scheduling policy (as described in BATCH_QUEUES) will be used
# for the entire 24 hour period.
#
# If enforcement is needed at some future time, it may be specified on the
# configuration line instead of "True". The syntax of the date string is
# MM/DD/YYYY@HH:MM:SS (due to limitations of the parsing engine). For
# instance, to start enforcing primetime on January 1, 2000 at 8:00AM, use
# the following configuration line:
#
# ENFORCE_PRIME_TIME 01/01/2000@08:00:00
#
ENFORCE_PRIME_TIME      False

# Prime-time is defined as a time period each working day (Mon-Fri)
# from PRIME_TIME_START through PRIME_TIME_END. Times are in 24
# hour format (i.e. 9:00AM is 9:00:00, 5:00PM is 17:00:00) with hours,
# minutes, and seconds. Sites can use the prime-time scheduling policy for
# the entire 24 hour period by setting PRIME_TIME_START and PRIME_TIME_END
# back-to-back. The portion of a job that fits within primetime must be
# no longer than PRIME_TIME_WALLT_LIMIT (represented in HH:MM:SS).
#
PRIME_TIME_START        ???
PRIME_TIME_END          ???
PRIME_TIME_WALLT_LIMIT  ???

# Some systems may want a split time limit for small and large jobs. If this
# is required, the split will be made at PRIME_TIME_SMALL_NODE_LIMIT nodes,
# with jobs smaller than that value being allowed PRIME_TIME_SMALL_WALLT_LIMIT
# time values, and larger jobs getting PRIME_TIME_WALLT_LIMIT.
# For instance, for 8 nodes or more, give them PRIME_TIME_WALLT_LIMIT, but
# if smaller, give them 5 minutes:
#
#PRIME_TIME_SMALL_NODE_LIMIT    8
#PRIME_TIME_SMALL_WALLT_LIMIT   0:05:00

# Additionally, some sites may want a non-primetime walltime limit that is
# different for different sized jobs. If so, set the SMALL_JOB_MAX option
# to that size, and choose time limits. For instance, for more than 8 nodes,
# give them 4 hours outside of primetime. Otherwise, give 2 hours.
#
# SMALL_JOB_MAX         8
# WALLT_LIMIT_LARGE_JOB 4:00:00
# WALLT_LIMIT_SMALL_JOB 2:00:00
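
# Tying the options above together, a purely illustrative sketch (the times
# and limits below are assumed values, not recommended defaults): a site
# enforcing prime time from 6:00AM to 6:00PM, capping the prime-time portion
# of jobs at one hour, and capping jobs smaller than 8 nodes at 5 minutes,
# might combine the settings as:
#
# ENFORCE_PRIME_TIME            True
# PRIME_TIME_START              06:00:00
# PRIME_TIME_END                18:00:00
# PRIME_TIME_WALLT_LIMIT        1:00:00
# PRIME_TIME_SMALL_NODE_LIMIT   8
# PRIME_TIME_SMALL_WALLT_LIMIT  0:05:00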

# Interactive jobs that have been queued for (INTERACTIVE_LONG_WAIT + the
# requested walltime) will be considered "long outstanding". This is based
# on the theory that there is a human on the other end of the 'qsub -I', but
# batch jobs have a looser idea of "long outstanding".
#
INTERACTIVE_LONG_WAIT   0:30:00

# Jobs that have been waiting on the submit queue for a long period of time
# should be given higher priority, in order to prevent starvation. If the
# scheduler should go out of its way to run a long-waiting job (even to the
# extreme of draining the system for it), set MAX_QUEUED_TIME to the
# amount of time that is "too long". 24 to 48 hours is usually good.
#
MAX_QUEUED_TIME         24:00:00

# Check for Dedicated/Preventative Maintenance Time Enforcement. This option
# requires the NAS 'pbsdedtime' command, which in turn depends on the NAS
# 'schedule' program.
#
# If enforcement is needed at some future time, it may be specified on the
# configuration line instead of "True". The syntax of the date string is
# MM/DD/YYYY@HH:MM:SS (due to limitations of the parsing engine). For
# instance, to start enforcing dedicated time on January 1, 2000 at 8:00AM,
# use the following configuration line:
#
# ENFORCE_DEDICATED_TIME 01/01/2000@08:00:00
#
ENFORCE_DEDICATED_TIME  False

# Command used to check for upcoming dedicated/preventative maintenance time.
# Supply the absolute path to NAS 'pbsdedtime' or a similar command. This
# command will be executed with the short name of an execution host as its
# only argument. For instance, "/usr/local/pbs/sbin/pbsdedtime hopper".
#
DEDICATED_TIME_COMMAND  ???

# To minimize the impact on the schedule server, as well as reduce average
# run time, the scheduler caches the responses for upcoming outages.
# The last response from the DEDICATED_TIME_COMMAND will be cached for this
# many seconds (300 is recommended).
# The outages cache is invalidated upon catching a SIGHUP.
#
DEDICATED_TIME_CACHE_SECS       300

# If your machines are running in a cluster configuration, you can specify a
# "logical system name" for the cluster as a whole. DEDICATED_TIME_COMMAND
# must be able to return valid information for this "hostname". Any dedicated
# times returned for SYSTEM_NAME will override the dedicated time (or the lack
# thereof) for any host in the cluster. Overlaps will be resolved correctly,
# with the system dedicated times getting priority.
# This option may be useful if your site schedules dedicated time for
# individual hosts. Point SYSTEM_NAME to the front-end to prevent scheduling
# jobs when the front-end is in dedicated time (otherwise, dedicated time is
# checked for the back-ends only).
#
SYSTEM_NAME     ???

# If SORT_BY_PAST_USAGE is non-zero, the list of jobs will be permuted to
# bring the least active users' jobs to the front of the list. This may
# provide some sense of fairness, but be warned that it may cause the choice
# of jobs to be fairly inexplicable from the outside. The "past usage" is
# decayed daily by one of two values, DECAY_FACTOR or OA_DECAY_FACTOR. The
# OA_DECAY_FACTOR is used if ENFORCE_ALLOCATION is on, and the user is
# over their allocations. Decays must be <= 1.0, with lower numbers
# indicating quicker decay.
#
#SORT_BY_PAST_USAGE     True
#DECAY_FACTOR           0.75
#OA_DECAY_FACTOR        0.95
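
# A worked illustration of the decay arithmetic (using the sample values
# above): past usage recorded N days ago is weighted by DECAY_FACTOR^N, so
# with DECAY_FACTOR 0.75 one unit of usage counts as 0.75 after one day,
# about 0.56 after two, and about 0.32 after four. With OA_DECAY_FACTOR 0.95,
# that same unit still counts as about 0.81 after four days, so users over
# their allocations "forget" past usage more slowly and stay toward the back
# of the sorted job list longer.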

# Check for Allocation Enforcement. This option depends on the NAS ACCT++
# program and a Data Loading Client.
#
# If enforcement is needed at some future time, it may be specified on the
# configuration line instead of "True". The syntax of the date string is
# MM/DD/YYYY@HH:MM:SS (due to limitations of the parsing engine). For
# instance, to start enforcing allocations on January 1, 2000 at 8:00AM,
# use the following configuration line:
#
# ENFORCE_ALLOCATION 01/01/2000@08:00:00
#
ENFORCE_ALLOCATION      False

# Absolute path to the $(PBS_SERVER_HOME)/pbs_acctdir directory which contains
# the accounting year-to-date (YTD) and allocation files.
#
SCHED_ACCT_DIR  ???

# Choose an action to take upon scheduler startup. The default is to do no
# special processing (NONE). In some instances, such as with checkpointing,
# a job can end up queued in one of the batch queues, since it was
# running before but was stopped by PBS. If the argument is RESUBMIT, these
# jobs will be moved back to the first submit queue, and scheduled as if
# they had just arrived. If the argument is RERUN, the scheduler will have
# PBS run any jobs found enqueued on the execution queues. This may cause
# the machine to get somewhat confused, as no limits checking is done (the
# assumption being that they were checked when they were enqueued).
#
SCHED_RESTART_ACTION    RESUBMIT

# Use heuristics to attempt to reduce the problem of queue fragmentation.
# If AVOID_FRAGMENTATION is set, jobs that will cause or prolong queue
# fragmentation will not be run. Once the queue becomes fragmented, this
# will clear the small jobs out of the queue by favoring larger jobs.
# This is probably not very useful in most situations, but might help if
# your site has lots of very small jobs and several queues.
#
AVOID_FRAGMENTATION     False

# In order to guarantee that large jobs will be able to run at night, set
# the NONPRIME_DRAIN_SYS option to True. This will cause a hard barrier to
# be set at the time between prime-time and non-prime-time, so the system
# will drain down to idle just before non-prime-time begins.
#
NONPRIME_DRAIN_SYS      False

# For debugging and evaluation purposes, it might be useful to have a file
# listing the "pre-scheduling" sorted list of jobs (including any special
# attributes) updated at each run. If SORTED_JOB_DUMPFILE is specified,
# the sorted list of jobs will be dumped into it. If special permissions,
# etc., are needed, create it with those permissions before starting the
# scheduler. If the file does not exist, it will be created with the default
# permissions and owner/group.
#
# A good place for this file might be $(PBS_HOME)/sched_priv/sortedjobs
#
#SORTED_JOB_DUMPFILE    ???

###############################################################################
# For testing, set TEST_ONLY to True to prevent any calls being made to PBS
# that would change the jobs in any way.
#
#TEST_ONLY      False
#
# The scheduler can multiply all system resources, load averages, node counts,
# etc., by some integer value in order to simulate running on a larger system.
# This can be used to test the expected behavior for a 128-processor machine
# on an 8-processor test host by setting FAKE_MACHINE_MULT to 16 (128 / 8).
#
#FAKE_MACHINE_MULT      16
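
# Tying the two testing options together, an illustrative sketch (the host
# sizes are taken from the example above, not measured values): to dry-run
# this configuration on an 8-processor test host while pretending to be a
# 128-processor system, without letting the scheduler modify any jobs:
#
# TEST_ONLY             True
# FAKE_MACHINE_MULT     16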