# Prime-time is defined as a time period each working day (Mon-Fri)
# from PRIME_TIME_START through PRIME_TIME_END. Times are in 24
# hour format (i.e. 9:00AM is 9:00:00, 5:00PM is 17:00:00) with hours,
# minutes, and seconds. Sites can use the prime-time scheduling policy for
# the entire 24 hour period by setting PRIME_TIME_START and PRIME_TIME_END
# back-to-back. The portion of a job that fits within primetime must be
# no longer than PRIME_TIME_WALLT_LIMIT (represented in HH:MM:SS).
#
PRIME_TIME_START        ???
PRIME_TIME_END          ???
PRIME_TIME_WALLT_LIMIT  ???

# Some systems may want a split time limit for small and large jobs. If this
# is required, the split will be made at PRIME_TIME_SMALL_NODE_LIMIT nodes,
# with jobs less than that value being allowed PRIME_TIME_SMALL_WALLT_LIMIT
# time values, and larger jobs getting PRIME_TIME_WALLT_LIMIT.
# For instance, for 8 nodes or more, give them PRIME_TIME_WALLT_LIMIT, but
# if smaller, give them 5 minutes:
#PRIME_TIME_SMALL_NODE_LIMIT    8
#PRIME_TIME_SMALL_WALLT_LIMIT   0:05:00

# Additionally, some sites may want a non-primetime walltime limit that is
# different for different sized jobs. If so, set the SMALL_JOB_MAX option
# to that size, and choose time limits. For instance, for more than 8 nodes,
# give them 4 hours outside of primetime. Otherwise, give 2 hours.
#
# SMALL_JOB_MAX           8
# WALLT_LIMIT_LARGE_JOB   4:00:00
# WALLT_LIMIT_SMALL_JOB   2:00:00
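
# As a purely illustrative sketch (these are not shipped defaults; the actual
# window and limit are site policy), a site whose working day runs 6:00AM to
# 6:00PM and which caps the prime-time portion of a job at four hours might
# fill in the PRIME_TIME_* settings above like this:
#
#	PRIME_TIME_START        6:00:00
#	PRIME_TIME_END          18:00:00
#	PRIME_TIME_WALLT_LIMIT  4:00:00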
# Check for Dedicated/Preventative Maintenance Time Enforcement. This option
# requires the NAS 'pbsdedtime' command which in turn depends on the NAS
# 'schedule' program.
#
# If enforcement is needed at some future time, it may be specified on the
# configuration line instead of "True". The syntax of the date string is
# MM/DD/YYYY@HH:MM:SS (due to limitations of the parsing engine). For
# instance, to start enforcing dedicated time on January 1, 2000 at 8:00AM,
# use the following configuration line:
#
# ENFORCE_DEDICATED_TIME  01/01/2000@08:00:00
#ENFORCE_DEDICATED_TIME   False

# Command used to check for upcoming dedicated/preventative maintenance time.
# Supply the absolute path to NAS 'pbsdedtime' or a similar command. This
# command will be executed with the short name of an execution host as its
# only argument. For instance, "/usr/local/pbs/sbin/pbsdedtime hopper".
#DEDICATED_TIME_COMMAND   ???

# If your machines are running in a cluster configuration, you can specify a
# "logical system name" for the cluster as a whole. DEDICATED_TIME_COMMAND
# must be able to return valid information for this "hostname". Any dedicated
# times returned for SYSTEM_NAME will override the dedicated time (or the lack
# thereof) for any host in the cluster. Overlaps will be resolved correctly,
# with the system dedicated times getting priority.
# This option may be useful if you schedule dedicated time for individual
# hosts. Point SYSTEM_NAME to the front-end to prevent scheduling jobs when
# the front-end is in dedicated time (otherwise, dedicated time is checked for
# the back-ends only).
#SYSTEM_NAME   ???

# To minimize the impact on the schedule server, as well as reduce average
# run time, the scheduler caches the responses for upcoming outages.
# The last response from the DEDICATED_TIME_COMMAND will be cached for this
# many seconds (300 is recommended).
# The outages cache is invalidated upon catching a SIGHUP.
#DEDICATED_TIME_CACHE_SECS   300

# If SORT_BY_PAST_USAGE is non-zero, the list of jobs will be permuted to
# bring the least active users' jobs to the front of the list. This may
# provide some sense of fairness, but be warned that it may cause the choice
# of jobs to be fairly inexplicable from the outside. The "past usage" is
# decayed daily by one of two values, DECAY_FACTOR or OA_DECAY_FACTOR. The
# OA_DECAY_FACTOR is used if ENFORCE_ALLOCATION is on and the user is
# over their allocations. Decays must be <= 1.0, with lower numbers
# indicating quicker decay.
#SORT_BY_PAST_USAGE   True
#DECAY_FACTOR         0.75
#OA_DECAY_FACTOR      0.95
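
# As a rough worked example (assuming, as a simplification, that the decay is
# applied as one multiplication per day): with DECAY_FACTOR at 0.75, a recorded
# usage of 100 units drops to 75 after one day, about 56 after two, and about
# 42 after three. With OA_DECAY_FACTOR at 0.95, the same 100 units would still
# stand at about 86 after three days, so users who are over their allocations
# shed their "heavy user" standing more slowly.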
# Check for Allocation Enforcement. This option depends on the NAS ACCT++
# program and a Data Loading Client.
#
# If enforcement is needed at some future time, it may be specified on the
# configuration line instead of "True". The syntax of the date string is
# MM/DD/YYYY@HH:MM:SS (due to limitations of the parsing engine). For
# instance, to start enforcing allocations on January 1, 2000 at 8:00AM,
# use the following configuration line:
#
# ENFORCE_ALLOCATION  01/01/2000@08:00:00
#ENFORCE_ALLOCATION   False

# Absolute path to the $(PBS_SERVER_HOME)/pbs_acctdir directory which contains
# the accounting year-to-date (YTD) and allocation files.
#SCHED_ACCT_DIR   ???

# Choose an action to take upon scheduler startup. The default is to do no
# special processing (NONE). In some instances, such as with checkpointing,
# a job can end up queued in one of the batch queues, since it was
# running before but was stopped by PBS. If the argument is RESUBMIT, these
# jobs will be moved back to the first submit queue, and scheduled as if
# they had just arrived. If the argument is RERUN, the scheduler will have
# PBS run any jobs found enqueued on the execution queues. This may cause
# the machine to get somewhat confused, as no limits checking is done (the
# assumption being that they were checked when they were enqueued).
#SCHED_RESTART_ACTION   RESUBMIT

# Interactive jobs that have been queued for (INTERACTIVE_LONG_WAIT + the
# requested walltime) will be considered "long outstanding". This is based
# on the theory that there is a human on the other end of the 'qsub -I', but
# batch jobs have a looser idea of "long outstanding".
#INTERACTIVE_LONG_WAIT   0:30:00

# Use heuristics to attempt to reduce the problem of queue fragmentation.
# If AVOID_FRAGMENTATION is set, jobs that will cause or prolong queue
# fragmentation will not be run. Once the queue becomes fragmented, this
# will clear the small jobs out of the queue by favoring larger jobs.
# This is probably not very useful in most situations, but might help if
# your site has lots of very small jobs and several queues.
#AVOID_FRAGMENTATION   False

# In order to guarantee that large jobs will be able to run at night, set
# the NONPRIME_DRAIN_SYS option to True. This will cause a hard barrier to
# be set at the time between prime-time and non-prime-time, so the system
# will drain down to idle just before non-prime-time begins.
#NONPRIME_DRAIN_SYS   False

# It is often the case that one or more queues will be drained of all jobs
# that can run during primetime before the non-primetime period begins. If
# the queue is idling (and, optionally, has been idle for NP_DRAIN_IDLETIME)
# and some jobs could be run if primetime were to be turned off, and it is
# within NP_DRAIN_BACKTIME of the PT/NPT boundary, turn off the enforcement
# of primetime for the queue. This option requires NONPRIME_DRAIN_SYS and
# ENFORCE_PRIME_TIME to be set.
# For example, to start non-primetime jobs early if it is within 30 minutes
# of the PT/NPT boundary, and the queue has been idle for 5 minutes or more,
# use the following values:
#
#NP_DRAIN_BACKTIME   00:30:00
#NP_DRAIN_IDLETIME   00:05:00

# Jobs that have been waiting on the submit queue for a long period of time
# should be given higher priority, in order to prevent starvation. If the
# scheduler should go out of its way to run a long-waiting job (even to the
# extreme of draining the system for it), set the MAX_QUEUED_TIME to the
# amount of time that is "too long". 24 to 48 hours is usually good.
#MAX_QUEUED_TIME   24:00:00

# For debugging and evaluation purposes, it might be useful to have a file
# listing the "pre-scheduling" sorted list of jobs (including any special
# attributes) updated at each run. If SORTED_JOB_DUMPFILE is specified,
# the sorted list of jobs will be dumped into it. If special permissions,
# etc, are needed, create it with those permissions before starting the
# scheduler. If the file does not exist, it will be created with the default
# permissions and owner/group.
#
# A good place for this file might be $(PBS_HOME)/sched_priv/sortedjobs
#
#SORTED_JOB_DUMPFILE   ???

# To enable support for the scheduler to manage the state (either user- or
# global-mode) of the HPM counters on each machine, set the MANAGE_HPM_COUNTERS
# option. Note that this requires each back-end to be running a PBS mom that
# understands the 'hpm_ctl' directive, and parses the following resmom
# requests. The 'global' and 'user' directives should set the HPM counters
# to the correct modes. For more information, see the ecadmin(1), ecstats(1)
# and r10k_counters(1) manual pages.
#
# hpm_ctl[mode=query]    Returns either 'global' or 'user'
# hpm_ctl[mode=global]   Returns either 'OKAY' or 'ERROR <error string>'
# hpm_ctl[mode=user]     Returns either 'OKAY' or 'ERROR <error string>'
#
#MANAGE_HPM_COUNTERS   True

# The HPM kernel interface does not provide a method of revoking the hpm
# counters from a process. However, it is possible to dig around with
# icrash(1) and determine which processes have those counters attached to
# them and kill them. The hpm_ctl 'revoke' method performs this action.
# To enable this activity, set REVOKE_HPM_COUNTERS to True.
#
# Note: This is not a completely risk-free undertaking.
#REVOKE_HPM_COUNTERS   True

##############################################################################

# For testing, set TEST_ONLY to True to prevent any calls being made to PBS
# that would change the jobs in any way.
#
#TEST_ONLY   False
#
# The scheduler can multiply all system resources, load averages, node counts,
# etc, by some integer value in order to simulate running on a larger system.
# This can be used to test the expected behavior for a 128-processor machine
# on an 8-processor test host by setting FAKE_MACHINE_MULT to 16 (128 / 8).
#FAKE_MACHINE_MULT   16
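
# For example (an illustrative combination only, not a recommendation): to
# exercise the scheduling logic on a small test host without letting the
# scheduler touch any real jobs, a site might enable both of the options
# above together:
#
#	TEST_ONLY           True
#	FAKE_MACHINE_MULT   16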