虫虫下载站

tcltest.tcl
Collection: Swarm, developed by the Santa Fe Institute, for complex adaptive systems (CAS) simulation and related work
Language: TCL
Page 1 of 5
# substitution on the list as though each word in the list is a separate
# argument to the Tcl function.  For example, if this function is
# invoked as:
#
#      SubstArguments {$a {$a}}
#
# Then it is as though the function is invoked as:
#
#      SubstArguments $a {$a}
#
# This code is adapted from Paul Duffin's function "SplitIntoWords".
# The original function can be found on:
#
#      http://purl.org/thecliff/tcl/wiki/858.html
#
# Results:
#     a list containing the result of the substitution
#
# Exceptions:
#     An error may occur if the list contains unbalanced quotes or
#     unknown variables.
#
# Side Effects:
#     None.
#
proc tcltest::SubstArguments {argList} {
    # We need to split the argList up into tokens but cannot use list
    # operations as they throw away some significant quoting, and
    # [split] ignores braces as it should.  Therefore what we do is
    # gradually build up a string out of whitespace separated strings.
    # We cannot use [split] to split the argList into whitespace
    # separated strings as it throws away the whitespace, which may be
    # important, so we have to do it all by hand.

    set result {}
    set token ""

    while {[string length $argList]} {
        # Look for the next word containing a quote: " { }
        if {[regexp -indices {[^ \t\n]*[\"\{\}]+[^ \t\n]*} \
                $argList all]} {
            # Get the text leading up to this word, but not including
            # this word, from the argList.
            set text [string range $argList 0 \
                    [expr {[lindex $all 0] - 1}]]
            # Get the word with the quote
            set word [string range $argList \
                    [lindex $all 0] [lindex $all 1]]
            # Remove all text up to and including the word from the
            # argList.
            set argList [string range $argList \
                    [expr {[lindex $all 1] + 1}] end]
        } else {
            # Take everything up to the end of the argList.
            set text $argList
            set word {}
            set argList {}
        }

        if {$token != {}} {
            # If we saw a word with a quote before, then there is a
            # multi-word token starting with that word.  In this case,
            # add the text and the current word to this token.
            append token $text $word
        } else {
            # Add the text to the result.  There is no need to parse
            # the text because it couldn't be a part of any multi-word
            # token.  Then start a new multi-word token with the word,
            # because we need to pass this token to the Tcl parser to
            # check for balancing quotes.
            append result $text
            set token $word
        }

        if { [catch {llength $token} length] == 0 && $length == 1} {
            # The token is a valid list so add it to the result.
            # lappend result [string trim $token]
            append result \{$token\}
            set token {}
        }
    }

    # If the last token has not been added to the list then there
    # is a problem.
    if { [string length $token] } {
        error "incomplete token \"$token\""
    }

    return $result
}

# tcltest::test --
#
# This procedure runs a test and prints an error message if the test
# fails.  If verbose has been set, it also prints a message even if the
# test succeeds.  The test will be skipped if it doesn't match the
# match variable, if it matches an element in skip, or if one of the
# elements of "constraints" turns out not to be true.
#
# If testLevel is 1, then this is a top level test, and we record
# pass/fail information; otherwise, this information is not logged and
# is not added to running totals.
#
# Attributes:
#   Only description is a required attribute.  All others are optional.
#   Default values are indicated.
#
#   constraints -       A list of one or more keywords, each of which
#                       must be the name of an element in the array
#                       "testConstraints".  If any of these elements is
#                       zero, the test is skipped.  This attribute is
#                       optional; default is {}.
#   body -              Script to run to carry out the test.  It must
#                       return a result that can be checked for
#                       correctness.  This attribute is optional;
#                       default is {}.
#   result -            Expected result from script.  This attribute is
#                       optional; default is {}.
#   output -            Expected output sent to stdout.  This attribute
#                       is optional; default is {}.
#   errorOutput -       Expected output sent to stderr.  This attribute
#                       is optional; default is {}.
#   returnCodes -       Expected return codes.  This attribute is
#                       optional; default is {0 2}.
#   setup -             Code to run before $script (above).  This
#                       attribute is optional; default is {}.
#   cleanup -           Code to run after $script (above).  This
#                       attribute is optional; default is {}.
#   match -             Specifies the type of matching to do on result,
#                       output, and errorOutput; this must be a string
#                       previously registered by a call to [customMatch].
#                       The strings exact, glob, and regexp are pre-registered
#                       by the tcltest package.  Default value is exact.
#
# Arguments:
#   name -              Name of test, in the form foo-1.2.
#   description -       Short textual description of the test, to
#                       help humans understand what it does.
#
# Results:
#       None.
#
# Side effects:
#       Just about anything is possible depending on the test.
#
proc tcltest::test {name description args} {
    global tcl_platform
    variable testLevel
    variable coreModTime
    DebugPuts 3 "test $name $args"
    DebugDo 1 {
        variable TestNames
        catch {
            puts "test name '$name' re-used; prior use in $TestNames($name)"
        }
        set TestNames($name) [info script]
    }

    FillFilesExisted
    incr testLevel

    # Pre-define everything to null except output and errorOutput.  We
    # determine whether or not to trap output based on whether or not
    # these variables (output & errorOutput) are defined.
    foreach item {constraints setup cleanup body result returnCodes
            match} {
        set $item {}
    }

    # Set the default match mode
    set match exact

    # Set the default match values for return codes (0 is the standard
    # expected return value if everything went well; 2 represents
    # 'return' being used in the test script).
    set returnCodes [list 0 2]

    # The old test format can't have a 3rd argument (constraints or
    # script) that starts with '-'.
    if {[string match -* [lindex $args 0]]
            || ([llength $args] <= 1)} {
        if {[llength $args] == 1} {
            set list [SubstArguments [lindex $args 0]]
            foreach {element value} $list {
                set testAttributes($element) $value
            }
            foreach item {constraints match setup body cleanup \
                    result returnCodes output errorOutput} {
                if {[info exists testAttributes(-$item)]} {
                    set testAttributes(-$item) [uplevel 1 \
                            ::concat $testAttributes(-$item)]
                }
            }
        } else {
            array set testAttributes $args
        }

        set validFlags {-setup -cleanup -body -result -returnCodes \
                -match -output -errorOutput -constraints}

        foreach flag [array names testAttributes] {
            if {[lsearch -exact $validFlags $flag] == -1} {
                incr testLevel -1
                set sorted [lsort $validFlags]
                set options [join [lrange $sorted 0 end-1] ", "]
                append options ", or [lindex $sorted end]"
                return -code error "bad option \"$flag\": must be $options"
            }
        }

        # store whatever the user gave us
        foreach item [array names testAttributes] {
            set [string trimleft $item "-"] $testAttributes($item)
        }

        # Check the values supplied for -match
        variable CustomMatch
        if {[lsearch [array names CustomMatch] $match] == -1} {
            incr testLevel -1
            set sorted [lsort [array names CustomMatch]]
            set values [join [lrange $sorted 0 end-1] ", "]
            append values ", or [lindex $sorted end]"
            return -code error "bad -match value \"$match\":\
                    must be $values"
        }

        # Replace symbolic values supplied for -returnCodes
        foreach {strcode numcode} {ok 0 normal 0 error 1 return 2 break 3 continue 4} {
            set returnCodes [string map -nocase [list $strcode $numcode] $returnCodes]
        }
    } else {
        # This is parsing for the old test command format; it is here
        # for backward compatibility.
        set result [lindex $args end]
        if {[llength $args] == 2} {
            set body [lindex $args 0]
        } elseif {[llength $args] == 3} {
            set constraints [lindex $args 0]
            set body [lindex $args 1]
        } else {
            incr testLevel -1
            return -code error "wrong # args:\
                    should be \"test name desc ?options?\""
        }
    }

    if {[Skipped $name $constraints]} {
        incr testLevel -1
        return
    }

    # Save information about the core file.
    if {[preserveCore]} {
        if {[file exists [file join [workingDirectory] core]]} {
            set coreModTime [file mtime [file join [workingDirectory] core]]
        }
    }

    # First, run the setup script
    set code [catch {uplevel 1 $setup} setupMsg]
    set setupFailure [expr {$code != 0}]

    # Only run the test body if the setup was successful
    if {!$setupFailure} {

        # Verbose notification of $body start
        if {[IsVerbose start]} {
            puts [outputChannel] "---- $name start"
            flush [outputChannel]
        }

        set command [list [namespace origin RunTest] $name $body]
        if {[info exists output] || [info exists errorOutput]} {
            set testResult [uplevel 1 [list [namespace origin Eval] $command 0]]
        } else {
            set testResult [uplevel 1 [list [namespace origin Eval] $command 1]]
        }
        foreach {actualAnswer returnCode} $testResult break
    }

    # Always run the cleanup script
    set code [catch {uplevel 1 $cleanup} cleanupMsg]
    set cleanupFailure [expr {$code != 0}]

    set coreFailure 0
    set coreMsg ""
    # check for a core file first - if one was created by the test,
    # then the test failed
    if {[preserveCore]} {
        if {[file exists [file join [workingDirectory] core]]} {
            # There's only a test failure if there is a core file
            # and (1) there previously wasn't one or (2) the new
            # one is different from the old one.
            if {[info exists coreModTime]} {
                if {$coreModTime != [file mtime \
                        [file join [workingDirectory] core]]} {
                    set coreFailure 1
                }
            } else {
                set coreFailure 1
            }

            if {([preserveCore] > 1) && ($coreFailure)} {
                append coreMsg "\nMoving file to:\
                    [file join [temporaryDirectory] core-$name]"
                catch {file rename -force \
                        [file join [workingDirectory] core] \
                        [file join [temporaryDirectory] core-$name]
                } msg
                if {[string length $msg] > 0} {
                    append coreMsg "\nError:\
                            Problem renaming core file: $msg"
                }
            }
        }
    }

    # check if the return code matched the expected return code
    set codeFailure 0
    if {!$setupFailure && [lsearch -exact $returnCodes $returnCode] == -1} {
        set codeFailure 1
    }

    # If expected output/error strings exist, we have to compare
    # them.  If the comparison fails, then so did the test.
    set outputFailure 0
    variable outData
    if {[info exists output] && !$codeFailure} {
        if {[set outputCompare [catch {
            CompareStrings $outData $output $match
        } outputMatch]] == 0} {
            set outputFailure [expr {!$outputMatch}]
        } else {
            set outputFailure 1
        }
    }

    set errorFailure 0
    variable errData
    if {[info exists errorOutput] && !$codeFailure} {
        if {[set errorCompare [catch {
            CompareStrings $errData $errorOutput $match
        } errorMatch]] == 0} {
            set errorFailure [expr {!$errorMatch}]
        } else {
            set errorFailure 1
        }
    }

    # check if the answer matched the expected answer
    # Only check if we ran the body of the test (no setup failure)
    if {$setupFailure || $codeFailure} {
        set scriptFailure 0
    } elseif {[set scriptCompare [catch {
        CompareStrings $actualAnswer $result $match
    } scriptMatch]] == 0} {
        set scriptFailure [expr {!$scriptMatch}]
    } else {
        set scriptFailure 1
    }

    # if we didn't experience any failures, then we passed
    variable numTests
    if {!($setupFailure || $cleanupFailure || $coreFailure
            || $outputFailure || $errorFailure || $codeFailure
            || $scriptFailure)} {
        if {$testLevel == 1} {
            incr numTests(Passed)
            if {[IsVerbose pass]} {
                puts [outputChannel] "++++ $name PASSED"
            }
        }
        incr testLevel -1
        return
    }

    # We know the test failed, tally it...
    if {$testLevel == 1} {
        incr numTests(Failed)
    }

    # ... then report according to the type of failure
    variable currentFailure true
    if {![IsVerbose body]} {
        set body ""
    }
    puts [outputChannel] "\n==== $name\
            [string trim $description] FAILED"
    if {[string length $body]} {
        puts [outputChannel] "==== Contents of test case:"
        puts [outputChannel] $body
    }
    if {$setupFailure} {
        puts [outputChannel] "---- Test setup\
                failed:\n$setupMsg"
    }
    if {$scriptFailure} {
        if {$scriptCompare} {
            puts [outputChannel] "---- Error testing result: $scriptMatch"
        } else {
            puts [outputChannel] "---- Result was:\n$actualAnswer"
            puts [outputChannel] "---- Result should have been\
                    ($match matching):\n$result"
        }
    }
    if {$codeFailure} {
        switch -- $returnCode {
            0 { set msg "Test completed normally" }
            1 { set msg "Test generated error" }
            2 { set msg "Test generated return exception" }
            3 { set msg "Test generated break exception" }
            4 { set msg "Test generated continue exception" }
# --- listing truncated here (page 1 of 5); the procedure continues on the next page ---
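The comments in SubstArguments above describe a brace-aware tokenizer: whitespace separates tokens, but a brace-quoted group (possibly nested) is accumulated until its braces balance, so `{$a {$a}}` splits into `$a` and `{$a}`. As an illustration only, here is a loose Python re-implementation of that splitting idea; it is not the tcltest code, and it handles only braces, not double quotes or substitution:

```python
def split_into_words(arg_list: str) -> list[str]:
    """Split a string on whitespace, keeping {...} groups (possibly
    nested) together as single tokens.  Loosely mirrors the strategy
    of tcltest::SubstArguments / Paul Duffin's SplitIntoWords."""
    tokens: list[str] = []
    token = ""
    depth = 0  # current brace-nesting depth
    for ch in arg_list:
        if ch == "{":
            depth += 1
            token += ch
        elif ch == "}":
            depth -= 1
            if depth < 0:
                raise ValueError("unbalanced braces")
            token += ch
        elif ch.isspace() and depth == 0:
            # Whitespace at depth 0 ends the current token.
            if token:
                tokens.append(token)
            token = ""
        else:
            token += ch
    if depth != 0:
        # Same failure mode the Tcl code reports for a leftover token.
        raise ValueError(f"incomplete token {token!r}")
    if token:
        tokens.append(token)
    return tokens
```

For the documented example, `split_into_words("$a {$a}")` yields `["$a", "{$a}"]`, matching the behavior the comment block describes for `SubstArguments {$a {$a}}`.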
