📄 at-tw.t
BEGIN {
    if (! -d 'blib' and -d 't') { chdir 't' }
    unshift @INC, '../lib';
    require Config; import Config;
    if ($Config{'extensions'} !~ /\bEncode\b/) {
        print "1..0 # Skip: Encode was not built\n";
        exit 0;
    }
    unless (find PerlIO::Layer 'perlio') {
        print "1..0 # Skip: PerlIO was not built\n";
        exit 0;
    }
    if (ord("A") == 193) {
        print "1..0 # Skip: EBCDIC\n";
        exit 0;
    }
    $| = 1;
}

use strict;
use Test::More tests => 17;
use Encode;
no utf8; # we have raw Chinese encodings here

use_ok('Encode::TW');

# Since JP.t already tests basic file I/O, we will just focus on
# internal encode/decode tests here. Unfortunately, testing against
# all of the UniHan characters would take a huge amount of disk
# space, not to mention time, and Perl does not bundle UniHan.txt
# anyway.

# So, here we just test a typical snippet spanning multiple Unicode
# blocks, and hope it can point out obvious errors.

run_tests('Basic Big5 range', {
    # A bare dotted number literal is a Perl v-string: each dotted
    # number becomes the character chr(n), so this builds the
    # Unicode test string directly from code points.
    'utf'  => (24093.39640.38525.20043.33495.35028.20846.65292.26389.30343.32771.26352.20271.24248.65108.25885.25552.35998.20110.23391.38508.20846.65292.24799.24218.23493.21566.20197.38477.65108),
    'big5' => (join('','
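# --- The listing is truncated above, before the Big5 byte data and
# the definition of run_tests(). What follows is only a rough sketch
# of what a helper in that spirit might look like; it is an
# assumption, not code recovered from the truncated file. It uses
# only documented Encode and Test::More calls: encode() converts a
# Unicode string to Big5 bytes, decode() converts Big5 bytes back.

sub run_tests_sketch {
    my ($name, $tests) = @_;
    my ($utf, $big5) = @{$tests}{qw(utf big5)};

    # Check both directions against the expected values.
    is(encode('big5', $utf),  $big5, "$name: encode to big5");
    is(decode('big5', $big5), $utf,  "$name: decode from big5");

    # A full round trip should reproduce the original string.
    is(decode('big5', encode('big5', $utf)), $utf, "$name: round trip");
}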