Lexical analysis is performed by obtaining a tokenizer of the appropriate class and calling @tokenize@ on it, passing the text to be tokenized. Each token is yielded to the associated block as it is discovered.

{{{lang=ruby,number=true,caption=Tokenizing a Ruby script
require 'syntax'

tokenizer = Syntax.load "ruby"
tokenizer.tokenize( File.read( "program.rb" ) ) do |token|
  puts token
  puts "  group: #{token.group}"
  puts "  instruction: #{token.instruction}"
end
}}}

If you need finer control over the process, you can use the lower-level API:

{{{lang=ruby,number=true,caption=Tokenizing a Ruby script via step
require 'syntax'

tokenizer = Syntax.load "ruby"
tokenizer.start( File.read( "program.rb" ) ) do |token|
  puts token
  puts "  group: #{token.group}"
  puts "  instruction: #{token.instruction}"
end
tokenizer.step
tokenizer.step
...
tokenizer.finish
}}}

In this case, each time @#step@ is invoked, tokens are consumed and yielded to the block. However, a single step may result in multiple tokens being detected and yielded--there is no way to guarantee a single token at a time, unless the corresponding syntax module was written to work that way. For efficiency, the existing modules will yield multiple tokens when processing (for instance) strings, regular expressions, and heredocs.
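To make the "one step, many tokens" behavior concrete, here is a minimal sketch of the same start/step/finish pattern using a toy tokenizer. Note that @ToyTokenizer@ is a hypothetical stand-in invented for illustration, not part of the Syntax gem; it simply splits words and peels trailing punctuation into a second token, so a single @step@ can yield more than one token.

```ruby
# Hypothetical stand-in (NOT the real Syntax gem) that mimics the
# start/step/finish protocol described above.
class ToyTokenizer
  # Register the text to tokenize and the block to yield tokens to.
  def start(text, &block)
    @words = text.split
    @block = block
  end

  # Consume one word. If it ends in punctuation, yield the word and the
  # punctuation separately -- two tokens from a single step, illustrating
  # that callers cannot assume one token per step.
  def step
    return false if @words.empty?
    word = @words.shift
    if word =~ /\A(\w+)(\W+)\z/
      @block.call($1)
      @block.call($2)
    else
      @block.call(word)
    end
    true
  end

  # Drain any remaining input.
  def finish
    nil while step
  end
end

tokens = []
t = ToyTokenizer.new
t.start("hello, world again") { |tok| tokens << tok }
t.step    # yields "hello" and "," -- two tokens for one step
t.finish  # consumes the rest: "world", "again"
tokens    # => ["hello", ",", "world", "again"]
```

The same caller-side shape (register a block with @start@, then drive the tokenizer with @step@ and @finish@) applies to the real tokenizers, with the caveat above about how many tokens each step produces.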
