% tokenizer.pro
% From: "A classic parser based on Context-Free Grammar"
% Author: Ying Tao, Department of Biomedical Informatics, Columbia University
% Date: 12/7/2005
:- module(tokenizer, [tokenize/2]).
%% tokenize(+Str, -Tokens) is det.
%  Split string Str into a list of atoms at single-space boundaries.
%  Example: ?- tokenize("hello world", T).  % T = [hello, world]
%  Note: consecutive or trailing spaces yield empty-atom ('') tokens.
tokenize(Str, Tokens) :-
    scan_str(Str, 0, '', [], Tokens), !.

% Base case: the cursor has reached the end of the string;
% flush the current word accumulator as the final token.
scan_str(Str, Pos, Word, Acc, Tokens) :-
    string_length(Str, Pos),
    string_to_atom(Word, Last),
    append(Acc, [Last], Tokens), !.

% Recursive case: examine the one-character substring at Pos.
% (The original compared string_to_list/2 output against " ", which
% fails under SWI-Prolog 7+ where " " is a string, not a code list;
% comparing the substring directly with == is portable.)
scan_str(Str, Pos, Word, Acc, Tokens) :-
    sub_string(Str, Pos, 1, _, Ch),
    Pos1 is Pos + 1,
    (   Ch == " "                        % a space ends the current word
    ->  string_to_atom(Word, Atom),
        append(Acc, [Atom], Acc1),
        scan_str(Str, Pos1, '', Acc1, Tokens)
    ;   string_concat(Word, Ch, Word1),  % otherwise extend the word
        scan_str(Str, Pos1, Word1, Acc, Tokens)
    ).