Code search: Tokenize

Found about 228 source-code matches for "Tokenize"

Code results: 228
www.eeworm.com/read/209211/4985301

c tokenize.c

#include #include static char qsep[] = " \t\r\n"; static char* qtoken(char *s, char *sep) { int quoting; char *t; quoting = 0; t = s; /* s is output string, t is input string */
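The `qtoken` snippet above tracks a `quoting` flag while scanning for whitespace separators, so double-quoted spans survive as single tokens. A minimal Python sketch of that technique (the double-quote character and the separator set are assumptions read off the visible `qsep[] = " \t\r\n"` string; the original C body is truncated):

```python
QSEP = " \t\r\n"  # separator set, mirroring the qsep[] array in the snippet

def qtokenize(s: str) -> list[str]:
    """Split s on whitespace, keeping double-quoted spans as one token."""
    tokens = []
    i, n = 0, len(s)
    while i < n:
        # skip leading separators
        while i < n and s[i] in QSEP:
            i += 1
        if i == n:
            break
        quoting = False
        buf = []
        # consume characters until an unquoted separator
        while i < n and (quoting or s[i] not in QSEP):
            if s[i] == '"':
                quoting = not quoting  # toggle quote state; quotes are dropped
            else:
                buf.append(s[i])
            i += 1
        tokens.append("".join(buf))
    return tokens

print(qtokenize('set name "John Doe" 42'))  # ['set', 'name', 'John Doe', '42']
```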
www.eeworm.com/read/205824/5017728

h tokenize.h

// tokenize.h #ifndef TOKENIZE_H #define TOKENIZE_H void tokenize ( const std::string& text, std::vector& tokens ); #endif//TOKENIZE_H
www.eeworm.com/read/205824/5017738

cpp tokenize.cpp

// tokenize.cpp #ifdef _MSC_VER #pragma warning ( disable : 4786 ) #endif//_MSC_VER #include #include #include #include "assert.h" #include "tokenize.h" #inc …
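The matching header (tokenize.h above) declares `tokenize(const std::string& text, std::vector& tokens)`; the vector's element type and the function body are cut off in the excerpts, so the splitting rule is an assumption. A hedged Python sketch of what such a helper typically does, i.e. plain whitespace splitting:

```python
def tokenize(text: str) -> list[str]:
    """Whitespace tokenizer matching the shape of the declared C++ helper.
    Assumes simple whitespace splitting; the real implementation is truncated
    in the search preview and may differ."""
    return text.split()

print(tokenize("hello   world\tfoo"))  # ['hello', 'world', 'foo']
```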
www.eeworm.com/read/205824/5021449

c tokenize.c

/* ** 2001 September 15 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find f …
www.eeworm.com/read/193974/5138530

test_tokenize

test_tokenize 1,0-1,35: COMMENT "# Tests for the 'tokenize' module.\n" 2,0-2,43: COMMENT '# Large bits stolen from test_grammar.py. \n' 3,0-3,1: NL '\n' 4,0-4,11: COMMENT '# Comments\n' 5,0-5,3: STRIN …
www.eeworm.com/read/193974/5138578

py tokenize.py

"""Tokenization help for Python programs. generate_tokens(readline) is a generator that breaks a stream of text into Python tokens. It accepts a readline-like method which is called repeatedly to ge
www.eeworm.com/read/167562/5458083

c tokenize.c

/* ** 2001 September 15 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find forgiven …
www.eeworm.com/read/302096/3826920

hh tokenize.hh

// -*- c-basic-offset: 4; tab-width: 8; indent-tabs-mode: t -*- // Copyright (c) 2001-2003 International Computer Science Institute // // Permission is hereby granted, free of charge, to any person o …
www.eeworm.com/read/273525/4209690

hlp tokenize.hlp

{smcl} {* 10feb2005}{...} {cmd:help tokenize} {hline} {title:Title} {p2colset 5 21 23 2}{...} {p2col :{hi:[P] tokenize} {hline 2}}Divide strings into tokens{p_end} {p2colreset}{...} {t …
www.eeworm.com/read/438718/1823013

test_tokenize

test_tokenize 1,0-1,35: COMMENT "# Tests for the 'tokenize' module.\012" 2,0-2,43: COMMENT '# Large bits stolen from test_grammar.py. \012' 3,0-3,1: NL '\012' 4,0-4,11: COMMENT '# Comments\012' 5,0-5, …