lexer_tutorials.qbk

[/==============================================================================
    Copyright (C) 2001-2008 Joel de Guzman
    Copyright (C) 2001-2008 Hartmut Kaiser

    Distributed under the Boost Software License, Version 1.0. (See accompanying
    file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
===============================================================================/]

[section:lexer_tutorials __lex__ Tutorials Overview]

The __lex__ library implements several components on top of possibly different
lexer generator libraries. It exposes a pair of iterators which, when
dereferenced, return a stream of tokens generated from the underlying
character stream. The generated tokens are based on the token definitions
supplied by the user.

Currently, __lex__ is built on top of Ben Hanson's excellent __lexertl__
library (which is a proposed Boost library). __lexertl__ provides the
functionality needed to build state machines from a set of supplied regular
expressions. But __lex__ is not restricted to being used with __lexertl__. We
expect it to be usable in conjunction with any other lexical scanner generator
library; all that needs to be implemented is a set of wrapper objects exposing
a well defined interface as described in this documentation.

[note   For the sake of clarity all examples in this documentation assume
        __lex__ to be used on top of __lexertl__.]

Building a lexer using __lex__ is highly configurable, where most of this
configuration has to be done at compile time. Almost all of the configurable
parameters have generally useful default values, though, which means that
starting a project is easy and straightforward.
Here is a (non-exhaustive) list of features you can tweak to adjust the
generated lexer instance to your actual needs:

* Select and customize the token type to be generated by the lexer instance.
* Select and customize the token value types the generated token instances
  will be able to hold.
* Select the iterator type of the underlying input stream, which will be used
  as the source for the character stream to tokenize.
* Customize the iterator type returned by the lexer to enable debug support,
  special handling of certain input sequences, etc.
* Select the /dynamic/ or the /static/ runtime model for the lexical
  analyzer.

Special care has been taken during the development of the library to ensure
that optimal code is generated regardless of the configuration options
selected.

The series of tutorial examples in this section will guide you through some
common use cases, helping you to understand the big picture. The first two
quick start examples (__sec_lex_quickstart_1__ and __sec_lex_quickstart_2__)
introduce the __lex__ library while building two standalone applications not
connected to or depending on any other part of __spirit__. The section
__sec_lex_quickstart_3__ demonstrates how to use a lexer in conjunction with
a parser (where certainly the parser is built using __qi__).

[endsect]
