Database Engines {@name=dbengine}
============================

The **Engine** is the starting point for any SQLAlchemy application.  It's "home base" for the actual database and its DBAPI, delivered to the SQLAlchemy application through a connection pool and a **Dialect**, which describes how to talk to a specific kind of database and DBAPI combination.

The general structure is this:

    {diagram}
                                             +-----------+                        __________
                                         /---|   Pool    |---\                   (__________)
                     +-------------+    /    +-----------+    \     +--------+   |          |
        connect() <--|   Engine    |---x                       x----| DBAPI  |---| database |
                     +-------------+    \    +-----------+    /     +--------+   |          |
                                         \---|  Dialect  |---/                   |__________|
                                             +-----------+                       (__________)

Where above, a [sqlalchemy.engine.Engine](rel:docstrings_sqlalchemy.engine_Engine) references both a [sqlalchemy.engine.Dialect](rel:docstrings_sqlalchemy.engine_Dialect) and [sqlalchemy.pool.Pool](rel:docstrings_sqlalchemy.pool_Pool), which together interpret the DBAPI's module functions as well as the behavior of the database.

Creating an engine is just a matter of issuing a single call, `create_engine()`:

    {python}
    engine = create_engine('postgres://scott:tiger@localhost:5432/mydatabase')

The above engine invokes the `postgres` dialect and a connection pool which references `localhost:5432`.

The engine can be used directly to issue SQL to the database.  The most generic way is to use connections, which you get via the `connect()` method:

    {python}
    connection = engine.connect()
    result = connection.execute("select username from users")
    for row in result:
        print "username:", row['username']
    connection.close()

The connection is an instance of [sqlalchemy.engine.Connection](rel:docstrings_sqlalchemy.engine_Connection), which is a **proxy** object for an actual DBAPI connection.  The returned result is an instance of [sqlalchemy.engine.ResultProxy](rel:docstrings_sqlalchemy.engine_ResultProxy), which acts very much like a DBAPI cursor.

When you say `engine.connect()`, a new `Connection` object is created, and a DBAPI connection is retrieved from the connection pool.  Later, when you call `connection.close()`, the DBAPI connection is returned to the pool; nothing is actually "closed" from the perspective of the database.

To execute some SQL more quickly, you can skip the `Connection` part and just say:

    {python}
    result = engine.execute("select username from users")
    for row in result:
        print "username:", row['username']
    result.close()

Where above, the `execute()` method on the `Engine` does the `connect()` part for you, and returns the `ResultProxy` directly.  The actual `Connection` is *inside* the `ResultProxy`, waiting for you to finish reading the result.  In this case, when you `close()` the `ResultProxy`, the underlying `Connection` is closed, which returns the DBAPI connection to the pool.

To summarize the above two examples: when you use a `Connection` object, it's known as **explicit execution**.  When you don't see the `Connection` object but you still use the `execute()` method on the `Engine`, it's called **explicit, connectionless execution**.
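If the code that reads the result might raise an exception, it's worth making sure `close()` runs no matter what, so the DBAPI connection always goes back to the pool.  The following is a minimal sketch (not one of the original examples), reusing the same `users` query from above with a `try`/`finally` block:

    {python}
    connection = engine.connect()
    try:
        result = connection.execute("select username from users")
        for row in result:
            print "username:", row['username']
    finally:
        # returns the DBAPI connection to the pool, even if an error occurred
        connection.close()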
A third variant of execution also exists, called **implicit execution**; this will be described later.

The `Engine` and `Connection` can do a lot more than what we illustrated above; SQL strings are only their most rudimentary function.  Later chapters will describe how "constructed SQL" expressions can be used with engines; in many cases, you don't have to deal with the `Engine` at all after it's created.  The Object Relational Mapper (ORM), an optional feature of SQLAlchemy, also uses the `Engine` in order to get at connections; that's also a case where you can often create the engine once, and then forget about it.

### Supported Databases {@name=supported}

Recall that the `Dialect` is used to describe how to talk to a specific kind of database.  Dialects are included with SQLAlchemy for SQLite, Postgres, MySQL, MS-SQL, Firebird, Informix, and Oracle; these can each be seen as a Python module present in the `sqlalchemy.databases` package.  Each dialect requires the appropriate DBAPI drivers to be installed separately.

Downloads for each DBAPI at the time of this writing are as follows:

* Postgres:  [psycopg2](http://www.initd.org/tracker/psycopg/wiki/PsycopgTwo)
* SQLite:  [pysqlite](http://initd.org/tracker/pysqlite)
* MySQL:   [MySQLDB](http://sourceforge.net/projects/mysql-python)
* Oracle:  [cx_Oracle](http://www.cxtools.net/default.aspx?nav=home)
* MS-SQL:  [pyodbc](http://pyodbc.sourceforge.net/) (recommended), [adodbapi](http://adodbapi.sourceforge.net/), or [pymssql](http://pymssql.sourceforge.net/)
* Firebird:  [kinterbasdb](http://kinterbasdb.sourceforge.net/)
* Informix:  [informixdb](http://informixdb.sourceforge.net/)

The SQLAlchemy Wiki contains a page of database notes, describing whatever quirks and behaviors have been observed.  It's a good place to check for issues with specific databases.  [Database Notes](http://www.sqlalchemy.org/trac/wiki/DatabaseNotes)

### create_engine() URL Arguments {@name=establishing}

SQLAlchemy indicates the source of an Engine strictly via [RFC-1738](http://rfc.net/rfc1738.html) style URLs, combined with optional keyword arguments to specify options for the Engine.  The form of the URL is:

    driver://username:password@host:port/database

Available driver names are `sqlite`, `mysql`, `postgres`, `oracle`, `mssql`, and `firebird`.  For sqlite, the database name is the filename to connect to, or the special name ":memory:" which indicates an in-memory database.  The URL is typically sent as a string to the `create_engine()` function:

    {python}
    # postgres
    pg_db = create_engine('postgres://scott:tiger@localhost:5432/mydatabase')

    # sqlite (note the four slashes for an absolute path)
    sqlite_db = create_engine('sqlite:////absolute/path/to/database.txt')
    sqlite_db = create_engine('sqlite:///relative/path/to/database.txt')
    sqlite_db = create_engine('sqlite://')  # in-memory database
    sqlite_db = create_engine('sqlite:///:memory:')  # the same

    # mysql
    mysql_db = create_engine('mysql://localhost/foo')

    # oracle via TNS name
    oracle_db = create_engine('oracle://scott:tiger@dsn')

    # oracle will feed host/port/SID into cx_oracle.makedsn
    oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')

The `Engine` will ask the connection pool for a connection when the `connect()` or `execute()` methods are called.  The default connection pool, `QueuePool`, as well as the default connection pool used with SQLite, `SingletonThreadPool`, will open connections to the database on an as-needed basis.
As concurrent statements are executed, `QueuePool` will grow its pool of connections to a default size of five, and will allow a default "overflow" of ten.  Since the `Engine` is essentially "home base" for the connection pool, it follows that you should keep a single `Engine` per database established within an application, rather than creating a new one for each connection.

#### Custom DBAPI connect() arguments

Custom arguments used when issuing the `connect()` call to the underlying DBAPI may be issued in three distinct ways.  String-based arguments can be passed directly from the URL string as query arguments:

    {python}
    db = create_engine('postgres://scott:tiger@localhost/test?argument1=foo&argument2=bar')

If SQLAlchemy's database connector is aware of a particular query argument, it may convert its type from string to its proper type.

`create_engine` also takes an argument `connect_args`, an additional dictionary that will be passed to `connect()`.  This can be used when arguments of a type other than string are required, and SQLAlchemy's database connector has no type conversion logic present for that parameter:

    {python}
    db = create_engine('postgres://scott:tiger@localhost/test', connect_args={'argument1': 17, 'argument2': 'bar'})

The most customizable connection method of all is to pass a `creator` argument, which specifies a callable that returns a DBAPI connection:

    {python}
    def connect():
        return psycopg.connect(user='scott', host='localhost')

    db = create_engine('postgres://', creator=connect)

### Database Engine Options {@name=options}

Keyword options can also be specified to `create_engine()`, following the string URL as follows:

    {python}
    db = create_engine('postgres://...', encoding='latin1', echo=True)

A list of all standard options, as well as several that are used by particular database dialects, is as follows (a combined usage example appears after the list):

* **assert_unicode=False** - When set to `True` alongside `convert_unicode=True`, asserts that incoming string bind parameters are instances of `unicode`, otherwise raises an error.  Only takes effect when `convert_unicode==True`.  This flag is also available on the `String` type and its descendants.  New in 0.4.2.
* **connect_args** - a dictionary of options which will be passed directly to the DBAPI's `connect()` method as additional keyword arguments.
* **convert_unicode=False** - if set to True, all String/character based types will convert Unicode values to raw byte values going into the database, and all raw byte values to Python Unicode coming out in result sets.  This is an engine-wide method to provide unicode conversion across the board.  For unicode conversion on a column-by-column level, use the `Unicode` column type instead, described in [types](rel:types).
* **creator** - a callable which returns a DBAPI connection.  This creation function will be passed to the underlying connection pool and will be used to create all new database connections.  Usage of this function causes connection parameters specified in the URL argument to be bypassed.
* **echo=False** - if True, the Engine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout.  The `echo` attribute of `Engine` can be modified at any time to turn logging on and off.  If set to the string `"debug"`, result rows will be printed to the standard output as well.  This flag ultimately controls a Python logger; see [dbengine_logging](rel:dbengine_logging) at the end of this chapter for information on how to configure logging directly.
* **echo_pool=False** - if True, the connection pool will log all checkouts/checkins to the logging stream, which defaults to sys.stdout.  This flag ultimately controls a Python logger; see [dbengine_logging](rel:dbengine_logging) for information on how to configure logging directly.
* **encoding='utf-8'** - the encoding to use for all Unicode translations, both by engine-wide unicode conversion as well as the `Unicode` type object.
* **module=None** - used by database implementations which support multiple DBAPI modules, this is a reference to a DBAPI2 module to be used instead of the engine's default module.  For Postgres, the default is psycopg2.  For Oracle, it's cx_Oracle.
* **pool=None** - an already-constructed instance of `sqlalchemy.pool.Pool`, such as a `QueuePool` instance.  If non-None, this pool will be used directly as the underlying connection pool for the engine, bypassing whatever connection parameters are present in the URL argument.  For information on constructing connection pools manually, see [pooling](rel:pooling).
* **poolclass=None** - a `sqlalchemy.pool.Pool` subclass, which will be used to create a connection pool instance using the connection parameters given in the URL.  Note this differs from `pool` in that you don't actually instantiate the pool in this case; you just indicate what type of pool to use.
* **max_overflow=10** - the number of connections to allow in connection pool "overflow", that is, connections that can be opened above and beyond the pool_size setting, which defaults to five.  This is only used with `QueuePool`.
* **pool_size=5** - the number of connections to keep open inside the connection pool.  This is used with `QueuePool` as well as `SingletonThreadPool`.
* **pool_recycle=-1** - this setting causes the pool to recycle connections after the given number of seconds has passed.  It defaults to -1, or no timeout.  For example, setting it to 3600 means connections will be recycled after one hour.  Note that MySQL in particular will **disconnect automatically** if no activity is detected on a connection for eight hours (although this is configurable with the MySQLDB connection itself and the server configuration as well).
* **pool_timeout=30** - number of seconds to wait before giving up on getting a connection from the pool.  This is only used with `QueuePool`.
* **strategy='plain'** - the strategy argument is used to select alternate implementations of the underlying Engine object, which coordinates operations between dialects, compilers, connections, and so on.  Currently, the only alternate strategy besides the default value of "plain" is the "threadlocal" strategy, which selects the usage of the `TLEngine` class that provides a modified connection scope for connectionless executions.  Connectionless execution as well as further detail on this setting are described in [dbengine_implicit](rel:dbengine_implicit).
* **threaded=True** - used by cx_Oracle; sets the `threaded` parameter of the connection indicating thread-safe usage.  cx_Oracle docs indicate setting this flag to `False` will speed performance by 10-15%.  While this defaults to `False` in cx_Oracle, SQLAlchemy defaults it to `True`, preferring stability over early optimization.
* **use_ansi=True** - used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of Oracle versions 8 and previous: the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of `column1(+)=column2` must be used in order to achieve a LEFT OUTER JOIN.
* **use_oids=False** - used only by Postgres; will enable the column name "oid" as the object ID column, which is also used for the default sort order of tables.  Postgres as of 8.1 has object IDs disabled by default.
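To give a feel for how several of these options combine in practice, here is a sketch of a single `create_engine()` call; the URL, credentials, and the specific values shown are placeholders for illustration rather than recommendations:

    {python}
    db = create_engine(
        'postgres://scott:tiger@localhost:5432/mydatabase',
        pool_size=10,        # keep up to ten connections open in the pool
        max_overflow=20,     # allow up to twenty "overflow" connections during bursts
        pool_recycle=3600,   # recycle pooled connections after one hour
        pool_timeout=30,     # wait up to thirty seconds for a pooled connection
        echo=True,           # log all statements and their parameter lists
        encoding='utf-8')    # encoding used for Unicode translations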
