SQL Expression Language Tutorial {@name=sql}
============================================

This tutorial will cover SQLAlchemy SQL Expressions, which are Python constructs that represent SQL statements.  The tutorial is in doctest format, meaning each `>>>` line represents something you can type at a Python command prompt, and the following text represents the expected return value.  The tutorial has no prerequisites.

## Version Check

A quick check to verify that we are on at least **version 0.4** of SQLAlchemy:

    {python}
    >>> import sqlalchemy
    >>> sqlalchemy.__version__ # doctest:+SKIP
    0.4.0

## Connecting

For this tutorial we will use an in-memory-only SQLite database.  This is an easy way to test things without needing to have an actual database defined anywhere.  To connect we use `create_engine()`:

    {python}
    >>> from sqlalchemy import create_engine
    >>> engine = create_engine('sqlite:///:memory:', echo=True)

The `echo` flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard `logging` module.  With it enabled, we'll see all the generated SQL produced.  If you are working through this tutorial and want less output generated, set it to `False`.  This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.

## Define and Create Tables {@name=tables}

The SQL Expression Language constructs its expressions in most cases against table columns.  In SQLAlchemy, a column is most often represented by an object called `Column`, and in all cases a `Column` is associated with a `Table`.  A collection of `Table` objects and their associated child objects is referred to as **database metadata**.
In this tutorial we will explicitly lay out several `Table` objects, but note that SA can also "import" whole sets of `Table` objects automatically from an existing database (this process is called **table reflection**).

We define our tables all within a catalog called `MetaData`, using the `Table` construct, which resembles regular SQL CREATE TABLE statements.  We'll make two tables, one of which represents "users" in an application, and another which represents zero or more "email addresses" for each row in the "users" table:

    {python}
    >>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
    >>> metadata = MetaData()
    >>> users = Table('users', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('name', String(40)),
    ...     Column('fullname', String(100)),
    ... )

    >>> addresses = Table('addresses', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('user_id', None, ForeignKey('users.id')),
    ...     Column('email_address', String(50), nullable=False)
    ... )

All about how to define `Table` objects, as well as how to create them from an existing database automatically, is described in [metadata](rel:metadata).

Next, to tell the `MetaData` we'd actually like to create our selection of tables for real inside the SQLite database, we use `create_all()`, passing it the `engine` instance which points to our database.
This will check for the presence of each table first before creating, so it's safe to call multiple times:

    {python}
    {sql}>>> metadata.create_all(engine) #doctest: +NORMALIZE_WHITESPACE
    PRAGMA table_info("users")
    {}
    PRAGMA table_info("addresses")
    {}
    CREATE TABLE users (
        id INTEGER NOT NULL,
        name VARCHAR(40),
        fullname VARCHAR(100),
        PRIMARY KEY (id)
    )
    {}
    COMMIT
    CREATE TABLE addresses (
        id INTEGER NOT NULL,
        user_id INTEGER,
        email_address VARCHAR(50) NOT NULL,
        PRIMARY KEY (id),
        FOREIGN KEY(user_id) REFERENCES users (id)
    )
    {}
    COMMIT

## Insert Expressions

The first SQL expression we'll create is the `Insert` construct, which represents an INSERT statement.  This is typically created relative to its target table:

    {python}
    >>> ins = users.insert()

To see a sample of the SQL this construct produces, use the `str()` function:

    {python}
    >>> str(ins)
    'INSERT INTO users (id, name, fullname) VALUES (:id, :name, :fullname)'

Notice above that the INSERT statement names every column in the `users` table.  This can be limited by using the `values` keyword, which establishes the VALUES clause of the INSERT explicitly:

    {python}
    >>> ins = users.insert(values={'name':'jack', 'fullname':'Jack Jones'})
    >>> str(ins)
    'INSERT INTO users (name, fullname) VALUES (:name, :fullname)'

Above, while the `values` keyword limited the VALUES clause to just two columns, the actual data we placed in `values` didn't get rendered into the string; instead we got named bind parameters.  As it turns out, our data *is* stored within our `Insert` construct, but it typically only comes out when the statement is actually executed; since the data consists of literal values, SQLAlchemy automatically generates bind parameters for them.
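This separation of statement text and literal values is a DBAPI-level idea, not something unique to SQLAlchemy.  As a side note, here is a minimal sketch of the same separation using Python's stdlib `sqlite3` driver directly: the statement carries only the parameter *names*, while the values travel separately and are bound by the driver at execution time.

    {python}
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, "
                 "name VARCHAR(40), fullname VARCHAR(100))")

    # The SQL text holds named placeholders only; no literal values.
    statement = "INSERT INTO users (name, fullname) VALUES (:name, :fullname)"
    params = {'name': 'jack', 'fullname': 'Jack Jones'}

    # The driver binds the values to the placeholders at execution time.
    conn.execute(statement, params)

    stored = conn.execute("SELECT name, fullname FROM users").fetchone()

Besides keeping data out of the SQL string, bind parameters also protect against SQL injection, which is why SQLAlchemy generates them automatically for literal values.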
We can peek at this data for now by looking at the compiled form of the statement:

    {python}
    >>> ins.compile().params #doctest: +NORMALIZE_WHITESPACE
    {'fullname': 'Jack Jones', 'name': 'jack'}

## Executing {@name=executing}

The interesting part of an `Insert` is executing it.  In this tutorial, we will generally focus on the most explicit method of executing a SQL construct, and later touch upon some "shortcut" ways to do it.  The `engine` object we created is a repository for database connections capable of issuing SQL to the database.  To acquire a connection, we use the `connect()` method:

    {python}
    >>> conn = engine.connect()
    >>> conn #doctest: +ELLIPSIS
    <sqlalchemy.engine.base.Connection object at 0x...>

The `Connection` object represents an actively checked out DBAPI connection resource.  Let's feed it our `Insert` object and see what happens:

    {python}
    >>> result = conn.execute(ins)
    {opensql}INSERT INTO users (name, fullname) VALUES (?, ?)
    ['jack', 'Jack Jones']
    COMMIT

So the INSERT statement was now issued to the database, although we got positional "qmark" bind parameters instead of "named" bind parameters in the output.  How come?  Because when executed, the `Connection` used the SQLite **dialect** to help generate the statement; when we use the `str()` function, the statement isn't aware of this dialect, and falls back onto a default which uses named parameters.  We can view this manually as follows:

    {python}
    >>> from sqlalchemy.databases.sqlite import SQLiteDialect
    >>> compiled = ins.compile(dialect=SQLiteDialect())
    >>> str(compiled)
    'INSERT INTO users (name, fullname) VALUES (?, ?)'

What about the `result` variable we got when we called `execute()`?  As the SQLAlchemy `Connection` object references a DBAPI connection, the result, known as a `ResultProxy` object, is analogous to the DBAPI cursor object.
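The "named" versus "qmark" distinction comes from the DBAPI's notion of a *paramstyle*.  The stdlib `sqlite3` driver happens to accept both styles, so the two compiled forms can be compared side by side; a sketch independent of SQLAlchemy:

    {python}
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, "
                 "name VARCHAR(40), fullname VARCHAR(100))")

    # "named" parameters, like the default (dialect-less) str() form:
    conn.execute("INSERT INTO users (name, fullname) VALUES (:name, :fullname)",
                 {'name': 'jack', 'fullname': 'Jack Jones'})

    # "qmark" parameters, like the form compiled by the SQLite dialect:
    conn.execute("INSERT INTO users (name, fullname) VALUES (?, ?)",
                 ('wendy', 'Wendy Williams'))

    names = [r[0] for r in conn.execute("SELECT name FROM users ORDER BY id")]

Both statements insert a row; only the placeholder syntax differs, which is exactly the difference a dialect papers over.  The cursor object that `execute()` returns here is the DBAPI-level analogue of the `ResultProxy`.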
In the case of an INSERT, we can get important information from the `ResultProxy`, such as the primary key values which were generated from our statement:

    {python}
    >>> result.last_inserted_ids()
    [1]

The value of `1` was automatically generated by SQLite, but only because we did not specify the `id` column in our `Insert` statement; otherwise, our explicit value would have been used.  In either case, SQLAlchemy always knows how to get at a newly generated primary key value, even though the method of generating them is different across different databases; each database's `Dialect` knows the specific steps needed to determine the correct value (or values; note that `last_inserted_ids()` returns a list so that it supports composite primary keys).

## Executing Multiple Statements {@name=execmany}

Our insert example above was intentionally a little drawn out to show some various behaviors of expression language constructs.  In the usual case, an `Insert` statement is compiled against the parameters sent to the `execute()` method on `Connection`, so that there's no need to use the `values` keyword with `Insert`.  Let's create a generic `Insert` statement again and use it in the "normal" way:

    {python}
    >>> ins = users.insert()
    >>> conn.execute(ins, id=2, name='wendy', fullname='Wendy Williams') # doctest: +ELLIPSIS
    {opensql}INSERT INTO users (id, name, fullname) VALUES (?, ?, ?)
    [2, 'wendy', 'Wendy Williams']
    COMMIT
    {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>

Above, because we specified all three columns in the `execute()` method, the compiled `Insert` included all three columns.
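The idea that the statement's column list is derived from whichever parameters are actually present can be sketched at the DBAPI level.  The helper below is purely hypothetical (it is not SQLAlchemy API, and unlike the `Insert` construct it does no quoting or type handling); it only illustrates compiling a column list from a parameter dictionary:

    {python}
    import sqlite3

    def insert_from_params(conn, table, params):
        # Derive the column list and VALUES clause from the keys given,
        # much as the Insert construct does at execution time.
        cols = list(params)
        sql = "INSERT INTO %s (%s) VALUES (%s)" % (
            table, ", ".join(cols), ", ".join(":" + c for c in cols))
        return conn.execute(sql, params)

    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, "
                 "name VARCHAR(40), fullname VARCHAR(100))")

    # All three columns supplied -> all three appear in the statement.
    insert_from_params(conn, 'users', {'id': 2, 'name': 'wendy',
                                       'fullname': 'Wendy Williams'})

    # Fewer keys -> a shorter column list and VALUES clause.
    insert_from_params(conn, 'users', {'name': 'jack'})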
The `Insert` statement is compiled at execution time based on the parameters we specified; if we specified fewer parameters, the `Insert` would have fewer entries in its VALUES clause.

To issue many inserts using the DBAPI's `executemany()` method, we can send in a list of dictionaries, each containing a distinct set of parameters to be inserted, as we do here to add some email addresses:

    {python}
    >>> conn.execute(addresses.insert(), [ # doctest: +ELLIPSIS
    ...    {'user_id': 1, 'email_address' : 'jack@yahoo.com'},
    ...    {'user_id': 1, 'email_address' : 'jack@msn.com'},
    ...    {'user_id': 2, 'email_address' : 'www@www.org'},
    ...    {'user_id': 2, 'email_address' : 'wendy@aol.com'},
    ... ])
    {opensql}INSERT INTO addresses (user_id, email_address) VALUES (?, ?)
    [[1, 'jack@yahoo.com'], [1, 'jack@msn.com'], [2, 'www@www.org'], [2, 'wendy@aol.com']]
    COMMIT
    {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>

Above, we again relied upon SQLite's automatic generation of primary key identifiers for each `addresses` row.

When executing multiple sets of parameters, each dictionary must have the **same** set of keys; i.e. you can't have fewer keys in some dictionaries than others.  This is because the `Insert` statement is compiled against the **first** dictionary in the list, and it's assumed that all subsequent argument dictionaries are compatible with that statement.

## Connectionless / Implicit Execution {@name=connectionless}

We're executing our `Insert` using a `Connection`.  There are two options that allow you to not have to deal with the connection part.  You can execute in the **connectionless** style, using the engine, which opens and closes a connection for you:

    {python}
    {sql}>>> result = engine.execute(users.insert(), name='fred', fullname="Fred Flintstone")
    INSERT INTO users (name, fullname) VALUES (?, ?)
    ['fred', 'Fred Flintstone']
    COMMIT

And you can save even more steps than that, if you connect the `Engine` to the `MetaData` object we created earlier.  When this is done, all SQL expressions which involve tables within the `MetaData` object will be automatically **bound** to the `Engine`.  In this case, we call it **implicit execution**:

    {python}
    >>> metadata.bind = engine
    {sql}>>> result = users.insert().execute(name="mary", fullname="Mary Contrary")
    INSERT INTO users (name, fullname) VALUES (?, ?)
    ['mary', 'Mary Contrary']
    COMMIT

When the `MetaData` is bound, statements will also compile against the engine's dialect.  Since a lot of the examples here assume the default dialect, we'll detach the engine from the metadata which we just attached:

    {python}
    >>> metadata.bind = None

Detailed examples of connectionless and implicit execution are available in the "Engines" chapter: [dbengine_implicit](rel:dbengine_implicit).

## Selecting {@name=selecting}

We began with inserts just so that our test database had some data in it.  The more interesting part of the data is selecting it!  We'll cover UPDATE and DELETE statements later.  The primary construct used to generate SELECT statements is the `select()` function:

    {python}
    >>> from sqlalchemy.sql import select
    >>> s = select([users])
    {opensql}>>> result = conn.execute(s)
    SELECT users.id, users.name, users.fullname
    FROM users
    []

Above, we issued a basic `select()` call, placing the `users` table within the COLUMNS clause of the select, and then executing.  SQLAlchemy expanded the `users` table into the set of each of its columns, and also generated a FROM clause for us.  The result returned is again a `ResultProxy` object, which acts much like a DBAPI cursor, including methods such as `fetchone()` and `fetchall()`.  The easiest way to get rows from it is to just iterate:

    {python}
    >>> for row in result:
    ...     print row
    (1, u'jack', u'Jack Jones')
    (2, u'wendy', u'Wendy Williams')
    (3, u'fred', u'Fred Flintstone')
    (4, u'mary', u'Mary Contrary')

Above, we see that printing each row produces a simple tuple-like result.  We have more options for accessing the data in each row.  One very common way is through dictionary access, using the string names of columns:

    {python}
    {sql}>>> result = conn.execute(s)
    SELECT users.id, users.name, users.fullname
    FROM users
    []

    >>> row = result.fetchone()
    >>> print "name:", row['name'], "; fullname:", row['fullname']
    name: jack ; fullname: Jack Jones

Integer indexes work as well:

    {python}
    >>> row = result.fetchone()
    >>> print "name:", row[1], "; fullname:", row[2]
    name: wendy ; fullname: Wendy Williams

But another way, whose usefulness will become apparent later on, is to use the `Column` objects directly as keys:

    {python}
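This dual name-or-index row access has a close analogue in the stdlib `sqlite3` driver, whose `Row` factory offers the same behavior; a sketch, independent of SQLAlchemy:

    {python}
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.row_factory = sqlite3.Row   # rows support both row['name'] and row[1]
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, "
                 "name VARCHAR(40), fullname VARCHAR(100))")
    conn.execute("INSERT INTO users (name, fullname) VALUES (?, ?)",
                 ('jack', 'Jack Jones'))

    row = conn.execute("SELECT id, name, fullname FROM users").fetchone()
    by_name = row['name']    # dictionary-style access
    by_index = row[1]        # integer-index access
    as_tuple = tuple(row)    # tuple-like behavior, as when printing a row

All three forms read the same underlying row data; which one to use is a matter of clarity at the call site.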
