    DROP TABLE employees
    {}

The `create()` and `drop()` methods also support an optional keyword argument `checkfirst`, which will issue the database's appropriate pragma statements to check if the table exists before creating or dropping:

    {python}
    employees.create(bind=e, checkfirst=True)
    employees.drop(checkfirst=False)

Entire groups of Tables can be created and dropped directly from the `MetaData` object with `create_all()` and `drop_all()`. These methods always check for the existence of each table before creating or dropping. Each method takes an optional `bind` keyword argument which can reference an `Engine` or a `Connection`. If no engine is specified, the underlying bound `Engine`, if any, is used:

    {python}
    engine = create_engine('sqlite:///:memory:')

    metadata = MetaData()

    users = Table('users', metadata,
        Column('user_id', Integer, primary_key=True),
        Column('user_name', String(16), nullable=False),
        Column('email_address', String(60), key='email'),
        Column('password', String(20), nullable=False)
    )

    user_prefs = Table('user_prefs', metadata,
        Column('pref_id', Integer, primary_key=True),
        Column('user_id', Integer, ForeignKey("users.user_id"), nullable=False),
        Column('pref_name', String(40), nullable=False),
        Column('pref_value', String(100))
    )

    {sql}metadata.create_all(bind=engine)
    PRAGMA table_info(users){}
    CREATE TABLE users(
        user_id INTEGER NOT NULL PRIMARY KEY,
        user_name VARCHAR(16) NOT NULL,
        email_address VARCHAR(60),
        password VARCHAR(20) NOT NULL
    )
    PRAGMA table_info(user_prefs){}
    CREATE TABLE user_prefs(
        pref_id INTEGER NOT NULL PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(user_id),
        pref_name VARCHAR(40) NOT NULL,
        pref_value VARCHAR(100)
    )
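For completeness, a minimal sketch of the reverse operation, using the `metadata` and `engine` objects defined above; the second engine and its URL are hypothetical:

    {python}
    # drop both tables; existence is checked before each DROP is issued
    metadata.drop_all(bind=engine)

    # the same MetaData can be re-created against a different Engine or Connection
    other_engine = create_engine('sqlite:///other.db')
    metadata.create_all(bind=other_engine)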
A "default" value by definition is only invoked if no explicit value is passed into the INSERT or UPDATE statement.#### Pre-Executed Python Functions {@name=preexecute_functions}The "default" keyword argument on Column can reference a Python value or callable which is invoked at the time of an insert: {python} # a function which counts upwards i = 0 def mydefault(): global i i += 1 return i t = Table("mytable", meta, # function-based default Column('id', Integer, primary_key=True, default=mydefault), # a scalar default Column('key', String(10), default="default") )Similarly, the "onupdate" keyword does the same thing for update statements: {python} import datetime t = Table("mytable", meta, Column('id', Integer, primary_key=True), # define 'last_updated' to be populated with datetime.now() Column('last_updated', DateTime, onupdate=datetime.now), )#### Pre-executed and Inline SQL Expressions {@name=sqlexpression}The "default" and "onupdate" keywords may also be passed SQL expressions, including select statements or direct function calls: {python} t = Table("mytable", meta, Column('id', Integer, primary_key=True), # define 'create_date' to default to now() Column('create_date', DateTime, default=func.now()), # define 'key' to pull its default from the 'keyvalues' table Column('key', String(20), default=keyvalues.select(keyvalues.c.type='type1', limit=1)) # define 'last_modified' to use the current_timestamp SQL function on update Column('last_modified', DateTime, onupdate=func.current_timestamp()) )The above SQL functions are usually executed "inline" with the INSERT or UPDATE statement being executed. In some cases, the function is "pre-executed" and its result pre-fetched explicitly. This happens under the following circumstances:* the column is a primary key column* the database dialect does not support a usable `cursor.lastrowid` accessor (or equivalent); this currently includes Postgres, Oracle, and Firebird.* the statement is a single execution, i.e. only supplies one set of parameters and doesn't use "executemany" behavior* the `inline=True` flag is not set on the `Insert()` or `Update()` construct.For a statement execution which is not an executemany, the returned `ResultProxy` will contain a collection accessible via `result.postfetch_cols()` which contains a list of all `Column` objects which had an inline-executed default. Similarly, all parameters which were bound to the statement, including all Python and SQL expressions which were pre-executed, are present in the `last_inserted_params()` or `last_updated_params()` collections on `ResultProxy`. The `last_inserted_ids()` collection contains a list of primary key values for the row inserted. 
#### DDL-Level Defaults {@name=passive}

A variant on a SQL expression default is the `PassiveDefault`, which gets placed in the CREATE TABLE statement during a `create()` operation:

    {python}
    t = Table('test', meta,
        Column('mycolumn', DateTime, PassiveDefault(text("sysdate")))
    )

A create call for the above table will produce:

    {code}
    CREATE TABLE test (
        mycolumn datetime default sysdate
    )

The behavior of `PassiveDefault` is similar to that of a regular SQL default: if it's placed on a primary key column for a database which doesn't have a way to "postfetch" the ID, and the statement is not "inlined", the SQL expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on the database side normally.

#### Defining Sequences {@name=sequences}

A table with a sequence looks like:

    {python}
    table = Table("cartitems", meta,
        Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
        Column("description", String(40)),
        Column("createdate", DateTime())
    )

The `Sequence` object works a lot like the `default` keyword on `Column`, except that it only takes effect on a database which supports sequences. When used with a database that does not support sequences, the `Sequence` object has no effect; therefore it's safe to place on a table which is used against multiple database backends. The same rules for pre- and inline execution apply.

When the `Sequence` is associated with a table, CREATE and DROP statements issued for that table will also issue CREATE/DROP for the sequence object, thus "bundling" the sequence object with its parent table.

The flag `optional=True` on `Sequence` will produce a sequence that is only used on databases which have no "autoincrementing" capability. For example, Postgres supports primary key generation using the SERIAL keyword, whereas Oracle has no such capability. Therefore, a `Sequence` placed on a primary key column with `optional=True` will only be used with an Oracle backend but not Postgres (a sketch follows at the end of this section).

A sequence can also be executed standalone, using an `Engine` or `Connection`, returning its next value in a database-independent fashion:

    {python}
    seq = Sequence('some_sequence')
    nextid = connection.execute(seq)
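To illustrate the `optional=True` flag, a sketch of the "cartitems" table above using an optional sequence, with the per-backend behavior as described in the preceding paragraphs:

    {python}
    table = Table("cartitems", meta,
        # on Oracle, cart_id_seq is created and used to generate cart_id;
        # on Postgres, the SERIAL keyword is used instead and the sequence is skipped
        Column("cart_id", Integer, Sequence('cart_id_seq', optional=True), primary_key=True),
        Column("description", String(40)),
        Column("createdate", DateTime())
    )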
### Defining Constraints and Indexes {@name=constraints}

#### UNIQUE Constraint

Unique constraints can be created anonymously on a single column using the `unique` keyword on `Column`. Explicitly named unique constraints and/or those with multiple columns are created via the `UniqueConstraint` table-level construct.

    {python}
    meta = MetaData()
    mytable = Table('mytable', meta,
        # per-column anonymous unique constraint
        Column('col1', Integer, unique=True),

        Column('col2', Integer),
        Column('col3', Integer),

        # explicit/composite unique constraint.  'name' is optional.
        UniqueConstraint('col2', 'col3', name='uix_1')
    )

#### CHECK Constraint

Check constraints can be named or unnamed and can be created at the Column or Table level, using the `CheckConstraint` construct. The text of the check constraint is passed directly through to the database, so there is limited "database independent" behavior. Column-level check constraints should generally refer only to the column on which they are placed, while table-level constraints can refer to any columns in the table. Note that some databases, such as MySQL and SQLite, do not actively support check constraints.

    {python}
    meta = MetaData()
    mytable = Table('mytable', meta,
        # per-column CHECK constraint
        Column('col1', Integer, CheckConstraint('col1>5')),

        Column('col2', Integer),
        Column('col3', Integer),

        # table level CHECK constraint.  'name' is optional.
        CheckConstraint('col2 > col3 + 5', name='check1')
    )

#### Indexes

Indexes can be created anonymously (using an auto-generated name "ix_<column label>") for a single column using the inline `index` keyword on `Column`, which also modifies the usage of `unique` to apply the uniqueness to the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass more than one column, use the `Index` construct, which requires a name. Note that the `Index` construct is created **externally** to the table to which it corresponds, using `Column` objects and not strings.

    {python}
    meta = MetaData()
    mytable = Table('mytable', meta,
        # an indexed column, with index "ix_mytable_col1"
        Column('col1', Integer, index=True),

        # a uniquely indexed column with index "ix_mytable_col2"
        Column('col2', Integer, index=True, unique=True),

        Column('col3', Integer),
        Column('col4', Integer),

        Column('col5', Integer),
        Column('col6', Integer),
    )

    # place an index on col3, col4
    Index('idx_col34', mytable.c.col3, mytable.c.col4)

    # place a unique index on col5, col6
    Index('myindex', mytable.c.col5, mytable.c.col6, unique=True)

The `Index` objects will be created along with the CREATE statements for the table itself. An index can also be created on its own, independently of the table:

    {python}
    # create a table
    sometable.create()

    # define an index
    i = Index('someindex', sometable.c.col5)

    # create the index; this will use the table's bound connectable
    # if the `bind` keyword argument is not specified
    i.create()

### Adapting Tables to Alternate Metadata {@name=adapting}

A `Table` object created against a specific `MetaData` object can be re-created against a new `MetaData` using the `tometadata` method:

    {python}
    # create two metadata
    meta1 = MetaData('sqlite:///querytest.db')
    meta2 = MetaData()

    # load 'users' from the sqlite engine
    users_table = Table('users', meta1, autoload=True)

    # create the same Table object for the plain metadata
    users_table_2 = users_table.tometadata(meta2)
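A short usage sketch: since `users_table_2` now belongs to `meta2`, it can be created against a different database entirely (the second engine URL below is hypothetical):

    {python}
    # a second engine; the URL is hypothetical
    engine2 = create_engine('sqlite:///other.db')

    # 'users_table_2' is part of meta2, so create_all() against the
    # second engine emits CREATE TABLE for it there
    meta2.create_all(bind=engine2)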