Support for the PostgreSQL database.
The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.
PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.
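As a minimal sketch of this default behavior, compiling the CREATE TABLE for an integer primary key against the PostgreSQL dialect shows the SERIAL datatype (the table and column names here are purely illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import postgresql

metadata = MetaData()
sometable = Table(
    "sometable", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# compile the DDL against the PostgreSQL dialect; the integer
# primary key column is rendered using SERIAL
ddl = str(CreateTable(sometable).compile(dialect=postgresql.dialect()))
print(ddl)
```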
To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:
Table('sometable', metadata,
    Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
)
When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the “last insert identifier” available, a RETURNING clause is added to the INSERT statement which specifies that the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if PostgreSQL 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, the returned value to be used in the subsequent insert. Note that when an insert() construct is executed using “executemany” semantics, the “last inserted identifier” functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.
To disable the usage of RETURNING by default, specify the flag implicit_returning=False to create_engine().
All PostgreSQL dialects support setting of transaction isolation level both via a dialect-specific parameter isolation_level accepted by create_engine(), as well as the isolation_level argument as passed to Connection.execution_options(). When using a non-psycopg2 dialect, this feature works by issuing the command SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> for each new connection.
To set isolation level using create_engine():
engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)
To set using per-connection execution options:
connection = engine.connect()
connection = connection.execution_options(isolation_level="READ COMMITTED")
Valid values for isolation_level include:
READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
The psycopg2 dialect also offers the special level AUTOCOMMIT. See Psycopg2 Transaction Isolation Level for details.
Tables can be introspected from any accessible schema, including inter-schema foreign key relationships. However, care must be taken when specifying the “schema” argument for a given Table, when the given schema is also present in PostgreSQL’s search_path variable for the current connection.
If a FOREIGN KEY constraint reports that the remote table’s schema is within the current search_path, the “schema” attribute of the resulting Table will be set to None, unless the actual schema of the remote table matches that of the referencing table, and the “schema” argument was explicitly stated on the referencing table.
The best practice here is to not use the schema argument on Table for any schemas that are present in search_path.
search_path defaults to “public”, but care should be taken to inspect the actual value using:
SHOW search_path;
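As an illustrative sketch of this best practice (the table and schema names here are hypothetical), Table objects for schemas that appear on the search_path omit the schema argument entirely, while tables in other schemas name it explicitly:

```python
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

metadata = MetaData()

# "public" is assumed to be on search_path, so no schema argument is used;
# cross-table foreign keys within the search_path are likewise unqualified
parent = Table("parent", metadata, Column("id", Integer, primary_key=True))
child = Table(
    "child", metadata,
    Column("id", Integer, primary_key=True),
    Column("parent_id", Integer, ForeignKey("parent.id")),
)

# a schema that is NOT on search_path is named explicitly
archived = Table(
    "parent", metadata,
    Column("id", Integer, primary_key=True),
    schema="archive",
)
```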
Changed in version 0.7.3: Prior to this version, cross-schema foreign keys when the schemas were also in the search_path could make an incorrect assumption if the schemas were explicitly stated on each Table.
Background on PG’s search_path is at: http://www.postgresql.org/docs/9.0/static/ddl-schemas.html#DDL-SCHEMAS-PATH
The dialect supports PG 8.2’s INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the _UpdateBase.returning() method on a per-statement basis:
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print result.fetchall()

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print result.fetchall()

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print result.fetchall()
The dialect supports PostgreSQL’s ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ... syntaxes. It uses SQLAlchemy’s hints mechanism:
# SELECT ... FROM ONLY ...
result = table.select().with_hint(table, 'ONLY', 'postgresql')
print result.fetchall()

# UPDATE ONLY ...
table.update(values=dict(foo='bar')).with_hint('ONLY',
    dialect_name='postgresql')

# DELETE FROM ONLY ...
table.delete().with_hint('ONLY', dialect_name='postgresql')
Several extensions to the Index construct are available, specific to the PostgreSQL dialect.
Partial indexes add criterion to the index definition so that the index is applied to a subset of rows. These can be specified on Index using the postgresql_where keyword argument:
Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)
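Compiling the DDL for such an index (a sketch using hypothetical table and column names) shows the partial-index criterion rendered as a WHERE clause on CREATE INDEX:

```python
from sqlalchemy import MetaData, Table, Column, Integer, Index
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

metadata = MetaData()
my_table = Table(
    "my_table", metadata,
    Column("id", Integer, primary_key=True),
    Column("value", Integer),
)
ix = Index("my_index", my_table.c.id, postgresql_where=my_table.c.value > 10)

# the postgresql_where criterion renders as a WHERE clause in the DDL
ddl = str(CreateIndex(ix).compile(dialect=postgresql.dialect()))
print(ddl)
```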
PostgreSQL allows the specification of an operator class for each column of an index (see http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:
Index('my_index', my_table.c.id, my_table.c.data,
    postgresql_ops={
        'data': 'text_pattern_ops',
        'id': 'int4_ops'
    })
New in version 0.7.2: postgresql_ops keyword argument to Index construct.
Note that the keys in the postgresql_ops dictionary are the “key” name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.
PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see http://www.postgresql.org/docs/8.3/static/indexes-types.html). These can be specified on Index using the postgresql_using keyword argument:
Index('my_index', my_table.c.data, postgresql_using='gin')
The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.
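A short sketch (table and index names illustrative) showing how the value is passed through to the generated DDL as USING <method>:

```python
from sqlalchemy import MetaData, Table, Column, Integer, Index
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY

metadata = MetaData()
my_table = Table("my_table", metadata, Column("data", ARRAY(Integer)))
ix = Index("my_index", my_table.c.data, postgresql_using="gin")

# the index method string is passed through verbatim to CREATE INDEX
ddl = str(CreateIndex(ix).compile(dialect=postgresql.dialect()))
print(ddl)
```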
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with PostgreSQL are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:
from sqlalchemy.dialects.postgresql import \
    ARRAY, BIGINT, BIT, BOOLEAN, BYTEA, CHAR, CIDR, DATE, \
    DOUBLE_PRECISION, ENUM, FLOAT, HSTORE, INET, INTEGER, \
    INTERVAL, MACADDR, NUMERIC, REAL, SMALLINT, TEXT, TIME, \
    TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \
    DATERANGE, TSRANGE, TSTZRANGE
Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:
class sqlalchemy.dialects.postgresql.array(clauses, **kw)
Bases: sqlalchemy.sql.expression.Tuple
A PostgreSQL ARRAY literal.
This is used to produce ARRAY literals in SQL expressions, e.g.:
from sqlalchemy.dialects.postgresql import array
from sqlalchemy.dialects import postgresql
from sqlalchemy import select, func

stmt = select([
    array([1,2]) + array([3,4,5])
])

print stmt.compile(dialect=postgresql.dialect())
Produces the SQL:
SELECT ARRAY[%(param_1)s, %(param_2)s] ||
    ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1
An instance of array will always have the datatype ARRAY. The “inner” type of the array is inferred from the values present, unless the type_ keyword argument is passed:
array(['foo', 'bar'], type_=CHAR)
New in version 0.8: Added the array literal type.
See also:
ARRAY - the PostgreSQL ARRAY datatype.
class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None)
Bases: sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine
PostgreSQL ARRAY type.
Represents values as Python lists.
An ARRAY type is constructed given the “type” of element:
mytable = Table("mytable", metadata,
    Column("data", ARRAY(Integer))
)
The above type represents an N-dimensional array, meaning PostgreSQL will interpret values with any number of dimensions automatically. To produce an INSERT construct that passes in a 1-dimensional array of integers:
connection.execute(
    mytable.insert(),
    data=[1,2,3]
)
The ARRAY type can be constructed given a fixed number of dimensions:
mytable = Table("mytable", metadata,
    Column("data", ARRAY(Integer, dimensions=2))
)
This has the effect of the ARRAY type specifying that number of bracketed blocks when a Table is used in a CREATE TABLE statement, or when the type is used within an expression.cast() construct; it also causes the bind parameter and result set processing of the type to optimize itself to expect exactly that number of dimensions. Note that PostgreSQL itself still allows N dimensions with such a type.
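A minimal sketch of the DDL effect (hypothetical table name): compiling a CREATE TABLE with dimensions=2 renders two bracketed blocks after the inner type:

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import postgresql

metadata = MetaData()
mytable = Table(
    "mytable", metadata,
    Column("data", postgresql.ARRAY(Integer, dimensions=2)),
)

# dimensions=2 renders two bracketed blocks in the DDL
ddl = str(CreateTable(mytable).compile(dialect=postgresql.dialect()))
print(ddl)
```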
SQL expressions of type ARRAY have support for “index” and “slice” behavior. The Python [] operator works normally here, given integer indexes or slices. Note that PostgreSQL arrays default to 1-based indexing. The operator produces binary expression constructs which will produce the appropriate SQL, both for SELECT statements:
select([mytable.c.data[5], mytable.c.data[2:7]])
as well as UPDATE statements when the Update.values() method is used:
mytable.update().values({
    mytable.c.data[5]: 7,
    mytable.c.data[2:7]: [1, 2, 3]
})
ARRAY provides special methods for containment operations, e.g.:
mytable.c.data.contains([1, 2])
For a full list of special methods see ARRAY.Comparator.
New in version 0.8: Added support for index and slice operations to the ARRAY type, including support for UPDATE statements, and special array containment operations.
The ARRAY type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.
See also:
postgresql.array - produce a literal array value.
Comparator(expr)
Bases: sqlalchemy.types.Comparator
Define comparison operations for ARRAY.
all(other, operator=<built-in function eq>)
Return other operator ALL (array) clause.
Argument places are switched, because ALL requires the array expression to be on the right hand-side.
E.g.:
from sqlalchemy.sql import operators

conn.execute(
    select([table.c.data]).where(
        table.c.data.all(7, operator=operators.lt)
    )
)
any(other, operator=<built-in function eq>)
Return other operator ANY (array) clause.
Argument places are switched, because ANY requires the array expression to be on the right hand-side.
E.g.:
from sqlalchemy.sql import operators

conn.execute(
    select([table.c.data]).where(
        table.c.data.any(7, operator=operators.lt)
    )
)
contained_by(other)
Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.
contains(other, **kwargs)
Boolean expression. Test if elements are a superset of the elements of the argument array expression.
overlap(other)
Boolean expression. Test if array has elements in common with an argument array expression.
ARRAY.__init__(item_type, as_tuple=False, dimensions=None)
Construct an ARRAY.
E.g.:
Column('myarray', ARRAY(Integer))
class sqlalchemy.dialects.postgresql.Any(left, right, operator=<built-in function eq>)
Bases: sqlalchemy.sql.expression.ColumnElement
Represent the clause left operator ANY (right). right must be an array expression.
class sqlalchemy.dialects.postgresql.All(left, right, operator=<built-in function eq>)
Bases: sqlalchemy.sql.expression.ColumnElement
Represent the clause left operator ALL (right). right must be an array expression.
class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)
Bases: sqlalchemy.types.TypeEngine
class sqlalchemy.dialects.postgresql.BYTEA(length=None)
Bases: sqlalchemy.types.LargeBinary
__init__(length=None)
Construct a LargeBinary type.
Parameters: length – optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
class sqlalchemy.dialects.postgresql.CIDR(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
__init__(*args, **kwargs)
Support implementations that were passing arguments
class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Float
__init__(precision=None, asdecimal=False, **kwargs)
Construct a Float.
class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)
Bases: sqlalchemy.types.Enum
PostgreSQL ENUM type.
This is a subclass of types.Enum which includes support for PG’s CREATE TYPE.
ENUM is used automatically when using the types.Enum type on PG assuming the native_enum flag is left as True. However, the ENUM class can also be instantiated directly in order to access some additional PostgreSQL-specific options, namely finer control over whether or not CREATE TYPE should be emitted.
Note that both types.Enum as well as ENUM feature create/drop methods; the base types.Enum type ultimately delegates to the create() and drop() methods present here.
__init__(*enums, **kw)
Construct an ENUM.
Arguments are the same as that of types.Enum, but also including the following parameters.
Parameters: create_type – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. New in version 0.7.4.
create(bind=None, checkfirst=True)
Emit CREATE TYPE for this ENUM.
If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.
drop(bind=None, checkfirst=True)
Emit DROP TYPE for this ENUM.
If the underlying dialect does not support PostgreSQL DROP TYPE, no action is taken.
class sqlalchemy.dialects.postgresql.HSTORE(*args, **kwargs)
Bases: sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine
Represent the PostgreSQL HSTORE type.
The HSTORE type stores dictionaries containing strings, e.g.:
data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', HSTORE)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )
HSTORE provides for a wide range of operations, including:
Index operations:
data_table.c.data['some key'] == 'some value'
Containment operations:
data_table.c.data.has_key('some key')
data_table.c.data.has_all(['one', 'two', 'three'])
Concatenation:
data_table.c.data + {"k1": "v1"}
For a full list of special methods see HSTORE.comparator_factory.
For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with the MutableDict dictionary now part of the sqlalchemy.ext.mutable extension. This extension will allow “in-place” changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:
from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'data_table'

    id = Column(Integer, primary_key=True)
    data = Column(MutableDict.as_mutable(HSTORE))

my_object = session.query(MyClass).one()

# in-place mutation, requires Mutable extension
# in order for the ORM to detect
my_object.data['some_key'] = 'some value'

session.commit()
When the sqlalchemy.ext.mutable extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.
New in version 0.8.
See also:
hstore - render the PostgreSQL hstore() function.
comparator_factory(expr)
Bases: sqlalchemy.types.Comparator
Define comparison operations for HSTORE.
array()
Text array expression. Returns array of alternating keys and values.
contained_by(other)
Boolean expression. Test if keys are a proper subset of the keys of the argument hstore expression.
contains(other, **kwargs)
Boolean expression. Test if keys are a superset of the keys of the argument hstore expression.
defined(key)
Boolean expression. Test for presence of a non-NULL value for the key. Note that the key may be a SQLA expression.
delete(key)
HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.
has_all(other)
Boolean expression. Test for presence of all keys in the PG array.
has_any(other)
Boolean expression. Test for presence of any key in the PG array.
has_key(other)
Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.
keys()
Text array expression. Returns array of keys.
matrix()
Text array expression. Returns array of [key, value] pairs.
slice(array)
HStore expression. Returns a subset of an hstore defined by array of keys.
vals()
Text array expression. Returns array of values.
class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
Construct an hstore value within a SQL expression using the PostgreSQL hstore() function.
The hstore function accepts one or two arguments as described in the PostgreSQL documentation.
E.g.:
from sqlalchemy.dialects.postgresql import array, hstore

select([hstore('key1', 'value1')])

select([
    hstore(
        array(['key1', 'key2', 'key3']),
        array(['value1', 'value2', 'value3'])
    )
])
New in version 0.8.
See also:
HSTORE - the PostgreSQL HSTORE datatype.
class sqlalchemy.dialects.postgresql.INET(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
__init__(*args, **kwargs)
Support implementations that were passing arguments
class sqlalchemy.dialects.postgresql.INTERVAL(precision=None)
Bases: sqlalchemy.types.TypeEngine
PostgreSQL INTERVAL type.
The INTERVAL type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000 or zxjdbc.
class sqlalchemy.dialects.postgresql.MACADDR(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
__init__(*args, **kwargs)
Support implementations that were passing arguments
class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Float
The SQL REAL type.
__init__(precision=None, asdecimal=False, **kwargs)
Construct a Float.
class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)
Bases: sqlalchemy.types.TypeEngine
PostgreSQL UUID type.
Represents the UUID column type, interpreting data either as natively returned by the DBAPI or as Python uuid objects.
The UUID type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.
__init__(as_uuid=False)
Construct a UUID type.
Parameters: as_uuid=False – if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.
The new range column types found in PostgreSQL 9.2 onwards are catered for by the following types:
class sqlalchemy.dialects.postgresql.INT4RANGE(*args, **kwargs)
Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL INT4RANGE type.
New in version 0.8.2.
class sqlalchemy.dialects.postgresql.INT8RANGE(*args, **kwargs)
Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL INT8RANGE type.
New in version 0.8.2.
class sqlalchemy.dialects.postgresql.NUMRANGE(*args, **kwargs)
Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL NUMRANGE type.
New in version 0.8.2.
class sqlalchemy.dialects.postgresql.DATERANGE(*args, **kwargs)
Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL DATERANGE type.
New in version 0.8.2.
class sqlalchemy.dialects.postgresql.TSRANGE(*args, **kwargs)
Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL TSRANGE type.
New in version 0.8.2.
class sqlalchemy.dialects.postgresql.TSTZRANGE(*args, **kwargs)
Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL TSTZRANGE type.
New in version 0.8.2.
The types above get most of their functionality from the following mixin:
class sqlalchemy.dialects.postgresql.ranges.RangeOperators
This mixin provides functionality for the Range Operators listed in Table 9-44 of the postgres documentation for Range Functions and Operators. It is used by all the range types provided in the postgres dialect and can likely be used for any range types you create yourself.
No extra support is provided for the Range Functions listed in Table 9-45 of the postgres documentation. For these, the normal func() object should be used.
New in version 0.8.2: Support for PostgreSQL RANGE operations.
comparator_factory(expr)
Bases: sqlalchemy.types.Comparator
Define comparison operations for range types.
__ne__(other)
Boolean expression. Returns true if two ranges are not equal.
adjacent_to(other)
Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.
contained_by(other)
Boolean expression. Returns true if the column is contained within the right hand operand.
contains(other, **kw)
Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.
not_extend_left_of(other)
Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.
not_extend_right_of(other)
Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.
overlaps(other)
Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.
strictly_left_of(other)
Boolean expression. Returns true if the column is strictly left of the right hand operand.
strictly_right_of(other)
Boolean expression. Returns true if the column is strictly right of the right hand operand.
Warning
The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it’s recommended to upgrade to version 2.5 or later before using these column types.
SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:
class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)
Bases: sqlalchemy.schema.ColumnCollectionConstraint
A table-level EXCLUDE constraint.
Defines an EXCLUDE constraint as described in the postgres documentation.
__init__(*elements, **kw)
For example:
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

class RoomBookings(Base):
    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (
        ExcludeConstraint(('room', '='), ('during', '&&')),
    )
Support for the PostgreSQL database via the psycopg2 driver.
Documentation and download information (if applicable) for psycopg2 is available at: http://pypi.python.org/pypi/psycopg2/
Connect String:
postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]
psycopg2-specific keyword arguments which are accepted by create_engine() are:
server_side_cursors: Enable the usage of “server side cursors” for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy’s ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the stream_results=True execution option is a more targeted way of enabling this mode on a per-execution basis.
use_native_unicode: Enable the usage of Psycopg2 “native unicode” mode per connection. True by default.
isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect. See Psycopg2 Transaction Isolation Level.
psycopg2 supports connecting via Unix domain connections. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:
create_engine("postgresql+psycopg2://user:password@/dbname")
By default, psycopg2 connects to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:
create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")
The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), Query.execution_options(), in addition to those not specific to DBAPIs:
isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.
stream_results - Enable or disable usage of psycopg2 server side cursors. If None or not set, the server_side_cursors option of the Engine is used.
By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE
extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly - SQLAlchemy passes these values through without change. Psycopg2 here will encode/decode string values based on the current “client encoding” setting; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:
#client_encoding = sql_ascii # actually, defaults to database
# encoding
client_encoding = utf8
A second way to affect the client encoding is to set it within Psycopg2 locally. SQLAlchemy will call psycopg2’s set_client_encoding() method (see: http://initd.org/psycopg/docs/connection.html#connection.set_client_encoding) on all new connections based on the value passed to create_engine() using the client_encoding parameter:
engine = create_engine("postgresql://user:pass@host/dbname", client_encoding='utf8')
This overrides the encoding specified in the PostgreSQL client configuration.
New in version 0.7.3: The psycopg2-specific client_encoding parameter to create_engine().
SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don’t fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8.
SQLAlchemy’s own unicode encode/decode functionality is steadily becoming obsolete as more DBAPIs support unicode fully along with the approach of Python 3; in modern usage psycopg2 should be relied upon to handle unicode.
The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options().
When using the psycopg2 dialect, these options make use of
psycopg2’s set_isolation_level()
connection method, rather than
emitting a Postgresql directive; this is because psycopg2’s API-level
setting is always emitted at the start of each transaction in any case.
The psycopg2 dialect supports these constants for isolation level:
READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT
New in version 0.8.2: support for AUTOCOMMIT isolation level when using psycopg2.
The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger:
import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
The psycopg2 dialect will make use of the psycopg2.extensions.register_hstore() extension when using the HSTORE type. This replaces SQLAlchemy’s pure-Python HSTORE coercion which takes effect for other DBAPIs.
Support for the PostgreSQL database via the py-postgresql driver.
Documentation and download information (if applicable) for py-postgresql is available at: http://python.projects.pgfoundry.org/
Connect String:
postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]
Support for the PostgreSQL database via the pg8000 driver.
Documentation and download information (if applicable) for pg8000 is available at: http://pybrary.net/pg8000/
Connect String:
postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]
pg8000 requires that the postgresql client encoding be configured in the postgresql.conf file in order to use encodings other than ascii. Set this value to the same value as the “encoding” parameter on create_engine(), usually “utf-8”.
Passing data from/to the Interval type is not supported as of yet.
Support for the PostgreSQL database via the zxJDBC for Jython driver.
Drivers for this database are available at: http://jdbc.postgresql.org/