This section describes a variety of configurational patterns that are usable with mappers. It assumes you’ve worked through Object Relational Tutorial and know how to construct and use rudimentary mappers and relationships.
A Classical Mapping refers to the configuration of a mapped class using the mapper() function, without using the Declarative system. As an example, start with the declarative mapping introduced in Object Relational Tutorial:
class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)
In “classical” form, the table metadata is created separately with the Table construct, then associated with the User class via the mapper() function:
from sqlalchemy import Table, MetaData, Column, ForeignKey, Integer, String
from sqlalchemy.orm import mapper

metadata = MetaData()

user = Table('user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('fullname', String(50)),
    Column('password', String(12))
)

class User(object):
    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

mapper(User, user)
Information about mapped attributes, such as relationships to other classes, is provided via the properties dictionary. The example below illustrates a second Table object, mapped to a class called Address, then linked to User via relationship():
address = Table('address', metadata,
    Column('id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey('user.id')),
    Column('email_address', String(50))
)

mapper(User, user, properties={
    'addresses': relationship(Address, backref='user', order_by=address.c.id)
})

mapper(Address, address)
When using classical mappings, classes must be provided directly without the benefit of the “string lookup” system provided by Declarative. SQL expressions are typically specified in terms of the Table objects, i.e. address.c.id above for the Address relationship, and not Address.id, as Address may not yet be linked to table metadata, nor can we specify a string here.
Some examples in the documentation still use the classical approach, but note that the classical and Declarative approaches are fully interchangeable. Both systems ultimately create the same configuration, consisting of a Table, a user-defined class, and a mapper() linking the two together. When we talk about “the behavior of mapper()”, this includes the Declarative system as well - it’s still used, just behind the scenes.
The default behavior of mapper() is to assemble all the columns in the mapped Table into mapped object attributes, each of which is named according to the name of the column itself (specifically, the key attribute of Column). This behavior can be modified in several ways.
A mapping by default shares the same name for a Column as that of the mapped attribute - specifically it matches the Column.key attribute of the Column, which by default is the same as the Column.name. The name assigned to the Python attribute which maps to a Column can be different from either Column.name or Column.key, simply by assigning it that way, as we illustrate here in a Declarative mapping:
class User(Base):
    __tablename__ = 'user'

    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))
Above, User.id resolves to a column named user_id and User.name resolves to a column named user_name.
When mapping to an existing table, the Column object can be referenced directly:
class User(Base):
    __table__ = user_table

    id = user_table.c.user_id
    name = user_table.c.user_name
Or, in a classical mapping, placed in the properties dictionary with the desired key:
mapper(User, user_table, properties={
    'id': user_table.c.user_id,
    'name': user_table.c.user_name,
})
In the next section we’ll examine the usage of .key more closely.
In the previous section, Naming Columns Distinctly from Attribute Names, we showed how a Column explicitly mapped to a class can have a different attribute name than the column. But what if we aren’t listing out Column objects explicitly, and instead are automating the production of Table objects using reflection (e.g. as described in Reflecting Database Objects)? In this case we can make use of the DDLEvents.column_reflect() event to intercept the production of Column objects and provide them with the Column.key of our choice:
@event.listens_for(Table, "column_reflect")
def column_reflect(inspector, table, column_info):
    # set column.key = "attr_<lower_case_name>"
    column_info['key'] = "attr_%s" % column_info['name'].lower()
With the above event, the reflection of Column objects will be intercepted by our event, which adds a new .key element, such as in a mapping as below:
class MyClass(Base):
    __table__ = Table("some_table", Base.metadata,
                autoload=True, autoload_with=some_engine)
If we want to qualify our event to only react for the specific MetaData object above, we can check for it in our event:
@event.listens_for(Table, "column_reflect")
def column_reflect(inspector, table, column_info):
    if table.metadata is Base.metadata:
        # set column.key = "attr_<lower_case_name>"
        column_info['key'] = "attr_%s" % column_info['name'].lower()
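The key assigned by the event is a simple string transformation; pulled out as a standalone helper it can be exercised on its own (the attr_key name is illustrative, not part of the event API):

```python
def attr_key(column_name):
    # mirrors the event body: prefix "attr_" onto the
    # lower-cased column name
    return "attr_%s" % column_name.lower()
```

A reflected column named UserName would thus be mapped to the attribute attr_username.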
A quick approach to prefix column names, typically when mapping to an existing Table object, is to use column_prefix:
class User(Base):
    __table__ = user_table
    __mapper_args__ = {'column_prefix': '_'}
The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User class.
This approach is uncommon in modern usage. For dealing with reflected tables, a more flexible approach is to use that described in Automating Column Naming Schemes from Reflected Tables.
Options can be specified when mapping a Column using the column_property() function. This function explicitly creates the ColumnProperty used by the mapper() to keep track of the Column; normally, the mapper() creates this automatically. Using column_property(), we can pass additional arguments about how we’d like the Column to be mapped. Below, we pass an option active_history, which specifies that a change to this column’s value should result in the former value being loaded first:
from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)
column_property() is also used to map a single attribute to multiple columns. This use case arises when mapping to a join() which has attributes that are equated to each other:
class User(Base):
    __table__ = user_table.join(address_table)

    # assign "user.id", "address.user_id" to the
    # "id" attribute
    id = column_property(user_table.c.id, address_table.c.user_id)
For more examples featuring this usage, see Mapping a Class against Multiple Tables.
Another place where column_property() is needed is to specify SQL expressions as mapped attributes, such as below where we create an attribute fullname that is the string concatenation of the firstname and lastname columns:
class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)
See examples of this usage at SQL Expressions as Mapped Attributes.
sqlalchemy.orm.column_property(*cols, **kw)

Provide a column-level property for use with a Mapper.
Column-based properties can normally be applied to the mapper’s properties dictionary using the Column element directly.
Use this function when the given column is not directly present within the
mapper’s selectable; examples include SQL expressions, functions, and
scalar SELECT queries.
Columns that aren’t present in the mapper’s selectable won’t be persisted by the mapper and are effectively “read-only” attributes.
Sometimes, a Table object is made available using the reflection process described at Reflecting Database Objects, which loads the table’s structure from the database. For such a table that has lots of columns that don’t need to be referenced in the application, the include_properties or exclude_properties arguments can specify that only a subset of columns should be mapped. For example:
class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        'include_properties': ['user_id', 'user_name']
    }
...will map the User class to the user_table table, only including the user_id and user_name columns - the rest are not referenced. Similarly:
class Address(Base):
    __table__ = address_table
    __mapper_args__ = {
        'exclude_properties': ['street', 'city', 'state', 'zip']
    }
...will map the Address class to the address_table table, including all columns present except street, city, state, and zip.
When this mapping is used, the columns that are not included will not be referenced in any SELECT statements emitted by Query, nor will there be any mapped attribute on the mapped class which represents the column; assigning an attribute of that name will have no effect beyond that of a normal Python attribute assignment.
In some cases, multiple columns may have the same name, such as when mapping to a join of two or more tables that share some column name. include_properties and exclude_properties can also accommodate Column objects to more accurately describe which columns should be included or excluded:
class UserAddress(Base):
    __table__ = user_table.join(addresses_table)
    __mapper_args__ = {
        'exclude_properties': [address_table.c.id],
        'primary_key': [user_table.c.id]
    }
Note: insert and update defaults configured on individual Column objects, i.e. those described at metadata_defaults, including those configured by the default, update, server_default and server_onupdate arguments, will continue to function normally even if those Column objects are not mapped. This is because in the case of default and update, the Column object is still present on the underlying Table, thus allowing the default functions to take place when the ORM emits an INSERT or UPDATE, and in the case of server_default and server_onupdate, the relational database itself maintains these functions.
This feature allows particular columns of a table to be loaded only upon direct access, instead of when the entity is queried using Query. This feature is useful when one wants to avoid loading a large text or binary field into memory when it’s not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together, using the orm.deferred() function to mark them as “deferred”. In the example below, we define a mapping that will load each of .excerpt and .photo in separate, individual-row SELECT statements when each attribute is first referenced on the individual object instance:
from sqlalchemy.orm import deferred
from sqlalchemy import Integer, String, Text, Binary, Column

class Book(Base):
    __tablename__ = 'book'

    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo = deferred(Column(Binary))
Classical mappings as always place the usage of orm.deferred() in the properties dictionary against the table-bound Column:
mapper(Book, book_table, properties={
    'photo': deferred(book_table.c.photo)
})
Deferred columns can be associated with a “group” name, so that they load together when any of them are first accessed. The example below defines a mapping with a photos deferred group. When one .photo is accessed, all three photos will be loaded in one SELECT statement. The .excerpt will be loaded separately when it is accessed:
class Book(Base):
    __tablename__ = 'book'

    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo1 = deferred(Column(Binary), group='photos')
    photo2 = deferred(Column(Binary), group='photos')
    photo3 = deferred(Column(Binary), group='photos')
You can defer or undefer columns at the Query level using the orm.defer() and orm.undefer() query options:
from sqlalchemy.orm import defer, undefer
query = session.query(Book)
query.options(defer('summary')).all()
query.options(undefer('excerpt')).all()
And an entire “deferred group”, i.e. one which uses the group keyword argument to orm.deferred(), can be undeferred using orm.undefer_group(), sending in the group name:
from sqlalchemy.orm import undefer_group
query = session.query(Book)
query.options(undefer_group('photos')).all()
sqlalchemy.orm.deferred(*columns, **kwargs)

Return a DeferredColumnProperty, which indicates that this object attribute should only be loaded from its corresponding table column when first accessed.

Used with the “properties” dictionary sent to mapper().
sqlalchemy.orm.defer(*key)

Return a MapperOption that will convert the column property of the given name into a deferred load.

Used with Query.options(). e.g.:
from sqlalchemy.orm import defer

query(MyClass).options(defer("attribute_one"),
                       defer("attribute_two"))
A class bound descriptor is also accepted:
query(MyClass).options(
    defer(MyClass.attribute_one),
    defer(MyClass.attribute_two))
A “path” can be specified onto a related or collection object using a dotted name. The orm.defer() option will be applied to that object when loaded:
query(MyClass).options(
    defer("related.attribute_one"),
    defer("related.attribute_two"))
To specify a path via class, send multiple arguments:
query(MyClass).options(
    defer(MyClass.related, MyOtherClass.attribute_one),
    defer(MyClass.related, MyOtherClass.attribute_two))
Parameters: *key - A key representing an individual path. Multiple entries are accepted to allow a multiple-token path for a single target, not multiple targets.
sqlalchemy.orm.undefer(*key)

Return a MapperOption that will convert the column property of the given name into a non-deferred (regular column) load.

Used with Query.options(). e.g.:
from sqlalchemy.orm import undefer

query(MyClass).options(
    undefer("attribute_one"),
    undefer("attribute_two"))
A class bound descriptor is also accepted:
query(MyClass).options(
    undefer(MyClass.attribute_one),
    undefer(MyClass.attribute_two))
A “path” can be specified onto a related or collection object using a dotted name. The orm.undefer() option will be applied to that object when loaded:
query(MyClass).options(
    undefer("related.attribute_one"),
    undefer("related.attribute_two"))
To specify a path via class, send multiple arguments:
query(MyClass).options(
    undefer(MyClass.related, MyOtherClass.attribute_one),
    undefer(MyClass.related, MyOtherClass.attribute_two))
See also: orm.undefer_group() as a means to “undefer” a group of attributes at once.
Parameters: *key - A key representing an individual path. Multiple entries are accepted to allow a multiple-token path for a single target, not multiple targets.
sqlalchemy.orm.undefer_group(name)

Return a MapperOption that will convert the given group of deferred column properties into a non-deferred (regular column) load.

Used with Query.options(). e.g.:
query(MyClass).options(undefer_group("group_one"))
Parameters: name - String name of the deferred group. This name is established using the “group” name to the orm.deferred() configurational function.
Attributes on a mapped class can be linked to SQL expressions, which can be used in queries.

The easiest and most flexible way to link relatively simple SQL expressions to a class is to use a so-called “hybrid attribute”, described in the section Hybrid Attributes. The hybrid provides for an expression that works at both the Python level as well as at the SQL expression level. For example, below we map a class User, containing attributes firstname and lastname, and include a hybrid that will provide for us the fullname, which is the string concatenation of the two:
from sqlalchemy.ext.hybrid import hybrid_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @hybrid_property
    def fullname(self):
        return self.firstname + " " + self.lastname
Above, the fullname attribute is interpreted at both the instance and class level, so that it is available from an instance:
some_user = session.query(User).first()
print some_user.fullname
as well as usable within queries:
some_user = session.query(User).filter(User.fullname == "John Smith").first()
The string concatenation example is a simple one, where the Python expression can be dual purposed at the instance and class level. Often, the SQL expression must be distinguished from the Python expression, which can be achieved using hybrid_property.expression(). Below we illustrate the case where a conditional needs to be present inside the hybrid, using the if statement in Python and the sql.expression.case() construct for SQL expressions:
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.sql import case

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @hybrid_property
    def fullname(self):
        if self.firstname is not None:
            return self.firstname + " " + self.lastname
        else:
            return self.lastname

    @fullname.expression
    def fullname(cls):
        return case([
            (cls.firstname != None, cls.firstname + " " + cls.lastname),
        ], else_=cls.lastname)
The orm.column_property() function can be used to map a SQL expression in a manner similar to a regularly mapped Column. With this technique, the attribute is loaded along with all other column-mapped attributes at load time. This is in some cases an advantage over the usage of hybrids, as the value can be loaded up front at the same time as the parent row of the object, particularly if the expression is one which links to other tables (typically as a correlated subquery) to access data that wouldn’t normally be available on an already loaded object.

Disadvantages to using orm.column_property() for SQL expressions include that the expression must be compatible with the SELECT statement emitted for the class as a whole, and there are also some configurational quirks which can occur when using orm.column_property() from declarative mixins.

Our “fullname” example can be expressed using orm.column_property() as follows:
from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)
Correlated subqueries may be used as well. Below we use the select() construct to create a SELECT that links together the count of Address objects available for a particular User:
from sqlalchemy.orm import column_property
from sqlalchemy import select, func
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Address(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    address_count = column_property(
        select([func.count(Address.id)]).\
            where(Address.user_id == id).\
            correlate_except(Address)
    )
In the above example, we define a select() construct like the following:

select([func.count(Address.id)]).\
    where(Address.user_id == id).\
    correlate_except(Address)
The meaning of the above statement is: select the count of Address.id rows where the Address.user_id column is equated to id, which in the context of the User class is the Column named id (note that id is also the name of a Python built-in function, which is not what we want to use here - if we were outside of the User class definition, we’d use User.id).
The select.correlate_except() directive indicates that each element in the FROM clause of this select() may be omitted from the FROM list (that is, correlated to the enclosing SELECT statement against User) except for the one corresponding to Address. This isn’t strictly necessary, but prevents Address from being inadvertently omitted from the FROM list in the case of a long string of joins between the User and Address tables where SELECT statements against Address are nested.
If import issues prevent the column_property() from being defined inline with the class, it can be assigned to the class after both are configured. In Declarative this has the effect of calling Mapper.add_property() to add an additional property after the fact:
User.address_count = column_property(
    select([func.count(Address.id)]).\
        where(Address.user_id == User.id)
)
For many-to-many relationships, use and_() to join the fields of the association table to both tables in a relationship, illustrated here with a classical mapping:
from sqlalchemy import and_

mapper(Author, authors, properties={
    'book_count': column_property(
        select([func.count(books.c.id)],
            and_(
                book_authors.c.author_id == authors.c.id,
                book_authors.c.book_id == books.c.id
            )))
})
In cases where a SQL query more elaborate than what orm.column_property() or hybrid_property can provide must be emitted, a regular Python function accessed as an attribute can be used, assuming the expression only needs to be available on an already-loaded instance. The function is decorated with Python’s own @property decorator to mark it as a read-only attribute. Within the function, object_session() is used to locate the Session corresponding to the current object, which is then used to emit a query:
from sqlalchemy.orm import object_session
from sqlalchemy import select, func

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @property
    def address_count(self):
        return object_session(self).\
            scalar(
                select([func.count(Address.id)]).\
                    where(Address.user_id == self.id)
            )
The plain descriptor approach is useful as a last resort, but is less performant in the usual case than both the hybrid and column property approaches, in that it needs to emit a SQL query upon each access.
A quick way to add a “validation” routine to an attribute is to use the validates() decorator. An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can change the given value into something different. Validators, like all attribute extensions, are only called by normal userland code; they are not issued when the ORM is populating the object:
from sqlalchemy.orm import validates

class EmailAddress(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    email = Column(String)

    @validates('email')
    def validate_email(self, key, address):
        assert '@' in address
        return address
Validators also receive collection events, when items are added to a collection:
from sqlalchemy.orm import validates

class User(Base):
    # ...

    addresses = relationship("Address")

    @validates('addresses')
    def validate_address(self, key, address):
        assert '@' in address.email
        return address
Note that the validates() decorator is a convenience function built on top of attribute events. An application that requires more control over configuration of attribute change behavior can make use of this system, described at AttributeEvents.
sqlalchemy.orm.validates(*names, **kw)

Decorate a method as a ‘validator’ for one or more named properties.
Designates a method as a validator, a method which receives the name of the attribute as well as a value to be assigned, or in the case of a collection, the value to be added to the collection. The function can then raise validation exceptions to halt the process from continuing (where Python’s built-in ValueError and AssertionError exceptions are reasonable choices), or can modify or replace the value before proceeding. The function should otherwise return the given value.
Note that a validator for a collection cannot issue a load of that collection within the validation routine - this usage raises an assertion to avoid recursion overflows. This is a reentrant condition which is not supported.
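The contract described above - raise to halt the assignment, or return a (possibly modified) value to proceed - can be sketched apart from the ORM as a plain function; the normalization shown is illustrative, not SQLAlchemy behavior:

```python
def validate_email(key, address):
    # raise ValueError to halt the assignment entirely...
    if "@" not in address:
        raise ValueError("invalid email for %s: %r" % (key, address))
    # ...or return a modified value to be applied in place
    # of the one originally given
    return address.strip().lower()
```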
A more comprehensive way to produce modified behavior for an attribute is to use descriptors. These are commonly used in Python via the property() function. The standard SQLAlchemy technique for descriptors is to create a plain descriptor, and to have it read/write from a mapped attribute with a different name. Below we illustrate this using Python 2.6-style properties:
class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    # name the attribute with an underscore,
    # different from the column name
    _email = Column("email", String)

    # then create an ".email" attribute
    # to get/set "._email"
    @property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email
The approach above will work, but there’s more we can add. While our EmailAddress object will shuttle the value through the email descriptor and into the _email mapped attribute, the class-level EmailAddress.email attribute does not have the usual expression semantics usable with Query. To provide these, we instead use the hybrid extension as follows:
from sqlalchemy.ext.hybrid import hybrid_property

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    _email = Column("email", String)

    @hybrid_property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email
The .email attribute, in addition to providing getter/setter behavior when we have an instance of EmailAddress, also provides a SQL expression when used at the class level, that is, from the EmailAddress class directly:
from sqlalchemy.orm import Session
session = Session()

address = session.query(EmailAddress).\
    filter(EmailAddress.email == 'address@example.com').\
    one()

SELECT address.email AS address_email, address.id AS address_id
FROM address
WHERE address.email = ?
('address@example.com',)

address.email = 'otheraddress@example.com'
session.commit()

UPDATE address SET email=? WHERE address.id = ?
('otheraddress@example.com', 1)
COMMIT
The hybrid_property also allows us to change the behavior of the attribute, including defining separate behaviors when the attribute is accessed at the instance level versus at the class/expression level, using the hybrid_property.expression() modifier. For example, if we wanted to add a host name automatically, we might define two sets of string manipulation logic:
class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    _email = Column("email", String)

    @hybrid_property
    def email(self):
        """Return the value of _email up until the last twelve
        characters."""
        return self._email[:-12]

    @email.setter
    def email(self, email):
        """Set the value of _email, tacking on the twelve character
        value @example.com."""
        self._email = email + "@example.com"

    @email.expression
    def email(cls):
        """Produce a SQL expression that represents the value
        of the _email column, minus the last twelve characters."""
        return func.substr(cls._email, 0, func.length(cls._email) - 12)
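Stripped of the ORM, the instance-level half of this hybrid is ordinary Python string manipulation; a minimal sketch follows (the EmailHolder name is hypothetical, standing in for the mapped class):

```python
class EmailHolder(object):
    """Plain-Python analogue of the instance-level behavior above."""

    def __init__(self):
        self._email = None

    @property
    def email(self):
        # strip the trailing "@example.com" (twelve characters)
        return self._email[:-12]

    @email.setter
    def email(self, email):
        # tack the twelve-character host name back on
        self._email = email + "@example.com"
```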
Above, accessing the email property of an instance of EmailAddress will return the value of the _email attribute, removing or adding the hostname @example.com from the value. When we query against the email attribute, a SQL function is rendered which produces the same effect:
address = session.query(EmailAddress).filter(EmailAddress.email == 'address').one()

SELECT address.email AS address_email, address.id AS address_id
FROM address
WHERE substr(address.email, ?, length(address.email) - ?) = ?
(0, 12, 'address')
Read more about Hybrids at Hybrid Attributes.
Synonyms are a mapper-level construct that applies expression behavior to a descriptor-based attribute.

Changed in version 0.7: The functionality of synonym() is superseded as of 0.7 by hybrid attributes.
sqlalchemy.orm.synonym(name, map_column=False, descriptor=None, comparator_factory=None, doc=None)

Denote an attribute name as a synonym to a mapped property.

Changed in version 0.7: synonym() is superseded by the hybrid extension. See the documentation for hybrids at Hybrid Attributes.
Used with the properties dictionary sent to mapper():
class MyClass(object):
    def _get_status(self):
        return self._status
    def _set_status(self, value):
        self._status = value
    status = property(_get_status, _set_status)

mapper(MyClass, sometable, properties={
    "status": synonym("_status", map_column=True)
})
Above, the status attribute of MyClass will produce expression behavior against the table column named status, using the Python attribute _status on the mapped class to represent the underlying value.
The “operators” used by the SQLAlchemy ORM and Core expression language are fully customizable. For example, the comparison expression User.name == 'ed' makes usage of an operator built into Python itself called operator.eq - the actual SQL construct which SQLAlchemy associates with such an operator can be modified. New operations can be associated with column expressions as well. The operators which take place for column expressions are most directly redefined at the type level - see the section Redefining and Creating New Operators for a description.
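The hook that makes this possible is Python's rich comparison protocol: __eq__ may return any object, not just a boolean. A toy sketch of the idea (ColumnExpr is a hypothetical stand-in, not SQLAlchemy's actual implementation):

```python
import operator

class ColumnExpr(object):
    """Toy column stand-in whose ``==`` builds an expression
    object (here, just a string) instead of comparing values."""

    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # return a rendered "expression" rather than True/False
        return "%s = %r" % (self.name, other)

# operator.eq dispatches to __eq__, yielding the expression
expr = operator.eq(ColumnExpr("name"), "ed")
```

SQLAlchemy's column objects work on the same principle, returning BinaryExpression constructs that later compile to SQL.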
ORM-level functions like column_property(), relationship(), and composite() also provide for operator redefinition at the ORM level, by passing a PropComparator subclass to the comparator_factory argument of each function. Customization of operators at this level is a rare use case. See the documentation at PropComparator for an overview.
Sets of columns can be associated with a single user-defined datatype. The ORM provides a single attribute which represents the group of columns using the class you provide.
Changed in version 0.7: Composites have been simplified such that they no longer “conceal” the underlying column based attributes. Additionally, in-place mutation is no longer automatic; see the section below on enabling mutability to support tracking of in-place changes.
A simple example represents pairs of columns as a Point object. Point represents such a pair as .x and .y:
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __composite_values__(self):
        return self.x, self.y

    def __repr__(self):
        return "Point(x=%r, y=%r)" % (self.x, self.y)

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and \
            other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)
The requirements for the custom datatype class are that it have a constructor which accepts positional arguments corresponding to its column format, and also provides a method __composite_values__() which returns the state of the object as a list or tuple, in order of its column-based attributes. It also should supply adequate __eq__() and __ne__() methods which test the equality of two instances.
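These requirements are plain Python and can be checked without the ORM; below we repeat a minimal copy of the Point class above and exercise its protocol:

```python
class Point(object):
    """Minimal copy of the composite class shown above."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __composite_values__(self):
        # state in column order: (x, y)
        return self.x, self.y

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)

# the constructor round-trips through __composite_values__(),
# which is how the ORM rebuilds the composite from row data
p = Point(3, 4)
assert Point(*p.__composite_values__()) == p
```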
We will create a mapping to a table vertice, which represents two points as x1/y1 and x2/y2. These are created normally as Column objects. Then, the composite() function is used to assign new attributes that will represent sets of columns via the Point class:
from sqlalchemy import Column, Integer
from sqlalchemy.orm import composite
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Vertex(Base):
    __tablename__ = 'vertice'

    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    x2 = Column(Integer)
    y2 = Column(Integer)

    start = composite(Point, x1, y1)
    end = composite(Point, x2, y2)
A classical mapping above would define each composite() against the existing table:
mapper(Vertex, vertice_table, properties={
    'start': composite(Point, vertice_table.c.x1, vertice_table.c.y1),
    'end': composite(Point, vertice_table.c.x2, vertice_table.c.y2),
})
We can now persist and use Vertex instances, as well as query for them, using the .start and .end attributes against ad-hoc Point instances:
>>> v = Vertex(start=Point(3, 4), end=Point(5, 6))
>>> session.add(v)
>>> q = session.query(Vertex).filter(Vertex.start == Point(3, 4))
>>> print q.first().start
BEGIN (implicit)
INSERT INTO vertice (x1, y1, x2, y2) VALUES (?, ?, ?, ?)
(3, 4, 5, 6)
SELECT vertice.id AS vertice_id,
vertice.x1 AS vertice_x1,
vertice.y1 AS vertice_y1,
vertice.x2 AS vertice_x2,
vertice.y2 AS vertice_y2
FROM vertice
WHERE vertice.x1 = ? AND vertice.y1 = ?
LIMIT ? OFFSET ?
(3, 4, 1, 0)
Point(x=3, y=4)
sqlalchemy.orm.composite(class_, *cols, **kwargs)
Return a composite column-based property for use with a Mapper.
See the mapping documentation section Composite Column Types for a full usage example.
The MapperProperty returned by composite() is the CompositeProperty.
In-place changes to an existing composite value are not tracked automatically. Instead, the composite class needs to provide events to its parent object explicitly. This task is largely automated via the usage of the MutableComposite mixin, which uses events to associate each user-defined composite object with all parent associations. Please see the example in Establishing Mutability on Composites.
Changed in version 0.7: In-place changes to an existing composite value are no longer tracked automatically; the functionality is superseded by the MutableComposite class.
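Conceptually, what MutableComposite automates is a value object that reports its own in-place changes back to every parent that holds it. The plain-Python sketch below illustrates the idea only; the class names, the _parents list, and the dirty set are illustrative stand-ins, not SQLAlchemy API:

```python
class MutablePoint(object):
    """Sketch of the MutableComposite idea: a value object that
    notifies its parents when mutated in place."""
    def __init__(self, x, y):
        self._parents = []        # (parent object, attribute name) pairs
        self.x, self.y = x, y

    def __setattr__(self, key, value):
        object.__setattr__(self, key, value)
        if key in ('x', 'y'):
            # stand-in for the ORM change event fired by MutableComposite
            for obj, attr in self._parents:
                obj.dirty.add(attr)

class Vertex(object):
    def __init__(self, start):
        self.dirty = set()        # stand-in for ORM "changed" bookkeeping
        self.start = start
        start._parents.append((self, 'start'))

v = Vertex(MutablePoint(3, 4))
v.start.x = 10                    # in-place mutation of the composite
assert 'start' in v.dirty         # the parent was flagged as changed
```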
The “equals” comparison operation by default produces an AND of all corresponding columns equated to one another. This can be changed using the comparator_factory argument to composite(), where we specify a custom CompositeProperty.Comparator class to define existing or new operations. Below we illustrate the “greater than” operator, implementing the same expression that the base “greater than” does:
from sqlalchemy.orm.properties import CompositeProperty
from sqlalchemy import sql

class PointComparator(CompositeProperty.Comparator):
    def __gt__(self, other):
        """redefine the 'greater than' operation"""
        return sql.and_(*[a > b for a, b in
                          zip(self.__clause_element__().clauses,
                              other.__composite_values__())])

class Vertex(Base):
    __tablename__ = 'vertice'

    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    x2 = Column(Integer)
    y2 = Column(Integer)

    start = composite(Point, x1, y1,
                      comparator_factory=PointComparator)
    end = composite(Point, x2, y2,
                    comparator_factory=PointComparator)
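The expression built by PointComparator is an AND of pairwise column comparisons. Its Python-side meaning can be sketched with a stand-in predicate (this is an illustration of the semantics, not the SQL construct itself):

```python
def composite_gt(columns, values):
    """Mimic what the PointComparator SQL evaluates to:
    (x1 > :x) AND (y1 > :y), pairwise over the column/value pairs."""
    return all(a > b for a, b in zip(columns, values))

assert composite_gt((5, 6), (3, 4))      # both components greater: AND is true
assert not composite_gt((5, 2), (3, 4))  # y component fails: AND is false
```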
Mappers can be constructed against arbitrary relational units (called selectables) in addition to plain tables. For example, the join() function creates a selectable unit comprised of multiple tables, complete with its own composite primary key, which can be mapped in the same way as a Table:
from sqlalchemy import Table, Column, Integer, \
    String, MetaData, join, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property

metadata = MetaData()

# define two Table objects
user_table = Table('user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

address_table = Table('address', metadata,
    Column('id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey('user.id')),
    Column('email_address', String)
)

# define a join between them.  This
# takes place across the user.id and address.user_id
# columns.
user_address_join = join(user_table, address_table)

Base = declarative_base()

# map to it
class AddressUser(Base):
    __table__ = user_address_join

    id = column_property(user_table.c.id, address_table.c.user_id)
    address_id = address_table.c.id
In the example above, the join expresses columns for both the user and the address table. The user.id and address.user_id columns are equated by foreign key, so in the mapping they are defined as one attribute, AddressUser.id, using column_property() to indicate a specialized column mapping. Based on this part of the configuration, the mapping will copy new primary key values from user.id into the address.user_id column when a flush occurs.
Additionally, the address.id column is mapped explicitly to an attribute named address_id. This is to disambiguate the mapping of the address.id column from the same-named AddressUser.id attribute, which here has been assigned to refer to the user table combined with the address.user_id foreign key.
The natural primary key of the above mapping is the composite of (user.id, address.id), as these are the primary key columns of the user and address table combined together. The identity of an AddressUser object will be in terms of these two values, and is represented from an AddressUser object as (AddressUser.id, AddressUser.address_id).
Similar to mapping against a join, a plain select() object can be used with a mapper as well. The example fragment below illustrates mapping a class called Customer to a select() which includes a join to a subquery:
from sqlalchemy import select, func

subq = select([
    func.count(orders.c.id).label('order_count'),
    func.max(orders.c.price).label('highest_order'),
    orders.c.customer_id
]).group_by(orders.c.customer_id).alias()

customer_select = select([customers, subq]).\
    select_from(
        join(customers, subq,
             customers.c.id == subq.c.customer_id)
    ).alias()

class Customer(Base):
    __table__ = customer_select
Above, the full row represented by customer_select will be all the columns of the customers table, in addition to those columns exposed by the subq subquery, which are order_count, highest_order, and customer_id. Mapping the Customer class to this selectable then creates a class which will contain those attributes.
When the ORM persists new instances of Customer, only the customers table will actually receive an INSERT. This is because the primary key of the orders table is not represented in the mapping; the ORM will only emit an INSERT into a table for which it has mapped the primary key.
Note
The practice of mapping to arbitrary SELECT statements, especially complex ones as above, is almost never needed; it necessarily tends to produce complex queries which are often less efficient than that which would be produced by direct query construction. The practice is to some degree based on the very early history of SQLAlchemy where the mapper() construct was meant to represent the primary querying interface; in modern usage, the Query object can be used to construct virtually any SELECT statement, including complex composites, and should be favored over the “map-to-selectable” approach.
In modern SQLAlchemy, a particular class is only mapped by one mapper() at a time. The rationale here is that the mapper() modifies the class itself, not only persisting it towards a particular Table, but also instrumenting attributes upon the class which are structured specifically according to the table metadata.
One potential use case for another mapper to exist at the same time is if we wanted to load instances of our class not just from the immediate Table to which it is mapped, but from another selectable that is a derivation of that Table. While there technically is a way to create such a mapper(), using the non_primary=True option, this approach is virtually never needed. Instead, we use the functionality of the Query object to achieve this, using a method such as Query.select_from() or Query.from_statement() to specify a derived selectable.
Another potential use is if we genuinely want instances of our class to be persisted into different tables at different times; certain kinds of data sharding configurations may persist a particular class into tables that are identical in structure except for their name. For this kind of pattern, Python offers a better approach than the complexity of mapping the same class multiple times, which is to instead create new mapped classes for each target table. SQLAlchemy refers to this as the “entity name” pattern, which is described as a recipe at Entity Name.
Mapping imposes no restrictions or requirements on the constructor (__init__) method for the class. You are free to require any arguments for the function that you wish, assign attributes to the instance that are unknown to the ORM, and generally do anything else you would normally do when writing a constructor for a Python class.
The SQLAlchemy ORM does not call __init__ when recreating objects from database rows. The ORM’s process is somewhat akin to the Python standard library’s pickle module, invoking the low level __new__ method and then quietly restoring attributes directly on the instance rather than calling __init__.
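The __new__-based restoration can be demonstrated in plain Python. This is an illustration of the general mechanism shared with pickle, not SQLAlchemy's actual loader code:

```python
class MyMappedClass(object):
    def __init__(self, data):
        self.data = data
        self.stuff = []   # transient state, not stored in the database

# roughly what the ORM (like pickle) does when loading a row:
obj = MyMappedClass.__new__(MyMappedClass)   # allocate without calling __init__
obj.__dict__['data'] = 'loaded from a row'   # restore mapped state directly
assert obj.data == 'loaded from a row'
assert not hasattr(obj, 'stuff')             # evidence that __init__ never ran
```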
If you need to do some setup on database-loaded instances before they’re ready to use, you can use the @reconstructor decorator to tag a method as the ORM counterpart to __init__. SQLAlchemy will call this method with no arguments every time it loads or reconstructs one of your instances. This is useful for recreating transient properties that are normally assigned in your __init__:
from sqlalchemy import orm

class MyMappedClass(object):
    def __init__(self, data):
        self.data = data
        # we need stuff on all instances, but not in the database.
        self.stuff = []

    @orm.reconstructor
    def init_on_load(self):
        self.stuff = []
When obj = MyMappedClass() is executed, Python calls the __init__ method as normal and the data argument is required. When instances are loaded during a Query operation as in query(MyMappedClass).one(), init_on_load is called.
Any method may be tagged as the reconstructor(), even the __init__ method. SQLAlchemy will call the reconstructor method with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.
reconstructor() is a shortcut into a larger system of “instance level” events, which can be subscribed to using the event API - see InstanceEvents for the full API description of these events.
sqlalchemy.orm.reconstructor(fn)
Decorate a method as the ‘reconstructor’ hook.
Designates a method as the “reconstructor”, an __init__-like method that will be called by the ORM after the instance has been loaded from the database or otherwise reconstituted.
The reconstructor will be invoked with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.
The Mapper supports management of a version id column, which is a single table column that increments or otherwise updates its value each time an UPDATE to the mapped table occurs. This value is checked each time the ORM emits an UPDATE or DELETE against the row to ensure that the value held in memory matches the database value.
The purpose of this feature is to detect when two concurrent transactions are modifying the same row at roughly the same time, or alternatively to provide a guard against the usage of a “stale” row in a system that might be re-using data from a previous transaction without refreshing (e.g. if one sets expire_on_commit=False with a Session, it is possible to re-use the data from a previous transaction).
Concurrent transaction updates
When detecting concurrent updates within transactions, it is typically the case that the database’s transaction isolation level is below the level of repeatable read; otherwise, the transaction will not be exposed to a new row value created by a concurrent update which conflicts with the locally updated value. In this case, the SQLAlchemy versioning feature will typically not be useful for in-transaction conflict detection, though it still can be used for cross-transaction staleness detection.
The database that enforces repeatable reads will typically either have locked the target row against a concurrent update, or is employing some form of multi version concurrency control such that it will emit an error when the transaction is committed. SQLAlchemy’s version_id_col is an alternative which allows version tracking to occur for specific tables within a transaction that otherwise might not have this isolation level set.
See also
Repeatable Read Isolation Level - Postgresql’s implementation of repeatable read, including a description of the error condition.
The most straightforward way to track versions is to add an integer column to the mapped table, then establish it as the version_id_col within the mapper options:
class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    version_id = Column(Integer, nullable=False)
    name = Column(String(50), nullable=False)

    __mapper_args__ = {
        "version_id_col": version_id
    }
Above, the User mapping tracks integer versions using the column version_id. When an object of type User is first flushed, the version_id column will be given a value of “1”. Then, an UPDATE of the table later on will always be emitted in a manner similar to the following:

UPDATE user SET version_id=:version_id, name=:name
WHERE user.id = :user_id AND user.version_id = :user_version_id
{"name": "new name", "version_id": 2, "user_id": 1, "user_version_id": 1}
The above UPDATE statement is updating the row that not only matches user.id = 1, it also is requiring that user.version_id = 1, where “1” is the last version identifier known to be in use for this object. If a transaction elsewhere has modified the row independently, this version id will no longer match, and the UPDATE statement will report that no rows matched; this is the condition that SQLAlchemy tests, that exactly one row matched our UPDATE (or DELETE) statement. If zero rows match, that indicates our version of the data is stale, and a StaleDataError is raised.
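The count-and-check behavior can be simulated with a dictionary standing in for the database row. StaleDataError below is a local stand-in class, not the one imported from sqlalchemy.orm.exc:

```python
class StaleDataError(Exception):
    """Stand-in for sqlalchemy.orm.exc.StaleDataError."""

def versioned_update(row, expected_version, **values):
    # emulate: UPDATE ... WHERE id = :id AND version_id = :expected;
    # a version mismatch means the UPDATE matches zero rows
    if row['version_id'] != expected_version:
        raise StaleDataError("UPDATE matched 0 rows; stale version")
    row.update(values)
    row['version_id'] += 1

row = {'id': 1, 'version_id': 1, 'name': 'ed'}
versioned_update(row, 1, name='new name')
assert row == {'id': 1, 'version_id': 2, 'name': 'new name'}

# a second writer still holding version "1" now fails
conflict = False
try:
    versioned_update(row, 1, name='conflicting name')
except StaleDataError:
    conflict = True
assert conflict
```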
Other kinds of values or counters can be used for versioning. Common types include dates and GUIDs. When using an alternate type or counter scheme, SQLAlchemy provides a hook for this scheme using the version_id_generator argument, which accepts a version generation callable. This callable is passed the value of the current known version, and is expected to return the subsequent version.
For example, if we wanted to track the versioning of our User class using a randomly generated GUID, we could do this (note that some backends support a native GUID type, but we illustrate here using a simple string):
import uuid

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    version_uuid = Column(String(32))
    name = Column(String(50), nullable=False)

    __mapper_args__ = {
        'version_id_col': version_uuid,
        'version_id_generator': lambda version: uuid.uuid4().hex
    }
The persistence engine will call upon uuid.uuid4() each time a User object is subject to an INSERT or an UPDATE. In this case, our version generation function can disregard the incoming value of version, as the uuid4() function generates identifiers without any prerequisite value. If we were using a sequential versioning scheme such as numeric or a special character system, we could make use of the given version in order to help determine the subsequent value.
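Both styles of version_id_generator callable can be exercised standalone; the helper names below are illustrative, not SQLAlchemy API:

```python
import uuid

def uuid_version_generator(current_version):
    # uuid4 needs no prior state, so the incoming value is ignored
    return uuid.uuid4().hex

def incrementing_version_generator(current_version):
    # a sequential scheme does make use of the given value
    return (current_version or 0) + 1

v1 = uuid_version_generator(None)
v2 = uuid_version_generator(v1)
assert v1 != v2                  # each call yields a fresh identifier
assert len(v1) == 32             # uuid4().hex is 32 hex characters

assert incrementing_version_generator(None) == 1
assert incrementing_version_generator(3) == 4
```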
sqlalchemy.orm.mapper(class_, local_table=None, *args, **params)
Return a new Mapper object.
This function is typically used behind the scenes via the Declarative extension. When using Declarative, many of the usual mapper() arguments are handled by the Declarative extension itself, including class_, local_table, properties, and inherits. Other options are passed to mapper() using the __mapper_args__ class variable:
class MyClass(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)
    type = Column(String(50))
    alt = Column("some_alt", Integer)

    __mapper_args__ = {
        'polymorphic_on': type
    }
Explicit use of mapper() is often referred to as classical mapping. The above declarative example is equivalent in classical form to:
my_table = Table("my_table", metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(50)),
    Column("some_alt", Integer)
)

class MyClass(object):
    pass

mapper(MyClass, my_table,
    polymorphic_on=my_table.c.type,
    properties={
        'alt': my_table.c.some_alt
    })
See also:
Classical Mappings - discussion of direct usage of mapper()
sqlalchemy.orm.object_mapper(instance)
Given an object, return the primary Mapper associated with the object instance.
Raises sqlalchemy.orm.exc.UnmappedInstanceError if no mapping is configured.
This function is available via the inspection system as:
inspect(instance).mapper
Using the inspection system will raise sqlalchemy.exc.NoInspectionAvailable if the instance is not part of a mapping.
sqlalchemy.orm.class_mapper(class_, configure=True)
Given a class, return the primary Mapper associated with the given class.
Raises UnmappedClassError if no mapping is configured on the given class, or ArgumentError if a non-class object is passed.
Equivalent functionality is available via the inspect() function as:
inspect(some_mapped_class)
Using the inspection system will raise sqlalchemy.exc.NoInspectionAvailable if the class is not mapped.
sqlalchemy.orm.configure_mappers()
Initialize the inter-mapper relationships of all mappers that have been constructed thus far.
This function can be called any number of times, but in most cases is handled internally.
sqlalchemy.orm.clear_mappers()
Remove all mappers from all classes.
This function removes all instrumentation from classes and disposes of their associated mappers. Once called, the classes are unmapped and can be later re-mapped with new mappers.
clear_mappers() is not for normal use, as there is literally no valid usage for it outside of very specific testing scenarios. Normally, mappers are permanent structural components of user-defined classes, and are never discarded independently of their class. If a mapped class itself is garbage collected, its mapper is automatically disposed of as well. As such, clear_mappers() is only for usage in test suites that re-use the same classes with different mappings, which is itself an extremely rare use case - the only such use case is in fact SQLAlchemy’s own test suite, and possibly the test suites of other ORM extension libraries which intend to test various combinations of mapper construction upon a fixed set of classes.
sqlalchemy.orm.util.identity_key(*args, **kwargs)
Generate “identity key” tuples, as are used as keys in the Session.identity_map dictionary.
This function has several call styles:
identity_key(class, ident)
This form receives a mapped class and a primary key scalar or tuple as an argument.
E.g.:
>>> identity_key(MyClass, (1, 2))
(<class '__main__.MyClass'>, (1, 2))
param class: mapped class (must be a positional argument)
param ident: primary key, may be a scalar or tuple argument.
identity_key(instance=instance)
This form will produce the identity key for a given instance. The instance need not be persistent, only that its primary key attributes are populated (else the key will contain None for those missing values).
E.g.:
>>> instance = MyClass(1, 2)
>>> identity_key(instance=instance)
(<class '__main__.MyClass'>, (1, 2))
In this form, the given instance is ultimately run through Mapper.identity_key_from_instance(), which will have the effect of performing a database check for the corresponding row if the object is expired.
param instance: object instance (must be given as a keyword arg)
identity_key(class, row=row)
This form is similar to the class/tuple form, except is passed a database result row as a RowProxy object.
E.g.:
>>> row = engine.execute("select * from table where a=1 and b=2").first()
>>> identity_key(MyClass, row=row)
(<class '__main__.MyClass'>, (1, 2))
param class: mapped class (must be a positional argument)
param row: RowProxy row returned by a ResultProxy (must be given as a keyword arg)
sqlalchemy.orm.util.polymorphic_union(table_map, typecolname, aliasname='p_union', cast_nulls=True)
Create a UNION statement used by a polymorphic mapper.
See Concrete Table Inheritance for an example of how this is used.
sqlalchemy.orm.mapper.Mapper(class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, eager_defaults=False, legacy_is_orphan=False, _compiled_cache_size=100)
Bases: sqlalchemy.orm.interfaces._InspectionAttr
Define the correlation of class attributes to database table columns.
The Mapper object is instantiated using the mapper() function. For information about instantiating new Mapper objects, see that function’s documentation.
When mapper() is used explicitly to link a user defined class with table metadata, this is referred to as classical mapping. Modern SQLAlchemy usage tends to favor the sqlalchemy.ext.declarative extension for class configuration, which makes usage of mapper() behind the scenes.
Given a particular class known to be mapped by the ORM, the Mapper which maintains it can be acquired using the inspect() function:

from sqlalchemy import inspect

mapper = inspect(MyClass)

A class which was mapped by the sqlalchemy.ext.declarative extension will also have its mapper available via the __mapper__ attribute.
__init__(class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, eager_defaults=False, legacy_is_orphan=False, _compiled_cache_size=100)
Construct a new mapper.
Mappers are normally constructed via the mapper() function. See that function’s documentation for details.
add_properties(dict_of_properties)
Add the given dictionary of properties to this mapper, using add_property.
add_property(key, prop)
Add an individual MapperProperty to this mapper.
If the mapper has not been configured yet, just adds the property to the initial properties dictionary sent to the constructor. If this Mapper has already been configured, then the given MapperProperty is configured immediately.
all_orm_descriptors
A namespace of all _InspectionAttr attributes associated with the mapped class.
These attributes are in all cases Python descriptors associated with the mapped class or its superclasses.
This namespace includes attributes that are mapped to the class as well as attributes declared by extension modules. It includes any Python descriptor type that inherits from _InspectionAttr. This includes QueryableAttribute, as well as extension types such as hybrid_property, hybrid_method and AssociationProxy.
To distinguish between mapped attributes and extension attributes, the attribute _InspectionAttr.extension_type will refer to a constant that distinguishes between different extension types.
When dealing with a QueryableAttribute, the QueryableAttribute.property attribute refers to the MapperProperty property, which is what you get when referring to the collection of mapped properties via Mapper.attrs.
New in version 0.8.0.
attrs
A namespace of all MapperProperty objects associated with this mapper.
This is an object that provides each property based on its key name. For instance, the mapper for a User class which has a User.name attribute would provide mapper.attrs.name, which would be the ColumnProperty representing the name column. The namespace object can also be iterated, which would yield each MapperProperty.
Mapper has several pre-filtered views of this attribute which limit the types of properties returned, including synonyms, column_attrs, relationships, and composites.
base_mapper = None
The base-most Mapper in an inheritance chain.
In a non-inheriting scenario, this attribute will always be this Mapper. In an inheritance scenario, it references the Mapper which is parent to all other Mapper objects in the inheritance chain.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
cascade_iterator(type_, state, halt_on=None)
Iterate each element and its mapper in an object graph, for all relationships that meet the given cascade rule.
The return values are object instances; this provides a strong reference so that they don’t fall out of scope immediately.
class_ = None
The Python class which this Mapper maps.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
class_manager = None
The ClassManager which maintains event listeners and class-bound descriptors for this Mapper.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
column_attrs
Return a namespace of all ColumnProperty properties maintained by this Mapper.
See also
Mapper.attrs - namespace of all MapperProperty objects.
columns = None
A collection of Column or other scalar expression objects maintained by this Mapper.
The collection behaves the same as that of the c attribute on any Table object, except that only those columns included in this mapping are present, and are keyed based on the attribute name defined in the mapping, not necessarily the key attribute of the Column itself. Additionally, scalar expressions mapped by column_property() are also present here.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
common_parent(other)
Return true if the given mapper shares a common inherited parent as this mapper.
compile()
Initialize the inter-mapper relationships of all mappers that have been constructed thus far.
Deprecated since version 0.7: Mapper.compile() is replaced by configure_mappers()
compiled
Deprecated since version 0.7: Mapper.compiled is replaced by Mapper.configured
composites
Return a namespace of all CompositeProperty properties maintained by this Mapper.
See also
Mapper.attrs - namespace of all MapperProperty objects.
concrete = None
Represent True if this Mapper is a concrete inheritance mapper.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
configured = None
Represent True if this Mapper has been configured.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
entity
Part of the inspection API.
Returns self.class_.
get_property(key, _configure_mappers=True)
return a MapperProperty associated with the given key.
get_property_by_column(column)
Given a Column object, return the MapperProperty which maps this column.
identity_key_from_instance(instance)
Return the identity key for the given instance, based on its primary key attributes.
If the instance’s state is expired, calling this method will result in a database check to see if the object has been deleted. If the row no longer exists, ObjectDeletedError is raised.
This value is typically also found on the instance state under the attribute name key.
identity_key_from_primary_key(primary_key)
Return an identity-map key for use in storing/retrieving an item from an identity map.
param primary_key: A list of values indicating the identifier.
identity_key_from_row(row, adapter=None)
Return an identity-map key for use in storing/retrieving an item from the identity map.
param row: A RowProxy instance. The columns which are mapped by this Mapper should be locatable in the row, preferably via the Column object directly (as is the case when a select() construct is executed), or via string names of the form <tablename>_<colname>.
inherits = None
References the Mapper which this Mapper inherits from, if any.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
is_mapper = True
Part of the inspection API.
isa(other)
Return True if this mapper inherits from the given mapper.
iterate_properties
return an iterator of all MapperProperty objects.
local_table = None
The Selectable which this Mapper manages.
Typically is an instance of Table or Alias. May also be None.
The “local” table is the selectable that the Mapper is directly responsible for managing from an attribute access and flush perspective. For non-inheriting mappers, the local table is the same as the “mapped” table. For joined-table inheritance mappers, local_table will be the particular sub-table of the overall “join” which this Mapper represents. If this mapper is a single-table inheriting mapper, local_table will be None.
mapped_table = None
The Selectable to which this Mapper is mapped.
Typically an instance of Table, Join, or Alias.
The “mapped” table is the selectable that the mapper selects from during queries. For non-inheriting mappers, the mapped table is the same as the “local” table. For joined-table inheritance mappers, mapped_table references the full Join representing full rows for this particular subclass. For single-table inheritance mappers, mapped_table references the base table.
mapper
Part of the inspection API.
Returns self.
non_primary = None
Represent True if this Mapper is a “non-primary” mapper, e.g. a mapper that is used only to select rows but not for persistence management.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
polymorphic_identity = None
Represent an identifier which is matched against the polymorphic_on column during result row loading.
Used only with inheritance, this object can be of any type which is comparable to the type of column represented by polymorphic_on.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
polymorphic_iterator()
Iterate through the collection including this mapper and all descendant mappers.
This includes not just the immediately inheriting mappers but all their inheriting mappers as well.
To iterate through an entire hierarchy, use mapper.base_mapper.polymorphic_iterator().
polymorphic_map = None
A mapping of “polymorphic identity” identifiers mapped to Mapper instances, within an inheritance scenario.
The identifiers can be of any type which is comparable to the type of column represented by polymorphic_on.
An inheritance chain of mappers will all reference the same polymorphic map object. The object is used to correlate incoming result rows to target mappers.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
polymorphic_on = None
The Column or SQL expression specified as the polymorphic_on argument for this Mapper, within an inheritance scenario.
This attribute is normally a Column instance but may also be an expression, such as one derived from cast().
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
primary_key = None
An iterable containing the collection of Column objects which comprise the ‘primary key’ of the mapped table, from the perspective of this Mapper.
This list is against the selectable in mapped_table. In the case of inheriting mappers, some columns may be managed by a superclass mapper. For example, in the case of a Join, the primary key is determined by all of the primary key columns across all tables referenced by the Join.
The list is also not necessarily the same as the primary key column collection associated with the underlying tables; the Mapper features a primary_key argument that can override what the Mapper considers as primary key columns.
This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
primary_key_from_instance(instance)¶
Return the list of primary key values for the given instance.
If the instance’s state is expired, calling this method will result in a database check to see if the object has been deleted. If the row no longer exists, ObjectDeletedError is raised.
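As a quick sketch of both attributes, assuming SQLAlchemy with an in-memory SQLite database (the User class here is a stand-in re-declared for this example, not the one from the mapping above):

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import Session

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

user_mapper = inspect(User)

# primary_key is the collection of Column objects forming the key.
print([c.name for c in user_mapper.primary_key])

session = Session(bind=engine)
u = User(name='ed')
session.add(u)
session.flush()  # assigns the autoincrement primary key

# primary_key_from_instance returns the key *values* for one instance.
print(user_mapper.primary_key_from_instance(u))
session.close()
```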
primary_mapper()¶
Return the primary mapper corresponding to this mapper’s class key (class).
relationships¶
Return a namespace of all RelationshipProperty properties maintained by this Mapper.
See also
Mapper.attrs - namespace of all MapperProperty objects.
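The relationships namespace can be read like a dictionary keyed on attribute name. A brief sketch, re-declaring the User/Address pairing from the classical example in declarative form (the class bodies here are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, inspect
from sqlalchemy.orm import configure_mappers, relationship

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    addresses = relationship('Address', backref='user')

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))

# Relationship configuration normally happens lazily; force it so the
# namespace is fully populated before we look at it.
configure_mappers()

user_mapper = inspect(User)
print(list(user_mapper.relationships.keys()))
print(user_mapper.relationships['addresses'].mapper.class_.__name__)
```

Note that the `backref='user'` relationship appears in `inspect(Address).relationships`, not in the User mapper's namespace.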
selectable¶
The select() construct this Mapper selects from by default.
Normally, this is equivalent to mapped_table, unless the with_polymorphic feature is in use, in which case the full “polymorphic” selectable is returned.
self_and_descendants¶
The collection including this mapper and all descendant mappers.
This includes not just the immediately inheriting mappers but also all of their inheriting mappers.
single = None¶
Represent True if this Mapper is a single table inheritance mapper.
local_table will be None if this flag is set.
This is a read-only attribute determined during mapper construction. Behavior is undefined if directly modified.
synonyms¶
Return a namespace of all SynonymProperty properties maintained by this Mapper.
See also
Mapper.attrs - namespace of all MapperProperty objects.
tables = None¶
An iterable containing the collection of Table objects which this Mapper is aware of.
If the mapper is mapped to a Join, or an Alias representing a Select, the individual Table objects that comprise the full construct will be represented here.
This is a read-only attribute determined during mapper construction. Behavior is undefined if directly modified.
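A mapper selecting from a Join arises naturally with joined-table inheritance, where the subclass mapper selects from a join of the base and subclass tables; its tables collection then lists both underlying Table objects. A sketch, with illustrative class names:

```python
from sqlalchemy import Column, ForeignKey, Integer, inspect

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)

class Engineer(Employee):
    __tablename__ = 'engineer'
    id = Column(Integer, ForeignKey('employee.id'), primary_key=True)

# The Engineer mapper selects from employee JOIN engineer, so .tables
# reports each underlying Table individually.
print(sorted(t.name for t in inspect(Engineer).tables))
```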
validators = None¶
An immutable dictionary of attributes which have been decorated using the validates() decorator.
The dictionary contains string attribute names as keys, mapped to the actual validation method.
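For example, decorating a method with validates() registers it under the validated attribute's name in this dictionary. A sketch, where the User class and its email check are illustrative assumptions:

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import validates

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    email = Column(String(50))

    @validates('email')
    def validate_email(self, key, value):
        # illustrative check; reject clearly malformed addresses
        if '@' not in value:
            raise ValueError("invalid email: %r" % value)
        return value

# The mapper records which attributes carry validators.
print(list(inspect(User).validators))
```

The exact value stored per key varies across SQLAlchemy versions (the method itself in older releases, a tuple of method name and options in newer ones), so portable code should key on the attribute names.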