SQLAlchemy 0.3 Documentation

Version: 0.3.1 Last Updated: 11/12/06 18:46:51

This section details all the options available to Mappers, as well as advanced patterns.

To start, here are the tables we will work with again:

from sqlalchemy import *

metadata = MetaData()

# a table to store users
users_table = Table('users', metadata,
    Column('user_id', Integer, primary_key = True),
    Column('user_name', String(40)),
    Column('password', String(80))
)

# a table that stores mailing addresses associated with a specific user
addresses_table = Table('addresses', metadata,
    Column('address_id', Integer, primary_key = True),
    Column('user_id', Integer, ForeignKey("users.user_id")),
    Column('street', String(100)),
    Column('city', String(80)),
    Column('state', String(2)),
    Column('zip', String(10))
)

# a table that stores keywords
keywords_table = Table('keywords', metadata,
    Column('keyword_id', Integer, primary_key = True),
    Column('name', VARCHAR(50))
)

# a table that associates keywords with users
userkeywords_table = Table('userkeywords', metadata,
    Column('user_id', INT, ForeignKey("users.user_id")),
    Column('keyword_id', INT, ForeignKey("keywords.keyword_id"))
)

More On Mapper Properties

Overriding Column Names

When mappers are constructed, by default the column names in the Table metadata are used as the names of attributes on the mapped class. This can be customized within the properties by stating the key/column combinations explicitly:

user_mapper = mapper(User, users_table, properties={
    'id' : users_table.c.user_id,
    'name' : users_table.c.user_name,
})

In the situation when column names overlap in a mapper against multiple tables, columns may be referenced together with a list:

# join users and addresses
usersaddresses = sql.join(users_table, addresses_table, users_table.c.user_id == addresses_table.c.user_id)
m = mapper(User, usersaddresses,   
    properties = {
        'id' : [users_table.c.user_id, addresses_table.c.user_id],
    }
    )

Overriding Properties

A common request is the ability to create custom class properties that override the behavior of setting/getting an attribute. Currently, the easiest way to do this in SQLAlchemy is how it would be done in any Python program; define your attribute with a different name, such as "_attribute", and use a property to get/set its value. The mapper just needs to be told of the special name:

class MyClass(object):
    def _set_email(self, email):
        self._email = email
    def _get_email(self):
        return self._email
    email = property(_get_email, _set_email)

mapper(MyClass, mytable, properties = {
   # map the '_email' attribute to the "email" column
   # on the table
   '_email': mytable.c.email
})

It is also possible to route the select_by and get_by functions on Query using the new property name, by establishing a synonym:

mapper(MyClass, mytable, properties = {
    # map the '_email' attribute to the "email" column
    # on the table
    '_email': mytable.c.email,

    # make a synonym 'email'
    'email' : synonym('_email')
})

# now you can select_by(email)
result = session.query(MyClass).select_by(email='john@smith.com')

Synonym can be established with the flag "proxy=True", to create a class-level proxy to the actual property:

mapper(MyClass, mytable, properties = {
    '_email': mytable.c.email,
    'email' : synonym('_email', proxy=True)
})

x = MyClass()
x.email = 'john@doe.com'

>>> x._email
'john@doe.com'
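The effect of proxy=True can be pictured with a plain Python descriptor that delegates to another attribute. This is only an illustration of the idea (the Proxy class below is invented for the sketch), not SQLAlchemy's implementation:

```python
# A plain-Python descriptor that forwards get/set to another attribute;
# purely illustrative of what a class-level proxy provides.
class Proxy(object):
    def __init__(self, target):
        self.target = target
    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, self.target)
    def __set__(self, obj, value):
        setattr(obj, self.target, value)

class MyClass(object):
    email = Proxy('_email')

x = MyClass()
x.email = 'john@doe.com'
print(x._email)   # 'john@doe.com'
```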

The synonym keyword is currently an Alpha Feature.


Custom List Classes

Feature Status: Alpha API

A one-to-many or many-to-many relationship results in a list-holding element being attached to all instances of a class. The actual list is an "instrumented" list, which transparently maintains a relationship to a plain Python list. The implementation of the underlying plain list can be changed to be any object that implements a list-style append and __iter__ method. A common need is for a list-based relationship to actually be a dictionary. This can be achieved by subclassing dict to have list-like behavior.

In this example, a class MyClass is defined, which is associated with a parent object MyParent. The collection of MyClass objects on each MyParent object will be a dictionary, storing each MyClass instance keyed to its name attribute.

# a class to be stored in the list
class MyClass(object):
    def __init__(self, name):
        self.name = name

# create a dictionary that will act like a list, and store
# instances of MyClass
class MyDict(dict):
    def append(self, item):
        self[item.name] = item
    def __iter__(self):
        return iter(self.values())

# parent class
class MyParent(object):
    pass

# mappers, constructed normally
mapper(MyClass, myclass_table)
mapper(MyParent, myparent_table, properties={
    'myclasses' : relation(MyClass, collection_class=MyDict)
})

# elements on 'myclasses' can be accessed via string keyname
myparent = MyParent()
myparent.myclasses.append(MyClass('this is myclass'))
myclass = myparent.myclasses['this is myclass']
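The collection class itself can be exercised without any mapper or database involved; a minimal standalone sketch:

```python
# A standalone demonstration of the dictionary-as-collection idea; only
# the collection class behavior is shown, no mapper is involved.
class MyClass(object):
    def __init__(self, name):
        self.name = name

class MyDict(dict):
    def append(self, item):
        # list-style append: store the item keyed on its 'name'
        self[item.name] = item
    def __iter__(self):
        # iterate over the stored items, not the keys
        return iter(self.values())

d = MyDict()
d.append(MyClass('alpha'))
d.append(MyClass('beta'))
print(sorted(item.name for item in d))   # ['alpha', 'beta']
print(d['alpha'].name)                   # alpha
```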

Custom Join Conditions


When creating relations on a mapper, most examples so far have illustrated the mapper and relationship joining up based on the foreign keys of the tables they represent. In fact, this "automatic" inspection can be completely circumvented using the primaryjoin and secondaryjoin arguments to relation, as in this example, which creates a User object with a relationship to only those of its Addresses which are in Boston:

class User(object):
    pass
class Address(object):
    pass

mapper(Address, addresses_table)
mapper(User, users_table, properties={
    'boston_addresses' : relation(Address, primaryjoin=
                and_(users_table.c.user_id==addresses_table.c.user_id, 
                addresses_table.c.city=='Boston'))
})

Many to many relationships can be customized by one or both of primaryjoin and secondaryjoin, shown below with just the default many-to-many relationship explicitly set:

class User(object):
    pass
class Keyword(object):
    pass
mapper(Keyword, keywords_table)
mapper(User, users_table, properties={
    'keywords':relation(Keyword, secondary=userkeywords_table,
        primaryjoin=users_table.c.user_id==userkeywords_table.c.user_id,
        secondaryjoin=userkeywords_table.c.keyword_id==keywords_table.c.keyword_id
        )
})

Lazy/Eager Joins Multiple Times to One Table

The previous example leads in to the idea of joining against the same table multiple times. Below is a User object that has lists of its Boston and New York addresses:

mapper(User, users_table, properties={
    'boston_addresses' : relation(Address, primaryjoin=
                and_(users_table.c.user_id==addresses_table.c.user_id, 
                addresses_table.c.city=='Boston')),
    'newyork_addresses' : relation(Address, primaryjoin=
                and_(users_table.c.user_id==addresses_table.c.user_id, 
                addresses_table.c.city=='New York')),
})

Both lazy and eager loading support multiple joins equally well.


Deferred Column Loading

This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced. It is essentially "column-level lazy loading". This feature is useful when one wants to avoid loading a large text or binary field into memory when it is not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together.
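The general idea can be sketched in plain Python, with __getattr__ standing in for the mapper's attribute instrumentation; the DeferredBook class and its loader callable below are purely illustrative, not SQLAlchemy internals:

```python
# Schematic sketch of column-level lazy loading: the 'photo' attribute
# is absent until first access, at which point a loader callable
# (which in the real case would issue a SELECT) fetches and caches it.
class DeferredBook(object):
    def __init__(self, title, photo_loader):
        self.title = title
        self._photo_loader = photo_loader
    def __getattr__(self, name):
        # called only when normal attribute lookup fails, i.e. before
        # the deferred value has been loaded
        if name == 'photo':
            value = self._photo_loader()
            self.photo = value              # cache on the instance
            return value
        raise AttributeError(name)

loads = []
def fake_loader():
    loads.append(1)
    return b'\x89PNG...'

book = DeferredBook('My Book', fake_loader)
book.photo          # first access triggers the loader
book.photo          # second access hits the cached attribute
print(len(loads))   # 1
```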

book_excerpts = Table('books', db, 
    Column('book_id', Integer, primary_key=True),
    Column('title', String(200), nullable=False),
    Column('summary', String(2000)),
    Column('excerpt', String),
    Column('photo', Binary)
)

class Book(object):
    pass

# define a mapper that will load each of 'excerpt' and 'photo' in 
# separate, individual-row SELECT statements when each attribute
# is first referenced on the individual object instance
mapper(Book, book_excerpts, properties = {
    'excerpt' : deferred(book_excerpts.c.excerpt),
    'photo' : deferred(book_excerpts.c.photo)
})

Deferred columns can be placed into groups so that they load together:

book_excerpts = Table('books', db, 
    Column('book_id', Integer, primary_key=True),
    Column('title', String(200), nullable=False),
    Column('summary', String(2000)),
    Column('excerpt', String),
    Column('photo1', Binary),
    Column('photo2', Binary),
    Column('photo3', Binary)
)

class Book(object):
    pass

# define a mapper with a 'photos' deferred group.  when one photo is referenced,
# all three photos will be loaded in one SELECT statement.  The 'excerpt' will 
# be loaded separately when it is first referenced.
mapper(Book, book_excerpts, properties = {
    'excerpt' : deferred(book_excerpts.c.excerpt),
    'photo1' : deferred(book_excerpts.c.photo1, group='photos'),
    'photo2' : deferred(book_excerpts.c.photo2, group='photos'),
    'photo3' : deferred(book_excerpts.c.photo3, group='photos')
})

Relation Options

Keyword options to the relation function include:

  • lazy=(True|False|None) - specifies how the related items should be loaded. A value of True indicates they should be loaded when the property is first accessed. A value of False indicates they should be loaded by joining against the parent object query, so parent and child are loaded in one round trip. A value of None indicates the related items are not loaded by the mapper in any case; the application will manually insert items into the list in some other way. A relationship with lazy=None is still important; items added to the list or removed will cause the appropriate updates and deletes upon flush(). Future capabilities for lazy might also include "lazy='extra'", which would allow lazy loading of child elements one at a time, for very large collections.
  • cascade - a string list of cascade rules which determines how persistence operations should be "cascaded" from parent to child. For a description of cascade rules, see datamapping_relations_cycle and Cascade rules.
  • secondary - for a many-to-many relationship, specifies the intermediary table.
  • primaryjoin - a ClauseElement that will be used as the primary join of this child object against the parent object, or in a many-to-many relationship the join of the primary object to the association table. By default, this value is computed based on the foreign key relationships of the parent and child tables (or association table).
  • secondaryjoin - a ClauseElement that will be used as the join of an association table to the child object. By default, this value is computed based on the foreign key relationships of the association and child tables.
  • foreignkey - a single Column, or list of Columns, specifying which column(s) in this relationship are "foreign", i.e. which column refers to the remote object. This value is automatically determined in most cases from the primary join condition, by looking at which column in the join condition contains a ForeignKey pointing to the other column in an equality expression. Specifying it here can override the normal foreign key properties of the join condition, which is useful for self-referential table relationships, join conditions where a ForeignKey is not present, or where the same column might appear on both sides of the join condition.
  • uselist - a boolean that indicates if this property should be loaded as a list or a scalar. In most cases, this value is determined based on the type and direction of the relationship - one to many forms a list, many to one forms a scalar, many to many is a list. If a scalar is desired where normally a list would be present, such as a bi-directional one-to-one relationship, set uselist to False.
  • private - setting private=True is the equivalent of setting cascade="all, delete-orphan", and indicates the lifecycle of child objects should be contained within that of the parent. See the example in datamapping_relations_cycle.
  • backref - indicates the name of a property to be placed on the related mapper's class that will handle this relationship in the other direction, including synchronizing the object attributes on both sides of the relation. Can also point to a backref() construct for more configurability. See Backreferences.
  • order_by - indicates the ordering that should be applied when loading these items. See the section Controlling Ordering for details.
  • association - Deprecated; as of version 0.3.0 the association keyword is synonymous with applying the "all, delete-orphan" cascade to a "one-to-many" relationship. SA can now automatically reconcile a "delete" and "insert" operation of two objects with the same "identity" in a flush() operation into a single "update" statement, which is the pattern that "association" used to indicate. See the updated example of association mappings in Association Object.
  • post_update - this indicates that the relationship should be handled by a second UPDATE statement after an INSERT, or before a DELETE. Using this flag essentially means the relationship will not incur any "dependency" between parent and child item, as the particular foreign key relationship between them is handled by a second statement. Use this flag when a particular mapping arrangement will incur two rows that are dependent on each other, such as a table that has a one-to-many relationship to a set of child rows, and also has a column that references a single child row within that list (i.e. both tables contain a foreign key to each other). If a flush() operation returns an error that a "cyclical dependency" was detected, this is a cue that you might want to use post_update.
  • viewonly=(True|False) - when set to True, the relation is used only for loading objects within the relationship, and has no effect on the unit-of-work flush process. Relations with viewonly can specify any kind of join conditions to provide additional views of related objects onto a parent object.
  • collection_class = None - a class or function that returns a new list-holding object. It will be used in place of a plain list for storing elements.
  • passive_deletes = False - if False, child instances will be loaded from the database into the current session when a parent instance is deleted so that delete cascade can be processed on those child instances, even if they were not loaded already. When set to True, lazy loaders will not be fired off in order to process a "delete" cascade, which is appropriate if "ON DELETE" rules have been set up on the database tables themselves.

Controlling Ordering

By default, mappers will attempt to ORDER BY the "oid" column of a table, or the primary key column, when selecting rows. This can be modified in several ways.

The "order_by" parameter can be sent to a mapper, overriding the per-engine ordering if any. A value of None means that the mapper should not use any ordering. A non-None value, which can be a column, an asc or desc clause, or an array of either one, indicates the ORDER BY clause that should be added to all select queries:

# disable all ordering
mapper = mapper(User, users_table, order_by=None)

# order by a column
mapper = mapper(User, users_table, order_by=users_table.c.user_id)

# order by multiple items
mapper = mapper(User, users_table, order_by=[users_table.c.user_id, desc(users_table.c.user_name)])

"order_by" can also be specified to an individual select method, overriding all other per-engine/per-mapper orderings:

# order by a column
l = mapper.select(users_table.c.user_name=='fred', order_by=users_table.c.user_id)

# order by multiple criterion
l = mapper.select(users_table.c.user_name=='fred', order_by=[users_table.c.user_id, desc(users_table.c.user_name)])

For relations, the "order_by" property can also be specified to all forms of relation:

# order address objects by address id
mapper = mapper(User, users_table, properties = {
    'addresses' : relation(mapper(Address, addresses_table), order_by=addresses_table.c.address_id)
})

# eager load with ordering - the ORDER BY clauses of parent/child will be organized properly
mapper = mapper(User, users_table, properties = {
    'addresses' : relation(mapper(Address, addresses_table), order_by=desc(addresses_table.c.email_address), eager=True)
}, order_by=users_table.c.user_id)

Limiting Rows

You can limit rows in a regular SQL query by specifying limit and offset. A Mapper can handle the same concepts:

class User(object):
    pass

mapper(User, users_table)
r = session.query(User).select(limit=20, offset=10)

However, things get tricky when dealing with eager relationships, since a straight LIMIT of rows does not represent the count of items when joining against other tables to load related items as well. So here is what SQLAlchemy will do when you use limit or offset with an eager relationship:

class User(object):
    pass
class Address(object):
    pass

mapper(User, users_table, properties={
    'addresses' : relation(mapper(Address, addresses_table), lazy=False)
})
r = session.query(User).select(User.c.user_name.like('F%'), limit=20, offset=10)
SELECT users.user_id AS users_user_id, users.user_name AS users_user_name,
users.password AS users_password, addresses.address_id AS addresses_address_id,
addresses.user_id AS addresses_user_id, addresses.street AS addresses_street,
addresses.city AS addresses_city, addresses.state AS addresses_state,
addresses.zip AS addresses_zip
FROM
(SELECT users.user_id FROM users WHERE users.user_name LIKE %(users_user_name)s
ORDER BY users.oid LIMIT 20 OFFSET 10) AS rowcount,
users LEFT OUTER JOIN addresses ON users.user_id = addresses.user_id
WHERE rowcount.user_id = users.user_id ORDER BY users.oid, addresses.oid
{'users_user_name': 'F%'}

The main WHERE clause as well as the limiting clauses are coerced into a subquery; this subquery represents the desired result of objects. A containing query, which handles the eager relationships, is joined against the subquery to produce the result.
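The same pattern can be demonstrated in plain SQL with the stdlib sqlite3 module (table contents invented for illustration): the parent rows are limited in a subquery, and the eager join runs against that limited set rather than against the raw LIMIT:

```python
import sqlite3

# Demonstration of the subquery strategy described above: LIMIT the
# parent rows first, then join the limited set against the child table.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, user_name TEXT);
    CREATE TABLE addresses (address_id INTEGER PRIMARY KEY,
                            user_id INTEGER REFERENCES users(user_id),
                            city TEXT);
    INSERT INTO users VALUES (1,'fred'),(2,'frank'),(3,'fiona');
    INSERT INTO addresses VALUES (1,1,'Boston'),(2,1,'New York'),(3,2,'Boston');
""")
rows = conn.execute("""
    SELECT u.user_id, u.user_name, a.city
    FROM (SELECT user_id FROM users
          WHERE user_name LIKE 'f%' ORDER BY user_id LIMIT 2) AS limited
    JOIN users AS u ON u.user_id = limited.user_id
    LEFT OUTER JOIN addresses AS a ON u.user_id = a.user_id
    ORDER BY u.user_id, a.address_id
""").fetchall()
for row in rows:
    print(row)
# two users (fred, frank) are selected by the LIMIT, but fred
# contributes two result rows because he has two addresses
```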


Mapping a Class with Table Inheritance

Feature Status: Alpha Implementation

Inheritance in databases comes in three forms: single table inheritance, where several types of classes are stored in one table, concrete table inheritance, where each type of class is stored in its own table, and multiple table inheritance, where the parent/child classes are stored in their own tables that are joined together in a select.

There is also a concept of polymorphic loading, which indicates if multiple kinds of classes can be loaded in one pass.

SQLAlchemy supports all three kinds of inheritance. Additionally, true polymorphic loading is supported in a straightforward way for single table inheritance, and has some more manually-configured features that can make it happen for concrete and multiple table inheritance.

Working examples of polymorphic inheritance come with the distribution in the directory examples/polymorphic.

Here are the classes we will use to represent an inheritance relationship:

class Employee(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name

class Manager(Employee):
    def __init__(self, name, manager_data):
        self.name = name
        self.manager_data = manager_data
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name + " " +  self.manager_data

class Engineer(Employee):
    def __init__(self, name, engineer_info):
        self.name = name
        self.engineer_info = engineer_info
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name + " " +  self.engineer_info

Each class supports a common name attribute, while the Manager class has its own attribute manager_data and the Engineer class has its own attribute engineer_info.

Single Table Inheritance

This will support polymorphic loading via the Employee mapper.

employees_table = Table('employees', metadata, 
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
    Column('engineer_info', String(50)),
    Column('type', String(20))
)

employee_mapper = mapper(Employee, employees_table, polymorphic_on=employees_table.c.type)
manager_mapper = mapper(Manager, inherits=employee_mapper, polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, inherits=employee_mapper, polymorphic_identity='engineer')

Concrete Table Inheritance

Without polymorphic loading, you just define a separate mapper for each class.

Concrete Inheritance, Non-polymorphic
managers_table = Table('managers', metadata, 
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
)

engineers_table = Table('engineers', metadata, 
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('engineer_info', String(50)),
)

manager_mapper = mapper(Manager, managers_table)
engineer_mapper = mapper(Engineer, engineers_table)

With polymorphic loading, the SQL query that performs the actual polymorphic load must be constructed, usually as a UNION. A helper function, polymorphic_union, exists to create these UNIONs.

Concrete Inheritance, Polymorphic
pjoin = polymorphic_union({
    'manager':managers_table,
    'engineer':engineers_table
}, 'type', 'pjoin')

employee_mapper = mapper(Employee, pjoin, polymorphic_on=pjoin.c.type)
manager_mapper = mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='engineer')
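The UNION produced by polymorphic_union here corresponds roughly to the following plain SQL, shown with the stdlib sqlite3 module and invented row data: each subclass table contributes its rows, padded with NULLs for the columns it lacks, plus a literal type discriminator.

```python
import sqlite3

# Sketch of the polymorphic UNION query for concrete inheritance.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE managers (employee_id INTEGER PRIMARY KEY,
                           name TEXT, manager_data TEXT);
    CREATE TABLE engineers (employee_id INTEGER PRIMARY KEY,
                            name TEXT, engineer_info TEXT);
    INSERT INTO managers VALUES (1, 'mary', 'oversees engineering');
    INSERT INTO engineers VALUES (2, 'ed', 'knows python');
""")
rows = conn.execute("""
    SELECT employee_id, name, manager_data, NULL AS engineer_info,
           'manager' AS type FROM managers
    UNION ALL
    SELECT employee_id, name, NULL, engineer_info, 'engineer' FROM engineers
    ORDER BY employee_id
""").fetchall()
for row in rows:
    print(row)
```

The 'type' column is what a polymorphic mapper would consult (via polymorphic_on) to decide which class to instantiate for each row.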

A future release of SQLAlchemy might better merge the generated UNION into the mapper construction phase.


Multiple Table Inheritance

Like concrete table inheritance, this can be done non-polymorphically, or with a little more complexity, polymorphically:

Multiple Table Inheritance, Non-polymorphic
employees = Table('employees', metadata, 
   Column('person_id', Integer, primary_key=True),
   Column('name', String(50)),
   Column('type', String(30)))

engineers = Table('engineers', metadata, 
   Column('person_id', Integer, ForeignKey('employees.person_id'), primary_key=True),
   Column('engineer_info', String(50)),
  )

managers = Table('managers', metadata, 
   Column('person_id', Integer, ForeignKey('employees.person_id'), primary_key=True),
   Column('manager_data', String(50)),
   )

person_mapper = mapper(Employee, employees)
mapper(Engineer, engineers, inherits=person_mapper)
mapper(Manager, managers, inherits=person_mapper)

Polymorphic:

Multiple Table Inheritance, Polymorphic
person_join = polymorphic_union(
    {
        'engineer':employees.join(engineers),
        'manager':employees.join(managers),
        'person':employees.select(employees.c.type=='person'),
    }, None, 'pjoin')

person_mapper = mapper(Employee, employees, select_table=person_join, polymorphic_on=person_join.c.type, polymorphic_identity='person')
mapper(Engineer, engineers, inherits=person_mapper, polymorphic_identity='engineer')
mapper(Manager, managers, inherits=person_mapper, polymorphic_identity='manager')

The join condition in a multiple table inheritance relationship can be specified explicitly, using inherit_condition:

AddressUser.mapper = mapper(
        AddressUser,
        addresses_table, inherits=User.mapper, 
        inherit_condition=users_table.c.user_id==addresses_table.c.user_id
    )

Mapping a Class against Multiple Tables

Mappers can be constructed against arbitrary relational units (called Selectables) as well as plain Tables. For example, the join function from the SQL package creates a neat selectable unit comprised of multiple tables, complete with its own composite primary key, which can be passed in to a mapper as the table.

# a class
class AddressUser(object):
    pass

# define a Join
j = join(users_table, addresses_table)

# map to it - the identity of an AddressUser object will be 
# based on (user_id, address_id) since those are the primary keys involved
m = mapper(AddressUser, j, properties={
    'user_id':[users_table.c.user_id, addresses_table.c.user_id]
})

A second example:

# many-to-many join on an association table
j = join(users_table, userkeywords_table, 
        users_table.c.user_id==userkeywords_table.c.user_id).join(keywords_table, 
           userkeywords_table.c.keyword_id==keywords_table.c.keyword_id)

# a class 
class KeywordUser(object):
    pass

# map to it - the identity of a KeywordUser object will be
# (user_id, keyword_id) since those are the primary keys involved
m = mapper(KeywordUser, j, properties={
    'user_id':[users_table.c.user_id, userkeywords_table.c.user_id],
    'keyword_id':[userkeywords_table.c.keyword_id, keywords_table.c.keyword_id]
})

In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.


Mapping a Class against Arbitrary Selects

Similar to mapping against a join, a plain select() object can be used with a mapper as well. Below, an example select which contains two aggregate functions and a group_by is mapped to a class:

s = select([customers, 
            func.count(orders.c.customer_id).label('order_count'), 
            func.max(orders.c.price).label('highest_order')],
            customers.c.customer_id==orders.c.customer_id,
            group_by=[c for c in customers.c]
            )
class Customer(object):
    pass

m = mapper(Customer, s)

Above, the "customers" table is joined against the "orders" table to produce a full row for each customer row, the total count of related rows in the "orders" table, and the highest price in the "orders" table, grouped against the full set of columns in the "customers" table. That query is then mapped against the Customer class. New instances of Customer will contain attributes for each column in the "customers" table as well as an "order_count" and "highest_order" attribute. Updates to the Customer object will only be reflected in the "customers" table and not the "orders" table. This is because the primary keys of the "orders" table are not represented in this mapper and therefore the table is not affected by save or delete operations.


Multiple Mappers for One Class

The first mapper created for a certain class is known as that class's "primary mapper." Other mappers can be created as well; these come in two varieties.

  • secondary mapper - this is a mapper that must be constructed with the keyword argument non_primary=True, and represents a load-only mapper. Objects that are loaded with a secondary mapper will have their save operation processed by the primary mapper. It is also invalid to add new relation()s to a non-primary mapper. To use this mapper with the Session, specify it to the query method:

example:

# primary mapper
mapper(User, users_table)

# make a secondary mapper to load User against a join
othermapper = mapper(User, users_table.join(someothertable), non_primary=True)

# select
result = session.query(othermapper).select()
  • entity name mapper - this is a mapper that is a fully functioning primary mapper for a class, which is distinguished from the regular primary mapper by an entity_name parameter. Instances loaded with this mapper will be totally managed by this new mapper and have no connection to the original one. Most methods on Session include an optional entity_name parameter in order to specify this condition.

example:

# primary mapper
mapper(User, users_table)

# make an entity name mapper that stores User objects in another table
mapper(User, alternate_users_table, entity_name='alt')

# make two User objects
user1 = User()
user2 = User()

# save one in the "users" table
session.save(user1)

# save the other in the "alternate_users_table"
session.save(user2, entity_name='alt')

session.flush()

# select from the alternate mapper
session.query(User, entity_name='alt').select()

Self Referential Mappers

A self-referential mapper is a mapper that is designed to operate with an adjacency list table. This is a table that contains one or more foreign keys back to itself, and is usually used to create hierarchical tree structures. SQLAlchemy's default model of saving items based on table dependencies is not sufficient in this case, as an adjacency list table introduces dependencies between individual rows. Fortunately, SQLAlchemy will automatically detect a self-referential mapper and do the extra work to make it function.

# define a self-referential table
trees = Table('treenodes', engine,
    Column('node_id', Integer, primary_key=True),
    Column('parent_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
    Column('node_name', String(50), nullable=False),
    )

# treenode class
class TreeNode(object):
    pass

# mapper defines "children" property, pointing back to TreeNode class,
# with the mapper unspecified.  it will point back to the primary 
# mapper on the TreeNode class.
TreeNode.mapper = mapper(TreeNode, trees, properties={
        'children' : relation(
                        TreeNode, 
                        cascade="all"
                     ),
        }
    )

# or, specify the circular relationship after establishing the original mapper:
mymapper = mapper(TreeNode, trees)

mymapper.add_property('children', relation(
                        mymapper, 
                        cascade="all"
                     ))

This kind of mapper goes through a lot of extra effort when saving and deleting items, to determine the correct dependency graph of nodes within the tree.

A self-referential mapper where there is more than one relationship on the table requires that all join conditions be explicitly spelled out. Below is a self-referring table that contains a "parent_node_id" column to reference parent/child relationships, and a "root_node_id" column which points child nodes back to the ultimate root node:

# define a self-referential table with several relations
trees = Table('treenodes', engine,
    Column('node_id', Integer, primary_key=True),
    Column('parent_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
    Column('root_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
    Column('node_name', String(50), nullable=False),
    )

# treenode class
class TreeNode(object):
    pass

# define the "children" property as well as the "root" property
TreeNode.mapper = mapper(TreeNode, trees, properties={
        'children' : relation(
                        TreeNode, 
                        primaryjoin=trees.c.parent_node_id==trees.c.node_id,
                        cascade="all",
                        backref=backref("parent", foreignkey=trees.c.node_id)
                     ),
        'root' : relation(
                TreeNode,
                primaryjoin=trees.c.root_node_id==trees.c.node_id, 
                foreignkey=trees.c.node_id,
                uselist=False
            )
        }
    )

The "root" property on a TreeNode is a many-to-one relationship. By default, a self-referential mapper declares relationships as one-to-many, so the extra parameter foreignkey, pointing to a column or list of columns on the remote side of a relationship, is needed to indicate a "many-to-one" self-referring relationship. Both TreeNode examples above are available in functional form in the examples/adjacencytree directory of the distribution.


Result-Set Mapping

Take any result set and feed it into a mapper to produce objects. Multiple mappers can be combined to retrieve unrelated objects from the same row in one step. The instances method on mapper takes a ResultProxy object, which is the result type generated from SQLEngine, and delivers object instances.

class User(object):
    pass

User.mapper = mapper(User, users_table)

# select users
c = users_table.select().execute()

# get objects
userlist = User.mapper.instances(c)

# define a second class/mapper
class Address(object):
    pass

Address.mapper = mapper(Address, addresses_table)

# select users and addresses in one query
s = select([users_table, addresses_table], users_table.c.user_id==addresses_table.c.user_id)

# execute it, and process the results with the User mapper, chained to the Address mapper
r = User.mapper.instances(s.execute(), Address.mapper)

# result rows are tuples of objects, one for each mapper used
for entry in r:
    user = entry[0]
    address = entry[1]
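The splitting behind that chained instances() call can be imitated in plain Python (a sketch using the stdlib sqlite3 module, not the SQLAlchemy API; the classes and sample data are hypothetical): each "mapper" claims its own slice of the joined row, and each result entry is a tuple with one object per mapper.

```python
import sqlite3

# illustrative only: split a joined result row into one object per "mapper"
class User(object):
    def __init__(self, user_id, user_name):
        self.user_id, self.user_name = user_id, user_name

class Address(object):
    def __init__(self, address_id, street):
        self.address_id, self.street = address_id, street

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, user_name TEXT)")
conn.execute("CREATE TABLE addresses (address_id INTEGER PRIMARY KEY, "
             "user_id INTEGER, street TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'jack')")
conn.execute("INSERT INTO addresses VALUES (10, 1, 'Green St')")

# one joined SELECT; each row carries columns for both classes
rows = conn.execute(
    "SELECT u.user_id, u.user_name, a.address_id, a.street "
    "FROM users u JOIN addresses a ON u.user_id = a.user_id")

# produce (User, Address) tuples: one object per "mapper" per row
results = [(User(r[0], r[1]), Address(r[2], r[3])) for r in rows]
user, address = results[0]
```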

Mapper Arguments

Other arguments not covered above include:

  • select_table=None - often used with polymorphic mappers, this is a Selectable which will take the place of the Mapper's main table argument when performing queries.
  • version_id_col=None - an integer-holding Column object that will be assigned an incrementing counter, which is added to the WHERE clause used by UPDATE and DELETE statements. The matching row count returned by the database is compared to the expected row count, and an exception is raised if they don't match. This is a basic "optimistic concurrency" check. Without the version id column, SQLAlchemy still compares the updated rowcount.
  • always_refresh=False - this option will cause the mapper to refresh all the attributes of all objects loaded by select/get statements, regardless of whether they already exist in the current session. This includes all lazily- and eagerly-loaded relationship attributes, and will also overwrite any pending changes made to attributes on the objects.
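The optimistic concurrency check behind version_id_col can be sketched in plain SQL, independent of the mapper (stdlib sqlite3; the table, column names, and error message below are illustrative): the UPDATE constrains on the version the row was loaded with while incrementing it, so a zero rowcount means another session changed the row first.

```python
import sqlite3

# sketch of the version_id_col pattern, written directly against sqlite3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, "
             "user_name TEXT, version_id INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'jack', 1)")

def versioned_update(conn, user_id, new_name, expected_version):
    # bump the counter, constrained on the version this session loaded
    cur = conn.execute(
        "UPDATE users SET user_name = ?, version_id = version_id + 1 "
        "WHERE user_id = ? AND version_id = ?",
        (new_name, user_id, expected_version))
    if cur.rowcount != 1:
        # stale version: the row was modified (or deleted) concurrently
        raise Exception("expected 1 updated row, got %d" % cur.rowcount)

versioned_update(conn, 1, 'ed', 1)        # succeeds; version becomes 2
try:
    versioned_update(conn, 1, 'fred', 1)  # stale version -> raises
    stale_detected = False
except Exception:
    stale_detected = True
```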

Extending Mapper

A Mapper can have its functionality augmented or replaced at many points in its execution via the MapperExtension class. This class is simply a series of "hooks" where various functionality takes place. An application can make its own MapperExtension objects, overriding only the methods it needs. Methods that are not overridden return the special value sqlalchemy.orm.mapper.EXT_PASS, which indicates the operation should proceed normally.

class MapperExtension(object):
    """base implementation for an object that provides overriding behavior to various
    Mapper functions.  For each method in MapperExtension, a result of EXT_PASS indicates
    the functionality is not overridden."""
    def get_session(self):
        """called to retrieve a contextual Session instance with which to
        register a new object. Note: this is not called if a session is 
        provided with the __init__ params (i.e. _sa_session)"""
        return EXT_PASS
    def select_by(self, query, *args, **kwargs):
        """overrides the select_by method of the Query object"""
        return EXT_PASS
    def select(self, query, *args, **kwargs):
        """overrides the select method of the Query object"""
        return EXT_PASS
    def create_instance(self, mapper, selectcontext, row, class_):
        """called when a new object instance is about to be created from a row.  
        the method can choose to create the instance itself, or it can return 
        None to indicate normal object creation should take place.

        mapper - the mapper doing the operation

        selectcontext - SelectionContext corresponding to the instances() call

        row - the result row from the database

        class_ - the class we are mapping.
        """
        return EXT_PASS
    def append_result(self, mapper, selectcontext, row, instance, identitykey, result, isnew):
        """called when an object instance is being appended to a result list.

        If this method returns EXT_PASS, it is assumed that the mapper should do the appending, else
        if this method returns any other value or None, it is assumed that the append was handled by this method.

        mapper - the mapper doing the operation

        selectcontext - SelectionContext corresponding to the instances() call

        row - the result row from the database

        instance - the object instance to be appended to the result

        identitykey - the identity key of the instance

        result - list to which results are being appended

        isnew - indicates if this is the first time we have seen this object instance in the current result
        set.  if you are selecting from a join, such as an eager load, you might see the same object instance
        many times in the same result set.
        """
        return EXT_PASS
    def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew):
        """called right before the mapper, after creating an instance from a row, passes the row
        to its MapperProperty objects which are responsible for populating the object's attributes.
        If this method returns EXT_PASS, it is assumed that the mapper should do the population, else
        if this method returns any other value or None, it is assumed that the population was handled by this method.

        Essentially, this method is used to have a different mapper populate the object:

            def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew):
                othermapper.populate_instance(selectcontext, instance, row, identitykey, isnew, frommapper=mapper)
                return True
        """
        return EXT_PASS
    def before_insert(self, mapper, connection, instance):
        """called before an object instance is INSERTed into its table.

        this is a good place to set up primary key values and such that arent handled otherwise."""
        return EXT_PASS
    def before_update(self, mapper, connection, instance):
        """called before an object instnace is UPDATED"""
        return EXT_PASS
    def after_update(self, mapper, connection, instance):
        """called after an object instnace is UPDATED"""
        return EXT_PASS
    def after_insert(self, mapper, connection, instance):
        """called after an object instance has been INSERTed"""
        return EXT_PASS
    def before_delete(self, mapper, connection, instance):
        """called before an object instance is DELETEed"""
        return EXT_PASS
    def after_delete(self, mapper, connection, instance):
        """called after an object instance is DELETEed"""
        return EXT_PASS

To use MapperExtension, make your own subclass of it and just send it off to a mapper:

m = mapper(User, users_table, extension=MyExtension())

Multiple extensions will be chained together and processed in order; they are specified as a list:

m = mapper(User, users_table, extension=[ext1, ext2, ext3])
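The chaining behavior can be sketched in plain Python (this is a stand-alone illustration of the EXT_PASS convention, not the actual Mapper internals; EXT_PASS here is a local stand-in sentinel): each extension in the list is consulted in order, and the first one that returns something other than EXT_PASS handles the operation.

```python
# plain-Python sketch of how a chain of extensions might be consulted
EXT_PASS = object()  # stand-in for sqlalchemy.orm.mapper.EXT_PASS

class Ext1(object):
    def create_instance(self, mapper, selectcontext, row, class_):
        return EXT_PASS  # decline; let the next extension (or the mapper) act

class Ext2(object):
    def create_instance(self, mapper, selectcontext, row, class_):
        return "instance-from-ext2"  # handle the creation ourselves

def dispatch(extensions, methodname, *args):
    # ask each extension in order; the first non-EXT_PASS result wins
    for ext in extensions:
        ret = getattr(ext, methodname)(*args)
        if ret is not EXT_PASS:
            return ret
    return EXT_PASS  # nobody handled it; the mapper proceeds normally

result = dispatch([Ext1(), Ext2()], "create_instance", None, None, None, None)
```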