py-postgresql multithread problems

Сергей Мелехин cpro29a at
Fri Sep 28 03:55:30 UTC 2012

Hello again!
I got rid of the errors. They were caused by cached prepared statements in
my Connection object. I still do not understand why (a Connection object
is never used by two threads at once, because of explicit locking), but
when I removed the cache completely it just worked.

My statement caching routine looked like:

def prepare(self, sql):
        '''Prepare and cache SQL'''
        # Normalize whitespace and case so equivalent queries share one entry
        normalized = re.sub(r'\s+', ' ', sql.strip(' ;')).lower()
        key = hashlib.md5(normalized.encode('utf-8')).digest()
        # Note: this membership test runs outside the lock
        if key not in self.prepared_statements:
            with self.stmt_lock:  # did not help either
                self.prepared_statements[key] = self.conn.prepare(sql)
        return self.prepared_statements[key]

I don't know why, but it did not work.
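For comparison, here is a sketch of the same cache with the membership test moved inside the lock (double-checked locking), so two threads racing past the first check cannot prepare the same statement twice. The `StatementCache` class and `prepare_fn` parameter are illustrative names, not py-postgresql API; `prepare_fn` stands in for `conn.prepare` so the sketch is self-contained:

```python
import hashlib
import re
import threading

class StatementCache:
    """Per-connection prepared-statement cache with double-checked locking."""

    def __init__(self, prepare_fn):
        # prepare_fn stands in for conn.prepare; any callable taking SQL works
        self._prepare = prepare_fn
        self._lock = threading.Lock()
        self._statements = {}

    def prepare(self, sql):
        # Normalize whitespace and case so equivalent queries share one entry
        normalized = re.sub(r'\s+', ' ', sql.strip(' ;')).lower()
        key = hashlib.md5(normalized.encode('utf-8')).digest()
        stmt = self._statements.get(key)
        if stmt is None:
            with self._lock:
                # Re-check under the lock: another thread may have prepared
                # the statement between the first test and acquiring the lock.
                stmt = self._statements.get(key)
                if stmt is None:
                    stmt = self._prepare(sql)
                    self._statements[key] = stmt
        return stmt
```

The important part is that each Connection in the pool owns its own cache: a cache shared across connections would hand a statement prepared on one connection to a thread holding a different connection, which would explain the errors even with correct locking.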

With best regards, Sergey Melekhin

2012/9/26 Сергей Мелехин <cpro29a at>
> Hello!
> I discovered that under heavy load my pyramid web app throws
> py-postgresql exceptions like "postgresql.exceptions.ProtocolError".
> Some searching revealed that py-postgresql is not thread-safe and one
> connection cannot be used by several threads concurrently.
> I tried to make some sort of pooling mechanism, but I still get
> ProtocolErrors :(
> What am I doing wrong?
> First I create a number of connection objects:
>         for x in range(num_db_connections):
>             self.pool.append(Connection(conn_string,x))
> Each object in pool contains db_lock = threading.Lock()
> Then I try to acquire lock on a connection and do some work with it.
> This code can be executed by many threads concurrently, but I think no
> two threads can run "work" on one connection concurrently because of
> the lock.
>         time_start = time.time()
>         while time.time() - time_start < self.max_db_lock_wait_time:
>             for conn in self.pool:
>                 acquired = conn.db_lock.acquire(False)
>                 if acquired:
>                     try:
>                     try:
>                         lst =
>                     finally:
>                         conn.db_lock.release()
>                     return lst
>             time.sleep(0.05)
>         raise Exception('Could not get connection lock in time')
> Maybe there is a flaw in my code, or I have misunderstood the nature of
> the "thread unsafety" of py-postgresql?
> Please, help me!
> With best regards, Sergey Melekhin
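As an aside, the busy-wait loop in the quoted code (polling every connection, sleeping 50 ms, retrying) can be replaced by a queue.Queue of connections, which blocks with a timeout until a connection is free. This is only a sketch under the assumption that each connection is used by one thread at a time; `ConnectionPool` and `work` are illustrative names, and the pooled objects here are plain stand-ins rather than py-postgresql connections:

```python
import queue

class ConnectionPool:
    """Hand each connection to one thread at a time via a blocking queue."""

    def __init__(self, connections, max_wait=5.0):
        self._pool = queue.Queue()
        self._max_wait = max_wait
        for conn in connections:
            self._pool.put(conn)

    def run(self, work):
        # Block until a connection is free instead of spinning with sleep().
        try:
            conn = self._pool.get(timeout=self._max_wait)
        except queue.Empty:
            raise Exception('Could not get a connection in time')
        try:
            return work(conn)
        finally:
            # Always return the connection, even if work() raised.
            self._pool.put(conn)
```

Because only the thread that has taken a connection off the queue can touch it, no per-connection Lock is needed, and waiting threads wake as soon as a connection is returned rather than on the next poll.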

More information about the Python-general mailing list