py-postgresql multithread problems
cpro29a at gmail.com
Fri Sep 28 03:55:30 UTC 2012
I got rid of the errors. They were caused by cached prepared statements in
my Connection object. I still do not understand why (the Connection object
is not shared between threads, because of explicit locking), but when I
removed the cache completely it just worked.
My statement-caching routine looked like:
import hashlib
import re

def prepare(self, sql):
    '''Prepare and cache SQL'''
    normalized = re.sub(r'\s', ' ', sql.strip(' ;')).lower()
    key = hashlib.md5(normalized.encode('utf-8')).digest()
    if key not in self.prepared_statements:  # NB: membership check happens outside the lock
        with self.stmt_lock:  # did not help either
            self.prepared_statements[key] = self.conn.prepare(sql)
I don't know why, but it did not work.
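For what it's worth, one suspect in the routine above is that the membership check happens before the lock is taken, so two threads can both miss the cache and call conn.prepare() concurrently on the shared connection. A minimal race-free sketch, where the check sits inside the lock (the StatementCache class and prepare_func parameter are hypothetical helpers, not part of the py-postgresql API):

```python
import hashlib
import re
import threading

class StatementCache:
    """Sketch of a race-free prepared-statement cache.

    The membership test happens *inside* the lock, so two threads can
    never prepare the same statement twice or interleave prepare()
    calls on the shared connection.
    """

    def __init__(self, prepare_func):
        self._prepare = prepare_func      # e.g. conn.prepare
        self._stmts = {}
        self._lock = threading.Lock()

    def prepare(self, sql):
        # Normalize whitespace and case so equivalent SQL shares one entry.
        normalized = re.sub(r'\s+', ' ', sql.strip(' ;')).lower()
        key = hashlib.md5(normalized.encode('utf-8')).digest()
        with self._lock:                  # check-then-act under the lock
            stmt = self._stmts.get(key)
            if stmt is None:
                stmt = self._prepare(sql)
                self._stmts[key] = stmt
        return stmt
```

The cost is that every cache hit also takes the lock, but for a per-connection cache that is usually cheap compared to a round trip to the server.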
With best regards, Sergey Melekhin
2012/9/26 Сергей Мелехин <cpro29a at gmail.com>
> I discovered that under heavy load my Pyramid web app throws
> py-postgresql exceptions like "postgresql.exceptions.ProtocolError".
> Some searching revealed that py-postgresql is not thread-safe, and one
> connection cannot be used by several threads concurrently.
> I tried to make some sort of pooling mechanism, but I still get
> ProtocolErrors :(
> What am I doing wrong?
> First I create a number of connection objects:
> for x in range(num_db_connections):
> Each object in the pool contains db_lock = threading.Lock()
> Then I try to acquire a lock on a connection and do some work with it.
> This code can be executed by many threads concurrently, but I think no
> two threads can run "work" on one connection concurrently, because of the lock:
> time_start = time.time()
> while time.time() - time_start < self.max_db_lock_wait_time:
>     for conn in self.pool:
>         acquired = conn.db_lock.acquire(False)
>         if acquired:
>             lst = conn.work()
>             return lst
> raise Exception('Could not get connection lock in time')
> Maybe there is a flaw in my code, or I have misunderstood the nature of
> the "thread unsafety" of py-postgresql?
> Please help me!
> With best regards, Sergey Melekhin
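One visible flaw in the pool loop quoted above: db_lock is acquired but never released, so each connection can be handed out at most once before the pool runs dry, and a lock also leaks if work() raises. A sketch of the same scan with a try/finally release and a short back-off (Pool and PooledConn are hypothetical names for illustration, not py-postgresql classes):

```python
import threading
import time

class PooledConn:
    """Hypothetical wrapper pairing one connection with its own lock."""
    def __init__(self, conn):
        self.conn = conn
        self.db_lock = threading.Lock()

class Pool:
    def __init__(self, conns, max_wait=5.0):
        self.pool = [PooledConn(c) for c in conns]
        self.max_wait = max_wait

    def run(self, job):
        deadline = time.time() + self.max_wait
        while time.time() < deadline:
            for entry in self.pool:
                if entry.db_lock.acquire(False):    # non-blocking try
                    try:
                        return job(entry.conn)      # exclusive use of this conn
                    finally:
                        entry.db_lock.release()     # always hand the conn back
            time.sleep(0.01)  # brief back-off instead of a busy spin
        raise RuntimeError('Could not get a connection lock in time')
```

With the release in a finally block, the connection returns to the pool whether the job succeeds or throws, which is the usual cause of "could not get a lock" timeouts in hand-rolled pools.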
More information about the Python-general mailing list