The __exit__ function of my custom context manager seemingly runs before the computation is done. The context manager is meant to simplify writing concurrent/parallel code. Here is the context manager code:
    import time
    from multiprocessing.dummy import Pool, cpu_count

    class managed_pool:
        '''A simple context manager for multiprocessing.dummy.Pool'''
        def __init__(self, msg):
            self.msg = msg

        def __enter__(self):
            cores = cpu_count()
            print 'start concurrent ({0} cores): {1}'.format(cores, self.msg)
            self.start = time.time()
            self.pool = Pool(cores)
            return self.pool

        def __exit__(self, type_, value, traceback):
            print 'end concurrent:', self.msg
            print 'time:', time.time() - self.start
            self.pool.close()
            self.pool.join()
I've also tried this script with multiprocessing.Pool instead of multiprocessing.dummy.Pool, and it seems to fail every time.
Here is an example of using the context manager:
    def read_engine_files(f):
        engine_input = EngineInput()
        with open(f, 'rb') as f:
            engine_input.ParseFromString(f.read())
        return engine_input

    with managed_pool('load input files') as pool:
        data = pool.map(read_engine_files, files)
Inside of read_engine_files I print the name of the file. You'll notice that in the __exit__ function I print out when the computation is done and how long it took. But when viewing stdout, the __exit__ message appears way before the computation has finished. Like, minutes before the computation is done. htop says all of the cores are still being used. Here's an example of the output:
    start concurrent (4 cores): load engine input files
    file1.pbin
    file2.pbin
    ...
    file16.pbin
    end concurrent: load engine input files
    time: 246.43829298
    file17.pbin
    ...
    file45.pbin
Why is __exit__ being called so early?
Are you sure you're calling pool.map()? It should block until all the items have been mapped.
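For reference, a minimal sketch of that blocking behavior (the work function and timings here are made up for illustration):

    # pool.map() does not return until every item has been processed,
    # so the lines after it always run last.
    import time
    from multiprocessing.dummy import Pool

    def work(n):
        time.sleep(1)
        print 'finished', n
        return n * n

    pool = Pool(4)
    results = pool.map(work, range(8))  # blocks until all 8 items are done
    print 'map returned:', results      # always printed after every 'finished'
    pool.close()
    pool.join()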
If you're calling one of the asynchronous methods of Pool, you should be able to solve the problem by changing the order of things in __exit__(): join the pool before printing the summary.
    def __exit__(self, type_, value, traceback):
        self.pool.close()
        self.pool.join()
        print 'end concurrent:', self.msg
        print 'time:', time.time() - self.start
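To illustrate the suspected failure mode, here is a sketch assuming the computation was actually started with an asynchronous method such as map_async() rather than map() (work() is a stand-in function, not from the question):

    # map_async() returns immediately, so an __exit__ that prints before
    # joining would report completion while workers are still running.
    # Joining first forces __exit__ to wait for them.
    import time
    from multiprocessing.dummy import Pool, cpu_count

    def work(n):
        time.sleep(1)
        print 'finished', n
        return n

    pool = Pool(cpu_count())
    result = pool.map_async(work, range(8))  # returns at once, work still queued
    print 'map_async returned immediately'   # prints before any 'finished' line
    pool.close()
    pool.join()                              # blocks until every worker is done
    print 'all work done:', result.get()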