You are right! I did not consider sum() in the context of my experiment #facepalm
The results from timeit.repeat(stmt="sum(t)", setup="t = tuple([0]*99999)", repeat=5, number=99999)
suggest that ~34s of the 67s of execution time in my experiment is spent in data access. So I agree that removing this ~50% overhead would be a non-trivial gain.
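For reference, that measurement as a runnable snippet:

```python
import timeit

# Time sum() over a ~100K-element tuple, mirroring the measurement quoted above.
timings = timeit.repeat(
    stmt="sum(t)",
    setup="t = tuple([0]*99999)",
    repeat=5,
    number=99999,
)
print(min(timings))  # best-of-5 total for 99999 summations
```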
That said, ~30s for IPC-ing and processing 100K integers seems rather slow. For my purpose, I'd prefer shared_memory only if the combination of multiprocessing.map() and shared_memory were faster than the combination of multiprocessing.map() and non-shared memory.
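This is not my actual experiment, just a minimal sketch of the kind of comparison I mean, assuming Python 3.8+ and using 100K zero bytes as a stand-in for the integers: one Pool.map() pass where each worker receives its chunk over IPC, and one where workers only receive the name of a SharedMemory block and read from it directly.

```python
import time
from multiprocessing import Pool, shared_memory

N = 100_000
CHUNK = 10_000

def sum_chunk(chunk):
    # Non-shared variant: the chunk itself is pickled and sent to the worker.
    return sum(chunk)

def sum_shared(args):
    # Shared variant: only the block name and bounds travel over IPC; the
    # worker attaches to the existing block and reads the bytes in place.
    name, start, stop = args
    shm = shared_memory.SharedMemory(name=name)
    total = sum(shm.buf[start:stop])
    shm.close()
    return total

if __name__ == "__main__":
    data = bytes(N)  # 100K zero bytes as a stand-in for the integers

    with Pool() as pool:
        # Non-shared memory: chunks are copied into each worker.
        t0 = time.perf_counter()
        pool.map(sum_chunk, [data[i:i + CHUNK] for i in range(0, N, CHUNK)])
        t_copy = time.perf_counter() - t0

        # Shared memory: workers attach to one block by name.
        shm = shared_memory.SharedMemory(create=True, size=N)
        shm.buf[:N] = data
        try:
            t0 = time.perf_counter()
            pool.map(sum_shared,
                     [(shm.name, i, i + CHUNK) for i in range(0, N, CHUNK)])
            t_shared = time.perf_counter() - t0
        finally:
            shm.close()
            shm.unlink()

    print(f"non-shared: {t_copy:.3f}s  shared: {t_shared:.3f}s")
```

Whether the shared_memory variant actually wins will depend on chunk size and how much per-element work the real task does, which is exactly what I'd want to measure.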