You are right! I did not consider sum() in the context of my experiment #facepalm

The results from timeit.repeat(stmt="sum(t)", setup="t = tuple([0]*99999)", repeat=5, number=99999) suggest that roughly 34s of the 67s of execution time in my experiment is spent in data access. So, I agree that removing this ~50% overhead would result in a non-trivial gain.
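For concreteness, here is the measurement as a runnable snippet (a minimal sketch; the absolute numbers will of course vary with hardware and Python version):

```python
import timeit

# Time sum() over a ~100K-element tuple of zeros, with 99999 calls per repeat
# to roughly match the scale of the original experiment.
times = timeit.repeat(
    stmt="sum(t)",
    setup="t = tuple([0] * 99999)",
    repeat=5,
    number=99999,
)

# Each entry is the total seconds for one repeat of 99999 sum() calls.
print([f"{t:.1f}s" for t in times])
```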

That said, ~30s for IPC-ing and processing 100K integers seems rather slow. For my purpose, I'd prefer shared_memory only if the combination of multiprocessing.map() and shared_memory were faster than the combination of multiprocessing.map() and non-shared memory.
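For what it's worth, below is a rough sketch of the kind of comparison I have in mind. It is not the original experiment's code: the chunk count, the 8-byte little-endian encoding of the integers, and the helper names (sum_chunk, sum_shared) are my own assumptions, and it uses multiprocessing.Pool.map under the hood.

```python
import time
from multiprocessing import Pool, shared_memory

N = 100_000   # number of integers, matching the scale of the experiment
CHUNKS = 4    # number of work items handed to the pool (assumed)


def sum_chunk(chunk):
    # Non-shared case: the chunk itself is pickled and copied to the worker.
    return sum(chunk)


def sum_shared(args):
    # Shared case: only the block name and the bounds cross the process
    # boundary; the worker attaches to the existing shared memory block.
    name, start, stop = args
    shm = shared_memory.SharedMemory(name=name)
    try:
        # Interpret the raw bytes as 8-byte little-endian integers.
        return sum(int.from_bytes(shm.buf[i * 8:(i + 1) * 8], "little")
                   for i in range(start, stop))
    finally:
        shm.close()


if __name__ == "__main__":
    data = list(range(N))
    bounds = [(i * N // CHUNKS, (i + 1) * N // CHUNKS) for i in range(CHUNKS)]

    # Non-shared memory: each chunk is serialized and sent to a worker.
    with Pool(CHUNKS) as pool:
        t0 = time.perf_counter()
        total_plain = sum(pool.map(sum_chunk, [data[a:b] for a, b in bounds]))
        t_plain = time.perf_counter() - t0

    # Shared memory: write the integers once, workers attach by name.
    shm = shared_memory.SharedMemory(create=True, size=N * 8)
    try:
        for i, v in enumerate(data):
            shm.buf[i * 8:(i + 1) * 8] = v.to_bytes(8, "little")
        with Pool(CHUNKS) as pool:
            t0 = time.perf_counter()
            total_shared = sum(pool.map(
                sum_shared, [(shm.name, a, b) for a, b in bounds]))
            t_shared = time.perf_counter() - t0
    finally:
        shm.close()
        shm.unlink()

    assert total_plain == total_shared
    print(f"non-shared: {t_plain:.3f}s, shared: {t_shared:.3f}s")
```

Note that with plain Python ints the shared-memory path still has to rebuild int objects in every worker, so the gain (if any) comes only from avoiding pickling and copying; the approach pays off most when the buffer can be consumed directly, e.g. via a numpy array backed by shm.buf.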
