
The following for loop is part of an iterative simulation process and is the main bottleneck in terms of computation time:

import numpy as np

n_int = 10

class Simulation(object):

    def loop(self):

        for itr in range(n_int):

            cols_red_list = []
            rows_list = list(range(2500))
            diff = np.random.uniform(-1, 1, (2500, 300))

            for row in rows_list:
                # index of the first negative value in this row
                col = next(idx for idx, val in enumerate(diff[row, :]) if val < 0)
                cols_red_list.append(col)
            print(len(cols_red_list))

sim1 = Simulation()
sim1.loop() 

Hence, I tried to parallelize it with the multiprocessing package, in the hope of reducing the computation time:

import numpy as np
from multiprocessing import Pool, cpu_count
from functools import partial

n_int = 10

def crossings(row, diff):
    # Return the index of the first negative value in the given row of diff.
    return next(idx for idx, val in enumerate(diff[row, :]) if val < 0)

class Simulation(object):

    def loop(self):

        for itr in range(n_int):

            rows_list = list(range(2500))
            diff = np.random.uniform(-1, 1, (2500, 300))

            if __name__ == '__main__':
                num_of_workers = cpu_count()
                print('number of CPUs : ', num_of_workers)
                pool = Pool(num_of_workers)
                cols_red_list = pool.map(partial(crossings, diff=diff), rows_list)
                pool.close()
                print(len(cols_red_list))
                #some code.....

sim1 = Simulation()
sim1.loop()

Unfortunately, the parallelized version turns out to be much slower than the sequential code. Hence my question: did I use the multiprocessing package properly in this particular example? Are there alternative ways to parallelize the above-mentioned for loop?
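For loops like this, vectorizing with NumPy is often faster than multiprocessing, because the per-row work is tiny compared to the cost of pickling the array to worker processes. A minimal sketch of a vectorized equivalent of the inner loop (it assumes every row contains at least one negative value, since np.argmax returns 0 for a row with no True entries):

```python
import numpy as np

diff = np.random.uniform(-1, 1, (2500, 300))

# Boolean mask of negative entries; argmax along each row returns
# the index of the first True, i.e. the first negative value.
cols_red = np.argmax(diff < 0, axis=1)
print(len(cols_red))  # 2500
```

This replaces the Python-level loop over 2500 rows with a single C-level pass over the array.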

Similar Question 1 : Continuous loop using threading

I am somewhat new to Python. I have been trying to find the answer to this coding question for some time. I have a function set up to run on a threading timer, which lets it execute every second while my other code is running. I would like this function to simply execute continuously, that is, to start over every time it finishes, rather than run on a timer. The reason is that, due to a changing delay in a stepper motor, the function takes a different amount of time to run on each call.
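One common pattern for this is to run the function in a dedicated thread inside a while loop, so it restarts as soon as it returns rather than on a fixed timer. A minimal sketch, where motor_step is a hypothetical stand-in for the stepper-motor function:

```python
import threading
import time

def motor_step():
    # Placeholder for the real stepper-motor work, whose
    # duration varies from call to call.
    time.sleep(0.01)

def run_forever(stop_event):
    # Re-invoke motor_step as soon as it returns, instead of
    # scheduling it with a threading.Timer.
    while not stop_event.is_set():
        motor_step()

stop = threading.Event()
worker = threading.Thread(target=run_forever, args=(stop,), daemon=True)
worker.start()

time.sleep(0.1)   # the main program keeps running here
stop.set()        # signal the loop to exit
worker.join()
```

The Event gives the main program a clean way to stop the loop when it is done.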

Similar Question 2 : Launching multiple infinite loops at once

Let's say I have three modules:

mod1
mod2
mod3

where each of them runs infinitely long as soon as mod.launch() is called.

What are some elegant ways to launch all these infinite loops at once, without waiting for one to finish before calling the other?

Let's say I'd have a kind of launcher.py, where I'd try to:

import mod1
import mod2
import mod3

if __name__ == "__main__":
    mod1.launch()
    mod2.launch()
    mod3.launch()

This obviously doesn't work, as it will wait for mod1.launch() to finish before launching mod2.launch().

Basically, I don't know what I need to do to accomplish this. Any kind of help is appreciated.
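If the modules spend most of their time waiting (on I/O or sleeping), one straightforward approach is to give each launch() its own thread so all three start at once; for CPU-bound modules, multiprocessing.Process could be swapped in with the same structure. A sketch with a stand-in launch function, since mod1/mod2/mod3 aren't shown:

```python
import threading
import time

stop = threading.Event()

def launch():
    # Stand-in for modX.launch(): loops "forever" until told to stop.
    while not stop.is_set():
        time.sleep(0.01)  # the module's real work would go here

# One thread per module; starting a thread does not block, so none
# of the loops waits for the others.
threads = [threading.Thread(target=launch, daemon=True) for _ in range(3)]
for t in threads:
    t.start()

time.sleep(0.05)  # the main program continues while all three loops run
stop.set()
for t in threads:
    t.join()
```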

Similar Question 3 : Running two loops at the same time

I have two loops that each run on a different interval:

import time

while True:
    print("Hello Matt")
    time.sleep(5)

and then another loop:

import time

while True:
    print("Hello world")
    time.sleep(1)

I need to incorporate both loops into one program, where both run at the same time and process data independently; there is no need to share data between them. I guess I'm looking for threads or multiprocessing, but I'm not sure how to implement either for something like this.
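Since the two loops share no data, threads are the simplest fit: each loop runs in its own thread and sleeps independently. A sketch using the two loops above (with the sleep intervals shortened and a stop Event added so the example terminates):

```python
import threading
import time

stop = threading.Event()

def hello_matt():
    while not stop.is_set():
        print("Hello Matt")
        stop.wait(0.5)  # shortened from 5 s; wakes early when stopped

def hello_world():
    while not stop.is_set():
        print("Hello world")
        stop.wait(0.1)  # shortened from 1 s

threads = [threading.Thread(target=hello_matt),
           threading.Thread(target=hello_world)]
for t in threads:
    t.start()  # both loops now run at the same time

time.sleep(0.3)
stop.set()     # signal both loops to finish
for t in threads:
    t.join()
```

Using stop.wait() instead of time.sleep() inside the loops lets both threads exit promptly when the event is set.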

Similar Question 4 (2 solutions) : Python - Multiprocessing - huge for loop

Similar Question 5 (2 solutions) : Python Multiprocessing Loop

Similar Question 6 (2 solutions) : Multithreading / multiprocessing with a python loop
