pyUSID.processing.comp_utils.parallel_compute¶
- pyUSID.processing.comp_utils.parallel_compute(data, func, cores=None, lengthy_computation=False, func_args=None, func_kwargs=None, verbose=False, joblib_backend='multiprocessing')[source]¶
Computes the provided function in parallel over multiple cores using the joblib library
- Parameters:
data (numpy.ndarray) – Data to map the function to. The function will be mapped to the first axis of data
func (callable) – Function to map to data
cores (uint, optional) – Number of logical cores to use for the computation. Default: all cores minus 1 (if total cores <= 4) or minus 2 (if total cores > 4). Ignored in the MPI context - each rank will execute serially
lengthy_computation (bool, optional) – Whether or not each computation is expected to take substantial time. Sometimes the overhead of distributing work to more cores can outweigh the time saved per core. Default: False
func_args (list, optional) – Arguments to be passed to the function
func_kwargs (dict, optional) – Keyword arguments to be passed to the function
joblib_backend (str, optional) – Backend to use for parallel computation with joblib. The older paradigm, “multiprocessing”, is the default in pyUSID. Set to None to use the joblib default, “loky”
verbose (bool, optional. default = False) – Whether or not to print statements that aid in debugging
- Returns:
results – List of computational results
- Return type:
list
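
A minimal usage sketch based on the signature documented above. The find_peak_index function, its window keyword, and the random input array are illustrative placeholders; any function that accepts a 1D slice of data as its first argument would work the same way.

import numpy as np
from pyUSID.processing.comp_utils import parallel_compute

def find_peak_index(spectrum, window=5):
    # Smooth the 1D spectrum and return the index of its maximum
    smoothed = np.convolve(spectrum, np.ones(window) / window, mode='same')
    return int(np.argmax(smoothed))

# 100 spectra of 256 points each; the function is applied along the first axis
raw_data = np.random.rand(100, 256)

results = parallel_compute(raw_data, find_peak_index,
                           cores=2,
                           func_kwargs={'window': 7},
                           verbose=True)

# results is a list with one entry per row of raw_data
print(len(results), results[:5])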