Parallel Processing#

Some of the computational tasks performed in scqubits can benefit significantly from parallelization. The scqubits package leverages parallel-processing capabilities provided by the Python Standard Library multiprocessing module. For better pickling support, scqubits further supports use of pathos and dill.

An important consideration when parallelizing tasks such as parameter sweeps is that Numpy and Scipy typically make use of multi-threading internally. (The details depend on how these libraries were built on the machine in question.) This internal multi-threading generally competes with the process-level parallelization of map methods via multiprocessing or pathos.

In many cases, best performance is obtained by limiting the number of threads used by Numpy to just a few. (The optimal number is machine-dependent and must be determined on a case-by-case basis.) The thread count can be limited from within a Python script or Jupyter notebook by setting environment variables.

Note

Limiting the number of threads will only be effective if environment variables are set before the first import of Numpy.

Several environment variables can play a role, and which ones are needed is again machine-dependent. We thus simply set them all:

[ ]:
import os

# Limit the thread counts of the common Numpy/Scipy threading backends.
# These variables must be set before Numpy is first imported.
NUM_THREADS = "1"

os.environ["OMP_NUM_THREADS"] = NUM_THREADS
os.environ["OPENBLAS_NUM_THREADS"] = NUM_THREADS
os.environ["MKL_NUM_THREADS"] = NUM_THREADS
os.environ["VECLIB_MAXIMUM_THREADS"] = NUM_THREADS
os.environ["NUMEXPR_NUM_THREADS"] = NUM_THREADS

With these variables set, Numpy and scqubits can now be imported.

[7]:
import numpy as np

import scqubits
from scqubits import HilbertSpace, InteractionTerm, ParameterSweep
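
To check that the thread limits have taken effect, the threading backends loaded by Numpy can be inspected. The sketch below uses the optional threadpoolctl package, which is not required by scqubits and is assumed here to be installed:

[ ]:
from threadpoolctl import threadpool_info  # optional package, assumed to be installed

# Each entry describes one loaded threading library (OpenBLAS, MKL, OpenMP, ...)
# and reports the number of threads it is currently set to use.
for lib in threadpool_info():
    print(lib["internal_api"], "->", lib["num_threads"])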

Enabling parallel processing#

Parallel processing is enabled for appropriate scqubits methods and classes by passing the number of cores to be used through the keyword argument num_cpus. The following classes and class methods support parallelization:

Classes and class methods supporting parallelization#

Class or class method

ParameterSweep
HilbertSpace.get_spectrum_vs_paramvals
<qubit_class>.get_spectrum_vs_paramvals
<qubit_class>.plot_evals_vs_paramvals
<qubit_class>.get_matelements_vs_paramvals
<qubit_class>.plot_matelem_vs_paramvals

To use parallelization, the keyword argument num_cpus must be passed, specifying the number of cores to be used as an integer, e.g.

[ ]:
transmon.get_spectrum_vs_paramvals(..., num_cpus=4)
[ ]:
sweep = ParameterSweep(
    param_name=param_name,
    ...,
    ...,
    num_cpus=4
)

Whenever num_cpus is passed with a value greater than 1, scqubits starts a parallel-processing pool with the desired number of processes.
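
As a complete (if minimal) sketch, the cell below sweeps the offset charge of a transmon qubit across four processes. The qubit parameters and the swept ng values are illustrative placeholders, not values prescribed by scqubits:

[ ]:
# Illustrative qubit parameters; adjust to the system of interest
transmon = scqubits.Transmon(EJ=30.0, EC=1.2, ng=0.0, ncut=31)

ng_vals = np.linspace(-1.0, 1.0, 201)

# The sweep over ng_vals is distributed across 4 worker processes
spectrum_data = transmon.get_spectrum_vs_paramvals(
    param_name="ng",
    param_vals=ng_vals,
    evals_count=5,
    num_cpus=4,
)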

Global num_cpus default#

The global default for num_cpus is stored in scqubits.settings.NUM_CPUS. Upon import of scqubits, this setting has the value 1 (no parallelization). To change the default to a different number of cores (say, 6), set

[ ]:
scqubits.settings.NUM_CPUS = 6
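
Parallelizable calls that omit num_cpus then pick up this global default automatically. Reusing the illustrative transmon instance from the sketch above:

[ ]:
# num_cpus is omitted, so the global default scqubits.settings.NUM_CPUS (here 6) is used
spectrum_data = transmon.get_spectrum_vs_paramvals(
    param_name="ng",
    param_vals=np.linspace(-1.0, 1.0, 201),
    evals_count=5,
)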

multiprocessing vs. pathos#

scqubits supports parallelization through multiprocessing as well as pathos. The latter is the default option and is more robust thanks to the advanced pickling methods enabled through dill.

To switch from use of pathos/dill to multiprocessing, simply alter the following setting:

[ ]:
scqubits.settings.MULTIPROC = 'multiprocessing'
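
Reverting to the default backend works the same way; the value 'pathos' used below mirrors the default backend described above:

[ ]:
scqubits.settings.MULTIPROC = 'pathos'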