In [1]:
%pylab inline
Simulating large quantum systems with QuTiP can take a long
time, especially if the computer you are using is not exactly a
state-of-the-art supercomputer. So what if you had access to such a
supercomputer whenever you wanted to perform bigger simulations using
QuTiP? Well, as it turns out, you do. Since QuTiP is written in Python,
you can easily use the service provided by
PiCloud to offload heavy computation to the
cloud.
PiCloud lets you compute in the cloud via an easy-to-use interface. You are charged by the time your jobs actually take, and you don't need to worry about managing hardware or cloud resources yourself. To get started, PiCloud even provides you with 20 compute hours for free every month. In this post, I'll show you how you can get started in no time using PiCloud together with QuTiP.
(This post has been written as an IPython notebook, you can look at the notebook in nbviewer or download the notebook as a gist to execute the code)
Getting Started
In order to use QuTiP on the PiCloud, you need both a working
installation of QuTiP and an account at
picloud.com. Let's have a look at the local
QuTiP installation first.
In [2]:
from qutip.ipynbtools import version_table
version_table()
Out[2]:
As you can see, I am using the free Anaconda Python
distribution and a recent
development version of QuTiP. Next, you need to create an account on
picloud.com and follow the easy installation
instructions on the web site. When you are done with that, you are
ready to perform computations on the cloud.
In [3]:
import cloud
def add(x, y):
    return x + y
jid = cloud.call(add, 1, 2)
cloud.result(jid)
Out[3]:
PiCloud tries to let you do computations on the cloud just as you
would do them locally, without any extra effort on your part. For
example, you don't have to worry about using common modules such as
numpy on the cloud.
In [4]:
import numpy as np
jid = cloud.call(add, np.arange(9.0), np.arange(9.0))
cloud.result(jid)
Out[4]:
If your code uses less common modules that are not preinstalled on the
cloud, PiCloud will try to send all necessary information from
your computer to the cloud. This works really well unless you try to
use modules which contain non-Python code. Unfortunately, QuTiP is
such a module, because some of its functions are compiled in C for
speed. So in order to use QuTiP on the cloud, you have to make
PiCloud aware of it. PiCloud provides environments for this
purpose. Environments are customizable machine images in which you can
install all the modules your code needs. When you want to make use of
an environment later on, you just pass its name as an argument to
the functions provided by the cloud module. What is even better,
once created, environments can be easily shared among users. So to get
you started, I made public an environment with QuTiP installed,
called "/mbaden/precise_with_qutip". If you are
curious how I did that, or want to create your own environment, take a
look at the very last section of this post.
But first let's simulate something on the cloud, using the public environment. Let's start by having a look at our QuTiP installation in the cloud.
In [5]:
jid = cloud.call(version_table, _env='/mbaden/precise_with_qutip')
cloud.result(jid)
Out[5]:
Note that the versions of QuTiP and some of the other packages differ
between my machine and the cloud. This can be the case for any
extensions with non-Python code, such as numpy. It is probably a good
idea to work in a virtual environment on your local machine that has
the same versions of all packages as the environment on PiCloud to
ensure consistency. For now, we will just ignore this.
A simple example
Now that we have QuTiP up and running on PiCloud let's look at a
simple example on how we can use the cloud to do actual computations.
In [6]:
from qutip import *
import time
In this example we are interested in the steady-state intra-cavity
photon number for a single atom coupled to a cavity, as a function of
the coupling strength. First we define the relevant operators and
construct the Hamiltonian. One part of the Hamiltonian is static,
while the other part depends on the coupling strength. We will
state the problem in a form where a single function iterates
over the problem for various coupling strengths with as little
overhead as possible.
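For reference, the Hamiltonian assembled in the code below is the Rabi model (including the counter-rotating terms), with cavity decay into a thermal bath handled by collapse operators:

$$
H = H_0 + g\,H_1, \qquad
H_0 = \omega_c\, a^\dagger a + \omega_a\, \sigma_+ \sigma_-, \qquad
H_1 = (a^\dagger + a)(\sigma_- + \sigma_+),
$$

with collapse operators $\sqrt{\kappa (1 + n_{\mathrm{th}})}\, a$ and $\sqrt{\kappa\, n_{\mathrm{th}}}\, a^\dagger$.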
In [7]:
wc = 1.0 * 2 * pi # cavity frequency
wa = 1.0 * 2 * pi # atom frequency
N = 25 # number of cavity fock states
kappa = 0.05 # cavity decay rate
n_th = 0.5 # average number of thermal bath excitations
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), destroy(2))
nc = a.dag() * a
na = sm.dag() * sm
c_ops = [sqrt(kappa * (1 + n_th)) * a, sqrt(kappa * n_th) * a.dag()]
H0 = wc * nc + wa * na
H1 = (a.dag() + a) * (sm + sm.dag())
args = {
    'H0': H0,
    'H1': H1,
    'c_ops': c_ops
}
# Create a list of 400 coupling strengths. If you have a slow computer
# you may want to decrease that number to ~50 first.
g_vec = linspace(0, 2.5, 400) * 2 * pi
def compute_task(g_vec, args):
    H0, H1, c_ops = args['H0'], args['H1'], args['c_ops']
    n_vec = zeros(g_vec.shape)
    for k, g in enumerate(g_vec):
        H = H0 + g * H1
        rho_ss = steadystate(H, c_ops)
        n_vec[k] = expect(nc, rho_ss)
    return n_vec
Here, the important function is compute_task, which takes the list
(numpy array) over which we iterate and some static arguments hidden
in the args dictionary. Let's also define a useful function for
plotting the results.
In [8]:
def visualize_results(g_vec, n_vec):
    fig, ax = subplots()
    ax.plot(g_vec, n_vec, lw=2)
    ax.set_xlabel('Coupling strength (g)')
    ax.set_ylabel('Photon Number')
    ax.set_title('# of photons in the steady state')
OK, now we are ready to do the calculation, both locally and on the
cloud. Let's start with the local calculation. Note that above I made
the list of coupling strengths 400 entries long, in order to get the
next calculation to take roughly a minute on my local
machine. Depending on how fast or slow your machine is, your results
may vary significantly, so if you repeat the example yourself you
might want to go back and set the list to just 50 entries at first.
In [9]:
t0 = time.time()
n_vec = compute_task(g_vec, args)
t1 = time.time()
print "elapsed =", (t1-t0)
In [10]:
visualize_results(g_vec / (2 * pi), n_vec)
So far so good. Now let's move the calculation onto the cloud. We
have to break the calculation down into a number of jobs that can be
run in parallel and submit them to PiCloud. Their scheduler will
estimate the workload and distribute the jobs among different
machines. In the common case of iterating over a list of parameters
and solving the same problem over and over, breaking the task into
independent chunks is quite easy.
We will use numpy's array_split function to create a number of chunks from our coupling strength array, wrap our compute task in a function that takes only a single argument (the list of input parameters), and submit it to the cloud via cloud.map. In a last step, we use numpy's concatenate to merge the chunks back into a single output array.
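To make the split/merge step concrete, here is a minimal stand-alone sketch (plain numpy, no cloud involved) showing that array_split followed by concatenate reconstructs the original array, even when the length is not divisible by the number of chunks; the doubling stands in for compute_task:

```python
import numpy as np

g_vec = np.linspace(0, 2.5, 10) * 2 * np.pi  # small stand-in for the real array
chunks = np.array_split(g_vec, 3)            # 3 chunks of sizes 4, 3 and 3

# Each chunk can be processed independently (e.g. by a separate cloud job) ...
processed = [chunk * 2 for chunk in chunks]  # placeholder for compute_task

# ... and the partial results are merged back in order.
result = np.concatenate(processed)
print(result.shape)                  # (10,)
print(np.allclose(result, 2 * g_vec))  # True
```

Unlike split, array_split does not require the number of chunks to divide the array length evenly, which is exactly what we want for an arbitrary parameter list.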
In [11]:
t0 = time.time()
no_chunks = 10
g_chunks = array_split(g_vec, no_chunks)
single_variable_task = lambda g: compute_task(g, args=args)
jids = cloud.map(single_variable_task, g_chunks,
                 _env='/mbaden/precise_with_qutip',
                 _type='c2')
n_chunks = cloud.result(jids)
n_vec = concatenate(n_chunks)
t1 = time.time()
print "elapsed =", (t1-t0)
In [12]:
visualize_results(g_vec / (2 * pi), n_vec)
Yes, we have successfully sped up our calculation using PiCloud. Note
that this example is a bit artificial, since calculations taking just
a minute are not something you would usually need to delegate to a
fast computer. As your jobs get longer and as you submit more of them,
PiCloud will keep increasing the number of cores available to you, and
you will see more significant speedups. Plus, there are many more ways
to tweak PiCloud to your liking.
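If you want to prototype the same split/map/merge pattern without a PiCloud account, a rough local analogue can be built with the standard library's concurrent.futures; this is my own sketch, not part of the PiCloud API, and compute_chunk is a self-contained placeholder for the real compute_task. For a CPU-bound task you would typically prefer ProcessPoolExecutor, but the structure is identical:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def compute_chunk(g_chunk):
    # Placeholder for compute_task(g_chunk, args); squaring keeps the
    # example self-contained and cheap to verify.
    return g_chunk ** 2

g_vec = np.linspace(0, 2.5, 400) * 2 * np.pi
g_chunks = np.array_split(g_vec, 10)

# executor.map plays the role of cloud.map: one worker per chunk,
# results returned in submission order.
with ThreadPoolExecutor(max_workers=4) as executor:
    n_chunks = list(executor.map(compute_chunk, g_chunks))

n_vec = np.concatenate(n_chunks)
print(np.allclose(n_vec, g_vec ** 2))  # True
```

Because executor.map preserves input order, the concatenated result lines up with g_vec exactly as it does in the cloud.map version above.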
Happy simulating!
Footnote: Creating a QuTiP Environment
There are two easy ways to get your own environment with QuTiP
installed. The first way is to clone the public environment I
created. It is named /mbaden/precise_with_qutip, and you should be able
to find it by searching for "qutip". The second way is to create
the environment yourself. To do so, go to the environments
page of your PiCloud
account, select Ubuntu Precise as the base environment, and log in via
the web console. To install QuTiP, type
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:jrjohansson/qutip-releases
sudo apt-get update
sudo apt-get install python-qutip
Here, the first step is an extra step compared to the installation
instructions in the QuTiP documentation; it installs
add-apt-repository, which is not present in the base environment. By
the way, these steps are exactly what I did to create
/mbaden/precise_with_qutip. Now you can customize your new QuTiP
environment to your liking.