Thursday, May 9, 2013

QuTiP Now on MacPorts

On the Macintosh platform, one of the possible installation methods for QuTiP has always been via MacPorts.  However, until recently there was no way of directly installing QuTiP from inside MacPorts itself.  Thanks to the work of Frank Schima of MacPorts, QuTiP and all of its dependencies can now be easily installed, after installing MacPorts, using a single command in the terminal!  Here is how you do it:

1. In order to install MacPorts, you need to have the Apple Xcode Developer Tools installed on your machine.  This can be done from inside the Apple App Store.
2. Next, run Xcode and go to Preferences->Downloads and install the "Command Line Tools".
3. Download and install MacPorts from macports.org.
4. To install QuTiP, in the terminal type: sudo port install py27-qutip
That's it!  All of the required packages are downloaded, compiled, and installed.  It will take several tens of minutes to finish, but that is what wine is for.

Wednesday, April 3, 2013

Getting Started with QuTiP on PiCloud

In [1]:
%pylab inline

Welcome to pylab, a matplotlib-based Python environment
[backend: module://IPython.zmq.pylab.backend_inline].

Simulating large quantum systems with QuTiP can take a long time, especially if the computer you are using is not exactly a state-of-the-art supercomputer. So what if you had access to such a supercomputer whenever you wanted to perform bigger simulations using QuTiP? Well, as it turns out, you do. Since QuTiP is written in Python, you can easily use the service provided by PiCloud to offload heavy computation to the cloud.
PiCloud lets you compute in the cloud via an easy-to-use interface. You are charged by the time your jobs actually take, and you don't need to worry about managing hardware or cloud resources yourself. To get started, PiCloud even provides you with 20 compute hours for free every month. In this post, I'll show you how you can get started with PiCloud and QuTiP in no time.
(This post was written as an IPython notebook; you can view the notebook in nbviewer or download it as a gist to execute the code.)

Getting Started

In order to use QuTiP on the PiCloud, you need both a working installation of QuTiP and an account at picloud.com. Let's have a look at the local QuTiP installation first.
In [2]:
from qutip.ipynbtools import version_table
version_table()

Out[2]:
Software      Version
Cython        0.18
SciPy         0.11.0
QuTiP         2.3.0.dev-f19dd9e
Python        2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56) [GCC 4.0.1 (Apple Inc. build 5493)]
IPython       0.13.1
OS            posix [darwin]
Numpy         1.7.0
matplotlib    1.2.0
Tue Apr 02 21:30:16 2013 SGT
As you can see, I am using the free Anaconda python distribution and a recent development version of QuTiP. Next, you are going to need an account on picloud.com; follow the easy installation instructions on the web site. When you are done with that, you are ready to perform computations in the cloud.
In [3]:
import cloud

def add(x, y):
    return x + y

# the exact call was lost from this post; reconstructed to match Out[3]
jid = cloud.call(add, 1, 2)
cloud.result(jid)

Out[3]:
3
PiCloud tries to provide you with the ability to do computations on the cloud as you would do them locally, without you having to do anything. For example you don't have to worry about using common modules on the cloud such as numpy.
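To get a feel for this job-id pattern before spending cloud credits, it can be emulated locally. The LocalCloud class below is a hypothetical stand-in of my own; it mimics only the call/result subset of the real cloud module, running every "job" immediately on the local machine:

```python
# LocalCloud is a hypothetical helper, NOT part of the PiCloud "cloud"
# module; it mirrors only the call/result pattern used in this post.
class LocalCloud(object):
    def __init__(self):
        self._results = {}
        self._next_jid = 0

    def call(self, func, *args, **kwargs):
        # run the "job" right away and hand back a job id
        jid = self._next_jid
        self._next_jid += 1
        self._results[jid] = func(*args, **kwargs)
        return jid

    def result(self, jid):
        # return the stored result of a finished job
        return self._results[jid]

local_cloud = LocalCloud()
jid = local_cloud.call(lambda x, y: x + y, 1, 2)
print(local_cloud.result(jid))  # -> 3
```

Swapping local_cloud for the real cloud module then changes where the work runs, not how the code reads.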
In [4]:
import numpy as np

def double(x):
    return 2 * x

# the exact call was lost from this post; reconstructed to match Out[4]
jid = cloud.call(double, np.arange(9.0))
cloud.result(jid)

Out[4]:
array([  0.,   2.,   4.,   6.,   8.,  10.,  12.,  14.,  16.])
If your code uses less common modules that are not preinstalled on the cloud, then PiCloud will try to send all the necessary information from your computer to the cloud. This works really well unless you use modules which have non-Python code in them. Unfortunately, qutip is such a module, because some of its functions are compiled in C for speed. So in order to use qutip on the cloud, you have to make PiCloud aware of it. PiCloud provides environments for this purpose. Environments are customizable machine images in which you can install all the modules your code needs. When you want to make use of an environment later on, you just pass its name as an argument to the functions provided by the cloud module. And what is even better, once created, environments can easily be shared among users. So to get you started, I have made public an environment in which I installed qutip, called "/mbaden/precise_with_qutip". If you are curious how I did that, or want to create your own environment, take a look at the very last section of this post.
But first let's simulate something on the cloud, using the public environment. Let's start by having a look at our QuTiP installation in the cloud.
In [5]:
jid = cloud.call(version_table, _env='/mbaden/precise_with_qutip')
cloud.result(jid)

Out[5]:
Software      Version
Cython        0.17.1
SciPy         0.11.0
QuTiP         2.2.0
Python        2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3]
IPython       0.13
OS            posix [linux2]
Numpy         1.6.2
matplotlib    1.1.1
Tue Apr 2 13:31:07 2013
Note that the versions of qutip and some of the other packages differ between my machine and the cloud. This can be the case for any extensions with non-Python code, such as numpy. It is probably a good idea to work in a virtual environment on your local machine that has the same versions of all packages as the environment on PiCloud, to ensure consistency. For now, we will just ignore this.

A simple example

Now that we have QuTiP up and running on PiCloud let's look at a simple example on how we can use the cloud to do actual computations.
In [6]:
from qutip import *
import time

In this example we are interested in the steady-state intra-cavity photon number for a single atom coupled to a cavity, as a function of the coupling strength. First we define the relevant operators and construct the Hamiltonian. One part of the Hamiltonian is static, while another part depends on the coupling strength. We will state the problem in a form where a single function iterates over the problem for various coupling strengths with as little overhead as possible.
In [7]:
wc = 1.0 * 2 * pi   # cavity frequency
wa = 1.0 * 2 * pi   # atom frequency
N = 25              # number of cavity fock states
kappa = 0.05        # cavity decay rate
n_th = 0.5          # temperature of the thermal bath

a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), destroy(2))
nc = a.dag() * a
na = sm.dag() * sm

c_ops = [sqrt(kappa * (1 + n_th)) * a, sqrt(kappa * n_th) * a.dag()]

H0 = wc * nc + wa * na
H1 = (a.dag() + a) * (sm + sm.dag())

args = {
    'H0': H0,
    'H1': H1,
    'c_ops': c_ops
}

# Create a list of 400 coupling strengths. If you have a slow computer,
# you may want to decrease that number to ~50 first.
g_vec = linspace(0, 2.5, 400) * 2 * pi

def compute_task(g_vec, args):
    H0, H1, c_ops = args['H0'], args['H1'], args['c_ops']
    n_vec = zeros(g_vec.shape)
    for k, g in enumerate(g_vec):
        H = H0 + g * H1
        rho_ss = steadystate(H, c_ops)  # this line was missing from the listing
        n_vec[k] = expect(nc, rho_ss)
    return n_vec

Here, the important function is compute_task, which takes the list (numpy array) over which we iterate and some static arguments hidden in the args dictionary. Let's also define a useful function for plotting the results.
In [8]:
def visualize_results(g_vec, n_vec):
    fig, ax = subplots()
    ax.plot(g_vec, n_vec, lw=2)
    ax.set_xlabel('Coupling strength (g)')
    ax.set_ylabel('Photon Number')
    ax.set_title('# of photons in the steady state')

Ok, now we are ready to do the calculation, both locally and on the cloud. Let's start with the local calculation. Note that above I made the list of coupling strengths 400 entries long, in order to get the next calculation to take roughly a minute on my local machine. Depending on how fast or slow your machine is, your results may vary significantly, so if you repeat the example yourself you might want to go back and set the list to only 50 entries at first.
In [9]:
t0 = time.time()
n_vec = compute_task(g_vec, args)  # this call was missing from the listing
t1 = time.time()

print "elapsed =", (t1-t0)

elapsed = 70.7504220009

In [10]:
visualize_results(g_vec / (2 * pi), n_vec)

So far so good. Now let's move the calculation to the cloud. We have to break the calculation down into a number of jobs that can be run in parallel and submit them to PiCloud. Their scheduler will estimate the workload and distribute the jobs among different machines. In the common case of iterating over a list of parameters, solving the same problem over and over, breaking the task into independent chunks is quite easy.
We will use numpy's array_split function to create a number of chunks from our coupling-strength array, create a function from our compute task that takes only one argument (the list of input parameters), and submit it to the cloud via cloud.map. In a last step, we use numpy's concatenate to assemble a single output array again.
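Stripped of the cloud-specific parts, this chunk-and-stitch pattern can be sketched locally like so. The squaring function here is just a placeholder for the real QuTiP computation, and an ordinary list comprehension stands in for cloud.map:

```python
import numpy as np

def compute_task(g_chunk):
    # stand-in for the real QuTiP steady-state computation
    return g_chunk ** 2

g_vec = np.linspace(0, 2.5, 400)
g_chunks = np.array_split(g_vec, 10)             # 10 independent jobs
n_chunks = [compute_task(g) for g in g_chunks]   # cloud.map(...) on PiCloud
n_vec = np.concatenate(n_chunks)                 # stitch the results back together
```

Because each chunk is independent, the jobs can run on different machines without any coordination, which is exactly what makes this problem embarrassingly parallel.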
In [11]:
t0 = time.time()
no_chunks = 10
g_chunks = array_split(g_vec, no_chunks)
# the cloud.map call was partially lost from this post; reconstructed
# around the surviving _type='c2' fragment
jids = cloud.map(lambda g: compute_task(g, args), g_chunks,
                 _env='/mbaden/precise_with_qutip', _type='c2')
n_chunks = cloud.result(jids)
n_vec = concatenate(n_chunks)
t1 = time.time()

print "elapsed =", (t1-t0)

elapsed = 39.917057991

In [12]:
visualize_results(g_vec / (2 * pi), n_vec)

Yes, we have successfully sped up our calculation using PiCloud. Note that this example is a bit artificial, since a calculation taking just a minute is not something you would usually need to delegate to a faster computer. As your jobs get longer and you submit more of them, PiCloud will keep increasing the number of cores available to you, and you will see more significant speedups. Plus, there are many more ways to tweak PiCloud to your liking.
Happy simulating!

Footnote: Creating a QuTiP Environment

There are two easy ways to get your own environment with qutip installed. The first way is to clone the public environment I created. It is named /mbaden/precise_with_qutip, and you should be able to find it if you search for qutip. The second way is to create the environment yourself. To do so, go to the environments page of your PiCloud account, select Ubuntu Precise as the base environment, and log in via the web console. To install QuTiP, type
sudo apt-get install python-software-properties
sudo apt-get update
sudo apt-get install python-qutip

Here, the first step is an additional step compared to the installation instructions in the QuTiP documentation; it installs add-apt-repository, which is not present in the base environment. By the way, these steps are exactly what I did to create /mbaden/precise_with_qutip.

Why Release Your Code Part II: Reproducing Scientific Results

It has been a while since we discussed the topic of publishing your source code and the issue of reproducibility in numerical simulations.  I thought it would be insightful to consider an example where we try to reproduce published results using QuTiP.  Here we will try to reproduce Figure #1 from Qian et al., "Quantum Signatures of the Optomechanical Instability", PRL 109, 253601 (2012).

This is an ideal example for several reasons: I) it is an interesting topic; II) this paper relies almost entirely on numerical results to support its conclusions; III) it is a perfect project for applying QuTiP.

In essence, the goal of this work is to find steady-state solutions for an optomechanical system where the Wigner function of the mechanical oscillator, after performing a partial trace, has enough negative area that the state can be considered non-classical.  Therefore, we will be making extensive use of the steadystate solver in QuTiP.  Before we discuss how to do this, let us present the final results:

First of all, yes, it does not look as good, but hey, I am also not submitting it to PRL.  Second, you will notice that the subfigure on the far right looks a bit different from the results in the original PRL figure.  In particular, there is an extra region of non-classicality near Delta/wm=1 for lower coupling strengths, and the graph is quite "blocky" compared to the smooth regions presented by the original authors.  Why this is the case will be the focus of the discussion that follows.

Since this is an optomechanical system, it consists of two harmonic oscillators.  Therefore, to begin, we need to specify how many Fock states to consider for both the cavity (Nc) and mechanical (Nm) resonators.  Here we pick Nc=6 and Nm=45.  Originally I used Nc=4 and got the wrong answer.  After talking with the authors of the paper, I adjusted my values to match the ones they used.  All of the couplings and environmental decay rates can be obtained directly from the paper (in units of the mechanical frequency wm).  To find the steady state of the system in QuTiP, all we need to do is define the Hamiltonian and collapse operators and call:

rho_ss=steadystate(H,c_op_list)


where c_op_list is the list of collapse operators, and rho_ss is the steady state density matrix.  To generate subplots (a)-(h), all we need to do is calculate this steady-state for each value of detuning and coupling strength given in the paper.  Things like the Fano factor (part f) and Wigner functions can be calculated directly from this state.
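As an aside, the Fano factor mentioned for part (f) is straightforward to compute from the photon-number distribution P(n) of a steady state. A minimal sketch (the function name and the Poissonian test case are my own, not from the paper):

```python
import numpy as np
from math import exp, factorial

def fano_factor(P):
    # F = (<n^2> - <n>^2) / <n> for a number distribution P(n);
    # F = 1 for Poissonian (coherent-state) statistics, F < 1 is
    # sub-Poissonian, a signature of nonclassical light
    n = np.arange(len(P))
    mean_n = np.sum(n * P)
    var_n = np.sum(n ** 2 * P) - mean_n ** 2
    return var_n / mean_n

# sanity check: a Poisson distribution with mean 2, truncated at 60 terms
lam = 2.0
P = np.array([exp(-lam) * lam ** k / factorial(k) for k in range(60)])
```

In QuTiP the same distribution is simply the diagonal of the reduced density matrix, which is exactly what the script below feeds into its Fano-factor calculation.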


The most difficult part to reproduce was part (i) of the figure.  To generate it, I initially broke up the region of interest, detuning from -1 to +1 and coupling from g0/kappa=0.8 to 1.2, into a grid of points and calculated the Wigner function at each of these locations.  From these Wigner functions, we can extract the minimum value, and the ratio of the number of negative Wigner values to the number of positive values.  These two numbers give the criteria for non-classicality used in the paper.  Per their definition, for a state to be nonclassical, the magnitude of the minimal Wigner value should be greater than 5% of the maximum Wigner value, and the number of negative Wigner elements should be >= 3% of the number of positive elements.


The first of these criteria is a valid numerical benchmark; however, physically, having just one point that is negative doesn't mean much for the entire state.  Therefore, the authors supplemented this with the much more physically relevant condition that the total negative Wigner area should be an appreciable part of the total.  The problem is that, as it stands, this criterion is subject to numerical errors.  This is because values that should be identically zero sometimes pick up small positive or negative components at the end of a numerical calculation.  Therefore, little values such as +/- 1e-15 can distort the ratio of negative to positive Wigner terms.  To get around this, I set a threshold of tol=1e-8 such that any Wigner value -1e-8 <= w <= +1e-8 is set to zero.  This is quite a liberal bound for numerical errors, but it works nonetheless.
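Putting the two criteria and the zero threshold together, the test can be expressed as a small function. This is my own restatement of the logic; the script at the end of this post inlines the same checks:

```python
import numpy as np

def is_nonclassical(W, tol=1e-8):
    # values with |W| <= tol are treated as zero (numerical noise)
    num_neg = np.count_nonzero(W < -tol)
    num_pos = np.count_nonzero(W > tol)
    # criterion 1: |min(W)| is at least 5% of max(W)
    # criterion 2: negative values number at least 3% of positive ones
    return (W.min() <= -0.05 * W.max()) and (num_neg >= 0.03 * num_pos)

# a strictly positive Wigner function should not be flagged ...
W_pos = np.ones((10, 10))
# ... while one with a sizable negative patch should be
W_neg = np.ones((10, 10))
W_neg[:2, :2] = -0.5
```

The two toy arrays are illustrative only; in the real calculation W comes from QuTiP's wigner function evaluated on the xvec grid.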



After setting up the calculation, the results did not agree with those of the original paper.  After contacting the lead author, I found out why.  First, in generating part (i), the authors change the power of the input drive, called alpha_L.  However, in the caption of the figure and in the text of the paper, the authors say this parameter is fixed.  Therefore, the reader would have no idea that they vary this parameter.  Furthermore, the equation they use to vary the laser power is not in the paper, nor in the supplementary material.  The authors plan to fix this in an erratum.  In addition, the authors do not worry about the numerical errors from small non-zero terms.  To generate part (i), the authors calculated the states at several points and then determined, by eye, whether they satisfy the non-classicality constraints.  Subfigure (i) plots the boundaries of the regions found via this method.  Having fixed our simulation to account for the changing input power alpha_L, it was found that our tolerance value for zeros must be set to tol=1e-4 to obtain sufficiently similar results to the original paper.  This is equivalent to saying that, in an experiment, one can only measure values with magnitude greater than this threshold.

The final result is presented in the QuTiP-generated figure above.  The final thing to consider is the extra region of non-classicality near Delta/wm=+1.  It could be that the authors missed this section using their search method.  And indeed, using their parameters, this seems to be the case.  However, this extra region of non-classicality is actually a numerical artifact resulting from not considering enough basis states for the cavity resonator at these larger driving powers (again, this fact is missing from the paper).  Increasing the number of basis states to Nc=8 and Nm=60 makes this region disappear.
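A simple way to guard against this kind of truncation artifact is to check how much population the steady state leaves in the top few Fock states; if it is not negligible, the basis is too small. A minimal sketch, with my own choice of helper name and threshold:

```python
import numpy as np

def truncation_ok(populations, n_top=3, threshold=1e-6):
    # populations = diagonal of the (reduced) density matrix; the
    # highest-lying Fock states should be essentially unoccupied,
    # otherwise Nc or Nm must be increased
    return np.sum(populations[-n_top:]) < threshold

# a thermal-like distribution that is well converged in a 45-state basis
n_bar = 0.5
p_th = (n_bar / (1.0 + n_bar)) ** np.arange(45) / (1.0 + n_bar)
```

Running such a check at the largest driving power in the scan would have caught the spurious region before any comparison with the paper was needed.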

What is the end message?  Well, what was originally planned to be a single day's worth of calculations turned into a week's worth of emailing with the authors and adjusting parameters to try to reproduce their results.  Had they included their source code with the paper, this could have been avoided.  The author stated that they did not have any room to discuss computational details in the manuscript; however, I find this answer pretty weak given that PRL has supplementary material, which the authors used, made explicitly for details such as these.  After discussing with the authors, I now understand what they were trying to show with their calculations, and I was actually quite impressed with the main author's knowledge of numerical simulations.  Unfortunately, a lot of this didn't come through in the paper.  Hopefully when the erratum gets published this will all become clear.

QuTiP Script for Generating Part (i) from Original Plot:

"""
This script regenerates the data for Figure #1 part (i) from:

Qian et al., PRL 109, 253601 (2012).

This script is meant to be used in collaboration with the:

opto_figure.py

script that plots all of Figure #1.  This file MUST BE run first.

"""
from scipy import *
from qutip import *

#parameters (all rates in units of wm)
#-------------------------------------
Nc=6    #Number of cavity states
Nm=45   #Number of mechanical states
kappa=0.3  #Cavity damping rate
gamma=0.00147         #Mech damping rate
tol=1e-4                #sets the limit for what is considered zero (measurable)

#operators
idc=qeye(Nc)
idm=qeye(Nm)
a=tensor(destroy(Nc),idm)
b=tensor(idc,destroy(Nm))

#collapse operators
cc=sqrt(kappa)*a
cm=sqrt(gamma)*b
c_op_list=[cc,cm]

#number of steps in detuning (delta) direction
steps=21
delta=linspace(-1.0,1.0,steps)
#range and number of steps for Wigner function
xvec=linspace(-10,10,500)

#steady state solver function for parfor
#---------------------------------------
def func(args):
    x=args[0]; g0=args[1]
    #change laser driving strength, this is omitted in the paper :(
    alpha=sqrt(0.0125/(g0**2))
    #-------------------------
    H=(-delta[x]+g0*(b.dag()+b))*(a.dag()*a)+b.dag()*b+alpha*(a.dag()+a)
    rho_ss=steadystate(H,c_op_list) #solve for steady state (line was missing from listing)
    rho_b=ptrace(rho_ss,1)
    W=wigner(rho_b,xvec,xvec)
    neg_inds=where(W<-tol)      #find all negative values
    pos_inds=where(W>tol)       #find all positive values
    zero_inds=where(abs(W)<tol) #find all zero values
    min_val=W.min()             #get minimum Wigner value
    num_neg=len(W[neg_inds[0],neg_inds[1]]) #number of neg values
    num_pos=len(W[pos_inds[0],pos_inds[1]]) #number of pos values
    #determine if state is nonclassical according to definition
    if min_val<=-0.05*W.max() and num_neg>=0.03*num_pos:
        nonclass=1 #yes
    else:
        nonclass=0 #no
    return nonclass

#number of steps in the coupling (G) direction
steps2=21
G=linspace(0.8,2.0,steps2)*kappa
non_class=zeros((steps2,steps))

#loop over the array of coupling values
for kk in range(steps2):
    args=array([[jj,G[kk]] for jj in range(steps)])
    results=parfor(func,args)
    non_class[kk,:]=results
    print kk, " done" #gives some feedback as to progress of calculation

#use QuTiP's store to file function
file_data_store("opto_data_%s_%s.dat" % (steps2,steps),non_class,numtype='real')





QuTiP Script for Generating the Entire Figure (After running the above script):


"""
This script regenerates Figure #1 from:

Qian et al., PRL 109, 253601 (2012).

To use this script, you MUST first run its companion script:

opto_regions.py

that generates the required data file:

opto_data_21_21.dat

corresponding to the nonclassicality tests for part (i)
in the original PRL Figure.

"""
from scipy import *
from qutip import *
from pylab import *
import matplotlib as mpl  #needed for mpl.cm.binary below

#parameters (all rates in units of wm)
#-------------------------------------
Nc=6     #Number of cavity states
Nm=45    #Number of mechanical states
alpha=0.311   #Coherent state amplitude
g0=0.36           #Coupling strength
kappa=0.3   #Cavity damping rate
gamma=0.00147          #Mech damping rate
xvec=linspace(-10,10,500)  #Wigner area and sampling

#operators
idc=qeye(Nc)
idm=qeye(Nm)
a=tensor(destroy(Nc),idm)
b=tensor(idc,destroy(Nm))

#Hamiltonian terms that do not depend on detuning or coupling
H_pre=b.dag()*b+alpha*(a.dag()+a)

#collapse operators
cc=sqrt(kappa)*a
cm=sqrt(gamma)*b
c_op_list=[cc,cm]

#array of points we specifically want
elems=[-0.8,-0.6,-0.25,0.0]
#setup array of detunings to iterate over
delta=linspace_with(-1.0,0.2,20,elems)
#get indices of elems in delta
inds=array([i for i in range(len(delta)) if delta[i] in elems])

#---------------------------
def func(x):
    H=(-delta[x]+g0*(b.dag()+b))*(a.dag()*a)+H_pre
    rho_ss=steadystate(H,c_op_list) #solve for steady state (line was missing from listing)
    numa=expect(a.dag()*a,rho_ss)
    rho_b=ptrace(rho_ss,1)
    numb=expect(num(Nm),rho_b)
    numb2=expect(num(Nm)**2,rho_b)
    fano=(numb2-numb**2)/numb
    print delta[x]," done"
    return numa,numb,fano,rho_b

#run steadystate in parallel over delta array
results=parfor(func,range(len(delta)))

#Make plot of results
#--------------------

#setup a plot grid using gridspec so we can control subplot placement
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(3,9)
#set subplot spacings
gs.update(wspace=0.75,hspace=0.40)

#add all wigner function plots by iterating over inds
for kk in range(4):
    ind=inds[kk]
    W=wigner(results[3][ind],xvec,xvec)
    wmap=wigner_cmap(W)
    if kk!=3:
        ax=subplot(gs[kk,0])
        contourf(xvec,xvec,W,50,cmap=wmap)
        ax.xaxis.set_major_locator(MaxNLocator(3))
        ax.yaxis.set_major_locator(MaxNLocator(3))
    else:
        ax=subplot(gs[0:2,1:3])
        ax.xaxis.set_major_locator(MaxNLocator(3))
        ax.yaxis.set_major_locator(MaxNLocator(3))
        wig_den=contourf(xvec,xvec,W,50,cmap=wmap)
        title("Wigner Density")
        cbar=colorbar(wig_den)
        cbar.set_ticks(linspace(round(W.min(),2),round(W.max(),2),6))

#add plot for phonon occupation number
ax=subplot(gs[2:3,1:3])
ax.xaxis.set_major_locator(MaxNLocator(4))
ax.yaxis.set_major_locator(MaxNLocator(4))
plot(delta,results[1],'g',lw=1.5)
plot(delta[inds],results[1][inds],'ko')
xlabel(r"Detuning $\Delta/\omega_{m}$")
ylabel(r"Phonon Number")
xlim([-1.0,0.2])

ax=subplot(gs[0:1,3:7])
ax.xaxis.set_major_locator(MaxNLocator(4))
ax.yaxis.set_major_locator(MaxNLocator(4))
plot(delta,results[2],'r',lw=1.5)
plot(delta[inds],results[2][inds],'ko')
xlabel(r"Detuning $\Delta/\omega_{m}$")
ylabel(r"Fano Factor")
axhline(y=1,lw=1)
xlim([-1.0,0.2])

#add plot for P(n) at g0=0.36
ax=subplot(gs[1:,3:5])
ax.xaxis.set_major_locator(MaxNLocator(4))
ax.yaxis.set_major_locator(MaxNLocator(4))
plot(range(Nm),results[3][inds[-1]].diag(),'k',lw=1.5)
plot(range(Nm),results[3][inds[-2]].diag(),'r--',lw=1.5)
ax.xaxis.set_major_locator(MaxNLocator(5,integer=True))
ax.yaxis.set_major_locator(MaxNLocator(4))
xlabel(r"$n$")
ylabel(r"$P(n)$")
xlim([0,Nm-1])

#generate steady state solutions at larger coupling strength g0=0.6
#required for reproducing part (h) of original plot

g0=0.6  #stronger coupling
alpha=0.186     #laser driving strength

#use delta = -0.5 and get steady state
delta=-0.5
H05=(-delta+g0*(b.dag()+b))*(a.dag()*a)+b.dag()*b+alpha*(a.dag()+a)
rho05=steadystate(H05,c_op_list) #this line was missing from the listing
#mechanical DM at delta=-0.5
rhob5=ptrace(rho05,1)

#change detuning to delta = -0.25 and get steady state
delta=-0.25
H025=(-delta+g0*(b.dag()+b))*(a.dag()*a)+b.dag()*b+alpha*(a.dag()+a)
rho025=steadystate(H025,c_op_list) #this line was missing from the listing
#mechanical DM at delta=-0.25
rhob25=ptrace(rho025,1)

ax=subplot(gs[1:,5:7])
ax.xaxis.set_major_locator(MaxNLocator(5,integer=True))
ax.yaxis.set_major_locator(MaxNLocator(4))
plot(range(Nm),rhob5.diag(),'r--',lw=1.5)
plot(range(Nm),rhob25.diag(),'k',lw=1.5)
xlim([0,Nm-1])
xlabel(r"$n$")
ylabel(r"$P(n)$")

#Plot data generated from parameter space plot
#of non-classicality of the mechanical resonator.

#get data from file (written by opto_regions.py; load call was missing from listing)
data=file_data_read("opto_data_21_21.dat")
delta_from_data=linspace(-1.0,1.0,len(data[0,:]))
coupling_from_data=linspace(0.8,2.0,len(data[:,0]))
ax=subplot(gs[0:,7:])
ax.set_xlim(0.8,2.0)
ax.set_ylim(-1.0,1.0)
plot(1.2*ones(4),[0,-0.25,-0.6,-0.8],'ro',markersize=10)
text(1.25, 0, "D",ha='left', va='bottom',fontsize=18, color='g')
text(1.25, -0.25, "C",ha='left', va='bottom',fontsize=18, color='g')
text(1.25, -0.6, "B",ha='left', va='bottom',fontsize=18, color='g')
text(1.25, -0.8, "A",ha='left', va='bottom',fontsize=18, color='g')
#plot data
im=imshow(flipud(data.T),cmap=mpl.cm.binary)
im.set_extent([0.8,2.0,-1,1])
ax.xaxis.set_major_locator(MaxNLocator(4))
ax.yaxis.set_major_locator(MaxNLocator(4))
xlabel(r"Coupling Strength $g_{0}/\kappa$",fontsize=12)
ylabel(r"Detuning $\Delta/\omega_{m}$",fontsize=12)

#see what we get. Figure needs to be expanded manually (sorry)
show()