Grain Boundary Motion 4
Bill has corrected the boundary conditions, so let's restate the problem. We have (the first equation is missing due to some bug or other in trac).
with a jump at , but both and are continuous. The jump condition for the first equation is,
This works perfectly as the equation can now be written
in the finite volume formulation. The issue is how to provide a boundary condition for the second equation with the correct form for the jump condition. Define to be the rotation of the grain boundary and the displacement angle of the upper surfaces in the frame of reference where the grain boundary is vertical. The following holds,
Essentially, we have to eliminate the angles to obtain an expression involving only the and the surface energies. That is easy enough to do, but what we really want is a jump condition that fits with equation 2. Ideally, this would be of the form
where is a constant. This isn't possible with the above expression involving the angles and grain boundaries. It is quite obvious that
is just the jump in where is an arc length. This condition requires the vertical displacement from the horizontal and hence knowledge of the orientation of the grain boundary. In terms of the angles,
From a practical point of view, I suppose we can deal with this boundary condition simply by iterating at each time step. However, there seems to be something deeply unsatisfying about that.
Grain Boundary Motion 3
The solution for grain boundary motion with translation is given by,
with
where
and
The comparison with the 1D numerical solution seems to be quite good. The numbers are , , , , and .
The code for this is source:trunk/boettinger/grainBoundary/gb.py@1561.
Grain Boundary Motion 2
In blog:GrainBoundaryMotion the calculation of the analytical solution was broken. The complete analytical solution including the power series is
where
and
The power series are given by,
with
and
with
The following image is generated using revision 1559 (source:trunk/boettinger/grainBoundary/gbStationary.py@1559) for , , , and .
Grain Boundary Motion
Bill has a simple 1D model for grain boundary motion and hillock formation. The governing equation is given by
where
with two jump conditions at .
We can include these conditions in the main equations if we write the equations in the weak form:
and
The fipy script is in source:trunk/boettinger/grainBoundary/gb.py.
Version 1551 (source:trunk/boettinger/grainBoundary/gb.py@1551) compares against an analytical solution given by:
where
where
The solution is good for small M in the vicinity of the groove and .
For values of , and with a box size of 40 and 200 grid points, the analytical and numerical solutions are in good agreement in the location of the groove at t=10.
Another test is with an analytical solution for . This looks like
where
and
The code for this comparison is source:trunk/boettinger/grainBoundary/gb.py@1553.
The image below is for at t=10 using the first 14 terms in the power series.
The analytical solution does a good job in the region of the groove, but doesn't rise above as it does when . Obviously, the power series is not much good if the argument is much above 1. If we use this as a boundary then we get at , which seems to correspond with the domain where the solutions are the same.
Notes on Light Propagation using the Eikonal
I want to try and figure out whether it is possible to solve the eikonal equation for a complex function of the wave fronts to simulate absorption of light. Here, we will recap assuming the wave front function is real. The wave equation for the electric field is given by,
where we will assume an electric field intensity of
where
If we assume a short wavelength then the wave equation reduces to
The time averaged Poynting vector is given by,
and the intensity is given by,
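For reference, here is a sketch of the standard short-wavelength argument that I assume the (missing) equations above correspond to, following the usual textbook treatment rather than anything specific to this problem:

\[
\nabla^2 \mathbf{E} - \frac{n^2}{c^2}\frac{\partial^2 \mathbf{E}}{\partial t^2} = 0,
\qquad
\mathbf{E} = \mathbf{A}(\mathbf{r})\, e^{i k_0 (\psi(\mathbf{r}) - c t)},
\]
and keeping only the leading terms in \(k_0\) gives the eikonal equation
\[
|\nabla \psi|^2 = n^2,
\]
with the time-averaged Poynting vector directed along \(\nabla \psi\) and the intensity scaling as \(I \propto n |\mathbf{A}|^2\).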
Electrical problem including metal diffusion
This post includes the effects of the counter electrode and the metal diffusion. The metal diffusion is included via a solution where the metal gradient doesn't change rapidly at the interface. This is not really very physical, but it will be useful when testing numerical solutions.
At the working electrode :
At the counter electrode :
where
The normals point out of the electrolyte.
In the bulk:
and
where and are the distances from the counter and working electrodes to the reference electrode, respectively. is the total distance between the electrodes.
The metal diffusion has an analytical solution
where
This solution is valid when . This leads to two dimensionless quantities that must be small,
and
We can now derive an expression for .
Using the above solution we can derive a new expression for .
If we do this, the equations can be reduced to two coupled ODEs for and :
and
The material parameters are:
Integrating this gives:
The code for this is source:trunk/moffat/electrical/misc/1D_metal.py.
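For reference, a minimal sketch of the integration pattern (not the actual 1D_metal.py; the right-hand sides and rate constants below are placeholders, since the real ones are the coupled ODEs above):

import numpy as np
from scipy.integrate import odeint

# placeholder right-hand sides and rate constants standing in for the real coupled ODEs
def rhs(y, t, k1, k2):
    eta, c = y
    return [-k1 * eta + k2 * (1.0 - c),
            -k2 * c]

t = np.linspace(0.0, 10.0, 200)
sol = odeint(rhs, [0.0, 1.0], t, args=(1.0, 0.5))
eta, c = sol.T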
The evolution of the :
and the cupric concentrations at various times:
Symmetric Electrical Problem
Extending blog:MoffatElectrical1 by making the system totally symmetric.
The changes are:
14c14
< alpha = 0.5
---
> alpha = 0.4
30c30
< return i0 * (numpy.exp(-alpha * Fbar * V) - numpy.exp((1 - alpha) * Fbar * V))
---
> return i0 * (numpy.exp(-alpha * Fbar * V) - numpy.exp((2 - alpha) * Fbar * V))
34c34
< - (1 - alpha) * Fbar * numpy.exp((1 - alpha) * Fbar * V))
---
> - (2 - alpha) * Fbar * numpy.exp((2 - alpha) * Fbar * V))
gives:
The code for this is source:trunk/moffat/electrical/misc/1D_symmetric.py.
The evolution of the :
Electrical Problem including counter electrode
Including the effects of the counter electrode.
At the working electrode :
At the counter electrode :
The normals point out of the electrolyte.
In the bulk:
and
where and are the distances from the counter and working electrodes to the reference electrode, respectively. is the total distance between the electrodes.
If we do this, the equations can be reduced to two coupled ODEs for and :
and
The material parameters are:
Integrating this gives:
The code for this is source:trunk/moffat/electrical/misc/1D_counter.py.
The evolution of the :
A few changes to Bill's code
Bill wants me to make four changes to his code; see source:trunk/kirkendall/A-V_phase-field.py@1457. The changes are:
- Update to be an expression involving . This is done in r1458.
- The second item is to check source:trunk/kirkendall/A-V_phase-field.py@1457#L71 against Bill's notes. It seems correct other than that he left out the diffusion term in his notes by mistake.
- Save the data from the simulations for later use, and read the saved files and plot the results independently of the simulations. See diff:@1458:1460. This now writes a data directory of the form data/A-V_phase-field_17_36_53Thu02Feb2012. The data files are stored as numpy zip files (.npz) and numbered by their time step value. plotdata.py reads and plots from the latest data directory created.
The name of an alternative path can be specified as an argument
$ python plotdata.py data/A-V_phase-field_17_33_42Thu02Feb2012
This will use the specified directory, while
$ python plotdata.py
will use the latest created data directory. plotdata.py plots the data for the four main fields using the stored data.
- Add an improved method for extracting the 0.5 level set. See r1461. This is done using scipy's interp1d. After plotting the four main fields for every stored timestep, plotdata.py will throw up a plot of the interface position.
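For reference, a minimal sketch of an interp1d-based 0.5 level set extraction; the variable names and the interpolation details are assumptions rather than what plotdata.py actually does:

import numpy as np
from scipy.interpolate import interp1d

# Hypothetical sketch: find where a 1D field phi crosses 0.5 by interpolating
# x as a function of phi (phi is assumed monotonic across the interface).
def interface_position(x, phi, level=0.5):
    order = np.argsort(phi)
    return float(interp1d(phi[order], x[order])(level))

# e.g. a smooth step centred at x = 0.3
x = np.linspace(0.0, 1.0, 101)
phi = 0.5 * (1.0 - np.tanh((x - 0.3) / 0.05))
print(interface_position(x, phi))  # ~0.3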
Light Propagation
Jon is interested in adding a better light calculation to his PV code. At the moment the Beer-Lambert law/equation is being used for light. Firstly we need to understand what is wrong with the current technique. We've been discussing the use of the level set method / fast marching method as an alternative to using FiPy's basic first order upwind scheme, which is quite inefficient and is not working for some problems. Some more thoughts:
- We don't have a second order accurate implicit scheme in FiPy.
- Would using a second order explicit scheme be very inefficient? It is a one-time cost since the problem is static, but it could get prohibitively bad for the sort of meshes Jon may need for accurate solutions.
- The FMM will be far more efficient, and we have second order schemes available, but not on unstructured meshes.
- Can we even solve the Beer-Lambert equation using the Eikonal equation?
- We need to understand what is going wrong with the current solution that Jon showed me in 1D and why it seems to deviate from the analytical solution. I believe that it is the grid spacing.
Given the Beer-Lambert equation,
let's try to solve a 1D problem with a shock using implicit upwind. See trunk/misc/light.py. With a fixed grid, things seem quite good even though this is a 1D scheme; not sure if this carries over to 2D. Basically, the velocity goes from 10 to 1 half way across the domain and we see a shock as expected ( is constant). The results are not so bad.
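For reference, a minimal sketch of a first order implicit upwind discretization of this kind of problem (not the actual trunk/misc/light.py; I'm assuming the steady transport form d(vI)/dx = -alpha*I, and the grid, absorption coefficient and inlet intensity are made up):

import numpy as np

nx, L, alpha = 400, 1.0, 5.0          # assumed grid and absorption coefficient
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
v = np.where(x < L / 2, 10.0, 1.0)    # velocity drops from 10 to 1 half way across

I = np.empty(nx)
flux_in = v[0] * 1.0                  # assumed unit intensity entering at x = 0
for i in range(nx):
    # implicit balance over cell i: (v_i I_i - flux_in) / dx = -alpha I_i
    I[i] = flux_in / (v[i] + alpha * dx)
    flux_in = v[i] * I[i]
# the upwind flux v*I is continuous across the velocity jump, so I itself shows a shock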
The upwind implicit scheme can deal with shocks (at least in 1D). Let's observe how it deals with a large change in the mesh density (see trunk/misc/lightVariableMesh.py). Let's try a 100-fold change. A small jump occurs.
A ten-fold increase. Seems fine.
A thousand-fold increase. Big jump.
The jumps are not surprising and are consistent with a first order scheme. The tail has the right shape even though it has a large displacement. The PV 1D result did not have a tail that modeled the local solution correctly. Maybe that is to do with a gradient of changes in the mesh density rather than a single jump.
Adding a gradual-change example, trunk/misc/lightVariableMeshGradual.py, shows this:
This could be the issue with the PV code.
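For reference, a minimal sketch of what a gradual grading looks like in FiPy (not the actual trunk/misc/lightVariableMeshGradual.py; the spacing, growth ratio and cell count are made up):

import numpy as np
import fipy as fp

# cell spacing grows geometrically by 5% per cell instead of jumping at one point
dx0, ratio, ncells = 0.001, 1.05, 200
dx = dx0 * ratio ** np.arange(ncells)
mesh = fp.Grid1D(dx=dx)
print(mesh.numberOfCells, dx.sum())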
Tom's Electrical Problem
Tom has come up with a new problem that involves solving the electric field. Here are the equations at the electrode interface ():
and in the bulk:
on the top electrode ():
At first glance, I didn't see that this system is closed. Since the equation is steady state and surrounded by Neumann boundary conditions, it only needs one boundary to fix the value (it is an initial value problem for the slope of rather than a boundary value problem for ).
Solving for the 1D problem
We have (assuming the normal points out of the electrolyte)
where is the distance between the electrodes
This leads to an ODE for
This can be solved with scipy's ODE integrator. To do this I took an example <http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html> and used that. The code is source:trunk/moffat/electrical/1D.py.
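The pattern from that scipy example looks roughly like the following; the right-hand side here is a hypothetical placeholder, not the actual ODE derived above:

from scipy.integrate import ode

# hypothetical right-hand side standing in for the real ODE
def f(t, y, k):
    return [-k * (y[0] - 1.0)]   # simple relaxation toward a steady value

r = ode(f).set_integrator('vode', method='bdf')
r.set_initial_value([0.0], 0.0).set_f_params(2.0)

dt, t_end = 0.1, 5.0
while r.successful() and r.t < t_end:
    r.integrate(r.t + dt)
    print(r.t, r.y[0])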
The results are
and
With the following numbers:
We get the following steady state values:
Installing SAMRAI
In order to use lsmlib in parallel, I need to install SAMRAI. Of course this is easier said than done. I downloaded SAMRAI 2.4.4 and did the following on bunter:
$ mkdir build
$ cd build
$ ../configure --prefix=/users/wd15/.virtualenvs/lsmlib --with-F77=/usr/bin/gfortran
checking build system type... x86_64-unknown-linux-gnu
...
config.status: executing default commands
$ make
make NDIM=1 compile || exit 1
...
g++ -g -Wall -fno-implicit-templates -I. -I../../../include -I../../../../include -I/usr/lib/openmpi/include -c ../../../../source/toolbox/restartdb/HDFDatabaseFactory.C -o HDFDatabaseFactory.o
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C: In member function ‘virtual SAMRAI::tbox::Pointer<SAMRAI::tbox::Database> SAMRAI::tbox::HDFDatabaseFactory::allocate(const std::string&)’:
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:20: error: ‘HDFDatabase’ was not declared in this scope
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:20: error: template argument 1 is invalid
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:20: error: invalid type in declaration before ‘=’ token
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:20: error: expected type-specifier before ‘HDFDatabase’
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:20: error: invalid conversion from ‘int*’ to ‘int’
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:20: error: expected ‘,’ or ‘;’ before ‘HDFDatabase’
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:21: error: invalid conversion from ‘int’ to ‘SAMRAI::tbox::Database*’
../../../../source/toolbox/restartdb/HDFDatabaseFactory.C:21: error: initializing argument 1 of ‘SAMRAI::tbox::Pointer<TYPE>::Pointer(TYPE*, bool) [with TYPE = SAMRAI::tbox::Database]’
make[4]: *** [HDFDatabaseFactory.o] Error 1
make[4]: Leaving directory `/users/wd15/packages/SAMRAI/build/source/toolbox/restartdb'
make[3]: *** [libXd] Error 1
make[3]: Leaving directory `/users/wd15/packages/SAMRAI/build/source/toolbox'
make[2]: *** [libXd] Error 1
make[2]: Leaving directory `/users/wd15/packages/SAMRAI/build/source'
make[1]: *** [compile] Error 1
make[1]: Leaving directory `/users/wd15/packages/SAMRAI/build'
make: *** [library] Error 1
So that didn't work. I had the following exchange with someone on the SAMRAI list. I wrote
Hi,

I'm trying to build SAMRAI version 2.4.4. The configure step seems to have no issues, but the compilation step throws the following error.

$ make
make NDIM=1 compile || exit 1
make[1]: Entering directory `/users/wd15/Documents/python/SAMRAI'
(cd source && make libXd) || exit 1
...
...
...
g++ -g -Wall -fno-implicit-templates -I. -I../../../include -I../../../include -I/usr/lib/openmpi/include -c HDFDatabaseFactory.C -o HDFDatabaseFactory.o
HDFDatabaseFactory.C: In member function ‘virtual SAMRAI::tbox::Pointer<SAMRAI::tbox::Database> SAMRAI::tbox::HDFDatabaseFactory::allocate(const std::string&)’:
HDFDatabaseFactory.C:20: error: ‘HDFDatabase’ was not declared in this scope
HDFDatabaseFactory.C:20: error: template argument 1 is invalid
HDFDatabaseFactory.C:20: error: invalid type in declaration before ‘=’ token
HDFDatabaseFactory.C:20: error: expected type-specifier before ‘HDFDatabase’
HDFDatabaseFactory.C:20: error: invalid conversion from ‘int*’ to ‘int’
HDFDatabaseFactory.C:20: error: expected ‘,’ or ‘;’ before ‘HDFDatabase’
HDFDatabaseFactory.C:21: error: invalid conversion from ‘int’ to ‘SAMRAI::tbox::Database*’
HDFDatabaseFactory.C:21: error: initializing argument 1 of ‘SAMRAI::tbox::Pointer<TYPE>::Pointer(TYPE*, bool) [with TYPE = SAMRAI::tbox::Database]’
make[4]: *** [HDFDatabaseFactory.o] Error 1
...
...
...
make[1]: *** [compile] Error 1
make[1]: Leaving directory `/users/wd15/Documents/python/SAMRAI'
make: *** [library] Error 1

Any ideas what the issue may be? Other details:

$ uname -a
Linux bunter 2.6.32-5-amd64 #1 SMP Mon Oct 3 03:59:20 UTC 2011 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 6.0.3 (squeeze)
Release: 6.0.3
Codename: squeeze

All other packages are at standard versions for Debian Squeeze including the HDF5 libraries.

Thanks
and I got the reply
Daniel,

HDFDatabase.h is guarded with a #ifdef HAVE_HDF5. From the error message it appears that HAVE_HDF5 is not defined. Grep for HAVE_HDF5 in SAMRAI_config.h to see if it is defined.

I suspect that something went wrong during configuration and you are not getting HDF5. Another clue that this is likely the problem is that there is no hdf5 include directory on the compile line.

Let us know what you find.

-- Bill Arrighi
Lawrence Livermore National Lab
so I tried a new configure step to include hdf5
$ ../configure --prefix=/users/wd15/.virtualenvs/lsmlib --with-F77=/usr/bin/gfortran --with-hdf5
checking build system type... x86_64-unknown-linux-gnu
...
checking for HDF5 installation... /usr
checking whether HDF5 link works... no
configure: error: HDF5 compile/link test failed
Andrew suggested debugging the configure script with the -x flag. The "set -x" was placed on line 19422. Anyway, it showed that the configure script is trying to build and link the following C++ file (conftest.cpp):
#include "hdf5.h" #define FILE "file.h5" int main() { hid_t file_id; /* file identifier */ herr_t status; /* Create a new file using default properties. */ file_id = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); /* Terminate access to the file. */ status = H5Fclose(file_id); }
with
$ g++ -o conftest -g -O2 conftest.cpp -lz -ldl -lm -lz -lm
/tmp/ccn31txn.o: In function `main':
/users/wd15/packages/SAMRAI/build/conftest.cpp:10: undefined reference to `H5check_version'
/users/wd15/packages/SAMRAI/build/conftest.cpp:10: undefined reference to `H5Fcreate'
/users/wd15/packages/SAMRAI/build/conftest.cpp:13: undefined reference to `H5Fclose'
collect2: ld returned 1 exit status
if I do
$ g++ -o conftest -g -O2 conftest.cpp -lz -ldl -lm -lz -lm -lhdf5
$ ./conftest
it works fine. Looking at the configure code, it looks like it wants a directory for --with-hdf5:
../configure --prefix=/users/wd15/.virtualenvs/lsmlib --with-F77=/usr/bin/gfortran --with-hdf5=/usr
gets through the configure step without issue. However, make fails with loads of errors like:
g++ -g -Wall -fno-implicit-templates -I. -I../../../include -I../../../../include -I/usr/include -I/usr/lib/openmpi/include -c ../../../../source/toolbox/restartdb/HDFDatabase.C -o HDFDatabase.o
/usr/include/H5Gpublic.h: In static member function ‘static herr_t SAMRAI::tbox::HDFDatabase::iterateKeys(hid_t, const char*, void*)’:
/usr/include/H5Gpublic.h:148: error: too many arguments to function ‘hid_t H5Gopen1(hid_t, const char*)’
../../../../source/toolbox/restartdb/HDFDatabase.C:160: error: at this point in file
/usr/include/H5Gpublic.h:148: error: too many arguments to function ‘hid_t H5Gopen1(hid_t, const char*)’
../../../../source/toolbox/restartdb/HDFDatabase.C:180: error: at this point in file
../../../../source/toolbox/restartdb/HDFDatabase.C:204: error: invalid conversion from ‘herr_t (**)(hid_t, void*)’ to ‘void**’
/usr/include/H5Epublic.h:212: error: too many arguments to function ‘herr_t H5Eget_auto1(herr_t (**)(void*), void**)’
../../../../source/toolbox/restartdb/HDFDatabase.C:204: error: at this point in file
/usr/include/H5Epublic.h:216: error: too many arguments to function ‘herr_t H5Eset_auto1(herr_t (*)(void*), void*)’
...
make[4]: *** [HDFDatabase.o] Error 1
make[4]: Leaving directory `/users/wd15/packages/SAMRAI/build/source/toolbox/restartdb'
make[3]: *** [libXd] Error 1
make[3]: Leaving directory `/users/wd15/packages/SAMRAI/build/source/toolbox'
make[2]: *** [libXd] Error 1
make[2]: Leaving directory `/users/wd15/packages/SAMRAI/build/source'
make[1]: *** [compile] Error 1
make[1]: Leaving directory `/users/wd15/packages/SAMRAI/build'
make: *** [library] Error 1
python setup.py develop issues
If python setup.py develop does not seem to be updating easy-install.pth correctly, then try running pip install setuptools --upgrade and rerunning python setup.py develop.
Installing Ubuntu on the Acer Aspire One Netbook
I purchased the Aspire One at Costco for $300 with the hope that Ubuntu would install smoothly without glitches. Unsurprisingly, it wasn't a straightforward experience. The first problem was that the monitor aspect ratio was wrong: all text was squashed vertically. Luckily, the first 11.04 update fixed these problems. It also prompted me to install two proprietary drivers, which I did. Before the drivers were installed I couldn't load Unity; after they were installed I could.
At first this all seemed fine apart from a weird AMD watermark in the bottom right hand corner of the screen. I did the googles and found the following, which fixed the issue,
Totally gross, but it works.
This all seemed totally fine, but after a while the computer just started to freeze and needed to be rebooted. This turned out to be harder to fix in terms of finding answers. I believe that it is somehow connected to the wireless, as it always seems to happen when the wireless is starting up, but who knows. Anyway, I found this helpful post:
<http://ubuntuforums.org/showpost.php?p=10571078&postcount=16>
This worked (thank god). I had to boot into safe mode in order to have enough time to edit the necessary file. After this things all seem to be fine other than Unity not working. I'm not too bothered about that because I'm perfectly happy with classic mode. In fact classic seems a bit snappier. I'm not really sure about Unity anyway (what does it really add?).
Overall, compared with my now defunct Dell mini, it's a lot faster. Firefox works properly, which it didn't on the mini. I'm a happy bunny once more for $300 and some head scratching.
Generating Passwords
pwgen -Bcny 12 1
Making lam work across multiple machines
To make lam work across multiple machines take the following steps:
- Create a bhost file
bunter.nist.gov cpu=4
hobson.nist.gov cpu=4
alice.nist.gov cpu=4
- $ export LAMRSH=ssh
- $ lamboot -ssi boot_rsh_ignore_stderr 1 bhost
- $ mpirun -np 12 python script.py
This should then work transparently across the three machines. The stdout streams from the other machines didn't appear on the screen, but they were certainly running in parallel.
mpi4py, trilinos, lam and i686
We needed both trilinos and mpi4py working against lam on the i686 machines so that we can test effectively and run parallel jobs unopposed. I built mpi4py on bunter by
- making sure absolutely everything with mpi4py in it was removed from site-packages
- Adding
[i686-lam]
mpi_dir = /usr/lib/lam
mpicc = /usr/bin/mpicc.lam
mpicxx = /usr/bin/mpic++.lam
to mpi.cfg and running python setup.py build --mpi=i686-lam in the i686-lam virtualenv.
I built trilinos using i686-lam with
../configure CXX=mpic++.lam CC=mpicc.lam F77=mpif77.lam CXXFLAGS=-O3 CFLAGS=-O3 FFLAGS="-O5 -funroll-all-loops -malign-double" --enable-epetra --enable-aztecoo --enable-pytrilinos --enable-ml --enable-ifpack --enable-amesos --with-gnumake --enable-galeri --enable-mpi --with-mpi=/usr/lib/lam --cache-file=config.cache --prefix=/users/wd15/.virtualenvs/i686-lam
Problem in virtualenvwrapper
I've had some issues with virtualenvwrapper. Whenever I start up a new shell, the shell hangs when virtualenvwrapper is installed. I debugged this by putting a "set -x" at the top of "virtualenvwrapper.sh". "virtualenvwrapper.sh" is sourced from ".bashrc". Anyway, the debug information pointed the finger at line 193 of "virtualenvwrapper.sh"
fi trap "rm '$file' >/dev/null 2>&1" EXIT echo $file
It hangs on the "rm" command as my "rm" command is aliased to "rm -i". I am not sure what the right thing to is. For now I have just set
fi trap "\rm '$file' >/dev/null 2>&1" EXIT echo $file
which deals with the issue. Is the problem with "virtualenvwrapper.sh" because it is assuming that "rm" is the default version or should my aliases come last in the .bashrc file because commands should be the default during the execution of the "rc" type files?