New Policy for Commit Messages
From now on, the following codes should be used for commit messages. The first line of the commit message should begin with one of the following codes, followed by a colon.
- API: an (incompatible) API change
- BLD: change related to building fipy
- BUG: bug fix
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- DOC: documentation
- ENH: enhancement
- MAINT: maintenance commit (refactoring, typos, etc.)
- REV: revert an earlier commit
- STY: style fix
- TST: addition or modification of tests
- REL: related to releasing fipy
These codes are the current Numpy standard (see http://docs.scipy.org/doc/numpy/dev/gitwash/development_workflow.html#writing-the-commit-message).
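For example, the first line of a bug-fix commit might look like this (the summary itself is hypothetical):

{{{
BUG: fix sign error in the coupled third-order example
}}}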
Commit and Ticket Update Automation
Jon has enabled the http://trac.edgewall.org/wiki/CommitTicketUpdater plugin.
Use it as follows:
- In the future, when making a commit that responds to a bug, include "addresses ticket:1234" or "references ticket:1234" in the commit message.
- When merging a pull request, include "fixes ticket:1234" or "closes ticket:1234".
These will automatically add the commit message to the ticket and, if appropriate, close the ticket.
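For example, a hypothetical merge commit using this convention might read:

{{{
ENH: merge the coupled equations branch

closes ticket:1234
}}}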
SvnToGit
See wiki:SvnToGit.
How to represent a third order term?
A third order equation of the form

$$\frac{\partial \phi}{\partial t} = -\frac{\partial^3 \phi}{\partial x^3}$$

can be recast as the coupled pair

$$\frac{\partial \phi}{\partial t} = \frac{\partial \psi}{\partial x}$$

and

$$\psi = -\frac{\partial^2 \phi}{\partial x^2}$$

I haven't tried this, but this should be possible with a coupled equation in a fully implicit manner. Here is an attempt. The sign on the diffusion term seems to make a large difference to the stability.
{{{#!python
from fipy import Grid1D, CellVariable, TransientTerm, DiffusionTerm, Viewer
from fipy import UpwindConvectionTerm, ImplicitSourceTerm

m = Grid1D(nx=100, Lx=1.)
phi = CellVariable(mesh=m, hasOld=True, value=0.5, name=r'$\phi$')
psi = CellVariable(mesh=m, hasOld=True, name=r'$\psi$')
phi.constrain(0, m.facesLeft)
phi.constrain(1, m.facesRight)
psi.constrain(0, m.facesRight)
psi.constrain(0, m.facesLeft)

# d(phi)/dt = d(psi)/dx
eqnphi = TransientTerm(1, var=phi) == UpwindConvectionTerm([[1]], var=psi)
# psi = -d^2(phi)/dx^2
eqnpsi = ImplicitSourceTerm(1, var=psi) == -DiffusionTerm(1, var=phi)
eqn = eqnphi & eqnpsi

vi = Viewer((phi, psi))
for t in range(1000):
    phi.updateOld()
    eqn.solve(dt=0.001)
    print phi
    vi.plot()
raw_input('stopped')
}}}
Including the Transverse Waves in the Roe Convection Term
This is a continuation from blog:RoeConvectionTerm, where the first order and second order corrections are described for the Roe convection term. These schemes have now been implemented in source:riemann/fipy/terms/roeConvectionTerm.py@5160. The scheme gives identical results to the spinning cone and block in CLAWPACK, but the second order scheme has not been tested with multiple equations yet (the spinning cone/block example only has one field).
The scheme needs to be updated to include the transverse waves. I believe this would make it truly second order accurate. Unfortunately, I'm still a little shaky on the derivation for the scheme.
Andrew Reeve's FiPy Linux Image
http://www.ctcms.nist.gov/fipy/download/reeveFiPyLinuxImage.iso
I have created a disk image that can be installed on an 8 GByte thumb drive. By installing this onto a USB thumb drive, and then booting from the thumb drive through the computer's boot menu, you will be put into a fully functional Linux (Arch Linux) system that contains FiPy, a text editor, and the associated Python software (among other things). The 'wicd' program (type 'wicd-curses' in a terminal window) on the thumb drive can be used to access a wireless network, and the pacman package manager can be used to update/add software on the thumb drive. The image must be written to the 8 GByte thumb drive using a low-level copying program, such as 'dd' or 'rawrite'. Care should be taken to not copy over your hard drive! All data on your thumb drive will be overwritten when using these programs.
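For example, with 'dd' the command looks something like this, where /dev/sdX is a placeholder for the thumb drive's device name; verify it carefully first, since pointing it at the wrong device will destroy that device's data:

{{{
dd if=reeveFiPyLinuxImage.iso of=/dev/sdX bs=4M
sync
}}}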
1) Access your bootloader when logging onto your computer by hitting the appropriate key when booting up. The key you need to hit seems to vary with the make of the computer.
2) When you access the Linux bootloader, there are four options corresponding to different device names (sda, sdb, sdc, and sdd). Load the one that corresponds to the USB port on your computer. On the laptops I've used, 'sdb' has been the correct device name.
3) I think gmsh is also on the thumb drive.
4) mayavi2 has not been added to the thumb drive, but can be added using the 'yaourt' package manager, which automates building packages from source.
Let me know if people actually use these. I'd be willing to update the images once or twice a year if people find them useful.
Andy
Setting Up A Debug Environment
I've been trying to set up a debuggable version of fipy in a virtualenv for debugging a Trilinos issue. Here are the steps:
- Install the python-dbg package from the Debian repositories.
- Use mkvirtualenv -p python-dbg debug to make the debug environment.
- Install numpy with pip install (the regular package, not a debug build).
- Install swig in the standard way.
- Here is the do-configure script for Trilinos:
{{{
EXTRA_ARGS=$@
TRILINOS_HOME=/users/wd15/Documents/python/trilinos-10.8.3-Source
CMAKE=cmake
PYTHON_EXECUTABLE=${VIRTUAL_ENV}/bin/python

${CMAKE} \
  -D CMAKE_BUILD_TYPE:STRING=DEBUG \
  -D Trilinos_ENABLE_PyTrilinos:BOOL=ON \
  -D BUILD_SHARED_LIBS:BOOL=ON \
  -D Trilinos_ENABLE_ALL_OPTIONAL_PACKAGES:BOOL=ON \
  -D TPL_ENABLE_MPI:BOOL=ON \
  -D Trilinos_ENABLE_TESTS:BOOL=ON \
  -D PYTHON_EXECUTABLE:FILEPATH=${PYTHON_EXECUTABLE} \
  -D DART_TESTING_TIMEOUT:STRING=600 \
  -D CMAKE_INSTALL_PREFIX:PATH=${VIRTUAL_ENV} \
  -D PyTrilinos_INSTALL_PREFIX:PATH=${VIRTUAL_ENV} \
  ${EXTRA_ARGS} \
  ${TRILINOS_HOME}
}}}
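Assuming the above is saved as do-configure, the usual Trilinos pattern is to run it from a separate build directory, along these lines (the directory names are just an example):

{{{
mkdir build
cd build
sh ../do-configure
make
make install
}}}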
Pyximport Weirdness
There seems to be an issue with using pyximport under certain conditions. Essentially, the problem occurs when pyximport is used here. When this loads from within the setup.py test environment it doesn't build on benson or on Jon's laptop. It does work on sandbox, which is using python 2.7 (rather than 2.5) and cython 0.15 (same as bunter). I'm not sure about Jon's configuration.
The error is that the compiler is trying to find the .c file in fipy/tools instead of the default .pyxbld/ directory. This manifests as the following error:
{{{
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/users/wd15/.virtualenvs/riemann/lib/python2.5/site-packages/numpy/core/include -I/users/wd15/.virtualenvs/riemann/include/python2.5 -c /users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.c -o /users/wd15/.pyxbld/temp.linux-i686-2.5/users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.o
gcc: /users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.c: No such file or directory
gcc: no input files
}}}
I have sort of figured out where the issue is occurring, but it appears that magic is happening that doesn't seem possible. Initially, to debug, I turned DEBUG to True in ~/.virtualenvs/riemann/lib/python2.5/site-packages/pyximport/pyxbuild.py. That led me to see what the problem is. If one looks in ~/.virtualenvs/riemann/lib/python2.5/site-packages/pyximport/pyximport.py one will see the following method:
{{{#!python
def get_distutils_extension(modname, pyxfilename):
#    try:
#        import hashlib
#    except ImportError:
#        import md5 as hashlib
#    extra = "_" + hashlib.md5(open(pyxfilename).read()).hexdigest()
#    modname = modname + extra
    extension_mod,setup_args = handle_special_build(modname, pyxfilename)
    print 'extension_mod',extension_mod
    print 'pyxfilename',pyxfilename
    print 'modname',modname
    if not extension_mod:
        from distutils.extension import Extension
        extension_mod = Extension(name = modname, sources=[pyxfilename])
        import distutils.extension
        print 'distutils.extension.__file__',distutils.extension.__file__
        print 'extension_mod.sources',extension_mod.sources
        print 'pyxfilename',pyxfilename
##        extension_mod.sources = [pyxfilename]
        print 'extension_mod.sources',extension_mod.sources
        print type(pyxfilename)
        raw_input('stopped')
    return extension_mod,setup_args
}}}
When pyximport runs through this it gives the following output
{{{
extension_mod None
pyxfilename /users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.pyx
modname fipy.tools.smallMatrixVectorOpsExt
in extension self.sources ['/users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.pyx']
self.sources ['/users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.pyx']
distutils.extension.__file__ /users/wd15/Documents/python/fipy/riemann/tmp/distutils/extension.py
extension_mod.sources ['/users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.c']
pyxfilename /users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.pyx
extension_mod.sources ['/users/wd15/Documents/python/fipy/riemann/fipy/tools/smallMatrixVectorOpsExt.c']
<type 'str'>
}}}
What's weird here is that the pyxfilename passed to Extension has had its extension changed from .pyx to .c. Now, this doesn't occur on sandbox: the file name never changes and everything runs smoothly. On bunter, if I explicitly set extension_mod.sources as I do in the commented line, then everything works. So what hocus-pocus is occurring in the Extension class to change .pyx to .c? Nothing! Absolutely nothing. Extension is just a standalone class (no parents) and all it does is set self.sources = sources. In fact, if I print self.sources on the last line of Extension's __init__ it still has the .pyx file extension, but it reverts to the .c file extension as soon as it returns to pyximport.py. What could possibly be occurring? Extension has no other methods. pyxfilename is a string. This seems impossible.
I finally figured out what was going on. When the tests are running, a different version of Extension is imported by from distutils.extension import Extension. The version the tests import lives in the virtualenv; when I run the script outside of the tests it doesn't use the virtualenv's version of distribute. The version in the virtualenv was dropping into the following:
{{{#!python
try:
    from Pyrex.Distutils.build_ext import build_ext
except ImportError:
    have_pyrex = False
else:
    have_pyrex = True

class Extension(_Extension):
    """Extension that uses '.c' files in place of '.pyx' files"""

    if not have_pyrex:
        # convert .pyx extensions to .c
        def __init__(self,*args,**kw):
            _Extension.__init__(self,*args,**kw)
            sources = []
            for s in self.sources:
                if s.endswith('.pyx'):
                    sources.append(s[:-3]+'c')
                else:
                    sources.append(s)
            self.sources = sources
}}}
The code was changing all the .pyx extensions to .c. This is in distribute version 0.6.10. Updating to distribute version 0.6.24 fixed this, as that snippet of code has been updated. Installing distribute was non-trivial: simply doing {{{pip install distribute --upgrade}}} broke everything, because it wanted to install to the system directories instead of the virtualenv, and it was unable to roll back, breaking the virtualenv entirely. For some reason the --environment=$VIRTUAL_ENV flag was needed to avoid this.
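A quick way to confirm which Extension implementation and which distribute version a virtualenv actually picks up is a diagnostic sketch like this, run from inside the virtualenv:

{{{#!python
import distutils.extension
print distutils.extension.__file__  # reveals if the virtualenv's copy shadows the stdlib

import pkg_resources
print pkg_resources.get_distribution('distribute').version
}}}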
Roe Convection Term
As a relatively painless entry to the implementation of higher order Riemann solvers in FiPy, I'm planning on implementing a basic Roe solver. Given a basic vector advection equation,

$$\frac{\partial q}{\partial t} + \nabla \cdot \left( A q \right) = 0,$$

flux updates for a Roe solver can be written,

$$F = \frac{1}{2} \left( A_L q_L + A_R q_R \right) \cdot \hat{n} - \frac{1}{2} \left| \hat{A} \right| \left( q_R - q_L \right),$$

where the flux is across a face between cell $L$ and cell $R$ and the normal $\hat{n}$ points from $L$ to $R$. The question when implementing Roe solvers is how to approximate $\hat{A}$ from the local Riemann problem

$$\frac{\partial q}{\partial t} + \hat{A} \frac{\partial q}{\partial n} = 0,$$

where

$$q\left(n, 0\right) = \begin{cases} q_L & n < 0 \\ q_R & n > 0 \end{cases}$$

It is generally impossible to obtain an exact calculation for $A$ at the face. Strictly speaking, a Roe solver involves quite a complicated method to obtain an approximation for $A$, $\hat{A}$. For the time being, in order to test the constant coefficient problems, we will use a simple calculation for $\hat{A}$, namely,

$$\hat{A} = \frac{1}{2} \left( A_L + A_R \right) \cdot \hat{n}.$$

From $\hat{A}$, we can obtain

$$\left| \hat{A} \right| = R \left| \Lambda \right| R^{-1},$$

where $R$ is the matrix of right eigenvectors and $\left| \Lambda \right|$ is the matrix of the absolute values of the eigenvalues along the diagonal.
I can pretty much see how to implement this algorithm in fipy without too many issues. I can see that getting $\left| \hat{A} \right|$ from $\hat{A}$ is not going to be easy, since an eigenvector/eigenvalue calculation is required for every face. At the moment the only way I see to do this is actually calculating $R$ and $\Lambda$ and reconstructing. Maybe there is a clever way to do this?
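As a minimal numpy sketch of the per-face calculation, assuming a diagonalizable $\hat{A}$ (the function name and test matrices are only illustrative):

{{{#!python
import numpy as np

def abs_matrix(Ahat):
    # |Ahat| = R |Lambda| R^-1, built from the eigendecomposition of Ahat
    lam, R = np.linalg.eig(Ahat)
    return np.dot(np.dot(R, np.diag(np.abs(lam))), np.linalg.inv(R))

# e.g. the simple face approximation Ahat = (A_L + A_R) / 2
A_L = np.array([[0., 1.], [1., 0.]])
A_R = np.array([[0., 2.], [2., 0.]])
print abs_matrix(0.5 * (A_L + A_R))
}}}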
Higher Order Correction
Here we make the scheme second order on a regular grid. In the notation of LeVeque (as used in CLAWPACK), the corrected flux is given by,

$$F = F^{(1)} + \tilde{F},$$

where $F^{(1)}$ is the first order Roe flux above. The correction $\tilde{F}$ is given by

$$\tilde{F} = \frac{1}{2} \sum_p \left| \hat{\lambda}^p \right| \left( 1 - \frac{\Delta t}{\Delta x} \left| \hat{\lambda}^p \right| \right) \tilde{W}^p A_f,$$

where $\Delta x$ is the cell to cell distance and $A_f$ is the face area. The limited wave $\tilde{W}^p$ is given by,

$$\tilde{W}^p = \tilde{\alpha}^p \hat{r}^p, \qquad \tilde{\alpha}^p = \alpha^p \phi\left(\theta^p\right),$$

where $\phi$ is a flux limiter and $\theta^p = \alpha^p_{up} / \alpha^p$ compares the wave strength at the upwind face to the local one. The $\alpha$'s are given by,

$$\alpha^p = \hat{l}^p \cdot \left( q_R - q_L \right),$$

where $\hat{r}^p$ and $\hat{l}^p$ are the right and left eigenvectors of $\hat{A}$ and $\hat{\lambda}^p$ the corresponding eigenvalues.
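A minimal numpy sketch of the wave decomposition and limiting step, assuming a minmod limiter (the helper name and the limiter choice are illustrative, not necessarily what roeConvectionTerm.py does):

{{{#!python
import numpy as np

def limited_wave_strengths(Ahat, qL, qR, alpha_upwind):
    # decompose the jump q_R - q_L into wave strengths alpha = R^-1 (q_R - q_L)
    lam, R = np.linalg.eig(Ahat)
    alpha = np.linalg.solve(R, qR - qL)
    # ratio of upwind to local wave strengths, guarding against division by zero
    theta = alpha_upwind / np.where(alpha == 0., 1e-30, alpha)
    # minmod limiter
    phi = np.maximum(0., np.minimum(1., theta))
    return lam, R, phi * alpha
}}}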
Vector Diffusion Terms
We are now planning to implement equations of the form,

$$\frac{\partial q_i}{\partial t} + \frac{\partial}{\partial x_j} \left( u_{jik} q_k \right) = \frac{\partial}{\partial x_j} \left( \Gamma \frac{\partial q_i}{\partial x_j} \right)$$

This has already been done partially in source:branches/vectorEquations. The convection term looks something like this,

$$\frac{\partial}{\partial x_j} \left( u_{jik} q_k \right)$$

The coefficient $u_{jik}$ is a rank 3 tensor in this case. The first index is always over the spatial range. The next two indices refer to the q's. We now have to address diffusion terms. The trouble is that we already have effective rank 3 coefficients for higher order diffusion terms, so it's all getting a bit complicated, but it is still manageable.
For higher order terms, we generally use a list or tuple and don't embed the index as part of the tensor, so I think that is automatically dealt with. So higher order terms should always be indexed with a tuple or list. That is our current policy and it makes sense. A second order diffusion term will now be of the following form,

$$\frac{\partial}{\partial x_j} \left( \Gamma \frac{\partial q_i}{\partial x_j} \right)$$
Of course the rank of $\Gamma$ can vary depending on whether anisotropy is required. This requires some interpretation. If $q$ is rank 0 then it will be assumed that all the indices of $\Gamma$ are spatial. If $q$ is rank 1 then it will be assumed that the last two indices are over the q's. For example, if the variable is $q_i$ and the coefficient is a rank 3 $\Gamma_{jik}$, then it's assumed that the term has a vector coefficient over the q's. If the variable was instead just a scalar, then FiPy should throw an error, as the coefficient has too many indices. Similarly, for a non-anisotropic vector equation the coefficient would simply be $\Gamma_{ik}$. The meaning of the coefficient indices changes based on context.
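A small numpy sketch of the shape bookkeeping implied by this convention (the dimensions and ranks here are illustrative assumptions, not the branch's actual API):

{{{#!python
import numpy as np

dim, n = 2, 3  # spatial dimension, number of coupled q components

u = np.zeros((dim, n, n))       # rank 3 convection coefficient: space, q, q
Gamma_vec = np.zeros((n, n))    # non-anisotropic vector diffusion: q, q
Gamma_aniso = np.zeros((dim, dim))  # anisotropic scalar diffusion: space, space
# same rank, different meaning, disambiguated by the rank of the variable
}}}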
Benchmarking the Efficiency Branch
An efficiency comparison between source:trunk@4057 and source:branches/efficiency@4057 on bunter.
Example | Source | Flags | CPUs | 9 steps (s): run 1 | run 2 | run 3 | run 4 | run 5 | memory (kB)
attachment:anisotropy.py | source:branches/efficiency@4057 | --pysparse | 1 | 25 | 25 | 29 | 29 | | 333552
attachment:anisotropy.py | source:branches/efficiency@4057 | --trilinos | 1 | 72 | 72 | | | | 417776
attachment:anisotropy.py | source:branches/efficiency@4057 | --trilinos | 2 | 48 | 48 | | | | 278368
attachment:anisotropy.py | source:branches/efficiency@4057 | --trilinos | 4 | 34 | 36 | | | | 192424
attachment:anisotropy.py | source:trunk@4057 | --pysparse | 1 | 29 | 29 | | | | 296020
attachment:anisotropy.py | source:trunk@4057 | --trilinos | 1 | 92 | 96 | | | | 395408
attachment:anisotropy.py | source:trunk@4057 | --trilinos | 2 | 49 | 53 | | | | 265860
attachment:anisotropy.py | source:trunk@4057 | --trilinos | 4 | 35 | 38 | | | | 187092
Coupled Efficiency
An efficiency comparison between source:trunk@4055 and source:branches/coupled@4055 on bunter.
Notes:
- Running 1000x1000 for CH used about 1.1 GB for both trunk and branches/coupled
- --trilinos is heinously slow for CH.
- RST wiki formatting doesn't seem to work very well, at least with the examples provided.
- branches/coupled appears to be no slower than trunk, nor does it appear to use more memory.
Reconciling the new boundary conditions with the old
As explained in this ticket, boundary conditions have now been updated to use constraints, and the FixedValue/FixedFlux boundary conditions are no longer required. As a check, we ran the source:branches/version-2_1@3873 examples against source:trunk@3873 and the convection examples threw a number of errors. On the surface, this is reasonable: the new (natural) boundary condition is in/outflow, and hence if one uses a FixedValue boundary condition, one is also getting an in/outflow boundary condition.
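For reference, the two styles look roughly like this; a minimal sketch assuming a 1D steady diffusion problem (the constrain call is the trunk-style replacement for FixedValue):

{{{#!python
from fipy import Grid1D, CellVariable, DiffusionTerm

mesh = Grid1D(nx=10)
phi = CellVariable(mesh=mesh)

# old style (version 2.1) passed boundary conditions to solve():
#     from fipy import FixedValue
#     eq.solve(var=phi, boundaryConditions=(FixedValue(faces=mesh.facesLeft, value=1.),))

# new style (trunk) constrains the variable directly:
phi.constrain(1., where=mesh.facesLeft)
DiffusionTerm().solve(var=phi)
}}}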
Source
Version 2.1 for this example didn't work against trunk.
We noticed that this example had an added outflow boundary condition; see line 70 of source.py. This should be removed on trunk, as trunk already includes the outflow boundary condition. If one removes the extra RHSBC, it still passes the test on trunk. Basically, the right outflow is minuscule, as the value of the variable on that edge is vanishingly small, and hence it doesn't matter whether RHSBC is included or not. We now understand the right BC issue. Now to understand the left BC.
The version 2.1 example, when run against trunk, has a value very close to 2 instead of 1 on the left hand boundary. The FixedValue boundary condition adds the boundary value's contribution to the matrix and RHS vector; in this case, it results in 1 being added to the RHS vector. The new constraint boundary conditions now add this boundary condition by default (regardless of whether the boundary value has been explicitly set); thus, the RHS vector is getting a value of 2 and, hence, the leftmost cell now has a value of 2 (the diagonal entry is 1 from the first interior face).
Note that the inflow/outflow boundary condition has the same discretization as the FixedValue boundary condition. The only difference is that an inflow/outflow condition does not require the external face to be explicitly fixed to a value, but uses whatever value var.getFaceValue() provides. If the boundary value is set, then this is the same as a FixedValue BC.
Robin
This example doesn't work when run against trunk, for the same reasons as above. I did notice that the FixedFlux boundary condition is only ever applied once, no matter how many terms invoke it. This meant that it previously worked as expected; see source:branches/version-2_1/fipy/boundaryConditions/fixedFlux@3873#L79.
Not much point going on; I think I understand why the rest of the examples fail.
Creating FiPy debs
This is how I created a fipy deb.
- apt-get install stdeb
- apt-get install debhelper
- svn co svn+ssh://[email protected]/usr/local/svn-fipy-repos/tags/version-2_1 CLEAN
- cd CLEAN
- Create stdeb.cfg, see below
- python setup.py --command-packages=stdeb.command bdist_deb
- The deb is deb_dist/python-fipy_2.1-1_all.deb
stdeb.cfg:
{{{
[DEFAULT]
Depends: python (>= 2.4), python-numpy (>= 1.3), python-sparse (>= 1.1), python-matplotlib, python-scipy, gmsh, python-pytrilinos
XS-Python-Version: >= 2.4
}}}
Check the archive by doing {{{ar xv python-fipy_2.1-1_all.deb}}}.
Seems to work. The control file looks reasonable when compared with pysparse's. Now I have to test it...
- Create an Ubuntu virtual machine, see http://www.packtpub.com/article/creating-first-virtual-machine-ubuntu-linux-part1
- Install the guest additions, see http://ubuntu-tutorials.com/2010/06/26/install-virtualbox-guest-additions-on-virtualbox-guests/
- run the update manager
- install the fipy deb, went smoothly
- svn co svn+ssh://[email protected]/usr/local/svn-fipy-repos/tags/version-2_1/examples fipy-examples
- cd fipy-examples
- python setup.py test
The tests all pass apart from three that have path problems with their local imports of the sort:
{{{
Traceback (most recent call last):
  ...
  from examples.levelSet.electroChem.leveler import runLeveler
ImportError: No module named examples.levelSet.electroChem.leveler
}}}
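One plausible workaround (a sketch, not necessarily the fix we used) is to make sure the directory containing the examples package is on sys.path before the import:

{{{#!python
import os, sys
# put the checkout root (the directory that contains the 'examples'
# package) on sys.path; the path below is only illustrative
sys.path.insert(0, os.path.abspath(os.curdir))
from examples.levelSet.electroChem.leveler import runLeveler
}}}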
Easy to fix. Now for trilinos:
- python setup.py test --Trilinos
All the tests work other than the three failures from before. Now for parallel:
- sudo apt-get install python-setuptools
- sudo apt-get install mpi-default-dev
- sudo apt-get install python-dev
- sudo easy_install mpi4py
- mpirun -np 2 python setup.py test --Trilinos
This gave 5 errors: the 3 levelset errors along with elphf.diffusion.mesh1D and diffusion.nthOrder.input4thOrder1D, which we already know about and which I think have been fixed. Next up, test matplotlib:
- mpirun -np 2 python diffusion/mesh1D.py --Trilinos
- mpirun -np 2 python phase/anisotropy.py --Trilinos
Takes ages, but shows something running in parallel. Now for mayavi:
- export FIPY_VIEWER=mayavi
Okay, none of the examples work with mayavi:
- python cahnHilliard/sphere.py
{{{
python: can't open file 'examples/cahnHilliard/sphereDaemon.py': [Errno 2] No such file or directory
viewer: NOT READY
}}}
- python phase/impingement/mesh20x20.py
{{{
...
AttributeError: 'NoneType' object has no attribute 'cell_data'
}}}
- python cahnHilliard/mesh2D.py
{{{
OpenGL Warning: Failed to connect to host. Make sure 3D acceleration is enabled for this VM.
}}}
Anyhow, I'll remove mayavi2 from the list of packages until we have a better idea about what is happening.
Fixing Bitten
Bitten is broken and I'm trying to fix it. Anyway, to cut a long story short, the slave is checking out the code just fine, but stops without running the tests:
{{{
[INFO ] A examples/elphf/generated/phaseDiffusion/binary.pdf
[INFO ] A examples/elphf/generated/phaseDiffusion/quaternary.png
[INFO ] A examples/elphf/generated/phaseDiffusion/ternaryAndElectrons.pdf
[INFO ] A examples/elphf/phaseDiffusion.py
[INFO ] U .
[INFO ] Checked out revision 3712.
[DEBUG ] svn exited with code 0
[INFO ] Build step checkout completed successfully
[DEBUG ] Sending POST request to 'http://matforge.org/fipy/builds/1414/steps/'
[DEBUG ] Server returned error 500: Internal Server Error (no message available)
[ERROR ] Exception raised processing step checkout. Reraising HTTP Error 500: Internal Server Error
[DEBUG ] Stopping keepalive thread
[DEBUG ] Keepalive thread exiting.
[DEBUG ] Keepalive thread stopped
[DEBUG ] Removing build directory /tmp/bittenSnQIvn/build_trunk_1414
[ERROR ] HTTP Error 500: Internal Server Error
[DEBUG ] Removing working directory /tmp/bittenSnQIvn
[INFO ] Slave exited at 2010-07-28 14:22:36
}}}
Observing the master's log file reveals the following error, which seems to occur at the same time when watching the two files' output side by side with tail -f. Could this be connected?
{{{
2010-07-28 14:26:36,900 Trac[perm] WARNING: perm.permissions() is deprecated and is only present for HDF compatibility
2010-07-28 14:27:45,230 Trac[main] ERROR: 'time'
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/Trac-0.11.1-py2.4.egg/trac/web/main.py", line 423, in _dispatch_request
    dispatcher.dispatch(req)
  File "/usr/local/lib/python2.4/site-packages/Trac-0.11.1-py2.4.egg/trac/web/main.py", line 197, in dispatch
    resp = chosen_handler.process_request(req)
  File "/usr/local/lib/python2.4/site-packages/Bitten-0.6dev_r562-py2.4.egg/bitten/master.py", line 93, in process_request
    return self._process_build_step(req, config, build)
  File "/usr/local/lib/python2.4/site-packages/Bitten-0.6dev_r562-py2.4.egg/bitten/master.py", line 229, in _process_build_step
    step.started = int(_parse_iso_datetime(elem.attr['time']))
  File "/usr/local/lib/python2.4/site-packages/Bitten-0.6dev_r562-py2.4.egg/bitten/util/xmlio.py", line 252, in __getitem__
    raise KeyError(name)
KeyError: 'time'
}}}
The times are out of whack because matforge is 5 minutes fast. Let me investigate further.
Analysis of parallel speed ups
In the process of working with James, I have tried to analyze some parallel runs more deeply. I should mention that http://www.mcs.anl.gov/~itf/dbpp is proving to be quite a useful text for understanding some of the concepts that I had overlooked. We can write an expression for the time for a given time step based on various aspects of the parallel partitioning, something like,

$$T = a\, n_i + b\, o_i + c\, p + d \sum_j n_j + e,$$

where $n_i$ is the number of cells on node $i$ including overlaps, $o_i$ is the number of overlapping cells on node $i$ and $p$ is the total number of nodes. The terms in the equation represent, in order,
- the local calculations (should be perfectly parallel in most of fipy outside the solver),
- the local processor to processor communication (questionable if this actually exists),
- global communication (probably more likely),
- calculations that are across the global cells (there should be none of this, very bad for scaling)
- a fixed penalty independent of the mesh or partitioning
To look at the relative influence of each term I did calculations for various grid sizes and numbers of nodes, recorded the times and fit the data with a least squares fit, using the anisotropy problem. In the least squares fit each timing value is weighted equally and the fastest out of 10 time steps is used for $T$. This is done because luggage has high variability, especially when egon is running OpenMP jobs. Using the attached scripts I get the fitted parameters along with the following plot. The problem with this fit is that two of the parameters are actually negative (unphysical) and one is way too big. This is caused because the script is not adjusting the number of crystals based on the box size (less comparative work in the solver per cell with fewer processors). As a quick fix we can assume the second and fourth terms are negligible and see how the fit looks. We get new fitted values and a new plot.
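A rough numpy sketch of the least squares fit described above (the variable names and data layout are assumptions; each array holds one entry per timed run):

{{{#!python
import numpy as np

def fit_timing_model(T, n_cells, n_overlap, n_nodes, n_global):
    # columns match the five terms of the timing model above: local
    # cells, overlap cells, node count, global cells and a constant
    X = np.column_stack([n_cells, n_overlap, n_nodes, n_global, np.ones_like(T)])
    return np.linalg.lstsq(X, T)[0]
}}}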
This needs to be rerun with updated timing values, with the number of crystals increased with the box size.
Testing Anisotropy Example on a 4000 x 4000 grid
I'm running the anisotropy example on 32 processors on luggage. The phase and theta images seem to make sense after 30 steps. It took 11593 seconds to reach this point.
I also ran a 200 x 200 grid with 1 crystal (which is roughly the area occupied by the crystals in the larger simulation). Running on 6 processors on poole it takes about 734 s to do 690 time steps. After 690 time steps the area of the crystal is 1.05 (the area of the box is 25). The initial area of the crystal is 0.05. So, doing a few calculations:
{{{
>>> numpy.pi * (0.025 * 5.)**2
0.049087385212340517
>>> numpy.sqrt(0.05 / numpy.pi)
0.126156626101008
>>> numpy.sqrt(1.05 / numpy.pi)
0.57812228852811087
>>> (0.57812228852811087 - 0.126156626101008) / 690
0.00065502269916971432
}}}
and the rate of expansion is 0.00066 per time step. Now, for the crystals to touch and show structure, each crystal must expand a distance of 2.5, which requires 2.5 / 0.00066 ~ 4000 time steps. The large 4000 x 4000 case will take ~18 days to get to a good point. Now, we might not need the crystal to move a distance of 2.5, but it will probably slow down as the box gets filled with solid. We don't have 17 days at this point, so on this evidence I'm going to scale back to 3000 x 3000 with 225 crystals. I think that will be pretty much guaranteed to show something good in 10 days.
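A quick sanity check on the time estimate using the numbers above (the per-step cost is inferred from the 30-step 4000 x 4000 run):

{{{#!python
steps_needed = 2.5 / 0.00066   # ~3800 time steps for the crystals to touch
secs_per_step = 11593. / 30    # from the 4000 x 4000 run above
print steps_needed * secs_per_step / 86400.  # roughly 17-18 days
}}}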
Okay, the 3000 x 3000 case is taking about 100 s a time step, which will take about 5 days to do 4000 time steps. Much more reasonable.
Testing the divorcePysparse branch
The tables below show wall clock simulation duration data, in seconds, for the source:branches/divorcePysparse@3716 branch and source:trunk@3716 on poole, for 10 time steps of source:sandbox/anisotropy.py@3717. The simulations were conducted 5 times each on 1, 2 and 4 processors.
source:branches/divorcePysparse@3716
1 CPU | 2 CPUs | 4 CPUs
30.80 | 17.57 | 10.14 |
37.29 | 19.73 | 9.74 |
30.51 | 19.18 | 9.73 |
32.23 | 16.86 | 10.35 |
30.27 | 16.86 | 10.27 |
source:trunk@3716
1 CPU | 2 CPUs | 4 CPUs
23.03 | 12.08 | 7.63 |
21.46 | 13.07 | 7.04 |
22.91 | 14.88 | 6.98 |
22.74 | 11.99 | 7.50 |
32.59 | 11.87 | 7.54 |