Posts for the month of March 2009

Parallel Efficiency Data

The following table shows run times and memory usage for the reactive wetting problem as the number of MPI processes is varied. The time step is $\delta t = 1 \times 10^{-10}$. The machine is poole, which has 8 nodes and 32 GB of memory.

category             no mpi   mpi 1   mpi 2   mpi 4   mpi 8
sweep 1 (s)              85      85      74      53      39
sweep 2 (s)             186     180     156     105      81
sweep 3 (s)             289     284     243     161     136
memory high (GB)        0.5     1.1     3.4     6.4    10.2
memory stable (GB)      0.5     1.1     3.4     3.9     2.8
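
For reference, speedup and parallel efficiency relative to the single-process MPI run can be computed directly from the sweep times. A minimal sketch using the sweep 3 row of the table above:

# Speedup and parallel efficiency from the sweep 3 timings,
# relative to the one-process MPI run (284 s).
t1 = 284.0
for nproc, t in [(2, 243.0), (4, 161.0), (8, 136.0)]:
    speedup = t1 / t
    efficiency = speedup / nproc
    print("%d procs: speedup %.2f, efficiency %.0f%%" % (nproc, speedup, efficiency * 100))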

The memory usage results are very strange. Initially the memory spikes at a high value somewhere in the initialization phase (memory high), and then for the rest of the simulation (all subsequent sweeps) it stays at or around the stable value (memory stable). The stable value seems to be roughly independent of the number of processors, although it is higher whenever more than one processor is used. The memory usage is taken from top, from the line reading Mem: XXXX total, YYYY used; the reported number is YYYY minus the initial YYYY.
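
As an illustration of that bookkeeping, here is a minimal sketch; the exact format of top's Mem: line varies between procps versions, so the regular expression is an assumption:

import re
import subprocess

def used_kb():
    # One batch snapshot from top; the "Mem:" line format is assumed to be
    # "Mem:  XXXXk total,  YYYYk used, ..." as printed by top at the time.
    out = subprocess.Popen(["top", "-b", "-n", "1"],
                           stdout=subprocess.PIPE).communicate()[0].decode()
    return int(re.search(r"Mem:\s*\d+k\s+total,\s*(\d+)k\s+used", out).group(1))

baseline = used_kb()                  # the initial YYYY, before the run
# ... run the simulation, sampling used_kb() as it goes ...
peak = used_kb()                      # YYYY at the point of interest
print("%.1f GB" % ((peak - baseline) / 1024.0 / 1024.0))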

  • Posted: 2009-03-17 13:16
  • Author: wd15
  • Categories: (none)
  • Comments (0)

Numpy on the Altix

Thus far, I've done the following.

I've added

[mkl]
library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/64
lapack_libs = mkl_lapack
mkl_libs = mkl_intel_lp64, mkl_intel_thread, mkl_lapack, mkl_core

to the site.cfg file in numpy and then built numpy with the following command:

python setup.py config --compiler=intel build_clib --fcompiler=intel build_ext --compiler=intel --verbose build install --prefix=${USR}
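
Once numpy builds and imports cleanly, a quick way to confirm that the MKL configuration from site.cfg was actually picked up is numpy's standard show_config(); the exact library names reported depend on the build:

# Print the BLAS/LAPACK configuration the installed numpy was built against;
# the mkl_* libraries listed in site.cfg should appear here if they were found.
import numpy
numpy.show_config()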

Currently I'm getting an import error:

>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/users/wd15/usr/ia64/2.6.16.60-0.23.PTF.403865.0-default//lib/python2.4/site-packages/numpy/__init__.py", line 143, in ?
    import linalg
  File "/users/wd15/usr/ia64/2.6.16.60-0.23.PTF.403865.0-default//lib/python2.4/site-packages/numpy/linalg/__init__.py", line 47, in ?
    from linalg import *
  File "/users/wd15/usr/ia64/2.6.16.60-0.23.PTF.403865.0-default//lib/python2.4/site-packages/numpy/linalg/linalg.py", line 29, in ?
    from numpy.linalg import lapack_lite
ImportError: /opt/intel/Compiler/11.0/081/mkl/lib/64/libmkl_intel_thread.so: undefined symbol: __kmpc_end_critical

  • Posted: 2009-03-11 17:18 (Updated: 2009-03-11 17:19)
  • Author: wd15
  • Categories: (none)
  • Comments (3)

Compiling Trilinos on the Altix with the Intel compilers

This looks promising (from a Trilinos mailing list entry):

../configure CC=icc CXX=icc F77=ifort CFLAGS="-O3 -fPIC" CXXFLAGS="-O3 -fPIC -LANG:std -LANG:ansi -DMPI_NO_CPPBIND" FFLAGS="-O3 -fPIC" --prefix=${USR} --with-install="/usr/bin/install -p" --with-blas="-L/opt/intel/Compiler/11.0/081/mkl/lib/64 -lmkl -lpthread" --with-lapack="-L/opt/intel/Compiler/11.0/081/mkl/lib/64 -lmkl_lapack -lpthreads" --enable-mpi --enable-amesos --enable-ifpack --enable-shared --enable-aztecoo --enable-epetra --enable-epetraext --enable-external --enable-ml --enable-threadpool --enable-thyra --enable-stratimikos --enable-triutils --enable-galeri --enable-zoltan --cache-file=config.cache

This is the "normal" thing we do on 64 bit:

../configure CXXFLAGS="-O3 -fPIC" CFLAGS="-O3 -fPIC" FFLAGS="-O5 -funroll-all-loops -fPIC" F77="g77" --enable-epetra --enable-aztecoo --enable-pytrilinos --enable-ml --enable-ifpack --enable-amesos --with-gnumake --enable-galeri --cache-file=config.cache --with-swig=$PATH_TO_SWIG_EXECUTABLE --prefix=$LOCAL_INSTALLATION_DIR
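
For the second configure line, which enables PyTrilinos, a minimal smoke test after installation might look like the following; it assumes the site-packages directory under $LOCAL_INSTALLATION_DIR is on PYTHONPATH:

# Quick check that the PyTrilinos wrappers built and can see the communicator.
from PyTrilinos import Epetra

comm = Epetra.PyComm()        # MpiComm under mpirun, SerialComm otherwise
print("process %d of %d" % (comm.MyPID(), comm.NumProc()))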

  • Posted: 2009-03-11 11:31 (Updated: 2009-03-18 15:09)
  • Author: wd15
  • Categories: (none)
  • Comments (12)