Ocean Assimilation Kit (OAK)


Source code is distributed under the terms of the GPL and is available at http://data-assimilation.net/upload/OAK/release/OAK-1.7.tar.gz

You need:

  • A Fortran 90 compiler
  • NetCDF (with Fortran 90 interface)
  • LAPACK library
  • an implementation of the BLAS library (such as ATLAS or GotoBLAS)

Optionally, for parallelization, either one of:

  • MPI (with Fortran 90 interface)
  • OpenMP (included in most Fortran 90 compilers)

Packages on Ubuntu/Debian

To compile with open-source gfortran, parallelized with MPI, at least the following packages are needed:

sudo apt-get install gfortran libatlas-base-dev liblapack-dev openmpi-bin libopenmpi-dev libnetcdf-dev netcdf-bin

Matlab/Octave scripts

The OAK package contains scripts which can run in Matlab and GNU Octave.


  • Documentation is available as PDF or HTML


  • Module ufileformat: high level input/output
  • Module matoper: matrix operators
  • Module ndgrid: n-dimensional interpolation on arbitrary curvilinear grids
  • Module rrsqrt: reduced-rank analysis updates
  • Module assimilation: main assimilation module


  • adapt config.mk by setting OS (operating system), FORT (Fortran compiler) and library locations. This is not necessary on Linux if you use gfortran, the nc-config utility is available to detect NetCDF4, and all other libraries are at their standard locations.
  • In config.mk, you can use EXTRA_F90FLAGS and EXTRA_LDFLAGS to define some additional compiler and linker options (such as libraries). For example:

EXTRA_F90FLAGS = -fdump-core
EXTRA_LDFLAGS = -L/some/path -lfoo
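Putting these settings together, a minimal config.mk sketch might look like the following (the OS and FORT values are assumptions for a typical Linux/gfortran setup):

```
OS   = Linux
FORT = gfortran
# optional extra compiler and linker options
EXTRA_F90FLAGS = -fdump-core
EXTRA_LDFLAGS  = -L/some/path -lfoo
```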

  • Create the assim tool:

make
  • Several options can be passed to make:
DEBUG=on: enables debugging
PROFILING=on: enables profiling
PRECISION=single, PRECISION=double: all computations are made in single (default) or double precision
MPI=on: enables MPI
PIC=on: enables the generation of position-independent code (only necessary for a dynamic library)
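For example, to build a double-precision, MPI-enabled executable, the invocation might look like this (a sketch; combine the options above as needed):

```
make PRECISION=double MPI=on
```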


Missing value

A strange missing value appears when the horizontal grid of the observations is identical to the model grid. The cause is a rounding error in ifort when compiling with -O3 in single precision. The problem disappears in double precision and is not present with gfortran (single or double precision). Solution: the interpolation coefficients are now always computed in double precision.

#ifdef argument starts with wrong symbol

ifort 11.1: produces the following error:

assimilation.F90(3368): #error: '#ifdef' argument starts with wrong symbol.

This is a bug in ifort, fixed in 11.1 Update 3. As work-around, define in config.mk:

EXTRA_F90FLAGS = -Qoption,fpp,-macro=no_com

reference: http://software.intel.com/en-us/forums/showthread.php?t=70635

internal compiler error in ensanalysisanamorph2

gfortran 4.2.1: Fatal compiler error

gfortran  -g -fbounds-check -fdefault-real-8 -fconvert=big-endian -frecord-marker=4 -I/u/mc/opt/netcdf-4.1.2/include -c assimilation.F90
assimilation.F90: In function 'ensanalysisanamorph2':
assimilation.F90:3454: internal compiler error: in modified_type_die, at dwarf2out.c:8495
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://bugzilla.redhat.com/bugzilla> for instructions.

stack-size problems with ifort

ifort runs into recurring stack-size problems (segmentation faults with large automatic arrays). Use the -heap-arrays compiler option (http://software.intel.com/en-us/articles/intel-fortran-compiler-increased-stack-usage-of-80-or-higher-compilers-causes-segmentation-fault/).
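In config.mk this can be set through the EXTRA_F90FLAGS mechanism described above (a sketch):

```
EXTRA_F90FLAGS = -heap-arrays
```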

partition loading with ifort

  • The partition is not loaded correctly (ifort 12.0). Although all values are greater than one, the following error message is produced and the minimum value in the log file is wrong:

Error: all values in partition vector must be greater than one        79929  -9998

This error occurs only with ifort 12.0 and disappears when another compiler (or compiler version) is used. It is probably a compiler bug.

Key not found in init file

Make sure that you use only spaces, and no tabs, in the init file.

NetCDF not found

NETCDF_CONFIG currently defaults to nc-config. In newer versions of NetCDF, the Fortran and C libraries are separated; in this case you need to use nf-config. In either case, you can also use the absolute path of this script, e.g.

NETCDF_CONFIG = $(HOME)/local/ifort-12.1.6/bin/nf-config

Otherwise, the script has to be in your $PATH.
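A small POSIX-shell check (a sketch, not part of OAK) can pick the right script automatically before editing config.mk:

```shell
# Prefer nf-config (separate NetCDF-Fortran builds),
# fall back to nc-config (older combined installs).
if command -v nf-config >/dev/null 2>&1; then
  NETCDF_CONFIG=nf-config
else
  NETCDF_CONFIG=nc-config
fi
echo "NETCDF_CONFIG = $NETCDF_CONFIG"
```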


  • If a library is not at a standard location (e.g. in your home directory), you can set the “run path” at compile-time by using the option -Wl,-R instead of setting LD_LIBRARY_PATH at run time.

EXTRA_LDFLAGS ?= -Wl,-R/home/ulg/gher/abarth/opt/gcc-4.6.2/netcdf-4.1.3/lib

  • In config.mk, place compiler-specific variables in an if-clause (each ifeq needs a closing endif). For example:

ifeq ($(FORT),gfortran)
  EXTRA_LDFLAGS ?= -Wl,-R/home/ulg/gher/abarth/opt/gcc-4.6.2/netcdf-4.1.3/lib
endif

ifeq ($(FORT),ifort)
  EXTRA_LDFLAGS ?= -Wl,-R/home/ulg/gher/abarth/opt/intel-12.0/netcdf-4.1.3/lib
endif
Switching from one compiler to another is then just a matter of adding an option to make, e.g.:

make FORT=ifort clean # this is important
make FORT=ifort

  • If you don’t like the rather verbose name of the executable (e.g. assim-gfortran-double-openmp), define the variable EXEC in config.mk with the desired name (and possibly path) of the executable (only in the SVN version).
  • Compilation with the Intel MKL library is possible by adding the following at the end of the config.mk file:


Use the complete path in BLAS_LIBDIR. For the sequential version of MKL use:

BLAS_LIB=-lmkl_intel_lp64 -lmkl_sequential -lmkl_core

For the threaded version (this makes sense only for global assimilation):

BLAS_LIB=-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread

More info on linking on http://software.intel.com/en-us/articles/mkl_solver_libraries_are_deprecated_libraries_since_version_10_2_Update_2/
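Putting it together, the end of config.mk might look like this sketch (the MKL installation path shown is an assumption; use the complete path on your system):

```
# hypothetical MKL location -- adjust to your installation
BLAS_LIBDIR = /opt/intel/mkl/lib/intel64
# sequential MKL (see above for the threaded variant)
BLAS_LIB = -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
```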

Publications & Operational use

  • L. Vandenbulcke, A. Capet, J.-M. Beckers. Onboard implementation of the GHER model for the Black Sea, with SST and CTD data assimilation. Journal of Operational Oceanography, 3(2): 47-54, 2010
  • A. Barth, A. Alvera-Azcárate, and R. H. Weisberg. Assimilation of high-frequency radar currents in a nested model of the West Florida Shelf. Journal of Geophysical Research, 113:C08033, 2008. doi: 10.1029/2007JC004585.
  • A. Barth, A. Alvera-Azcárate, J.-M. Beckers, M. Rixen, and L. Vandenbulcke. Multigrid state vector for data assimilation in a two-way nested model of the Ligurian Sea. Journal of Marine Systems, 65(1-4):41–59, 2007. doi: 10.1016/j.jmarsys.2005.07.006. URL http://hdl.handle.net/2268/4260.
  • L. Vandenbulcke, A. Barth, M. Rixen, A. Alvera-Azcárate, Z. Ben Bouallegue, and J.-M. Beckers. Study of the combined effects of data assimilation and grid nesting in ocean models. Application to the Gulf of Lions. Ocean Science, 2(2):213–222, 2006. doi: 10.5194/os-2-213-2006. URL http://www.ocean-sci.net/2/213/2006.
  • Operational Black Sea model running at: seamod.ro


Benchmark

The following realistic configuration was used as a benchmark case.

  • The computer used has an Intel i5 CPU with 4 cores and 16 GB of RAM, running 64-bit Debian Squeeze.
  • The state vector is multivariate and comprises temperature (3D), salinity (3D) and sea surface height (2D), totaling ~1.25 million sea points. The model error space has 10 directions specified as EOFs (each direction has the same length as the state vector). Furthermore, assim loads the norms (weights), also having the same length.
  • The observations are a high-resolution SST map, comprising 746401 sea points to be assimilated
  • About 1.5 GB of memory is used

Using -O3 optimization as well as further compiler-specific optimization flags, the following was observed:

  • Global assimilation is performed very fast (1 to 2 minutes with 4 CPUs).
  • Local assimilation requires performing the assimilation N times, where N is the number of water columns in the state vector (i.e. the number of horizontal points); it takes about 45 to 55 minutes walltime (using all 4 cores).
  • gfortran seemed to perform slightly faster than ifort (~5% walltime gain), both using the most aggressive compiler flags possible.
  • Using the standard LAPACK/BLAS libraries or (in the case of Intel Fortran) the Intel MKL libraries does not change the execution time.
  • Single or double precision does not lead to significantly different walltimes. If anything, double precision runs slightly faster.
  • MPI parallelization leads to slightly faster execution (~10% walltime gain) than OpenMP parallelization, at least on a single (multicore) computer.
  • OAK uses a “distance” function to determine which observations are relevant for a given state vector column; the distance can be computed either on a horizontal plane or on a sphere. The planar approximation leads to a ~5% walltime gain. To change this setting, edit the file assimilation.F90 and (un)define the SIMPLE_DISTANCE flag by removing/adding the comment sign on the line “#define SIMPLE_DISTANCE”.
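For reference, switching to the planar distance then amounts to making sure this preprocessor line in assimilation.F90 is active, i.e. not commented out (sketch; its position in the file may differ):

```
#define SIMPLE_DISTANCE
```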

All in all, unless a few minutes of walltime really matter, no setting leads to a significant performance gain or loss.