3. Getting Started with MITgcm¶
This chapter is divided into two main parts. The first part, covered in Section 3.1 through Section 3.6, contains information about how to download, build and run the MITgcm. The second part, covered in Section 4, contains a set of step-by-step tutorials for running specific pre-configured atmospheric and oceanic experiments.
We believe the best way to familiarize yourself with the model is to run the case study examples provided in the MITgcm repository. Information is also provided here on how to customize the code when you are ready to try implementing the configuration you have in mind. The code and algorithm are described more fully in Section 2 and Section 6 and chapters thereafter.
3.1. Where to find information¶
There is a web-archived support mailing list for the model: once you have subscribed, you can email MITgcm-support@mitgcm.org. Signing up (subscribing) for the mailing list is highly recommended, and past discussions can be browsed in the support archive.
3.2. Obtaining the code¶
The MITgcm code and documentation are under continuous development and we generally recommend that one downloads the latest version of the code. You will need to decide if you want to work in a “git-aware” environment (Method 1) or with a one-time “stagnant” download (Method 2). We generally recommend Method 1, as it is more flexible and allows your version of the code to be regularly updated as MITgcm developers check in bug fixes and new features. However, this typically requires at least a rudimentary understanding of git to be worth one’s while.
Periodically we release an official checkpoint (or “tag”). We recommend downloading the latest code unless there are reasons for obtaining a specific checkpoint (e.g. duplicating older results, collaborating with someone using an older release, etc.).
3.2.1. Method 1¶
This section describes how to download git-aware copies of the repository. In a terminal window, cd to the directory where you want your code to reside. Type:
% git clone https://github.com/MITgcm/MITgcm.git
This will download the latest available code. If you now want to revert this code to a specific checkpoint release, first cd into the MITgcm directory you just downloaded, then type git checkout checkpointXXX where XXX is the checkpoint version.
Alternatively, if you prefer to use ssh keys (say, for example, you have a firewall which won’t allow an https download), type:
% git clone git@github.com:MITgcm/MITgcm.git
You will need a GitHub account for this, and will have to generate an ssh key through your GitHub account user settings.
The fully git-aware download is over several hundred MB, which is considerable if one has limited internet download speed. In comparison, the one-time download zip file (Method 2, below) is on the order of 100 MB. However, one can obtain a truncated, yet still git-aware, copy of the current code by adding the option --depth=1 to the git clone command above; all files will be present, but without the full git history. The repository can still be updated going forward.
3.2.2. Method 2¶
This section describes how to do a one-time download of the MITgcm, NOT git-aware.
In a terminal window, cd to the directory where you want your code to reside.
To obtain the current code, type:
% wget https://github.com/MITgcm/MITgcm/archive/master.zip
For a specific checkpoint release XXX, instead type:
% wget https://github.com/MITgcm/MITgcm/archive/checkpointXXX.zip
3.3. Updating the code¶
There are several different approaches one can use to obtain updates to the MITgcm; which is best for you depends a bit on how you intend to use the MITgcm and your knowledge of git (and/or willingness to learn). Below we outline three suggested update pathways:
- Fresh Download of the MITgcm
This approach is the simplest and virtually foolproof. Whether you downloaded the code from a static zip file (Method 2) or used the git clone command (Method 1), create a new directory and repeat this procedure to download a current copy of the MITgcm. Say, for example, you are starting a new research project: this would be a great time to grab the most recent code repository and keep this new work entirely separate from any past simulations. This approach requires no understanding of git, and you are free to make changes to any files in the MITgcm repo tree (although we generally recommend that you avoid doing so, instead working in new subdirectories or on separate scratch disks as described in Section 3.5.1, for example).
- Using git pull to update the (unmodified) MITgcm repo tree
If you have downloaded the code through a git clone command (Method 1 above), you can incorporate any changes to the source code (including any changes to any files in the MITgcm repository, new packages or analysis routines, etc.) that may have occurred since your original download. There is a simple command to bring all code in the repository to a ‘current release’ state. From the MITgcm top directory or any of its subdirectories, type:
% git pull
and all files will be updated to match the current state of the code repository, as it exists at GitHub. (Note: if you plan to contribute to the MITgcm and followed the steps to download the code as described in Section 5, you will need to type git pull upstream instead.)
This update pathway is ideal if you are in the midst of a project and want to incorporate new MITgcm features into your executable(s), or take advantage of recently added analysis utilities, etc. After the git pull, any changes in model source code and include files will be updated, so you can repeat the build procedure (Section 3.5) and all these new features will be included in your new executable.
Be forewarned, this will only work if you have not modified ANY of the files in the MITgcm repository (adding new files is ok; also, all verification run subdirectories build and run are ignored by git). If you have modified files and the git pull fails with errors, there is no easy fix other than to learn something about git (continue reading…).
- Fully embracing the power of git!
Git offers many tools to help organize and track changes in your work. For example, one might keep separate projects on different branches, and update the code separately (using git pull) on these separate branches. You can even make changes to code in the MITgcm repo tree; when git then tries to update code from upstream (see Figure 5.1), it will notify you about possible conflicts and even merge the code changes together if it can. You can also use git commit to help you track what you are modifying in your simulations over time. If you’re planning to submit a pull request to include your changes, you should read the contributing guide in Section 5, and we suggest you do this model development in a separate, fresh copy of the code. See Section 5.2 for more information on how to use git effectively to manage your workflow.
3.4. Model and directory structure¶
The “numerical” model is contained within an execution environment support wrapper. This wrapper is designed to provide a general framework for grid-point models; MITgcm is a specific numerical model that makes use of this framework (see chapWrapper for additional detail). Under this structure, the model is split into execution environment support code and conventional numerical model code. The execution environment support code is held under the eesupp directory. The grid point model code is held under the model directory. Code execution actually starts in the eesupp routines and not in the model routines. For this reason the top-level MAIN.F is in the eesupp/src directory. In general, end-users should not need to worry about the wrapper support code. The top-level routine for the numerical part of the code is in model/src/THE_MODEL_MAIN.F. Here is a brief description of the directory structure of the model under the root tree.
- model: this directory contains the main source code, subdivided into two subdirectories, inc (include files) and src (source code).
- eesupp: contains the execution environment source code, also subdivided into two subdirectories, inc and src.
- pkg: contains the source code for the packages. Each package corresponds to a subdirectory. For example, gmredi contains the code related to the Gent-McWilliams/Redi scheme, and seaice the code for a dynamic sea ice model which can be coupled to the ocean model. The packages are described in detail in Section 8.
- doc: contains the MITgcm documentation in reStructured Text (rst) format.
- tools: this directory contains various useful tools. For example, genmake2 is a script written in bash that should be used to generate your makefile. The subdirectory build_options contains ‘optfiles’ with the compiler options for many different compilers and machines that can run MITgcm (see Section 3.5.2.1). This directory also contains subdirectories adjoint and OAD_support that are used to generate the tangent linear and adjoint model (see details in Section 7).
- utils: this directory contains various utilities. The matlab subdirectory contains MATLAB scripts for reading model output directly into MATLAB. The subdirectory python contains similar routines for Python. scripts contains C-shell post-processing scripts for joining processor-based and tile-based model output.
- verification: this directory contains the model examples. See Section 4.
- jobs: contains sample job scripts for running MITgcm.
- lsopt: line search code used for optimization.
- optim: interface between MITgcm and the line search code.
3.5. Building the code¶
To compile the code, we use the make program. This uses a file (Makefile) that allows us to pre-process source files, specify compiler and optimization options, and also figure out any file dependencies. We supply a script (genmake2), described in Section 3.5.2, that automatically creates the Makefile for you. You then need to build the dependencies and compile the code.
As an example, assume that you want to build and run experiment verification/exp2. Let’s build the code in verification/exp2/build:
% cd verification/exp2/build
First, build the Makefile:
% ../../../tools/genmake2 -mods ../code
The -mods command line option tells genmake2 to override model source code with any files in the directory ../code/. This and additional genmake2 command line options are described more fully in Section 3.5.2.2.
On many systems, the genmake2 program will be able to automatically recognize the hardware, find compilers and other tools within the user’s path (“echo $PATH”), and then choose an appropriate set of options from the files (“optfiles”) contained in the tools/build_options directory. Under some circumstances, a user may have to create a new optfile in order to specify the exact combination of compiler, compiler flags, libraries, and other options necessary to build a particular configuration of MITgcm. In such cases, it is generally helpful to peruse the existing optfiles and mimic their syntax. See Section 3.5.2.1.
The MITgcm developers are willing to provide help writing or modifying optfiles, and we encourage users to ask for assistance or post new optfiles (particularly ones for new machines or architectures) through the GitHub issue tracker or by emailing the MITgcm-support@mitgcm.org list.
To specify an optfile to genmake2, the command line syntax is:
% ../../../tools/genmake2 -mods ../code -of /path/to/optfile
Once a Makefile has been generated, we create the dependencies with the command:
% make depend
This modifies the Makefile by attaching a (usually long) list of files upon which other files depend. The purpose of this is to reduce re-compilation if and when you start to modify the code. The make depend command also creates links from the model source to this directory, except for links to those files in the specified -mods directory. IMPORTANT NOTE: Editing the source code files in the build directory will not edit a local copy (since these are just links) but will edit the original files in model/src (or model/inc) or in the specified -mods directory. While the latter might be what you intend, editing the master copy in model/src is usually NOT what was intended and may cause grief somewhere down the road. Rather, if you need to add to the list of modified source code files, place a copy of the file(s) to edit in the -mods directory, make the edits to these -mods directory files, go back to the build directory, and type make Clean, then re-build the makefile (these latter steps are critical, or the makefile will not link to the newly edited file).
It is important to note that the make depend stage will occasionally produce warnings or errors if the dependency parsing tool is unable to find all of the necessary header files (e.g., netcdf.inc). In some cases you may need to obtain help from your system administrator to locate these files.
Next, one can compile the code using:
% make
The make command creates an executable called mitgcmuv. Additional make “targets” are defined within the makefile to aid in the production of adjoint and other versions of MITgcm. On computers with multiple processor cores or shared multi-processor (a.k.a. SMP) systems, the build process can often be sped up appreciably using the command:
% make -j 2
where the “2” can be replaced with a number that corresponds to the number of cores (or discrete CPUs) available.
In addition, there are several housekeeping make clean options that might be useful:
- make clean removes files that make generates (e.g., *.o and *.f files)
- make Clean removes files and links generated by make and make depend
- make CLEAN removes pretty much everything, including any executables and output from genmake2
Now you are ready to run the model. General instructions for doing so are given in Section 3.6.
3.5.1. Building/compiling the code elsewhere¶
In the example above (Section 3.5) we built the executable in the build directory of the experiment. Model object files and output data can use up large amounts of disk space, so it is often preferable to operate on a large scratch disk. Here, we show how to configure and compile the code on a scratch disk without having to copy the entire source tree. The only requirement is that you have genmake2 in your path, or know the absolute path to genmake2.
Assuming the model source is in ~/MITgcm, the following commands will build the model in /scratch/exp2-run1:
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir ~/MITgcm -mods ~/MITgcm/verification/exp2/code
% make depend
% make
Note the use of the command line option -rootdir to tell genmake2 where to find the MITgcm directory tree.
In general, one can compile the code in any given directory by following this procedure.
3.5.2. Using genmake2¶
This section describes further details and capabilities of genmake2 (located in the tools directory), the MITgcm tool used to generate a Makefile. genmake2 is a shell script written to work with all “sh”–compatible shells including bash v1, bash v2, and Bourne (like many unix tools, there is a help option that is invoked through genmake2 -h).
genmake2 parses information from the following sources:
- a genmake_local file, if one is found in the current directory
- command-line options
- an “options file”, as specified by the command-line option -of /path/to/filename
- a packages.conf file (if one is found) with the specific list of packages to compile. The search path for the file packages.conf is first the current directory, and then each of the -mods directories in the given order.
3.5.2.1. Optfiles in the tools/build_options directory¶
The purpose of the optfiles is to provide all the compilation options for particular “platforms” (where “platform” roughly means the combination of the hardware and the compiler) and code configurations. Given the combinations of possible compilers and library dependencies (e.g., MPI and NetCDF) there may be numerous optfiles available for a single machine. The naming scheme for the majority of the optfiles shipped with the code is OS_HARDWARE_COMPILER where
- OS is the name of the operating system (generally the lower-case output of a linux terminal uname command)
- HARDWARE is a string that describes the CPU type and corresponds to output from a uname -m command. Some common CPU types:
  - amd64 is for x86_64 systems (most common, including AMD and Intel 64-bit CPUs)
  - ia64 is for Intel IA64 systems (eg. Itanium, Itanium2)
  - ppc is for (old) Mac PowerPC systems
- COMPILER is the compiler name (generally, the name of the FORTRAN executable)
In many cases, the default optfiles are sufficient and will result in usable Makefiles. However, for some machines or code configurations, new optfiles must be written. To create a new optfile, it is generally best to start with one of the defaults and modify it to suit your needs. Like genmake2, the optfiles are all written using a simple sh–compatible syntax. While nearly all variables used within genmake2 may be specified in the optfiles, the critical ones that should be defined are:
- FC: the FORTRAN compiler (executable) to use
- DEFINES: the command-line DEFINE options passed to the compiler
- CPP: the C pre-processor to use
- NOOPTFLAGS: option flags for special files that should not be optimized
For example, the optfile for a typical Red Hat Linux machine (amd64 architecture) using the GCC (g77) compiler is
FC=g77
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp -traditional -P'
NOOPTFLAGS='-O0'
# For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wimplicit -Wunused -Wuninitialized'
    FOPTIM='-O3 -malign-double -funroll-loops'
else
    FFLAGS='-Wimplicit -Wunused -ffloat-store'
    FOPTIM='-O0 -malign-double'
fi
If you write an optfile for an unrepresented machine or compiler, you are strongly encouraged to submit the optfile to the MITgcm project for inclusion. Please submit the file through the GitHub issue tracker or email the MITgcm-support@mitgcm.org list.
3.5.2.2. Command-line options:¶
In addition to the optfiles, genmake2 supports a number of helpful command-line options. A complete list of these options can be obtained by:
% genmake2 -h
The most important command-line options are:
- -optfile /path/to/file: specifies the optfile that should be used for a particular build. If no optfile is specified (either through the command line or the MITGCM_OPTFILE environment variable), genmake2 will try to make a reasonable guess from the list provided in tools/build_options. The method used for making this guess is to first determine the combination of operating system and hardware (eg. “linux_amd64”) and then find a working FORTRAN compiler within the user’s path. When these three items have been identified, genmake2 will try to find an optfile that has a matching name.
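This naming-based guess can be mimicked by hand, which is handy when browsing tools/build_options for a starting point. The sketch below simply assumes a gfortran compiler; the real script instead searches your $PATH for a working FORTRAN compiler:

```shell
# Reconstruct an optfile name of the form OS_HARDWARE_COMPILER by hand.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. "linux"
hw=$(uname -m)                                # e.g. "x86_64"
case "$hw" in
  x86_64) hw=amd64 ;;                         # optfiles call x86_64 "amd64"
esac
echo "${os}_${hw}_gfortran"                   # candidate optfile name
```

On a typical 64-bit Linux box this prints linux_amd64_gfortran, one of the optfiles shipped in tools/build_options.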
- -mods ’dir1 dir2 dir3 ...’: specifies a list of directories containing “modifications”. These directories contain files with names that may (or may not) exist in the main MITgcm source tree but will be overridden by any identically-named sources within the -mods directories. The order of precedence for this “name-hiding” is as follows:
  - “mods” directories (in the order given)
  - packages either explicitly specified or provided by default (in the order given)
  - packages included due to package dependencies (in the order that package dependencies are parsed)
  - the “standard dirs” (which may have been specified by the -standarddirs option)
- -oad: generates a makefile for an OpenAD build
- -adof /path/to/file: specifies the “adjoint” or automatic differentiation options file to be used. The file is analogous to the optfile defined above, but it specifies information for the AD build process. The default file is located in tools/adjoint_options/adjoint_default and it defines the “TAF” and “TAMC” compilers. An alternate version is also available at tools/adjoint_options/adjoint_staf that selects the newer “STAF” compiler. As with any compilers, it is helpful to have their directories listed in your $PATH environment variable.
- -mpi: enables certain MPI features (using CPP #define) within the code and is necessary for MPI builds (see Section 3.5.3)
- -omp: enables OpenMP code and the compiler flag OMPFLAG
- -ieee: use IEEE numerics (requires support in the optfile)
- -make /path/to/gmake: due to the poor handling of soft-links and other bugs common with the make versions provided by commercial Unix vendors, GNU make (sometimes called gmake) may be preferred. This option provides a means for specifying the make executable to be used.
3.5.3. Building with MPI¶
Building MITgcm to use MPI libraries can be complicated due to the variety of different MPI implementations available, their dependencies or interactions with different compilers, and their often ad-hoc locations within file systems. For these reasons, it’s generally a good idea to start by finding and reading the documentation for your machine(s) and, if necessary, seeking help from your local systems administrator.
The steps for building MITgcm with MPI support are:
1. Determine the locations of your MPI-enabled compiler and/or MPI libraries and put them into an options file as described in Section 3.5.2.1. One can start with one of the examples in tools/build_options such as linux_amd64_gfortran or linux_amd64_ifort+impi and then edit it to suit the machine at hand. You may need help from your user guide or local systems administrator to determine the exact location of the MPI libraries; if no libraries are installed, freely available MPI implementations and related tools exist (e.g., Open MPI or MPICH).
2. Build the code with the genmake2 -mpi option (see Section 3.5.2.2) using commands such as:

% ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
% make depend
% make
3.6. Running the model¶
If compilation finished successfully (Section 3.5) then an executable called mitgcmuv will now exist in the local (build) directory. To run the model as a single process (i.e., not in parallel), simply type (assuming you are still in the build directory):
% cd ../run
% ln -s ../input/* .
% cp ../build/mitgcmuv .
% ./mitgcmuv
Here, we are making a link to all the support data files needed by the MITgcm for this experiment, and then copying the executable from the build directory. The ./ in the last step is a safeguard to make sure you use the local executable in case other executables exist in your $PATH.
The above command will spew out many lines of text output to your screen. This output contains details such as parameter values as well as diagnostics such as mean kinetic energy, largest CFL number, etc. It is worth keeping this text output with the binary output, so we normally re-direct the stdout stream as follows:
% ./mitgcmuv > output.txt
In the event that the model encounters an error and stops, it is very helpful to include the last few lines of this output.txt file, along with the (stderr) error message, within any bug reports.
For the example experiments in verification, an example of the output is kept in results/output.txt for comparison. You can compare your output.txt with the corresponding one for that experiment to check that your set-up indeed works. Congratulations!
3.6.1. Running with MPI¶
Run the code with the appropriate MPI “run” or “exec” program provided with your particular implementation of MPI. Typical MPI packages such as Open MPI will use something like:
% mpirun -np 4 ./mitgcmuv
Slightly more complicated scripts may be needed for many machines, since execution of the code may be controlled by both the MPI library and a job scheduling and queueing system such as SLURM, PBS, LoadLeveler, or any of a number of similar tools. See your local cluster documentation or system administrator for the specific syntax required to run on your computing facility.
3.6.2. Output files¶
The model produces various output files and, when using mnc (i.e., NetCDF), sometimes even directories. Depending upon the I/O package(s) selected at compile time (either mdsio or mnc or both, as determined by code/packages.conf) and the run-time flags set (in input/data.pkg), the following output may appear. More complete information on output files and model diagnostics is given in chap_diagnosticsio.
3.6.2.1. MDSIO output files¶
The “traditional” output files are generated by the mdsio package (see section_mdsio). The mdsio model data are written according to a “meta/data” file format. Each variable is associated with two files with suffix names .data and .meta. The .data file contains the data written in binary form (big-endian by default). The .meta file is a “header” file that contains information about the size and the structure of the .data file. This way of organizing the output is particularly useful when running multi-processor calculations.
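As a sketch of this layout (the file name, grid size, and values below are invented for illustration; real .meta files record the dimensions, precision, and tiling that a reader needs), the .data half of a pair is just a flat big-endian array:

```python
import numpy as np

nx, ny = 6, 4                                             # hypothetical grid size
field = np.arange(nx * ny, dtype='>f4').reshape(ny, nx)   # '>f4': big-endian float32
field.tofile('Eta.0000000010.data')                       # raw binary, no header bytes

# Knowing the shape and precision (normally read from the matching .meta file),
# the .data file reads back with:
readback = np.fromfile('Eta.0000000010.data', dtype='>f4').reshape(ny, nx)
```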
At a minimum, the instantaneous “state” of the model is written out, which is made of the following files:
- U.00000nIter - zonal component of velocity field (m/s, positive eastward).
- V.00000nIter - meridional component of velocity field (m/s, positive northward).
- W.00000nIter - vertical component of velocity field (ocean: m/s, positive upward; atmosphere: Pa/s, positive towards increasing pressure, i.e., downward).
- T.00000nIter - potential temperature (ocean: \(^{\circ}\mathrm{C}\); atmosphere: \(\mathrm{K}\)).
- S.00000nIter - ocean: salinity (psu); atmosphere: water vapor (g/kg).
- Eta.00000nIter - ocean: surface elevation (m); atmosphere: surface pressure anomaly (Pa).
The chain 00000nIter consists of ten digits that specify the iteration number at which the output is written out. For example, U.0000000300 is the zonal velocity at iteration 300.
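The suffix is simply the iteration number zero-padded to ten digits, which is easy to generate in a script; for example, in Python:

```python
# Build the MDSIO file name for iteration 300 of the (hypothetical) field 'U'.
itr = 300
fname = 'U.%10.10d' % itr    # equivalently: f"U.{itr:010d}"
# fname == 'U.0000000300'
```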
In addition, a “pickup” or “checkpoint” file called pickup.00000nIter is written out. This file represents the state of the model in a condensed form and is used for restarting the integration (at the specified iteration number). Some additional packages and parameterizations also produce separate pickup files, e.g.:
- pickup_cd.00000nIter if the C-D scheme is used
- pickup_seaice.00000nIter if the seaice package is turned on
- pickup_ptracers.00000nIter if passive tracers are included in the simulation
Rolling checkpoint files are the same as the pickup files but are named differently. Their names contain the chain ckptA or ckptB instead of 00000nIter. They can be used to restart the model but are overwritten every other time they are output, to save disk space during long integrations.
3.6.2.2. MNC output files¶
The MNC package (see section_mnc) is a set of routines written to read, write, and append NetCDF files. Unlike the mdsio output, the mnc–generated output is usually placed within a subdirectory with a name such as mnc_output_ (by default, NetCDF tries to append, rather than overwrite, existing files, so a unique output directory is helpful for each separate run).
The MNC output files are all in the “self-describing” NetCDF format and can thus be browsed and/or plotted using tools such as:
- ncdump is a utility which is typically included with every NetCDF install, and converts the NetCDF binaries into formatted ASCII text files.
- ncview is a very convenient and quick way to plot NetCDF data and it runs on most platforms. Panoply is a similar alternative.
- Matlab, GrADS, IDL and other common post-processing environments provide built-in NetCDF interfaces.
3.6.3. Looking at the output¶
3.6.3.1. MATLAB¶
MDSIO output¶
The repository includes a few MATLAB utilities to read output files written in the mdsio format. The MATLAB scripts are located in the directory utils/matlab under the root tree. The script rdmds.m reads the data; look at the comments inside the script to see how to use it.
Some examples of reading and visualizing some output in Matlab:
% matlab
>> H=rdmds('Depth');
>> contourf(H');colorbar;
>> title('Depth of fluid as used by model');
>> eta=rdmds('Eta',10);
>> imagesc(eta');axis ij;colorbar;
>> title('Surface height at iter=10');
>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
NetCDF¶
Similar scripts for NetCDF output (rdmnc.m) are available, and they are described in Section [sec:pkg:mnc].
3.6.3.2. Python¶
MDSIO output¶
The repository includes Python scripts for reading the mdsio format under utils/python.
The following example shows how to load in some data:
# python
import mds
Eta = mds.rdmds('Eta', itrs=10)
The docstring for mds.rdmds contains much more detail about using this function and the options that it takes.
NetCDF output¶
The NetCDF output is currently produced with one file per processor. This means the individual tiles need to be stitched together to create a single NetCDF file that spans the model domain. The script gluemncbig.py in the utils/python folder can do this efficiently from the command line.
The following example shows how to use the xarray package to read the resulting NetCDF file into python:
# python
import xarray as xr
ds = xr.open_dataset('Eta.nc')   # open_dataset returns a Dataset
Eta = ds['Eta']                  # extract the variable itself as a DataArray
3.7. Customizing the model configuration¶
When you are ready to run the model in the configuration you want, the easiest approach is to use and adapt the setup of the case study experiment (described in Section 4) that is closest to your configuration; the amount of setup is then minimized. In this section, we focus on the setup of the “numerical model” part of the code (the setup of the “execution environment” part is covered in the software architecture/wrapper section) and on the variables and parameters that you are likely to change.
In what follows, the parameters are grouped into categories related to the computational domain, the equations solved in the model, and the simulation controls.
3.7.1. Parameters: Computational Domain, Geometry and Time-Discretization¶
Dimensions
Grid
Three different grids are available: cartesian, spherical polar, and curvilinear (which includes the cubed sphere). The grid is set through the logical variables usingCartesianGrid, usingSphericalPolarGrid, and usingCurvilinearGrid. In the case of spherical and curvilinear grids, the southern boundary is defined through the variable ygOrigin, which corresponds to the latitude of the southernmost cell face (in degrees). The resolution along the x and y directions is controlled by the 1D arrays delx and dely (in meters in the case of a cartesian grid, in degrees otherwise). The vertical grid spacing is set through the 1D array delz for the ocean (in meters) or delp for the atmosphere (in Pa). The variable Ro_SeaLevel represents the standard position of sea level in “r” coordinate. This is typically set to 0 m for the ocean (default value) and \(10^{5}\) Pa for the atmosphere. For the atmosphere, also set the logical variable groundAtK1 to .TRUE., which puts the first level (k=1) at the lower boundary (ground).

For the cartesian grid case, the Coriolis parameter \(f\) is set through the variables f0 and beta, which correspond to the reference Coriolis parameter (in \(\mathrm{s}^{-1}\)) and \(\frac{\partial f}{\partial y}\) (in \(\mathrm{m}^{-1}\mathrm{s}^{-1}\)) respectively. If beta is set to a nonzero value, f0 is the value of \(f\) at the southern edge of the domain.
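As a worked illustration of the beta-plane relation just described (the numbers below are typical mid-latitude values chosen for the example, not model defaults):

```python
# f(y) = f0 + beta * y, with y the distance (m) north of the southern edge.
f0 = 1.0e-4      # reference Coriolis parameter at the southern edge (s^-1)
beta = 1.6e-11   # df/dy (m^-1 s^-1)

def coriolis(y):
    return f0 + beta * y

# 1000 km north of the southern edge, f has grown by beta * 1e6 = 1.6e-5 s^-1
```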
Topography - Full and Partial Cells
The domain bathymetry is read from a file that contains a 2D (x,y) map of depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The file name is specified by the variable bathyFile. The file is assumed to contain binary numbers giving the depth (pressure) of the model at each grid cell, ordered with the x coordinate varying fastest. The points are ordered from low coordinate to high coordinate for both axes. The model code applies without modification to enclosed, periodic, and double periodic domains. Periodicity is assumed by default and is suppressed by setting the depths to 0 m for the cells at the limits of the computational domain (note: not sure this is the case for the atmosphere). The precision with which to read the binary data is controlled by the integer variable readBinaryPrec, which can take the value 32 (single precision) or 64 (double precision). See the matlab program gendata.m in the input directories of verification (e.g., gendata.m in the barotropic gyre tutorial) to see how the bathymetry files are generated for the case study experiments.
To use the partial cell capability, the variable hFacMin needs to be set to a value between 0 and 1 (it is set to 1 by default) corresponding to the minimum fractional size of the cell. For example, if the bottom cell is 500 m thick and hFacMin is set to 0.1, the actual thickness of the cell (i.e., as used in the code) can take a range of discrete values 50 m apart, from 50 m to 500 m, depending on the value of the bottom depth (in bathyFile) at this point.
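As a sketch of what a gendata.m-style script does, the following Python fragment writes a flat-bottom bathymetry file for an assumed 60×60, 5000 m deep closed basin. The grid size, depth, the negative-depth sign convention, and the file name bathy.bin are illustrative assumptions:

```python
import numpy as np

# Assumed domain: 60 x 60 grid, 5000 m deep flat bottom, closed (walled)
# boundaries; all of these choices are illustrative, not prescribed.
nx, ny, depth = 60, 60, 5000.0

h = -depth * np.ones((ny, nx))   # ocean depths entered as negative values
h[0, :] = 0.0                    # wall at the southern edge
h[-1, :] = 0.0                   # wall at the northern edge
h[:, 0] = 0.0                    # wall at the western edge
h[:, -1] = 0.0                   # wall at the eastern edge

# MITgcm reads raw big-endian binaries; readBinaryPrec=32 matches '>f4'.
# Writing the (y, x) array in C (row-major) order makes x vary fastest.
h.astype('>f4').tofile('bathy.bin')
```

The resulting file would then be referenced in the namelist as bathyFile='bathy.bin' with readBinaryPrec=32.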
Note that the bottom depths (or pressures) need not coincide with the model's levels as deduced from delz or delp. The model will interpolate the numbers in bathyFile so that they match the levels obtained from delz or delp and hFacMin.
(Note: the atmospheric case is a bit more complicated than what is written here. To come soon…)
Time-Discretization
The time steps are set through the real variables deltaTMom and deltaTtracer (in s), which represent the time step for the momentum and tracer equations, respectively. For synchronous integrations, simply set the two variables to the same value (or prescribe a single time step through the variable deltaT). The Adams-Bashforth stabilizing parameter is set through the variable abEps (dimensionless). The staggered baroclinic time stepping can be activated by setting the logical variable staggerTimeStep to .TRUE..
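As an illustration, a synchronous integration with staggered baroclinic time stepping might be set up in the PARM03 namelist along these lines (the 1200 s time step is an assumed value):

```fortran
 &PARM03
 deltaT=1200.,
 abEps=0.1,
 staggerTimeStep=.TRUE.,
 &
```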
3.7.2. Parameters: Equation of State¶
First, because the model equations are written in terms of perturbations, a reference thermodynamic state needs to be specified. This is done through the 1D arrays tRef and sRef. tRef specifies the reference potential temperature profile (in °C for the ocean and K for the atmosphere) starting from the level k=1. Similarly, sRef specifies the reference salinity profile (in ppt) for the ocean or the reference specific humidity profile (in g/kg) for the atmosphere.
The form of the equation of state is controlled by the character variables buoyancyRelation and eosType. buoyancyRelation is set to OCEANIC by default and needs to be set to ATMOSPHERIC for atmosphere simulations. In this case, eosType must be set to IDEALGAS. For the ocean, two forms of the equation of state are available: linear (set eosType to LINEAR) and a polynomial approximation to the full nonlinear equation (set eosType to POLYNOMIAL). In the linear case, you need to specify the thermal and haline expansion coefficients, represented by the variables tAlpha (in K⁻¹) and sBeta (in ppt⁻¹). For the nonlinear case, you need to generate a file of polynomial coefficients called POLY3.COEFFS. To do this, use the program utils/knudsen2/knudsen2.f under the model tree (a Makefile is available in the same directory; you will need to edit the number and the values of the vertical levels in knudsen2.f so that they match those of your configuration).
There are also higher-order polynomial formulas available for the equation of state:
’UNESCO’
:- The UNESCO equation of state formula of Fofonoff and Millard (1983) [FRM83]. This equation of state assumes in-situ temperature, which is not a model variable; its use is therefore discouraged, and it is only listed for completeness.
’JMD95Z’
:- A modified UNESCO formula by Jackett and McDougall (1995) [JM95], which uses the model variable potential temperature as input. The ’Z’ indicates that this equation of state uses a horizontally and temporally constant pressure \(p_{0}=-g\rho_{0}z\).
’JMD95P’
:- A modified UNESCO formula by Jackett and McDougall (1995) [JM95], which uses the model variable potential temperature as input. The ’P’ indicates that this equation of state uses the actual hydrostatic pressure of the last time step. Lagging the pressure in this way requires an additional pickup file for restarts.
’MDJWF’
:- The new, more accurate and less expensive equation of state by McDougall et al. (2003) [MJWF03]. It also requires lagging the pressure and therefore an additional pickup file for restarts.
None of these options requires a reference profile of temperature or salinity.
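For reference, a linear ocean equation of state with the coefficients discussed above might be specified in the PARM01 namelist roughly as follows (the reference profiles shown assume a hypothetical 10-level configuration):

```fortran
 &PARM01
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta =7.4E-4,
 tRef=10*20.,
 sRef=10*35.,
 &
```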
3.7.3. Parameters: Momentum Equations¶
In this section, we focus for now only on the parameters that you are
likely to change, i.e., those related to forcing and dissipation. The
details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment. We assume
that you use the standard form of the momentum equations (i.e., the
flux form) with the default advection scheme. Also,
there are a few logical variables that allow you to turn on/off various
terms in the momentum equation. These variables are called
momViscosity, momAdvection, momForcing, useCoriolis,
momPressureForcing, momStepping and metricTerms and are assumed to
be set to .TRUE.
here. Look at the file PARAMS.h for a
precise definition of these variables.
Initialization
The initial horizontal velocity components can be specified from the binary files uVelInitFile and vVelInitFile. These files should contain 3D data ordered in an (x,y,r) fashion with k=1 as the first vertical level (surface level). If no file names are provided, the velocity is initialized to zero. The initial vertical velocity is always derived from the horizontal velocity using the continuity equation, even in the case of a non-hydrostatic simulation (see, e.g., verification/tutorial_deep_convection/input/).
In the case of a restart (from the end of a previous simulation), the velocity field is read from a pickup file (see section on simulation control parameters) and the initial velocity files are ignored.
Forcing
This section only applies to the ocean. You need to generate wind-stress data in two files, zonalWindFile and meridWindFile, corresponding to the zonal and meridional components of the wind stress, respectively (if you want the stress to be along the direction of only one of the model horizontal axes, you only need to generate one file). The format of the files is similar to the bathymetry file. The zonal (meridional) stress data are assumed to be in Pa and located at U-points (V-points). As for the bathymetry, the precision with which to read the binary data is controlled by the variable readBinaryPrec. See the matlab program gendata.m in the input directories of verification (e.g., gendata.m in the barotropic gyre tutorial) to see how simple analytical wind forcing data are generated for the case study experiments.
There is also the possibility of prescribing time-dependent periodic forcing. To do this, concatenate the successive time records into a single file (for each stress component) ordered in an (x,y,t) fashion and set the following variables: periodicExternalForcing to .TRUE., externForcingPeriod to the period (in s) with which the forcing varies (typically 1 month), and externForcingCycle to the repeat time (in s) of the forcing (typically 1 year; note externForcingCycle must be a multiple of externForcingPeriod). With these variables set up, the model will interpolate the forcing linearly at each iteration.
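The concatenation step can be sketched in Python as follows. The grid size, the sinusoidal stress pattern, and the file name taux.bin are all illustrative assumptions; a real application would use observed or analytically derived stresses:

```python
import numpy as np

# Assumed example: 12 monthly records of zonal wind stress (Pa) on a
# 60 x 60 grid, concatenated into one (x,y,t)-ordered file.
nx, ny, nt = 60, 60, 12

months = []
for t in range(nt):
    # hypothetical zonal-jet stress with a sinusoidal seasonal cycle
    y = np.linspace(0.0, np.pi, ny)[:, None]
    taux = 0.1 * np.sin(y) * (1.0 + 0.5 * np.cos(2.0 * np.pi * t / nt))
    months.append(np.broadcast_to(taux, (ny, nx)))

# Stacking gives shape (t, y, x); writing in C (row-major) order makes
# x vary fastest and t slowest, i.e. the (x,y,t) ordering described above.
np.stack(months).astype('>f4').tofile('taux.bin')
```

This file would then be used with periodicExternalForcing=.TRUE., externForcingPeriod=2592000. (one 30-day month), and externForcingCycle=31104000. (twelve such months).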
Dissipation
The lateral eddy viscosity coefficient is specified through the variable viscAh (in m²/s). The vertical eddy viscosity coefficient is specified through the variable viscAz (in m²/s) for the ocean and viscAp (in Pa²/s) for the atmosphere. The vertical diffusive fluxes can be computed implicitly by setting the logical variable implicitViscosity to .TRUE.. In addition, biharmonic mixing can be added through the variable viscA4 (in m⁴/s). On a spherical polar grid, you might also need to set the variable cosPower, which is set to 0 by default and represents the power of the cosine of latitude by which the viscosity is multiplied. Slip or no-slip conditions at the lateral and bottom boundaries are specified through the logical variables no_slip_sides and no_slip_bottom. If set to .FALSE., free-slip boundary conditions are applied. If no-slip boundary conditions are applied at the bottom, a bottom drag can be applied as well. Two forms are available: linear (set the variable bottomDragLinear, in m/s) and quadratic (set the variable bottomDragQuadratic, dimensionless).
The Fourier and Shapiro filters are described elsewhere.
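Putting the dissipation parameters together, a coarse-resolution ocean setup might look like this in PARM01 (the values shown are assumed examples only, not recommendations):

```fortran
 &PARM01
 viscAh=4.E2,
 viscAz=1.E-3,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.TRUE.,
 bottomDragLinear=1.E-6,
 &
```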
C-D Scheme
If you run at a sufficiently coarse resolution, you will need the C-D scheme for the computation of the Coriolis terms. The variable tauCD, which represents the C-D scheme coupling timescale (in s) needs to be set.
Calculation of Pressure/Geopotential
First, to run a non-hydrostatic ocean simulation, set the logical variable nonHydrostatic to .TRUE.. The pressure field is then inverted through a 3D elliptic equation. (Note: this capability is not available for the atmosphere yet.) By default, a hydrostatic simulation is assumed and a 2D elliptic equation is used to invert the pressure field. The parameters controlling the behavior of the elliptic solvers are the variables cg2dMaxIters and cg2dTargetResidual for the 2D case and cg3dMaxIters and cg3dTargetResidual for the 3D case. You probably won’t need to alter the default values (are we sure of this?).
For the calculation of the surface pressure (for the ocean) or surface geopotential (for the atmosphere), you need to set the logical variables rigidLid and implicitFreeSurface (set one to .TRUE. and the other to .FALSE., depending on how you want to deal with the ocean upper or atmosphere lower boundary).
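For example, a hydrostatic ocean run with an implicit free surface might set the following (the solver values are illustrative; the cg2d parameters live in the PARM02 namelist):

```fortran
 &PARM01
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 &
 &PARM02
 cg2dMaxIters=300,
 cg2dTargetResidual=1.E-7,
 &
```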
3.7.4. Parameters: Tracer Equations¶
This section covers the tracer equations i.e. the potential temperature
equation and the salinity (for the ocean) or specific humidity (for the
atmosphere) equation. As for the momentum equations, we only describe
for now the parameters that you are likely to change. The logical
variables tempDiffusion, tempAdvection, tempForcing, and
tempStepping allow you to turn on/off terms in the temperature
equation (same thing for salinity or specific humidity with variables
saltDiffusion, saltAdvection, etc.). These variables are all
assumed here to be set to .TRUE.. Look at the file
PARAMS.h for a precise definition.
Initialization
The initial tracer data can be contained in the binary files hydrogThetaFile and hydrogSaltFile. These files should contain 3D data ordered in an (x,y,r) fashion with k=1 as the first vertical level. If no file names are provided, the tracers are then initialized with the values of tRef and sRef mentioned above. In this case, the initial tracer data are uniform in x and y for each depth level.
Forcing
This part is more relevant for the ocean, as the procedure for the atmosphere is not yet fully stabilized.
A combination of flux data and relaxation terms can be used for driving the tracer equations. For potential temperature, heat flux data (in W/m²) can be stored in the 2D binary file surfQfile. Alternatively or in addition, the forcing can be specified through a relaxation term. The SST data toward which the model surface temperatures are restored are assumed to be stored in the 2D binary file thetaClimFile. The corresponding relaxation time scale coefficient is set through the variable tauThetaClimRelax (in s). The same procedure applies for salinity, with the variable names EmPmRfile, saltClimFile, and tauSaltClimRelax for the freshwater flux (in m/s) and surface salinity (in ppt) data files and the relaxation time scale coefficient (in s), respectively. Also for salinity, if the CPP key USE_NATURAL_BCS is turned on, natural boundary conditions are applied, i.e., when computing the surface salinity tendency, the freshwater flux is multiplied by the model surface salinity instead of a constant salinity value.
As for the other input files, the precision with which to read the data is controlled by the variable readBinaryPrec. Time-dependent, periodic forcing can be applied as well, following the same procedure used for the wind forcing data (see above).
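As a sketch, surface relaxation toward climatologies might be configured as follows. The time scales and the file names SST.bin and SSS.bin are assumed examples; the relaxation constants go in PARM03 and the file names in PARM05:

```fortran
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax =7776000.,
 &
 &PARM05
 thetaClimFile='SST.bin',
 saltClimFile ='SSS.bin',
 &
```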
Dissipation
Lateral eddy diffusivities for temperature and salinity/specific humidity are specified through the variables diffKhT and diffKhS (in m²/s). Vertical eddy diffusivities are specified through the variables diffKzT and diffKzS (in m²/s) for the ocean and diffKpT and diffKpS (in Pa²/s) for the atmosphere. The vertical diffusive fluxes can be computed implicitly by setting the logical variable implicitDiffusion to .TRUE.. In addition, biharmonic diffusivities can be specified through the coefficients diffK4T and diffK4S (in m⁴/s). Note that the cosine power scaling (specified through cosPower; see above) is applied to the tracer diffusivities (Laplacian and biharmonic) as well. The Gent and McWilliams parameterization for oceanic tracers is described in the packages section. Finally, note that tracers can also be subject to Fourier and Shapiro filtering (see the corresponding section on these filters).
Ocean convection
Two options are available to parameterize ocean convection. To use the first option, a convective adjustment scheme, you need to set the variable cadjFreq, which represents the frequency (in s) with which the adjustment algorithm is called, to a non-zero value (note: if cadjFreq is set to a negative value by the user, the model will set it to the tracer time step). The second option is to parameterize convection with implicit vertical diffusion. To do this, set the logical variable implicitDiffusion to .TRUE. and the real variable ivdc_kappa to the value (in m²/s) you wish the tracer vertical diffusivities to have when mixing tracers vertically due to static instabilities. Note that cadjFreq and ivdc_kappa cannot both have non-zero values.
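For instance, the implicit-vertical-diffusion option might be selected like this (the 10 m²/s diffusivity is an assumed value):

```fortran
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
```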
3.7.5. Parameters: Simulation Controls¶
The model “clock” is defined by the variable deltaTClock (in s), which determines the I/O frequencies and is used in tagging output. Typically, you will set it to the tracer time step for accelerated runs (otherwise it is simply set to the default time step deltaT). The frequencies of checkpointing and dumping of the model state are referenced to this clock (see below).
Run Duration
The beginning of a simulation is set by specifying a start time (in s) through the real variable startTime or by specifying an initial iteration number through the integer variable nIter0. If these variables are set to nonzero values, the model will look for a “pickup” file pickup.0000nIter0 to restart the integration. The end of a simulation is set through the real variable endTime (in s). Alternatively, you can instead specify the number of time steps to execute through the integer variable nTimeSteps.
Frequency of Output
Real variables defining the frequencies (in s) with which output files are written to disk need to be set up. dumpFreq controls the frequency with which the instantaneous state of the model is saved. chkPtFreq and pchkPtFreq control the output frequency of rolling and permanent checkpoint files, respectively. In addition, time-averaged fields can be written out by setting the variable taveFreq (in s). The precision with which to write the binary data is controlled by the integer variable writeBinaryPrec (set it to 32 or 64).
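Tying the run-control parameters together, a hypothetical 100-day run (7200 steps at an assumed deltaT of 1200 s) with monthly dumps and rolling checkpoints might be set in PARM03 roughly as follows (all values are illustrative):

```fortran
 &PARM03
 nIter0=0,
 nTimeSteps=7200,
 deltaT=1200.,
 dumpFreq  =2592000.,
 chkPtFreq =2592000.,
 pChkPtFreq=8640000.,
 monitorFreq=86400.,
 &
```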
3.7.6. Parameters: Default Values¶
The CPP keys relative to the “numerical model” part of the code are all
defined and set in the file CPP_OPTIONS.h in the directory
model/inc/ or in one of the code
directories of the case study
experiments under verification/. The model parameters are defined and
declared in the file PARAMS.h and their default values are
set in the routine set_defaults.F. The default values can
be modified in the namelist file data
which needs to be located in the
directory where you will run the model. The parameters are initialized
in the routine ini_parms.F. Look at this routine to see in
what part of the namelist the parameters are located. Here is a complete
list of the model parameters related to the main model (namelist
parameters for the packages are located in the package descriptions),
their meaning, and their default values:
Name | Value | Description |
buoyancyRelation | OCEANIC | buoyancy relation |
fluidIsAir | F | fluid major constituent is air |
fluidIsWater | T | fluid major constituent is water |
usingPCoords | F | use pressure coordinates |
usingZCoords | T | use z-coordinates |
tRef | 2.0E+01 at k=top | reference temperature profile ( oC or K ) |
sRef | 3.0E+01 at k=top | reference salinity profile ( psu ) |
viscAh | 0.0E+00 | lateral eddy viscosity ( m2/s ) |
viscAhMax | 1.0E+21 | maximum lateral eddy viscosity ( m2/s ) |
viscAhGrid | 0.0E+00 | grid dependent lateral eddy viscosity ( non-dim. ) |
useFullLeith | F | use full form of Leith viscosity on/off flag |
useStrainTensionVisc | F | use StrainTension form of viscous operator on/off flag |
useAreaViscLength | F | use area for visc length instead of geom. mean |
viscC2leith | 0.0E+00 | Leith harmonic visc. factor (on grad(vort),non-dim.) |
viscC2leithD | 0.0E+00 | Leith harmonic viscosity factor (on grad(div),non-dim.) |
viscC2smag | 0.0E+00 | Smagorinsky harmonic viscosity factor (non-dim.) |
viscA4 | 0.0E+00 | lateral biharmonic viscosity ( m4/s ) |
viscA4Max | 1.0E+21 | maximum biharmonic viscosity ( m4/s ) |
viscA4Grid | 0.0E+00 | grid dependent biharmonic viscosity ( non-dim. ) |
viscC4leith | 0.0E+00 | Leith biharmonic viscosity factor (on grad(vort), non-dim.) |
viscC4leithD | 0.0E+00 | Leith biharmonic viscosity factor (on grad(div), non-dim.) |
viscC4Smag | 0.0E+00 | Smagorinsky biharmonic viscosity factor (non-dim) |
no_slip_sides | T | viscous BCs: no-slip sides |
sideDragFactor | 2.0E+00 | side-drag scaling factor (non-dim) |
viscAr | 0.0E+00 | vertical eddy viscosity ( units of r2/s ) |
no_slip_bottom | T | viscous BCs: no-slip bottom |
bottomDragLinear | 0.0E+00 | linear bottom-drag coefficient ( m/s ) |
bottomDragQuadratic | 0.0E+00 | quadratic bottom-drag coeff. ( 1 ) |
diffKhT | 0.0E+00 | Laplacian diffusion of heat laterally ( m2/s ) |
diffK4T | 0.0E+00 | biharmonic diffusion of heat laterally ( m4/s ) |
diffKhS | 0.0E+00 | Laplacian diffusion of salt laterally ( m2/s ) |
diffK4S | 0.0E+00 | biharmonic diffusion of salt laterally ( m4/s ) |
diffKrNrT | 0.0E+00 at k=top | vertical profile of vertical diffusion of temp ( m2/s ) |
diffKrNrS | 0.0E+00 at k=top | vertical profile of vertical diffusion of salt ( m2/s ) |
diffKrBL79surf | 0.0E+00 | surface diffusion for Bryan and Lewis 1979 ( m2/s ) |
diffKrBL79deep | 0.0E+00 | deep diffusion for Bryan and Lewis 1979 ( m2/s ) |
diffKrBL79scl | 2.0E+02 | depth scale for Bryan and Lewis 1979 ( m ) |
diffKrBL79Ho | -2.0E+03 | turning depth for Bryan and Lewis 1979 ( m ) |
eosType | LINEAR | equation of state |
tAlpha | 2.0E-04 | linear EOS thermal expansion coefficient ( 1/oC ) |
sBeta | 7.4E-04 | linear EOS haline contraction coef ( 1/psu ) |
rhonil | 9.998E+02 | reference density ( kg/m3 ) |
rhoConst | 9.998E+02 | reference density ( kg/m3 ) |
rhoConstFresh | 9.998E+02 | reference density ( kg/m3 ) |
gravity | 9.81E+00 | gravitational acceleration ( m/s2 ) |
gBaro | 9.81E+00 | barotropic gravity ( m/s2 ) |
rotationPeriod | 8.6164E+04 | rotation period ( s ) |
omega | 2π/rotationPeriod | angular velocity ( rad/s ) |
f0 | 1.0E-04 | reference coriolis parameter ( 1/s ) |
beta | 1.0E-11 | beta ( m–1s–1 ) |
freeSurfFac | 1.0E+00 | implicit free surface factor |
implicitFreeSurface | T | implicit free surface on/off flag |
rigidLid | F | rigid lid on/off flag |
implicSurfPress | 1.0E+00 | surface pressure implicit factor (0-1) |
implicDiv2Dflow | 1.0E+00 | barotropic flow div. implicit factor (0-1) |
exactConserv | F | exact volume conservation on/off flag |
uniformLin_PhiSurf | T | use uniform Bo_surf on/off flag |
nonlinFreeSurf | 0 | non-linear free surf. options (-1,0,1,2,3) |
hFacInf | 2.0E-01 | lower threshold for hFac (nonlinFreeSurf only) |
hFacSup | 2.0E+00 | upper threshold for hFac (nonlinFreeSurf only) |
select_rStar | 0 | r* vertical coordinate option |
useRealFreshWaterFlux | F | real freshwater flux on/off flag |
convertFW2Salt | 3.5E+01 | convert FW flux to salt flux (-1=use local S) |
use3Dsolver | F | use 3-D pressure solver on/off flag |
nonHydrostatic | F | non-hydrostatic on/off flag |
nh_Am2 | 1.0E+00 | non-hydrostatic terms scaling factor |
quasiHydrostatic | F | quasi-hydrostatic on/off flag |
momStepping | T | momentum equation on/off flag |
vectorInvariantMomentum | F | vector-invariant momentum on/off |
momAdvection | T | momentum advection on/off flag |
momViscosity | T | momentum viscosity on/off flag |
momImplVertAdv | F | momentum implicit vert. advection on/off |
implicitViscosity | F | implicit viscosity on/off flag |
metricTerms | F | metric terms on/off flag |
useNHMTerms | F | non-hydrostatic metric terms on/off |
useCoriolis | T | Coriolis on/off flag |
useCDscheme | F | CD scheme on/off flag |
useJamartWetPoints | F | Coriolis wetpoints method flag |
useJamartMomAdv | F | VI non-linear terms Jamart flag |
SadournyCoriolis | F | Sadourny Coriolis discretization flag |
upwindVorticity | F | upwind bias vorticity flag |
useAbsVorticity | F | work with f+ζ in Coriolis terms |
highOrderVorticity | F | high order interp. of vort. flag |
upwindShear | F | upwind vertical shear advection flag |
selectKEscheme | 0 | kinetic energy scheme selector |
momForcing | T | momentum forcing on/off flag |
momPressureForcing | T | momentum pressure term on/off flag |
implicitIntGravWave | F | implicit internal gravity wave flag |
staggerTimeStep | F | stagger time stepping on/off flag |
multiDimAdvection | T | enable/disable multi-dim advection |
useMultiDimAdvec | F | multi-dim advection is/is-not used |
implicitDiffusion | F | implicit diffusion on/off flag |
tempStepping | T | temperature equation on/off flag |
tempAdvection | T | temperature advection on/off flag |
tempImplVertAdv | F | temp. implicit vert. advection on/off |
tempForcing | T | temperature forcing on/off flag |
saltStepping | T | salinity equation on/off flag |
saltAdvection | T | salinity advection on/off flag |
saltImplVertAdv | F | salinity implicit vert. advection on/off |
saltForcing | T | salinity forcing on/off flag |
readBinaryPrec | 32 | precision used for reading binary files |
writeBinaryPrec | 32 | precision used for writing binary files |
globalFiles | F | write “global” (=not per tile) files |
useSingleCpuIO | F | only master MPI process does I/O |
debugMode | F | debug Mode on/off flag |
debLevA | 1 | 1st level of debugging |
debLevB | 2 | 2nd level of debugging |
debugLevel | 1 | select debugging level |
cg2dMaxIters | 150 | upper limit on 2d con. grad iterations |
cg2dChkResFreq | 1 | 2d con. grad convergence test frequency |
cg2dTargetResidual | 1.0E-07 | 2d con. grad target residual |
cg2dTargetResWunit | -1.0E+00 | cg2d target residual [W units] |
cg2dPreCondFreq | 1 | freq. for updating cg2d pre-conditioner |
nIter0 | 0 | run starting timestep number |
nTimeSteps | 0 | number of timesteps |
deltaTMom | 6.0E+01 | momentum equation timestep ( s ) |
deltaTfreesurf | 6.0E+01 | freeSurface equation timestep ( s ) |
dTtracerLev | 6.0E+01 at k=top | tracer equation timestep ( s ) |
deltaTClock | 6.0E+01 | model clock timestep ( s ) |
cAdjFreq | 0.0E+00 | convective adjustment interval ( s ) |
momForcingOutAB | 0 | =1: take momentum forcing out of Adams-Bashforth |
tracForcingOutAB | 0 | =1: take T,S,pTr forcing out of Adams-Bashforth |
momDissip_In_AB | T | put dissipation tendency in Adams-Bashforth |
doAB_onGtGs | T | apply AB on tendencies (rather than on T,S) |
abEps | 1.0E-02 | Adams-Bashforth-2 stabilizing weight |
baseTime | 0.0E+00 | model base time ( s ) |
startTime | 0.0E+00 | run start time ( s ) |
endTime | 0.0E+00 | integration ending time ( s ) |
pChkPtFreq | 0.0E+00 | permanent restart/checkpoint file interval ( s ) |
chkPtFreq | 0.0E+00 | rolling restart/checkpoint file interval ( s ) |
pickup_write_mdsio | T | model I/O flag |
pickup_read_mdsio | T | model I/O flag |
pickup_write_immed | F | model I/O flag |
dumpFreq | 0.0E+00 | model state write out interval ( s ) |
dumpInitAndLast | T | write out initial and last iteration model state |
snapshot_mdsio | T | model I/O flag |
monitorFreq | 6.0E+01 | monitor output interval ( s ) |
monitor_stdio | T | model I/O flag |
externForcingPeriod | 0.0E+00 | forcing period (s) |
externForcingCycle | 0.0E+00 | period of the cycle (s) |
tauThetaClimRelax | 0.0E+00 | relaxation time scale (s) |
tauSaltClimRelax | 0.0E+00 | relaxation time scale (s) |
latBandClimRelax | 3.703701E+05 | maximum latitude where relaxation applied |
usingCartesianGrid | T | Cartesian coordinates flag ( true / false ) |
usingSphericalPolarGrid | F | spherical coordinates flag ( true / false ) |
usingCylindricalGrid | F | cylindrical coordinates flag ( true / false ) |
Ro_SeaLevel | 0.0E+00 | r(1) ( units of r ) |
rkSign | -1.0E+00 | index orientation relative to vertical coordinate |
horiVertRatio | 1.0E+00 | ratio on units : horizontal - vertical |
drC | 5.0E+03 at k=1 | center cell separation along Z axis ( units of r ) |
drF | 1.0E+04 at k=top | cell face separation along Z axis ( units of r ) |
delX | 1.234567E+05 at i=east | U-point spacing ( m - cartesian, degrees - spherical ) |
delY | 1.234567E+05 at j=1 | V-point spacing ( m - cartesian, degrees - spherical ) |
ygOrigin | 0.0E+00 | South edge Y-axis origin (cartesian: m, spherical: deg.) |
xgOrigin | 0.0E+00 | West edge X-axis origin (cartesian: m, spherical: deg.) |
rSphere | 6.37E+06 | Radius ( ignored - cartesian, m - spherical ) |
xcoord | 6.172835E+04 at i=1 | P-point X coord ( m - cartesian, degrees - spherical ) |
ycoord | 6.172835E+04 at j=1 | P-point Y coord ( m - cartesian, degrees - spherical ) |
rcoord | -5.0E+03 at k=1 | P-point r coordinate ( units of r ) |
rF | 0.0E+00 at k=1 | W-interface r coordinate ( units of r ) |
dBdrRef | 0.0E+00 at k=top | vertical gradient of reference buoyancy [ (m/s/r)2 ] |
dxF | 1.234567E+05 at k=top | dxF(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dyF | 1.234567E+05 at i=east | dyF(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dxG | 1.234567E+05 at i=east | dxG(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dyG | 1.234567E+05 at i=east | dyG(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dxC | 1.234567E+05 at i=east | dxC(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dyC | 1.234567E+05 at i=east | dyC(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dxV | 1.234567E+05 at i=east | dxV(:,1,:,1) ( m - cartesian, degrees - spherical ) |
dyU | 1.234567E+05 at i=east | dyU(:,1,:,1) ( m - cartesian, degrees - spherical ) |
rA | 1.524155E+10 at i=east | rA(:,1,:,1) ( m - cartesian, degrees - spherical ) |
rAw | 1.524155E+10 at k=top | rAw(:,1,:,1) ( m - cartesian, degrees - spherical ) |
rAs | 1.524155E+10 at k=top | rAs(:,1,:,1) ( m - cartesian, degrees - spherical ) |
tempAdvScheme | 2 | temp. horiz. advection scheme selector |
tempVertAdvScheme | 2 | temp. vert. advection scheme selector |
tempMultiDimAdvec | F | use multi-dim advection method for temp |
tempAdamsBashforth | T | use Adams-Bashforth time-stepping for temp |
saltAdvScheme | 2 | salinity horiz. advection scheme selector |
saltVertAdvScheme | 2 | salinity vert. advection scheme selector |
saltMultiDimAdvec | F | use multi-dim advection method for salt |
saltAdamsBashforth | T | use Adams-Bashforth time-stepping for salt |