.. _ref-pod-requirements:

POD requirements
================

This section lists all the steps that need to be taken in order to submit a POD for inclusion in the MDTF framework.

Code and documentation submission
---------------------------------

The material in this section must be submitted through a `pull request `__ to the `NOAA-GFDL GitHub repo `__. This is described in :doc:`dev_git_intro`.

Use the `example_multicase POD `__ as a reference for how each component of the submission should be structured. The POD feature branch must be up to date with the NOAA-GFDL main branch, with no outstanding merge conflicts. See :doc:`dev_git_intro` for instructions on syncing your fork with NOAA-GFDL and pulling updates from the NOAA-GFDL main branch into your feature branch.

POD source code
^^^^^^^^^^^^^^^^

All scripts should be placed in a subdirectory of ``diagnostics/``. Among the scripts, there should be 1) a main driver script, 2) an html template, and 3) a ``settings.jsonc`` file. The POD directory and html template should be named after your POD's short name. For instance, ``diagnostics/convective_transition_diag/`` contains the driver script ``convective_transition_diag.py``, ``convective_transition_diag.html``, and ``settings.jsonc``. The framework calls the driver script, which in turn calls the other scripts in the same POD directory.

If you need a new Conda environment, add a new ``.yml`` file to ``src/conda/`` and install the environment using the ``conda_env_setup.sh`` or ``micromamba_env_setup.sh`` scripts, as described in the :doc:`Getting Started ` documentation.

POD settings file
^^^^^^^^^^^^^^^^^^

The format of this file is described in :doc:`pod_settings`.

POD html template for output
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The html template is copied by the framework into the output directory to display the figures generated by the POD. You should be able to create a new html template by copying and modifying the templates of existing PODs, even without prior knowledge of html syntax.

POD documentation
^^^^^^^^^^^^^^^^^^

The documentation for the framework is automatically generated using `sphinx `__, which works with files in `reStructured text `__ (reST, ``.rst``) format. Use the `example_multicase POD documentation `__ as a template for the information required for your POD by modifying its .rst `source code `__. The documentation should include the following information:

- a one-paragraph synopsis of the POD
- the developers’ contact information
- the required programming language and libraries
- a brief summary of the presented diagnostics
- references in which more in-depth discussions can be found

Place the .rst files and all linked figures in a ``doc`` subdirectory under your POD directory (e.g., ``diagnostics/example_multicase/doc/``); a sketch of the complete POD directory layout, including ``doc/``, is shown below.

The most convenient way to write and debug reST documentation is with an online editor. We recommend `https://livesphinx.herokuapp.com/ `__ because it also recognizes sphinx-specific commands. For reference, see the reStructured text `introduction `__, `quick reference `__ and `in-depth guide `__. Also see a reST `syntax comparison `__ to other text formats you may be familiar with.
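Putting the source code and documentation requirements together, a complete submission for a hypothetical POD named ``my_pod`` might be laid out as follows (a sketch only; ``my_pod``, the helper script, and the figure file names are illustrative, not framework requirements):

.. code-block:: console

   diagnostics/my_pod/
   ├── my_pod.py            # main driver script called by the framework
   ├── my_pod.html          # html template copied into the output directory
   ├── settings.jsonc       # POD settings file
   ├── my_pod_helper.py     # any additional scripts called by the driver
   └── doc/
       ├── my_pod.rst       # POD documentation in reST format
       └── my_pod_fig1.png  # figures linked from the .rst file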
In addition:

- For maintainability, all scripts should be self-documenting by including in-line comments. The main driver script (e.g., ``example_multicase.py``) should contain a comprehensive header providing the same items of information as the POD documentation, except for the "More about this diagnostic" section.
- The one-paragraph POD synopsis (from the POD documentation), as well as a link to the full documentation, should be placed at the top of the html template (e.g., ``example_multicase.html``).

Sample and supporting data submission
--------------------------------------

Data hosting for the MDTF framework is currently managed manually. The data is hosted via anonymous FTP on UCAR's servers.

Digested observational or supporting data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Create a directory under ``inputdata/obs_data/`` named after the short name of your POD, and place all of your *digested* observational data in it (or, more generally, any quantities that are independent of the model being analyzed).

- Requirements:

  - Digested data should be in the form of numerical data, not figures.
  - The data files should be small (preferably a few MB) and just enough for producing figures for model comparison. If you really cannot reduce the data size and your POD requires more than 1 GB of space, consult with the lead team.
  - Include in the directory a ``README.txt`` description file with information on the original data source.
  - Include in the directory any necessary licensing information, files, etc. (if applicable).

- Create a tar file of your obs_data directory:

  - Use the ``--hard-dereference`` flag so that all users can read your file.
  - Naming convention: ``$pod_name.yyyymmdd.tar``, where ``yyyymmdd`` is the file creation date. Alternatively, you may use some other version tag to allow the framework to check compatibility between the POD code and the data provided.
  - Create the tar file from within the ``inputdata/obs_data`` directory so that the archived paths start with your POD's directory name, as in the example and check below.
  - Example (c-shell):

    .. code-block:: console

       # POD name and date tag used in the tar file name
       set pod_name = MJO_suite
       set tartail = `date +'%Y%m%d'`
       # create the tar file from within inputdata/obs_data
       cd inputdata/obs_data
       tar cfh $pod_name.$tartail.tar --hard-dereference $pod_name

  - To check:

    .. code-block:: console

       % tar tf $pod_name.$tartail.tar
       MJO_suite/
       MJO_suite/ERA.v200.EOF.summer-0.png
       MJO_suite/ERA.u200.EOF.summer-1.png

After following the above instructions, please refer to `the GitHub Discussion on transferring obs_data `__, email Dani Coleman at bundy at ucar dot edu, or contact your liaison on the MDTF Leads Team. Files will be posted for guest/anonymous access at ftp://ftp.cgd.ucar.edu/archive/mdtf/obs_data_latest/{$pod_name}.latest.tar, with ``latest`` pointing to the date- or version-tagged tar file (a retrieval sketch is given at the end of this section).

Note that prior to version 3, obs_data from all PODs was consolidated in a single tar file. To maintain usability as the number of PODs grows, the data are now available individually, with the responsibility for creating the tar files falling on the developer.

Sample model data
^^^^^^^^^^^^^^^^^^

For PODs dealing with atmospheric phenomena, we recommend that you use sample data from the following sources, if applicable:

- A timeslice run of `NCAR CAM5 `__
- A timeslice run of `GFDL AM4 `__ (contact the leads for password).
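Returning to the digested supporting data above: once your tar file has been posted, you can check the end-to-end packaging by retrieving it from the FTP location given earlier and unpacking it into a clean ``inputdata/obs_data`` directory. The commands below are a sketch only, assuming a hypothetical POD named ``my_pod`` and standard ``wget`` and ``tar`` tools; adjust the file name to match your upload.

.. code-block:: console

   # download the posted tar file (guest/anonymous FTP access)
   wget ftp://ftp.cgd.ucar.edu/archive/mdtf/obs_data_latest/my_pod.latest.tar
   # list the contents; archived paths should start with the POD's directory name
   tar tf my_pod.latest.tar
   # unpack into the framework's obs_data directory
   tar xf my_pod.latest.tar -C inputdata/obs_data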