/ Announce / Developer / Discussion / Help / Manual / Project / Source /
[ACME] [ANTLR] [CF] [DIWG] [GSL] [netCDF] [OPeNDAP] [SWAMP] [UDUnits]
Welcome to the netCDF Operators (NCO) site. The current stable NCO version is 4.7.3, released Friday, 02-Mar-2018 21:54:00 UTC. The NCO toolkit manipulates and analyzes data stored in netCDF-accessible formats, including DAP, HDF4, and HDF5. It exploits the geophysical expressivity of many CF (Climate & Forecast) metadata conventions, the flexible description of physical dimensions translated by UDUnits, the network transparency of OPeNDAP, the storage features (e.g., compression, chunking, groups) of HDF (the Hierarchical Data Format), and many powerful mathematical and statistical algorithms of GSL (the GNU Scientific Library). NCO is fast, powerful, and free.
News & Milestones What is NCO? Contributing Projects Publications Release Highlights Executables Documentation FAQ Help/Support/Contacts ANNOUNCE/ChangeLog/README/TODO Source Code Compiling Supercomputers Known Problems People
Recent Releases & Milestones 2018 Apr ??: 4.7.4 In Progress... 2018 Mar 02: 4.7.3 Sundry features/fixes 2018 Jan 25: 4.7.2 Tempest2 2017 Dec 21: 4.7.1 Conda Windows port 2017 Nov 08: 4.7.0 Sundry features/fixes 2017 Sep 18: 4.6.9 CDF5, CMake 2017 Aug 16: 4.6.8 Sundry features/fixes 2017 May 26: 4.6.7 Sub-gridscale regridding 2017 Apr 21: 4.6.6 UGRID 2017 Mar 15: 4.6.5 Sundry features/fixes 2017 Feb 07: 4.6.4 ncclimo splitter 2017 Jan 27: Geosci. Model Dev. publishes compression-error trade-off paper 2016 Dec 23: 4.6.3 Sundry features/fixes 2016 Nov 16: 4.6.2 JSON 2016 Sep 19: Geosci. Model Dev. publishes PPC paper 2016 Aug 06: 4.6.1 Sundry features/fixes 2016 Jul 06: Submitted compression-error trade-off manuscript to Geosci. Model Dev. 2016 May 12: 4.6.0 ncclimo 2016 Apr 06: Poster at NASA ESDSWG, Greenbelt 2016 Apr 05: Talk at NASA GES DISC, Greenbelt 2016 Mar 22: Submitted PPC manuscript to Geosci. Model Dev. 2016 Feb 17: 4.5.5 Sundry features/fixes 2016 Jan 07: 4.5.4 ncremap Milestones from 200301–201512 (versions 2.8.4–4.5.3) News and Announcements from 199801–200212 (version 1.1.0–2.8.3) and earlier
What is NCO? The netCDF Operators (NCO) comprise about a dozen standalone, command-line programs that take netCDF, HDF, and/or DAP files as input, then operate (e.g., derive new fields, compute statistics, print, hyperslab, manipulate metadata, regrid) and output the results to screen or files in text, binary, or netCDF formats. NCO aids analysis of gridded and unstructured scientific data. The shell-command style of NCO allows users to manipulate and analyze files interactively, or with expressive scripts that avoid some overhead of higher-level programming environments. Traditional geoscience data analysis requires users to work with numerous flat (data in one level or namespace) files. In that paradigm instruments or models produce, and then repositories archive and distribute, and then researchers request and analyze, collections of flat files. NCO works well with that paradigm, yet it also embodies the necessary algorithms to transition geoscience data analysis from relying solely on traditional (or “flat”) datasets to allowing newer hierarchical (or “nested”) datasets. The next logical step is to support and enable combining all datastreams that meet user-specified criteria into a single or small number of files that hold all the science-relevant data organized in hierarchical structures. NCO (and no other software to our knowledge) can do this now. We call the resulting data storage, distribution, and analysis paradigm Group-Oriented Data Analysis and Distribution (GODAD). GODAD lets the scientific question organize the data, not the ad hoc granularity of all relevant datasets. The User Guide illustrates GODAD techniques for climate data analysis: ncap2 netCDF Arithmetic Processor (examples) ncatted netCDF ATTribute EDitor (examples) ncbo netCDF Binary Operator (addition, multiplication...) 
(examples) ncclimo netCDF CLIMatOlogy Generator (examples) nces netCDF Ensemble Statistics (examples) ncecat netCDF Ensemble conCATenator (examples) ncflint netCDF FiLe INTerpolator (examples) ncks netCDF Kitchen Sink (examples) ncpdq netCDF Permute Dimensions Quickly, Pack Data Quietly (examples) ncra netCDF Record Averager (examples) ncrcat netCDF Record conCATenator (examples) ncremap netCDF REMAPer (examples) ncrename netCDF RENAMEer (examples) ncwa netCDF Weighted Averager (examples)
Note that the “averagers” (ncra and ncwa) are misnamed because they perform many non-linear statistics as well, e.g., total, minimum, RMS. Moreover, ncap2 implements a powerful domain language that handles arbitrarily complex algebra, calculus, and statistics (using GSL). The operators are as general as netCDF itself: there are no restrictions on the contents of input file(s). NCO's internal routines are completely dynamic and impose no limit on the number or sizes of dimensions, variables, and files. NCO is designed to be used both interactively and in large batch jobs. The default operator behavior is often sufficient for everyday needs, and numerous command-line (i.e., run-time) options are available for special cases.
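For concreteness, here is a hedged sketch of typical interactive invocations. The filenames (in.nc, out.nc), variable name (T2m), and dimension name (lat) are hypothetical placeholders; the option syntax follows standard NCO usage:

```shell
# Average variable T2m over the record (e.g., time) dimension
ncra -v T2m in.nc out.nc

# Extract a hyperslab: the first ten latitude indices of T2m
ncks -v T2m -d lat,0,9 in.nc out.nc

# Overwrite the units attribute of T2m with ncatted
ncatted -a units,T2m,o,c,"kelvin" in.nc

# Derive a new field with the ncap2 domain language
ncap2 -s 'T2m_C=T2m-273.15f' in.nc out.nc
```

Each operator reads its input file(s), applies one operation, and writes a netCDF result, which is why they compose naturally in shell pipelines and scripts.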
How to Contribute: Volunteer, Endorse, or Donate NCO has always (since 1995) been GPL'd Open Source. SourceForge.net started hosting NCO in March, 2000. This facilitated collaboration, code contributions, and support. In March, 2015, NCO development moved to GitHub.com. We continue to use the SourceForge discussion fora for historical continuity (seventeen years of searchable Q&A). No matter what your programming background, there is a task you can help with. From re-organizing the TODO list itself, to improving this cheesy webpage, to documentation, to designing and implementing new features and interfaces, we need your help! The project homepage contains mailing lists, discussion forums, and instructions that make contributing to NCO easy. Many users feel unable to volunteer their time. An equally effective contribution in the long run would be your endorsement, which may make the difference between a declined and an accepted proposal. An endorsement can be a few sentences that describe how NCO benefits your work or research. E-mail your endorsement to Charlie “my surname is zender” Zender with Subject: “NCO Proposal Endorsement”. This information is useful when advocating for more NCO support. “What future proposals?” you ask, “Aren't you already funded?” Yes, in 2012 NASA funded us to implement netCDF4 groups and HDF support, and in 2014 NASA funded us to improve SLD handling. These funds are/were primarily for development and maintenance of specific features. To realize our grander ambition, i.e., to shift geoscience data analysis from a flat to a hierarchical paradigm (GODAD), will require a sustained effort and a software ecosystem that understands and implements hierarchical dataset concepts. And it's hard to sell a federal agency on the wisdom of investing in a paradigm shift!
Other more prosaic tasks that need work are, for example, I/O parallelization (!!!), user-defined and non-atomic types, more CF conventions, cloud-services, JSON back-end, and user-defined ncap2 functions. If you send an endorsement, please include (at least) your Name, Title, and Institutional affiliation. Lastly, as of June, 2003, if you're more strapped for time than money and want to contribute something back, consider a monetary donation. This may incentivize us to tackle your favorite TODO items.
Inspired by President Obama's plan to bring more transparency to government investment, these homepage donation counters track the influence of your monetary donations on NCO development: Donations received between 20030624 and 20170217: US$149.55. Thank you, donors! NCO features “incentivized” by these donations: More emoticons in the documentation :)
NSF EarthCube Project The National Science Foundation Grant NSF ICER-1541031 funded the Unidata-led EarthCube Project, “EarthCube IA: Collaborative Proposal: Advancing netCDF-CF for the Geoscience Community” from 20150901–20170831 as part of the Integrative and Collaborative Education and Research (ICER) program. This URL, http://nco.sf.net#prp_e3, points to the most up-to-date information on the EarthCube proposal. UCI's primary role is to help extend CF to cover hierarchical data structures, aka groups. Groups are central to the Group-Oriented Data Analysis and Distribution (GODAD) paradigm we are developing for geoscience data analysis. We will convene workshops for interested stakeholders in 2016 and 2017. Be on the lookout for announcements!
DOE ACME Project “Lightweight Climate Analysis Tools for ACME” is a US Department of Energy Cooperative Agreement (CA) DE-SC0012998 from 20141215–20171214 that funds our contribution to the Accelerated Climate Modeling for Energy (ACME) project, a part of DOE's Earth System Modeling (ESM) program. The ACME project provides the resources to implement support for parallel regridding and workflows in NCO accessible through UV-CDAT. Spatially intelligent, parallelized analysis tools are a key component of the Group-Oriented Data Analysis and Distribution (GODAD) paradigm we are developing for geoscience data analysis.
NASA ACCESS Project The National Aeronautics and Space Administration (NASA) Cooperative Agreement NNX14AH55A funds our project, “Easy Access to and Analysis of NASA and Model Swath-like Data” from 20140701–20160630 as part of the Advancing Collaborative Connections for Earth System Science (ACCESS) program. This URL, http://nco.sf.net#prp_axs, points to the most up-to-date information on the ACCESS 2013 project. This ACCESS project provides resources to implement support in NCO for Swath-like data (SLD), i.e., datasets with non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all NASA Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data (e.g., CMIP5) that are increasingly stored on non-rectangular grids. Spatially intelligent software tools are a key component of the Group-Oriented Data Analysis and Distribution (GODAD) paradigm we are developing for geoscience data analysis. We are currently recruiting a programmer (aka software engineer) or postdoc based at UCI for at least two years to accomplish our ACCESS objectives. As described in the proposal, this person will be responsible for incorporating geospatial features and parallelism into NCO. See the ads for more details (PDF, TXT).
Publications and Presentations Silver, J. D. and C. S. Zender (2017), The compression-error trade-off for large gridded data sets, Geosci. Model Dev., 10, 413–423, doi:10.5194/gmd-10-413-2017. PDF Zender, C. S. (2016), Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+), Geosci. Model Dev., 9, 3199–3211, doi:10.5194/gmd-9-3199-2016. PDF Zender, C. S. (2016): Regrid Curvilinear, Rectangular, and Unstructured Data (CRUD) with ncremap, a new netCDF Operator. Presented to the Earth Science Data Systems Working Group (ESDSWG) Meeting, Greenbelt, MD, April 6--8, 2016. PDF Zender, C. S. (2016): Regridding Swath, Curvilinear, Rectangular, and Unstructured Data (SCRUD). Presented to the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC), Goddard Space Flight Center, Greenbelt, MD, April 5, 2016. PDF Zender, C. S. (2015): Regrid Curvilinear, Rectangular, and Unstructured Data (CRUD) with ncremap, a new netCDF Operator. Presented to the American Geophysical Union Fall Meeting, San Francisco, CA, December 14--18, 2015. Eos Trans. AGU, 95(54), Fall Meet. Suppl., Abstract IN31A-1744. PDF Zender, C. S. (2015): Optimizing Intrinsic Parallelism to generate climatologies with netCDF Operators (NCO). Presented to the DOE Accelerated Climate Modeling for Energy (ACME) PI Meeting, Albuquerque, NM, November 2--4, 2015. PDF Zender, C. S., P. Vicente, and W. Wang (2015): Use netCDF Operators (NCO) to Improve Data Interoperability and Usability. Presented to the Earth Science Data Systems Working Group (ESDSWG) Meeting, Greenbelt, MD, March 24--26, 2015. PDF Zender, C. S., P. Vicente, and W. Wang (2014): Simplifying and accelerating model evaluation by NASA satellite data. Presented to the Earth Science Data Systems Working Group (ESDSWG) Meeting, Greenbelt, MD, March 24--26, 2014. PDF Zender, C. S. (2014): Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism. 
Presented to the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC), Goddard Space Flight Center, Greenbelt, MD, March 27, 2014. PDF Zender, C. S., P. Vicente and W. Wang (2013): Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism. Presented at the Fall Meeting of the American Geophysical Union, San Francisco, CA, December 9–13, 2013. Eos Trans. AGU, 93(53), Fall Meet. Suppl., Abstract IN52A-06. PDF Zender, C. S., P. Vicente and W. Wang (2013): The Future of Model Evaluation. Presented to the Chapman University Symposium on Big Data and Analytics: 44th Symposium on the Interface of Computing Science and Statistics, Chapman University, Orange, CA, April 4–6, 2013. PDF Zender, C. S., P. Vicente and W. Wang (2012): NCO: Simpler and faster model evaluation by NASA satellite data via unified file-level netCDF and HDF-EOS data postprocessing tools. Presented at the Fall Meeting of the American Geophysical Union, San Francisco, CA, December 3–7, 2012. Eos Trans. AGU, 93(53), Fall Meet. Suppl., Abstract IN34A-07. PDF Zender, C. S., P. Vicente and W. Wang (2012): Simplifying and accelerating model evaluation by NASA satellite data. Presented to the Earth Science Data Systems Working Group (ESDSWG) Meeting, Annapolis, MD, November 13–15, 2012. PDF Presentations and Publications from 2006–2011
Release Highlights Stable releases receive unique tags and their tarballs are created and stored here at GitHub. Identical copies of those tarballs are also stored here on SourceForge for historical continuity. You may retrieve the source of tagged versions directly with, e.g., git clone -b 4.7.3 http://github.com/nco/nco.git nco-4.7.3. NCO 4.7.5: (Future) Chunking bytes not elements; extensive hashing?; netCDF4 compound types?; NCO 4.7.4: (In Progress, features in-progress or complete include) ncclimo -v splitter support for regular expressions; ncks --xtn better extensive variable treatment; ncremap generate weights; NCO 4.7.3: (Current Stable Release) Sanitize input/output filenames; ncclimo Support YYYY-MM and YYYY-MM-01 rx's; ncks --fmt_val --fl_prn for printing; ncremap --a2o for TempestRemap2 ordering; ncap2 silence compiler warnings; NCO 4.7.2: ncclimo splitter ypf bugfix; ncremap canonical positional arguments; ncremap TempestRemap2 support; NCO 4.7.1: Conda Windows port; ncclimo --clm_md=dly/ann fix; JSON tweaks; NCO 4.7.0: CMake build-engine intrinsic math, networking support on MS Windows; ncclimo/ncremap --fl_fmt options; ncclimo/ncremap --dfl_lvl compression; ncclimo --ppc compression; ncra/ncrcat fix negative hyperslab off-by-one error; NCO 4.6.9: CDF5 autoconversion, features to help detect corruption; CMake build-engine is mature; ncks estimates correct filesizes; ncks CDL prints variable RAM size; ncks prints CDL by default; NCO 4.6.8: ncap2 full chunking options; ncclimo --seasons custom season support; ncks --dt_fmt put the “T” in time; ncremap --msk_[src/dst]=none mask suppression; ncremap -3,-4,-5,-6,-7 output format; ncwa -d -w -m hyperslab with mask or weight bugfix; NCO 4.6.7: Multi-argument (MTA) flag parsing; ncap2 in-fill functions simple_fill_miss() + weighted_fill_miss(); ncap2 duplicates input variable chunking; ncatted NC4 bugfix; ncremap -P sgs, alm, clm, cice sub-gridscale remapping; ncremap msk_src/dst conversion; ncremap 
src/dst_regional hints to ERWG; NCO 4.6.6: ncks --cdl prints units when dbg_lvl ≥ 1; ncks --cdl prints braces for compound record variables; ncks outputs UGRID-format rectangular grids; ncremap/ncclimo no_cll_msr,no_frm_trm,no_ntv_tms,no_stg_grd switches; ncremap masks; NCO 4.6.5: ncks --cal prints human-legible ISO8601 dates; ncclimo/ncremap --version switch; ncremap --vrb_lvl verbosity level; ncremap refactor dimension inferral, fix POP re-grids; Set chunk cache-size with --cnk_csh; NCO 4.6.4: ncremap add w_stag weights to FV output; ncclimo timeseries reshaping ("splitting") mode; ncclimo daily climo mode; Extract variables in CF cell_measures, formula_terms attributes; Fix UDUnits calendar bug introduced in 4.6.3; NCO 4.6.3: CMake build option; ncap2 udunits() function; ncclimo supports binary climos, annual mode; ncclimo, ncremap support long-options; ncks --cdl attribute types as comments; ncks --json strided brackets for multi-dimensional arrays; NCO 4.6.2: Improve ncclimo, ncremap behavior in module environments; ncks --json for JSON output; Multi-argument support for --gaa, --rgr, --ppc, --trr options; NCO 4.6.1: ncclimo incremental mode; ncflint -N normalization; NCO 4.6.0: ncap2 LHC metadata propagation; ncatted nappend mode; ncclimo debut; ncks reads ENVI bsq/bip/bil images; ncks Terraref features; ncks -A append consistency; ncpdq fix permutation with multiple groups; ncra --cb climatology bounds; ncremap uses CF to find coordinates; ncremap uses $TMPDIR; NCO 4.5.5: Initial CDF5 support; ncap2 fix negative dimension indices handling; ncremap -P airs,hirdls,mls,mpas; nces/ncra fix -y mebs normalization, add -y tabs; ncatted/ncrename/ncpdq fix --gaa; NCO 4.5.4: ncap2 syntax simplification/fixes; ncks -V bugfix; ncks XML _Unsigned attribute; ncremap debuts; ncwa fix whitespace in coordinates attribute; Release Highlights from 200001–201512 (versions 1.1.47–4.5.3)
Pre-built Executables Pre-built binary executables are available for many platforms. Our source tarballs are always up-to-date, and work on our development systems (Fedora, Ubuntu, and Mac OS X). We also attempt to provide (theoretically) platform-independent sources in the most common Linux package formats (Debian and RPM). Below are one-line installation instructions and links to these and to packages for other platforms created by volunteers. Anyone willing to perform regular regression testing and porting of NCO to other platforms is welcome. Previous versions of these executables are still available by searching the directory index here.
AIX on IBM mainframes nco-4.2.5.aix53.tar.gz (9.5M): Executables AIX 5.3-compatible (last updated Tuesday, 29-Jan-2013 19:02:41 UTC). Maintained by NCO Project. Newer (beta- or pre-release) packages are sometimes available for AIX users as described here. Thanks to NSF for supporting AIX machines at NCAR over the years.
Anaconda Anaconda is a coordinated, cross-platform Python environment that utilizes the conda package manager. Anaconda can be easily installed into a user-owned directory. This bypasses the normal headache of relying on system administrators to install the latest NCO on shared systems like supercomputers. Miniconda (rather than the full Anaconda install) suffices for most purposes. Up-to-date Anaconda-compatible versions of NCO for Linux, MacOS, and Windows are maintained at conda-forge. Install NCO in your Anaconda framework with one command: ‘conda install -c conda-forge nco’. Or, alternatively, permanently add conda-forge (which teems with goodies besides NCO) to your automatically searched channels with ‘conda config --add channels conda-forge’, then install NCO with ‘conda install nco’. The default NCO installed by conda is generally within a month of the latest release. nco-4.7.3 Executables Anaconda-compatible. Maintained by Filipe Fernandes. Thanks to Rich Signell, Filipe Fernandes, Pedro Vicente, and others for developing and maintaining the NCO package for conda.
Debian and Ubuntu GNU/Linux Debian NCO and Ubuntu NCO homepages. ‘aptitude install nco’ installs the standard NCO for your Debian-compatible OS. NCO packages in the Debian/Ubuntu repositories (e.g., Sid and Raring) generally lag the packages distributed here by 6–12 months. Newer (beta- or pre-release) packages are often available for intrepid Debian/Ubuntu users as described here. Debian package for most recent NCO release (install with, e.g., ‘dpkg --install nco_4.7.3-1_i386.deb’): nco_4.7.3-1_amd64.deb : Executables AMD64-compatible Thanks to Daniel Baumann, Sebastian Couwenberg, Barry deFreese, Francesco Lovergine, Brian Mays, Rorik Peterson, and Matej Vela for their help packaging NCO for Debian over the years.
Fedora, RedHat Enterprise Linux (RHEL), and Community ENTerprise Operating System (CentOS) GNU/Linux The Fedora NCO RPMs are usually up-to-date so that ‘sudo dnf install nco’ installs a recent version. RHEL NCO RPMs are documented at the Fedora site. OpenSUSE keeps NCO RPMs here. A comprehensive list of pre-built RPMs for many OS's is here. Volunteers have updated and maintained fairly up-to-date NCO packages in Fedora since it was added by Ed Hill in about 2004. Thanks to Patrice Dumas, Ed Hill, Orion Poplawski, and Manfred Schwarb for packaging NCO RPMs over the years. Thanks to Gavin Burris and Kyle Wilcox for documenting build procedures for RHEL and CentOS.
Gentoo GNU/Linux Gentoo GNU/Linux homepage for NCO. Portage packages by George Shapavalov and Patrick Kursawe: nco-3.9.4: Latest(?) Gentoo package
Mac OS X/Darwin The most up-to-date executables are probably those in the tarball below. Those unfamiliar with installing executables from tarballs may try the (older) DMG files (you may need to add /opt/local/bin to your executable path to access those operators). nco-4.7.3.macosx.10.13.tar.gz: Executables MacOSX 10.13-compatible. (NB: These executables require the MacPorts dependencies for NCO). Maintained by NCO Project. nco-4.0.7_x86_10.6.dmg (35M): For Mac OS 10.6 (last updated Wednesday, 02-Mar-2011 20:17:21 UTC). Maintained by Chad Cantwell. Fink packages for NCO: Currently NCO 3.9.5. Maintained by Alexander Hansen. Homebrew packages for NCO: Currently NCO 4.6.X. Install with ‘brew install nco’. Maintained by Ian Lancaster (Alejandro Soto's instructions here). MacPorts infrastructure for NCO: Portfile: Currently NCO 4.6.6. Install with ‘sudo port install nco’. Maintained by Takeshi Enomoto.
Microsoft Windows (native build, compiled with Visual Studio 2015, use this if unsure) These native Windows executables (64-bit) are stand-alone, i.e., they do not require users to have any additional software. This is a new feature as of 20120615; please send us feedback. To build NCO from source yourself using MSVC with CMake, please see the example in nco/cmake/build.bat. nco-4.7.3.windows.mvs.exe (17M): Windows Self-Extracting Installer (last updated Tuesday, 06-Mar-2018 16:27:17 UTC). Maintained by Pedro Vicente.
Microsoft Windows (running Cygwin environment, compiled with GNU-toolchain) nco-4.4.4.win32.cygwin.tar.gz (4.8M): Executables Cygwin-compatible (last updated Sunday, 01-Jun-2014 22:47:31 UTC). Maintained by NCO Project. Before using, first install curl (in the "Web" category of Cygwin Setup), and set export UDUNITS2_XML_PATH='/usr/local/share/udunits/udunits2.xml' (or wherever udunits2.xml is installed). Thanks to Mark Hadfield and Pedro Vicente for creating Cygwin tarballs. Thanks to Cygnus Solutions and RedHat Inc. for developing and supporting Cygwin over the years.
Python Bindings Source and Documentation
Documentation and Users Guide The NCO Users Guide is available for reading in these formats: DVI Device Independent (kdvi, xdvi) HTML Hypertext (any browser) Info GNU Info (M-x Info, emacs) PDF Portable Document Format (acroread, evince, kpdf, okular, xpdf) Postscript Printing (ghostview, kghostview) TeXInfo Documentation Source code (emacs) Text Plain text (more) XML Extensible Markup Language (firefox) nco.texi is the most up-to-date. Files nco.dvi, nco.ps, and nco.pdf contain all the mathematical formulae (typeset with TeX) missing from the screen-oriented formats. The screen-
oriented formats—nco.html, nco.info, nco.txt, and nco.xml—contain all the documentation except the highly mathematical sections. Wenshan Wang of UCI contributed a Quick Reference Card (last updated Monday, 05-Mar-2018 00:13:48 UTC) suitable for printing, framing, and/or carrying on your person at all times.
Other documentation: This abbreviation key unlocks the mysteries of the source code abbreviations and acronyms.
FAQ: Frequently Asked Questions These questions show up almost as frequently as my mother. But they are more predictable: I still have questions, how do I contact the NCO project? The NCO project has various Q&A and discussion forums described below. Where can I find pre-built NCO executables? Pre-built executables of some versions of NCO are available for the operating systems described above (Debian-compatible GNU/Linux, Fedora/RedHat GNU/Linux, Gentoo GNU/Linux, and Mac OS X). Otherwise, you may be on your own. Does NCAR support NCO? The NCAR CISL Consulting Service Group (CSG) supports NCO like other software packages. The NCAR CISL-supported executables are made available through “modules”, so try module load nco. If you notice problems with the NCO installation on CISL machines, or if you would benefit from a more recent release or patch, then ask cislhelp. If you have a comment, suggestion, or bug report, then contact the developers as described below. Is there an easy way to keep up with new NCO releases? Subscribe to the nco-announce mailing list. This list is for NCO-related announcements, not for questions. nco-announce is very low volume—one message every few months.
Help/Support/Contacts: If you have support questions or comments please contact us via the Project Forums (rather than personal e-mail) so other users can benefit from and contribute to our exchange. Let us know how NCO is working for you—we'd like to hear. Have you read the documentation and browsed the forums to see if your question/comment has been reported before? Please read the Guide's suggestions for productive Help Requests and Bug Reports. Where should I ask my questions on how to use NCO? On the Help site. Where should I post suggestions/comments on NCO features and usage? On the Discussion site. Where are NCO development and bug-squashing discussed? At the Developer site.
ANNOUNCE/ChangeLog/README/TODO Files containing useful information about the current distribution: ANNOUNCE Notes on current release ChangeLog Change History since 1997 (version 0.9) README Platforms and software required TODO An unordered list of features and fixes we plan
Source Code The simplest way to acquire the source is to download the compressed tarball: nco-4.7.3.tar.gz (4.8M compressed tar-file) MD5(src/nco-4.7.3.tar.gz)= b2e739dd095886dd664cb08f8d3df0eb SHA1(src/nco-4.7.3.tar.gz)= 4bbd51dd1013b88b13a36c2942788ce9880407ae SHA256(src/nco-4.7.3.tar.gz)= c79fcfc291976466070e0da04c9ea57924feb7f41e4d89a27f313c465e42b381 The best way to acquire the source and occasionally update to the latest features is with Git. The browsable Repository contains up-to-the-minute sources and is the easiest way to stay synchronized with NCO features. Updating NCO source requires some familiarity with development tools, especially Git and Make. You may retrieve any NCO distribution you wish from GitHub. Usually you wish to retrieve a recent tagged (i.e., released) version. This command retrieves the entire NCO repository (< 20 MB) and then checks out NCO version 4.7.3: git clone https://github.com/nco/nco.git;cd nco;git checkout 4.7.3
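The MD5/SHA digests above let you verify a download before unpacking. Here is a minimal sketch of the check mechanics, shown with a stand-in file (nco-demo.txt) so it runs without the download; for the real tarball, verify the output of sha256sum nco-4.7.3.tar.gz against the SHA256 listed above:

```shell
# Create a stand-in file for the tarball
printf 'stand-in for the tarball\n' > nco-demo.txt
# Record the file's SHA256 digest in checklist form
sha256sum nco-demo.txt > nco-demo.txt.sha256
# Re-verify the file against the recorded digest
sha256sum -c nco-demo.txt.sha256   # prints "nco-demo.txt: OK"
```

A digest mismatch (reported as FAILED) usually indicates a truncated or corrupted download; re-fetch the tarball before building.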
These commands retrieve the current (“bleeding edge”) development version of NCO into a local directory named nco: git clone https://github.com/nco/nco.git ~/nco or git clone git@github.com:nco/nco.git ~/nco
Track changes to the development version with cd nco;git pull
One difference between running a "tagged" release (e.g., 4.7.3) and the development version is that the tagged release operators will print a valid version number (e.g., 4.7.3) when asked to do so with the -r flag (e.g., ncks -r). The development version simply places today's date in place of the version. Once the autotools builds are working more robustly, the confusion over versions should largely disappear.
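To see which flavor you are running, query any operator directly (a sketch; it assumes an NCO operator such as ncks is on your PATH, and the exact banner wording varies by version):

```shell
# A tagged release reports its version string; a development build
# substitutes the build date where the version number would appear
ncks -r
```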
Developer NCO Source Documentation Automated source documentation, created by the Doxygen tool, is available. Some developers find this documentation helpful, as it can clarify code structure and data relationships. Source documentation for NCO and netCDF4(alpha13) The Doxygen documentation is infrequently (i.e., never since Daniel left) updated.
Compilation Requirements Best Practices: Although building NCO yourself can be easy, sexy, and lucrative, we recommend that you first try the pre-built executables for your system, e.g., brew install homebrew/science/nco # Mac (after installing Homebrew) conda install -c conda-forge nco # Any Linux or Mac (after installing Anaconda) sudo aptitude install nco # Debian-based Linux systems like Debian, Mint, Ubuntu sudo dnf install nco # Newer RPM-based Linux systems like CentOS, Fedora, openSUSE, RHEL sudo yum install nco # Older RPM-based Linux systems like CentOS, Fedora, openSUSE, RHEL sudo port install nco # Mac (after installing MacPorts on Mac OS X)
If pre-built executables do not satisfy you (e.g., are out-of-date) and you want the latest, greatest features, then the first steps to build (i.e., compile, for the most part) NCO from source code are to install the pre-requisites: ANTLR version 2.7.7 (like this one, not version 3.x or 4.x!) (required for ncap2), GSL (desirable for ncap2), netCDF (absolutely required), OPeNDAP (enables network transparency), and UDUnits (allows dimensional unit transformations). If possible, install this software stack from pre-built executables (commands to do so on Debian, Mac, and RPM systems follow below). ANTLR executables from major distributions are pre-built with the source patch necessary to allow NCO to link to ANTLR. If you must build the source stack yourself (e.g., due to lack of root access, or on systems without packages, such as AIX), build all libraries with the same compiler and switches. The ANTLR 2.x source file CharScanner.hpp must include this line: #include <cstring>
or else ncap2 will not compile (this ANTLR tarball is already patched). Recent versions of netCDF automatically build OPeNDAP and UDUnits. NCO is mostly written in C99, and although you may mix and match compilers, this is often difficult in practice and is not recommended. The exception is ncap2, which is written in C++. ANTLR, OPeNDAP, and NCO must be built with the same C++ compiler to properly resolve the C++ name-mangling. NCO does not yet support newer ANTLR versions because the ANTLR 3.x and 4.x C++ interfaces are incomplete. For the reasons explained above (compiler compatibility), install as much pre-requisite and optional software as possible from pre-compiled packages. This is easy on modern package-oriented OSs. The NCL/ESMF packages provide ESMF_RegridWeightGen for use by ncremap; they are not required to build NCO. Remember, to compile NCO from source you need not only the library dependencies themselves but also the "devel" versions (which include the header files) of each. For Debian-based systems (like Ubuntu) (aptitude is similar to and interchangeable with apt-get): sudo aptitude install antlr libantlr-dev # ANTLR sudo aptitude install libcurl4-gnutls-dev libexpat1-dev libxml2-dev # DAP-prereqs (curl, expat XML parser) sudo aptitude install bison cmake flex gcc g++ # GNU toolchain sudo aptitude install gsl-bin libgsl-dev # GSL sudo aptitude install libnetcdf11 libnetcdf-dev netcdf-bin # netCDF and DAP sudo aptitude install libhdf5-serial-dev # HDF5 sudo aptitude install ncl-ncarg # ESMF_RegridWeightGen (for ncremap) sudo aptitude install udunits-bin libudunits2-0 libudunits2-dev # UDUnits
For RPM-based systems like Fedora and CentOS (substitute yum for dnf on older systems):
sudo dnf install antlr antlr-C++ -y # ANTLR
sudo dnf install curl-devel libxml2-devel -y # DAP-prereqs
sudo dnf install expat expat-devel -y # expat XML parser, a UDUnits-prereq (RHEL only?)
sudo dnf install libdap libdap-devel -y # DAP
sudo dnf install bison cmake flex gcc gcc-c++ -y # GNU toolchain
sudo dnf install ncl # ESMF_RegridWeightGen (for ncremap)
sudo dnf install gsl gsl-devel -y # GSL
sudo dnf install netcdf netcdf-devel -y # netCDF
sudo dnf install zlib-devel -y # zlib
sudo dnf install librx librx-devel -y # RX
sudo dnf install udunits2 udunits2-devel -y # UDUnits
For Mac OS X with MacPorts:
sudo port install antlr # ANTLR
sudo port install cmake # CMake
sudo port install esmf # ESMF_RegridWeightGen (for ncremap)
sudo port install libdap # DAP
sudo port install gsl # GSL
sudo port install netcdf # netCDF
sudo port install netcdf-fortran +gcc7 # netCDF for ESMF
sudo port install udunits2 # UDUnits
Without MacPorts, the system TeXInfo may be too old to build the documentation. In this case use configure --disable-doc. For Windows with Cygwin, select and install the following packages with Cygwin Setup:
curl # DAP and wget prerequisite
expat # expat XML parser for UDUnits
gsl # GSL
hdf5 # HDF5
netcdf # netCDF
udunits2 # UDUnits
As of 20131101 there is no Cygwin package for ANTLR, and the netCDF package does not yet support DAP. Once you have installed the prerequisites as shown above, you may then build the latest stable NCO and install it in, e.g., /usr/local with:
wget https://github.com/nco/nco/archive/4.7.3.tar.gz
tar xvzf 4.7.3.tar.gz
cd nco-4.7.3
./configure --prefix=/usr/local
make
sudo make install
export PATH=/usr/local/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
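The two export lines above matter because an older NCO may already be installed elsewhere (e.g., by the system package manager): the shell searches PATH left to right, so the freshly installed location must be prepended, not appended. A minimal sketch of the idea:

```shell
# Prepend the fresh install locations so they shadow any older system copies.
export PATH="/usr/local/bin:${PATH}"
export LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH:-}"
# The first PATH entry is searched first, so /usr/local/bin now wins:
echo "${PATH%%:*}"   # prints /usr/local/bin
# After this, `command -v ncks` should report /usr/local/bin/ncks
```

To make the change permanent, add the two export lines to your shell start-up file (e.g., ~/.bashrc).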
The CMake equivalent of the autoconf/configure build example above is:
cd ~/nco/cmake
cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
make
sudo make install
Unlike autoconf/configure, CMake works well on MS Windows. CMake tries to find headers and dependency libraries in standard locations. Override these manually with, e.g.:
cmake .. \
 -DNETCDF_INCLUDE:PATH=/my/netcdf/include/path \
 -DNETCDF_LIBRARY=/my/netcdf/library/file \
 -DHDF5_LIBRARY=/my/hdf5/library/file \
 -DHDF5_HL_LIBRARY=/my/hdf5_hl/library/file \
 -DSZIP_LIBRARY=/my/szip/library/file \
 -DZLIB_LIBRARY=/my/zlib/library/file \
 -DCURL_LIBRARY=/my/curl/library/file \
 -DANTLR_INCLUDE:PATH=/my/antlr/include/path \
 -DANTLR_LIBRARY:FILE=/my/antlr/library/file \
 -DUDUNITS2_INCLUDE:PATH=/my/udunits2/include/path \
 -DUDUNITS2_LIBRARY:FILE=/my/udunits2/library/file \
 -DEXPAT_LIBRARY:FILE=/my/expat/library/file \
 -DGSL_INCLUDE:PATH=/my/gsl/include/path \
 -DGSL_LIBRARY:FILE=/my/gsl/library/file \
 -DGSL_CBLAS_LIBRARY:FILE=/my/cblas/library/file
Please post questions about building or installing NCO to the list only after reading and attempting to follow these instructions. To indicate you have done this, include the word “bonobo” in the first sentence of your post. Yes, “bonobo”. Otherwise we will likely redirect you here. For more sophisticated build/install options, see the next section. Still having trouble building NCO from source? Read these (much older) Build Hints
Using NCO at UCI, NCAR, and other High-Performance Computing Centers (HPCCs): HPCCs unfortunately do not utilize modern package systems like RPMs or .debs, or do so on old OSs with no access to newer RPMs and .debs. Institution-supported executables are usually available with module load nco. These stable releases are often many versions (up to two years) old. Thanks to funding from external grants, DOE, NCAR, and UCI HPCC users may find more recent pre-built NCO executables in the personal directories shown below. These are usually built from a recent tagged version of NCO (e.g., 4.6.X-alphaY), not from the “bleeding edge” of master, which is usually untagged. One way to use these pre-built executables is to prepend them to your executable and library search paths, e.g., export PATH="~zender/bin:${PATH}" and export LD_LIBRARY_PATH="~zender/lib:${LD_LIBRARY_PATH}".
ANL ALCF Cooley cooley.alcf.anl.gov: ~zender/bin
ANL ALCF Mira mira.alcf.anl.gov: ~zender/bin
ANL LCRC Blues blues.lcrc.anl.gov: ~zender/bin
LLNL aims4.llnl.gov: ~zender1/bin
NCAR CISL yellowstone.ucar.edu: ~zender/bin
NCAR CISL mirage0.ucar.edu: ~zender/bin
NERSC Cori cori.nersc.gov: ~zender/bin_cori
NERSC Edison edison.nersc.gov: ~zender/bin_edison
ORNL OLCF Pileus pileus-login01.ornl.gov: ~zender/bin
ORNL OLCF Rhea rhea.ccs.ornl.gov: ~zender/bin_rhea
ORNL OLCF Titan titan.ccs.ornl.gov: ~zender/bin_titan
UCI ESS greenplanet.ps.uci.edu: ~zender/bin
Known Problems from 2013 (version 4.2.4) Onwards
Recent Generic Run-time Problems:
netCDF CDF5 corruption: netCDF libraries 4.4.0+ support the CDF5 binary format. Unfortunately the CDF5 implementation in library versions 4.4.0–4.5.0 is buggy for large (> 4 GiB) variables: writing CDF5 files with large variables corrupts them unless there is only one such “large” variable and it is the last to be defined. If the file is an input dataset (i.e., NCO reads it) written by PnetCDF, then the input data are fine (because PnetCDF writes CDF5 through a different mechanism than serial programs like NCO). And if the CDF5 dataset was originally written by netCDF version 4.5.1 or greater, then it may be fine (it depends on whether/when Unidata patches the bug, identified by us on 20170906; see below). However, a CDF5 input file with large variables written by any serial netCDF writer (like NCO) employing netCDF library 4.4.0–4.5.0 is likely corrupt: variables were silently truncated when it was written. Output files (that NCO wrote) with large variables will definitely be corrupt if NCO was linked to netCDF library 4.4.0–4.5.0 (so upgrade to netCDF 4.5.1+ ASAP). Here are two potential workarounds for data affected by this bug: 1. Re-write (using any netCDF version) the original input files in netCDF4 format instead of CDF5, then process these as normal and write netCDF4 output (instead of CDF5); 2. Re-compile NCO with netCDF library 4.5.1 or later and use it to convert non-corrupt datasets to netCDF4 format, then process the data. For more information on this nasty bug, see here. Our understanding of this bug is still evolving and salient updates will be posted here. UPDATE: Unidata released netCDF 4.5.0 on 20171020. Unfortunately, a patch to fix the CDF5 bug was not included, so the earliest a general-purpose CDF5 fix can appear is version 4.5.1.
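Whether a given file is even exposed to this bug depends on its on-disk flavor, which can be read from its first four bytes. The helper below is a hedged sketch using only POSIX tools (the function name nc_flavor and the sample file are illustrative, not NCO features); for real work, `ncks -M file.nc` reports the format authoritatively.

```shell
# Identify a netCDF file's on-disk flavor from its magic number.
# Only CDF-5 files are exposed to the large-variable truncation bug above.
nc_flavor() {
  case $(od -An -tx1 -N4 "$1" | tr -d ' \n') in
    43444601) echo "CDF-1 (classic)" ;;
    43444602) echo "CDF-2 (64-bit offset)" ;;
    43444605) echo "CDF-5 (64-bit data)" ;;
    89484446) echo "netCDF4 (HDF5)" ;;
    *)        echo "unknown" ;;
  esac
}
printf 'CDF\005' > /tmp/sample.nc   # stand-in 4-byte CDF-5 signature for demonstration
nc_flavor /tmp/sample.nc            # prints: CDF-5 (64-bit data)
```

The magic numbers are "CDF" followed by a version byte for the classic formats, and the standard HDF5 signature for netCDF4 files.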
Those bonking their heads against walls and willing to try what Unidata chose not to include in 4.5.0 can use this experimental patch. YMMV. Unsupported.
netCDF4 Strided Hyperslab bug: Multiple users complain that access to strided hyperslabs of netCDF4 datasets is orders of magnitude slower than expected. This occurs with NCO and also with related software like NCL. The cause appears to be that all recent versions of netCDF up to 4.3.3 access strided hyperslabs via an algorithm (in nc_get_vars()) that becomes unwieldy and error-prone for large datasets. We developed and implemented a transparent workaround (that avoids the troublesome algorithm) for the most common case, in which strides are taken in only one dimension, e.g., -d time,0,,12. With the workaround introduced in NCO 4.4.6, strided access to netCDF4 datasets now completes in nearly the same time as non-strided access. This workaround works transparently with any version of netCDF. We are not yet sure that we have fully and correctly diagnosed the cause, nor that our workaround is always effective. Comments welcome. Updates will be posted right here.
netCDF4 Renaming bugs: Unfortunately, from 2007 to the present (February, 2015) the netCDF library (versions 4.0.0–4.3.3) contained bugs or limitations that prevent ncrename (and other netCDF4-based software) from correctly renaming coordinate variables, dimensions, groups, and attributes in netCDF4 files. (To our knowledge the netCDF library calls for renaming always work well on netCDF3 files, so one workaround to the netCDF4 bugs is to convert to netCDF3, rename, then convert back.) A summary of renaming limitations associated with particular versions of netCDF4 is maintained in the online manual here. Important updates will also be posted here on the homepage. There are still known bugs in renaming features as of netCDF library version 4.4.1.1 (November, 2016).
Recent Operator-specific Run-time Problems:
Hyperslabbing masked and/or weighted variables bug: Versions 4.2.4–4.6.7 of ncwa incorrectly handle masking and weighting of hyperslabbed variables. Performing these actions simultaneously causes subtle arithmetic errors (the worst kind!) unless the hyperslab happens to start with the first element of the variable. In other words, results from commands of the form ncwa -d ... -m ... and ncwa -d ... -w ... are suspect. The workaround is to downgrade to NCO 4.2.3. The solution is to upgrade to NCO 4.6.8.
Minimization/maximization of packed variables bug: Versions 4.3.X–4.4.5 of ncwa incorrectly handled packed variables during these operations. The two workarounds are to unpack first or to perform the statistics in single precision with the --flt option. The solution is to upgrade to NCO 4.4.6.
Promoting packed records with missing values bug: Versions 4.3.X–4.4.5 of ncra could produce (wildly) inaccurate statistics when promoting (e.g., to double- from single-precision) variables that are packed and that have missing values. The two workarounds are to unpack first or to perform the statistics in single precision with the --flt option. The solution is to upgrade to NCO 4.4.6.
Chunking while hyperslabbing bug: Versions 4.3.X–4.4.4 of most operators could send incorrect chunking requests to the netCDF library, resulting in failures. This occurred only while simultaneously hyperslabbing. The solution is to upgrade to NCO 4.4.5.
ncwa mask condition bug: All versions through 4.4.3 of ncwa could return incorrect mask values for negative numbers. Thanks to Keith Lindsay for the report, and to Henry Butowsky for the fix. Prior to this fix, the ncwa lexer would drop the negative sign, if any, from the comparators appearing in the mask condition, e.g., ncwa --mask_condition "lat < -20" was parsed as "lat < 20", not "lat < -20". Hence, users of ncwa --mask_condition (or of -B) should upgrade.
NB: The -m -M -T form of ncwa masking is/was not buggy. Thus the workaround is to use the -m -M -T form of ncwa masking, while the long-term solution is to upgrade to NCO 4.4.4+.
ncra, ncea, and ncrcat file close bug: Versions 4.3.9–4.4.0 of ncra, ncea, and ncrcat failed to correctly close and optionally remove input files. This could cause NCO to exceed system limits on the maximum number of open files when hundreds-to-thousands of input files were specified per NCO invocation. The exact failure point is OS-dependent (NCO commands on Mac OS X 10.9 would fail when processing more than 256 files at a time). This is embarrassing because NCO has always been designed to work with arbitrary numbers of input files, and we want power users to be comfortable running it on hundreds of thousands of input files. The workaround is to avoid versions 4.3.9–4.4.0, while the long-term solution is to upgrade to NCO 4.4.1+.
ncra MRO missing value bug: Versions 4.3.6–4.3.9 of ncra could treat missing values incorrectly during double-precision arithmetic. A symptom was that missing values could be replaced by strange numbers like, well, infinity or zero. This mainly affects ncra in MRO (multi-record output) mode, and the symptoms should be noticeable. The workaround is to run the affected versions of ncra with the --flt switch, so that single-precision floating-point numbers are not promoted prior to arithmetic. The solution is to upgrade to NCO 4.4.0+.
ncwa hyperslabbing while averaging bug: Versions 4.3.3–4.3.5 of ncwa could return incorrect answers when user-specified hyperslabs were simultaneously extracted. In such cases, hyperslab limits were not consistently applied, which could produce incorrect answers that look correct. This bug only affected hyperslabbed statistics (those produced by simultaneously invoking the -a and -d switches); “global averages” were unaffected. We urge all ncwa users to upgrade to NCO 4.3.6+.
ncpdq unpacking bug with auxiliary coordinates: Versions 4.3.2–4.3.3 of ncpdq did not correctly store unpacked variables. These versions unpacked the values (when specified with -U or -P upk), but inadvertently stored the original packing attributes with the unpacked values. This would lead subsequent operators to assume that the values were still packed, so consecutive operations could produce incorrect values. Fixed in version 4.3.4. All ncpdq users are encouraged to upgrade. NB: this bug did not affect the other arithmetic operators, which unpack data prior to arithmetic.
Recent Platform-specific Run-time Problems: No known platform-specific problems with recent releases.
Known Problems through 2012 (version 4.2.3)
People: Current Developers (please contact us via the project forums, not via email):
Charlie Zender, Professor of Earth System Science (ESS) and of Computer Science (CS). Role: Project PI. Contributions: core library, porting, release manager. Related Research: 1. Group-Oriented Data Analysis and Distribution (GODAD). 2. Extend the empirically verified analytic model (described here) for terascale data reduction of gridded datasets to account for cluster- and network-effects. 3. Enable and optimize NCO for intra-file-level parallelism using netCDF4/HDF5 parallel filesystem features. Other Interests: Atmospheric Physics, Climate Change.
Henry Butowsky, software engineer. Role: Scientific programmer. Current Research: 1. Efficient complex data analysis with storage-layer constraints. 2. Develop and thread the ncap2 interpreter. Other Interests: Compilers and interpreters.
Wenshan Wang, PhD Candidate. Current Research: 1. Causes and implications of Greenland snowmelt. 2. Rapid evaluation and exploitation of multi-model datasets. Other Interests: Advancing to candidacy.
Alumni Developers:
Dr. Scott Capps, earned an Earth System Science Ph.D. (2009) with Zender at UCI, then postdoc at UCLA, now at Vertum Partners.
Stephen Jenks, former Assistant Professor of Electrical Engineering and Computer Science (EECS).
Pedro Vicente, software engineer, 201206–201405, and continuing as a volunteer.
Dr. Daniel Wang, earned an EECS Ph.D. (2008) with Jenks and Zender at UCI, now at SLAC.
Thanks also to past NCO contributors.