re: Issue https://github.com/Unidata/netcdf-c/issues/2748
This PR fixes a number of issues and bugs.
## s3cleanup fixes
* Delete extraneous files related to s3cleanup.sh.
* Remove duplicate s3cleanup.uids entries.
## Support the Google S3 API
* Add code to recognize "storage.googleapis.com".
* Add code to track the kind of server being accessed: unknown, Amazon, or Google (illustrated in the sketch after this list).
* Add a new mode flag "gs3" (analogous to "s3") to support this API.
* Modify the S3 URL code to support this case.
* Modify the listobjects result parsing because Google returns some non-standard XML elements.
* Change the signature of NC_s3urlrebuild and its call sites.
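
For illustration, the server-kind tracking might look like the sketch below; the enum and function names here are hypothetical, not the actual netcdf-c code.

```c
#include <string.h>

/* Hypothetical names: classify which flavor of S3 server a host names. */
typedef enum S3ServerKind { S3_UNKNOWN = 0, S3_AMAZON, S3_GOOGLE } S3ServerKind;

static S3ServerKind
classify_s3_host(const char* host)
{
    if(host == NULL) return S3_UNKNOWN;
    if(strstr(host, "amazonaws.com") != NULL) return S3_AMAZON;
    if(strstr(host, "storage.googleapis.com") != NULL) return S3_GOOGLE;
    return S3_UNKNOWN;
}
```

With the new flag, a Google bucket would presumably be opened via a URL whose fragment selects the mode, e.g. `https://storage.googleapis.com/<bucket>/<path>#mode=nczarr,gs3`, following netcdf-c's usual `mode=` fragment convention; check the PR's tests for the exact spelling.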
## Handle corrupt Zarr files where shape is empty for a variable
Modify the behavior when a variable's "shape" dictionary entry is empty.
Previously this returned an error; now such a variable is suppressed,
which makes it possible to read the non-corrupt data in the file.
Also added a test case.
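
As a hedged sketch of the new behavior (names are hypothetical; this is not the actual libnczarr code), the decision reduces to:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical helper: given a parsed shape of `rank` dimensions,
 * decide whether the variable is usable or should be suppressed. */
static int
shape_is_usable(const size_t* shape, size_t rank)
{
    if(shape == NULL || rank == 0) {
        /* Previously a hard error; now the variable is skipped so the
         * rest of the file remains readable. */
        fprintf(stderr, "suppressing variable: empty \"shape\"\n");
        return 0;
    }
    return 1;
}
```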
## Misc. Other Changes
* Fix the nclog level handling to suppress output by default.
* Fix the de-duplication code in ncuri.c (see the sketch after this list).
* Restore testing of iridl.ldeo.columbia.edu.
* Fix a bug in define_vars(), which did not always properly reclaim memory between variables.
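
For the ncuri.c item, duplicate removal over fragment parameters might be sketched as below; the NULL-terminated (key,value,key,value,...) list layout and the function name are assumptions, not the actual ncuri.c representation.

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: drop later duplicates of a key from a NULL-terminated
 * (key,value,key,value,...) string list, keeping the first occurrence.
 * Assumes the list owns its strings. */
static void
dedup_fragments(char** pairs)
{
    size_t i, j, k;
    for(i = 0; pairs[i] != NULL; i += 2) {
        j = i + 2;
        while(pairs[j] != NULL) {
            if(strcmp(pairs[i], pairs[j]) == 0) {
                free(pairs[j]);       /* discard the duplicate pair */
                free(pairs[j+1]);
                for(k = j; ; k++) {   /* shift the tail down one pair */
                    pairs[k] = pairs[k+2];
                    if(pairs[k] == NULL) break;
                }
            } else {
                j += 2;
            }
        }
    }
}
```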
re: Issue https://github.com/Unidata/netcdf-c/issues/2458
The above GitHub issue revealed some bugs in the file netcdf-c/plugins/H5Zblosc.c; these are fixed and a test case was added. Also discovered that the Blosc LZ sub-compressors do not work well with small datasets (see the sketch below).
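
For context, a filter such as Blosc is attached to a variable via nc_def_var_filter. The sketch below is a hedged illustration: filter id 32001 is the registered HDF5 id for Blosc, but the parameter layout (filter revision, blosc version, typesize, chunk bytes, level, shuffle, sub-compressor code) is the common H5Zblosc convention and should be treated as an assumption here, as should the helper name enable_blosc.

```c
#include <netcdf.h>
#include <netcdf_filter.h>

#define FILTER_BLOSC 32001u  /* registered HDF5 filter id for Blosc */

/* Hedged sketch: attach Blosc to a variable. params[4]=level,
 * params[5]=shuffle, params[6]=0 selects the blosclz sub-compressor;
 * the first four slots are typically recomputed by the filter itself. */
static int
enable_blosc(int ncid, int varid)
{
    unsigned int params[7] = {2, 2, 4, 0, 5, 1, 0};
    return nc_def_var_filter(ncid, varid, FILTER_BLOSC, 7, params);
}
```

On very small datasets (a handful of values per chunk) the per-block overhead of the LZ sub-compressors can exceed any savings, which is consistent with the behavior noted above.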
Misc. Other Change(s): I noticed that the file "dap4_test/baselinethredds/GOES16_CONUS_20170821_020218_0.47_1km_33.3N_91.4W.nc4.thredds" is still causing tar errors during "make distcheck", so I changed the tests to rename it at test time.