Mitigate S3 test interference + Unlimited Dimensions in NCZarr
This PR started as an attempt to add unlimited dimensions to NCZarr.
It did that, but this exposed significant problems with test interference.
So this PR is mostly about fixing -- well mitigating anyway -- test
interference.
The problem of test interference is now documented in docs/internal.md,
and the solutions implemented here are also described in that document.
The solution is somewhat fragile, but multiple cleanup mechanisms
are provided. Note that this feature requires that the
AWS command-line utility be installed.
## Unlimited Dimensions
The existing NCZarr extensions to Zarr are modified to support unlimited dimensions.
NCZarr extends the Zarr metadata for the ".zgroup" object to include netcdf-4 model extensions. This information is stored in ".zgroup" as a dictionary named "_nczarr_group".
Inside "_nczarr_group", there is a key named "dims" that stores information about netcdf-4 named dimensions. The value of "dims" is a dictionary whose keys are the dimension names. The value associated with each dimension name has one of two forms:
* Form 1: An integer representing the size of the dimension; this is used for simple named dimensions.
* Form 2: A dictionary with the following keys and values:
    - "size" with an integer value representing the (current) size of the dimension.
    - "unlimited" with a value of either "1" or "0" to indicate whether this dimension is an unlimited dimension.

Form 1 is a special case of Form 2 and is kept for backward compatibility. Whenever a new file is written, it uses Form 1 if possible, otherwise Form 2.
For unlimited dimensions, the size is initially zero; as variables extend the length of that dimension, the size value for the dimension increases.
That dimension size is shared by all arrays referencing that dimension, so if one array extends an unlimited dimension, it is implicitly extended for all other arrays that reference that dimension.
This is the standard semantics for unlimited dimensions.
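As an illustrative sketch (the dimension names and the surrounding ".zgroup" keys here are hypothetical; only "_nczarr_group", "dims", and the two value forms come from the description above), a ".zgroup" using both forms might look like:

```json
{
    "zarr_format": 2,
    "_nczarr_group": {
        "dims": {
            "lat": 180,
            "time": {"size": 0, "unlimited": "1"}
        }
    }
}
```

Here "lat" uses Form 1 (a plain integer size), while the unlimited "time" dimension must use Form 2 so that both its current size and its unlimited flag can be recorded.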
Adding unlimited dimensions required a number of other changes to the NCZarr code-base, including the following.
* Partially refactored the slice handling code in zwalk.c to clean it up.
* Added a number of tests for unlimited dimensions derived from the corresponding test in nc_test4.
* Added several NCZarr-specific unlimited-dimension tests; more are needed.
* Added a test of endianness.
## Misc. Other Changes
* Modified libdispatch/ncs3sdk_aws.cpp to optionally support use of the
  AWS Transfer Utility mechanism. This is controlled by the
  `#define TRANSFER` line in that file. It defaults to being disabled.
* Parameterized both the standard Unidata S3 bucket (S3TESTBUCKET) and the netcdf-c test data prefix (S3TESTSUBTREE).
* Fixed an obscure memory leak in ncdump.
* Removed some obsolete unit testing code and test cases.
* Uncovered a bug in the netcdf-c handling of big-endian floats and doubles; it has not been fixed yet. See tst_h5_endians.c.
* Renamed some nczarr_tests test cases to avoid name conflicts with nc_test4.
* Modified the semantics of zmap\#ncsmap_write to only allow total rewrite of objects.
* Modified the semantics of zodom to properly handle stride > 1.
* Added a truncate operation to the libnczarr zmap code.
2023-09-27 06:56:48 +08:00
#!/bin/bash

# Uncomment to get verbose output
#VERBOSE=1

if test "x$VERBOSE" = x1 ; then set -x; fi
# Constants passed in from configure.ac/CMakeLists
abs_top_srcdir='@abs_top_srcdir@'
abs_top_builddir='@abs_top_builddir@'

# Additional configuration information
. ${abs_top_builddir}/test_common.sh

delta="$1"
# Sanity checks

# 1. This requires that the AWS CLI (command line interface) is installed.
if ! which aws ; then
echo ">>>> The s3cleanup script requires the \"aws\" command (i.e. the AWS command line interface program)"
echo ">>>> Try installing \"awscli\" package with apt or equivalent."
exit 0
fi

# 2. Make sure S3TESTSUBTREE is defined
if test "x$S3TESTSUBTREE" = x ; then
echo ">>>> The s3cleanup script requires that S3TESTSUBTREE is defined."
exit 1
fi

# 3. Make sure delta is defined
if test "x$delta" = x ; then
echo ">>>> No delta argument provided"
echo ">>>> Usage: s3gc <delta>"
echo ">>>> where <delta> is number of days prior to today to begin cleanup"
exit 1
fi
# This script takes a delta (in days) as an argument.
# It then removes from the Unidata S3 bucket those keys
# that are older than (current_date - delta).

# Compute current_date - delta

# current date
current=`date +%s`
# convert delta to seconds
deltasec=$((delta*24*60*60))
# Compute cleanup point
lastdate=$((current-deltasec))

rm -f s3gc.json
# Get complete set of keys in ${S3TESTSUBTREE} prefix
if ! aws s3api list-objects-v2 --bucket ${S3TESTBUCKET} --prefix "${S3TESTSUBTREE}" | grep -F '"Key":' >s3gc.keys ; then
echo "No keys found"
rm -f s3gc.json
exit 0
fi

while read -r line; do
KEY0=`echo "$line" | sed -e 's|[^"]*"Key":[^"]*"\([^"]*\)".*|\1|'`
# Strip off any leading '/'
KEY=`echo "$KEY0" | sed -e 's|^[/]*\(.*\)|\1|'`
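The sed key extraction can be exercised standalone on a sample list-objects-v2 output line (the sample key below is hypothetical):

```shell
#!/bin/sh
# A line as emitted by: aws s3api list-objects-v2 ... | grep -F '"Key":'
line='        "Key": "/netcdf-c/testset_1695000000/file.nc",'
# Pull out the quoted value following "Key":
KEY0=`echo "$line" | sed -e 's|[^"]*"Key":[^"]*"\([^"]*\)".*|\1|'`
# Strip off any leading '/'
KEY=`echo "$KEY0" | sed -e 's|^[/]*\(.*\)|\1|'`
echo "$KEY"   # prints netcdf-c/testset_1695000000/file.nc
```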
|
|
|
# Ignore keys that do not start with ${S3TESTSUBTREE}
|
|
|
|
PREFIX=`echo "$KEY" | sed -e 's|\([^/]*\)/.*|\1|'`
|
|
|
|
if test "x$PREFIX" = "x$S3TESTSUBTREE" ; then
|
|
|
|
ALLKEYS="$ALLKEYS $KEY"
|
|
|
|
fi
|
|
|
|
done < s3gc.keys
# Look at each key and see if it is less than lastdate.
# If so, then record that key

# Capture the keys with old uids to delete
unset MATCHKEYS
for key in $ALLKEYS ; do
case "$key" in
"$S3TESTSUBTREE/testset_"*)
# Capture the uid for this key
s3uid=`echo $key | sed -e "s|$S3TESTSUBTREE/testset_\([0-9][0-9]*\)/.*|\1|"`
# check that we got a uid
if test "x$s3uid" != x ; then
# Test age of the uid
if test $((s3uid < lastdate)) = 1; then
MATCHKEYS="${MATCHKEYS} $key"
fi
else
if test "x$VERBOSE" = x1 ; then echo "Excluding \"$key\""; fi
fi
;;
*) if test "x$VERBOSE" = x1; then echo "Ignoring \"$key\""; fi ;;
esac
done
# We can delete at most 1000 objects at a time, so divide into sets of size 500
REM="$MATCHKEYS"
while test "x$REM" != x ; do
K500=`echo "$REM" | cut -d' ' -f 1-500`
REM=`echo "$REM" | cut -d' ' -f 501-`
unset DELLIST
MATCH=0
FIRST=1
DELLIST="{\"Objects\":["
for key in $K500 ; do
if test $FIRST = 0 ; then DELLIST="${DELLIST},"; fi
DELLIST="${DELLIST}
{\"Key\":\"$key\"}"
FIRST=0
MATCH=1
done
DELLIST="${DELLIST}],\"Quiet\":false}"
rm -f s3gc.json
if test "x$MATCH" = x1 ;then
echo "$DELLIST" > s3gc.json
aws s3api delete-objects --bucket ${S3TESTBUCKET} --delete "file://s3gc.json"
fi
done
rm -f s3gc.json
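For reference, the s3gc.json payload assembled above follows the aws s3api delete-objects request syntax; with two hypothetical keys it would contain:

```json
{"Objects":[
{"Key":"netcdf-c/testset_1695000000/file1.nc"},
{"Key":"netcdf-c/testset_1695000000/file2.nc"}],"Quiet":false}
```

With "Quiet" set to false, delete-objects reports the result for every key, which makes failed deletions visible in the test logs.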