Release notes for Gluster 3.10.2

This is a bugfix release. The release notes for 3.10.0 and 3.10.1 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.10 stable release.

Major changes, features and limitations addressed in this release

  1. Many brick multiplexing and nfs-ganesha+ha bugs have been addressed.
  2. Rebalance and remove brick operations have been disabled for sharded volumes to prevent data corruption.

Major issues

  1. Expanding a gluster volume that is sharded may cause file corruption

       ◦ Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.

       ◦ The status of this bug can be tracked here: #1426508
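Until the fix lands, administrators may want to verify whether sharding is enabled on a volume before attempting add-brick/remove-brick or rebalance. A minimal sketch using the standard `gluster volume get` command; the volume name `myvol` and the `is_sharded` helper are illustrative, not part of Gluster:

```shell
#!/bin/sh
# Sketch, assuming the stock gluster CLI is on PATH. "myvol" is a
# placeholder volume name; substitute your own.
is_sharded() {
    # Print "yes" when features.shard is enabled on the given volume,
    # "no" otherwise (including when the query fails).
    if gluster volume get "$1" features.shard 2>/dev/null | grep -qw on; then
        echo yes
    else
        echo no
    fi
}

# Example guard before an add-brick/remove-brick + rebalance:
#   [ "$(is_sharded myvol)" = yes ] && echo "sharded volume: skip rebalance (#1426508)"
```

Note that this release already disables rebalance and remove-brick on sharded volumes server-side; a check like this is only a belt-and-suspenders measure for scripted workflows.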

Bugs addressed

A total of 63 patches have been merged, addressing 46 bugs:

  • #1437854: Spellcheck issues reported during Debian build
  • #1425726: Stale export entries in ganesha.conf after executing "gluster nfs-ganesha disable"
  • #1427079: [Ganesha] : unexport fails if export configuration file is not present
  • #1440148: common-ha (debian/ubuntu): has a hard-coded /usr/libexec/ganesha...
  • #1443478: RFE: Support to update NFS-Ganesha export options dynamically
  • #1443490: [Nfs-ganesha] Refresh config fails when ganesha cluster is in failover mode.
  • #1441474: synclocks don't work correctly under contention
  • #1449002: [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
  • #1438813: Segmentation fault when creating a qcow2 with qemu-img
  • #1438423: [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
  • #1444540: rm -rf <dir> returns ENOTEMPTY even though ls on the mount point returns no files
  • #1446227: Incorrect and redundant logs in the DHT rmdir code path
  • #1447608: Don't allow rebalance/fix-layout operation on sharding enabled volumes till dht+sharding bugs are fixed
  • #1448864: Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
  • #1443349: [Eventing]: Unrelated error message displayed when path specified during a 'webhook-test/add' is missing a schema
  • #1441576: [geo-rep]: rsync should not try to sync internal xattrs
  • #1441927: [geo-rep]: Worker crashes with [Errno 16] Device or resource busy: '.gfid/00000000-0000-0000-0000-000000000001/dir.166 while renaming directories
  • #1401877: [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
  • #1425723: nfs-ganesha volume export file remains stale in shared_storage_volume when volume is deleted
  • #1427759: nfs-ganesha: Incorrect error message returned when disable fails
  • #1438325: Need to improve remove-brick failure message when the brick process is down.
  • #1438338: glusterd is setting replicate volume property over disperse volume or vice versa
  • #1438340: glusterd is not validating for allowed values while setting "cluster.brick-multiplex" property
  • #1441476: Glusterd crashes when restarted with many volumes
  • #1444128: [BrickMultiplex] gluster command not responding and .snaps directory is not visible after executing snapshot related command
  • #1445260: [GANESHA] Volume start and stop having ganesha enable on it,turns off cache-invalidation on volume
  • #1445408: gluster volume stop hangs
  • #1449934: Brick Multiplexing :- resetting a brick bring down other bricks with same PID
  • #1435779: Inode ref leak on anonymous reads and writes
  • #1440278: [GSS] NFS Sub-directory mount not working on solaris10 client
  • #1450378: GNFS crashed while taking lock on a file from 2 different clients having same volume mounted from 2 different servers
  • #1449779: quota: limit-usage command failed with error " Failed to start aux mount"
  • #1450564: glfsheal: crashed(segfault) with disperse volume in RDMA
  • #1443501: Don't wind post-op on a brick where the fop phase failed.
  • #1444892: When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st_size value.
  • #1449169: Multiple bricks WILL crash after TCP port probing
  • #1440805: Update to check Change-Id consistency for backports
  • #1443010: snapshot: snapshots appear to be failing with respect to secure geo-rep slave
  • #1445209: snapshot: Unable to take snapshot on a geo-replicated volume, even after stopping the session
  • #1444773: explicitly specify executor to be bash for tests
  • #1445407: remove bug-1421590-brick-mux-reuse-ports.t
  • #1440742: Test files clean up for tier during 3.10
  • #1448790: [Tiering]: High and low watermark values when set to the same level, is allowed
  • #1435942: Enabling parallel-readdir causes dht linkto files to be visible on the mount,
  • #1437763: File-level WORM allows ftruncate() on read-only files
  • #1439148: Parallel readdir on Gluster NFS displays less number of dentries