Release notes for Gluster 3.12.2

This is a bugfix release. The release notes for 3.12.0, 3.12.1, and 3.12.2 contain a listing of all the new features added and bugs fixed in the GlusterFS 3.12 stable release.

Major changes, features and limitations addressed in this release

 1. In a pure distribute volume there is no source from which to heal a
 replaced brick, so a replace brick operation would cause loss of the data
 that was present on the replaced brick. The CLI has been enhanced to
 prevent a user from inadvertently using replace brick on a pure distribute
 volume. It is advised to use add/remove brick to migrate data from an
 existing brick in a pure distribute volume, as sketched below.
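
 A minimal sketch of the add/remove brick migration (the volume name, hosts,
 and brick paths below are hypothetical):

     # Add the new brick to the pure distribute volume
     gluster volume add-brick distvol server2:/bricks/brick2
     # Drain data off the old brick, watch progress, then commit the removal
     gluster volume remove-brick distvol server1:/bricks/brick1 start
     gluster volume remove-brick distvol server1:/bricks/brick1 status
     gluster volume remove-brick distvol server1:/bricks/brick1 commit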

Major issues

  1. Expanding a gluster volume that is sharded may cause file corruption

    • Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e., bricks are added or removed and the volume rebalanced), there are reports of VM images getting corrupted.
    • The fix for the last known cause of corruption, #1465123, is still pending and is not part of this release.
  2. Gluster volume restarts fail if the sub-directory export feature is in use (see the mount sketch below). The status of this issue can be tracked here: #1501315
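
     For reference, the feature in question is the FUSE mount of a volume
     sub-directory (the host, volume, and directory names here are
     hypothetical); volumes mounted this way are the ones affected:

         # FUSE mount of a sub-directory instead of the whole volume
         mount -t glusterfs server1:/distvol/subdir /mnt/subdir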

  3. Mounting a gluster snapshot via a FUSE-based mount will fail. Current users are therefore recommended to access snapshot contents only via the ".snaps" directory on a mounted gluster volume, as sketched below. The status of this issue can be tracked here: #1501378
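
     A minimal sketch of the recommended ".snaps" access path (the volume and
     snapshot names are hypothetical, and this assumes user-serviceable
     snapshots are enabled on the volume):

         # Enable user-serviceable snapshots (USS) for the volume
         gluster volume set distvol features.uss enable
         # Browse snapshot contents from a FUSE mount of the volume itself
         ls /mnt/distvol/.snaps/
         ls /mnt/distvol/.snaps/snap1/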

Bugs addressed

 A total of 31 patches have been merged, addressing the 30 bugs listed below:
  • #1490493: Sub-directory mount details are incorrect in /proc/mounts
  • #1491178: GlusterD returns a bad memory pointer in glusterd_get_args_from_dict()
  • #1491292: Provide brick list as part of VOLUME_CREATE event.
  • #1491690: rpc: TLSv1_2_method() is deprecated in OpenSSL-1.1
  • #1492026: set the shard-block-size to 64MB in virt profile
  • #1492061: CLIENT_CONNECT event not being raised
  • #1492066: AFR_SUBVOL_UP and AFR_SUBVOLS_DOWN events not working
  • #1493975: disallow replace brick operation on plain distribute volume
  • #1494523: Spelling errors in 3.12.1
  • #1495162: glusterd ends up with multiple uuids for the same node
  • #1495397: Make event-history feature configurable and have it disabled by default
  • #1495858: gluster volume create asks for confirmation for replica-2 volume even with force
  • #1496238: [geo-rep]: Scheduler help needs correction for description of --no-color
  • #1496317: [afr] split-brain observed on T files post hardlink and rename in x3 volume
  • #1496326: [GNFS+EC] lock is being granted to 2 different clients for the same data range at a time after performing lock acquire/release from the clients
  • #1497084: glusterfs process consume huge memory on both server and client node
  • #1499123: Readdirp is considerably slower than readdir on acl clients
  • #1499150: Improve performance with xattrop update.
  • #1499158: client-io-threads option not working for replicated volumes
  • #1499202: self-heal daemon stuck
  • #1499392: [geo-rep]: Improve the output message to reflect the real failure with schedule_georep script
  • #1500396: [geo-rep]: Observed "Operation not supported" error with traceback on slave log
  • #1500472: Use a bitmap to store local node info instead of conf->local_nodeuuids[i].uuids
  • #1500662: gluster volume heal info "healed" and "heal-failed" showing wrong information
  • #1500835: [geo-rep]: Status shows ACTIVE for most workers in EC before they become PASSIVE
  • #1500841: [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
  • #1500845: [geo-rep] master worker crash with interrupted system call
  • #1500853: [geo-rep]: Incorrect last sync "0" during history crawl after upgrade/stop-start
  • #1501022: Make choose-local configurable through volume-set command
  • #1501154: Brick Multiplexing: Gluster volume start force complains with command "Error : Request timed out" when there are multiple volumes