pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

Bug #1817097 reported by bugproxy
Affects                  Status        Importance  Assigned to                  Milestone
Ubuntu on IBM z Systems  Fix Released  Medium      Canonical Foundations Team
lvm2                     Fix Released  Medium
e2fsprogs (Ubuntu)       Fix Released  Undecided   Unassigned
linux (Ubuntu)           Invalid       Undecided   Unassigned
lvm2 (Ubuntu)            Invalid       Undecided   Skipper Bug Screeners

Bug Description

Problem Description---
Summary
=======
Environment: IBM Z13 LPAR and z/VM Guest
             IBM Type: 2964 Model: 701 NC9
OS: Ubuntu 18.10 (GNU/Linux 4.18.0-13-generic s390x)
             Package: lvm2 version 2.02.176-4.1ubuntu3
LVM: the pvmove operation corrupts the file system when moving from an
     underlying device with the default 512-byte logical block size to one
     with a 4096-byte (4k) logical block size
The problem is immediately reproducible.

We see a real usability issue with data destruction as a consequence - which is not acceptable.
We expect 'pvmove' to fail with an error in such situations to prevent fs destruction;
the error could possibly be overridden by a force flag.

Details
=======
After a 'pvmove' operation is run to move a physical volume onto an encrypted
device with a 4096-byte logical block size, we experience file system corruption.
The file system does not need to be mounted, but the problem surfaces
differently if it is.

Either the 'pvs' command after the pvmove shows
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid argument
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314564608: Invalid argument
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 4096: Invalid argument

or

a subsequent mount fails (after an umount, if the fs had previously been mounted
as in our setup):
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/LOOP_VG-LV, missing codepage or helper program, or other error.

A minimal LVM setup with one volume group containing one logical volume, based
on one physical volume, is sufficient to trigger the problem. One more physical
volume of the same size is needed as the target of the pvmove operation.

      LV
       |
    VG: LOOP_VG [ ]
       |
    PV: /dev/loop0 --> /dev/mapper/enc-loop
                        ( backed by /dev/mapper/enc-loop )

For this problem report the physical volumes are backed by loopback devices
(losetup), but we have also seen the error on real SCSI multipath volumes,
with and without cryptsetup mapper devices in use.

Further discussion
==================
https://www.saout.de/pipermail/dm-crypt/2019-February/006078.html
The problem does not occur on block devices with a native block size of 4k
(e.g. DASDs), or on file systems created with the mkfs -b 4096 option.
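For example, creating the file system with an explicit 4k block size avoids the
corruption (a sketch; the device path matches the reproduction setup below):

    # force a 4096-byte file system block size at creation time
    mkfs.ext4 -b 4096 /dev/mapper/LOOP_VG-LV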

Terminal output
===============
See attached file pvmove-error.txt

Debug data
==========
pvmove was run with -dddddd (maximum debug level)
See attached journal file.

Contact Information = <email address hidden>

---uname output---
Linux system 4.18.0-13-generic #14-Ubuntu SMP Wed Dec 5 09:00:35 UTC 2018 s390x s390x s390x GNU/Linux

Machine Type = IBM Type: 2964 Model: 701 NC9

---Debugger---
A debugger is not configured

---Steps to Reproduce---
1.) Create two image files of 500 MB in size
    and set up two loopback devices with 'losetup -fP FILE'
2.) Create one physical volume, one volume group 'LOOP_VG',
    and one logical volume 'LV'.
    Run:
    pvcreate /dev/loop0
    vgcreate LOOP_VG /dev/loop0
    lvcreate -L 300MB LOOP_VG -n LV /dev/loop0
3.) Create a file system on the logical volume device:
    mkfs.ext4 /dev/mapper/LOOP_VG-LV
4.) Mount the file system created in the previous step on some empty available directory:
    mount /dev/mapper/LOOP_VG-LV /mnt
5.) Set up a second physical volume, this time encrypted with LUKS2,
    and open the volume to make it available:
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1
    cryptsetup luksOpen /dev/loop1 enc-loop
6.) Create the second physical volume, and add it to the LOOP_VG
    pvcreate /dev/mapper/enc-loop
    vgextend LOOP_VG /dev/mapper/enc-loop
7.) Ensure the new physical volume is part of the volume group:
    pvs
8.) Move the /dev/loop0 volume onto the encrypted volume with maximum debug option:
    pvmove -dddddd /dev/loop0 /dev/mapper/enc-loop
9.) The previous step succeeds, but corrupts the file system on the logical volume.
    We expect an error here.
    A command line flag could be offered to override the check in cases where
    the corruption would not cause data loss.
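For convenience, the steps above as a single command sequence (a sketch: file
names and sizes are illustrative, and the loop device names assume no other
loop devices are in use):

    dd if=/dev/zero of=/tmp/pv0.img bs=1M count=500   # 500 MB, a multiple of 4096
    dd if=/dev/zero of=/tmp/pv1.img bs=1M count=500
    losetup -fP /tmp/pv0.img                          # becomes /dev/loop0
    losetup -fP /tmp/pv1.img                          # becomes /dev/loop1
    pvcreate /dev/loop0
    vgcreate LOOP_VG /dev/loop0
    lvcreate -L 300MB LOOP_VG -n LV /dev/loop0
    mkfs.ext4 /dev/mapper/LOOP_VG-LV
    mount /dev/mapper/LOOP_VG-LV /mnt
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1
    cryptsetup luksOpen /dev/loop1 enc-loop
    pvcreate /dev/mapper/enc-loop
    vgextend LOOP_VG /dev/mapper/enc-loop
    pvs                                               # both PVs now in LOOP_VG
    pvmove -dddddd /dev/loop0 /dev/mapper/enc-loop    # succeeds, but corrupts the fs
    umount /mnt
    mount /dev/mapper/LOOP_VG-LV /mnt                 # now fails: bad superblock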

Userspace tool common name: pvmove

The userspace tool has the following bit modes: 64bit

Userspace rpm: lvm2 in version 2.02.176-4.1ubuntu3

Userspace tool obtained from project website: na

*Additional Instructions for <email address hidden>:
-Attach ltrace and strace of userspace application.

Revision history for this message
In , nkshirsa (nkshirsa-redhat-bugs) wrote :

Description of problem:

lvm should not allow extending an LV with a PV of a different sector size than the existing PVs making up the LV, since the FS on the LV no longer mounts once LVM adds in the new PV and extends the LV.

How reproducible:
Steps to Reproduce:

** Device: sdc (using the device with default sector size of 512)

# blockdev --report /dev/sdc
RO RA SSZ BSZ StartSec Size Device
rw 8192 512 4096 0 1073741824 /dev/sdc

** The LVM volume is created with the default sector size of 512.

# blockdev --report /dev/mapper/testvg-testlv
RO RA SSZ BSZ StartSec Size Device
rw 8192 512 4096 0 1069547520 /dev/mapper/testvg-testlv

** The filesystem will also pick up the 512 sector size.

# mkfs.xfs /dev/mapper/testvg-testlv
meta-data=/dev/mapper/testvg-testlv isize=512 agcount=4, agsize=65280 blks
         = sectsz=512 attr=2, projid32bit=1
         = crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=261120, imaxpct=25
         = sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=855, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

** Now we mount it on /test and check the geometry:

# xfs_info /test
meta-data=/dev/mapper/testvg-testlv isize=512 agcount=4, agsize=65280 blks
         = sectsz=512 attr=2, projid32bit=1
         = crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=261120, imaxpct=25
         = sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=855, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

** Let's extend it with a PV with a sector size of 4096:

#modprobe scsi_debug sector_size=4096 dev_size_mb=512

# fdisk -l /dev/sdd

Disk /dev/sdd: 536 MB, 536870912 bytes, 131072 sectors
Units = sectors of 1 * 4096 = 4096 bytes <==============
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 262144 bytes

# blockdev --report /dev/sdd
RO RA SSZ BSZ StartSec Size Device
rw 8192 4096 4096 0 536870912 /dev/sdd

# vgextend testvg /dev/sdd
  Physical volume "/dev/sdd" successfully created
  Volume group "testvg" successfully extended

# lvextend -l +100%FREE /dev/mapper/testvg-testlv
  Size of logical volume testvg/testlv changed from 1020.00 MiB (255 extents) to 1.49 GiB (382 extents).
  Logical volume testlv successfully resized.

# umount /test

# mount /dev/mapper/testvg-testlv /test
mount: mount /dev/mapper/testvg-testlv on /test failed: Function not implemented <===========

# dmesg | grep -i dm-2

[ 477.517515] XFS (dm-2): Unmounting Filesystem
[ 486.905933] XFS (dm-2): device supports 4096 byte sectors (n...


Revision history for this message
In , teigland (teigland-redhat-bugs) wrote :

Should we just require all PVs in the VG to have the same sector size?

Revision history for this message
In , zkabelac (zkabelac-redhat-bugs) wrote :

Basically that's what we agreed on in a meeting - since we don't know yet how to handle PVs with different sector sizes.

A short-term fix could be to disallow this at creation time.

But there are already users who have such VGs - so lvm2 can't just declare such a VG invalid
and disable access to it...

So I'd probably do something similar to what we did for 'mirrorlog' -
add an lvm.conf option to disable such creation, respected at vgcreate time.

Revision history for this message
bugproxy (bugproxy) wrote : journal with debug output of pvmove operation

Default Comment by Bridge

tags: added: architecture-s39064 bugnameltc-175696 severity-critical targetmilestone-inin1810
Revision history for this message
bugproxy (bugproxy) wrote : Detailed steps to reproduce

Default Comment by Bridge

Changed in ubuntu:
assignee: nobody → Skipper Bug Screeners (skipper-screen-team)
affects: ubuntu → linux (Ubuntu)
Frank Heimes (fheimes)
affects: linux (Ubuntu) → lvm2 (Ubuntu)
Changed in ubuntu-z-systems:
assignee: nobody → Canonical Foundations Team (canonical-foundations)
importance: Undecided → Critical
Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2019-02-21 12:51 EDT-------
There is a minor correction needed to the setup outlined above: the enc-loop mapper device is backed by the second loopback device, and the volume group consists of the two devices listed in [ ]:

      LV
       |
    VG: LOOP_VG [ /dev/loop0, /dev/mapper/enc-loop ]
       |
    PV: /dev/loop0 --> /dev/mapper/enc-loop
                        ( backed by /dev/loop1 )

Note that there are SCSI devices providing 4k block sizes, so check with 'blockdev --getbsz' to make sure you are actually running a 512 --> 4096 byte block size scenario.

Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2019-02-22 05:23 EDT-------
Further investigation revealed that no dm-crypt mapper device is needed at all to reproduce the behaviour, just two block devices with different physical block sizes, e.g.

# blockdev --getpbsz /dev/mapper/mpatha-part1
512
# blockdev --getpbsz /dev/dasdc1
4096

Make sure to first add the SCSI device (the device with the smaller physical block size) to the volume group when running the 'vgcreate' command.
# blockdev --getpbsz /dev/mapper/TEST_VG-LV1
512

Use one SCSI disk partition (multipath devices are recommended but not required) and one DASD partition to recreate the pvmove problem. Run pvs after the move has completed, then unmount and mount the fs again.
fsck.ext4 does not detect any problems on the fs, which is unexpected.
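A sketch of this variant, pieced together from the description above (device
and volume names are those shown in this comment; the LV size is illustrative):

    pvcreate /dev/mapper/mpatha-part1          # 512-byte phys. block size
    vgcreate TEST_VG /dev/mapper/mpatha-part1  # smaller block size device first
    lvcreate -L 500M -n LV1 TEST_VG
    mkfs.ext4 /dev/mapper/TEST_VG-LV1
    mount /dev/mapper/TEST_VG-LV1 /mnt
    pvcreate /dev/dasdc1                       # 4096-byte phys. block size
    vgextend TEST_VG /dev/dasdc1
    pvmove /dev/mapper/mpatha-part1 /dev/dasdc1
    pvs
    umount /mnt
    mount /dev/mapper/TEST_VG-LV1 /mnt         # fails, see syslog below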

Pertaining syslog entries:
Feb 22 11:09:23 system kernel: print_req_error: I/O error, dev dasdc, sector 280770
Feb 22 11:09:23 system kernel: Buffer I/O error on dev dm-3, logical block 139265, lost sync page write
Feb 22 11:09:23 system kernel: JBD2: Error -5 detected when updating journal superblock for dm-3-8.
Feb 22 11:09:23 system kernel: Aborting journal on device dm-3-8.
Feb 22 11:09:23 system kernel: print_req_error: I/O error, dev dasdc, sector 280770
Feb 22 11:09:23 system kernel: Buffer I/O error on dev dm-3, logical block 139265, lost sync page write
Feb 22 11:09:23 system kernel: JBD2: Error -5 detected when updating journal superblock for dm-3-8.
Feb 22 11:09:23 system kernel: print_req_error: I/O error, dev dasdc, sector 2242
Feb 22 11:09:23 system kernel: Buffer I/O error on dev dm-3, logical block 1, lost sync page write
Feb 22 11:09:23 system kernel: EXT4-fs (dm-3): I/O error while writing superblock
Feb 22 11:09:23 system kernel: EXT4-fs error (device dm-3): ext4_put_super:938: Couldn't clean up the journal
Feb 22 11:09:23 system kernel: EXT4-fs (dm-3): Remounting filesystem read-only
Feb 22 11:09:23 system kernel: print_req_error: I/O error, dev dasdc, sector 2242
Feb 22 11:09:23 system kernel: Buffer I/O error on dev dm-3, logical block 1, lost sync page write
Feb 22 11:09:23 system kernel: EXT4-fs (dm-3): I/O error while writing superblock
Feb 22 11:09:32 system kernel: EXT4-fs (dm-3): bad block size 1024

The very last syslog line repeats upon 'mount /dev/mapper/TEST_VG-LV1 /mnt' attempts; the 1024 block size corresponds to
# blockdev --getbsz /dev/mapper/TEST_VG-LV1
1024

After the pvmove, the physical block size has also changed:
# blockdev --getpbsz /dev/mapper/TEST_VG-LV1
4096

Revision history for this message
In , teigland (teigland-redhat-bugs) wrote :
Revision history for this message
In , nsoffer (nsoffer-redhat-bugs) wrote :

Interesting, I asked about this here a few weeks ago:
https://www.redhat.com/archives/linux-lvm/2019-February/msg00002.html

Based on the info in this bug, it looks like RHV should care only about
the logical block size when extending or creating a VG.

David, Zdenek, what do you think?

Revision history for this message
In , teigland (teigland-redhat-bugs) wrote :

Here's an initial, lightly tested solution to the VG-consistency part. It does not address checking that a given LV is used with a consistent sector size. Perhaps if a user overrides the VG consistency check, it should be their responsibility to ensure LVs are consistent.

https://sourceware.org/git/?p=lvm2.git;a=commit;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a

vgcreate/vgextend: check for inconsistent logical block sizes

When creating or extending a VG, check if the PVs have
inconsisent logical block sizes (value from BLKSSZGET ioctl).
If so, return an error. The error can be changed to a warning,
allowing the command to proceed with the mixed values, by
setting lvm.conf allow_mixed_logical_block_sizes=1.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

I see that this bug was created with Ubuntu 18.10 (judging by the tags).
I am trying to reproduce the issue on Ubuntu 19.04 (current development release).

I am failing to produce a mixed blocksize cryptsetup device:
$ sudo cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1

is failing for me with:
"Device size is not aligned to the requested sector size."

And on this machine, I do not have access to native 4k and non-4k drives at the same time. Let me get a better machine to debug this further.

Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2019-03-11 08:22 EDT-------
The message "Device size is not aligned to the requested sector size." is because the size of your loopback device is not a multiple of 4096 bytes. With --sector-size 4096, it's size must be a multiple of 4096 bytes, otherwise you would a half sector at the end of the device.

Besides the setup with cryptsetup and a 4K sector size, you can also set up your loopback devices with different sector sizes (and omit dm-crypt/cryptsetup entirely).
By default loopback devices use a physical block size of 512; however, you can create them with -b 4096 to get a loopback device with a 4K physical block size. With that you should also be able to reproduce this.
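For illustration, such a pair of loopback devices might be set up like this (a
sketch; the file names are examples, and the loop device names assume none are
already in use):

    dd if=/dev/zero of=/tmp/disk512.img bs=1M count=500
    dd if=/dev/zero of=/tmp/disk4k.img bs=1M count=500
    losetup -fP /tmp/disk512.img            # default 512-byte sectors
    losetup -fP -b 4096 /tmp/disk4k.img     # 4096-byte sectors
    blockdev --getss --getpbsz /dev/loop0   # expect 512 / 512
    blockdev --getss --getpbsz /dev/loop1   # expect 4096 / 4096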

BTW: Please also see the thread about this topic on the LVM Mailing list that I have started in parallel, and especially the following post from David Teigland:
https://www.redhat.com/archives/linux-lvm/2019-March/msg00018.html

There is also already a draft patch from David Teigland for this in a private branch of the LVM2 git repository: https://sourceware.org/git/?p=lvm2.git;a=commit;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a
I hope that this fix will make it into the master branch at some point in time.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Ok, reproduced this on x86_64 with raw files whose sizes are multiples of 4k.

This is not an architecture specific issue.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

This is a well-known upstream issue/bug.
It is not specific to s390x, to Ubuntu 18.10, or to any other Ubuntu release.
There is no data loss -> one can execute the pvmove operation in reverse (or, I guess, onto any 512 sector size PV) to mount the filesystems again.

Thus this is not critical at all.

Also, I am failing to understand what the expectation for Canonical is w.r.t. this bug report.

If you want support: as a workaround one can force 4k sizes with vgcreate and ext4; then moving volumes between 512 and 4k physical volumes appears to work seamlessly:

$ sudo vgcreate --physicalextentsize 4k newtestvg /dev/...
$ sudo mkfs.ext4 -b 4096 /dev/mapper/...

For a more general solution, create stand-alone new VGs/LVs/FSs and migrate the data over using higher-level tools - e.g. dump/restore, rsync, etc.
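For instance, such a migration might look like this (a sketch; device paths and
mount points are illustrative):

$ sudo mount -o ro /dev/mapper/oldvg-lv /mnt/old   # old 512-byte-sector LV
$ sudo mount /dev/mapper/newvg-lv /mnt/new         # new 4k-sector LV
$ sudo rsync -aHAX /mnt/old/ /mnt/new/             # preserve hard links, ACLs, xattrs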

But note that Launchpad should not be used for support requests. Please use your UA account (Salesforce) for support requests for your production systems.

This is discussed upstream, where they are trying to introduce a soft check to prevent moving data across: https://bugzilla.redhat.com/show_bug.cgi?id=1669751 But it's not a real solution, just a weak safety check, as one can still force-create an ext4 fs with either 512 or 4k sizes and move the volume to the "wrong" size. Ideally, moving to/from mixed sector sizes would just work(tm), but that's unlikely to happen upstream, and thus is wont-fix downstream too.

Was there anything in particular that you were expecting for us to change?

We could change the cloud-images (if they don't already), the installers (i.e. d-i / subiquity) or the utils (i.e. vgcreate, mkfs.ext4) to default to 4k minimum sector sizes. But at the moment these utils try to guess the sector sizes based on heuristics at creation time, and obviously get it "wrong" if the underlying device is swapped away from under their feet after creation. Thus this is expected.

References:
The upstream bug report is https://bugzilla.redhat.com/show_bug.cgi?id=1669751
The upstream overridable weak safety-net check is https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a
That will make it into Ubuntu eventually, once it is released in a stable lvm2 release and integrated into Ubuntu.

Please remove severity critical
Please remove target ubuntu 18.10
Please provide explanation as to why this issue was filed

Changed in linux (Ubuntu):
status: New → Invalid
Changed in ubuntu-z-systems:
status: New → Incomplete
Changed in lvm2 (Ubuntu):
status: New → Incomplete
Revision history for this message
Dimitri John Ledkov (xnox) wrote :

/etc/mke2fs.conf:
[defaults]
 blocksize = 4096
[fs_types]
 small = {
  blocksize = 1024
  inode_size = 128
  inode_ratio = 4096
 }

We default to 4k, unless one is formatting small filesystems, which, per the manpage:
If the filesystem size is greater than or equal to 3 but less than 512 megabytes, mke2fs(8) will use the filesystem type small.

And in your tests you do appear to use 500 MiB images.

I wonder if we should bump even small ext4 filesystems to use 4k sector sizes.
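If we did, the change would presumably look something like this in mke2fs.conf
(a sketch of the idea, not the exact packaged diff):

[fs_types]
 small = {
  blocksize = 4096
  inode_size = 128
  inode_ratio = 4096
 }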

Changed in lvm2:
importance: Unknown → Medium
status: Unknown → Confirmed
Revision history for this message
Frank Heimes (fheimes) wrote :

Decreasing importance from critical to medium, because the bug is known to the community, is already discussed in RH Bug 1669751 and in https://www.redhat.com/archives/linux-lvm/2019-February/msg00018.html / https://www.redhat.com/archives/linux-lvm/2019-March/msg00000.html, and is neither platform specific nor specific to a certain Ubuntu release.
On top of that, there are simple ways to avoid this situation, like explicitly setting/forcing the sector size to 4096 bytes, or using a bigger image size (>512 MB, which is not uncommon) so that the block size default changes to 4k anyway.
A patch was already suggested upstream:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a
Once that patch is accepted upstream and picked up in a new lvm2 version, it will eventually land in Ubuntu, too.

Changed in ubuntu-z-systems:
importance: Critical → Medium
Changed in lvm2 (Ubuntu):
status: Incomplete → Invalid
Changed in e2fsprogs (Ubuntu):
status: New → Fix Committed
Revision history for this message
Frank Heimes (fheimes) wrote :

With Eoan we now always default to 4k, hence Fix Released in e2fsprogs and the project.

Revision history for this message
Frank Heimes (fheimes) wrote :

The modified version, e2fsprogs 1.45.1-1ubuntu1, is still in eoan-proposed.
Once it leaves proposed, this ticket will be changed to Fix Released (e2fsprogs and project).

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package e2fsprogs - 1.45.1-1ubuntu1

---------------
e2fsprogs (1.45.1-1ubuntu1) eoan; urgency=medium

  * Use 4k blocksize in all ext4 mke2fs.conf such that lvm migration
    between non-4k PVs and 4k PVs works irrespective of the volume
    size. LP: #1817097

 -- Dimitri John Ledkov <email address hidden> Wed, 15 May 2019 16:15:22 +0200

Changed in e2fsprogs (Ubuntu):
status: Fix Committed → Fix Released
Revision history for this message
Frank Heimes (fheimes) wrote :

Updated package landed in release pocket - changing project entry to Fix Released.

Changed in ubuntu-z-systems:
status: Incomplete → Fix Released
Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2019-05-29 03:51 EDT-------
IBM bugzilla status -> closed, Fix Released by Canonical

Revision history for this message
In , teigland (teigland-redhat-bugs) wrote :

pushed to master branch:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=0404539edb25e4a9d3456bb3e6b402aa2767af6b

I can push to stable if this bug gets a rhel7 ack.

commit 0404539edb25e4a9d3456bb3e6b402aa2767af6b
Author: David Teigland <email address hidden>
Date: Thu Aug 1 10:06:47 2019 -0500

    vgcreate/vgextend: restrict PVs with mixed block sizes

    Avoid having PVs with different logical block sizes in the same VG.
    This prevents LVs from having mixed block sizes, which can produce
    file system errors.

    The new config setting devices/allow_mixed_block_sizes (default 0)
    can be changed to 1 to return to the unrestricted mode.

[root@null-01 ~]# blockdev --getss --getpbsz /dev/sdh
4096
2097152
[root@null-01 ~]# blockdev --getss --getpbsz /dev/loop0
512
512

[root@null-01 ~]# vgcreate mix /dev/sdh /dev/loop0
  Devices have inconsistent logical block sizes (4096 and 512).
  See lvm.conf allow_mixed_block_sizes.

[root@null-01 ~]# vgcreate --config devices/allow_mixed_block_sizes=1 mix /dev/loop0 /dev/sdh
  Volume group "mix" successfully created with system ID one

[root@null-01 ~]# vgcreate mix /dev/sdh
  Volume group "mix" successfully created with system ID one

[root@null-01 ~]# vgextend mix /dev/loop0
  Devices have inconsistent logical block sizes (4096 and 512).

[root@null-01 ~]# vgextend --config devices/allow_mixed_block_sizes=1 mix /dev/loop0
  Volume group "mix" successfully extended

Revision history for this message
In , mcsontos (mcsontos-redhat-bugs) wrote :

IMO this should go into 7.8 - the likelihood of this happening will only increase.

Revision history for this message
In , rbednar (rbednar-redhat-bugs) wrote :

Verified.

lvm2-2.02.186-3.el7.x86_64

1) conf option present

# grep allow_mixed_block_sizes /etc/lvm/lvm.conf.rpmnew
 # Configuration option devices/allow_mixed_block_sizes.
 allow_mixed_block_sizes = 1

2) mixed block sizes allowed by default

# blockdev --report /dev/sd{a,k}
RO RA SSZ BSZ StartSec Size Device
rw 8192 512 4096 0 32212254720 /dev/sda
rw 8192 4096 4096 0 536870912 /dev/sdk

# vgcreate vg /dev/sda /dev/sdk
  Physical volume "/dev/sda" successfully created.
  Physical volume "/dev/sdk" successfully created.
  Volume group "vg" successfully created

Revision history for this message
In , errata-xmlrpc (errata-xmlrpc-redhat-bugs) wrote :

Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129

Changed in lvm2:
status: Confirmed → Fix Released