[SRU] Use the hostname as the node name instead of hardcoded 'node1'

Bug #1874719 reported by Liam Young
This bug affects 7 people
Affects                      Status        Importance  Assigned to
OpenStack HA Cluster Charm   Fix Released  High        Unassigned
corosync (Ubuntu)            Fix Released  Undecided   Lucas Kanashiro
  Focal                      Won't Fix     Medium      Lucas Kanashiro
  Groovy                     Won't Fix     Undecided   Unassigned
  Hirsute                    Won't Fix     Undecided   Unassigned
pacemaker (Ubuntu)           Invalid       Undecided   Unassigned
  Focal                      Invalid       Undecided   Unassigned
  Groovy                     Invalid       Undecided   Unassigned
  Hirsute                    Invalid       Undecided   Unassigned

Bug Description

[Impact]

Users upgrading from Bionic to Focal get a different default node name. The former uses the hostname (the output of "uname -n"), while the latter uses the hardcoded name "node1". This issue has already been fixed from Impish onward, so Focal is now the only release using the hardcoded "node1" node name.
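
For reference, the nodelist stanza shipped in Focal's default /etc/corosync/corosync.conf looks roughly like this (a sketch based on the Debian change discussed below; formatting may differ slightly):

nodelist {
  node {
    name: node1
    nodeid: 1
    ring0_addr: 127.0.0.1
  }
}

On Bionic there is no such hardcoded name, so the node name defaults to the output of "uname -n".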

[Test Plan]

$ lxc launch ubuntu-daily:focal corosync-sru
$ lxc shell corosync-sru
# apt update && apt upgrade -y
# apt install -y corosync pacemaker crmsh
# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: corosync-sru (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Apr 5 20:21:32 2022
  * Last change: Tue Apr 5 20:20:33 2022 by hacluster via crmd on corosync-sru
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ corosync-sru ]

Full List of Resources:
  * No resources

In the 'Node List' section the node should be called 'corosync-sru' and not 'node1'.

[Where problems could occur]

This might be a problem if Focal users are already relying on the 'node1' node name; other than that I do not foresee any issues.

[Original Message]

Testing of masakari on focal zaza tests failed because the test checks that all pacemaker nodes are online. This check failed due to the appearance of a new node called 'node1' which was marked as offline. I don't know where that node came from or what it is supposed to represent, but it seems like an unwanted change in behaviour.


Changed in pacemaker (Ubuntu):
assignee: nobody → Rafael David Tinoco (rafaeldtinoco)
Revision history for this message
Liam Young (gnuoy) wrote :

Having looked into it further, it seems to be the name of the node that has changed.

juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu

juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes crmsh pacemaker"

$ juju run --unit focal-ubuntu/0 "sudo crm status"
Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Apr 24 15:03:52 2020
  * Last change: Fri Apr 24 15:02:20 2020 by hacluster via crmd on node1
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 ]

Full List of Resources:
  * No resources

$ juju run --unit bionic-ubuntu/0 "sudo crm status"
Stack: corosync
Current DC: juju-27f7a7-hatest2-0 (version 1.1.18-2b07d5c5a9) - partition WITHOUT quorum
Last updated: Fri Apr 24 15:04:05 2020
Last change: Fri Apr 24 15:00:43 2020 by hacluster via crmd on juju-27f7a7-hatest2-0

1 node configured
0 resources configured

Online: [ juju-27f7a7-hatest2-0 ]

No resources

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Here you will find the autopkgtests for pacemaker:

http://autopkgtest.ubuntu.com/packages/p/pacemaker/focal/amd64

the installation ends with a corosync node online:

Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Apr 17 22:55:34 2020
  * Last change: Fri Apr 17 22:55:22 2020 by hacluster via crmd on node1
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 ]

Active Resources:
  * No active resources

pacemaker PASS
pacemaker PASS

So, for some reason, this localhost corosync installation made the service unavailable in your case. I wonder if you have more logs so we can check why it did not come online.

Changed in pacemaker (Ubuntu):
status: New → Invalid
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

I have seen this issue while working on https://review.opendev.org/741592 (adding an action for cleaning up the corosync ring). When adding a unit, sometimes pacemaker names it node1, but then renames it to the correct name about 2.5 minutes later. However, a function wait_for_pcmk() in the charm timed out after only 2 minutes waiting for the node to show up with the proper name. This then led to the charm taking further actions too early.

I believe the review I've just mentioned will fix this.

Changed in charm-hacluster:
status: New → In Progress
importance: Undecided → Medium
assignee: nobody → Aurelien Lourot (aurelien-lourot)
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote : Re: Focal/Groovy deploy creates a 'node1' node

My analysis was wrong and increasing the timeout [0] didn't help. The charm recovers eventually, and running the new `update-ring` action [0] will be a valid workaround for removing `node1` from the corosync ring later on. I'll add an inline comment in the code and work around it in the functional tests [1].

So this isn't fixed, but there is a workaround. This seems to be happening a lot on zosci on groovy-victoria at the moment.

[0] https://review.opendev.org/741592
[1] https://github.com/openstack-charmers/zaza-openstack-tests/pull/369

summary: - Focal deploy creates a 'node1' node
+ Focal/Groovy deploy creates a 'node1' node
Changed in charm-hacluster:
status: In Progress → Triaged
assignee: Aurelien Lourot (aurelien-lourot) → nobody
tags: added: cdo-qa foundations-engine
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

Raising priority to "High" because this may lead to corosync/pacemaker thinking there are 4 nodes instead of 3 on a fresh deployment, and then we think we have an HA deployment when in fact we are not HA. Again, there is a workaround: running the `update-ring` action after deployment.
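
For reference, running that action from juju looks roughly like this (a sketch; the unit name is an example and the confirmation parameter is an assumption — check the charm's actions.yaml for the exact arguments):

$ juju run-action --wait hacluster/leader update-ring i-really-mean-it=true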

Changed in charm-hacluster:
importance: Medium → High
Felipe Reyes (freyes)
tags: added: seg
James Page (james-page)
Changed in pacemaker (Ubuntu):
status: Invalid → New
Revision history for this message
James Page (james-page) wrote :

Setting the pacemaker distro task back to new - it seems very odd that a system designed to manage a cluster of servers would install on every node with a non-unique node id, which is a change in behaviour from older versions of the same software.

Can we not sanity fix this to use the hostname, which was the old behaviour and makes way more sense?

Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

This is a change in the default corosync config file (/etc/corosync/corosync.conf); it impacts pacemaker clusters but is not directly related to pacemaker. The change was made in Debian here:

https://salsa.debian.org/ha-team/corosync/-/commit/2f8aa88

Between corosync version 2.4.3 (present in Bionic) and version 3.0.3 (present in Focal) the 'nodelist' directive became required (as you can see in the commit message above). The decision was to configure a single node called 'node1' with 127.0.0.1 as the link IP address. This is not done in Bionic.

To fix this and keep the old behavior (use the hostname instead of a hardcoded 'node1') we simply need to comment out the "name: node1" line in /etc/corosync/corosync.conf. I did the following in a Focal VM to test it:

$ sudo apt-get install -y crmsh pacemaker
$ sudo crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Thu Jun 10 11:23:33 2021
  * Last change: Thu Jun 10 11:05:11 2021 by hacluster via crmd on node1
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 ]

Full List of Resources:
  * No resources

$ sudo sed -e '/name: node1/s/^/#/' -i /etc/corosync/corosync.conf
$ sudo systemctl restart corosync
$ sudo systemctl restart pacemaker
$ sudo crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: focal (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Thu Jun 10 11:51:08 2021
  * Last change: Thu Jun 10 11:36:45 2021 by hacluster via crmd on focal
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ focal ]

Full List of Resources:
  * No resources

In this case my hostname is 'focal', so I believe this will keep the old behavior. Do you believe this is worth an SRU? Would you like me to work on this?
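
In other words, after the sed command the nodelist stanza in /etc/corosync/corosync.conf looks roughly like this (a sketch; only the name line changes):

nodelist {
  node {
    # name: node1
    nodeid: 1
    ring0_addr: 127.0.0.1
  }
}

With the name commented out, corosync falls back to the output of "uname -n" for the node name, which is why the node shows up as 'focal' above.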

Changed in pacemaker (Ubuntu):
status: New → Invalid
Changed in corosync (Ubuntu):
status: New → Triaged
Revision history for this message
James Page (james-page) wrote :

Hi Lucas

Thanks for the commentary and feedback on why this change was introduced.

I feel this is worth looking at via the SRU process and yes please if you have cycles that would be great!

Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

In case anyone here wants to try the proposed solution, I uploaded corosync version 3.0.3-2ubuntu2.2~ppa1 to my PPA and it indeed fixes the issue: the hostname is now used as the node name by default:

https://launchpad.net/~lucaskanashiro/+archive/ubuntu/ha-stack

I'll be working to release this as an SRU.

Changed in pacemaker (Ubuntu Focal):
status: New → Invalid
Changed in pacemaker (Ubuntu Hirsute):
status: New → Invalid
Changed in pacemaker (Ubuntu Groovy):
status: New → Invalid
Changed in corosync (Ubuntu Focal):
status: New → Triaged
Changed in corosync (Ubuntu Groovy):
status: New → Triaged
Changed in corosync (Ubuntu Hirsute):
status: New → Triaged
Changed in pacemaker (Ubuntu):
assignee: Rafael David Tinoco (rafaeldtinoco) → nobody
Changed in corosync (Ubuntu):
assignee: nobody → Lucas Kanashiro (lucaskanashiro)
Changed in corosync (Ubuntu Focal):
assignee: nobody → Lucas Kanashiro (lucaskanashiro)
Changed in corosync (Ubuntu Groovy):
assignee: nobody → Lucas Kanashiro (lucaskanashiro)
Changed in corosync (Ubuntu Hirsute):
assignee: nobody → Lucas Kanashiro (lucaskanashiro)
summary: - Focal/Groovy deploy creates a 'node1' node
+ [SRU] Use the hostname as the node name instead of hardcoded 'node1'
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package corosync - 3.1.0-2ubuntu4

---------------
corosync (3.1.0-2ubuntu4) impish; urgency=medium

  * d/p/Make-the-example-config-valid.patch: comment out the node name in
    config file (LP: #1874719). With this, we will keep the same behavior as we
    have in Bionic which is using the output of "uname -n" as the node name.
  * d/t/quorumtool: search for localhost instead of node1.

 -- Lucas Kanashiro <email address hidden> Thu, 17 Jun 2021 10:38:07 -0300

Changed in corosync (Ubuntu):
status: Triaged → Fix Released
Revision history for this message
Vern Hart (vern) wrote :

Just to clarify, this is fixed in impish but is still a problem on older releases. Is that correct?

And since the problem is fixed/fixable in corosync, this bug is probably invalid for charm-hacluster, right?

Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

You are right, I have not backported the fix to the older LTS releases yet; it is still on my todo list. I was trying to convince the Debian maintainer to accept the change, but he refused. I was not willing to carry this as a delta and force us to do merges from now on; I'd prefer to keep it as a sync.

Revision history for this message
Brian Murray (brian-murray) wrote :

The Hirsute Hippo has reached End of Life, so this bug will not be fixed for that release.

Changed in corosync (Ubuntu Hirsute):
status: Triaged → Won't Fix
Revision history for this message
Ante Karamatić (ivoks) wrote :

Both the DD's and Lucas' PoVs have merit, imho. There's no right and wrong here. I would even prefer that we reduce the delta with Debian.

However, hacluster charm should be handling this situation. There's nothing special here - corosync has specific behaviour out of the box. Charms should handle it.

Revision history for this message
Vern Hart (vern) wrote :

In the meantime, we keep having to add a post-deployment cleanup step to our deployment guides:

  Delete node1 crm resources (LP#1874719):
  $ juju run -m openstack --all -- sudo crm node delete node1

Which is overkill but works.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-hacluster (master)
Changed in charm-hacluster:
status: Triaged → In Progress
Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

Since this is already in Jammy, I am adding the backport of this fix to Focal to the Server team backlog and it should be tackled "soon". Also setting the Groovy task to Won't Fix because its standard support has ended.

tags: added: server-todo
Changed in corosync (Ubuntu Groovy):
status: Triaged → Won't Fix
Changed in corosync (Ubuntu Hirsute):
assignee: Lucas Kanashiro (lucaskanashiro) → nobody
Changed in corosync (Ubuntu Groovy):
assignee: Lucas Kanashiro (lucaskanashiro) → nobody
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-hacluster (master)

Reviewed: https://review.opendev.org/c/openstack/charm-hacluster/+/834034
Committed: https://opendev.org/openstack/charm-hacluster/commit/d1191dbcabdfd8684a86825f06c6ede266ba93ba
Submitter: "Zuul (22348)"
Branch: master

commit d1191dbcabdfd8684a86825f06c6ede266ba93ba
Author: Billy Olsen <email address hidden>
Date: Wed Mar 16 06:50:28 2022 -0700

    Render corosync.conf file prior to pkg install

    Starting in focal, the ubuntu version of corosync package synced in from
    debian includes node1 as the default name for the local node with a nodeid
    of 1. This causes the cluster to have knowledge of this extra node1 node,
    which affects quorum, etc. Installing the charm's corosync.conf file
    before package installation prevents this conditioning from happening.

    Additionally this change removes some Xenial bits in the charm and always
    includes a nodelist in corosync.conf as it is compulsory in focal and
    newer. It is optional in the bionic packages, so we'll always just
    render the nodelist.

    Change-Id: I06b9c23eb57274f0c99a3a05979c0cabf87c8118
    Closes-Bug: #1874719

Changed in charm-hacluster:
status: In Progress → Fix Committed
Changed in corosync (Ubuntu Focal):
importance: Undecided → Medium
description: updated
Changed in corosync (Ubuntu Focal):
status: Triaged → In Progress
description: updated
Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

Backporting the default config change to Focal breaks the pacemaker DEP-8 tests. The fix for this can be tracked here:

https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1968039

I created a separate bug to block this update in focal-proposed and avoid users updating the package because of a DEP-8 test fix.

Changed in charm-hacluster:
milestone: none → 22.04
Revision history for this message
Robie Basak (racb) wrote :

James said:

> Setting the pacemaker distro task back to new - it seems very odd that a system designed to manage a cluster of servers would install on every node with a non-unique node id, which is a change in behaviour from older versions of the same software.

In Jammy, I think this is still the case? corosync.conf still ships with a default nodeid of 1. It's just the name that's no longer supplied.

My understanding of the normal use of corosync in Ubuntu is that the entire file is generally always replaced after the package is installed. I believe the hacluster charm does this too.

So am I right that the issue is that corosync started briefly before being configured by the charm, and left state behind? In that case, I think the charm was possibly buggy in two ways:

1) It should use policy-rc.d to avoid corosync daemon startup before corosync.conf is written out, or maybe write it out in advance. Looks like Billy's commit fixed this in the charm already. FWIW, I find it surprising that charms don't generally always override with policy-rc.d and start services manually.

2) After rewriting corosync configuration, it should clear out corosync state files entirely before restarting the daemon. This is no longer necessary due to the other fix.

Both of these apply to anything configuring corosync on Ubuntu, not just the charm. So it's not clear to me that there's a bug in the corosync packaging in Ubuntu in Focal at all. We merely ship a default cluster of size 1 that isn't very useful and needs to be replaced correctly in order to be useful.
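
As an illustration of point 1 above, a minimal policy-rc.d override of the kind deployment tooling could put in place before running apt might look like this (a sketch; exit code 101 is the standard "action forbidden" return of the invoke-rc.d policy interface):

# cat > /usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
# Refuse all service starts/stops requested via invoke-rc.d while packages install.
exit 101
EOF
# chmod +x /usr/sbin/policy-rc.d
# apt install -y corosync pacemaker
# ... write out the real corosync.conf here ...
# rm /usr/sbin/policy-rc.d
# systemctl restart corosync pacemaker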

From an SRU perspective, I have further concerns for existing users.

1) It's a conffile change. Since corosync.conf is almost always modified by users, they are going to be prompted on upgrade if interactive. This is a little alarming and not useful. Is there any actual case where existing users would realistically be using the default configuration file? Note also that since the issue is with state, changing the configuration file for existing users wouldn't avoid the issue for them anyway.

2) Changing the node name on an existing cluster seems dangerous to me.

For the SRU, what problem are we actually solving here then? The charm is fixed and no longer impacted. Are we trying to avoid having dirty state when users follow the broken installation flow of starting corosync with the default configuration and then changing it? In that case, it seems to me that the proposed fix only happens to work by chance. The real fix is to make sure that the state is properly cleaned. I'm not sure how to do that in packaging except to try to guide the user into somehow not following the broken installation flow.

Therefore I'm soft-declining this SRU for now, but further discussion welcome if you disagree and I'll look again.

Revision history for this message
Robie Basak (racb) wrote :

> I'm not sure how to do that in packaging except to try to guide the user into somehow not following the broken installation flow.

Maybe, *if* it's never useful to use corosync with the default shipped corosync.conf, we should just not ship it, and add a ConditionPathExists=/etc/corosync/corosync.conf to corosync.service?
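
For illustration, that condition could be tried out locally as a systemd drop-in before being considered for the packaged unit (a sketch of the idea only):

# mkdir -p /etc/systemd/system/corosync.service.d
# cat > /etc/systemd/system/corosync.service.d/require-config.conf <<'EOF'
[Unit]
# Skip starting corosync until a configuration file actually exists.
ConditionPathExists=/etc/corosync/corosync.conf
EOF
# systemctl daemon-reload

Note the condition only has teeth if the default corosync.conf is no longer shipped, as suggested above.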

Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

Thanks for taking a look at this proposed change, Robie. The behavior we are trying to backport to Focal is already present in all supported releases; Focal is the only one that differs at the moment.

What you said is indeed true, the default configuration file is not production-ready for most use cases, I believe. I tried to backport it to keep the same behavior across all the releases (the change is just commenting out a single line in the config file), but TBH I also do not think it will benefit many users already running Focal. It might benefit some users upgrading from Bionic to Focal, since the behavior would be the same. It'd be great to see James' opinion about this.

And after applying this change in Jammy I also needed to add a delta to some other packages to fix DEP-8 tests which were already expecting the hardcoded 'node1' default node name. The Debian maintainer is not willing to accept this change either, so the maintenance cost of keeping this change is getting higher. Maybe we should reconsider how useful it is to keep this change as an Ubuntu delta in future cycles.

Changed in charm-hacluster:
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-hacluster (stable/focal)

Fix proposed to branch: stable/focal
Review: https://review.opendev.org/c/openstack/charm-hacluster/+/841587

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-hacluster (stable/focal)

Reviewed: https://review.opendev.org/c/openstack/charm-hacluster/+/841587
Committed: https://opendev.org/openstack/charm-hacluster/commit/567e54d87c55f945f222a9ebbe2e9bfc1984ce5e
Submitter: "Zuul (22348)"
Branch: stable/focal

commit 567e54d87c55f945f222a9ebbe2e9bfc1984ce5e
Author: Billy Olsen <email address hidden>
Date: Wed Mar 16 06:50:28 2022 -0700

    Render corosync.conf file prior to pkg install

    Starting in focal, the ubuntu version of corosync package synced in from
    debian includes node1 as the default name for the local node with a nodeid
    of 1. This causes the cluster to have knowledge of this extra node1 node,
    which affects quorum, etc. Installing the charm's corosync.conf file
    before package installation prevents this conditioning from happening.

    Additionally this change removes some Xenial bits in the charm and always
    includes a nodelist in corosync.conf as it is compulsory in focal and
    newer. It is optional in the bionic packages, so we'll always just
    render the nodelist.

    Change-Id: I06b9c23eb57274f0c99a3a05979c0cabf87c8118
    Closes-Bug: #1874719
    (cherry picked from commit d1191dbcabdfd8684a86825f06c6ede266ba93ba)

tags: added: in-stable-focal
Revision history for this message
Andreas Hasenack (ahasenack) wrote :

> However, hacluster charm should be handling this situation. There's nothing special here -
> corosync has specific behaviour out of the box. Charms should handle it.

I'm trying to understand the sequence of steps that led to this situation on Focal. From what I understand:

- install corosync
- you get a cluster named "debian", with a single node named "node1" (which does not match the hostname), in each unit. If you deployed 3 principals and 3 hacluster subordinates, each one will still be its own isolated island.
- time passes
- some relation gets added, and the charm reacts. It renames each node to the hostname (juju-xxxx), and even the node id is changed, I believe. corosync.conf and other files are propagated to all nodes, and the charm tries to form a cluster. I don't know what it does to the cluster name. Services get restarted
- at that point, some nodes (all?) still have "node1" stored somewhere, which was their own old name before

So is this a scenario of renaming a node while it's part of a cluster, even when it's a single-node cluster from the default package installation? I would expect that default single-node cluster to be destroyed when the charm takes over. Maybe it's hard to distinguish that scenario from a real-multi-node-cluster-but-degraded one.

In general, I prefer the idea of using the hostname as the node name by default. It's what at least has a chance of being unique right after install, whereas "node1" definitely does not. It sounds like less surprising behavior. Now, for an SRU to change this back to the hostname, the pro and con balance isn't clear. Fresh installs would be better after the SRU, but existing ones would get a dpkg conf prompt, and risk getting the wrong answer and ending up with a broken cluster. All in all, everyone deploying a more-than-one-node cluster will *definitely* change the conf file anyway for other reasons (or will they?), which also makes the "node1" argument kind of moot.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

I did some tests to see what the cluster behavior is when changing the node name and/or id. It all boils down to the fact that, whatever is being changed, one has to be aware that the change is being done to a live, real cluster, even though it's a simple one-node cluster. That's what you get right after the package is installed: a single-node cluster:
- node name is either "node1" or `uname -n`, depending on the Ubuntu release. In the case of Focal, the topic of this bug, it's currently "node1"
- node id is 1
- ring0_addr is localhost

The charm is doing 3 changes compared to the default config file:
- node name is back to being localhost
- node id is 1000 + application unit id (a juju thing: for example, postgresql/0 is unit 0 of application postgresql).
- ring0_addr gets the real IP of the application unit, instead of 127.0.0.1

When changing the nodeid *AND* the node name, this is essentially creating a new node in the cluster. The old "node1" name and ID will remain around, but offline, because no live host responds as that node anymore.

If you change just one of the two (node name or id), then the cluster seems to be able to coalesce them together again, and you get a plain rename. I haven't tested this exhaustively, but it seems to be the case: inspecting the current cib.raw XML file on each node and diffing it against a previous one shows the rename.

Let's test a user story, showing how one could deploy 3 nodes manually from these focal packages.

After installing pacemaker and corosync on all 3 nodes (let's call them f1, f2 and f3), we get:
- f1: node id = 1, node name = node1, cluster name = debian
- f2: node id = 1, node name = node1, cluster name = debian
- f3: node id = 1, node name = node1, cluster name = debian

All with identical config. These are essentially 3 isolated clusters called "debian", each with one node called node1.

The following set of changes will work and not show a phantom "node1" node at the end:
- on f1, adjust corosync.conf with this node list:
nodelist {
  node {
    # name: node1
    nodeid: 1
    ring0_addr: f1 # (or f1's ip)
  }
  node {
    nodeid: 2
    ring0_addr: f2 # (or f2's ip)
  }
  node {
    nodeid: 3
    ring0_addr: f3 # (or f3's ip)
  }
}

Then scp this file to the other nodes, and restart corosync and pacemaker there.
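
Concretely, that propagation step could look something like this (a sketch; it assumes root SSH access between f1, f2 and f3):

$ scp /etc/corosync/corosync.conf root@f2:/etc/corosync/corosync.conf
$ scp /etc/corosync/corosync.conf root@f3:/etc/corosync/corosync.conf
$ for h in f1 f2 f3; do ssh root@$h 'systemctl restart corosync pacemaker'; done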

We kept the nodeid on f1 as 1, just got rid of its name. That renames that node to `uname -n`, because the id was kept at 1.
The other nodes also got a new name, but their ids changed. And crucially, node id 1 still exists in the cluster (it's f1), so it all works out.

If you were to also change the node id range together with the name, like the charm does, then it's an entirely new node, and you will have to get rid of node1 with a crm or pcs command, or just "crm_node --remove node1".

All in all, it's best to either start with the correct configuration (which the charm does nowadays), or clear everything beforehand (with pcs cluster destroy, perhaps). "pcs cluster destroy" is quite comprehensive; it does:
- rm -f /etc/corosync/{corosync.conf,authkey} /etc/pacemaker/authkey /var/lib/pcsd/disaster-recovery
- removes many files from /var/lib/pacemaker (cib, pengine/pe*bz2, hostcache, cts, othe...


tags: removed: server-todo
Revision history for this message
Andreas Hasenack (ahasenack) wrote :

It's unfortunate that corosync in Focal behaves differently from the other Ubuntu releases regarding the default node name. We think that changing it back via an SRU is not worth it after all this time since the release:
- it's a change in behavior in an LTS release in a default install
- it would incur annoying dpkg conffile prompts for all users who have clusters installed and configured
- the hacluster charm, which was impacted by this, was changed to accommodate the current behavior of corosync (but would still work if we changed it again)

Therefore I'm marking the Focal task as Won't Fix.

Changed in corosync (Ubuntu Focal):
status: In Progress → Won't Fix