Andres,
Uploaded /etc/ and /var from the node which processed the DNS update: https://private-fileshare.canonical.com/~dima/maas-dumps/2019-02-25-maas-vhost3-etc-var-log.tar.gz
2019-02-25 10:28:39 regiond: [info] 127.0.0.1 PUT /MAAS/api/2.0/dnsresources/1/ HTTP/1.1 --> 200 OK (referrer: -; agent: Python-urllib/3.6)
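(That regiond line is the dnsresource update landing on the API; a roughly equivalent call through the MAAS CLI would look like the sketch below. The 'admin' profile name is an assumption, and the resource id 1 is taken from the URL in the log line.)

# hypothetical equivalent of the PUT above via the MAAS CLI;
# 'admin' is an assumed logged-in profile, id 1 from the log line
maas admin dnsresource update 1 ip_addresses=10.100.3.2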
Additional outputs to illustrate what happened:
https://pastebin.canonical.com/p/zqKSvHxJrT/
https://pastebin.canonical.com/p/Y6Vjftxdg6/ (resource agent log)
The force stop of the maas-vhost1 node happened a little before 10:28:39. The DB failover then went to maas-vhost2, and maas-vhost3 was the node selected by Pacemaker to execute the DNS update from 10.100.1.2 -> 10.100.3.2. The update succeeded, but the zone files on the two surviving MAAS servers (maas-vhost2 and maas-vhost3) were not updated, so this reproducer is identical to the original one:
    name     | name |     ip
-------------+------+------------
 maas-region | test | 10.100.3.2
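(For reference, the table above can be pulled from the region DB with something like the query below; a sketch only, since the 'maasdb' database name and the maasserver_* table names are assumed MAAS defaults and may differ between releases.)

# hypothetical query behind the table above; 'maasdb' and the
# maasserver_* schema names are assumed MAAS defaults
sudo -u postgres psql maasdb -c "
SELECT dr.name, d.name, sip.ip
FROM maasserver_dnsresource dr
JOIN maasserver_domain d ON d.id = dr.domain_id
JOIN maasserver_dnsresource_ip_addresses l ON l.dnsresource_id = dr.id
JOIN maasserver_staticipaddress sip ON sip.id = l.staticipaddress_id;"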
for i in 1 2 3 ; do ssh ubuntu@10.100.$i.2 nslookup maas-region.test ; done
ssh: connect to host 10.100.1.2 port 22: No route to host
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: maas-region.test
Address: 10.100.1.2

Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: maas-region.test
Address: 10.100.1.2
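(To double-check that bind on the surviving nodes never loaded the new record, comparing the SOA serial each node serves for the 'test' zone should show stale serials; a sketch, assuming named on each region controller answers on the node address:)

# compare the SOA serial of the 'test' zone as served by bind on each
# surviving region controller (assumes named listens on the node IPs)
for i in 2 3 ; do dig +short @10.100.$i.2 test SOA ; done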