strongswan ipsec fails to finish connection (hangs after installing DNS server via resolvconf)

Bug #1786261 reported by fermulator
This bug affects 1 person
Affects Status Importance Assigned to Milestone
strongSwan
New
Undecided
Unassigned
bind9 (Ubuntu)
New
Undecided
Unassigned
resolvconf (Ubuntu)
New
Undecided
Unassigned
strongswan (Ubuntu)
New
Undecided
Unassigned

Bug Description

As a continuation of https://bugs.launchpad.net/ubuntu/+source/strongswan/+bug/1786250 ... (that bug can stay focused on the AppArmor profile issue in Ubuntu + strongswan)

--
this bug report is for the stuck VPN connection issue

Used to work fine in Ubuntu 16.04 LTS, and Ubuntu 17.10.

ii strongswan 5.6.2-1ubuntu2 all IPsec VPN solution metapackage

A while ago I upgraded to 18.04 LTS and have had consistent issues with strongswan IPsec VPN connectivity ever since.

```
 sudo ipsec up <CONNECTION_NAME>

... all the goods happen ...

but near the end:

IKE_SA <CONNECTION_NAME>[1] established between 1.0.0.6[<USER_SNIPPED>]...64.7.137.180[OU=Domain Control Validated, CN=<SNIPPED_HOST>.com]
scheduling reauthentication in 56358s
maximum IKE_SA lifetime 56538s
installing DNS server 192.168.194.20 via resolvconf
installing DNS server 192.168.196.20 via resolvconf
<<HANGS FOREVER>>
```

while in this state, we see:
```
 sudo ipsec statusall
Status of IKE charon daemon (strongSwan 5.6.2, Linux 4.15.0-29-generic, x86_64):
  uptime: 6 minutes, since Aug 09 10:03:04 2018
  malloc: sbrk 3403776, mmap 532480, used 1301456, free 2102320
  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 0
  loaded plugins: charon test-vectors unbound ldap pkcs11 tpm aesni aes rc2 sha2 sha1 md4 md5 mgf1 random nonce x509 revocation constraints acert pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey dnscert ipseckey pem openssl gcrypt af-alg fips-prf gmp curve25519 agent chapoly xcbc cmac hmac ctr ccm gcm ntru bliss curl soup mysql sqlite attr kernel-netlink resolve socket-default connmark farp stroke vici updown eap-identity eap-sim eap-sim-pcsc eap-aka eap-aka-3gpp2 eap-simaka-pseudonym eap-simaka-reauth eap-md5 eap-gtc eap-mschapv2 eap-dynamic eap-radius eap-tls eap-ttls eap-peap eap-tnc xauth-generic xauth-eap xauth-pam xauth-noauth tnc-tnccs tnccs-20 tnccs-11 tnccs-dynamic dhcp whitelist lookip error-notify certexpire led radattr addrblock unity counters
Listening IP addresses:
  1.0.0.6
  192.168.130.9
  192.168.140.17
  192.168.130.14
  192.168.140.2
  192.168.130.13
  192.168.130.15
  192.168.130.16
  192.168.130.8
  172.17.0.1
  192.168.122.1
Connections:
  <SITE_SNIPPED>primary: %any...<SITE_SNIPPED>primary.<SNIPPED>.com IKEv2, dpddelay=30s
  <SITE_SNIPPED>primary: local: [<USER_SNIPPED>] uses EAP_MSCHAPV2 authentication
  <SITE_SNIPPED>primary: remote: [OU=Domain Control Validated, CN=<SNIPPED>.com] uses public key authentication
  <SITE_SNIPPED>primary: child: 192.168.140.0/24 === 192.168.128.0/17 10.0.0.0/8 172.16.0.0/12 TUNNEL, dpdaction=clear
<SITE_SNIPPED>secondary: %any...<SITE_SNIPPED>secondary.<SNIPPED>.com IKEv2, dpddelay=30s
<SITE_SNIPPED>secondary: local: [<USER_SNIPPED>] uses EAP_MSCHAPV2 authentication
<SITE_SNIPPED>secondary: remote: [OU=Domain Control Validated, CN=<SNIPPED>.com] uses public key authentication
<SITE_SNIPPED>secondary: child: 192.168.130.0/24 === 192.168.128.0/17 10.0.0.0/8 172.16.0.0/12 TUNNEL, dpdaction=clear
Routed Connections:
<SITE_SNIPPED>secondary{2}: ROUTED, TUNNEL, reqid 2
<SITE_SNIPPED>secondary{2}: 192.168.130.0/24 === 10.0.0.0/8 172.16.0.0/12 192.168.128.0/17
  <SITE_SNIPPED>primary{1}: ROUTED, TUNNEL, reqid 1
  <SITE_SNIPPED>primary{1}: 192.168.140.0/24 === 10.0.0.0/8 172.16.0.0/12 192.168.128.0/17
Security Associations (0 up, 0 connecting):
  none
```

Here are the logs (after a restart of the strongswan service):

journalctl --system -u strongswan

```
Aug 09 10:03:05 <HOSTNAME_SNIPPED> systemd[1]: Started strongSwan IPsec IKEv1/IKEv2 daemon using ipsec.conf.
Aug 09 10:03:05 <HOSTNAME_SNIPPED> ipsec[10448]: Starting strongSwan 5.6.2 IPsec [starter]...
Aug 09 10:03:05 <HOSTNAME_SNIPPED> ipsec_starter[10448]: Starting strongSwan 5.6.2 IPsec [starter]...
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[DMN] Starting IKE charon daemon (strongSwan 5.6.2, Linux 4.15.0-29-generic, x86_64)
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] PKCS11 module '<name>' lacks library path
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] disabling load-tester plugin, not configured
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[LIB] plugin 'load-tester': failed to load - load_tester_plugin_create returned NULL
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[KNL] unable to create IPv4 routing table rule
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[KNL] unable to create IPv6 routing table rule
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] dnscert plugin is disabled
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] ipseckey plugin is disabled
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] attr-sql plugin: database URI not set
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loading ca certificates from '/etc/ipsec.d/cacerts'
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loaded ca certificate "C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., CN=Go Daddy Root Certificate Authority - G2" from '/etc/ipsec.d/cacerts/<SNIPPED>-wildca
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loading aa certificates from '/etc/ipsec.d/aacerts'
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loading ocsp signer certificates from '/etc/ipsec.d/ocspcerts'
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loading attribute certificates from '/etc/ipsec.d/acerts'
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loading crls from '/etc/ipsec.d/crls'
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loading secrets from '/etc/ipsec.secrets'
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loaded EAP secret for <USER_SNIPPED>
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] sql plugin: database URI not set
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] opening triplet file /etc/ipsec.d/triplets.dat failed: No such file or directory
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] eap-simaka-sql database URI missing
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] loaded 0 RADIUS server configurations
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] HA config misses local/remote address
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] no threshold configured for systime-fix, disabled
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[CFG] coupling file path unspecified
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[LIB] loaded plugins: charon test-vectors unbound ldap pkcs11 tpm aesni aes rc2 sha2 sha1 md4 md5 mgf1 random nonce x509 revocation constraints acert pubkey pkcs1 pkcs7 pk
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[LIB] dropped capabilities, running as uid 0, gid 0
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 00[JOB] spawning 16 worker threads
Aug 09 10:03:05 <HOSTNAME_SNIPPED> ipsec[10448]: charon (10474) started after 40 ms
Aug 09 10:03:05 <HOSTNAME_SNIPPED> ipsec_starter[10448]: charon (10474) started after 40 ms
```
---
and when I try to connect:
```
Aug 09 10:03:05 <HOSTNAME_SNIPPED> charon[10474]: 04[CFG] received stroke: add connection '<SITE_SNIPPED>primary'
Aug 09 10:03:15 <HOSTNAME_SNIPPED> charon[10474]: 04[CFG] CA certificate "/etc/ipsec.d/cacerts/<SNIPPED>-wildcard.pem" not found, discarding CA constraint
Aug 09 10:03:15 <HOSTNAME_SNIPPED> charon[10474]: 04[CFG] added configuration '<SITE_SNIPPED>primary'
Aug 09 10:03:15 <HOSTNAME_SNIPPED> charon[10474]: 07[CFG] received stroke: route '<SITE_SNIPPED>primary'
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> ipsec[10448]: '<SITE_SNIPPED>primary' routed
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 07[KNL] policy already exists, try to update it
Aug 09 10:03:20 <HOSTNAME_SNIPPED> ipsec_starter[10448]: '<SITE_SNIPPED>primary' routed
Aug 09 10:03:20 <HOSTNAME_SNIPPED> ipsec_starter[10448]:
Aug 09 10:03:20 <HOSTNAME_SNIPPED> charon[10474]: 12[CFG] received stroke: add connection '<SITE_SNIPPED>secondary'
Aug 09 10:03:25 <HOSTNAME_SNIPPED> charon[10474]: 12[CFG] CA certificate "/etc/ipsec.d/cacerts/<SNIPPED>-wildcard.pem" not found, discarding CA constraint
Aug 09 10:03:25 <HOSTNAME_SNIPPED> charon[10474]: 12[CFG] added configuration '<SITE_SNIPPED>secondary'
Aug 09 10:03:25 <HOSTNAME_SNIPPED> charon[10474]: 14[CFG] received stroke: route '<SITE_SNIPPED>secondary'
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> ipsec[10448]: '<SITE_SNIPPED>secondary' routed
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> charon[10474]: 14[KNL] policy already exists, try to update it
Aug 09 10:03:30 <HOSTNAME_SNIPPED> ipsec_starter[10448]: '<SITE_SNIPPED>secondary' routed
Aug 09 10:03:30 <HOSTNAME_SNIPPED> ipsec_starter[10448]:
```

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi Fermulator,
while I regularly pick up new strongswan versions I'm not enough of an expert to give good suggestions on what the reason might be.

But fortunately there are often a few community members here using strongswan that can chime in.
Nevertheless, for a technical discussion of the very low-level details of your connection hang, an upstream issue report might be useful.

I'd ask you to report back here with the ID you get upstream so that we can link it and follow the discussion.
Also include as much of your setup as possible (simplified and stripped of confidential things) in the bug here and there, as it would help a lot if we could get it down to a list of steps to locally reproduce the issue.

Version-wise you said 17.10 was good but in 18.04 it is showing this behavior.
In terms of versions that would be 5.5.1 -> 5.6.2 - maybe that rings a bell for upstream?

Revision history for this message
Erich E. Hoover (ehoover) wrote :

I am having a similar problem, but I have some additional tidbits:
1) I had the same issue after upgrading from 17.10 to 18.04, but before rebooting
2) I have the same problem if I downgrade all the strongswan/charon packages to 5.5.1-4ubuntu2.2
3) First connection attempt after reboot does not always work
4) "sudo service strongswan restart" has some probability that the next connection attempt will work, but no guarantee

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Thanks Fermulator, no auto-import from this type of bug tracker, but the URL is great to check!
Since remote tracking can't be done automatically, can you ping here once something interesting comes back there?

Revision history for this message
fermulator (fermulator) wrote :

(yes, the upstream report back-references this downstream report; myself or someone else should relay the outcome here once upstream is fixed)

Revision history for this message
Erich E. Hoover (ehoover) wrote :

So, I've noticed something else that may be relevant. My work's configuration actually has two DNS servers, which show up like so:
installing DNS server 10.10.0.8 via resolvconf
installing DNS server 10.10.0.2 via resolvconf
However, I saw that there was some probability that it failed after either log message. In addition to that, there is a "leftupdown" script that makes an _additional_ call to resolvconf to set up the DNS search domain (and tear it down afterwards). It also (sometimes) locks up at these points, so I disabled that script and noticed more successful runs.

So, suspecting that the problem was with resolvconf, I downgraded it to 1.79ubuntu8. That didn't do the trick, but these days resolvconf is managed by systemd - so I then downgraded systemd to 234-2ubuntu12.1 (except for a conflict with netplan.io, which I ignored). That "worked" in an interesting way: it now reliably connects and finishes - but sometimes it takes about 10 seconds to complete each resolvconf transaction.

Based on this, I suspect that the issue is actually somehow in calling resolvconf (if I call resolvconf in a terminal then I don't see a lockup).

Revision history for this message
Erich E. Hoover (ehoover) wrote :

I think I may have somewhat figured it out. @fermulator, could you get it to lockup and then run "sudo killall host" in another terminal window?

It looks like this is some sort of integration issue between the latest bind9-host (used by avahi-daemon) and the latest systemd; downgrading either one of these packages appears to resolve the issue for me. Further, digging through the resolvconf scripts I found that it appears to hang when "host" is called by /usr/lib/avahi/avahi-daemon-check-dns.sh, so interrupting that task seems to bypass the issue.

Revision history for this message
fermulator (fermulator) wrote :

@ehoover: great isolation! That's absolutely it.

I too noticed it hangs in several other ways randomly.

One other example I had not mentioned was failing here:
{{{
Aug 20 09:02:49 fermmy charon[3698]: 14[IKE] maximum IKE_SA lifetime 56591s
Aug 20 09:02:49 fermmy charon[3698]: 14[IKE] processing INTERNAL_IP4_ADDRESS attribute
Aug 20 09:02:49 fermmy charon[3698]: 14[IKE] processing INTERNAL_IP4_DNS attribute
<<HANGS>>
}}}

During this state:
{{{
$ ps wuaxxx | grep host
root 8526 0.0 0.0 187628 8792 ? Sl 09:02 0:00 host -t soa local.
}}}

Then:
{{{
$ sudo killall host
}}}

tada (the following log messages then appear immediately):
{{{
Aug 20 09:04:24 fermmy charon[3698]: 10[IKE] removing DNS server 192.168.194.20 via resolvconf
Aug 20 09:04:24 fermmy charon[3698]: 14[IKE] installing DNS server 192.168.194.20 via resolvconf
Aug 20 09:04:24 fermmy charon[3698]: 14[IKE] processing INTERNAL_IP4_DNS attribute
Aug 20 09:04:24 fermmy charon[3698]: 14[IKE] installing DNS server 192.168.196.20 via resolvconf
Aug 20 09:04:24 fermmy charon[3698]: 14[IKE] installing new virtual IP 192.168.130.4
Aug 20 09:04:24 fermmy charon[3698]: 14[IKE] CHILD_SA wtlsecondary{72} established with SPIs c557c437_i 9ae567d1_o and TS 192.168.130.4/32 === 10.0.0.0/8 172.16.0.0/12 192.168.128.0/17
Aug 20 09:04:24 fermmy charon[3698]: 14[IKE] CHILD_SA wtlsecondary{72} established with SPIs c557c437_i 9ae567d1_o and TS 192.168.130.4/32 === 10.0.0.0/8 172.16.0.0/12 192.168.128.0/17
}}}

it's definitely resolvconf related :( -- and as you suggest

we should move this defect to the appropriate sub-category then
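Until the root cause is fixed, the `killall host` workaround above can be scripted as a stopgap. A hedged sketch, assuming the hang always manifests as a lingering `host -t soa local.` process as seen in the `ps` output above (the function name is mine, not from any package):

```shell
#!/bin/sh
# Stopgap sketch: if the avahi DNS check's "host" lookup is stuck,
# kill it so charon can finish installing DNS servers via resolvconf.
# The [.] in the pattern is regex for a literal dot, and also keeps
# pgrep/pkill -f from matching this script's own command line.
unstick_host() {
  if pgrep -f 'host -t soa local[.]' >/dev/null 2>&1; then
    pkill -f 'host -t soa local[.]'
    echo "killed hanging host lookup"
  else
    echo "no hanging host lookup found"
  fi
}

unstick_host
```

Running it while `ipsec up` is stuck should have the same effect as the manual `sudo killall host`.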

Revision history for this message
fermulator (fermulator) wrote :

fwiw
{{{
$ dpkg --list | grep avahi-daemon
ii avahi-daemon 0.7-3.1ubuntu1 amd64 Avahi mDNS/DNS-SD daemon
$ dpkg --list | grep resolvconf
ii resolvconf 1.79ubuntu10 all name server information handler
}}}

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

bind9-host is deprecated anyway isn't it?
Could the avahi shell script easily be modified to try dig at the same place?

Also, as requested, since you isolated it towards resolvconf I'm adding a bug task for that as well.

Revision history for this message
Erich E. Hoover (ehoover) wrote :

@paelzer, it is making an unusual call to host and checking the return:
LC_ALL=C host -t soa local. 2>&1
and theoretically this maps to:
LC_ALL=C dig -t soa local. 2>&1

Practically, I don't think this works. I do not have a "local." start-of-authority record, and when I run these commands I get a return value of 1 from host and 0 from dig. It might be possible to parse the output of dig to get the same information, but I don't know what dig outputs when "local." does show up with an SOA record. If I recall correctly, this should only happen on _very_ broken networks. The avahi folks might know more about how to proceed.
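Since dig exits 0 even on NXDOMAIN, a dig-based replacement would have to parse the status line rather than the exit code. A rough, hypothetical sketch of what that parsing could look like (the status strings below are stand-ins, not captured `dig -t soa local.` output):

```shell
#!/bin/sh
# Hypothetical dig-flavoured dns_has_local: succeed only when the captured
# output reports NOERROR and does not say the record is missing.
dns_has_local_dig() {
  echo "$1" | grep -q 'status: NOERROR' &&
    ! echo "$1" | grep -Eq 'has no|not found'
}

# Simulated dig header for a network with no unicast .local zone:
if dns_has_local_dig ';; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 1'; then
  echo "unicast .local DNS present"
else
  echo "no unicast .local DNS"
fi
```

Whether this matches real dig output on a broken network would need checking, per the comment above.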

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi,
I just realized, with you tracking it down so far - could it be "another symptom" of bug 1752411?

If so we should add strongswan there as affected and make this bug a Dup, so that ALL effects of this one issue are in one place.

I'd be glad if one of you could check the details to see if you think it really is the same.

Revision history for this message
fermulator (fermulator) wrote :

$ dpkg --list | grep bind9
ii bind9-host 1:9.11.3+dfsg-1ubuntu1.1 amd64 DNS lookup utility (deprecated)
ii libbind9-160:amd64 1:9.11.3+dfsg-1ubuntu1.1 amd64 BIND9 Shared Library used by BIND
rc libbind9-80 1:9.8.1.dfsg.P1-4ubuntu0.9 amd64 BIND9 Shared Library used by BIND
rc libbind9-90 1:9.9.5.dfsg-3ubuntu0.8 amd64 BIND9 Shared Library used by BIND

Revision history for this message
Erich E. Hoover (ehoover) wrote :

Yes, this is definitely another symptom (duplicate) of bug 1752411 . The folks on that bug might be people who will know if dig can be used instead.

Revision history for this message
fermulator (fermulator) wrote :

The relationship to LP #1752411 certainly feels valid.
(I think I agree to the duplication)

Check this out btw (perhaps better submitted to the other bug) -- but --

despite "host" claiming a default timeout of a few seconds, this NEVER returns!!
{{{
$ LC_ALL=C host -t soa local.
}}}

$ man host
{{{
       -W wait
           Timeout: Wait for up to wait seconds for a reply. If wait is less than one, the wait interval is set to one second.

           By default, host will wait for 5 seconds for UDP responses and 10 seconds for TCP connections. These defaults can be overridden by the timeout option in
           /etc/resolv.conf.

           See also the -w option.
}}}

But even when I manually specify how long to wait, it isn't honoured:
{{{
$ time LC_ALL=C host -W 1 -t soa local.

<<HUNG>>
}}}
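Since host's own -W flag isn't honoured in this state, one possible mitigation (a sketch, not a tested fix) is to bound the call externally with coreutils timeout(1); here `sleep 5` stands in for the stuck lookup:

```shell
#!/bin/sh
# Bound a potentially-hanging command with an external timeout.
# "sleep 5" simulates the stuck "host -t soa local." lookup; in the avahi
# script the real call would be something like: timeout 5 host -t soa local.
if timeout 1 sleep 5; then
  echo "lookup finished"
else
  echo "lookup timed out or failed"   # timeout(1) exits 124 on expiry
fi
```

This only papers over the hang, but it would at least let the avahi check fail fast instead of blocking charon forever.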

--
we're talking about THIS method btw

/usr/lib/avahi/avahi-daemon-check-dns.sh
{{{
dns_has_local() {
  # Some magic to do tests
  if [ -n "${FAKE_HOST_RETURN}" ] ; then
    if [ "${FAKE_HOST_RETURN}" = "true" ]; then
      return 0;
    else
      return 1;
    fi
  fi

  OUT=`LC_ALL=C host -t soa local. 2>&1` <<<---- HERE
  if [ $? -eq 0 ] ; then
    if echo "$OUT" | egrep -vq 'has no|not found'; then
      return 0
    fi
  else
    # Checking the dns servers failed. Assuming no .local unicast dns, but
    # remove the nameserver cache so we recheck the next time we're triggered
    rm -f ${NS_CACHE}
  fi
  return 1
}
}}}

---

Steps to reproduce:
 1. fresh boot
 2. run "host -t soa local." (works fine)
 {{{
$ LC_ALL=C host -t soa local.
Host local. not found: 3(NXDOMAIN)
 }}}
 3. connect to strongswan vpn
 4. disconnect the session
 5. now that command hangs forever
{{{
$ time LC_ALL=C host -t soa local.
<HUNG>
}}}
 (tried timing it ...)

Revision history for this message
fermulator (fermulator) wrote :

I also note;

I think this is (at least partially) due to strongswan leaving a dangling duplicate DNS entry in resolv.conf.

It's 100% consistent: after the disconnect step above, there is a dangling DNS entry in resolv.conf, and this script hangs.

More :

 1. fresh boot
 2. script checks:
 - "/usr/lib/avahi/avahi-daemon-check-dns.sh" is fine
 - "host -t soa local." returns
 3. activate strongswan connection = SUCCESS

{{{
fermulator@fermmy:~$ sudo /usr/lib/avahi/avahi-daemon-check-dns.sh

fermulator@fermmy:~$ LC_ALL=C host -t soa local.
Host local. not found: 3(NXDOMAIN)

resolv.conf contains:
nameserver 192.168.194.20
nameserver 192.168.196.20
nameserver 127.0.0.53
}}}

then;
 4. disconnect VPN,

{{{
resolv.conf dangling:

nameserver 192.168.194.20
nameserver 127.0.0.53
}}}

 5. script checks:
 - "/usr/lib/avahi/avahi-daemon-check-dns.sh" HANGS
 - "host -t soa local." HANGS

 6. killall host

back to normal;

resolv.conf properly only has the local nameserver now (no more dangling DNS),
{{{
nameserver 127.0.0.53
}}}
 7. script checks:
 - "/usr/lib/avahi/avahi-daemon-check-dns.sh" works
 - "host -t soa local." works

{{{
$ host -t soa local.
Host local not found: 2(SERVFAIL)
}}}

Revision history for this message
fermulator (fermulator) wrote :

**accepting duplication**
