Merge lp:~xnox/ubuntu/quantal/lvm2/merge95 into lp:ubuntu/quantal/lvm2
- Quantal (12.10)
- merge95
- Merge into quantal
Status: Merged
Merge reported by: Dimitri John Ledkov
Merged at revision: not available
Proposed branch: lp:~xnox/ubuntu/quantal/lvm2/merge95
Merge into: lp:ubuntu/quantal/lvm2
Diff against target: 85158 lines (+39300/-20331), 513 files modified
.pc/applied-patches (+0/-7) .pc/avoid-dev-block.patch/lib/device/dev-cache.c (+0/-976) .pc/dirs.patch/daemons/dmeventd/Makefile.in (+108/-0) .pc/dirs.patch/daemons/dmeventd/dmeventd.c (+2009/-0) .pc/dirs.patch/doc/example.conf.in (+0/-662) .pc/dirs.patch/lib/commands/toolcontext.c (+0/-1513) .pc/dm-event-api.patch/daemons/dmeventd/.exported_symbols (+4/-0) .pc/dm-event-api.patch/daemons/dmeventd/dmeventd.c (+2018/-0) .pc/dm-event-api.patch/daemons/dmeventd/libdevmapper-event.c (+874/-0) .pc/force-modprobe.patch/configure.in (+0/-1369) .pc/implicit-pointer.patch/tools/lvm.c (+0/-253) .pc/install.patch/daemons/dmeventd/plugins/lvm2/Makefile.in (+31/-0) .pc/install.patch/daemons/dmeventd/plugins/mirror/Makefile.in (+37/-0) .pc/install.patch/daemons/dmeventd/plugins/raid/Makefile.in (+36/-0) .pc/install.patch/daemons/dmeventd/plugins/snapshot/Makefile.in (+33/-0) .pc/install.patch/daemons/dmeventd/plugins/thin/Makefile.in (+36/-0) .pc/install.patch/make.tmpl.in (+0/-396) .pc/libs-cleanup.patch/configure.in (+0/-1423) .pc/monitoring-default-off.patch/doc/example.conf.in (+773/-0) .pc/monitoring-default-off.patch/lib/config/defaults.h (+0/-165) .pc/monitoring-default-off.patch/tools/toollib.c (+1637/-0) Makefile.in (+14/-7) VERSION (+1/-1) VERSION_DM (+1/-1) WHATS_NEW (+273/-1) WHATS_NEW_DM (+109/-0) configure (+391/-118) configure.in (+158/-17) daemons/Makefile.in (+8/-4) daemons/clvmd/Makefile.in (+4/-2) daemons/clvmd/clvm.h (+11/-4) daemons/clvmd/clvmd-cman.c (+1/-0) daemons/clvmd/clvmd-command.c (+44/-47) daemons/clvmd/clvmd-comms.h (+2/-1) daemons/clvmd/clvmd-corosync.c (+61/-2) daemons/clvmd/clvmd-openais.c (+1/-0) daemons/clvmd/clvmd-singlenode.c (+51/-24) daemons/clvmd/clvmd.c (+158/-103) daemons/clvmd/lvm-functions.c (+78/-74) daemons/clvmd/lvm-functions.h (+1/-1) daemons/clvmd/refresh_clvmd.c (+27/-24) daemons/cmirrord/clogd.c (+17/-5) daemons/cmirrord/cluster.c (+120/-25) daemons/cmirrord/functions.c (+25/-8) daemons/dmeventd/.exported_symbols (+3/-3) 
daemons/dmeventd/Makefile.in (+5/-4) daemons/dmeventd/dmeventd.c (+165/-133) daemons/dmeventd/dmeventd.h (+1/-0) daemons/dmeventd/libdevmapper-event.c (+38/-30) daemons/dmeventd/libdevmapper-event.h (+3/-0) daemons/dmeventd/plugins/Makefile.in (+6/-1) daemons/dmeventd/plugins/lvm2/.exported_symbols (+1/-0) daemons/dmeventd/plugins/lvm2/Makefile.in (+3/-5) daemons/dmeventd/plugins/lvm2/dmeventd_lvm.c (+25/-4) daemons/dmeventd/plugins/lvm2/dmeventd_lvm.h (+3/-0) daemons/dmeventd/plugins/mirror/Makefile.in (+3/-4) daemons/dmeventd/plugins/mirror/dmeventd_mirror.c (+16/-26) daemons/dmeventd/plugins/raid/Makefile.in (+2/-4) daemons/dmeventd/plugins/raid/dmeventd_raid.c (+30/-3) daemons/dmeventd/plugins/snapshot/Makefile.in (+3/-4) daemons/dmeventd/plugins/snapshot/dmeventd_snapshot.c (+60/-38) daemons/dmeventd/plugins/thin/.exported_symbols (+3/-0) daemons/dmeventd/plugins/thin/Makefile.in (+37/-0) daemons/dmeventd/plugins/thin/dmeventd_thin.c (+288/-0) daemons/lvmetad/Makefile.in (+59/-0) daemons/lvmetad/lvmetad-client.h (+81/-0) daemons/lvmetad/lvmetad-core.c (+1126/-0) daemons/lvmetad/test.sh (+16/-0) daemons/lvmetad/testclient.c (+127/-0) debian/changelog (+118/-0) debian/clvm.init (+6/-4) debian/clvmd.ra (+2/-2) debian/control (+22/-10) debian/dmeventd.install (+2/-0) debian/libdevmapper-dev.install (+2/-5) debian/libdevmapper-event1.02.1.install (+1/-1) debian/libdevmapper-event1.02.1.symbols (+6/-6) debian/libdevmapper1.02.1.install (+1/-1) debian/libdevmapper1.02.1.symbols (+66/-0) debian/liblvm2app2.2.install (+1/-1) debian/liblvm2cmd2.02.install (+1/-1) debian/lvm2.init (+6/-6) debian/lvm2.postinst (+4/-3) debian/lvm2.postrm (+9/-0) debian/lvm2.preinst (+4/-0) debian/patches/dirs.patch (+35/-4) debian/patches/dm-event-api.patch (+115/-0) debian/patches/install.patch (+83/-8) debian/patches/libs-cleanup.patch (+5/-13) debian/patches/monitoring-default-off.patch (+50/-12) debian/patches/series (+1/-0) debian/rules (+55/-43) doc/example.conf.in (+118/-7) 
doc/kernel/crypt.txt (+76/-0) doc/kernel/delay.txt (+26/-0) doc/kernel/flakey.txt (+53/-0) doc/kernel/io.txt (+75/-0) doc/kernel/kcopyd.txt (+47/-0) doc/kernel/linear.txt (+61/-0) doc/kernel/log.txt (+54/-0) doc/kernel/persistent-data.txt (+84/-0) doc/kernel/queue-length.txt (+39/-0) doc/kernel/raid.txt (+108/-0) doc/kernel/service-time.txt (+91/-0) doc/kernel/snapshot.txt (+168/-0) doc/kernel/striped.txt (+58/-0) doc/kernel/thin-provisioning.txt (+285/-0) doc/kernel/uevent.txt (+97/-0) doc/kernel/zero.txt (+37/-0) doc/lvm2-raid.txt (+197/-20) doc/lvm_fault_handling.txt (+41/-60) doc/lvmetad_design.txt (+2/-2) doc/tagging.txt (+6/-6) doc/udev_assembly.txt (+2/-2) include/.symlinks.in (+5/-0) lib/Makefile.in (+17/-1) lib/activate/activate.c (+395/-132) lib/activate/activate.h (+30/-12) lib/activate/dev_manager.c (+430/-57) lib/activate/dev_manager.h (+12/-1) lib/activate/fs.c (+22/-11) lib/activate/fs.h (+1/-0) lib/cache/lvmcache.c (+433/-54) lib/cache/lvmcache.h (+68/-49) lib/cache/lvmetad.c (+717/-0) lib/cache/lvmetad.h (+126/-0) lib/commands/toolcontext.c (+152/-95) lib/commands/toolcontext.h (+11/-4) lib/config/config.c (+266/-1207) lib/config/config.h (+21/-89) lib/config/defaults.h (+19/-5) lib/datastruct/str_list.c (+3/-6) lib/datastruct/str_list.h (+2/-2) lib/device/dev-cache.c (+87/-45) lib/device/dev-cache.h (+3/-0) lib/device/dev-io.c (+7/-20) lib/display/display.c (+105/-23) lib/filters/filter-mpath.c (+213/-0) lib/filters/filter-mpath.h (+23/-0) lib/filters/filter-persistent.c (+21/-15) lib/filters/filter-persistent.h (+1/-1) lib/filters/filter-regex.c (+14/-13) lib/filters/filter-regex.h (+1/-1) lib/filters/filter-sysfs.c (+3/-1) lib/filters/filter.c (+48/-21) lib/filters/filter.h (+2/-1) lib/format1/disk-rep.c (+48/-24) lib/format1/format1.c (+42/-12) lib/format1/import-extents.c (+4/-4) lib/format1/lvm1-label.c (+5/-6) lib/format_pool/disk_rep.c (+106/-76) lib/format_pool/format_pool.c (+34/-5) lib/format_pool/import_export.c (+4/-4) 
lib/format_text/archive.c (+1/-1) lib/format_text/archiver.c (+27/-11) lib/format_text/export.c (+33/-11) lib/format_text/flags.c (+9/-3) lib/format_text/format-text.c (+191/-186) lib/format_text/format-text.h (+12/-0) lib/format_text/import-export.h (+8/-7) lib/format_text/import.c (+16/-15) lib/format_text/import_vsn1.c (+150/-141) lib/format_text/layout.h (+1/-11) lib/format_text/tags.c (+3/-3) lib/format_text/text_export.h (+2/-2) lib/format_text/text_import.h (+3/-3) lib/format_text/text_label.c (+117/-94) lib/label/label.c (+16/-22) lib/label/label.h (+2/-0) lib/locking/cluster_locking.c (+36/-30) lib/locking/external_locking.c (+1/-1) lib/locking/file_locking.c (+15/-7) lib/locking/locking.c (+61/-9) lib/locking/locking.h (+27/-7) lib/locking/no_locking.c (+2/-2) lib/log/log.c (+8/-8) lib/metadata/lv.c (+209/-27) lib/metadata/lv.h (+11/-1) lib/metadata/lv_alloc.h (+7/-4) lib/metadata/lv_manip.c (+693/-255) lib/metadata/merge.c (+127/-2) lib/metadata/metadata-exported.h (+123/-87) lib/metadata/metadata.c (+282/-191) lib/metadata/metadata.h (+50/-46) lib/metadata/mirror.c (+144/-93) lib/metadata/pv.c (+72/-45) lib/metadata/pv_manip.c (+8/-6) lib/metadata/pv_map.c (+3/-3) lib/metadata/pv_map.h (+2/-2) lib/metadata/raid_manip.c (+654/-22) lib/metadata/replicator_manip.c (+0/-8) lib/metadata/segtype.h (+19/-16) lib/metadata/snapshot_manip.c (+4/-0) lib/metadata/thin_manip.c (+425/-0) lib/metadata/vg.c (+22/-3) lib/metadata/vg.h (+5/-0) lib/mirror/mirrored.c (+37/-38) lib/misc/configure.h.in (+15/-0) lib/misc/lvm-exec.c (+3/-2) lib/misc/lvm-file.c (+2/-1) lib/misc/lvm-globals.c (+14/-3) lib/misc/lvm-globals.h (+3/-0) lib/misc/lvm-percent.h (+2/-1) lib/misc/lvm-string.c (+43/-284) lib/misc/lvm-string.h (+0/-34) lib/misc/sharedlib.c (+4/-2) lib/mm/memlock.c (+23/-22) lib/raid/.exported_symbols (+1/-1) lib/raid/raid.c (+115/-81) lib/replicator/replicator.c (+39/-35) lib/report/columns.h (+12/-1) lib/report/properties.c (+39/-3) lib/report/properties.h (+1/-1) 
lib/report/report.c (+244/-16) lib/snapshot/snapshot.c (+10/-8) lib/striped/striped.c (+13/-11) lib/thin/.exported_symbols (+1/-0) lib/thin/Makefile.in (+25/-0) lib/thin/thin.c (+601/-0) lib/unknown/unknown.c (+9/-7) libdaemon/Makefile.in (+29/-0) libdaemon/client/Makefile.in (+20/-0) libdaemon/client/daemon-client.c (+118/-0) libdaemon/client/daemon-client.h (+105/-0) libdaemon/client/daemon-shared.c (+141/-0) libdaemon/client/daemon-shared.h (+30/-0) libdaemon/server/Makefile.in (+22/-0) libdaemon/server/daemon-server.c (+523/-0) libdaemon/server/daemon-server.h (+122/-0) libdm/Makefile.in (+1/-0) libdm/ioctl/libdm-iface.c (+220/-126) libdm/ioctl/libdm-targets.h (+3/-0) libdm/libdevmapper.h (+325/-4) libdm/libdm-common.c (+760/-104) libdm/libdm-common.h (+17/-4) libdm/libdm-config.c (+1174/-0) libdm/libdm-deptree.c (+1011/-386) libdm/libdm-file.c (+20/-1) libdm/libdm-report.c (+3/-0) libdm/libdm-string.c (+272/-13) libdm/misc/dm-log-userspace.h (+25/-6) libdm/mm/dbg_malloc.c (+16/-8) libdm/mm/dbg_malloc.h (+0/-46) libdm/mm/pool-fast.c (+13/-2) libdm/mm/pool.c (+1/-0) libdm/regex/matcher.c (+108/-68) liblvm/Makefile.in (+3/-1) liblvm/lvm2app.h (+2/-1) liblvm/lvm_base.c (+1/-1) liblvm/lvm_lv.c (+15/-9) liblvm/lvm_pv.c (+5/-5) make.tmpl.in (+40/-30) man/Makefile.in (+4/-4) man/clvmd.8.in (+14/-0) man/dmeventd.8.in (+22/-13) man/dmsetup.8.in (+472/-258) man/fsadm.8.in (+59/-45) man/lvconvert.8.in (+42/-0) man/lvcreate.8.in (+196/-108) man/lvextend.8.in (+1/-0) man/lvm.8.in (+2/-1) man/lvm.conf.5.in (+16/-4) man/lvreduce.8.in (+43/-31) man/lvremove.8.in (+15/-9) man/lvrename.8.in (+17/-23) man/lvresize.8.in (+56/-52) man/lvs.8.in (+68/-33) man/pvcreate.8.in (+1/-1) man/pvscan.8.in (+19/-0) scripts/Makefile.in (+20/-2) scripts/dm_event_systemd_red_hat.service.in (+3/-1) scripts/fsadm.sh (+51/-46) scripts/gdbinit (+68/-43) scripts/lvm2_lvmetad_init_red_hat.in (+115/-0) scripts/lvm2_lvmetad_systemd_red_hat.service.in (+17/-0) 
scripts/lvm2_lvmetad_systemd_red_hat.socket.in (+10/-0) scripts/lvm2_monitoring_init_red_hat.in (+4/-4) scripts/lvm2_monitoring_init_rhel4 (+4/-4) scripts/lvm2_monitoring_systemd_red_hat.service.in (+1/-2) scripts/lvm2_tmpfiles_red_hat.conf.in (+2/-0) scripts/lvm2create_initrd/lvm2create_initrd (+3/-3) scripts/lvm2create_initrd/lvm2create_initrd.8 (+53/-44) scripts/lvm2create_initrd/lvm2create_initrd.pod (+5/-5) scripts/vgimportclone.sh (+7/-5) test/Makefile.in (+46/-40) test/api/Makefile.in (+15/-34) test/api/percent.sh (+2/-0) test/lib/aux.sh (+75/-11) test/lib/check.sh (+4/-4) test/lib/harness.c (+20/-9) test/lib/test.sh (+12/-3) test/shell/000-basic.sh (+28/-0) test/shell/activate-missing.sh (+87/-0) test/shell/activate-partial.sh (+30/-0) test/shell/clvmd-restart.sh (+53/-0) test/shell/covercmd.sh (+82/-0) test/shell/dmeventd-restart.sh (+42/-0) test/shell/dumpconfig.sh (+35/-0) test/shell/fsadm.sh (+128/-0) test/shell/inconsistent-metadata.sh (+78/-0) test/shell/listings.sh (+83/-0) test/shell/lock-blocking.sh (+41/-0) test/shell/lvchange-mirror.sh (+28/-0) test/shell/lvconvert-mirror-basic-0.sh (+12/-0) test/shell/lvconvert-mirror-basic-1.sh (+12/-0) test/shell/lvconvert-mirror-basic-2.sh (+12/-0) test/shell/lvconvert-mirror-basic-3.sh (+12/-0) test/shell/lvconvert-mirror-basic.sh (+143/-0) test/shell/lvconvert-mirror.sh (+259/-0) test/shell/lvconvert-raid.sh (+215/-0) test/shell/lvconvert-repair-dmeventd.sh (+26/-0) test/shell/lvconvert-repair-policy.sh (+91/-0) test/shell/lvconvert-repair-replace.sh (+93/-0) test/shell/lvconvert-repair-snapshot.sh (+27/-0) test/shell/lvconvert-repair-transient-dmeventd.sh (+27/-0) test/shell/lvconvert-repair-transient.sh (+26/-0) test/shell/lvconvert-repair.sh (+114/-0) test/shell/lvconvert-twostep.sh (+26/-0) test/shell/lvcreate-large.sh (+40/-0) test/shell/lvcreate-mirror.sh (+41/-0) test/shell/lvcreate-operation.sh (+43/-0) test/shell/lvcreate-pvtags.sh (+47/-0) test/shell/lvcreate-raid.sh (+95/-0) 
test/shell/lvcreate-repair.sh (+100/-0) test/shell/lvcreate-small-snap.sh (+30/-0) test/shell/lvcreate-striped-mirror.sh (+65/-0) test/shell/lvcreate-thin.sh (+216/-0) test/shell/lvcreate-usage.sh (+152/-0) test/shell/lvextend-percent-extents.sh (+106/-0) test/shell/lvextend-snapshot-dmeventd.sh (+62/-0) test/shell/lvextend-snapshot-policy.sh (+47/-0) test/shell/lvm-init.sh (+21/-0) test/shell/lvmcache-exercise.sh (+22/-0) test/shell/lvmetad-pvs.sh (+20/-0) test/shell/lvresize-mirror.sh (+38/-0) test/shell/lvresize-rounding.sh (+25/-0) test/shell/lvresize-usage.sh (+20/-0) test/shell/mdata-strings.sh (+39/-0) test/shell/metadata-balance.sh (+232/-0) test/shell/metadata-dirs.sh (+43/-0) test/shell/metadata.sh (+80/-0) test/shell/mirror-names.sh (+156/-0) test/shell/mirror-vgreduce-removemissing.sh (+424/-0) test/shell/name-mangling.sh (+230/-0) test/shell/nomda-missing.sh (+83/-0) test/shell/pool-labels.sh (+40/-0) test/shell/pv-duplicate.sh (+25/-0) test/shell/pv-min-size.sh (+31/-0) test/shell/pv-range-overflow.sh (+32/-0) test/shell/pvchange-usage.sh (+66/-0) test/shell/pvcreate-metadata0.sh (+32/-0) test/shell/pvcreate-operation-md.sh (+147/-0) test/shell/pvcreate-operation.sh (+121/-0) test/shell/pvcreate-usage.sh (+192/-0) test/shell/pvmove-basic.sh (+385/-0) test/shell/pvremove-usage.sh (+68/-0) test/shell/read-ahead.sh (+62/-0) test/shell/snapshot-autoumount-dmeventd.sh (+39/-0) test/shell/snapshot-merge.sh (+134/-0) test/shell/snapshots-of-mirrors.sh (+44/-0) test/shell/tags.sh (+74/-0) test/shell/test-partition.sh (+30/-0) test/shell/topology-support.sh (+106/-0) test/shell/unknown-segment.sh (+34/-0) test/shell/unlost-pv.sh (+38/-0) test/shell/vgcfgbackup-usage.sh (+54/-0) test/shell/vgchange-maxlv.sh (+31/-0) test/shell/vgchange-sysinit.sh (+51/-0) test/shell/vgchange-usage.sh (+44/-0) test/shell/vgcreate-usage.sh (+163/-0) test/shell/vgextend-restoremissing.sh (+30/-0) test/shell/vgextend-usage.sh (+129/-0) test/shell/vgimportclone.sh (+38/-0) 
test/shell/vgmerge-operation.sh (+81/-0) test/shell/vgmerge-usage.sh (+67/-0) test/shell/vgreduce-removemissing-snapshot.sh (+26/-0) test/shell/vgreduce-usage.sh (+87/-0) test/shell/vgrename-usage.sh (+41/-0) test/shell/vgsplit-operation.sh (+295/-0) test/shell/vgsplit-stacked.sh (+29/-0) test/shell/vgsplit-usage.sh (+168/-0) test/t-000-basic.sh (+0/-30) test/t-activate-missing.sh (+0/-87) test/t-activate-partial.sh (+0/-30) test/t-covercmd.sh (+0/-82) test/t-dmeventd-restart.sh (+0/-40) test/t-fsadm.sh (+0/-123) test/t-inconsistent-metadata.sh (+0/-75) test/t-listings.sh (+0/-83) test/t-lock-blocking.sh (+0/-41) test/t-lvchange-mirror.sh (+0/-28) test/t-lvconvert-mirror-basic-0.sh (+0/-12) test/t-lvconvert-mirror-basic-1.sh (+0/-12) test/t-lvconvert-mirror-basic-2.sh (+0/-12) test/t-lvconvert-mirror-basic-3.sh (+0/-12) test/t-lvconvert-mirror-basic.sh (+0/-142) test/t-lvconvert-mirror.sh (+0/-255) test/t-lvconvert-raid.sh (+0/-158) test/t-lvconvert-repair-dmeventd.sh (+0/-26) test/t-lvconvert-repair-policy.sh (+0/-91) test/t-lvconvert-repair-replace.sh (+0/-93) test/t-lvconvert-repair-snapshot.sh (+0/-27) test/t-lvconvert-repair-transient-dmeventd.sh (+0/-27) test/t-lvconvert-repair-transient.sh (+0/-26) test/t-lvconvert-repair.sh (+0/-108) test/t-lvconvert-twostep.sh (+0/-26) test/t-lvcreate-mirror.sh (+0/-41) test/t-lvcreate-operation.sh (+0/-43) test/t-lvcreate-pvtags.sh (+0/-45) test/t-lvcreate-raid.sh (+0/-113) test/t-lvcreate-repair.sh (+0/-97) test/t-lvcreate-small-snap.sh (+0/-30) test/t-lvcreate-usage.sh (+0/-152) test/t-lvextend-percent-extents.sh (+0/-106) test/t-lvextend-snapshot-dmeventd.sh (+0/-51) test/t-lvextend-snapshot-policy.sh (+0/-47) test/t-lvm-init.sh (+0/-21) test/t-lvmcache-exercise.sh (+0/-23) test/t-lvresize-mirror.sh (+0/-38) test/t-lvresize-usage.sh (+0/-20) test/t-mdata-strings.sh (+0/-33) test/t-metadata-balance.sh (+0/-232) test/t-metadata-dirs.sh (+0/-43) test/t-metadata.sh (+0/-80) test/t-mirror-names.sh (+0/-156) 
test/t-mirror-vgreduce-removemissing.sh (+0/-424) test/t-nomda-missing.sh (+0/-83) test/t-pool-labels.sh (+0/-39) test/t-pv-duplicate.sh (+0/-25) test/t-pv-min-size.sh (+0/-31) test/t-pv-range-overflow.sh (+0/-32) test/t-pvchange-usage.sh (+0/-66) test/t-pvcreate-metadata0.sh (+0/-32) test/t-pvcreate-operation-md.sh (+0/-147) test/t-pvcreate-operation.sh (+0/-121) test/t-pvcreate-usage.sh (+0/-192) test/t-pvmove-basic.sh (+0/-378) test/t-pvremove-usage.sh (+0/-68) test/t-read-ahead.sh (+0/-62) test/t-snapshot-autoumount-dmeventd.sh (+0/-39) test/t-snapshot-merge.sh (+0/-133) test/t-snapshots-of-mirrors.sh (+0/-44) test/t-tags.sh (+0/-74) test/t-test-partition.sh (+0/-30) test/t-topology-support.sh (+0/-106) test/t-unknown-segment.sh (+0/-34) test/t-unlost-pv.sh (+0/-38) test/t-vgcfgbackup-usage.sh (+0/-54) test/t-vgchange-maxlv.sh (+0/-31) test/t-vgchange-sysinit.sh (+0/-51) test/t-vgchange-usage.sh (+0/-44) test/t-vgcreate-usage.sh (+0/-163) test/t-vgextend-restoremissing.sh (+0/-30) test/t-vgextend-usage.sh (+0/-129) test/t-vgimportclone.sh (+0/-36) test/t-vgmerge-operation.sh (+0/-81) test/t-vgmerge-usage.sh (+0/-67) test/t-vgreduce-removemissing-snapshot.sh (+0/-26) test/t-vgreduce-usage.sh (+0/-87) test/t-vgrename-usage.sh (+0/-41) test/t-vgsplit-operation.sh (+0/-290) test/t-vgsplit-stacked.sh (+0/-29) test/t-vgsplit-usage.sh (+0/-168) test/unit/Makefile.in (+33/-0) test/unit/bitset_t.c (+133/-0) test/unit/config_t.c (+156/-0) test/unit/matcher_data.h (+1013/-0) test/unit/matcher_t.c (+85/-0) test/unit/run.c (+29/-0) test/unit/string_t.c (+83/-0) tools/Makefile.in (+2/-4) tools/args.h (+7/-2) tools/commands.h (+16/-7) tools/dmsetup.c (+378/-71) tools/dumpconfig.c (+1/-1) tools/lvchange.c (+45/-17) tools/lvconvert.c (+144/-23) tools/lvcreate.c (+582/-156) tools/lvm.c (+1/-2) tools/lvm2cmd.h (+6/-0) tools/lvmcmdlib.c (+4/-0) tools/lvmcmdline.c (+28/-16) tools/lvmdiskscan.c (+7/-4) tools/lvrename.c (+15/-0) tools/lvresize.c (+128/-31) tools/polldaemon.c (+24/-7) 
tools/polldaemon.h (+3/-3) tools/pvchange.c (+1/-1) tools/pvck.c (+1/-1) tools/pvcreate.c (+3/-3) tools/pvmove.c (+32/-29) tools/pvremove.c (+9/-6) tools/pvresize.c (+1/-1) tools/pvscan.c (+146/-7) tools/reporter.c (+3/-0) tools/toollib.c (+57/-39) tools/toollib.h (+0/-1) tools/tools.h (+3/-0) tools/vgcfgbackup.c (+1/-1) tools/vgcfgrestore.c (+2/-0) tools/vgchange.c (+17/-10) tools/vgconvert.c (+1/-5) tools/vgcreate.c (+2/-0) tools/vgmerge.c (+2/-1) tools/vgreduce.c (+4/-4) tools/vgremove.c (+1/-1) tools/vgrename.c (+8/-3) tools/vgscan.c (+8/-1) tools/vgsplit.c (+16/-3) udev/13-dm-disk.rules (+0/-27) udev/13-dm-disk.rules.in (+38/-0) udev/69-dm-lvm-metad.rules (+25/-0) udev/Makefile.in (+12/-2) |
To merge this branch: bzr merge lp:~xnox/ubuntu/quantal/lvm2/merge95
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Steve Langasek | | | Needs Fixing |
| Kees Cook | | | Pending |
| Canonical Foundations Team | | | Pending |
| Ubuntu branches | | | Pending |

Review via email: mp+119696@code.launchpad.net
Commit message
Description of the change
* Debian accepted some of the event manager packaging.
* Debian multiarched the libdevmapper libraries, but not libdevmapper-dev.
* Ubuntu's "Don't install documentation in udebs" change vanished without a trace.
* Ubuntu's "don't ship lvm2 init script" change turned into "don't ship clvm init script"; see bug 1037033.
* I added multiarch support for libdevmapper-dev.
* It looks like Debian refreshed the api patch, which is now different from Ubuntu's. Since the 'new' API only appeared in quantal so far, I am tempted to switch to Debian's API and recompile. See `bzr log -p debian/*.symbols`.

The rest stayed the same.
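The libdevmapper-dev multiarch change amounts to control-field additions along these lines (a sketch only; the exact stanza, section, and dependency list should be checked against the merged debian/control):

```
Package: libdevmapper-dev
Section: libdevel
Architecture: any
Multi-Arch: same
Depends: libdevmapper1.02.1 (= ${binary:Version}), ${misc:Depends}
```

With `Multi-Arch: same`, co-installation of the dev package for several architectures requires that the shipped files either live in architecture-qualified paths (e.g. `/usr/lib/${DEB_HOST_MULTIARCH}/`) or be byte-for-byte identical across architectures.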
79. By Dimitri John Ledkov

    Note additional multi-arch changes only once.

80. By Dimitri John Ledkov

    (omit leading /var/): create /run/lvm if it doesn't exist.
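The /run/lvm change above can be sketched as follows (the function name is invented for illustration; the real init script hard-codes the path): /run is a tmpfs that starts empty on every boot, so the directory must be recreated before any LVM tool needs its lock files.

```shell
# Ensure a runtime directory exists with restrictive permissions.
# The real init script would operate on /run/lvm directly; a throwaway
# directory is used here so the sketch is safe to run unprivileged.
ensure_run_dir() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        mkdir -p "$dir"
        chmod 0700 "$dir"
    fi
}

# In the init script this would be: ensure_run_dir /run/lvm
ensure_run_dir "${TMPDIR:-/tmp}/lvm-run-demo"
```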
81. By Dimitri John Ledkov

    * debian/lvm2.{preinst, postinst, postrm}:
      - Implement removal of obsolete /etc/init.d/lvm2 conffile, which
        should not have been re-introduced in Quantal.

82. By Dimitri John Ledkov

    minimise diff

83. By Dimitri John Ledkov

    * libdevmapper-event1.02.1:
      - Add Breaks: dmeventd (<< 2.02.95-4ubuntu1) due to debian symbol rename
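In debian/control terms, the Breaks added in commit 83 is a one-line relationship (fragment reconstructed from the changelog entry; surrounding fields omitted):

```
Package: libdevmapper-event1.02.1
Breaks: dmeventd (<< 2.02.95-4ubuntu1)
```

This forces older dmeventd builds, which expect the pre-rename symbols, to be upgraded together with the library rather than left in a broken partial state.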
Dimitri John Ledkov (xnox) wrote:
Addressed all comments.

/etc/init.d/lvm2 was only reintroduced in quantal; I added rm_conffile snippets for it, which can be removed in R.
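The rm_conffile snippets follow dpkg's standard obsolete-conffile dance. The sketch below mimics the core rule in plain shell (illustrative only — the packaging uses the usual dpkg maintainer-script helpers, and the function and file names here are invented for the demo): remove the conffile on upgrade only if the administrator never modified it, otherwise keep a backup.

```shell
# Remove an obsolete conffile if unmodified; otherwise keep it as a
# .dpkg-bak backup so local changes are not silently discarded.
remove_obsolete_conffile() {
    conffile="$1"
    shipped_md5="$2"
    [ -e "$conffile" ] || return 0
    if [ "$(md5sum < "$conffile" | cut -d' ' -f1)" = "$shipped_md5" ]; then
        rm -f "$conffile"                     # unmodified: safe to drop
    else
        mv "$conffile" "$conffile.dpkg-bak"   # modified: preserve a copy
    fi
}

# Demo with a throwaway file standing in for /etc/init.d/lvm2:
demo="${TMPDIR:-/tmp}/lvm2-init-demo"
printf 'shipped contents\n' > "$demo"
remove_obsolete_conffile "$demo" \
    "$(printf 'shipped contents\n' | md5sum | cut -d' ' -f1)"
```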
Preview Diff
1 | === added file '.pc/applied-patches' |
2 | --- .pc/applied-patches 1970-01-01 00:00:00 +0000 |
3 | +++ .pc/applied-patches 2012-08-21 10:18:22 +0000 |
4 | @@ -0,0 +1,8 @@ |
5 | +install.patch |
6 | +libs-cleanup.patch |
7 | +dirs.patch |
8 | +force-modprobe.patch |
9 | +implicit-pointer.patch |
10 | +avoid-dev-block.patch |
11 | +dm-event-api.patch |
12 | +monitoring-default-off.patch |
13 | |
14 | === removed file '.pc/applied-patches' |
15 | --- .pc/applied-patches 2012-04-14 02:57:53 +0000 |
16 | +++ .pc/applied-patches 1970-01-01 00:00:00 +0000 |
17 | @@ -1,7 +0,0 @@ |
18 | -install.patch |
19 | -libs-cleanup.patch |
20 | -dirs.patch |
21 | -force-modprobe.patch |
22 | -implicit-pointer.patch |
23 | -avoid-dev-block.patch |
24 | -monitoring-default-off.patch |
25 | |
26 | === added directory '.pc/avoid-dev-block.patch' |
27 | === removed directory '.pc/avoid-dev-block.patch' |
28 | === added directory '.pc/avoid-dev-block.patch/lib' |
29 | === removed directory '.pc/avoid-dev-block.patch/lib' |
30 | === added directory '.pc/avoid-dev-block.patch/lib/device' |
31 | === removed directory '.pc/avoid-dev-block.patch/lib/device' |
32 | === added file '.pc/avoid-dev-block.patch/lib/device/dev-cache.c' |
33 | --- .pc/avoid-dev-block.patch/lib/device/dev-cache.c 1970-01-01 00:00:00 +0000 |
34 | +++ .pc/avoid-dev-block.patch/lib/device/dev-cache.c 2012-08-21 10:18:22 +0000 |
35 | @@ -0,0 +1,1018 @@ |
36 | +/* |
37 | + * Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved. |
38 | + * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved. |
39 | + * |
40 | + * This file is part of LVM2. |
41 | + * |
42 | + * This copyrighted material is made available to anyone wishing to use, |
43 | + * modify, copy, or redistribute it subject to the terms and conditions |
44 | + * of the GNU Lesser General Public License v.2.1. |
45 | + * |
46 | + * You should have received a copy of the GNU Lesser General Public License |
47 | + * along with this program; if not, write to the Free Software Foundation, |
48 | + * Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
49 | + */ |
50 | + |
51 | +#include "lib.h" |
52 | +#include "dev-cache.h" |
53 | +#include "lvm-types.h" |
54 | +#include "btree.h" |
55 | +#include "filter.h" |
56 | +#include "filter-persistent.h" |
57 | +#include "toolcontext.h" |
58 | + |
59 | +#include <unistd.h> |
60 | +#include <sys/param.h> |
61 | +#include <dirent.h> |
62 | + |
63 | +struct dev_iter { |
64 | + struct btree_iter *current; |
65 | + struct dev_filter *filter; |
66 | +}; |
67 | + |
68 | +struct dir_list { |
69 | + struct dm_list list; |
70 | + char dir[0]; |
71 | +}; |
72 | + |
73 | +static struct { |
74 | + struct dm_pool *mem; |
75 | + struct dm_hash_table *names; |
76 | + struct btree *devices; |
77 | + struct dm_regex *preferred_names_matcher; |
78 | + const char *dev_dir; |
79 | + |
80 | + int has_scanned; |
81 | + struct dm_list dirs; |
82 | + struct dm_list files; |
83 | + |
84 | +} _cache; |
85 | + |
86 | +#define _zalloc(x) dm_pool_zalloc(_cache.mem, (x)) |
87 | +#define _free(x) dm_pool_free(_cache.mem, (x)) |
88 | +#define _strdup(x) dm_pool_strdup(_cache.mem, (x)) |
89 | + |
90 | +static int _insert(const char *path, int rec, int check_with_udev_db); |
91 | + |
92 | +/* Setup non-zero members of passed zeroed 'struct device' */ |
93 | +static void _dev_init(struct device *dev, int max_error_count) |
94 | +{ |
95 | + dev->block_size = -1; |
96 | + dev->fd = -1; |
97 | + dev->read_ahead = -1; |
98 | + dev->max_error_count = max_error_count; |
99 | + |
100 | + dm_list_init(&dev->aliases); |
101 | + dm_list_init(&dev->open_list); |
102 | +} |
103 | + |
104 | +struct device *dev_create_file(const char *filename, struct device *dev, |
105 | + struct str_list *alias, int use_malloc) |
106 | +{ |
107 | + int allocate = !dev; |
108 | + |
109 | + if (allocate) { |
110 | + if (use_malloc) { |
111 | + if (!(dev = dm_zalloc(sizeof(*dev)))) { |
112 | + log_error("struct device allocation failed"); |
113 | + return NULL; |
114 | + } |
115 | + if (!(alias = dm_zalloc(sizeof(*alias)))) { |
116 | + log_error("struct str_list allocation failed"); |
117 | + dm_free(dev); |
118 | + return NULL; |
119 | + } |
120 | + if (!(alias->str = dm_strdup(filename))) { |
121 | + log_error("filename strdup failed"); |
122 | + dm_free(dev); |
123 | + dm_free(alias); |
124 | + return NULL; |
125 | + } |
126 | + } else { |
127 | + if (!(dev = _zalloc(sizeof(*dev)))) { |
128 | + log_error("struct device allocation failed"); |
129 | + return NULL; |
130 | + } |
131 | + if (!(alias = _zalloc(sizeof(*alias)))) { |
132 | + log_error("struct str_list allocation failed"); |
133 | + _free(dev); |
134 | + return NULL; |
135 | + } |
136 | + if (!(alias->str = _strdup(filename))) { |
137 | + log_error("filename strdup failed"); |
138 | + return NULL; |
139 | + } |
140 | + } |
141 | + } else if (!(alias->str = dm_strdup(filename))) { |
142 | + log_error("filename strdup failed"); |
143 | + return NULL; |
144 | + } |
145 | + |
146 | + _dev_init(dev, NO_DEV_ERROR_COUNT_LIMIT); |
147 | + dev->flags = DEV_REGULAR | ((use_malloc) ? DEV_ALLOCED : 0); |
148 | + dm_list_add(&dev->aliases, &alias->list); |
149 | + |
150 | + return dev; |
151 | +} |
152 | + |
153 | +static struct device *_dev_create(dev_t d) |
154 | +{ |
155 | + struct device *dev; |
156 | + |
157 | + if (!(dev = _zalloc(sizeof(*dev)))) { |
158 | + log_error("struct device allocation failed"); |
159 | + return NULL; |
160 | + } |
161 | + |
162 | + _dev_init(dev, dev_disable_after_error_count()); |
163 | + dev->dev = d; |
164 | + |
165 | + return dev; |
166 | +} |
167 | + |
168 | +void dev_set_preferred_name(struct str_list *sl, struct device *dev) |
169 | +{ |
170 | + /* |
171 | + * Don't interfere with ordering specified in config file. |
172 | + */ |
173 | + if (_cache.preferred_names_matcher) |
174 | + return; |
175 | + |
176 | + log_debug("%s: New preferred name", sl->str); |
177 | + dm_list_del(&sl->list); |
178 | + dm_list_add_h(&dev->aliases, &sl->list); |
179 | +} |
180 | + |
181 | +/* |
182 | + * Check whether path0 or path1 contains the subpath. The path that |
183 | + * *does not* contain the subpath wins (return 0 or 1). If both paths |
184 | + * contain the subpath, return -1. If none of them contains the subpath, |
185 | + * return -2. |
186 | + */ |
187 | +static int _builtin_preference(const char *path0, const char *path1, |
188 | + size_t skip_prefix_count, const char *subpath) |
189 | +{ |
190 | + size_t subpath_len; |
191 | + int r0, r1; |
192 | + |
193 | + subpath_len = strlen(subpath); |
194 | + |
195 | + r0 = !strncmp(path0 + skip_prefix_count, subpath, subpath_len); |
196 | + r1 = !strncmp(path1 + skip_prefix_count, subpath, subpath_len); |
197 | + |
198 | + if (!r0 && r1) |
199 | + /* path0 does not have the subpath - it wins */ |
200 | + return 0; |
201 | + else if (r0 && !r1) |
202 | + /* path1 does not have the subpath - it wins */ |
203 | + return 1; |
204 | + else if (r0 && r1) |
205 | + /* both of them have the subpath */ |
206 | + return -1; |
207 | + |
208 | + /* no path has the subpath */ |
209 | + return -2; |
210 | +} |
211 | + |
212 | +static int _apply_builtin_path_preference_rules(const char *path0, const char *path1) |
213 | +{ |
214 | + size_t devdir_len; |
215 | + int r; |
216 | + |
217 | + devdir_len = strlen(_cache.dev_dir); |
218 | + |
219 | + if (!strncmp(path0, _cache.dev_dir, devdir_len) && |
220 | + !strncmp(path1, _cache.dev_dir, devdir_len)) { |
221 | + /* |
222 | + * We're trying to achieve the ordering: |
223 | + * /dev/block/ < /dev/dm-* < /dev/disk/ < /dev/mapper/ < anything else |
224 | + */ |
225 | + |
226 | + /* Prefer any other path over /dev/block/ path. */ |
227 | + if ((r = _builtin_preference(path0, path1, devdir_len, "block/")) >= -1) |
228 | + return r; |
229 | + |
230 | + /* Prefer any other path over /dev/dm-* path. */ |
231 | + if ((r = _builtin_preference(path0, path1, devdir_len, "dm-")) >= -1) |
232 | + return r; |
233 | + |
234 | + /* Prefer any other path over /dev/disk/ path. */ |
235 | + if ((r = _builtin_preference(path0, path1, devdir_len, "disk/")) >= -1) |
236 | + return r; |
237 | + |
238 | + /* Prefer any other path over /dev/mapper/ path. */ |
239 | + if ((r = _builtin_preference(path0, path1, 0, dm_dir())) >= -1) |
240 | + return r; |
241 | + } |
242 | + |
243 | + return -1; |
244 | +} |
245 | + |
246 | +/* Return 1 if we prefer path1 else return 0 */ |
247 | +static int _compare_paths(const char *path0, const char *path1) |
248 | +{ |
249 | + int slash0 = 0, slash1 = 0; |
250 | + int m0, m1; |
251 | + const char *p; |
252 | + char p0[PATH_MAX], p1[PATH_MAX]; |
253 | + char *s0, *s1; |
254 | + struct stat stat0, stat1; |
255 | + int r; |
256 | + |
257 | + /* |
258 | + * FIXME Better to compare patterns one-at-a-time against all names. |
259 | + */ |
260 | + if (_cache.preferred_names_matcher) { |
261 | + m0 = dm_regex_match(_cache.preferred_names_matcher, path0); |
262 | + m1 = dm_regex_match(_cache.preferred_names_matcher, path1); |
263 | + |
264 | + if (m0 != m1) { |
265 | + if (m0 < 0) |
266 | + return 1; |
267 | + if (m1 < 0) |
268 | + return 0; |
269 | + if (m0 < m1) |
270 | + return 1; |
271 | + if (m1 < m0) |
272 | + return 0; |
273 | + } |
274 | + } |
275 | + |
276 | + /* Apply built-in preference rules first. */ |
277 | + if ((r = _apply_builtin_path_preference_rules(path0, path1)) >= 0) |
278 | + return r; |
279 | + |
280 | + /* Return the path with fewer slashes */ |
281 | + for (p = path0; p++; p = (const char *) strchr(p, '/')) |
282 | + slash0++; |
283 | + |
284 | + for (p = path1; p++; p = (const char *) strchr(p, '/')) |
285 | + slash1++; |
286 | + |
287 | + if (slash0 < slash1) |
288 | + return 0; |
289 | + if (slash1 < slash0) |
290 | + return 1; |
291 | + |
292 | + strncpy(p0, path0, sizeof(p0) - 1); |
293 | + p0[sizeof(p0) - 1] = '\0'; |
294 | + strncpy(p1, path1, sizeof(p1) - 1); |
295 | + p1[sizeof(p1) - 1] = '\0'; |
296 | + s0 = p0 + 1; |
297 | + s1 = p1 + 1; |
298 | + |
299 | + /* |
300 | + * If we reach here, both paths are the same length. |
301 | + * Now skip past identical path components. |
302 | + */ |
303 | + while (*s0 && *s0 == *s1) |
304 | + s0++, s1++; |
305 | + |
306 | + /* We prefer symlinks - they exist for a reason! |
307 | + * So we prefer a shorter path before the first symlink in the name. |
308 | + * FIXME Configuration option to invert this? */ |
309 | + while (s0) { |
310 | + s0 = strchr(s0, '/'); |
311 | + s1 = strchr(s1, '/'); |
312 | + if (s0) { |
313 | + *s0 = '\0'; |
314 | + *s1 = '\0'; |
315 | + } |
316 | + if (lstat(p0, &stat0)) { |
317 | + log_sys_very_verbose("lstat", p0); |
318 | + return 1; |
319 | + } |
320 | + if (lstat(p1, &stat1)) { |
321 | + log_sys_very_verbose("lstat", p1); |
322 | + return 0; |
323 | + } |
324 | + if (S_ISLNK(stat0.st_mode) && !S_ISLNK(stat1.st_mode)) |
325 | + return 0; |
326 | + if (!S_ISLNK(stat0.st_mode) && S_ISLNK(stat1.st_mode)) |
327 | + return 1; |
328 | + if (s0) { |
329 | + *s0++ = '/'; |
330 | + *s1++ = '/'; |
331 | + } |
332 | + } |
333 | + |
334 | + /* ASCII comparison */ |
335 | + if (strcmp(path0, path1) < 0) |
336 | + return 0; |
337 | + else |
338 | + return 1; |
339 | +} |
340 | + |
341 | +static int _add_alias(struct device *dev, const char *path) |
342 | +{ |
343 | + struct str_list *sl = _zalloc(sizeof(*sl)); |
344 | + struct str_list *strl; |
345 | + const char *oldpath; |
346 | + int prefer_old = 1; |
347 | + |
348 | + if (!sl) |
349 | + return_0; |
350 | + |
351 | + /* Is name already there? */ |
352 | + dm_list_iterate_items(strl, &dev->aliases) { |
353 | + if (!strcmp(strl->str, path)) { |
354 | + log_debug("%s: Already in device cache", path); |
355 | + return 1; |
356 | + } |
357 | + } |
358 | + |
359 | + sl->str = path; |
360 | + |
361 | + if (!dm_list_empty(&dev->aliases)) { |
362 | + oldpath = dm_list_item(dev->aliases.n, struct str_list)->str; |
363 | + prefer_old = _compare_paths(path, oldpath); |
364 | + log_debug("%s: Aliased to %s in device cache%s", |
365 | + path, oldpath, prefer_old ? "" : " (preferred name)"); |
366 | + |
367 | + } else |
368 | + log_debug("%s: Added to device cache", path); |
369 | + |
370 | + if (prefer_old) |
371 | + dm_list_add(&dev->aliases, &sl->list); |
372 | + else |
373 | + dm_list_add_h(&dev->aliases, &sl->list); |
374 | + |
375 | + return 1; |
376 | +} |
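`_add_alias()` keeps the preferred name at the head of `dev->aliases`, which is why `dev_name()` can simply return the first list entry. A toy model of that head-vs-tail policy (hypothetical array-backed types for the sketch, not the real `dm_list` API):

```c
#include <assert.h>
#include <string.h>

/* Toy alias list mirroring the _add_alias() policy: the preferred
 * name lives at index 0, so the "device name" is always names[0]. */
#define MAX_ALIASES 8

struct alias_list {
	const char *names[MAX_ALIASES];
	int count;
};

/* prefer_new != 0: insert at the head so the new name becomes preferred;
 * otherwise append at the tail, leaving the current preference intact. */
void add_alias(struct alias_list *al, const char *name, int prefer_new)
{
	int i;

	if (al->count >= MAX_ALIASES)
		return;

	if (prefer_new) {
		for (i = al->count; i > 0; i--)
			al->names[i] = al->names[i - 1];
		al->names[0] = name;
	} else {
		al->names[al->count] = name;
	}

	al->count++;
}

/* Equivalent of dev_name(): first alias, or a fallback string. */
const char *alias_dev_name(const struct alias_list *al)
{
	return al->count ? al->names[0] : "unknown device";
}
```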
377 | + |
378 | +/* |
379 | + * Either creates a new dev, or adds an alias to |
380 | + * an existing dev. |
381 | + */ |
382 | +static int _insert_dev(const char *path, dev_t d) |
383 | +{ |
384 | + struct device *dev; |
385 | + static dev_t loopfile_count = 0; |
386 | + int loopfile = 0; |
387 | + char *path_copy; |
388 | + |
389 | + /* Generate pretend device numbers for loopfiles */ |
390 | + if (!d) { |
391 | + if (dm_hash_lookup(_cache.names, path)) |
392 | + return 1; |
393 | + d = ++loopfile_count; |
394 | + loopfile = 1; |
395 | + } |
396 | + |
397 | + /* Is this device already registered? */ |
398 | + if (!(dev = (struct device *) btree_lookup(_cache.devices, |
399 | + (uint32_t) d))) { |
400 | + /* create new device */ |
401 | + if (loopfile) { |
402 | + if (!(dev = dev_create_file(path, NULL, NULL, 0))) |
403 | + return_0; |
404 | + } else if (!(dev = _dev_create(d))) |
405 | + return_0; |
406 | + |
407 | + if (!(btree_insert(_cache.devices, (uint32_t) d, dev))) { |
408 | + log_error("Couldn't insert device into binary tree."); |
409 | + _free(dev); |
410 | + return 0; |
411 | + } |
412 | + } |
413 | + |
414 | + if (!(path_copy = dm_pool_strdup(_cache.mem, path))) { |
415 | + log_error("Failed to duplicate path string."); |
416 | + return 0; |
417 | + } |
418 | + |
419 | + if (!loopfile && !_add_alias(dev, path_copy)) { |
420 | + log_error("Couldn't add alias to dev cache."); |
421 | + return 0; |
422 | + } |
423 | + |
424 | + if (!dm_hash_insert(_cache.names, path_copy, dev)) { |
425 | + log_error("Couldn't add name to hash in dev cache."); |
426 | + return 0; |
427 | + } |
428 | + |
429 | + return 1; |
430 | +} |
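The "pretend device numbers" trick for loop files above boils down to a static counter handing out fake `dev_t` values so loop-backed files can share the btree with real block devices. A minimal sketch of just that mechanism (`next_loopfile_devno` is a name invented for this sketch):

```c
#include <sys/types.h>

/* Hand out monotonically increasing fake device numbers for files
 * that have no real dev_t, mirroring the static loopfile_count
 * in _insert_dev(). Values start at 1 so 0 still means "no devno". */
dev_t next_loopfile_devno(void)
{
	static dev_t loopfile_count = 0;

	return ++loopfile_count;
}
```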
431 | + |
432 | +static char *_join(const char *dir, const char *name) |
433 | +{ |
434 | + size_t len = strlen(dir) + strlen(name) + 2; |
435 | + char *r = dm_malloc(len); |
436 | + if (r) |
437 | + snprintf(r, len, "%s/%s", dir, name); |
438 | + |
439 | + return r; |
440 | +} |
441 | + |
442 | +/* |
443 | + * Get rid of extra slashes in the path string. |
444 | + */ |
445 | +static void _collapse_slashes(char *str) |
446 | +{ |
447 | + char *ptr; |
448 | + int was_slash = 0; |
449 | + |
450 | + for (ptr = str; *ptr; ptr++) { |
451 | + if (*ptr == '/') { |
452 | + if (was_slash) |
453 | + continue; |
454 | + |
455 | + was_slash = 1; |
456 | + } else |
457 | + was_slash = 0; |
458 | + *str++ = *ptr; |
459 | + } |
460 | + |
461 | + *str = *ptr; |
462 | +} |
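The in-place collapsing loop above is self-contained enough to exercise directly; this copy (renamed `collapse_slashes` for the sketch) is behaviour-identical:

```c
#include <string.h>

/* Remove repeated '/' characters in place, as _collapse_slashes() does:
 * str trails behind ptr, copying each character except slashes that
 * immediately follow another slash. */
void collapse_slashes(char *str)
{
	char *ptr;
	int was_slash = 0;

	for (ptr = str; *ptr; ptr++) {
		if (*ptr == '/') {
			if (was_slash)
				continue;

			was_slash = 1;
		} else
			was_slash = 0;
		*str++ = *ptr;
	}

	*str = *ptr;	/* copy the terminating NUL */
}
```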
463 | + |
464 | +static int _insert_dir(const char *dir) |
465 | +{ |
466 | + int n, dirent_count, r = 1; |
467 | + struct dirent **dirent; |
468 | + char *path; |
469 | + |
470 | + dirent_count = scandir(dir, &dirent, NULL, alphasort); |
471 | + if (dirent_count > 0) { |
472 | + for (n = 0; n < dirent_count; n++) { |
473 | + if (dirent[n]->d_name[0] == '.') { |
474 | + free(dirent[n]); |
475 | + continue; |
476 | + } |
477 | + |
478 | + if (!(path = _join(dir, dirent[n]->d_name))) |
479 | + return_0; |
480 | + |
481 | + _collapse_slashes(path); |
482 | + r &= _insert(path, 1, 0); |
483 | + dm_free(path); |
484 | + |
485 | + free(dirent[n]); |
486 | + } |
487 | + free(dirent); |
488 | + } |
489 | + |
490 | + return r; |
491 | +} |
492 | + |
493 | +static int _insert_file(const char *path) |
494 | +{ |
495 | + struct stat info; |
496 | + |
497 | + if (stat(path, &info) < 0) { |
498 | + log_sys_very_verbose("stat", path); |
499 | + return 0; |
500 | + } |
501 | + |
502 | + if (!S_ISREG(info.st_mode)) { |
503 | + log_debug("%s: Not a regular file", path); |
504 | + return 0; |
505 | + } |
506 | + |
507 | + if (!_insert_dev(path, 0)) |
508 | + return_0; |
509 | + |
510 | + return 1; |
511 | +} |
512 | + |
513 | +#ifdef UDEV_SYNC_SUPPORT |
514 | + |
515 | +static int _device_in_udev_db(const dev_t d) |
516 | +{ |
517 | + struct udev *udev; |
518 | + struct udev_device *udev_device; |
519 | + |
520 | + if (!(udev = udev_get_library_context())) |
521 | + return_0; |
522 | + |
523 | + if ((udev_device = udev_device_new_from_devnum(udev, 'b', d))) { |
524 | + udev_device_unref(udev_device); |
525 | + return 1; |
526 | + } |
527 | + |
528 | + return 0; |
529 | +} |
530 | + |
531 | +static int _insert_udev_dir(struct udev *udev, const char *dir) |
532 | +{ |
533 | + struct udev_enumerate *udev_enum = NULL; |
534 | + struct udev_list_entry *device_entry, *symlink_entry; |
535 | + const char *node_name, *symlink_name; |
536 | + struct udev_device *device; |
537 | + int r = 1; |
538 | + |
539 | + if (!(udev_enum = udev_enumerate_new(udev))) |
540 | + goto bad; |
541 | + |
542 | + if (udev_enumerate_add_match_subsystem(udev_enum, "block") || |
543 | + udev_enumerate_scan_devices(udev_enum)) |
544 | + goto bad; |
545 | + |
546 | + udev_list_entry_foreach(device_entry, udev_enumerate_get_list_entry(udev_enum)) { |
547 | + if (!(device = udev_device_new_from_syspath(udev, udev_list_entry_get_name(device_entry)))) { |
548 | + log_warn("WARNING: udev failed to return a device entry."); |
549 | + continue; |
550 | + } |
551 | + |
552 | + if (!(node_name = udev_device_get_devnode(device))) |
553 | + log_warn("WARNING: udev failed to return a device node."); |
554 | + else |
555 | + r &= _insert(node_name, 0, 0); |
556 | + |
557 | + udev_list_entry_foreach(symlink_entry, udev_device_get_devlinks_list_entry(device)) { |
558 | + if (!(symlink_name = udev_list_entry_get_name(symlink_entry))) |
559 | + log_warn("WARNING: udev failed to return a symlink name."); |
560 | + else |
561 | + r &= _insert(symlink_name, 0, 0); |
562 | + } |
563 | + |
564 | + udev_device_unref(device); |
565 | + } |
566 | + |
567 | + udev_enumerate_unref(udev_enum); |
568 | + return r; |
569 | + |
570 | +bad: |
571 | + log_error("Failed to enumerate udev device list."); |
572 | + udev_enumerate_unref(udev_enum); |
573 | + return 0; |
574 | +} |
575 | + |
576 | +static void _insert_dirs(struct dm_list *dirs) |
577 | +{ |
578 | + struct dir_list *dl; |
579 | + struct udev *udev; |
580 | + int with_udev; |
581 | + |
582 | + with_udev = obtain_device_list_from_udev() && |
583 | + (udev = udev_get_library_context()); |
584 | + |
585 | + dm_list_iterate_items(dl, &_cache.dirs) { |
586 | + if (with_udev) { |
587 | + if (!_insert_udev_dir(udev, dl->dir)) |
588 | + log_debug("%s: Failed to insert devices from " |
589 | + "udev-managed directory to device " |
590 | + "cache fully", dl->dir); |
591 | + } |
592 | + else if (!_insert_dir(dl->dir)) |
593 | + log_debug("%s: Failed to insert devices to " |
594 | + "device cache fully", dl->dir); |
595 | + } |
596 | +} |
597 | + |
598 | +#else /* UDEV_SYNC_SUPPORT */ |
599 | + |
600 | +static int _device_in_udev_db(const dev_t d) |
601 | +{ |
602 | + return 0; |
603 | +} |
604 | + |
605 | +static void _insert_dirs(struct dm_list *dirs) |
606 | +{ |
607 | + struct dir_list *dl; |
608 | + |
609 | + dm_list_iterate_items(dl, &_cache.dirs) |
610 | + _insert_dir(dl->dir); |
611 | +} |
612 | + |
613 | +#endif /* UDEV_SYNC_SUPPORT */ |
614 | + |
615 | +static int _insert(const char *path, int rec, int check_with_udev_db) |
616 | +{ |
617 | + struct stat info; |
618 | + int r = 0; |
619 | + |
620 | + if (stat(path, &info) < 0) { |
621 | + log_sys_very_verbose("stat", path); |
622 | + return 0; |
623 | + } |
624 | + |
625 | + if (check_with_udev_db && !_device_in_udev_db(info.st_rdev)) { |
626 | + log_very_verbose("%s: Not in udev db", path); |
627 | + return 0; |
628 | + } |
629 | + |
630 | + if (S_ISDIR(info.st_mode)) { /* add a directory */ |
631 | + /* check it's not a symbolic link */ |
632 | + if (lstat(path, &info) < 0) { |
633 | + log_sys_very_verbose("lstat", path); |
634 | + return 0; |
635 | + } |
636 | + |
637 | + if (S_ISLNK(info.st_mode)) { |
638 | + log_debug("%s: Symbolic link to directory", path); |
639 | + return 0; |
640 | + } |
641 | + |
642 | + if (rec) |
643 | + r = _insert_dir(path); |
644 | + |
645 | + } else { /* add a device */ |
646 | + if (!S_ISBLK(info.st_mode)) { |
647 | + log_debug("%s: Not a block device", path); |
648 | + return 0; |
649 | + } |
650 | + |
651 | + if (!_insert_dev(path, info.st_rdev)) |
652 | + return_0; |
653 | + |
654 | + r = 1; |
655 | + } |
656 | + |
657 | + return r; |
658 | +} |
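The dispatch in `_insert()` above is stat-driven: directories are recursed into (unless they are symlinks), block devices are cached, and everything else is skipped. The classification step alone can be sketched like this (`classify_path` and its single-character return codes are inventions of this sketch):

```c
#include <sys/stat.h>

/* Classify a path the way _insert() does before dispatching:
 * 'd' = directory, 'b' = block device, '?' = skip (missing,
 * regular file, char device, ...). */
char classify_path(const char *path)
{
	struct stat info;

	if (stat(path, &info) < 0)
		return '?';
	if (S_ISDIR(info.st_mode))
		return 'd';
	if (S_ISBLK(info.st_mode))
		return 'b';

	return '?';
}
```

The real function additionally re-checks directories with `lstat()` so symlinked directories are not followed.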
659 | + |
660 | +static void _full_scan(int dev_scan) |
661 | +{ |
662 | + struct dir_list *dl; |
663 | + |
664 | + if (_cache.has_scanned && !dev_scan) |
665 | + return; |
666 | + |
667 | + _insert_dirs(&_cache.dirs); |
668 | + |
669 | + dm_list_iterate_items(dl, &_cache.files) |
670 | + _insert_file(dl->dir); |
671 | + |
672 | + _cache.has_scanned = 1; |
673 | + init_full_scan_done(1); |
674 | +} |
675 | + |
676 | +int dev_cache_has_scanned(void) |
677 | +{ |
678 | + return _cache.has_scanned; |
679 | +} |
680 | + |
681 | +void dev_cache_scan(int do_scan) |
682 | +{ |
683 | + if (!do_scan) |
684 | + _cache.has_scanned = 1; |
685 | + else |
686 | + _full_scan(1); |
687 | +} |
688 | + |
689 | +static int _init_preferred_names(struct cmd_context *cmd) |
690 | +{ |
691 | + const struct dm_config_node *cn; |
692 | + const struct dm_config_value *v; |
693 | + struct dm_pool *scratch = NULL; |
694 | + const char **regex; |
695 | + unsigned count = 0; |
696 | + int i, r = 0; |
697 | + |
698 | + _cache.preferred_names_matcher = NULL; |
699 | + |
700 | + if (!(cn = find_config_tree_node(cmd, "devices/preferred_names")) || |
701 | + cn->v->type == DM_CFG_EMPTY_ARRAY) { |
702 | + log_very_verbose("devices/preferred_names not found in config file: " |
703 | + "using built-in preferences"); |
704 | + return 1; |
705 | + } |
706 | + |
707 | + for (v = cn->v; v; v = v->next) { |
708 | + if (v->type != DM_CFG_STRING) { |
709 | + log_error("preferred_names patterns must be enclosed in quotes"); |
710 | + return 0; |
711 | + } |
712 | + |
713 | + count++; |
714 | + } |
715 | + |
716 | + if (!(scratch = dm_pool_create("preferred device name matcher", 1024))) |
717 | + return_0; |
718 | + |
719 | + if (!(regex = dm_pool_alloc(scratch, sizeof(*regex) * count))) { |
720 | + log_error("Failed to allocate preferred device name " |
721 | + "pattern list."); |
722 | + goto out; |
723 | + } |
724 | + |
725 | + for (v = cn->v, i = count - 1; v; v = v->next, i--) { |
726 | + if (!(regex[i] = dm_pool_strdup(scratch, v->v.str))) { |
727 | + log_error("Failed to allocate a preferred device name " |
728 | + "pattern."); |
729 | + goto out; |
730 | + } |
731 | + } |
732 | + |
733 | + if (!(_cache.preferred_names_matcher = |
734 | + dm_regex_create(_cache.mem, regex, count))) { |
735 | + log_error("Preferred device name pattern matcher creation failed."); |
736 | + goto out; |
737 | + } |
738 | + |
739 | + r = 1; |
740 | + |
741 | +out: |
742 | + dm_pool_destroy(scratch); |
743 | + |
744 | + return r; |
745 | +} |
746 | + |
747 | +int dev_cache_init(struct cmd_context *cmd) |
748 | +{ |
749 | + _cache.names = NULL; |
750 | + _cache.has_scanned = 0; |
751 | + |
752 | + if (!(_cache.mem = dm_pool_create("dev_cache", 10 * 1024))) |
753 | + return_0; |
754 | + |
755 | + if (!(_cache.names = dm_hash_create(128))) { |
756 | + dm_pool_destroy(_cache.mem); |
757 | + _cache.mem = 0; |
758 | + return_0; |
759 | + } |
760 | + |
761 | + if (!(_cache.devices = btree_create(_cache.mem))) { |
762 | + log_error("Couldn't create binary tree for dev-cache."); |
763 | + goto bad; |
764 | + } |
765 | + |
766 | + if (!(_cache.dev_dir = _strdup(cmd->dev_dir))) { |
767 | + log_error("strdup dev_dir failed."); |
768 | + goto bad; |
769 | + } |
770 | + |
771 | + dm_list_init(&_cache.dirs); |
772 | + dm_list_init(&_cache.files); |
773 | + |
774 | + if (!_init_preferred_names(cmd)) |
775 | + goto_bad; |
776 | + |
777 | + return 1; |
778 | + |
779 | + bad: |
780 | + dev_cache_exit(); |
781 | + return 0; |
782 | +} |
783 | + |
784 | +static void _check_closed(struct device *dev) |
785 | +{ |
786 | + if (dev->fd >= 0) |
787 | + log_error("Device '%s' has been left open.", dev_name(dev)); |
788 | +} |
789 | + |
790 | +static void _check_for_open_devices(void) |
791 | +{ |
792 | + dm_hash_iter(_cache.names, (dm_hash_iterate_fn) _check_closed); |
793 | +} |
794 | + |
795 | +void dev_cache_exit(void) |
796 | +{ |
797 | + if (_cache.names) |
798 | + _check_for_open_devices(); |
799 | + |
800 | + if (_cache.preferred_names_matcher) |
801 | + _cache.preferred_names_matcher = NULL; |
802 | + |
803 | + if (_cache.mem) { |
804 | + dm_pool_destroy(_cache.mem); |
805 | + _cache.mem = NULL; |
806 | + } |
807 | + |
808 | + if (_cache.names) { |
809 | + dm_hash_destroy(_cache.names); |
810 | + _cache.names = NULL; |
811 | + } |
812 | + |
813 | + _cache.devices = NULL; |
814 | + _cache.has_scanned = 0; |
815 | + dm_list_init(&_cache.dirs); |
816 | + dm_list_init(&_cache.files); |
817 | +} |
818 | + |
819 | +int dev_cache_add_dir(const char *path) |
820 | +{ |
821 | + struct dir_list *dl; |
822 | + struct stat st; |
823 | + |
824 | + if (stat(path, &st)) { |
825 | + log_error("Ignoring %s: %s", path, strerror(errno)); |
826 | + /* But don't fail */ |
827 | + return 1; |
828 | + } |
829 | + |
830 | + if (!S_ISDIR(st.st_mode)) { |
831 | + log_error("Ignoring %s: Not a directory", path); |
832 | + return 1; |
833 | + } |
834 | + |
835 | + if (!(dl = _zalloc(sizeof(*dl) + strlen(path) + 1))) { |
836 | + log_error("dir_list allocation failed"); |
837 | + return 0; |
838 | + } |
839 | + |
840 | + strcpy(dl->dir, path); |
841 | + dm_list_add(&_cache.dirs, &dl->list); |
842 | + return 1; |
843 | +} |
844 | + |
845 | +int dev_cache_add_loopfile(const char *path) |
846 | +{ |
847 | + struct dir_list *dl; |
848 | + struct stat st; |
849 | + |
850 | + if (stat(path, &st)) { |
851 | + log_error("Ignoring %s: %s", path, strerror(errno)); |
852 | + /* But don't fail */ |
853 | + return 1; |
854 | + } |
855 | + |
856 | + if (!S_ISREG(st.st_mode)) { |
857 | + log_error("Ignoring %s: Not a regular file", path); |
858 | + return 1; |
859 | + } |
860 | + |
861 | + if (!(dl = _zalloc(sizeof(*dl) + strlen(path) + 1))) { |
862 | + log_error("dir_list allocation failed for file"); |
863 | + return 0; |
864 | + } |
865 | + |
866 | + strcpy(dl->dir, path); |
867 | + dm_list_add(&_cache.files, &dl->list); |
868 | + return 1; |
869 | +} |
870 | + |
871 | +/* Check cached device name is still valid before returning it */ |
872 | +/* This should be a rare occurrence */ |
873 | +/* set quiet if the cache is expected to be out-of-date */ |
874 | +/* FIXME Make rest of code pass/cache struct device instead of dev_name */ |
875 | +const char *dev_name_confirmed(struct device *dev, int quiet) |
876 | +{ |
877 | + struct stat buf; |
878 | + const char *name; |
879 | + int r; |
880 | + |
881 | + if ((dev->flags & DEV_REGULAR)) |
882 | + return dev_name(dev); |
883 | + |
884 | + while ((r = stat(name = dm_list_item(dev->aliases.n, |
885 | + struct str_list)->str, &buf)) || |
886 | + (buf.st_rdev != dev->dev)) { |
887 | + if (r < 0) { |
888 | + if (quiet) |
889 | + log_sys_debug("stat", name); |
890 | + else |
891 | + log_sys_error("stat", name); |
892 | + } |
893 | + if (quiet) |
894 | + log_debug("Path %s no longer valid for device(%d,%d)", |
895 | + name, (int) MAJOR(dev->dev), |
896 | + (int) MINOR(dev->dev)); |
897 | + else |
898 | + log_error("Path %s no longer valid for device(%d,%d)", |
899 | + name, (int) MAJOR(dev->dev), |
900 | + (int) MINOR(dev->dev)); |
901 | + |
902 | + /* Remove the incorrect hash entry */ |
903 | + dm_hash_remove(_cache.names, name); |
904 | + |
905 | + /* Leave list alone if there isn't an alternative name */ |
906 | + /* so dev_name will always find something to return. */ |
907 | + /* Otherwise add the name to the correct device. */ |
908 | + if (dm_list_size(&dev->aliases) > 1) { |
909 | + dm_list_del(dev->aliases.n); |
910 | + if (!r) |
911 | + _insert(name, 0, obtain_device_list_from_udev()); |
912 | + continue; |
913 | + } |
914 | + |
915 | + /* Scanning issues this inappropriately sometimes. */ |
916 | + log_debug("Aborting - please provide new pathname for what " |
917 | + "used to be %s", name); |
918 | + return NULL; |
919 | + } |
920 | + |
921 | + return dev_name(dev); |
922 | +} |
923 | + |
924 | +struct device *dev_cache_get(const char *name, struct dev_filter *f) |
925 | +{ |
926 | + struct stat buf; |
927 | + struct device *d = (struct device *) dm_hash_lookup(_cache.names, name); |
928 | + |
929 | + if (d && (d->flags & DEV_REGULAR)) |
930 | + return d; |
931 | + |
932 | + /* If the entry's wrong, remove it */ |
933 | + if (d && (stat(name, &buf) || (buf.st_rdev != d->dev))) { |
934 | + dm_hash_remove(_cache.names, name); |
935 | + d = NULL; |
936 | + } |
937 | + |
938 | + if (!d) { |
939 | + _insert(name, 0, obtain_device_list_from_udev()); |
940 | + d = (struct device *) dm_hash_lookup(_cache.names, name); |
941 | + if (!d) { |
942 | + _full_scan(0); |
943 | + d = (struct device *) dm_hash_lookup(_cache.names, name); |
944 | + } |
945 | + } |
946 | + |
947 | + return (d && (!f || (d->flags & DEV_REGULAR) || |
948 | + f->passes_filter(f, d))) ? d : NULL; |
949 | +} |
950 | + |
951 | +static struct device *_dev_cache_seek_devt(dev_t dev) |
952 | +{ |
953 | + struct device *d = NULL; |
954 | + struct dm_hash_node *n = dm_hash_get_first(_cache.names); |
955 | + while (n) { |
956 | + d = dm_hash_get_data(_cache.names, n); |
957 | + if (d->dev == dev) |
958 | + return d; |
959 | + n = dm_hash_get_next(_cache.names, n); |
960 | + } |
961 | + return NULL; |
962 | +} |
963 | + |
964 | +/* |
965 | + * TODO This is very inefficient. We probably want a hash table indexed by |
966 | + * major:minor for keys to speed up these lookups. |
967 | + */ |
968 | +struct device *dev_cache_get_by_devt(dev_t dev, struct dev_filter *f) |
969 | +{ |
970 | + struct device *d = _dev_cache_seek_devt(dev); |
971 | + |
972 | + if (d && (d->flags & DEV_REGULAR)) |
973 | + return d; |
974 | + |
975 | + if (!d) { |
976 | + _full_scan(0); |
977 | + d = _dev_cache_seek_devt(dev); |
978 | + } |
979 | + |
980 | + return (d && (!f || (d->flags & DEV_REGULAR) || |
981 | + f->passes_filter(f, d))) ? d : NULL; |
982 | +} |
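The TODO above suggests indexing devices by `major:minor` rather than walking the whole name hash. A minimal sketch of such a devno-keyed table (entirely hypothetical: fixed-size open addressing, no removal support, and not an LVM2 or libdevmapper API) might look like:

```c
#include <stddef.h>
#include <sys/types.h>

/* Fixed-size open-addressing table mapping dev_t to an opaque pointer,
 * replacing the O(n) scan in _dev_cache_seek_devt(). No deletion. */
#define DEVT_TABLE_SIZE 64	/* power of two for cheap masking */

struct devt_entry {
	dev_t dev;
	void *data;
	int used;
};

struct devt_table {
	struct devt_entry e[DEVT_TABLE_SIZE];
};

static size_t _devt_hash(dev_t d)
{
	/* Knuth-style multiplicative hash, masked to the table size. */
	return ((size_t) d * 2654435761u) & (DEVT_TABLE_SIZE - 1);
}

/* Insert or overwrite; returns 0 only if the table is full. */
int devt_insert(struct devt_table *t, dev_t d, void *data)
{
	size_t i = _devt_hash(d), probes;

	for (probes = 0; probes < DEVT_TABLE_SIZE; probes++) {
		struct devt_entry *e =
			&t->e[(i + probes) & (DEVT_TABLE_SIZE - 1)];
		if (!e->used || e->dev == d) {
			e->dev = d;
			e->data = data;
			e->used = 1;
			return 1;
		}
	}

	return 0;
}

/* Linear-probe lookup; returns NULL when the devno is absent. */
void *devt_lookup(struct devt_table *t, dev_t d)
{
	size_t i = _devt_hash(d), probes;

	for (probes = 0; probes < DEVT_TABLE_SIZE; probes++) {
		struct devt_entry *e =
			&t->e[(i + probes) & (DEVT_TABLE_SIZE - 1)];
		if (!e->used)
			return NULL;
		if (e->dev == d)
			return e->data;
	}

	return NULL;
}
```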
983 | + |
984 | +struct dev_iter *dev_iter_create(struct dev_filter *f, int dev_scan) |
985 | +{ |
986 | + struct dev_iter *di = dm_malloc(sizeof(*di)); |
987 | + |
988 | + if (!di) { |
989 | + log_error("dev_iter allocation failed"); |
990 | + return NULL; |
991 | + } |
992 | + |
993 | + if (dev_scan && !trust_cache()) { |
994 | + /* Flag gets reset between each command */ |
995 | + if (!full_scan_done()) |
996 | + persistent_filter_wipe(f); /* Calls _full_scan(1) */ |
997 | + } else |
998 | + _full_scan(0); |
999 | + |
1000 | + di->current = btree_first(_cache.devices); |
1001 | + di->filter = f; |
1002 | + di->filter->use_count++; |
1003 | + |
1004 | + return di; |
1005 | +} |
1006 | + |
1007 | +void dev_iter_destroy(struct dev_iter *iter) |
1008 | +{ |
1009 | + iter->filter->use_count--; |
1010 | + dm_free(iter); |
1011 | +} |
1012 | + |
1013 | +static struct device *_iter_next(struct dev_iter *iter) |
1014 | +{ |
1015 | + struct device *d = btree_get_data(iter->current); |
1016 | + iter->current = btree_next(iter->current); |
1017 | + return d; |
1018 | +} |
1019 | + |
1020 | +struct device *dev_iter_get(struct dev_iter *iter) |
1021 | +{ |
1022 | + while (iter->current) { |
1023 | + struct device *d = _iter_next(iter); |
1024 | + if (!iter->filter || (d->flags & DEV_REGULAR) || |
1025 | + iter->filter->passes_filter(iter->filter, d)) |
1026 | + return d; |
1027 | + } |
1028 | + |
1029 | + return NULL; |
1030 | +} |
1031 | + |
1032 | +void dev_reset_error_count(struct cmd_context *cmd) |
1033 | +{ |
1034 | + struct dev_iter iter; |
1035 | + |
1036 | + if (!_cache.devices) |
1037 | + return; |
1038 | + |
1039 | + iter.current = btree_first(_cache.devices); |
1040 | + while (iter.current) |
1041 | + _iter_next(&iter)->error_count = 0; |
1042 | +} |
1043 | + |
1044 | +int dev_fd(struct device *dev) |
1045 | +{ |
1046 | + return dev->fd; |
1047 | +} |
1048 | + |
1049 | +const char *dev_name(const struct device *dev) |
1050 | +{ |
1051 | + return (dev) ? dm_list_item(dev->aliases.n, struct str_list)->str : |
1052 | + "unknown device"; |
1053 | +} |
1054 | |
1055 | === removed file '.pc/avoid-dev-block.patch/lib/device/dev-cache.c' |
1056 | --- .pc/avoid-dev-block.patch/lib/device/dev-cache.c 2012-04-14 02:57:53 +0000 |
1057 | +++ .pc/avoid-dev-block.patch/lib/device/dev-cache.c 1970-01-01 00:00:00 +0000 |
1058 | @@ -1,976 +0,0 @@ |
1059 | -/* |
1060 | - * Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved. |
1061 | - * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved. |
1062 | - * |
1063 | - * This file is part of LVM2. |
1064 | - * |
1065 | - * This copyrighted material is made available to anyone wishing to use, |
1066 | - * modify, copy, or redistribute it subject to the terms and conditions |
1067 | - * of the GNU Lesser General Public License v.2.1. |
1068 | - * |
1069 | - * You should have received a copy of the GNU Lesser General Public License |
1070 | - * along with this program; if not, write to the Free Software Foundation, |
1071 | - * Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
1072 | - */ |
1073 | - |
1074 | -#include "lib.h" |
1075 | -#include "dev-cache.h" |
1076 | -#include "lvm-types.h" |
1077 | -#include "btree.h" |
1078 | -#include "filter.h" |
1079 | -#include "filter-persistent.h" |
1080 | -#include "toolcontext.h" |
1081 | - |
1082 | -#include <unistd.h> |
1083 | -#include <sys/param.h> |
1084 | -#include <dirent.h> |
1085 | - |
1086 | -struct dev_iter { |
1087 | - struct btree_iter *current; |
1088 | - struct dev_filter *filter; |
1089 | -}; |
1090 | - |
1091 | -struct dir_list { |
1092 | - struct dm_list list; |
1093 | - char dir[0]; |
1094 | -}; |
1095 | - |
1096 | -static struct { |
1097 | - struct dm_pool *mem; |
1098 | - struct dm_hash_table *names; |
1099 | - struct btree *devices; |
1100 | - struct dm_regex *preferred_names_matcher; |
1101 | - const char *dev_dir; |
1102 | - |
1103 | - int has_scanned; |
1104 | - struct dm_list dirs; |
1105 | - struct dm_list files; |
1106 | - |
1107 | -} _cache; |
1108 | - |
1109 | -#define _alloc(x) dm_pool_zalloc(_cache.mem, (x)) |
1110 | -#define _free(x) dm_pool_free(_cache.mem, (x)) |
1111 | -#define _strdup(x) dm_pool_strdup(_cache.mem, (x)) |
1112 | - |
1113 | -static int _insert(const char *path, int rec, int check_with_udev_db); |
1114 | - |
1115 | -struct device *dev_create_file(const char *filename, struct device *dev, |
1116 | - struct str_list *alias, int use_malloc) |
1117 | -{ |
1118 | - int allocate = !dev; |
1119 | - |
1120 | - if (allocate) { |
1121 | - if (use_malloc) { |
1122 | - if (!(dev = dm_malloc(sizeof(*dev)))) { |
1123 | - log_error("struct device allocation failed"); |
1124 | - return NULL; |
1125 | - } |
1126 | - if (!(alias = dm_malloc(sizeof(*alias)))) { |
1127 | - log_error("struct str_list allocation failed"); |
1128 | - dm_free(dev); |
1129 | - return NULL; |
1130 | - } |
1131 | - if (!(alias->str = dm_strdup(filename))) { |
1132 | - log_error("filename strdup failed"); |
1133 | - dm_free(dev); |
1134 | - dm_free(alias); |
1135 | - return NULL; |
1136 | - } |
1137 | - dev->flags = DEV_ALLOCED; |
1138 | - } else { |
1139 | - if (!(dev = _alloc(sizeof(*dev)))) { |
1140 | - log_error("struct device allocation failed"); |
1141 | - return NULL; |
1142 | - } |
1143 | - if (!(alias = _alloc(sizeof(*alias)))) { |
1144 | - log_error("struct str_list allocation failed"); |
1145 | - _free(dev); |
1146 | - return NULL; |
1147 | - } |
1148 | - if (!(alias->str = _strdup(filename))) { |
1149 | - log_error("filename strdup failed"); |
1150 | - return NULL; |
1151 | - } |
1152 | - } |
1153 | - } else if (!(alias->str = dm_strdup(filename))) { |
1154 | - log_error("filename strdup failed"); |
1155 | - return NULL; |
1156 | - } |
1157 | - |
1158 | - dev->flags |= DEV_REGULAR; |
1159 | - dm_list_init(&dev->aliases); |
1160 | - dm_list_add(&dev->aliases, &alias->list); |
1161 | - dev->end = UINT64_C(0); |
1162 | - dev->dev = 0; |
1163 | - dev->fd = -1; |
1164 | - dev->open_count = 0; |
1165 | - dev->error_count = 0; |
1166 | - dev->max_error_count = NO_DEV_ERROR_COUNT_LIMIT; |
1167 | - dev->block_size = -1; |
1168 | - dev->read_ahead = -1; |
1169 | - memset(dev->pvid, 0, sizeof(dev->pvid)); |
1170 | - dm_list_init(&dev->open_list); |
1171 | - |
1172 | - return dev; |
1173 | -} |
1174 | - |
1175 | -static struct device *_dev_create(dev_t d) |
1176 | -{ |
1177 | - struct device *dev; |
1178 | - |
1179 | - if (!(dev = _alloc(sizeof(*dev)))) { |
1180 | - log_error("struct device allocation failed"); |
1181 | - return NULL; |
1182 | - } |
1183 | - dev->flags = 0; |
1184 | - dm_list_init(&dev->aliases); |
1185 | - dev->dev = d; |
1186 | - dev->fd = -1; |
1187 | - dev->open_count = 0; |
1188 | - dev->max_error_count = dev_disable_after_error_count(); |
1189 | - dev->block_size = -1; |
1190 | - dev->read_ahead = -1; |
1191 | - dev->end = UINT64_C(0); |
1192 | - memset(dev->pvid, 0, sizeof(dev->pvid)); |
1193 | - dm_list_init(&dev->open_list); |
1194 | - |
1195 | - return dev; |
1196 | -} |
1197 | - |
1198 | -void dev_set_preferred_name(struct str_list *sl, struct device *dev) |
1199 | -{ |
1200 | - /* |
1201 | - * Don't interfere with ordering specified in config file. |
1202 | - */ |
1203 | - if (_cache.preferred_names_matcher) |
1204 | - return; |
1205 | - |
1206 | - log_debug("%s: New preferred name", sl->str); |
1207 | - dm_list_del(&sl->list); |
1208 | - dm_list_add_h(&dev->aliases, &sl->list); |
1209 | -} |
1210 | - |
1211 | -/* |
1212 | - * Check whether path0 or path1 contains the subpath. The path that |
1213 | - * *does not* contain the subpath wins (return 0 or 1). If both paths |
1214 | - * contain the subpath, return -1. If none of them contains the subpath, |
1215 | - * return -2. |
1216 | - */ |
1217 | -static int _builtin_preference(const char *path0, const char *path1, |
1218 | - size_t skip_prefix_count, const char *subpath) |
1219 | -{ |
1220 | - size_t subpath_len; |
1221 | - int r0, r1; |
1222 | - |
1223 | - subpath_len = strlen(subpath); |
1224 | - |
1225 | - r0 = !strncmp(path0 + skip_prefix_count, subpath, subpath_len); |
1226 | - r1 = !strncmp(path1 + skip_prefix_count, subpath, subpath_len); |
1227 | - |
1228 | - if (!r0 && r1) |
1229 | - /* path0 does not have the subpath - it wins */ |
1230 | - return 0; |
1231 | - else if (r0 && !r1) |
1232 | - /* path1 does not have the subpath - it wins */ |
1233 | - return 1; |
1234 | - else if (r0 && r1) |
1235 | - /* both of them have the subpath */ |
1236 | - return -1; |
1237 | - |
1238 | - /* no path has the subpath */ |
1239 | - return -2; |
1240 | -} |
1241 | - |
1242 | -static int _apply_builtin_path_preference_rules(const char *path0, const char *path1) |
1243 | -{ |
1244 | - size_t devdir_len; |
1245 | - int r; |
1246 | - |
1247 | - devdir_len = strlen(_cache.dev_dir); |
1248 | - |
1249 | - if (!strncmp(path0, _cache.dev_dir, devdir_len) && |
1250 | - !strncmp(path1, _cache.dev_dir, devdir_len)) { |
1251 | - /* |
1252 | - * We're trying to achieve the ordering: |
1253 | - * /dev/block/ < /dev/dm-* < /dev/disk/ < /dev/mapper/ < anything else |
1254 | - */ |
1255 | - |
1256 | - /* Prefer any other path over /dev/block/ path. */ |
1257 | - if ((r = _builtin_preference(path0, path1, devdir_len, "block/")) >= -1) |
1258 | - return r; |
1259 | - |
1260 | - /* Prefer any other path over /dev/dm-* path. */ |
1261 | - if ((r = _builtin_preference(path0, path1, devdir_len, "dm-")) >= -1) |
1262 | - return r; |
1263 | - |
1264 | - /* Prefer any other path over /dev/disk/ path. */ |
1265 | - if ((r = _builtin_preference(path0, path1, devdir_len, "disk/")) >= -1) |
1266 | - return r; |
1267 | - |
1268 | - /* Prefer any other path over /dev/mapper/ path. */ |
1269 | - if ((r = _builtin_preference(path0, path1, 0, dm_dir())) >= -1) |
1270 | - return r; |
1271 | - } |
1272 | - |
1273 | - return -1; |
1274 | -} |
1275 | - |
1276 | -/* Return 1 if we prefer path1 else return 0 */ |
1277 | -static int _compare_paths(const char *path0, const char *path1) |
1278 | -{ |
1279 | - int slash0 = 0, slash1 = 0; |
1280 | - int m0, m1; |
1281 | - const char *p; |
1282 | - char p0[PATH_MAX], p1[PATH_MAX]; |
1283 | - char *s0, *s1; |
1284 | - struct stat stat0, stat1; |
1285 | - int r; |
1286 | - |
1287 | - /* |
1288 | - * FIXME Better to compare patterns one-at-a-time against all names. |
1289 | - */ |
1290 | - if (_cache.preferred_names_matcher) { |
1291 | - m0 = dm_regex_match(_cache.preferred_names_matcher, path0); |
1292 | - m1 = dm_regex_match(_cache.preferred_names_matcher, path1); |
1293 | - |
1294 | - if (m0 != m1) { |
1295 | - if (m0 < 0) |
1296 | - return 1; |
1297 | - if (m1 < 0) |
1298 | - return 0; |
1299 | - if (m0 < m1) |
1300 | - return 1; |
1301 | - if (m1 < m0) |
1302 | - return 0; |
1303 | - } |
1304 | - } |
1305 | - |
1306 | - /* Apply built-in preference rules first. */ |
1307 | - if ((r = _apply_builtin_path_preference_rules(path0, path1)) >= 0) |
1308 | - return r; |
1309 | - |
1310 | - /* Return the path with fewer slashes */ |
1311 | - for (p = path0; p++; p = (const char *) strchr(p, '/')) |
1312 | - slash0++; |
1313 | - |
1314 | - for (p = path1; p++; p = (const char *) strchr(p, '/')) |
1315 | - slash1++; |
1316 | - |
1317 | - if (slash0 < slash1) |
1318 | - return 0; |
1319 | - if (slash1 < slash0) |
1320 | - return 1; |
1321 | - |
1322 | - strncpy(p0, path0, PATH_MAX); |
1323 | - strncpy(p1, path1, PATH_MAX); |
1324 | - s0 = &p0[0] + 1; |
1325 | - s1 = &p1[0] + 1; |
1326 | - |
1327 | - /* We prefer symlinks - they exist for a reason! |
1328 | - * So we prefer a shorter path before the first symlink in the name. |
1329 | - * FIXME Configuration option to invert this? */ |
1330 | - while (s0) { |
1331 | - s0 = strchr(s0, '/'); |
1332 | - s1 = strchr(s1, '/'); |
1333 | - if (s0) { |
1334 | - *s0 = '\0'; |
1335 | - *s1 = '\0'; |
1336 | - } |
1337 | - if (lstat(p0, &stat0)) { |
1338 | - log_sys_very_verbose("lstat", p0); |
1339 | - return 1; |
1340 | - } |
1341 | - if (lstat(p1, &stat1)) { |
1342 | - log_sys_very_verbose("lstat", p1); |
1343 | - return 0; |
1344 | - } |
1345 | - if (S_ISLNK(stat0.st_mode) && !S_ISLNK(stat1.st_mode)) |
1346 | - return 0; |
1347 | - if (!S_ISLNK(stat0.st_mode) && S_ISLNK(stat1.st_mode)) |
1348 | - return 1; |
1349 | - if (s0) { |
1350 | - *s0++ = '/'; |
1351 | - *s1++ = '/'; |
1352 | - } |
1353 | - } |
1354 | - |
1355 | - /* ASCII comparison */ |
1356 | - if (strcmp(path0, path1) < 0) |
1357 | - return 0; |
1358 | - else |
1359 | - return 1; |
1360 | -} |
1361 | - |
1362 | -static int _add_alias(struct device *dev, const char *path) |
1363 | -{ |
1364 | - struct str_list *sl = _alloc(sizeof(*sl)); |
1365 | - struct str_list *strl; |
1366 | - const char *oldpath; |
1367 | - int prefer_old = 1; |
1368 | - |
1369 | - if (!sl) |
1370 | - return_0; |
1371 | - |
1372 | - /* Is name already there? */ |
1373 | - dm_list_iterate_items(strl, &dev->aliases) { |
1374 | - if (!strcmp(strl->str, path)) { |
1375 | - log_debug("%s: Already in device cache", path); |
1376 | - return 1; |
1377 | - } |
1378 | - } |
1379 | - |
1380 | - sl->str = path; |
1381 | - |
1382 | - if (!dm_list_empty(&dev->aliases)) { |
1383 | - oldpath = dm_list_item(dev->aliases.n, struct str_list)->str; |
1384 | - prefer_old = _compare_paths(path, oldpath); |
1385 | - log_debug("%s: Aliased to %s in device cache%s", |
1386 | - path, oldpath, prefer_old ? "" : " (preferred name)"); |
1387 | - |
1388 | - } else |
1389 | - log_debug("%s: Added to device cache", path); |
1390 | - |
1391 | - if (prefer_old) |
1392 | - dm_list_add(&dev->aliases, &sl->list); |
1393 | - else |
1394 | - dm_list_add_h(&dev->aliases, &sl->list); |
1395 | - |
1396 | - return 1; |
1397 | -} |
1398 | - |
1399 | -/* |
1400 | - * Either creates a new dev, or adds an alias to |
1401 | - * an existing dev. |
1402 | - */ |
1403 | -static int _insert_dev(const char *path, dev_t d) |
1404 | -{ |
1405 | - struct device *dev; |
1406 | - static dev_t loopfile_count = 0; |
1407 | - int loopfile = 0; |
1408 | - char *path_copy; |
1409 | - |
1410 | - /* Generate pretend device numbers for loopfiles */ |
1411 | - if (!d) { |
1412 | - if (dm_hash_lookup(_cache.names, path)) |
1413 | - return 1; |
1414 | - d = ++loopfile_count; |
1415 | - loopfile = 1; |
1416 | - } |
1417 | - |
1418 | - /* is this device already registered ? */ |
1419 | - if (!(dev = (struct device *) btree_lookup(_cache.devices, |
1420 | - (uint32_t) d))) { |
1421 | - /* create new device */ |
1422 | - if (loopfile) { |
1423 | - if (!(dev = dev_create_file(path, NULL, NULL, 0))) |
1424 | - return_0; |
1425 | - } else if (!(dev = _dev_create(d))) |
1426 | - return_0; |
1427 | - |
1428 | - if (!(btree_insert(_cache.devices, (uint32_t) d, dev))) { |
1429 | - log_error("Couldn't insert device into binary tree."); |
1430 | - _free(dev); |
1431 | - return 0; |
1432 | - } |
1433 | - } |
1434 | - |
1435 | - if (!(path_copy = dm_pool_strdup(_cache.mem, path))) { |
1436 | - log_error("Failed to duplicate path string."); |
1437 | - return 0; |
1438 | - } |
1439 | - |
1440 | - if (!loopfile && !_add_alias(dev, path_copy)) { |
1441 | - log_error("Couldn't add alias to dev cache."); |
1442 | - return 0; |
1443 | - } |
1444 | - |
1445 | - if (!dm_hash_insert(_cache.names, path_copy, dev)) { |
1446 | - log_error("Couldn't add name to hash in dev cache."); |
1447 | - return 0; |
1448 | - } |
1449 | - |
1450 | - return 1; |
1451 | -} |
1452 | - |
1453 | -static char *_join(const char *dir, const char *name) |
1454 | -{ |
1455 | - size_t len = strlen(dir) + strlen(name) + 2; |
1456 | - char *r = dm_malloc(len); |
1457 | - if (r) |
1458 | - snprintf(r, len, "%s/%s", dir, name); |
1459 | - |
1460 | - return r; |
1461 | -} |
1462 | - |
1463 | -/* |
1464 | - * Get rid of extra slashes in the path string. |
1465 | - */ |
1466 | -static void _collapse_slashes(char *str) |
1467 | -{ |
1468 | - char *ptr; |
1469 | - int was_slash = 0; |
1470 | - |
1471 | - for (ptr = str; *ptr; ptr++) { |
1472 | - if (*ptr == '/') { |
1473 | - if (was_slash) |
1474 | - continue; |
1475 | - |
1476 | - was_slash = 1; |
1477 | - } else |
1478 | - was_slash = 0; |
1479 | - *str++ = *ptr; |
1480 | - } |
1481 | - |
1482 | - *str = *ptr; |
1483 | -} |
1484 | - |
1485 | -static int _insert_dir(const char *dir) |
1486 | -{ |
1487 | - int n, dirent_count, r = 1; |
1488 | - struct dirent **dirent; |
1489 | - char *path; |
1490 | - |
1491 | - dirent_count = scandir(dir, &dirent, NULL, alphasort); |
1492 | - if (dirent_count > 0) { |
1493 | - for (n = 0; n < dirent_count; n++) { |
1494 | - if (dirent[n]->d_name[0] == '.') { |
1495 | - free(dirent[n]); |
1496 | - continue; |
1497 | - } |
1498 | - |
1499 | - if (!(path = _join(dir, dirent[n]->d_name))) |
1500 | - return_0; |
1501 | - |
1502 | - _collapse_slashes(path); |
1503 | - r &= _insert(path, 1, 0); |
1504 | - dm_free(path); |
1505 | - |
1506 | - free(dirent[n]); |
1507 | - } |
1508 | - free(dirent); |
1509 | - } |
1510 | - |
1511 | - return r; |
1512 | -} |
1513 | - |
1514 | -static int _insert_file(const char *path) |
1515 | -{ |
1516 | - struct stat info; |
1517 | - |
1518 | - if (stat(path, &info) < 0) { |
1519 | - log_sys_very_verbose("stat", path); |
1520 | - return 0; |
1521 | - } |
1522 | - |
1523 | - if (!S_ISREG(info.st_mode)) { |
1524 | - log_debug("%s: Not a regular file", path); |
1525 | - return 0; |
1526 | - } |
1527 | - |
1528 | - if (!_insert_dev(path, 0)) |
1529 | - return_0; |
1530 | - |
1531 | - return 1; |
1532 | -} |
1533 | - |
1534 | -#ifdef UDEV_SYNC_SUPPORT |
1535 | - |
1536 | -static int _device_in_udev_db(const dev_t d) |
1537 | -{ |
1538 | - struct udev *udev; |
1539 | - struct udev_device *udev_device; |
1540 | - |
1541 | - if (!(udev = udev_get_library_context())) |
1542 | - return_0; |
1543 | - |
1544 | - if ((udev_device = udev_device_new_from_devnum(udev, 'b', d))) { |
1545 | - udev_device_unref(udev_device); |
1546 | - return 1; |
1547 | - } |
1548 | - |
1549 | - return 0; |
1550 | -} |
1551 | - |
1552 | -static int _insert_udev_dir(struct udev *udev, const char *dir) |
1553 | -{ |
1554 | - struct udev_enumerate *udev_enum = NULL; |
1555 | - struct udev_list_entry *device_entry, *symlink_entry; |
1556 | - const char *node_name, *symlink_name; |
1557 | - struct udev_device *device; |
1558 | - int r = 1; |
1559 | - |
1560 | - if (!(udev_enum = udev_enumerate_new(udev))) |
1561 | - goto bad; |
1562 | - |
1563 | - if (udev_enumerate_add_match_subsystem(udev_enum, "block") || |
1564 | - udev_enumerate_scan_devices(udev_enum)) |
1565 | - goto bad; |
1566 | - |
1567 | - udev_list_entry_foreach(device_entry, udev_enumerate_get_list_entry(udev_enum)) { |
1568 | - device = udev_device_new_from_syspath(udev, udev_list_entry_get_name(device_entry)); |
1569 | - |
1570 | - node_name = udev_device_get_devnode(device); |
1571 | - r &= _insert(node_name, 0, 0); |
1572 | - |
1573 | - udev_list_entry_foreach(symlink_entry, udev_device_get_devlinks_list_entry(device)) { |
1574 | - symlink_name = udev_list_entry_get_name(symlink_entry); |
1575 | - r &= _insert(symlink_name, 0, 0); |
1576 | - } |
1577 | - |
1578 | - udev_device_unref(device); |
1579 | - } |
1580 | - |
1581 | - udev_enumerate_unref(udev_enum); |
1582 | - return r; |
1583 | - |
1584 | -bad: |
1585 | - log_error("Failed to enumerate udev device list."); |
1586 | - udev_enumerate_unref(udev_enum); |
1587 | - return 0; |
1588 | -} |
1589 | - |
1590 | -static void _insert_dirs(struct dm_list *dirs) |
1591 | -{ |
1592 | - struct dir_list *dl; |
1593 | - struct udev *udev; |
1594 | - int with_udev; |
1595 | - |
1596 | - with_udev = obtain_device_list_from_udev() && |
1597 | - (udev = udev_get_library_context()); |
1598 | - |
1599 | - dm_list_iterate_items(dl, &_cache.dirs) { |
1600 | - if (with_udev) { |
1601 | - if (!_insert_udev_dir(udev, dl->dir)) |
1602 | - log_debug("%s: Failed to insert devices from " |
1603 | - "udev-managed directory to device " |
1604 | - "cache fully", dl->dir); |
1605 | - } |
1606 | - else if (!_insert_dir(dl->dir)) |
1607 | - log_debug("%s: Failed to insert devices to " |
1608 | - "device cache fully", dl->dir); |
1609 | - } |
1610 | -} |
1611 | - |
1612 | -#else /* UDEV_SYNC_SUPPORT */ |
1613 | - |
1614 | -static int _device_in_udev_db(const dev_t d) |
1615 | -{ |
1616 | - return 0; |
1617 | -} |
1618 | - |
1619 | -static void _insert_dirs(struct dm_list *dirs) |
1620 | -{ |
1621 | - struct dir_list *dl; |
1622 | - |
1623 | - dm_list_iterate_items(dl, &_cache.dirs) |
1624 | - _insert_dir(dl->dir); |
1625 | -} |
1626 | - |
1627 | -#endif /* UDEV_SYNC_SUPPORT */ |
1628 | - |
1629 | -static int _insert(const char *path, int rec, int check_with_udev_db) |
1630 | -{ |
1631 | - struct stat info; |
1632 | - int r = 0; |
1633 | - |
1634 | - if (stat(path, &info) < 0) { |
1635 | - log_sys_very_verbose("stat", path); |
1636 | - return 0; |
1637 | - } |
1638 | - |
1639 | - if (check_with_udev_db && !_device_in_udev_db(info.st_rdev)) { |
1640 | - log_very_verbose("%s: Not in udev db", path); |
1641 | - return 0; |
1642 | - } |
1643 | - |
1644 | - if (S_ISDIR(info.st_mode)) { /* add a directory */ |
1645 | - /* check it's not a symbolic link */ |
1646 | - if (lstat(path, &info) < 0) { |
1647 | - log_sys_very_verbose("lstat", path); |
1648 | - return 0; |
1649 | - } |
1650 | - |
1651 | - if (S_ISLNK(info.st_mode)) { |
1652 | - log_debug("%s: Symbolic link to directory", path); |
1653 | - return 0; |
1654 | - } |
1655 | - |
1656 | - if (rec) |
1657 | - r = _insert_dir(path); |
1658 | - |
1659 | - } else { /* add a device */ |
1660 | - if (!S_ISBLK(info.st_mode)) { |
1661 | - log_debug("%s: Not a block device", path); |
1662 | - return 0; |
1663 | - } |
1664 | - |
1665 | - if (!_insert_dev(path, info.st_rdev)) |
1666 | - return_0; |
1667 | - |
1668 | - r = 1; |
1669 | - } |
1670 | - |
1671 | - return r; |
1672 | -} |
1673 | - |
1674 | -static void _full_scan(int dev_scan) |
1675 | -{ |
1676 | - struct dir_list *dl; |
1677 | - |
1678 | - if (_cache.has_scanned && !dev_scan) |
1679 | - return; |
1680 | - |
1681 | - _insert_dirs(&_cache.dirs); |
1682 | - |
1683 | - dm_list_iterate_items(dl, &_cache.files) |
1684 | - _insert_file(dl->dir); |
1685 | - |
1686 | - _cache.has_scanned = 1; |
1687 | - init_full_scan_done(1); |
1688 | -} |
1689 | - |
1690 | -int dev_cache_has_scanned(void) |
1691 | -{ |
1692 | - return _cache.has_scanned; |
1693 | -} |
1694 | - |
1695 | -void dev_cache_scan(int do_scan) |
1696 | -{ |
1697 | - if (!do_scan) |
1698 | - _cache.has_scanned = 1; |
1699 | - else |
1700 | - _full_scan(1); |
1701 | -} |
1702 | - |
1703 | -static int _init_preferred_names(struct cmd_context *cmd) |
1704 | -{ |
1705 | - const struct config_node *cn; |
1706 | - const struct config_value *v; |
1707 | - struct dm_pool *scratch = NULL; |
1708 | - const char **regex; |
1709 | - unsigned count = 0; |
1710 | - int i, r = 0; |
1711 | - |
1712 | - _cache.preferred_names_matcher = NULL; |
1713 | - |
1714 | - if (!(cn = find_config_tree_node(cmd, "devices/preferred_names")) || |
1715 | - cn->v->type == CFG_EMPTY_ARRAY) { |
1716 | - log_very_verbose("devices/preferred_names not found in config file: " |
1717 | - "using built-in preferences"); |
1718 | - return 1; |
1719 | - } |
1720 | - |
1721 | - for (v = cn->v; v; v = v->next) { |
1722 | - if (v->type != CFG_STRING) { |
1723 | - log_error("preferred_names patterns must be enclosed in quotes"); |
1724 | - return 0; |
1725 | - } |
1726 | - |
1727 | - count++; |
1728 | - } |
1729 | - |
1730 | - if (!(scratch = dm_pool_create("preferred device name matcher", 1024))) |
1731 | - return_0; |
1732 | - |
1733 | - if (!(regex = dm_pool_alloc(scratch, sizeof(*regex) * count))) { |
1734 | - log_error("Failed to allocate preferred device name " |
1735 | - "pattern list."); |
1736 | - goto out; |
1737 | - } |
1738 | - |
1739 | - for (v = cn->v, i = count - 1; v; v = v->next, i--) { |
1740 | - if (!(regex[i] = dm_pool_strdup(scratch, v->v.str))) { |
1741 | - log_error("Failed to allocate a preferred device name " |
1742 | - "pattern."); |
1743 | - goto out; |
1744 | - } |
1745 | - } |
1746 | - |
1747 | - if (!(_cache.preferred_names_matcher = |
1748 | - dm_regex_create(_cache.mem, regex, count))) { |
1749 | - log_error("Preferred device name pattern matcher creation failed."); |
1750 | - goto out; |
1751 | - } |
1752 | - |
1753 | - r = 1; |
1754 | - |
1755 | -out: |
1756 | - dm_pool_destroy(scratch); |
1757 | - |
1758 | - return r; |
1759 | -} |
1760 | - |
1761 | -int dev_cache_init(struct cmd_context *cmd) |
1762 | -{ |
1763 | - _cache.names = NULL; |
1764 | - _cache.has_scanned = 0; |
1765 | - |
1766 | - if (!(_cache.mem = dm_pool_create("dev_cache", 10 * 1024))) |
1767 | - return_0; |
1768 | - |
1769 | - if (!(_cache.names = dm_hash_create(128))) { |
1770 | - dm_pool_destroy(_cache.mem); |
1771 | - _cache.mem = 0; |
1772 | - return_0; |
1773 | - } |
1774 | - |
1775 | - if (!(_cache.devices = btree_create(_cache.mem))) { |
1776 | - log_error("Couldn't create binary tree for dev-cache."); |
1777 | - goto bad; |
1778 | - } |
1779 | - |
1780 | - if (!(_cache.dev_dir = _strdup(cmd->dev_dir))) { |
1781 | - log_error("strdup dev_dir failed."); |
1782 | - goto bad; |
1783 | - } |
1784 | - |
1785 | - dm_list_init(&_cache.dirs); |
1786 | - dm_list_init(&_cache.files); |
1787 | - |
1788 | - if (!_init_preferred_names(cmd)) |
1789 | - goto_bad; |
1790 | - |
1791 | - return 1; |
1792 | - |
1793 | - bad: |
1794 | - dev_cache_exit(); |
1795 | - return 0; |
1796 | -} |
1797 | - |
1798 | -static void _check_closed(struct device *dev) |
1799 | -{ |
1800 | - if (dev->fd >= 0) |
1801 | - log_error("Device '%s' has been left open.", dev_name(dev)); |
1802 | -} |
1803 | - |
1804 | -static void _check_for_open_devices(void) |
1805 | -{ |
1806 | - dm_hash_iter(_cache.names, (dm_hash_iterate_fn) _check_closed); |
1807 | -} |
1808 | - |
1809 | -void dev_cache_exit(void) |
1810 | -{ |
1811 | - if (_cache.names) |
1812 | - _check_for_open_devices(); |
1813 | - |
1814 | - if (_cache.preferred_names_matcher) |
1815 | - _cache.preferred_names_matcher = NULL; |
1816 | - |
1817 | - if (_cache.mem) { |
1818 | - dm_pool_destroy(_cache.mem); |
1819 | - _cache.mem = NULL; |
1820 | - } |
1821 | - |
1822 | - if (_cache.names) { |
1823 | - dm_hash_destroy(_cache.names); |
1824 | - _cache.names = NULL; |
1825 | - } |
1826 | - |
1827 | - _cache.devices = NULL; |
1828 | - _cache.has_scanned = 0; |
1829 | - dm_list_init(&_cache.dirs); |
1830 | - dm_list_init(&_cache.files); |
1831 | -} |
1832 | - |
1833 | -int dev_cache_add_dir(const char *path) |
1834 | -{ |
1835 | - struct dir_list *dl; |
1836 | - struct stat st; |
1837 | - |
1838 | - if (stat(path, &st)) { |
1839 | - log_error("Ignoring %s: %s", path, strerror(errno)); |
1840 | - /* But don't fail */ |
1841 | - return 1; |
1842 | - } |
1843 | - |
1844 | - if (!S_ISDIR(st.st_mode)) { |
1845 | - log_error("Ignoring %s: Not a directory", path); |
1846 | - return 1; |
1847 | - } |
1848 | - |
1849 | - if (!(dl = _alloc(sizeof(*dl) + strlen(path) + 1))) { |
1850 | - log_error("dir_list allocation failed"); |
1851 | - return 0; |
1852 | - } |
1853 | - |
1854 | - strcpy(dl->dir, path); |
1855 | - dm_list_add(&_cache.dirs, &dl->list); |
1856 | - return 1; |
1857 | -} |
1858 | - |
1859 | -int dev_cache_add_loopfile(const char *path) |
1860 | -{ |
1861 | - struct dir_list *dl; |
1862 | - struct stat st; |
1863 | - |
1864 | - if (stat(path, &st)) { |
1865 | - log_error("Ignoring %s: %s", path, strerror(errno)); |
1866 | - /* But don't fail */ |
1867 | - return 1; |
1868 | - } |
1869 | - |
1870 | - if (!S_ISREG(st.st_mode)) { |
1871 | - log_error("Ignoring %s: Not a regular file", path); |
1872 | - return 1; |
1873 | - } |
1874 | - |
1875 | - if (!(dl = _alloc(sizeof(*dl) + strlen(path) + 1))) { |
1876 | - log_error("dir_list allocation failed for file"); |
1877 | - return 0; |
1878 | - } |
1879 | - |
1880 | - strcpy(dl->dir, path); |
1881 | - dm_list_add(&_cache.files, &dl->list); |
1882 | - return 1; |
1883 | -} |
1884 | - |
1885 | -/* Check cached device name is still valid before returning it */ |
1886 | -/* This should be a rare occurrence */ |
1887 | -/* set quiet if the cache is expected to be out-of-date */ |
1888 | -/* FIXME Make rest of code pass/cache struct device instead of dev_name */ |
1889 | -const char *dev_name_confirmed(struct device *dev, int quiet) |
1890 | -{ |
1891 | - struct stat buf; |
1892 | - const char *name; |
1893 | - int r; |
1894 | - |
1895 | - if ((dev->flags & DEV_REGULAR)) |
1896 | - return dev_name(dev); |
1897 | - |
1898 | - while ((r = stat(name = dm_list_item(dev->aliases.n, |
1899 | - struct str_list)->str, &buf)) || |
1900 | - (buf.st_rdev != dev->dev)) { |
1901 | - if (r < 0) { |
1902 | - if (quiet) |
1903 | - log_sys_debug("stat", name); |
1904 | - else |
1905 | - log_sys_error("stat", name); |
1906 | - } |
1907 | - if (quiet) |
1908 | - log_debug("Path %s no longer valid for device(%d,%d)", |
1909 | - name, (int) MAJOR(dev->dev), |
1910 | - (int) MINOR(dev->dev)); |
1911 | - else |
1912 | - log_error("Path %s no longer valid for device(%d,%d)", |
1913 | - name, (int) MAJOR(dev->dev), |
1914 | - (int) MINOR(dev->dev)); |
1915 | - |
1916 | - /* Remove the incorrect hash entry */ |
1917 | - dm_hash_remove(_cache.names, name); |
1918 | - |
1919 | - /* Leave list alone if there isn't an alternative name */ |
1920 | - /* so dev_name will always find something to return. */ |
1921 | - /* Otherwise add the name to the correct device. */ |
1922 | - if (dm_list_size(&dev->aliases) > 1) { |
1923 | - dm_list_del(dev->aliases.n); |
1924 | - if (!r) |
1925 | - _insert(name, 0, obtain_device_list_from_udev()); |
1926 | - continue; |
1927 | - } |
1928 | - |
1929 | - /* Scanning issues this inappropriately sometimes. */ |
1930 | - log_debug("Aborting - please provide new pathname for what " |
1931 | - "used to be %s", name); |
1932 | - return NULL; |
1933 | - } |
1934 | - |
1935 | - return dev_name(dev); |
1936 | -} |
1937 | - |
1938 | -struct device *dev_cache_get(const char *name, struct dev_filter *f) |
1939 | -{ |
1940 | - struct stat buf; |
1941 | - struct device *d = (struct device *) dm_hash_lookup(_cache.names, name); |
1942 | - |
1943 | - if (d && (d->flags & DEV_REGULAR)) |
1944 | - return d; |
1945 | - |
1946 | - /* If the entry's wrong, remove it */ |
1947 | - if (d && (stat(name, &buf) || (buf.st_rdev != d->dev))) { |
1948 | - dm_hash_remove(_cache.names, name); |
1949 | - d = NULL; |
1950 | - } |
1951 | - |
1952 | - if (!d) { |
1953 | - _insert(name, 0, obtain_device_list_from_udev()); |
1954 | - d = (struct device *) dm_hash_lookup(_cache.names, name); |
1955 | - if (!d) { |
1956 | - _full_scan(0); |
1957 | - d = (struct device *) dm_hash_lookup(_cache.names, name); |
1958 | - } |
1959 | - } |
1960 | - |
1961 | - return (d && (!f || (d->flags & DEV_REGULAR) || |
1962 | - f->passes_filter(f, d))) ? d : NULL; |
1963 | -} |
1964 | - |
1965 | -struct dev_iter *dev_iter_create(struct dev_filter *f, int dev_scan) |
1966 | -{ |
1967 | - struct dev_iter *di = dm_malloc(sizeof(*di)); |
1968 | - |
1969 | - if (!di) { |
1970 | - log_error("dev_iter allocation failed"); |
1971 | - return NULL; |
1972 | - } |
1973 | - |
1974 | - if (dev_scan && !trust_cache()) { |
1975 | - /* Flag gets reset between each command */ |
1976 | - if (!full_scan_done()) |
1977 | - persistent_filter_wipe(f); /* Calls _full_scan(1) */ |
1978 | - } else |
1979 | - _full_scan(0); |
1980 | - |
1981 | - di->current = btree_first(_cache.devices); |
1982 | - di->filter = f; |
1983 | - di->filter->use_count++; |
1984 | - |
1985 | - return di; |
1986 | -} |
1987 | - |
1988 | -void dev_iter_destroy(struct dev_iter *iter) |
1989 | -{ |
1990 | - iter->filter->use_count--; |
1991 | - dm_free(iter); |
1992 | -} |
1993 | - |
1994 | -static struct device *_iter_next(struct dev_iter *iter) |
1995 | -{ |
1996 | - struct device *d = btree_get_data(iter->current); |
1997 | - iter->current = btree_next(iter->current); |
1998 | - return d; |
1999 | -} |
2000 | - |
2001 | -struct device *dev_iter_get(struct dev_iter *iter) |
2002 | -{ |
2003 | - while (iter->current) { |
2004 | - struct device *d = _iter_next(iter); |
2005 | - if (!iter->filter || (d->flags & DEV_REGULAR) || |
2006 | - iter->filter->passes_filter(iter->filter, d)) |
2007 | - return d; |
2008 | - } |
2009 | - |
2010 | - return NULL; |
2011 | -} |
2012 | - |
2013 | -void dev_reset_error_count(struct cmd_context *cmd) |
2014 | -{ |
2015 | - struct dev_iter iter; |
2016 | - |
2017 | - if (!_cache.devices) |
2018 | - return; |
2019 | - |
2020 | - iter.current = btree_first(_cache.devices); |
2021 | - while (iter.current) |
2022 | - _iter_next(&iter)->error_count = 0; |
2023 | -} |
2024 | - |
2025 | -int dev_fd(struct device *dev) |
2026 | -{ |
2027 | - return dev->fd; |
2028 | -} |
2029 | - |
2030 | -const char *dev_name(const struct device *dev) |
2031 | -{ |
2032 | - return (dev) ? dm_list_item(dev->aliases.n, struct str_list)->str : |
2033 | - "unknown device"; |
2034 | -} |
2035 | |
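For reference, the path-normalisation helper removed above (`_collapse_slashes()`) is small enough to sketch standalone. This is an illustrative re-implementation outside the LVM2 tree, not the build itself: it rewrites the string in place so runs of `/` become a single `/`, exactly the cleanup applied to each scanned `/dev` entry before `_insert()`.

```c
#include <assert.h>
#include <string.h>

/* Standalone sketch of the _collapse_slashes() helper from the removed
 * dev-cache.c: squeeze consecutive '/' characters in place. */
static void collapse_slashes(char *str)
{
	char *out = str;
	const char *in;
	int was_slash = 0;

	for (in = str; *in; in++) {
		if (*in == '/') {
			if (was_slash)
				continue;	/* skip repeated slash */
			was_slash = 1;
		} else
			was_slash = 0;
		*out++ = *in;
	}
	*out = '\0';
}
```

The in-place rewrite is safe because the output can never be longer than the input; the original code uses the same single-pass approach.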
2036 | === added directory '.pc/dirs.patch' |
2037 | === removed directory '.pc/dirs.patch' |
2038 | === added directory '.pc/dirs.patch/daemons' |
2039 | === added directory '.pc/dirs.patch/daemons/dmeventd' |
2040 | === added file '.pc/dirs.patch/daemons/dmeventd/Makefile.in' |
2041 | --- .pc/dirs.patch/daemons/dmeventd/Makefile.in 1970-01-01 00:00:00 +0000 |
2042 | +++ .pc/dirs.patch/daemons/dmeventd/Makefile.in 2012-08-21 10:18:22 +0000 |
2043 | @@ -0,0 +1,108 @@ |
2044 | +# |
2045 | +# Copyright (C) 2005-2011 Red Hat, Inc. All rights reserved. |
2046 | +# |
2047 | +# This file is part of the device-mapper userspace tools. |
2048 | +# |
2049 | +# This copyrighted material is made available to anyone wishing to use, |
2050 | +# modify, copy, or redistribute it subject to the terms and conditions |
2051 | +# of the GNU Lesser General Public License v.2.1. |
2052 | +# |
2053 | +# You should have received a copy of the GNU Lesser General Public License |
2054 | +# along with this program; if not, write to the Free Software Foundation, |
2055 | +# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
2056 | + |
2057 | +srcdir = @srcdir@ |
2058 | +top_srcdir = @top_srcdir@ |
2059 | +top_builddir = @top_builddir@ |
2060 | + |
2061 | +SOURCES = libdevmapper-event.c |
2062 | +SOURCES2 = dmeventd.c |
2063 | + |
2064 | +TARGETS = dmeventd |
2065 | + |
2066 | +.PHONY: install_lib_dynamic install_lib_static install_include \ |
2067 | + install_pkgconfig install_dmeventd_dynamic install_dmeventd_static \ |
2068 | + install_lib install_dmeventd |
2069 | + |
2070 | +INSTALL_DMEVENTD_TARGETS = install_dmeventd_dynamic |
2071 | +INSTALL_LIB_TARGETS = install_lib_dynamic |
2072 | + |
2073 | +LIB_NAME = libdevmapper-event |
2074 | +ifeq ("@STATIC_LINK@", "yes") |
2075 | + LIB_STATIC = $(LIB_NAME).a |
2076 | + TARGETS += $(LIB_STATIC) dmeventd.static |
2077 | + INSTALL_DMEVENTD_TARGETS += install_dmeventd_static |
2078 | + INSTALL_LIB_TARGETS += install_lib_static |
2079 | +endif |
2080 | + |
2081 | +LIB_VERSION = $(LIB_VERSION_DM) |
2082 | +LIB_SHARED = $(LIB_NAME).$(LIB_SUFFIX) |
2083 | + |
2084 | +CLEAN_TARGETS = dmeventd.static $(LIB_NAME).a |
2085 | + |
2086 | +ifneq ($(MAKECMDGOALS),device-mapper) |
2087 | + SUBDIRS+=plugins |
2088 | +endif |
2089 | + |
2090 | +CFLOW_LIST = $(SOURCES) |
2091 | +CFLOW_LIST_TARGET = $(LIB_NAME).cflow |
2092 | +CFLOW_TARGET = dmeventd |
2093 | + |
2094 | +EXPORTED_HEADER = $(srcdir)/libdevmapper-event.h |
2095 | +EXPORTED_FN_PREFIX = dm_event |
2096 | + |
2097 | +include $(top_builddir)/make.tmpl |
2098 | + |
2099 | +all: device-mapper |
2100 | +device-mapper: $(TARGETS) |
2101 | + |
2102 | +LIBS += -ldevmapper |
2103 | +LVMLIBS += -ldevmapper-event $(PTHREAD_LIBS) |
2104 | + |
2105 | +dmeventd: $(LIB_SHARED) dmeventd.o |
2106 | + $(CC) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) -L. -o $@ dmeventd.o \ |
2107 | + $(DL_LIBS) $(LVMLIBS) $(LIBS) -rdynamic |
2108 | + |
2109 | +dmeventd.static: $(LIB_STATIC) dmeventd.o $(interfacebuilddir)/libdevmapper.a |
2110 | + $(CC) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) -static -L. -L$(interfacebuilddir) -o $@ \ |
2111 | + dmeventd.o $(DL_LIBS) $(LVMLIBS) $(LIBS) $(STATIC_LIBS) |
2112 | + |
2113 | +ifeq ("@PKGCONFIG@", "yes") |
2114 | + INSTALL_LIB_TARGETS += install_pkgconfig |
2115 | +endif |
2116 | + |
2117 | +ifneq ("$(CFLOW_CMD)", "") |
2118 | +CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES)) |
2119 | +-include $(top_builddir)/libdm/libdevmapper.cflow |
2120 | +-include $(top_builddir)/lib/liblvm-internal.cflow |
2121 | +-include $(top_builddir)/lib/liblvm2cmd.cflow |
2122 | +-include $(top_builddir)/daemons/dmeventd/$(LIB_NAME).cflow |
2123 | +-include $(top_builddir)/daemons/dmeventd/plugins/mirror/$(LIB_NAME)-lvm2mirror.cflow |
2124 | +endif |
2125 | + |
2126 | +install_include: $(srcdir)/libdevmapper-event.h |
2127 | + $(INSTALL_DATA) -D $< $(includedir)/$(<F) |
2128 | + |
2129 | +install_pkgconfig: libdevmapper-event.pc |
2130 | + $(INSTALL_DATA) -D $< $(pkgconfigdir)/devmapper-event.pc |
2131 | + |
2132 | +install_lib_dynamic: install_lib_shared |
2133 | + |
2134 | +install_lib_static: $(LIB_STATIC) |
2135 | + $(INSTALL_DATA) -D $< $(usrlibdir)/$(<F) |
2136 | + |
2137 | +install_lib: $(INSTALL_LIB_TARGETS) |
2138 | + |
2139 | +install_dmeventd_dynamic: dmeventd |
2140 | + $(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F) |
2141 | + |
2142 | +install_dmeventd_static: dmeventd.static |
2143 | + $(INSTALL_PROGRAM) -D $< $(staticdir)/$(<F) |
2144 | + |
2145 | +install_dmeventd: $(INSTALL_DMEVENTD_TARGETS) |
2146 | + |
2147 | +install: install_include install_lib install_dmeventd |
2148 | + |
2149 | +install_device-mapper: install_include install_lib install_dmeventd |
2150 | + |
2151 | +DISTCLEAN_TARGETS += libdevmapper-event.pc |
2152 | |
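The dmeventd.c source added below protects the daemon from the kernel's OOM killer by writing `OOM_SCORE_ADJ_MIN` to `/proc/self/oom_score_adj` (falling back to the pre-2.6.36 `/proc/self/oom_adj` interface). A minimal sketch of that write, with the path taken as a parameter so it can be exercised outside `/proc`; the function name here is illustrative, not the exact dmeventd symbol:

```c
#include <stdio.h>

#define OOM_SCORE_ADJ_MIN (-1000)	/* from linux/oom.h, new interface */

/* Sketch of dmeventd's OOM-killer protection: write the minimum score
 * adjustment to the given file (normally "/proc/self/oom_score_adj").
 * Returns 1 on success, 0 on failure. */
static int protect_from_oom_killer(const char *path)
{
	FILE *fp = fopen(path, "w");
	int ok;

	if (!fp)
		return 0;
	ok = fprintf(fp, "%i", OOM_SCORE_ADJ_MIN) > 0;
	return !fclose(fp) && ok;
}
```

dmeventd performs this early in startup, since losing the event daemon under memory pressure would leave monitored mirrors, snapshots and thin pools unattended.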
2153 | === added file '.pc/dirs.patch/daemons/dmeventd/dmeventd.c' |
2154 | --- .pc/dirs.patch/daemons/dmeventd/dmeventd.c 1970-01-01 00:00:00 +0000 |
2155 | +++ .pc/dirs.patch/daemons/dmeventd/dmeventd.c 2012-08-21 10:18:22 +0000 |
2156 | @@ -0,0 +1,2009 @@ |
2157 | +/* |
2158 | + * Copyright (C) 2005-2007 Red Hat, Inc. All rights reserved. |
2159 | + * |
2160 | + * This file is part of the device-mapper userspace tools. |
2161 | + * |
2162 | + * This copyrighted material is made available to anyone wishing to use, |
2163 | + * modify, copy, or redistribute it subject to the terms and conditions |
2164 | + * of the GNU Lesser General Public License v.2.1. |
2165 | + * |
2166 | + * You should have received a copy of the GNU Lesser General Public License |
2167 | + * along with this program; if not, write to the Free Software Foundation, |
2168 | + * Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
2169 | + */ |
2170 | + |
2171 | +/* |
2172 | + * dmeventd - dm event daemon to monitor active mapped devices |
2173 | + */ |
2174 | + |
2175 | +#define _GNU_SOURCE |
2176 | +#define _FILE_OFFSET_BITS 64 |
2177 | + |
2178 | +#include "configure.h" |
2179 | +#include "libdevmapper.h" |
2180 | +#include "libdevmapper-event.h" |
2181 | +#include "dmeventd.h" |
2182 | +//#include "libmultilog.h" |
2183 | +#include "dm-logging.h" |
2184 | + |
2185 | +#include <dlfcn.h> |
2186 | +#include <errno.h> |
2187 | +#include <pthread.h> |
2188 | +#include <sys/file.h> |
2189 | +#include <sys/stat.h> |
2190 | +#include <sys/wait.h> |
2191 | +#include <sys/time.h> |
2192 | +#include <sys/resource.h> |
2193 | +#include <unistd.h> |
2194 | +#include <signal.h> |
2195 | +#include <arpa/inet.h> /* for htonl, ntohl */ |
2196 | + |
2197 | +#ifdef linux |
2198 | +# include <malloc.h> |
2199 | + |
2200 | +/* |
2201 | + * Kernel version 2.6.36 and higher has |
2202 | + * new OOM killer adjustment interface. |
2203 | + */ |
2204 | +# define OOM_ADJ_FILE_OLD "/proc/self/oom_adj" |
2205 | +# define OOM_ADJ_FILE "/proc/self/oom_score_adj" |
2206 | + |
2207 | +/* From linux/oom.h */ |
2208 | +/* Old interface */ |
2209 | +# define OOM_DISABLE (-17) |
2210 | +# define OOM_ADJUST_MIN (-16) |
2211 | +/* New interface */ |
2212 | +# define OOM_SCORE_ADJ_MIN (-1000) |
2213 | + |
2214 | +/* Systemd on-demand activation support */ |
2215 | +# define SD_LISTEN_PID_ENV_VAR_NAME "LISTEN_PID" |
2216 | +# define SD_LISTEN_FDS_ENV_VAR_NAME "LISTEN_FDS" |
2217 | +# define SD_LISTEN_FDS_START 3 |
2218 | +# define SD_FD_FIFO_SERVER SD_LISTEN_FDS_START |
2219 | +# define SD_FD_FIFO_CLIENT (SD_LISTEN_FDS_START + 1) |
2220 | + |
2221 | +#endif |
2222 | + |
2223 | +/* FIXME We use syslog for now, because multilog is not yet implemented */ |
2224 | +#include <syslog.h> |
2225 | + |
2226 | +static volatile sig_atomic_t _exit_now = 0; /* set to '1' when signal is given to exit */ |
2227 | +static volatile sig_atomic_t _thread_registries_empty = 1; /* registries are empty initially */ |
2228 | + |
2229 | +/* List (un)link macros. */ |
2230 | +#define LINK(x, head) dm_list_add(head, &(x)->list) |
2231 | +#define LINK_DSO(dso) LINK(dso, &_dso_registry) |
2232 | +#define LINK_THREAD(thread) LINK(thread, &_thread_registry) |
2233 | + |
2234 | +#define UNLINK(x) dm_list_del(&(x)->list) |
2235 | +#define UNLINK_DSO(x) UNLINK(x) |
2236 | +#define UNLINK_THREAD(x) UNLINK(x) |
2237 | + |
2238 | +#define DAEMON_NAME "dmeventd" |
2239 | + |
2240 | +/* |
2241 | + Global mutex for thread list access. Has to be held when: |
2242 | + - iterating thread list |
2243 | + - adding or removing elements from thread list |
2244 | + - changing or reading thread_status's fields: |
2245 | + processing, status, events |
2246 | + Use _lock_mutex() and _unlock_mutex() to hold/release it |
2247 | +*/ |
2248 | +static pthread_mutex_t _global_mutex; |
2249 | + |
2250 | +/* |
2251 | + There are three states a thread can attain (see struct |
2252 | + thread_status, field int status): |
2253 | + |
2254 | + - DM_THREAD_RUNNING: thread has started up and is either working or |
2255 | + waiting for events... transitions to either SHUTDOWN or DONE |
2256 | + - DM_THREAD_SHUTDOWN: thread is still doing something, but it is |
2257 | + supposed to terminate (and transition to DONE) as soon as it |
2258 | + finishes whatever it was doing at the point of flipping state to |
2259 | + SHUTDOWN... the thread is still on the thread list |
2260 | + - DM_THREAD_DONE: thread has terminated and has been moved over to |
2261 | + unused thread list, cleanup pending |
2262 | + */ |
2263 | +#define DM_THREAD_RUNNING 0 |
2264 | +#define DM_THREAD_SHUTDOWN 1 |
2265 | +#define DM_THREAD_DONE 2 |
2266 | + |
2267 | +#define THREAD_STACK_SIZE (300*1024) |
2268 | + |
2269 | +int dmeventd_debug = 0; |
2270 | +static int _systemd_activation = 0; |
2271 | +static int _foreground = 0; |
2272 | +static int _restart = 0; |
2273 | +static char **_initial_registrations = 0; |
2274 | + |
2275 | +/* Data kept about a DSO. */ |
2276 | +struct dso_data { |
2277 | + struct dm_list list; |
2278 | + |
2279 | + char *dso_name; /* DSO name (eg, "evms", "dmraid", "lvm2"). */ |
2280 | + |
2281 | + void *dso_handle; /* Opaque handle as returned from dlopen(). */ |
2282 | + unsigned int ref_count; /* Library reference count. */ |
2283 | + |
2284 | + /* |
2285 | + * Event processing. |
2286 | + * |
2287 | + * The DSO can do whatever appropriate steps if an event |
2288 | + * happens such as changing the mapping in case a mirror |
2289 | + * fails, update the application metadata etc. |
2290 | + * |
2291 | + * This function gets a dm_task that is a result of |
2292 | + * DM_DEVICE_WAITEVENT ioctl (results equivalent to |
2293 | + * DM_DEVICE_STATUS). It should not destroy it. |
2294 | + * The caller must dispose of the task. |
2295 | + */ |
2296 | + void (*process_event)(struct dm_task *dmt, enum dm_event_mask event, void **user); |
2297 | + |
2298 | + /* |
2299 | + * Device registration. |
2300 | + * |
2301 | + * When an application registers a device for an event, the DSO |
2302 | + * can carry out appropriate steps so that a later call to |
2303 | + * the process_event() function is sane (eg, read metadata |
2304 | + * and activate a mapping). |
2305 | + */ |
2306 | + int (*register_device)(const char *device, const char *uuid, int major, |
2307 | + int minor, void **user); |
2308 | + |
2309 | + /* |
2310 | + * Device unregistration. |
2311 | + * |
2312 | + * In case all devices of a mapping (eg, RAID10) are unregistered |
2313 | + * for events, the DSO can recognize this and carry out appropriate |
2314 | + * steps (eg, deactivate mapping, metadata update). |
2315 | + */ |
2316 | + int (*unregister_device)(const char *device, const char *uuid, |
2317 | + int major, int minor, void **user); |
2318 | +}; |
2319 | +static DM_LIST_INIT(_dso_registry); |
2320 | + |
2321 | +/* Structure to keep parsed register variables from client message. */ |
2322 | +struct message_data { |
2323 | + char *id; |
2324 | + char *dso_name; /* Name of DSO. */ |
2325 | + char *device_uuid; /* Mapped device path. */ |
2326 | + union { |
2327 | + char *str; /* Events string as fetched from message. */ |
2328 | + enum dm_event_mask field; /* Events bitfield. */ |
2329 | + } events; |
2330 | + union { |
2331 | + char *str; |
2332 | + uint32_t secs; |
2333 | + } timeout; |
2334 | + struct dm_event_daemon_message *msg; /* Pointer to message buffer. */ |
2335 | +}; |
2336 | + |
2337 | +/* |
2338 | + * Housekeeping of thread+device states. |
2339 | + * |
2340 | + * One thread per mapped device which can block on it until an event |
2341 | + * occurs and the event processing function of the DSO gets called. |
2342 | + */ |
2343 | +struct thread_status { |
2344 | + struct dm_list list; |
2345 | + |
2346 | + pthread_t thread; |
2347 | + |
2348 | + struct dso_data *dso_data; /* DSO this thread accesses. */ |
2349 | + |
2350 | + struct { |
2351 | + char *uuid; |
2352 | + char *name; |
2353 | + int major, minor; |
2354 | + } device; |
2355 | + uint32_t event_nr; /* event number */ |
2356 | + int processing; /* Set when event is being processed */ |
2357 | + |
2358 | + int status; /* see DM_THREAD_{RUNNING,SHUTDOWN,DONE} |
2359 | + constants above */ |
2360 | + enum dm_event_mask events; /* bitfield for event filter. */ |
2361 | + enum dm_event_mask current_events; /* bitfield for occurred events. */ |
2362 | + struct dm_task *current_task; |
2363 | + time_t next_time; |
2364 | + uint32_t timeout; |
2365 | + struct dm_list timeout_list; |
2366 | + void *dso_private; /* dso per-thread status variable */ |
2367 | +}; |
2368 | +static DM_LIST_INIT(_thread_registry); |
2369 | +static DM_LIST_INIT(_thread_registry_unused); |
2370 | + |
2371 | +static int _timeout_running; |
2372 | +static DM_LIST_INIT(_timeout_registry); |
2373 | +static pthread_mutex_t _timeout_mutex = PTHREAD_MUTEX_INITIALIZER; |
2374 | +static pthread_cond_t _timeout_cond = PTHREAD_COND_INITIALIZER; |
2375 | + |
2376 | +/* Allocate/free the status structure for a monitoring thread. */ |
2377 | +static struct thread_status *_alloc_thread_status(struct message_data *data, |
2378 | + struct dso_data *dso_data) |
2379 | +{ |
2380 | + struct thread_status *ret = (typeof(ret)) dm_zalloc(sizeof(*ret)); |
2381 | + |
2382 | + if (!ret) |
2383 | + return NULL; |
2384 | + |
2385 | + if (!(ret->device.uuid = dm_strdup(data->device_uuid))) { |
2386 | + dm_free(ret); |
2387 | + return NULL; |
2388 | + } |
2389 | + |
2390 | + ret->current_task = NULL; |
2391 | + ret->device.name = NULL; |
2392 | + ret->device.major = ret->device.minor = 0; |
2393 | + ret->dso_data = dso_data; |
2394 | + ret->events = data->events.field; |
2395 | + ret->timeout = data->timeout.secs; |
2396 | + dm_list_init(&ret->timeout_list); |
2397 | + |
2398 | + return ret; |
2399 | +} |
2400 | + |
2401 | +static void _lib_put(struct dso_data *data); |
2402 | +static void _free_thread_status(struct thread_status *thread) |
2403 | +{ |
2404 | + _lib_put(thread->dso_data); |
2405 | + if (thread->current_task) |
2406 | + dm_task_destroy(thread->current_task); |
2407 | + dm_free(thread->device.uuid); |
2408 | + dm_free(thread->device.name); |
2409 | + dm_free(thread); |
2410 | +} |
2411 | + |
2412 | +/* Allocate/free DSO data. */ |
2413 | +static struct dso_data *_alloc_dso_data(struct message_data *data) |
2414 | +{ |
2415 | + struct dso_data *ret = (typeof(ret)) dm_zalloc(sizeof(*ret)); |
2416 | + |
2417 | + if (!ret) |
2418 | + return NULL; |
2419 | + |
2420 | + if (!(ret->dso_name = dm_strdup(data->dso_name))) { |
2421 | + dm_free(ret); |
2422 | + return NULL; |
2423 | + } |
2424 | + |
2425 | + return ret; |
2426 | +} |
2427 | + |
2428 | +/* Create a device monitoring thread. */ |
2429 | +static int _pthread_create_smallstack(pthread_t *t, void *(*fun)(void *), void *arg) |
2430 | +{ |
2431 | + pthread_attr_t attr; |
2432 | + pthread_attr_init(&attr); |
2433 | + /* |
2434 | + * We use a smaller stack since it gets preallocated in its entirety |
2435 | + */ |
2436 | + pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE); |
2437 | + return pthread_create(t, &attr, fun, arg); |
2438 | +} |
2439 | + |
2440 | +static void _free_dso_data(struct dso_data *data) |
2441 | +{ |
2442 | + dm_free(data->dso_name); |
2443 | + dm_free(data); |
2444 | +} |
2445 | + |
2446 | +/* |
2447 | + * Fetch a string from *src and duplicate it into *ptr. |
2448 | + * Pay attention to zero-length strings. |
2449 | + */ |
2450 | +/* FIXME? move to libdevmapper to share with the client lib (need to |
2451 | + make delimiter a parameter then) */ |
2452 | +static int _fetch_string(char **ptr, char **src, const int delimiter) |
2453 | +{ |
2454 | + int ret = 0; |
2455 | + char *p; |
2456 | + size_t len; |
2457 | + |
2458 | + if ((p = strchr(*src, delimiter))) |
2459 | + *p = 0; |
2460 | + |
2461 | + if ((*ptr = dm_strdup(*src))) { |
2462 | + if ((len = strlen(*ptr))) |
2463 | + *src += len; |
2464 | + else { |
2465 | + dm_free(*ptr); |
2466 | + *ptr = NULL; |
2467 | + } |
2468 | + |
2469 | + (*src)++; |
2470 | + ret = 1; |
2471 | + } |
2472 | + |
2473 | + if (p) |
2474 | + *p = delimiter; |
2475 | + |
2476 | + return ret; |
2477 | +} |
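The parsing approach above (split on a delimiter, treat a zero-length field as NULL, advance the cursor) can be sketched in standalone form; `fetch_token` is a hypothetical stand-in using plain `strndup` instead of `dm_strdup`, and it guards the cursor advance at the end of the string rather than reproducing the daemon's unconditional `(*src)++`:

```c
#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>

/* Copy the token before `delimiter` out of *src into *ptr and advance
 * *src past it.  Zero-length tokens yield *ptr == NULL, mirroring
 * _fetch_string().  Returns 1 on success, 0 on allocation failure. */
static int fetch_token(char **ptr, char **src, int delimiter)
{
	char *p = strchr(*src, delimiter);
	size_t len = p ? (size_t) (p - *src) : strlen(*src);

	*ptr = NULL;
	if (len && !(*ptr = strndup(*src, len)))
		return 0;
	*src += len;
	if (**src)   /* skip the delimiter itself, if present */
		(*src)++;
	return 1;
}
```

Applied repeatedly with `' '`, this walks the register message field by field, which is exactly how `_parse_message()` below consumes id, DSO name, UUID, events, and timeout.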
2478 | + |
2479 | +/* Free message memory. */ |
2480 | +static void _free_message(struct message_data *message_data) |
2481 | +{ |
2482 | + dm_free(message_data->id); |
2483 | + dm_free(message_data->dso_name); |
2484 | + |
2485 | + dm_free(message_data->device_uuid); |
2486 | + |
2487 | +} |
2488 | + |
2489 | +/* Parse a register message from the client. */ |
2490 | +static int _parse_message(struct message_data *message_data) |
2491 | +{ |
2492 | + int ret = 0; |
2493 | + char *p = message_data->msg->data; |
2494 | + struct dm_event_daemon_message *msg = message_data->msg; |
2495 | + |
2496 | + if (!msg->data) |
2497 | + return 0; |
2498 | + |
2499 | + /* |
2500 | + * Retrieve application identifier, mapped device |
2501 | + * path and events # string from message. |
2502 | + */ |
2503 | + if (_fetch_string(&message_data->id, &p, ' ') && |
2504 | + _fetch_string(&message_data->dso_name, &p, ' ') && |
2505 | + _fetch_string(&message_data->device_uuid, &p, ' ') && |
2506 | + _fetch_string(&message_data->events.str, &p, ' ') && |
2507 | + _fetch_string(&message_data->timeout.str, &p, ' ')) { |
2508 | + if (message_data->events.str) { |
2509 | + enum dm_event_mask i = atoi(message_data->events.str); |
2510 | + |
2511 | + /* |
2512 | + * Free string representation of events. |
2513 | + * Not needed any more. |
2514 | + */ |
2515 | + dm_free(message_data->events.str); |
2516 | + message_data->events.field = i; |
2517 | + } |
2518 | + if (message_data->timeout.str) { |
2519 | + uint32_t secs = atoi(message_data->timeout.str); |
2520 | + dm_free(message_data->timeout.str); |
2521 | + message_data->timeout.secs = secs ? secs : |
2522 | + DM_EVENT_DEFAULT_TIMEOUT; |
2523 | + } |
2524 | + |
2525 | + ret = 1; |
2526 | + } |
2527 | + |
2528 | + dm_free(msg->data); |
2529 | + msg->data = NULL; |
2530 | + msg->size = 0; |
2531 | + return ret; |
2532 | +}; |
2533 | + |
2534 | +/* Global mutex to lock access to lists et al. See _global_mutex |
2535 | + above. */ |
2536 | +static int _lock_mutex(void) |
2537 | +{ |
2538 | + return pthread_mutex_lock(&_global_mutex); |
2539 | +} |
2540 | + |
2541 | +static int _unlock_mutex(void) |
2542 | +{ |
2543 | + return pthread_mutex_unlock(&_global_mutex); |
2544 | +} |
2545 | + |
2546 | +/* Check whether a device exists. */ |
2547 | +static int _fill_device_data(struct thread_status *ts) |
2548 | +{ |
2549 | + struct dm_task *dmt; |
2550 | + struct dm_info dmi; |
2551 | + |
2552 | + if (!ts->device.uuid) |
2553 | + return 0; |
2554 | + |
2555 | + ts->device.name = NULL; |
2556 | + ts->device.major = ts->device.minor = 0; |
2557 | + |
2558 | + dmt = dm_task_create(DM_DEVICE_INFO); |
2559 | + if (!dmt) |
2560 | + return 0; |
2561 | + |
2562 | + if (!dm_task_set_uuid(dmt, ts->device.uuid)) |
2563 | + goto fail; |
2564 | + |
2565 | + if (!dm_task_run(dmt)) |
2566 | + goto fail; |
2567 | + |
2568 | + ts->device.name = dm_strdup(dm_task_get_name(dmt)); |
2569 | + if (!ts->device.name) |
2570 | + goto fail; |
2571 | + |
2572 | + if (!dm_task_get_info(dmt, &dmi)) |
2573 | + goto fail; |
2574 | + |
2575 | + ts->device.major = dmi.major; |
2576 | + ts->device.minor = dmi.minor; |
2577 | + |
2578 | + dm_task_destroy(dmt); |
2579 | + return 1; |
2580 | + |
2581 | + fail: |
2582 | + dm_task_destroy(dmt); |
2583 | + dm_free(ts->device.name); |
2584 | + return 0; |
2585 | +} |
2586 | + |
2587 | +/* |
2588 | + * Find an existing thread for a device. |
2589 | + * |
2590 | + * Mutex must be held when calling this. |
2591 | + */ |
2592 | +static struct thread_status *_lookup_thread_status(struct message_data *data) |
2593 | +{ |
2594 | + struct thread_status *thread; |
2595 | + |
2596 | + dm_list_iterate_items(thread, &_thread_registry) |
2597 | + if (!strcmp(data->device_uuid, thread->device.uuid)) |
2598 | + return thread; |
2599 | + |
2600 | + return NULL; |
2601 | +} |
2602 | + |
2603 | +static int _get_status(struct message_data *message_data) |
2604 | +{ |
2605 | + struct dm_event_daemon_message *msg = message_data->msg; |
2606 | + struct thread_status *thread; |
2607 | + int i, j; |
2608 | + int ret = -1; |
2609 | + int count = dm_list_size(&_thread_registry); |
2610 | + int size = 0, current = 0; |
2611 | + char *buffers[count]; |
2612 | + char *message; |
2613 | + |
2614 | + dm_free(msg->data); |
2615 | + |
2616 | + for (i = 0; i < count; ++i) |
2617 | + buffers[i] = NULL; |
2618 | + |
2619 | + i = 0; |
2620 | + _lock_mutex(); |
2621 | + dm_list_iterate_items(thread, &_thread_registry) { |
2622 | + if ((current = dm_asprintf(buffers + i, "0:%d %s %s %u %" PRIu32 ";", |
2623 | + i, thread->dso_data->dso_name, |
2624 | + thread->device.uuid, thread->events, |
2625 | + thread->timeout)) < 0) { |
2626 | + _unlock_mutex(); |
2627 | + goto out; |
2628 | + } |
2629 | + ++ i; |
2630 | + size += current; |
2631 | + } |
2632 | + _unlock_mutex(); |
2633 | + |
2634 | + msg->size = size + strlen(message_data->id) + 1; |
2635 | + msg->data = dm_malloc(msg->size); |
2636 | + if (!msg->data) |
2637 | + goto out; |
2638 | + *msg->data = 0; |
2639 | + |
2640 | + message = msg->data; |
2641 | + strcpy(message, message_data->id); |
2642 | + message += strlen(message_data->id); |
2643 | + *message = ' '; |
2644 | + message ++; |
2645 | + for (j = 0; j < i; ++j) { |
2646 | + strcpy(message, buffers[j]); |
2647 | + message += strlen(buffers[j]); |
2648 | + } |
2649 | + |
2650 | + ret = 0; |
2651 | + out: |
2652 | + for (j = 0; j < i; ++j) |
2653 | + dm_free(buffers[j]); |
2654 | + return ret; |
2655 | + |
2656 | +} |
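`_get_status()` uses a two-pass scheme: format each per-thread record into its own buffer while holding the lock, sum the sizes, then allocate once and concatenate after the request id. A minimal sketch of that assembly step, with hypothetical names and plain libc allocation:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Join "id " with a series of pre-formatted per-thread records, the way
 * _get_status() sizes the reply before copying.  Caller frees the result. */
static char *join_status(const char *id, char *const recs[], int n)
{
	size_t size = strlen(id) + 1;   /* id plus the separating space */
	char *msg, *p;
	int i;

	for (i = 0; i < n; i++)
		size += strlen(recs[i]);
	if (!(msg = malloc(size + 1)))
		return NULL;
	p = msg;
	p += sprintf(p, "%s ", id);
	for (i = 0; i < n; i++)
		p += sprintf(p, "%s", recs[i]);
	return msg;
}
```

Sizing first keeps the critical section short: the lock is only needed while reading the registry, not while building the final reply.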
2657 | + |
2658 | +/* Cleanup at exit. */ |
2659 | +static void _exit_dm_lib(void) |
2660 | +{ |
2661 | + dm_lib_release(); |
2662 | + dm_lib_exit(); |
2663 | +} |
2664 | + |
2665 | +static void _exit_timeout(void *unused __attribute__((unused))) |
2666 | +{ |
2667 | + _timeout_running = 0; |
2668 | + pthread_mutex_unlock(&_timeout_mutex); |
2669 | +} |
2670 | + |
2671 | +/* Wake up monitor threads every so often. */ |
2672 | +static void *_timeout_thread(void *unused __attribute__((unused))) |
2673 | +{ |
2674 | + struct timespec timeout; |
2675 | + time_t curr_time; |
2676 | + |
2677 | + timeout.tv_nsec = 0; |
2678 | + pthread_cleanup_push(_exit_timeout, NULL); |
2679 | + pthread_mutex_lock(&_timeout_mutex); |
2680 | + |
2681 | + while (!dm_list_empty(&_timeout_registry)) { |
2682 | + struct thread_status *thread; |
2683 | + |
2684 | + timeout.tv_sec = 0; |
2685 | + curr_time = time(NULL); |
2686 | + |
2687 | + dm_list_iterate_items_gen(thread, &_timeout_registry, timeout_list) { |
2688 | + if (thread->next_time <= curr_time) { |
2689 | + thread->next_time = curr_time + thread->timeout; |
2690 | + pthread_kill(thread->thread, SIGALRM); |
2691 | + } |
2692 | + |
2693 | + if (thread->next_time < timeout.tv_sec || !timeout.tv_sec) |
2694 | + timeout.tv_sec = thread->next_time; |
2695 | + } |
2696 | + |
2697 | + pthread_cond_timedwait(&_timeout_cond, &_timeout_mutex, |
2698 | + &timeout); |
2699 | + } |
2700 | + |
2701 | + pthread_cleanup_pop(1); |
2702 | + |
2703 | + return NULL; |
2704 | +} |
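Note that `pthread_cond_timedwait()` takes an *absolute* deadline, which is why the loop above stores `thread->next_time` as wall-clock seconds rather than a relative delay. A self-contained sketch of the same call pattern (the `wait_until` helper is hypothetical):

```c
#include <errno.h>
#include <pthread.h>
#include <time.h>

/* Wait on `c` until `secs_from_now` seconds past the current time, or
 * until signalled.  Returns ETIMEDOUT when the deadline passes with no
 * signal, matching how the timeout thread sleeps between wakeups. */
static int wait_until(pthread_mutex_t *m, pthread_cond_t *c,
		      time_t secs_from_now)
{
	struct timespec ts = { .tv_sec = time(NULL) + secs_from_now,
			       .tv_nsec = 0 };
	int r;

	pthread_mutex_lock(m);
	r = pthread_cond_timedwait(c, m, &ts);
	pthread_mutex_unlock(m);
	return r;
}
```

A signal on the condition variable (as `_register_for_timeout()` sends) returns early with 0, letting the timeout thread re-scan the registry for the new, possibly earlier, deadline.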
2705 | + |
2706 | +static int _register_for_timeout(struct thread_status *thread) |
2707 | +{ |
2708 | + int ret = 0; |
2709 | + |
2710 | + pthread_mutex_lock(&_timeout_mutex); |
2711 | + |
2712 | + thread->next_time = time(NULL) + thread->timeout; |
2713 | + |
2714 | + if (dm_list_empty(&thread->timeout_list)) { |
2715 | + dm_list_add(&_timeout_registry, &thread->timeout_list); |
2716 | + if (_timeout_running) |
2717 | + pthread_cond_signal(&_timeout_cond); |
2718 | + } |
2719 | + |
2720 | + if (!_timeout_running) { |
2721 | + pthread_t timeout_id; |
2722 | + |
2723 | + if (!(ret = -_pthread_create_smallstack(&timeout_id, _timeout_thread, NULL))) |
2724 | + _timeout_running = 1; |
2725 | + } |
2726 | + |
2727 | + pthread_mutex_unlock(&_timeout_mutex); |
2728 | + |
2729 | + return ret; |
2730 | +} |
2731 | + |
2732 | +static void _unregister_for_timeout(struct thread_status *thread) |
2733 | +{ |
2734 | + pthread_mutex_lock(&_timeout_mutex); |
2735 | + if (!dm_list_empty(&thread->timeout_list)) { |
2736 | + dm_list_del(&thread->timeout_list); |
2737 | + dm_list_init(&thread->timeout_list); |
2738 | + } |
2739 | + pthread_mutex_unlock(&_timeout_mutex); |
2740 | +} |
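The membership test above works because a `dm_list` node initialized to point at itself reads as "empty", i.e. not on any list. A minimal circular doubly-linked list showing that idiom (names here are illustrative, not libdevmapper's):

```c
/* Minimal circular doubly-linked list, shaped like libdevmapper's dm_list:
 * a self-linked node means "not on any list", which is exactly how
 * _unregister_for_timeout() tests membership before deleting. */
struct node { struct node *n, *p; };

static void list_init(struct node *h) { h->n = h->p = h; }
static int  list_empty(const struct node *h) { return h->n == h; }

static void list_add(struct node *head, struct node *e)
{
	e->n = head;
	e->p = head->p;
	head->p->n = e;
	head->p = e;
}

static void list_del(struct node *e)
{
	e->p->n = e->n;
	e->n->p = e->p;
	list_init(e);   /* self-link again: safe to test or delete twice */
}
```

Re-initializing on delete is the key detail: it makes unregistering idempotent, so a thread can be taken off the timeout registry without tracking elsewhere whether it was ever on it.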
2741 | + |
2742 | +__attribute__((format(printf, 4, 5))) |
2743 | +static void _no_intr_log(int level, const char *file, int line, |
2744 | + const char *f, ...) |
2745 | +{ |
2746 | + va_list ap; |
2747 | + |
2748 | + if (errno == EINTR) |
2749 | + return; |
2750 | + if (level > _LOG_WARN) |
2751 | + return; |
2752 | + |
2753 | + va_start(ap, f); |
2754 | + |
2755 | + if (level < _LOG_WARN) |
2756 | + vfprintf(stderr, f, ap); |
2757 | + else |
2758 | + vprintf(f, ap); |
2759 | + |
2760 | + va_end(ap); |
2761 | + |
2762 | + if (level < _LOG_WARN) |
2763 | + fprintf(stderr, "\n"); |
2764 | + else |
2765 | + fprintf(stdout, "\n"); |
2766 | +} |
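The logger above forwards its `va_list` to `vfprintf()`/`vprintf()`; the same forwarding pattern can be shown in testable form by capturing into a buffer instead (the `log_to_buf` name is hypothetical):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Forward a printf-style argument list the way _no_intr_log() hands its
 * va_list to vfprintf(); captured into a buffer so it can be checked.
 * The format attribute gives the same compile-time checking as above. */
__attribute__((format(printf, 3, 4)))
static int log_to_buf(char *buf, size_t size, const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(buf, size, fmt, ap);
	va_end(ap);
	return n;
}
```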
2767 | + |
2768 | +static sigset_t _unblock_sigalrm(void) |
2769 | +{ |
2770 | + sigset_t set, old; |
2771 | + |
2772 | + sigemptyset(&set); |
2773 | + sigaddset(&set, SIGALRM); |
2774 | + pthread_sigmask(SIG_UNBLOCK, &set, &old); |
2775 | + return old; |
2776 | +} |
2777 | + |
2778 | +#define DM_WAIT_RETRY 0 |
2779 | +#define DM_WAIT_INTR 1 |
2780 | +#define DM_WAIT_FATAL 2 |
2781 | + |
2782 | +/* Wait on a device until an event occurs. */ |
2783 | +static int _event_wait(struct thread_status *thread, struct dm_task **task) |
2784 | +{ |
2785 | + sigset_t set; |
2786 | + int ret = DM_WAIT_RETRY; |
2787 | + struct dm_task *dmt; |
2788 | + struct dm_info info; |
2789 | + |
2790 | + *task = 0; |
2791 | + |
2792 | + if (!(dmt = dm_task_create(DM_DEVICE_WAITEVENT))) |
2793 | + return DM_WAIT_RETRY; |
2794 | + |
2795 | + thread->current_task = dmt; |
2796 | + |
2797 | + if (!dm_task_set_uuid(dmt, thread->device.uuid) || |
2798 | + !dm_task_set_event_nr(dmt, thread->event_nr)) |
2799 | + goto out; |
2800 | + |
2801 | + /* |
2802 | + * This is so that you can break out of waiting on an event, |
2803 | + * either for a timeout event, or to cancel the thread. |
2804 | + */ |
2805 | + set = _unblock_sigalrm(); |
2806 | + dm_log_init(_no_intr_log); |
2807 | + errno = 0; |
2808 | + if (dm_task_run(dmt)) { |
2809 | + thread->current_events |= DM_EVENT_DEVICE_ERROR; |
2810 | + ret = DM_WAIT_INTR; |
2811 | + |
2812 | + if ((ret = dm_task_get_info(dmt, &info))) |
2813 | + thread->event_nr = info.event_nr; |
2814 | + } else if (thread->events & DM_EVENT_TIMEOUT && errno == EINTR) { |
2815 | + thread->current_events |= DM_EVENT_TIMEOUT; |
2816 | + ret = DM_WAIT_INTR; |
2817 | + } else if (thread->status == DM_THREAD_SHUTDOWN && errno == EINTR) { |
2818 | + ret = DM_WAIT_FATAL; |
2819 | + } else { |
2820 | + syslog(LOG_NOTICE, "dm_task_run failed, errno = %d, %s", |
2821 | + errno, strerror(errno)); |
2822 | + if (errno == ENXIO) { |
2823 | + syslog(LOG_ERR, "%s disappeared, detaching", |
2824 | + thread->device.name); |
2825 | + ret = DM_WAIT_FATAL; |
2826 | + } |
2827 | + } |
2828 | + |
2829 | + pthread_sigmask(SIG_SETMASK, &set, NULL); |
2830 | + dm_log_init(NULL); |
2831 | + |
2832 | + out: |
2833 | + if (ret == DM_WAIT_FATAL || ret == DM_WAIT_RETRY) { |
2834 | + dm_task_destroy(dmt); |
2835 | + thread->current_task = NULL; |
2836 | + } else |
2837 | + *task = dmt; |
2838 | + |
2839 | + return ret; |
2840 | +} |
2841 | + |
2842 | +/* Register a device with the DSO. */ |
2843 | +static int _do_register_device(struct thread_status *thread) |
2844 | +{ |
2845 | + return thread->dso_data->register_device(thread->device.name, |
2846 | + thread->device.uuid, |
2847 | + thread->device.major, |
2848 | + thread->device.minor, |
2849 | + &(thread->dso_private)); |
2850 | +} |
2851 | + |
2852 | +/* Unregister a device with the DSO. */ |
2853 | +static int _do_unregister_device(struct thread_status *thread) |
2854 | +{ |
2855 | + return thread->dso_data->unregister_device(thread->device.name, |
2856 | + thread->device.uuid, |
2857 | + thread->device.major, |
2858 | + thread->device.minor, |
2859 | + &(thread->dso_private)); |
2860 | +} |
2861 | + |
2862 | +/* Process an event in the DSO. */ |
2863 | +static void _do_process_event(struct thread_status *thread, struct dm_task *task) |
2864 | +{ |
2865 | + thread->dso_data->process_event(task, thread->current_events, &(thread->dso_private)); |
2866 | +} |
2867 | + |
2868 | +/* Thread cleanup handler to unregister device. */ |
2869 | +static void _monitor_unregister(void *arg) |
2870 | +{ |
2871 | + struct thread_status *thread = arg, *thread_iter; |
2872 | + |
2873 | + if (!_do_unregister_device(thread)) |
2874 | + syslog(LOG_ERR, "%s: %s unregister failed\n", __func__, |
2875 | + thread->device.name); |
2876 | + if (thread->current_task) |
2877 | + dm_task_destroy(thread->current_task); |
2878 | + thread->current_task = NULL; |
2879 | + |
2880 | + _lock_mutex(); |
2881 | + if (thread->events & DM_EVENT_TIMEOUT) { |
2882 | + /* _unregister_for_timeout locks another mutex, we |
2883 | + don't want to deadlock so we release our mutex for |
2884 | + a bit */ |
2885 | + _unlock_mutex(); |
2886 | + _unregister_for_timeout(thread); |
2887 | + _lock_mutex(); |
2888 | + } |
2889 | + /* we may have been relinked to unused registry since we were |
2890 | + called, so check that */ |
2891 | + dm_list_iterate_items(thread_iter, &_thread_registry_unused) |
2892 | + if (thread_iter == thread) { |
2893 | + thread->status = DM_THREAD_DONE; |
2894 | + _unlock_mutex(); |
2895 | + return; |
2896 | + } |
2897 | + thread->status = DM_THREAD_DONE; |
2898 | + pthread_mutex_lock(&_timeout_mutex); |
2899 | + UNLINK_THREAD(thread); |
2900 | + LINK(thread, &_thread_registry_unused); |
2901 | + pthread_mutex_unlock(&_timeout_mutex); |
2902 | + _unlock_mutex(); |
2903 | +} |
2904 | + |
2905 | +static struct dm_task *_get_device_status(struct thread_status *ts) |
2906 | +{ |
2907 | + struct dm_task *dmt = dm_task_create(DM_DEVICE_STATUS); |
2908 | + |
2909 | + if (!dmt) |
2910 | + return NULL; |
2911 | + |
2912 | + if (!dm_task_set_uuid(dmt, ts->device.uuid)) { |
2913 | + dm_task_destroy(dmt); |
2914 | + return NULL; |
2915 | + } |
2916 | + |
2917 | + if (!dm_task_run(dmt)) { |
2918 | + dm_task_destroy(dmt); |
2919 | + return NULL; |
2920 | + } |
2921 | + |
2922 | + return dmt; |
2923 | +} |
2924 | + |
2925 | +/* Device monitoring thread. */ |
2926 | +static void *_monitor_thread(void *arg) |
2927 | +{ |
2928 | + struct thread_status *thread = arg; |
2929 | + int wait_error = 0; |
2930 | + struct dm_task *task; |
2931 | + |
2932 | + pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL); |
2933 | + pthread_cleanup_push(_monitor_unregister, thread); |
2934 | + |
2935 | + /* Wait for do_process_request() to finish its task. */ |
2936 | + _lock_mutex(); |
2937 | + thread->status = DM_THREAD_RUNNING; |
2938 | + _unlock_mutex(); |
2939 | + |
2940 | + /* Loop forever awaiting/analyzing device events. */ |
2941 | + while (1) { |
2942 | + thread->current_events = 0; |
2943 | + |
2944 | + wait_error = _event_wait(thread, &task); |
2945 | + if (wait_error == DM_WAIT_RETRY) |
2946 | + continue; |
2947 | + |
2948 | + if (wait_error == DM_WAIT_FATAL) |
2949 | + break; |
2950 | + |
2951 | + /* A timeout occurred, so the task is not properly filled in. |
2952 | + * Fetch the device status here for the DSO to process. |
2953 | + */ |
2954 | + if (wait_error == DM_WAIT_INTR && |
2955 | + thread->current_events & DM_EVENT_TIMEOUT) { |
2956 | + dm_task_destroy(task); |
2957 | + task = _get_device_status(thread); |
2958 | + /* FIXME: syslog fail here ? */ |
2959 | + if (!(thread->current_task = task)) |
2960 | + continue; |
2961 | + } |
2962 | + |
2963 | + /* |
2964 | + * We know that wait succeeded and stored a |
2965 | + * pointer to dm_task with device status into task. |
2966 | + */ |
2967 | + |
2968 | + /* |
2969 | + * Check against filter. |
2970 | + * |
2971 | + * If there's current events delivered from _event_wait() AND |
2972 | + * the device got registered for those events AND |
2973 | + * those events haven't been processed yet, call |
2974 | + * the DSO's process_event() handler. |
2975 | + */ |
2976 | + _lock_mutex(); |
2977 | + if (thread->status == DM_THREAD_SHUTDOWN) { |
2978 | + _unlock_mutex(); |
2979 | + break; |
2980 | + } |
2981 | + _unlock_mutex(); |
2982 | + |
2983 | + if (thread->events & thread->current_events) { |
2984 | + _lock_mutex(); |
2985 | + thread->processing = 1; |
2986 | + _unlock_mutex(); |
2987 | + |
2988 | + _do_process_event(thread, task); |
2989 | + dm_task_destroy(task); |
2990 | + thread->current_task = NULL; |
2991 | + |
2992 | + _lock_mutex(); |
2993 | + thread->processing = 0; |
2994 | + _unlock_mutex(); |
2995 | + } else { |
2996 | + dm_task_destroy(task); |
2997 | + thread->current_task = NULL; |
2998 | + } |
2999 | + } |
3000 | + |
3001 | + pthread_cleanup_pop(1); |
3002 | + |
3003 | + return NULL; |
3004 | +} |
3005 | + |
3006 | +/* Create a device monitoring thread. */ |
3007 | +static int _create_thread(struct thread_status *thread) |
3008 | +{ |
3009 | + return _pthread_create_smallstack(&thread->thread, _monitor_thread, thread); |
3010 | +} |
3011 | + |
3012 | +static int _terminate_thread(struct thread_status *thread) |
3013 | +{ |
3014 | + return pthread_kill(thread->thread, SIGALRM); |
3015 | +} |
3016 | + |
3017 | +/* DSO reference counting. Call with _global_mutex locked! */ |
3018 | +static void _lib_get(struct dso_data *data) |
3019 | +{ |
3020 | + data->ref_count++; |
3021 | +} |
3022 | + |
3023 | +static void _lib_put(struct dso_data *data) |
3024 | +{ |
3025 | + if (!--data->ref_count) { |
3026 | + dlclose(data->dso_handle); |
3027 | + UNLINK_DSO(data); |
3028 | + _free_dso_data(data); |
3029 | + } |
3030 | +} |
3031 | + |
3032 | +/* Find DSO data. */ |
3033 | +static struct dso_data *_lookup_dso(struct message_data *data) |
3034 | +{ |
3035 | + struct dso_data *dso_data, *ret = NULL; |
3036 | + |
3037 | + dm_list_iterate_items(dso_data, &_dso_registry) |
3038 | + if (!strcmp(data->dso_name, dso_data->dso_name)) { |
3039 | + _lib_get(dso_data); |
3040 | + ret = dso_data; |
3041 | + break; |
3042 | + } |
3043 | + |
3044 | + return ret; |
3045 | +} |
3046 | + |
3047 | +/* Lookup DSO symbols we need. */ |
3048 | +static int _lookup_symbol(void *dl, void **symbol, const char *name) |
3049 | +{ |
3050 | + if ((*symbol = dlsym(dl, name))) |
3051 | + return 1; |
3052 | + |
3053 | + return 0; |
3054 | +} |
3055 | + |
3056 | +static int lookup_symbols(void *dl, struct dso_data *data) |
3057 | +{ |
3058 | + return _lookup_symbol(dl, (void *) &data->process_event, |
3059 | + "process_event") && |
3060 | + _lookup_symbol(dl, (void *) &data->register_device, |
3061 | + "register_device") && |
3062 | + _lookup_symbol(dl, (void *) &data->unregister_device, |
3063 | + "unregister_device"); |
3064 | +} |
3065 | + |
3066 | +/* Load an application specific DSO. */ |
3067 | +static struct dso_data *_load_dso(struct message_data *data) |
3068 | +{ |
3069 | + void *dl; |
3070 | + struct dso_data *ret = NULL; |
3071 | + |
3072 | + if (!(dl = dlopen(data->dso_name, RTLD_NOW))) { |
3073 | + const char *dlerr = dlerror(); |
3074 | + syslog(LOG_ERR, "dmeventd %s dlopen failed: %s", data->dso_name, |
3075 | + dlerr); |
3076 | + data->msg->size = |
3077 | + dm_asprintf(&(data->msg->data), "%s %s dlopen failed: %s", |
3078 | + data->id, data->dso_name, dlerr); |
3079 | + return NULL; |
3080 | + } |
3081 | + |
3082 | + if (!(ret = _alloc_dso_data(data))) { |
3083 | + dlclose(dl); |
3084 | + return NULL; |
3085 | + } |
3086 | + |
3087 | + if (!(lookup_symbols(dl, ret))) { |
3088 | + _free_dso_data(ret); |
3089 | + dlclose(dl); |
3090 | + return NULL; |
3091 | + } |
3092 | + |
3093 | + /* |
3094 | + * Keep handle to close the library once |
3095 | + * we've got no references to it any more. |
3096 | + */ |
3097 | + ret->dso_handle = dl; |
3098 | + _lib_get(ret); |
3099 | + |
3100 | + _lock_mutex(); |
3101 | + LINK_DSO(ret); |
3102 | + _unlock_mutex(); |
3103 | + |
3104 | + return ret; |
3105 | +} |
3106 | + |
3107 | +/* Return success on daemon active check. */ |
3108 | +static int _active(struct message_data *message_data) |
3109 | +{ |
3110 | + return 0; |
3111 | +} |
3112 | + |
3113 | +/* |
3114 | + * Register for an event. |
3115 | + * |
3116 | + * Only one caller at a time here, because we use |
3117 | + * a FIFO and lock it against multiple accesses. |
3118 | + */ |
3119 | +static int _register_for_event(struct message_data *message_data) |
3120 | +{ |
3121 | + int ret = 0; |
3122 | + struct thread_status *thread, *thread_new = NULL; |
3123 | + struct dso_data *dso_data; |
3124 | + |
3125 | + if (!(dso_data = _lookup_dso(message_data)) && |
3126 | + !(dso_data = _load_dso(message_data))) { |
3127 | + stack; |
3128 | +#ifdef ELIBACC |
3129 | + ret = -ELIBACC; |
3130 | +#else |
3131 | + ret = -ENODEV; |
3132 | +#endif |
3133 | + goto out; |
3134 | + } |
3135 | + |
3136 | + /* Preallocate thread status struct to avoid deadlock. */ |
3137 | + if (!(thread_new = _alloc_thread_status(message_data, dso_data))) { |
3138 | + stack; |
3139 | + ret = -ENOMEM; |
3140 | + goto out; |
3141 | + } |
3142 | + |
3143 | + if (!_fill_device_data(thread_new)) { |
3144 | + stack; |
3145 | + ret = -ENODEV; |
3146 | + goto out; |
3147 | + } |
3148 | + |
3149 | + _lock_mutex(); |
3150 | + |
3151 | + /* If creation of timeout thread fails (as it may), we fail |
3152 | + here completely. The client is responsible for either |
3153 | + retrying later or trying to register without timeout |
3154 | + events. However, if timeout thread cannot be started, it |
3155 | + usually means we are so starved on resources that we are |
3156 | + almost as good as dead already... */ |
3157 | + if (thread_new->events & DM_EVENT_TIMEOUT) { |
3158 | + ret = -_register_for_timeout(thread_new); |
3159 | + if (ret) |
3160 | + goto outth; |
3161 | + } |
3162 | + |
3163 | + if (!(thread = _lookup_thread_status(message_data))) { |
3164 | + _unlock_mutex(); |
3165 | + |
3166 | + if (!(ret = _do_register_device(thread_new))) |
3167 | + goto out; |
3168 | + |
3169 | + thread = thread_new; |
3170 | + thread_new = NULL; |
3171 | + |
3172 | + /* Try to create the monitoring thread for this device. */ |
3173 | + _lock_mutex(); |
3174 | + if ((ret = -_create_thread(thread))) { |
3175 | + _unlock_mutex(); |
3176 | + _do_unregister_device(thread); |
3177 | + _free_thread_status(thread); |
3178 | + goto out; |
3179 | + } else |
3180 | + LINK_THREAD(thread); |
3181 | + } |
3182 | + |
3183 | + /* OR the event # into the events bitfield. */ |
3184 | + thread->events |= message_data->events.field; |
3185 | + |
3186 | + outth: |
3187 | + _unlock_mutex(); |
3188 | + |
3189 | + out: |
3190 | + /* |
3191 | + * Deallocate thread status after releasing |
3192 | + * the lock in case we haven't used it. |
3193 | + */ |
3194 | + if (thread_new) |
3195 | + _free_thread_status(thread_new); |
3196 | + |
3197 | + return ret; |
3198 | +} |
3199 | + |
3200 | +/* |
3201 | + * Unregister for an event. |
3202 | + * |
3203 | + * Only one caller at a time here as with register_for_event(). |
3204 | + */ |
3205 | +static int _unregister_for_event(struct message_data *message_data) |
3206 | +{ |
3207 | + int ret = 0; |
3208 | + struct thread_status *thread; |
3209 | + |
3210 | + /* |
3211 | + * Clear event in bitfield and deactivate |
3212 | + * monitoring thread in case bitfield is 0. |
3213 | + */ |
3214 | + _lock_mutex(); |
3215 | + |
3216 | + if (!(thread = _lookup_thread_status(message_data))) { |
3217 | + _unlock_mutex(); |
3218 | + ret = -ENODEV; |
3219 | + goto out; |
3220 | + } |
3221 | + |
3222 | + if (thread->status == DM_THREAD_DONE) { |
3223 | + /* the thread has terminated while we were not |
3224 | + watching */ |
3225 | + _unlock_mutex(); |
3226 | + return 0; |
3227 | + } |
3228 | + |
3229 | + thread->events &= ~message_data->events.field; |
3230 | + |
3231 | + if (!(thread->events & DM_EVENT_TIMEOUT)) |
3232 | + _unregister_for_timeout(thread); |
3233 | + /* |
3234 | + * If no events remain to monitor on this device, |
3235 | + * unlink and terminate its monitoring thread. |
3236 | + */ |
3237 | + if (!thread->events) { |
3238 | + pthread_mutex_lock(&_timeout_mutex); |
3239 | + UNLINK_THREAD(thread); |
3240 | + LINK(thread, &_thread_registry_unused); |
3241 | + pthread_mutex_unlock(&_timeout_mutex); |
3242 | + } |
3243 | + _unlock_mutex(); |
3244 | + |
3245 | + out: |
3246 | + return ret; |
3247 | +} |
3248 | + |
3249 | +/* |
3250 | + * Get registered device. |
3251 | + * |
3252 | + * Only one caller at a time here as with register_for_event(). |
3253 | + */ |
3254 | +static int _registered_device(struct message_data *message_data, |
3255 | + struct thread_status *thread) |
3256 | +{ |
3257 | + struct dm_event_daemon_message *msg = message_data->msg; |
3258 | + |
3259 | + const char *fmt = "%s %s %s %u"; |
3260 | + const char *id = message_data->id; |
3261 | + const char *dso = thread->dso_data->dso_name; |
3262 | + const char *dev = thread->device.uuid; |
3263 | + int r; |
3264 | + unsigned events = ((thread->status == DM_THREAD_RUNNING) |
3265 | + && (thread->events)) ? thread->events : thread-> |
3266 | + events | DM_EVENT_REGISTRATION_PENDING; |
3267 | + |
3268 | + dm_free(msg->data); |
3269 | + |
3270 | + if ((r = dm_asprintf(&(msg->data), fmt, id, dso, dev, events)) < 0) { |
3271 | + msg->size = 0; |
3272 | + return -ENOMEM; |
3273 | + } |
3274 | + |
3275 | + msg->size = (uint32_t) r; |
3276 | + |
3277 | + return 0; |
3278 | +} |
3279 | + |
3280 | +static int _want_registered_device(char *dso_name, char *device_uuid, |
3281 | + struct thread_status *thread) |
3282 | +{ |
3283 | + /* If DSO names and device paths are equal. */ |
3284 | + if (dso_name && device_uuid) |
3285 | + return !strcmp(dso_name, thread->dso_data->dso_name) && |
3286 | + !strcmp(device_uuid, thread->device.uuid) && |
3287 | + (thread->status == DM_THREAD_RUNNING || |
3288 | + (thread->events & DM_EVENT_REGISTRATION_PENDING)); |
3289 | + |
3290 | + /* If DSO names are equal. */ |
3291 | + if (dso_name) |
3292 | + return !strcmp(dso_name, thread->dso_data->dso_name) && |
3293 | + (thread->status == DM_THREAD_RUNNING || |
3294 | + (thread->events & DM_EVENT_REGISTRATION_PENDING)); |
3295 | + |
3296 | + /* If device paths are equal. */ |
3297 | + if (device_uuid) |
3298 | + return !strcmp(device_uuid, thread->device.uuid) && |
3299 | + (thread->status == DM_THREAD_RUNNING || |
3300 | + (thread->events & DM_EVENT_REGISTRATION_PENDING)); |
3301 | + |
3302 | + return 1; |
3303 | +} |
3304 | + |
3305 | +static int _get_registered_dev(struct message_data *message_data, int next) |
3306 | +{ |
3307 | + struct thread_status *thread, *hit = NULL; |
3308 | + int ret = -ENOENT; |
3309 | + |
3310 | + _lock_mutex(); |
3311 | + |
3312 | + /* Iterate list of threads checking if we want a particular one. */ |
3313 | + dm_list_iterate_items(thread, &_thread_registry) |
3314 | + if (_want_registered_device(message_data->dso_name, |
3315 | + message_data->device_uuid, |
3316 | + thread)) { |
3317 | + hit = thread; |
3318 | + break; |
3319 | + } |
3320 | + |
3321 | + /* |
3322 | + * If we got a registered device and want the next one -> |
3323 | + * fetch next conforming element off the list. |
3324 | + */ |
3325 | + if (hit && !next) |
3326 | + goto reg; |
3327 | + |
3328 | + if (!hit) |
3329 | + goto out; |
3330 | + |
3331 | + while (1) { |
3332 | + if (dm_list_end(&_thread_registry, &thread->list)) |
3333 | + goto out; |
3334 | + |
3335 | + thread = dm_list_item(thread->list.n, struct thread_status); |
3336 | + if (_want_registered_device(message_data->dso_name, NULL, thread)) { |
3337 | + hit = thread; |
3338 | + break; |
3339 | + } |
3340 | + } |
3341 | + |
3342 | + reg: |
3343 | + ret = _registered_device(message_data, hit); |
3344 | + |
3345 | + out: |
3346 | + _unlock_mutex(); |
3347 | + |
3348 | + return ret; |
3349 | +} |
3350 | + |
3351 | +static int _get_registered_device(struct message_data *message_data) |
3352 | +{ |
3353 | + return _get_registered_dev(message_data, 0); |
3354 | +} |
3355 | + |
3356 | +static int _get_next_registered_device(struct message_data *message_data) |
3357 | +{ |
3358 | + return _get_registered_dev(message_data, 1); |
3359 | +} |
3360 | + |
3361 | +static int _set_timeout(struct message_data *message_data) |
3362 | +{ |
3363 | + struct thread_status *thread; |
3364 | + |
3365 | + _lock_mutex(); |
3366 | + if ((thread = _lookup_thread_status(message_data))) |
3367 | + thread->timeout = message_data->timeout.secs; |
3368 | + _unlock_mutex(); |
3369 | + |
3370 | + return thread ? 0 : -ENODEV; |
3371 | +} |
3372 | + |
3373 | +static int _get_timeout(struct message_data *message_data) |
3374 | +{ |
3375 | + struct thread_status *thread; |
3376 | + struct dm_event_daemon_message *msg = message_data->msg; |
3377 | + |
3378 | + dm_free(msg->data); |
3379 | + |
3380 | + _lock_mutex(); |
3381 | + if ((thread = _lookup_thread_status(message_data))) { |
3382 | + msg->size = |
3383 | + dm_asprintf(&(msg->data), "%s %" PRIu32, message_data->id, |
3384 | + thread->timeout); |
3385 | + } else { |
3386 | + msg->data = NULL; |
3387 | + msg->size = 0; |
3388 | + } |
3389 | + _unlock_mutex(); |
3390 | + |
3391 | + return thread ? 0 : -ENODEV; |
3392 | +} |
3393 | + |
3394 | +/* Initialize a fifos structure with path names. */ |
3395 | +static void _init_fifos(struct dm_event_fifos *fifos) |
3396 | +{ |
3397 | + memset(fifos, 0, sizeof(*fifos)); |
3398 | + |
3399 | + fifos->client_path = DM_EVENT_FIFO_CLIENT; |
3400 | + fifos->server_path = DM_EVENT_FIFO_SERVER; |
3401 | +} |
3402 | + |
3403 | +/* Open fifos used for client communication. */ |
3404 | +static int _open_fifos(struct dm_event_fifos *fifos) |
3405 | +{ |
3406 | + struct stat st; |
3407 | + |
3408 | + /* Create client fifo. */ |
3409 | + (void) dm_prepare_selinux_context(fifos->client_path, S_IFIFO); |
3410 | + if ((mkfifo(fifos->client_path, 0600) == -1) && errno != EEXIST) { |
3411 | + syslog(LOG_ERR, "%s: Failed to create client fifo %s: %m.\n", |
3412 | + __func__, fifos->client_path); |
3413 | + (void) dm_prepare_selinux_context(NULL, 0); |
3414 | + return 0; |
3415 | + } |
3416 | + |
3417 | + /* Create server fifo. */ |
3418 | + (void) dm_prepare_selinux_context(fifos->server_path, S_IFIFO); |
3419 | + if ((mkfifo(fifos->server_path, 0600) == -1) && errno != EEXIST) { |
3420 | + syslog(LOG_ERR, "%s: Failed to create server fifo %s: %m.\n", |
3421 | + __func__, fifos->server_path); |
3422 | + (void) dm_prepare_selinux_context(NULL, 0); |
3423 | + return 0; |
3424 | + } |
3425 | + |
3426 | + (void) dm_prepare_selinux_context(NULL, 0); |
3427 | + |
3428 | + /* Warn about wrong permissions if applicable */ |
3429 | + if ((!stat(fifos->client_path, &st)) && (st.st_mode & 0777) != 0600) |
3430 | + syslog(LOG_WARNING, "Fixing wrong permissions on %s: %m.\n", |
3431 | + fifos->client_path); |
3432 | + |
3433 | + if ((!stat(fifos->server_path, &st)) && (st.st_mode & 0777) != 0600) |
3434 | + syslog(LOG_WARNING, "Fixing wrong permissions on %s: %m.\n", |
3435 | + fifos->server_path); |
3436 | + |
3437 | + /* If they were already there, make sure permissions are ok. */ |
3438 | + if (chmod(fifos->client_path, 0600)) { |
3439 | + syslog(LOG_ERR, "Unable to set correct file permissions on %s: %m.\n", |
3440 | + fifos->client_path); |
3441 | + return 0; |
3442 | + } |
3443 | + |
3444 | + if (chmod(fifos->server_path, 0600)) { |
3445 | + syslog(LOG_ERR, "Unable to set correct file permissions on %s: %m.\n", |
3446 | + fifos->server_path); |
3447 | + return 0; |
3448 | + } |
3449 | + |
3450 | + /* Need to open read+write or we will block or fail */ |
3451 | + if ((fifos->server = open(fifos->server_path, O_RDWR)) < 0) { |
3452 | + syslog(LOG_ERR, "Failed to open fifo server %s: %m.\n", |
3453 | + fifos->server_path); |
3454 | + return 0; |
3455 | + } |
3456 | + |
3457 | + /* Need to open read+write for select() to work. */ |
3458 | + if ((fifos->client = open(fifos->client_path, O_RDWR)) < 0) { |
3459 | + syslog(LOG_ERR, "Failed to open fifo client %s: %m", fifos->client_path); |
3460 | + if (close(fifos->server)) |
3461 | + syslog(LOG_ERR, "Failed to close fifo server %s: %m", fifos->server_path); |
3462 | + return 0; |
3463 | + } |
3464 | + |
3465 | + return 1; |
3466 | +} |
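The `_open_fifos()` hunk above relies on a Linux-specific trick: opening a FIFO `O_RDWR` never blocks waiting for a peer, so the daemon can open both endpoints before any client exists. A minimal sketch of that pattern (the helper name and scratch path are illustrative, not part of dmeventd):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create (if needed) and open a FIFO read+write.  On Linux an O_RDWR
 * open of a FIFO does not block waiting for the other end, which is
 * exactly why _open_fifos() opens both fifos this way. */
static int open_fifo_rdwr(const char *path)
{
	/* EEXIST is fine: the fifo may survive from a previous run. */
	if (mkfifo(path, 0600) == -1 && errno != EEXIST)
		return -1;

	return open(path, O_RDWR);
}
```

Because the descriptor is open for both reading and writing, data written to it can be read back on the same fd, which also makes the trick easy to test in isolation.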
3467 | + |
3468 | +/* |
3469 | + * Read message from client making sure that data is available |
3470 | + * and a complete message is read. Must not block indefinitely. |
3471 | + */ |
3472 | +static int _client_read(struct dm_event_fifos *fifos, |
3473 | + struct dm_event_daemon_message *msg) |
3474 | +{ |
3475 | + struct timeval t; |
3476 | + unsigned bytes = 0; |
3477 | + int ret = 0; |
3478 | + fd_set fds; |
3479 | + size_t size = 2 * sizeof(uint32_t); /* status + size */ |
3480 | + uint32_t *header = alloca(size); |
3481 | + char *buf = (char *)header; |
3482 | + |
3483 | + msg->data = NULL; |
3484 | + |
3485 | + errno = 0; |
3486 | + while (bytes < size && errno != EOF) { |
3487 | + /* Watch client read FIFO for input. */ |
3488 | + FD_ZERO(&fds); |
3489 | + FD_SET(fifos->client, &fds); |
3490 | + t.tv_sec = 1; |
3491 | + t.tv_usec = 0; |
3492 | + ret = select(fifos->client + 1, &fds, NULL, NULL, &t); |
3493 | + |
3494 | + if (!ret && !bytes) /* nothing to read */ |
3495 | + return 0; |
3496 | + |
3497 | + if (!ret) /* trying to finish read */ |
3498 | + continue; |
3499 | + |
3500 | + if (ret < 0) /* error */ |
3501 | + return 0; |
3502 | + |
3503 | + ret = read(fifos->client, buf + bytes, size - bytes); |
3504 | + bytes += ret > 0 ? ret : 0; |
3505 | + if (header && (bytes == 2 * sizeof(uint32_t))) { |
3506 | + msg->cmd = ntohl(header[0]); |
3507 | + msg->size = ntohl(header[1]); |
3508 | + buf = msg->data = dm_malloc(msg->size); |
3509 | + size = msg->size; |
3510 | + bytes = 0; |
3511 | + header = 0; |
3512 | + } |
3513 | + } |
3514 | + |
3515 | + if (bytes != size) { |
3516 | + dm_free(msg->data); |
3517 | + msg->data = NULL; |
3518 | + msg->size = 0; |
3519 | + } |
3520 | + |
3521 | + return bytes == size; |
3522 | +} |
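`_client_read()` above first reads a fixed-size header of two `uint32_t` values in network byte order (command, then payload size), and only then allocates and reads the payload. A self-contained sketch of that wire framing (function names here are illustrative):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Pack a cmd + payload-size header in network byte order, as the
 * dmeventd FIFO protocol does before the payload bytes follow. */
static void pack_header(unsigned char *buf, uint32_t cmd, uint32_t size)
{
	uint32_t n;

	n = htonl(cmd);
	memcpy(buf, &n, sizeof(n));
	n = htonl(size);
	memcpy(buf + sizeof(n), &n, sizeof(n));
}

/* Reverse of pack_header(): recover cmd and size from the 8-byte header. */
static void unpack_header(const unsigned char *buf, uint32_t *cmd,
			  uint32_t *size)
{
	uint32_t n;

	memcpy(&n, buf, sizeof(n));
	*cmd = ntohl(n);
	memcpy(&n, buf + sizeof(n), sizeof(n));
	*size = ntohl(n);
}
```

Using `memcpy` rather than pointer casts keeps the sketch free of alignment assumptions; the daemon itself reads into an aligned `uint32_t` header buffer instead.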
3523 | + |
3524 | +/* |
3525 | + * Write a message to the client making sure that it is ready to write. |
3526 | + */ |
3527 | +static int _client_write(struct dm_event_fifos *fifos, |
3528 | + struct dm_event_daemon_message *msg) |
3529 | +{ |
3530 | + unsigned bytes = 0; |
3531 | + int ret = 0; |
3532 | + fd_set fds; |
3533 | + |
3534 | + size_t size = 2 * sizeof(uint32_t) + msg->size; |
3535 | + uint32_t *header = alloca(size); |
3536 | + char *buf = (char *)header; |
3537 | + |
3538 | + header[0] = htonl(msg->cmd); |
3539 | + header[1] = htonl(msg->size); |
3540 | + if (msg->data) |
3541 | + memcpy(buf + 2 * sizeof(uint32_t), msg->data, msg->size); |
3542 | + |
3543 | + errno = 0; |
3544 | + while (bytes < size && errno != EIO) { |
3545 | + do { |
3546 | + /* Watch client write FIFO to be ready for output. */ |
3547 | + FD_ZERO(&fds); |
3548 | + FD_SET(fifos->server, &fds); |
3549 | + } while (select(fifos->server + 1, NULL, &fds, NULL, NULL) != |
3550 | + 1); |
3551 | + |
3552 | + ret = write(fifos->server, buf + bytes, size - bytes); |
3553 | + bytes += ret > 0 ? ret : 0; |
3554 | + } |
3555 | + |
3556 | + return bytes == size; |
3557 | +} |
3558 | + |
3559 | +/* |
3560 | + * Handle a client request. |
3561 | + * |
3562 | + * We put the request handling functions into |
3563 | + * a list because of the growing number. |
3564 | + */ |
3565 | +static int _handle_request(struct dm_event_daemon_message *msg, |
3566 | + struct message_data *message_data) |
3567 | +{ |
3568 | + static struct request { |
3569 | + unsigned int cmd; |
3570 | + int (*f)(struct message_data *); |
3571 | + } requests[] = { |
3572 | + { DM_EVENT_CMD_REGISTER_FOR_EVENT, _register_for_event}, |
3573 | + { DM_EVENT_CMD_UNREGISTER_FOR_EVENT, _unregister_for_event}, |
3574 | + { DM_EVENT_CMD_GET_REGISTERED_DEVICE, _get_registered_device}, |
3575 | + { DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE, |
3576 | + _get_next_registered_device}, |
3577 | + { DM_EVENT_CMD_SET_TIMEOUT, _set_timeout}, |
3578 | + { DM_EVENT_CMD_GET_TIMEOUT, _get_timeout}, |
3579 | + { DM_EVENT_CMD_ACTIVE, _active}, |
3580 | + { DM_EVENT_CMD_GET_STATUS, _get_status}, |
3581 | + }, *req; |
3582 | + |
3583 | + for (req = requests; req < requests + sizeof(requests) / sizeof(struct request); req++) |
3584 | + if (req->cmd == msg->cmd) |
3585 | + return req->f(message_data); |
3586 | + |
3587 | + return -EINVAL; |
3588 | +} |
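`_handle_request()` above dispatches through a static array of `{cmd, handler}` pairs scanned linearly, returning `-EINVAL` for unknown commands. The same table-dispatch shape, reduced to a toy (the commands and handlers below are made up for illustration):

```c
#include <errno.h>

struct request {
	unsigned cmd;
	int (*f)(int);
};

static int cmd_double(int x) { return 2 * x; }
static int cmd_negate(int x) { return -x; }

static const struct request requests[] = {
	{ 1, cmd_double },
	{ 2, cmd_negate },
};

/* Linear scan of the table, mirroring _handle_request(); an
 * unrecognised command yields -EINVAL just as the daemon does. */
static int dispatch(unsigned cmd, int arg)
{
	const struct request *req;

	for (req = requests;
	     req < requests + sizeof(requests) / sizeof(*requests); req++)
		if (req->cmd == cmd)
			return req->f(arg);

	return -EINVAL;
}
```

A linear scan is fine here because the table is tiny and cold; the comment in the original ("a list because of the growing number") is about maintainability, not lookup speed.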
3589 | + |
3590 | +/* Process a request passed from the communication thread. */ |
3591 | +static int _do_process_request(struct dm_event_daemon_message *msg) |
3592 | +{ |
3593 | + int ret; |
3594 | + char *answer; |
3595 | + static struct message_data message_data; |
3596 | + |
3597 | + /* Parse the message. */ |
3598 | + memset(&message_data, 0, sizeof(message_data)); |
3599 | + message_data.msg = msg; |
3600 | + if (msg->cmd == DM_EVENT_CMD_HELLO || msg->cmd == DM_EVENT_CMD_DIE) { |
3601 | + ret = 0; |
3602 | + answer = msg->data; |
3603 | + if (answer) { |
3604 | + msg->size = dm_asprintf(&(msg->data), "%s %s %d", answer, |
3605 | + msg->cmd == DM_EVENT_CMD_DIE ? "DYING" : "HELLO", |
3606 | + DM_EVENT_PROTOCOL_VERSION); |
3607 | + dm_free(answer); |
3608 | + } else { |
3609 | + msg->size = 0; |
3610 | + msg->data = NULL; |
3611 | + } |
3612 | + } else if (msg->cmd != DM_EVENT_CMD_ACTIVE && !_parse_message(&message_data)) { |
3613 | + stack; |
3614 | + ret = -EINVAL; |
3615 | + } else |
3616 | + ret = _handle_request(msg, &message_data); |
3617 | + |
3618 | + msg->cmd = ret; |
3619 | + if (!msg->data) |
3620 | + msg->size = dm_asprintf(&(msg->data), "%s %s", message_data.id, strerror(-ret)); |
3621 | + |
3622 | + _free_message(&message_data); |
3623 | + |
3624 | + return ret; |
3625 | +} |
3626 | + |
3627 | +/* Only one caller at a time. */ |
3628 | +static void _process_request(struct dm_event_fifos *fifos) |
3629 | +{ |
3630 | + int die = 0; |
3631 | + struct dm_event_daemon_message msg; |
3632 | + |
3633 | + memset(&msg, 0, sizeof(msg)); |
3634 | + |
3635 | + /* |
3636 | + * Read the request from the client (client_read, client_write |
3637 | + * give true on success and false on failure). |
3638 | + */ |
3639 | + if (!_client_read(fifos, &msg)) |
3640 | + return; |
3641 | + |
3642 | + if (msg.cmd == DM_EVENT_CMD_DIE) |
3643 | + die = 1; |
3644 | + |
3645 | + /* _do_process_request fills in msg (if memory allows for |
3646 | + data, otherwise just cmd and size = 0) */ |
3647 | + _do_process_request(&msg); |
3648 | + |
3649 | + if (!_client_write(fifos, &msg)) |
3650 | + stack; |
3651 | + |
3652 | + dm_free(msg.data); |
3653 | + |
3654 | + if (die) raise(9); |
3655 | +} |
3656 | + |
3657 | +static void _process_initial_registrations(void) |
3658 | +{ |
3659 | + int i = 0; |
3660 | + char *reg; |
3661 | + struct dm_event_daemon_message msg = { 0, 0, NULL }; |
3662 | + |
3663 | + while ((reg = _initial_registrations[i])) { |
3664 | + msg.cmd = DM_EVENT_CMD_REGISTER_FOR_EVENT; |
3665 | + if ((msg.size = strlen(reg))) { |
3666 | + msg.data = reg; |
3667 | + _do_process_request(&msg); |
3668 | + } |
3669 | + ++ i; |
3670 | + } |
3671 | +} |
3672 | + |
3673 | +static void _cleanup_unused_threads(void) |
3674 | +{ |
3675 | + int ret; |
3676 | + struct dm_list *l; |
3677 | + struct thread_status *thread; |
3678 | + int join_ret = 0; |
3679 | + |
3680 | + _lock_mutex(); |
3681 | + while ((l = dm_list_first(&_thread_registry_unused))) { |
3682 | + thread = dm_list_item(l, struct thread_status); |
3683 | + if (thread->processing) |
3684 | + break; /* cleanup on the next round */ |
3685 | + |
3686 | + if (thread->status == DM_THREAD_RUNNING) { |
3687 | + thread->status = DM_THREAD_SHUTDOWN; |
3688 | + break; |
3689 | + } |
3690 | + |
3691 | + if (thread->status == DM_THREAD_SHUTDOWN) { |
3692 | + if (!thread->events) { |
3693 | + /* turn codes negative -- should we be returning this? */ |
3694 | + ret = _terminate_thread(thread); |
3695 | + |
3696 | + if (ret == ESRCH) { |
3697 | + thread->status = DM_THREAD_DONE; |
3698 | + } else if (ret) { |
3699 | + syslog(LOG_ERR, |
3700 | + "Unable to terminate thread: %s\n", |
3701 | + strerror(-ret)); |
3702 | + stack; |
3703 | + } |
3704 | + break; |
3705 | + } |
3706 | + |
3707 | + dm_list_del(l); |
3708 | + syslog(LOG_ERR, |
3709 | + "thread can't be on unused list unless !thread->events"); |
3710 | + thread->status = DM_THREAD_RUNNING; |
3711 | + LINK_THREAD(thread); |
3712 | + |
3713 | + continue; |
3714 | + } |
3715 | + |
3716 | + if (thread->status == DM_THREAD_DONE) { |
3717 | + dm_list_del(l); |
3718 | + join_ret = pthread_join(thread->thread, NULL); |
3719 | + _free_thread_status(thread); |
3720 | + } |
3721 | + } |
3722 | + |
3723 | + _unlock_mutex(); |
3724 | + |
3725 | + if (join_ret) |
3726 | + syslog(LOG_ERR, "Failed pthread_join: %s\n", strerror(join_ret)); |
3727 | +} |
3728 | + |
3729 | +static void _sig_alarm(int signum __attribute__((unused))) |
3730 | +{ |
3731 | + pthread_testcancel(); |
3732 | +} |
3733 | + |
3734 | +/* Init thread signal handling. */ |
3735 | +static void _init_thread_signals(void) |
3736 | +{ |
3737 | + sigset_t my_sigset; |
3738 | + struct sigaction act; |
3739 | + |
3740 | + memset(&act, 0, sizeof(act)); |
3741 | + act.sa_handler = _sig_alarm; |
3742 | + sigaction(SIGALRM, &act, NULL); |
3743 | + sigfillset(&my_sigset); |
3744 | + |
3745 | + /* These are used for exiting */ |
3746 | + sigdelset(&my_sigset, SIGTERM); |
3747 | + sigdelset(&my_sigset, SIGINT); |
3748 | + sigdelset(&my_sigset, SIGHUP); |
3749 | + sigdelset(&my_sigset, SIGQUIT); |
3750 | + |
3751 | + pthread_sigmask(SIG_BLOCK, &my_sigset, NULL); |
3752 | +} |
3753 | + |
3754 | +/* |
3755 | + * exit_handler |
3756 | + * @sig |
3757 | + * |
3758 | + * Set the global variable which the process should |
3759 | + * be watching to determine when to exit. |
3760 | + */ |
3761 | +static void _exit_handler(int sig __attribute__((unused))) |
3762 | +{ |
3763 | + /* |
3764 | + * We exit when '_exit_now' is set. |
3765 | + * That is, when a signal has been received. |
3766 | + * |
3767 | + * We can not simply set '_exit_now' unless all |
3768 | + * threads are done processing. |
3769 | + */ |
3770 | + if (!_thread_registries_empty) { |
3771 | + syslog(LOG_ERR, "There are still devices being monitored."); |
3772 | + syslog(LOG_ERR, "Refusing to exit."); |
3773 | + } else |
3774 | + _exit_now = 1; |
3775 | + |
3776 | +} |
3777 | + |
3778 | +#ifdef linux |
3779 | +static int _set_oom_adj(const char *oom_adj_path, int val) |
3780 | +{ |
3781 | + FILE *fp; |
3782 | + |
3783 | + if (!(fp = fopen(oom_adj_path, "w"))) { |
3784 | + perror("oom_adj: fopen failed"); |
3785 | + return 0; |
3786 | + } |
3787 | + |
3788 | + fprintf(fp, "%i", val); |
3789 | + |
3790 | + if (dm_fclose(fp)) |
3791 | + perror("oom_adj: fclose failed"); |
3792 | + |
3793 | + return 1; |
3794 | +} |
3795 | + |
3796 | +/* |
3797 | + * Protection against OOM killer if kernel supports it |
3798 | + */ |
3799 | +static int _protect_against_oom_killer(void) |
3800 | +{ |
3801 | + struct stat st; |
3802 | + |
3803 | + if (stat(OOM_ADJ_FILE, &st) == -1) { |
3804 | + if (errno != ENOENT) |
3805 | + perror(OOM_ADJ_FILE ": stat failed"); |
3806 | + |
3807 | + /* Try old oom_adj interface as a fallback */ |
3808 | + if (stat(OOM_ADJ_FILE_OLD, &st) == -1) { |
3809 | + if (errno == ENOENT) |
3810 | + perror(OOM_ADJ_FILE_OLD " not found"); |
3811 | + else |
3812 | + perror(OOM_ADJ_FILE_OLD ": stat failed"); |
3813 | + return 1; |
3814 | + } |
3815 | + |
3816 | + return _set_oom_adj(OOM_ADJ_FILE_OLD, OOM_DISABLE) || |
3817 | + _set_oom_adj(OOM_ADJ_FILE_OLD, OOM_ADJUST_MIN); |
3818 | + } |
3819 | + |
3820 | + return _set_oom_adj(OOM_ADJ_FILE, OOM_SCORE_ADJ_MIN); |
3821 | +} |
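`_set_oom_adj()` above is just "print one integer into a procfs file, and treat a failed `fclose` as a write failure". A sketch of that idiom against an ordinary file path (the path used in the test is scratch space, not the real `/proc` interface):

```c
#include <stdio.h>

/* Write an integer to a procfs-style file, as _set_oom_adj() does for
 * oom_score_adj.  Returns 1 on success, 0 on failure; fclose() is
 * checked because procfs writes are often only flushed at close. */
static int write_int_file(const char *path, int val)
{
	FILE *fp;

	if (!(fp = fopen(path, "w")))
		return 0;

	fprintf(fp, "%i", val);

	if (fclose(fp))
		return 0;

	return 1;
}
```

Note the fallback order in `_protect_against_oom_killer()`: the newer `oom_score_adj` file is preferred, with the deprecated `oom_adj` (and its `OOM_DISABLE`/`OOM_ADJUST_MIN` values) tried only when the new file is absent.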
3822 | + |
3823 | +static int _handle_preloaded_fifo(int fd, const char *path) |
3824 | +{ |
3825 | + struct stat st_fd, st_path; |
3826 | + int flags; |
3827 | + |
3828 | + if ((flags = fcntl(fd, F_GETFD)) < 0) |
3829 | + return 0; |
3830 | + |
3831 | + if (flags & FD_CLOEXEC) |
3832 | + return 0; |
3833 | + |
3834 | + if (fstat(fd, &st_fd) < 0 || !S_ISFIFO(st_fd.st_mode)) |
3835 | + return 0; |
3836 | + |
3837 | + if (stat(path, &st_path) < 0 || |
3838 | + st_path.st_dev != st_fd.st_dev || |
3839 | + st_path.st_ino != st_fd.st_ino) |
3840 | + return 0; |
3841 | + |
3842 | + if (fcntl(fd, F_SETFD, flags | FD_CLOEXEC) < 0) |
3843 | + return 0; |
3844 | + |
3845 | + return 1; |
3846 | +} |
3847 | + |
3848 | +static int _systemd_handover(struct dm_event_fifos *fifos) |
3849 | +{ |
3850 | + const char *e; |
3851 | + char *p; |
3852 | + unsigned long env_pid, env_listen_fds; |
3853 | + int r = 0; |
3854 | + |
3855 | + memset(fifos, 0, sizeof(*fifos)); |
3856 | + |
3857 | + /* LISTEN_PID must be equal to our PID! */ |
3858 | + if (!(e = getenv(SD_LISTEN_PID_ENV_VAR_NAME))) |
3859 | + goto out; |
3860 | + |
3861 | + errno = 0; |
3862 | + env_pid = strtoul(e, &p, 10); |
3863 | + if (errno || !p || *p || env_pid <= 0 || |
3864 | + getpid() != (pid_t) env_pid) |
3865 | + goto out; |
3866 | + |
3867 | + /* LISTEN_FDS must be 2 and the fds must be FIFOs! */ |
3868 | + if (!(e = getenv(SD_LISTEN_FDS_ENV_VAR_NAME))) |
3869 | + goto out; |
3870 | + |
3871 | + errno = 0; |
3872 | + env_listen_fds = strtoul(e, &p, 10); |
3873 | + if (errno || !p || *p || env_listen_fds != 2) |
3874 | + goto out; |
3875 | + |
3876 | + /* Check and handle the FIFOs passed in */ |
3877 | + r = (_handle_preloaded_fifo(SD_FD_FIFO_SERVER, DM_EVENT_FIFO_SERVER) && |
3878 | + _handle_preloaded_fifo(SD_FD_FIFO_CLIENT, DM_EVENT_FIFO_CLIENT)); |
3879 | + |
3880 | + if (r) { |
3881 | + fifos->server = SD_FD_FIFO_SERVER; |
3882 | + fifos->server_path = DM_EVENT_FIFO_SERVER; |
3883 | + fifos->client = SD_FD_FIFO_CLIENT; |
3884 | + fifos->client_path = DM_EVENT_FIFO_CLIENT; |
3885 | + } |
3886 | + |
3887 | +out: |
3888 | + unsetenv(SD_LISTEN_PID_ENV_VAR_NAME); |
3889 | + unsetenv(SD_LISTEN_FDS_ENV_VAR_NAME); |
3890 | + return r; |
3891 | +} |
3892 | +#endif |
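The socket-activation handover above hinges on strict parsing of `LISTEN_PID` and `LISTEN_FDS`: `strtoul` must consume the whole string with no error, and `LISTEN_PID` must match our own PID, otherwise the fds were meant for someone else. The parsing half, factored out as a hedged sketch (the helper name is mine, not dmeventd's):

```c
#include <errno.h>
#include <stdlib.h>

/* Strictly parse a decimal string the way _systemd_handover() parses
 * LISTEN_PID / LISTEN_FDS: reject empty strings, trailing junk, and
 * out-of-range values (errno set by strtoul). */
static int parse_ulong_strict(const char *s, unsigned long *out)
{
	char *end;

	if (!s || !*s)
		return 0;

	errno = 0;
	*out = strtoul(s, &end, 10);
	if (errno || *end)
		return 0;

	return 1;
}
```

The daemon also unsets both variables on every exit path from the handover, so a later `exec` of a child cannot mistake the inherited environment for its own activation.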
3893 | + |
3894 | +static void remove_lockfile(void) |
3895 | +{ |
3896 | + if (unlink(DMEVENTD_PIDFILE)) |
3897 | + perror(DMEVENTD_PIDFILE ": unlink failed"); |
3898 | +} |
3899 | + |
3900 | +static void _daemonize(void) |
3901 | +{ |
3902 | + int child_status; |
3903 | + int fd; |
3904 | + pid_t pid; |
3905 | + struct rlimit rlim; |
3906 | + struct timeval tval; |
3907 | + sigset_t my_sigset; |
3908 | + |
3909 | + sigemptyset(&my_sigset); |
3910 | + if (sigprocmask(SIG_SETMASK, &my_sigset, NULL) < 0) { |
3911 | + fprintf(stderr, "Unable to restore signals.\n"); |
3912 | + exit(EXIT_FAILURE); |
3913 | + } |
3914 | + signal(SIGTERM, &_exit_handler); |
3915 | + |
3916 | + switch (pid = fork()) { |
3917 | + case -1: |
3918 | + perror("fork failed:"); |
3919 | + exit(EXIT_FAILURE); |
3920 | + |
3921 | + case 0: /* Child */ |
3922 | + break; |
3923 | + |
3924 | + default: |
3925 | + /* Wait for response from child */ |
3926 | + while (!waitpid(pid, &child_status, WNOHANG) && !_exit_now) { |
3927 | + tval.tv_sec = 0; |
3928 | + tval.tv_usec = 250000; /* .25 sec */ |
3929 | + select(0, NULL, NULL, NULL, &tval); |
3930 | + } |
3931 | + |
3932 | + if (_exit_now) /* Child has signaled it is ok - we can exit now */ |
3933 | + exit(EXIT_SUCCESS); |
3934 | + |
3935 | + /* Problem with child. Determine what it is by exit code */ |
3936 | + switch (WEXITSTATUS(child_status)) { |
3937 | + case EXIT_DESC_CLOSE_FAILURE: |
3938 | + case EXIT_DESC_OPEN_FAILURE: |
3939 | + case EXIT_FIFO_FAILURE: |
3940 | + case EXIT_CHDIR_FAILURE: |
3941 | + default: |
3942 | + fprintf(stderr, "Child exited with code %d\n", WEXITSTATUS(child_status)); |
3943 | + break; |
3944 | + } |
3945 | + |
3946 | + exit(WEXITSTATUS(child_status)); |
3947 | + } |
3948 | + |
3949 | + if (chdir("/")) |
3950 | + exit(EXIT_CHDIR_FAILURE); |
3951 | + |
3952 | + if (getrlimit(RLIMIT_NOFILE, &rlim) < 0) |
3953 | + fd = 256; /* just have to guess */ |
3954 | + else |
3955 | + fd = rlim.rlim_cur; |
3956 | + |
3957 | + for (--fd; fd >= 0; fd--) { |
3958 | +#ifdef linux |
3959 | + /* Do not close fds preloaded by systemd! */ |
3960 | + if (_systemd_activation && |
3961 | + (fd == SD_FD_FIFO_SERVER || fd == SD_FD_FIFO_CLIENT)) |
3962 | + continue; |
3963 | +#endif |
3964 | + (void) close(fd); |
3965 | + } |
3966 | + |
3967 | + if ((open("/dev/null", O_RDONLY) < 0) || |
3968 | + (open("/dev/null", O_WRONLY) < 0) || |
3969 | + (open("/dev/null", O_WRONLY) < 0)) |
3970 | + exit(EXIT_DESC_OPEN_FAILURE); |
3971 | + |
3972 | + setsid(); |
3973 | +} |
3974 | + |
3975 | +static void restart(void) |
3976 | +{ |
3977 | + struct dm_event_fifos fifos; |
3978 | + struct dm_event_daemon_message msg = { 0, 0, NULL }; |
3979 | + int i, count = 0; |
3980 | + char *message; |
3981 | + int length; |
3982 | + int version; |
3983 | + |
3984 | + /* Get the list of registrations from the running daemon. */ |
3985 | + |
3986 | + if (!init_fifos(&fifos)) { |
3987 | + fprintf(stderr, "WARNING: Could not initiate communication with existing dmeventd.\n"); |
3988 | + return; |
3989 | + } |
3990 | + |
3991 | + if (!dm_event_get_version(&fifos, &version)) { |
3992 | + fprintf(stderr, "WARNING: Could not communicate with existing dmeventd.\n"); |
3993 | + fini_fifos(&fifos); |
3994 | + return; |
3995 | + } |
3996 | + |
3997 | + if (version < 1) { |
3998 | + fprintf(stderr, "WARNING: The running dmeventd instance is too old.\n" |
3999 | + "Protocol version %d (required: 1). Action cancelled.\n", |
4000 | + version); |
4001 | + exit(EXIT_FAILURE); |
4002 | + } |
4003 | + |
4004 | + if (daemon_talk(&fifos, &msg, DM_EVENT_CMD_GET_STATUS, "-", "-", 0, 0)) { |
4005 | + exit(EXIT_FAILURE); |
4006 | + } |
4007 | + |
4008 | + message = msg.data; |
4009 | + message = strchr(message, ' '); |
4010 | + ++ message; |
4011 | + length = strlen(msg.data); |
4012 | + for (i = 0; i < length; ++i) { |
4013 | + if (msg.data[i] == ';') { |
4014 | + msg.data[i] = 0; |
4015 | + ++count; |
4016 | + } |
4017 | + } |
4018 | + |
4019 | + if (!(_initial_registrations = dm_malloc(sizeof(char*) * (count + 1)))) { |
4020 | + fprintf(stderr, "Memory allocation registration failed.\n"); |
4021 | + exit(EXIT_FAILURE); |
4022 | + } |
4023 | + |
4024 | + for (i = 0; i < count; ++i) { |
4025 | + if (!(_initial_registrations[i] = dm_strdup(message))) { |
4026 | + fprintf(stderr, "Memory allocation for message failed.\n"); |
4027 | + exit(EXIT_FAILURE); |
4028 | + } |
4029 | + message += strlen(message) + 1; |
4030 | + } |
4031 | + _initial_registrations[count] = 0; |
4032 | + |
4033 | + if (daemon_talk(&fifos, &msg, DM_EVENT_CMD_DIE, "-", "-", 0, 0)) { |
4034 | + fprintf(stderr, "Old dmeventd refused to die.\n"); |
4035 | + exit(EXIT_FAILURE); |
4036 | + } |
4037 | + |
4038 | + fini_fifos(&fifos); |
4039 | +} |
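`restart()` above turns the `GET_STATUS` reply into an array of registration strings by overwriting every `';'` with `'\0'` and counting the records, then walking the buffer with `strlen() + 1` strides. The splitting step in isolation (illustrative helper, not the daemon's code):

```c
#include <string.h>

/* Overwrite each ';' with a NUL terminator and return how many were
 * replaced -- the record count restart() uses to size its array. */
static int split_semicolons(char *s)
{
	int count = 0;

	for (; *s; s++)
		if (*s == ';') {
			*s = '\0';
			++count;
		}

	return count;
}
```

After this pass the buffer is a sequence of NUL-separated records, so advancing by `strlen(p) + 1` visits each one, exactly as the loop filling `_initial_registrations` does.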
4040 | + |
4041 | +static void usage(char *prog, FILE *file) |
4042 | +{ |
4043 | + fprintf(file, "Usage:\n" |
4044 | + "%s [-d [-d [-d]]] [-f] [-h] [-R] [-V] [-?]\n\n" |
4045 | + " -d Log debug messages to syslog (-d, -dd, -ddd)\n" |
4046 | + " -f Don't fork, run in the foreground\n" |
4047 | + " -h -? Show this help information\n" |
4048 | + " -R Restart dmeventd\n" |
4049 | + " -V Show version of dmeventd\n\n", prog); |
4050 | +} |
4051 | + |
4052 | +int main(int argc, char *argv[]) |
4053 | +{ |
4054 | + signed char opt; |
4055 | + struct dm_event_fifos fifos; |
4056 | + //struct sys_log logdata = {DAEMON_NAME, LOG_DAEMON}; |
4057 | + |
4058 | + opterr = 0; |
4059 | + optind = 0; |
4060 | + |
4061 | + while ((opt = getopt(argc, argv, "?fhVdR")) != EOF) { |
4062 | + switch (opt) { |
4063 | + case 'h': |
4064 | + usage(argv[0], stdout); |
4065 | + exit(0); |
4066 | + case '?': |
4067 | + usage(argv[0], stderr); |
4068 | + exit(0); |
4069 | + case 'R': |
4070 | + _restart++; |
4071 | + break; |
4072 | + case 'f': |
4073 | + _foreground++; |
4074 | + break; |
4075 | + case 'd': |
4076 | + dmeventd_debug++; |
4077 | + break; |
4078 | + case 'V': |
4079 | + printf("dmeventd version: %s\n", DM_LIB_VERSION); |
4080 | + exit(1); |
4081 | + } |
4082 | + } |
4083 | + |
4084 | + /* |
4085 | + * Switch to C locale to avoid reading large locale-archive file |
4086 | + * used by some glibc (on some distributions it takes over 100MB). |
4087 | + * Daemon currently needs to use mlockall(). |
4088 | + */ |
4089 | + if (setenv("LANG", "C", 1)) |
4090 | + perror("Cannot set LANG to C"); |
4091 | + |
4092 | + if (_restart) |
4093 | + restart(); |
4094 | + |
4095 | +#ifdef linux |
4096 | + _systemd_activation = _systemd_handover(&fifos); |
4097 | +#endif |
4098 | + |
4099 | + if (!_foreground) |
4100 | + _daemonize(); |
4101 | + |
4102 | + openlog("dmeventd", LOG_PID, LOG_DAEMON); |
4103 | + |
4104 | + (void) dm_prepare_selinux_context(DMEVENTD_PIDFILE, S_IFREG); |
4105 | + if (dm_create_lockfile(DMEVENTD_PIDFILE) == 0) |
4106 | + exit(EXIT_FAILURE); |
4107 | + |
4108 | + atexit(remove_lockfile); |
4109 | + (void) dm_prepare_selinux_context(NULL, 0); |
4110 | + |
4111 | + /* Set the rest of the signals to cause '_exit_now' to be set */ |
4112 | + signal(SIGINT, &_exit_handler); |
4113 | + signal(SIGHUP, &_exit_handler); |
4114 | + signal(SIGQUIT, &_exit_handler); |
4115 | + |
4116 | +#ifdef linux |
4117 | + /* Systemd has adjusted oom killer for us already */ |
4118 | + if (!_systemd_activation && !_protect_against_oom_killer()) |
4119 | + syslog(LOG_ERR, "Failed to protect against OOM killer"); |
4120 | +#endif |
4121 | + |
4122 | + _init_thread_signals(); |
4123 | + |
4124 | + //multilog_clear_logging(); |
4125 | + //multilog_add_type(std_syslog, &logdata); |
4126 | + //multilog_init_verbose(std_syslog, _LOG_DEBUG); |
4127 | + //multilog_async(1); |
4128 | + |
4129 | + if (!_systemd_activation) |
4130 | + _init_fifos(&fifos); |
4131 | + |
4132 | + pthread_mutex_init(&_global_mutex, NULL); |
4133 | + |
4134 | + if (!_systemd_activation && !_open_fifos(&fifos)) |
4135 | + exit(EXIT_FIFO_FAILURE); |
4136 | + |
4137 | + /* Signal parent, letting them know we are ready to go. */ |
4138 | + if (!_foreground) |
4139 | + kill(getppid(), SIGTERM); |
4140 | + syslog(LOG_NOTICE, "dmeventd ready for processing."); |
4141 | + |
4142 | + if (_initial_registrations) |
4143 | + _process_initial_registrations(); |
4144 | + |
4145 | + while (!_exit_now) { |
4146 | + _process_request(&fifos); |
4147 | + _cleanup_unused_threads(); |
4148 | + _lock_mutex(); |
4149 | + if (!dm_list_empty(&_thread_registry) |
4150 | + || !dm_list_empty(&_thread_registry_unused)) |
4151 | + _thread_registries_empty = 0; |
4152 | + else |
4153 | + _thread_registries_empty = 1; |
4154 | + _unlock_mutex(); |
4155 | + } |
4156 | + |
4157 | + _exit_dm_lib(); |
4158 | + |
4159 | + pthread_mutex_destroy(&_global_mutex); |
4160 | + |
4161 | + syslog(LOG_NOTICE, "dmeventd shutting down."); |
4162 | + closelog(); |
4163 | + |
4164 | + exit(EXIT_SUCCESS); |
4165 | +} |
4166 | |
4167 | === added directory '.pc/dirs.patch/doc' |
4168 | === removed directory '.pc/dirs.patch/doc' |
4169 | === added file '.pc/dirs.patch/doc/example.conf.in' |
4170 | --- .pc/dirs.patch/doc/example.conf.in 1970-01-01 00:00:00 +0000 |
4171 | +++ .pc/dirs.patch/doc/example.conf.in 2012-08-21 10:18:22 +0000 |
4172 | @@ -0,0 +1,773 @@ |
4173 | +# This is an example configuration file for the LVM2 system. |
4174 | +# It contains the default settings that would be used if there was no |
4175 | +# @DEFAULT_SYS_DIR@/lvm.conf file. |
4176 | +# |
4177 | +# Refer to 'man lvm.conf' for further information including the file layout. |
4178 | +# |
4179 | +# To put this file in a different directory and override @DEFAULT_SYS_DIR@ set |
4180 | +# the environment variable LVM_SYSTEM_DIR before running the tools. |
4181 | +# |
4182 | +# N.B. Take care that each setting only appears once if uncommenting |
4183 | +# example settings in this file. |
4184 | + |
4185 | + |
4186 | +# This section allows you to configure which block devices should |
4187 | +# be used by the LVM system. |
4188 | +devices { |
4189 | + |
4190 | + # Where do you want your volume groups to appear ? |
4191 | + dir = "/dev" |
4192 | + |
4193 | + # An array of directories that contain the device nodes you wish |
4194 | + # to use with LVM2. |
4195 | + scan = [ "/dev" ] |
4196 | + |
4197 | + # If set, the cache of block device nodes with all associated symlinks |
4198 | + # will be constructed out of the existing udev database content. |
4199 | + # This avoids using and opening any inapplicable non-block devices or |
4200 | + # subdirectories found in the device directory. This setting is applied |
4201 | + # to udev-managed device directory only, other directories will be scanned |
4202 | + # fully. LVM2 needs to be compiled with udev support for this setting to |
4203 | + # take effect. N.B. Any device node or symlink not managed by udev in |
4204 | + # udev directory will be ignored with this setting on. |
4205 | + obtain_device_list_from_udev = 1 |
4206 | + |
4207 | + # If several entries in the scanned directories correspond to the |
4208 | + # same block device and the tools need to display a name for device, |
4209 | + # all the pathnames are matched against each item in the following |
4210 | + # list of regular expressions in turn and the first match is used. |
4211 | + preferred_names = [ ] |
4212 | + |
4213 | + # Try to avoid using undescriptive /dev/dm-N names, if present. |
4214 | + # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ] |
4215 | + |
4216 | + # A filter that tells LVM2 to only use a restricted set of devices. |
4217 | + # The filter consists of an array of regular expressions. These |
4218 | + # expressions can be delimited by a character of your choice, and |
4219 | + # prefixed with either an 'a' (for accept) or 'r' (for reject). |
4220 | + # The first expression found to match a device name determines if |
4221 | + # the device will be accepted or rejected (ignored). Devices that |
4222 | + # don't match any patterns are accepted. |
4223 | + |
4224 | + # Be careful if there there are symbolic links or multiple filesystem |
4225 | + # entries for the same device as each name is checked separately against |
4226 | + # the list of patterns. The effect is that if the first pattern in the |
4227 | + # list to match a name is an 'a' pattern for any of the names, the device |
4228 | + # is accepted; otherwise if the first pattern in the list to match a name |
4229 | + # is an 'r' pattern for any of the names it is rejected; otherwise it is |
4230 | + # accepted. |
4231 | + |
4232 | + # Don't have more than one filter line active at once: only one gets used. |
4233 | + |
4234 | + # Run vgscan after you change this parameter to ensure that |
4235 | + # the cache file gets regenerated (see below). |
4236 | + # If it doesn't do what you expect, check the output of 'vgscan -vvvv'. |
4237 | + |
4238 | + |
4239 | + # By default we accept every block device: |
4240 | + filter = [ "a/.*/" ] |
4241 | + |
4242 | + # Exclude the cdrom drive |
4243 | + # filter = [ "r|/dev/cdrom|" ] |
4244 | + |
4245 | + # When testing I like to work with just loopback devices: |
4246 | + # filter = [ "a/loop/", "r/.*/" ] |
4247 | + |
4248 | + # Or maybe all loops and ide drives except hdc: |
4249 | + # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ] |
4250 | + |
4251 | + # Use anchors if you want to be really specific |
4252 | + # filter = [ "a|^/dev/hda8$|", "r/.*/" ] |
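The accept/reject semantics above, including the rule for devices with several names, can be sketched like this. It is a minimal model of the documented first-match behaviour, not LVM2's own filter code:

```python
import re

def parse_filter(patterns):
    """Split entries like 'a|loop|' or 'r/.*/' into (action, regex).
    The second character is the chosen delimiter."""
    parsed = []
    for p in patterns:
        action, delim = p[0], p[1]
        body = p[2:p.rindex(delim)]
        parsed.append((action, re.compile(body)))
    return parsed

def device_accepted(names, patterns):
    """Apply the first-match rule to every name of one device:
    if the first matching pattern is 'a' for any name, accept;
    otherwise if it is 'r' for any name, reject; otherwise accept."""
    first_matches = []
    for name in names:
        for action, rx in parse_filter(patterns):
            if rx.search(name):
                first_matches.append(action)
                break
    if "a" in first_matches:
        return True
    if "r" in first_matches:
        return False
    return True  # no pattern matched any name: accepted by default

# The "all loops and ide drives except hdc" example from above:
flt = ["a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|"]
print(device_accepted(["/dev/loop0"], flt))  # accepted
print(device_accepted(["/dev/hdc"], flt))    # rejected
```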
4253 | + |
4254 | + # The results of the filtering are cached on disk to avoid |
4255 | + # rescanning dud devices (which can take a very long time). |
4256 | + # By default this cache is stored in the @DEFAULT_SYS_DIR@/@DEFAULT_CACHE_SUBDIR@ directory |
4257 | + # in a file called '.cache'. |
4258 | + # It is safe to delete the contents: the tools regenerate it. |
4259 | + # (The old setting 'cache' is still respected if neither of |
4260 | + # these new ones is present.) |
4261 | + cache_dir = "@DEFAULT_SYS_DIR@/@DEFAULT_CACHE_SUBDIR@" |
4262 | + cache_file_prefix = "" |
4263 | + |
4264 | + # You can turn off writing this cache file by setting this to 0. |
4265 | + write_cache_state = 1 |
4266 | + |
4267 | + # Advanced settings. |
4268 | + |
4269 | + # List of pairs of additional acceptable block device types found |
4270 | + # in /proc/devices with maximum (non-zero) number of partitions. |
4271 | + # types = [ "fd", 16 ] |
4272 | + |
4273 | + # If sysfs is mounted (2.6 kernels) restrict device scanning to |
4274 | + # the block devices it believes are valid. |
4275 | + # 1 enables; 0 disables. |
4276 | + sysfs_scan = 1 |
4277 | + |
4278 | + # By default, LVM2 will ignore devices used as component paths |
4279 | + # of device-mapper multipath devices. |
4280 | + # 1 enables; 0 disables. |
4281 | + multipath_component_detection = 1 |
4282 | + |
4283 | + # By default, LVM2 will ignore devices used as components of |
4284 | + # software RAID (md) devices by looking for md superblocks. |
4285 | + # 1 enables; 0 disables. |
4286 | + md_component_detection = 1 |
4287 | + |
4288 | + # By default, if a PV is placed directly upon an md device, LVM2 |
4289 | + # will align its data blocks with the md device's stripe-width. |
4290 | + # 1 enables; 0 disables. |
4291 | + md_chunk_alignment = 1 |
4292 | + |
4293 | + # Default alignment of the start of a data area in MB. If set to 0, |
4294 | + # a value of 64KB will be used. Set to 1 for 1MiB, 2 for 2MiB, etc. |
4295 | + # default_data_alignment = @DEFAULT_DATA_ALIGNMENT@ |
4296 | + |
4297 | + # By default, the start of a PV's data area will be a multiple of |
4298 | + # the 'minimum_io_size' or 'optimal_io_size' exposed in sysfs. |
4299 | + # - minimum_io_size - the smallest request the device can perform |
4300 | + # w/o incurring a read-modify-write penalty (e.g. MD's chunk size) |
4301 | + # - optimal_io_size - the device's preferred unit of receiving I/O |
4302 | + # (e.g. MD's stripe width) |
4303 | + # minimum_io_size is used if optimal_io_size is undefined (0). |
4304 | + # If md_chunk_alignment is enabled, that detects the optimal_io_size. |
4305 | + # This setting takes precedence over md_chunk_alignment. |
4306 | + # 1 enables; 0 disables. |
4307 | + data_alignment_detection = 1 |
4308 | + |
4309 | + # Alignment (in KB) of start of data area when creating a new PV. |
4310 | + # md_chunk_alignment and data_alignment_detection are disabled if set. |
4311 | + # Set to 0 for the default alignment (see: data_alignment_default) |
4312 | + # or page size, if larger. |
4313 | + data_alignment = 0 |
4314 | + |
4315 | + # By default, the start of the PV's aligned data area will be shifted by |
4316 | + # the 'alignment_offset' exposed in sysfs. This offset is often 0 but |
4317 | + # may be non-zero; e.g.: certain 4KB sector drives that compensate for |
4318 | + # windows partitioning will have an alignment_offset of 3584 bytes |
4319 | + # (sector 7 is the lowest aligned logical block, the 4KB sectors start |
4320 | + # at LBA -1, and consequently sector 63 is aligned on a 4KB boundary). |
4321 | + # But note that pvcreate --dataalignmentoffset will skip this detection. |
4322 | + # 1 enables; 0 disables. |
4323 | + data_alignment_offset_detection = 1 |
4324 | + |
4325 | + # If, while scanning the system for PVs, LVM2 encounters a device-mapper |
4326 | + # device that has its I/O suspended, it waits for it to become accessible. |
4327 | + # Set this to 1 to skip such devices. This should only be needed |
4328 | + # in recovery situations. |
4329 | + ignore_suspended_devices = 0 |
4330 | + |
4331 | + # During each LVM operation errors received from each device are counted. |
4332 | + # If the counter of a particular device exceeds the limit set here, no |
4333 | + # further I/O is sent to that device for the remainder of the respective |
4334 | + # operation. Setting the parameter to 0 disables the counters altogether. |
4335 | + disable_after_error_count = 0 |
4336 | + |
4337 | + # Set to 0 to allow use of pvcreate --uuid without requiring --restorefile. |
4338 | + require_restorefile_with_uuid = 1 |
4339 | + |
4340 | + # Minimum size (in KB) of block devices which can be used as PVs. |
4341 | + # In a clustered environment all nodes must use the same value. |
4342 | + # Any value smaller than 512KB is ignored. |
4343 | + |
4344 | + # Ignore devices smaller than 2MB such as floppy drives. |
4345 | + pv_min_size = 2048 |
4346 | + |
4347 | + # The original built-in setting was 512 up to and including version 2.02.84. |
4348 | + # pv_min_size = 512 |
4349 | + |
4350 | + # Issue discards to a logical volume's underlying physical volume(s) when |
4351 | + # the logical volume is no longer using the physical volumes' space (e.g. |
4352 | + # lvremove, lvreduce, etc). Discards inform the storage that a region is |
4353 | + # no longer in use. Storage that supports discards advertise the protocol |
4354 | + # specific way discards should be issued by the kernel (TRIM, UNMAP, or |
4355 | + # WRITE SAME with UNMAP bit set). Not all storage will support or benefit |
4356 | + # from discards but SSDs and thinly provisioned LUNs generally do. If set |
4357 | + # to 1, discards will only be issued if both the storage and kernel provide |
4358 | + # support. |
4359 | + # 1 enables; 0 disables. |
4360 | + issue_discards = 0 |
4361 | +} |
4362 | + |
4363 | +# This section allows you to configure the way in which LVM selects |
4364 | +# free space for its Logical Volumes. |
4365 | +#allocation { |
4366 | +# When searching for free space to extend an LV, the "cling" |
4367 | +# allocation policy will choose space on the same PVs as the last |
4368 | +# segment of the existing LV. If there is insufficient space and a |
4369 | +# list of tags is defined here, it will check whether any of them are |
4370 | +# attached to the PVs concerned and then seek to match those PV tags |
4371 | +# between existing extents and new extents. |
4372 | +# Use the special tag "@*" as a wildcard to match any PV tag. |
4373 | +# |
4374 | +# Example: LVs are mirrored between two sites within a single VG. |
4375 | +# PVs are tagged with either @site1 or @site2 to indicate where |
4376 | +# they are situated. |
4377 | +# |
4378 | +# cling_tag_list = [ "@site1", "@site2" ] |
4379 | +# cling_tag_list = [ "@*" ] |
4380 | +# |
4381 | +# Changes made in version 2.02.85 extended the reach of the 'cling' |
4382 | +# policies to detect more situations where data can be grouped |
4383 | +# onto the same disks. Set this to 0 to revert to the previous |
4384 | +# algorithm. |
4385 | +# |
4386 | +# maximise_cling = 1 |
4387 | +# |
4388 | +# Set to 1 to guarantee that mirror logs will always be placed on |
4389 | +# different PVs from the mirror images. This was the default |
4390 | +# until version 2.02.85. |
4391 | +# |
4392 | +# mirror_logs_require_separate_pvs = 0 |
4393 | +# |
4394 | +# Set to 1 to guarantee that thin pool metadata will always |
4395 | +# be placed on different PVs from the pool data. |
4396 | +# |
4397 | +# thin_pool_metadata_require_separate_pvs = 0 |
4398 | +#} |
4399 | + |
4400 | +# This section allows you to configure the nature of the |
4401 | +# information that LVM2 reports. |
4402 | +log { |
4403 | + |
4404 | + # Controls the messages sent to stdout or stderr. |
4405 | + # There are three levels of verbosity, 3 being the most verbose. |
4406 | + verbose = 0 |
4407 | + |
4408 | + # Should we send log messages through syslog? |
4409 | + # 1 is yes; 0 is no. |
4410 | + syslog = 1 |
4411 | + |
4412 | + # Should we log error and debug messages to a file? |
4413 | + # By default there is no log file. |
4414 | + #file = "/var/log/lvm2.log" |
4415 | + |
4416 | + # Should we overwrite the log file each time the program is run? |
4417 | + # By default we append. |
4418 | + overwrite = 0 |
4419 | + |
4420 | + # What level of log messages should we send to the log file and/or syslog? |
4421 | + # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive. |
4422 | + # 7 is the most verbose (LOG_DEBUG). |
4423 | + level = 0 |
4424 | + |
4425 | + # Format of output messages |
4426 | + # Whether or not (1 or 0) to indent messages according to their severity |
4427 | + indent = 1 |
4428 | + |
4429 | + # Whether or not (1 or 0) to display the command name on each line output |
4430 | + command_names = 0 |
4431 | + |
4432 | + # A prefix to use before the message text (but after the command name, |
4433 | + # if selected). Default is two spaces, so you can see/grep the severity |
4434 | + # of each message. |
4435 | + prefix = " " |
4436 | + |
4437 | + # To make the messages look similar to the original LVM tools use: |
4438 | + # indent = 0 |
4439 | + # command_names = 1 |
4440 | + # prefix = " -- " |
4441 | + |
4442 | + # Set this if you want log messages during activation. |
4443 | + # Don't use this in low memory situations (can deadlock). |
4444 | + # activation = 0 |
4445 | +} |
4446 | + |
4447 | +# Configuration of metadata backups and archiving. In LVM2 when we |
4448 | +# talk about a 'backup' we mean making a copy of the metadata for the |
4449 | +# *current* system. The 'archive' contains old metadata configurations. |
4450 | +# Backups are stored in a human readable text format. |
4451 | +backup { |
4452 | + |
4453 | + # Should we maintain a backup of the current metadata configuration ? |
4454 | + # Use 1 for Yes; 0 for No. |
4455 | + # Think very hard before turning this off! |
4456 | + backup = 1 |
4457 | + |
4458 | + # Where shall we keep it ? |
4459 | + # Remember to back up this directory regularly! |
4460 | + backup_dir = "@DEFAULT_SYS_DIR@/@DEFAULT_BACKUP_SUBDIR@" |
4461 | + |
4462 | + # Should we maintain an archive of old metadata configurations ? |
4463 | + # Use 1 for Yes; 0 for No. |
4464 | + # On by default. Think very hard before turning this off. |
4465 | + archive = 1 |
4466 | + |
4467 | + # Where should archived files go ? |
4468 | + # Remember to back up this directory regularly! |
4469 | + archive_dir = "@DEFAULT_SYS_DIR@/@DEFAULT_ARCHIVE_SUBDIR@" |
4470 | + |
4471 | + # What is the minimum number of archive files you wish to keep ? |
4472 | + retain_min = 10 |
4473 | + |
4474 | + # What is the minimum time you wish to keep an archive file for ? |
4475 | + retain_days = 30 |
4476 | +} |
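The interaction of `retain_min` and `retain_days` can be sketched as follows. This is an assumed model, not LVM2's code: an archive file is pruned only when it is older than `retain_days` AND more than `retain_min` files would still remain:

```python
def prune_archives(ages_days, retain_min=10, retain_days=30):
    """Return the archive ages (in days) that survive pruning.

    Illustrative sketch: drop the oldest file repeatedly while it
    exceeds retain_days and more than retain_min files are left.
    """
    kept = sorted(ages_days)  # newest (smallest age) first
    while len(kept) > retain_min and kept[-1] > retain_days:
        kept.pop()  # expire the oldest archive
    return kept

# Two old archives expire; the minimum count floor is respected:
print(prune_archives([1, 5, 40, 50], retain_min=2, retain_days=30))
```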
4477 | + |
4478 | +# Settings for the running LVM2 in shell (readline) mode. |
4479 | +shell { |
4480 | + |
4481 | + # Number of lines of history to store in ~/.lvm_history |
4482 | + history_size = 100 |
4483 | +} |
4484 | + |
4485 | + |
4486 | +# Miscellaneous global LVM2 settings |
4487 | +global { |
4488 | + |
4489 | + # The file creation mask for any files and directories created. |
4490 | + # Interpreted as octal if the first digit is zero. |
4491 | + umask = 077 |
4492 | + |
4493 | + # Allow other users to read the files |
4494 | + #umask = 022 |
4495 | + |
4496 | + # Enabling test mode means that no changes to the on disk metadata |
4497 | + # will be made. Equivalent to having the -t option on every |
4498 | + # command. Defaults to off. |
4499 | + test = 0 |
4500 | + |
4501 | + # Default value for --units argument |
4502 | + units = "h" |
4503 | + |
4504 | + # Since version 2.02.54, the tools distinguish between powers of |
4505 | + # 1024 bytes (e.g. KiB, MiB, GiB) and powers of 1000 bytes (e.g. |
4506 | + # KB, MB, GB). |
4507 | + # If you have scripts that depend on the old behaviour, set this to 0 |
4508 | + # temporarily until you update them. |
4509 | + si_unit_consistency = 1 |
4510 | + |
4511 | + # Whether or not to communicate with the kernel device-mapper. |
4512 | + # Set to 0 if you want to use the tools to manipulate LVM metadata |
4513 | + # without activating any logical volumes. |
4514 | + # If the device-mapper kernel driver is not present in your kernel |
4515 | + # setting this to 0 should suppress the error messages. |
4516 | + activation = 1 |
4517 | + |
4518 | + # If we can't communicate with device-mapper, should we try running |
4519 | + # the LVM1 tools? |
4520 | + # This option only applies to 2.4 kernels and is provided to help you |
4521 | + # switch between device-mapper kernels and LVM1 kernels. |
4522 | + # The LVM1 tools need to be installed with .lvm1 suffixes |
4523 | + # e.g. vgscan.lvm1 and they will stop working after you start using |
4524 | + # the new lvm2 on-disk metadata format. |
4525 | + # The default value is set when the tools are built. |
4526 | + # fallback_to_lvm1 = 0 |
4527 | + |
4528 | + # The default metadata format that commands should use - "lvm1" or "lvm2". |
4529 | + # The command line override is -M1 or -M2. |
4530 | + # Defaults to "lvm2". |
4531 | + # format = "lvm2" |
4532 | + |
4533 | + # Location of proc filesystem |
4534 | + proc = "/proc" |
4535 | + |
4536 | + # Type of locking to use. Defaults to local file-based locking (1). |
4537 | + # Turn locking off by setting to 0 (dangerous: risks metadata corruption |
4538 | + # if LVM2 commands get run concurrently). |
4539 | + # Type 2 uses the external shared library locking_library. |
4540 | + # Type 3 uses built-in clustered locking. |
4541 | + # Type 4 uses read-only locking which forbids any operations that might |
4542 | + # change metadata. |
4543 | + locking_type = 1 |
4544 | + |
4545 | + # Set to 0 to fail when a lock request cannot be satisfied immediately. |
4546 | + wait_for_locks = 1 |
4547 | + |
4548 | + # If using external locking (type 2) and initialisation fails, |
4549 | + # with this set to 1 an attempt will be made to use the built-in |
4550 | + # clustered locking. |
4551 | + # If you are using a customised locking_library you should set this to 0. |
4552 | + fallback_to_clustered_locking = 1 |
4553 | + |
4554 | + # If an attempt to initialise type 2 or type 3 locking failed, perhaps |
4555 | + # because cluster components such as clvmd are not running, with this set |
4556 | + # to 1 an attempt will be made to use local file-based locking (type 1). |
4557 | + # If this succeeds, only commands against local volume groups will proceed. |
4558 | + # Volume Groups marked as clustered will be ignored. |
4559 | + fallback_to_local_locking = 1 |
4560 | + |
4561 | + # Local non-LV directory that holds file-based locks while commands are |
4562 | + # in progress. A directory like /tmp that may get wiped on reboot is OK. |
4563 | + locking_dir = "@DEFAULT_LOCK_DIR@" |
4564 | + |
4565 | + # Whenever there are competing read-only and read-write access requests for |
4566 | + # a volume group's metadata, instead of always granting the read-only |
4567 | + # requests immediately, delay them to allow the read-write requests to be |
4568 | + # serviced. Without this setting, write access may be stalled by a high |
4569 | + # volume of read-only requests. |
4570 | + # NB. This option only affects locking_type = 1 viz. local file-based |
4571 | + # locking. |
4572 | + prioritise_write_locks = 1 |
4573 | + |
4574 | + # Other entries can go here to allow you to load shared libraries |
4575 | + # e.g. if support for LVM1 metadata was compiled as a shared library use |
4576 | + # format_libraries = "liblvm2format1.so" |
4577 | + # Full pathnames can be given. |
4578 | + |
4579 | + # Search this directory first for shared libraries. |
4580 | + # library_dir = "/lib" |
4581 | + |
4582 | + # The external locking library to load if locking_type is set to 2. |
4583 | + # locking_library = "liblvm2clusterlock.so" |
4584 | + |
4585 | + # Treat any internal errors as fatal errors, aborting the process that |
4586 | + # encountered the internal error. Please only enable for debugging. |
4587 | + abort_on_internal_errors = 0 |
4588 | + |
4589 | + # Check whether CRC is matching when parsed VG is used multiple times. |
4590 | + # This is useful to catch unexpected internal cached volume group |
4591 | + # structure modification. Please only enable for debugging. |
4592 | + detect_internal_vg_cache_corruption = 0 |
4593 | + |
4594 | + # If set to 1, no operations that change on-disk metadata will be permitted. |
4595 | + # Additionally, read-only commands that encounter metadata in need of repair |
4596 | + # will still be allowed to proceed exactly as if the repair had been |
4597 | + # performed (except for the unchanged vg_seqno). |
4598 | + # Inappropriate use could mess up your system, so seek advice first! |
4599 | + metadata_read_only = 0 |
4600 | + |
4601 | + # 'mirror_segtype_default' defines which segtype will be used when the |
4602 | + # shorthand '-m' option is used for mirroring. The possible options are: |
4603 | + # |
4604 | + # "mirror" - The original RAID1 implementation provided by LVM2/DM. It is |
4605 | + # characterized by a flexible log solution (core, disk, mirrored) |
4606 | + # and by the necessity to block I/O while reconfiguring in the |
4607 | + # event of a failure. Snapshots of this type of RAID1 can be |
4608 | + # problematic. |
4609 | + # |
4610 | + # "raid1" - This implementation leverages MD's RAID1 personality through |
4611 | + # device-mapper. It is characterized by a lack of log options. |
4612 | + # (A log is always allocated for every device and they are placed |
4613 | + # on the same device as the image - no separate devices are |
4614 | + # required.) This mirror implementation does not require I/O |
4615 | + # to be blocked in the kernel in the event of a failure. |
4616 | + # |
4617 | + # Specify the '--type <mirror|raid1>' option to override this default |
4618 | + # setting. |
4619 | + mirror_segtype_default = "mirror" |
4620 | + |
4621 | + # The default format for displaying LV names in lvdisplay was changed |
4622 | + # in version 2.02.89 to show the LV name and path separately. |
4623 | + # Previously this was always shown as /dev/vgname/lvname even when that |
4624 | + # was never a valid path in the /dev filesystem. |
4625 | + # Set to 1 to reinstate the previous format. |
4626 | + # |
4627 | + # lvdisplay_shows_full_device_path = 0 |
4628 | + |
4629 | + # Whether to use (trust) a running instance of lvmetad. If this is set to |
4630 | + # 0, all commands fall back to the usual scanning mechanisms. When set to 1 |
4631 | + # *and* when lvmetad is running (it is not auto-started), the volume group |
4632 | + # metadata and PV state flags are obtained from the lvmetad instance and no |
4633 | + # scanning is done by the individual commands. In a setup with lvmetad, |
4634 | + # lvmetad udev rules *must* be set up for LVM to work correctly. Without |
4635 | + # proper udev rules, all changes in block device configuration will be |
4636 | + # *ignored* until a manual 'vgscan' is performed. |
4637 | + use_lvmetad = 0 |
4638 | +} |
4639 | + |
4640 | +activation { |
4641 | + # Set to 1 to perform internal checks on the operations issued to |
4642 | + # libdevmapper. Useful for debugging problems with activation. |
4643 | + # Some of the checks may be expensive, so it's best to use this |
4644 | + # only when there seems to be a problem. |
4645 | + checks = 0 |
4646 | + |
4647 | + # Set to 0 to disable udev synchronisation (if compiled into the binaries). |
4648 | + # Processes will not wait for notification from udev. |
4649 | + # They will continue irrespective of any possible udev processing |
4650 | + # in the background. You should only use this if udev is not running |
4651 | + # or has rules that ignore the devices LVM2 creates. |
4652 | + # The command line argument --nodevsync takes precedence over this setting. |
4653 | + # If set to 1 when udev is not running, and there are LVM2 processes |
4654 | + # waiting for udev, run 'dmsetup udevcomplete_all' manually to wake them up. |
4655 | + udev_sync = 1 |
4656 | + |
4657 | + # Set to 0 to disable the udev rules installed by LVM2 (if built with |
4658 | + # --enable-udev_rules). LVM2 will then manage the /dev nodes and symlinks |
4659 | + # for active logical volumes directly itself. |
4660 | + # N.B. Manual intervention may be required if this setting is changed |
4661 | + # while any logical volumes are active. |
4662 | + udev_rules = 1 |
4663 | + |
4664 | + # Set to 1 for LVM2 to verify operations performed by udev. This turns on |
4665 | + # additional checks (and if necessary, repairs) on entries in the device |
4666 | + # directory after udev has completed processing its events. |
4667 | + # Useful for diagnosing problems with LVM2/udev interactions. |
4668 | + verify_udev_operations = 0 |
4669 | + |
4670 | + # If set to 1 and if deactivation of an LV fails, perhaps because |
4671 | + # a process run from a quick udev rule temporarily opened the device, |
4672 | + # retry the operation for a few seconds before failing. |
4673 | + retry_deactivation = 1 |
4674 | + |
4675 | + # How to fill in missing stripes if activating an incomplete volume. |
4676 | + # Using "error" will make inaccessible parts of the device return |
4677 | + # I/O errors on access. You can instead use a device path, in which |
4678 | + # case, that device will be used in place of missing stripes. |
4679 | + # But note that using anything other than "error" with mirrored |
4680 | + # or snapshotted volumes is likely to result in data corruption. |
4681 | + missing_stripe_filler = "error" |
4682 | + |
4683 | + # The linear target is an optimised version of the striped target |
4684 | + # that only handles a single stripe. Set this to 0 to disable this |
4685 | + # optimisation and always use the striped target. |
4686 | + use_linear_target = 1 |
4687 | + |
4688 | + # How much stack (in KB) to reserve for use while devices suspended |
4689 | + # Prior to version 2.02.89 this used to be set to 256KB |
4690 | + reserved_stack = 64 |
4691 | + |
4692 | + # How much memory (in KB) to reserve for use while devices suspended |
4693 | + reserved_memory = 8192 |
4694 | + |
4695 | + # Nice value used while devices suspended |
4696 | + process_priority = -18 |
4697 | + |
4698 | + # If volume_list is defined, each LV is only activated if there is a |
4699 | + # match against the list. |
4700 | + # "vgname" and "vgname/lvname" are matched exactly. |
4701 | + # "@tag" matches any tag set in the LV or VG. |
4702 | + # "@*" matches if any tag defined on the host is also set in the LV or VG |
4703 | + # |
4704 | + # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ] |
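The matching rules above can be sketched as a small predicate. This is an illustrative model only, assuming `lv_tags` holds the tags set on the LV or its VG and `host_tags` the tags defined on the host:

```python
def lv_activatable(vg, lv, lv_tags, host_tags, volume_list):
    """Illustrative model of volume_list matching (not LVM2's code).

    "vgname" and "vgname/lvname" match exactly; "@tag" matches a tag
    set in the LV or VG; "@*" matches if any host tag is also set
    in the LV or VG.
    """
    for entry in volume_list:
        if entry == "@*":
            if set(host_tags) & set(lv_tags):
                return True
        elif entry.startswith("@"):
            if entry[1:] in lv_tags:
                return True
        elif entry == vg or entry == f"{vg}/{lv}":
            return True
    return False

print(lv_activatable("vg2", "lvol1", [], [], ["vg1", "vg2/lvol1"]))  # True
```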
4705 | + |
4706 | + # If read_only_volume_list is defined, each LV that is to be activated |
4707 | + # is checked against the list, and if it matches, it is activated |
4708 | + # in read-only mode. (This overrides '--permission rw' stored in the |
4709 | + # metadata.) |
4710 | + # "vgname" and "vgname/lvname" are matched exactly. |
4711 | + # "@tag" matches any tag set in the LV or VG. |
4712 | + # "@*" matches if any tag defined on the host is also set in the LV or VG |
4713 | + # |
4714 | + # read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ] |
4715 | + |
4716 | + # Size (in KB) of each copy operation when mirroring |
4717 | + mirror_region_size = 512 |
4718 | + |
4719 | + # Setting to use when there is no readahead value stored in the metadata. |
4720 | + # |
4721 | + # "none" - Disable readahead. |
4722 | + # "auto" - Use default value chosen by kernel. |
4723 | + readahead = "auto" |
4724 | + |
4725 | + # 'raid_fault_policy' defines how a device failure in a RAID logical |
4726 | + # volume is handled. This includes logical volumes that have the following |
4727 | + # segment types: raid1, raid4, raid5*, and raid6*. |
4728 | + # |
4729 | + # In the event of a failure, the following policies will determine what |
4730 | + # actions are performed during the automated response to failures (when |
4731 | + # dmeventd is monitoring the RAID logical volume) and when 'lvconvert' is |
4732 | + # called manually with the options '--repair' and '--use-policies'. |
4733 | + # |
4734 | + # "warn" - Use the system log to warn the user that a device in the RAID |
4735 | + # logical volume has failed. It is left to the user to run |
4736 | + # 'lvconvert --repair' manually to remove or replace the failed |
4737 | + # device. As long as the number of failed devices does not |
4738 | + # exceed the redundancy of the logical volume (1 device for |
4739 | + # raid4/5, 2 for raid6, etc) the logical volume will remain |
4740 | + # usable. |
4741 | + # |
4742 | + # "allocate" - Attempt to use any extra physical volumes in the volume |
4743 | + # group as spares and replace faulty devices. |
4744 | + # |
4745 | + raid_fault_policy = "warn" |
4746 | + |
4747 | + # 'mirror_image_fault_policy' and 'mirror_log_fault_policy' define |
4748 | + # how a device failure affecting a mirror (of "mirror" segment type) is |
4749 | + # handled. A mirror is composed of mirror images (copies) and a log. |
4750 | + # A disk log ensures that a mirror does not need to be re-synced |
4751 | + # (all copies made the same) every time a machine reboots or crashes. |
4752 | + # |
4753 | + # In the event of a failure, the specified policy will be used to determine |
4754 | + # what happens. This applies to automatic repairs (when the mirror is being |
4755 | + # monitored by dmeventd) and to manual lvconvert --repair when |
4756 | + # --use-policies is given. |
4757 | + # |
4758 | + # "remove" - Simply remove the faulty device and run without it. If |
4759 | + # the log device fails, the mirror would convert to using |
4760 | + # an in-memory log. This means the mirror will not |
4761 | + # remember its sync status across crashes/reboots and |
4762 | + # the entire mirror will be re-synced. If a |
4763 | + # mirror image fails, the mirror will convert to a |
4764 | + # non-mirrored device if there is only one remaining good |
4765 | + # copy. |
4766 | + # |
4767 | + # "allocate" - Remove the faulty device and try to allocate space on |
4768 | + # a new device to be a replacement for the failed device. |
4769 | + # Using this policy for the log is fast and maintains the |
4770 | + # ability to remember sync state through crashes/reboots. |
4771 | + # Using this policy for a mirror device is slow, as it |
4772 | + # requires the mirror to resynchronize the devices, but it |
4773 | + # will preserve the mirror characteristic of the device. |
4774 | + # This policy acts like "remove" if no suitable device and |
4775 | + # space can be allocated for the replacement. |
4776 | + # |
4777 | + # "allocate_anywhere" - Not yet implemented. Useful to place the log device |
4778 | + # temporarily on the same physical volume as one of the mirror |
4779 | + # images. This policy is not recommended for mirror devices |
4780 | + # since it would break the redundant nature of the mirror. This |
4781 | + # policy acts like "remove" if no suitable device and space can |
4782 | + # be allocated for the replacement. |
4783 | + |
4784 | + mirror_log_fault_policy = "allocate" |
4785 | + mirror_image_fault_policy = "remove" |
4786 | + |
4787 | + # 'snapshot_autoextend_threshold' and 'snapshot_autoextend_percent' define |
4788 | + # how to handle automatic snapshot extension. The former defines when the |
4789 | + # snapshot should be extended: when its space usage exceeds this many |
4790 | + # percent. The latter defines how much extra space should be allocated for |
4791 | + # the snapshot, in percent of its current size. |
4792 | + # |
4793 | + # For example, if you set snapshot_autoextend_threshold to 70 and |
4794 | + # snapshot_autoextend_percent to 20, whenever a snapshot exceeds 70% usage, |
4795 | + # it will be extended by another 20%. For a 1G snapshot, using up 700M will |
4796 | + # trigger a resize to 1.2G. When the usage exceeds 840M, the snapshot will |
4797 | + # be extended to 1.44G, and so on. |
4798 | + # |
4799 | + # Setting snapshot_autoextend_threshold to 100 disables automatic |
4800 | + # extensions. The minimum value is 50 (a setting below 50 will be treated |
4801 | + # as 50). |
4802 | + |
4803 | + snapshot_autoextend_threshold = 100 |
4804 | + snapshot_autoextend_percent = 20 |
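The worked example above (threshold 70, percent 20: a 1G snapshot grows to 1.2G at 700M used, then to 1.44G at 840M) is just compound growth. A sketch of the arithmetic, expressed in KB; the real extension is performed by dmeventd, not by code like this:

```python
def autoextend(size_kb, used_kb, threshold, percent):
    """Grow by `percent` whenever usage exceeds `threshold` percent.

    Arithmetic illustration of snapshot/thin-pool autoextension;
    a threshold of 100 disables it. The same rule applies to the
    thin_pool_autoextend_* settings below.
    """
    if threshold >= 100:
        return size_kb  # autoextension disabled
    while used_kb * 100 > size_kb * threshold:
        size_kb = int(size_kb * (100 + percent) / 100)
    return size_kb

print(autoextend(1000, 701, 70, 20))  # 1200: one 20% extension
print(autoextend(1200, 841, 70, 20))  # 1440: the next extension
```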
4805 | + |
4806 | + # 'thin_pool_autoextend_threshold' and 'thin_pool_autoextend_percent' define |
4807 | + # how to handle automatic pool extension. The former defines when the |
4808 | + # pool should be extended: when its space usage exceeds this many |
4809 | + # percent. The latter defines how much extra space should be allocated for |
4810 | + # the pool, in percent of its current size. |
4811 | + # |
4812 | + # For example, if you set thin_pool_autoextend_threshold to 70 and |
4813 | + # thin_pool_autoextend_percent to 20, whenever a pool exceeds 70% usage, |
4814 | + # it will be extended by another 20%. For a 1G pool, using up 700M will |
4815 | + # trigger a resize to 1.2G. When the usage exceeds 840M, the pool will |
4816 | + # be extended to 1.44G, and so on. |
4817 | + # |
4818 | + # Setting thin_pool_autoextend_threshold to 100 disables automatic |
4819 | + # extensions. The minimum value is 50 (a setting below 50 will be treated |
4820 | + # as 50). |
4821 | + |
4822 | + thin_pool_autoextend_threshold = 100 |
4823 | + thin_pool_autoextend_percent = 20 |
4824 | + |
4825 | + # Full path of the utility called to check that a thin metadata device |
4826 | + # is in a state that allows it to be used. |
4827 | + # Each time a thin pool needs to be activated, this utility is executed. |
4828 | + # The activation will only proceed if the utility has an exit status of 0. |
4829 | + # Set to "" to skip this check. (Not recommended.) |
4830 | + # The thin tools are available as part of the device-mapper-persistent-data |
4831 | + # package from https://github.com/jthornber/thin-provisioning-tools. |
4832 | + # |
4833 | + thin_check_executable = "/sbin/thin_check -q" |
4834 | + |
4835 | + # While activating devices, I/O to devices being (re)configured is |
4836 | + # suspended, and as a precaution against deadlocks, LVM2 needs to pin |
4837 | + # any memory it is using so it is not paged out. Groups of pages that |
4838 | + # are known not to be accessed during activation need not be pinned |
4839 | + # into memory. Each string listed in this setting is compared against |
4840 | + # each line in /proc/self/maps, and the pages corresponding to any |
4841 | + # lines that match are not pinned. On some systems locale-archive was |
4842 | + # found to make up over 80% of the memory used by the process. |
4843 | + # mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ] |
4844 | + |
4845 | + # Set to 1 to revert to the default behaviour prior to version 2.02.62 |
4846 | + # which used mlockall() to pin the whole process's memory while activating |
4847 | + # devices. |
4848 | + use_mlockall = 0 |
4849 | + |
4850 | + # Monitoring is enabled by default when activating logical volumes. |
4851 | + # Set to 0 to disable monitoring or use the --ignoremonitoring option. |
4852 | + monitoring = 1 |
4853 | + |
4854 | + # When pvmove or lvconvert must wait for the kernel to finish |
4855 | + # synchronising or merging data, they check and report progress |
4856 | + # at intervals of this number of seconds. The default is 15 seconds. |
4857 | + # If this is set to 0 and there is only one thing to wait for, there |
4858 | + # are no progress reports, but the process is awoken as soon as the |
4859 | + # operation is complete. |
4860 | + polling_interval = 15 |
4861 | +} |
4862 | + |
4863 | + |
4864 | +#################### |
4865 | +# Advanced section # |
4866 | +#################### |
4867 | + |
4868 | +# Metadata settings |
4869 | +# |
4870 | +# metadata { |
4871 | + # Default number of copies of metadata to hold on each PV. 0, 1 or 2. |
4872 | + # You might want to override it from the command line with 0 |
4873 | + # when running pvcreate on new PVs which are to be added to large VGs. |
4874 | + |
4875 | + # pvmetadatacopies = 1 |
4876 | + |
4877 | + # Default number of copies of metadata to maintain for each VG. |
4878 | + # If set to a non-zero value, LVM automatically chooses which of |
4879 | + # the available metadata areas to use to achieve the requested |
4880 | + # number of copies of the VG metadata. If you set a value larger |
4881 | + # than the total number of metadata areas available then |
4882 | + # metadata is stored in them all. |
4883 | + # The default value of 0 ("unmanaged") disables this automatic |
4884 | + # management and allows you to control which metadata areas |
4885 | + # are used at the individual PV level using 'pvchange |
4886 | + # --metadataignore y/n'. |
4887 | + |
4888 | + # vgmetadatacopies = 0 |
4889 | + |
4890 | + # Approximate default size of on-disk metadata areas in sectors. |
4891 | + # You should increase this if you have large volume groups or |
4892 | + # you want to retain a large on-disk history of your metadata changes. |
4893 | + |
4894 | + # pvmetadatasize = 255 |
4895 | + |
4896 | + # List of directories holding live copies of text format metadata. |
4897 | + # These directories must not be on logical volumes! |
4898 | + # It's possible to use LVM2 with a couple of directories here, |
4899 | + # preferably on different (non-LV) filesystems, and with no other |
4900 | + # on-disk metadata (pvmetadatacopies = 0). Or this can be in |
4901 | + # addition to on-disk metadata areas. |
4902 | + # The feature was originally added to simplify testing and is not |
4903 | + # supported under low memory situations - the machine could lock up. |
4904 | + # |
4905 | + # Never edit any files in these directories by hand unless |
4906 | + # you are absolutely sure you know what you are doing! Use |
4907 | + # the supplied toolset to make changes (e.g. vgcfgrestore). |
4908 | + |
4909 | + # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ] |
4910 | +#} |
4911 | + |
4912 | +# Event daemon |
4913 | +# |
4914 | +dmeventd { |
4915 | + # mirror_library is the library used when monitoring a mirror device. |
4916 | + # |
4917 | + # "libdevmapper-event-lvm2mirror.so" attempts to recover from |
4918 | + # failures. It removes failed devices from a volume group and |
4919 | + # reconfigures a mirror as necessary. If no mirror library is |
4920 | + # provided, mirrors are not monitored through dmeventd. |
4921 | + |
4922 | + mirror_library = "libdevmapper-event-lvm2mirror.so" |
4923 | + |
4924 | + # snapshot_library is the library used when monitoring a snapshot device. |
4925 | + # |
4926 | + # "libdevmapper-event-lvm2snapshot.so" monitors the filling of |
4927 | + # snapshots and emits a warning through syslog when the use of |
4928 | + # the snapshot exceeds 80%. The warning is repeated when 85%, 90% and |
4929 | + # 95% of the snapshot is filled. |
4930 | + |
4931 | + snapshot_library = "libdevmapper-event-lvm2snapshot.so" |
4932 | + |
4933 | + # thin_library is the library used when monitoring a thin device. |
4934 | + # |
4935 | + # "libdevmapper-event-lvm2thin.so" monitors the filling of the |
4936 | + # pool and emits a warning through syslog when the use of |
4937 | + # the pool exceeds 80%. The warning is repeated when 85%, 90% and |
4938 | + # 95% of the pool is filled. |
4939 | + |
4940 | + thin_library = "libdevmapper-event-lvm2thin.so" |
4941 | + |
4942 | + # Full path of the dmeventd binary. |
4943 | + # |
4944 | + # executable = "@DMEVENTD_PATH@" |
4945 | +} |
4946 | |
4947 | === removed file '.pc/dirs.patch/doc/example.conf.in' |
4948 | --- .pc/dirs.patch/doc/example.conf.in 2012-04-14 02:57:53 +0000 |
4949 | +++ .pc/dirs.patch/doc/example.conf.in 1970-01-01 00:00:00 +0000 |
4950 | @@ -1,662 +0,0 @@ |
4951 | -# This is an example configuration file for the LVM2 system. |
4952 | -# It contains the default settings that would be used if there was no |
4953 | -# @DEFAULT_SYS_DIR@/lvm.conf file. |
4954 | -# |
4955 | -# Refer to 'man lvm.conf' for further information including the file layout. |
4956 | -# |
4957 | -# To put this file in a different directory and override @DEFAULT_SYS_DIR@ set |
4958 | -# the environment variable LVM_SYSTEM_DIR before running the tools. |
4959 | -# |
4960 | -# N.B. Take care that each setting only appears once if uncommenting |
4961 | -# example settings in this file. |
4962 | - |
4963 | - |
4964 | -# This section allows you to configure which block devices should |
4965 | -# be used by the LVM system. |
4966 | -devices { |
4967 | - |
4968 | - # Where do you want your volume groups to appear ? |
4969 | - dir = "/dev" |
4970 | - |
4971 | - # An array of directories that contain the device nodes you wish |
4972 | - # to use with LVM2. |
4973 | - scan = [ "/dev" ] |
4974 | - |
4975 | - # If set, the cache of block device nodes with all associated symlinks |
4976 | - # will be constructed out of the existing udev database content. |
4977 | - # This avoids using and opening any inapplicable non-block devices or |
4978 | - # subdirectories found in the device directory. This setting is applied |
4979 | - # to udev-managed device directory only, other directories will be scanned |
4980 | - # fully. LVM2 needs to be compiled with udev support for this setting to |
4981 | - # take effect. N.B. Any device node or symlink not managed by udev in |
4982 | - # udev directory will be ignored with this setting on. |
4983 | - obtain_device_list_from_udev = 1 |
4984 | - |
4985 | - # If several entries in the scanned directories correspond to the |
4986 | - # same block device and the tools need to display a name for device, |
4987 | - # all the pathnames are matched against each item in the following |
4988 | - # list of regular expressions in turn and the first match is used. |
4989 | - preferred_names = [ ] |
4990 | - |
4991 | - # Try to avoid using undescriptive /dev/dm-N names, if present. |
4992 | - # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ] |
4993 | - |
4994 | - # A filter that tells LVM2 to only use a restricted set of devices. |
4995 | - # The filter consists of an array of regular expressions. These |
4996 | - # expressions can be delimited by a character of your choice, and |
4997 | - # prefixed with either an 'a' (for accept) or 'r' (for reject). |
4998 | - # The first expression found to match a device name determines if |
4999 | - # the device will be accepted or rejected (ignored). Devices that |
5000 | - # don't match any patterns are accepted. |
> - debian/{clvmd.ra,clvm.init}:
>   - create /var/run/lvm if it doesn't exist.
debian/rules now points at /run/lvm; this should be updated to match.
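For reference, the kind of helper the init fragments use for this is only a few lines. This is a hedged sketch, not the actual packaging code: the function name and default path are illustrative, with /run/lvm chosen to match what debian/rules now uses.

```shell
#!/bin/sh
# Illustrative sketch: ensure the lvm2 runtime directory exists before
# starting clvmd. Name and default path are hypothetical.
ensure_rundir() {
    dir="${1:-/run/lvm}"   # /run/lvm rather than /var/run/lvm, per debian/rules
    [ -d "$dir" ] || mkdir -p "$dir"
}
```

On systems where /var/run is a symlink to /run the two spellings behave the same at runtime, but the packaging should still be self-consistent.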
--- debian/libdevmapper-dev.install 2010-12-07 08:08:45 +0000
+++ debian/libdevmapper-dev.install 2012-08-18 04:00:45 +0000
@@ -1,5 +1,2 @@
-usr/include/libdevmapper.h
-usr/include/libdevmapper-event.h
-usr/lib/libdevmapper.so
-usr/lib/pkgconfig/devmapper.pc
-usr/lib/pkgconfig/devmapper-event.pc
+usr/include/libdevmapper*
+usr/lib/*/pkgconfig/devmapper*
Hmm, good catch. I think you should call this out as a separate change in the changelog, since this isn't just a "remaining change" but a fix to multiarch support for the runtime lib package.
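For context, a rough sketch of what the wildcarded .install file resolves to on one architecture — the triplet shown is illustrative (it varies per architecture):

```
# debian/libdevmapper-dev.install after the change
usr/include/libdevmapper*
usr/lib/*/pkgconfig/devmapper*

# On amd64 the second pattern would match paths such as:
#   usr/lib/x86_64-linux-gnu/pkgconfig/devmapper.pc
#   usr/lib/x86_64-linux-gnu/pkgconfig/devmapper-event.pc
```

The `*` between usr/lib and pkgconfig stands in for the multiarch triplet directory, so a single .install file covers every architecture.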
--- debian/lvm2.postinst 2009-10-08 18:17:43 +0000
+++ debian/lvm2.postinst 2012-08-15 17:05:32 +0000
@@ -5,7 +5,9 @@
 case "$1" in
     configure)
-        invoke-rc.d lvm2 start || :
+        if [ -x /etc/init.d/lvm2 ]; then
+            invoke-rc.d lvm2 start || :
+        fi
         vgcfgbackup >/dev/null 2>&1 || :
         if [ -x /usr/sbin/update-initramfs ]; then
             update-initramfs -u
         fi
Since the lvm2.init is being dropped, this probably shouldn't be conditional - it should probably be removed entirely. In fact, when this regression was introduced during the lucid merge, the previous postinst was doing this:
if test -f /etc/init.d/lvm2; then
update-rc.d -f lvm2 remove >/dev/null 2>&1 || true
rm -f /etc/init.d/lvm2
fi
We probably want to be doing that again, possibly with better handling of modified conffiles.
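A sketch of what that could look like with modified-conffile handling. Everything here is illustrative, not the actual packaging code: the function name, the .dpkg-bak suffix convention, and the idea that the caller supplies the checksum of the version the old package shipped are all assumptions.

```shell
#!/bin/sh
# Hedged sketch: drop the obsolete lvm2 init script, but preserve a backup
# when the admin has modified it. "$pristine_md5" would be the checksum of
# the shipped version (hypothetical; not the real value).
remove_obsolete_initscript() {
    script="$1"
    pristine_md5="$2"
    [ -f "$script" ] || return 0
    if [ "$(md5sum "$script" | cut -d' ' -f1)" = "$pristine_md5" ]; then
        rm -f "$script"                     # unmodified: safe to delete
    else
        mv -f "$script" "$script.dpkg-bak"  # modified: keep a copy
    fi
    update-rc.d -f lvm2 remove >/dev/null 2>&1 || true
}
```

The checksum comparison mirrors what dpkg itself does for conffiles; an unmodified script is removed silently, a locally modified one is set aside rather than destroyed.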
BTW, perhaps it's worth checking with Kees to see if there was a reason he thought this init script should be restored when merging.
--- debian/libdevmapper-event1.02.1.symbols 2012-04-14 03:19:00 +0000
+++ debian/libdevmapper-event1.02.1.symbols 2012-08-18 04:00:45 +0000
@@ -1,11 +1,13 @@
 libdevmapper-event.so.1.02.1 libdevmapper-event1.02.1 #MINVER#
  Base@Base 2:1.02.20
- daemon_talk@Base 2:1.02.67
+ dm_event_daemon_fini_fifos@Base 2:1.02.74
+ dm_event_daemon_init_fifos@Base 2:1.02.74
+ dm_event_daemon_talk@Base 2:1.02.74
This is unfortunate, but it happens... In this case, it appears that dmeventd used the old symbol, so we want to have a Breaks against the old version of dmeventd. No other packages in Ubuntu that depend on libdevmapper-event1.02.1 appear to use that symbol, so this Breaks is the only fix needed.
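In debian/control that would look something like the fragment below. The version bound is illustrative only — it should be whichever dmeventd version first stopped using the old symbol names:

```
Package: libdevmapper-event1.02.1
...
Breaks: dmeventd (<< 2:1.02.74)
```

A Breaks (rather than Conflicts) is the right strength here: it lets apt deconfigure the old dmeventd and upgrade both packages in the same run.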
- dm_event_get_version@Base 2:1.02.67
+ dm_event_get_version@Base 2:1.02.74
  dm_event_handler_create@Base 2:1.02.20
  dm_event_handler_destroy@Base 2:1.02.20
- dm_event_handler_get_dev_name@Base 2:1.02.67
+ dm_event_handler_get_dev_name@Base 2:1.02.74
Those version bumps are unnecessary in Ubuntu, but mostly harmless.
@@ -22,5 +24,3 @@
  dm_event_handler_set_uuid@Base 2:1.02.20
  dm_event_register_handler@Base 2:1.02.20
  dm_event_unregister_handler@Base 2:1.02.20
- fini_fifos@Base 2:1.02.67
- init_fifos@Base 2:1.02.67
These symbols are also only used by dmeventd. (This makes sense, as all three symbols appear to be due to a Debian-specific patch for dmeventd.)
The rest looks good to me.