Merge lp:~nuclearbob/utah/cobbler-pidlock into lp:utah
| Status: | Rejected |
|---|---|
| Rejected by: | Max Brustkern |
| Proposed branch: | lp:~nuclearbob/utah/cobbler-pidlock |
| Merge into: | lp:utah |
| Diff against target: | 90 lines (+44/-26), 1 file modified: utah/provisioning/inventory/sqlite.py (+44/-26) |
| To merge this branch: | bzr merge lp:~nuclearbob/utah/cobbler-pidlock |
| Related bugs: | |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Max Brustkern (community) | Disapprove | | |

Review via email: mp+129046@code.launchpad.net
Description of the change
The original cobbler inventory used 'available' and 'provisioned' as machine states. If a job timed out and left a machine in the 'provisioned' state, there was no automatic process to recover it. This branch changes the inventory to store the PID of the requesting process as the state: if a machine is requested and its recorded PID is no longer active on the system, the machine is claimed anyway; if the listed PID is still active, the machine is not used.

I've tested it in magners-orchestra with the dx team's jenkins job. It could cause problems if multiple provisioning hosts were trying to use the same sqlite database to manage physical machines, but I think that scenario would introduce other problems as well. If we're dealing with multiple hosts trying to provision the same pool of machines, a more robust solution than sqlite is probably warranted.
Keep in mind that on a heavily loaded machine PIDs get recycled, so an entirely different process may end up with a PID that is still recorded in the inventory database.