My plan was as outlined in LP-568 and LP-584:

- decide which functionality should be exposed via hooks (e.g. here the list of system packages)
- convert internal functions into plugins (e.g. lpcraft/plugin/lib.py -> lpcraft_install_packages)

The work for the current MP should have stopped there, but I was curious how external plugins work, so I added one; and then there was the discussion about how to test external plugins, so I added a test for that as well.

Until then I had only a vague idea of how to install plugins at runtime, which seems to be necessary if we ever want to support third-party plugins. My original idea, going back to the very first thoughts about a plugin system, was to use a subprocess to install the plugins via pip or similar. But since lpcraft no longer runs in the guest and has to run different jobs with different plugins, that is no longer viable.

Let's quickly define what I mean by internal and external plugins:

- internal plugin: replace data/a function with a hook, and provide the original data/function via a hook implementation
- external plugin: any plugin, no matter whether created by us (e.g. a tox plugin) or by a third party in the future, which can add data or even change behavior

It is true that the hooks of all installed plugins are loaded, but they are not necessarily all executed. All (external) plugins have names, so they can be filtered, e.g. by name:

```
(Pdb++) list(pm.get_plugins())[0].__name__
'lp_tox'
```

So when the `tox` plugin is mentioned in the configuration file, all other plugins could be filtered out, and only the tox hooks would be executed. This may sound a bit complicated, but this way all external plugins could share the same structure (and currently we could just install all plugins in the same environment). For this approach I assumed that all external plugins live outside the lpcraft repository.

An alternative approach would be to dynamically load plugins from a module/class within lpcraft itself, e.g. we add a tox plugin to the lpcraft repository.
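The name-based filtering could look roughly like this. This is only a sketch using pluggy; the hook name and the plugin classes are made up for illustration and are not lpcraft's actual API:

```python
# Sketch: register several plugins, then execute only the one named in the
# configuration file, via pluggy's subset_hook_caller. All names here
# (lpcraft_install_packages, ToxPlugin, OtherPlugin) are illustrative.
import pluggy

hookspec = pluggy.HookspecMarker("lpcraft")
hookimpl = pluggy.HookimplMarker("lpcraft")


class Spec:
    @hookspec
    def lpcraft_install_packages(self):
        """Return a list of system packages to install."""


class ToxPlugin:
    @hookimpl
    def lpcraft_install_packages(self):
        return ["tox"]


class OtherPlugin:
    @hookimpl
    def lpcraft_install_packages(self):
        return ["something-else"]


pm = pluggy.PluginManager("lpcraft")
pm.add_hookspecs(Spec)
pm.register(ToxPlugin(), name="tox")
pm.register(OtherPlugin(), name="other")

# The configuration file says `plugin: tox`, so drop everything else.
wanted = "tox"
unwanted = [p for p in pm.get_plugins() if pm.get_name(p) != wanted]
caller = pm.subset_hook_caller("lpcraft_install_packages", remove_plugins=unwanted)
results = caller()
print(results)  # [['tox']]
```

The upside of this shape is that filtering happens purely by registered plugin name, so it works the same for plugins installed via entry points and for plugins registered manually.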
In addition to loading "internal" plugins via `pm.register(HookImplementations(job))`, we could also load further plugins, e.g. `pm.register(ToxPlugin(job))` (with `ToxPlugin` as a made-up example class). For this approach we would probably need a mapping from the strings in the configuration file (`plugin: tox`) to class names, where the classes provide all the implementations for the relevant hooks.

All this may or may not be too complex; maybe even a simple data structure would be enough for a plugin, e.g. a list of additional deb and snap packages, a list of environment variables, and so on. Then a plugin could be a simple dictionary, or a data class, or an ini file? This should be clarified, too.

From LP-568 at least one question is still unanswered: how will we install third-party plugins? Maybe something like this?

```
pipeline:
    - test

jobs:
    test:
        series: focal
        architectures: amd64
        run: check-python-versions
        plugin: lp/check-python-versions
```

... where `lp/check-python-versions` would translate to a git repository on Launchpad, which would be cloned and installed -> and then the above approach with setuptools entry_points would pick it up. While third-party plugins are a thing of the future, we should keep them in mind while designing the plugin system now.

tl;dr

Good timing with your questions! We need to clarify:

- What is the scope of a plugin, i.e. a simple data structure vs. executable code, or even a package?
- Should the plugins we write be part of lpcraft or a standalone package/repo?

The current spec LP105 says both:

- "Plugins are implemented as modules in a suitable Python namespace (e.g. lpcraft.plugins)"
- "We do not immediately need to make it possible to add third-party plugins from outside the runner package itself"

The previous point also directly influences third-party plugins.

Let's have a 15-30 min video call tomorrow if time permits.
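As an appendix, a minimal sketch of the "simple data structure" variant mentioned above, including the mapping from configuration-file strings to plugin definitions. All names (`PluginInfo`, `resolve_plugin`, the package lists) are made up for illustration:

```python
# Sketch: a plugin as a plain data class of extra packages and environment
# variables, looked up by the `plugin:` value from the configuration file.
from dataclasses import dataclass, field


@dataclass
class PluginInfo:
    deb_packages: list = field(default_factory=list)
    snap_packages: list = field(default_factory=list)
    environment: dict = field(default_factory=dict)


# Mapping from configuration-file strings to plugin definitions.
PLUGINS = {
    "tox": PluginInfo(deb_packages=["python3-pip"], environment={}),
}


def resolve_plugin(name):
    """Look up a plugin by the name given in the configuration file."""
    try:
        return PLUGINS[name]
    except KeyError:
        raise ValueError(f"unknown plugin: {name!r}")


info = resolve_plugin("tox")
print(info.deb_packages)  # ['python3-pip']
```

If this turns out to be enough, the "executable code" question largely disappears: the runner merges the declared packages and environment variables into the job, and no hook machinery is needed for the simple cases.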