Merge lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005 into lp:ubuntu/oneiric/cobbler
Proposed by James Westby
Status: Work in progress
Proposed branch: lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005
Merge into: lp:ubuntu/oneiric/cobbler
Diff against target: 4459 lines (+52/-4124) (has conflicts), 24 files modified

Modified files:
- .pc/.version (+0/-1)
- .pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py (+0/-568)
- .pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py (+0/-66)
- .pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py (+0/-777)
- .pc/33_authn_configfile.patch/config/modules.conf (+0/-86)
- .pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf (+0/-14)
- .pc/39_cw_remove_vhost.patch/config/cobbler_web.conf (+0/-14)
- .pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py (+0/-482)
- .pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py (+0/-332)
- .pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py (+0/-66)
- .pc/40_ubuntu_bind9_management.patch/templates/etc/named.template (+0/-31)
- .pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py (+0/-777)
- .pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py (+0/-98)
- .pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py (+0/-779)
- .pc/applied-patches (+0/-10)
- cobbler/action_check.py (+1/-1)
- cobbler/action_reposync.py (+1/-6)
- cobbler/codes.py (+2/-2)
- cobbler/modules/manage_bind.py (+3/-3)
- cobbler/modules/manage_import_debian_ubuntu.py (+18/-5)
- cobbler/modules/sync_post_restart_services.py (+2/-2)
- config/cobbler_web.conf (+5/-1)
- config/modules.conf (+1/-1)
- templates/etc/named.template (+19/-2)

Conflicts:
- Conflict: can't delete .pc because it is not empty. Not deleting.
- Conflict because .pc is not versioned, but has versioned children. Versioned directory.
- Conflict: can't delete .pc/42_fix_repomirror_create_sync.patch because it is not empty. Not deleting.
- Conflict because .pc/42_fix_repomirror_create_sync.patch is not versioned, but has versioned children. Versioned directory.
- Conflict: can't delete .pc/42_fix_repomirror_create_sync.patch/cobbler because it is not empty. Not deleting.
- Conflict because .pc/42_fix_repomirror_create_sync.patch/cobbler is not versioned, but has versioned children. Versioned directory.
- Conflict: can't delete .pc/43_fix_reposync_env_variable.patch because it is not empty. Not deleting.
- Conflict because .pc/43_fix_reposync_env_variable.patch is not versioned, but has versioned children. Versioned directory.
- Conflict: can't delete .pc/43_fix_reposync_env_variable.patch/cobbler because it is not empty. Not deleting.
- Conflict because .pc/43_fix_reposync_env_variable.patch/cobbler is not versioned, but has versioned children. Versioned directory.
- Contents conflict in .pc/43_fix_reposync_env_variable.patch/cobbler/action_reposync.py
- Text conflict in cobbler/modules/manage_import_debian_ubuntu.py

To merge this branch: bzr merge lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005
Related bugs: (none listed)
Reviewer: Dave Walker (community): Disapprove
Reviewer: Ubuntu branches: Pending
Review via email: mp+63942@code.launchpad.net
Commit message
Description of the change
The package history in the archive and the history in the bzr branch differ. As the archive is authoritative the history of lp:ubuntu/oneiric/cobbler now reflects that and the old bzr branch has been pushed to lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005. A merge should be performed if necessary.
Unmerged revisions
- 26. By Andres Rodriguez

  * debian/patches/42_fix_repomirror_create_sync.patch: Improve the method used to
    obtain the Ubuntu mirror when python-apt is installed.
  * debian/cobbler.postinst: Really fix the setting of 'server'. Move the logic
    that obtains the IP to debian/cobbler.config and use it as the default when available.

- 25. By Andres Rodriguez

  Un-apply all patches and remove .pc. IMO branches should not carry patches in
  applied form, as they generate huge diffs when updating or working with them.
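The changelog entry for revision 26 describes preferring the host's configured Ubuntu mirror when python-apt is available. A minimal sketch of that idea follows; the function and constant names are illustrative (this is not the actual patch code), and it assumes python-apt's `aptsources` module when present:

```python
# Sketch of the idea in 42_fix_repomirror_create_sync.patch (illustrative,
# not the patch itself): use the first enabled "deb" entry from the host's
# APT sources as the mirror, falling back to the main Ubuntu archive.
DEFAULT_MIRROR = "http://archive.ubuntu.com/ubuntu"

def detect_ubuntu_mirror():
    try:
        from aptsources.sourceslist import SourcesList  # provided by python-apt
    except ImportError:
        # python-apt not installed: fall back to the default archive
        return DEFAULT_MIRROR
    try:
        for entry in SourcesList():
            # take the first enabled binary ("deb") entry as the mirror
            if not entry.invalid and not entry.disabled and entry.type == "deb":
                return entry.uri.rstrip("/")
    except Exception:
        # unreadable sources.list: fall back rather than fail
        pass
    return DEFAULT_MIRROR
```

On a machine without python-apt this simply returns the default archive URL, which matches the hedge in the changelog ("if python-apt installed").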
Preview Diff
=== removed file '.pc/.version'
--- .pc/.version	2011-01-18 12:03:14 +0000
+++ .pc/.version	1970-01-01 00:00:00 +0000
@@ -1,1 +0,0 @@
-2
=== removed directory '.pc/05_cobbler_fix_reposync_permissions.patch'
=== removed directory '.pc/05_cobbler_fix_reposync_permissions.patch/cobbler'
=== removed file '.pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py'
--- .pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py	2011-01-28 14:39:12 +0000
+++ .pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py	1970-01-01 00:00:00 +0000
@@ -1,568 +0,0 @@
-"""
-Builds out and synchronizes yum repo mirrors.
-Initial support for rsync, perhaps reposync coming later.
-
-Copyright 2006-2007, Red Hat, Inc
-Michael DeHaan <mdehaan@redhat.com>
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; either version 2 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
-02110-1301 USA
-"""
-
-import os
-import os.path
-import time
-import yaml # Howell-Clark version
-import sys
-HAS_YUM = True
-try:
-    import yum
-except:
-    HAS_YUM = False
-
-import utils
-from cexceptions import *
-import traceback
-import errno
-from utils import _
-import clogger
-
-class RepoSync:
-    """
-    Handles conversion of internal state to the tftpboot tree layout
-    """
-
-    # ==================================================================================
-
-    def __init__(self,config,tries=1,nofail=False,logger=None):
-        """
-        Constructor
-        """
-        self.verbose = True
-        self.api = config.api
-        self.config = config
-        self.distros = config.distros()
-        self.profiles = config.profiles()
-        self.systems = config.systems()
-        self.settings = config.settings()
-        self.repos = config.repos()
-        self.rflags = self.settings.reposync_flags
-        self.tries = tries
-        self.nofail = nofail
-        self.logger = logger
-
-        if logger is None:
-            self.logger = clogger.Logger()
-
-        self.logger.info("hello, reposync")
-
-
-    # ===================================================================
-
-    def run(self, name=None, verbose=True):
-        """
-        Syncs the current repo configuration file with the filesystem.
-        """
-
-        self.logger.info("run, reposync, run!")
-
-        try:
-            self.tries = int(self.tries)
-        except:
-            utils.die(self.logger,"retry value must be an integer")
-
-        self.verbose = verbose
-
-        report_failure = False
-        for repo in self.repos:
-
-            env = repo.environment
-
-            for k in env.keys():
-                self.logger.info("environment: %s=%s" % (k,env[k]))
-                if env[k] is not None:
-                    os.putenv(k,env[k])
-
-            if name is not None and repo.name != name:
-                # invoked to sync only a specific repo, this is not the one
-                continue
-            elif name is None and not repo.keep_updated:
-                # invoked to run against all repos, but this one is off
-                self.logger.info("%s is set to not be updated" % repo.name)
-                continue
-
-            repo_mirror = os.path.join(self.settings.webdir, "repo_mirror")
-            repo_path = os.path.join(repo_mirror, repo.name)
-            mirror = repo.mirror
-
-            if not os.path.isdir(repo_path) and not repo.mirror.lower().startswith("rhn://"):
-                os.makedirs(repo_path)
-
-            # which may actually NOT reposync if the repo is set to not mirror locally
-            # but that's a technicality
-
-            for x in range(self.tries+1,1,-1):
-                success = False
-                try:
-                    self.sync(repo)
-                    success = True
-                except:
-                    utils.log_exc(self.logger)
-                    self.logger.warning("reposync failed, tries left: %s" % (x-2))
-
-            if not success:
-                report_failure = True
-                if not self.nofail:
-                    utils.die(self.logger,"reposync failed, retry limit reached, aborting")
-                else:
-                    self.logger.error("reposync failed, retry limit reached, skipping")
-
-            self.update_permissions(repo_path)
-
-        if report_failure:
-            utils.die(self.logger,"overall reposync failed, at least one repo failed to synchronize")
-
-        return True
-
-    # ==================================================================================
-
-    def sync(self, repo):
-
-        """
-        Conditionally sync a repo, based on type.
-        """
-
-        if repo.breed == "rhn":
-            return self.rhn_sync(repo)
-        elif repo.breed == "yum":
-            return self.yum_sync(repo)
-        elif repo.breed == "apt":
-            return self.apt_sync(repo)
-        elif repo.breed == "rsync":
-            return self.rsync_sync(repo)
-        else:
-            utils.die(self.logger,"unable to sync repo (%s), unknown or unsupported repo type (%s)" % (repo.name, repo.breed))
-
-    # ====================================================================================
-
-    def createrepo_walker(self, repo, dirname, fnames):
-        """
-        Used to run createrepo on a copied Yum mirror.
-        """
-        if os.path.exists(dirname) or repo['breed'] == 'rsync':
-            utils.remove_yum_olddata(dirname)
-
-            # add any repo metadata we can use
-            mdoptions = []
-            if os.path.isfile("%s/.origin/repomd.xml" % (dirname)):
-                if not HAS_YUM:
-                    utils.die(self.logger,"yum is required to use this feature")
-
-                rmd = yum.repoMDObject.RepoMD('', "%s/.origin/repomd.xml" % (dirname))
-                if rmd.repoData.has_key("group"):
-                    groupmdfile = rmd.getData("group").location[1]
-                    mdoptions.append("-g %s" % groupmdfile)
-                if rmd.repoData.has_key("prestodelta"):
-                    # need createrepo >= 0.9.7 to add deltas
-                    if utils.check_dist() == "redhat" or utils.check_dist() == "suse":
-                        cmd = "/usr/bin/rpmquery --queryformat=%{VERSION} createrepo"
-                        createrepo_ver = utils.subprocess_get(self.logger, cmd)
-                        if createrepo_ver >= "0.9.7":
-                            mdoptions.append("--deltas")
-                        else:
-                            self.logger.error("this repo has presto metadata; you must upgrade createrepo to >= 0.9.7 first and then need to resync the repo through cobbler.")
-
-            blended = utils.blender(self.api, False, repo)
-            flags = blended.get("createrepo_flags","(ERROR: FLAGS)")
-            try:
-                # BOOKMARK
-                cmd = "createrepo %s %s %s" % (" ".join(mdoptions), flags, dirname)
-                utils.subprocess_call(self.logger, cmd)
-            except:
-                utils.log_exc(self.logger)
-                self.logger.error("createrepo failed.")
-            del fnames[:] # we're in the right place
-
-    # ====================================================================================
-
-    def rsync_sync(self, repo):
-
-        """
-        Handle copying of rsync:// and rsync-over-ssh repos.
-        """
-
-        repo_mirror = repo.mirror
-
-        if not repo.mirror_locally:
-            utils.die(self.logger,"rsync:// urls must be mirrored locally, yum cannot access them directly")
-
-        if repo.rpm_list != "" and repo.rpm_list != []:
-            self.logger.warning("--rpm-list is not supported for rsync'd repositories")
-
-        # FIXME: don't hardcode
-        dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
-
-        spacer = ""
-        if not repo.mirror.startswith("rsync://") and not repo.mirror.startswith("/"):
-            spacer = "-e ssh"
-        if not repo.mirror.endswith("/"):
-            repo.mirror = "%s/" % repo.mirror
-
-        # FIXME: wrapper for subprocess that logs to logger
-        cmd = "rsync -rltDv %s --delete --exclude-from=/etc/cobbler/rsync.exclude %s %s" % (spacer, repo.mirror, dest_path)
-        rc = utils.subprocess_call(self.logger, cmd)
-
-        if rc !=0:
-            utils.die(self.logger,"cobbler reposync failed")
-        os.path.walk(dest_path, self.createrepo_walker, repo)
-        self.create_local_file(dest_path, repo)
-
-    # ====================================================================================
-
-    def rhn_sync(self, repo):
-
-        """
-        Handle mirroring of RHN repos.
-        """
-
-        repo_mirror = repo.mirror
-
-        # FIXME? warn about not having yum-utils. We don't want to require it in the package because
-        # RHEL4 and RHEL5U0 don't have it.
-
-        if not os.path.exists("/usr/bin/reposync"):
-            utils.die(self.logger,"no /usr/bin/reposync found, please install yum-utils")
-
-        cmd = ""                  # command to run
-        has_rpm_list = False      # flag indicating not to pull the whole repo
-
-        # detect cases that require special handling
-
-        if repo.rpm_list != "" and repo.rpm_list != []:
-            has_rpm_list = True
-
-        # create yum config file for use by reposync
-        # FIXME: don't hardcode
-        dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
-        temp_path = os.path.join(dest_path, ".origin")
-
-        if not os.path.isdir(temp_path):
-            # FIXME: there's a chance this might break the RHN D/L case
-            os.makedirs(temp_path)
-
-        # how we invoke yum-utils depends on whether this is RHN content or not.
-
-
-        # this is the somewhat more-complex RHN case.
-        # NOTE: this requires that you have entitlements for the server and you give the mirror as rhn://$channelname
-        if not repo.mirror_locally:
-            utils.die("rhn:// repos do not work with --mirror-locally=1")
-
-        if has_rpm_list:
-            self.logger.warning("warning: --rpm-list is not supported for RHN content")
-        rest = repo.mirror[6:] # everything after rhn://
-        cmd = "/usr/bin/reposync %s -r %s --download_path=%s" % (self.rflags, rest, "/var/www/cobbler/repo_mirror")
-        if repo.name != rest:
-            args = { "name" : repo.name, "rest" : rest }
-            utils.die(self.logger,"ERROR: repository %(name)s needs to be renamed %(rest)s as the name of the cobbler repository must match the name of the RHN channel" % args)
-
-        if repo.arch == "i386":
-            # counter-intuitive, but we want the newish kernels too
-            repo.arch = "i686"
-
-        if repo.arch != "":
-            cmd = "%s -a %s" % (cmd, repo.arch)
-
-        # now regardless of whether we're doing yumdownloader or reposync
-        # or whether the repo was http://, ftp://, or rhn://, execute all queued
-        # commands here. Any failure at any point stops the operation.
-
-        if repo.mirror_locally:
-            rc = utils.subprocess_call(self.logger, cmd)
-            # Don't die if reposync fails, it is logged
-            # if rc !=0:
-            #     utils.die(self.logger,"cobbler reposync failed")
-
-        # some more special case handling for RHN.
-        # create the config file now, because the directory didn't exist earlier
-
-        temp_file = self.create_local_file(temp_path, repo, output=False)
-
-        # now run createrepo to rebuild the index
-
-        if repo.mirror_locally:
-            os.path.walk(dest_path, self.createrepo_walker, repo)
-
-        # create the config file the hosts will use to access the repository.
-
-        self.create_local_file(dest_path, repo)
-
-    # ====================================================================================
-
-    def yum_sync(self, repo):
-
-        """
-        Handle copying of http:// and ftp:// yum repos.
-        """
-
-        repo_mirror = repo.mirror
-
-        # warn about not having yum-utils. We don't want to require it in the package because
-        # RHEL4 and RHEL5U0 don't have it.
-
-        if not os.path.exists("/usr/bin/reposync"):
-            utils.die(self.logger,"no /usr/bin/reposync found, please install yum-utils")
-
-        cmd = ""                  # command to run
-        has_rpm_list = False      # flag indicating not to pull the whole repo
-
-        # detect cases that require special handling
-
-        if repo.rpm_list != "" and repo.rpm_list != []:
-            has_rpm_list = True
-
-        # create yum config file for use by reposync
-        dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
-        temp_path = os.path.join(dest_path, ".origin")
-
-        if not os.path.isdir(temp_path) and repo.mirror_locally:
-            # FIXME: there's a chance this might break the RHN D/L case
-            os.makedirs(temp_path)
-
-        # create the config file that yum will use for the copying
-
-        if repo.mirror_locally:
-            temp_file = self.create_local_file(temp_path, repo, output=False)
-
-        if not has_rpm_list and repo.mirror_locally:
-            # if we have not requested only certain RPMs, use reposync
-            cmd = "/usr/bin/reposync %s --config=%s --repoid=%s --download_path=%s" % (self.rflags, temp_file, repo.name, "/var/www/cobbler/repo_mirror")
-            if repo.arch != "":
-                if repo.arch == "x86":
-                    repo.arch = "i386" # FIX potential arch errors
-                if repo.arch == "i386":
-                    # counter-intuitive, but we want the newish kernels too
-                    cmd = "%s -a i686" % (cmd)
-                else:
-                    cmd = "%s -a %s" % (cmd, repo.arch)
-
-        elif repo.mirror_locally:
-
-            # create the output directory if it doesn't exist
-            if not os.path.exists(dest_path):
-                os.makedirs(dest_path)
-
-            use_source = ""
-            if repo.arch == "src":
-                use_source = "--source"
-
-            # older yumdownloader sometimes explodes on --resolvedeps
-            # if this happens to you, upgrade yum & yum-utils
-            extra_flags = self.settings.yumdownloader_flags
-            cmd = "/usr/bin/yumdownloader %s %s --disablerepo=* --enablerepo=%s -c %s --destdir=%s %s" % (extra_flags, use_source, repo.name, temp_file, dest_path, " ".join(repo.rpm_list))
-
-        # now regardless of whether we're doing yumdownloader or reposync
-        # or whether the repo was http://, ftp://, or rhn://, execute all queued
-        # commands here. Any failure at any point stops the operation.
-
-        if repo.mirror_locally:
-            rc = utils.subprocess_call(self.logger, cmd)
-            if rc !=0:
-                utils.die(self.logger,"cobbler reposync failed")
-
-        repodata_path = os.path.join(dest_path, "repodata")
-
-        if not os.path.exists("/usr/bin/wget"):
-            utils.die(self.logger,"no /usr/bin/wget found, please install wget")
-
-        # grab repomd.xml and use it to download any metadata we can use
-        cmd2 = "/usr/bin/wget -q %s/repodata/repomd.xml -O %s/repomd.xml" % (repo_mirror, temp_path)
-        rc = utils.subprocess_call(self.logger,cmd2)
-        if rc == 0:
-            # create our repodata directory now, as any extra metadata we're
-            # about to download probably lives there
-            if not os.path.isdir(repodata_path):
-                os.makedirs(repodata_path)
-            rmd = yum.repoMDObject.RepoMD('', "%s/repomd.xml" % (temp_path))
-            for mdtype in rmd.repoData.keys():
-                # don't download metadata files that are created by default
-                if mdtype not in ["primary", "primary_db", "filelists", "filelists_db", "other", "other_db"]:
-                    mdfile = rmd.getData(mdtype).location[1]
-                    cmd3 = "/usr/bin/wget -q %s/%s -O %s/%s" % (repo_mirror, mdfile, dest_path, mdfile)
-                    utils.subprocess_call(self.logger,cmd3)
-                    if rc !=0:
-                        utils.die(self.logger,"wget failed")
-
-        # now run createrepo to rebuild the index
-
-        if repo.mirror_locally:
-            os.path.walk(dest_path, self.createrepo_walker, repo)
-
-        # create the config file the hosts will use to access the repository.
-
-        self.create_local_file(dest_path, repo)
-
-    # ====================================================================================
-
-
-    def apt_sync(self, repo):
-
-        """
-        Handle copying of http:// and ftp:// debian repos.
-        """
-
-        repo_mirror = repo.mirror
-
-        # warn about not having mirror program.
-
-        mirror_program = "/usr/bin/debmirror"
-        if not os.path.exists(mirror_program):
-            utils.die(self.logger,"no %s found, please install it"%(mirror_program))
-
-        cmd = ""                  # command to run
-        has_rpm_list = False      # flag indicating not to pull the whole repo
-
-        # detect cases that require special handling
-
-        if repo.rpm_list != "" and repo.rpm_list != []:
-            utils.die(self.logger,"has_rpm_list not yet supported on apt repos")
-
-        if not repo.arch:
-            utils.die(self.logger,"Architecture is required for apt repositories")
-
-        # built destination path for the repo
-        dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
-
-        if repo.mirror_locally:
-            mirror = repo.mirror.replace("@@suite@@",repo.os_version)
-
-            idx = mirror.find("://")
-            method = mirror[:idx]
-            mirror = mirror[idx+3:]
-
-            idx = mirror.find("/")
-            host = mirror[:idx]
-            mirror = mirror[idx+1:]
-
-            idx = mirror.rfind("/dists/")
-            suite = mirror[idx+7:]
-            mirror = mirror[:idx]
-
-            mirror_data = "--method=%s --host=%s --root=%s --dist=%s " % ( method , host , mirror , suite )
-
-            # FIXME : flags should come from repo instead of being hardcoded
-
-            rflags = "--passive --nocleanup"
-            for x in repo.yumopts:
-                if repo.yumopts[x]:
-                    rflags += " %s %s" % ( x , repo.yumopts[x] )
-                else:
-                    rflags += " %s" % x
-            cmd = "%s %s %s %s" % (mirror_program, rflags, mirror_data, dest_path)
-            if repo.arch == "src":
-                cmd = "%s --source" % cmd
-            else:
-                arch = repo.arch
-                if arch == "x86":
-                    arch = "i386" # FIX potential arch errors
-                if arch == "x86_64":
-                    arch = "amd64" # FIX potential arch errors
-                cmd = "%s --nosource -a %s" % (cmd, arch)
-
-            rc = utils.subprocess_call(self.logger, cmd)
-            if rc !=0:
-                utils.die(self.logger,"cobbler reposync failed")
-
-
-    def create_local_file(self, dest_path, repo, output=True):
-        """
-
-        Creates Yum config files for use by reposync
-
-        Two uses:
-        (A) output=True, Create local files that can be used with yum on provisioned clients to make use of this mirror.
-        (B) output=False, Create a temporary file for yum to feed into yum for mirroring
-        """
-
-        # the output case will generate repo configuration files which are usable
-        # for the installed systems. They need to be made compatible with --server-override
-        # which means they are actually templates, which need to be rendered by a cobbler-sync
-        # on per profile/system basis.
-
-        if output:
-            fname = os.path.join(dest_path,"config.repo")
-        else:
-            fname = os.path.join(dest_path, "%s.repo" % repo.name)
-        self.logger.debug("creating: %s" % fname)
-        if not os.path.exists(dest_path):
-            utils.mkdir(dest_path)
-        config_file = open(fname, "w+")
-        config_file.write("[%s]\n" % repo.name)
-        config_file.write("name=%s\n" % repo.name)
-        optenabled = False
-        optgpgcheck = False
-        if output:
-            if repo.mirror_locally:
-                line = "baseurl=http://${server}/cobbler/repo_mirror/%s\n" % (repo.name)
-            else:
-                mstr = repo.mirror
-                if mstr.startswith("/"):
-                    mstr = "file://%s" % mstr
-                line = "baseurl=%s\n" % mstr
-
-            config_file.write(line)
-            # user may have options specific to certain yum plugins
-            # add them to the file
-            for x in repo.yumopts:
-                config_file.write("%s=%s\n" % (x, repo.yumopts[x]))
-                if x == "enabled":
-                    optenabled = True
-                if x == "gpgcheck":
-                    optgpgcheck = True
-        else:
-            mstr = repo.mirror
-            if mstr.startswith("/"):
-                mstr = "file://%s" % mstr
-            line = "baseurl=%s\n" % mstr
-            if self.settings.http_port not in (80, '80'):
-                http_server = "%s:%s" % (self.settings.server, self.settings.http_port)
-            else:
-                http_server = self.settings.server
-            line = line.replace("@@server@@",http_server)
-            config_file.write(line)
-        if not optenabled:
-            config_file.write("enabled=1\n")
-        config_file.write("priority=%s\n" % repo.priority)
-        # FIXME: potentially might want a way to turn this on/off on a per-repo basis
-        if not optgpgcheck:
-            config_file.write("gpgcheck=0\n")
-        config_file.close()
-        return fname
-
-    # ==================================================================================
-
-    def update_permissions(self, repo_path):
-        """
-        Verifies that permissions and contexts after an rsync are as expected.
-        Sending proper rsync flags should prevent the need for this, though this is largely
-        a safeguard.
-        """
-        # all_path = os.path.join(repo_path, "*")
-        cmd1 = "chown -R root:apache %s" % repo_path
-        utils.subprocess_call(self.logger, cmd1)
-
-        cmd2 = "chmod -R 755 %s" % repo_path
-        utils.subprocess_call(self.logger, cmd2)
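The string slicing in apt_sync() above decomposes a mirror URL of the form method://host/root/dists/suite into the arguments debmirror expects. Extracted as a standalone helper (the function name is illustrative, not part of the original file), the same logic can be sketched and exercised like this:

```python
# Standalone sketch of the URL decomposition performed in apt_sync() above.
# The helper name is hypothetical; the slicing steps mirror the original code.
def split_debmirror_url(mirror, os_version):
    """Split 'method://host/root/dists/suite' into a debmirror argument string."""
    mirror = mirror.replace("@@suite@@", os_version)

    idx = mirror.find("://")
    method, mirror = mirror[:idx], mirror[idx + 3:]   # e.g. "http"

    idx = mirror.find("/")
    host, mirror = mirror[:idx], mirror[idx + 1:]     # e.g. "archive.ubuntu.com"

    idx = mirror.rfind("/dists/")
    suite, root = mirror[idx + 7:], mirror[:idx]      # e.g. "oneiric", "ubuntu"

    return "--method=%s --host=%s --root=%s --dist=%s" % (method, host, root, suite)

print(split_debmirror_url("http://archive.ubuntu.com/ubuntu/dists/@@suite@@", "oneiric"))
# --method=http --host=archive.ubuntu.com --root=ubuntu --dist=oneiric
```

Note that `rfind("/dists/")` is used so that a path which itself contains "dists" earlier on still splits at the final component, matching the original code's behavior.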
=== removed directory '.pc/12_fix_dhcp_restart.patch'
=== removed directory '.pc/12_fix_dhcp_restart.patch/cobbler'
=== removed directory '.pc/12_fix_dhcp_restart.patch/cobbler/modules'
=== removed file '.pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py'
--- .pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py 2011-01-28 14:39:12 +0000
+++ .pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py 1970-01-01 00:00:00 +0000
@@ -1,66 +0,0 @@
import distutils.sysconfig
import sys
import os
import traceback
import cexceptions
import os
import sys
import xmlrpclib
import cobbler.module_loader as module_loader
import cobbler.utils as utils

plib = distutils.sysconfig.get_python_lib()
mod_path="%s/cobbler" % plib
sys.path.insert(0, mod_path)

def register():
    # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
    # the return of this method indicates the trigger type
    return "/var/lib/cobbler/triggers/sync/post/*"

def run(api,args,logger):

    settings = api.settings()

    manage_dhcp = str(settings.manage_dhcp).lower()
    manage_dns = str(settings.manage_dns).lower()
    manage_tftpd = str(settings.manage_tftpd).lower()
    restart_dhcp = str(settings.restart_dhcp).lower()
    restart_dns = str(settings.restart_dns).lower()

    which_dhcp_module = module_loader.get_module_from_file("dhcp","module",just_name=True).strip()
    which_dns_module = module_loader.get_module_from_file("dns","module",just_name=True).strip()

    # special handling as we don't want to restart it twice
    has_restarted_dnsmasq = False

    rc = 0
    if manage_dhcp != "0":
        if which_dhcp_module == "manage_isc":
            if restart_dhcp != "0":
                rc = utils.subprocess_call(logger, "dhcpd -t -q", shell=True)
                if rc != 0:
                    logger.error("dhcpd -t failed")
                    return 1
                rc = utils.subprocess_call(logger,"service dhcpd restart", shell=True)
        elif which_dhcp_module == "manage_dnsmasq":
            if restart_dhcp != "0":
                rc = utils.subprocess_call(logger, "service dnsmasq restart")
                has_restarted_dnsmasq = True
        else:
            logger.error("unknown DHCP engine: %s" % which_dhcp_module)
            rc = 411

    if manage_dns != "0" and restart_dns != "0":
        if which_dns_module == "manage_bind":
            rc = utils.subprocess_call(logger, "service named restart", shell=True)
        elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq:
            rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True)
        elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq:
            rc = 0
        else:
            logger.error("unknown DNS engine: %s" % which_dns_module)
            rc = 412

    return rc
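The removed trigger above is a dispatch over the configured DHCP and DNS modules, with one subtlety: dnsmasq can serve both roles, so it must be restarted only once. A minimal standalone sketch of that logic (module names and the 411/412 return codes mirror the file above; `run_cmd` is a hypothetical stand-in for `utils.subprocess_call`):

```python
# Sketch of the trigger's dispatch: restart the right services based on
# which DHCP/DNS engines are configured, restarting dnsmasq at most once.
def restart_services(dhcp_module, dns_module, run_cmd,
                     manage_dhcp=True, manage_dns=True):
    rc = 0
    restarted_dnsmasq = False  # dnsmasq serves both roles; restart it only once
    if manage_dhcp:
        if dhcp_module == "manage_isc":
            rc = run_cmd("service dhcpd restart")
        elif dhcp_module == "manage_dnsmasq":
            rc = run_cmd("service dnsmasq restart")
            restarted_dnsmasq = True
        else:
            return 411  # unknown DHCP engine, as in the original
    if manage_dns:
        if dns_module == "manage_bind":
            rc = run_cmd("service named restart")
        elif dns_module == "manage_dnsmasq":
            if not restarted_dnsmasq:
                rc = run_cmd("service dnsmasq restart")
        else:
            return 412  # unknown DNS engine, as in the original
    return rc
```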
=== removed directory '.pc/21_cobbler_use_netboot.patch'
=== removed directory '.pc/21_cobbler_use_netboot.patch/cobbler'
=== removed directory '.pc/21_cobbler_use_netboot.patch/cobbler/modules'
=== removed file '.pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py'
--- .pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py 2011-01-18 12:03:14 +0000
+++ .pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py 1970-01-01 00:00:00 +0000
@@ -1,777 +0,0 @@
"""
This is some of the code behind 'cobbler sync'.

Copyright 2006-2009, Red Hat, Inc
Michael DeHaan <mdehaan@redhat.com>
John Eckersberg <jeckersb@redhat.com>

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301 USA
"""

import os
import os.path
import shutil
import time
import sys
import glob
import traceback
import errno
import re
from utils import popen2
from shlex import shlex


import utils
from cexceptions import *
import templar

import item_distro
import item_profile
import item_repo
import item_system

from utils import _

def register():
    """
    The mandatory cobbler module registration hook.
    """
    return "manage/import"

class ImportDebianUbuntuManager:

    def __init__(self,config,logger):
        """
        Constructor
        """
        self.logger = logger
        self.config = config
        self.api = config.api
        self.distros = config.distros()
        self.profiles = config.profiles()
        self.systems = config.systems()
        self.settings = config.settings()
        self.repos = config.repos()
        self.templar = templar.Templar(config)

    # required function for import modules
    def what(self):
        return "import/debian_ubuntu"

    # required function for import modules
    def check_for_signature(self,path,cli_breed):
        signatures = [
            'pool',
        ]

        #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path)
        for signature in signatures:
            d = os.path.join(path,signature)
            if os.path.exists(d):
                self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature)
                return (True,signature)

        if cli_breed and cli_breed in self.get_valid_breeds():
            self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path)
            return (True,None)

        return (False,None)

    # required function for import modules
    def run(self,pkgdir,mirror,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None):
        self.pkgdir = pkgdir
        self.mirror = mirror
        self.mirror_name = mirror_name
        self.network_root = network_root
        self.kickstart_file = kickstart_file
        self.rsync_flags = rsync_flags
        self.arch = arch
        self.breed = breed
        self.os_version = os_version

        # some fixups for the XMLRPC interface, which does not use "None"
        if self.arch == "": self.arch = None
        if self.mirror == "": self.mirror = None
        if self.mirror_name == "": self.mirror_name = None
        if self.kickstart_file == "": self.kickstart_file = None
        if self.os_version == "": self.os_version = None
        if self.rsync_flags == "": self.rsync_flags = None
        if self.network_root == "": self.network_root = None

        # If no breed was specified on the command line, figure it out
        if self.breed == None:
            self.breed = self.get_breed_from_directory()
            if not self.breed:
                utils.die(self.logger,"import failed - could not determine breed of debian-based distro")

        # debug log stuff for testing
        #self.logger.info("DEBUG: self.pkgdir = %s" % str(self.pkgdir))
        #self.logger.info("DEBUG: self.mirror = %s" % str(self.mirror))
        #self.logger.info("DEBUG: self.mirror_name = %s" % str(self.mirror_name))
        #self.logger.info("DEBUG: self.network_root = %s" % str(self.network_root))
        #self.logger.info("DEBUG: self.kickstart_file = %s" % str(self.kickstart_file))
        #self.logger.info("DEBUG: self.rsync_flags = %s" % str(self.rsync_flags))
        #self.logger.info("DEBUG: self.arch = %s" % str(self.arch))
        #self.logger.info("DEBUG: self.breed = %s" % str(self.breed))
        #self.logger.info("DEBUG: self.os_version = %s" % str(self.os_version))

        # both --import and --name are required arguments

        if self.mirror is None:
            utils.die(self.logger,"import failed. no --path specified")
        if self.mirror_name is None:
            utils.die(self.logger,"import failed. no --name specified")

        # if --arch is supplied, validate it to ensure it's valid

        if self.arch is not None and self.arch != "":
            self.arch = self.arch.lower()
            if self.arch == "x86":
                # be consistent
                self.arch = "i386"
            if self.arch not in self.get_valid_arches():
                utils.die(self.logger,"arch must be one of: %s" % string.join(self.get_valid_arches(),", "))

        # if we're going to do any copying, set where to put things
        # and then make sure nothing is already there.

        self.path = os.path.normpath( "%s/ks_mirror/%s" % (self.settings.webdir, self.mirror_name) )
        if os.path.exists(self.path) and self.arch is None:
            # FIXME : Raise exception even when network_root is given ?
            utils.die(self.logger,"Something already exists at this import location (%s). You must specify --arch to avoid potentially overwriting existing files." % self.path)

        # import takes a --kickstart for forcing selection that can't be used in all circumstances

        if self.kickstart_file and not self.breed:
            utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected")

        if self.os_version and not self.breed:
            utils.die(self.logger,"OS version can only be specified when a specific breed is selected")

        if self.breed and self.breed.lower() not in self.get_valid_breeds():
            utils.die(self.logger,"Supplied import breed is not supported by this module")

        # if --arch is supplied, make sure the user is not importing a path with a different
        # arch, which would just be silly.

        if self.arch:
            # append the arch path to the name if the arch is not already
            # found in the name.
            for x in self.get_valid_arches():
                if self.path.lower().find(x) != -1:
                    if self.arch != x :
                        utils.die(self.logger,"Architecture found on pathname (%s) does not fit the one given in command line (%s)"%(x,self.arch))
                    break
            else:
                # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again
                self.path += ("-%s" % self.arch)

        # make the output path and mirror content but only if not specifying that a network
        # accessible support location already exists (this is --available-as on the command line)

        if self.network_root is None:
            # we need to mirror (copy) the files

            utils.mkdir(self.path)

            if self.mirror.startswith("http://") or self.mirror.startswith("ftp://") or self.mirror.startswith("nfs://"):

                # http mirrors are kind of primative. rsync is better.
                # that's why this isn't documented in the manpage and we don't support them.
                # TODO: how about adding recursive FTP as an option?

                utils.die(self.logger,"unsupported protocol")

            else:

                # good, we're going to use rsync..
                # we don't use SSH for public mirrors and local files.
                # presence of user@host syntax means use SSH

                # kick off the rsync now

                if not utils.rsync_files(self.mirror, self.path, self.rsync_flags, self.logger):
                    utils.die(self.logger, "failed to rsync the files")

        else:

            # rather than mirroring, we're going to assume the path is available
            # over http, ftp, and nfs, perhaps on an external filer. scanning still requires
            # --mirror is a filesystem path, but --available-as marks the network path

            if not os.path.exists(self.mirror):
                utils.die(self.logger, "path does not exist: %s" % self.mirror)

            # find the filesystem part of the path, after the server bits, as each distro
            # URL needs to be calculated relative to this.

            if not self.network_root.endswith("/"):
                self.network_root = self.network_root + "/"
            self.path = os.path.normpath( self.mirror )
            valid_roots = [ "nfs://", "ftp://", "http://" ]
            for valid_root in valid_roots:
                if self.network_root.startswith(valid_root):
                    break
            else:
                utils.die(self.logger, "Network root given to --available-as must be nfs://, ftp://, or http://")
            if self.network_root.startswith("nfs://"):
                try:
                    (a,b,rest) = self.network_root.split(":",3)
                except:
                    utils.die(self.logger, "Network root given to --available-as is missing a colon, please see the manpage example.")

        # now walk the filesystem looking for distributions that match certain patterns

        self.logger.info("adding distros")
        distros_added = []
        # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST
        os.path.walk(self.path, self.distro_adder, distros_added)

        # find out if we can auto-create any repository records from the install tree

        if self.network_root is None:
            self.logger.info("associating repos")
            # FIXME: this automagic is not possible (yet) without mirroring
            self.repo_finder(distros_added)

        # find the most appropriate answer files for each profile object

        self.logger.info("associating kickstarts")
        self.kickstart_finder(distros_added)

        # ensure bootloaders are present
        self.api.pxegen.copy_bootloaders()

        return True

    # required function for import modules
    def get_valid_arches(self):
        return ["i386", "ppc", "x86_64", "x86",]

    # required function for import modules
    def get_valid_breeds(self):
        return ["debian","ubuntu"]

    # required function for import modules
    def get_valid_os_versions(self):
        if self.breed == "debian":
            return ["etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "experimental",]
        elif self.breed == "ubuntu":
            return ["dapper", "hardy", "karmic", "lucid", "maverick", "natty",]
        else:
            return []

    def get_valid_repo_breeds(self):
        return ["apt",]

    def get_release_files(self):
        """
        Find distro release packages.
        """
        return glob.glob(os.path.join(self.get_rootdir(), "dists/*"))

    def get_breed_from_directory(self):
        for breed in self.get_valid_breeds():
            # NOTE : Although we break the loop after the first match,
            # multiple debian derived distros can actually live at the same pool -- JP
            d = os.path.join(self.mirror, breed)
            if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.mirror)) or os.path.basename(self.mirror) == breed:
                return breed
        else:
            return None

    def get_tree_location(self, distro):
        """
        Once a distribution is identified, find the part of the distribution
        that has the URL in it that we want to use for kickstarting the
        distribution, and create a ksmeta variable $tree that contains this.
        """

        base = self.get_rootdir()

        if self.network_root is None:
            dists_path = os.path.join(self.path, "dists")
            if os.path.isdir(dists_path):
                tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
            else:
                tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
            self.set_install_tree(distro, tree)
        else:
            # where we assign the kickstart source is relative to our current directory
            # and the input start directory in the crawl. We find the path segments
            # between and tack them on the network source path to find the explicit
            # network path to the distro that Anaconda can digest.
            tail = self.path_tail(self.path, base)
            tree = self.network_root[:-1] + tail
            self.set_install_tree(distro, tree)

        return

    def repo_finder(self, distros_added):
        for distro in distros_added:
            self.logger.info("traversing distro %s" % distro.name)
            # FIXME : Shouldn't decide this the value of self.network_root ?
            if distro.kernel.find("ks_mirror") != -1:
                basepath = os.path.dirname(distro.kernel)
                top = self.get_rootdir()
                self.logger.info("descent into %s" % top)
                dists_path = os.path.join(self.path, "dists")
                if not os.path.isdir(dists_path):
                    self.process_repos()
            else:
                self.logger.info("this distro isn't mirrored")

    def process_repos(self):
        pass

    def distro_adder(self,distros_added,dirname,fnames):
        """
        This is an os.path.walk routine that finds distributions in the directory
        to be scanned and then creates them.
        """

        # FIXME: If there are more than one kernel or initrd image on the same directory,
        # results are unpredictable

        initrd = None
        kernel = None

        for x in fnames:
            adtls = []

            fullname = os.path.join(dirname,x)
            if os.path.islink(fullname) and os.path.isdir(fullname):
                if fullname.startswith(self.path):
                    self.logger.warning("avoiding symlink loop")
                    continue
                self.logger.info("following symlink: %s" % fullname)
                os.path.walk(fullname, self.distro_adder, distros_added)

            if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") or x.startswith("vmkboot.gz") ) and x != "initrd.size":
                initrd = os.path.join(dirname,x)
            if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") or x.startswith("mboot.c32") ) and x.find("initrd") == -1:
                kernel = os.path.join(dirname,x)

            # if we've collected a matching kernel and initrd pair, turn the in and add them to the list
            if initrd is not None and kernel is not None:
                adtls.append(self.add_entry(dirname,kernel,initrd))
                kernel = None
                initrd = None

            for adtl in adtls:
                distros_added.extend(adtl)

    def add_entry(self,dirname,kernel,initrd):
        """
        When we find a directory with a valid kernel/initrd in it, create the distribution objects
        as appropriate and save them. This includes creating xen and rescue distros/profiles
        if possible.
        """

        proposed_name = self.get_proposed_name(dirname,kernel)
        proposed_arch = self.get_proposed_arch(dirname)

        if self.arch and proposed_arch and self.arch != proposed_arch:
            utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch))

        archs = self.learn_arch_from_tree()
        if not archs:
            if self.arch:
                archs.append( self.arch )
        else:
            if self.arch and self.arch not in archs:
                utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir()))
        if proposed_arch:
            if archs and proposed_arch not in archs:
                self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir()))
                return

            archs = [ proposed_arch ]

        if len(archs)>1:
            self.logger.warning("- Warning : Multiple archs found : %s" % (archs))

        distros_added = []

        for pxe_arch in archs:
            name = proposed_name + "-" + pxe_arch
            existing_distro = self.distros.find(name=name)

            if existing_distro is not None:
                self.logger.warning("skipping import, as distro name already exists: %s" % name)
                continue

            else:
                self.logger.info("creating new distro: %s" % name)
                distro = self.config.new_distro()

            if name.find("-autoboot") != -1:
                # this is an artifact of some EL-3 imports
                continue

            distro.set_name(name)
            distro.set_kernel(kernel)
            distro.set_initrd(initrd)
            distro.set_arch(pxe_arch)
            distro.set_breed(self.breed)
            # If a version was supplied on command line, we set it now
            if self.os_version:
                distro.set_os_version(self.os_version)

            self.distros.add(distro,save=True)
            distros_added.append(distro)

            existing_profile = self.profiles.find(name=name)

            # see if the profile name is already used, if so, skip it and
            # do not modify the existing profile

            if existing_profile is None:
                self.logger.info("creating new profile: %s" % name)
                #FIXME: The created profile holds a default kickstart, and should be breed specific
                profile = self.config.new_profile()
            else:
                self.logger.info("skipping existing profile, name already exists: %s" % name)
                continue

            # save our minimal profile which just points to the distribution and a good
            # default answer file

            profile.set_name(name)
            profile.set_distro(name)
            profile.set_kickstart(self.kickstart_file)

            # depending on the name of the profile we can define a good virt-type
            # for usage with koan

            if name.find("-xen") != -1:
                profile.set_virt_type("xenpv")
            elif name.find("vmware") != -1:
                profile.set_virt_type("vmware")
            else:
                profile.set_virt_type("qemu")

            # save our new profile to the collection

            self.profiles.add(profile,save=True)

        return distros_added

    def get_proposed_name(self,dirname,kernel=None):
        """
        Given a directory name where we have a kernel/initrd pair, try to autoname
        the distribution (and profile) object based on the contents of that path
        """

        if self.network_root is not None:
            name = self.mirror_name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/"))
        else:
            # remove the part that says /var/www/cobbler/ks_mirror/name
            name = "-".join(dirname.split("/")[5:])

        if kernel is not None and kernel.find("PAE") != -1:
            name = name + "-PAE"

        # These are all Ubuntu's doing, the netboot images are buried pretty
        # deep. ;-) -JC
        name = name.replace("-netboot","")
        name = name.replace("-ubuntu-installer","")
        name = name.replace("-amd64","")
        name = name.replace("-i386","")

        # we know that some kernel paths should not be in the name

        name = name.replace("-images","")
        name = name.replace("-pxeboot","")
        name = name.replace("-install","")
        name = name.replace("-isolinux","")

        # some paths above the media root may have extra path segments we want
        # to clean up

        name = name.replace("-os","")
        name = name.replace("-tree","")
        name = name.replace("var-www-cobbler-", "")
        name = name.replace("ks_mirror-","")
        name = name.replace("--","-")

        # remove any architecture name related string, as real arch will be appended later

        name = name.replace("chrp","ppc64")

        for separator in [ '-' , '_' , '.' ] :
            for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]:
                name = name.replace("%s%s" % ( separator , arch ),"")

        return name

    def get_proposed_arch(self,dirname):
        """
        Given an directory name, can we infer an architecture from a path segment?
        """
        if dirname.find("x86_64") != -1 or dirname.find("amd") != -1:
            return "x86_64"
        if dirname.find("ia64") != -1:
            return "ia64"
        if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1:
            return "i386"
        if dirname.find("s390x") != -1:
            return "s390x"
        if dirname.find("s390") != -1:
            return "s390"
        if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1:
            return "ppc64"
        if dirname.find("ppc32") != -1:
            return "ppc"
        if dirname.find("ppc") != -1:
            return "ppc"
        return None

    def arch_walker(self,foo,dirname,fnames):
        """
        See docs on learn_arch_from_tree.

        The TRY_LIST is used to speed up search, and should be dropped for default importer
        Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem

        This method is useful to get the archs, but also to package type and a raw guess of the breed
        """

        # try to find a kernel header RPM and then look at it's arch.
        for x in fnames:
            if self.match_kernelarch_file(x):
                for arch in self.get_valid_arches():
                    if x.find(arch) != -1:
                        foo[arch] = 1
                for arch in [ "i686" , "amd64" ]:
                    if x.find(arch) != -1:
                        foo[arch] = 1

    def kickstart_finder(self,distros_added):
        """
        For all of the profiles in the config w/o a kickstart, use the
        given kickstart file, or look at the kernel path, from that,
        see if we can guess the distro, and if we can, assign a kickstart
        if one is available for it.
        """
        for profile in self.profiles:
            distro = self.distros.find(name=profile.get_conceptual_parent().name)
            if distro is None or not (distro in distros_added):
                continue

            kdir = os.path.dirname(distro.kernel)
            if self.kickstart_file == None:
                for file in self.get_release_files():
                    results = self.scan_pkg_filename(file)
                    # FIXME : If os is not found on tree but set with CLI, no kickstart is searched
                    if results is None:
                        self.logger.warning("skipping %s" % file)
                        continue
                    (flavor, major, minor, release) = results
                    # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata
                    #version , ks = self.set_variance(flavor, major, minor, distro.arch)
                    if self.os_version:
                        if self.os_version != flavor:
                            utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor))
                    distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch))
                    distro.set_os_version(flavor)
                    # is this even valid for debian/ubuntu? - jcammarata
                    #ds = self.get_datestamp()
                    #if ds is not None:
                    #    distro.set_tree_build_time(ds)
                    profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed")
                    self.profiles.add(profile,save=True)

            self.configure_tree_location(distro)
            self.distros.add(distro,save=True) # re-save
            self.api.serialize()

    def configure_tree_location(self, distro):
        """
        Once a distribution is identified, find the part of the distribution
1268 | 606 | that has the URL in it that we want to use for kickstarting the | ||
1269 | 607 | distribution, and create a ksmeta variable $tree that contains this. | ||
1270 | 608 | """ | ||
1271 | 609 | |||
1272 | 610 | base = self.get_rootdir() | ||
1273 | 611 | |||
1274 | 612 | if self.network_root is None: | ||
1275 | 613 | dists_path = os.path.join( self.path , "dists" ) | ||
1276 | 614 | if os.path.isdir( dists_path ): | ||
1277 | 615 | tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name) | ||
1278 | 616 | else: | ||
1279 | 617 | tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name) | ||
1280 | 618 | self.set_install_tree(distro, tree) | ||
1281 | 619 | else: | ||
1282 | 620 | # where we assign the kickstart source is relative to our current directory | ||
1283 | 621 | # and the input start directory in the crawl. We find the path segments | ||
1284 | 622 | # between and tack them on the network source path to find the explicit | ||
1285 | 623 | # network path to the distro that Anaconda can digest. | ||
1286 | 624 | tail = utils.path_tail(self.path, base) | ||
1287 | 625 | tree = self.network_root[:-1] + tail | ||
1288 | 626 | self.set_install_tree(distro, tree) | ||
1289 | 627 | |||
1290 | 628 | def get_rootdir(self): | ||
1291 | 629 | return self.mirror | ||
1292 | 630 | |||
1293 | 631 | def get_pkgdir(self): | ||
1294 | 632 | if not self.pkgdir: | ||
1295 | 633 | return None | ||
1296 | 634 | return os.path.join(self.get_rootdir(),self.pkgdir) | ||
1297 | 635 | |||
1298 | 636 | def set_install_tree(self, distro, url): | ||
1299 | 637 | distro.ks_meta["tree"] = url | ||
1300 | 638 | |||
1301 | 639 | def learn_arch_from_tree(self): | ||
1302 | 640 | """ | ||
1303 | 641 | If a distribution is imported from DVD, there is a good chance the path doesn't | ||
1304 | 642 | contain the arch and we should add it back in so that it's part of the | ||
1305 | 643 | meaningful name ... so this code helps figure out the arch name. This is important | ||
1306 | 644 | for producing predictable distro names (and profile names) from differing import sources | ||
1307 | 645 | """ | ||
1308 | 646 | result = {} | ||
1309 | 647 | # FIXME : this is called only once, should not be a walk | ||
1310 | 648 | if self.get_pkgdir(): | ||
1311 | 649 | os.path.walk(self.get_pkgdir(), self.arch_walker, result) | ||
1312 | 650 | if result.pop("amd64",False): | ||
1313 | 651 | result["x86_64"] = 1 | ||
1314 | 652 | if result.pop("i686",False): | ||
1315 | 653 | result["i386"] = 1 | ||
1316 | 654 | return result.keys() | ||
1317 | 655 | |||
1318 | 656 | def match_kernelarch_file(self, filename): | ||
1319 | 657 | """ | ||
1320 | 658 | Is the given filename a kernel filename? | ||
1321 | 659 | """ | ||
1322 | 660 | if not filename.endswith("deb"): | ||
1323 | 661 | return False | ||
1324 | 662 | if filename.startswith("linux-headers-"): | ||
1325 | 663 | return True | ||
1326 | 664 | return False | ||
1327 | 665 | |||
1328 | 666 | def scan_pkg_filename(self, file): | ||
1329 | 667 | """ | ||
1330 | 668 | Determine what the distro is based on the release package filename. | ||
1331 | 669 | """ | ||
1332 | 670 | # FIXME: all of these dist_names should probably be put in a function | ||
1333 | 671 | # which would be called in place of looking in codes.py. Right now | ||
1334 | 672 | # you have to update both codes.py and this to add a new release | ||
1335 | 673 | if self.breed == "debian": | ||
1336 | 674 | dist_names = ['etch','lenny',] | ||
1337 | 675 | elif self.breed == "ubuntu": | ||
1338 | 676 | dist_names = ['dapper','hardy','intrepid','jaunty','karmic','lynx','maverick','natty',] | ||
1339 | 677 | else: | ||
1340 | 678 | return None | ||
1341 | 679 | |||
1342 | 680 | if os.path.basename(file) in dist_names: | ||
1343 | 681 | release_file = os.path.join(file,'Release') | ||
1344 | 682 | self.logger.info("Found %s release file: %s" % (self.breed,release_file)) | ||
1345 | 683 | |||
1346 | 684 | f = open(release_file,'r') | ||
1347 | 685 | lines = f.readlines() | ||
1348 | 686 | f.close() | ||
1349 | 687 | |||
1350 | 688 | for line in lines: | ||
1351 | 689 | if line.lower().startswith('version: '): | ||
1352 | 690 | version = line.split(':')[1].strip() | ||
1353 | 691 | values = version.split('.') | ||
1354 | 692 | if len(values) == 1: | ||
1355 | 693 | # I don't think you'd ever hit this currently with debian or ubuntu, | ||
1356 | 694 | # just including it for safety reasons | ||
1357 | 695 | return (os.path.basename(file), values[0], "0", "0") | ||
1358 | 696 | elif len(values) == 2: | ||
1359 | 697 | return (os.path.basename(file), values[0], values[1], "0") | ||
1360 | 698 | elif len(values) > 2: | ||
1361 | 699 | return (os.path.basename(file), values[0], values[1], values[2]) | ||
1362 | 700 | return None | ||
1363 | 701 | |||
1364 | 702 | def get_datestamp(self): | ||
1365 | 703 | """ | ||
1366 | 704 | Not used for debian/ubuntu... should probably be removed? - jcammarata | ||
1367 | 705 | """ | ||
1368 | 706 | pass | ||
1369 | 707 | |||
1370 | 708 | def set_variance(self, flavor, major, minor, arch): | ||
1371 | 709 | """ | ||
1372 | 710 | Set distro specific versioning. | ||
1373 | 711 | """ | ||
1374 | 712 | # I don't think this is required anymore, as the scan_pkg_filename() function | ||
1375 | 713 | # above does everything we need it to - jcammarata | ||
1376 | 714 | # | ||
1377 | 715 | #if self.breed == "debian": | ||
1378 | 716 | # dist_names = { '4.0' : "etch" , '5.0' : "lenny" } | ||
1379 | 717 | # dist_vers = "%s.%s" % ( major , minor ) | ||
1380 | 718 | # os_version = dist_names[dist_vers] | ||
1381 | 719 | # | ||
1382 | 720 | # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" | ||
1383 | 721 | #elif self.breed == "ubuntu": | ||
1384 | 722 | # # Release names taken from wikipedia | ||
1385 | 723 | # dist_names = { '6.4' :"dapper", | ||
1386 | 724 | # '8.4' :"hardy", | ||
1387 | 725 | # '8.10' :"intrepid", | ||
1388 | 726 | # '9.4' :"jaunty", | ||
1389 | 727 | # '9.10' :"karmic", | ||
1390 | 728 | # '10.4' :"lynx", | ||
1391 | 729 | # '10.10':"maverick", | ||
1392 | 730 | # '11.4' :"natty", | ||
1393 | 731 | # } | ||
1394 | 732 | # dist_vers = "%s.%s" % ( major , minor ) | ||
1395 | 733 | # if not dist_names.has_key( dist_vers ): | ||
1396 | 734 | # dist_names['4ubuntu2.0'] = "IntrepidIbex" | ||
1397 | 735 | # os_version = dist_names[dist_vers] | ||
1398 | 736 | # | ||
1399 | 737 | # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" | ||
1400 | 738 | #else: | ||
1401 | 739 | # return None | ||
1402 | 740 | pass | ||
1403 | 741 | |||
1404 | 742 | def process_repos(self, main_importer, distro): | ||
1405 | 743 | # Create a disabled repository for the new distro, and the security updates | ||
1406 | 744 | # | ||
1407 | 745 | # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage | ||
1408 | 746 | |||
1409 | 747 | repo = item_repo.Repo(main_importer.config) | ||
1410 | 748 | repo.set_breed( "apt" ) | ||
1411 | 749 | repo.set_arch( distro.arch ) | ||
1412 | 750 | repo.set_keep_updated( False ) | ||
1413 | 751 | repo.yumopts["--ignore-release-gpg"] = None | ||
1414 | 752 | repo.yumopts["--verbose"] = None | ||
1415 | 753 | repo.set_name( distro.name ) | ||
1416 | 754 | repo.set_os_version( distro.os_version ) | ||
1417 | 755 | # NOTE : The location of the mirror should come from timezone | ||
1418 | 756 | repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) ) | ||
1419 | 757 | |||
1420 | 758 | security_repo = item_repo.Repo(main_importer.config) | ||
1421 | 759 | security_repo.set_breed( "apt" ) | ||
1422 | 760 | security_repo.set_arch( distro.arch ) | ||
1423 | 761 | security_repo.set_keep_updated( False ) | ||
1424 | 762 | security_repo.yumopts["--ignore-release-gpg"] = None | ||
1425 | 763 | security_repo.yumopts["--verbose"] = None | ||
1426 | 764 | security_repo.set_name( distro.name + "-security" ) | ||
1427 | 765 | security_repo.set_os_version( distro.os_version ) | ||
1428 | 766 | # There are no official mirrors for security updates | ||
1429 | 767 | security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' ) | ||
1430 | 768 | |||
1431 | 769 | self.logger.info("Added repos for %s" % distro.name) | ||
1432 | 770 | repos = main_importer.config.repos() | ||
1433 | 771 | repos.add(repo,save=True) | ||
1434 | 772 | repos.add(security_repo,save=True) | ||
1435 | 773 | |||
1436 | 774 | # ========================================================================== | ||
1437 | 775 | |||
1438 | 776 | def get_import_manager(config,logger): | ||
1439 | 777 | return ImportDebianUbuntuManager(config,logger) | ||
1440 | 778 | 0 | ||
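The `scan_pkg_filename()` version handling above — splitting a `Version:` line from a dists `Release` file and padding it out to a `(major, minor, release)` triple — can be sketched standalone as follows. This is a minimal illustration, not cobbler code; `parse_release_version` is a hypothetical helper name.

```python
# Minimal sketch of the Release-file version parsing used by
# scan_pkg_filename(): find the "Version:" header, split on dots,
# and pad missing components with "0" as the importer does.
# parse_release_version is a hypothetical helper, not part of cobbler.

def parse_release_version(release_text):
    for line in release_text.splitlines():
        if line.lower().startswith("version: "):
            values = line.split(":")[1].strip().split(".")
            # Pad to three components, mirroring the len(values) branches above.
            values += ["0"] * (3 - len(values))
            return tuple(values[:3])
    return None  # no Version header found

print(parse_release_version("Origin: Ubuntu\nVersion: 11.04\n"))  # ('11', '04', '0')
```

Note the importer returns the distro codename (the directory basename) as the first tuple element as well; this sketch only covers the numeric part.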
=== removed directory '.pc/33_authn_configfile.patch'
=== removed directory '.pc/33_authn_configfile.patch/config'
=== removed file '.pc/33_authn_configfile.patch/config/modules.conf'
--- .pc/33_authn_configfile.patch/config/modules.conf 2011-04-04 12:55:44 +0000
+++ .pc/33_authn_configfile.patch/config/modules.conf 1970-01-01 00:00:00 +0000
@@ -1,86 +0,0 @@
-# cobbler module configuration file
-# =================================
-
-# authentication:
-# what users can log into the WebUI and Read-Write XMLRPC?
-# choices:
-#    authn_denyall    -- no one (default)
-#    authn_configfile -- use /etc/cobbler/users.digest (for basic setups)
-#    authn_passthru   -- ask Apache to handle it (used for kerberos)
-#    authn_ldap       -- authenticate against LDAP
-#    authn_spacewalk  -- ask Spacewalk/Satellite (experimental)
-#    authn_testing    -- username/password is always testing/testing (debug)
-#    (user supplied)  -- you may write your own module
-# WARNING: this is a security setting, do not choose an option blindly.
-# for more information:
-# https://fedorahosted.org/cobbler/wiki/CobblerWebInterface
-# https://fedorahosted.org/cobbler/wiki/CustomizableSecurity
-# https://fedorahosted.org/cobbler/wiki/CobblerWithKerberos
-# https://fedorahosted.org/cobbler/wiki/CobblerWithLdap
-
-[authentication]
-module = authn_denyall
-
-# authorization:
-# once a user has been cleared by the WebUI/XMLRPC, what can they do?
-# choices:
-#    authz_allowall   -- full access for all authneticated users (default)
-#    authz_ownership  -- use users.conf, but add object ownership semantics
-#    (user supplied)  -- you may write your own module
-# WARNING: this is a security setting, do not choose an option blindly.
-# If you want to further restrict cobbler with ACLs for various groups,
-# pick authz_ownership.  authz_allowall does not support ACLs.  configfile
-# does but does not support object ownership which is useful as an additional
-# layer of control.
-
-# for more information:
-# https://fedorahosted.org/cobbler/wiki/CobblerWebInterface
-# https://fedorahosted.org/cobbler/wiki/CustomizableSecurity
-# https://fedorahosted.org/cobbler/wiki/CustomizableAuthorization
-# https://fedorahosted.org/cobbler/wiki/AuthorizationWithOwnership
-# https://fedorahosted.org/cobbler/wiki/AclFeature
-
-[authorization]
-module = authz_allowall
-
-# dns:
-# chooses the DNS management engine if manage_dns is enabled
-# in /etc/cobbler/settings, which is off by default.
-# choices:
-#    manage_bind    -- default, uses BIND/named
-#    manage_dnsmasq -- uses dnsmasq, also must select dnsmasq for dhcp below
-# NOTE: more configuration is still required in /etc/cobbler
-# for more information:
-# https://fedorahosted.org/cobbler/wiki/ManageDns
-
-[dns]
-module = manage_bind
-
-# dhcp:
-# chooses the DHCP management engine if manage_dhcp is enabled
-# in /etc/cobbler/settings, which is off by default.
-# choices:
-#    manage_isc     -- default, uses ISC dhcpd
-#    manage_dnsmasq -- uses dnsmasq, also must select dnsmasq for dns above
-# NOTE: more configuration is still required in /etc/cobbler
-# for more information:
-# https://fedorahosted.org/cobbler/wiki/ManageDhcp
-
-[dhcp]
-module = manage_isc
-
-# tftpd:
-# chooses the TFTP management engine if manage_tftp is enabled
-# in /etc/cobbler/settings, which is ON by default.
-#
-# choices:
-#    manage_in_tftpd -- default, uses the system's tftp server
-#    manage_tftpd_py -- uses cobbler's tftp server
-#
-# for more information:
-# https://fedorahosted.org/cobbler/wiki/ManageTftp
-
-[tftpd]
-module = manage_in_tftpd
-
-#--------------------------------------------------
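The removed modules.conf is a plain INI file with one `module =` key per section. A minimal sketch of how such a file can be queried (Python 3's `configparser` is used here purely for illustration; cobbler's own module loader on Python 2 did roughly the same via `ConfigParser`):

```python
# Sketch: read a modules.conf-shaped INI file and look up which module
# is configured for each subsystem. The inline config below is a
# trimmed example, not the full shipped file.
import configparser
import io

conf = io.StringIO("""\
[authentication]
module = authn_denyall

[dns]
module = manage_bind
""")

parser = configparser.ConfigParser()
parser.read_file(conf)

print(parser.get("authentication", "module"))  # authn_denyall
print(parser.get("dns", "module"))             # manage_bind
```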
=== removed directory '.pc/34_fix_apache_wont_start.patch'
=== removed directory '.pc/34_fix_apache_wont_start.patch/config'
=== removed file '.pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf'
--- .pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf 2011-04-04 12:55:44 +0000
+++ .pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
-# This configuration file enables the cobbler web
-# interface (django version)
-
-<VirtualHost *:80>
-
-# Do not log the requests generated from the event notification system
-SetEnvIf Request_URI ".*/op/events/user/.*" dontlog
-# Log only what remains
-CustomLog logs/access_log combined env=!dontlog
-
-WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi
-
-</VirtualHost>
-
=== removed directory '.pc/39_cw_remove_vhost.patch'
=== removed directory '.pc/39_cw_remove_vhost.patch/config'
=== removed file '.pc/39_cw_remove_vhost.patch/config/cobbler_web.conf'
--- .pc/39_cw_remove_vhost.patch/config/cobbler_web.conf 2011-04-15 12:47:39 +0000
+++ .pc/39_cw_remove_vhost.patch/config/cobbler_web.conf 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
-# This configuration file enables the cobbler web
-# interface (django version)
-
-<VirtualHost *:80>
-
-# Do not log the requests generated from the event notification system
-SetEnvIf Request_URI ".*/op/events/user/.*" dontlog
-# Log only what remains
-#CustomLog logs/access_log combined env=!dontlog
-
-WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi
-
-</VirtualHost>
-
1576 | === removed directory '.pc/40_ubuntu_bind9_management.patch' | |||
1577 | === removed directory '.pc/40_ubuntu_bind9_management.patch/cobbler' | |||
1578 | === removed file '.pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py' | |||
1579 | --- .pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py 2011-04-18 11:15:59 +0000 | |||
1580 | +++ .pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py 1970-01-01 00:00:00 +0000 | |||
1581 | @@ -1,482 +0,0 @@ | |||
1582 | 1 | """ | ||
1583 | 2 | Validates whether the system is reasonably well configured for | ||
1584 | 3 | serving up content. This is the code behind 'cobbler check'. | ||
1585 | 4 | |||
1586 | 5 | Copyright 2006-2009, Red Hat, Inc | ||
1587 | 6 | Michael DeHaan <mdehaan@redhat.com> | ||
1588 | 7 | |||
1589 | 8 | This program is free software; you can redistribute it and/or modify | ||
1590 | 9 | it under the terms of the GNU General Public License as published by | ||
1591 | 10 | the Free Software Foundation; either version 2 of the License, or | ||
1592 | 11 | (at your option) any later version. | ||
1593 | 12 | |||
1594 | 13 | This program is distributed in the hope that it will be useful, | ||
1595 | 14 | but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1596 | 15 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1597 | 16 | GNU General Public License for more details. | ||
1598 | 17 | |||
1599 | 18 | You should have received a copy of the GNU General Public License | ||
1600 | 19 | along with this program; if not, write to the Free Software | ||
1601 | 20 | Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA | ||
1602 | 21 | 02110-1301 USA | ||
1603 | 22 | """ | ||
1604 | 23 | |||
1605 | 24 | import os | ||
1606 | 25 | import re | ||
1607 | 26 | import action_sync | ||
1608 | 27 | import utils | ||
1609 | 28 | import glob | ||
1610 | 29 | from utils import _ | ||
1611 | 30 | import clogger | ||
1612 | 31 | |||
1613 | 32 | class BootCheck: | ||
1614 | 33 | |||
1615 | 34 | def __init__(self,config,logger=None): | ||
1616 | 35 | """ | ||
1617 | 36 | Constructor | ||
1618 | 37 | """ | ||
1619 | 38 | self.config = config | ||
1620 | 39 | self.settings = config.settings() | ||
1621 | 40 | if logger is None: | ||
1622 | 41 | logger = clogger.Logger() | ||
1623 | 42 | self.logger = logger | ||
1624 | 43 | |||
1625 | 44 | |||
1626 | 45 | def run(self): | ||
1627 | 46 | """ | ||
1628 | 47 | Returns None if there are no errors, otherwise returns a list | ||
1629 | 48 | of things to correct prior to running application 'for real'. | ||
1630 | 49 | (The CLI usage is "cobbler check" before "cobbler sync") | ||
1631 | 50 | """ | ||
1632 | 51 | status = [] | ||
1633 | 52 | self.checked_dist = utils.check_dist() | ||
1634 | 53 | self.check_name(status) | ||
1635 | 54 | self.check_selinux(status) | ||
1636 | 55 | if self.settings.manage_dhcp: | ||
1637 | 56 | mode = self.config.api.get_sync().dhcp.what() | ||
1638 | 57 | if mode == "isc": | ||
1639 | 58 | self.check_dhcpd_bin(status) | ||
1640 | 59 | self.check_dhcpd_conf(status) | ||
1641 | 60 | self.check_service(status,"dhcpd") | ||
1642 | 61 | elif mode == "dnsmasq": | ||
1643 | 62 | self.check_dnsmasq_bin(status) | ||
1644 | 63 | self.check_service(status,"dnsmasq") | ||
1645 | 64 | |||
1646 | 65 | if self.settings.manage_dns: | ||
1647 | 66 | mode = self.config.api.get_sync().dns.what() | ||
1648 | 67 | if mode == "bind": | ||
1649 | 68 | self.check_bind_bin(status) | ||
1650 | 69 | self.check_service(status,"named") | ||
1651 | 70 | elif mode == "dnsmasq" and not self.settings.manage_dhcp: | ||
1652 | 71 | self.check_dnsmasq_bin(status) | ||
1653 | 72 | self.check_service(status,"dnsmasq") | ||
1654 | 73 | |||
1655 | 74 | mode = self.config.api.get_sync().tftpd.what() | ||
1656 | 75 | if mode == "in_tftpd": | ||
1657 | 76 | self.check_tftpd_bin(status) | ||
1658 | 77 | self.check_tftpd_dir(status) | ||
1659 | 78 | self.check_tftpd_conf(status) | ||
1660 | 79 | elif mode == "tftpd_py": | ||
1661 | 80 | self.check_ctftpd_bin(status) | ||
1662 | 81 | self.check_ctftpd_dir(status) | ||
1663 | 82 | self.check_ctftpd_conf(status) | ||
1664 | 83 | |||
1665 | 84 | self.check_service(status, "cobblerd") | ||
1666 | 85 | |||
1667 | 86 | self.check_bootloaders(status) | ||
1668 | 87 | self.check_rsync_conf(status) | ||
1669 | 88 | self.check_httpd(status) | ||
1670 | 89 | self.check_iptables(status) | ||
1671 | 90 | self.check_yum(status) | ||
1672 | 91 | self.check_debmirror(status) | ||
1673 | 92 | self.check_for_ksvalidator(status) | ||
1674 | 93 | self.check_for_default_password(status) | ||
1675 | 94 | self.check_for_unreferenced_repos(status) | ||
1676 | 95 | self.check_for_unsynced_repos(status) | ||
1677 | 96 | self.check_for_cman(status) | ||
1678 | 97 | |||
1679 | 98 | return status | ||
1680 | 99 | |||
1681 | 100 | def check_for_ksvalidator(self, status): | ||
1682 | 101 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1683 | 102 | return | ||
1684 | 103 | |||
1685 | 104 | if not os.path.exists("/usr/bin/ksvalidator"): | ||
1686 | 105 | status.append("ksvalidator was not found, install pykickstart") | ||
1687 | 106 | |||
1688 | 107 | return True | ||
1689 | 108 | |||
1690 | 109 | def check_for_cman(self, status): | ||
1691 | 110 | # not doing rpm -q here to be cross-distro friendly | ||
1692 | 111 | if not os.path.exists("/sbin/fence_ilo") and not os.path.exists("/usr/sbin/fence_ilo"): | ||
1693 | 112 | status.append("fencing tools were not found, and are required to use the (optional) power management features. install cman or fence-agents to use them") | ||
1694 | 113 | return True | ||
1695 | 114 | |||
1696 | 115 | def check_service(self, status, which, notes=""): | ||
1697 | 116 | if notes != "": | ||
1698 | 117 | notes = " (NOTE: %s)" % notes | ||
1699 | 118 | rc = 0 | ||
1700 | 119 | if self.checked_dist == "redhat" or self.checked_dist == "suse": | ||
1701 | 120 | if os.path.exists("/etc/rc.d/init.d/%s" % which): | ||
1702 | 121 | rc = utils.subprocess_call(self.logger,"/sbin/service %s status > /dev/null 2>/dev/null" % which, shell=True) | ||
1703 | 122 | if rc != 0: | ||
1704 | 123 | status.append(_("service %s is not running%s") % (which,notes)) | ||
1705 | 124 | return False | ||
1706 | 125 | elif self.checked_dist in ["debian", "ubuntu"]: | ||
1707 | 126 | # we still use /etc/init.d | ||
1708 | 127 | if os.path.exists("/etc/init.d/%s" % which): | ||
1709 | 128 | rc = utils.subprocess_call(self.logger,"/etc/init.d/%s status /dev/null 2>/dev/null" % which, shell=True) | ||
1710 | 129 | if rc != 0: | ||
1711 | 130 | status.append(_("service %s is not running%s") % which,notes) | ||
1712 | 131 | return False | ||
1713 | 132 | elif self.checked_dist == "ubuntu": | ||
1714 | 133 | if os.path.exists("/etc/init/%s.conf" % which): | ||
1715 | 134 | rc = utils.subprocess_call(self.logger,"status %s > /dev/null 2>&1" % which, shell=True) | ||
1716 | 135 | if rc != 0: | ||
1717 | 136 | status.append(_("service %s is not running%s") % (which,notes)) | ||
1718 | 137 | else: | ||
1719 | 138 | status.append(_("Unknown distribution type, cannot check for running service %s" % which)) | ||
1720 | 139 | return False | ||
1721 | 140 | return True | ||
1722 | 141 | |||
1723 | 142 | def check_iptables(self, status): | ||
1724 | 143 | if os.path.exists("/etc/rc.d/init.d/iptables"): | ||
1725 | 144 | rc = utils.subprocess_call(self.logger,"/sbin/service iptables status >/dev/null 2>/dev/null", shell=True) | ||
1726 | 145 | if rc == 0: | ||
1727 | 146 | status.append(_("since iptables may be running, ensure 69, 80, and %(xmlrpc)s are unblocked") % { "xmlrpc" : self.settings.xmlrpc_port }) | ||
1728 | 147 | |||
1729 | 148 | def check_yum(self,status): | ||
1730 | 149 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1731 | 150 | return | ||
1732 | 151 | |||
1733 | 152 | if not os.path.exists("/usr/bin/createrepo"): | ||
1734 | 153 | status.append(_("createrepo package is not installed, needed for cobbler import and cobbler reposync, install createrepo?")) | ||
1735 | 154 | if not os.path.exists("/usr/bin/reposync"): | ||
1736 | 155 | status.append(_("reposync is not installed, need for cobbler reposync, install/upgrade yum-utils?")) | ||
1737 | 156 | if not os.path.exists("/usr/bin/yumdownloader"): | ||
1738 | 157 | status.append(_("yumdownloader is not installed, needed for cobbler repo add with --rpm-list parameter, install/upgrade yum-utils?")) | ||
1739 | 158 | if self.settings.reposync_flags.find("-l"): | ||
1740 | 159 | if self.checked_dist == "redhat" or self.checked_dist == "suse": | ||
1741 | 160 | yum_utils_ver = utils.subprocess_get(self.logger,"/usr/bin/rpmquery --queryformat=%{VERSION} yum-utils", shell=True) | ||
1742 | 161 | if yum_utils_ver < "1.1.17": | ||
1743 | 162 | status.append(_("yum-utils need to be at least version 1.1.17 for reposync -l, current version is %s") % yum_utils_ver ) | ||
1744 | 163 | |||
1745 | 164 | def check_debmirror(self,status): | ||
1746 | 165 | if not os.path.exists("/usr/bin/debmirror"): | ||
1747 | 166 | status.append(_("debmirror package is not installed, it will be required to manage debian deployments and repositories")) | ||
1748 | 167 | if os.path.exists("/etc/debmirror.conf"): | ||
1749 | 168 | f = open("/etc/debmirror.conf") | ||
1750 | 169 | re_dists = re.compile(r'@dists=') | ||
1751 | 170 | re_arches = re.compile(r'@arches=') | ||
1752 | 171 | for line in f.readlines(): | ||
1753 | 172 | if re_dists.search(line) and not line.strip().startswith("#"): | ||
1754 | 173 | status.append(_("comment 'dists' on /etc/debmirror.conf for proper debian support")) | ||
1755 | 174 | if re_arches.search(line) and not line.strip().startswith("#"): | ||
1756 | 175 | status.append(_("comment 'arches' on /etc/debmirror.conf for proper debian support")) | ||
1757 | 176 | |||
1758 | 177 | |||
1759 | 178 | def check_name(self,status): | ||
1760 | 179 | """ | ||
1761 | 180 | If the server name in the config file is still set to localhost | ||
1762 | 181 | kickstarts run from koan will not have proper kernel line | ||
1763 | 182 | parameters. | ||
1764 | 183 | """ | ||
1765 | 184 | if self.settings.server == "127.0.0.1": | ||
1766 | 185 | status.append(_("The 'server' field in /etc/cobbler/settings must be set to something other than localhost, or kickstarting features will not work. This should be a resolvable hostname or IP for the boot server as reachable by all machines that will use it.")) | ||
1767 | 186 | if self.settings.next_server == "127.0.0.1": | ||
1768 | 187 | status.append(_("For PXE to be functional, the 'next_server' field in /etc/cobbler/settings must be set to something other than 127.0.0.1, and should match the IP of the boot server on the PXE network.")) | ||
1769 | 188 | |||
1770 | 189 | def check_selinux(self,status): | ||
1771 | 190 | """ | ||
1772 | 191 | Suggests various SELinux rules changes to run Cobbler happily with | ||
1773 | 192 | SELinux in enforcing mode. FIXME: this method could use some | ||
1774 | 193 | refactoring in the future. | ||
1775 | 194 | """ | ||
1776 | 195 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1777 | 196 | return | ||
1778 | 197 | |||
1779 | 198 | enabled = self.config.api.is_selinux_enabled() | ||
1780 | 199 | if enabled: | ||
1781 | 200 | data2 = utils.subprocess_get(self.logger,"/usr/sbin/getsebool -a",shell=True) | ||
1782 | 201 | for line in data2.split("\n"): | ||
1783 | 202 | if line.find("httpd_can_network_connect ") != -1: | ||
1784 | 203 | if line.find("off") != -1: | ||
1785 | 204 | status.append(_("Must enable a selinux boolean to enable vital web services components, run: setsebool -P httpd_can_network_connect true")) | ||
1786 | 205 | if line.find("rsync_disable_trans ") != -1: | ||
1787 | 206 | if line.find("on") != -1: | ||
1788 | 207 | status.append(_("Must enable the cobbler import and replicate commands, run: setsebool -P rsync_disable_trans=1")) | ||
1789 | 208 | |||
1790 | 209 | data3 = utils.subprocess_get(self.logger,"/usr/sbin/semanage fcontext -l | grep public_content_t",shell=True) | ||
1791 | 210 | |||
1792 | 211 | rule1 = False | ||
1793 | 212 | rule2 = False | ||
1794 | 213 | rule3 = False | ||
1795 | 214 | selinux_msg = "/usr/sbin/semanage fcontext -a -t public_content_t \"%s\"" | ||
1796 | 215 | for line in data3.split("\n"): | ||
1797 | 216 | if line.startswith("/tftpboot/.*"): | ||
1798 | 217 | rule1 = True | ||
1799 | 218 | if line.startswith("/var/lib/tftpboot/.*"): | ||
1800 | 219 | rule2 = True | ||
1801 | 220 | if line.startswith("/var/www/cobbler/images/.*"): | ||
1802 | 221 | rule3 = True | ||
1803 | 222 | |||
1804 | 223 | rules = [] | ||
1805 | 224 | if os.path.exists("/tftpboot") and not rule1: | ||
1806 | 225 | rules.append(selinux_msg % "/tftpboot/.*") | ||
1807 | 226 | else: | ||
1808 | 227 | if not rule2: | ||
1809 | 228 | rules.append(selinux_msg % "/var/lib/tftpboot/.*") | ||
1810 | 229 | if not rule3: | ||
1811 | 230 | rules.append(selinux_msg % "/var/www/cobbler/images/.*") | ||
1812 | 231 | if len(rules) > 0: | ||
1813 | 232 | status.append("you need to set some SELinux content rules to ensure cobbler serves content correctly in your SELinux environment, run the following: %s" % " && ".join(rules)) | ||
1814 | 233 | |||
1815 | 234 | # now check to see that the Django sessions path is accessible | ||
1816 | 235 | # by Apache | ||
1817 | 236 | |||
1818 | 237 | data4 = utils.subprocess_get(self.logger,"/usr/sbin/semanage fcontext -l | grep httpd_sys_content_rw_t",shell=True) | ||
1819 | 238 | selinux_msg = "you need to set some SELinux rules if you want to use cobbler-web (an optional package), run the following: /usr/sbin/semanage fcontext -a -t httpd_sys_content_rw_t \"%s\"" | ||
1820 | 239 | rule4 = False | ||
1821 | 240 | for line in data4.split("\n"): | ||
1822 | 241 | if line.startswith("/var/lib/cobbler/webui_sessions/.*"): | ||
1823 | 242 | rule4 = True | ||
1824 | 243 | if not rule4: | ||
1825 | 244 | status.append(selinux_msg % "/var/lib/cobbler/webui_sessions/.*") | ||
1826 | 245 | |||
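The two SELinux checks above share one pattern: scan `semanage fcontext -l` output line by line and flag any expected path rule that is absent. A minimal sketch of that scan, using a hypothetical sample of semanage output (real output is distribution-specific):

```python
# Sketch of the rule-scanning logic in the SELinux checks above.
# SAMPLE_SEMANAGE_OUTPUT is hypothetical illustration data.
SAMPLE_SEMANAGE_OUTPUT = """\
/tftpboot/.* all files system_u:object_r:public_content_t:s0
/var/www/html(/.*)? all files system_u:object_r:httpd_sys_content_t:s0
"""

def missing_fcontext_rules(semanage_output, required_patterns):
    """Return required path patterns with no matching semanage line."""
    present = set()
    for line in semanage_output.split("\n"):
        for pattern in required_patterns:
            if line.startswith(pattern):
                present.add(pattern)
    return [p for p in required_patterns if p not in present]

missing = missing_fcontext_rules(
    SAMPLE_SEMANAGE_OUTPUT,
    ["/tftpboot/.*", "/var/lib/tftpboot/.*", "/var/www/cobbler/images/.*"],
)
```

Each missing pattern is then turned into a `semanage fcontext -a -t public_content_t "<pattern>"` suggestion, joined with `&&` as in the status message above.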
1827 | 246 | |||
1828 | 247 | def check_for_default_password(self,status): | ||
1829 | 248 | default_pass = self.settings.default_password_crypted | ||
1830 | 249 | if default_pass == "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac.": | ||
1831 | 250 | status.append(_("The default password used by the sample templates for newly installed machines (default_password_crypted in /etc/cobbler/settings) is still set to 'cobbler' and should be changed, try: \"openssl passwd -1 -salt 'random-phrase-here' 'your-password-here'\" to generate new one")) | ||
1832 | 251 | |||
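`check_for_default_password` is a straight string comparison against the crypted hash that ships with the sample templates. The same comparison in isolation (the hash is the one from the source; the helper name is hypothetical):

```python
# The crypted form of the password 'cobbler' shipped in the sample settings.
KNOWN_DEFAULT_HASH = "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac."

def default_password_warnings(default_password_crypted):
    """Warn when the shipped default hash is still in use."""
    status = []
    if default_password_crypted == KNOWN_DEFAULT_HASH:
        status.append("default_password_crypted is still 'cobbler'; generate "
                      "a replacement with: "
                      "openssl passwd -1 -salt '<phrase>' '<password>'")
    return status
```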
1833 | 252 | |||
1834 | 253 | def check_for_unreferenced_repos(self,status): | ||
1835 | 254 | repos = [] | ||
1836 | 255 | referenced = [] | ||
1837 | 256 | not_found = [] | ||
1838 | 257 | for r in self.config.api.repos(): | ||
1839 | 258 | repos.append(r.name) | ||
1840 | 259 | for p in self.config.api.profiles(): | ||
1841 | 260 | my_repos = p.repos | ||
1842 | 261 | if my_repos != "<<inherit>>": | ||
1843 | 262 | referenced.extend(my_repos) | ||
1844 | 263 | for r in referenced: | ||
1845 | 264 | if r not in repos and r != "<<inherit>>": | ||
1846 | 265 | not_found.append(r) | ||
1847 | 266 | if len(not_found) > 0: | ||
1848 | 267 | status.append(_("One or more repos referenced by profile objects is no longer defined in cobbler: %s") % ", ".join(not_found)) | ||
1849 | 268 | |||
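`check_for_unreferenced_repos` reduces to a membership test of every profile-referenced repo name against the defined repos, with the `<<inherit>>` sentinel excluded on both sides. Distilled (function and sample names hypothetical):

```python
def unreferenced_repos(defined_repo_names, profile_repo_lists):
    """Repo names referenced by profiles but no longer defined."""
    referenced = []
    for repos in profile_repo_lists:
        if repos != "<<inherit>>":  # a profile may inherit its repo list
            referenced.extend(repos)
    return [r for r in referenced
            if r not in defined_repo_names and r != "<<inherit>>"]

stale = unreferenced_repos(["epel", "updates"],
                           [["epel", "extras"], "<<inherit>>"])
```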
1850 | 269 | def check_for_unsynced_repos(self,status): | ||
1851 | 270 | need_sync = [] | ||
1852 | 271 | for r in self.config.repos(): | ||
1853 | 272 | if r.mirror_locally == 1: | ||
1854 | 273 | lookfor = os.path.join(self.settings.webdir, "repo_mirror", r.name) | ||
1855 | 274 | if not os.path.exists(lookfor): | ||
1856 | 275 | need_sync.append(r.name) | ||
1857 | 276 | if len(need_sync) > 0: | ||
1858 | 277 | status.append(_("One or more repos need to be processed by cobbler reposync for the first time before kickstarting against them: %s") % ", ".join(need_sync)) | ||
1859 | 278 | |||
1860 | 279 | |||
1861 | 280 | def check_httpd(self,status): | ||
1862 | 281 | """ | ||
1863 | 282 | Check if Apache is installed. | ||
1864 | 283 | """ | ||
1865 | 284 | if self.checked_dist in [ "suse", "redhat" ]: | ||
1866 | 285 | rc = utils.subprocess_get(self.logger,"httpd -v") | ||
1867 | 286 | else: | ||
1868 | 287 | rc = utils.subprocess_get(self.logger,"apache2 -v") | ||
1869 | 288 | if rc.find("Server") == -1: | ||
1870 | 289 | status.append("Apache (httpd) is not installed and/or in path") | ||
1871 | 290 | |||
1872 | 291 | |||
1873 | 292 | def check_dhcpd_bin(self,status): | ||
1874 | 293 | """ | ||
1875 | 294 | Check if dhcpd is installed | ||
1876 | 295 | """ | ||
1877 | 296 | if not os.path.exists("/usr/sbin/dhcpd"): | ||
1878 | 297 | status.append("dhcpd is not installed") | ||
1879 | 298 | |||
1880 | 299 | def check_dnsmasq_bin(self,status): | ||
1881 | 300 | """ | ||
1882 | 301 | Check if dnsmasq is installed | ||
1883 | 302 | """ | ||
1884 | 303 | rc = utils.subprocess_get(self.logger,"dnsmasq --help") | ||
1885 | 304 | if rc.find("Valid options") == -1: | ||
1886 | 305 | status.append("dnsmasq is not installed and/or in path") | ||
1887 | 306 | |||
1888 | 307 | def check_bind_bin(self,status): | ||
1889 | 308 | """ | ||
1890 | 309 | Check if bind is installed. | ||
1891 | 310 | """ | ||
1892 | 311 | rc = utils.subprocess_get(self.logger,"named -v") | ||
1893 | 312 | # it should return something like "BIND 9.6.1-P1-RedHat-9.6.1-6.P1.fc11" | ||
1894 | 313 | if rc.find("BIND") == -1: | ||
1895 | 314 | status.append("named is not installed and/or in path") | ||
1896 | 315 | |||
1897 | 316 | def check_bootloaders(self,status): | ||
1898 | 317 | """ | ||
1899 | 318 | Check if network bootloaders are installed | ||
1900 | 319 | """ | ||
1901 | 320 | # FIXME: move zpxe.rexx to loaders | ||
1902 | 321 | |||
1903 | 322 | bootloaders = { | ||
1904 | 323 | "elilo" : [ "/var/lib/cobbler/loaders/elilo*.efi" ], | ||
1905 | 324 | "menu.c32" : [ "/usr/share/syslinux/menu.c32", | ||
1906 | 325 | "/usr/lib/syslinux/menu.c32", | ||
1907 | 326 | "/var/lib/cobbler/loaders/menu.c32" ], | ||
1908 | 327 | "yaboot" : [ "/var/lib/cobbler/loaders/yaboot*" ], | ||
1909 | 328 | "pxelinux.0" : [ "/usr/share/syslinux/pxelinux.0", | ||
1910 | 329 | "/usr/lib/syslinux/pxelinux.0", | ||
1911 | 330 | "/var/lib/cobbler/loaders/pxelinux.0" ], | ||
1912 | 331 | "efi" : [ "/var/lib/cobbler/loaders/grub-x86.efi", | ||
1913 | 332 | "/var/lib/cobbler/loaders/grub-x86_64.efi" ], | ||
1914 | 333 | } | ||
1915 | 334 | |||
1916 | 335 | # look for bootloaders at the glob locations above | ||
1917 | 336 | found_bootloaders = [] | ||
1918 | 337 | items = bootloaders.keys() | ||
1919 | 338 | for loader_name in items: | ||
1920 | 339 | patterns = bootloaders[loader_name] | ||
1921 | 340 | for pattern in patterns: | ||
1922 | 341 | matches = glob.glob(pattern) | ||
1923 | 342 | if len(matches) > 0: | ||
1924 | 343 | found_bootloaders.append(loader_name) | ||
1925 | 344 | not_found = [] | ||
1926 | 345 | |||
1927 | 346 | # invert the list of what we've found so we can report on what we haven't found | ||
1928 | 347 | for loader_name in items: | ||
1929 | 348 | if loader_name not in found_bootloaders: | ||
1930 | 349 | not_found.append(loader_name) | ||
1931 | 350 | |||
1932 | 351 | if len(not_found) > 0: | ||
1933 | 352 | status.append("some network boot-loaders are missing from /var/lib/cobbler/loaders, you may run 'cobbler get-loaders' to download them, or, if you only want to handle x86/x86_64 netbooting, you may ensure that you have installed a *recent* version of the syslinux package installed and can ignore this message entirely. Files in this directory, should you want to support all architectures, should include pxelinux.0, menu.c32, elilo.efi, and yaboot. The 'cobbler get-loaders' command is the easiest way to resolve these requirements.") | ||
1934 | 353 | |||
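`check_bootloaders` walks a name-to-glob-patterns map and reports every loader for which no pattern matches a file. The core of that, as a small standalone function over a caller-supplied pattern map:

```python
import glob

def missing_bootloaders(bootloaders):
    """Loader names whose glob patterns all fail to match any file."""
    found = set()
    for name, patterns in bootloaders.items():
        if any(glob.glob(pattern) for pattern in patterns):
            found.add(name)
    return sorted(set(bootloaders) - found)
```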
1935 | 354 | def check_tftpd_bin(self,status): | ||
1936 | 355 | """ | ||
1937 | 356 | Check if tftpd is installed | ||
1938 | 357 | """ | ||
1939 | 358 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1940 | 359 | return | ||
1941 | 360 | |||
1942 | 361 | if not os.path.exists("/etc/xinetd.d/tftp"): | ||
1943 | 362 | status.append("missing /etc/xinetd.d/tftp, install tftp-server?") | ||
1944 | 363 | |||
1945 | 364 | def check_tftpd_dir(self,status): | ||
1946 | 365 | """ | ||
1947 | 366 | Check if cobbler.conf's tftpboot directory exists | ||
1948 | 367 | """ | ||
1949 | 368 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1950 | 369 | return | ||
1951 | 370 | |||
1952 | 371 | bootloc = utils.tftpboot_location() | ||
1953 | 372 | if not os.path.exists(bootloc): | ||
1954 | 373 | status.append(_("please create directory: %(dirname)s") % { "dirname" : bootloc }) | ||
1955 | 374 | |||
1956 | 375 | |||
1957 | 376 | def check_tftpd_conf(self,status): | ||
1958 | 377 | """ | ||
1959 | 378 | Check that configured tftpd boot directory matches with actual | ||
1960 | 379 | Check that tftpd is enabled to autostart | ||
1961 | 380 | """ | ||
1962 | 381 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1963 | 382 | return | ||
1964 | 383 | |||
1965 | 384 | if os.path.exists("/etc/xinetd.d/tftp"): | ||
1966 | 385 | f = open("/etc/xinetd.d/tftp") | ||
1967 | 386 | re_disable = re.compile(r'disable.*=.*yes') | ||
1968 | 387 | for line in f.readlines(): | ||
1969 | 388 | if re_disable.search(line) and not line.strip().startswith("#"): | ||
1970 | 389 | status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/tftp" }) | ||
1971 | 390 | else: | ||
1972 | 391 | status.append("missing configuration file: /etc/xinetd.d/tftp") | ||
1973 | 392 | |||
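The xinetd checks (tftp, ctftp, rsync) all look for an uncommented `disable ... = ... yes` (or `... no`) line with the same loose regex. That test in isolation:

```python
import re

def xinetd_flag_set(config_text, value="yes"):
    """True if an uncommented 'disable ... = ... <value>' line exists."""
    re_disable = re.compile(r'disable.*=.*%s' % value)
    for line in config_text.split("\n"):
        if re_disable.search(line) and not line.strip().startswith("#"):
            return True
    return False
```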
1974 | 393 | def check_ctftpd_bin(self,status): | ||
1975 | 394 | """ | ||
1976 | 395 | Check if the Cobbler tftp server is installed | ||
1977 | 396 | """ | ||
1978 | 397 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1979 | 398 | return | ||
1980 | 399 | |||
1981 | 400 | if not os.path.exists("/etc/xinetd.d/ctftp"): | ||
1982 | 401 | status.append("missing /etc/xinetd.d/ctftp") | ||
1983 | 402 | |||
1984 | 403 | def check_ctftpd_dir(self,status): | ||
1985 | 404 | """ | ||
1986 | 405 | Check if cobbler.conf's tftpboot directory exists | ||
1987 | 406 | """ | ||
1988 | 407 | if self.checked_dist in ["debian", "ubuntu"]: | ||
1989 | 408 | return | ||
1990 | 409 | |||
1991 | 410 | bootloc = utils.tftpboot_location() | ||
1992 | 411 | if not os.path.exists(bootloc): | ||
1993 | 412 | status.append(_("please create directory: %(dirname)s") % { "dirname" : bootloc }) | ||
1994 | 413 | |||
1995 | 414 | def check_ctftpd_conf(self,status): | ||
1996 | 415 | """ | ||
1997 | 416 | Check that configured tftpd boot directory matches with actual | ||
1998 | 417 | Check that tftpd is enabled to autostart | ||
1999 | 418 | """ | ||
2000 | 419 | if self.checked_dist in ["debian", "ubuntu"]: | ||
2001 | 420 | return | ||
2002 | 421 | |||
2003 | 422 | if os.path.exists("/etc/xinetd.d/tftp"): | ||
2004 | 423 | f = open("/etc/xinetd.d/tftp") | ||
2005 | 424 | re_disable = re.compile(r'disable.*=.*no') | ||
2006 | 425 | for line in f.readlines(): | ||
2007 | 426 | if re_disable.search(line) and not line.strip().startswith("#"): | ||
2008 | 427 | status.append(_("change 'disable' to 'yes' in %(file)s") % { "file" : "/etc/xinetd.d/tftp" }) | ||
2009 | 428 | if os.path.exists("/etc/xinetd.d/ctftp"): | ||
2010 | 429 | f = open("/etc/xinetd.d/ctftp") | ||
2011 | 430 | re_disable = re.compile(r'disable.*=.*yes') | ||
2012 | 431 | for line in f.readlines(): | ||
2013 | 432 | if re_disable.search(line) and not line.strip().startswith("#"): | ||
2014 | 433 | status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/ctftp" }) | ||
2015 | 434 | else: | ||
2016 | 435 | status.append("missing configuration file: /etc/xinetd.d/ctftp") | ||
2017 | 436 | |||
2018 | 437 | def check_rsync_conf(self,status): | ||
2019 | 438 | """ | ||
2020 | 439 | Check that rsync is enabled to autostart | ||
2021 | 440 | """ | ||
2022 | 441 | if self.checked_dist in ["debian", "ubuntu"]: | ||
2023 | 442 | return | ||
2024 | 443 | |||
2025 | 444 | if os.path.exists("/etc/xinetd.d/rsync"): | ||
2026 | 445 | f = open("/etc/xinetd.d/rsync") | ||
2027 | 446 | re_disable = re.compile(r'disable.*=.*yes') | ||
2028 | 447 | for line in f.readlines(): | ||
2029 | 448 | if re_disable.search(line) and not line.strip().startswith("#"): | ||
2030 | 449 | status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/rsync" }) | ||
2031 | 450 | else: | ||
2032 | 451 | status.append(_("file %(file)s does not exist") % { "file" : "/etc/xinetd.d/rsync" }) | ||
2033 | 452 | |||
2034 | 453 | |||
2035 | 454 | def check_dhcpd_conf(self,status): | ||
2036 | 455 | """ | ||
2037 | 456 | NOTE: this code only applies if cobbler is *NOT* set to generate | ||
2038 | 457 | a dhcp.conf file | ||
2039 | 458 | |||
2040 | 459 | Check that dhcpd *appears* to be configured for pxe booting. | ||
2041 | 460 | We can't assure file correctness. Since a cobbler user might | ||
2042 | 461 | have dhcp on another server, it's okay if it's not there and/or | ||
2043 | 462 | not configured correctly according to automated scans. | ||
2044 | 463 | """ | ||
2045 | 464 | if not (self.settings.manage_dhcp == 0): | ||
2046 | 465 | return | ||
2047 | 466 | |||
2048 | 467 | if os.path.exists(self.settings.dhcpd_conf): | ||
2049 | 468 | match_next = False | ||
2050 | 469 | match_file = False | ||
2051 | 470 | f = open(self.settings.dhcpd_conf) | ||
2052 | 471 | for line in f.readlines(): | ||
2053 | 472 | if line.find("next-server") != -1: | ||
2054 | 473 | match_next = True | ||
2055 | 474 | if line.find("filename") != -1: | ||
2056 | 475 | match_file = True | ||
2057 | 476 | if not match_next: | ||
2058 | 477 | status.append(_("expecting next-server entry in %(file)s") % { "file" : self.settings.dhcpd_conf }) | ||
2059 | 478 | if not match_file: | ||
2060 | 479 | status.append(_("missing file: %(file)s") % { "file" : self.settings.dhcpd_conf }) | ||
2061 | 480 | else: | ||
2062 | 481 | status.append(_("missing file: %(file)s") % { "file" : self.settings.dhcpd_conf }) | ||
2063 | 482 | |||
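`check_dhcpd_conf` only greps for `next-server` and `filename` anywhere in the file. Note that the second warning in the source reuses the "missing file" wording where a "missing filename entry" message appears intended; the sketch below (names hypothetical) uses the clearer wording:

```python
def dhcpd_pxe_warnings(conf_text, conf_path="/etc/dhcpd.conf"):
    """Warn when a dhcpd.conf lacks the PXE-related directives."""
    lines = conf_text.split("\n")
    warnings = []
    if not any("next-server" in line for line in lines):
        warnings.append("expecting next-server entry in %s" % conf_path)
    if not any("filename" in line for line in lines):
        warnings.append("expecting filename entry in %s" % conf_path)
    return warnings
```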
2065 | === removed directory '.pc/40_ubuntu_bind9_management.patch/cobbler/modules' | |||
2066 | === removed file '.pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py' | |||
2067 | --- .pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py 2011-04-18 11:15:59 +0000 | |||
2068 | +++ .pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py 1970-01-01 00:00:00 +0000 | |||
2069 | @@ -1,332 +0,0 @@ | |||
2070 | 1 | """ | ||
2071 | 2 | This is some of the code behind 'cobbler sync'. | ||
2072 | 3 | |||
2073 | 4 | Copyright 2006-2009, Red Hat, Inc | ||
2074 | 5 | Michael DeHaan <mdehaan@redhat.com> | ||
2075 | 6 | John Eckersberg <jeckersb@redhat.com> | ||
2076 | 7 | |||
2077 | 8 | This program is free software; you can redistribute it and/or modify | ||
2078 | 9 | it under the terms of the GNU General Public License as published by | ||
2079 | 10 | the Free Software Foundation; either version 2 of the License, or | ||
2080 | 11 | (at your option) any later version. | ||
2081 | 12 | |||
2082 | 13 | This program is distributed in the hope that it will be useful, | ||
2083 | 14 | but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
2084 | 15 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
2085 | 16 | GNU General Public License for more details. | ||
2086 | 17 | |||
2087 | 18 | You should have received a copy of the GNU General Public License | ||
2088 | 19 | along with this program; if not, write to the Free Software | ||
2089 | 20 | Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA | ||
2090 | 21 | 02110-1301 USA | ||
2091 | 22 | """ | ||
2092 | 23 | |||
2093 | 24 | import os | ||
2094 | 25 | import os.path | ||
2095 | 26 | import shutil | ||
2096 | 27 | import time | ||
2097 | 28 | import sys | ||
2098 | 29 | import glob | ||
2099 | 30 | import traceback | ||
2100 | 31 | import errno | ||
2101 | 32 | import re | ||
2102 | 33 | from shlex import shlex | ||
2103 | 34 | |||
2104 | 35 | |||
2105 | 36 | import utils | ||
2106 | 37 | from cexceptions import * | ||
2107 | 38 | import templar | ||
2108 | 39 | |||
2109 | 40 | import item_distro | ||
2110 | 41 | import item_profile | ||
2111 | 42 | import item_repo | ||
2112 | 43 | import item_system | ||
2113 | 44 | |||
2114 | 45 | from utils import _ | ||
2115 | 46 | |||
2116 | 47 | |||
2117 | 48 | def register(): | ||
2118 | 49 | """ | ||
2119 | 50 | The mandatory cobbler module registration hook. | ||
2120 | 51 | """ | ||
2121 | 52 | return "manage" | ||
2122 | 53 | |||
2123 | 54 | |||
2124 | 55 | class BindManager: | ||
2125 | 56 | |||
2126 | 57 | def what(self): | ||
2127 | 58 | return "bind" | ||
2128 | 59 | |||
2129 | 60 | def __init__(self,config,logger): | ||
2130 | 61 | """ | ||
2131 | 62 | Constructor | ||
2132 | 63 | """ | ||
2133 | 64 | self.logger = logger | ||
2134 | 65 | self.config = config | ||
2135 | 66 | self.api = config.api | ||
2136 | 67 | self.distros = config.distros() | ||
2137 | 68 | self.profiles = config.profiles() | ||
2138 | 69 | self.systems = config.systems() | ||
2139 | 70 | self.settings = config.settings() | ||
2140 | 71 | self.repos = config.repos() | ||
2141 | 72 | self.templar = templar.Templar(config) | ||
2142 | 73 | |||
2143 | 74 | def regen_hosts(self): | ||
2144 | 75 | pass # not used | ||
2145 | 76 | |||
2146 | 77 | def __forward_zones(self): | ||
2147 | 78 | """ | ||
2148 | 79 | Returns a map of zones and the records that belong | ||
2149 | 80 | in them | ||
2150 | 81 | """ | ||
2151 | 82 | zones = {} | ||
2152 | 83 | forward_zones = self.settings.manage_forward_zones | ||
2153 | 84 | if type(forward_zones) != type([]): | ||
2154 | 85 | # gracefully handle when user inputs only a single zone | ||
2155 | 86 | # as a string instead of a list with only a single item | ||
2156 | 87 | forward_zones = [forward_zones] | ||
2157 | 88 | |||
2158 | 89 | for zone in forward_zones: | ||
2159 | 90 | zones[zone] = {} | ||
2160 | 91 | |||
2161 | 92 | for system in self.systems: | ||
2162 | 93 | for (name, interface) in system.interfaces.iteritems(): | ||
2163 | 94 | host = interface["dns_name"] | ||
2164 | 95 | ip = interface["ip_address"] | ||
2165 | 96 | if not system.is_management_supported(cidr_ok=False): | ||
2166 | 97 | continue | ||
2167 | 98 | if not host or not ip: | ||
2168 | 99 | # gotsta have some dns_name and ip or else! | ||
2169 | 100 | continue | ||
2170 | 101 | if host.find(".") == -1: | ||
2171 | 102 | continue | ||
2172 | 103 | |||
2173 | 104 | # match the longest zone! | ||
2174 | 105 | # e.g. if you have a host a.b.c.d.e | ||
2175 | 106 | # if manage_forward_zones has: | ||
2176 | 107 | # - c.d.e | ||
2177 | 108 | # - b.c.d.e | ||
2178 | 109 | # then a.b.c.d.e should go in b.c.d.e | ||
2179 | 110 | best_match = '' | ||
2180 | 111 | for zone in zones.keys(): | ||
2181 | 112 | if re.search('\.%s$' % zone, host) and len(zone) > len(best_match): | ||
2182 | 113 | best_match = zone | ||
2183 | 114 | |||
2184 | 115 | if best_match == '': # no match | ||
2185 | 116 | continue | ||
2186 | 117 | |||
2187 | 118 | # strip the zone off the dns_name and append the | ||
2188 | 119 | # remainder + ip to the zone list | ||
2189 | 120 | host = re.sub('\.%s$' % best_match, '', host) | ||
2190 | 121 | |||
2191 | 122 | zones[best_match][host] = ip | ||
2192 | 123 | |||
2193 | 124 | return zones | ||
2194 | 125 | |||
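The "match the longest zone" comment above is the heart of `__forward_zones`: among all managed zones that are a dot-suffix of the host's dns_name, the longest wins. Isolated below (with `re.escape` added as a hardening the original pattern omits):

```python
import re

def best_forward_zone(host, zones):
    """Longest zone that is a dot-suffix of host, or '' when none match."""
    best_match = ''
    for zone in zones:
        if re.search(r'\.%s$' % re.escape(zone), host) and len(zone) > len(best_match):
            best_match = zone
    return best_match
```

For example, host `a.b.c.d.e` with managed zones `c.d.e` and `b.c.d.e` lands in `b.c.d.e`, matching the worked example in the comments above.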
2195 | 126 | def __reverse_zones(self): | ||
2196 | 127 | """ | ||
2197 | 128 | Returns a map of zones and the records that belong | ||
2198 | 129 | in them | ||
2199 | 130 | """ | ||
2200 | 131 | zones = {} | ||
2201 | 132 | reverse_zones = self.settings.manage_reverse_zones | ||
2202 | 133 | if type(reverse_zones) != type([]): | ||
2203 | 134 | # gracefully handle when user inputs only a single zone | ||
2204 | 135 | # as a string instead of a list with only a single item | ||
2205 | 136 | reverse_zones = [reverse_zones] | ||
2206 | 137 | |||
2207 | 138 | for zone in reverse_zones: | ||
2208 | 139 | zones[zone] = {} | ||
2209 | 140 | |||
2210 | 141 | for sys in self.systems: | ||
2211 | 142 | for (name, interface) in sys.interfaces.iteritems(): | ||
2212 | 143 | host = interface["dns_name"] | ||
2213 | 144 | ip = interface["ip_address"] | ||
2214 | 145 | if not sys.is_management_supported(cidr_ok=False): | ||
2215 | 146 | continue | ||
2216 | 147 | if not host or not ip: | ||
2217 | 148 | # gotsta have some dns_name and ip or else! | ||
2218 | 149 | continue | ||
2219 | 150 | |||
2220 | 151 | # match the longest zone! | ||
2221 | 152 | # e.g. if you have an ip 1.2.3.4 | ||
2222 | 153 | # if manage_reverse_zones has: | ||
2223 | 154 | # - 1.2 | ||
2224 | 155 | # - 1.2.3 | ||
2225 | 156 | # then 1.2.3.4 should go in 1.2.3 | ||
2226 | 157 | best_match = '' | ||
2227 | 158 | for zone in zones.keys(): | ||
2228 | 159 | if re.search('^%s\.' % zone, ip) and len(zone) > len(best_match): | ||
2229 | 160 | best_match = zone | ||
2230 | 161 | |||
2231 | 162 | if best_match == '': # no match | ||
2232 | 163 | continue | ||
2233 | 164 | |||
2234 | 165 | # strip the zone off the front of the ip | ||
2235 | 166 | # reverse the rest of the octets | ||
2236 | 167 | # append the remainder + dns_name | ||
2237 | 168 | ip = ip.replace(best_match, '', 1) | ||
2238 | 169 | if ip[0] == '.': # strip leading '.' if it's there | ||
2239 | 170 | ip = ip[1:] | ||
2240 | 171 | tokens = ip.split('.') | ||
2241 | 172 | tokens.reverse() | ||
2242 | 173 | ip = '.'.join(tokens) | ||
2243 | 174 | zones[best_match][ip] = host + '.' | ||
2244 | 175 | |||
2245 | 176 | return zones | ||
2246 | 177 | |||
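For reverse zones, the remainder of the address after the zone prefix is stripped is stored with its octets reversed (PTR-record order). That transformation on its own (function name hypothetical):

```python
def ptr_label(ip, zone):
    """Strip the reverse-zone prefix from ip, reverse remaining octets."""
    rest = ip.replace(zone, '', 1)
    if rest.startswith('.'):  # drop leading '.' left by the strip
        rest = rest[1:]
    tokens = rest.split('.')
    tokens.reverse()
    return '.'.join(tokens)
```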
2247 | 178 | |||
2248 | 179 | def __write_named_conf(self): | ||
2249 | 180 | """ | ||
2250 | 181 | Write out the named.conf main config file from the template. | ||
2251 | 182 | """ | ||
2252 | 183 | settings_file = "/etc/named.conf" | ||
2253 | 184 | template_file = "/etc/cobbler/named.template" | ||
2254 | 185 | forward_zones = self.settings.manage_forward_zones | ||
2255 | 186 | reverse_zones = self.settings.manage_reverse_zones | ||
2256 | 187 | |||
2257 | 188 | metadata = {'forward_zones': self.__forward_zones().keys(), | ||
2258 | 189 | 'reverse_zones': [], | ||
2259 | 190 | 'zone_include': ''} | ||
2260 | 191 | |||
2261 | 192 | for zone in metadata['forward_zones']: | ||
2262 | 193 | txt = """ | ||
2263 | 194 | zone "%(zone)s." { | ||
2264 | 195 | type master; | ||
2265 | 196 | file "%(zone)s"; | ||
2266 | 197 | }; | ||
2267 | 198 | """ % {'zone': zone} | ||
2268 | 199 | metadata['zone_include'] = metadata['zone_include'] + txt | ||
2269 | 200 | |||
2270 | 201 | for zone in self.__reverse_zones().keys(): | ||
2271 | 202 | tokens = zone.split('.') | ||
2272 | 203 | tokens.reverse() | ||
2273 | 204 | arpa = '.'.join(tokens) + '.in-addr.arpa' | ||
2274 | 205 | metadata['reverse_zones'].append((zone, arpa)) | ||
2275 | 206 | txt = """ | ||
2276 | 207 | zone "%(arpa)s." { | ||
2277 | 208 | type master; | ||
2278 | 209 | file "%(zone)s"; | ||
2279 | 210 | }; | ||
2280 | 211 | """ % {'arpa': arpa, 'zone': zone} | ||
2281 | 212 | metadata['zone_include'] = metadata['zone_include'] + txt | ||
2282 | 213 | |||
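`__write_named_conf` derives each reverse zone's `in-addr.arpa` name by reversing the network-prefix octets, exactly as in the loop above:

```python
def arpa_name(zone):
    """Reverse-zone network prefix -> its in-addr.arpa zone name."""
    tokens = zone.split('.')
    tokens.reverse()
    return '.'.join(tokens) + '.in-addr.arpa'
```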
2283 | 214 | try: | ||
2284 | 215 | f2 = open(template_file,"r") | ||
2285 | 216 | except: | ||
2286 | 217 | raise CX(_("error reading template from file: %s") % template_file) | ||
2287 | 218 | template_data = "" | ||
2288 | 219 | template_data = f2.read() | ||
2289 | 220 | f2.close() | ||
2290 | 221 | |||
2291 | 222 | if self.logger is not None: | ||
2292 | 223 | self.logger.info("generating %s" % settings_file) | ||
2293 | 224 | self.templar.render(template_data, metadata, settings_file, None) | ||
2294 | 225 | |||
2295 | 226 | def __ip_sort(self, ips): | ||
2296 | 227 | """ | ||
2297 | 228 | Sorts IP addresses (or partial addresses) in a numerical fashion per-octet | ||
2298 | 229 | """ | ||
2299 | 230 | # strings to integer octet chunks so we can sort numerically | ||
2300 | 231 | octets = map(lambda x: [int(i) for i in x.split('.')], ips) | ||
2301 | 232 | octets.sort() | ||
2302 | 233 | # integers back to strings | ||
2303 | 234 | octets = map(lambda x: [str(i) for i in x], octets) | ||
2304 | 235 | return ['.'.join(i) for i in octets] | ||
2305 | 236 | |||
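`__ip_sort` as written relies on Python 2 semantics (`map` returning a list that can be sorted in place). A Python 3-safe equivalent of the same per-octet numeric sort:

```python
def ip_sort(ips):
    """Sort dotted (possibly partial) addresses numerically per octet."""
    return sorted(ips, key=lambda ip: [int(octet) for octet in ip.split('.')])
```

A plain string sort would put `10.0.0.10` before `2.0.0.1`; the numeric key avoids that.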
2306 | 237 | def __pretty_print_host_records(self, hosts, rectype='A', rclass='IN'): | ||
2307 | 238 | """ | ||
2308 | 239 | Format host records by order and with consistent indentation | ||
2309 | 240 | """ | ||
2310 | 241 | names = [k for k,v in hosts.iteritems()] | ||
2311 | 242 | if not names: return '' # zones with no hosts | ||
2312 | 243 | |||
2313 | 244 | if rectype == 'PTR': | ||
2314 | 245 | names = self.__ip_sort(names) | ||
2315 | 246 | else: | ||
2316 | 247 | names.sort() | ||
2317 | 248 | |||
2318 | 249 | max_name = max([len(i) for i in names]) | ||
2319 | 250 | |||
2320 | 251 | s = "" | ||
2321 | 252 | for name in names: | ||
2322 | 253 | spacing = " " * (max_name - len(name)) | ||
2323 | 254 | my_name = "%s%s" % (name, spacing) | ||
2324 | 255 | my_host = hosts[name] | ||
2325 | 256 | s += "%s %s %s %s\n" % (my_name, rclass, rectype, my_host) | ||
2326 | 257 | return s | ||
2327 | 258 | |||
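`__pretty_print_host_records` pads every name to the longest one so the class/type/target columns line up. A compact Python 3 rendering of the same idea (PTR-specific sorting omitted for brevity):

```python
def format_records(hosts, rectype='A', rclass='IN'):
    """Render host records with names padded to a common width."""
    names = sorted(hosts)
    if not names:
        return ''  # zones with no hosts render as empty
    width = max(len(name) for name in names)
    return ''.join("%-*s %s %s %s\n" % (width, name, rclass, rectype, hosts[name])
                   for name in names)
```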
2328 | 259 | def __write_zone_files(self): | ||
2329 | 260 | """ | ||
2330 | 261 | Write out the forward and reverse zone files for all configured zones | ||
2331 | 262 | """ | ||
2332 | 263 | default_template_file = "/etc/cobbler/zone.template" | ||
2333 | 264 | cobbler_server = self.settings.server | ||
2334 | 265 | serial = int(time.time()) | ||
2335 | 266 | forward = self.__forward_zones() | ||
2336 | 267 | reverse = self.__reverse_zones() | ||
2337 | 268 | |||
2338 | 269 | try: | ||
2339 | 270 | f2 = open(default_template_file,"r") | ||
2340 | 271 | except: | ||
2341 | 272 | raise CX(_("error reading template from file: %s") % default_template_file) | ||
2342 | 273 | default_template_data = "" | ||
2343 | 274 | default_template_data = f2.read() | ||
2344 | 275 | f2.close() | ||
2345 | 276 | |||
2346 | 277 | for (zone, hosts) in forward.iteritems(): | ||
2347 | 278 | metadata = { | ||
2348 | 279 | 'cobbler_server': cobbler_server, | ||
2349 | 280 | 'serial': serial, | ||
2350 | 281 | 'host_record': '' | ||
2351 | 282 | } | ||
2352 | 283 | |||
2353 | 284 | # grab zone-specific template if it exists | ||
2354 | 285 | try: | ||
2355 | 286 | fd = open('/etc/cobbler/zone_templates/%s' % zone) | ||
2356 | 287 | template_data = fd.read() | ||
2357 | 288 | fd.close() | ||
2358 | 289 | except: | ||
2359 | 290 | template_data = default_template_data | ||
2360 | 291 | |||
2361 | 292 | metadata['host_record'] = self.__pretty_print_host_records(hosts) | ||
2362 | 293 | |||
2363 | 294 | zonefilename='/var/named/' + zone | ||
2364 | 295 | if self.logger is not None: | ||
2365 | 296 | self.logger.info("generating (forward) %s" % zonefilename) | ||
2366 | 297 | self.templar.render(template_data, metadata, zonefilename, None) | ||
2367 | 298 | |||
2368 | 299 | for (zone, hosts) in reverse.iteritems(): | ||
2369 | 300 | metadata = { | ||
2370 | 301 | 'cobbler_server': cobbler_server, | ||
2371 | 302 | 'serial': serial, | ||
2372 | 303 | 'host_record': '' | ||
2373 | 304 | } | ||
2374 | 305 | |||
2375 | 306 | # grab zone-specific template if it exists | ||
2376 | 307 | try: | ||
2377 | 308 | fd = open('/etc/cobbler/zone_templates/%s' % zone) | ||
2378 | 309 | template_data = fd.read() | ||
2379 | 310 | fd.close() | ||
2380 | 311 | except: | ||
2381 | 312 | template_data = default_template_data | ||
2382 | 313 | |||
2383 | 314 | metadata['host_record'] = self.__pretty_print_host_records(hosts, rectype='PTR') | ||
2384 | 315 | |||
2385 | 316 | zonefilename='/var/named/' + zone | ||
2386 | 317 | if self.logger is not None: | ||
2387 | 318 | self.logger.info("generating (reverse) %s" % zonefilename) | ||
2388 | 319 | self.templar.render(template_data, metadata, zonefilename, None) | ||
2389 | 320 | |||
2390 | 321 | |||
2391 | 322 | def write_dns_files(self): | ||
2392 | 323 | """ | ||
2393 | 324 | BIND files are written when manage_dns is set in | ||
2394 | 325 | /var/lib/cobbler/settings. | ||
2395 | 326 | """ | ||
2396 | 327 | |||
2397 | 328 | self.__write_named_conf() | ||
2398 | 329 | self.__write_zone_files() | ||
2399 | 330 | |||
2400 | 331 | def get_manager(config,logger): | ||
2401 | 332 | return BindManager(config,logger) | ||
2403 | === removed file '.pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py' | |||
2404 | --- .pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py 2011-04-18 11:15:59 +0000 | |||
2405 | +++ .pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py 1970-01-01 00:00:00 +0000 | |||
2406 | @@ -1,66 +0,0 @@ | |||
2407 | 1 | import distutils.sysconfig | ||
2408 | 2 | import sys | ||
2409 | 3 | import os | ||
2410 | 4 | import traceback | ||
2411 | 5 | import cexceptions | ||
2412 | 6 | import os | ||
2413 | 7 | import sys | ||
2414 | 8 | import xmlrpclib | ||
2415 | 9 | import cobbler.module_loader as module_loader | ||
2416 | 10 | import cobbler.utils as utils | ||
2417 | 11 | |||
2418 | 12 | plib = distutils.sysconfig.get_python_lib() | ||
2419 | 13 | mod_path="%s/cobbler" % plib | ||
2420 | 14 | sys.path.insert(0, mod_path) | ||
2421 | 15 | |||
2422 | 16 | def register(): | ||
2423 | 17 | # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. | ||
2424 | 18 | # the return of this method indicates the trigger type | ||
2425 | 19 | return "/var/lib/cobbler/triggers/sync/post/*" | ||
2426 | 20 | |||
2427 | 21 | def run(api,args,logger): | ||
2428 | 22 | |||
2429 | 23 | settings = api.settings() | ||
2430 | 24 | |||
2431 | 25 | manage_dhcp = str(settings.manage_dhcp).lower() | ||
2432 | 26 | manage_dns = str(settings.manage_dns).lower() | ||
2433 | 27 | manage_tftpd = str(settings.manage_tftpd).lower() | ||
2434 | 28 | restart_dhcp = str(settings.restart_dhcp).lower() | ||
2435 | 29 | restart_dns = str(settings.restart_dns).lower() | ||
2436 | 30 | |||
2437 | 31 | which_dhcp_module = module_loader.get_module_from_file("dhcp","module",just_name=True).strip() | ||
2438 | 32 | which_dns_module = module_loader.get_module_from_file("dns","module",just_name=True).strip() | ||
2439 | 33 | |||
2440 | 34 | # special handling as we don't want to restart it twice | ||
2441 | 35 | has_restarted_dnsmasq = False | ||
2442 | 36 | |||
2443 | 37 | rc = 0 | ||
2444 | 38 | if manage_dhcp != "0": | ||
2445 | 39 | if which_dhcp_module == "manage_isc": | ||
2446 | 40 | if restart_dhcp != "0": | ||
2447 | 41 | rc = utils.subprocess_call(logger, "dhcpd -t -q", shell=True) | ||
2448 | 42 | if rc != 0: | ||
2449 | 43 | logger.error("dhcpd -t failed") | ||
2450 | 44 | return 1 | ||
2451 | 45 | rc = utils.subprocess_call(logger,"service isc-dhcp-server restart", shell=True) | ||
2452 | 46 | elif which_dhcp_module == "manage_dnsmasq": | ||
2453 | 47 | if restart_dhcp != "0": | ||
2454 | 48 | rc = utils.subprocess_call(logger, "service dnsmasq restart") | ||
2455 | 49 | has_restarted_dnsmasq = True | ||
2456 | 50 | else: | ||
2457 | 51 | logger.error("unknown DHCP engine: %s" % which_dhcp_module) | ||
2458 | 52 | rc = 411 | ||
2459 | 53 | |||
2460 | 54 | if manage_dns != "0" and restart_dns != "0": | ||
2461 | 55 | if which_dns_module == "manage_bind": | ||
2462 | 56 | rc = utils.subprocess_call(logger, "service named restart", shell=True) | ||
2463 | 57 | elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq: | ||
2464 | 58 | rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True) | ||
2465 | 59 | elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq: | ||
2466 | 60 | rc = 0 | ||
2467 | 61 | else: | ||
2468 | 62 | logger.error("unknown DNS engine: %s" % which_dns_module) | ||
2469 | 63 | rc = 412 | ||
2470 | 64 | |||
2471 | 65 | return rc | ||
2472 | 66 | |||
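The trigger's control flow above amounts to: restart the configured DHCP engine, then the DNS engine, but never restart dnsmasq twice when it serves both roles. That dispatch as a side-effect-free sketch (names hypothetical; the `dhcpd -t` validation step is omitted):

```python
def restart_commands(settings, dhcp_module, dns_module):
    """Service restart commands the sync/post trigger would issue."""
    cmds = []
    restarted_dnsmasq = False
    if settings.get("manage_dhcp") != "0" and settings.get("restart_dhcp") != "0":
        if dhcp_module == "manage_isc":
            cmds.append("service isc-dhcp-server restart")
        elif dhcp_module == "manage_dnsmasq":
            cmds.append("service dnsmasq restart")
            restarted_dnsmasq = True
    if settings.get("manage_dns") != "0" and settings.get("restart_dns") != "0":
        if dns_module == "manage_bind":
            cmds.append("service named restart")
        elif dns_module == "manage_dnsmasq" and not restarted_dnsmasq:
            cmds.append("service dnsmasq restart")  # only once for both roles
    return cmds

SETTINGS = {"manage_dhcp": "1", "restart_dhcp": "1",
            "manage_dns": "1", "restart_dns": "1"}
```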
2474 | === removed directory '.pc/40_ubuntu_bind9_management.patch/templates' | |||
2475 | === removed directory '.pc/40_ubuntu_bind9_management.patch/templates/etc' | |||
2476 | === removed file '.pc/40_ubuntu_bind9_management.patch/templates/etc/named.template' | |||
2477 | --- .pc/40_ubuntu_bind9_management.patch/templates/etc/named.template 2011-04-18 11:15:59 +0000 | |||
2478 | +++ .pc/40_ubuntu_bind9_management.patch/templates/etc/named.template 1970-01-01 00:00:00 +0000 | |||
2479 | @@ -1,31 +0,0 @@ | |||
2480 | 1 | options { | ||
2481 | 2 | listen-on port 53 { 127.0.0.1; }; | ||
2482 | 3 | directory "/var/named"; | ||
2483 | 4 | dump-file "/var/named/data/cache_dump.db"; | ||
2484 | 5 | statistics-file "/var/named/data/named_stats.txt"; | ||
2485 | 6 | memstatistics-file "/var/named/data/named_mem_stats.txt"; | ||
2486 | 7 | allow-query { localhost; }; | ||
2487 | 8 | recursion yes; | ||
2488 | 9 | }; | ||
2489 | 10 | |||
2490 | 11 | logging { | ||
2491 | 12 | channel default_debug { | ||
2492 | 13 | file "data/named.run"; | ||
2493 | 14 | severity dynamic; | ||
2494 | 15 | }; | ||
2495 | 16 | }; | ||
2496 | 17 | |||
2497 | 18 | #for $zone in $forward_zones | ||
2498 | 19 | zone "${zone}." { | ||
2499 | 20 | type master; | ||
2500 | 21 | file "$zone"; | ||
2501 | 22 | }; | ||
2502 | 23 | |||
2503 | 24 | #end for | ||
2504 | 25 | #for $zone, $arpa in $reverse_zones | ||
2505 | 26 | zone "${arpa}." { | ||
2506 | 27 | type master; | ||
2507 | 28 | file "$zone"; | ||
2508 | 29 | }; | ||
2509 | 30 | |||
2510 | 31 | #end for | ||
=== removed directory '.pc/41_update_tree_path_with_arch.patch'
=== removed directory '.pc/41_update_tree_path_with_arch.patch/cobbler'
=== removed directory '.pc/41_update_tree_path_with_arch.patch/cobbler/modules'
=== removed file '.pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py'
--- .pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py	2011-05-02 18:26:03 +0000
+++ .pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py	1970-01-01 00:00:00 +0000
@@ -1,777 +0,0 @@
-"""
-This is some of the code behind 'cobbler sync'.
-
-Copyright 2006-2009, Red Hat, Inc
-Michael DeHaan <mdehaan@redhat.com>
-John Eckersberg <jeckersb@redhat.com>
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; either version 2 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
-02110-1301 USA
-"""
-
-import os
-import os.path
-import shutil
-import time
-import sys
-import glob
-import traceback
-import errno
-import re
-from utils import popen2
-from shlex import shlex
-
-
-import utils
-from cexceptions import *
-import templar
-
-import item_distro
-import item_profile
-import item_repo
-import item_system
-
-from utils import _
-
-def register():
-    """
-    The mandatory cobbler module registration hook.
-    """
-    return "manage/import"
-
-
-class ImportDebianUbuntuManager:
-
-    def __init__(self,config,logger):
-        """
-        Constructor
-        """
-        self.logger = logger
-        self.config = config
-        self.api = config.api
-        self.distros = config.distros()
-        self.profiles = config.profiles()
-        self.systems = config.systems()
-        self.settings = config.settings()
-        self.repos = config.repos()
-        self.templar = templar.Templar(config)
-
-    # required function for import modules
-    def what(self):
-        return "import/debian_ubuntu"
-
-    # required function for import modules
-    def check_for_signature(self,path,cli_breed):
-        signatures = [
-            'pool',
-        ]
-
-        #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path)
-        for signature in signatures:
-            d = os.path.join(path,signature)
-            if os.path.exists(d):
-                self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature)
-                return (True,signature)
-
-        if cli_breed and cli_breed in self.get_valid_breeds():
-            self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path)
-            return (True,None)
-
-        return (False,None)
-
-    # required function for import modules
-    def run(self,pkgdir,mirror,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None):
-        self.pkgdir = pkgdir
-        self.mirror = mirror
-        self.mirror_name = mirror_name
-        self.network_root = network_root
-        self.kickstart_file = kickstart_file
-        self.rsync_flags = rsync_flags
-        self.arch = arch
-        self.breed = breed
-        self.os_version = os_version
-
-        # some fixups for the XMLRPC interface, which does not use "None"
-        if self.arch == "": self.arch = None
-        if self.mirror == "": self.mirror = None
-        if self.mirror_name == "": self.mirror_name = None
-        if self.kickstart_file == "": self.kickstart_file = None
-        if self.os_version == "": self.os_version = None
-        if self.rsync_flags == "": self.rsync_flags = None
-        if self.network_root == "": self.network_root = None
-
-        # If no breed was specified on the command line, figure it out
-        if self.breed == None:
-            self.breed = self.get_breed_from_directory()
-            if not self.breed:
-                utils.die(self.logger,"import failed - could not determine breed of debian-based distro")
-
-        # debug log stuff for testing
-        #self.logger.info("DEBUG: self.pkgdir = %s" % str(self.pkgdir))
-        #self.logger.info("DEBUG: self.mirror = %s" % str(self.mirror))
-        #self.logger.info("DEBUG: self.mirror_name = %s" % str(self.mirror_name))
-        #self.logger.info("DEBUG: self.network_root = %s" % str(self.network_root))
-        #self.logger.info("DEBUG: self.kickstart_file = %s" % str(self.kickstart_file))
-        #self.logger.info("DEBUG: self.rsync_flags = %s" % str(self.rsync_flags))
-        #self.logger.info("DEBUG: self.arch = %s" % str(self.arch))
-        #self.logger.info("DEBUG: self.breed = %s" % str(self.breed))
-        #self.logger.info("DEBUG: self.os_version = %s" % str(self.os_version))
-
-        # both --import and --name are required arguments
-
-        if self.mirror is None:
-            utils.die(self.logger,"import failed. no --path specified")
-        if self.mirror_name is None:
-            utils.die(self.logger,"import failed. no --name specified")
-
-        # if --arch is supplied, validate it to ensure it's valid
-
-        if self.arch is not None and self.arch != "":
-            self.arch = self.arch.lower()
-            if self.arch == "x86":
-                # be consistent
-                self.arch = "i386"
-            if self.arch not in self.get_valid_arches():
-                utils.die(self.logger,"arch must be one of: %s" % string.join(self.get_valid_arches(),", "))
-
-        # if we're going to do any copying, set where to put things
-        # and then make sure nothing is already there.
-
-        self.path = os.path.normpath( "%s/ks_mirror/%s" % (self.settings.webdir, self.mirror_name) )
-        if os.path.exists(self.path) and self.arch is None:
-            # FIXME : Raise exception even when network_root is given ?
-            utils.die(self.logger,"Something already exists at this import location (%s). You must specify --arch to avoid potentially overwriting existing files." % self.path)
-
-        # import takes a --kickstart for forcing selection that can't be used in all circumstances
-
-        if self.kickstart_file and not self.breed:
-            utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected")
-
-        if self.os_version and not self.breed:
-            utils.die(self.logger,"OS version can only be specified when a specific breed is selected")
-
-        if self.breed and self.breed.lower() not in self.get_valid_breeds():
-            utils.die(self.logger,"Supplied import breed is not supported by this module")
-
-        # if --arch is supplied, make sure the user is not importing a path with a different
-        # arch, which would just be silly.
-
-        if self.arch:
-            # append the arch path to the name if the arch is not already
-            # found in the name.
-            for x in self.get_valid_arches():
-                if self.path.lower().find(x) != -1:
-                    if self.arch != x :
-                        utils.die(self.logger,"Architecture found on pathname (%s) does not fit the one given in command line (%s)"%(x,self.arch))
-                    break
-            else:
-                # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again
-                self.path += ("-%s" % self.arch)
-
-        # make the output path and mirror content but only if not specifying that a network
-        # accessible support location already exists (this is --available-as on the command line)
-
-        if self.network_root is None:
-            # we need to mirror (copy) the files
-
-            utils.mkdir(self.path)
-
-            if self.mirror.startswith("http://") or self.mirror.startswith("ftp://") or self.mirror.startswith("nfs://"):
-
-                # http mirrors are kind of primative. rsync is better.
-                # that's why this isn't documented in the manpage and we don't support them.
-                # TODO: how about adding recursive FTP as an option?
-
-                utils.die(self.logger,"unsupported protocol")
-
-            else:
-
-                # good, we're going to use rsync..
-                # we don't use SSH for public mirrors and local files.
-                # presence of user@host syntax means use SSH
-
-                # kick off the rsync now
-
-                if not utils.rsync_files(self.mirror, self.path, self.rsync_flags, self.logger):
-                    utils.die(self.logger, "failed to rsync the files")
-
-        else:
-
-            # rather than mirroring, we're going to assume the path is available
-            # over http, ftp, and nfs, perhaps on an external filer. scanning still requires
-            # --mirror is a filesystem path, but --available-as marks the network path
-
-            if not os.path.exists(self.mirror):
-                utils.die(self.logger, "path does not exist: %s" % self.mirror)
-
-            # find the filesystem part of the path, after the server bits, as each distro
-            # URL needs to be calculated relative to this.
-
-            if not self.network_root.endswith("/"):
-                self.network_root = self.network_root + "/"
-            self.path = os.path.normpath( self.mirror )
-            valid_roots = [ "nfs://", "ftp://", "http://" ]
-            for valid_root in valid_roots:
-                if self.network_root.startswith(valid_root):
-                    break
-            else:
-                utils.die(self.logger, "Network root given to --available-as must be nfs://, ftp://, or http://")
-            if self.network_root.startswith("nfs://"):
-                try:
-                    (a,b,rest) = self.network_root.split(":",3)
-                except:
-                    utils.die(self.logger, "Network root given to --available-as is missing a colon, please see the manpage example.")
-
-        # now walk the filesystem looking for distributions that match certain patterns
-
-        self.logger.info("adding distros")
-        distros_added = []
-        # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST
-        os.path.walk(self.path, self.distro_adder, distros_added)
-
-        # find out if we can auto-create any repository records from the install tree
-
-        if self.network_root is None:
-            self.logger.info("associating repos")
-            # FIXME: this automagic is not possible (yet) without mirroring
-            self.repo_finder(distros_added)
-
-        # find the most appropriate answer files for each profile object
-
-        self.logger.info("associating kickstarts")
-        self.kickstart_finder(distros_added)
-
-        # ensure bootloaders are present
-        self.api.pxegen.copy_bootloaders()
-
-        return True
-
-    # required function for import modules
-    def get_valid_arches(self):
-        return ["i386", "ppc", "x86_64", "x86",]
-
-    # required function for import modules
-    def get_valid_breeds(self):
-        return ["debian","ubuntu"]
-
-    # required function for import modules
-    def get_valid_os_versions(self):
-        if self.breed == "debian":
-            return ["etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "experimental",]
-        elif self.breed == "ubuntu":
-            return ["dapper", "hardy", "karmic", "lucid", "maverick", "natty",]
-        else:
-            return []
-
-    def get_valid_repo_breeds(self):
-        return ["apt",]
-
-    def get_release_files(self):
-        """
-        Find distro release packages.
-        """
-        return glob.glob(os.path.join(self.get_rootdir(), "dists/*"))
-
-    def get_breed_from_directory(self):
-        for breed in self.get_valid_breeds():
-            # NOTE : Although we break the loop after the first match,
-            # multiple debian derived distros can actually live at the same pool -- JP
-            d = os.path.join(self.mirror, breed)
-            if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.mirror)) or os.path.basename(self.mirror) == breed:
-                return breed
-        else:
-            return None
-
-    def get_tree_location(self, distro):
-        """
-        Once a distribution is identified, find the part of the distribution
-        that has the URL in it that we want to use for kickstarting the
-        distribution, and create a ksmeta variable $tree that contains this.
-        """
-
-        base = self.get_rootdir()
-
-        if self.network_root is None:
-            dists_path = os.path.join(self.path, "dists")
-            if os.path.isdir(dists_path):
-                tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
-            else:
-                tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
-            self.set_install_tree(distro, tree)
-        else:
-            # where we assign the kickstart source is relative to our current directory
-            # and the input start directory in the crawl. We find the path segments
-            # between and tack them on the network source path to find the explicit
-            # network path to the distro that Anaconda can digest.
-            tail = self.path_tail(self.path, base)
-            tree = self.network_root[:-1] + tail
-            self.set_install_tree(distro, tree)
-
-        return
-
-    def repo_finder(self, distros_added):
-        for distro in distros_added:
-            self.logger.info("traversing distro %s" % distro.name)
-            # FIXME : Shouldn't decide this the value of self.network_root ?
-            if distro.kernel.find("ks_mirror") != -1:
-                basepath = os.path.dirname(distro.kernel)
-                top = self.get_rootdir()
-                self.logger.info("descent into %s" % top)
-                dists_path = os.path.join(self.path, "dists")
-                if not os.path.isdir(dists_path):
-                    self.process_repos()
-            else:
-                self.logger.info("this distro isn't mirrored")
-
-    def process_repos(self):
-        pass
-
-    def distro_adder(self,distros_added,dirname,fnames):
-        """
-        This is an os.path.walk routine that finds distributions in the directory
-        to be scanned and then creates them.
-        """
-
-        # FIXME: If there are more than one kernel or initrd image on the same directory,
-        # results are unpredictable
-
-        initrd = None
-        kernel = None
-
-        for x in fnames:
-            adtls = []
-
-            fullname = os.path.join(dirname,x)
-            if os.path.islink(fullname) and os.path.isdir(fullname):
-                if fullname.startswith(self.path):
-                    self.logger.warning("avoiding symlink loop")
-                    continue
-                self.logger.info("following symlink: %s" % fullname)
-                os.path.walk(fullname, self.distro_adder, distros_added)
-
-            if ( x.startswith("initrd.gz") ) and x != "initrd.size":
-                initrd = os.path.join(dirname,x)
-            if ( x.startswith("linux") ) and x.find("initrd") == -1:
-                kernel = os.path.join(dirname,x)
-
-            # if we've collected a matching kernel and initrd pair, turn the in and add them to the list
-            if initrd is not None and kernel is not None:
-                adtls.append(self.add_entry(dirname,kernel,initrd))
-                kernel = None
-                initrd = None
-
-            for adtl in adtls:
-                distros_added.extend(adtl)
-
-    def add_entry(self,dirname,kernel,initrd):
-        """
-        When we find a directory with a valid kernel/initrd in it, create the distribution objects
-        as appropriate and save them. This includes creating xen and rescue distros/profiles
-        if possible.
-        """
-
-        proposed_name = self.get_proposed_name(dirname,kernel)
-        proposed_arch = self.get_proposed_arch(dirname)
-
-        if self.arch and proposed_arch and self.arch != proposed_arch:
-            utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch))
-
-        archs = self.learn_arch_from_tree()
-        if not archs:
-            if self.arch:
-                archs.append( self.arch )
-        else:
-            if self.arch and self.arch not in archs:
-                utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir()))
-        if proposed_arch:
-            if archs and proposed_arch not in archs:
-                self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir()))
-                return
-
-            archs = [ proposed_arch ]
-
-        if len(archs)>1:
-            self.logger.warning("- Warning : Multiple archs found : %s" % (archs))
-
-        distros_added = []
-
-        for pxe_arch in archs:
-            name = proposed_name + "-" + pxe_arch
-            existing_distro = self.distros.find(name=name)
-
-            if existing_distro is not None:
-                self.logger.warning("skipping import, as distro name already exists: %s" % name)
-                continue
-
-            else:
-                self.logger.info("creating new distro: %s" % name)
-                distro = self.config.new_distro()
-
-            if name.find("-autoboot") != -1:
-                # this is an artifact of some EL-3 imports
-                continue
-
-            distro.set_name(name)
-            distro.set_kernel(kernel)
-            distro.set_initrd(initrd)
-            distro.set_arch(pxe_arch)
-            distro.set_breed(self.breed)
-            # If a version was supplied on command line, we set it now
-            if self.os_version:
-                distro.set_os_version(self.os_version)
-
-            self.distros.add(distro,save=True)
-            distros_added.append(distro)
-
-            existing_profile = self.profiles.find(name=name)
-
-            # see if the profile name is already used, if so, skip it and
-            # do not modify the existing profile
-
-            if existing_profile is None:
-                self.logger.info("creating new profile: %s" % name)
-                #FIXME: The created profile holds a default kickstart, and should be breed specific
-                profile = self.config.new_profile()
-            else:
-                self.logger.info("skipping existing profile, name already exists: %s" % name)
-                continue
-
-            # save our minimal profile which just points to the distribution and a good
-            # default answer file
-
-            profile.set_name(name)
-            profile.set_distro(name)
-            profile.set_kickstart(self.kickstart_file)
-
-            # depending on the name of the profile we can define a good virt-type
-            # for usage with koan
-
-            if name.find("-xen") != -1:
-                profile.set_virt_type("xenpv")
-            elif name.find("vmware") != -1:
-                profile.set_virt_type("vmware")
-            else:
-                profile.set_virt_type("qemu")
-
-            # save our new profile to the collection
-
-            self.profiles.add(profile,save=True)
-
-        return distros_added
-
-    def get_proposed_name(self,dirname,kernel=None):
-        """
-        Given a directory name where we have a kernel/initrd pair, try to autoname
-        the distribution (and profile) object based on the contents of that path
-        """
-
-        if self.network_root is not None:
-            name = self.mirror_name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/"))
-        else:
-            # remove the part that says /var/www/cobbler/ks_mirror/name
-            name = "-".join(dirname.split("/")[5:])
-
-        if kernel is not None and kernel.find("PAE") != -1:
-            name = name + "-PAE"
-
-        # These are all Ubuntu's doing, the netboot images are buried pretty
-        # deep. ;-) -JC
-        name = name.replace("-netboot","")
-        name = name.replace("-ubuntu-installer","")
-        name = name.replace("-amd64","")
-        name = name.replace("-i386","")
-
-        # we know that some kernel paths should not be in the name
-
-        name = name.replace("-images","")
-        name = name.replace("-pxeboot","")
-        name = name.replace("-install","")
-        name = name.replace("-isolinux","")
-
-        # some paths above the media root may have extra path segments we want
-        # to clean up
-
-        name = name.replace("-os","")
-        name = name.replace("-tree","")
-        name = name.replace("var-www-cobbler-", "")
-        name = name.replace("ks_mirror-","")
-        name = name.replace("--","-")
-
-        # remove any architecture name related string, as real arch will be appended later
-
-        name = name.replace("chrp","ppc64")
-
-        for separator in [ '-' , '_' , '.' ] :
-            for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]:
-                name = name.replace("%s%s" % ( separator , arch ),"")
-
-        return name
-
-    def get_proposed_arch(self,dirname):
-        """
-        Given an directory name, can we infer an architecture from a path segment?
-        """
-        if dirname.find("x86_64") != -1 or dirname.find("amd") != -1:
-            return "x86_64"
-        if dirname.find("ia64") != -1:
-            return "ia64"
-        if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1:
-            return "i386"
-        if dirname.find("s390x") != -1:
-            return "s390x"
-        if dirname.find("s390") != -1:
-            return "s390"
-        if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1:
-            return "ppc64"
-        if dirname.find("ppc32") != -1:
-            return "ppc"
-        if dirname.find("ppc") != -1:
-            return "ppc"
-        return None
-
-    def arch_walker(self,foo,dirname,fnames):
-        """
-        See docs on learn_arch_from_tree.
-
-        The TRY_LIST is used to speed up search, and should be dropped for default importer
-        Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem
-
-        This method is useful to get the archs, but also to package type and a raw guess of the breed
-        """
-
-        # try to find a kernel header RPM and then look at it's arch.
-        for x in fnames:
-            if self.match_kernelarch_file(x):
-                for arch in self.get_valid_arches():
-                    if x.find(arch) != -1:
-                        foo[arch] = 1
-                for arch in [ "i686" , "amd64" ]:
-                    if x.find(arch) != -1:
-                        foo[arch] = 1
-
-    def kickstart_finder(self,distros_added):
-        """
-        For all of the profiles in the config w/o a kickstart, use the
-        given kickstart file, or look at the kernel path, from that,
-        see if we can guess the distro, and if we can, assign a kickstart
-        if one is available for it.
-        """
-        for profile in self.profiles:
-            distro = self.distros.find(name=profile.get_conceptual_parent().name)
-            if distro is None or not (distro in distros_added):
-                continue
-
-            kdir = os.path.dirname(distro.kernel)
-            if self.kickstart_file == None:
-                for file in self.get_release_files():
-                    results = self.scan_pkg_filename(file)
-                    # FIXME : If os is not found on tree but set with CLI, no kickstart is searched
-                    if results is None:
-                        self.logger.warning("skipping %s" % file)
-                        continue
-                    (flavor, major, minor, release) = results
-                    # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata
-                    #version , ks = self.set_variance(flavor, major, minor, distro.arch)
-                    if self.os_version:
-                        if self.os_version != flavor:
-                            utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor))
-                    distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch))
-                    distro.set_os_version(flavor)
-                    # is this even valid for debian/ubuntu? - jcammarata
-                    #ds = self.get_datestamp()
-                    #if ds is not None:
-                    #    distro.set_tree_build_time(ds)
-                profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed")
-                self.profiles.add(profile,save=True)
-
-            self.configure_tree_location(distro)
-            self.distros.add(distro,save=True) # re-save
-            self.api.serialize()
-
3121 | 603 | def configure_tree_location(self, distro): | ||
3122 | 604 | """ | ||
3123 | 605 | Once a distribution is identified, find the part of the distribution | ||
3124 | 606 | that has the URL in it that we want to use for kickstarting the | ||
3125 | 607 | distribution, and create a ksmeta variable $tree that contains this. | ||
3126 | 608 | """ | ||
3127 | 609 | |||
3128 | 610 | base = self.get_rootdir() | ||
3129 | 611 | |||
3130 | 612 | if self.network_root is None: | ||
3131 | 613 | dists_path = os.path.join( self.path , "dists" ) | ||
3132 | 614 | if os.path.isdir( dists_path ): | ||
3133 | 615 | tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name) | ||
3134 | 616 | else: | ||
3135 | 617 | tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name) | ||
3136 | 618 | self.set_install_tree(distro, tree) | ||
3137 | 619 | else: | ||
3138 | 620 | # where we assign the kickstart source is relative to our current directory | ||
3139 | 621 | # and the input start directory in the crawl. We find the path segments | ||
3140 | 622 | # between and tack them on the network source path to find the explicit | ||
3141 | 623 | # network path to the distro that Anaconda can digest. | ||
3142 | 624 | tail = utils.path_tail(self.path, base) | ||
3143 | 625 | tree = self.network_root[:-1] + tail | ||
3144 | 626 | self.set_install_tree(distro, tree) | ||
3145 | 627 | |||
3146 | 628 | def get_rootdir(self): | ||
3147 | 629 | return self.mirror | ||
3148 | 630 | |||
3149 | 631 | def get_pkgdir(self): | ||
3150 | 632 | if not self.pkgdir: | ||
3151 | 633 | return None | ||
3152 | 634 | return os.path.join(self.get_rootdir(),self.pkgdir) | ||
3153 | 635 | |||
3154 | 636 | def set_install_tree(self, distro, url): | ||
3155 | 637 | distro.ks_meta["tree"] = url | ||
3156 | 638 | |||
3157 | 639 | def learn_arch_from_tree(self): | ||
3158 | 640 | """ | ||
3159 | 641 | If a distribution is imported from DVD, there is a good chance the path doesn't | ||
3160 | 642 | contain the arch, and we should add it back in so that it's part of the | ||
3161 | 643 | meaningful name. This code figures out the arch name, which is important | ||
3162 | 644 | for producing predictable distro names (and profile names) from differing import sources. | ||
3163 | 645 | """ | ||
3164 | 646 | result = {} | ||
3165 | 647 | # FIXME : this is called only once, should not be a walk | ||
3166 | 648 | if self.get_pkgdir(): | ||
3167 | 649 | os.path.walk(self.get_pkgdir(), self.arch_walker, result) | ||
3168 | 650 | if result.pop("amd64",False): | ||
3169 | 651 | result["x86_64"] = 1 | ||
3170 | 652 | if result.pop("i686",False): | ||
3171 | 653 | result["i386"] = 1 | ||
3172 | 654 | return result.keys() | ||
3173 | 655 | |||
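The amd64/i686 normalization inside `learn_arch_from_tree` can be isolated. This sketch (a hypothetical helper, not cobbler API) folds Debian package arch names into the PXE arch names cobbler expects:

```python
def normalize_arches(found):
    """Map Debian package arch names onto cobbler's PXE arch names:
    amd64 -> x86_64, i686 -> i386; everything else passes through."""
    result = dict.fromkeys(found, 1)
    if result.pop("amd64", False):
        result["x86_64"] = 1
    if result.pop("i686", False):
        result["i386"] = 1
    return sorted(result)
```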
3174 | 656 | def match_kernelarch_file(self, filename): | ||
3175 | 657 | """ | ||
3176 | 658 | Is the given filename a kernel filename? | ||
3177 | 659 | """ | ||
3178 | 660 | if not filename.endswith("deb"): | ||
3179 | 661 | return False | ||
3180 | 662 | if filename.startswith("linux-headers-"): | ||
3181 | 663 | return True | ||
3182 | 664 | return False | ||
3183 | 665 | |||
3184 | 666 | def scan_pkg_filename(self, file): | ||
3185 | 667 | """ | ||
3186 | 668 | Determine what the distro is based on the release package filename. | ||
3187 | 669 | """ | ||
3188 | 670 | # FIXME: all of these dist_names should probably be put in a function | ||
3189 | 671 | # which would be called in place of looking in codes.py. Right now | ||
3190 | 672 | # you have to update both codes.py and this to add a new release | ||
3191 | 673 | if self.breed == "debian": | ||
3192 | 674 | dist_names = ['etch','lenny',] | ||
3193 | 675 | elif self.breed == "ubuntu": | ||
3194 | 676 | dist_names = ['dapper','hardy','intrepid','jaunty','karmic','lucid','maverick','natty',] | ||
3195 | 677 | else: | ||
3196 | 678 | return None | ||
3197 | 679 | |||
3198 | 680 | if os.path.basename(file) in dist_names: | ||
3199 | 681 | release_file = os.path.join(file,'Release') | ||
3200 | 682 | self.logger.info("Found %s release file: %s" % (self.breed,release_file)) | ||
3201 | 683 | |||
3202 | 684 | f = open(release_file,'r') | ||
3203 | 685 | lines = f.readlines() | ||
3204 | 686 | f.close() | ||
3205 | 687 | |||
3206 | 688 | for line in lines: | ||
3207 | 689 | if line.lower().startswith('version: '): | ||
3208 | 690 | version = line.split(':')[1].strip() | ||
3209 | 691 | values = version.split('.') | ||
3210 | 692 | if len(values) == 1: | ||
3211 | 693 | # I don't think you'd ever hit this currently with debian or ubuntu, | ||
3212 | 694 | # just including it for safety reasons | ||
3213 | 695 | return (os.path.basename(file), values[0], "0", "0") | ||
3214 | 696 | elif len(values) == 2: | ||
3215 | 697 | return (os.path.basename(file), values[0], values[1], "0") | ||
3216 | 698 | elif len(values) > 2: | ||
3217 | 699 | return (os.path.basename(file), values[0], values[1], values[2]) | ||
3218 | 700 | return None | ||
3219 | 701 | |||
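The `Version:` handling in `scan_pkg_filename` above can be exercised standalone. This sketch (names are illustrative, not cobbler's) applies the same split-and-pad logic to a Release file's Version line, collapsing the three length branches into one:

```python
def parse_release_version(flavor, release_text):
    """Split a Release file's 'Version:' value into
    (flavor, major, minor, point), padding missing parts with "0"."""
    for line in release_text.splitlines():
        if line.lower().startswith("version: "):
            values = line.split(":")[1].strip().split(".")
            # Pad to three components, mirroring the branches above.
            values = (values + ["0", "0"])[:3]
            return (flavor, values[0], values[1], values[2])
    return None
```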
3220 | 702 | def get_datestamp(self): | ||
3221 | 703 | """ | ||
3222 | 704 | Not used for debian/ubuntu... should probably be removed? - jcammarata | ||
3223 | 705 | """ | ||
3224 | 706 | pass | ||
3225 | 707 | |||
3226 | 708 | def set_variance(self, flavor, major, minor, arch): | ||
3227 | 709 | """ | ||
3228 | 710 | Set distro specific versioning. | ||
3229 | 711 | """ | ||
3230 | 712 | # I don't think this is required anymore, as the scan_pkg_filename() function | ||
3231 | 713 | # above does everything we need it to - jcammarata | ||
3232 | 714 | # | ||
3233 | 715 | #if self.breed == "debian": | ||
3234 | 716 | # dist_names = { '4.0' : "etch" , '5.0' : "lenny" } | ||
3235 | 717 | # dist_vers = "%s.%s" % ( major , minor ) | ||
3236 | 718 | # os_version = dist_names[dist_vers] | ||
3237 | 719 | # | ||
3238 | 720 | # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" | ||
3239 | 721 | #elif self.breed == "ubuntu": | ||
3240 | 722 | # # Release names taken from wikipedia | ||
3241 | 723 | # dist_names = { '6.4' :"dapper", | ||
3242 | 724 | # '8.4' :"hardy", | ||
3243 | 725 | # '8.10' :"intrepid", | ||
3244 | 726 | # '9.4' :"jaunty", | ||
3245 | 727 | # '9.10' :"karmic", | ||
3246 | 728 | # '10.4' :"lynx", | ||
3247 | 729 | # '10.10':"maverick", | ||
3248 | 730 | # '11.4' :"natty", | ||
3249 | 731 | # } | ||
3250 | 732 | # dist_vers = "%s.%s" % ( major , minor ) | ||
3251 | 733 | # if not dist_names.has_key( dist_vers ): | ||
3252 | 734 | # dist_names['4ubuntu2.0'] = "IntrepidIbex" | ||
3253 | 735 | # os_version = dist_names[dist_vers] | ||
3254 | 736 | # | ||
3255 | 737 | # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" | ||
3256 | 738 | #else: | ||
3257 | 739 | # return None | ||
3258 | 740 | pass | ||
3259 | 741 | |||
3260 | 742 | def process_repos(self, main_importer, distro): | ||
3261 | 743 | # Create a disabled repository for the new distro, and the security updates | ||
3262 | 744 | # | ||
3263 | 745 | # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage | ||
3264 | 746 | |||
3265 | 747 | repo = item_repo.Repo(main_importer.config) | ||
3266 | 748 | repo.set_breed( "apt" ) | ||
3267 | 749 | repo.set_arch( distro.arch ) | ||
3268 | 750 | repo.set_keep_updated( False ) | ||
3269 | 751 | repo.yumopts["--ignore-release-gpg"] = None | ||
3270 | 752 | repo.yumopts["--verbose"] = None | ||
3271 | 753 | repo.set_name( distro.name ) | ||
3272 | 754 | repo.set_os_version( distro.os_version ) | ||
3273 | 755 | # NOTE : The location of the mirror should come from timezone | ||
3274 | 756 | repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) ) | ||
3275 | 757 | |||
3276 | 758 | security_repo = item_repo.Repo(main_importer.config) | ||
3277 | 759 | security_repo.set_breed( "apt" ) | ||
3278 | 760 | security_repo.set_arch( distro.arch ) | ||
3279 | 761 | security_repo.set_keep_updated( False ) | ||
3280 | 762 | security_repo.yumopts["--ignore-release-gpg"] = None | ||
3281 | 763 | security_repo.yumopts["--verbose"] = None | ||
3282 | 764 | security_repo.set_name( distro.name + "-security" ) | ||
3283 | 765 | security_repo.set_os_version( distro.os_version ) | ||
3284 | 766 | # There are no official mirrors for security updates | ||
3285 | 767 | security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' ) | ||
3286 | 768 | |||
3287 | 769 | self.logger.info("Added repos for %s" % distro.name) | ||
3288 | 770 | repos = main_importer.config.repos() | ||
3289 | 771 | repos.add(repo,save=True) | ||
3290 | 772 | repos.add(security_repo,save=True) | ||
3291 | 773 | |||
3292 | 774 | # ========================================================================== | ||
3293 | 775 | |||
3294 | 776 | def get_import_manager(config,logger): | ||
3295 | 777 | return ImportDebianUbuntuManager(config,logger) | ||
3297 | === removed file '.pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py' | |||
3298 | --- .pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py 2011-05-02 18:26:03 +0000 | |||
3299 | +++ .pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py 1970-01-01 00:00:00 +0000 | |||
3300 | @@ -1,98 +0,0 @@ | |||
3301 | 1 | |||
3302 | 2 | """ | ||
3303 | 3 | various codes and constants used by Cobbler | ||
3304 | 4 | |||
3305 | 5 | Copyright 2006-2009, Red Hat, Inc | ||
3306 | 6 | Michael DeHaan <mdehaan@redhat.com> | ||
3307 | 7 | |||
3308 | 8 | This program is free software; you can redistribute it and/or modify | ||
3309 | 9 | it under the terms of the GNU General Public License as published by | ||
3310 | 10 | the Free Software Foundation; either version 2 of the License, or | ||
3311 | 11 | (at your option) any later version. | ||
3312 | 12 | |||
3313 | 13 | This program is distributed in the hope that it will be useful, | ||
3314 | 14 | but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
3315 | 15 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
3316 | 16 | GNU General Public License for more details. | ||
3317 | 17 | |||
3318 | 18 | You should have received a copy of the GNU General Public License | ||
3319 | 19 | along with this program; if not, write to the Free Software | ||
3320 | 20 | Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA | ||
3321 | 21 | 02110-1301 USA | ||
3322 | 22 | """ | ||
3323 | 23 | |||
3324 | 24 | import utils | ||
3325 | 25 | |||
3326 | 26 | # OS variants table. This is a variance of the data from | ||
3327 | 27 | # ls /usr/lib/python2.X/site-packages/virtinst/FullVirtGuest.py | ||
3328 | 28 | # but replicated here as we can't assume cobbler is installed on a system with libvirt. | ||
3329 | 29 | # in many cases it will not be (i.e. old EL4 server, etc) and we need this info to | ||
3330 | 30 | # know how to validate --os-variant and --os-version. | ||
3331 | 31 | # | ||
3332 | 32 | # The keys of this hash correspond with the --breed flag in Cobbler. | ||
3333 | 33 | # --breed has physical provisioning semantics as well as virt semantics. | ||
3334 | 34 | # | ||
3335 | 35 | # presence of something in this table does /not/ mean it's supported. | ||
3336 | 36 | # for instance, currently, "redhat", "debian", and "suse" do something interesting. | ||
3337 | 37 | # the rest are undefined (for now), this will evolve. | ||
3338 | 38 | |||
3339 | 39 | VALID_OS_BREEDS = [ | ||
3340 | 40 | "redhat", "debian", "ubuntu", "suse", "generic", "windows", "unix", "vmware", "other" | ||
3341 | 41 | ] | ||
3342 | 42 | |||
3343 | 43 | VALID_OS_VERSIONS = { | ||
3344 | 44 | "redhat" : [ "rhel2.1", "rhel3", "rhel4", "rhel5", "rhel6", "fedora5", "fedora6", "fedora7", "fedora8", "fedora9", "fedora10", "fedora11", "fedora12", "fedora13", "fedora14", "generic24", "generic26", "virtio26", "other" ], | ||
3345 | 45 | "suse" : [ "sles10", "generic24", "generic26", "virtio26", "other" ], | ||
3346 | 46 | "debian" : [ "etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "generic24", "generic26", "other" ], | ||
3347 | 47 | "ubuntu" : [ "dapper", "hardy", "intrepid", "jaunty", "karmic", "lucid", "maverick", "natty" ], | ||
3348 | 48 | "generic" : [ "generic24", "generic26", "other" ], | ||
3349 | 49 | "windows" : [ "winxp", "win2k", "win2k3", "vista", "other" ], | ||
3350 | 50 | "unix" : [ "solaris9", "solaris10", "freebsd6", "openbsd4", "other" ], | ||
3351 | 51 | "vmware" : [ "esx4", "esxi4" ], | ||
3352 | 52 | "other" : [ "msdos", "netware4", "netware5", "netware6", "generic", "other" ] | ||
3353 | 53 | } | ||
3354 | 54 | |||
3355 | 55 | VALID_REPO_BREEDS = [ | ||
3356 | 56 | # "rsync", "rhn", "yum", "apt" | ||
3357 | 57 | "rsync", "rhn", "yum" | ||
3358 | 58 | ] | ||
3359 | 59 | |||
3360 | 60 | def uniquify(seq, idfun=None): | ||
3361 | 61 | |||
3362 | 62 | # this is odd (older mod_python scoping bug?) but we can't use | ||
3363 | 63 | # utils.uniquify here because on older distros (RHEL4/5) | ||
3364 | 64 | # mod_python gets another utils. As a result, | ||
3365 | 65 | # it is duplicated here for now. Bad, but ... now you know. | ||
3366 | 66 | # | ||
3367 | 67 | # credit: http://www.peterbe.com/plog/uniqifiers-benchmark | ||
3368 | 68 | # FIXME: if this is actually slower than some other way, overhaul it | ||
3369 | 69 | |||
3370 | 70 | if idfun is None: | ||
3371 | 71 | def idfun(x): | ||
3372 | 72 | return x | ||
3373 | 73 | seen = {} | ||
3374 | 74 | result = [] | ||
3375 | 75 | for item in seq: | ||
3376 | 76 | marker = idfun(item) | ||
3377 | 77 | if marker in seen: | ||
3378 | 78 | continue | ||
3379 | 79 | seen[marker] = 1 | ||
3380 | 80 | result.append(item) | ||
3381 | 81 | return result | ||
3382 | 82 | |||
3383 | 83 | |||
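The `uniquify` duplicated above is an order-preserving dedup keyed by an optional id function. A minimal copy with a usage sketch (same shape as the code in codes.py):

```python
def uniquify(seq, idfun=None):
    """Order-preserving dedup, keyed by idfun (identity by default)."""
    if idfun is None:
        idfun = lambda x: x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        if marker in seen:
            continue
        seen[marker] = 1
        result.append(item)
    return result
```

The first occurrence of each key wins, which is why `get_all_os_versions` can safely seed its results with a leading empty string.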
3384 | 84 | def get_all_os_versions(): | ||
3385 | 85 | """ | ||
3386 | 86 | Collapse the above list of OS versions for usage/display by the CLI/webapp. | ||
3387 | 87 | """ | ||
3388 | 88 | results = [''] | ||
3389 | 89 | for x in VALID_OS_VERSIONS.keys(): | ||
3390 | 90 | for y in VALID_OS_VERSIONS[x]: | ||
3391 | 91 | results.append(y) | ||
3392 | 92 | |||
3393 | 93 | results = uniquify(results) | ||
3394 | 94 | |||
3395 | 95 | results.sort() | ||
3396 | 96 | return results | ||
3397 | 97 | |||
3398 | 98 | |||
3400 | === removed directory '.pc/42_fix_repomirror_create_sync.patch/cobbler/modules' | |||
3401 | === removed file '.pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py' | |||
3402 | --- .pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py 2011-05-02 18:26:03 +0000 | |||
3403 | +++ .pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py 1970-01-01 00:00:00 +0000 | |||
3404 | @@ -1,779 +0,0 @@ | |||
3405 | 1 | """ | ||
3406 | 2 | This is some of the code behind 'cobbler sync'. | ||
3407 | 3 | |||
3408 | 4 | Copyright 2006-2009, Red Hat, Inc | ||
3409 | 5 | Michael DeHaan <mdehaan@redhat.com> | ||
3410 | 6 | John Eckersberg <jeckersb@redhat.com> | ||
3411 | 7 | |||
3412 | 8 | This program is free software; you can redistribute it and/or modify | ||
3413 | 9 | it under the terms of the GNU General Public License as published by | ||
3414 | 10 | the Free Software Foundation; either version 2 of the License, or | ||
3415 | 11 | (at your option) any later version. | ||
3416 | 12 | |||
3417 | 13 | This program is distributed in the hope that it will be useful, | ||
3418 | 14 | but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
3419 | 15 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
3420 | 16 | GNU General Public License for more details. | ||
3421 | 17 | |||
3422 | 18 | You should have received a copy of the GNU General Public License | ||
3423 | 19 | along with this program; if not, write to the Free Software | ||
3424 | 20 | Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA | ||
3425 | 21 | 02110-1301 USA | ||
3426 | 22 | """ | ||
3427 | 23 | |||
3428 | 24 | import os | ||
3429 | 25 | import os.path | ||
3430 | 26 | import shutil | ||
3431 | 27 | import time | ||
3432 | 28 | import sys | ||
3433 | 29 | import glob | ||
3434 | 30 | import traceback | ||
3435 | 31 | import errno | ||
3436 | 32 | import re | ||
3437 | 33 | from utils import popen2 | ||
3438 | 34 | from shlex import shlex | ||
3439 | 35 | |||
3440 | 36 | |||
3441 | 37 | import utils | ||
3442 | 38 | from cexceptions import * | ||
3443 | 39 | import templar | ||
3444 | 40 | |||
3445 | 41 | import item_distro | ||
3446 | 42 | import item_profile | ||
3447 | 43 | import item_repo | ||
3448 | 44 | import item_system | ||
3449 | 45 | |||
3450 | 46 | from utils import _ | ||
3451 | 47 | |||
3452 | 48 | def register(): | ||
3453 | 49 | """ | ||
3454 | 50 | The mandatory cobbler module registration hook. | ||
3455 | 51 | """ | ||
3456 | 52 | return "manage/import" | ||
3457 | 53 | |||
3458 | 54 | |||
3459 | 55 | class ImportDebianUbuntuManager: | ||
3460 | 56 | |||
3461 | 57 | def __init__(self,config,logger): | ||
3462 | 58 | """ | ||
3463 | 59 | Constructor | ||
3464 | 60 | """ | ||
3465 | 61 | self.logger = logger | ||
3466 | 62 | self.config = config | ||
3467 | 63 | self.api = config.api | ||
3468 | 64 | self.distros = config.distros() | ||
3469 | 65 | self.profiles = config.profiles() | ||
3470 | 66 | self.systems = config.systems() | ||
3471 | 67 | self.settings = config.settings() | ||
3472 | 68 | self.repos = config.repos() | ||
3473 | 69 | self.templar = templar.Templar(config) | ||
3474 | 70 | |||
3475 | 71 | # required function for import modules | ||
3476 | 72 | def what(self): | ||
3477 | 73 | return "import/debian_ubuntu" | ||
3478 | 74 | |||
3479 | 75 | # required function for import modules | ||
3480 | 76 | def check_for_signature(self,path,cli_breed): | ||
3481 | 77 | signatures = [ | ||
3482 | 78 | 'pool', | ||
3483 | 79 | ] | ||
3484 | 80 | |||
3485 | 81 | #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path) | ||
3486 | 82 | for signature in signatures: | ||
3487 | 83 | d = os.path.join(path,signature) | ||
3488 | 84 | if os.path.exists(d): | ||
3489 | 85 | self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature) | ||
3490 | 86 | return (True,signature) | ||
3491 | 87 | |||
3492 | 88 | if cli_breed and cli_breed in self.get_valid_breeds(): | ||
3493 | 89 | self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path) | ||
3494 | 90 | return (True,None) | ||
3495 | 91 | |||
3496 | 92 | return (False,None) | ||
3497 | 93 | |||
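The signature scan in `check_for_signature` reduces to a directory probe. A standalone sketch with an illustrative name (`find_signature` is not cobbler API; the POSIX `/tmp` example below is only for demonstration):

```python
import os

def find_signature(path, signatures=("pool",)):
    """Return the first signature directory present under path,
    or None; mirrors the (found, signature) probe above."""
    for signature in signatures:
        if os.path.exists(os.path.join(path, signature)):
            return signature
    return None
```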
3498 | 94 | # required function for import modules | ||
3499 | 95 | def run(self,pkgdir,mirror,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None): | ||
3500 | 96 | self.pkgdir = pkgdir | ||
3501 | 97 | self.mirror = mirror | ||
3502 | 98 | self.mirror_name = mirror_name | ||
3503 | 99 | self.network_root = network_root | ||
3504 | 100 | self.kickstart_file = kickstart_file | ||
3505 | 101 | self.rsync_flags = rsync_flags | ||
3506 | 102 | self.arch = arch | ||
3507 | 103 | self.breed = breed | ||
3508 | 104 | self.os_version = os_version | ||
3509 | 105 | |||
3510 | 106 | # some fixups for the XMLRPC interface, which does not use "None" | ||
3511 | 107 | if self.arch == "": self.arch = None | ||
3512 | 108 | if self.mirror == "": self.mirror = None | ||
3513 | 109 | if self.mirror_name == "": self.mirror_name = None | ||
3514 | 110 | if self.kickstart_file == "": self.kickstart_file = None | ||
3515 | 111 | if self.os_version == "": self.os_version = None | ||
3516 | 112 | if self.rsync_flags == "": self.rsync_flags = None | ||
3517 | 113 | if self.network_root == "": self.network_root = None | ||
3518 | 114 | |||
3519 | 115 | # If no breed was specified on the command line, figure it out | ||
3520 | 116 | if self.breed == None: | ||
3521 | 117 | self.breed = self.get_breed_from_directory() | ||
3522 | 118 | if not self.breed: | ||
3523 | 119 | utils.die(self.logger,"import failed - could not determine breed of debian-based distro") | ||
3524 | 120 | |||
3525 | 121 | # debug log stuff for testing | ||
3526 | 122 | #self.logger.info("DEBUG: self.pkgdir = %s" % str(self.pkgdir)) | ||
3527 | 123 | #self.logger.info("DEBUG: self.mirror = %s" % str(self.mirror)) | ||
3528 | 124 | #self.logger.info("DEBUG: self.mirror_name = %s" % str(self.mirror_name)) | ||
3529 | 125 | #self.logger.info("DEBUG: self.network_root = %s" % str(self.network_root)) | ||
3530 | 126 | #self.logger.info("DEBUG: self.kickstart_file = %s" % str(self.kickstart_file)) | ||
3531 | 127 | #self.logger.info("DEBUG: self.rsync_flags = %s" % str(self.rsync_flags)) | ||
3532 | 128 | #self.logger.info("DEBUG: self.arch = %s" % str(self.arch)) | ||
3533 | 129 | #self.logger.info("DEBUG: self.breed = %s" % str(self.breed)) | ||
3534 | 130 | #self.logger.info("DEBUG: self.os_version = %s" % str(self.os_version)) | ||
3535 | 131 | |||
3536 | 132 | # both --import and --name are required arguments | ||
3537 | 133 | |||
3538 | 134 | if self.mirror is None: | ||
3539 | 135 | utils.die(self.logger,"import failed. no --path specified") | ||
3540 | 136 | if self.mirror_name is None: | ||
3541 | 137 | utils.die(self.logger,"import failed. no --name specified") | ||
3542 | 138 | |||
3543 | 139 | # if --arch is supplied, validate it to ensure it's valid | ||
3544 | 140 | |||
3545 | 141 | if self.arch is not None and self.arch != "": | ||
3546 | 142 | self.arch = self.arch.lower() | ||
3547 | 143 | if self.arch == "x86": | ||
3548 | 144 | # be consistent | ||
3549 | 145 | self.arch = "i386" | ||
3550 | 146 | if self.arch not in self.get_valid_arches(): | ||
3551 | 147 | utils.die(self.logger,"arch must be one of: %s" % ", ".join(self.get_valid_arches())) | ||
3552 | 148 | |||
3553 | 149 | # if we're going to do any copying, set where to put things | ||
3554 | 150 | # and then make sure nothing is already there. | ||
3555 | 151 | |||
3556 | 152 | self.path = os.path.normpath( "%s/ks_mirror/%s" % (self.settings.webdir, self.mirror_name) ) | ||
3557 | 153 | if os.path.exists(self.path) and self.arch is None: | ||
3558 | 154 | # FIXME : Raise exception even when network_root is given ? | ||
3559 | 155 | utils.die(self.logger,"Something already exists at this import location (%s). You must specify --arch to avoid potentially overwriting existing files." % self.path) | ||
3560 | 156 | |||
3561 | 157 | # import takes a --kickstart for forcing selection that can't be used in all circumstances | ||
3562 | 158 | |||
3563 | 159 | if self.kickstart_file and not self.breed: | ||
3564 | 160 | utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected") | ||
3565 | 161 | |||
3566 | 162 | if self.os_version and not self.breed: | ||
3567 | 163 | utils.die(self.logger,"OS version can only be specified when a specific breed is selected") | ||
3568 | 164 | |||
3569 | 165 | if self.breed and self.breed.lower() not in self.get_valid_breeds(): | ||
3570 | 166 | utils.die(self.logger,"Supplied import breed is not supported by this module") | ||
3571 | 167 | |||
3572 | 168 | # if --arch is supplied, make sure the user is not importing a path with a different | ||
3573 | 169 | # arch, which would just be silly. | ||
3574 | 170 | |||
3575 | 171 | if self.arch: | ||
3576 | 172 | # append the arch path to the name if the arch is not already | ||
3577 | 173 | # found in the name. | ||
3578 | 174 | for x in self.get_valid_arches(): | ||
3579 | 175 | if self.path.lower().find(x) != -1: | ||
3580 | 176 | if self.arch != x : | ||
3581 | 177 | utils.die(self.logger,"Architecture found on pathname (%s) does not match the one given on the command line (%s)"%(x,self.arch)) | ||
3582 | 178 | break | ||
3583 | 179 | else: | ||
3584 | 180 | # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again | ||
3585 | 181 | self.path += ("-%s" % self.arch) | ||
3586 | 182 | # If arch is specified we also need to update the mirror name. | ||
3587 | 183 | self.mirror_name = self.mirror_name + "-" + self.arch | ||
3588 | 184 | |||
3589 | 185 | # make the output path and mirror content but only if not specifying that a network | ||
3590 | 186 | # accessible support location already exists (this is --available-as on the command line) | ||
3591 | 187 | |||
3592 | 188 | if self.network_root is None: | ||
3593 | 189 | # we need to mirror (copy) the files | ||
3594 | 190 | |||
3595 | 191 | utils.mkdir(self.path) | ||
3596 | 192 | |||
3597 | 193 | if self.mirror.startswith("http://") or self.mirror.startswith("ftp://") or self.mirror.startswith("nfs://"): | ||
3598 | 194 | |||
3599 | 195 | # http mirrors are kind of primitive. rsync is better. | ||
3600 | 196 | # that's why this isn't documented in the manpage and we don't support them. | ||
3601 | 197 | # TODO: how about adding recursive FTP as an option? | ||
3602 | 198 | |||
3603 | 199 | utils.die(self.logger,"unsupported protocol") | ||
3604 | 200 | |||
3605 | 201 | else: | ||
3606 | 202 | |||
3607 | 203 | # good, we're going to use rsync.. | ||
3608 | 204 | # we don't use SSH for public mirrors and local files. | ||
3609 | 205 | # presence of user@host syntax means use SSH | ||
3610 | 206 | |||
3611 | 207 | # kick off the rsync now | ||
3612 | 208 | |||
3613 | 209 | if not utils.rsync_files(self.mirror, self.path, self.rsync_flags, self.logger): | ||
3614 | 210 | utils.die(self.logger, "failed to rsync the files") | ||
3615 | 211 | |||
3616 | 212 | else: | ||
3617 | 213 | |||
3618 | 214 | # rather than mirroring, we're going to assume the path is available | ||
3619 | 215 | # over http, ftp, and nfs, perhaps on an external filer. scanning still requires | ||
3620 | 216 | # --mirror is a filesystem path, but --available-as marks the network path | ||
3621 | 217 | |||
3622 | 218 | if not os.path.exists(self.mirror): | ||
3623 | 219 | utils.die(self.logger, "path does not exist: %s" % self.mirror) | ||
3624 | 220 | |||
3625 | 221 | # find the filesystem part of the path, after the server bits, as each distro | ||
3626 | 222 | # URL needs to be calculated relative to this. | ||
3627 | 223 | |||
3628 | 224 | if not self.network_root.endswith("/"): | ||
3629 | 225 | self.network_root = self.network_root + "/" | ||
3630 | 226 | self.path = os.path.normpath( self.mirror ) | ||
3631 | 227 | valid_roots = [ "nfs://", "ftp://", "http://" ] | ||
3632 | 228 | for valid_root in valid_roots: | ||
3633 | 229 | if self.network_root.startswith(valid_root): | ||
3634 | 230 | break | ||
3635 | 231 | else: | ||
3636 | 232 | utils.die(self.logger, "Network root given to --available-as must be nfs://, ftp://, or http://") | ||
3637 | 233 | if self.network_root.startswith("nfs://"): | ||
3638 | 234 | try: | ||
3639 | 235 | (a,b,rest) = self.network_root.split(":",3) | ||
3640 | 236 | except: | ||
3641 | 237 | utils.die(self.logger, "Network root given to --available-as is missing a colon, please see the manpage example.") | ||
3642 | 238 | |||
3643 | 239 | # now walk the filesystem looking for distributions that match certain patterns | ||
3644 | 240 | |||
3645 | 241 | self.logger.info("adding distros") | ||
3646 | 242 | distros_added = [] | ||
3647 | 243 | # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST | ||
3648 | 244 | os.path.walk(self.path, self.distro_adder, distros_added) | ||
3649 | 245 | |||
3650 | 246 | # find out if we can auto-create any repository records from the install tree | ||
3651 | 247 | |||
3652 | 248 | if self.network_root is None: | ||
3653 | 249 | self.logger.info("associating repos") | ||
3654 | 250 | # FIXME: this automagic is not possible (yet) without mirroring | ||
3655 | 251 | self.repo_finder(distros_added) | ||
3656 | 252 | |||
3657 | 253 | # find the most appropriate answer files for each profile object | ||
3658 | 254 | |||
3659 | 255 | self.logger.info("associating kickstarts") | ||
3660 | 256 | self.kickstart_finder(distros_added) | ||
3661 | 257 | |||
3662 | 258 | # ensure bootloaders are present | ||
3663 | 259 | self.api.pxegen.copy_bootloaders() | ||
3664 | 260 | |||
3665 | 261 | return True | ||
3666 | 262 | |||
3667 | 263 | # required function for import modules | ||
3668 | 264 | def get_valid_arches(self): | ||
3669 | 265 | return ["i386", "ppc", "x86_64", "x86",] | ||
3670 | 266 | |||
3671 | 267 | # required function for import modules | ||
3672 | 268 | def get_valid_breeds(self): | ||
3673 | 269 | return ["debian","ubuntu"] | ||
3674 | 270 | |||
3675 | 271 | # required function for import modules | ||
3676 | 272 | def get_valid_os_versions(self): | ||
3677 | 273 | if self.breed == "debian": | ||
3678 | 274 | return ["etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "experimental",] | ||
3679 | 275 | elif self.breed == "ubuntu": | ||
3680 | 276 | return ["dapper", "hardy", "karmic", "lucid", "maverick", "natty",] | ||
3681 | 277 | else: | ||
3682 | 278 | return [] | ||
3683 | 279 | |||
3684 | 280 | def get_valid_repo_breeds(self): | ||
3685 | 281 | return ["apt",] | ||
3686 | 282 | |||
3687 | 283 | def get_release_files(self): | ||
3688 | 284 | """ | ||
3689 | 285 | Find distro release packages. | ||
3690 | 286 | """ | ||
3691 | 287 | return glob.glob(os.path.join(self.get_rootdir(), "dists/*")) | ||
3692 | 288 | |||
3693 | 289 | def get_breed_from_directory(self): | ||
3694 | 290 | for breed in self.get_valid_breeds(): | ||
3695 | 291 | # NOTE : Although we break the loop after the first match, | ||
3696 | 292 | # multiple debian-derived distros can actually live in the same pool -- JP | ||
3697 | 293 | d = os.path.join(self.mirror, breed) | ||
3698 | 294 | if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.mirror)) or os.path.basename(self.mirror) == breed: | ||
3699 | 295 | return breed | ||
3700 | 296 | else: | ||
3701 | 297 | return None | ||
3702 | 298 | |||
3703 | 299 | def get_tree_location(self, distro): | ||
3704 | 300 | """ | ||
3705 | 301 | Once a distribution is identified, find the part of the distribution | ||
3706 | 302 | that has the URL in it that we want to use for kickstarting the | ||
3707 | 303 | distribution, and create a ksmeta variable $tree that contains this. | ||
3708 | 304 | """ | ||
3709 | 305 | |||
3710 | 306 | base = self.get_rootdir() | ||
3711 | 307 | |||
3712 | 308 | if self.network_root is None: | ||
3713 | 309 | dists_path = os.path.join(self.path, "dists") | ||
3714 | 310 | if os.path.isdir(dists_path): | ||
3715 | 311 | tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name) | ||
3716 | 312 | else: | ||
3717 | 313 | tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name) | ||
3718 | 314 | self.set_install_tree(distro, tree) | ||
3719 | 315 | else: | ||
3720 | 316 | # where we assign the kickstart source is relative to our current directory | ||
3721 | 317 | # and the input start directory in the crawl. We find the path segments | ||
3722 | 318 | # between and tack them on the network source path to find the explicit | ||
3723 | 319 | # network path to the distro that Anaconda can digest. | ||
3724 | 320 | tail = self.path_tail(self.path, base) | ||
3725 | 321 | tree = self.network_root[:-1] + tail | ||
3726 | 322 | self.set_install_tree(distro, tree) | ||
3727 | 323 | |||
3728 | 324 | return | ||
3729 | 325 | |||
3730 | 326 | def repo_finder(self, distros_added): | ||
3731 | 327 | for distro in distros_added: | ||
3732 | 328 | self.logger.info("traversing distro %s" % distro.name) | ||
3733 | 329 | # FIXME : Shouldn't the value of self.network_root decide this? | ||
3734 | 330 | if distro.kernel.find("ks_mirror") != -1: | ||
3735 | 331 | basepath = os.path.dirname(distro.kernel) | ||
3736 | 332 | top = self.get_rootdir() | ||
3737 | 333 | self.logger.info("descent into %s" % top) | ||
3738 | 334 | dists_path = os.path.join(self.path, "dists") | ||
3739 | 335 | if not os.path.isdir(dists_path): | ||
3740 | 336 | self.process_repos() | ||
3741 | 337 | else: | ||
3742 | 338 | self.logger.info("this distro isn't mirrored") | ||
3743 | 339 | |||
3744 | 340 | def process_repos(self): | ||
3745 | 341 | pass | ||
3746 | 342 | |||
3747 | 343 | def distro_adder(self,distros_added,dirname,fnames): | ||
3748 | 344 | """ | ||
3749 | 345 | This is an os.path.walk routine that finds distributions in the directory | ||
3750 | 346 | to be scanned and then creates them. | ||
3751 | 347 | """ | ||
3752 | 348 | |||
3753 | 349 | # FIXME: If there are more than one kernel or initrd image on the same directory, | ||
3754 | 350 | # results are unpredictable | ||
3755 | 351 | |||
3756 | 352 | initrd = None | ||
3757 | 353 | kernel = None | ||
3758 | 354 | |||
3759 | 355 | for x in fnames: | ||
3760 | 356 | adtls = [] | ||
3761 | 357 | |||
3762 | 358 | fullname = os.path.join(dirname,x) | ||
3763 | 359 | if os.path.islink(fullname) and os.path.isdir(fullname): | ||
3764 | 360 | if fullname.startswith(self.path): | ||
3765 | 361 | self.logger.warning("avoiding symlink loop") | ||
3766 | 362 | continue | ||
3767 | 363 | self.logger.info("following symlink: %s" % fullname) | ||
3768 | 364 | os.path.walk(fullname, self.distro_adder, distros_added) | ||
3769 | 365 | |||
3770 | 366 | if ( x.startswith("initrd.gz") ) and x != "initrd.size": | ||
3771 | 367 | initrd = os.path.join(dirname,x) | ||
3772 | 368 | if ( x.startswith("linux") ) and x.find("initrd") == -1: | ||
3773 | 369 | kernel = os.path.join(dirname,x) | ||
3774 | 370 | |||
3775 | 371 | # if we've collected a matching kernel and initrd pair, turn them into an entry and add it to the list | ||
3776 | 372 | if initrd is not None and kernel is not None: | ||
3777 | 373 | adtls.append(self.add_entry(dirname,kernel,initrd)) | ||
3778 | 374 | kernel = None | ||
3779 | 375 | initrd = None | ||
3780 | 376 | |||
3781 | 377 | for adtl in adtls: | ||
3782 | 378 | distros_added.extend(adtl) | ||
3783 | 379 | |||
3784 | 380 | def add_entry(self,dirname,kernel,initrd): | ||
3785 | 381 | """ | ||
3786 | 382 | When we find a directory with a valid kernel/initrd in it, create the distribution objects | ||
3787 | 383 | as appropriate and save them. This includes creating xen and rescue distros/profiles | ||
3788 | 384 | if possible. | ||
3789 | 385 | """ | ||
3790 | 386 | |||
3791 | 387 | proposed_name = self.get_proposed_name(dirname,kernel) | ||
3792 | 388 | proposed_arch = self.get_proposed_arch(dirname) | ||
3793 | 389 | |||
3794 | 390 | if self.arch and proposed_arch and self.arch != proposed_arch: | ||
3795 | 391 | utils.die(self.logger,"Arch from pathname (%s) does not match the supplied one (%s)"%(proposed_arch,self.arch)) | ||
3796 | 392 | |||
3797 | 393 | archs = self.learn_arch_from_tree() | ||
3798 | 394 | if not archs: | ||
3799 | 395 | if self.arch: | ||
3800 | 396 | archs.append( self.arch ) | ||
3801 | 397 | else: | ||
3802 | 398 | if self.arch and self.arch not in archs: | ||
3803 | 399 | utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir())) | ||
3804 | 400 | if proposed_arch: | ||
3805 | 401 | if archs and proposed_arch not in archs: | ||
3806 | 402 | self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir())) | ||
3807 | 403 | return | ||
3808 | 404 | |||
3809 | 405 | archs = [ proposed_arch ] | ||
3810 | 406 | |||
3811 | 407 | if len(archs)>1: | ||
3812 | 408 | self.logger.warning("multiple archs found: %s" % (archs)) | ||
3813 | 409 | |||
3814 | 410 | distros_added = [] | ||
3815 | 411 | |||
3816 | 412 | for pxe_arch in archs: | ||
3817 | 413 | name = proposed_name + "-" + pxe_arch | ||
3818 | 414 | existing_distro = self.distros.find(name=name) | ||
3819 | 415 | |||
3820 | 416 | if existing_distro is not None: | ||
3821 | 417 | self.logger.warning("skipping import, as distro name already exists: %s" % name) | ||
3822 | 418 | continue | ||
3823 | 419 | |||
3824 | 420 | else: | ||
3825 | 421 | self.logger.info("creating new distro: %s" % name) | ||
3826 | 422 | distro = self.config.new_distro() | ||
3827 | 423 | |||
3828 | 424 | if name.find("-autoboot") != -1: | ||
3829 | 425 | # this is an artifact of some EL-3 imports | ||
3830 | 426 | continue | ||
3831 | 427 | |||
3832 | 428 | distro.set_name(name) | ||
3833 | 429 | distro.set_kernel(kernel) | ||
3834 | 430 | distro.set_initrd(initrd) | ||
3835 | 431 | distro.set_arch(pxe_arch) | ||
3836 | 432 | distro.set_breed(self.breed) | ||
3837 | 433 | # If a version was supplied on command line, we set it now | ||
3838 | 434 | if self.os_version: | ||
3839 | 435 | distro.set_os_version(self.os_version) | ||
3840 | 436 | |||
3841 | 437 | self.distros.add(distro,save=True) | ||
3842 | 438 | distros_added.append(distro) | ||
3843 | 439 | |||
3844 | 440 | existing_profile = self.profiles.find(name=name) | ||
3845 | 441 | |||
3846 | 442 | # see if the profile name is already used, if so, skip it and | ||
3847 | 443 | # do not modify the existing profile | ||
3848 | 444 | |||
3849 | 445 | if existing_profile is None: | ||
3850 | 446 | self.logger.info("creating new profile: %s" % name) | ||
3851 | 447 | #FIXME: The created profile holds a default kickstart, and should be breed specific | ||
3852 | 448 | profile = self.config.new_profile() | ||
3853 | 449 | else: | ||
3854 | 450 | self.logger.info("skipping existing profile, name already exists: %s" % name) | ||
3855 | 451 | continue | ||
3856 | 452 | |||
3857 | 453 | # save our minimal profile which just points to the distribution and a good | ||
3858 | 454 | # default answer file | ||
3859 | 455 | |||
3860 | 456 | profile.set_name(name) | ||
3861 | 457 | profile.set_distro(name) | ||
3862 | 458 | profile.set_kickstart(self.kickstart_file) | ||
3863 | 459 | |||
3864 | 460 | # depending on the name of the profile we can define a good virt-type | ||
3865 | 461 | # for usage with koan | ||
3866 | 462 | |||
3867 | 463 | if name.find("-xen") != -1: | ||
3868 | 464 | profile.set_virt_type("xenpv") | ||
3869 | 465 | elif name.find("vmware") != -1: | ||
3870 | 466 | profile.set_virt_type("vmware") | ||
3871 | 467 | else: | ||
3872 | 468 | profile.set_virt_type("qemu") | ||
3873 | 469 | |||
3874 | 470 | # save our new profile to the collection | ||
3875 | 471 | |||
3876 | 472 | self.profiles.add(profile,save=True) | ||
3877 | 473 | |||
3878 | 474 | return distros_added | ||
3879 | 475 | |||
3880 | 476 | def get_proposed_name(self,dirname,kernel=None): | ||
3881 | 477 | """ | ||
3882 | 478 | Given a directory name where we have a kernel/initrd pair, try to autoname | ||
3883 | 479 | the distribution (and profile) object based on the contents of that path | ||
3884 | 480 | """ | ||
3885 | 481 | |||
3886 | 482 | if self.network_root is not None: | ||
3887 | 483 | name = self.mirror_name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/")) | ||
3888 | 484 | else: | ||
3889 | 485 | # remove the part that says /var/www/cobbler/ks_mirror/name | ||
3890 | 486 | name = "-".join(dirname.split("/")[5:]) | ||
3891 | 487 | |||
3892 | 488 | if kernel is not None and kernel.find("PAE") != -1: | ||
3893 | 489 | name = name + "-PAE" | ||
3894 | 490 | |||
3895 | 491 | # These are all Ubuntu's doing, the netboot images are buried pretty | ||
3896 | 492 | # deep. ;-) -JC | ||
3897 | 493 | name = name.replace("-netboot","") | ||
3898 | 494 | name = name.replace("-ubuntu-installer","") | ||
3899 | 495 | name = name.replace("-amd64","") | ||
3900 | 496 | name = name.replace("-i386","") | ||
3901 | 497 | |||
3902 | 498 | # we know that some kernel paths should not be in the name | ||
3903 | 499 | |||
3904 | 500 | name = name.replace("-images","") | ||
3905 | 501 | name = name.replace("-pxeboot","") | ||
3906 | 502 | name = name.replace("-install","") | ||
3907 | 503 | name = name.replace("-isolinux","") | ||
3908 | 504 | |||
3909 | 505 | # some paths above the media root may have extra path segments we want | ||
3910 | 506 | # to clean up | ||
3911 | 507 | |||
3912 | 508 | name = name.replace("-os","") | ||
3913 | 509 | name = name.replace("-tree","") | ||
3914 | 510 | name = name.replace("var-www-cobbler-", "") | ||
3915 | 511 | name = name.replace("ks_mirror-","") | ||
3916 | 512 | name = name.replace("--","-") | ||
3917 | 513 | |||
3918 | 514 | # remove any architecture name related string, as real arch will be appended later | ||
3919 | 515 | # remove any architecture-related string, as the real arch will be appended later | ||
3920 | 516 | name = name.replace("chrp","ppc64") | ||
3921 | 517 | |||
3922 | 518 | for separator in [ '-' , '_' , '.' ] : | ||
3923 | 519 | for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]: | ||
3924 | 520 | name = name.replace("%s%s" % ( separator , arch ),"") | ||
3925 | 521 | |||
3926 | 522 | return name | ||
3927 | 523 | |||
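The cleanup chain above can be condensed into one helper. The token lists are copied from the function body; `clean_distro_name` is a hypothetical name for illustration, not cobbler API, and it shows why the arch suffix is stripped here only to be re-appended by add_entry.

```python
# Condensed sketch of get_proposed_name's cleanup chain; token lists
# copied from the listing, helper name hypothetical.
def clean_distro_name(name):
    for token in ("-netboot", "-ubuntu-installer", "-amd64", "-i386",
                  "-images", "-pxeboot", "-install", "-isolinux",
                  "-os", "-tree", "var-www-cobbler-", "ks_mirror-"):
        name = name.replace(token, "")
    name = name.replace("--", "-")
    # strip any remaining arch token; the real arch is re-appended later
    for separator in ("-", "_", "."):
        for arch in ("i386", "x86_64", "ia64", "ppc64", "ppc32", "ppc",
                     "x86", "s390x", "s390", "386", "amd"):
            name = name.replace(separator + arch, "")
    return name
```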
3928 | 524 | def get_proposed_arch(self,dirname): | ||
3929 | 525 | """ | ||
3930 | 526 | Given a directory name, can we infer an architecture from a path segment? | ||
3931 | 527 | """ | ||
3932 | 528 | if dirname.find("x86_64") != -1 or dirname.find("amd") != -1: | ||
3933 | 529 | return "x86_64" | ||
3934 | 530 | if dirname.find("ia64") != -1: | ||
3935 | 531 | return "ia64" | ||
3936 | 532 | if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1: | ||
3937 | 533 | return "i386" | ||
3938 | 534 | if dirname.find("s390x") != -1: | ||
3939 | 535 | return "s390x" | ||
3940 | 536 | if dirname.find("s390") != -1: | ||
3941 | 537 | return "s390" | ||
3942 | 538 | if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1: | ||
3943 | 539 | return "ppc64" | ||
3944 | 540 | if dirname.find("ppc32") != -1: | ||
3945 | 541 | return "ppc" | ||
3946 | 542 | if dirname.find("ppc") != -1: | ||
3947 | 543 | return "ppc" | ||
3948 | 544 | return None | ||
3949 | 545 | |||
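The order of these checks is significant (x86_64 before x86/386, s390x before s390, ppc64 before ppc). A table-driven sketch of the same guess, with `propose_arch` as a hypothetical stand-in for the method:

```python
# Table-driven sketch of get_proposed_arch's if-chain; row order
# preserves the original precedence of substring checks.
def propose_arch(dirname):
    order = [
        ("x86_64", ("x86_64", "amd")),
        ("ia64", ("ia64",)),
        ("i386", ("i386", "386", "x86")),
        ("s390x", ("s390x",)),
        ("s390", ("s390",)),
        ("ppc64", ("ppc64", "chrp")),
        ("ppc", ("ppc32", "ppc")),
    ]
    for arch, needles in order:
        if any(n in dirname for n in needles):
            return arch
    return None
```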
3950 | 546 | def arch_walker(self,foo,dirname,fnames): | ||
3951 | 547 | """ | ||
3952 | 548 | See docs on learn_arch_from_tree. | ||
3953 | 549 | |||
3954 | 550 | The TRY_LIST is used to speed up search, and should be dropped for default importer | ||
3955 | 551 | Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem | ||
3956 | 552 | |||
3957 | 553 | This method is useful for getting the archs, but also the package type and a rough guess of the breed | ||
3958 | 554 | """ | ||
3959 | 555 | |||
3960 | 556 | # try to find a kernel header package and then look at its arch. | ||
3961 | 557 | for x in fnames: | ||
3962 | 558 | if self.match_kernelarch_file(x): | ||
3963 | 559 | for arch in self.get_valid_arches(): | ||
3964 | 560 | if x.find(arch) != -1: | ||
3965 | 561 | foo[arch] = 1 | ||
3966 | 562 | for arch in [ "i686" , "amd64" ]: | ||
3967 | 563 | if x.find(arch) != -1: | ||
3968 | 564 | foo[arch] = 1 | ||
3969 | 565 | |||
3970 | 566 | def kickstart_finder(self,distros_added): | ||
3971 | 567 | """ | ||
3972 | 568 | For all of the profiles in the config without a kickstart, use the | ||
3973 | 569 | given kickstart file, or look at the kernel path and try to | ||
3974 | 570 | guess the distro from it; if we can, assign a kickstart | ||
3975 | 571 | if one is available for it. | ||
3976 | 572 | """ | ||
3977 | 573 | for profile in self.profiles: | ||
3978 | 574 | distro = self.distros.find(name=profile.get_conceptual_parent().name) | ||
3979 | 575 | if distro is None or not (distro in distros_added): | ||
3980 | 576 | continue | ||
3981 | 577 | |||
3982 | 578 | kdir = os.path.dirname(distro.kernel) | ||
3983 | 579 | if self.kickstart_file is None: | ||
3984 | 580 | for file in self.get_release_files(): | ||
3985 | 581 | results = self.scan_pkg_filename(file) | ||
3986 | 582 | # FIXME : If os is not found on tree but set with CLI, no kickstart is searched | ||
3987 | 583 | if results is None: | ||
3988 | 584 | self.logger.warning("skipping %s" % file) | ||
3989 | 585 | continue | ||
3990 | 586 | (flavor, major, minor, release) = results | ||
3991 | 587 | # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata | ||
3992 | 588 | #version , ks = self.set_variance(flavor, major, minor, distro.arch) | ||
3993 | 589 | if self.os_version: | ||
3994 | 590 | if self.os_version != flavor: | ||
3995 | 591 | utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor)) | ||
3996 | 592 | distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch)) | ||
3997 | 593 | distro.set_os_version(flavor) | ||
3998 | 594 | # is this even valid for debian/ubuntu? - jcammarata | ||
3999 | 595 | #ds = self.get_datestamp() | ||
4000 | 596 | #if ds is not None: | ||
4001 | 597 | # distro.set_tree_build_time(ds) | ||
4002 | 598 | profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed") | ||
4003 | 599 | self.profiles.add(profile,save=True) | ||
4004 | 600 | |||
4005 | 601 | self.configure_tree_location(distro) | ||
4006 | 602 | self.distros.add(distro,save=True) # re-save | ||
4007 | 603 | self.api.serialize() | ||
4008 | 604 | |||
4009 | 605 | def configure_tree_location(self, distro): | ||
4010 | 606 | """ | ||
4011 | 607 | Once a distribution is identified, find the part of the distribution | ||
4012 | 608 | that holds the URL we want to use for kickstarting it, and create | ||
4013 | 609 | a ksmeta variable $tree that contains this URL. | ||
4014 | 610 | """ | ||
4015 | 611 | |||
4016 | 612 | base = self.get_rootdir() | ||
4017 | 613 | |||
4018 | 614 | if self.network_root is None: | ||
4019 | 615 | dists_path = os.path.join( self.path , "dists" ) | ||
4020 | 616 | if os.path.isdir( dists_path ): | ||
4021 | 617 | tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name) | ||
4022 | 618 | else: | ||
4023 | 619 | tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name) | ||
4024 | 620 | self.set_install_tree(distro, tree) | ||
4025 | 621 | else: | ||
4026 | 622 | # The kickstart source we assign is relative to our current directory | ||
4027 | 623 | # and the crawl's input start directory. We find the path segments | ||
4028 | 624 | # between the two and tack them onto the network source path to form | ||
4029 | 625 | # the explicit network path to the distro that Anaconda can digest. | ||
4030 | 626 | tail = utils.path_tail(self.path, base) | ||
4031 | 627 | tree = self.network_root[:-1] + tail | ||
4032 | 628 | self.set_install_tree(distro, tree) | ||
4033 | 629 | |||
4034 | 630 | def get_rootdir(self): | ||
4035 | 631 | return self.mirror | ||
4036 | 632 | |||
4037 | 633 | def get_pkgdir(self): | ||
4038 | 634 | if not self.pkgdir: | ||
4039 | 635 | return None | ||
4040 | 636 | return os.path.join(self.get_rootdir(),self.pkgdir) | ||
4041 | 637 | |||
4042 | 638 | def set_install_tree(self, distro, url): | ||
4043 | 639 | distro.ks_meta["tree"] = url | ||
4044 | 640 | |||
4045 | 641 | def learn_arch_from_tree(self): | ||
4046 | 642 | """ | ||
4047 | 643 | If a distribution is imported from DVD, there is a good chance the path doesn't | ||
4048 | 644 | contain the arch and we should add it back in so that it's part of the | ||
4049 | 645 | meaningful name ... so this code helps figure out the arch name. This is important | ||
4050 | 646 | for producing predictable distro names (and profile names) from differing import sources | ||
4051 | 647 | """ | ||
4052 | 648 | result = {} | ||
4053 | 649 | # FIXME : this is called only once, should not be a walk | ||
4054 | 650 | if self.get_pkgdir(): | ||
4055 | 651 | os.path.walk(self.get_pkgdir(), self.arch_walker, result) | ||
4056 | 652 | if result.pop("amd64",False): | ||
4057 | 653 | result["x86_64"] = 1 | ||
4058 | 654 | if result.pop("i686",False): | ||
4059 | 655 | result["i386"] = 1 | ||
4060 | 656 | return result.keys() | ||
4061 | 657 | |||
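The amd64/i686 folding at the end can be isolated as a small helper. This sketch assumes the walker produced a plain list of arch strings; `normalize_arches` is a hypothetical name.

```python
# Sketch of learn_arch_from_tree's normalization step: Debian package
# arches amd64 and i686 are folded into cobbler's canonical
# x86_64/i386 names. `found` stands in for the keys arch_walker set.
def normalize_arches(found):
    result = dict.fromkeys(found, 1)
    if result.pop("amd64", False):
        result["x86_64"] = 1
    if result.pop("i686", False):
        result["i386"] = 1
    return sorted(result)
```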
4062 | 658 | def match_kernelarch_file(self, filename): | ||
4063 | 659 | """ | ||
4064 | 660 | Is the given filename a kernel filename? | ||
4065 | 661 | """ | ||
4066 | 662 | if not filename.endswith("deb"): | ||
4067 | 663 | return False | ||
4068 | 664 | if filename.startswith("linux-headers-"): | ||
4069 | 665 | return True | ||
4070 | 666 | return False | ||
4071 | 667 | |||
4072 | 668 | def scan_pkg_filename(self, file): | ||
4073 | 669 | """ | ||
4074 | 670 | Determine what the distro is based on the release package filename. | ||
4075 | 671 | """ | ||
4076 | 672 | # FIXME: all of these dist_names should probably be put in a function | ||
4077 | 673 | # which would be called in place of looking in codes.py. Right now | ||
4078 | 674 | # you have to update both codes.py and this to add a new release | ||
4079 | 675 | if self.breed == "debian": | ||
4080 | 676 | dist_names = ['etch','lenny',] | ||
4081 | 677 | elif self.breed == "ubuntu": | ||
4082 | 678 | dist_names = ['dapper','hardy','intrepid','jaunty','karmic','lynx','maverick','natty',] | ||
4083 | 679 | else: | ||
4084 | 680 | return None | ||
4085 | 681 | |||
4086 | 682 | if os.path.basename(file) in dist_names: | ||
4087 | 683 | release_file = os.path.join(file,'Release') | ||
4088 | 684 | self.logger.info("Found %s release file: %s" % (self.breed,release_file)) | ||
4089 | 685 | |||
4090 | 686 | f = open(release_file,'r') | ||
4091 | 687 | lines = f.readlines() | ||
4092 | 688 | f.close() | ||
4093 | 689 | |||
4094 | 690 | for line in lines: | ||
4095 | 691 | if line.lower().startswith('version: '): | ||
4096 | 692 | version = line.split(':')[1].strip() | ||
4097 | 693 | values = version.split('.') | ||
4098 | 694 | if len(values) == 1: | ||
4099 | 695 | # I don't think you'd ever hit this currently with debian or ubuntu, | ||
4100 | 696 | # just including it for safety reasons | ||
4101 | 697 | return (os.path.basename(file), values[0], "0", "0") | ||
4102 | 698 | elif len(values) == 2: | ||
4103 | 699 | return (os.path.basename(file), values[0], values[1], "0") | ||
4104 | 700 | elif len(values) > 2: | ||
4105 | 701 | return (os.path.basename(file), values[0], values[1], values[2]) | ||
4106 | 702 | return None | ||
4107 | 703 | |||
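The Release parsing inside scan_pkg_filename reduces to: find the "Version:" header and split it into up to three parts, padding with "0" as the listing does. A standalone sketch, where `suite` stands in for os.path.basename(file) (e.g. "natty" or "lenny") and the helper name is hypothetical:

```python
# Standalone sketch of the Release-file parsing above; padding with
# "0" reproduces the len(values) == 1 / == 2 / > 2 branches.
def parse_release_version(suite, release_text):
    for line in release_text.splitlines():
        if line.lower().startswith("version: "):
            values = line.split(":")[1].strip().split(".")
            values += ["0", "0"]  # pad one- and two-part versions
            return (suite, values[0], values[1], values[2])
    return None
```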
4108 | 704 | def get_datestamp(self): | ||
4109 | 705 | """ | ||
4110 | 706 | Not used for debian/ubuntu... should probably be removed? - jcammarata | ||
4111 | 707 | """ | ||
4112 | 708 | pass | ||
4113 | 709 | |||
4114 | 710 | def set_variance(self, flavor, major, minor, arch): | ||
4115 | 711 | """ | ||
4116 | 712 | Set distro specific versioning. | ||
4117 | 713 | """ | ||
4118 | 714 | # I don't think this is required anymore, as the scan_pkg_filename() function | ||
4119 | 715 | # above does everything we need it to - jcammarata | ||
4120 | 716 | # | ||
4121 | 717 | #if self.breed == "debian": | ||
4122 | 718 | # dist_names = { '4.0' : "etch" , '5.0' : "lenny" } | ||
4123 | 719 | # dist_vers = "%s.%s" % ( major , minor ) | ||
4124 | 720 | # os_version = dist_names[dist_vers] | ||
4125 | 721 | # | ||
4126 | 722 | # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" | ||
4127 | 723 | #elif self.breed == "ubuntu": | ||
4128 | 724 | # # Release names taken from wikipedia | ||
4129 | 725 | # dist_names = { '6.4' :"dapper", | ||
4130 | 726 | # '8.4' :"hardy", | ||
4131 | 727 | # '8.10' :"intrepid", | ||
4132 | 728 | # '9.4' :"jaunty", | ||
4133 | 729 | # '9.10' :"karmic", | ||
4134 | 730 | # '10.4' :"lynx", | ||
4135 | 731 | # '10.10':"maverick", | ||
4136 | 732 | # '11.4' :"natty", | ||
4137 | 733 | # } | ||
4138 | 734 | # dist_vers = "%s.%s" % ( major , minor ) | ||
4139 | 735 | # if not dist_names.has_key( dist_vers ): | ||
4140 | 736 | # dist_names['4ubuntu2.0'] = "IntrepidIbex" | ||
4141 | 737 | # os_version = dist_names[dist_vers] | ||
4142 | 738 | # | ||
4143 | 739 | # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" | ||
4144 | 740 | #else: | ||
4145 | 741 | # return None | ||
4146 | 742 | pass | ||
4147 | 743 | |||
4148 | 744 | def process_repos(self, main_importer, distro): | ||
4149 | 745 | # Create a disabled repository for the new distro, and the security updates | ||
4150 | 746 | # | ||
4151 | 747 | # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage | ||
4152 | 748 | |||
4153 | 749 | repo = item_repo.Repo(main_importer.config) | ||
4154 | 750 | repo.set_breed( "apt" ) | ||
4155 | 751 | repo.set_arch( distro.arch ) | ||
4156 | 752 | repo.set_keep_updated( False ) | ||
4157 | 753 | repo.yumopts["--ignore-release-gpg"] = None | ||
4158 | 754 | repo.yumopts["--verbose"] = None | ||
4159 | 755 | repo.set_name( distro.name ) | ||
4160 | 756 | repo.set_os_version( distro.os_version ) | ||
4161 | 757 | # NOTE : The location of the mirror should come from timezone | ||
4162 | 758 | repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) ) | ||
4163 | 759 | |||
4164 | 760 | security_repo = item_repo.Repo(main_importer.config) | ||
4165 | 761 | security_repo.set_breed( "apt" ) | ||
4166 | 762 | security_repo.set_arch( distro.arch ) | ||
4167 | 763 | security_repo.set_keep_updated( False ) | ||
4168 | 764 | security_repo.yumopts["--ignore-release-gpg"] = None | ||
4169 | 765 | security_repo.yumopts["--verbose"] = None | ||
4170 | 766 | security_repo.set_name( distro.name + "-security" ) | ||
4171 | 767 | security_repo.set_os_version( distro.os_version ) | ||
4172 | 768 | # There are no official mirrors for security updates | ||
4173 | 769 | security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' ) | ||
4174 | 770 | |||
4175 | 771 | self.logger.info("Added repos for %s" % distro.name) | ||
4176 | 772 | repos = main_importer.config.repos() | ||
4177 | 773 | repos.add(repo,save=True) | ||
4178 | 774 | repos.add(security_repo,save=True) | ||
4179 | 775 | |||
4180 | 776 | # ========================================================================== | ||
4181 | 777 | |||
4182 | 778 | def get_import_manager(config,logger): | ||
4183 | 779 | return ImportDebianUbuntuManager(config,logger) | ||
4185 | === renamed file '.pc/43_fix_reposync_env_variable.patch/cobbler/action_reposync.py' => '.pc/43_fix_reposync_env_variable.patch/cobbler/action_reposync.py.THIS' | |||
4186 | === removed file '.pc/applied-patches' | |||
4187 | --- .pc/applied-patches 2011-06-03 09:25:37 +0000 | |||
4188 | +++ .pc/applied-patches 1970-01-01 00:00:00 +0000 | |||
4189 | @@ -1,10 +0,0 @@ | |||
4190 | 1 | 21_cobbler_use_netboot.patch | ||
4191 | 2 | 12_fix_dhcp_restart.patch | ||
4192 | 3 | 05_cobbler_fix_reposync_permissions.patch | ||
4193 | 4 | 33_authn_configfile.patch | ||
4194 | 5 | 34_fix_apache_wont_start.patch | ||
4195 | 6 | 39_cw_remove_vhost.patch | ||
4196 | 7 | 40_ubuntu_bind9_management.patch | ||
4197 | 8 | 41_update_tree_path_with_arch.patch | ||
4198 | 9 | 42_fix_repomirror_create_sync.patch | ||
4199 | 10 | 43_fix_reposync_env_variable.patch | ||
4201 | === modified file 'cobbler/action_check.py' | |||
4202 | --- cobbler/action_check.py 2011-04-18 11:15:59 +0000 | |||
4203 | +++ cobbler/action_check.py 2011-06-09 00:11:01 +0000 | |||
4204 | @@ -66,7 +66,7 @@ | |||
4205 | 66 | mode = self.config.api.get_sync().dns.what() | 66 | mode = self.config.api.get_sync().dns.what() |
4206 | 67 | if mode == "bind": | 67 | if mode == "bind": |
4207 | 68 | self.check_bind_bin(status) | 68 | self.check_bind_bin(status) |
4209 | 69 | self.check_service(status,"bind9") | 69 | self.check_service(status,"named") |
4210 | 70 | elif mode == "dnsmasq" and not self.settings.manage_dhcp: | 70 | elif mode == "dnsmasq" and not self.settings.manage_dhcp: |
4211 | 71 | self.check_dnsmasq_bin(status) | 71 | self.check_dnsmasq_bin(status) |
4212 | 72 | self.check_service(status,"dnsmasq") | 72 | self.check_service(status,"dnsmasq") |
4213 | 73 | 73 | ||
4214 | === modified file 'cobbler/action_reposync.py' | |||
4215 | --- cobbler/action_reposync.py 2011-06-08 17:21:45 +0000 | |||
4216 | +++ cobbler/action_reposync.py 2011-06-09 00:11:01 +0000 | |||
4217 | @@ -485,11 +485,6 @@ | |||
4218 | 485 | arch = "amd64" # FIX potential arch errors | 485 | arch = "amd64" # FIX potential arch errors |
4219 | 486 | cmd = "%s --nosource -a %s" % (cmd, arch) | 486 | cmd = "%s --nosource -a %s" % (cmd, arch) |
4220 | 487 | 487 | ||
4221 | 488 | # Sets an environment variable for subprocess, otherwise debmirror will fail | ||
4222 | 489 | # as it needs this variable to exist. | ||
4223 | 490 | # FIXME: might this break anything? So far it doesn't | ||
4224 | 491 | os.putenv("HOME", "/var/lib/cobbler") | ||
4225 | 492 | |||
4226 | 493 | rc = utils.subprocess_call(self.logger, cmd) | 488 | rc = utils.subprocess_call(self.logger, cmd) |
4227 | 494 | if rc !=0: | 489 | if rc !=0: |
4228 | 495 | utils.die(self.logger,"cobbler reposync failed") | 490 | utils.die(self.logger,"cobbler reposync failed") |
4229 | @@ -569,7 +564,7 @@ | |||
4230 | 569 | a safeguard. | 564 | a safeguard. |
4231 | 570 | """ | 565 | """ |
4232 | 571 | # all_path = os.path.join(repo_path, "*") | 566 | # all_path = os.path.join(repo_path, "*") |
4234 | 572 | cmd1 = "chown -R root:www-data %s" % repo_path | 567 | cmd1 = "chown -R root:apache %s" % repo_path |
4235 | 573 | utils.subprocess_call(self.logger, cmd1) | 568 | utils.subprocess_call(self.logger, cmd1) |
4236 | 574 | 569 | ||
4237 | 575 | cmd2 = "chmod -R 755 %s" % repo_path | 570 | cmd2 = "chmod -R 755 %s" % repo_path |
4238 | 576 | 571 | ||
4239 | === modified file 'cobbler/codes.py' | |||
4240 | --- cobbler/codes.py 2011-05-02 18:26:03 +0000 | |||
4241 | +++ cobbler/codes.py 2011-06-09 00:11:01 +0000 | |||
4242 | @@ -53,8 +53,8 @@ | |||
4243 | 53 | } | 53 | } |
4244 | 54 | 54 | ||
4245 | 55 | VALID_REPO_BREEDS = [ | 55 | VALID_REPO_BREEDS = [ |
4248 | 56 | "rsync", "rhn", "yum", "apt" | 56 | # "rsync", "rhn", "yum", "apt" |
4249 | 57 | # "rsync", "rhn", "yum" | 57 | "rsync", "rhn", "yum" |
4250 | 58 | ] | 58 | ] |
4251 | 59 | 59 | ||
4252 | 60 | def uniquify(seq, idfun=None): | 60 | def uniquify(seq, idfun=None): |
4253 | 61 | 61 | ||
4254 | === modified file 'cobbler/modules/manage_bind.py' | |||
4255 | --- cobbler/modules/manage_bind.py 2011-04-18 11:15:59 +0000 | |||
4256 | +++ cobbler/modules/manage_bind.py 2011-06-09 00:11:01 +0000 | |||
4257 | @@ -180,7 +180,7 @@ | |||
4258 | 180 | """ | 180 | """ |
4259 | 181 | Write out the named.conf main config file from the template. | 181 | Write out the named.conf main config file from the template. |
4260 | 182 | """ | 182 | """ |
4262 | 183 | settings_file = "/etc/bind/named.conf.local" | 183 | settings_file = "/etc/named.conf" |
4263 | 184 | template_file = "/etc/cobbler/named.template" | 184 | template_file = "/etc/cobbler/named.template" |
4264 | 185 | forward_zones = self.settings.manage_forward_zones | 185 | forward_zones = self.settings.manage_forward_zones |
4265 | 186 | reverse_zones = self.settings.manage_reverse_zones | 186 | reverse_zones = self.settings.manage_reverse_zones |
4266 | @@ -291,7 +291,7 @@ | |||
4267 | 291 | 291 | ||
4268 | 292 | metadata['host_record'] = self.__pretty_print_host_records(hosts) | 292 | metadata['host_record'] = self.__pretty_print_host_records(hosts) |
4269 | 293 | 293 | ||
4271 | 294 | zonefilename='/etc/bind/db.' + zone | 294 | zonefilename='/var/named/' + zone |
4272 | 295 | if self.logger is not None: | 295 | if self.logger is not None: |
4273 | 296 | self.logger.info("generating (forward) %s" % zonefilename) | 296 | self.logger.info("generating (forward) %s" % zonefilename) |
4274 | 297 | self.templar.render(template_data, metadata, zonefilename, None) | 297 | self.templar.render(template_data, metadata, zonefilename, None) |
4275 | @@ -313,7 +313,7 @@ | |||
4276 | 313 | 313 | ||
4277 | 314 | metadata['host_record'] = self.__pretty_print_host_records(hosts, rectype='PTR') | 314 | metadata['host_record'] = self.__pretty_print_host_records(hosts, rectype='PTR') |
4278 | 315 | 315 | ||
4280 | 316 | zonefilename='/etc/bind/db.' + zone | 316 | zonefilename='/var/named/' + zone |
4281 | 317 | if self.logger is not None: | 317 | if self.logger is not None: |
4282 | 318 | self.logger.info("generating (reverse) %s" % zonefilename) | 318 | self.logger.info("generating (reverse) %s" % zonefilename) |
4283 | 319 | self.templar.render(template_data, metadata, zonefilename, None) | 319 | self.templar.render(template_data, metadata, zonefilename, None) |
4284 | 320 | 320 | ||
4285 | === modified file 'cobbler/modules/manage_import_debian_ubuntu.py' | |||
4286 | --- cobbler/modules/manage_import_debian_ubuntu.py 2011-06-08 17:21:45 +0000 | |||
4287 | +++ cobbler/modules/manage_import_debian_ubuntu.py 2011-06-09 00:11:01 +0000 | |||
4288 | @@ -187,8 +187,6 @@ | |||
4289 | 187 | else: | 187 | else: |
4290 | 188 | # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again | 188 | # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again |
4291 | 189 | self.path += ("-%s" % self.arch) | 189 | self.path += ("-%s" % self.arch) |
4292 | 190 | # If arch is specified we also need to update the mirror name. | ||
4293 | 191 | self.mirror_name = self.mirror_name + "-" + self.arch | ||
4294 | 192 | 190 | ||
4295 | 193 | # make the output path and mirror content but only if not specifying that a network | 191 | # make the output path and mirror content but only if not specifying that a network |
4296 | 194 | # accessible support location already exists (this is --available-as on the command line) | 192 | # accessible support location already exists (this is --available-as on the command line) |
4297 | @@ -341,10 +339,11 @@ | |||
4298 | 341 | self.logger.info("descent into %s" % top) | 339 | self.logger.info("descent into %s" % top) |
4299 | 342 | dists_path = os.path.join(self.path, "dists") | 340 | dists_path = os.path.join(self.path, "dists") |
4300 | 343 | if not os.path.isdir(dists_path): | 341 | if not os.path.isdir(dists_path): |
4302 | 344 | self.process_repos(self, distro) | 342 | self.process_repos() |
4303 | 345 | else: | 343 | else: |
4304 | 346 | self.logger.info("this distro isn't mirrored") | 344 | self.logger.info("this distro isn't mirrored") |
4305 | 347 | 345 | ||
4306 | 346 | <<<<<<< TREE | ||
4307 | 348 | def get_repo_mirror_from_apt(self): | 347 | def get_repo_mirror_from_apt(self): |
4308 | 349 | """ | 348 | """ |
4309 | 350 | This tries to determine the apt mirror/archive to use (when processing repos) | 349 | This tries to determine the apt mirror/archive to use (when processing repos) |
4310 | @@ -364,6 +363,11 @@ | |||
4311 | 364 | 363 | ||
4312 | 365 | return mirror | 364 | return mirror |
4313 | 366 | 365 | ||
4314 | 366 | ======= | ||
4315 | 367 | def process_repos(self): | ||
4316 | 368 | pass | ||
4317 | 369 | |||
4318 | 370 | >>>>>>> MERGE-SOURCE | ||
4319 | 367 | def distro_adder(self,distros_added,dirname,fnames): | 371 | def distro_adder(self,distros_added,dirname,fnames): |
4320 | 368 | """ | 372 | """ |
4321 | 369 | This is an os.path.walk routine that finds distributions in the directory | 373 | This is an os.path.walk routine that finds distributions in the directory |
4322 | @@ -387,9 +391,9 @@ | |||
4323 | 387 | self.logger.info("following symlink: %s" % fullname) | 391 | self.logger.info("following symlink: %s" % fullname) |
4324 | 388 | os.path.walk(fullname, self.distro_adder, distros_added) | 392 | os.path.walk(fullname, self.distro_adder, distros_added) |
4325 | 389 | 393 | ||
4327 | 390 | if ( x.startswith("initrd.gz") ) and x != "initrd.size": | 394 | if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") or x.startswith("vmkboot.gz") ) and x != "initrd.size": |
4328 | 391 | initrd = os.path.join(dirname,x) | 395 | initrd = os.path.join(dirname,x) |
-            if ( x.startswith("linux") ) and x.find("initrd") == -1:
+            if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") or x.startswith("mboot.c32") ) and x.find("initrd") == -1:
                 kernel = os.path.join(dirname,x)

         # if we've collected a matching kernel and initrd pair, turn the in and add them to the list
@@ -788,12 +792,17 @@
         repo.yumopts["--verbose"] = None
         repo.set_name( distro.name )
         repo.set_os_version( distro.os_version )
+<<<<<<< TREE

         if distro.breed == "ubuntu":
             repo.set_mirror( "%s/%s" % (mirror, distro.os_version) )
         else:
             # NOTE : The location of the mirror should come from timezone
             repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , distro.os_version ) )
+=======
+        # NOTE : The location of the mirror should come from timezone
+        repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) )
+>>>>>>> MERGE-SOURCE

         security_repo = item_repo.Repo(main_importer.config)
         security_repo.set_breed( "apt" )
@@ -804,10 +813,14 @@
         security_repo.set_name( distro.name + "-security" )
         security_repo.set_os_version( distro.os_version )
         # There are no official mirrors for security updates
+<<<<<<< TREE
         if distro.breed == "ubuntu":
             security_repo.set_mirror( "%s/%s-security" % (mirror, distro.os_version) )
         else:
             security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % distro.os_version )
+=======
+        security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' )
+>>>>>>> MERGE-SOURCE

         self.logger.info("Added repos for %s" % distro.name)
         repos = main_importer.config.repos()

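The widened kernel test in the first hunk above can be sketched as a standalone predicate (a hypothetical helper for illustration; in cobbler the check runs inline in the distro-import loop):

```python
def looks_like_kernel(filename):
    # Accept any of the kernel-image prefixes the new code checks for,
    # while still excluding initrd images by name.
    prefixes = ("vmlinu", "kernel.img", "linux", "mboot.c32")
    return filename.startswith(prefixes) and filename.find("initrd") == -1

print(looks_like_kernel("vmlinuz-2.6.38-8-generic"))  # True
print(looks_like_kernel("linux"))                     # True (Debian/Ubuntu netboot kernel)
print(looks_like_kernel("initrd.gz"))                 # False
```

The practical effect of the change is that ESXi (`mboot.c32`), s390 (`kernel.img`) and plain `vmlinu*` images now match, instead of only files starting with `linux`.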
=== modified file 'cobbler/modules/sync_post_restart_services.py'
--- cobbler/modules/sync_post_restart_services.py	2011-04-18 11:15:59 +0000
+++ cobbler/modules/sync_post_restart_services.py	2011-06-09 00:11:01 +0000
@@ -42,7 +42,7 @@
             if rc != 0:
                logger.error("dhcpd -t failed")
                return 1
-            rc = utils.subprocess_call(logger,"service isc-dhcp-server restart", shell=True)
+            rc = utils.subprocess_call(logger,"service dhcpd restart", shell=True)
         elif which_dhcp_module == "manage_dnsmasq":
            if restart_dhcp != "0":
               rc = utils.subprocess_call(logger, "service dnsmasq restart")
@@ -53,7 +53,7 @@

    if manage_dns != "0" and restart_dns != "0":
       if which_dns_module == "manage_bind":
-         rc = utils.subprocess_call(logger, "service bind9 restart", shell=True)
+         rc = utils.subprocess_call(logger, "service named restart", shell=True)
       elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq:
          rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True)
       elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq:

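The service names are the crux of this hunk: Ubuntu's init scripts are `isc-dhcp-server` and `bind9`, while the merge source restores the Fedora/RHEL names `dhcpd` and `named`, which do not exist on Ubuntu. A minimal sketch of picking the right name per platform (a hypothetical mapping for illustration, not cobbler's actual API; cobbler hardcodes one name per patch level):

```python
# Assumed per-platform init-script names; "redhat" and "ubuntu" keys are
# illustrative labels, not values cobbler itself uses.
DHCP_SERVICES = {"ubuntu": "isc-dhcp-server", "redhat": "dhcpd"}
DNS_SERVICES = {"ubuntu": "bind9", "redhat": "named"}

def restart_command(kind, platform):
    # Build the same "service <name> restart" string the module passes
    # to utils.subprocess_call().
    services = DHCP_SERVICES if kind == "dhcp" else DNS_SERVICES
    return "service %s restart" % services[platform]

print(restart_command("dns", "ubuntu"))   # service bind9 restart
print(restart_command("dhcp", "redhat"))  # service dhcpd restart
```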
=== modified file 'config/cobbler_web.conf'
--- config/cobbler_web.conf	2011-04-15 12:47:39 +0000
+++ config/cobbler_web.conf	2011-06-09 00:11:01 +0000
@@ -1,10 +1,14 @@
 # This configuration file enables the cobbler web
 # interface (django version)
 
+<VirtualHost *:80>
+
 # Do not log the requests generated from the event notification system
 SetEnvIf Request_URI ".*/op/events/user/.*" dontlog
 # Log only what remains
-#CustomLog logs/access_log combined env=!dontlog
+CustomLog logs/access_log combined env=!dontlog
 
 WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi
 
+</VirtualHost>
+
=== modified file 'config/modules.conf'
--- config/modules.conf	2011-04-04 12:55:44 +0000
+++ config/modules.conf	2011-06-09 00:11:01 +0000
@@ -19,7 +19,7 @@
 # https://fedorahosted.org/cobbler/wiki/CobblerWithLdap
 
 [authentication]
-module = authn_configfile
+module = authn_denyall
 
 # authorization:
 # once a user has been cleared by the WebUI/XMLRPC, what can they do?
 
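For context, `authn_configfile` authenticates web UI and XMLRPC logins against a digest file (conventionally /etc/cobbler/users.digest), whereas `authn_denyall` rejects every login, so this revert locks all users out. Flipping the module back could be sketched like this (sed invocation and file name are illustrative; the change operates on a local copy, not the installed /etc/cobbler/modules.conf):

```shell
# Create a local copy with the merge source's default...
cat > modules.conf <<'EOF'
[authentication]
module = authn_denyall
EOF

# ...and restore the Ubuntu default authentication module.
sed -i 's/^module = authn_denyall$/module = authn_configfile/' modules.conf
grep '^module' modules.conf   # module = authn_configfile
```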
=== modified file 'templates/etc/named.template'
--- templates/etc/named.template	2011-04-18 11:15:59 +0000
+++ templates/etc/named.template	2011-06-09 00:11:01 +0000
@@ -1,14 +1,31 @@
+options {
+        listen-on port 53 { 127.0.0.1; };
+        directory "/var/named";
+        dump-file "/var/named/data/cache_dump.db";
+        statistics-file "/var/named/data/named_stats.txt";
+        memstatistics-file "/var/named/data/named_mem_stats.txt";
+        allow-query { localhost; };
+        recursion yes;
+};
+
+logging {
+        channel default_debug {
+                file "data/named.run";
+                severity dynamic;
+        };
+};
+
 #for $zone in $forward_zones
 zone "${zone}." {
     type master;
-    file "/etc/bind/db.$zone";
+    file "$zone";
 };
 
 #end for
 #for $zone, $arpa in $reverse_zones
 zone "${arpa}." {
     type master;
-    file "/etc/bind/db.$zone";
+    file "$zone";
 };
 
 #end for
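The `#for` loops above are Cheetah template syntax: each forward zone expands to one `zone` stanza. The behavioral difference in this hunk is the zone-file path: the Ubuntu patch wrote absolute /etc/bind/db.$zone paths, while the merge source emits a bare `$zone` filename that named resolves relative to its `directory` option (/var/named here). A rough simulation of the expansion using plain string formatting in place of Cheetah (zone name invented for illustration):

```python
forward_zones = ["example.com"]  # sample data, not from cobbler

# One stanza per zone, matching the merge source's bare-filename form.
stanza = 'zone "%s." {\n    type master;\n    file "%s";\n};\n'
config = "".join(stanza % (zone, zone) for zone in forward_zones)
print(config)
```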
This seems to be unintentional fallout: the merge source reverts several Ubuntu-specific patches (the isc-dhcp-server and bind9 service names, the /etc/bind zone-file paths, the authn_configfile authentication default, and the disabled CustomLog directive), and the manage_import_debian_ubuntu.py hunks still contain unresolved conflict markers.