Merge lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005 into lp:ubuntu/oneiric/cobbler

Proposed by James Westby
Status: Work in progress
Proposed branch: lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005
Merge into: lp:ubuntu/oneiric/cobbler
Diff against target: 4459 lines (+52/-4124) (has conflicts)
24 files modified
.pc/.version (+0/-1)
.pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py (+0/-568)
.pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py (+0/-66)
.pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py (+0/-777)
.pc/33_authn_configfile.patch/config/modules.conf (+0/-86)
.pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf (+0/-14)
.pc/39_cw_remove_vhost.patch/config/cobbler_web.conf (+0/-14)
.pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py (+0/-482)
.pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py (+0/-332)
.pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py (+0/-66)
.pc/40_ubuntu_bind9_management.patch/templates/etc/named.template (+0/-31)
.pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py (+0/-777)
.pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py (+0/-98)
.pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py (+0/-779)
.pc/applied-patches (+0/-10)
cobbler/action_check.py (+1/-1)
cobbler/action_reposync.py (+1/-6)
cobbler/codes.py (+2/-2)
cobbler/modules/manage_bind.py (+3/-3)
cobbler/modules/manage_import_debian_ubuntu.py (+18/-5)
cobbler/modules/sync_post_restart_services.py (+2/-2)
config/cobbler_web.conf (+5/-1)
config/modules.conf (+1/-1)
templates/etc/named.template (+19/-2)
Conflict: can't delete .pc because it is not empty.  Not deleting.
Conflict because .pc is not versioned, but has versioned children.  Versioned directory.
Conflict: can't delete .pc/42_fix_repomirror_create_sync.patch because it is not empty.  Not deleting.
Conflict because .pc/42_fix_repomirror_create_sync.patch is not versioned, but has versioned children.  Versioned directory.
Conflict: can't delete .pc/42_fix_repomirror_create_sync.patch/cobbler because it is not empty.  Not deleting.
Conflict because .pc/42_fix_repomirror_create_sync.patch/cobbler is not versioned, but has versioned children.  Versioned directory.
Conflict: can't delete .pc/43_fix_reposync_env_variable.patch because it is not empty.  Not deleting.
Conflict because .pc/43_fix_reposync_env_variable.patch is not versioned, but has versioned children.  Versioned directory.
Conflict: can't delete .pc/43_fix_reposync_env_variable.patch/cobbler because it is not empty.  Not deleting.
Conflict because .pc/43_fix_reposync_env_variable.patch/cobbler is not versioned, but has versioned children.  Versioned directory.
Contents conflict in .pc/43_fix_reposync_env_variable.patch/cobbler/action_reposync.py
Text conflict in cobbler/modules/manage_import_debian_ubuntu.py
To merge this branch: bzr merge lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005
Reviewer          Review Type   Date Requested   Status
Dave Walker       community                      Disapprove
Ubuntu branches                                  Pending
Review via email: mp+63942@code.launchpad.net

Description of the change

The package history in the archive and the history in the bzr branch differ. As the archive is authoritative, lp:ubuntu/oneiric/cobbler now reflects the archive's history, and the old bzr branch has been pushed to lp:~ubuntu-branches/ubuntu/oneiric/cobbler/oneiric-201106090005. A merge should be performed if necessary.

Dave Walker (davewalker) wrote:

This seems to be unintentional fallout.

review: Disapprove

Unmerged revisions

26. By Andres Rodriguez

* debian/patches/42_fix_repomirror_create_sync.patch: Improve the method used
  to obtain the Ubuntu mirror when python-apt is installed (see the sketch
  below).
* debian/cobbler.postinst: Really fix setting of 'server'. Move the logic that
  obtains the IP to debian/cobbler.config and set it as the default if
  available.
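
For context, the first entry above concerns picking a local Ubuntu mirror when
python-apt is available. That patch is not part of this diff; the following is
only a minimal sketch of the general approach, assuming python-apt's apt_pkg
and aptsources modules. The helper name find_ubuntu_mirror() and the fallback
URL are illustrative and are not cobbler's actual code.

    # Minimal sketch (not cobbler code): pick an Ubuntu mirror from APT's own
    # configuration when python-apt is available, otherwise fall back to the
    # primary archive.

    DEFAULT_MIRROR = "http://archive.ubuntu.com/ubuntu"

    def find_ubuntu_mirror():
        try:
            import apt_pkg
            from aptsources.sourceslist import SourcesList
        except ImportError:
            # python-apt is not installed; keep the stock archive URL
            return DEFAULT_MIRROR
        apt_pkg.init_config()
        for entry in SourcesList():
            # skip commented-out, malformed and deb-src entries
            if entry.invalid or entry.disabled or entry.type != "deb":
                continue
            if entry.uri.startswith("http") and "ubuntu" in entry.uri:
                return entry.uri.rstrip("/")
        return DEFAULT_MIRROR

    if __name__ == "__main__":
        print(find_ubuntu_mirror())

On a stock Ubuntu system this would typically return the primary archive or a
country mirror taken from /etc/apt/sources.list.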

25. By Andres Rodriguez

Un-apply all patches and remove .pc. IMO branches should not carry patches in
their applied state, as that generates huge diffs when updating or working with them.

Preview Diff

=== removed file '.pc/.version'
--- .pc/.version 2011-01-18 12:03:14 +0000
+++ .pc/.version 1970-01-01 00:00:00 +0000
@@ -1,1 +0,0 @@
12
20
=== removed directory '.pc/05_cobbler_fix_reposync_permissions.patch'
=== removed directory '.pc/05_cobbler_fix_reposync_permissions.patch/cobbler'
=== removed file '.pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py'
--- .pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py 2011-01-28 14:39:12 +0000
+++ .pc/05_cobbler_fix_reposync_permissions.patch/cobbler/action_reposync.py 1970-01-01 00:00:00 +0000
@@ -1,568 +0,0 @@
1"""
2Builds out and synchronizes yum repo mirrors.
3Initial support for rsync, perhaps reposync coming later.
4
5Copyright 2006-2007, Red Hat, Inc
6Michael DeHaan <mdehaan@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import os
25import os.path
26import time
27import yaml # Howell-Clark version
28import sys
29HAS_YUM = True
30try:
31 import yum
32except:
33 HAS_YUM = False
34
35import utils
36from cexceptions import *
37import traceback
38import errno
39from utils import _
40import clogger
41
42class RepoSync:
43 """
44 Handles conversion of internal state to the tftpboot tree layout
45 """
46
47 # ==================================================================================
48
49 def __init__(self,config,tries=1,nofail=False,logger=None):
50 """
51 Constructor
52 """
53 self.verbose = True
54 self.api = config.api
55 self.config = config
56 self.distros = config.distros()
57 self.profiles = config.profiles()
58 self.systems = config.systems()
59 self.settings = config.settings()
60 self.repos = config.repos()
61 self.rflags = self.settings.reposync_flags
62 self.tries = tries
63 self.nofail = nofail
64 self.logger = logger
65
66 if logger is None:
67 self.logger = clogger.Logger()
68
69 self.logger.info("hello, reposync")
70
71
72 # ===================================================================
73
74 def run(self, name=None, verbose=True):
75 """
76 Syncs the current repo configuration file with the filesystem.
77 """
78
79 self.logger.info("run, reposync, run!")
80
81 try:
82 self.tries = int(self.tries)
83 except:
84 utils.die(self.logger,"retry value must be an integer")
85
86 self.verbose = verbose
87
88 report_failure = False
89 for repo in self.repos:
90
91 env = repo.environment
92
93 for k in env.keys():
94 self.logger.info("environment: %s=%s" % (k,env[k]))
95 if env[k] is not None:
96 os.putenv(k,env[k])
97
98 if name is not None and repo.name != name:
99 # invoked to sync only a specific repo, this is not the one
100 continue
101 elif name is None and not repo.keep_updated:
102 # invoked to run against all repos, but this one is off
103 self.logger.info("%s is set to not be updated" % repo.name)
104 continue
105
106 repo_mirror = os.path.join(self.settings.webdir, "repo_mirror")
107 repo_path = os.path.join(repo_mirror, repo.name)
108 mirror = repo.mirror
109
110 if not os.path.isdir(repo_path) and not repo.mirror.lower().startswith("rhn://"):
111 os.makedirs(repo_path)
112
113 # which may actually NOT reposync if the repo is set to not mirror locally
114 # but that's a technicality
115
116 for x in range(self.tries+1,1,-1):
117 success = False
118 try:
119 self.sync(repo)
120 success = True
121 except:
122 utils.log_exc(self.logger)
123 self.logger.warning("reposync failed, tries left: %s" % (x-2))
124
125 if not success:
126 report_failure = True
127 if not self.nofail:
128 utils.die(self.logger,"reposync failed, retry limit reached, aborting")
129 else:
130 self.logger.error("reposync failed, retry limit reached, skipping")
131
132 self.update_permissions(repo_path)
133
134 if report_failure:
135 utils.die(self.logger,"overall reposync failed, at least one repo failed to synchronize")
136
137 return True
138
139 # ==================================================================================
140
141 def sync(self, repo):
142
143 """
144 Conditionally sync a repo, based on type.
145 """
146
147 if repo.breed == "rhn":
148 return self.rhn_sync(repo)
149 elif repo.breed == "yum":
150 return self.yum_sync(repo)
151 elif repo.breed == "apt":
152 return self.apt_sync(repo)
153 elif repo.breed == "rsync":
154 return self.rsync_sync(repo)
155 else:
156 utils.die(self.logger,"unable to sync repo (%s), unknown or unsupported repo type (%s)" % (repo.name, repo.breed))
157
158 # ====================================================================================
159
160 def createrepo_walker(self, repo, dirname, fnames):
161 """
162 Used to run createrepo on a copied Yum mirror.
163 """
164 if os.path.exists(dirname) or repo['breed'] == 'rsync':
165 utils.remove_yum_olddata(dirname)
166
167 # add any repo metadata we can use
168 mdoptions = []
169 if os.path.isfile("%s/.origin/repomd.xml" % (dirname)):
170 if not HAS_YUM:
171 utils.die(self.logger,"yum is required to use this feature")
172
173 rmd = yum.repoMDObject.RepoMD('', "%s/.origin/repomd.xml" % (dirname))
174 if rmd.repoData.has_key("group"):
175 groupmdfile = rmd.getData("group").location[1]
176 mdoptions.append("-g %s" % groupmdfile)
177 if rmd.repoData.has_key("prestodelta"):
178 # need createrepo >= 0.9.7 to add deltas
179 if utils.check_dist() == "redhat" or utils.check_dist() == "suse":
180 cmd = "/usr/bin/rpmquery --queryformat=%{VERSION} createrepo"
181 createrepo_ver = utils.subprocess_get(self.logger, cmd)
182 if createrepo_ver >= "0.9.7":
183 mdoptions.append("--deltas")
184 else:
185 self.logger.error("this repo has presto metadata; you must upgrade createrepo to >= 0.9.7 first and then need to resync the repo through cobbler.")
186
187 blended = utils.blender(self.api, False, repo)
188 flags = blended.get("createrepo_flags","(ERROR: FLAGS)")
189 try:
190 # BOOKMARK
191 cmd = "createrepo %s %s %s" % (" ".join(mdoptions), flags, dirname)
192 utils.subprocess_call(self.logger, cmd)
193 except:
194 utils.log_exc(self.logger)
195 self.logger.error("createrepo failed.")
196 del fnames[:] # we're in the right place
197
198 # ====================================================================================
199
200 def rsync_sync(self, repo):
201
202 """
203 Handle copying of rsync:// and rsync-over-ssh repos.
204 """
205
206 repo_mirror = repo.mirror
207
208 if not repo.mirror_locally:
209 utils.die(self.logger,"rsync:// urls must be mirrored locally, yum cannot access them directly")
210
211 if repo.rpm_list != "" and repo.rpm_list != []:
212 self.logger.warning("--rpm-list is not supported for rsync'd repositories")
213
214 # FIXME: don't hardcode
215 dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
216
217 spacer = ""
218 if not repo.mirror.startswith("rsync://") and not repo.mirror.startswith("/"):
219 spacer = "-e ssh"
220 if not repo.mirror.endswith("/"):
221 repo.mirror = "%s/" % repo.mirror
222
223 # FIXME: wrapper for subprocess that logs to logger
224 cmd = "rsync -rltDv %s --delete --exclude-from=/etc/cobbler/rsync.exclude %s %s" % (spacer, repo.mirror, dest_path)
225 rc = utils.subprocess_call(self.logger, cmd)
226
227 if rc !=0:
228 utils.die(self.logger,"cobbler reposync failed")
229 os.path.walk(dest_path, self.createrepo_walker, repo)
230 self.create_local_file(dest_path, repo)
231
232 # ====================================================================================
233
234 def rhn_sync(self, repo):
235
236 """
237 Handle mirroring of RHN repos.
238 """
239
240 repo_mirror = repo.mirror
241
242 # FIXME? warn about not having yum-utils. We don't want to require it in the package because
243 # RHEL4 and RHEL5U0 don't have it.
244
245 if not os.path.exists("/usr/bin/reposync"):
246 utils.die(self.logger,"no /usr/bin/reposync found, please install yum-utils")
247
248 cmd = "" # command to run
249 has_rpm_list = False # flag indicating not to pull the whole repo
250
251 # detect cases that require special handling
252
253 if repo.rpm_list != "" and repo.rpm_list != []:
254 has_rpm_list = True
255
256 # create yum config file for use by reposync
257 # FIXME: don't hardcode
258 dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
259 temp_path = os.path.join(dest_path, ".origin")
260
261 if not os.path.isdir(temp_path):
262 # FIXME: there's a chance this might break the RHN D/L case
263 os.makedirs(temp_path)
264
265 # how we invoke yum-utils depends on whether this is RHN content or not.
266
267
268 # this is the somewhat more-complex RHN case.
269 # NOTE: this requires that you have entitlements for the server and you give the mirror as rhn://$channelname
270 if not repo.mirror_locally:
271 utils.die("rhn:// repos do not work with --mirror-locally=1")
272
273 if has_rpm_list:
274 self.logger.warning("warning: --rpm-list is not supported for RHN content")
275 rest = repo.mirror[6:] # everything after rhn://
276 cmd = "/usr/bin/reposync %s -r %s --download_path=%s" % (self.rflags, rest, "/var/www/cobbler/repo_mirror")
277 if repo.name != rest:
278 args = { "name" : repo.name, "rest" : rest }
279 utils.die(self.logger,"ERROR: repository %(name)s needs to be renamed %(rest)s as the name of the cobbler repository must match the name of the RHN channel" % args)
280
281 if repo.arch == "i386":
282 # counter-intuitive, but we want the newish kernels too
283 repo.arch = "i686"
284
285 if repo.arch != "":
286 cmd = "%s -a %s" % (cmd, repo.arch)
287
288 # now regardless of whether we're doing yumdownloader or reposync
289 # or whether the repo was http://, ftp://, or rhn://, execute all queued
290 # commands here. Any failure at any point stops the operation.
291
292 if repo.mirror_locally:
293 rc = utils.subprocess_call(self.logger, cmd)
294 # Don't die if reposync fails, it is logged
295 # if rc !=0:
296 # utils.die(self.logger,"cobbler reposync failed")
297
298 # some more special case handling for RHN.
299 # create the config file now, because the directory didn't exist earlier
300
301 temp_file = self.create_local_file(temp_path, repo, output=False)
302
303 # now run createrepo to rebuild the index
304
305 if repo.mirror_locally:
306 os.path.walk(dest_path, self.createrepo_walker, repo)
307
308 # create the config file the hosts will use to access the repository.
309
310 self.create_local_file(dest_path, repo)
311
312 # ====================================================================================
313
314 def yum_sync(self, repo):
315
316 """
317 Handle copying of http:// and ftp:// yum repos.
318 """
319
320 repo_mirror = repo.mirror
321
322 # warn about not having yum-utils. We don't want to require it in the package because
323 # RHEL4 and RHEL5U0 don't have it.
324
325 if not os.path.exists("/usr/bin/reposync"):
326 utils.die(self.logger,"no /usr/bin/reposync found, please install yum-utils")
327
328 cmd = "" # command to run
329 has_rpm_list = False # flag indicating not to pull the whole repo
330
331 # detect cases that require special handling
332
333 if repo.rpm_list != "" and repo.rpm_list != []:
334 has_rpm_list = True
335
336 # create yum config file for use by reposync
337 dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
338 temp_path = os.path.join(dest_path, ".origin")
339
340 if not os.path.isdir(temp_path) and repo.mirror_locally:
341 # FIXME: there's a chance this might break the RHN D/L case
342 os.makedirs(temp_path)
343
344 # create the config file that yum will use for the copying
345
346 if repo.mirror_locally:
347 temp_file = self.create_local_file(temp_path, repo, output=False)
348
349 if not has_rpm_list and repo.mirror_locally:
350 # if we have not requested only certain RPMs, use reposync
351 cmd = "/usr/bin/reposync %s --config=%s --repoid=%s --download_path=%s" % (self.rflags, temp_file, repo.name, "/var/www/cobbler/repo_mirror")
352 if repo.arch != "":
353 if repo.arch == "x86":
354 repo.arch = "i386" # FIX potential arch errors
355 if repo.arch == "i386":
356 # counter-intuitive, but we want the newish kernels too
357 cmd = "%s -a i686" % (cmd)
358 else:
359 cmd = "%s -a %s" % (cmd, repo.arch)
360
361 elif repo.mirror_locally:
362
363 # create the output directory if it doesn't exist
364 if not os.path.exists(dest_path):
365 os.makedirs(dest_path)
366
367 use_source = ""
368 if repo.arch == "src":
369 use_source = "--source"
370
371 # older yumdownloader sometimes explodes on --resolvedeps
372 # if this happens to you, upgrade yum & yum-utils
373 extra_flags = self.settings.yumdownloader_flags
374 cmd = "/usr/bin/yumdownloader %s %s --disablerepo=* --enablerepo=%s -c %s --destdir=%s %s" % (extra_flags, use_source, repo.name, temp_file, dest_path, " ".join(repo.rpm_list))
375
376 # now regardless of whether we're doing yumdownloader or reposync
377 # or whether the repo was http://, ftp://, or rhn://, execute all queued
378 # commands here. Any failure at any point stops the operation.
379
380 if repo.mirror_locally:
381 rc = utils.subprocess_call(self.logger, cmd)
382 if rc !=0:
383 utils.die(self.logger,"cobbler reposync failed")
384
385 repodata_path = os.path.join(dest_path, "repodata")
386
387 if not os.path.exists("/usr/bin/wget"):
388 utils.die(self.logger,"no /usr/bin/wget found, please install wget")
389
390 # grab repomd.xml and use it to download any metadata we can use
391 cmd2 = "/usr/bin/wget -q %s/repodata/repomd.xml -O %s/repomd.xml" % (repo_mirror, temp_path)
392 rc = utils.subprocess_call(self.logger,cmd2)
393 if rc == 0:
394 # create our repodata directory now, as any extra metadata we're
395 # about to download probably lives there
396 if not os.path.isdir(repodata_path):
397 os.makedirs(repodata_path)
398 rmd = yum.repoMDObject.RepoMD('', "%s/repomd.xml" % (temp_path))
399 for mdtype in rmd.repoData.keys():
400 # don't download metadata files that are created by default
401 if mdtype not in ["primary", "primary_db", "filelists", "filelists_db", "other", "other_db"]:
402 mdfile = rmd.getData(mdtype).location[1]
403 cmd3 = "/usr/bin/wget -q %s/%s -O %s/%s" % (repo_mirror, mdfile, dest_path, mdfile)
404 utils.subprocess_call(self.logger,cmd3)
405 if rc !=0:
406 utils.die(self.logger,"wget failed")
407
408 # now run createrepo to rebuild the index
409
410 if repo.mirror_locally:
411 os.path.walk(dest_path, self.createrepo_walker, repo)
412
413 # create the config file the hosts will use to access the repository.
414
415 self.create_local_file(dest_path, repo)
416
417 # ====================================================================================
418
419
420 def apt_sync(self, repo):
421
422 """
423 Handle copying of http:// and ftp:// debian repos.
424 """
425
426 repo_mirror = repo.mirror
427
428 # warn about not having mirror program.
429
430 mirror_program = "/usr/bin/debmirror"
431 if not os.path.exists(mirror_program):
432 utils.die(self.logger,"no %s found, please install it"%(mirror_program))
433
434 cmd = "" # command to run
435 has_rpm_list = False # flag indicating not to pull the whole repo
436
437 # detect cases that require special handling
438
439 if repo.rpm_list != "" and repo.rpm_list != []:
440 utils.die(self.logger,"has_rpm_list not yet supported on apt repos")
441
442 if not repo.arch:
443 utils.die(self.logger,"Architecture is required for apt repositories")
444
445 # built destination path for the repo
446 dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name)
447
448 if repo.mirror_locally:
449 mirror = repo.mirror.replace("@@suite@@",repo.os_version)
450
451 idx = mirror.find("://")
452 method = mirror[:idx]
453 mirror = mirror[idx+3:]
454
455 idx = mirror.find("/")
456 host = mirror[:idx]
457 mirror = mirror[idx+1:]
458
459 idx = mirror.rfind("/dists/")
460 suite = mirror[idx+7:]
461 mirror = mirror[:idx]
462
463 mirror_data = "--method=%s --host=%s --root=%s --dist=%s " % ( method , host , mirror , suite )
464
465 # FIXME : flags should come from repo instead of being hardcoded
466
467 rflags = "--passive --nocleanup"
468 for x in repo.yumopts:
469 if repo.yumopts[x]:
470 rflags += " %s %s" % ( x , repo.yumopts[x] )
471 else:
472 rflags += " %s" % x
473 cmd = "%s %s %s %s" % (mirror_program, rflags, mirror_data, dest_path)
474 if repo.arch == "src":
475 cmd = "%s --source" % cmd
476 else:
477 arch = repo.arch
478 if arch == "x86":
479 arch = "i386" # FIX potential arch errors
480 if arch == "x86_64":
481 arch = "amd64" # FIX potential arch errors
482 cmd = "%s --nosource -a %s" % (cmd, arch)
483
484 rc = utils.subprocess_call(self.logger, cmd)
485 if rc !=0:
486 utils.die(self.logger,"cobbler reposync failed")
487
488
489 def create_local_file(self, dest_path, repo, output=True):
490 """
491
492 Creates Yum config files for use by reposync
493
494 Two uses:
495 (A) output=True, Create local files that can be used with yum on provisioned clients to make use of this mirror.
496 (B) output=False, Create a temporary file for yum to feed into yum for mirroring
497 """
498
499 # the output case will generate repo configuration files which are usable
500 # for the installed systems. They need to be made compatible with --server-override
501 # which means they are actually templates, which need to be rendered by a cobbler-sync
502 # on per profile/system basis.
503
504 if output:
505 fname = os.path.join(dest_path,"config.repo")
506 else:
507 fname = os.path.join(dest_path, "%s.repo" % repo.name)
508 self.logger.debug("creating: %s" % fname)
509 if not os.path.exists(dest_path):
510 utils.mkdir(dest_path)
511 config_file = open(fname, "w+")
512 config_file.write("[%s]\n" % repo.name)
513 config_file.write("name=%s\n" % repo.name)
514 optenabled = False
515 optgpgcheck = False
516 if output:
517 if repo.mirror_locally:
518 line = "baseurl=http://${server}/cobbler/repo_mirror/%s\n" % (repo.name)
519 else:
520 mstr = repo.mirror
521 if mstr.startswith("/"):
522 mstr = "file://%s" % mstr
523 line = "baseurl=%s\n" % mstr
524
525 config_file.write(line)
526 # user may have options specific to certain yum plugins
527 # add them to the file
528 for x in repo.yumopts:
529 config_file.write("%s=%s\n" % (x, repo.yumopts[x]))
530 if x == "enabled":
531 optenabled = True
532 if x == "gpgcheck":
533 optgpgcheck = True
534 else:
535 mstr = repo.mirror
536 if mstr.startswith("/"):
537 mstr = "file://%s" % mstr
538 line = "baseurl=%s\n" % mstr
539 if self.settings.http_port not in (80, '80'):
540 http_server = "%s:%s" % (self.settings.server, self.settings.http_port)
541 else:
542 http_server = self.settings.server
543 line = line.replace("@@server@@",http_server)
544 config_file.write(line)
545 if not optenabled:
546 config_file.write("enabled=1\n")
547 config_file.write("priority=%s\n" % repo.priority)
548 # FIXME: potentially might want a way to turn this on/off on a per-repo basis
549 if not optgpgcheck:
550 config_file.write("gpgcheck=0\n")
551 config_file.close()
552 return fname
553
554 # ==================================================================================
555
556 def update_permissions(self, repo_path):
557 """
558 Verifies that permissions and contexts after an rsync are as expected.
559 Sending proper rsync flags should prevent the need for this, though this is largely
560 a safeguard.
561 """
562 # all_path = os.path.join(repo_path, "*")
563 cmd1 = "chown -R root:apache %s" % repo_path
564 utils.subprocess_call(self.logger, cmd1)
565
566 cmd2 = "chmod -R 755 %s" % repo_path
567 utils.subprocess_call(self.logger, cmd2)
568
5690
=== removed directory '.pc/12_fix_dhcp_restart.patch'
=== removed directory '.pc/12_fix_dhcp_restart.patch/cobbler'
=== removed directory '.pc/12_fix_dhcp_restart.patch/cobbler/modules'
=== removed file '.pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py'
--- .pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py 2011-01-28 14:39:12 +0000
+++ .pc/12_fix_dhcp_restart.patch/cobbler/modules/sync_post_restart_services.py 1970-01-01 00:00:00 +0000
@@ -1,66 +0,0 @@
1import distutils.sysconfig
2import sys
3import os
4import traceback
5import cexceptions
6import os
7import sys
8import xmlrpclib
9import cobbler.module_loader as module_loader
10import cobbler.utils as utils
11
12plib = distutils.sysconfig.get_python_lib()
13mod_path="%s/cobbler" % plib
14sys.path.insert(0, mod_path)
15
16def register():
17 # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
18 # the return of this method indicates the trigger type
19 return "/var/lib/cobbler/triggers/sync/post/*"
20
21def run(api,args,logger):
22
23 settings = api.settings()
24
25 manage_dhcp = str(settings.manage_dhcp).lower()
26 manage_dns = str(settings.manage_dns).lower()
27 manage_tftpd = str(settings.manage_tftpd).lower()
28 restart_dhcp = str(settings.restart_dhcp).lower()
29 restart_dns = str(settings.restart_dns).lower()
30
31 which_dhcp_module = module_loader.get_module_from_file("dhcp","module",just_name=True).strip()
32 which_dns_module = module_loader.get_module_from_file("dns","module",just_name=True).strip()
33
34 # special handling as we don't want to restart it twice
35 has_restarted_dnsmasq = False
36
37 rc = 0
38 if manage_dhcp != "0":
39 if which_dhcp_module == "manage_isc":
40 if restart_dhcp != "0":
41 rc = utils.subprocess_call(logger, "dhcpd -t -q", shell=True)
42 if rc != 0:
43 logger.error("dhcpd -t failed")
44 return 1
45 rc = utils.subprocess_call(logger,"service dhcpd restart", shell=True)
46 elif which_dhcp_module == "manage_dnsmasq":
47 if restart_dhcp != "0":
48 rc = utils.subprocess_call(logger, "service dnsmasq restart")
49 has_restarted_dnsmasq = True
50 else:
51 logger.error("unknown DHCP engine: %s" % which_dhcp_module)
52 rc = 411
53
54 if manage_dns != "0" and restart_dns != "0":
55 if which_dns_module == "manage_bind":
56 rc = utils.subprocess_call(logger, "service named restart", shell=True)
57 elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq:
58 rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True)
59 elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq:
60 rc = 0
61 else:
62 logger.error("unknown DNS engine: %s" % which_dns_module)
63 rc = 412
64
65 return rc
66
670
=== removed directory '.pc/21_cobbler_use_netboot.patch'
=== removed directory '.pc/21_cobbler_use_netboot.patch/cobbler'
=== removed directory '.pc/21_cobbler_use_netboot.patch/cobbler/modules'
=== removed file '.pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py'
--- .pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py 2011-01-18 12:03:14 +0000
+++ .pc/21_cobbler_use_netboot.patch/cobbler/modules/manage_import_debian_ubuntu.py 1970-01-01 00:00:00 +0000
@@ -1,777 +0,0 @@
1"""
2This is some of the code behind 'cobbler sync'.
3
4Copyright 2006-2009, Red Hat, Inc
5Michael DeHaan <mdehaan@redhat.com>
6John Eckersberg <jeckersb@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import os
25import os.path
26import shutil
27import time
28import sys
29import glob
30import traceback
31import errno
32import re
33from utils import popen2
34from shlex import shlex
35
36
37import utils
38from cexceptions import *
39import templar
40
41import item_distro
42import item_profile
43import item_repo
44import item_system
45
46from utils import _
47
48def register():
49 """
50 The mandatory cobbler module registration hook.
51 """
52 return "manage/import"
53
54
55class ImportDebianUbuntuManager:
56
57 def __init__(self,config,logger):
58 """
59 Constructor
60 """
61 self.logger = logger
62 self.config = config
63 self.api = config.api
64 self.distros = config.distros()
65 self.profiles = config.profiles()
66 self.systems = config.systems()
67 self.settings = config.settings()
68 self.repos = config.repos()
69 self.templar = templar.Templar(config)
70
71 # required function for import modules
72 def what(self):
73 return "import/debian_ubuntu"
74
75 # required function for import modules
76 def check_for_signature(self,path,cli_breed):
77 signatures = [
78 'pool',
79 ]
80
81 #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path)
82 for signature in signatures:
83 d = os.path.join(path,signature)
84 if os.path.exists(d):
85 self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature)
86 return (True,signature)
87
88 if cli_breed and cli_breed in self.get_valid_breeds():
89 self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path)
90 return (True,None)
91
92 return (False,None)
93
94 # required function for import modules
95 def run(self,pkgdir,mirror,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None):
96 self.pkgdir = pkgdir
97 self.mirror = mirror
98 self.mirror_name = mirror_name
99 self.network_root = network_root
100 self.kickstart_file = kickstart_file
101 self.rsync_flags = rsync_flags
102 self.arch = arch
103 self.breed = breed
104 self.os_version = os_version
105
106 # some fixups for the XMLRPC interface, which does not use "None"
107 if self.arch == "": self.arch = None
108 if self.mirror == "": self.mirror = None
109 if self.mirror_name == "": self.mirror_name = None
110 if self.kickstart_file == "": self.kickstart_file = None
111 if self.os_version == "": self.os_version = None
112 if self.rsync_flags == "": self.rsync_flags = None
113 if self.network_root == "": self.network_root = None
114
115 # If no breed was specified on the command line, figure it out
116 if self.breed == None:
117 self.breed = self.get_breed_from_directory()
118 if not self.breed:
119 utils.die(self.logger,"import failed - could not determine breed of debian-based distro")
120
121 # debug log stuff for testing
122 #self.logger.info("DEBUG: self.pkgdir = %s" % str(self.pkgdir))
123 #self.logger.info("DEBUG: self.mirror = %s" % str(self.mirror))
124 #self.logger.info("DEBUG: self.mirror_name = %s" % str(self.mirror_name))
125 #self.logger.info("DEBUG: self.network_root = %s" % str(self.network_root))
126 #self.logger.info("DEBUG: self.kickstart_file = %s" % str(self.kickstart_file))
127 #self.logger.info("DEBUG: self.rsync_flags = %s" % str(self.rsync_flags))
128 #self.logger.info("DEBUG: self.arch = %s" % str(self.arch))
129 #self.logger.info("DEBUG: self.breed = %s" % str(self.breed))
130 #self.logger.info("DEBUG: self.os_version = %s" % str(self.os_version))
131
132 # both --import and --name are required arguments
133
134 if self.mirror is None:
135 utils.die(self.logger,"import failed. no --path specified")
136 if self.mirror_name is None:
137 utils.die(self.logger,"import failed. no --name specified")
138
139 # if --arch is supplied, validate it to ensure it's valid
140
141 if self.arch is not None and self.arch != "":
142 self.arch = self.arch.lower()
143 if self.arch == "x86":
144 # be consistent
145 self.arch = "i386"
146 if self.arch not in self.get_valid_arches():
147 utils.die(self.logger,"arch must be one of: %s" % string.join(self.get_valid_arches(),", "))
148
149 # if we're going to do any copying, set where to put things
150 # and then make sure nothing is already there.
151
152 self.path = os.path.normpath( "%s/ks_mirror/%s" % (self.settings.webdir, self.mirror_name) )
153 if os.path.exists(self.path) and self.arch is None:
154 # FIXME : Raise exception even when network_root is given ?
155 utils.die(self.logger,"Something already exists at this import location (%s). You must specify --arch to avoid potentially overwriting existing files." % self.path)
156
157 # import takes a --kickstart for forcing selection that can't be used in all circumstances
158
159 if self.kickstart_file and not self.breed:
160 utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected")
161
162 if self.os_version and not self.breed:
163 utils.die(self.logger,"OS version can only be specified when a specific breed is selected")
164
165 if self.breed and self.breed.lower() not in self.get_valid_breeds():
166 utils.die(self.logger,"Supplied import breed is not supported by this module")
167
168 # if --arch is supplied, make sure the user is not importing a path with a different
169 # arch, which would just be silly.
170
171 if self.arch:
172 # append the arch path to the name if the arch is not already
173 # found in the name.
174 for x in self.get_valid_arches():
175 if self.path.lower().find(x) != -1:
176 if self.arch != x :
177 utils.die(self.logger,"Architecture found on pathname (%s) does not fit the one given in command line (%s)"%(x,self.arch))
178 break
179 else:
180 # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again
181 self.path += ("-%s" % self.arch)
182
183 # make the output path and mirror content but only if not specifying that a network
184 # accessible support location already exists (this is --available-as on the command line)
185
186 if self.network_root is None:
187 # we need to mirror (copy) the files
188
189 utils.mkdir(self.path)
190
191 if self.mirror.startswith("http://") or self.mirror.startswith("ftp://") or self.mirror.startswith("nfs://"):
192
193 # http mirrors are kind of primative. rsync is better.
194 # that's why this isn't documented in the manpage and we don't support them.
195 # TODO: how about adding recursive FTP as an option?
196
197 utils.die(self.logger,"unsupported protocol")
198
199 else:
200
201 # good, we're going to use rsync..
202 # we don't use SSH for public mirrors and local files.
203 # presence of user@host syntax means use SSH
204
205 # kick off the rsync now
206
207 if not utils.rsync_files(self.mirror, self.path, self.rsync_flags, self.logger):
208 utils.die(self.logger, "failed to rsync the files")
209
210 else:
211
212 # rather than mirroring, we're going to assume the path is available
213 # over http, ftp, and nfs, perhaps on an external filer. scanning still requires
214 # --mirror is a filesystem path, but --available-as marks the network path
215
216 if not os.path.exists(self.mirror):
217 utils.die(self.logger, "path does not exist: %s" % self.mirror)
218
219 # find the filesystem part of the path, after the server bits, as each distro
220 # URL needs to be calculated relative to this.
221
222 if not self.network_root.endswith("/"):
223 self.network_root = self.network_root + "/"
224 self.path = os.path.normpath( self.mirror )
225 valid_roots = [ "nfs://", "ftp://", "http://" ]
226 for valid_root in valid_roots:
227 if self.network_root.startswith(valid_root):
228 break
229 else:
230 utils.die(self.logger, "Network root given to --available-as must be nfs://, ftp://, or http://")
231 if self.network_root.startswith("nfs://"):
232 try:
233 (a,b,rest) = self.network_root.split(":",3)
234 except:
235 utils.die(self.logger, "Network root given to --available-as is missing a colon, please see the manpage example.")
236
237 # now walk the filesystem looking for distributions that match certain patterns
238
239 self.logger.info("adding distros")
240 distros_added = []
241 # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST
242 os.path.walk(self.path, self.distro_adder, distros_added)
243
244 # find out if we can auto-create any repository records from the install tree
245
246 if self.network_root is None:
247 self.logger.info("associating repos")
248 # FIXME: this automagic is not possible (yet) without mirroring
249 self.repo_finder(distros_added)
250
251 # find the most appropriate answer files for each profile object
252
253 self.logger.info("associating kickstarts")
254 self.kickstart_finder(distros_added)
255
256 # ensure bootloaders are present
257 self.api.pxegen.copy_bootloaders()
258
259 return True
260
261 # required function for import modules
262 def get_valid_arches(self):
263 return ["i386", "ppc", "x86_64", "x86",]
264
265 # required function for import modules
266 def get_valid_breeds(self):
267 return ["debian","ubuntu"]
268
269 # required function for import modules
270 def get_valid_os_versions(self):
271 if self.breed == "debian":
272 return ["etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "experimental",]
273 elif self.breed == "ubuntu":
274 return ["dapper", "hardy", "karmic", "lucid", "maverick", "natty",]
275 else:
276 return []
277
278 def get_valid_repo_breeds(self):
279 return ["apt",]
280
281 def get_release_files(self):
282 """
283 Find distro release packages.
284 """
285 return glob.glob(os.path.join(self.get_rootdir(), "dists/*"))
286
287 def get_breed_from_directory(self):
288 for breed in self.get_valid_breeds():
289 # NOTE : Although we break the loop after the first match,
290 # multiple debian derived distros can actually live at the same pool -- JP
291 d = os.path.join(self.mirror, breed)
292 if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.mirror)) or os.path.basename(self.mirror) == breed:
293 return breed
294 else:
295 return None
296
297 def get_tree_location(self, distro):
298 """
299 Once a distribution is identified, find the part of the distribution
300 that has the URL in it that we want to use for kickstarting the
301 distribution, and create a ksmeta variable $tree that contains this.
302 """
303
304 base = self.get_rootdir()
305
306 if self.network_root is None:
307 dists_path = os.path.join(self.path, "dists")
308 if os.path.isdir(dists_path):
309 tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
310 else:
311 tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
312 self.set_install_tree(distro, tree)
313 else:
314 # where we assign the kickstart source is relative to our current directory
315 # and the input start directory in the crawl. We find the path segments
316 # between and tack them on the network source path to find the explicit
317 # network path to the distro that Anaconda can digest.
318 tail = self.path_tail(self.path, base)
319 tree = self.network_root[:-1] + tail
320 self.set_install_tree(distro, tree)
321
322 return
323
324 def repo_finder(self, distros_added):
325 for distro in distros_added:
326 self.logger.info("traversing distro %s" % distro.name)
327 # FIXME : Shouldn't decide this the value of self.network_root ?
328 if distro.kernel.find("ks_mirror") != -1:
329 basepath = os.path.dirname(distro.kernel)
330 top = self.get_rootdir()
331 self.logger.info("descent into %s" % top)
332 dists_path = os.path.join(self.path, "dists")
333 if not os.path.isdir(dists_path):
334 self.process_repos()
335 else:
336 self.logger.info("this distro isn't mirrored")
337
338 def process_repos(self):
339 pass
340
341 def distro_adder(self,distros_added,dirname,fnames):
342 """
343 This is an os.path.walk routine that finds distributions in the directory
344 to be scanned and then creates them.
345 """
346
347 # FIXME: If there are more than one kernel or initrd image on the same directory,
348 # results are unpredictable
349
350 initrd = None
351 kernel = None
352
353 for x in fnames:
354 adtls = []
355
356 fullname = os.path.join(dirname,x)
357 if os.path.islink(fullname) and os.path.isdir(fullname):
358 if fullname.startswith(self.path):
359 self.logger.warning("avoiding symlink loop")
360 continue
361 self.logger.info("following symlink: %s" % fullname)
362 os.path.walk(fullname, self.distro_adder, distros_added)
363
364 if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") or x.startswith("vmkboot.gz") ) and x != "initrd.size":
365 initrd = os.path.join(dirname,x)
366 if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") or x.startswith("mboot.c32") ) and x.find("initrd") == -1:
367 kernel = os.path.join(dirname,x)
368
369 # if we've collected a matching kernel and initrd pair, turn the in and add them to the list
370 if initrd is not None and kernel is not None:
371 adtls.append(self.add_entry(dirname,kernel,initrd))
372 kernel = None
373 initrd = None
374
375 for adtl in adtls:
376 distros_added.extend(adtl)
377
378 def add_entry(self,dirname,kernel,initrd):
379 """
380 When we find a directory with a valid kernel/initrd in it, create the distribution objects
381 as appropriate and save them. This includes creating xen and rescue distros/profiles
382 if possible.
383 """
384
385 proposed_name = self.get_proposed_name(dirname,kernel)
386 proposed_arch = self.get_proposed_arch(dirname)
387
388 if self.arch and proposed_arch and self.arch != proposed_arch:
389 utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch))
390
391 archs = self.learn_arch_from_tree()
392 if not archs:
393 if self.arch:
394 archs.append( self.arch )
395 else:
396 if self.arch and self.arch not in archs:
397 utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir()))
398 if proposed_arch:
399 if archs and proposed_arch not in archs:
400 self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir()))
401 return
402
403 archs = [ proposed_arch ]
404
405 if len(archs)>1:
406 self.logger.warning("- Warning : Multiple archs found : %s" % (archs))
407
408 distros_added = []
409
410 for pxe_arch in archs:
411 name = proposed_name + "-" + pxe_arch
412 existing_distro = self.distros.find(name=name)
413
414 if existing_distro is not None:
415 self.logger.warning("skipping import, as distro name already exists: %s" % name)
416 continue
417
418 else:
419 self.logger.info("creating new distro: %s" % name)
420 distro = self.config.new_distro()
421
422 if name.find("-autoboot") != -1:
423 # this is an artifact of some EL-3 imports
424 continue
425
426 distro.set_name(name)
427 distro.set_kernel(kernel)
428 distro.set_initrd(initrd)
429 distro.set_arch(pxe_arch)
430 distro.set_breed(self.breed)
431 # If a version was supplied on command line, we set it now
432 if self.os_version:
433 distro.set_os_version(self.os_version)
434
435 self.distros.add(distro,save=True)
436 distros_added.append(distro)
437
438 existing_profile = self.profiles.find(name=name)
439
440 # see if the profile name is already used, if so, skip it and
441 # do not modify the existing profile
442
443 if existing_profile is None:
444 self.logger.info("creating new profile: %s" % name)
445 #FIXME: The created profile holds a default kickstart, and should be breed specific
446 profile = self.config.new_profile()
447 else:
448 self.logger.info("skipping existing profile, name already exists: %s" % name)
449 continue
450
451 # save our minimal profile which just points to the distribution and a good
452 # default answer file
453
454 profile.set_name(name)
455 profile.set_distro(name)
456 profile.set_kickstart(self.kickstart_file)
457
458 # depending on the name of the profile we can define a good virt-type
459 # for usage with koan
460
461 if name.find("-xen") != -1:
462 profile.set_virt_type("xenpv")
463 elif name.find("vmware") != -1:
464 profile.set_virt_type("vmware")
465 else:
466 profile.set_virt_type("qemu")
467
468 # save our new profile to the collection
469
470 self.profiles.add(profile,save=True)
471
472 return distros_added
473
474 def get_proposed_name(self,dirname,kernel=None):
475 """
476 Given a directory name where we have a kernel/initrd pair, try to autoname
477 the distribution (and profile) object based on the contents of that path
478 """
479
480 if self.network_root is not None:
481 name = self.mirror_name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/"))
482 else:
483 # remove the part that says /var/www/cobbler/ks_mirror/name
484 name = "-".join(dirname.split("/")[5:])
485
486 if kernel is not None and kernel.find("PAE") != -1:
487 name = name + "-PAE"
488
489 # These are all Ubuntu's doing, the netboot images are buried pretty
490 # deep. ;-) -JC
491 name = name.replace("-netboot","")
492 name = name.replace("-ubuntu-installer","")
493 name = name.replace("-amd64","")
494 name = name.replace("-i386","")
495
496 # we know that some kernel paths should not be in the name
497
498 name = name.replace("-images","")
499 name = name.replace("-pxeboot","")
500 name = name.replace("-install","")
501 name = name.replace("-isolinux","")
502
503 # some paths above the media root may have extra path segments we want
504 # to clean up
505
506 name = name.replace("-os","")
507 name = name.replace("-tree","")
508 name = name.replace("var-www-cobbler-", "")
509 name = name.replace("ks_mirror-","")
510 name = name.replace("--","-")
511
512 # remove any architecture name related string, as real arch will be appended later
513
514 name = name.replace("chrp","ppc64")
515
516 for separator in [ '-' , '_' , '.' ] :
517 for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]:
518 name = name.replace("%s%s" % ( separator , arch ),"")
519
520 return name
521
522 def get_proposed_arch(self,dirname):
523 """
524 Given an directory name, can we infer an architecture from a path segment?
525 """
526 if dirname.find("x86_64") != -1 or dirname.find("amd") != -1:
527 return "x86_64"
528 if dirname.find("ia64") != -1:
529 return "ia64"
530 if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1:
531 return "i386"
532 if dirname.find("s390x") != -1:
533 return "s390x"
534 if dirname.find("s390") != -1:
535 return "s390"
536 if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1:
537 return "ppc64"
538 if dirname.find("ppc32") != -1:
539 return "ppc"
540 if dirname.find("ppc") != -1:
541 return "ppc"
542 return None
543
544 def arch_walker(self,foo,dirname,fnames):
545 """
546 See docs on learn_arch_from_tree.
547
548 The TRY_LIST is used to speed up search, and should be dropped for default importer
549 Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem
550
551 This method is useful to get the archs, but also to package type and a raw guess of the breed
552 """
553
554 # try to find a kernel header RPM and then look at it's arch.
555 for x in fnames:
556 if self.match_kernelarch_file(x):
557 for arch in self.get_valid_arches():
558 if x.find(arch) != -1:
559 foo[arch] = 1
560 for arch in [ "i686" , "amd64" ]:
561 if x.find(arch) != -1:
562 foo[arch] = 1
563
564 def kickstart_finder(self,distros_added):
565 """
566 For all of the profiles in the config w/o a kickstart, use the
567 given kickstart file, or look at the kernel path, from that,
568 see if we can guess the distro, and if we can, assign a kickstart
569 if one is available for it.
570 """
571 for profile in self.profiles:
572 distro = self.distros.find(name=profile.get_conceptual_parent().name)
573 if distro is None or not (distro in distros_added):
574 continue
575
576 kdir = os.path.dirname(distro.kernel)
577 if self.kickstart_file == None:
578 for file in self.get_release_files():
579 results = self.scan_pkg_filename(file)
580 # FIXME : If os is not found on tree but set with CLI, no kickstart is searched
581 if results is None:
582 self.logger.warning("skipping %s" % file)
583 continue
584 (flavor, major, minor, release) = results
585 # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata
586 #version , ks = self.set_variance(flavor, major, minor, distro.arch)
587 if self.os_version:
588 if self.os_version != flavor:
589 utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor))
590 distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch))
591 distro.set_os_version(flavor)
592 # is this even valid for debian/ubuntu? - jcammarata
593 #ds = self.get_datestamp()
594 #if ds is not None:
595 # distro.set_tree_build_time(ds)
596 profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed")
597 self.profiles.add(profile,save=True)
598
599 self.configure_tree_location(distro)
600 self.distros.add(distro,save=True) # re-save
601 self.api.serialize()
602
603 def configure_tree_location(self, distro):
604 """
605 Once a distribution is identified, find the part of the distribution
606 that has the URL in it that we want to use for kickstarting the
607 distribution, and create a ksmeta variable $tree that contains this.
608 """
609
610 base = self.get_rootdir()
611
612 if self.network_root is None:
613 dists_path = os.path.join( self.path , "dists" )
614 if os.path.isdir( dists_path ):
615 tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
616 else:
617 tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
618 self.set_install_tree(distro, tree)
619 else:
620 # where we assign the kickstart source is relative to our current directory
621 # and the input start directory in the crawl. We find the path segments
622 # between and tack them on the network source path to find the explicit
623 # network path to the distro that Anaconda can digest.
624 tail = utils.path_tail(self.path, base)
625 tree = self.network_root[:-1] + tail
626 self.set_install_tree(distro, tree)
627
628 def get_rootdir(self):
629 return self.mirror
630
631 def get_pkgdir(self):
632 if not self.pkgdir:
633 return None
634 return os.path.join(self.get_rootdir(),self.pkgdir)
635
636 def set_install_tree(self, distro, url):
637 distro.ks_meta["tree"] = url
638
639 def learn_arch_from_tree(self):
640 """
641 If a distribution is imported from DVD, there is a good chance the path doesn't
642 contain the arch and we should add it back in so that it's part of the
643 meaningful name ... so this code helps figure out the arch name. This is important
644 for producing predictable distro names (and profile names) from differing import sources
645 """
646 result = {}
647 # FIXME : this is called only once, should not be a walk
648 if self.get_pkgdir():
649 os.path.walk(self.get_pkgdir(), self.arch_walker, result)
650 if result.pop("amd64",False):
651 result["x86_64"] = 1
652 if result.pop("i686",False):
653 result["i386"] = 1
654 return result.keys()
655
656 def match_kernelarch_file(self, filename):
657 """
658 Is the given filename a kernel filename?
659 """
660 if not filename.endswith("deb"):
661 return False
662 if filename.startswith("linux-headers-"):
663 return True
664 return False
665
666 def scan_pkg_filename(self, file):
667 """
668 Determine what the distro is based on the release package filename.
669 """
670 # FIXME: all of these dist_names should probably be put in a function
671 # which would be called in place of looking in codes.py. Right now
672 # you have to update both codes.py and this to add a new release
673 if self.breed == "debian":
674 dist_names = ['etch','lenny',]
675 elif self.breed == "ubuntu":
676 dist_names = ['dapper','hardy','intrepid','jaunty','karmic','lynx','maverick','natty',]
677 else:
678 return None
679
680 if os.path.basename(file) in dist_names:
681 release_file = os.path.join(file,'Release')
682 self.logger.info("Found %s release file: %s" % (self.breed,release_file))
683
684 f = open(release_file,'r')
685 lines = f.readlines()
686 f.close()
687
688 for line in lines:
689 if line.lower().startswith('version: '):
690 version = line.split(':')[1].strip()
691 values = version.split('.')
692 if len(values) == 1:
693 # I don't think you'd ever hit this currently with debian or ubuntu,
694 # just including it for safety reasons
695 return (os.path.basename(file), values[0], "0", "0")
696 elif len(values) == 2:
697 return (os.path.basename(file), values[0], values[1], "0")
698 elif len(values) > 2:
699 return (os.path.basename(file), values[0], values[1], values[2])
700 return None
701
702 def get_datestamp(self):
703 """
704 Not used for debian/ubuntu... should probably be removed? - jcammarata
705 """
706 pass
707
708 def set_variance(self, flavor, major, minor, arch):
709 """
710 Set distro specific versioning.
711 """
712 # I don't think this is required anymore, as the scan_pkg_filename() function
713 # above does everything we need it to - jcammarata
714 #
715 #if self.breed == "debian":
716 # dist_names = { '4.0' : "etch" , '5.0' : "lenny" }
717 # dist_vers = "%s.%s" % ( major , minor )
718 # os_version = dist_names[dist_vers]
719 #
720 # return os_version , "/var/lib/cobbler/kickstarts/sample.seed"
721 #elif self.breed == "ubuntu":
722 # # Release names taken from wikipedia
723 # dist_names = { '6.4' :"dapper",
724 # '8.4' :"hardy",
725 # '8.10' :"intrepid",
726 # '9.4' :"jaunty",
727 # '9.10' :"karmic",
728 # '10.4' :"lynx",
729 # '10.10':"maverick",
730 # '11.4' :"natty",
731 # }
732 # dist_vers = "%s.%s" % ( major , minor )
733 # if not dist_names.has_key( dist_vers ):
734 # dist_names['4ubuntu2.0'] = "IntrepidIbex"
735 # os_version = dist_names[dist_vers]
736 #
737 # return os_version , "/var/lib/cobbler/kickstarts/sample.seed"
738 #else:
739 # return None
740 pass
741
742 def process_repos(self, main_importer, distro):
743 # Create a disabled repository for the new distro, and the security updates
744 #
745 # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage
746
747 repo = item_repo.Repo(main_importer.config)
748 repo.set_breed( "apt" )
749 repo.set_arch( distro.arch )
750 repo.set_keep_updated( False )
751 repo.yumopts["--ignore-release-gpg"] = None
752 repo.yumopts["--verbose"] = None
753 repo.set_name( distro.name )
754 repo.set_os_version( distro.os_version )
755 # NOTE : The location of the mirror should come from timezone
756 repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) )
757
758 security_repo = item_repo.Repo(main_importer.config)
759 security_repo.set_breed( "apt" )
760 security_repo.set_arch( distro.arch )
761 security_repo.set_keep_updated( False )
762 security_repo.yumopts["--ignore-release-gpg"] = None
763 security_repo.yumopts["--verbose"] = None
764 security_repo.set_name( distro.name + "-security" )
765 security_repo.set_os_version( distro.os_version )
766 # There are no official mirrors for security updates
767 security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' )
768
769 self.logger.info("Added repos for %s" % distro.name)
770 repos = main_importer.config.repos()
771 repos.add(repo,save=True)
772 repos.add(security_repo,save=True)
773
774# ==========================================================================
775
776def get_import_manager(config,logger):
777 return ImportDebianUbuntuManager(config,logger)
7780
=== removed directory '.pc/33_authn_configfile.patch'
=== removed directory '.pc/33_authn_configfile.patch/config'
=== removed file '.pc/33_authn_configfile.patch/config/modules.conf'
--- .pc/33_authn_configfile.patch/config/modules.conf 2011-04-04 12:55:44 +0000
+++ .pc/33_authn_configfile.patch/config/modules.conf 1970-01-01 00:00:00 +0000
@@ -1,86 +0,0 @@
1# cobbler module configuration file
2# =================================
3
4# authentication:
5# what users can log into the WebUI and Read-Write XMLRPC?
6# choices:
7# authn_denyall -- no one (default)
8# authn_configfile -- use /etc/cobbler/users.digest (for basic setups)
9# authn_passthru -- ask Apache to handle it (used for kerberos)
10# authn_ldap -- authenticate against LDAP
11# authn_spacewalk -- ask Spacewalk/Satellite (experimental)
12# authn_testing -- username/password is always testing/testing (debug)
13# (user supplied) -- you may write your own module
14# WARNING: this is a security setting, do not choose an option blindly.
15# for more information:
16# https://fedorahosted.org/cobbler/wiki/CobblerWebInterface
17# https://fedorahosted.org/cobbler/wiki/CustomizableSecurity
18# https://fedorahosted.org/cobbler/wiki/CobblerWithKerberos
19# https://fedorahosted.org/cobbler/wiki/CobblerWithLdap
20
21[authentication]
22module = authn_denyall
23
24# authorization:
25# once a user has been cleared by the WebUI/XMLRPC, what can they do?
26# choices:
27# authz_allowall -- full access for all authenticated users (default)
28# authz_ownership -- use users.conf, but add object ownership semantics
29# (user supplied) -- you may write your own module
30# WARNING: this is a security setting, do not choose an option blindly.
31# If you want to further restrict cobbler with ACLs for various groups,
32# pick authz_ownership. authz_allowall does not support ACLs. configfile
33# does but does not support object ownership which is useful as an additional
34# layer of control.
35
36# for more information:
37# https://fedorahosted.org/cobbler/wiki/CobblerWebInterface
38# https://fedorahosted.org/cobbler/wiki/CustomizableSecurity
39# https://fedorahosted.org/cobbler/wiki/CustomizableAuthorization
40# https://fedorahosted.org/cobbler/wiki/AuthorizationWithOwnership
41# https://fedorahosted.org/cobbler/wiki/AclFeature
42
43[authorization]
44module = authz_allowall
45
46# dns:
47# chooses the DNS management engine if manage_dns is enabled
48# in /etc/cobbler/settings, which is off by default.
49# choices:
50# manage_bind -- default, uses BIND/named
51# manage_dnsmasq -- uses dnsmasq, also must select dnsmasq for dhcp below
52# NOTE: more configuration is still required in /etc/cobbler
53# for more information:
54# https://fedorahosted.org/cobbler/wiki/ManageDns
55
56[dns]
57module = manage_bind
58
59# dhcp:
60# chooses the DHCP management engine if manage_dhcp is enabled
61# in /etc/cobbler/settings, which is off by default.
62# choices:
63# manage_isc -- default, uses ISC dhcpd
64# manage_dnsmasq -- uses dnsmasq, also must select dnsmasq for dns above
65# NOTE: more configuration is still required in /etc/cobbler
66# for more information:
67# https://fedorahosted.org/cobbler/wiki/ManageDhcp
68
69[dhcp]
70module = manage_isc
71
72# tftpd:
73# chooses the TFTP management engine if manage_tftp is enabled
74# in /etc/cobbler/settings, which is ON by default.
75#
76# choices:
77# manage_in_tftpd -- default, uses the system's tftp server
78# manage_tftpd_py -- uses cobbler's tftp server
79#
80# for more information:
81# https://fedorahosted.org/cobbler/wiki/ManageTftp
82
83[tftpd]
84module = manage_in_tftpd
85
86#--------------------------------------------------
870
=== removed directory '.pc/34_fix_apache_wont_start.patch'
=== removed directory '.pc/34_fix_apache_wont_start.patch/config'
=== removed file '.pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf'
--- .pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf 2011-04-04 12:55:44 +0000
+++ .pc/34_fix_apache_wont_start.patch/config/cobbler_web.conf 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
1# This configuration file enables the cobbler web
2# interface (django version)
3
4<VirtualHost *:80>
5
6# Do not log the requests generated from the event notification system
7SetEnvIf Request_URI ".*/op/events/user/.*" dontlog
8# Log only what remains
9CustomLog logs/access_log combined env=!dontlog
10
11WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi
12
13</VirtualHost>
14
150
=== removed directory '.pc/39_cw_remove_vhost.patch'
=== removed directory '.pc/39_cw_remove_vhost.patch/config'
=== removed file '.pc/39_cw_remove_vhost.patch/config/cobbler_web.conf'
--- .pc/39_cw_remove_vhost.patch/config/cobbler_web.conf 2011-04-15 12:47:39 +0000
+++ .pc/39_cw_remove_vhost.patch/config/cobbler_web.conf 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
1# This configuration file enables the cobbler web
2# interface (django version)
3
4<VirtualHost *:80>
5
6# Do not log the requests generated from the event notification system
7SetEnvIf Request_URI ".*/op/events/user/.*" dontlog
8# Log only what remains
9#CustomLog logs/access_log combined env=!dontlog
10
11WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi
12
13</VirtualHost>
14
150
=== removed directory '.pc/40_ubuntu_bind9_management.patch'
=== removed directory '.pc/40_ubuntu_bind9_management.patch/cobbler'
=== removed file '.pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py'
--- .pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py 2011-04-18 11:15:59 +0000
+++ .pc/40_ubuntu_bind9_management.patch/cobbler/action_check.py 1970-01-01 00:00:00 +0000
@@ -1,482 +0,0 @@
1"""
2Validates whether the system is reasonably well configured for
3serving up content. This is the code behind 'cobbler check'.
4
5Copyright 2006-2009, Red Hat, Inc
6Michael DeHaan <mdehaan@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import os
25import re
26import action_sync
27import utils
28import glob
29from utils import _
30import clogger
31
32class BootCheck:
33
34 def __init__(self,config,logger=None):
35 """
36 Constructor
37 """
38 self.config = config
39 self.settings = config.settings()
40 if logger is None:
41 logger = clogger.Logger()
42 self.logger = logger
43
44
45 def run(self):
46 """
47 Returns None if there are no errors, otherwise returns a list
48 of things to correct prior to running application 'for real'.
49 (The CLI usage is "cobbler check" before "cobbler sync")
50 """
51 status = []
52 self.checked_dist = utils.check_dist()
53 self.check_name(status)
54 self.check_selinux(status)
55 if self.settings.manage_dhcp:
56 mode = self.config.api.get_sync().dhcp.what()
57 if mode == "isc":
58 self.check_dhcpd_bin(status)
59 self.check_dhcpd_conf(status)
60 self.check_service(status,"dhcpd")
61 elif mode == "dnsmasq":
62 self.check_dnsmasq_bin(status)
63 self.check_service(status,"dnsmasq")
64
65 if self.settings.manage_dns:
66 mode = self.config.api.get_sync().dns.what()
67 if mode == "bind":
68 self.check_bind_bin(status)
69 self.check_service(status,"named")
70 elif mode == "dnsmasq" and not self.settings.manage_dhcp:
71 self.check_dnsmasq_bin(status)
72 self.check_service(status,"dnsmasq")
73
74 mode = self.config.api.get_sync().tftpd.what()
75 if mode == "in_tftpd":
76 self.check_tftpd_bin(status)
77 self.check_tftpd_dir(status)
78 self.check_tftpd_conf(status)
79 elif mode == "tftpd_py":
80 self.check_ctftpd_bin(status)
81 self.check_ctftpd_dir(status)
82 self.check_ctftpd_conf(status)
83
84 self.check_service(status, "cobblerd")
85
86 self.check_bootloaders(status)
87 self.check_rsync_conf(status)
88 self.check_httpd(status)
89 self.check_iptables(status)
90 self.check_yum(status)
91 self.check_debmirror(status)
92 self.check_for_ksvalidator(status)
93 self.check_for_default_password(status)
94 self.check_for_unreferenced_repos(status)
95 self.check_for_unsynced_repos(status)
96 self.check_for_cman(status)
97
98 return status
99
100 def check_for_ksvalidator(self, status):
101 if self.checked_dist in ["debian", "ubuntu"]:
102 return
103
104 if not os.path.exists("/usr/bin/ksvalidator"):
105 status.append("ksvalidator was not found, install pykickstart")
106
107 return True
108
109 def check_for_cman(self, status):
110 # not doing rpm -q here to be cross-distro friendly
111 if not os.path.exists("/sbin/fence_ilo") and not os.path.exists("/usr/sbin/fence_ilo"):
112 status.append("fencing tools were not found, and are required to use the (optional) power management features. install cman or fence-agents to use them")
113 return True
114
115 def check_service(self, status, which, notes=""):
116 if notes != "":
117 notes = " (NOTE: %s)" % notes
118 rc = 0
119 if self.checked_dist == "redhat" or self.checked_dist == "suse":
120 if os.path.exists("/etc/rc.d/init.d/%s" % which):
121 rc = utils.subprocess_call(self.logger,"/sbin/service %s status > /dev/null 2>/dev/null" % which, shell=True)
122 if rc != 0:
123 status.append(_("service %s is not running%s") % (which,notes))
124 return False
125 elif self.checked_dist in ["debian", "ubuntu"]:
126 # we still use /etc/init.d
127 if os.path.exists("/etc/init.d/%s" % which):
128 rc = utils.subprocess_call(self.logger,"/etc/init.d/%s status > /dev/null 2>/dev/null" % which, shell=True)
129 if rc != 0:
130 status.append(_("service %s is not running%s") % (which,notes))
131 return False
132 elif self.checked_dist == "ubuntu":
133 if os.path.exists("/etc/init/%s.conf" % which):
134 rc = utils.subprocess_call(self.logger,"status %s > /dev/null 2>&1" % which, shell=True)
135 if rc != 0:
136 status.append(_("service %s is not running%s") % (which,notes))
137 else:
138 status.append(_("Unknown distribution type, cannot check for running service %s" % which))
139 return False
140 return True
141
142 def check_iptables(self, status):
143 if os.path.exists("/etc/rc.d/init.d/iptables"):
144 rc = utils.subprocess_call(self.logger,"/sbin/service iptables status >/dev/null 2>/dev/null", shell=True)
145 if rc == 0:
146 status.append(_("since iptables may be running, ensure 69, 80, and %(xmlrpc)s are unblocked") % { "xmlrpc" : self.settings.xmlrpc_port })
147
148 def check_yum(self,status):
149 if self.checked_dist in ["debian", "ubuntu"]:
150 return
151
152 if not os.path.exists("/usr/bin/createrepo"):
153 status.append(_("createrepo package is not installed, needed for cobbler import and cobbler reposync, install createrepo?"))
154 if not os.path.exists("/usr/bin/reposync"):
155 status.append(_("reposync is not installed, needed for cobbler reposync, install/upgrade yum-utils?"))
156 if not os.path.exists("/usr/bin/yumdownloader"):
157 status.append(_("yumdownloader is not installed, needed for cobbler repo add with --rpm-list parameter, install/upgrade yum-utils?"))
158 if self.settings.reposync_flags.find("-l") != -1:
159 if self.checked_dist == "redhat" or self.checked_dist == "suse":
160 yum_utils_ver = utils.subprocess_get(self.logger,"/usr/bin/rpmquery --queryformat=%{VERSION} yum-utils", shell=True)
161 if yum_utils_ver < "1.1.17":
162 status.append(_("yum-utils need to be at least version 1.1.17 for reposync -l, current version is %s") % yum_utils_ver )
163
164 def check_debmirror(self,status):
165 if not os.path.exists("/usr/bin/debmirror"):
166 status.append(_("debmirror package is not installed, it will be required to manage debian deployments and repositories"))
167 if os.path.exists("/etc/debmirror.conf"):
168 f = open("/etc/debmirror.conf")
169 re_dists = re.compile(r'@dists=')
170 re_arches = re.compile(r'@arches=')
171 for line in f.readlines():
172 if re_dists.search(line) and not line.strip().startswith("#"):
173 status.append(_("comment out 'dists' in /etc/debmirror.conf for proper debian support"))
174 if re_arches.search(line) and not line.strip().startswith("#"):
175 status.append(_("comment out 'arches' in /etc/debmirror.conf for proper debian support"))
176
177
178 def check_name(self,status):
179 """
180 If the server name in the config file is still set to localhost
181 kickstarts run from koan will not have proper kernel line
182 parameters.
183 """
184 if self.settings.server == "127.0.0.1":
185 status.append(_("The 'server' field in /etc/cobbler/settings must be set to something other than localhost, or kickstarting features will not work. This should be a resolvable hostname or IP for the boot server as reachable by all machines that will use it."))
186 if self.settings.next_server == "127.0.0.1":
187 status.append(_("For PXE to be functional, the 'next_server' field in /etc/cobbler/settings must be set to something other than 127.0.0.1, and should match the IP of the boot server on the PXE network."))
188
189 def check_selinux(self,status):
190 """
191 Suggests various SELinux rules changes to run Cobbler happily with
192 SELinux in enforcing mode. FIXME: this method could use some
193 refactoring in the future.
194 """
195 if self.checked_dist in ["debian", "ubuntu"]:
196 return
197
198 enabled = self.config.api.is_selinux_enabled()
199 if enabled:
200 data2 = utils.subprocess_get(self.logger,"/usr/sbin/getsebool -a",shell=True)
201 for line in data2.split("\n"):
202 if line.find("httpd_can_network_connect ") != -1:
203 if line.find("off") != -1:
204 status.append(_("Must enable a selinux boolean to enable vital web services components, run: setsebool -P httpd_can_network_connect true"))
205 if line.find("rsync_disable_trans ") != -1:
206 if line.find("on") != -1:
207 status.append(_("Must enable a selinux boolean to enable the cobbler import and replicate commands, run: setsebool -P rsync_disable_trans=1"))
208
209 data3 = utils.subprocess_get(self.logger,"/usr/sbin/semanage fcontext -l | grep public_content_t",shell=True)
210
211 rule1 = False
212 rule2 = False
213 rule3 = False
214 selinux_msg = "/usr/sbin/semanage fcontext -a -t public_content_t \"%s\""
215 for line in data3.split("\n"):
216 if line.startswith("/tftpboot/.*"):
217 rule1 = True
218 if line.startswith("/var/lib/tftpboot/.*"):
219 rule2 = True
220 if line.startswith("/var/www/cobbler/images/.*"):
221 rule3 = True
222
223 rules = []
224 if os.path.exists("/tftpboot") and not rule1:
225 rules.append(selinux_msg % "/tftpboot/.*")
226 else:
227 if not rule2:
228 rules.append(selinux_msg % "/var/lib/tftpboot/.*")
229 if not rule3:
230 rules.append(selinux_msg % "/var/www/cobbler/images/.*")
231 if len(rules) > 0:
232 status.append("you need to set some SELinux content rules to ensure cobbler serves content correctly in your SELinux environment, run the following: %s" % " && ".join(rules))
233
234 # now check to see that the Django sessions path is accessible
235 # by Apache
236
237 data4 = utils.subprocess_get(self.logger,"/usr/sbin/semanage fcontext -l | grep httpd_sys_content_rw_t",shell=True)
238 selinux_msg = "you need to set some SELinux rules if you want to use cobbler-web (an optional package), run the following: /usr/sbin/semanage fcontext -a -t httpd_sys_content_rw_t \"%s\""
239 rule4 = False
240 for line in data4.split("\n"):
241 if line.startswith("/var/lib/cobbler/webui_sessions/.*"):
242 rule4 = True
243 if not rule4:
244 status.append(selinux_msg % "/var/lib/cobbler/webui_sessions/.*")
245
246
247 def check_for_default_password(self,status):
248 default_pass = self.settings.default_password_crypted
249 if default_pass == "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac.":
250 status.append(_("The default password used by the sample templates for newly installed machines (default_password_crypted in /etc/cobbler/settings) is still set to 'cobbler' and should be changed, try: \"openssl passwd -1 -salt 'random-phrase-here' 'your-password-here'\" to generate new one"))
251
252
253 def check_for_unreferenced_repos(self,status):
254 repos = []
255 referenced = []
256 not_found = []
257 for r in self.config.api.repos():
258 repos.append(r.name)
259 for p in self.config.api.profiles():
260 my_repos = p.repos
261 if my_repos != "<<inherit>>":
262 referenced.extend(my_repos)
263 for r in referenced:
264 if r not in repos and r != "<<inherit>>":
265 not_found.append(r)
266 if len(not_found) > 0:
267 status.append(_("One or more repos referenced by profile objects are no longer defined in cobbler: %s") % ", ".join(not_found))
268
269 def check_for_unsynced_repos(self,status):
270 need_sync = []
271 for r in self.config.repos():
272 if r.mirror_locally == 1:
273 lookfor = os.path.join(self.settings.webdir, "repo_mirror", r.name)
274 if not os.path.exists(lookfor):
275 need_sync.append(r.name)
276 if len(need_sync) > 0:
277 status.append(_("One or more repos need to be processed by cobbler reposync for the first time before kickstarting against them: %s") % ", ".join(need_sync))
278
279
280 def check_httpd(self,status):
281 """
282 Check if Apache is installed.
283 """
284 if self.checked_dist in [ "suse", "redhat" ]:
285 rc = utils.subprocess_get(self.logger,"httpd -v")
286 else:
287 rc = utils.subprocess_get(self.logger,"apache2 -v")
288 if rc.find("Server") == -1:
289 status.append("Apache (httpd) is not installed and/or in path")
290
291
292 def check_dhcpd_bin(self,status):
293 """
294 Check if dhcpd is installed
295 """
296 if not os.path.exists("/usr/sbin/dhcpd"):
297 status.append("dhcpd is not installed")
298
299 def check_dnsmasq_bin(self,status):
300 """
301 Check if dnsmasq is installed
302 """
303 rc = utils.subprocess_get(self.logger,"dnsmasq --help")
304 if rc.find("Valid options") == -1:
305 status.append("dnsmasq is not installed and/or in path")
306
307 def check_bind_bin(self,status):
308 """
309 Check if bind is installed.
310 """
311 rc = utils.subprocess_get(self.logger,"named -v")
312 # it should return something like "BIND 9.6.1-P1-RedHat-9.6.1-6.P1.fc11"
313 if rc.find("BIND") == -1:
314 status.append("named is not installed and/or in path")
315
316 def check_bootloaders(self,status):
317 """
318 Check if network bootloaders are installed
319 """
320 # FIXME: move zpxe.rexx to loaders
321
322 bootloaders = {
323 "elilo" : [ "/var/lib/cobbler/loaders/elilo*.efi" ],
324 "menu.c32" : [ "/usr/share/syslinux/menu.c32",
325 "/usr/lib/syslinux/menu.c32",
326 "/var/lib/cobbler/loaders/menu.c32" ],
327 "yaboot" : [ "/var/lib/cobbler/loaders/yaboot*" ],
328 "pxelinux.0" : [ "/usr/share/syslinux/pxelinux.0",
329 "/usr/lib/syslinux/pxelinux.0",
330 "/var/lib/cobbler/loaders/pxelinux.0" ],
331 "efi" : [ "/var/lib/cobbler/loaders/grub-x86.efi",
332 "/var/lib/cobbler/loaders/grub-x86_64.efi" ],
333 }
334
335 # look for bootloaders at the glob locations above
336 found_bootloaders = []
337 items = bootloaders.keys()
338 for loader_name in items:
339 patterns = bootloaders[loader_name]
340 for pattern in patterns:
341 matches = glob.glob(pattern)
342 if len(matches) > 0:
343 found_bootloaders.append(loader_name)
344 not_found = []
345
346 # invert the list of what we've found so we can report on what we haven't found
347 for loader_name in items:
348 if loader_name not in found_bootloaders:
349 not_found.append(loader_name)
350
351 if len(not_found) > 0:
352 status.append("some network boot-loaders are missing from /var/lib/cobbler/loaders, you may run 'cobbler get-loaders' to download them, or, if you only want to handle x86/x86_64 netbooting, you may ensure that you have a *recent* version of the syslinux package installed and can ignore this message entirely. Files in this directory, should you want to support all architectures, should include pxelinux.0, menu.c32, elilo.efi, and yaboot. The 'cobbler get-loaders' command is the easiest way to resolve these requirements.")
353
354 def check_tftpd_bin(self,status):
355 """
356 Check if tftpd is installed
357 """
358 if self.checked_dist in ["debian", "ubuntu"]:
359 return
360
361 if not os.path.exists("/etc/xinetd.d/tftp"):
362 status.append("missing /etc/xinetd.d/tftp, install tftp-server?")
363
364 def check_tftpd_dir(self,status):
365 """
366 Check if cobbler.conf's tftpboot directory exists
367 """
368 if self.checked_dist in ["debian", "ubuntu"]:
369 return
370
371 bootloc = utils.tftpboot_location()
372 if not os.path.exists(bootloc):
373 status.append(_("please create directory: %(dirname)s") % { "dirname" : bootloc })
374
375
376 def check_tftpd_conf(self,status):
377 """
378 Check that configured tftpd boot directory matches with actual
379 Check that tftpd is enabled to autostart
380 """
381 if self.checked_dist in ["debian", "ubuntu"]:
382 return
383
384 if os.path.exists("/etc/xinetd.d/tftp"):
385 f = open("/etc/xinetd.d/tftp")
386 re_disable = re.compile(r'disable.*=.*yes')
387 for line in f.readlines():
388 if re_disable.search(line) and not line.strip().startswith("#"):
389 status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/tftp" })
390 else:
391 status.append("missing configuration file: /etc/xinetd.d/tftp")
392
393 def check_ctftpd_bin(self,status):
394 """
395 Check if the Cobbler tftp server is installed
396 """
397 if self.checked_dist in ["debian", "ubuntu"]:
398 return
399
400 if not os.path.exists("/etc/xinetd.d/ctftp"):
401 status.append("missing /etc/xinetd.d/ctftp")
402
403 def check_ctftpd_dir(self,status):
404 """
405 Check if cobbler.conf's tftpboot directory exists
406 """
407 if self.checked_dist in ["debian", "ubuntu"]:
408 return
409
410 bootloc = utils.tftpboot_location()
411 if not os.path.exists(bootloc):
412 status.append(_("please create directory: %(dirname)s") % { "dirname" : bootloc })
413
414 def check_ctftpd_conf(self,status):
415 """
416 Check that configured tftpd boot directory matches with actual
417 Check that tftpd is enabled to autostart
418 """
419 if self.checked_dist in ["debian", "ubuntu"]:
420 return
421
422 if os.path.exists("/etc/xinetd.d/tftp"):
423 f = open("/etc/xinetd.d/tftp")
424 re_disable = re.compile(r'disable.*=.*no')
425 for line in f.readlines():
426 if re_disable.search(line) and not line.strip().startswith("#"):
427 status.append(_("change 'disable' to 'yes' in %(file)s") % { "file" : "/etc/xinetd.d/tftp" })
428 if os.path.exists("/etc/xinetd.d/ctftp"):
429 f = open("/etc/xinetd.d/ctftp")
430 re_disable = re.compile(r'disable.*=.*yes')
431 for line in f.readlines():
432 if re_disable.search(line) and not line.strip().startswith("#"):
433 status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/ctftp" })
434 else:
435 status.append("missing configuration file: /etc/xinetd.d/ctftp")
436
437 def check_rsync_conf(self,status):
438 """
439 Check that rsync is enabled to autostart
440 """
441 if self.checked_dist in ["debian", "ubuntu"]:
442 return
443
444 if os.path.exists("/etc/xinetd.d/rsync"):
445 f = open("/etc/xinetd.d/rsync")
446 re_disable = re.compile(r'disable.*=.*yes')
447 for line in f.readlines():
448 if re_disable.search(line) and not line.strip().startswith("#"):
449 status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/rsync" })
450 else:
451 status.append(_("file %(file)s does not exist") % { "file" : "/etc/xinetd.d/rsync" })
452
453
454 def check_dhcpd_conf(self,status):
455 """
456 NOTE: this code only applies if cobbler is *NOT* set to generate
457 a dhcp.conf file
458
459 Check that dhcpd *appears* to be configured for pxe booting.
460 We can't assure file correctness. Since a cobbler user might
461 have dhcp on another server, it's okay if it's not there and/or
462 not configured correctly according to automated scans.
463 """
464 if not (self.settings.manage_dhcp == 0):
465 return
466
467 if os.path.exists(self.settings.dhcpd_conf):
468 match_next = False
469 match_file = False
470 f = open(self.settings.dhcpd_conf)
471 for line in f.readlines():
472 if line.find("next-server") != -1:
473 match_next = True
474 if line.find("filename") != -1:
475 match_file = True
476 if not match_next:
477 status.append(_("expecting next-server entry in %(file)s") % { "file" : self.settings.dhcpd_conf })
478 if not match_file:
479 status.append(_("expecting filename entry in %(file)s") % { "file" : self.settings.dhcpd_conf })
480 else:
481 status.append(_("missing file: %(file)s") % { "file" : self.settings.dhcpd_conf })
482
4830
=== removed directory '.pc/40_ubuntu_bind9_management.patch/cobbler/modules'
=== removed file '.pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py'
--- .pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py 2011-04-18 11:15:59 +0000
+++ .pc/40_ubuntu_bind9_management.patch/cobbler/modules/manage_bind.py 1970-01-01 00:00:00 +0000
@@ -1,332 +0,0 @@
1"""
2This is some of the code behind 'cobbler sync'.
3
4Copyright 2006-2009, Red Hat, Inc
5Michael DeHaan <mdehaan@redhat.com>
6John Eckersberg <jeckersb@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import os
25import os.path
26import shutil
27import time
28import sys
29import glob
30import traceback
31import errno
32import re
33from shlex import shlex
34
35
36import utils
37from cexceptions import *
38import templar
39
40import item_distro
41import item_profile
42import item_repo
43import item_system
44
45from utils import _
46
47
48def register():
49 """
50 The mandatory cobbler module registration hook.
51 """
52 return "manage"
53
54
55class BindManager:
56
57 def what(self):
58 return "bind"
59
60 def __init__(self,config,logger):
61 """
62 Constructor
63 """
64 self.logger = logger
65 self.config = config
66 self.api = config.api
67 self.distros = config.distros()
68 self.profiles = config.profiles()
69 self.systems = config.systems()
70 self.settings = config.settings()
71 self.repos = config.repos()
72 self.templar = templar.Templar(config)
73
74 def regen_hosts(self):
75 pass # not used
76
77 def __forward_zones(self):
78 """
79 Returns a map of zones and the records that belong
80 in them
81 """
82 zones = {}
83 forward_zones = self.settings.manage_forward_zones
84 if type(forward_zones) != type([]):
85 # gracefully handle when user inputs only a single zone
86 # as a string instead of a list with only a single item
87 forward_zones = [forward_zones]
88
89 for zone in forward_zones:
90 zones[zone] = {}
91
92 for system in self.systems:
93 for (name, interface) in system.interfaces.iteritems():
94 host = interface["dns_name"]
95 ip = interface["ip_address"]
96 if not system.is_management_supported(cidr_ok=False):
97 continue
98 if not host or not ip:
99 # gotsta have some dns_name and ip or else!
100 continue
101 if host.find(".") == -1:
102 continue
103
104 # match the longest zone!
105 # e.g. if you have a host a.b.c.d.e
106 # if manage_forward_zones has:
107 # - c.d.e
108 # - b.c.d.e
109 # then a.b.c.d.e should go in b.c.d.e
110 best_match = ''
111 for zone in zones.keys():
112 if re.search('\.%s$' % zone, host) and len(zone) > len(best_match):
113 best_match = zone
114
115 if best_match == '': # no match
116 continue
117
118 # strip the zone off the dns_name and append the
119 # remainder + ip to the zone list
120 host = re.sub('\.%s$' % best_match, '', host)
121
122 zones[best_match][host] = ip
123
124 return zones
125
126 def __reverse_zones(self):
127 """
128 Returns a map of zones and the records that belong
129 in them
130 """
131 zones = {}
132 reverse_zones = self.settings.manage_reverse_zones
133 if type(reverse_zones) != type([]):
134 # gracefully handle when user inputs only a single zone
135 # as a string instead of a list with only a single item
136 reverse_zones = [reverse_zones]
137
138 for zone in reverse_zones:
139 zones[zone] = {}
140
141 for sys in self.systems:
142 for (name, interface) in sys.interfaces.iteritems():
143 host = interface["dns_name"]
144 ip = interface["ip_address"]
145 if not sys.is_management_supported(cidr_ok=False):
146 continue
147 if not host or not ip:
148 # gotsta have some dns_name and ip or else!
149 continue
150
151 # match the longest zone!
152 # e.g. if you have an ip 1.2.3.4
153 # if manage_reverse_zones has:
154 # - 1.2
155 # - 1.2.3
156 # then 1.2.3.4 should go in 1.2.3
157 best_match = ''
158 for zone in zones.keys():
159 if re.search('^%s\.' % zone, ip) and len(zone) > len(best_match):
160 best_match = zone
161
162 if best_match == '': # no match
163 continue
164
165 # strip the zone off the front of the ip
166 # reverse the rest of the octets
167 # append the remainder + dns_name
168 ip = ip.replace(best_match, '', 1)
169 if ip[0] == '.': # strip leading '.' if it's there
170 ip = ip[1:]
171 tokens = ip.split('.')
172 tokens.reverse()
173 ip = '.'.join(tokens)
174 zones[best_match][ip] = host + '.'
175
176 return zones
177
178
179 def __write_named_conf(self):
180 """
181 Write out the named.conf main config file from the template.
182 """
183 settings_file = "/etc/named.conf"
184 template_file = "/etc/cobbler/named.template"
185 forward_zones = self.settings.manage_forward_zones
186 reverse_zones = self.settings.manage_reverse_zones
187
188 metadata = {'forward_zones': self.__forward_zones().keys(),
189 'reverse_zones': [],
190 'zone_include': ''}
191
192 for zone in metadata['forward_zones']:
193 txt = """
194zone "%(zone)s." {
195 type master;
196 file "%(zone)s";
197};
198""" % {'zone': zone}
199 metadata['zone_include'] = metadata['zone_include'] + txt
200
201 for zone in self.__reverse_zones().keys():
202 tokens = zone.split('.')
203 tokens.reverse()
204 arpa = '.'.join(tokens) + '.in-addr.arpa'
205 metadata['reverse_zones'].append((zone, arpa))
206 txt = """
207zone "%(arpa)s." {
208 type master;
209 file "%(zone)s";
210};
211""" % {'arpa': arpa, 'zone': zone}
212 metadata['zone_include'] = metadata['zone_include'] + txt
213
214 try:
215 f2 = open(template_file,"r")
216 except:
217 raise CX(_("error reading template from file: %s") % template_file)
218 template_data = ""
219 template_data = f2.read()
220 f2.close()
221
222 if self.logger is not None:
223 self.logger.info("generating %s" % settings_file)
224 self.templar.render(template_data, metadata, settings_file, None)
225
226 def __ip_sort(self, ips):
227 """
228 Sorts IP addresses (or partial addresses) in a numerical fashion per-octet
229 """
230 # strings to integer octet chunks so we can sort numerically
231 octets = map(lambda x: [int(i) for i in x.split('.')], ips)
232 octets.sort()
233 # integers back to strings
234 octets = map(lambda x: [str(i) for i in x], octets)
235 return ['.'.join(i) for i in octets]
236
237 def __pretty_print_host_records(self, hosts, rectype='A', rclass='IN'):
238 """
239 Format host records by order and with consistent indentation
240 """
241 names = [k for k,v in hosts.iteritems()]
242 if not names: return '' # zones with no hosts
243
244 if rectype == 'PTR':
245 names = self.__ip_sort(names)
246 else:
247 names.sort()
248
249 max_name = max([len(i) for i in names])
250
251 s = ""
252 for name in names:
253 spacing = " " * (max_name - len(name))
254 my_name = "%s%s" % (name, spacing)
255 my_host = hosts[name]
256 s += "%s %s %s %s\n" % (my_name, rclass, rectype, my_host)
257 return s
258
259 def __write_zone_files(self):
260 """
261 Write out the forward and reverse zone files for all configured zones
262 """
263 default_template_file = "/etc/cobbler/zone.template"
264 cobbler_server = self.settings.server
265 serial = int(time.time())
266 forward = self.__forward_zones()
267 reverse = self.__reverse_zones()
268
269 try:
270 f2 = open(default_template_file,"r")
271 except:
272 raise CX(_("error reading template from file: %s") % default_template_file)
273 default_template_data = ""
274 default_template_data = f2.read()
275 f2.close()
276
277 for (zone, hosts) in forward.iteritems():
278 metadata = {
279 'cobbler_server': cobbler_server,
280 'serial': serial,
281 'host_record': ''
282 }
283
284 # grab zone-specific template if it exists
285 try:
286 fd = open('/etc/cobbler/zone_templates/%s' % zone)
287 template_data = fd.read()
288 fd.close()
289 except:
290 template_data = default_template_data
291
292 metadata['host_record'] = self.__pretty_print_host_records(hosts)
293
294 zonefilename='/var/named/' + zone
295 if self.logger is not None:
296 self.logger.info("generating (forward) %s" % zonefilename)
297 self.templar.render(template_data, metadata, zonefilename, None)
298
299 for (zone, hosts) in reverse.iteritems():
300 metadata = {
301 'cobbler_server': cobbler_server,
302 'serial': serial,
303 'host_record': ''
304 }
305
306 # grab zone-specific template if it exists
307 try:
308 fd = open('/etc/cobbler/zone_templates/%s' % zone)
309 template_data = fd.read()
310 fd.close()
311 except:
312 template_data = default_template_data
313
314 metadata['host_record'] = self.__pretty_print_host_records(hosts, rectype='PTR')
315
316 zonefilename='/var/named/' + zone
317 if self.logger is not None:
318 self.logger.info("generating (reverse) %s" % zonefilename)
319 self.templar.render(template_data, metadata, zonefilename, None)
320
321
322 def write_dns_files(self):
323 """
324 BIND files are written when manage_dns is set in
325 /var/lib/cobbler/settings.
326 """
327
328 self.__write_named_conf()
329 self.__write_zone_files()
330
331def get_manager(config,logger):
332 return BindManager(config,logger)
3330
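
The zone selection in the removed manage_bind.py above works by picking, for each system interface, the longest configured zone that is a suffix of its dns_name, then storing only the leftover labels under that zone. A standalone sketch of that matching with made-up zones and a made-up host (sample data only, not taken from this package):

import re

# made-up sample data: two overlapping managed zones and one interface
zones = {"c.d.e": {}, "b.c.d.e": {}}
host, ip = "a.b.c.d.e", "192.168.1.10"

# pick the longest zone that is a suffix of the dns_name
best_match = ""
for zone in zones.keys():
    if re.search(r'\.%s$' % zone, host) and len(zone) > len(best_match):
        best_match = zone

if best_match:
    # keep only the part of the name left after stripping the zone suffix
    short_host = re.sub(r'\.%s$' % best_match, '', host)
    zones[best_match][short_host] = ip

print zones  # 'a' ends up under 'b.c.d.e', the longer of the two matches
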
=== removed file '.pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py'
--- .pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py 2011-04-18 11:15:59 +0000
+++ .pc/40_ubuntu_bind9_management.patch/cobbler/modules/sync_post_restart_services.py 1970-01-01 00:00:00 +0000
@@ -1,66 +0,0 @@
1import distutils.sysconfig
2import sys
3import os
4import traceback
5import cexceptions
6import os
7import sys
8import xmlrpclib
9import cobbler.module_loader as module_loader
10import cobbler.utils as utils
11
12plib = distutils.sysconfig.get_python_lib()
13mod_path="%s/cobbler" % plib
14sys.path.insert(0, mod_path)
15
16def register():
17 # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
18 # the return of this method indicates the trigger type
19 return "/var/lib/cobbler/triggers/sync/post/*"
20
21def run(api,args,logger):
22
23 settings = api.settings()
24
25 manage_dhcp = str(settings.manage_dhcp).lower()
26 manage_dns = str(settings.manage_dns).lower()
27 manage_tftpd = str(settings.manage_tftpd).lower()
28 restart_dhcp = str(settings.restart_dhcp).lower()
29 restart_dns = str(settings.restart_dns).lower()
30
31 which_dhcp_module = module_loader.get_module_from_file("dhcp","module",just_name=True).strip()
32 which_dns_module = module_loader.get_module_from_file("dns","module",just_name=True).strip()
33
34 # special handling as we don't want to restart it twice
35 has_restarted_dnsmasq = False
36
37 rc = 0
38 if manage_dhcp != "0":
39 if which_dhcp_module == "manage_isc":
40 if restart_dhcp != "0":
41 rc = utils.subprocess_call(logger, "dhcpd -t -q", shell=True)
42 if rc != 0:
43 logger.error("dhcpd -t failed")
44 return 1
45 rc = utils.subprocess_call(logger,"service isc-dhcp-server restart", shell=True)
46 elif which_dhcp_module == "manage_dnsmasq":
47 if restart_dhcp != "0":
48 rc = utils.subprocess_call(logger, "service dnsmasq restart")
49 has_restarted_dnsmasq = True
50 else:
51 logger.error("unknown DHCP engine: %s" % which_dhcp_module)
52 rc = 411
53
54 if manage_dns != "0" and restart_dns != "0":
55 if which_dns_module == "manage_bind":
56 rc = utils.subprocess_call(logger, "service named restart", shell=True)
57 elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq:
58 rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True)
59 elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq:
60 rc = 0
61 else:
62 logger.error("unknown DNS engine: %s" % which_dns_module)
63 rc = 412
64
65 return rc
66
670
=== removed directory '.pc/40_ubuntu_bind9_management.patch/templates'
=== removed directory '.pc/40_ubuntu_bind9_management.patch/templates/etc'
=== removed file '.pc/40_ubuntu_bind9_management.patch/templates/etc/named.template'
--- .pc/40_ubuntu_bind9_management.patch/templates/etc/named.template 2011-04-18 11:15:59 +0000
+++ .pc/40_ubuntu_bind9_management.patch/templates/etc/named.template 1970-01-01 00:00:00 +0000
@@ -1,31 +0,0 @@
1options {
2 listen-on port 53 { 127.0.0.1; };
3 directory "/var/named";
4 dump-file "/var/named/data/cache_dump.db";
5 statistics-file "/var/named/data/named_stats.txt";
6 memstatistics-file "/var/named/data/named_mem_stats.txt";
7 allow-query { localhost; };
8 recursion yes;
9};
10
11logging {
12 channel default_debug {
13 file "data/named.run";
14 severity dynamic;
15 };
16};
17
18#for $zone in $forward_zones
19zone "${zone}." {
20 type master;
21 file "$zone";
22};
23
24#end for
25#for $zone, $arpa in $reverse_zones
26zone "${arpa}." {
27 type master;
28 file "$zone";
29};
30
31#end for
320
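
The Cheetah loops at the end of the removed named.template above emit one master-zone stanza per forward and reverse zone. A rough Python sketch of the forward-zone expansion, using a made-up zone list (the real values come from cobbler's manage_forward_zones setting):

# made-up zones for illustration; cobbler fills these in from its settings
forward_zones = ["example.com", "lab.example.com"]

stanza = """
zone "%(zone)s." {
    type master;
    file "%(zone)s";
};
"""

rendered = "".join([stanza % {"zone": z} for z in forward_zones])
print rendered
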
=== removed directory '.pc/41_update_tree_path_with_arch.patch'
=== removed directory '.pc/41_update_tree_path_with_arch.patch/cobbler'
=== removed directory '.pc/41_update_tree_path_with_arch.patch/cobbler/modules'
=== removed file '.pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py'
--- .pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py 2011-05-02 18:26:03 +0000
+++ .pc/41_update_tree_path_with_arch.patch/cobbler/modules/manage_import_debian_ubuntu.py 1970-01-01 00:00:00 +0000
@@ -1,777 +0,0 @@
1"""
2This is some of the code behind 'cobbler sync'.
3
4Copyright 2006-2009, Red Hat, Inc
5Michael DeHaan <mdehaan@redhat.com>
6John Eckersberg <jeckersb@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import os
25import os.path
26import shutil
27import time
28import sys
29import glob
30import traceback
31import errno
32import re
33from utils import popen2
34from shlex import shlex
35
36
37import utils
38from cexceptions import *
39import templar
40
41import item_distro
42import item_profile
43import item_repo
44import item_system
45
46from utils import _
47
48def register():
49 """
50 The mandatory cobbler module registration hook.
51 """
52 return "manage/import"
53
54
55class ImportDebianUbuntuManager:
56
57 def __init__(self,config,logger):
58 """
59 Constructor
60 """
61 self.logger = logger
62 self.config = config
63 self.api = config.api
64 self.distros = config.distros()
65 self.profiles = config.profiles()
66 self.systems = config.systems()
67 self.settings = config.settings()
68 self.repos = config.repos()
69 self.templar = templar.Templar(config)
70
71 # required function for import modules
72 def what(self):
73 return "import/debian_ubuntu"
74
75 # required function for import modules
76 def check_for_signature(self,path,cli_breed):
77 signatures = [
78 'pool',
79 ]
80
81 #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path)
82 for signature in signatures:
83 d = os.path.join(path,signature)
84 if os.path.exists(d):
85 self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature)
86 return (True,signature)
87
88 if cli_breed and cli_breed in self.get_valid_breeds():
89 self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path)
90 return (True,None)
91
92 return (False,None)
93
94 # required function for import modules
95 def run(self,pkgdir,mirror,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None):
96 self.pkgdir = pkgdir
97 self.mirror = mirror
98 self.mirror_name = mirror_name
99 self.network_root = network_root
100 self.kickstart_file = kickstart_file
101 self.rsync_flags = rsync_flags
102 self.arch = arch
103 self.breed = breed
104 self.os_version = os_version
105
106 # some fixups for the XMLRPC interface, which does not use "None"
107 if self.arch == "": self.arch = None
108 if self.mirror == "": self.mirror = None
109 if self.mirror_name == "": self.mirror_name = None
110 if self.kickstart_file == "": self.kickstart_file = None
111 if self.os_version == "": self.os_version = None
112 if self.rsync_flags == "": self.rsync_flags = None
113 if self.network_root == "": self.network_root = None
114
115 # If no breed was specified on the command line, figure it out
116 if self.breed == None:
117 self.breed = self.get_breed_from_directory()
118 if not self.breed:
119 utils.die(self.logger,"import failed - could not determine breed of debian-based distro")
120
121 # debug log stuff for testing
122 #self.logger.info("DEBUG: self.pkgdir = %s" % str(self.pkgdir))
123 #self.logger.info("DEBUG: self.mirror = %s" % str(self.mirror))
124 #self.logger.info("DEBUG: self.mirror_name = %s" % str(self.mirror_name))
125 #self.logger.info("DEBUG: self.network_root = %s" % str(self.network_root))
126 #self.logger.info("DEBUG: self.kickstart_file = %s" % str(self.kickstart_file))
127 #self.logger.info("DEBUG: self.rsync_flags = %s" % str(self.rsync_flags))
128 #self.logger.info("DEBUG: self.arch = %s" % str(self.arch))
129 #self.logger.info("DEBUG: self.breed = %s" % str(self.breed))
130 #self.logger.info("DEBUG: self.os_version = %s" % str(self.os_version))
131
132 # both --import and --name are required arguments
133
134 if self.mirror is None:
135 utils.die(self.logger,"import failed. no --path specified")
136 if self.mirror_name is None:
137 utils.die(self.logger,"import failed. no --name specified")
138
139 # if --arch is supplied, validate it to ensure it's valid
140
141 if self.arch is not None and self.arch != "":
142 self.arch = self.arch.lower()
143 if self.arch == "x86":
144 # be consistent
145 self.arch = "i386"
146 if self.arch not in self.get_valid_arches():
147 utils.die(self.logger,"arch must be one of: %s" % ", ".join(self.get_valid_arches()))
148
149 # if we're going to do any copying, set where to put things
150 # and then make sure nothing is already there.
151
152 self.path = os.path.normpath( "%s/ks_mirror/%s" % (self.settings.webdir, self.mirror_name) )
153 if os.path.exists(self.path) and self.arch is None:
154 # FIXME : Raise exception even when network_root is given ?
155 utils.die(self.logger,"Something already exists at this import location (%s). You must specify --arch to avoid potentially overwriting existing files." % self.path)
156
157 # import takes a --kickstart for forcing selection that can't be used in all circumstances
158
159 if self.kickstart_file and not self.breed:
160 utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected")
161
162 if self.os_version and not self.breed:
163 utils.die(self.logger,"OS version can only be specified when a specific breed is selected")
164
165 if self.breed and self.breed.lower() not in self.get_valid_breeds():
166 utils.die(self.logger,"Supplied import breed is not supported by this module")
167
168 # if --arch is supplied, make sure the user is not importing a path with a different
169 # arch, which would just be silly.
170
171 if self.arch:
172 # append the arch path to the name if the arch is not already
173 # found in the name.
174 for x in self.get_valid_arches():
175 if self.path.lower().find(x) != -1:
176 if self.arch != x :
177 utils.die(self.logger,"Architecture found on pathname (%s) does not fit the one given in command line (%s)"%(x,self.arch))
178 break
179 else:
180 # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again
181 self.path += ("-%s" % self.arch)
182
183 # make the output path and mirror content but only if not specifying that a network
184 # accessible support location already exists (this is --available-as on the command line)
185
186 if self.network_root is None:
187 # we need to mirror (copy) the files
188
189 utils.mkdir(self.path)
190
191 if self.mirror.startswith("http://") or self.mirror.startswith("ftp://") or self.mirror.startswith("nfs://"):
192
193 # http mirrors are kind of primitive. rsync is better.
194 # that's why this isn't documented in the manpage and we don't support them.
195 # TODO: how about adding recursive FTP as an option?
196
197 utils.die(self.logger,"unsupported protocol")
198
199 else:
200
201 # good, we're going to use rsync..
202 # we don't use SSH for public mirrors and local files.
203 # presence of user@host syntax means use SSH
204
205 # kick off the rsync now
206
207 if not utils.rsync_files(self.mirror, self.path, self.rsync_flags, self.logger):
208 utils.die(self.logger, "failed to rsync the files")
209
210 else:
211
212 # rather than mirroring, we're going to assume the path is available
213 # over http, ftp, and nfs, perhaps on an external filer. scanning still requires
214 # --mirror is a filesystem path, but --available-as marks the network path
215
216 if not os.path.exists(self.mirror):
217 utils.die(self.logger, "path does not exist: %s" % self.mirror)
218
219 # find the filesystem part of the path, after the server bits, as each distro
220 # URL needs to be calculated relative to this.
221
222 if not self.network_root.endswith("/"):
223 self.network_root = self.network_root + "/"
224 self.path = os.path.normpath( self.mirror )
225 valid_roots = [ "nfs://", "ftp://", "http://" ]
226 for valid_root in valid_roots:
227 if self.network_root.startswith(valid_root):
228 break
229 else:
230 utils.die(self.logger, "Network root given to --available-as must be nfs://, ftp://, or http://")
231 if self.network_root.startswith("nfs://"):
232 try:
233 (a,b,rest) = self.network_root.split(":",3)
234 except:
235 utils.die(self.logger, "Network root given to --available-as is missing a colon, please see the manpage example.")
236
237 # now walk the filesystem looking for distributions that match certain patterns
238
239 self.logger.info("adding distros")
240 distros_added = []
241 # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST
242 os.path.walk(self.path, self.distro_adder, distros_added)
243
244 # find out if we can auto-create any repository records from the install tree
245
246 if self.network_root is None:
247 self.logger.info("associating repos")
248 # FIXME: this automagic is not possible (yet) without mirroring
249 self.repo_finder(distros_added)
250
251 # find the most appropriate answer files for each profile object
252
253 self.logger.info("associating kickstarts")
254 self.kickstart_finder(distros_added)
255
256 # ensure bootloaders are present
257 self.api.pxegen.copy_bootloaders()
258
259 return True
260
261 # required function for import modules
262 def get_valid_arches(self):
263 return ["i386", "ppc", "x86_64", "x86",]
264
265 # required function for import modules
266 def get_valid_breeds(self):
267 return ["debian","ubuntu"]
268
269 # required function for import modules
270 def get_valid_os_versions(self):
271 if self.breed == "debian":
272 return ["etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "experimental",]
273 elif self.breed == "ubuntu":
274 return ["dapper", "hardy", "karmic", "lucid", "maverick", "natty",]
275 else:
276 return []
277
278 def get_valid_repo_breeds(self):
279 return ["apt",]
280
281 def get_release_files(self):
282 """
283 Find distro release packages.
284 """
285 return glob.glob(os.path.join(self.get_rootdir(), "dists/*"))
286
287 def get_breed_from_directory(self):
288 for breed in self.get_valid_breeds():
289 # NOTE : Although we break the loop after the first match,
290 # multiple debian derived distros can actually live at the same pool -- JP
291 d = os.path.join(self.mirror, breed)
292 if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.mirror)) or os.path.basename(self.mirror) == breed:
293 return breed
294 else:
295 return None
296
297 def get_tree_location(self, distro):
298 """
299 Once a distribution is identified, find the part of the distribution
300 that has the URL in it that we want to use for kickstarting the
301 distribution, and create a ksmeta variable $tree that contains this.
302 """
303
304 base = self.get_rootdir()
305
306 if self.network_root is None:
307 dists_path = os.path.join(self.path, "dists")
308 if os.path.isdir(dists_path):
309 tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
310 else:
311 tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
312 self.set_install_tree(distro, tree)
313 else:
314 # where we assign the kickstart source is relative to our current directory
315 # and the input start directory in the crawl. We find the path segments
316 # between and tack them on the network source path to find the explicit
317 # network path to the distro that Anaconda can digest.
318 tail = self.path_tail(self.path, base)
319 tree = self.network_root[:-1] + tail
320 self.set_install_tree(distro, tree)
321
322 return
323
324 def repo_finder(self, distros_added):
325 for distro in distros_added:
326 self.logger.info("traversing distro %s" % distro.name)
327 # FIXME : Shouldn't this be decided by the value of self.network_root ?
328 if distro.kernel.find("ks_mirror") != -1:
329 basepath = os.path.dirname(distro.kernel)
330 top = self.get_rootdir()
331 self.logger.info("descent into %s" % top)
332 dists_path = os.path.join(self.path, "dists")
333 if not os.path.isdir(dists_path):
334 self.process_repos()
335 else:
336 self.logger.info("this distro isn't mirrored")
337
338 def process_repos(self):
339 pass
340
341 def distro_adder(self,distros_added,dirname,fnames):
342 """
343 This is an os.path.walk routine that finds distributions in the directory
344 to be scanned and then creates them.
345 """
346
347 # FIXME: If there are more than one kernel or initrd image on the same directory,
348 # results are unpredictable
349
350 initrd = None
351 kernel = None
352
353 for x in fnames:
354 adtls = []
355
356 fullname = os.path.join(dirname,x)
357 if os.path.islink(fullname) and os.path.isdir(fullname):
358 if fullname.startswith(self.path):
359 self.logger.warning("avoiding symlink loop")
360 continue
361 self.logger.info("following symlink: %s" % fullname)
362 os.path.walk(fullname, self.distro_adder, distros_added)
363
364 if ( x.startswith("initrd.gz") ) and x != "initrd.size":
365 initrd = os.path.join(dirname,x)
366 if ( x.startswith("linux") ) and x.find("initrd") == -1:
367 kernel = os.path.join(dirname,x)
368
369 # if we've collected a matching kernel and initrd pair, turn them into an entry and add them to the list
370 if initrd is not None and kernel is not None:
371 adtls.append(self.add_entry(dirname,kernel,initrd))
372 kernel = None
373 initrd = None
374
375 for adtl in adtls:
376 distros_added.extend(adtl)
377
378 def add_entry(self,dirname,kernel,initrd):
379 """
380 When we find a directory with a valid kernel/initrd in it, create the distribution objects
381 as appropriate and save them. This includes creating xen and rescue distros/profiles
382 if possible.
383 """
384
385 proposed_name = self.get_proposed_name(dirname,kernel)
386 proposed_arch = self.get_proposed_arch(dirname)
387
388 if self.arch and proposed_arch and self.arch != proposed_arch:
389 utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch))
390
391 archs = self.learn_arch_from_tree()
392 if not archs:
393 if self.arch:
394 archs.append( self.arch )
395 else:
396 if self.arch and self.arch not in archs:
397 utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir()))
398 if proposed_arch:
399 if archs and proposed_arch not in archs:
400 self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir()))
401 return
402
403 archs = [ proposed_arch ]
404
405 if len(archs)>1:
406 self.logger.warning("- Warning : Multiple archs found : %s" % (archs))
407
408 distros_added = []
409
410 for pxe_arch in archs:
411 name = proposed_name + "-" + pxe_arch
412 existing_distro = self.distros.find(name=name)
413
414 if existing_distro is not None:
415 self.logger.warning("skipping import, as distro name already exists: %s" % name)
416 continue
417
418 else:
419 self.logger.info("creating new distro: %s" % name)
420 distro = self.config.new_distro()
421
422 if name.find("-autoboot") != -1:
423 # this is an artifact of some EL-3 imports
424 continue
425
426 distro.set_name(name)
427 distro.set_kernel(kernel)
428 distro.set_initrd(initrd)
429 distro.set_arch(pxe_arch)
430 distro.set_breed(self.breed)
431 # If a version was supplied on command line, we set it now
432 if self.os_version:
433 distro.set_os_version(self.os_version)
434
435 self.distros.add(distro,save=True)
436 distros_added.append(distro)
437
438 existing_profile = self.profiles.find(name=name)
439
440 # see if the profile name is already used, if so, skip it and
441 # do not modify the existing profile
442
443 if existing_profile is None:
444 self.logger.info("creating new profile: %s" % name)
445 #FIXME: The created profile holds a default kickstart, and should be breed specific
446 profile = self.config.new_profile()
447 else:
448 self.logger.info("skipping existing profile, name already exists: %s" % name)
449 continue
450
451 # save our minimal profile which just points to the distribution and a good
452 # default answer file
453
454 profile.set_name(name)
455 profile.set_distro(name)
456 profile.set_kickstart(self.kickstart_file)
457
458 # depending on the name of the profile we can define a good virt-type
459 # for usage with koan
460
461 if name.find("-xen") != -1:
462 profile.set_virt_type("xenpv")
463 elif name.find("vmware") != -1:
464 profile.set_virt_type("vmware")
465 else:
466 profile.set_virt_type("qemu")
467
468 # save our new profile to the collection
469
470 self.profiles.add(profile,save=True)
471
472 return distros_added
473
474 def get_proposed_name(self,dirname,kernel=None):
475 """
476 Given a directory name where we have a kernel/initrd pair, try to autoname
477 the distribution (and profile) object based on the contents of that path
478 """
479
480 if self.network_root is not None:
481 name = self.mirror_name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/"))
482 else:
483 # remove the part that says /var/www/cobbler/ks_mirror/name
484 name = "-".join(dirname.split("/")[5:])
485
486 if kernel is not None and kernel.find("PAE") != -1:
487 name = name + "-PAE"
488
489 # These are all Ubuntu's doing, the netboot images are buried pretty
490 # deep. ;-) -JC
491 name = name.replace("-netboot","")
492 name = name.replace("-ubuntu-installer","")
493 name = name.replace("-amd64","")
494 name = name.replace("-i386","")
495
496 # we know that some kernel paths should not be in the name
497
498 name = name.replace("-images","")
499 name = name.replace("-pxeboot","")
500 name = name.replace("-install","")
501 name = name.replace("-isolinux","")
502
503 # some paths above the media root may have extra path segments we want
504 # to clean up
505
506 name = name.replace("-os","")
507 name = name.replace("-tree","")
508 name = name.replace("var-www-cobbler-", "")
509 name = name.replace("ks_mirror-","")
510 name = name.replace("--","-")
511
512 # remove any architecture name related string, as real arch will be appended later
513
514 name = name.replace("chrp","ppc64")
515
516 for separator in [ '-' , '_' , '.' ] :
517 for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]:
518 name = name.replace("%s%s" % ( separator , arch ),"")
519
520 return name
521
522 def get_proposed_arch(self,dirname):
523 """
524 Given a directory name, can we infer an architecture from a path segment?
525 """
526 if dirname.find("x86_64") != -1 or dirname.find("amd") != -1:
527 return "x86_64"
528 if dirname.find("ia64") != -1:
529 return "ia64"
530 if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1:
531 return "i386"
532 if dirname.find("s390x") != -1:
533 return "s390x"
534 if dirname.find("s390") != -1:
535 return "s390"
536 if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1:
537 return "ppc64"
538 if dirname.find("ppc32") != -1:
539 return "ppc"
540 if dirname.find("ppc") != -1:
541 return "ppc"
542 return None
543
544 def arch_walker(self,foo,dirname,fnames):
545 """
546 See docs on learn_arch_from_tree.
547
548 The TRY_LIST is used to speed up search, and should be dropped for default importer
549 Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem
550
551 This method is useful to get the archs, but also to package type and a raw guess of the breed
552 """
553
554 # try to find a kernel header RPM and then look at it's arch.
555 for x in fnames:
556 if self.match_kernelarch_file(x):
557 for arch in self.get_valid_arches():
558 if x.find(arch) != -1:
559 foo[arch] = 1
560 for arch in [ "i686" , "amd64" ]:
561 if x.find(arch) != -1:
562 foo[arch] = 1
563
564 def kickstart_finder(self,distros_added):
565 """
566 For all of the profiles in the config w/o a kickstart, use the
567 given kickstart file, or look at the kernel path, from that,
568 see if we can guess the distro, and if we can, assign a kickstart
569 if one is available for it.
570 """
571 for profile in self.profiles:
572 distro = self.distros.find(name=profile.get_conceptual_parent().name)
573 if distro is None or not (distro in distros_added):
574 continue
575
576 kdir = os.path.dirname(distro.kernel)
577 if self.kickstart_file == None:
578 for file in self.get_release_files():
579 results = self.scan_pkg_filename(file)
580 # FIXME : If os is not found on tree but set with CLI, no kickstart is searched
581 if results is None:
582 self.logger.warning("skipping %s" % file)
583 continue
584 (flavor, major, minor, release) = results
585 # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata
586 #version , ks = self.set_variance(flavor, major, minor, distro.arch)
587 if self.os_version:
588 if self.os_version != flavor:
589 utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor))
590 distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch))
591 distro.set_os_version(flavor)
592 # is this even valid for debian/ubuntu? - jcammarata
593 #ds = self.get_datestamp()
594 #if ds is not None:
595 # distro.set_tree_build_time(ds)
596 profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed")
597 self.profiles.add(profile,save=True)
598
599 self.configure_tree_location(distro)
600 self.distros.add(distro,save=True) # re-save
601 self.api.serialize()
602
603 def configure_tree_location(self, distro):
604 """
605 Once a distribution is identified, find the part of the distribution
606 that has the URL in it that we want to use for kickstarting the
607 distribution, and create a ksmeta variable $tree that contains this.
608 """
609
610 base = self.get_rootdir()
611
612 if self.network_root is None:
613 dists_path = os.path.join( self.path , "dists" )
614 if os.path.isdir( dists_path ):
615 tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
616 else:
617 tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
618 self.set_install_tree(distro, tree)
619 else:
620 # where we assign the kickstart source is relative to our current directory
621 # and the input start directory in the crawl. We find the path segments
622 # between and tack them on the network source path to find the explicit
623 # network path to the distro that Anaconda can digest.
624 tail = utils.path_tail(self.path, base)
625 tree = self.network_root[:-1] + tail
626 self.set_install_tree(distro, tree)
627
628 def get_rootdir(self):
629 return self.mirror
630
631 def get_pkgdir(self):
632 if not self.pkgdir:
633 return None
634 return os.path.join(self.get_rootdir(),self.pkgdir)
635
636 def set_install_tree(self, distro, url):
637 distro.ks_meta["tree"] = url
638
639 def learn_arch_from_tree(self):
640 """
641 If a distribution is imported from DVD, there is a good chance the path doesn't
642 contain the arch and we should add it back in so that it's part of the
643 meaningful name ... so this code helps figure out the arch name. This is important
644 for producing predictable distro names (and profile names) from differing import sources
645 """
646 result = {}
647 # FIXME : this is called only once, should not be a walk
648 if self.get_pkgdir():
649 os.path.walk(self.get_pkgdir(), self.arch_walker, result)
650 if result.pop("amd64",False):
651 result["x86_64"] = 1
652 if result.pop("i686",False):
653 result["i386"] = 1
654 return result.keys()
655
656 def match_kernelarch_file(self, filename):
657 """
658 Is the given filename a kernel filename?
659 """
660 if not filename.endswith("deb"):
661 return False
662 if filename.startswith("linux-headers-"):
663 return True
664 return False
665
666 def scan_pkg_filename(self, file):
667 """
668 Determine what the distro is based on the release package filename.
669 """
670 # FIXME: all of these dist_names should probably be put in a function
671 # which would be called in place of looking in codes.py. Right now
672 # you have to update both codes.py and this to add a new release
673 if self.breed == "debian":
674 dist_names = ['etch','lenny',]
675 elif self.breed == "ubuntu":
676 dist_names = ['dapper','hardy','intrepid','jaunty','karmic','lynx','maverick','natty',]
677 else:
678 return None
679
680 if os.path.basename(file) in dist_names:
681 release_file = os.path.join(file,'Release')
682 self.logger.info("Found %s release file: %s" % (self.breed,release_file))
683
684 f = open(release_file,'r')
685 lines = f.readlines()
686 f.close()
687
688 for line in lines:
689 if line.lower().startswith('version: '):
690 version = line.split(':')[1].strip()
691 values = version.split('.')
692 if len(values) == 1:
693 # I don't think you'd ever hit this currently with debian or ubuntu,
694 # just including it for safety reasons
695 return (os.path.basename(file), values[0], "0", "0")
696 elif len(values) == 2:
697 return (os.path.basename(file), values[0], values[1], "0")
698 elif len(values) > 2:
699 return (os.path.basename(file), values[0], values[1], values[2])
700 return None
701
702 def get_datestamp(self):
703 """
704 Not used for debian/ubuntu... should probably be removed? - jcammarata
705 """
706 pass
707
708 def set_variance(self, flavor, major, minor, arch):
709 """
710 Set distro specific versioning.
711 """
712 # I don't think this is required anymore, as the scan_pkg_filename() function
713 # above does everything we need it to - jcammarata
714 #
715 #if self.breed == "debian":
716 # dist_names = { '4.0' : "etch" , '5.0' : "lenny" }
717 # dist_vers = "%s.%s" % ( major , minor )
718 # os_version = dist_names[dist_vers]
719 #
720 # return os_version , "/var/lib/cobbler/kickstarts/sample.seed"
721 #elif self.breed == "ubuntu":
722 # # Release names taken from wikipedia
723 # dist_names = { '6.4' :"dapper",
724 # '8.4' :"hardy",
725 # '8.10' :"intrepid",
726 # '9.4' :"jaunty",
727 # '9.10' :"karmic",
728 # '10.4' :"lynx",
729 # '10.10':"maverick",
730 # '11.4' :"natty",
731 # }
732 # dist_vers = "%s.%s" % ( major , minor )
733 # if not dist_names.has_key( dist_vers ):
734 # dist_names['4ubuntu2.0'] = "IntrepidIbex"
735 # os_version = dist_names[dist_vers]
736 #
737 # return os_version , "/var/lib/cobbler/kickstarts/sample.seed"
738 #else:
739 # return None
740 pass
741
742 def process_repos(self, main_importer, distro):
743 # Create a disabled repository for the new distro, and the security updates
744 #
745 # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage
746
747 repo = item_repo.Repo(main_importer.config)
748 repo.set_breed( "apt" )
749 repo.set_arch( distro.arch )
750 repo.set_keep_updated( False )
751 repo.yumopts["--ignore-release-gpg"] = None
752 repo.yumopts["--verbose"] = None
753 repo.set_name( distro.name )
754 repo.set_os_version( distro.os_version )
755 # NOTE : The location of the mirror should come from timezone
756 repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) )
757
758 security_repo = item_repo.Repo(main_importer.config)
759 security_repo.set_breed( "apt" )
760 security_repo.set_arch( distro.arch )
761 security_repo.set_keep_updated( False )
762 security_repo.yumopts["--ignore-release-gpg"] = None
763 security_repo.yumopts["--verbose"] = None
764 security_repo.set_name( distro.name + "-security" )
765 security_repo.set_os_version( distro.os_version )
766 # There are no official mirrors for security updates
767 security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' )
768
769 self.logger.info("Added repos for %s" % distro.name)
770 repos = main_importer.config.repos()
771 repos.add(repo,save=True)
772 repos.add(security_repo,save=True)
773
774# ==========================================================================
775
776def get_import_manager(config,logger):
777 return ImportDebianUbuntuManager(config,logger)
7780
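
As context for the import module above: scan_pkg_filename() keys the distro version off the "Version:" line of a dists/<suite>/Release file. A minimal standalone sketch of that parsing, with a hypothetical path and assuming the same three-part version handling, would look like:

import os

def parse_release_version(release_file):
    # Read the "Version:" line of a Release file and return (major, minor, release),
    # padding missing components with "0" the way scan_pkg_filename() does.
    with open(release_file, "r") as f:
        for line in f:
            if line.lower().startswith("version: "):
                values = line.split(":")[1].strip().split(".")
                values += ["0"] * (3 - len(values))
                return tuple(values[:3])
    return None

# e.g. parse_release_version("/var/www/cobbler/ks_mirror/my-mirror/dists/natty/Release")
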
=== removed file '.pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py'
--- .pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py 2011-05-02 18:26:03 +0000
+++ .pc/42_fix_repomirror_create_sync.patch/cobbler/codes.py 1970-01-01 00:00:00 +0000
@@ -1,98 +0,0 @@
1
2"""
3various codes and constants used by Cobbler
4
5Copyright 2006-2009, Red Hat, Inc
6Michael DeHaan <mdehaan@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import utils
25
26# OS variants table. This is a variance of the data from
27# ls /usr/lib/python2.X/site-packages/virtinst/FullVirtGuest.py
28# but replicated here as we can't assume cobbler is installed on a system with libvirt.
29# in many cases it will not be (i.e. old EL4 server, etc) and we need this info to
30# know how to validate --os-variant and --os-version.
31#
32# The keys of this hash correspond with the --breed flag in Cobbler.
33# --breed has physical provisioning semantics as well as virt semantics.
34#
35# presense of something in this table does /not/ mean it's supported.
36# for instance, currently, "redhat", "debian", and "suse" do something interesting.
37# the rest are undefined (for now), this will evolve.
38
39VALID_OS_BREEDS = [
40 "redhat", "debian", "ubuntu", "suse", "generic", "windows", "unix", "vmware", "other"
41]
42
43VALID_OS_VERSIONS = {
44 "redhat" : [ "rhel2.1", "rhel3", "rhel4", "rhel5", "rhel6", "fedora5", "fedora6", "fedora7", "fedora8", "fedora9", "fedora10", "fedora11", "fedora12", "fedora13", "fedora14", "generic24", "generic26", "virtio26", "other" ],
45 "suse" : [ "sles10", "generic24", "generic26", "virtio26", "other" ],
46 "debian" : [ "etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "generic24", "generic26", "other" ],
47 "ubuntu" : [ "dapper", "hardy", "intrepid", "jaunty", "karmic", "lucid", "maverick", "natty" ],
48 "generic" : [ "generic24", "generic26", "other" ],
49 "windows" : [ "winxp", "win2k", "win2k3", "vista", "other" ],
50 "unix" : [ "solaris9", "solaris10", "freebsd6", "openbsd4", "other" ],
51 "vmware" : [ "esx4", "esxi4" ],
52 "other" : [ "msdos", "netware4", "netware5", "netware6", "generic", "other" ]
53}
54
55VALID_REPO_BREEDS = [
56# "rsync", "rhn", "yum", "apt"
57 "rsync", "rhn", "yum"
58]
59
60def uniquify(seq, idfun=None):
61
62 # this is odd (older mod_python scoping bug?) but we can't use
63 # utils.uniquify here because on older distros (RHEL4/5)
64 # mod_python gets another utils. As a result,
65 # it is duplicated here for now. Bad, but ... now you know.
66 #
67 # credit: http://www.peterbe.com/plog/uniqifiers-benchmark
68 # FIXME: if this is actually slower than some other way, overhaul it
69
70 if idfun is None:
71 def idfun(x):
72 return x
73 seen = {}
74 result = []
75 for item in seq:
76 marker = idfun(item)
77 if marker in seen:
78 continue
79 seen[marker] = 1
80 result.append(item)
81 return result
82
83
84def get_all_os_versions():
85 """
86 Collapse the above list of OS versions for usage/display by the CLI/webapp.
87 """
88 results = ['']
89 for x in VALID_OS_VERSIONS.keys():
90 for y in VALID_OS_VERSIONS[x]:
91 results.append(y)
92
93 results = uniquify(results)
94
95 results.sort()
96 return results
97
98
990
=== removed directory '.pc/42_fix_repomirror_create_sync.patch/cobbler/modules'
=== removed file '.pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py'
--- .pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py 2011-05-02 18:26:03 +0000
+++ .pc/42_fix_repomirror_create_sync.patch/cobbler/modules/manage_import_debian_ubuntu.py 1970-01-01 00:00:00 +0000
@@ -1,779 +0,0 @@
1"""
2This is some of the code behind 'cobbler sync'.
3
4Copyright 2006-2009, Red Hat, Inc
5Michael DeHaan <mdehaan@redhat.com>
6John Eckersberg <jeckersb@redhat.com>
7
8This program is free software; you can redistribute it and/or modify
9it under the terms of the GNU General Public License as published by
10the Free Software Foundation; either version 2 of the License, or
11(at your option) any later version.
12
13This program is distributed in the hope that it will be useful,
14but WITHOUT ANY WARRANTY; without even the implied warranty of
15MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16GNU General Public License for more details.
17
18You should have received a copy of the GNU General Public License
19along with this program; if not, write to the Free Software
20Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
2102110-1301 USA
22"""
23
24import os
25import os.path
26import shutil
27import time
28import sys
29import glob
30import traceback
31import errno
32import re
33from utils import popen2
34from shlex import shlex
35
36
37import utils
38from cexceptions import *
39import templar
40
41import item_distro
42import item_profile
43import item_repo
44import item_system
45
46from utils import _
47
48def register():
49 """
50 The mandatory cobbler module registration hook.
51 """
52 return "manage/import"
53
54
55class ImportDebianUbuntuManager:
56
57 def __init__(self,config,logger):
58 """
59 Constructor
60 """
61 self.logger = logger
62 self.config = config
63 self.api = config.api
64 self.distros = config.distros()
65 self.profiles = config.profiles()
66 self.systems = config.systems()
67 self.settings = config.settings()
68 self.repos = config.repos()
69 self.templar = templar.Templar(config)
70
71 # required function for import modules
72 def what(self):
73 return "import/debian_ubuntu"
74
75 # required function for import modules
76 def check_for_signature(self,path,cli_breed):
77 signatures = [
78 'pool',
79 ]
80
81 #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path)
82 for signature in signatures:
83 d = os.path.join(path,signature)
84 if os.path.exists(d):
85 self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature)
86 return (True,signature)
87
88 if cli_breed and cli_breed in self.get_valid_breeds():
89 self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path)
90 return (True,None)
91
92 return (False,None)
93
94 # required function for import modules
95 def run(self,pkgdir,mirror,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None):
96 self.pkgdir = pkgdir
97 self.mirror = mirror
98 self.mirror_name = mirror_name
99 self.network_root = network_root
100 self.kickstart_file = kickstart_file
101 self.rsync_flags = rsync_flags
102 self.arch = arch
103 self.breed = breed
104 self.os_version = os_version
105
106 # some fixups for the XMLRPC interface, which does not use "None"
107 if self.arch == "": self.arch = None
108 if self.mirror == "": self.mirror = None
109 if self.mirror_name == "": self.mirror_name = None
110 if self.kickstart_file == "": self.kickstart_file = None
111 if self.os_version == "": self.os_version = None
112 if self.rsync_flags == "": self.rsync_flags = None
113 if self.network_root == "": self.network_root = None
114
115 # If no breed was specified on the command line, figure it out
116 if self.breed == None:
117 self.breed = self.get_breed_from_directory()
118 if not self.breed:
119 utils.die(self.logger,"import failed - could not determine breed of debian-based distro")
120
121 # debug log stuff for testing
122 #self.logger.info("DEBUG: self.pkgdir = %s" % str(self.pkgdir))
123 #self.logger.info("DEBUG: self.mirror = %s" % str(self.mirror))
124 #self.logger.info("DEBUG: self.mirror_name = %s" % str(self.mirror_name))
125 #self.logger.info("DEBUG: self.network_root = %s" % str(self.network_root))
126 #self.logger.info("DEBUG: self.kickstart_file = %s" % str(self.kickstart_file))
127 #self.logger.info("DEBUG: self.rsync_flags = %s" % str(self.rsync_flags))
128 #self.logger.info("DEBUG: self.arch = %s" % str(self.arch))
129 #self.logger.info("DEBUG: self.breed = %s" % str(self.breed))
130 #self.logger.info("DEBUG: self.os_version = %s" % str(self.os_version))
131
132 # both --import and --name are required arguments
133
134 if self.mirror is None:
135 utils.die(self.logger,"import failed. no --path specified")
136 if self.mirror_name is None:
137 utils.die(self.logger,"import failed. no --name specified")
138
139 # if --arch is supplied, validate it to ensure it's valid
140
141 if self.arch is not None and self.arch != "":
142 self.arch = self.arch.lower()
143 if self.arch == "x86":
144 # be consistent
145 self.arch = "i386"
146 if self.arch not in self.get_valid_arches():
147 utils.die(self.logger,"arch must be one of: %s" % string.join(self.get_valid_arches(),", "))
148
149 # if we're going to do any copying, set where to put things
150 # and then make sure nothing is already there.
151
152 self.path = os.path.normpath( "%s/ks_mirror/%s" % (self.settings.webdir, self.mirror_name) )
153 if os.path.exists(self.path) and self.arch is None:
154 # FIXME : Raise exception even when network_root is given ?
155 utils.die(self.logger,"Something already exists at this import location (%s). You must specify --arch to avoid potentially overwriting existing files." % self.path)
156
157 # import takes a --kickstart for forcing selection that can't be used in all circumstances
158
159 if self.kickstart_file and not self.breed:
160 utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected")
161
162 if self.os_version and not self.breed:
163 utils.die(self.logger,"OS version can only be specified when a specific breed is selected")
164
165 if self.breed and self.breed.lower() not in self.get_valid_breeds():
166 utils.die(self.logger,"Supplied import breed is not supported by this module")
167
168 # if --arch is supplied, make sure the user is not importing a path with a different
169 # arch, which would just be silly.
170
171 if self.arch:
172 # append the arch path to the name if the arch is not already
173 # found in the name.
174 for x in self.get_valid_arches():
175 if self.path.lower().find(x) != -1:
176 if self.arch != x :
177 utils.die(self.logger,"Architecture found on pathname (%s) does not fit the one given in command line (%s)"%(x,self.arch))
178 break
179 else:
180 # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again
181 self.path += ("-%s" % self.arch)
182 # If arch is specified we also need to update the mirror name.
183 self.mirror_name = self.mirror_name + "-" + self.arch
184
185 # make the output path and mirror content but only if not specifying that a network
186 # accessible support location already exists (this is --available-as on the command line)
187
188 if self.network_root is None:
189 # we need to mirror (copy) the files
190
191 utils.mkdir(self.path)
192
193 if self.mirror.startswith("http://") or self.mirror.startswith("ftp://") or self.mirror.startswith("nfs://"):
194
195 # http mirrors are kind of primative. rsync is better.
196 # that's why this isn't documented in the manpage and we don't support them.
197 # TODO: how about adding recursive FTP as an option?
198
199 utils.die(self.logger,"unsupported protocol")
200
201 else:
202
203 # good, we're going to use rsync..
204 # we don't use SSH for public mirrors and local files.
205 # presence of user@host syntax means use SSH
206
207 # kick off the rsync now
208
209 if not utils.rsync_files(self.mirror, self.path, self.rsync_flags, self.logger):
210 utils.die(self.logger, "failed to rsync the files")
211
212 else:
213
214 # rather than mirroring, we're going to assume the path is available
215 # over http, ftp, and nfs, perhaps on an external filer. scanning still requires
216 # --mirror is a filesystem path, but --available-as marks the network path
217
218 if not os.path.exists(self.mirror):
219 utils.die(self.logger, "path does not exist: %s" % self.mirror)
220
221 # find the filesystem part of the path, after the server bits, as each distro
222 # URL needs to be calculated relative to this.
223
224 if not self.network_root.endswith("/"):
225 self.network_root = self.network_root + "/"
226 self.path = os.path.normpath( self.mirror )
227 valid_roots = [ "nfs://", "ftp://", "http://" ]
228 for valid_root in valid_roots:
229 if self.network_root.startswith(valid_root):
230 break
231 else:
232 utils.die(self.logger, "Network root given to --available-as must be nfs://, ftp://, or http://")
233 if self.network_root.startswith("nfs://"):
234 try:
235 (a,b,rest) = self.network_root.split(":",3)
236 except:
237 utils.die(self.logger, "Network root given to --available-as is missing a colon, please see the manpage example.")
238
239 # now walk the filesystem looking for distributions that match certain patterns
240
241 self.logger.info("adding distros")
242 distros_added = []
243 # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST
244 os.path.walk(self.path, self.distro_adder, distros_added)
245
246 # find out if we can auto-create any repository records from the install tree
247
248 if self.network_root is None:
249 self.logger.info("associating repos")
250 # FIXME: this automagic is not possible (yet) without mirroring
251 self.repo_finder(distros_added)
252
253 # find the most appropriate answer files for each profile object
254
255 self.logger.info("associating kickstarts")
256 self.kickstart_finder(distros_added)
257
258 # ensure bootloaders are present
259 self.api.pxegen.copy_bootloaders()
260
261 return True
262
263 # required function for import modules
264 def get_valid_arches(self):
265 return ["i386", "ppc", "x86_64", "x86",]
266
267 # required function for import modules
268 def get_valid_breeds(self):
269 return ["debian","ubuntu"]
270
271 # required function for import modules
272 def get_valid_os_versions(self):
273 if self.breed == "debian":
274 return ["etch", "lenny", "squeeze", "sid", "stable", "testing", "unstable", "experimental",]
275 elif self.breed == "ubuntu":
276 return ["dapper", "hardy", "karmic", "lucid", "maverick", "natty",]
277 else:
278 return []
279
280 def get_valid_repo_breeds(self):
281 return ["apt",]
282
283 def get_release_files(self):
284 """
285 Find distro release packages.
286 """
287 return glob.glob(os.path.join(self.get_rootdir(), "dists/*"))
288
289 def get_breed_from_directory(self):
290 for breed in self.get_valid_breeds():
291 # NOTE : Although we break the loop after the first match,
292 # multiple debian derived distros can actually live at the same pool -- JP
293 d = os.path.join(self.mirror, breed)
294 if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.mirror)) or os.path.basename(self.mirror) == breed:
295 return breed
296 else:
297 return None
298
299 def get_tree_location(self, distro):
300 """
301 Once a distribution is identified, find the part of the distribution
302 that has the URL in it that we want to use for kickstarting the
303 distribution, and create a ksmeta variable $tree that contains this.
304 """
305
306 base = self.get_rootdir()
307
308 if self.network_root is None:
309 dists_path = os.path.join(self.path, "dists")
310 if os.path.isdir(dists_path):
311 tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
312 else:
313 tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
314 self.set_install_tree(distro, tree)
315 else:
316 # where we assign the kickstart source is relative to our current directory
317 # and the input start directory in the crawl. We find the path segments
318 # between and tack them on the network source path to find the explicit
319 # network path to the distro that Anaconda can digest.
320 tail = self.path_tail(self.path, base)
321 tree = self.network_root[:-1] + tail
322 self.set_install_tree(distro, tree)
323
324 return
325
326 def repo_finder(self, distros_added):
327 for distro in distros_added:
328 self.logger.info("traversing distro %s" % distro.name)
329 # FIXME : Shouldn't decide this the value of self.network_root ?
330 if distro.kernel.find("ks_mirror") != -1:
331 basepath = os.path.dirname(distro.kernel)
332 top = self.get_rootdir()
333 self.logger.info("descent into %s" % top)
334 dists_path = os.path.join(self.path, "dists")
335 if not os.path.isdir(dists_path):
336 self.process_repos()
337 else:
338 self.logger.info("this distro isn't mirrored")
339
340 def process_repos(self):
341 pass
342
343 def distro_adder(self,distros_added,dirname,fnames):
344 """
345 This is an os.path.walk routine that finds distributions in the directory
346 to be scanned and then creates them.
347 """
348
349 # FIXME: If there are more than one kernel or initrd image on the same directory,
350 # results are unpredictable
351
352 initrd = None
353 kernel = None
354
355 for x in fnames:
356 adtls = []
357
358 fullname = os.path.join(dirname,x)
359 if os.path.islink(fullname) and os.path.isdir(fullname):
360 if fullname.startswith(self.path):
361 self.logger.warning("avoiding symlink loop")
362 continue
363 self.logger.info("following symlink: %s" % fullname)
364 os.path.walk(fullname, self.distro_adder, distros_added)
365
366 if ( x.startswith("initrd.gz") ) and x != "initrd.size":
367 initrd = os.path.join(dirname,x)
368 if ( x.startswith("linux") ) and x.find("initrd") == -1:
369 kernel = os.path.join(dirname,x)
370
371 # if we've collected a matching kernel and initrd pair, turn the in and add them to the list
372 if initrd is not None and kernel is not None:
373 adtls.append(self.add_entry(dirname,kernel,initrd))
374 kernel = None
375 initrd = None
376
377 for adtl in adtls:
378 distros_added.extend(adtl)
379
380 def add_entry(self,dirname,kernel,initrd):
381 """
382 When we find a directory with a valid kernel/initrd in it, create the distribution objects
383 as appropriate and save them. This includes creating xen and rescue distros/profiles
384 if possible.
385 """
386
387 proposed_name = self.get_proposed_name(dirname,kernel)
388 proposed_arch = self.get_proposed_arch(dirname)
389
390 if self.arch and proposed_arch and self.arch != proposed_arch:
391 utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch))
392
393 archs = self.learn_arch_from_tree()
394 if not archs:
395 if self.arch:
396 archs.append( self.arch )
397 else:
398 if self.arch and self.arch not in archs:
399 utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir()))
400 if proposed_arch:
401 if archs and proposed_arch not in archs:
402 self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir()))
403 return
404
405 archs = [ proposed_arch ]
406
407 if len(archs)>1:
408 self.logger.warning("- Warning : Multiple archs found : %s" % (archs))
409
410 distros_added = []
411
412 for pxe_arch in archs:
413 name = proposed_name + "-" + pxe_arch
414 existing_distro = self.distros.find(name=name)
415
416 if existing_distro is not None:
417 self.logger.warning("skipping import, as distro name already exists: %s" % name)
418 continue
419
420 else:
421 self.logger.info("creating new distro: %s" % name)
422 distro = self.config.new_distro()
423
424 if name.find("-autoboot") != -1:
425 # this is an artifact of some EL-3 imports
426 continue
427
428 distro.set_name(name)
429 distro.set_kernel(kernel)
430 distro.set_initrd(initrd)
431 distro.set_arch(pxe_arch)
432 distro.set_breed(self.breed)
433 # If a version was supplied on command line, we set it now
434 if self.os_version:
435 distro.set_os_version(self.os_version)
436
437 self.distros.add(distro,save=True)
438 distros_added.append(distro)
439
440 existing_profile = self.profiles.find(name=name)
441
442 # see if the profile name is already used, if so, skip it and
443 # do not modify the existing profile
444
445 if existing_profile is None:
446 self.logger.info("creating new profile: %s" % name)
447 #FIXME: The created profile holds a default kickstart, and should be breed specific
448 profile = self.config.new_profile()
449 else:
450 self.logger.info("skipping existing profile, name already exists: %s" % name)
451 continue
452
453 # save our minimal profile which just points to the distribution and a good
454 # default answer file
455
456 profile.set_name(name)
457 profile.set_distro(name)
458 profile.set_kickstart(self.kickstart_file)
459
460 # depending on the name of the profile we can define a good virt-type
461 # for usage with koan
462
463 if name.find("-xen") != -1:
464 profile.set_virt_type("xenpv")
465 elif name.find("vmware") != -1:
466 profile.set_virt_type("vmware")
467 else:
468 profile.set_virt_type("qemu")
469
470 # save our new profile to the collection
471
472 self.profiles.add(profile,save=True)
473
474 return distros_added
475
476 def get_proposed_name(self,dirname,kernel=None):
477 """
478 Given a directory name where we have a kernel/initrd pair, try to autoname
479 the distribution (and profile) object based on the contents of that path
480 """
481
482 if self.network_root is not None:
483 name = self.mirror_name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/"))
484 else:
485 # remove the part that says /var/www/cobbler/ks_mirror/name
486 name = "-".join(dirname.split("/")[5:])
487
488 if kernel is not None and kernel.find("PAE") != -1:
489 name = name + "-PAE"
490
491 # These are all Ubuntu's doing, the netboot images are buried pretty
492 # deep. ;-) -JC
493 name = name.replace("-netboot","")
494 name = name.replace("-ubuntu-installer","")
495 name = name.replace("-amd64","")
496 name = name.replace("-i386","")
497
498 # we know that some kernel paths should not be in the name
499
500 name = name.replace("-images","")
501 name = name.replace("-pxeboot","")
502 name = name.replace("-install","")
503 name = name.replace("-isolinux","")
504
505 # some paths above the media root may have extra path segments we want
506 # to clean up
507
508 name = name.replace("-os","")
509 name = name.replace("-tree","")
510 name = name.replace("var-www-cobbler-", "")
511 name = name.replace("ks_mirror-","")
512 name = name.replace("--","-")
513
514 # remove any architecture name related string, as real arch will be appended later
515
516 name = name.replace("chrp","ppc64")
517
518 for separator in [ '-' , '_' , '.' ] :
519 for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]:
520 name = name.replace("%s%s" % ( separator , arch ),"")
521
522 return name
523
524 def get_proposed_arch(self,dirname):
525 """
526 Given an directory name, can we infer an architecture from a path segment?
527 """
528 if dirname.find("x86_64") != -1 or dirname.find("amd") != -1:
529 return "x86_64"
530 if dirname.find("ia64") != -1:
531 return "ia64"
532 if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1:
533 return "i386"
534 if dirname.find("s390x") != -1:
535 return "s390x"
536 if dirname.find("s390") != -1:
537 return "s390"
538 if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1:
539 return "ppc64"
540 if dirname.find("ppc32") != -1:
541 return "ppc"
542 if dirname.find("ppc") != -1:
543 return "ppc"
544 return None
545
546 def arch_walker(self,foo,dirname,fnames):
547 """
548 See docs on learn_arch_from_tree.
549
550 The TRY_LIST is used to speed up search, and should be dropped for default importer
551 Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem
552
553 This method is useful to get the archs, but also to package type and a raw guess of the breed
554 """
555
556 # try to find a kernel header RPM and then look at it's arch.
557 for x in fnames:
558 if self.match_kernelarch_file(x):
559 for arch in self.get_valid_arches():
560 if x.find(arch) != -1:
561 foo[arch] = 1
562 for arch in [ "i686" , "amd64" ]:
563 if x.find(arch) != -1:
564 foo[arch] = 1
565
566 def kickstart_finder(self,distros_added):
567 """
568 For all of the profiles in the config w/o a kickstart, use the
569 given kickstart file, or look at the kernel path, from that,
570 see if we can guess the distro, and if we can, assign a kickstart
571 if one is available for it.
572 """
573 for profile in self.profiles:
574 distro = self.distros.find(name=profile.get_conceptual_parent().name)
575 if distro is None or not (distro in distros_added):
576 continue
577
578 kdir = os.path.dirname(distro.kernel)
579 if self.kickstart_file == None:
580 for file in self.get_release_files():
581 results = self.scan_pkg_filename(file)
582 # FIXME : If os is not found on tree but set with CLI, no kickstart is searched
583 if results is None:
584 self.logger.warning("skipping %s" % file)
585 continue
586 (flavor, major, minor, release) = results
587 # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata
588 #version , ks = self.set_variance(flavor, major, minor, distro.arch)
589 if self.os_version:
590 if self.os_version != flavor:
591 utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor))
592 distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch))
593 distro.set_os_version(flavor)
594 # is this even valid for debian/ubuntu? - jcammarata
595 #ds = self.get_datestamp()
596 #if ds is not None:
597 # distro.set_tree_build_time(ds)
598 profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed")
599 self.profiles.add(profile,save=True)
600
601 self.configure_tree_location(distro)
602 self.distros.add(distro,save=True) # re-save
603 self.api.serialize()
604
605 def configure_tree_location(self, distro):
606 """
607 Once a distribution is identified, find the part of the distribution
608 that has the URL in it that we want to use for kickstarting the
609 distribution, and create a ksmeta variable $tree that contains this.
610 """
611
612 base = self.get_rootdir()
613
614 if self.network_root is None:
615 dists_path = os.path.join( self.path , "dists" )
616 if os.path.isdir( dists_path ):
617 tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.mirror_name)
618 else:
619 tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name)
620 self.set_install_tree(distro, tree)
621 else:
622 # where we assign the kickstart source is relative to our current directory
623 # and the input start directory in the crawl. We find the path segments
624 # between and tack them on the network source path to find the explicit
625 # network path to the distro that Anaconda can digest.
626 tail = utils.path_tail(self.path, base)
627 tree = self.network_root[:-1] + tail
628 self.set_install_tree(distro, tree)
629
630 def get_rootdir(self):
631 return self.mirror
632
633 def get_pkgdir(self):
634 if not self.pkgdir:
635 return None
636 return os.path.join(self.get_rootdir(),self.pkgdir)
637
638 def set_install_tree(self, distro, url):
639 distro.ks_meta["tree"] = url
640
641 def learn_arch_from_tree(self):
642 """
643 If a distribution is imported from DVD, there is a good chance the path doesn't
644 contain the arch and we should add it back in so that it's part of the
645 meaningful name ... so this code helps figure out the arch name. This is important
646 for producing predictable distro names (and profile names) from differing import sources
647 """
648 result = {}
649 # FIXME : this is called only once, should not be a walk
650 if self.get_pkgdir():
651 os.path.walk(self.get_pkgdir(), self.arch_walker, result)
652 if result.pop("amd64",False):
653 result["x86_64"] = 1
654 if result.pop("i686",False):
655 result["i386"] = 1
656 return result.keys()
657
658 def match_kernelarch_file(self, filename):
659 """
660 Is the given filename a kernel filename?
661 """
662 if not filename.endswith("deb"):
663 return False
664 if filename.startswith("linux-headers-"):
665 return True
666 return False
667
668 def scan_pkg_filename(self, file):
669 """
670 Determine what the distro is based on the release package filename.
671 """
672 # FIXME: all of these dist_names should probably be put in a function
673 # which would be called in place of looking in codes.py. Right now
674 # you have to update both codes.py and this to add a new release
675 if self.breed == "debian":
676 dist_names = ['etch','lenny',]
677 elif self.breed == "ubuntu":
678 dist_names = ['dapper','hardy','intrepid','jaunty','karmic','lynx','maverick','natty',]
679 else:
680 return None
681
682 if os.path.basename(file) in dist_names:
683 release_file = os.path.join(file,'Release')
684 self.logger.info("Found %s release file: %s" % (self.breed,release_file))
685
686 f = open(release_file,'r')
687 lines = f.readlines()
688 f.close()
689
690 for line in lines:
691 if line.lower().startswith('version: '):
692 version = line.split(':')[1].strip()
693 values = version.split('.')
694 if len(values) == 1:
695 # I don't think you'd ever hit this currently with debian or ubuntu,
696 # just including it for safety reasons
697 return (os.path.basename(file), values[0], "0", "0")
698 elif len(values) == 2:
699 return (os.path.basename(file), values[0], values[1], "0")
700 elif len(values) > 2:
701 return (os.path.basename(file), values[0], values[1], values[2])
702 return None
703
704 def get_datestamp(self):
705 """
706 Not used for debian/ubuntu... should probably be removed? - jcammarata
707 """
708 pass
709
710 def set_variance(self, flavor, major, minor, arch):
711 """
712 Set distro specific versioning.
713 """
714 # I don't think this is required anymore, as the scan_pkg_filename() function
715 # above does everything we need it to - jcammarata
716 #
717 #if self.breed == "debian":
718 # dist_names = { '4.0' : "etch" , '5.0' : "lenny" }
719 # dist_vers = "%s.%s" % ( major , minor )
720 # os_version = dist_names[dist_vers]
721 #
722 # return os_version , "/var/lib/cobbler/kickstarts/sample.seed"
723 #elif self.breed == "ubuntu":
724 # # Release names taken from wikipedia
725 # dist_names = { '6.4' :"dapper",
726 # '8.4' :"hardy",
727 # '8.10' :"intrepid",
728 # '9.4' :"jaunty",
729 # '9.10' :"karmic",
730 # '10.4' :"lynx",
731 # '10.10':"maverick",
732 # '11.4' :"natty",
733 # }
734 # dist_vers = "%s.%s" % ( major , minor )
735 # if not dist_names.has_key( dist_vers ):
736 # dist_names['4ubuntu2.0'] = "IntrepidIbex"
737 # os_version = dist_names[dist_vers]
738 #
739 # return os_version , "/var/lib/cobbler/kickstarts/sample.seed"
740 #else:
741 # return None
742 pass
743
744 def process_repos(self, main_importer, distro):
745 # Create a disabled repository for the new distro, and the security updates
746 #
747 # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage
748
749 repo = item_repo.Repo(main_importer.config)
750 repo.set_breed( "apt" )
751 repo.set_arch( distro.arch )
752 repo.set_keep_updated( False )
753 repo.yumopts["--ignore-release-gpg"] = None
754 repo.yumopts["--verbose"] = None
755 repo.set_name( distro.name )
756 repo.set_os_version( distro.os_version )
757 # NOTE : The location of the mirror should come from timezone
758 repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) )
759
760 security_repo = item_repo.Repo(main_importer.config)
761 security_repo.set_breed( "apt" )
762 security_repo.set_arch( distro.arch )
763 security_repo.set_keep_updated( False )
764 security_repo.yumopts["--ignore-release-gpg"] = None
765 security_repo.yumopts["--verbose"] = None
766 security_repo.set_name( distro.name + "-security" )
767 security_repo.set_os_version( distro.os_version )
768 # There are no official mirrors for security updates
769 security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' )
770
771 self.logger.info("Added repos for %s" % distro.name)
772 repos = main_importer.config.repos()
773 repos.add(repo,save=True)
774 repos.add(security_repo,save=True)
775
776# ==========================================================================
777
778def get_import_manager(config,logger):
779 return ImportDebianUbuntuManager(config,logger)
7800
=== renamed file '.pc/43_fix_reposync_env_variable.patch/cobbler/action_reposync.py' => '.pc/43_fix_reposync_env_variable.patch/cobbler/action_reposync.py.THIS'
=== removed file '.pc/applied-patches'
--- .pc/applied-patches 2011-06-03 09:25:37 +0000
+++ .pc/applied-patches 1970-01-01 00:00:00 +0000
@@ -1,10 +0,0 @@
121_cobbler_use_netboot.patch
212_fix_dhcp_restart.patch
305_cobbler_fix_reposync_permissions.patch
433_authn_configfile.patch
534_fix_apache_wont_start.patch
639_cw_remove_vhost.patch
740_ubuntu_bind9_management.patch
841_update_tree_path_with_arch.patch
942_fix_repomirror_create_sync.patch
1043_fix_reposync_env_variable.patch
110
=== modified file 'cobbler/action_check.py'
--- cobbler/action_check.py 2011-04-18 11:15:59 +0000
+++ cobbler/action_check.py 2011-06-09 00:11:01 +0000
@@ -66,7 +66,7 @@
         mode = self.config.api.get_sync().dns.what()
         if mode == "bind":
             self.check_bind_bin(status)
-            self.check_service(status,"bind9")
+            self.check_service(status,"named")
         elif mode == "dnsmasq" and not self.settings.manage_dhcp:
             self.check_dnsmasq_bin(status)
             self.check_service(status,"dnsmasq")

=== modified file 'cobbler/action_reposync.py'
--- cobbler/action_reposync.py 2011-06-08 17:21:45 +0000
+++ cobbler/action_reposync.py 2011-06-09 00:11:01 +0000
@@ -485,11 +485,6 @@
             arch = "amd64" # FIX potential arch errors
         cmd = "%s --nosource -a %s" % (cmd, arch)
 
-        # Set's an environment variable for subprocess, otherwise debmirror will fail
-        # as it needs this variable to exist.
-        # FIXME: might this break anything? So far it doesn't
-        os.putenv("HOME", "/var/lib/cobbler")
-
         rc = utils.subprocess_call(self.logger, cmd)
         if rc !=0:
             utils.die(self.logger,"cobbler reposync failed")
@@ -569,7 +564,7 @@
         a safeguard.
         """
         # all_path = os.path.join(repo_path, "*")
-        cmd1 = "chown -R root:www-data %s" % repo_path
+        cmd1 = "chown -R root:apache %s" % repo_path
         utils.subprocess_call(self.logger, cmd1)
 
         cmd2 = "chmod -R 755 %s" % repo_path

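The first hunk above drops the os.putenv("HOME", "/var/lib/cobbler") workaround that was added for debmirror. If that variable does turn out to be required, a narrower alternative would be to hand it only to the child process rather than mutating cobblerd's own environment; this is a sketch, not what the package currently does:

import os
import subprocess

def run_with_home(cmd, home="/var/lib/cobbler"):
    # Give only the child process a HOME value instead of calling os.putenv() globally.
    env = dict(os.environ)
    env.setdefault("HOME", home)
    return subprocess.call(cmd, shell=True, env=env)
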
=== modified file 'cobbler/codes.py'
--- cobbler/codes.py 2011-05-02 18:26:03 +0000
+++ cobbler/codes.py 2011-06-09 00:11:01 +0000
@@ -53,8 +53,8 @@
 }
 
 VALID_REPO_BREEDS = [
-   "rsync", "rhn", "yum", "apt"
-#  "rsync", "rhn", "yum"
+#  "rsync", "rhn", "yum", "apt"
+   "rsync", "rhn", "yum"
 ]
 
 def uniquify(seq, idfun=None):

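Commenting "apt" back out of VALID_REPO_BREEDS means anything that validates a repo breed against that list will reject apt repos again. Roughly, as an illustrative sketch (not the actual validation code in cobbler):

VALID_REPO_BREEDS = ["rsync", "rhn", "yum"]

def validate_repo_breed(breed):
    # Mirror the usual cobbler pattern of checking a CLI value against codes.py.
    if breed not in VALID_REPO_BREEDS:
        raise ValueError("invalid breed '%s', must be one of: %s"
                         % (breed, ", ".join(VALID_REPO_BREEDS)))
    return breed
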
=== modified file 'cobbler/modules/manage_bind.py'
--- cobbler/modules/manage_bind.py 2011-04-18 11:15:59 +0000
+++ cobbler/modules/manage_bind.py 2011-06-09 00:11:01 +0000
@@ -180,7 +180,7 @@
180 """180 """
181 Write out the named.conf main config file from the template.181 Write out the named.conf main config file from the template.
182 """182 """
183 settings_file = "/etc/bind/named.conf.local"183 settings_file = "/etc/named.conf"
184 template_file = "/etc/cobbler/named.template"184 template_file = "/etc/cobbler/named.template"
185 forward_zones = self.settings.manage_forward_zones185 forward_zones = self.settings.manage_forward_zones
186 reverse_zones = self.settings.manage_reverse_zones186 reverse_zones = self.settings.manage_reverse_zones
@@ -291,7 +291,7 @@
 
             metadata['host_record'] = self.__pretty_print_host_records(hosts)
 
-            zonefilename='/etc/bind/db.' + zone
+            zonefilename='/var/named/' + zone
             if self.logger is not None:
                 self.logger.info("generating (forward) %s" % zonefilename)
             self.templar.render(template_data, metadata, zonefilename, None)
@@ -313,7 +313,7 @@
 
             metadata['host_record'] = self.__pretty_print_host_records(hosts, rectype='PTR')
 
-            zonefilename='/etc/bind/db.' + zone
+            zonefilename='/var/named/' + zone
             if self.logger is not None:
                 self.logger.info("generating (reverse) %s" % zonefilename)
             self.templar.render(template_data, metadata, zonefilename, None)

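With these hunks the generated zone data moves from /etc/bind/db.<zone> to /var/named/<zone>, which lines up with the file "$zone"; entries added to templates/etc/named.template further down. A trivial sketch of the resulting naming:

def zone_filename(zone):
    # Forward and reverse zone files are now written under /var/named/,
    # named after the zone itself (no "db." prefix).
    return '/var/named/' + zone

# e.g. zone_filename("example.com") -> "/var/named/example.com"
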
=== modified file 'cobbler/modules/manage_import_debian_ubuntu.py'
--- cobbler/modules/manage_import_debian_ubuntu.py 2011-06-08 17:21:45 +0000
+++ cobbler/modules/manage_import_debian_ubuntu.py 2011-06-09 00:11:01 +0000
@@ -187,8 +187,6 @@
             else:
                 # FIXME : This is very likely removed later at get_proposed_name, and the guessed arch appended again
                 self.path += ("-%s" % self.arch)
-            # If arch is specified we also need to update the mirror name.
-            self.mirror_name = self.mirror_name + "-" + self.arch
 
         # make the output path and mirror content but only if not specifying that a network
         # accessible support location already exists (this is --available-as on the command line)
@@ -341,10 +339,11 @@
341 self.logger.info("descent into %s" % top)339 self.logger.info("descent into %s" % top)
342 dists_path = os.path.join(self.path, "dists")340 dists_path = os.path.join(self.path, "dists")
343 if not os.path.isdir(dists_path):341 if not os.path.isdir(dists_path):
344 self.process_repos(self, distro)342 self.process_repos()
345 else:343 else:
346 self.logger.info("this distro isn't mirrored")344 self.logger.info("this distro isn't mirrored")
347345
346<<<<<<< TREE
348 def get_repo_mirror_from_apt(self):347 def get_repo_mirror_from_apt(self):
349 """348 """
350 This tries to determine the apt mirror/archive to use (when processing repos)349 This tries to determine the apt mirror/archive to use (when processing repos)
@@ -364,6 +363,11 @@
 
         return mirror
 
+=======
+    def process_repos(self):
+        pass
+
+>>>>>>> MERGE-SOURCE
     def distro_adder(self,distros_added,dirname,fnames):
         """
         This is an os.path.walk routine that finds distributions in the directory
@@ -387,9 +391,9 @@
387 self.logger.info("following symlink: %s" % fullname)391 self.logger.info("following symlink: %s" % fullname)
388 os.path.walk(fullname, self.distro_adder, distros_added)392 os.path.walk(fullname, self.distro_adder, distros_added)
389393
390 if ( x.startswith("initrd.gz") ) and x != "initrd.size":394 if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") or x.startswith("vmkboot.gz") ) and x != "initrd.size":
391 initrd = os.path.join(dirname,x)395 initrd = os.path.join(dirname,x)
392 if ( x.startswith("linux") ) and x.find("initrd") == -1:396 if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") or x.startswith("mboot.c32") ) and x.find("initrd") == -1:
393 kernel = os.path.join(dirname,x)397 kernel = os.path.join(dirname,x)
394398
395 # if we've collected a matching kernel and initrd pair, turn the in and add them to the list399 # if we've collected a matching kernel and initrd pair, turn the in and add them to the list
@@ -788,12 +792,17 @@
         repo.yumopts["--verbose"] = None
         repo.set_name( distro.name )
         repo.set_os_version( distro.os_version )
+<<<<<<< TREE
 
         if distro.breed == "ubuntu":
             repo.set_mirror( "%s/%s" % (mirror, distro.os_version) )
         else:
             # NOTE : The location of the mirror should come from timezone
             repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , distro.os_version ) )
+=======
+        # NOTE : The location of the mirror should come from timezone
+        repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , '@@suite@@' ) )
+>>>>>>> MERGE-SOURCE
 
         security_repo = item_repo.Repo(main_importer.config)
         security_repo.set_breed( "apt" )
@@ -804,10 +813,14 @@
         security_repo.set_name( distro.name + "-security" )
         security_repo.set_os_version( distro.os_version )
         # There are no official mirrors for security updates
+<<<<<<< TREE
         if distro.breed == "ubuntu":
             security_repo.set_mirror( "%s/%s-security" % (mirror, distro.os_version) )
         else:
             security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % distro.os_version )
+=======
+        security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % '@@suite@@' )
+>>>>>>> MERGE-SOURCE
 
         self.logger.info("Added repos for %s" % distro.name)
         repos = main_importer.config.repos()

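The widened startswith() tests in distro_adder() boil down to matching candidate files against two prefix whitelists. As a standalone sketch of that logic (file names are illustrative):

INITRD_PREFIXES = ("initrd", "ramdisk.image.gz", "vmkboot.gz")
KERNEL_PREFIXES = ("vmlinu", "kernel.img", "linux", "mboot.c32")

def classify(filename):
    # Return "initrd", "kernel" or None, mirroring the checks above:
    # initrd.size is excluded, and kernel names must not contain "initrd".
    if filename.startswith(INITRD_PREFIXES) and filename != "initrd.size":
        return "initrd"
    if filename.startswith(KERNEL_PREFIXES) and "initrd" not in filename:
        return "kernel"
    return None

# e.g. classify("initrd.gz") -> "initrd"; classify("vmlinuz") -> "kernel"
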
=== modified file 'cobbler/modules/sync_post_restart_services.py'
--- cobbler/modules/sync_post_restart_services.py 2011-04-18 11:15:59 +0000
+++ cobbler/modules/sync_post_restart_services.py 2011-06-09 00:11:01 +0000
@@ -42,7 +42,7 @@
             if rc != 0:
                 logger.error("dhcpd -t failed")
                 return 1
-            rc = utils.subprocess_call(logger,"service isc-dhcp-server restart", shell=True)
+            rc = utils.subprocess_call(logger,"service dhcpd restart", shell=True)
     elif which_dhcp_module == "manage_dnsmasq":
         if restart_dhcp != "0":
             rc = utils.subprocess_call(logger, "service dnsmasq restart")
@@ -53,7 +53,7 @@
 
     if manage_dns != "0" and restart_dns != "0":
         if which_dns_module == "manage_bind":
-            rc = utils.subprocess_call(logger, "service bind9 restart", shell=True)
+            rc = utils.subprocess_call(logger, "service named restart", shell=True)
         elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq:
             rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True)
         elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq:

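These hunks swap the Ubuntu service names (isc-dhcp-server, bind9) for the Fedora/RHEL ones (dhcpd, named). If both families had to be supported from one code path, a lookup table would be one option; this is purely illustrative, not what the module does:

SERVICE_NAMES = {
    "dhcp": {"debian": "isc-dhcp-server", "redhat": "dhcpd"},
    "dns":  {"debian": "bind9",           "redhat": "named"},
}

def restart_command(service, family):
    # Build the restart command for a service on the given distro family.
    return "service %s restart" % SERVICE_NAMES[service][family]

# e.g. restart_command("dns", "redhat") -> "service named restart"
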
=== modified file 'config/cobbler_web.conf'
--- config/cobbler_web.conf 2011-04-15 12:47:39 +0000
+++ config/cobbler_web.conf 2011-06-09 00:11:01 +0000
@@ -1,10 +1,14 @@
 # This configuration file enables the cobbler web
 # interface (django version)
 
+<VirtualHost *:80>
+
 # Do not log the requests generated from the event notification system
 SetEnvIf Request_URI ".*/op/events/user/.*" dontlog
 # Log only what remains
-#CustomLog logs/access_log combined env=!dontlog
+CustomLog logs/access_log combined env=!dontlog
 
 WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi
 
+</VirtualHost>
+

=== modified file 'config/modules.conf'
--- config/modules.conf 2011-04-04 12:55:44 +0000
+++ config/modules.conf 2011-06-09 00:11:01 +0000
@@ -19,7 +19,7 @@
 # https://fedorahosted.org/cobbler/wiki/CobblerWithLdap
 
 [authentication]
-module = authn_configfile
+module = authn_denyall
 
 # authorization:
 # once a user has been cleared by the WebUI/XMLRPC, what can they do?

=== modified file 'templates/etc/named.template'
--- templates/etc/named.template 2011-04-18 11:15:59 +0000
+++ templates/etc/named.template 2011-06-09 00:11:01 +0000
@@ -1,14 +1,31 @@
+options {
+        listen-on port 53 { 127.0.0.1; };
+        directory "/var/named";
+        dump-file "/var/named/data/cache_dump.db";
+        statistics-file "/var/named/data/named_stats.txt";
+        memstatistics-file "/var/named/data/named_mem_stats.txt";
+        allow-query { localhost; };
+        recursion yes;
+};
+
+logging {
+        channel default_debug {
+                file "data/named.run";
+                severity dynamic;
+        };
+};
+
 #for $zone in $forward_zones
 zone "${zone}." {
     type master;
-    file "/etc/bind/db.$zone";
+    file "$zone";
 };
 
 #end for
 #for $zone, $arpa in $reverse_zones
 zone "${arpa}." {
     type master;
-    file "/etc/bind/db.$zone";
+    file "$zone";
 };
 
 #end for
