On 13-10-2010 09:15, Chiradeep Vittal wrote: > Chiradeep Vittal has proposed merging lp:~chiradeep/nova/msft-hyper-v-support into lp:nova. > > Requested reviews: > Jay Pipes (jaypipes) > > > Introduces basic support for spawning, rebooting and destroying vms when using Microsoft Hyper-V as the hypervisor. > Images need to be in VHD format. Note that although Hyper-V doesn't accept kernel and ramdisk > separate from the image, the nova objectstore api still expects an image to have an associated aki and ari. You can use dummy aki and ari images -- the hyper-v driver won't use them or try to download them. > Requires Python's WMI module. > > === modified file 'nova/compute/manager.py' > --- nova/compute/manager.py 2010-10-12 20:18:29 +0000 > +++ nova/compute/manager.py 2010-10-13 07:15:24 +0000 > @@ -39,6 +39,8 @@ > 'where instances are stored on disk') > flags.DEFINE_string('compute_driver', 'nova.virt.connection.get_connection', > 'Driver to use for volume creation') > +flags.DEFINE_string('images_path', utils.abspath('../images'), > + 'path to decrypted local images if not using s3') Each flag must only be defined once. images_path is also defined in nova.objectstore.images. Either import that or move the flag to a common place and import that. > === modified file 'nova/virt/connection.py' > --- nova/virt/connection.py 2010-08-30 13:19:14 +0000 > +++ nova/virt/connection.py 2010-10-13 07:15:24 +0000 > @@ -26,6 +26,7 @@ > from nova.virt import fake > from nova.virt import libvirt_conn > from nova.virt import xenapi > +from nova.virt import hyperv > > > FLAGS = flags.FLAGS > @@ -49,6 +50,8 @@ > conn = libvirt_conn.get_connection(read_only) > elif t == 'xenapi': > conn = xenapi.get_connection(read_only) > + elif t == 'hyperv': > + conn = hyperv.get_connection(read_only) > else: > raise Exception('Unknown connection type "%s"' % t) > > This is troublesome. 
We have no python-wmi on linux, and unconditionally importing the hyperv driver means unconditionally importing wmi which fails spectacularly. nova.virt.connection.get_connection should probably be refactored to use nova.utils.import_object or similar, based on the defined connection_type. That would solve this problem. > === added file 'nova/virt/hyperv.py' > --- nova/virt/hyperv.py 1970-01-01 00:00:00 +0000 > +++ nova/virt/hyperv.py 2010-10-13 07:15:24 +0000 > @@ -0,0 +1,453 @@ > +# vim: tabstop=4 shiftwidth=4 softtabstop=4 > + > +# Copyright (c) 2010 Cloud.com, Inc > +# > +# Licensed under the Apache License, Version 2.0 (the "License"); you may > +# not use this file except in compliance with the License. You may obtain > +# a copy of the License at > +# > +# http://www.apache.org/licenses/LICENSE-2.0 > +# > +# Unless required by applicable law or agreed to in writing, software > +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT > +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the > +# License for the specific language governing permissions and limitations > +# under the License. > + > +""" > +A connection to Hyper-V . > +Uses Windows Management Instrumentation (WMI) calls to interact with Hyper-V > +Hyper-V WMI usage: > + http://msdn.microsoft.com/en-us/library/cc723875%28v=VS.85%29.aspx > +The Hyper-V object model briefly: > + The physical computer and its hosted virtual machines are each represented > + by the Msvm_ComputerSystem class. > + > + Each virtual machine is associated with a > + Msvm_VirtualSystemGlobalSettingData (vs_gs_data) instance and one or more > + Msvm_VirtualSystemSettingData (vmsetting) instances. For each vmsetting > + there is a series of Msvm_ResourceAllocationSettingData (rasd) objects. > + The rasd objects describe the settings for each device in a virtual machine. > + Together, the vs_gs_data, vmsettings and rasds describe the configuration > + of the virtual machine. 
> + > + Creating new resources such as disks and nics involves cloning a default > + rasd object and appropriately modifying the clone and calling the > + AddVirtualSystemResources WMI method > + Changing resources such as memory uses the ModifyVirtualSystemResources > + WMI method > + > +Using the Python WMI library: > + Tutorial: > + http://timgolden.me.uk/python/wmi/tutorial.html > + Hyper-V WMI objects can be retrieved simply by using the class name > + of the WMI object and optionally specifying a column to filter the > + result set. More complex filters can be formed using WQL (sql-like) > + queries. > + The parameters and return tuples of WMI method calls can gleaned by > + examining the doc string. For example: > + >>> vs_man_svc.ModifyVirtualSystemResources.__doc__ > + ModifyVirtualSystemResources (ComputerSystem, ResourceSettingData[]) > + => (Job, ReturnValue)' > + When passing setting data (ResourceSettingData) to the WMI method, > + an XML representation of the data is passed in using the GetText_(1) method. > + Available methods on a service can be determined using method.keys(): > + >>> vs_man_svc.methods.keys() > + vmsettings and rasds for a vm can be retrieved using the 'associators' > + method with the appropriate return class. 
> + Long running WMI commands generally return a Job (an instance of > + Msvm_ConcreteJob) whose state can be polled to determine when it finishes > + > +""" > + > +import os > +import logging > +import time > + > +from twisted.internet import defer > +import wmi > + > +from nova import exception > +from nova import flags > +from nova.auth.manager import AuthManager > +from nova.compute import power_state > +from nova.virt import images > + > + > +FLAGS = flags.FLAGS > + > + > +HYPERV_POWER_STATE = { > + 3 : power_state.SHUTDOWN, > + 2 : power_state.RUNNING, > + 32768 : power_state.PAUSED, > +} > + > + > +REQ_POWER_STATE = { > + 'Enabled' : 2, > + 'Disabled': 3, > + 'Reboot' : 10, > + 'Reset' : 11, > + 'Paused' : 32768, > + 'Suspended': 32769 > +} > + > + > +WMI_JOB_STATUS_STARTED = 4096 > +WMI_JOB_STATE_RUNNING = 4 > +WMI_JOB_STATE_COMPLETED = 7 > + > + > +def get_connection(_): > + return HyperVConnection() > + > + > +class HyperVConnection(object): > + def __init__(self): > + self._conn = wmi.WMI(moniker = '//./root/virtualization') > + self._cim_conn = wmi.WMI(moniker = '//./root/cimv2') > + > + def list_instances(self): > + """ Return the names of all the instances known to Hyper-V. """ > + vms = [v.ElementName \ > + for v in self._conn.Msvm_ComputerSystem(['ElementName'])] > + return vms > + > + @defer.inlineCallbacks > + def spawn(self, instance): > + """ Create a new VM and start it.""" > + vm = yield self._lookup(instance.name) > + if vm is not None: > + raise exception.Duplicate('Attempted to create duplicate name %s' % > + instance.name) > + > + user = AuthManager().get_user(instance['user_id']) > + project = AuthManager().get_project(instance['project_id']) Although AuthManager is indeed (currently) a singleton, you should only instantiate it once. If for no other reason, then for consistency across the code base. > + #Fetch the file, assume it is a VHD file. 
> + vhdfile = os.path.join(FLAGS.instances_path, instance['str_id'])+".vhd" Something like:

    base_vhd_filename = os.path.join(FLAGS.instances_path, instance['str_id'])
    vhdfile = "%s.vhd" % (base_vhd_filename,)

would be more Pythonic. > + yield images.fetch(instance['image_id'], vhdfile, user, project) > + > + try: > + yield self._create_vm(instance) > + > + yield self._create_disk(instance['name'], vhdfile) > + yield self._create_nic(instance['name'], instance['mac_address']) > + > + logging.debug ('Starting VM %s ', instance.name) > + yield self._set_vm_state(instance['name'], 'Enabled') > + logging.info('Started VM %s ', instance.name) > + except Exception as exn: > + logging.error('spawn vm failed: %s', exn) > + self.destroy(instance) > + > + def _create_vm(self, instance): > + """Create a VM but don't start it. """ > + vs_man_svc = self._conn.Msvm_VirtualSystemManagementService()[0] > + > + vs_gs_data = self._conn.Msvm_VirtualSystemGlobalSettingData.new() > + vs_gs_data.ElementName = instance['name'] > + (job, ret_val) = vs_man_svc.DefineVirtualSystem( > + [], None, vs_gs_data.GetText_(1))[1:] > + if ret_val == WMI_JOB_STATUS_STARTED: > + success = self._check_job_status(job) > + else: > + success = (ret_val == 0) > + > + if not success: > + raise Exception('Failed to create VM %s', instance.name) > + > + logging.debug('Created VM %s...', instance.name) > + vm = self._conn.Msvm_ComputerSystem (ElementName=instance.name)[0] > + > + vmsettings = vm.associators(wmi_result_class= > + 'Msvm_VirtualSystemSettingData') > + vmsetting = [s for s in vmsettings > + if s.SettingType == 3][0] #avoid snapshots > + memsetting = vmsetting.associators(wmi_result_class= > + 'Msvm_MemorySettingData')[0] > + #No Dynamic Memory, so reservation, limit and quantity are identical. 
> + mem = long(str(instance['memory_mb'])) > + memsetting.VirtualQuantity = mem > + memsetting.Reservation = mem > + memsetting.Limit = mem > + > + (job, ret_val) = vs_man_svc.ModifyVirtualSystemResources( > + vm.path_(), [memsetting.GetText_(1)]) > + > + logging.debug('Set memory for vm %s...', instance.name) > + procsetting = vmsetting.associators(wmi_result_class= > + 'Msvm_ProcessorSettingData')[0] > + vcpus = long(instance['vcpus']) > + procsetting.VirtualQuantity = vcpus > + procsetting.Reservation = vcpus > + procsetting.Limit = vcpus > + > + (job, ret_val) = vs_man_svc.ModifyVirtualSystemResources( > + vm.path_(), [procsetting.GetText_(1)]) > + > + logging.debug('Set vcpus for vm %s...', instance.name) > + > + def _create_disk(self, vm_name, vhdfile): > + """Create a disk and attach it to the vm""" > + logging.debug("Creating disk for %s by attaching disk file %s", \ > + vm_name, vhdfile) \ is not needed inside parentheses. > + #Find the IDE controller for the vm. > + vms = self._conn.MSVM_ComputerSystem (ElementName=vm_name) > + vm = vms[0] > + vmsettings = vm.associators( > + wmi_result_class='Msvm_VirtualSystemSettingData') > + rasds = vmsettings[0].associators( > + wmi_result_class='MSVM_ResourceAllocationSettingData') > + ctrller = [r for r in rasds > + if r.ResourceSubType == 'Microsoft Emulated IDE Controller'\ > + and r.Address == "0" ] > + #Find the default disk drive object for the vm and clone it. > + diskdflt = self._conn.query( > + "SELECT * FROM Msvm_ResourceAllocationSettingData \ > + WHERE ResourceSubType LIKE 'Microsoft Synthetic Disk Drive'\ > + AND InstanceID LIKE '%Default%'")[0] > + diskdrive = self._clone_wmi_obj( > + 'Msvm_ResourceAllocationSettingData', diskdflt) > + #Set the IDE ctrller as parent. > + diskdrive.Parent = ctrller[0].path_() > + diskdrive.Address = 0 > + #Add the cloned disk drive object to the vm. 
> + new_resources = self._add_virt_resource(diskdrive, vm) > + > + if new_resources is None: > + raise Exception('Failed to add diskdrive to VM %s', vm_name) > + > + diskdrive_path = new_resources[0] > + logging.debug("New disk drive path is " + diskdrive_path) logging.debug("New disk drive path is %s", diskdrive_path) instead. > + #Find the default VHD disk object. > + vhddefault = self._conn.query( > + "SELECT * FROM Msvm_ResourceAllocationSettingData \ > + WHERE ResourceSubType LIKE 'Microsoft Virtual Hard Disk' AND \ > + InstanceID LIKE '%Default%' ")[0] > + > + #Clone the default and point it to the image file. > + vhddisk = self._clone_wmi_obj( > + 'Msvm_ResourceAllocationSettingData', vhddefault) > + #Set the new drive as the parent. > + vhddisk.Parent = diskdrive_path > + vhddisk.Connection = [vhdfile] > + > + #Add the new vhd object as a virtual hard disk to the vm. > + new_resources = self._add_virt_resource(vhddisk, vm) > + if new_resources is None: > + raise Exception('Failed to add vhd file to VM %s', vm_name) > + logging.info("Created disk for %s ", vm_name) > + > + def _create_nic(self, vm_name, mac): > + """Create a (emulated) nic and attach it to the vm""" > + logging.debug("Creating nic for %s ", vm_name) > + #Find the vswitch that is connected to the physical nic. > + vms = self._conn.Msvm_ComputerSystem (ElementName=vm_name) > + extswitch = self._find_external_network() > + vm = vms[0] > + switch_svc = self._conn.Msvm_VirtualSwitchManagementService ()[0] > + #Find the default nic and clone it to create a new nic for the vm. > + #Use Msvm_SyntheticEthernetPortSettingData for Windows VMs or Linux with > + #Linux Integration Components installed. 
> + emulatednics_data = self._conn.Msvm_EmulatedEthernetPortSettingData() > + default_nic_data = [n for n in emulatednics_data > + if n.InstanceID.rfind('Default') >0 ] > + new_nic_data = self._clone_wmi_obj( > + 'Msvm_EmulatedEthernetPortSettingData', > + default_nic_data[0]) > + > + #Create a port on the vswitch. > + (created_sw, ret_val) = switch_svc.CreateSwitchPort(vm_name, vm_name, > + "", extswitch.path_()) > + if ret_val != 0: > + logging.debug("Failed to create a new port on the external network") > + return > + logging.debug("Created switch port %s on switch %s", > + vm_name, extswitch.path_()) > + #Connect the new nic to the new port. > + new_nic_data.Connection = [created_sw] > + new_nic_data.ElementName = vm_name + ' nic' > + new_nic_data.Address = ''.join(mac.split(':')) > + new_nic_data.StaticMacAddress = 'TRUE' > + #Add the new nic to the vm. > + new_resources = self._add_virt_resource(new_nic_data, vm) > + if new_resources is None: > + raise Exception('Failed to add nic to VM %s', vm_name) You should use one of Nova's Exception classes here. If one doesn't fit, add one. > + logging.info("Created nic for %s ", vm_name) > + > + def _add_virt_resource(self, res_setting_data, target_vm): > + """Add a new resource (disk/nic) to the VM""" > + vs_man_svc = self._conn.Msvm_VirtualSystemManagementService()[0] > + (job, new_resources, return_val) = vs_man_svc.\ > + AddVirtualSystemResources([res_setting_data.GetText_(1)], > + target_vm.path_()) > + success = True > + if return_val == WMI_JOB_STATUS_STARTED: > + success = self._check_job_status(job) > + else: > + success = (return_val == 0) > + if success: > + return new_resources > + else: > + return None > + > + #TODO: use the reactor to poll instead of sleep > + def _check_job_status(self, jobpath): > + """Poll WMI job state for completion""" Can you add an example of such a jobpath? It's hard to review this without knowing what the format is. 
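(If the format is, as I'd guess, a standard WMI object path along the lines of \\HOST\root\virtualization:Msvm_ConcreteJob.InstanceID="<id>", then pulling the InstanceID out with a regular expression would also be sturdier than the chained split() calls below. A rough sketch -- the path value here is made up:)

```python
import re

# Hypothetical job path; the exact format should be confirmed, but WMI
# object paths generally follow Namespace:Class.Key="value".
jobpath = ('\\\\WIN-HOST\\root\\virtualization:'
           'Msvm_ConcreteJob.InstanceID="12345678-ABCD"')


def parse_instance_id(path):
    """Extract the InstanceID key value from a WMI object path."""
    match = re.search(r'InstanceID="([^"]+)"', path)
    if match is None:
        raise ValueError('no InstanceID in WMI path: %r' % (path,))
    return match.group(1)
```

That also fails loudly on an unexpected path instead of raising an opaque IndexError.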
> + inst_id = jobpath.split(':')[1].split('=')[1].strip('\"') > + jobs = self._conn.Msvm_ConcreteJob(InstanceID=inst_id) > + if len(jobs) == 0: > + return False > + job = jobs[0] > + while job.JobState == WMI_JOB_STATE_RUNNING: > + time.sleep(0.1) > + job = self._conn.Msvm_ConcreteJob(InstanceID=inst_id)[0] > + > + if job.JobState != WMI_JOB_STATE_COMPLETED: > + logging.debug("WMI job failed: " + job.ErrorSummaryDescription) > + return False > + > + logging.debug("WMI job succeeded: " + job.Description + ",Elapsed = " \ > + + job.ElapsedTime) > + > + return True > + > + def _find_external_network(self): > + """Find the vswitch that is connected to the physical nic. > + Assumes only one physical nic on the host > + """ > + #If there are no physical nics connected to networks, return. > + bound = self._conn.Msvm_ExternalEthernetPort(IsBound='TRUE') > + if len(bound) == 0: > + return None > + > + return self._conn.Msvm_ExternalEthernetPort(IsBound='TRUE')[0]\ > + .associators(wmi_result_class='Msvm_SwitchLANEndpoint')[0]\ > + .associators(wmi_result_class='Msvm_SwitchPort')[0]\ > + .associators(wmi_result_class='Msvm_VirtualSwitch')[0] > + > + def _clone_wmi_obj(self, wmi_class, wmi_obj): > + """Clone a WMI object""" > + cl = self._conn.__getattr__(wmi_class) #get the class Don't call __getattr__ directly. Instead, call getattr(self._conn, wmi_class). > + newinst = cl.new() > + #Copy the properties from the original. > + for prop in wmi_obj._properties: > + newinst.Properties_.Item(prop).Value =\ > + wmi_obj.Properties_.Item(prop).Value > + return newinst > + > + @defer.inlineCallbacks > + def reboot(self, instance): > + """Reboot the specified instance.""" > + vm = yield self._lookup(instance.name) > + if vm is None: > + raise exception.NotFound('instance not present %s' % instance.name) > + self._set_vm_state(instance.name, 'Reboot') > + > + @defer.inlineCallbacks > + def destroy(self, instance): > + """Destroy the VM. 
Also destroy the associated VHD disk files""" > + logging.debug("Got request to destroy vm %s", instance.name) > + vm = yield self._lookup(instance.name) > + if vm is None: > + defer.returnValue(None) > + vm = self._conn.Msvm_ComputerSystem (ElementName=instance.name)[0] > + vs_man_svc = self._conn.Msvm_VirtualSystemManagementService()[0] > + #Stop the VM first. > + self._set_vm_state(instance.name, 'Disabled') > + vmsettings = vm.associators(wmi_result_class= > + 'Msvm_VirtualSystemSettingData') > + rasds = vmsettings[0].associators(wmi_result_class= > + 'MSVM_ResourceAllocationSettingData') > + disks = [r for r in rasds \ > + if r.ResourceSubType == 'Microsoft Virtual Hard Disk' ] > + diskfiles = [] > + #Collect disk file information before destroying the VM. > + for disk in disks: > + diskfiles.extend([c for c in disk.Connection]) > + #Nuke the VM. Does not destroy disks. > + (job, ret_val) = vs_man_svc.DestroyVirtualSystem(vm.path_()) > + if ret_val == WMI_JOB_STATUS_STARTED: > + success = self._check_job_status(job) > + elif ret_val == 0: > + success = True > + if not success: > + raise Exception('Failed to destroy vm %s' % instance.name) > + #Delete associated vhd disk files. 
> + for disk in diskfiles: > + vhdfile = self._cim_conn.CIM_DataFile(Name=disk) > + for vf in vhdfile: > + vf.Delete() > + logging.debug("Deleted disk %s vm %s", vhdfile, instance.name) > + > + def get_info(self, instance_id): > + """Get information about the VM""" > + vm = self._lookup(instance_id) > + if vm is None: > + raise exception.NotFound('instance not present %s' % instance_id) > + vm = self._conn.Msvm_ComputerSystem(ElementName=instance_id)[0] > + vs_man_svc = self._conn.Msvm_VirtualSystemManagementService()[0] > + vmsettings = vm.associators(wmi_result_class= > + 'Msvm_VirtualSystemSettingData') > + settings_paths = [ v.path_() for v in vmsettings] > + #See http://msdn.microsoft.com/en-us/library/cc160706%28VS.85%29.aspx > + summary_info = vs_man_svc.GetSummaryInformation( > + [4,100,103,105], settings_paths)[1] > + info = summary_info[0] > + logging.debug("Got Info for vm %s: state=%s, mem=%s, num_cpu=%s, \ > + cpu_time=%s", instance_id, > + str(HYPERV_POWER_STATE[info.EnabledState]), > + str(info.MemoryUsage), > + str(info.NumberOfProcessors), > + str(info.UpTime)) > + > + return {'state': HYPERV_POWER_STATE[info.EnabledState], > + 'max_mem': info.MemoryUsage, > + 'mem': info.MemoryUsage, > + 'num_cpu': info.NumberOfProcessors, > + 'cpu_time': info.UpTime} > + > + def _lookup(self, i): > + vms = self._conn.Msvm_ComputerSystem (ElementName=i) > + n = len(vms) > + if n == 0: > + return None > + elif n > 1: > + raise Exception('duplicate name found: %s' % i) > + else: > + return vms[0].ElementName This looks sort of odd to me. If there can be multiple VMs with the same name, how can we know that we're looking at the right one when we manipulate it? 
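One way to make this unambiguous would be for _lookup to return the WMI object itself rather than its ElementName, so every later operation acts on exactly the object that was found instead of re-querying by name. A rough, hypothetical sketch of the shape I mean (FakeVM stands in for a real wmi result object):

```python
class FakeVM(object):
    """Stand-in for a wmi Msvm_ComputerSystem result object."""
    def __init__(self, name):
        self.ElementName = name


def lookup_vm(vms, name):
    """Return the single VM whose ElementName matches, or None.

    Returning the object (not just its name) means callers never have
    to repeat the query and risk picking a different duplicate.
    """
    matches = [vm for vm in vms if vm.ElementName == name]
    if len(matches) > 1:
        raise Exception('duplicate name found: %s' % name)
    return matches[0] if matches else None
```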
> + > + def _set_vm_state(self, vm_name, req_state): > + """Set the desired state of the VM""" > + vms = self._conn.Msvm_ComputerSystem (ElementName=vm_name) > + if len(vms) == 0: > + return False > + status = vms[0].RequestStateChange(REQ_POWER_STATE[req_state]) > + job = status[0] > + return_val = status[1] > + if return_val == WMI_JOB_STATUS_STARTED: > + success = self._check_job_status(job) > + elif return_val == 0: > + success = True > + if success: > + logging.info("Successfully changed vm state of %s to %s", > + vm_name, req_state) > + return True > + else: > + logging.debug("Failed to change vm state of %s to %s", > + vm_name, req_state) > + return False > + > + def attach_volume(self, instance_name, device_path, mountpoint): > + vm = self._lookup(instance_name) > + if vm is None: > + raise exception.NotFound('Cannot attach volume to missing %s vm' % > + instance_name) > + > + def detach_volume(self, instance_name, mountpoint): > + vm = self._lookup(instance_name) > + if vm is None: > + raise exception.NotFound('Cannot detach volume from missing %s ' % > + instance_name) > + > > === modified file 'nova/virt/images.py' > --- nova/virt/images.py 2010-10-07 14:03:43 +0000 > +++ nova/virt/images.py 2010-10-13 07:15:24 +0000 > @@ -21,15 +21,19 @@ > Handling of VM disk images. 
> """ > > +import logging > +import os > import os.path > +import shutil > +import sys > import time > +import urllib2 > import urlparse > > from nova import flags > from nova import process > from nova.auth import manager > from nova.auth import signer > -from nova.objectstore import image > > > FLAGS = flags.FLAGS > @@ -45,8 +49,27 @@ > return f(image, path, user, project) > > > +def _fetch_image_no_curl(url, path, headers): > + request = urllib2.Request(url) > + for (k, v) in headers.iteritems(): > + request.add_header(k, v) > + > + def urlretrieve(urlfile, fpath): > + chunk = 1*1024*1024 > + f = open(fpath, "wb") > + while 1: > + data = urlfile.read(chunk) > + if not data: > + break > + f.write(data) > + > + urlopened = urllib2.urlopen(request) > + urlretrieve(urlopened, path) > + logging.debug("Finished retreving %s -- placed in %s", url, path) > + The entire process will be blocked while this is running. Twisted has an excellent HTTP client implementation you can use instead, though. > def _fetch_s3_image(image, path, user, project): > url = image_url(image) > + logging.debug("About to retrieve %s and place it in %s", url, path) > > # This should probably move somewhere else, like e.g. 
a download_as > # method on User objects and at the same time get rewritten to use > @@ -61,18 +84,22 @@ > url_path) > headers['Authorization'] = 'AWS %s:%s' % (access, signature) > > - cmd = ['/usr/bin/curl', '--fail', '--silent', url] > - for (k,v) in headers.iteritems(): > - cmd += ['-H', '%s: %s' % (k,v)] > - > - cmd += ['-o', path] > - return process.SharedPool().execute(executable=cmd[0], args=cmd[1:]) > - > + if sys.platform.startswith('win'): > + return _fetch_image_no_curl(url, path, headers) > + else: > + cmd = ['/usr/bin/curl', '--fail', '--silent', url] > + for (k,v) in headers.iteritems(): > + cmd += ['-H', '%s: %s' % (k,v)] > + cmd += ['-o', path] > + return process.SharedPool().execute(executable=cmd[0], args=cmd[1:]) > > def _fetch_local_image(image, path, user, project): > - source = _image_path('%s/image' % image) > - return process.simple_execute('cp %s %s' % (source, path)) > - > + source = _image_path(os.path.join(image,'image')) > + logging.debug("About to copy %s to %s", source, path) > + if sys.platform.startswith('win'): > + return shutil.copy(source, path) Same here. It'll block everything until the copy is done. > + else: > + return process.simple_execute('cp %s %s' % (source, path)) > > def _image_path(path): > return os.path.join(FLAGS.images_path, path) > Unit tests? Right now, this branch makes the test suite completely unable to run due to the double definition of images_path. Addressing that leaves all of these broken:

    nova.tests.cloud_unittest.CloudTestCase
    nova.tests.scheduler_unittest.SimpleDriverTestCase
    nova.tests.volume_unittest.VolumeTestCase

...due to the "import wmi" issue.

review needsfixing

-- 
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/
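P.S. To make the import_object suggestion from the top concrete: the idea is a table mapping connection_type to a module path, imported only on demand, so e.g. wmi is never touched unless connection_type is 'hyperv'. A minimal sketch of the pattern (the module names here are stdlib ones purely for illustration; in nova the table would hold paths like 'nova.virt.hyperv' and nova.utils.import_object would take the place of import_module):

```python
import importlib

# Illustrative mapping only; in nova this would be something like
# {'libvirt': 'nova.virt.libvirt_conn', 'xenapi': 'nova.virt.xenapi',
#  'hyperv': 'nova.virt.hyperv'}.
DRIVERS = {
    'json': 'json',
    'csv': 'csv',
}


def get_driver(conn_type):
    """Import and return the driver module for conn_type on demand.

    Nothing outside the requested driver is imported, so a missing
    platform-specific dependency (like wmi) only bites when that
    driver is actually selected.
    """
    try:
        modname = DRIVERS[conn_type]
    except KeyError:
        raise Exception('Unknown connection type "%s"' % conn_type)
    return importlib.import_module(modname)
```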