diff -Nru gsconfig-1.0.6/debian/changelog gsconfig-1.0.8/debian/changelog --- gsconfig-1.0.6/debian/changelog 2017-02-02 12:48:02.000000000 +0000 +++ gsconfig-1.0.8/debian/changelog 2018-01-18 10:23:27.000000000 +0000 @@ -1,5 +1,5 @@ -gsconfig (1.0.6-1) xenial; urgency=high +gsconfig (1.0.8-1) xenial; urgency=high - * source package automatically created by stdeb 0.6.0+git + * source package automatically created by stdeb 0.8.5 - -- David Winslow, Sebastian Benthall Thu, 02 Feb 2017 12:47:35 +0000 + -- David Winslow, Sebastian Benthall Thu, 18 Jan 2018 11:23:07 +0100 diff -Nru gsconfig-1.0.6/debian/control gsconfig-1.0.8/debian/control --- gsconfig-1.0.6/debian/control 2017-02-02 12:47:35.000000000 +0000 +++ gsconfig-1.0.8/debian/control 2018-01-18 10:23:07.000000000 +0000 @@ -5,6 +5,8 @@ Build-Depends: python-setuptools (>= 0.6b3), python-all (>= 2.6.6-3), debhelper (>= 7) Standards-Version: 3.9.1 + + Package: python-gsconfig Architecture: all Depends: ${misc:Depends}, ${python:Depends} @@ -29,3 +31,6 @@ For developers: . .. code-block:: shell + + + diff -Nru gsconfig-1.0.6/debian/rules gsconfig-1.0.8/debian/rules --- gsconfig-1.0.6/debian/rules 2017-02-02 12:47:35.000000000 +0000 +++ gsconfig-1.0.8/debian/rules 2018-01-18 10:23:07.000000000 +0000 @@ -1,9 +1,31 @@ #!/usr/bin/make -f -# This file was automatically generated by stdeb 0.6.0+git at -# Thu, 02 Feb 2017 12:47:35 +0000 +# This file was automatically generated by stdeb 0.8.5 at +# Thu, 18 Jan 2018 11:23:07 +0100 %: dh $@ --with python2 --buildsystem=python_distutils +override_dh_auto_clean: + python setup.py clean -a + find . -name \*.pyc -exec rm {} \; + + + +override_dh_auto_build: + python setup.py build --force + + + +override_dh_auto_install: + python setup.py install --force --root=debian/python-gsconfig --no-compile -O0 --install-layout=deb --prefix=/usr + + + +override_dh_python2: + dh_python2 --no-guessing-versions + + + + diff -Nru gsconfig-1.0.6/debian/source/options gsconfig-1.0.8/debian/source/options --- gsconfig-1.0.6/debian/source/options 2017-02-02 12:47:35.000000000 +0000 +++ gsconfig-1.0.8/debian/source/options 2018-01-18 10:23:07.000000000 +0000 @@ -1 +1 @@ -extend-diff-ignore="\.egg-info" \ No newline at end of file +extend-diff-ignore="\.egg-info$" \ No newline at end of file diff -Nru gsconfig-1.0.6/PKG-INFO gsconfig-1.0.8/PKG-INFO --- gsconfig-1.0.6/PKG-INFO 2017-02-02 12:47:26.000000000 +0000 +++ gsconfig-1.0.8/PKG-INFO 2017-10-10 15:02:16.000000000 +0000 @@ -1,300 +1,300 @@ -Metadata-Version: 1.1 -Name: gsconfig -Version: 1.0.6 -Summary: GeoServer REST Configuration -Home-page: https://github.com/boundlessgeo/gsconfig -Author: David Winslow, Sebastian Benthall -Author-email: dwinslow@opengeo.org -License: MIT -Description: gsconfig - ======== - - .. image:: https://travis-ci.org/boundlessgeo/gsconfig.svg?branch=master - :target: https://travis-ci.org/boundlessgeo/gsconfig - - gsconfig is a python library for manipulating a GeoServer instance via the GeoServer RESTConfig API. - - The project is distributed under a `MIT License `_ . - - Installing - ========== - - .. code-block:: shell - - pip install gsconfig - - For developers: - - .. code-block:: shell - - git clone git@github.com:boundlessgeo/gsconfig.git - cd gsconfig - python setup.py develop - - Getting Help - ============ - There is a brief manual at http://boundlessgeo.github.io/gsconfig/ . - If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/ . 
- - Please use the Github project at http://github.com/boundlessgeo/gsconfig for any bug reports (and pull requests are welcome, but please include tests where possible.) - - Sample Layer Creation Code - ========================== - - .. code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8080/geoserver/") - topp = cat.get_workspace("topp") - shapefile_plus_sidecars = shapefile_and_friends("states") - # shapefile_and_friends should look on the filesystem to find a shapefile - # and related files based on the base path passed in - # - # shapefile_plus_sidecars == { - # 'shp': 'states.shp', - # 'shx': 'states.shx', - # 'prj': 'states.prj', - # 'dbf': 'states.dbf' - # } - - # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) - # 'workspace' is optional (GeoServer's default workspace is used by... default) - # 'name' is required - ft = cat.create_featurestore(name, workspace=topp, data=shapefile_plus_sidecars) - - Running Tests - ============= - - Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of `integration tests `_. - These tests necessarily rely on a running copy of GeoServer, and expect that this GeoServer instance will be using the default data directory that is included with GeoServer. - This data is also included in the GeoServer source repository as ``/data/release/``. - In addition, it is expected that there will be a postgres database available at ``postgres:password@localhost:5432/db``. - You can test connecting to this database with the ``psql`` command line client by running ``$ psql -d db -Upostgres -h localhost -p 5432`` (you will be prompted interactively for the password.) - - To override the assumed database connection parameters, the following environment variables are supported: - - - DATABASE - - DBUSER - - DBPASS - - If present, psycopg will be used to verify the database connection prior to running the tests. - - If provided, the following environment variables will be used to reset the data directory: - - GEOSERVER_HOME - Location of git repository to read the clean data from. If only this option is provided - `git clean` will be used to reset the data. - GEOSERVER_DATA_DIR - Optional location of the data dir geoserver will be running with. If provided, `rsync` - will be used to reset the data. - GS_VERSION - Optional environment variable allowing the catalog test cases to automatically download - and start a vanilla GeoServer WAR form the web. - Be sure that there are no running services on HTTP port 8080. - - Here are the commands that I use to reset before running the gsconfig tests: - - .. code-block:: shell - - $ cd ~/geoserver/src/web/app/ - $ PGUSER=postgres dropdb db - $ PGUSER=postgres createdb db -T template_postgis - $ git clean -dxff -- ../../../data/release/ - $ git checkout -f - $ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \ - GEOSERVER_DATA_DIR=../../../data/release \ - mvn jetty:run - - At this point, GeoServer will be running foregrounded, but it will take a few seconds to actually begin listening for http requests. - You can stop it with ``CTRL-C`` (but don't do that until you've run the tests!) - You can run the gsconfig tests with the following command: - - .. code-block:: shell - - $ python setup.py test - - Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests: - - .. 
code-block:: shell - - $ git clean -dxff -- ../../../data/release/ - $ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload - - More Examples - Updated for GeoServer 2.4+ - ========================================== - - Loading the GeoServer ``catalog`` using ``gsconfig`` is quite easy. The example below allows you to connect to GeoServer by specifying custom credentials. - - .. code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver") - - The code below allows you to create a FeatureType from a Shapefile - - .. code-block:: python - - geosolutions = cat.get_workspace("geosolutions") - import geoserver.util - shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states") - # shapefile_and_friends should look on the filesystem to find a shapefile - # and related files based on the base path passed in - # - # shapefile_plus_sidecars == { - # 'shp': 'states.shp', - # 'shx': 'states.shx', - # 'prj': 'states.prj', - # 'dbf': 'states.dbf' - # } - # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) - # 'workspace' is optional (GeoServer's default workspace is used by... default) - # 'name' is required - ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions) - - It is possible to create JDBC Virtual Layers too. The code below allow to create a new SQL View called ``my_jdbc_vt_test`` defined by a custom ``sql``. - - .. code-block:: python - - from geoserver.catalog import Catalog - from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam - - cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****') - store = cat.get_store('postgis-geoserver') - geom = JDBCVirtualTableGeometry('newgeom','LineString','4326') - ft_name = 'my_jdbc_vt_test' - epsg_code = 'EPSG:4326' - sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime' - keyColumn = None - parameters = None - - jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters) - ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt) - - This example shows how to easily update a ``layer`` property. The same approach may be used with every ``catalog`` resource - - .. code-block:: python - - ne_shaded = cat.get_layer("ne_shaded") - ne_shaded.enabled=True - cat.save(ne_shaded) - cat.reload() - - Deleting a ``store`` from the ``catalog`` requires to purge all the associated ``layers`` first. This can be done by doing something like this: - - .. code-block:: python - - st = cat.get_store("ne_shaded") - cat.delete(ne_shaded) - cat.reload() - cat.delete(st) - cat.reload() - - There are some functionalities allowing to manage the ``ImageMosaic`` coverages. It is possible to create new ImageMosaics, add granules to them, - and also read the coverages metadata, modify the mosaic ``Dimensions`` and finally query the mosaic ``granules`` and list their properties. - - The gsconfig methods map the `REST APIs for ImageMosaic `_ - - In order to create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic Plugin guide - in order to get details on the mosaic configuration. The package contains an already configured zip file with two granules. 
- You need to update or remove the ``datastore.properties`` file before creating the mosaic otherwise you will get an exception. - - .. code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8180/geoserver/rest") - cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip") - - By defualt the ``cat.create_imagemosaic`` tries to configure the layer too. If you want to create the store only, you can specify the following parameter - - .. code-block:: python - - cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none") - - In order to retrieve from the catalog the ImageMosaic coverage store you can do this - - .. code-block:: python - - store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") - - It is possible to add more granules to the mosaic at runtime. - With the following method you can add granules already present on the machine local path. - - .. code-block:: python - - cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store) - - The method below allows to send granules remotely via POST to the ImageMosaic. - The granules will be uploaded and stored on the ImageMosaic index folder. - - .. code-block:: python - - cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store) - - To delete an ImageMosaic store, you can follow the standard approach, by deleting the layers first. - *ATTENTION*: at this time you need to manually cleanup the data dir from the mosaic granules and, in case you used a DB datastore, you must also drop the mosaic tables. - - .. code-block:: python - - layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test") - cat.delete(layer) - cat.reload() - cat.delete(store) - cat.reload() - - The method below allows you the load and update the coverage metadata of the ImageMosaic. - You need to do this for every coverage of the ImageMosaic of course. - - .. code-block:: python - - coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml") - coverage.supported_formats = ['GEOTIFF'] - cat.save(coverage) - - By default the ImageMosaic layer has not the coverage dimensions configured. It is possible using the coverage metadata to update and manage the coverage dimensions. - *ATTENTION*: notice that the ``presentation`` parameters accepts only one among the following values {'LIST', 'DISCRETE_INTERVAL', 'CONTINUOUS_INTERVAL'} - - .. code-block:: python - - from geoserver.support import DimensionInfo - timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None) - coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo}) - cat.save(coverage) - - One the ImageMosaic has been configures, it is possible to read the coverages along with their granule schema and granule info. - - .. 
code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8180/geoserver/rest") - store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") - coverages = cat.mosaic_coverages(store) - schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store) - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store) - - The granules details can be easily read by doing something like this: - - .. code-block:: python - - granules['crs']['properties']['name'] - granules['features'] - granules['features'][0]['properties']['time'] - granules['features'][0]['properties']['location'] - granules['features'][0]['properties']['run'] - - When the mosaic grows up and starts having a huge set of granules, you may need to filter the granules query through a CQL filter on the coverage schema attributes. - - .. code-block:: python - - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'") - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0") - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'") - -Keywords: GeoServer REST Configuration -Platform: UNKNOWN -Classifier: Development Status :: 4 - Beta -Classifier: Intended Audience :: Developers -Classifier: Intended Audience :: Science/Research -Classifier: License :: OSI Approved :: MIT License -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Topic :: Scientific/Engineering :: GIS +Metadata-Version: 1.1 +Name: gsconfig +Version: 1.0.8 +Summary: GeoServer REST Configuration +Home-page: https://github.com/boundlessgeo/gsconfig +Author: David Winslow, Sebastian Benthall +Author-email: dwinslow@opengeo.org +License: MIT +Description: gsconfig + ======== + + .. image:: https://travis-ci.org/boundlessgeo/gsconfig.svg?branch=master + :target: https://travis-ci.org/boundlessgeo/gsconfig + + gsconfig is a python library for manipulating a GeoServer instance via the GeoServer RESTConfig API. + + The project is distributed under a `MIT License `_ . + + Installing + ========== + + .. code-block:: shell + + pip install gsconfig + + For developers: + + .. code-block:: shell + + git clone git@github.com:boundlessgeo/gsconfig.git + cd gsconfig + python setup.py develop + + Getting Help + ============ + There is a brief manual at http://boundlessgeo.github.io/gsconfig/ . + If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/ . + + Please use the Github project at http://github.com/boundlessgeo/gsconfig for any bug reports (and pull requests are welcome, but please include tests where possible.) + + Sample Layer Creation Code + ========================== + + .. 
code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8080/geoserver/") + topp = cat.get_workspace("topp") + shapefile_plus_sidecars = shapefile_and_friends("states") + # shapefile_and_friends should look on the filesystem to find a shapefile + # and related files based on the base path passed in + # + # shapefile_plus_sidecars == { + # 'shp': 'states.shp', + # 'shx': 'states.shx', + # 'prj': 'states.prj', + # 'dbf': 'states.dbf' + # } + + # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) + # 'workspace' is optional (GeoServer's default workspace is used by... default) + # 'name' is required + ft = cat.create_featurestore(name, workspace=topp, data=shapefile_plus_sidecars) + + Running Tests + ============= + + Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of `integration tests `_. + These tests necessarily rely on a running copy of GeoServer, and expect that this GeoServer instance will be using the default data directory that is included with GeoServer. + This data is also included in the GeoServer source repository as ``/data/release/``. + In addition, it is expected that there will be a postgres database available at ``postgres:password@localhost:5432/db``. + You can test connecting to this database with the ``psql`` command line client by running ``$ psql -d db -Upostgres -h localhost -p 5432`` (you will be prompted interactively for the password.) + + To override the assumed database connection parameters, the following environment variables are supported: + + - DATABASE + - DBUSER + - DBPASS + + If present, psycopg will be used to verify the database connection prior to running the tests. + + If provided, the following environment variables will be used to reset the data directory: + + GEOSERVER_HOME + Location of git repository to read the clean data from. If only this option is provided + `git clean` will be used to reset the data. + GEOSERVER_DATA_DIR + Optional location of the data dir geoserver will be running with. If provided, `rsync` + will be used to reset the data. + GS_VERSION + Optional environment variable allowing the catalog test cases to automatically download + and start a vanilla GeoServer WAR form the web. + Be sure that there are no running services on HTTP port 8080. + + Here are the commands that I use to reset before running the gsconfig tests: + + .. code-block:: shell + + $ cd ~/geoserver/src/web/app/ + $ PGUSER=postgres dropdb db + $ PGUSER=postgres createdb db -T template_postgis + $ git clean -dxff -- ../../../data/release/ + $ git checkout -f + $ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \ + GEOSERVER_DATA_DIR=../../../data/release \ + mvn jetty:run + + At this point, GeoServer will be running foregrounded, but it will take a few seconds to actually begin listening for http requests. + You can stop it with ``CTRL-C`` (but don't do that until you've run the tests!) + You can run the gsconfig tests with the following command: + + .. code-block:: shell + + $ python setup.py test + + Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests: + + .. code-block:: shell + + $ git clean -dxff -- ../../../data/release/ + $ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload + + More Examples - Updated for GeoServer 2.4+ + ========================================== + + Loading the GeoServer ``catalog`` using ``gsconfig`` is quite easy. 
The example below allows you to connect to GeoServer by specifying custom credentials. + + .. code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver") + + The code below allows you to create a FeatureType from a Shapefile + + .. code-block:: python + + geosolutions = cat.get_workspace("geosolutions") + import geoserver.util + shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states") + # shapefile_and_friends should look on the filesystem to find a shapefile + # and related files based on the base path passed in + # + # shapefile_plus_sidecars == { + # 'shp': 'states.shp', + # 'shx': 'states.shx', + # 'prj': 'states.prj', + # 'dbf': 'states.dbf' + # } + # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) + # 'workspace' is optional (GeoServer's default workspace is used by... default) + # 'name' is required + ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions) + + It is possible to create JDBC Virtual Layers too. The code below allow to create a new SQL View called ``my_jdbc_vt_test`` defined by a custom ``sql``. + + .. code-block:: python + + from geoserver.catalog import Catalog + from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam + + cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****') + store = cat.get_store('postgis-geoserver') + geom = JDBCVirtualTableGeometry('newgeom','LineString','4326') + ft_name = 'my_jdbc_vt_test' + epsg_code = 'EPSG:4326' + sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime' + keyColumn = None + parameters = None + + jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters) + ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt) + + This example shows how to easily update a ``layer`` property. The same approach may be used with every ``catalog`` resource + + .. code-block:: python + + ne_shaded = cat.get_layer("ne_shaded") + ne_shaded.enabled=True + cat.save(ne_shaded) + cat.reload() + + Deleting a ``store`` from the ``catalog`` requires to purge all the associated ``layers`` first. This can be done by doing something like this: + + .. code-block:: python + + st = cat.get_store("ne_shaded") + cat.delete(ne_shaded) + cat.reload() + cat.delete(st) + cat.reload() + + There are some functionalities allowing to manage the ``ImageMosaic`` coverages. It is possible to create new ImageMosaics, add granules to them, + and also read the coverages metadata, modify the mosaic ``Dimensions`` and finally query the mosaic ``granules`` and list their properties. + + The gsconfig methods map the `REST APIs for ImageMosaic `_ + + In order to create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic Plugin guide + in order to get details on the mosaic configuration. The package contains an already configured zip file with two granules. + You need to update or remove the ``datastore.properties`` file before creating the mosaic otherwise you will get an exception. + + .. 
code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8180/geoserver/rest") + cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip") + + By defualt the ``cat.create_imagemosaic`` tries to configure the layer too. If you want to create the store only, you can specify the following parameter + + .. code-block:: python + + cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none") + + In order to retrieve from the catalog the ImageMosaic coverage store you can do this + + .. code-block:: python + + store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") + + It is possible to add more granules to the mosaic at runtime. + With the following method you can add granules already present on the machine local path. + + .. code-block:: python + + cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store) + + The method below allows to send granules remotely via POST to the ImageMosaic. + The granules will be uploaded and stored on the ImageMosaic index folder. + + .. code-block:: python + + cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store) + + To delete an ImageMosaic store, you can follow the standard approach, by deleting the layers first. + *ATTENTION*: at this time you need to manually cleanup the data dir from the mosaic granules and, in case you used a DB datastore, you must also drop the mosaic tables. + + .. code-block:: python + + layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test") + cat.delete(layer) + cat.reload() + cat.delete(store) + cat.reload() + + The method below allows you the load and update the coverage metadata of the ImageMosaic. + You need to do this for every coverage of the ImageMosaic of course. + + .. code-block:: python + + coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml") + coverage.supported_formats = ['GEOTIFF'] + cat.save(coverage) + + By default the ImageMosaic layer has not the coverage dimensions configured. It is possible using the coverage metadata to update and manage the coverage dimensions. + *ATTENTION*: notice that the ``presentation`` parameters accepts only one among the following values {'LIST', 'DISCRETE_INTERVAL', 'CONTINUOUS_INTERVAL'} + + .. code-block:: python + + from geoserver.support import DimensionInfo + timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None) + coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo}) + cat.save(coverage) + + Once the ImageMosaic has been configured, it is possible to read the coverages along with their granule schema and granule info. + + .. code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8180/geoserver/rest") + store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") + coverages = cat.mosaic_coverages(store) + schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store) + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store) + + The granules details can be easily read by doing something like this: + + .. 
code-block:: python + + granules['crs']['properties']['name'] + granules['features'] + granules['features'][0]['properties']['time'] + granules['features'][0]['properties']['location'] + granules['features'][0]['properties']['run'] + + When the mosaic grows up and starts having a huge set of granules, you may need to filter the granules query through a CQL filter on the coverage schema attributes. + + .. code-block:: python + + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'") + +Keywords: GeoServer REST Configuration +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Intended Audience :: Developers +Classifier: Intended Audience :: Science/Research +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Scientific/Engineering :: GIS diff -Nru gsconfig-1.0.6/README.rst gsconfig-1.0.8/README.rst --- gsconfig-1.0.6/README.rst 2015-11-18 17:31:55.000000000 +0000 +++ gsconfig-1.0.8/README.rst 2017-10-03 13:42:08.000000000 +0000 @@ -252,7 +252,7 @@ coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo}) cat.save(coverage) -One the ImageMosaic has been configures, it is possible to read the coverages along with their granule schema and granule info. +Once the ImageMosaic has been configured, it is possible to read the coverages along with their granule schema and granule info. .. code-block:: python @@ -261,7 +261,7 @@ store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") coverages = cat.mosaic_coverages(store) schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store) - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store) + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store) The granules details can be easily read by doing something like this: @@ -277,6 +277,6 @@ .. 
code-block:: python - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'") - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0") - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'") diff -Nru gsconfig-1.0.6/setup.cfg gsconfig-1.0.8/setup.cfg --- gsconfig-1.0.6/setup.cfg 2017-02-02 12:47:26.000000000 +0000 +++ gsconfig-1.0.8/setup.cfg 2017-10-10 15:02:16.000000000 +0000 @@ -1,4 +1,5 @@ -[egg_info] -tag_build = -tag_date = 0 - +[egg_info] +tag_build = +tag_date = 0 +tag_svn_revision = 0 + diff -Nru gsconfig-1.0.6/setup.py gsconfig-1.0.8/setup.py --- gsconfig-1.0.6/setup.py 2017-02-02 12:46:59.000000000 +0000 +++ gsconfig-1.0.8/setup.py 2017-10-10 11:47:44.000000000 +0000 @@ -9,7 +9,7 @@ readme_text = '' setup(name = "gsconfig", - version = "1.0.6", + version = "1.0.8", description = "GeoServer REST Configuration", long_description = readme_text, keywords = "GeoServer REST Configuration", diff -Nru gsconfig-1.0.6/src/geoserver/catalog.py gsconfig-1.0.8/src/geoserver/catalog.py --- gsconfig-1.0.6/src/geoserver/catalog.py 2017-02-02 12:46:59.000000000 +0000 +++ gsconfig-1.0.8/src/geoserver/catalog.py 2017-10-10 14:57:26.000000000 +0000 @@ -20,7 +20,7 @@ from geoserver.support import prepare_upload_bundle, url, _decode_list, _decode_dict, JDBCVirtualTable from geoserver.layergroup import LayerGroup, UnsavedLayerGroup from geoserver.workspace import workspace_from_index, Workspace -from os import unlink +import os import httplib2 from xml.etree.ElementTree import XML from xml.parsers.expat import ExpatError @@ -250,63 +250,68 @@ return response def get_store(self, name, workspace=None): + ''' + Returns a single store object. + Will return None if no store is found. + Will raise an error if more than one store with the same name is found. + ''' - # Make sure workspace is a workspace object and not a string. - # If the workspace does not exist, continue as if no workspace had been defined. - if isinstance(workspace, basestring): - workspace = self.get_workspace(workspace) - - # Create a list with potential workspaces to look into - # if a workspace is defined, it will contain only that workspace - # if no workspace is defined, the list will contain all workspaces. 
- workspaces = [] + stores = self.get_stores(workspace=workspace, names=name) - if workspace is None: - workspaces.extend(self.get_workspaces()) - else: - workspaces.append(workspace) + if stores.__len__() == 0: + return None + elif stores.__len__() > 1: + multiple_stores = [] + for s in stores: + multiple_stores.append("{workspace_name}:{store_name}".format(workspace_name=s.workspace.name, store_name=s.name)) - # Iterate over all workspaces to find the stores or store - found_stores = {} - for ws in workspaces: - # Get all the store objects from geoserver - raw_stores = self.get_stores(workspace=ws) - # And put it in a dictionary where the keys are the name of the store, - new_stores = dict(zip([s.name for s in raw_stores], raw_stores)) - # If the store is found, put it in a dict that also takes into account the - # worspace. - if name in new_stores: - found_stores[ws.name + ':' + name] = new_stores[name] - - # There are 3 cases: - # a) No stores are found. - # b) Only one store is found. - # c) More than one is found. - if len(found_stores) == 0: - raise FailedRequestError("No store found named: " + name) - elif len(found_stores) > 1: - raise AmbiguousRequestError("Multiple stores found named '" + name + "': "+ found_stores.keys()) + raise AmbiguousRequestError("Multiple stores found named {name} - {stores}".format(name=name, stores=", ".join(multiple_stores))) else: - return found_stores.values()[0] + return stores[0] + def get_stores(self, names=None, workspace=None): + ''' + Returns a list of stores in the catalog. If workspace is specified will only return stores in that workspace. + If names is specified, will only return stores that match. + names can either be a comma delimited string or an array. + If names is specified will only return stores that match the name. + Will return an empty list if no stores are found. 
+ ''' - def get_stores(self, workspace=None): + workspaces = [] if workspace is not None: if isinstance(workspace, basestring): - workspace = self.get_workspace(workspace) - ds_list = self.get_xml(workspace.datastore_url) - cs_list = self.get_xml(workspace.coveragestore_url) - wms_list = self.get_xml(workspace.wmsstore_url) - datastores = [datastore_from_index(self, workspace, n) for n in ds_list.findall("dataStore")] - coveragestores = [coveragestore_from_index(self, workspace, n) for n in cs_list.findall("coverageStore")] - wmsstores = [wmsstore_from_index(self, workspace, n) for n in wms_list.findall("wmsStore")] - return datastores + coveragestores + wmsstores - else: - stores = [] - for ws in self.get_workspaces(): - a = self.get_stores(ws) - stores.extend(a) - return stores + ws = self.get_workspaces(workspace) + if ws: + # There can only be one workspace with this name + workspaces.append(ws[0]) + elif hasattr(workspace, 'resource_type') and workspace.resource_type == "workspace": + workspaces.append(workspace) + else: + workspaces = self.get_workspaces() + + stores = [] + if workspaces: + for ws in workspaces: + ds_list = self.get_xml(ws.datastore_url) + cs_list = self.get_xml(ws.coveragestore_url) + wms_list = self.get_xml(ws.wmsstore_url) + stores.extend([datastore_from_index(self, ws, n) for n in ds_list.findall("dataStore")]) + stores.extend([coveragestore_from_index(self, ws, n) for n in cs_list.findall("coverageStore")]) + stores.extend([wmsstore_from_index(self, ws, n) for n in wms_list.findall("wmsStore")]) + + if names is None: + names = [] + elif isinstance(names, basestring): + names = map(str.strip, str(names).split(',')) + if stores and names: + named_stores = [] + for store in stores: + if store.name in names: + named_stores.append(store) + return named_stores + + return stores def create_datastore(self, name, workspace=None): if isinstance(workspace, basestring): @@ -370,6 +375,9 @@ params["update"] = "overwrite" if charset is not None: params["charset"] = charset + params["filename"] = "{}.zip".format(name) + params["target"] = "shp" + # params["configure"] = "all" headers = { 'Content-Type': 'application/zip', 'Accept': 'application/xml' } upload_url = url(self.service_url, @@ -383,19 +391,17 @@ if headers.status != 201: raise UploadError(response) finally: - unlink(bundle) + # os.unlink(bundle) + pass def create_featurestore(self, name, data, workspace=None, overwrite=False, charset=None): if not overwrite: - try: - store = self.get_store(name, workspace) + store = self.get_store(name, workspace) + if store is not None: msg = "There is already a store named " + name if workspace: msg += " in " + str(workspace) raise ConflictingDataError(msg) - except FailedRequestError: - # we don't really expect that every layer name will be taken - pass if workspace is None: workspace = self.get_default_workspace() @@ -426,19 +432,16 @@ raise UploadError(response) finally: message.close() - unlink(archive) + os.unlink(archive) def create_imagemosaic(self, name, data, configure=None, workspace=None, overwrite=False, charset=None): if not overwrite: - try: - store = self.get_store(name, workspace) + store = self.get_store(name, workspace) + if store is not None: msg = "There is already a store named " + name if workspace: msg += " in " + str(workspace) raise ConflictingDataError(msg) - except FailedRequestError: - # we don't really expect that every layer name will be taken - pass if workspace is None: workspace = self.get_default_workspace() @@ -448,26 +451,53 @@ 
params['charset'] = charset if configure is not None: params['configure'] = "none" - cs_url = url(self.service_url, - ["workspaces", workspace, "coveragestores", name, "file.imagemosaic"], params) + + if isinstance(data, file) or os.path.splitext(data)[-1] == ".zip": + store_type = "file.imagemosaic" + contet_type = "application/zip" + if isinstance(data, basestring): + upload_data = open(data, 'rb') + elif isinstance(data, file): + # Adding this check only to pass tests. We should drop support for passing a file object + upload_data = data + else: + raise ValueError("ImageMosaic Dataset or directory: {data} is incorrect".format(data=data)) + else: + store_type = "external.imagemosaic" + contet_type = "text/plain" + if isinstance(data, basestring): + upload_data = data if data.startswith("file:") else "file:{data}".format(data=data) + else: + raise ValueError("ImageMosaic Dataset or directory: {data} is incorrect".format(data=data)) + + cs_url = url( + self.service_url, + [ + "workspaces", + workspace, + "coveragestores", + name, + store_type + ], + params + ) # PUT /workspaces//coveragestores//file.imagemosaic?configure=none - headers = { - "Content-type": "application/zip", + req_headers = { + "Content-type": contet_type, "Accept": "application/xml" } - if isinstance(data, basestring): - message = open(data, 'rb') - else: - message = data + try: - headers, response = self.http.request(cs_url, "PUT", message, headers) + resp_headers, response = self.http.request(cs_url, "PUT", upload_data, req_headers) self._cache.clear() - if headers.status != 201: + if resp_headers.status != 201: raise UploadError(response) finally: - if hasattr(message, "close"): - message.close() + if hasattr(upload_data, "close"): + upload_data.close() + + return "Image Mosaic created" def create_coveragestore(self, name, data, workspace=None, overwrite=False): self._create_coveragestore(name, data, workspace, overwrite) @@ -477,15 +507,12 @@ def _create_coveragestore(self, name, data, workspace=None, overwrite=False, external=False): if not overwrite: - try: - store = self.get_store(name, workspace) + store = self.get_store(name, workspace) + if store is not None: msg = "There is already a store named " + name if workspace: msg += " in " + str(workspace) raise ConflictingDataError(msg) - except FailedRequestError: - # we don't really expect that every layer name will be taken - pass if workspace is None: workspace = self.get_default_workspace() @@ -529,108 +556,219 @@ if hasattr(message, "close"): message.close() if archive is not None: - unlink(archive) + os.unlink(archive) - def harvest_externalgranule(self, data, store): - '''Harvest a granule into an existing imagemosaic''' - params = dict() - cs_url = url(self.service_url, - ["workspaces", store.workspace.name, "coveragestores", store.name, "external.imagemosaic"], params) - # POST /workspaces//coveragestores//external.imagemosaic - headers = { - "Content-type": "text/plain", - "Accept": "application/xml" - } - headers, response = self.http.request(cs_url, "POST", data, headers) - self._cache.clear() - if headers.status != 202: - raise UploadError(response) + def add_granule(self, data, store, workspace=None): + '''Harvest/add a granule into an existing imagemosaic''' + ext = os.path.splitext(data)[-1] + if ext == ".zip": + type = "file.imagemosaic" + upload_data = open(data, 'rb') + headers = { + "Content-type": "application/zip", + "Accept": "application/xml" + } + else: + type = "external.imagemosaic" + upload_data = data if data.startswith("file:") else 
"file:{data}".format(data=data) + headers = { + "Content-type": "text/plain", + "Accept": "application/xml" + } - def harvest_uploadgranule(self, data, store): - '''Harvest a granule into an existing imagemosaic''' params = dict() - cs_url = url(self.service_url, - ["workspaces", store.workspace.name, "coveragestores", store.name, "file.imagemosaic"], params) - # POST /workspaces//coveragestores//file.imagemosaic - headers = { - "Content-type": "application/zip", - "Accept": "application/xml" - } - message = open(data, 'rb') + workspace_name = workspace + if isinstance(store, basestring): + store_name = store + else: + store_name = store.name + workspace_name = store.workspace.name + + if workspace_name is None: raise ValueError("Must specify workspace") + + cs_url = url( + self.service_url, + [ + "workspaces", + workspace_name, + "coveragestores", + store_name, + type + ], + params + ) + try: - headers, response = self.http.request(cs_url, "POST", message, headers) - self._cache.clear() + headers, response = self.http.request(cs_url, "POST", upload_data, headers) if headers.status != 202: raise UploadError(response) finally: - if hasattr(message, "close"): - message.close() + if hasattr(upload_data, "close"): + upload_data.close() - def mosaic_coverages(self, store): - '''Print granules of an existing imagemosaic''' - params = dict() - cs_url = url(self.service_url, - ["workspaces", store.workspace.name, "coveragestores", store.name, "coverages.json"], params) - # GET /workspaces//coveragestores//coverages.json - headers = { - "Content-type": "application/json", - "Accept": "application/json" - } - headers, response = self.http.request(cs_url, "GET", None, headers) self._cache.clear() - coverages = json.loads(response, object_hook=_decode_dict) - return coverages + return "Added granule" - def mosaic_coverage_schema(self, coverage, store): - '''Print granules of an existing imagemosaic''' + def delete_granule(self, coverage, store, granule_id, workspace=None): + '''Deletes a granule of an existing imagemosaic''' params = dict() - cs_url = url(self.service_url, - ["workspaces", store.workspace.name, "coveragestores", store.name, "coverages", coverage, "index.json"], params) - # GET /workspaces//coveragestores//coverages//index.json + + workspace_name = workspace + if isinstance(store, basestring): + store_name = store + else: + store_name = store.name + workspace_name = store.workspace.name + + if workspace_name is None: raise ValueError("Must specify workspace") + + cs_url = url( + self.service_url, + [ + "workspaces", + workspace_name, + "coveragestores", + store_name, + "coverages", + coverage, + "index/granules", + granule_id, + ".json" + ], + params + ) + + # DELETE /workspaces//coveragestores//coverages//index/granules/.json headers = { "Content-type": "application/json", "Accept": "application/json" } - headers, response = self.http.request(cs_url, "GET", None, headers) + + headers, response = self.http.request(cs_url, "DELETE", None, headers) + if headers.status != 200: + raise FailedRequestError(response) self._cache.clear() - schema = json.loads(response, object_hook=_decode_dict) - return schema + return "Deleted granule" - def mosaic_granules(self, coverage, store, filter=None, limit=None, offset=None): - '''Print granules of an existing imagemosaic''' + def list_granules(self, coverage, store, workspace=None, filter=None, limit=None, offset=None): + '''List granules of an imagemosaic''' params = dict() + if filter is not None: params['filter'] = filter if limit is not None: 
params['limit'] = limit if offset is not None: params['offset'] = offset - cs_url = url(self.service_url, - ["workspaces", store.workspace.name, "coveragestores", store.name, "coverages", coverage, "index/granules.json"], params) + + workspace_name = workspace + if isinstance(store, basestring): + store_name = store + else: + store_name = store.name + workspace_name = store.workspace.name + + if workspace_name is None: raise ValueError("Must specify workspace") + + cs_url = url( + self.service_url, + [ + "workspaces", + workspace_name, + "coveragestores", + store_name, + "coverages", + coverage, + "index/granules.json" + ], + params + ) + # GET /workspaces//coveragestores//coverages//index/granules.json headers = { "Content-type": "application/json", "Accept": "application/json" } + headers, response = self.http.request(cs_url, "GET", None, headers) + if headers.status != 200: + raise FailedRequestError(response) self._cache.clear() granules = json.loads(response, object_hook=_decode_dict) return granules - def mosaic_delete_granule(self, coverage, store, granule_id): - '''Deletes a granule of an existing imagemosaic''' + def harvest_externalgranule(self, data, store): + '''Harvest a granule into an existing imagemosaic''' + self.add_granule(data, store) + + def harvest_uploadgranule(self, data, store): + '''Harvest a granule into an existing imagemosaic''' + self.add_granule(data, store) + + def mosaic_coverages(self, store): + '''Returns all coverages in a coverage store''' params = dict() - cs_url = url(self.service_url, - ["workspaces", store.workspace.name, "coveragestores", store.name, "coverages", coverage, "index/granules", granule_id,".json"], params) - # DELETE /workspaces//coveragestores//coverages//index/granules/.json + cs_url = url( + self.service_url, + [ + "workspaces", + store.workspace.name, + "coveragestores", + store.name, + "coverages.json" + ], + params + ) + # GET /workspaces//coveragestores//coverages.json headers = { "Content-type": "application/json", "Accept": "application/json" } - headers, response = self.http.request(cs_url, "DELETE", None, headers) + + headers, response = self.http.request(cs_url, "GET", None, headers) + if headers.status != 200: + raise FailedRequestError(response) self._cache.clear() + coverages = json.loads(response, object_hook=_decode_dict) + return coverages + + def mosaic_coverage_schema(self, coverage, store, workspace): + '''Returns the schema of a coverage in a coverage store''' + params = dict() + cs_url = url( + self.service_url, + [ + "workspaces", + workspace, + "coveragestores", + store, + "coverages", + coverage, + "index.json" + ], + params + ) + # GET /workspaces//coveragestores//coverages//index.json + + headers = { + "Content-type": "application/json", + "Accept": "application/json" + } + + headers, response = self.http.request(cs_url, "GET", None, headers) if headers.status != 200: raise FailedRequestError(response) + self._cache.clear() + schema = json.loads(response, object_hook=_decode_dict) + return schema + + def mosaic_granules(self, coverage, store, filter=None, limit=None, offset=None): + '''List granules of an imagemosaic''' + return self.list_granules(coverage, store, filter=None, limit=None, offset=None) + + def mosaic_delete_granule(self, coverage, store, granule_id): + '''Deletes a granule of an existing imagemosaic''' + self.delete_granule(coverage, store, granule_id) def publish_featuretype(self, name, store, native_crs, srs=None, jdbc_virtual_table=None): '''Publish a featuretype from data in an existing 
store''' @@ -682,7 +820,7 @@ return candidates[0] if workspace is not None: - for store in self.get_stores(workspace): + for store in self.get_stores(workspace=workspace): resource = self.get_resource(name, store) if resource is not None: return resource @@ -715,7 +853,7 @@ return store.get_resources() if workspace is not None: resources = [] - for store in self.get_stores(workspace): + for store in self.get_stores(workspace=workspace): resources.extend(self.get_resources(store)) return resources resources = [] @@ -854,20 +992,50 @@ headers, response = self.http.request(workspace_url, "POST", xml, headers) assert 200 <= headers.status < 300, "Tried to create workspace but got " + str(headers.status) + ": " + response self._cache.pop("%s/workspaces.xml" % self.service_url, None) - return self.get_workspace(name) + workspaces = self.get_workspaces(name) + # Can only have one workspace with this name + return workspaces[0] if workspaces else None + + def get_workspaces(self, names=None): + ''' + Returns a list of workspaces in the catalog. + If names is specified, will only return workspaces that match. + names can either be a comma delimited string or an array. + Will return an empty list if no workspaces are found. + ''' + if names is None: + names = [] + elif isinstance(names, basestring): + names = map(str.strip, str(names).split(',')) - def get_workspaces(self): description = self.get_xml("%s/workspaces.xml" % self.service_url) - return [workspace_from_index(self, node) for node in description.findall("workspace")] + workspaces = [] + workspaces.extend([workspace_from_index(self, node) for node in description.findall("workspace")]) + + if workspaces and names: + named_workspaces = [] + for ws in workspaces: + if ws.name in names: + named_workspaces.append(ws) + return named_workspaces + + return workspaces def get_workspace(self, name): - candidates = [w for w in self.get_workspaces() if w.name == name] - if len(candidates) == 0: + ''' + returns a single workspace object. + Will return None if no workspace is found. + Will raise an error if more than one workspace with the same name is found. 
+ ''' + + workspaces = self.get_workspaces(name) + + if len(workspaces) == 0: return None - elif len(candidates) > 1: + elif len(workspaces) > 1: raise AmbiguousRequestError() else: - return candidates[0] + return workspaces[0] def get_default_workspace(self): ws = Workspace(self, "default") diff -Nru gsconfig-1.0.6/src/geoserver/style.py gsconfig-1.0.8/src/geoserver/style.py --- gsconfig-1.0.6/src/geoserver/style.py 2015-11-18 17:31:55.000000000 +0000 +++ gsconfig-1.0.8/src/geoserver/style.py 2017-10-09 16:38:10.000000000 +0000 @@ -72,14 +72,15 @@ user_style = self._get_sld_dom().find("{http://www.opengis.net/sld}NamedLayer/{http://www.opengis.net/sld}UserStyle") if not user_style: user_style = self._get_sld_dom().find("{http://www.opengis.net/sld}UserLayer/{http://www.opengis.net/sld}UserStyle") - + + title_node = None if user_style: try: # it is not mandatory title_node = user_style.find("{http://www.opengis.net/sld}Title") except: title_node = None - + return title_node.text if title_node is not None else None @property @@ -87,14 +88,15 @@ user_style = self._get_sld_dom().find("{http://www.opengis.net/sld}NamedLayer/{http://www.opengis.net/sld}UserStyle") if not user_style: user_style = self._get_sld_dom().find("{http://www.opengis.net/sld}UserLayer/{http://www.opengis.net/sld}UserStyle") - + + name_node = None if user_style: try: # it is not mandatory name_node = user_style.find("{http://www.opengis.net/sld}Name") except: name_node = None - + return name_node.text if name_node is not None else None @property diff -Nru gsconfig-1.0.6/src/geoserver/support.py gsconfig-1.0.8/src/geoserver/support.py --- gsconfig-1.0.6/src/geoserver/support.py 2017-02-02 12:46:59.000000000 +0000 +++ gsconfig-1.0.8/src/geoserver/support.py 2017-10-03 13:41:46.000000000 +0000 @@ -145,6 +145,7 @@ if k == 'port': v = str(v) builder.start("entry", dict(key=k)) + v = v if isinstance(v, basestring) else str(v) builder.data(v) builder.end("entry") builder.end(name) diff -Nru gsconfig-1.0.6/src/gsconfig.egg-info/PKG-INFO gsconfig-1.0.8/src/gsconfig.egg-info/PKG-INFO --- gsconfig-1.0.6/src/gsconfig.egg-info/PKG-INFO 2017-02-02 12:47:26.000000000 +0000 +++ gsconfig-1.0.8/src/gsconfig.egg-info/PKG-INFO 2017-10-10 15:02:14.000000000 +0000 @@ -1,300 +1,300 @@ -Metadata-Version: 1.1 -Name: gsconfig -Version: 1.0.6 -Summary: GeoServer REST Configuration -Home-page: https://github.com/boundlessgeo/gsconfig -Author: David Winslow, Sebastian Benthall -Author-email: dwinslow@opengeo.org -License: MIT -Description: gsconfig - ======== - - .. image:: https://travis-ci.org/boundlessgeo/gsconfig.svg?branch=master - :target: https://travis-ci.org/boundlessgeo/gsconfig - - gsconfig is a python library for manipulating a GeoServer instance via the GeoServer RESTConfig API. - - The project is distributed under a `MIT License `_ . - - Installing - ========== - - .. code-block:: shell - - pip install gsconfig - - For developers: - - .. code-block:: shell - - git clone git@github.com:boundlessgeo/gsconfig.git - cd gsconfig - python setup.py develop - - Getting Help - ============ - There is a brief manual at http://boundlessgeo.github.io/gsconfig/ . - If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/ . - - Please use the Github project at http://github.com/boundlessgeo/gsconfig for any bug reports (and pull requests are welcome, but please include tests where possible.) - - Sample Layer Creation Code - ========================== - - .. 
code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8080/geoserver/") - topp = cat.get_workspace("topp") - shapefile_plus_sidecars = shapefile_and_friends("states") - # shapefile_and_friends should look on the filesystem to find a shapefile - # and related files based on the base path passed in - # - # shapefile_plus_sidecars == { - # 'shp': 'states.shp', - # 'shx': 'states.shx', - # 'prj': 'states.prj', - # 'dbf': 'states.dbf' - # } - - # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) - # 'workspace' is optional (GeoServer's default workspace is used by... default) - # 'name' is required - ft = cat.create_featurestore(name, workspace=topp, data=shapefile_plus_sidecars) - - Running Tests - ============= - - Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of `integration tests `_. - These tests necessarily rely on a running copy of GeoServer, and expect that this GeoServer instance will be using the default data directory that is included with GeoServer. - This data is also included in the GeoServer source repository as ``/data/release/``. - In addition, it is expected that there will be a postgres database available at ``postgres:password@localhost:5432/db``. - You can test connecting to this database with the ``psql`` command line client by running ``$ psql -d db -Upostgres -h localhost -p 5432`` (you will be prompted interactively for the password.) - - To override the assumed database connection parameters, the following environment variables are supported: - - - DATABASE - - DBUSER - - DBPASS - - If present, psycopg will be used to verify the database connection prior to running the tests. - - If provided, the following environment variables will be used to reset the data directory: - - GEOSERVER_HOME - Location of git repository to read the clean data from. If only this option is provided - `git clean` will be used to reset the data. - GEOSERVER_DATA_DIR - Optional location of the data dir geoserver will be running with. If provided, `rsync` - will be used to reset the data. - GS_VERSION - Optional environment variable allowing the catalog test cases to automatically download - and start a vanilla GeoServer WAR form the web. - Be sure that there are no running services on HTTP port 8080. - - Here are the commands that I use to reset before running the gsconfig tests: - - .. code-block:: shell - - $ cd ~/geoserver/src/web/app/ - $ PGUSER=postgres dropdb db - $ PGUSER=postgres createdb db -T template_postgis - $ git clean -dxff -- ../../../data/release/ - $ git checkout -f - $ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \ - GEOSERVER_DATA_DIR=../../../data/release \ - mvn jetty:run - - At this point, GeoServer will be running foregrounded, but it will take a few seconds to actually begin listening for http requests. - You can stop it with ``CTRL-C`` (but don't do that until you've run the tests!) - You can run the gsconfig tests with the following command: - - .. code-block:: shell - - $ python setup.py test - - Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests: - - .. code-block:: shell - - $ git clean -dxff -- ../../../data/release/ - $ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload - - More Examples - Updated for GeoServer 2.4+ - ========================================== - - Loading the GeoServer ``catalog`` using ``gsconfig`` is quite easy. 
The example below allows you to connect to GeoServer by specifying custom credentials. - - .. code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver") - - The code below allows you to create a FeatureType from a Shapefile - - .. code-block:: python - - geosolutions = cat.get_workspace("geosolutions") - import geoserver.util - shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states") - # shapefile_and_friends should look on the filesystem to find a shapefile - # and related files based on the base path passed in - # - # shapefile_plus_sidecars == { - # 'shp': 'states.shp', - # 'shx': 'states.shx', - # 'prj': 'states.prj', - # 'dbf': 'states.dbf' - # } - # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) - # 'workspace' is optional (GeoServer's default workspace is used by... default) - # 'name' is required - ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions) - - It is possible to create JDBC Virtual Layers too. The code below allow to create a new SQL View called ``my_jdbc_vt_test`` defined by a custom ``sql``. - - .. code-block:: python - - from geoserver.catalog import Catalog - from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam - - cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****') - store = cat.get_store('postgis-geoserver') - geom = JDBCVirtualTableGeometry('newgeom','LineString','4326') - ft_name = 'my_jdbc_vt_test' - epsg_code = 'EPSG:4326' - sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime' - keyColumn = None - parameters = None - - jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters) - ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt) - - This example shows how to easily update a ``layer`` property. The same approach may be used with every ``catalog`` resource - - .. code-block:: python - - ne_shaded = cat.get_layer("ne_shaded") - ne_shaded.enabled=True - cat.save(ne_shaded) - cat.reload() - - Deleting a ``store`` from the ``catalog`` requires to purge all the associated ``layers`` first. This can be done by doing something like this: - - .. code-block:: python - - st = cat.get_store("ne_shaded") - cat.delete(ne_shaded) - cat.reload() - cat.delete(st) - cat.reload() - - There are some functionalities allowing to manage the ``ImageMosaic`` coverages. It is possible to create new ImageMosaics, add granules to them, - and also read the coverages metadata, modify the mosaic ``Dimensions`` and finally query the mosaic ``granules`` and list their properties. - - The gsconfig methods map the `REST APIs for ImageMosaic `_ - - In order to create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic Plugin guide - in order to get details on the mosaic configuration. The package contains an already configured zip file with two granules. - You need to update or remove the ``datastore.properties`` file before creating the mosaic otherwise you will get an exception. - - .. 
code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8180/geoserver/rest") - cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip") - - By defualt the ``cat.create_imagemosaic`` tries to configure the layer too. If you want to create the store only, you can specify the following parameter - - .. code-block:: python - - cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none") - - In order to retrieve from the catalog the ImageMosaic coverage store you can do this - - .. code-block:: python - - store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") - - It is possible to add more granules to the mosaic at runtime. - With the following method you can add granules already present on the machine local path. - - .. code-block:: python - - cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store) - - The method below allows to send granules remotely via POST to the ImageMosaic. - The granules will be uploaded and stored on the ImageMosaic index folder. - - .. code-block:: python - - cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store) - - To delete an ImageMosaic store, you can follow the standard approach, by deleting the layers first. - *ATTENTION*: at this time you need to manually cleanup the data dir from the mosaic granules and, in case you used a DB datastore, you must also drop the mosaic tables. - - .. code-block:: python - - layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test") - cat.delete(layer) - cat.reload() - cat.delete(store) - cat.reload() - - The method below allows you the load and update the coverage metadata of the ImageMosaic. - You need to do this for every coverage of the ImageMosaic of course. - - .. code-block:: python - - coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml") - coverage.supported_formats = ['GEOTIFF'] - cat.save(coverage) - - By default the ImageMosaic layer has not the coverage dimensions configured. It is possible using the coverage metadata to update and manage the coverage dimensions. - *ATTENTION*: notice that the ``presentation`` parameters accepts only one among the following values {'LIST', 'DISCRETE_INTERVAL', 'CONTINUOUS_INTERVAL'} - - .. code-block:: python - - from geoserver.support import DimensionInfo - timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None) - coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo}) - cat.save(coverage) - - One the ImageMosaic has been configures, it is possible to read the coverages along with their granule schema and granule info. - - .. code-block:: python - - from geoserver.catalog import Catalog - cat = Catalog("http://localhost:8180/geoserver/rest") - store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") - coverages = cat.mosaic_coverages(store) - schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store) - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store) - - The granules details can be easily read by doing something like this: - - .. 
code-block:: python - - granules['crs']['properties']['name'] - granules['features'] - granules['features'][0]['properties']['time'] - granules['features'][0]['properties']['location'] - granules['features'][0]['properties']['run'] - - When the mosaic grows up and starts having a huge set of granules, you may need to filter the granules query through a CQL filter on the coverage schema attributes. - - .. code-block:: python - - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'") - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0") - granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'") - -Keywords: GeoServer REST Configuration -Platform: UNKNOWN -Classifier: Development Status :: 4 - Beta -Classifier: Intended Audience :: Developers -Classifier: Intended Audience :: Science/Research -Classifier: License :: OSI Approved :: MIT License -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Topic :: Scientific/Engineering :: GIS +Metadata-Version: 1.1 +Name: gsconfig +Version: 1.0.8 +Summary: GeoServer REST Configuration +Home-page: https://github.com/boundlessgeo/gsconfig +Author: David Winslow, Sebastian Benthall +Author-email: dwinslow@opengeo.org +License: MIT +Description: gsconfig + ======== + + .. image:: https://travis-ci.org/boundlessgeo/gsconfig.svg?branch=master + :target: https://travis-ci.org/boundlessgeo/gsconfig + + gsconfig is a python library for manipulating a GeoServer instance via the GeoServer RESTConfig API. + + The project is distributed under a `MIT License `_ . + + Installing + ========== + + .. code-block:: shell + + pip install gsconfig + + For developers: + + .. code-block:: shell + + git clone git@github.com:boundlessgeo/gsconfig.git + cd gsconfig + python setup.py develop + + Getting Help + ============ + There is a brief manual at http://boundlessgeo.github.io/gsconfig/ . + If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/ . + + Please use the Github project at http://github.com/boundlessgeo/gsconfig for any bug reports (and pull requests are welcome, but please include tests where possible.) + + Sample Layer Creation Code + ========================== + + .. code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8080/geoserver/") + topp = cat.get_workspace("topp") + shapefile_plus_sidecars = shapefile_and_friends("states") + # shapefile_and_friends should look on the filesystem to find a shapefile + # and related files based on the base path passed in + # + # shapefile_plus_sidecars == { + # 'shp': 'states.shp', + # 'shx': 'states.shx', + # 'prj': 'states.prj', + # 'dbf': 'states.dbf' + # } + + # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) + # 'workspace' is optional (GeoServer's default workspace is used by... default) + # 'name' is required + ft = cat.create_featurestore(name, workspace=topp, data=shapefile_plus_sidecars) + + Running Tests + ============= + + Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of `integration tests `_. 
+ These tests necessarily rely on a running copy of GeoServer, and expect that this GeoServer instance will be using the default data directory that is included with GeoServer. + This data is also included in the GeoServer source repository as ``/data/release/``. + In addition, it is expected that there will be a postgres database available at ``postgres:password@localhost:5432/db``. + You can test connecting to this database with the ``psql`` command line client by running ``$ psql -d db -Upostgres -h localhost -p 5432`` (you will be prompted interactively for the password.) + + To override the assumed database connection parameters, the following environment variables are supported: + + - DATABASE + - DBUSER + - DBPASS + + If present, psycopg will be used to verify the database connection prior to running the tests. + + If provided, the following environment variables will be used to reset the data directory: + + GEOSERVER_HOME + Location of git repository to read the clean data from. If only this option is provided + `git clean` will be used to reset the data. + GEOSERVER_DATA_DIR + Optional location of the data dir geoserver will be running with. If provided, `rsync` + will be used to reset the data. + GS_VERSION + Optional environment variable allowing the catalog test cases to automatically download + and start a vanilla GeoServer WAR form the web. + Be sure that there are no running services on HTTP port 8080. + + Here are the commands that I use to reset before running the gsconfig tests: + + .. code-block:: shell + + $ cd ~/geoserver/src/web/app/ + $ PGUSER=postgres dropdb db + $ PGUSER=postgres createdb db -T template_postgis + $ git clean -dxff -- ../../../data/release/ + $ git checkout -f + $ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \ + GEOSERVER_DATA_DIR=../../../data/release \ + mvn jetty:run + + At this point, GeoServer will be running foregrounded, but it will take a few seconds to actually begin listening for http requests. + You can stop it with ``CTRL-C`` (but don't do that until you've run the tests!) + You can run the gsconfig tests with the following command: + + .. code-block:: shell + + $ python setup.py test + + Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests: + + .. code-block:: shell + + $ git clean -dxff -- ../../../data/release/ + $ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload + + More Examples - Updated for GeoServer 2.4+ + ========================================== + + Loading the GeoServer ``catalog`` using ``gsconfig`` is quite easy. The example below allows you to connect to GeoServer by specifying custom credentials. + + .. code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver") + + The code below allows you to create a FeatureType from a Shapefile + + .. 
code-block:: python + + geosolutions = cat.get_workspace("geosolutions") + import geoserver.util + shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states") + # shapefile_and_friends should look on the filesystem to find a shapefile + # and related files based on the base path passed in + # + # shapefile_plus_sidecars == { + # 'shp': 'states.shp', + # 'shx': 'states.shx', + # 'prj': 'states.prj', + # 'dbf': 'states.dbf' + # } + # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes) + # 'workspace' is optional (GeoServer's default workspace is used by... default) + # 'name' is required + ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions) + + It is possible to create JDBC Virtual Layers too. The code below allow to create a new SQL View called ``my_jdbc_vt_test`` defined by a custom ``sql``. + + .. code-block:: python + + from geoserver.catalog import Catalog + from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam + + cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****') + store = cat.get_store('postgis-geoserver') + geom = JDBCVirtualTableGeometry('newgeom','LineString','4326') + ft_name = 'my_jdbc_vt_test' + epsg_code = 'EPSG:4326' + sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime' + keyColumn = None + parameters = None + + jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters) + ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt) + + This example shows how to easily update a ``layer`` property. The same approach may be used with every ``catalog`` resource + + .. code-block:: python + + ne_shaded = cat.get_layer("ne_shaded") + ne_shaded.enabled=True + cat.save(ne_shaded) + cat.reload() + + Deleting a ``store`` from the ``catalog`` requires to purge all the associated ``layers`` first. This can be done by doing something like this: + + .. code-block:: python + + st = cat.get_store("ne_shaded") + cat.delete(ne_shaded) + cat.reload() + cat.delete(st) + cat.reload() + + There are some functionalities allowing to manage the ``ImageMosaic`` coverages. It is possible to create new ImageMosaics, add granules to them, + and also read the coverages metadata, modify the mosaic ``Dimensions`` and finally query the mosaic ``granules`` and list their properties. + + The gsconfig methods map the `REST APIs for ImageMosaic `_ + + In order to create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic Plugin guide + in order to get details on the mosaic configuration. The package contains an already configured zip file with two granules. + You need to update or remove the ``datastore.properties`` file before creating the mosaic otherwise you will get an exception. + + .. code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8180/geoserver/rest") + cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip") + + By defualt the ``cat.create_imagemosaic`` tries to configure the layer too. If you want to create the store only, you can specify the following parameter + + .. 
code-block:: python + + cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none") + + In order to retrieve from the catalog the ImageMosaic coverage store you can do this + + .. code-block:: python + + store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") + + It is possible to add more granules to the mosaic at runtime. + With the following method you can add granules already present on the machine local path. + + .. code-block:: python + + cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store) + + The method below allows to send granules remotely via POST to the ImageMosaic. + The granules will be uploaded and stored on the ImageMosaic index folder. + + .. code-block:: python + + cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store) + + To delete an ImageMosaic store, you can follow the standard approach, by deleting the layers first. + *ATTENTION*: at this time you need to manually cleanup the data dir from the mosaic granules and, in case you used a DB datastore, you must also drop the mosaic tables. + + .. code-block:: python + + layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test") + cat.delete(layer) + cat.reload() + cat.delete(store) + cat.reload() + + The method below allows you the load and update the coverage metadata of the ImageMosaic. + You need to do this for every coverage of the ImageMosaic of course. + + .. code-block:: python + + coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml") + coverage.supported_formats = ['GEOTIFF'] + cat.save(coverage) + + By default the ImageMosaic layer has not the coverage dimensions configured. It is possible using the coverage metadata to update and manage the coverage dimensions. + *ATTENTION*: notice that the ``presentation`` parameters accepts only one among the following values {'LIST', 'DISCRETE_INTERVAL', 'CONTINUOUS_INTERVAL'} + + .. code-block:: python + + from geoserver.support import DimensionInfo + timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None) + coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo}) + cat.save(coverage) + + Once the ImageMosaic has been configured, it is possible to read the coverages along with their granule schema and granule info. + + .. code-block:: python + + from geoserver.catalog import Catalog + cat = Catalog("http://localhost:8180/geoserver/rest") + store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test") + coverages = cat.mosaic_coverages(store) + schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store) + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store) + + The granules details can be easily read by doing something like this: + + .. code-block:: python + + granules['crs']['properties']['name'] + granules['features'] + granules['features'][0]['properties']['time'] + granules['features'][0]['properties']['location'] + granules['features'][0]['properties']['run'] + + When the mosaic grows up and starts having a huge set of granules, you may need to filter the granules query through a CQL filter on the coverage schema attributes. + + .. 
code-block:: python + + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0") + granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'") + +Keywords: GeoServer REST Configuration +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Intended Audience :: Developers +Classifier: Intended Audience :: Science/Research +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Scientific/Engineering :: GIS
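
The ``src/geoserver/catalog.py`` hunk above has ``get_workspace`` delegate to ``get_workspaces`` and then return ``None`` when nothing matches, raise ``AmbiguousRequestError`` when more than one workspace matches, and return the single match otherwise. Below is a minimal sketch of calling code that handles all three outcomes; it assumes a local GeoServer with the default ``admin``/``geoserver`` credentials used in the README above, and it assumes ``AmbiguousRequestError`` is importable from ``geoserver.catalog``, the module in which the hunk raises it.

.. code-block:: python

    # Sketch only, not part of the packaged diff. Assumes a reachable local
    # GeoServer and that AmbiguousRequestError lives in geoserver.catalog.
    from geoserver.catalog import Catalog, AmbiguousRequestError

    cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver")

    try:
        ws = cat.get_workspace("topp")
        if ws is None:
            # get_workspace returns None when no workspace has that name
            print("no workspace named 'topp'")
        else:
            print(ws.name)
    except AmbiguousRequestError:
        # raised when more than one workspace matches the requested name
        print("more than one workspace matched 'topp'")

Note also that the 1.0.8 README examples call ``cat.list_granules(...)`` where the 1.0.6 text used ``cat.mosaic_granules(...)``, so scripts written against the older examples may need the same rename.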