Merge lp:~stefanor/ibid/db-import-export into lp:~ibid-core/ibid/old-trunk-pack-0.92
Status: Rejected
Rejected by: Stefano Rivera
Proposed branch: lp:~stefanor/ibid/db-import-export
Merge into: lp:~ibid-core/ibid/old-trunk-pack-0.92
Diff against target: 665 lines, 4 files modified
  ibid/core.py (+9/-3)
  ibid/models.py (+117/-57)
  scripts/ibid-db (+194/-0)
  setup.py (+11/-1)
To merge this branch: bzr merge lp:~stefanor/ibid/db-import-export
Related bugs: (none)
Reviewers:
  Jonathan Hitchcock: Approve
  Ibid Core Team: Pending
Review via email: mp+9851@code.launchpad.net
This proposal supersedes a proposal from 2009-08-07.
Commit message
Description of the change
Jonathan Hitchcock (vhata) wrote:
Should you be loading the entire database into memory before dumping it?
Also, what's all this "print >> stderr" stuff? Why do you use logging if you're not going to... use it?
Stefano Rivera (stefanor) wrote:
> Should you be loading the entire database into memory before dumping it?
Good question. For now it's not an issue (I assume even Spinach's DB is way smaller than the libraries he loads).
Obviously changing that would require a different file format.
> Also, what's all this "print >> stderr" stuff? Why do you use logging if
> you're not going to... use it?
The logging bit is boilerplate to get the rest of ibid to log correctly.
Standard UNIX error behaviour is error to stderr, exit non-zero.
Michael Gorven (mgorven) wrote:
On Monday 10 August 2009 11:36:05 Stefano Rivera wrote:
> Standard UNIX error behaviour is error to stderr, exit non-zero.
So why not use the Python logging, and configure console output (which goes to
stderr)?
Stefano Rivera (stefanor) wrote:
Hi Michael (2009.08.10)
> So why not use the Python logging, and configure console output (which goes to
> stderr)?
Because this script isn't the only part of Ibid that does logging.
Seriously, do you both not think this is the simplest and right option??
SR
Michael Gorven (mgorven) wrote:
On Friday 07 August 2009 22:50:18 Stefano Rivera wrote:
> +if options.verbose:
> +    logging.basicConfig(level=logging.DEBUG)
> +else:
> +    logging.basicConfig(level=logging.ERROR)
I think a nicer way to do this is to make -v a counter, and set the log
level from the count, down to DEBUG.
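A minimal sketch of that suggestion, assuming the optparse setup used by ibid-db (the level arithmetic here is illustrative, not the merged code):

```python
import logging
from optparse import OptionParser

# Sketch: -v as a repeatable counter instead of a boolean.
parser = OptionParser()
parser.add_option('-v', '--verbose', dest='verbosity', action='count',
                  default=0, help='Increase verbosity (repeatable).')
options, args = parser.parse_args(['-v', '-v'])

# ERROR=40, WARNING=30, INFO=20, DEBUG=10: each -v steps the level
# down by one notch, clamped at DEBUG.
level = max(logging.ERROR - 10 * options.verbosity, logging.DEBUG)
logging.basicConfig(level=level)
```

Two `-v` flags would land on INFO; three or more clamp at DEBUG.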
> + version = re.match(r'^# Ibid (.+) Database dump for', version).group(1)
I'd prefer storing information which will be parsed in JSON instead of using
regexes.
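A sketch of what a JSON header could look like; the field names here are hypothetical, not ibid's actual dump format:

```python
import json

# Writing the dump header as one JSON line instead of free text:
header_line = json.dumps({'ibid_version': '0.1.0',
                          'botname': 'spinach',
                          'date': '2009-10-17T19:58:09Z'})

# On import, a single json.loads replaces the regex match:
meta = json.loads(header_line)
version = meta['ibid_version']
```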
> + "Dump schema doesn't match DB Schema. "
> + 'You must use the same version of ibid that you dumped with. '
Does that mean that you can't restore an old Ibid dump with a newer Ibid?
That's fine for now if it's non-trivial to handle, but ideally it would
import it and then upgrade.
Michael Gorven (mgorven) wrote:
On Monday 10 August 2009 17:54:08 Stefano Rivera wrote:
> Seriously, do you both not think this is the simplest and right option??
Now that I've actually looked at the code, I agree ;-)
Stefano Rivera (stefanor) wrote:
> I think a nicer way to do this is to make -v a counter, and set the log
> level from the count, down to DEBUG.
Yeah, we should probably do that everywhere.
> > + version = re.match(r'^# Ibid (.+) Database dump for', version).group(1)
>
> I'd prefer storing information which will be parsed in JSON instead of using
> regexes.
Yup. I've actually been meaning to go back and change that - too strict.
> Does that mean that you can't restore an old Ibid dump with a newer Ibid?
> That's fine for now if it's non-trivial to handle, but ideally it would
> import it and then upgrade.
Unfortunately so. The problem is that Ibid can't set up a new DB based on an
arbitrary past schema version. And we can't just save the schema in the dump
because one of the aims of the DB is to be cross-DB compatible.
We could insert the data, and just hope it works...
Jonathan Hitchcock (vhata) wrote:
> On Monday 10 August 2009 17:54:08 Stefano Rivera wrote:
> > Seriously, do you both not think this is the simplest and right option??
>
> Now that I've actually looked at the code, I agree ;-)
Me too :)
Jonathan Hitchcock (vhata) wrote:
> > Does that mean that you can't restore an old Ibid dump with a newer Ibid?
> > That's fine for now if it's non-trivial to handle, but ideally it would
> > import it and then upgrade.
>
> Unfortunately so. The problem is that Ibid can't set up a new DB based on an
> arbitrary past schema version. And we can't just save the schema in the dump
> because one of the aims of the DB is to be cross-DB compatible.
>
> We could insert the data, and just hope it works...
As per our talk on IRC, this is what I think should happen:
All plugins should start at schema version 0 straight away - this is equivalent to having no tables. They should then provide per-DB 0-to-1 migrations which create the initial tables. 1-to-2 and so on obviously update these tables in a cross-DB compatible way.
Importing old data from a different database will require running all schema migrations from 0 up to the old data's schema version.
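That scheme could be sketched like this; the class and dispatch loop are illustrative, though the `upgrade_X_to_Y` naming follows the VersionedSchema convention in ibid/models.py:

```python
# Every table starts at schema version 0 (i.e. no table), and numbered
# upgrade steps walk it forward one version at a time.
class VersionedTable(object):
    def __init__(self):
        self.version = 0

    def upgrade_0_to_1(self):
        pass  # per-DB: CREATE TABLE for the initial schema

    def upgrade_1_to_2(self):
        pass  # from here on, cross-DB compatible ALTERs

    def upgrade_to(self, target):
        # Importing an old dump = replaying every step from the
        # current version up to the dump's schema version.
        while self.version < target:
            getattr(self, 'upgrade_%i_to_%i'
                    % (self.version, self.version + 1))()
            self.version += 1

table = VersionedTable()
table.upgrade_to(2)
```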
Stefano Rivera (stefanor) wrote:
> As per our talk on IRC, this is what I think should happen:
As this branch has been held up for months, I've filed a separate bug #491930 for that.
In the meantime, let's merge the db upgrade tool.
Jonathan Hitchcock (vhata) wrote:
You win this time, Baby Doom.
Stefano Rivera (stefanor) wrote:
Will re-propose for new trunk when I've made a few additions.
Unmerged revisions
- 777. By Stefano Rivera: Spelling
- 776. By Stefano Rivera: Arbitrary length factoid names
- 775. By Stefano Rivera: Double-guess chardet
- 774. By Stefano Rivera: Check for over-length items
- 773. By Stefano Rivera: ibid_import: Use local.ini
- 772. By Stefano Rivera: ibid_import: Don't touch the Knab DB
- 771. By Stefano Rivera: Ignore header comment on import
- 770. By Stefano Rivera: Merge from trunk
- 769. By Stefano Rivera: Unused import
- 768. By Stefano Rivera: Whitespace
Preview Diff
=== modified file 'ibid/core.py'
--- ibid/core.py 2009-10-16 16:31:34 +0000
+++ ibid/core.py 2009-10-17 19:58:09 +0000
@@ -222,8 +222,10 @@
else:
self.log.debug("Skipping Processor: %s.%s", name, klass.__name__)

- ibid.models.check_schema_versions(ibid.databases['ibid'])
-
+ try:
+ ibid.models.check_schema_versions(ibid.databases['ibid'])
+ except ibid.models.SchemaVersionException, e:
+ self.log.error(u'Tables out of date: %s. Run "ibid-db --upgrade"', e.message)
except Exception, e:
self.log.exception(u"Couldn't instantiate %s processor of %s plugin", classname, name)
return False
@@ -288,7 +290,11 @@
self.load(database)

if check_schema_versions:
- ibid.models.check_schema_versions(self['ibid'])
+ try:
+ ibid.models.check_schema_versions(self['ibid'])
+ except ibid.models.SchemaVersionException, e:
+ self.log.error(u'Tables out of date: %s. Run "ibid-db --upgrade"', e.message)
+ raise

def load(self, name):
uri = ibid.config.databases[name]

=== modified file 'ibid/models.py'
--- ibid/models.py 2009-10-16 16:31:34 +0000
+++ ibid/models.py 2009-10-17 19:58:09 +0000
@@ -2,12 +2,14 @@
import logging
import re

-from sqlalchemy import Column, Integer, Unicode, UnicodeText, DateTime, ForeignKey, \
- UniqueConstraint, MetaData, Table, Index, __version__ as sqlalchemy_version
+from sqlalchemy import Column, Integer, Unicode, UnicodeText, DateTime, \
+ ForeignKey, UniqueConstraint, MetaData, Table, Index, \
+ __version__ as sqlalchemy_version
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql import func
-from sqlalchemy.exceptions import InvalidRequestError, OperationalError, ProgrammingError
+from sqlalchemy.exceptions import InvalidRequestError, OperationalError, \
+ ProgrammingError

if sqlalchemy_version < '0.5':
NoResultFound = InvalidRequestError
@@ -48,7 +50,8 @@
return False

try:
- schema = session.query(Schema).filter(Schema.table==unicode(self.table.name)).one()
+ schema = session.query(Schema) \
+ .filter_by(table=unicode(self.table.name)).one()
return schema.version == self.version
except NoResultFound:
return False
@@ -58,8 +61,10 @@

for fk in self.table.foreign_keys:
dependancy = fk.target_fullname.split('.')[0]
- log.debug("Upgrading table %s before %s", dependancy, self.table.name)
- metadata.tables[dependancy].versioned_schema.upgrade_schema(sessionmaker)
+ log.debug("Upgrading table %s before %s",
+ dependancy, self.table.name)
+ metadata.tables[dependancy].versioned_schema \
+ .upgrade_schema(sessionmaker)

self.upgrade_session = session = sessionmaker()
self.upgrade_reflected_model = MetaData(session.bind, reflect=True)
@@ -74,7 +79,8 @@
return
Schema.__table__ = self._get_reflected_model()

- schema = session.query(Schema).filter(Schema.table==unicode(self.table.name)).first()
+ schema = session.query(Schema) \
+ .filter_by(table=unicode(self.table.name)).first()

try:
if not schema:
@@ -87,7 +93,8 @@

elif self.version > schema.version:
for version in range(schema.version + 1, self.version + 1):
- log.info(u"Upgrading table %s to version %i", self.table.name, version)
+ log.info(u"Upgrading table %s to version %i",
+ self.table.name, version)

session.commit()

@@ -96,7 +103,8 @@
schema.version = version
session.save_or_update(schema)

- self.upgrade_reflected_model = MetaData(session.bind, reflect=True)
+ self.upgrade_reflected_model = \
+ MetaData(session.bind, reflect=True)

session.commit()

@@ -109,7 +117,8 @@

def _index_name(self, col):
"""
- We'd like to not duplicate an existing index so try to abide by the local customs
+ We'd like to not duplicate an existing index so try to abide by the
+ local customs
"""
session = self.upgrade_session

@@ -120,12 +129,15 @@
elif session.bind.engine.name == 'mysql':
return col.name

- log.warning(u"Unknown database type, %s, you may end up with duplicate indices"
- % session.bind.engine.name)
+ log.warning(u"Unknown database type, %s, you may end up with "
+ u"duplicate indices" % session.bind.engine.name)
return 'ix_%s_%s' % (self.table.name, col.name)

def _mysql_constraint_createstring(self, constraint):
- """Generate the description of a constraint for insertion into a CREATE string"""
+ """
+ Generate the description of a constraint for insertion into a CREATE
+ string
+ """
return ', '.join(
(isinstance(column.type, UnicodeText)
and '"%(name)s"(%(length)i)'
@@ -136,9 +148,11 @@
)

def _create_table(self):
- """Check that the table is in a suitable form for all DBs, before creating.
- Yes, SQLAlchemy's abstractions are leaky enough that you have to do this"""
-
+ """
+ Check that the table is in a suitable form for all DBs, before
+ creating. Yes, SQLAlchemy's abstractions are leaky enough that you have
+ to do this
+ """
session = self.upgrade_session
indices = []
old_indexes = list(self.table.indexes)
@@ -147,18 +161,23 @@
for column in self.table.c:
if column.unique and not column.index:
raise Exception(u"Column %s.%s is unique but not indexed. "
- u"SQLite doesn't like such things, so please be nice and don't do that."
+ u"SQLite doesn't like such things, "
+ u"so please be nice and don't do that."
% (self.table.name, self.column.name))

- # Strip out Indexes and Constraints that SQLAlchemy can't create by itself
+ # Strip out Indexes and Constraints that SQLAlchemy can't create by
+ # itself
if session.bind.engine.name == 'mysql':
for type, old_list in (
('constraints', old_constraints),
('indexes', old_indexes)):
for constraint in old_list:
- if [True for column in constraint.columns if isinstance(column.type, UnicodeText)]:
- indices.append((isinstance(constraint, UniqueConstraint),
- self._mysql_constraint_createstring(constraint)))
+ if any(True for column in constraint.columns
+ if isinstance(column.type, UnicodeText)):
+ indices.append((
+ isinstance(constraint, UniqueConstraint),
+ self._mysql_constraint_createstring(constraint)
+ ))

getattr(self.table, type).remove(constraint)

@@ -194,7 +213,8 @@
table.append_column(col)
constraints = table.constraints - constraints

- sg = session.bind.dialect.schemagenerator(session.bind.dialect, session.bind)
+ sg = session.bind.dialect.schemagenerator(session.bind.dialect,
+ session.bind)
description = sg.get_column_specification(col)

for constraint in constraints:
@@ -208,7 +228,8 @@
else:
constraints.append(constraint)

- session.execute('ALTER TABLE "%s" ADD COLUMN %s %s;' % (table.name, description, " ".join(constraints)))
+ session.execute('ALTER TABLE "%s" ADD COLUMN %s %s;'
+ % (table.name, description, " ".join(constraints)))

def add_index(self, col, unique=False):
"Add an index to the table"
@@ -238,12 +259,14 @@

session = self.upgrade_session

- log.debug(u"Dropping column %s from table %s", col_name, self.table.name)
+ log.debug(u"Dropping column %s from table %s",
+ col_name, self.table.name)

if session.bind.engine.name == 'sqlite':
self._rebuild_sqlite({col_name: None})
else:
- session.execute('ALTER TABLE "%s" DROP COLUMN "%s";' % (self.table.name, col_name))
+ session.execute('ALTER TABLE "%s" DROP COLUMN "%s";'
+ % (self.table.name, col_name))

def rename_column(self, col, old_name):
"Rename column from old_name to Column col"
@@ -251,15 +274,16 @@
session = self.upgrade_session
table = self._get_reflected_model()

- log.debug(u"Rename column %s to %s in table %s", old_name, col.name, table.name)
+ log.debug(u"Rename column %s to %s in table %s",
+ old_name, col.name, table.name)

if session.bind.engine.name == 'sqlite':
self._rebuild_sqlite({old_name: col})
elif session.bind.engine.name == 'mysql':
self.alter_column(col, old_name)
else:
- session.execute('ALTER TABLE "%s" RENAME COLUMN "%s" TO "%s";' %
- (table.name, old_name, col.name))
+ session.execute('ALTER TABLE "%s" RENAME COLUMN "%s" TO "%s";'
+ % (table.name, old_name, col.name))

def alter_column(self, col, old_name=None):
"""Change a column (possibly renaming from old_name) to Column col."""
@@ -269,7 +293,8 @@

log.debug(u"Altering column %s in table %s", col.name, table.name)

- sg = session.bind.dialect.schemagenerator(session.bind.dialect, session.bind)
+ sg = session.bind.dialect.schemagenerator(session.bind.dialect,
+ session.bind)
description = sg.get_column_specification(col)
old_col = table.c[old_name or col.name]

@@ -282,24 +307,29 @@
# only type changes have a real effect
return

- self._rebuild_sqlite({old_name is None and col.name or old_name: col})
+ self._rebuild_sqlite(
+ {old_name is None and col.name or old_name: col})

elif session.bind.engine.name == 'mysql':
# Special handling for columns of TEXT type, because SQLAlchemy
# can't create indexes for them
recreate = []
- if isinstance(col.type, UnicodeText) or isinstance(old_col.type, UnicodeText):
+ if isinstance(col.type, UnicodeText) \
+ or isinstance(old_col.type, UnicodeText):
for type in (table.constraints, table.indexes):
for constraint in list(type):
- if [True for column in constraint.columns if old_col.name == column.name]:
+ if any(True for column in constraint.columns
+ if old_col.name == column.name):
constraint.drop()

constraint.columns = [
- (old_col.name == column.name) and col or column
- for column in constraint.columns
+ (old_col.name == column.name) and col or column
+ for column in constraint.columns
]
- recreate.append((isinstance(constraint, UniqueConstraint),
- self._mysql_constraint_createstring(constraint)))
+ recreate.append((
+ isinstance(constraint, UniqueConstraint),
+ self._mysql_constraint_createstring(constraint)
+ ))

session.execute('ALTER TABLE "%s" CHANGE "%s" %s;' %
(table.name, old_col.name, description))
@@ -316,14 +346,21 @@
(table.name, col.name, description.split(" ", 3)[1]))

if old_col.nullable != col.nullable:
- session.execute('ALTER TABLE "%s" ALTER COLUMN "%s" %s NOT NULL;' %
- (table.name, col.name, col.nullable and 'DROP' or 'SET'))
+ session.execute(
+ 'ALTER TABLE "%s" ALTER COLUMN "%s" %s NOT NULL;'
+ % (table.name, col.name, col.nullable and 'DROP' or 'SET')
+ )

def _rebuild_sqlite(self, colmap):
- """SQLite doesn't support modification of table schema - must rebuild the table.
- colmap maps old column names to new Columns (or None for column deletion).
- Only modified columns need to be listed, unchaged columns are carried over automatically.
- Specify table in case name has changed in a more recent version."""
+ """
+ SQLite doesn't support modification of table schema - must rebuild the
+ table.
+ colmap maps old column names to new Columns
+ (or None for column deletion).
+ Only modified columns need to be listed, unchaged columns are carried
+ over automatically.
+ Specify table in case name has changed in a more recent version.
+ """

session = self.upgrade_session
table = self._get_reflected_model()
@@ -343,23 +380,30 @@
if col is not None:
table.append_column(col)

- session.execute('ALTER TABLE "%s" RENAME TO "%s_old";' % (table.name, table.name))
+ session.execute('ALTER TABLE "%s" RENAME TO "%s_old";'
+ % (table.name, table.name))

- # SQLAlchemy indexes aren't attached to tables, they must be dropped around now
- # or we'll get a clash
+ # SQLAlchemy indexes aren't attached to tables, they must be dropped
+ # around now or we'll get a clash
for constraint in table.indexes:
constraint.drop()

table.create()

session.execute('INSERT INTO "%s" ("%s") SELECT "%s" FROM "%s_old";'
- % (table.name, '", "'.join(fullcolmap.values()), '", "'.join(fullcolmap.keys()), table.name))
+ % (
+ table.name,
+ '", "'.join(fullcolmap.values()),
+ '", "'.join(fullcolmap.keys()),
+ table.name
+ ))

session.execute('DROP TABLE "%s_old";' % table.name)

# SQLAlchemy doesn't pick up all the indexes in the reflected table.
- # It's ok to use indexes that may be further in the future than this upgrade
- # because either we can already support them or we'll be rebuilding again soon
+ # It's ok to use indexes that may be further in the future than this
+ # upgrade because either we can already support them or we'll be
+ # rebuilding again soon
for constraint in self.table.indexes:
try:
constraint.create(bind=session.bind)
@@ -391,7 +435,8 @@
Column('id', Integer, primary_key=True),
Column('account_id', Integer, ForeignKey('accounts.id'), index=True),
Column('source', Unicode(32), nullable=False, index=True),
- Column('identity', UnicodeText, nullable=False, index=True),
+ Column('identity', UnicodeText, nullable=False, index=True,
+ info={'ibid_mysql_index_length': 32}),
Column('created', DateTime),
UniqueConstraint('source', 'identity'),
useexisting=True)
@@ -403,8 +448,10 @@
self.add_index(self.table.c.identity)

def upgrade_2_to_3(self):
- self.alter_column(Column('source', Unicode(32), nullable=False, index=True))
- self.alter_column(Column('identity', UnicodeText, nullable=False, index=True))
+ self.alter_column(Column('source',
+ Unicode(32), nullable=False, index=True))
+ self.alter_column(Column('identity',
+ UnicodeText, nullable=False, index=True))

__table__.versioned_schema = IdentitySchema(__table__, 3)

@@ -420,7 +467,8 @@
class Attribute(Base):
__table__ = Table('account_attributes', Base.metadata,
Column('id', Integer, primary_key=True),
- Column('account_id', Integer, ForeignKey('accounts.id'), nullable=False, index=True),
+ Column('account_id', Integer, ForeignKey('accounts.id'),
+ nullable=False, index=True),
Column('name', Unicode(32), nullable=False, index=True),
Column('value', UnicodeText, nullable=False),
UniqueConstraint('account_id', 'name'),
@@ -445,7 +493,8 @@
class Credential(Base):
__table__ = Table('credentials', Base.metadata,
Column('id', Integer, primary_key=True),
- Column('account_id', Integer, ForeignKey('accounts.id'), nullable=False, index=True),
+ Column('account_id', Integer, ForeignKey('accounts.id'),
+ nullable=False, index=True),
Column('source', Unicode(32), index=True),
Column('method', Unicode(16), nullable=False, index=True),
Column('credential', UnicodeText, nullable=False),
@@ -458,7 +507,8 @@
self.add_index(self.table.c.method)
def upgrade_2_to_3(self):
self.alter_column(Column('source', Unicode(32), index=True))
- self.alter_column(Column('credential', UnicodeText, nullable=False))
+ self.alter_column(Column('credential',
+ UnicodeText, nullable=False))

__table__.versioned_schema = CredentialSchema(__table__, 3)

@@ -471,7 +521,8 @@
class Permission(Base):
__table__ = Table('permissions', Base.metadata,
Column('id', Integer, primary_key=True),
- Column('account_id', Integer, ForeignKey('accounts.id'), nullable=False, index=True),
+ Column('account_id', Integer, ForeignKey('accounts.id'),
+ nullable=False, index=True),
Column('name', Unicode(16), nullable=False, index=True),
Column('value', Unicode(4), nullable=False),
UniqueConstraint('account_id', 'name'),
@@ -491,7 +542,8 @@
class Account(Base):
__table__ = Table('accounts', Base.metadata,
Column('id', Integer, primary_key=True),
- Column('username', Unicode(32), unique=True, nullable=False, index=True),
+ Column('username', Unicode(32), unique=True, nullable=False,
+ index=True),
useexisting=True)

class AccountSchema(VersionedSchema):
@@ -511,6 +563,14 @@
def __repr__(self):
return '<Account %s>' % self.username

+
+class SchemaVersionException(Exception):
+ """There is an out-of-date table.
+ The message should be a list of out of date tables.
+ """
+ pass
+
+
def check_schema_versions(sessionmaker):
"""Pass through all tables, log out of date ones,
and except if not all up to date"""
@@ -528,7 +588,7 @@
if not upgrades:
return

- raise Exception(u"Tables %s are out of date. Run ibid-setup" % u", ".join(upgrades))
+ raise SchemaVersionException(u", ".join(upgrades))

def upgrade_schemas(sessionmaker):
"Pass through all tables and update schemas"

=== added file 'scripts/ibid-db'
--- scripts/ibid-db 1970-01-01 00:00:00 +0000
+++ scripts/ibid-db 2009-10-17 19:58:09 +0000
@@ -0,0 +1,194 @@
+#!/usr/bin/env python
+
+from datetime import datetime
+import gzip
+import logging
+from optparse import OptionParser, OptionGroup
+from os.path import exists
+import re
+from sys import stdin, stdout, stderr, exit
+
+from sqlalchemy import select, DateTime, Unicode, UnicodeText
+from twisted.python.modules import getModule
+# json only in Python >=2.6
+try:
+ import simplejson as json
+except ImportError:
+ import json
+
+import ibid
+from ibid.config import FileConfig
+from ibid.core import DatabaseManager
+from ibid.models import metadata, upgrade_schemas
+from ibid.utils import ibid_version
+
+parser = OptionParser(usage='%prog [options...]',
+ description="""Ibid Database management tool. Used for import, export,
+and upgrades. Export format is a header-line followed by JSON. FILE can be -
+for stdin/stdout or can end in .gz for automatic Gzip compression.""")
+commands = OptionGroup(parser, 'Modes')
+commands.add_option('-e', '--export', dest='export', metavar='FILE',
+ help='Export DB contents to FILE.')
+commands.add_option('-i', '--import', dest='import_', metavar='FILE',
+ help='Import DB contents from FILE. DB must be empty first.')
+commands.add_option('-u', '--upgrade', dest='upgrade', action='store_true',
+ help='Upgrade DB schema. You should backup first.')
+parser.add_option_group(commands)
+parser.add_option('-v', '--verbose', dest='verbose', action='store_true',
+ default=False, help='Turn on debugging output to STDERR.')
+
+(options, args) = parser.parse_args()
+
+modes = sum(1 for mode in (options.import_, options.export, options.upgrade)
+ if mode is not None)
+if modes > 1:
+ parser.error('Only one mode can be specified.')
+elif modes == 0:
+ parser.error('You must specify a mode.')
+
+if options.verbose:
+ logging.basicConfig(level=logging.DEBUG)
+else:
+ logging.basicConfig(level=logging.ERROR)
+
+for module in getModule('ibid.plugins').iterModules():
+ try:
+ __import__(module.name)
+ except Exception, e:
+ print >> stderr, u"Couldn't load %s plugin: %s" % (
+ module.name.replace('ibid.plugins.', ''), unicode(e))
+
+ibid.options = {'base': '.'}
+ibid.config = FileConfig('ibid.ini')
+ibid.config.merge(FileConfig('local.ini'))
+
+def dump_table(table, db):
+ """Return a list of dicts of the data in table.
+ table is a SQLAlchemy table
+ db is a SQLAlchemy session
+ """
+ sql = select([table])
+ rows = []
+ for row in db.execute(sql):
+ out = {}
+ for key in row.iterkeys():
+ value = row[key]
+ if isinstance(value, datetime):
+ value = value.strftime('%Y-%m-%dT%H:%M:%SZ')
+ out[key] = value
+ rows.append(out)
+ return rows
+
+if options.upgrade is not None:
+ db = DatabaseManager(check_schema_versions=False)['ibid']
+ if not db.bind.has_table('schema'):
+ print >> stderr, ("Database doesn't appear to contain an Ibid. "
+ 'Run ibid-setup.')
+ exit(1)
+ upgrade_schemas(db)
+
+elif options.export is not None:
+ if options.export == '-':
+ output = stdout
+ elif exists(options.export):
+ print >> stderr, (
+ 'Output file (%s) exists, refusing to clobber it' %
+ options.export)
+ exit(1)
+ elif options.export.endswith('.gz'):
+ output = gzip.open(options.export, 'wb')
+ else:
+ output = open(options.export, 'wb')
+
+ output.write(('# Ibid %(version)s Database dump for %(botname)s '
+ 'made on %(date)s\n') % {
+ 'version': ibid_version() or 'bzr',
+ 'botname': ibid.config['botname'],
+ 'date': datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'),
+ })
+
+ db = DatabaseManager()['ibid']
+ dump = dict((table.name, dump_table(table, db))
+ for table in metadata.tables.itervalues())
+
+ output.write(json.dumps(dump, sort_keys=True, indent=4))
+ output.close()
+
+elif options.import_ is not None:
+ db = DatabaseManager(check_schema_versions=False)['ibid']
+ if db.bind.has_table('schema'):
+ print >> stderr, ('Database looks like it already contains an Ibid. '
+ 'Refusing to clobber it.')
+ exit(1)
+
+ upgrade_schemas(db)
+
+ if options.import_ == '-':
+ input = stdin
+ elif options.import_.endswith('.gz'):
+ input = gzip.open(options.import_, 'rb')
+ else:
+ input = open(options.import_, 'rb')
+
+ version = input.readline().strip()
+ version = re.match(r'^# Ibid (.+) Database dump for', version).group(1)
+ if version != (ibid_version() or 'bzr'):
+ print >> stderr, (
+ 'Ibid version differs between dump (%(dump)s) and '
+ 'local install (%(local)s). Aborting.') % {
+ 'dump': version,
+ 'local': ibid_version() or 'bzr',
+ }
+ exit(1)
+
+ dump = json.loads(input.read())
+
+ dump_schema = dict((table['table'], table['version'])
+ for table in dump['schema'])
+ db_schema = dict((table['table'], table['version'])
+ for table in dump_table(metadata.tables['schema'], db))
+ if dump_schema != db_schema:
+ print >> stderr, (
+ "Dump schema doesn't match DB Schema. "
+ 'You must use the same version of ibid that you dumped with. '
+ 'Aborting.')
+ exit(1)
+
+ loaded = set()
+
+ def load_table(table, data):
+ """Load data into table.
+ table is a table name
+ data is a list of dicts of data
+ """
+ if table_name == 'schema':
+ return
+
+ dbtable = metadata.tables[table]
+ for fk in dbtable.foreign_keys:
+ dependancy = fk.target_fullname.split('.')[0]
+ if dependancy not in loaded:
+ load_table(dependancy, dump[dependancy])
+
+ maxid = 0
+ for row in data:
+ if row['id'] > maxid:
+ maxid = row['id']
+ for field in row.iterkeys():
+ if isinstance(dbtable.c[field].type, DateTime):
+ row[field] = datetime.strptime(row[field],
+ '%Y-%m-%dT%H:%M:%SZ')
+ elif isinstance(dbtable.c[field].type, (Unicode, UnicodeText)):
+ row[field] = unicode(row[field])
+ sql = dbtable.insert().values(**row)
+ db.execute(sql)
+ if maxid != 0 and db.bind.engine.name == 'postgres':
+ db.execute("SELECT setval('%s_id_seq', %i);" % (table, maxid + 1))
+ db.commit()
+ loaded.add(table)
+
+ for table_name, table in dump.iteritems():
+ if table_name not in loaded:
+ load_table(table_name, table)
+
+ db.close()

=== modified file 'setup.py'
--- setup.py 2009-08-29 23:05:02 +0000
+++ setup.py 2009-10-17 19:58:09 +0000
@@ -46,7 +46,17 @@
entry_points={
'trac.plugins': ['tracibid = tracibid.notifier'],
},
- scripts=['scripts/ibid', 'scripts/ibid-setup', 'scripts/ibid-factpack', 'scripts/ibid_pb', 'scripts/ibid_import', 'scripts/ibid.tac', 'scripts/ibid-plugin'],
+ scripts=[
+ 'scripts/ibid',
+ 'scripts/ibid-db',
+ 'scripts/ibid-plugin',
+ 'scripts/ibid-setup',
+ 'scripts/ibid-factpack',
+ 'scripts/ibid_pb',
+ 'scripts/ibid_import',
+ 'scripts/ibid.tac',
+ ],
+
include_package_data=True,
zip_safe=False,
)
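The dump format in the diff stores datetimes as fixed-format strings inside JSON. A self-contained round-trip of that convention (the format string matches dump_table/load_table above; the row itself is made up):

```python
import json
from datetime import datetime

FORMAT = '%Y-%m-%dT%H:%M:%SZ'  # as used by dump_table/load_table

# Dump: datetime -> string, then the row -> JSON.
row = {'id': 1, 'created': datetime(2009, 10, 17, 19, 58, 9)}
dumped = json.dumps({'id': row['id'],
                     'created': row['created'].strftime(FORMAT)})

# Import: JSON -> row, then string -> datetime.
loaded = json.loads(dumped)
restored = datetime.strptime(loaded['created'], FORMAT)
```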