Merge lp:~ashishsharma/boots/curses-devel into lp:boots

Proposed by Ashish Sharma on 2011-03-26
Status: Needs review
Proposed branch: lp:~ashishsharma/boots/curses-devel
Merge into: lp:boots
Diff against target: 8670 lines (+8462/-1)
32 files modified
boots/app/client_config.py (+5/-0)
boots/lib/console.py (+6/-1)
boots/lib/ui/cui/NEWS (+18/-0)
boots/lib/ui/cui/completer/dynamic_loader.py (+108/-0)
boots/lib/ui/cui/completer/key_cache.py (+121/-0)
boots/lib/ui/cui/completer/static_loader.py (+168/-0)
boots/lib/ui/cui/cui_buffer.py (+200/-0)
boots/lib/ui/cui/cui_colors.py (+219/-0)
boots/lib/ui/cui/cui_display.py (+215/-0)
boots/lib/ui/cui/cui_keyboard.py (+452/-0)
boots/lib/ui/cui/cui_main.py (+107/-0)
boots/lib/ui/cui/cui_utils.py (+58/-0)
boots/lib/ui/cui/cui_window.py (+252/-0)
boots/lib/ui/cui/lexers/ply/__init__.py (+4/-0)
boots/lib/ui/cui/lexers/ply/cpp.py (+898/-0)
boots/lib/ui/cui/lexers/ply/ctokens.py (+133/-0)
boots/lib/ui/cui/lexers/ply/lex.py (+1058/-0)
boots/lib/ui/cui/lexers/ply/yacc.py (+3276/-0)
boots/lib/ui/cui/lexers/sql_lexer.py (+98/-0)
boots/lib/ui/curses.py (+278/-0)
data/cfg/colors.cfg (+112/-0)
doc/gsoc-docs/Makefile (+89/-0)
doc/gsoc-docs/code_docs.rst (+19/-0)
doc/gsoc-docs/code_docs/mod_completer.rst (+33/-0)
doc/gsoc-docs/code_docs/mod_cursesui.rst (+65/-0)
doc/gsoc-docs/code_docs/mod_lexers.rst (+19/-0)
doc/gsoc-docs/conf.py (+194/-0)
doc/gsoc-docs/gsoctemplates/layout.html (+21/-0)
doc/gsoc-docs/index.rst (+38/-0)
doc/gsoc-docs/make.bat (+113/-0)
doc/gsoc-docs/personal_info.rst (+35/-0)
doc/gsoc-docs/project_overview.rst (+50/-0)
To merge this branch: bzr merge lp:~ashishsharma/boots/curses-devel
Reviewer: Boots Developers
Date requested: 2011-03-26
Status: Pending
Review via email: mp+54952@code.launchpad.net

Description of the change

GSoC 2010 work.

All related blueprints have been implemented (some minimally). The list is
available on the branch's page.

These changes can be safely merged into boots, as they are activated only when
the --interface option is used.
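The switch itself is a simple dispatch on the new config value in console.py. A minimal sketch of that selection logic (with stub classes standing in for boots' real PlainUI and CursesUI, which are not reproduced here):

```python
# Sketch of the --interface dispatch added to console.py.
# PlainUI/CursesUI are stand-in stubs, not the real boots classes.

class PlainUI(object):
    name = "default"

class CursesUI(object):
    name = "curses"

def make_ui(config):
    """Return the UI selected by the 'uinterface' config value.

    Any value other than 'curses' falls back to PlainUI, so the new
    code path is only active when explicitly requested.
    """
    if config.get('uinterface') == 'curses':
        return CursesUI()
    return PlainUI()

print(make_ui({'uinterface': 'curses'}).name)   # curses
print(make_ui({'uinterface': 'other'}).name)    # default
```

This mirrors why the merge is safe: an absent or unrecognized value leaves the existing PlainUI behaviour untouched.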

Details:
1. Curses user interface. The interface is implemented entirely in curses. One
   difficulty remains: it does not support resizing of the terminal window and
   fails when the window is resized.
2. Syntax highlighting. A framework for highlighting has been created. Basic
   highlighting for SQL is supported, and it can be extended to other
   languages.
3. Parsing. Limited work has been done here: lexing is implemented for SQL and
   can be extended to other languages.
4. Tab completion. This has been implemented, but it is limited in the sense
   that it builds its key database (from which matches are found) at program
   startup; after that, no updates are made when objects on the server are
   modified.
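The tab-completion design above (key_cache.py in the diff) keeps candidates in an in-memory sqlite table and matches them by prefix. A condensed sketch of that approach, using a parameterized LIKE rather than string concatenation; class and method names follow the diff, the sample data is illustrative:

```python
import sqlite3

# Condensed sketch of the key_cache.py approach: keys are loaded once
# at startup into an in-memory sqlite table, then matched by prefix.

class KeysCache(object):
    def __init__(self):
        self.conn = sqlite3.connect(':memory:')
        self.cursor = self.conn.cursor()
        self.cursor.execute('CREATE TABLE keys '
                            '(id INTEGER PRIMARY KEY AUTOINCREMENT, '
                            'name VARCHAR(50), type CHAR(20))')

    def update(self, pairs):
        # Rebuild the cache from (name, category) pairs; in boots these
        # come from the static keyword list plus a one-shot server query.
        self.cursor.execute('DELETE FROM keys')
        self.cursor.executemany(
            'INSERT INTO keys (name, type) VALUES (?, ?)', pairs)

    def search(self, prefix):
        # Parameterized LIKE avoids quoting and injection problems.
        self.cursor.execute(
            'SELECT name FROM keys WHERE name LIKE ? ORDER BY name',
            (prefix + '%',))
        return [row[0] for row in self.cursor.fetchall()]

cache = KeysCache()
cache.update([('SELECT', 'KEYWORD'), ('SET', 'KEYWORD'),
              ('SHOW', 'KEYWORD')])
print(cache.search('SE'))  # ['SELECT', 'SET']
```

Since the cache is only filled at startup, a stale-cache refresh (re-running update() when server objects change) is the natural next step.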

Comments on what to do next are now welcome.


Unmerged revisions

186. By Ashish Sharma on 2011-03-26

Code refactoring.
New option --interface added to boots command line options.
See NEWS for complete description.

185. By Ashish Sharma on 2010-08-16

Last GSoC commit. No code changes, just additions to comments.

184. By Ashish Sharma on 2010-08-15

docs update

183. By Ashish Sharma on 2010-08-15

Worked on comments and refactoring.
Tested tab completion with a MySQLdb connection.
It only works when boots is used to connect to MySQL.
To disable it, comment out line 98 in ./boots/lib/ui/cui/key_cache.py

182. By Ashish Sharma on 2010-08-10

Documentation added and updated in the main devel branch.
It is also hosted at http://web.iiit.ac.in/~ashishsharma/gsoc/html/index.html

181. By Ashish Sharma on 2010-08-10

small changes to network updater

180. By Ashish Sharma on 2010-08-09

News Updates.
Tab key functions properly now.

179. By Ashish Sharma on 2010-08-09

Ctrl+w rectified.

178. By Ashish Sharma on 2010-08-09

Tab completion support added.

177. By Ashish Sharma on 2010-08-01

create doc folders.

Preview Diff

1=== modified file 'boots/app/client_config.py'
2--- boots/app/client_config.py 2010-04-06 04:56:01 +0000
3+++ boots/app/client_config.py 2011-03-26 12:15:52 +0000
4@@ -102,6 +102,7 @@
5 "port": 3306,
6 "username": os.getenv("USER"),
7 "password": None,
8+ "uinterface": "default",
9 "prompt1": "> ",
10 "prompt2": "+ ",
11 "pager": None,
12@@ -119,6 +120,10 @@
13 self._cli_parser = optparse.OptionParser(version = version_string,
14 conflict_handler="resolve");
15
16+ self._cli_parser.add_option('--interface',
17+ action = 'store',
18+ dest = 'uinterface',
19+ help = _('set user interface to be used for new session'))
20 self._cli_parser.add_option('--debug',
21 action = 'store_true',
22 dest = 'debug',
23
24=== modified file 'boots/lib/console.py'
25--- boots/lib/console.py 2010-04-06 04:56:01 +0000
26+++ boots/lib/console.py 2011-03-26 12:15:52 +0000
27@@ -32,6 +32,7 @@
28 from boots.api.errors import BootsError, BootsWarning, CausedException, ConnectionError
29 from boots.api.nodes.node import NodeGraph, SyncNode, IteratorNode, LinkStackNode
30 from boots.lib.ui.plain import PlainUI
31+from boots.lib.ui.curses import CursesUI
32 from boots.lib.ui.generic import ScriptDriver, StringDriver, StdoutPresenter
33 from boots.lib.ui.components.help import HelpTopic, QueryIndexTopic, CommandHelpTopic
34 from boots.lib.ui.components.metacommands import MetaCommandManager, MetaCommands, parse_metacommand
35@@ -102,7 +103,11 @@
36 self.help["lingos"][name] = lingo.help
37
38 # Set up UI components
39- self.ui = PlainUI(self)
40+ if self.config['uinterface'] == 'curses':
41+ self.ui = CursesUI(self)
42+ else:
43+ self.ui = PlainUI(self)
44+
45 self.presenter = self.ui
46 if self.config["script"]:
47 self.driver = ScriptDriver(self, self.config["script"], self.config["lingo"])
48
49=== added directory 'boots/lib/ui/cui'
50=== added file 'boots/lib/ui/cui/NEWS'
51--- boots/lib/ui/cui/NEWS 1970-01-01 00:00:00 +0000
52+++ boots/lib/ui/cui/NEWS 2011-03-26 12:15:52 +0000
53@@ -0,0 +1,18 @@
54+NEWS
55+====
56+
57+1. A very late commit.
58+2. New option --interface added.
59+ Use `--interface curses` to use the curses interface.
60+3. Code Refactoring relating to the debug option.
61+
62+Notes:
63+
64+Now, boots can be used with either of its user interfaces. The --interface
65+command line option can be used to do the switch.
66+
67+--interface <value>
68+
69+<value> takes default or curses as arguments. Any other string results in
70+loading the default interface.
71+
72
73=== added file 'boots/lib/ui/cui/__init__.py'
74=== added directory 'boots/lib/ui/cui/completer'
75=== added file 'boots/lib/ui/cui/completer/__init__.py'
76=== added file 'boots/lib/ui/cui/completer/dynamic_loader.py'
77--- boots/lib/ui/cui/completer/dynamic_loader.py 1970-01-01 00:00:00 +0000
78+++ boots/lib/ui/cui/completer/dynamic_loader.py 2011-03-26 12:15:52 +0000
79@@ -0,0 +1,108 @@
80+#!/usr/bin/env python
81+# -*- coding: iso-8859-1-unix -*-
82+# Boots Client Project
83+# www.launchpad.net/boots
84+#
85+# This file contributed to boots by Ashish Sharma.
86+# ##### BEGIN LICENSE BLOCK #####
87+#
88+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
89+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
90+#
91+# This program is free software: you can redistribute it and/or modify
92+# it under the terms of the GNU General Public License as published by
93+# the Free Software Foundation, either version 3 of the License, or
94+# (at your option) any later version.
95+#
96+# This program is distributed in the hope that it will be useful,
97+# but WITHOUT ANY WARRANTY; without even the implied warranty of
98+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
99+# GNU General Public License for more details.
100+#
101+# You should have received a copy of the GNU General Public License
102+# along with this program. If not, see <http://www.gnu.org/licenses/>.
103+#
104+# ##### END LICENSE BLOCK #####
105+
106+""" Template for a typical dynamic loader which loads potential tab completion
107+candidates from server."""
108+
109+import sys
110+import os
111+import re
112+import time
113+import curses
114+import signal
115+import readline
116+import termios
117+import fcntl
118+import struct
119+
120+MYSQLDB_AVL = False
121+try:
122+ import MySQLdb
123+ MYSQLDB_AVL = True
124+except ImportError, e:
125+ sys.stderr.write('Import Error: MySQLdb.\n')
126+
127+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
128+from boots.lib.ui.cui.cui_utils import *
129+
130+class DynamicUpdater():
131+ '''Defines updater for updating keys from a server
132+ '''
133+ def __init__(self):
134+ pass
135+
136+ def update(self, lingo):
137+ '''create and return an update of keys
138+ '''
139+ pass
140+
141+class DrizzleUpdater(DynamicUpdater):
142+ '''Defines custom updater for drizzle connections.
143+ '''
144+ def __init__(self):
145+ pass
146+
147+ def update(self, lingo, conn):
148+ '''Connect to MySQL or Drizzle server and receive update'''
149+ pass
150+
151+class MySQLUpdater(DynamicUpdater):
152+ '''Defines custom updater for MySQL connections.
153+ '''
154+ def __init__(self, dic):
155+ if not MYSQLDB_AVL:
156+ self.conn = None
157+ else:
158+ self.conn = MySQLdb.connect(host=dic['host'],
159+ port=int(dic['port']),
160+ user=dic['username'],
161+ passwd=dic['password'])
162+
163+ def update(self):
164+ '''Connect to MySQL or Drizzle server and receive update.
165+
166+ .. note::
167+ Use this update only once.'''
168+ if self.conn is None:
169+ return []
170+ cursor = self.conn.cursor()
171+ update_queries = { 'DATABASE' :'''SELECT DISTINCT TABLE_SCHEMA
172+ from information_schema.columns;''',
173+ 'TABLE': '''SELECT DISTINCT TABLE_NAME
174+ from information_schema.columns;''',
175+ 'COLUMN': '''SELECT DISTINCT COLUMN_NAME
176+ from information_schema.columns;''',}
177+ key_store = []
178+ for category in update_queries:
179+ res = cursor.execute(update_queries[category])
180+ keys = []
181+ for row in cursor.fetchall():
182+ key_store.append((row[0], category))
183+
184+
185+ cursor.close()
186+ self.conn.close()
187+ return key_store
188
189=== added file 'boots/lib/ui/cui/completer/key_cache.py'
190--- boots/lib/ui/cui/completer/key_cache.py 1970-01-01 00:00:00 +0000
191+++ boots/lib/ui/cui/completer/key_cache.py 2011-03-26 12:15:52 +0000
192@@ -0,0 +1,121 @@
193+#!/usr/bin/env python
194+# -*- coding: ascii -*-
195+# Boots Client Project
196+# www.launchpad.net/boots
197+#
198+# This file contributed to boots by Ashish Sharma.
199+# ##### BEGIN LICENSE BLOCK #####
200+#
201+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
202+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
203+#
204+# This program is free software: you can redistribute it and/or modify
205+# it under the terms of the GNU General Public License as published by
206+# the Free Software Foundation, either version 3 of the License, or
207+# (at your option) any later version.
208+#
209+# This program is distributed in the hope that it will be useful,
210+# but WITHOUT ANY WARRANTY; without even the implied warranty of
211+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
212+# GNU General Public License for more details.
213+#
214+# You should have received a copy of the GNU General Public License
215+# along with this program. If not, see <http://www.gnu.org/licenses/>.
216+#
217+# ##### END LICENSE BLOCK #####
218+
219+""" Defines classes to manage history of operations and also current input
220+being edited."""
221+
222+import sys
223+import os
224+import re
225+import time
226+import curses
227+import signal
228+import readline
229+import termios
230+import fcntl
231+import struct
232+import sqlite3
233+
234+from static_loader import StaticUpdater
235+from dynamic_loader import MySQLUpdater
236+
237+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
238+from boots.lib.ui.cui.cui_utils import *
239+
240+class KeysCache():
241+ '''
242+ Cache for the tab completion identifiers.
243+ '''
244+
245+ def __init__(self):
246+ '''
247+ Create a new cache object
248+ '''
249+ self.__create_db()
250+ self.static_updater = StaticUpdater()
251+
252+ def __create_db(self):
253+ '''create sqlite in memory db for efficient storage'''
254+ self.conn = sqlite3.connect(':memory:')
255+ self.cursor = self.conn.cursor()
256+ self.types = ['KEYWORD','DATABASE', 'TABLE', 'INDEX', 'COLUMN', 'ALIAS',
257+ 'VIEW', 'STORED PROCEDURE', 'PARTITION', 'TABLESPACE']
258+ self.cursor.execute('CREATE TABLE KEYS \
259+ (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME VARCHAR(50), \
260+ TYPE CHAR(20))')
261+
262+ def __destroy_db(self):
263+ '''destroy sqlite in memory db'''
264+ self.cursor.execute('DELETE FROM KEYS')
265+ self.cursor.close()
266+
267+ def update(self, lingo, config):
268+ '''update the key cache by reloading from static_updater and
269+ dynamic_updater. A lingo change could also be brought in.
270+
271+ | @param lingo - lingo for which to update the cache
272+ | @param config - contains connection information
273+
274+ The network updater portion is currently under testing.
275+ I have tested the framework with a MySQLdb connection.
276+ More support will be added from boots query framework.'''
277+ #Cleanup previous entries if any
278+ self.cursor.execute('DELETE FROM KEYS')
279+
280+ #static update
281+ keys = self.static_updater.update(lingo)
282+
283+ #test network update for mysql direct.
284+ #modify it to first detect server type and then connect
285+ #or use boots query framework to query data.
286+ if config :
287+ network_updater = MySQLUpdater({'host': config['host'],
288+ 'port' : config['port'],
289+ 'username': config['username'],
290+ 'password': config['password']})
291+ keys.extend(network_updater.update())
292+
293+ for pair in keys:
294+ self.cursor.execute('INSERT INTO KEYS (NAME, TYPE) VALUES (?,?)',pair)
295+
296+ def search(self, key):
297+ '''search existing KEYS for matches and return the matches.
298+
299+ @param key - key to be searched in the cache'''
300+ self.cursor.execute('SELECT NAME, TYPE FROM KEYS \
301+ WHERE NAME LIKE ? \
302+ ORDER BY NAME', (key + '%',))
303+ matches = self.cursor.fetchall()
304+ if len(matches) == 0:
305+ #NO MATCH
306+ return []
307+ else:
308+ return [ m[0] for m in matches]
309+
310+ def destroy(self):
311+ '''destroy in-memory db'''
312+ self.__destroy_db()
313+
314
315=== added file 'boots/lib/ui/cui/completer/static_loader.py'
316--- boots/lib/ui/cui/completer/static_loader.py 1970-01-01 00:00:00 +0000
317+++ boots/lib/ui/cui/completer/static_loader.py 2011-03-26 12:15:52 +0000
318@@ -0,0 +1,168 @@
319+#!/usr/bin/env python
320+# -*- coding: ascii -*-
321+# Boots Client Project
322+# www.launchpad.net/boots
323+#
324+# This file contributed to boots by Ashish Sharma.
325+# ##### BEGIN LICENSE BLOCK #####
326+#
327+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
328+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
329+#
330+# This program is free software: you can redistribute it and/or modify
331+# it under the terms of the GNU General Public License as published by
332+# the Free Software Foundation, either version 3 of the License, or
333+# (at your option) any later version.
334+#
335+# This program is distributed in the hope that it will be useful,
336+# but WITHOUT ANY WARRANTY; without even the implied warranty of
337+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
338+# GNU General Public License for more details.
339+#
340+# You should have received a copy of the GNU General Public License
341+# along with this program. If not, see <http://www.gnu.org/licenses/>.
342+#
343+# ##### END LICENSE BLOCK #####
344+
345+""" Defines a static loader to load static keywords for different supported
346+lingos."""
347+
348+import sys
349+import os
350+import re
351+import time
352+import curses
353+import signal
354+import readline
355+import termios
356+import fcntl
357+import struct
358+
359+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
360+from boots.lib.ui.cui.cui_utils import *
361+
362+class StaticUpdater():
363+ '''defines static keywords for a particular lingo
364+ '''
365+ sql_keywords=[ 'ABORT', 'ABS', 'ABSOLUTE', 'ACCESS', 'ADA', 'ADD', 'ADMIN',
366+ 'AFTER', 'AGGREGATE', 'ALIAS', 'ALL', 'ALLOCATE', 'ALTER',
367+ 'ANALYSE', 'ANALYZE', 'AND', 'ANY', 'ARE', 'AS', 'ASC',
368+ 'ASENSITIVE', 'ASSERTION', 'ASSIGNMENT', 'ASYMMETRIC', 'AT',
369+ 'ATOMIC', 'AUTHORIZATION', 'AVG', 'BACKWARD', 'BEFORE',
370+ 'BEGIN', 'BETWEEN', 'BITVAR', 'BIT_LENGTH', 'BOTH', 'BREADTH',
371+ 'BY', 'C', 'CACHE', 'CALL', 'CALLED', 'CARDINALITY', 'CASCADE',
372+ 'CASCADED', 'CASE', 'CAST', 'CATALOG', 'CATALOG_NAME', 'CHAIN',
373+ 'CHARACTERISTICS', 'CHARACTER_LENGTH', 'CHARACTER_SET_CATALOG',
374+ 'CHARACTER_SET_NAME', 'CHARACTER_SET_SCHEMA', 'CHAR_LENGTH',
375+ 'CHECK', 'CHECKED', 'CHECKPOINT', 'CLASS', 'CLASS_ORIGIN',
376+ 'CLOB', 'CLOSE', 'CLUSTER', 'COALSECE', 'COBOL', 'COLLATE',
377+ 'COLLATION', 'COLLATION_CATALOG', 'COLLATION_NAME',
378+ 'COLLATION_SCHEMA', 'COLUMN', 'COLUMN_NAME', 'COMMAND_FUNCTION',
379+ 'COMMAND_FUNCTION_CODE', 'COMMENT', 'COMMIT', 'COMMITTED',
380+ 'COMPLETION', 'CONDITION_NUMBER', 'CONNECT', 'CONNECTION',
381+ 'CONNECTION_NAME', 'CONSTRAINT', 'CONSTRAINTS',
382+ 'CONSTRAINT_CATALOG', 'CONSTRAINT_NAME', 'CONSTRAINT_SCHEMA',
383+ 'CONSTRUCTOR', 'CONTAINS', 'CONTINUE', 'CONVERSION', 'CONVERT',
384+ 'COPY', 'CORRESPONTING', 'COUNT', 'CREATE', 'CREATEDB',
385+ 'CREATEUSER', 'CROSS', 'CUBE', 'CURRENT', 'CURRENT_DATE',
386+ 'CURRENT_PATH', 'CURRENT_ROLE', 'CURRENT_TIME',
387+ 'CURRENT_TIMESTAMP', 'CURRENT_USER', 'CURSOR', 'CURSOR_NAME',
388+ 'CYCLE', 'DATA', 'DATABASE', 'DATETIME_INTERVAL_CODE',
389+ 'DATETIME_INTERVAL_PRECISION', 'DAY', 'DEALLOCATE', 'DECLARE',
390+ 'DEFAULT', 'DEFAULTS', 'DEFERRABLE', 'DEFERRED', 'DEFINED',
391+ 'DEFINER', 'DELETE', 'DELIMITER', 'DELIMITERS', 'DEREF',
392+ 'DESC', 'DESCRIBE', 'DESCRIPTOR', 'DESTROY', 'DESTRUCTOR',
393+ 'DETERMINISTIC', 'DIAGNOSTICS', 'DICTIONARY', 'DISCONNECT',
394+ 'DISPATCH', 'DISTINCT', 'DO', 'DOMAIN', 'DROP', 'DYNAMIC',
395+ 'DYNAMIC_FUNCTION', 'DYNAMIC_FUNCTION_CODE', 'EACH', 'ELSE',
396+ 'ENCODING', 'ENCRYPTED', 'END', 'END-EXEC', 'EQUALS', 'ESCAPE',
397+ 'EVERY', 'EXCEPT', 'ESCEPTION', 'EXCLUDING', 'EXCLUSIVE',
398+ 'EXEC', 'EXECUTE', 'EXISTING', 'EXISTS', 'EXPLAIN', 'EXTERNAL',
399+ 'EXTRACT', 'FALSE', 'FETCH', 'FINAL', 'FIRST', 'FOR', 'FORCE',
400+ 'FOREIGN', 'FORTRAN', 'FORWARD', 'FOUND', 'FREE', 'FREEZE',
401+ 'FROM', 'FULL', 'FUNCTION', 'G', 'GENERAL', 'GENERATED', 'GET',
402+ 'GLOBAL', 'GO', 'GOTO', 'GRANT', 'GRANTED', 'GROUP', 'GROUPING',
403+ 'HANDLER', 'HAVING', 'HIERARCHY', 'HOLD', 'HOST', 'IDENTITY',
404+ 'IGNORE', 'ILIKE', 'IMMEDIATE', 'IMMUTABLE', 'IMPLEMENTATION',
405+ 'IMPLICIT', 'IN', 'INCLUDING', 'INCREMENT', 'INDEX',
406+ 'INDITCATOR', 'INFIX', 'INHERITS', 'INITIALIZE', 'INITIALLY',
407+ 'INNER', 'INOUT', 'INPUT', 'INSENSITIVE', 'INSERT',
408+ 'INSTANTIABLE', 'INSTEAD', 'INTERSECT', 'INTO', 'INVOKER',
409+ 'IS', 'ISNULL', 'ISOLATION', 'ITERATE', 'JOIN', 'KEY',
410+ 'KEY_MEMBER', 'KEY_TYPE', 'LANCOMPILER', 'LANGUAGE', 'LARGE',
411+ 'LAST', 'LATERAL', 'LEADING', 'LEFT', 'LENGTH', 'LESS',
412+ 'LEVEL', 'LIKE', 'LIMIT', 'LISTEN', 'LOAD', 'LOCAL',
413+ 'LOCALTIME', 'LOCALTIMESTAMP', 'LOCATION', 'LOCATOR', 'LOCK',
414+ 'LOWER', 'MAP', 'MATCH', 'MAX', 'MAXVALUE', 'MESSAGE_LENGTH',
415+ 'MESSAGE_OCTET_LENGTH', 'MESSAGE_TEXT', 'METHOD', 'MIN',
416+ 'MINUTE', 'MINVALUE', 'MOD', 'MODE', 'MODIFIES', 'MODIFY',
417+ 'MONTH', 'MORE', 'MOVE', 'MUMPS', 'NAMES', 'NATIONAL',
418+ 'NATURAL', 'NCHAR', 'NCLOB', 'NEW', 'NEXT', 'NO', 'NOCREATEDB',
419+ 'NOCREATEUSER', 'NONE', 'NOT', 'NOTHING', 'NOTIFY', 'NOTNULL',
420+ 'NULL', 'NULLABLE', 'NULLIF', 'OBJECT', 'OCTET_LENGTH', 'OF',
421+ 'OFF', 'OFFSET', 'OIDS', 'OLD', 'ON', 'ONLY', 'OPEN',
422+ 'OPERATION', 'OPERATOR', 'OPTION', 'OPTIONS', 'OR', 'ORDER',
423+ 'ORDINALITY', 'OUT', 'OUTER', 'OUTPUT', 'OVERLAPS', 'OVERLAY',
424+ 'OVERRIDING', 'OWNER', 'PAD', 'PARAMETER', 'PARAMETERS',
425+ 'PARAMETER_MODE', 'PARAMATER_NAME',
426+ 'PARAMATER_ORDINAL_POSITION', 'PARAMETER_SPECIFIC_CATALOG',
427+ 'PARAMETER_SPECIFIC_NAME', 'PARAMATER_SPECIFIC_SCHEMA',
428+ 'PARTIAL', 'PASCAL', 'PENDANT', 'PLACING', 'PLI', 'POSITION',
429+ 'POSTFIX', 'PRECISION', 'PREFIX', 'PREORDER', 'PREPARE',
430+ 'PRESERVE', 'PRIMARY', 'PRIOR', 'PRIVILEGES', 'PROCEDURAL',
431+ 'PROCEDURE', 'PUBLIC', 'READ', 'READS', 'RECHECK',
432+ 'RECURSIVE', 'REF', 'REFERENCES', 'REFERENCING', 'REINDEX',
433+ 'RELATIVE', 'RENAME', 'REPEATABLE', 'REPLACE', 'RESET',
434+ 'RESTART', 'RESTRICT', 'RESULT', 'RETURN', 'RETURNED_LENGTH',
435+ 'RETURNED_OCTET_LENGTH', 'RETURNED_SQLSTATE', 'RETURNS',
436+ 'REVOKE', 'RIGHT', 'ROLE', 'ROLLBACK', 'ROLLUP', 'ROUTINE',
437+ 'ROUTINE_CATALOG', 'ROUTINE_NAME', 'ROUTINE_SCHEMA', 'ROW',
438+ 'ROWS', 'ROW_COUNT', 'RULE', 'SAVE_POINT', 'SCALE', 'SCHEMA',
439+ 'SCHEMA_NAME', 'SCOPE', 'SCROLL', 'SEARCH', 'SECOND',
440+ 'SECURITY', 'SELECT', 'SELF', 'SENSITIVE', 'SERIALIZABLE',
441+ 'SERVER_NAME', 'SESSION', 'SESSION_USER', 'SET', 'SETOF',
442+ 'SETS', 'SHARE', 'SHOW', 'SIMILAR', 'SIMPLE', 'SIZE', 'SOME',
443+ 'SOURCE', 'SPACE', 'SPECIFIC', 'SPECIFICTYPE', 'SPECIFIC_NAME',
444+ 'SQL', 'SQLCODE', 'SQLERROR', 'SQLEXCEPTION', 'SQLSTATE',
445+ 'SQLWARNINIG', 'STABLE', 'START', 'STATE', 'STATEMENT',
446+ 'STATIC', 'STATISTICS', 'STDIN', 'STDOUT', 'STORAGE',
447+ 'STRICT', 'STRUCTURE', 'STYPE', 'SUBCLASS_ORIGIN', 'SUBLIST',
448+ 'SUBSTRING', 'SUM', 'SYMMETRIC', 'SYSID', 'SYSTEM',
449+ 'SYSTEM_USER', 'TABLE', 'TABLE_NAME', 'TEMP', 'TEMPLATE',
450+ 'TEMPORARY', 'TERMINATE', 'THAN', 'THEN', 'TIMESTAMP',
451+ 'TIMEZONE_HOUR', 'TIMEZONE_MINUTE', 'TO', 'TOAST', 'TRAILING',
452+ 'TRANSATION', 'TRANSACTIONS_COMMITTED',
453+ 'TRANSACTIONS_ROLLED_BACK', 'TRANSATION_ACTIVE', 'TRANSFORM',
454+ 'TRANSFORMS', 'TRANSLATE', 'TRANSLATION', 'TREAT', 'TRIGGER',
455+ 'TRIGGER_CATALOG', 'TRIGGER_NAME', 'TRIGGER_SCHEMA', 'TRIM',
456+ 'TRUE', 'TRUNCATE', 'TRUSTED', 'TYPE', 'UNCOMMITTED', 'UNDER',
457+ 'UNENCRYPTED', 'UNION', 'UNIQUE', 'UNKNOWN', 'UNLISTEN',
458+ 'UNNAMED', 'UNNEST', 'UNTIL', 'UPDATE', 'UPPER', 'USAGE',
459+ 'USER', 'USER_DEFINED_TYPE_CATALOG', 'USER_DEFINED_TYPE_NAME',
460+ 'USER_DEFINED_TYPE_SCHEMA', 'USING', 'VACUUM', 'VALID',
461+ 'VALIDATOR', 'VALUES', 'VARIABLE', 'VERBOSE', 'VERSION',
462+ 'VIEW', 'VOLATILE', 'WHEN', 'WHENEVER', 'WHERE', 'WITH',
463+ 'WITHOUT', 'WORK', 'WRITE', 'YEAR', 'ZONE', 'ARRAY', 'BIGINT',
464+ 'BINARY', 'BIT', 'BLOB', 'BOOLEAN', 'CHAR', 'CHARACTER',
465+ 'DATE', 'DEC', 'DECIMAL', 'FLOAT', 'INT', 'INTEGER',
466+ 'INTERVAL', 'NUMBER', 'NUMERIC', 'REAL', 'SERIAL', 'SMALLINT',
467+ 'VARCHAR', 'VARYING', 'INT8', 'SERIAL8', 'TEXT']
468+
469+ def __init__(self):
470+ '''Set lingo and build up registry of available keywords
471+ '''
472+ self.keyword_registry = {'sql':self.sql_keywords,
473+ }
474+
475+ def update(self, lingo):
476+ '''create and return an update for static keywords of the given lingo.
477+ '''
478+ keywords = None
479+ if lingo in self.keyword_registry:
480+ keywords = self.keyword_registry[lingo]
481+ else :
482+ return []
483+
484+ if keywords is not None:
485+ return [ (key, 'KEYWORD') for key in keywords]
486+
487
488=== added file 'boots/lib/ui/cui/cui_buffer.py'
489--- boots/lib/ui/cui/cui_buffer.py 1970-01-01 00:00:00 +0000
490+++ boots/lib/ui/cui/cui_buffer.py 2011-03-26 12:15:52 +0000
491@@ -0,0 +1,200 @@
492+#!/usr/bin/env python
493+# -*- coding: iso-8859-1-unix -*-
494+
495+# Boots Client Project
496+# www.launchpad.net/boots
497+#
498+# This file contributed to boots by Ashish Sharma.
499+# ##### BEGIN LICENSE BLOCK #####
500+#
501+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
502+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
503+#
504+# This program is free software: you can redistribute it and/or modify
505+# it under the terms of the GNU General Public License as published by
506+# the Free Software Foundation, either version 3 of the License, or
507+# (at your option) any later version.
508+#
509+# This program is distributed in the hope that it will be useful,
510+# but WITHOUT ANY WARRANTY; without even the implied warranty of
511+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
512+# GNU General Public License for more details.
513+#
514+# You should have received a copy of the GNU General Public License
515+# along with this program. If not, see <http://www.gnu.org/licenses/>.
516+#
517+# ##### END LICENSE BLOCK #####
518+
519+'''Defines classes to manage the two buffers associated with curses ui. They are
520+* Buffer maintaining the terminal output (the history buffer)
521+* Buffer maintaining the presently edited command (the command buffer)
522+'''
523+
524+#import sys
525+#import os
526+import re
527+#import time
528+#import curses
529+#import signal
530+#import readline
531+#import termios
532+#import fcntl
533+#import struct
534+
535+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
536+from boots.lib.ui.cui.cui_utils import *
537+
538+
539+class Buffer():
540+ '''Buffer manager. Maintains buffers of above mentioned types.
541+ '''
542+
543+ def __init__(self, debug, width):
544+ '''
544+ |Initializes variables related to the buffer.
546+ |@param *width* - width of terminal.'''
547+ # @CODEX VAR<self.history> :
548+ # - buffer = store rows of terminal output.
549+ # - line_count = count for lines in buffer.[To be removed]
550+ self.history = {
551+ 'buffer': [],
552+ 'line_count': 0,
553+ }
554+ # @CODEX VAR<self.command> :
555+ # - buffer = buffer for input currently under editing.
556+ # - disp = displaced position for cursor in above buffer.
557+ # This variable takes values in [-len(command['buffer']), 0]:
558+ # 0 when the cursor is at END, -len(command['buffer'])
559+ # when the cursor is at HOME.
560+ # - prompt = prompt string for current input.
561+ self.command = {
562+ 'buffer': '',
563+ 'disp': 0,
564+ 'prompt': '',
565+ }
566+ self.width = width
567+ self.debug = debug
568+ # @CODEX VAR<self.cursor_pos> = dict like {'y':y_pos,'x':x_pos}.
569+ # Saves cursor_pos before beginning an operation like getting input,
570+ # printing output.
571+ self.cursor_pos = {}
572+
573+ def prepare_for_command(self, prompt, cursoryx):
574+ '''
575+ | Prepare(or reset) buffer vars before getting input from user.
576+ |
577+ | @param *prompt* - current prompt string.
578+ | @param *cursoryx* - cursor [y, x] position at function call.
579+ '''
580+ self.command['buffer'] = ''
581+ self.command['disp'] = 0
582+ self.command['prompt'] = prompt
583+ self.cursor_pos = {'y': cursoryx[0], 'x': cursoryx[1]}
584+
585+ def update_c(self, ftype, string):
586+ '''
587+ | Update the command['buffer'].
588+ |
589+ | @param *ftype* - takes values as
590+ * add to command['buffer'], or
591+ * remove on left.(Backspace), or
592+ * remove on right.(Delete).
593+ | @param *string* - string to update command['buffer']'s value with.
594+ '''
595+ buf = self.command['buffer']
596+ dis = self.command['disp']
597+
598+ def add(*args):
599+ '''For adding to buffer'''
600+ self.command['buffer'] = buf[:len(buf) + dis] + str(*args[0]) \
601+ + buf[len(buf) + dis:]
602+
603+ def delb(*args):
604+ '''For deleting backspace style'''
605+ if dis != - len(buf):
606+ self.command['buffer'] = buf[:len(buf) + dis - 1] \
607+ + buf[len(buf) + dis:]
608+
609+ def deld(*args):
610+ '''For deleting delete style'''
611+ if dis != 0:
612+ self.command['buffer'] = buf[:len(buf) + dis] \
613+ + buf[len(buf) + dis + 1:]
614+ self.command['disp'] += 1
615+
616+ actions = {
617+ 0: add,
618+ 1: delb,
619+ 2: deld,
620+ }
621+ actions.get(ftype)(string)
622+
623+ def extend_history(self, strinput, stype, prompt, highlighter):
624+ '''
625+ | Add new rows to the buffer which may be input from user or the
626+ | output of user commands.
627+ |
628+ | @param *strinput* - string to be added to history buffer.
629+ | @param *stype* - stype of strinput being added.
630+ | @param *prompt* - prompt string if any.
631+ | @param *highlighter* - Highlighter object.
632+ '''
633+ global LINETYPES
634+ #Break rows on \n
635+ rows = re.split('[\n]+', strinput)
636+ if stype != LINETYPES['inp']:
637+ # @CODEX last row is NULL for command outputs, so is deleted.
638+ # Tested for mysql, drizzle connections.
639+ del rows[-1]
640+
641+ count = 0
642+ for row in rows:
643+ # @CODEX
644+ # VAR<tokens> - list of tokens for current input row.
645+ # Each token is of format: (string, group_name)
646+ tokens = []
647+ rows_to_add = []
648+ single_row = []
649+ single_row_len = 0
650+ if stype == LINETYPES['inp']:
651+ # @CODEX For `input stype`, tokens is initialized with prompt
652+ # token, and extended by token list of rest of the input.
653+ tokens = [(prompt, 'group8')]
654+ tokens.extend(highlighter.tokensandcolors(row))
655+ else:
656+ tokens = [(row, 'group8')]
657+
658+ # @CODEX FOR loop. This loop creates rows out of a single input row
659+ # It splits a row that encompasses onto many terminal rows and then
660+ # stores them in rows_to_add.
661+ for token in tokens:
662+ single_row_len += len(token[0])
663+ if single_row_len <= self.width:
664+ single_row.append(token)
665+ else:
666+ single_row_len -= len(token[0])
667+ single_row_left_space = self.width - single_row_len
668+ part_token = token[0][:single_row_left_space]
669+ single_row.append((part_token, token[1]))
670+ rows_to_add.append(single_row)
671+ single_row = []
672+ single_row_len = 0
673+
674+ remaining_token = token[0][single_row_left_space:]
675+ while remaining_token:
676+ part_token = remaining_token[:self.width]
677+ if len(part_token) == self.width:
678+ single_row.append((part_token, token[1]))
679+ rows_to_add.append(single_row)
680+ single_row = []
681+ single_row_len = 0
682+ else:
683+ single_row.append((part_token, token[1]))
684+ single_row_len = len(part_token)
685+ remaining_token = remaining_token[self.width:]
686+ rows_to_add.append(single_row)
687+ count += len(rows_to_add)
688+ self.history['buffer'].extend(rows_to_add)
689+ self.history['line_count'] += len(rows_to_add)
690+ # @CODEX return number of rows added this time.
691+ return count
692
693=== added file 'boots/lib/ui/cui/cui_colors.py'
694--- boots/lib/ui/cui/cui_colors.py 1970-01-01 00:00:00 +0000
695+++ boots/lib/ui/cui/cui_colors.py 2011-03-26 12:15:52 +0000
696@@ -0,0 +1,219 @@
697+#!/usr/bin/env python
698+# -*- coding: iso-8859-1-unix -*-
699+
700+# Boots Client Project
701+# www.launchpad.net/boots
702+#
703+# This file contributed to boots by Ashish Sharma.
704+# ##### BEGIN LICENSE BLOCK #####
705+#
706+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
707+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
708+#
709+# This program is free software: you can redistribute it and/or modify
710+# it under the terms of the GNU General Public License as published by
711+# the Free Software Foundation, either version 3 of the License, or
712+# (at your option) any later version.
713+#
714+# This program is distributed in the hope that it will be useful,
715+# but WITHOUT ANY WARRANTY; without even the implied warranty of
716+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
717+# GNU General Public License for more details.
718+#
719+# You should have received a copy of the GNU General Public License
720+# along with this program. If not, see <http://www.gnu.org/licenses/>.
721+#
722+# ##### END LICENSE BLOCK #####
723+
724+'''
725+| Define class Colors and related classes.
726+| Also defines COLOR_CFG which locates the color configuration file on
727+ filesystem.
728+'''
729+
730+import sys
731+#import os
732+import re
733+import curses
734+
735+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
736+from boots.lib.ui.cui.cui_utils import *
737+
738+# Global variable for position of colors config file
739+COLOR_CFG = './data/cfg/colors.cfg'
740+
741+
742+class Colors():
743+ '''Defines colors to be used in UI.
744+ The number of colors is kept limited for now.
745+ Could be refined later to support more colors.'''
746+
747+ def __init__(self, debug):
748+ ''' Defines variables for Colors.'''
749+        # @CODEX VAR<COLOR_LIST> - list of available colors.
750+ self.COLOR_LIST = (
751+ curses.COLOR_BLACK,
752+ curses.COLOR_BLUE,
753+ curses.COLOR_CYAN,
754+ curses.COLOR_GREEN,
755+ curses.COLOR_MAGENTA,
756+ curses.COLOR_RED,
757+ curses.COLOR_WHITE,
758+ curses.COLOR_YELLOW, )
759+
760+ # @CODEX VAR<GRP_COLOR> - groups to color mapping.
761+        # The groups concept is taken from vim for now.
762+ # It is best described in colors.cfg.
763+ self.GRP_COLOR = {
764+ 'group1': curses.COLOR_BLUE ,
765+ 'group2': curses.COLOR_RED ,
766+ 'group3': curses.COLOR_CYAN ,
767+ 'group4': curses.COLOR_YELLOW ,
768+ 'group5': curses.COLOR_MAGENTA ,
769+ 'group6': curses.COLOR_GREEN ,
770+ 'group7': curses.COLOR_MAGENTA ,
771+ 'group8': curses.COLOR_WHITE
772+ }
773+ self.debug = debug
774+
775+ def init_colors(self):
776+ '''Init curses color pairs in curses'''
777+ # @CODEX - Here the index i provided to a color pair is important.
778+ # It is used directly in cui_display. See cui_display for use.
779+ for i, color in enumerate(self.COLOR_LIST):
780+ if color != curses.COLOR_BLACK:
781+ curses.init_pair(i, color, curses.COLOR_BLACK)
782+
783+ def get_color(self, group):
784+ '''
785+ | Provide color index for input group.
786+ Calling init_colors before this function is required.
787+ |
788+ | @param *group* - name of group queried for its color index.'''
789+        # @CODEX Color indexes are used directly, so the ordering of
790+        # COLOR_LIST and its use in init_colors is important.
791+ index_color = self.GRP_COLOR[group]
792+ return curses.color_pair(self.COLOR_LIST.index(index_color))
793+
794+class ColorSchemeParser():
795+ '''
796+    | Provides color associations for a particular lingo
797+    by reading the config file.
798+    | It parses the colors configuration and provides token -> group
799+    associations for a particular lingo.'''
800+ #help = HelpTopic("ColorSchemes", _("Color Schemes"), _("Color Schemes documentation."),
801+ # description = _("Class provides color schemes to different lingos. It supports sql right now."))
802+
803+ def __init__(self, lingo='sql'):
804+        '''Initialize parser object
805+ @param *lingo* - language to parse colors.cfg for.'''
806+ self.lingo = lingo
807+
808+    def color_asso(self):
809+        '''Return color associations for self.lingo.
810+        (Renamed: the classmethod below shadowed get_color_asso.)'''
811+        return ColorSchemeParser.get_color_asso(self.lingo)
812+
813+ @classmethod
814+ def get_color_asso(cls, lingo):
815+ '''
816+    | Retrieve color scheme associations for the given lingo.
817+ Also, associations are formatted as token -> groupname.
818+ Details in colors.cfg.
819+ |
820+ | @param *lingo* - lingo for which to get the color scheme.
821+ '''
822+ if lingo is not None:
823+            # Keys are token types returned from the lexer and
824+            # values are the group names (mapped to colors elsewhere).
825+ import ConfigParser
826+ config = ConfigParser.ConfigParser()
827+ config.read(COLOR_CFG)
828+ associations = {}
829+ try:
830+                _token_grp_map = dict(config.items(lingo))
831+ associations = dict([(k.upper(), v) \
832+ for k,v in _token_grp_map.iteritems()])
833+ except ConfigParser.ParsingError as exp:
834+ # FIXME : Do a better exit or switch to plain UI.
835+                sys.exit('Error occurred while parsing color schemes file')
836+ except ConfigParser.NoSectionError as exp:
837+ associations = None
838+ return associations
839+
840+class CursesHighlighting():
841+ '''
842+ | Provides highlighting options.
843+    | Highlighting is enabled if the following requirements are met:
844+ 1. Lexer for currently set lingo exists and is in readable form
845+ in default directory.
846+ 2. Color scheme for currently set lingo exists in COLOR_CFG conf file.
847+ '''
848+
849+ def __init__(self, debug, lingo):
850+ '''
851+    | Initialize the highlighter object.
852+ |
853+ | @param *lingo* - language for which highlighter object is created'''
854+ self.debug = debug
855+ self.lingo = lingo
856+ self.highlight = False
857+ self.color_scheme = None
858+ self.lexer = None
859+ self.reset(self.lingo)
860+
861+ def set_scheme(self):
862+        '''Set the color scheme for the present lingo.'''
863+ self.color_scheme = ColorSchemeParser().get_color_asso(self.lingo)
864+
865+ def set_lexer(self):
866+        '''Find whether a lexer is present for the given lingo.
867+        If one is found, use it; otherwise self.lexer stays None.'''
868+ self.lexer = None
869+ module = None
870+ # Check if a lexer is available
871+ if self.lingo == 'sql':
872+ import lexers.sql_lexer as module
873+ else :
874+ # Lexer not available
875+ pass
876+ if module is not None:
877+            # @CODEX - A specific attribute of the module is accessed here.
878+            # When adding new lexers, keep in mind that the lexer class
879+            # must be named mylexer.
880+            # Also, for case-insensitive lingos the lexer is built to be
881+            # case-insensitive.
882+ self.lexer = module.mylexer()
883+ character_insensitives = ['sql', 'pipedsql']
884+ if self.lingo in character_insensitives:
885+ self.lexer.build(reflags=re.IGNORECASE)
886+ else:
887+ self.lexer.build()
888+
889+ def reset(self, lingo):
890+ '''
891+ | Resets highlighting settings.
892+ Changes lingo, parser, color scheme.
893+ |
894+ | @param *lingo* - new language to which we need to reset
895+ the highlighter'''
896+ self.lingo = lingo
897+ self.set_scheme()
898+ self.set_lexer()
899+ if self.lexer and self.color_scheme:
900+ self.highlight = True
901+ else:
902+ self.highlight = False
903+
904+ def tokensandcolors(self, line):
905+ '''
906+        | Lex the given line with a lexer and return a list of tuples of
907+ token values and color groups.
908+ |
909+ | @param *line* - string to be lexed'''
910+ if self.highlight:
911+ return [(str(token.value), self.color_scheme[token.type])
912+ for token in self.lexer.tokenz(line) if str(token.value) != '']
913+ else :
914+ return [(line, 'group8')]
915+
916
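Reviewer note: `ColorSchemeParser.get_color_asso` above boils down to reading one config section and upper-casing its keys to match lexer token types. A minimal standalone sketch (Python 3 `configparser` naming, whereas the branch targets Python 2's `ConfigParser`; the sample config text is a hypothetical excerpt in the shape of `data/cfg/colors.cfg`):

```python
import configparser  # "ConfigParser" in the Python 2 code above

# Hypothetical excerpt in the shape of data/cfg/colors.cfg
SAMPLE_CFG = """
[sql]
keyword = group1
string = group3
"""

def get_color_asso(cfg_text, lingo):
    """Return {TOKEN_TYPE: group_name} for one lingo, or None."""
    config = configparser.ConfigParser()
    config.read_string(cfg_text)
    try:
        # Keys are upper-cased to match the token types the lexer reports.
        return {k.upper(): v for k, v in config.items(lingo)}
    except configparser.NoSectionError:
        return None  # no scheme for this lingo -> highlighting stays disabled

print(get_color_asso(SAMPLE_CFG, 'sql'))  # {'KEYWORD': 'group1', 'STRING': 'group3'}
print(get_color_asso(SAMPLE_CFG, 'lua'))  # None
```

Returning `None` on a missing section is what lets `CursesHighlighting.reset` fall back to unhighlighted output instead of failing.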
917=== added file 'boots/lib/ui/cui/cui_display.py'
918--- boots/lib/ui/cui/cui_display.py 1970-01-01 00:00:00 +0000
919+++ boots/lib/ui/cui/cui_display.py 2011-03-26 12:15:52 +0000
920@@ -0,0 +1,215 @@
921+#!/usr/bin/env python
922+# -*- coding: iso-8859-1-unix -*-
923+
924+# Boots Client Project
925+# www.launchpad.net/boots
926+#
927+# This file contributed to boots by Ashish Sharma.
928+# ##### BEGIN LICENSE BLOCK #####
929+#
930+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
931+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
932+#
933+# This program is free software: you can redistribute it and/or modify
934+# it under the terms of the GNU General Public License as published by
935+# the Free Software Foundation, either version 3 of the License, or
936+# (at your option) any later version.
937+#
938+# This program is distributed in the hope that it will be useful,
939+# but WITHOUT ANY WARRANTY; without even the implied warranty of
940+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
941+# GNU General Public License for more details.
942+#
943+# You should have received a copy of the GNU General Public License
944+# along with this program. If not, see <http://www.gnu.org/licenses/>.
945+#
946+# ##### END LICENSE BLOCK #####
947+
948+''' Screen display manager. '''
949+#import sys
950+#import os
951+#import re
952+#import time
953+#import curses
954+#import signal
955+#import readline
956+#import termios, fcntl, struct
957+
958+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
959+from boots.lib.ui.cui.cui_utils import *
960+
961+class Displayer():
962+ '''Display contents to the screen'''
963+
964+ def __init__(self, debug, lingo, screen, colors, highlighter):
965+        '''Initializes the displayer object.
966+ @param *lingo* - Current lingo
967+ @param *screen* - Current screen to which to provide output
968+ @param *colors* - Colors object in window's use
969+        @param *highlighter* - Highlighter object that gives color information
970+ for tokens.
971+ '''
972+ self.debug = debug
973+ self.lingo = lingo
974+ self.screen = screen
975+ self.colors = colors
976+ self.highlighter = highlighter
977+
978+ def change_lingo(self, lingo):
979+ '''
980+ | Change lingo to new lingo
981+ |
982+ | @param *lingo* - new lingo for displayer'''
983+ self.lingo = lingo
984+ self.highlighter.reset(self.lingo)
985+
986+ def display_e(self, Buffer, stype, line, Vars):
987+ '''
988+ | Displays the line section that can be edited.
989+ |
990+ | @param *Buffer* - Buffer object
991+ | @param *stype* - type of line to be displayed
992+ | @param *line* - line or string to be displayed
993+ | @param *Vars* - Window associated variables'''
994+
995+ CURY = Buffer.cursor_pos['y']
996+ CURX = Buffer.cursor_pos['x']
997+ if Vars.p_indisplay is True :
998+ #Do it
999+ Vars.p_indisplay = False
1000+ return self.draw_page(Buffer,
1001+ [Buffer.history['line_count'] - CURY,
1002+ Buffer.history['line_count']],
1003+ Vars)
1004+
1005+        # @CODEX The block below tries to display the input line on the
1006+        # screen, starting at the cursor position supplied in args.
1007+        # The while loop restarts string rendering onto the screen.
1008+        # It handles the case where the text runs past the last line:
1009+        # the rows above are scrolled upwards and the string is
1010+        # repainted from the start.
1011+ restart = True
1012+ try:
1013+ tokens = []
1014+ if line is not None and line != '':
1015+ tokens = self.highlighter.tokensandcolors(line)
1016+ while restart:
1017+ restart = False
1018+ self.screen.move(CURY, CURX)
1019+ self.screen.clrtobot()
1020+ if Buffer.command['prompt'] is not None:
1021+ self.screen.addstr(Buffer.command['prompt'])
1022+
1023+ for token in tokens:
1024+ X = self.screen.getyx()[1] + len(token[0]) + 1
1025+ if self.screen.getmaxyx()[0]-1 <=self.screen.getyx()[0]\
1026+ + X / Buffer.width \
1027+ and X >= self.screen.getmaxyx()[1]:
1028+ self.screen.scroll()
1029+ CURY -= 1
1030+ restart = True
1031+ break
1032+ color = self.colors.get_color(token[1])
1033+ self.screen.addstr(token[0], color)
1034+ except Exception as e:
1035+ if self.debug:
1036+ mdebug('Exception in display_e')
1037+ mdebug(e)
1038+
1039+ # @CODEX After execution of above code the cursor needs to be
1040+ # repositioned according to its displacement in buffer.
1041+ temp_l = len(Buffer.command['prompt']) \
1042+ + len(line) + Buffer.command['disp']
1043+ self.screen.move(CURY + (temp_l)/Buffer.width,
1044+ CURX + ((temp_l)%Buffer.width) )
1045+ self.screen.refresh()
1046+ return {'y':CURY, 'x':CURX}
1047+
1048+ def display_ne(self, cursor_loc, stype, line, prompt=None):
1049+ '''
1050+ | Displays the line section that can not be edited.
1051+ |
1052+        | @param *cursor_loc* - dict giving initial location of cursor
1053+ | @param *stype* - type of line to be displayed
1054+ | @param *line* - line or string to be displayed
1055+ | @param *prompt* - prompt string to be displayed
1056+ | @param *Vars* - Window associated variables'''
1057+
1058+ global LINETYPES
1059+
1060+ self.screen.move(cursor_loc['y'], cursor_loc['x'])
1061+ if stype == LINETYPES['out'] and len(line) > 0:
1062+ try:
1063+ self.screen.addstr(line)
1064+ except Exception as e:
1065+ if self.debug:
1066+ mdebug('Exception in display_ne ' + str(e))
1067+ self.screen.refresh()
1068+
1069+ def reposition_cursor(self, Buffer):
1070+ '''
1071+ | Reposition cursor on screen
1072+ |
1073+ | @param Buffer - Buffer class instance'''
1074+ #temp_l calculates new position for cursor on screen.
1075+ temp_l = len(Buffer.command['prompt']) + len(Buffer.command['buffer'])\
1076+ + Buffer.command['disp']
1077+ self.screen.move(Buffer.cursor_pos['y'] \
1078+ + temp_l / Buffer.width,
1079+ Buffer.cursor_pos['x'] + (temp_l % Buffer.width) )
1080+ self.screen.refresh()
1081+
1082+ def draw_page(self, Buffer, cut, Vars):
1083+ '''
1084+ | Draws pages in cases of Page UP and Page DOWN
1085+ |
1086+ | @param *Buffer* - instance of Buffer class
1087+ | @param *cut* - a list of 2 numbers specifying the lines from history
1088+ buffer to be used in drawing the page.
1089+ | @param *Vars* - Window associated variables'''
1090+ global LINETYPES
1091+ self.screen.move(0, 0)
1092+ self.screen.clrtobot()
1093+
1094+ # @CODEX Each row from the buffer's cut is iterated upon.
1095+ # Each row contains tokens of the form (string, group_name). They are
1096+ # inserted on to screen at places pointed by (y,x).
1097+ y = 0
1098+ for token_row in Buffer.history['buffer'][cut[0]: cut[1]]:
1099+ x = 0
1100+ for token in token_row:
1101+ color = self.colors.get_color(token[1])
1102+ try:
1103+ self.screen.insstr( y, x, token[0], color)
1104+ except Exception as e:
1105+ if self.debug:
1106+ mdebug('Exception in draw_page\nInsertion of token gave \
1107+ error')
1108+ mdebug(e)
1109+ x += (len(token[0]))
1110+ try:
1111+ self.screen.move(y, x)
1112+ except Exception as e:
1113+ if self.debug:
1114+ mdebug('Exception in draw_page')
1115+ mdebug(e)
1116+ y += 1
1117+
1118+ # @CODEX In case the screen is not filled with rows from history buffer.
1119+ # Then it means there is space to display the command buffer. So add it
1120+ # to screen on appropriate position.
1121+ cut_size = cut[1] - cut[0]
1122+ if cut_size < self.screen.getmaxyx()[0] :
1123+ self.screen.addstr('\n')
1124+ if cut_size == 0:
1125+ self.screen.move(0, 0)
1126+ cursor = {'y': self.screen.getyx()[0], 'x': self.screen.getyx()[1]}
1127+ Buffer.cursor_pos = cursor
1128+ self.display_e(Buffer, LINETYPES['inp'], Buffer.command['buffer'],
1129+ Vars)
1130+ self.screen.refresh()
1131+ return cursor
1132+
1133+ self.screen.refresh()
1134+ return Buffer.cursor_pos
1135+
1136
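Reviewer note: the cursor arithmetic shared by `display_e` and `reposition_cursor` can be checked in isolation. A sketch under the same convention as the code above (`disp` is zero or negative, measuring displacement back from the end of the command buffer; `//` is used explicitly where the Python 2 code relies on integer `/`):

```python
def reposition(cursor, prompt_len, buffer_len, disp, width):
    """Map a displacement inside prompt+buffer to screen (y, x)."""
    # offset of the cursor within prompt+buffer, measured in cells
    offset = prompt_len + buffer_len + disp
    # wrap into (row, column) on a screen `width` columns wide
    return {'y': cursor['y'] + offset // width,
            'x': cursor['x'] + offset % width}

# prompt "sql> " (5 cells), a 10-char command, cursor at end (disp=0),
# input line starting at row 3, column 0, on an 8-column screen:
print(reposition({'y': 3, 'x': 0}, 5, 10, 0, 8))  # {'y': 4, 'x': 7}
```

Note this sketch assumes the input line starts at column 0; the real code adds the offset to whatever `Buffer.cursor_pos['x']` holds, so lines that wrap past a nonzero starting column would need the same clamping the screen itself provides.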
1137=== added file 'boots/lib/ui/cui/cui_keyboard.py'
1138--- boots/lib/ui/cui/cui_keyboard.py 1970-01-01 00:00:00 +0000
1139+++ boots/lib/ui/cui/cui_keyboard.py 2011-03-26 12:15:52 +0000
1140@@ -0,0 +1,452 @@
1141+#!/usr/bin/env python
1142+# -*- coding: iso-8859-1-unix -*-
1143+
1144+# Boots Client Project
1145+# www.launchpad.net/boots
1146+#
1147+# This file contributed to boots by Ashish Sharma.
1148+# ##### BEGIN LICENSE BLOCK #####
1149+#
1150+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
1151+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
1152+#
1153+# This program is free software: you can redistribute it and/or modify
1154+# it under the terms of the GNU General Public License as published by
1155+# the Free Software Foundation, either version 3 of the License, or
1156+# (at your option) any later version.
1157+#
1158+# This program is distributed in the hope that it will be useful,
1159+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1160+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1161+# GNU General Public License for more details.
1162+#
1163+# You should have received a copy of the GNU General Public License
1164+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1165+#
1166+# ##### END LICENSE BLOCK #####
1167+
1168+""" Keyboard and signals manager """
1169+
1170+import sys
1171+import os
1172+import re
1173+import time
1174+import curses
1175+import signal
1176+import readline
1177+import termios, fcntl, struct
1178+
1179+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
1180+from boots.lib.ui.cui.cui_utils import *
1181+
1182+class Keyboard():
1183+ """Provides methods related to keyboard events. """
1184+ def __init__(self, debug):
1185+ """Initialize."""
1186+ self.signal_handlers = self.get_signal_handlers()
1187+ self.key_handlers = self.get_key_handlers()
1188+ self.debug = debug
1189+
1190+
1191+ def get_signal_handlers(cls):
1192+        """Set up signal handlers.
1193+        The name get_... is not strictly correct, since this does setting
1194+        rather than getting; it is kept for symmetry with get_key_handlers."""
1195+ #signal.signal(signal.SIGWINCH, cls._handle_RESIZE)
1196+ pass
1197+
1198+
1199+ def get_control_signal_handlers(cls):
1200+ """Set up control signal handlers."""
1201+ return {
1202+ 'Ctrl_b': cls._handle_Ctrl_b,
1203+ 'Ctrl_f': cls._handle_Ctrl_f,
1204+ 'Ctrl_d': cls._handle_Ctrl_d,
1205+ # Following implementations require logging of editing commands.
1206+ #'Ctrl_x': cls._handle_Ctrl_x,
1207+ 'Ctrl_u': cls._handle_Ctrl_u,
1208+ 'Ctrl_a': cls._handle_Ctrl_a,
1209+ 'Ctrl_e': cls._handle_Ctrl_e,
1210+ 'Ctrl_l': cls._handle_Ctrl_l,
1211+ 'Ctrl_k': cls._handle_Ctrl_k,
1212+ 'Ctrl_w': cls._handle_Ctrl_w,
1213+ 'Ctrl_y': cls._handle_Ctrl_y,
1214+ 'Ctrl_i': cls._handle_TAB,
1215+ # Following implementations require search support in readline.
1216+ #'Ctrl_r': cls._handle_Ctrl_r,
1217+ #'Ctrl_s': cls._handle_Ctrl_s,
1218+ }
1219+
1220+
1221+ def get_key_handlers(cls):
1222+ """Set up key handlers."""
1223+ return {
1224+ 'KEY_LEFT': cls._handle_KEY_LEFT,
1225+ 'KEY_RIGHT': cls._handle_KEY_RIGHT,
1226+ 'KEY_UP': cls._handle_KEY_UP,
1227+ 'KEY_DOWN': cls._handle_KEY_DOWN,
1228+ 'KEY_HOME': cls._handle_KEY_HOME,
1229+ 'KEY_END': cls._handle_KEY_END,
1230+ 'KEY_BACKSPACE': cls._handle_KEY_BACKSPACE,
1231+ 'KEY_DC': cls._handle_KEY_DC,
1232+ 'KEY_PPAGE': cls._handle_KEY_PPAGE,
1233+ 'KEY_NPAGE': cls._handle_KEY_NPAGE,
1234+ }
1235+
1236+
1237+ def get_visible_handlers(cls):
1238+ return {
1239+ 'visible':cls._handle_visible,
1240+ }
1241+
1242+
1243+ def _handle_visible(cls, c, Buffer, Displayer, Vars):
1244+        """Handle for visible characters."""
1245+ Buffer.update_c(0, str(c))
1246+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1247+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1248+
1249+
1250+ def _handle_Ctrl_b(cls, Buffer, Displayer, Vars):
1251+        """Set up handle for Ctrl-b signal which
1252+        moves the cursor back one character."""
1253+ cls._handle_KEY_LEFT(Buffer, Displayer, Vars)
1254+
1255+
1256+ def _handle_Ctrl_f(cls, Buffer, Displayer, Vars):
1257+ """Set up handle for Ctrl-f signal which,
1258+ moves cursor forward one character."""
1259+ cls._handle_KEY_RIGHT(Buffer, Displayer, Vars)
1260+
1261+
1262+ def _handle_Ctrl_d(cls, Buffer, Displayer, Vars):
1263+ """Set up handle for Ctrl-d signal which,
1264+ functions as delete if buffer is not empty and
1265+ if buffer is empty then it is a quit signal."""
1266+ if len(Buffer.command['buffer']) == 0 :
1267+            Buffer.command['buffer'] = '\\quit'
1268+ Vars.return_inp = True
1269+ else :
1270+ cls._handle_KEY_DC(Buffer, Displayer, Vars)
1271+
1272+ #
1273+ #def _handle_Ctrl_x(cls, Buffer, Displayer, Vars):
1274+ # """Set up handle for Ctrl-x signal which,
1275+ # """
1276+ # #@TODO
1277+ # pass
1278+ #
1279+
1280+ def _handle_Ctrl_u(cls, Buffer, Displayer, Vars):
1281+ """Set up handle for Ctrl-u signal which,
1282+ deletes whole line."""
1283+ Buffer.command['buffer'] = ''
1284+ Buffer.command['disp'] = 0
1285+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1286+ LINETYPES['inp'], ' ' * (len(Buffer.command['buffer'])+1), Vars)
1287+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1288+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1289+
1290+
1291+ def _handle_Ctrl_a(cls, Buffer, Displayer, Vars):
1292+ """Set up handle for Ctrl-a signal which,
1293+ moves cursor to beginning."""
1294+ cls._handle_KEY_HOME(Buffer, Displayer, Vars)
1295+
1296+
1297+ def _handle_Ctrl_e(cls, Buffer, Displayer, Vars):
1298+ """Set up handle for Ctrl-e signal which,
1299+ moves cursor to end."""
1300+ cls._handle_KEY_END(Buffer, Displayer, Vars)
1301+
1302+
1303+ def _handle_Ctrl_l(cls, Buffer, Displayer, Vars):
1304+ """Set up handle for Ctrl-l signal which,
1305+ clears screen and positions cursor on top."""
1306+ Vars.prev_toprow = len(Buffer.history['buffer'])
1307+ page_cut = [len(Buffer.history['buffer']),len(Buffer.history['buffer'])]
1308+ Buffer.cursor_pos= Displayer.draw_page(Buffer, page_cut, Vars)
1309+
1310+
1311+ def _handle_Ctrl_k(cls, Buffer, Displayer, Vars):
1312+ """Set up handle for Ctrl-k signal which,
1313+ kills text till the end."""
1314+ Vars.kill_count += 1
1315+ KL_LMT = len(Buffer.command['buffer']) + Buffer.command['disp']
1316+ if Vars.kill_count >= 2:
1317+ Vars.kill_ring = Buffer.command['buffer'][KL_LMT:] + Vars.kill_ring
1318+ else:
1319+ Vars.kill_ring = Buffer.command['buffer'][KL_LMT:]
1320+
1321+ Buffer.command['buffer'] = Buffer.command['buffer'][:KL_LMT]
1322+ Buffer.command['disp'] = 0
1323+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1324+ LINETYPES['inp'], ' ' * (len(Buffer.command['buffer'])+1), Vars)
1325+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1326+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1327+
1328+
1329+ def _handle_Ctrl_w(cls, Buffer, Displayer, Vars):
1330+ """Set up handle for Ctrl-w signal which,
1331+ kills text till the last whitespace"""
1332+ Vars.kill_count += 1
1333+ regx = re.compile(r'(\s+\S+\s*)\Z', re.U)
1334+ disp = Buffer.command['disp']
1335+ words = regx.split(Buffer.command['buffer'][: disp])
1336+ kill_text = ''.join(words[-2:])
1337+ rem_buf = ''.join(words[:-2])
1338+ if kill_text:
1339+ if Vars.kill_count >= 2:
1340+ Vars.kill_ring = kill_text + Vars.kill_ring
1341+ else:
1342+ Vars.kill_ring = kill_text
1343+ Buffer.command['buffer'] = rem_buf + Buffer.command['buffer'][disp: ]
1344+ #Buffer.command['disp'] = 0
1345+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1346+ LINETYPES['inp'], ' ' * (len(Buffer.command['buffer'])+1), Vars)
1347+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1348+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1349+
1350+
1351+
1352+
1353+ def _handle_Ctrl_y(cls, Buffer, Displayer, Vars):
1354+ """Set up handle for Ctrl-y signal which,
1355+ yanks kill_buffer to screen."""
1356+ YK_LMT = Buffer.command['disp'] + len(Buffer.command['buffer'])
1357+ Buffer.command['buffer'] = \
1358+ Buffer.command['buffer'][: YK_LMT]\
1359+ + Vars.kill_ring \
1360+ + Buffer.command['buffer'][YK_LMT: ]
1361+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1362+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1363+
1364+ #
1365+ #def _handle_Ctrl_r(cls, Buffer, Displayer, Vars):
1366+ # """Set up handle for Ctrl-r signal which,
1367+ # """
1368+ # pass
1369+ #
1370+ #
1371+ #def _handle_Ctrl_s(cls, Buffer, Displayer, Vars):
1372+ # """Set up handle for Ctrl-s signal which,
1373+ # """
1374+ # pass
1375+
1376+
1377+ def _handle_KEY_LEFT(cls, Buffer, Displayer, Vars):
1378+ """SET up a left arrow key handler"""
1379+ if -(Buffer.command['disp'] - 1) < len(Buffer.command['buffer']):
1380+ Buffer.command['disp'] -= 1
1381+ else:
1382+ Buffer.command['disp'] = -len(Buffer.command['buffer'])
1383+ Displayer.reposition_cursor(Buffer)
1384+
1385+
1386+ def _handle_KEY_RIGHT(cls, Buffer, Displayer, Vars):
1387+ """SET up a right arrow key handler"""
1388+ if (Buffer.command['disp'] + 1) < 0 :
1389+ Buffer.command['disp'] += 1
1390+ else:
1391+ Buffer.command['disp'] = 0
1392+ Displayer.reposition_cursor(Buffer)
1393+
1394+
1395+ def _handle_KEY_UP(cls, Buffer, Displayer, Vars):
1396+        """Set up an up arrow key handler"""
1397+ global LINETYPES
1398+ if Vars.history_index == -1:
1399+ Vars.history_index = readline.get_current_history_length()
1400+ Buffer.command['buffer'] = readline.get_history_item(
1401+ Vars.history_index)
1402+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1403+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1404+ else :
1405+ if Vars.history_index - 1 != 0:
1406+ Vars.history_index -= 1
1407+ Buffer.command['buffer']= readline.get_history_item(
1408+ Vars.history_index)
1409+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1410+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1411+
1412+
1413+
1414+ def _handle_KEY_DOWN(cls, Buffer, Displayer, Vars):
1415+ """SET up a down arrow key handler"""
1416+ global LINETYPES
1417+ if Vars.history_index == -1:
1418+ pass
1419+ elif Vars.history_index == readline.get_current_history_length():
1420+ Vars.history_index = -1
1421+ Buffer.command['buffer'] = ''
1422+ Buffer.cursor_pos=Displayer.display_e(Buffer,
1423+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1424+ else :
1425+ Vars.history_index +=1
1426+ Buffer.command['buffer']= readline.get_history_item(
1427+ Vars.history_index)
1428+ Buffer.cursor_pos=Displayer.display_e(Buffer,
1429+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1430+
1431+
1432+ def _handle_KEY_PPAGE(cls, Buffer, Displayer, Vars):
1433+ """SET up a page up(previous page) key handler"""
1434+ TOPLINE=0
1435+ if not Vars.p_indisplay :
1436+ Y_row_count = Vars.current_cursor[0] \
1437+ - len(Buffer.command['buffer'] \
1438+ + Buffer.command['prompt']) / Buffer.width
1439+ TOPLINE = Buffer.history['line_count'] - Y_row_count
1440+ else:
1441+ if Vars.prev_toprow - Vars.maxyx[0] < 0 :
1442+ TOPLINE = Vars.prev_toprow
1443+ else:
1444+ TOPLINE = Vars.prev_toprow - Vars.maxyx[0]
1445+
1446+ page_cut = None
1447+ if TOPLINE >= 0 :
1448+ if TOPLINE - Vars.maxyx[0] < 0 :
1449+ top_lim = Vars.maxyx[0]
1450+ if top_lim > Buffer.history['line_count']:
1451+ top_lim = Buffer.history['line_count']
1452+ page_cut= (0, top_lim)
1453+ else:
1454+ page_cut= (TOPLINE - Vars.maxyx[0], TOPLINE)
1455+ Vars.p_indisplay = True
1456+ Vars.prev_toprow = TOPLINE #UPDATE
1457+ Buffer.cursor_pos= Displayer.draw_page(Buffer, page_cut, Vars)
1458+
1459+
1460+ def _handle_KEY_NPAGE(cls, Buffer, Displayer, Vars):
1461+ """SET up a page down (next page) key handler"""
1462+ if Vars.prev_toprow >= 0 :
1463+ TOPLINE = Vars.prev_toprow + Vars.maxyx[0]
1464+            if TOPLINE > Buffer.history['line_count'] :
1465+                # (a redundant duplicate inner check was removed here)
1466+                TOPLINE = Buffer.history['line_count']
1467+ page_cut = (Vars.prev_toprow, TOPLINE)
1468+ Vars.p_indisplay = False
1469+ Vars.prev_toprow = -1
1470+ else:
1471+ page_cut = (Vars.prev_toprow, TOPLINE)
1472+ Vars.prev_toprow = TOPLINE
1473+ Buffer.cursor_pos= Displayer.draw_page(Buffer, page_cut, Vars)
1474+
1475+
1476+ def _handle_KEY_HOME(cls, Buffer, Displayer, Vars):
1477+ """SET up a home key handler"""
1478+ Buffer.command['disp'] = -len(Buffer.command['buffer'])
1479+ Displayer.reposition_cursor(Buffer)
1480+
1481+
1482+ def _handle_KEY_END(cls, Buffer, Displayer, Vars):
1483+        """Set up an end key handler"""
1484+ Buffer.command['disp'] = 0
1485+ Displayer.reposition_cursor(Buffer)
1486+
1487+
1488+ def _handle_KEY_BACKSPACE(cls, Buffer, Displayer, Vars):
1489+ """SET up a backspace key handler"""
1490+ global LINETYPES
1491+ # Detailed in Buffer class
1492+ Buffer.update_c(1, '')
1493+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1494+ LINETYPES['inp'], ' ' * (len(Buffer.command['buffer'])+1), Vars)
1495+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1496+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1497+
1498+
1499+ def _handle_KEY_DC(cls, Buffer, Displayer, Vars):
1500+ """SET up a delete key handler"""
1501+ global LINETYPES
1502+ # Detailed in Buffer class
1503+ Buffer.update_c(2, '')
1504+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1505+ LINETYPES['inp'], ' ' * (len(Buffer.command['buffer'])+1), Vars)
1506+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1507+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1508+
1509+ def _handle_TAB(cls, Buffer, Displayer, Vars):
1510+        """Set up a TAB key handler."""
1511+ Vars.tab_count += 1
1512+ regx = re.compile(r'(\s+\S+\s*)\Z', re.U)
1513+ disp = Buffer.command['disp']
1514+ words = regx.split(Buffer.command['buffer'][:disp])
1515+ rem_buf_b = Buffer.command['buffer'][disp:]
1516+ if disp == 0:
1517+ words = regx.split(Buffer.command['buffer'])
1518+ rem_buf_b = ''
1519+ key_text = ''.join(words[-2:])
1520+ rem_buf_f = ''.join(words[:-2])
1521+
1522+ matches = Vars.keycache.search(key_text.strip())
1523+ if len(matches) == 0:
1524+ pass
1525+ elif len(matches) == 1:
1526+ Buffer.command['buffer'] = rem_buf_f + \
1527+ key_text.replace(key_text.strip(), matches[0]) + rem_buf_b
1528+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1529+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1530+ elif Vars.tab_count >= 2:
1531+ #SHOW POSSIBLE MATCHES
1532+ #@CODEX This approach of displaying results is inspired from
1533+ #popular unix utility ls.
1534+ line_length = Vars.maxyx[1]
1535+ max_idx = line_length / 3
1536+ matches_len = len(matches)
1537+ class info():
1538+ def __init__(self):
1539+ self.valid_len = True
1540+ self.col_arr = [0] * max_idx
1541+ self.line_len = 0
1542+
1543+ max_cols = [max_idx, matches_len][max_idx > matches_len]
1544+
1545+ column_info = [info() for i in range(max_cols)]
1546+
1547+ for filesno in range(matches_len):
1548+ name_length = len(matches[filesno])
1549+ for i in range(max_cols):
1550+ if column_info[i].valid_len:
1551+ idx = filesno / ((matches_len + i) / (i + 1))
1552+ real_length = name_length + [2,0][(idx == i)]
1553+ if real_length > column_info[i].col_arr[idx]:
1554+                            column_info[i].line_len += (real_length - column_info[i].col_arr[idx])
1555+                            column_info[i].col_arr[idx] = real_length
1556+                    column_info[i].valid_len = column_info[i].line_len < line_length
1557+
1558+ cols = max_cols
1559+ while cols > 1:
1560+ if column_info[cols - 1].valid_len:
1561+ break
1562+ cols -= 1
1563+
1564+        rows = matches_len / cols + (matches_len % cols != 0)
1565+ total_p=''
1566+ for i in range(rows):
1567+ s=''
1568+ for i, name in enumerate(matches[i::rows]):
1569+ s+= name.ljust(column_info[cols-1].col_arr[i])
1570+ total_p+=(s+'\n')
1571+ Buffer.extend_history(total_p, LINETYPES['out'],
1572+ "", Displayer.highlighter)
1573+ custom = False
1574+ if not custom:
1575+ y = 0
1576+ if Displayer.screen.getyx()[0] + 1 < Vars.maxyx[0] :
1577+ y = Displayer.screen.getyx()[0] + 1
1578+ else:
1579+ Displayer.screen.scroll()
1580+ y = Displayer.screen.getyx()[0]
1581+ Buffer.cursor_pos = {'y': y,
1582+ 'x': 0 }
1583+ Displayer.display_ne(Buffer.cursor_pos,
1584+ LINETYPES['out'], total_p)
1585+ Buffer.cursor_pos = {'y': Displayer.screen.getyx()[0],
1586+ 'x': Displayer.screen.getyx()[1]}
1587+ else:
1588+ # custom not implemented
1589+ pass
1590+ Buffer.cursor_pos = Displayer.display_e(Buffer,
1591+ LINETYPES['inp'], Buffer.command['buffer'], Vars)
1592+
1593
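Reviewer note: the ls-style column layout in `_handle_TAB` is the densest part of this file. A simplified sketch of the same idea, not the exact width bookkeeping above: try the widest column count first, lay names out column-major, and shrink until the padded rows fit the line width:

```python
def columnize(names, line_width, gap=2):
    """Lay names out column-major, ls-style, within line_width cells."""
    if not names:
        return ''
    for cols in range(min(len(names), max(1, line_width // 3)), 0, -1):
        rows = -(-len(names) // cols)  # ceiling division
        columns = [names[c * rows:(c + 1) * rows] for c in range(cols)]
        widths = [max(len(n) for n in col) + gap if col else 0
                  for col in columns]
        if sum(widths) - gap <= line_width:
            lines = []
            for r in range(rows):
                cells = [col[r].ljust(widths[c])
                         for c, col in enumerate(columns) if r < len(col)]
                lines.append(''.join(cells).rstrip())
            return '\n'.join(lines)
    return '\n'.join(names)  # fallback: one name per line

print(columnize(['alpha', 'beta', 'gamma', 'delta'], 20))
```

On a 20-cell line the four names land in two columns of two rows each, which matches the column-major fill (`matches[i::rows]`) used by the handler above.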
1594=== added file 'boots/lib/ui/cui/cui_main.py'
1595--- boots/lib/ui/cui/cui_main.py 1970-01-01 00:00:00 +0000
1596+++ boots/lib/ui/cui/cui_main.py 2011-03-26 12:15:52 +0000
1597@@ -0,0 +1,107 @@
1598+#!/usr/bin/env python
1599+# -*- coding: iso-8859-1-unix -*-
1600+
1601+# Boots Client Project
1602+# www.launchpad.net/boots
1603+#
1604+# This file contributed to boots by Ashish Sharma.
1605+# ##### BEGIN LICENSE BLOCK #####
1606+#
1607+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
1608+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
1609+#
1610+# This program is free software: you can redistribute it and/or modify
1611+# it under the terms of the GNU General Public License as published by
1612+# the Free Software Foundation, either version 3 of the License, or
1613+# (at your option) any later version.
1614+#
1615+# This program is distributed in the hope that it will be useful,
1616+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1617+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1618+# GNU General Public License for more details.
1619+#
1620+# You should have received a copy of the GNU General Public License
1621+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1622+#
1623+# ##### END LICENSE BLOCK #####
1624+
1625+"""Curses UI interface to boots."""
1626+
1627+import os
1628+import sys
1629+import curses
1630+import traceback
1631+
1632+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
1633+from boots.lib.ui.cui.cui_window import Window
1634+from boots.lib.ui.cui.cui_utils import *
1635+
1636+
1637+class CursesMain():
1638+ """This class does the interfacing with boots other classes and the
1639+ internal curses ui components.
1640+ It provides basic initialization and deinitialization methods."""
1641+
1642+ def __init__(self, console, lingo=None, config=None):
1643+ """intialize the screen and other cui components
1644+ @param lingo - intilization lingo to be used"""
1645+ self.lingo = lingo
1646+ self.config = config
1647+ self.debug = console.config['debug']
1648+ curses.initscr()
1649+ curses.noecho()
1650+ curses.cbreak()
1651+ try:
1652+ curses.start_color()
1653+ except curses.error:
1654+ if self.debug:
1655+ mdebug("Failed to start colors")
1656+ self.create_cui(self.config)
1657+
1658+ def create_cui(self, config, lingo='sql'):
1659+ """Creates a window object and intializes it.
1660+ @param lingo - language for window to be intilized with."""
1661+ self.cui = Window(self.debug)
1662+ if lingo is not None:
1663+ self.lingo = lingo
1664+ self.config = config
1665+ self.cui.make_cui(self.lingo, self.config)
1666+
1667+ def destroy_cui(self, choice=0):
1668+ """Destroys cui. Deinitilalizes the current window object
1669+ and in case boots is going to quit it deinitializes curses and
1670+ dumps current buffer onto screen.
1671+
1672+ | @param choice - 0, default, means that destroy_cui is called
1673+ | but boots is not going to quit.
1674+ | 1, means boots is going to quit and curses must
1675+ | be deinitialized.
1676+ """
1677+ hbuffer = self.cui.destroy_cui()
1678+ if choice == 1:
1679+ curses.echo()
1680+ curses.nocbreak()
1681+ curses.endwin()
1682+ # And dump buffer history onto the term.
1683+ print ''.join([(''.join([token[0] for token in token_row]) + '\n')\
1684+ for token_row in hbuffer])
1685+
1686+
1687+ def getinp(self, prompt):
1688+ """provides input of a single line.
1689+
1690+ @param prompt - prompt to display on screen"""
1691+ try:
1692+ string = self.cui.input(prompt + " ")
1693+ return string
1694+ except Exception as e:
1695+ if self.debug:
1696+ mdebug('Exception in CursesMain-getinp '+ str(e))
1697+ traceback.print_exc(file=open('./.exceptions', 'a'))
1698+ sys.exit(0)
1699+
1700+ def printout(self, lines):
1701+ """provides screen based output facility.
1702+ @param lines - lines to display on screen"""
1703+ self.cui.printout(lines)
1704+
1705
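As a review aid, the buffer dump in CursesMain.destroy_cui can be exercised on its own. Each history row is a list of (text, attribute) tokens as built by the highlighter, and only the text component survives the dump; this token layout is inferred from the join expression in the diff, and the sample data below is hypothetical:

```python
# Sketch of the buffer dump performed in CursesMain.destroy_cui.
# Each history row is a list of (text, attribute) tokens; only the
# text component is written to the terminal after curses shuts down.

def dump_history(hbuffer):
    """Flatten rows of (text, attr) tokens into plain terminal output."""
    return ''.join(
        ''.join(token[0] for token in token_row) + '\n'
        for token_row in hbuffer
    )

history = [
    [('boots> ', 1), ('SELECT 1;', 2)],   # an input line with its prompt
    [('1', 0)],                           # an output line
]
print(dump_history(history), end='')
```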
1706=== added file 'boots/lib/ui/cui/cui_utils.py'
1707--- boots/lib/ui/cui/cui_utils.py 1970-01-01 00:00:00 +0000
1708+++ boots/lib/ui/cui/cui_utils.py 2011-03-26 12:15:52 +0000
1709@@ -0,0 +1,58 @@
1710+#!/usr/bin/env python
1711+# -*- coding: iso-8859-1-unix -*-
1712+
1713+# Boots Client Project
1714+# www.launchpad.net/boots
1715+#
1716+# This file contributed to boots by Ashish Sharma.
1717+# ##### BEGIN LICENSE BLOCK #####
1718+#
1719+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
1720+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
1721+#
1722+# This program is free software: you can redistribute it and/or modify
1723+# it under the terms of the GNU General Public License as published by
1724+# the Free Software Foundation, either version 3 of the License, or
1725+# (at your option) any later version.
1726+#
1727+# This program is distributed in the hope that it will be useful,
1728+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1729+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1730+# GNU General Public License for more details.
1731+#
1732+# You should have received a copy of the GNU General Public License
1733+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1734+#
1735+# ##### END LICENSE BLOCK #####
1736+
1737+""" Utilities and varaibles to be made available in
1738+different curses components """
1739+
1740+# Global Variables
1741+
1742+LINETYPES= {'inp': 1, 'out': 2}
1743+
1744+
1745+# Custom Exceptions
1746+
1747+# Not used
1748+class UnSupportedMethod(Exception):
1749+ """Exception raised when a method is called with an
1750+ argument for which it is not implemented.
1751+ FIXME: To be completed and used."""
1752+ def __init__(self, value):
1753+ self.value = value
1754+ def __str__(self):
1755+ return repr(self.value)
1756+
1757+# For Debugging
1758+
1759+def mdebug(msg):
1760+ """Writes debug messages to .debug.txt
1761+ @param msg - message to write
1762+ """
1763+ with open('.debug.txt', 'a') as f:
1764+     f.write(str(msg))
1765+     f.write('\n')
1767+
1768
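The mdebug helper above is a simple append-only line logger. A minimal sketch of the same behavior, pointed at a temporary file so it can be exercised safely (the real helper hardcodes '.debug.txt'; the `path` parameter here is an addition for the example):

```python
# Append-only debug logger, one message per line, as in cui_utils.mdebug.
import os
import tempfile

def mdebug(msg, path):
    """Append one debug message per line to a log file."""
    with open(path, 'a') as f:
        f.write(str(msg) + '\n')

fd, path = tempfile.mkstemp()
os.close(fd)
mdebug('first message', path)
mdebug({'key': 'value'}, path)   # non-strings are str()'d, as in the original
with open(path) as f:
    lines = f.read().splitlines()
os.remove(path)
```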
1769=== added file 'boots/lib/ui/cui/cui_window.py'
1770--- boots/lib/ui/cui/cui_window.py 1970-01-01 00:00:00 +0000
1771+++ boots/lib/ui/cui/cui_window.py 2011-03-26 12:15:52 +0000
1772@@ -0,0 +1,252 @@
1773+#!/usr/bin/env python
1774+# -*- coding: iso-8859-1-unix -*-
1775+
1776+# Boots Client Project
1777+# www.launchpad.net/boots
1778+#
1779+# This file contributed to boots by Ashish Sharma.
1780+# ##### BEGIN LICENSE BLOCK #####
1781+#
1782+# Copyright (C) 2009-2010 Clark Boylan, Ken Brotherton, Max Goodman,
1783+# Victoria Lewis, David Rosenbaum, and Andreas Turriff
1784+#
1785+# This program is free software: you can redistribute it and/or modify
1786+# it under the terms of the GNU General Public License as published by
1787+# the Free Software Foundation, either version 3 of the License, or
1788+# (at your option) any later version.
1789+#
1790+# This program is distributed in the hope that it will be useful,
1791+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1792+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1793+# GNU General Public License for more details.
1794+#
1795+# You should have received a copy of the GNU General Public License
1796+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1797+#
1798+# ##### END LICENSE BLOCK #####
1799+
1800+''' Manager for overall input and output for boots using cursesUI.'''
1801+
1802+import sys
1803+import os
1804+import re
1805+import time
1806+import curses
1807+import signal
1808+import readline
1809+import termios, fcntl, struct
1810+
1811+from boots.lib.ui.components.help import HelpTopic, CommandHelpTopic
1812+from boots.lib.ui.cui.cui_utils import *
1813+from boots.lib.ui.cui.cui_colors import Colors, CursesHighlighting
1814+from boots.lib.ui.cui.cui_buffer import Buffer
1815+from boots.lib.ui.cui.cui_keyboard import Keyboard
1816+from boots.lib.ui.cui.cui_display import Displayer
1817+from boots.lib.ui.cui.completer.key_cache import KeysCache
1818+
1819+class Window():
1820+ '''Window object for window, buffer management.'''
1821+
1822+ def __init__(self, debug):
1823+ '''Initialize variables and
1824+ special input handles.'''
1825+
1826+ self.config = None
1827+ self.screen = None
1828+ self.colors = None
1829+ self.buffer = None
1830+ self.highlighter = None
1831+ self.displayer = None
1832+ self.debug = debug
1833+ self.Keyboard = Keyboard(debug)
1834+
1835+ class WindowVars():
1836+ '''Defines and manages window variables that are also
1837+ passed on to other functions such as key handlers.'''
1838+ def __init__(self, history_index, prev_toprow, p_indisplay, maxyx):
1839+ self.set(history_index, prev_toprow, p_indisplay, maxyx)
1840+
1841+ def set(self, history_index, prev_toprow, p_indisplay, maxyx):
1842+ self.history_index = history_index
1843+ self.prev_toprow = prev_toprow
1844+ self.p_indisplay = p_indisplay
1845+ self.maxyx = maxyx
1846+ self.current_cursor = []
1847+ self.kill_ring = ''
1848+ self.kill_count = 0
1849+ self.tab_count = 0
1850+ self.keycache = KeysCache()
1851+ self.return_inp = False
1852+
1853+ self.vars = WindowVars(-1, -1, False, [0, 0])
1854+
1855+ # Keyboard configuration
1856+ self._install_visible_handles()
1857+ self._install_handles()
1858+ self._install_control_handles()
1859+ self._install_signal_handles()
1860+
1861+ def _make_window(self):
1862+ '''Initial settings for a window'''
1863+ self.screen = curses.newwin(0, 0)
1864+ self.screen.idlok(1)
1865+ self.screen.keypad(1)
1866+ self.screen.scrollok(1)
1867+ self.screen.timeout(-1)
1868+
1869+ def _destroy_window(self):
1870+ '''destroy window'''
1871+ self.screen = None
1872+
1873+ def make_cui(self, lingo, config):
1874+ '''create components of the Curses UI.
1875+ | @param lingo - language for which to create Window objects
1876+ | @param config - contains connection information'''
1877+ self.config = config
1878+ self._make_window()
1879+ if self.colors is None:
1880+ self.colors= Colors(self.debug)
1881+ self.colors.init_colors()
1882+ if self.highlighter is None:
1883+ self.highlighter= CursesHighlighting(self.debug, lingo)
1884+ if self.buffer is None:
1885+ self.buffer= Buffer(self.debug, self.screen.getmaxyx()[1])
1886+ if self.displayer is None:
1887+ self.displayer= Displayer(self.debug, lingo, self.screen,
1888+ self.colors, self.highlighter)
1889+
1890+ self.vars.set(-1, -1, False, [self.screen.getmaxyx()[0],
1891+ self.screen.getmaxyx()[1]])
1892+ self.vars.keycache.update(lingo, self.config)
1893+
1894+ def destroy_cui(self):
1895+ '''remove Curses UI components.'''
1896+ self._destroy_window()
1897+ self.colors = None
1898+ self.highlighter = None
1899+ self.displayer = None
1900+ self.vars.set(-1, -1, False, [0, 0])
1901+ return self.buffer.history['buffer']
1902+
1903+ def _install_signal_handles(self):
1904+ '''Set signal handlers'''
1905+ self.Keyboard.get_signal_handlers()
1906+
1907+ def _install_visible_handles(self):
1908+ '''Set key handlers'''
1909+ self.visible_handles = self.Keyboard.get_visible_handlers()
1910+
1911+ def _install_handles(self):
1912+ '''Set key handlers'''
1913+ self.handles = self.Keyboard.get_key_handlers()
1914+
1915+ def _install_control_handles(self):
1916+ '''Set control signal handlers'''
1917+ self.control_handles = self.Keyboard.get_control_signal_handlers()
1918+
1919+ def _nullify_counts(self):
1920+ self.vars.kill_count = 0
1921+ self.vars.tab_count = 0
1922+
1923+ def input(self, prompt):
1924+ '''return a line of input
1925+ @param prompt - prompt string to be shown '''
1926+ global LINETYPES
1927+ self.buffer.prepare_for_command(prompt, self.screen.getyx())
1928+ self.vars.history_index = -1
1929+ self.buffer.cursor_pos = self.displayer.display_e(self.buffer,
1930+ LINETYPES['inp'], self.buffer.command['buffer'],
1931+ self.vars)
1932+
1933+ c = ' '
1934+ eols = ('\n',)
1935+ while c not in eols and not self.vars.return_inp :
1936+ # @CODEX
1937+ # try-except - read one key from the user into c.
1938+ # if block - inputs longer than one character are
1939+ # passed to a key handler.
1940+ # Single-character inputs with ordinals below 32 are
1941+ # control signals and are dispatched separately.
1942+ # Any remaining printable character is added to the
1943+ # buffer and the screen is updated accordingly.
1944+ try:
1945+ c = self.screen.getkey()
1946+ except Exception as e:
1947+ # @CODEX getkey() raises a spurious exception on a
1948+ # terminal resize. It is harmless, so it is logged in
1949+ # debug mode and otherwise ignored.
1950+ if self.debug :
1951+ mdebug('Exception in input')
1952+ mdebug(e)
1953+ continue
1954+
1955+ self.vars.current_cursor = self.screen.getyx()
1956+ if len(c) > 1 :
1957+ # Call a key handle.
1958+ self._nullify_counts()
1959+ if c in self.handles:
1960+ self.handles[c](self.buffer, self.displayer, self.vars)
1961+ else:
1962+ if self.debug:
1963+ mdebug('Unhandled key : ' + c)
1964+ elif ord(c) < 32 :
1965+ # @CODEX A control signal.
1966+ # Common editing control signals are handled here;
1967+ # some are left unhandled for now because they would
1968+ # require changes to the program's variables and data structures.
1969+ if ord(c) > 0 and ord(c) < 27 :
1970+ nch = ord(c) + 96
1971+ qrkey = 'Ctrl_' + chr(nch)
1972+ if qrkey in self.control_handles:
1973+ if chr(nch) != 'w' and chr(nch) != 'k':
1974+ self.vars.kill_count = 0
1975+ if ord(c) != 9:
1976+ self.vars.tab_count = 0
1977+
1978+ self.control_handles[qrkey](self.buffer, self.displayer,
1979+ self.vars)
1980+ else:
1981+ self._nullify_counts()
1982+ # Handle not available for this control signal
1983+ pass
1984+ else:
1985+ # Visible characters
1986+ self._nullify_counts()
1987+ self.visible_handles['visible']( c,
1988+ self.buffer, self.displayer, self.vars)
1989+
1990+ self.buffer.extend_history(self.buffer.command['buffer'],
1991+ LINETYPES['inp'], prompt, self.highlighter)
1992+ self.buffer.command['disp'] = 0
1993+ self.buffer.cursor_pos = self.displayer.display_e(self.buffer,
1994+ LINETYPES['inp'], self.buffer.command['buffer'],
1995+ self.vars)
1996+ self.buffer.command['prompt'] = ''
1997+ self.vars.return_inp = False
1998+
1999+ # @CODEX Scroll one line if currently on the last line.
2000+ # This avoids moving off-screen, which would raise an
2001+ # exception in the subsequent call to addstr.
2002+ if self.screen.getmaxyx()[0] == self.screen.getyx()[0] :
2003+ self.screen.scroll()
2004+ self.screen.addstr('\n')
2005+
2006+ if readline.get_history_item(readline.get_current_history_length()) !=\
2007+ self.buffer.command['buffer'] \
2008+ and self.buffer.command['buffer'].strip() !='':
2009+ readline.add_history(self.buffer.command['buffer'])
2010+ return self.buffer.command['buffer']
2011+
2012+ def printout(self, line):
2013+ '''Put lines in the buffer and display them on screen.
2014+ @param line - lines to be added to the screen.'''
2015+ self.buffer.command['buffer'] = ''
2016+ liney, linex = self.screen.getyx()
2017+ self.buffer.cursor_pos = {'y': liney,
2018+ 'x': linex}
2019+ self.buffer.extend_history(line, LINETYPES['out'],
2020+ '', self.highlighter)
2021+ self.displayer.display_ne(self.buffer.cursor_pos,
2022+ LINETYPES['out'], line)
2023+
2024+
2025
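The control-signal dispatch in Window.input maps a raw character with ordinal 1..26 to a 'Ctrl_a'..'Ctrl_z' key name (ord(c) + 96) and looks it up in a handler table. A standalone sketch of that mapping; the handler names and return values below are hypothetical:

```python
# Sketch of the control-signal naming used by Window.input.

def control_key_name(c):
    """Map a raw control character (ordinal 1-26) to its Ctrl_<letter> name."""
    n = ord(c)
    if 0 < n < 27:
        return 'Ctrl_' + chr(n + 96)
    return None  # not an editing control signal

# Hypothetical handler table, stand-in for Keyboard's real handlers.
handlers = {'Ctrl_a': lambda: 'move-to-start', 'Ctrl_k': lambda: 'kill-line'}

key = control_key_name('\x01')           # Ctrl-A arrives as chr(1)
result = handlers[key]() if key in handlers else None
```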
2026=== added directory 'boots/lib/ui/cui/lexers'
2027=== added file 'boots/lib/ui/cui/lexers/__init__.py'
2028=== added directory 'boots/lib/ui/cui/lexers/ply'
2029=== added file 'boots/lib/ui/cui/lexers/ply/__init__.py'
2030--- boots/lib/ui/cui/lexers/ply/__init__.py 1970-01-01 00:00:00 +0000
2031+++ boots/lib/ui/cui/lexers/ply/__init__.py 2011-03-26 12:15:52 +0000
2032@@ -0,0 +1,4 @@
2033+# PLY package
2034+# Author: David Beazley (dave@dabeaz.com)
2035+
2036+__all__ = ['lex','yacc']
2037
2038=== added file 'boots/lib/ui/cui/lexers/ply/cpp.py'
2039--- boots/lib/ui/cui/lexers/ply/cpp.py 1970-01-01 00:00:00 +0000
2040+++ boots/lib/ui/cui/lexers/ply/cpp.py 2011-03-26 12:15:52 +0000
2041@@ -0,0 +1,898 @@
2042+# -----------------------------------------------------------------------------
2043+# cpp.py
2044+#
2045+# Author: David Beazley (http://www.dabeaz.com)
2046+# Copyright (C) 2007
2047+# All rights reserved
2048+#
2049+# This module implements an ANSI-C style lexical preprocessor for PLY.
2050+# -----------------------------------------------------------------------------
2051+from __future__ import generators
2052+
2053+# -----------------------------------------------------------------------------
2054+# Default preprocessor lexer definitions. These tokens are enough to get
2055+# a basic preprocessor working. Other modules may import these if they want
2056+# -----------------------------------------------------------------------------
2057+
2058+tokens = (
2059+ 'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT', 'CPP_POUND','CPP_DPOUND'
2060+)
2061+
2062+literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\""
2063+
2064+# Whitespace
2065+def t_CPP_WS(t):
2066+ r'\s+'
2067+ t.lexer.lineno += t.value.count("\n")
2068+ return t
2069+
2070+t_CPP_POUND = r'\#'
2071+t_CPP_DPOUND = r'\#\#'
2072+
2073+# Identifier
2074+t_CPP_ID = r'[A-Za-z_][\w_]*'
2075+
2076+# Integer literal
2077+def CPP_INTEGER(t):
2078+ r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU]|[lL]|[uU][lL]|[lL][uU])?)'
2079+ return t
2080+
2081+t_CPP_INTEGER = CPP_INTEGER
2082+
2083+# Floating literal
2084+t_CPP_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
2085+
2086+# String literal
2087+def t_CPP_STRING(t):
2088+ r'\"([^\\\n]|(\\(.|\n)))*?\"'
2089+ t.lexer.lineno += t.value.count("\n")
2090+ return t
2091+
2092+# Character constant 'c' or L'c'
2093+def t_CPP_CHAR(t):
2094+ r'(L)?\'([^\\\n]|(\\(.|\n)))*?\''
2095+ t.lexer.lineno += t.value.count("\n")
2096+ return t
2097+
2098+# Comment
2099+def t_CPP_COMMENT(t):
2100+ r'(/\*(.|\n)*?\*/)|(//.*?\n)'
2101+ t.lexer.lineno += t.value.count("\n")
2102+ return t
2103+
2104+def t_error(t):
2105+ t.type = t.value[0]
2106+ t.value = t.value[0]
2107+ t.lexer.skip(1)
2108+ return t
2109+
2110+import re
2111+import copy
2112+import time
2113+import os.path
2114+
2115+# -----------------------------------------------------------------------------
2116+# trigraph()
2117+#
2118+# Given an input string, this function replaces all trigraph sequences.
2119+# The following mapping is used:
2120+#
2121+# ??= #
2122+# ??/ \
2123+# ??' ^
2124+# ??( [
2125+# ??) ]
2126+# ??! |
2127+# ??< {
2128+# ??> }
2129+# ??- ~
2130+# -----------------------------------------------------------------------------
2131+
2132+_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
2133+_trigraph_rep = {
2134+ '=':'#',
2135+ '/':'\\',
2136+ "'":'^',
2137+ '(':'[',
2138+ ')':']',
2139+ '!':'|',
2140+ '<':'{',
2141+ '>':'}',
2142+ '-':'~'
2143+}
2144+
2145+def trigraph(input):
2146+ return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]],input)
2147+
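For reviewers unfamiliar with PLY's preprocessor, the trigraph table above can be exercised standalone; this reproduces the vendored pattern and mapping verbatim (Python 3 syntax used for the example):

```python
# Trigraph replacement as implemented in ply/cpp.py: every ??x
# sequence is rewritten via a single regex substitution.
import re

_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {'=': '#', '/': '\\', "'": '^', '(': '[',
                 ')': ']', '!': '|', '<': '{', '>': '}', '-': '~'}

def trigraph(text):
    """Replace all ANSI C trigraph sequences in text."""
    return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]], text)

converted = trigraph('??=define ARR(i) a??(i??)')
```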
2148+# ------------------------------------------------------------------
2149+# Macro object
2150+#
2151+# This object holds information about preprocessor macros
2152+#
2153+# .name - Macro name (string)
2154+# .value - Macro value (a list of tokens)
2155+# .arglist - List of argument names
2156+# .variadic - Boolean indicating whether or not variadic macro
2157+# .vararg - Name of the variadic parameter
2158+#
2159+# When a macro is created, the macro replacement token sequence is
2160+# pre-scanned and used to create patch lists that are later used
2161+# during macro expansion
2162+# ------------------------------------------------------------------
2163+
2164+class Macro(object):
2165+ def __init__(self,name,value,arglist=None,variadic=False):
2166+ self.name = name
2167+ self.value = value
2168+ self.arglist = arglist
2169+ self.variadic = variadic
2170+ if variadic:
2171+ self.vararg = arglist[-1]
2172+ self.source = None
2173+
2174+# ------------------------------------------------------------------
2175+# Preprocessor object
2176+#
2177+# Object representing a preprocessor. Contains macro definitions,
2178+# include directories, and other information
2179+# ------------------------------------------------------------------
2180+
2181+class Preprocessor(object):
2182+ def __init__(self,lexer=None):
2183+ if lexer is None:
2184+ lexer = lex.lexer
2185+ self.lexer = lexer
2186+ self.macros = { }
2187+ self.path = []
2188+ self.temp_path = []
2189+
2190+ # Probe the lexer for selected tokens
2191+ self.lexprobe()
2192+
2193+ tm = time.localtime()
2194+ self.define("__DATE__ \"%s\"" % time.strftime("%b %d %Y",tm))
2195+ self.define("__TIME__ \"%s\"" % time.strftime("%H:%M:%S",tm))
2196+ self.parser = None
2197+
2198+ # -----------------------------------------------------------------------------
2199+ # tokenize()
2200+ #
2201+ # Utility function. Given a string of text, tokenize into a list of tokens
2202+ # -----------------------------------------------------------------------------
2203+
2204+ def tokenize(self,text):
2205+ tokens = []
2206+ self.lexer.input(text)
2207+ while True:
2208+ tok = self.lexer.token()
2209+ if not tok: break
2210+ tokens.append(tok)
2211+ return tokens
2212+
2213+ # ---------------------------------------------------------------------
2214+ # error()
2215+ #
2216+ # Report a preprocessor error/warning of some kind
2217+ # ----------------------------------------------------------------------
2218+
2219+ def error(self,file,line,msg):
2220+ print >>sys.stderr,"%s:%d %s" % (file,line,msg)
2221+
2222+ # ----------------------------------------------------------------------
2223+ # lexprobe()
2224+ #
2225+ # This method probes the preprocessor lexer object to discover
2226+ # the token types of symbols that are important to the preprocessor.
2227+ # If this works right, the preprocessor will simply "work"
2228+ # with any suitable lexer regardless of how tokens have been named.
2229+ # ----------------------------------------------------------------------
2230+
2231+ def lexprobe(self):
2232+
2233+ # Determine the token type for identifiers
2234+ self.lexer.input("identifier")
2235+ tok = self.lexer.token()
2236+ if not tok or tok.value != "identifier":
2237+ print "Couldn't determine identifier type"
2238+ else:
2239+ self.t_ID = tok.type
2240+
2241+ # Determine the token type for integers
2242+ self.lexer.input("12345")
2243+ tok = self.lexer.token()
2244+ if not tok or int(tok.value) != 12345:
2245+ print "Couldn't determine integer type"
2246+ else:
2247+ self.t_INTEGER = tok.type
2248+ self.t_INTEGER_TYPE = type(tok.value)
2249+
2250+ # Determine the token type for strings enclosed in double quotes
2251+ self.lexer.input("\"filename\"")
2252+ tok = self.lexer.token()
2253+ if not tok or tok.value != "\"filename\"":
2254+ print "Couldn't determine string type"
2255+ else:
2256+ self.t_STRING = tok.type
2257+
2258+ # Determine the token type for whitespace--if any
2259+ self.lexer.input(" ")
2260+ tok = self.lexer.token()
2261+ if not tok or tok.value != " ":
2262+ self.t_SPACE = None
2263+ else:
2264+ self.t_SPACE = tok.type
2265+
2266+ # Determine the token type for newlines
2267+ self.lexer.input("\n")
2268+ tok = self.lexer.token()
2269+ if not tok or tok.value != "\n":
2270+ self.t_NEWLINE = None
2271+ print "Couldn't determine token for newlines"
2272+ else:
2273+ self.t_NEWLINE = tok.type
2274+
2275+ self.t_WS = (self.t_SPACE, self.t_NEWLINE)
2276+
2277+ # Check for other characters used by the preprocessor
2278+ chars = [ '<','>','#','##','\\','(',')',',','.']
2279+ for c in chars:
2280+ self.lexer.input(c)
2281+ tok = self.lexer.token()
2282+ if not tok or tok.value != c:
2283+ print "Unable to lex '%s' required for preprocessor" % c
2284+
2285+ # ----------------------------------------------------------------------
2286+ # add_path()
2287+ #
2288+ # Adds a search path to the preprocessor.
2289+ # ----------------------------------------------------------------------
2290+
2291+ def add_path(self,path):
2292+ self.path.append(path)
2293+
2294+ # ----------------------------------------------------------------------
2295+ # group_lines()
2296+ #
2297+ # Given an input string, this function splits it into lines. Trailing whitespace
2298+ # is removed. Any line ending with \ is grouped with the next line. This
2300+# function forms the lowest level of the preprocessor---grouping text into
2300+ # a line-by-line format.
2301+ # ----------------------------------------------------------------------
2302+
2303+ def group_lines(self,input):
2304+ lex = self.lexer.clone()
2305+ lines = [x.rstrip() for x in input.splitlines()]
2306+ for i in xrange(len(lines)):
2307+ j = i+1
2308+ while lines[i].endswith('\\') and (j < len(lines)):
2309+ lines[i] = lines[i][:-1]+lines[j]
2310+ lines[j] = ""
2311+ j += 1
2312+
2313+ input = "\n".join(lines)
2314+ lex.input(input)
2315+ lex.lineno = 1
2316+
2317+ current_line = []
2318+ while True:
2319+ tok = lex.token()
2320+ if not tok:
2321+ break
2322+ current_line.append(tok)
2323+ if tok.type in self.t_WS and '\n' in tok.value:
2324+ yield current_line
2325+ current_line = []
2326+
2327+ if current_line:
2328+ yield current_line
2329+
2330+ # ----------------------------------------------------------------------
2331+ # tokenstrip()
2332+ #
2333+ # Remove leading/trailing whitespace tokens from a token list
2334+ # ----------------------------------------------------------------------
2335+
2336+ def tokenstrip(self,tokens):
2337+ i = 0
2338+ while i < len(tokens) and tokens[i].type in self.t_WS:
2339+ i += 1
2340+ del tokens[:i]
2341+ i = len(tokens)-1
2342+ while i >= 0 and tokens[i].type in self.t_WS:
2343+ i -= 1
2344+ del tokens[i+1:]
2345+ return tokens
2346+
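The tokenstrip method above trims whitespace tokens in place from both ends of a token list. A self-contained sketch of the same logic, with a tiny stand-in token class in place of PLY's LexToken:

```python
# Standalone sketch of Preprocessor.tokenstrip from ply/cpp.py.
class Tok:
    """Minimal stand-in for PLY's LexToken."""
    def __init__(self, type_, value):
        self.type, self.value = type_, value

WS = ('CPP_WS',)  # whitespace token types, as in self.t_WS

def tokenstrip(tokens):
    """Remove leading/trailing whitespace tokens from a token list in place."""
    i = 0
    while i < len(tokens) and tokens[i].type in WS:
        i += 1
    del tokens[:i]
    i = len(tokens) - 1
    while i >= 0 and tokens[i].type in WS:
        i -= 1
    del tokens[i + 1:]
    return tokens

toks = [Tok('CPP_WS', ' '), Tok('CPP_ID', 'x'), Tok('CPP_WS', '\n')]
stripped = tokenstrip(toks)
```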
2347+
2348+ # ----------------------------------------------------------------------
2349+ # collect_args()
2350+ #
2351+ # Collects comma separated arguments from a list of tokens. The arguments
2352+ # must be enclosed in parenthesis. Returns a tuple (tokencount,args,positions)
2353+ # where tokencount is the number of tokens consumed, args is a list of arguments,
2354+ # and positions is a list of integers containing the starting index of each
2355+ # argument. Each argument is represented by a list of tokens.
2356+ #
2357+ # When collecting arguments, leading and trailing whitespace is removed
2358+ # from each argument.
2359+ #
2360+ # This function properly handles nested parenthesis and commas---these do not
2361+ # define new arguments.
2362+ # ----------------------------------------------------------------------
2363+
2364+ def collect_args(self,tokenlist):
2365+ args = []
2366+ positions = []
2367+ current_arg = []
2368+ nesting = 1
2369+ tokenlen = len(tokenlist)
2370+
2371+ # Search for the opening '('.
2372+ i = 0
2373+ while (i < tokenlen) and (tokenlist[i].type in self.t_WS):
2374+ i += 1
2375+
2376+ if (i < tokenlen) and (tokenlist[i].value == '('):
2377+ positions.append(i+1)
2378+ else:
2379+ self.error(self.source,tokenlist[0].lineno,"Missing '(' in macro arguments")
2380+ return 0, [], []
2381+
2382+ i += 1
2383+
2384+ while i < tokenlen:
2385+ t = tokenlist[i]
2386+ if t.value == '(':
2387+ current_arg.append(t)
2388+ nesting += 1
2389+ elif t.value == ')':
2390+ nesting -= 1
2391+ if nesting == 0:
2392+ if current_arg:
2393+ args.append(self.tokenstrip(current_arg))
2394+ positions.append(i)
2395+ return i+1,args,positions
2396+ current_arg.append(t)
2397+ elif t.value == ',' and nesting == 1:
2398+ args.append(self.tokenstrip(current_arg))
2399+ positions.append(i+1)
2400+ current_arg = []
2401+ else:
2402+ current_arg.append(t)
2403+ i += 1
2404+
2405+ # Missing end argument
2406+ self.error(self.source,tokenlist[-1].lineno,"Missing ')' in macro arguments")
2407+ return 0, [],[]
2408+
2409+ # ----------------------------------------------------------------------
2410+ # macro_prescan()
2411+ #
2412+ # Examine the macro value (token sequence) and identify patch points
2413+ # This is used to speed up macro expansion later on---we'll know
2414+ # right away where to apply patches to the value to form the expansion
2415+ # ----------------------------------------------------------------------
2416+
2417+ def macro_prescan(self,macro):
2418+ macro.patch = [] # Standard macro arguments
2419+ macro.str_patch = [] # String conversion expansion
2420+ macro.var_comma_patch = [] # Variadic macro comma patch
2421+ i = 0
2422+ while i < len(macro.value):
2423+ if macro.value[i].type == self.t_ID and macro.value[i].value in macro.arglist:
2424+ argnum = macro.arglist.index(macro.value[i].value)
2425+ # Conversion of argument to a string
2426+ if i > 0 and macro.value[i-1].value == '#':
2427+ macro.value[i] = copy.copy(macro.value[i])
2428+ macro.value[i].type = self.t_STRING
2429+ del macro.value[i-1]
2430+ macro.str_patch.append((argnum,i-1))
2431+ continue
2432+ # Concatenation
2433+ elif (i > 0 and macro.value[i-1].value == '##'):
2434+ macro.patch.append(('c',argnum,i-1))
2435+ del macro.value[i-1]
2436+ continue
2437+ elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'):
2438+ macro.patch.append(('c',argnum,i))
2439+ i += 1
2440+ continue
2441+ # Standard expansion
2442+ else:
2443+ macro.patch.append(('e',argnum,i))
2444+ elif macro.value[i].value == '##':
2445+ if macro.variadic and (i > 0) and (macro.value[i-1].value == ',') and \
2446+ ((i+1) < len(macro.value)) and (macro.value[i+1].type == self.t_ID) and \
2447+ (macro.value[i+1].value == macro.vararg):
2448+ macro.var_comma_patch.append(i-1)
2449+ i += 1
2450+ macro.patch.sort(key=lambda x: x[2],reverse=True)
2451+
2452+ # ----------------------------------------------------------------------
2453+ # macro_expand_args()
2454+ #
2455+ # Given a Macro and list of arguments (each a token list), this method
2456+ # returns an expanded version of a macro. The return value is a token sequence
2457+ # representing the replacement macro tokens
2458+ # ----------------------------------------------------------------------
2459+
2460+ def macro_expand_args(self,macro,args):
2461+ # Make a copy of the macro token sequence
2462+ rep = [copy.copy(_x) for _x in macro.value]
2463+
2464+ # Make string expansion patches. These do not alter the length of the replacement sequence
2465+
2466+ str_expansion = {}
2467+ for argnum, i in macro.str_patch:
2468+ if argnum not in str_expansion:
2469+ str_expansion[argnum] = ('"%s"' % "".join([x.value for x in args[argnum]])).replace("\\","\\\\")
2470+ rep[i] = copy.copy(rep[i])
2471+ rep[i].value = str_expansion[argnum]
2472+
2473+ # Make the variadic macro comma patch. If the variadic macro argument is empty, we get rid
2474+ comma_patch = False
2475+ if macro.variadic and not args[-1]:
2476+ for i in macro.var_comma_patch:
2477+ rep[i] = None
2478+ comma_patch = True
2479+
2480+ # Make all other patches. The order of these matters. It is assumed that the patch list
2481+ # has been sorted in reverse order of patch location since replacements will cause the
2482+ # size of the replacement sequence to expand from the patch point.
2483+
2484+ expanded = { }
2485+ for ptype, argnum, i in macro.patch:
2486+ # Concatenation. Argument is left unexpanded
2487+ if ptype == 'c':
2488+ rep[i:i+1] = args[argnum]
2489+ # Normal expansion. Argument is macro expanded first
2490+ elif ptype == 'e':
2491+ if argnum not in expanded:
2492+ expanded[argnum] = self.expand_macros(args[argnum])
2493+ rep[i:i+1] = expanded[argnum]
2494+
2495+ # Get rid of removed comma if necessary
2496+ if comma_patch:
2497+ rep = [_i for _i in rep if _i]
2498+
2499+ return rep
2500+
2501+
2502+ # ----------------------------------------------------------------------
2503+ # expand_macros()
2504+ #
2505+ # Given a list of tokens, this function performs macro expansion.
2506+ # The expanded argument is a dictionary that contains macros already
2507+ # expanded. This is used to prevent infinite recursion.
2508+ # ----------------------------------------------------------------------
2509+
2510+ def expand_macros(self,tokens,expanded=None):
2511+ if expanded is None:
2512+ expanded = {}
2513+ i = 0
2514+ while i < len(tokens):
2515+ t = tokens[i]
2516+ if t.type == self.t_ID:
2517+ if t.value in self.macros and t.value not in expanded:
2518+ # Yes, we found a macro match
2519+ expanded[t.value] = True
2520+
2521+ m = self.macros[t.value]
2522+ if not m.arglist:
2523+ # A simple macro
2524+ ex = self.expand_macros([copy.copy(_x) for _x in m.value],expanded)
2525+ for e in ex:
2526+ e.lineno = t.lineno
2527+ tokens[i:i+1] = ex
2528+ i += len(ex)
2529+ else:
2530+ # A macro with arguments
2531+ j = i + 1
2532+ while j < len(tokens) and tokens[j].type in self.t_WS:
2533+ j += 1
2534+ if tokens[j].value == '(':
2535+ tokcount,args,positions = self.collect_args(tokens[j:])
2536+ if not m.variadic and len(args) != len(m.arglist):
2537+ self.error(self.source,t.lineno,"Macro %s requires %d arguments" % (t.value,len(m.arglist)))
2538+ i = j + tokcount
2539+ elif m.variadic and len(args) < len(m.arglist)-1:
2540+ if len(m.arglist) > 2:
2541+ self.error(self.source,t.lineno,"Macro %s must have at least %d arguments" % (t.value, len(m.arglist)-1))
2542+ else:
2543+ self.error(self.source,t.lineno,"Macro %s must have at least %d argument" % (t.value, len(m.arglist)-1))
2544+ i = j + tokcount
2545+ else:
2546+ if m.variadic:
2547+ if len(args) == len(m.arglist)-1:
2548+ args.append([])
2549+ else:
2550+ args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1]
2551+ del args[len(m.arglist):]
2552+
2553+ # Get macro replacement text
2554+ rep = self.macro_expand_args(m,args)
2555+ rep = self.expand_macros(rep,expanded)
2556+ for r in rep:
2557+ r.lineno = t.lineno
2558+ tokens[i:j+tokcount] = rep
2559+ i += len(rep)
2560+ del expanded[t.value]
2561+ continue
2562+ elif t.value == '__LINE__':
2563+ t.type = self.t_INTEGER
2564+ t.value = self.t_INTEGER_TYPE(t.lineno)
2565+
2566+ i += 1
2567+ return tokens
2568+
2569+ # ----------------------------------------------------------------------
2570+ # evalexpr()
2571+ #
2572+ # Evaluate an expression token sequence for the purposes of evaluating
2573+ # integral expressions.
2574+ # ----------------------------------------------------------------------
2575+
2576+ def evalexpr(self,tokens):
2577+ # tokens = tokenize(line)
2578+ # Search for defined macros
2579+ i = 0
2580+ while i < len(tokens):
2581+ if tokens[i].type == self.t_ID and tokens[i].value == 'defined':
2582+ j = i + 1
2583+ needparen = False
2584+ result = "0L"
2585+ while j < len(tokens):
2586+ if tokens[j].type in self.t_WS:
2587+ j += 1
2588+ continue
2589+ elif tokens[j].type == self.t_ID:
2590+ if tokens[j].value in self.macros:
2591+ result = "1L"
2592+ else:
2593+ result = "0L"
2594+ if not needparen: break
2595+ elif tokens[j].value == '(':
2596+ needparen = True
2597+ elif tokens[j].value == ')':
2598+ break
2599+ else:
2600+ self.error(self.source,tokens[i].lineno,"Malformed defined()")
2601+ j += 1
2602+ tokens[i].type = self.t_INTEGER
2603+ tokens[i].value = self.t_INTEGER_TYPE(result)
2604+ del tokens[i+1:j+1]
2605+ i += 1
2606+ tokens = self.expand_macros(tokens)
2607+ for i,t in enumerate(tokens):
2608+ if t.type == self.t_ID:
2609+ tokens[i] = copy.copy(t)
2610+ tokens[i].type = self.t_INTEGER
2611+ tokens[i].value = self.t_INTEGER_TYPE("0L")
2612+ elif t.type == self.t_INTEGER:
2613+ tokens[i] = copy.copy(t)
2614+ # Strip off any trailing suffixes
2615+ tokens[i].value = str(tokens[i].value)
2616+ while tokens[i].value[-1] not in "0123456789abcdefABCDEF":
2617+ tokens[i].value = tokens[i].value[:-1]
2618+
2619+ expr = "".join([str(x.value) for x in tokens])
2620+ expr = expr.replace("&&"," and ")
2621+ expr = expr.replace("||"," or ")
2622+ expr = expr.replace("!"," not ")
2623+ try:
2624+ result = eval(expr)
2625+ except StandardError:
2626+ self.error(self.source,tokens[0].lineno,"Couldn't evaluate expression")
2627+ result = 0
2628+ return result
2629+
2630+ # ----------------------------------------------------------------------
2631+ # parsegen()
2632+ #
2633+    # Parse an input string.
2634+ # ----------------------------------------------------------------------
2635+ def parsegen(self,input,source=None):
2636+
2637+ # Replace trigraph sequences
2638+ t = trigraph(input)
2639+ lines = self.group_lines(t)
2640+
2641+ if not source:
2642+ source = ""
2643+
2644+ self.define("__FILE__ \"%s\"" % source)
2645+
2646+ self.source = source
2647+ chunk = []
2648+ enable = True
2649+ iftrigger = False
2650+ ifstack = []
2651+
2652+ for x in lines:
2653+ for i,tok in enumerate(x):
2654+ if tok.type not in self.t_WS: break
2655+ if tok.value == '#':
2656+ # Preprocessor directive
2657+
2658+ for tok in x:
2659+                    if tok.type in self.t_WS and '\n' in tok.value:
2660+ chunk.append(tok)
2661+
2662+ dirtokens = self.tokenstrip(x[i+1:])
2663+ if dirtokens:
2664+ name = dirtokens[0].value
2665+ args = self.tokenstrip(dirtokens[1:])
2666+ else:
2667+ name = ""
2668+ args = []
2669+
2670+ if name == 'define':
2671+ if enable:
2672+ for tok in self.expand_macros(chunk):
2673+ yield tok
2674+ chunk = []
2675+ self.define(args)
2676+ elif name == 'include':
2677+ if enable:
2678+ for tok in self.expand_macros(chunk):
2679+ yield tok
2680+ chunk = []
2681+ oldfile = self.macros['__FILE__']
2682+ for tok in self.include(args):
2683+ yield tok
2684+ self.macros['__FILE__'] = oldfile
2685+ self.source = source
2686+ elif name == 'undef':
2687+ if enable:
2688+ for tok in self.expand_macros(chunk):
2689+ yield tok
2690+ chunk = []
2691+ self.undef(args)
2692+ elif name == 'ifdef':
2693+ ifstack.append((enable,iftrigger))
2694+ if enable:
2695+ if not args[0].value in self.macros:
2696+ enable = False
2697+ iftrigger = False
2698+ else:
2699+ iftrigger = True
2700+ elif name == 'ifndef':
2701+ ifstack.append((enable,iftrigger))
2702+ if enable:
2703+ if args[0].value in self.macros:
2704+ enable = False
2705+ iftrigger = False
2706+ else:
2707+ iftrigger = True
2708+ elif name == 'if':
2709+ ifstack.append((enable,iftrigger))
2710+ if enable:
2711+ result = self.evalexpr(args)
2712+ if not result:
2713+ enable = False
2714+ iftrigger = False
2715+ else:
2716+ iftrigger = True
2717+ elif name == 'elif':
2718+ if ifstack:
2719+ if ifstack[-1][0]: # We only pay attention if outer "if" allows this
2720+ if enable: # If already true, we flip enable False
2721+ enable = False
2722+ elif not iftrigger: # If False, but not triggered yet, we'll check expression
2723+ result = self.evalexpr(args)
2724+ if result:
2725+ enable = True
2726+ iftrigger = True
2727+ else:
2728+ self.error(self.source,dirtokens[0].lineno,"Misplaced #elif")
2729+
2730+ elif name == 'else':
2731+ if ifstack:
2732+ if ifstack[-1][0]:
2733+ if enable:
2734+ enable = False
2735+ elif not iftrigger:
2736+ enable = True
2737+ iftrigger = True
2738+ else:
2739+ self.error(self.source,dirtokens[0].lineno,"Misplaced #else")
2740+
2741+ elif name == 'endif':
2742+ if ifstack:
2743+ enable,iftrigger = ifstack.pop()
2744+ else:
2745+ self.error(self.source,dirtokens[0].lineno,"Misplaced #endif")
2746+ else:
2747+ # Unknown preprocessor directive
2748+ pass
2749+
2750+ else:
2751+ # Normal text
2752+ if enable:
2753+ chunk.extend(x)
2754+
2755+ for tok in self.expand_macros(chunk):
2756+ yield tok
2757+ chunk = []
2758+
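The parsegen() generator above drives conditional compilation with a pair of flags, `enable` (is output currently emitted?) and `iftrigger` (has some branch of the current conditional already fired?), saving and restoring them on `ifstack` at each `#if`/`#endif`. A minimal, self-contained sketch of that bookkeeping, reduced to pre-evaluated directives (names and the tuple encoding here are illustrative, not part of the diff):

```python
# Toy model of the (enable, iftrigger) state machine in parsegen() above.
# Directives arrive as ("if", bool) / ("elif", bool) / ("else",) / ("endif",),
# interleaved with ("text", str); returns the text items that survive.
def preprocess(items):
    out, ifstack, enable, iftrigger = [], [], True, False
    for item in items:
        kind = item[0]
        if kind == "if":
            ifstack.append((enable, iftrigger))
            if enable:
                enable = iftrigger = item[1]
        elif kind == "elif":
            if ifstack[-1][0]:           # only relevant if the outer block is live
                if enable:               # a branch already emitted: disable the rest
                    enable = False
                elif not iftrigger and item[1]:
                    enable = iftrigger = True
        elif kind == "else":
            if ifstack[-1][0]:
                if enable:
                    enable = False
                elif not iftrigger:      # no branch fired yet: take the else
                    enable = iftrigger = True
        elif kind == "endif":
            enable, iftrigger = ifstack.pop()
        elif kind == "text" and enable:
            out.append(item[1])
    return out

print(preprocess([("if", False), ("text", "a"),
                  ("elif", True), ("text", "b"),
                  ("else",), ("text", "c"), ("endif",),
                  ("text", "d")]))       # -> ['b', 'd']
```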
2759+ # ----------------------------------------------------------------------
2760+ # include()
2761+ #
2762+ # Implementation of file-inclusion
2763+ # ----------------------------------------------------------------------
2764+
2765+ def include(self,tokens):
2766+ # Try to extract the filename and then process an include file
2767+ if not tokens:
2768+ return
2769+ if tokens:
2770+ if tokens[0].value != '<' and tokens[0].type != self.t_STRING:
2771+ tokens = self.expand_macros(tokens)
2772+
2773+ if tokens[0].value == '<':
2774+ # Include <...>
2775+ i = 1
2776+ while i < len(tokens):
2777+ if tokens[i].value == '>':
2778+ break
2779+ i += 1
2780+ else:
2781+ print "Malformed #include <...>"
2782+ return
2783+ filename = "".join([x.value for x in tokens[1:i]])
2784+ path = self.path + [""] + self.temp_path
2785+ elif tokens[0].type == self.t_STRING:
2786+ filename = tokens[0].value[1:-1]
2787+ path = self.temp_path + [""] + self.path
2788+ else:
2789+ print "Malformed #include statement"
2790+ return
2791+ for p in path:
2792+ iname = os.path.join(p,filename)
2793+ try:
2794+ data = open(iname,"r").read()
2795+ dname = os.path.dirname(iname)
2796+ if dname:
2797+ self.temp_path.insert(0,dname)
2798+ for tok in self.parsegen(data,filename):
2799+ yield tok
2800+ if dname:
2801+ del self.temp_path[0]
2802+ break
2803+ except IOError,e:
2804+ pass
2805+ else:
2806+ print "Couldn't find '%s'" % filename
2807+
2808+ # ----------------------------------------------------------------------
2809+ # define()
2810+ #
2811+ # Define a new macro
2812+ # ----------------------------------------------------------------------
2813+
2814+ def define(self,tokens):
2815+ if isinstance(tokens,(str,unicode)):
2816+ tokens = self.tokenize(tokens)
2817+
2818+ linetok = tokens
2819+ try:
2820+ name = linetok[0]
2821+ if len(linetok) > 1:
2822+ mtype = linetok[1]
2823+ else:
2824+ mtype = None
2825+ if not mtype:
2826+ m = Macro(name.value,[])
2827+ self.macros[name.value] = m
2828+ elif mtype.type in self.t_WS:
2829+ # A normal macro
2830+ m = Macro(name.value,self.tokenstrip(linetok[2:]))
2831+ self.macros[name.value] = m
2832+ elif mtype.value == '(':
2833+ # A macro with arguments
2834+ tokcount, args, positions = self.collect_args(linetok[1:])
2835+ variadic = False
2836+ for a in args:
2837+ if variadic:
2838+ print "No more arguments may follow a variadic argument"
2839+ break
2840+ astr = "".join([str(_i.value) for _i in a])
2841+ if astr == "...":
2842+ variadic = True
2843+ a[0].type = self.t_ID
2844+ a[0].value = '__VA_ARGS__'
2845+ variadic = True
2846+ del a[1:]
2847+ continue
2848+ elif astr[-3:] == "..." and a[0].type == self.t_ID:
2849+ variadic = True
2850+ del a[1:]
2851+ # If, for some reason, "." is part of the identifier, strip off the name for the purposes
2852+ # of macro expansion
2853+ if a[0].value[-3:] == '...':
2854+ a[0].value = a[0].value[:-3]
2855+ continue
2856+ if len(a) > 1 or a[0].type != self.t_ID:
2857+ print "Invalid macro argument"
2858+ break
2859+ else:
2860+ mvalue = self.tokenstrip(linetok[1+tokcount:])
2861+ i = 0
2862+ while i < len(mvalue):
2863+ if i+1 < len(mvalue):
2864+ if mvalue[i].type in self.t_WS and mvalue[i+1].value == '##':
2865+ del mvalue[i]
2866+ continue
2867+ elif mvalue[i].value == '##' and mvalue[i+1].type in self.t_WS:
2868+ del mvalue[i+1]
2869+ i += 1
2870+ m = Macro(name.value,mvalue,[x[0].value for x in args],variadic)
2871+ self.macro_prescan(m)
2872+ self.macros[name.value] = m
2873+ else:
2874+ print "Bad macro definition"
2875+ except LookupError:
2876+ print "Bad macro definition"
2877+
2878+ # ----------------------------------------------------------------------
2879+ # undef()
2880+ #
2881+ # Undefine a macro
2882+ # ----------------------------------------------------------------------
2883+
2884+ def undef(self,tokens):
2885+ id = tokens[0].value
2886+ try:
2887+ del self.macros[id]
2888+ except LookupError:
2889+ pass
2890+
2891+ # ----------------------------------------------------------------------
2892+ # parse()
2893+ #
2894+ # Parse input text.
2895+ # ----------------------------------------------------------------------
2896+ def parse(self,input,source=None,ignore={}):
2897+ self.ignore = ignore
2898+ self.parser = self.parsegen(input,source)
2899+
2900+ # ----------------------------------------------------------------------
2901+ # token()
2902+ #
2903+ # Method to return individual tokens
2904+ # ----------------------------------------------------------------------
2905+ def token(self):
2906+ try:
2907+ while True:
2908+ tok = self.parser.next()
2909+ if tok.type not in self.ignore: return tok
2910+ except StopIteration:
2911+ self.parser = None
2912+ return None
2913+
2914+if __name__ == '__main__':
2915+ import ply.lex as lex
2916+ lexer = lex.lex()
2917+
2918+ # Run a preprocessor
2919+ import sys
2920+ f = open(sys.argv[1])
2921+ input = f.read()
2922+
2923+ p = Preprocessor(lexer)
2924+ p.parse(input,sys.argv[1])
2925+ while True:
2926+ tok = p.token()
2927+ if not tok: break
2928+ print p.source, tok
2929+
2930+
2931+
2932+
2933+
2934+
2935+
2936+
2937+
2938+
2939+
2940
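The expand_macros() routine in cpp.py above guards against runaway self-referential macros by threading an `expanded` table through its recursive calls and removing each macro name once its expansion is complete. A toy, self-contained sketch of that recursion guard using plain string macros (hypothetical names, not part of the diff):

```python
# Illustration of the recursion guard in expand_macros(): 'expanded'
# tracks macros currently being expanded, so a macro that refers to
# itself (directly or via another macro) is left un-expanded.
def expand(text, macros, expanded=None):
    if expanded is None:
        expanded = set()
    out = []
    for word in text.split():
        if word in macros and word not in expanded:
            expanded.add(word)                       # mark as in-progress
            out.append(expand(macros[word], macros, expanded))
            expanded.discard(word)                   # finished, allow reuse later
        else:
            out.append(word)
    return " ".join(out)

# A and B reference each other; the expansion still terminates.
print(expand("A", {"A": "1 B", "B": "2 A"}))         # -> 1 2 A
```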
2941=== added file 'boots/lib/ui/cui/lexers/ply/ctokens.py'
2942--- boots/lib/ui/cui/lexers/ply/ctokens.py 1970-01-01 00:00:00 +0000
2943+++ boots/lib/ui/cui/lexers/ply/ctokens.py 2011-03-26 12:15:52 +0000
2944@@ -0,0 +1,133 @@
2945+# ----------------------------------------------------------------------
2946+# ctokens.py
2947+#
2948+# Token specifications for symbols in ANSI C and C++. This file is
2949+# meant to be used as a library in other tokenizers.
2950+# ----------------------------------------------------------------------
2951+
2952+# Reserved words
2953+
2954+tokens = [
2955+ # Literals (identifier, integer constant, float constant, string constant, char const)
 2956+    'ID', 'TYPEID', 'INTEGER', 'FLOAT', 'STRING', 'CHARACTER',
2957+
2958+ # Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
 2959+    'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MODULO',
2960+ 'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
2961+ 'LOR', 'LAND', 'LNOT',
2962+ 'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',
2963+
2964+ # Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
2965+ 'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
2966+ 'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',
2967+
2968+ # Increment/decrement (++,--)
 2969+    'INCREMENT', 'DECREMENT',
2970+
2971+ # Structure dereference (->)
2972+ 'ARROW',
2973+
2974+ # Ternary operator (?)
2975+ 'TERNARY',
2976+
 2977+    # Delimiters ( ) [ ] { } , . ; :
2978+ 'LPAREN', 'RPAREN',
2979+ 'LBRACKET', 'RBRACKET',
2980+ 'LBRACE', 'RBRACE',
2981+ 'COMMA', 'PERIOD', 'SEMI', 'COLON',
2982+
2983+ # Ellipsis (...)
2984+ 'ELLIPSIS',
2985+]
2986+
2987+# Operators
2988+t_PLUS = r'\+'
2989+t_MINUS = r'-'
2990+t_TIMES = r'\*'
2991+t_DIVIDE = r'/'
2992+t_MODULO = r'%'
2993+t_OR = r'\|'
2994+t_AND = r'&'
2995+t_NOT = r'~'
2996+t_XOR = r'\^'
2997+t_LSHIFT = r'<<'
2998+t_RSHIFT = r'>>'
2999+t_LOR = r'\|\|'
3000+t_LAND = r'&&'
3001+t_LNOT = r'!'
3002+t_LT = r'<'
3003+t_GT = r'>'
3004+t_LE = r'<='
3005+t_GE = r'>='
3006+t_EQ = r'=='
3007+t_NE = r'!='
3008+
3009+# Assignment operators
3010+
3011+t_EQUALS = r'='
3012+t_TIMESEQUAL = r'\*='
3013+t_DIVEQUAL = r'/='
3014+t_MODEQUAL = r'%='
3015+t_PLUSEQUAL = r'\+='
3016+t_MINUSEQUAL = r'-='
3017+t_LSHIFTEQUAL = r'<<='
3018+t_RSHIFTEQUAL = r'>>='
3019+t_ANDEQUAL = r'&='
3020+t_OREQUAL = r'\|='
 3021+t_XOREQUAL = r'\^='
3022+
3023+# Increment/decrement
3024+t_INCREMENT = r'\+\+'
3025+t_DECREMENT = r'--'
3026+
3027+# ->
3028+t_ARROW = r'->'
3029+
3030+# ?
3031+t_TERNARY = r'\?'
3032+
 3033+# Delimiters
3034+t_LPAREN = r'\('
3035+t_RPAREN = r'\)'
3036+t_LBRACKET = r'\['
3037+t_RBRACKET = r'\]'
3038+t_LBRACE = r'\{'
3039+t_RBRACE = r'\}'
3040+t_COMMA = r','
3041+t_PERIOD = r'\.'
3042+t_SEMI = r';'
3043+t_COLON = r':'
3044+t_ELLIPSIS = r'\.\.\.'
3045+
3046+# Identifiers
3047+t_ID = r'[A-Za-z_][A-Za-z0-9_]*'
3048+
3049+# Integer literal
3050+t_INTEGER = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?'
3051+
3052+# Floating literal
3053+t_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
3054+
3055+# String literal
3056+t_STRING = r'\"([^\\\n]|(\\.))*?\"'
3057+
3058+# Character constant 'c' or L'c'
3059+t_CHARACTER = r'(L)?\'([^\\\n]|(\\.))*?\''
3060+
3061+# Comment (C-Style)
3062+def t_COMMENT(t):
3063+ r'/\*(.|\n)*?\*/'
3064+ t.lexer.lineno += t.value.count('\n')
3065+ return t
3066+
3067+# Comment (C++-Style)
3068+def t_CPPCOMMENT(t):
3069+ r'//.*\n'
3070+ t.lexer.lineno += 1
3071+ return t
3072+
3073+
3074+
3075+
3076+
3077+
3078
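In ctokens.py above, every `t_NAME` rule must carry a name declared in the `tokens` list; PLY then joins all the rules into one master regular expression with named groups and dispatches on whichever group matched (see _form_master_re() in lex.py below). A stdlib-only sketch of that master-regex technique, with illustrative token names:

```python
import re

# Each rule becomes a named group; the group name doubles as the token
# type, mirroring how lex.py maps matched groups back to rules.
token_specs = [
    ("ICONST", r"\d+"),                     # integer constant
    ("ID",     r"[A-Za-z_][A-Za-z0-9_]*"),  # identifier
    ("PLUS",   r"\+"),
    ("WS",     r"\s+"),                     # ignored whitespace
]
master = re.compile("|".join("(?P<%s>%s)" % (n, p) for n, p in token_specs))

def tokenize(s):
    for m in master.finditer(s):
        if m.lastgroup != "WS":             # drop ignored tokens
            yield (m.lastgroup, m.group())

print(list(tokenize("x + 42")))  # -> [('ID', 'x'), ('PLUS', '+'), ('ICONST', '42')]
```

Note that alternation order matters: a longer pattern such as `ELLIPSIS` must be tried before a shorter prefix such as `PERIOD`, which is why PLY sorts string rules by decreasing length.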
3079=== added file 'boots/lib/ui/cui/lexers/ply/lex.py'
3080--- boots/lib/ui/cui/lexers/ply/lex.py 1970-01-01 00:00:00 +0000
3081+++ boots/lib/ui/cui/lexers/ply/lex.py 2011-03-26 12:15:52 +0000
3082@@ -0,0 +1,1058 @@
3083+# -----------------------------------------------------------------------------
3084+# ply: lex.py
3085+#
3086+# Copyright (C) 2001-2009,
3087+# David M. Beazley (Dabeaz LLC)
3088+# All rights reserved.
3089+#
3090+# Redistribution and use in source and binary forms, with or without
3091+# modification, are permitted provided that the following conditions are
3092+# met:
3093+#
3094+# * Redistributions of source code must retain the above copyright notice,
3095+# this list of conditions and the following disclaimer.
3096+# * Redistributions in binary form must reproduce the above copyright notice,
3097+# this list of conditions and the following disclaimer in the documentation
3098+# and/or other materials provided with the distribution.
3099+# * Neither the name of the David Beazley or Dabeaz LLC may be used to
3100+# endorse or promote products derived from this software without
3101+# specific prior written permission.
3102+#
3103+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
3104+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
3105+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
3106+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
3107+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
3108+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
3109+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
3110+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
3111+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
3112+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
3113+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
3114+# -----------------------------------------------------------------------------
3115+
3116+__version__ = "3.3"
3117+__tabversion__ = "3.2" # Version of table file used
3118+
3119+import re, sys, types, copy, os
3120+
3121+# This tuple contains known string types
3122+try:
3123+ # Python 2.6
3124+ StringTypes = (types.StringType, types.UnicodeType)
3125+except AttributeError:
3126+ # Python 3.0
3127+ StringTypes = (str, bytes)
3128+
3129+# Extract the code attribute of a function. Different implementations
3130+# are for Python 2/3 compatibility.
3131+
3132+if sys.version_info[0] < 3:
3133+ def func_code(f):
3134+ return f.func_code
3135+else:
3136+ def func_code(f):
3137+ return f.__code__
3138+
3139+# This regular expression is used to match valid token names
3140+_is_identifier = re.compile(r'^[a-zA-Z0-9_]+$')
3141+
3142+# Exception thrown when invalid token encountered and no default error
3143+# handler is defined.
3144+
3145+class LexError(Exception):
3146+ def __init__(self,message,s):
3147+ self.args = (message,)
3148+ self.text = s
3149+
3150+# Token class. This class is used to represent the tokens produced.
3151+class LexToken(object):
3152+ def __str__(self):
3153+ return "LexToken(%s,%r,%d,%d)" % (self.type,self.value,self.lineno,self.lexpos)
3154+ def __repr__(self):
3155+ return str(self)
3156+
3157+# This object is a stand-in for a logging object created by the
3158+# logging module.
3159+
3160+class PlyLogger(object):
3161+ def __init__(self,f):
3162+ self.f = f
3163+ def critical(self,msg,*args,**kwargs):
3164+ self.f.write((msg % args) + "\n")
3165+
3166+ def warning(self,msg,*args,**kwargs):
3167+ self.f.write("WARNING: "+ (msg % args) + "\n")
3168+
3169+ def error(self,msg,*args,**kwargs):
3170+ self.f.write("ERROR: " + (msg % args) + "\n")
3171+
3172+ info = critical
3173+ debug = critical
3174+
3175+# Null logger is used when no output is generated. Does nothing.
3176+class NullLogger(object):
3177+ def __getattribute__(self,name):
3178+ return self
3179+ def __call__(self,*args,**kwargs):
3180+ return self
3181+
3182+# -----------------------------------------------------------------------------
3183+# === Lexing Engine ===
3184+#
3185+# The following Lexer class implements the lexer runtime. There are only
3186+# a few public methods and attributes:
3187+#
3188+# input() - Store a new string in the lexer
3189+# token() - Get the next token
3190+# clone() - Clone the lexer
3191+#
3192+# lineno - Current line number
3193+# lexpos - Current position in the input string
3194+# -----------------------------------------------------------------------------
3195+
3196+class Lexer:
3197+ def __init__(self):
3198+ self.lexre = None # Master regular expression. This is a list of
3199+ # tuples (re,findex) where re is a compiled
3200+ # regular expression and findex is a list
3201+ # mapping regex group numbers to rules
3202+ self.lexretext = None # Current regular expression strings
3203+ self.lexstatere = {} # Dictionary mapping lexer states to master regexs
3204+ self.lexstateretext = {} # Dictionary mapping lexer states to regex strings
3205+ self.lexstaterenames = {} # Dictionary mapping lexer states to symbol names
3206+ self.lexstate = "INITIAL" # Current lexer state
3207+ self.lexstatestack = [] # Stack of lexer states
3208+ self.lexstateinfo = None # State information
3209+ self.lexstateignore = {} # Dictionary of ignored characters for each state
3210+ self.lexstateerrorf = {} # Dictionary of error functions for each state
3211+ self.lexreflags = 0 # Optional re compile flags
3212+ self.lexdata = None # Actual input data (as a string)
3213+ self.lexpos = 0 # Current position in input text
3214+ self.lexlen = 0 # Length of the input text
3215+ self.lexerrorf = None # Error rule (if any)
3216+ self.lextokens = None # List of valid tokens
3217+ self.lexignore = "" # Ignored characters
3218+ self.lexliterals = "" # Literal characters that can be passed through
3219+ self.lexmodule = None # Module
3220+ self.lineno = 1 # Current line number
3221+ self.lexoptimize = 0 # Optimized mode
3222+
3223+ def clone(self,object=None):
3224+ c = copy.copy(self)
3225+
3226+ # If the object parameter has been supplied, it means we are attaching the
3227+ # lexer to a new object. In this case, we have to rebind all methods in
3228+ # the lexstatere and lexstateerrorf tables.
3229+
3230+ if object:
3231+ newtab = { }
3232+ for key, ritem in self.lexstatere.items():
3233+ newre = []
3234+ for cre, findex in ritem:
3235+ newfindex = []
3236+ for f in findex:
3237+ if not f or not f[0]:
3238+ newfindex.append(f)
3239+ continue
3240+ newfindex.append((getattr(object,f[0].__name__),f[1]))
3241+ newre.append((cre,newfindex))
3242+ newtab[key] = newre
3243+ c.lexstatere = newtab
3244+ c.lexstateerrorf = { }
3245+ for key, ef in self.lexstateerrorf.items():
3246+ c.lexstateerrorf[key] = getattr(object,ef.__name__)
3247+ c.lexmodule = object
3248+ return c
3249+
3250+ # ------------------------------------------------------------
3251+ # writetab() - Write lexer information to a table file
3252+ # ------------------------------------------------------------
3253+ def writetab(self,tabfile,outputdir=""):
3254+ if isinstance(tabfile,types.ModuleType):
3255+ return
3256+ basetabfilename = tabfile.split(".")[-1]
3257+ filename = os.path.join(outputdir,basetabfilename)+".py"
3258+ tf = open(filename,"w")
3259+ tf.write("# %s.py. This file automatically created by PLY (version %s). Don't edit!\n" % (tabfile,__version__))
3260+ tf.write("_tabversion = %s\n" % repr(__version__))
3261+ tf.write("_lextokens = %s\n" % repr(self.lextokens))
3262+ tf.write("_lexreflags = %s\n" % repr(self.lexreflags))
3263+ tf.write("_lexliterals = %s\n" % repr(self.lexliterals))
3264+ tf.write("_lexstateinfo = %s\n" % repr(self.lexstateinfo))
3265+
3266+ tabre = { }
3267+ # Collect all functions in the initial state
3268+ initial = self.lexstatere["INITIAL"]
3269+ initialfuncs = []
3270+ for part in initial:
3271+ for f in part[1]:
3272+ if f and f[0]:
3273+ initialfuncs.append(f)
3274+
3275+ for key, lre in self.lexstatere.items():
3276+ titem = []
3277+ for i in range(len(lre)):
3278+ titem.append((self.lexstateretext[key][i],_funcs_to_names(lre[i][1],self.lexstaterenames[key][i])))
3279+ tabre[key] = titem
3280+
3281+ tf.write("_lexstatere = %s\n" % repr(tabre))
3282+ tf.write("_lexstateignore = %s\n" % repr(self.lexstateignore))
3283+
3284+ taberr = { }
3285+ for key, ef in self.lexstateerrorf.items():
3286+ if ef:
3287+ taberr[key] = ef.__name__
3288+ else:
3289+ taberr[key] = None
3290+ tf.write("_lexstateerrorf = %s\n" % repr(taberr))
3291+ tf.close()
3292+
3293+ # ------------------------------------------------------------
3294+ # readtab() - Read lexer information from a tab file
3295+ # ------------------------------------------------------------
3296+ def readtab(self,tabfile,fdict):
3297+ if isinstance(tabfile,types.ModuleType):
3298+ lextab = tabfile
3299+ else:
3300+ if sys.version_info[0] < 3:
3301+ exec("import %s as lextab" % tabfile)
3302+ else:
3303+ env = { }
3304+ exec("import %s as lextab" % tabfile, env,env)
3305+ lextab = env['lextab']
3306+
3307+ if getattr(lextab,"_tabversion","0.0") != __version__:
3308+ raise ImportError("Inconsistent PLY version")
3309+
3310+ self.lextokens = lextab._lextokens
3311+ self.lexreflags = lextab._lexreflags
3312+ self.lexliterals = lextab._lexliterals
3313+ self.lexstateinfo = lextab._lexstateinfo
3314+ self.lexstateignore = lextab._lexstateignore
3315+ self.lexstatere = { }
3316+ self.lexstateretext = { }
3317+ for key,lre in lextab._lexstatere.items():
3318+ titem = []
3319+ txtitem = []
3320+ for i in range(len(lre)):
3321+ titem.append((re.compile(lre[i][0],lextab._lexreflags | re.VERBOSE),_names_to_funcs(lre[i][1],fdict)))
3322+ txtitem.append(lre[i][0])
3323+ self.lexstatere[key] = titem
3324+ self.lexstateretext[key] = txtitem
3325+ self.lexstateerrorf = { }
3326+ for key,ef in lextab._lexstateerrorf.items():
3327+ self.lexstateerrorf[key] = fdict[ef]
3328+ self.begin('INITIAL')
3329+
3330+ # ------------------------------------------------------------
3331+ # input() - Push a new string into the lexer
3332+ # ------------------------------------------------------------
3333+ def input(self,s):
3334+ # Pull off the first character to see if s looks like a string
3335+ c = s[:1]
3336+ if not isinstance(c,StringTypes):
3337+ raise ValueError("Expected a string")
3338+ self.lexdata = s
3339+ self.lexpos = 0
3340+ self.lexlen = len(s)
3341+
3342+ # ------------------------------------------------------------
3343+ # begin() - Changes the lexing state
3344+ # ------------------------------------------------------------
3345+ def begin(self,state):
3346+ if not state in self.lexstatere:
3347+ raise ValueError("Undefined state")
3348+ self.lexre = self.lexstatere[state]
3349+ self.lexretext = self.lexstateretext[state]
3350+ self.lexignore = self.lexstateignore.get(state,"")
3351+ self.lexerrorf = self.lexstateerrorf.get(state,None)
3352+ self.lexstate = state
3353+
3354+ # ------------------------------------------------------------
3355+ # push_state() - Changes the lexing state and saves old on stack
3356+ # ------------------------------------------------------------
3357+ def push_state(self,state):
3358+ self.lexstatestack.append(self.lexstate)
3359+ self.begin(state)
3360+
3361+ # ------------------------------------------------------------
3362+ # pop_state() - Restores the previous state
3363+ # ------------------------------------------------------------
3364+ def pop_state(self):
3365+ self.begin(self.lexstatestack.pop())
3366+
3367+ # ------------------------------------------------------------
3368+ # current_state() - Returns the current lexing state
3369+ # ------------------------------------------------------------
3370+ def current_state(self):
3371+ return self.lexstate
3372+
3373+ # ------------------------------------------------------------
3374+ # skip() - Skip ahead n characters
3375+ # ------------------------------------------------------------
3376+ def skip(self,n):
3377+ self.lexpos += n
3378+
3379+ # ------------------------------------------------------------
3380+ # opttoken() - Return the next token from the Lexer
3381+ #
3382+ # Note: This function has been carefully implemented to be as fast
3383+ # as possible. Don't make changes unless you really know what
3384+ # you are doing
3385+ # ------------------------------------------------------------
3386+ def token(self):
3387+ # Make local copies of frequently referenced attributes
3388+ lexpos = self.lexpos
3389+ lexlen = self.lexlen
3390+ lexignore = self.lexignore
3391+ lexdata = self.lexdata
3392+
3393+ while lexpos < lexlen:
3394+ # This code provides some short-circuit code for whitespace, tabs, and other ignored characters
3395+ if lexdata[lexpos] in lexignore:
3396+ lexpos += 1
3397+ continue
3398+
3399+ # Look for a regular expression match
3400+ for lexre,lexindexfunc in self.lexre:
3401+ m = lexre.match(lexdata,lexpos)
3402+ if not m: continue
3403+
3404+ # Create a token for return
3405+ tok = LexToken()
3406+ tok.value = m.group()
3407+ tok.lineno = self.lineno
3408+ tok.lexpos = lexpos
3409+
3410+ i = m.lastindex
3411+ func,tok.type = lexindexfunc[i]
3412+
3413+ if not func:
3414+ # If no token type was set, it's an ignored token
3415+ if tok.type:
3416+ self.lexpos = m.end()
3417+ return tok
3418+ else:
3419+ lexpos = m.end()
3420+ break
3421+
3422+ lexpos = m.end()
3423+
3424+ # If token is processed by a function, call it
3425+
3426+ tok.lexer = self # Set additional attributes useful in token rules
3427+ self.lexmatch = m
3428+ self.lexpos = lexpos
3429+
3430+ newtok = func(tok)
3431+
 3432+            # Every function must return a token; if it returns nothing, we just move on to the next token
3433+ if not newtok:
3434+ lexpos = self.lexpos # This is here in case user has updated lexpos.
3435+ lexignore = self.lexignore # This is here in case there was a state change
3436+ break
3437+
3438+ # Verify type of the token. If not in the token map, raise an error
3439+ if not self.lexoptimize:
3440+ if not newtok.type in self.lextokens:
3441+ raise LexError("%s:%d: Rule '%s' returned an unknown token type '%s'" % (
3442+ func_code(func).co_filename, func_code(func).co_firstlineno,
3443+ func.__name__, newtok.type),lexdata[lexpos:])
3444+
3445+ return newtok
3446+ else:
3447+ # No match, see if in literals
3448+ if lexdata[lexpos] in self.lexliterals:
3449+ tok = LexToken()
3450+ tok.value = lexdata[lexpos]
3451+ tok.lineno = self.lineno
3452+ tok.type = tok.value
3453+ tok.lexpos = lexpos
3454+ self.lexpos = lexpos + 1
3455+ return tok
3456+
3457+ # No match. Call t_error() if defined.
3458+ if self.lexerrorf:
3459+ tok = LexToken()
3460+ tok.value = self.lexdata[lexpos:]
3461+ tok.lineno = self.lineno
3462+ tok.type = "error"
3463+ tok.lexer = self
3464+ tok.lexpos = lexpos
3465+ self.lexpos = lexpos
3466+ newtok = self.lexerrorf(tok)
3467+ if lexpos == self.lexpos:
3468+ # Error method didn't change text position at all. This is an error.
3469+ raise LexError("Scanning error. Illegal character '%s'" % (lexdata[lexpos]), lexdata[lexpos:])
3470+ lexpos = self.lexpos
3471+ if not newtok: continue
3472+ return newtok
3473+
3474+ self.lexpos = lexpos
3475+ raise LexError("Illegal character '%s' at index %d" % (lexdata[lexpos],lexpos), lexdata[lexpos:])
3476+
3477+ self.lexpos = lexpos + 1
3478+ if self.lexdata is None:
3479+ raise RuntimeError("No input string given with input()")
3480+ return None
3481+
3482+ # Iterator interface
3483+ def __iter__(self):
3484+ return self
3485+
3486+ def next(self):
3487+ t = self.token()
3488+ if t is None:
3489+ raise StopIteration
3490+ return t
3491+
3492+ __next__ = next
3493+
3494+# -----------------------------------------------------------------------------
 3495+# === Lex Builder ===
3496+#
3497+# The functions and classes below are used to collect lexing information
3498+# and build a Lexer object from it.
3499+# -----------------------------------------------------------------------------
3500+
3501+# -----------------------------------------------------------------------------
3502+# get_caller_module_dict()
3503+#
3504+# This function returns a dictionary containing all of the symbols defined within
3505+# a caller further down the call stack. This is used to get the environment
3506+# associated with the yacc() call if none was provided.
3507+# -----------------------------------------------------------------------------
3508+
3509+def get_caller_module_dict(levels):
3510+ try:
3511+ raise RuntimeError
3512+ except RuntimeError:
3513+ e,b,t = sys.exc_info()
3514+ f = t.tb_frame
3515+ while levels > 0:
3516+ f = f.f_back
3517+ levels -= 1
3518+ ldict = f.f_globals.copy()
3519+ if f.f_globals != f.f_locals:
3520+ ldict.update(f.f_locals)
3521+
3522+ return ldict
3523+
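get_caller_module_dict() above obtains a caller's namespace by raising and catching an exception to grab a traceback frame, then walking `f_back` the requested number of levels. A CPython-only sketch of the same idea using `sys._getframe` (the helper names here are illustrative, not PLY's API):

```python
import sys

# Climb 'levels' stack frames and merge that caller's globals and locals,
# as get_caller_module_dict() does via a raised-and-caught exception.
def caller_dict(levels):
    f = sys._getframe(levels)        # CPython-specific frame access
    ldict = f.f_globals.copy()
    if f.f_globals is not f.f_locals:
        ldict.update(f.f_locals)     # locals shadow globals, as in PLY
    return ldict

def demo():
    local_name = 42                  # visible to caller_dict(1) from here
    return "local_name" in caller_dict(1)

print(demo())                        # -> True
```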
3524+# -----------------------------------------------------------------------------
3525+# _funcs_to_names()
3526+#
3527+# Given a list of regular expression functions, this converts it to a list
3528+# suitable for output to a table file
3529+# -----------------------------------------------------------------------------
3530+
3531+def _funcs_to_names(funclist,namelist):
3532+ result = []
3533+ for f,name in zip(funclist,namelist):
3534+ if f and f[0]:
3535+ result.append((name, f[1]))
3536+ else:
3537+ result.append(f)
3538+ return result
3539+
3540+# -----------------------------------------------------------------------------
3541+# _names_to_funcs()
3542+#
3543+# Given a list of regular expression function names, this converts it back to
3544+# functions.
3545+# -----------------------------------------------------------------------------
3546+
3547+def _names_to_funcs(namelist,fdict):
3548+ result = []
3549+ for n in namelist:
3550+ if n and n[0]:
3551+ result.append((fdict[n[0]],n[1]))
3552+ else:
3553+ result.append(n)
3554+ return result
3555+
3556+# -----------------------------------------------------------------------------
3557+# _form_master_re()
3558+#
3559+# This function takes a list of all of the regex components and attempts to
3560+# form the master regular expression. Given limitations in the Python re
3561+# module, it may be necessary to break the master regex into separate expressions.
3562+# -----------------------------------------------------------------------------
3563+
3564+def _form_master_re(relist,reflags,ldict,toknames):
3565+ if not relist: return []
3566+ regex = "|".join(relist)
3567+ try:
3568+ lexre = re.compile(regex,re.VERBOSE | reflags)
3569+
3570+ # Build the index to function map for the matching engine
3571+ lexindexfunc = [ None ] * (max(lexre.groupindex.values())+1)
3572+ lexindexnames = lexindexfunc[:]
3573+
3574+ for f,i in lexre.groupindex.items():
3575+ handle = ldict.get(f,None)
3576+ if type(handle) in (types.FunctionType, types.MethodType):
3577+ lexindexfunc[i] = (handle,toknames[f])
3578+ lexindexnames[i] = f
3579+ elif handle is not None:
3580+ lexindexnames[i] = f
3581+ if f.find("ignore_") > 0:
3582+ lexindexfunc[i] = (None,None)
3583+ else:
3584+ lexindexfunc[i] = (None, toknames[f])
3585+
3586+ return [(lexre,lexindexfunc)],[regex],[lexindexnames]
3587+ except Exception:
3588+ m = int(len(relist)/2)
3589+ if m == 0: m = 1
3590+ llist, lre, lnames = _form_master_re(relist[:m],reflags,ldict,toknames)
3591+ rlist, rre, rnames = _form_master_re(relist[m:],reflags,ldict,toknames)
3592+ return llist+rlist, lre+rre, lnames+rnames
3593+
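The master-regex idea above can be sketched directly with the `re` module: each rule becomes a named group, `"|"` joins them, and `groupindex` maps group names back to positions. The pattern list below is illustrative, not taken from the module:

```python
import re

# Two hypothetical token rules expressed as named groups.
relist = [r'(?P<t_NUMBER>\d+)', r'(?P<t_ID>[A-Za-z_][A-Za-z0-9_]*)']

# Join into one master pattern, as _form_master_re() attempts first.
master = re.compile("|".join(relist), re.VERBOSE)

m = master.match("42 + spam")
```

`m.lastgroup` names the rule that matched, which is how the lexer dispatches back to per-rule handlers; the recursive split in `_form_master_re()` exists because `re` can reject a single pattern with too many groups.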
3594+# -----------------------------------------------------------------------------
3595+# def _statetoken(s,names)
3596+#
3597+# Given a declaration name s of the form "t_[state1_state2_...]TOKENNAME" and a
3598+# dictionary whose keys are state names, this function returns a tuple
3599+# (states,tokenname) where states is a tuple of state names and tokenname is
3600+# the name of the token. For example, s = "t_foo_bar_SPAM" might return (('foo','bar'),'SPAM')
3601+# -----------------------------------------------------------------------------
3602+
3603+def _statetoken(s,names):
3604+ nonstate = 1
3605+ parts = s.split("_")
3606+ for i in range(1,len(parts)):
3607+ if not parts[i] in names and parts[i] != 'ANY': break
3608+ if i > 1:
3609+ states = tuple(parts[1:i])
3610+ else:
3611+ states = ('INITIAL',)
3612+
3613+ if 'ANY' in states:
3614+ states = tuple(names)
3615+
3616+ tokenname = "_".join(parts[i:])
3617+ return (states,tokenname)
3618+
3619+
3620+# -----------------------------------------------------------------------------
3621+# LexerReflect()
3622+#
3623+# This class represents information needed to build a lexer as extracted from a
3624+# user's input file.
3625+# -----------------------------------------------------------------------------
3626+class LexerReflect(object):
3627+ def __init__(self,ldict,log=None,reflags=0):
3628+ self.ldict = ldict
3629+ self.error_func = None
3630+ self.tokens = []
3631+ self.reflags = reflags
3632+ self.stateinfo = { 'INITIAL' : 'inclusive'}
3633+ self.files = {}
3634+ self.error = 0
3635+
3636+ if log is None:
3637+ self.log = PlyLogger(sys.stderr)
3638+ else:
3639+ self.log = log
3640+
3641+ # Get all of the basic information
3642+ def get_all(self):
3643+ self.get_tokens()
3644+ self.get_literals()
3645+ self.get_states()
3646+ self.get_rules()
3647+
3648+ # Validate all of the information
3649+ def validate_all(self):
3650+ self.validate_tokens()
3651+ self.validate_literals()
3652+ self.validate_rules()
3653+ return self.error
3654+
3655+ # Get the tokens map
3656+ def get_tokens(self):
3657+ tokens = self.ldict.get("tokens",None)
3658+ if not tokens:
3659+ self.log.error("No token list is defined")
3660+ self.error = 1
3661+ return
3662+
3663+ if not isinstance(tokens,(list, tuple)):
3664+ self.log.error("tokens must be a list or tuple")
3665+ self.error = 1
3666+ return
3667+
3668+ if not tokens:
3669+ self.log.error("tokens is empty")
3670+ self.error = 1
3671+ return
3672+
3673+ self.tokens = tokens
3674+
3675+ # Validate the tokens
3676+ def validate_tokens(self):
3677+ terminals = {}
3678+ for n in self.tokens:
3679+ if not _is_identifier.match(n):
3680+ self.log.error("Bad token name '%s'",n)
3681+ self.error = 1
3682+ if n in terminals:
3683+ self.log.warning("Token '%s' multiply defined", n)
3684+ terminals[n] = 1
3685+
3686+ # Get the literals specifier
3687+ def get_literals(self):
3688+ self.literals = self.ldict.get("literals","")
3689+
3690+ # Validate literals
3691+ def validate_literals(self):
3692+ try:
3693+ for c in self.literals:
3694+ if not isinstance(c,StringTypes) or len(c) > 1:
3695+ self.log.error("Invalid literal %s. Must be a single character", repr(c))
3696+ self.error = 1
3697+ continue
3698+
3699+ except TypeError:
3700+ self.log.error("Invalid literals specification. literals must be a sequence of characters")
3701+ self.error = 1
3702+
3703+ def get_states(self):
3704+ self.states = self.ldict.get("states",None)
3705+ # Build statemap
3706+ if self.states:
3707+ if not isinstance(self.states,(tuple,list)):
3708+ self.log.error("states must be defined as a tuple or list")
3709+ self.error = 1
3710+ else:
3711+ for s in self.states:
3712+ if not isinstance(s,tuple) or len(s) != 2:
3713+ self.log.error("Invalid state specifier %s. Must be a tuple (statename,'exclusive|inclusive')",repr(s))
3714+ self.error = 1
3715+ continue
3716+ name, statetype = s
3717+ if not isinstance(name,StringTypes):
3718+ self.log.error("State name %s must be a string", repr(name))
3719+ self.error = 1
3720+ continue
3721+ if not (statetype == 'inclusive' or statetype == 'exclusive'):
3722+ self.log.error("State type for state %s must be 'inclusive' or 'exclusive'",name)
3723+ self.error = 1
3724+ continue
3725+ if name in self.stateinfo:
3726+ self.log.error("State '%s' already defined",name)
3727+ self.error = 1
3728+ continue
3729+ self.stateinfo[name] = statetype
3730+
3731+ # Get all of the symbols with a t_ prefix and sort them into various
3732+ # categories (functions, strings, error functions, and ignore characters)
3733+
3734+ def get_rules(self):
3735+ tsymbols = [f for f in self.ldict if f[:2] == 't_' ]
3736+
3737+ # Now build up a list of functions and a list of strings
3738+
3739+ self.toknames = { } # Mapping of symbols to token names
3740+ self.funcsym = { } # Symbols defined as functions
3741+ self.strsym = { } # Symbols defined as strings
3742+ self.ignore = { } # Ignore strings by state
3743+ self.errorf = { } # Error functions by state
3744+
3745+ for s in self.stateinfo:
3746+ self.funcsym[s] = []
3747+ self.strsym[s] = []
3748+
3749+ if len(tsymbols) == 0:
3750+ self.log.error("No rules of the form t_rulename are defined")
3751+ self.error = 1
3752+ return
3753+
3754+ for f in tsymbols:
3755+ t = self.ldict[f]
3756+ states, tokname = _statetoken(f,self.stateinfo)
3757+ self.toknames[f] = tokname
3758+
3759+ if hasattr(t,"__call__"):
3760+ if tokname == 'error':
3761+ for s in states:
3762+ self.errorf[s] = t
3763+ elif tokname == 'ignore':
3764+ line = func_code(t).co_firstlineno
3765+ file = func_code(t).co_filename
3766+ self.log.error("%s:%d: Rule '%s' must be defined as a string",file,line,t.__name__)
3767+ self.error = 1
3768+ else:
3769+ for s in states:
3770+ self.funcsym[s].append((f,t))
3771+ elif isinstance(t, StringTypes):
3772+ if tokname == 'ignore':
3773+ for s in states:
3774+ self.ignore[s] = t
3775+ if "\\" in t:
3776+ self.log.warning("%s contains a literal backslash '\\'",f)
3777+
3778+ elif tokname == 'error':
3779+ self.log.error("Rule '%s' must be defined as a function", f)
3780+ self.error = 1
3781+ else:
3782+ for s in states:
3783+ self.strsym[s].append((f,t))
3784+ else:
3785+ self.log.error("%s not defined as a function or string", f)
3786+ self.error = 1
3787+
3788+ # Sort the functions by line number
3789+ for f in self.funcsym.values():
3790+ if sys.version_info[0] < 3:
3791+ f.sort(lambda x,y: cmp(func_code(x[1]).co_firstlineno,func_code(y[1]).co_firstlineno))
3792+ else:
3793+ # Python 3.0
3794+ f.sort(key=lambda x: func_code(x[1]).co_firstlineno)
3795+
3796+ # Sort the strings by regular expression length
3797+ for s in self.strsym.values():
3798+ if sys.version_info[0] < 3:
3799+ s.sort(lambda x,y: (len(x[1]) < len(y[1])) - (len(x[1]) > len(y[1])))
3800+ else:
3801+ # Python 3.0
3802+ s.sort(key=lambda x: len(x[1]),reverse=True)
3803+
3804+ # Validate all of the t_rules collected
3805+ def validate_rules(self):
3806+ for state in self.stateinfo:
3807+ # Validate all rules defined by functions
3808+
3811+ for fname, f in self.funcsym[state]:
3812+ line = func_code(f).co_firstlineno
3813+ file = func_code(f).co_filename
3814+ self.files[file] = 1
3815+
3816+ tokname = self.toknames[fname]
3817+ if isinstance(f, types.MethodType):
3818+ reqargs = 2
3819+ else:
3820+ reqargs = 1
3821+ nargs = func_code(f).co_argcount
3822+ if nargs > reqargs:
3823+ self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,f.__name__)
3824+ self.error = 1
3825+ continue
3826+
3827+ if nargs < reqargs:
3828+ self.log.error("%s:%d: Rule '%s' requires an argument", file,line,f.__name__)
3829+ self.error = 1
3830+ continue
3831+
3832+ if not f.__doc__:
3833+ self.log.error("%s:%d: No regular expression defined for rule '%s'",file,line,f.__name__)
3834+ self.error = 1
3835+ continue
3836+
3837+ try:
3838+ c = re.compile("(?P<%s>%s)" % (fname,f.__doc__), re.VERBOSE | self.reflags)
3839+ if c.match(""):
3840+ self.log.error("%s:%d: Regular expression for rule '%s' matches empty string", file,line,f.__name__)
3841+ self.error = 1
3842+ except re.error:
3843+ _etype, e, _etrace = sys.exc_info()
3844+ self.log.error("%s:%d: Invalid regular expression for rule '%s'. %s", file,line,f.__name__,e)
3845+ if '#' in f.__doc__:
3846+ self.log.error("%s:%d. Make sure '#' in rule '%s' is escaped with '\\#'",file,line, f.__name__)
3847+ self.error = 1
3848+
3849+ # Validate all rules defined by strings
3850+ for name,r in self.strsym[state]:
3851+ tokname = self.toknames[name]
3852+ if tokname == 'error':
3853+ self.log.error("Rule '%s' must be defined as a function", name)
3854+ self.error = 1
3855+ continue
3856+
3857+ if not tokname in self.tokens and tokname.find("ignore_") < 0:
3858+ self.log.error("Rule '%s' defined for an unspecified token %s",name,tokname)
3859+ self.error = 1
3860+ continue
3861+
3862+ try:
3863+ c = re.compile("(?P<%s>%s)" % (name,r),re.VERBOSE | self.reflags)
3864+ if (c.match("")):
3865+ self.log.error("Regular expression for rule '%s' matches empty string",name)
3866+ self.error = 1
3867+ except re.error:
3868+ _etype, e, _etrace = sys.exc_info()
3869+ self.log.error("Invalid regular expression for rule '%s'. %s",name,e)
3870+ if '#' in r:
3871+ self.log.error("Make sure '#' in rule '%s' is escaped with '\\#'",name)
3872+ self.error = 1
3873+
3874+ if not self.funcsym[state] and not self.strsym[state]:
3875+ self.log.error("No rules defined for state '%s'",state)
3876+ self.error = 1
3877+
3878+ # Validate the error function
3879+ efunc = self.errorf.get(state,None)
3880+ if efunc:
3881+ f = efunc
3882+ line = func_code(f).co_firstlineno
3883+ file = func_code(f).co_filename
3884+ self.files[file] = 1
3885+
3886+ if isinstance(f, types.MethodType):
3887+ reqargs = 2
3888+ else:
3889+ reqargs = 1
3890+ nargs = func_code(f).co_argcount
3891+ if nargs > reqargs:
3892+ self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,f.__name__)
3893+ self.error = 1
3894+
3895+ if nargs < reqargs:
3896+ self.log.error("%s:%d: Rule '%s' requires an argument", file,line,f.__name__)
3897+ self.error = 1
3898+
3899+ for f in self.files:
3900+ self.validate_file(f)
3901+
3902+
3903+ # -----------------------------------------------------------------------------
3904+ # validate_file()
3905+ #
3906+ # This checks to see if there are duplicated t_rulename() functions or strings
3907+ # in the parser input file. This is done using a simple regular expression
3908+ # match on each line in the given file.
3909+ # -----------------------------------------------------------------------------
3910+
3911+ def validate_file(self,filename):
3912+ import os.path
3913+ base,ext = os.path.splitext(filename)
3914+ if ext != '.py': return # No idea what the file is. Return OK
3915+
3916+ try:
3917+ f = open(filename)
3918+ lines = f.readlines()
3919+ f.close()
3920+ except IOError:
3921+ return # Couldn't find the file. Don't worry about it
3922+
3923+ fre = re.compile(r'\s*def\s+(t_[a-zA-Z_0-9]*)\(')
3924+ sre = re.compile(r'\s*(t_[a-zA-Z_0-9]*)\s*=')
3925+
3926+ counthash = { }
3927+ linen = 1
3928+ for l in lines:
3929+ m = fre.match(l)
3930+ if not m:
3931+ m = sre.match(l)
3932+ if m:
3933+ name = m.group(1)
3934+ prev = counthash.get(name)
3935+ if not prev:
3936+ counthash[name] = linen
3937+ else:
3938+ self.log.error("%s:%d: Rule %s redefined. Previously defined on line %d",filename,linen,name,prev)
3939+ self.error = 1
3940+ linen += 1
3941+
3942+# -----------------------------------------------------------------------------
3943+# lex(module)
3944+#
3945+# Build all of the regular expression rules from definitions in the supplied module
3946+# -----------------------------------------------------------------------------
3947+def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,nowarn=0,outputdir="", debuglog=None, errorlog=None):
3948+ global lexer
3949+ ldict = None
3950+ stateinfo = { 'INITIAL' : 'inclusive'}
3951+ lexobj = Lexer()
3952+ lexobj.lexoptimize = optimize
3953+ global token,input
3954+
3955+ if errorlog is None:
3956+ errorlog = PlyLogger(sys.stderr)
3957+
3958+ if debug:
3959+ if debuglog is None:
3960+ debuglog = PlyLogger(sys.stderr)
3961+
3962+ # Get the module dictionary used for the lexer
3963+ if object: module = object
3964+
3965+ if module:
3966+ _items = [(k,getattr(module,k)) for k in dir(module)]
3967+ ldict = dict(_items)
3968+ else:
3969+ ldict = get_caller_module_dict(2)
3970+
3971+ # Collect parser information from the dictionary
3972+ linfo = LexerReflect(ldict,log=errorlog,reflags=reflags)
3973+ linfo.get_all()
3974+ if not optimize:
3975+ if linfo.validate_all():
3976+ raise SyntaxError("Can't build lexer")
3977+
3978+ if optimize and lextab:
3979+ try:
3980+ lexobj.readtab(lextab,ldict)
3981+ token = lexobj.token
3982+ input = lexobj.input
3983+ lexer = lexobj
3984+ return lexobj
3985+
3986+ except ImportError:
3987+ pass
3988+
3989+ # Dump some basic debugging information
3990+ if debug:
3991+ debuglog.info("lex: tokens = %r", linfo.tokens)
3992+ debuglog.info("lex: literals = %r", linfo.literals)
3993+ debuglog.info("lex: states = %r", linfo.stateinfo)
3994+
3995+ # Build a dictionary of valid token names
3996+ lexobj.lextokens = { }
3997+ for n in linfo.tokens:
3998+ lexobj.lextokens[n] = 1
3999+
4000+ # Get literals specification
4001+ if isinstance(linfo.literals,(list,tuple)):
4002+ lexobj.lexliterals = type(linfo.literals[0])().join(linfo.literals)
4003+ else:
4004+ lexobj.lexliterals = linfo.literals
4005+
4006+ # Get the stateinfo dictionary
4007+ stateinfo = linfo.stateinfo
4008+
4009+ regexs = { }
4010+ # Build the master regular expressions
4011+ for state in stateinfo:
4012+ regex_list = []
4013+
4014+ # Add rules defined by functions first
4015+ for fname, f in linfo.funcsym[state]:
4016+ line = func_code(f).co_firstlineno
4017+ file = func_code(f).co_filename
4018+ regex_list.append("(?P<%s>%s)" % (fname,f.__doc__))
4019+ if debug:
4020+ debuglog.info("lex: Adding rule %s -> '%s' (state '%s')",fname,f.__doc__, state)
4021+
4022+ # Now add all of the simple rules
4023+ for name,r in linfo.strsym[state]:
4024+ regex_list.append("(?P<%s>%s)" % (name,r))
4025+ if debug:
4026+ debuglog.info("lex: Adding rule %s -> '%s' (state '%s')",name,r, state)
4027+
4028+ regexs[state] = regex_list
4029+
4030+ # Build the master regular expressions
4031+
4032+ if debug:
4033+ debuglog.info("lex: ==== MASTER REGEXS FOLLOW ====")
4034+
4035+ for state in regexs:
4036+ lexre, re_text, re_names = _form_master_re(regexs[state],reflags,ldict,linfo.toknames)
4037+ lexobj.lexstatere[state] = lexre
4038+ lexobj.lexstateretext[state] = re_text
4039+ lexobj.lexstaterenames[state] = re_names
4040+ if debug:
4041+ for i in range(len(re_text)):
4042+ debuglog.info("lex: state '%s' : regex[%d] = '%s'",state, i, re_text[i])
4043+
4044+ # For inclusive states, we need to add the regular expressions from the INITIAL state
4045+ for state,stype in stateinfo.items():
4046+ if state != "INITIAL" and stype == 'inclusive':
4047+ lexobj.lexstatere[state].extend(lexobj.lexstatere['INITIAL'])
4048+ lexobj.lexstateretext[state].extend(lexobj.lexstateretext['INITIAL'])
4049+ lexobj.lexstaterenames[state].extend(lexobj.lexstaterenames['INITIAL'])
4050+
4051+ lexobj.lexstateinfo = stateinfo
4052+ lexobj.lexre = lexobj.lexstatere["INITIAL"]
4053+ lexobj.lexretext = lexobj.lexstateretext["INITIAL"]
4054+ lexobj.lexreflags = reflags
4055+
4056+ # Set up ignore variables
4057+ lexobj.lexstateignore = linfo.ignore
4058+ lexobj.lexignore = lexobj.lexstateignore.get("INITIAL","")
4059+
4060+ # Set up error functions
4061+ lexobj.lexstateerrorf = linfo.errorf
4062+ lexobj.lexerrorf = linfo.errorf.get("INITIAL",None)
4063+ if not lexobj.lexerrorf:
4064+ errorlog.warning("No t_error rule is defined")
4065+
4066+ # Check state information for ignore and error rules
4067+ for s,stype in stateinfo.items():
4068+ if stype == 'exclusive':
4069+ if not s in linfo.errorf:
4070+ errorlog.warning("No error rule is defined for exclusive state '%s'", s)
4071+ if not s in linfo.ignore and lexobj.lexignore:
4072+ errorlog.warning("No ignore rule is defined for exclusive state '%s'", s)
4073+ elif stype == 'inclusive':
4074+ if not s in linfo.errorf:
4075+ linfo.errorf[s] = linfo.errorf.get("INITIAL",None)
4076+ if not s in linfo.ignore:
4077+ linfo.ignore[s] = linfo.ignore.get("INITIAL","")
4078+
4079+ # Create global versions of the token() and input() functions
4080+ token = lexobj.token
4081+ input = lexobj.input
4082+ lexer = lexobj
4083+
4084+ # If in optimize mode, we write the lextab
4085+ if lextab and optimize:
4086+ lexobj.writetab(lextab,outputdir)
4087+
4088+ return lexobj
4089+
4090+# -----------------------------------------------------------------------------
4091+# runmain()
4092+#
4093+# This runs the lexer as a main program
4094+# -----------------------------------------------------------------------------
4095+
4096+def runmain(lexer=None,data=None):
4097+ if not data:
4098+ try:
4099+ filename = sys.argv[1]
4100+ f = open(filename)
4101+ data = f.read()
4102+ f.close()
4103+ except IndexError:
4104+ sys.stdout.write("Reading from standard input (type EOF to end):\n")
4105+ data = sys.stdin.read()
4106+
4107+ if lexer:
4108+ _input = lexer.input
4109+ else:
4110+ _input = input
4111+ _input(data)
4112+ if lexer:
4113+ _token = lexer.token
4114+ else:
4115+ _token = token
4116+
4117+ while 1:
4118+ tok = _token()
4119+ if not tok: break
4120+ sys.stdout.write("(%s,%r,%d,%d)\n" % (tok.type, tok.value, tok.lineno,tok.lexpos))
4121+
4122+# -----------------------------------------------------------------------------
4123+# @TOKEN(regex)
4124+#
4125+# This decorator function can be used to set the regular expression on a function
4126+# when its docstring needs to be supplied in an alternative way
4127+# -----------------------------------------------------------------------------
4128+
4129+def TOKEN(r):
4130+ def set_doc(f):
4131+ if hasattr(r,"__call__"):
4132+ f.__doc__ = r.__doc__
4133+ else:
4134+ f.__doc__ = r
4135+ return f
4136+ return set_doc
4137+
4138+# Alternative spelling of the TOKEN decorator
4139+Token = TOKEN
4140+
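Typical use of the decorator above: keep the pattern in a variable (or build it at runtime) and attach it as the rule function's docstring, where the lexer looks for it. The `digits`/`t_NUMBER` names below are illustrative:

```python
def TOKEN(r):
    # Attach r (a pattern string, or another callable whose docstring holds
    # the pattern) as the decorated function's docstring.
    def set_doc(f):
        if hasattr(r, "__call__"):
            f.__doc__ = r.__doc__
        else:
            f.__doc__ = r
        return f
    return set_doc

digits = r'\d+'

@TOKEN(digits)
def t_NUMBER(t):
    t.value = int(t.value)
    return t
```

This avoids duplicating a pattern across several rules, since a plain docstring cannot be computed from an expression.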
4141
4142=== added file 'boots/lib/ui/cui/lexers/ply/yacc.py'
4143--- boots/lib/ui/cui/lexers/ply/yacc.py 1970-01-01 00:00:00 +0000
4144+++ boots/lib/ui/cui/lexers/ply/yacc.py 2011-03-26 12:15:52 +0000
4145@@ -0,0 +1,3276 @@
4146+# -----------------------------------------------------------------------------
4147+# ply: yacc.py
4148+#
4149+# Copyright (C) 2001-2009,
4150+# David M. Beazley (Dabeaz LLC)
4151+# All rights reserved.
4152+#
4153+# Redistribution and use in source and binary forms, with or without
4154+# modification, are permitted provided that the following conditions are
4155+# met:
4156+#
4157+# * Redistributions of source code must retain the above copyright notice,
4158+# this list of conditions and the following disclaimer.
4159+# * Redistributions in binary form must reproduce the above copyright notice,
4160+# this list of conditions and the following disclaimer in the documentation
4161+# and/or other materials provided with the distribution.
4162+# * Neither the name of the David Beazley or Dabeaz LLC may be used to
4163+# endorse or promote products derived from this software without
4164+# specific prior written permission.
4165+#
4166+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
4167+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
4168+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
4169+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
4170+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
4171+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
4172+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
4173+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
4174+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4175+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
4176+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4177+# -----------------------------------------------------------------------------
4178+#
4179+# This implements an LR parser that is constructed from grammar rules defined
4180+# as Python functions. The grammar is specified by supplying the BNF inside
4181+# Python documentation strings. The inspiration for this technique was borrowed
4182+# from John Aycock's Spark parsing system. PLY might be viewed as a cross between
4183+# Spark and the GNU bison utility.
4184+#
4185+# The current implementation is only somewhat object-oriented. The
4186+# LR parser itself is defined in terms of an object (which allows multiple
4187+# parsers to co-exist). However, most of the variables used during table
4188+# construction are defined in terms of global variables. Users shouldn't
4189+# notice unless they are trying to define multiple parsers at the same
4190+# time using threads (in which case they should have their head examined).
4191+#
4192+# This implementation supports both SLR and LALR(1) parsing. LALR(1)
4193+# support was originally implemented by Elias Ioup (ezioup@alumni.uchicago.edu),
4194+# using the algorithm found in Aho, Sethi, and Ullman "Compilers: Principles,
4195+# Techniques, and Tools" (The Dragon Book). LALR(1) has since been replaced
4196+# by the more efficient DeRemer and Pennello algorithm.
4197+#
4198+# :::::::: WARNING :::::::
4199+#
4200+# Construction of LR parsing tables is fairly complicated and expensive.
4201+# To make this module run fast, a *LOT* of work has been put into
4202+# optimization---often at the expense of readability and what might be
4203+# considered good Python "coding style." Modify the code at your
4204+# own risk!
4205+# ----------------------------------------------------------------------------
4206+
4207+__version__ = "3.3"
4208+__tabversion__ = "3.2" # Table version
4209+
4210+#-----------------------------------------------------------------------------
4211+# === User configurable parameters ===
4212+#
4213+# Change these to modify the default behavior of yacc (if you wish)
4214+#-----------------------------------------------------------------------------
4215+
4216+yaccdebug = 1 # Debugging mode. If set, yacc generates a
4217+ # 'parser.out' file in the current directory
4218+
4219+debug_file = 'parser.out' # Default name of the debugging file
4220+tab_module = 'parsetab' # Default name of the table module
4221+default_lr = 'LALR' # Default LR table generation method
4222+
4223+error_count = 3 # Number of symbols that must be shifted to leave recovery mode
4224+
4225+yaccdevel = 0 # Set to True if developing yacc. This turns off optimized
4226+ # implementations of certain functions.
4227+
4228+resultlimit = 40 # Size limit of results when running in debug mode.
4229+
4230+pickle_protocol = 0 # Protocol to use when writing pickle files
4231+
4232+import re, types, sys, os.path
4233+
4234+# Compatibility function for python 2.6/3.0
4235+if sys.version_info[0] < 3:
4236+ def func_code(f):
4237+ return f.func_code
4238+else:
4239+ def func_code(f):
4240+ return f.__code__
4241+
4242+# Compatibility
4243+try:
4244+ MAXINT = sys.maxint
4245+except AttributeError:
4246+ MAXINT = sys.maxsize
4247+
4248+# Python 2.x/3.0 compatibility.
4249+def load_ply_lex():
4250+ if sys.version_info[0] < 3:
4251+ import lex
4252+ else:
4253+ import ply.lex as lex
4254+ return lex
4255+
4256+# This object is a stand-in for a logging object created by the
4257+# logging module. PLY will use this by default to create things
4258+# such as the parser.out file. If a user wants more detailed
4259+# information, they can create their own logging object and pass
4260+# it into PLY.
4261+
4262+class PlyLogger(object):
4263+ def __init__(self,f):
4264+ self.f = f
4265+ def debug(self,msg,*args,**kwargs):
4266+ self.f.write((msg % args) + "\n")
4267+ info = debug
4268+
4269+ def warning(self,msg,*args,**kwargs):
4270+ self.f.write("WARNING: "+ (msg % args) + "\n")
4271+
4272+ def error(self,msg,*args,**kwargs):
4273+ self.f.write("ERROR: " + (msg % args) + "\n")
4274+
4275+ critical = debug
4276+
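The logger protocol above is easy to verify against an in-memory sink; this sketch copies the class and feeds it an `io.StringIO` instead of `sys.stderr`:

```python
import io

class PlyLogger(object):
    # Thin stand-in for a logging.Logger: each method formats the message
    # with %-interpolation and writes one line to the wrapped file object.
    def __init__(self, f):
        self.f = f
    def debug(self, msg, *args, **kwargs):
        self.f.write((msg % args) + "\n")
    info = debug
    def warning(self, msg, *args, **kwargs):
        self.f.write("WARNING: " + (msg % args) + "\n")
    def error(self, msg, *args, **kwargs):
        self.f.write("ERROR: " + (msg % args) + "\n")
    critical = debug

sink = io.StringIO()
log = PlyLogger(sink)
log.info("lex: tokens = %r", ['NUMBER'])
log.warning("No t_error rule is defined")
output = sink.getvalue()
```

Because it only relies on `write()`, any file-like object (a real `logging` adapter, a capture buffer in tests) can be dropped in.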
4277+# Null logger is used when no output is generated. Does nothing.
4278+class NullLogger(object):
4279+ def __getattribute__(self,name):
4280+ return self
4281+ def __call__(self,*args,**kwargs):
4282+ return self
4283+
4284+# Exception raised for yacc-related errors
4285+class YaccError(Exception): pass
4286+
4287+# Format the result message that the parser produces when running in debug mode.
4288+def format_result(r):
4289+ repr_str = repr(r)
4290+ if '\n' in repr_str: repr_str = repr(repr_str)
4291+ if len(repr_str) > resultlimit:
4292+ repr_str = repr_str[:resultlimit]+" ..."
4293+ result = "<%s @ 0x%x> (%s)" % (type(r).__name__,id(r),repr_str)
4294+ return result
4295+
4296+
4297+# Format stack entries when the parser is running in debug mode
4298+def format_stack_entry(r):
4299+ repr_str = repr(r)
4300+ if '\n' in repr_str: repr_str = repr(repr_str)
4301+ if len(repr_str) < 16:
4302+ return repr_str
4303+ else:
4304+ return "<%s @ 0x%x>" % (type(r).__name__,id(r))
4305+
4306+#-----------------------------------------------------------------------------
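The truncation rule in `format_stack_entry()` above can be demonstrated directly; this is a copy of the function with illustrative inputs:

```python
def format_stack_entry(r):
    # Short reprs are shown verbatim; long ones collapse to "<type @ addr>"
    # so the debug trace stays readable.
    repr_str = repr(r)
    if '\n' in repr_str:
        repr_str = repr(repr_str)
    if len(repr_str) < 16:
        return repr_str
    else:
        return "<%s @ 0x%x>" % (type(r).__name__, id(r))
```

Small values such as integers survive intact, while a long string is reduced to its type and address.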
4307+# === LR Parsing Engine ===
4308+#
4309+# The following classes are used for the LR parser itself. These are not
4310+# used during table construction and are independent of the actual LR
4311+# table generation algorithm
4312+#-----------------------------------------------------------------------------
4313+
4314+# This class is used to hold non-terminal grammar symbols during parsing.
4315+# It normally has the following attributes set:
4316+# .type = Grammar symbol type
4317+# .value = Symbol value
4318+# .lineno = Starting line number
4319+# .endlineno = Ending line number (optional, set automatically)
4320+# .lexpos = Starting lex position
4321+# .endlexpos = Ending lex position (optional, set automatically)
4322+
4323+class YaccSymbol:
4324+ def __str__(self): return self.type
4325+ def __repr__(self): return str(self)
4326+
4327+# This class is a wrapper around the objects actually passed to each
4328+# grammar rule. Index lookup and assignment actually assign the
4329+# .value attribute of the underlying YaccSymbol object.
4330+# The lineno() method returns the line number of a given
4331+# item (or 0 if not defined). The linespan() method returns
4332+# a tuple of (startline,endline) representing the range of lines
4333+# for a symbol. The lexspan() method returns a tuple (lexpos,endlexpos)
4334+# representing the range of positional information for a symbol.
4335+
4336+class YaccProduction:
4337+ def __init__(self,s,stack=None):
4338+ self.slice = s
4339+ self.stack = stack
4340+ self.lexer = None
4341+ self.parser= None
4342+ def __getitem__(self,n):
4343+ if n >= 0: return self.slice[n].value
4344+ else: return self.stack[n].value
4345+
4346+ def __setitem__(self,n,v):
4347+ self.slice[n].value = v
4348+
4349+ def __getslice__(self,i,j):
4350+ return [s.value for s in self.slice[i:j]]
4351+
4352+ def __len__(self):
4353+ return len(self.slice)
4354+
4355+ def lineno(self,n):
4356+ return getattr(self.slice[n],"lineno",0)
4357+
4358+ def set_lineno(self,n,lineno):
4359+ self.slice[n].lineno = lineno
4360+
4361+ def linespan(self,n):
4362+ startline = getattr(self.slice[n],"lineno",0)
4363+ endline = getattr(self.slice[n],"endlineno",startline)
4364+ return startline,endline
4365+
4366+ def lexpos(self,n):
4367+ return getattr(self.slice[n],"lexpos",0)
4368+
4369+ def lexspan(self,n):
4370+ startpos = getattr(self.slice[n],"lexpos",0)
4371+ endpos = getattr(self.slice[n],"endlexpos",startpos)
4372+ return startpos,endpos
4373+
4374+ def error(self):
4375+ raise SyntaxError
4376+
4377+
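The indexing behaviour documented above can be shown with a stripped-down copy of the two classes; the `sym` helper and the `expression : NUMBER PLUS NUMBER` slice are invented for the demo:

```python
class YaccSymbol:
    def __str__(self): return self.type
    def __repr__(self): return str(self)

class YaccProduction:
    # p[n] reads/writes the .value of the n-th YaccSymbol in the slice;
    # negative indices reach into the parser's symbol stack instead.
    def __init__(self, s, stack=None):
        self.slice = s
        self.stack = stack
    def __getitem__(self, n):
        if n >= 0:
            return self.slice[n].value
        return self.stack[n].value
    def __setitem__(self, n, v):
        self.slice[n].value = v
    def __len__(self):
        return len(self.slice)

def sym(t, v):
    s = YaccSymbol()
    s.type, s.value = t, v
    return s

# Simulate what a rule p_expression(p) would see for "2 + 3":
p = YaccProduction([sym('expression', None),
                    sym('NUMBER', 2), sym('PLUS', '+'), sym('NUMBER', 3)])
p[0] = p[1] + p[3]   # the rule's action: assign the result to slot 0
```

Slot 0 is the left-hand side of the production, so assigning `p[0]` is how a rule returns its value to the parser.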
4378+# -----------------------------------------------------------------------------
4379+# == LRParser ==
4380+#
4381+# The LR Parsing engine.
4382+# -----------------------------------------------------------------------------
4383+
4384+class LRParser:
4385+ def __init__(self,lrtab,errorf):
4386+ self.productions = lrtab.lr_productions
4387+ self.action = lrtab.lr_action
4388+ self.goto = lrtab.lr_goto
4389+ self.errorfunc = errorf
4390+
4391+ def errok(self):
4392+ self.errorok = 1
4393+
4394+ def restart(self):
4395+ del self.statestack[:]
4396+ del self.symstack[:]
4397+ sym = YaccSymbol()
4398+ sym.type = '$end'
4399+ self.symstack.append(sym)
4400+ self.statestack.append(0)
4401+
4402+ def parse(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None):
4403+ if debug or yaccdevel:
4404+ if isinstance(debug,int):
4405+ debug = PlyLogger(sys.stderr)
4406+ return self.parsedebug(input,lexer,debug,tracking,tokenfunc)
4407+ elif tracking:
4408+ return self.parseopt(input,lexer,debug,tracking,tokenfunc)
4409+ else:
4410+ return self.parseopt_notrack(input,lexer,debug,tracking,tokenfunc)
4411+
4412+
4413+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4414+ # parsedebug().
4415+ #
4416+ # This is the debugging enabled version of parse(). All changes made to the
4417+ # parsing engine should be made here. For the non-debugging version,
4418+ # copy this code to a method parseopt() and delete all of the sections
4419+ # enclosed in:
4420+ #
4421+ # #--! DEBUG
4422+ # statements
4423+ # #--! DEBUG
4424+ #
4425+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4426+
4427+ def parsedebug(self,input=None,lexer=None,debug=None,tracking=0,tokenfunc=None):
4428+ lookahead = None # Current lookahead symbol
4429+ lookaheadstack = [ ] # Stack of lookahead symbols
4430+ actions = self.action # Local reference to action table (to avoid lookup on self.)
4431+ goto = self.goto # Local reference to goto table (to avoid lookup on self.)
4432+ prod = self.productions # Local reference to production list (to avoid lookup on self.)
4433+ pslice = YaccProduction(None) # Production object passed to grammar rules
4434+ errorcount = 0 # Used during error recovery
4435+
4436+ # --! DEBUG
4437+ debug.info("PLY: PARSE DEBUG START")
4438+ # --! DEBUG
4439+
4440+ # If no lexer was given, we will try to use the lex module
4441+ if not lexer:
4442+ lex = load_ply_lex()
4443+ lexer = lex.lexer
4444+
4445+ # Set up the lexer and parser objects on pslice
4446+ pslice.lexer = lexer
4447+ pslice.parser = self
4448+
4449+ # If input was supplied, pass to lexer
4450+ if input is not None:
4451+ lexer.input(input)
4452+
4453+ if tokenfunc is None:
4454+ # Tokenize function
4455+ get_token = lexer.token
4456+ else:
4457+ get_token = tokenfunc
4458+
4459+ # Set up the state and symbol stacks
4460+
4461+ statestack = [ ] # Stack of parsing states
4462+ self.statestack = statestack
4463+ symstack = [ ] # Stack of grammar symbols
4464+ self.symstack = symstack
4465+
4466+ pslice.stack = symstack # Put in the production
4467+ errtoken = None # Err token
4468+
4469+ # The start state is assumed to be (0,$end)
4470+
4471+ statestack.append(0)
4472+ sym = YaccSymbol()
4473+ sym.type = "$end"
4474+ symstack.append(sym)
4475+ state = 0
4476+ while 1:
4477+ # Get the next symbol on the input. If a lookahead symbol
4478+ # is already set, we just use that. Otherwise, we'll pull
4479+ # the next token off of the lookaheadstack or from the lexer
4480+
4481+ # --! DEBUG
4482+ debug.debug('')
4483+ debug.debug('State : %s', state)
4484+ # --! DEBUG
4485+
4486+ if not lookahead:
4487+ if not lookaheadstack:
4488+ lookahead = get_token() # Get the next token
4489+ else:
4490+ lookahead = lookaheadstack.pop()
4491+ if not lookahead:
4492+ lookahead = YaccSymbol()
4493+ lookahead.type = "$end"
4494+
4495+ # --! DEBUG
4496+ debug.debug('Stack : %s',
4497+ ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip())
4498+ # --! DEBUG
4499+
4500+ # Check the action table
4501+ ltype = lookahead.type
4502+ t = actions[state].get(ltype)
4503+
4504+ if t is not None:
4505+ if t > 0:
4506+ # shift a symbol on the stack
4507+ statestack.append(t)
4508+ state = t
4509+
4510+ # --! DEBUG
4511+ debug.debug("Action : Shift and goto state %s", t)
4512+ # --! DEBUG
4513+
4514+ symstack.append(lookahead)
4515+ lookahead = None
4516+
4517+ # Decrease error count on successful shift
4518+ if errorcount: errorcount -=1
4519+ continue
4520+
4521+ if t < 0:
4522+ # reduce a symbol on the stack, emit a production
4523+ p = prod[-t]
4524+ pname = p.name
4525+ plen = p.len
4526+
4527+ # Get production function
4528+ sym = YaccSymbol()
4529+ sym.type = pname # Production name
4530+ sym.value = None
4531+
4532+ # --! DEBUG
4533+ if plen:
4534+ debug.info("Action : Reduce rule [%s] with %s and goto state %d", p.str, "["+",".join([format_stack_entry(_v.value) for _v in symstack[-plen:]])+"]",-t)
4535+ else:
4536+ debug.info("Action : Reduce rule [%s] with %s and goto state %d", p.str, [],-t)
4537+
4538+ # --! DEBUG
4539+
4540+ if plen:
4541+ targ = symstack[-plen-1:]
4542+ targ[0] = sym
4543+
4544+ # --! TRACKING
4545+ if tracking:
4546+ t1 = targ[1]
4547+ sym.lineno = t1.lineno
4548+ sym.lexpos = t1.lexpos
4549+ t1 = targ[-1]
4550+ sym.endlineno = getattr(t1,"endlineno",t1.lineno)
4551+ sym.endlexpos = getattr(t1,"endlexpos",t1.lexpos)
4552+
4553+ # --! TRACKING
4554+
4555+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4556+ # The code enclosed in this section is duplicated
4557+ # below as a performance optimization. Make sure
4558+ # changes get made in both locations.
4559+
4560+ pslice.slice = targ
4561+
4562+ try:
4563+ # Call the grammar rule with our special slice object
4564+ del symstack[-plen:]
4565+ del statestack[-plen:]
4566+ p.callable(pslice)
4567+ # --! DEBUG
4568+ debug.info("Result : %s", format_result(pslice[0]))
4569+ # --! DEBUG
4570+ symstack.append(sym)
4571+ state = goto[statestack[-1]][pname]
4572+ statestack.append(state)
4573+ except SyntaxError:
4574+                        # If an error was set, enter error recovery state
4575+ lookaheadstack.append(lookahead)
4576+ symstack.pop()
4577+ statestack.pop()
4578+ state = statestack[-1]
4579+ sym.type = 'error'
4580+ lookahead = sym
4581+ errorcount = error_count
4582+ self.errorok = 0
4583+ continue
4584+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4585+
4586+ else:
4587+
4588+ # --! TRACKING
4589+ if tracking:
4590+ sym.lineno = lexer.lineno
4591+ sym.lexpos = lexer.lexpos
4592+ # --! TRACKING
4593+
4594+ targ = [ sym ]
4595+
4596+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4597+ # The code enclosed in this section is duplicated
4598+ # above as a performance optimization. Make sure
4599+ # changes get made in both locations.
4600+
4601+ pslice.slice = targ
4602+
4603+ try:
4604+ # Call the grammar rule with our special slice object
4605+ p.callable(pslice)
4606+ # --! DEBUG
4607+ debug.info("Result : %s", format_result(pslice[0]))
4608+ # --! DEBUG
4609+ symstack.append(sym)
4610+ state = goto[statestack[-1]][pname]
4611+ statestack.append(state)
4612+ except SyntaxError:
4613+                        # If an error was set, enter error recovery state
4614+ lookaheadstack.append(lookahead)
4615+ symstack.pop()
4616+ statestack.pop()
4617+ state = statestack[-1]
4618+ sym.type = 'error'
4619+ lookahead = sym
4620+ errorcount = error_count
4621+ self.errorok = 0
4622+ continue
4623+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4624+
4625+ if t == 0:
4626+ n = symstack[-1]
4627+ result = getattr(n,"value",None)
4628+ # --! DEBUG
4629+ debug.info("Done : Returning %s", format_result(result))
4630+ debug.info("PLY: PARSE DEBUG END")
4631+ # --! DEBUG
4632+ return result
4633+
4634+            if t is None:
4635+
4636+ # --! DEBUG
4637+ debug.error('Error : %s',
4638+ ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip())
4639+ # --! DEBUG
4640+
4641+ # We have some kind of parsing error here. To handle
4642+ # this, we are going to push the current token onto
4643+ # the tokenstack and replace it with an 'error' token.
4644+ # If there are any synchronization rules, they may
4645+ # catch it.
4646+ #
4647+                # In addition to pushing the error token, we call
4648+ # the user defined p_error() function if this is the
4649+ # first syntax error. This function is only called if
4650+ # errorcount == 0.
4651+ if errorcount == 0 or self.errorok:
4652+ errorcount = error_count
4653+ self.errorok = 0
4654+ errtoken = lookahead
4655+ if errtoken.type == "$end":
4656+ errtoken = None # End of file!
4657+ if self.errorfunc:
4658+ global errok,token,restart
4659+ errok = self.errok # Set some special functions available in error recovery
4660+ token = get_token
4661+ restart = self.restart
4662+ if errtoken and not hasattr(errtoken,'lexer'):
4663+ errtoken.lexer = lexer
4664+ tok = self.errorfunc(errtoken)
4665+ del errok, token, restart # Delete special functions
4666+
4667+ if self.errorok:
4668+ # User must have done some kind of panic
4669+ # mode recovery on their own. The
4670+ # returned token is the next lookahead
4671+ lookahead = tok
4672+ errtoken = None
4673+ continue
4674+ else:
4675+ if errtoken:
4676+ if hasattr(errtoken,"lineno"): lineno = lookahead.lineno
4677+ else: lineno = 0
4678+ if lineno:
4679+ sys.stderr.write("yacc: Syntax error at line %d, token=%s\n" % (lineno, errtoken.type))
4680+ else:
4681+                                sys.stderr.write("yacc: Syntax error, token=%s\n" % errtoken.type)
4682+ else:
4683+ sys.stderr.write("yacc: Parse error in input. EOF\n")
4684+ return
4685+
4686+ else:
4687+ errorcount = error_count
4688+
4689+ # case 1: the statestack only has 1 entry on it. If we're in this state, the
4690+ # entire parse has been rolled back and we're completely hosed. The token is
4691+ # discarded and we just keep going.
4692+
4693+ if len(statestack) <= 1 and lookahead.type != "$end":
4694+ lookahead = None
4695+ errtoken = None
4696+ state = 0
4697+ # Nuke the pushback stack
4698+ del lookaheadstack[:]
4699+ continue
4700+
4701+ # case 2: the statestack has a couple of entries on it, but we're
4702+ # at the end of the file. nuke the top entry and generate an error token
4703+
4704+ # Start nuking entries on the stack
4705+ if lookahead.type == "$end":
4706+ # Whoa. We're really hosed here. Bail out
4707+ return
4708+
4709+ if lookahead.type != 'error':
4710+ sym = symstack[-1]
4711+ if sym.type == 'error':
4712+ # Hmmm. Error is on top of stack, we'll just nuke input
4713+ # symbol and continue
4714+ lookahead = None
4715+ continue
4716+ t = YaccSymbol()
4717+ t.type = 'error'
4718+ if hasattr(lookahead,"lineno"):
4719+ t.lineno = lookahead.lineno
4720+ t.value = lookahead
4721+ lookaheadstack.append(lookahead)
4722+ lookahead = t
4723+ else:
4724+ symstack.pop()
4725+ statestack.pop()
4726+ state = statestack[-1] # Potential bug fix
4727+
4728+ continue
4729+
4730+ # Call an error function here
4731+ raise RuntimeError("yacc: internal parser error!!!\n")
4732+
4733+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4734+ # parseopt().
4735+ #
4736+ # Optimized version of parse() method. DO NOT EDIT THIS CODE DIRECTLY.
4737+ # Edit the debug version above, then copy any modifications to the method
4738+ # below while removing #--! DEBUG sections.
4739+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4740+
4741+
4742+ def parseopt(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None):
4743+ lookahead = None # Current lookahead symbol
4744+ lookaheadstack = [ ] # Stack of lookahead symbols
4745+ actions = self.action # Local reference to action table (to avoid lookup on self.)
4746+ goto = self.goto # Local reference to goto table (to avoid lookup on self.)
4747+ prod = self.productions # Local reference to production list (to avoid lookup on self.)
4748+ pslice = YaccProduction(None) # Production object passed to grammar rules
4749+ errorcount = 0 # Used during error recovery
4750+
4751+ # If no lexer was given, we will try to use the lex module
4752+ if not lexer:
4753+ lex = load_ply_lex()
4754+ lexer = lex.lexer
4755+
4756+ # Set up the lexer and parser objects on pslice
4757+ pslice.lexer = lexer
4758+ pslice.parser = self
4759+
4760+ # If input was supplied, pass to lexer
4761+ if input is not None:
4762+ lexer.input(input)
4763+
4764+ if tokenfunc is None:
4765+ # Tokenize function
4766+ get_token = lexer.token
4767+ else:
4768+ get_token = tokenfunc
4769+
4770+ # Set up the state and symbol stacks
4771+
4772+ statestack = [ ] # Stack of parsing states
4773+ self.statestack = statestack
4774+ symstack = [ ] # Stack of grammar symbols
4775+ self.symstack = symstack
4776+
4777+ pslice.stack = symstack # Put in the production
4778+ errtoken = None # Err token
4779+
4780+ # The start state is assumed to be (0,$end)
4781+
4782+ statestack.append(0)
4783+ sym = YaccSymbol()
4784+ sym.type = '$end'
4785+ symstack.append(sym)
4786+ state = 0
4787+ while 1:
4788+ # Get the next symbol on the input. If a lookahead symbol
4789+ # is already set, we just use that. Otherwise, we'll pull
4790+ # the next token off of the lookaheadstack or from the lexer
4791+
4792+ if not lookahead:
4793+ if not lookaheadstack:
4794+ lookahead = get_token() # Get the next token
4795+ else:
4796+ lookahead = lookaheadstack.pop()
4797+ if not lookahead:
4798+ lookahead = YaccSymbol()
4799+ lookahead.type = '$end'
4800+
4801+ # Check the action table
4802+ ltype = lookahead.type
4803+ t = actions[state].get(ltype)
4804+
4805+ if t is not None:
4806+ if t > 0:
4807+ # shift a symbol on the stack
4808+ statestack.append(t)
4809+ state = t
4810+
4811+ symstack.append(lookahead)
4812+ lookahead = None
4813+
4814+ # Decrease error count on successful shift
4815+ if errorcount: errorcount -=1
4816+ continue
4817+
4818+ if t < 0:
4819+ # reduce a symbol on the stack, emit a production
4820+ p = prod[-t]
4821+ pname = p.name
4822+ plen = p.len
4823+
4824+ # Get production function
4825+ sym = YaccSymbol()
4826+ sym.type = pname # Production name
4827+ sym.value = None
4828+
4829+ if plen:
4830+ targ = symstack[-plen-1:]
4831+ targ[0] = sym
4832+
4833+ # --! TRACKING
4834+ if tracking:
4835+ t1 = targ[1]
4836+ sym.lineno = t1.lineno
4837+ sym.lexpos = t1.lexpos
4838+ t1 = targ[-1]
4839+ sym.endlineno = getattr(t1,"endlineno",t1.lineno)
4840+ sym.endlexpos = getattr(t1,"endlexpos",t1.lexpos)
4841+
4842+ # --! TRACKING
4843+
4844+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4845+ # The code enclosed in this section is duplicated
4846+ # below as a performance optimization. Make sure
4847+ # changes get made in both locations.
4848+
4849+ pslice.slice = targ
4850+
4851+ try:
4852+ # Call the grammar rule with our special slice object
4853+ del symstack[-plen:]
4854+ del statestack[-plen:]
4855+ p.callable(pslice)
4856+ symstack.append(sym)
4857+ state = goto[statestack[-1]][pname]
4858+ statestack.append(state)
4859+ except SyntaxError:
4860+                        # If an error was set, enter error recovery state
4861+ lookaheadstack.append(lookahead)
4862+ symstack.pop()
4863+ statestack.pop()
4864+ state = statestack[-1]
4865+ sym.type = 'error'
4866+ lookahead = sym
4867+ errorcount = error_count
4868+ self.errorok = 0
4869+ continue
4870+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4871+
4872+ else:
4873+
4874+ # --! TRACKING
4875+ if tracking:
4876+ sym.lineno = lexer.lineno
4877+ sym.lexpos = lexer.lexpos
4878+ # --! TRACKING
4879+
4880+ targ = [ sym ]
4881+
4882+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4883+ # The code enclosed in this section is duplicated
4884+ # above as a performance optimization. Make sure
4885+ # changes get made in both locations.
4886+
4887+ pslice.slice = targ
4888+
4889+ try:
4890+ # Call the grammar rule with our special slice object
4891+ p.callable(pslice)
4892+ symstack.append(sym)
4893+ state = goto[statestack[-1]][pname]
4894+ statestack.append(state)
4895+ except SyntaxError:
4896+                        # If an error was set, enter error recovery state
4897+ lookaheadstack.append(lookahead)
4898+ symstack.pop()
4899+ statestack.pop()
4900+ state = statestack[-1]
4901+ sym.type = 'error'
4902+ lookahead = sym
4903+ errorcount = error_count
4904+ self.errorok = 0
4905+ continue
4906+ # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
4907+
4908+ if t == 0:
4909+ n = symstack[-1]
4910+ return getattr(n,"value",None)
4911+
4912+            if t is None:
4913+
4914+ # We have some kind of parsing error here. To handle
4915+ # this, we are going to push the current token onto
4916+ # the tokenstack and replace it with an 'error' token.
4917+ # If there are any synchronization rules, they may
4918+ # catch it.
4919+ #
4920+            # In addition to pushing the error token, we call
4921+ # the user defined p_error() function if this is the
4922+ # first syntax error. This function is only called if
4923+ # errorcount == 0.
4924+ if errorcount == 0 or self.errorok:
4925+ errorcount = error_count
4926+ self.errorok = 0
4927+ errtoken = lookahead
4928+ if errtoken.type == '$end':
4929+ errtoken = None # End of file!
4930+ if self.errorfunc:
4931+ global errok,token,restart
4932+ errok = self.errok # Set some special functions available in error recovery
4933+ token = get_token
4934+ restart = self.restart
4935+ if errtoken and not hasattr(errtoken,'lexer'):
4936+ errtoken.lexer = lexer
4937+ tok = self.errorfunc(errtoken)
4938+ del errok, token, restart # Delete special functions
4939+
4940+ if self.errorok:
4941+ # User must have done some kind of panic
4942+ # mode recovery on their own. The
4943+ # returned token is the next lookahead
4944+ lookahead = tok
4945+ errtoken = None
4946+ continue
4947+ else:
4948+ if errtoken:
4949+ if hasattr(errtoken,"lineno"): lineno = lookahead.lineno
4950+ else: lineno = 0
4951+ if lineno:
4952+ sys.stderr.write("yacc: Syntax error at line %d, token=%s\n" % (lineno, errtoken.type))
4953+ else:
4954+                                sys.stderr.write("yacc: Syntax error, token=%s\n" % errtoken.type)
4955+ else:
4956+ sys.stderr.write("yacc: Parse error in input. EOF\n")
4957+ return
4958+
4959+ else:
4960+ errorcount = error_count
4961+
4962+ # case 1: the statestack only has 1 entry on it. If we're in this state, the
4963+ # entire parse has been rolled back and we're completely hosed. The token is
4964+ # discarded and we just keep going.
4965+
4966+ if len(statestack) <= 1 and lookahead.type != '$end':
4967+ lookahead = None
4968+ errtoken = None
4969+ state = 0
4970+ # Nuke the pushback stack
4971+ del lookaheadstack[:]
4972+ continue
4973+
4974+ # case 2: the statestack has a couple of entries on it, but we're
4975+ # at the end of the file. nuke the top entry and generate an error token
4976+
4977+ # Start nuking entries on the stack
4978+ if lookahead.type == '$end':
4979+ # Whoa. We're really hosed here. Bail out
4980+ return
4981+
4982+ if lookahead.type != 'error':
4983+ sym = symstack[-1]
4984+ if sym.type == 'error':
4985+ # Hmmm. Error is on top of stack, we'll just nuke input
4986+ # symbol and continue
4987+ lookahead = None
4988+ continue
4989+ t = YaccSymbol()
4990+ t.type = 'error'
4991+ if hasattr(lookahead,"lineno"):
4992+ t.lineno = lookahead.lineno
4993+ t.value = lookahead
4994+ lookaheadstack.append(lookahead)
4995+ lookahead = t
4996+ else:
4997+ symstack.pop()
4998+ statestack.pop()
4999+ state = statestack[-1] # Potential bug fix
5000+
The diff has been truncated for viewing.
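For reviewers unfamiliar with the vendored PLY engine above: its `parse()` loop is a table-driven LR parser where a positive action-table entry means "shift and go to that state", a negative entry means "reduce by production number `-entry`", `0` means accept, and a missing entry is a syntax error. The following standalone sketch illustrates that convention with a hypothetical two-rule grammar (`E -> E + n | n`) and hand-built tables; it is an illustration only, not code from this diff.

```python
# Toy table-driven LR parser mirroring the vendored PLY engine's convention:
# positive entry = shift to that state, negative entry = reduce by
# production -entry, 0 = accept, missing entry = syntax error.
# Grammar (hypothetical, for illustration):
#   1: E -> E + n
#   2: E -> n
ACTIONS = {
    0: {'n': 1},
    1: {'+': -2, '$end': -2},
    2: {'+': 3, '$end': 0},
    3: {'n': 4},
    4: {'+': -1, '$end': -1},
}
GOTO = {0: {'E': 2}}
# production number -> (name, length, semantic action over matched values)
PRODS = {
    1: ('E', 3, lambda v: v[0] + v[2]),   # E -> E + n
    2: ('E', 1, lambda v: v[0]),          # E -> n
}

def parse(tokens):
    toks = iter(tokens)
    statestack = [0]                  # stack of parser states
    symstack = [('$end', None)]       # stack of (type, value) symbols
    state, lookahead = 0, None
    while True:
        if lookahead is None:
            lookahead = next(toks)
        t = ACTIONS[state].get(lookahead[0])
        if t is None:
            raise SyntaxError('unexpected %r in state %d' % (lookahead, state))
        if t > 0:                     # shift the lookahead, go to state t
            statestack.append(t)
            symstack.append(lookahead)
            state, lookahead = t, None
        elif t < 0:                   # reduce by production -t
            name, plen, fn = PRODS[-t]
            value = fn([s[1] for s in symstack[-plen:]])
            del symstack[-plen:]
            del statestack[-plen:]
            symstack.append((name, value))
            state = GOTO[statestack[-1]][name]
            statestack.append(state)
        else:                         # accept: result is on top of symstack
            return symstack[-1][1]

tokens = [('n', 1), ('+', '+'), ('n', 2), ('+', '+'), ('n', 3), ('$end', None)]
print(parse(tokens))  # 6
```

The real engine adds what this sketch omits: the lookahead pushback stack, position tracking, and the `errorcount`/`errtoken` recovery machinery seen in the error-handling branches of the diff.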
