Merge lp:~robru/gwibber/twitter into lp:~barry/gwibber/py3

Proposed by Robert Bruce Park
Status: Merged
Merged at revision: 1445
Proposed branch: lp:~robru/gwibber/twitter
Merge into: lp:~barry/gwibber/py3
Diff against target: 2245 lines (+1180/-847)
12 files modified
gwibber/gwibber/protocols/twitter.py (+275/-6)
gwibber/gwibber/testing/mocks.py (+13/-7)
gwibber/gwibber/tests/data/twitter-home.dat (+344/-0)
gwibber/gwibber/tests/test_dbus.py (+5/-0)
gwibber/gwibber/tests/test_download.py (+41/-0)
gwibber/gwibber/tests/test_protocols.py (+52/-0)
gwibber/gwibber/tests/test_twitter.py (+296/-8)
gwibber/gwibber/utils/base.py (+31/-6)
gwibber/gwibber/utils/download.py (+34/-1)
gwibber/microblog/plugins/twitter/__init__.py (+0/-819)
gwibber/tools/debug_live.py (+67/-0)
gwibber/tools/debug_slave.py (+22/-0)
To merge this branch: bzr merge lp:~robru/gwibber/twitter
Reviewer: Barry Warsaw (review pending)
Review via email: mp+127373@code.launchpad.net

Description of the change

Twitter branch mostly done. Some TODOs remain. I am mp'ing a little bit prematurely because it's still the easiest way to see an overall diff ;-)

lp:~robru/gwibber/twitter updated
1474. By Robert Bruce Park

Implement Base._unpublish for the sake of deleting messages.

This required adding a new module global to base.py, _seen_ids, which
is a dict mapping message ids to SharedModel row iters. Without this
it would have been necessary to iterate over the entire model every
time you wanted to delete a single row.
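
Roughly, the idea is the following (illustrative sketch only; the helper
names are made up and the real base.py differs in detail):

    # Sketch of the _seen_ids idea, not the actual base.py code.
    _seen_ids = {}  # message_id -> SharedModel row iter

    def _publish(model, message_id, row):
        itr = model.append(row)
        _seen_ids[message_id] = itr     # remember where this message landed

    def _unpublish(model, message_id):
        itr = _seen_ids.pop(message_id, None)
        if itr is not None:
            model.remove(itr)           # direct lookup, no full-model scan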

1475. By Robert Bruce Park

Add a test case for Base._unpublish.

1476. By Robert Bruce Park

Fill out supported features in test_dbus.

All tests pass!

1477. By Robert Bruce Park

Uncomment line ;-)

Revision history for this message
Barry Warsaw (barry) wrote :

On Oct 01, 2012, at 09:30 PM, Robert Bruce Park wrote:

>Twitter branch mostly done. Some TODOs remain. I am mp'ing a little bit
>prematurely because it's still the easiest way to see an overall diff ;-)

The branch is looking pretty good. I'm glad you were able to verify that the
OAuth signatures were working, at least for public tweets. I guess more
research will have to be done to figure out why private replies are broken.

Anyway, here's a review of the branch so far. In general, it looks pretty
good. There are a few error cases that need to be thought about, a few
additional tests that I think need to be written, and just a few minor style
nits here and there. But overall, it's looking fantastic and the small
problems should be easy to fix. I think this can be landed pretty quickly.

Detailed comments follow.

Do you intend to include the debug_twitter_*.py scripts in the branch? If so,
let's put them in a tools subdirectory (i.e. not in the gwibber Python
package). That way they'll be easier to omit from the Debian package.
Also...

=== added file 'gwibber/debug_twitter_live.py'
--- gwibber/debug_twitter_live.py 1970-01-01 00:00:00 +0000
+++ gwibber/debug_twitter_live.py 2012-10-02 14:33:58 +0000
> @@ -0,0 +1,31 @@
> +#!/usr/bin/env python3
> +
> +from gwibber.utils.account import AccountManager
> +from gwibber.utils.model import Model
> +from gi.repository import GObject
> +
> +def refresh (account):
> + print ("############################# Receive ####################################")

This can probably be fit into 79 characters. :)

Also, no space between print and open paren. (PEP 8)

> + account.protocol.user()
> + account.protocol.delete('252823527978311680')

What does this number correspond to? Is it private information?

> + for row in Model:
> + print([col for col in row])
> + print()
> + print ("ROWS: ", len(Model))

extra space

> + return True
> +
> +if __name__ == "__main__":
> +
> + found = False
> + a = AccountManager(None)
> +
> + for account_id, account in a._accounts.items():
> + if account_id.endswith('twitter'):
> + found = True
> + refresh(account)
> + GObject.timeout_add_seconds(300, refresh, account)
> +
> + if not found:
> + print('No Twitter account found in your Ubuntu Online Accounts!')
> + else:
> + GObject.MainLoop().run()

=== added file 'gwibber/debug_twitter_slave.py'
--- gwibber/debug_twitter_slave.py 1970-01-01 00:00:00 +0000
+++ gwibber/debug_twitter_slave.py 2012-10-02 14:33:58 +0000
> @@ -0,0 +1,23 @@
> +#!/usr/bin/env python3
> +
> +from gi.repository import Dee
> +from gi.repository import GObject
> +
> +
> +class Slave:
> + def __init__(self):

4 space indents (PEP 8).

> + model_name = "com.Gwibber.Streams"
> + print ("Joining model %s" % model_name)

extra space.

> + self.model = Dee.SharedModel.new(model_name)
> + self.model.connect("row-added", self.on_row_added)
> +
> + def on_row_added (self, model, itr):
> + row = self.model.get_row (itr)
> + print (row)
> + print ("ROWS: ", len(self.model))

extra spaces.

> +
> +if __name__ == "__main__":
> +
> + s =...

Revision history for this message
Robert Bruce Park (robru) wrote :

On 12-10-02 02:13 PM, Barry Warsaw wrote:
> On Oct 01, 2012, at 09:30 PM, Robert Bruce Park wrote:
>
>> Twitter branch mostly done. Some TODOs remain. I am mp'ing a little bit
>> prematurely because it's still the easiest way to see an overall diff ;-)
>
> The branch is looking pretty good. I'm glad you were able to verify that the
> OAuth signatures were working, at least for public tweets.

I actually verified that the OAuth signatures worked for *every* API
endpoint *except* sending new direct messages (we can receive them ok,
too). That's what debug_twitter_live.py was all about, every time I'd
write a new protocol operation, I'd modify it to do that new operation,
then run it, and I'd see on my live twitter account that it worked.

> I guess more
> research will have to be done to figure out why private replies are broken.

Yeah, I still have no idea.

> Do you intend to include the debug_twitter_*.py scripts in the branch? If so,
> let's put them in a tools subdirectory (i.e. not in the gwibber Python
> package). That way they'll be easier to omit from the Debian package.

I hadn't originally. At first it just started off as 'from
gwibber.utils.account import AccountManager; a = AccountManager(None);
a._accounts['6/twitter'].protocol.receive()' because I was sick of
typing that out 10,000 times in an interactive python shell every time I
wanted to test a change I made to the code. But once the script was
born, it grew, and then ken expanded it for his own testing purposes,
which is included here as well.

In fact, *all* of the style nits you mention were written by ken ;-)

I had assumed that you would just not merge them. If you think there's
value in keeping them in a tools directory, I guess that's ok. I don't
have a strong opinion either way.

>> + account.protocol.user()
>> + account.protocol.delete('252823527978311680')
>
> What does this number correspond to? Is it private information?

That is a public tweet I made with a previous incarnation of this
script, that I've now deleted by running this script as-is. No worries.

>> + def _get_url(self, url, data=None):
>> + """Access the Twitter API with correct OAuth signed headers."""
>> + # TODO start enforcing some rate limiting.
>
> What are your thoughts on how to do this rate limiting? Is it something that
> Twitter provides through their API? (in a similar manner to Facebook's
> pagination support?). I wonder if there's some commonality refactoring that
> should go on here, at least for determining the constants for the rate
> limiting?

I haven't looked closely at Twitter's rate limiting standards yet, but I
wrote that comment because I ran into Twitter's limit at one point and
they stopped me from downloading tweets that I wanted.

All I know is that each API endpoint has a different maximum number of
requests that can be made "per window" and the length of time of the
window is different, too. So basically I'm envisioning a dict that maps
URLs to ints; the int for a given URL will increase once per
_get_url invocation, and then we just need to figure out how long to
pause before making the request in order not to hit th...
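
Very roughly, the bookkeeping I have in mind would be something like this
(purely illustrative; the numbers and names are made up):

    # Illustrative sketch of per-URL request counting; not real code.
    import time

    _window_start = {}    # url -> when the current rate-limit window began
    _request_count = {}   # url -> requests made so far in that window

    def _throttle(url, max_requests=15, window=900):
        now = time.time()
        start = _window_start.setdefault(url, now)
        if now - start >= window:
            # The window has rolled over; start counting from scratch.
            _window_start[url] = start = now
            _request_count[url] = 0
        _request_count[url] = _request_count.get(url, 0) + 1
        if _request_count[url] > max_requests:
            # Wait out the remainder of the window before hitting the API again.
            time.sleep(window - (now - start))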

lp:~robru/gwibber/twitter updated
1478. By Robert Bruce Park

Allow debug_twitter_live.py to call any action from the commandline.

Commandline args are passed directly to protocol.__call__(), so any
feature supported by Twitter can be invoked directly by this script.

1479. By Robert Bruce Park

Generalize debug_twitter_*.py scripts into debug_*.py scripts.

debug_live has a good docstring that explains its usage.

1480. By Robert Bruce Park

Add tests for Twitter operations.

This also tweaks Twitter.tag so that it silently ignores the # symbols
in hash tags, allowing you to include them or not, at your option.
Previously, if you tried Twitter.tag('#hashtag'), it wouldn't work;
you had to strip the # symbol yourself. Now we accept both.
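
For example, both of these calls now end up hitting the same search URL
(sketch, where protocol is a Twitter instance; this matches the new test_tag):

    protocol.tag('yegbike')     # -> .../search/tweets.json?q=%23yegbike
    protocol.tag('#yegbike')    # -> .../search/tweets.json?q=%23yegbike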

1481. By Robert Bruce Park

Docstring cleanup.

1482. By Robert Bruce Park

Fix bug in Base._unpublish

1483. By Robert Bruce Park

Improve _seen_ids / _seen_messages consistency.

Also fix a comment.

Revision history for this message
Barry Warsaw (barry) wrote :

On Oct 03, 2012, at 12:34 PM, Robert Bruce Park wrote:

>I actually verified that the OAuth signatures worked for *every* API endpoint
>*except* sending new direct messages (we can receive them ok, too). That's
>what debug_twitter_live.py was all about, every time I'd write a new protocol
>operation, I'd modify it to do that new operation, then run it, and I'd see
>on my live twitter account that it worked.

You may have to engage with the Twitter developer community to debug this. I
didn't find anything relevant in a very little bit of googling though. You'd
think that if it were a bug in their API, others would have noticed it.

>I hadn't originally. At first it just started off as 'from
>gwibber.utils.account import AccountManager; a = AccountManager(None);

Just FWIW, I have a patch in my facebook.working branch that makes None the
default argument. You have to be careful not to call the callback if it's
None. I wonder if it's useful to land that in the py3 branch separately?

>In fact, *all* of the style nits you mention were written by ken ;-)

/me hears Kirk's voice scream to the heavens "Keeeeeeennnnnn!"

>I haven't looked closely at Twitter's rate limiting standards yet, but I
>wrote that comment because I ran into Twitter's limit at one point and they
>stopped me from downloading tweets that I wanted.

I think Twitter warns you (via response headers?) if you're getting rate
limited. So I guess you could check for those and automatically back off if
you see them. It's probably better to at least have some combination of
proactive and reactive response to rate limiting. The gory details:

https://dev.twitter.com/docs/rate-limiting
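
Something along these lines, maybe (untested sketch; the header names are the
ones from the docs above):

    # Untested sketch of a reactive back-off using Twitter's response headers.
    import time

    def maybe_back_off(response):
        remaining = int(response.headers.get('X-Rate-Limit-Remaining', 1))
        reset = int(response.headers.get('X-Rate-Limit-Reset', 0))
        if remaining <= 0:
            # Sleep until the window resets before making the next request.
            time.sleep(max(reset - time.time(), 0))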

>>> + # "Client" == "Consumer" in oauthlib parlance.
>>> + client_key = self._account.auth.parameters['ConsumerKey']
>>> + client_secret = self._account.auth.parameters['ConsumerSecret']
>>
>> Can these fail? Are the other parts of your code prepared to handle the
>> resulting KeyErrors if these parameters are missing? Do you have any tests
>> for those situations?
>
>No, these are constants defined by Twitter at app-registration time, stored
>in libaccounts-sso package. Can never fail. They may change in the event that
>somebody nefarious compromises them (which ought to be easy to do for an
>open-source package), but they won't change during runtime.

Cool. In that case, the code above is fine because if they *do* fail, you
want to know via the logged KeyErrors.

>>> + def _publish_tweet(self, tweet):
>>> + """Publish a single tweet into the Dee.SharedModel."""
>>> + tweet_id = tweet.get('id_str', '')
>>
>> Does it still make sense to publish this tweet if there is no id_str?
>> Won't that possibly result in row collisions, since the only unique item in
>> the key will be the empty string for multiple id_str-less entries?
>>
>> I think in my Facebook branch, I just skip the JSON response entry if
>> there's no message_id.
>
>Yeah, that's probably reasonable. I just write 'adict.get' out of habit for
>the sake of being as flexible as possible.

As discussed over IRC, something like:

    tweet_id = tweet.get('id_str')
    if tweet_id is None:
        log.info...

lp:~robru/gwibber/twitter updated
1484. By Robert Bruce Park

Fix up Base._unpublish: Better tests, bugs fixed.

In particular, I've fixed the removal of ids when there were duplicates
before, and I've also eliminated the possibility of collisions between
message ids from different services (there used to be a small chance
that twitter and facebook could assign the same message id to
different messages, and then _unpublish would get confused).

And of course, improved test coverage to prove that this is all good
now ;-)

1485. By Robert Bruce Park

Typo.

1486. By Robert Bruce Park

Cleanup some API formatting.

1487. By Robert Bruce Park

Added comment as per barry.

1488. By Robert Bruce Park

Undo a change that wasn't necessary.

Revision history for this message
Robert Bruce Park (robru) wrote :

On 12-10-03 05:22 PM, Barry Warsaw wrote:
> On Oct 03, 2012, at 12:34 PM, Robert Bruce Park wrote:
>> I actually verified that the OAuth signatures worked for *every* API endpoint
>> *except* sending new direct messages (we can receive them ok, too). That's
>> what debug_twitter_live.py was all about, every time I'd write a new protocol
>> operation, I'd modify it to do that new operation, then run it, and I'd see
>> on my live twitter account that it worked.
>
> You may have to engage with the Twitter developer community to debug this. I
> didn't find anything relevant in a very little bit of googling though. You'd
> think that if it were a bug in their API, others would have noticed it.

Yeah, no idea. Maybe we're the first to port to 1.1 ;-)

I'll look into it a little bit later tonight, got some other
refactoring/cleanup going on.

>> I hadn't originally. At first it just started off as 'from
>> gwibber.utils.account import AccountManager; a = AccountManager(None);
>
> Just FWIW, I have a patch in my facebook.working branch that makes None the
> default argument. You have to be careful not to call the callback if it's
> None. I wonder if it's useful to land that in the py3 branch separately?

I don't think it matters. Maybe it should just default to lambda: None
and then we don't have to worry about whether the callback gets called
or not. As for debug_live.py, it's not meant to be a long-running
script, just short stints for testing, so it's unlikely that the
callback would be triggered during its operation.
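
For the default, I mean something like this (sketch of the idea only, not
what's actually in account.py):

    # Sketch: a no-op default callback means it is always safe to call.
    class AccountManager:
        def __init__(self, callback=lambda *ignore: None):
            self._callback = callback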

>> I haven't looked closely at Twitter's rate limiting standards yet, but I
>> wrote that comment because I ran into Twitter's limit at one point and they
>> stopped me from downloading tweets that I wanted.
>
> I think Twitter warns you (via response headers?) if you're getting rate
> limited. So I guess you could check for those and automatically back-off if
> you see them. It's probably better to at least have some combination of
> proactive and reactive response to rate limiting. The gory details:
>
> https://dev.twitter.com/docs/rate-limiting

Yeah, I'm gonna dive into this soon, but so far I've just been focusing
on cleaning up the code that's already there and improving the tests.

>>>> +# https://dev.twitter.com/docs/api/1.1/get/statuses/mentions_timeline
>>>> + @feature
>>>> + def mentions(self):
>>>> + """Gather the tweets that mention us."""
>>>> + url = self._timeline.format('mentions')
>>>> + for tweet in self._get_url(url):
>>>> + self._publish_tweet(tweet)
>>>
>>> Do you have tests for each of these? I know they *look* ridiculously
>>> similar, but each should have a test.
>>
>> I wasn't really sure how to test them, to be honest.
>
> It's probably good enough to use the same, or very similar sample data, but
> just call .mentions(), etc. in the tests.
>
> Really, the only thing you care about is that you've tested the basic code in
> this (and similar) methods, not in anything it calls. Let's say for example
> that you - or a future developer - typo'd "self._publish_twit()". That's the
> kind of thing you want these tests to catch.

Fair enough. I wrote some t...
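
They're along these lines (this one is taken from the new test_twitter.py in
the diff below):

    def test_mentions(self):
        get_url = self.protocol._get_url = mock.Mock(return_value=['tweet'])
        publish = self.protocol._publish_tweet = mock.Mock()

        self.protocol.mentions()

        publish.assert_called_with('tweet')
        get_url.assert_called_with(
            'https://api.twitter.com/1.1/statuses/mentions_timeline.json')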

lp:~robru/gwibber/twitter updated
1489. By Robert Bruce Park

Implement a basic rate limiter.

This had to be done in the Downloader class unfortunately because it
needed direct access to the HTTP headers and I didn't really see any
way to access those outside of the Downloader class.

Luckily it will harmlessly ignore any protocol that doesn't specify
rate limits in the HTTP headers, so it should work well for Twitter
and do nothing for anybody else.

This basically makes no attempt to limit you if you have more than 5
requests remaining in the rate limit window (making it unobtrusive for
cases where the dispatcher is just calling every 5 minutes and it
really should never come near the rate limiter *anyway*), but if
something unusual has happened, such as a user requesting too many manual
refreshes, then it will forcibly pause each request long enough to
avoid hitting Twitter's rate limiter.
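
In other words, the wait per request boils down to roughly this (simplified;
the real calculation lives in download.py):

    # Simplified sketch of the wait calculation exercised by the new tests:
    # spread the remaining requests evenly over the time left in the window,
    # but only once 5 or fewer requests remain.
    def _calculate_wait(remaining, reset_time, now):
        if remaining > 5:
            return 0
        return (reset_time - now) / max(remaining, 1)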

1490. By Robert Bruce Park

Prevent duplicate message ids from filling up the message_ids column.

It turns out that it's quite easy to see the same message multiple
times, and while the dupe-checking logic previously succeeded in
stopping the message from being published more than once, it blindly
appended the message_id to the list regardless of whether or not it
was already on the list, so the list tended to fill up with junk quickly.

With test case.
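
The fix amounts to checking membership before appending (sketch; the names
here are illustrative, not the real base.py code):

    # Only record a message_id we haven't already seen for this row,
    # instead of blindly appending every time.
    key = [protocol_name, account_id, message_id]
    if key not in existing_message_ids:
        existing_message_ids.append(key)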

Revision history for this message
Robert Bruce Park (robru) wrote :

Ok, barry, I think we have a pretty solid thing here, rate limiting and everything! Have another look over the diff and let me know what you think ;-)

The only real issue remaining is that send_private still 403s, but I've sent a message to the Twitter support forum, so hopefully they respond to that soon.

lp:~robru/gwibber/twitter updated
1491. By Robert Bruce Park

Fill out some comments and docstrings.

1492. By Robert Bruce Park

Catch 403s in send_private, with test.
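
The handling is just this (as it appears in twitter.py in the diff below):

    try:
        tweet = self._get_url(url, dict(text=message, screen_name=screen_name))
        self._publish_tweet(tweet)
    except HTTPError as e:
        log.error('{}: Does that user follow you?'.format(e))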

1493. By Robert Bruce Park

Add tests for the rate limiter.

Preview Diff

=== modified file 'gwibber/gwibber/protocols/twitter.py'
--- gwibber/gwibber/protocols/twitter.py 2012-09-19 22:21:39 +0000
+++ gwibber/gwibber/protocols/twitter.py 2012-10-04 20:29:21 +0000
@@ -14,20 +14,289 @@
 
 """The Twitter protocol plugin."""
 
+
 __all__ = [
     'Twitter',
     ]
 
 
-import gettext
 import logging
 
-from gwibber.utils.base import Base
+from oauthlib.oauth1 import Client
+from urllib.error import HTTPError
+from urllib.parse import quote
+
+from gwibber.utils.authentication import Authentication
+from gwibber.utils.base import Base, feature
+from gwibber.utils.download import get_json
+from gwibber.utils.time import parsetime, iso8601utc
 
 
 log = logging.getLogger('gwibber.service')
-_ = gettext.lgettext
 
 
+# https://dev.twitter.com/docs/api/1.1
 class Twitter(Base):
-    pass
+    # StatusNet claims to mimick the Twitter API very closely (so
41 # closely to the point that they refer you to the twitter API
42 # reference docs as a starting point for learning about their
43 # API), So these prefixes are defined here as class attributes
44 # instead of the usual module globals, in the hopes that the
45 # StatusNet class will be able to subclass Twitter and change only
46 # the URLs, with minimal other changes, and magically work.
47 _api_base = 'https://api.twitter.com/1.1/{endpoint}.json'
48
49 _timeline = _api_base.format(endpoint='statuses/{}_timeline')
50 _user_timeline = _timeline.format('user') + '?screen_name={}'
51
52 _lists = _api_base.format(endpoint='lists/statuses') + '?list_id={}'
53
54 _destroy = _api_base.format(endpoint='statuses/destroy/{}')
55 _retweet = _api_base.format(endpoint='statuses/retweet/{}')
56
57 _tweet_permalink = 'https://twitter.com/{user_id}/status/{tweet_id}'
58
59 def _locked_login(self, old_token):
60 """Sign in without worrying about concurrent login attempts."""
61 result = Authentication(self._account, log).login()
62 if result is None:
63 log.error('No Twitter authentication results received.')
64 return
65
66 token = result.get('AccessToken')
67 if token is None:
68 log.error('No AccessToken in Twitter session: {!r}', result)
69 else:
70 self._account.access_token = token
71 self._account.secret_token = result.get('TokenSecret')
72 self._account.user_id = result.get('UserId')
73 self._account.user_name = result.get('ScreenName')
74 log.debug('{} UID: {}'.format(self.__class__.__name__,
75 self._account.user_id))
76
77 def _get_url(self, url, data=None):
78 """Access the Twitter API with correct OAuth signed headers."""
79 # TODO start enforcing some rate limiting.
80 do_post = data is not None
81
82 # "Client" == "Consumer" in oauthlib parlance.
83 client_key = self._account.auth.parameters['ConsumerKey']
84 client_secret = self._account.auth.parameters['ConsumerSecret']
85
86 # "resource_owner" == secret and token.
87 resource_owner_key = self._get_access_token()
88 resource_owner_secret = self._account.secret_token
89 oauth_client = Client(client_key, client_secret,
90 resource_owner_key, resource_owner_secret)
91
92 headers = {}
93 if do_post:
94 headers['Content-Type'] = 'application/x-www-form-urlencoded'
95
96 # All we care about is the headers, which will contain the
97 # Authorization header necessary to satisfy OAuth.
98 uri, headers, body = oauth_client.sign(
99 url, body=data, headers=headers,
100 http_method='POST' if do_post else 'GET')
101
102 return get_json(url,
103 params=data,
104 headers=headers,
105 post=do_post)
106
107 def _publish_tweet(self, tweet):
108 """Publish a single tweet into the Dee.SharedModel."""
109 tweet_id = tweet.get('id_str')
110 if tweet_id is None:
111 log.info('We got a tweet with no id! Abort!')
112 return
113
114 user = tweet.get('user', {})
115 screen_name = user.get('screen_name', '')
116 self._publish(
117 message_id=tweet_id,
118 message=tweet.get('text', ''),
119 timestamp=iso8601utc(parsetime(tweet.get('created_at', ''))),
120 stream='messages',
121 sender=user.get('name', ''),
122 sender_nick=screen_name,
123 from_me=(screen_name == self._account.user_id),
124 icon_uri=user.get('profile_image_url_https', ''),
125 liked=tweet.get('favorited', False),
126 url=self._tweet_permalink.format(user_id=screen_name,
127 tweet_id=tweet_id),
128 )
129
130# https://dev.twitter.com/docs/api/1.1/get/statuses/home_timeline
131 @feature
132 def home(self):
133 """Gather the user's home timeline."""
134 url = self._timeline.format('home')
135 for tweet in self._get_url(url):
136 self._publish_tweet(tweet)
137
138# https://dev.twitter.com/docs/api/1.1/get/statuses/mentions_timeline
139 @feature
140 def mentions(self):
141 """Gather the tweets that mention us."""
142 url = self._timeline.format('mentions')
143 for tweet in self._get_url(url):
144 self._publish_tweet(tweet)
145
146# https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline
147 @feature
148 def user(self, screen_name=''):
149 """Gather the tweets from a specific user.
150
151 If screen_name is not specified, then gather the tweets
152 written by the currently authenticated user.
153 """
154 url = self._user_timeline.format(screen_name)
155 for tweet in self._get_url(url):
156 self._publish_tweet(tweet)
157
158# https://dev.twitter.com/docs/api/1.1/get/lists/statuses
159 @feature
160 def list(self, list_id):
161 """Gather the tweets from the specified list_id."""
162 url = self._lists.format(list_id)
163 for tweet in self._get_url(url):
164 self._publish_tweet(tweet)
165
166# https://dev.twitter.com/docs/api/1.1/get/lists/list
167 @feature
168 def lists(self):
169 """Gather the tweets from the lists that the we are subscribed to."""
170 url = self._api_base.format(endpoint='lists/list')
171 for twitlist in self._get_url(url):
172 self.list(twitlist.get('id_str', ''))
173
174# https://dev.twitter.com/docs/api/1.1/get/direct_messages
175# https://dev.twitter.com/docs/api/1.1/get/direct_messages/sent
176 @feature
177 def private(self):
178 """Gather the direct messages sent to/from us."""
179 url = self._api_base.format(endpoint='direct_messages')
180 for tweet in self._get_url(url):
181 self._publish_tweet(tweet)
182
183 url = self._api_base.format(endpoint='direct_messages/sent')
184 for tweet in self._get_url(url):
185 self._publish_tweet(tweet)
186
187 @feature
188 def receive(self):
189 """Gather and publish all incoming messages."""
190 # TODO I know mentions and lists are actually incorporated
191 # within the home timeline, but calling them explicitly will
192 # ensure that more of those types of messages appear in the
193 # timeline (eg, so they don't get drown out by everything
194 # else). I'm not sure how necessary it is, though.
195 self.home()
196 self.mentions()
197 self.lists()
198 self.private()
199
200 @feature
201 def send_private(self, screen_name, message):
202 """Send a direct message to the given screen name.
203
204 This will error 403 if the person you are sending to does not
205 follow you.
206 """
207 url = self._api_base.format(endpoint='direct_messages/new')
208 try:
209 tweet = self._get_url(url, dict(text=message, screen_name=screen_name))
210 self._publish_tweet(tweet)
211 except HTTPError as e:
212 log.error('{}: Does that user follow you?'.format(e))
213
214# https://dev.twitter.com/docs/api/1.1/post/statuses/update
215 @feature
216 def send(self, message):
217 """Publish a public tweet."""
218 url = self._api_base.format(endpoint='statuses/update')
219 tweet = self._get_url(url, dict(status=message))
220 self._publish_tweet(tweet)
221
222# https://dev.twitter.com/docs/api/1.1/post/statuses/update
223 @feature
224 def send_thread(self, message_id, message):
225 """Send a reply message to message_id.
226
227 Note that you have to @mention the message_id owner's screen name in
228 order for Twitter to actually accept this as a reply. Otherwise it will
229 just be an ordinary tweet.
230 """
231 url = self._api_base.format(endpoint='statuses/update')
232 tweet = self._get_url(url, dict(in_reply_to_status_id=message_id,
233 status=message))
234 self._publish_tweet(tweet)
235
236# https://dev.twitter.com/docs/api/1.1/post/statuses/destroy/%3Aid
237 @feature
238 def delete(self, message_id):
239 """Delete a tweet that you wrote."""
240 url = self._destroy.format(message_id)
241 tweet = self._get_url(url, dict(trim_user='true'))
242 self._unpublish(message_id)
243
244# https://dev.twitter.com/docs/api/1.1/post/statuses/retweet/%3Aid
245 @feature
246 def retweet(self, message_id):
247 """Republish somebody else's tweet with your name on it."""
248 url = self._retweet.format(message_id)
249 tweet = self._get_url(url, dict(trim_user='true'))
250 self._publish_tweet(tweet)
251
252# https://dev.twitter.com/docs/api/1.1/post/friendships/destroy
253 @feature
254 def unfollow(self, screen_name):
255 """Stop following the given screen name."""
256 url = self._api_base.format(endpoint='friendships/destroy')
257 self._get_url(url, dict(screen_name=screen_name))
258
259# https://dev.twitter.com/docs/api/1.1/post/friendships/create
260 @feature
261 def follow(self, screen_name):
262 """Start following the given screen name."""
263 url = self._api_base.format(endpoint='friendships/create')
264 self._get_url(url, dict(screen_name=screen_name, follow='true'))
265
266# https://dev.twitter.com/docs/api/1.1/post/favorites/create
267 @feature
268 def like(self, message_id):
269 """Announce to the world your undying love for a tweet."""
270 url = self._api_base.format(endpoint='favorites/create')
271 tweet = self._get_url(url, dict(id=message_id))
272 # I don't think we need to publish this tweet because presumably
273 # the user has clicked the 'favorite' button on the message that's
274 # already in the stream...
275
276# https://dev.twitter.com/docs/api/1.1/post/favorites/destroy
277 @feature
278 def unlike(self, message_id):
279 """Renounce your undying love for a tweet."""
280 url = self._api_base.format(endpoint='favorites/destroy')
281 tweet = self._get_url(url, dict(id=message_id))
282
283# https://dev.twitter.com/docs/api/1.1/get/search/tweets
284 @feature
285 def tag(self, hashtag):
286 """Return a list of some recent tweets mentioning hashtag."""
287 url = self._api_base.format(endpoint='search/tweets')
288
289 response = self._get_url(
290 '{}?q=%23{}'.format(url, hashtag.lstrip('#')))
291 for tweet in response.get('statuses', []):
292 self._publish_tweet(tweet)
293
294# https://dev.twitter.com/docs/api/1.1/get/search/tweets
295 @feature
296 def search(self, query):
297 """Search for any arbitrary string."""
298 url = self._api_base.format(endpoint='search/tweets')
299
300 response = self._get_url('{}?q={}'.format(url, quote(query, safe='')))
301 for tweet in response.get('statuses', []):
302 self._publish_tweet(tweet)
=== modified file 'gwibber/gwibber/testing/mocks.py'
--- gwibber/gwibber/testing/mocks.py 2012-09-21 21:45:06 +0000
+++ gwibber/gwibber/testing/mocks.py 2012-10-04 20:29:21 +0000
@@ -71,21 +71,30 @@
 class FakeData:
     """Mimic a urlopen() that returns canned data."""
 
-    def __init__(self, path, resource, charset='utf-8'):
+    def __init__(self, path, resource, charset='utf-8', headers=None):
         # resource_string() always returns bytes.
         self._data = resource_string(path, resource)
         self.call_count = 0
         self._charset = charset
+        self._headers = headers or {}
 
     def __call__(self, url, post_data=None):
         # Ignore url and post_data since the canned data will already
         # represents these results. We just have to make the API fit.
         # post_data will be missing for GETs.
         charset = self._charset
+        class FakeInfo:
+            def __init__(self, headers=None):
+                self.headers = headers or {}
+            def get(self, key, default=None):
+                return self.headers.get(key, default)
+            def get_content_charset(self):
+                return charset
         class FakeOpen:
-            def __init__(self, data, charset):
+            def __init__(self, data, charset, headers=None):
                 self._data = data
                 self._charset = charset
+                self.headers = headers or {}
             def read(self):
                 return self._data
             def __enter__(self):
@@ -93,12 +102,9 @@
             def __exit__(self, *args, **kws):
                 pass
             def info(self):
-                class FakeInfo:
-                    def get_content_charset(self):
-                        return charset
-                return FakeInfo()
+                return FakeInfo(self.headers)
         self.call_count += 1
-        return FakeOpen(self._data, self._charset)
+        return FakeOpen(self._data, self._charset, self._headers)
 
 
 class SettingsIterMock:
 
=== added file 'gwibber/gwibber/tests/data/twitter-home.dat'
--- gwibber/gwibber/tests/data/twitter-home.dat 1970-01-01 00:00:00 +0000
+++ gwibber/gwibber/tests/data/twitter-home.dat 2012-10-04 20:29:21 +0000
@@ -0,0 +1,344 @@
1 [
2 {
3 "coordinates": null,
4 "truncated": false,
5 "created_at": "Tue Aug 28 21:16:23 +0000 2012",
6 "favorited": false,
7 "id_str": "240558470661799936",
8 "in_reply_to_user_id_str": null,
9 "entities": {
10 "urls": [
11
12 ],
13 "hashtags": [
14
15 ],
16 "user_mentions": [
17
18 ]
19 },
20 "text": "just another test",
21 "contributors": null,
22 "id": 240558470661799936,
23 "retweet_count": 0,
24 "in_reply_to_status_id_str": null,
25 "geo": null,
26 "retweeted": false,
27 "in_reply_to_user_id": null,
28 "place": null,
29 "source": "<a href=\"http://realitytechnicians.com\" rel=\"nofollow\">OAuth Dancer Reborn</a>",
30 "user": {
31 "name": "OAuth Dancer",
32 "profile_sidebar_fill_color": "DDEEF6",
33 "profile_background_tile": true,
34 "profile_sidebar_border_color": "C0DEED",
35 "profile_image_url": "http://a0.twimg.com/profile_images/730275945/oauth-dancer_normal.jpg",
36 "created_at": "Wed Mar 03 19:37:35 +0000 2010",
37 "location": "San Francisco, CA",
38 "follow_request_sent": false,
39 "id_str": "119476949",
40 "is_translator": false,
41 "profile_link_color": "0084B4",
42 "entities": {
43 "url": {
44 "urls": [
45 {
46 "expanded_url": null,
47 "url": "http://bit.ly/oauth-dancer",
48 "indices": [
49 0,
50 26
51 ],
52 "display_url": null
53 }
54 ]
55 },
56 "description": null
57 },
58 "default_profile": false,
59 "url": "http://bit.ly/oauth-dancer",
60 "contributors_enabled": false,
61 "favourites_count": 7,
62 "utc_offset": null,
63 "profile_image_url_https": "https://si0.twimg.com/profile_images/730275945/oauth-dancer_normal.jpg",
64 "id": 119476949,
65 "listed_count": 1,
66 "profile_use_background_image": true,
67 "profile_text_color": "333333",
68 "followers_count": 28,
69 "lang": "en",
70 "protected": false,
71 "geo_enabled": true,
72 "notifications": false,
73 "description": "",
74 "profile_background_color": "C0DEED",
75 "verified": false,
76 "time_zone": null,
77 "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/80151733/oauth-dance.png",
78 "statuses_count": 166,
79 "profile_background_image_url": "http://a0.twimg.com/profile_background_images/80151733/oauth-dance.png",
80 "default_profile_image": false,
81 "friends_count": 14,
82 "following": false,
83 "show_all_inline_media": false,
84 "screen_name": "oauth_dancer"
85 },
86 "in_reply_to_screen_name": null,
87 "in_reply_to_status_id": null
88 },
89 {
90 "coordinates": {
91 "coordinates": [
92 -122.25831,
93 37.871609
94 ],
95 "type": "Point"
96 },
97 "truncated": false,
98 "created_at": "Tue Aug 28 21:08:15 +0000 2012",
99 "favorited": false,
100 "id_str": "240556426106372096",
101 "in_reply_to_user_id_str": null,
102 "entities": {
103 "urls": [
104 {
105 "expanded_url": "http://blogs.ischool.berkeley.edu/i290-abdt-s12/",
106 "url": "http://t.co/bfj7zkDJ",
107 "indices": [
108 79,
109 99
110 ],
111 "display_url": "blogs.ischool.berkeley.edu/i290-abdt-s12/"
112 }
113 ],
114 "hashtags": [
115
116 ],
117 "user_mentions": [
118 {
119 "name": "Cal",
120 "id_str": "17445752",
121 "id": 17445752,
122 "indices": [
123 60,
124 64
125 ],
126 "screen_name": "Cal"
127 },
128 {
129 "name": "Othman Laraki",
130 "id_str": "20495814",
131 "id": 20495814,
132 "indices": [
133 70,
134 77
135 ],
136 "screen_name": "othman"
137 }
138 ]
139 },
140 "text": "lecturing at the \"analyzing big data with twitter\" class at @cal with @othman http://t.co/bfj7zkDJ",
141 "contributors": null,
142 "id": 240556426106372096,
143 "retweet_count": 3,
144 "in_reply_to_status_id_str": null,
145 "geo": {
146 "coordinates": [
147 37.871609,
148 -122.25831
149 ],
150 "type": "Point"
151 },
152 "retweeted": false,
153 "possibly_sensitive": false,
154 "in_reply_to_user_id": null,
155 "place": {
156 "name": "Berkeley",
157 "country_code": "US",
158 "country": "United States",
159 "attributes": {
160 },
161 "url": "http://api.twitter.com/1/geo/id/5ef5b7f391e30aff.json",
162 "id": "5ef5b7f391e30aff",
163 "bounding_box": {
164 "coordinates": [
165 [
166 [
167 -122.367781,
168 37.835727
169 ],
170 [
171 -122.234185,
172 37.835727
173 ],
174 [
175 -122.234185,
176 37.905824
177 ],
178 [
179 -122.367781,
180 37.905824
181 ]
182 ]
183 ],
184 "type": "Polygon"
185 },
186 "full_name": "Berkeley, CA",
187 "place_type": "city"
188 },
189 "source": "<a href=\"http://www.apple.com\" rel=\"nofollow\">Safari on iOS</a>",
190 "user": {
191 "name": "Raffi Krikorian",
192 "profile_sidebar_fill_color": "DDEEF6",
193 "profile_background_tile": false,
194 "profile_sidebar_border_color": "C0DEED",
195 "profile_image_url": "http://a0.twimg.com/profile_images/1270234259/raffi-headshot-casual_normal.png",
196 "created_at": "Sun Aug 19 14:24:06 +0000 2007",
197 "location": "San Francisco, California",
198 "follow_request_sent": false,
199 "id_str": "8285392",
200 "is_translator": false,
201 "profile_link_color": "0084B4",
202 "entities": {
203 "url": {
204 "urls": [
205 {
206 "expanded_url": "http://about.me/raffi.krikorian",
207 "url": "http://t.co/eNmnM6q",
208 "indices": [
209 0,
210 19
211 ],
212 "display_url": "about.me/raffi.krikorian"
213 }
214 ]
215 },
216 "description": {
217 "urls": [
218
219 ]
220 }
221 },
222 "default_profile": true,
223 "url": "http://t.co/eNmnM6q",
224 "contributors_enabled": false,
225 "favourites_count": 724,
226 "utc_offset": -28800,
227 "profile_image_url_https": "https://si0.twimg.com/profile_images/1270234259/raffi-headshot-casual_normal.png",
228 "id": 8285392,
229 "listed_count": 619,
230 "profile_use_background_image": true,
231 "profile_text_color": "333333",
232 "followers_count": 18752,
233 "lang": "en",
234 "protected": false,
235 "geo_enabled": true,
236 "notifications": false,
237 "description": "Director of @twittereng's Platform Services. I break things.",
238 "profile_background_color": "C0DEED",
239 "verified": false,
240 "time_zone": "Pacific Time (US & Canada)",
241 "profile_background_image_url_https": "https://si0.twimg.com/images/themes/theme1/bg.png",
242 "statuses_count": 5007,
243 "profile_background_image_url": "http://a0.twimg.com/images/themes/theme1/bg.png",
244 "default_profile_image": false,
245 "friends_count": 701,
246 "following": true,
247 "show_all_inline_media": true,
248 "screen_name": "raffi"
249 },
250 "in_reply_to_screen_name": null,
251 "in_reply_to_status_id": null
252 },
253 {
254 "coordinates": null,
255 "truncated": false,
256 "created_at": "Tue Aug 28 19:59:34 +0000 2012",
257 "favorited": false,
258 "id_str": "240539141056638977",
259 "in_reply_to_user_id_str": null,
260 "entities": {
261 "urls": [
262
263 ],
264 "hashtags": [
265
266 ],
267 "user_mentions": [
268
269 ]
270 },
271 "text": "You'd be right more often if you thought you were wrong.",
272 "contributors": null,
273 "id": 240539141056638977,
274 "retweet_count": 1,
275 "in_reply_to_status_id_str": null,
276 "geo": null,
277 "retweeted": false,
278 "in_reply_to_user_id": null,
279 "place": null,
280 "source": "web",
281 "user": {
282 "name": "Taylor Singletary",
283 "profile_sidebar_fill_color": "FBFBFB",
284 "profile_background_tile": true,
285 "profile_sidebar_border_color": "000000",
286 "profile_image_url": "http://a0.twimg.com/profile_images/2546730059/f6a8zq58mg1hn0ha8vie_normal.jpeg",
287 "created_at": "Wed Mar 07 22:23:19 +0000 2007",
288 "location": "San Francisco, CA",
289 "follow_request_sent": false,
290 "id_str": "819797",
291 "is_translator": false,
292 "profile_link_color": "c71818",
293 "entities": {
294 "url": {
295 "urls": [
296 {
297 "expanded_url": "http://www.rebelmouse.com/episod/",
298 "url": "http://t.co/Lxw7upbN",
299 "indices": [
300 0,
301 20
302 ],
303 "display_url": "rebelmouse.com/episod/"
304 }
305 ]
306 },
307 "description": {
308 "urls": [
309
310 ]
311 }
312 },
313 "default_profile": false,
314 "url": "http://t.co/Lxw7upbN",
315 "contributors_enabled": false,
316 "favourites_count": 15990,
317 "utc_offset": -28800,
318 "profile_image_url_https": "https://si0.twimg.com/profile_images/2546730059/f6a8zq58mg1hn0ha8vie_normal.jpeg",
319 "id": 819797,
320 "listed_count": 340,
321 "profile_use_background_image": true,
322 "profile_text_color": "D20909",
323 "followers_count": 7126,
324 "lang": "en",
325 "protected": false,
326 "geo_enabled": true,
327 "notifications": false,
328 "description": "Reality Technician, Twitter API team, synthesizer enthusiast; a most excellent adventure in timelines. I know it's hard to believe in something you can't see.",
329 "profile_background_color": "000000",
330 "verified": false,
331 "time_zone": "Pacific Time (US & Canada)",
332 "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/643655842/hzfv12wini4q60zzrthg.png",
333 "statuses_count": 18076,
334 "profile_background_image_url": "http://a0.twimg.com/profile_background_images/643655842/hzfv12wini4q60zzrthg.png",
335 "default_profile_image": false,
336 "friends_count": 5444,
337 "following": true,
338 "show_all_inline_media": true,
339 "screen_name": "episod"
340 },
341 "in_reply_to_screen_name": null,
342 "in_reply_to_status_id": null
343 }
344 ]
=== modified file 'gwibber/gwibber/tests/test_dbus.py'
--- gwibber/gwibber/tests/test_dbus.py 2012-09-25 20:25:42 +0000
+++ gwibber/gwibber/tests/test_dbus.py 2012-10-04 20:29:21 +0000
@@ -111,6 +111,11 @@
                              '/com/gwibber/Service')
         iface = dbus.Interface(obj, 'com.Gwibber.Service')
         # TODO Add more cases as more protocols are added.
+        self.assertEqual(json.loads(iface.GetFeatures('twitter')),
+                         ['delete', 'follow', 'home', 'like', 'list', 'lists',
+                          'mentions', 'private', 'receive', 'retweet', 'search',
+                          'send', 'send_private', 'send_thread', 'tag',
+                          'unfollow', 'unlike', 'user'])
         self.assertEqual(json.loads(iface.GetFeatures('flickr')), ['receive'])
         self.assertEqual(json.loads(iface.GetFeatures('foursquare')),
                          ['receive'])
 
=== modified file 'gwibber/gwibber/tests/test_download.py'
--- gwibber/gwibber/tests/test_download.py 2012-09-14 20:29:11 +0000
+++ gwibber/gwibber/tests/test_download.py 2012-10-04 20:29:21 +0000
@@ -249,3 +249,44 @@
                              headers={'X-Foo': 'baz',
                                       'X-Bar': 'foo'}),
             dict(foo='baz', bar='foo'))
252
253 @mock.patch('gwibber.utils.download.urlopen',
254 FakeData('gwibber.tests.data', 'twitter-home.dat',
255 headers={'X-Rate-Limit-Reset':1349382153 + 300,
256 'X-Rate-Limit-Remaining': 1}))
257 @mock.patch('gwibber.utils.download.time.sleep')
258 @mock.patch('gwibber.utils.download.time.time', return_value=1349382153)
259 def test_rate_limiter_maximum(self, time, sleep):
260 # First call does not get limited, but establishes the limit
261 get_json('http://example.com/alpha')
262 sleep.assert_called_with(0)
263 # Second call gets called with the established limit
264 get_json('http://example.com/alpha')
265 sleep.assert_called_with(300)
266
267 @mock.patch('gwibber.utils.download.urlopen',
268 FakeData('gwibber.tests.data', 'twitter-home.dat',
269 headers={'X-Rate-Limit-Reset':1349382153 + 300,
270 'X-Rate-Limit-Remaining': 3}))
271 @mock.patch('gwibber.utils.download.time.sleep')
272 @mock.patch('gwibber.utils.download.time.time', return_value=1349382153)
273 def test_rate_limiter_medium(self, time, sleep):
274 # First call does not get limited, but establishes the limit
275 get_json('http://example.com/beta')
276 sleep.assert_called_with(0)
277 # Second call gets called with the established limit
278 get_json('http://example.com/beta')
279 sleep.assert_called_with(100.0)
280
281 @mock.patch('gwibber.utils.download.urlopen',
282 FakeData('gwibber.tests.data', 'twitter-home.dat',
283 headers={'X-Rate-Limit-Reset':int(time.time()) + 300,
284 'X-Rate-Limit-Remaining': 10}))
285 @mock.patch('gwibber.utils.download.time.sleep')
286 def test_rate_limiter_unlimited(self, sleep):
287 # First call does not get limited, but establishes the limit
288 get_json('http://example.com/omega')
289 sleep.assert_called_with(0)
290 # Second call gets called with the established limit
291 get_json('http://example.com/omega')
292 sleep.assert_called_with(0)
=== modified file 'gwibber/gwibber/tests/test_protocols.py'
--- gwibber/gwibber/tests/test_protocols.py 2012-09-25 20:25:42 +0000
+++ gwibber/gwibber/tests/test_protocols.py 2012-10-04 20:29:21 +0000
@@ -176,6 +176,7 @@
 
     @mock.patch('gwibber.utils.base.Model', TestModel)
     @mock.patch('gwibber.utils.base._seen_messages', {})
+    @mock.patch('gwibber.utils.base._seen_ids', {})
     def test_one_message(self):
         # Test that publishing a message inserts a row into the model.
         base = Base(FakeAccount())
@@ -215,6 +216,32 @@
 
     @mock.patch('gwibber.utils.base.Model', TestModel)
     @mock.patch('gwibber.utils.base._seen_messages', {})
219 @mock.patch('gwibber.utils.base._seen_ids', {})
220 def test_unpublish(self):
221 base = Base(FakeAccount())
222 self.assertEqual(0, TestModel.get_n_rows())
223 self.assertTrue(base._publish(
224 message_id='1234',
225 sender='fred',
226 message='hello, @jimmy'))
227 self.assertTrue(base._publish(
228 message_id='5678',
229 sender='fred',
230 message='hello, +jimmy'))
231 self.assertEqual(1, TestModel.get_n_rows())
232 self.assertEqual(TestModel[0][0],
233 [['base', 'faker/than fake', '1234'],
234 ['base', 'faker/than fake', '5678']])
235 base._unpublish('1234')
236 self.assertEqual(1, TestModel.get_n_rows())
237 self.assertEqual(TestModel[0][0],
238 [['base', 'faker/than fake', '5678']])
239 base._unpublish('5678')
240 self.assertEqual(0, TestModel.get_n_rows())
241
242 @mock.patch('gwibber.utils.base.Model', TestModel)
243 @mock.patch('gwibber.utils.base._seen_messages', {})
244 @mock.patch('gwibber.utils.base._seen_ids', {})
     def test_duplicate_messages_identified(self):
         # When two messages which are deemed identical, by way of the
         # _make_key() test in base.py, are published, only one ends up in the
@@ -259,6 +286,31 @@
 
     @mock.patch('gwibber.utils.base.Model', TestModel)
     @mock.patch('gwibber.utils.base._seen_messages', {})
289 @mock.patch('gwibber.utils.base._seen_ids', {})
290 def test_duplicate_ids_not_duplicated(self):
291 # When two messages are actually identical (same ids and all),
292 # we need to avoid duplicating the id in the sharedmodel.
293 base = Base(FakeAccount())
294 self.assertEqual(0, TestModel.get_n_rows())
295 self.assertTrue(base._publish(
296 message_id='1234',
297 stream='messages',
298 sender='fred',
299 message='hello, @jimmy'))
300 self.assertTrue(base._publish(
301 message_id='1234',
302 stream='messages',
303 sender='fred',
304 message='hello, @jimmy'))
305 self.assertEqual(1, TestModel.get_n_rows())
306 row = TestModel.get_row(0)
307 # The same message_id should not appear twice.
308 self.assertEqual(row[COLUMN_INDICES['message_ids']],
309 [['base', 'faker/than fake', '1234']])
310
311 @mock.patch('gwibber.utils.base.Model', TestModel)
312 @mock.patch('gwibber.utils.base._seen_messages', {})
313 @mock.patch('gwibber.utils.base._seen_ids', {})
     def test_similar_messages_allowed(self):
         # Because both the sender and message contribute to the unique key we
         # use to identify messages, if two messages are published with
 
=== modified file 'gwibber/gwibber/tests/test_twitter.py'
--- gwibber/gwibber/tests/test_twitter.py 2012-09-21 14:11:34 +0000
+++ gwibber/gwibber/tests/test_twitter.py 2012-10-04 20:29:21 +0000
@@ -14,6 +14,7 @@
 
 """Test the Twitter plugin."""
 
+
 __all__ = [
     'TestTwitter',
     ]
@@ -21,9 +22,13 @@
 
 import unittest
 
+from gi.repository import Dee
+
 from gwibber.protocols.twitter import Twitter
 from gwibber.testing.helpers import FakeAccount
-from gwibber.testing.mocks import LogMock
+from gwibber.testing.mocks import FakeData, LogMock
+from gwibber.utils.model import COLUMN_TYPES
+
 
 try:
     # Python 3.3
@@ -32,10 +37,12 @@
     import mock
 
 
-# Ensure synchronicity between the main thread and the sub-thread. Also, set
-# up the loggers for the modules-under-test so that we can assert their error
-# messages.
-@mock.patch.dict('gwibber.utils.base.__dict__', {'_SYNCHRONIZE': True})
+# Create a test model that will not interfere with the user's environment.
+# We'll use this object as a mock of the real model.
+TestModel = Dee.SharedModel.new('com.Gwibber.TestSharedModel')
+TestModel.set_schema_full(COLUMN_TYPES)
+
+
 class TestTwitter(unittest.TestCase):
     """Test the Twitter API."""
 
@@ -46,8 +53,289 @@
             'gwibber.protocols.twitter')
 
     def tearDown(self):
+        # Ensure that any log entries we haven't tested just get consumed so
+        # as to isolate out test logger from other tests.
         self.log_mock.stop()
 
-    def test_protocol_info(self):
-        # Each protocol carries with it a number of protocol variables.
-        self.assertEqual(self.protocol.__class__.__name__, 'Twitter')
+    @mock.patch('gwibber.utils.authentication.Authentication.login',
+                return_value=None)
+    @mock.patch('gwibber.utils.download.get_json',
63 return_value=None)
64 def test_unsuccessful_authentication(self, *mocks):
65 self.assertFalse(self.protocol._login())
66 self.assertIsNone(self.account.user_name)
67 self.assertIsNone(self.account.user_id)
68
69 @mock.patch('gwibber.utils.authentication.Authentication.login',
70 return_value=dict(AccessToken='some clever fake data',
71 TokenSecret='sssssshhh!',
72 UserId='rickygervais',
73 ScreenName='Ricky Gervais'))
74 def test_successful_authentication(self, *mocks):
75 self.assertTrue(self.protocol._login())
76 self.assertEqual(self.account.user_name, 'Ricky Gervais')
77 self.assertEqual(self.account.user_id, 'rickygervais')
78 self.assertEqual(self.account.access_token, 'some clever fake data')
79 self.assertEqual(self.account.secret_token, 'sssssshhh!')
80
81
82 @mock.patch('gwibber.protocols.twitter.get_json', lambda *x, **y: y)
83 @mock.patch('oauthlib.oauth1.rfc5849.generate_nonce',
84 lambda: 'once upon a nonce')
85 @mock.patch('oauthlib.oauth1.rfc5849.generate_timestamp',
86 lambda: '1348690628')
87 def test_signatures(self, *mocks):
88 self.account.secret_token = 'alpha'
89 self.account.access_token = 'omega'
90 self.account.auth.id = 6
91 self.account.auth.method = 'oauth2'
92 self.account.auth.mechanism = 'HMAC-SHA1'
93 self.account.auth.parameters = dict(ConsumerKey='consume',
94 ConsumerSecret='obey')
95 result = '''\
96OAuth oauth_nonce="once%20upon%20a%20nonce", \
97oauth_timestamp="1348690628", \
98oauth_version="1.0", \
99oauth_signature_method="HMAC-SHA1", \
100oauth_consumer_key="consume", \
101oauth_token="omega", \
102oauth_signature="2MlC4DOqcAdCUmU647izPmxiL%2F0%3D"'''
103
104 self.assertEqual(
105 self.protocol._get_url('http://example.com')['headers'],
106 dict(Authorization=result))
107
108 @mock.patch('gwibber.utils.base.Model', TestModel)
109 @mock.patch('gwibber.utils.download.urlopen',
110 FakeData('gwibber.tests.data', 'twitter-home.dat'))
111 @mock.patch('gwibber.protocols.twitter.Twitter._login',
112 return_value=True)
113 def test_home(self, *mocks):
114 self.account.access_token = 'access'
115 self.account.secret_token = 'secret'
116 self.account.auth.parameters = dict(
117 ConsumerKey='key',
118 ConsumerSecret='secret')
119 self.assertEqual(0, TestModel.get_n_rows())
120 self.protocol.home()
121 self.assertEqual(3, TestModel.get_n_rows())
122
123 # This test data was ripped directly from Twitter's API docs.
124 expected = [
125 [[['twitter', 'faker/than fake', '240558470661799936']],
126 'messages', 'OAuth Dancer', 'oauth_dancer', False,
127 '2012-08-28T21:16:23', 'just another test', '',
128 'https://si0.twimg.com/profile_images/730275945/oauth-dancer_normal.jpg',
129 'https://twitter.com/oauth_dancer/status/240558470661799936', '',
130 '', '', '', 0.0, False, '', '', '', '', '', '', '', '', '', '', '',
131 '', '', '', '', '', '', '', '', '', '',
132 ],
133 [[['twitter', 'faker/than fake', '240556426106372096']],
134 'messages', 'Raffi Krikorian', 'raffi', False,
135 '2012-08-28T21:08:15', 'lecturing at the "analyzing big data ' +
136 'with twitter" class at @cal with @othman http://t.co/bfj7zkDJ', '',
137 'https://si0.twimg.com/profile_images/1270234259/raffi-headshot-casual_normal.png',
138 'https://twitter.com/raffi/status/240556426106372096', '',
139 '', '', '', 0.0, False, '', '', '', '', '', '', '', '', '', '', '',
140 '', '', '', '', '', '', '', '', '', '',
141 ],
142 [[['twitter', 'faker/than fake', '240539141056638977']],
143 'messages', 'Taylor Singletary', 'episod', False,
144 '2012-08-28T19:59:34', 'You\'d be right more often if you thought you were wrong.', '',
145 'https://si0.twimg.com/profile_images/2546730059/f6a8zq58mg1hn0ha8vie_normal.jpeg',
146 'https://twitter.com/episod/status/240539141056638977', '',
147 '', '', '', 0.0, False, '', '', '', '', '', '', '', '', '', '', '',
148 '', '', '', '', '', '', '', '', '', '',
149 ],
150 ]
151 for i, expected_row in enumerate(expected):
152 for got, want in zip(TestModel.get_row(i), expected_row):
153 self.assertEqual(got, want)
154
155 def test_mentions(self):
156 get_url = self.protocol._get_url = mock.Mock(return_value=['tweet'])
157 publish = self.protocol._publish_tweet = mock.Mock()
158
159 self.protocol.mentions()
160
161 publish.assert_called_with('tweet')
162 get_url.assert_called_with(
163 'https://api.twitter.com/1.1/statuses/mentions_timeline.json')
164
165 def test_user(self):
166 get_url = self.protocol._get_url = mock.Mock(return_value=['tweet'])
167 publish = self.protocol._publish_tweet = mock.Mock()
168
169 self.protocol.user()
170
171 publish.assert_called_with('tweet')
172 get_url.assert_called_with(
173 'https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=')
174
175 def test_list(self):
176 get_url = self.protocol._get_url = mock.Mock(return_value=['tweet'])
177 publish = self.protocol._publish_tweet = mock.Mock()
178
179 self.protocol.list('some_list_id')
180
181 publish.assert_called_with('tweet')
182 get_url.assert_called_with(
183 'https://api.twitter.com/1.1/lists/statuses.json?list_id=some_list_id')
184
185 def test_lists(self):
186 get_url = self.protocol._get_url = mock.Mock(
187 return_value=[dict(id_str='twitlist')])
188 publish = self.protocol.list = mock.Mock()
189
190 self.protocol.lists()
191
192 publish.assert_called_with('twitlist')
193 get_url.assert_called_with(
194 'https://api.twitter.com/1.1/lists/list.json')
195
196 def test_private(self):
197 get_url = self.protocol._get_url = mock.Mock(return_value=['tweet'])
198 publish = self.protocol._publish_tweet = mock.Mock()
199
200 self.protocol.private()
201
202 publish.assert_called_with('tweet')
203 self.assertEqual(
204 get_url.mock_calls,
205 [mock.call('https://api.twitter.com/1.1/direct_messages.json'),
206 mock.call('https://api.twitter.com/1.1/direct_messages/sent.json')])
207
208 def test_send_private(self):
209 get_url = self.protocol._get_url = mock.Mock(return_value='tweet')
210 publish = self.protocol._publish_tweet = mock.Mock()
211
212 self.protocol.send_private('pumpichank', 'Are you mocking me?')
213
214 publish.assert_called_with('tweet')
215 get_url.assert_called_with(
216 'https://api.twitter.com/1.1/direct_messages/new.json',
217 dict(text='Are you mocking me?', screen_name='pumpichank'))
218
219 def test_failing_send_private(self):
220 from urllib.error import HTTPError
221 def failing_request(*ignore):
222 raise HTTPError('url', 403, 'Forbidden', 'Forbidden', mock.Mock())
223 get_url = self.protocol._get_url = failing_request
224 publish = self.protocol._publish_tweet = mock.Mock()
225
226 self.protocol.send_private('pumpichank', 'Are you mocking me?')
227
228 self.assertEqual(
229 self.log_mock.empty(),
230 'HTTP Error 403: Forbidden: Does that user follow you?\n')
231
232 def test_send(self):
233 get_url = self.protocol._get_url = mock.Mock(return_value='tweet')
234 publish = self.protocol._publish_tweet = mock.Mock()
235
236 self.protocol.send('Hello, twitterverse!')
237
238 publish.assert_called_with('tweet')
239 get_url.assert_called_with(
240 'https://api.twitter.com/1.1/statuses/update.json',
241 dict(status='Hello, twitterverse!'))
242
243 def test_send_thread(self):
244 get_url = self.protocol._get_url = mock.Mock(return_value='tweet')
245 publish = self.protocol._publish_tweet = mock.Mock()
246
247 self.protocol.send_thread(
248 '1234',
249 'Why yes, I would love to respond to your tweet @pumpichank!')
250
251 publish.assert_called_with('tweet')
252 get_url.assert_called_with(
253 'https://api.twitter.com/1.1/statuses/update.json',
254 dict(status='Why yes, I would love to respond to your tweet @pumpichank!',
255 in_reply_to_status_id='1234'))
256
257 def test_delete(self):
258 get_url = self.protocol._get_url = mock.Mock(return_value='tweet')
259 publish = self.protocol._unpublish = mock.Mock()
260
261 self.protocol.delete('1234')
262
263 publish.assert_called_with('1234')
264 get_url.assert_called_with(
265 'https://api.twitter.com/1.1/statuses/destroy/1234.json',
266 dict(trim_user='true'))
267
268 def test_retweet(self):
269 get_url = self.protocol._get_url = mock.Mock(return_value='tweet')
270 publish = self.protocol._publish_tweet = mock.Mock()
271
272 self.protocol.retweet('1234')
273
274 publish.assert_called_with('tweet')
275 get_url.assert_called_with(
276 'https://api.twitter.com/1.1/statuses/retweet/1234.json',
277 dict(trim_user='true'))
278
279 def test_unfollow(self):
280 get_url = self.protocol._get_url = mock.Mock()
281
282 self.protocol.unfollow('pumpichank')
283
284 get_url.assert_called_with(
285 'https://api.twitter.com/1.1/friendships/destroy.json',
286 dict(screen_name='pumpichank'))
287
288 def test_follow(self):
289 get_url = self.protocol._get_url = mock.Mock()
290
291 self.protocol.follow('pumpichank')
292
293 get_url.assert_called_with(
294 'https://api.twitter.com/1.1/friendships/create.json',
295 dict(screen_name='pumpichank', follow='true'))
296
297 def test_like(self):
298 get_url = self.protocol._get_url = mock.Mock()
299
300 self.protocol.like('1234')
301
302 get_url.assert_called_with(
303 'https://api.twitter.com/1.1/favorites/create.json',
304 dict(id='1234'))
305
306 def test_unlike(self):
307 get_url = self.protocol._get_url = mock.Mock()
308
309 self.protocol.unlike('1234')
310
311 get_url.assert_called_with(
312 'https://api.twitter.com/1.1/favorites/destroy.json',
313 dict(id='1234'))
314
315 def test_tag(self):
316 get_url = self.protocol._get_url = mock.Mock(
317 return_value=dict(statuses=['tweet']))
318 publish = self.protocol._publish_tweet = mock.Mock()
319
320 self.protocol.tag('yegbike')
321
322 publish.assert_called_with('tweet')
323 get_url.assert_called_with(
324 'https://api.twitter.com/1.1/search/tweets.json?q=%23yegbike')
325
326 self.protocol.tag('#yegbike')
327
328 publish.assert_called_with('tweet')
329 get_url.assert_called_with(
330 'https://api.twitter.com/1.1/search/tweets.json?q=%23yegbike')
331
332 def test_search(self):
333 get_url = self.protocol._get_url = mock.Mock(
334 return_value=dict(statuses=['tweet']))
335 publish = self.protocol._publish_tweet = mock.Mock()
336
337 self.protocol.search('hello')
338
339 publish.assert_called_with('tweet')
340 get_url.assert_called_with(
341 'https://api.twitter.com/1.1/search/tweets.json?q=hello')
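The tests above all follow the same pattern: the protocol's _get_url and _publish_tweet methods are swapped for mock.Mock objects, so no network traffic occurs and each operation can be checked purely by the URL and parameters it builds. A minimal, self-contained sketch of that pattern (FakeProtocol and its home() method are invented here for illustration and are not taken from the branch):

    from unittest import mock

    class FakeProtocol:
        """Stand-in for a protocol class, showing the shape the tests rely on."""
        def home(self):
            # Real operations build an API URL, download it, and publish each tweet.
            for tweet in self._get_url(
                    'https://api.twitter.com/1.1/statuses/home_timeline.json'):
                self._publish_tweet(tweet)

    protocol = FakeProtocol()
    # Stub out the network and model layers, exactly as the tests do.
    get_url = protocol._get_url = mock.Mock(return_value=['tweet'])
    publish = protocol._publish_tweet = mock.Mock()

    protocol.home()

    publish.assert_called_with('tweet')
    get_url.assert_called_with(
        'https://api.twitter.com/1.1/statuses/home_timeline.json')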
=== modified file 'gwibber/gwibber/utils/base.py'
--- gwibber/gwibber/utils/base.py 2012-09-25 20:25:42 +0000
+++ gwibber/gwibber/utils/base.py 2012-10-04 20:29:21 +0000
@@ -42,6 +42,7 @@
 # representing the rows matching those keys. It is used for quickly finding
 # duplicates when we want to insert new rows into the model.
 _seen_messages = {}
+_seen_ids = {}
 
 
 # Protocol __call__() methods run in threads, so we need to serialize
@@ -179,11 +180,10 @@
         # The column value is a list of lists (see gwibber/utils/model.py for
         # details), and because the arguments are themselves a list, this gets
         # initialized as a triply-nested list.
-        args = [[[
-            self.__class__.__name__.lower(),
-            self._account.id,
-            message_id
-            ]]]
+        triple = [self.__class__.__name__.lower(),
+                  self._account.id,
+                  message_id]
+        args = [[triple]]
         # Now iterate through all the column names listed in the SCHEMA,
         # except for the first, since we just composed its value in the
         # preceding line. Pop matching column values from the kwargs, in the
@@ -218,9 +218,34 @@
             # the model gets updated, we need to insert into the row, thus
             # it's best to concatenate the two lists together and store it
             # back into the column.
-            row[IDS_IDX] = row[IDS_IDX] + args[IDS_IDX]
+            if triple not in row[IDS_IDX]:
+                row[IDS_IDX] = row[IDS_IDX] + args[IDS_IDX]
+
+            _seen_ids[tuple(triple)] = _seen_messages.get(key)
         return key in _seen_messages
 
+    def _unpublish(self, message_id):
+        """Remove message_id from the Dee.SharedModel."""
+        triple = [self.__class__.__name__.lower(),
+                  self._account.id,
+                  message_id]
+
+        row_iter = _seen_ids.pop(tuple(triple), None)
+        if row_iter is None:
+            log.error('Tried to delete an invalid message id.')
+            return
+
+        row = Model.get_row(row_iter)
+        if len(row[IDS_IDX]) == 1:
+            # Message only exists on one protocol, delete it
+            del _seen_messages[_make_key(row)]
+            Model.remove(row_iter)
+        else:
+            # Message exists on other protocols too, only drop id
+            row[IDS_IDX] = [ids for ids
+                            in row[IDS_IDX]
+                            if message_id not in ids]
+
     def _get_access_token(self):
         """Return an access token, logging in if necessary."""
         if self._account.access_token is None:
 
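The new _seen_ids dict is what makes _unpublish() cheap: it maps a (protocol, account, message_id) triple straight to the row iterator, so deleting a message never requires walking the whole SharedModel. A rough illustration of that bookkeeping with a plain dict standing in for the Dee.SharedModel (rows, publish, and unpublish are made-up names for this sketch only):

    IDS_IDX = 0
    rows = {}        # row_key -> row data; stands in for the SharedModel
    _seen_ids = {}   # (protocol, account, message_id) -> row_key

    def publish(row_key, triple, row):
        rows[row_key] = row
        _seen_ids[tuple(triple)] = row_key

    def unpublish(triple, message_id):
        row_key = _seen_ids.pop(tuple(triple), None)
        if row_key is None:
            return  # unknown id; the real code logs an error instead
        row = rows[row_key]
        if len(row[IDS_IDX]) == 1:
            del rows[row_key]      # only one protocol had it: drop the whole row
        else:
            row[IDS_IDX] = [ids for ids in row[IDS_IDX]
                            if message_id not in ids]  # just drop this protocol's id

    publish('key1', ['twitter', 'acct', '1234'], [[['twitter', 'acct', '1234']], 'hello'])
    unpublish(['twitter', 'acct', '1234'], '1234')
    assert 'key1' not in rows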
=== modified file 'gwibber/gwibber/utils/download.py'
--- gwibber/gwibber/utils/download.py 2012-09-14 20:29:11 +0000
+++ gwibber/gwibber/utils/download.py 2012-10-04 20:29:21 +0000
@@ -20,6 +20,7 @@
     ]
 
 
+import time
 import json
 import logging
 
@@ -31,12 +32,18 @@
 log = logging.getLogger('gwibber.service')
 
 
+# This stores some information about the current rate limits imposed
+# upon us, and persists that data between instances of the Downloader.
+_rate_limits = {}
+
+
 class Downloader:
     """Convenient downloading wrapper."""
 
     def __init__(self, url, params=None, post=False,
                  username=None, password=None,
                  headers=None):
+        self.base_url = url.split('?')[0]
         self.url = url
         self.params = params
         self.post = post
@@ -47,11 +54,19 @@
 
     def _download(self):
         """Return the results as a decoded unicode string."""
+        # Downloads will be happening in threads, so this won't
+        # block the whole app ;-)
+        time.sleep(_rate_limits.get(self.base_url, 0))
+
         data = None
         url = self.url
         headers = self.headers.copy()
         if self.params is not None:
-            params = urlencode(self.params)
+            # urlencode() does not have an option to use quote()
+            # instead of quote_plus(), but Twitter requires
+            # percent-encoded spaces, and this is harmless to any
+            # other protocol.
+            params = urlencode(self.params).replace('+', '%20')
             if self.post:
                 data = params.encode('utf-8')
                 headers['Content-Type'] = (
@@ -89,6 +104,24 @@
         with urlopen(request) as result:
             payload = result.read()
             info = result.info()
+
+        rate_reset = info.get('X-Rate-Limit-Reset')  # in UTC epoch seconds
+        rate_count = info.get('X-Rate-Limit-Remaining')
+        if None not in (rate_reset, rate_count):
+            rate_reset = int(rate_reset)
+            rate_count = int(rate_count)
+            rate_delta = rate_reset - time.time()
+            if rate_count > 5:
+                # Ehhh, let's not impede the user if they have more than 5
+                # requests remaining in this window.
+                _rate_limits[self.base_url] = 0
+            elif rate_count <= 1:
+                # Avoid division by zero... wait the full length of time!
+                _rate_limits[self.base_url] = rate_delta
+            else:
+                wait_secs = rate_delta / rate_count
+                _rate_limits[self.base_url] = wait_secs
+
         # RFC 4627 $3. JSON text SHALL be encoded in Unicode. The default
         # encoding is UTF-8. Since the first two characters of a JSON text
         # will always be ASCII characters [RFC0020], it is possible to
 
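The rate-limiting change above spaces requests out per endpoint: once Twitter reports five or fewer requests remaining, the delay before the next hit on that base URL is the time left in the window divided by the remaining request count (or the whole remainder of the window when only one request is left). A small sketch of that arithmetic (next_delay is an invented helper, not the Downloader's actual API):

    import time

    def next_delay(rate_reset, rate_count, now=None):
        """Seconds to sleep before hitting this endpoint again."""
        now = time.time() if now is None else now
        rate_delta = rate_reset - now
        if rate_count > 5:
            return 0                    # plenty of requests left; don't throttle
        if rate_count <= 1:
            return rate_delta           # out of requests; wait for the window to reset
        return rate_delta / rate_count  # spread the remaining requests evenly

    # Three requests left and 300 seconds until reset -> pause 100 seconds between calls.
    assert next_delay(1000 + 300, 3, now=1000) == 100.0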
=== removed directory 'gwibber/microblog/plugins/twitter'
=== removed file 'gwibber/microblog/plugins/twitter/__init__.py'
--- gwibber/microblog/plugins/twitter/__init__.py 2012-06-15 13:53:08 +0000
+++ gwibber/microblog/plugins/twitter/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,819 +0,0 @@
1import cgi
2from gettext import lgettext as _
3from oauth import oauth
4
5from gwibber.microblog import network, util
6from gwibber.microblog.util import resources
7from gwibber.microblog.util.time import parsetime
8from gwibber.microblog.util.auth import Authentication
9
10import logging
11logger = logging.getLogger("Twitter")
12logger.debug("Initializing.")
13
14PROTOCOL_INFO = {
15 "name": "Twitter",
16 "version": "1.0",
17
18 "config": [
19 "private:secret_token",
20 "access_token",
21 "username",
22 "color",
23 "receive_enabled",
24 "send_enabled",
25 ],
26
27 "authtype": "oauth1a",
28 "color": "#729FCF",
29
30 "features": [
31 "send",
32 "receive",
33 "search",
34 "tag",
35 "reply",
36 "responses",
37 "private",
38 "public",
39 "delete",
40 "follow",
41 "unfollow",
42 "profile",
43 "retweet",
44 "like",
45 "send_thread",
46 "send_private",
47 "user_messages",
48 "sinceid",
49 "lists",
50 "list",
51 ],
52
53 "default_streams": [
54 "receive",
55 "images",
56 "responses",
57 "private",
58 "lists",
59 ],
60}
61
62URL_PREFIX = "https://twitter.com"
63API_PREFIX = "https://api.twitter.com/1"
64
65class Client ():
66 """Querys Twitter and converts the data.
67
68 The Client class is responsible for querying Twitter and turning the data obtained
69 into data that Gwibber can understand. Twitter uses a version of OAuth for security.
70
71 Tokens have already been obtained when the account was set up in Gwibber and are used to
72 authenticate when getting data.
73
74 """
75 def __init__(self, acct):
76 self.service = util.getbus("Service")
77 self.account = acct
78 self._loop = None
79
80 self.sigmethod = oauth.OAuthSignatureMethod_HMAC_SHA1()
81 self.token = None
82 parameters = self.account["auth"]["parameters"]
83 self.consumer = oauth.OAuthConsumer(parameters["ConsumerKey"],
84 parameters["ConsumerSecret"])
85
86 def _login(self):
87 old_token = self.account.get("secret_token", None)
88 with self.account.login_lock:
89 # Perform the login only if it wasn't already performed by another thread
90 # while we were waiting to the lock
91 if self.account.get("secret_token", None) == old_token:
92 self._locked_login(old_token)
93
94 self.token = oauth.OAuthToken(self.account["access_token"],
95 self.account["secret_token"])
96 return "access_token" in self.account and \
97 self.account["access_token"] != old_token
98
99 def _locked_login(self, old_token):
100 logger.debug("Re-authenticating" if old_token else "Logging in")
101
102 auth = Authentication(self.account, logger)
103 reply = auth.login()
104 if reply and reply.has_key("AccessToken"):
105 self.account["access_token"] = reply["AccessToken"]
106 self.account["secret_token"] = reply["TokenSecret"]
107 self.account["uid"] = reply["UserId"]
108 self.account["username"] = reply["ScreenName"]
109 logger.debug("User id is: %s, name is %s" % (self.account["uid"],
110 self.account["username"]))
111 else:
112 logger.error("Didn't find token in session: %s", (reply,))
113
114 def _common(self, data):
115 """Parses messages into Gwibber compatible forms.
116
117 This function is common to all tweet types
118 and includes parsing of user mentions, hashtags,
119 urls, images and videos.
120
121 Arguments:
122 data -- A data object obtained from Twitter containing a complete tweet
123
124 Returns:
125 m -- A data object compatible with inserting into the Gwibber database for that tweet
126
127 """
128 m = {}
129 try:
130 m["mid"] = str(data["id"])
131 m["service"] = "twitter"
132 m["account"] = self.account["id"]
133 if data.has_key("created_at"):
134 m["time"] = parsetime(data["created_at"])
135 m["text"] = util.unescape(data["text"])
136 m["text"] = cgi.escape(m["text"])
137 m["content"] = m["text"]
138
139 # Go through the entities in the tweet and use them to linkify/filter tweets as appropriate
140 if data.has_key("entities"):
141
142 #Get mention entries
143 if data["entities"].has_key("user_mentions"):
144 names = []
145 for mention in data["entities"]["user_mentions"]:
146 if not mention["screen_name"] in names:
147 try:
148 m["content"] = m["content"].replace("@" + mention["screen_name"], "@<a href='gwibber:/user?acct=" + m["account"] + "&name=@" + mention["screen_name"] + "'>" + mention["screen_name"] + "</a>")
149 except:
150 pass
151 names.append(mention["screen_name"])
152
153 #Get hashtag entities
154 if data["entities"].has_key("hashtags"):
155 hashtags = []
156 for tag in data["entities"]["hashtags"]:
157 if not tag["text"] in hashtags:
158 try:
159 m["content"] = m["content"].replace("#" + tag["text"], "#<a href='gwibber:/tag?acct=" + m["account"] + "&query=#" + tag["text"] + "'>" + tag["text"] + "</a>")
160 except:
161 pass
162 hashtags.append(tag["text"])
163
164 # Get url entities - These usually go in the link stream, but if they're pictures or videos, they should go in the proper stream
165 if data["entities"].has_key("urls"):
166 for urls in data["entities"]["urls"]:
167 url = cgi.escape (urls["url"])
168 expanded_url = url
169 if urls.has_key("expanded_url"):
170 if not urls["expanded_url"] is None:
171 expanded_url = cgi.escape(urls["expanded_url"])
172
173 display_url = url
174 if urls.has_key("display_url"):
175 display_url = cgi.escape (urls["display_url"])
176
177 if url == m["content"]:
178 m["content"] = "<a href='" + url + "' title='" + expanded_url + "'>" + display_url + "</a>"
179 else:
180 try:
181 startindex = m["content"].index(url)
182 endindex = startindex + len(url)
183 start = m["content"][0:startindex]
184 end = m["content"][endindex:]
185 m["content"] = start + "<a href='" + url + "' title='" + expanded_url + "'>" + display_url + "</a>" + end
186 except:
187 logger.debug ("Failed to set url for ID: %s", m["mid"])
188
189 m["type"] = "link"
190
191 images = util.imgpreview(expanded_url)
192 videos = util.videopreview(expanded_url)
193 if images:
194 m["images"] = images
195 m["type"] = "photo"
196 elif videos:
197 m["images"] = videos
198 m["type"] = "video"
199 else:
200 # Well, it's not anything else, so it must be a link
201 m["link"] = {}
202 m["link"]["picture"] = ""
203 m["link"]["name"] = ""
204 m["link"]["description"] = m["content"]
205 m["link"]["url"] = url
206 m["link"]["icon"] = ""
207 m["link"]["caption"] = ""
208 m["link"]["properties"] = {}
209
210 if data["entities"].has_key("media"):
211 for media in data["entities"]["media"]:
212 try:
213 url = cgi.escape (media["url"])
214 media_url_https = media["media_url_https"]
215 expanded_url = url
216 if media.has_key("expanded_url"):
217 expanded_url = cgi.escape(media["expanded_url"])
218
219 display_url = url
220 if media.has_key("display_url"):
221 display_url = cgi.escape (media["display_url"])
222
223 startindex = m["content"].index(url)
224 endindex = startindex + len(url)
225 start = m["content"][0:startindex]
226 end = m["content"][endindex:]
227 m["content"] = start + "<a href='" + url + "' title='" + expanded_url + "'>" + display_url + "</a>" + end
228
229 if media["type"] == "photo":
230 m["type"] = "photo"
231 m["photo"] = {}
232 m["photo"]["picture"] = media_url_https
233 m["photo"]["url"] = None
234 m["photo"]["name"] = None
235
236 except:
237 pass
238
239 else:
240 m["content"] = util.linkify(util.unescape(m["text"]),
241 ((util.PARSE_HASH, '#<a href="gwibber:/tag?acct=%s&query=\\1">\\1</a>' % m["account"]),
242 (util.PARSE_NICK, '@<a href="gwibber:/user?acct=%s&name=\\1">\\1</a>' % m["account"])), escape=True)
243
244 m["html"] = m["content"]
245
246 m["to_me"] = ("@%s" % self.account["username"]) in data["text"] # Check if it's a reply directed at the user
247 m["favorited"] = data.get("favorited", False) # Check if the tweet has been favourited
248
249 except:
250 logger.error("%s failure - %s", PROTOCOL_INFO["name"], data)
251 return {}
252
253 return m
254
255 def _user(self, user):
256 """Parses the user portion of a tweet.
257
258 Arguments:
259 user -- A user object from a tweet
260
261 Returns:
262 A user object in a format compatible with Gwibber's database.
263
264 """
265 return {
266 "name": user.get("name", None),
267 "nick": user.get("screen_name", None),
268 "id": user.get("id", None),
269 "location": user.get("location", None),
270 "followers": user.get("followers_count", None),
271 "friends": user.get("friends_count", None),
272 "description": user.get("description", None),
273 "following": user.get("following", None),
274 "protected": user.get("protected", None),
275 "statuses": user.get("statuses_count", None),
276 "image": user.get("profile_image_url", None),
277 "website": user.get("url", None),
278 "url": "/".join((URL_PREFIX, user.get("screen_name", ""))) or None,
279 "is_me": user.get("screen_name", None) == self.account["username"],
280 }
281
282 def _message(self, data):
283 """Parses messages into gwibber compatible forms.
284
285 This is the initial function for tweet parsing and parses
286 retweeted status (the shared-by portion), source (the program
287 the tweet was tweeted from) and reply details (the in-reply-to
288 portion). It sends the rest to _common() for further parsing.
289
290 Arguments:
291 data -- A data object obtained from Twitter containing a complete tweet
292
293 Returns:
294 m -- A data object compatible with inserting into the Gwibber database for that tweet
295
296 """
297 if type(data) != dict:
298 logger.error("Cannot parse message data: %s", str(data))
299 return {}
300
301 n = {}
302 if data.has_key("retweeted_status"):
303 n["retweeted_by"] = self._user(data["user"] if "user" in data else data["sender"])
304 if data.has_key("created_at"):
305 n["time"] = parsetime(data["created_at"])
306 data = data["retweeted_status"]
307 else:
308 n["retweeted_by"] = None
309 if data.has_key("created_at"):
310 n["time"] = parsetime(data["created_at"])
311
312 m = self._common(data)
313 for k in n:
314 m[k] = n[k]
315
316 m["source"] = data.get("source", False)
317
318 if data.has_key("in_reply_to_status_id"):
319 if data["in_reply_to_status_id"]:
320 m["reply"] = {}
321 m["reply"]["id"] = data["in_reply_to_status_id"]
322 m["reply"]["nick"] = data["in_reply_to_screen_name"]
323 if m["reply"]["id"] and m["reply"]["nick"]:
324 m["reply"]["url"] = "/".join((URL_PREFIX, m["reply"]["nick"], "statuses", str(m["reply"]["id"])))
325 else:
326 m["reply"]["url"] = None
327
328 m["sender"] = self._user(data["user"] if "user" in data else data["sender"])
329 m["url"] = "/".join((m["sender"]["url"], "statuses", str(m.get("mid", None))))
330
331 return m
332
333 def _responses(self, data):
334 """Sets the message type if the message should be in the replies stream.
335
336 It sends the rest to _message() for further parsing.
337
338 Arguments:
339 data -- A data object obtained from Twitter containing a complete tweet
340
341 Returns:
342 m -- A data object compatible with inserting into the Gwibber database for that tweet
343
344 """
345 m = self._message(data)
346 m["type"] = None
347
348 return m
349
350 def _private(self, data):
351 """Sets the message type and privacy.
352
353 Sets the message type and privacy if the message should be in the private stream.
354 Also parses the recipient as both sent & recieved messages can be in the private stream.
355 It sends the rest to _message() for further parsing
356
357 Arguments:
358 data -- A data object obtained from Twitter containing a complete tweet
359
360 Returns:
361 m -- A data object compatible with inserting into the Gwibber database for that tweet
362
363 """
364 m = self._message(data)
365 m["private"] = True
366 m["type"] = None
367
368 m["recipient"] = {}
369 m["recipient"]["name"] = data["recipient"]["name"]
370 m["recipient"]["nick"] = data["recipient"]["screen_name"]
371 m["recipient"]["id"] = data["recipient"]["id"]
372 m["recipient"]["image"] = data["recipient"]["profile_image_url"]
373 m["recipient"]["location"] = data["recipient"]["location"]
374 m["recipient"]["url"] = "/".join((URL_PREFIX, m["recipient"]["nick"]))
375 m["recipient"]["is_me"] = m["recipient"]["nick"] == self.account["username"]
376 m["to_me"] = m["recipient"]["is_me"]
377
378 return m
379
380 def _result(self, data):
381 """Called when a search is done in Gwibber.
382
383 Parses the sender and sends the rest to _common()
384 for further parsing.
385
386 Arguments:
387 data -- A data object obtained from Twitter containing a complete tweet
388
389 Returns:
390 m -- A data object compatible with inserting into the Gwibber database for that tweet
391
392 """
393 m = self._common(data)
394
395 if data["to_user_id"]:
396 m["reply"] = {}
397 m["reply"]["id"] = data["to_user_id"]
398 m["reply"]["nick"] = data["to_user"]
399
400 m["sender"] = {}
401 m["sender"]["nick"] = data["from_user"]
402 m["sender"]["id"] = data["from_user_id"]
403 m["sender"]["image"] = data["profile_image_url"]
404 m["sender"]["url"] = "/".join((URL_PREFIX, m["sender"]["nick"]))
405 m["sender"]["is_me"] = m["sender"]["nick"] == self.account["username"]
406 m["url"] = "/".join((m["sender"]["url"], "statuses", str(m["mid"])))
407 return m
408
409 def _profile(self, data):
410 """Called when a user is clicked on.
411
412 Args:
413 data -- A data object obtained from Twitter containing a complete user
414
415 Returns:
416 A data object compatible with inserting into the Gwibber database for that user.
417
418 """
419 if "error" in data:
420 return {
421 "error": data["error"]
422 }
423 return {
424 "name": data.get("name", data["screen_name"]),
425 "service": "twitter",
426 "stream": "profile",
427 "account": self.account["id"],
428 "mid": data["id"],
429 "text": data.get("description", ""),
430 "nick": data["screen_name"],
431 "url": data.get("url", ""),
432 "protected": data.get("protected", False),
433 "statuses": data.get("statuses_count", 0),
434 "followers": data.get("followers_count", 0),
435 "friends": data.get("friends_count", 0),
436 "following": data.get("following", 0),
437 "favourites": data.get("favourites_count", 0),
438 "image": data["profile_image_url"],
439 "utc_offset": data.get("utc_offset", 0),
440 "id": data["id"],
441 "lang": data.get("lang", "en"),
442 "verified": data.get("verified", False),
443 "geo_enabled": data.get("geo_enabled", False),
444 "time_zone": data.get("time_zone", "")
445 }
446
447 def _list(self, data):
448 """Called when a list is clicked on.
449
450 Args:
451 data -- A data object obtained from Twitter containing a complete list
452
453 Returns:
454 A data object compatible with inserting into the Gwibber database for that list.
455
456 """
457 return {
458 "mid": data["id"],
459 "service": "twitter",
460 "account": self.account["id"],
461 "time": 0,
462 "text": data["description"],
463 "html": data["description"],
464 "content": data["description"],
465 "url": "/".join((URL_PREFIX, data["uri"])),
466 "sender": self._user(data["user"]),
467 "name": data["name"],
468 "nick": data["slug"],
469 "key": data["slug"],
470 "full": data["full_name"],
471 "uri": data["uri"],
472 "mode": data["mode"],
473 "members": data["member_count"],
474 "followers": data["subscriber_count"],
475 "kind": "list",
476 }
477
478 def _get(self, path, parse="message", post=False, single=False, **args):
479 """Establishes a connection with Twitter and gets the data requested.
480
481 Requires authentication.
482
483 Arguments:
484 path -- The end of the url to look up on Twitter
485 parse -- The function to use to parse the data returned (message by default)
486 post -- True if using POST, for example the send operation. False if using GET, most operations other than send. (False by default)
487 single -- True if a single checkin is requested, False if multiple (False by default)
488 **args -- Arguments to be added to the URL when accessed.
489
490 Returns:
491 A list of Gwibber compatible objects which have been parsed by the parse function.
492
493 """
494 if not self.token and not self._login():
495 logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Authentication failed"), "Auth needs updating")
496 logger.error("%s", logstr)
497 return [{"error": {"type": "auth", "account": self.account, "message": _("Authentication failed, please re-authorize")}}]
498
499 url = "/".join((API_PREFIX, path))
500
501 request = oauth.OAuthRequest.from_consumer_and_token(self.consumer, self.token,
502 http_method=post and "POST" or "GET", http_url=url, parameters=util.compact(args))
503 request.sign_request(self.sigmethod, self.consumer, self.token)
504
505 if post:
506 headers = request.to_header()
507 data = network.Download(url, util.compact(args), post, header=headers).get_json()
508 else:
509 data = network.Download(request.to_url(), None, post).get_json()
510 resources.dump(self.account["service"], self.account["id"], data)
511
512 if isinstance(data, dict) and data.get("errors", 0):
513 if "authenticate" in data["errors"][0]["message"]:
514 # Try again, if we get a new token
515 if self._login():
516 logger.debug("Authentication error, logging in again")
517 return self._get(path, parse, post, single, args)
518 else:
519 logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Authentication failed"), data["errors"][0]["message"])
520 logger.error("%s", logstr)
521 return [{"error": {"type": "auth", "account": self.account, "message": data["errors"][0]["message"]}}]
522 else:
523 for error in data["errors"]:
524 logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Unknown failure"), error["message"])
525 return [{"error": {"type": "unknown", "account": self.account, "message": error["message"]}}]
526 elif isinstance(data, dict) and data.get("error", 0):
527 if "Incorrect signature" in data["error"]:
528 logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Request failed"), data["error"])
529 logger.error("%s", logstr)
530 return [{"error": {"type": "auth", "account": self.account, "message": data["error"]}}]
531 elif isinstance(data, str):
532 logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Request failed"), data)
533 logger.error("%s", logstr)
534 return [{"error": {"type": "request", "account": self.account, "message": data}}]
535
536 if parse == "follow" or parse == "unfollow":
537 if isinstance(data, dict) and data.get("error", 0):
538 logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("%s failed" % parse), data["error"])
539 logger.error("%s", logstr)
540 return [{"error": {"type": "auth", "account": self.account, "message": data["error"]}}]
541 else:
542 return [["friendships", {"type": parse, "account": self.account["id"], "service": self.account["service"],"user_id": data["id"], "nick": data["screen_name"]}]]
543
544 if parse == "profile" and isinstance(data, dict):
545 return self._profile(data)
546
547 if parse == "list":
548 return [self._list(l) for l in data["lists"]]
549
550 if single: return [getattr(self, "_%s" % parse)(data)]
551 if parse: return [getattr(self, "_%s" % parse)(m) for m in data]
552 else: return []
553
554 def _search(self, **args):
555 """Establishes a connection with Twitter and gets the results of a search.
556
557 Does not require authentication
558
559 Arguments:
560 **args -- The search terms
561
562 Returns:
563 A list of Gwibber compatible objects which have been parsed by _result().
564
565 """
566 data = network.Download("http://search.twitter.com/search.json", util.compact(args))
567 data = data.get_json()["results"]
568
569 if type(data) != list:
570 logger.error("Cannot parse search data: %s", str(data))
571 return []
572
573 return [self._result(m) for m in data]
574
575 def __call__(self, opname, **args):
576 return getattr(self, opname)(**args)
577
578 def receive(self, count=util.COUNT, since=None):
579 """Gets the latest tweets and adds them to the database.
580
581 Arguments:
582 count -- Number of updates to get
583 since -- Time to get updates since
584
585 Returns:
586 A list of Gwibber compatible objects which have been parsed by _message().
587
588 """
589 return self._get("statuses/home_timeline.json", include_entities=1, count=count, since_id=since)
590
591 def responses(self, count=util.COUNT, since=None):
592 """Gets the latest replies and adds them to the database.
593
594 Arguments:
595 count -- Number of updates to get
596 since -- Time to get updates since
597
598 Returns:
599 A list of Gwibber compatible objects which have been parsed by _responses().
600
601 """
602 return self._get("statuses/mentions.json", "responses", include_entities=1, count=count, since_id=since)
603
604 def private(self, count=util.COUNT, since=None):
605 """Gets the latest direct messages sent and recieved and adds them to the database.
606
607 Args:
608 count -- Number of updates to get
609 since -- Time to get updates since
610
611 Returns:
612 A list of Gwibber compatible objects which have been parsed by _private().
613
614 """
615 private = self._get("direct_messages.json", "private", include_entities=1, count=count, since_id=since) or []
616 private_sent = self._get("direct_messages/sent.json", "private", count=count, since_id=since) or []
617 return private + private_sent
618
619 def public(self):
620 """Gets the latest tweets from the public timeline and adds them to the database.
621
622 Arguments:
623 None
624
625 Returns:
626 A list of Gwibber compatible objects which have been parsed by _message().
627
628 """
629 return self._get("statuses/public_timeline.json", include_entities=1)
630
631 def lists(self, **args):
632 """Gets subscribed lists and adds them to the database.
633
634 Arguments:
635 None
636
637 Returns:
638 A list of Gwibber compatible objects which have been parsed by _list().
639
640 """
641 if not "username" in self.account:
642 self._login()
643 following = self._get("%s/lists/subscriptions.json" % self.account["username"], "list") or []
644 lists = self._get("%s/lists.json" % self.account["username"], "list") or []
645 return following + lists
646
647 def list(self, user, id, count=util.COUNT, since=None):
648 """Gets the latest tweets from subscribed lists and adds them to the database.
649
650 Arguments:
651 user -- The user's name whose lists are to be got
652 id -- The user's id whose lists are to be got
653 count -- Number of updates to get
654 since -- Time to get updates since
655
656 Returns:
657 A list of Gwibber compatible objects which have been parsed by _message().
658
659 """
660 return self._get("%s/lists/%s/statuses.json" % (user, id), include_entities=1, per_page=count, since_id=since)
661
662 def search(self, query, count=util.COUNT, since=None):
663 """Gets the latest results from a search and adds them to the database.
664
665 Arguments:
666 query -- The search query
667 count -- Number of updates to get
668 since -- Time to get updates since
669
670 Returns:
671 A list of Gwibber compatible objects which have been parsed by _search().
672
673 """
674 return self._search(include_entities=1, q=query, rpp=count, since_id=since)
675
676 def tag(self, query, count=util.COUNT, since=None):
677 """Gets the latest results from a hashtag search and adds them to the database.
678
679 Arguments:
680 query -- The search query (hashtag without the #)
681 count -- Number of updates to get
682 since -- Time to get updates since
683
684 Returns:
685 A list of Gwibber compatible objects which have been parsed by _search().
686
687 """
688 return self._search(q="#%s" % query, count=count, since_id=since)
689
690 def delete(self, message):
691 """Deletes a specified tweet from Twitter.
692
693 Arguments:
694 message -- A Gwibber compatible message object (from gwibber's database)
695
696 Returns:
697 Nothing
698
699 """
700 return self._get("statuses/destroy/%s.json" % message["mid"], None, post=True, do=1)
701
702 def like(self, message):
703 """Favourites a specified tweet on Twitter.
704
705 Arguments:
706 message -- A Gwibber compatible message object (from Gwibber's database)
707
708 Returns:
709 Nothing
710
711 """
712 return self._get("favorites/create/%s.json" % message["mid"], None, post=True, do=1)
713
714 def send(self, message):
715 """Sends a tweet to Twitter.
716
717 Arguments:
718 message -- The tweet's text
719
720 Returns:
721 Nothing
722
723 """
724 return self._get("statuses/update.json", post=True, single=True,
725 status=message)
726
727 def send_private(self, message, private):
728 """Sends a direct message to Twitter.
729
730 Arguments:
731 message -- The tweet's text
732 private -- A gwibber compatible user object (from gwibber's database)
733
734 Returns:
735 Nothing
736
737 """
738 return self._get("direct_messages/new.json", "private", post=True, single=True,
739 text=message, screen_name=private["sender"]["nick"])
740
741 def send_thread(self, message, target):
742 """Sends a reply to a user on Twitter.
743
744 Arguments:
745 message -- The tweet's text
746 target -- A Gwibber compatible user object (from Gwibber's database)
747
748 Returns:
749 Nothing
750
751 """
752 return self._get("statuses/update.json", post=True, single=True,
753 status=message, in_reply_to_status_id=target["mid"])
754
755 def retweet(self, message):
756 """Retweets a tweet.
757
758 Arguments:
759 message -- A Gwibber compatible message object (from gwibber's database)
760
761 Returns:
762 Nothing
763
764 """
765 return self._get("statuses/retweet/%s.json" % message["mid"], None, post=True, do=1)
766
767 def follow(self, screen_name):
768 """Follows a user.
769
770 Arguments:
771 screen_name -- The screen name (@someone without the @) of the user to be followed
772
773 Returns:
774 Nothing
775
776 """
777 return self._get("friendships/create.json", screen_name=screen_name, post=True, parse="follow")
778
779 def unfollow(self, screen_name):
780 """Unfollows a user.
781
782 Arguments:
783 screen_name -- The screen name (@someone without the @) of the user to be unfollowed
784
785 Returns:
786 Nothing
787
788 """
789 return self._get("friendships/destroy.json", screen_name=screen_name, post=True, parse="unfollow")
790
791 def profile(self, id=None, count=None, since=None):
792 """Gets a user's profile.
793
794 Arguments:
795 id -- The user's screen name
796 count -- Number of tweets to get
797 since -- Time to get tweets since
798
799 Returns:
800 A list of Gwibber compatible objects which have been parsed by _profile().
801
802 """
803 return self._get("users/show.json", screen_name=id, count=count, since_id=since, parse="profile")
804
805 def user_messages(self, id=None, count=util.COUNT, since=None):
806 """Gets a user's profile & timeline.
807
808 Arguments:
809 id -- The user's screen name
810 count -- Number of tweets to get
811 since -- Time to get tweets since
812
813 Returns:
814 A list of Gwibber compatible objects which have been parsed by _profile().
815
816 """
817 profiles = [self.profile(id)] or []
818 messages = self._get("statuses/user_timeline.json", id=id, include_entities=1, count=count, since_id=since) or []
819 return messages + profiles
=== added directory 'gwibber/tools'
=== added file 'gwibber/tools/debug_live.py'
--- gwibber/tools/debug_live.py 1970-01-01 00:00:00 +0000
+++ gwibber/tools/debug_live.py 2012-10-04 20:29:21 +0000
@@ -0,0 +1,67 @@
1#!/usr/bin/env python3
2
3"""Usage: ./tools/debug_live.py PROTOCOL OPERATION [OPTIONS]
4
5Where PROTOCOL is a protocol supported by Gwibber, such as 'twitter',
6OPERATION is an instance method defined in that protocol's class, and
7OPTIONS are whatever arguments you'd like to pass to that method (if
8any), such as message id's or a status message.
9
10Examples:
11
12./tools/debug_live.py twitter home
13./tools/debug_live.py twitter send 'Hello, world!'
14
15This tool is provided to aid with rapid feedback of changes made to
16the gwibber source tree, and as such is designed to be run from the
17same directory that contains 'setup.py'. It is not intended for use
18with an installed gwibber package.
19"""
20
21import sys
22
23sys.path.insert(0, '.')
24
25if len(sys.argv) < 3:
26 sys.exit(__doc__)
27
28protocol = sys.argv[1]
29args = sys.argv[2:]
30
31from gwibber.utils.account import AccountManager
32from gwibber.utils.model import Model
33from gwibber.utils.base import Base
34from gi.repository import GObject
35
36# Disable threading for easier testing.
37Base._SYNCHRONIZE = True
38
39def refresh(account):
40 print()
41 print('#' * 80)
42 print('Performing "{}" operation!'.format(args[0]))
43 print('#' * 80)
44
45 account.protocol(*args)
46 for row in Model:
47 print([col for col in row])
48 print()
49 print('ROWS: ', len(Model))
50 return True
51
52if __name__ == '__main__':
53
54 found = False
55 a = AccountManager(None)
56
57 for account_id, account in a._accounts.items():
58 if account_id.endswith(protocol):
59 found = True
60 refresh(account)
61 GObject.timeout_add_seconds(300, refresh, account)
62
63 if not found:
64 print('No {} account found in your Ubuntu Online Accounts!'.format(
65 protocol))
66 else:
67 GObject.MainLoop().run()
=== added file 'gwibber/tools/debug_slave.py'
--- gwibber/tools/debug_slave.py 1970-01-01 00:00:00 +0000
+++ gwibber/tools/debug_slave.py 2012-10-04 20:29:21 +0000
@@ -0,0 +1,22 @@
1#!/usr/bin/env python3
2
3from gi.repository import Dee
4from gi.repository import GObject
5
6
7class Slave:
8 def __init__(self):
9 model_name = 'com.Gwibber.Streams'
10 print('Joining model ' + model_name)
11 self.model = Dee.SharedModel.new(model_name)
12 self.model.connect('row-added', self.on_row_added)
13
14 def on_row_added(self, model, itr):
15 row = self.model.get_row(itr)
16 print(row)
17 print('ROWS: ', len(self.model))
18
19if __name__ == '__main__':
20
21 s = Slave()
22 GObject.MainLoop().run()
