Merge lp:~robru/gwibber/twitter into lp:~barry/gwibber/py3
Status: Merged
Merged at revision: 1445
Proposed branch: lp:~robru/gwibber/twitter
Merge into: lp:~barry/gwibber/py3
Diff against target: 2245 lines (+1180/-847), 12 files modified
  gwibber/gwibber/protocols/twitter.py (+275/-6)
  gwibber/gwibber/testing/mocks.py (+13/-7)
  gwibber/gwibber/tests/data/twitter-home.dat (+344/-0)
  gwibber/gwibber/tests/test_dbus.py (+5/-0)
  gwibber/gwibber/tests/test_download.py (+41/-0)
  gwibber/gwibber/tests/test_protocols.py (+52/-0)
  gwibber/gwibber/tests/test_twitter.py (+296/-8)
  gwibber/gwibber/utils/base.py (+31/-6)
  gwibber/gwibber/utils/download.py (+34/-1)
  gwibber/microblog/plugins/twitter/__init__.py (+0/-819)
  gwibber/tools/debug_live.py (+67/-0)
  gwibber/tools/debug_slave.py (+22/-0)
To merge this branch: bzr merge lp:~robru/gwibber/twitter
Related bugs:
Reviewer: Barry Warsaw (review pending)
Review via email: mp+127373@code.launchpad.net
Commit message
Description of the change
Twitter branch mostly done. Some TODOs remain. I am mp'ing a little bit prematurely because it's still the easiest way to see an overall diff ;-)
- 1474. By Robert Bruce Park

Implement Base._unpublish for the sake of deleting messages.
This required adding a new module global to base.py, _seen_ids, which
is a dict mapping message ids to SharedModel row iters. Without this
it would have been necessary to iterate over the entire model every
time you wanted to delete a single row.

- 1475. By Robert Bruce Park
Add a test case for Base._unpublish.
- 1476. By Robert Bruce Park

Fill out supported features in test_dbus.
All tests pass!

- 1477. By Robert Bruce Park

Uncomment line ;-)
Barry Warsaw (barry) wrote:
Robert Bruce Park (robru) wrote:
On 12-10-02 02:13 PM, Barry Warsaw wrote:
> On Oct 01, 2012, at 09:30 PM, Robert Bruce Park wrote:
>
>> Twitter branch mostly done. Some TODOs remain. I am mp'ing a little bit
>> prematurely because it's still the easiest way to see an overall diff ;-)
>
> The branch is looking pretty good. I'm glad you were able to verify that the
> OAuth signatures were working, at least for public tweets.
I actually verified that the OAuth signatures worked for *every* API
endpoint *except* sending new direct messages (we can receive them ok,
too). That's what debug_twitter_
write a new protocol operation, I'd modify it to do that new operation,
then run it, and I'd see on my live twitter account that it worked.
> I guess more
> research will have to be done to figure out why private replies are broken.
Yeah, I still have no idea.
> Do you intend to include the debug_twitter_*.py scripts in the branch? If so,
> let's put them in a tools subdirectory (i.e. not in the gwibber Python
> package). That way they'll be easier to omit from the Debian package.
I hadn't originally. At first it just started off as 'from
gwibber.
a._accounts[
typing that out 10,000 times in an interactive python shell every time I
wanted to test a change I made to the code. But once the script was
born, it grew, and then ken expanded it for his own testing purposes,
which is included here as well.
In fact, *all* of the style nits you mention were written by ken ;-)
I had assumed that you would just not merge them. If you think there's
value in keeping them in a tools directory, I guess that's ok. I don't
have a strong opinion either way.
>> + account.
>> + account.
>
> What does this number correspond to? Is it private information?
That is a public tweet I made with a previous incarnation of this
script, that I've now deleted by running this script as-is. No worries.
>> + def _get_url(self, url, data=None):
>> + """Access the Twitter API with correct OAuth signed headers."""
>> + # TODO start enforcing some rate limiting.
>
> What are your thoughts on how to do this rate limiting? Is it something that
> Twitter provides through their API? (in a similar manner to Facebook's
> pagination support?). I wonder if there's some commonality refactoring that
> should go on here, at least for determining the constants for the rate
> limiting?
I haven't looked closely at Twitter's rate limiting standards yet, but I
wrote that comment because I ran into Twitter's limit at one point and
they stopped me from downloading tweets that I wanted.
All I know is that each API endpoint has a different maximum number of
requests that can be made "per window" and the length of time of the
window is different, too. So basically I'm envisioning a dict that maps
URLs to ints, the int for a given URL will increase once per each
_get_url invocation, and then we just need to figure out how long to
pause before making the request in order not to hit th...
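The dict-of-counters idea described above could look roughly like this. Everything here is hypothetical illustration of the envisioned design, not Twitter's actual limits: the URL, the per-window allowance, and the window length are made-up numbers.

```python
import time

# Hypothetical per-URL request counters: each endpoint gets its own
# allowance and window length, and we report how long to stall before
# a request that would exceed the window's allowance.
WINDOWS = {
    # url: (max_requests, window_seconds) -- made-up numbers
    'https://api.twitter.com/1.1/statuses/home_timeline.json': (15, 900),
}

_counts = {}  # url -> (window_start_timestamp, requests_so_far)


def throttle(url, now=None):
    """Return how many seconds to sleep before requesting url."""
    limit = WINDOWS.get(url)
    if limit is None:
        return 0  # unknown endpoint: no limiting
    max_requests, window = limit
    now = time.time() if now is None else now
    start, count = _counts.get(url, (now, 0))
    if now - start >= window:
        start, count = now, 0  # window expired; start a fresh one
    if count >= max_requests:
        return window - (now - start)  # wait out the rest of the window
    _counts[url] = (start, count + 1)
    return 0
```

The later revisions in this thread ended up taking a different, header-driven approach instead of hard-coding per-endpoint tables like this.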
- 1478. By Robert Bruce Park

Allow debug_twitter_live.py to call any action from the commandline.
Commandline args are passed directly to protocol.__call__(), so any
feature supported by Twitter can be invoked directly by this script.

- 1479. By Robert Bruce Park
Generalize debug_twitter_*.py scripts into debug_*.py scripts.
debug_live has a good docstring that explains its usage.
- 1480. By Robert Bruce Park

Add tests for Twitter operations.
This also tweaks Twitter.tag so that it silently ignores the # symbols
in hash tags, allowing you to include them or not, at your option.
Previously, if you tried Twitter.tag('#hashtag'), it wouldn't work;
you had to strip the # symbol yourself. Now we accept both.

- 1481. By Robert Bruce Park
Docstring cleanup.

- 1482. By Robert Bruce Park

Fix bug in Base._unpublish.

- 1483. By Robert Bruce Park

Improve _seen_ids / _seen_messages consistency.
Also fix a comment.
Barry Warsaw (barry) wrote:
On Oct 03, 2012, at 12:34 PM, Robert Bruce Park wrote:
>I actually verified that the OAuth signatures worked for *every* API endpoint
>*except* sending new direct messages (we can receive them ok, too). That's
>what debug_twitter_
>operation, I'd modify it to do that new operation, then run it, and I'd see
>on my live twitter account that it worked.
You may have to engage with the Twitter developer community to debug this. I
didn't find anything relevant in a very little bit of googling though. You'd
think that if it were a bug in their API, others would have noticed it.
>I hadn't originally. At first it just started off as 'from
>gwibber.
Just FWIW, I have a patch in my facebook.working branch that makes None the
default argument. You have to be careful not to call the callback if it's
None. I wonder if it's useful to land that in the py3 branch separately?
>In fact, *all* of the style nits you mention were written by ken ;-)
/me hears Kirk's voice scream to the heavens "Keeeeeeennnnnn!"
>I haven't looked closely at Twitter's rate limiting standards yet, but I
>wrote that comment because I ran into Twitter's limit at one point and they
>stopped me from downloading tweets that I wanted.
I think Twitter warns you (via response headers?) if you're getting rate
limited. So I guess you could check for those and automatically back-off if
you see them. It's probably better to at least have some combination of
proactive and reactive response to rate limiting. The gory details:
https:/
>>> + # "Client" == "Consumer" in oauthlib parlance.
>>> + client_key = self._account.
>>> + client_secret = self._account.
>>
>> Can these fail? Are the other parts of your code prepared to handle the
>> resulting KeyErrors if these parameters are missing? Do you have any tests
>> for those situations?
>
>No, these are constants defined by Twitter at app-registration time, stored
>in libaccounts-sso package. Can never fail. They may change in the event that
>somebody nefarious compromises them (which ought to be easy to do for an
>open-source package), but they won't change during runtime.
Cool. In that case, the code above is fine because if they *do* fail, you
want to know via the logged KeyErrors.
>>> + def _publish_
>>> + """Publish a single tweet into the Dee.SharedModel."""
>>> + tweet_id = tweet.get('id_str', '')
>>
>> Does it still make sense to publish this tweet if there is no id_str?
>> Won't that possibly result in row collisions, since the only unique item in
>> the key will be the empty string for multiple id_str-less entries?
>>
>> I think in my Facebook branch, I just skip the JSON response entry if
>> there's no message_id.
>
>Yeah, that's probably reasonable. I just write 'adict.get' out of habit for
>the sake of being as flexible as possible.
As discussed over IRC, something like:
tweet_id = tweet.get('id_str')
if tweet_id is None:
log.info...
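Filled out, the guard being discussed looks like the shape that appears in the preview diff below. The standalone helper and its return value here are simplifications for illustration; the real method is Twitter._publish_tweet and it publishes many more fields.

```python
import logging

log = logging.getLogger('gwibber.service')


def publish_tweet(publish, tweet):
    """Skip any entry that lacks the unique id_str key.

    `publish` stands in for Base._publish; publishing with an empty
    key would collide with every other id-less entry, so an id-less
    tweet is dropped entirely.
    """
    tweet_id = tweet.get('id_str')
    if tweet_id is None:
        log.info('We got a tweet with no id! Abort!')
        return False
    publish(message_id=tweet_id, message=tweet.get('text', ''))
    return True
```

This keeps the flexible `dict.get` habit for every optional field while refusing to publish when the one genuinely required field is missing.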
- 1484. By Robert Bruce Park

Fix up Base._unpublish: better tests, bugs fixed.
In particular, I've fixed the removal of ids when there were duplicates
before, and also I've eliminated the possibility of collisions between
message ids from different services (there used to be a small chance
that twitter and facebook could assign the same message id to
different messages, and then _unpublish would get confused).
And of course, improved test coverage to prove that this is all good
now ;-)

- 1485. By Robert Bruce Park
Typo.

- 1486. By Robert Bruce Park

Cleanup some API formatting.

- 1487. By Robert Bruce Park

Added comment as per barry.

- 1488. By Robert Bruce Park

Undo a change that wasn't necessary.
Robert Bruce Park (robru) wrote:
On 12-10-03 05:22 PM, Barry Warsaw wrote:
> On Oct 03, 2012, at 12:34 PM, Robert Bruce Park wrote:
>> I actually verified that the OAuth signatures worked for *every* API endpoint
>> *except* sending new direct messages (we can receive them ok, too). That's
>> what debug_twitter_
>> operation, I'd modify it to do that new operation, then run it, and I'd see
>> on my live twitter account that it worked.
>
> You may have to engage with the Twitter developer community to debug this. I
> didn't find anything relevant in a very little bit of googling though. You'd
> think that if it were a bug in their API, others would have noticed it.
Yeah, no idea. Maybe we're the first to port to 1.1 ;-)
I'll look into it a little bit later tonight, got some other
refactoring/cleanup going on.
>> I hadn't originally. At first it just started off as 'from
>> gwibber.
>
> Just FWIW, I have a patch in my facebook.working branch that makes None the
> default argument. You have to be careful not to call the callback if it's
> None. I wonder if it's useful to land that in the py3 branch separately?
I don't think it matters. Maybe it should just default to lambda: None
and then we don't have to worry about whether the callback gets called
or not. For the debug_live.py script, it's not meant to be a
long-running script, just short stints for testing. So it's unlikely for
the callback to be triggered during its operation.
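The two callback conventions being weighed can be sketched side by side. Function names and the result shape here are hypothetical, not the actual gwibber.utils.download API.

```python
# Option 1 (Barry's patch): default the callback to None, and guard
# every invocation site against the None case.

def get_with_none(url, callback=None):
    result = {'url': url}       # stand-in for the actual download
    if callback is not None:    # the guard the caller must not forget
        callback(result)
    return result


# Option 2 (Robert's suggestion): default to a no-op callable, so the
# call site never needs a guard.

def get_with_noop(url, callback=lambda *ignored: None):
    result = {'url': url}
    callback(result)            # always safe to call
    return result
```

The trade-off is one guard at every call site versus one slightly unusual default argument; both behave identically to callers that pass a real callback.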
>> I haven't looked closely at Twitter's rate limiting standards yet, but I
>> wrote that comment because I ran into Twitter's limit at one point and they
>> stopped me from downloading tweets that I wanted.
>
> I think Twitter warns you (via response headers?) if you're getting rate
> limited. So I guess you could check for those and automatically back-off if
> you see them. It's probably better to at least have some combination of
> proactive and reactive response to rate limiting. The gory details:
>
> https:/
Yeah, I'm gonna dive into this soon, but so far I've just been focusing
on cleaning up the code that's already there and improving the tests.
>>>> +# https:/
>>>> + @feature
>>>> + def mentions(self):
>>>> + """Gather the tweets that mention us."""
>>>> + url = self._timeline.
>>>> + for tweet in self._get_url(url):
>>>> + self._publish_
>>>
>>> Do you have tests for each of these? I know they *look* ridiculously
>>> similar, but each should have a test.
>>
>> I wasn't really sure how to test them, to be honest.
>
> It's probably good enough to use the same, or very similar sample data, but
> just call .mentions(), etc. in the tests.
>
> Really, the only thing you care about is that you've tested the basic code in
> this (and similar) methods, not in anything it calls. Let's say for example
> that you - or a future developer - typo'd "self._
> kind of thing you want these tests to catch.
Fair enough. I wrote some t...
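The kind of per-feature test Barry describes can be tiny. This is a self-contained sketch: FakeTwitter is a stand-in defined inline so the example runs on its own, whereas the real tests would exercise the actual Twitter class against canned data such as twitter-home.dat.

```python
import unittest
from unittest import mock


class FakeTwitter:
    """Stand-in with the same shape as the real protocol class."""

    def _get_url(self, url):
        raise NotImplementedError   # patched out in the tests

    def _publish_tweet(self, tweet):
        raise NotImplementedError   # patched out in the tests

    def mentions(self):
        url = 'https://api.twitter.com/1.1/statuses/mentions_timeline.json'
        for tweet in self._get_url(url):
            self._publish_tweet(tweet)


class TestMentions(unittest.TestCase):
    def test_mentions_publishes_each_tweet(self):
        protocol = FakeTwitter()
        tweets = [dict(id_str='1'), dict(id_str='2')]
        with mock.patch.object(FakeTwitter, '_get_url',
                               return_value=tweets) as get_url, \
             mock.patch.object(FakeTwitter, '_publish_tweet') as publish:
            protocol.mentions()
        # A typo'd URL or attribute inside mentions() would fail here,
        # which is exactly the class of bug these tests exist to catch.
        get_url.assert_called_once_with(
            'https://api.twitter.com/1.1/statuses/mentions_timeline.json')
        self.assertEqual(publish.call_count, 2)
```

The nearly identical feature methods (home, user, list, private, ...) each get the same three-line test body with a different URL and method name.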
- 1489. By Robert Bruce Park

Implement a basic rate limiter.
This had to be done in the Downloader class, unfortunately, because it
needed direct access to the HTTP headers and I didn't really see any
way to access those outside of the Downloader class.

Luckily it will harmlessly ignore any protocol that doesn't specify
rate limits in the HTTP headers, so it should work well for Twitter
and do nothing for anybody else.

This basically makes no attempt to limit you if you have more than 5
requests remaining in the rate limit window (making it unobtrusive for
cases where the dispatcher is just calling every 5 minutes and it
really should never come near the rate limiter *anyway*), but if
something has gone wrong, such as a user requesting too many manual
refreshes, then it will forcibly pause each request long enough to
avoid hitting Twitter's rate limiter.

- 1490. By Robert Bruce Park
Prevent duplicate message ids from filling up the message_ids column.
It turns out that it's quite easy to see the same message multiple
times. Previously, while the dupe-checking logic was successful in
stopping the message from being published more than once, it blindly
appended the message_id to the list regardless of whether or not it
was already on the list, so that tended to fill up with junk quickly.

With test case.
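The fix described in revision 1490 boils down to a membership test before the append. This sketch uses a plain dict in place of the Dee.SharedModel column; pairing the id with its service name echoes the collision fix from revision 1484.

```python
# The old code's dupe check stopped a message from being published
# twice, but still appended its id to the message_ids list every
# time it was seen. Testing membership first keeps the list clean.

_message_ids = {}  # row key -> list of (service, message_id) pairs


def record_sighting(service, message_id):
    """Note that a message was seen, without duplicating its id."""
    ids = _message_ids.setdefault(message_id, [])
    pair = (service, message_id)
    if pair not in ids:  # the old code appended unconditionally
        ids.append(pair)
    return ids
```

Seeing the same tweet in both the home timeline and the mentions timeline, for example, now leaves a single entry instead of two.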
Robert Bruce Park (robru) wrote:
Ok, barry, I think we have a pretty solid thing here, rate limiting and everything! Have another look over the diff and let me know what you think ;-)
The only real issue remaining is that send_private still 403s, but I've sent a message to the Twitter support forum, so hopefully they respond to that soon.
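The behavior revision 1489 describes (ignore protocols without the headers, stay out of the way above five remaining requests, otherwise pause) could be sketched like this. The header names are the x-rate-limit-* family Twitter's 1.1 API documents; the spread-the-remainder arithmetic is a guess, not necessarily what landed in download.py.

```python
import time


def rate_delay(headers, now=None, floor=5):
    """Seconds to sleep before the next request, from rate-limit headers.

    Protocols that don't send x-rate-limit-* headers get a delay of 0,
    so this is harmless for anything that isn't Twitter.
    """
    remaining = headers.get('x-rate-limit-remaining')
    reset = headers.get('x-rate-limit-reset')  # epoch seconds
    if remaining is None or reset is None:
        return 0  # no rate limit advertised: do nothing
    remaining = int(remaining)
    if remaining > floor:
        return 0  # plenty of allowance left; stay unobtrusive
    now = time.time() if now is None else now
    window_left = max(int(reset) - now, 0)
    # Spread the remaining requests evenly across the rest of the
    # window so we never actually run out before the reset.
    return window_left / max(remaining, 1)
```

A dispatcher polling every five minutes never drops below the floor, so it never sleeps; only a burst of manual refreshes triggers the forced pauses.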
- 1491. By Robert Bruce Park

Fill out some comments and docstrings.

- 1492. By Robert Bruce Park

Catch 403s in send_private, with test.

- 1493. By Robert Bruce Park

Add tests for the rate limiter.
Preview Diff
1 | === modified file 'gwibber/gwibber/protocols/twitter.py' |
2 | --- gwibber/gwibber/protocols/twitter.py 2012-09-19 22:21:39 +0000 |
3 | +++ gwibber/gwibber/protocols/twitter.py 2012-10-04 20:29:21 +0000 |
4 | @@ -14,20 +14,289 @@ |
5 | |
6 | """The Twitter protocol plugin.""" |
7 | |
8 | + |
9 | __all__ = [ |
10 | 'Twitter', |
11 | ] |
12 | |
13 | |
14 | -import gettext |
15 | import logging |
16 | |
17 | -from gwibber.utils.base import Base |
18 | +from oauthlib.oauth1 import Client |
19 | +from urllib.error import HTTPError |
20 | +from urllib.parse import quote |
21 | + |
22 | +from gwibber.utils.authentication import Authentication |
23 | +from gwibber.utils.base import Base, feature |
24 | +from gwibber.utils.download import get_json |
25 | +from gwibber.utils.time import parsetime, iso8601utc |
26 | |
27 | |
28 | log = logging.getLogger('gwibber.service') |
29 | -_ = gettext.lgettext |
30 | - |
31 | - |
32 | + |
33 | + |
34 | +# https://dev.twitter.com/docs/api/1.1 |
35 | class Twitter(Base): |
36 | - pass |
37 | + # StatusNet claims to mimick the Twitter API very closely (so |
38 | + # closely to the point that they refer you to the twitter API |
39 | + # reference docs as a starting point for learning about their |
40 | + # API), So these prefixes are defined here as class attributes |
41 | + # instead of the usual module globals, in the hopes that the |
42 | + # StatusNet class will be able to subclass Twitter and change only |
43 | + # the URLs, with minimal other changes, and magically work. |
44 | + _api_base = 'https://api.twitter.com/1.1/{endpoint}.json' |
45 | + |
46 | + _timeline = _api_base.format(endpoint='statuses/{}_timeline') |
47 | + _user_timeline = _timeline.format('user') + '?screen_name={}' |
48 | + |
49 | + _lists = _api_base.format(endpoint='lists/statuses') + '?list_id={}' |
50 | + |
51 | + _destroy = _api_base.format(endpoint='statuses/destroy/{}') |
52 | + _retweet = _api_base.format(endpoint='statuses/retweet/{}') |
53 | + |
54 | + _tweet_permalink = 'https://twitter.com/{user_id}/status/{tweet_id}' |
55 | + |
56 | + def _locked_login(self, old_token): |
57 | + """Sign in without worrying about concurrent login attempts.""" |
58 | + result = Authentication(self._account, log).login() |
59 | + if result is None: |
60 | + log.error('No Twitter authentication results received.') |
61 | + return |
62 | + |
63 | + token = result.get('AccessToken') |
64 | + if token is None: |
65 | + log.error('No AccessToken in Twitter session: {!r}', result) |
66 | + else: |
67 | + self._account.access_token = token |
68 | + self._account.secret_token = result.get('TokenSecret') |
69 | + self._account.user_id = result.get('UserId') |
70 | + self._account.user_name = result.get('ScreenName') |
71 | + log.debug('{} UID: {}'.format(self.__class__.__name__, |
72 | + self._account.user_id)) |
73 | + |
74 | + def _get_url(self, url, data=None): |
75 | + """Access the Twitter API with correct OAuth signed headers.""" |
76 | + # TODO start enforcing some rate limiting. |
77 | + do_post = data is not None |
78 | + |
79 | + # "Client" == "Consumer" in oauthlib parlance. |
80 | + client_key = self._account.auth.parameters['ConsumerKey'] |
81 | + client_secret = self._account.auth.parameters['ConsumerSecret'] |
82 | + |
83 | + # "resource_owner" == secret and token. |
84 | + resource_owner_key = self._get_access_token() |
85 | + resource_owner_secret = self._account.secret_token |
86 | + oauth_client = Client(client_key, client_secret, |
87 | + resource_owner_key, resource_owner_secret) |
88 | + |
89 | + headers = {} |
90 | + if do_post: |
91 | + headers['Content-Type'] = 'application/x-www-form-urlencoded' |
92 | + |
93 | + # All we care about is the headers, which will contain the |
94 | + # Authorization header necessary to satisfy OAuth. |
95 | + uri, headers, body = oauth_client.sign( |
96 | + url, body=data, headers=headers, |
97 | + http_method='POST' if do_post else 'GET') |
98 | + |
99 | + return get_json(url, |
100 | + params=data, |
101 | + headers=headers, |
102 | + post=do_post) |
103 | + |
104 | + def _publish_tweet(self, tweet): |
105 | + """Publish a single tweet into the Dee.SharedModel.""" |
106 | + tweet_id = tweet.get('id_str') |
107 | + if tweet_id is None: |
108 | + log.info('We got a tweet with no id! Abort!') |
109 | + return |
110 | + |
111 | + user = tweet.get('user', {}) |
112 | + screen_name = user.get('screen_name', '') |
113 | + self._publish( |
114 | + message_id=tweet_id, |
115 | + message=tweet.get('text', ''), |
116 | + timestamp=iso8601utc(parsetime(tweet.get('created_at', ''))), |
117 | + stream='messages', |
118 | + sender=user.get('name', ''), |
119 | + sender_nick=screen_name, |
120 | + from_me=(screen_name == self._account.user_id), |
121 | + icon_uri=user.get('profile_image_url_https', ''), |
122 | + liked=tweet.get('favorited', False), |
123 | + url=self._tweet_permalink.format(user_id=screen_name, |
124 | + tweet_id=tweet_id), |
125 | + ) |
126 | + |
127 | +# https://dev.twitter.com/docs/api/1.1/get/statuses/home_timeline |
128 | + @feature |
129 | + def home(self): |
130 | + """Gather the user's home timeline.""" |
131 | + url = self._timeline.format('home') |
132 | + for tweet in self._get_url(url): |
133 | + self._publish_tweet(tweet) |
134 | + |
135 | +# https://dev.twitter.com/docs/api/1.1/get/statuses/mentions_timeline |
136 | + @feature |
137 | + def mentions(self): |
138 | + """Gather the tweets that mention us.""" |
139 | + url = self._timeline.format('mentions') |
140 | + for tweet in self._get_url(url): |
141 | + self._publish_tweet(tweet) |
142 | + |
143 | +# https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline |
144 | + @feature |
145 | + def user(self, screen_name=''): |
146 | + """Gather the tweets from a specific user. |
147 | + |
148 | + If screen_name is not specified, then gather the tweets |
149 | + written by the currently authenticated user. |
150 | + """ |
151 | + url = self._user_timeline.format(screen_name) |
152 | + for tweet in self._get_url(url): |
153 | + self._publish_tweet(tweet) |
154 | + |
155 | +# https://dev.twitter.com/docs/api/1.1/get/lists/statuses |
156 | + @feature |
157 | + def list(self, list_id): |
158 | + """Gather the tweets from the specified list_id.""" |
159 | + url = self._lists.format(list_id) |
160 | + for tweet in self._get_url(url): |
161 | + self._publish_tweet(tweet) |
162 | + |
163 | +# https://dev.twitter.com/docs/api/1.1/get/lists/list |
164 | + @feature |
165 | + def lists(self): |
166 | + """Gather the tweets from the lists that the we are subscribed to.""" |
167 | + url = self._api_base.format(endpoint='lists/list') |
168 | + for twitlist in self._get_url(url): |
169 | + self.list(twitlist.get('id_str', '')) |
170 | + |
171 | +# https://dev.twitter.com/docs/api/1.1/get/direct_messages |
172 | +# https://dev.twitter.com/docs/api/1.1/get/direct_messages/sent |
173 | + @feature |
174 | + def private(self): |
175 | + """Gather the direct messages sent to/from us.""" |
176 | + url = self._api_base.format(endpoint='direct_messages') |
177 | + for tweet in self._get_url(url): |
178 | + self._publish_tweet(tweet) |
179 | + |
180 | + url = self._api_base.format(endpoint='direct_messages/sent') |
181 | + for tweet in self._get_url(url): |
182 | + self._publish_tweet(tweet) |
183 | + |
184 | + @feature |
185 | + def receive(self): |
186 | + """Gather and publish all incoming messages.""" |
187 | + # TODO I know mentions and lists are actually incorporated |
188 | + # within the home timeline, but calling them explicitly will |
189 | + # ensure that more of those types of messages appear in the |
190 | + # timeline (eg, so they don't get drown out by everything |
191 | + # else). I'm not sure how necessary it is, though. |
192 | + self.home() |
193 | + self.mentions() |
194 | + self.lists() |
195 | + self.private() |
196 | + |
197 | + @feature |
198 | + def send_private(self, screen_name, message): |
199 | + """Send a direct message to the given screen name. |
200 | + |
201 | + This will error 403 if the person you are sending to does not |
202 | + follow you. |
203 | + """ |
204 | + url = self._api_base.format(endpoint='direct_messages/new') |
205 | + try: |
206 | + tweet = self._get_url(url, dict(text=message, screen_name=screen_name)) |
207 | + self._publish_tweet(tweet) |
208 | + except HTTPError as e: |
209 | + log.error('{}: Does that user follow you?'.format(e)) |
210 | + |
211 | +# https://dev.twitter.com/docs/api/1.1/post/statuses/update |
212 | + @feature |
213 | + def send(self, message): |
214 | + """Publish a public tweet.""" |
215 | + url = self._api_base.format(endpoint='statuses/update') |
216 | + tweet = self._get_url(url, dict(status=message)) |
217 | + self._publish_tweet(tweet) |
218 | + |
219 | +# https://dev.twitter.com/docs/api/1.1/post/statuses/update |
220 | + @feature |
221 | + def send_thread(self, message_id, message): |
222 | + """Send a reply message to message_id. |
223 | + |
224 | + Note that you have to @mention the message_id owner's screen name in |
225 | + order for Twitter to actually accept this as a reply. Otherwise it will |
226 | + just be an ordinary tweet. |
227 | + """ |
228 | + url = self._api_base.format(endpoint='statuses/update') |
229 | + tweet = self._get_url(url, dict(in_reply_to_status_id=message_id, |
230 | + status=message)) |
231 | + self._publish_tweet(tweet) |
232 | + |
233 | +# https://dev.twitter.com/docs/api/1.1/post/statuses/destroy/%3Aid |
234 | + @feature |
235 | + def delete(self, message_id): |
236 | + """Delete a tweet that you wrote.""" |
237 | + url = self._destroy.format(message_id) |
238 | + tweet = self._get_url(url, dict(trim_user='true')) |
239 | + self._unpublish(message_id) |
240 | + |
241 | +# https://dev.twitter.com/docs/api/1.1/post/statuses/retweet/%3Aid |
242 | + @feature |
243 | + def retweet(self, message_id): |
244 | + """Republish somebody else's tweet with your name on it.""" |
245 | + url = self._retweet.format(message_id) |
246 | + tweet = self._get_url(url, dict(trim_user='true')) |
247 | + self._publish_tweet(tweet) |
248 | + |
249 | +# https://dev.twitter.com/docs/api/1.1/post/friendships/destroy |
250 | + @feature |
251 | + def unfollow(self, screen_name): |
252 | + """Stop following the given screen name.""" |
253 | + url = self._api_base.format(endpoint='friendships/destroy') |
254 | + self._get_url(url, dict(screen_name=screen_name)) |
255 | + |
256 | +# https://dev.twitter.com/docs/api/1.1/post/friendships/create |
257 | + @feature |
258 | + def follow(self, screen_name): |
259 | + """Start following the given screen name.""" |
260 | + url = self._api_base.format(endpoint='friendships/create') |
261 | + self._get_url(url, dict(screen_name=screen_name, follow='true')) |
262 | + |
263 | +# https://dev.twitter.com/docs/api/1.1/post/favorites/create |
264 | + @feature |
265 | + def like(self, message_id): |
266 | + """Announce to the world your undying love for a tweet.""" |
267 | + url = self._api_base.format(endpoint='favorites/create') |
268 | + tweet = self._get_url(url, dict(id=message_id)) |
269 | + # I don't think we need to publish this tweet because presumably |
270 | + # the user has clicked the 'favorite' button on the message that's |
271 | + # already in the stream... |
272 | + |
273 | +# https://dev.twitter.com/docs/api/1.1/post/favorites/destroy |
274 | + @feature |
275 | + def unlike(self, message_id): |
276 | + """Renounce your undying love for a tweet.""" |
277 | + url = self._api_base.format(endpoint='favorites/destroy') |
278 | + tweet = self._get_url(url, dict(id=message_id)) |
279 | + |
280 | +# https://dev.twitter.com/docs/api/1.1/get/search/tweets |
281 | + @feature |
282 | + def tag(self, hashtag): |
283 | + """Return a list of some recent tweets mentioning hashtag.""" |
284 | + url = self._api_base.format(endpoint='search/tweets') |
285 | + |
286 | + response = self._get_url( |
287 | + '{}?q=%23{}'.format(url, hashtag.lstrip('#'))) |
288 | + for tweet in response.get('statuses', []): |
289 | + self._publish_tweet(tweet) |
290 | + |
291 | +# https://dev.twitter.com/docs/api/1.1/get/search/tweets |
292 | + @feature |
293 | + def search(self, query): |
294 | + """Search for any arbitrary string.""" |
295 | + url = self._api_base.format(endpoint='search/tweets') |
296 | + |
297 | + response = self._get_url('{}?q={}'.format(url, quote(query, safe=''))) |
298 | + for tweet in response.get('statuses', []): |
299 | + self._publish_tweet(tweet) |
300 | |
301 | === modified file 'gwibber/gwibber/testing/mocks.py' |
302 | --- gwibber/gwibber/testing/mocks.py 2012-09-21 21:45:06 +0000 |
303 | +++ gwibber/gwibber/testing/mocks.py 2012-10-04 20:29:21 +0000 |
304 | @@ -71,21 +71,30 @@ |
305 | class FakeData: |
306 | """Mimic a urlopen() that returns canned data.""" |
307 | |
308 | - def __init__(self, path, resource, charset='utf-8'): |
309 | + def __init__(self, path, resource, charset='utf-8', headers=None): |
310 | # resource_string() always returns bytes. |
311 | self._data = resource_string(path, resource) |
312 | self.call_count = 0 |
313 | self._charset = charset |
314 | + self._headers = headers or {} |
315 | |
316 | def __call__(self, url, post_data=None): |
317 | # Ignore url and post_data since the canned data will already |
318 | # represents these results. We just have to make the API fit. |
319 | # post_data will be missing for GETs. |
320 | charset = self._charset |
321 | + class FakeInfo: |
322 | + def __init__(self, headers=None): |
323 | + self.headers = headers or {} |
324 | + def get(self, key, default=None): |
325 | + return self.headers.get(key, default) |
326 | + def get_content_charset(self): |
327 | + return charset |
328 | class FakeOpen: |
329 | - def __init__(self, data, charset): |
330 | + def __init__(self, data, charset, headers=None): |
331 | self._data = data |
332 | self._charset = charset |
333 | + self.headers = headers or {} |
334 | def read(self): |
335 | return self._data |
336 | def __enter__(self): |
337 | @@ -93,12 +102,9 @@ |
338 | def __exit__(self, *args, **kws): |
339 | pass |
340 | def info(self): |
341 | - class FakeInfo: |
342 | - def get_content_charset(self): |
343 | - return charset |
344 | - return FakeInfo() |
345 | + return FakeInfo(self.headers) |
346 | self.call_count += 1 |
347 | - return FakeOpen(self._data, self._charset) |
348 | + return FakeOpen(self._data, self._charset, self._headers) |
349 | |
350 | |
351 | class SettingsIterMock: |
352 | |
353 | === added file 'gwibber/gwibber/tests/data/twitter-home.dat' |
354 | --- gwibber/gwibber/tests/data/twitter-home.dat 1970-01-01 00:00:00 +0000 |
355 | +++ gwibber/gwibber/tests/data/twitter-home.dat 2012-10-04 20:29:21 +0000 |
356 | @@ -0,0 +1,344 @@ |
357 | + [ |
358 | + { |
359 | + "coordinates": null, |
360 | + "truncated": false, |
361 | + "created_at": "Tue Aug 28 21:16:23 +0000 2012", |
362 | + "favorited": false, |
363 | + "id_str": "240558470661799936", |
364 | + "in_reply_to_user_id_str": null, |
365 | + "entities": { |
366 | + "urls": [ |
367 | + |
368 | + ], |
369 | + "hashtags": [ |
370 | + |
371 | + ], |
372 | + "user_mentions": [ |
373 | + |
374 | + ] |
375 | + }, |
376 | + "text": "just another test", |
377 | + "contributors": null, |
378 | + "id": 240558470661799936, |
379 | + "retweet_count": 0, |
380 | + "in_reply_to_status_id_str": null, |
381 | + "geo": null, |
382 | + "retweeted": false, |
383 | + "in_reply_to_user_id": null, |
384 | + "place": null, |
385 | + "source": "<a href=\"http://realitytechnicians.com\" rel=\"nofollow\">OAuth Dancer Reborn</a>", |
386 | + "user": { |
387 | + "name": "OAuth Dancer", |
388 | + "profile_sidebar_fill_color": "DDEEF6", |
389 | + "profile_background_tile": true, |
390 | + "profile_sidebar_border_color": "C0DEED", |
391 | + "profile_image_url": "http://a0.twimg.com/profile_images/730275945/oauth-dancer_normal.jpg", |
392 | + "created_at": "Wed Mar 03 19:37:35 +0000 2010", |
393 | + "location": "San Francisco, CA", |
394 | + "follow_request_sent": false, |
395 | + "id_str": "119476949", |
396 | + "is_translator": false, |
397 | + "profile_link_color": "0084B4", |
398 | + "entities": { |
399 | + "url": { |
400 | + "urls": [ |
401 | + { |
402 | + "expanded_url": null, |
403 | + "url": "http://bit.ly/oauth-dancer", |
404 | + "indices": [ |
405 | + 0, |
406 | + 26 |
407 | + ], |
408 | + "display_url": null |
409 | + } |
410 | + ] |
411 | + }, |
412 | + "description": null |
413 | + }, |
414 | + "default_profile": false, |
415 | + "url": "http://bit.ly/oauth-dancer", |
416 | + "contributors_enabled": false, |
417 | + "favourites_count": 7, |
418 | + "utc_offset": null, |
419 | + "profile_image_url_https": "https://si0.twimg.com/profile_images/730275945/oauth-dancer_normal.jpg", |
420 | + "id": 119476949, |
421 | + "listed_count": 1, |
422 | + "profile_use_background_image": true, |
423 | + "profile_text_color": "333333", |
424 | + "followers_count": 28, |
425 | + "lang": "en", |
426 | + "protected": false, |
427 | + "geo_enabled": true, |
428 | + "notifications": false, |
429 | + "description": "", |
430 | + "profile_background_color": "C0DEED", |
431 | + "verified": false, |
432 | + "time_zone": null, |
433 | + "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/80151733/oauth-dance.png", |
434 | + "statuses_count": 166, |
435 | + "profile_background_image_url": "http://a0.twimg.com/profile_background_images/80151733/oauth-dance.png", |
436 | + "default_profile_image": false, |
437 | + "friends_count": 14, |
438 | + "following": false, |
439 | + "show_all_inline_media": false, |
440 | + "screen_name": "oauth_dancer" |
441 | + }, |
442 | + "in_reply_to_screen_name": null, |
443 | + "in_reply_to_status_id": null |
444 | + }, |
445 | + { |
446 | + "coordinates": { |
447 | + "coordinates": [ |
448 | + -122.25831, |
449 | + 37.871609 |
450 | + ], |
451 | + "type": "Point" |
452 | + }, |
453 | + "truncated": false, |
454 | + "created_at": "Tue Aug 28 21:08:15 +0000 2012", |
455 | + "favorited": false, |
456 | + "id_str": "240556426106372096", |
457 | + "in_reply_to_user_id_str": null, |
458 | + "entities": { |
459 | + "urls": [ |
460 | + { |
461 | + "expanded_url": "http://blogs.ischool.berkeley.edu/i290-abdt-s12/", |
462 | + "url": "http://t.co/bfj7zkDJ", |
463 | + "indices": [ |
464 | + 79, |
465 | + 99 |
466 | + ], |
467 | + "display_url": "blogs.ischool.berkeley.edu/i290-abdt-s12/" |
468 | + } |
469 | + ], |
470 | + "hashtags": [ |
471 | + |
472 | + ], |
473 | + "user_mentions": [ |
474 | + { |
475 | + "name": "Cal", |
476 | + "id_str": "17445752", |
477 | + "id": 17445752, |
478 | + "indices": [ |
479 | + 60, |
480 | + 64 |
481 | + ], |
482 | + "screen_name": "Cal" |
483 | + }, |
484 | + { |
485 | + "name": "Othman Laraki", |
486 | + "id_str": "20495814", |
487 | + "id": 20495814, |
488 | + "indices": [ |
489 | + 70, |
490 | + 77 |
491 | + ], |
492 | + "screen_name": "othman" |
493 | + } |
494 | + ] |
495 | + }, |
496 | + "text": "lecturing at the \"analyzing big data with twitter\" class at @cal with @othman http://t.co/bfj7zkDJ", |
497 | + "contributors": null, |
498 | + "id": 240556426106372096, |
499 | + "retweet_count": 3, |
500 | + "in_reply_to_status_id_str": null, |
501 | + "geo": { |
502 | + "coordinates": [ |
503 | + 37.871609, |
504 | + -122.25831 |
505 | + ], |
506 | + "type": "Point" |
507 | + }, |
508 | + "retweeted": false, |
509 | + "possibly_sensitive": false, |
510 | + "in_reply_to_user_id": null, |
511 | + "place": { |
512 | + "name": "Berkeley", |
513 | + "country_code": "US", |
514 | + "country": "United States", |
515 | + "attributes": { |
516 | + }, |
517 | + "url": "http://api.twitter.com/1/geo/id/5ef5b7f391e30aff.json", |
518 | + "id": "5ef5b7f391e30aff", |
519 | + "bounding_box": { |
520 | + "coordinates": [ |
521 | + [ |
522 | + [ |
523 | + -122.367781, |
524 | + 37.835727 |
525 | + ], |
526 | + [ |
527 | + -122.234185, |
528 | + 37.835727 |
529 | + ], |
530 | + [ |
531 | + -122.234185, |
532 | + 37.905824 |
533 | + ], |
534 | + [ |
535 | + -122.367781, |
536 | + 37.905824 |
537 | + ] |
538 | + ] |
539 | + ], |
540 | + "type": "Polygon" |
541 | + }, |
542 | + "full_name": "Berkeley, CA", |
543 | + "place_type": "city" |
544 | + }, |
545 | + "source": "<a href=\"http://www.apple.com\" rel=\"nofollow\">Safari on iOS</a>", |
546 | + "user": { |
547 | + "name": "Raffi Krikorian", |
548 | + "profile_sidebar_fill_color": "DDEEF6", |
549 | + "profile_background_tile": false, |
550 | + "profile_sidebar_border_color": "C0DEED", |
551 | + "profile_image_url": "http://a0.twimg.com/profile_images/1270234259/raffi-headshot-casual_normal.png", |
552 | + "created_at": "Sun Aug 19 14:24:06 +0000 2007", |
553 | + "location": "San Francisco, California", |
554 | + "follow_request_sent": false, |
555 | + "id_str": "8285392", |
556 | + "is_translator": false, |
557 | + "profile_link_color": "0084B4", |
558 | + "entities": { |
559 | + "url": { |
560 | + "urls": [ |
561 | + { |
562 | + "expanded_url": "http://about.me/raffi.krikorian", |
563 | + "url": "http://t.co/eNmnM6q", |
564 | + "indices": [ |
565 | + 0, |
566 | + 19 |
567 | + ], |
568 | + "display_url": "about.me/raffi.krikorian" |
569 | + } |
570 | + ] |
571 | + }, |
572 | + "description": { |
573 | + "urls": [ |
574 | + |
575 | + ] |
576 | + } |
577 | + }, |
578 | + "default_profile": true, |
579 | + "url": "http://t.co/eNmnM6q", |
580 | + "contributors_enabled": false, |
581 | + "favourites_count": 724, |
582 | + "utc_offset": -28800, |
583 | + "profile_image_url_https": "https://si0.twimg.com/profile_images/1270234259/raffi-headshot-casual_normal.png", |
584 | + "id": 8285392, |
585 | + "listed_count": 619, |
586 | + "profile_use_background_image": true, |
587 | + "profile_text_color": "333333", |
588 | + "followers_count": 18752, |
589 | + "lang": "en", |
590 | + "protected": false, |
591 | + "geo_enabled": true, |
592 | + "notifications": false, |
593 | + "description": "Director of @twittereng's Platform Services. I break things.", |
594 | + "profile_background_color": "C0DEED", |
595 | + "verified": false, |
596 | + "time_zone": "Pacific Time (US & Canada)", |
597 | + "profile_background_image_url_https": "https://si0.twimg.com/images/themes/theme1/bg.png", |
598 | + "statuses_count": 5007, |
599 | + "profile_background_image_url": "http://a0.twimg.com/images/themes/theme1/bg.png", |
600 | + "default_profile_image": false, |
601 | + "friends_count": 701, |
602 | + "following": true, |
603 | + "show_all_inline_media": true, |
604 | + "screen_name": "raffi" |
605 | + }, |
606 | + "in_reply_to_screen_name": null, |
607 | + "in_reply_to_status_id": null |
608 | + }, |
609 | + { |
610 | + "coordinates": null, |
611 | + "truncated": false, |
612 | + "created_at": "Tue Aug 28 19:59:34 +0000 2012", |
613 | + "favorited": false, |
614 | + "id_str": "240539141056638977", |
615 | + "in_reply_to_user_id_str": null, |
616 | + "entities": { |
617 | + "urls": [ |
618 | + |
619 | + ], |
620 | + "hashtags": [ |
621 | + |
622 | + ], |
623 | + "user_mentions": [ |
624 | + |
625 | + ] |
626 | + }, |
627 | + "text": "You'd be right more often if you thought you were wrong.", |
628 | + "contributors": null, |
629 | + "id": 240539141056638977, |
630 | + "retweet_count": 1, |
631 | + "in_reply_to_status_id_str": null, |
632 | + "geo": null, |
633 | + "retweeted": false, |
634 | + "in_reply_to_user_id": null, |
635 | + "place": null, |
636 | + "source": "web", |
637 | + "user": { |
638 | + "name": "Taylor Singletary", |
639 | + "profile_sidebar_fill_color": "FBFBFB", |
640 | + "profile_background_tile": true, |
641 | + "profile_sidebar_border_color": "000000", |
642 | + "profile_image_url": "http://a0.twimg.com/profile_images/2546730059/f6a8zq58mg1hn0ha8vie_normal.jpeg", |
643 | + "created_at": "Wed Mar 07 22:23:19 +0000 2007", |
644 | + "location": "San Francisco, CA", |
645 | + "follow_request_sent": false, |
646 | + "id_str": "819797", |
647 | + "is_translator": false, |
648 | + "profile_link_color": "c71818", |
649 | + "entities": { |
650 | + "url": { |
651 | + "urls": [ |
652 | + { |
653 | + "expanded_url": "http://www.rebelmouse.com/episod/", |
654 | + "url": "http://t.co/Lxw7upbN", |
655 | + "indices": [ |
656 | + 0, |
657 | + 20 |
658 | + ], |
659 | + "display_url": "rebelmouse.com/episod/" |
660 | + } |
661 | + ] |
662 | + }, |
663 | + "description": { |
664 | + "urls": [ |
665 | + |
666 | + ] |
667 | + } |
668 | + }, |
669 | + "default_profile": false, |
670 | + "url": "http://t.co/Lxw7upbN", |
671 | + "contributors_enabled": false, |
672 | + "favourites_count": 15990, |
673 | + "utc_offset": -28800, |
674 | + "profile_image_url_https": "https://si0.twimg.com/profile_images/2546730059/f6a8zq58mg1hn0ha8vie_normal.jpeg", |
675 | + "id": 819797, |
676 | + "listed_count": 340, |
677 | + "profile_use_background_image": true, |
678 | + "profile_text_color": "D20909", |
679 | + "followers_count": 7126, |
680 | + "lang": "en", |
681 | + "protected": false, |
682 | + "geo_enabled": true, |
683 | + "notifications": false, |
684 | + "description": "Reality Technician, Twitter API team, synthesizer enthusiast; a most excellent adventure in timelines. I know it's hard to believe in something you can't see.", |
685 | + "profile_background_color": "000000", |
686 | + "verified": false, |
687 | + "time_zone": "Pacific Time (US & Canada)", |
688 | + "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/643655842/hzfv12wini4q60zzrthg.png", |
689 | + "statuses_count": 18076, |
690 | + "profile_background_image_url": "http://a0.twimg.com/profile_background_images/643655842/hzfv12wini4q60zzrthg.png", |
691 | + "default_profile_image": false, |
692 | + "friends_count": 5444, |
693 | + "following": true, |
694 | + "show_all_inline_media": true, |
695 | + "screen_name": "episod" |
696 | + }, |
697 | + "in_reply_to_screen_name": null, |
698 | + "in_reply_to_status_id": null |
699 | + } |
700 | + ] |
701 | |
702 | === modified file 'gwibber/gwibber/tests/test_dbus.py' |
703 | --- gwibber/gwibber/tests/test_dbus.py 2012-09-25 20:25:42 +0000 |
704 | +++ gwibber/gwibber/tests/test_dbus.py 2012-10-04 20:29:21 +0000 |
705 | @@ -111,6 +111,11 @@ |
706 | '/com/gwibber/Service') |
707 | iface = dbus.Interface(obj, 'com.Gwibber.Service') |
708 | # TODO Add more cases as more protocols are added. |
709 | + self.assertEqual(json.loads(iface.GetFeatures('twitter')), |
710 | + ['delete', 'follow', 'home', 'like', 'list', 'lists', |
711 | + 'mentions', 'private', 'receive', 'retweet', 'search', |
712 | + 'send', 'send_private', 'send_thread', 'tag', |
713 | + 'unfollow', 'unlike', 'user']) |
714 | self.assertEqual(json.loads(iface.GetFeatures('flickr')), ['receive']) |
715 | self.assertEqual(json.loads(iface.GetFeatures('foursquare')), |
716 | ['receive']) |
717 | |
718 | === modified file 'gwibber/gwibber/tests/test_download.py' |
719 | --- gwibber/gwibber/tests/test_download.py 2012-09-14 20:29:11 +0000 |
720 | +++ gwibber/gwibber/tests/test_download.py 2012-10-04 20:29:21 +0000 |
721 | @@ -249,3 +249,44 @@ |
722 | headers={'X-Foo': 'baz', |
723 | 'X-Bar': 'foo'}), |
724 | dict(foo='baz', bar='foo')) |
725 | + |
726 | + @mock.patch('gwibber.utils.download.urlopen', |
727 | + FakeData('gwibber.tests.data', 'twitter-home.dat', |
728 | + headers={'X-Rate-Limit-Reset':1349382153 + 300, |
729 | + 'X-Rate-Limit-Remaining': 1})) |
730 | + @mock.patch('gwibber.utils.download.time.sleep') |
731 | + @mock.patch('gwibber.utils.download.time.time', return_value=1349382153) |
732 | + def test_rate_limiter_maximum(self, time, sleep): |
733 | + # First call does not get limited, but establishes the limit |
734 | + get_json('http://example.com/alpha') |
735 | + sleep.assert_called_with(0) |
736 | + # Second call gets called with the established limit |
737 | + get_json('http://example.com/alpha') |
738 | + sleep.assert_called_with(300) |
739 | + |
740 | + @mock.patch('gwibber.utils.download.urlopen', |
741 | + FakeData('gwibber.tests.data', 'twitter-home.dat', |
742 | + headers={'X-Rate-Limit-Reset':1349382153 + 300, |
743 | + 'X-Rate-Limit-Remaining': 3})) |
744 | + @mock.patch('gwibber.utils.download.time.sleep') |
745 | + @mock.patch('gwibber.utils.download.time.time', return_value=1349382153) |
746 | + def test_rate_limiter_medium(self, time, sleep): |
747 | + # First call does not get limited, but establishes the limit |
748 | + get_json('http://example.com/beta') |
749 | + sleep.assert_called_with(0) |
750 | + # Second call gets called with the established limit |
751 | + get_json('http://example.com/beta') |
752 | + sleep.assert_called_with(100.0) |
753 | + |
754 | + @mock.patch('gwibber.utils.download.urlopen', |
755 | + FakeData('gwibber.tests.data', 'twitter-home.dat', |
756 | + headers={'X-Rate-Limit-Reset':int(time.time()) + 300, |
757 | + 'X-Rate-Limit-Remaining': 10})) |
758 | + @mock.patch('gwibber.utils.download.time.sleep') |
759 | + def test_rate_limiter_unlimited(self, sleep): |
760 | + # First call does not get limited, but establishes the limit |
761 | + get_json('http://example.com/omega') |
762 | + sleep.assert_called_with(0) |
763 | + # Second call gets called with the established limit |
764 | + get_json('http://example.com/omega') |
765 | + sleep.assert_called_with(0) |
766 | |
767 | === modified file 'gwibber/gwibber/tests/test_protocols.py' |
768 | --- gwibber/gwibber/tests/test_protocols.py 2012-09-25 20:25:42 +0000 |
769 | +++ gwibber/gwibber/tests/test_protocols.py 2012-10-04 20:29:21 +0000 |
770 | @@ -176,6 +176,7 @@ |
771 | |
772 | @mock.patch('gwibber.utils.base.Model', TestModel) |
773 | @mock.patch('gwibber.utils.base._seen_messages', {}) |
774 | + @mock.patch('gwibber.utils.base._seen_ids', {}) |
775 | def test_one_message(self): |
776 | # Test that publishing a message inserts a row into the model. |
777 | base = Base(FakeAccount()) |
778 | @@ -215,6 +216,32 @@ |
779 | |
780 | @mock.patch('gwibber.utils.base.Model', TestModel) |
781 | @mock.patch('gwibber.utils.base._seen_messages', {}) |
782 | + @mock.patch('gwibber.utils.base._seen_ids', {}) |
783 | + def test_unpublish(self): |
784 | + base = Base(FakeAccount()) |
785 | + self.assertEqual(0, TestModel.get_n_rows()) |
786 | + self.assertTrue(base._publish( |
787 | + message_id='1234', |
788 | + sender='fred', |
789 | + message='hello, @jimmy')) |
790 | + self.assertTrue(base._publish( |
791 | + message_id='5678', |
792 | + sender='fred', |
793 | + message='hello, +jimmy')) |
794 | + self.assertEqual(1, TestModel.get_n_rows()) |
795 | + self.assertEqual(TestModel[0][0], |
796 | + [['base', 'faker/than fake', '1234'], |
797 | + ['base', 'faker/than fake', '5678']]) |
798 | + base._unpublish('1234') |
799 | + self.assertEqual(1, TestModel.get_n_rows()) |
800 | + self.assertEqual(TestModel[0][0], |
801 | + [['base', 'faker/than fake', '5678']]) |
802 | + base._unpublish('5678') |
803 | + self.assertEqual(0, TestModel.get_n_rows()) |
804 | + |
805 | + @mock.patch('gwibber.utils.base.Model', TestModel) |
806 | + @mock.patch('gwibber.utils.base._seen_messages', {}) |
807 | + @mock.patch('gwibber.utils.base._seen_ids', {}) |
808 | def test_duplicate_messages_identified(self): |
809 | # When two messages which are deemed identical, by way of the |
810 | # _make_key() test in base.py, are published, only one ends up in the |
811 | @@ -259,6 +286,31 @@ |
812 | |
813 | @mock.patch('gwibber.utils.base.Model', TestModel) |
814 | @mock.patch('gwibber.utils.base._seen_messages', {}) |
815 | + @mock.patch('gwibber.utils.base._seen_ids', {}) |
816 | + def test_duplicate_ids_not_duplicated(self): |
817 | + # When two messages are actually identical (same ids and all), |
818 | + # we need to avoid duplicating the id in the SharedModel. |
819 | + base = Base(FakeAccount()) |
820 | + self.assertEqual(0, TestModel.get_n_rows()) |
821 | + self.assertTrue(base._publish( |
822 | + message_id='1234', |
823 | + stream='messages', |
824 | + sender='fred', |
825 | + message='hello, @jimmy')) |
826 | + self.assertTrue(base._publish( |
827 | + message_id='1234', |
828 | + stream='messages', |
829 | + sender='fred', |
830 | + message='hello, @jimmy')) |
831 | + self.assertEqual(1, TestModel.get_n_rows()) |
832 | + row = TestModel.get_row(0) |
833 | + # The same message_id should not appear twice. |
834 | + self.assertEqual(row[COLUMN_INDICES['message_ids']], |
835 | + [['base', 'faker/than fake', '1234']]) |
836 | + |
837 | + @mock.patch('gwibber.utils.base.Model', TestModel) |
838 | + @mock.patch('gwibber.utils.base._seen_messages', {}) |
839 | + @mock.patch('gwibber.utils.base._seen_ids', {}) |
840 | def test_similar_messages_allowed(self): |
841 | # Because both the sender and message contribute to the unique key we |
842 | # use to identify messages, if two messages are published with |
843 | |
844 | === modified file 'gwibber/gwibber/tests/test_twitter.py' |
845 | --- gwibber/gwibber/tests/test_twitter.py 2012-09-21 14:11:34 +0000 |
846 | +++ gwibber/gwibber/tests/test_twitter.py 2012-10-04 20:29:21 +0000 |
847 | @@ -14,6 +14,7 @@ |
848 | |
849 | """Test the Twitter plugin.""" |
850 | |
851 | + |
852 | __all__ = [ |
853 | 'TestTwitter', |
854 | ] |
855 | @@ -21,9 +22,13 @@ |
856 | |
857 | import unittest |
858 | |
859 | +from gi.repository import Dee |
860 | + |
861 | from gwibber.protocols.twitter import Twitter |
862 | from gwibber.testing.helpers import FakeAccount |
863 | -from gwibber.testing.mocks import LogMock |
864 | +from gwibber.testing.mocks import FakeData, LogMock |
865 | +from gwibber.utils.model import COLUMN_TYPES |
866 | + |
867 | |
868 | try: |
869 | # Python 3.3 |
870 | @@ -32,10 +37,12 @@ |
871 | import mock |
872 | |
873 | |
874 | -# Ensure synchronicity between the main thread and the sub-thread. Also, set |
875 | -# up the loggers for the modules-under-test so that we can assert their error |
876 | -# messages. |
877 | -@mock.patch.dict('gwibber.utils.base.__dict__', {'_SYNCHRONIZE': True}) |
878 | +# Create a test model that will not interfere with the user's environment. |
879 | +# We'll use this object as a mock of the real model. |
880 | +TestModel = Dee.SharedModel.new('com.Gwibber.TestSharedModel') |
881 | +TestModel.set_schema_full(COLUMN_TYPES) |
882 | + |
883 | + |
884 | class TestTwitter(unittest.TestCase): |
885 | """Test the Twitter API.""" |
886 | |
887 | @@ -46,8 +53,289 @@ |
888 | 'gwibber.protocols.twitter') |
889 | |
890 | def tearDown(self): |
891 | + # Ensure that any log entries we haven't tested just get consumed so |
892 | + # as to isolate our test logger from other tests. |
893 | self.log_mock.stop() |
894 | |
895 | - def test_protocol_info(self): |
896 | - # Each protocol carries with it a number of protocol variables. |
897 | - self.assertEqual(self.protocol.__class__.__name__, 'Twitter') |
898 | + @mock.patch('gwibber.utils.authentication.Authentication.login', |
899 | + return_value=None) |
900 | + @mock.patch('gwibber.utils.download.get_json', |
901 | + return_value=None) |
902 | + def test_unsuccessful_authentication(self, *mocks): |
903 | + self.assertFalse(self.protocol._login()) |
904 | + self.assertIsNone(self.account.user_name) |
905 | + self.assertIsNone(self.account.user_id) |
906 | + |
907 | + @mock.patch('gwibber.utils.authentication.Authentication.login', |
908 | + return_value=dict(AccessToken='some clever fake data', |
909 | + TokenSecret='sssssshhh!', |
910 | + UserId='rickygervais', |
911 | + ScreenName='Ricky Gervais')) |
912 | + def test_successful_authentication(self, *mocks): |
913 | + self.assertTrue(self.protocol._login()) |
914 | + self.assertEqual(self.account.user_name, 'Ricky Gervais') |
915 | + self.assertEqual(self.account.user_id, 'rickygervais') |
916 | + self.assertEqual(self.account.access_token, 'some clever fake data') |
917 | + self.assertEqual(self.account.secret_token, 'sssssshhh!') |
918 | + |
919 | + |
920 | + @mock.patch('gwibber.protocols.twitter.get_json', lambda *x, **y: y) |
921 | + @mock.patch('oauthlib.oauth1.rfc5849.generate_nonce', |
922 | + lambda: 'once upon a nonce') |
923 | + @mock.patch('oauthlib.oauth1.rfc5849.generate_timestamp', |
924 | + lambda: '1348690628') |
925 | + def test_signatures(self, *mocks): |
926 | + self.account.secret_token = 'alpha' |
927 | + self.account.access_token = 'omega' |
928 | + self.account.auth.id = 6 |
929 | + self.account.auth.method = 'oauth2' |
930 | + self.account.auth.mechanism = 'HMAC-SHA1' |
931 | + self.account.auth.parameters = dict(ConsumerKey='consume', |
932 | + ConsumerSecret='obey') |
933 | + result = '''\ |
934 | +OAuth oauth_nonce="once%20upon%20a%20nonce", \ |
935 | +oauth_timestamp="1348690628", \ |
936 | +oauth_version="1.0", \ |
937 | +oauth_signature_method="HMAC-SHA1", \ |
938 | +oauth_consumer_key="consume", \ |
939 | +oauth_token="omega", \ |
940 | +oauth_signature="2MlC4DOqcAdCUmU647izPmxiL%2F0%3D"''' |
941 | + |
942 | + self.assertEqual( |
943 | + self.protocol._get_url('http://example.com')['headers'], |
944 | + dict(Authorization=result)) |
945 | + |
946 | + @mock.patch('gwibber.utils.base.Model', TestModel) |
947 | + @mock.patch('gwibber.utils.download.urlopen', |
948 | + FakeData('gwibber.tests.data', 'twitter-home.dat')) |
949 | + @mock.patch('gwibber.protocols.twitter.Twitter._login', |
950 | + return_value=True) |
951 | + def test_home(self, *mocks): |
952 | + self.account.access_token = 'access' |
953 | + self.account.secret_token = 'secret' |
954 | + self.account.auth.parameters = dict( |
955 | + ConsumerKey='key', |
956 | + ConsumerSecret='secret') |
957 | + self.assertEqual(0, TestModel.get_n_rows()) |
958 | + self.protocol.home() |
959 | + self.assertEqual(3, TestModel.get_n_rows()) |
960 | + |
961 | + # This test data was ripped directly from Twitter's API docs. |
962 | + expected = [ |
963 | + [[['twitter', 'faker/than fake', '240558470661799936']], |
964 | + 'messages', 'OAuth Dancer', 'oauth_dancer', False, |
965 | + '2012-08-28T21:16:23', 'just another test', '', |
966 | + 'https://si0.twimg.com/profile_images/730275945/oauth-dancer_normal.jpg', |
967 | + 'https://twitter.com/oauth_dancer/status/240558470661799936', '', |
968 | + '', '', '', 0.0, False, '', '', '', '', '', '', '', '', '', '', '', |
969 | + '', '', '', '', '', '', '', '', '', '', |
970 | + ], |
971 | + [[['twitter', 'faker/than fake', '240556426106372096']], |
972 | + 'messages', 'Raffi Krikorian', 'raffi', False, |
973 | + '2012-08-28T21:08:15', 'lecturing at the "analyzing big data ' + |
974 | + 'with twitter" class at @cal with @othman http://t.co/bfj7zkDJ', '', |
975 | + 'https://si0.twimg.com/profile_images/1270234259/raffi-headshot-casual_normal.png', |
976 | + 'https://twitter.com/raffi/status/240556426106372096', '', |
977 | + '', '', '', 0.0, False, '', '', '', '', '', '', '', '', '', '', '', |
978 | + '', '', '', '', '', '', '', '', '', '', |
979 | + ], |
980 | + [[['twitter', 'faker/than fake', '240539141056638977']], |
981 | + 'messages', 'Taylor Singletary', 'episod', False, |
982 | + '2012-08-28T19:59:34', 'You\'d be right more often if you thought you were wrong.', '', |
983 | + 'https://si0.twimg.com/profile_images/2546730059/f6a8zq58mg1hn0ha8vie_normal.jpeg', |
984 | + 'https://twitter.com/episod/status/240539141056638977', '', |
985 | + '', '', '', 0.0, False, '', '', '', '', '', '', '', '', '', '', '', |
986 | + '', '', '', '', '', '', '', '', '', '', |
987 | + ], |
988 | + ] |
989 | + for i, expected_row in enumerate(expected): |
990 | + for got, want in zip(TestModel.get_row(i), expected_row): |
991 | + self.assertEqual(got, want) |
992 | + |
993 | + def test_mentions(self): |
994 | + get_url = self.protocol._get_url = mock.Mock(return_value=['tweet']) |
995 | + publish = self.protocol._publish_tweet = mock.Mock() |
996 | + |
997 | + self.protocol.mentions() |
998 | + |
999 | + publish.assert_called_with('tweet') |
1000 | + get_url.assert_called_with( |
1001 | + 'https://api.twitter.com/1.1/statuses/mentions_timeline.json') |
1002 | + |
1003 | + def test_user(self): |
1004 | + get_url = self.protocol._get_url = mock.Mock(return_value=['tweet']) |
1005 | + publish = self.protocol._publish_tweet = mock.Mock() |
1006 | + |
1007 | + self.protocol.user() |
1008 | + |
1009 | + publish.assert_called_with('tweet') |
1010 | + get_url.assert_called_with( |
1011 | + 'https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=') |
1012 | + |
1013 | + def test_list(self): |
1014 | + get_url = self.protocol._get_url = mock.Mock(return_value=['tweet']) |
1015 | + publish = self.protocol._publish_tweet = mock.Mock() |
1016 | + |
1017 | + self.protocol.list('some_list_id') |
1018 | + |
1019 | + publish.assert_called_with('tweet') |
1020 | + get_url.assert_called_with( |
1021 | + 'https://api.twitter.com/1.1/lists/statuses.json?list_id=some_list_id') |
1022 | + |
1023 | + def test_lists(self): |
1024 | + get_url = self.protocol._get_url = mock.Mock( |
1025 | + return_value=[dict(id_str='twitlist')]) |
1026 | + publish = self.protocol.list = mock.Mock() |
1027 | + |
1028 | + self.protocol.lists() |
1029 | + |
1030 | + publish.assert_called_with('twitlist') |
1031 | + get_url.assert_called_with( |
1032 | + 'https://api.twitter.com/1.1/lists/list.json') |
1033 | + |
1034 | + def test_private(self): |
1035 | + get_url = self.protocol._get_url = mock.Mock(return_value=['tweet']) |
1036 | + publish = self.protocol._publish_tweet = mock.Mock() |
1037 | + |
1038 | + self.protocol.private() |
1039 | + |
1040 | + publish.assert_called_with('tweet') |
1041 | + self.assertEqual( |
1042 | + get_url.mock_calls, |
1043 | + [mock.call('https://api.twitter.com/1.1/direct_messages.json'), |
1044 | + mock.call('https://api.twitter.com/1.1/direct_messages/sent.json')]) |
1045 | + |
1046 | + def test_send_private(self): |
1047 | + get_url = self.protocol._get_url = mock.Mock(return_value='tweet') |
1048 | + publish = self.protocol._publish_tweet = mock.Mock() |
1049 | + |
1050 | + self.protocol.send_private('pumpichank', 'Are you mocking me?') |
1051 | + |
1052 | + publish.assert_called_with('tweet') |
1053 | + get_url.assert_called_with( |
1054 | + 'https://api.twitter.com/1.1/direct_messages/new.json', |
1055 | + dict(text='Are you mocking me?', screen_name='pumpichank')) |
1056 | + |
1057 | + def test_failing_send_private(self): |
1058 | + from urllib.error import HTTPError |
1059 | + def failing_request(*ignore): |
1060 | + raise HTTPError('url', 403, 'Forbidden', 'Forbidden', mock.Mock()) |
1061 | + get_url = self.protocol._get_url = failing_request |
1062 | + publish = self.protocol._publish_tweet = mock.Mock() |
1063 | + |
1064 | + self.protocol.send_private('pumpichank', 'Are you mocking me?') |
1065 | + |
1066 | + self.assertEqual( |
1067 | + self.log_mock.empty(), |
1068 | + 'HTTP Error 403: Forbidden: Does that user follow you?\n') |
1069 | + |
1070 | + def test_send(self): |
1071 | + get_url = self.protocol._get_url = mock.Mock(return_value='tweet') |
1072 | + publish = self.protocol._publish_tweet = mock.Mock() |
1073 | + |
1074 | + self.protocol.send('Hello, twitterverse!') |
1075 | + |
1076 | + publish.assert_called_with('tweet') |
1077 | + get_url.assert_called_with( |
1078 | + 'https://api.twitter.com/1.1/statuses/update.json', |
1079 | + dict(status='Hello, twitterverse!')) |
1080 | + |
1081 | + def test_send_thread(self): |
1082 | + get_url = self.protocol._get_url = mock.Mock(return_value='tweet') |
1083 | + publish = self.protocol._publish_tweet = mock.Mock() |
1084 | + |
1085 | + self.protocol.send_thread( |
1086 | + '1234', |
1087 | + 'Why yes, I would love to respond to your tweet @pumpichank!') |
1088 | + |
1089 | + publish.assert_called_with('tweet') |
1090 | + get_url.assert_called_with( |
1091 | + 'https://api.twitter.com/1.1/statuses/update.json', |
1092 | + dict(status='Why yes, I would love to respond to your tweet @pumpichank!', |
1093 | + in_reply_to_status_id='1234')) |
1094 | + |
1095 | + def test_delete(self): |
1096 | + get_url = self.protocol._get_url = mock.Mock(return_value='tweet') |
1097 | + publish = self.protocol._unpublish = mock.Mock() |
1098 | + |
1099 | + self.protocol.delete('1234') |
1100 | + |
1101 | + publish.assert_called_with('1234') |
1102 | + get_url.assert_called_with( |
1103 | + 'https://api.twitter.com/1.1/statuses/destroy/1234.json', |
1104 | + dict(trim_user='true')) |
1105 | + |
1106 | + def test_retweet(self): |
1107 | + get_url = self.protocol._get_url = mock.Mock(return_value='tweet') |
1108 | + publish = self.protocol._publish_tweet = mock.Mock() |
1109 | + |
1110 | + self.protocol.retweet('1234') |
1111 | + |
1112 | + publish.assert_called_with('tweet') |
1113 | + get_url.assert_called_with( |
1114 | + 'https://api.twitter.com/1.1/statuses/retweet/1234.json', |
1115 | + dict(trim_user='true')) |
1116 | + |
1117 | + def test_unfollow(self): |
1118 | + get_url = self.protocol._get_url = mock.Mock() |
1119 | + |
1120 | + self.protocol.unfollow('pumpichank') |
1121 | + |
1122 | + get_url.assert_called_with( |
1123 | + 'https://api.twitter.com/1.1/friendships/destroy.json', |
1124 | + dict(screen_name='pumpichank')) |
1125 | + |
1126 | + def test_follow(self): |
1127 | + get_url = self.protocol._get_url = mock.Mock() |
1128 | + |
1129 | + self.protocol.follow('pumpichank') |
1130 | + |
1131 | + get_url.assert_called_with( |
1132 | + 'https://api.twitter.com/1.1/friendships/create.json', |
1133 | + dict(screen_name='pumpichank', follow='true')) |
1134 | + |
1135 | + def test_like(self): |
1136 | + get_url = self.protocol._get_url = mock.Mock() |
1137 | + |
1138 | + self.protocol.like('1234') |
1139 | + |
1140 | + get_url.assert_called_with( |
1141 | + 'https://api.twitter.com/1.1/favorites/create.json', |
1142 | + dict(id='1234')) |
1143 | + |
1144 | + def test_unlike(self): |
1145 | + get_url = self.protocol._get_url = mock.Mock() |
1146 | + |
1147 | + self.protocol.unlike('1234') |
1148 | + |
1149 | + get_url.assert_called_with( |
1150 | + 'https://api.twitter.com/1.1/favorites/destroy.json', |
1151 | + dict(id='1234')) |
1152 | + |
1153 | + def test_tag(self): |
1154 | + get_url = self.protocol._get_url = mock.Mock( |
1155 | + return_value=dict(statuses=['tweet'])) |
1156 | + publish = self.protocol._publish_tweet = mock.Mock() |
1157 | + |
1158 | + self.protocol.tag('yegbike') |
1159 | + |
1160 | + publish.assert_called_with('tweet') |
1161 | + get_url.assert_called_with( |
1162 | + 'https://api.twitter.com/1.1/search/tweets.json?q=%23yegbike') |
1163 | + |
1164 | + self.protocol.tag('#yegbike') |
1165 | + |
1166 | + publish.assert_called_with('tweet') |
1167 | + get_url.assert_called_with( |
1168 | + 'https://api.twitter.com/1.1/search/tweets.json?q=%23yegbike') |
1169 | + |
1170 | + def test_search(self): |
1171 | + get_url = self.protocol._get_url = mock.Mock( |
1172 | + return_value=dict(statuses=['tweet'])) |
1173 | + publish = self.protocol._publish_tweet = mock.Mock() |
1174 | + |
1175 | + self.protocol.search('hello') |
1176 | + |
1177 | + publish.assert_called_with('tweet') |
1178 | + get_url.assert_called_with( |
1179 | + 'https://api.twitter.com/1.1/search/tweets.json?q=hello') |
1180 | |
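Nearly all of the new `test_twitter.py` cases follow one pattern: stub out `_get_url` and `_publish_tweet` with `mock.Mock`, invoke the operation, then assert the exact 1.1 endpoint and payload. A minimal standalone illustration of that pattern, with a toy `Protocol` class standing in for `Twitter` (the class and its methods here are illustrative, not the branch's code):

```python
from unittest import mock


class Protocol:
    """Toy stand-in for the Twitter protocol class."""

    def _get_url(self, url, data=None):
        raise NotImplementedError  # the real method hits the network

    def like(self, message_id):
        # Mirrors Twitter.like() as exercised by test_like above.
        self._get_url('https://api.twitter.com/1.1/favorites/create.json',
                      dict(id=message_id))


protocol = Protocol()
# Replace the network-touching method with a mock, as the tests do.
get_url = protocol._get_url = mock.Mock()
protocol.like('1234')
# The assertion checks both the endpoint and the POST parameters.
get_url.assert_called_with(
    'https://api.twitter.com/1.1/favorites/create.json', dict(id='1234'))
```

Because the mock replaces the method on the instance (not the class), each test gets a fresh stub and no request ever leaves the test process.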
1181 | === modified file 'gwibber/gwibber/utils/base.py' |
1182 | --- gwibber/gwibber/utils/base.py 2012-09-25 20:25:42 +0000 |
1183 | +++ gwibber/gwibber/utils/base.py 2012-10-04 20:29:21 +0000 |
1184 | @@ -42,6 +42,7 @@ |
1185 | # representing the rows matching those keys. It is used for quickly finding |
1186 | # duplicates when we want to insert new rows into the model. |
1187 | _seen_messages = {} |
1188 | +_seen_ids = {} |
1189 | |
1190 | |
1191 | # Protocol __call__() methods run in threads, so we need to serialize |
1192 | @@ -179,11 +180,10 @@ |
1193 | # The column value is a list of lists (see gwibber/utils/model.py for |
1194 | # details), and because the arguments are themselves a list, this gets |
1195 | # initialized as a triply-nested list. |
1196 | - args = [[[ |
1197 | - self.__class__.__name__.lower(), |
1198 | - self._account.id, |
1199 | - message_id |
1200 | - ]]] |
1201 | + triple = [self.__class__.__name__.lower(), |
1202 | + self._account.id, |
1203 | + message_id] |
1204 | + args = [[triple]] |
1205 | # Now iterate through all the column names listed in the SCHEMA, |
1206 | # except for the first, since we just composed its value in the |
1207 | # preceding line. Pop matching column values from the kwargs, in the |
1208 | @@ -218,9 +218,34 @@ |
1209 | # the model gets updated, we need to insert into the row, thus |
1210 | # it's best to concatenate the two lists together and store it |
1211 | # back into the column. |
1212 | - row[IDS_IDX] = row[IDS_IDX] + args[IDS_IDX] |
1213 | + if triple not in row[IDS_IDX]: |
1214 | + row[IDS_IDX] = row[IDS_IDX] + args[IDS_IDX] |
1215 | + |
1216 | + _seen_ids[tuple(triple)] = _seen_messages.get(key) |
1217 | return key in _seen_messages |
1218 | |
1219 | + def _unpublish(self, message_id): |
1220 | + """Remove message_id from the Dee.SharedModel.""" |
1221 | + triple = [self.__class__.__name__.lower(), |
1222 | + self._account.id, |
1223 | + message_id] |
1224 | + |
1225 | + row_iter = _seen_ids.pop(tuple(triple), None) |
1226 | + if row_iter is None: |
1227 | + log.error('Tried to delete an invalid message id.') |
1228 | + return |
1229 | + |
1230 | + row = Model.get_row(row_iter) |
1231 | + if len(row[IDS_IDX]) == 1: |
1232 | + # Message only exists on one protocol, delete it |
1233 | + del _seen_messages[_make_key(row)] |
1234 | + Model.remove(row_iter) |
1235 | + else: |
1236 | + # Message exists on other protocols too, only drop id |
1237 | + row[IDS_IDX] = [ids for ids |
1238 | + in row[IDS_IDX] |
1239 | + if message_id not in ids] |
1240 | + |
1241 | def _get_access_token(self): |
1242 | """Return an access token, logging in if necessary.""" |
1243 | if self._account.access_token is None: |
1244 | |
1245 | === modified file 'gwibber/gwibber/utils/download.py' |
1246 | --- gwibber/gwibber/utils/download.py 2012-09-14 20:29:11 +0000 |
1247 | +++ gwibber/gwibber/utils/download.py 2012-10-04 20:29:21 +0000 |
1248 | @@ -20,6 +20,7 @@ |
1249 | ] |
1250 | |
1251 | |
1252 | +import time |
1253 | import json |
1254 | import logging |
1255 | |
1256 | @@ -31,12 +32,18 @@ |
1257 | log = logging.getLogger('gwibber.service') |
1258 | |
1259 | |
1260 | +# This stores some information about the current rate limits imposed |
1261 | +# upon us, and persists that data between instances of the Downloader. |
1262 | +_rate_limits = {} |
1263 | + |
1264 | + |
1265 | class Downloader: |
1266 | """Convenient downloading wrapper.""" |
1267 | |
1268 | def __init__(self, url, params=None, post=False, |
1269 | username=None, password=None, |
1270 | headers=None): |
1271 | + self.base_url = url.split('?')[0] |
1272 | self.url = url |
1273 | self.params = params |
1274 | self.post = post |
1275 | @@ -47,11 +54,19 @@ |
1276 | |
1277 | def _download(self): |
1278 | """Return the results as a decoded unicode string.""" |
1279 | + # Downloads will be happening in threads, so this won't |
1280 | + # block the whole app ;-) |
1281 | + time.sleep(_rate_limits.get(self.base_url, 0)) |
1282 | + |
1283 | data = None |
1284 | url = self.url |
1285 | headers = self.headers.copy() |
1286 | if self.params is not None: |
1287 | - params = urlencode(self.params) |
1288 | + # urlencode() does not have an option to use quote() |
1289 | + # instead of quote_plus(), but Twitter requires |
1290 | + # percent-encoded spaces, and this is harmless to any |
1291 | + # other protocol. |
1292 | + params = urlencode(self.params).replace('+', '%20') |
1293 | if self.post: |
1294 | data = params.encode('utf-8') |
1295 | headers['Content-Type'] = ( |
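The `replace('+', '%20')` workaround above can be sanity-checked in isolation. It is safe because `quote_plus()` only ever emits a literal `+` for a space (a real `+` in the data is already escaped to `%2B`); newer Pythons (3.5+) also accept a `quote_via` argument to `urlencode()`, but the replace works everywhere:

```python
from urllib.parse import urlencode

params = {'status': 'hello world', 'q': 'C++'}
encoded = urlencode(params).replace('+', '%20')
# Spaces come out percent-encoded, while the real plus signs in
# 'C++' were already escaped to %2B by quote_plus().
print(encoded)   # status=hello%20world&q=C%2B%2B
```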
1296 | @@ -89,6 +104,24 @@ |
1297 | with urlopen(request) as result: |
1298 | payload = result.read() |
1299 | info = result.info() |
1300 | + |
1301 | + rate_reset = info.get('X-Rate-Limit-Reset') # in UTC epoch seconds |
1302 | + rate_count = info.get('X-Rate-Limit-Remaining') |
1303 | + if None not in (rate_reset, rate_count): |
1304 | + rate_reset = int(rate_reset) |
1305 | + rate_count = int(rate_count) |
1306 | + rate_delta = rate_reset - time.time() |
1307 | + if rate_count > 5: |
1308 | + # Ehhh, let's not impede the user if they have more than 5 |
1309 | + # requests remaining in this window. |
1310 | + _rate_limits[self.base_url] = 0 |
1311 | + elif rate_count <= 1: |
1312 | + # Avoid division by zero... wait the full length of time! |
1313 | + _rate_limits[self.base_url] = rate_delta |
1314 | + else: |
1315 | + wait_secs = rate_delta / rate_count |
1316 | + _rate_limits[self.base_url] = wait_secs |
1317 | + |
1318 | # RFC 4627 $3. JSON text SHALL be encoded in Unicode. The default |
1319 | # encoding is UTF-8. Since the first two characters of a JSON text |
1320 | # will always be ASCII characters [RFC0020], it is possible to |
1321 | |
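The throttling arithmetic in the hunk above spreads the remaining requests evenly across the rate-limit window. A standalone sketch of just that calculation (`compute_wait` is a hypothetical helper for illustration, not part of the Downloader API):

```python
import time

def compute_wait(rate_reset, rate_count, now=None):
    """Return how many seconds to sleep before the next request.

    rate_reset is the window reset time in UTC epoch seconds
    (X-Rate-Limit-Reset); rate_count is the number of requests left
    (X-Rate-Limit-Remaining).
    """
    now = time.time() if now is None else now
    rate_delta = rate_reset - now
    if rate_count > 5:
        # Plenty of requests left; don't impede the user.
        return 0
    elif rate_count <= 1:
        # Avoid division by zero: wait out the whole window.
        return rate_delta
    # Spread the remaining requests evenly over the window.
    return rate_delta / rate_count

# Four requests left and 120 seconds until the window resets:
# space the calls 30 seconds apart.
print(compute_wait(1000 + 120, 4, now=1000))   # 30.0
```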
1322 | === removed directory 'gwibber/microblog/plugins/twitter' |
1323 | === removed file 'gwibber/microblog/plugins/twitter/__init__.py' |
1324 | --- gwibber/microblog/plugins/twitter/__init__.py 2012-06-15 13:53:08 +0000 |
1325 | +++ gwibber/microblog/plugins/twitter/__init__.py 1970-01-01 00:00:00 +0000 |
1326 | @@ -1,819 +0,0 @@ |
1327 | -import cgi |
1328 | -from gettext import lgettext as _ |
1329 | -from oauth import oauth |
1330 | - |
1331 | -from gwibber.microblog import network, util |
1332 | -from gwibber.microblog.util import resources |
1333 | -from gwibber.microblog.util.time import parsetime |
1334 | -from gwibber.microblog.util.auth import Authentication |
1335 | - |
1336 | -import logging |
1337 | -logger = logging.getLogger("Twitter") |
1338 | -logger.debug("Initializing.") |
1339 | - |
1340 | -PROTOCOL_INFO = { |
1341 | - "name": "Twitter", |
1342 | - "version": "1.0", |
1343 | - |
1344 | - "config": [ |
1345 | - "private:secret_token", |
1346 | - "access_token", |
1347 | - "username", |
1348 | - "color", |
1349 | - "receive_enabled", |
1350 | - "send_enabled", |
1351 | - ], |
1352 | - |
1353 | - "authtype": "oauth1a", |
1354 | - "color": "#729FCF", |
1355 | - |
1356 | - "features": [ |
1357 | - "send", |
1358 | - "receive", |
1359 | - "search", |
1360 | - "tag", |
1361 | - "reply", |
1362 | - "responses", |
1363 | - "private", |
1364 | - "public", |
1365 | - "delete", |
1366 | - "follow", |
1367 | - "unfollow", |
1368 | - "profile", |
1369 | - "retweet", |
1370 | - "like", |
1371 | - "send_thread", |
1372 | - "send_private", |
1373 | - "user_messages", |
1374 | - "sinceid", |
1375 | - "lists", |
1376 | - "list", |
1377 | - ], |
1378 | - |
1379 | - "default_streams": [ |
1380 | - "receive", |
1381 | - "images", |
1382 | - "responses", |
1383 | - "private", |
1384 | - "lists", |
1385 | - ], |
1386 | -} |
1387 | - |
1388 | -URL_PREFIX = "https://twitter.com" |
1389 | -API_PREFIX = "https://api.twitter.com/1" |
1390 | - |
1391 | -class Client (): |
1392 | - """Queries Twitter and converts the data. |
1393 | - |
1394 | - The Client class is responsible for querying Twitter and turning the data obtained |
1395 | - into data that Gwibber can understand. Twitter uses a version of OAuth for security. |
1396 | - |
1397 | - Tokens have already been obtained when the account was set up in Gwibber and are used to |
1398 | - authenticate when getting data. |
1399 | - |
1400 | - """ |
1401 | - def __init__(self, acct): |
1402 | - self.service = util.getbus("Service") |
1403 | - self.account = acct |
1404 | - self._loop = None |
1405 | - |
1406 | - self.sigmethod = oauth.OAuthSignatureMethod_HMAC_SHA1() |
1407 | - self.token = None |
1408 | - parameters = self.account["auth"]["parameters"] |
1409 | - self.consumer = oauth.OAuthConsumer(parameters["ConsumerKey"], |
1410 | - parameters["ConsumerSecret"]) |
1411 | - |
1412 | - def _login(self): |
1413 | - old_token = self.account.get("secret_token", None) |
1414 | - with self.account.login_lock: |
1415 | - # Perform the login only if it wasn't already performed by another thread |
1416 | - # while we were waiting for the lock |
1417 | - if self.account.get("secret_token", None) == old_token: |
1418 | - self._locked_login(old_token) |
1419 | - |
1420 | - self.token = oauth.OAuthToken(self.account["access_token"], |
1421 | - self.account["secret_token"]) |
1422 | - return "access_token" in self.account and \ |
1423 | - self.account["access_token"] != old_token |
1424 | - |
1425 | - def _locked_login(self, old_token): |
1426 | - logger.debug("Re-authenticating" if old_token else "Logging in") |
1427 | - |
1428 | - auth = Authentication(self.account, logger) |
1429 | - reply = auth.login() |
1430 | - if reply and reply.has_key("AccessToken"): |
1431 | - self.account["access_token"] = reply["AccessToken"] |
1432 | - self.account["secret_token"] = reply["TokenSecret"] |
1433 | - self.account["uid"] = reply["UserId"] |
1434 | - self.account["username"] = reply["ScreenName"] |
1435 | - logger.debug("User id is: %s, name is %s" % (self.account["uid"], |
1436 | - self.account["username"])) |
1437 | - else: |
1438 | - logger.error("Didn't find token in session: %s", (reply,)) |
1439 | - |
1440 | - def _common(self, data): |
1441 | - """Parses messages into Gwibber compatible forms. |
1442 | - |
1443 | - This function is common to all tweet types |
1444 | - and includes parsing of user mentions, hashtags, |
1445 | - urls, images and videos. |
1446 | - |
1447 | - Arguments: |
1448 | - data -- A data object obtained from Twitter containing a complete tweet |
1449 | - |
1450 | - Returns: |
1451 | - m -- A data object compatible with inserting into the Gwibber database for that tweet |
1452 | - |
1453 | - """ |
1454 | - m = {} |
1455 | - try: |
1456 | - m["mid"] = str(data["id"]) |
1457 | - m["service"] = "twitter" |
1458 | - m["account"] = self.account["id"] |
1459 | - if data.has_key("created_at"): |
1460 | - m["time"] = parsetime(data["created_at"]) |
1461 | - m["text"] = util.unescape(data["text"]) |
1462 | - m["text"] = cgi.escape(m["text"]) |
1463 | - m["content"] = m["text"] |
1464 | - |
1465 | - # Go through the entities in the tweet and use them to linkify/filter tweets as appropriate |
1466 | - if data.has_key("entities"): |
1467 | - |
1468 | - #Get mention entries |
1469 | - if data["entities"].has_key("user_mentions"): |
1470 | - names = [] |
1471 | - for mention in data["entities"]["user_mentions"]: |
1472 | - if not mention["screen_name"] in names: |
1473 | - try: |
1474 | - m["content"] = m["content"].replace("@" + mention["screen_name"], "@<a href='gwibber:/user?acct=" + m["account"] + "&name=@" + mention["screen_name"] + "'>" + mention["screen_name"] + "</a>") |
1475 | - except: |
1476 | - pass |
1477 | - names.append(mention["screen_name"]) |
1478 | - |
1479 | - #Get hashtag entities |
1480 | - if data["entities"].has_key("hashtags"): |
1481 | - hashtags = [] |
1482 | - for tag in data["entities"]["hashtags"]: |
1483 | - if not tag["text"] in hashtags: |
1484 | - try: |
1485 | - m["content"] = m["content"].replace("#" + tag["text"], "#<a href='gwibber:/tag?acct=" + m["account"] + "&query=#" + tag["text"] + "'>" + tag["text"] + "</a>") |
1486 | - except: |
1487 | - pass |
1488 | - hashtags.append(tag["text"]) |
1489 | - |
1490 | - # Get url entities - These usually go in the link stream, but if they're pictures or videos, they should go in the proper stream |
1491 | - if data["entities"].has_key("urls"): |
1492 | - for urls in data["entities"]["urls"]: |
1493 | - url = cgi.escape (urls["url"]) |
1494 | - expanded_url = url |
1495 | - if urls.has_key("expanded_url"): |
1496 | - if not urls["expanded_url"] is None: |
1497 | - expanded_url = cgi.escape(urls["expanded_url"]) |
1498 | - |
1499 | - display_url = url |
1500 | - if urls.has_key("display_url"): |
1501 | - display_url = cgi.escape (urls["display_url"]) |
1502 | - |
1503 | - if url == m["content"]: |
1504 | - m["content"] = "<a href='" + url + "' title='" + expanded_url + "'>" + display_url + "</a>" |
1505 | - else: |
1506 | - try: |
1507 | - startindex = m["content"].index(url) |
1508 | - endindex = startindex + len(url) |
1509 | - start = m["content"][0:startindex] |
1510 | - end = m["content"][endindex:] |
1511 | - m["content"] = start + "<a href='" + url + "' title='" + expanded_url + "'>" + display_url + "</a>" + end |
1512 | - except: |
1513 | - logger.debug ("Failed to set url for ID: %s", m["mid"]) |
1514 | - |
1515 | - m["type"] = "link" |
1516 | - |
1517 | - images = util.imgpreview(expanded_url) |
1518 | - videos = util.videopreview(expanded_url) |
1519 | - if images: |
1520 | - m["images"] = images |
1521 | - m["type"] = "photo" |
1522 | - elif videos: |
1523 | - m["images"] = videos |
1524 | - m["type"] = "video" |
1525 | - else: |
1526 | - # Well, it's not anything else, so it must be a link |
1527 | - m["link"] = {} |
1528 | - m["link"]["picture"] = "" |
1529 | - m["link"]["name"] = "" |
1530 | - m["link"]["description"] = m["content"] |
1531 | - m["link"]["url"] = url |
1532 | - m["link"]["icon"] = "" |
1533 | - m["link"]["caption"] = "" |
1534 | - m["link"]["properties"] = {} |
1535 | - |
1536 | - if data["entities"].has_key("media"): |
1537 | - for media in data["entities"]["media"]: |
1538 | - try: |
1539 | - url = cgi.escape (media["url"]) |
1540 | - media_url_https = media["media_url_https"] |
1541 | - expanded_url = url |
1542 | - if media.has_key("expanded_url"): |
1543 | - expanded_url = cgi.escape(media["expanded_url"]) |
1544 | - |
1545 | - display_url = url |
1546 | - if media.has_key("display_url"): |
1547 | - display_url = cgi.escape (media["display_url"]) |
1548 | - |
1549 | - startindex = m["content"].index(url) |
1550 | - endindex = startindex + len(url) |
1551 | - start = m["content"][0:startindex] |
1552 | - end = m["content"][endindex:] |
1553 | - m["content"] = start + "<a href='" + url + "' title='" + expanded_url + "'>" + display_url + "</a>" + end |
1554 | - |
1555 | - if media["type"] == "photo": |
1556 | - m["type"] = "photo" |
1557 | - m["photo"] = {} |
1558 | - m["photo"]["picture"] = media_url_https |
1559 | - m["photo"]["url"] = None |
1560 | - m["photo"]["name"] = None |
1561 | - |
1562 | - except: |
1563 | - pass |
1564 | - |
1565 | - else: |
1566 | - m["content"] = util.linkify(util.unescape(m["text"]), |
1567 | - ((util.PARSE_HASH, '#<a href="gwibber:/tag?acct=%s&query=\\1">\\1</a>' % m["account"]), |
1568 | - (util.PARSE_NICK, '@<a href="gwibber:/user?acct=%s&name=\\1">\\1</a>' % m["account"])), escape=True) |
1569 | - |
1570 | - m["html"] = m["content"] |
1571 | - |
1572 | - m["to_me"] = ("@%s" % self.account["username"]) in data["text"] # Check if it's a reply directed at the user |
1573 | - m["favorited"] = data.get("favorited", False) # Check if the tweet has been favourited |
1574 | - |
1575 | - except: |
1576 | - logger.error("%s failure - %s", PROTOCOL_INFO["name"], data) |
1577 | - return {} |
1578 | - |
1579 | - return m |
1580 | - |
1581 | - def _user(self, user): |
1582 | - """Parses the user portion of a tweet. |
1583 | - |
1584 | - Arguments: |
1585 | - user -- A user object from a tweet |
1586 | - |
1587 | - Returns: |
1588 | - A user object in a format compatible with Gwibber's database. |
1589 | - |
1590 | - """ |
1591 | - return { |
1592 | - "name": user.get("name", None), |
1593 | - "nick": user.get("screen_name", None), |
1594 | - "id": user.get("id", None), |
1595 | - "location": user.get("location", None), |
1596 | - "followers": user.get("followers_count", None), |
1597 | - "friends": user.get("friends_count", None), |
1598 | - "description": user.get("description", None), |
1599 | - "following": user.get("following", None), |
1600 | - "protected": user.get("protected", None), |
1601 | - "statuses": user.get("statuses_count", None), |
1602 | - "image": user.get("profile_image_url", None), |
1603 | - "website": user.get("url", None), |
1604 | - "url": "/".join((URL_PREFIX, user.get("screen_name", ""))) or None, |
1605 | - "is_me": user.get("screen_name", None) == self.account["username"], |
1606 | - } |
1607 | - |
1608 | - def _message(self, data): |
1609 | - """Parses messages into gwibber compatible forms. |
1610 | - |
1611 | - This is the initial function for tweet parsing and parses |
1612 | - retweeted status (the shared-by portion), source (the program |
1613 | - the tweet was tweeted from) and reply details (the in-reply-to |
1614 | - portion). It sends the rest to _common() for further parsing. |
1615 | - |
1616 | - Arguments: |
1617 | - data -- A data object obtained from Twitter containing a complete tweet |
1618 | - |
1619 | - Returns: |
1620 | - m -- A data object compatible with inserting into the Gwibber database for that tweet |
1621 | - |
1622 | - """ |
1623 | - if type(data) != dict: |
1624 | - logger.error("Cannot parse message data: %s", str(data)) |
1625 | - return {} |
1626 | - |
1627 | - n = {} |
1628 | - if data.has_key("retweeted_status"): |
1629 | - n["retweeted_by"] = self._user(data["user"] if "user" in data else data["sender"]) |
1630 | - if data.has_key("created_at"): |
1631 | - n["time"] = parsetime(data["created_at"]) |
1632 | - data = data["retweeted_status"] |
1633 | - else: |
1634 | - n["retweeted_by"] = None |
1635 | - if data.has_key("created_at"): |
1636 | - n["time"] = parsetime(data["created_at"]) |
1637 | - |
1638 | - m = self._common(data) |
1639 | - for k in n: |
1640 | - m[k] = n[k] |
1641 | - |
1642 | - m["source"] = data.get("source", False) |
1643 | - |
1644 | - if data.has_key("in_reply_to_status_id"): |
1645 | - if data["in_reply_to_status_id"]: |
1646 | - m["reply"] = {} |
1647 | - m["reply"]["id"] = data["in_reply_to_status_id"] |
1648 | - m["reply"]["nick"] = data["in_reply_to_screen_name"] |
1649 | - if m["reply"]["id"] and m["reply"]["nick"]: |
1650 | - m["reply"]["url"] = "/".join((URL_PREFIX, m["reply"]["nick"], "statuses", str(m["reply"]["id"]))) |
1651 | - else: |
1652 | - m["reply"]["url"] = None |
1653 | - |
1654 | - m["sender"] = self._user(data["user"] if "user" in data else data["sender"]) |
1655 | - m["url"] = "/".join((m["sender"]["url"], "statuses", str(m.get("mid", None)))) |
1656 | - |
1657 | - return m |
1658 | - |
1659 | - def _responses(self, data): |
1660 | - """Sets the message type if the message should be in the replies stream. |
1661 | - |
1662 | - It sends the rest to _message() for further parsing. |
1663 | - |
1664 | - Arguments: |
1665 | - data -- A data object obtained from Twitter containing a complete tweet |
1666 | - |
1667 | - Returns: |
1668 | - m -- A data object compatible with inserting into the Gwibber database for that tweet |
1669 | - |
1670 | - """ |
1671 | - m = self._message(data) |
1672 | - m["type"] = None |
1673 | - |
1674 | - return m |
1675 | - |
1676 | - def _private(self, data): |
1677 | - """Sets the message type and privacy. |
1678 | - |
1679 | - Sets the message type and privacy if the message should be in the private stream. |
1680 | - Also parses the recipient as both sent & received messages can be in the private stream. |
1681 | - It sends the rest to _message() for further parsing |
1682 | - |
1683 | - Arguments: |
1684 | - data -- A data object obtained from Twitter containing a complete tweet |
1685 | - |
1686 | - Returns: |
1687 | - m -- A data object compatible with inserting into the Gwibber database for that tweet |
1688 | - |
1689 | - """ |
1690 | - m = self._message(data) |
1691 | - m["private"] = True |
1692 | - m["type"] = None |
1693 | - |
1694 | - m["recipient"] = {} |
1695 | - m["recipient"]["name"] = data["recipient"]["name"] |
1696 | - m["recipient"]["nick"] = data["recipient"]["screen_name"] |
1697 | - m["recipient"]["id"] = data["recipient"]["id"] |
1698 | - m["recipient"]["image"] = data["recipient"]["profile_image_url"] |
1699 | - m["recipient"]["location"] = data["recipient"]["location"] |
1700 | - m["recipient"]["url"] = "/".join((URL_PREFIX, m["recipient"]["nick"])) |
1701 | - m["recipient"]["is_me"] = m["recipient"]["nick"] == self.account["username"] |
1702 | - m["to_me"] = m["recipient"]["is_me"] |
1703 | - |
1704 | - return m |
1705 | - |
1706 | - def _result(self, data): |
1707 | - """Called when a search is done in Gwibber. |
1708 | - |
1709 | - Parses the sender and sends the rest to _common() |
1710 | - for further parsing. |
1711 | - |
1712 | - Arguments: |
1713 | - data -- A data object obtained from Twitter containing a complete tweet |
1714 | - |
1715 | - Returns: |
1716 | - m -- A data object compatible with inserting into the Gwibber database for that tweet |
1717 | - |
1718 | - """ |
1719 | - m = self._common(data) |
1720 | - |
1721 | - if data["to_user_id"]: |
1722 | - m["reply"] = {} |
1723 | - m["reply"]["id"] = data["to_user_id"] |
1724 | - m["reply"]["nick"] = data["to_user"] |
1725 | - |
1726 | - m["sender"] = {} |
1727 | - m["sender"]["nick"] = data["from_user"] |
1728 | - m["sender"]["id"] = data["from_user_id"] |
1729 | - m["sender"]["image"] = data["profile_image_url"] |
1730 | - m["sender"]["url"] = "/".join((URL_PREFIX, m["sender"]["nick"])) |
1731 | - m["sender"]["is_me"] = m["sender"]["nick"] == self.account["username"] |
1732 | - m["url"] = "/".join((m["sender"]["url"], "statuses", str(m["mid"]))) |
1733 | - return m |
1734 | - |
1735 | - def _profile(self, data): |
1736 | - """Called when a user is clicked on. |
1737 | - |
1738 | - Args: |
1739 | - data -- A data object obtained from Twitter containing a complete user |
1740 | - |
1741 | - Returns: |
1742 | - A data object compatible with inserting into the Gwibber database for that user. |
1743 | - |
1744 | - """ |
1745 | - if "error" in data: |
1746 | - return { |
1747 | - "error": data["error"] |
1748 | - } |
1749 | - return { |
1750 | - "name": data.get("name", data["screen_name"]), |
1751 | - "service": "twitter", |
1752 | - "stream": "profile", |
1753 | - "account": self.account["id"], |
1754 | - "mid": data["id"], |
1755 | - "text": data.get("description", ""), |
1756 | - "nick": data["screen_name"], |
1757 | - "url": data.get("url", ""), |
1758 | - "protected": data.get("protected", False), |
1759 | - "statuses": data.get("statuses_count", 0), |
1760 | - "followers": data.get("followers_count", 0), |
1761 | - "friends": data.get("friends_count", 0), |
1762 | - "following": data.get("following", 0), |
1763 | - "favourites": data.get("favourites_count", 0), |
1764 | - "image": data["profile_image_url"], |
1765 | - "utc_offset": data.get("utc_offset", 0), |
1766 | - "id": data["id"], |
1767 | - "lang": data.get("lang", "en"), |
1768 | - "verified": data.get("verified", False), |
1769 | - "geo_enabled": data.get("geo_enabled", False), |
1770 | - "time_zone": data.get("time_zone", "") |
1771 | - } |
1772 | - |
1773 | - def _list(self, data): |
1774 | - """Called when a list is clicked on. |
1775 | - |
1776 | - Args: |
1777 | - data -- A data object obtained from Twitter containing a complete list |
1778 | - |
1779 | - Returns: |
1780 | - A data object compatible with inserting into the Gwibber database for that list. |
1781 | - |
1782 | - """ |
1783 | - return { |
1784 | - "mid": data["id"], |
1785 | - "service": "twitter", |
1786 | - "account": self.account["id"], |
1787 | - "time": 0, |
1788 | - "text": data["description"], |
1789 | - "html": data["description"], |
1790 | - "content": data["description"], |
1791 | - "url": "/".join((URL_PREFIX, data["uri"])), |
1792 | - "sender": self._user(data["user"]), |
1793 | - "name": data["name"], |
1794 | - "nick": data["slug"], |
1795 | - "key": data["slug"], |
1796 | - "full": data["full_name"], |
1797 | - "uri": data["uri"], |
1798 | - "mode": data["mode"], |
1799 | - "members": data["member_count"], |
1800 | - "followers": data["subscriber_count"], |
1801 | - "kind": "list", |
1802 | - } |
1803 | - |
1804 | - def _get(self, path, parse="message", post=False, single=False, **args): |
1805 | - """Establishes a connection with Twitter and gets the data requested. |
1806 | - |
1807 | - Requires authentication. |
1808 | - |
1809 | - Arguments: |
1810 | - path -- The end of the url to look up on Twitter |
1811 | - parse -- The function to use to parse the data returned (message by default) |
1812 | - post -- True if using POST, for example the send operation. False if using GET, most operations other than send. (False by default) |
1813 | - single -- True if a single message is requested, False if multiple (False by default) |
1814 | - **args -- Arguments to be added to the URL when accessed. |
1815 | - |
1816 | - Returns: |
1817 | - A list of Gwibber compatible objects which have been parsed by the parse function. |
1818 | - |
1819 | - """ |
1820 | - if not self.token and not self._login(): |
1821 | - logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Authentication failed"), "Auth needs updating") |
1822 | - logger.error("%s", logstr) |
1823 | - return [{"error": {"type": "auth", "account": self.account, "message": _("Authentication failed, please re-authorize")}}] |
1824 | - |
1825 | - url = "/".join((API_PREFIX, path)) |
1826 | - |
1827 | - request = oauth.OAuthRequest.from_consumer_and_token(self.consumer, self.token, |
1828 | - http_method=post and "POST" or "GET", http_url=url, parameters=util.compact(args)) |
1829 | - request.sign_request(self.sigmethod, self.consumer, self.token) |
1830 | - |
1831 | - if post: |
1832 | - headers = request.to_header() |
1833 | - data = network.Download(url, util.compact(args), post, header=headers).get_json() |
1834 | - else: |
1835 | - data = network.Download(request.to_url(), None, post).get_json() |
1836 | - resources.dump(self.account["service"], self.account["id"], data) |
1837 | - |
1838 | - if isinstance(data, dict) and data.get("errors", 0): |
1839 | - if "authenticate" in data["errors"][0]["message"]: |
1840 | - # Try again, if we get a new token |
1841 | - if self._login(): |
1842 | - logger.debug("Authentication error, logging in again") |
1843 | - return self._get(path, parse, post, single, args) |
1844 | - else: |
1845 | - logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Authentication failed"), data["errors"][0]["message"]) |
1846 | - logger.error("%s", logstr) |
1847 | - return [{"error": {"type": "auth", "account": self.account, "message": data["errors"][0]["message"]}}] |
1848 | - else: |
1849 | - for error in data["errors"]: |
1850 | - logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Unknown failure"), error["message"]) |
1851 | - return [{"error": {"type": "unknown", "account": self.account, "message": error["message"]}}] |
1852 | - elif isinstance(data, dict) and data.get("error", 0): |
1853 | - if "Incorrect signature" in data["error"]: |
1854 | - logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Request failed"), data["error"]) |
1855 | - logger.error("%s", logstr) |
1856 | - return [{"error": {"type": "auth", "account": self.account, "message": data["error"]}}] |
1857 | - elif isinstance(data, str): |
1858 | - logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("Request failed"), data) |
1859 | - logger.error("%s", logstr) |
1860 | - return [{"error": {"type": "request", "account": self.account, "message": data}}] |
1861 | - |
1862 | - if parse == "follow" or parse == "unfollow": |
1863 | - if isinstance(data, dict) and data.get("error", 0): |
1864 | - logstr = """%s: %s - %s""" % (PROTOCOL_INFO["name"], _("%s failed" % parse), data["error"]) |
1865 | - logger.error("%s", logstr) |
1866 | - return [{"error": {"type": "auth", "account": self.account, "message": data["error"]}}] |
1867 | - else: |
1868 | - return [["friendships", {"type": parse, "account": self.account["id"], "service": self.account["service"],"user_id": data["id"], "nick": data["screen_name"]}]] |
1869 | - |
1870 | - if parse == "profile" and isinstance(data, dict): |
1871 | - return self._profile(data) |
1872 | - |
1873 | - if parse == "list": |
1874 | - return [self._list(l) for l in data["lists"]] |
1875 | - |
1876 | - if single: return [getattr(self, "_%s" % parse)(data)] |
1877 | - if parse: return [getattr(self, "_%s" % parse)(m) for m in data] |
1878 | - else: return [] |
1879 | - |
1880 | - def _search(self, **args): |
1881 | - """Establishes a connection with Twitter and gets the results of a search. |
1882 | - |
1883 | - Does not require authentication |
1884 | - |
1885 | - Arguments: |
1886 | - **args -- The search terms |
1887 | - |
1888 | - Returns: |
1889 | - A list of Gwibber compatible objects which have been parsed by _result(). |
1890 | - |
1891 | - """ |
1892 | - data = network.Download("http://search.twitter.com/search.json", util.compact(args)) |
1893 | - data = data.get_json()["results"] |
1894 | - |
1895 | - if type(data) != list: |
1896 | - logger.error("Cannot parse search data: %s", str(data)) |
1897 | - return [] |
1898 | - |
1899 | - return [self._result(m) for m in data] |
1900 | - |
1901 | - def __call__(self, opname, **args): |
1902 | - return getattr(self, opname)(**args) |
1903 | - |
1904 | - def receive(self, count=util.COUNT, since=None): |
1905 | - """Gets the latest tweets and adds them to the database. |
1906 | - |
1907 | - Arguments: |
1908 | - count -- Number of updates to get |
1909 | - since -- Time to get updates since |
1910 | - |
1911 | - Returns: |
1912 | - A list of Gwibber compatible objects which have been parsed by _message(). |
1913 | - |
1914 | - """ |
1915 | - return self._get("statuses/home_timeline.json", include_entities=1, count=count, since_id=since) |
1916 | - |
1917 | - def responses(self, count=util.COUNT, since=None): |
1918 | - """Gets the latest replies and adds them to the database. |
1919 | - |
1920 | - Arguments: |
1921 | - count -- Number of updates to get |
1922 | - since -- Time to get updates since |
1923 | - |
1924 | - Returns: |
1925 | - A list of Gwibber compatible objects which have been parsed by _responses(). |
1926 | - |
1927 | - """ |
1928 | - return self._get("statuses/mentions.json", "responses", include_entities=1, count=count, since_id=since) |
1929 | - |
1930 | - def private(self, count=util.COUNT, since=None): |
1931 | - """Gets the latest direct messages sent and received and adds them to the database. |
1932 | - |
1933 | - Args: |
1934 | - count -- Number of updates to get |
1935 | - since -- Time to get updates since |
1936 | - |
1937 | - Returns: |
1938 | - A list of Gwibber compatible objects which have been parsed by _private(). |
1939 | - |
1940 | - """ |
1941 | - private = self._get("direct_messages.json", "private", include_entities=1, count=count, since_id=since) or [] |
1942 | - private_sent = self._get("direct_messages/sent.json", "private", count=count, since_id=since) or [] |
1943 | - return private + private_sent |
1944 | - |
1945 | - def public(self): |
1946 | - """Gets the latest tweets from the public timeline and adds them to the database. |
1947 | - |
1948 | - Arguments: |
1949 | - None |
1950 | - |
1951 | - Returns: |
1952 | - A list of Gwibber compatible objects which have been parsed by _message(). |
1953 | - |
1954 | - """ |
1955 | - return self._get("statuses/public_timeline.json", include_entities=1) |
1956 | - |
1957 | - def lists(self, **args): |
1958 | - """Gets subscribed lists and adds them to the database. |
1959 | - |
1960 | - Arguments: |
1961 | - None |
1962 | - |
1963 | - Returns: |
1964 | - A list of Gwibber compatible objects which have been parsed by _list(). |
1965 | - |
1966 | - """ |
1967 | - if not "username" in self.account: |
1968 | - self._login() |
1969 | - following = self._get("%s/lists/subscriptions.json" % self.account["username"], "list") or [] |
1970 | - lists = self._get("%s/lists.json" % self.account["username"], "list") or [] |
1971 | - return following + lists |
1972 | - |
1973 | - def list(self, user, id, count=util.COUNT, since=None): |
1974 | - """Gets the latest tweets from subscribed lists and adds them to the database. |
1975 | - |
1976 | - Arguments: |
1977 | - user -- The name of the user whose lists are to be fetched |
1978 | - id -- The id of the user whose lists are to be fetched |
1979 | - count -- Number of updates to get |
-      since -- Time to get updates since
-
-    Returns:
-      A list of Gwibber compatible objects which have been parsed by _message().
-
-    """
-    return self._get("%s/lists/%s/statuses.json" % (user, id), include_entities=1, per_page=count, since_id=since)
-
-  def search(self, query, count=util.COUNT, since=None):
-    """Gets the latest results from a search and adds them to the database.
-
-    Arguments:
-      query -- The search query
-      count -- Number of updates to get
-      since -- Time to get updates since
-
-    Returns:
-      A list of Gwibber compatible objects which have been parsed by _search().
-
-    """
-    return self._search(include_entities=1, q=query, rpp=count, since_id=since)
-
-  def tag(self, query, count=util.COUNT, since=None):
-    """Gets the latest results from a hashtag search and adds them to the database.
-
-    Arguments:
-      query -- The search query (hashtag without the #)
-      count -- Number of updates to get
-      since -- Time to get updates since
-
-    Returns:
-      A list of Gwibber compatible objects which have been parsed by _search().
-
-    """
-    return self._search(q="#%s" % query, count=count, since_id=since)
-
-  def delete(self, message):
-    """Deletes a specified tweet from Twitter.
-
-    Arguments:
-      message -- A Gwibber compatible message object (from gwibber's database)
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("statuses/destroy/%s.json" % message["mid"], None, post=True, do=1)
-
-  def like(self, message):
-    """Favourites a specified tweet on Twitter.
-
-    Arguments:
-      message -- A Gwibber compatible message object (from Gwibber's database)
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("favorites/create/%s.json" % message["mid"], None, post=True, do=1)
-
-  def send(self, message):
-    """Sends a tweet to Twitter.
-
-    Arguments:
-      message -- The tweet's text
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("statuses/update.json", post=True, single=True,
-                     status=message)
-
-  def send_private(self, message, private):
-    """Sends a direct message to Twitter.
-
-    Arguments:
-      message -- The tweet's text
-      private -- A gwibber compatible user object (from gwibber's database)
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("direct_messages/new.json", "private", post=True, single=True,
-                     text=message, screen_name=private["sender"]["nick"])
-
-  def send_thread(self, message, target):
-    """Sends a reply to a user on Twitter.
-
-    Arguments:
-      message -- The tweet's text
-      target -- A Gwibber compatible user object (from Gwibber's database)
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("statuses/update.json", post=True, single=True,
-                     status=message, in_reply_to_status_id=target["mid"])
-
-  def retweet(self, message):
-    """Retweets a tweet.
-
-    Arguments:
-      message -- A Gwibber compatible message object (from gwibber's database)
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("statuses/retweet/%s.json" % message["mid"], None, post=True, do=1)
-
-  def follow(self, screen_name):
-    """Follows a user.
-
-    Arguments:
-      screen_name -- The screen name (@someone without the @) of the user to be followed
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("friendships/create.json", screen_name=screen_name, post=True, parse="follow")
-
-  def unfollow(self, screen_name):
-    """Unfollows a user.
-
-    Arguments:
-      screen_name -- The screen name (@someone without the @) of the user to be unfollowed
-
-    Returns:
-      Nothing
-
-    """
-    return self._get("friendships/destroy.json", screen_name=screen_name, post=True, parse="unfollow")
-
-  def profile(self, id=None, count=None, since=None):
-    """Gets a user's profile.
-
-    Arguments:
-      id -- The user's screen name
-      count -- Number of tweets to get
-      since -- Time to get tweets since
-
-    Returns:
-      A list of Gwibber compatible objects which have been parsed by _profile().
-
-    """
-    return self._get("users/show.json", screen_name=id, count=count, since_id=since, parse="profile")
-
-  def user_messages(self, id=None, count=util.COUNT, since=None):
-    """Gets a user's profile & timeline.
-
-    Arguments:
-      id -- The user's screen name
-      count -- Number of tweets to get
-      since -- Time to get tweets since
-
-    Returns:
-      A list of Gwibber compatible objects which have been parsed by _profile().
-
-    """
-    profiles = [self.profile(id)] or []
-    messages = self._get("statuses/user_timeline.json", id=id, include_entities=1, count=count, since_id=since) or []
-    return messages + profiles

=== added directory 'gwibber/tools'
=== added file 'gwibber/tools/debug_live.py'
--- gwibber/tools/debug_live.py 1970-01-01 00:00:00 +0000
+++ gwibber/tools/debug_live.py 2012-10-04 20:29:21 +0000
@@ -0,0 +1,67 @@
+#!/usr/bin/env python3
+
+"""Usage: ./tools/debug_live.py PROTOCOL OPERATION [OPTIONS]
+
+Where PROTOCOL is a protocol supported by Gwibber, such as 'twitter',
+OPERATION is an instance method defined in that protocol's class, and
+OPTIONS are whatever arguments you'd like to pass to that method (if
+any), such as message id's or a status message.
+
+Examples:
+
+./tools/debug_live.py twitter home
+./tools/debug_live.py twitter send 'Hello, world!'
+
+This tool is provided to aid with rapid feedback of changes made to
+the gwibber source tree, and as such is designed to be run from the
+same directory that contains 'setup.py'. It is not intended for use
+with an installed gwibber package.
+"""
+
+import sys
+
+sys.path.insert(0, '.')
+
+if len(sys.argv) < 3:
+    sys.exit(__doc__)
+
+protocol = sys.argv[1]
+args = sys.argv[2:]
+
+from gwibber.utils.account import AccountManager
+from gwibber.utils.model import Model
+from gwibber.utils.base import Base
+from gi.repository import GObject
+
+# Disable threading for easier testing.
+Base._SYNCHRONIZE = True
+
+def refresh(account):
+    print()
+    print('#' * 80)
+    print('Performing "{}" operation!'.format(args[0]))
+    print('#' * 80)
+
+    account.protocol(*args)
+    for row in Model:
+        print([col for col in row])
+    print()
+    print('ROWS: ', len(Model))
+    return True
+
+if __name__ == '__main__':
+
+    found = False
+    a = AccountManager(None)
+
+    for account_id, account in a._accounts.items():
+        if account_id.endswith(protocol):
+            found = True
+            refresh(account)
+            GObject.timeout_add_seconds(300, refresh, account)
+
+    if not found:
+        print('No {} account found in your Ubuntu Online Accounts!'.format(
+            protocol))
+    else:
+        GObject.MainLoop().run()

=== added file 'gwibber/tools/debug_slave.py'
--- gwibber/tools/debug_slave.py 1970-01-01 00:00:00 +0000
+++ gwibber/tools/debug_slave.py 2012-10-04 20:29:21 +0000
@@ -0,0 +1,22 @@
+#!/usr/bin/env python3
+
+from gi.repository import Dee
+from gi.repository import GObject
+
+
+class Slave:
+    def __init__(self):
+        model_name = 'com.Gwibber.Streams'
+        print('Joining model ' + model_name)
+        self.model = Dee.SharedModel.new(model_name)
+        self.model.connect('row-added', self.on_row_added)
+
+    def on_row_added(self, model, itr):
+        row = self.model.get_row(itr)
+        print(row)
+        print('ROWS: ', len(self.model))
+
+if __name__ == '__main__':
+
+    s = Slave()
+    GObject.MainLoop().run()
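The debug_live.py tool above turns its command-line arguments into a method call on a protocol instance: the first argument names the operation, and the rest are passed through as that method's arguments. The dispatch pattern can be sketched without any Gwibber dependencies; `FakeProtocol`, `dispatch`, and their methods below are hypothetical stand-ins for illustration, not Gwibber APIs:

```python
#!/usr/bin/env python3
"""Sketch of debug_live.py's name-based dispatch, with no Gwibber imports."""


class FakeProtocol:
    """Hypothetical stand-in for a Gwibber protocol class."""

    def home(self):
        return 'fetched home timeline'

    def send(self, status):
        return 'sent: {}'.format(status)


def dispatch(protocol, argv):
    """Treat argv like sys.argv[2:]: an operation name, then its arguments."""
    operation, *options = argv
    method = getattr(protocol, operation, None)
    if method is None:
        raise SystemExit('Unknown operation: {}'.format(operation))
    # Call the named instance method with the remaining string arguments.
    return method(*options)


if __name__ == '__main__':
    print(dispatch(FakeProtocol(), ['send', 'Hello, world!']))
```

Run directly, this prints `sent: Hello, world!`. debug_live.py does roughly the same thing, except the callable comes from a real account obtained via `AccountManager`, and the results land in the shared `Model` rather than a return value.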
On Oct 01, 2012, at 09:30 PM, Robert Bruce Park wrote:
>Twitter branch mostly done. Some TODOs remain. I am mp'ing a little bit
>prematurely because it's still the easiest way to see an overall diff ;-)
The branch is looking pretty good. I'm glad you were able to verify that the
OAuth signatures were working, at least for public tweets. I guess more
research will have to be done to figure out why private replies are broken.
Anyway, here's a review of the branch so far. In general, it looks pretty
good. There are a few error cases that need to be thought about, a few
additional tests that I think need to be written, and just a few minor style
nits here and there. But overall, it's looking fantastic and the small
problems should be easy to fix. I think this can be landed pretty quickly.
Detailed comments follow.
Do you intend to include the debug_twitter_*.py scripts in the branch? If so,
let's put them in a tools subdirectory (i.e. not in the gwibber Python
package). That way they'll be easier to omit from the Debian package.
Also...
=== added file 'gwibber/debug_twitter_live.py'
--- gwibber/debug_twitter_live.py 1970-01-01 00:00:00 +0000
+++ gwibber/debug_twitter_live.py 2012-10-02 14:33:58 +0000
> @@ -0,0 +1,31 @@
> +#!/usr/bin/env python3
> +
> +from gwibber.utils.account import AccountManager
> +from gwibber.utils.model import Model
> +from gi.repository import GObject
> +
> +def refresh (account):
> +  print ("######################## Receive ####################################")
This can probably be fit into 79 characters. :)
Also, no space between print and open paren. (PEP 8)
> +  account.protocol.user()
> +  account.protocol.delete('252823527978311680')
What does this number correspond to? Is it private information?
> +  for row in Model:
> +    print([col for col in row])
> +  print()
> +  print ("ROWS: ", len(Model))
extra space
> +  return True
> +
> +if __name__ == "__main__":
> +
> +  found = False
> +  a = AccountManager(None)
> +
> +  for account_id, account in a._accounts.items():
> +    if account_id.endswith('twitter'):
> +      found = True
> +      refresh(account)
> +      GObject.timeout_add_seconds(300, refresh, account)
> +
> +  if not found:
> +    print('No Twitter account found in your Ubuntu Online Accounts!')
> +  else:
> +    GObject.MainLoop().run()
=== added file 'gwibber/debug_twitter_slave.py'
--- gwibber/debug_twitter_slave.py 1970-01-01 00:00:00 +0000
+++ gwibber/debug_twitter_slave.py 2012-10-02 14:33:58 +0000
> @@ -0,0 +1,23 @@
> +#!/usr/bin/env python3
> +
> +from gi.repository import Dee
> +from gi.repository import GObject
> +
> +
> +class Slave:
> +  def __init__(self):
4 space indents (PEP 8).
> +    model_name = "com.Gwibber.Streams"
> +    print ("Joining model %s" % model_name)
extra space.
> +    self.model = Dee.SharedModel.new(model_name)
> +    self.model.connect("row-added", self.on_row_added)
> +
> +  def on_row_added (self, model, itr):
> +    row = self.model.get_row (itr)
> +    print (row)
> +    print ("ROWS: ", len(self.model))
extra spaces.
> +
> +if __name__ == "__main__":
> +
> + s =...