Merge lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin into lp:~txstatsd-dev/txstatsd/distinct-plugin

Proposed by Lucio Torre
Status: Merged
Merged at revision: 3
Proposed branch: lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin
Merge into: lp:~txstatsd-dev/txstatsd/distinct-plugin
Diff against target: 641 lines (+497/-14)
7 files modified
Makefile (+26/-0)
bin/redis.conf (+312/-0)
bin/start-redis.sh (+5/-0)
bin/stop-redis.sh (+3/-0)
distinctdb/distinctmetric.py (+62/-6)
distinctdb/tests/test_distinct.py (+80/-4)
twisted/plugins/distinctdbplugin.py (+9/-4)
To merge this branch: bzr merge lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin
Reviewers:
  Lucio Torre (community): Approve
  Sidnei da Silva: Approve
Review via email: mp+85962@code.launchpad.net

Commit message

Add Redis support for the distinct plugin.

Description of the change

Add Redis support for the distinct plugin.

Revision history for this message
Sidnei da Silva (sidnei) wrote :

Looks good.

review: Approve
Revision history for this message
Ubuntu One Server Tarmac Bot (ubuntuone-server-tarmac) wrote :

The attempt to merge lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin into lp:~txstatsd-dev/txstatsd/distinct-plugin failed. Below is the output from the failed tests.

./bin/start-database.sh
## Starting postgres in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1 ##
The files belonging to this database system will be owned by user "tarmac".
This user must also own the server process.

The database cluster will be initialized with locale C.
The default text search configuration will be set to "english".

fixing permissions on existing directory /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 28MB
creating configuration files ... ok
creating template1 database in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

Success. You can now start the database server using:

    /usr/lib/postgresql/8.4/bin/postgres -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data
or
    /usr/lib/postgresql/8.4/bin/pg_ctl -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data -l logfile start

waiting for server to start.... done
server started
CREATE ROLE
To set your environment so psql will connect to this DB instance type:
    export PGHOST=/home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1
## Done. ##
psql -h `pwd`/tmp/db1 -d distinct < bin/schema.sql
CREATE TABLE
CREATE TABLE
CREATE INDEX
./bin/start-redis.sh

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.
ERROR: role "client" already exists
NOTICE: CREATE TABLE will create implicit sequence "paths_id_seq" for serial column "paths.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "paths_pkey" for table "paths"
NOTICE: CREATE TABLE / UNIQUE will create implicit index "paths_path_key" for table "paths"
./bin/start-redis.sh: 2: start-stop-daemon: not found
make: *** [start-redis] Error 127
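For context on the Error 127 above: start-stop-daemon is a Debian-specific helper, so a bin/start-redis.sh that calls it fails on hosts where it is absent. A minimal sketch of a more portable launcher (hypothetical, not part of this branch; the config path is assumed from the branch layout, and for safety the function only prints the command it would run):

```shell
#!/bin/sh
# Hypothetical sketch: pick a launch command for redis-server depending
# on whether start-stop-daemon exists on PATH. Echoes rather than execs.
REDIS_CONF="bin/redis.conf"  # path assumed from this branch's layout

start_redis() {
    if command -v start-stop-daemon >/dev/null 2>&1; then
        # Debian/Ubuntu path: daemonize via start-stop-daemon
        echo "start-stop-daemon --start --background --startas redis-server -- $REDIS_CONF"
    else
        # Portable fallback: run redis-server directly
        echo "redis-server $REDIS_CONF"
    fi
}

start_redis
```

Either branch prints a command containing redis-server; a real script would exec the command instead of echoing it.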

Revision history for this message
Ubuntu One Server Tarmac Bot (ubuntuone-server-tarmac) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Revision history for this message
Lucio Torre (lucio.torre) :
review: Approve
Revision history for this message
Ubuntu One Server Tarmac Bot (ubuntuone-server-tarmac) wrote :
Download full text (4.6 KiB)

The attempt to merge lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin into lp:~txstatsd-dev/txstatsd/distinct-plugin failed. Below is the output from the failed tests.

./bin/start-database.sh
## Starting postgres in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1 ##
The files belonging to this database system will be owned by user "tarmac".
This user must also own the server process.

The database cluster will be initialized with locale C.
The default text search configuration will be set to "english".

fixing permissions on existing directory /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 28MB
creating configuration files ... ok
creating template1 database in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

Success. You can now start the database server using:

    /usr/lib/postgresql/8.4/bin/postgres -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data
or
    /usr/lib/postgresql/8.4/bin/pg_ctl -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data -l logfile start

waiting for server to start.... done
server started
CREATE ROLE
To set your environment so psql will connect to this DB instance type:
    export PGHOST=/home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1
## Done. ##
psql -h `pwd`/tmp/db1 -d distinct < bin/schema.sql
CREATE TABLE
CREATE TABLE
CREATE INDEX
./bin/start-redis.sh
trial distinctdb/
                                                                        [ERROR]

===============================================================================
[ERROR]: distinctdb.tests.test_distinct

Traceback (most recent call last):
  File "/usr/lib/python2.6/dist-packages/twisted/trial/runner.py", line 563, in loadPackage
    module = modinfo.load()
  File "/usr/lib/python2.6/dist-packages/twisted/python/modules.py", line 381, in load
    return self.pathEntry.pythonPath.moduleLoader(self.name)
  File "/usr/lib/python2.6/dist-packages/twisted/python/reflect.py", line 464, in namedAny
    topLevelPackage = _importAndCheckStack(trialname)
  File "/home/tarmac/cache/txstatsd/distinct-plugin/distinctdb/tests/test_distinct.py", line 22, in <module>
    from twisted.plugins import distinctdbplugin
  File "/home/tarmac/cache/txstatsd/distinct-plugin/twisted/plugins/distinctdbplugin.py", line 4, in <module>
    from txstatsd.itxstatsd import IMetricFactory
exceptions.ImportError: No module named txstatsd.itxstatsd
-------------------------------------------------------------------------------

FAILED (errors=1)

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.
ERROR: role "client" alread...

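The ImportError above (No module named txstatsd.itxstatsd) means txstatsd itself is not importable in the runner's environment, so the whole test module fails to load before a single test runs. One hedged sketch (not in this branch) of keeping the suite loadable with an import guard:

```python
# Hypothetical import guard for a test module such as
# distinctdb/tests/test_distinct.py: record whether txstatsd is
# importable instead of letting the module error out at import time.
try:
    from txstatsd.itxstatsd import IMetricFactory  # the import that failed above
    TXSTATSD_AVAILABLE = True
except ImportError:
    IMetricFactory = None
    TXSTATSD_AVAILABLE = False

# Tests that need the plugin can then check TXSTATSD_AVAILABLE and skip
# themselves, leaving the rest of the suite runnable.
```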

Revision history for this message
Ubuntu One Server Tarmac Bot (ubuntuone-server-tarmac) wrote :
Download full text (7.7 KiB)

The attempt to merge lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin into lp:~txstatsd-dev/txstatsd/distinct-plugin failed. Below is the output from the failed tests.

./bin/start-database.sh
## Starting postgres in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1 ##
The files belonging to this database system will be owned by user "tarmac".
This user must also own the server process.

The database cluster will be initialized with locale C.
The default text search configuration will be set to "english".

fixing permissions on existing directory /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 28MB
creating configuration files ... ok
creating template1 database in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

Success. You can now start the database server using:

    /usr/lib/postgresql/8.4/bin/postgres -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data
or
    /usr/lib/postgresql/8.4/bin/pg_ctl -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data -l logfile start

waiting for server to start.... done
server started
CREATE ROLE
To set your environment so psql will connect to this DB instance type:
    export PGHOST=/home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1
## Done. ##
psql -h `pwd`/tmp/db1 -d distinct < bin/schema.sql
CREATE TABLE
CREATE TABLE
CREATE INDEX
./bin/start-redis.sh
trial distinctdb/
distinctdb.tests.test_distinct
  TestDatabase
    test_connect ... [OK]
    test_create_metric_id ... [OK]
    test_find_saved_data ... [OK]
    test_load_metric_id ... [OK]
  TestDistinctMetricReporter
    test_configure ... [OK]
    test_get_bucket_no ... [OK]
    test_max ... [OK]
    test_reports ... [OK]
  TestPlugin
    test_factory ... [OK]
  TestRedis
    test_configure ... [ERROR]
                                                 [ERROR]
    test_connect ... [ERROR]
                                                   [ERROR]
    test_load ... [ERROR]
                                                      [ERROR]
    test_usage ... [ERRO...

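The TestRedis errors above suggest the tests could not reach the locally started Redis. A hedged sketch (hypothetical helper, not in this branch; modern Python assumed) that probes the test instance, whose port 16379 comes from bin/redis.conf in this diff, before running Redis-dependent tests:

```python
# Hypothetical probe: return True only if something accepts TCP
# connections at host:port within the timeout. Tests could skip the
# Redis cases when this returns False instead of erroring.
import socket

def redis_reachable(host="127.0.0.1", port=16379, timeout=0.25):
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:  # covers connection refused and timeouts
        return False
    sock.close()
    return True
```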

Revision history for this message
Ubuntu One Server Tarmac Bot (ubuntuone-server-tarmac) wrote :
Download full text (14.6 MiB)

The attempt to merge lp:~lucio.torre/txstatsd/add-redis-to-distinct-plugin into lp:~txstatsd-dev/txstatsd/distinct-plugin failed. Below is the output from the failed tests.

./bin/start-database.sh
## Starting postgres in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1 ##
The files belonging to this database system will be owned by user "tarmac".
This user must also own the server process.

The database cluster will be initialized with locale C.
The default text search configuration will be set to "english".

fixing permissions on existing directory /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 28MB
creating configuration files ... ok
creating template1 database in /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

Success. You can now start the database server using:

    /usr/lib/postgresql/8.4/bin/postgres -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data
or
    /usr/lib/postgresql/8.4/bin/pg_ctl -D /home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1/data -l logfile start

waiting for server to start.... done
server started
CREATE ROLE
To set your environment so psql will connect to this DB instance type:
    export PGHOST=/home/tarmac/cache/txstatsd/distinct-plugin/tmp/db1
## Done. ##
psql -h `pwd`/tmp/db1 -d distinct < bin/schema.sql
CREATE TABLE
CREATE TABLE
CREATE INDEX
./bin/start-redis.sh
trial distinctdb/
distinctdb.tests.test_distinct
  TestDatabase
    test_connect ... [OK]
    test_create_metric_id ... [OK]
    test_find_saved_data ... [OK]
    test_load_metric_id ... [OK]
  TestDistinctMetricReporter
    test_configure ... [OK]
    test_get_bucket_no ... [OK]
    test_max ... [OK]
    test_reports ... [OK]
  TestPlugin
    test_factory ... [OK]
  TestRedis
    test_configure ... [ERROR]
    test_connect ... [ERROR]
                                                   [ERROR]
    test_load ... [ERROR]
                                                      [ERROR]
                                                      [ERROR]
                                                      [ERROR]
          ...

Preview Diff

=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2011-12-16 15:10:16 +0000
@@ -0,0 +1,26 @@
+
+trial:
+	trial distinctdb/
+
+start-database:
+	./bin/start-database.sh
+	psql -h `pwd`/tmp/db1 -d distinct < bin/schema.sql
+
+stop-database:
+	./bin/stop-database.sh
+
+start-redis:
+	./bin/start-redis.sh
+
+stop-redis:
+	./bin/stop-redis.sh
+
+start: start-database start-redis
+
+stop: stop-redis stop-database
+
+clean:
+	rm -rf ./tmp/
+test: start trial stop clean
+
+.PHONY: trial test start-database stop-database start-redis stop-redis start stop

32=== added file 'bin/redis.conf'
33--- bin/redis.conf 1970-01-01 00:00:00 +0000
34+++ bin/redis.conf 2011-12-16 15:10:16 +0000
35@@ -0,0 +1,312 @@
36+# Redis configuration file example
37+
38+# Note on units: when memory size is needed, it is possible to specifiy
39+# it in the usual form of 1k 5GB 4M and so forth:
40+#
41+# 1k => 1000 bytes
42+# 1kb => 1024 bytes
43+# 1m => 1000000 bytes
44+# 1mb => 1024*1024 bytes
45+# 1g => 1000000000 bytes
46+# 1gb => 1024*1024*1024 bytes
47+#
48+# units are case insensitive so 1GB 1Gb 1gB are all the same.
49+
50+# By default Redis does not run as a daemon. Use 'yes' if you need it.
51+# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
52+daemonize no
53+
54+# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
55+# default. You can specify a custom pid file location here.
56+# pidfile ./tmp/redis.pid
57+
58+# Accept connections on the specified port, default is 6379
59+port 16379
60+
61+# If you want you can bind a single interface, if the bind option is not
62+# specified all the interfaces will listen for incoming connections.
63+#
64+bind 127.0.0.1
65+
66+# Close the connection after a client is idle for N seconds (0 to disable)
67+timeout 300
68+
69+# Set server verbosity to 'debug'
70+# it can be one of:
71+# debug (a lot of information, useful for development/testing)
72+# verbose (many rarely useful info, but not a mess like the debug level)
73+# notice (moderately verbose, what you want in production probably)
74+# warning (only very important / critical messages are logged)
75+loglevel verbose
76+
77+# Specify the log file name. Also 'stdout' can be used to force
78+# Redis to log on the standard output. Note that if you use standard
79+# output for logging but daemonize, logs will be sent to /dev/null
80+logfile tmp/redis/redis-server.log
81+
82+# Set the number of databases. The default database is DB 0, you can select
83+# a different one on a per-connection basis using SELECT <dbid> where
84+# dbid is a number between 0 and 'databases'-1
85+databases 16
86+
87+################################ SNAPSHOTTING #################################
88+#
89+# Save the DB on disk:
90+#
91+# save <seconds> <changes>
92+#
93+# Will save the DB if both the given number of seconds and the given
94+# number of write operations against the DB occurred.
95+#
96+# In the example below the behaviour will be to save:
97+# after 900 sec (15 min) if at least 1 key changed
98+# after 300 sec (5 min) if at least 10 keys changed
99+# after 60 sec if at least 10000 keys changed
100+#
101+# Note: you can disable saving at all commenting all the "save" lines.
102+
103+save 900 1
104+save 300 10
105+save 60 10000
106+
107+# Compress string objects using LZF when dump .rdb databases?
108+# For default that's set to 'yes' as it's almost always a win.
109+# If you want to save some CPU in the saving child set it to 'no' but
110+# the dataset will likely be bigger if you have compressible values or keys.
111+rdbcompression yes
112+
113+# The filename where to dump the DB
114+dbfilename dump.rdb
115+
116+# The working directory.
117+#
118+# The DB will be written inside this directory, with the filename specified
119+# above using the 'dbfilename' configuration directive.
120+#
121+# Also the Append Only File will be created inside this directory.
122+#
123+# Note that you must specify a directory here, not a file name.
124+dir tmp/redis
125+
126+################################# REPLICATION #################################
127+
128+# Master-Slave replication. Use slaveof to make a Redis instance a copy of
129+# another Redis server. Note that the configuration is local to the slave
130+# so for example it is possible to configure the slave to save the DB with a
131+# different interval, or to listen to another port, and so on.
132+#
133+# slaveof <masterip> <masterport>
134+
135+# If the master is password protected (using the "requirepass" configuration
136+# directive below) it is possible to tell the slave to authenticate before
137+# starting the replication synchronization process, otherwise the master will
138+# refuse the slave request.
139+#
140+# masterauth <master-password>
141+
142+################################## SECURITY ###################################
143+
144+# Require clients to issue AUTH <PASSWORD> before processing any other
145+# commands. This might be useful in environments in which you do not trust
146+# others with access to the host running redis-server.
147+#
148+# This should stay commented out for backward compatibility and because most
149+# people do not need auth (e.g. they run their own servers).
150+#
151+# Warning: since Redis is pretty fast an outside user can try up to
152+# 150k passwords per second against a good box. This means that you should
153+# use a very strong password otherwise it will be very easy to break.
154+#
155+# requirepass foobared
156+
157+################################### LIMITS ####################################
158+
159+# Set the max number of connected clients at the same time. By default there
160+# is no limit, and it's up to the number of file descriptors the Redis process
161+# is able to open. The special value '0' means no limits.
162+# Once the limit is reached Redis will close all the new connections sending
163+# an error 'max number of clients reached'.
164+#
165+# maxclients 128
166+
167+# Don't use more memory than the specified amount of bytes.
168+# When the memory limit is reached Redis will try to remove keys with an
169+# EXPIRE set. It will try to start freeing keys that are going to expire
170+# in little time and preserve keys with a longer time to live.
171+# Redis will also try to remove objects from free lists if possible.
172+#
173+# If all this fails, Redis will start to reply with errors to commands
174+# that will use more memory, like SET, LPUSH, and so on, and will continue
175+# to reply to most read-only commands like GET.
176+#
177+# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
178+# 'state' server or cache, not as a real DB. When Redis is used as a real
179+# database the memory usage will grow over the weeks, it will be obvious if
180+# it is going to use too much memory in the long run, and you'll have the time
181+# to upgrade. With maxmemory after the limit is reached you'll start to get
182+# errors for write operations, and this may even lead to DB inconsistency.
183+#
184+# maxmemory <bytes>
185+
186+############################## APPEND ONLY MODE ###############################
187+
188+# By default Redis asynchronously dumps the dataset on disk. If you can live
189+# with the idea that the latest records will be lost if something like a crash
190+# happens this is the preferred way to run Redis. If instead you care a lot
191+# about your data and don't want to that a single record can get lost you should
192+# enable the append only mode: when this mode is enabled Redis will append
193+# every write operation received in the file appendonly.aof. This file will
194+# be read on startup in order to rebuild the full dataset in memory.
195+#
196+# Note that you can have both the async dumps and the append only file if you
197+# like (you have to comment the "save" statements above to disable the dumps).
198+# Still if append only mode is enabled Redis will load the data from the
199+# log file at startup ignoring the dump.rdb file.
200+#
201+# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
202+# log file in background when it gets too big.
203+
204+appendonly no
205+
206+# The name of the append only file (default: "appendonly.aof")
207+# appendfilename appendonly.aof
208+
209+# The fsync() call tells the Operating System to actually write data on disk
210+# instead to wait for more data in the output buffer. Some OS will really flush
211+# data on disk, some other OS will just try to do it ASAP.
212+#
213+# Redis supports three different modes:
214+#
215+# no: don't fsync, just let the OS flush the data when it wants. Faster.
216+# always: fsync after every write to the append only log . Slow, Safest.
217+# everysec: fsync only if one second passed since the last fsync. Compromise.
218+#
219+# The default is "everysec" that's usually the right compromise between
220+# speed and data safety. It's up to you to understand if you can relax this to
221+# "no" that will will let the operating system flush the output buffer when
222+# it wants, for better performances (but if you can live with the idea of
223+# some data loss consider the default persistence mode that's snapshotting),
224+# or on the contrary, use "always" that's very slow but a bit safer than
225+# everysec.
226+#
227+# If unsure, use "everysec".
228+
229+# appendfsync always
230+appendfsync everysec
231+# appendfsync no
232+
233+################################ VIRTUAL MEMORY ###############################
234+
235+# Virtual Memory allows Redis to work with datasets bigger than the actual
236+# amount of RAM needed to hold the whole dataset in memory.
237+# In order to do so very used keys are taken in memory while the other keys
238+# are swapped into a swap file, similarly to what operating systems do
239+# with memory pages.
240+#
241+# To enable VM just set 'vm-enabled' to yes, and set the following three
242+# VM parameters accordingly to your needs.
243+
244+vm-enabled no
245+# vm-enabled yes
246+
247+# This is the path of the Redis swap file. As you can guess, swap files
248+# can't be shared by different Redis instances, so make sure to use a swap
249+# file for every redis process you are running. Redis will complain if the
250+# swap file is already in use.
251+#
252+# The best kind of storage for the Redis swap file (that's accessed at random)
253+# is a Solid State Disk (SSD).
254+#
255+# *** WARNING *** if you are using a shared hosting the default of putting
256+# the swap file under /tmp is not secure. Create a dir with access granted
257+# only to Redis user and configure Redis to create the swap file there.
258+vm-swap-file tmp/redis/redis.swap
259+
260+# vm-max-memory configures the VM to use at max the specified amount of
261+# RAM. Everything that deos not fit will be swapped on disk *if* possible, that
262+# is, if there is still enough contiguous space in the swap file.
263+#
264+# With vm-max-memory 0 the system will swap everything it can. Not a good
265+# default, just specify the max amount of RAM you can in bytes, but it's
266+# better to leave some margin. For instance specify an amount of RAM
267+# that's more or less between 60 and 80% of your free RAM.
268+vm-max-memory 0
269+
270+# Redis swap files is split into pages. An object can be saved using multiple
271+# contiguous pages, but pages can't be shared between different objects.
272+# So if your page is too big, small objects swapped out on disk will waste
273+# a lot of space. If you page is too small, there is less space in the swap
274+# file (assuming you configured the same number of total swap file pages).
275+#
276+# If you use a lot of small objects, use a page size of 64 or 32 bytes.
277+# If you use a lot of big objects, use a bigger page size.
278+# If unsure, use the default :)
279+vm-page-size 32
280+
281+# Number of total memory pages in the swap file.
282+# Given that the page table (a bitmap of free/used pages) is taken in memory,
283+# every 8 pages on disk will consume 1 byte of RAM.
284+#
285+# The total swap size is vm-page-size * vm-pages
286+#
287+# With the default of 32-bytes memory pages and 134217728 pages Redis will
288+# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
289+#
290+# It's better to use the smallest acceptable value for your application,
291+# but the default is large in order to work in most conditions.
292+vm-pages 134217728
293+
294+# Max number of VM I/O threads running at the same time.
295+# This threads are used to read/write data from/to swap file, since they
296+# also encode and decode objects from disk to memory or the reverse, a bigger
297+# number of threads can help with big objects even if they can't help with
298+# I/O itself as the physical device may not be able to couple with many
299+# reads/writes operations at the same time.
300+#
301+# The special value of 0 turn off threaded I/O and enables the blocking
302+# Virtual Memory implementation.
303+vm-max-threads 4
304+
305+############################### ADVANCED CONFIG ###############################
306+
307+# Glue small output buffers together in order to send small replies in a
308+# single TCP packet. Uses a bit more CPU but most of the times it is a win
309+# in terms of number of queries per second. Use 'yes' if unsure.
310+glueoutputbuf yes
311+
312+# Hashes are encoded in a special way (much more memory efficient) when they
313+# have at most a given number of elements, and the biggest element does not
314+# exceed a given threshold. You can configure these limits with the following
315+# configuration directives.
316+hash-max-zipmap-entries 64
317+hash-max-zipmap-value 512
318+
319+# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
320+# order to help rehash the main Redis hash table (the one mapping top-level
321+# keys to values). The hash table implementation Redis uses (see dict.c)
322+# performs lazy rehashing: the more operations you run on a hash table
323+# that is rehashing, the more rehashing "steps" are performed, so if the
324+# server is idle the rehashing is never completed and some more memory is used
325+# by the hash table.
326+#
327+# The default is to use this millisecond 10 times every second in order to
328+# actively rehash the main dictionaries, freeing memory when possible.
329+#
330+# If unsure:
331+# use "activerehashing no" if you have hard latency requirements and it is
332+# not acceptable in your environment for Redis to reply from time to time
333+# to queries with a 2 millisecond delay.
334+#
335+# use "activerehashing yes" if you don't have such hard requirements but
336+# want to free memory as soon as possible.
337+activerehashing yes
338+
339+################################## INCLUDES ###################################
340+
341+# Include one or more other config files here. This is useful if you
342+# have a standard template that goes to all redis servers but also need
343+# to customize a few per-server settings. Include files can include
344+# other files, so use this wisely.
345+#
346+# include /path/to/local.conf
347+# include /path/to/other.conf
348
349=== added file 'bin/start-redis.sh'
350--- bin/start-redis.sh 1970-01-01 00:00:00 +0000
351+++ bin/start-redis.sh 2011-12-16 15:10:16 +0000
352@@ -0,0 +1,5 @@
353+#!/bin/bash
354+
355+mkdir -p tmp/redis
356+/sbin/start-stop-daemon --start -b -m -d . -p tmp/redis.pid --exec /usr/bin/redis-server -- `pwd`/bin/redis.conf
357+
358
359=== added file 'bin/stop-redis.sh'
360--- bin/stop-redis.sh 1970-01-01 00:00:00 +0000
361+++ bin/stop-redis.sh 2011-12-16 15:10:16 +0000
362@@ -0,0 +1,3 @@
363+#!/bin/bash
364+
365+/sbin/start-stop-daemon --stop -p tmp/redis.pid --exec /usr/bin/redis-server -- `pwd`/bin/redis.conf
366
367=== modified file 'distinctdb/distinctmetric.py'
368--- distinctdb/distinctmetric.py 2011-12-09 14:47:23 +0000
369+++ distinctdb/distinctmetric.py 2011-12-16 15:10:16 +0000
370@@ -1,12 +1,16 @@
371 import time
372+import threading
373
374 import psycopg2
375+import redis
376
377 from zope.interface import implements
378-from twisted.internet.threads import deferToThread
379+from twisted.internet import reactor
380 from txstatsd.itxstatsd import IMetric
381
382-ONEDAY = 60 * 60 * 24
383+ONE_MINUTE = 60
384+ONE_HOUR = 60 * ONE_MINUTE
385+ONE_DAY = 24 * ONE_HOUR
386
387
388 class DistinctMetricReporter(object):
389@@ -16,8 +20,10 @@
390 """
391 implements(IMetric)
392
393+ periods = [5 * ONE_MINUTE, ONE_HOUR, ONE_DAY]
394+
395 def __init__(self, name, wall_time_func=time.time, prefix="",
396- bucket_size=ONEDAY, dsn=None):
397+ bucket_size=ONE_DAY, dsn=None, redis_host=None, redis_port=None):
398 """Construct a metric we expect to be periodically updated.
399
400 @param name: Indicates what is being instrumented.
401@@ -32,8 +38,17 @@
402 self.prefix = prefix
403 self.bucket_size = bucket_size
404 self.dsn = dsn
405+ self.redis_host = redis_host
406+ if redis_port is None:
407+ redis_port = 6379
408+ self.redis_port = redis_port
409 self.metric_id = None
410 self.build_bucket()
411+ self.redis_flush_lock = threading.Lock()
412+ self.redis_count = {}
413+
414+ if redis_host is not None:
415+ self.redis = redis.client.Redis(host=redis_host, port=redis_port)
416
417 def build_bucket(self, timestamp=None):
418 self.max = 0
419@@ -55,6 +70,40 @@
420 if value > self.max:
421 self.max = value
422
423+ now = self.wall_time_func()
424+ if self.redis_host is not None:
425+ reactor.callInThread(self._update_count, item, now)
426+
427+ def bucket_name_for(self, period):
428+ return "bucket_" + str(period)
429+
430+ def _update_count(self, value, when):
431+ for period in self.periods:
432+ self.redis.zadd(self.bucket_name_for(period), value, when)
433+
434+ def _flush_redis(self, now):
435+ if self.redis_flush_lock.acquire(False) is False:
436+ return
437+ try:
438+ for period in self.periods:
439+ bucket = self.bucket_name_for(period)
440+ self.redis.zremrangebyscore(bucket, 0, now - period)
441+ self.redis_count[bucket] = self.redis.zcard(bucket)
442+ finally:
443+ self.redis_flush_lock.release()
444+
445+ def count(self, period):
446+ return self.redis_count.get(self.bucket_name_for(period), 0)
447+
448+ def count_5min(self):
449+ return self.count(5 * ONE_MINUTE)
450+
451+ def count_1hour(self):
452+ return self.count(ONE_HOUR)
453+
454+ def count_1day(self):
455+ return self.count(ONE_DAY)
456+
457 def _save_bucket(self, bucket, bucket_no):
458 path = self.prefix + self.name
459 if self.metric_id is None:
460@@ -80,7 +129,7 @@
461
462 def save_bucket(self, bucket, bucket_no):
463 if self.dsn is not None:
464- deferToThread(self._save_bucket, bucket, bucket_no)
465+ reactor.callInThread(self._save_bucket, bucket, bucket_no)
466
467 def flush(self, interval, timestamp):
468 current_bucket = self.get_bucket_no(timestamp)
469@@ -88,9 +137,16 @@
470 self.save_bucket(self.bucket, self.bucket_no)
471 self.build_bucket(timestamp)
472
473+ if self.redis_host is not None:
474+ reactor.callInThread(self._flush_redis, timestamp)
475+
476 metrics = []
477- items = {".count": len(self.bucket),
478- ".max": self.max}
479+ items = {".messages": len(self.bucket),
480+ ".max": self.max,
481+ ".count_5min": self.count_5min(),
482+ ".count_1hour": self.count_1hour(),
483+ ".count_1day": self.count_1day(),
484+ }
485 for item, value in items.iteritems():
486 metrics.append((self.prefix + self.name + item, value, timestamp))
487 return metrics
488
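The rolling-window distinct counting added in this hunk can be modelled without a Redis server. The following is a hedged in-memory sketch (the class name and shape are illustrative, not part of the branch) of what the three sorted-set calls do: ZADD stores each member with its last-seen timestamp as score, ZREMRANGEBYSCORE drops members whose score falls outside the window, and ZCARD yields the distinct count.

```python
# In-memory model of the Redis sorted-set rolling window used by
# DistinctMetricReporter: a dict mapping member -> last-seen timestamp
# behaves like a sorted set keyed by score.
class RollingDistinctCount(object):

    def __init__(self, period):
        self.period = period   # window length in seconds
        self.last_seen = {}    # member -> last-seen timestamp (like ZADD)

    def update(self, member, when):
        # Re-adding an existing member only refreshes its score,
        # exactly as ZADD does for an existing element.
        self.last_seen[member] = when

    def flush(self, now):
        # Equivalent of ZREMRANGEBYSCORE bucket 0 (now - period):
        # drop members last seen at or before the window cutoff.
        cutoff = now - self.period
        for member, when in list(self.last_seen.items()):
            if when <= cutoff:
                del self.last_seen[member]

    def count(self):
        # Equivalent of ZCARD: distinct members still in the window.
        return len(self.last_seen)
```

This mirrors the behaviour exercised by `test_usage`: updating the same member twice leaves the count at 1, and flushing past the period expires old members.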
489=== modified file 'distinctdb/tests/test_distinct.py'
490--- distinctdb/tests/test_distinct.py 2011-12-09 20:04:06 +0000
491+++ distinctdb/tests/test_distinct.py 2011-12-16 15:10:16 +0000
492@@ -5,11 +5,19 @@
493 from cStringIO import StringIO
494 import os
495 import time
496-import subprocess
497+try:
498+ from subprocess import check_output
499+except ImportError:
500+ import subprocess
501+ def check_output(args):
502+ return subprocess.Popen(args,
503+ stdout=subprocess.PIPE).communicate()[0]
504
505 import psycopg2
506+import redis
507
508 from twisted.trial.unittest import TestCase
509+from twisted.internet import reactor
510 from twisted.plugin import getPlugins
511 from twisted.plugins import distinctdbplugin
512 from txstatsd.itxstatsd import IMetricFactory
513@@ -71,8 +79,10 @@
514 dmr.update("three")
515 self.assertEquals(result,
516 {"bucket": {"one": 2, "two": 1}, "bucket_no": 0})
517- self.assertEquals(dmr.flush(1, day),
518- [("test.max", 1, day), ("test.count", 1, day)])
519+ result = dmr.flush(1, day)
520+
521+ self.assertTrue(("test.max", 1, day) in result)
522+ self.assertTrue(("test.messages", 1, day) in result)
523
524 def test_configure(self):
525 class TestOptions(service.OptionsGlue):
526@@ -101,7 +111,7 @@
527 class TestDatabase(TestCase):
528
529 def setUp(self):
530- rootdir = subprocess.check_output(["bzr", "root"]).strip()
531+ rootdir = check_output(["bzr", "root"]).strip()
532 dsn_file = os.path.join(rootdir, "tmp", "pg.dsn")
533 self.dsn = open(dsn_file).read()
534 self.conn = psycopg2.connect(self.dsn)
535@@ -145,3 +155,69 @@
536 dmr2 = distinct.DistinctMetricReporter("test", dsn=self.dsn)
537 dmr2._save_bucket({}, 0)
538 self.assertEquals(dmr.metric_id, dmr2.metric_id)
539+
540+
541+class TestRedis(TestCase):
542+
543+ def setUp(self):
544+ reactor._initThreadPool()
545+ reactor.threadpool.start()
546+
547+ def tearDown(self):
548+ r = redis.client.Redis(host="localhost", port=16379)
549+ r.flushdb()
550+ reactor.threadpool.stop()
551+
552+ def test_connect(self):
553+ r = redis.client.Redis(host="localhost", port=16379)
554+ r.ping()
555+
556+ def test_configure(self):
557+ class TestOptions(service.OptionsGlue):
558+ optParameters = [["test", "t", "default", "help"]]
559+ config_section = "statsd"
560+
561+ o = TestOptions()
562+ config_file = ConfigParser.RawConfigParser()
563+ config_file.readfp(StringIO("[statsd]\n\n[plugin_distinctdb]\n"
564+ "redis_host = localhost\nredis_port = 16379"))
565+ o.configure(config_file)
566+ dmf = distinctdbplugin.DistinctMetricFactory()
567+ dmf.configure(o)
568+ dmr = dmf.build_metric("foo", "bar", time.time)
569+ self.assertEquals(dmr.redis_host, "localhost")
570+ self.assertEquals(dmr.redis_port, 16379)
571+
572+ def test_usage(self):
573+ dmr = distinct.DistinctMetricReporter("test",
574+ redis_host="localhost", redis_port=16379)
575+
576+ self.assertEquals(dmr.count_1hour(), 0)
577+ dmr._update_count("one", 0)
578+ dmr._flush_redis(1)
579+ self.assertEquals(dmr.count_1hour(), 1)
580+ dmr._update_count("one", 0)
581+ dmr._flush_redis(1)
582+ self.assertEquals(dmr.count_1hour(), 1)
583+ dmr._update_count("two", 30 * distinct.ONE_MINUTE)
584+ dmr._flush_redis(30 * distinct.ONE_MINUTE)
585+ self.assertEquals(dmr.count_1hour(), 2)
586+ dmr._flush_redis(distinct.ONE_HOUR + 10 * distinct.ONE_MINUTE)
587+ self.assertEquals(dmr.count_1hour(), 1)
588+
589+ def test_load(self):
590+ dmr = distinct.DistinctMetricReporter("test",
591+ redis_host="localhost", redis_port=16379)
592+ start = time.time()
593+ for i in range(10000):
594+ dmr.update(str(i % 1000))
595+
596+ while True:
597+ w = len(reactor.threadpool.working)
598+ if w == 0:
599+ break
600+ time.sleep(0.1)
601+ dmr._flush_redis(time.time())
602+ duration = time.time() - start
603+ self.assertEquals(dmr.count_1hour(), 1000)
604+ self.assertTrue(duration < 10)
605
606=== modified file 'twisted/plugins/distinctdbplugin.py'
607--- twisted/plugins/distinctdbplugin.py 2011-12-08 21:08:38 +0000
608+++ twisted/plugins/distinctdbplugin.py 2011-12-16 15:10:16 +0000
609@@ -2,7 +2,7 @@
610
611 from twisted.plugin import IPlugin
612 from txstatsd.itxstatsd import IMetricFactory
613-from distinctdb.distinctmetric import DistinctMetricReporter, ONEDAY
614+from distinctdb.distinctmetric import DistinctMetricReporter, ONE_DAY
615
616
617 class DistinctMetricFactory(object):
618@@ -19,15 +19,20 @@
619 return DistinctMetricReporter(name, prefix=prefix,
620 wall_time_func=wall_time_func,
621 bucket_size=self.bucket_size,
622- dsn=self.dsn)
623+ dsn=self.dsn, redis_host=self.redis_host,
624+ redis_port=self.redis_port)
625
626 def configure(self, options):
627 self.section = dict(options.get("plugin_distinctdb", {}))
628 try:
629- self.bucket_size = int(self.section.get("bucket_size", ONEDAY))
630+ self.bucket_size = int(self.section.get("bucket_size", ONE_DAY))
631 except ValueError:
632- self.bucket_size = ONEDAY
633+ self.bucket_size = ONE_DAY
634
635 self.dsn = self.section.get("dsn", None)
636+ self.redis_host = self.section.get("redis_host", None)
637+ self.redis_port = self.section.get("redis_port", None)
638+ if self.redis_port is not None:
639+ self.redis_port = int(self.redis_port)
640
641 distinct_metric_factory = DistinctMetricFactory()
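Putting the new options together, a config section like the one built in `test_configure` would look as follows (values are illustrative; `configure` reads `bucket_size`, `dsn`, `redis_host`, and `redis_port`, and the latter three default to unset):

```ini
[statsd]

[plugin_distinctdb]
redis_host = localhost
redis_port = 16379
# dsn = <postgres DSN for persisting buckets; optional>
# bucket_size falls back to ONE_DAY when absent or not an integer
```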

Subscribers

People subscribed via source and target branches