Merge ~ubuntu-docker-images/ubuntu-docker-images/+git/redis:readme into ~ubuntu-docker-images/ubuntu-docker-images/+git/redis:edge

Proposed by Sergio Durigan Junior
Status: Merged
Approved by: Sergio Durigan Junior
Approved revision: 5a29550f0523d59defade47334fd9e35c478fe96
Merge reported by: Sergio Durigan Junior
Merged at revision: 5a29550f0523d59defade47334fd9e35c478fe96
Proposed branch: ~ubuntu-docker-images/ubuntu-docker-images/+git/redis:readme
Merge into: ~ubuntu-docker-images/ubuntu-docker-images/+git/redis:edge
Diff against target: 1673 lines (+1613/-10)
6 files modified
HACKING.md (+19/-0)
README.md (+89/-10)
examples/README.md (+74/-0)
examples/config/redis.conf (+1373/-0)
examples/docker-compose.yml (+10/-0)
examples/microk8s-deployments.yml (+48/-0)
Reviewer          Review Type  Date Requested  Status
Richard Harding   community                    Approve
Lucas Kanashiro                                Pending
Review via email: mp+393674@code.launchpad.net

Description of the change

Improve on the existing README file; add HACKING and examples, as requested in LP #1904004.

Revision history for this message
Richard Harding (rharding) wrote:

Awesome, ty. A couple of clean-ups please, but with those this looks like a great start.

review: Approve
Revision history for this message
Sergio Durigan Junior (sergiodj) wrote:

Thanks for the review, Rick! I've addressed your comments, and will push the changes now.

Preview Diff

1diff --git a/HACKING.md b/HACKING.md
2new file mode 100644
3index 0000000..4de80f4
4--- /dev/null
5+++ b/HACKING.md
6@@ -0,0 +1,19 @@
7+# Contributing
8+
9+In order to contribute to the redis OCI image, do the following:
10+
11+* Create a new branch.
12+
13+* Make your changes. Keep your commits logically separated. If it
14+ fixes a bug, do not forget to mention it in the commit message.
15+
16+* Build a new image with your changes. You can use the following command:
17+
18+```
19+$ docker build -t squeakywheel/redis:test .
20+```
21+
22+* Test the new image. Run it in a way that exercises your changes;
23+ you can also check the README.md file. See the smoke-test sketch below.
24+
25+* If everything goes well, submit a merge proposal.
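
One minimal way to smoke-test a freshly built image, assuming the `squeakywheel/redis:test` tag from the build command above and the `ALLOW_EMPTY_PASSWORD` variable described in README.md:

```
$ docker run -d --name redis-test -e ALLOW_EMPTY_PASSWORD=yes squeakywheel/redis:test
$ docker exec redis-test redis-cli PING
PONG
$ docker rm -f redis-test
```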
26diff --git a/README.md b/README.md
27index 4011330..77cd3bc 100644
28--- a/README.md
29+++ b/README.md
30@@ -1,21 +1,42 @@
31 # Ubuntu redis OCI image
32
33-This is the OCI image for redis.
34+This is the OCI image for redis. In Ubuntu, redis is available as a
35+`.deb` package. For this reason, this image was built by installing
36+the redis Ubuntu Focal package inside a docker container.
37+
38+## Versions supported
39+
40+* `edge` = `5.0.7-2`
41+
42+## Architectures supported
43+
44+* amd64
45+
46+* arm64
47+
48+* ppc64el
49+
50+* s390x
51
52 ## How to use it
53
54-`$ docker run --name redis -e ALLOW_EMPTY_PASSWORD=yes <TODONAMEHERE>/redis:edge`
55+To obtain this image, run:
56
57-Bear in mind that the use of `ALLOW_EMPTY_PASSWORD=yes` is not
58-recommended in production environments.
59+```
60+$ docker pull squeakywheel/redis:edge
61+```
62
63-## Differences between our image and upstream's
64+You will be able to launch `redis-server` by doing:
65
66-Upstream's redis image does not enforce the use of a password when
67-connecting to `redis-server`, whereas our image does. For this
68-reason, you have to explicitly provide the password via an environment
69-variable (see below), or disable it via the
70-`ALLOW_EMPTY_PASSWORD=yes`, as mentioned above.
71+```
72+$ docker run --name redis -e ALLOW_EMPTY_PASSWORD=yes squeakywheel/redis:edge
73+```
74+
75+Bear in mind that using `ALLOW_EMPTY_PASSWORD=yes` is not
76+recommended in production environments. For production, specify
77+the `REDIS_PASSWORD` environment variable, or instruct the
78+entrypoint script to create a random password for you by setting
79+`REDIS_RANDOM_PASSWORD=1`.
80
81 ## Environment variables
82
83@@ -42,3 +63,61 @@ the behaviour of `redis-server`.
84 - `REDIS_EXTRA_FLAGS`: If you would like to specify any extra flags to
85 be passed to `redis-server` when initializing it, use this
86 environment variable to do so.
87+
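
As an illustration of how these variables can be combined (the values below are arbitrary examples, and it is assumed the entrypoint forwards `REDIS_EXTRA_FLAGS` verbatim to `redis-server`), a production-style invocation could look like:

```
$ docker run -d --name redis \
    -e REDIS_PASSWORD=mypassword \
    -e REDIS_EXTRA_FLAGS="--maxmemory 100mb --maxmemory-policy allkeys-lru" \
    -p 6379:6379 \
    squeakywheel/redis:edge
```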
88+## Specifying your own configuration file
89+
90+You can specify your own `redis.conf` file by mounting it:
91+
92+```
93+$ docker run -v /myredis/redis.conf:/etc/redis/redis.conf -e ALLOW_EMPTY_PASSWORD=yes --name redis squeakywheel/redis:edge
94+```
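
To confirm that the mounted configuration is the one in effect, one option is to query the running server with `redis-cli` (assuming the container was started with `ALLOW_EMPTY_PASSWORD=yes` as above, so no `AUTH` is needed); the reply should reflect whatever value your `redis.conf` sets for the directive you query:

```
$ docker exec -it redis redis-cli CONFIG GET maxmemory
```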
95+
96+## Connecting to the redis-server via redis-cli
97+
98+If you want, you can launch another container with the `redis-cli`
99+program, and connect to the `redis-server` that is running in the
100+first container.
101+
102+For example, suppose that you already have a docker network named
103+`redis-network` configured, and you launch `redis-server` like this:
104+
105+```
106+$ docker run --network redis-network --name redis-container -e REDIS_PASSWORD=mypassword -d squeakywheel/redis:edge
107+```
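
If the `redis-network` network does not exist yet, it can be created with the standard Docker command:

```
$ docker network create redis-network
```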
108+
109+You can now launch `redis-cli` by doing:
110+
111+```
112+$ docker run --network redis-network --rm squeakywheel/redis:edge redis-cli -h redis-container
113+redis:6379> AUTH mypassword
114+OK
115+redis:6379> PING
116+PONG
117+redis:6379>
118+```
119+
120+## Differences between our image and upstream's
121+
122+Upstream's redis image does not enforce the use of a password when
123+connecting to `redis-server`, whereas our image does. For this
124+reason, you have to explicitly provide the password via an environment
125+variable (see above), or disable the requirement with
126+`ALLOW_EMPTY_PASSWORD=yes`, as mentioned above.
127+
128+## Bugs and feature requests
129+
130+If you find a bug in our image or want to request a specific feature,
131+file a bug here:
132+
133+https://bugs.launchpad.net/ubuntu-server-oci/+filebug
134+
135+In the title of the bug, add `redis: <reason>`.
136+
137+Make sure to include:
138+
139+* The tag of the image you are using
140+
141+* Reproduction steps for the deployment
142+
143+* If it is a feature request, please provide as much detail as
144+ possible
145diff --git a/examples/README.md b/examples/README.md
146new file mode 100644
147index 0000000..09e3cd1
148--- /dev/null
149+++ b/examples/README.md
150@@ -0,0 +1,74 @@
151+# Running the examples
152+
153+## docker-compose
154+
155+Install `docker-compose` from the Ubuntu archive:
156+
157+```
158+$ sudo apt install -y docker-compose
159+```
160+
161+Call `docker-compose` from the examples directory:
162+
163+```
164+$ docker-compose up -d
165+```
166+
167+You can now access the redis server on port 6379, using `redis-cli`:
168+
169+```
170+$ redis-cli -h 127.0.0.1
171+127.0.0.1:6379> auth mypassword
172+OK
173+127.0.0.1:6379> ping
174+PONG
175+127.0.0.1:6379>
176+```
177+
178+Notice that the default password set in the `docker-compose` file is
179+`mypassword`.
180+
181+To stop `docker-compose`, run:
182+
183+```
184+$ docker-compose down
185+```
186+
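The contents of `examples/docker-compose.yml` are not shown in the preview diff above; as a rough sketch of the shape such a file takes (the service name, published port and password below are illustrative assumptions and may differ from the actual example), a minimal compose file could look like:

```
version: "3"
services:
  redis:
    image: squeakywheel/redis:edge
    environment:
      # same password used in the redis-cli session above
      - REDIS_PASSWORD=mypassword
    ports:
      - "6379:6379"
```
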
187+## Microk8s
188+
189+Install microk8s from snap:
190+
191+```
192+$ snap install microk8s
193+```
194+
195+With microk8s running, enable the `dns` and `storage` add-ons:
196+
197+```
198+$ microk8s enable dns storage
199+```
200+
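
The add-ons can take a moment to come up; a standard way to wait until microk8s reports itself ready is:

```
$ microk8s status --wait-ready
```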
201+Create a configmap for the configuration files:
202+
203+```
204+$ microk8s kubectl create configmap redis-config \
205+ --from-file=redis=config/redis.conf
206+```
207+
208+Apply the `microk8s-deployments.yml`:
209+
210+```
211+$ microk8s kubectl apply -f microk8s-deployments.yml
212+```
213+
214+You will now be able to connect to the redis server using port 30073
215+on `localhost`:
216+
217+```
218+$ redis-cli -h 127.0.0.1 -p 30073
219+127.0.0.1:30073> auth mypassword
220+OK
221+127.0.0.1:30073> ping
222+PONG
223+127.0.0.1:30073>
224+```
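
To inspect what the deployment created, or to tear the example down afterwards, the usual `kubectl` subcommands can be run through the `microk8s` wrapper (the configmap name matches the one created above):

```
$ microk8s kubectl get pods,services
$ microk8s kubectl delete -f microk8s-deployments.yml
$ microk8s kubectl delete configmap redis-config
```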
225diff --git a/examples/config/redis.conf b/examples/config/redis.conf
226new file mode 100644
227index 0000000..d01b31e
228--- /dev/null
229+++ b/examples/config/redis.conf
230@@ -0,0 +1,1373 @@
231+# Redis configuration file example.
232+#
233+# Note that in order to read the configuration file, Redis must be
234+# started with the file path as first argument:
235+#
236+# ./redis-server /path/to/redis.conf
237+
238+# Note on units: when memory size is needed, it is possible to specify
239+# it in the usual form of 1k 5GB 4M and so forth:
240+#
241+# 1k => 1000 bytes
242+# 1kb => 1024 bytes
243+# 1m => 1000000 bytes
244+# 1mb => 1024*1024 bytes
245+# 1g => 1000000000 bytes
246+# 1gb => 1024*1024*1024 bytes
247+#
248+# units are case insensitive so 1GB 1Gb 1gB are all the same.
249+
250+################################## INCLUDES ###################################
251+
252+# Include one or more other config files here. This is useful if you
253+# have a standard template that goes to all Redis servers but also need
254+# to customize a few per-server settings. Include files can include
255+# other files, so use this wisely.
256+#
257+# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
258+# from admin or Redis Sentinel. Since Redis always uses the last processed
259+# line as value of a configuration directive, you'd better put includes
260+# at the beginning of this file to avoid overwriting config change at runtime.
261+#
262+# If instead you are interested in using includes to override configuration
263+# options, it is better to use include as the last line.
264+#
265+# include /path/to/local.conf
266+# include /path/to/other.conf
267+
268+################################## MODULES #####################################
269+
270+# Load modules at startup. If the server is not able to load modules
271+# it will abort. It is possible to use multiple loadmodule directives.
272+#
273+# loadmodule /path/to/my_module.so
274+# loadmodule /path/to/other_module.so
275+
276+################################## NETWORK #####################################
277+
278+# By default, if no "bind" configuration directive is specified, Redis listens
279+# for connections from all the network interfaces available on the server.
280+# It is possible to listen to just one or multiple selected interfaces using
281+# the "bind" configuration directive, followed by one or more IP addresses.
282+#
283+# Examples:
284+#
285+# bind 192.168.1.100 10.0.0.1
286+# bind 127.0.0.1 ::1
287+#
288+# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
289+# internet, binding to all the interfaces is dangerous and will expose the
290+# instance to everybody on the internet. So by default we uncomment the
291+# following bind directive, that will force Redis to listen only into
292+# the IPv4 loopback interface address (this means Redis will be able to
293+# accept connections only from clients running into the same computer it
294+# is running).
295+#
296+# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
297+# JUST COMMENT THE FOLLOWING LINE.
298+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
299+#bind 127.0.0.1 ::1
300+bind 0.0.0.0
301+
302+# Protected mode is a layer of security protection, in order to avoid that
303+# Redis instances left open on the internet are accessed and exploited.
304+#
305+# When protected mode is on and if:
306+#
307+# 1) The server is not binding explicitly to a set of addresses using the
308+# "bind" directive.
309+# 2) No password is configured.
310+#
311+# The server only accepts connections from clients connecting from the
312+# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
313+# sockets.
314+#
315+# By default protected mode is enabled. You should disable it only if
316+# you are sure you want clients from other hosts to connect to Redis
317+# even if no authentication is configured, nor a specific set of interfaces
318+# are explicitly listed using the "bind" directive.
319+protected-mode yes
320+
321+# Accept connections on the specified port, default is 6379 (IANA #815344).
322+# If port 0 is specified Redis will not listen on a TCP socket.
323+port 6379
324+
325+# TCP listen() backlog.
326+#
327+# In high requests-per-second environments you need an high backlog in order
328+# to avoid slow clients connections issues. Note that the Linux kernel
329+# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
330+# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
331+# in order to get the desired effect.
332+tcp-backlog 511
333+
334+# Unix socket.
335+#
336+# Specify the path for the Unix socket that will be used to listen for
337+# incoming connections. There is no default, so Redis will not listen
338+# on a unix socket when not specified.
339+#
340+# unixsocket /var/run/redis/redis-server.sock
341+# unixsocketperm 700
342+
343+# Close the connection after a client is idle for N seconds (0 to disable)
344+timeout 0
345+
346+# TCP keepalive.
347+#
348+# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
349+# of communication. This is useful for two reasons:
350+#
351+# 1) Detect dead peers.
352+# 2) Take the connection alive from the point of view of network
353+# equipment in the middle.
354+#
355+# On Linux, the specified value (in seconds) is the period used to send ACKs.
356+# Note that to close the connection the double of the time is needed.
357+# On other kernels the period depends on the kernel configuration.
358+#
359+# A reasonable value for this option is 300 seconds, which is the new
360+# Redis default starting with Redis 3.2.1.
361+tcp-keepalive 300
362+
363+################################# GENERAL #####################################
364+
365+# By default Redis does not run as a daemon. Use 'yes' if you need it.
366+# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
367+daemonize yes
368+
369+# If you run Redis from upstart or systemd, Redis can interact with your
370+# supervision tree. Options:
371+# supervised no - no supervision interaction
372+# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
373+# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
374+# supervised auto - detect upstart or systemd method based on
375+# UPSTART_JOB or NOTIFY_SOCKET environment variables
376+# Note: these supervision methods only signal "process is ready."
377+# They do not enable continuous liveness pings back to your supervisor.
378+supervised no
379+
380+# If a pid file is specified, Redis writes it where specified at startup
381+# and removes it at exit.
382+#
383+# When the server runs non daemonized, no pid file is created if none is
384+# specified in the configuration. When the server is daemonized, the pid file
385+# is used even if not specified, defaulting to "/var/run/redis.pid".
386+#
387+# Creating a pid file is best effort: if Redis is not able to create it
388+# nothing bad happens, the server will start and run normally.
389+pidfile /var/run/redis/redis-server.pid
390+
391+# Specify the server verbosity level.
392+# This can be one of:
393+# debug (a lot of information, useful for development/testing)
394+# verbose (many rarely useful info, but not a mess like the debug level)
395+# notice (moderately verbose, what you want in production probably)
396+# warning (only very important / critical messages are logged)
397+loglevel notice
398+
399+# Specify the log file name. Also the empty string can be used to force
400+# Redis to log on the standard output. Note that if you use standard
401+# output for logging but daemonize, logs will be sent to /dev/null
402+logfile /var/log/redis/redis-server.log
403+
404+# To enable logging to the system logger, just set 'syslog-enabled' to yes,
405+# and optionally update the other syslog parameters to suit your needs.
406+# syslog-enabled no
407+
408+# Specify the syslog identity.
409+# syslog-ident redis
410+
411+# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
412+# syslog-facility local0
413+
414+# Set the number of databases. The default database is DB 0, you can select
415+# a different one on a per-connection basis using SELECT <dbid> where
416+# dbid is a number between 0 and 'databases'-1
417+databases 16
418+
419+# By default Redis shows an ASCII art logo only when started to log to the
420+# standard output and if the standard output is a TTY. Basically this means
421+# that normally a logo is displayed only in interactive sessions.
422+#
423+# However it is possible to force the pre-4.0 behavior and always show a
424+# ASCII art logo in startup logs by setting the following option to yes.
425+always-show-logo yes
426+
427+################################ SNAPSHOTTING ################################
428+#
429+# Save the DB on disk:
430+#
431+# save <seconds> <changes>
432+#
433+# Will save the DB if both the given number of seconds and the given
434+# number of write operations against the DB occurred.
435+#
436+# In the example below the behaviour will be to save:
437+# after 900 sec (15 min) if at least 1 key changed
438+# after 300 sec (5 min) if at least 10 keys changed
439+# after 60 sec if at least 10000 keys changed
440+#
441+# Note: you can disable saving completely by commenting out all "save" lines.
442+#
443+# It is also possible to remove all the previously configured save
444+# points by adding a save directive with a single empty string argument
445+# like in the following example:
446+#
447+# save ""
448+
449+save 900 1
450+save 300 10
451+save 60 10000
452+
453+# By default Redis will stop accepting writes if RDB snapshots are enabled
454+# (at least one save point) and the latest background save failed.
455+# This will make the user aware (in a hard way) that data is not persisting
456+# on disk properly, otherwise chances are that no one will notice and some
457+# disaster will happen.
458+#
459+# If the background saving process will start working again Redis will
460+# automatically allow writes again.
461+#
462+# However if you have setup your proper monitoring of the Redis server
463+# and persistence, you may want to disable this feature so that Redis will
464+# continue to work as usual even if there are problems with disk,
465+# permissions, and so forth.
466+stop-writes-on-bgsave-error yes
467+
468+# Compress string objects using LZF when dump .rdb databases?
469+# For default that's set to 'yes' as it's almost always a win.
470+# If you want to save some CPU in the saving child set it to 'no' but
471+# the dataset will likely be bigger if you have compressible values or keys.
472+rdbcompression yes
473+
474+# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
475+# This makes the format more resistant to corruption but there is a performance
476+# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
477+# for maximum performances.
478+#
479+# RDB files created with checksum disabled have a checksum of zero that will
480+# tell the loading code to skip the check.
481+rdbchecksum yes
482+
483+# The filename where to dump the DB
484+dbfilename dump.rdb
485+
486+# The working directory.
487+#
488+# The DB will be written inside this directory, with the filename specified
489+# above using the 'dbfilename' configuration directive.
490+#
491+# The Append Only File will also be created inside this directory.
492+#
493+# Note that you must specify a directory here, not a file name.
494+dir /var/lib/redis
495+
496+################################# REPLICATION #################################
497+
498+# Master-Replica replication. Use replicaof to make a Redis instance a copy of
499+# another Redis server. A few things to understand ASAP about Redis replication.
500+#
501+# +------------------+ +---------------+
502+# | Master | ---> | Replica |
503+# | (receive writes) | | (exact copy) |
504+# +------------------+ +---------------+
505+#
506+# 1) Redis replication is asynchronous, but you can configure a master to
507+# stop accepting writes if it appears to be not connected with at least
508+# a given number of replicas.
509+# 2) Redis replicas are able to perform a partial resynchronization with the
510+# master if the replication link is lost for a relatively small amount of
511+# time. You may want to configure the replication backlog size (see the next
512+# sections of this file) with a sensible value depending on your needs.
513+# 3) Replication is automatic and does not need user intervention. After a
514+# network partition replicas automatically try to reconnect to masters
515+# and resynchronize with them.
516+#
517+# replicaof <masterip> <masterport>
518+
519+# If the master is password protected (using the "requirepass" configuration
520+# directive below) it is possible to tell the replica to authenticate before
521+# starting the replication synchronization process, otherwise the master will
522+# refuse the replica request.
523+#
524+# masterauth <master-password>
525+
526+# When a replica loses its connection with the master, or when the replication
527+# is still in progress, the replica can act in two different ways:
528+#
529+# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
530+# still reply to client requests, possibly with out of date data, or the
531+# data set may just be empty if this is the first synchronization.
532+#
533+# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
534+# an error "SYNC with master in progress" to all the kind of commands
535+# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
536+# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
537+# COMMAND, POST, HOST: and LATENCY.
538+#
539+replica-serve-stale-data yes
540+
541+# You can configure a replica instance to accept writes or not. Writing against
542+# a replica instance may be useful to store some ephemeral data (because data
543+# written on a replica will be easily deleted after resync with the master) but
544+# may also cause problems if clients are writing to it because of a
545+# misconfiguration.
546+#
547+# Since Redis 2.6 by default replicas are read-only.
548+#
549+# Note: read only replicas are not designed to be exposed to untrusted clients
550+# on the internet. It's just a protection layer against misuse of the instance.
551+# Still a read only replica exports by default all the administrative commands
552+# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
553+# security of read only replicas using 'rename-command' to shadow all the
554+# administrative / dangerous commands.
555+replica-read-only yes
556+
557+# Replication SYNC strategy: disk or socket.
558+#
559+# -------------------------------------------------------
560+# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
561+# -------------------------------------------------------
562+#
563+# New replicas and reconnecting replicas that are not able to continue the replication
564+# process just receiving differences, need to do what is called a "full
565+# synchronization". An RDB file is transmitted from the master to the replicas.
566+# The transmission can happen in two different ways:
567+#
568+# 1) Disk-backed: The Redis master creates a new process that writes the RDB
569+# file on disk. Later the file is transferred by the parent
570+# process to the replicas incrementally.
571+# 2) Diskless: The Redis master creates a new process that directly writes the
572+# RDB file to replica sockets, without touching the disk at all.
573+#
574+# With disk-backed replication, while the RDB file is generated, more replicas
575+# can be queued and served with the RDB file as soon as the current child producing
576+# the RDB file finishes its work. With diskless replication instead once
577+# the transfer starts, new replicas arriving will be queued and a new transfer
578+# will start when the current one terminates.
579+#
580+# When diskless replication is used, the master waits a configurable amount of
581+# time (in seconds) before starting the transfer in the hope that multiple replicas
582+# will arrive and the transfer can be parallelized.
583+#
584+# With slow disks and fast (large bandwidth) networks, diskless replication
585+# works better.
586+repl-diskless-sync no
587+
588+# When diskless replication is enabled, it is possible to configure the delay
589+# the server waits in order to spawn the child that transfers the RDB via socket
590+# to the replicas.
591+#
592+# This is important since once the transfer starts, it is not possible to serve
593+# new replicas arriving, that will be queued for the next RDB transfer, so the server
594+# waits a delay in order to let more replicas arrive.
595+#
596+# The delay is specified in seconds, and by default is 5 seconds. To disable
597+# it entirely just set it to 0 seconds and the transfer will start ASAP.
598+repl-diskless-sync-delay 5
599+
600+# Replicas send PINGs to server in a predefined interval. It's possible to change
601+# this interval with the repl_ping_replica_period option. The default value is 10
602+# seconds.
603+#
604+# repl-ping-replica-period 10
605+
606+# The following option sets the replication timeout for:
607+#
608+# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
609+# 2) Master timeout from the point of view of replicas (data, pings).
610+# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
611+#
612+# It is important to make sure that this value is greater than the value
613+# specified for repl-ping-replica-period otherwise a timeout will be detected
614+# every time there is low traffic between the master and the replica.
615+#
616+# repl-timeout 60
617+
618+# Disable TCP_NODELAY on the replica socket after SYNC?
619+#
620+# If you select "yes" Redis will use a smaller number of TCP packets and
621+# less bandwidth to send data to replicas. But this can add a delay for
622+# the data to appear on the replica side, up to 40 milliseconds with
623+# Linux kernels using a default configuration.
624+#
625+# If you select "no" the delay for data to appear on the replica side will
626+# be reduced but more bandwidth will be used for replication.
627+#
628+# By default we optimize for low latency, but in very high traffic conditions
629+# or when the master and replicas are many hops away, turning this to "yes" may
630+# be a good idea.
631+repl-disable-tcp-nodelay no
632+
633+# Set the replication backlog size. The backlog is a buffer that accumulates
634+# replica data when replicas are disconnected for some time, so that when a replica
635+# wants to reconnect again, often a full resync is not needed, but a partial
636+# resync is enough, just passing the portion of data the replica missed while
637+# disconnected.
638+#
639+# The bigger the replication backlog, the longer the time the replica can be
640+# disconnected and later be able to perform a partial resynchronization.
641+#
642+# The backlog is only allocated once there is at least a replica connected.
643+#
644+# repl-backlog-size 1mb
645+
646+# After a master has no longer connected replicas for some time, the backlog
647+# will be freed. The following option configures the amount of seconds that
648+# need to elapse, starting from the time the last replica disconnected, for
649+# the backlog buffer to be freed.
650+#
651+# Note that replicas never free the backlog for timeout, since they may be
652+# promoted to masters later, and should be able to correctly "partially
653+# resynchronize" with the replicas: hence they should always accumulate backlog.
654+#
655+# A value of 0 means to never release the backlog.
656+#
657+# repl-backlog-ttl 3600
658+
659+# The replica priority is an integer number published by Redis in the INFO output.
660+# It is used by Redis Sentinel in order to select a replica to promote into a
661+# master if the master is no longer working correctly.
662+#
663+# A replica with a low priority number is considered better for promotion, so
664+# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
665+# pick the one with priority 10, that is the lowest.
666+#
667+# However a special priority of 0 marks the replica as not able to perform the
668+# role of master, so a replica with priority of 0 will never be selected by
669+# Redis Sentinel for promotion.
670+#
671+# By default the priority is 100.
672+replica-priority 100
673+
674+# It is possible for a master to stop accepting writes if there are less than
675+# N replicas connected, having a lag less or equal than M seconds.
676+#
677+# The N replicas need to be in "online" state.
678+#
679+# The lag in seconds, that must be <= the specified value, is calculated from
680+# the last ping received from the replica, that is usually sent every second.
681+#
682+# This option does not GUARANTEE that N replicas will accept the write, but
683+# will limit the window of exposure for lost writes in case not enough replicas
684+# are available, to the specified number of seconds.
685+#
686+# For example to require at least 3 replicas with a lag <= 10 seconds use:
687+#
688+# min-replicas-to-write 3
689+# min-replicas-max-lag 10
690+#
691+# Setting one or the other to 0 disables the feature.
692+#
693+# By default min-replicas-to-write is set to 0 (feature disabled) and
694+# min-replicas-max-lag is set to 10.
695+
696+# A Redis master is able to list the address and port of the attached
697+# replicas in different ways. For example the "INFO replication" section
698+# offers this information, which is used, among other tools, by
699+# Redis Sentinel in order to discover replica instances.
700+# Another place where this info is available is in the output of the
701+# "ROLE" command of a master.
702+#
703+# The listed IP and address normally reported by a replica is obtained
704+# in the following way:
705+#
706+# IP: The address is auto detected by checking the peer address
707+# of the socket used by the replica to connect with the master.
708+#
709+# Port: The port is communicated by the replica during the replication
710+# handshake, and is normally the port that the replica is using to
711+# listen for connections.
712+#
713+# However when port forwarding or Network Address Translation (NAT) is
714+# used, the replica may be actually reachable via different IP and port
715+# pairs. The following two options can be used by a replica in order to
716+# report to its master a specific set of IP and port, so that both INFO
717+# and ROLE will report those values.
718+#
719+# There is no need to use both the options if you need to override just
720+# the port or the IP address.
721+#
722+# replica-announce-ip 5.5.5.5
723+# replica-announce-port 1234
724+
725+################################## SECURITY ###################################
726+
727+# Require clients to issue AUTH <PASSWORD> before processing any other
728+# commands. This might be useful in environments in which you do not trust
729+# others with access to the host running redis-server.
730+#
731+# This should stay commented out for backward compatibility and because most
732+# people do not need auth (e.g. they run their own servers).
733+#
734+# Warning: since Redis is pretty fast an outside user can try up to
735+# 150k passwords per second against a good box. This means that you should
736+# use a very strong password otherwise it will be very easy to break.
737+#
738+requirepass mypassword
739+
740+# Command renaming.
741+#
742+# It is possible to change the name of dangerous commands in a shared
743+# environment. For instance the CONFIG command may be renamed into something
744+# hard to guess so that it will still be available for internal-use tools
745+# but not available for general clients.
746+#
747+# Example:
748+#
749+# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
750+#
751+# It is also possible to completely kill a command by renaming it into
752+# an empty string:
753+#
754+# rename-command CONFIG ""
755+#
756+# Please note that changing the name of commands that are logged into the
757+# AOF file or transmitted to replicas may cause problems.
758+
759+################################### CLIENTS ####################################
760+
761+# Set the max number of connected clients at the same time. By default
762+# this limit is set to 10000 clients, however if the Redis server is not
763+# able to configure the process file limit to allow for the specified limit
764+# the max number of allowed clients is set to the current file limit
765+# minus 32 (as Redis reserves a few file descriptors for internal uses).
766+#
767+# Once the limit is reached Redis will close all the new connections sending
768+# an error 'max number of clients reached'.
769+#
770+# maxclients 10000
771+
772+############################## MEMORY MANAGEMENT ################################
773+
774+# Set a memory usage limit to the specified amount of bytes.
775+# When the memory limit is reached Redis will try to remove keys
776+# according to the eviction policy selected (see maxmemory-policy).
777+#
778+# If Redis can't remove keys according to the policy, or if the policy is
779+# set to 'noeviction', Redis will start to reply with errors to commands
780+# that would use more memory, like SET, LPUSH, and so on, and will continue
781+# to reply to read-only commands like GET.
782+#
783+# This option is usually useful when using Redis as an LRU or LFU cache, or to
784+# set a hard memory limit for an instance (using the 'noeviction' policy).
785+#
786+# WARNING: If you have replicas attached to an instance with maxmemory on,
787+# the size of the output buffers needed to feed the replicas are subtracted
788+# from the used memory count, so that network problems / resyncs will
789+# not trigger a loop where keys are evicted, and in turn the output
790+# buffer of replicas is full with DELs of keys evicted triggering the deletion
791+# of more keys, and so forth until the database is completely emptied.
792+#
793+# In short... if you have replicas attached it is suggested that you set a lower
794+# limit for maxmemory so that there is some free RAM on the system for replica
795+# output buffers (but this is not needed if the policy is 'noeviction').
796+#
797+# maxmemory <bytes>
798+
799+# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
800+# is reached. You can select among five behaviors:
801+#
802+# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
803+# allkeys-lru -> Evict any key using approximated LRU.
804+# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
805+# allkeys-lfu -> Evict any key using approximated LFU.
806+# volatile-random -> Remove a random key among the ones with an expire set.
807+# allkeys-random -> Remove a random key, any key.
808+# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
809+# noeviction -> Don't evict anything, just return an error on write operations.
810+#
811+# LRU means Least Recently Used
812+# LFU means Least Frequently Used
813+#
814+# Both LRU, LFU and volatile-ttl are implemented using approximated
815+# randomized algorithms.
816+#
817+# Note: with any of the above policies, Redis will return an error on write
818+# operations, when there are no suitable keys for eviction.
819+#
820+# At the date of writing these commands are: set setnx setex append
821+# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
822+# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
823+# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
824+# getset mset msetnx exec sort
825+#
826+# The default is:
827+#
828+# maxmemory-policy noeviction
829+
830+# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
831+# algorithms (in order to save memory), so you can tune it for speed or
832+# accuracy. For default Redis will check five keys and pick the one that was
833+# used less recently, you can change the sample size using the following
834+# configuration directive.
835+#
836+# The default of 5 produces good enough results. 10 Approximates very closely
837+# true LRU but costs more CPU. 3 is faster but not very accurate.
838+#
839+# maxmemory-samples 5
840+
841+# Starting from Redis 5, by default a replica will ignore its maxmemory setting
842+# (unless it is promoted to master after a failover or manually). It means
843+# that the eviction of keys will be just handled by the master, sending the
844+# DEL commands to the replica as keys evict in the master side.
845+#
846+# This behavior ensures that masters and replicas stay consistent, and is usually
847+# what you want, however if your replica is writable, or you want the replica to have
848+# a different memory setting, and you are sure all the writes performed to the
849+# replica are idempotent, then you may change this default (but be sure to understand
850+# what you are doing).
851+#
852+# Note that since the replica by default does not evict, it may end using more
853+# memory than the one set via maxmemory (there are certain buffers that may
854+# be larger on the replica, or data structures may sometimes take more memory and so
855+# forth). So make sure you monitor your replicas and make sure they have enough
856+# memory to never hit a real out-of-memory condition before the master hits
857+# the configured maxmemory setting.
858+#
859+# replica-ignore-maxmemory yes
860+
861+############################# LAZY FREEING ####################################
862+
863+# Redis has two primitives to delete keys. One is called DEL and is a blocking
864+# deletion of the object. It means that the server stops processing new commands
865+# in order to reclaim all the memory associated with an object in a synchronous
866+# way. If the key deleted is associated with a small object, the time needed
867+# in order to execute the DEL command is very small and comparable to most other
868+# O(1) or O(log_N) commands in Redis. However if the key is associated with an
869+# aggregated value containing millions of elements, the server can block for
870+# a long time (even seconds) in order to complete the operation.
871+#
872+# For the above reasons Redis also offers non blocking deletion primitives
873+# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
874+# FLUSHDB commands, in order to reclaim memory in background. Those commands
875+# are executed in constant time. Another thread will incrementally free the
876+# object in the background as fast as possible.
877+#
878+# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
879+# It's up to the design of the application to understand when it is a good
880+# idea to use one or the other. However the Redis server sometimes has to
881+# delete keys or flush the whole database as a side effect of other operations.
882+# Specifically Redis deletes objects independently of a user call in the
883+# following scenarios:
884+#
885+# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
886+# in order to make room for new data, without going over the specified
887+# memory limit.
888+# 2) Because of expire: when a key with an associated time to live (see the
889+# EXPIRE command) must be deleted from memory.
890+# 3) Because of a side effect of a command that stores data on a key that may
891+# already exist. For example the RENAME command may delete the old key
892+# content when it is replaced with another one. Similarly SUNIONSTORE
893+# or SORT with STORE option may delete existing keys. The SET command
894+# itself removes any old content of the specified key in order to replace
895+# it with the specified string.
896+# 4) During replication, when a replica performs a full resynchronization with
897+# its master, the content of the whole database is removed in order to
898+# load the RDB file just transferred.
899+#
900+# In all the above cases the default is to delete objects in a blocking way,
901+# like if DEL was called. However you can configure each case specifically
902+# in order to instead release memory in a non-blocking way like if UNLINK
903+# was called, using the following configuration directives:
904+
905+lazyfree-lazy-eviction no
906+lazyfree-lazy-expire no
907+lazyfree-lazy-server-del no
908+replica-lazy-flush no
909+
910+############################## APPEND ONLY MODE ###############################
911+
912+# By default Redis asynchronously dumps the dataset on disk. This mode is
913+# good enough in many applications, but an issue with the Redis process or
914+# a power outage may result into a few minutes of writes lost (depending on
915+# the configured save points).
916+#
917+# The Append Only File is an alternative persistence mode that provides
918+# much better durability. For instance using the default data fsync policy
919+# (see later in the config file) Redis can lose just one second of writes in a
920+# dramatic event like a server power outage, or a single write if something
921+# wrong with the Redis process itself happens, but the operating system is
922+# still running correctly.
923+#
924+# AOF and RDB persistence can be enabled at the same time without problems.
925+# If the AOF is enabled on startup Redis will load the AOF, that is the file
926+# with the better durability guarantees.
927+#
928+# Please check http://redis.io/topics/persistence for more information.
929+
930+appendonly no
931+
932+# The name of the append only file (default: "appendonly.aof")
933+
934+appendfilename "appendonly.aof"
935+
936+# The fsync() call tells the Operating System to actually write data on disk
937+# instead of waiting for more data in the output buffer. Some OS will really flush
938+# data on disk, some other OS will just try to do it ASAP.
939+#
940+# Redis supports three different modes:
941+#
942+# no: don't fsync, just let the OS flush the data when it wants. Faster.
943+# always: fsync after every write to the append only log. Slow, Safest.
944+# everysec: fsync only one time every second. Compromise.
945+#
946+# The default is "everysec", as that's usually the right compromise between
947+# speed and data safety. It's up to you to understand if you can relax this to
948+# "no" that will let the operating system flush the output buffer when
949+# it wants, for better performances (but if you can live with the idea of
950+# some data loss consider the default persistence mode that's snapshotting),
951+# or on the contrary, use "always" that's very slow but a bit safer than
952+# everysec.
953+#
954+# More details please check the following article:
955+# http://antirez.com/post/redis-persistence-demystified.html
956+#
957+# If unsure, use "everysec".
958+
959+# appendfsync always
960+appendfsync everysec
961+# appendfsync no
962+
963+# When the AOF fsync policy is set to always or everysec, and a background
964+# saving process (a background save or AOF log background rewriting) is
965+# performing a lot of I/O against the disk, in some Linux configurations
966+# Redis may block too long on the fsync() call. Note that there is no fix for
967+# this currently, as even performing fsync in a different thread will block
968+# our synchronous write(2) call.
969+#
970+# In order to mitigate this problem it's possible to use the following option
971+# that will prevent fsync() from being called in the main process while a
972+# BGSAVE or BGREWRITEAOF is in progress.
973+#
974+# This means that while another child is saving, the durability of Redis is
975+# the same as "appendfsync none". In practical terms, this means that it is
976+# possible to lose up to 30 seconds of log in the worst scenario (with the
977+# default Linux settings).
978+#
979+# If you have latency problems turn this to "yes". Otherwise leave it as
980+# "no" that is the safest pick from the point of view of durability.
981+
982+no-appendfsync-on-rewrite no
983+
984+# Automatic rewrite of the append only file.
985+# Redis is able to automatically rewrite the log file implicitly calling
986+# BGREWRITEAOF when the AOF log size grows by the specified percentage.
987+#
988+# This is how it works: Redis remembers the size of the AOF file after the
989+# latest rewrite (if no rewrite has happened since the restart, the size of
990+# the AOF at startup is used).
991+#
992+# This base size is compared to the current size. If the current size is
993+# bigger than the specified percentage, the rewrite is triggered. Also
994+# you need to specify a minimal size for the AOF file to be rewritten, this
995+# is useful to avoid rewriting the AOF file even if the percentage increase
996+# is reached but it is still pretty small.
997+#
998+# Specify a percentage of zero in order to disable the automatic AOF
999+# rewrite feature.
1000+
1001+auto-aof-rewrite-percentage 100
1002+auto-aof-rewrite-min-size 64mb
1003+
1004+# An AOF file may be found to be truncated at the end during the Redis
1005+# startup process, when the AOF data gets loaded back into memory.
1006+# This may happen when the system where Redis is running
1007+# crashes, especially when an ext4 filesystem is mounted without the
1008+# data=ordered option (however this can't happen when Redis itself
1009+# crashes or aborts but the operating system still works correctly).
1010+#
1011+# Redis can either exit with an error when this happens, or load as much
1012+# data as possible (the default now) and start if the AOF file is found
1013+# to be truncated at the end. The following option controls this behavior.
1014+#
1015+# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
1016+# the Redis server starts emitting a log to inform the user of the event.
1017+# Otherwise if the option is set to no, the server aborts with an error
1018+# and refuses to start. When the option is set to no, the user requires
1019+# to fix the AOF file using the "redis-check-aof" utility before to restart
1020+# the server.
1021+#
1022+# Note that if the AOF file will be found to be corrupted in the middle
1023+# the server will still exit with an error. This option only applies when
1024+# Redis will try to read more data from the AOF file but not enough bytes
1025+# will be found.
1026+aof-load-truncated yes
1027+
1028+# When rewriting the AOF file, Redis is able to use an RDB preamble in the
1029+# AOF file for faster rewrites and recoveries. When this option is turned
1030+# on the rewritten AOF file is composed of two different stanzas:
1031+#
1032+# [RDB file][AOF tail]
1033+#
1034+# When loading Redis recognizes that the AOF file starts with the "REDIS"
1035+# string and loads the prefixed RDB file, and continues loading the AOF
1036+# tail.
1037+aof-use-rdb-preamble yes
1038+
1039+################################ LUA SCRIPTING ###############################
1040+
1041+# Max execution time of a Lua script in milliseconds.
1042+#
1043+# If the maximum execution time is reached Redis will log that a script is
1044+# still in execution after the maximum allowed time and will start to
1045+# reply to queries with an error.
1046+#
1047+# When a long running script exceeds the maximum execution time only the
1048+# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
1049+# used to stop a script that did not yet called write commands. The second
1050+# is the only way to shut down the server in the case a write command was
1051+# already issued by the script but the user doesn't want to wait for the natural
1052+# termination of the script.
1053+#
1054+# Set it to 0 or a negative value for unlimited execution without warnings.
1055+lua-time-limit 5000
1056+
1057+################################ REDIS CLUSTER ###############################
1058+
1059+# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
1060+# started as cluster nodes can. In order to start a Redis instance as a
1061+# cluster node enable the cluster support uncommenting the following:
1062+#
1063+# cluster-enabled yes
1064+
1065+# Every cluster node has a cluster configuration file. This file is not
1066+# intended to be edited by hand. It is created and updated by Redis nodes.
1067+# Every Redis Cluster node requires a different cluster configuration file.
1068+# Make sure that instances running in the same system do not have
1069+# overlapping cluster configuration file names.
1070+#
1071+# cluster-config-file nodes-6379.conf
1072+
1073+# Cluster node timeout is the amount of milliseconds a node must be unreachable
1074+# for it to be considered in failure state.
1075+# Most other internal time limits are multiple of the node timeout.
1076+#
1077+# cluster-node-timeout 15000
1078+
1079+# A replica of a failing master will avoid to start a failover if its data
1080+# looks too old.
1081+#
1082+# There is no simple way for a replica to actually have an exact measure of
1083+# its "data age", so the following two checks are performed:
1084+#
1085+# 1) If there are multiple replicas able to failover, they exchange messages
1086+# in order to try to give an advantage to the replica with the best
1087+# replication offset (more data from the master processed).
1088+# Replicas will try to get their rank by offset, and apply to the start
1089+# of the failover a delay proportional to their rank.
1090+#
1091+# 2) Every single replica computes the time of the last interaction with
1092+# its master. This can be the last ping or command received (if the master
1093+# is still in the "connected" state), or the time that elapsed since the
1094+# disconnection with the master (if the replication link is currently down).
1095+# If the last interaction is too old, the replica will not try to failover
1096+# at all.
1097+#
1098+# The point "2" can be tuned by user. Specifically a replica will not perform
1099+# the failover if, since the last interaction with the master, the time
1100+# elapsed is greater than:
1101+#
1102+# (node-timeout * replica-validity-factor) + repl-ping-replica-period
1103+#
1104+# So for example if node-timeout is 30 seconds, and the replica-validity-factor
1105+# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
1106+# replica will not try to failover if it was not able to talk with the master
1107+# for longer than 310 seconds.
1108+#
1109+# A large replica-validity-factor may allow replicas with too old data to failover
1110+# a master, while a too small value may prevent the cluster from being able to
1111+# elect a replica at all.
1112+#
1113+# For maximum availability, it is possible to set the replica-validity-factor
1114+# to a value of 0, which means, that replicas will always try to failover the
1115+# master regardless of the last time they interacted with the master.
1116+# (However they'll always try to apply a delay proportional to their
1117+# offset rank).
1118+#
1119+# Zero is the only value able to guarantee that when all the partitions heal
1120+# the cluster will always be able to continue.
1121+#
1122+# cluster-replica-validity-factor 10
1123+
1124+# Cluster replicas are able to migrate to orphaned masters, that are masters
1125+# that are left without working replicas. This improves the cluster ability
1126+# to resist to failures as otherwise an orphaned master can't be failed over
1127+# in case of failure if it has no working replicas.
1128+#
1129+# Replicas migrate to orphaned masters only if there are still at least a
1130+# given number of other working replicas for their old master. This number
1131+# is the "migration barrier". A migration barrier of 1 means that a replica
1132+# will migrate only if there is at least 1 other working replica for its master
1133+# and so forth. It usually reflects the number of replicas you want for every
1134+# master in your cluster.
1135+#
1136+# Default is 1 (replicas migrate only if their masters remain with at least
1137+# one replica). To disable migration just set it to a very large value.
1138+# A value of 0 can be set but is useful only for debugging and dangerous
1139+# in production.
1140+#
1141+# cluster-migration-barrier 1
1142+
1143+# By default Redis Cluster nodes stop accepting queries if they detect there
1144+# is at least an hash slot uncovered (no available node is serving it).
1145+# This way if the cluster is partially down (for example a range of hash slots
1146+# are no longer covered) all the cluster becomes, eventually, unavailable.
1147+# It automatically returns available as soon as all the slots are covered again.
1148+#
1149+# However sometimes you want the subset of the cluster which is working,
1150+# to continue to accept queries for the part of the key space that is still
1151+# covered. In order to do so, just set the cluster-require-full-coverage
1152+# option to no.
1153+#
1154+# cluster-require-full-coverage yes
1155+
1156+# This option, when set to yes, prevents replicas from trying to failover its
1157+# master during master failures. However the master can still perform a
1158+# manual failover, if forced to do so.
1159+#
1160+# This is useful in different scenarios, especially in the case of multiple
1161+# data center operations, where we want one side to never be promoted if not
1162+# in the case of a total DC failure.
1163+#
1164+# cluster-replica-no-failover no
1165+
1166+# In order to setup your cluster make sure to read the documentation
1167+# available at http://redis.io web site.
1168+
1169+########################## CLUSTER DOCKER/NAT support ########################
1170+
1171+# In certain deployments, Redis Cluster nodes address discovery fails, because
1172+# addresses are NAT-ted or because ports are forwarded (the typical case is
1173+# Docker and other containers).
1174+#
1175+# In order to make Redis Cluster working in such environments, a static
1176+# configuration where each node knows its public address is needed. The
1177+# following two options are used for this scope, and are:
1178+#
1179+# * cluster-announce-ip
1180+# * cluster-announce-port
1181+# * cluster-announce-bus-port
1182+#
1183+# Each instruct the node about its address, client port, and cluster message
1184+# bus port. The information is then published in the header of the bus packets
1185+# so that other nodes will be able to correctly map the address of the node
1186+# publishing the information.
1187+#
1188+# If the above options are not used, the normal Redis Cluster auto-detection
1189+# will be used instead.
1190+#
1191+# Note that when remapped, the bus port may not be at the fixed offset of
1192+# clients port + 10000, so you can specify any port and bus-port depending
1193+# on how they get remapped. If the bus-port is not set, a fixed offset of
1194+# 10000 will be used as usually.
1195+#
1196+# Example:
1197+#
1198+# cluster-announce-ip 10.1.1.5
1199+# cluster-announce-port 6379
1200+# cluster-announce-bus-port 6380
1201+
1202+################################## SLOW LOG ###################################
1203+
1204+# The Redis Slow Log is a system to log queries that exceeded a specified
1205+# execution time. The execution time does not include the I/O operations
1206+# like talking with the client, sending the reply and so forth,
1207+# but just the time needed to actually execute the command (this is the only
1208+# stage of command execution where the thread is blocked and can not serve
1209+# other requests in the meantime).
1210+#
1211+# You can configure the slow log with two parameters: one tells Redis
1212+# what is the execution time, in microseconds, to exceed in order for the
1213+# command to get logged, and the other parameter is the length of the
1214+# slow log. When a new command is logged the oldest one is removed from the
1215+# queue of logged commands.
1216+
1217+# The following time is expressed in microseconds, so 1000000 is equivalent
1218+# to one second. Note that a negative number disables the slow log, while
1219+# a value of zero forces the logging of every command.
1220+slowlog-log-slower-than 10000
1221+
1222+# There is no limit to this length. Just be aware that it will consume memory.
1223+# You can reclaim memory used by the slow log with SLOWLOG RESET.
1224+slowlog-max-len 128
1225+
1226+################################ LATENCY MONITOR ##############################
1227+
1228+# The Redis latency monitoring subsystem samples different operations
1229+# at runtime in order to collect data related to possible sources of
1230+# latency of a Redis instance.
1231+#
1232+# Via the LATENCY command this information is available to the user that can
1233+# print graphs and obtain reports.
1234+#
1235+# The system only logs operations that were performed in a time equal or
1236+# greater than the amount of milliseconds specified via the
1237+# latency-monitor-threshold configuration directive. When its value is set
1238+# to zero, the latency monitor is turned off.
1239+#
1240+# By default latency monitoring is disabled since it is mostly not needed
1241+# if you don't have latency issues, and collecting data has a performance
1242+# impact that, while very small, can be measured under heavy load. Latency
1243+# monitoring can easily be enabled at runtime using the command
1244+# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
1245+latency-monitor-threshold 0
1246+
1247+############################# EVENT NOTIFICATION ##############################
1248+
1249+# Redis can notify Pub/Sub clients about events happening in the key space.
1250+# This feature is documented at http://redis.io/topics/notifications
1251+#
1252+# For instance if keyspace events notification is enabled, and a client
1253+# performs a DEL operation on key "foo" stored in the Database 0, two
1254+# messages will be published via Pub/Sub:
1255+#
1256+# PUBLISH __keyspace@0__:foo del
1257+# PUBLISH __keyevent@0__:del foo
1258+#
1259+# It is possible to select the events that Redis will notify among a set
1260+# of classes. Every class is identified by a single character:
1261+#
1262+# K Keyspace events, published with __keyspace@<db>__ prefix.
1263+# E Keyevent events, published with __keyevent@<db>__ prefix.
1264+# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
1265+# $ String commands
1266+# l List commands
1267+# s Set commands
1268+# h Hash commands
1269+# z Sorted set commands
1270+# x Expired events (events generated every time a key expires)
1271+# e Evicted events (events generated when a key is evicted for maxmemory)
1272+# A Alias for g$lshzxe, so that the "AKE" string means all the events.
1273+#
1274+# The "notify-keyspace-events" takes as argument a string that is composed
1275+# of zero or multiple characters. The empty string means that notifications
1276+# are disabled.
1277+#
1278+# Example: to enable list and generic events, from the point of view of the
1279+# event name, use:
1280+#
1281+# notify-keyspace-events Elg
1282+#
1283+# Example 2: to get the stream of the expired keys subscribing to channel
1284+# name __keyevent@0__:expired use:
1285+#
1286+# notify-keyspace-events Ex
1287+#
1288+# By default all notifications are disabled because most users don't need
1289+# this feature and the feature has some overhead. Note that if you don't
1290+# specify at least one of K or E, no events will be delivered.
1291+notify-keyspace-events ""
1292+
1293+############################### ADVANCED CONFIG ###############################
1294+
1295+# Hashes are encoded using a memory efficient data structure when they have a
1296+# small number of entries, and the biggest entry does not exceed a given
1297+# threshold. These thresholds can be configured using the following directives.
1298+hash-max-ziplist-entries 512
1299+hash-max-ziplist-value 64
1300+
1301+# Lists are also encoded in a special way to save a lot of space.
1302+# The number of entries allowed per internal list node can be specified
1303+# as a fixed maximum size or a maximum number of elements.
1304+# For a fixed maximum size, use -5 through -1, meaning:
1305+# -5: max size: 64 Kb <-- not recommended for normal workloads
1306+# -4: max size: 32 Kb <-- not recommended
1307+# -3: max size: 16 Kb <-- probably not recommended
1308+# -2: max size: 8 Kb <-- good
1309+# -1: max size: 4 Kb <-- good
1310+# Positive numbers mean store up to _exactly_ that number of elements
1311+# per list node.
1312+# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
1313+# but if your use case is unique, adjust the settings as necessary.
1314+list-max-ziplist-size -2
1315+
1316+# Lists may also be compressed.
1317+# Compress depth is the number of quicklist ziplist nodes from *each* side of
1318+# the list to *exclude* from compression. The head and tail of the list
1319+# are always uncompressed for fast push/pop operations. Settings are:
1320+# 0: disable all list compression
1321+# 1: depth 1 means "don't start compressing until after 1 node into the list,
1322+# going from either the head or tail"
1323+# So: [head]->node->node->...->node->[tail]
1324+# [head], [tail] will always be uncompressed; inner nodes will compress.
1325+# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
1326+# 2 here means: don't compress head or head->next or tail->prev or tail,
1327+# but compress all nodes between them.
1328+# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
1329+# etc.
1330+list-compress-depth 0
1331+
1332+# Sets have a special encoding in just one case: when a set is composed
1333+# of just strings that happen to be integers in radix 10 in the range
1334+# of 64 bit signed integers.
1335+# The following configuration setting sets the limit in the size of the
1336+# set in order to use this special memory saving encoding.
1337+set-max-intset-entries 512
1338+
1339+# Similarly to hashes and lists, sorted sets are also specially encoded in
1340+# order to save a lot of space. This encoding is only used when the length and
1341+# elements of a sorted set are below the following limits:
1342+zset-max-ziplist-entries 128
1343+zset-max-ziplist-value 64
1344+
1345+# HyperLogLog sparse representation bytes limit. The limit includes the
1346+# 16 bytes header. When a HyperLogLog using the sparse representation crosses
1347+# this limit, it is converted into the dense representation.
1348+#
1349+# A value greater than 16000 is totally useless, since at that point the
1350+# dense representation is more memory efficient.
1351+#
1352+# The suggested value is ~ 3000 in order to have the benefits of
1353+# the space efficient encoding without slowing down PFADD too much,
1354+# which is O(N) with the sparse encoding. The value can be raised to
1355+# ~ 10000 when CPU is not a concern, but space is, and the data set is
1356+# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
1357+hll-sparse-max-bytes 3000
1358+
1359+# Streams macro node max size / items. The stream data structure is a radix
1360+# tree of big nodes that encode multiple items inside. Using this configuration
1361+# it is possible to configure how big a single node can be in bytes, and the
1362+# maximum number of items it may contain before switching to a new node when
1363+# appending new stream entries. If any of the following settings are set to
1364+# zero, the limit is ignored, so for instance it is possible to set just a
1365+# max entries limit by setting max-bytes to 0 and max-entries to the desired
1366+# value.
1367+stream-node-max-bytes 4096
1368+stream-node-max-entries 100
1369+
1370+# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
1371+# order to help rehashing the main Redis hash table (the one mapping top-level
1372+# keys to values). The hash table implementation Redis uses (see dict.c)
1373+# performs a lazy rehashing: the more operations you run against a hash table
1374+# that is rehashing, the more rehashing "steps" are performed, so if the
1375+# server is idle the rehashing is never complete and some more memory is used
1376+# by the hash table.
1377+#
1378+# The default is to use this millisecond 10 times every second in order to
1379+# actively rehash the main dictionaries, freeing memory when possible.
1380+#
1381+# If unsure:
1382+# use "activerehashing no" if you have hard latency requirements and it is
1383+# not a good thing in your environment that Redis can reply from time to time
1384+# to queries with a 2 millisecond delay.
1385+#
1386+# use "activerehashing yes" if you don't have such hard requirements but
1387+# want to free memory asap when possible.
1388+activerehashing yes
1389+
1390+# The client output buffer limits can be used to force disconnection of clients
1391+# that are not reading data from the server fast enough for some reason (a
1392+# common reason is that a Pub/Sub client can't consume messages as fast as the
1393+# publisher can produce them).
1394+#
1395+# The limit can be set differently for the three different classes of clients:
1396+#
1397+# normal -> normal clients including MONITOR clients
1398+# replica -> replica clients
1399+# pubsub -> clients subscribed to at least one pubsub channel or pattern
1400+#
1401+# The syntax of every client-output-buffer-limit directive is the following:
1402+#
1403+# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
1404+#
1405+# A client is immediately disconnected once the hard limit is reached, or if
1406+# the soft limit is reached and remains reached for the specified number of
1407+# seconds (continuously).
1408+# So for instance if the hard limit is 32 megabytes and the soft limit is
1409+# 16 megabytes / 10 seconds, the client will get disconnected immediately
1410+# if the size of the output buffers reaches 32 megabytes, but will also get
1411+# disconnected if the client reaches 16 megabytes and continuously overcomes
1412+# the limit for 10 seconds.
1413+#
1414+# By default normal clients are not limited because they don't receive data
1415+# without asking (in a push way), but just after a request, so only
1416+# asynchronous clients may create a scenario where data is requested faster
1417+# than it can be read.
1418+#
1419+# Instead there is a default limit for pubsub and replica clients, since
1420+# subscribers and replicas receive data in a push fashion.
1421+#
1422+# Both the hard and the soft limit can be disabled by setting them to zero.
1423+client-output-buffer-limit normal 0 0 0
1424+client-output-buffer-limit replica 256mb 64mb 60
1425+client-output-buffer-limit pubsub 32mb 8mb 60
1426+
1427+# Client query buffers accumulate new commands. They are limited to a fixed
1428+# amount by default in order to prevent a protocol desynchronization (for
1429+# instance due to a bug in the client) from leading to unbounded memory usage in
1430+# the query buffer. However you can configure it here if you have very special
1431+# needs, such as huge multi/exec requests or the like.
1432+#
1433+# client-query-buffer-limit 1gb
1434+
1435+# In the Redis protocol, bulk requests, that is, elements representing single
1436+# strings, are normally limited to 512 mb. However you can change this limit
1437+# here.
1438+#
1439+# proto-max-bulk-len 512mb
1440+
1441+# Redis calls an internal function to perform many background tasks, like
1442+# closing connections of clients that have timed out, purging expired keys that are
1443+# never requested, and so forth.
1444+#
1445+# Not all tasks are performed with the same frequency, but Redis checks for
1446+# tasks to perform according to the specified "hz" value.
1447+#
1448+# By default "hz" is set to 10. Raising the value will use more CPU when
1449+# Redis is idle, but at the same time will make Redis more responsive when
1450+# there are many keys expiring at the same time, and timeouts may be
1451+# handled with more precision.
1452+#
1453+# The range is between 1 and 500, however a value over 100 is usually not
1454+# a good idea. Most users should use the default of 10 and raise this up to
1455+# 100 only in environments where very low latency is required.
1456+hz 10
1457+
1458+# Normally it is useful to have an HZ value which is proportional to the
1459+# number of clients connected. This helps, for instance, to avoid processing
1460+# too many clients for each background task invocation, which in turn avoids
1461+# latency spikes.
1462+#
1463+# Since the default HZ value is conservatively set to 10, Redis
1464+# offers, and enables by default, the ability to use an adaptive HZ value
1465+# which will temporarily rise when there are many connected clients.
1466+#
1467+# When dynamic HZ is enabled, the actual configured HZ will be used
1468+# as a baseline, but multiples of the configured HZ value will actually be
1469+# used as needed once more clients are connected. In this way an idle
1470+# instance will use very little CPU time while a busy instance will be
1471+# more responsive.
1472+dynamic-hz yes
1473+
1474+# When a child rewrites the AOF file, if the following option is enabled
1475+# the file will be fsync-ed every 32 MB of data generated. This is useful
1476+# in order to commit the file to the disk more incrementally and avoid
1477+# big latency spikes.
1478+aof-rewrite-incremental-fsync yes
1479+
1480+# When redis saves RDB file, if the following option is enabled
1481+# the file will be fsync-ed every 32 MB of data generated. This is useful
1482+# in order to commit the file to the disk more incrementally and avoid
1483+# big latency spikes.
1484+rdb-save-incremental-fsync yes
1485+
1486+# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
1487+# idea to start with the default settings and only change them after investigating
1488+# how to improve performance and how the keys' LFU changes over time, which
1489+# can be inspected via the OBJECT FREQ command.
1490+#
1491+# There are two tunable parameters in the Redis LFU implementation: the
1492+# counter logarithm factor and the counter decay time. It is important to
1493+# understand what the two parameters mean before changing them.
1494+#
1495+# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
1496+# uses a probabilistic increment with logarithmic behavior. Given the value
1497+# of the old counter, when a key is accessed, the counter is incremented in
1498+# this way:
1499+#
1500+# 1. A random number R between 0 and 1 is extracted.
1501+# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
1502+# 3. The counter is incremented only if R < P.
1503+#
1504+# The default lfu-log-factor is 10. This is a table of how the frequency
1505+# counter changes with a different number of accesses with different
1506+# logarithmic factors:
1507+#
1508+# +--------+------------+------------+------------+------------+------------+
1509+# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
1510+# +--------+------------+------------+------------+------------+------------+
1511+# | 0 | 104 | 255 | 255 | 255 | 255 |
1512+# +--------+------------+------------+------------+------------+------------+
1513+# | 1 | 18 | 49 | 255 | 255 | 255 |
1514+# +--------+------------+------------+------------+------------+------------+
1515+# | 10 | 10 | 18 | 142 | 255 | 255 |
1516+# +--------+------------+------------+------------+------------+------------+
1517+# | 100 | 8 | 11 | 49 | 143 | 255 |
1518+# +--------+------------+------------+------------+------------+------------+
1519+#
1520+# NOTE: The above table was obtained by running the following commands:
1521+#
1522+# redis-benchmark -n 1000000 incr foo
1523+# redis-cli object freq foo
1524+#
1525+# NOTE 2: The counter initial value is 5 in order to give new objects a chance
1526+# to accumulate hits.
1527+#
1528+# The counter decay time is the time, in minutes, that must elapse in order
1529+# for the key counter to be divided by two (or decremented if it has a value
1530+# <= 10).
1531+#
1532+# The default value for the lfu-decay-time is 1. A special value of 0 means to
1533+# decay the counter every time it happens to be scanned.
1534+#
1535+# lfu-log-factor 10
1536+# lfu-decay-time 1
1537+
1538+########################### ACTIVE DEFRAGMENTATION #######################
1539+#
1540+# WARNING: THIS FEATURE IS EXPERIMENTAL. However it was stress tested
1541+# even in production and manually tested by multiple engineers for some
1542+# time.
1543+#
1544+# What is active defragmentation?
1545+# -------------------------------
1546+#
1547+# Active (online) defragmentation allows a Redis server to compact the
1548+# spaces left between small allocations and deallocations of data in memory,
1549+# thus allowing it to reclaim memory.
1550+#
1551+# Fragmentation is a natural process that happens with every allocator (but
1552+# less so with Jemalloc, fortunately) and certain workloads. Normally a server
1553+# restart is needed in order to lower the fragmentation, or at least to flush
1554+# away all the data and create it again. However thanks to this feature
1555+# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
1556+# in an "hot" way, while the server is running.
1557+#
1558+# Basically when the fragmentation is over a certain level (see the
1559+# configuration options below) Redis will start to create new copies of the
1560+# values in contiguous memory regions by exploiting certain specific Jemalloc
1561+# features (in order to understand if an allocation is causing fragmentation
1562+# and to allocate it in a better place), and at the same time, will release the
1563+# old copies of the data. This process, repeated incrementally for all the keys,
1564+# will cause the fragmentation to drop back to normal values.
1565+#
1566+# Important things to understand:
1567+#
1568+# 1. This feature is disabled by default, and only works if you compiled Redis
1569+# to use the copy of Jemalloc we ship with the source code of Redis.
1570+# This is the default with Linux builds.
1571+#
1572+# 2. You never need to enable this feature if you don't have fragmentation
1573+# issues.
1574+#
1575+# 3. Once you experience fragmentation, you can enable this feature when
1576+# needed with the command "CONFIG SET activedefrag yes".
1577+#
1578+# The configuration parameters are able to fine tune the behavior of the
1579+# defragmentation process. If you are not sure about what they mean it is
1580+# a good idea to leave the defaults untouched.
1581+
1582+# Enable active defragmentation
1583+# activedefrag yes
1584+
1585+# Minimum amount of fragmentation waste to start active defrag
1586+# active-defrag-ignore-bytes 100mb
1587+
1588+# Minimum percentage of fragmentation to start active defrag
1589+# active-defrag-threshold-lower 10
1590+
1591+# Maximum percentage of fragmentation at which we use maximum effort
1592+# active-defrag-threshold-upper 100
1593+
1594+# Minimal effort for defrag in CPU percentage
1595+# active-defrag-cycle-min 5
1596+
1597+# Maximal effort for defrag in CPU percentage
1598+# active-defrag-cycle-max 75
1599+
1600+# Maximum number of set/hash/zset/list fields that will be processed from
1601+# the main dictionary scan
1602+# active-defrag-max-scan-fields 1000
1603+
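
The configuration file added above is intended to be mounted into the container; the Kubernetes example further below mounts it at `/etc/redis/redis.conf`. A minimal sketch of doing the same with plain Docker, assuming the command is run from the repository root and that the image picks up a configuration file at that path:

```
$ docker run -d --name redis \
    -e REDIS_PASSWORD=mypassword \
    -v $(pwd)/examples/config/redis.conf:/etc/redis/redis.conf:ro \
    -p 6379:6379 \
    squeakywheel/redis:edge
```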
1604diff --git a/examples/docker-compose.yml b/examples/docker-compose.yml
1605new file mode 100644
1606index 0000000..0b97104
1607--- /dev/null
1608+++ b/examples/docker-compose.yml
1609@@ -0,0 +1,10 @@
1610+version: '2'
1611+
1612+services:
1613+ redis:
1614+ image: squeakywheel/redis:edge
1615+ network_mode: "host"
1616+ ports:
1617+      - 6379:6379
1618+ environment:
1619+ - REDIS_PASSWORD=mypassword
1620diff --git a/examples/microk8s-deployments.yml b/examples/microk8s-deployments.yml
1621new file mode 100644
1622index 0000000..1e481eb
1623--- /dev/null
1624+++ b/examples/microk8s-deployments.yml
1625@@ -0,0 +1,48 @@
1626+---
1627+apiVersion: apps/v1
1628+kind: Deployment
1629+metadata:
1630+ name: redis-deployment
1631+spec:
1632+ replicas: 1
1633+ selector:
1634+ matchLabels:
1635+ app: redis
1636+ template:
1637+ metadata:
1638+ labels:
1639+ app: redis
1640+ spec:
1641+ containers:
1642+ - name: redis
1643+ image: squeakywheel/redis:edge
1644+ volumeMounts:
1645+ - name: redis-config-volume
1646+ mountPath: /etc/redis/redis.conf
1647+ subPath: redis.conf
1648+ ports:
1649+ - containerPort: 6379
1650+ name: redis
1651+ protocol: TCP
1652+ volumes:
1653+ - name: redis-config-volume
1654+ configMap:
1655+ name: redis-config
1656+ items:
1657+ - key: redis
1658+ path: redis.conf
1659+---
1660+apiVersion: v1
1661+kind: Service
1662+metadata:
1663+ name: redis-service
1664+spec:
1665+ type: NodePort
1666+ selector:
1667+ app: redis
1668+ ports:
1669+ - protocol: TCP
1670+ port: 6379
1671+ targetPort: 6379
1672+ nodePort: 30073
1673+ name: redis
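
A minimal sketch of deploying the manifests above on MicroK8s, assuming a running cluster. The ConfigMap name (`redis-config`) and key (`redis`) come from the Deployment's volume definition, and the NodePort (30073) from the Service; creating the ConfigMap from the example configuration file is only one possible approach:

```
$ microk8s kubectl create configmap redis-config \
    --from-file=redis=examples/config/redis.conf
$ microk8s kubectl apply -f examples/microk8s-deployments.yml
$ microk8s kubectl get pods -l app=redis
$ redis-cli -h localhost -p 30073 ping
```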
