
Merge branch 'unstable'
Grokzen committed Jul 22, 2018
2 parents 038ff1e + 8fe8b71 commit 187838a
Showing 25 changed files with 283 additions and 73 deletions.
13 changes: 7 additions & 6 deletions .travis.yml
@@ -13,22 +13,23 @@ install:
- "if [[ $REDIS_VERSION == '3.0' ]]; then REDIS_VERSION=3.0 make redis-install; fi"
- "if [[ $REDIS_VERSION == '3.2' ]]; then REDIS_VERSION=3.2 make redis-install; fi"
- "if [[ $REDIS_VERSION == '4.0' ]]; then REDIS_VERSION=4.0 make redis-install; fi"
- "if [[ $REDIS_VERSION == '5.0' ]]; then REDIS_VERSION=5.0 make redis-install; fi"
- pip install -r dev-requirements.txt
- pip install -e .
- "if [[ $HIREDIS == '1' ]]; then pip install hiredis; fi"
env:
# Redis 3.0
# Redis 3.0 & HIREDIS
- HIREDIS=0 REDIS_VERSION=3.0
# Redis 3.0 and HIREDIS
- HIREDIS=1 REDIS_VERSION=3.0
# Redis 3.2
# Redis 3.2 & HIREDIS
- HIREDIS=0 REDIS_VERSION=3.2
# Redis 3.2 and HIREDIS
- HIREDIS=1 REDIS_VERSION=3.2
# Redis 4.0
# Redis 4.0 & HIREDIS
- HIREDIS=0 REDIS_VERSION=4.0
# Redis 4.0 and HIREDIS
- HIREDIS=1 REDIS_VERSION=4.0
# Redis 5.0 & HIREDIS
- HIREDIS=0 REDIS_VERSION=5.0
- HIREDIS=1 REDIS_VERSION=5.0
script:
- make start
- coverage erase
2 changes: 1 addition & 1 deletion Makefile
@@ -216,7 +216,7 @@ ifndef REDIS_TRIB_RB
endif

ifndef REDIS_VERSION
REDIS_VERSION=3.0.7
REDIS_VERSION=4.0.10
endif

export REDIS_CLUSTER_NODE1_CONF
2 changes: 1 addition & 1 deletion README.md
@@ -54,7 +54,7 @@ True

## License & Authors

Copyright (c) 2013-2017 Johan Andersson
Copyright (c) 2013-2018 Johan Andersson

MIT (See docs/License.txt file)

6 changes: 3 additions & 3 deletions dev-requirements.txt
@@ -1,9 +1,9 @@
-r requirements.txt

coverage>=4.0,<5.0
pytest>=2.8.3,<3.0.0
testfixtures>=4.5.0,<5.0.0
mock>=1.3.0,<2.0.0
pytest>=2.8.3,<4.0.0
testfixtures>=4.5.0,<5.5.0
mock>=1.3.0,<2.1.0
docopt>=0.6.2,<1.0.0
tox>=2.2.0,<3.0.0
python-coveralls>=2.5.0,<3.0.0
3 changes: 3 additions & 0 deletions docs/authors.rst
@@ -24,3 +24,6 @@ Authors who contributed code or testing:
- AngusP - https://github.com/AngusP
- Doug Kent - https://github.com/dkent
- VascoVisser - https://github.com/VascoVisser
- astrohsy - https://github.com/astrohsy
- Artur Stawiarski - https://github.com/astawiarski
- Matthew Anderson - https://github.com/mc3ander
2 changes: 1 addition & 1 deletion docs/license.rst
@@ -1,7 +1,7 @@
Licensing
---------

Copyright (c) 2013-2016 Johan Andersson
Copyright (c) 2013-2018 Johan Andersson

MIT (See docs/License.txt file)

23 changes: 15 additions & 8 deletions docs/pipelines.rst
@@ -38,7 +38,7 @@ An `ASK` error means the slot is only partially migrated and that the client can
The philosophy on pipelines
---------------------------

After playing around with pipelines and thinking about possible solutions that could be used in a cluster setting this document will describe how pipelines work, strengths and weaknesses with the implementation that was chosen.
After playing around with pipelines and thinking about possible solutions that could be used in a cluster setting, this document will describe how pipelines work and the strengths and weaknesses of the implementation that was chosen.

Why can't we reuse the pipeline code in `redis-py`? In short, it is for almost the same reason that code from the normal redis client can't be reused in a cluster environment: the slots system. A Redis cluster consists of a number of slots distributed across a number of servers, and each key belongs in one of these slots.
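
As a rough sketch of how a key is mapped to a slot (this follows the cluster specification; the helper below is illustrative, not this client's internal API)::

    from binascii import crc_hqx  # CRC-16/XMODEM, the checksum redis cluster uses

    def key_slot(key):
        """Map a key to one of the 16384 cluster hash slots, honouring {hash tags}."""
        start = key.find("{")
        if start != -1:
            end = key.find("}", start + 1)
            if end > start + 1:  # only a non-empty tag counts
                key = key[start + 1:end]
        return crc_hqx(key.encode(), 0) % 16384

    # Keys sharing a hash tag always land in the same slot:
    assert key_slot("{user1}:name") == key_slot("{user1}:email")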

@@ -62,9 +62,16 @@ Consider the following example. Create a pipeline and issue 6 commands `A`, `B`,

If we look back at the order we executed the commands, we get `[A, F]` for the first node and `[B, E, C, D]` for the second node. At first glance this looks out of order because command `E` is executed before `C` & `D`. Why does this not matter? Because no multi-key operations can be done in a pipeline, we only have to care that the execution order is correct for each slot, and in this case it is, because `B` & `E` belong to the same slot and `C` & `D` belong to the same slot. There should be no way to corrupt any data between slots if multi-key commands are blocked by the code.

What is good with this pipeline solution? First we can actually have a pipeline solution that will work in most cases with few commands blocked (only multi key commands). Secondly we can run it in parralel to increase the performance of the pipeline even further, making the benefits even greater.
What is good with this pipeline solution? First, we can actually have a pipeline solution that will work in most cases with few commands blocked (only multi-key commands). Secondly, we can run it in parallel to increase the performance of the pipeline even further, making the benefits even greater.


Packing Commands
----------------

When issuing only a single command, there is only one network round trip to be made. But what if you issue 100 pipelined commands? In a single-instance redis configuration, you still only need to make one network hop. The commands are packed into a single request and the server responds with all the data for those requests in a single response. But with redis cluster, those keys could be spread out over many different nodes.

The client is responsible for figuring out which commands map to which nodes. Let's say, for example, that your 100 pipelined commands need to route to 3 different nodes. The first thing the client does is break out the commands that go to each node, so it only has 3 network requests to make instead of 100.
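
A minimal sketch of that grouping step, assuming a ``key_slot()`` helper like the one above and a ``slot_to_node`` lookup table (both names are illustrative, not the client's actual internals)::

    def pack_commands(commands, slot_to_node):
        """Split a flat list of (name, key, *args) tuples into one batch per node."""
        batches = {}
        for command in commands:
            node = slot_to_node[key_slot(command[1])]  # route by the command's key
            batches.setdefault(node, []).append(command)
        return batches  # e.g. 100 commands over 3 nodes -> only 3 network requests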


Transactions and WATCH
----------------------
@@ -135,7 +142,7 @@ This section will describe different types of pipeline solutions. It will list t
Suggestion one
**************

Simple but yet sequential pipeline. This solution acts more like an interface for the already existing pipeline implementation and only provides a simple backwards compatible interface to ensure that code that sexists still will work withouth any major modifications. The good this with this implementation is that because all commands is runned in sequence it will handle `MOVED` or `ASK` redirections very good and withouth any problems. The major downside to this solution is that no commands is ever batched and runned in parralell and thus you do not get any major performance boost from this approach. Other plus is that execution order is preserved across the entire cluster but a major downside is that thte commands is no longer atomic on the cluster scale because they are sent in multiple commands to different nodes.
Simple but yet sequential pipeline. This solution acts more like an interface for the already existing pipeline implementation and only provides a simple backwards compatible interface to ensure that existing code still works without any major modifications. The good thing with this implementation is that because all commands run in sequence, it will handle `MOVED` or `ASK` redirections very well and without any problems. The major downside to this solution is that no command is ever batched and run in parallel, and thus you do not get any major performance boost from this approach. Another plus is that execution order is preserved across the entire cluster, but a major downside is that the commands are no longer atomic on the cluster scale because they are sent as multiple commands to different nodes.
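
A sketch of what this sequential approach boils down to (hypothetical helper; each command simply reuses the normal single-command path, which already handles redirections)::

    def execute_sequentially(client, commands):
        """Run each command through the normal single-command path, one at a time.

        Slow (one round trip per command), but `MOVED`/`ASK` handling comes for free.
        """
        return [client.execute_command(*command) for command in commands]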

**Good**

@@ -151,24 +158,24 @@ Simple but yet sequential pipeline. This solution acts more like an interface fo
Suggestion two
**************

Current pipeline implementation. This implementation is rather good and works well because it combines the existing pipeline interface and functionality and it also provides a basic handling of `ASK` or `MOVED` errors inside the client. One major downside to this is that execution order is not preserved across the cluster. Altho the execution order is somewhat broken if you look at the entire cluster level becuase commands can be splitted so that cmd1, cmd3, cmd5 get sent to one server and cmd2, cmd4 gets sent to another server. The order is then broken globally but locally for each server it is preserved and maintained correctly. On the other hand i guess that there can't be any commands that can affect different hashslots within the same command so it maybe do not really matter if the execution order is not correct because for each slot/key the order is valid.
There might be some issues with rebuilding the correct response ordering from the scattered data because each command might be in different sub pipelines. But i think that our current code still handles this correctly. I think i have to figure out some wierd case where the execution order acctually matters. There might be some issues with the nonsupported mget/mset commands that acctually performs different sub commands then it currently supports.
Current pipeline implementation. This implementation is rather good and works well because it combines the existing pipeline interface and functionality and it also provides basic handling of `ASK` or `MOVED` errors inside the client. One major downside to this is that execution order is not preserved across the cluster. The execution order is somewhat broken if you look at the entire cluster level, because commands can be split so that cmd1, cmd3, cmd5 get sent to one server and cmd2, cmd4 get sent to another server. The order is then broken globally, but locally for each server it is preserved and maintained correctly. On the other hand, I guess that there can't be any commands that affect different hashslots within the same command, so maybe it really doesn't matter if the execution order is not correct, because for each slot/key the order is valid.
There might be some issues with rebuilding the correct response ordering from the scattered data because each command might be in different sub pipelines. But I think that our current code still handles this correctly. I think I have to figure out some weird case where the execution order actually matters. There might be some issues with the unsupported mget/mset commands that actually perform different sub commands than it currently supports.
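
Rebuilding the response order from the scattered per-node batches can be sketched by remembering each command's original position (illustrative code, not the client's actual implementation)::

    def execute_batches(batches, run_batch, total):
        """batches maps node -> [(original_index, command), ...];
        run_batch(node, commands) must return one reply per command."""
        replies = [None] * total
        for node, indexed in batches.items():
            results = run_batch(node, [command for _, command in indexed])
            for (index, _), reply in zip(indexed, results):
                replies[index] = reply  # slot each reply back into issue order
        return replies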

**Good**

- Sequential execution per node

**Bad**

- Not sequential execution on the entire pipeline
- Non-sequential execution on the entire pipeline
- Medium difficult `ASK` or `MOVED` handling



Suggestion three
****************

There is a even simpler form of pipelines that can be made where all commands is supported as long as they conform to the same hashslot because redis supports that mode of operation. The good thing with this is that sinc all keys must belong to the same slot there can't be very few `ASK` or `MOVED` errors that happens and if they happen they will be very easy to handle because the entire pipeline is kinda atomic because you talk to the same server and only 1 server. There can't be any multiple server communication happening.
There is an even simpler form of pipeline that can be made where all commands are supported as long as they conform to the same hashslot, because Redis supports that mode of operation. The good thing with this is that since all keys must belong to the same slot, very few `ASK` or `MOVED` errors can happen, and if they do happen they will be very easy to handle because the entire pipeline is more or less atomic: you talk to the same server and only 1 server. There can't be any multiple server communication happening.
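
A sketch of the guard such a pipeline would need before sending anything (hypothetical helper, reusing ``key_slot()`` from the earlier sketch)::

    def assert_single_slot(commands):
        """Refuse a pipeline whose keys span more than one hash slot."""
        slots = set(key_slot(command[1]) for command in commands)
        if len(slots) > 1:
            raise ValueError("keys map to {0} slots, expected exactly 1".format(len(slots)))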

**Good**

@@ -184,7 +191,7 @@ There is a even simpler form of pipelines that can be made where all commands is
Suggestion four
**************

One other solution is the 2 step commit solution where you send for each server 2 batches of commands. The first command should somehow establish that each keyslot is in the correct state and able to handle the data. After the client have recieved OK from all nodes that all data slots is good to use then it will acctually send the real pipeline with all data and commands. The big problem with this approach is that ther eis a gap between the checking of the slots and the acctual sending of the data where things can happen to the already established slots setup. But at the same time there is no possibility of merging these 2 steps because if step 2 is automatically runned if step 1 is Ok then the pipeline for the first node that will fail will fail but for the other nodes it will suceed but when it should not because if one command gets `ASK` or `MOVED` redirection then all pipeline objects must be rebuilt to match the new specs/setup and then reissued by the client. The major advantage of this solution is that if you have total controll of the redis server and do controlled upgrades when no clients is talking to the server then it can acctually work really well because there is no possibility that `ASK` or `MOVED` will triggered by migrations in between the 2 batches.
One other solution is the 2-step commit solution where you send 2 batches of commands to each server. The first batch should somehow establish that each keyslot is in the correct state and able to handle the data. After the client has received OK from all nodes that all data slots are good to use, it will actually send the real pipeline with all data and commands. The big problem with this approach is that there is a gap between the checking of the slots and the actual sending of the data where things can happen to the already established slot setup. At the same time there is no possibility of merging these 2 steps, because if step 2 is automatically run when step 1 is OK, then the pipeline for the first node that fails will fail, but for the other nodes it will succeed when it should not, because if one command gets an `ASK` or `MOVED` redirection then all pipeline objects must be rebuilt to match the new specs/setup and then reissued by the client. The major advantage of this solution is that if you have total control of the redis server and do controlled upgrades when no clients are talking to the server, then it can actually work really well because there is no possibility that `ASK` or `MOVED` will be triggered by migrations in between the 2 batches.
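
The 2-step flow could be sketched like this (entirely hypothetical helpers; the gap between the two steps is exactly where migrations can still sneak in)::

    def two_step_execute(nodes, check_slots, send_batch):
        """Step 1: ask every node whether its slot layout is stable.
        Step 2: only if all answered OK, send the real batches."""
        if not all(check_slots(node) for node in nodes):
            raise RuntimeError("slot layout changed; rebuild the pipelines and retry")
        return dict((node, send_batch(node)) for node in nodes)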

**Good**

22 changes: 21 additions & 1 deletion docs/release-notes.rst
@@ -1,6 +1,26 @@
Release Notes
=============

1.3.5 (July 22, 2018)
---------------------

* Add Redis 4 compatibility fix to CLUSTER NODES command (See issue #217)
* Fixed bug with command "CLUSTER GETKEYSINSLOT" that was throwing exceptions
* Added new method cluster_get_keys_in_slot() to the client
* Fixed bug with `StrictRedisCluster.from_url` that was ignoring the `readonly_mode` parameter
* NodeManager will now ignore nodes showing cluster errors when initializing the cluster
* Fix bug where RedisCluster wouldn't refresh the cluster table when executing commands on specific nodes
* Add redis 5.0 to travis-ci tests
* Change default redis version from 3.0.7 to 4.0.10
* Increase accepted ranges of dependencies specified in dev-requirements.txt
* Several major and minor documentation updates and tweaks
* Add example script "from_url_password_protected.py"
* Command "CLUSTER GETKEYSINSLOT" now returns a list and not an int
* Improve support for ssl connections
* Retry on Timeout errors when doing cluster discovery
* Added new error class "MasterDownError"
* Updated requirements for dependency of redis-py to latest version

1.3.4 (Mar 5, 2017)
-------------------

@@ -79,7 +99,7 @@ Release Notes
* Implement all "CLUSTER ..." commands as methods in the client class
* Client now follows the service side setting 'cluster-require-full-coverage=yes/no' (baranbartu)
* Change the pubsub implementation (PUBLISH/SUBSCRIBE commands) from using one single node to now determine the hashslot for the channel name and use that to connect to
a node in the cluster. Other clients that do not use this pattern will not be fully compatible with this client. Known limitations is pattern
a node in the cluster. Other clients that do not use this pattern will not be fully compatible with this client. Known limitations is pattern
subscriptions that do not work properly because a pattern can't know all the possible channel names in advance.
* Convert all docs to ReadTheDocs
* Rework connection pool logic to be more similar to redis-py. This also fixes an issue with pubsub and that connections
9 changes: 0 additions & 9 deletions docs/threads.rst
@@ -13,15 +13,6 @@ The advantage to this design is that a smart client can communicate with the clu



Packing Commands
----------------

When issuing only a single command, there is only one network round trip to be made. But what if you issue 100 pipelined commands? In a single-instance redis configuration, you still only need to make one network hop. The commands are packed into a single request and the server responds with all the data for those requests in a single response. But with redis cluster, those keys could be spread out over many different nodes.

The client is responsible for figuring out which commands map to which nodes. Let's say for example that your 100 pipelined commands need to route to 3 different nodes? The first thing the client does is break out the commands that go to each node, so it only has 3 network requests to make instead of 100.



Parallel network i/o using threads
----------------------------------

7 changes: 6 additions & 1 deletion docs/upgrading.rst
@@ -3,11 +3,16 @@ Upgrading redis-py-cluster

This document describes what must be done when upgrading between different versions to ensure that code still works.

1.3.2 --> Next Release
----------------------

If you created the `StrictRedisCluster` (or `RedisCluster`) instance via the `from_url` method and were passing `readonly_mode` to it, the connection pool created will now properly allow selecting read-only slaves from the pool. Previously it always used master nodes only, even in the case of `readonly_mode=True`. Make sure your code doesn't attempt any write commands over connections with `readonly_mode=True`.
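
For example (URL, host and password below are placeholders)::

    from rediscluster import StrictRedisCluster

    rc = StrictRedisCluster.from_url(
        "redis://:password@127.0.0.1:7000/0",
        readonly_mode=True,
    )
    rc.get("foo")  # may now be served by a slave node
    # Do not attempt write commands such as rc.set(...) on this instance.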


1.3.1 --> 1.3.2
---------------

If your redis instance is configured to not have the `CONFIG ...` comannds enabled due to security reasons you need to pass this into the client object `skip_full_coverage_check=True`. Benefits is that the client class no longer requires the `CONFIG ...` commands to be enabled on the server. Downsides is that you can't use the option in your redis server and still use the same feature in this client.
If your redis instance is configured to not have the `CONFIG ...` commands enabled due to security reasons, you need to pass `skip_full_coverage_check=True` into the client object. The benefit is that the client class no longer requires the `CONFIG ...` commands to be enabled on the server. The downside is that you can't use the `cluster-require-full-coverage` option in your redis server and still use the same feature in this client.
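
For example (the startup node address is a placeholder)::

    from rediscluster import StrictRedisCluster

    rc = StrictRedisCluster(
        startup_nodes=[{"host": "127.0.0.1", "port": "7000"}],
        skip_full_coverage_check=True,  # never issues the CONFIG GET coverage check
    )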



9 changes: 9 additions & 0 deletions examples/from_url_password_protected.py
@@ -0,0 +1,9 @@
from rediscluster import StrictRedisCluster

url="redis://:[email protected]:6572/0"

rc = StrictRedisCluster.from_url(url, skip_full_coverage_check=True)

rc.set("foo", "bar")

print(rc.get("foo"))
12 changes: 6 additions & 6 deletions examples/pipeline-incrby.py
@@ -12,10 +12,10 @@
pipe.execute()

pipe = r.pipeline(transaction=False)
pipe.set("foo-{0}".format(d, d))
pipe.incrby("foo-{0}".format(d, 1))
pipe.set("bar-{0}".format(d, d))
pipe.incrby("bar-{0}".format(d, 1))
pipe.set("bazz-{0}".format(d, d))
pipe.incrby("bazz-{0}".format(d, 1))
pipe.set("foo-{0}".format(d), d)
pipe.incrby("foo-{0}".format(d), 1)
pipe.set("bar-{0}".format(d), d)
pipe.incrby("bar-{0}".format(d), 1)
pipe.set("bazz-{0}".format(d), d)
pipe.incrby("bazz-{0}".format(d), 1)
pipe.execute()
2 changes: 1 addition & 1 deletion rediscluster/__init__.py
@@ -16,7 +16,7 @@
setattr(redis, "StrictClusterPipeline", StrictClusterPipeline)

# Major, Minor, Fix version
__version__ = (1, 3, 4)
__version__ = (1, 3, 5)

if sys.version_info[0:3] == (3, 4, 0):
raise RuntimeError("CRITICAL: rediscluster do not work with python 3.4.0. Please use 3.4.1 or higher.")
