RPQ is an extremely fast and flexible priority queue, capable of millions of transactions a second. RPQ supports a "double priority queue" design that distributes priorities across N buckets, where each bucket holds a second internal priority queue. This inner queue allows items to be escalated or timed out based on parameters you specify at submission time, combined with how frequently you ask RPQ to prioritize the queue.
RPQ was written as an experiment while I was playing with Fibonacci heaps and looking for something faster. I was disappointed by the state of the research and of the libraries used by most common applications, so RPQ is meant to be a highly flexible framework that can support a multitude of workloads.
RPQ is a concurrency-safe, embeddable priority queue that can be used in a variety of applications. RPQ might be the right choice if:
- Your data requires strict ordering guarantees
- You need to escalate the priority of items that have been in the queue too long
- You need to timeout items
- You have multiple writers and readers that need to access the queue concurrently
- You run critical workloads and need to store the queue on disk in case of a crash
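The bucketed layout described above can be sketched with std types alone. This is an illustrative model of the idea (N priority buckets, each a FIFO queue), not RPQ's actual implementation; the `BucketQueue` type and its methods are hypothetical names for this sketch:

```rust
use std::collections::VecDeque;

/// Illustrative model only: N priority buckets, each a FIFO queue.
/// Lower bucket index = higher priority.
struct BucketQueue<T> {
    buckets: Vec<VecDeque<T>>,
}

impl<T> BucketQueue<T> {
    fn new(max_priority: usize) -> Self {
        BucketQueue {
            buckets: (0..max_priority).map(|_| VecDeque::new()).collect(),
        }
    }

    /// O(1): push onto the bucket for this priority.
    fn enqueue(&mut self, priority: usize, item: T) {
        self.buckets[priority].push_back(item);
    }

    /// O(number of buckets) worst case: pop from the
    /// highest-priority non-empty bucket.
    fn dequeue(&mut self) -> Option<T> {
        self.buckets.iter_mut().find_map(|b| b.pop_front())
    }
}

fn main() {
    let mut q = BucketQueue::new(10);
    q.enqueue(3, "low");
    q.enqueue(0, "high");
    // Higher-priority (lower-index) buckets drain first.
    assert_eq!(q.dequeue(), Some("high"));
    assert_eq!(q.dequeue(), Some("low"));
}
```

Keeping buckets FIFO internally is what makes strict ordering guarantees cheap: items of equal priority leave in arrival order without any heap rebalancing.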
Because most operations run in constant O(1) or logarithmic O(log n) time, with the exception of the prioritize function, which runs in linear O(n) time, all RPQ operations are extremely fast. A single RPQ can handle a few million transactions a second and can be tuned for your workload. I have included some basic benchmarks using C++, Rust, Zig, and Go that measure RPQ against each language's standard implementation; they can be found at pq-bench.
RPQ is an embeddable priority queue meant to be used at the core of critical workloads where complex ordering is required in combination with large volumes of data. The best way to use RPQ is to import the crate and interact with the queue through its API.
TODO: Publish to crates.io
[dependencies]
rpq = "0.2.0"
RPQ::new(options: RPQOptions) -> Result<(RPQ, usize), Error>
- Creates a new RPQ with the given options and returns the number of restored items.

enqueue(mut item: Item) -> Result<(), Error>
- Enqueues a new item into the RPQ.

enqueue_batch(mut items: Vec<Item>) -> Result<(), Error>
- Enqueues a batch of items into the RPQ.

dequeue() -> Result<Item, Error>
- Dequeues the next item from the RPQ.

dequeue_batch(count: usize) -> Result<Vec<Item>, Error>
- Dequeues a batch of items from the RPQ.

prioritize() -> Result<(usize, usize), Error>
- Prioritizes the items in the RPQ and returns the number of timed-out and reprioritized items.

items_in_queue() -> usize
- Returns the number of items in the RPQ.

active_buckets() -> usize
- Returns the number of active buckets in the RPQ.

unsynced_batches() -> usize
- Returns the number of unsynced batches pending commit to disk.

items_in_db() -> usize
- Returns the number of items in the RPQ database.

close()
- Closes the RPQ and saves the state to disk if needed.
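The (timed out, reprioritized) tuple that prioritize() returns can be pictured with a small std-only model of one bucket's scan. This is a hypothetical sketch for illustration, not the crate's code; the `Entry` struct and its field names (`should_escalate`, `escalation_rate`, etc.) are assumptions made here:

```rust
use std::time::{Duration, Instant};

/// Hypothetical per-item metadata for this sketch; not RPQ's real types.
struct Entry {
    submitted: Instant,
    can_timeout: bool,
    timeout: Duration,
    should_escalate: bool,
    escalation_rate: Duration,
}

/// O(n) scan over one bucket: drop entries past their timeout and count
/// entries that have waited longer than their escalation rate (RPQ would
/// move these forward). Returns (timed_out, reprioritized).
fn prioritize(entries: &mut Vec<Entry>, now: Instant) -> (usize, usize) {
    let before = entries.len();
    entries.retain(|e| !(e.can_timeout && now.duration_since(e.submitted) >= e.timeout));
    let timed_out = before - entries.len();
    let reprioritized = entries
        .iter()
        .filter(|e| e.should_escalate && now.duration_since(e.submitted) >= e.escalation_rate)
        .count();
    (timed_out, reprioritized)
}

fn main() {
    let now = Instant::now();
    let mut entries = vec![
        // Submitted 10s ago with a 5s timeout: will be timed out.
        Entry {
            submitted: now - Duration::from_secs(10),
            can_timeout: true,
            timeout: Duration::from_secs(5),
            should_escalate: false,
            escalation_rate: Duration::from_secs(1),
        },
        // Submitted 2s ago, escalates after 1s: will be reprioritized.
        Entry {
            submitted: now - Duration::from_secs(2),
            can_timeout: false,
            timeout: Duration::from_secs(5),
            should_escalate: true,
            escalation_rate: Duration::from_secs(1),
        },
    ];
    let (timed_out, reprioritized) = prioritize(&mut entries, now);
    assert_eq!((timed_out, reprioritized), (1, 1));
}
```

Because this pass touches every item, it is the one linear-time operation; how often you call prioritize() trades CPU for how promptly escalations and timeouts take effect.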
use chrono::Duration;
use rpq::{schema::RPQOptions, schema::Item, RPQ};

#[tokio::main(flavor = "multi_thread")]
async fn main() {
    let message_count = 1_000_000;

    let options = RPQOptions {
        max_priority: 10,
        disk_cache_enabled: true,
        database_path: "/tmp/rpq.redb".to_string(),
        lazy_disk_cache: true,
        lazy_disk_write_delay: Duration::seconds(5),
        lazy_disk_cache_batch_size: 10_000,
    };

    let r = RPQ::new(options).await;
    match r {
        Ok(_) => {}
        Err(e) => {
            println!("Error Creating RPQ: {}", e);
            return;
        }
    }
    let (rpq, _restored_items) = r.unwrap();

    for i in 0..message_count {
        let item = Item::new(
            i % 10,
            i,
            false,
            None,
            false,
            None,
        );

        let result = rpq.enqueue(item).await;
        if result.is_err() {
            println!("Error Enqueuing: {}", result.err().unwrap());
            return;
        }
    }

    for _i in 0..message_count {
        let result = rpq.dequeue().await;
        if result.is_err() {
            println!("Error Dequeuing: {}", result.err().unwrap());
            return;
        }
    }

    rpq.close().await;
}
RPQ is actively looking for maintainers, so feel free to help out by:
- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing new features
We use GitHub to host code, track issues and feature requests, and accept pull requests.
- Fork the repo and create your branch from master.
- If you've added code that should be tested, add tests.
- If you've changed APIs, update the documentation.
- Ensure the test suite passes.
- Make sure your code lints.
- Issue that pull request!
In short, when you submit code changes, your submissions are understood to be under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
Report bugs using GitHub's Issues
We use GitHub issues to track public bugs. Report a bug by opening a new issue; it's that easy!
Great Bug Reports tend to have:
- A quick summary and/or background
- Steps to reproduce
  - Be specific!
  - Give sample code if you can.
- What you expected would happen
- What actually happens
- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work)
All code here was originally written by me, Justin Timperio, under an MIT license.