bench: Add the room_list bench #3661

Open · wants to merge 1 commit into base: main
3 changes: 3 additions & 0 deletions Cargo.lock


7 changes: 7 additions & 0 deletions benchmarks/Cargo.toml
@@ -9,16 +9,19 @@ publish = false

[dependencies]
criterion = { version = "0.5.1", features = ["async", "async_tokio", "html_reports"] }
futures-util = { workspace = true }
matrix-sdk-base = { workspace = true }
matrix-sdk-crypto = { workspace = true }
matrix-sdk-sqlite = { workspace = true, features = ["crypto-store"] }
matrix-sdk-test = { workspace = true }
matrix-sdk = { workspace = true, features = ["native-tls", "e2e-encryption", "sqlite"] }
matrix-sdk-ui = { workspace = true }
ruma = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tempfile = "3.3.0"
tokio = { version = "1.24.2", default-features = false, features = ["rt-multi-thread"] }
wiremock = { workspace = true }

[target.'cfg(target_os = "linux")'.dependencies]
pprof = { version = "0.13.0", features = ["flamegraph", "criterion"] }
@@ -34,3 +37,7 @@ harness = false
[[bench]]
name = "room_bench"
harness = false

[[bench]]
name = "room_list_bench"
harness = false
176 changes: 176 additions & 0 deletions benchmarks/benches/room_list_bench.rs
@@ -0,0 +1,176 @@
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use futures_util::{pin_mut, StreamExt};
use matrix_sdk::test_utils::logged_in_client_with_server;
use matrix_sdk_ui::{room_list_service::filters::new_filter_non_left, RoomListService};
use serde::Deserialize;
use serde_json::{json, Map, Value};
use tokio::runtime::Builder;
use wiremock::{http::Method, Match, Mock, Request, ResponseTemplate};

struct SlidingSyncMatcher;

impl Match for SlidingSyncMatcher {
    fn matches(&self, request: &Request) -> bool {
        request.url.path() == "/_matrix/client/unstable/org.matrix.msc3575/sync"
            && request.method == Method::POST
    }
}

#[derive(Deserialize)]
pub(crate) struct PartialSlidingSyncRequest {
    pub txn_id: Option<String>,
}

fn criterion() -> Criterion {
    #[cfg(target_os = "linux")]
    let criterion = Criterion::default().with_profiler(pprof::criterion::PProfProfiler::new(
        100,
        pprof::criterion::Output::Flamegraph(None),
    ));

    #[cfg(not(target_os = "linux"))]
    let criterion = Criterion::default();

    criterion
}

pub fn room_list(c: &mut Criterion) {
    let runtime = Builder::new_multi_thread()
        .enable_time()
        .enable_io()
        .build()
        .expect("Can't create runtime");

    let mut benchmark = c.benchmark_group("room_list");

    for number_of_rooms in [10, 100, 1000, 10_000] {
        benchmark.throughput(Throughput::Elements(number_of_rooms));
        benchmark.bench_with_input(
            BenchmarkId::new("sync", number_of_rooms),
            &number_of_rooms,
            |benchmark, maximum_number_of_rooms| {
                benchmark.to_async(&runtime).iter(|| async {
Review comment (Contributor):
There are a lot of lines in this benchmark that should be considered part of the setup. I don't think we want to time wiremock starting up.

Since so many lines that shouldn't be timed are, I think it's quite hard to get the benchmark to settle down. That is, after some 10 runs I finally got it to settle down, but it's still quite a noisy benchmark:

    time:   [-2.6332% +7.1045% +17.983%] (p = 0.18 > 0.05)
    thrpt:  [-15.242% -6.6332% +2.7044%]
    No change in performance detected.

Please take a look at the Bencher documentation and more specifically at the iter_batched() method:

If your routine requires some per-iteration setup that shouldn’t be timed, use iter_batched or iter_batched_ref.

Having that many lines as part of the benchmark also makes it quite hard to see what exactly we're measuring. So what are we intending to measure? The sync response processing, the room list creation?
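For illustration, the split the reviewer describes can be sketched with a toy harness: a `setup` closure runs outside the timed region and only a `routine` closure is timed. This is a minimal, std-only sketch of the idea behind criterion's `iter_batched`, not criterion's actual implementation; `iter_batched_toy` and both closures are made-up names:

```rust
use std::time::{Duration, Instant};

// Toy illustration of the `iter_batched` idea: `setup` runs outside the
// timed region, and only `routine` is timed. This is NOT criterion's
// implementation, just the concept under discussion.
fn iter_batched_toy<I, O>(
    iterations: usize,
    mut setup: impl FnMut() -> I,
    mut routine: impl FnMut(I) -> O,
) -> Duration {
    let mut timed = Duration::ZERO;
    for _ in 0..iterations {
        let input = setup(); // not timed: build clients, mocks, payloads here
        let start = Instant::now();
        let output = routine(input); // timed: only the code under measurement
        timed += start.elapsed();
        drop(output); // dropping the output is also outside the timed region
    }
    timed
}

fn main() {
    // Setup builds the payload; the routine only processes it.
    let timed = iter_batched_toy(
        10,
        || (0..1_000u64).collect::<Vec<_>>(),
        |rooms| rooms.iter().sum::<u64>(),
    );
    println!("timed only the routine: {timed:?}");
}
```

Applied to this benchmark, the `Client`, wiremock server, and `RoomListService` construction would belong in the setup closure, leaving only the sync/response processing in the measured routine.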

                    // Create a fresh new `Client` and a mocked server.
                    let (client, server) = logged_in_client_with_server().await;

                    // Create a new `RoomListService`.
                    let room_list_service = RoomListService::new(client.clone())
                        .await
                        .expect("Failed to create the `RoomListService`");

                    // Get the `RoomListService` sync stream.
                    let room_list_stream = room_list_service.sync();
                    pin_mut!(room_list_stream);

                    // Get the `RoomList` itself with a default filter.
                    let room_list = room_list_service
                        .all_rooms()
                        .await
                        .expect("Failed to fetch the `all_rooms` room list");
                    let (room_list_entries, room_list_entries_controller) = room_list
                        .entries_with_dynamic_adapters(100, client.roominfo_update_receiver());
                    pin_mut!(room_list_entries);
                    room_list_entries_controller.set_filter(Box::new(new_filter_non_left()));
                    room_list_entries.next().await;

                    // “Send” (mocked) and receive rooms.
                    {
                        const ROOMS_BATCH_SIZE: u64 = 100;
                        let maximum_number_of_rooms = *maximum_number_of_rooms;
                        let mut number_of_sent_rooms = 0;
                        let mut room_nth = 0;

                        while number_of_sent_rooms < maximum_number_of_rooms {
                            let number_of_rooms_to_send = u64::min(
                                number_of_sent_rooms + ROOMS_BATCH_SIZE,
                                maximum_number_of_rooms,
                            ) - number_of_sent_rooms;

                            number_of_sent_rooms += number_of_rooms_to_send;

                            let rooms_as_json = Value::Object(
                                (0..number_of_rooms_to_send)
                                    .map(|_| {
                                        let room_id = format!("!r{room_nth}:matrix.org");

                                        let room = json!({
                                            // The recency timestamp is different, so that it triggers
                                            // a re-ordering of the room list every time, just to
                                            // stress the APIs.
                                            "timestamp": room_nth % 10,
                                            // One state event to activate the associated code path.
                                            "required_state": [
                                                {
                                                    "content": {
                                                        "name": format!("Room #{room_nth}"),
                                                    },
                                                    "event_id": format!("$s{room_nth}"),
                                                    "origin_server_ts": 1,
                                                    "sender": "@example:matrix.org",
                                                    "state_key": "",
                                                    "type": "m.room.name"
                                                },
                                            ],
                                            // One room event to activate the associated code path.
                                            "timeline": [
                                                {
                                                    "event_id": format!("$t{room_nth}"),
                                                    "sender": "@example:matrix.org",
                                                    "type": "m.room.message",
                                                    "content": {
                                                        "body": "foo",
                                                        "msgtype": "m.text",
                                                    },
                                                    "origin_server_ts": 1,
                                                }
                                            ],
                                        });

                                        room_nth += 1;

                                        (room_id, room)
                                    })
                                    .collect::<Map<_, _>>(),
                            );

                            // Mock the response from the server.
                            let _mock_guard = Mock::given(SlidingSyncMatcher)
                                .respond_with(move |request: &Request| {
                                    let partial_request: PartialSlidingSyncRequest =
                                        request.body_json().unwrap();

                                    ResponseTemplate::new(200).set_body_json(json!({
                                        "txn_id": partial_request.txn_id,
                                        "pos": room_nth.to_string(),
                                        "rooms": rooms_as_json,
                                    }))
                                })
                                .mount_as_scoped(&server)
                                .await;

                            // Sync the room list service.
                            assert!(
                                room_list_stream.next().await.is_some(),
                                "`room_list_stream` has stopped"
                            );

                            // Sync the room list entries.
                            assert!(
                                room_list_entries.next().await.is_some(),
                                "`room_list_entries` has stopped"
                            );
                        }
                    }
                })
            },
        );
    }

    benchmark.finish();
}

criterion_group! {
    name = benches;
    config = criterion();
    targets = room_list
}
criterion_main!(benches);
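The chunked “sending” loop above hinges on one piece of arithmetic: each iteration should send `min(sent + ROOMS_BATCH_SIZE, total) - sent` rooms, never more. A standalone sketch of just that computation; the `batch_sizes` helper is hypothetical, distilled from the loop, not part of the PR:

```rust
// Standalone sketch of the batch-splitting arithmetic from the benchmark's
// while loop. `batch_sizes` is a hypothetical helper for illustration.
const ROOMS_BATCH_SIZE: u64 = 100;

fn batch_sizes(total: u64) -> Vec<u64> {
    let mut sent = 0;
    let mut batches = Vec::new();
    while sent < total {
        // `u64::min` caps each batch; using `u64::max` here would instead
        // send every remaining room in a single oversized batch.
        let to_send = u64::min(sent + ROOMS_BATCH_SIZE, total) - sent;
        sent += to_send;
        batches.push(to_send);
    }
    batches
}

fn main() {
    assert_eq!(batch_sizes(10), vec![10]);            // fewer rooms than one batch
    assert_eq!(batch_sizes(250), vec![100, 100, 50]); // trailing partial batch
    println!("{:?}", batch_sizes(250));
}
```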