NOTE: Starting from version 0.3.0 of the library:
- The library runtime is published to Maven Central and is no longer published to Bintray.
- The Gradle plugin is published to the Gradle Plugin Portal.
- The Gradle plugin id has changed to `org.jetbrains.kotlinx.benchmark`.
- The library runtime artifact id has changed to `kotlinx-benchmark-runtime`.

NOTE: When a Kotlin version from 1.5.0 up to 1.5.30 is used, make sure the `kotlin-gradle-plugin` is pulled from Maven Central, not the Gradle Plugin Portal. For more information: Kotlin#42
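One way to arrange this (a sketch; the exact repository setup depends on your project) is to configure plugin resolution in `settings.gradle` so that Maven Central is consulted before the Gradle Plugin Portal:

```groovy
// settings.gradle — resolve plugin dependencies from Maven Central first
pluginManagement {
    repositories {
        mavenCentral()        // kotlin-gradle-plugin is published here
        gradlePluginPortal()  // the kotlinx.benchmark plugin is published here
    }
}
```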
kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code written in Kotlin, supporting the following targets: JVM, JavaScript and Native. If you're familiar with JMH, the JVM support will feel very similar: the toolkit uses JMH under the hood to run benchmarks on the JVM.
Requirements:
- Gradle 6.8 or newer
- Kotlin 1.5.30 or newer

Apply the plugin in `build.gradle`:

```groovy
plugins {
    id 'org.jetbrains.kotlinx.benchmark' version '0.3.1'
}
```
For Kotlin/JS, specify building the `nodejs` flavour:

```kotlin
kotlin {
    js {
        nodejs()
        …
    }
}
```
For Kotlin/JVM code, add the `allopen` plugin to make JMH happy. Alternatively, make all benchmark classes and methods `open`.

For example, if you annotated each of your benchmark classes with `@State(Scope.Benchmark)`:

```kotlin
@State(Scope.Benchmark)
class Benchmark {
    …
}
```

and added the following code to your `build.gradle`:

```groovy
plugins {
    id 'org.jetbrains.kotlin.plugin.allopen'
}

allOpen {
    annotation("org.openjdk.jmh.annotations.State")
}
```

then you don't have to make benchmark classes and methods `open`.
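Benchmark functions themselves are marked with annotations from the runtime library. A minimal sketch of a full benchmark class (the class, property, and function names here are hypothetical) might look like:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
open class ListBenchmark {      // open, unless the allopen plugin is applied
    @Param("1000", "10000")     // the benchmark runs once per listed value
    var size: Int = 0

    private var list: List<Int> = emptyList()

    @Setup
    fun prepare() {             // runs before the measuring iterations
        list = List(size) { it }
    }

    @Benchmark
    fun sum(): Int = list.sum() // the measured function; return the result
                                // so the computation is not eliminated
}
```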
You need a runtime library with annotations and code that will run benchmarks.

Enable Maven Central for dependencies lookup:

```groovy
repositories {
    mavenCentral()
}
```
Add the runtime to the dependencies of the platform source set, e.g.:

```kotlin
kotlin {
    sourceSets {
        commonMain {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.3.1")
            }
        }
    }
}
```
In the `build.gradle` file, create a `benchmark` section, and inside it add a `targets` section. In this section, register all targets you want to run benchmarks from. Example for a multiplatform project:

```groovy
benchmark {
    targets {
        register("jvm")
        register("js")
        register("native")
    }
}
```
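The registered names match compilations declared in the `kotlin` block. For the example above, a sketch of matching target declarations could be (the concrete native target, `linuxX64` named "native", is an assumption; use whichever native target your project builds):

```kotlin
kotlin {
    jvm()
    js {
        nodejs()
    }
    linuxX64("native")  // assumption: any native target named "native"
}
```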
This package can also be used for Java and Kotlin/JVM projects. Register a Java source set as a target:

```groovy
benchmark {
    targets {
        register("main")
    }
}
```
To configure benchmarks and create multiple profiles, create a `configurations` section in the `benchmark` block and place options inside. The toolkit creates a `main` configuration by default, and you can create as many additional configurations as you need.

```groovy
benchmark {
    configurations {
        main {
            // configure default configuration
        }
        smoke {
            // create and configure "smoke" configuration, e.g. with several fast benchmarks to quickly check
            // whether code changes result in something very wrong, or very right
        }
    }
}
```
Available configuration options:

- `iterations` – number of measuring iterations
- `warmups` – number of warmup iterations
- `iterationTime` – time to run each iteration (measuring and warmup)
- `iterationTimeUnit` – time unit for `iterationTime` (default is seconds)
- `outputTimeUnit` – time unit for results output
- `mode`
  - "thrpt" (default) – measures the number of benchmark function invocations per unit of time
  - "avgt" – measures the time per benchmark function invocation
- `include("…")` – regular expression to include benchmarks with fully qualified names matching it, as a substring
- `exclude("…")` – regular expression to exclude benchmarks with fully qualified names matching it, as a substring
- `param("name", "value1", "value2")` – specify a parameter for a public mutable property `name` annotated with `@Param`
- `reportFormat` – format of the report, can be `json` (default), `csv`, `scsv` or `text`

There are also some advanced platform-specific settings that can be configured using the `advanced("…", …)` function, where the first argument is the name of the configuration parameter and the second is its value. Valid options:

- (Kotlin/Native) `nativeFork`
  - "perBenchmark" (default) – executes all iterations of a benchmark in the same process (one binary execution)
  - "perIteration" – executes each iteration of a benchmark in a separate process, measuring in a cold Kotlin/Native runtime environment
- (Kotlin/Native) `nativeGCAfterIteration` – when set to `true`, additionally collects garbage after each measuring iteration (default is `false`)
- (Kotlin/JVM) `jvmForks` – number of times the harness should fork (default is `1`)
  - a non-negative integer value – the amount to use for all benchmarks included in this configuration; zero means "no fork"
  - "definedByJmh" – let the underlying JMH determine the amount, using the value specified in the `@Fork` annotation defined for the benchmark function or its enclosing class, or `Defaults.MEASUREMENT_FORKS` (`5`) if it is not specified by `@Fork`

Time units can be NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, or their short variants such as "ms" or "ns".
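Putting the advanced options together, a configuration using them could look like this (a sketch; the values are illustrative):

```groovy
benchmark {
    configurations {
        main {
            advanced("jvmForks", 2)                  // fork the JVM harness twice
            advanced("nativeFork", "perIteration")   // cold Kotlin/Native runtime per iteration
            advanced("nativeGCAfterIteration", true) // collect garbage after each measuring iteration
        }
    }
}
```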
Example:

```groovy
benchmark {
    // Create configurations
    configurations {
        main { // main configuration is created automatically, but you can change its defaults
            warmups = 20 // number of warmup iterations
            iterations = 10 // number of iterations
            iterationTime = 3 // time in seconds per iteration
        }
        smoke {
            warmups = 5 // number of warmup iterations
            iterations = 3 // number of iterations
            iterationTime = 500 // time per iteration, in the unit set below
            iterationTimeUnit = "ms" // time unit for iterationTime, default is seconds
        }
    }
    // Setup targets
    targets {
        // This one matches the compilation base name, e.g. 'jvm', 'jvmTest', etc.
        register("jvm") {
            jmhVersion = "1.21" // available only for JVM compilations & Java source sets
        }
        register("js") {
            // Note that benchmark.js uses a different approach with minTime & maxTime and runs benchmarks
            // until results are stable. We estimate minTime as iterationTime and maxTime as iterationTime*iterations
        }
        register("native")
    }
}
```
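With targets and configurations in place, benchmarks are run via Gradle tasks generated by the plugin. The task names below are an assumption based on the configuration names above; run `gradlew tasks` to see the exact tasks your version generates:

```shell
# Run the "main" configuration on all registered targets
./gradlew benchmark

# Run a single configuration (hypothetical task name derived from the "smoke" configuration)
./gradlew smokeBenchmark
```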
Often you want to have benchmarks in the same project, but separated from the main code, much like tests. Here is how:

Define a source set:

```groovy
sourceSets {
    benchmarks
}
```

Propagate dependencies and output from the `main` source set:

```groovy
dependencies {
    benchmarksCompile sourceSets.main.output + sourceSets.main.runtimeClasspath
}
```

You can also add the output and compileClasspath from `sourceSets.test` in the same way if you want to reuse some of the test infrastructure.
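Following the same pattern, reusing the test infrastructure might look like this (a sketch; adjust to your project's configuration names):

```groovy
dependencies {
    benchmarksCompile sourceSets.test.output + sourceSets.test.compileClasspath
}
```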
Register the `benchmarks` source set:

```groovy
benchmark {
    targets {
        register("benchmarks")
    }
}
```
The project contains an `examples` subproject that demonstrates using the library.