The kamon-http4s-<version> module brings traces and metrics to your http4s-based applications. It is currently available for Scala 2.12, 2.13, and 3.x, depending on the http4s version. The current version supports Kamon 2.6.1 and is published for http4s 0.22, 0.23, and 1.0.
| kamon | kamon-http4s | status | jdk | scala            | http4s   |
|-------|--------------|--------|-----|------------------|----------|
| 2.6.x | 2.6.1        | stable | 8+  | 2.12, 2.13       | 0.22.x   |
| 2.6.x | 2.6.1        | stable | 8+  | 2.12, 2.13, 3.x  | 0.23.x   |
| 2.6.x | 2.6.1        | stable | 8+  | 2.13, 3.x        | 1.0.M31+ |
To get started with sbt, add the following to your build.sbt file, for example for http4s 0.23:
libraryDependencies += "io.kamon" %% "kamon-http4s-0.23" % "2.6.1"
The releases and dependencies for the legacy kamon-http4s module (without an http4s version suffix) are shown below.
| kamon-http4s | status | jdk | scala      | http4s |
|--------------|--------|-----|------------|--------|
| 1.0.8-1.0.10 | stable | 8+  | 2.11, 2.12 | 0.18.x |
| 1.0.13       | stable | 8+  | 2.11, 2.12 | 0.20.x |
| 2.0.0        | stable | 8+  | 2.11, 2.12 | 0.20.x |
| 2.0.1        | stable | 8+  | 2.12, 2.13 | 0.21.x |
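For example, to use the latest legacy release with http4s 0.21, add the unversioned artifact (version taken from the table above):

libraryDependencies += "io.kamon" %% "kamon-http4s" % "2.0.1"

The following example shows how to wire both the client and the server middleware into an http4s service: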
def serve[F[_]](implicit Effect: Effect[F], EC: ExecutionContext): Stream[F, StreamApp.ExitCode] =
  for {
    _        <- Stream.eval(Sync[F].delay(println("Starting Google Service with Client")))
    client   <- Http1Client.stream[F]()
    service  =  GoogleService.service[F](middleware.client.KamonSupport(client)) (1)
    exitCode <- BlazeBuilder[F]
                  .bindHttp(Config.server.port, Config.server.interface)
                  .mountService(middleware.server.KamonSupport(service)) (2)
                  .serve
  } yield exitCode
- (1): The Kamon middleware for the client side.
- (2): The Kamon middleware for the server side.
object GoogleService {
  def service[F[_]: Effect](c: Client[F]): HttpService[F] = {
    val dsl = new Http4sDsl[F]{}
    import dsl._

    HttpService[F] {
      case GET -> Root / "call-google" =>
        Ok(c.expect[String]("https://www.google.com.ar"))
    }
  }
}
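The serve and GoogleService snippets above target the legacy http4s API (StreamApp, Http1Client, BlazeBuilder). With the current kamon-http4s-0.23 module the same two middlewares are applied to an HttpRoutes and a Client value. The sketch below assumes http4s 0.23 with Ember, cats-effect 3, and the 2.x middleware shapes KamonSupport(client) for the client and KamonSupport(routes, interface, port) for the server; verify both signatures against the release you depend on:

import cats.effect.{ExitCode, IO, IOApp}
import com.comcast.ip4s._
import kamon.Kamon
import kamon.http4s.middleware
import org.http4s.HttpRoutes
import org.http4s.client.Client
import org.http4s.dsl.io._
import org.http4s.ember.client.EmberClientBuilder
import org.http4s.ember.server.EmberServerBuilder
import org.http4s.implicits._

object Main extends IOApp {

  // Same routes as GoogleService above, calling out through the instrumented client.
  def routes(client: Client[IO]): HttpRoutes[IO] =
    HttpRoutes.of[IO] {
      case GET -> Root / "call-google" =>
        Ok(client.expect[String]("https://www.google.com.ar"))
    }

  def run(args: List[String]): IO[ExitCode] =
    IO(Kamon.init()) *>                                     // starts Kamon and the configured reporters
      EmberClientBuilder.default[IO].build.use { client =>
        val tracedClient = middleware.client.KamonSupport(client)                               // (1) client middleware
        val tracedRoutes = middleware.server.KamonSupport(routes(tracedClient), "0.0.0.0", 8080) // (2) server middleware
        EmberServerBuilder.default[IO]
          .withHost(ipv4"0.0.0.0")
          .withPort(port"8080")
          .withHttpApp(tracedRoutes.orNotFound)
          .build
          .useForever
          .as(ExitCode.Success)
      }
}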
libraryDependencies ++= Seq(
  "io.kamon" %% "kamon-core" % "2.6.1",
  "io.kamon" %% "kamon-http4s-0.23" % "2.6.1",
  "io.kamon" %% "kamon-prometheus" % "2.6.1",
  "io.kamon" %% "kamon-zipkin" % "2.6.1",
  "io.kamon" %% "kamon-jaeger" % "2.6.1"
)
Since version 2.0, Kamon reporters are started automatically with their default configuration. Now you can simply sbt run the application and, after a few seconds, the Prometheus metrics will be exposed on http://localhost:9095/ and traces will be sent to Zipkin! The default configuration publishes the Prometheus endpoint on port 9095 and assumes that a Zipkin instance is running locally on port 9411, but you can change these values under the kamon.prometheus and kamon.zipkin configuration keys, respectively.
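For example, a minimal application.conf sketch overriding those defaults (the exact keys below, kamon.prometheus.embedded-server and kamon.zipkin.host/port, are assumptions based on the default reporter configuration; check each reporter's reference.conf):

kamon.prometheus.embedded-server {
  hostname = "0.0.0.0"
  port     = 9096          # instead of the default 9095
}

kamon.zipkin {
  host = "zipkin.internal" # instead of localhost
  port = 9411
}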
All you need to do is add a scrape configuration in Prometheus. The following snippet is a minimal example that should work with the server from the previous section.
A minimal Prometheus configuration snippet
------------------------------------------------------------------------------
scrape_configs:
  - job_name: 'kamon-prometheus'
    static_configs:
      - targets: ['localhost:9095']
------------------------------------------------------------------------------
Once you configure this target in Prometheus, you are ready to run queries over the collected metrics.
These are the server metrics gathered by default:
- active-requests: The number of active requests.
- http-responses: Response time by status code.
- abnormal-termination: The number of abnormally terminated requests.
- service-errors: The number of service errors.
- headers-times: Time spent reading request headers.
- http-request: Request time by status code.
Now you can go ahead, create your own custom metrics and build your own dashboards!
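As a starting point, here is a small sketch of recording custom metrics with the Kamon API (the metric names below are purely illustrative):

import kamon.Kamon

// Hypothetical metrics, named here only for illustration.
val callsToGoogle = Kamon.counter("calls-to-google").withoutTags()
val payloadSize   = Kamon.histogram("google-payload-size").withoutTags()

callsToGoogle.increment()   // count one outgoing call
payloadSize.record(1024L)   // record the size of a response body in bytes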
Assuming that you have a Zipkin instance running locally with the default ports, you can go to http://localhost:9411 and start investigating traces for this application. Once you find a trace you are interested in you will see something like this:
Clicking on a span will bring up a details view where you can see all tags for the selected span:
That's it! You are now collecting metrics and tracing information from an http4s application.
See also: an example of how to correctly configure the execution context, by @cmcmteixeira.