akka-persistence-inmemory is a plugin for akka-persistence that stores journal and snapshot messages in memory, which is very useful when testing persistent actors, persistent FSMs and akka cluster.
Add the following to your build.sbt:
// the library is available in Bintray repository
resolvers += Resolver.bintrayRepo("dnvriend", "maven")
// akka 2.5.x
libraryDependencies += "com.github.dnvriend" %% "akka-persistence-inmemory" % "2.5.15.2"
// akka 2.4.x
libraryDependencies += "com.github.dnvriend" %% "akka-persistence-inmemory" % "2.4.20.1"
Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.
This code is open source software licensed under the Apache 2.0 License.
Add the following to the application.conf:
akka {
  persistence {
    journal.plugin = "inmemory-journal"
    snapshot-store.plugin = "inmemory-snapshot-store"
  }
}
The query API can be configured by overriding the defaults by placing the following in application.conf:
inmemory-read-journal {
  # Absolute path to the write journal plugin configuration section to get the event adapters from
  write-plugin = "inmemory-journal"

  # There are two modes: "sequence" or "uuid". If set to "sequence" and NoOffset is requested, the
  # query will return Sequence offset types. If set to "uuid" and NoOffset is requested, the
  # query will return TimeBasedUUID offset types. When the query is called with Sequence, the
  # query will return Sequence offset types, and if the query is called with TimeBasedUUID, the
  # query will return TimeBasedUUID offset types.
  offset-mode = "sequence"

  # ask timeout on Futures
  ask-timeout = "10s"

  # New events are retrieved (polled) with this interval.
  refresh-interval = "100ms"

  # How many events to fetch in one query (replay) and keep buffered until they
  # are delivered downstream.
  max-buffer-size = "100"
}
It is possible to manually clear the journal and snapshot storage, for example:
import akka.actor.ActorSystem
import akka.persistence.inmemory.extension.{ InMemoryJournalStorage, InMemorySnapshotStorage, StorageExtension }
import akka.testkit.TestProbe
import org.scalatest.{ BeforeAndAfterEach, Suite }
trait InMemoryCleanup extends BeforeAndAfterEach { _: Suite =>

  implicit def system: ActorSystem

  override protected def beforeEach(): Unit = {
    val tp = TestProbe()
    tp.send(StorageExtension(system).journalStorage, InMemoryJournalStorage.ClearJournal)
    tp.expectMsg(akka.actor.Status.Success(""))
    tp.send(StorageExtension(system).snapshotStorage, InMemorySnapshotStorage.ClearSnapshots)
    tp.expectMsg(akka.actor.Status.Success(""))
    super.beforeEach()
  }
}
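A minimal usage sketch: mix the trait into a ScalaTest suite and provide the ActorSystem it requires (the suite name and test body below are hypothetical):

import akka.actor.ActorSystem
import akka.testkit.TestKit
import org.scalatest.{ BeforeAndAfterAll, FlatSpecLike, Matchers }

class MyPersistentActorTest extends FlatSpecLike with Matchers with BeforeAndAfterAll with InMemoryCleanup {
  // satisfies the abstract implicit def system of InMemoryCleanup
  implicit val system: ActorSystem = ActorSystem("test")

  "a persistent actor" should "start from an empty journal in every test" in {
    // the journal and snapshot store were cleared by beforeEach
  }

  override protected def afterAll(): Unit = TestKit.shutdownActorSystem(system)
}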
From Java:
// assumes the storage extension and a TestProbe `tp`, analogous to the Scala example above
ActorRef journalStorage = extension.journalStorage();
InMemoryJournalStorage.ClearJournal clearJournal = InMemoryJournalStorage.clearJournal();
tp.send(journalStorage, clearJournal);
tp.expectMsg(new Status.Success(""));

// clearing snapshots goes to the snapshot storage actor, mirroring the Scala example
ActorRef snapshotStorage = extension.snapshotStorage();
InMemorySnapshotStorage.ClearSnapshots clearSnapshots = InMemorySnapshotStorage.clearSnapshots();
tp.send(snapshotStorage, clearSnapshots);
tp.expectMsg(new Status.Success(""));
akka-persistence-query introduces akka.persistence.query.Offset, an ADT that defines akka.persistence.query.NoOffset, akka.persistence.query.Sequence and akka.persistence.query.TimeBasedUUID. These offsets can be used with the queries akka.persistence.query.scaladsl.EventsByTagQuery2 and akka.persistence.query.scaladsl.CurrentEventsByTagQuery2 to request an offset in the stream of events.
Because akka-persistence-inmemory implements both the Sequence-based offset strategy and the TimeBasedUUID strategy, it is required to configure inmemory-read-journal.offset-mode (e.g. offset-mode = "sequence"). This way akka-persistence-inmemory knows what kind of journal it should emulate when a NoOffset type is requested. The EventEnvelope will contain either a Sequence when the configuration is "sequence" or a TimeBasedUUID when the configuration is "uuid". By default the setting is "sequence".
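The offset types can be distinguished with a simple pattern match; a minimal sketch (the describe helper is purely illustrative):

import akka.persistence.query.{ NoOffset, Offset, Sequence, TimeBasedUUID }

def describe(offset: Offset): String = offset match {
  case Sequence(value)   => s"sequence offset $value"     // emitted when offset-mode = "sequence"
  case TimeBasedUUID(id) => s"time-based UUID offset $id" // emitted when offset-mode = "uuid"
  case NoOffset          => "no offset"                   // what you pass in to start from the beginning
}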
Write plugins (i.e. akka-persistence plugins that write events) can define event adapters. These event adapters can be reused when executing a query, so that the EventEnvelope contains the application domain event and not the data-model representation of that event. Configure inmemory-read-journal.write-plugin with the name of the write plugin (it defaults to "inmemory-journal").
The async query API uses polling to query the journal for new events. The refresh interval can be configured, e.g. "1s", so that the journal is polled every second. This setting applies to every async query, i.e. the allPersistenceIds, eventsByTag and eventsByPersistenceId queries.
When an async query is started, a number of events will be buffered and will use memory when not consumed by a Sink. The default buffer size is 100 events.
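For example, to poll once per second instead of the 100ms default while keeping the default buffer size, the relevant keys from the configuration shown earlier would be (a sketch):

inmemory-read-journal {
  # poll the journal for new events once per second
  refresh-interval = "1s"
  # keep at most 100 events buffered per query until they are delivered downstream
  max-buffer-size = "100"
}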
The ReadJournal is retrieved via the akka.persistence.query.PersistenceQuery extension:
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.scaladsl._

lazy val readJournal = PersistenceQuery(system).readJournalFor("inmemory-read-journal")
  .asInstanceOf[ReadJournal
    with CurrentPersistenceIdsQuery
    with AllPersistenceIdsQuery
    with CurrentEventsByPersistenceIdQuery
    with CurrentEventsByTagQuery
    with EventsByPersistenceIdQuery
    with EventsByTagQuery]
From Java, the ReadJournal is retrieved via the akka.persistence.query.PersistenceQuery extension:
import akka.persistence.query.PersistenceQuery;
import akka.persistence.inmemory.query.journal.javadsl.InMemoryReadJournal;

final InMemoryReadJournal readJournal = PersistenceQuery.get(system).getReadJournalFor(InMemoryReadJournal.class, InMemoryReadJournal.Identifier());
The plugin supports the following queries:
allPersistenceIds and currentPersistenceIds are used for retrieving all persistenceIds of all persistent actors.
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.{ Materializer, ActorMaterializer }
import akka.stream.scaladsl.Source
import akka.persistence.query.PersistenceQuery
import akka.persistence.inmemory.query.journal.scaladsl.InMemoryReadJournal

implicit val system: ActorSystem = ActorSystem()
implicit val mat: Materializer = ActorMaterializer()(system)
val readJournal: InMemoryReadJournal = PersistenceQuery(system).readJournalFor[InMemoryReadJournal](InMemoryReadJournal.Identifier)

val willNotCompleteTheStream: Source[String, NotUsed] = readJournal.allPersistenceIds()
val willCompleteTheStream: Source[String, NotUsed] = readJournal.currentPersistenceIds()
The returned event stream is unordered and you can expect a different order for multiple executions of the query.
When using the allPersistenceIds query, the stream is not completed when it reaches the end of the currently known persistenceIds; it continues to push new persistenceIds as new persistent actors are created.
When using the currentPersistenceIds query, the stream is completed when the end of the current list of persistenceIds is reached; it is not a live query.
The stream is completed with failure if there is a failure in executing the query in the backend journal.
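A minimal consumption sketch (assuming the readJournal, implicit materializer and imports from the example above; Sink.seq and take are just one way to drain the streams):

import akka.stream.scaladsl.Sink
import scala.concurrent.Future

// completes: collect the persistenceIds known at the time the query runs
val currentIds: Future[Seq[String]] = readJournal.currentPersistenceIds().runWith(Sink.seq)

// never completes by itself: take the first 10 persistenceIds, then cancel the live query
val firstTenLiveIds: Future[Seq[String]] = readJournal.allPersistenceIds().take(10).runWith(Sink.seq)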
eventsByPersistenceId and currentEventsByPersistenceId are used for retrieving events for a specific PersistentActor, identified by its persistenceId.
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.{ Materializer, ActorMaterializer }
import akka.stream.scaladsl.Source
import akka.persistence.query.{ EventEnvelope, PersistenceQuery }
import akka.persistence.query.scaladsl._

implicit val system: ActorSystem = ActorSystem()
implicit val mat: Materializer = ActorMaterializer()(system)
lazy val readJournal = PersistenceQuery(system).readJournalFor("inmemory-read-journal")
  .asInstanceOf[ReadJournal
    with CurrentPersistenceIdsQuery
    with AllPersistenceIdsQuery
    with CurrentEventsByPersistenceIdQuery
    with CurrentEventsByTagQuery
    with EventsByPersistenceIdQuery
    with EventsByTagQuery]
val willNotCompleteTheStream: Source[EventEnvelope, NotUsed] = readJournal.eventsByPersistenceId("some-persistence-id", 0L, Long.MaxValue)
val willCompleteTheStream: Source[EventEnvelope, NotUsed] = readJournal.currentEventsByPersistenceId("some-persistence-id", 0L, Long.MaxValue)
You can retrieve a subset of all events by specifying fromSequenceNr and toSequenceNr, or use 0L and Long.MaxValue respectively to retrieve all events. Note that the corresponding sequence number of each event is provided in the EventEnvelope, which makes it possible to resume the stream at a later point from a given sequence number.
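For example, a minimal sketch (assuming the readJournal and imports from the example above) that replays only a slice of the events and resumes later from a stored sequence number:

// replay only events 3 through 7 of this persistent actor, then complete
val slice: Source[EventEnvelope, NotUsed] =
  readJournal.currentEventsByPersistenceId("some-persistence-id", 3L, 7L)

// resume a live stream later, starting right after the last sequence number seen so far
def resumeFrom(lastSeenSeqNr: Long): Source[EventEnvelope, NotUsed] =
  readJournal.eventsByPersistenceId("some-persistence-id", lastSeenSeqNr + 1L, Long.MaxValue)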
The returned event stream is ordered by sequence number, i.e. the same order in which the PersistentActor persisted the events. The same prefix of stream elements (in the same order) is returned for multiple executions of the query, except when events have been deleted.
The stream is completed with failure if there is a failure in executing the query in the backend journal.
eventsByTag and currentEventsByTag are used for retrieving events that were marked with a given tag, e.g. all domain events of an Aggregate Root type.
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.{ Materializer, ActorMaterializer }
import akka.stream.scaladsl.Source
import akka.persistence.query.{ EventEnvelope, PersistenceQuery }
import akka.persistence.query.scaladsl._

implicit val system: ActorSystem = ActorSystem()
implicit val mat: Materializer = ActorMaterializer()(system)
lazy val readJournal = PersistenceQuery(system).readJournalFor("inmemory-read-journal")
  .asInstanceOf[ReadJournal
    with CurrentPersistenceIdsQuery
    with AllPersistenceIdsQuery
    with CurrentEventsByPersistenceIdQuery
    with CurrentEventsByTagQuery
    with EventsByPersistenceIdQuery
    with EventsByTagQuery]
val willNotCompleteTheStream: Source[EventEnvelope, NotUsed] = readJournal.eventsByTag("apple", 0L)
val willCompleteTheStream: Source[EventEnvelope, NotUsed] = readJournal.currentEventsByTag("apple", 0L)
To tag events you'll need to create an Event Adapter that wraps the event in an akka.persistence.journal.Tagged class with the given tags. The Tagged class instructs the persistence plugin to tag the event with the given set of tags.
The persistence plugin will not store the Tagged class itself in the journal. It strips the tags and payload from the Tagged class, uses the class only as an instruction to tag the event with the given tags, and stores the payload in the message field of the journal table.
import akka.persistence.journal.{ Tagged, WriteEventAdapter }
import com.github.dnvriend.Person.{ LastNameChanged, FirstNameChanged, PersonCreated }

class TaggingEventAdapter extends WriteEventAdapter {

  override def manifest(event: Any): String = ""

  def withTag(event: Any, tag: String) = Tagged(event, Set(tag))

  override def toJournal(event: Any): Any = event match {
    case _: PersonCreated ⇒
      withTag(event, "person-created")
    case _: FirstNameChanged ⇒
      withTag(event, "first-name-changed")
    case _: LastNameChanged ⇒
      withTag(event, "last-name-changed")
    case _ ⇒ event
  }
}
The EventAdapter must be registered by adding the following to the root of application.conf. Please see the demo-akka-persistence-jdbc project for more information. The identifier of the persistence plugin must be used, which for the inmemory plugin is inmemory-journal.
inmemory-journal {
  event-adapters {
    tagging = "com.github.dnvriend.TaggingEventAdapter"
  }
  event-adapter-bindings {
    "com.github.dnvriend.Person$PersonCreated" = tagging
    "com.github.dnvriend.Person$FirstNameChanged" = tagging
    "com.github.dnvriend.Person$LastNameChanged" = tagging
  }
}
You can retrieve a subset of all events by specifying an offset, or use 0L to retrieve all events with a given tag. The offset corresponds to an ordered sequence number for the specific tag. Note that the corresponding offset of each event is provided in the EventEnvelope, which makes it possible to resume the stream at a later point from a given offset.
In addition to the offset, the EventEnvelope also provides persistenceId and sequenceNr for each event. The sequenceNr is the sequence number for the persistent actor with the persistenceId that persisted the event. The persistenceId + sequenceNr is a unique identifier for the event.
The returned event stream contains only events that correspond to the given tag and is ordered by the creation time of the events. The same stream elements (in the same order) are returned for multiple executions of the same query. Deleted events are not removed from the tagged event stream.
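A minimal resume sketch (assuming the readJournal, imports and the Long-offset eventsByTag signature used in the examples above, and that lastSeenOffset was stored by the read side):

// per the byTag semantics described in the release notes below, the requested offset is
// exclusive, so a stored offset can be passed back in as-is to continue with the next event
def resumeByTag(lastSeenOffset: Long): Source[EventEnvelope, NotUsed] =
  readJournal.eventsByTag("apple", lastSeenOffset)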
You can change the default storage so that a separate journal is kept for each distinct value of the configured property keys, using the following configuration. This can be useful to configure behavior similar to Cassandra keyspaces.
# the storage in use
inmemory-storage {
  # storage using an in-memory journal for each distinct value of the configured property keys
  class = "akka.persistence.inmemory.extension.StorageExtensionByProperty"
  # property keys in the journal plugin configuration; for each distinct value a separate journal is kept
  property-keys = ["keyspace"]
}
- Scala 2.11.x, 2.12.x, 2.13.x support
- Akka 2.5.15 -> 2.5.23
- Merged PR #59 "Pluggable storage" by Beat Sager, thanks!
- Java 8 binary release
- Applied PR #50 "Fix for Akka Typed Persistence" by Lukasz Sanek, thanks!
- Java 10 binary release
- Merged PR #50 "Fix for Akka Typed Persistence" by Lukasz Sanek, thanks!
- Merged PR #52 "Provide nice Java API for clearing journal" by Christopher Batey, thanks!
- Merged PR #53 "Bump dependencies" by Artsiom Miklushou, thanks!
- Merged PR #42 "Scala 2.12.4 support" by sullis, thanks!
- Fix for issue #35 "no serializer for internal plugin messages"
- Fix for issue #35 "no serializer for internal plugin messages"
- Akka 2.5.0 -> 2.5.1
- Akka 2.4.17 -> 2.4.18
- Support for Akka 2.5.0
- Support for Akka 2.5.0-RC2
- Support for Akka 2.5.0-RC1
- Support for Akka 2.5-M2
- Changed to a simpler Time-based UUID generator.
- Changed to a simpler Time-based UUID generator.
- Fix for issue #33 'InMemoryReadJournal.eventsByPersistenceId returns deleted messages'
- Fix for issue #33 'InMemoryReadJournal.eventsByPersistenceId returns deleted messages'
- Fix for PR #31 'eventsByTag including substrings of tag' by jibbers42, thanks!
- Tags will be matched against the whole tag so tag 'foo' will be matched against 'foo' and not 'fo' or 'f' which was the previous behavior.
- Fix for PR #31 'eventsByTag including substrings of tag' by jibbers42, thanks!
- Tags will be matched against the whole tag so tag 'foo' will be matched against 'foo' and not 'fo' or 'f' which was the previous behavior.
- Akka 2.4.16 -> 2.4.17
- New versioning scheme; now using the version of Akka with the akka-persistence-inmemory version appended to it, starting from .0
- Support for Akka 2.4.16
- Support Scala 2.11.x and 2.12.x
- Changed how the byTag queries work: the requested offset is exclusive, so when you ask for Sequence(2), for example, the stream starts at Sequence(3) and so on. This supports the use case where you store the latest offset on the read side; you can put that value straight into the query and the stream will continue with the next offset, with no need to manually do the plus-one operation.
- New versioning scheme; now using the version of Akka with the akka-persistence-inmemory version appended to it, starting from .0
- Support for Akka 2.5-M1
- Support Scala 2.11.x and 2.12.x
- You need Java 8 or higher
- Please read the Akka 2.4 -> 2.5 Migration Guide
- Changed how the byTag queries work: the requested offset is exclusive, so when you ask for Sequence(2), for example, the stream starts at Sequence(3) and so on. This supports the use case where you store the latest offset on the read side; you can put that value straight into the query and the stream will continue with the next offset, with no need to manually do the plus-one operation.
- Akka 2.4.14 -> 2.4.16
- Scala 2.12.0 -> 2.12.1
- Akka 2.4.13 -> 2.4.14
- Akka 2.4.12 -> 2.4.13
- cross scala 2.11.8 and 2.12.0 build
- Implemented support for the akka.persistence.query.TimeBasedUUID offset type.
- You should set the new configuration key inmemory-read-journal.offset-mode = "uuid" (defaults to "sequence") to produce EventEnvelope2 that contain TimeBasedUUID offset fields.
- Akka 2.4.11 -> 2.4.12
- Support for the new queries CurrentEventsByTagQuery2 and EventsByTagQuery2; please read the akka-persistence-query documentation to see what has changed.
- The akka-persistence-inmemory plugin only supports the akka.persistence.query.NoOffset or akka.persistence.query.Sequence offset types.
- There is no support for the akka.persistence.query.TimeBasedUUID offset type. When used, akka-persistence-inmemory will throw an IllegalArgumentException.
- Scala 2.11.8 and 2.12.0-RC2 compatible
- Akka 2.4.10 -> 2.4.11
- Adapted version of PR #28 by Yury Gribkov - Fix bug: It doesn't adapt events read from journal, thanks!
- As event adapters are not first-class citizens of akka-persistence-query (yet), a workaround based on the configuration of akka-persistence-cassandra has been implemented in the inmemory journal, based on the work of Yury Gribkov. Basically, the query journal will look for a write-plugin entry in the inmemory-read-journal configuration of your application.conf, which must point to the writePluginId that writes the events to the journal. That write plugin has all event adapters configured and, if applicable, those event adapters will be used to adapt the events from the data model to the application model, so that, when configured correctly, you end up with application-model events in your EventEnvelope.
- Removed the non-official and never-to-be-used bulk loading interface
- Akka 2.4.9 -> Akka 2.4.10
- Fix for EventsByPersistenceId should terminate when toSequenceNumber is reached as pointed out by monktastic, thanks!
- Akka 2.4.9-RC2 -> Akka 2.4.9
- Akka 2.4.9-RC1 -> 2.4.9-RC2
- Akka 2.4.8 -> 2.4.9-RC1
- Support for the non-official bulk loading interface akka.persistence.query.scaladsl.EventWriter added. I need this interface to load massive amounts of data that will be processed by many actors, but initially I just want to create and store one or more events belonging to an actor that will handle the business rules eventually. Using actors, or a shard region for that matter, just gives too much actor life-cycle overhead, i.e. too many calls to the data store. The akka.persistence.query.scaladsl.EventWriter interface is non-official and puts all responsibility for ensuring the integrity of the journal on you. This means that when strange things happen because the data was loaded incorrectly, thereby breaking the integrity and rule set of akka-persistence, all responsibility for fixing it is on you, and not on the Akka team.
- Codacy code cleanup release.
- No need for Query Publishers with the new akka-streams API.
- Journal entry 'deleted' fixed, must be set manually.
- Akka 2.4.7 -> 2.4.8,
- Behavior of akka-persistence-query *byTag query should be up to spec,
- Refactored the inmemory plugin code base, should be more clean now.
- Removed the queries eventsByPersistenceIdAndTag and currentEventsByPersistenceIdAndTag as they are not supported by Akka natively and can be achieved by filtering the event stream.
- Implemented true async queries using the polling strategy
- Akka 2.4.6 -> 2.4.7
- Fixed issue Unable to differentiate between persistence failures and serialization issues
- Akka 2.4.4 -> 2.4.6
- Akka 2.4.3 -> 2.4.4
- Scala 2.11.7 -> 2.11.8
- Akka 2.4.2 -> 2.4.3
- Fixed issue on the query API where the offset on eventsByTag and eventsByPersistenceIdAndTag queries was not sequential
- Refactored the akka-persistence-query interfaces, integrated it back again in one jar, for jcenter deployment simplicity
- Added the appropriate Maven POM resources to be publishing to Bintray's JCenter
- Fix for propagating serialization errors to akka-persistence so that any error regarding the persistence of messages will be handled by the callback handler of the Persistent Actor: onPersistFailure.
- Better storage implementation for journal and snapshot
- Akka 2.4.2-RC3 -> 2.4.2
- akka-persistence-jdbc-query 1.0.0 -> 1.0.1
- Akka 2.4.2-RC2 -> 2.4.2-RC3
- Compatibility with Akka 2.4.2-RC2
- Refactored the akka-persistence-query extension interfaces to its own jar: "com.github.dnvriend" %% "akka-persistence-jdbc-query" % "1.0.0"
- Code is based on akka-persistence-jdbc
- Supports the following queries:
  - allPersistenceIds and currentPersistenceIds
  - eventsByPersistenceId and currentEventsByPersistenceId
  - eventsByTag and currentEventsByTag
  - eventsByPersistenceIdAndTag and currentEventsByPersistenceIdAndTag
- Supports for the javadsl query API
- Compatibility with Akka 2.4.2-RC1
- Compatibility with Akka 2.4.1
- Merged PR #17 Evgeny Shepelyuk Upgrade to AKKA 2.4.1, thanks!
- Compatibility with Akka 2.4.0
- Merged PR #13 Evgeny Shepelyuk HighestSequenceNo should be kept on message deletion, thanks!
- Should be a fix for Issue #13 - HighestSequenceNo should be kept on message deletion as per Akka issue #18559
- Compatibility with Akka 2.4.0
- Merged PR #12 Evgeny Shepelyuk Live version of eventsByPersistenceId, thanks!
- Compatibility with Akka 2.4.0
- Akka 2.4.0-RC3 -> 2.4.0
- Merged PR #10 Evgeny Shepelyuk Live version of allPersistenceIds, thanks!
- Compatibility with Akka 2.4.0-RC3
- Use the following library dependency:
"com.github.dnvriend" %% "akka-persistence-inmemory" % "1.1.3-RC3"
- Merged Issue #9 Evgeny Shepelyuk Initial implementation of Persistence Query for In Memory journal, thanks!
- Compatibility with Akka 2.4.0-RC3
- Use the following library dependency:
"com.github.dnvriend" %% "akka-persistence-inmemory" % "1.1.1-RC3"
- Merged Issue #6 Evgeny Shepelyuk Conditional ability to perform full serialization while adding messages to journal, thanks!
- Compatibility with Akka 2.4.0-RC3
- Use the following library dependency:
"com.github.dnvriend" %% "akka-persistence-inmemory" % "1.1.0-RC3"
- Compatibility with Akka 2.4.0-RC2
- Use the following library dependency:
"com.github.dnvriend" %% "akka-persistence-inmemory" % "1.1.0-RC2"
- Compatibility with Akka 2.3.13
- Akka 2.3.12 -> 2.3.13
- Compatibility with Akka 2.4.0-RC1
- Use the following library dependency:
"com.github.dnvriend" %% "akka-persistence-inmemory" % "1.1.0-RC1"
- Scala 2.11.6 -> 2.11.7
- Akka 2.3.11 -> 2.3.12
- Apache-2.0 license
- Merged Issue #2 Sebastián Ortega Regression: Fix corner case when persisted events are deleted, thanks!
- Added test for the corner case issue #1 and #2
- Refactored from the ConcurrentHashMap implementation to a pure Actor managed concurrency model
- Some refactoring, fixed some misconceptions about the behavior of Scala Futures one year ago :)
- Akka 2.3.6 -> 2.3.11
- Scala 2.11.1 -> 2.11.6
- Scala 2.10.4 -> 2.10.5
- Merged Issue #1 Sebastián Ortega Fix corner case when persisted events are deleted, thanks!
- Moved to bintray
- Akka 2.3.4 -> 2.3.6
- Initial Release
Have fun!