akka.persistence.query.typed.javadsl
EventsBySliceFirehoseQuery
Companion object EventsBySliceFirehoseQuery
final class EventsBySliceFirehoseQuery extends ReadJournal with EventsBySliceQuery with EventsBySliceStartingFromSnapshotsQuery with EventTimestampQuery with LoadEventQuery with LatestEventTimestampQuery
Instance Constructors
- new EventsBySliceFirehoseQuery(delegate: scaladsl.EventsBySliceFirehoseQuery)
Value Members
- def eventsBySlices[Event](entityType: String, minSlice: Int, maxSlice: Int, offset: Offset): Source[EventEnvelope[Event], NotUsed]
Query events for given slices. A slice is deterministically defined based on the persistence id. The purpose is to evenly distribute all persistence ids over the slices.
The consumer can keep track of its current position in the event stream by storing the offset and restart the query from a given offset after a crash/restart.
The exact meaning of the offset depends on the journal and must be documented by the read journal plugin. It may be a sequential id number that uniquely identifies the position of each event within the event stream. Distributed data stores cannot easily support those semantics and they may use a weaker meaning. For example it may be a timestamp (taken when the event was created or stored). Timestamps are not unique and not strictly ordered, since clocks on different machines may not be synchronized.
In strongly consistent stores, where the offset is unique and strictly ordered, the stream should start from the next event after the offset. Otherwise, the read journal should ensure that between an invocation that returned an event with the given offset, and this invocation, no events are missed. Depending on the journal implementation, this may mean that this invocation will return events that were already returned by the previous invocation, including the event with the passed in offset.
The returned event stream should be ordered by offset if possible, but this can also be difficult to fulfill for a distributed data store. The order must be documented by the read journal plugin.
The stream is not completed when it reaches the end of the currently stored events, but it continues to push new events when new events are persisted. A corresponding query that is completed when it reaches the end of the currently stored events is provided by CurrentEventsBySliceQuery.currentEventsBySlices.
- Definition Classes
- EventsBySliceFirehoseQuery → EventsBySliceQuery
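As a minimal usage sketch: assuming a typed ActorSystem named system, a hypothetical AccountEvent type and "Account" entity type, and that queries refers to an EventsBySliceFirehoseQuery retrieved via PersistenceQuery (see the class description), a consumer of one slice range might look like this. The offset handling is application specific.

```java
Source<EventEnvelope<AccountEvent>, NotUsed> events =
    queries.eventsBySlices("Account", 0, 511, Offset.noOffset());

events.runForeach(
    env -> {
      // handle env.event(), then store env.offset() durably so the
      // query can be resumed from that position after a restart
    },
    system);
```

Offset.noOffset() is used on first start; after a restart the previously stored offset is passed instead.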
- def eventsBySlicesStartingFromSnapshots[Snapshot, Event](entityType: String, minSlice: Int, maxSlice: Int, offset: Offset, transformSnapshot: Function[Snapshot, Event]): Source[EventEnvelope[Event], NotUsed]
Same as EventsBySliceQuery but with the purpose to use snapshots as starting points and thereby reducing the number of events that have to be loaded. This can be useful if the consumer starts from zero without any previously processed offset, or if it has been disconnected for a long while and its offset is far behind.
- Definition Classes
- EventsBySliceFirehoseQuery → EventsBySliceStartingFromSnapshotsQuery
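A sketch of the snapshot-based variant, where a hypothetical AccountSnapshot is transformed into a hypothetical starting event (type names and the transformation are illustrative; queries is an EventsBySliceFirehoseQuery as above):

```java
Source<EventEnvelope<AccountEvent>, NotUsed> events =
    queries.eventsBySlicesStartingFromSnapshots(
        "Account",
        0,
        511,
        Offset.noOffset(),
        (AccountSnapshot snapshot) -> AccountEvent.fromSnapshot(snapshot));
```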
- def latestEventTimestamp(entityType: String, minSlice: Int, maxSlice: Int): CompletionStage[Optional[Instant]]
- Definition Classes
- EventsBySliceFirehoseQuery → LatestEventTimestampQuery
- def loadEnvelope[Event](persistenceId: String, sequenceNr: Long): CompletionStage[EventEnvelope[Event]]
Load a single event on demand. The CompletionStage is completed with a NoSuchElementException if the event for the given persistenceId and sequenceNr doesn't exist.
- Definition Classes
- EventsBySliceFirehoseQuery → LoadEventQuery
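A sketch of loading one event and handling the missing-event case (the persistence id, sequence number, and AccountEvent type are illustrative; queries is an EventsBySliceFirehoseQuery as above):

```java
CompletionStage<EventEnvelope<AccountEvent>> loaded =
    queries.loadEnvelope("Account|a1", 17L);

loaded.whenComplete(
    (env, failure) -> {
      if (failure != null) {
        // completed with NoSuchElementException when the event doesn't exist
      } else {
        // env.event() is the loaded event
      }
    });
```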
- def sliceForPersistenceId(persistenceId: String): Int
- Definition Classes
- EventsBySliceFirehoseQuery → EventsBySliceStartingFromSnapshotsQuery → EventsBySliceQuery
- def sliceRanges(numberOfRanges: Int): List[Pair[Integer, Integer]]
- Definition Classes
- EventsBySliceFirehoseQuery → EventsBySliceStartingFromSnapshotsQuery → EventsBySliceQuery
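The mapping from persistence id to slice is deterministic. As a self-contained sketch of the scheme (assuming 1024 slices and hash-based assignment, abs(hashCode % numberOfSlices), which is how Akka distributes persistence ids; this plain-Java demo is independent of the Akka API):

```java
import java.util.ArrayList;
import java.util.List;

public class SliceDemo {
  // Assumed default number of slices; each persistence id maps to exactly one slice.
  static final int NUMBER_OF_SLICES = 1024;

  // Deterministic slice for a persistence id: abs(hashCode % numberOfSlices).
  static int sliceForPersistenceId(String persistenceId) {
    return Math.abs(persistenceId.hashCode() % NUMBER_OF_SLICES);
  }

  // Split the full slice space into numberOfRanges equal [min, max] ranges,
  // e.g. 4 ranges -> [0-255], [256-511], [512-767], [768-1023].
  static List<int[]> sliceRanges(int numberOfRanges) {
    int rangeSize = NUMBER_OF_SLICES / numberOfRanges;
    List<int[]> ranges = new ArrayList<>();
    for (int i = 0; i < numberOfRanges; i++) {
      ranges.add(new int[] {i * rangeSize, (i + 1) * rangeSize - 1});
    }
    return ranges;
  }

  public static void main(String[] args) {
    // The same persistence id always lands in the same slice.
    System.out.println("slice=" + sliceForPersistenceId("Account|a1"));
    List<int[]> ranges = sliceRanges(4);
    System.out.println("ranges=" + ranges.size());
    System.out.println("first=" + ranges.get(0)[0] + "-" + ranges.get(0)[1]);
    System.out.println("last=" + ranges.get(3)[0] + "-" + ranges.get(3)[1]);
  }
}
```

Each consumer subscribes to one of the ranges, so together the ranges cover all slices without overlap.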
- def timestampOf(persistenceId: String, sequenceNr: Long): CompletionStage[Optional[Instant]]
- Definition Classes
- EventsBySliceFirehoseQuery → EventTimestampQuery
This wrapper of EventsBySliceQuery gives better scalability when many consumers retrieve the same events, for example many Projections of the same entity type. The purpose is to share the stream of events from the database and fan out to connected consumer streams, resulting in fewer queries and less loading of events from the database.
It is retrieved with:
EventsBySliceQuery queries = PersistenceQuery.get(system).getReadJournalFor(EventsBySliceQuery.class, EventsBySliceFirehoseQuery.Identifier());
Corresponding Scala API is in akka.persistence.query.typed.scaladsl.EventsBySliceFirehoseQuery.
Configuration settings can be defined in the configuration section with the absolute path corresponding to the identifier, which is "akka.persistence.query.events-by-slice-firehose" for the default EventsBySliceFirehoseQuery#Identifier. See reference.conf.
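As an illustration, the firehose reads from an underlying events-by-slices query plugin selected with the delegate-query-plugin-id setting; a sketch for application.conf (the R2DBC plugin id shown is just an example, and the full set of keys and defaults is listed in reference.conf):

```
akka.persistence.query.events-by-slice-firehose {
  # the underlying EventsBySliceQuery plugin the firehose delegates to
  delegate-query-plugin-id = "akka.persistence.r2dbc.query"
}
```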