final class EventsBySliceFirehoseQuery extends ReadJournal with EventsBySliceQuery with EventsBySliceStartingFromSnapshotsQuery with EventTimestampQuery with LoadEventQuery with LatestEventTimestampQuery

This wrapper of EventsBySliceQuery gives better scalability when many consumers retrieve the same events, for example many Projections of the same entity type. The purpose is to share the stream of events from the database and fan out to connected consumer streams. This results in fewer queries and less loading of events from the database.

It is retrieved with:

val queries = PersistenceQuery(system).readJournalFor[EventsBySliceFirehoseQuery](EventsBySliceFirehoseQuery.Identifier)

The corresponding Java API is in akka.persistence.query.typed.javadsl.EventsBySliceFirehoseQuery.

Configuration settings can be defined in the configuration section with the absolute path corresponding to the identifier, which is "akka.persistence.query.events-by-slice-firehose" for the default EventsBySliceFirehoseQuery#Identifier. See reference.conf.
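
For example, this section would typically point the firehose at the underlying events-by-slice query plugin it delegates to. Below is a minimal sketch of overriding the section programmatically when the actor system is created; the delegate-query-plugin-id key and the R2DBC plugin id are assumptions that should be checked against reference.conf and the documentation of the underlying plugin.

import com.typesafe.config.ConfigFactory

// Sketch: override the firehose configuration section; the actual setting names are listed in reference.conf.
val config = ConfigFactory.parseString("""
  akka.persistence.query.events-by-slice-firehose {
    # illustrative key and value: the underlying events-by-slice plugin to delegate to
    delegate-query-plugin-id = "akka.persistence.r2dbc.query"
  }
""").withFallback(ConfigFactory.load())

// Pass config to ActorSystem(...) when starting the application.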

Annotations
@nowarn()
Source
EventsBySliceFirehoseQuery.scala
Type Hierarchy
Inherited
  1. EventsBySliceFirehoseQuery
  2. LatestEventTimestampQuery
  3. LoadEventQuery
  4. EventTimestampQuery
  5. EventsBySliceStartingFromSnapshotsQuery
  6. EventsBySliceQuery
  7. ReadJournal
  8. AnyRef
  9. Any

Instance Constructors

  1. new EventsBySliceFirehoseQuery(system: ExtendedActorSystem, config: Config, cfgPath: String)

Value Members

  1. def eventsBySlices[Event](entityType: String, minSlice: Int, maxSlice: Int, offset: Offset): Source[EventEnvelope[Event], NotUsed]

    Query events for given slices.

    Query events for given slices. A slice is deterministically defined based on the persistence id. The purpose is to evenly distribute all persistence ids over the slices.

    The consumer can keep track of its current position in the event stream by storing the offset and restarting the query from a given offset after a crash/restart.

    The exact meaning of the offset depends on the journal and must be documented by the read journal plugin. It may be a sequential id number that uniquely identifies the position of each event within the event stream. Distributed data stores cannot easily support those semantics and they may use a weaker meaning. For example it may be a timestamp (taken when the event was created or stored). Timestamps are not unique and not strictly ordered, since clocks on different machines may not be synchronized.

    In strongly consistent stores, where the offset is unique and strictly ordered, the stream should start from the next event after the offset. Otherwise, the read journal should ensure that between an invocation that returned an event with the given offset, and this invocation, no events are missed. Depending on the journal implementation, this may mean that this invocation will return events that were already returned by the previous invocation, including the event with the passed in offset.

    The returned event stream should be ordered by offset if possible, but this can also be difficult to fulfill for a distributed data store. The order must be documented by the read journal plugin.

    The stream is not completed when it reaches the end of the currently stored events, but it continues to push new events when new events are persisted. The corresponding query that is completed when it reaches the end of the currently stored events is provided by CurrentEventsBySliceQuery.currentEventsBySlices. A usage sketch of this query is given under Usage Examples below.

    Definition Classes
    EventsBySliceQuery
  2. def eventsBySlicesStartingFromSnapshots[Snapshot, Event](entityType: String, minSlice: Int, maxSlice: Int, offset: Offset, transformSnapshot: (Snapshot) => Event): Source[EventEnvelope[Event], NotUsed]

    Same as EventsBySliceQuery but with the purpose of using snapshots as starting points, thereby reducing the number of events that have to be loaded.

    Same as EventsBySliceQuery but with the purpose of using snapshots as starting points, thereby reducing the number of events that have to be loaded. This can be useful if the consumer starts from zero without any previously processed offset, or if it has been disconnected for a long while and its offset is far behind. A sketch with a transformSnapshot function is included under Usage Examples below.

    Definition Classes
    EventsBySliceStartingFromSnapshotsQuery
  3. def latestEventTimestamp(entityType: String, minSlice: Int, maxSlice: Int): Future[Option[Instant]]
    Definition Classes
    LatestEventTimestampQuery
  4. def loadEnvelope[Event](persistenceId: String, sequenceNr: Long): Future[EventEnvelope[Event]]

    Load a single event on demand.

    Load a single event on demand. The Future is completed with a NoSuchElementException if the event for the given persistenceId and sequenceNr doesn't exist. A small usage sketch is included under Usage Examples below.

    Definition Classes
    LoadEventQuery
  5. def sliceForPersistenceId(persistenceId: String): Int
    Definition Classes
    EventsBySliceFirehoseQuery → EventsBySliceStartingFromSnapshotsQuery → EventsBySliceQuery
  6. def sliceRanges(numberOfRanges: Int): Seq[Range]
    Definition Classes
    EventsBySliceFirehoseQuery → EventsBySliceStartingFromSnapshotsQuery → EventsBySliceQuery
  7. def timestampOf(persistenceId: String, sequenceNr: Long): Future[Option[Instant]]
    Definition Classes
    EventTimestampQuery
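
Usage Examples

The following is a minimal sketch of the eventsBySlices query described above: the slice space is split with sliceRanges, one stream is started per range, and the offset is stored after each processed envelope so that the query can be resumed after a crash/restart. The ShoppingCart entity type, the String event payload type and the loadOffset/saveOffset helpers are hypothetical placeholders, not part of this API.

import akka.NotUsed
import akka.actor.typed.ActorSystem
import akka.persistence.query.{ NoOffset, Offset, PersistenceQuery }
import akka.persistence.query.typed.EventEnvelope
import akka.persistence.query.typed.scaladsl.EventsBySliceFirehoseQuery
import akka.stream.scaladsl.Source

object FirehoseConsumerExample {
  // Hypothetical offset store, standing in for a real projection offset store.
  def loadOffset(rangeIndex: Int): Offset = NoOffset
  def saveOffset(rangeIndex: Int, offset: Offset): Unit = ()

  def runConsumers(implicit system: ActorSystem[_]): Unit = {
    val queries = PersistenceQuery(system)
      .readJournalFor[EventsBySliceFirehoseQuery](EventsBySliceFirehoseQuery.Identifier)

    // Split the slice space into 4 ranges and run one stream per range.
    val ranges = queries.sliceRanges(numberOfRanges = 4)

    ranges.zipWithIndex.foreach { case (range, i) =>
      val envelopes: Source[EventEnvelope[String], NotUsed] =
        queries.eventsBySlices[String]("ShoppingCart", range.min, range.max, loadOffset(i))

      envelopes.runForeach { env =>
        // Process env.eventOption here, then store the offset so that the stream
        // can be resumed from this position after a restart.
        saveOffset(i, env.offset)
      }
    }
  }
}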
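
Similarly, a sketch of eventsBySlicesStartingFromSnapshots, assuming the same imports as above. The cart state and event types and the mapping in transformSnapshot are illustrative; transformSnapshot converts the snapshot into an event that serves as the starting point of the stream.

object FirehoseSnapshotExample {
  // Illustrative snapshot (state) and event types for the hypothetical ShoppingCart entity.
  final case class CartState(items: Map[String, Int])
  sealed trait CartEvent
  final case class CartRecreated(items: Map[String, Int]) extends CartEvent

  def consumeStartingFromSnapshots(minSlice: Int, maxSlice: Int)(implicit system: ActorSystem[_]): Unit = {
    val queries = PersistenceQuery(system)
      .readJournalFor[EventsBySliceFirehoseQuery](EventsBySliceFirehoseQuery.Identifier)

    queries
      .eventsBySlicesStartingFromSnapshots[CartState, CartEvent](
        entityType = "ShoppingCart",
        minSlice = minSlice,
        maxSlice = maxSlice,
        offset = NoOffset,
        // the snapshot becomes the starting point, so fewer events have to be loaded
        transformSnapshot = state => CartRecreated(state.items))
      .runForeach(env => println(env.eventOption))
  }
}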
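
Finally, a sketch of loadEnvelope and timestampOf, assuming the same akka imports as above plus java.time.Instant and scala.concurrent; the error handling reflects the NoSuchElementException behaviour described above, and the printed messages are illustrative.

import java.time.Instant
import scala.concurrent.{ ExecutionContext, Future }

object FirehoseLoadExample {
  def loadSingleEvent(persistenceId: String, sequenceNr: Long)(implicit system: ActorSystem[_]): Unit = {
    implicit val ec: ExecutionContext = system.executionContext

    val queries = PersistenceQuery(system)
      .readJournalFor[EventsBySliceFirehoseQuery](EventsBySliceFirehoseQuery.Identifier)

    val envelope: Future[EventEnvelope[Any]] = queries.loadEnvelope[Any](persistenceId, sequenceNr)
    val timestamp: Future[Option[Instant]] = queries.timestampOf(persistenceId, sequenceNr)

    envelope.failed.foreach {
      case _: NoSuchElementException => println(s"No event for $persistenceId with seqNr $sequenceNr")
      case other                     => println(s"Loading failed: $other")
    }
    timestamp.foreach(maybeTime => println(s"Event written at $maybeTime"))
  }
}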