Class: Karafka::Pro::ScheduledMessages::Consumer
- Inherits: BaseConsumer
  - Object
  - BaseConsumer
  - Karafka::Pro::ScheduledMessages::Consumer
- Defined in: lib/karafka/pro/scheduled_messages/consumer.rb
Overview
Consumer that coordinates scheduling of messages when the time comes
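This consumer is the dispatch side of the Scheduled Messages feature: producers wrap regular messages in a scheduling envelope, and this consumer dispatches them once their epoch is due. A minimal usage sketch, assuming the Scheduled Messages routing DSL and proxy API; the topic names and payload are illustrative:

# In karafka.rb: creates the schedules topics handled by this consumer
class KarafkaApp < Karafka::App
  routes.draw do
    scheduled_messages('scheduled_messages_topic')
  end
end

# Wrap a regular message in a scheduling envelope and produce it
enveloped = Karafka::Pro::ScheduledMessages.schedule(
  message: { topic: 'orders', payload: '{"id":1}' },
  epoch: Time.now.to_i + 300, # dispatch roughly five minutes from now
  envelope: { topic: 'scheduled_messages_topic' }
)

Karafka.producer.produce_async(enveloped)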
Instance Attribute Summary
Attributes inherited from BaseConsumer
#client, #coordinator, #id, #messages, #producer
Instance Method Summary
- #consume ⇒ Object
  Processes messages and runs dispatch (via tick) if needed.
- #eofed ⇒ Object
  Runs end of file operations.
- #initialized ⇒ Object
  Prepares the initial state of all stateful components.
- #tick ⇒ Object
  Performs periodic operations when no new data is provided to the topic partition.
Methods inherited from BaseConsumer
#initialize, #on_after_consume, #on_before_consume, #on_before_schedule_consume, #on_before_schedule_eofed, #on_before_schedule_idle, #on_before_schedule_revoked, #on_before_schedule_shutdown, #on_consume, #on_eofed, #on_idle, #on_initialized, #on_revoked, #on_shutdown
Constructor Details
This class inherits a constructor from Karafka::BaseConsumer
Instance Method Details
#consume ⇒ Object
Processes messages and runs dispatch (via tick) if needed
# File 'lib/karafka/pro/scheduled_messages/consumer.rb', line 29

def consume
  return if reload!

  messages.each do |message|
    SchemaValidator.call(message)
    process_message(message)
  end

  @states_reporter.call

  eofed if eofed?

  # Unless the given day's data is fully loaded, we should not dispatch any notifications,
  # nor should we mark messages.
  return unless @state.loaded?

  tick

  # Despite the fact that we need to load the whole stream once a day, we do mark.
  # We mark as consumed for two main reasons:
  #   - by marking we can indicate to the Web UI and other monitoring tools that we have a
  #     potential real lag with loading schedules in case a lot of messages were added to
  #     the schedules topic
  #   - we prevent a situation where there is no notion of this consumer group in the
  #     reporting, allowing us to establish "presence"
  mark_as_consumed(messages.last)
end
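The per-message processing referenced above is private to the consumer. A hypothetical sketch (not the actual library code, and the buffer API shown is assumed) of what it conceptually does, based on the tombstone and daily-buffer comments in this class:

# Hypothetical: tombstones (nil payload) cancel a schedule, anything else
# is stored in the daily buffer until its dispatch epoch is due
def process_message(message)
  if message.payload.nil?
    # The schedule was already dispatched or cancelled; forget it
    @daily_buffer.delete(message.key)
  else
    # Keep the schedule around; #tick will dispatch it when due
    @daily_buffer << message
  end
end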
#eofed ⇒ Object
Runs end of file operations
# File 'lib/karafka/pro/scheduled_messages/consumer.rb', line 58

def eofed
  return if reload!

  # If the end of the partition is reached, it always means all data is loaded
  @state.loaded!

  @states_reporter.call
end
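Note that #eofed only runs when partition EOF events are emitted by librdkafka. The scheduled_messages routing DSL presumably enables this automatically; in a manual setup the relevant flag would be set in the kafka scope. A minimal sketch:

class KarafkaApp < Karafka::App
  setup do |config|
    config.kafka = {
      'bootstrap.servers': '127.0.0.1:9092',
      # librdkafka emits an event when the end of a partition is reached,
      # which is what allows Karafka to invoke #eofed
      'enable.partition.eof': true
    }
  end
end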
#initialized ⇒ Object
Prepares the initial state of all stateful components
# File 'lib/karafka/pro/scheduled_messages/consumer.rb', line 20

def initialized
  clear!

  # Max epoch is always moving forward with time, never backwards, hence we do not
  # reset it at all.
  @max_epoch = MaxEpoch.new
  @state = State.new(nil)
end
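The comment above is the key invariant: the max epoch only moves forward, presumably so a reload of the day's data does not re-dispatch schedules an earlier run already handled. A hypothetical illustration of such a monotonic tracker (MonotonicMax is not the library's MaxEpoch implementation):

# Hypothetical stand-in for MaxEpoch: updates only ever move forward
class MonotonicMax
  attr_reader :current

  def initialize
    @current = -1
  end

  def update(epoch)
    return if epoch.nil?

    @current = epoch if epoch > @current
  end
end

tracker = MonotonicMax.new
tracker.update(100)
tracker.update(50) # ignored: time never moves backwards
tracker.current    # => 100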
#tick ⇒ Object
Performs periodic operations when no new data is provided to the topic partition
# File 'lib/karafka/pro/scheduled_messages/consumer.rb', line 67

def tick
  return if reload!

  # We should not dispatch any data until the whole state is loaded. We need to make sure
  # that all tombstone events are loaded so we do not duplicate dispatches.
  return unless @state.loaded?

  keys = []
  epochs = []

  # We first collect all the data for dispatch, and only after the dispatch (which is
  # synchronous) succeeds do we remove those messages from the daily buffer and update
  # the max epoch. Since only the dispatch itself is volatile and can crash with
  # timeouts, etc., we need to be sure it went through prior to deleting those messages
  # from the daily buffer. That way we ensure at-least-once delivery and, in case of a
  # transactional producer, exactly-once delivery.
  @daily_buffer.for_dispatch do |epoch, message|
    epochs << epoch
    keys << message.key
    @dispatcher << message
  end

  @dispatcher.flush

  @max_epoch.update(epochs.max)

  keys.each { |key| @daily_buffer.delete(key) }

  @states_reporter.call
end
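The collect-then-flush-then-delete ordering above is what gives at-least-once semantics: a crash between the flush and the deletion can only re-dispatch, never drop. A minimal sketch of the exactly-once variant mentioned in the comment, assuming WaterDrop's transactional producer; the topic, transactional id, and the dispatchable collection are illustrative:

producer = WaterDrop::Producer.new do |config|
  config.kafka = {
    'bootstrap.servers': '127.0.0.1:9092',
    # Makes the producer transactional so a batch commits atomically
    'transactional.id': 'scheduled-messages-dispatcher'
  }
end

# Either every dispatch in the block is committed or none are
producer.transaction do
  dispatchable.each do |message|
    producer.produce_async(topic: message.fetch(:topic), payload: message.fetch(:payload))
  end
end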