Class: Karafka::Web::Management::Actions::CreateTopics

Inherits: Base • Object
Defined in:
lib/karafka/web/management/actions/create_topics.rb

Overview

Creates all the needed topics (if they don’t exist). It does not populate data.

Instance Method Summary

Instance Method Details

#call(replication_factor) ⇒ Object

Note:

The order in which these topics are created is important. To support zero-downtime bootstrap, we use the presence of the consumers' states topic and its initial state as an indicator that the setup went as expected: if the consumers' states topic exists and contains the needed data, everything went as expected and the topics created before it also exist (since no error was raised).

Runs the creation process

Parameters:

  • replication_factor (Integer)

    replication factor for Web-UI topics
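To illustrate the ordering guarantee described in the note, here is a minimal sketch (not the library's implementation) of the marker pattern: the consumers' states topic is created last, so its presence implies every earlier topic was created successfully. The topic names below are the Karafka Web UI defaults.

```ruby
# Illustrative sketch only: the states topic acts as a completion marker
creation_order = %w[
  karafka_errors
  karafka_consumers_reports
  karafka_consumers_metrics
  karafka_consumers_states
]

created = []

creation_order.each do |topic|
  # A creation failure here would abort before the marker topic exists
  created << topic
end

# Bootstrap is considered complete only once the marker topic is present
puts created.include?('karafka_consumers_states') # => true
```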



# File 'lib/karafka/web/management/actions/create_topics.rb', line 19

def call(replication_factor)
  consumers_states_topic = ::Karafka::Web.config.topics.consumers.states
  consumers_metrics_topic = ::Karafka::Web.config.topics.consumers.metrics
  consumers_reports_topic = ::Karafka::Web.config.topics.consumers.reports
  errors_topic = ::Karafka::Web.config.topics.errors

  if existing_topics_names.include?(errors_topic)
    exists(errors_topic)
  else
    creating(errors_topic)
    # All the errors will be dispatched here
    # This topic can have multiple partitions but we go with one by default. A single
    # Ruby process should not crash that often and if there is an expectation of a higher
    # volume of errors, this can be changed by the end user
    ::Karafka::Admin.create_topic(
      errors_topic,
      1,
      replication_factor,
      # Remove really old errors (older than 3 months just to preserve space)
      {
        'cleanup.policy': 'delete',
        'retention.ms': 3 * 31 * 24 * 60 * 60 * 1_000 # 3 months
      }
    )
    created(errors_topic)
  end

  if existing_topics_names.include?(consumers_reports_topic)
    exists(consumers_reports_topic)
  else
    creating(consumers_reports_topic)
    # This topic needs to have one partition
    ::Karafka::Admin.create_topic(
      consumers_reports_topic,
      1,
      replication_factor,
      # We do not need to store this data for longer than 1 day as this data is
      # only used to materialize the end states.
      # On the other hand, we do not want it to be really short-lived, because in
      # case of a consumer crash, we may want to use this info to catch up and
      # backfill the state.
      #
      # If it's not consumed because no processes are running, it also usually
      # means there's no data to consume, because no karafka servers report
      {
        'cleanup.policy': 'delete',
        'retention.ms': 24 * 60 * 60 * 1_000 # 1 day
      }
    )
    created(consumers_reports_topic)
  end

  if existing_topics_names.include?(consumers_metrics_topic)
    exists(consumers_metrics_topic)
  else
    creating(consumers_metrics_topic)
    # This topic needs to have one partition
    # Same as states - only most recent is relevant as it is a materialized state
    ::Karafka::Admin.create_topic(
      consumers_metrics_topic,
      1,
      replication_factor,
      {
        'cleanup.policy': 'compact',
        'retention.ms': 60 * 60 * 1_000, # 1h
        'segment.ms': 24 * 60 * 60 * 1_000, # 1 day
        'segment.bytes': 104_857_600 # 100MB
      }
    )
    created(consumers_metrics_topic)
  end

  # Create only if needed
  if existing_topics_names.include?(consumers_states_topic)
    exists(consumers_states_topic)
  else
    creating(consumers_states_topic)
    # This topic needs to have one partition
    ::Karafka::Admin.create_topic(
      consumers_states_topic,
      1,
      replication_factor,
      # We care only about the most recent state, previous are irrelevant. So we can
      # easily compact after one minute. We do not use this beyond the most recent
      # collective state, hence it all can easily go away. We also limit the segment
      # size to at most 100MB not to use more space ever.
      {
        'cleanup.policy': 'compact',
        'retention.ms': 60 * 60 * 1_000,
        'segment.ms': 24 * 60 * 60 * 1_000, # 1 day
        'segment.bytes': 104_857_600 # 100MB
      }
    )
    created(consumers_states_topic)
  end
end
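The retention and segment values in the listing are plain millisecond arithmetic; a quick sanity check of the figures used above:

```ruby
# Sanity check of the millisecond values used in the topic configs above
three_months_ms = 3 * 31 * 24 * 60 * 60 * 1_000 # errors topic retention.ms
one_day_ms      = 24 * 60 * 60 * 1_000          # reports retention.ms / segment.ms
one_hour_ms     = 60 * 60 * 1_000               # states/metrics retention.ms
segment_bytes   = 104_857_600                   # 100 MiB segment.bytes cap

puts three_months_ms # => 8035200000
puts one_day_ms      # => 86400000
puts one_hour_ms     # => 3600000
puts segment_bytes == 100 * 1024 * 1024 # => true
```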