Class: Rdkafka::Producer

Inherits:
Object
Includes:
Helpers::Time
Defined in:
lib/rdkafka/producer.rb,
lib/rdkafka/producer/delivery_handle.rb,
lib/rdkafka/producer/delivery_report.rb

Overview

A producer for Kafka messages. To create a producer, set up a Config and call #producer on it.
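For illustration, a minimal setup could look like the sketch below; the broker address is a placeholder.

require "rdkafka"

# "bootstrap.servers" points at a placeholder broker address.
config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092")
producer = config.producer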

Defined Under Namespace

Classes: DeliveryHandle, DeliveryReport

Instance Attribute Summary

Instance Method Summary

Methods included from Helpers::Time

#monotonic_now

Instance Attribute Details

#delivery_callback=(callback) ⇒ nil

Set a callback that will be called every time a message is successfully produced. The callback is called with a DeliveryReport and a DeliveryHandle.

Parameters:

  • callback (Proc, #call)

    The callback

Returns:

  • (nil)

Raises:

  • (TypeError)


# File 'lib/rdkafka/producer.rb', line 69

def delivery_callback=(callback)
  raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call)
  @delivery_callback = callback
  @delivery_callback_arity = arity(callback)
end
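
As a sketch, a callback can take one argument (the delivery report) or two (the report and the handle); the output below is illustrative only.

producer.delivery_callback = ->(report) do
  # One-argument callable: receives only the delivery report.
  puts "Delivered to partition #{report.partition} at offset #{report.offset}"
end

producer.delivery_callback = ->(report, handle) do
  # Two-argument callable: also receives the delivery handle.
  puts "Delivered message labeled #{handle.label.inspect}"
end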

Instance Method Details

#arity(callback) ⇒ Integer

Figures out the arity of a given block/method

Parameters:

  • callback (#call, Proc)

Returns:

  • (Integer)

    arity of the provided block/method



# File 'lib/rdkafka/producer.rb', line 292

def arity(callback)
  return callback.arity if callback.respond_to?(:arity)

  callback.method(:call).arity
end
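
An illustrative sketch of the values this helper computes for different callables; DeliveryListener is a hypothetical class used only for illustration.

reporter = ->(report) { puts report }
reporter.arity                            # => 1 (procs respond to #arity directly)

class DeliveryListener
  def call(report, handle); end
end
DeliveryListener.new.method(:call).arity  # => 2 (fallback via #method(:call))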

#call_delivery_callback(delivery_report, delivery_handle) ⇒ Object

Calls the delivery callback (if one is registered)

Parameters:

  • delivery_report (DeliveryReport)

    The delivery report

  • delivery_handle (DeliveryHandle)

    The delivery handle

# File 'lib/rdkafka/producer.rb', line 275

def call_delivery_callback(delivery_report, delivery_handle)
  return unless @delivery_callback

  case @delivery_callback_arity
  when 0
    @delivery_callback.call
  when 1
    @delivery_callback.call(delivery_report)
  else
    @delivery_callback.call(delivery_report, delivery_handle)
  end
end

#closeObject

Close this producer and wait for the internal poll queue to empty.



# File 'lib/rdkafka/producer.rb', line 76

def close
  return if closed?
  ObjectSpace.undefine_finalizer(self)
  @native_kafka.close
end

#closed?Boolean

Whether this producer has been closed

Returns:

  • (Boolean)


# File 'lib/rdkafka/producer.rb', line 83

def closed?
  @native_kafka.closed?
end

#flush(timeout_ms = 5_000) ⇒ Boolean

Note:

We raise an exception for other errors because, based on the librdkafka docs, there should be no other errors.

Note:

For timed_out we do not raise an error, to keep this backwards compatible.

Wait until all outstanding producer requests are completed, with the given timeout in milliseconds. Call this before closing a producer to ensure delivery of all messages.

Parameters:

  • timeout_ms (Integer) (defaults to: 5_000)

    how long to wait for all messages to be flushed, in milliseconds

Returns:

  • (Boolean)

    true if all data was flushed, false if there are still outgoing messages after the timeout



# File 'lib/rdkafka/producer.rb', line 98

def flush(timeout_ms=5_000)
  closed_producer_check(__method__)

  code = nil

  @native_kafka.with_inner do |inner|
    code = Rdkafka::Bindings.rd_kafka_flush(inner, timeout_ms)
  end

  # Early skip not to build the error message
  return true if code.zero?

  error = Rdkafka::RdkafkaError.new(code)

  return false if error.code == :timed_out

  raise(error)
end
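
A sketch of a typical shutdown sequence, assuming a 10_000 ms timeout is acceptable for the application:

unless producer.flush(10_000)
  # Some messages were still outstanding after the timeout; they may be
  # lost once the producer is closed.
end

producer.close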

#nameString

Returns producer name.

Returns:

  • (String)

    producer name



# File 'lib/rdkafka/producer.rb', line 57

def name
  @name ||= @native_kafka.with_inner do |inner|
    ::Rdkafka::Bindings.rd_kafka_name(inner)
  end
end
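
For illustration only; the exact name is assigned by librdkafka and the value shown below is just an example.

producer.name # => e.g. "rdkafka#producer-1"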

#partition_count(topic) ⇒ Integer

Note:

If ‘allow.auto.create.topics’ is set to true in the broker, the topic will be auto-created after returning nil.

Note:

We cache the partition count for a given topic for a given period of time. This prevents querying the broker for the count with every message when partition_key is used. Instead, we query at most once every 30 seconds when we have a valid partition count, or every 5 seconds when we were not able to obtain the number of partitions.

Partition count for a given topic.

Parameters:

  • topic (String)

    The topic name.

Returns:

  • (Integer)

    partition count for a given topic or -1 if it could not be obtained.



# File 'lib/rdkafka/producer.rb', line 155

def partition_count(topic)
  closed_producer_check(__method__)

  @_partitions_count_cache.delete_if do |_, cached|
    monotonic_now - cached.first > PARTITIONS_COUNT_TTL
  end

  @_partitions_count_cache[topic].last
end
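
A sketch; "example_topic" is a placeholder topic name.

count = producer.partition_count("example_topic")
# -1 means the partition count could not be obtained.
puts "example_topic has #{count} partitions" if count.positive?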

#produce(topic:, payload: nil, key: nil, partition: nil, partition_key: nil, timestamp: nil, headers: nil, label: nil) ⇒ DeliveryHandle

Produces a message to a Kafka topic. The message is added to rdkafka’s queue; call wait on the returned delivery handle to make sure it is delivered.

When no partition is specified the underlying Kafka library picks a partition based on the key. If no key is specified, a random partition will be used. When a timestamp is provided this is used instead of the auto-generated timestamp.

Parameters:

  • topic (String)

    The topic to produce to

  • payload (String, nil) (defaults to: nil)

    The message’s payload

  • key (String, nil) (defaults to: nil)

    The message’s key

  • partition (Integer, nil) (defaults to: nil)

    Optional partition to produce to

  • partition_key (String, nil) (defaults to: nil)

    Optional partition key based on which the partition is assigned

  • timestamp (Time, Integer, nil) (defaults to: nil)

    Optional timestamp of this message. Integer timestamp is in milliseconds since Jan 1 1970.

  • headers (Hash<String,String>) (defaults to: nil)

    Optional message headers

  • label (Object, nil) (defaults to: nil)

    A label that can be assigned when producing a message; it will be part of the delivery handle and the delivery report

Returns:

  • (DeliveryHandle)

    Delivery handle that can be used to wait for the result of producing this message

Raises:

  • (RdkafkaError)

    When adding the message to rdkafka’s queue failed



# File 'lib/rdkafka/producer.rb', line 182

def produce(topic:, payload: nil, key: nil, partition: nil, partition_key: nil, timestamp: nil, headers: nil, label: nil)
  closed_producer_check(__method__)

  # Start by checking and converting the input

  # Get payload length
  payload_size = if payload.nil?
                   0
                 else
                   payload.bytesize
                 end

  # Get key length
  key_size = if key.nil?
               0
             else
               key.bytesize
             end

  if partition_key
    partition_count = partition_count(topic)
    # If the topic is not present, set to -1
    partition = Rdkafka::Bindings.partitioner(partition_key, partition_count, @partitioner_name) if partition_count.positive?
  end

  # If partition is nil, use -1 to let librdkafka set the partition randomly or
  # based on the key when present.
  partition ||= -1

  # If timestamp is nil use 0 and let Kafka set one. If an integer or time
  # use it.
  raw_timestamp = if timestamp.nil?
                    0
                  elsif timestamp.is_a?(Integer)
                    timestamp
                  elsif timestamp.is_a?(Time)
                    (timestamp.to_i * 1000) + (timestamp.usec / 1000)
                  else
                    raise TypeError.new("Timestamp has to be nil, an Integer or a Time")
                  end

  delivery_handle = DeliveryHandle.new
  delivery_handle.label = label
  delivery_handle[:pending] = true
  delivery_handle[:response] = -1
  delivery_handle[:partition] = -1
  delivery_handle[:offset] = -1
  DeliveryHandle.register(delivery_handle)

  args = [
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_TOPIC, :string, topic,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_MSGFLAGS, :int, Rdkafka::Bindings::RD_KAFKA_MSG_F_COPY,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_VALUE, :buffer_in, payload, :size_t, payload_size,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_KEY, :buffer_in, key, :size_t, key_size,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_PARTITION, :int32, partition,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_TIMESTAMP, :int64, raw_timestamp,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_OPAQUE, :pointer, delivery_handle,
  ]

  if headers
    headers.each do |key0, value0|
      key = key0.to_s
      value = value0.to_s
      args << :int << Rdkafka::Bindings::RD_KAFKA_VTYPE_HEADER
      args << :string << key
      args << :pointer << value
      args << :size_t << value.bytes.size
    end
  end

  args << :int << Rdkafka::Bindings::RD_KAFKA_VTYPE_END

  # Produce the message
  response = @native_kafka.with_inner do |inner|
    Rdkafka::Bindings.rd_kafka_producev(
      inner,
      *args
    )
  end

  # Raise error if the produce call was not successful
  if response != 0
    DeliveryHandle.remove(delivery_handle.to_ptr.address)
    raise RdkafkaError.new(response)
  end

  delivery_handle
end
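
A sketch of producing a message and waiting for its delivery; the topic, payload, key, headers, label and timeout are placeholders.

handle = producer.produce(
  topic:   "example_topic",
  payload: "Hello from rdkafka",
  key:     "user-1",
  headers: { "source" => "example" },
  label:   :greeting
)

# Wait (up to a placeholder timeout) for the broker to confirm delivery.
report = handle.wait(max_wait_timeout: 5)
puts "Delivered to partition #{report.partition} at offset #{report.offset}"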

#purgeObject

Purges the outgoing queue and releases all resources.

Useful when closing a producer that still has outgoing messages to unstable clusters, or when for any other reason waiting cannot continue. This purges both the queue and all in-flight requests, and updates the delivery handle statuses so they can be materialized into purge_queue errors.



# File 'lib/rdkafka/producer.rb', line 123

def purge
  closed_producer_check(__method__)

  code = nil

  @native_kafka.with_inner do |inner|
    code = Bindings.rd_kafka_purge(
      inner,
      Bindings::RD_KAFKA_PURGE_F_QUEUE | Bindings::RD_KAFKA_PURGE_F_INFLIGHT
    )
  end

  code.zero? || raise(Rdkafka::RdkafkaError.new(code))

  # Wait for the purge to affect everything
  sleep(0.001) until flush(100)

  true
end
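
A sketch of purging when a timely flush is not possible, for example during shutdown against an unreachable cluster; the 1_000 ms timeout is a placeholder.

unless producer.flush(1_000)
  # Give up on outstanding messages and update their delivery handle
  # statuses (purge_queue) so waiting callers are released.
  producer.purge
end

producer.close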