Producer(cluster, topic, partitioner=<function random_partitioner>, compression=0, max_retries=3, retry_backoff_ms=100, required_acks=1, ack_timeout_ms=10000, batch_size=200)
This class implements the synchronous producer logic found in the JVM driver.
__init__(cluster, topic, partitioner=<function random_partitioner>, compression=0, max_retries=3, retry_backoff_ms=100, required_acks=1, ack_timeout_ms=10000, batch_size=200)
Instantiate a new Producer.
- cluster (pykafka.cluster.Cluster) – The cluster to which to connect
- topic (pykafka.topic.Topic) – The topic to which to produce messages
- partitioner (pykafka.partitioners.BasePartitioner) – The partitioner to use during message production
- compression (pykafka.common.CompressionType) – The type of compression to use
- max_retries (int) – How many times to attempt to produce messages before raising an error
- retry_backoff_ms (int) – The amount of time (in milliseconds) to back off during produce request retries
- required_acks (int) – The number of brokers that must have committed the data to their log and acknowledged this to the leader before a produce request is considered complete
- ack_timeout_ms (int) – Amount of time (in milliseconds) to wait for acknowledgment of a produce request
- batch_size (int) – Size (in bytes) of batches to send to brokers
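As a rough illustration of how a byte-capped `batch_size` could split a message stream into produce-request batches, here is a minimal sketch. This is not pykafka's implementation; `batch_messages` and the sizing logic are illustrative only.

```python
# Illustrative sketch only -- not pykafka's code. Groups messages into
# batches whose combined size stays at or under batch_size bytes.

def batch_messages(messages, batch_size=200):
    """Yield lists of messages, capped at batch_size total bytes per list."""
    batch, batch_bytes = [], 0
    for msg in messages:
        size = len(msg)
        # Start a new batch if adding this message would exceed the cap.
        if batch and batch_bytes + size > batch_size:
            yield batch
            batch, batch_bytes = [], 0
        batch.append(msg)
        batch_bytes += size
    if batch:
        yield batch

batches = list(batch_messages([b"x" * 120, b"y" * 120, b"z" * 50], batch_size=200))
```

Here the second message would push the first batch over 200 bytes, so it starts a new batch; the third fits alongside it.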
_partition_messages(messages)
Assign messages to partitions using the partitioner.
Parameters: messages – Iterable of messages to publish.
Returns: Generator of ((key, value), partition_id) tuples
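The assignment step can be sketched as follows. This is a simplified stand-in in the spirit of random partitioning, not pykafka's internals; `partition_messages` and `partition_ids` are illustrative names.

```python
import random

# Sketch only -- not pykafka's code. Pairs each (key, value) message
# with a randomly chosen partition id, yielding ((key, value), partition_id).

def partition_messages(messages, partition_ids, rng=random):
    """Yield ((key, value), partition_id) for each message."""
    for key, value in messages:
        yield (key, value), rng.choice(partition_ids)

pairs = list(partition_messages([(b"k1", b"v1"), (b"k2", b"v2")], [0, 1, 2]))
```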
_produce(message_partition_tups, attempt)
Publish a set of messages to the relevant brokers.
Parameters: message_partition_tups (iterable of ((key, value), partition_id) tuples) – Messages with partitions assigned.
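Routing "to the relevant brokers" amounts to grouping the assigned messages by each partition's leader. A minimal sketch, assuming a `leaders` mapping from partition id to broker (not a pykafka API):

```python
from collections import defaultdict

# Sketch only -- not pykafka's code. Groups ((key, value), partition_id)
# tuples by the broker that leads each partition, so one produce request
# can be sent per broker.

def group_by_leader(message_partition_tups, leaders):
    """Return {broker: [((key, value), partition_id), ...]}."""
    requests = defaultdict(list)
    for (key, value), partition_id in message_partition_tups:
        requests[leaders[partition_id]].append(((key, value), partition_id))
    return requests

reqs = group_by_leader(
    [((b"k1", b"v1"), 0), ((b"k2", b"v2"), 1), ((b"k3", b"v3"), 0)],
    leaders={0: "broker-1", 1: "broker-2"},
)
```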
_send_request(broker, req, attempt)
Send the produce request to the broker and handle the response.
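Failure handling here is governed by the constructor's `max_retries` and `retry_backoff_ms` parameters. A simplified sketch of that retry-with-backoff loop, where `send_with_retries` and the `flaky` sender are hypothetical placeholders rather than pykafka's API:

```python
import time

# Sketch only -- not pykafka's code. Retries a produce-request callable
# up to max_retries times, sleeping retry_backoff_ms between attempts.

def send_with_retries(send, max_retries=3, retry_backoff_ms=100):
    for attempt in range(1, max_retries + 1):
        try:
            return send(attempt)
        except IOError:
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(retry_backoff_ms / 1000.0)

calls = []
def flaky(attempt):
    """Placeholder sender that fails twice, then succeeds."""
    calls.append(attempt)
    if attempt < 3:
        raise IOError("transient produce failure")
    return "ok"

result = send_with_retries(flaky)
```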
produce(messages)
Produce a set of messages.
Parameters: messages (iterable of str or (str, str) tuples) – The messages to produce.
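Since messages may arrive as plain strings or as (key, value) tuples, one way to bring both forms to a uniform shape is a small normalizer like the sketch below (illustrative only, not pykafka's code):

```python
# Sketch only -- not pykafka's code. Treats a bare string as a keyless
# message and passes (key, value) tuples through unchanged.

def normalize(messages):
    """Yield (key, value) pairs; bare strings get key None."""
    for msg in messages:
        if isinstance(msg, tuple):
            yield msg
        else:
            yield (None, msg)

normalized = list(normalize(["hello", ("key", "value")]))
```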