Plumber is our open source project for working with various messaging systems.
Besides offering read and write functionality, it can also relay data to Batch (using the gRPC API under the hood).
Relaying data with plumber is the most reliable and performant way to get data into Batch: plumber batches events before sending them, which can significantly increase your total throughput.
You can relay either by running the binary directly or by running our plumber Docker container.
docker run --name plumber-rabbit -p 8080:8080 \
  -e PLUMBER_RELAY_TYPE=rabbit \
  -e PLUMBER_RELAY_TOKEN=$YOUR-BATCHSH-TOKEN-HERE \
  -e PLUMBER_RELAY_RABBIT_EXCHANGE=my_exchange \
  -e PLUMBER_RELAY_RABBIT_QUEUE=my_queue \
  -e PLUMBER_RELAY_RABBIT_ROUTING_KEY=some.routing.key \
  -e PLUMBER_RELAY_RABBIT_QUEUE_EXCLUSIVE=false \
  -e PLUMBER_RELAY_RABBIT_QUEUE_DURABLE=true \
  batchcorp/plumber
In this example, all messages sent to my_exchange that match the routing key some.routing.key will be delivered to my_queue.
Plumber will pick up the messages from that queue and send them to Batch using the specified token.
A full suite of environment variables for configuring plumber's relay mode is documented in ENV.md.
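As a sketch of one way to manage those variables, you could keep them in an env file and pass it to Docker with --env-file instead of repeating -e flags. The variable names below are the ones from the RabbitMQ example above; check ENV.md for the authoritative list.

```shell
# plumber-rabbit.env holds the same settings as the -e flags above
cat > plumber-rabbit.env <<'EOF'
PLUMBER_RELAY_TYPE=rabbit
PLUMBER_RELAY_TOKEN=YOUR-BATCHSH-TOKEN-HERE
PLUMBER_RELAY_RABBIT_EXCHANGE=my_exchange
PLUMBER_RELAY_RABBIT_QUEUE=my_queue
PLUMBER_RELAY_RABBIT_ROUTING_KEY=some.routing.key
PLUMBER_RELAY_RABBIT_QUEUE_EXCLUSIVE=false
PLUMBER_RELAY_RABBIT_QUEUE_DURABLE=true
EOF

# Pass the whole file to the container in one flag
docker run --name plumber-rabbit -p 8080:8080 \
  --env-file plumber-rabbit.env \
  batchcorp/plumber
```

This keeps the token and queue settings out of your shell history and makes it easy to run the same container against different environments.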
plumber relay kafka \
  --address "server.domain.com:9092" \
  --token $YOUR-BATCHSH-TOKEN-HERE \
  --topic new_orders
In this example, all messages from the Kafka topic new_orders will be automatically sent to the collection associated with the specified relay token.
More examples of relaying from various systems can be found in EXAMPLES.md.
Since plumber uses the gRPC API under the hood, you should be able to reach the same throughput as using the gRPC API directly: roughly 50,000 events/sec.
If you need to go higher, you can launch additional plumber instances (making sure to use the same consumer group if relaying from Kafka).