Plumber Relay
This page describes the many ways you can get data into the Batch platform.

Plumber is our open source project for working with various messaging systems.
Besides offering read and write functionality, it can also be used for relaying data to Batch (which uses the gRPC API under the hood).
Relaying data with plumber is the most reliable and performant way to get data into Batch: plumber batches events before sending them, which can significantly increase your total throughput.
You can launch plumber relays in multiple ways:
  • Running plumber in single-relay mode via CLI
    • Best for quick one-offs
  • Running plumber as a docker container
    • Best for ephemeral workloads
  • Running plumber in server mode
    • Best for production
The following examples show how to run plumber in single-relay mode.
For production deployments, we suggest deploying plumber in server mode.
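Server mode itself is launched with a single command. The sketch below is minimal and assumes a host with the plumber binary installed and default server settings; in server mode, relays are configured against the running server rather than through the per-relay flags shown in the examples below (see the plumber repository for server-mode details):

# Minimal sketch: start plumber in server mode (defaults assumed; flags may differ by version)
plumber server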
CLI Relay
plumber relay kafka \
  --address "your-kafka-address.com:9092" \
  --token YOUR-COLLECTION-TOKEN-HERE \
  --topics orders \
  --tls-skip-verify
In this example, all messages from the Kafka topic orders will be automatically sent to the collection associated with the specified relay token.
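To verify the relay end to end, you can publish a test message to the same topic. The sketch below assumes the standard console producer script that ships with Kafka is available on your machine and that your broker supports the --bootstrap-server flag:

# Publish a test message to the orders topic (illustrative payload)
echo '{"order_id": "12345"}' | kafka-console-producer.sh --bootstrap-server your-kafka-address.com:9092 --topic orders

The message should then appear in the collection tied to your relay token.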
Docker Relay
docker run --name plumber-rabbit -p 8080:8080 \
  -e PLUMBER_RELAY_TYPE=rabbit \
  -e PLUMBER_RELAY_TOKEN=YOUR-BATCHSH-TOKEN-HERE \
  -e PLUMBER_RELAY_RABBIT_EXCHANGE=my_exchange \
  -e PLUMBER_RELAY_RABBIT_QUEUE=my_queue \
  -e PLUMBER_RELAY_RABBIT_ROUTING_KEY=some.routing.key \
  -e PLUMBER_RELAY_RABBIT_QUEUE_EXCLUSIVE=false \
  -e PLUMBER_RELAY_RABBIT_QUEUE_DURABLE=true \
  batchcorp/plumber \
  rabbit
In this example, all messages sent to my_exchange that match the routing key some.routing.key will be delivered to my_queue.
Plumber will then pick up the messages from my_queue and relay them to Batch using the specified token.
A full suite of environment variables for configuring plumber's relay mode is provided in ENV.md.
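If you would rather manage the container declaratively, the same configuration can be expressed as a Docker Compose service. This is an illustrative sketch that mirrors the docker run command above (the service name and file name are arbitrary):

# docker-compose.yml (illustrative equivalent of the docker run command above)
services:
  plumber-rabbit:
    image: batchcorp/plumber
    command: rabbit
    ports:
      - "8080:8080"
    environment:
      PLUMBER_RELAY_TYPE: rabbit
      PLUMBER_RELAY_TOKEN: YOUR-BATCHSH-TOKEN-HERE
      PLUMBER_RELAY_RABBIT_EXCHANGE: my_exchange
      PLUMBER_RELAY_RABBIT_QUEUE: my_queue
      PLUMBER_RELAY_RABBIT_ROUTING_KEY: some.routing.key
      PLUMBER_RELAY_RABBIT_QUEUE_EXCLUSIVE: "false"
      PLUMBER_RELAY_RABBIT_QUEUE_DURABLE: "true"

Start it with docker compose up -d.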
Kubernetes
Example of running plumber via Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plumber-deployment
spec:
  selector:
    matchLabels:
      app: plumber
  replicas: 1
  template:
    metadata:
      labels:
        app: plumber
    spec:
      containers:
        - name: plumber
          image: batchcorp/plumber:latest
          command: ["/plumber-linux", "relay", "kafka"]
          args: ["--stats-enable"]
          ports:
            - containerPort: 9191
          env:
            - name: PLUMBER_RELAY_TOKEN
              value: "--- COLLECTION TOKEN HERE ---"
            - name: PLUMBER_RELAY_KAFKA_ADDRESS
              value: "kafka.server.com:9092"
            - name: PLUMBER_RELAY_KAFKA_TOPIC
              value: "new-orders"
            - name: PLUMBER_RELAY_KAFKA_GROUP_ID
              value: "plumber"
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
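Assuming the manifest above is saved as plumber-deployment.yaml (the file name is up to you), it can be applied and inspected with standard kubectl commands:

kubectl apply -f plumber-deployment.yaml
kubectl get pods -l app=plumber
kubectl logs deployment/plumber-deployment --follow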
More examples of relaying from various systems can be found in EXAMPLES.md.

When should you use this API?

Plumber is the easiest way to relay throughput-heavy workloads and should be used by anyone wanting to get up and running quickly.

Throughput

plumber uses gRPC under the hood to communicate with Batch's collectors.
You should be able to comfortably reach 25K-50K messages/sec on a single plumber instance. To go beyond that, run plumber in server mode as a cluster and launch 2+ replicas of plumber.
When relaying from backends such as Kafka or NATS, make sure all replicas use the same consumer group so the workload is split across them rather than duplicated.
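For example, with the Kubernetes deployment above (which already pins PLUMBER_RELAY_KAFKA_GROUP_ID to "plumber", so every replica joins the same consumer group), scaling out is a standard replica change. This is a sketch, not a sizing recommendation:

kubectl scale deployment plumber-deployment --replicas=3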