Server Mode

Overview

The plumber tool allows you to read, write, relay, and tunnel data right from your CLI. While this is fine for quick one-offs, it has some drawbacks if you are planning to use the relay or tunnel functionality in production or in a more serious capacity:
  1. You will need to launch a new plumber instance for every relay and tunnel
  2. High availability is difficult to pull off
  3. There is potential for missed data while plumber is not running (such as during a redeploy)
This might be acceptable depending on your integration needs. If plumber exists in dev to facilitate testing or to let developers tunnel data into their local workstations, it's probably fine. But if you are planning on running many relays and/or many tunnels, the CLI alone is not well-suited.
To address this, we have built plumber server mode.
Plumber's server mode enables you to run any number of plumber instances as a distributed cluster. Server mode exposes a gRPC API which can be used to run and manage multiple relays and tunnels in parallel. Plumber server can run in one of two operational modes: standalone or cluster.
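For a sense of what talking to that API looks like, here is a minimal Go sketch that dials a plumber server and lists its relays. The plumber-schemas import paths and the PlumberServerClient / GetAllRelays names reflect our reading of the batchcorp/plumber-schemas protos; treat them as assumptions and consult that repo for the authoritative definitions.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Assumed import paths: plumber's protobuf definitions live in the
	// batchcorp/plumber-schemas repo.
	"github.com/batchcorp/plumber-schemas/build/go/protos"
	"github.com/batchcorp/plumber-schemas/build/go/protos/common"
)

func main() {
	// plumber's server mode listens for gRPC on port 9090 by default.
	conn, err := grpc.Dial("localhost:9090",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("unable to dial plumber: %s", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Hypothetical client + method names; every request carries an auth
	// token ("batchcorp" is the documented default).
	client := protos.NewPlumberServerClient(conn)

	resp, err := client.GetAllRelays(ctx, &protos.GetAllRelaysRequest{
		Auth: &common.Auth{Token: "batchcorp"},
	})
	if err != nil {
		log.Fatalf("unable to list relays: %s", err)
	}

	fmt.Printf("relays: %+v\n", resp)
}
```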

Operational Modes

Standalone Mode

In standalone mode, plumber writes its config to a local JSON file (default ~/.batchsh/config.json; the location can be overridden via an environment variable).
Suitable for:
  • Local and dev environments
  • Low throughput (<100 events per sec)
  • Environments where high reliability is not required

Cluster Mode

In cluster mode, plumber uses a message bus (NATS) to facilitate communication between plumber instances and to store its configs. You can run any number of plumber instances; we recommend starting with 3.
Suitable for:
  • Production environments
  • High throughput (1,000+ events per sec)
  • Environments where high reliability is required
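If you want to sanity-check the bus before starting plumber, here is a minimal Go sketch (using the official nats.go client; the URL is an assumption, substitute your own) that verifies the server is reachable and has JetStream enabled, which cluster mode relies on for config storage:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the same NATS server the plumber cluster will use.
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatalf("unable to connect to NATS: %s", err)
	}
	defer nc.Close()

	// Cluster mode stores plumber's config on the bus, so the NATS
	// server must have JetStream enabled.
	js, err := nc.JetStream()
	if err != nil {
		log.Fatalf("unable to get JetStream context: %s", err)
	}

	info, err := js.AccountInfo()
	if err != nil {
		log.Fatalf("JetStream does not appear to be enabled: %s", err)
	}

	log.Printf("JetStream OK: %d stream(s)", info.Streams)
}
```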

How to Run

Standalone Mode

When running in Docker, you will want to mount the config file into the container as a writable volume.
CLI:

```bash
# Install plumber
$ brew tap batchcorp/public
...
$ brew install plumber
...

# Launch plumber in standalone mode
$ plumber server
INFO[0000]
β–ˆβ–€β–ˆβ€ƒβ–ˆ β€ƒβ–ˆ β–ˆβ€ƒβ–ˆβ–€β–„β–€β–ˆβ€ƒβ–ˆβ–„β–„β€ƒβ–ˆβ–€β–€β€ƒβ–ˆβ–€β–ˆ
β–ˆβ–€β–€β€ƒβ–ˆβ–„β–„β€ƒβ–ˆβ–„β–ˆβ€ƒβ–ˆ β–€ β–ˆβ€ƒβ–ˆβ–„β–ˆβ€ƒβ–ˆβ–ˆβ–„β€ƒβ–ˆβ–€β–„
INFO[0000] starting plumber server in 'standalone' mode... pkg=plumber
INFO[0005] plumber server started pkg=plumber
```
Docker:

```bash
$ docker run --name plumber-server -p 9090:9090 \
    -v plumber-config:/Users/username/.batchsh:rw \
    batchcorp/plumber:latest
```
Kubernetes Deployment:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: plumber-standalone
  labels:
    app.kubernetes.io/name: plumber-standalone
spec:
  type: ClusterIP
  ports:
    - port: 9090
      targetPort: 9090
      protocol: TCP
      name: grpc-api
  selector:
    app.kubernetes.io/name: plumber-standalone
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plumber-standalone
  labels:
    app.kubernetes.io/name: plumber-standalone
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: plumber-standalone
  template:
    metadata:
      labels:
        app.kubernetes.io/name: plumber-standalone
    spec:
      containers:
        - name: plumber-standalone
          image: "batchcorp/plumber:v1.4.0"
          imagePullPolicy: IfNotPresent
          command: ["/plumber-linux", "server"]
          ports:
            - containerPort: 9090
          env:
            - name: PLUMBER_SERVER_ENABLE_CLUSTER
              value: "false"
            - name: PLUMBER_SERVER_NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
```

Cluster Mode

The NATS container SHOULD have a persistent storage volume.
CLI:

```bash
# Install plumber
$ brew tap batchcorp/public
...
$ brew install plumber
...

# Git clone the plumber repo (to get access to its docker-compose + assets)
$ git clone git@github.com:batchcorp/plumber.git
$ cd plumber

# Launch a NATS container
$ docker-compose up -d natsjs

# Launch plumber in cluster mode
$ plumber server --enable-cluster
INFO[0000]
β–ˆβ–€β–ˆβ€ƒβ–ˆ β€ƒβ–ˆ β–ˆβ€ƒβ–ˆβ–€β–„β–€β–ˆβ€ƒβ–ˆβ–„β–„β€ƒβ–ˆβ–€β–€β€ƒβ–ˆβ–€β–ˆ
β–ˆβ–€β–€β€ƒβ–ˆβ–„β–„β€ƒβ–ˆβ–„β–ˆβ€ƒβ–ˆ β–€ β–ˆβ€ƒβ–ˆβ–„β–ˆβ€ƒβ–ˆβ–ˆβ–„β€ƒβ–ˆβ–€β–„
INFO[0000] starting plumber server in 'cluster' mode... pkg=plumber
INFO[0015] plumber server started pkg=plumber
```
Docker:

```bash
# Git clone the plumber repo (to get access to its docker-compose + assets)
$ git clone git@github.com:batchcorp/plumber.git
$ cd plumber

# Launch a NATS container
$ docker-compose up -d natsjs

# Launch a plumber container that points to your NATS instance
$ docker run --name plumber-server -p 9090:9090 \
    --network plumber_default \
    -e PLUMBER_SERVER_NATS_URL=nats://natsjs \
    -e PLUMBER_SERVER_ENABLE_CLUSTER=true \
    batchcorp/plumber:latest
{"level":"info","msg":"starting plumber server in 'cluster' mode...","pkg":"plumber","time":"2022-02-15T20:21:47Z"}
{"level":"info","msg":"plumber server started","pkg":"plumber","time":"2022-02-15T20:22:02Z"}
```
Kubernetes Deployment:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-config
  namespace: default
  labels:
    app.kubernetes.io/name: nats
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"
    http: 8222
    server_name: $POD_NAME
    jetstream {
      max_mem: 1Gi
      store_dir: /data

      max_file: 5Gi
    }
    lame_duck_duration: 120s
---
apiVersion: v1
kind: Service
metadata:
  name: nats
  namespace: default
  labels:
    app.kubernetes.io/name: nats
spec:
  selector:
    app.kubernetes.io/name: nats
  clusterIP: None
  ports:
    - name: client
      port: 4222
    - name: cluster
      port: 6222
    - name: monitor
      port: 8222
    - name: metrics
      port: 7777
    - name: leafnodes
      port: 7422
    - name: gateways
      port: 7522
---
apiVersion: v1
kind: Service
metadata:
  name: plumber-cluster
  labels:
    app.kubernetes.io/name: plumber-cluster
spec:
  type: ClusterIP
  ports:
    - port: 9090
      targetPort: 9090
      protocol: TCP
      name: grpc-api
  selector:
    app.kubernetes.io/name: plumber-cluster
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-box
  namespace: default
  labels:
    app: nats-box
    chart: nats-0.13.0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nats-box
  template:
    metadata:
      labels:
        app: nats-box
    spec:
      volumes:
      containers:
        - name: nats-box
          image: natsio/nats-box:0.8.1
          imagePullPolicy: IfNotPresent
          resources:
            null
          env:
            - name: NATS_URL
              value: nats
          command:
            - "tail"
            - "-f"
            - "/dev/null"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plumber-cluster
  labels:
    app.kubernetes.io/name: plumber-cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: plumber-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/name: plumber-cluster
    spec:
      containers:
        - name: plumber-cluster
          image: "batchcorp/plumber:v1.4.0"
          imagePullPolicy: IfNotPresent
          command: ["/plumber-linux", "server"]
          ports:
            - containerPort: 9090
          env:
            - name: PLUMBER_SERVER_CLUSTER_ID
              value: "7EB6C7FB-9053-41B4-B456-78E64CF9D393"
            - name: PLUMBER_SERVER_ENABLE_CLUSTER
              value: "true"
            - name: PLUMBER_SERVER_NATS_URL
              value: "nats://nats.default.svc.cluster.local:4222"
            - name: PLUMBER_SERVER_USE_TLS
              value: "false"
            - name: PLUMBER_SERVER_NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats
  namespace: default
  labels:
    app.kubernetes.io/name: nats
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nats
  replicas: 1
  serviceName: nats
  podManagementPolicy: Parallel

  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "7777"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: nats
    spec:
      # Common volumes for the containers.
      volumes:
        - name: config-volume
          configMap:
            name: nats-config
        # Local volume shared with the reloader.
        - name: pid
          emptyDir: {}
      # Required to be able to HUP signal and apply config
      # reload to the server without restarting the pod.
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 120
      containers:
        - name: nats
          image: nats:2.7.2-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4222
              name: client
            - containerPort: 7422
              name: leafnodes
            - containerPort: 7522
              name: gateways
            - containerPort: 6222
              name: cluster
            - containerPort: 8222
              name: monitor
            - containerPort: 7777
              name: metrics
          command:
            - "nats-server"
            - "--config"
            - "/etc/nats-config/nats.conf"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: SERVER_NAME
              value: $(POD_NAME)
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CLUSTER_ADVERTISE
              value: $(POD_NAME).nats.$(POD_NAMESPACE).svc.cluster.local
          volumeMounts:
            - name: config-volume
              mountPath: /etc/nats-config
            - name: pid
              mountPath: /var/run/nats
            - name: nats-js-pvc
              mountPath: /data
          livenessProbe:
            httpGet:
              path: /
              port: 8222
            initialDelaySeconds: 10
            timeoutSeconds: 5
            periodSeconds: 60
            successThreshold: 1
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /
              port: 8222
            initialDelaySeconds: 10
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 30

          # Gracefully stop NATS Server on pod deletion or image upgrade.
          #
          lifecycle:
            preStop:
              exec:
                # Using the alpine based NATS image, we add an extra sleep that is
                # the same amount as the terminationGracePeriodSeconds to allow
                # the NATS Server to gracefully terminate the client connections.
                #
                command:
                  - "/bin/sh"
                  - "-c"
                  - "nats-server -sl=ldm=/var/run/nats/nats.pid"
        - name: reloader
          image: natsio/nats-server-config-reloader:0.6.2
          imagePullPolicy: IfNotPresent
          resources:
            null
          command:
            - "nats-server-config-reloader"
            - "-pid"
            - "/var/run/nats/nats.pid"
            - "-config"
            - "/etc/nats-config/nats.conf"
          volumeMounts:
            - name: config-volume
              mountPath: /etc/nats-config
            - name: pid
              mountPath: /var/run/nats
        - name: metrics
          image: natsio/prometheus-nats-exporter:0.9.1
          imagePullPolicy: IfNotPresent
          args:
            - -connz
            - -routez
            - -subz
            - -varz
            - -prefix=nats
            - -use_internal_server_id
            - -jsz=all
            - http://localhost:8222/
          ports:
            - containerPort: 7777
              name: metrics

  volumeClaimTemplates:
    - metadata:
        name: nats-js-pvc
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: "nats-test-request-reply"
  labels:
    app.kubernetes.io/name: nats
  annotations:
    "hook": test
spec:
  containers:
    - name: nats-box
      image: synadia/nats-box
      env:
        - name: NATS_HOST
          value: nats
      command:
        - /bin/sh
        - -ec
        - |
          nats reply -s nats://$NATS_HOST:4222 'name.>' --command "echo 1" &
        - |
          "&&"
        - |
          name=$(nats request -s nats://$NATS_HOST:4222 name.test '' 2>/dev/null)
        - |
          "&&"
        - |
          [ $name = test ]
  restartPolicy: Never
```

Environment Variables

Cluster Mode

All of the following environment variables are only used if plumber is launched in cluster mode.
| Environment Variable | Description |
| --- | --- |
| PLUMBER_SERVER_ENABLE_CLUSTER | Set to "true" to enable cluster mode. If not set, plumber will run in standalone mode. |
| PLUMBER_SERVER_CLUSTER_ID | All plumber instances that are part of the same cluster must share the same ID. |
| PLUMBER_SERVER_NODE_ID | If you run plumber as a stateful set, you can assign each node its own node ID, which simplifies debugging, log parsing, etc. If unset, it is generated automatically at launch. |
| PLUMBER_SERVER_NATS_URL | Must be set to your local NATS server's address. |
| PLUMBER_SERVER_AUTH_TOKEN | Used for authenticating with the plumber instance. Set to "batchcorp" by default. |
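A practical note on PLUMBER_SERVER_CLUSTER_ID versus PLUMBER_SERVER_NODE_ID: the cluster ID is generated once and shared by every replica, while node IDs should stay unique per instance. A small, hypothetical Go helper to illustrate the split (the google/uuid package is an assumed dependency):

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/uuid"
)

func main() {
	// Generate the cluster ID once and bake it into your manifest;
	// every plumber replica must present the same value.
	fmt.Printf("PLUMBER_SERVER_CLUSTER_ID=%s\n", uuid.New())

	// Node IDs, by contrast, should be unique per instance. In Kubernetes
	// the pod name (via the downward API, as in the manifests above) is a
	// convenient source; the hostname stands in for it here.
	host, err := os.Hostname()
	if err != nil {
		host = uuid.New().String()
	}
	fmt.Printf("PLUMBER_SERVER_NODE_ID=%s\n", host)
}
```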

Metrics

Plumber exposes various Prometheus metrics via its internal HTTP server:

http://127.0.0.1:9191/metrics
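As a quick smoke test, the endpoint can be scraped with anything that speaks HTTP; here is a minimal Go sketch, assuming a locally running plumber with the default metrics address shown above:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Plumber's internal HTTP server serves Prometheus metrics here by default.
	resp, err := http.Get("http://127.0.0.1:9191/metrics")
	if err != nil {
		log.Fatalf("unable to scrape metrics: %s", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("unable to read response: %s", err)
	}

	fmt.Println(string(body))
}
```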