Once an event is collected by our collector, we inspect it and write it to our internal Kafka cluster. From there, the event goes through our schema identification (or generation) pipeline and is written permanently to our hot and cold storage.
It usually takes less than five seconds for a collected event to become visible in our dashboard.
Generally, you will want to group collections by the event's message envelope. In some cases, you may also want to group them by the source of the events.
While the two events are similar in shape, they represent entirely different data sets: one identifies a person and their personal attributes, while the other is billing related.
Both events share some fields, and while you could send both to the same collection (and Batch's automatic schema inference would do its magic), you should NOT combine these events into the same collection, as it will be hard to determine which event represents what.
In this scenario, your best bet is to create two collections, "persons" and "orders", and send each event type to its own collection.
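The routing logic above can be sketched as follows. This is a hypothetical illustration, not Batch's actual client API: `pick_collection` and `send_to_collection` are made-up names standing in for whatever inspection and write calls your pipeline uses.

```python
# Hypothetical sketch: route each event to its own collection based on
# the shape of its envelope. The field names below ("email", "order_id")
# are illustrative examples, not a required schema.

def pick_collection(event: dict) -> str:
    """Choose a collection name from the event's shape (illustrative only)."""
    if "email" in event and "name" in event:
        return "persons"
    if "order_id" in event:
        return "orders"
    return "unclassified"

def send_to_collection(collection: str, event: dict) -> None:
    # Placeholder for the actual write (HTTP call, Kafka produce, etc.)
    print(f"-> {collection}: {event}")

events = [
    {"name": "Ada", "email": "ada@example.com"},
    {"order_id": "A-1001", "total_cents": 4200},
]

for ev in events:
    send_to_collection(pick_collection(ev), ev)
```

The key design point is that the routing decision happens before the write, so each collection only ever sees one event shape and schema inference stays unambiguous.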
NOTE: If both events arrive on the same messaging system, you will have no choice but to send them to the same collection. In that case, it is best to come up with a unified message schema or include a "type" designator in each event to make it easier to search for.