Managing real-time inventory logistics data is vitally important: it helps ensure timely deliveries, lower holding costs, and a superior customer experience.
Streaming inventory logistics data consists of concise messages averaging around 100 bytes each, with fields such as product code, quantity, timestamp, and location code. These streams often carry thousands of messages per second and can be spiky, for example when stock is replenished in batches or when large shipments are received and logged.
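To make the size estimate concrete, the sketch below serializes a hypothetical inventory message with the fields described above. The field names and values are illustrative assumptions, not a fixed schema; actual deployments often use a compact binary format such as Avro or Protobuf, which would shrink the payload further.

```python
import json

# Hypothetical inventory message; field names are illustrative,
# not a prescribed schema.
message = {
    "product_code": "SKU-48213",
    "quantity": 24,
    "timestamp": "2024-05-01T14:32:07Z",
    "location_code": "WH-EU-07",
}

# JSON-encode the message and check its wire size.
encoded = json.dumps(message).encode("utf-8")
print(len(encoded))  # on the order of 100 bytes
```

Even with JSON's key overhead, the payload stays close to the 100-byte average cited above, which is what makes per-message throughput in the thousands per second manageable.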
An Apache Kafka partition calculator typically makes the following assumptions, which hold for most small, medium, and large-scale inventory data streams.
- The brokers are of similar capability.
- The load on the brokers’ machines is similar.
- The messages don't diverge too much in size.
- The messages are evenly distributed across all partitions.
- The number of brokers is appropriate for the expected workload.
- Brokers have similar latencies between producers and consumers.
- The throughput per producer is less than 10 MB/s.
- Individual brokers host fewer than 40k partitions.
- The cluster has fewer than 200k partitions in total.
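Under the assumptions above, a commonly cited rule of thumb for choosing a partition count is `max(t/p, t/c)`, where `t` is the target throughput and `p` and `c` are the measured per-partition producer and consumer throughputs. The sketch below applies that rule; the throughput figures are hypothetical examples, not measurements from any specific cluster.

```python
import math

def estimate_partitions(target_mb_s: float,
                        producer_mb_s: float,
                        consumer_mb_s: float) -> int:
    """Rule-of-thumb partition count: max(t/p, t/c), rounded up."""
    return math.ceil(max(target_mb_s / producer_mb_s,
                         target_mb_s / consumer_mb_s))

# Hypothetical example: a 50 MB/s target stream, producers sustaining
# 10 MB/s per partition and consumers 25 MB/s per partition.
print(estimate_partitions(50, 10, 25))  # -> 5
```

Because the result is driven by the slower of the two sides (here the producers), raising producer throughput per partition is often the cheapest way to keep partition counts, and therefore broker overhead, low.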