# Auto-batching

WARNING

🚨 This is an experimental feature 🚨
Using it may result in unexpected crashes, bugs, or holes in the space-time continuum.
You have been warned.

Auto-batching is an experimental feature designed to improve indexing speed.

When auto-batching is enabled, consecutive document addition requests may be automatically combined into a batch and processed together, significantly speeding up the indexation process.

We would appreciate your feedback on this feature. Join the discussion.

# Enable auto-batching

To enable auto-batching, start Meilisearch while supplying the `--enable-auto-batching` CLI flag:

```sh
./meilisearch --enable-auto-batching
```

For document addition requests to be added to the same batch (see the example after this list), they need to:

  • Target the same index
  • Have the same update method (e.g. POST or PUT)
  • Be immediately consecutive
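
For illustration, the two consecutive requests below would qualify for the same batch: they target the same index and use the same method. The host, index name (movies), and payload files are placeholders, not part of the feature itself.

```sh
# Two back-to-back document additions to the same index ("movies") using
# the same method (POST). With auto-batching enabled, Meilisearch may
# group them into a single batch and index them together.
curl -X POST 'http://localhost:7700/indexes/movies/documents' \
  -H 'Content-Type: application/json' \
  --data-binary @movies-part-1.json

curl -X POST 'http://localhost:7700/indexes/movies/documents' \
  -H 'Content-Type: application/json' \
  --data-binary @movies-part-2.json
```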

By default, auto-batching will not delay processing a request in order to batch multiple requests together: if it can process a request immediately, it will. This behavior can be changed with the `--debounce-duration-sec` command-line option described below.

After enabling auto-batching, the `batchUid` field will appear in all task API responses.
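
To check which batch a task belonged to, you can fetch it from the task API: tasks that share a `batchUid` were processed together. The host and task uid below are placeholders.

```sh
# Retrieve a single task; with auto-batching enabled, the JSON response
# includes a batchUid shared by every task processed in the same batch.
curl -X GET 'http://localhost:7700/tasks/1'
```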

WARNING

If even a single task in a batch fails, the entire batch will fail.

# Customization options

Three command-line options allow you to customize auto-batching behavior (an example combining them follows this list):

  • `--debounce-duration-sec`: the number of seconds to wait between receiving a document addition task and beginning the batching and indexation process. Default: 0
  • `--max-batch-size`: the maximum number of tasks per batch. Default: unlimited
  • `--max-documents-per-batch`: the maximum number of documents in a batch. Default: unlimited
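
As a hedged example, the following invocation combines all three options; the values are illustrative and should be tuned to your dataset and hardware.

```sh
# Wait up to 5 seconds for additional document addition requests before
# starting a batch, and cap each batch at 100 tasks and 100,000 documents.
./meilisearch \
  --enable-auto-batching \
  --debounce-duration-sec 5 \
  --max-batch-size 100 \
  --max-documents-per-batch 100000
```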

TIP

  • Giving a smaller number for `--max-documents-per-batch` will reduce memory use but slow down indexation, as in the example below
  • Giving a larger number for `--max-documents-per-batch` will increase memory use but speed up indexation
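
For example, on a memory-constrained machine you might lower the document cap; the value below is illustrative.

```sh
# A lower per-batch document cap trades indexation speed for a smaller
# memory footprint during indexation.
./meilisearch --enable-auto-batching --max-documents-per-batch 10000
```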