interleave for filters

Read multiple streams in parallel and combine them into one stream.

Signature

> interleave {flags} ...rest

Flags

  • --buffer-size, -b {int}: Number of items to buffer from the streams. Increases memory usage, but can help performance when lots of output is produced.

Parameters

  • ...rest: The closures that will generate streams to be combined.

Input/output types:

  input       | output
  ------------|------------
  list<any>   | list<any>
  nothing     | list<any>

Examples

Read two sequences of numbers into separate columns of a table. Note that the order of rows with 'a' columns and rows with 'b' columns is arbitrary.

> seq 1 50 | wrap a | interleave { seq 1 50 | wrap b }

Read two sequences of numbers, one from input. Sort for consistency.

> seq 1 3 | interleave { seq 4 6 } | sort
╭───┬───╮
│ 0 │ 1 │
│ 1 │ 2 │
│ 2 │ 3 │
│ 3 │ 4 │
│ 4 │ 5 │
│ 5 │ 6 │
╰───┴───╯

Read two sequences, but without any input. Sort for consistency.

> interleave { "foo\nbar\n" | lines } { "baz\nquux\n" | lines } | sort
╭───┬──────╮
│ 0 │ bar  │
│ 1 │ baz  │
│ 2 │ foo  │
│ 3 │ quux │
╰───┴──────╯

Run two commands in parallel and annotate their output.

> (
  interleave
    { nu -c "print hello; print world" | lines | each { "greeter: " ++ $in } }
    { nu -c "print nushell; print rocks" | lines | each { "evangelist: " ++ $in } }
)

Use a buffer to increase the performance of high-volume streams.

> seq 1 20000 | interleave --buffer-size 16 { seq 1 20000 } | math sum

Notes

This combinator is useful for reading output from multiple commands.

If input is provided to interleave, the input will be combined with the output of the closures. This enables interleave to be used at any position within a pipeline.
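For instance, the same data can be supplied either as pipeline input or as an extra closure; the two forms below produce the same combined stream (possibly in a different order). This is a minimal sketch built only from the commands shown in the examples above:

> seq 1 3 | interleave { seq 4 6 } | sort
> interleave { seq 1 3 } { seq 4 6 } | sort

Once sorted, both pipelines yield the numbers 1 through 6.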

Because items from each stream are inserted into the final stream as soon as they become available, the overall output order is not guaranteed. However, the relative order of items from any single stream is preserved.
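A small sketch of this guarantee (the ranges are only illustrative): filtering the combined stream back down to the items produced by one closure recovers them in their original order.

> interleave { seq 1 1000 } { seq 1001 2000 } | where {|n| $n <= 1000 }

The result is always identical to seq 1 1000, even though the unfiltered stream may mix the two ranges arbitrarily.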

If interleaving streams in a fair (round-robin) manner is desired, consider using zip { ... } | flatten instead.
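For example, zip pairs items from the two streams in lockstep, and flatten then turns the pairs back into a single stream. A minimal sketch:

> seq 1 3 | zip { seq 4 6 } | flatten

This yields 1 4 2 5 3 6, alternating strictly between the two streams, at the cost of waiting on both streams at each step.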