API

Stream

Stream([upstream, upstreams, stream_name, …]) A Stream is an infinite sequence of data
accumulate(upstream, func[, start, …]) Accumulate results with previous state
buffer(upstream, n[, loop]) Allow results to pile up at this point in the stream
collect(upstream[, cache]) Hold elements in a cache and emit them as a collection when flushed.
combine_latest(*upstreams, **kwargs) Combine multiple streams together to a stream of tuples
Stream.connect(downstream) Connect this stream to a downstream element.
delay(upstream, interval[, loop]) Add a time delay to results
Stream.destroy([streams]) Disconnect this stream from any upstream sources
Stream.disconnect(downstream) Disconnect this stream from a downstream element.
filter(upstream, predicate, **kwargs) Only pass through elements that satisfy the predicate
flatten([upstream, upstreams, stream_name, …]) Flatten streams of lists or iterables into a stream of elements
map(upstream, func, *args, **kwargs) Apply a function to every element in the stream
partition(upstream, n, **kwargs) Partition stream into tuples of equal size
rate_limit(upstream, interval, **kwargs) Limit the flow of data
scatter(*args, **kwargs) Convert local stream to Dask Stream
sink(upstream, func, *args, **kwargs) Apply a function on every element
sliding_window(upstream, n, **kwargs) Produce overlapping tuples of size n
timed_window(upstream, interval[, loop]) Emit a tuple of collected results every interval
union(*upstreams, **kwargs) Combine multiple streams into one
unique(upstream[, history, key]) Avoid sending through repeated elements
pluck(upstream, pick, **kwargs) Select elements from elements in the stream.
zip(*upstreams, **kwargs) Combine streams together into a stream of tuples
zip_latest(lossless, *upstreams, **kwargs) Combine multiple streams together to a stream of tuples

Sources

filenames(path[, poll_interval]) Stream over filenames in a directory
from_kafka(topics, consumer_params[, …]) Accepts messages from Kafka
from_textfile(f[, poll_interval]) Stream data from a text file

DaskStream

DaskStream(*args, **kwargs) A parallel stream using Dask

gather([upstream, upstreams, stream_name, …]) Convert Dask stream to local Stream

Definitions

streamz.accumulate(upstream, func, start='--no-default--', returns_state=False, **kwargs)

Accumulate results with previous state

This performs running or cumulative reductions, applying the function to the previous total and the new element. The function should take two arguments, the previous accumulated state and the next element, and it should return a new accumulated state.

Parameters:

func: callable

start: object

Initial value. Defaults to the first submitted element

returns_state: boolean

If true, then func should return both the state and the value to emit. If false, then both values are the same, and func returns one value.

**kwargs:

Keyword arguments to pass to func

Examples

>>> source = Stream()
>>> source.accumulate(lambda acc, x: acc + x).sink(print)
>>> for i in range(5):
...     source.emit(i)
0
1
3
6
10
streamz.buffer(upstream, n, loop=None, **kwargs)

Allow results to pile up at this point in the stream

This allows results to buffer in place at various points in the stream. This can help to smooth flow through the system when backpressure is applied.
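
Examples

A minimal sketch of where a buffer typically sits; the buffer size and rate limit below are arbitrary choices for illustration, not library defaults:

>>> from streamz import Stream
>>> source = Stream()
>>> (source.map(lambda x: 2 * x)
...        .buffer(1000)      # up to 1000 results may queue here
...        .rate_limit(0.1)   # a slow stage downstream of the buffer
...        .sink(print))
>>> for i in range(5):
...     source.emit(i)  # returns immediately while the buffer has room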

streamz.collect(upstream, cache=None, **kwargs)

Hold elements in a cache and emit them as a collection when flushed.

Examples

>>> source1 = Stream()
>>> source2 = Stream()
>>> collector = collect(source1)
>>> collector.sink(print)
>>> source2.sink(collector.flush)
>>> source1.emit(1)
>>> source1.emit(2)
>>> source2.emit('anything')  # flushes collector
...
[1, 2]
streamz.combine_latest(*upstreams, **kwargs)

Combine multiple streams together to a stream of tuples

This will emit a new tuple of all of the most recent elements seen from any stream.

Parameters:

emit_on : stream or list of streams or None

only emit upon update of the streams listed. If None, emit on update from any stream

See also

zip
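
Examples

A minimal sketch of the semantics described above; pass emit_on= to restrict which streams trigger output:

>>> from streamz import Stream
>>> a = Stream()
>>> b = Stream()
>>> a.combine_latest(b).sink(print)
>>> a.emit(1)      # no output yet: b has not produced an element
>>> b.emit('x')
(1, 'x')
>>> a.emit(2)
(2, 'x')
>>> b.emit('y')
(2, 'y')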

streamz.delay(upstream, interval, loop=None, **kwargs)

Add a time delay to results
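
Examples

A minimal sketch, assuming a running event loop; time-based nodes such as delay schedule their work on a Tornado IOLoop:

>>> from streamz import Stream
>>> source = Stream()
>>> source.delay(0.5).sink(print)  # hold each element for 0.5 seconds
>>> source.emit(1)                 # 1 is printed roughly 0.5 s later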

streamz.filter(upstream, predicate, **kwargs)

Only pass through elements that satisfy the predicate

Parameters:

predicate : function

The predicate. Should return True or False, where True means that the predicate is satisfied.

Examples

>>> source = Stream()
>>> source.filter(lambda x: x % 2 == 0).sink(print)
>>> for i in range(5):
...     source.emit(i)
0
2
4
streamz.flatten(upstream=None, upstreams=None, stream_name=None, loop=None, asynchronous=False)

Flatten streams of lists or iterables into a stream of elements

See also

partition

Examples

>>> source = Stream()
>>> source.flatten().sink(print)
>>> for x in [[1, 2, 3], [4, 5], [6, 7, 7]]:
...     source.emit(x)
1
2
3
4
5
6
7
7
streamz.map(upstream, func, *args, **kwargs)

Apply a function to every element in the stream

Parameters:

func: callable

*args :

The arguments to pass to the function.

**kwargs:

Keyword arguments to pass to func

Examples

>>> source = Stream()
>>> source.map(lambda x: 2*x).sink(print)
>>> for i in range(5):
...     source.emit(i)
0
2
4
6
8
streamz.partition(upstream, n, **kwargs)

Partition stream into tuples of equal size

Examples

>>> source = Stream()
>>> source.partition(3).sink(print)
>>> for i in range(10):
...     source.emit(i)
(0, 1, 2)
(3, 4, 5)
(6, 7, 8)
streamz.rate_limit(upstream, interval, **kwargs)

Limit the flow of data

This stops two elements from streaming through in an interval shorter than the provided value.

Parameters:

interval: float

Time in seconds
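
Examples

A minimal sketch; emit blocks as needed so that no two elements pass this point less than interval seconds apart:

>>> from streamz import Stream
>>> source = Stream()
>>> source.rate_limit(0.5).sink(print)
>>> for i in range(3):
...     source.emit(i)  # the second and third emits each wait about 0.5 s
0
1
2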

streamz.sink(upstream, func, *args, **kwargs)

Apply a function on every element

See also

map, Stream.sink_to_list

Examples

>>> source = Stream()
>>> L = list()
>>> source.sink(L.append)
>>> source.sink(print)
>>> source.sink(print)
>>> source.emit(123)
123
123
>>> L
[123]
streamz.sliding_window(upstream, n, **kwargs)

Produce overlapping tuples of size n

Examples

>>> source = Stream()
>>> source.sliding_window(3).sink(print)
>>> for i in range(8):
...     source.emit(i)
(0, 1, 2)
(1, 2, 3)
(2, 3, 4)
(3, 4, 5)
(4, 5, 6)
(5, 6, 7)
streamz.Stream(upstream=None, upstreams=None, stream_name=None, loop=None, asynchronous=False)

A Stream is an infinite sequence of data

Streams subscribe to each other passing and transforming data between them. A Stream object listens for updates from upstream, reacts to these updates, and then emits more data to flow downstream to all Stream objects that subscribe to it. Downstream Stream objects may connect at any point of a Stream graph to get a full view of the data coming off of that point to do with as they will.

Examples

>>> def inc(x):
...     return x + 1
>>> source = Stream()  # Create a stream object
>>> s = source.map(inc).map(str)  # Subscribe to make new streams
>>> s.sink(print)  # take an action whenever an element reaches the end
>>> L = list()
>>> s.sink(L.append)  # or take multiple actions (streams can branch)
>>> for i in range(5):
...     source.emit(i)  # push data in at the source
1
2
3
4
5
>>> L  # and the actions happen at the sinks
['1', '2', '3', '4', '5']
streamz.timed_window(upstream, interval, loop=None, **kwargs)

Emit a tuple of collected results every interval

Every interval seconds this emits a tuple of all of the results seen so far. This can help to batch data coming off of a high-volume stream.
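
Examples

A minimal sketch, assuming a running event loop; everything collected since the last flush is emitted together every interval:

>>> from streamz import Stream
>>> source = Stream()
>>> source.timed_window(0.1).sink(print)  # flush collected results every 0.1 s
>>> for i in range(50):
...     source.emit(i)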

streamz.union(*upstreams, **kwargs)

Combine multiple streams into one

Every element from any of the upstream streams will immediately flow into the output stream. They will not be combined with elements from other streams.

See also

Stream.zip, Stream.combine_latest
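
Examples

A minimal sketch: elements pass straight through in arrival order and are never paired up:

>>> from streamz import Stream
>>> a = Stream()
>>> b = Stream()
>>> a.union(b).sink(print)
>>> a.emit(1)
1
>>> b.emit(2)
2
>>> a.emit(3)
3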

streamz.unique(upstream, history=None, key=<function identity>, **kwargs)

Avoid sending through repeated elements

This deduplicates a stream so that only new elements pass through. You can control how much history is stored with the history= parameter. For example, setting history=1 avoids sending through elements when one is repeated right after the other.

Examples

>>> source = Stream()
>>> source.unique(history=1).sink(print)
>>> for x in [1, 1, 2, 2, 2, 1, 3]:
...     source.emit(x)
1
2
1
3
streamz.pluck(upstream, pick, **kwargs)

Select elements from elements in the stream.

Parameters:

pick : object, list

The element(s) to pick from the incoming element in the stream. If an instance of list, it will pick multiple elements.

Examples

>>> source = Stream()
>>> source.pluck([0, 3]).sink(print)
>>> for x in [[1, 2, 3, 4], [4, 5, 6, 7], [8, 9, 10, 11]]:
...     source.emit(x)
(1, 4)
(4, 7)
(8, 11)
>>> source = Stream()
>>> source.pluck('name').sink(print)
>>> for x in [{'name': 'Alice', 'x': 123}, {'name': 'Bob', 'x': 456}]:
...     source.emit(x)
Alice
Bob
streamz.zip(*upstreams, **kwargs)

Combine streams together into a stream of tuples

We emit a new tuple once all of the upstream streams have produced a new element.
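
Examples

A minimal sketch of the semantics described above; unlike combine_latest, zip buffers elements losslessly until every stream has produced one:

>>> from streamz import Stream
>>> a = Stream()
>>> b = Stream()
>>> a.zip(b).sink(print)
>>> a.emit(1)      # buffered: b has not produced an element yet
>>> a.emit(2)      # buffered as well
>>> b.emit('x')
(1, 'x')
>>> b.emit('y')
(2, 'y')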

streamz.zip_latest(lossless, *upstreams, **kwargs)

Combine multiple streams together to a stream of tuples

The stream which this is called from is lossless. All elements from the lossless stream are emitted regardless of when they arrive. This will emit a new tuple consisting of an element from the lossless stream paired with the latest elements from the other streams. Elements are only emitted when an element on the lossless stream is received, similar to combine_latest with the emit_on flag.

See also

Stream.combine_latest, Stream.zip
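
Examples

A minimal sketch of the semantics described above; the stream the method is called on is the lossless one:

>>> from streamz import Stream
>>> lossless = Stream()
>>> lossy = Stream()
>>> lossless.zip_latest(lossy).sink(print)
>>> lossy.emit('a')     # only records the latest value; no output
>>> lossless.emit(1)
(1, 'a')
>>> lossless.emit(2)
(2, 'a')
>>> lossy.emit('b')     # no output until the lossless stream updates
>>> lossless.emit(3)
(3, 'b')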

streamz.filenames(path, poll_interval=0.1)

Stream over filenames in a directory

Parameters:

path: string

Directory path or globstring over which to search for files

poll_interval: Number

Seconds between checking path

Examples

>>> source = Stream.filenames('path/to/dir')  
>>> source = Stream.filenames('path/to/*.csv', poll_interval=0.500)  
streamz.from_kafka(topics, consumer_params, poll_interval=0.1)

Accepts messages from Kafka

Uses the confluent-kafka library, https://docs.confluent.io/current/clients/confluent-kafka-python/

Parameters:

topics: list of str

Labels of Kafka topics to consume from

consumer_params: dict

Settings to set up the stream, see https://docs.confluent.io/current/clients/confluent-kafka-python/#configuration

Examples:

url: Connection string (host:port) by which to reach Kafka

group: Identity of the consumer. If multiple sources share the same group, each message will be passed to only one of them.

poll_interval: number

Seconds that elapse between polling Kafka for new messages
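
Examples

A minimal sketch; the topic name, broker address, and group id below are placeholders. bootstrap.servers and group.id are standard confluent-kafka configuration keys, and depending on the streamz version the source may need an explicit start() call:

>>> from streamz import Stream
>>> source = Stream.from_kafka(
...     ['my-topic'],                            # placeholder topic
...     {'bootstrap.servers': 'localhost:9092',  # placeholder broker
...      'group.id': 'streamz-example'},         # placeholder group id
...     poll_interval=0.1)
>>> source.sink(print)   # message values arrive as raw bytes
>>> source.start()       # begin polling Kafka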

streamz.from_textfile(f, poll_interval=0.1)

Stream data from a text file

Parameters:

f: file or string

poll_interval: Number

Interval to poll file for new data in seconds

Returns:

Stream
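
Examples

A minimal sketch; the file path is a placeholder, and depending on the streamz version the source may need an explicit start() call to begin polling:

>>> from streamz import Stream
>>> source = Stream.from_textfile('server.log', poll_interval=0.5)
>>> source.map(str.strip).sink(print)  # lines include their trailing newline
>>> source.start()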

streamz.dask.DaskStream(*args, **kwargs)

A parallel stream using Dask

streamz.dask.gather(upstream=None, upstreams=None, stream_name=None, loop=None, asynchronous=False)

Convert Dask stream to local Stream
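
Examples

A minimal sketch of the round trip between a local Stream and a DaskStream, assuming a dask.distributed scheduler is available (Client() with no arguments starts a local one):

>>> from dask.distributed import Client
>>> from streamz import Stream
>>> client = Client()
>>> source = Stream()
>>> (source.scatter()             # local Stream -> DaskStream
...        .map(lambda x: x + 1)  # runs on Dask workers as futures
...        .gather()              # DaskStream -> local Stream of results
...        .sink(print))
>>> for i in range(3):
...     source.emit(i)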