flox.groupby_reduce

flox.groupby_reduce(array, *by, func, expected_groups=None, sort=True, isbin=False, axis=None, fill_value=None, dtype=None, min_count=None, method=None, engine=None, reindex=None, finalize_kwargs=None)

GroupBy reductions using tree reductions for dask.array
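
A minimal usage sketch (outputs shown in comments are illustrative, assuming a recent flox version):

    import numpy as np
    import flox

    array = np.array([1.0, 2.0, 3.0, 4.0])
    by = np.array([0, 0, 1, 1])

    # func is keyword-only because it follows *by in the signature
    result, groups = flox.groupby_reduce(array, by, func="sum")
    # result -> array([3., 7.]), groups -> array([0, 1])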

Parameters:
array : ndarray or DaskArray

Array to be reduced, possibly nD

*by : ndarray or DaskArray

Array of labels to group over. Must be aligned with array so that array.shape[-by.ndim:] == by.shape, or any disagreements in that equality check are for dimensions of size 1 in by.
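
For example, a 1D by aligns against the trailing axis of an nD array; a sketch:

    import numpy as np
    import flox

    array = np.arange(12.0).reshape(3, 4)  # shape (3, 4)
    by = np.array([0, 0, 1, 1])            # shape (4,) == array.shape[-1:]

    # Each row is reduced independently over the two labels,
    # so the result should have shape (3, 2)
    result, groups = flox.groupby_reduce(array, by, func="sum")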

func : {"all", "any", "count", "sum", "nansum", "mean", "nanmean", "max", "nanmax", "min", "nanmin", "argmax", "nanargmax", "argmin", "nanargmin", "quantile", "nanquantile", "median", "nanmedian", "mode", "nanmode", "first", "nanfirst", "last", "nanlast"} or Aggregation

Single function name or an Aggregation instance

expected_groups : Sequence, optional

Expected unique labels.

isbin : bool, optional

Are expected_groups bin edges?
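
A sketch of binning with expected_groups as bin edges; flox follows pandas.cut-style right-closed intervals, so treat the exact interval labels as an assumption:

    import numpy as np
    import flox

    array = np.array([1.0, 2.0, 3.0, 4.0])
    by = np.array([0.1, 0.4, 0.6, 0.9])
    bins = np.array([0.0, 0.5, 1.0])  # two bins: (0.0, 0.5] and (0.5, 1.0]

    result, groups = flox.groupby_reduce(
        array, by, func="sum", expected_groups=bins, isbin=True
    )
    # result -> array([3., 7.]): values 1+2 fall in the first bin, 3+4 in the second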

sort : bool, optional

Whether groups should be returned in sorted order. Only applies for dask reductions when method is not "map-reduce". For "map-reduce", the groups are always sorted.

axis : None or int or Sequence[int], optional

If None, reduce across all dimensions of by. Else, reduce across the corresponding axes of array. Negative integers are normalized using array.ndim.
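
For instance, passing axis explicitly to group within each row of a 2D array (equivalent here to the axis=None default, since by spans only the last axis):

    import numpy as np
    import flox

    array = np.arange(12.0).reshape(3, 4)
    by = np.array([0, 0, 1, 1])

    # Reduce along the last axis only; -1 is normalized using array.ndim
    result, groups = flox.groupby_reduce(array, by, func="mean", axis=-1)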

fill_value : Any

Value to assign when a label in expected_groups is not present.
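
A sketch where one expected label never occurs, so its slot takes fill_value:

    import numpy as np
    import flox

    array = np.array([10.0, 20.0, 30.0])
    by = np.array([0, 0, 2])

    result, groups = flox.groupby_reduce(
        array, by, func="sum", expected_groups=np.array([0, 1, 2]), fill_value=-1
    )
    # result -> array([30., -1., 30.]): label 1 is absent, so it receives fill_value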

dtype : data-type, optional

DType for the output. Can be anything that is accepted by np.dtype.

min_count : int, default: None

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present, the result will be NA. Only used if skipna is set to True or defaults to True for the array's dtype.
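
A sketch of min_count with a NaN-skipping sum (the explicit fill_value here is an assumption; an all-NaN group would otherwise sum to 0):

    import numpy as np
    import flox

    array = np.array([1.0, 2.0, np.nan, np.nan])
    by = np.array([0, 0, 1, 1])

    result, groups = flox.groupby_reduce(
        array, by, func="nansum", min_count=1, fill_value=np.nan
    )
    # group 0 has two valid values -> 3.0
    # group 1 has no valid values  -> NaN instead of 0.0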

method : {"map-reduce", "blockwise", "cohorts"}, optional
Strategy for reduction of dask arrays only:
  • "map-reduce": First apply the reduction blockwise on array, then combine a few newighbouring blocks, apply the reduction. Continue until finalizing. Usually, func will need to be an Aggregation instance for this method to work. Common aggregations are implemented.

  • "blockwise": Only reduce using blockwise and avoid aggregating blocks together. Useful for resampling-style reductions where group members are always together. If by is 1D, array is automatically rechunked so that chunk boundaries line up with group boundaries i.e. each block contains all members of any group present in that block. For nD by, you must make sure that all members of a group are present in a single block.

  • "cohorts": Finds group labels that tend to occur together (“cohorts”), indexes out cohorts and reduces that subset using “map-reduce”, repeat for all cohorts. This works well for many time groupings where the group labels repeat at regular intervals like ‘hour’, ‘month’, dayofyear’ etc. Optimize chunking array for this method by first rechunking using rechunk_for_cohorts (for 1D by only).

engine : {"flox", "numpy", "numba", "numbagg"}, optional
Algorithm to compute the groupby reduction on non-dask arrays and on each dask chunk:
  • "numpy": Use the vectorized implementations in numpy_groupies.aggregate_numpy. This is the default choice because it works for most array types.

  • "flox": Use an internal implementation where the data is sorted so that all members of a group occur sequentially, and then numpy.ufunc.reduceat is to used for the reduction. This will fall back to numpy_groupies.aggregate_numpy for a reduction that is not yet implemented.

  • "numba": Use the implementations in numpy_groupies.aggregate_numba.

  • "numbagg": Use the reductions supported by numbagg.grouped. This will fall back to numpy_groupies.aggregate_numpy for a reduction that is not yet implemented.

reindex : bool, optional

Whether to “reindex” the blockwise results to expected_groups (possibly automatically detected). If True, the intermediate result of the blockwise groupby-reduction has a value for all expected groups, and the final result is a simple reduction of those intermediates. In nearly all cases, this is a significant boost in computation speed. For cases like time grouping, this may result in large intermediates relative to the original block size. Avoid that by using method="cohorts". By default, it is turned off for argreductions.

finalize_kwargs : dict, optional

Kwargs passed to finalize the reduction, such as ddof for var and std, or q for quantile.
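
For example, passing ddof through to a variance reduction (a sketch; the same pattern applies to q for quantile):

    import numpy as np
    import flox

    array = np.array([1.0, 2.0, 3.0, 4.0])
    by = np.array([0, 0, 1, 1])

    # Sample (ddof=1) rather than population variance within each group
    result, groups = flox.groupby_reduce(
        array, by, func="var", finalize_kwargs={"ddof": 1}
    )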

Returns:
result

Aggregated result

*groups

Group labels
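
With multiple by arrays, the reduced axes are replaced by one output axis per grouper, and one label array is returned per by; a sketch under those assumptions:

    import numpy as np
    import flox

    array = np.arange(4.0)
    by1 = np.array([0, 0, 1, 1])
    by2 = np.array([0, 1, 0, 1])

    # result has shape (2, 2): one axis per grouper
    result, groups1, groups2 = flox.groupby_reduce(array, by1, by2, func="sum")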