7.1.1. Dispatcher

class Dispatcher(dmap=None, name='', default_values=None, raises=False, description='', executor=None)[source]

It provides a data structure to process a complex system of functions.

The purpose of this data structure is to compute the shortest workflow between input and output data nodes.

A workflow is a sequence of function calls.
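The idea can be illustrated with a plain-Python sketch (an illustration only, not schedula's implementation): given function nodes that map known data ids to new ones, find a sequence of calls that turns the available inputs into the requested output.

```python
def shortest_workflow(functions, inputs, target):
    """Greedy sketch: repeatedly fire any function whose inputs are
    already known, until `target` is computed; return the call order.
    `functions` maps id -> (list of input ids, output id)."""
    known, order = set(inputs), []
    while target not in known:
        fireable = [fid for fid, (ins, out) in functions.items()
                    if fid not in order and set(ins) <= known]
        if not fireable:
            raise ValueError('target unreachable from given inputs')
        fid = fireable[0]
        order.append(fid)
        known.add(functions[fid][1])
    return order

funs = {'diff': (['a', 'b'], 'c'), 'log': (['c'], 'd')}
print(shortest_workflow(funs, {'a', 'b'}, 'd'))  # ['diff', 'log']
```

schedula additionally weighs nodes and edges to pick the *shortest* such workflow; this sketch only shows the reachability part.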

————————————————————————

Example:

As an example, here is a system of equations:

\(b - a = c\)

\(\log(c) = d_{from-log}\)

\(d = (d_{from-log} + d_{initial-guess}) / 2\)

that will be solved assuming that \(a = 0\), \(b = 1\), and \(d_{initial-guess} = 4\).

Steps

Create an empty dispatcher:

>>> dsp = Dispatcher(name='Dispatcher')

Add data nodes to the dispatcher map:

>>> dsp.add_data(data_id='a')
'a'
>>> dsp.add_data(data_id='c')
'c'

Add a data node with a default value to the dispatcher map:

>>> dsp.add_data(data_id='b', default_value=1)
'b'

Add a function node:

>>> def diff_function(a, b):
...     return b - a
...
>>> dsp.add_function('diff_function', function=diff_function,
...                  inputs=['a', 'b'], outputs=['c'])
'diff_function'

Add a function node with domain:

>>> from math import log
...
>>> def log_domain(x):
...     return x > 0
...
>>> dsp.add_function('log', function=log, inputs=['c'], outputs=['d'],
...                  input_domain=log_domain)
'log'

Add a data node with function estimation and callback function.

  • function estimation: estimate one unique output from multiple estimations.
  • callback function: is invoked after computing the output.
>>> def average_fun(kwargs):
...     '''
...     Returns the average of node estimations.
...
...     :param kwargs:
...         Node estimations.
...     :type kwargs: dict
...
...     :return:
...         The average of node estimations.
...     :rtype: float
...     '''
...
...     x = kwargs.values()
...     return sum(x) / len(x)
...
>>> def callback_fun(x):
...     print('(log(1) + 4) / 2 = %.1f' % x)
...
>>> dsp.add_data(data_id='d', default_value=4, wait_inputs=True,
...              function=average_fun, callback=callback_fun)
'd'

Dispatch the function calls to achieve the desired output data node d:

>>> outputs = dsp.dispatch(inputs={'a': 0}, outputs=['d'])
(log(1) + 4) / 2 = 2.0
>>> outputs
Solution([('a', 0), ('b', 1), ('c', 1), ('d', 2.0)])
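The values in the Solution can be checked with plain Python, evaluating the same equations with the same assumed inputs:

```python
from math import log

a, b, d_initial_guess = 0, 1, 4
c = b - a                                # b - a = c        -> 1
d_from_log = log(c)                      # log(1)           -> 0.0
d = (d_from_log + d_initial_guess) / 2   # (0.0 + 4) / 2    -> 2.0
print(c, d)  # 1 2.0
```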

Methods

__init__ Initializes the dispatcher.
add_data Add a single data node to the dispatcher.
add_dispatcher Add a single sub-dispatcher node to dispatcher.
add_from_lists Add multiple function and data nodes to dispatcher.
add_func Add a single function node to dispatcher.
add_function Add a single function node to dispatcher.
blue Constructs a BlueDispatcher out of the current object.
copy Returns a deepcopy of the Dispatcher.
copy_structure Returns a copy of the Dispatcher structure.
dispatch Evaluates the minimum workflow and data outputs of the dispatcher model from given inputs.
extend Extends Dispatcher calling each deferred operation of given Blueprints.
get_node Returns a sub node of a dispatcher.
get_sub_dsp Returns the sub-dispatcher induced by given node and edge bunches.
get_sub_dsp_from_workflow Returns the sub-dispatcher induced by the workflow from sources.
plot Plots the Dispatcher with a graph in the DOT language with Graphviz.
set_default_value Set the default value of a data node in the dispatcher.
shrink_dsp Returns a reduced dispatcher.
web Creates a dispatcher Flask app.
__init__(dmap=None, name='', default_values=None, raises=False, description='', executor=None)[source]

Initializes the dispatcher.

Parameters:
  • dmap (networkx.DiGraph, optional) – A directed graph that stores data & functions parameters.
  • name (str, optional) – The dispatcher’s name.
  • default_values (dict[str, dict], optional) – Data node default values. These will be used as input if they are not specified as inputs in the ArciDispatch algorithm.
  • raises (bool|callable|str, optional) – If True the dispatcher interrupts the dispatch when an error occurs; otherwise, if raises != ‘’ it logs a warning. If a callable is given, it is executed passing the exception to decide whether or not to raise it.
  • description (str, optional) – The dispatcher’s description.
  • executor (str, optional) –

    A pool executor id to dispatch asynchronously or in parallel.

    There are four default Pool executors to dispatch asynchronously or in parallel:

    • async: execute all functions asynchronously in the same process,
    • parallel: execute all functions in parallel excluding SubDispatch functions,
    • parallel-pool: execute all functions in parallel using a process pool excluding SubDispatch functions,
    • parallel-dispatch: execute all functions in parallel including SubDispatch.
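The executor machinery itself is internal to schedula, but the underlying idea of running independent function nodes concurrently can be sketched with the standard library (an illustration, not schedula's executors):

```python
from concurrent.futures import ThreadPoolExecutor

def fun1(a):
    return a + 1

def fun2(a):
    return a * 2

# Two function nodes that share the input 'a' but not each other's
# outputs have no data dependency, so they can run concurrently;
# the dispatcher then gathers the results into the solution.
with ThreadPoolExecutor() as pool:
    f1 = pool.submit(fun1, 3)
    f2 = pool.submit(fun2, 3)
    results = {'b': f1.result(), 'c': f2.result()}
print(results)  # {'b': 4, 'c': 6}
```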

Attributes

data_nodes Returns all data nodes of the dispatcher.
function_nodes Returns all function nodes of the dispatcher.
sub_dsp_nodes Returns all sub-dispatcher nodes of the dispatcher.
dmap = None

The directed graph that stores data & functions parameters.

name = None

The dispatcher’s name.

nodes = None

The function and data nodes of the dispatcher.

default_values = None

Data node default values. These will be used as input if they are not specified as inputs in the ArciDispatch algorithm.

raises = None

If True the dispatcher interrupts the dispatch when an error occurs.

executor = None

Pool executor to dispatch asynchronously.

solution = None

Last dispatch solution.

counter = None

Counter to set the node index.

copy_structure(**kwargs)[source]

Returns a copy of the Dispatcher structure.

Parameters:kwargs (dict) – Additional parameters to initialize the new class.
Returns:A copy of the Dispatcher structure.
Return type:Dispatcher
add_data(data_id=None, default_value=empty, initial_dist=0.0, wait_inputs=False, wildcard=None, function=None, callback=None, description=None, filters=None, await_result=None, **kwargs)[source]

Add a single data node to the dispatcher.

Parameters:
  • data_id (str, optional) – Data node id. If None, an id not yet in the dmap will be assigned automatically (‘unknown<%d>’).
  • default_value (T, optional) – Data node default value. This will be used as input if it is not specified as input in the ArciDispatch algorithm.
  • initial_dist (float, int, optional) – Initial distance in the ArciDispatch algorithm when the data node default value is used.
  • wait_inputs (bool, optional) – If True ArciDispatch algorithm stops on the node until it gets all input estimations.
  • wildcard (bool, optional) – If True, when the data node is used as input and target in the ArciDispatch algorithm, the input value will be used as input for the connected functions, but not as output.
  • function (callable, optional) – Data node estimation function. This can be any function that takes only one dictionary (key=function node id, value=estimation of data node) as input and returns one value that is the estimation of the data node.
  • callback (callable, optional) – Callback function to be called after node estimation. This can be any function that takes only one argument that is the data node estimation output. It does not return anything.
  • description (str, optional) – Data node’s description.
  • filters (list[function], optional) – A list of functions that are invoked after the invocation of the main function.
  • await_result (bool|int|float, optional) – If True the Dispatcher waits data results before assigning them to the solution. If a number is defined this is used as timeout for Future.result method [default: False]. Note this is used when asynchronous or parallel execution is enabled.
  • kwargs (keyword arguments, optional) – Set additional node attributes using key=value.
Returns:

Data node id.

Return type:

str

——————————————————————–

Example:

Add a data to be estimated or a possible input data node:

>>> dsp.add_data(data_id='a')
'a'

Add a data with a default value (i.e., input data node):

>>> dsp.add_data(data_id='b', default_value=1)
'b'

Create a data node with function estimation and a default value.

  • function estimation: estimate one unique output from multiple estimations.
  • default value: is a default estimation.
>>> def min_fun(kwargs):
...     '''
...     Returns the minimum value of node estimations.
...
...     :param kwargs:
...         Node estimations.
...     :type kwargs: dict
...
...     :return:
...         The minimum value of node estimations.
...     :rtype: float
...     '''
...
...     return min(kwargs.values())
...
>>> dsp.add_data(data_id='c', default_value=2, wait_inputs=True,
...              function=min_fun)
'c'

Create a data with an unknown id and return the generated id:

>>> dsp.add_data()
'unknown'
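The mechanics of wait_inputs combined with an estimation function can be emulated in plain Python: the node collects one estimation per incoming function node into a dict, then the estimation function reduces the dict to a single value (a sketch of the behavior described above, not schedula internals):

```python
def min_fun(kwargs):
    # key = function node id, value = its estimation of the data node
    return min(kwargs.values())

# With wait_inputs=True the node waits for all incoming estimations
# before reducing them; the default value acts as one more estimation.
estimations = {'fun1': 5, 'fun2': 3, 'default': 2}
print(min_fun(estimations))  # 2
```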
add_function(function_id=None, function=None, inputs=None, outputs=None, input_domain=None, weight=None, inp_weight=None, out_weight=None, description=None, filters=None, await_domain=None, await_result=None, **kwargs)[source]

Add a single function node to dispatcher.

Parameters:
  • function_id (str, optional) – Function node id. If None will be assigned as <fun.__name__>.
  • function (callable, optional) – Data node estimation function.
  • inputs (list, optional) – Ordered arguments (i.e., data node ids) needed by the function.
  • outputs (list, optional) – Ordered results (i.e., data node ids) returned by the function.
  • input_domain (callable, optional) – A function that checks if input values satisfy the function domain. This can be any function that takes the same inputs of the function and returns True if input values satisfy the domain, otherwise False. In this case the dispatch algorithm doesn’t pass on the node.
  • weight (float, int, optional) – Node weight. It is a weight coefficient that is used by the dispatch algorithm to estimate the minimum workflow.
  • inp_weight (dict[str, float | int], optional) – Edge weights from data nodes to the function node. It is a dictionary (key=data node id) with the weight coefficients used by the dispatch algorithm to estimate the minimum workflow.
  • out_weight (dict[str, float | int], optional) – Edge weights from the function node to data nodes. It is a dictionary (key=data node id) with the weight coefficients used by the dispatch algorithm to estimate the minimum workflow.
  • description (str, optional) – Function node’s description.
  • filters (list[function], optional) – A list of functions that are invoked after the invocation of the main function.
  • await_domain (bool|int|float, optional) – If True the Dispatcher waits all input results before executing the input_domain function. If a number is defined this is used as timeout for Future.result method [default: True]. Note this is used when asynchronous or parallel execution is enabled.
  • await_result (bool|int|float, optional) – If True the Dispatcher waits output results before assigning them to the workflow. If a number is defined this is used as timeout for Future.result method [default: False]. Note this is used when asynchronous or parallel execution is enabled.
  • kwargs (keyword arguments, optional) – Set additional node attributes using key=value.
Returns:

Function node id.

Return type:

str

——————————————————————–

Example:

Add a function node:

>>> def my_function(a, b):
...     c = a + b
...     d = a - b
...     return c, d
...
>>> dsp.add_function(function=my_function, inputs=['a', 'b'],
...                  outputs=['c', 'd'])
'my_function'

Add a function node with domain:

>>> from math import log
>>> def my_log(a, b):
...     return log(b - a)
...
>>> def my_domain(a, b):
...     return a < b
...
>>> dsp.add_function(function=my_log, inputs=['a', 'b'],
...                  outputs=['e'], input_domain=my_domain)
'my_log'
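The role of input_domain can be sketched in plain Python: the domain function is evaluated with the same inputs as the function, and the function is only called when the domain returns True (an illustration of the gating described above):

```python
from math import log

def my_log(a, b):
    return log(b - a)

def my_domain(a, b):
    return a < b  # log is only defined for b - a > 0

def guarded_call(a, b):
    # The dispatch algorithm does not pass on the node when the
    # domain is not satisfied; here that is modeled as returning None.
    if my_domain(a, b):
        return my_log(a, b)
    return None

print(guarded_call(0, 1))  # 0.0
print(guarded_call(2, 1))  # None
```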
add_func(function, outputs=None, weight=None, inputs_defaults=False, inputs_kwargs=False, filters=None, input_domain=None, await_domain=None, await_result=None, inp_weight=None, out_weight=None, description=None, inputs=None, function_id=None, **kwargs)[source]

Add a single function node to dispatcher.

Parameters:
  • inputs_kwargs (bool) – Do you want to include kwargs as inputs?
  • inputs_defaults (bool) – Do you want to set default values?
  • function_id (str, optional) – Function node id. If None will be assigned as <fun.__name__>.
  • function (callable, optional) – Data node estimation function.
  • inputs (list, optional) – Ordered arguments (i.e., data node ids) needed by the function. If None it will take parameters names from function signature.
  • outputs (list, optional) – Ordered results (i.e., data node ids) returned by the function.
  • input_domain (callable, optional) – A function that checks if input values satisfy the function domain. This can be any function that takes the same inputs of the function and returns True if input values satisfy the domain, otherwise False. In this case the dispatch algorithm doesn’t pass on the node.
  • weight (float, int, optional) – Node weight. It is a weight coefficient that is used by the dispatch algorithm to estimate the minimum workflow.
  • inp_weight (dict[str, float | int], optional) – Edge weights from data nodes to the function node. It is a dictionary (key=data node id) with the weight coefficients used by the dispatch algorithm to estimate the minimum workflow.
  • out_weight (dict[str, float | int], optional) – Edge weights from the function node to data nodes. It is a dictionary (key=data node id) with the weight coefficients used by the dispatch algorithm to estimate the minimum workflow.
  • description (str, optional) – Function node’s description.
  • filters (list[function], optional) – A list of functions that are invoked after the invocation of the main function.
  • await_domain (bool|int|float, optional) – If True the Dispatcher waits all input results before executing the input_domain function. If a number is defined this is used as timeout for Future.result method [default: True]. Note this is used when asynchronous or parallel execution is enabled.
  • await_result (bool|int|float, optional) – If True the Dispatcher waits output results before assigning them to the workflow. If a number is defined this is used as timeout for Future.result method [default: False]. Note this is used when asynchronous or parallel execution is enabled.
  • kwargs (keyword arguments, optional) – Set additional node attributes using key=value.
Returns:

Function node id.

Return type:

str

——————————————————————–

Example:

>>> import schedula as sh
>>> dsp = sh.Dispatcher(name='Dispatcher')
>>> def f(a, b, c, d=3, m=5):
...     return (a + b) - c + d - m
>>> dsp.add_func(f, outputs=['d'])
'f'
>>> dsp.add_func(f, ['m'], inputs_defaults=True, inputs='beal')
'f<0>'
>>> dsp.add_func(f, ['i'], inputs_kwargs=True)
'f<1>'
>>> def g(a, b, c, *args, d=0):
...     return (a + b) * c + d
>>> dsp.add_func(g, ['e'], inputs_defaults=True)
'g'
>>> sol = dsp({'a': 1, 'b': 3, 'c': 0}); sol
Solution([('a', 1), ('b', 3), ('c', 0), ('l', 3), ('d', 2),
          ('e', 0), ('m', 0), ('i', 6)])
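When inputs is None, add_func infers data node ids from the function signature; the kind of introspection involved can be sketched with the standard library inspect module (an illustration, not schedula's code):

```python
import inspect

def f(a, b, c, d=3, m=5):
    return (a + b) - c + d - m

sig = inspect.signature(f)
inputs = list(sig.parameters)  # parameter names, in order
defaults = {k: p.default for k, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty}
print(inputs)    # ['a', 'b', 'c', 'd', 'm']
print(defaults)  # {'d': 3, 'm': 5}
```

With inputs_defaults=True, the inferred defaults become data node default values; with inputs_kwargs=True, keyword-only parameters are also included as inputs.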
add_dispatcher(dsp, inputs, outputs, dsp_id=None, input_domain=None, weight=None, inp_weight=None, description=None, include_defaults=False, await_domain=None, **kwargs)[source]

Add a single sub-dispatcher node to dispatcher.

Parameters:
  • dsp (Dispatcher | dict[str, list]) – Child dispatcher that is added as sub-dispatcher node to the parent dispatcher.
  • inputs (dict[str, str | list[str]] | tuple[str] | (str, .., dict[str, str | list[str]])) – Inputs mapping. Data node ids from parent dispatcher to child sub-dispatcher.
  • outputs (dict[str, str | list[str]] | tuple[str] | (str, .., dict[str, str | list[str]])) – Outputs mapping. Data node ids from child sub-dispatcher to parent dispatcher.
  • dsp_id (str, optional) – Sub-dispatcher node id. If None will be assigned as <dsp.name>.
  • input_domain ((dict) -> bool, optional) –

    A function that checks if input values satisfy the function domain. This can be any function that takes a dictionary with the inputs of the sub-dispatcher node and returns True if input values satisfy the domain, otherwise False.

    Note

    This function is invoked every time a data node reaches the sub-dispatcher node.

  • weight (float, int, optional) – Node weight. It is a weight coefficient that is used by the dispatch algorithm to estimate the minimum workflow.
  • inp_weight (dict[str, int | float], optional) – Edge weights from data nodes to the sub-dispatcher node. It is a dictionary (key=data node id) with the weight coefficients used by the dispatch algorithm to estimate the minimum workflow.
  • description (str, optional) – Sub-dispatcher node’s description.
  • include_defaults (bool, optional) – If True the default values of the sub-dispatcher are added to the current dispatcher.
  • await_domain (bool|int|float, optional) – If True the Dispatcher waits all input results before executing the input_domain function. If a number is defined this is used as timeout for Future.result method [default: True]. Note this is used when asynchronous or parallel execution is enabled.
  • kwargs (keyword arguments, optional) – Set additional node attributes using key=value.
Returns:

Sub-dispatcher node id.

Return type:

str

——————————————————————–

Example:

Create a sub-dispatcher:

>>> sub_dsp = Dispatcher()
>>> sub_dsp.add_function('max', max, ['a', 'b'], ['c'])
'max'

Add the sub-dispatcher to the parent dispatcher:

>>> dsp.add_dispatcher(dsp_id='Sub-Dispatcher', dsp=sub_dsp,
...                    inputs={'A': 'a', 'B': 'b'},
...                    outputs={'c': 'C'})
'Sub-Dispatcher'

Add a sub-dispatcher node with domain:

>>> def my_domain(kwargs):
...     return kwargs['C'] > 3
...
>>> dsp.add_dispatcher(dsp_id='Sub-Dispatcher with domain',
...                    dsp=sub_dsp, inputs={'C': 'a', 'D': 'b'},
...                    outputs={('c', 'b'): ('E', 'E1')},
...                    input_domain=my_domain)
'Sub-Dispatcher with domain'
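The inputs/outputs arguments are plain renamings of data node ids across the parent/child boundary; the translation step can be emulated as follows (a sketch based on the max sub-dispatcher above):

```python
def translate(values, mapping):
    # Rename data node ids across the parent/child boundary.
    return {mapping[k]: v for k, v in values.items() if k in mapping}

inputs_map = {'A': 'a', 'B': 'b'}   # parent id -> child id
outputs_map = {'c': 'C'}            # child id -> parent id

parent_values = {'A': 1, 'B': 5}
child_inputs = translate(parent_values, inputs_map)   # {'a': 1, 'b': 5}
child_outputs = {'c': max(child_inputs['a'], child_inputs['b'])}
parent_outputs = translate(child_outputs, outputs_map)
print(parent_outputs)  # {'C': 5}
```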
add_from_lists(data_list=None, fun_list=None, dsp_list=None)[source]

Add multiple function and data nodes to dispatcher.

Parameters:
  • data_list (list[dict], optional) – It is a list of data node kwargs to be loaded.
  • fun_list (list[dict], optional) – It is a list of function node kwargs to be loaded.
  • dsp_list (list[dict], optional) – It is a list of sub-dispatcher node kwargs to be loaded.
Returns:

  • Data node ids.
  • Function node ids.
  • Sub-dispatcher node ids.

Return type:

(list[str], list[str], list[str])

——————————————————————–

Example:

Define a data list:

>>> data_list = [
...     {'data_id': 'a'},
...     {'data_id': 'b'},
...     {'data_id': 'c'},
... ]

Define a functions list:

>>> def func(a, b):
...     return a + b
...
>>> fun_list = [
...     {'function': func, 'inputs': ['a', 'b'], 'outputs': ['c']}
... ]

Define a sub-dispatchers list:

>>> sub_dsp = Dispatcher(name='Sub-dispatcher')
>>> sub_dsp.add_function(function=func, inputs=['e', 'f'],
...                      outputs=['g'])
'func'
>>>
>>> dsp_list = [
...     {'dsp_id': 'Sub', 'dsp': sub_dsp,
...      'inputs': {'a': 'e', 'b': 'f'}, 'outputs': {'g': 'c'}},
... ]

Add function and data nodes to dispatcher:

>>> dsp.add_from_lists(data_list, fun_list, dsp_list)
(['a', 'b', 'c'], ['func'], ['Sub'])
set_default_value(data_id, value=empty, initial_dist=0.0)[source]

Set the default value of a data node in the dispatcher.

Parameters:
  • data_id (str) – Data node id.
  • value (T, optional) –

    Data node default value.

    Note

    If EMPTY the previous default value is removed.

  • initial_dist (float, int, optional) – Initial distance in the ArciDispatch algorithm when the data node default value is used.

——————————————————————–

Example:

A dispatcher with a data node named a:

>>> dsp = Dispatcher(name='Dispatcher')
...
>>> dsp.add_data(data_id='a')
'a'

Add a default value to a node:

>>> dsp.set_default_value('a', value='value of the data')
>>> list(sorted(dsp.default_values['a'].items()))
[('initial_dist', 0.0), ('value', 'value of the data')]

Remove the default value of a node:

>>> dsp.set_default_value('a', value=EMPTY)
>>> dsp.default_values
{}
get_sub_dsp(nodes_bunch, edges_bunch=None)[source]

Returns the sub-dispatcher induced by given node and edge bunches.

The induced sub-dispatcher contains the available nodes in nodes_bunch and edges between those nodes, excluding those that are in edges_bunch.

The available nodes are non-isolated nodes and function nodes that have all inputs and at least one output.

Parameters:
  • nodes_bunch (list[str], iterable) – A container of node ids which will be iterated through once.
  • edges_bunch (list[(str, str)], iterable, optional) – A container of edge ids that will be removed.
Returns:

A dispatcher.

Return type:

Dispatcher

Note

The sub-dispatcher edge or node attributes just point to the original dispatcher. So changes to the node or edge structure will not be reflected in the original dispatcher map while changes to the attributes will.
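The note above can be demonstrated with plain dictionaries: a shallow copy creates a new mapping but shares the attribute dicts, so attribute edits propagate to the original while structural edits do not (an illustration of the general Python behavior, not schedula internals):

```python
nodes = {'a': {'type': 'data'}, 'fun2': {'type': 'function'}}

# Shallow copy: new outer mapping, same inner attribute dicts.
sub = dict(nodes)
del sub['fun2']                  # structural change: not reflected
sub['a']['description'] = 'x'    # attribute change: reflected

print('fun2' in nodes)            # True
print(nodes['a']['description'])  # x
```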

——————————————————————–

Example:

A dispatcher with two functions fun1 and fun2:

Get the sub-dispatcher induced by given nodes bunch:

>>> sub_dsp = dsp.get_sub_dsp(['a', 'c', 'd', 'e', 'fun2'])
get_sub_dsp_from_workflow(sources, graph=None, reverse=False, add_missing=False, check_inputs=True, blockers=None, wildcard=False, _update_links=True)[source]

Returns the sub-dispatcher induced by the workflow from sources.

The induced sub-dispatcher of the dsp contains the nodes and edges reachable via a breadth-first search on the workflow graph from the source nodes.

Parameters:
  • sources (list[str], iterable) – Source nodes for the breadth-first-search. A container of nodes which will be iterated through once.
  • graph (networkx.DiGraph, optional) – A directed graph where the breadth-first search is evaluated.
  • reverse (bool, optional) – If True the workflow graph is assumed as reversed.
  • add_missing (bool, optional) – If True, missing function inputs are added to the sub-dispatcher.
  • check_inputs (bool, optional) – If True, the missing function inputs are not checked.
  • blockers (set[str], iterable, optional) – Nodes to not be added to the queue.
  • wildcard (bool, optional) – If True, when the data node is used as input and target in the ArciDispatch algorithm, the input value will be used as input for the connected functions, but not as output.
  • _update_links (bool, optional) – If True, it updates remote links of the extracted dispatcher.
Returns:

A sub-dispatcher.

Return type:

Dispatcher

See also

get_sub_dsp()

Note

The sub-dispatcher edge or node attributes just point to the original dispatcher. So changes to the node or edge structure will not be reflected in the original dispatcher map while changes to the attributes will.

——————————————————————–

Example:

A dispatcher with a function fun and a node a with a default value:

Dispatch with no calls in order to have a workflow:

>>> o = dsp.dispatch(inputs=['a', 'b'], no_call=True)

Get sub-dispatcher from workflow inputs a and b:

>>> sub_dsp = dsp.get_sub_dsp_from_workflow(['a', 'b'])

Get sub-dispatcher from a workflow output c:

>>> sub_dsp = dsp.get_sub_dsp_from_workflow(['c'], reverse=True)
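The reachable-node computation described above amounts to a breadth-first search from the sources over the workflow graph; a stdlib-only sketch:

```python
from collections import deque

def reachable(graph, sources):
    """Breadth-first search: return the nodes reachable from `sources`
    in a directed graph given as id -> list of successor ids."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Tiny workflow: data 'a' and 'b' feed function 'fun', which yields 'c'.
workflow = {'a': ['fun'], 'b': ['fun'], 'fun': ['c']}
print(sorted(reachable(workflow, ['a', 'b'])))  # ['a', 'b', 'c', 'fun']
```

With reverse=True the same search runs on the reversed graph, which is how the 'c'-to-sources example above works.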
data_nodes

Returns all data nodes of the dispatcher.

Returns:All data nodes of the dispatcher.
Return type:dict[str, dict]
function_nodes

Returns all function nodes of the dispatcher.

Returns:All function nodes of the dispatcher.
Return type:dict[str, dict]
sub_dsp_nodes

Returns all sub-dispatcher nodes of the dispatcher.

Returns:All sub-dispatcher nodes of the dispatcher.
Return type:dict[str, dict]
copy()[source]

Returns a deepcopy of the Dispatcher.

Returns:A copy of the Dispatcher.
Return type:Dispatcher

Example:

>>> dsp = Dispatcher()
>>> dsp is dsp.copy()
False
blue(memo=None)[source]

Constructs a BlueDispatcher out of the current object.

Parameters:memo (dict[T,schedula.utils.blue.Blueprint]) – A dictionary to cache Blueprints.
Returns:A BlueDispatcher of the current object.
Return type:schedula.utils.blue.BlueDispatcher
extend(*blues, memo=None)[source]

Extends Dispatcher calling each deferred operation of given Blueprints.

Parameters:
  • blues (Blueprint | schedula.dispatcher.Dispatcher) – Blueprints or Dispatchers to extend deferred operations.
  • memo (dict[T,schedula.utils.blue.Blueprint|Dispatcher]) – A dictionary to cache Blueprints and Dispatchers.
Returns:

Self.

Return type:

Dispatcher

——————————————————————–

Example:

>>> import schedula as sh
>>> dsp = sh.Dispatcher()
>>> dsp.add_func(callable, ['is_callable'])
'callable'
>>> blue = sh.BlueDispatcher().add_func(len, ['length'])
>>> dsp = sh.Dispatcher().extend(dsp, blue)
dispatch(inputs=None, outputs=None, cutoff=None, inputs_dist=None, wildcard=False, no_call=False, shrink=False, rm_unused_nds=False, select_output_kw=None, _wait_in=None, stopper=None, executor=False, sol_name=())[source]

Evaluates the minimum workflow and data outputs of the dispatcher model from given inputs.

Parameters:
  • inputs (dict[str, T], list[str], iterable, optional) – Input data values.
  • outputs (list[str], iterable, optional) – Ending data nodes.
  • cutoff (float, int, optional) – Depth to stop the search.
  • inputs_dist (dict[str, int | float], optional) – Initial distances of input data nodes.
  • wildcard (bool, optional) – If True, when the data node is used as input and target in the ArciDispatch algorithm, the input value will be used as input for the connected functions, but not as output.
  • no_call (bool, optional) – If True, data node estimation functions are not used and the input values are not used.
  • shrink (bool, optional) –

    If True the dispatcher is shrunk before the dispatch.

    See also

    shrink_dsp()

  • rm_unused_nds (bool, optional) – If True unused function and sub-dispatcher nodes are removed from workflow.
  • select_output_kw (dict, optional) – Kwargs of selector function to select specific outputs.
  • _wait_in (dict, optional) – Override wait inputs.
  • stopper (multiprocess.Event, optional) – A semaphore to abort the dispatching.
  • executor (str, optional) – A pool executor id to dispatch asynchronously or in parallel.
  • sol_name (tuple[str], optional) – Solution name.
Returns:

Dictionary of estimated data node outputs.

Return type:

schedula.utils.sol.Solution

——————————————————————–

Example:

A dispatcher with a function \(log(b - a)\) and two data a and b with default values:

Dispatch without inputs. The default values are used as inputs:

>>> outputs = dsp.dispatch()
>>> outputs
Solution([('a', 0), ('b', 5), ('d', 1), ('c', 0), ('e', 0.0)])

Dispatch until data node c is estimated:

>>> outputs = dsp.dispatch(outputs=['c'])
>>> outputs
Solution([('a', 0), ('b', 5), ('c', 0)])

Dispatch with one input. The default value of a is not used as input:

>>> outputs = dsp.dispatch(inputs={'a': 3})
>>> outputs
Solution([('a', 3), ('b', 5), ('d', 1), ('c', 3)])
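The interaction between default values and explicit inputs shown above can be summarized as a dict merge in which explicit inputs win (a sketch of the behavior, not schedula's code):

```python
defaults = {'a': 0, 'b': 5}

def effective_inputs(inputs):
    # Defaults fill in whatever is not given explicitly;
    # an explicit input overrides the node's default value.
    return {**defaults, **inputs}

print(effective_inputs({}))        # {'a': 0, 'b': 5}
print(effective_inputs({'a': 3}))  # {'a': 3, 'b': 5}
```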
shrink_dsp(inputs=None, outputs=None, cutoff=None, inputs_dist=None, wildcard=True)[source]

Returns a reduced dispatcher.

Parameters:
  • inputs (list[str], iterable, optional) – Input data nodes.
  • outputs (list[str], iterable, optional) – Ending data nodes.
  • cutoff (float, int, optional) – Depth to stop the search.
  • inputs_dist (dict[str, int | float], optional) – Initial distances of input data nodes.
  • wildcard (bool, optional) – If True, when the data node is used as input and target in the ArciDispatch algorithm, the input value will be used as input for the connected functions, but not as output.
Returns:

A sub-dispatcher.

Return type:

Dispatcher

See also

dispatch()

——————————————————————–

Example:

A dispatcher like this:

Get the sub-dispatcher induced by dispatching with no calls from inputs a, b, and c to outputs c, e, and f:

>>> shrink_dsp = dsp.shrink_dsp(inputs=['a', 'b', 'd'],
...                             outputs=['c', 'f'])