schedula: A smart function scheduler for dynamic flow-based programming¶
- release
1.3.6
- date
2022-11-21 09:00:00
- repository
- pypi-repo
- docs
- wiki
- download
- keywords
flow-based programming, dataflow, parallel, async, scheduling, dispatch, functional programming, dataflow programming
- developers
Vincenzo Arcidiacono <vincenzo.arcidiacono@ext.jrc.ec.europa.eu>
- license
About schedula¶
schedula is a dynamic flow-based programming environment for Python that automatically handles the control flow of the program. The control flow is generally represented by a Directed Acyclic Graph (DAG), where nodes are the operations/functions to be executed and edges are the dependencies between them.
The algorithm of schedula dates back to 2014, when a colleague asked for a method to automatically populate the missing data of a database. The imputation method chosen to complete the database was a system of interdependent physical formulas, i.e., the inputs of a formula are the outputs of other formulas. The current library was developed in 2015 to support the design of the CO2MPAS tool, a CO2 vehicle simulator. During the development phase, the physical formulas (more than 700) were known, while the software inputs and outputs were not.
Why schedula?¶
The design of flow-based programs begins with the definition of the control flow graph, and implicitly of its inputs and outputs. If the program accepts multiple combinations of inputs and outputs, you have to design and code all of the corresponding control flow graphs. With conventional schedulers, this can be very demanding.
With schedula, instead, given an arbitrary set of inputs, it automatically calculates any of the desired computable outputs, choosing the most appropriate DAG from the dataflow execution model.
Note
The DAG is determined at runtime and is extracted as the shortest path from the provided inputs. The path is calculated on a weighted directed graph (the dataflow execution model) with a modified Dijkstra algorithm.
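For example (a toy model of ours, not from the docs), when two functions can compute the same output, the lower-weight one wins the shortest-path selection:
>>> import schedula as sh
>>> d = sh.Dispatcher(name='weights_demo')
>>> fid = d.add_function('cheap', lambda x: 'via cheap', ['x'], ['y'])
>>> fid = d.add_function('expensive', lambda x: 'via expensive', ['x'], ['y'], weight=5)
>>> d(inputs={'x': 0})['y']  # 'expensive' is skipped: 'y' is already computed
'via cheap'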
schedula makes the code easy to debug, to optimize, and to present to a non-IT audience through its interactive graphs and charts. It provides the option to run a model asynchronously or in parallel, automatically managing the Global Interpreter Lock (GIL), and to convert a model into a web API service.
Dataflow Execution Model¶
The Dispatcher is the main model of schedula and it represents the dataflow execution model of your code. It is defined by a weighted directed graph. The nodes are the operations to be executed. The arcs between the nodes represent their dependencies. The weights are used to determine the control flow of your model (i.e., the operations’ invocation order).
Conceptually, when the model is executed, input data flows as tokens along the arcs. When the execution/dispatch() begins, a special node (START) places the data onto the key input arcs, triggering the computation of the control flow. The latter is represented by a Directed Acyclic Graph (DAG) and is defined as the shortest path from the provided inputs. It is computed using the weighted directed graph and a modified Dijkstra algorithm. A node is executed when its inputs and domain are satisfied. After the node execution, new data are placed on some or all of its output arcs. In the presence of cycles in the graph, to avoid undesired infinite loops, the nodes are computed only once. In case of an execution failure of a node, the algorithm automatically searches for an alternative path to compute the desired outputs. The nodes are differentiated according to their scope.
schedula defines three node types (illustrated in the sketch after this list):
data node: stores the data into the solution. By default, it is executable when it receives one input arc.
function node: invokes the user-defined function and places the results onto its output arcs. It is executable when all inputs are satisfied and it has at least one data output to be computed.
sub-dispatcher node: packages a particular dataflow execution model as a sub-component of the parent dispatcher. Practically, it creates a bridge between two dispatchers (parent and child) linking some data nodes. It allows you to simplify your model, reusing functionality defined in other models.
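A minimal sketch of the three node types (a toy model of ours; names are illustrative):
>>> import schedula as sh
>>> child = sh.Dispatcher(name='child')
>>> fid = child.add_function(function=max, inputs=['a', 'b'], outputs=['c'])
>>> parent = sh.Dispatcher(name='parent')
>>> did = parent.add_data(data_id='a', default_value=1)  # data node
>>> fid = parent.add_function(function=min, inputs=['a', 'b'], outputs=['lo'])  # function node
>>> sid = parent.add_dispatcher(child, inputs={'a': 'a', 'b': 'b'}, outputs={'c': 'c'})  # sub-dispatcher node
>>> parent(inputs={'b': 5})['c']  # the child bridges 'c' back to the parent: max(1, 5)
5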
The key advantage is that, by this method, the scheduling is not affected by the operations’ execution times. Therefore, it is deterministic and reproducible. Moreover, since it is based on flow-based programming, it inherits the ability to execute more than one operation at the same time, making the program executable in parallel. The following video shows an example of a runtime dispatch.
Installation¶
To install it use (with root privileges):
$ pip install schedula
or download the latest git version and use (with root privileges):
$ python setup.py install
Install extras¶
Some additional functionality is enabled by installing the following extras:
io: enables the read/write functions.
plot: enables the plot of the Dispatcher model and workflow (see plot()).
web: enables building a dispatcher Flask app (see web()).
sphinx: enables the sphinx extension directives (i.e., autosummary and dispatcher).
parallel: enables the parallel execution of the Dispatcher model.
To install schedula and all extras, do:
$ pip install 'schedula[all]'
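You can also install just a subset of the extras, e.g.:
$ pip install 'schedula[io,plot]'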
Note
The plot extra requires Graphviz. Make sure that the directory containing the dot executable is on your system’s path. If you do not have it, you can install it from its download page.
Tutorial¶
Let’s assume that we want to develop a tool to automatically manage symmetric cryptography. The basic idea is to open a file, read its content, encrypt or decrypt the data, and then write it out to a new file. This tutorial shows how to define the dataflow execution model, dispatch it, extract a sub-model, and deploy it as a web API service.
Note
You can find more examples on how to use the schedula library in the examples folder.
Model definition¶
First of all, we define an empty Dispatcher named symmetric_cryptography that represents the dataflow execution model:
>>> import schedula as sh
>>> dsp = sh.Dispatcher(name='symmetric_cryptography')
There are two main ways to get a key: we can either generate a new one or use one that has previously been generated. Hence, we can define three functions to generate, save, and load the key. To automatically populate the model inheriting the argument names, we can use the add_function() decorator as follows:
>>> import os.path as osp
>>> from cryptography.fernet import Fernet
>>> @sh.add_function(dsp, outputs=['key'], weight=2)
... def generate_key():
... return Fernet.generate_key().decode()
>>> @sh.add_function(dsp)
... def write_key(key_fpath, key):
... with open(key_fpath, 'w') as f:
... f.write(key)
>>> @sh.add_function(dsp, outputs=['key'], input_domain=osp.isfile)
... def read_key(key_fpath):
... with open(key_fpath) as f:
... return f.read()
Note
Since Python does not come with anything that can encrypt/decrypt files, in this tutorial we use a third-party module named cryptography. To install it, execute pip install cryptography.
To encrypt/decrypt a message, you need a key, as previously defined, and the data to be encrypted or decrypted. Therefore, we can define two functions and add them, as before, to the model:
>>> @sh.add_function(dsp, outputs=['encrypted'])
... def encrypt_message(key, decrypted):
... return Fernet(key.encode()).encrypt(decrypted.encode()).decode()
>>> @sh.add_function(dsp, outputs=['decrypted'])
... def decrypt_message(key, encrypted):
... return Fernet(key.encode()).decrypt(encrypted.encode()).decode()
Finally, to read and write the encrypted or decrypted message, following the functional programming philosophy, we can reuse the previously defined functions read_key and write_key, changing the model mapping (i.e., function_id, inputs, and outputs). To add them to the model, we can simply use the add_function method as follows:
>>> dsp.add_function(
...     function_id='read_decrypted',
...     function=read_key,
...     inputs=['decrypted_fpath'],
...     outputs=['decrypted'],
...     input_domain=osp.isfile
... )
'read_decrypted'
>>> dsp.add_function(
... 'read_encrypted', read_key, ['encrypted_fpath'], ['encrypted'],
... input_domain=osp.isfile
... )
'read_encrypted'
>>> dsp.add_function(
...     'write_decrypted', write_key, ['decrypted_fpath', 'decrypted']
... )
'write_decrypted'
>>> dsp.add_function(
... 'write_encrypted', write_key, ['encrypted_fpath', 'encrypted']
... )
'write_encrypted'
Note
For more details on how to create a Dispatcher see: add_data(), add_func(), add_function(), add_dispatcher(), SubDispatch, MapDispatch, SubDispatchFunction, SubDispatchPipe, and DispatchPipe.
To inspect and visualize the dataflow execution model, you can simply plot the graph as follows:
>>> dsp.plot()
Tip
You can explore the diagram by clicking on it.
Dispatching¶
To see the dataflow execution model in action and its workflow to generate a key, encrypt a message, and write the encrypted data, you can simply invoke the dispatch() or __call__() methods of dsp:
>>> import tempfile
>>> tempdir = tempfile.mkdtemp()
>>> message = "secret message"
>>> sol = dsp(inputs=dict(
... decrypted=message,
... encrypted_fpath=osp.join(tempdir, 'data.secret'),
... key_fpath=osp.join(tempdir,'key.key')
... ))
>>> sol.plot(index=True)
Note
As you can see from the workflow graph (orange nodes), when some function’s inputs do not respect its domain, the Dispatcher automatically finds an alternative path to estimate all computable outputs. The same logic applies when there is a function failure, as in the sketch below.
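The sketch below reproduces this fallback in isolation (a toy model of ours, not part of the tutorial); the Dispatcher’s default raises=False lets it log the failure and search for an alternative path:
>>> demo = sh.Dispatcher(name='fallback_demo')
>>> @sh.add_function(demo, outputs=['y'])
... def primary(x):
...     raise ValueError('primary failed')
>>> @sh.add_function(demo, outputs=['y'], weight=10)
... def backup(x):
...     return x + 1
>>> demo(inputs={'x': 1})['y']  # primary fails, so backup estimates y
2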
Now, to decrypt the data and verify the message without saving the decrypted message, you just need to execute dsp again, changing the inputs and setting the desired outputs. In this way, the dispatcher automatically selects and executes only a sub-part of the dataflow execution model.
>>> dsp(
... inputs=sh.selector(('encrypted_fpath', 'key_fpath'), sol),
... outputs=['decrypted']
... )['decrypted'] == message
True
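Here sh.selector is simply a utility that extracts the given keys from a dictionary-like object, e.g.:
>>> sh.selector(('a', 'b'), {'a': 1, 'b': 2, 'c': 3})
{'a': 1, 'b': 2}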
If you want to visualize the latest workflow of the dispatcher, you can use the plot() method with the keyword workflow=True:
>>> dsp.plot(workflow=True, index=True)
Sub-model extraction¶
A good security practice, when designing a lightweight web API service, is to avoid unregulated access to the system’s reading and writing features. Since our current dataflow execution model exposes these functionalities, we need to extract a sub-model without the read/write functions for key and message:
>>> api = dsp.get_sub_dsp((
... 'decrypt_message', 'encrypt_message', 'key', 'encrypted',
... 'decrypted', 'generate_key', sh.START
... ))
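As a quick sanity check (ours, not part of the original tutorial), the extracted model is self-contained and can run a full round trip in memory:
>>> s = api(inputs={'decrypted': 'test'})
>>> api(
...     inputs=sh.selector(('key', 'encrypted'), s), outputs=['decrypted']
... )['decrypted']
'test'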
Note
For more details on how to extract a sub-model see: shrink_dsp(), get_sub_dsp(), get_sub_dsp_from_workflow(), SubDispatch, MapDispatch, SubDispatchFunction, DispatchPipe, and SubDispatchPipe.
API server¶
Now that the api model is secure, we can deploy our web API service. schedula can automatically convert a Dispatcher into a web API service using the web() method. By default, it exposes the dispatch() method of the Dispatcher and maps all its functions and sub-dispatchers. Each of these APIs is commonly called an endpoint. You can launch the server with the code below:
>>> server = api.web().site(host='127.0.0.1', port=5000).run()
>>> url = server.url; url
'http://127.0.0.1:5000'
Note
When the server object is garbage collected, the server shuts down automatically. To force the server shutdown, use its server.shutdown() method.
Once the server is running, you can try out the encryption functionality by making a JSON POST request, specifying the args and kwargs of the dispatch() method, as follows:
>>> import requests
>>> res = requests.post(
... 'http://127.0.0.1:5000', json={'args': [{'decrypted': 'message'}]}
... ).json()
Note
By default, the server returns a JSON response containing the function results (i.e., 'return') or, in case of a server code failure, the 'error' message.
To validate the encrypted message, you can directly invoke the decryption function as follows:
>>> res = requests.post(
... '%s/symmetric_cryptography/decrypt_message?data=input,return' % url,
... json={'kwargs': sh.selector(('key', 'encrypted'), res['return'])}
... ).json(); sorted(res)
['input', 'return']
>>> res['return'] == 'message'
True
Note
The available endpoints are formatted like:
/ or /{dsp_name}: calls the dispatch() method,
/{dsp_name}/{function_id}: invokes the relative function.
There is an optional query param data=input,return, to include the inputs in the server JSON response and to exclude the possible error message.
Asynchronous and Parallel dispatching¶
When there are heavy calculations that take a significant amount of time, you may want to run your model asynchronously or in parallel. Generally, this is difficult to achieve, because it requires a higher level of abstraction and a deeper knowledge of Python programming and the Global Interpreter Lock (GIL). Schedula simplifies your life once more: it has four default executors to dispatch asynchronously or in parallel:
async: executes all functions asynchronously in the same process,
parallel: executes all functions in parallel, excluding SubDispatch functions,
parallel-pool: executes all functions in parallel using a process pool, excluding SubDispatch functions,
parallel-dispatch: executes all functions in parallel, including SubDispatch.
Note
Running functions asynchronously or in parallel has a cost: schedula will spend time creating/deleting new threads/processes.
The code below shows an example of time-consuming code that, when executed sequentially, requires at least 6 seconds to run. Note that the slow function returns the process id.
>>> import schedula as sh
>>> dsp = sh.Dispatcher()
>>> def slow():
... import os, time
... time.sleep(1)
... return os.getpid()
>>> for o in 'abcdef':
... dsp.add_function(function=slow, outputs=[o])
'...'
while using the async executor, it takes a bit more than 1 second:
>>> import time
>>> start = time.time()
>>> sol = dsp(executor='async').result() # Asynchronous execution.
>>> (time.time() - start) < 2 # Faster than sequential execution.
True
all functions have been executed asynchronously, but in the same process:
>>> import os
>>> pid = os.getpid() # Current process id.
>>> {sol[k] for k in 'abcdef'} == {pid} # Single process id.
True
if we use the parallel executor, all functions are executed in different processes:
>>> sol = dsp(executor='parallel').result() # Parallel execution.
>>> pids = {sol[k] for k in 'abcdef'} # Process ids returned by ``slow``.
>>> len(pids) == 6 # Each function returns a different process id.
True
>>> pid not in pids # The current process id is not in the returned pids.
True
>>> sorted(sh.shutdown_executors())
['async', 'parallel']
Contributing to schedula¶
If you want to contribute to schedula and make it better, your help is very welcome. Contributions should be sent as pull requests. The next sections explain how to implement and submit a new functionality:
clone the repository
implement a new functionality
open a pull request
Clone the repository¶
The first step to contribute to schedula is to clone the repository:
Create a personal fork of the schedula repository on Github.
Clone the fork on your local machine. Your remote repo on Github is called origin.
Add the original repository as a remote called upstream, to keep your fork updated.
If you created your fork a while ago, be sure to pull upstream changes into your local repository.
Create a new branch to work on! Branch from dev.
How to implement a new functionality¶
Test cases are very important. This library uses a data-driven testing approach. To implement a new function, I recommend the test-driven development cycle. Hence, when you think that the code is ready, add a new test in the test folder. When all test cases pass (python setup.py test), open a pull request.
Note
A pull request without a new test case will not be taken into consideration.
How to open a pull request¶
Well done! Your contribution is ready to be submitted:
Squash your commits into a single commit with git’s interactive rebase. Create a new branch if necessary. Always write your commit messages in the present tense. Your commit message should describe what the commit, when applied, does to the code, not what you did to the code.
Push your branch to your fork on Github (i.e., git push origin dev).
From your fork, open a pull request in the correct branch. Target the project’s dev branch!
Once the pull request is approved and merged, you can pull the changes from upstream to your local repo and delete your extra branch(es).
Donate¶
If you want to support the schedula development please donate.
API Reference¶
The core of the library is composed of the following modules. They contain a comprehensive list of all modules and classes within schedula; docstrings should provide sufficient understanding for any individual function.
Modules:
dispatcher: It provides Dispatcher class.
utils: It contains utility classes and functions.
ext: It provides sphinx extensions.
Changelog¶
v1.3.6 (2022-11-21)¶
Feat¶
(form): Add data saver and restore options + fix fullscreen + improve ScrollTop.
Fix¶
(form): Fix layout isEmpty.
v1.3.5 (2022-11-08)¶
Fix¶
(form): Correct data import in nav.
v1.3.4 (2022-11-07)¶
Feat¶
(form): Add fullscreen support.
(form): Add nunjucks support.
(form): Add react-reflex component.
(web): Add option to raise a WebResponse from a dispatch.
(form): Add CSRF protection.
v1.3.3 (2022-11-03)¶
Feat¶
(form): Add markdown.
(form): Avoid rendering elements with empty children.
(form): Add more options to accordion and stepper.
(form): Change position of error messages.
Fix¶
(rtd): Correct doc rendering.
(form): Correct plotting behaviour.
v1.3.2 (2022-10-24)¶
Feat¶
(drw, web, form): Add option to return a blueprint.
(form): Update bundle.
Fix¶
(form): Add extra missing package data.
v1.3.1 (2022-10-20)¶
Fix¶
(form): Add missing package data.
(ext): Correct documenter doctest import.
v1.3.0 (2022-10-19)¶
Feat¶
(form): Add new method form to create jsonschema react forms automatically.
(blue): Add option to limit the depth of sub-dispatch blue.
Fix¶
(sol): Correct default initialization for sub-dispatchers.
(setup): Ensure correct size of distribution pkg.
v1.2.19 (2022-07-06)¶
Feat¶
(dsp): Add new utility function run_model.
(dsp): Add output_type_kw option to SubDispatch utility.
(core): Add workflow when function is a dsp.
Fix¶
(blue): Add memo when call register by default.
v1.2.18 (2022-07-02)¶
Feat¶
(micropython): Update build for micropython==v1.19.1.
(sol): Improve speed performance.
(dsp): Make shrink optional for SubDispatchPipe.
(core): Improve performance dropping set instances.
v1.2.17 (2022-06-29)¶
Feat¶
(sol): Improve speed performances.
Fix¶
(sol): Correct missing reference due to sphinx update.
(dsp): Correct wrong workflow.pred reference.
v1.2.16 (2022-05-10)¶
Fix¶
(drw): Correct recursive plots.
(doc): Correct requirements.io link.
v1.2.15 (2022-04-12)¶
Feat¶
(sol): Improve performances of _see_remote_link_node.
(drw): Improve performances of site rendering.
v1.2.14 (2022-01-21)¶
Fix¶
(drw): Correct plot of DispatchPipe.
v1.2.13 (2022-01-13)¶
Feat¶
(doc): Update copyright.
(actions): Add fail-fast: false.
(setup): Add missing dev requirement.
Fix¶
(drw): Skip permission error in server cleanup.
(core): Correct import dependencies.
(doc): Correct link target.
v1.2.12 (2021-12-03)¶
Feat¶
(test): Add test cases improving coverage.
Fix¶
(drw): Correct graphviz _view attribute call.
(drw): Correct cleanup function.
v1.2.11 (2021-12-02)¶
Feat¶
(actions): Add test cases.
(test): Update test cases.
(drw): Make plot rendering parallel.
(asy): Add sync executor.
(dispatcher): Add auto inputs and outputs + prefix tags for add_dispatcher method.
(setup): Pin sphinx version.
Fix¶
(test): Remove windows long path test.
(test): Correct test cases for parallel.
(drw): Correct optional imports.
(doc): Remove sphinx warning.
(drw): Correct body format.
(asy): Correct atexit_register function.
(bin): Correct script.
v1.2.10 (2021-11-11)¶
Feat¶
(drw): Add custom style per node.
(drw): Make clean-up site optional.
(drw): Add force_plot option to data node to plot Solution results.
(drw): Update graphs colors.
Fix¶
(setup): Pin graphviz version <0.18.
(alg): Ensure str type of node_id.
(drw): Remove empty node if some node is available.
(drw): Add missing node type on js script.
(drw): Extend short name to sub-graphs.
v1.2.9 (2021-10-05)¶
Feat¶
(drw): Add option to reduce length of file names.
Fix¶
(setup): Correct supported python versions.
(doc): Correct typos.
v1.2.8 (2021-05-31)¶
Fix¶
(doc): Skip KeyError when searching descriptions.
v1.2.7 (2021-05-19)¶
Feat¶
(travis): Remove python 3.6 and add python 3.9 to the test matrix.
Fix¶
(sphinx): Add missing attribute.
(sphinx): Update option parser.
(doc): Update some documentation.
(test): Correct test case missing library.
v1.2.6 (2021-02-09)¶
Feat¶
(sol): Improve performances.
Fix¶
(des): Correct description error due to MapDispatch.
(drw): Correct index plotting.
v1.2.5 (2021-01-17)¶
Fix¶
(core): Update copyright.
(drw): Correct viz rendering.
v1.2.4 (2020-12-12)¶
Fix¶
(drw): Correct plot auto-opening.
v1.2.3 (2020-12-11)¶
Feat¶
(drw): Add plot option to use viz.js as back-end.
Fix¶
(setup): Add missing requirement requests.
v1.2.2 (2020-11-30)¶
Feat¶
(dsp): Add custom formatters for MapDispatch class.
v1.2.1 (2020-11-04)¶
Feat¶
(dsp): Add MapDispatch class.
(core): Add execution function log.
Fix¶
(rtd): Correct documentation rendering in rtd.
(autosumary): Correct bug for AutosummaryEntry.
v1.2.0 (2020-04-08)¶
Feat¶
(dispatcher): Avoid failure when functions do not have a name.
(ubuild): Add compiled and not compiled code.
(sol): Improve speed importing functions directly for heappop and heappush.
(dsp): Simplify repr of inf numbers.
(micropython): Pin specific MicroPython version v1.12.
(micropython): Add test using .mpy files.
(setup): Add MicroPython support.
(setup): Drop dill dependency and add io extra.
(github): Add pull request templates.
Fix¶
(test): Skip micropython tests.
(ext): Update code for sphinx 3.0.0.
(sphinx): Remove documentation warnings.
(utils): Drop unused pairwise function.
(dsp): Avoid fringe increment in SubDispatchPipe.
v1.1.1 (2020-03-12)¶
Feat¶
(github): Add issue templates.
(exc): Add base exception to DispatcherError.
(build): Update build script.
v1.1.0 (2020-03-05)¶
Feat¶
(core): Drop networkx dependency.
(core): Add ProcessPoolExecutor.
(asy): Add ExecutorFactory class.
(asy): Split asy module.
(core): Add support for python 3.8 and drop python 3.5.
(asy): Check if stopper is set when getting executor.
(asy): Add mp_context option in ProcessExecutor and ProcessPoolExecutor.
Fix¶
(alg): Correct pipe generation when NoSub found.
(asy): Remove un-useful and dangerous states before serialization.
(asy): Ensure wait of all executor futures.
(asy): Correct bug when future is set.
(asy): Correct init and shutdown of executors.
(sol): Correct raise exception order in sol.result.
(travis): Correct tests collector.
(test): Correct test for multiple async.
v1.0.0 (2020-01-02)¶
Feat¶
(doc): Add code of conduct.
(examples): Add new example + formatting.
(sol): New raises option: if raises='' no warning logs.
(web): Add query param data to include/exclude data into the server JSON response.
(sphinx): Update dispatcher documenter and directive.
(drw): Add wildcard rendering.
Fix¶
(test): Update test cases.
(dsp): Correct pipe extraction for wildcards.
(setup): Add missing drw files.
v0.3.7 (2019-12-06)¶
Feat¶
(drw): Update the index GUI of the plot.
(appveyor): Drop appveyor in favor of travis.
(travis): Update travis configuration file.
(plot): Add node link and id in graph plot.
Fix¶
(drw): Render dot in temp folder.
(plot): Add quiet arg to _view method.
(doc): Correct missing gh links.
(core) #17: Correct deprecated Graph attribute.
v0.3.6 (2019-10-18)¶
Fix¶
v0.3.4 (2019-07-15)¶
Feat¶
(binder): Add @jupyterlab/plotly-extension.
(binder): Customize Site._repr_html_ with env SCHEDULA_SITE_REPR_HTML.
(binder): Add jupyter-server-proxy.
(doc): Add binder examples.
(gen): Create super-class of Token.
(dsp): Improve error message.
Fix¶
(binder): Simplify processing_chain example.
(setup): Exclude binder and examples folders as packages.
(doc): Correct binder data.
(doc): Update examples for binder.
(doc): Add missing requirements binder.
(test): Add state to fake directive.
(import): Remove stub file to enable autocomplete.
Update to canonical pypi name of beautifulsoup4.
v0.3.3 (2019-04-02)¶
Feat¶
(dispatcher): Improve error message.
Fix¶
(doc): Correct bug for sphinx AutoDirective.
(dsp): Add dsp as kwargs for a new Blueprint.
(doc): Update PEP and copyright.
v0.3.2 (2019-02-23)¶
Feat¶
(core): Add stub file.
(sphinx): Add Blueprint in Dispatcher documenter.
(sphinx): Add BlueDispatcher in documenter.
(doc): Add examples.
(blue): Customizable memo registration of blueprints.
Fix¶
(sphinx): Correct bug when " is in csv-table directive.
(core): Set module attribute when __getattr__ is invoked.
(doc): Correct utils description.
(setup): Improve keywords.
(drw): Correct tooltip string format.
(version): Correct import.
v0.3.1 (2018-12-10)¶
Fix¶
(setup): Correct long description for pypi.
(dsp): Correct bug DispatchPipe when dill.
v0.3.0 (2018-12-08)¶
Feat¶
(blue, dispatcher): Add method extend to extend Dispatcher or Blueprint with Dispatchers or Blueprints.
(blue, dsp): Add BlueDispatcher class + remove DFun util.
(core): Remove weight attribute from Dispatcher struc.
(dispatcher): Add method add_func to Dispatcher.
(core): Remove remote_links attribute from dispatcher data nodes.
(core): Implement callable raise option in Dispatcher.
(core): Add feature to dispatch asynchronously and in parallel.
(setup): Add python 3.7.
(dsp): Use the same dsp.solution class in SubDispatch functions.
Fix¶
(dsp): Do not copy solution when call DispatchPipe, but reset solution when copying the obj.
(alg): Correct and clean get_sub_dsp_from_workflow algorithm.
(sol): Ensure bool output from input_domain call.
(dsp): Parse arg and kw using SubDispatchFunction.__signature__.
(core): Do not support python 3.4.
(asy): Do not dill the Dispatcher solution.
(dispatcher): Correct bug in removing remote links.
(core): Simplify and correct Exception handling.
(dsp): Postpone __signature__ evaluation in add_args.
(gen): Make Token constant when pickled.
(sol): Move callback invocation in _evaluate_node.
(core) #11: Lazy import of modules.
(sphinx): Remove warnings.
(dsp): Add missing code option in add_function decorator.
Other¶
Refact: Update documentation.
v0.2.8 (2018-10-09)¶
Feat¶
(dsp): Add inf class to model infinite numbers.
v0.2.7 (2018-09-13)¶
Fix¶
(setup): Correct bug when long_description fails.
v0.2.6 (2018-09-13)¶
Feat¶
(setup): Patch to use sphinxcontrib.restbuilder in setup long_description.
v0.2.5 (2018-09-13)¶
Fix¶
(doc): Correct link docs_status.
(setup): Use text instead rst to compile long_description + add logging.
v0.2.4 (2018-09-13)¶
Fix¶
(sphinx): Correct bug sphinx==1.8.0.
(sphinx): Remove all sphinx warnings.
v0.2.3 (2018-08-02)¶
Fix¶
(des): Correct bug when SubDispatchFunction have no outputs.
v0.2.2 (2018-08-02)¶
Fix¶
(des): Correct bug of get_id when tuple ids nodes are given as input or outputs of a sub_dsp.
(des): Correct bug when tuple ids are given as inputs or outputs of add_dispatcher method.
v0.2.1 (2018-07-24)¶
Feat¶
(setup): Update Development Status to 5 - Production/Stable.
(setup): Add additional project_urls.
(doc): Add changelog to rtd.
Fix¶
(doc): Correct link docs_status.
(des): Correct bugs get_des.
v0.2.0 (2018-07-19)¶
Feat¶
(doc): Add changelog.
(travis): Test extras.
(des): Avoid using sphinx for getargspec.
(setup): Add extras_require to setup file.
Fix¶
(setup): Correct bug in get_long_description.
v0.1.19 (2018-06-05)¶
Fix¶
(dsp): Add missing content block in note directive.
(drw): Make sure to plot same sol as function and as node.
(drw): Correct format of started attribute.
v0.1.18 (2018-05-28)¶
Feat¶
(dsp): Add DispatchPipe class (faster pipe execution, it overwrite the existing solution).
(core): Improve performances replacing datetime.today() with time.time().
v0.1.17 (2018-05-18)¶
Feat¶
(travis): Run coveralls in python 3.6.
Fix¶
(web): Skip Flask logging for the doctest.
(ext.dispatcher): Update to the latest Sphinx 1.7.4.
(des): Use the proper dependency (i.e., sphinx.util.inspect) for getargspec.
(drw): Set socket option to reuse the address (host:port).
(setup): Correct dill requirements dill>=0.2.7.1 --> dill!=0.2.7.
v0.1.16 (2017-09-26)¶
Fix¶
(requirements): Update dill requirements.
v0.1.15 (2017-09-26)¶
Fix¶
(networkx): Update according to networkx 2.0.
v0.1.14 (2017-07-11)¶
Fix¶
(io): pin dill version <=0.2.6.
(abort): abort was setting Exception.args instead of sol attribute.
Other¶
Merge pull request #9 from ankostis/fixabortex.
v0.1.13 (2017-06-26)¶
Feat¶
(appveyor): Add python 3.6.
Fix¶
(install): Force update setuptools>=36.0.1.
(exc): Do not catch KeyboardInterrupt exception.
(doc) #7: Catch exception for sphinx 1.6.2 (listeners are moved in EventManager).
(test): Skip empty error message.
v0.1.12 (2017-05-04)¶
Fix¶
(drw): Catch dot error and log it.
v0.1.11 (2017-05-04)¶
Feat¶
Fix¶
(doc): Replace type function with callable.
(drw): Folder name without ext.
(test): Avoid Documentation of DspPlot.
(doc): fix docstrings types.
v0.1.10 (2017-04-03)¶
Feat¶
(sol): Close sub-dispatcher solution when all outputs are satisfied.
Fix¶
(drw): Log error when dot is not able to render a graph.
v0.1.9 (2017-02-09)¶
Fix¶
(appveyor): Setup of lmxl.
(drw): Update plot index.
v0.1.8 (2017-02-09)¶
Feat¶
(drw): Update plot index + function code highlight + correct plot outputs.
v0.1.7 (2017-02-08)¶
Fix¶
(setup): Add missing package_data.
v0.1.6 (2017-02-08)¶
Fix¶
(setup): Avoid setup failure due to get_long_description.
(drw): Avoid to plot unneeded weight edges.
(dispatcher): get_sub_dsp_from_workflow set correctly the remote links.
v0.1.5 (2017-02-06)¶
Feat¶
(exl): Drop exl module because of formulas.
(sol): Add input value of filters in solution.
Fix¶
(drw): Plot just one time the filer attribute in workflow +filers|solution_filters .
v0.1.4 (2017-01-31)¶
Feat¶
(drw): Save autoplot output.
(sol): Add filters and function solutions to the workflow nodes.
(drw): Add filters to the plot node.
Fix¶
(dispatcher): Add missing function data inputs edge representation.
(sol): Correct value when apply filters on setting the node output.
(core): get_sub_dsp_from_workflow blockers can be applied to the sources.
v0.1.3 (2017-01-29)¶
Fix¶
(dsp): Raise a DispatcherError when the pipe workflow is not respected instead KeyError.
(dsp): Unresolved references.
v0.1.2 (2017-01-28)¶
Feat¶
(dsp): add_args _set_doc.
(dsp): Remove parse_args class.
(readme): Appveyor badge status == master.
(dsp): Add _format option to get_unused_node_id.
(dsp): Add wildcard option to SubDispatchFunction and SubDispatchPipe.
(drw): Create sub-package drw.
Fix¶
(dsp): combine nested dicts with different length.
(dsp): are_in_nested_dicts return false if nested_dict is not a dict.
(sol): Remove defaults when setting wildcards.
(drw): Misspelling outpus --> outputs.
(directive): Add exception on graphviz patch for sphinx 1.3.5.
v0.1.1 (2017-01-21)¶
Fix¶
(site): Fix ResourceWarning: unclosed socket.
(setup): Not log sphinx warnings for long_description.
(travis): Wait util the server is up.
(rtd): Missing requirement dill.
(travis): Install first - pip install -r dev-requirements.txt.
(directive): Tagname from _img to img.
(directive): Update minimum sphinx version.
(readme): Badge svg links.
Other¶
Add project descriptions.
(directive): Rename schedula.ext.dsp_directive --> schedula.ext.dispatcher.
Update minimum sphinx version and requests.