Garo Garabedyan's Divergent Thinking Blog

Data Flow Processing. eventBased Algorithms and Data


I have updated the paper presented below. You can now read it in Bulgarian, too.

In OpenDocument Format (*.ODT):

In Bulgarian (BG): bg_paper_data_flow_processing_eventbased_algori

In English (ENG): en_paper_data_flow_processing_eventbased_algori


10 November 2008

Science holds that every system, at every particular moment in time, can be described by a discrete set of data variables. Applying this set of variables to an abstract model of the system is expected to produce identical output every time the application is performed. Let us apply this model of scientific thinking to a computer application.

eventBased Algorithms and Data (odt) (pdf). Main idea

Every program works by (dynamically) wiring atoms which receive input data and produce commands, data, or both as output. Finding and separating as many independent atoms as possible, and wiring them on an abstract framework (all input and output is manipulated through the framework), is an extremely efficient practice for reducing computation to only what is needed: separate the software model into algorithm atoms, and separate the data source(s) and the data they contain into data atoms.

A common application architecture is shown below.
[Figure: A Common Application]

My proposal for a program written on a new, abstract framework (scenario):
Write code that implements the business logic, but not code for every possible change that may occur at run time; implement only the pure theoretical relations between the input and output of every atom. The abstract framework encapsulates all the atoms together with their inputs and outputs, and decides which atom(s) to call when some program input (output of the outside world) changes, and which to call when some atoms' (inside components') output values (data and/or commands) change. These calls are recursive and can produce a wave of calls in different directions across the wired atoms. In this way the abstract framework and the atoms implement at run time the theoretical knowledge about the relations between the input and output of every atom and the outside world, something developers otherwise do by writing code for every possible case (every event that may occur).

When an event is received, only the atom related to it is called and its result (output data and commands) is computed. If the result differs from the situation before the event was received, the atom(s) connected to the newly computed result are called, and so on recursively, until there are no more atoms connected to any changed result, or until the result(s) no longer differ from their previous state (the computation did not change the output data).
The biggest gain, in my view, is that a big computation does not need to start every time something small changes; instead, we only find which atomic computations have to be executed in order to react correctly to the change.
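The propagation rule above can be sketched in a few lines of code. This is only an illustrative model, not the author's framework; all class and variable names here are invented for the example. The key point is the early return: a recomputation whose output equals the previous value stops the wave of calls.

```python
class Framework:
    """Wires atoms together and propagates changes between them."""

    def __init__(self):
        self.values = {}     # current value of every named output
        self.interests = {}  # output name -> atoms interested in it

    def wire(self, atom, inputs, output):
        """Register an atom's pure relation between named inputs and an output."""
        atom.inputs, atom.output = inputs, output
        for name in inputs:
            self.interests.setdefault(name, []).append(atom)

    def publish(self, name, value):
        """Called both for external events and for recomputed atom outputs."""
        if self.values.get(name) == value:
            return  # no change: the wave of calls stops here
        self.values[name] = value
        for atom in self.interests.get(name, []):
            args = [self.values.get(n) for n in atom.inputs]
            self.publish(atom.output, atom.compute(*args))


class Sum:
    """An algorithm atom: the pure theoretical relation between inputs and output."""
    def compute(self, a, b):
        return (a or 0) + (b or 0)


fw = Framework()
fw.wire(Sum(), ["x", "y"], "x_plus_y")
fw.publish("x", 2)  # only the Sum atom recomputes
fw.publish("y", 3)  # x_plus_y becomes 5
fw.publish("y", 3)  # same value again: nothing recomputes
```

Notice that nothing outside the framework decides what to recompute; the wiring alone determines which atoms react to which change.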

Event-driven design forces developers to write code for every possible event. The problem is that some applications receive not only GUI events, but many more events from sources whose number is not within the developers' control. It is easier to implement the model itself than to cut the whole model into event reactors; the latter is hard to upgrade and looks more like structure-oriented programming than object-oriented.

Brief presentation of the practical implementation of this idea:

Boxes (algorithm atoms) and circles (event triggers) are connected with lines. Some lines have arrows showing how the interest is oriented; when there is no arrow, the interest goes in both directions. When some output result of any atom/trigger changes (as the result of a previous computation or of an event), all interested atoms are called to recompute their output results, because those results are related to the change. This is repeated until no atom is interested in a changed result, or until the change no longer produces a change in the results of the interested atoms after recomputation.

A complicated software system of this kind is more domain-driven design than event-driven. Both kinds of figures hold computed and saved data in themselves. When some data (in an atom or a trigger) changes, and it is within the interests of an algorithm atom, the interested atom recomputes its result according to the change in the first one. Triggers are not interested in anything inside the program; they only represent a characteristic of a system outside the application in which the trigger is placed.
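The one-way role of a trigger could look like this (an illustrative sketch; the names `Trigger`, `fire`, and `hull_temperature` are invented for the example, not taken from the paper). A trigger never reads program state; it only injects the externally observed value, and the framework decides whether that counts as a change.

```python
class Trigger:
    """Represents a characteristic of a system outside the application.
    It is not interested in anything inside the program; it only
    pushes the externally observed value into the framework."""
    def __init__(self, framework, name):
        self.framework, self.name = framework, name

    def fire(self, observed_value):
        self.framework.on_change(self.name, observed_value)


class Framework:
    """Minimal stand-in: records which named values actually changed."""
    def __init__(self):
        self.values = {}
        self.changed = []

    def on_change(self, name, value):
        if self.values.get(name) != value:
            self.values[name] = value
            self.changed.append(name)  # interested atoms would be called here


fw = Framework()
temperature = Trigger(fw, "hull_temperature")
temperature.fire(21.5)
temperature.fire(21.5)  # same reading: not recorded as a change
temperature.fire(22.0)
```

Repeating the same reading produces no work at all, which is exactly the filtering the figures describe.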

Application areas:

A human user, from the point of view of a computer application, is a source of asynchronous commands. The user is free to choose the time at which to invoke a program's functionality.

In data-flow applications, where the application controls some systems/processes and the software must react quickly to changes occurring in them. A good example is a satellite, which has to react to a lot of possible events while flying in space. An application designed to react to them can use this approach and make its code more upgradeable and easier to change for future projects.
In big SOA services which perform approximately one calculation for each request. Example:

Computer + User
Not in XP, where developers model user stories, but in applications which care about more circumstances, which are not synchronous. (*)
Mainly in content-editing products, where the user is treated as the creator and applier of all the algorithms, and the content in general is the output or input. The rest of the computer-aided products have at their core some techniques for tracing mistakes, errors, and so on in users' projects. The projects are computed from the beginning again and again every time something changes. The only thing to be gained in this kind of software is the time spent recomputing the whole project when a change is made not to the whole project but to a piece of it (when the object of the software's work and the nature of the data allow atomization, so that a conclusion can be drawn about which atom's data has changed and which has not).

(*) eXtreme Programming teaches developers to write code according to the possible user stories. The user on his own is one source of commands, and a synchronous one: the user finishes composing a command and chooses to execute it, the computer starts executing it, and when it finishes, the user is informed of the results. It is not common practice for the user to pause a calculation at the moment of its execution, only to abort it.
In an SOA architecture we have one initiator of activity (computation), and it is again synchronous. Even though we do not know when a request is going to be received, the execution of that request is atomic and cannot be aborted or customized while it is being executed.

Example of a computer application of eventBased
An example of Computer + User is presented in the eventBased Content Editor.

An example of a real-life system which could implement the described eventBased architecture is a space satellite. This is a big embedded system doing many things concurrently, which needs information to be known within a small amount of time.
Space satellites are embedded systems which have, within a certain period of time, to find the most proper solution to an asynchronously occurring problem. If the solution is not proper, its implementation can spend resources needed for a different operation; if the solution is not found within the required period of time, the entire satellite may cease to exist.
Finding the best solution as a composition of commands (steps) is done using a decision tree, data structures like those used in chess-playing applications, built by recursively calculating each possible command (chess move) and the consequences of applying it to the current situation. The maximal period of time for choosing a solution and implementing it may change during the search and implementation of the chosen solution; this period depends on the behaviour of outside (external) variables, which is asynchronous to the system.
In order to enable the fast creation of this tree, fast computation of resource spending is required, whose volume and quality matter to the outside world.
The author imagines single modules detecting outside changes; on every such change, the value of the physical variable is computed, a variable which may be important to decision making in the future. In this blog the author presents a way of enabling these data updates along with the parallel addition of new modules and of relationships between variables (formulas).

Theoretical Conclusion
Consider the MVC pattern and the Observer pattern between the View and the Model: the observer pattern inverts the control and makes the interested part care about the data it is interested in. By analogy with this post, we can say that if the function (procedure) is the atom of every (command-based) program, then using this data-flow approach, a function should not call any other function and send it an array of variables; it should just change the variables which the other function is interested in.
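Read as inversion of control, this could look like the following sketch (all names here, such as `ObservedVariable` and `interest`, are invented for illustration). Instead of one function calling another with an array of variables, a function only sets a variable, and functions that have registered interest in it are invoked by the variable itself.

```python
class ObservedVariable:
    """A variable that calls interested functions when it changes,
    inverting the usual caller/callee control flow."""
    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def interest(self, func):
        """Register a function interested in this variable's changes."""
        self._observers.append(func)

    def set(self, value):
        if value == self._value:
            return  # unchanged data produces no calls at all
        self._value = value
        for func in self._observers:
            func(value)


celsius = ObservedVariable()
log = []
# A converter function expresses interest instead of being called directly.
celsius.interest(lambda c: log.append(c * 9 / 5 + 32))
celsius.set(100)  # the converter runs and records 212.0
celsius.set(100)  # no change, so the converter is not called again
```

The converter is never called by name anywhere in the program; the data change alone drives it, which is the inversion the conclusion describes.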

This data-flow approach applied to every function is not generally applicable, because of memory-efficiency problems. This is the reason why I distinguish between data changes and commands, and why I speak about an abstract framework rather than about a new programming language. But all in all, I think I have given another definition of data-flow programming using inversion-of-control terminology.

Practical Conclusion
I hope this will open a door to using data-flow and command-like program development in parallel, with more observer patterns in the business logic: declarative programming plus command programming.


Written by garabedyan

March 4, 2008 at 10:59

