Commit 810e3e03 authored by Giulio

Proposal for streamlining GUI-Pd communication

parent d9371eef
@@ -409,3 +409,26 @@ account by careful profiling.
Languages: C for the profiling business logic, HTML5 for displaying the
results in the GUI.
Streamlining Purr Data GUI-Pd communication
------------------------------------------
The Pd GUI is heavily entangled with the Pd audio backend. In fact, most of the "gestures" performed on the GUI are passed straight to the Pd engine for processing. The GUI gestures are then "analyzed" by the audio thread, which may respond by triggering a GUI action, changing the state of an object, or doing nothing at all.
For instance, each mouse move triggers a `motion` message to the Pd backend, handled by `canvas_motion()` in `g_editor.c`. This calls `canvas_doclick()` with `doit = 0`, which in turn iterates through all the objects on the patch and asks each of them "does the cursor happen to be on top of you?" (`canvas_findhitbox()`/`canvas_hitbox()`), calling a callback function (`w_getrectfn()`) for each of those objects.
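A minimal sketch of the per-object scan this implies is shown below. It is a paraphrase for illustration (using the public `gobj_getrect()` wrapper around `w_getrectfn()`), not the actual Purr Data code path:

```c
/* Illustration only, not the actual Purr Data code: on every mouse
   move, each object on the canvas is asked for its bounding box via
   its w_getrectfn() (here through gobj_getrect()) just to find out
   whether the cursor is over it. */
#include "m_pd.h"
#include "g_canvas.h"

static t_gobj *sketch_find_hit(t_canvas *x, int xpos, int ypos)
{
    t_gobj *y;
    int x1, y1, x2, y2;
    for (y = x->gl_list; y; y = y->g_next)   /* every object, every time */
    {
        gobj_getrect(y, x, &x1, &y1, &x2, &y2);
        if (xpos >= x1 && xpos <= x2 && ypos >= y1 && ypos <= y2)
            return y;                        /* cursor is over this object */
    }
    return 0;                                /* cursor is over empty canvas */
}
```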
Now, most of the time the cursor is not on an object (or patch cable) and the calls to `w_getrectfn()` have no effect, except for wasting CPU power. There are two notable exceptions:
a) when the mouse pointer is on top of an object, one of its inlets or outlets, a patch cord, or a GUI object, the cursor shape may change and other visual feedback may be triggered (e.g. highlighting of inlets/outlets).
b) some objects use the calls to `w_getrectfn()` to track the mouse position (e.g. [mousestate] from cyclone).
The above wastes a great deal of CPU time, which may cause audio dropouts when using small blocksizes and/or embedded platforms. Besides - and perhaps most importantly - it seems like the wrong approach that GUI-specific actions (like those in point a) above) have to be processed and validated by the audio engine, within the audio thread.
We could therefore think of an improvement to the Purr Data architecture where the GUI-only work (e.g. point a) above) is delegated entirely to the GUI, which makes for lower CPU usage and potentially a more responsive GUI. For instance, the GUI could be designed to only send `motion` messages when the mouse is on top of an object, and to send along with them the Pd "tag" of the object, so that `w_getrectfn()` can be called only for the relevant object.
The optimal approach would involve handling all the graphical effects (inlet/outlet animation, mouse pointer changes) directly within the GUI, and only sending `motion` messages when something relevant to the Pd engine is _actually_ happening (e.g. when connecting objects).
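A hypothetical sketch of what the engine side of such a tagged `motion` message could look like follows. `sketch_object_from_tag()` is an assumed helper (Purr Data would need a real way to map GUI tags back to `t_gobj` pointers), and none of these names exist in the current code:

```c
/* Hypothetical sketch: if the GUI sends "motion" only when the cursor
   is over an object, together with that object's tag, the engine can
   resolve the one object and query only its bounding box instead of
   scanning the whole glist. */
#include "m_pd.h"
#include "g_canvas.h"

/* assumed helper, not part of Pd or Purr Data */
t_gobj *sketch_object_from_tag(t_canvas *cv, t_symbol *tag);

static void sketch_tagged_motion(t_canvas *cv, t_symbol *tag,
    int xpos, int ypos)
{
    int x1, y1, x2, y2;
    t_gobj *y = sketch_object_from_tag(cv, tag);
    if (!y)
        return;
    /* one w_getrectfn() call instead of one per object on the canvas */
    gobj_getrect(y, cv, &x1, &y1, &x2, &y2);
    /* ...then update edit-mode state, cursor, inlet/outlet highlight
       requests, etc., for this object only */
    (void)x1; (void)y1; (void)x2; (void)y2; (void)xpos; (void)ypos;
}
```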
Additionally, and looking forward, in order to address point b), objects that need to track mouse position should declare this at initialization and should be kept in a dedicated list, so that the `motion` messages from the GUI can be delivered only to them with minimal CPU waste.
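One possible shape for such a registration list is sketched below; all names are invented for illustration and nothing like this exists in Pd or Purr Data yet:

```c
/* Hypothetical registration list for objects that want raw mouse
   positions (point b). */
#include "m_pd.h"

typedef void (*t_sketch_motionfn)(void *owner, t_float xpos, t_float ypos);

typedef struct _sketch_tracker
{
    void *tr_owner;                 /* e.g. a [mousestate] instance */
    t_sketch_motionfn tr_fn;        /* callback into that object */
    struct _sketch_tracker *tr_next;
} t_sketch_tracker;

static t_sketch_tracker *sketch_trackers;

/* an object declares interest in its new() routine... */
void sketch_tracker_register(void *owner, t_sketch_motionfn fn)
{
    t_sketch_tracker *t = (t_sketch_tracker *)getbytes(sizeof(*t));
    t->tr_owner = owner;
    t->tr_fn = fn;
    t->tr_next = sketch_trackers;
    sketch_trackers = t;
}

/* ...removes itself in its free() routine... */
void sketch_tracker_unregister(void *owner)
{
    t_sketch_tracker **tp = &sketch_trackers, *t;
    while ((t = *tp))
    {
        if (t->tr_owner == owner)
        {
            *tp = t->tr_next;
            freebytes(t, sizeof(*t));
            return;
        }
        tp = &t->tr_next;
    }
}

/* ...and only the registered objects see each GUI "motion" message */
void sketch_tracker_dispatch(t_float xpos, t_float ypos)
{
    t_sketch_tracker *t;
    for (t = sketch_trackers; t; t = t->tr_next)
        t->tr_fn(t->tr_owner, xpos, ypos);
}
```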
An alternative - and probably worse - approach to the problem, which could reduce peak CPU usage, would be for the Pd audio engine to maintain a "rasterized" cached map of the patch (e.g. by calling `w_getrectfn()` for each object and filling in the pixels covered by its bounding box). This way, it could simply look up the cached map in response to each `motion` message. The cache could be recomputed in a separate thread every time a new object or patch cord is created. Threading issues may arise here, in case one of the objects is deleted while the cached map is being built.
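A rough sketch of what such a cache could look like follows; again the names and layout are purely illustrative, allocation is omitted, and the lifetime issues mentioned above (an object being deleted while the map is rebuilt on another thread) are not addressed here:

```c
/* Illustrative per-pixel map from canvas coordinates to the object
   occupying that pixel. */
#include <string.h>
#include "m_pd.h"
#include "g_canvas.h"

typedef struct _sketch_hitmap
{
    int hm_width, hm_height;
    t_gobj **hm_cells;              /* hm_width * hm_height entries */
} t_sketch_hitmap;

/* rebuild after an object or connection is added; in the proposal
   this would run on a separate thread */
static void sketch_hitmap_rebuild(t_sketch_hitmap *m, t_canvas *cv)
{
    t_gobj *y;
    int x1, y1, x2, y2, i, j;
    memset(m->hm_cells, 0,
        (size_t)m->hm_width * m->hm_height * sizeof(t_gobj *));
    for (y = cv->gl_list; y; y = y->g_next)
    {
        gobj_getrect(y, cv, &x1, &y1, &x2, &y2);
        for (j = y1; j < y2 && j < m->hm_height; j++)
            for (i = x1; i < x2 && i < m->hm_width; i++)
                if (i >= 0 && j >= 0)
                    m->hm_cells[j * m->hm_width + i] = y;
    }
}

/* constant-time lookup in response to each "motion" message */
static t_gobj *sketch_hitmap_lookup(t_sketch_hitmap *m, int xpos, int ypos)
{
    if (xpos < 0 || ypos < 0 || xpos >= m->hm_width || ypos >= m->hm_height)
        return 0;
    return m->hm_cells[ypos * m->hm_width + xpos];
}
```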
This project comes with a number of challenges, including potential threading issues between the engine and the GUI, the need to rewrite the C code of some objects, providing complete documentation for authors of externals, and maintaining - where possible (e.g. excluding objects that track the mouse position) - backwards compatibility with Pd.
More details on a previous attempt at addressing the problem can be found [here](http://disis.music.vt.edu/pipermail/l2ork-dev/2017-June/001383.html).
  • Looks good. A few points:

    • If threading can be completely avoided, it should. For example-- if we can get away with just offloading more of the work to the GUI to avoid the wasted audio process cycles, that will be way easier to reason about than implementing a caching system in a separate thread of the audio process.

    • There may be some hidden edge cases to consider when moving logic out of the audio process. For example, the "hold" attribute of [bng] will trigger its animation at a moment that is synced perfectly with any other clock callbacks in Pd. If we move the hold animation to the GUI, it may be off by a few milliseconds because it is no longer timed by Pd but by the JavaScript engine.
