On MetaControl

The notion of MetaControl occurred to me while I was collaborating on the modality toolkit, a software library based on the idea of highly modal interfaces/instruments, i.e. setups where a small number of physical interfaces can access and control a variety of processes in a great variety of ways.
Like much other software, modality makes ‘one-to-one mapping’, where one interface element (such as a slider) controls one process parameter (such as speed), very easy, thus privileging it over its alternatives. I wondered what a polar opposite would be like, and sketched out the Influx class: here, the default mapping is that M continuous interface elements affect N continuous process parameters by freely definable sets of weights.
When these weights are random (as they are by default), the physical control elements become remarkably powerful: moving a single control element traverses the parameter space along a multidimensional diagonal, changing every process parameter to some degree; moving a different control does the same along a very different axis. These experiments led to a deep insight (for me at least): not knowing the technical details of a mapping allows performers to concentrate fully on the experience of playing, learning to navigate intuitively in the parameter space of the process. (By comparison, having one controller per parameter places cognitive load on the performer, and allows interventions that may appear simplistic: changing only, say, the speed of the process and nothing else lets the audience hear a ‘fader move’.)
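Influx itself is a SuperCollider class; as a language-neutral illustration of the underlying idea only, here is a minimal Python sketch (all names hypothetical, not the actual Influx API) of M controls driving N parameters through a random weight matrix:

```python
import random

class InfluxSketch:
    """Hypothetical illustration of the Influx idea, not the real
    SuperCollider API: M controls affect N parameters through a
    freely definable (here random) weight matrix."""

    def __init__(self, num_controls, num_params, seed=None):
        rng = random.Random(seed)
        # One weight in [-1, 1] per (parameter, control) pair.
        self.weights = [
            [rng.uniform(-1, 1) for _ in range(num_controls)]
            for _ in range(num_params)
        ]

    def map(self, controls):
        # Each parameter is a weighted sum of all control values,
        # so moving any single control shifts every parameter.
        return [
            sum(w * c for w, c in zip(row, controls))
            for row in self.weights
        ]

# Two joystick axes steering eight process parameters at once:
influx = InfluxSketch(num_controls=2, num_params=8, seed=42)
params = influx.map([0.5, -0.25])
```

Moving only the first control (e.g. `influx.map([0.6, -0.25])`) changes all eight outputs at once: this is the ‘multidimensional diagonal’ described above.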
These first experiments quickly suggested combinations with existing strategies, such as storing snapshots of parameter states as presets, and controller movements as control loops, both as material the evolving performance can refer back to in a multitude of ways.
Finally, following the notion of ‘influence’, a perspective that fascinates me is control polyphony, where several (human or machine) sources influence one running process, and several processes receive influence from one or several sources. This approach creates shared networks of influence that jointly produce a sound world which can genuinely surprise both the audience and the players themselves. As I tried to summarize it in (de Campo 2014), the concept of ‘Lose Control, Gain Influence’ (LCGI) is about ‘gracefully relinquishing full control of the processes involved, in order to gain higher-order forms [of] influence on their behavior.’
The software elements involved (Influx, NdefPreset, TdefPreset, and KtlLoop) have been used extensively by the Trio and many others in the context of the Generative Arts Class at UdK Berlin, and further extensions (such as Nudge and NudgeGroups) continue to widen the range of MetaControl possibilities.
My first piece that explores fundamental MetaControl notions is MetaControl Study No. 1.

A recording of a performance of it in Bergen, 2014:

The verbal score:

MetaControl Study No. 1, Alberto de Campo (Oct 2014, rephrased May 2016)
[technically based on Hannes Hoelzl's ha_influx setup]

Score for improvised exploratory performance

* Prepare four contrasting complex sound processes to play
 [ For the show at NK Oct 2014, the four sounds prepared were:
 \n1_bleepo, \n2_revFB, \n3_robTr, \n4_dog.]

* Prepare a controller for playing each process via Influx
 [ Recommended: Thrustmaster Ferrari Gamepad controller with two joysticks ]

* Prepare simple volume control for the four sounds

A.
    - Play sound 1 by exploring its range with Influx;
    - keep interesting states by storing them as presets.
    - Optionally put such a new preset in the center of the control space,
    and continue playing by exploring the space around it.
    - Ad lib, jump or step to already stored presets;
    - ad lib, jump to new random presets by using random seeds.
    - When ready, put this layer on its prepared 'autopilot task':
    - The task keeps morphing between randomly chosen presets,
    and optionally recedes in volume gradually with every morph
    in order to leave foreground space for the next layer.
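The ‘autopilot task’ above can be reduced to a short sketch (a hypothetical Python rendering for illustration, not the actual setup code; presets are represented as plain lists of parameter values):

```python
import random

def autopilot(presets, steps_per_morph=4, recede=0.9, num_morphs=2, seed=0):
    """Sketch of the 'autopilot task': keep morphing between randomly
    chosen stored presets, receding in volume with every morph to
    leave foreground space for the next layer."""
    rng = random.Random(seed)
    current = list(rng.choice(presets))
    volume = 1.0
    for _ in range(num_morphs):
        target = rng.choice(presets)
        for step in range(1, steps_per_morph + 1):
            t = step / steps_per_morph
            # Linear interpolation from the current state to the target.
            yield [a + t * (b - a) for a, b in zip(current, target)], volume
        current = list(target)
        volume *= recede  # gradually recede with every morph

presets = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
frames = list(autopilot(presets))
```

Each yielded frame is a parameter state plus the layer's current volume; the volume shrinks by the `recede` factor after every completed morph.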

B. REPEAT STEP A three times with sounds 2, 3, and 4,
    thus eventually creating four layers.
    [ Adjust volumes of layers as deemed necessary;
     the processes may have different average volumes,
    and early layers may get too soft by receding. ]

C. After all layers have been created, set the end of the piece in motion:
    beginning with the last layer, start the 'end task' of each process.
    [ When all end tasks are started, the rest of the piece plays by itself, 
    and the player is off-duty. ]
    This task morphs through the sequence of stored presets in reverse,
    going through all presets beginning with the last one stored,
    and ends that layer after the first preset is reached.
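Using the same toy representation of presets as lists of parameter values (names hypothetical, for illustration only), this reverse morph could look like:

```python
def end_task(presets, steps_per_morph=4):
    """Sketch of the 'end task': morph through the stored presets in
    reverse, starting from the last preset stored, and stop once the
    first preset is reached."""
    reversed_presets = presets[::-1]
    current = reversed_presets[0]
    for target in reversed_presets[1:]:
        for step in range(1, steps_per_morph + 1):
            t = step / steps_per_morph
            # Step from the current preset toward the previous one.
            yield [a + t * (b - a) for a, b in zip(current, target)]
        current = target

# Three presets stored during the performance, earliest first:
stored = [[0.0], [0.5], [1.0]]
path = list(end_task(stored))
```

The path starts from the last-stored preset and arrives exactly at the first one, at which point the layer ends.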

    The performance ends when the last layer has reached its end.