Blogs

Semblance Typology of Entrainments

These English words are built from Greek or Latin prefixes meaning "same", "together", or "with". There are many others still to be explored, e.g.,

synonym - together in meaning

synorogeny - mountain-forming together

conclude - close together

symptosis - fall together

symptomatic - happening together

symphysis - growing together

sympatria - in the same region

symmetrophilic - lover of collecting in pairs

synaleiphein - melt together

For an evaluation of various mathematical formalizations inspired by this entrainment work, check out this paper: https://www.researchgate.net/profile/Rushil_Anirudh/publication/29991298... The ASU team successfully quantifies the advantages of metrics such as the one I suggested (HyperComplex Signal Correlation) for dancing bodies, where correlated rotations and displacements are in play.

Thanks to:

  • My mentor Georgia Psyllidou.
  • Eric Lewis, who wrote a paper that introduces Stoic chemistry with a reflection on the coextension of gin and tonic.
  • Another Greek friend and colleague, Vangelis Lympouridis, who reflects on some of these words here

David Wessel's IRCAM

    I experienced three IRCAMs: the “official” IRCAM officiated by Pierre Boulez, an IRCAM "by night" wrangled by David Wessel, and the IRCAM everybody takes away from Beaubourg into their future lives and work.

    This is a tale of David Wessel’s IRCAM that came to mind on his birthday and which I tell as a remembrance on his death day - both in October.

    Due to a highly improbable series of events in the 1980s, I found myself working for Pierre Boulez at IRCAM in my early 20s. The most improbable part was being hired at all - the result of a brief meeting with David Wessel. David was looking for a UNIX expert to connect a DEC PDP11/34 computer in the acoustics lab to a forthcoming VAX 11/780, to be purchased to replace an ailing PDP10 - the heart of IRCAM's computing infrastructure. I had a very approximate grasp of the situation, having learned UNIX at university in Australia and having apprenticed with some experienced folk who actually knew how to do this sort of thing. David toured me around the extremely odd place that is IRCAM, introduced me to the machines there and to the Systems team, which was run by a grumpy dude who took an instant dislike to me. I figured I had blown the interview and resigned myself to continuing my job at a music technology startup that was pivoting from music to training and sales of UNIX - a new technology in Europe. The teenage dream of combining my interests in music and technology was dashed again.

    A few weeks later David called to offer me a permanent job replacing the grumpy System Administrator, who had quit after his attempts to save the beloved PDP10 failed.

    One day during my first year at IRCAM, David asked me to come in on the weekend to help him with a recording session involving his buddies Steve Lacy and Oliver Johnson. The session was to take place in IRCAM's famed Espace de Projection, which is 3 floors below ground.

    IRCAM was built as the contemporary music complement to the Centre Pompidou contemporary art center. The only space available was in front of the Église Saint-Merri, whose appreciated and historically important flying buttresses could not be obscured - so the institute was built underground.

    IRCAM being underground was also advantageous for acoustic projects, as it considerably reduced noise from the streets of Paris entering both its anechoic chamber and the Espace de Projection concert space.

    The Espro, as it is called, was special because it had variable, controllable acoustics. The walls and ceiling were built with a huge array of rotatable panels with different acoustic properties on 3 faces. Congruent with the French enthusiasm for trinities, the entire ceiling was divided into 3 areas that could be raised and lowered to change the volume of the space. This was a popular place for performances, rehearsals and acoustics research so booking it was always challenging. Recording at night or weekends was the workaround for unsanctioned projects.

    As we started sprinkling microphone stands around, I realized David was going to use as many microphones, cables and channels of the mixer as he possibly could - quite a few just for the large drum kit. Concerned about entanglements and tripping hazards, I tried to take advantage of an unusual aspect of the construction of IRCAM - the extensive use of false floor panels rather than false ceilings. This had turned out to be an incredible time saver when my team replaced the PDP10.

    When I pulled up a panel of the false floor in the Espro, I discovered the floor was sitting on a small lake.

    "Where did all this water come from?" I wondered. I should have known the answer right away because I encountered it every day crossing bridges on my way to work: the river Seine. It was not far away, a similar depth and flooded quite often.

    We resorted to gaffer's tape to manage the cables on top of the floor. It didn't take long before we ran out of equipment on the bottom floor, so we climbed the stairs up to the Audio Team's offices.

    Valuable microphones and equipment were secured in an enormous metal-grilled cage about 20 ft high - an important strategy to make sure everything was reliably available for the next concert or project of the Ensemble Intercontemporain. To my initial surprise, David didn't have the key; he proceeded to climb to the top of this tall cage, scramble over the top, and climb down again inside it.

    This wouldn’t have surprised old friends of David as they would know that he had a brief enthusiasm for rock climbing in his youth - and an eye out for pranks that would likely not harm anyone.

    In a pattern that would repeat, David was publicly admonished for these unofficial side projects but always "forgiven" by Boulez, who I suspect secretly admired and encouraged them.

    I was fortunate to have been there that weekend because I learned how David achieved such a good recording, and it influenced my invention, a few years later, of the first Digital Audio Workstation.

    Instead of using booths, baffles, absorbers and overdubbing individual musicians, we adjusted the panels of the variable acoustic space itself to capture a natural reverberant field. We spread the musicians out in the space, rather more distant from each other than they would be in a jazz club - not so far that they would have trouble getting a tight rhythmic feel, but far enough to minimize the bleed of each sound from the closest microphones into the more distant ones.

    David always chuckled telling how he worked the console to engineer these sessions. He used the console as an array of microphone preamps, disabling EQ, avoiding artificial reverberation, and muting all unused channels. He used no compression other than the “warm” compression inherent in analog multitrack tape. This strategy reduced distortion and noise - a significant challenge with these early transistor mixing desks.

    David used his intimate knowledge of the music being made and of how the musicians played their instruments to optimize the choice and placement of microphones, achieving great timbral balance and clarity in the recording.

    David was proud of having received a “best recording” award for a Steve Lacy album and amused that the administration changed its tune to embrace this success as an IRCAM accomplishment.

    Steve Lacy was recorded quite a few times in the 1980s at IRCAM, and at least once officially in 1986 when his quartet booked the smaller Studio V.

    https://www.youtube.com/watch?v=rSi0YPqf3ts

    It was a pleasure to see David and Steve play music together some twenty years later at the Berkeley Edge Festival in 2003.

    Organized Entanglement: Fiber and Textile Arts, Science and Engineering

    In 2017 I participated in the e-Textile Summer Camp at the Paillard Centre d'Art Contemporain & Résidence d'Artistes in Poncé sur le Loir, a small village 214 km from Paris. Rachel Freire insisted that I join this panel discussion chaired by Dr. Becky Stewart:
    "The History of Computing and Weaving, the Consumer Society and Data Democratisation"
    with Becky Stewart, Audrey Briot, Rachel Freire and Adrian Freed
    July 22, 17:00-18:00

    I have researched the history of electronics, computing and e-textiles, but I thought I was likely going to be a "hair in the soup"–as the French say–as I don't have a portfolio of e-textile works that comes anywhere near the quality of my co-panelists' work. I reflexively took the role of respondent and prompted my co-panelists into elaborating on their work and curiosities. I first noted that their work shows how we can rescue the worn-out adjective "digital" by celebrating a continuity between the "digital" and "numerical" aspects of quipu (and its ancient analogs) and contemporary e-textile projects - which very often involve interactions with our digits in gloves or interactive textile surfaces.

    My second response was an improvisation on some philosophical work I had been doing on entrainment and entanglement. I am grateful to Hannah Perner-Wilson, a key instigator of the Summer Camps, for encouraging me to write these ideas down. This note is the kind of writing where I am often told to "unpack" the ideas or integrate a line of argument. Thankfully Hannah found value in this writing without me addressing those concerns. She saw potential in it as part of a "zine" for woolpunkers, a pataphysical take on e-textile inventors.

    So, instead of unpacking the text, I illustrate it below in the style of a gloss.

    Organized Entanglement

    I propose the term "Organized Entanglement" to encompass fiber and textile art, design, engineering and science.

    Managing Complexity with Explicit Mapping of Gestures to Sound Control with OSC

    This paper was presented by Matt Wright in 2001 at the ICMC in Havana, Cuba.

    This annotated version by Adrian Freed borrows some ideas from medieval glosses. It is intended to provide context and readability unavailable in the original PDF.

    Managing Complexity
    with Explicit Mapping of Gestures to Sound Control
    with OSC

    Matthew Wright
    Adrian Freed
    Ahm Lee
    Tim Madden
    Ali Momeni

    email: {matt,adrian,ahm,tjmadden,ali}@cnmat.berkeley.edu

    CNMAT
    UC Berkeley
    1750 Arch St.
    Berkeley, CA 94709, USA

    Abstract

    We present a novel use of the OSC protocol to represent the output of gestural controllers as well as the input to sound synthesis processes. With this scheme, the problem of mapping gestural input into sound synthesis control becomes a simple translation from OSC messages into other OSC messages. We provide examples of this strategy and show benefits including increased encapsulation and program clarity.

    Introduction

    [Photo: David Wessel (L), Shafqat Ali Khan, Matt Wright (R)]
    We desire expressive real-time control of computer sound synthesis and processing from many different gestural interface devices such as the Boie/Mathews Radio Drum, the Buchla Thunder, Wacom tablets, gaming joysticks, etc. Unlike acoustic instruments, these devices have no built-in association between the gestures they sense and the resulting sound output.
    Indeed, most of the art of designing a real-time-playable computer music instrument lies in designing useful mappings between sensed gestures and sound generation and processing.

    Open Sound Control (OSC) offers much to the creators of these gesture-to-sound-control mappings: It is general enough to represent both the sensed gestures from physical controllers and the parameter settings needed to control sound synthesis. It provides a uniform syntax and conceptual framework for this mapping. The symbolic names for all OSC parameters make explicit what is being controlled and can make programs easier to read and understand. An OSC interface to a gestural-sensing or signal-processing subprogram is an effective form of abstraction that exposes pertinent features while hiding the complexities of implementation.

    We present a paradigm for using OSC for mapping tasks and describe a series of examples culled from several years of live performance with a variety of gestural controllers and performance paradigms.

    Open Sound Control

    Open Sound Control (OSC) was originally developed to facilitate the distribution of control-structure computations to small arrays of loosely-coupled heterogeneous computer systems. A common application of OSC is to communicate control-structure computations from one client machine to an array of synthesis servers. The abstraction mechanisms built into OSC, a hierarchical name space and regular-expression message dispatch, are also useful in implementations running entirely on a single machine. In this situation we have adapted the OSC client/server model to the organization of the gestural component of control-structure computations.

    The basic strategy is to:

    • Translate all incoming gestural data into OSC messages with descriptive addresses
    • Make all controllable parameters in the rest of the system OSC-addressable

    Now the gestural performance mapping is simply a translation of one set of OSC messages to another. This gives performers greater scope and facility in choosing how best to effect the required parameter changes.
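    As a gloss, a minimal sketch of this strategy in Python (all names here are hypothetical, not CNMAT's code): a message is modeled as an (address, arguments) pair, and the mapper is just a function from gestural messages to synthesis-control messages.

    # Minimal sketch of the OSC-to-OSC mapping strategy (hypothetical names).
    def map_gesture_to_synth(address, args):
        """Translate one gestural OSC message into synthesis-control messages."""
        if address == "/tip/drawing":
            x, y, xtilt, ytilt, pressure = args
            # e.g. pen pressure drives amplitude, x position drives pitch
            return [("/synth/amplitude", [pressure]),
                    ("/synth/pitch", [100.0 + 1000.0 * x])]
        return []  # unmapped gestures produce no control messages

    # One incoming pen message becomes two synthesis messages:
    for msg in map_gesture_to_synth("/tip/drawing", [0.5, 0.2, 0.0, 0.0, 0.7]):
        print(msg)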

    An OSC Address Subspace for Wacom Tablet Data

    
    /{tip, eraser}/{hovering, drawing} x y xtilt ytilt pressure
    /{tip, eraser}/{touch, release} x y xtilt ytilt
    /{airbrush, puckWheel, puckRotation} value
    /buttons/[1-2] booleanValue
    
    

    Wacom digitizing graphic tablets are attractive gestural interfaces for real-time computer music. They provide extremely accurate two-dimensional absolute position sensing of a stylus, along with measurements of pressure, two-dimensional tilt, and the state of the switches on the side of the stylus, with reasonably low latency. The styli (pens) are two-sided, with a tip and an eraser.

    The tablets also support other devices, including a mouse-like puck, and can be used with two devices simultaneously.

    Unfortunately, this measurement data comes from the Wacom drivers in an inconvenient form. Each of the five continuous parameters is available independently, but another parameter, the device type, indicates what kind of device is being used and, for the case of pens, whether the tip or eraser is being used. For a program to have different behavior based on which end of the pen is used, there must be a switching and gating mechanism to route the continuous parameters to the correct processing based on the device type. Similarly, the tablet senses position and tilt even when the pen is not touching the tablet, so a program that behaves differently based on whether or not the pen is touching the tablet must examine another variable to properly dispatch the continuous parameters.

    Instead of simply providing the raw data from the Wacom drivers, our Wacom-OSC object outputs OSC messages with different addresses for the different states. For example, if the eraser end of the pen is currently touching the tablet, Wacom-OSC continuously outputs messages whose address is /eraser/drawing and whose arguments are the current values of position, tilt, and pressure.

    At the moment the eraser end of the pen is released from the tablet, Wacom-OSC outputs the message /eraser/release. As long as the eraser is within range of the tablet surface, Wacom-OSC continuously outputs messages with the address /eraser/hovering and the same position, tilt, and pressure arguments.

    With this scheme, all of the dispatching on the device type variables is done once and for all inside Wacom-OSC, and hidden from the interface designer. The interface designer simply uses the existing mechanisms for routing OSC messages to map the different pen states to different musical behaviors.
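    As a gloss, a hypothetical Python sketch of this "dispatch once, inside the wrapper" idea (Wacom-OSC itself is a Max object; none of these names come from its implementation):

    # Convert raw driver data (device type plus continuous values) into a
    # message whose address already encodes the pen end and contact state.
    def wacom_to_osc(device, touching, x, y, xtilt, ytilt, pressure):
        end = "eraser" if device == "pen_eraser" else "tip"
        state = "drawing" if touching else "hovering"
        return ("/%s/%s" % (end, state), [x, y, xtilt, ytilt, pressure])

    print(wacom_to_osc("pen_eraser", True, 0.4, 0.6, 0.1, -0.1, 0.8))
    # -> ('/eraser/drawing', [0.4, 0.6, 0.1, -0.1, 0.8])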

    We use another level of OSC addressing to define distinct behaviors for different regions of the tablet. The interface designer creates a data structure giving the names and locations of any number of regions on the tablet surface. An object called Wacom-Regions takes the OSC messages from Wacom-OSC and prepends the appropriate region name for events that occur in the region.

    For example, suppose the pen is drawing within a region named foo. Wacom-OSC outputs the message /tip/drawing with arguments giving the current pen position, tilt, and pressure. Wacom-Regions looks up this current pen position in the data structure of all the regions and sees that the pen is currently in region foo so it outputs the message /foo/tip/drawing with the same arguments. Now the standard OSC message routing mechanisms can dispatch these messages to the part of the program that implements the behavior of the region foo.

    Once again tedious programming work is hidden from the interface designer, whose job is simply to take OSC messages describing tablet gestures and map them to musical control.
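    A gloss on Wacom-Regions, again as a hypothetical Python sketch: look up the pen position in a table of named rectangular regions and prepend the region name to the address.

    REGIONS = {"foo": (0.0, 0.0, 0.5, 1.0),   # name: (xmin, ymin, xmax, ymax)
               "bar": (0.5, 0.0, 1.0, 1.0)}

    def prepend_region(address, args):
        x, y = args[0], args[1]
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return ("/" + name + address, args)
        return (address, args)  # outside all regions: pass through unchanged

    print(prepend_region("/tip/drawing", [0.25, 0.5, 0.0, 0.0, 0.7]))
    # -> ('/foo/tip/drawing', [0.25, 0.5, 0.0, 0.0, 0.7])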

    Dimensionality Reduction for the Tactex Control Surface

    [Photo: Tactex MTC Express]

    Tactex's MTC Express controller senses pressure at multiple points on a surface. The primary challenge in using the device is to reduce the high dimensionality of the raw sensor output (over a hundred pressure values) to a small number of parameters that can be reliably controlled.

    One approach is to install physical tactile guides over the surface and interpret the result as a set of sliders controlled by a performer's fingers. Apart from not fully exploiting the potential of the controller, this approach has the disadvantage of introducing delays as the performer finds the slider positions.

    An alternative approach is to interpret the output of the tactile array as an image and use computer vision techniques to estimate pressure for each finger of the hand. Software provided by Tactex outputs four parameters for each of up to five sensed fingers, which we represent with the following OSC addresses:

    /x      X position on the surface
    /y      Y position on the surface
    /z      Pressure
    /age    Amount of time this finger has been touching the surface

    The anatomy of the human hand makes it impossible to control these four variables independently for each of five fingers. We have developed another level of analysis, based on interpreting the parameters of three fingers as a triangle, as shown below. This results in the parameters listed below. These parameters are particularly easy to control and were chosen because they work with any orientation of the hand.

    Parameters of Triangle Formed by 3 Fingers
    [Photo: Tactex with 3-point touch]
    
    /area <area_of_inscribed_triangle>
    /averageX <avg_X_value>
    /averageY <avg_Y_value>
    /incircle/radius <Incircle radius>
    /incircle/area <Incircle area>
    /sideLengths <side1> <side2> <side3>
    /baseLength <length_of_longest_side>
    /orientation <slope_of_longest_side>
    /pressure/average <avg_pressure_value>
    /pressure/max <maximum_pressure_value>
    /pressure/min <minimum_pressure_value>
    /pressure/tilt <leftmost_Z-rightmost_Z> 
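
    As a gloss, a hypothetical Python sketch of this triangle analysis (omitting /orientation for brevity; the incircle radius uses r = area / semiperimeter):

    import math

    def triangle_params(fingers):
        # fingers: three (x, y, z) triples, z being pressure
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = fingers
        area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
        sides = sorted([math.dist((x1, y1), (x2, y2)),
                        math.dist((x2, y2), (x3, y3)),
                        math.dist((x3, y3), (x1, y1))])
        s = sum(sides) / 2.0                       # semiperimeter
        r = area / s if s else 0.0                 # incircle radius
        zs = [z1, z2, z3]
        tilt = min(fingers)[2] - max(fingers)[2]   # leftmost Z - rightmost Z
        return {"/area": area,
                "/averageX": (x1 + x2 + x3) / 3.0,
                "/averageY": (y1 + y2 + y3) / 3.0,
                "/incircle/radius": r,
                "/incircle/area": math.pi * r ** 2,
                "/sideLengths": sides,
                "/baseLength": sides[-1],
                "/pressure/average": sum(zs) / 3.0,
                "/pressure/max": max(zs),
                "/pressure/min": min(zs),
                "/pressure/tilt": tilt}

    print(triangle_params([(0.1, 0.1, 0.5), (0.4, 0.1, 0.7), (0.2, 0.3, 0.6)]))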
    
    

    An OSC Address Space for Joysticks

    USB joysticks used for computer games also have good properties as musical controllers. One model senses two dimensions of tilt, rotation of the joystick, and a large array of buttons and switches. The buttons support chording, meaning that multiple buttons can be pressed at once and detected individually.

    We developed a modal interface in which each button corresponds to a particular musical behavior. With no buttons pressed, no sound results. When one or more buttons are pressed, the joystick's tilt and rotation continuously affect the behaviors associated with those buttons.

    The raw joystick data is converted into OSC messages whose address indicates which button is pressed and whose arguments give the current continuous measurements of the joystick’s state. When two or more buttons are depressed, the mapper outputs one OSC message per depressed button, each having identical arguments. For example, while buttons B and D are pressed, our software continuously outputs these two messages:

    • /joystick/b xtilt ytilt rotation
    • /joystick/d xtilt ytilt rotation

    Messages with the address /joystick/b are then routed to the software implementing the behavior associated with button B with the normal OSC message routing mechanisms.
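    A gloss in hypothetical Python: the modal joystick mapper emits one message per depressed button, each with identical continuous arguments.

    def joystick_to_osc(pressed_buttons, xtilt, ytilt, rotation):
        return [("/joystick/" + b, [xtilt, ytilt, rotation])
                for b in sorted(pressed_buttons)]

    for msg in joystick_to_osc({"b", "d"}, 0.1, -0.3, 0.5):
        print(msg)
    # -> ('/joystick/b', [0.1, -0.3, 0.5])
    #    ('/joystick/d', [0.1, -0.3, 0.5])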

    Mapping Incoming MIDI to OSC

    Suppose a computer-music instrument is to be controlled by two keyboards, two continuous foot-pedals, and a foot-switch. There is no reason for the designer of this instrument to think about which MIDI channels will be used, which MIDI controller numbers the foot-pedals output, whether the input comes to the computer on one or more MIDI ports, etc.

    We map MIDI messages to OSC messages as soon as possible. Only the part of the program that does this translation needs to embody any of the MIDI addressing details listed above. The rest of the program sees messages with symbolic names like /footpedal1, so the mapping of MIDI to synthesis control is clear and self-documenting.
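    As a gloss, a hypothetical sketch of this early translation: one small table embodies all the MIDI addressing details, and everything downstream sees only symbolic names.

    MIDI_TO_NAME = {(0, 1, 4): "/footpedal1",    # (port, channel, CC number)
                    (0, 1, 11): "/footpedal2",
                    (1, 2, 64): "/footswitch"}

    def midi_cc_to_osc(port, channel, controller, value):
        name = MIDI_TO_NAME.get((port, channel, controller))
        return (name, [value / 127.0]) if name else None

    print(midi_cc_to_osc(0, 1, 4, 64))   # -> ('/footpedal1', [0.5039...])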

    Controller Remapping

    The use of an explicit mapping from gestural input to sound control, both represented as OSC messages, makes it easy to change this mapping in real-time to create different modes of behavior for an instrument. Simply route incoming OSC messages to the mapping software corresponding to the current mode.

    For example, we have developed Wacom tablet interfaces where the region in which the pen touches the tablet surface defines a musical behavior to be controlled by the continuous pen parameters even as the pen moves outside the original region. Selection of a region when the pen touches the tablet determines which mapper(s) will interpret the continuous pen parameters until the pen is released from the tablet.
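    A gloss on this latching behavior, as a hypothetical Python sketch: the region selected at pen touch chooses the mapper that interprets all subsequent pen messages until release.

    class LatchingRouter:
        def __init__(self, mappers):
            self.mappers = mappers      # region name -> mapping function
            self.region = None          # region latched at pen touch

        def handle(self, address, args):
            if address.endswith("/touch"):           # e.g. "/foo/tip/touch"
                self.region = address.split("/")[1]
            mapper = self.mappers.get(self.region)
            out = mapper(address, args) if mapper else []
            if address.endswith("/release"):
                self.region = None                   # unlatch on release
            return out

    router = LatchingRouter({"foo": lambda a, v: [("/synth" + a, v)]})
    router.handle("/foo/tip/touch", [0.2, 0.3, 0.0, 0.0])
    print(router.handle("/tip/drawing", [0.6, 0.3, 0.0, 0.0, 0.5]))  # still foo's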

    OSC to Control Hexaphonic Guitar Processing

    We have created a large body of signal processing instruments that transform the hexaphonic output of an electric guitar. Many of these effects are structured as groups of 6 signal-processing modules, one for each string, with individual control of all parameters on a per-string basis. For example, a hexaphonic 4-tap delay has 6 signal inputs, 6 signal outputs, and six groups of nine parameters: the gain of the undelayed signal, four tap times, and four tap gains.

    OSC gives us a clean way to organize these 54 parameters. We make an address space whose top level chooses one of the six delay lines with the numerals 1-6, and whose bottom level names the parameters. For example, the address /3/tap2time sets the time of the second delay tap for the third string. We can then leverage OSC’s pattern-matching capabilities, for example, by sending the message /[4-6]/tap[3-4]gain to set the gain of taps three and four of strings four, five, and six all at once.
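    As a gloss, a hypothetical Python sketch of this pattern-matched dispatch (not the original implementation; OSC's full pattern syntax also has *, ?, and {} forms, omitted here). Conveniently, OSC's [a-b] ranges are already valid regular-expression character classes. The name dryGain below stands in for the gain of the undelayed signal.

    import re

    # 6 strings x 9 parameters = the 54 addressable delay parameters.
    PARAMS = {"/%d/%s" % (s, p): 0.0
              for s in range(1, 7)
              for p in ["dryGain"]
                     + ["tap%dtime" % t for t in range(1, 5)]
                     + ["tap%dgain" % t for t in range(1, 5)]}

    def dispatch(pattern, value):
        rx = re.compile("^" + pattern + "$")
        for addr in PARAMS:
            if rx.match(addr):
                PARAMS[addr] = value

    dispatch("/[4-6]/tap[3-4]gain", 0.5)   # sets six parameters at once
    print(sorted(a for a in PARAMS if PARAMS[a] == 0.5))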

    Example: A Tactex-Controlled Granular Synthesis Instrument

    We have developed an interface that controls granular synthesis from the triangle-detection software for the Tactex MTC described above. Our granular synthesis instrument continuously synthesizes grains, each with a location in the original sound and a frequency-transposition value chosen randomly from within a range of possible values. The real-time-controllable parameters of this instrument are arranged in a straightforward OSC address space:

    /bufpos          Average grain position in the input sound
    /bufposrange     Range of possible positions around /bufpos
    /duration        Duration of each grain
    /transpose       Average transposition per grain
    /transposerange  Range of possible transpositions around /transpose

    The Max/MSP patch shown below maps incoming Tactex triangle-detection OSC messages to OSC messages to control this granular synthesizer.

    [Figure: Tactex granular Max/MSP patch]

    This mapping was co-developed with guitarist/composer John Schott and used for his composition The Fly.
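    As a final gloss, one plausible triangle-to-granular mapping in hypothetical Python (the actual assignments in the patch are not reproduced here):

    def triangle_to_granular(address, value):
        table = {"/averageX": ("/bufpos", value),
                 "/averageY": ("/transpose", value),
                 "/area": ("/bufposrange", value),
                 "/pressure/average": ("/duration", 10.0 + 200.0 * value)}
        return table.get(address)

    print(triangle_to_granular("/averageX", 0.42))   # -> ('/bufpos', 0.42)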

    Conclusion

    We have described the benefits in diverse contexts of explicit mapping of gestures to sound control parameters with OSC.

    References

    Boie, B., M. Mathews, and A. Schloss 1989. The Radio Drum as a Synthesizer Controller. Proceedings of the International Computer Music Conference, Columbus, OH, pp. 42-45.

    Buchla, D. 2001. Buchla Thunder. http://www.buchla.com/historical/thunder/index.html

    Tactex. 2001. Tactex Controls Home Page. http://www.tactex.com

    Wright, M. 1998. Implementation and Performance Issues with Open Sound Control. Proceedings of the International Computer Music Conference, Ann Arbor, Michigan.

    Wright, M. and A. Freed 1997. Open Sound Control: A New Protocol for Communicating with Sound Synthesizers. Proceedings of the International Computer Music Conference, Thessaloniki, Hellas, pp. 101-104.

    Wright, M., D. Wessel, and A. Freed 1997. New Musical Control Structures from Standard Gestural Controllers. Proceedings of the International Computer Music Conference, Thessaloniki, Hellas.

    Notations for Performative Electronics: the case of the CMOS Varactor

    Summary

    Demonstrating an unusual application of a CMOS NOR gate to implement a CMOS-varactor-controlled VCO, I explore the idea of circuit schematics as a music notation, and how scholars and practitioners might create and analyze notations as part of the rich web of interactions that constitute current music practice.

    A Synthesizable Hybrid VCO using SkyWater 130nm Standard-Cell Multiplexers

    Summary

    I introduce a new synthesizable VCO using (n+1) mux2i cells from the SkyWater 130nm PDK to form a (2n+1) hybrid ring oscillator.

    A Recipe using OSC Messages

    Penne ai Funghi Porcini with OSC

    Birds of the East Bay 2020

    Enjoy this Flockumentary.

    Exercising the odot language in Graphics Animation Applications

    One of my jobs on the odot language development team was developing a diverse set of applications and tests to give feedback to the language design team (mainly John MacCallum) on affordances and difficulties with the language.

    This example digs back into knowledge acquired in my early career, before I found a way to focus on music technology. I was doing a lot of computer graphics - plotting libraries, GUIs, CAD tools and so on. These were the early days, when the core concepts had just been worked out and companies like SGI were starting up (we had one of their first machines to play with at my university).

    The idea of this patch is to use OSC packets to describe objects, mostly Platonic solids, organize them into a display list, and hand them off to OpenGL for rendering via the Jitter interface. You can think of these objects as signals: functions of time. Their properties are computed from an incoming "time" variable. This point of view is a rich way to handle compositional challenges and mirrors the time machines we built in OSW and CAST.
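    A Python analogue of the idea (odot expressions themselves are not shown; every name here is illustrative): each scene object is a function of time, and rendering a frame means evaluating the display list at a given "time" value.

    import math

    def spinning_cube(t):
        return {"shape": "cube", "angle": (90.0 * t) % 360.0, "scale": 1.0}

    def pulsing_sphere(t):
        return {"shape": "sphere", "scale": 1.0 + 0.25 * math.sin(2 * math.pi * t)}

    display_list = [spinning_cube, pulsing_sphere]

    def render_frame(t):
        # evaluate every object at time t; hand the result to the renderer
        return [obj(t) for obj in display_list]

    print(render_frame(0.5))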

    C++ container output stream header file

    This header file provides extensions to the iostream << operator so you can pretty-print C++ containers. It has a few interesting examples of template metaprogramming and uses some of the more modern C++17 features. The repo code is evolving: https://github.com/adrianfreed/containerostream