Program


Evening Concert, Dec. 11th, 8pm

Prince Theater, Annenberg Center for the Performing Arts, 3680 Walnut St, Philadelphia, PA 19104

Roscoe Mitchell (reeds)
Steve Lehman + Hprizm (sax, voice, electronics)
Bob Ostertag (keyboards)
David Rosenboom (piano, spatialization system, computer interaction)
Bernard Lubat + Marc Chemillier & Gérard Assayag (piano, voice, OMax/ImproteK system)
Mari Kimura, Gyorgy Kurtag, Pierre Couprie (violin, synths, computer interaction)
LaDonna Smith, Susan Alcorn (violin, pedal steel, electronics)
Michael Young (piano, piano-prosthesis system)
Hatchers aka Michael Barker, Brian Osbourne (electronics, drums)

Program Notes


Evening Concert, Dec. 13th, 8pm

The Rotunda 4014 Walnut Street, Philadelphia, PA 19104

Moor Mother (voice, electronics)
Dafna Naphtali (electronics)
Denis Beuret, Sarah Belle Reid (trombone, trumpet, live electronics)
Georges Bloch, Rémi Fox (ImproteK system, saxophone)
Max Eilbacher (electronics)
Charles Kely, Marc Chemillier (guitar, djazz system)
Bhob Rainey (electronics)
Lance Simmons, Ada Babar (electronics, prepared guitar)
Adam Vidiksis (percussion, electronics)

Program Notes



Workshop: Dec. 11-13 (9:30am - 6pm)

U. Penn, Lerner Center, Rm 101, 201 S 34th St
U. Penn, Fisher-Bennett Hall, 419 Rose Hall, 3340 Walnut St



Keynote Speakers



Bob Ostertag

" No Idea "

David Rosenboom

Deviant Resonances — Listening to Evolution
What happens when two forms of musical intelligence—either having emerged naturally from cosmological dynamics or been volitionally constructed by purposeful beings—attempt to initiate improvised co-communication with each other, while neither possesses an a priori model describing the range and scope of ways in which either intelligence or music can be manifested? Will they even recognize each other? What predictive models can they use to search for something for which neither has a clear pre-definition? This is both a challenging and inspiring space to explore.


Round Table: Homage to David Wessel



With Roscoe Mitchell, Marc Chemillier (EHESS), Matt Wright (CCRMA, CNMAT), Georges Bloch (U. Strasbourg)



Presentations, demos, performances



Intelligent Music Agents capable of joint intuitive and rational thinking

Jonas Braasch (RPI)
This talk describes an intelligent music system approach that utilizes a joint bottom-up/top-down structure. The bottom-up structure is purely signal driven and calculates pitch, loudness, and information rate, among other parameters, using auditory models that simulate the functions of different parts of the brain. The top-down structure builds on a logic-based reasoning system and an ontology that was developed to reflect rules in jazz practice. Two instances of the agent have been developed to perform traditional and free jazz, and it is shown that the same general structure can be used to improvise different styles of jazz.
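
As a schematic illustration of this joint architecture, here is a minimal Python sketch; the feature extraction and the single playing rule are invented stand-ins, not the auditory models or jazz ontology of the actual system.

    # Minimal sketch of a joint bottom-up/top-down agent (illustrative only).
    # The bottom-up stage is a stand-in for the auditory models in the talk;
    # the top-down stage is one hypothetical rule, not the full jazz ontology.
    import numpy as np

    def bottom_up(frame, sr=44100):
        """Signal-driven stage: estimate loudness (RMS) and a crude pitch."""
        rms = float(np.sqrt(np.mean(frame ** 2)))
        spectrum = np.abs(np.fft.rfft(frame))
        pitch_hz = float(np.argmax(spectrum) * sr / len(frame))
        return {"loudness": rms, "pitch": pitch_hz}

    def top_down(features, style="free"):
        """Rule-based stage: map features to a playing decision."""
        if style == "traditional" and features["loudness"] < 0.01:
            return "lay out"          # stay silent under a quiet soloist
        if features["pitch"] > 1000:
            return "answer low"       # contrast a high register with a low reply
        return "imitate"

    sr = 44100
    t = np.linspace(0, 0.05, int(sr * 0.05), endpoint=False)
    frame = 0.1 * np.sin(2 * np.pi * 440 * t)   # fake input frame (A440)
    print(top_down(bottom_up(frame, sr), style="traditional"))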

Live Scoring for Computer Assisted Composition

Justin Yang (RPI)
This talk explores the use of networked computer animation as a front end for composition and scoring for live performers. An assortment of computer-animated graphic tools can be used to develop interactions between algorithms or AI and live performers. These tools help open the door to possibilities such as real-time scoring, structured improvisation, multi-nodal composition, real-time orchestration, and performer-computer interactions.

Reembodied Sound and Algorithmic Environments for Improvisation

Matthew Goodheart (RPI)
Reembodied sound is a form of electroacoustics that uses transducer-driven resonant objects to create acoustic realizations of sample and analysis derived mixed synthesis. This talk will focus on the use of reembodied sound as a generative basis to create large-scale, algorithmically driven sonic environments for improvisers, discussing both technical implementation and aesthetic orientation. Directions for future research involving digital listening agents and interactivity will also be addressed.
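
To give a flavor of analysis-derived synthesis in this spirit, here is a minimal Python sketch (using numpy): it picks the strongest partials of a stand-in sample and resynthesizes them as a sine bank that could drive a transducer; the actual reembodied-sound chain is not reproduced here.

    # Sketch of an analysis-driven signal for a transducer-mounted object
    # (illustration only): pick the strongest spectral peaks of a source
    # sample and resynthesize them as a sine bank to drive the object.
    import numpy as np

    sr = 44100
    t = np.linspace(0, 1.0, sr, endpoint=False)
    sample = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*660*t)  # stand-in sample

    spectrum = np.abs(np.fft.rfft(sample))
    freqs = np.fft.rfftfreq(len(sample), 1/sr)
    peaks = freqs[np.argsort(spectrum)[-2:]]       # two strongest partials

    drive = sum(np.sin(2*np.pi*f*t) for f in peaks) / len(peaks)  # transducer signal
    print("partials sent to transducer:", sorted(round(f) for f in peaks))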

Mediation between Musicians and Code with Neural Networks

Jeremy Stewart (RPI)
For this talk, we will discuss the design and implementation of a neural network system for performance between acoustic musicians and live-coding performers. Starting with simple classification systems and experimenting with data for training deep neural networks, while also considering novel integrations into existing performance systems, we will outline our current work and discuss potential steps forward.
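
As a toy illustration of such mediation, the following Python sketch (using scikit-learn) trains a small neural network to map invented audio features to invented live-coding commands; the talk's actual features, training data, and integrations are not shown.

    # Toy sketch: a classifier mediating between audio features and code actions.
    # Feature layout and command names are invented for illustration.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    COMMANDS = ["thicken_texture", "thin_texture", "change_pattern"]

    rng = np.random.default_rng(0)
    X = rng.random((300, 3))          # fake [loudness, brightness, density] frames
    y = (X @ [2.0, 1.0, -1.5] > 0.8).astype(int) + (X[:, 2] > 0.9)  # fake labels
    y = np.clip(y, 0, 2)

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    model.fit(X, y)

    live_frame = np.array([[0.8, 0.5, 0.2]])       # features from the musician
    print(COMMANDS[model.predict(live_frame)[0]])  # command sent to the live coder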

Coughing is a Form of Love

Joseph Pfender & Melanie Farley
This performance was born out of a shared interest in creating new expressive resources at the seam of the human and the algorithmic. The vocal experiment embodies the vulnerability of vocal expression, and attempts to disrupt that vulnerability by introducing chance robotic interventions, creating a kind of cyborg poetry. Drawing on the energy of groups like Pere Ubu and Patti Smith, the poetry also takes substantive inspiration from Yoko Ono’s instruction poems. The patch itself responds to historical avant-garde methods of recursive feedback, using sound synthesis and convolution procedures that have a topological affinity to David Tudor’s Toneburst. Taking a cue from earlier forays into instrumental belligerence and productive obstruction at NIME, we attempt to work into vocalic expressivity a logic of resistance and effervescence.

From OMax to DYCI2: Merging free, reactive, and scenario-based features in human-computer co-improvisation

Jérôme Nika (Ircam) with Rémi Fox (Musician)
The collaborative research and development project DYCI2 (Creative Dynamics of Improvised Interaction) focuses on conceiving, adapting, and bringing into play efficient models of artificial listening, learning, interaction, and generation of musical content. It aims at developing creative and autonomous digital musical agents able to take part in various human projects in an interactive and artistically credible way. The areas concerned are live performance, production, pedagogy, and active listening. This presentation will give an overview of the project, focusing on the design of multi-agent architectures and models of knowledge and decision (OMax, SoMax, ImproteK, DYCI2) in order to explore scenarios of music co-improvisation involving human and digital agents. The objective is to merge the usually exclusive "free", "reactive", and "scenario-based" paradigms in interactive music generation so as to adapt to a wide range of musical contexts involving hybrid temporality and multimodal interactions. The DYCI2 project is carried out in close and continuous interaction with expert musicians, and these interactions are an integral part of the iterative development of the models and of the software prototypes. The presentation will therefore be illustrated with material from past, present, and future associated artistic projects.
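
For readers unfamiliar with the "scenario-based" paradigm, here is a deliberately simplified Python sketch of the idea, assuming a memory of labelled events and a chord-label scenario; the actual OMax/SoMax/ImproteK/DYCI2 models are far richer.

    # Schematic sketch of scenario-based recombination (not the DYCI2 code):
    # the memory is a labelled sequence; generation follows a scenario of
    # labels, preferring continuations that are contiguous in the memory,
    # and falls back to a free (random) choice when a label has no match.
    import random

    memory = [("C", "e1"), ("F", "e2"), ("G", "e3"), ("C", "e4"), ("G", "e5")]
    scenario = ["C", "G", "C", "Bb"]        # "Bb" has no match -> free fallback

    def generate(memory, scenario, seed=1):
        random.seed(seed)
        out, pos = [], -1
        for label in scenario:
            matches = [i for i, (lab, _) in enumerate(memory) if lab == label]
            if matches:
                # prefer continuing from the previous position (continuity)
                nxt = min(matches, key=lambda i: abs(i - (pos + 1)))
            else:
                nxt = random.randrange(len(memory))   # "free" fallback
            out.append(memory[nxt][1])
            pos = nxt
        return out

    print(generate(memory, scenario))   # four events recombined from memory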

DYCI2 project: Multi-dimensional and multi-scale learning of music structure for machine improvisation

Ken Deguernel (Inria Nancy)
Current musical improvisation systems are able to generate unidimensional musical sequences by recombining their musical contents. However, taking into account several dimensions (melody, harmony, ...) and several temporal levels remains difficult. We propose to combine probabilistic approaches with formal language theory in order to better assess the complexity of a musical discourse, from both a multidimensional and a multi-level point of view, in the context of improvisation, where the amount of data is limited. The proposed methods have been evaluated by professional musicians and improvisers during listening sessions.
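
A minimal Python sketch of the multidimensional idea, assuming a toy corpus and a simple linear interpolation between a melodic and a harmonic model; the actual work combines probabilistic models with formal language theory in ways not captured here.

    # Sketch of combining two dimensions with linear interpolation
    # (illustration only): P(note | prev_note) captures melody,
    # P(note | chord) captures harmony.
    from collections import Counter, defaultdict

    corpus = [("C", "c4"), ("C", "e4"), ("F", "f4"), ("F", "a4"), ("C", "e4")]

    melody = defaultdict(Counter)   # prev note -> next-note counts
    harmony = defaultdict(Counter)  # chord -> note counts
    notes = [n for _, n in corpus]
    for prev, nxt in zip(notes, notes[1:]):
        melody[prev][nxt] += 1
    for chord, note in corpus:
        harmony[chord][note] += 1

    def prob(note, prev, chord, lam=0.5):
        """Interpolated probability of `note` given both dimensions."""
        p_mel = melody[prev][note] / max(1, sum(melody[prev].values()))
        p_har = harmony[chord][note] / max(1, sum(harmony[chord].values()))
        return lam * p_mel + (1 - lam) * p_har

    print(prob("e4", prev="c4", chord="C"))  # blends melodic and harmonic evidence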

Portable Gold and Philosophers’ Stones (Deviant Resonances)

David Rosenboom (CalArts)
(1972 & 2015), Computer-electronics with BCMI (Brain-Computer Music Interface), auxiliary instrument, and two active imaginative listening brainwave performers (volunteers)
The “Philosopher’s Stone” is a mental symbol about the prima materia, the original substance and ultimate principle of the universe. It has been said that by returning from the qualities of sensation and thought, which we perceive through differentiation and specialization, to the undifferentiated purity of the prima materia, we might learn truths about creative power and the fundamental mutability of all phenomena. Combining this with the symbol, Portable Gold, was my way of emphasizing the timelessness and spacelessness of this idea, which we can carry with us anywhere. To manifest these symbols in music, I’ve made pieces that work with resonant coincidences detected among the physical brainwaves of performers and apply them inside the circuits of custom-built, live electronic music devices, to grow spontaneous musical forms. This version is realized with portable brainwave detectors, computer music software, and an auxiliary acoustic instrument.
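
A toy Python sketch of the coincidence idea, with simulated alpha-band power streams standing in for real EEG analysis; the BCMI hardware and the musical mapping are, of course, not modelled.

    # Toy sketch of detecting "resonant coincidences" between two performers'
    # brainwave signals (simulated here), used as triggers for musical events.
    # Real BCMI systems use EEG band analysis far beyond this threshold test.
    import numpy as np

    rng = np.random.default_rng(3)
    sr = 32                              # envelope sample rate (Hz), assumed
    a = rng.random(256)                  # alpha-band power, performer 1 (fake)
    b = rng.random(256)                  # alpha-band power, performer 2 (fake)

    threshold = 0.85
    coincident = (a > threshold) & (b > threshold)   # simultaneous high alpha
    for i in np.flatnonzero(coincident):
        print(f"trigger musical event at t = {i / sr:.2f} s")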

ONE and John: Oriented Improvisation

Pierre Couprie (Sorbonne University), Gyorgy Kurtag (SCRIME, Bordeaux University)
During this talk, we will present a French electroacoustic improvisation group: ONE (Orchestre National Electroacoustique). In this group, each of us has developed our own instrument and/or the digital part of our devices. We also use a digital conductor called 'John', which proposes an improvisation score for each musician from a list of words, nuances, and intensity variations. We will also discuss the analysis of performance through representation techniques to study John's influence on musical improvisation.
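
A minimal Python sketch of a 'John'-like conductor, with invented word, nuance, and intensity lists; the actual system's vocabulary and logic are not reproduced.

    # Minimal sketch of a digital conductor (invented details): each musician
    # receives a word, a nuance, and an intensity trend drawn from lists.
    import random

    WORDS = ["granular", "pulse", "drone", "silence"]
    NUANCES = ["pp", "mf", "ff"]
    TRENDS = ["crescendo", "diminuendo", "static"]

    def conduct(musicians, seed=None):
        rng = random.Random(seed)
        return {m: (rng.choice(WORDS), rng.choice(NUANCES), rng.choice(TRENDS))
                for m in musicians}

    for musician, score in conduct(["violin", "synth", "laptop"], seed=7).items():
        print(musician, "->", " / ".join(score))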

Steve Lehman (CalArts)
Recent work in the domain of computer-driven improvisation has privileged modes of interactivity that eschew tempo-based hierarchies of musical time. This talk will seek to interrogate some of these recent tendencies, and provide a brief overview of the speaker's recent work integrating improvisation with contemporary research in the field of rhythmic cognition. Some potential applications to tempo-based musical models for human-computer interaction will also be discussed.

Revolutionizing the Tradition: Extracting Human Expression using Motion Sensor for Music

Mari Kimura (UC Irvine)
Violinist and composer Mari Kimura will discuss creating performances and compositions that integrate interactive computing and the motion sensor she has been developing. The lecture includes demonstrations, among them musical performances with her current prototype, “Mugic”; her work with students at “Future Music Lab”, the summer program she directs at the Atlantic Music Festival in collaboration with IRCAM; and her teaching at her new ‘home’, the “Integrated Composition, Improvisation and Technology” (ICIT) program in the Music Department of the University of California, Irvine.

Canopy of Catastrophes

Bhob Rainey
What are some good ways, when making music that is shared among humans (and, in terms of appreciation, likely only among humans), to “get outside yourself” and connect to non-human patterns, entities, signals, etc., without pretending to be objective? Can you not only point to or represent the “great outdoors” but also bring yourself and maybe your audience "outside”? Are computers and computational thinking at all helpful in answering these questions? Let’s talk sonification and data streams and generative patterns, but let’s also ask how they function aesthetically, what ends they might serve when sounds reach ears and that communal event we call music happens.

Smart Acoustic Instruments: From Early Research to HyVibe

Adrien Mamou-Mani (HyVibe)
Smart acoustic instruments are acoustic instruments with programmable sounds. I will present the research at the origin of this concept and examples of prototypes that have been used by artists. The emphasis will be placed on the HyVibe Guitar, designed to be the future of the electro-acoustic guitar, using digital technology and vibration control. Finally, I will share initial ideas on its potential use for improvisation.

Live Algorithms for Music

Michael Young (U. of Sunderland)
Computational systems able to collaborate with human improvisers are live algorithms: able to cooperate proactively, on an equal basis, with musicians in performance. This is an ideal that raises fundamental questions about creativity and group interaction, and how these might be computationally modelled. Can musicians and computers relate to one another, just as human musicians do? Can an audience recognize and appraise this relationship? Live algorithms offer the prospect of a new understanding of real-time creative practice that differs from the established paradigms in live electronic music: computer as instrument and computer as proxy. Drawing upon ideas from social psychology, collective music-making can be viewed as a special case of social cooperation, evidenced primarily through sound. To attempt a functional description of live algorithms is to model the modes of cooperation and causal attribution that occur between proactive agents in a shared sonic environment. The challenge of live algorithms is to find genuinely original ways for humans and computers to work together: an original way to make music. The ideal live algorithm paradigm is computer as partner.

MIGSI: The Minimally Invasive Gesture Sensing Interface for Trumpet

Sarah Reid (CalArts)
Performer-composer-technologist Sarah Reid will introduce the Minimally Invasive Gesture Sensing Interface (MIGSI) for trumpet. MIGSI uses sensor technology to capture gestural data such as valve displacement, hand tension, and instrument position, to offer extended control and expressivity to trumpet players. In addition to addressing technical and design-based considerations of MIGSI, this presentation will discuss various strategies for performing and composing with this new instrument, and will delve into a larger discussion on integrating new musical interfaces, micro-controllers, and electronic instruments into an improvisational practice.
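
As a hypothetical illustration of gesture-to-sound mapping of this kind, the following Python sketch scales invented sensor readings to control values; MIGSI's actual sensors, ranges, and mappings are assumptions here, not the instrument's API.

    # Hypothetical sketch of mapping MIGSI-style gesture data to controls;
    # sensor names and ranges are invented for illustration.
    def map_gesture(valve_displacement, hand_tension, tilt_deg):
        """Scale raw sensor readings (assumed 0..1 and -90..90) to controls."""
        cutoff = 200 + 4000 * valve_displacement   # Hz, filter opens with valves
        drive = min(1.0, hand_tension * 1.5)       # saturation from grip tension
        pan = (tilt_deg + 90) / 180                # bell position -> stereo pan
        return {"cutoff_hz": cutoff, "drive": drive, "pan": pan}

    print(map_gesture(valve_displacement=0.4, hand_tension=0.3, tilt_deg=20))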

100 Strange Sounds: Practice on Electroacoustic Improvisation

Joo Won Park (Wayne State U.)
100 Strange Sounds consists of one hundred video recordings of solo improvisations using live electronics (http://www.100strangesounds.com). The purpose of this project was to improve the creator's technical and improvisational abilities while examining the documentation and promotional possibilities of an online video platform. The author will present the technical and aesthetic findings of the project by demonstrating the software and hardware setup as well as sharing viewer data from YouTube and Google Analytics.

Spatial constructs and concepts, rituals

John Mallia / Scott Deal (NEC studio)
John Mallia lives and works in Boston, where he is a member of the composition faculty and directs the Electronic Music Studio at the New England Conservatory of Music. His compositional process is informed by spatial constructs and concepts, and by a fascination with presence, ritual, and the thresholds standing between states of existence or awareness. In addition to composing chamber music and works combining acoustic instruments with electronics, he creates fixed-media compositions and collaborates with visual artists on multimedia works, including installations.

Talkback: Human and Computer Improvisation through a Live-Trained Machine Learning Model

Flannery Cunningham (UPenn)
Talkback for instrument and computer consists of an ongoing process of improvisation and musical evolution shared between a human player and a computer. The piece uses Max/MSP with Rebecca Fiebrink’s open source Wekinator machine learning software; however, contrary to the usual practice of training a machine learning system during composition or rehearsal, in Talkback the machine learning system is trained live during the performance. An initial semi-random musical “seed” is used as an opening output by the computer. The player improvises in response to this, and the piece evolves through an alternation of “training” and “running” the computer’s learned model. The non-human half of the partnership is also endowed with creative agency, as an “activity meter” allows the computer system to decide when it will freeze on current material (allowing the human player to layer new musical material on top of an existing texture) and when it will generate a new seed (as when an improviser decides that the music has become too static and introduces a new idea). In this workshop, composer Flannery Cunningham will perform a version of Talkback for hammered dulcimer and laptop, introduce the technology and process of creating the work, and invite participants to experiment with the piece’s structure with their own instruments or voice.
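
A schematic Python sketch of the freeze-versus-reseed decision described above, with invented thresholds and a plausible mapping from the activity meter to actions; the piece's actual meter and its Wekinator integration are not reproduced.

    # Sketch of the freeze-vs-new-seed decision, under invented numbers: an
    # "activity meter" tracks recent change; the system freezes on current
    # material when activity is high and reseeds when the music goes static.
    import random

    def step(activity, history, freeze_above=0.7, reseed_below=0.2):
        if activity > freeze_above:
            return "freeze"        # hold material; player layers on top
        if activity < reseed_below and len(history) > 4:
            return "new_seed"      # music too static: introduce a new idea
        return "run_model"         # keep alternating train/run as usual

    random.seed(2)
    history = []
    for t in range(8):
        activity = random.random()
        action = step(activity, history)
        history.append(action)
        print(f"t={t}  activity={activity:.2f}  ->  {action}")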

Body/Environment couplings through sound and light

Mina Zarfsaz
This talk/demo is about an interactive audio/video piece consisting of a system of sensors, speakers, and projectors that measures the impact of movement and human organization as it reconstructs the dismantled fragments of pieces of music with any given group of people. Like an orchestra of instruments, it is the body of the spectator that co-composes the rhythmic content by coordinating movements with others as they trigger the sensors. The “notes” in this project are struck at the interface of body and machine. While in the space, each person is either an active “ON” (within a sensor’s range) or a passive “OFF” (out of a sensor’s range). This piece forces one’s perceptual system to search the space for triggered sounds and lit surfaces; to track changes, estimate distances, and gauge corporeal relationships with others. The piece never repeats itself exactly and has no beginning, middle, or end.
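
A toy Python sketch of the ON/OFF spatial logic, with an invented zone layout; the installation's actual sensors and sound engine are not modelled.

    # Toy sketch of the ON/OFF spatial logic described above: each sensor
    # zone holds one musical fragment, and a visitor inside a zone switches
    # it on. Zone layout and fragments are invented for illustration.
    zones = {                       # zone -> (x range, fragment)
        "A": ((0, 2), "strings loop"),
        "B": ((2, 4), "brass chord"),
        "C": ((4, 6), "percussion hit"),
    }

    def active_fragments(visitor_positions):
        on = set()
        for x in visitor_positions:
            for name, ((lo, hi), fragment) in zones.items():
                if lo <= x < hi:
                    on.add(fragment)      # visitor is "ON" within this range
        return sorted(on)

    print(active_fragments([0.5, 4.2]))   # -> ['percussion hit', 'strings loop']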

KEROAÄN

KEROAÄN is a collaborative research project between IAN M FRASER and REED EVAN ROSENBERG exploring the composition of electronic music by an artificial intelligence. Pieces are diffused in real time with no human intervention whatsoever as the machine agent manipulates the qualities of chosen non-standard synthesis and microsound techniques. In live diffusions, the machine agent additionally controls laser apertures and an array of strobe lighting, which collectively act as a visual projection of the agent's internal state as it structures the performance. A distinctly non-human logic pervades the resultant arrangements of chaotic sounds and high-intensity lighting, presenting an immersive, alien environment.

Drexel MET Lab overview

Youngmoo Kim, Jeff Gregorio (Drexel University)
The Music and Entertainment Technology Laboratory (MET-lab), part of the Drexel ExCITe Center at Drexel University, is devoted to research in media technologies shaping the future of live performance and entertainment. Founded in 2005, the lab conducts continually evolving research.

Augmentation of Acoustic Drums using Electromagnetic Actuation and Wireless Control

Jeff Gregorio, Youngmoo Kim (Drexel)
We present a system for augmentation of acoustic drums using electromagnetic actuation of the resonant membrane, driven with continuous audio signals. Use of combinations of synthesized tones and feedback taken from the batter membrane extends the timbral and functional range of the drum. The system is designed to run on an embedded, WiFi-enabled platform, allowing multiple augmented drums to serve as voices of a spatially distributed polyphonic synthesizer. Semi-autonomous behavior is also explored, with individual drums configured as nodes in a directed graph. EM actuation and wireless connectivity enable a network of augmented drums to function in traditionally percussive roles, as well as harmonic, melodic, and textural roles. This work is developed by an engineer in close collaboration with an artist in residence for use in live performance and interactive sound installation.
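
As a schematic illustration of the directed-graph behavior, here is a minimal Python sketch with an invented topology and decay values; the embedded platform and actuation signal chain are not modelled.

    # Sketch of the directed-graph behavior described above (topology and
    # decay are invented): each drum node forwards a scaled excitation to
    # its successors, so one struck drum can propagate through the network.
    graph = {"kick": ["floor_tom"], "floor_tom": ["snare"], "snare": []}

    def propagate(start, energy, decay=0.5, threshold=0.1):
        """Breadth-first spread of excitation energy along the graph."""
        events, frontier = [], [(start, energy)]
        while frontier:
            drum, e = frontier.pop(0)
            if e < threshold:
                continue
            events.append((drum, round(e, 3)))
            frontier += [(nxt, e * decay) for nxt in graph[drum]]
        return events

    print(propagate("kick", energy=1.0))
    # -> [('kick', 1.0), ('floor_tom', 0.5), ('snare', 0.25)]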

Soundscapes as interfaces for data-driven musical possibilities

Tae Hong Park (NYU)

NYU Music and Audio Research Laboratory presentations

Tae Hong Park, Leo Chang, Dafna Naphtali, Erich Barganier, Toshi Tsuruoka (NYU)

Matmos

Density Function

Adam Vidiksis + BEEP (Temple U.)

Ableton

Ableton tech update for improvisers

Experimental electronics of the American Midwest

Max Eilbacher
Experimental electronics of the American Midwest (the Sonic Arts Union: Robert Ashley, Alvin Lucier, Gordon Mumma, etc.)

The djazz project

Marc Chemillier (EHESS)
Jazz machines and anthropology

Math and improvisation

Dmitri Tymoczko

Generative patches for Modular and Semi-Modular Synthesizers

Sandy James (Temple U.)

CCRMA

Matt Wright