Please RSVP if you intend to come to any of these events.
Evening Concert, Dec. 11th, 8pm
Prince Theater, Annenberg Center for the Performing Arts, 3680 Walnut St, Philadelphia, PA 19104
LaDonna Smith, Susan Alcorn, Miya Masaoka (violin / viola, pedal steel, one-string koto, electronics)
Roscoe Mitchell + Bob Ostertag (reeds, sampler)
David Rosenboom (piano, spatialization system, computer interaction) : The Right Measure of Opposites
Steve Lehman + Hprizm (saxophone, voice, electronics)
Bernard Lubat + Marc Chemillier & Gérard Assayag (piano / voice, OMax/ImproteK agents)
Bob Ostertag (Gamepad and Aalto)
Mari Kimura, György Kurtág Jr, Pierre Couprie (violin, synths, computer interaction)
Hatchers aka Michael Barker, Brian Osborne (electronics, drumset)
Farid Barron (piano)
Program Notes
Tickets available from the Annenberg Center Live box office
free admission for students with ID
Evening Concert, Dec. 13th, 8pm
The Rotunda 4014 Walnut Street, Philadelphia, PA 19104
KEROAÄN (machine agents)
John Mallia (percussion & electronics): Husk, with Aura
Dafna Naphtali, Matthew Clayton, Mohamed Kubbara (voice & live-processing, saxophone, drumset)
Lance Simmons, Ada Babar (electronics, prepared guitar)
Bhob Rainey (electronics)
Scott Deal (vibraphone & electronics): Goldstream Variations
Sarah Belle Reid, Ashley Tini, D Hotep (trumpet & live electronics, vibraphone, guitar)
Max Eilbacher (electronics)
Rémi Fox, Georges Bloch, Jérôme Nika (saxophone, ImproteK system)
Moor Mother, Madam Data, Mental Jewelry (voice, clarinet & electronics)
Adam Vidiksis (floor tom, computer processing): Hyperdyne
free admission for all
Workshop : Dec. 11-13th, 9:30am - 6pm
Workshop, Dec. 11th Morning
Drexel University ExCITe Center, 3401 Market St
Theme : Instruments, spaces, bodies
Session chair : Jérôme Nika
09:30
Presentation of the ExCITe Center and the MET-lab at Drexel University
Youngmoo Kim (Drexel University)
The ExCITe Center is Drexel University’s home for research and discovery connecting technology and creative expression, bringing together faculty and students from across the University to pursue transdisciplinary, collaborative projects. As part of the Center, the Music & Entertainment Technology Laboratory (MET-lab) focuses on the machine understanding of audio, human-machine interfaces and robotics for expressive interaction, real-time analysis, synthesis, and visualization of sound, and K-12 outreach for STEAM (Science, Technology, Engineering, Arts & Design, and Mathematics) education. This presentation will highlight recent MET-lab/ExCITe projects with external collaborators, including Sophia’s Forest (a chamber opera with electroacoustic sound sculptures by composer Lembit Beecher) and a dance work involving autonomous drones, created with Parsons Dance.
Augmentation of Acoustic Drums using Electromagnetic Actuation and Wireless Control
Jeff Gregorio (Drexel University)
We present a system for augmentation of acoustic drums using electromagnetic actuation of the resonant membrane, driven with continuous audio signals. Use of combinations of synthesized tones and feedback taken from the batter membrane extends the timbral and functional range of the drum. The system is designed to run on an embedded, WiFi-enabled platform, allowing multiple augmented drums to serve as voices of a spatially-distributed polyphonic synthesizer. Semi-autonomous behavior is also explored, with individual drums configured as nodes in a directed graph. EM actuation and wireless connectivity enable a network of augmented drums to function in traditionally percussive roles, as well as harmonic, melodic, and textural roles. This work is developed by an engineer in close collaboration with an artist in residence for use in live performance and interactive sound installation.
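A minimal sketch of the directed-graph idea described above (illustrative only; the node names, tone frequencies, and feedback weights are invented, not taken from the actual system): each drum node mixes a synthesized tone with weighted feedback from the drums pointing into it.

```python
import math

class DrumNode:
    def __init__(self, name, tone_hz):
        self.name = name
        self.tone_hz = tone_hz      # frequency of the synthesized tone
        self.inputs = []            # (source_node, weight) edges of the graph
        self.last_output = 0.0      # most recent batter-membrane sample

    def actuation_sample(self, t):
        """One sample of the actuator signal: tone plus weighted feedback."""
        tone = math.sin(2 * math.pi * self.tone_hz * t)
        feedback = sum(w * src.last_output for src, w in self.inputs)
        return tone + feedback

# Three drums patched in a cycle: a -> b -> c -> a
a, b, c = DrumNode("a", 110.0), DrumNode("b", 165.0), DrumNode("c", 220.0)
b.inputs.append((a, 0.3))
c.inputs.append((b, 0.3))
a.inputs.append((c, 0.3))

nodes, sr = (a, b, c), 48000.0
for n in range(4):
    t = n / sr
    new = [node.actuation_sample(t) for node in nodes]  # compute from the
    for node, y in zip(nodes, new):                     # previous sample,
        node.last_output = y                            # then commit
    print([round(node.last_output, 3) for node in nodes])
```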
10:15
ONE and John: Oriented Improvisation
Pierre Couprie (Sorbonne University), György Kurtág Jr (SCRIME, Bordeaux University)
During this talk, we will present a French improvisation group in electroacoustic music: ONE (Orchestre National Electroacoustique). In this group, each of us has developed our own instrument and/or the digital part of our devices. We also use a digital conductor called 'John', which proposes an improvisation score to each musician drawn from a list of words, nuances, and variations of intensity. We will also discuss the analysis of performances through representation techniques, in order to study John's influence on musical improvisation.
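As an illustration of the digital-conductor idea, the following hypothetical sketch deals each musician a score built from a word, a nuance, and an intensity trajectory; the word lists are invented, not John's actual vocabulary.

```python
import random

# Invented vocabulary, standing in for John's actual lists.
WORDS = ["granular", "pulse", "drone", "silence"]
NUANCES = ["pp", "mp", "mf", "ff"]
TRAJECTORIES = ["crescendo", "diminuendo", "static"]

def deal_scores(musicians):
    """Assign each musician a (word, nuance, intensity trajectory) score."""
    return {m: (random.choice(WORDS), random.choice(NUANCES),
                random.choice(TRAJECTORIES)) for m in musicians}

print(deal_scores(["violin", "synth", "laptop"]))
```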
10:45
Coffee Break
11:15
The Djazz project: Jazz machines and Anthropology
Marc Chemillier (EHESS)
Djazz belongs to the OMax/ImproteK/Djazz family of improvisation software designed by IRCAM and EHESS. Its distinctive feature is that it is adapted to interaction with jazz and world musicians in real social contexts, and as such it is the subject of an anthropological survey dealing with rhythm and the way people synchronize themselves to music during particular social events (dance rituals, concerts). We’ll show how rhythm is handled in Djazz and give a demo with Malagasy guitarist Charles Kely Zana-Rotsy.
11:45
Smart Acoustic Instruments: From Early Research to HyVibe
Adrien Mamou-Mani (HyVibe)
Smart acoustic instruments are acoustic instruments with programmable sounds. I will present the research at the origin of this concept and examples of prototypes that have been used by artists. The emphasis will be on the HyVibe Guitar, designed to be the future of the electro-acoustic guitar, using digital technology and vibration control. Finally, I will share initial ideas on its potential use for improvisation.
12:15
Revolutionizing the Tradition: Extracting Human Expression using Motion Sensor for Music
Mari Kimura (UC Irvine)
Violinist and composer Mari Kimura will discuss creating performances and compositions that integrate interactive computing, and the motion sensor she has been developing. The lecture includes demonstrations and musical performances with her current prototype, “Mugic”; her work with students at “Future Music Lab”, the summer program she directs at the Atlantic Music Festival in collaboration with IRCAM; and her work in the classroom at her new ‘home’, the “Integrated Composition, Improvisation and Technology” (ICIT) program at the Music Department of the University of California, Irvine.
12:45 Lunch Break
Workshop, Dec. 11th Afternoon
U. Penn, Lerner Center, Rm 101, 201 S 34th St
Theme : Instruments, spaces, bodies
Session chair : Georges Bloch
15:30
MIGSI: The Minimally Invasive Gesture Sensing Interface for Trumpet
Sarah Reid (CalArts)
Performer-composer-technologist Sarah Reid will introduce the Minimally Invasive Gesture Sensing Interface (MIGSI) for trumpet. MIGSI uses sensor technology to capture gestural data such as valve displacement, hand tension, and instrument position, to offer extended control and expressivity to trumpet players. In addition to addressing technical and design-based considerations of MIGSI, this presentation will discuss various strategies for performing and composing with this new instrument, and will delve into a larger discussion on integrating new musical interfaces, micro-controllers, and electronic instruments into an improvisational practice.
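A hypothetical sketch of the kind of gesture-to-control mapping such an interface enables (the sensor names, raw ranges, and control targets below are invented for illustration, not MIGSI's actual ones):

```python
def normalize(value, lo, hi):
    """Clamp and scale a raw sensor reading to the 0..1 control range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

# Invented raw readings, e.g. from 10-bit ADCs and an orientation sensor.
reading = {"valve_1": 612, "hand_tension": 230, "tilt_deg": -12.0}

controls = {
    "filter_cutoff": normalize(reading["valve_1"], 0, 1023),
    "grain_density": normalize(reading["hand_tension"], 0, 1023),
    "pan": normalize(reading["tilt_deg"], -90.0, 90.0),
}
print(controls)
```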
16:00
Improvising with Ensemble Feedback Instruments and First-order Cybernetic Followers
Matt Wright (CCRMA, Stanford University)
Ensemble Feedback Instruments (first presented at NIME 2015) show that even extremely simple individual musical "instruments", if they each have an audio input as well as an audio output, can give rise to rich and varied musical behaviors when patched together in various feedback topologies. As long as there is just a little system delay, on the order of trained musicians' reaction times, even unstable "exploding" systems can safely generate glorious crescendos when one or more careful listeners have low-latency gain controls in the loop allowing them to attenuate when things begin to get out of control. Taking this a step further, we can automate this "turn down when it gets too loud" behavior; we reframe the behavior of a side-chained compressor-limiter in terms of a first-order cybernetic feedback system like a thermostat. Each sound element inside the feedback network can have a "goal" volume (analogous to a thermostat's temperature setting) and limited (i.e., time-slewed) ability to adjust its own output volume (analogous to turning the heater on or off) according to the volume it detects in its input or output signals and/or the ensemble's overall level (analogous to the thermometer). Again the simplest of elements can give rise to rich behaviors when placed in proper recursive context. Automating this guarantee that the potentially unstable ensemble feedback network will never blow up allows a shift from a model where a human performer directly controls each individual instrument to a model where a human performer might indirectly control multiple instruments at the same time and/or improvise in real-time collaboration with goal-seeking, if not "intelligent," software agents.
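A minimal sketch of the thermostat-like gain behavior described above, assuming a simple proportional controller with a slew limit (the coefficients and the toy feedback loop are invented, not Wright's implementation):

```python
def update_gain(gain, detected_level, goal_level, max_step=0.05):
    """One control-rate step: raise gain if too quiet, lower it if too
    loud, never moving more than max_step per call (the time-slew limit)."""
    error = goal_level - detected_level
    step = max(-max_step, min(max_step, 0.5 * error))  # proportional, clipped
    return max(0.0, gain + step)

# A runaway feedback level being pulled back toward the goal level:
gain, level, goal = 1.0, 0.2, 0.5
for _ in range(20):
    level = min(2.0, level * 1.5 * gain)   # toy "exploding" feedback loop
    gain = update_gain(gain, level, goal)
    print(round(level, 3), round(gain, 3))
```

Run for enough steps, the system settles near the goal level, the homeostatic behavior the thermostat analogy describes.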
16:30
Ableton Live, Machine + Intelligence for Music Production and Performance
Adriano Clemente
Artist, sound designer, and technologist Adriano Clemente will discuss some of the key features of Ableton Live. Live is software used globally by artists, performers, producers, and DJs that offers a great variety of tools, flexible performance workflows, and strong integration with Max. Clemente will preview Ableton Live 10 and some of its features, including Capture, a tool to support spontaneity and preserve musicality when working with MIDI recordings.
17:00 End of Dec. 11 sessions
Workshop, Dec. 12th Morning
U. Penn, Fisher-Bennett Hall, 419 Rose Hall, 3340 Walnut St
Theme : Composition, Notation, Synthesis, Form
Session chair : Pierre Couprie
09:30
Keynote presentation by Bob Ostertag
10:15
Recent Trends in Pulse-Based Improvisation, Rhythm Cognition, and some Potential Applications for Interactive Design
Steve Lehman (CalArts)
Recent work in the domain of computer-driven improvisation has privileged modes of interactivity that eschew tempo-based hierarchies of musical time. This talk will seek to interrogate some of these recent tendencies, and provide a brief overview of the speaker's recent work integrating improvisation with contemporary research in the field of rhythmic cognition. Some potential applications to tempo-based musical models for human-computer interaction will also be discussed.
10:45
100 Strange Sounds: Practice on Electroacoustic Improvisation
Joo Won Park (Wayne State U.)
100 Strange Sounds consists of one hundred video recordings of solo improvisations using live electronics (http://www.100strangesounds.com). The purpose of this project was to improve the creator's technical and improvisational abilities while examining the documentation and promotional possibilities of an online video platform. The author will present the technical and aesthetic findings from completing the project by demonstrating the software and hardware setup, as well as sharing viewer data from YouTube and Google Analytics.
11:15
Coffee Break
11:45
Portable Gold and Philosophers' Stones (Deviant Resonances)
David Rosenboom (CalArts)
The “Philosopher’s Stone” is a mental symbol about the prima materia, the original substance and ultimate principle of the universe. It has been said that by returning from the qualities of sensation and thought, which we perceive through differentiation and specialization, to the undifferentiated purity of the prima materia, we might learn truths about creative power and the fundamental mutability of all phenomena. Combining this with the symbol, Portable Gold, was my way of emphasizing the timelessness and spacelessness of this idea, which we can carry with us anywhere. To manifest these symbols in music, I’ve made pieces that work with resonant coincidences detected among the physical brainwaves of performers and apply them inside the circuits of custom-built, live electronic music devices, to grow spontaneous musical forms. This version is realized with portable brainwave detectors, computer music software, and an auxiliary acoustic instrument. Portable Gold and Philosophers' Stones (1972 & 2015), Computer-electronics with BCMI (Brain-Computer Music Interface), auxiliary instrument, and two active imaginative listening brainwave performers (volunteers).
12:15
Improvising in Imaginary Spaces
Dmitri Tymoczko with Rudresh Mahanthappa
We will show how geometry can be used to construct new musical instruments based on the idea of a configuration space. We will provide some examples of these indigenous electronic instruments and explain how they might be used in performance.
12:45 Lunch Break
Workshop, Dec. 12th Afternoon
U. Penn, Fisher-Bennett Hall, 419 Rose Hall, 3340 Walnut St
Theme : Composition, Notation, Synthesis, Form
Session chair : Ken Deguernel
15:00
Drop for bassoon and electronics
Katarina Miljkovic and Chris Watford
The fabric of the electronic and instrumental parts of the composition is derived from the structure of a cellular automaton (CA). The resulting sound events are modular in nature and provide a field of possibilities for the performers. During the performance, both performers traverse the automaton by freely selecting sound modules while using the automaton data as time brackets. Live sound processing happens through plugins. As a result, the CA generates the structure and dictates the overall process, but also embraces the indeterminacy coming from a human response to the deterministic nature of the automaton and an attempt to communicate with it. The piece is 12' long (there is also a 24' version). The sonic material is based on a field recording of a lament song from the Balkans; the field recording is filtered through the CA to create sound modules, which are treated as described above.
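As an illustration of the time-bracket idea, here is a toy elementary cellular automaton whose generations are read as brackets within which modules may be freely chosen (an invented example, not the composers' actual automaton or mapping):

```python
def ca_step(cells, rule=110):
    """One generation of an elementary CA with wraparound neighbors."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 16
cells[8] = 1                      # single live cell as the seed
t = 0.0
for gen in range(8):
    dur = 2 + sum(cells)          # bracket length derived from CA density
    active = [i for i, c in enumerate(cells) if c]
    print(f"{t:5.1f}-{t + dur:5.1f}s  choose freely among modules {active}")
    t += dur
    cells = ca_step(cells)
```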
Talkback: Human and Computer Improvisation through a Live-Trained Machine Learning Model
Flannery Cunningham (UPenn) CANCELLED DUE TO ILLNESS
Talkback for instrument and computer consists of an ongoing process of improvisation and musical evolution shared between a human player and a computer. The piece uses Max/MSP with Rebecca Fiebrink’s open source Wekinator machine learning software; however, contrary to the usual practice of training a machine learning system during composition or rehearsal, in Talkback the machine learning system is trained live during the performance. An initial semi-random musical “seed” is used as an opening output by the computer. The player improvises in response to this, and the piece evolves through an alternation of “training” and “running” the computer’s learned model. The non-human half of the partnership is also endowed with creative agency, as an “activity meter” allows the computer system to decide when it will freeze on current material (allowing the human player to layer new musical material on top of an existing texture) and when it will generate a new seed (as when an improviser decides that the music has become too static and introduces a new idea). In this workshop, composer Flannery Cunningham will perform a version of Talkback for hammered dulcimer and laptop, introduce the technology and process of creating the work, and invite participants to experiment with the piece’s structure with their own instruments or voice.
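A rough sketch of the "activity meter" decision described above (an invented interpretation of the logic; the piece itself runs in Max/MSP with Wekinator, and the thresholds here are assumptions):

```python
import random

def decide(activity_history, low=0.2, high=0.8):
    """Freeze when the player is very active (leaving room to layer on
    top), reseed when the music has gone static, otherwise keep running
    the learned model."""
    avg = sum(activity_history) / len(activity_history)
    if avg > high:
        return "freeze"
    if avg < low:
        return "new_seed"
    return "run_model"

history = [random.random() for _ in range(16)]  # stand-in activity readings
print(decide(history))
```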
16:00
Break, Venue Change
Events at ICA, Dec. 12th
Institute of Contemporary Art 118 S. 36th Street
Round table, presentations, performances
16:30
Round Table : David Wessel Legacy
"The Figure of the Researcher / Musician" with Roscoe Mitchell, Matt Wright, Marc Chemillier, George Bloch, Gérard Assayag
17:45
Presentations/performances
Coughing is a Form of Love
Joseph Pfender & Melanie Farley
This performance was born out of a shared interest in creating new expressive resources at the seam of the human and the algorithmic. The vocal experiment embodies the vulnerability of vocal expression, and attempts to disrupt that vulnerability by introducing chance robotic interventions, creating a kind of cyborg poetry. Drawing on the energy of groups like Pere Ubu and Patti Smith, the poetry also takes substantive inspiration from Yoko Ono's instruction poems. The patch itself responds to historical avant-garde methods of recursive feedback, using sound synthesis and convolution procedures that have a topological affinity to David Tudor's Toneburst. Taking a cue from earlier forays into instrumental belligerence and productive obstruction at NIME, we attempt to work into vocalic expressivity a logic of resistance and effervescence.
Generative patches for Modular and Semi-Modular Synthesizers
Sandy James (Temple U.)
Analog modular and semi-modular synthesizers can be useful tools for creating generative music. A sample-and-hold circuit fed with noise, or combined with a slew limiter and a VCO (voltage-controlled oscillator), is at the heart of most random voltage sources. The MakeNoise WoggleBug is a Eurorack module that includes three control-voltage outputs: stepped, smooth, and woggle. The MakeNoise 0-Coast semi-modular synthesizer includes a scaled-back version of the WoggleBug with only a stepped random output. The Moog Mother-32’s assignable output can be programmed as a stepped random source. Output from all three random sources can be used to generate streams of stochastic pitches, changes in loudness, duration, modulation depth, and any other controllable parameter. This talk will demonstrate several generative patches created with the MakeNoise and Moog synthesizers.
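A digital approximation of this random-voltage recipe can make the signal flow concrete (voltage ranges, clock rates, and slew rate below are invented, and the code only loosely mimics the hardware):

```python
import random

def sample_and_hold(clock_ticks):
    """New random voltage on each clock tick, held until the next one."""
    return [random.uniform(0.0, 5.0) for _ in range(clock_ticks)]

def slew_limit(steps, rate=0.5, substeps=8):
    """Approach each held voltage gradually, limiting volts per substep."""
    out, v = [], 0.0
    for target in steps:
        for _ in range(substeps):
            delta = max(-rate, min(rate, target - v))
            v += delta
            out.append(v)
    return out

stepped = sample_and_hold(4)    # stepped random CV (0-Coast style)
smooth = slew_limit(stepped)    # smoothed CV (WoggleBug "smooth" style)
print([round(x, 2) for x in stepped])
print([round(x, 2) for x in smooth[:12]])
```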
Density Function
Adam Vidiksis + BEEP (Temple U.)
Density Function is a work for iPads and spatialization choreography. The work plays on the psychoacoustic effect generated by how we use timbre to help localize sounds. Forming and reforming to create clusters of bodies and notes, the individual players act at times as individual sound sources, and other times as partials of a larger timbral event. This work was workshopped and co-composed by the student members of the Fall 2017 Boyer Electroacoustic Ensemble Project: Alyssa Almeida, Megan Burke, Joshua Carey, Simeon Church, Stefano G. Daddi, Bailey Fatool, Sean Gallagher, Daniel Gilbert, Fede ZyMoon Gillespie-Anderson, Kay Gross, Austin Johnson, Zachary Kane, Eric A. Keefer, Jon Mayse, RJ McGhee, Dan Moser, and Anthony Passaro.
18:30 - 20:30 Reception for Improtech participants
Workshop, Dec. 13th Morning
U. Penn, Fisher-Bennett Hall, 419 Rose Hall, 3340 Walnut St
Theme : (Artificial) Creative Intelligences
Session chair : Matt Wright
09:30
Keynote presentation by David Rosenboom (CalArts)
Deviant Resonances — Listening to Evolution
What happens when two forms of musical intelligence—either having emerged naturally from cosmological dynamics or been volitionally constructed by purposeful beings—attempt to initiate improvised co-communication with each other, while neither possesses an a priori model describing the range and scope of manners in which either intelligence or music can be manifested? Will they even recognize each other? What predictive models can they use to search for something for which neither has a clear pre-definition? This is both a challenging and inspiring space to explore.
10:15
Matmos, improvisation and technologies from 1997 to the present
Matmos
Matmos is M.C. Schmidt and Drew Daniel, aided and abetted by many others. Currently based in Baltimore, the duo formed in San Francisco in the mid 1990s. Marrying the conceptual tactics and noisy textures of object-based musique concrète to a rhythmic matrix rooted in electronic pop music, the two are known for their highly unusual sound sources: amplified crayfish nerve tissue, the pages of bibles turning, liposuction surgery, rat cages, a cow uterus, snails, cigarettes, laser eye surgery, latex fetish clothing, life support systems, a five gallon bucket of oatmeal, and a washing machine. Matmos’ work presents a model of electronic composition as a relational network that connects sources and outcomes together; information about the process of creation activates the listening experience, providing the listener with entry points into sometimes densely allusive, baroque recordings. Matmos will talk about their practice and its relationship with improvisation and technologies from 1997 to the present.
10:45
Coffee Break
11:15
NYU Music Technology Group presentation
The NYU Music Technology and Composition Programs
Tae Hong Park
Introduction to Citygram
Tae Hong Park
This talk presents Citygram, a "plug-and-sense" urban soundscape sensor network system, and its application in the context of soundscapes as interfaces for data-driven musical possibilities.
Group Listening
Leo Chang
Group Listening is a process I am developing, inspired by Pauline Oliveros’s Deep Listening. Group Listening aims to facilitate listening, group improvising, (co-)composing, and practicing music via a set of written guidelines. The process starts with a group of players listening to some sort of audio source: this can be spontaneous or precomposed, environmental or fixed media. The players then interpret the audio source and engage in a structured, group improvisation on their instrument(s) of choice. Afterwards, they engage in a structured feedback session, which allows them to iteratively shape the composition they are (re)constructing, until the entire group is satisfied.
Computer-Systems for Group Improvisation
Mohamed Kubbara
A brief review of some of the classic challenges and choices faced in large-ensemble free improvisation, and a demonstration of two computer-system approaches to addressing them.
k⋅fn[s]+"a"
Oliver Hickman
A brief description of k⋅fn[s]+"a" (2017), a piece written for harmonica and electronics, directly inspired by the simple and minimalistic qualities of the harmonica, a compact, diatonic instrument that idiomatically becomes part of the performer’s hand and disappears when onstage. This is accompanied by WOSC, a motion-capture app for the Apple Watch, developed for controlling signal-processing modules in an improvised harmonica + electronics setting.
DITLOrk – Do-It-Together Laptop Orchestra
Erich Barganier
The concept of a laptop orchestra has become entrenched in academia and is not necessarily accessible to those with limited technological abilities. This talk deals with how we can democratize laptop orchestras, offering strategies for organizing any group of people with access to a laptop and SuperCollider into a laptop orchestra, and strategies for connecting participants remotely. The talk will also address the laptop orchestra as an effective pedagogical tool for STEM projects in hackerspaces and makerspaces.
12:45 Lunch Break
Workshop, Dec. 13th Afternoon
U. Penn, Fisher-Bennett Hall, 419 Rose Hall, 3340 Walnut St
Theme : (Artificial) Creative Intelligences
Session chair : Michael Young
14:00
Rensselaer Polytechnic Institute Research, group presentation & tribute performance to Pauline Oliveros using her EIS and CAIRA systems
Reembodied Sound and Algorithmic Environments for Improvisation
Matthew Goodheart (Rensselaer Poly. Inst.)
Reembodied Sound and Algorithmic Environments for Improvisation uses transducer-driven resonant objects to create acoustic realizations of sample- and analysis-derived mixed synthesis. This talk will focus on the use of reembodied sound as a generative basis for creating large-scale, algorithmically driven sonic environments for improvisers, discussing both technical implementation and aesthetic orientation. Directions for future research involving digital listening agents and interactivity will also be addressed.
Intelligent Music Agents capable of joint intuitive and rational thinking
Jonas Braasch (RPI)
This talk describes an intelligent music system approach that utilizes a joint bottom-up/top-down structure. The bottom-up structure is purely signal driven and calculates pitch, loudness, and information rate among other parameters using auditory models that simulate the functions of different parts of the brain. The top-down structure builds on a logic-based reasoning system and an ontology that was developed to reflect rules in jazz practice. Two instances of the agent have been developed to perform traditional and free jazz, and it is shown that the same general structure can be used to improvise different styles of jazz.
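A schematic sketch of this joint bottom-up/top-down structure, with crude stand-ins for the auditory models and the rule-based reasoning (the feature proxies, thresholds, and playing decisions are invented illustrations, not Braasch's system):

```python
def bottom_up(frame):
    """Stand-in for auditory-model analysis: crude loudness and a
    zero-crossing count as a rough pitch proxy."""
    loudness = sum(abs(x) for x in frame) / len(frame)
    crossings = sum(1 for p, q in zip(frame, frame[1:]) if p < 0 <= q)
    return {"loudness": loudness, "zero_crossings": crossings}

def top_down(features, style="free"):
    """Stand-in for the ontology/rule layer: map features to a decision."""
    if style == "traditional" and features["loudness"] < 0.1:
        return "state the theme"
    if features["loudness"] > 0.5:
        return "lay out"            # leave space when the ensemble is loud
    return "respond with contrasting material"

frame = [0.1, -0.2, 0.3, -0.1, 0.05, -0.4, 0.2, -0.05]  # toy audio frame
print(top_down(bottom_up(frame), style="free"))
```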
Mediation between Musicians and Code with Neural Networks
Jeremy Stewart (RPI)
In this talk, we will discuss the design and implementation of a neural network system for performance between acoustic musicians and live coding performers. Starting with simple classification systems and experimenting with data for training deep neural networks, while also considering novel integrations into existing performance systems, we will outline our current work and discuss potential steps forward.
Live Scoring for Computer Assisted Composition
Justin Yang (RPI)
This talk explores the use of networked computer animation as a front end for composition and scoring for live performers. An assortment of computer-animated graphic tools can be used to develop interactions between algorithms, AI, and live performers. These tools help open the door for possibilities such as real-time scoring, structured improvisation, multi-nodal composition, real-time orchestration, and performer-computer interactions.
15:30
Coffee Break
16:00
The DYCI2 project: Creative Dynamics of Improvised Interaction
The collaborative research and development project DYCI2, led by Ircam, Inria Nancy, and La Rochelle University (http://repmus.ircam.fr/dyci2/home), focuses on conceiving, adapting, and bringing into play efficient models of artificial listening, learning, interaction, and generation of musical contents. It aims at developing creative and autonomous digital musical agents able to take part in various human projects in an interactive and artistically credible way. The areas concerned are live performance, production, pedagogy, and active listening.
Performance: human-computer co-improvisation
Rémi Fox (Saxophone), Jérôme Nika (Ircam)
Merging free, reactive, and scenario-based features in human-computer co-improvisation.
Jérôme Nika (Ircam)
This presentation will give an overview of the project focusing on the design of multi-agent architectures and models of knowledge and decision (OMax, SoMax, ImproteK, DYCI2) in order to explore scenarios of music co-improvisation involving human and digital agents. The objective is to merge the usually exclusive "free", "reactive", and "scenario-based" paradigms in interactive music generation to adapt to a wide range of musical contexts involving hybrid temporality and multimodal interactions. The DYCI2 project is led in close and continuous interaction with expert musicians, and these interactions are an integral part of the iterative development of the models and of the software prototypes. Therefore, the presentation will be illustrated by material coming from past, present, and future associated artistic projects.
Multi-dimensional and Multi-scale learning of music structure for machine improvisation in the DYCI2 project
Ken Deguernel (Inria Nancy)
Current musical improvisation systems are able to generate unidimensional musical sequences by recombining their musical contents. However, considering several dimensions (melody, harmony, ...) and several temporal levels at once remains a difficult issue. We propose combining probabilistic approaches with formal language theory in order to better assess the complexity of a musical discourse, from both a multidimensional and a multi-level point of view, in the context of improvisation where the amount of data is limited. The proposed methods have been evaluated by professional musicians and improvisers during listening sessions.
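A toy example of why joint dimensions are hard (not the DYCI2 models themselves): a first-order model over (melody, harmony) pairs built from a short sequence leaves most joint contexts unseen, the data-sparsity problem the abstract alludes to. The note and chord names are invented.

```python
from collections import defaultdict
import random

# A very short corpus of (melody note, harmony) pairs.
sequence = [("C", "Cmaj"), ("E", "Cmaj"), ("G", "G7"),
            ("F", "G7"), ("E", "Cmaj")]

model = defaultdict(list)
for prev, nxt in zip(sequence, sequence[1:]):
    model[prev].append(nxt)          # first-order model over joint states

state = ("C", "Cmaj")
for _ in range(4):
    options = model.get(state)
    if not options:                  # unseen joint context: sparsity bites
        state = random.choice(sequence)
        continue
    state = random.choice(options)
    print(state)
```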
17:00
Body/Environment couplings through sound and light
Mina Zarfsaz
This talk/demo is about an interactive audio/video piece consisting of a system of sensors, speakers, and projectors that measures the impact of movement and human organization as it reconstructs the dismantled fragments of pieces of music with any given group of people. Like an orchestra of instruments, it is the body of the spectator that co-composes the rhythmic content by coordinating movements with others as they trigger the sensors. The “notes” in this project are struck at the interface of body and machine. While in the space, each person is either an active “ON” (within a sensor’s range) or a passive “OFF” (out of a sensor’s range). This piece forces one’s perceptual system to search the space for triggered sounds and lit surfaces; to track changes, estimate distances, and gauge one’s corporeal relationship with others. The piece never repeats itself exactly and has no beginning, middle, or end.
17:30 End of Dec. 13 sessions