DSpace Repository

AI and Music: From Composition to Expressive Performance

dc.contributor Ministerio de Ciencia y Tecnología (España)
dc.creator López de Mantaras, Ramón
dc.creator Arcos, Josep Ll.
dc.date 2008-02-20T09:16:23Z
dc.date 2002
dc.date.accessioned 2017-01-31T01:00:16Z
dc.date.available 2017-01-31T01:00:16Z
dc.identifier AI magazine, 2002, 23 (3): 43-57
dc.identifier 0738-4602
dc.identifier http://hdl.handle.net/10261/3001
dc.identifier.uri http://dspace.mediu.edu.my:8181/xmlui/handle/10261/3001
dc.description In this paper we first survey the three major types of computer music systems based on AI techniques: compositional, improvisational, and performance systems. Representative examples of each type are briefly described. We then look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music, and one that has only recently been addressed. The main problem in modeling expressiveness is to grasp the performer's "touch", that is, the knowledge a performer applies when playing a score. Humans acquire this knowledge through a long process of observation and imitation. For this reason, previous approaches, based on musical rules that try to capture interpretation knowledge, had serious limitations. An alternative approach, much closer to the observation-imitation process observed in humans, is to directly use the interpretation knowledge implicit in examples extracted from recordings of human performers, rather than trying to make that knowledge explicit. In the last part of the paper we report on SaxEx, a performance system based on this alternative approach, capable of generating high-quality expressive solo performances of jazz ballads from examples of human performers within a case-based reasoning system.
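The "observation-imitation" approach the abstract describes follows the retrieve-and-reuse cycle of case-based reasoning: a new, inexpressive note borrows the expressive deviations of the most similar recorded example. A minimal Python sketch of that cycle, with an entirely hypothetical feature encoding (SaxEx's actual case representation is far richer than this toy vector):

```python
import math

# A "case" pairs a note's musical context (a toy feature vector here)
# with the expressive deviations a human performer applied to it.
# Features and deviation values are illustrative, not taken from SaxEx.
cases = [
    # ([pitch, duration_beats, metrical_strength], deviations)
    ([60, 1.0, 1.0], {"tempo": 0.95, "dynamics": 1.10}),
    ([62, 0.5, 0.5], {"tempo": 1.05, "dynamics": 0.90}),
    ([64, 2.0, 1.0], {"tempo": 0.90, "dynamics": 1.20}),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(problem):
    """RETRIEVE: find the stored case whose context is closest."""
    return min(cases, key=lambda case: distance(case[0], problem))

def reuse(problem):
    """REUSE: apply the retrieved case's deviations to the new note."""
    _, deviations = retrieve(problem)
    return deviations

# The new note [61, 1.0, 1.0] is closest to the first case,
# so it inherits that performer's tempo and dynamics deviations.
print(reuse([61, 1.0, 1.0]))  # → {'tempo': 0.95, 'dynamics': 1.1}
```

The point of the sketch is the design choice the abstract argues for: the system never states an explicit rule like "slow down at phrase endings"; that knowledge stays implicit in the stored examples.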
dc.description "AI and Music..." is partially supported by the Spanish Ministry of Science and Technology under project TIC 2000-1094-C02, "TABASCO: Content-Based Audio Transformation Using CBR".
dc.description Peer reviewed
dc.format 761695 bytes
dc.format application/pdf
dc.language eng
dc.publisher AAAI Press
dc.rights openAccess
dc.subject Artificial Intelligence
dc.subject Case-Based Reasoning
dc.title AI and Music: From Composition to Expressive Performance
dc.type Article


