

Information Retrieval for Music and Motion

Meinard Müller

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Database Management; Theory of Computation; Computer Applications; Information Storage and Retrieval; Multimedia Information Systems; Computer Graphics

Availability

Detected institution: none
Year of publication: 2007
Available via: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74047-6

Electronic ISBN

978-3-540-74048-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer Berlin Heidelberg 2007

Table of contents

Relational Features and Adaptive Segmentation

Even though there is a rapidly growing corpus of motion capture data, there is still a lack of efficient motion retrieval systems that allow users to identify and extract user-specified motions. Previous retrieval systems often require manually generated textual annotations, which roughly describe the motions in words. Since the manual generation of reliable and descriptive labels is infeasible for large datasets, one needs efficient content-based retrieval methods that access only the raw data itself. In this context, the query-by-example (QBE) paradigm has attracted a large amount of attention: given a query in the form of a motion fragment, the task is to automatically retrieve all motion clips from the database containing parts or aspects similar to the query. The crucial point in such an approach is the notion of similarity used to compare the query with the database motions. For the motion scenario, two motions may be regarded as similar if they represent variations of the same action or sequence of actions. These variations may concern the spatial as well as the temporal domain. For example, the two jumps shown in Fig. 11.1 describe the same kind of motion, even though they differ considerably with respect to timing, intensity, and execution style (note, e.g., the arm swing). Similarly, the kicks shown in Fig. 1.1 describe the same kind of motion, even though they differ considerably with respect to direction and height of the kick. In other words, semantically similar motions need not be numerically similar, as is also pointed out in [107].

Part II - Analysis and Retrieval Techniques for Motion Data | Pp. 227-240
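The relational features mentioned in the abstract can be illustrated with a minimal sketch. The geometric test below (whether a joint lies in front of an oriented plane spanned by three other joints) is only one hypothetical example of such a boolean feature; the function name and the joint combination are assumptions for illustration, not the book's actual feature set.

```python
import numpy as np

def relational_feature(p1, p2, p3, p4):
    """Boolean relational feature: does joint position p4 lie in front
    of the oriented plane spanned by joint positions p1, p2, p3?
    (Hypothetical example of a relational boolean feature.)"""
    normal = np.cross(p2 - p1, p3 - p1)   # plane normal (right-hand rule)
    return bool(np.dot(normal, p4 - p1) > 0)

# Evaluating a bank of such tests per frame yields the coarse binary
# feature vectors that are robust to spatial variations in execution.
```

Because the feature only encodes a qualitative geometric relation, two numerically different performances of the same action can still produce identical binary values, which is exactly the kind of invariance the abstract argues for.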

Index-Based Motion Retrieval

In Chap. 11, we gave an answer to the first question by introducing the concept of feature sequences, which represent motion capture data streams as coarse sequences of binary vectors. In Sect. 12.1, we will formally introduce the concepts of exact hits, fuzzy hits, and adaptive fuzzy hits. We then describe how one can compute such hits using an inverted file index. The proposed indexing and matching techniques can be put to use in a variety of query modes. Here, the possibilities range from isolated pose-based queries up to query-by-example (QBE), where the user supplies the system with a short query motion clip. In Sect. 12.2, we present a flexible and efficient QBE-based motion retrieval system and report on experimental results. Furthermore, we show how our relational approach to motion comparison can be used as a general tool for efficient motion preprocessing (Sect. 12.3). Finally, we discuss some problems and limitations of the presented index-based techniques and close with a discussion of related work (Sect. 12.4).

Keywords: Inverted List; Motion Capture Data; Motion Clip; Scene Description; Mocap Data.

Part II - Analysis and Retrieval Techniques for Motion Data | Pp. 241-264
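A minimal sketch of the inverted-file idea for exact hits, assuming motions are stored as sequences of binary feature vectors (represented here as tuples of 0/1): each vector maps to the set of (motion, position) pairs where it occurs, and an exact hit for a query sequence is found by intersecting suitably shifted inverted lists. The data layout and function names are illustrative, not the book's implementation.

```python
from collections import defaultdict

def build_index(db):
    """db maps motion IDs to sequences of binary feature vectors
    (tuples of 0/1); the inverted index maps each vector to the set
    of (motion_id, position) pairs where it occurs."""
    index = defaultdict(set)
    for mid, seq in db.items():
        for pos, vec in enumerate(seq):
            index[vec].add((mid, pos))
    return index

def exact_hits(index, query):
    """Start positions where the full query sequence occurs
    consecutively, found by intersecting shifted inverted lists."""
    hits = set(index.get(query[0], set()))
    for k in range(1, len(query)):
        shifted = {(mid, pos - k) for mid, pos in index.get(query[k], set())}
        hits &= shifted
    return hits
```

Because only the inverted lists of the few feature vectors in the query are touched, retrieval cost depends on the query rather than on the size of the whole database, which is the point of the index-based approach.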

Motion Templates

In this chapter, we introduce a method for capturing the spatio-temporal characteristics of an entire motion class of semantically related motions in a compact and explicit matrix representation called a motion template (MT). Motion templates, which can be regarded as generalized boolean feature matrices, are formally introduced in Sect. 13.1. Employing an iterative warping and averaging strategy, we then describe an efficient algorithm that automatically derives a motion template from a class of training motions. We summarize the main ideas of this algorithm in Sect. 13.2 before giving the technical details in Sect. 13.3. In Sect. 13.4, we report on our experiments on template learning and discuss a number of illustrative examples to demonstrate the descriptive power of motion templates. Finally, in Sect. 13.5, we close with some general remarks on the multiple alignment problem underlying our learning procedure. In this and the following chapter, we closely follow Müller and Röder [143]. An accompanying video is available at [144].

Keywords: Dynamic Time Warping; Feature Matrices; Colored Region; Motion Capture Data; Training Motion.

Part II - Analysis and Retrieval Techniques for Motion Data | Pp. 265-281
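The warping-and-averaging step can be sketched as follows: each training feature matrix is aligned to a reference motion via dynamic time warping, projected onto the reference timeline, and the warped matrices are averaged entry by entry into a real-valued template. This is a simplified single-pass version of the iterative procedure the abstract describes; the Manhattan frame distance and the externally chosen reference are assumptions.

```python
import numpy as np

def dtw_path(X, Y):
    """Classical DTW between feature matrices X (n x d) and Y (m x d)
    with a Manhattan frame distance; returns the optimal warping path
    as a list of (i, j) index pairs."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.abs(X[i - 1] - Y[j - 1]).sum()
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # backtrack from the end of both sequences to (0, 0)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def average_template(motions, ref):
    """Warp every training feature matrix onto the timeline of a chosen
    reference, then average the warped matrices entry by entry."""
    warped = []
    for X in motions:
        acc = np.zeros_like(ref, dtype=float)
        cnt = np.zeros(len(ref))
        for i, j in dtw_path(X, ref):
            acc[j] += X[i]
            cnt[j] += 1
        warped.append(acc / cnt[:, None])  # DTW paths cover every ref frame
    return np.mean(warped, axis=0)
```

Entries of the resulting matrix that stay close to 0 or 1 mark aspects that are consistent across all training motions, while intermediate values mark the class variations.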

MT-Based Motion Annotation and Retrieval

Given a class of semantically related motions, we have derived a class motion template that captures the consistent as well as the inconsistent aspects of all motions in the class. The application of MTs to automatic motion annotation and retrieval, which is the content of this chapter, is based on the following interpretation: the consistent aspects of a class MT represent the class characteristics that are shared by all motions, whereas the inconsistent aspects represent the class variations that are due to different realizations. The key idea in designing a distance measure for comparing a class MT with unknown motion data is to mask out the inconsistent aspects – a kind of class-dependent adaptive feature selection – so that related motions can be identified even in the presence of significant spatio-temporal variations. In Sect. 14.1, we define such a distance measure, which is based on a subsequence variant of DTW. Our concepts of MT-based annotation and retrieval are then described in Sect. 14.2 and Sect. 14.3, respectively, where we also report on our extensive experiments [143, 144]. To substantially speed up the annotation and retrieval process, we introduce an index-based (the index being independent of the class MTs) preprocessing step to cut down the set of candidate motions by using suitable keyframes (Sect. 14.4). In Sect. 14.5, we compare MT-based matching to several baseline methods (based on numerical features) as well as to adaptive fuzzy querying. Finally, related work and future research directions are discussed in Sect. 14.6.

Keywords: Quality Threshold; Motion Class; Candidate Motion; Motion Clip; Retrieval Quality.

Part II - Analysis and Retrieval Techniques for Motion Data | Pp. 283-296
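The masked subsequence-DTW comparison can be sketched as follows: only template entries close to 0 or 1 (the consistent aspects) contribute to the local cost, and initializing the first DTW row to zero lets the template match a subsequence starting at any frame of the database motion. The masking threshold `tol` and the mean-based local cost are assumptions for illustration, not the book's exact definitions.

```python
import numpy as np

def masked_dist(t_col, x_col, tol=0.1):
    """Local cost between a template column and a motion frame.
    Only 'consistent' template entries (within tol of 0 or 1)
    contribute; the rest are masked out.  tol is an assumed value."""
    consistent = (t_col <= tol) | (t_col >= 1.0 - tol)
    if not consistent.any():
        return 0.0  # fully inconsistent column: matches anything for free
    return float(np.abs(t_col[consistent] - x_col[consistent]).mean())

def subsequence_dtw(T, X):
    """Minimal masked DTW distance of template T (n x d) to any
    subsequence of database motion X (m x d); setting the first row
    of the accumulated cost matrix to zero allows a match to start
    at any frame of X."""
    n, m = len(T), len(X)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = masked_dist(T[i - 1], X[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return float(D[n, 1:].min())
```

Because masked-out columns cost nothing, a motion that differs from the template only in its inconsistent aspects still achieves a low distance, which is the class-dependent adaptive feature selection the abstract describes.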