Automated Metadata in Multimedia Information Systems

Michael Christel
ISBN: 9781598297713 | PDF ISBN: 9781598297720
Copyright © 2009 | 74 Pages | Publication Date: 01/01/2009

DOI: 10.2200/S00167ED1V01Y200812ICR002

Ordering Options: Paperback $30.00 | E-book $24.00 | Paperback & E-book Combo $37.50


Improvements in network bandwidth along with dramatic drops in digital storage and processing costs have resulted in the explosive growth of multimedia (combinations of text, image, audio, and video) resources on the Internet and in digital repositories. A suite of computer technologies delivering speech, image, and natural language understanding can automatically derive descriptive metadata for such resources. The tremendous volume and varying quality of that automated metadata, however, create difficulties for end users of multimedia information systems. This lecture surveys automatic metadata creation methods for multimedia information resources, using broadcast news, documentaries, and oral histories as examples. Strategies for improving the utility of such metadata are discussed, including computationally intensive approaches, leveraging multimodal redundancy, folding in context, and leaving precision-recall tradeoffs under user control. Interfaces built from automatically generated metadata are presented, illustrating the use of video surrogates in multimedia information systems. Traditional information retrieval evaluation is discussed through the annual National Institute of Standards and Technology TRECVID forum, and experiments on exploratory search extend the discussion beyond fact-finding to the broader, longer-term search activities of learning, analysis, synthesis, and discovery.

Table of Contents

Evolution of Multimedia Information Systems: 1990-2008
Survey of Automatic Metadata Creation Methods
Refinement of Automatic Metadata
Multimedia Surrogates
End-User Utility for Metadata and Surrogates: Effectiveness, Efficiency, and Satisfaction

About the Author

Michael Christel, Carnegie Mellon University
Michael G. Christel has worked at Carnegie Mellon University (CMU), Pittsburgh, PA, since 1987, first with the Software Engineering Institute and, since 1997, as a senior systems scientist in the School of Computer Science. In September 2008, he accepted a position as research professor in CMU's Entertainment Technology Center (ETC). He is a founding member of the Informedia research team at CMU, which designs, deploys, and evaluates video analysis and retrieval systems for use in education, health care, humanities research, and situation analysis. His research interests focus on the convergence of multimedia processing, information visualization, and digital library research. He has published more than 50 conference and journal papers in related areas, serves on the program committees of various IEEE-CS and ACM multimedia and digital library conferences, and is an associate editor for IEEE Transactions on Multimedia. He has worked with digital video since 1987 and received his PhD from the Georgia Institute of Technology, Atlanta, GA, in 1991, with a thesis examining digital video interfaces for software engineering training. He received his bachelor's degree in mathematics and computer science from Canisius College, Buffalo, NY, in 1983. At the ETC, Christel hopes to broaden his research focus from multimedia for information search and retrieval to multimedia for information engagement and edutainment, with users being both producers and consumers of multimedia content.
