Advances in Visual Information Management: Visual Database Systems. IFIP TC2 WG2.6 Fifth Working Conference on Visual Database Systems, May 10-12, 2000 (2000 Edition)
Contributor(s): Arisawa, Hiroshi (Editor), Catarci, Tiziana (Editor)
ISBN: 0792378350     ISBN-13: 9780792378358
Publisher: Springer
OUR PRICE:   $208.99  
Product Type: Hardcover - Other Formats
Published: April 2000
Annotation: This state-of-the-art book explores new concepts, tools, and techniques for both visual interfaces to database systems and management of visual data. It provides intensive discussion of original research contributions and practical system design, implementation, and evaluation. The following topics are covered in detail: video retrieval; information visualization; modeling and recognition; image similarity retrieval and clustering; spatio-temporal databases; visual querying; visual user interfaces. The book also includes invited lectures by recognized leaders in the fields of user interfaces and multimedia database systems. These are 'hot' topics within the main themes of the book and are intended to lay the seeds for fruitful discussions on the future development of visual information management. The book comprises the proceedings of the Fifth Working Conference on Visual Database Systems (VDB5), held in Fukuoka, Japan, in May 2000, and sponsored by the International Federation for Information Processing. Advances in Visual Information Management will be essential reading for computer scientists and engineers, database designers and practitioners, and researchers working in human-computer communication.
Additional Information
BISAC Categories:
- Medical
- Computers | Computer Graphics
Dewey: 006.42
LCCN: 00035707
Series: IFIP Advances in Information and Communication Technology
Physical Information: 0.94" H x 6.14" W x 9.21" (1.70 lbs) 410 pages
 
Descriptions, Reviews, Etc.
Publisher Description:
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition, and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving. Generally, such meaningful intervals span several consecutive shots. There is hardly any efficient and reliable technique, either automatic or manual, for identifying all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour-intensive and inadequate.
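To make the idea of automatic shot-boundary detection concrete, the following is a minimal sketch of one common baseline technique: thresholding the difference between colour/intensity histograms of consecutive frames. This is not the method of any of the cited works; the function name, bin count, and threshold are illustrative assumptions, and the frames here are synthetic NumPy arrays standing in for decoded video frames.

```python
import numpy as np

def shot_boundaries(frames, threshold=0.5):
    """Return indices of frames that likely start a new shot.

    A boundary is declared when the L1 distance between the
    normalized intensity histograms of consecutive frames
    exceeds `threshold` (an illustrative, hand-picked value).
    """
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        # 16-bin intensity histogram, normalized to sum to 1
        hist, _ = np.histogram(frame, bins=16, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)
        prev_hist = hist
    return boundaries

# Synthetic example: two "shots" with clearly different brightness.
rng = np.random.default_rng(0)
shot_a = [rng.integers(0, 64, (32, 32)) for _ in range(5)]    # dark frames
shot_b = [rng.integers(192, 256, (32, 32)) for _ in range(5)]  # bright frames
print(shot_boundaries(shot_a + shot_b))  # boundary at frame 5
```

Real detectors are considerably more elaborate (handling gradual transitions such as fades and wipes, and working in compressed domains, as in Kobla et al., 1997), but the frame-difference-plus-threshold structure above is the core intuition.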