System and method for direct multi-modal annotation of objects
The system includes an image display system, a direct annotation creation module, an annotation display module, a vocabulary comparison module and a dynamic updating module. These modules are coupled together by a bus and provide for the direct multi-modal annotation of media objects. The direct annotation creation module creates annotation objects. The annotation display module works in cooperation with the image display system to display the annotations themselves or graphic representations of the annotations positioned relative to the images of the objects. The system automatically creates the annotation, associates it with the selected images, and displays either a graphic representation of the annotation or a text translation of the audio input.
1. Field of the Invention
The present invention relates to systems and methods for annotating objects. In particular, the present invention relates to a system and method for annotating images with audio input signals. The present invention also relates to a method for translating audio input and providing display feedback of annotations.
2. Description of the Background Art
With the proliferation of imaging, digital copying, and digital photography, there has been a dramatic increase in the number of images created. This proliferation of images has in turn created a need to track and organize such images. One way to organize images and make them more accessible is to provide annotations to the images, adding information that can put the images in context. Such information makes the images more accessible because the annotations make the images searchable. However, existing methods for annotating images are cumbersome to use and typically very limited.
Even for traditional photographic film, the value in annotating images is well known. For example, there are a variety of cameras that annotate photographic images by adding the time and date when the photograph is taken to the actual image recorded on film. However, as noted above, such methods are severely limited and allow little more than the date and time as annotations. In some instances, simple symbols or limited alphanumeric characters are also permitted. Another problem with such annotations is that a portion of the original image where the annotation is positioned is destroyed. Thus, such existing annotation systems and methods are inadequate for the new proliferation of digital images.
There have been attempts in the prior art to provide for annotation of images, but they continue to be cumbersome and difficult to use. One such method, which allows for the text annotation of images, is described in “Direct Annotation: A Drag-and-Drop Strategy for Labeling Photos” by Ben Shneiderman and Hyunmo Kang, Institute for Advanced Computer Studies & Institute for Systems Research, University of Maryland. It requires a computer upon which to display the images and a fixed mode for performing annotation. The annotations are entered into the system. Then the images are displayed in an annotation mode along with permissible annotations. The user is able to use a mouse-type controller to drag and drop pre-existing annotations onto the displayed images. However, such existing systems do not provide for direct annotation without regard to the mode of operation of the system, and do not allow additional types of annotations, such as audio signals.
Therefore, what is needed is an easy-to-use system and method for annotating images that overcomes the limitations found in the prior art.
SUMMARY OF THE INVENTION
The present invention overcomes the deficiencies and limitations of the prior art by providing a system and method for direct, multi-modal annotation of objects. The system of the present invention includes an image display system, a direct annotation creation module, an annotation display module, a vocabulary comparison module and a dynamic updating module. These modules are coupled together by a bus and provide for the direct multi-modal annotation of media objects. The image display system is coupled to a media object cache and displays images of media objects. The direct annotation creation module creates annotations in response to user input and stores the annotations in memory. The annotation display module works in cooperation with the image display system to display the annotations themselves or graphic representations of the annotations positioned relative to the images of the objects. The vocabulary comparison module works in cooperation with the direct annotation creation module to receive audio input and present matches of annotations. Similarly, the dynamic updating module receives input annotations and updates an audio vocabulary to include a text annotation for a new audio input signal. The system of the present invention is particularly advantageous because it provides direct annotation of images. Once an image is displayed, the user need only select an image and speak to create an annotation. The system automatically creates the annotation, associates it with the selected images, and displays either a graphic representation of the annotation or a text translation of the audio input. The present invention may also present likely matches of text to the audio input and/or update an audio vocabulary in response to audio inputs that are not recognized.
The present invention also includes a number of novel methods including: a method for annotating objects with audio signals, a method for annotating images including vocabulary comparison, a method for annotating images including recording audio annotations, and a method for annotating and dynamically adding to a vocabulary.
The invention is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
A method and apparatus for direct, multi-modal annotation of objects is described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention. For example, the present invention is described with reference to the annotation of images using audio input. However, the present invention applies to any media objects, not just images, and the input could be other than audio input.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Moreover, the present invention claimed below operates on or works in conjunction with an information system. Such an information system as claimed may be an entire messaging system or only portions of such a system. For example, the present invention can operate with an information system that need only be a browser in the simplest sense to present and display media objects. The information system might alternately be the system described below with reference to
Control unit 150 may comprise an arithmetic logic unit, a microprocessor, a general purpose computer, a personal digital assistant or some other information appliance equipped to provide electronic display signals to display device 100. In one embodiment, control unit 150 comprises a general purpose computer having a graphical user interface, which may be generated by, for example, a program written in Java running on top of an operating system like WINDOWS® or UNIX® based operating systems. In one embodiment, one or more application programs executed by control unit 150 including, without limitation, word processing applications, electronic mail applications, spreadsheet applications, and web browser applications generate images. In one embodiment, the operating system and/or one or more application programs executed by control unit 150 provide “drag-and-drop” functionality where each image or object may be selected.
Still referring to
Processor 102 processes data signals and may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown in
Main memory 104 stores instructions and/or data that may be executed by processor 102. The instructions and/or data may comprise code for performing any and/or all of the techniques described herein. Main memory 104 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or some other memory device known in the art. The memory 104 is described in more detail below with reference to
Data storage device 107 stores data and instructions for processor 102 and comprises one or more devices including a hard disk drive, a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, a flash memory device, or some other mass storage device known in the art.
System bus 101 represents a shared bus for communicating information and data throughout control unit 150. System bus 101 may represent one or more buses including an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or some other bus known in the art to provide similar functionality. Additional components coupled to control unit 150 through system bus 101 include the display device 100, the keyboard 122, the cursor control device 123, the network controller 124 and the I/O device(s) 125.
Display device 100 represents any device equipped to display electronic images and data as described herein. Display device 100 may be, for example, a cathode ray tube (CRT), liquid crystal display (LCD), or any other similarly equipped display device, screen, or monitor. In one embodiment, display device 100 is equipped with a touch screen in which a touch-sensitive, transparent panel covers the screen of display device 100.
Keyboard 122 represents an alphanumeric input device coupled to control unit 150 to communicate information and command selections to processor 102.
Cursor control 123 represents a user input device equipped to communicate positional data as well as command selections to processor 102. Cursor control 123 may include a mouse, a trackball, a stylus, a pen, a touch screen, cursor direction keys, or other mechanisms to cause movement of a cursor.
Network controller 124 links control unit 150 to a network that may include multiple processing systems. The network of processing systems may comprise a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or any other interconnected data path across which multiple devices may communicate. The control unit 150 also has other conventional connections to other systems such as a network for distribution of files (media objects) using standard network protocols such as TCP/IP, HTTP, and SMTP, as will be understood by those skilled in the art.
One or more I/O devices 125 are coupled to the system bus 101. For example, the I/O device 125 may be an audio input/output device 125 equipped to receive audio input, such as via a microphone, and to transmit audio output via speakers. Audio input may be received through various devices including a microphone within I/O audio device 125 and network controller 124. Similarly, audio output may originate from various devices including processor 102 and network controller 124. In one embodiment, audio device 125 is a general-purpose audio add-in/expansion card designed for use within a general purpose computer system. Optionally, I/O audio device 125 may contain one or more analog-to-digital or digital-to-analog converters, and/or one or more digital signal processors to facilitate audio processing.
It should be apparent to one skilled in the art that control unit 150 may include more or less components than those shown in
Annotation Overview. While the present invention will now be described primarily in the context of audio annotation, those skilled in the art will recognize that the principles of the present invention are applicable to any type of annotation. In accordance with one embodiment, one can record a variable-length audio narration that may be used as an audio annotation for one or more images or objects displayed upon a display device 100. In one embodiment, by indicating a position on a display device 100 through clicking, pointing, or touching the display screen, creation of an annotation is initiated and audio recording begins. Audio recording may cease when the audio level drops below a predetermined threshold or may cease in response to specific user input. In one embodiment, for each additional positional stimulus received, a new annotation is generated and the previous annotation is completed.
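The silence-based stop condition described above can be sketched in a few lines. This is an illustrative sketch only; the function name, threshold value, and frame counts are assumptions for the example and are not part of the disclosed system.

```python
def find_recording_end(levels, threshold=0.05, silence_frames=3):
    """Return the frame index at which recording would stop, i.e. the
    start of the first run of `silence_frames` consecutive frames whose
    level is below `threshold`, or None if no such run occurs."""
    quiet = 0
    for i, level in enumerate(levels):
        if level < threshold:
            quiet += 1
            if quiet >= silence_frames:
                return i - silence_frames + 1  # first frame of the silent run
        else:
            quiet = 0
    return None

# Speech followed by sustained silence: recording stops at frame 4.
print(find_recording_end([0.6, 0.5, 0.4, 0.3, 0.01, 0.02, 0.01]))  # 4
```

A real implementation would apply the same check to live audio frames as they arrive, rather than to a completed list.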
The term “positional stimulus,” as referred to herein, represents an input that can simultaneously indicate an electronic location on the display screen and an image or object tracked by the control unit 150. Various input sources may generate a positional stimulus including, without limitation, a computer mouse, a trackball, a stylus or pen 122, and cursor control keys 123. Similarly, a touch screen is capable of both generating and detecting a positional stimulus. In one embodiment, positional stimuli are detected by control unit 150, whereas in another embodiment, positional stimuli are detected by display device 100.
In an exemplary embodiment, once a positional stimulus occurs, such as a “click” of a mouse or a “touch” on a touch screen, an annotation object is generated. The system 103 then receives input or data that comprise the annotation. The input may be a verbal utterance by the user that is converted to an audio signal and recorded. Audio signals may be recorded by control unit 150 through I/O audio device 125 or similar audio hardware (or software), and the audio signal may be stored within data storage device 107 or a similarly equipped audio storage device. In one embodiment, control unit 150 initiates audio recording in response to detecting a positional stimulus, whereas in an alternate embodiment, control unit 150 automatically initiates audio recording upon detecting audio input above a predetermined threshold level. Similarly, audio recording may automatically be terminated upon the audio level dropping below a predetermined threshold or upon control unit 150 detecting a predetermined duration of silence during which there is no audio input. In another embodiment, once the audio input for the annotation is complete, the input may be compared against a vocabulary to determine a corresponding text equivalent. After the input is complete and/or a text equivalent found, a symbol or text representing the annotation is added to the image at the position indicated by the positional stimulus. For example, in
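The sequence described above (positional stimulus, then audio capture, then an optional text match) can be sketched as follows. The function and field names are illustrative assumptions; `record_audio` and `match_vocabulary` are hypothetical stand-ins for the I/O audio device 125 and the audio vocabulary comparison module 214.

```python
def on_positional_stimulus(x, y, image_id, record_audio, match_vocabulary):
    """Create an annotation for a click/touch at (x, y) on image_id.
    `record_audio` captures the user's utterance; `match_vocabulary`
    returns a text equivalent or None if no match is found."""
    audio = record_audio()
    text = match_vocabulary(audio)
    return {
        "image": image_id,
        "location": (x, y),
        "audio": audio,
        # If no text equivalent is found, only a symbol would be shown.
        "text": text if text is not None else "",
    }

ann = on_positional_stimulus(
    120, 45, "photo-001",
    record_audio=lambda: b"\x00\x01",
    match_vocabulary=lambda audio: "birthday",
)
print(ann["text"], ann["location"])  # birthday (120, 45)
```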
The location on an image to which an annotation is graphically connected may be represented by (x, y) coordinates in the case of a graphical image, or the location may be represented by a single coordinate in the case where an image represents a linear document. Examples of linear documents may include a plain text document, a hypertext markup language (HTML) document, or some other markup language-based document including extensible markup language (XML) documents.
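The two location encodings described above can be sketched as a small helper. This is an illustrative sketch; the function and key names are assumptions for the example.

```python
def make_location(kind, *coords):
    """Encode an annotation location: (x, y) coordinates for a graphic
    image, or a single offset for a linear document (plain text, HTML,
    XML, etc.)."""
    if kind == "graphic":
        x, y = coords
        return {"kind": "graphic", "x": x, "y": y}
    if kind == "linear":
        (offset,) = coords
        return {"kind": "linear", "offset": offset}
    raise ValueError(f"unknown location kind: {kind}")

print(make_location("graphic", 120, 45))
print(make_location("linear", 872))
```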
In one embodiment, if during audio recording the system detects an additional positional stimulus, control unit 150 generates an additional annotation object. The additional annotation object may be generated in a manner similar to the first annotation object described above. It should be understood that an additional positional stimulus need not be on the same image or object, but could be on any other object or area visible on the display device 100.
In another embodiment, if the positional stimulus is detected upon text or a symbol representing an annotation, the control unit 150 retrieves the audio annotation from the annotation object corresponding to the symbol or text and outputs the signal via the I/O audio output device 125. Such may be necessary when the text does not match the annotation or where there is only an audio annotation with no text translation.
In yet another embodiment, the text or a symbol representing an annotation object may be repositioned individually or as a group relative to the images shown on the display device 100 using conventional “drag” operations. For such operations, the user either sets the system 103 in an editing mode of operation, or a different positional stimulus (e.g., right mouse click or double tap gesture) is recognized by the control unit 150 to initiate such a drag and drop operation. For example,
Annotation Object. Referring now to
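Consistent with the fields recited in the claims below (at least a text annotation field, an image reference field, and an annotation location field, plus the recorded audio), an annotation object might be sketched as follows. The class and field names are illustrative assumptions, not the patent's terminology.

```python
from dataclasses import dataclass

@dataclass
class AnnotationObject:
    """An annotation stored independently of the image it annotates."""
    image_ref: str        # reference to the annotated media object
    location: tuple       # (x, y) on a graphic, or (offset,) in a linear document
    text: str = ""        # text translation of the audio, if a match was found
    audio: bytes = b""    # the recorded audio signal

    def has_text(self) -> bool:
        """True if a text equivalent exists; otherwise only the audio
        (played back on selection) and a symbol represent the annotation."""
        return bool(self.text)

ann = AnnotationObject(image_ref="photo-001", location=(120, 45),
                       text="birthday", audio=b"\x00\x01")
print(ann.has_text())  # True
```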
Annotation System. Referring now to
The operating system 202 is preferably one of a conventional type such as, WINDOWS®, SOLARIS® or LINUX® based operating systems. Although not shown, the memory unit 104 may also include one or more application programs including, without limitation, word processing applications, electronic mail applications, spreadsheet applications, and web browser applications.
The image display system 204 is preferably a system for displaying and storing media objects. The image display system 204 generates an image of the object on the display device 100. The objects are preferably graphic images, but may also be icons or symbols representing documents, text, video, graphics, etc. Such media objects are stored in the memory unit 104 in media object cache 220 or in the data storage device 107 (
Today, it is well understood by those skilled in the art that multiple computers can be used in the place of a single computer by applying the appropriate software, hardware, and communication protocols. For instance, data used by a computer often resides on a hard disk or other storage device that is located somewhere on the network to which the computer is connected and not within the computer enclosure itself. That data can be accessed using NFS, FTP, HTTP or one of many other remote file access protocols. Additionally, remote procedure calls (RPC) can execute software on remote processors not part of the local computer. In some cases, this remote data or remote procedure operation is transparent to the user of the computer and even to the application itself because the remote operation is executed through the underlying operating system as if it were a local operation.
It should be apparent to those skilled in the art that although the embodiment described in this invention refers to a single computer with local storage and processor, the data might be stored remotely in a manner that is transparent to the local computer user or the data might explicitly reside in a remote computer accessible over the network. In either case, the functionality of the invention is the same and both embodiments are recognized and considered as possible embodiments of this invention.
For example,
The web browser 206 is of a conventional type that provides access to the Internet and processes HTML, XML or other markup language to generate images on the display device 100. For example, the web browser 206 could be Netscape Navigator or Microsoft Internet Explorer.
The annotation display module 208 is coupled to the bus 101 for communication with the image display system 204, the web browser 206, the direct annotation creation module 210, the audio vocabulary comparison module 214, the dynamic vocabulary updating module 218, and the media object cache 220. The annotation display module 208 interacts with these components as will be described below with reference to
The direct annotation creation module 210 is coupled to the bus 101 for communication with the image display system 204, the annotation display module 208, the audio vocabulary comparison module 214, the dynamic vocabulary updating module 218, and the media object cache 220. The direct annotation creation module 210 interacts with these components as will be described below with reference to
The annotation audio output module 212 is similar to the annotation display module 208, but for outputting audio signals. The annotation audio output module 212 is coupled to the bus 101 for communication with the image display system 204, the web browser 206 and the media object cache 220. The annotation audio output module 212 interacts with these components as will be described below with reference to
The audio vocabulary comparison module 214 is coupled to the bus 101 for communication with the image display system 204, the web browser 206, the annotation display module 208, the direct annotation creation module 210, the audio vocabulary storage 216 and a media object cache 220. The audio vocabulary comparison module 214 interacts with these components as will be described below with reference to
The audio vocabulary storage 216 is coupled to the bus 101 for communication with the audio vocabulary comparison module 214. The audio vocabulary storage 216 preferably stores a table of audio signals and corresponding text strings, forming a searchable database of text and associated audio signals that is used for matching. The audio is preferably ordered for searching efficiency, and the storage can be augmented as new annotations are added to the system 103.
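A minimal sketch of such a vocabulary table, supporting both the exact match and the "close match" behavior recited in the claims, might look like the following. It assumes the audio has already been reduced to a comparable key (here simply a string); a real system would use speech recognition features, and all names here are illustrative.

```python
import difflib

class AudioVocabulary:
    """Table mapping audio keys to text strings, searchable for exact
    and approximate matches."""

    def __init__(self):
        self._table = {}  # audio key -> text string

    def add(self, audio_key, text):
        self._table[audio_key] = text

    def exact_match(self, audio_key):
        """Return the stored text for this key, or None."""
        return self._table.get(audio_key)

    def close_matches(self, audio_key, cutoff=0.6):
        """Return text strings whose keys approximately match the input."""
        keys = difflib.get_close_matches(audio_key, self._table, cutoff=cutoff)
        return [self._table[k] for k in keys]

vocab = AudioVocabulary()
vocab.add("berthday", "birthday")   # imperfectly recognized earlier input
vocab.add("vacation", "vacation")
print(vocab.exact_match("vacation"))    # vacation
print(vocab.close_matches("birthday"))  # ['birthday']
```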
The dynamic vocabulary updating module 218 is coupled to the bus 101 for communication with the image display system 204, the web browser 206, the annotation display module 208, the direct annotation creation module 210, the audio vocabulary storage 216 and a media object cache 220. The dynamic vocabulary updating module 218 interacts with these components as will be described below with reference to
The media object cache 220 forms a portion of memory 104 and temporarily stores media and annotation objects used by the annotation system 103 for faster access. The media object cache 220 stores media objects identical to those stored on the data storage device 107 or other storage devices accessible via the network controller 124. By storing the media objects in the media object cache 220, the media objects are usable by the various modules 202-218 with less latency.
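The cache behavior described above, fetching from backing storage on a miss and serving subsequent requests with less latency, can be sketched as a simple read-through cache. The class and attribute names are assumptions for illustration.

```python
class MediaObjectCache:
    """Read-through cache over a backing store (e.g. the data storage
    device or a networked file server)."""

    def __init__(self, backing_store):
        self._backing = backing_store
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, object_id):
        if object_id in self._cache:
            self.hits += 1          # served from memory, low latency
        else:
            self.misses += 1        # fetch once from the slower store
            self._cache[object_id] = self._backing[object_id]
        return self._cache[object_id]

store = {"photo-001": b"...jpeg bytes..."}
cache = MediaObjectCache(store)
cache.get("photo-001")
cache.get("photo-001")
print(cache.hits, cache.misses)  # 1 1
```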
Annotation Processes. Referring now to
Referring now to
Referring now to
Referring now to
A more significant modification from
Referring now to
Retrieval Using Audio and Annotations. Once the images have been annotated, the system of the present invention assists in the retrieval of images. In particular, the present invention allows images to be searched and retrieved using only audio input from the user. Referring now to
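The retrieval step described above can be sketched as a search over annotation text, assuming the spoken query has already been converted to text by the vocabulary comparison step. The function name and record layout are illustrative assumptions.

```python
def retrieve_images(query_text, annotations):
    """Return the ids of images whose annotation text contains the
    query. `annotations` is a list of dicts with 'image' and 'text'."""
    q = query_text.lower()
    return sorted({a["image"] for a in annotations if q in a["text"].lower()})

anns = [
    {"image": "photo-001", "text": "birthday party"},
    {"image": "photo-002", "text": "vacation"},
    {"image": "photo-003", "text": "birthday cake"},
]
print(retrieve_images("birthday", anns))  # ['photo-001', 'photo-003']
```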
While the present invention has been described with reference to certain preferred embodiments, those skilled in the art will recognize that various modifications may be provided. For example, the point and talk functionality provided by the present invention may be used to augment the capabilities already existing in a multimedia message system. Variations upon and modifications to the preferred embodiments are provided for by the present invention, which is limited only by the following claims.
Claims
1. An apparatus for direct annotation of objects, the apparatus comprising:
- a display device for displaying one or more images;
- an audio input device for receiving an audio signal;
- a storage device for storing a plurality of different visual notations each comprising a text or a graphic image and for storing a plurality of corresponding audio signals;
- a direct annotation creation module coupled to receive the audio signal from the audio input device and to receive a reference to a location within an image on the display device, the direct annotation creation module, in response to receiving the audio signal and the reference to the location within the image, automatically creating an annotation object, independent from the image, that associates the input audio signal, the location and one of the plurality of different visual notations; and
- an audio vocabulary comparison module coupled to the audio input device, the storage device and the direct annotation creation module, the audio vocabulary comparison module receiving audio input and finding a corresponding one of the plurality of different visual notations that matches content of the audio input.
2. The apparatus of claim 1 further comprising an annotation display module coupled to the direct annotation creation module, the annotation display module generating symbols or text representing the annotation objects.
3. The apparatus of claim 1 further comprising an annotation audio output module coupled to the direct annotation creation module, the annotation audio output module generating audio output in response to user selection of an annotation symbol representing an annotation object.
4. The apparatus of claim 1 further comprising:
- an audio vocabulary storage for storing a plurality of audio signals and corresponding text strings;
- a dynamic vocabulary updating module coupled to the audio vocabulary storage and the audio input device, the dynamic vocabulary updating module for displaying an interface to create a new entry in the audio vocabulary storage, the dynamic vocabulary updating module receiving an audio input and a text string and creating the new entry in the audio vocabulary storage that includes a new visual annotation.
5. The apparatus of claim 1 further comprising a media object cache for storing media and annotation objects.
6. A computer program product having a computer-readable storage medium storing computer-executable code for direct annotation of objects, the code comprising:
- a media object storage for storing media, annotation objects, a plurality of different visual notations each comprising a text or a graphic image and a plurality of corresponding audio signals;
- a direct annotation creation module coupled to receive an audio signal, a selected visual notation from the plurality of different visual notations and a reference to a location within an image, the direct annotation creation module, in response to receiving the audio signal or the reference to the location within the image, automatically creating an annotation object, independent of the image, that associates the audio signal, the selected visual notation and the location, and the direct annotation creation module storing the audio annotation in the media object storage;
- an audio vocabulary comparison module coupled to the media object storage and the direct annotation creation module, the audio vocabulary comparison module receiving audio input and finding a corresponding one of the plurality of different visual notations that matches content of the audio input; and
- an annotation output module coupled to the direct annotation creation module, the annotation output module generating audio or visual output in response to user selection of an annotation symbol representing the annotation object.
7. A computer implemented method for direct annotation of objects, the method comprising the steps of:
- displaying an image;
- receiving audio input;
- detecting selection of a location within the image;
- comparing the audio input to a vocabulary;
- finding a corresponding one of a plurality of different visual notations that matches content of the audio input; and
- creating an annotation object, independent of the selected image, that provides an association between the image, the audio input, the selected location, and the found corresponding one of a plurality of different visual notations comprising text or a graphic image, the annotation object including at least a text annotation field, an image reference field, and an annotation location field, the creating step occurring automatically in response to the receiving or detecting.
8. The method of claim 7, wherein the step of displaying is performed before or simultaneously with the step of receiving.
9. The method of claim 7, wherein the step of receiving is performed before or simultaneously with the step of displaying.
10. The method of claim 7, further comprising the step of displaying the one of the plurality of different visual notations to indicate that the image has an annotation.
11. The method of claim 7, wherein the step of creating an annotation object includes storing the annotation object in an object storage.
12. The method of claim 11, further comprising the step of recording the audio input received.
13. The method of claim 12, wherein the step of creating the annotation object includes creating an annotation object including a reference to the selected location, the recorded audio input and one of the plurality of different visual annotations, and storing the annotation object in an object storage.
14. The method of claim 11, wherein the step of creating an annotation object includes storing the text as part of the annotation object.
15. The method of claim 11, further comprising the steps of:
- determining if the audio input has a matching entry in the vocabulary; and
- storing the entry as part of the annotation object if the audio input has a matching entry in the vocabulary.
16. The method of claim 15, further comprising the steps of:
- determining if the audio input has a close match in the vocabulary;
- displaying the close matches;
- receiving input selecting a close match; and
- storing the selected close match as part of the annotation object if the audio input has a close match in the vocabulary.
17. The method of claim 16, further comprising the step of displaying a message that the image has not been annotated if there is neither a matching entry in the vocabulary nor a close match in the vocabulary.
18. The method of claim 16, further comprising the following steps if there is neither a matching entry in the vocabulary nor a close match in the vocabulary:
- receiving text input corresponding to the audio input;
- updating the vocabulary with a new entry including the audio input and the text input; and
- wherein the received text is stored as part of the annotation object.
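The matching flow of claims 15 through 18 — an exact vocabulary match, then close matches offered for user selection, then a vocabulary update from user-supplied text when nothing matches — can be sketched as follows. This is a minimal illustration assuming the audio input has been reduced to a string signature; `is_close` uses a text-similarity ratio as a hypothetical stand-in for a real audio probability metric.

```python
from difflib import SequenceMatcher

def is_close(a: str, b: str, threshold: float = 0.8) -> bool:
    # Hypothetical stand-in for an audio probability metric; the claims
    # elsewhere use a threshold of 80%.
    return SequenceMatcher(None, a, b).ratio() > threshold

def annotate_from_vocabulary(audio_sig, vocabulary, select_close_match, prompt_text):
    """Return the text to store in the annotation object (claims 15-18)."""
    exact = vocabulary.get(audio_sig)           # claim 15: matching entry found
    if exact is not None:
        return exact
    close = [text for sig, text in vocabulary.items() if is_close(sig, audio_sig)]
    if close:                                   # claim 16: display close matches,
        return select_close_match(close)        # receive the user's selection
    text = prompt_text()                        # claim 18: no match at all --
    vocabulary[audio_sig] = text                # update vocabulary with new entry
    return text
```

The `select_close_match` and `prompt_text` callbacks model the user-interaction steps (displaying close matches, receiving text input); in a real system these would drive the annotation display module described above.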
19. The method of claim 11, further comprising the steps of:
- receiving text input corresponding to the audio input;
- updating the vocabulary with a new entry including the audio input and the text input.
20. A computer implemented method for displaying objects with annotations, the method comprising the steps of:
- receiving audio input;
- finding a corresponding annotation object comprising one of a plurality of different visual notations, the plurality of different visual notations referencing a close match to content of the audio input;
- retrieving an image associated with the corresponding annotation object;
- displaying the image with one of the plurality of different visual notations to indicate that an annotation exists;
- receiving user selection of the one visual notation;
- generating the annotation automatically, in response to user input of a location within the image and an audio input;
- outputting the annotation associated with the selected visual notation;
- determining whether the annotation includes text;
- retrieving a text annotation for the selected visual notation; and
- displaying the retrieved text with the image.
21. The method of claim 20, wherein the annotation is text and the step of outputting is displaying the text proximate the image that it annotates.
22. The method of claim 20, wherein the annotation is an audio signal and the step of outputting is playing the audio signal.
23. The method of claim 20, further comprising the steps of:
- determining whether the annotation includes an audio signal;
- retrieving an audio signal for the selected visual annotation; and
- wherein the step of outputting is playing the audio signal.
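The output steps of claims 20 through 23 dispatch on what the annotation contains: text annotations are displayed proximate the image, audio annotations are played, and an annotation may hold both. A minimal sketch, with the callbacks and dictionary keys as illustrative assumptions:

```python
def output_annotation(annotation, play_audio, display_text):
    """Sketch of claims 20-23: after the user selects a visual notation,
    output the associated annotation as text, audio, or both.

    `annotation` is assumed to be a dict; `play_audio` and `display_text`
    are hypothetical callbacks into the annotation output module."""
    if annotation.get("text") is not None:   # claims 20-21: annotation has text
        display_text(annotation["text"])     # display the text with the image
    if annotation.get("audio") is not None:  # claims 22-23: annotation has audio
        play_audio(annotation["audio"])      # output by playing the audio signal
```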
24. A computer implemented method for retrieving images, the method comprising the steps of:
- receiving audio input;
- finding corresponding annotation objects comprising one of a plurality of different visual notations, the plurality of different visual notations referencing a close match to content of the audio input, each corresponding annotation object generated automatically in response to user input of a location within an image and an audio signal, where a recording of the audio signal is terminated automatically based on a predetermined audio level;
- retrieving the images that are referenced by the found annotation objects; and
- displaying the retrieved images, the plurality of different visual notations for the found corresponding annotation objects and wherein each of the found corresponding annotation objects include at least an audio input field, an image reference field, and an annotation location field.
25. The method of claim 24, wherein the step of finding corresponding annotation objects further comprises the steps of:
- comparing the audio input to an audio signal reference of the annotation object; and
- determining a close match between the audio input and the audio signal reference of the annotation object if a probability metric is greater than a threshold of 80%.
26. The method of claim 24, wherein the step of finding corresponding annotation objects further comprises the steps of:
- determining the annotation objects for a plurality of images;
- for each annotation object, comparing the audio input to an audio signal reference of the annotation object; and
- determining a close match between the audio input and the audio signal reference of the annotation object if a probability metric is greater than a threshold of 80%.
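The retrieval flow of claims 24 through 26 — compare the audio input to each annotation object's audio signal reference, keep those whose probability metric exceeds the 80% threshold, and return the referenced images — might be sketched as below. The `metric` callable and the dictionary keys are assumptions, not claim language.

```python
def retrieve_images(audio_query, annotation_objects, metric, threshold=0.8):
    """Return image references whose annotation's audio reference is a
    close match to the query (probability metric > 80%, per claims 25-26).

    `metric` is a hypothetical similarity function returning a value in
    [0, 1]; a real system would use a speech or audio recognizer."""
    found = [obj for obj in annotation_objects
             if metric(audio_query, obj["audio_reference"]) > threshold]
    return [obj["image_reference"] for obj in found]
```

Searching by audio in this way is what makes annotated images retrievable without any text query, which is the accessibility benefit the background section identifies.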
5481345 | January 2, 1996 | Ishida et al. |
5502727 | March 26, 1996 | Catanzaro et al. |
5537526 | July 16, 1996 | Anderson et al. |
5546145 | August 13, 1996 | Bernardi et al. |
5600775 | February 4, 1997 | King et al. |
5623578 | April 22, 1997 | Mikkilineni |
5649104 | July 15, 1997 | Carleton et al. |
5671428 | September 23, 1997 | Muranaga et al. |
5826025 | October 20, 1998 | Gramlich |
5838313 | November 17, 1998 | Hou et al. |
5845301 | December 1, 1998 | Rivette et al. |
5857099 | January 5, 1999 | Mitchell et al. |
5881360 | March 9, 1999 | Fong |
5893126 | April 6, 1999 | Drews et al. |
5933804 | August 3, 1999 | Huang et al. |
6041335 | March 21, 2000 | Merritt et al. |
6101338 | August 8, 2000 | Bernardi et al. |
6119147 | September 12, 2000 | Toomey et al. |
6226422 | May 1, 2001 | Oliver |
6230171 | May 8, 2001 | Pacifici et al. |
6279014 | August 21, 2001 | Schilit et al. |
6388681 | May 14, 2002 | Nozaki |
6401069 | June 4, 2002 | Boys et al. |
6499016 | December 24, 2002 | Anderson |
6510427 | January 21, 2003 | Bossemeyer et al. |
6546405 | April 8, 2003 | Gupta et al. |
6624826 | September 23, 2003 | Balabanovic |
6687878 | February 3, 2004 | Eintracht et al. |
6859909 | February 22, 2005 | Lerner et al. |
20020099552 | July 25, 2002 | Rubin et al. |
20020116420 | August 22, 2002 | Allam et al. |
20040034832 | February 19, 2004 | Taylor et al. |
20040201633 | October 14, 2004 | Barsness et al. |
20040267693 | December 30, 2004 | Lowe et al. |
- Lin, James. An Ink and Voice Annotation System for DENIM. Sep. 8, 1999. pp. 1-7.
- Balabanovic, Marko, "Multimedia Chronicles for Business Communication", 2000, IEEE, pp. 1-10.
- Heck et al., "A Survey of Web Annotation Systems", Dept. of Math and Comp. Sci., Grinnell College, 1999, pp. 1-6.
- Balabanovic, Marko “Multimedia Chronicles for Business Communication”, 2000, IEEE, pp. 1-10.
Type: Grant
Filed: Jan 9, 2002
Date of Patent: Feb 17, 2009
Assignee: Ricoh Co., Ltd. (Tokyo)
Inventors: Gregory J. Wolff (Redwood City, CA), Peter E. Hart (Menlo Park, CA)
Primary Examiner: Stephen Hong
Assistant Examiner: Ryan F Pitaro
Attorney: Fenwick & West LLP
Application Number: 10/043,575
International Classification: G06F 3/00 (20060101);