System and method for linking streams of multimedia data to reference material for display

A system for indexing displayed elements that is useful for accessing and understanding new or difficult materials, in which a user highlights unknown words or characters or other displayed elements encountered while viewing displayed materials. In a language learning application, the system displays the meaning of a word in context; and the user may include the word in a personal vocabulary to build a database of words and phrases. In a Japanese language application, one or more Japanese language books are read on an electronic display. Readings (‘yomi’) for all words are readily viewable for any selected word or phrase, as well as an English reference to the selected word or phrase. Extensive notes are provided for difficult phrases and words not normally found in a dictionary. A unique indexing scheme allows word-by-word access to any of several external multi-media references.

Description

Note: More than one reissue patent application has been filed for the reissue of U.S. Pat. No. 5,822,720. The reissue patent applications are U.S. Reissue patent application Ser. No. 11/064,519, filed Feb. 24, 2005, and issued as U.S. Pat. No. Re. 40,731, and the present U.S. Reissue patent application Ser. No. 12/480,556, filed Jun. 8, 2009, which is a continuation reissue application of U.S. Reissue patent application Ser. No. 11/064,519, filed Feb. 24, 2005, now U.S. Pat. No. Re. 40,731.

This is a continuation of application Ser. No. 08/197,157 filed Feb. 16, 1994 now abandoned.

BACKGROUND OF THE INVENTION

1. Technical Field

The present invention relates to indexing displayed elements. More particularly, the present invention relates to a novel indexing scheme that is useful in such applications as learning a foreign language, for example a language based upon an ideographic alphabet, such as Japanese.

2. Description of the Prior Art

As the global economy turns the world's many nations into what media visionary Marshall McLuhan referred to as a global village, the need to learn and use new or specialized information, such as a language other than one's native language, becomes increasingly important. For example, there is a tremendous international demand for information related to Japan. Inside Japan, there is an abundance of information available in the Japanese language in numerous media forms. Japan has five national newspapers, seven major broadcasting networks, and several hundred book and magazine publishers. Japanese television focuses on the most obscure topics; and there are special interest magazines covering the full spectrum of Japanese society. Speakers of the Japanese language can find information on just about any topic imaginable. Unfortunately, outside of Japan this information is in short supply and the information that is available is primarily in English.

Individuals trying to learn about Japan are faced with the dilemma of either relying on English language sources or going through the pains of learning Japanese. English language information on Japan must go through the translation process. This results in time delays in obtaining necessary information, as well as in distortions in meaning. Furthermore, economics itself places restrictions on what information makes its way into English and what information does not. For general and introductory information on Japan, the English-based media is providing a valuable service. But for people who want to do more than scratch the surface, such information is far from sufficient.

A large number of non-native speakers have sought to study Japanese in universities or in professional language schools. In recent years, the interest level in Japanese among first-year college students has soared, such that it is rated second only to Spanish in some surveys. The number of people studying Japanese in the mid-1980's in the United States was 50,000. This number has recently grown to 400,000 persons. But the study of the Japanese language is plagued by the burdens of learning Kanji, the ideographic alphabet in which Japanese is written. Thus, the standing-room-only first-year Japanese language class in many universities soon dwindles to an almost private third-year class, due to student attrition resulting from the difficulty of mastering Kanji.

The situation in Japan for foreigners is not much more encouraging. The cost of living in Japan poses a major barrier for both business people and students. There are currently over 300,000 United States citizens working or studying in Japan. But in recent years, foreign companies have been cutting their foreign staff. This, in part, has been in response to the enormous expense associated with maintaining them in Japan; but it is also a statement about the effectiveness of a large percentage of these people, who typically possess no Japanese language skills or background. Nevertheless, the necessity to do business in Japan is clear to most major United States companies, and access to Japan's inside information is critical to the success of such companies.

The situation in Japanese universities is also discouraging. There are currently about 30,000 foreign students in Japanese universities, compared to a total of over 300,000 foreign students studying in the United States. Ninety percent of the foreign students in Japan are from Asia, while there are fewer than 1,000 students in Japan from the United States. The cost of living and housing again contribute greatly to this disparity, but the language barrier must be seen as the prime hurdle that causes students to abandon the attempt to explore Japan. In the future, the desirability for students and researchers to work in Japan should increase due to the growth of “science cities” and the increase in the hiring of foreign researchers by Japanese corporations. The burden of studying Japanese, however, remains.

In total there are over 60,000 people enrolled in Japanese language programs in Japan; and according to the Japan Foundation, there are approximately 1,000,000 Japanese language students worldwide, with a total of over 8,200 Japanese language instructors in 4,000 institutes. However, without a more effective and productive methodology for reading Japanese and for building Japanese language vocabulary, the level and breadth of the information making its way to non-natives should not be expected to improve.

The foregoing is but one example of the many difficulties one is faced with when acquiring or using difficult or unfamiliar material. The first challenge faced by anyone reading a difficult text is character recognition and pronunciation. For example, a student of the Japanese language spends many frustrating hours counting character strokes and looking up characters in a dictionary. Challenges such as this are the primary reason so many people give up on Japanese after a short trial period. It is also the reason that people who continue to pursue the language are unable to build an effective vocabulary.

Knowing the “yomi” or pronunciation or reading of a word is essential to memorize and assimilate the word into one's vocabulary. This allows the student to read a word in context and oftentimes deduce its meaning. But in many cases, the word may be entirely new to the reader, or it may be a usage that the reader has never seen. Looking up the word in the dictionary or asking a native speaker are the only options available to a student. Once the yomi for the word is known and the meaning of the word in context is understood, the final challenge is to memorize the word and make it a part of a usable vocabulary.

The sheer number of characters in ideographic alphabets, such as Kanji, presents unique challenges for specifying and identifying individual characters.

Various schemes have been proposed and descriptions can be found in the literature for the entry of Kanji characters into computers and the like.

See, for example, Y. Chu, Chinese/Kanji Text and Data Processing, IEEE Computer (January 1985); J. Becker, Typing Chinese, Japanese, and Korean, IEEE Computer (January 1985); R. Matsuda, Processing Information in Japanese, IEEE Computer (January 1985); R. Walters, Design of a Bitmapped Multilingual Workstation, IEEE Computer (February 1990); and J. Huang, The Input and Output of Chinese and Japanese Characters, IEEE Computer (January 1985).

And, see J. Monroe, S. Roberts, T. Knoche, Method and Apparatus for Processing Ideographic Characters, U.S. Pat. No. 4,829,583 (9 May 1989), in which a specific sequence of strokes is entered into a 9×9 matrix, referred to as a training square. This sequence is matched to a set of possible corresponding ideographs. Because the matrix senses stroke starting point and stroke sequences based on the correct writing of the ideograph to be identified, this system cannot be used effectively until one has mastered the writing of the ideographic script. See also G. Kostopoulos, Composite Character Generator, U.S. Pat. No. 4,670,841 (2 Jun. 1987); A. Carmon, Method and Apparatus For Selecting, Storing and Displaying Chinese Script Characters, U.S. Pat. No. 4,937,745 (26 Jun. 1990); and R. Thomas, H. Stohr, Symbol Definition Apparatus, U.S. Pat. No. 5,187,480 (16 Feb. 1993).

A text revision system is disclosed in R. Sakai, N. Kitajima, C. Oshima, Document Revising System For Use With Document Reading and Translation System, U.S. Pat. No. 5,222,160 (22 Jun. 1993), in which a foreigner having little knowledge of Japanese can revise misrecognized imaged characters during translation of the document from Japanese to another language. However, the system is provided for commercial translation services and not intended to educate a user in the understanding or meaning of the text.

Thus, although much attention has been paid, for example, to the writing, identification, and manipulation of ideographic characters, none of these approaches are concerned with providing a language learning system. The state of the art for ideographic languages, such as Japanese, does not provide an approach to learning the language that meets the four primary challenges discussed above, i.e. reading the language (for example, where an ideographic alphabet is used), comprehending the meaning of a particular word encountered while reading the language, understanding the true meaning of the word within the context that the word is used, and including the word in a personal dictionary to promote long term retention of the meaning of the word. A system that applies this approach to learning a language would be a significant advance in bridging the gap between the world's diverse cultures because of the increased understanding that would result from an improved ability to communicate with one another. Such a system would only be truly useful if it were based upon an indexing scheme that allowed meaningful manipulation and display of the various elements of the language.

SUMMARY OF THE INVENTION

The invention provides a unique system for indexing displayed elements and finds ready application, for example in a language learning system that enhances and improves the way non-natives read foreign languages, for example the way a native English speaker reads Japanese. The language learning system provides a more effective way for people to read and improve their command of the foreign language, while at the same time communicating insightful and relevant cultural, social, and economic information about the country.

The learning model used by the language learning system is straightforward and is based upon methods that are familiar to most learners of foreign languages. The system addresses the four challenges of reading a foreign language, such as Japanese: i.e. reading the foreign word or character, such as Kanji in the case of a language having an ideographic alphabet, such as Japanese; comprehending the meaning of the word; understanding the word in context; and including the word in a personal vocabulary.

The exemplary embodiment of the invention includes one or more foreign language books that are read on an electronic display of a personal computer. English word references are available for each word in such books. The definitions of such words are derived from well-known foreign language dictionaries. With regard to the Japanese language, the system saves significant amounts of time and effort by eliminating the need for the user to look up Japanese characters in a Kanji dictionary.

When one uses the system, the pronunciations or readings (‘yomi’) for all words are immediately viewable in a pop-up window without accessing a disk based database, for example by clicking a mouse on a selected word or phrase. In the same pop-up window, the system provides an English reference to any word that is also selected by clicking on the selected word or phrase. The system provides extensive notes for difficult phrases and words not normally found in a dictionary, and includes a relational database designed for managing and studying words. This allows a user to build a personal database of words that he would like to master. Words may also be entered from other sources that are currently in paper or other electronic formats. A unique indexing scheme allows word-by-word access to any of several external multi-media references.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block schematic diagram of a language learning system according to the invention;

FIG. 2 is a flow diagram in which the mechanism for indexing and linking text to external references is shown according to the invention;

FIG. 3 is a screen display showing a highlighted Japanese word and a pop-up menu, including an English reference to the Japanese word, according to the invention;

FIG. 4 is a screen display showing a highlighted Japanese word and a pop-up menu, including Japanese language annotations of the Japanese word, according to the invention; and

FIG. 5 is a screen display showing a Japanese word listed in a personal dictionary, as well as a word control palette, according to the invention.

DETAILED DESCRIPTION OF THE INVENTION

The invention provides a system that is designed to enhance and improve the way one reads or learns to read a difficult text, such as a foreign language, especially a language based upon an ideographic alphabet, such as Kanji which is used in the Japanese language. The text may be any of actual text based material, or audio, video, or graphic based information. In the language learning application, the system is modeled on the process by which the foreign language is read and addresses the problems most persons face when reading a language that is different from their own.

The exemplary embodiment of the invention is based upon two powerful functional modules that provide a comprehensive approach to reading and learning a foreign language, such as Japanese. The first module is an electronic viewer that gives the user access to reference information on each word in the electronic text at a word by word level. The second module is a relational database that allows a user to create word lists with practically no limit in size. The two modules are integrated to provide the user with everything needed to read the foreign language quickly and enjoyably, as well as to build their own individual vocabulary.

FIG. 1 is a block schematic diagram of an exemplary embodiment of the invention that implements a language learning system. An electronic book and/or a multi-media source material is provided as a teaching resource. A text file 10 and/or a multimedia source 14, consisting of an audio/video file 11 and synchronized text 13, which may include sound, images, and/or video, is edited during construction of a linked text database by a visual editor 19 that is used to build a wordified database 20. The database 20 sources a grammar parser 23 and a link engine 22 that builds an index 21 which, in turn, locates each textual and audio/video reference in the source material. The index provides a location for each reference in a database 12 that includes a relational database engine 15, and linkable entities, such as text references 16, audio references 17, graphic references 18, and the like.

The link engine 22 outputs the selected text to a word list 28 derived from the input text file 10 and/or audio/video information 14, and also outputs the reference information 24, consisting of linkable entities 25, 26, 27, which are derived from the indexed database 12. The indexor/viewer 29 creates a multi-media resource 30, such as a file 33 that was processed as described above to produce a data resource 34, an offset index 35, and linked entities 36 to the data resource for access by the user.

A user interface 32 to the system includes an electronic viewer 43 that runs along with the system application program 42 and provides the following functional elements: index management 37, user display 38, a table of contents 39, a pop-up display 40, and a personal dictionary 41.

The electronic viewer module is used to view and read the electronic books provided with the language learning system. The module includes the following features:

  • 1. One-click, pop-up information for all foreign language words;
  • 2. A word display palette;
  • 3. A contents menu for each book;
  • 4. Search functions;
  • 5. Selectable browse and edit modes; and
  • 6. The ability to copy words and associated information into the personal dictionary.

The personal dictionary is a relational database that is optimized to manage and study words. Unlike electronic dictionaries, where only the word entries of the dictionary are searchable, the personal dictionary of the system herein allows one to search on each of eight or more keys associated with a word.

The following functions are supported by the personal dictionary:

  • 1. Display of words in an easy to read, easy to access format;
  • 2. Full relational database capabilities for the following: the word, the pronunciation, English reference, notes, category, source, priority, and review date (one such word record is sketched following this list);
  • 3. Search capabilities for any item;
  • 4. Capabilities to store an unlimited number of words;
  • 5. A flash word feature to allow self-testing in sorted or random order; and
  • 6. Capabilities to review words sorted by any word key.
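
By way of a rough illustration only, one such word record can be represented as a simple structure holding the eight keys enumerated above. The field names and defaults below are assumptions made for the sketch; the patent does not specify any particular storage layout.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class DictionaryEntry:
        word: str                 # the foreign language word itself
        yomi: str                 # pronunciation (reading) of the word
        reference: str            # English reference for the word
        notes: str = ""           # grammatical or usage notes
        category: str = ""        # e.g. "Economics" or "Foods and Drink"
        source: str = "User"      # source text, or user name for entered words
        priority: int = 3         # user-assigned priority, 1 to 5
        review_date: date = field(default_factory=date.today)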

The personal dictionary also allows the user to enter words from other sources that are currently in paper or other electronic formats. For example, a user can copy all the words that they have in paper format from study lists and notes. With this feature, a student can have all of his study materials in one easy to access database. Users can also import and export data in a text format supported by standard word processor and spreadsheet programs.

The exemplary personal dictionary includes a base 500-word vocabulary list designed for the beginning student. A variety of words are included related to such general topics as: foods and drink, family, health, the body, commuting and transportation, environment, economics, finance, politics, companies, industries, computers, sports, and the language itself.

The system includes one or more electronic books. The words in each book are fully supported with readings, English references, and hypernotes. In the exemplary embodiment of the invention there are typically over 10,000 words, as well as over 1,000 notes presented in an easy to read, easy to memorize format.

The English reference feature of the system provides basic information to help users understand the word in its context. For each word, a generalized definition of the word is provided. The pop-up fields are used to give the user a quick reference to the word and to allow the user to continue reading or reviewing the text.

Current electronic book formats provide simple hyperlinks in what is termed hypertext or multimedia. Hyperlinks to date have been simple pointers that directly link text with other text, graphics, or sound within the text file itself. For reference materials, such as electronic encyclopedias and dictionaries, hyperlinks provide a quick and easy way to find material related to a topic or subject. However, these links must be hard coded and are therefore cumbersome to author. The format of the system herein described provides a new means of relating text, pictures, and/or video with information to enrich and expand the impact of every element in a text, picture, or video. This format differs from current electronic books which only link text with other parts of text or content.

In the new format of the present system, every word or sound, for example, can be linked to information not contained within the text using an indexing method that maps a single word or phrase to a table that contains external reference material. This reference can be in the form of text, graphics, images, movies, and/or sound. Thus, the resource materials, such as the text, remain unaltered and therefore compact in terms of file size; the text takes up less disk space and runs faster.

FIG. 2 is a flow diagram in which the mechanism for indexing and linking text to external references is shown according to the invention. To find a reference to a particular word or other selected entry displayed on the screen, the user clicks on the displayed text with a pointing device, such as a mouse (200). The click position is determined and used to calculate an offset value within the text (200). In the example shown in FIG. 2, the user clicks at a particular location, e.g. horizontal and vertical coordinates 100 and 75, respectively, and an offset value of 25 is returned. The offset value is compared to the start and end position indices stored in a look-up table (201, 202). The link between the selected text and the external reference is resolved (203), and the external reference is retrieved and displayed to the user (204). In the example of FIG. 2, an offset of 25 is located at the look-up table location having a start point of 20 and an end point of 27 and is linked to text located at position 200. As can be seen from the look-up table (202), the link may be to text, sound, pictures, or video. In the example, the text linkage is to the English language phrase “Japanese economy”.
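
The lookup just described can be sketched in a few lines of Python. The table contents and names below are illustrative assumptions chosen to mirror the FIG. 2 example; the patent does not prescribe any particular implementation.

    # Each look-up table entry records the start and end offsets of one
    # word cut, together with a link to its external reference. A plain
    # string stands in here for a link that may point to text, sound,
    # pictures, or video.
    LOOKUP_TABLE = [
        (0, 11, "sound:yomi_0001"),
        (12, 19, "picture:chart_02"),
        (20, 27, "text:Japanese economy"),   # the FIG. 2 example entry
    ]

    def resolve_reference(offset):
        """Return the external reference whose word cut spans the offset."""
        for start, end, reference in LOOKUP_TABLE:
            if start <= offset <= end:       # offset falls within this cut
                return reference
        return None                          # the selected word is not referenced

    # In the FIG. 2 example, a click at coordinates (100, 75) yields an
    # offset of 25, which falls between start point 20 and end point 27:
    assert resolve_reference(25) == "text:Japanese economy"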

The actual indexing process is completed in several steps, including word cuts, linking, and compilation.

Word Cuts

The word cutting process is accomplished using a simple visual editor, for example a point and click system using a pointing device, such as a mouse. The process divides the text into the individual components of text that are linked with the additional reference material. The original text is provided by a publisher in electronic form in a raw binary text format (e.g. an ASCII text file or other word processor file). This text is then divided up into the component words or phrases in preparation for the next step.

Linking

The linking process takes the text after the word cut process and links it to an external reference. As described above in connection with FIG. 1, the database 20 sources the grammar parser 23 and the link engine 22 that builds the index 21 which, in turn, locates each textual and audio/video reference in the source material. In the case of language learning, the component words and phrases are linked to a foreign language dictionary. In other cases, links may be made to other reference materials, such as graphics and/or sound.

Compilation

After linking, the text and references are compiled. During compilation, the cut text is reassembled to create an image of the text that the end user sees. At this point additional formatting may be applied to the text for final display. Indices of the component words and phrases are built with links to the reference material and duplicate references are consolidated to conserve memory and storage requirements.
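
Under assumed names, the compile step might be sketched as follows: the word cuts are reassembled into the display text, each cut is indexed by its offset from the beginning of the text, and duplicate references are consolidated so that repeated words share a single stored entry.

    def compile_text(cuts):
        """cuts: a list of (piece, reference) pairs from the linking step."""
        text_parts = []
        lookup_table = []     # (start, end, reference id) triples
        references = {}       # consolidated storage: reference -> id
        offset = 0
        for piece, reference in cuts:
            start, end = offset, offset + len(piece) - 1
            # A duplicate reference is stored only once and shared by id.
            ref_id = references.setdefault(reference, len(references))
            lookup_table.append((start, end, ref_id))
            text_parts.append(piece)
            offset = end + 1
        return "".join(text_parts), lookup_table, references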

A key feature of the system format is the method by which the original book text is indexed and linked with the external references. During the compile process an image of the text is created. When the image is created, the cuts are indexed based upon the position offset from the beginning of the text. The start and end points of the cut text are recorded in a look-up table along with the links to external references. The number and type of links for any component is dynamic. This means that a single entry could have several different references attached to it, each containing different forms of data.

The user interacts with the electronic book using a pointing device. When the user “clicks” within the text image, the location of the pointer is determined. The location is converted into a position offset from the beginning of the text and used to determine which component word or phrase was selected. The process involves comparing the offset with the start and end values stored in the look-up table as discussed above in connection with FIG. 2. When the offset value falls between a component's start and end points, a match is made and the external references can be resolved.

English Reference

FIG. 3 is a screen display showing a highlighted Japanese word and a pop-up menu, including a translation of the Japanese word, according to the invention. The following section explains the English reference pop-ups associated with each word:

The English reference is intended to give the user basic information to help him understand a selected word in its context. A majority of the word definitions found in the English reference are not the direct translation of the word in that particular context. They are mostly generalized definitions of the given word. These pop-up fields give the user a quick reference to the word and allow him to continue reading or reviewing the text without the need to stop and access a dictionary. In applying the invention to other languages, for example Korean or Chinese, or to difficult materials, such as highly technical or complex matters, appropriate external references should be selected.

In the exemplary embodiment of the invention, a priority is placed on making the text readable, rather than on creating a detailed grammatical description of it. The English reference is not considered a direct translation of the foreign language, but rather is preferably a contextual definition based upon the word's meaning within the text.

Definitions

Definitions in dictionaries are written for practical use. Accordingly, word and sentence translations are preferably written in modern English at a level acceptable to native speakers. The types of phrases and words covered by the English reference are preferably of great variety. The English translation should therefore be highly readable and useful.

Hyper Notes

FIG. 4 is a screen display showing a highlighted Japanese word and a pop-up menu, including Japanese language annotations of the Japanese word, according to the invention.

Hyper notes are provided for a great number of words and phrases included in the system. Most of the explanations are grammatical in nature, but others simply explain the passage in further depth or rephrase the foreign language word or phrase in simpler language. The notes have been written in the foreign language because it is believed that this is the best way for students of the language to improve their skills. As in the main text, the yomi and meanings of the words are given in a pop-up form.

Using the Electronic Viewer Module

The electronic viewer module provides the following pull-down menus: File, Edit, Words, View.

The File Menu includes:

  • 1. Open (opens up a book for reading);
  • 2. Close (closes a book);
  • 3. Personal Dictionary (opens the personal dictionary);
  • 4. Import Words (imports a tab delimited file into the personal dictionary);
  • 5. Export Words (exports a tab delimited file from the personal dictionary); and
  • 6. Quit (quits the application).

The Edit Menu Includes:

  • 1. Undo (undoes a previously deleted entry in the personal dictionary fields);
  • 2. Cut (cuts a highlighted block of text in the personal dictionary fields);
  • 3. Copy (copies the selected text into the clipboard in either the electronic viewer module or the personal dictionary); and
  • 4. Paste (pastes the copied text into the target field in the personal dictionary).

The Words Menu includes:

  • 1. Find (displays the search dialogue box);
  • 2. Find Next (finds the next entry using the previously entered search word);
  • 3. Next (goes to the next word in the personal dictionary based on the current sort setting);
  • 4. Prev (goes to the previous word in the personal dictionary based on the current sort setting);
  • 5. Jump to Text (jumps from the personal dictionary to the source of the word in the original text); and
  • 6. Flash Words (displays the words in the personal dictionary in slide show fashion).

The View Menu includes:

  • 1. Browse (sets the program to Browse Mode, indicated by the arrow cursor);
  • 2. Edit (sets the program to Edit Mode, indicated by the I-beam cursor);
  • 3. Show Note Guides (displays the location of the Notes in the text of the viewer);
  • 4. Show Notes (displays the Notes field in the personal dictionary);
  • 5. Show Info (displays the Word Information and sort control button in the personal dictionary); and
  • 6. Show Palette (displays the Word Display Palette with the electronic viewer module).

After a study session starts, a Table of Contents for the selected book appears. By clicking on any item, the user is able to go to the desired section of the book. The selected chapter appears as a normal text file. The electronic viewer window has a display region with a button to display the Table of Contents. The current chapter name of the selected book is also displayed in this area. To select a word or phrase in the book, the user clicks on a word that is not understood and a pop-up menu immediately appears (see FIG. 3). The pop-up information contains the yomi, the English reference, and the notes selection. If the pop-up menu does not appear, the selected word is not referenced. The yomi also appears in the pop-up menu.

To view the English reference information the user selects the English Reference from the pop-up menu and the information appears next to the pop-up menu.

To see the Note associated with the text, the user selects Notes from the pop-up menu and the Note appears in a separate window. If the Notes item is gray (for example, as shown in FIG. 3), no Note is available for the word. Notes also include a pop-up reference feature. The first word in the text with reference information has a black underbar beneath it. This is the Word Pointer, which indicates the most recent location for the pop-up menu and defaults to the first word. To see where a Note begins and ends, the user selects Show Note Guides from the View Menu.

The electronic viewer module also provides a Palette. To display the palette, the user selects Show Palette from the View Menu. The Word Display Palette displays all the reference information for quick viewing. The arrow buttons move the location of the Word Pointer and update the reference information. The See Note command displays the Note if one exists for the word and is gray if one is not present. The Add to PD command automatically copies the word and its associated information to the personal dictionary. If a Note is present, it is also copied to the personal dictionary.

A limited amount of text can be copied from the book by selecting Edit Mode from the View Menu, highlighting the desired text, and selecting Copy from the Edit Menu. Words can be searched for in the book by selecting Find from the Words Menu.

Using the Personal Dictionary Module

FIG. 5 is a screen display showing a Japanese word listed in a personal dictionary, as well as a personal dictionary control panel, according to the invention. The personal dictionary module in the exemplary embodiment of the invention is implemented in a relational database that is optimized for managing and studying words. Unlike electronic dictionaries where only the word entries of the dictionary are searchable, the personal dictionary module allows a user to search on each of the eight or more keys associated with a word, as discussed above. To open the personal dictionary, the user selects Personal Dictionary from the File menu or double clicks on a Personal Dictionary icon.

The words contained in the personal dictionary are displayed in large fields with the word on the bottom, the yomi above the word, and the English reference on top, as shown in FIG. 5. In Browse Mode, clicking on a word alternately hides and shows the word. This function is used to enhance review and study. The Main Control Buttons are located just below the Word field. The arrow keys display the next or previous words based on the sort key indicated by the Sort Button in the bottom left corner. The Show Notes button displays the Note information about the Word. This button toggles to Hide Notes when the field is displayed and Show Notes when hidden. Additional notes and annotations can be entered directly. The Quick Search button displays the word in a pop-up window for quick search of a single character. After the pop-up is displayed, the user can click on the desired character to search. The Flash Words button displays the words in the personal dictionary in slide show fashion. Sorted or random order is selectable; sorted order uses the current sort setting.

The Find button displays the search dialogue window. Words are searchable by the following keys: Word, Yomi, English Reference, Category, Source, Priority, or Date. The personal dictionary supports logical “AND” searching for each of the above keys. The following features are supported:

  • 1. Jump to Text—this button jumps control and display from the personal dictionary to the source of the word in the original text;
  • 2. Show Info—this button displays the Word Information Buttons, as well as the Date Indicator; this button toggles to Hide Info when displayed, and Show Info when hidden; and
  • 3. Word Information—this button appears on the bottom of the screen and has the following functions:
    • a. Current Sort—sets the sort order for the Dictionary to either Category, Source, Priority, or Date;
    • b. Category—provides for a set of predefined Categories for words as well as the ability to add new Categories;
    • c. Source—indicates the source of the Word: user entered words are indicated by the user name or if not available, by the default User;
    • d. Priority—allows the user to assign a priority to a word from 1 to 5; and
    • e. Date Display—the date is displayed in the bottom right hand corner; the date is automatically updated each time the word is displayed.
Searching

Both the electronic viewer module and the personal dictionary module provide search features accessible via the Word Menu. After selecting Find from the menu, the search dialogue appears.

The electronic viewer module includes a simple search feature that allows the user to search for a string of text anywhere in the book. The user enters the desired text and clicks Find to execute the Search. Find Next searches for the next occurrence of the word in the text.

In the personal dictionary, a slightly more complex search feature is provided. The search dialogue allows the user to enter multiple search terms. For example, a user can search for a certain term in the ‘Economics’ category or the user can look for a Kanji that has a certain reading. More search terms result in increased search time. The search terms for Word, Yomi, Reference, Note, and Source are indexed based on the first seven characters contained in the field. Characters appearing after the seventh character in any of these fields are not found with the ‘Starts With’ selection. Selecting ‘Contains’ searches the entire text in the field.

To search, the user enters the desired word or character and then selects ‘Starts With’ or ‘Contains’ from the menu. A ‘Starts With’ search is the fastest. The ‘Category’ search terms are based on the category list. The integers 1 to 5 can be entered for ‘Priority.’ Date searching can be performed as ‘is’, ‘is before’, or ‘is after.’ After entering the desired search information, the user clicks ‘Find’ to execute the Search. Find Next searches for the next occurrence in the personal dictionary.
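
A sketch of the two text search modes, with illustrative names, follows. It reflects the behavior described above: ‘Starts With’ consults only the first seven characters of the indexed field, ‘Contains’ scans the whole field, and multiple key/term pairs combine under logical AND; priority and date comparisons would be handled separately.

    def matches(entry, terms, mode="starts with"):
        """entry: a word record as a dict; terms: a {key: search term} dict."""
        for key, term in terms.items():
            value = str(entry.get(key, ""))
            if mode == "starts with":
                # Only the first seven characters are indexed, so a term
                # longer than seven characters cannot match in this mode.
                if not value[:7].startswith(term):
                    return False
            elif term not in value:          # 'Contains' scans the whole field
                return False
        return True

    # Logical AND across keys: every condition must hold for a match.
    words = [{"Word": "keiki", "Category": "Economics", "Source": "User"}]
    hits = [w for w in words if matches(w, {"Category": "Econom", "Source": "User"})]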

Importing/Exporting Word Lists

Text files can be read into the personal dictionary to make data exchange with other programs and colleagues feasible. The following format should be followed to allow accurate importing. One may use a spreadsheet program to build the word table and export the information as a tab delimited file. If a word processor is used, the user must add an extra tab for blank fields and follow the format listed below. In the exemplary embodiment of the invention, Export and Import use the following format:

Word [TAB] Pronunciation [TAB] Meaning [TAB] Notes [TAB] Category [TAB] Source [TAB] Priority [TAB] Date [Hard Return]

Setting up the Word field as column A in a spreadsheet and then exporting as a text file results in this format. If a word processor is used, one should also save as a text file. One should not include any hard returns (user entered returns) within the string of text for the given word. If given the option, the user should elect to have soft returns (automatically entered returns) deleted. To import, the user selects Import Words from the File Menu, and then chooses the file for import. To export, the user selects Export Words from the File Menu, and then enters a name for the given file.
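
As a sketch of this exchange format, the following Python reads and writes tab delimited records using the standard csv module; the file and function names are illustrative. Blank fields are preserved as empty columns, consistent with the extra-tab rule above.

    import csv

    FIELDS = ["Word", "Pronunciation", "Meaning", "Notes",
              "Category", "Source", "Priority", "Date"]

    def export_words(entries, path="words.txt"):
        # One tab delimited row per word; each record ends with a hard return.
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f, delimiter="\t")
            for entry in entries:
                writer.writerow([entry.get(key, "") for key in FIELDS])

    def import_words(path="words.txt"):
        with open(path, encoding="utf-8") as f:
            return [dict(zip(FIELDS, row))
                    for row in csv.reader(f, delimiter="\t")]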

Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. For example, the invention may be used to index images such that elements of the image are linked to an external reference. Thus, an illustration of the human body may include descriptive external resources for each of the body's internal organs, and would thereby aid in the study of anatomy. Likewise, a video or other moving picture display, for example animated displays, could be indexed such that the picture could be stopped and various elements within a frame of the picture could be examined through links to external references. The invention allows such application because it does not embed information within the source material as is the case with prior art hyperlink technology. Rather, the invention creates a physical counterpart to the image in which a selected image position defines an offset that locates a desired external reference. Accordingly, the invention should only be limited by the claims included below.

Claims

1. A system for linking source material to reference material for display comprising:

a source material image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information, said source material image stored in an electronic database;
means for determining an address on said electronic database for the beginning position of said source material image;
means for cutting said source material image into said discrete pieces;
means for determining an address on said electronic database for a start point and an end point of said discrete pieces of said image based upon said beginning position of said source material image;
means for recording said start and said end point addresses in a look-up table;
means for selecting a discrete portion of said source material image;
means for determining the address on said electronic database of said selected discrete portion;
means for converting said address of said selected discrete portion to an offset value from said beginning position address of said source material image;
means for comparing said offset value with said recorded start and end point addresses of said discrete pieces in said look-up table;
means for selecting an external reference that corresponds to said look-up table start and end point addresses; and
means for reproducing said external reference.

2. The system of claim 1, further comprising:

a linking engine for linking said source material to said reference information on any of a word-by-word and phrase-by-phrase basis.

3. The system of claim 2, said linking engine further comprising:

word cut means for dividing said source material into discrete pieces;
linking means for establishing at least one link between each of said discrete pieces and said reference information;
compiler means for assembling an integrated source image from said discrete pieces;
indexing means for linking said assembled discrete pieces to said reference information.

4. The system of claim 3, said linking engine further comprising:

means for building an index to link each of said source material pieces to said reference information.

5. The system of claim 4, wherein said index links said source material pieces to said reference information based upon the value of the offset of the starting and ending position addresses of said source material pieces from the beginning position address of said integrated source image.

6. The system of claim 5, wherein said offset locates said reference information to a corresponding source material piece based upon offset occurrence within a range defined by the value of the offsets of the starting and ending point addresses of said source material pieces from said beginning position address of said integrated source image.

7. The system of claim 1, further comprising:

means for manipulating said stored source material and reference information with at least two user keys.

8. A method for linking source material to reference material for display, comprising:

determining the beginning position address of a source material image stored in an electronic database, said source material image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information;
cutting said source material image into said discrete pieces;
determining a starting point address and an ending point address of said discrete pieces of said image based upon said beginning position address of said source material image;
recording said starting and said ending addresses in a look-up table;
selecting a discrete portion of said source material image;
determining the address of said selected discrete portion;
converting said address of said selected discrete portion to an offset value from said beginning position address of said source material image;
comparing said offset value with said recorded start and end point addresses of said discrete pieces in said look-up table;
selecting an external reference that corresponds to said look-up table start and end point addresses; and
reproducing said external reference.

9. In a language learning method, a method for linking source material to reference material for display, comprising the steps of:

reading a foreign language source material image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information with an electronic viewer;
accessing reference materials on selected portions of said source material image;
determining the beginning position address of said source material image;
cutting said source material image into said discrete pieces;
determining a start point address and an end point address of said discrete pieces of said image based upon said beginning position address of said source material image;
recording said start and said end point addresses in a look-up table;
selecting a discrete portion of said source material image;
determining the address of said selected discrete portion;
converting said address of said selected discrete portion to an offset value from said beginning position address of said source material image;
comparing said offset value with said recorded start and end point addresses of said discrete pieces in said look-up table;
selecting an external reference that corresponds to said look-up table start and end point addresses; and
reproducing said external reference.

10. The method of claim 9, further comprising the step of:

linking said source material to said reference information with a linking engine on any of a word-by-word and phrase-by-phrase basis.

11. The method of claim 10, said linking step further comprising the steps of:

dividing said source material into discrete pieces;
establishing at least one link between each of said discrete pieces and said reference information;
assembling an integrated source image from said discrete pieces; and
linking said assembled discrete pieces to said reference information.

12. The method of claim 11, said linking step further comprising the step of:

building an index to link each of said source material pieces to said reference information.

13. The method of claim 12, wherein said index links said source material pieces to said reference information based upon the offset between the starting position address for said source material pieces and the beginning position address of said integrated source image.

14. The method of claim 13, wherein said offset locates said reference information to a corresponding source material piece based upon offset occurrence within a range defined by the value of the offsets of the starting and ending position addresses of said source material pieces from said beginning position address of said integrated source image.

15. In a language learning system, a system for linking source material to reference material for display, comprising:

a text image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information;
means for determining the beginning position address of said text image;
means for cutting said text image into said discrete pieces;
means for determining a starting point address and an ending point address of said discrete pieces of said image based upon said beginning position address of said source material image;
means for recording said starting and said ending point addresses in a look-up table;
means for selecting a discrete portion of said text image;
means for determining the address of said selected discrete portion;
means for converting said address of said selected discrete portion to an offset value from said beginning position address of said source material image;
means for comparing said offset value with said recorded start and end point addresses of said discrete pieces in said look-up table;
means for selecting an external reference that corresponds to said look-up table start and end point addresses; and
means for displaying said external reference.

16. In a language learning method, a method for linking source material to reference material for display, comprising the steps of:

determining the beginning position address of a text image, said text image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information;
cutting said source material image into said discrete pieces;
determining a starting point address and an ending point address of said discrete pieces of said image based upon said beginning position address of said text image;
recording said starting and said ending point addresses in a look-up table;
selecting a discrete portion of said text image;
determining the address of said selected discrete portion;
converting said address of said selected discrete portion to an offset value from said beginning position of said text image;
comparing said offset value with said recorded start and end point addresses of said discrete pieces in said look-up table;
selecting an external reference that corresponds to said look-up table start and end point address; and
displaying said external reference.

17. A system for linking textual source material to external reference materials for display, the system comprising:

means for determining a beginning position address of a textual source material stored in an electronic database;
means for cutting the textual source material into a plurality of discrete pieces;
means for determining starting point addresses and ending point addresses of the plurality of discrete pieces based upon the beginning position address;
means for recording in a look-up table the starting and ending point addresses;
means for linking the plurality of discrete pieces to external reference materials by recording in the look-up table, along with the starting and ending point addresses of the plurality of discrete pieces, links to the external reference materials, the external reference materials comprising any of textual, audio, video, and picture information;
means for selecting a discrete portion of an image of the source material;
means for determining a display address of the selected discrete portion;
means for converting the display address of the selected discrete portion to an offset value from the beginning position address;
means for comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
means for selecting one of the external reference materials corresponding to the identified one of the plurality of discrete pieces; and
means for displaying on a computer the selected one of the external reference materials.

18. The system of claim 17, wherein the means for linking links the plurality of discrete pieces to external reference materials on a word-by-word or phrase-by-phrase basis.

19. The system of claim 18, further comprising:

means for compiling the source material image from at least the plurality of discrete pieces; and
means for indexing the plurality of discrete pieces and corresponding links to the external reference materials.

20. The system of claim 19, further comprising:

means for building an index for each of the linked external reference materials.

21. The system of claim 20, wherein the look-up table links the identified one of the plurality of discrete pieces to at least a corresponding one of the external reference materials based upon the offset value.

22. The system of claim 21, wherein the identified one of the plurality of discrete pieces is identified based upon the offset value being within a range defined by the starting and ending point addresses of the identified one of the plurality of discrete pieces.

23. The system of claim 17, further comprising:

means for manipulating the source material image and the external reference materials with at least two user keys.

24. The system of claim 17, wherein cutting the textual source material into a plurality of discrete pieces is done manually.

25. The system of claim 17, wherein cutting the textual source material into a plurality of discrete pieces is done automatically.

26. The system of claim 25, wherein automatically cutting the textual source material into a plurality of discrete pieces is done using a grammar parser.

27. The system of claim 25, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without using tags.

28. The system of claim 25, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

29. The system of claim 17, wherein the link is a hyperlink.

30. The system of claim 17, wherein the link is an address of the selected one of the external reference materials.

31. The system of claim 17, wherein the link is reference information for retrieving the selected one of the external reference materials.

32. The system of claim 17, wherein determining a display address of the selected discrete portion is done without using tags.

33. The system of claim 17, wherein determining a display address of the selected discrete portion is done without reference to any tags which may be located in the textual source material.

34. The system of claim 17, wherein determining a display address of the selected discrete portion is done without reference to any hierarchical information which may be located in the textual source material.

35. The system of claim 17, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without using tags.

36. The system of claim 17, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any tags which may be located in the textual source material.

37. The system of claim 17, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any hierarchical information which may be located in the textual source material.

38. The system of claim 17, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without using tags.

39. The system of claim 17, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

40. The system of claim 17, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any hierarchical information which may be located in the textual source material.

41. The system of claim 17, wherein the external reference materials comprise a plurality of text based external reference materials.

42. The system of claim 17, wherein the external reference materials comprise a plurality of image based external reference materials.

43. The system of claim 17, wherein the external reference materials comprise a plurality of graphic based external reference materials.

44. The system of claim 17, wherein the external reference materials comprise a plurality of audio based external reference materials.

45. The system of claim 17, wherein the external reference materials comprise a plurality of video based external reference materials.

46. The system of claim 17, wherein at least one of the external reference materials is a combination of two or more of text based external reference material, image based external reference material, graphic based external reference material, audio based external reference material, and video based external reference material.

47. The system of claim 17, wherein linking the plurality of discrete pieces is done manually.

48. The system of claim 17, wherein linking the plurality of discrete pieces is done automatically.

49. The system of claim 17, wherein the electronic database is an electronic relational database.

50. The system of claim 17, wherein the electronic database is an electronic file.

51. The system of claim 17, wherein the electronic database is electronic text.

52. The system of claim 17, wherein the beginning position address is a beginning location of the textual source material in the electronic database.

53. The system of claim 52, wherein each starting point address is a starting location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

54. The system of claim 52, wherein each ending point address is an ending location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

55. The system of claim 17, further comprising:

means for displaying the external reference materials corresponding to the identified one of the plurality of discrete pieces in a pop-up menu, prior to selecting one of the external reference materials corresponding to the identified one of the plurality of discrete pieces.

56. The system of claim 55, wherein the pop-up menu is displayed in the textual source material image next to the selected discrete portion.

57. The system of claim 55, wherein the selected one of the external reference materials is selected using the pop-up menu.

58. The system of claim 55, wherein the pop-up menu displays labels for the external reference materials corresponding to the identified one of the plurality of discrete pieces.

59. The system of claim 55, wherein the labels can each be selected in the pop-up menu to display the external reference materials corresponding to the identified one of the plurality of discrete pieces.
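
For illustration only, and not part of the claims: claims 55 through 59 recite presenting the linked materials through a labeled pop-up menu. The sketch below assumes, purely for demonstration, that each recorded link carries a type tag from which a label can be derived; the tag format and all names are hypothetical.

    def menu_entries(links):
        # derive one selectable label per linked external reference material;
        # the "type: payload" link format is an assumption of this sketch
        return [(link.split(":", 1)[0].strip(), link) for link in links]

    links = ["yomi: まわりみち", "gloss: detour", "audio: mawarimichi.wav"]
    for label, link in menu_entries(links):
        print(label, "->", link)   # one pop-up row per reference material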

60. The system of claim 17, wherein the selected one of the external reference materials is a single word.

61. The system of claim 17, wherein each of the external reference materials is a single word.

62. A computer-implemented method for linking textual source material to external reference materials for display, the method comprising the steps of:

determining a beginning position address of textual source material stored in an electronic database;
cutting the textual source material into a plurality of discrete pieces;
determining starting point addresses and ending point addresses of the plurality of discrete pieces based upon the beginning position address;
recording in a look-up table the starting and ending point addresses;
linking the plurality of discrete pieces to external reference materials by recording in the look-up table, along with the starting and ending point addresses of the plurality of discrete pieces, links to the external reference materials, the external reference materials comprising any of textual, audio, video, and picture information;
selecting a discrete portion of an image of the textual source material;
determining a display address of the selected discrete portion;
converting the display address of the selected discrete portion to an offset value from the beginning position address;
comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
selecting one of the external reference materials corresponding to the identified one of the plurality of discrete pieces; and
displaying on a computer the selected one of the external reference materials.
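
For illustration only, and not part of the claims: a minimal Python sketch of the sequence recited in claim 62, with a character index standing in for the display address. The class name, link format, and sample text are assumptions of this sketch, not the patented implementation.

    class LinkTable:
        def __init__(self, begin_address):
            self.begin = begin_address   # beginning position address of the source text
            self.rows = []               # one (start, end, links) row per discrete piece

        def add_piece(self, start, end, links):
            # start/end are inclusive offsets measured from the beginning position
            # address; links identify external reference materials (text, audio,
            # video, picture)
            self.rows.append((start, end, links))

        def lookup(self, display_address):
            # convert the display address to an offset from the beginning position
            # address, then compare it with the recorded start/end point addresses
            offset = display_address - self.begin
            for start, end, links in self.rows:
                if start <= offset <= end:
                    return links
            return []

    table = LinkTable(begin_address=1000)
    table.add_piece(0, 2, ["yomi: まわりみち", "gloss: detour"])
    table.add_piece(3, 3, ["gloss: (object particle)"])
    print(table.lookup(1001))   # falls inside the first piece, so both links print

Because each discrete piece occupies a contiguous offset range, identification reduces to a range-containment test, which is the relationship claim 106 makes explicit.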

63. The method of claim 62, wherein cutting the textual source material into a plurality of discrete pieces is done manually.

64. The method of claim 62, wherein cutting the textual source material into a plurality of discrete pieces is done automatically.

65. The method of claim 64, wherein automatically cutting the textual source material into a plurality of discrete pieces is done using a grammar parser.
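
For illustration only, and not part of the claims: claim 65 assigns the cutting step to a grammar parser. A real morphological analyzer is beyond a sketch, so the toy segmenter below substitutes greedy longest-match against a word list, merely to show discrete pieces emerging with inclusive starting and ending offsets; the dictionary, fallback rule, and sample text are assumptions.

    def cut(text, dictionary):
        # greedy longest-match segmentation: a stand-in for a grammar parser
        pieces, i = [], 0
        while i < len(text):
            for j in range(len(text), i, -1):             # longest candidate first
                if text[i:j] in dictionary or j == i + 1:
                    pieces.append((i, j - 1, text[i:j]))  # (start, end, piece), inclusive
                    i = j
                    break
        return pieces

    print(cut("回り道を行く", {"回り道", "を", "行く"}))
    # -> [(0, 2, '回り道'), (3, 3, 'を'), (4, 5, '行く')]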

66. The method of claim 64, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without using tags.

67. The method of claim 64, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

68. The method of claim 62, wherein the link is a hyperlink.

69. The method of claim 62, wherein the link is an address of the selected one of the external reference materials.

70. The method of claim 62, wherein the link is reference information for retrieving the selected one of the external reference materials.

71. The method of claim 62, wherein determining a display address of the selected discrete portion is done without using tags.

72. The method of claim 62, wherein determining a display address of the selected discrete portion is done without reference to any tags which may be located in the textual source material.

73. The method of claim 62, wherein determining a display address of the selected discrete portion is done without reference to any hierarchical information which may be located in the textual source material.

74. The method of claim 62, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without using tags.

75. The method of claim 62, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any tags which may be located in the textual source material.

76. The method of claim 62, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any hierarchical information which may be located in the textual source material.

77. The method of claim 62, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without using tags.

78. The method of claim 62, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

79. The method of claim 62, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any hierarchical information which may be located in the textual source material.

80. The method of claim 62, wherein the external reference materials comprise a plurality of text based external reference materials.

81. The method of claim 62, wherein the external reference materials comprise a plurality of image based external reference materials.

82. The method of claim 62, wherein the external reference materials comprise a plurality of graphic based external reference materials.

83. The method of claim 62, wherein the external reference materials comprise a plurality of audio based external reference materials.

84. The method of claim 62, wherein the external reference materials comprise a plurality of video based external reference materials.

85. The method of claim 62, wherein at least one of the external reference materials is a combination of two or more of text based external reference material, image based external reference material, graphic based external reference material, audio based external reference material, and video based external reference material.

86. The method of claim 62, wherein linking the plurality of discrete pieces is done manually.

87. The method of claim 62, wherein linking the plurality of discrete pieces is done automatically.

88. The method of claim 62, wherein the electronic database is an electronic relational database.

89. The method of claim 62, wherein the electronic database is an electronic file.

90. The method of claim 62, wherein the electronic database is electronic text.

91. The method of claim 62, wherein the beginning position address is a beginning location of the textual source material in the electronic database.

92. The method of claim 91, wherein each starting point address is a starting location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

93. The method of claim 91, wherein each ending point address is an ending location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

94. The method of claim 62, further comprising:

displaying the external reference materials corresponding to the identified one of the plurality of discrete pieces in a pop-up menu, prior to selecting one of the external reference materials corresponding to the identified one of the plurality of discrete pieces.

95. The method of claim 94, wherein the pop-up menu is displayed in the textual source material image next to the selected discrete portion.

96. The method of claim 94, wherein the selected one of the external reference materials is selected using the pop-up menu.

97. The method of claim 94, wherein the pop-up menu displays labels for the external reference materials corresponding to the identified one of the plurality of discrete pieces.

98. The method of claim 94, wherein the labels can each be selected in the pop-up menu to display the external reference materials corresponding to the identified one of the plurality of discrete pieces.

99. The method of claim 62, wherein the selected one of the external reference materials is a single word.

100. The method of claim 62, wherein each of the external reference materials is a single word.

101. A system for linking textual source material to external reference materials for display, the system comprising:

means for determining a beginning position address of a textual source material stored in an electronic database;
means for cutting the textual source material into a plurality of discrete pieces;
means for determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
means for recording in a look-up table the starting and ending point addresses;
means for linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
means for selecting a discrete portion of an image of the source material;
means for determining a display address of the selected discrete portion;
means for converting the display address of the selected discrete portion to an offset value from the beginning position address;
means for comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
means for selecting one of the at least one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces; and
means for displaying on a computer the selected one of the plurality of external reference materials.

102. The system of claim 101, wherein the means for linking links at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials on a word-by-word or phrase-by-phrase basis.

103. The system of claim 102, further comprising:

means for compiling the source material image from at least the plurality of discrete pieces; and
means for indexing at least one of the plurality of discrete pieces and corresponding links to the plurality of external reference materials.

104. The system of claim 103, further comprising:

means for building an index for each of the linked plurality of external reference materials.

105. The system of claim 104, wherein the look-up table links the identified one of the plurality of discrete pieces to at least a corresponding one of a plurality of external reference materials based upon the offset value.

106. The system of claim 105, wherein the identified one of the plurality of discrete pieces is identified based upon the offset value being within a range defined by the starting and ending point addresses of the identified one of the plurality of discrete pieces.

107. The system of claim 101, further comprising:

means for manipulating the source material image and the plurality of external reference materials with at least two user keys.

108. The system of claim 101, wherein cutting the textual source material into a plurality of discrete pieces is done manually.

109. The system of claim 101, wherein cutting the textual source material into a plurality of discrete pieces is done automatically.

110. The system of claim 109, wherein automatically cutting the textual source material into a plurality of discrete pieces is done using a grammar parser.

111. The system of claim 109, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without using tags.

112. The system of claim 109, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

113. The system of claim 101, wherein the link is a hyperlink.

114. The system of claim 101, wherein the link is an address of the selected one of the plurality of external reference materials.

115. The system of claim 101, wherein the link is reference information for retrieving the selected one of the plurality of external reference materials.

116. The system of claim 101, wherein determining a display address of the selected discrete portion is done without using tags.

117. The system of claim 101, wherein determining a display address of the selected discrete portion is done without reference to any tags which may be located in the textual source material.

118. The system of claim 101, wherein determining a display address of the selected discrete portion is done without reference to any hierarchical information which may be located in the textual source material.

119. The system of claim 101, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without using tags.

120. The system of claim 101, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any tags which may be located in the textual source material.

121. The system of claim 101, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any hierarchical information which may be located in the textual source material.

122. The system of claim 101, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without using tags.

123. The system of claim 101, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

124. The system of claim 101, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any hierarchical information which may be located in the textual source material.

125. The system of claim 101, wherein the plurality of external reference materials comprises a plurality of text based external reference materials.

126. The system of claim 101, wherein the plurality of external reference materials comprises a plurality of image based external reference materials.

127. The system of claim 101, wherein the plurality of external reference materials comprises a plurality of graphic based external reference materials.

128. The system of claim 101, wherein the plurality of external reference materials comprises a plurality of audio based external reference materials.

129. The system of claim 101, wherein the plurality of external reference materials comprises a plurality of video based external reference materials.

130. The system of claim 101, wherein at least one of the plurality of external reference materials is a combination of two or more of text based external reference material, image based external reference material, graphic based external reference material, audio based external reference material, and video based external reference material.

131. The system of claim 101, wherein linking at least one of the plurality of discrete pieces is done manually.

132. The system of claim 101, wherein linking at least one of the plurality of discrete pieces is done automatically.

133. The system of claim 101, wherein the electronic database is an electronic relational database.

134. The system of claim 101, wherein the electronic database is an electronic file.

135. The system of claim 101, wherein the electronic database is electronic text.

136. The system of claim 101, wherein the beginning position address is a beginning location of the textual source material in the electronic database.

137. The system of claim 136, wherein each starting point address is a starting location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

138. The system of claim 136, wherein each ending point address is an ending location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

139. The system of claim 101, further comprising:

means for displaying the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces in a pop-up menu, prior to selecting one of the at least one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces.

140. The system of claim 139, wherein the pop-up menu is displayed in the textual source material image next to the selected discrete portion.

141. The system of claim 139, wherein the selected one of the plurality of external reference materials is selected using the pop-up menu.

142. The system of claim 139, wherein the pop-up menu displays labels for the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces.

143. The system of claim 139, wherein the labels can each be selected in the pop-up menu to display the at least one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces.

144. The system of claim 101, wherein the at least one of the plurality of external reference materials is a single word.

145. The system of claim 101, wherein each of the plurality of external reference materials is a single word.

146. A computer-implemented method for linking textual source material to external reference materials for display, the method comprising the steps of:

determining a beginning position address of textual source material stored in an electronic database;
cutting the textual source material into a plurality of discrete pieces;
determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
recording in a look-up table the starting and ending point addresses;
linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
selecting a discrete portion of an image of the textual source material;
determining a display address of the selected discrete portion;
converting the display address of the selected discrete portion to an offset value from the beginning position address;
comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
selecting one of the at least one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces; and
displaying on a computer the selected one of the plurality of external reference materials.

147. The method of claim 146, wherein cutting the textual source material into a plurality of discrete pieces is done manually.

148. The method of claim 146, wherein cutting the textual source material into a plurality of discrete pieces is done automatically.

149. The method of claim 148, wherein automatically cutting the textual source material into a plurality of discrete pieces is done using a grammar parser.

150. The method of claim 148, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without using tags.

151. The method of claim 148, wherein automatically cutting the textual source material into a plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

152. The method of claim 146, wherein the link is a hyperlink.

153. The method of claim 146, wherein the link is an address of the selected one of the plurality of external reference materials.

154. The method of claim 146, wherein the link is reference information for retrieving the selected one of the plurality of external reference materials.

155. The method of claim 146, wherein determining a display address of the selected discrete portion is done without using tags.

156. The method of claim 146, wherein determining a display address of the selected discrete portion is done without reference to any tags which may be located in the textual source material.

157. The method of claim 146, wherein determining a display address of the selected discrete portion is done without reference to any hierarchical information which may be located in the textual source material.

158. The method of claim 146, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without using tags.

159. The method of claim 146, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any tags which may be located in the textual source material.

160. The method of claim 146, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any hierarchical information which may be located in the textual source material.

161. The method of claim 146, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without using tags.

162. The method of claim 146, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any tags which may be located in the textual source material.

163. The method of claim 146, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any hierarchical information which may be located in the textual source material.

164. The method of claim 146, wherein the plurality of external reference materials comprises a plurality of text based external reference materials.

165. The method of claim 146, wherein the plurality of external reference materials comprises a plurality of image based external reference materials.

166. The method of claim 146, wherein the plurality of external reference materials comprises a plurality of graphic based external reference materials.

167. The method of claim 146, wherein the plurality of external reference materials comprises a plurality of audio based external reference materials.

168. The method of claim 146, wherein the plurality of external reference materials comprises a plurality of video based external reference materials.

169. The method of claim 146, wherein at least one of the plurality of external reference materials is a combination of two or more of text based external reference material, image based external reference material, graphic based external reference material, audio based external reference material, and video based external reference material.

170. The method of claim 146, wherein linking at least one of the plurality of discrete pieces is done manually.

171. The method of claim 146, wherein linking at least one of the plurality of discrete pieces is done automatically.

172. The method of claim 146, wherein the electronic database is an electronic relational database.

173. The method of claim 146, wherein the electronic database is an electronic file.

174. The method of claim 146, wherein the electronic database is electronic text.

175. The method of claim 146, wherein the beginning position address is a beginning location of the textual source material in the electronic database.

176. The method of claim 175, wherein each starting point address is a starting location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

177. The method of claim 175, wherein each ending point address is an ending location of at least one of the plurality of discrete pieces based upon the beginning location of the textual source material.

178. The method of claim 146, further comprising:

displaying the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces in a pop-up menu, prior to selecting one of the at least one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces.

179. The method of claim 178, wherein the pop-up menu is displayed in the textual source material image next to the selected discrete portion.

180. The method of claim 178, wherein the selected one of the plurality of external reference materials is selected using the pop-up menu.

181. The method of claim 178, wherein the pop-up menu displays labels for the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces.

182. The method of claim 178, wherein the labels can each be selected in the pop-up menu to display the at least one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces.

183. The method of claim 146, wherein the at least one of the plurality of external reference materials is a single word.

184. The method of claim 146, wherein each of the plurality of external reference materials is a single word.

References Cited
U.S. Patent Documents
3872448 March 1975 Mitchell, Jr. et al.
4136395 January 23, 1979 Kolpek et al.
4318184 March 2, 1982 Millett et al.
4384288 May 17, 1983 Walton
4651300 March 17, 1987 Suzuki et al.
4674065 June 16, 1987 Lange et al.
4689768 August 25, 1987 Heard et al.
4742481 May 3, 1988 Yoshimura
4773009 September 20, 1988 Kucera et al.
4817050 March 28, 1989 Komatsu et al.
4837797 June 6, 1989 Freeny, Jr.
4864501 September 5, 1989 Kucera et al.
4868743 September 19, 1989 Nishio
4868750 September 19, 1989 Kucera et al.
4887212 December 12, 1989 Zamora et al.
4893270 January 9, 1990 Beck et al.
4914586 April 3, 1990 Swinehart et al.
4945476 July 31, 1990 Bodick et al.
4958283 September 18, 1990 Tawara et al.
4980855 December 25, 1990 Kojima
4982344 January 1, 1991 Jordan
4994966 February 19, 1991 Hutchins
5020019 May 28, 1991 Ogawa
5065315 November 12, 1991 Garcia
5088052 February 11, 1992 Spielman et al.
5128865 July 7, 1992 Sadler
5146439 September 8, 1992 Jachmann et al.
5146552 September 8, 1992 Cassorla et al.
5151857 September 29, 1992 Matsui
5157606 October 20, 1992 Nagashima
5204947 April 20, 1993 Bernstein et al.
5214583 May 25, 1993 Miike et al.
5218697 June 8, 1993 Chung
5222160 June 22, 1993 Sakai et al.
5226117 July 6, 1993 Miklos
5233513 August 3, 1993 Doyle
5241671 August 31, 1993 Reed et al.
5253362 October 12, 1993 Nolan et al.
5256067 October 26, 1993 Gildea et al.
5267155 November 30, 1993 Buchanan et al.
5289376 February 22, 1994 Yokogawa
5297249 March 22, 1994 Bernstein et al.
5303151 April 12, 1994 Neumann
5319711 June 7, 1994 Servi
5329446 July 12, 1994 Kugimiya et al.
5331555 July 19, 1994 Hashimoto et al.
5337233 August 9, 1994 Hofert et al.
5349368 September 20, 1994 Takeda et al.
5351190 September 27, 1994 Kondo
5361202 November 1, 1994 Doue
5367621 November 22, 1994 Cohen et al.
5375200 December 20, 1994 Dugan et al.
5377323 December 27, 1994 Vasudevan
5392386 February 21, 1995 Chalas
5404435 April 4, 1995 Rosenbaum
5404506 April 4, 1995 Fujisawa et al.
5408655 April 18, 1995 Oren et al.
5416901 May 16, 1995 Torres
5418942 May 23, 1995 Krawchuk et al.
5434974 July 18, 1995 Loucks et al.
5438655 August 1, 1995 Richichi et al.
5455945 October 3, 1995 Vanderdrift
5459860 October 17, 1995 Brunett et al.
5491783 February 13, 1996 Douglas et al.
5491784 February 13, 1996 Douglas et al.
5500859 March 19, 1996 Sharma et al.
5506984 April 9, 1996 Miller
5515534 May 7, 1996 Chuah et al.
5517409 May 14, 1996 Ozawa et al.
5530852 June 25, 1996 Meske, Jr. et al.
5530853 June 25, 1996 Schell et al.
5537132 July 16, 1996 Teraoka et al.
5537590 July 16, 1996 Amado
5541836 July 30, 1996 Church et al.
5546447 August 13, 1996 Skarbo et al.
5546529 August 13, 1996 Bowers et al.
5564046 October 8, 1996 Nemoto et al.
5576955 November 19, 1996 Newbold et al.
5581460 December 3, 1996 Kotake et al.
5583761 December 10, 1996 Chou
5603025 February 11, 1997 Tabb et al.
5606712 February 25, 1997 Hidaka
5608900 March 4, 1997 Dockter et al.
5617488 April 1, 1997 Hong et al.
5629981 May 13, 1997 Nerlikar
5640565 June 17, 1997 Dickinson
5644740 July 1, 1997 Kiuchi
5646416 July 8, 1997 Van De Velde
5649222 July 15, 1997 Mogilevsky
5657259 August 12, 1997 Davis et al.
5659676 August 19, 1997 Redpath
5666502 September 9, 1997 Capps
5694523 December 2, 1997 Wical
5708804 January 13, 1998 Goodwin et al.
5708822 January 13, 1998 Wical
5708825 January 13, 1998 Sotomayor
5724593 March 3, 1998 Hargrave, III et al.
5724597 March 3, 1998 Cuthbertson et al.
5727129 March 10, 1998 Barrett et al.
5729741 March 17, 1998 Liaguno et al.
5732229 March 24, 1998 Dickinson
5740252 April 14, 1998 Minor et al.
5745360 April 28, 1998 Leone
5745908 April 28, 1998 Anderson et al.
5754847 May 19, 1998 Kaplan et al.
5754857 May 19, 1998 Gadol
5761436 June 2, 1998 Nielsen
5761656 June 2, 1998 Ben-Shachar
5761659 June 2, 1998 Bertoni
5761689 June 2, 1998 Rayson
5764906 June 9, 1998 Edelstein et al.
5778363 July 7, 1998 Light
5781189 July 14, 1998 Holleran et al.
5781900 July 14, 1998 Shoji et al.
5781904 July 14, 1998 Oren et al.
5787386 July 28, 1998 Kaplan et al.
5793972 August 11, 1998 Shane
5794050 August 11, 1998 Dahlgren et al.
5794228 August 11, 1998 French et al.
5794259 August 11, 1998 Kikinis
5799267 August 25, 1998 Siegel
5799302 August 25, 1998 Johnson et al.
5802559 September 1, 1998 Bailey
5805886 September 8, 1998 Skarbo et al.
5805911 September 8, 1998 Miller
5806079 September 8, 1998 Rivette et al.
5815830 September 1998 Anthony
5819092 October 6, 1998 Ferguson et al.
5822539 October 13, 1998 Van Hoff
5822720 October 13, 1998 Bookman et al.
5826257 October 20, 1998 Snelling, Jr.
5832496 November 3, 1998 Anand et al.
5835059 November 10, 1998 Nadel et al.
5835089 November 10, 1998 Skarbo et al.
5845238 December 1, 1998 Fredenburg
5855007 December 29, 1998 Jovicic et al.
5859636 January 12, 1999 Pandit
5860073 January 12, 1999 Ferrel et al.
5862325 January 19, 1999 Reed et al.
5864848 January 26, 1999 Horvitz et al.
5870702 February 9, 1999 Yamabana
5870746 February 9, 1999 Knutson
5873107 February 16, 1999 Borovoy et al.
5875443 February 23, 1999 Nielsen
5875446 February 23, 1999 Brown et al.
5878421 March 2, 1999 Ferrel et al.
5884247 March 16, 1999 Christy
5884302 March 16, 1999 Ho
5884309 March 16, 1999 Vanechanos, Jr.
5892919 April 6, 1999 Nielsen
5893093 April 6, 1999 Wills
5895461 April 20, 1999 De La Huerga et al.
5896321 April 20, 1999 Miller et al.
5896533 April 20, 1999 Ramos et al.
5897475 April 27, 1999 Pace et al.
5900004 May 4, 1999 Gipson
5905866 May 18, 1999 Nakabayashi et al.
5905991 May 18, 1999 Reynolds
5907838 May 25, 1999 Miyasaka et al.
5913214 June 15, 1999 Madnick et al.
5920859 July 6, 1999 Li
5924090 July 13, 1999 Krellenstein
5926808 July 20, 1999 Evans et al.
5930471 July 27, 1999 Milewski et al.
5940843 August 17, 1999 Zucknovich et al.
5946647 August 31, 1999 Miller et al.
5953718 September 14, 1999 Wical
5963205 October 5, 1999 Sotomayor
5963940 October 5, 1999 Liddy et al.
5963950 October 5, 1999 Nielsen et al.
5970505 October 19, 1999 Ebrahim
5974413 October 26, 1999 Beauregard et al.
5983171 November 9, 1999 Yokoyama et al.
5987403 November 16, 1999 Sugimura
5987460 November 16, 1999 Niwa et al.
5987475 November 16, 1999 Murai
5999938 December 7, 1999 Bliss et al.
6006218 December 21, 1999 Breese et al.
6006242 December 21, 1999 Poole et al.
6014677 January 11, 2000 Hayashi et al.
6021403 February 1, 2000 Horvitz et al.
6022222 February 8, 2000 Guinan
6026088 February 15, 2000 Rostoker et al.
6026398 February 15, 2000 Brown et al.
6028605 February 22, 2000 Conrad et al.
6031537 February 29, 2000 Hugh
6038573 March 14, 2000 Parks
6047252 April 4, 2000 Kumano et al.
6055531 April 25, 2000 Bennett et al.
6061675 May 9, 2000 Wical
6067565 May 23, 2000 Horvitz
6076088 June 13, 2000 Paik et al.
6085201 July 4, 2000 Tso
6085226 July 4, 2000 Horvitz
6092074 July 18, 2000 Rodkin et al.
6094649 July 25, 2000 Bowen et al.
6108674 August 22, 2000 Murakami et al.
6122647 September 19, 2000 Horowitz et al.
6126306 October 3, 2000 Ando
6128635 October 3, 2000 Ikeno
6137911 October 24, 2000 Zhilyaev
6151624 November 21, 2000 Teare et al.
6154738 November 28, 2000 Call
6178434 January 23, 2001 Saitoh
6182133 January 30, 2001 Horvitz
6185550 February 6, 2001 Snow et al.
6185576 February 6, 2001 McIntosh
6233570 May 15, 2001 Horvitz et al.
6260035 July 10, 2001 Horvitz et al.
6262730 July 17, 2001 Horvitz et al.
6272505 August 7, 2001 De La Huerga
6289342 September 11, 2001 Lawrence et al.
6292768 September 18, 2001 Chan
6308171 October 23, 2001 De La Huerga
6311177 October 30, 2001 Dauerer et al.
6311194 October 30, 2001 Sheth et al.
6323853 November 27, 2001 Hedloy
6338059 January 8, 2002 Fields et al.
6373502 April 16, 2002 Nielsen
6438545 August 20, 2002 Beauregard et al.
6442545 August 27, 2002 Feldman et al.
6516321 February 4, 2003 De La Huerga
6519603 February 11, 2003 Bays et al.
6556984 April 29, 2003 Zien
6571241 May 27, 2003 Nosohara
6601026 July 29, 2003 Appelt et al.
6618733 September 9, 2003 White et al.
6625581 September 23, 2003 Perkowski
6629079 September 30, 2003 Spiegel et al.
6651059 November 18, 2003 Sundaresan et al.
6697824 February 24, 2004 Bowman-Amuah
6732090 May 4, 2004 Shanahan et al.
6732361 May 4, 2004 Andreoli et al.
7003522 February 21, 2006 Reynar et al.
7032174 April 18, 2006 Montero et al.
7130861 October 31, 2006 Bookman et al.
7287218 October 23, 2007 Knotz et al.
7496854 February 24, 2009 Hedloy
RE40731 June 9, 2009 Bookman et al.
20020035581 March 21, 2002 Reynar et al.
20020036654 March 28, 2002 Evans et al.
20020062353 May 23, 2002 Konno et al.
20020065891 May 30, 2002 Malik
20020091803 July 11, 2002 Imamura et al.
20020099730 July 25, 2002 Brown
20020184247 December 5, 2002 Jokela et al.
20030004909 January 2, 2003 Chauhan et al.
20030033290 February 13, 2003 Garner et al.
20030154144 August 14, 2003 Pokorny et al.
20030187587 October 2, 2003 Swindells et al.
20030212527 November 13, 2003 Moore et al.
Foreign Patent Documents
0 093 250 November 1983 EP
0 725 353 July 1996 EP
0 840 240 May 1998 EP
03-174653 July 1991 JP
04-220768 August 1992 JP
04-288674 October 1992 JP
04-320530 November 1992 JP
04-320551 November 1992 JP
05-012096 January 1993 JP
05-128157 May 1993 JP
95/04974 February 1995 WO
Other references
  • Allen, “Introduction to Natural Language Understanding”, Natural Language Understanding, Chapter 1, p. 1-19, The Benjamin/Cummings Publishing Company, Inc., 1988.
  • Almasi, et al., Highly Parallel Computing 2nd Edition, The Benjamin/Cummings Publishing Company, Inc., Chapter 2, p. 38-51 and 87-95, 1994.
  • Anonymous, “Hypertext Method”, IBM Technical Bulletin, 1 page, Oct. 1989.
  • Anonymous, Microsoft Corporation, Microsoft Word 97, screen printouts, pp. 1-4, 1997.
  • Baez, et al., “Portable Translator”, IBM Technical Bulletin, vol. 37, No. 11, p. 185-188, Nov. 1994.
  • Brookshear, Introduction to “Computer Science, An Overview”, Benjamin/Cummings Publishing, p. 17, 1988.
  • Cohen, et al., “Method for Automatic Analysis of Meter in (both) Poetry and Prose”, IBM Technical Bulletin, vol. 32, No. 9B, p. 224-226, Feb. 1990.
  • Dunnington, et al., “Methodology and Apparatus for Translation of Text to Sign Language Images”, IBM Technical Bulletin, vol. 37, No. 04A, p. 229-230, Apr. 1994.
  • Eisen, et al., “Multilingual Multimedia Hyperlink Network Design”, IBM Technical Bulletin, vol. 36, No. 09B, p. 471-472, Sep. 1993.
  • Eisen, et al., “OS/2 Presentation Manager Controls Enabled for Hypermedia Link Markers”, IBM Technical Bulletin, vol. 34, No. 10B, p. 433-434, Mar. 1992.
  • Elliott, “Tuning up HyperCard's Database Engine”, Supplement to Dr. Dobb's Journal, p. 39s-41s, Apr. 1993.
  • Foley, et al., Introduction to “Computer Graphics—Principles and Practice 2nd Ed. in C”, Addison-Wesley Publishing Company, Inc., p. 1-9, 1996.
  • Fuger et al., Proceedings of The First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Walt Disney World Dolphin Hotel, Orlando, FL, vol. I, p. 229-234, Jun. 27-Jun. 29, 1994.
  • Germain, et al., “Hypertext Document Update”, IBM Technical Bulletin, vol. 34, No. 8, p. 22-23, Jan. 1992.
  • Glinert, A Pumped-Up Publishing Pro, Computer Shopper, Computer Select, p. 462, Apr. 1997.
  • Goodman, Web Documents Without HTML, Computer Select, Computer Shopper, Apr. 1997.
  • Goose, et al., “Unifying Distributed Processing and Open Hypermedia through a Heterogeneous Communication Model”, University of Southampton, Technical Report No. 95-6, p. 1-12, Nov. 1995.
  • Marshall, Acrobat, Common Ground Extend Reach Beyond Document Viewing, InfoWorld, Computer Select, Apr. 21, 1997, p. 105.
  • Montana, “Automated Parameter Tuning for Interpretation of Synthetic Images”, Handbook of Genetic Algorithms, edited by Lawrence Davis, Chapter 19, p. 282-311, 1991.
  • Nelson, FrenchNow 3.0 Language-Learning Tool, Macworld Reviews, p. 83, Dec. 1995.
  • Takehi, “Implementing Memory Efficient Hypertext in Online Manual Tool”, IBM Technical Bulletin, vol. 33, No. 11, p. 259-263, Apr. 1991.
  • Weibel, Publish to Paper and the Web, Computer Select, Dec. 1996, PC/Computing, p. 130.
  • Winston, et al., Finding Patterns in Images, LISP 3rd Edition, Chapter 31, p. 456-483, Addison-Wesley Publishing Company, 1989.
  • Yankelovich, et al., “Reading and Writing the Electronic Book”, Computer, vol. 18, No. 10, p. 15-30, Oct. 1985.
  • AddressMate Automatic Envelope Addressing Program, User's Manual (1991).
  • User Manual for AddressMate and AddressMate Plus by AddressMate Software (1994-1995).
  • Apple Internet Address Detector User's Manual, Aug. 28, 1997.
  • Microsoft Word 97 Help File entitled Automatically check spelling and grammar as you type (1997).
  • Developer's Guide to Apple Data Detectors, Dec. 1997.
  • Novell GroupWise User's Guide for Windows 16-Bit, Version 5.2, MS 125993, Novell, Inc., Orem, Utah (1993).
  • Haak, Personal Computing—WordPerfect News: WordPerfect for Windows 7.0; A Sneak Preview, Colorado State University, Vector Academic Computing & Network Services, vol. 13, No. 3, Jan./Feb. 1996.
  • Thistlewaite, “Automatic Construction and Management of Large Open Webs”, Information Processing and Management, vol. 33, pp. 161-173 (published Mar. 1997).
  • Delivery and Retrieval Technology, Electronic Delivery, Document Management, Catalog Publishing, Page Layout, Hardware, Seybold Seminars Boston—Special Report, vol. 2, No. 8, Apr. 1994.
  • Seybold Seminars and Imprinta '92, part 1: RIPs and Recorders, Seybold Report on Publishing Systems, vol. 21, No. 12, p. 10, 1992.
  • Order re: Construction of Claim 8 of United States Patent No. 5,822,720 in the matter of Sentius Corporation v. Flyswat, Inc. in the United States District Court for the Northern District of California dated Apr. 4, 2002.
  • Order in the matter of Sentius Corporation v. Flyswat, Inc. in the United States District Court for the Northern District of California dated Aug. 5, 2002.
  • A sales brochure from Transparent Language of Hollis, NH about the Transparent Language System software, no date.
  • A sample screen from the software of Transparent Language of Hollis, NH, no date.
Patent History
Patent number: RE43633
Type: Grant
Filed: Jun 8, 2009
Date of Patent: Sep 4, 2012
Assignee: Sentius International LLC (McLean, VA)
Inventors: Marc Bookman (Palo Alto, CA), Brian Yamanaka (Mountain View, CA)
Primary Examiner: David Hudspeth
Assistant Examiner: Lamont Spooner
Attorney: Wilmer Cutler Pickering Hale and Dorr LLP
Application Number: 12/480,556