Prompting a user for input during a synchronous presentation of audio content and textual content

- Audible, Inc.

A selective synchronization service may facilitate the synchronous presentation of corresponding audio content and textual content. Corresponding words in companion items of audio and textual content may be selected for synchronous presentation. A corresponding word may be selected for synchronous audible and textual presentation according to any of a number of criteria. Further, a corresponding word may be selected for a modified synchronous presentation, in which the audible and/or textual presentation of the corresponding word is modified. Alternatively, a corresponding word may be selected for an audible presentation without a textual presentation, or a textual presentation without an audible presentation.

Description
BACKGROUND

Generally described, computing devices may facilitate the playback or display of items of content, such as audiobooks, electronic books, songs, videos, television programs, computer and video games, multi-media content, and the like. For example, an electronic book reader computing device (“e-reader”) may display an electronic book on a screen and/or play an audiobook through speakers or headphones.

BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects and many of the attendant advantages of the present disclosure will become more readily appreciated as the same become better understood by reference to the following detailed description when taken in conjunction with the accompanying drawings, wherein:

FIG. 1 is a block diagram depicting an illustrative network environment in which a selective synchronization service may operate.

FIG. 2 is a schematic diagram depicting an illustrative selective synchronization server.

FIG. 3A is a flow diagram depicting an illustrative routine for generating content synchronization information to facilitate the synchronous presentation of audio content and textual content.

FIG. 3B is a flow diagram depicting an illustrative subroutine for providing modifications to the synchronous presentation of audio content and textual content.

FIG. 4A is a flow diagram depicting an illustrative routine for synchronously presenting audio content and textual content.

FIG. 4B is a flow diagram depicting an illustrative subroutine for the modified synchronous presentation of audio content and textual content.

FIG. 5A, FIG. 5B, FIG. 5C and FIG. 5D are pictorial diagrams depicting illustrative synchronous presentations of audio content and textual content.

DETAILED DESCRIPTION

Computing devices may be configured to present multiple items of content in different media. For example, a user may use his or her user computing device to read an electronic book while listening to an audiobook of the same title, such as The Adventures of Tom Sawyer. The electronic book and the audiobook (or, more generally, any group of two or more items of content related to the same content title) may be referred to as "companion" items of content. In some approaches, the electronic book and the audiobook can be presented synchronously, such that a word in the electronic book is textually presented substantially while the same word in the audiobook is audibly presented (e.g., spoken by the narrator of the audiobook). Content synchronization information that indicates a corresponding presentation position for a corresponding word may be provided to facilitate the synchronous presentation of the companion items of content. Further information pertaining to the synchronization of companion items of content can be found in U.S. patent application Ser. No. 13/604,482, entitled "IDENTIFYING CORRESPONDING REGIONS OF CONTENT" and filed on Sep. 5, 2012; in U.S. patent application Ser. No. 13/604,486, entitled "SELECTING CONTENT PORTIONS FOR ALIGNMENT" and filed on Sep. 5, 2012; in U.S. patent application Ser. No. 13/070,313, entitled "SYNCHRONIZING DIGITAL CONTENT" and filed on Mar. 23, 2011; and in U.S. patent application Ser. No. 12/273,473, entitled "SYNCHRONIZATION OF DIGITAL CONTENT" and filed on Nov. 18, 2008. The disclosures of all four of these applications are hereby incorporated by reference in their entireties.
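
For concreteness, content synchronization information of the kind described above can be pictured as a list of alignment entries, each tying a word's position in the text to its playback interval in the audio. The following is a minimal sketch only, not the actual format used by the incorporated applications; all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SyncEntry:
    """One alignment entry in hypothetical content synchronization information."""
    word: str            # the corresponding word
    text_offset: int     # character offset of the word in the electronic book
    audio_start_ms: int  # playback position where the narrator begins the word
    audio_end_ms: int    # playback position where the narrator finishes the word
    corresponds: bool    # False for mismatched regions (e.g., front matter)

# A fragment of companion content for The Adventures of Tom Sawyer:
sync_info = [
    SyncEntry("Tom", 0, 12_000, 12_350, True),
    SyncEntry("said", 4, 12_350, 12_600, True),
    SyncEntry("nothing", 9, 12_600, 13_100, True),
]
```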

Generally described, aspects of the present disclosure relate to the selective or modified synchronous presentation of an item of audio content (such as an audiobook) with a companion item of textual content (such as an electronic book). Accordingly, a selective synchronization service is disclosed. In one embodiment, one or more corresponding words are identified in an item of textual content and an item of audio content. As used herein, a “corresponding word” may refer to a word that is audibly presented in an item of audio content at a presentation position that corresponds with a presentation position in an item of textual content at which the word is textually presented. One or more of the corresponding words may be selected for synchronous presentation and presented both audibly and textually by a computing device, while other corresponding words may be selected for either a textual presentation without an audible presentation (e.g., by muting the audio of the corresponding word or otherwise causing audio of the corresponding word not to be presented) or an audible presentation without a textual presentation (e.g., by not displaying the word or otherwise causing the corresponding word not to be presented in the text).

Corresponding words may be selected for synchronous audible and textual presentation according to any of a number of criteria. In some embodiments, the words selected for synchronous audible and textual presentation have a number of letters or syllables that satisfies a threshold. Advantageously, a user looking to improve his or her pronunciation or comprehension of relatively longer and more difficult words may hear those words audibly presented substantially while reading those words as they are textually presented. Relatively easy words (such as short words, or words that do not have a threshold number of letters or syllables), by contrast, may be presented textually without necessarily being presented audibly, as the user is likely to know how to pronounce or understand such easy words without further assistance.
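
As a rough illustration of this length criterion, the sketch below selects a word for audible presentation when its letter count or estimated syllable count meets a threshold. The vowel-run syllable counter is a crude heuristic assumed for illustration, not a method taken from this disclosure.

```python
import re

def estimated_syllables(word: str) -> int:
    """Crude heuristic: count runs of vowels as syllables (assumption, English only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def select_for_audible_presentation(word: str,
                                    min_letters: int = 8,
                                    min_syllables: int = 3) -> bool:
    """Select relatively long or polysyllabic words for synchronous audio."""
    return len(word) >= min_letters or estimated_syllables(word) >= min_syllables

# "perplexed" qualifies; "she" would be presented textually with its audio muted.
assert select_for_audible_presentation("perplexed")
assert not select_for_audible_presentation("she")
```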

Many variations on the synchronous presentation of textual content and audio content are possible. For example, a modification may be made to a textual presentation of the corresponding word, such that the corresponding word is highlighted, blanked out, or presented at a different presentation position in the text (e.g., out of order). The modification to the presentation of the corresponding word may also include making a substitution for the corresponding word in the text. For example, the corresponding word may be replaced with a homophone of the corresponding word, a misspelling of the corresponding word, an incorrect grammatical case of the corresponding word, an incorrect singular or plural form of the corresponding word, and the like.

A modification may also be made to an audible presentation of the corresponding word. As discussed above, this modification may include muting the audible presentation of the corresponding word. Other possible modifications include presenting the corresponding word at a different presentation rate than the rest of the audio content (e.g., slowing down or speeding up the audible presentation for the corresponding word); presenting the corresponding word one phoneme or syllable at a time (e.g., to help the user learn how to “sound out” the word); or presenting the corresponding word with an incorrect pronunciation (e.g., by substituting one or more phonemes or by altering the inflection of the audible presentation of the corresponding word).

In some embodiments, a computing device presenting the content may obtain user input responsive to the audible and textual presentation of the item of audio content and item of textual content. For example, the computing device may prompt the user to speak the corresponding word; spell the corresponding word (e.g., by speaking each letter out loud, typing in the word, etc.); provide input responsive to an incorrect form of a corresponding word; and so forth. The computing device may be configured to determine whether the user input constitutes an appropriate response (e.g., a correctly spelled or pronounced word). If the response is not an appropriate response, the computing device may optionally provide a hint to the user. Further, in some embodiments, the computing device may synchronously present a subsequent corresponding word only if the user provides a response. For example, the computing device may prompt for a user response at a set interval (e.g., every ten words, once a paragraph, etc.), which may advantageously ensure that the user is paying attention to the synchronous presentation.

It will be appreciated that the selective synchronization service may operate on many different types of content. Generally described, content can refer to any data that can be directly or indirectly accessed by a user, including, but not limited to, audiobooks, electronic books, songs, videos, television programs, computer and video games, multi-media content, digital images, digital video, displayable text, audio data, electronic documents, computer-executable code, blocks or portions of the above, and the like. Accordingly, an "item of textual content" may generally refer to any electronic item of content that includes text. Likewise, an "item of audio content" may generally refer to any electronic item of content that includes audio content.

Turning to FIG. 1, an illustrative network environment 100 is shown. The network environment 100 may include a user computing device 102, a network 106, a selective synchronization server 110 and a data store 112. The constituents of the network environment 100 may be in communication with each other either locally or over the network 106.

The user computing device 102 may be any computing device capable of communicating over the network 106, such as a laptop or tablet computer, personal computer, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, set-top box, camera, audiobook player, digital media player, video game console, in-store kiosk, television, one or more processors, integrated components for inclusion in computing devices, appliances, electronic devices for inclusion in vehicles or machinery, gaming devices, or the like. The user computing device 102 may generally be capable of presenting content to a user of the user computing device 102. For example, the user computing device 102 may be capable of playing audio content by directing audible output through speakers, headphones, or the like. The user computing device 102 may also be capable of displaying textual content, graphical content, or video content on a display screen.

In some embodiments, the user computing device 102 may also be configured to present textual content and companion audio or video content in a synchronized manner. The user computing device 102 may also be capable of communicating over the network 106, for example, to obtain content synchronization information from the selective synchronization server 110. In some embodiments, the user computing device 102 may include non-transitory computer-readable medium storage for storing content synchronization information and items of content, such as electronic books and audiobooks.

The selective synchronization server 110 is a computing device that may perform a variety of tasks to implement the selective synchronization service. For example, the selective synchronization server 110 may align an item of audio content (e.g., an audiobook) and an item of textual content (e.g., an electronic book) and generate content synchronization information that indicates one or more corresponding words in the item of audio content and the item of textual content. The selective synchronization server 110 may also select which corresponding words are to be synchronously presented and which corresponding words are to be presented in a modified manner, which selections may also be included in the content synchronization information. This content synchronization information may be provided by the selective synchronization server 110 to a user computing device 102 over the network 106, or stored in the data store 112. Additional operations of the selective synchronization server 110 are described in further detail with respect to FIG. 2.

The user computing device 102 and selective synchronization server 110 may each be embodied in a plurality of components, each executing an instance of the respective user computing device 102 or selective synchronization server 110. A server or other computing system implementing the user computing device 102 or selective synchronization server 110 may include a network interface, memory, processing unit and computer readable medium drive, all of which may communicate with each other by way of a communication bus. Moreover, a processing unit may itself be referred to as a computing device. The network interface may provide connectivity over the network 106 and/or other networks or computer systems. The processing unit may communicate to and from memory containing program instructions that the processing unit executes in order to operate the user computing device 102 or selective synchronization server 110. The memory generally includes RAM, ROM and/or other persistent and/or auxiliary non-transitory computer-readable storage media.

The selective synchronization server 110 may be in communication with a data store 112. The data store 112 may electronically store items of audio content and/or textual content, such as audiobooks, musical works, electronic books, television programs, video clips, movies, multimedia content, video games and other types of content. The data store 112 may additionally store content synchronization information and/or criteria for selecting words for synchronous or modified synchronous presentation. Selection criteria are discussed further below with respect to FIG. 3A and FIG. 3B.

The data store 112 may be embodied in hard disk drives, solid state memories and/or any other type of non-transitory computer-readable storage medium accessible to the selective synchronization server 110. The data store 112 may also be distributed or partitioned across multiple local and/or remote storage devices as is known in the art without departing from the scope of the present disclosure. In yet other embodiments, the data store 112 includes a data storage web service.

It will be recognized that many of the devices described herein are optional and that embodiments of the environment 100 may or may not combine devices. Furthermore, devices need not be distinct or discrete. Devices may also be reorganized in the environment 100. For example, the selective synchronization server 110 may be represented in a single physical server or, alternatively, may be split into multiple physical servers. The entire selective synchronization service may be represented in a single user computing device 102 as well.

Additionally, it should be noted that in some embodiments, the selective synchronization service is executed by one or more virtual machines implemented in a hosted computing environment. The hosted computing environment may include one or more rapidly provisioned and released computing resources, which computing resources may include computing, networking and/or storage devices. A hosted computing environment may also be referred to as a cloud computing environment.

FIG. 2 is a schematic diagram of the selective synchronization server 110 shown in FIG. 1. The selective synchronization server 110 includes an arrangement of computer hardware and software components that may be used to implement the selective synchronization service. FIG. 2 depicts a general architecture of the selective synchronization server 110 illustrated in FIG. 1. The selective synchronization server 110 may include more (or fewer) components than those shown in FIG. 2. It is not necessary, however, that all of these generally conventional components be shown in order to provide an enabling disclosure.

The selective synchronization server 110 includes a processing unit 202, a network interface 204, a non-transitory computer-readable medium drive 206 and an input/output device interface 208, all of which may communicate with one another by way of a communication bus. As illustrated, the selective synchronization server 110 is optionally associated with, or in communication with, an optional display 218 and an optional input device 220. The display 218 and input device 220 may be used in embodiments in which users interact directly with the selective synchronization server 110, such as an integrated in-store kiosk, for example. In other embodiments, the display 218 and input device 220 may be included in a user computing device 102 shown in FIG. 1. The network interface 204 may provide the selective synchronization server 110 with connectivity to one or more networks or computing systems. The processing unit 202 may thus receive information and instructions from other computing systems (such as the user computing device 102) or services via a network. The processing unit 202 may also communicate to and from memory 210 and further provide output information for the optional display 218 via the input/output device interface 208. The input/output device interface 208 may accept input from the optional input device 220, such as a keyboard, mouse, digital pen, touch screen, or gestures recorded via motion capture. The input/output device interface 208 may also output audio data to speakers or headphones (not shown).

The memory 210 contains computer program instructions that the processing unit 202 executes in order to implement one or more embodiments of the selective synchronization service. The memory 210 generally includes RAM, ROM and/or other persistent or non-transitory computer-readable storage media. The memory 210 may store an operating system 214 that provides computer program instructions for use by the processing unit 202 in the general administration and operation of the selective synchronization server 110. The memory 210 may further include other information for implementing aspects of the selective synchronization service. For example, in one embodiment, the memory 210 includes a user interface module 212 that facilitates generation of user interfaces (such as by providing instructions therefor) for display upon a computing device such as user computing device 102. The user interface may be displayed via a navigation interface such as a web browser installed on the user computing device 102. In addition, memory 210 may include or communicate with the data store 112. Content stored in the data store 112 may include items of textual content and items of audio content, as described in FIG. 1.

In addition to the user interface module 212, the memory 210 may include a selective synchronization module 216 that may be executed by the processing unit 202. In one embodiment, the selective synchronization module 216 may be used to implement the selective synchronization service, example operations of which are discussed below with respect to FIG. 3A, FIG. 3B, FIG. 4A and FIG. 4B.

In some embodiments, the selective synchronization service is implemented partially or entirely by the user computing device 102. Accordingly, the user computing device 102 may include a selective synchronization module 216 and other components that operate similarly to the components illustrated as part of the selective synchronization server 110, including a processing unit 202, network interface 204, non-transitory computer-readable medium drive 206, input/output interface 208, memory 210, user interface module 212 and so forth.

Turning now to FIG. 3A, an illustrative routine 300 for generating content synchronization information is shown. The illustrative routine 300 may be implemented by the user computing device 102, the selective synchronization server 110, or both. The content synchronization information may identify a corresponding presentation position at which a corresponding word is presented in both the item of textual content and the item of audio content. The content synchronization information may also direct a computing device to present corresponding words of the companion items of content synchronously and to cease synchronous presentation for mismatched words of the companion items of content. The content synchronization information may further direct the computing device to make modifications to the synchronous presentation as discussed above.

In one example of how content synchronization information facilitates the synchronous presentation of companion items of audio content and textual content, the content synchronization information may direct a user computing device 102 to synchronously present one or more corresponding words that are audibly presented in an item of audio content and textually presented in a companion item of textual content. Thus, the audio content may follow a user's progress in the textual content, so that the words spoken in the audio content line up with the words read by the user in the textual content. Optionally, the words in the textual content may be highlighted (or otherwise visually indicated) as they are spoken in the audio content to assist the user in following the presentation.

The illustrative routine 300 begins at block 302. At block 304, a portion of the item of textual content may be selected. Generally described, a portion of the textual content may include a word, phrase, sentence, paragraph, etc. Portions corresponding to words, phrases, or sentences may be identified using techniques such as statistical language models, finite grammars, optical character recognition to identify spaces (between words, sentences, paragraphs, etc.), or other techniques. In examples pertaining to the English language and many other languages, a word may be bounded by spaces on either side; a phrase may be bounded by punctuation, prepositions, conjunctions, or changes in word type (e.g., noun to verb indicating a change from subject to predicate); and sentences may be bounded at the beginning by a capital letter and at the end by a period, exclamation point, question mark, or the like.
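
The boundary rules described above lend themselves to a simple illustration. The following sketch splits text into sentences and words using whitespace and punctuation alone; an actual embodiment might instead rely on statistical language models or finite grammars, so this regex-based approach is an assumption.

```python
import re

def split_sentences(text: str) -> list[str]:
    """Split on sentence-final punctuation followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def split_words(sentence: str) -> list[str]:
    """Words are runs of letters (apostrophes allowed), bounded by spaces or punctuation."""
    return re.findall(r"[A-Za-z']+", sentence)

sample = "She looked perplexed for a moment. Then she laughed."
print(split_sentences(sample))  # ['She looked perplexed for a moment.', 'Then she laughed.']
print(split_words(sample))      # ['She', 'looked', 'perplexed', 'for', 'a', ...]
```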

At block 306, the illustrative routine 300 may determine whether the item of audio content includes a portion that corresponds to the portion of the textual content selected in block 304. In some embodiments, these portions correspond if the portion of the textual content includes at least a threshold percentage of words that correspond to words included in a portion of the audio content to which it is compared, as might be determined by comparing the portion of the textual content with a transcription of the portion of the audio content. This threshold percentage may be 50% corresponding words; 70% corresponding words; 95% corresponding words; or any other threshold percentage.
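
The correspondence test of block 306 can be illustrated as a bag-of-words comparison between the text portion and a transcription of the candidate audio portion. The 70% default and the multiset matching below are illustrative assumptions; the disclosure leaves the exact comparison open.

```python
from collections import Counter

def portions_correspond(text_words: list[str],
                        transcript_words: list[str],
                        threshold: float = 0.70) -> bool:
    """True if at least `threshold` of the text portion's words appear in the transcript."""
    if not text_words:
        return False
    transcript_counts = Counter(w.lower() for w in transcript_words)
    matched = 0
    for w in text_words:
        if transcript_counts[w.lower()] > 0:
            transcript_counts[w.lower()] -= 1
            matched += 1
    return matched / len(text_words) >= threshold

print(portions_correspond(["she", "looked", "perplexed"],
                          ["she", "looked", "confused"]))  # 2/3 ~ 0.67 -> False at 70%
```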

If the portions do not correspond, the illustrative routine 300 may proceed to block 308 and indicate in the content synchronization information that the textual content does not correspond to the audio content. Accordingly, in some embodiments, while the mismatched portion of the item of textual content is presented on a computing device provided with the content synchronization information, no audio content is presented by the computing device. The illustrative routine 300 may then proceed directly to block 312.

If the portion of the item of textual content does correspond to a portion of the item of audio content, the illustrative routine 300 may proceed to block 310 and indicate in the content synchronization information being generated that the portions correspond. Accordingly, in some embodiments, corresponding words present in the portion of the item of textual content and in the portion of the item of audio content may be selected for synchronous presentation or for a modification to their synchronous presentation. The illustrative routine 300 may proceed to block 350 to process each corresponding word.

Turning now to FIG. 3B, an illustrative subroutine 350 for selective synchronization is shown. The illustrative subroutine 350 starts at block 352. At block 354, the illustrative subroutine 350 selects a corresponding word that is present in both the portion of the item of textual content and in the portion of the item of audio content, as may be determined in block 306 of the illustrative routine 300.

At block 356, the illustrative subroutine 350 determines whether to modify the synchronous presentation of the corresponding word. A number of criteria may be applied to determine whether an audible or textual presentation of the corresponding word should be modified, or whether the corresponding word should be synchronously presented both audibly and textually. In some embodiments, a corresponding word is selected (or not selected) for a presentation modification if it includes a number of letters or syllables satisfying a threshold. In other embodiments, a corresponding word is selected (or not selected) for a presentation modification if it is a loanword from a language other than a language with which the item of content is associated (e.g., a language in which the item of content is primarily presented). In yet other embodiments, a corresponding word is selected (or not selected) for a presentation modification if it is included on a vocabulary list provided to the selective synchronization service. In still other embodiments, a corresponding word is selected (or not selected) for a presentation modification if it does not obey regular pronunciation rules for a language associated with the items of companion content (e.g., the word “colonel” for items of content associated with the English language). In further embodiments, a corresponding word is selected (or not selected) for a presentation modification if it has a particular part of speech (noun, verb, adverb, adjective, preposition, pronoun, etc.). In yet further embodiments, a corresponding word is selected (or not selected) for a presentation modification based on whether a previous corresponding word has been selected (or not selected) for a presentation modification. For example, a presentation modification may be provided for every other corresponding word, for every ten corresponding words, or for one corresponding word per sentence or paragraph, among other examples. Further criteria for selecting (or not selecting) corresponding words for a presentation modification may be applied. Additionally, user input (either from the user to whom the content is to be presented, or from a different user) may be obtained to determine whether the corresponding word should be presented in a synchronous or modified manner.
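
Read together, the criteria above amount to a configurable predicate evaluated for each corresponding word. A minimal sketch follows; the vocabulary list, loanword set, pacing interval and length threshold are hypothetical placeholders, and a real embodiment could also weigh part of speech, pronunciation regularity and user input.

```python
def should_modify(word: str,
                  index: int,
                  vocabulary_list: set[str],
                  loanwords: set[str],
                  every_nth: int = 10,
                  min_letters: int = 8) -> bool:
    """Return True if the corresponding word should get a presentation modification."""
    w = word.lower()
    return (len(w) >= min_letters          # length threshold
            or w in vocabulary_list        # appears on a supplied vocabulary list
            or w in loanwords              # loanword from another language
            or index % every_nth == 0)     # pacing: one modification every N words

# Hypothetical configuration:
vocab = {"perplexed"}
french_loanwords = {"champagne", "coterie"}
print(should_modify("coterie", 3, vocab, french_loanwords))  # True (loanword)
```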

If the illustrative subroutine 350 determines at block 356 that the synchronous presentation of the corresponding word is to be modified, the illustrative subroutine may proceed to block 358 to select a presentation modification to be indicated in the content synchronization information. The presentation modification may include a modification to the textual presentation of the corresponding word; a modification to the audible presentation of the corresponding word; or a modification to both the textual presentation and the audible presentation of the corresponding word. Further, multiple modifications may be selected for a single corresponding word.

Many modifications to the audible presentation of the corresponding word are possible. In some embodiments, the audible presentation of the corresponding word is modified by altering the volume of the corresponding word, which may include muting or otherwise decreasing the volume of the corresponding word, or may include increasing the volume of the corresponding word. In other embodiments, the audible presentation of the corresponding word is modified by presenting the corresponding word at a presentation rate that is faster or slower than the presentation rate (e.g., playback speed) at which the item of audio content is typically presented. In still other embodiments, the corresponding word may be broken down into fragments such as phonemes or syllables, and each phoneme or syllable may be separately audibly presented responsive to user input (e.g., the user speaking the phoneme or syllable). In yet further embodiments, the audible presentation of the corresponding word is modified by causing a mispronunciation of the corresponding word to be audibly presented. Still other modifications to the audible presentation of the corresponding word are possible.

Likewise, many modifications to the textual presentation of the corresponding word are also possible. In some embodiments, the textual presentation of the corresponding word is modified by replacing the corresponding word with a blank in the text. In other embodiments, the textual presentation of the corresponding word is modified by replacing the corresponding word with a homophone of the corresponding word; an incorrect grammatical case of the corresponding word; or a misspelling of the corresponding word. In still further embodiments, the textual presentation of the corresponding word is modified by placing the corresponding word out of order in the text (e.g., altering the presentation position in the text of the corresponding word). In yet further embodiments, the textual presentation of the corresponding word is modified by highlighting or otherwise indicating the corresponding word, which highlighting or indicating may differ from any highlighting or indicating provided by an unmodified synchronous presentation. Still other modifications to the textual presentation of the corresponding word are possible.
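
Several of these textual modifications operate at the string level and are easy to sketch. The blanking and homophone substitution below are illustrative only; the three-entry homophone table is a stub standing in for a fuller pronunciation lexicon.

```python
# Hypothetical homophone lookup; a real embodiment would use a pronunciation lexicon.
HOMOPHONES = {"their": "there", "pair": "pear", "one": "won"}

def blank_out(word: str) -> str:
    """Replace the word with a same-length blank, e.g. 'looked' -> '______'."""
    return "_" * len(word)

def substitute_homophone(word: str) -> str:
    """Replace the word with a homophone when one is known, else leave it unchanged."""
    return HOMOPHONES.get(word.lower(), word)

sentence = ["She", "looked", "at", "their", "boat"]
modified = [substitute_homophone(w) for w in sentence]
print(" ".join(modified))      # She looked at there boat
print(blank_out("perplexed"))  # _________
```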

At block 360, the illustrative subroutine 350 may optionally select a response type for the presentation modification. A user may be prompted to provide a response to the presentation modification if a response type is selected. The selected response type may vary based on the presentation modification selected in block 358. Specific, non-limiting examples of presentation modifications and their associated response types are shown below in Table 1:

TABLE 1

Selected Modification (Block 358): Corresponding word replaced with a blank in the text or with a misspelling in the text.
Selected Response Type (Block 360): User response includes spelling the corresponding word (speaking each letter or typing in the word).

Selected Modification (Block 358): Corresponding word replaced with a homophone in the text.
Selected Response Type (Block 360): User response includes spelling the corresponding word or selecting the corresponding word from a list that also includes the homophone.

Selected Modification (Block 358): Corresponding word replaced with a wrong grammatical case in the text.
Selected Response Type (Block 360): User response includes spelling the corresponding word or selecting the correct grammatical case of the corresponding word from a list that also includes one or more wrong grammatical cases.

Selected Modification (Block 358): Corresponding word presented in the text at an incorrect presentation position (word order).
Selected Response Type (Block 360): User response includes indicating the correct presentation position of the corresponding word (word order).

Selected Modification (Block 358): Corresponding word muted in the audio or replaced with a mispronunciation in the audio.
Selected Response Type (Block 360): User response includes speaking the word.

Selected Modification (Block 358): Corresponding word presented phoneme-by-phoneme or syllable-by-syllable in the audio.
Selected Response Type (Block 360): User response includes speaking each phoneme or syllable before the next phoneme or syllable is audibly presented.

Selected Modification (Block 358): Corresponding word replaced with a blank in the text and muted in the audio.
Selected Response Type (Block 360): User response includes a cloze exercise response (typed or spoken word).

At block 362, the illustrative routine 350 may determine an appropriate response that corresponds to the response type selected in block 360 for the presentation modification selected in block 358. Responses may be provided by a user via his or her user computing device 102 (e.g., by speaking to an audio input device provided with the user computing device 102, by typing in a response on a keyboard provided with the user computing device 102, by interacting with a touchscreen or mouse provided with the user computing device 102, etc.). Non-limiting examples of appropriate responses are shown below in Table 2.

TABLE 2

Selected Modification (Block 358): Corresponding word replaced with a blank in the text or with a misspelling in the text.
Selected Response Type (Block 360): User response includes spelling the word (speaking each letter or typing in the word).
Appropriate Response (Block 362): Correctly spelled corresponding word.

Selected Modification (Block 358): Corresponding word replaced with a homophone in the text.
Selected Response Type (Block 360): User response includes spelling the corresponding word or selecting the corresponding word from a list that also includes the homophone.
Appropriate Response (Block 362): Correctly spelled corresponding word or correctly selected corresponding word.

Selected Modification (Block 358): Corresponding word replaced with a wrong grammatical case in the text.
Selected Response Type (Block 360): User response includes spelling the corresponding word or selecting the correct grammatical case of the corresponding word from a list that also includes one or more wrong grammatical cases.
Appropriate Response (Block 362): Correctly spelled corresponding word or correctly selected grammatical case of the corresponding word.

Selected Modification (Block 358): Corresponding word presented in the text at an incorrect presentation position (word order).
Selected Response Type (Block 360): User response includes indicating the correct presentation position of the corresponding word (word order).
Appropriate Response (Block 362): Correct presentation position of the corresponding word (as may be indicated by the user "dragging" the corresponding word to the correct presentation position).

Selected Modification (Block 358): Corresponding word muted in the audio or replaced with a mispronunciation in the audio.
Selected Response Type (Block 360): User response includes speaking the word.
Appropriate Response (Block 362): Correctly pronounced corresponding word.

Selected Modification (Block 358): Corresponding word presented phoneme-by-phoneme or syllable-by-syllable in the audio.
Selected Response Type (Block 360): User response includes speaking each phoneme or syllable before the next phoneme or syllable is audibly presented.
Appropriate Response (Block 362): Correctly pronounced phoneme or syllable.

Selected Modification (Block 358): Corresponding word replaced with a blank in the text and muted in the audio.
Selected Response Type (Block 360): User response includes a cloze exercise response (typed or spoken word).
Appropriate Response (Block 362): Corresponding word or a synonym for the corresponding word.
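
Tables 1 and 2 together amount to a lookup from modification type to the response judged appropriate at block 362. A minimal sketch, assuming hypothetical key names and a simple normalization step in place of speech recognition:

```python
APPROPRIATE_RESPONSES = {
    # modification type -> how a user response is judged (hypothetical keys)
    "blank_in_text":     lambda resp, word: resp == word,  # cloze / spelling
    "homophone_in_text": lambda resp, word: resp == word,  # pick or spell the right word
    "muted_in_audio":    lambda resp, word: resp == word,  # spoken word, post-recognition
}

def is_appropriate(modification: str, user_response: str, corresponding_word: str) -> bool:
    """Normalize and compare the user's response against the expected answer."""
    check = APPROPRIATE_RESPONSES.get(modification)
    if check is None:
        return False
    return check(user_response.strip().lower(), corresponding_word.lower())

print(is_appropriate("blank_in_text", "Perplexed ", "perplexed"))  # True
```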

After determining an appropriate response at block 362, the illustrative subroutine 350 may proceed to block 366, which will be described below.

Returning to block 356, if the illustrative subroutine 350 does not determine that the synchronous presentation of the corresponding word should be modified, the illustrative subroutine 350 may proceed to block 364 and indicate in the content synchronization information being generated that the corresponding word should be presented synchronously and without modification. The illustrative subroutine 350 may then proceed directly to block 366.

At block 366, the illustrative subroutine 350 may determine if all corresponding words have been processed. If not, the illustrative subroutine 350 may return to block 354 and select another corresponding word to process. If all corresponding words present in both the portion of the item of textual content and in the portion of the item of audio content have been processed, the illustrative subroutine 350 finishes at block 368.

It should be noted that the illustrative subroutine 350 defaults to indicating in the content synchronization information that a corresponding word should be presented both audibly and textually if no modification is selected. However, in other embodiments, the illustrative subroutine 350 defaults to indicating in the content synchronization information that a corresponding word should be presented only textually or only audibly. In such embodiments, certain corresponding words may be selected for synchronous audible and textual presentation, while other corresponding words are selected to be presented only audibly or only textually. These selections of corresponding words for synchronous presentation may generally be made according to criteria similar to those used to select words for presentation modifications, as discussed above with respect to block 356.

Returning to FIG. 3A, upon completion of the illustrative subroutine 350, the illustrative routine 300 may proceed to block 312. At block 312, the illustrative routine 300 may determine whether all portions of textual content have been processed for purposes of generating content synchronization information. If not, the illustrative routine 300 returns to block 304. On the other hand, if all portions of textual content have been processed, the illustrative routine 300 finishes the generation of the content synchronization information in block 314.

Accordingly, the generated content synchronization information may include information indicating whether one, some, or all portions of the item of textual content correspond to a portion of the audio content. This generated content synchronization information may be used to facilitate the synchronous presentation of corresponding words present in the item of audio content and in the item of textual content. Likewise, the content synchronization information may include information pertaining to modifications to be made to the synchronous presentation.

Further information pertaining to the generation of content synchronization information may be found in U.S. patent application Ser. No. 13/604,482, entitled "IDENTIFYING CORRESPONDING REGIONS OF CONTENT" and filed on Sep. 5, 2012; in U.S. patent application Ser. No. 13/604,486, entitled "SELECTING CONTENT PORTIONS FOR ALIGNMENT" and filed on Sep. 5, 2012; and in U.S. patent application Ser. No. 13/070,313, entitled "SYNCHRONIZING DIGITAL CONTENT" and filed on Mar. 23, 2011. The disclosures of all three of these applications were previously incorporated by reference in their entireties above.

Based on the foregoing, a number of implementations of the selective synchronization service for specific use cases are possible, non-limiting examples of which are discussed herein. In one use case, synchronous audible and textual presentation is provided only for corresponding words that have a number of letters or a number of syllables that satisfies a threshold, while the audible presentation is muted for corresponding words that do not have a number of letters or a number of syllables that satisfies a threshold. Advantageously, a user may hear relatively difficult words presented audibly in conjunction with the text, so as to improve his or her pronunciation or reading skills. In another use case, synchronous audible and textual presentation is provided only for corresponding words that are loanwords from a language other than a language with which the companion items of content are associated, while the audible presentation is muted for corresponding words in the primary language of the companion items of content. For example, if the items of companion content are associated with the English language, a corresponding loanword associated with the French language (such as “champagne” or “coterie”) may be presented both audibly and textually, while corresponding words associated with the English language may only be presented textually. Still further use cases are possible.

Turning now to FIG. 4A, an illustrative routine 400 is shown for presenting companion items of audio and textual content according to the principles of the present disclosure. In some embodiments, the illustrative routine 400 is implemented by a user computing device 102 to cause presentation of the companion items of content.

At block 402, the illustrative routine 400 may obtain content synchronization information. For example, a user computing device 102 may obtain the content synchronization information from the selective synchronization server 110. Alternatively or additionally, the content synchronization information may be obtained by a user computing device 102 configured to generate content synchronization information. An illustrative routine 300 for generating content synchronization information is described above with respect to FIG. 3A.

As previously described, the content synchronization information can include information regarding positions in the item of textual content that correspond to positions in the item of audio content (e.g., a page and line in an electronic book and a playback position of an audiobook), additional information related to synchronous presentation (e.g., information for highlighting, underlining, or otherwise indicating a portion of an electronic book that corresponds to the playback of an audiobook), information identifying portions of the textual content and audio content that correspond or fail to correspond, or any combination thereof.

At block 404, the illustrative routine 400 may identify a word at the current presentation position in the text. The presentation position of the text may be measured on a word-by-word basis, page-by-page basis, or by any other metric.

At block 406, the illustrative routine 400 may determine whether the word at the current presentation position of the text corresponds to a word in the audio content, as may be indicated by the content synchronization information.

If the word at the current presentation position of the text does not correspond to a word in the audio content, the word of the textual content may be presented in block 408. It should be appreciated that textual content may be presented in several ways, including visually (e.g., as text on a screen) or tactilely (e.g., via mechanical vibrations and/or by presenting Braille), or a combination thereof. As discussed above, an item of textual content may be any electronic item of content that includes text, such as electronic books, periodicals, scripts, librettos and the like, or blocks or portions thereof. The illustrative routine 400 may then proceed to block 418.

If the word at the current presentation position of the text does correspond to a word of the audio content, the illustrative routine 400 may proceed to block 410 and determine whether a presentation modification is indicated in the content synchronization information for the corresponding word. If no presentation modification is indicated, the illustrative routine 400 may cause a synchronous audible and textual presentation of the corresponding word at block 414. As the audio of the corresponding word is presented, the presentation position of the audio (as might be measured by a timestamp or other metric) may be updated at block 416. The illustrative routine 400 may then proceed to block 418.
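
The per-word control flow of blocks 404 through 416 might be sketched as follows, with printing standing in for actual audible and textual output; the sync_info and modifications mappings are hypothetical stand-ins for content synchronization information.

```python
def present_content(words, sync_info, modifications):
    """Walk the text word-by-word, mirroring blocks 404-416 of routine 400.

    `sync_info` maps text index -> audio interval for corresponding words;
    `modifications` maps text index -> modification type. Both are hypothetical.
    """
    audio_position_ms = 0
    for i, word in enumerate(words):
        entry = sync_info.get(i)
        if entry is None:
            print(f"[text only] {word}")                    # block 408: no corresponding audio
            continue
        if i in modifications:
            print(f"[modified:{modifications[i]}] {word}")  # subroutine 450
        else:
            print(f"[text+audio] {word}")                   # block 414: synchronous presentation
        audio_position_ms = entry["end_ms"]                 # block 416: advance audio position
    return audio_position_ms

words = ["She", "looked", "perplexed"]
sync = {0: {"end_ms": 500}, 1: {"end_ms": 900}, 2: {"end_ms": 1600}}
present_content(words, sync, {2: "highlight_and_slow"})
```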

If a presentation modification is indicated, the illustrative routine 400 may proceed to implement an illustrative modified presentation subroutine 450. Turning now to FIG. 4B, the illustrative subroutine 450 may begin at block 452. At block 454, the illustrative subroutine 450 may cause a modified textual or audible presentation of the corresponding word.

If the content synchronization information indicates that the user is to provide a response to the modification, at block 456, the illustrative subroutine 450 may prompt the user for a response to the modification. A prompt may be provided to the user in several ways via the user computing device 102. In some embodiments, the user computing device 102 audibly conveys the prompt. For example, a speech synthesizer may be used to generate an audio prompt to be played back to the user through speakers or headphones. In other embodiments, the user computing device 102 presents the prompt as visual content on a display screen. For example, the prompt may be posed as text, an image, a video clip, an animation, or in some other visual format. Still other ways to present the prompt are possible.

At block 458, the illustrative subroutine 450 may receive the user's response via a user computing device 102 or other computing device implementing the illustrative subroutine 450. In some embodiments, the user may interact with an input device associated with the user computing device 102 to provide a response. The user may direct input through a mouse, keyboard, touchscreen, or other input device to interact with a user interface configured to receive the user's response. For example, the selective synchronization service may display on a touchscreen the prompt and one or more software controls indicating response choices to the prompt (e.g., a list of possible responses). The user may tap one of the software controls to indicate his or her response. In another example, the user may be prompted to input a word. The user may type an answer on a software or hardware keyboard, or write on a touchscreen with a stylus or finger to provide a response. In yet another example, the user may speak a response into a microphone of the user computing device. Speech recognition techniques known in the art may be used to convert the user's spoken response into data for processing. For example, the user may be asked to spell a word out loud or to sound out a word. The user may speak each letter, phoneme, or syllable of the word, with the spoken letters, phonemes, or syllables received through the microphone of the user computing device 102. Still other ways of receiving a user response through an input device are possible. In a still further example, the user may physically manipulate the user computing device itself as a response. For example, the user computing device may include an accelerometer, gyroscope, infrared proximity sensor, or other hardware or software for detecting motion.

At block 460, the illustrative subroutine 450 may determine whether the response provided by the user is an appropriate response. In some embodiments, the content synchronization information includes an indication of the appropriate response to a modification, substantially as discussed above with respect to FIG. 3B. The illustrative subroutine 450 may compare the user's response to the appropriate response indicated in the content synchronization information to determine whether the user provided the appropriate response.

If the user's response is neither substantially similar to nor identical to the appropriate response indicated in the content synchronization information, the illustrative subroutine 450 may optionally proceed to block 462, in which a hint may be provided to the user. The hint provided may vary based on the response type. If the user was prompted to spell a corresponding word, the hint may include providing one or more letters of the corresponding word to the user. If the user was prompted to speak the corresponding word, the hint may include audibly presenting one or more phonemes or syllables of the word to the user. If the user was prompted to select a response from a list of possible responses, one or more inappropriate or incorrect responses may be removed from the list. Still other types of hints are possible. Once the hint has been provided in block 462, the illustrative subroutine 450 may receive another user response in block 458.
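
For a spelling prompt, the hint of block 462 might reveal one additional letter per failed attempt, as in the following sketch (the reveal-one-letter policy is an assumed example, not mandated by the disclosure):

```python
def spelling_hint(word: str, attempts: int) -> str:
    """Reveal one leading letter per failed attempt, blanking the rest."""
    revealed = min(attempts, len(word))
    return word[:revealed] + "_" * (len(word) - revealed)

for attempt in range(1, 4):
    print(spelling_hint("perplexed", attempt))
# p________
# pe_______
# per______
```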

If the user's response is substantially similar to or identical to the appropriate response indicated in the content synchronization information, the illustrative subroutine 450 may proceed to block 464, in which the corresponding word may optionally be audibly and/or textually presented without modification. The illustrative subroutine 450 may then finish in block 466. Once the illustrative subroutine 450 has been completed, the illustrative routine 400 may proceed to block 416 shown in FIG. 4A, at which the presentation position of the audio content may be updated as discussed above. The illustrative routine 400 may then proceed to block 418.

At block 418, the presentation position of the textual content may be updated. In a specific example, this may include turning the page of an electronic book in block 418 when the playback of an audiobook has advanced in block 416 beyond the text associated with a page being displayed or to the end of the text associated with the page being displayed. In some embodiments, the presentation position of the audio content is continually updated based on the content synchronization information and the presentation position of the textual content, for example, as previously described. In other embodiments, updating the presentation position of the textual content may include simply indicating that the word has been presented to the user.

At block 420, the illustrative routine 400 may determine whether the textual content is still being presented. If so, the illustrative routine 400 may return to block 404 and present the textual content from the updated position determined in block 418. The illustrative routine 400 may then determine in block 406 whether a word of the textual content at the updated presentation position corresponds to a word in the audio content at a corresponding presentation position, and so forth. If the textual content is no longer being presented (e.g., a user of the user computing device 102 may have turned the user computing device 102 off, or may have closed an application used to present content), then the illustrative routine 400 may finish at block 422.

As discussed above, several use cases may be achieved by the selective synchronization service, as illustrated in FIG. 5A, FIG. 5B, FIG. 5C and FIG. 5D, in which identical reference numbers refer to similar or identical elements. The user computing device 102 may audibly present audio in an item of audio content via one or more speakers 502 and/or one or more audio outputs, which may be provided to speakers or headphones. The user computing device 102 may also textually present a companion item of textual content with which an item of audio content is synchronized on the display 500. In some embodiments, a corresponding word is synchronously audibly and textually presented as discussed above. The corresponding word (and one or more nearby words) may optionally be highlighted in the text on the display 500 as the corresponding word is audibly presented via the speakers 502. In some embodiments, highlighting is only provided for a corresponding word that is synchronously audibly and textually presented, while in other embodiments, highlighting is provided both for corresponding words that are synchronously audibly and textually presented and for words that are textually presented without necessarily being audibly presented.

With specific reference to the example shown in FIG. 5A, certain words of the text that are textually presented on the display 500 have been selected for textual presentation without audible presentation. The user computing device 102 may cause a textual presentation 504 of the words “She looked,” which textual presentation may optionally include highlighting the textually presented words. An audible presentation of the words “She looked” is not provided in this example, however, even though the item of audio content may include the words “she looked” at a presentation position that corresponds to the presentation position in the item of textual content of the words “She looked.” However, for the word “perplexed,” a synchronous textual presentation 506A and audible presentation 506B may be provided by the user computing device, such that the word “perplexed” is displayed (and optionally highlighted) in the text at least substantially while the word “perplexed” is audibly presented. The user computing device 102 may further cause a textual presentation 508 of the words “for a moment,” which textual presentation may optionally include highlighting the textually presented words. Again, an audible presentation of the words “for a moment” is not provided in this example, even though the item of audio content may include the words “for a moment” at a presentation position that corresponds to the presentation position in the item of textual content of the words “for a moment.”

In some examples, the corresponding words for which synchronous audible and textual presentation is to be provided may be relatively spread apart in terms of presentation positions. Highlighting may not necessarily be provided for any words between the corresponding words to be synchronously presented. Accordingly, the user computing device 102 may be configured to estimate a time at which the user reaches the synchronously presented corresponding words based on the user's average reading speed (as may be measured in words per unit time, pages per unit time, etc.) and on the number of words between the synchronously presented corresponding words. Further information about estimating a user's reading speed may be found in U.S. patent application Ser. No. 13/536,711, entitled “PACING CONTENT” and filed on Jun. 28, 2012; and in U.S. patent application Ser. No. 13/662,306, entitled “CONTENT PRESENTATION ANALYSIS” and filed on Oct. 26, 2012. The disclosures of both of these applications are hereby incorporated by reference in their entireties.
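
The arrival-time estimate reduces to simple arithmetic: with n intervening words and a measured reading speed of r words per minute, the next synchronously presented word is expected roughly n/r minutes out. A minimal sketch (the 250 words-per-minute default is an assumption):

```python
def seconds_until_next_sync_word(words_between: int,
                                 reading_speed_wpm: float = 250.0) -> float:
    """Estimate when the reader will reach the next synchronously presented word."""
    return (words_between / reading_speed_wpm) * 60.0

# 75 intervening words at 250 wpm -> audio cued roughly 18 seconds from now.
print(seconds_until_next_sync_word(75))  # 18.0
```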

Turning now to FIG. 5B, the user computing device 102 may cause a synchronous textual presentation 510A and audible presentation 510B of one or more corresponding words. When the presentation position of a subsequent corresponding word 512 is reached, the synchronous presentation may be halted, and the user may be provided with a prompt 514 to speak the corresponding word 512 (e.g., the corresponding word may be an appropriate response to the modification of muting the audio presentation of the corresponding word). Once the user speaks the corresponding word 512 in response to the prompt 514, the synchronous presentation of subsequent corresponding words may continue. Optionally, the user computing device 102 may determine whether the user pronounced the corresponding word correctly, and/or may cause an audible presentation of the corresponding word responsive to the user speaking the corresponding word.

With reference to FIG. 5C, the user computing device 102 may again cause a synchronous textual presentation 520A and audible presentation 520B of one or more corresponding words. Here, however, the audible presentation of the word "looked" has been muted, and the word "looked" has been replaced in the text with a blank 522. The user may be provided with a prompt 524 to "fill in the blank." The appropriate response may include, for example, a spoken or typed word that is either the corresponding word or a synonym for the corresponding word. Once the user provides an appropriate response, the synchronous presentation of subsequent corresponding words may continue. Advantageously, a cloze exercise implementation of the selective synchronization service may be achieved. Optionally, the user computing device 102 may cause an audible, textual, or synchronous audible and textual presentation of the corresponding word responsive to the user providing a response.

Turning now to FIG. 5D, the user computing device 102 may cause a synchronous presentation of one or more corresponding words, as indicated by textual presentations 532A and 536A and audible presentations 532B and 536B. The textual presentations 532A and 536A may include a first form of highlighting (or other indication) to help the user keep his or her place in the synchronous presentation, such that a corresponding word in the item of textual content is displayed and highlighted while the corresponding word is spoken in the item of audio content. However, a selected corresponding word may be indicated with a different type of highlighting, as shown by modified textual presentation 534A, in which the word "perplexed" is highlighted differently from other synchronously presented corresponding words. Likewise, the selected corresponding word may be presented synchronously, but at a different volume or presentation rate than other corresponding words, as indicated by modified audible presentation 534B.

The synchronous presentations and modified synchronous presentations shown in and discussed with respect to FIG. 5A, FIG. 5B, FIG. 5C and FIG. 5D are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Many other use cases are possible and are within the scope of the present disclosure.

For illustrative purposes, the content synchronization information discussed herein includes indications of modifications to the synchronous presentation of one or more corresponding words. However, in some embodiments, the user computing device 102 may obtain content synchronization information that indicates corresponding presentations of corresponding words, without necessarily indicating any selective synchronizations or modifications to the synchronous presentation. Rather, the user computing device 102 may be configured to select for synchronous presentation one or more corresponding words as indicated in the content synchronization information. The user computing device 102 may also be configured to modify an audible or textual presentation of a corresponding word indicated in the content synchronization information. Further, in some embodiments, the user computing device 102 (or other computing device implementing the selective synchronization service) may not generate or obtain content synchronization information at all, but may instead dynamically determine a synchronization between an item of audio content and an item of textual content. Example techniques for synchronizing content are discussed in U.S. patent application Ser. No. 13/604,482, entitled “IDENTIFYING CORRESPONDING REGIONS OF CONTENT” and filed on Sep. 5, 2012; and in U.S. patent application Ser. No. 13/604,486, entitled “SELECTING CONTENT PORTIONS FOR ALIGNMENT” and filed on Sep. 5, 2012. The disclosures of both of these applications were previously incorporated by reference in their entireties above.

Additionally, various embodiments of the selective synchronization service discussed herein refer to a “corresponding word” for illustrative purposes. However, the selective synchronization service may also provide for the synchronous or modified synchronous presentation of one or more corresponding phrases, sentences, or paragraphs, each of which may be a phrase, sentence, or paragraph that has a corresponding presentation position in an item of textual content and an item of audio content. A corresponding phrase, sentence, or paragraph may include one or more corresponding words. In an application of these embodiments, a particular corresponding word and one or more corresponding words near the particular corresponding word may be selected for synchronous audible and textual presentation. Advantageously, the user may hear and read the particular corresponding word in the context of the one or more nearby corresponding words.
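
A minimal sketch of selecting a particular corresponding word together with nearby corresponding words might look like the following; the window radius and the word list are illustrative assumptions.

    # Illustrative context window: the selected corresponding word plus up to
    # `radius` corresponding words on each side of it.
    def context_window(words, index, radius=2):
        lo = max(index - radius, 0)
        hi = min(index + radius + 1, len(words))
        return words[lo:hi]

    words = ["Tom", "looked", "perplexed", "for", "a", "moment"]
    print(context_window(words, index=2))  # ['Tom', 'looked', 'perplexed', 'for', 'a']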

While the present disclosure discusses examples of synchronously presenting content for illustrative purposes, the principles and advantages described herein may be applied to other ways of synchronizing content. Any combination of features described herein may be applied to other forms of content synchronization, as appropriate. For example, content synchronization information can be used to switch back and forth between presenting audio content and textual content. More specifically, in some embodiments, a computing device can display the text of an electronic book and then switch to playing the audio of an audiobook at a corresponding position using the content synchronization information. As another example, the principles and advantages described herein can be used to synchronize companion content on different computing devices outside the context of synchronously presenting companion content. For instance, any combination of features described herein can be applied to any of the examples of synchronizing content on different computing devices described in U.S. patent application Ser. No. 13/070,313, filed on Mar. 23, 2011, entitled “SYNCHRONIZING DIGITAL CONTENT,” and in U.S. patent application Ser. No. 12/273,473, filed Nov. 18, 2008, entitled “SYNCHRONIZATION OF DIGITAL CONTENT.” These applications were previously incorporated by reference in their entireties above.
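
As a hedged sketch of such switching, a device might look up the audiobook timestamp that corresponds to the reader's current position in the text; the offset and timestamp lists and the bisect-based lookup are assumptions of this example.

    # Hypothetical text-to-audio switch: find the audiobook timestamp for the
    # corresponding word at or before the reader's current character offset.
    import bisect

    text_offsets = [0, 4, 11, 15]          # offsets of synced words in the text
    audio_starts_ms = [0, 310, 720, 1040]  # matching audiobook positions (ms)

    def audio_position_for(text_offset):
        i = bisect.bisect_right(text_offsets, text_offset) - 1
        return audio_starts_ms[max(i, 0)]

    print(audio_position_for(5))  # 310: resume listening at the second word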

Items of companion content may be acquired and stored on the user computing device 102 in a variety of ways, such as by purchasing, streaming, borrowing, checking out, renting, permanently leasing, temporarily leasing, or otherwise obtaining temporary or permanent access to items of companion content. In one specific example, a user may have purchased both an electronic book and an audiobook from a network-based retail content provider. In another specific example, the user may check out an audiobook and synchronously present the audiobook with an electronic book that the user has purchased. In another specific example, the user may lease an audiobook and synchronously present the audiobook with a borrowed electronic book.

Many of the operations of the selective synchronization service are sufficiently mathematically or technically complex that one or more computing devices may be required to carry them out. In particular, presenting digital content, communicating over a network and synchronizing content may effectively require resort to one or more computing devices.

All of the methods and processes described above may be embodied in, and fully automated via, software code modules executed by one or more general purpose computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in specialized computer hardware.

Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.

Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc. may be either X, Y, or Z, or a combination thereof. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present.

Any routine descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the routine. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially synchronously or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.

It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. Nothing in the foregoing description is intended to imply that any particular feature, characteristic, component, step, module, or block is necessary or indispensable. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims

1. A system comprising:

an electronic data store configured to store: an audiobook; and an electronic book that is a companion to the audiobook; and
a computing device, comprising a physical processor, that is in communication with the electronic data store, the computing device configured to:
identify a plurality of words that correspond between the audiobook and the electronic book, wherein each of said plurality of words occurs in both the audiobook and the electronic book in identical order;
select a word, from the plurality of words, for modified presentation;
cause textual presentation of the plurality of words;
during the textual presentation of the plurality of words, cause audible presentation of one or more words, from the plurality of words, that precede the word selected for modified presentation without causing audible presentation of the word selected for modified presentation;
prompt a user to speak the word selected for modified presentation;
obtain a spoken response as audio input;
determine that the spoken response includes the word selected for modified presentation; and
subsequent to determining that the spoken response includes the word selected for modified presentation, cause audible presentation of one or more words, from the plurality of words, that follow the word selected for modified presentation.

2. The system of claim 1, wherein the computing device is configured to cause a synchronous audible and textual presentation of each of the plurality of words other than the word selected for modified presentation.

3. The system of claim 2, wherein the computing device is configured to cause the synchronous audible and textual presentation in part by:

at least substantially while causing audible presentation of a first word of the plurality of words, causing a highlighted textual presentation of the first word.

4. The system of claim 1, wherein the word selected for modified presentation is selected from the plurality of words based at least in part on user input.

5. The system of claim 1, wherein the plurality of words is less than all words that correspond between the audiobook and the electronic book.

6. The system of claim 1, wherein the audio input is received from a microphone, wherein the computing device is further configured to identify a spoken word in the audio input at least in part by applying a speech recognition technique to one or more spoken phonemes in the audio input.

7. A computer-implemented method comprising:

under control of one or more computing devices configured with specific computer-executable instructions,
identifying a first corresponding word, wherein the first corresponding word occurs in both an item of textual content and in an item of audio content;
identifying a second corresponding word, wherein the second corresponding word occurs after the first corresponding word in both the item of textual content and in the item of audio content;
identifying a third corresponding word, wherein the third corresponding word occurs after the second corresponding word in both the item of textual content and in the item of audio content;
causing a synchronous audible and textual presentation of the first corresponding word;
causing a textual presentation of the second corresponding word without synchronously audibly presenting the second corresponding word;
prompting a user to speak the second corresponding word;
obtaining a spoken response as audio input;
determining that the spoken response includes the second corresponding word; and
in response to determining that the spoken response includes the second corresponding word, causing audible presentation of the third corresponding word.

8. The computer-implemented method of claim 7, wherein the first corresponding word and the second corresponding word are separated by a predetermined number of words in the textual content.

9. The computer-implemented method of claim 7 further comprising:

prior to obtaining the spoken response as audio input:
receiving an initial spoken response as audio input;
determining that the initial spoken response does not comprise the second corresponding word; and
providing a hint regarding the second corresponding word.

10. The computer-implemented method of claim 7, wherein the second corresponding word comprises at least a first phoneme and a second phoneme, the computer-implemented method further comprising:

causing an audible presentation of the first phoneme;
prompting a user to speak the first phoneme;
responsive to the user speaking the first phoneme, causing an audible presentation of the second phoneme;
prompting the user to speak the second phoneme; and
responsive to the user speaking the second phoneme, causing an audible presentation of the second corresponding word.

11. The computer-implemented method of claim 7 further comprising:

before causing textual presentation of the second corresponding word, prompting a user to input a word; and
determining that the input word is substantially identical to the second corresponding word or that the input word is a synonym for the second corresponding word;
wherein the textual presentation of the second corresponding word is only caused if the input word is substantially identical to the second corresponding word or the input word is a synonym for the second corresponding word.

12. The computer-implemented method of claim 7 further comprising obtaining content synchronization information pertaining to the item of textual content and the item of audio content, and wherein the first corresponding word is identified based at least in part on the content synchronization information.

13. The computer-implemented method of claim 7, further comprising:

causing textual presentation of the third corresponding word prior to prompting the user to speak the second corresponding word.

14. A system comprising:

an electronic data store configured to store content synchronization information, wherein the content synchronization information indicates a plurality of words that occur in both an item of textual content and in an item of audio content; and
a computing device, comprising a physical processor, that is in communication with the electronic data store, the computing device configured to:
select a word, from the plurality of words, for modified audible presentation;
cause a textual presentation of one or more words, from the plurality of words, that precede the word selected for modified audible presentation;
prompt a user to input the word selected for modified audible presentation;
receive a response to the prompt as one of audio input or textual input;
determine that the response includes the word selected for modified audible presentation; and
subsequent to determining that the response includes the word selected for modified audible presentation, cause audible presentation of one or more words, from the plurality of words, that follow the word selected for modified audible presentation.

15. The system of claim 14, wherein the computing device is further configured to cause modified audible presentation of the word selected for modified audible presentation by muting the word during audible presentation of the audio content.

16. The system of claim 15, wherein the prompt requests that the user speak the word selected for modified audible presentation, and wherein the response is received as speech from a microphone.

17. The system of claim 16, wherein the computing device is further configured to:

determine that the speech comprises the word selected for modified audible presentation;
determine that the word was spoken with an incorrect pronunciation; and
provide a pronunciation hint to the user.

18. The system of claim 14, wherein:

the one or more words that precede the word selected for modified audible presentation are audibly presented at a first presentation rate; and
the computing device is configured to cause modified audible presentation of the word selected for modified audible presentation at a second presentation rate.

19. The system of claim 18, wherein the second presentation rate is slower than the first presentation rate.

20. The system of claim 14, wherein the computing device is configured to cause modified audible presentation of the word selected for modified audible presentation by audibly presenting the word with an incorrect pronunciation.

21. A non-transitory computer-readable medium having stored thereon computer-executable instructions configured to execute in one or more processors of a computing device, wherein the computer-executable instructions when executed cause the computing device to:

identify a corresponding word, wherein the corresponding word occurs at a corresponding presentation position in both an item of textual content and in an item of audio content;
cause an audible presentation and textual presentation of one or more words that precede the corresponding word in both the item of textual content and in the item of audio content;
cause an audible presentation of the corresponding word in the item of audio content;
prompt a user to input the corresponding word;
receive a response to the prompt as textual input;
determine that the response includes the corresponding word; and
subsequent to determining that the response includes the corresponding word, cause audible presentation of one or more words that follow the corresponding word in the item of audio content.

22. The non-transitory computer-readable medium of claim 21, wherein the instructions further configure the computing device to cause modified textual presentation of the corresponding word by presenting at least one of a blank, a homophone of the corresponding word, a misspelling of the corresponding word, or an incorrect grammatical case of the corresponding word.

23. The non-transitory computer-readable medium of claim 22, wherein the prompt requests that the user spell the corresponding word.

24. The non-transitory computer-readable medium of claim 21, wherein the instructions further cause the computing device to cause a modified textual presentation of the corresponding word.

25. The non-transitory computer-readable medium of claim 24, wherein causing the modified textual presentation comprises highlighting the corresponding word.

26. The non-transitory computer-readable medium of claim 24, wherein the corresponding word is caused to be textually presented at a presentation position other than the corresponding presentation position.

27. The non-transitory computer-readable medium of claim 21, wherein the corresponding word is identified based at least in part on content synchronization information pertaining to the item of textual content and the item of audio content.

Referenced Cited
U.S. Patent Documents
5203705 April 20, 1993 Hardy et al.
5351189 September 27, 1994 Doi et al.
5657426 August 12, 1997 Waters et al.
5737489 April 7, 1998 Chou et al.
5978754 November 2, 1999 Kumano
6076059 June 13, 2000 Glickman et al.
6208956 March 27, 2001 Motoyama
6256610 July 3, 2001 Baum
6260011 July 10, 2001 Heckerman et al.
6356922 March 12, 2002 Schilit et al.
6766294 July 20, 2004 MacGinite et al.
6912505 June 28, 2005 Linden et al.
7107533 September 12, 2006 Duncan et al.
7231351 June 12, 2007 Griggs
8106285 January 31, 2012 Gerl et al.
8109765 February 7, 2012 Beattie et al.
8131545 March 6, 2012 Moreno et al.
8131865 March 6, 2012 Rebaud et al.
8442423 May 14, 2013 Ryan et al.
8527272 September 3, 2013 Qin et al.
8548618 October 1, 2013 Story, Jr. et al.
8577668 November 5, 2013 Rosart et al.
8855797 October 7, 2014 Story, Jr. et al.
8862255 October 14, 2014 Story, Jr. et al.
8948892 February 3, 2015 Story, Jr. et al.
9037956 May 19, 2015 Goldstein et al.
9099089 August 4, 2015 Dzik et al.
20020002459 January 3, 2002 Lewis et al.
20020007349 January 17, 2002 Yuen
20020041692 April 11, 2002 Seto et al.
20020046023 April 18, 2002 Fujii et al.
20020116188 August 22, 2002 Amir et al.
20020133350 September 19, 2002 Cogliano
20020184189 December 5, 2002 Hay et al.
20030023442 January 30, 2003 Akabane et al.
20030061028 March 27, 2003 Dey et al.
20030077559 April 24, 2003 Braunberger et al.
20030083885 May 1, 2003 Frimpong-Ansah
20030115289 June 19, 2003 Chinn et al.
20040261093 December 23, 2004 Rebaud et al.
20050022113 January 27, 2005 Hanlon
20060148569 July 6, 2006 Beck
20070016314 January 18, 2007 Chan et al.
20070061487 March 15, 2007 Moore et al.
20070136459 June 14, 2007 Roche et al.
20070276657 November 29, 2007 Gournay et al.
20070282844 December 6, 2007 Kim et al.
20080005656 January 3, 2008 Pang et al.
20080027726 January 31, 2008 Hansen et al.
20080039163 February 14, 2008 Eronen et al.
20080140412 June 12, 2008 Millman et al.
20080177822 July 24, 2008 Yoneda
20080294453 November 27, 2008 Baird-Smith et al.
20090047003 February 19, 2009 Yamamoto
20090136213 May 28, 2009 Calisa et al.
20090210213 August 20, 2009 Cannon et al.
20090222520 September 3, 2009 Sloo et al.
20090228570 September 10, 2009 Janik et al.
20090233705 September 17, 2009 Lemay et al.
20090239202 September 24, 2009 Stone
20090276215 November 5, 2009 Hager
20090281645 November 12, 2009 Kitahara et al.
20090282093 November 12, 2009 Allard et al.
20090305203 December 10, 2009 Okumura et al.
20090319273 December 24, 2009 Mitsui et al.
20100042682 February 18, 2010 Kaye
20100042702 February 18, 2010 Hanses
20100064218 March 11, 2010 Bull et al.
20100070575 March 18, 2010 Bergquist et al.
20100225809 September 9, 2010 Connors et al.
20100279822 November 4, 2010 Ford
20100286979 November 11, 2010 Zangvil et al.
20100287256 November 11, 2010 Neilio
20110066438 March 17, 2011 Lindahl et al.
20110067082 March 17, 2011 Walker
20110087802 April 14, 2011 Witriol et al.
20110119572 May 19, 2011 Jang et al.
20110153330 June 23, 2011 Yazdani et al.
20110177481 July 21, 2011 Haff et al.
20110184738 July 28, 2011 Kalisky et al.
20110191105 August 4, 2011 Spears
20110231474 September 22, 2011 Locker et al.
20110246175 October 6, 2011 Yi et al.
20110288861 November 24, 2011 Kurzweil et al.
20110288862 November 24, 2011 Todic
20110296287 December 1, 2011 Shahraray et al.
20110320189 December 29, 2011 Carus et al.
20120030288 February 2, 2012 Burckart et al.
20120109640 May 3, 2012 Anisimovich et al.
20120150935 June 14, 2012 Frick et al.
20120166180 June 28, 2012 Au
20120197998 August 2, 2012 Kessel et al.
20120245719 September 27, 2012 Story, Jr. et al.
20120245720 September 27, 2012 Story, Jr. et al.
20120245721 September 27, 2012 Story, Jr. et al.
20120246343 September 27, 2012 Story, Jr. et al.
20120310642 December 6, 2012 Cao et al.
20120315009 December 13, 2012 Evans et al.
20120324324 December 20, 2012 Hwang et al.
20130041747 February 14, 2013 Anderson et al.
20130073449 March 21, 2013 Voynow et al.
20130073675 March 21, 2013 Hwang et al.
20130074133 March 21, 2013 Hwang et al.
20130130216 May 23, 2013 Morton et al.
20130212454 August 15, 2013 Casey
20130257871 October 3, 2013 Goldstein et al.
20130262127 October 3, 2013 Goldstein et al.
20140005814 January 2, 2014 Hwang et al.
20140039887 February 6, 2014 Dzik et al.
20140040713 February 6, 2014 Dzik et al.
20140250219 September 4, 2014 Hwang
20150026577 January 22, 2015 Story et al.
Foreign Patent Documents
103988193 August 2014 CN
104662604 May 2015 CN
2689346 January 2014 EP
9-265299 October 1997 JP
2002-140085 May 2002 JP
2002-328949 November 2002 JP
2003-304511 October 2003 JP
2004-029324 January 2004 JP
2004-117618 April 2004 JP
2004-266576 September 2004 JP
2005-189454 July 2005 JP
2007-522591 August 2007 JP
2007-249703 September 2007 JP
2010-250023 November 2010 JP
532174 November 2012 NZ
WO 2006/029458 March 2006 WO
WO 2011/144617 November 2011 WO
WO 2012/129438 September 2012 WO
WO 2012/129445 September 2012 WO
WO 2013/148724 October 2013 WO
WO 2013/169670 November 2013 WO
WO 2013/181158 December 2013 WO
WO 2013/192050 December 2013 WO
WO 2014/004658 January 2014 WO
Other references
  • Arar, Y., Blio E-Book Platform: No Reader (Yet), But Great Graphics, Jan. 7, 2010.
  • Beattie, V., et al., Reading Assistant: Technology for Guided Oral Reading, Scientific Learning, Apr. 10, 2012, 5 pages.
  • Dzik, S.C., U.S. Appl. No. 13/604,482, filed Sep. 5, 2012, entitled Identifying Corresponding Regions of Content.
  • Dzik, S.C., U.S. Appl. No. 13/604,486, filed Sep. 5, 2012, entitled Selecting Content Portions for Alignment.
  • Dzik, S.C., U.S. Appl. No. 13/662,306, filed Oct. 26, 2012, entitled Content Presentation Analysis.
  • Hwang, D.C., et al., U.S. Appl. No. 13/536,711, filed Jun. 28, 2012, entitled Pacing Content.
  • International Search Report issued in connection with International Patent Application No. PCT/US12/30186 mailed on Jun. 20, 2012, 12 pages.
  • International Search Report issued in connection with International Patent Application No. PCT/US12/30198 mailed on Jun. 20, 2012, 16 pages.
  • International Preliminary Report on Patentability issued in connection with International Patent Application No. PCT/US12/30198 mailed on Jan. 30, 2014, 8 pages.
  • International Search Report and Written Opinion in PCT/US2013/042903 mailed Feb. 7, 2014.
  • International Search Report issued in connection with International Application No. PCT/US13/53020 mailed on Dec. 16, 2013.
  • International Search Report and Written Opinion re PCT Application No. PCT/US2014/014508 mailed on Jun. 25, 2014.
  • Levinson, S.E., et al., Continuous Speech Recognition from a Phonetic Transcription, Acoustics, Speech, and Signal Processing, Apr. 1990, pp. 190-199.
  • Vignoli, F., et al., A Text-Speech Synchronization Technique With Applications to Talking Heads, Auditory-Visual Speech Processing, ISCA Archive, Aug. 7-10, 1999.
  • Weber, F.V., U.S. Appl. No. 13/531,376, filed Jun. 22, 2012, entitled Modelling Expected Errors for Discriminative Training.
  • Roub, Paul, “I'll Buy an E-book Reader When . . . ”, Nov. 16, 2007, available at: http://roub.net/blahg/2007/11/16/ill-buy-an-eboo/ (accessed: Sep. 6, 2012), 2 pages.
  • Enhanced Editions, “Feature: Synched Audio and Text” Aug. 31, 2009, last accessed Nov. 15, 2012, available at http://www.enhanced-editions.com/blog/2009/08/enhanced-editions-features-exclusive-soundtracks-and-extracts/.
  • Extended Search Report in European Application No. 12761404.8 dated Jan. 26, 2015.
  • Office Action in Japanese Application No. 2014-501254 dated Oct. 14, 2014.
  • Office Action in Japanese Application No. 2014-501257 dated Aug. 25, 2014.
  • International Preliminary Report on Patentability in PCT/US2013/042903 mailed Dec. 2, 2014.
  • International Preliminary Report issued in connection with International Application No. PCT/US13/53020 mailed on Feb. 12, 2015.
  • Office Action in Japanese Application No. 2014-501254 dated May 11, 2015.
  • Office Action in Canadian Application No. 2830622 dated Jun. 10, 2015.
  • Extended Search Report in European Application No. 12761104.4 dated Apr. 20, 2015.
  • Office Action in Canadian Application No. 2830906 dated Mar. 17, 2015.
  • Office Action in Japanese Application No. 2014-501257 dated Apr. 6, 2015.
  • International Preliminary Report on Patentability in PCT/US2014/014508 mailed Aug. 4, 2015.
Patent History
Patent number: 9280906
Type: Grant
Filed: Feb 4, 2013
Date of Patent: Mar 8, 2016
Patent Publication Number: 20140223272
Assignee: Audible, Inc. (Newark, NJ)
Inventors: Ajay Arora (New York, NY), Guy Ashley Story, Jr. (New York, NY)
Primary Examiner: Scott Baderman
Assistant Examiner: Asher Kells
Application Number: 13/758,749
Classifications
Current U.S. Class: Application (704/270)
International Classification: G06F 17/27 (20060101); G09B 5/06 (20060101);