Patents by Inventor Walter W. Chang
Walter W. Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20150356571
Abstract: In techniques for trending topics tracking, input text data is received as communications that are from one user to other users, or between two or more of the users. A topics tracking application is implemented to determine topics from the communications that are from or between the users, and track how the topics are trending over a time duration. The topics can include expressed sentiments and/or expressed emotions. An input selection of at least one of the topics can be received, and the topics tracking application generates a visual dashboard that displays a trending representation of the at least one topic that is trending over the time duration. The visual dashboard can also display data sources of the communications between the two or more users and/or an overview of one or more of the topics determined from a selected data source.
Type: Application
Filed: June 5, 2014
Publication date: December 10, 2015
Inventors: Walter W. Chang, Hartmut Warncke, Emre Demiralp
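The trending step described above can be sketched by bucketing communications in time and counting topic mentions per bucket. This is a minimal illustration only, not the patented method; the message format and topic list are assumptions:

```python
from collections import Counter, defaultdict
from datetime import datetime

def trend_counts(messages, topics, bucket="%Y-%m-%d"):
    """Count mentions of each topic per time bucket (a minimal sketch)."""
    trends = defaultdict(Counter)  # topic -> {bucket: mention count}
    for timestamp, text in messages:
        key = timestamp.strftime(bucket)
        lowered = text.lower()
        for topic in topics:
            if topic in lowered:
                trends[topic][key] += 1
    return trends

# Hypothetical communications between users.
msgs = [
    (datetime(2014, 6, 1), "Loving the new camera"),
    (datetime(2014, 6, 1), "camera quality is great"),
    (datetime(2014, 6, 2), "battery life could be better"),
]
t = trend_counts(msgs, ["camera", "battery"])
```

A dashboard could then plot each topic's per-bucket counts as its trending representation.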
-
Publication number: 20150286928
Abstract: In techniques for causal modeling and attribution, a causal modeling application implements a dynamical causal modeling framework. Input data is received as a representation of communications between users, such as social media interactions between social media users, and causal relationships between the users can be determined based in part on the input data that represents the communications. Influence variables, such as exogenous variables and/or endogenous variables, can also be determined that influence the causal relationships between the users. A causal relationships model is generated based on the influence variables and the causal relationships between the users, where the causal relationships model is representative of causality, influence, and attribution between the users.
Type: Application
Filed: April 3, 2014
Publication date: October 8, 2015
Applicant: Adobe Systems Incorporated
Inventors: Emre Demiralp, Walter W. Chang
-
Publication number: 20150286710
Abstract: In techniques for contextualized sentiment text analysis vocabulary generation, a contextual analysis application is implemented to receive input data derived from rated product or service reviews. Each of the domain-specific reviews across multiple categories includes a rating that is associated with expressed sentiments about a subject within a rated review. The contextual analysis application determines categories of the subjects of the rated reviews, and then generates a sentiment score for a term that is an expressed sentiment in a rated review. The sentiment score is generated based in part on a context of the term as it pertains to the category and rating of the rated review. The contextual analysis application is implemented to then determine a polarity of a term-category pair based on the sentiment score, and generate a contextualized sentiment vocabulary for all of the term-category pairs of the expressed sentiments about the subjects of the rated reviews.
Type: Application
Filed: April 3, 2014
Publication date: October 8, 2015
Applicant: Adobe Systems Incorporated
Inventors: Walter W. Chang, Emre Demiralp
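One simple way to realize a category-relative score like the one described is to take the mean rating of the reviews in a category that contain a term and subtract that category's overall mean rating, so that the sign of the score gives the term-category pair's polarity. This is a simplified stand-in for the patent's scoring, with hypothetical data:

```python
from collections import defaultdict

def term_category_scores(reviews):
    """reviews: (category, rating, text) triples.

    Returns {(term, category): score}, where score is the mean rating of
    reviews containing the term minus the category's mean rating. A positive
    score suggests positive polarity, negative suggests negative polarity.
    """
    cat_ratings = defaultdict(list)
    pair_ratings = defaultdict(list)
    for category, rating, text in reviews:
        cat_ratings[category].append(rating)
        for term in set(text.lower().split()):
            pair_ratings[(term, category)].append(rating)
    cat_mean = {c: sum(r) / len(r) for c, r in cat_ratings.items()}
    return {(t, c): sum(r) / len(r) - cat_mean[c]
            for (t, c), r in pair_ratings.items()}

reviews = [
    ("cameras", 5, "sharp lens great photos"),
    ("cameras", 1, "blurry lens broke quickly"),
    ("laptops", 4, "fast and sharp display"),
]
scores = term_category_scores(reviews)
```

Here "sharp" scores positively in the cameras category while "blurry" scores negatively, and "lens" (appearing in both a high- and a low-rated review) is neutral.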
-
Publication number: 20150286627
Abstract: In techniques for contextual sentiment text analysis, a sentiment analysis application is implemented to receive sentences as text data, and each of the sentences can include one or more sentiments about a subject of the sentence. The text data can be received as part-of-speech information that includes noun expressions, verb expressions, and tagged parts-of-speech of the sentences. The sentiment analysis application is implemented to analyze the text data to identify the sentiment about the subject of a sentence, and determine a context of the sentiment as the sentiment pertains to a topic category of the subject in the sentence, where the topic category of the subject is determined based on text categorization of the text data. The sentiment analysis application can also determine whether the sentiment is positive about the subject or negative about the subject based on the context of the sentiment within the topic category of the subject.
Type: Application
Filed: April 3, 2014
Publication date: October 8, 2015
Applicant: Adobe Systems Incorporated
Inventors: Walter W. Chang, Emre Demiralp, Shantanu Kumar, Shanshan Xia
-
Patent number: 9141335
Abstract: Natural language image tags are described. In one or more implementations, at least a portion of an image displayed by a display device is defined based on a gesture. The gesture is identified from one or more touch inputs detected using touchscreen functionality of the display device. Text received in a natural language input is located and used to tag the portion of the image using one or more items of the text received in the natural language input.
Type: Grant
Filed: November 21, 2012
Date of Patent: September 22, 2015
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Publication number: 20150242391
Abstract: This document describes techniques for contextualization and enhancement of textual content. In one or more implementations, textual content is analyzed to determine whether the textual content is appropriate for an intended context. The intended context corresponds to an intended mood, emotion, tone, or sentiment of the textual content. If it is determined that the textual content does not conform to the intended context, suggestions are generated to modify the textual content to conform to the intended context.
Type: Application
Filed: February 25, 2014
Publication date: August 27, 2015
Inventors: Naveen Prakash Goel, Walter W. Chang, Emre Demiralp, Sachin Soni, Rekha Agarwal
-
Patent number: 9066049
Abstract: Provided in some embodiments is a computer implemented method that includes providing script data including script words indicative of dialogue words to be spoken, providing recorded dialogue audio data corresponding to at least a portion of the dialogue words to be spoken, wherein the recorded dialogue audio data includes timecodes associated with recorded audio dialogue words, matching at least some of the script words to corresponding recorded audio dialogue words to determine alignment points, determining that a set of unmatched script words are accurate based on the matching of at least some of the script words matched to corresponding recorded audio dialogue words, generating time-aligned script data including the script words and their corresponding timecodes and the set of unmatched script words determined to be accurate based on the matching of at least some of the script words matched to corresponding recorded audio dialogue words.
Type: Grant
Filed: May 28, 2010
Date of Patent: June 23, 2015
Assignee: Adobe Systems Incorporated
Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa
-
Patent number: 8825489
Abstract: Provided in some embodiments is a computer implemented method that includes providing script data including script words indicative of dialog words to be spoken, providing audio data corresponding to at least a portion of the dialog words to be spoken, wherein the audio data includes timecodes associated with dialog words, generating a sequential alignment of the script words to the dialog words, matching at least some of the script words to corresponding dialog words to determine alignment points, determining corresponding timecodes for unmatched script words using interpolation based on the timecodes associated with matching script words, and generating time-aligned script data including the script words and their corresponding timecodes.
Type: Grant
Filed: May 28, 2010
Date of Patent: September 2, 2014
Assignee: Adobe Systems Incorporated
Inventors: Jerry R. Scoggins, II, Walter W. Chang
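The interpolation step described above can be sketched as linear interpolation between matched alignment points: each unmatched script word between two matched words gets a timecode proportional to its position between them. A minimal sketch, not the patented implementation:

```python
def interpolate_timecodes(script_words, matches):
    """script_words: list of words; matches: {word index: timecode} for words
    matched to recorded dialog. Unmatched words between two alignment points
    get linearly interpolated timecodes (a simplified sketch)."""
    times = dict(matches)
    anchors = sorted(matches)
    for lo, hi in zip(anchors, anchors[1:]):
        span = hi - lo
        for i in range(lo + 1, hi):
            frac = (i - lo) / span
            times[i] = matches[lo] + frac * (matches[hi] - matches[lo])
    # Words outside any pair of anchors keep no timecode (None).
    return [times.get(i) for i in range(len(script_words))]

words = ["we", "must", "go", "back", "home"]
# Hypothetical matches: first and last words matched at 10.0s and 12.0s.
tc = interpolate_timecodes(words, {0: 10.0, 4: 12.0})
```

Here the three unmatched words receive evenly spaced timecodes between the two anchors.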
-
Patent number: 8825488
Abstract: A method includes receiving script data including script words for dialogue, receiving audio data corresponding to at least a portion of the dialogue, wherein the audio data includes timecodes associated with dialogue words, generating a sequential alignment of the script words to the dialogue words, matching at least some of the script words to corresponding dialogue words to determine hard alignment points, partitioning the sequential alignment of script words into alignment sub-sets, wherein the bounds of the alignment sub-sets are defined by adjacent hard alignment points, and wherein each alignment sub-set includes a sub-set of the script words and a corresponding sub-set of dialogue words that occur between the hard alignment points, determining corresponding timecodes for the sub-set of script words in an alignment sub-set based on the timecodes associated with the sub-set of dialogue words, and generating time-aligned script data including the sub-set of script words and their corresponding timecodes.
Type: Grant
Filed: May 28, 2010
Date of Patent: September 2, 2014
Assignee: Adobe Systems Incorporated
Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa, Charles E. Van Winkle, Simon R. Hayhurst
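The partitioning step can be illustrated as slicing the script and dialogue word sequences at adjacent hard alignment points, so that each sub-set can then be aligned independently. A minimal sketch with hypothetical anchors:

```python
def partition_between_anchors(script_words, dialogue_words, anchors):
    """anchors: (script_index, dialogue_index) hard alignment points in
    increasing order. Returns, for each pair of adjacent anchors, the sub-set
    of script words and the sub-set of dialogue words that occur strictly
    between them (a minimal sketch)."""
    subsets = []
    for (s0, d0), (s1, d1) in zip(anchors, anchors[1:]):
        subsets.append((script_words[s0 + 1:s1], dialogue_words[d0 + 1:d1]))
    return subsets

script = ["a", "b", "c", "d", "e"]
dialogue = ["a", "x", "y", "d", "e"]
# Hypothetical hard alignment points where script and dialogue words matched.
subs = partition_between_anchors(script, dialogue, [(0, 0), (3, 3), (4, 4)])
```

Each returned pair is a small, bounded alignment problem, which is the point of anchoring on hard matches first.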
-
Publication number: 20140189501
Abstract: Systems and methods are provided for providing a navigation interface to access or otherwise use electronic content items. In one embodiment, an augmentation application identifies at least one entity referenced in a document. The entity can be referenced in at least two portions of the document by at least two different words or phrases. The augmentation application associates the at least one entity with at least one multimedia asset. The augmentation application generates a layout including at least some content of the document referencing the at least one entity and the at least one multimedia asset associated with the at least one entity. The augmentation application renders the layout for display.
Type: Application
Filed: December 31, 2012
Publication date: July 3, 2014
Applicant: Adobe Systems Incorporated
Inventors: Emre Demiralp, Gavin Stuart Peter Miller, Walter W. Chang, Daicho Ito, Grayson Squier Lang
-
Patent number: 8688445
Abstract: This specification describes technologies relating to multi-core processing for parallel speech-to-text processing. In some implementations, a computer-implemented method is provided that includes the actions of receiving an audio file; analyzing the audio file to identify portions of the audio file as corresponding to one or more audio types; generating a time-ordered classification of the identified portions, the time-ordered classification indicating the one or more audio types and position within the audio file of each portion; generating a queue using the time-ordered classification, the queue including a plurality of jobs where each job includes one or more identifiers of a portion of the audio file classified as belonging to the one or more speech types; distributing the jobs in the queue to a plurality of processors; performing speech-to-text processing on each portion to generate a corresponding text file; and merging the corresponding text files to generate a transcription file.
Type: Grant
Filed: December 10, 2008
Date of Patent: April 1, 2014
Assignee: Adobe Systems Incorporated
Inventors: Walter W. Chang, Michael J. Welch
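The classify, queue, distribute, and merge pattern described can be sketched as follows; here worker threads stand in for the multiple cores, and transcribe is a stub for a real speech-to-text engine (an illustration, not the patented system):

```python
from concurrent.futures import ThreadPoolExecutor

def classify_portions(portions):
    """Keep only portions classified as speech, in time order (a sketch;
    a real classifier would inspect the audio itself)."""
    return sorted((p for p in portions if p["type"] == "speech"),
                  key=lambda p: p["start"])

def transcribe(portion):
    # Stand-in for a real speech-to-text engine processing one portion.
    return (portion["start"], f"[text for {portion['id']}]")

# Hypothetical time-ordered classification of one audio file.
portions = [
    {"id": "p2", "type": "speech", "start": 30.0},
    {"id": "p1", "type": "speech", "start": 0.0},
    {"id": "mus", "type": "music", "start": 15.0},
]
jobs = classify_portions(portions)          # the job queue
with ThreadPoolExecutor() as pool:          # distribute jobs to workers
    results = list(pool.map(transcribe, jobs))
# Merge partial transcripts back into time order.
transcript = " ".join(text for _, text in sorted(results))
```

Non-speech portions never enter the queue, so no worker time is wasted on them.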
-
Publication number: 20140081625
Abstract: Natural language image spatial and tonal localization techniques are described. In one or more implementations, a natural language input is processed to determine spatial and tonal localization of one or more image editing operations specified by the natural language input. Performance is initiated of the one or more image editing operations on image data using the determined spatial and tonal localization.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Publication number: 20140078076
Abstract: Natural language image tags are described. In one or more implementations, at least a portion of an image displayed by a display device is defined based on a gesture. The gesture is identified from one or more touch inputs detected using touchscreen functionality of the display device. Text received in a natural language input is located and used to tag the portion of the image using one or more items of the text received in the natural language input.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Publication number: 20140081626
Abstract: Natural language vocabulary generation and usage techniques are described. In one or more implementations, one or more search results are mined for a domain to determine a frequency at which words occur in the one or more search results, respectively. A set of the words is selected based on the determined frequency. A sense is assigned to each of the selected set of the words that identifies a part-of-speech for a respective word. A vocabulary is then generated that includes the selected set of the words and a respective said sense, the vocabulary configured for use in natural language processing associated with the domain.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: Adobe Systems Incorporated
Inventors: Walter W. Chang, Gregg D. Wilensky, Lubomira A. Dontcheva
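The mine, select, and assign pipeline described above can be sketched as frequency counting followed by a sense lookup; here a small hand-built table stands in for real part-of-speech assignment (an illustration only):

```python
from collections import Counter

def build_vocabulary(documents, senses, top_n=3):
    """Select the top_n most frequent words across mined documents and pair
    each with a sense from a lookup table (a stand-in for real part-of-speech
    assignment). Words without a known sense default to 'noun'."""
    counts = Counter(w for doc in documents for w in doc.lower().split())
    return [(word, senses.get(word, "noun"))
            for word, _ in counts.most_common(top_n)]

# Hypothetical mined search results for an image-editing domain.
docs = ["brighten the image", "brighten the photo slightly", "crop the image"]
senses = {"brighten": "verb", "the": "determiner", "crop": "verb"}
vocab = build_vocabulary(docs, senses)
```

The resulting (word, sense) pairs are what a downstream natural language processor would consult for the domain.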
-
Publication number: 20140082500
Abstract: Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
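The degree-aware control described can be illustrated by parsing a natural language input into an operation plus a degree value, which a slider control could then be initialized with. The operation and modifier tables below are hypothetical:

```python
def parse_command(text, operations, modifiers):
    """Map a natural language input to (operation, degree). A degree modifier
    in the input scales the operation's default degree; the caller could then
    surface a slider initialized to that value (a minimal sketch)."""
    words = text.lower().split()
    op = next((w for w in words if w in operations), None)
    degree = operations.get(op, 0.0)
    for w in words:
        degree *= modifiers.get(w, 1.0)
    return op, degree

ops = {"brighten": 0.5, "sharpen": 0.3}   # hypothetical default degrees
mods = {"slightly": 0.5, "strongly": 2.0}  # hypothetical degree modifiers
cmd = parse_command("brighten the image slightly", ops, mods)
```

Because "slightly" halves the default, the slider would start at 0.25 rather than 0.5, and the user can still adjust it.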
-
Publication number: 20140078075
Abstract: Natural language image editing techniques are described. In one or more implementations, a natural language input is converted from audio data using a speech-to-text engine. A gesture is recognized from one or more touch inputs detected using one or more touch sensors. Performance is then initiated of an operation identified from a combination of the natural language input and the recognized gesture.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Patent number: 8458198
Abstract: A term analyzer receives an ordered collection of text-based terms. The term analyzer analyzes groupings of consecutive text-based terms in the ordered collection to identify occurrences of different combinations of text-based terms. In addition, the term analyzer maintains frequency information representing the occurrences of the different combinations of text-based terms in the collection. The frequency information can then be used to determine relatively significant keywords and/or keyword phrases in the document. In an example configuration, the term analyzer creates a tree in which a first term in a given grouping of the groupings is defined as a parent node in the tree and a second term in the given grouping is defined as a child node of the parent node in the tree. The method of the analyzer generalizes to create a tree of multi-word terms in which the terms can be efficiently ranked by occurrence.
Type: Grant
Filed: December 5, 2011
Date of Patent: June 4, 2013
Assignee: Adobe Systems Incorporated
Inventors: Michael J. Welch, Walter W. Chang
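The term tree described can be sketched as a trie in which each grouping of consecutive words is inserted with the first word as a parent node and each following word as a child, accumulating occurrence counts along the path. A minimal illustration, not the patented analyzer:

```python
def build_term_tree(words, max_len=3):
    """Count every sequence of up to max_len consecutive words in a trie:
    the first word of a grouping is a parent node, the next word a child,
    and so on (a minimal sketch of the term analyzer's tree)."""
    root = {}
    for i in range(len(words)):
        node = root
        for word in words[i:i + max_len]:
            node = node.setdefault(word, {"count": 0, "children": {}})
            node["count"] += 1
            node = node["children"]
    return root

def phrase_count(root, phrase):
    """Look up how often a multi-word term occurred."""
    node = {"children": root}
    for word in phrase:
        if word not in node["children"]:
            return 0
        node = node["children"][word]
    return node["count"]

words = "big data needs big data tools".split()
tree = build_term_tree(words)
```

Ranking candidate keyword phrases then reduces to walking the tree and sorting nodes by their counts.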
-
Patent number: 8447608
Abstract: This specification describes technologies relating to generating custom language models for audio content. In some implementations, a computer-implemented method is provided that includes the actions of receiving a collection of source texts; identifying a type from a collection of types for each source text, each source text being associated with a particular type; generating, for each identified type, a type-specific language model using the source texts associated with the respective type; and storing the language models.
Type: Grant
Filed: December 10, 2008
Date of Patent: May 21, 2013
Assignee: Adobe Systems Incorporated
Inventors: Walter W. Chang, Michael J. Welch
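The type-specific modeling described can be illustrated by grouping source texts by their identified type and building one simple unigram count model per type; a real language model would be far richer, and the data here is hypothetical:

```python
from collections import Counter, defaultdict

def build_type_models(sources):
    """sources: (type, text) pairs. Build one unigram count model per
    identified type (a simplified stand-in for full language models)."""
    models = defaultdict(Counter)
    for kind, text in sources:
        models[kind].update(text.lower().split())
    return models

# Hypothetical source texts, each already associated with a type.
sources = [
    ("news", "stocks rose sharply today"),
    ("news", "stocks fell today"),
    ("sports", "the team rose to the occasion"),
]
models = build_type_models(sources)
```

A speech-to-text system could then pick the model whose type matches the audio content being transcribed.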
-
Patent number: 8447604
Abstract: Provided in some embodiments is a method including receiving ordered script words that are indicative of dialogue words to be spoken, receiving audio data corresponding to at least a portion of the dialogue words to be spoken and including timecodes associated with dialogue words, generating a matrix of the ordered script words versus the dialogue words, aligning the matrix to determine hard alignment points that include matching consecutive sequences of ordered script words with corresponding sequences of dialogue words, partitioning the matrix of ordered script words into sub-matrices bounded by adjacent hard alignment points and including corresponding sub-sets of the script and dialogue words between the hard alignment points, and aligning each of the sub-matrices.
Type: Grant
Filed: May 28, 2010
Date of Patent: May 21, 2013
Assignee: Adobe Systems Incorporated
Inventor: Walter W. Chang
-
Publication number: 20130124213
Abstract: Provided in some embodiments is a computer implemented method that includes providing script data including script words indicative of dialogue words to be spoken, providing audio data corresponding to at least a portion of the dialogue words to be spoken, wherein the audio data includes timecodes associated with dialogue words, generating a sequential alignment of the script words to the dialogue words, matching at least some of the script words to corresponding dialogue words to determine alignment points, determining corresponding timecodes for unmatched script words using interpolation based on the timecodes associated with matching script words, and generating time-aligned script data including the script words and their corresponding timecodes.
Type: Application
Filed: May 28, 2010
Publication date: May 16, 2013
Inventors: Jerry R. Scoggins, II, Walter W. Chang