Patents by Inventor Walter Wei-Tuh Chang

Walter Wei-Tuh Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11886494
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image based on natural language-based inputs. For instance, the object selection system can utilize natural language processing tools to detect objects and their corresponding relationships within natural language object selection queries. For example, the object selection system can determine alternative object terms for unrecognized objects in a natural language object selection query. As another example, the object selection system can determine multiple types of relationships between objects in a natural language object selection query and utilize different object relationship models to select the requested query object.
    Type: Grant
    Filed: September 1, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Walter Wei Tuh Chang, Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding
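    Illustrative sketch: a minimal Python sketch of the query-handling idea in the abstract above (alternative terms for unrecognized objects, routing by relationship type). All vocabularies, model names, and routing rules here are illustrative assumptions, not the patented implementation.

        # Hypothetical sketch: resolve unrecognized object terms and route the query
        # to a relationship handler by relationship type. Not Adobe's implementation.
        from dataclasses import dataclass
        from typing import Dict, Optional

        KNOWN_CLASSES = {"dog", "tree", "person", "car"}      # toy detector vocabulary
        ALTERNATIVE_TERMS: Dict[str, str] = {                 # assumed fallback table
            "puppy": "dog", "automobile": "car", "oak": "tree",
        }
        SPATIAL = {"left of", "right of", "above", "below"}   # assumed relationship types
        POSSESSIVE = {"held by", "worn by"}

        @dataclass
        class ParsedQuery:
            target: str               # object the user wants selected
            relation: Optional[str]   # relationship phrase, if any
            anchor: Optional[str]     # object the relationship is relative to

        def resolve_term(term: str) -> str:
            """Map an unrecognized object term to a known alternative, if possible."""
            return term if term in KNOWN_CLASSES else ALTERNATIVE_TERMS.get(term, term)

        def route_relationship(query: ParsedQuery) -> str:
            """Pick which (hypothetical) relationship model should handle the query."""
            if query.relation is None:
                return "single_object_model"
            if query.relation in SPATIAL:
                return "spatial_relationship_model"
            if query.relation in POSSESSIVE:
                return "possessive_relationship_model"
            return "generic_relationship_model"

        if __name__ == "__main__":
            q = ParsedQuery(target=resolve_term("puppy"), relation="left of",
                            anchor=resolve_term("oak"))
            print(q.target, q.anchor, route_relationship(q))  # dog tree spatial_relationship_model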
  • Patent number: 11769111
    Abstract: The present invention is directed towards providing automated workflows for the identification of a reading order from text segments extracted from a document. Ordering the text segments is based on trained natural language models. In some embodiments, the workflows are enabled to perform a method for identifying a sequence associated with a portable document. The method includes iteratively generating a probabilistic language model, receiving the portable document, and selectively extracting features (such as, but not limited to, text segments) from the document. The method may generate pairs of features (or feature pairs) from the extracted features. The method may further generate a score for each of the pairs based on the probabilistic language model and determine an order of the features based on the scores. The method may provide the extracted features in the determined order.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: September 26, 2023
    Assignee: Adobe Inc.
    Inventors: Trung Huu Bui, Hung Hai Bui, Shawn Alan Gaither, Walter Wei-Tuh Chang, Michael Frank Kraley, Pranjal Daga
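    Illustrative sketch: a minimal Python sketch of the pair-scoring idea in the abstract above: score ordered pairs of text segments and pick the ordering whose consecutive pairs score highest. The scoring function below is a toy stand-in; the patent describes scoring pairs with a trained probabilistic language model.

        # Toy stand-in for the described workflow: score segment pairs and pick the
        # ordering with the highest total score. Exhaustive search is only viable
        # for the tiny example below.
        from itertools import permutations
        from typing import List, Tuple

        def pair_score(a: str, b: str) -> float:
            """Stand-in for P(b follows a): reward unfinished sentences that continue."""
            score = 0.0
            if not a.rstrip().endswith((".", "!", "?")):
                score += 0.5          # segment a looks unfinished
            if b[:1].islower():
                score += 0.5          # segment b looks like a continuation
            return score

        def best_order(segments: List[str]) -> Tuple[str, ...]:
            """Pick the ordering whose consecutive pair scores sum highest."""
            def total(order):
                return sum(pair_score(x, y) for x, y in zip(order, order[1:]))
            return max(permutations(segments), key=total)

        if __name__ == "__main__":
            segments = ["choosing the highest-scoring chain.",
                        "Reading order is recovered by scoring",
                        "pairs of extracted segments and"]
            for s in best_order(segments):
                print(s)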
  • Patent number: 11681919
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image utilizing a large-scale object detector. For instance, in response to receiving a request to automatically select a query object with an unknown object class in a digital image, the object selection system can utilize a large-scale object detector to detect potential objects in the image, filter out one or more potential objects, and label the remaining potential objects in the image to detect the query object. In some implementations, the large-scale object detector utilizes a region proposal model, a concept mask model, and an auto tagging model to automatically detect objects in the digital image.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: June 20, 2023
    Assignee: Adobe Inc.
    Inventors: Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding, Walter Wei Tuh Chang
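    Illustrative sketch: a minimal Python sketch of the detect-filter-label flow named in the abstract above. The region proposal and auto-tagging stages are trivial placeholders, the concept-mask stage is omitted for brevity, and all thresholds and labels are assumptions rather than the patented detector.

        # Hypothetical pipeline: propose regions, filter low-confidence candidates,
        # auto-tag the rest, then keep the candidates that match the query term.
        from dataclasses import dataclass
        from typing import List, Optional, Tuple

        @dataclass
        class Candidate:
            box: Tuple[int, int, int, int]   # x, y, w, h
            score: float
            label: Optional[str] = None

        def propose_regions(image) -> List[Candidate]:
            """Stand-in region proposal model: class-agnostic candidate boxes."""
            return [Candidate((10, 10, 50, 80), 0.9), Candidate((100, 40, 30, 30), 0.3)]

        def filter_candidates(cands: List[Candidate], min_score: float = 0.5) -> List[Candidate]:
            """Drop low-confidence proposals before the more expensive stages."""
            return [c for c in cands if c.score >= min_score]

        def auto_tag(cands: List[Candidate]) -> List[Candidate]:
            """Stand-in auto-tagging model: attach a class label to each candidate."""
            for c in cands:
                c.label = "giraffe"          # placeholder prediction
            return cands

        def select_query_object(image, query: str) -> List[Candidate]:
            """Detect candidates, filter, tag, then keep the ones matching the query."""
            candidates = auto_tag(filter_candidates(propose_regions(image)))
            return [c for c in candidates if c.label == query]

        if __name__ == "__main__":
            print(select_query_object(image=None, query="giraffe"))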
  • Publication number: 20230148406
    Abstract: Conversational image editing and enhancement techniques are described. For example, an indication of a digital image is received from a user. Aesthetic attribute scores for multiple aesthetic attributes of the image are generated. A computing device then conducts a natural language conversation with the user to edit the digital image. The computing device receives a series of inputs from the user to refine the digital image as the natural language conversation progresses. The computing device generates natural language suggestions to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The computing device provides feedback to the user that includes edits to the digital image based on the series of inputs. The computing device also includes, as feedback, natural language outputs indicating options for additional edits to the digital image based on the series of inputs and the previous edits to the digital image.
    Type: Application
    Filed: January 3, 2023
    Publication date: May 11, 2023
    Applicant: Adobe Inc.
    Inventors: Frieder Ludwig Anton Ganz, Walter Wei-Tuh Chang
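    Illustrative sketch: a minimal Python sketch, under stated assumptions, of one step the abstract describes: turning per-attribute aesthetic scores into natural language edit suggestions, worst-scoring attributes first. The scores and suggestion phrasings below are invented placeholders.

        # Hypothetical sketch: suggest edits for the lowest-scoring aesthetic attributes.
        from typing import Dict, List

        def score_aesthetics(image) -> Dict[str, float]:
            """Stand-in scorer: the described system derives these from trained models."""
            return {"exposure": 0.42, "color_harmony": 0.81,
                    "sharpness": 0.35, "composition": 0.67}

        SUGGESTION_TEMPLATES = {   # assumed phrasing, for illustration only
            "exposure": "The image looks a bit dark; shall I brighten it?",
            "sharpness": "The subject is slightly soft; want me to sharpen it?",
            "color_harmony": "Colors clash a little; try a warmer white balance?",
            "composition": "The horizon is off-center; crop for better composition?",
        }

        def suggest_edits(image, max_suggestions: int = 2) -> List[str]:
            """Suggest edits for the lowest-scoring aesthetic attributes first."""
            scores = score_aesthetics(image)
            worst_first = sorted(scores, key=scores.get)
            return [SUGGESTION_TEMPLATES[a] for a in worst_first[:max_suggestions]]

        if __name__ == "__main__":
            for line in suggest_edits(image=None):
                print(line)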
  • Patent number: 11574630
    Abstract: Conversational image editing and enhancement techniques are described. For example, an indication of a digital image is received from a user. Aesthetic attribute scores for multiple aesthetic attributes of the image are generated. A computing device then conducts a natural language conversation with the user to edit the digital image. The computing device receives a series of inputs from the user to refine the digital image as the natural language conversation progresses. The computing device generates natural language suggestions to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The computing device provides feedback to the user that includes edits to the digital image based on the series of inputs. The computing device also includes, as feedback, natural language outputs indicating options for additional edits to the digital image based on the series of inputs and the previous edits to the digital image.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Frieder Ludwig Anton Ganz, Walter Wei-Tuh Chang
  • Publication number: 20220414142
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image based on natural language-based inputs. For instance, the object selection system can utilize natural language processing tools to detect objects and their corresponding relationships within natural language object selection queries. For example, the object selection system can determine alternative object terms for unrecognized objects in a natural language object selection query. As another example, the object selection system can determine multiple types of relationships between objects in a natural language object selection query and utilize different object relationship models to select the requested query object.
    Type: Application
    Filed: September 1, 2022
    Publication date: December 29, 2022
    Inventors: Walter Wei Tuh Chang, Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding
  • Patent number: 11514244
    Abstract: Techniques and systems are described to model and extract knowledge from images. A digital medium environment is configured to learn and use a model to compute a descriptive summarization of an input image automatically and without user intervention. Training data is obtained to train a model using machine learning in order to generate a structured image representation that serves as the descriptive summarization of an input image. The images and associated text are processed to extract structured semantic knowledge from the text, which is then associated with the images. The structured semantic knowledge is processed along with corresponding images to train a model using machine learning such that the model describes a relationship between text features within the structured semantic knowledge. Once the model is learned, the model is usable to process input images to generate a structured image representation of each input image.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: November 29, 2022
    Assignee: Adobe Inc.
    Inventors: Scott D. Cohen, Walter Wei-Tuh Chang, Brian L. Price, Mohamed Hamdy Mahmoud Abdelbaky Elhoseiny
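    Illustrative sketch: a minimal Python sketch of the data-preparation step the abstract describes: extracting subject-predicate-object triples ("structured semantic knowledge") from caption text and pairing them with their source images as training examples. The triple extractor is deliberately naive and hypothetical, not the patented extraction pipeline.

        # Hypothetical sketch: turn captions into triples and pair them with images.
        from typing import List, NamedTuple, Optional

        class Triple(NamedTuple):
            subject: str
            predicate: str
            obj: str

        VERBS = {"rides", "holds", "wears", "chases"}    # toy predicate lexicon

        def extract_triple(caption: str) -> Optional[Triple]:
            """Naive pattern: the first known verb splits the caption into subject and object."""
            words = caption.lower().rstrip(".").split()
            for i, w in enumerate(words):
                if w in VERBS:
                    return Triple(" ".join(words[:i]), w, " ".join(words[i + 1:]))
            return None

        def build_training_pairs(records: List[dict]) -> List[dict]:
            """Associate each extracted triple with its source image for model training."""
            pairs = []
            for r in records:
                t = extract_triple(r["caption"])
                if t is not None:
                    pairs.append({"image_id": r["image_id"], "triple": t})
            return pairs

        if __name__ == "__main__":
            data = [{"image_id": "img_001", "caption": "A woman rides a brown horse."}]
            print(build_training_pairs(data))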
  • Patent number: 11507551
    Abstract: Various methods and systems for performing analytics based on hierarchical categorization of content are provided. Analytics can be performed using an index building workflow and a classification workflow. In the index building workflow, documents are received and analyzed to extract features from the documents. Hierarchical category paths can be identified for the features. The documents are indexed to support searching the documents for the hierarchical category paths. In the classification workflow, a query that includes or references content may be received and analyzed to extract features from the content. The features are then submitted to a search engine, which returns search result documents associated with hierarchical category paths. The hierarchical category paths from the search result documents may be used to generate a topic model of the content associated with the query.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 22, 2022
    Assignee: Adobe Inc.
    Inventors: Walter Wei-Tuh Chang, Kenneth Edward Feuerman, Shantanu Kumar, Ankit Bal
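    Illustrative sketch: a minimal Python sketch of the two workflows named in the abstract: an index-building step that maps extracted features to documents tagged with hierarchical category paths, and a classification step that tallies the paths of matching documents. The feature extractor and category paths are assumed placeholders, not the patented system.

        # Hypothetical sketch of the index building and classification workflows.
        from collections import Counter, defaultdict
        from typing import Dict, List

        def extract_features(text: str) -> List[str]:
            """Toy feature extractor: lowercase tokens (the real system uses richer features)."""
            return [t.strip(".,").lower() for t in text.split()]

        class HierarchicalIndex:
            def __init__(self):
                self.postings: Dict[str, set] = defaultdict(set)
                self.doc_paths: Dict[str, List[str]] = {}

            def add(self, doc_id: str, text: str, category_paths: List[str]) -> None:
                """Index building workflow: index features and record the doc's paths."""
                self.doc_paths[doc_id] = category_paths
                for f in extract_features(text):
                    self.postings[f].add(doc_id)

            def classify(self, query_text: str, top_k: int = 2) -> List[str]:
                """Classification workflow: return the most common paths among matches."""
                hits = Counter()
                for f in extract_features(query_text):
                    for doc_id in self.postings.get(f, ()):
                        hits.update(self.doc_paths[doc_id])
                return [path for path, _ in hits.most_common(top_k)]

        if __name__ == "__main__":
            idx = HierarchicalIndex()
            idx.add("d1", "mirrorless camera lens review", ["/Electronics/Cameras"])
            idx.add("d2", "best zoom lens for travel", ["/Electronics/Cameras/Lenses"])
            print(idx.classify("which lens should I buy"))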
  • Patent number: 11468110
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image based on natural language-based inputs. For instance, the object selection system can utilize natural language processing tools to detect objects and their corresponding relationships within natural language object selection queries. For example, the object selection system can determine alternative object terms for unrecognized objects in a natural language object selection query. As another example, the object selection system can determine multiple types of relationships between objects in a natural language object selection query and utilize different object relationship models to select the requested query object.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: October 11, 2022
    Assignee: Adobe Inc.
    Inventors: Walter Wei Tuh Chang, Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding
  • Publication number: 20210319255
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image utilizing a large-scale object detector. For instance, in response to receiving a request to automatically select a query object with an unknown object class in a digital image, the object selection system can utilize a large-scale object detector to detect potential objects in the image, filter out one or more potential objects, and label the remaining potential objects in the image to detect the query object. In some implementations, the large-scale object detector utilizes a region proposal model, a concept mask model, and an auto tagging model to automatically detect objects in the digital image.
    Type: Application
    Filed: May 26, 2021
    Publication date: October 14, 2021
    Inventors: Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding, Walter Wei Tuh Chang
  • Publication number: 20210263962
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image based on natural language-based inputs. For instance, the object selection system can utilize natural language processing tools to detect objects and their corresponding relationships within natural language object selection queries. For example, the object selection system can determine alternative object terms for unrecognized objects in a natural language object selection query. As another example, the object selection system can determine multiple types of relationships between objects in a natural language object selection query and utilize different object relationship models to select the requested query object.
    Type: Application
    Filed: February 25, 2020
    Publication date: August 26, 2021
    Inventors: Walter Wei Tuh Chang, Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding
  • Patent number: 11055566
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image utilizing a large-scale object detector. For instance, in response to receiving a request to automatically select a query object with an unknown object class in a digital image, the object selection system can utilize a large-scale object detector to detect potential objects in the image, filter out one or more potential objects, and label the remaining potential objects in the image to detect the query object. In some implementations, the large-scale object detector utilizes a region proposal model, a concept mask model, and an auto tagging model to automatically detect objects in the digital image.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: July 6, 2021
    Assignee: Adobe Inc.
    Inventors: Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding, Walter Wei Tuh Chang
  • Patent number: 10902076
    Abstract: A method for recommending hashtags includes determining keywords from a post planned for publishing by a publisher. Input criteria comprising at least one of an age group, a geographical location, a date range, or a keyword are received. Previous posts associated with the keywords and satisfying the input criteria are obtained. The previous posts are categorized into one or more categories based on the sentiment of each post, and for each category the hashtags used in the obtained previous posts in that category are determined. The hashtags are ranked based on predefined criteria comprising at least one of the frequency of appearance of the respective hashtag in posts, the number of likes, shares, or retweets of posts containing the respective hashtag, the number of followers of the person who used the respective hashtag, or the sentiment of posts containing the respective hashtag. The hashtags are then recommended, based on the ranking, to the publisher for use with the post planned for publishing.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: January 26, 2021
    Assignee: Adobe Inc.
    Inventors: Anmol Dhawan, Walter Wei-Tuh Chang, Ashish Duggal, Sachin Soni
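    Illustrative sketch: a minimal Python sketch of the ranking step described in the abstract: aggregate a weighted score per hashtag from matching previous posts using frequency, engagement, and author followers. The weights and post schema are assumptions for illustration; the sentiment categorization step is omitted.

        # Hypothetical ranking sketch: score hashtags from previous posts and return the best.
        from collections import defaultdict
        from typing import Dict, List

        def rank_hashtags(previous_posts: List[dict], top_k: int = 3) -> List[str]:
            """Aggregate a weighted score per hashtag and return the top candidates."""
            scores: Dict[str, float] = defaultdict(float)
            for post in previous_posts:
                engagement = post["likes"] + post["shares"]
                for tag in post["hashtags"]:
                    scores[tag] += 1.0                        # frequency of appearance
                    scores[tag] += 0.01 * engagement          # likes / shares / retweets
                    scores[tag] += 0.001 * post["followers"]  # reach of the author
            return sorted(scores, key=scores.get, reverse=True)[:top_k]

        if __name__ == "__main__":
            posts = [
                {"hashtags": ["#travel", "#sunset"], "likes": 120, "shares": 30, "followers": 5000},
                {"hashtags": ["#travel"], "likes": 40, "shares": 5, "followers": 800},
                {"hashtags": ["#wanderlust"], "likes": 10, "shares": 1, "followers": 200},
            ]
            print(rank_hashtags(posts))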
  • Publication number: 20200410990
    Abstract: Conversational image editing and enhancement techniques are described. For example, an indication of a digital image is received from a user. Aesthetic attribute scores for multiple aesthetic attributes of the image are generated. A computing device then conducts a natural language conversation with the user to edit the digital image. The computing device receives a series of inputs from the user to refine the digital image as the natural language conversation progresses. The computing device generates natural language suggestions to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The computing device provides feedback to the user that includes edits to the digital image based on the series of inputs. The computing device also includes, as feedback, natural language outputs indicating options for additional edits to the digital image based on the series of inputs and the previous edits to the digital image.
    Type: Application
    Filed: September 9, 2020
    Publication date: December 31, 2020
    Applicant: Adobe Inc.
    Inventors: Frieder Ludwig Anton Ganz, Walter Wei-Tuh Chang
  • Publication number: 20200320329
    Abstract: The present invention is directed towards providing automated workflows for the identification of a reading order from text segments extracted from a document. Ordering the text segments is based on trained natural language models. In some embodiments, the workflows are enabled to perform a method for identifying a sequence associated with a portable document. The method includes iteratively generating a probabilistic language model, receiving the portable document, and selectively extracting features (such as, but not limited to, text segments) from the document. The method may generate pairs of features (or feature pairs) from the extracted features. The method may further generate a score for each of the pairs based on the probabilistic language model and determine an order of the features based on the scores. The method may provide the extracted features in the determined order.
    Type: Application
    Filed: June 18, 2020
    Publication date: October 8, 2020
    Inventors: Trung Huu Bui, Hung Hai Bui, Shawn Alan Gaither, Walter Wei-Tuh Chang, Michael Frank Kraley, Pranjal Daga
  • Patent number: 10796690
    Abstract: Conversational image editing and enhancement techniques are described. For example, an indication of a digital image is received from a user. Aesthetic attribute scores for multiple aesthetic attributes of the image are generated. A computing device then conducts a natural language conversation with the user to edit the digital image. The computing device receives a series of inputs from the user to refine the digital image as the natural language conversation progresses. The computing device generates natural language suggestions to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The computing device provides feedback to the user that includes edits to the digital image based on the series of inputs. The computing device also includes, as feedback, natural language outputs indicating options for additional edits to the digital image based on the series of inputs and the previous edits to the digital image.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: October 6, 2020
    Assignee: Adobe Inc.
    Inventors: Frieder Ludwig Anton Ganz, Walter Wei-Tuh Chang
  • Patent number: 10783314
    Abstract: Techniques are disclosed for generating a structured transcription from a speech file. In an example embodiment, a structured transcription system receives a speech file comprising speech from one or more people and generates a navigable structured transcription object. The navigable structured transcription object may comprise one or more data structures representing multimedia content with which a user may navigate and interact via a user interface. Text and/or speech relating to the speech file can be selectively presented to the user (e.g., the text can be presented via a display, and the speech can be aurally presented via a speaker).
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Franck Dernoncourt, Walter Wei-Tuh Chang, Seokhwan Kim, Sean Fitzgerald, Ragunandan Rao Malangully, Laurie Marie Byrum, Frederic Thevenet, Carl Iwan Dockhorn
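    Illustrative sketch: a minimal Python sketch of what a navigable structured transcription object could look like: timestamped, speaker-attributed segments with simple navigation helpers. The field names and helpers are assumptions, not the patented data structures.

        # Hypothetical schema: timestamped segments plus helpers to navigate them.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Segment:
            speaker: str
            start: float          # seconds into the recording
            end: float
            text: str

        @dataclass
        class StructuredTranscription:
            audio_path: str
            segments: List[Segment] = field(default_factory=list)

            def at_time(self, t: float) -> Optional[Segment]:
                """Jump to the segment being spoken at time t."""
                return next((s for s in self.segments if s.start <= t < s.end), None)

            def by_speaker(self, speaker: str) -> List[Segment]:
                """Collect everything a given speaker said."""
                return [s for s in self.segments if s.speaker == speaker]

        if __name__ == "__main__":
            tr = StructuredTranscription("meeting.wav", [
                Segment("Alice", 0.0, 4.2, "Let's review the launch checklist."),
                Segment("Bob", 4.2, 7.8, "The build is green as of this morning."),
            ])
            print(tr.at_time(5.0).text)
            print([s.text for s in tr.by_speaker("Alice")])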
  • Patent number: 10783431
    Abstract: Image search techniques and systems involving emotions are described. In one or more implementations, a digital medium environment of a content sharing service is described for image search result configuration and control based on a search request that indicates an emotion. The search request, which includes one or more keywords and specifies an emotion, is received. Images that are available for licensing are located by matching one or more tags associated with the images to the one or more keywords and by determining that the images correspond to the emotion. The emotion of the images is identified using one or more models that are trained using machine learning based at least in part on training images having tagged emotions. Output of a search result is controlled, the search result having one or more representations of the images that are selectable to license the respective images from the content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
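    Illustrative sketch: a minimal Python sketch of the search flow in the abstract: match query keywords against image tags and filter by an emotion label that, in the described system, would come from models trained on images with tagged emotions. The catalog and emotion labels below are made up for illustration.

        # Hypothetical search sketch: keyword-tag matching plus an emotion filter.
        from typing import List

        CATALOG = [
            {"id": "img_01", "tags": {"beach", "family", "sunset"}, "emotion": "joy"},
            {"id": "img_02", "tags": {"rain", "window", "city"}, "emotion": "melancholy"},
            {"id": "img_03", "tags": {"beach", "storm"}, "emotion": "melancholy"},
        ]

        def search(keywords: List[str], emotion: str) -> List[str]:
            """Return ids of images whose tags match the keywords and whose emotion matches."""
            kw = {k.lower() for k in keywords}
            return [img["id"] for img in CATALOG
                    if kw & img["tags"] and img["emotion"] == emotion]

        if __name__ == "__main__":
            print(search(["beach"], emotion="joy"))   # ['img_01']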
  • Patent number: 10769495
    Abstract: In implementations of collecting multimodal image editing requests (IERs), a user interface is generated that exposes an image pair including a first image and a second image that includes at least one edit to the first image. A user simultaneously speaks a voice command and performs a user gesture that describe an edit of the first image used to generate the second image. The user gesture and the voice command are simultaneously recorded and synchronized with timestamps. The voice command is played back, and the user transcribes it based on the playback, creating an exact transcription of the voice command. Audio samples of the voice command with respective timestamps, coordinates of the user gesture with respective timestamps, and the transcription are packaged as a structured data object for use as training data to train a neural network to recognize multimodal IERs in an image editing application.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: September 8, 2020
    Assignee: Adobe Inc.
    Inventors: Trung Huu Bui, Zhe Lin, Walter Wei-Tuh Chang, Nham Van Le, Franck Dernoncourt
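    Illustrative sketch: a minimal Python sketch of the packaging step the abstract describes: bundling timestamped audio samples, timestamped gesture coordinates, and the user's transcription into one structured training record. The schema is an assumption for illustration, not the patented data format.

        # Hypothetical packaging sketch: keep both streams sorted by timestamp so they
        # stay synchronized for training.
        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class MultimodalIER:
            image_pair_id: str
            audio: List[Tuple[float, bytes]]           # (timestamp_s, audio chunk)
            gesture: List[Tuple[float, float, float]]  # (timestamp_s, x, y) on the image
            transcription: str

        def package_ier(image_pair_id, audio, gesture, transcription) -> MultimodalIER:
            """Bundle the synchronized streams and transcription into one record."""
            return MultimodalIER(
                image_pair_id=image_pair_id,
                audio=sorted(audio, key=lambda a: a[0]),
                gesture=sorted(gesture, key=lambda g: g[0]),
                transcription=transcription,
            )

        if __name__ == "__main__":
            record = package_ier(
                "pair_0042",
                audio=[(0.00, b"\x00\x01"), (0.02, b"\x02\x03")],
                gesture=[(0.01, 120.0, 88.5), (0.03, 131.0, 90.0)],
                transcription="remove the person on the left",
            )
            print(record.transcription, len(record.audio), len(record.gesture))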
  • Patent number: 10713519
    Abstract: The present invention is directed towards providing automated workflows for the identification of a reading order from text segments extracted from a document. Ordering the text segments is based on trained natural language models. In some embodiments, the workflows are enabled to perform a method for identifying a sequence associated with a portable document. The method includes iteratively generating a probabilistic language model, receiving the portable document, and selectively extracting features (such as, but not limited to, text segments) from the document. The method may generate pairs of features (or feature pairs) from the extracted features. The method may further generate a score for each of the pairs based on the probabilistic language model and determine an order of the features based on the scores. The method may provide the extracted features in the determined order.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: July 14, 2020
    Assignee: Adobe Inc.
    Inventors: Trung Huu Bui, Hung Hai Bui, Shawn Alan Gaither, Walter Wei-Tuh Chang, Michael Frank Kraley, Pranjal Daga