Patents by Inventor Jonathan Brandt

Jonathan Brandt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12183349
    Abstract: Systems, apparatuses, and methods for capturing voice messages are provided. In one embodiment, a method can include receiving, by one or more processors of a mobile user device, a user input indicative of a voice message at a first time. The method can further include identifying contextual data indicative of one or more computing devices within proximity of the mobile user device. The method can include providing a set of data for storage in one or more memory devices of the mobile user device. The set of data can indicate the voice message and the contextual data indicative of the computing devices. The method can further include providing an output indicative of the voice message and the contextual data to one or more secure computing devices at a second time.
    Type: Grant
    Filed: December 7, 2022
    Date of Patent: December 31, 2024
    Assignee: GOOGLE LLC
    Inventors: Jonathan Brandt Moeller, Jeremy Drew Payne
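
As a rough illustration of the flow in the abstract for patent 12183349, the sketch below stores a captured voice message together with context about nearby devices and forwards the records later. All class, field, and method names here are illustrative assumptions, not terms from the patent.

```python
# Illustrative sketch only: capture a voice message at a first time, attach
# context about nearby devices, store locally, and send to a secure endpoint
# at a second time. Names and fields are assumptions, not taken from the patent.
from dataclasses import dataclass, field
from typing import List
import time


@dataclass
class NearbyDevice:
    device_id: str
    signal_strength_dbm: float        # e.g., from a Bluetooth scan


@dataclass
class VoiceMessageRecord:
    audio: bytes                      # raw or encoded voice message
    captured_at: float                # the "first time"
    nearby_devices: List[NearbyDevice] = field(default_factory=list)


class VoiceMessageStore:
    """Buffers records on-device and forwards them later."""

    def __init__(self):
        self._pending: List[VoiceMessageRecord] = []

    def capture(self, audio: bytes, nearby: List[NearbyDevice]) -> None:
        # Store the message together with its contextual data (first time).
        self._pending.append(VoiceMessageRecord(audio, time.time(), nearby))

    def flush(self, send) -> None:
        # Provide the stored messages and context to a secure endpoint
        # (second time). `send` is a caller-supplied upload callable.
        while self._pending:
            send(self._pending.pop(0))


store = VoiceMessageStore()
store.capture(b"\x00\x01", [NearbyDevice("tv-livingroom", -52.0)])
store.flush(print)   # later, at the "second time": here we just print the record
```
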
  • Patent number: 12147771
    Abstract: System and methods for a text summarization system are described. In one example, a text summarization system receives an input utterance and determines whether the utterance should be included in a summary of the text. The text summarization system includes an embedding network, a convolution network, an encoding component, and a summary component. The embedding network generates a semantic embedding of an utterance. The convolution network generates a plurality of feature vectors based on the semantic embedding. The encoding component identifies a plurality of latent codes respectively corresponding to the plurality of feature vectors. The summary component identifies a prominent code among the latent codes and selects the utterance as a summary utterance based on the prominent code.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: November 19, 2024
    Assignee: ADOBE INC.
    Inventors: Sangwoo Cho, Franck Dernoncourt, Timothy Jeewun Ganter, Trung Huu Bui, Nedim Lipka, Varun Manjunatha, Walter Chang, Hailin Jin, Jonathan Brandt
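
For readers who want a concrete picture of the pipeline in the abstract for patent 12147771 (embedding network, convolution network, latent codes, prominent code), here is a minimal PyTorch sketch. Module sizes, names, and the untrained weights are assumptions made for illustration; the patented system learns these components end to end.

```python
# Minimal sketch of the data flow: utterance embedding -> convolutional feature
# vectors -> nearest latent codes -> keep utterances matching the prominent code.
import torch
import torch.nn as nn


class UtteranceScorer(nn.Module):
    def __init__(self, vocab=1000, dim=64, num_codes=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)                       # embedding network
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)   # convolution network
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))   # latent codes

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (seq_len,) for one utterance
        x = self.embed(token_ids).T.unsqueeze(0)   # (1, dim, seq_len)
        feats = self.conv(x).squeeze(0).T          # (seq_len, dim) feature vectors
        # Encoding component: assign each feature vector to its nearest code.
        dists = torch.cdist(feats, self.codebook)  # (seq_len, num_codes)
        return dists.argmin(dim=1)                 # latent code per feature vector


def is_summary_utterance(code_ids: torch.Tensor, prominent_code: int) -> bool:
    # Summary component (simplified): keep the utterance if its most frequent
    # code matches the code deemed prominent across the document.
    return int(code_ids.mode().values) == prominent_code


scorer = UtteranceScorer()
codes = scorer(torch.randint(0, 1000, (12,)))      # one 12-token utterance
print(is_summary_utterance(codes, prominent_code=3))
```
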
  • Patent number: 12119028
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is at a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as instructions to perform the corresponding video editing operations using the selected video segment.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: October 15, 2024
    Assignee: ADOBE INC.
    Inventors: Xue Bai, Justin Jonathan Salamon, Aseem Omprakash Agarwala, Hijung Shin, Haoran Cai, Joel Richard Brandt, Lubomira Assenova Dontcheva, Cristin Ailidh Fraser
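
The boundary-retiming step described in the abstract for patent 12119028 can be pictured with a short sketch: a transcript boundary is snapped to the quietest frame of its adjacent speech gap. The frame rate, function name, and activity signal below are assumptions, not details from the patent.

```python
# Hypothetical sketch: move a transcript boundary into its adjacent speech gap,
# to the frame where measured voice/audio activity is lowest.
import numpy as np

FPS = 100  # analysis frames per second (assumption)


def retime_into_gap(gap_start_s, gap_end_s, activity):
    """Snap a word/sentence boundary to the quietest frame of its adjacent speech gap.

    activity: per-frame voice/audio activity (e.g., RMS energy), FPS frames per second.
    """
    lo = int(gap_start_s * FPS)
    hi = max(lo + 1, int(gap_end_s * FPS))
    quietest = lo + int(np.argmin(activity[lo:hi]))
    return quietest / FPS            # stored as a candidate boundary for video segments


# Toy example: the speech gap around a sentence boundary runs from 3.05 s to 3.60 s.
energy = np.abs(np.random.randn(10 * FPS))   # stand-in for measured audio activity
energy[330:340] = 0.0                        # an especially quiet stretch
print(retime_into_gap(3.05, 3.60, energy))   # lands between 3.30 s and 3.40 s
```
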
  • Patent number: 12008464
    Abstract: Approaches are described for determining facial landmarks in images. An input image is provided to at least one trained neural network that determines a face region (e.g., bounding box of a face) of the input image and initial facial landmark locations corresponding to the face region. The initial facial landmark locations are provided to a 3D face mapper that maps the initial facial landmark locations to a 3D face model. A set of facial landmark locations are determined from the 3D face model. The set of facial landmark locations are provided to a landmark location adjuster that adjusts positions of the set of facial landmark locations based on the input image. The input image is presented on a user device using the adjusted set of facial landmark locations.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 11, 2024
    Assignee: ADOBE INC.
    Inventors: Haoxiang Li, Zhe Lin, Jonathan Brandt, Xiaohui Shen
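
As a loose sketch of the 3D-regularization step described in the abstract for patent 12008464, the snippet below fits a simple affine camera from a 3D face model to the initially detected 2D landmarks, re-projects the model, and blends the result back toward the detections. The affine camera and the fixed blend weight are simplifying assumptions rather than the patented method.

```python
# Loose numpy sketch: initial 2D landmarks -> fit to a 3D face model ->
# re-projected landmarks -> adjusted landmark locations.
import numpy as np


def map_to_3d_model(initial_2d, model_3d):
    """initial_2d: (N, 2) detected landmarks; model_3d: (N, 3) face model points."""
    # Least-squares affine projection: [X | 1] @ P ~= initial_2d, with P of shape (4, 2).
    X = np.hstack([model_3d, np.ones((len(model_3d), 1))])
    P, *_ = np.linalg.lstsq(X, initial_2d, rcond=None)
    return X @ P                       # landmark locations implied by the 3D model


def adjust_landmarks(initial_2d, model_2d, blend=0.5):
    # Landmark location adjuster (simplified): blend the model-consistent
    # positions with the image-driven detections.
    return blend * model_2d + (1.0 - blend) * initial_2d


model = np.random.rand(68, 3)                            # stand-in 3D face model
detected = model[:, :2] * 200 + np.random.randn(68, 2)   # noisy initial detections
refined = adjust_landmarks(detected, map_to_3d_model(detected, model))
print(refined.shape)                                     # (68, 2)
```
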
  • Publication number: 20240172830
    Abstract: A light-emitting system is provided which is removably attachable to headgear for personal illumination to enhance visibility of the user to others. The light-emitting system includes a housing that defines a receiving aperture and is configured to surround a portion of the headgear when the light-emitting system is removably attached to the headgear for use. The light-emitting system further includes at least one lens and a plurality of lighting elements coupled to the annular housing which are configured to selectively generate a halo or at least a partial halo of light that radiates outwardly away from the annular housing through the at least one lens to provide enhanced personal illumination.
    Type: Application
    Filed: October 10, 2023
    Publication date: May 30, 2024
    Inventors: John Maxwell Baker, Andrew Royal, Raymond Walter Riley, Mark John Ramberg, Chad Austin Brinckerhoff, John R. Murkowski, Trent Robert Wetherbee, Alexander Michael Diener, Kristin Marie Will, Kyle S. Johnston, Clint Timothy Schneider, Evan William Mattingly, Keith W. Kirkwood, Jonathan Brandt Hadley
  • Publication number: 20240169624
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems generate, utilizing a segmentation neural network, an object mask for each object of a plurality of objects of a digital image. The disclosed systems detect a first user interaction with an object in the digital image displayed via a graphical user interface. The disclosed systems surface, via the graphical user interface, the object mask for the object in response to the first user interaction. The disclosed systems perform an object-aware modification of the digital image in response to a second user interaction with the object mask for the object.
    Type: Application
    Filed: November 23, 2022
    Publication date: May 23, 2024
    Inventors: Jonathan Brandt, Scott Cohen, Zhe Lin, Zhihong Ding, Darshan Prasad, Matthew Joss, Celso Gomes, Jianming Zhang, Olena Soroka, Klaas Stoeckmann, Michael Zimmermann, Thomas Muehrke
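
A minimal sketch of the interaction flow in the abstract for publication 20240169624: object masks are generated up front, a first click surfaces the mask under the cursor, and a second interaction applies an object-aware edit through it. The placeholder `segment` function stands in for the segmentation neural network; all names and the toy edit are assumptions.

```python
# Minimal numpy sketch: precomputed object masks, click-to-surface, object-aware edit.
import numpy as np


def segment(image):
    """Placeholder for the segmentation neural network: one boolean mask per object."""
    h, w = image.shape[:2]
    left = np.zeros((h, w), bool)
    right = np.zeros((h, w), bool)
    left[:, : w // 2] = True
    right[:, w // 2:] = True
    return [left, right]


def mask_under_click(masks, x, y):
    # First user interaction: surface the mask for the clicked object.
    for mask in masks:
        if mask[y, x]:
            return mask
    return None


def apply_edit(image, mask, delta):
    # Second interaction: an object-aware modification (here, simple brightening).
    out = image.copy()
    out[mask] = np.clip(out[mask].astype(int) + delta, 0, 255).astype(out.dtype)
    return out


img = np.full((4, 6, 3), 100, np.uint8)
masks = segment(img)                          # object masks generated up front
mask = mask_under_click(masks, x=1, y=2)      # click on the left object
print(apply_edit(img, mask, delta=50)[2, 1])  # [150 150 150]
```
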
  • Publication number: 20240135561
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that implement depth-aware object move operations for digital image editing. For instance, in some embodiments, the disclosed systems determine a first object depth for a first object portrayed within a digital image and a second object depth for a second object portrayed within the digital image. Additionally, the disclosed systems move the first object to create an overlap area between the first object and the second object within the digital image. Based on the first object depth and the second object depth, the disclosed systems modify the digital image to occlude the first object or the second object within the overlap area.
    Type: Application
    Filed: May 19, 2023
    Publication date: April 25, 2024
    Inventors: Zhihong Ding, Scott Cohen, Matthew Joss, Jianming Zhang, Darshan Prasad, Celso Gomes, Jonathan Brandt
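
The occlusion rule in the abstract for publication 20240135561 reduces to a small mask operation, sketched below: when a move creates an overlap, the object with the greater depth is hidden inside the overlap area. Depth conventions and names are illustrative assumptions.

```python
# Hedged numpy sketch: after a move creates an overlap between two object masks,
# occlude the farther object within the overlap area.
import numpy as np


def resolve_overlap(mask_a, depth_a, mask_b, depth_b):
    """Return (visible_a, visible_b) after depth-aware occlusion."""
    overlap = mask_a & mask_b
    if depth_a <= depth_b:           # object A is nearer, so B is hidden in the overlap
        return mask_a, mask_b & ~overlap
    return mask_a & ~overlap, mask_b


mask_a = np.zeros((1, 6), bool)
mask_a[0, 1:4] = True                # moved object at depth 2.0
mask_b = np.zeros((1, 6), bool)
mask_b[0, 3:6] = True                # overlapped object at depth 5.0
visible_a, visible_b = resolve_overlap(mask_a, 2.0, mask_b, 5.0)
print(visible_b.astype(int))         # [[0 0 0 0 1 1]] - column 3 is now occluded
```
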
  • Publication number: 20240135514
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via multi-layered scene completion techniques facilitated by artificial intelligence. For instance, in some embodiments, the disclosed systems receive a digital image portraying a first object and a second object against a background, where the first object occludes a portion of the second object. Additionally, the disclosed systems pre-process the digital image to generate a first content fill for the portion of the second object occluded by the first object and a second content fill for a portion of the background occluded by the second object. After pre-processing, the disclosed systems detect one or more user interactions to move or delete the first object from the digital image. The disclosed systems further modify the digital image by moving or deleting the first object and exposing the first content fill for the portion of the second object.
    Type: Application
    Filed: September 1, 2023
    Publication date: April 25, 2024
    Inventors: Daniil Pakhomov, Qing Liu, Zhihong Ding, Scott Cohen, Zhe Lin, Jianming Zhang, Zhifei Zhang, Ohiremen Dibua, Mariette Souppe, Krishna Kumar Singh, Jonathan Brandt
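
To make the layered idea in the abstract for publication 20240135514 concrete, the sketch below pre-computes a content fill for an occluded object and composites layers back to front, so deleting the front object simply exposes the fill. The constant-color "fill" is a stand-in for the real content generation; all names are assumptions.

```python
# Illustrative numpy sketch: content fills are prepared ahead of time, and
# deleting the front object re-composites the remaining layers to expose them.
import numpy as np


def composite(background, layers):
    """layers: list of (mask, content) ordered from back to front."""
    out = background.copy()
    for mask, content in layers:
        out[mask] = content[mask]
    return out


h, w = 2, 4
background = np.zeros((h, w, 3), np.uint8)
obj2_mask = np.zeros((h, w), bool)
obj2_mask[:, 1:3] = True                        # second object
obj1_mask = np.zeros((h, w), bool)
obj1_mask[:, 2:4] = True                        # first (front) object

obj2_fill = np.full((h, w, 3), 120, np.uint8)   # pre-computed content fill, including
                                                # the part of object 2 hidden by object 1
obj1 = np.full((h, w, 3), 200, np.uint8)

with_both = composite(background, [(obj2_mask, obj2_fill), (obj1_mask, obj1)])
after_delete = composite(background, [(obj2_mask, obj2_fill)])   # object 1 deleted
print(with_both[0, 2], after_delete[0, 2])      # [200 200 200] [120 120 120]
```
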
  • Publication number: 20240135613
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that implement perspective-aware object move operations for digital image editing. For instance, in some embodiments, the disclosed systems determine a vanishing point associated with a digital image portraying an object. Additionally, the disclosed systems detect one or more user interactions for moving the object within the digital image. Based on moving the object with respect to the vanishing point, the disclosed systems perform a perspective-based resizing of the object within the digital image.
    Type: Application
    Filed: May 19, 2023
    Publication date: April 25, 2024
    Inventors: Zhihong Ding, Scott Cohen, Matthew Joss, Jianming Zhang, Darshan Prasad, Celso Gomes, Jonathan Brandt
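
One plausible reading of the perspective-based resizing in the abstract for publication 20240135613 is sketched below: as an object is dragged toward the vanishing point it recedes, so its on-screen size is scaled by the ratio of its new and old distances to that point. This linear rule is an assumption for illustration, not the patented formula.

```python
# Hedged sketch of a perspective-based resize factor driven by the vanishing point.
import numpy as np


def perspective_scale(old_pos, new_pos, vanishing_point):
    old_d = np.linalg.norm(np.subtract(old_pos, vanishing_point))
    new_d = np.linalg.norm(np.subtract(new_pos, vanishing_point))
    return new_d / old_d   # < 1 when the object moves toward the vanishing point


# An object dragged halfway toward a vanishing point at (400, 100):
print(perspective_scale((100, 500), (250, 300), (400, 100)))   # 0.5
```
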
  • Publication number: 20240037398
    Abstract: Some embodiments involve a reinforcement learning based framework for training a natural media agent to learn a rendering policy without human supervision or labeled datasets. The reinforcement learning based framework feeds the natural media agent a training dataset to implicitly learn the rendering policy by exploring a canvas and minimizing a loss function. Once trained, the natural media agent can be applied to any reference image to generate a series (or sequence) of continuous-valued primitive graphic actions, e.g., sequence of painting strokes, that when rendered by a synthetic rendering environment on a canvas, reproduce an identical or transformed version of the reference image subject to limitations of an action space and the learned rendering policy.
    Type: Application
    Filed: October 2, 2023
    Publication date: February 1, 2024
    Inventors: Jonathan BRANDT, Chen FANG, Byungmoon KIM, Biao JIA
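
The training loop described in the abstract for publication 20240037398 can be caricatured in a few lines: an agent emits continuous-valued stroke actions, a synthetic renderer applies them to a canvas, and the reward is the drop in a pixel loss against the reference image. The random "policy", greedy acceptance step, and square-brush renderer below are placeholders, not the learned natural media agent.

```python
# Schematic numpy sketch of the reinforcement-learning style loop: stroke action
# -> synthetic rendering on a canvas -> reward from the reduction in pixel loss.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.random((32, 32))           # reference image to reproduce
canvas = np.zeros_like(reference)


def render_stroke(canvas, action):
    # action = (x, y, size, intensity), all continuous values in [0, 1].
    x, y, size, intensity = action
    r = max(1, int(size * 8))
    cx, cy = int(x * 31), int(y * 31)
    out = canvas.copy()
    out[max(0, cy - r):cy + r, max(0, cx - r):cx + r] = intensity
    return out


def loss(canvas):
    return float(np.mean((canvas - reference) ** 2))


for step in range(50):
    action = rng.random(4)                      # placeholder for the agent's policy
    new_canvas = render_stroke(canvas, action)
    reward = loss(canvas) - loss(new_canvas)    # reward: reduction in the loss
    if reward > 0:                              # greedy stand-in for a policy update
        canvas = new_canvas

print(round(loss(canvas), 4))                   # loss shrinks as strokes accumulate
```
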
  • Patent number: 11857018
    Abstract: A light-emitting system is provided which is removably attachable to headgear for personal illumination to enhance visibility of the user to others. The light-emitting system includes a housing that defines a receiving aperture and is configured to surround a portion of the headgear when the light-emitting system is removably attached to the headgear for use. The light-emitting system further includes at least one lens and a plurality of lighting elements coupled to the annular housing which are configured to selectively generate a halo or at least a partial halo of light that radiates outwardly away from the annular housing through the at least one lens to provide enhanced personal illumination.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: January 2, 2024
    Assignee: Illumagear, Inc.
    Inventors: John Maxwell Baker, Andrew Royal, Raymond Walter Riley, Mark John Ramberg, Chad Austin Brinckerhoff, John R. Murkowski, Trent Robert Wetherbee, Alexander Michael Diener, Kristin Marie Will, Kyle S. Johnston, Clint Timothy Schneider, Evan William Mattingly, Keith W. Kirkwood, Jonathan Brandt Hadley
  • Patent number: 11816888
    Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for training feature data. In embodiments, the clustering technique can also be utilized to compute data point similarity that can be utilized for tag propagation (to tag untagged images). During testing, a diversity based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram based on a predicted tag set.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: November 14, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang, Chen Fang
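
The diversity-based voting idea in the abstract for patent 11816888 is sketched below: nearest-neighbor images vote on tags, but each cluster contributes at most one vote so that a heavily represented tagging style cannot dominate. The toy neighbors, clusters, and scoring are assumptions for illustration.

```python
# Simplified sketch of diversity-based tag voting across clusters of neighbors.
from collections import defaultdict


def vote_tags(neighbors):
    """neighbors: list of (similarity, cluster_id, tags), most similar first."""
    seen_clusters = set()
    scores = defaultdict(float)
    for similarity, cluster_id, tags in neighbors:
        if cluster_id in seen_clusters:      # enforce diversity across clusters
            continue
        seen_clusters.add(cluster_id)
        for tag in tags:
            scores[tag] += similarity        # accumulate a relevance score per tag
    return sorted(scores.items(), key=lambda kv: -kv[1])


neighbors = [
    (0.95, 0, ["beach", "sunset"]),
    (0.93, 0, ["beach", "sand"]),            # same cluster: vote ignored
    (0.90, 1, ["sunset", "ocean"]),
    (0.85, 2, ["surf"]),
]
print(vote_tags(neighbors))   # [('sunset', 1.85), ('beach', 0.95), ('ocean', 0.9), ('surf', 0.85)]
```
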
  • Patent number: 11775817
    Abstract: Some embodiments involve a reinforcement learning based framework for training a natural media agent to learn a rendering policy without human supervision or labeled datasets. The reinforcement learning based framework feeds the natural media agent a training dataset to implicitly learn the rendering policy by exploring a canvas and minimizing a loss function. Once trained, the natural media agent can be applied to any reference image to generate a series (or sequence) of continuous-valued primitive graphic actions, e.g., sequence of painting strokes, that when rendered by a synthetic rendering environment on a canvas, reproduce an identical or transformed version of the reference image subject to limitations of an action space and the learned rendering policy.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Jonathan Brandt, Chen Fang, Byungmoon Kim, Biao Jia
  • Publication number: 20230214600
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for parsing a given input referring expression into a parse structure and generating a semantic computation graph to identify semantic relationships among and between objects. At a high level, when embodiments of the present invention receive a referring expression, a parse tree is created and mapped into a hierarchical subject, predicate, object graph structure that labels noun objects in the referring expression, the attributes of the labeled noun objects, and predicate relationships (e.g., verb actions or spatial prepositions) between the labeled objects. Embodiments of the present invention then transform the subject, predicate, object graph structure into a semantic computation graph that may be recursively traversed and interpreted to determine how noun objects, their attributes and modifiers, and interrelationships are provided to downstream image editing, searching, or caption indexing tasks.
    Type: Application
    Filed: March 10, 2023
    Publication date: July 6, 2023
    Inventors: Zhe LIN, Walter W. CHANG, Scott COHEN, Khoi Viet PHAM, Jonathan BRANDT, Franck DERNONCOURT
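
As a minimal picture of the hierarchical subject, predicate, object structure described in the abstract above (and in the granted patent 11636270 that follows), the sketch below hand-builds the graph for the referring expression "the black dog to the left of the red car". The class names and the hand-built graph are assumptions standing in for the parser's output.

```python
# Minimal data-structure sketch of a subject-predicate-object graph for a
# referring expression; a real parser would construct this automatically.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NounNode:
    label: str
    attributes: List[str] = field(default_factory=list)


@dataclass
class Relation:
    subject: NounNode
    predicate: str            # a verb action or spatial preposition
    obj: NounNode


dog = NounNode("dog", ["black"])
car = NounNode("car", ["red"])
graph = [Relation(dog, "to the left of", car)]

# A downstream image-editing or search task could traverse the graph:
for rel in graph:
    print(rel.subject.label, rel.subject.attributes, rel.predicate, rel.obj.label)
```
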
  • Patent number: 11636270
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for parsing a given input referring expression into a parse structure and generating a semantic computation graph to identify semantic relationships among and between objects. At a high level, when embodiments of the present invention receive a referring expression, a parse tree is created and mapped into a hierarchical subject, predicate, object graph structure that labels noun objects in the referring expression, the attributes of the labeled noun objects, and predicate relationships (e.g., verb actions or spatial prepositions) between the labeled objects. Embodiments of the present invention then transform the subject, predicate, object graph structure into a semantic computation graph that may be recursively traversed and interpreted to determine how noun objects, their attributes and modifiers, and interrelationships are provided to downstream image editing, searching, or caption indexing tasks.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: April 25, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Walter W. Chang, Scott Cohen, Khoi Viet Pham, Jonathan Brandt, Franck Dernoncourt
  • Publication number: 20230020724
    Abstract: A light-emitting system is provided which is removably attachable to headgear for personal illumination to enhance visibility of the user to others. The light-emitting system includes a housing that defines a receiving aperture and is configured to surround a portion of the headgear when the light-emitting system is removably attached to the headgear for use. The light-emitting system further includes at least one lens and a plurality of lighting elements coupled to the annular housing which are configured to selectively generate a halo or at least a partial halo of light that radiates outwardly away from the annular housing through the at least one lens to provide enhanced personal illumination.
    Type: Application
    Filed: March 4, 2022
    Publication date: January 19, 2023
    Inventors: John Maxwell Baker, Andrew Royal, Raymond Walter Riley, Mark John Ramberg, Chad Austin Brinckerhoff, John R. Murkowski, Trent Robert Wetherbee, Alexander Michael Diener, Kristin Marie Will, Kyle S. Johnston, Clint Timothy Schneider, Evan William Mattingly, Keith W. Kirkwood, Jonathan Brandt Hadley
  • Publication number: 20220414338
    Abstract: System and methods for a text summarization system are described. In one example, a text summarization system receives an input utterance and determines whether the utterance should be included in a summary of the text. The text summarization system includes an embedding network, a convolution network, an encoding component, and a summary component. The embedding network generates a semantic embedding of an utterance. The convolution network generates a plurality of feature vectors based on the semantic embedding. The encoding component identifies a plurality of latent codes respectively corresponding to the plurality of feature vectors. The summary component identifies a prominent code among the latent codes and selects the utterance as a summary utterance based on the prominent code.
    Type: Application
    Filed: June 29, 2021
    Publication date: December 29, 2022
    Inventors: Sangwoo Cho, Franck Dernoncourt, Timothy Jeewun Ganter, Trung Huu Bui, Nedim Lipka, Varun Manjunatha, Walter Chang, Hailin Jin, Jonathan Brandt
  • Patent number: 11527251
    Abstract: Systems, apparatuses, and methods for capturing voice messages are provided. In one embodiment, a method can include receiving, by one or more processors of a mobile user device, a user input indicative of a voice message at a first time. The method can further include identifying contextual data indicative of one or more computing devices within proximity of the mobile user device. The method can include providing a set of data for storage in one or more memory devices of the mobile user device. The set of data can indicate the voice message and the contextual data indicative of the computing devices. The method can further include providing an output indicative of the voice message and the contextual data to one or more secure computing devices at a second time.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: December 13, 2022
    Assignee: GOOGLE LLC
    Inventors: Jonathan Brandt Moeller, Jeremy Drew Payne
  • Patent number: 11291260
    Abstract: A light-emitting system is provided which is removably attachable to headgear for personal illumination to enhance visibility of the user to others. The light-emitting system includes a housing that defines a receiving aperture and is configured to surround a portion of the headgear when the light-emitting system is removably attached to the headgear for use. The light-emitting system further includes at least one lens and a plurality of lighting elements coupled to the annular housing which are configured to selectively generate a halo or at least a partial halo of light that radiates outwardly away from the annular housing through the at least one lens to provide enhanced personal illumination.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: April 5, 2022
    Assignee: Illumagear, Inc.
    Inventors: John Maxwell Baker, Andrew Royal, Raymond Walter Riley, Mark John Ramberg, Chad Austin Brinckerhoff, John R. Murkowski, Trent Robert Wetherbee, Alexander Michael Diener, Kristin Marie Will, Kyle S. Johnston, Clint Timothy Schneider, Evan William Mattingly, Keith W. Kirkwood, Jonathan Brandt Hadley
  • Publication number: 20210386154
    Abstract: A suspension unit is provided which is removably coupleable to a helmet shell to enhance user safety and helmet functionality. The suspension unit includes: a suspension assembly configured to interface with a suspension attachment scheme of the helmet shell; one or more sensors carried by the suspension assembly and arranged to obtain biometric, environmental, location, motion, impact and/or other data; and a control system carried by the suspension assembly, the control system including at least a power source and a communication module, and being operatively coupled to the one or more sensors to obtain the biometric, environmental, location, motion, impact and/or other data and to transmit, via the communication module, a data signal to a computing device or network based at least in part on said data from which to enhance user safety and helmet functionality.
    Type: Application
    Filed: October 3, 2019
    Publication date: December 16, 2021
    Inventors: John Maxwell Baker, Jonathan Brandt Hadley, Aaron D. Johnson, Chad Austin Brinckerhoff