Patents by Inventor Richard Zarek Cohen

Richard Zarek Cohen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10432930
    Abstract: In some aspects, methods and systems described herein provide for preparing component videos for combining into a bitstream. An example system may receive a source video. The system may also receive data representing a compression format. The system may encode a reference frame as an intra-coded picture that is sub-divided into intra-coded units. The system may encode the sequence of source frames as a sequence of predictive-coded pictures conforming to the compression format. The sequence may be divided into groups of pictures that include a first predictive-coded picture followed by one or more second predictive-coded pictures. The first predictive-coded picture may be sub-divided into intra-coded units that represent respective portions of a source frame by describing the pixels of the portion so as to simulate intra-coded pictures. The system may concatenate the sequence of predictive-coded pictures after the intra-coded picture so as to produce a bitstream.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: October 1, 2019
    Assignee: Google LLC
    Inventors: Andrew Benedict Lewis, Richard Zarek Cohen
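The abstract above (patent 10432930, and the related filings listed further below) describes a bitstream built from one intra-coded reference picture followed by groups of predictive-coded pictures whose first picture is composed entirely of intra-coded units. The Python sketch below models only that structure under stated assumptions; it is not a real encoder, and all names (CodedUnit, Picture, build_bitstream, the GOP size, and the unit counts) are illustrative rather than taken from the patent.

```python
# Illustrative sketch only: models the bitstream layout described in the
# abstract -- an intra-coded reference picture followed by groups of
# predictive-coded pictures whose first picture uses only intra-coded units.
# It does not implement a real video codec.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CodedUnit:
    kind: str          # "intra" (pixels described directly) or "inter" (predicted)
    payload: bytes     # placeholder for the coded data of this unit


@dataclass
class Picture:
    picture_type: str  # "I" (intra-coded) or "P" (predictive-coded)
    units: List[CodedUnit] = field(default_factory=list)


def encode_reference_frame(frame: bytes, units_per_picture: int) -> Picture:
    """Encode the reference frame as an intra-coded picture made of intra units."""
    step = max(1, len(frame) // units_per_picture)
    units = [CodedUnit("intra", frame[i:i + step]) for i in range(0, len(frame), step)]
    return Picture("I", units)


def encode_source_frames(frames: List[bytes], gop_size: int,
                         units_per_picture: int) -> List[Picture]:
    """Encode source frames as P-pictures; the first picture of each group uses
    only intra-coded units, so it simulates an intra picture while still being
    labelled predictive-coded."""
    pictures = []
    for index, frame in enumerate(frames):
        kind = "intra" if index % gop_size == 0 else "inter"
        step = max(1, len(frame) // units_per_picture)
        units = [CodedUnit(kind, frame[i:i + step]) for i in range(0, len(frame), step)]
        pictures.append(Picture("P", units))
    return pictures


def build_bitstream(reference: bytes, frames: List[bytes],
                    gop_size: int = 4, units_per_picture: int = 8) -> List[Picture]:
    """Concatenate the predictive-coded sequence after the intra-coded picture."""
    intra = encode_reference_frame(reference, units_per_picture)
    return [intra] + encode_source_frames(frames, gop_size, units_per_picture)


if __name__ == "__main__":
    stream = build_bitstream(b"\x80" * 64, [bytes([i]) * 64 for i in range(8)])
    print([(p.picture_type, p.units[0].kind) for p in stream])
```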
  • Publication number: 20180213226
    Abstract: In some aspects, methods and systems described herein provide for preparing component videos for combining into a bitstream. An example system may receive a source video. The system may also receive data representing a compression format. The system may encode a reference frame as an intra-coded picture that is sub-divided into intra-coded units. The system may encode the sequence of source frames as a sequence of predictive-coded pictures conforming to the compression format. The sequence may be divided into groups of pictures that include a first predictive-coded picture followed by one or more second predictive-coded pictures. The first predictive-coded picture may be sub-divided into intra-coded units that represent respective portions of a source frame by describing the pixels of the portion so as to simulate intra-coded pictures. The system may concatenate the sequence of predictive-coded pictures after the intra-coded picture so as to produce a bitstream.
    Type: Application
    Filed: March 19, 2018
    Publication date: July 26, 2018
    Inventors: Andrew Benedict Lewis, Richard Zarek Cohen
  • Patent number: 9972359
    Abstract: Implementations generally relate to providing video transitions. In some implementations, a method includes receiving a soundtrack. The method further includes determining one or more sound characteristics of the soundtrack. The method further includes determining at least one target portion of the soundtrack based on the one or more sound characteristics. The method further includes receiving one or more video clips. The method further includes adjusting a length of one or more of the video clips based on one or more adjusting policies. The method further includes combining the one or more video clips with the soundtrack.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: May 15, 2018
    Assignee: Google LLC
    Inventors: Ryan James Lothian, Richard Zarek Cohen
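Patent 9972359 above covers aligning video-clip transitions with target portions of a soundtrack subject to adjustment policies. The sketch below is one assumed reading of that idea: clip boundaries are snapped to pre-computed beat times while a maximum-stretch limit stands in for the "adjusting policies". The function name, the 25% limit, and the hard-coded beat list are hypothetical, not from the patent.

```python
# Illustrative sketch only: snap video-clip boundaries to target points in a
# soundtrack (e.g. detected beats) without changing any clip's length by more
# than a fixed fraction. A real system would derive the target times from
# audio analysis of the soundtrack.
from typing import List


def adjust_clip_lengths(clip_lengths: List[float],
                        target_times: List[float],
                        max_stretch: float = 0.25) -> List[float]:
    """Move each clip's end toward the nearest soundtrack target time, keeping
    the adjustment within +/- max_stretch of the clip's original length."""
    adjusted = []
    position = 0.0
    for length in clip_lengths:
        natural_end = position + length
        # Nearest target point to where the clip would naturally end.
        nearest = min(target_times, key=lambda t: abs(t - natural_end), default=natural_end)
        new_length = nearest - position
        # Policy stand-in: limit how far the clip may be trimmed or extended.
        low, high = length * (1 - max_stretch), length * (1 + max_stretch)
        adjusted.append(min(max(new_length, low), high))
        position += adjusted[-1]
    return adjusted


if __name__ == "__main__":
    beats = [0.0, 1.9, 4.1, 6.0, 8.2]          # pretend sound characteristics
    clips = [2.2, 1.8, 2.5]                     # original clip lengths in seconds
    print(adjust_clip_lengths(clips, beats))    # lengths snapped toward the beats
```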
  • Patent number: 9955159
    Abstract: In some aspects, methods and systems described herein provide for preparing component videos for combining into a bitstream. An example system may receive a source video. The system may also receive data representing a compression format. The system may encode a reference frame as an intra-coded picture that is sub-divided into intra-coded units. The system may encode the sequence of source frames as a sequence of predictive-coded pictures conforming to the compression format. The sequence may be divided into groups of pictures that include a first predictive-coded picture followed by one or more second predictive-coded pictures. The first predictive-coded picture may be sub-divided into intra-coded units that represent respective portions of a source frame by describing the pixels of the portion so as to simulate intra-coded pictures. The system may concatenate the sequence of predictive-coded pictures after the intra-coded picture so as to produce a bitstream.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: April 24, 2018
    Assignee: Google LLC
    Inventors: Andrew Benedict Lewis, Richard Zarek Cohen
  • Publication number: 20170358324
    Abstract: Implementations generally relate to providing video transitions. In some implementations, a method includes receiving a soundtrack. The method further includes determining one or more sound characteristics of the soundtrack. The method further includes determining at least one target portion of the soundtrack based on the one or more sound characteristics. The method further includes receiving one or more video clips. The method further includes adjusting a length of one or more of the video clips based on one or more adjusting policies. The method further includes combining the one or more video clips with the soundtrack.
    Type: Application
    Filed: August 28, 2017
    Publication date: December 14, 2017
    Applicant: Google Inc.
    Inventors: Ryan James Lothian, Richard Zarek Cohen
  • Patent number: 9747949
    Abstract: Implementations generally relate to providing video transitions. In some implementations, a method includes receiving a soundtrack. The method further includes determining one or more sound characteristics of the soundtrack. The method further includes determining at least one target portion of the soundtrack based on the one or more sound characteristics. The method further includes receiving one or more video clips. The method further includes adjusting a length of one or more of the video clips based on one or more adjusting policies. The method further includes combining the one or more video clips with the soundtrack.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: August 29, 2017
    Assignee: Google Inc.
    Inventors: Ryan James Lothian, Richard Zarek Cohen
  • Patent number: 9438791
    Abstract: A method, computer program product, and system are described. An aspect of an image is identified. One or more other images are identified based upon, at least in part, the one or more other images including one or more other aspects similar to the identified aspect of the image. One or more image filters associated with the one or more other images, including a first image filter, are identified. The first image filter is applied to the image.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: September 6, 2016
    Assignee: Google Inc.
    Inventors: Richard Zarek Cohen, Robert William Hamilton
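Patent 9438791 above describes choosing an image filter by looking at other images that share a similar aspect with the input image and reusing a filter associated with them. The following sketch is a minimal stand-in for that flow: the "aspect" is reduced to brightness and contrast, similarity is a Euclidean distance, and the two toy filters are assumptions for illustration only.

```python
# Illustrative sketch only: pick an image filter by finding catalogued images
# with a similar "aspect" and applying the filter most often associated with
# those neighbours. Feature extraction and the filters are toy stand-ins.
from collections import Counter
from math import dist
from typing import Callable, Dict, List, Sequence, Tuple

Image = List[float]  # toy representation: a flat list of pixel intensities


def extract_aspect(image: Image) -> Tuple[float, float]:
    """A hypothetical 'aspect': mean brightness and intensity range."""
    return (sum(image) / len(image), max(image) - min(image))


def suggest_filter(image: Image,
                   catalog: Sequence[Tuple[Image, str]],
                   filters: Dict[str, Callable[[Image], Image]],
                   k: int = 3) -> Image:
    """Find the k catalogued images with the closest aspect, take the filter
    most commonly associated with them, and apply it to the input image."""
    aspect = extract_aspect(image)
    neighbours = sorted(catalog, key=lambda entry: dist(aspect, extract_aspect(entry[0])))[:k]
    chosen = Counter(name for _, name in neighbours).most_common(1)[0][0]
    return filters[chosen](image)


if __name__ == "__main__":
    filters = {
        "brighten": lambda img: [min(1.0, p + 0.1) for p in img],
        "darken": lambda img: [max(0.0, p - 0.1) for p in img],
    }
    catalog = [([0.8, 0.9, 0.7], "darken"),
               ([0.2, 0.1, 0.3], "brighten"),
               ([0.25, 0.2, 0.15], "brighten")]
    print(suggest_filter([0.3, 0.2, 0.25], catalog, filters))
```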
  • Publication number: 20160180851
    Abstract: The present application describes systems, articles of manufacture, and methods for continuous speech recognition for mobile computing devices. One embodiment includes determining whether a mobile computing device is receiving operating power from an external power source or a battery power source, and activating a trigger word detection subroutine in response to determining that the mobile computing device is receiving power from the external power source. In some embodiments, the trigger word detection subroutine operates continually while the mobile computing device is receiving power from the external power source. The trigger word detection subroutine includes determining whether a plurality of spoken words received via a microphone includes one or more trigger words, and in response to determining that the plurality of spoken words includes at least one trigger word, launching an application corresponding to the at least one trigger word included in the plurality of spoken words.
    Type: Application
    Filed: February 16, 2016
    Publication date: June 23, 2016
    Inventors: Bjorn Erik Bringert, Peter John Hodgson, Pawel Pietryka, Simon Tickner, Richard Zarek Cohen, Henrique Penha, Luca Zanolin, Dave Burke
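Publication 20160180851 above (and the earlier application 20140244253 further down) describes keeping a trigger-word detection subroutine running only while the device draws external power. The control-flow sketch below assumes that reading; the power check, microphone feed, app-launch callback, and the trigger-word table are all hypothetical stubs rather than any real mobile API.

```python
# Illustrative sketch only: continuous trigger-word detection that is active
# only while the device reports external power. Power, speech, and launch
# functions are injected stubs, not a real platform API.
from typing import Dict, Iterable, Optional

TRIGGER_WORDS: Dict[str, str] = {
    "camera": "com.example.camera",     # hypothetical trigger word -> application
    "navigate": "com.example.maps",
}


def find_trigger(words: Iterable[str]) -> Optional[str]:
    """Return the application for the first recognised trigger word, if any."""
    for word in words:
        app = TRIGGER_WORDS.get(word.lower())
        if app:
            return app
    return None


def detection_loop(on_external_power, next_utterance, launch_app) -> None:
    """Run trigger-word detection continually, but only while the device is
    drawing power from an external source rather than its battery."""
    while on_external_power():
        words = next_utterance()          # e.g. words from the speech recogniser
        app = find_trigger(words)
        if app:
            launch_app(app)


if __name__ == "__main__":
    power_checks = iter([True, True, True, False])           # simulate unplugging
    utterances = iter([["open", "the", "Camera"], ["hello"], ["navigate", "home"]])
    detection_loop(on_external_power=lambda: next(power_checks, False),
                   next_utterance=lambda: next(utterances, []),
                   launch_app=lambda app: print("launching", app))
```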
  • Publication number: 20160127709
    Abstract: In some aspects, methods and systems described herein provide for preparing component videos for combining into a bitstream. An example system may receive a source video. The system may also receive data representing a compression format. The system may encode a reference frame as an intra-coded picture that is sub-divided into intra-coded units. The system may encode the sequence of source frames as a sequence of predictive-coded pictures conforming to the compression format. The sequence may be divided into groups of pictures that include a first predictive-coded picture followed by one or more second predictive-coded pictures. The first predictive-coded picture may be sub-divided into intra-coded units that represent respective portions of a source frame by describing the pixels of the portion so as to simulate intra-coded pictures. The system may concatenate the sequence of predictive-coded pictures after the intra-coded picture so as to produce a bitstream.
    Type: Application
    Filed: October 31, 2014
    Publication date: May 5, 2016
    Inventors: Andrew Benedict Lewis, Richard Zarek Cohen
  • Publication number: 20160100101
    Abstract: A method, computer program product, and system are described. An aspect of an image is identified. One or more other images are identified based upon, at least in part, the one or more other images including one or more other aspects similar to the identified aspect of the image. One or more image filters associated with the one or more other images, including a first image filter, are identified. The first image filter is applied to the image.
    Type: Application
    Filed: October 5, 2015
    Publication date: April 7, 2016
    Applicant: Google Inc.
    Inventors: Richard Zarek Cohen, Robert William Hamilton
  • Patent number: 9199155
    Abstract: In one example, a method includes determining, by a computing device and based at least in part on an initial character string, one or more candidate morpheme sequences, wherein each of the candidate morpheme sequences includes the initial character string and one or more candidate morphemes. The method further includes outputting, for display, the one or more candidate morpheme sequences. The method further includes receiving an indication of a user input detected at a presence-sensitive input device. The method further includes selecting, based on the indication of the user input, at least one of the candidate morphemes from one of the candidate morpheme sequences to define a selected morpheme sequence that includes the initial character string and the selected candidate morpheme from the one of the candidate morpheme sequences. The method further includes outputting, for display, the selected morpheme sequence.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: December 1, 2015
    Assignee: Google Inc.
    Inventors: Adam Travis Skory, Richard Zarek Cohen
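Patent 9199155 above describes offering candidate morpheme sequences built from an initial character string and letting the user select one. The sketch below is a deliberately small, assumed version of that flow: the suffix lexicon and function names are hypothetical, and a real keyboard would rank candidates with a language model over morphemes.

```python
# Illustrative sketch only: generate candidate morpheme sequences from an
# initial character string and model the selection of one candidate.
from typing import Dict, List

# Hypothetical lexicon: stems mapped to morphemes (suffixes) that may follow them.
MORPHEME_LEXICON: Dict[str, List[str]] = {
    "walk": ["ing", "ed", "er", "s"],
    "quick": ["ly", "er", "est"],
}


def candidate_sequences(initial: str, limit: int = 3) -> List[str]:
    """Return up to `limit` sequences, each combining the initial string with
    one candidate morpheme from the lexicon."""
    suffixes = MORPHEME_LEXICON.get(initial, [])
    return [initial + suffix for suffix in suffixes[:limit]]


def select_sequence(candidates: List[str], choice_index: int) -> str:
    """Model the user input that picks one displayed candidate sequence."""
    return candidates[choice_index]


if __name__ == "__main__":
    shown = candidate_sequences("walk")       # candidates output for display
    print(shown)                              # ['walking', 'walked', 'walker']
    print(select_sequence(shown, 1))          # user taps the second candidate
```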
  • Patent number: 9154709
    Abstract: A method, computer program product, and system are described. An aspect of an image is identified. One or more other images are identified based upon, at least in part, the one or more other images including one or more other aspects similar to the identified aspect of the image. One or more image filters associated with the one or more other images, including a first image filter, are identified. The first image filter is applied to the image.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: October 6, 2015
    Assignee: Google Inc.
    Inventors: Richard Zarek Cohen, Robert William Hamilton
  • Patent number: 9135914
    Abstract: Disclosed are systems, methods, and devices for providing a layered user interface for one or more applications. A user-interface layer for a voice user interface is generated. The user-interface layer can be based on a markup-language-structured user-interface description for an application configured to execute on a computing device. The user-interface layer can include a command display of one or more voice-accessible commands for the application. The computing device can display at least the user-interface layer of the voice user interface. The computing device can receive an input utterance, obtain input text based upon speech recognition performed upon the input utterance, and determine that the input text corresponds to a voice-accessible command displayed as part of the command display. The computing device can execute the application to perform the command.
    Type: Grant
    Filed: September 15, 2012
    Date of Patent: September 15, 2015
    Assignee: Google Inc.
    Inventors: Bjorn Erik Bringert, Pawel Pietryka, Peter John Hodgson, Simon Tickner, Henrique Penha, Richard Zarek Cohen, Luca Zanolin
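Patent 9135914 above describes deriving a voice user-interface layer, including a display of voice-accessible commands, from a markup-language description of an application, then matching recognised speech against those commands. The sketch below assumes a tiny XML schema of its own invention; the markup element names, the command phrases, and the handlers are illustrative, and speech recognition is represented by plain text.

```python
# Illustrative sketch only: build a "command display" layer from a small
# markup-language UI description and dispatch recognised speech to the
# matching command. The markup schema and actions are hypothetical.
import xml.etree.ElementTree as ET
from typing import Callable, Dict

UI_MARKUP = """
<app name="music-player">
  <command phrase="play" action="play_track"/>
  <command phrase="pause" action="pause_track"/>
  <command phrase="next song" action="skip_track"/>
</app>
"""


def build_command_layer(markup: str) -> Dict[str, str]:
    """Map each voice-accessible phrase to the action named in the markup."""
    root = ET.fromstring(markup)
    return {cmd.get("phrase"): cmd.get("action") for cmd in root.iter("command")}


def handle_utterance(text: str, layer: Dict[str, str],
                     actions: Dict[str, Callable[[], None]]) -> None:
    """If the recognised text matches a displayed command, run its action."""
    action_name = layer.get(text.strip().lower())
    if action_name:
        actions[action_name]()
    else:
        print("no matching voice command for:", text)


if __name__ == "__main__":
    layer = build_command_layer(UI_MARKUP)
    print("voice commands:", ", ".join(layer))        # the on-screen command display
    actions = {name: (lambda n=name: print("running", n)) for name in layer.values()}
    handle_utterance("next song", layer, actions)
```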
  • Publication number: 20150228310
    Abstract: Implementations generally relate to providing video transitions. In some implementations, a method includes receiving a soundtrack. The method further includes determining one or more sound characteristics of the soundtrack. The method further includes determining at least one target portion of the soundtrack based on the one or more sound characteristics. The method further includes receiving one or more video clips. The method further includes adjusting a length of one or more of the video clips based on one or more adjusting policies. The method further includes combining the one or more video clips with the soundtrack.
    Type: Application
    Filed: February 6, 2015
    Publication date: August 13, 2015
    Applicant: Google Inc.
    Inventors: Ryan James Lothian, Richard Zarek Cohen
  • Publication number: 20150074003
    Abstract: Cloud-based media can be locally cached at a vehicle in which a user will travel. The media may be accessed by an authorized user during an authorized period that can be based upon vehicle status, such as vehicle speed or altitude. When the authorized period expires, the media stored at the vehicle can be deleted.
    Type: Application
    Filed: September 6, 2013
    Publication date: March 12, 2015
    Applicant: Google Inc.
    Inventors: Simon Tickner, Richard Zarek Cohen
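Publication 20150074003 above describes caching cloud media locally at a vehicle, gating access on vehicle status such as speed or altitude, and deleting the media when the authorised period ends. The sketch below is one assumed concretisation: an in-memory cache class whose access check uses a hypothetical altitude threshold; the field names and policy values are not from the publication.

```python
# Illustrative sketch only: a local cache of cloud media whose availability is
# gated on vehicle status, with cached items deleted once the authorised
# period ends. Status fields and the policy threshold are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class VehicleStatus:
    speed_kmh: float
    altitude_m: float


@dataclass
class VehicleMediaCache:
    min_altitude_m: float = 3000.0            # e.g. only while the aircraft is cruising
    _items: Dict[str, bytes] = field(default_factory=dict)

    def cache(self, media_id: str, payload: bytes) -> None:
        """Store media downloaded from the cloud before departure."""
        self._items[media_id] = payload

    def access(self, media_id: str, status: VehicleStatus) -> Optional[bytes]:
        """Return media only while the vehicle status keeps the period open."""
        if status.altitude_m >= self.min_altitude_m:
            return self._items.get(media_id)
        return None

    def expire(self) -> None:
        """Authorised period over: delete everything cached at the vehicle."""
        self._items.clear()


if __name__ == "__main__":
    cache = VehicleMediaCache()
    cache.cache("movie-42", b"...")
    print(cache.access("movie-42", VehicleStatus(speed_kmh=850, altitude_m=10500)))
    cache.expire()
    print(cache.access("movie-42", VehicleStatus(speed_kmh=850, altitude_m=10500)))
```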
  • Patent number: 8959023
    Abstract: A computing device may receive an incoming communication and, in response, generate a notification that indicates that the incoming communication can be accessed using a particular application on the communication device. The computing device may further provide an audio signal indicative of the notification and automatically activate a listening mode. The computing device may receive a voice input during the listening mode, and an input text may be obtained based on speech recognition performed upon the voice input. A command may be detected in the input text. In response to the command, the computing device may generate an output text that is based on at least the notification and provide a voice output that is generated from the output text via speech synthesis. The voice output identifies at least the particular application.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: February 17, 2015
    Assignee: Google Inc.
    Inventors: Bjorn Erik Bringert, Pawel Pietryka, Peter John Hodgson, Dave Burke, Henrique Penha, Simon Tickner, Richard Zarek Cohen, Luca Zanolin, Michael J. LeBeau
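Patent 8959023 above describes announcing an incoming communication, automatically opening a listening mode, and answering a recognised voice command with synthesised speech that names the handling application. The sketch below reduces that flow to plain strings: the notification fields, the "read" command, and the response wording are assumptions, and speech recognition and synthesis are not implemented.

```python
# Illustrative sketch only: the notification-plus-voice flow from the abstract.
# Speech recognition and synthesis are represented by plain strings; the
# notification fields and the recognised command are hypothetical.
from dataclasses import dataclass


@dataclass
class Notification:
    app_name: str          # the application that can open the communication
    sender: str
    summary: str


def on_incoming_communication(notification: Notification, recognised_text: str) -> str:
    """After the audio alert, the device listens; if the recognised text
    contains a known command, build the spoken response from the notification.
    Either way, the response identifies the handling application."""
    if "read" in recognised_text.lower():
        return (f"New message from {notification.sender} in {notification.app_name}: "
                f"{notification.summary}")
    return f"You have a notification from {notification.app_name}."


if __name__ == "__main__":
    note = Notification(app_name="Mail", sender="Alex", summary="Running late")
    print(on_incoming_communication(note, "read it"))        # would go to speech synthesis
    print(on_incoming_communication(note, "who is it"))
```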
  • Patent number: 8924219
    Abstract: In a first speech detection mode, a computing device listens for speech that corresponds to one of a plurality of activation phrases or “hotwords” that cause the computing device to recognize further speech input in a second speech detection mode. Each activation phrase is associated with a respective application. During the first speech detection mode, the computing device compares detected speech to the activation phrases to identify any potential matches. In response to identifying a matching activation phrase with a sufficiently high confidence, the computing device invokes the application associated with the matching activation phrase and enters the second speech detection mode. In the second speech detection mode, the computing device listens for speech input related to the invoked application.
    Type: Grant
    Filed: August 16, 2012
    Date of Patent: December 30, 2014
    Assignee: Google Inc.
    Inventors: Bjorn Erik Bringert, Hugo Barra, Richard Zarek Cohen
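Patent 8924219 above describes a two-mode design: a first mode that compares detected speech against per-application activation phrases and, on a sufficiently confident match, invokes the application and switches to a second mode that listens for input to that application. The sketch below assumes that shape; it uses difflib string similarity as a crude stand-in for an acoustic confidence score, and the activation phrases, threshold, and application names are hypothetical.

```python
# Illustrative sketch only: a two-mode hotword state machine. String similarity
# stands in for an acoustic confidence score; phrases and apps are made up.
from difflib import SequenceMatcher
from typing import Dict, Optional, Tuple

ACTIVATION_PHRASES: Dict[str, str] = {
    "ok camera": "camera_app",
    "ok notes": "notes_app",
}
CONFIDENCE_THRESHOLD = 0.8


def match_hotword(heard: str) -> Optional[Tuple[str, float]]:
    """First mode: compare detected speech to each activation phrase and
    return (application, confidence) if one matches well enough."""
    best_app, best_score = None, 0.0
    for phrase, app in ACTIVATION_PHRASES.items():
        score = SequenceMatcher(None, heard.lower(), phrase).ratio()
        if score > best_score:
            best_app, best_score = app, score
    if best_app and best_score >= CONFIDENCE_THRESHOLD:
        return best_app, best_score
    return None


def handle_speech(heard: str, mode: str, active_app: Optional[str]) -> Tuple[str, Optional[str]]:
    """Return the next (mode, active_app). In the second mode, speech is
    treated as input for the invoked application."""
    if mode == "hotword":
        match = match_hotword(heard)
        if match:
            app, score = match
            print(f"invoking {app} (confidence {score:.2f})")
            return "app_input", app
        return "hotword", None
    print(f"forwarding '{heard}' to {active_app}")
    return "app_input", active_app


if __name__ == "__main__":
    mode, app = "hotword", None
    for utterance in ["hello there", "ok camera", "take a photo"]:
        mode, app = handle_speech(utterance, mode, app)
```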
  • Publication number: 20140278368
    Abstract: In one example, a method includes determining, by a computing device and based at least in part on an initial character string, one or more candidate morpheme sequences, wherein each of the candidate morpheme sequences includes the initial character string and one or more candidate morphemes. The method further includes outputting, for display, the one or more candidate morpheme sequences. The method further includes receiving an indication of a user input detected at a presence-sensitive input device. The method further includes selecting, based on the indication of the user input, at least one of the candidate morphemes from one of the candidate morpheme sequences to define a selected morpheme sequence that includes the initial character string and the selected candidate morpheme from the one of the candidate morpheme sequences. The method further includes outputting, for display, the selected morpheme sequence.
    Type: Application
    Filed: May 17, 2013
    Publication date: September 18, 2014
    Applicant: Google Inc.
    Inventors: Adam Travis Skory, Richard Zarek Cohen
  • Publication number: 20140244253
    Abstract: The present application describes systems, articles of manufacture, and methods for continuous speech recognition for mobile computing devices. One embodiment includes determining whether a mobile computing device is receiving operating power from an external power source or a battery power source, and activating a trigger word detection subroutine in response to determining that the mobile computing device is receiving power from the external power source. In some embodiments, the trigger word detection subroutine operates continually while the mobile computing device is receiving power from the external power source. The trigger word detection subroutine includes determining whether a plurality of spoken words received via a microphone includes one or more trigger words, and in response to determining that the plurality of spoken words includes at least one trigger word, launching an application corresponding to the at least one trigger word included in the plurality of spoken words.
    Type: Application
    Filed: September 27, 2012
    Publication date: August 28, 2014
    Inventors: Bjorn Erik Bringert, Peter John Hodgson, Pawel Pietryka, Simon Tickner, Richard Zarek Cohen, Henrique Penha, Luca Zanolin, Dave Burke
  • Publication number: 20140188742
    Abstract: A method for displaying an aggregate count of endorsements is provided, including the following method operations: processing a request for an online resource from a mobile device, the online resource being associated with an object, the online resource including an endorsement mechanism; sending the online resource to the mobile device; processing an input from a user triggering the endorsement mechanism, to define an endorsement of the object by the user; updating an aggregate count of endorsements of the object to include the endorsement of the object by the user; sending the updated aggregate count of endorsements to the social display device for display on the social display device.
    Type: Application
    Filed: March 15, 2013
    Publication date: July 3, 2014
    Inventors: Thomas Deselaers, Damon Kohler, Daniel Martin Keysers, Matthew Sharifi, Richard Zarek Cohen, Benoit Boissinot, Stephan Robert Gammeter
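Publication 20140188742 above describes serving an online resource with an endorsement mechanism, counting a user's endorsement of the associated object, and pushing the updated aggregate count to a display device. The sketch below is a minimal in-memory model of those steps; the service class, object identifiers, and display callback are assumptions made only for illustration.

```python
# Illustrative sketch only: an in-memory aggregate endorsement counter that
# mirrors the request / endorse / update / push-to-display steps. Object IDs
# and the display callback are hypothetical.
from collections import defaultdict
from typing import Callable, Dict


class EndorsementService:
    def __init__(self, push_to_display: Callable[[str, int], None]):
        self._counts: Dict[str, int] = defaultdict(int)
        self._push_to_display = push_to_display   # e.g. sends to a social display device

    def serve_resource(self, object_id: str) -> Dict[str, object]:
        """Return the online resource for the object, including its current
        aggregate count and an endorsement mechanism (here, just a flag)."""
        return {"object": object_id,
                "endorsements": self._counts[object_id],
                "can_endorse": True}

    def endorse(self, object_id: str, user_id: str) -> int:
        """The user triggers the endorsement mechanism: bump the aggregate
        count and push the updated total out for display. (Per-user
        de-duplication is omitted in this toy version.)"""
        self._counts[object_id] += 1
        self._push_to_display(object_id, self._counts[object_id])
        return self._counts[object_id]


if __name__ == "__main__":
    service = EndorsementService(lambda obj, n: print(f"display update: {obj} -> {n}"))
    print(service.serve_resource("article-123"))
    service.endorse("article-123", "user-7")
    service.endorse("article-123", "user-9")
```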