Patents by Inventor Rahul Garg

Rahul Garg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11275877
    Abstract: Hardware simulation systems and methods for reducing signal dumping time and size by fast dynamical partial aliasing of signals having similar waveforms are provided. One example system is configured to receive, in real-time, a first signal and a second signal from a producer entity; determine a first signal signature associated with the first signal; determine, in real-time, a second signal signature associated with the second signal; upon determining that the first signal signature matches the second signal signature, designate the first signal as a master signal and designate the second signal as a slave signal; and stop dumping the second signal to a storage space.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: March 15, 2022
    Assignee: Synopsys, Inc.
    Inventors: Parijat Biswas, Sitikant Sahu, Rahul Garg
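The master/slave aliasing idea above can be illustrated with a small sketch. This is not the patented implementation; the `SignalDumper` class, the SHA-256 signature, and all names are hypothetical stand-ins for the signature-matching and aliasing steps the abstract describes.

```python
import hashlib

def signature(samples):
    """Hash a window of waveform samples into a compact signature."""
    return hashlib.sha256(bytes(samples)).hexdigest()

class SignalDumper:
    """Sketch of master/slave signal aliasing: when a new signal's
    signature matches an already-dumped master, record an alias
    instead of dumping the duplicate waveform again."""

    def __init__(self):
        self.masters = {}   # signature -> master signal name
        self.aliases = {}   # slave name -> master name
        self.dumped = []    # (name, samples) actually written to storage

    def dump(self, name, samples):
        sig = signature(samples)
        if sig in self.masters:
            # Matching waveform already stored: alias it, stop dumping.
            self.aliases[name] = self.masters[sig]
        else:
            self.masters[sig] = name
            self.dumped.append((name, samples))

d = SignalDumper()
d.dump("clk_a", [0, 1, 0, 1])
d.dump("clk_b", [0, 1, 0, 1])   # same waveform -> aliased to clk_a
d.dump("rst",   [1, 0, 0, 0])
```

Only two waveforms reach storage; `clk_b` is recorded as a slave of `clk_a`, which is the source of the dump-size reduction.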
  • Publication number: 20220064129
    Abstract: The presently claimed invention relates to a novel, highly efficient and general process for the preparation of UV absorbers.
    Type: Application
    Filed: January 2, 2020
    Publication date: March 3, 2022
    Inventors: Rahul GARG, Mushtaq PATEL, Prachin KOLAMBKAR, Mileen KADAM, Deepak MAKADE, Ramraj BHATTA
  • Patent number: 11210799
    Abstract: A camera may capture an image of a scene and use the image to generate a first and a second subpixel image of the scene. The pair of subpixel images may be represented by a first set of subpixels and a second set of subpixels from the image respectively. Each pixel of the image may include two green subpixels that are respectively represented in the first and second subpixel images. The camera may determine a disparity between a portion of the scene as represented by the pair of subpixel images and may estimate a depth map of the scene that indicates a depth of the portion relative to other portions of the scene based on the disparity and a baseline distance between the two green subpixels. A new version of the image may be generated with a focus upon the portion and with the other portions of the scene blurred.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: December 28, 2021
    Assignee: Google LLC
    Inventors: David Jacobs, Rahul Garg, Yael Pritch Knaan, Neal Wadhwa, Marc Levoy
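The core geometric step in the abstract above is converting the measured disparity into depth using the baseline between the two green subpixels. A minimal sketch of that standard stereo relation (function name and example numbers are illustrative, not from the patent):

```python
def depth_from_disparity(disparity, baseline, focal_length):
    """Standard stereo relation: depth is inversely proportional to
    disparity, scaled by the focal length and the baseline between
    the two subpixel viewpoints."""
    return (focal_length * baseline) / disparity

# Illustrative numbers: 500-unit focal length, 1 mm baseline,
# 2-unit disparity -> depth of 0.25 in the same units as baseline*focal.
depth = depth_from_disparity(2.0, 0.001, 500.0)
```

Once a per-pixel depth map is estimated this way, the portions of the scene far from the focal plane can be synthetically blurred to produce the refocused image the abstract describes.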
  • Patent number: 11181986
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: November 23, 2021
    Assignee: GOOGLE LLC
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
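The two-stage flow in the abstract (contextual factor → interaction mode, then hand property → interaction) can be sketched as follows. The mode names, the distance-based contextual factor, and the pinch property are all illustrative assumptions, not the patent's actual taxonomy.

```python
from enum import Enum

class InteractionMode(Enum):
    RAY_POINTER = "ray"    # hypothetical: distant-object selection
    DIRECT_GRAB = "grab"   # hypothetical: near-field manipulation

def select_interaction_mode(context):
    """Pick a mode from a contextual factor; here the factor is the
    user's distance to the nearest interactive object."""
    if context["object_distance_m"] < 0.5:
        return InteractionMode.DIRECT_GRAB
    return InteractionMode.RAY_POINTER

def interpret_hand(mode, hand):
    """Map a monitored hand property (a pinch flag) to an interaction,
    conditioned on the selected mode."""
    if hand["pinching"]:
        return "grab" if mode is InteractionMode.DIRECT_GRAB else "select"
    return "hover"

mode = select_interaction_mode({"object_distance_m": 0.3})
action = interpret_hand(mode, {"pinching": True})
```

The same pinch gesture produces a different interaction depending on the context-selected mode, which is the context sensitivity the abstract claims.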
  • Patent number: 11113832
    Abstract: Example embodiments allow for training of artificial neural networks (ANNs) to generate depth maps based on images. The ANNs are trained based on a plurality of sets of images, where each set of images represents a single scene and the images in such a set of images differ with respect to image aperture and/or focal distance. An untrained ANN generates a depth map based on one or more images in a set of images. This depth map is used to generate, using the image(s) in the set, a predicted image that corresponds, with respect to image aperture and/or focal distance, to one of the images in the set. Differences between the predicted image and the corresponding image are used to update the ANN. ANNs trained in this manner are especially suited for generating depth maps used to perform simulated image blur on small-aperture images.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: September 7, 2021
    Assignee: Google LLC
    Inventors: Neal Wadhwa, Jonathan Barron, Rahul Garg, Pratul Srinivasan
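The self-supervised loss in the abstract above can be sketched numerically: render a predicted shallow-depth-of-field image from a sharp image and the network's depth map, then compare it to the real one. The blur model below (mixing toward the image mean) is a deliberately crude stand-in for a real defocus renderer; all names and constants are illustrative.

```python
import numpy as np

def defocus_blur(image, depth_map, focal_depth, strength=3.0):
    """Toy depth-dependent blur: mix each pixel toward the image mean
    in proportion to its distance from the focal plane."""
    blur_amount = np.clip(np.abs(depth_map - focal_depth) * strength, 0.0, 1.0)
    return image * (1.0 - blur_amount) + image.mean() * blur_amount

def self_supervised_loss(predicted_depth, sharp_image, shallow_dof_image, focal_depth):
    """Training signal without ground-truth depth: a wrong depth map
    renders the wrong blur, so the squared error to the captured
    shallow-DoF image penalizes it."""
    predicted = defocus_blur(sharp_image, predicted_depth, focal_depth)
    return float(np.mean((predicted - shallow_dof_image) ** 2))

sharp = np.linspace(0.0, 1.0, 16).reshape(4, 4)
true_depth = np.tile(np.array([0.0, 1.0, 2.0, 3.0]), (4, 1))
shallow = defocus_blur(sharp, true_depth, focal_depth=1.0)
```

With these definitions, the correct depth map reproduces the shallow-DoF image exactly (zero loss), while a perturbed depth map does not, which is the gradient the network trains on.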
  • Patent number: 11102155
    Abstract: The disclosed systems and methods join a user to a primary communication channel that is associated with an automated human interface module. The automated human interface module includes a plurality of nodes. A message including a text communication is posted by the user and sent to a decision module associated with a plurality of classifiers. The decision module is configured to identify a node that best matches the text communication in accordance with the plurality of classifiers. Each respective classifier produces a respective classifier result, thereby producing a plurality of classifier results. Each respective classifier result identifies a respective node of the plurality of nodes best matching the text communication. The plurality of classifier results is collectively considered, the node best matching the text communication is identified, and the text communication is sent to the identified node.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: August 24, 2021
    Assignee: Pypestream Inc.
    Inventors: Richard Smullen, Rahul A. Garg, Minjun Kim, Matin Kamali, Jatin Patel
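The "collectively considered" step in the abstract above reads like an ensemble vote over classifier results. A minimal sketch, assuming a simple majority vote (the classifiers and node names here are toy examples, not Pypestream's actual models):

```python
from collections import Counter

def route_message(text, classifiers):
    """Each classifier maps the text to its best-matching node; the
    node chosen by the most classifiers receives the message."""
    votes = Counter(clf(text) for clf in classifiers)
    node, _ = votes.most_common(1)[0]
    return node

# Toy keyword classifiers standing in for the plurality of classifiers.
classifiers = [
    lambda t: "billing" if "invoice" in t else "general",
    lambda t: "billing" if "pay" in t or "invoice" in t else "support",
    lambda t: "support",
]

destination = route_message("where is my invoice?", classifiers)
```

Here two of the three classifiers identify the "billing" node, so the text communication is routed there despite the dissenting vote.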
  • Publication number: 20210183089
    Abstract: Example embodiments allow for training of artificial neural networks (ANNs) to generate depth maps based on images. The ANNs are trained based on a plurality of sets of images, where each set of images represents a single scene and the images in such a set of images differ with respect to image aperture and/or focal distance. An untrained ANN generates a depth map based on one or more images in a set of images. This depth map is used to generate, using the image(s) in the set, a predicted image that corresponds, with respect to image aperture and/or focal distance, to one of the images in the set. Differences between the predicted image and the corresponding image are used to update the ANN. ANNs trained in this manner are especially suited for generating depth maps used to perform simulated image blur on small-aperture images.
    Type: Application
    Filed: November 3, 2017
    Publication date: June 17, 2021
    Inventors: Neal Wadhwa, Jonathan Barron, Rahul Garg, Pratul Srinivasan
  • Publication number: 20210056349
    Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
    Type: Application
    Filed: November 6, 2020
    Publication date: February 25, 2021
    Inventors: Yael Pritch Knaan, Marc Levoy, Neal Wadhwa, Rahul Garg, Sameer Ansari, Jiawen Chen
  • Patent number: 10860889
    Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: December 8, 2020
    Assignee: Google LLC
    Inventors: Yael Pritch Knaan, Marc Levoy, Neal Wadhwa, Rahul Garg, Sameer Ansari, Jiawen Chen
  • Publication number: 20200379576
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Application
    Filed: August 19, 2020
    Publication date: December 3, 2020
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
  • Patent number: 10852847
    Abstract: A method for controller tracking with multiple degrees of freedom includes generating depth data at an electronic device based on a local environment proximate the electronic device. A set of positional data is generated for at least one spatial feature associated with a controller based on a pose of the electronic device, as determined using the depth data, relative to the at least one spatial feature associated with the controller. A set of rotational data is received that represents three degrees-of-freedom (3DoF) orientation of the controller within the local environment, and a six degrees-of-freedom (6DoF) position of the controller within the local environment is tracked based on the set of positional data and the set of rotational data.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: December 1, 2020
    Assignee: Google LLC
    Inventors: Joel Hesch, Shiqi Chen, Johnny Lee, Rahul Garg
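The fusion described in the abstract (depth-derived 3-DoF position plus IMU-derived 3-DoF orientation yielding a 6-DoF pose) can be sketched as a frame transform. The pose representation and dictionary fields below are assumptions for illustration, not the patent's data structures.

```python
import numpy as np

def track_controller_6dof(device_pose_world, feature_offset_device, controller_quat):
    """Map the depth-derived position of the controller's spatial
    feature (expressed in the electronic device's frame) into the
    world frame via the device pose, then pair it with the
    controller's 3-DoF orientation to form a 6-DoF pose."""
    R = np.asarray(device_pose_world["rotation"])     # 3x3 world-from-device
    t = np.asarray(device_pose_world["translation"])  # device origin in world
    position_world = R @ np.asarray(feature_offset_device) + t
    return {"position": position_world, "orientation": controller_quat}

pose = track_controller_6dof(
    {"rotation": np.eye(3), "translation": [1.0, 0.0, 0.0]},
    feature_offset_device=[0.0, 0.0, -1.0],   # feature seen 1 m ahead
    controller_quat=(1.0, 0.0, 0.0, 0.0),     # identity orientation (w, x, y, z)
)
```

Because position comes from the device's depth data and orientation from the controller's own sensors, neither source alone is 6-DoF; only their combination is.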
  • Patent number: 10782793
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: September 22, 2020
    Assignee: GOOGLE LLC
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
  • Publication number: 20200242788
    Abstract: A camera may capture an image of a scene and use the image to generate a first and a second subpixel image of the scene. The pair of subpixel images may be represented by a first set of subpixels and a second set of subpixels from the image respectively. Each pixel of the image may include two green subpixels that are respectively represented in the first and second subpixel images. The camera may determine a disparity between a portion of the scene as represented by the pair of subpixel images and may estimate a depth map of the scene that indicates a depth of the portion relative to other portions of the scene based on the disparity and a baseline distance between the two green subpixels. A new version of the image may be generated with a focus upon the portion and with the other portions of the scene blurred.
    Type: Application
    Filed: December 5, 2017
    Publication date: July 30, 2020
    Inventors: David Jacobs, Rahul Garg, Yael Pritch Knaan, Neal Wadhwa, Marc Levoy
  • Publication number: 20200226419
    Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
    Type: Application
    Filed: January 11, 2019
    Publication date: July 16, 2020
    Inventors: Yael Pritch Knaan, Marc Levoy, Neal Wadhwa, Rahul Garg, Sameer Ansari, Jiawen Chen
  • Patent number: 10635161
    Abstract: In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: April 28, 2020
    Assignee: GOOGLE LLC
    Inventors: Manuel Christian Clement, Alexander James Faaborg, Rahul Garg, Jonathan Tompson, Shiqi Chen
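The threshold-then-select logic in the abstract above can be sketched briefly. The mode names ("fine"/"coarse"), the body-part strings, and the `small_target` flag are invented for illustration; the patent does not specify them.

```python
def select_collision_mode(part, virtual_object, distance, threshold=0.1):
    """When a portion of the virtual user comes within a threshold
    distance of an object's collision zone, pick a collision mode
    from the part/object pair; otherwise no mode is needed yet."""
    if distance > threshold:
        return None  # outside the collision zone's threshold distance
    if part == "finger" and virtual_object["small_target"]:
        return "fine"    # e.g. shrink the collider to the fingertip
    return "coarse"      # e.g. treat the whole hand as the collider

mode = select_collision_mode("finger", {"small_target": True}, distance=0.05)
```

Dynamically modifying the virtual user then amounts to swapping the active collider (fingertip vs. whole hand) based on the returned mode.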
  • Publication number: 20200104443
    Abstract: Hardware simulation systems and methods for reducing signal dumping time and size by fast dynamical partial aliasing of signals having similar waveforms are provided. One example system is configured to receive, in real-time, a first signal and a second signal from a producer entity; determine a first signal signature associated with the first signal; determine, in real-time, a second signal signature associated with the second signal; upon determining that the first signal signature matches the second signal signature, designate the first signal as a master signal and designate the second signal as a slave signal; and stop dumping the second signal to a storage space.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Parijat Biswas, Sitikant Sahu, Rahul Garg
  • Patent number: 10599211
    Abstract: In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: March 24, 2020
    Assignee: GOOGLE LLC
    Inventors: Manuel Christian Clement, Alexander James Faaborg, Rahul Garg, Jonathan Tompson, Shiqi Chen
  • Publication number: 20190050062
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Application
    Filed: August 10, 2018
    Publication date: February 14, 2019
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
  • Publication number: 20190033988
    Abstract: A method for controller tracking with multiple degrees of freedom includes generating depth data at an electronic device based on a local environment proximate the electronic device. A set of positional data is generated for at least one spatial feature associated with a controller based on a pose of the electronic device, as determined using the depth data, relative to the at least one spatial feature associated with the controller. A set of rotational data is received that represents three degrees-of-freedom (3DoF) orientation of the controller within the local environment, and a six degrees-of-freedom (6DoF) position of the controller within the local environment is tracked based on the set of positional data and the set of rotational data.
    Type: Application
    Filed: July 26, 2017
    Publication date: January 31, 2019
    Inventors: Joel Hesch, Shiqi Chen, Johnny Lee, Rahul Garg
  • Patent number: 10139917
    Abstract: Systems and methods are disclosed for gesture-initiated actions in videoconferences. In one implementation, a processing device receives content streams during a communication session, identifies a request for feedback within one of the content streams, based on an identification of the request for feedback, processes the content streams to identify one or more gestures within at least one of the content streams, and based on a determination that a first gesture of the one or more gestures is relatively more prevalent across the content streams than one or more other gestures, initiates an action with respect to the communication session.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: November 27, 2018
    Assignee: Google LLC
    Inventors: Mehul Nariyawala, Rahul Garg, Navneet Dalal, Thor Carpenter, Gregory Burgess, Timothy Psiaki, Mark Chang, Antonio Bernardo Monteiro Costa, Christian Plagemann, Chee Chew
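The prevalence test in the abstract above (initiate an action only for the gesture that is relatively more prevalent across streams) is essentially a count over per-stream detections. A minimal sketch, with toy gesture labels standing in for whatever the detector actually emits:

```python
from collections import Counter

def most_prevalent_gesture(streams_gestures):
    """Given the gesture identified in each participant's content
    stream (None if no gesture was detected), return the gesture that
    is most prevalent across streams, or None if none were seen."""
    counts = Counter(g for g in streams_gestures if g is not None)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# After a request for feedback is identified, tally the responses.
winner = most_prevalent_gesture(["thumbs_up", "thumbs_up", "wave", None])
```

The session-level action (e.g. advancing a poll) would then be keyed off `winner`, triggering only when one gesture dominates the others.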