Patents by Inventor Gowri Somanath

Gowri Somanath has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12266383
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Grant
    Filed: March 25, 2024
    Date of Patent: April 1, 2025
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Oscar Nestares
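The abstract above describes a learned interpolator: several captured views go in, one intermediate ("middle") view comes out. Purely as an illustration of that input/output shape, here is a naive linear-blend stand-in; the patent uses a trained neural network, and the function name `synthesize_middle_view` and its `alpha` parameter are inventions of this sketch, not the patent's terminology:

```python
import numpy as np

def synthesize_middle_view(view_a, view_b, alpha=0.5):
    """Blend two captured views into one intermediate view.
    A naive stand-in for the patented neural synthesis: linear
    blending only illustrates the two-views-in, one-view-out shape."""
    assert view_a.shape == view_b.shape
    blended = (1 - alpha) * view_a.astype(np.float32) \
              + alpha * view_b.astype(np.float32)
    return blended.astype(view_a.dtype)

# two 4x4 grayscale "captures" from nearby positions/times
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
mid = synthesize_middle_view(a, b)
```

A real view-synthesis network would warp and fuse the inputs using learned motion/disparity rather than averaging them pixelwise.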
  • Publication number: 20240290359
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Application
    Filed: March 25, 2024
    Publication date: August 29, 2024
    Inventors: Gowri Somanath, Oscar Nestares
  • Patent number: 12002165
    Abstract: Various implementations disclosed herein include devices, systems, and methods that use light probes to facilitate the display of virtual objects in 3D environments. A light probe provides lighting information that describes light incident on a point in space in a 3D environment. For example, a light probe may describe such incident light using an environment map. Such lighting information can be used to provide realistic appearances for objects placed at or near light probe locations in the 3D environment. Implementations disclosed herein determine the light probe locations in real-time or other 3D environments that are generated based on a live physical environment. A digital representation of the live physical environment is used to determine where to position the light probes, how many light probes to use, and/or various light probe attributes.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: June 4, 2024
    Assignee: Apple Inc.
    Inventors: Daniel Kurz, Gowri Somanath, Tobias Holl
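The abstract notes that a light probe may describe incident light at a point using an environment map. As a minimal sketch of one way such a map can be consumed (the patent is about where to place probes and which attributes to give them, not this particular integral), here is a diffuse-irradiance estimate over a small equirectangular map, with a sin(theta) weight to account for the map's area distortion; `probe_irradiance` is an illustrative name:

```python
import numpy as np

def probe_irradiance(env_map):
    """Approximate diffuse irradiance at a probe from an (H, W, 3)
    equirectangular environment map: a solid-angle-weighted mean of
    incident radiance. Rows are weighted by sin(theta) because rows
    near the poles cover less solid angle than equatorial rows."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi       # polar angle per row
    weights = np.sin(theta)[:, None, None]         # solid-angle weight
    return (env_map * weights).sum(axis=(0, 1)) / (weights.sum() * w)

# a uniform white environment should yield uniform irradiance
irr = probe_irradiance(np.ones((8, 16, 3)))
```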
  • Patent number: 11972780
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Oscar Nestares
  • Patent number: 11694392
    Abstract: Various implementations disclosed herein include devices, systems, and methods that render a reflective surface of a computer-generated reality (“CGR”) object based on synthesis in a CGR environment. In order to render a reflective surface of the CGR object, one exemplary implementation involves synthesizing an environment map of a CGR environment representing a portion of a physical scene based on observed characteristics of the physical scene. In an implementation, generation of a complete environment map includes identifying pixels of the environment map with no corresponding texture and generating synthesized texture based on textural information associated with one or more camera images of the physical scene. In an implementation, a CGR object is rendered in the CGR environment, wherein an appearance of a reflective surface of the CGR object is determined based on the complete environment map of the CGR environment.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: July 4, 2023
    Assignee: Apple Inc.
    Inventors: Daniel Kurz, Gowri Somanath, Tobias Holl
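The completion step in this abstract, identifying environment-map pixels with no corresponding texture and synthesizing content for them, can be caricatured with a mean-fill over the observed pixels; the patent's synthesis from camera imagery of the physical scene is far richer. `complete_env_map` is an illustrative name, with NaN standing in for "no corresponding texture":

```python
import numpy as np

def complete_env_map(env_map):
    """Produce a complete environment map: pixels marked NaN (no
    observed texture) are filled with the mean of the observed pixels.
    A crude stand-in for texture synthesis from camera images."""
    observed = ~np.isnan(env_map)
    filled = env_map.copy()
    filled[~observed] = env_map[observed].mean()
    return filled

partial = np.array([[1.0, np.nan],
                    [3.0, np.nan]])
complete = complete_env_map(partial)
```

A rendered reflective surface would then sample `complete` instead of the partial map, so reflections never show holes.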
  • Patent number: 11636578
    Abstract: Various implementations disclosed herein include devices, systems, and methods that complete content for a missing part of an image of an environment. For example, an example process may include obtaining an image including defined content and missing parts for which content is undefined, determining a spatial image transformation for the image based on the defined content and the missing parts of the image, altering the image by applying the spatial image transformation, and completing the altered image.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: April 25, 2023
    Assignee: Apple Inc.
    Inventors: Daniel Kurz, Gowri Somanath, Tobias Holl
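To make the "determine a spatial transformation, alter, then complete" sequence of this abstract concrete, here is one toy instance under strong assumptions of my own (a single contiguous block of missing columns, a horizontal roll as the transformation, repeat-fill as the completion); the patent's transformations and completion are more general, and `complete_with_shift` is a hypothetical name:

```python
import numpy as np

def complete_with_shift(img, mask):
    """Toy 'transform, complete, untransform' pipeline. Assumes the
    missing region (mask=True) is one contiguous block of columns.
    1) roll the image so the gap sits at the right edge,
    2) fill the gap by repeating the last defined column,
    3) undo the roll."""
    missing_cols = np.where(mask.any(axis=0))[0]
    shift = img.shape[1] - 1 - missing_cols.max()   # push gap rightward
    rolled = np.roll(img, shift, axis=1)
    rolled_mask = np.roll(mask, shift, axis=1)
    first_missing = np.where(rolled_mask.any(axis=0))[0].min()
    rolled[:, first_missing:] = rolled[:, first_missing - 1:first_missing]
    return np.roll(rolled, -shift, axis=1)

img = np.array([[0., 1., 9., 9., 4.]])             # 9s are undefined content
mask = np.array([[False, False, True, True, False]])
out = complete_with_shift(img, mask)
```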
  • Patent number: 11423308
    Abstract: Implementations disclosed herein provide systems and methods that use classification-based machine learning to generate perceptually-plausible content for a missing part (e.g., some or all) of an image. The machine learning model may be trained to generate content for the missing part that appears plausible by learning to generate content that cannot be distinguished from real image content, for example, using adversarial loss-based training. To generate the content, a probabilistic classifier may be used to select color attribute values (e.g., RGB values) for each pixel of the missing part of the image. To do so, a pixel color attribute is segmented into a number of bins (e.g., value ranges) that are used as classes. The classifier determines probabilities for each of the bins of a color attribute for each pixel and generates the content by selecting the bin having the highest probability for each color attribute for each pixel.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: August 23, 2022
    Assignee: Apple Inc.
    Inventors: Gowri Somanath, Daniel Kurz
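The binning idea in this abstract, segmenting a color attribute into value-range bins used as classes and selecting the highest-probability bin per pixel, translates directly into a small decode step. The sketch below assumes the classifier's per-pixel bin probabilities are already available (the adversarially trained model itself is omitted), and maps each winning bin to its center value; names and the 8-bin choice are illustrative:

```python
import numpy as np

def colors_from_bin_probs(probs, n_bins=8, vmax=255):
    """Decode classifier output into pixel values. probs has shape
    (H, W, n_bins): per-pixel probabilities over value-range bins for
    one color channel. Pick the most probable bin per pixel, then map
    the bin index to the center of its value range."""
    bins = np.argmax(probs, axis=-1)                # winning class per pixel
    bin_width = (vmax + 1) / n_bins                 # 32 values per bin here
    return (bins * bin_width + bin_width / 2).astype(np.uint8)

# one pixel whose classifier puts all mass on the brightest bin
probs = np.zeros((1, 1, 8))
probs[0, 0, 7] = 1.0
decoded = colors_from_bin_probs(probs)
```

With 8 bins over 0-255, bin 7 covers 224-255, so its center value is 240.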
  • Publication number: 20220013148
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 13, 2022
    Inventors: Gowri Somanath, Oscar Nestares
  • Patent number: 11133033
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: September 28, 2021
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Oscar Nestares
  • Publication number: 20210056998
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Application
    Filed: July 7, 2020
    Publication date: February 25, 2021
    Inventors: Gowri Somanath, Oscar Nestares
  • Publication number: 20200345317
    Abstract: An apparatus, method, and machine-readable medium for health monitoring and response are described herein. The apparatus includes a processor and a number of sensors configured to collect data corresponding to a user of the device. The apparatus also includes a health monitoring and response application, at least partially including hardware logic. The hardware logic of the health monitoring and response application is to test the data collected by any of the sensors to match the collected data with a predetermined health condition, determine a current health condition of the user based on the predetermined health condition that matches the collected data, and automatically perform an action based on the current health condition of the user.
    Type: Application
    Filed: May 18, 2020
    Publication date: November 5, 2020
    Applicant: Intel Corporation
    Inventors: Gowri Somanath, Karthik Natarajan
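The health-monitoring abstract describes a match-then-act loop: test sensor data against predetermined conditions, then perform an action for the matching condition. A minimal rule-engine sketch of that loop, with condition names, thresholds, and actions that are entirely illustrative (the patent does not specify them, and its matching runs partly in hardware logic):

```python
def check_health(readings, conditions):
    """Compare sensor readings against predetermined conditions and
    return the action of the first matching condition. Each condition
    lists minimum sensor values that must all be met."""
    for cond in conditions:
        if all(readings.get(key, 0) >= threshold
               for key, threshold in cond["at_least"].items()):
            return cond["action"]
    return "no_action"

# illustrative rule set - not from the patent
rules = [{"at_least": {"heart_rate": 150}, "action": "alert_contact"}]
action = check_health({"heart_rate": 160}, rules)
```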
  • Patent number: 10706890
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: July 7, 2020
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Oscar Nestares
  • Patent number: 10653369
    Abstract: An apparatus, method, and machine-readable medium for health monitoring and response are described herein. The apparatus includes a processor and a number of sensors configured to collect data corresponding to a user of the device. The apparatus also includes a health monitoring and response application, at least partially including hardware logic. The hardware logic of the health monitoring and response application is to test the data collected by any of the sensors to match the collected data with a predetermined health condition, determine a current health condition of the user based on the predetermined health condition that matches the collected data, and automatically perform an action based on the current health condition of the user.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: May 19, 2020
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Karthik Natarajan
  • Publication number: 20190362539
    Abstract: Various implementations disclosed herein include devices, systems, and methods that render a reflective surface of a computer-generated reality (“CGR”) object based on synthesis in a CGR environment. In order to render a reflective surface of the CGR object, one exemplary implementation involves synthesizing an environment map of a CGR environment representing a portion of a physical scene based on observed characteristics of the physical scene. In an implementation, generation of a complete environment map includes identifying pixels of the environment map with no corresponding texture and generating synthesized texture based on textural information associated with one or more camera images of the physical scene. In an implementation, a CGR object is rendered in the CGR environment, wherein an appearance of a reflective surface of the CGR object is determined based on the complete environment map of the CGR environment.
    Type: Application
    Filed: April 2, 2019
    Publication date: November 28, 2019
    Inventors: Daniel Kurz, Gowri Somanath, Tobias Holl
  • Patent number: 10475186
    Abstract: Techniques are provided for segmentation of objects in video frames. A methodology implementing the techniques according to an embodiment includes receiving image frames, including an initial reference frame, and receiving a mask to outline a region in the reference frame that contains the object to be segmented. The method also includes calculating Gaussian mixture models associated with both the masked region and a background region external to the masked region. The method further includes segmenting the object from a current frame based on a modelling of the pixels within an active area of the current frame as a Markov Random Field of nodes for cost minimization. The costs are based in part on the Gaussian mixture models. The active area is based on the segmentation of a previous frame and on an estimation of optical flow between the previous frame and the current frame.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Jiajie Yao, Yong Jiang
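The unary part of this segmentation method, scoring each pixel under color models fit to the masked region and the background, can be sketched with a single Gaussian per region in place of the patent's Gaussian mixture models, and without the MRF smoothness term or the optical-flow-restricted active area. Everything here is a simplified stand-in:

```python
import numpy as np

def segment_by_region_models(pixels, mask):
    """Label each pixel by the likelier of two intensity models:
    one Gaussian fit to the masked (object) region, one to the
    background. A single Gaussian stands in for each GMM; the patent
    additionally minimizes an MRF cost over a flow-predicted area."""
    fg, bg = pixels[mask], pixels[~mask]

    def loglik(x, data):
        mu, var = data.mean(), data.var() + 1e-6     # avoid zero variance
        return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

    return loglik(pixels, fg) > loglik(pixels, bg)

intensities = np.array([[200., 10.],
                        [210., 12.]])
prior_mask = np.array([[True, False],
                       [True, False]])
labels = segment_by_region_models(intensities, prior_mask)
```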
  • Patent number: 10455219
    Abstract: Stereo correspondence and depth sensor techniques are described. In one or more implementations, a depth map generated by a depth sensor is leveraged as part of processing of stereo images to assist in identifying which parts of stereo images correspond to each other. The depth map, for instance, may be utilized to assist in identifying depth discontinuities in the stereo images. Additionally, techniques may be employed to align the depth discontinuities identified from the depth map to image edges identified from the stereo images. Techniques may also be employed to suppress image edges that do not correspond to the depth discontinuities of the depth map in comparison with image edges that do correspond to the depth discontinuities as part of the identification.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Scott D. Cohen, Brian L. Price, Gowri Somanath
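The suppression step in this abstract, keeping image edges that correspond to depth discontinuities and suppressing those that do not, can be sketched as a mask intersection. The discontinuity threshold, the horizontal-only differencing, and the `radius` tolerance below are all assumptions of this sketch, not taken from the patent:

```python
import numpy as np

def suppress_edges(image_edges, depth_map, disc_thresh=10, radius=1):
    """Keep an image edge only if a depth-sensor discontinuity lies
    within `radius` pixels of it horizontally. Discontinuities are
    column-to-column depth jumps larger than disc_thresh."""
    jumps = np.abs(np.diff(depth_map, axis=1, prepend=depth_map[:, :1]))
    depth_disc = jumps > disc_thresh
    near = depth_disc.copy()
    for s in range(1, radius + 1):                   # dilate horizontally
        near |= np.roll(depth_disc, s, axis=1) | np.roll(depth_disc, -s, axis=1)
    return image_edges & near

edges = np.array([[True, False, True, False]])       # two candidate edges
depth = np.array([[0, 0, 50, 50]])                   # one real depth jump
kept = suppress_edges(edges, depth)
```

The surviving edges can then anchor the stereo correspondence search, as the abstract describes.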
  • Patent number: 10417771
    Abstract: Image scene labeling with 3D image data. A plurality of pixels of an image frame may be labeled based at least on a function of pixel color and pixel depth over the spatial positions within the image frame. A graph-cut technique may be utilized to optimize a data cost and neighborhood cost in which at least the data cost function includes a component that is dependent on a depth associated with a given pixel in the frame. In some embodiments, in the MRF formulation pixels are adaptively merged into pixel groups based on the constructed data cost(s) and neighborhood cost(s). These pixel groups then become nodes in the directed graph. In some embodiments, a hierarchical expansion is performed, with the hierarchy set up within the label space.
    Type: Grant
    Filed: May 14, 2015
    Date of Patent: September 17, 2019
    Assignee: Intel Corporation
    Inventors: Gowri Somanath, Jiajie Yao, Yong Jiang
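The depth-dependent data cost this abstract describes can be illustrated for a single pixel: distance to each label's color prototype plus a weighted distance to its depth prototype. The label models, the Euclidean distances, and the `w_depth` weight are illustrative choices; the patent minimizes such a data cost jointly with a neighborhood cost via graph cuts over (possibly merged) pixel nodes:

```python
import numpy as np

def best_label(pixel_color, pixel_depth, label_models, w_depth=0.5):
    """Return the index of the label with minimal data cost for one
    pixel: color distance to the label's color prototype plus a
    depth-dependent term, as the abstract's data cost requires."""
    costs = [np.linalg.norm(pixel_color - m["color"])
             + w_depth * abs(pixel_depth - m["depth"])
             for m in label_models]
    return int(np.argmin(costs))

# illustrative two-label model: "sky" (far, bluish) vs "ground" (near, brown)
models = [{"color": np.array([200., 200., 255.]), "depth": 100.},
          {"color": np.array([80., 60., 40.]),    "depth": 2.}]
label = best_label(np.array([85., 62., 45.]), 3.0, models)
```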
  • Patent number: 10298914
    Abstract: Techniques are provided for perception enhancement of light fields (LFs) for use in integral display applications. A methodology implementing the techniques according to an embodiment includes receiving one or more LF views and a disparity map associated with each LF view. The method also includes quantizing the disparity map into planes, where each plane is associated with a selected range of depth values. The method further includes slicing the LF view into layers, where each layer comprises pixels of the LF view associated with one of the planes. The method further includes shifting each of the layers in a lateral direction by an offset distance. The offset distance is based on a viewing angle associated with the LF view and further based on the depth values of the associated plane. The method also includes merging the shifted layers to generate a synthesized LF view with increased parallax.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: May 21, 2019
    Assignee: Intel Corporation
    Inventors: Basel Salahieh, Ginni Grover, Gowri Somanath, Oscar Nestares
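The quantize-slice-shift-merge pipeline in this abstract maps onto a short loop: quantize the disparity map into planes, treat each plane's pixels as a layer, shift each layer laterally by a depth-dependent offset, and merge. The plane count, the linear offset rule, and the `angle_gain` stand-in for the viewing-angle term are all assumptions of this sketch:

```python
import numpy as np

def shift_lf_view(view, disparity, n_planes=4, angle_gain=1.0):
    """Synthesize a parallax-enhanced LF view: quantize disparity into
    planes (low disparity treated as far), shift each layer laterally
    by an offset growing with its plane index, and merge the layers."""
    d_min, d_max = disparity.min(), disparity.max()
    norm = (disparity - d_min) / (d_max - d_min + 1e-9)
    planes = np.clip((norm * n_planes).astype(int), 0, n_planes - 1)
    out = np.zeros_like(view)
    for p in range(n_planes):                        # far to near
        layer = np.where(planes == p, view, 0)
        offset = int(round(angle_gain * p))          # depth-dependent shift
        shifted = np.roll(layer, offset, axis=1)
        mask = np.roll(planes == p, offset, axis=1)
        out[mask] = shifted[mask]                    # near layers overwrite far
    return out

view = np.array([[10, 20, 30, 40]])
disp = np.array([[0., 0., 9., 9.]])                  # right half is near
shifted = shift_lf_view(view, disp, n_planes=2, angle_gain=1.0)
```

Note `np.roll` wraps pixels around the image border; a production implementation would disocclude or pad instead.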
  • Publication number: 20190066733
    Abstract: A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single image including a middle image of the multiple images and representing an intermediary view of the multiple views.
    Type: Application
    Filed: August 24, 2017
    Publication date: February 28, 2019
    Applicant: Intel Corporation
    Inventors: Gowri Somanath, Oscar Nestares
  • Publication number: 20180288387
    Abstract: A mechanism is described for facilitating real-time capturing, processing, and rendering of data according to one embodiment. A method of embodiments, as described herein, includes facilitating a capturing device to capture data of a scene, where the data includes a video having at least one of a two-and-a-half-dimensional video (2.5D) or a three-dimensional (3D) video. The method may further include processing, in real-time, the data to generate contents representing a 3D rendering of the data, and facilitating a display device to render, in real-time, the contents.
    Type: Application
    Filed: March 29, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Gowri Somanath, Ginni Grover, Oscar Nestares