Patents by Inventor Salma Jiddi

Salma Jiddi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062345
    Abstract: A method, apparatus, and computer-readable medium for foreground object deletion and inpainting, including storing contextual information corresponding to an image of a scene, identifying one or more foreground objects in the scene based at least in part on the contextual information, each foreground object having a corresponding object mask, identifying at least one foreground object in the one or more foreground objects for removal from the image, generating a removal mask corresponding to the at least one foreground object based at least in part on at least one object mask corresponding to the at least one foreground object, determining an estimated geometry of the scene behind the at least one foreground object based at least in part on the contextual information, and inpainting pixels corresponding to the removal mask with a replacement texture omitting the foreground object based at least in part on the estimated geometry of the scene.
    Type: Application
    Filed: June 22, 2023
    Publication date: February 22, 2024
    Inventors: Prakhar Kulshreshtha, Konstantinos Nektarios Lianos, Brian Pugh, Luis Puig Morales, Ajaykumar Unagar, Michael Otrada, Angus Dorbie, Benn Herrera, Patrick Rutkowski, Qing Guo, Jordan Braun, Paul Gauthier, Philip Guindi, Salma Jiddi, Brian Totty
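    Illustrative sketch: the abstract above describes a mask-then-inpaint pipeline. The Python snippet below is a minimal, assumption-laden illustration of the removal-mask idea, not the claimed method; it unions the selected object masks and fills them with OpenCV's generic inpainting, whereas the application additionally uses the estimated scene geometry behind the object. The function name and parameters are hypothetical.
    ```python
    import cv2
    import numpy as np

    def remove_foreground_objects(image_bgr, object_masks, selected_indices):
        """image_bgr: HxWx3 uint8 image; object_masks: list of HxW boolean masks."""
        h, w = image_bgr.shape[:2]
        removal_mask = np.zeros((h, w), dtype=np.uint8)
        for idx in selected_indices:
            # Union of the object masks chosen for removal.
            removal_mask |= object_masks[idx].astype(np.uint8) * 255
        # Dilate slightly so object boundaries are also replaced.
        removal_mask = cv2.dilate(removal_mask, np.ones((5, 5), np.uint8))
        # Generic diffusion-based inpainting stands in for the geometry-aware
        # texture synthesis described in the abstract.
        return cv2.inpaint(image_bgr, removal_mask, 5, cv2.INPAINT_TELEA)
    ```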
  • Publication number: 20240013478
    Abstract: A method and an apparatus for processing a 3D scene are presented. Techniques are disclosed for determining light source locations, including tracking a current viewpoint of a camera capturing object(s) in a 3D scene and determining a reference viewpoint relative to the current viewpoint of the camera. According to aspects, a light source location is determined by obtaining a registered map of real cast shadows of the object(s) from an input image captured by the camera, registered with respect to the reference viewpoint. Then, for each candidate light source, a respective map of virtual shadows of the object(s) is obtained with respect to the reference viewpoint, and the location of the light source is determined from the candidates whose maps of virtual shadows match the registered map of real cast shadows.
    Type: Application
    Filed: September 19, 2023
    Publication date: January 11, 2024
    Inventors: Philippe ROBERT, Salma JIDDI, Tao LUO
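    Illustrative sketch: one simple reading of the shadow-matching step is a search over candidate light positions, scoring each candidate by how well its rendered virtual shadow map overlaps the registered real shadow map. The snippet below assumes a renderer of the known scene geometry is supplied as a callable; the IoU score and all names are hypothetical, not the claimed method.
    ```python
    import numpy as np

    def estimate_light_location(real_shadow_mask, candidate_positions,
                                render_virtual_shadow):
        """real_shadow_mask: HxW booleans registered to the reference viewpoint.
        render_virtual_shadow: callable(light_pos) -> HxW boolean shadow map."""
        best_pos, best_iou = None, -1.0
        for pos in candidate_positions:
            virtual = render_virtual_shadow(pos)
            inter = np.logical_and(real_shadow_mask, virtual).sum()
            union = np.logical_or(real_shadow_mask, virtual).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_pos, best_iou = pos, iou
        return best_pos
    ```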
  • Publication number: 20230419526
    Abstract: A method for layout extraction is provided. The method can include storing a plurality of scene priors corresponding to an image of a scene, detecting a plurality of borders in the scene, generating a plurality of initial plane masks and a plurality of plane connectivity values based at least in part on the plurality of borders, and generating a plurality of optimized plane masks by refining the plurality of initial plane masks based at least in part on an estimated geometry of the plurality of layout planes.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 28, 2023
    Inventors: Konstantinos Nektarios Lianos, Prakhar Kulshreshtha, Brian Pugh, Luis Puig Morales, Ajaykumar Unagar, Michael Otrada, Angus Dorbie, Benn Herrera, Patrick Rutkowski, Qing Guo, Jordan Braun, Paul Gauthier, Philip Guindi, Salma Jiddi, Brian Totty
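    Illustrative sketch: a common way to refine rough plane masks, assumed here purely for illustration, is to fit a plane to the 3D points under each initial mask and reassign every pixel to the plane with the smallest point-to-plane distance. This is not the claimed optimization; the function names are hypothetical.
    ```python
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through Nx3 points; returns (unit normal, centroid)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return vt[-1], centroid

    def refine_plane_masks(points_3d, initial_masks):
        """points_3d: HxWx3 back-projected scene points; initial_masks: HxW bools."""
        planes = [fit_plane(points_3d[mask]) for mask in initial_masks]
        # Distance of every pixel's 3D point to every fitted plane.
        dists = np.stack([np.abs((points_3d - c) @ n) for n, c in planes], axis=-1)
        labels = dists.argmin(axis=-1)                      # HxW plane index
        return [labels == i for i in range(len(planes))]    # refined masks
    ```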
  • Publication number: 20230410337
    Abstract: System and method for rendering virtual objects onto an image.
    Type: Application
    Filed: June 20, 2023
    Publication date: December 21, 2023
    Inventors: Brian Pugh, Angus Dorbie, Salma Jiddi, Qiqin Dai, Paul Gauthier, Marc Eder, Jianfeng Yin, Luis Puig Morales, Michael Otrada, Konstantinos Nektarios Lianos, Philip Guindi, Brian Totty
  • Publication number: 20230410424
    Abstract: A method and system for generating a virtual representation of a physical scene, including receiving scene data corresponding to the physical scene, processing the scene data to determine scene components and scene priors corresponding to the scene components, generating, by a plurality of neural networks, dense geometric representations based at least in part on the scene priors, where each dense geometric representation corresponds to a scene component in the scene components, generating a virtual model of the physical scene based at least in part on the dense geometric representations, and generating a virtual representation of the physical scene based at least in part on the scene data, the virtual representation being aligned with the virtual model.
    Type: Application
    Filed: June 16, 2023
    Publication date: December 21, 2023
    Inventors: Brian TOTTY, Kevin WONG, Jianfeng YIN, Luis Puig MORALES, Paul GAUTHIER, Salma JIDDI, Qiqin DAI, Brian PUGH, Konstantinos Nektarios LIANOS, Angus DORBIE, Yacine ALAMI, Marc EDER, Christopher SWEENEY, Javier CIVERA
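    Illustrative sketch: the abstract outlines a per-component pipeline in which each scene component and its priors feed a dedicated network, and the resulting dense geometry is fused and aligned back to the captured scene. The skeleton below only mirrors that data flow; every class, function, and parameter name is hypothetical and does not reflect the actual implementation.
    ```python
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class SceneComponent:
        name: str      # e.g. "floor", "walls", "objects"
        data: object   # observations for this component
        priors: dict   # scene priors attached to it

    def build_virtual_scene(components: List[SceneComponent],
                            networks: Dict[str, Callable],
                            fuse: Callable,
                            align: Callable):
        """One network per component produces dense geometry; the results are
        fused into a virtual model and aligned to the captured scene data."""
        dense_geometry = {c.name: networks[c.name](c.data, c.priors)
                          for c in components}
        virtual_model = fuse(dense_geometry)
        return align(virtual_model, components)
    ```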
  • Patent number: 11727587
    Abstract: System and method for rendering virtual objects onto an image.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: August 15, 2023
    Assignee: Geomagical Labs, Inc.
    Inventors: Brian Pugh, Angus Dorbie, Salma Jiddi, Qiqin Dai, Paul Gauthier, Marc Eder, Jianfeng Yin, Luis Puig Morales, Michael Otrada, Konstantinos Nektarios Lianos, Philip Guindi, Brian Totty
  • Patent number: 11721067
    Abstract: A method and system for generating a virtual representation of a physical scene, including receiving scene data corresponding to the physical scene, processing the scene data to determine scene components and scene priors corresponding to the scene components, generating, by a plurality of neural networks, dense geometric representations based at least in part on the scene priors, where each dense geometric representation corresponds to a scene component in the scene components, generating a virtual model of the physical scene based at least in part on the dense geometric representations, and generating a virtual representation of the physical scene based at least in part on the scene data, the virtual representation being aligned with the virtual model.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: August 8, 2023
    Assignee: Geomagical Labs, Inc.
    Inventors: Brian Totty, Kevin Wong, Jianfeng Yin, Luis Puig Morales, Paul Gauthier, Salma Jiddi, Qiqin Dai, Brian Pugh, Konstantinos Nektarios Lianos, Angus Dorbie, Yacine Alami, Marc Eder, Christopher Sweeney, Javier Civera
  • Publication number: 20220020210
    Abstract: A method and system for generating a virtual representation of a physical scene, including receiving scene data corresponding to the physical scene, processing the scene data to determine scene components and scene priors corresponding to the scene components, generating, by a plurality of neural networks, dense geometric representations based at least in part on the scene priors, where each dense geometric representation corresponds to a scene component in the scene components, generating a virtual model of the physical scene based at least in part on the dense geometric representations, and generating a virtual representation of the physical scene based at least in part on the scene data, the virtual representation being aligned with the virtual model.
    Type: Application
    Filed: September 29, 2021
    Publication date: January 20, 2022
    Inventors: Brian TOTTY, Kevin WONG, Jianfeng YIN, Luis Puig MORALES, Paul GAUTHIER, Salma JIDDI, Qiqin DAI, Brian PUGH, Konstantinos Nektarios LIANOS, Angus DORBIE, Yacine ALAMI, Marc EDER, Christopher SWEENEY, Javier CIVERA
  • Patent number: 11170569
    Abstract: A method for determining a visual scene virtual representation and a highly accurate visual scene-aligned geometric representation for virtual interaction.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: November 9, 2021
    Assignee: GEOMAGICAL LABS, INC.
    Inventors: Brian Totty, Kevin Wong, Jianfeng Yin, Luis Puig Morales, Paul Gauthier, Salma Jiddi, Qiqin Dai, Brian Pugh, Konstantinos Nektarios Lianos, Angus Dorbie, Yacine Alami, Marc Eder, Christopher Sweeney, Javier Civera
  • Publication number: 20210142497
    Abstract: System and method for rendering virtual objects onto an image.
    Type: Application
    Filed: November 12, 2020
    Publication date: May 13, 2021
    Inventors: Brian Pugh, Angus Dorbie, Salma Jiddi, Qiqin Dai, Paul Gauthier, Marc Eder, Jianfeng Yin, Luis Puig Morales, Michael Otrada, Konstantinos Nektarios Lianos, Philip Guindi, Brian Totty
  • Publication number: 20210082178
    Abstract: A method and an apparatus for processing a 3D scene are disclosed. At least one virtual reference viewpoint in the 3D scene is determined (41). A map of registered real cast shadows of objects in the 3D scene from an input image captured by a camera positioned at a viewpoint distinct from the virtual reference viewpoint is obtained (42), said map of real cast shadows being registered with regards to the virtual reference viewpoint. Parameters for at least one light source in the 3D scene are determined (44) using the map of registered real cast shadows and at least one map of virtual shadows of objects in the 3D scene cast by the at least one light source from the virtual reference viewpoint.
    Type: Application
    Filed: March 8, 2019
    Publication date: March 18, 2021
    Applicant: InterDigital CE Patent Holdings
    Inventors: Philippe ROBERT, Salma JIDDI, Tao LUO
  • Patent number: 10930059
    Abstract: A method and an apparatus for processing a 3D scene are disclosed. A reference image representative of an image of the scene captured under ambient lighting is determined. A texture-free map is determined from said reference image and an input image of the scene. The 3D scene is then processed using the determined texture-free map.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: February 23, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Salma Jiddi, Gregoire Nieto, Philippe Robert
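    Illustrative sketch: one simple interpretation of a texture-free map, assumed here for illustration only, is a per-pixel ratio of the input image to the ambient-lit reference image, which cancels the surface texture the two images share and leaves a shading-like quantity. This is not the patented algorithm; the function name and parameters are hypothetical.
    ```python
    import numpy as np

    def texture_free_map(input_img, reference_img, eps=1e-3):
        """Both images: HxWx3 float arrays in [0, 1], pixel-aligned."""
        ratio = input_img / np.clip(reference_img, eps, None)
        # Average over color channels to keep one shading-like value per pixel.
        return ratio.mean(axis=2)
    ```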
  • Publication number: 20200302686
    Abstract: A method for determining a visual scene virtual representation and a highly accurate visual scene-aligned geometric representation for virtual interaction.
    Type: Application
    Filed: March 18, 2020
    Publication date: September 24, 2020
    Inventors: Brian Totty, Kevin Wong, Jianfeng Yin, Luis Puig Morales, Paul Gauthier, Salma Jiddi, Qiqin Dai, Brian Pugh, Konstantinos Nektarios Lianos, Angus Dorbie, Yacine Alami, Marc Eder, Christopher Sweeney, Javier Civera
  • Publication number: 20200005527
    Abstract: A synthesis lighting environment representation of a 3D scene is constructed by receiving (10) data representative of at least one first image of the scene taken from at least one location outside the scene; receiving (20) data representative of at least one second image of the scene containing at least one light source illuminating the scene and taken from at least one filming position inside the scene; merging (30) a first lighting environment representation derived from the data representative of the first image(s) and a second lighting environment representation derived from the data representative of the second image(s) into the synthesis lighting environment representation (Rep). Applications to augmented and mixed reality.
    Type: Application
    Filed: December 14, 2017
    Publication date: January 2, 2020
    Inventors: Philippe ROBERT, Salma JIDDI, Anthony LAURENT
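    Illustrative sketch: if both lighting environments are represented as pixel-aligned HDR environment maps, a simple merge, assumed here for illustration, is a per-pixel blend that trusts the inside-the-scene capture wherever it recorded bright direct light sources. The weighting rule and names are hypothetical, not the claimed merging step.
    ```python
    import numpy as np

    def merge_environment_maps(env_outside, env_inside, brightness_threshold=2.0):
        """Both maps: HxWx3 float HDR radiance, same resolution and orientation."""
        luminance = env_inside.mean(axis=2, keepdims=True)
        # Weight toward the inside capture where it saw bright direct sources.
        w_inside = np.clip(luminance / brightness_threshold, 0.0, 1.0)
        return w_inside * env_inside + (1.0 - w_inside) * env_outside
    ```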
  • Publication number: 20190325640
    Abstract: A method and an apparatus for processing a 3D scene are disclosed. A reference image representative of an image of the scene captured under ambient lighting is determined. A texture-free map is determined from said reference image and an input image of the scene. The 3D scene is then processed using the determined texture-free map.
    Type: Application
    Filed: April 22, 2019
    Publication date: October 24, 2019
    Inventors: Salma JIDDI, Gregoire NIETO, Philippe ROBERT
  • Patent number: 10132912
    Abstract: A method, apparatus and system for estimating reflectance parameters and a position of the light source(s) of specular reflections of a scene include RGB sequence analysis with measured geometry in order to estimate specular reflectance parameters of an observed 3D scene. Embodiments include pixel-based image registration from which profiles of 3D scene points' image intensities over the sequence are estimated. A profile is attached to a 3D point and to the set of pixels that display its intensity in the registered sequence. Subsequently, a distinction is made between variable profiles that reveal specular effects and constant profiles that show diffuse reflections only. Then, for each variable profile, the diffuse reflectance is estimated and subtracted from the intensity profile to deduce the specular profile, and the specular parameters are estimated for each observed 3D point. Then, the location of at least one light source responsible for the specular effects is estimated.
    Type: Grant
    Filed: September 17, 2016
    Date of Patent: November 20, 2018
    Assignee: Thomson Licensing
    Inventors: Philippe Robert, Salma Jiddi, Matis Hudon
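    Illustrative sketch: the profile analysis can be pictured as classifying each 3D point's intensity profile over the registered frames as constant (diffuse only) or variable (specular), taking the profile minimum as a crude diffuse estimate, and subtracting it to isolate the specular residual. The threshold and the minimum-as-diffuse rule are simplifying assumptions, not the patented estimator.
    ```python
    import numpy as np

    def split_diffuse_specular(profiles, var_threshold=1e-3):
        """profiles: NxT intensities of N scene points over T registered frames."""
        variable = profiles.var(axis=1) > var_threshold       # N booleans
        diffuse = profiles.min(axis=1)                        # per-point diffuse term
        specular = np.where(variable[:, None],
                            profiles - diffuse[:, None], 0.0) # NxT specular residual
        return variable, diffuse, specular
    ```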
  • Publication number: 20180211446
    Abstract: A method for processing a 3D scene and a corresponding apparatus are disclosed. A 3D position of at least one point light source of the 3D scene is determined from information representative of 3D geometry of the scene. Then, an occlusion attenuation coefficient assigned to the at least one point light source is calculated from an occluded area and an unoccluded area, the occluded area and the unoccluded area only differing in that the at least one point light source is occluded by an object in the occluded area and is not occluded in the unoccluded area. Color intensity of at least one pixel of the 3D scene can thus be modified using at least the occlusion attenuation coefficient.
    Type: Application
    Filed: January 21, 2018
    Publication date: July 26, 2018
    Inventors: Philippe ROBERT, Salma JIDDI, Anthony LAURENT
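    Illustrative sketch: one way to picture the coefficient, assumed here for illustration, is the ratio of mean intensity between a region where the point light is occluded and a comparable unoccluded region; pixels shadowed by a virtual object can then be darkened by that ratio. Region selection is assumed given; names are hypothetical.
    ```python
    import numpy as np

    def occlusion_attenuation(image, occluded_mask, unoccluded_mask):
        """image: HxW or HxWx3 intensities; masks: HxW booleans over two regions
        that differ only in whether the light source is occluded."""
        return float(image[occluded_mask].mean() / image[unoccluded_mask].mean())

    def apply_virtual_shadow(image, shadow_mask, coeff):
        out = image.astype(np.float32).copy()
        out[shadow_mask] *= coeff   # attenuate pixels shadowed by the virtual object
        return out
    ```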
  • Publication number: 20170082720
    Abstract: A method, apparatus and system for estimating reflectance parameters and a position of the light source(s) of specular reflections of a scene include RGB sequence analysis with measured geometry in order to estimate specular reflectance parameters of an observed 3D scene. Embodiments include pixel-based image registration from which profiles of 3D scene points' image intensities over the sequence are estimated. A profile is attached to a 3D point and to the set of pixels that display its intensity in the registered sequence. Subsequently, a distinction is made between variable profiles that reveal specular effects and constant profiles that show diffuse reflections only. Then, for each variable profile, the diffuse reflectance is estimated and subtracted from the intensity profile to deduce the specular profile, and the specular parameters are estimated for each observed 3D point. Then, the location of at least one light source responsible for the specular effects is estimated.
    Type: Application
    Filed: September 17, 2016
    Publication date: March 23, 2017
    Inventors: Philippe ROBERT, Salma JIDDI, Matis HUDON
  • Publication number: 20170084075
    Abstract: A method and system for three dimensional presentation of two dimensional images in a video sequence having a plurality of frames is provided. In one embodiment, the method comprises identifying a plurality of points to be presented in three dimensional images and performing a color and depth sequence analysis for each of these points. A profile is then generated for each of the points based on the analysis. The profiles are classified as variable profiles or constant profiles, and a surface reflectance is calculated for each of the points having a constant profile. The method also comprises modifying the two dimensional images to present them as three dimensional images for points having a constant profile, wherein the images maintain uniform color and appearance between adjacent frames along said video sequence.
    Type: Application
    Filed: September 16, 2016
    Publication date: March 23, 2017
    Inventors: Philippe ROBERT, Salma JIDDI, Matis HUDON
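    Illustrative sketch: the profile classification can be pictured as measuring each tracked point's color variance across frames and, for constant profiles, replacing the point's color in every frame with its median over the sequence so appearance stays uniform between adjacent frames. This simplification and all names are assumptions, not the claimed method.
    ```python
    import numpy as np

    def stabilize_constant_points(colors, var_threshold=1e-3):
        """colors: NxTx3 float RGB values of N tracked points over T frames."""
        variable = colors.var(axis=1).mean(axis=1) > var_threshold   # N booleans
        median = np.median(colors, axis=1, keepdims=True)            # Nx1x3
        stabilized = colors.copy()
        stabilized[~variable] = median[~variable]  # broadcasts over the T frames
        return variable, stabilized
    ```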