Patents by Inventor Kevin Karsch

Kevin Karsch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11455739
    Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: September 27, 2022
    Assignee: Lightform, Inc.
    Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
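
The abstract above does not say how the pixel correspondence map is computed, but projecting Gray-code setup frames and decoding them in the camera is a common structured-light approach. Below is a minimal Python sketch under that assumption; the function names, the MSB-first bit ordering, and the complement-pattern thresholding are illustrative, not taken from the patent.

```python
import numpy as np

def gray_to_binary(g):
    """Convert per-pixel Gray-coded integers to plain binary integers."""
    b = g.copy()
    mask = g >> 1
    while mask.any():
        b ^= mask
        mask >>= 1
    return b

def decode_axis(captures, inverses):
    """Decode one projector axis from camera images of Gray-code frames.

    captures[i] / inverses[i]: (H, W) images of the i-th bit pattern and
    its complement (MSB first); comparing the pair recovers each bit
    without needing a global intensity threshold.
    """
    gray = np.zeros(captures[0].shape, dtype=np.uint32)
    for cap, inv in zip(captures, inverses):
        bit = (cap.astype(np.int32) > inv.astype(np.int32)).astype(np.uint32)
        gray = (gray << 1) | bit
    return gray_to_binary(gray)

def correspondence_map(col_caps, col_invs, row_caps, row_invs):
    """For each camera pixel, the projector (x, y) it observes."""
    return np.stack([decode_axis(col_caps, col_invs),
                     decode_axis(row_caps, row_invs)], axis=-1)
```

With such a map in hand, producing the "corrected color image" from the baseline image reduces to a per-pixel lookup between camera and projector coordinates.
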
  • Patent number: 11245883
    Abstract: One variation of a method for augmenting surfaces in a space includes: at a projection system, recording a sequence of scans of the space; aggregating the sequence of scans into a projector-domain image of the space; detecting a set of objects in the projector-domain image; identifying an object, in the set of objects, as of a first type; detecting a surface proximal the object in the projector-domain image; defining an association between the surface and a content source based on the first type of the object; based on the association, warping visual content output by the content source according to a profile of the surface extracted from the projector-domain image; actuating a projector assembly in the projection system to locate the surface in the field of view of a light projector in the projection system; and projecting the visual content, via the light projector, toward the surface.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: February 8, 2022
    Assignee: Lightform, Inc.
    Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck
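
One plausible reading of the "warping visual content ... according to a profile of the surface" step in patent 11245883 is a planar homography from the content frame to the surface's quadrilateral in projector coordinates. A hedged OpenCV sketch follows; `surface_quad` and the corner ordering are assumptions, not terms from the patent.

```python
import cv2
import numpy as np

def warp_to_surface(content, surface_quad, projector_size):
    """Map a content frame onto a planar surface profile via a homography.

    surface_quad: four (x, y) surface corners in projector coordinates,
    ordered top-left, top-right, bottom-right, bottom-left.
    projector_size: (width, height) of the output projector frame.
    """
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(surface_quad))
    return cv2.warpPerspective(content, H, projector_size)
```

For a 1920x1080 projector, `warp_to_surface(frame, quad, (1920, 1080))` pins the content to the detected surface; a non-planar surface would need a denser mesh warp rather than a single homography.
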
  • Patent number: 11223807
    Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: January 11, 2022
    Assignee: Lightform, Inc.
    Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
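
The step of "identifying a first space ... based on features detected in the first scan" amounts to place recognition. A minimal sketch using ORB feature matching; the `space_db` structure, the descriptor-distance cutoff, and the match threshold are all illustrative assumptions, not the patent's method.

```python
import cv2

def identify_space(scan, space_db, min_matches=25):
    """Guess which space the projector occupies by matching ORB features
    in the scan against descriptors stored per known space.

    space_db: {space_id: descriptors} built from earlier scans.
    Returns the best-matching space id, or None if nothing clears the
    (arbitrary) min_matches bar.
    """
    orb = cv2.ORB_create()
    _, query = orb.detectAndCompute(scan, None)
    if query is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best_id, best_score = None, min_matches - 1
    for space_id, stored in space_db.items():
        good = [m for m in matcher.match(query, stored) if m.distance < 40]
        if len(good) > best_score:
            best_id, best_score = space_id, len(good)
    return best_id
```
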
  • Publication number: 20200413016
    Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
    Type: Application
    Filed: September 10, 2020
    Publication date: December 31, 2020
    Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
  • Publication number: 20200402247
    Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
    Type: Application
    Filed: May 11, 2020
    Publication date: December 24, 2020
    Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
  • Publication number: 20200374498
    Abstract: One variation of a method for augmenting surfaces in a space includes: at a projection system, recording a sequence of scans of the space; aggregating the sequence of scans into a projector-domain image of the space; detecting a set of objects in the projector-domain image; identifying an object, in the set of objects, as of a first type; detecting a surface proximal the object in the projector-domain image; defining an association between the surface and a content source based on the first type of the object; based on the association, warping visual content output by the content source according to a profile of the surface extracted from the projector-domain image; actuating a projector assembly in the projection system to locate the surface in the field of view of a light projector in the projection system; and projecting the visual content, via the light projector, toward the surface.
    Type: Application
    Filed: January 29, 2020
    Publication date: November 26, 2020
    Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck
  • Patent number: 10805585
    Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: October 13, 2020
    Assignee: Lightform, Inc.
    Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
  • Patent number: 10692233
    Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: June 23, 2020
    Assignee: Lightform, Inc.
    Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
  • Publication number: 20200195900
    Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
    Type: Application
    Filed: December 2, 2019
    Publication date: June 18, 2020
    Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
  • Publication number: 20200105006
    Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
    Type: Application
    Filed: June 20, 2019
    Publication date: April 2, 2020
    Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
  • Patent number: 10373325
    Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: August 6, 2019
    Assignee: Lightform, Inc.
    Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
  • Patent number: 9852238
    Abstract: A system and method are disclosed for, using structure-from-motion techniques, projecting a building information model (BIM) into images from photographs taken of a construction site, to generate a 3D point cloud model that, when combined with the BIM and scheduling constraints, facilitates 4D visualizations and progress monitoring. One of the images acts as an anchor image. Indications are received of first points in the anchor image that correspond to second points in the BIM. Calibration information for an anchor camera is calculated based on the indications and on metadata extracted from the anchor image, to register the anchor image in relation to the BIM. A homography transformation is determined between the images and the anchor camera using the calibration information, to register the rest of the images with the BIM, where some of those images are taken from different cameras and from different angles to the construction site.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: December 26, 2017
    Assignee: The Board of Trustees of the University of Illinois
    Inventors: David Alexander Forsyth, Kevin Karsch, Mani Golparvar-Fard
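
Calculating "calibration information for an anchor camera" from clicked image points and their BIM counterparts is a perspective-n-point (PnP) problem. A sketch with OpenCV; the EXIF-derived `focal_px` input and the centered principal point are assumptions about the metadata step, not details taken from the patent.

```python
import cv2
import numpy as np

def register_anchor_image(bim_pts_3d, img_pts_2d, focal_px, image_size):
    """Estimate the anchor camera's pose relative to the BIM from at
    least four user-indicated 2D-3D point correspondences."""
    w, h = image_size
    K = np.array([[focal_px, 0, w / 2],
                  [0, focal_px, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(bim_pts_3d, dtype=np.float64),
        np.asarray(img_pts_2d, dtype=np.float64), K, None)
    if not ok:
        raise ValueError("PnP failed; provide more correspondences")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return K, R, tvec
```
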
  • Patent number: 9786253
    Abstract: A method for generating additional views from a stereo image defined by a left eye image and a right eye image. The method includes receiving as input at least one stereo image. The method includes, for each of the stereo images, generating a plurality of additional images. The method includes interlacing the additional images for each of the stereo images to generate three dimensional (3D) content made up of multiple views of the scenes presented by each of the stereo images. The interlacing may be performed such that the generated 3D content is displayable on a 3D display device including a barrier grid or a lenticular lens array on the monitor screen. The additional images may include 12 to 40 or more frames providing views of the one or more scenes from viewing angles differing from those provided by the left and right cameras used to generate the original stereo image.
    Type: Grant
    Filed: January 25, 2013
    Date of Patent: October 10, 2017
    Assignee: LUMENCO, LLC
    Inventors: Mark A. Raymond, Hector Andres Porras Soto, Kevin Karsch
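
The interlacing step described above assigns one output column per view, repeating every N columns, so a lenticular lens or barrier grid directs each view toward a different eye position. A minimal sketch of that step only; synthesizing the intermediate views themselves (typically by warping the left and right images along their disparity map) is the substantive part of the method and is omitted here.

```python
import numpy as np

def interlace_views(views):
    """Column-interlace N equally sized views for a lenticular or
    barrier-grid display: output column j is taken from view j mod N."""
    n = len(views)
    out = np.empty_like(views[0])
    for j in range(out.shape[1]):
        out[:, j] = views[j % n][:, j]
    return out
```

The abstract's 12 to 40 or more frames would simply be passed as `views`; real displays also interleave at sub-pixel granularity and account for lens slant, which this sketch ignores.
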
  • Patent number: 9613454
    Abstract: Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: April 4, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin
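
The final insertion step in this line of work is commonly done with differential rendering: render the estimated scene model with and without the object, then add the difference (shadows, interreflections) back onto the photograph. A hedged sketch of that compositing step; the array shapes and 8-bit clamp are assumptions.

```python
import numpy as np

def differential_composite(photo, with_obj, without_obj, mask):
    """Composite a rendered object into a photograph: inside the object
    mask, take the full render; elsewhere, add the render's change in
    light transport to the original photo.

    photo: (H, W, 3) input photograph.
    with_obj / without_obj: (H, W, 3) renders of the estimated scene
    model with and without the inserted object.
    mask: (H, W, 1) float in [0, 1], 1 where the object covers pixels.
    """
    photo = photo.astype(np.float32)
    with_obj = with_obj.astype(np.float32)
    delta = with_obj - without_obj.astype(np.float32)
    out = mask * with_obj + (1.0 - mask) * (photo + delta)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because only the render's delta is applied outside the mask, errors in the estimated scene model largely cancel, which is what makes insertion from a single image plausible.
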
  • Patent number: 9471967
    Abstract: A fragment is relit for insertion into a target scene of an image by obtaining a fragment model for the fragment. A set of detail maps for the fragment model are generated, each of which encodes fine-scale shading effects from the surface detail of the fragment. A target scene model is obtained for the target scene, and the fragment model is inserted into the target scene model. The target scene model with inserted fragment model is rendered, and a composited target scene is generated. A modified target scene is generated by combining the composited target scene and the set of detail maps. Weights assigned to the different detail maps can be changed by the user, allowing the modified target scene to be readily altered without re-rendering the target scene model with the inserted fragment model.
    Type: Grant
    Filed: July 20, 2012
    Date of Patent: October 18, 2016
    Assignee: The Board of Trustees of the University of Illinois
    Inventors: Kevin Karsch, Zicheng Liao, David Forsyth
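
The re-weighting trick in the abstract above is cheap because the detail maps enter the final image linearly, so the user can change weights without touching the renderer. A minimal sketch of that combination step; the additive model and the 8-bit clamp are assumptions for illustration.

```python
import numpy as np

def apply_detail_maps(composited, detail_maps, weights):
    """Combine a composited target scene with weighted fine-scale detail
    maps; changing `weights` restyles the result without re-rendering."""
    out = composited.astype(np.float32)
    for detail, w in zip(detail_maps, weights):
        out += w * detail.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```
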
  • Publication number: 20160171755
    Abstract: Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
    Type: Application
    Filed: February 25, 2016
    Publication date: June 16, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin
  • Patent number: 9330500
    Abstract: An image into which one or more objects are to be inserted is obtained. Based on the image, both a 3-dimensional (3D) representation and a light model of the scene in the image are generated. One or more objects are added to the 3D representation of the scene. The 3D representation of the scene is rendered, based on the light model, to generate a modified image that is the obtained image modified to include the one or more objects.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: May 3, 2016
    Assignee: The Board of Trustees of the University of Illinois
    Inventors: Kevin Karsch, Varsha Chandrashekhar Hedau, David A. Forsyth, Derek Hoiem
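
Rendering the 3D representation "based on the light model" comes down to shading estimated geometry under estimated illumination. As one hedged example, a single directional light with Lambertian reflectance; the per-pixel `normals` and `albedo` inputs and the ambient term are illustrative simplifications, not the patent's light model.

```python
import numpy as np

def shade_lambertian(normals, albedo, light_dir, ambient=0.1):
    """Render a diffuse shading image from per-pixel unit normals
    (H, W, 3) and albedo (H, W, 3) under one directional light."""
    L = np.asarray(light_dir, dtype=np.float32)
    L /= np.linalg.norm(L)
    ndotl = np.clip(normals @ L, 0.0, None)            # (H, W)
    return albedo * (ambient + (1.0 - ambient) * ndotl[..., None])
```
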
  • Patent number: 9299188
    Abstract: Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
    Type: Grant
    Filed: August 8, 2013
    Date of Patent: March 29, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin
  • Publication number: 20150310135
    Abstract: A system and method are disclosed for, using structure-from-motion techniques, projecting a building information model (BIM) into images from photographs taken of a construction site, to generate a 3D point cloud model that, when combined with the BIM and scheduling constraints, facilitates 4D visualizations and progress monitoring. One of the images acts as an anchor image. Indications are received of first points in the anchor image that correspond to second points in the BIM. Calibration information for an anchor camera is calculated based on the indications and on metadata extracted from the anchor image, to register the anchor image in relation to the BIM. A homography transformation is determined between the images and the anchor camera using the calibration information, to register the rest of the images with the BIM, where some of those images are taken from different cameras and from different angles to the construction site.
    Type: Application
    Filed: April 17, 2015
    Publication date: October 29, 2015
    Inventors: David Alexander Forsyth, Kevin Karsch, Mani Golparvar-Fard
  • Publication number: 20150302563
    Abstract: A fragment is relit for insertion into a target scene of an image by obtaining a fragment model for the fragment. A set of detail maps for the fragment model are generated, each of which encodes fine-scale shading effects from the surface detail of the fragment. A target scene model is obtained for the target scene, and the fragment model is inserted into the target scene model. The target scene model with inserted fragment model is rendered, and a composited target scene is generated. A modified target scene is generated by combining the composited target scene and the set of detail maps. Weights assigned to the different detail maps can be changed by the user, allowing the modified target scene to be readily altered without re-rendering the target scene model with the inserted fragment model.
    Type: Application
    Filed: July 20, 2012
    Publication date: October 22, 2015
    Applicant: The Board of Trustees of the University of Illinois
    Inventors: Kevin KARSCH, Zicheng LIAO, David FORSYTH