Patents by Inventor Kevin Karsch
Kevin Karsch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11455739
Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
Type: Grant
Filed: May 11, 2020
Date of Patent: September 27, 2022
Assignee: Lightform, Inc.
Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
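The pixel correspondence map at the core of this method pairs each camera pixel with the projector pixel that illuminates it. Below is a minimal sketch of one common way to recover such a map, assuming the setup frames are binary Gray-code patterns; the coding scheme, function names, and frame format are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch: recovering a camera-to-projector pixel correspondence map from
# structured-light setup frames. Binary Gray-code patterns are an assumed
# coding scheme; the patent abstract does not specify one.
import numpy as np

def decode_gray_code(pattern_frames, inverse_frames):
    """Decode per-pixel projector coordinates from captured Gray-code frames.

    pattern_frames / inverse_frames: lists of (H, W) grayscale captures,
    one pair per bit (MSB first), taken while the projector shows each
    pattern and its inverse.
    Returns an (H, W) integer array of projector coordinates along one axis.
    """
    num_bits = len(pattern_frames)
    h, w = pattern_frames[0].shape
    # A camera pixel saw a "1" where the pattern was brighter than its inverse.
    gray = np.stack([
        (p.astype(np.int32) > q.astype(np.int32)).astype(np.uint8)
        for p, q in zip(pattern_frames, inverse_frames)
    ])
    # Gray-to-binary conversion: b[0] = g[0], b[i] = b[i-1] XOR g[i].
    binary = np.empty_like(gray)
    binary[0] = gray[0]
    for i in range(1, num_bits):
        binary[i] = binary[i - 1] ^ gray[i]
    # Pack bits, MSB first, into integer projector coordinates.
    coords = np.zeros((h, w), dtype=np.int64)
    for i in range(num_bits):
        coords = (coords << 1) | binary[i]
    return coords
```

Decoding column patterns and row patterns separately yields a full (x, y) projector coordinate for every camera pixel, which is the kind of correspondence map used to transform the baseline image into the corrected color image.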
-
Patent number: 11245883
Abstract: One variation of a method for augmenting surfaces in a space includes: at a projection system, recording a sequence of scans of the space; aggregating the sequence of scans into a projector-domain image of the space; detecting a set of objects in the projector-domain image; identifying an object, in the set of objects, as of a first type; detecting a surface proximal the object in the projector-domain image; defining an association between the surface and a content source based on the first type of the object; based on the association, warping visual content output by the content source according to a profile of the surface extracted from the projector-domain image; actuating a projector assembly in the projection system to locate the surface in the field of view of a light projector in the projection system; and projecting the visual content, via the light projector, toward the surface.
Type: Grant
Filed: January 29, 2020
Date of Patent: February 8, 2022
Assignee: Lightform, Inc.
Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck
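Once a surface is detected near a recognized object, the method warps the associated content to the surface's profile. Here is a minimal sketch of that warping step, under the assumption that the profile reduces to a four-corner quadrilateral in projector coordinates; the OpenCV-based approach and all names are illustrative, not taken from the patent.

```python
# Sketch: warping content to a detected surface's profile, assuming the
# profile reduces to a four-corner quadrilateral in projector coordinates.
import cv2
import numpy as np

def warp_content_to_surface(content, surface_quad, projector_size):
    """content: (h, w, 3) frame from the associated content source.
    surface_quad: 4x2 corners of the surface in projector pixels, ordered
    top-left, top-right, bottom-right, bottom-left.
    projector_size: (width, height) of the output projector frame.
    """
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Perspective transform mapping the content rectangle onto the surface.
    H = cv2.getPerspectiveTransform(src, surface_quad.astype(np.float32))
    return cv2.warpPerspective(content, H, projector_size)
```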
-
Patent number: 11223807
Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
Type: Grant
Filed: September 10, 2020
Date of Patent: January 11, 2022
Assignee: Lightform, Inc.
Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
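Identifying the occupied space "based on features detected in the first scan" is essentially place recognition. A minimal sketch follows, assuming each space is represented by one stored reference scan and using ORB features with brute-force matching as a stand-in for whatever feature pipeline the patent actually uses.

```python
# Sketch: identifying which space the projector occupies by matching scan
# features against one stored reference scan per space. ORB features and
# brute-force matching are assumptions for illustration.
import cv2

def identify_space(scan, reference_scans):
    """scan: grayscale image of the current scan.
    reference_scans: dict mapping space_id -> grayscale reference scan.
    Returns the space_id whose reference shares the most matched features.
    """
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, scan_desc = orb.detectAndCompute(scan, None)
    best_id, best_count = None, 0
    for space_id, ref in reference_scans.items():
        _, ref_desc = orb.detectAndCompute(ref, None)
        if scan_desc is None or ref_desc is None:
            continue  # No features found; skip this candidate space.
        matches = matcher.match(scan_desc, ref_desc)
        if len(matches) > best_count:
            best_id, best_count = space_id, len(matches)
    return best_id
```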
-
Publication number: 20200413016
Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
Type: Application
Filed: September 10, 2020
Publication date: December 31, 2020
Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
-
Publication number: 20200402247
Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
Type: Application
Filed: May 11, 2020
Publication date: December 24, 2020
Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
-
Publication number: 20200374498
Abstract: One variation of a method for augmenting surfaces in a space includes: at a projection system, recording a sequence of scans of the space; aggregating the sequence of scans into a projector-domain image of the space; detecting a set of objects in the projector-domain image; identifying an object, in the set of objects, as of a first type; detecting a surface proximal the object in the projector-domain image; defining an association between the surface and a content source based on the first type of the object; based on the association, warping visual content output by the content source according to a profile of the surface extracted from the projector-domain image; actuating a projector assembly in the projection system to locate the surface in the field of view of a light projector in the projection system; and projecting the visual content, via the light projector, toward the surface.
Type: Application
Filed: January 29, 2020
Publication date: November 26, 2020
Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck
-
Patent number: 10805585
Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
Type: Grant
Filed: December 2, 2019
Date of Patent: October 13, 2020
Assignee: Lightform, Inc.
Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
-
Patent number: 10692233
Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
Type: Grant
Filed: June 20, 2019
Date of Patent: June 23, 2020
Assignee: Lightform, Inc.
Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
-
Publication number: 20200195900
Abstract: One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
Type: Application
Filed: December 2, 2019
Publication date: June 18, 2020
Inventors: Rajinder Sodhi, Brett Jones, Kevin Karsch, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny
-
Publication number: 20200105006
Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
Type: Application
Filed: June 20, 2019
Publication date: April 2, 2020
Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
-
Patent number: 10373325
Abstract: One variation of a method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
Type: Grant
Filed: September 28, 2018
Date of Patent: August 6, 2019
Assignee: Lightform, Inc.
Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
-
Patent number: 9852238
Abstract: A system and method are disclosed for, using structure-from-motion techniques, projecting a building information model (BIM) into images from photographs taken of a construction site, to generate a 3D point cloud model using the BIM that, when combined with scheduling constraints, facilitates 4D visualizations and progress monitoring. One of the images acts as an anchor image. Indications are received of first points in the anchor image that correspond to second points in the BIM. Calibration information for an anchor camera is calculated based on the indications and on metadata extracted from the anchor image, to register the anchor image in relation to the BIM. A homography transformation is determined between the images and the anchor camera using the calibration information, to register the rest of the images with the BIM, where some of those images are taken from different cameras and from different angles to the construction site.
Type: Grant
Filed: April 17, 2015
Date of Patent: December 26, 2017
Assignee: The Board of Trustees of the University of Illinois
Inventors: David Alexander Forsyth, Kevin Karsch, Mani Golparvar-Fard
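The calibration step, computing the anchor camera's pose from user-indicated 2D-3D point correspondences, can be sketched as a standard PnP solve. The abstract derives intrinsics from image metadata; the sketch below takes a focal length directly, and its names are illustrative rather than the patented procedure.

```python
# Sketch: registering the anchor image to the BIM from indicated point
# correspondences via a PnP solve. Intrinsics from a single focal length
# (e.g., recovered from EXIF metadata) are a simplifying assumption.
import cv2
import numpy as np

def register_anchor(image_points, bim_points, focal_px, image_size):
    """image_points: (N, 2) indicated pixels in the anchor image.
    bim_points: (N, 3) corresponding 3D points in BIM coordinates.
    focal_px: focal length in pixels.
    image_size: (width, height) of the anchor image.
    Returns the anchor camera's rotation matrix and translation vector.
    """
    w, h = image_size
    # Pinhole intrinsics with the principal point at the image center.
    K = np.array([[focal_px, 0.0, w / 2.0],
                  [0.0, focal_px, h / 2.0],
                  [0.0, 0.0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(bim_points.astype(np.float64),
                                  image_points.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)  # Rotation vector -> 3x3 rotation matrix.
    return R, tvec
```

With the anchor registered, the remaining images can then be related to the anchor camera (and hence to the BIM) through the homography transformation the abstract describes.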
-
Patent number: 9786253
Abstract: A method for generating additional views from a stereo image defined by a left eye image and a right eye image. The method includes receiving as input at least one stereo image. The method includes, for each of the stereo images, generating a plurality of additional images. The method includes interlacing the additional images for each of the stereo images to generate three dimensional (3D) content made up of multiple views of the scenes presented by each of the stereo images. The interlacing may be performed such that the generated 3D content is displayable on a 3D display device including a barrier grid or a lenticular lens array on the monitor screen. The additional images may include 12 to 40 or more frames providing views of the one or more scenes from viewing angles differing from those provided by the left and right cameras used to generate the original stereo image.
Type: Grant
Filed: January 25, 2013
Date of Patent: October 10, 2017
Assignee: LUMENCO, LLC
Inventors: Mark A. Raymond, Hector Andres Porras Soto, Kevin Karsch
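Generating and interlacing the additional views can be sketched in two steps: synthesize intermediate views by disparity-weighted shifting of the left image, then interleave the views column by column to match a vertical lenticular or barrier pitch. A precomputed disparity map is assumed, hole filling is omitted, and all names are illustrative.

```python
# Sketch: intermediate-view synthesis from a stereo pair plus column
# interlacing for a lenticular or barrier-grid display. A precomputed
# (H, W) disparity map is an assumption; hole filling is omitted.
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp the left image by alpha * disparity. alpha=0 reproduces
    the left view; alpha=1 approximates the right view."""
    h, w = disparity.shape
    view = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        # Shift each pixel a fraction of its disparity, clamped to the row.
        tx = np.clip((xs - alpha * disparity[y]).astype(int), 0, w - 1)
        view[y, tx] = left[y, xs]
    return view

def interlace(views):
    """Interleave N views column by column: output column c is taken from
    view c % N, matching a vertical lens pitch of N pixel columns."""
    out = np.zeros_like(views[0])
    for c in range(out.shape[1]):
        out[:, c] = views[c % len(views)][:, c]
    return out

# Example: 24 views spanning the left-to-right baseline, then one frame.
# views = [synthesize_view(left, disparity, a) for a in np.linspace(0, 1, 24)]
# frame = interlace(views)
```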
-
Patent number: 9613454
Abstract: Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
Type: Grant
Filed: February 25, 2016
Date of Patent: April 4, 2017
Assignee: Adobe Systems Incorporated
Inventors: Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin
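Two forward-model pieces of the described pipeline are easy to sketch: recovering surface normals from the estimated dense depth, and shading them together with the diffuse reflectance under a light. The single directional light and the normal approximation below are simplifying assumptions; the patent's illumination inference inverts a richer model.

```python
# Sketch: forward model linking the estimated depth and reflectance to a
# predicted image. Illumination inference fits such a model to the photo;
# a single directional light is a simplifying assumption here.
import numpy as np

def normals_from_depth(depth):
    """depth: (H, W) dense depth estimate. Returns (H, W, 3) unit normals
    using the common approximation n ~ (-dz/dx, -dz/dy, 1)."""
    dzdy, dzdx = np.gradient(depth.astype(np.float64))
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth, dtype=np.float64)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def lambertian_image(reflectance, normals, light_dir):
    """Predicted image = diffuse reflectance * shading under one
    directional light. reflectance: (H, W, 3); normals: (H, W, 3)."""
    l = np.asarray(light_dir, dtype=np.float64)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)  # Clamp back-facing to 0.
    return reflectance * shading[..., None]
```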
-
Patent number: 9471967
Abstract: A fragment is relit for insertion into a target scene of an image by obtaining a fragment model for the fragment. A set of detail maps for the fragment model are generated, each of which encodes fine-scale shading effects from the surface detail of the fragment. A target scene model is obtained for the target scene, and the fragment model is inserted into the target scene model. The target scene model with inserted fragment model is rendered, and a composited target scene is generated. A modified target scene is generated by combining the composited target scene and the set of detail maps. Weights assigned to the different detail maps can be changed by the user, allowing the modified target scene to be readily altered without re-rendering the target scene model with the inserted fragment model.
Type: Grant
Filed: July 20, 2012
Date of Patent: October 18, 2016
Assignee: The Board of Trustees of the University of Illinois
Inventors: Kevin Karsch, Zicheng Liao, David Forsyth
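The key property claimed is that re-weighting the detail maps does not require re-rendering. The abstract does not give the combination operator; the sketch below assumes a simple additive blend of the composited scene with the weighted detail layers.

```python
# Sketch: cheap re-weighting of detail maps without re-rendering. An
# additive blend is an assumed combination operator, for illustration.
import numpy as np

def apply_detail_maps(composited, detail_maps, weights):
    """composited: (H, W, 3) composited target scene, values in [0, 1].
    detail_maps: list of (H, W, 3) fine-scale shading layers.
    weights: one user-adjustable scalar per detail map.
    """
    out = composited.astype(np.float64).copy()
    for dmap, w in zip(detail_maps, weights):
        out += w * dmap  # Each layer contributes in proportion to its weight.
    return np.clip(out, 0.0, 1.0)
```

Because only this blend runs when a weight slider moves, the user sees the modified target scene update immediately while the expensive render of the scene model with the inserted fragment stays untouched.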
-
Publication number: 20160171755
Abstract: Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
Type: Application
Filed: February 25, 2016
Publication date: June 16, 2016
Applicant: Adobe Systems Incorporated
Inventors: Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin
-
Patent number: 9330500
Abstract: An image into which one or more objects are to be inserted is obtained. Based on the image, both a 3-dimensional (3D) representation and a light model of the scene in the image are generated. One or more objects are added to the 3D representation of the scene. The 3D representation of the scene is rendered, based on the light model, to generate a modified image that is the obtained image modified to include the one or more objects.
Type: Grant
Filed: December 8, 2011
Date of Patent: May 3, 2016
Assignee: The Board of Trustees of the University of Illinois
Inventors: Kevin Karsch, Varsha Chandrashekhar Hedau, David A. Forsyth, Derek Hoiem
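Merging the rendered objects back into the obtained image is typically done with differential rendering: render the scene representation with and without the inserted objects, and add the difference (shadows, bounce light) to the photograph. Whether the patent uses exactly this operator is an assumption; the sketch and its names are illustrative.

```python
# Sketch: differential-rendering composite, a standard way to merge a
# rendered insertion (with its shadows and interreflections) back into
# the source photo. All arrays are float images in [0, 1].
import numpy as np

def differential_composite(photo, render_with, render_without, object_mask):
    """photo: original image. render_with / render_without: renders of the
    3D scene representation under the light model, with and without the
    inserted objects. object_mask: (H, W), 1 where an inserted object
    covers the pixel, else 0.
    """
    m = object_mask.astype(np.float64)[..., None]
    # Object pixels come straight from the render; everywhere else, add
    # the render's change (shadows, bounce light) onto the original photo.
    return np.clip(m * render_with +
                   (1.0 - m) * (photo + render_with - render_without),
                   0.0, 1.0)
```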
-
Patent number: 9299188
Abstract: Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
Type: Grant
Filed: August 8, 2013
Date of Patent: March 29, 2016
Assignee: Adobe Systems Incorporated
Inventors: Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin
-
Publication number: 20150310135
Abstract: A system and method are disclosed for, using structure-from-motion techniques, projecting a building information model (BIM) into images from photographs taken of a construction site, to generate a 3D point cloud model using the BIM that, when combined with scheduling constraints, facilitates 4D visualizations and progress monitoring. One of the images acts as an anchor image. Indications are received of first points in the anchor image that correspond to second points in the BIM. Calibration information for an anchor camera is calculated based on the indications and on metadata extracted from the anchor image, to register the anchor image in relation to the BIM. A homography transformation is determined between the images and the anchor camera using the calibration information, to register the rest of the images with the BIM, where some of those images are taken from different cameras and from different angles to the construction site.
Type: Application
Filed: April 17, 2015
Publication date: October 29, 2015
Inventors: David Alexander Forsyth, Kevin Karsch, Mani Golparvar-Fard
-
Publication number: 20150302563
Abstract: A fragment is relit for insertion into a target scene of an image by obtaining a fragment model for the fragment. A set of detail maps for the fragment model are generated, each of which encodes fine-scale shading effects from the surface detail of the fragment. A target scene model is obtained for the target scene, and the fragment model is inserted into the target scene model. The target scene model with inserted fragment model is rendered, and a composited target scene is generated. A modified target scene is generated by combining the composited target scene and the set of detail maps. Weights assigned to the different detail maps can be changed by the user, allowing the modified target scene to be readily altered without re-rendering the target scene model with the inserted fragment model.
Type: Application
Filed: July 20, 2012
Publication date: October 22, 2015
Applicant: The Board of Trustees of the University of Illinois
Inventors: Kevin Karsch, Zicheng Liao, David Forsyth