Patents by Inventor Daniel Prochazka

Daniel Prochazka has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12086376
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device. (A simplified code sketch of this tag-anchoring idea appears after the listing below.)
    Type: Grant
    Filed: August 16, 2022
    Date of Patent: September 10, 2024
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Publication number: 20230079307
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Application
    Filed: August 16, 2022
    Publication date: March 16, 2023
    Applicant: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Patent number: 11422671
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: August 23, 2022
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Publication number: 20210064217
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Application
    Filed: September 15, 2020
    Publication date: March 4, 2021
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Patent number: 10775959
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: September 15, 2020
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Publication number: 20190050137
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Application
    Filed: October 17, 2018
    Publication date: February 14, 2019
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Patent number: 10139985
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: November 27, 2018
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Publication number: 20180143756
    Abstract: This application generally relates to defining, displaying, and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model. The tags are respectively represented by tag icons that are spatially aligned with the defined locations of the model as included in different representations of the model rendered via an interface of a device, where the different representations correspond to different perspectives of the model. Selecting a tag icon causes the tag associated with it to be rendered at the device.
    Type: Application
    Filed: September 21, 2016
    Publication date: May 24, 2018
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Publication number: 20140119642
    Abstract: Systems for segmenting human hair and faces in color images are disclosed, with methods and processes for making and using the same. The image may be cropped around the face area and roughly centered. Optionally, the illumination environment of the input image may be determined; if the image is taken in a dark environment, or the contrast between the face and hair regions and the background is low, extra image enhancement may be applied. Sub-processes for identifying the pose angle and chin contours may be performed. A preliminary mask for the face can be represented by using multiple cues, such as skin color, pose angle, face shape, and contour information. An initial hair mask may be created by using the abovementioned cues plus texture and hair shape information. The preliminary face and hair masks are then globally refined using multiple techniques. (A simplified code sketch of this segmentation pipeline appears after the listing below.)
    Type: Application
    Filed: December 20, 2013
    Publication date: May 1, 2014
    Applicant: FLASHFOTO, INC.
    Inventors: Kuang-chih Lee, Robinson Piramuthu, Katharine Ip, Daniel Prochazka
  • Patent number: 8638993
    Abstract: Systems for segmenting human hair and faces in color images are disclosed, with methods and processes for making and using the same. The image may be cropped around the face area and roughly centered. Optionally, the illumination environment of the input image may be determined; if the image is taken in a dark environment, or the contrast between the face and hair regions and the background is low, extra image enhancement may be applied. Sub-processes for identifying the pose angle and chin contours may be performed. A preliminary mask for the face can be represented by using multiple cues, such as skin color, pose angle, face shape, and contour information. An initial hair mask may be created by using the abovementioned cues plus texture and hair shape information. The preliminary face and hair masks are then globally refined using multiple techniques.
    Type: Grant
    Filed: April 5, 2011
    Date of Patent: January 28, 2014
    Assignee: FlashFoto, Inc.
    Inventors: Kuang-chih Lee, Robinson Piramuthu, Katharine Ip, Daniel Prochazka
  • Patent number: 8385609
    Abstract: Systems, methods, and computer-readable media for forming a mugshot from a digital color image are provided, in which a dominant face is determined using the digital color image. Person segmentation is also performed using the digital color image. An image and a mask are cropped based on the dominant face, thereby forming a cropped image. Rough segmentation is performed on the cropped image, and a mask is averaged in projection space based on the cropped image. The mask is then refined and prepared for the mugshot. (A simplified code sketch of the face-centered cropping step appears after the listing below.)
    Type: Grant
    Filed: October 21, 2009
    Date of Patent: February 26, 2013
    Assignee: FlashFoto, Inc.
    Inventors: Robinson Piramuthu, Daniel Prochazka
  • Publication number: 20110305397
    Abstract: Systems for retargeting an image utilizing a saliency map are disclosed, with methods and processes for making and using the same. To create a contextually personalized presentation, an image may be presented within a target area, and the desired location within the target area may be determined for displaying the salient portions of the image. To expose the image optimally, it may need to be transformed or reconfigured for proper composition. Aspect ratios of images may be altered while preserving salient regions and without distorting the image. A quality function is presented to rate the target areas available for personalized presentations. (A simplified code sketch of saliency-preserving cropping appears after the listing below.)
    Type: Application
    Filed: March 8, 2011
    Publication date: December 15, 2011
    Inventors: Robinson Piramuthu, Daniel Prochazka
  • Publication number: 20110299776
    Abstract: Systems for segmenting human hair and faces in color images are disclosed, with methods and processes for making and using the same. The image may be cropped around the face area and roughly centered. Optionally, the illumination environment of the input image may be determined; if the image is taken in a dark environment, or the contrast between the face and hair regions and the background is low, extra image enhancement may be applied. Sub-processes for identifying the pose angle and chin contours may be performed. A preliminary mask for the face can be represented by using multiple cues, such as skin color, pose angle, face shape, and contour information. An initial hair mask may be created by using the abovementioned cues plus texture and hair shape information. The preliminary face and hair masks are then globally refined using multiple techniques.
    Type: Application
    Filed: April 5, 2011
    Publication date: December 8, 2011
    Inventors: Kuang-chih Lee, Robinson Piramuthu, Katharine Ip, Daniel Prochazka
  • Publication number: 20100158325
    Abstract: Systems, methods, and computer-readable media for forming a mugshot from a digital color image are provided, in which a dominant face is determined using the digital color image. Person segmentation is also performed using the digital color image. An image and a mask are cropped based on the dominant face, thereby forming a cropped image. Rough segmentation is performed on the cropped image, and a mask is averaged in projection space based on the cropped image. The mask is then refined and prepared for the mugshot.
    Type: Application
    Filed: October 21, 2009
    Publication date: June 24, 2010
    Inventors: Robinson Piramuthu, Daniel Prochazka
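
Illustrative code sketches

The sketches below are minimal, hypothetical illustrations of the general ideas described in the abstracts above. The names, data structures, and heuristics are editorial assumptions and do not reproduce the patented implementations.

The tag-related entries describe tags anchored to defined 3D locations so that their icons stay spatially aligned across different rendered perspectives of the model, and so that selecting an icon reveals the tag content. A minimal sketch of that idea, assuming a conventional view/projection matrix pipeline and NumPy:

    # Hypothetical sketch: tags anchored at 3D model positions are projected
    # into whatever perspective of the model is currently rendered, so each
    # tag icon stays spatially aligned with its defined location.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Tag:
        label: str            # content rendered when the tag icon is selected
        position: np.ndarray  # 3D anchor point in model coordinates (x, y, z)

    def project_to_screen(point, view, proj, width, height):
        """Project a 3D model-space point into 2D pixel coordinates."""
        p = np.append(point, 1.0)      # homogeneous coordinates
        clip = proj @ (view @ p)       # model space -> camera space -> clip space
        if clip[3] <= 0:               # behind the camera: icon not visible
            return None
        ndc = clip[:3] / clip[3]       # normalized device coordinates
        x = (ndc[0] * 0.5 + 0.5) * width
        y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height
        return x, y

    def place_tag_icons(tags, view, proj, width=1280, height=720):
        """Return (tag, screen_xy) pairs for tag icons visible in the current view."""
        placed = []
        for tag in tags:
            xy = project_to_screen(tag.position, view, proj, width, height)
            if xy is not None:
                placed.append((tag, xy))
        return placed

Re-running place_tag_icons whenever the camera changes keeps each icon over the same model location from any perspective; a click handler would then map the selected icon back to its tag and render its label.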
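
The segmentation entries describe cropping around the face, optionally enhancing dark or low-contrast input, and building preliminary face and hair masks from multiple cues before refinement. A minimal sketch of those first stages, assuming OpenCV and a skin-color cue in the YCrCb color space; the thresholds and the hair-band prior are illustrative assumptions, not the patented cues:

    # Hypothetical sketch: contrast check and enhancement, a rough skin-color
    # face mask, and a crude hair prior above the detected face box.
    import cv2
    import numpy as np

    def is_low_contrast(bgr, threshold=0.35):
        """Heuristic: treat the image as dark or low contrast if the 5th-95th
        percentile spread of its gray values is small."""
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        lo, hi = np.percentile(gray, [5, 95])
        return (hi - lo) / 255.0 < threshold

    def enhance(bgr):
        """Simple enhancement: histogram-equalize the luminance channel."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
        return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    def preliminary_masks(bgr, face_box):
        """Return rough (face_mask, hair_mask) from a skin-color cue and the
        region above the face box; real systems add pose, shape, contour, and
        texture cues and then refine both masks jointly."""
        if is_low_contrast(bgr):
            bgr = enhance(bgr)
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        # Commonly cited approximate skin range in the Cr/Cb channels.
        face_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        face_mask = cv2.morphologyEx(face_mask, cv2.MORPH_OPEN,
                                     np.ones((5, 5), np.uint8))
        x, y, w, h = face_box
        hair_mask = np.zeros(face_mask.shape, np.uint8)
        hair_mask[max(y - h // 2, 0):y + h // 4,
                  max(x - w // 4, 0):x + w + w // 4] = 255
        hair_mask[face_mask > 0] = 0   # hair prior excludes skin pixels
        return face_mask, hair_mask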
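
The mugshot entries describe finding a dominant face, segmenting the person, and cropping both the image and the segmentation mask around that face before the mask is refined. A minimal sketch of the face-centered cropping step, assuming the dominant face is already given as a bounding box; the margins are arbitrary illustrative choices:

    # Hypothetical sketch: crop an image and its person-segmentation mask
    # around the dominant face, leaving head-and-shoulders style margins.
    import numpy as np

    def crop_around_face(image, mask, face_box, side=0.75, top=0.75, bottom=2.0):
        """Return (cropped_image, cropped_mask) around face_box = (x, y, w, h).
        Margins are expressed as fractions of the face size."""
        x, y, w, h = face_box
        x0 = max(int(x - side * w), 0)
        x1 = min(int(x + w + side * w), image.shape[1])
        y0 = max(int(y - top * h), 0)
        y1 = min(int(y + h + bottom * h), image.shape[0])
        return image[y0:y1, x0:x1], mask[y0:y1, x0:x1]

    # Example with synthetic data: a 480x640 image, a matching person mask,
    # and a face box near the top-center of the frame.
    if __name__ == "__main__":
        img = np.zeros((480, 640, 3), np.uint8)
        person = np.zeros((480, 640), np.uint8)
        crop_img, crop_mask = crop_around_face(img, person, (280, 80, 90, 110))
        print(crop_img.shape, crop_mask.shape)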
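
The retargeting entry describes changing an image's aspect ratio while preserving its salient regions and rating candidate placements with a quality function. A minimal sketch under those assumptions: slide a crop window of the target aspect ratio over a precomputed saliency map and keep the window that captures the most saliency, with the captured fraction serving as a crude stand-in for the quality function:

    # Hypothetical sketch: saliency-preserving crop to a target aspect ratio.
    import numpy as np

    def best_crop(saliency, target_w, target_h, stride=8):
        """Return (y0, x0, y1, x1, score) for the crop window of the target
        aspect ratio that contains the largest share of total saliency."""
        H, W = saliency.shape
        scale = min(W / target_w, H / target_h)
        win_w, win_h = int(target_w * scale), int(target_h * scale)
        # Zero-padded integral image: integral[i, j] = saliency[:i, :j].sum()
        integral = np.pad(saliency.astype(np.float64),
                          ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        total = integral[-1, -1] or 1.0
        best = (0, 0, win_h, win_w, -1.0)
        for y0 in range(0, H - win_h + 1, stride):
            for x0 in range(0, W - win_w + 1, stride):
                y1, x1 = y0 + win_h, x0 + win_w
                s = (integral[y1, x1] - integral[y0, x1]
                     - integral[y1, x0] + integral[y0, x0])
                score = s / total
                if score > best[4]:
                    best = (y0, x0, y1, x1, score)
        return best

    # Example: retarget a landscape saliency map to a square presentation area.
    if __name__ == "__main__":
        sal = np.random.rand(300, 500)
        print(best_crop(sal, target_w=1, target_h=1))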