Patents by Inventor Noah Brickman

Noah Brickman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10636184
    Abstract: Methods and systems related to image segmentation are disclosed. In some examples, a computer system obtains segmentation parameters based on a selection of a region of a displayed image. The selection of the region is associated with a select signal generated by an input device. In response to obtaining the segmentation parameters, the computer system processes the image based on the segmentation parameters. The computer system further adjusts the segmentation parameters based on one or more move signals generated by the input device. Each move signal is associated with movement of a representation of the input device within the image. The computer system processes the image based on the one or more adjusted segmentation parameters and displays an image of the selected region based on the processing of the image using the adjusted segmentation parameters.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: April 28, 2020
    Assignee: FOVIA, INC.
    Inventors: Kevin Kreeger, Georgiy Buyanovskiy, David Wilkins, Noah Brickman
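    The interaction this abstract describes — a click that seeds a segmentation, followed by pointer moves that tune the segmentation parameters — can be sketched roughly as follows. This is an illustrative assumption, not the patented method: the intensity-tolerance model, the function names, and the sensitivity constant are all invented for the example.

    ```python
    import numpy as np

    def segment_region(image, seed, tolerance):
        """Select all pixels whose intensity lies within `tolerance`
        of the intensity at the clicked `seed` location."""
        seed_value = float(image[seed])
        return np.abs(image.astype(float) - seed_value) <= tolerance

    def adjust_tolerance(tolerance, move_delta, sensitivity=0.5):
        """Map a pointer move signal (e.g. drag distance in pixels)
        to an updated tolerance, clamped at zero."""
        return max(0.0, tolerance + sensitivity * move_delta)

    # A click at (0, 0) selects the dark region; dragging then widens it.
    image = np.array([[10, 12, 50],
                      [11, 13, 52],
                      [48, 51, 49]], dtype=np.uint8)
    mask = segment_region(image, (0, 0), tolerance=5)
    wider = segment_region(image, (0, 0), adjust_tolerance(5, 80))
    ```

    The point of the sketch is the feedback loop: each move signal re-parameterizes the segmentation, and the display is refreshed from the newly processed mask.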
  • Publication number: 20170109915
    Abstract: Methods and systems related to image segmentation are disclosed. In some examples, a computer system obtains segmentation parameters based on a selection of a region of a displayed image. The selection of the region is associated with a select signal generated by an input device. In response to obtaining the segmentation parameters, the computer system processes the image based on the segmentation parameters. The computer system further adjusts the segmentation parameters based on one or more move signals generated by the input device. Each move signal is associated with movement of a representation of the input device within the image. The computer system processes the image based on the one or more adjusted segmentation parameters and displays an image of the selected region based on the processing of the image using the adjusted segmentation parameters.
    Type: Application
    Filed: September 28, 2016
    Publication date: April 20, 2017
    Inventors: Kevin Kreeger, Georgiy Buyanovskiy, David Wilkins, Noah Brickman
  • Patent number: 8040361
    Abstract: Systems, methods and structures for combining virtual reality and real-time environment by combining captured real-time video data and real-time 3D environment renderings to create a fused, that is, combined environment, including capturing video imagery in RGB or HSV color coordinate systems and processing it to determine which areas should be made transparent, or have other color modifications made, based on sensed cultural features, electromagnetic spectrum values, and/or sensor line-of-sight, wherein the sensed features can also include electromagnetic radiation characteristics such as color, infra-red, ultra-violet light values, cultural features can include patterns of these characteristics, such as object recognition using edge detection, and whereby the processed image is then overlaid on, and fused into a 3D environment to combine the two data sources into a single scene to thereby create an effect whereby a user can look through predesignated areas or “windows” in the video image to see into a 3D simulated world.
    Type: Grant
    Filed: January 20, 2009
    Date of Patent: October 18, 2011
    Assignee: Systems Technology, Inc.
    Inventors: Edward N. Bachelder, Noah Brickman
  • Publication number: 20100245387
    Abstract: Systems, methods and structures for combining virtual reality and real-time environment by combining captured real-time video data and real-time 3D environment renderings to create a fused, that is, combined environment, including capturing video imagery in RGB or HSV color coordinate systems and processing it to determine which areas should be made transparent, or have other color modifications made, based on sensed cultural features, electromagnetic spectrum values, and/or sensor line-of-sight, wherein the sensed features can also include electromagnetic radiation characteristics such as color, infra-red, ultra-violet light values, cultural features can include patterns of these characteristics, such as object recognition using edge detection, and whereby the processed image is then overlaid on, and fused into a 3D environment to combine the two data sources into a single scene to thereby create an effect whereby a user can look through predesignated areas or “windows” in the video image to see into a 3D simulated world.
    Type: Application
    Filed: January 20, 2009
    Publication date: September 30, 2010
    Inventors: Edward N. Bachelder, Noah Brickman
  • Publication number: 20100182340
    Abstract: Systems, methods and structures for combining virtual reality and real-time environment by combining captured real-time video data and real-time 3D environment renderings to create a fused, that is, combined environment, including capturing video imagery in RGB or HSV color coordinate systems and processing it to determine which areas should be made transparent, or have other color modifications made, based on sensed cultural features, electromagnetic spectrum values, and/or sensor line-of-sight, wherein the sensed features can also include electromagnetic radiation characteristics such as color, infra-red, ultra-violet light values, cultural features can include patterns of these characteristics, such as object recognition using edge detection, and whereby the processed image is then overlaid on, and fused into a 3D environment to combine the two data sources into a single scene to thereby create an effect whereby a user can look through predesignated areas or “windows” in the video image to see into a 3D simulated world.
    Type: Application
    Filed: January 19, 2009
    Publication date: July 22, 2010
    Inventors: Edward N. Bachelder, Noah Brickman
  • Patent number: 7479967
    Abstract: The present invention relates to a method and an apparatus for combining virtual reality and real-time environment. The present invention provides a system that combines captured real-time video data and real-time 3D environment rendering to create a fused (combined) environment. The system captures video imagery and processes it to determine which areas should be made transparent (or have other color modifications made), based on sensed cultural features and/or sensor line-of-sight. Sensed features can include electromagnetic radiation characteristics (i.e. color, infra-red, ultra-violet light). Cultural features can include patterns of these characteristics (i.e. object recognition using edge detection). This processed image is then overlaid on a 3D environment to combine the two data sources into a single scene. This creates an effect where a user can look through ‘windows’ in the video image into a 3D simulated world, and/or see other enhanced or reprocessed features of the captured image.
    Type: Grant
    Filed: April 11, 2005
    Date of Patent: January 20, 2009
    Assignee: Systems Technology, Inc.
    Inventors: Edward N. Bachelder, Noah Brickman
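    The “windows” effect this abstract describes — keyed regions of the video becoming see-through onto a rendered 3D scene — resembles chroma-key compositing. A minimal sketch under that assumption; the function names, the per-pixel RGB distance test, and the data values are illustrative, not the patented method.

    ```python
    import numpy as np

    def fuse_video_with_render(video_rgb, render_rgb, key_color, tolerance):
        """Make video pixels near `key_color` transparent 'windows' and
        fill them with the rendered 3D scene; other pixels keep the video."""
        # Sum of absolute channel differences from the key color.
        diff = np.abs(video_rgb.astype(int) - np.array(key_color)).sum(axis=-1)
        window = diff <= tolerance           # True where the video is keyed out
        fused = video_rgb.copy()
        fused[window] = render_rgb[window]   # replace keyed pixels with render
        return fused

    # 1x2 frame: the left pixel matches the key color, the right does not.
    video = np.array([[[0, 255, 0], [200, 30, 30]]], dtype=np.uint8)
    render = np.array([[[5, 5, 5], [9, 9, 9]]], dtype=np.uint8)
    out = fuse_video_with_render(video, render, key_color=(0, 255, 0), tolerance=10)
    ```

    The abstract also mentions pattern-based cultural features (e.g. edge detection) as an alternative to a raw color test; the boolean `window` mask is the piece such a detector would replace.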
  • Publication number: 20070035561
    Abstract: The present invention relates to a method and an apparatus for combining virtual reality and real-time environment. The present invention provides a system that combines captured real-time video data and real-time 3D environment rendering to create a fused (combined) environment. The system captures video imagery and processes it to determine which areas should be made transparent (or have other color modifications made), based on sensed cultural features and/or sensor line-of-sight. Sensed features can include electromagnetic radiation characteristics (i.e. color, infra-red, ultra-violet light). Cultural features can include patterns of these characteristics (i.e. object recognition using edge detection). This processed image is then overlaid on a 3D environment to combine the two data sources into a single scene. This creates an effect where a user can look through ‘windows’ in the video image into a 3D simulated world, and/or see other enhanced or reprocessed features of the captured image.
    Type: Application
    Filed: April 11, 2005
    Publication date: February 15, 2007
    Inventors: Edward Bachelder, Noah Brickman