Patents by Inventor Ayon Sen

Ayon Sen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240161341
    Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of a same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
    Type: Application
    Filed: February 8, 2023
    Publication date: May 16, 2024
    Inventors: Ayon Sen, Gang Pan, Cheng-Chieh Yang, Yue Wu
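    A minimal sketch of the geometric residual described in the abstract above is given below, assuming a pinhole camera model, 4x4 LiDAR-to-camera transforms, and a surfel represented by a point and a normal. The function names, coordinate frames, and the ray/plane-intersection form of the residual are illustrative assumptions, not the patented formulation.

      # Hypothetical sketch of a surfel-based geometric residual for
      # camera/LiDAR extrinsic calibration; names, frames, and the pinhole
      # model are assumptions for illustration, not the patent's method.
      import numpy as np

      def ray_plane_intersection(origin, direction, plane_point, plane_normal):
          """Intersect a viewing ray with the (locally planar) surfel."""
          denom = float(direction @ plane_normal)
          if abs(denom) < 1e-9:
              return None  # ray is (nearly) parallel to the surfel plane
          depth = float((plane_point - origin) @ plane_normal) / denom
          return origin + depth * direction

      def geometric_residual(px_a, px_b, K, T_lidar_to_cam_a, T_lidar_to_cam_b,
                             surfel_point, surfel_normal):
          """Reprojection error of one feature correspondence (px_a <-> px_b),
          assuming both pixels observe the same point on a LiDAR-derived
          surfel.  The 4x4 LiDAR-to-camera transforms hold the calibration
          parameters being optimized."""
          # Express the surfel in camera-A coordinates.
          p_a = (T_lidar_to_cam_a @ np.append(surfel_point, 1.0))[:3]
          n_a = T_lidar_to_cam_a[:3, :3] @ surfel_normal

          # Back-project pixel a to a viewing ray and intersect the surfel:
          # the 3D point comes from LiDAR geometry, not from estimation.
          ray = np.linalg.inv(K) @ np.array([px_a[0], px_a[1], 1.0])
          X_a = ray_plane_intersection(np.zeros(3), ray / np.linalg.norm(ray),
                                       p_a, n_a)
          if X_a is None:
              return np.zeros(2)

          # Move the intersection point into camera-B coordinates and reproject.
          T_a_to_b = T_lidar_to_cam_b @ np.linalg.inv(T_lidar_to_cam_a)
          X_b = (T_a_to_b @ np.append(X_a, 1.0))[:3]
          proj = K @ X_b
          return proj[:2] / proj[2] - np.asarray(px_b, dtype=float)

    Because the 3D point is obtained by intersecting the viewing ray with the LiDAR surfel rather than being estimated, the optimization over the transform parameters carries no per-feature structure variables, consistent with the abstract's note that structure estimation is removed from the problem.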
  • Publication number: 20240161342
    Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of a same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
    Type: Application
    Filed: February 8, 2023
    Publication date: May 16, 2024
    Inventors: Ayon Sen, Gang Pan, Cheng-Chieh Yang, Yue Wu
  • Patent number: 11474770
    Abstract: A multi-view (MV) network bridge device includes an upstream interface, multiple downstream interfaces, and a controller. The controller receives, from the upstream interface, a specification of one or more viewing zones and a specification of one or more content streams. Also, the controller sends, on at least one of the downstream interfaces, at least a subset of each of the specifications received from the upstream interface. The upstream interface may be coupled to a computer that provides the specifications. Each of the downstream interfaces may be coupled to a different MV display panel. One of the downstream interfaces may be coupled to an MV display panel that is coupled to another MV display panel. One of the downstream interfaces may be coupled to an upstream interface of another MV network bridge device having a downstream interface coupled to an MV display panel.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: October 18, 2022
    Assignee: Misapplied Sciences, Inc.
    Inventors: Albert Han Ng, David Steven Thompson, David Randall Bonds, Ayon Sen
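    As a rough illustration of the forwarding behavior this abstract describes, the sketch below models a controller that receives viewing-zone and content-stream specifications on its upstream interface and relays to each downstream interface only the subset relevant to that panel. The field names and the filter-by-panel-id rule are assumptions made for illustration; the patent text does not specify them.

      # Hypothetical sketch of the bridge's forwarding step; data layouts and
      # the subset rule are illustrative assumptions, not the patented design.
      from dataclasses import dataclass, field

      @dataclass
      class ViewingZone:
          zone_id: str
          panel_ids: set               # MV display panels that render this zone

      @dataclass
      class ContentStream:
          stream_id: str
          zone_id: str                 # viewing zone this content is addressed to
          payload: bytes = b""

      @dataclass
      class DownstreamInterface:
          panel_id: str
          outbox: list = field(default_factory=list)

          def send(self, zones, streams):
              self.outbox.append((zones, streams))

      class MVNetworkBridge:
          def __init__(self, downstream_interfaces):
              self.downstream = downstream_interfaces

          def on_upstream(self, zones, streams):
              """Forward to each downstream interface only the zones that
              mention its panel, plus the streams addressed to those zones."""
              for iface in self.downstream:
                  relevant_zones = [z for z in zones
                                    if iface.panel_id in z.panel_ids]
                  zone_ids = {z.zone_id for z in relevant_zones}
                  relevant_streams = [s for s in streams if s.zone_id in zone_ids]
                  iface.send(relevant_zones, relevant_streams)

      # Example: a zone addressed to panel-1 only; panel-2 receives nothing.
      bridge = MVNetworkBridge([DownstreamInterface("panel-1"),
                                DownstreamInterface("panel-2")])
      bridge.on_upstream([ViewingZone("zone-A", {"panel-1"})],
                         [ContentStream("stream-1", "zone-A")])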
  • Patent number: 11315526
    Abstract: A multi-view (MV) transportation hub information system is provided, which includes: an MV display including one or more multi-view (MV) pixels, wherein each MV pixel is configured to emit beamlets in different directions; a sensing system configured to detect a first location of a first blob and a second location of a second blob; an input node configured to receive a first attribute of a first viewer and a second attribute of a second viewer; and a system controller configured to perform user tagging to tag the first blob with the first attribute and to tag the second blob with the second attribute. The system controller controls the MV pixels to project a first image based on the first attribute to the first viewer tagged with the first blob, and to project a second image based on the second attribute to the second viewer tagged with the second blob.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: April 26, 2022
    Assignee: Misapplied Sciences, Inc.
    Inventors: Albert Han Ng, David Steven Thompson, David Randall Bonds, Ayon Sen
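    The user-tagging step in this abstract can be pictured with the short sketch below: an attribute arriving at an input node (for example, a ticket scan) is associated with the sensed blob nearest the point of capture, and content is then chosen per tagged blob. The nearest-blob rule and the content lookup are illustrative assumptions only.

      # Hypothetical sketch of blob tagging and per-viewer content selection;
      # the association rule and lookup are assumptions, not the patented method.
      import math

      class MVHubController:
          def __init__(self):
              self.tags = {}            # blob_id -> viewer attribute

          def tag_viewer(self, blobs, scan_location, attribute):
              """Tag the blob closest to where the attribute was captured."""
              blob_id = min(blobs,
                            key=lambda b: math.dist(blobs[b], scan_location))
              self.tags[blob_id] = attribute
              return blob_id

          def content_for_blob(self, blob_id):
              """Pick the image to steer toward this blob's direction."""
              attribute = self.tags.get(blob_id)
              if attribute is None:
                  return "generic wayfinding"
              return f"directions for {attribute}"

      controller = MVHubController()
      blobs = {"blob-1": (2.0, 5.0), "blob-2": (8.0, 1.0)}   # sensed locations
      controller.tag_viewer(blobs, scan_location=(2.2, 4.8),
                            attribute="flight UA 213")
      print(controller.content_for_blob("blob-1"))   # directions for flight UA 213
      print(controller.content_for_blob("blob-2"))   # generic wayfinding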
  • Publication number: 20210303250
    Abstract: A multi-view (MV) network bridge device includes an upstream interface, multiple downstream interfaces, and a controller. The controller receives, from the upstream interface, a specification of one or more viewing zones and a specification of one or more content streams. Also, the controller sends, on at least one of the downstream interfaces, at least a subset of each of the specifications received from the upstream interface. The upstream interface may be coupled to a computer that provides the specifications. Each of the downstream interfaces may be coupled to a different MV display panel. One of the downstream interfaces may be coupled to an MV display panel that is coupled to another MV display panel. One of the downstream interfaces may be coupled to an upstream interface of another MV network bridge device having a downstream interface coupled to an MV display panel.
    Type: Application
    Filed: March 25, 2021
    Publication date: September 30, 2021
    Inventors: Albert Han Ng, David Steven Thompson, David Randall Bonds, Ayon Sen
  • Publication number: 20210210053
    Abstract: A multi-view (MV) transportation hub information system is provided, which includes: an MV display including one or more multi-view (MV) pixels, wherein each MV pixel is configured to emit beamlets in different directions; a sensing system configured to detect a first location of a first blob and a second location of a second blob; an input node configured to receive a first attribute of a first viewer and a second attribute of a second viewer; and a system controller configured to perform user tagging to tag the first blob with the first attribute and to tag the second blob with the second attribute. The system controller controls the MV pixels to project a first image based on the first attribute to the first viewer tagged with the first blob, and to project a second image based on the second attribute to the second viewer tagged with the second blob.
    Type: Application
    Filed: January 6, 2021
    Publication date: July 8, 2021
    Inventors: Albert Han Ng, David Steven Thompson, David Randall Bonds, Ayon Sen
  • Publication number: 20200302643
    Abstract: Systems and methods for determining position and orientation of an object using captured images.
    Type: Application
    Filed: March 19, 2020
    Publication date: September 24, 2020
    Inventors: Ayon Sen, John Stephen Underkoffler
  • Patent number: 10509513
    Abstract: Systems and methods for tracking using a tracking camera. For each frame of image data generated by the tracking camera, each blob of the frame is determined. For each determined blob, a 2D image coordinate of a centroid of the blob is determined in a coordinate space of the frame. A tracking system processor generates a first tag identifier from the determined 2D image coordinates. The tracking system processor uses the first tag identifier to access stored first tag information that is stored in association with the first tag identifier. The tracking system processor determines an absolute 3-space position and orientation of the tracking camera by performing a motion tracking process using the determined 2D image coordinates and the accessed first tag information.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: December 17, 2019
    Assignee: Oblong Industries, Inc.
    Inventors: Ayon Sen, John Underkoffler, Barbara Brand, Michael Chin
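    The abstract above outlines a per-frame pipeline: blob centroids are extracted, a tag identifier is derived from them, stored tag information is looked up, and the camera's absolute position and orientation are solved. The sketch below walks through those stages; the quantization-based identifier and the use of OpenCV's PnP solver in place of the patent's motion tracking process are assumptions for illustration.

      # Hypothetical sketch of the centroid -> tag id -> pose pipeline; the
      # identifier scheme and the PnP solver are illustrative assumptions.
      import numpy as np
      import cv2

      def tag_id_from_centroids(centroids, grid=32):
          """Derive a repeatable identifier from the blob layout by
          normalizing the centroids and quantizing them onto a coarse grid."""
          pts = np.asarray(centroids, dtype=float)
          norm = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-9)
          cells = tuple(sorted(map(tuple,
                                   np.round(norm * (grid - 1)).astype(int))))
          return hash(cells)

      def camera_pose(centroids, tag_database, K):
          """Look up the stored 3D layout for this tag and solve for the
          camera's absolute position and orientation.  (A real system would
          also match each centroid to its model point before solving.)"""
          tag_points = tag_database[tag_id_from_centroids(centroids)]  # Nx3
          ok, rvec, tvec = cv2.solvePnP(
              np.asarray(tag_points, dtype=np.float32),
              np.asarray(centroids, dtype=np.float32),
              K, distCoeffs=None)
          if not ok:
              return None
          R, _ = cv2.Rodrigues(rvec)
          # Camera orientation and position expressed in the world frame.
          return R.T, (-R.T @ tvec).ravel()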
  • Publication number: 20180225007
    Abstract: Systems and methods for tracking using a tracking camera. For each frame of image data generated by the tracking camera, each blob of the frame is determined. For each determined blob, a 2D image coordinate of a centroid of the blob is determined in a coordinate space of the frame. A tracking system processor generates a first tag identifier from the determined 2D image coordinates. The tracking system processor uses the first tag identifier to access stored first tag information that is stored in association with the first tag identifier. The tracking system processor determines an absolute 3-space position and orientation of the tracking camera by performing a motion tracking process using the determined 2D image coordinates and the accessed first tag information.
    Type: Application
    Filed: February 6, 2018
    Publication date: August 9, 2018
    Inventors: Ayon Sen, John Underkoffler, Barbara Brand, Michael Chin