Patents by Inventor Bahram Javidi

Bahram Javidi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210256314
    Abstract: Described herein is an object recognition system for low-illumination conditions. A three-dimensional integral imaging (3D InIm) system is trained at low illumination levels to classify 3D objects captured under such conditions. Regions of interest are obtained by de-noising the 3D reconstructed image with total-variation (TV) regularization using an augmented Lagrange approach, followed by face detection. The regions of interest are then input to a trained convolutional neural network (CNN). The CNN is trained on TV-denoised 3D InIm reconstructions computed from elemental images captured under various low-illumination conditions with different signal-to-noise ratios, and it can effectively recognize the 3D reconstructed faces after TV-denoising. (A simplified code sketch of this pipeline follows this entry.)
    Type: Application
    Filed: August 7, 2019
    Publication date: August 19, 2021
    Inventors: Bahram Javidi, Adam Markman
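    A minimal sketch of the denoise-and-classify pipeline described above, assuming scikit-image's Chambolle TV solver as a stand-in for the augmented-Lagrange method named in the abstract and an illustrative small Keras CNN; the input shape, layer sizes, and class count below are placeholders, not values from the patent.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle   # TV denoising (Chambolle solver)
    from tensorflow.keras import layers, models

    def tv_denoise(reconstruction, weight=0.1):
        """TV-denoise a 3D InIm reconstruction before region-of-interest extraction."""
        return denoise_tv_chambolle(reconstruction, weight=weight)

    def build_cnn(input_shape=(64, 64, 1), n_classes=10):
        """Small illustrative CNN trained on TV-denoised regions of interest."""
        model = models.Sequential([
            layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Usage (with ROIs and labels prepared elsewhere, e.g. by a face detector):
    #   rois = np.stack([tv_denoise(r)[..., None] for r in raw_rois])
    #   cnn = build_cnn(); cnn.fit(rois, labels, epochs=10)
    ```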
  • Publication number: 20210006773
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Application
    Filed: September 16, 2020
    Publication date: January 7, 2021
    Inventors: Hong Hua, Bahram Javidi
  • Publication number: 20200380710
    Abstract: Systems and methods are disclosed for optical sensing, visualization, and detection in media (e.g., turbid media such as turbid water or fog, as well as non-turbid media). A light source and an image sensor array are positioned within the medium or external to it, with the light source inside the field of view of the image sensor array. Temporal optical signals are transmitted through the medium via the light source, and multiple-perspective video frames of the light propagating through the medium are acquired by the image sensor array. A three-dimensional image is reconstructed from each frame, and the reconstructed three-dimensional images are combined to form a three-dimensional video sequence. The transmitted optical signals are then detected from the three-dimensional video sequence by applying a multi-dimensional signal detection scheme. (A simplified reconstruction-and-detection sketch follows this entry.)
    Type: Application
    Filed: May 29, 2020
    Publication date: December 3, 2020
    Inventors: Bahram Javidi, Satoru Komatsu, Adam Markman
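    As a rough illustration of the processing chain in this abstract, the sketch below uses a shift-and-average computational integral-imaging reconstruction for each video frame and a simple correlation detector for the transmitted temporal code. The shift table, the per-frame summary statistic, and the template are assumptions for illustration; the patent's multi-dimensional detection scheme is not reproduced here.

    ```python
    import numpy as np

    def reconstruct_frame(elemental_images, shifts):
        """Shift-and-average reconstruction of one depth plane from the
        multi-perspective elemental images of a single video frame.
        shifts: per-image (dy, dx) pixel offsets for the chosen depth."""
        acc = np.zeros_like(elemental_images[0], dtype=float)
        for img, (dy, dx) in zip(elemental_images, shifts):
            acc += np.roll(img.astype(float), (dy, dx), axis=(0, 1))
        return acc / len(elemental_images)

    def detect_code(frame_traces, template):
        """Correlate a per-frame intensity trace (e.g., the mean of a source
        region in each reconstructed frame) against the known transmitted code."""
        trace = np.asarray(frame_traces, dtype=float)
        code = np.asarray(template, dtype=float)
        trace -= trace.mean()
        code -= code.mean()
        corr = np.correlate(trace, code, mode="valid")
        return corr, int(corr.argmax())   # correlation curve and best alignment
    ```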
  • Patent number: 10805598
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: October 13, 2020
    Assignees: THE ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIVERSITY OF ARIZONA, THE UNIVERSITY OF CONNECTICUT
    Inventors: Hong Hua, Bahram Javidi
  • Patent number: 10706258
    Abstract: Embodiments of the present disclosure include systems and methods for cell identification using a lens-less cell identification sensor. Randomly distributed cells are illuminated by a light source such as a laser, and the object beam is passed through one or more diffusers. Pattern recognition is applied to the captured optical signature to classify the cells: for example, features are extracted and a trained classifier assigns each signature to a cell class. The cell classes can be accurately identified even when multiple cells of the same class are inspected. (A simplified feature-and-classifier sketch follows this entry.)
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: July 7, 2020
    Assignee: University of Connecticut
    Inventors: Bahram Javidi, Adam Markman, Siddharth Rawat, Satoru Komatsu
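    The classification step described above (feature extraction followed by a trained classifier) can be sketched as below. The particular statistical features and the random-forest classifier are illustrative stand-ins, since the abstract does not fix a feature set or classifier type.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def signature_features(pattern):
        """Simple statistical descriptors of a captured lens-less optical
        signature (illustrative feature set, not taken from the patent)."""
        p = pattern.astype(float).ravel()
        mu, sigma = p.mean(), p.std()
        skew = ((p - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
        return np.array([mu, sigma, skew, np.median(p), p.max() - p.min()])

    def train_cell_classifier(patterns, labels):
        """Fit a classifier on labelled optical signatures."""
        X = np.stack([signature_features(p) for p in patterns])
        return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

    # Usage: clf = train_cell_classifier(train_patterns, train_labels)
    #        pred = clf.predict(signature_features(new_pattern)[None, :])
    ```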
  • Patent number: 10674139
    Abstract: The present disclosure includes systems and methods for detecting and recognizing human gestures or activity in a group of 3D volumes using integral imaging. In exemplary embodiments, a camera array comprising multiple image-capturing devices captures multiple images of a point of interest in a 3D scene, and these images are used to reconstruct a group of 3D volumes of the point of interest. The system detects spatiotemporal interest points (STIPs) in the 3D volumes and assigns a descriptor to each STIP. The descriptors are quantized into a set of visual words to create clusters over the group of 3D volumes, and a histogram of visual words is built for each volume. Each 3D volume is then classified using its histogram of visual words, which yields the classification of the human gesture in that volume. (A bag-of-visual-words sketch follows this entry.)
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: June 2, 2020
    Assignee: University of Connecticut
    Inventors: Bahram Javidi, Pedro Latorre Carmona, Eva Salvador Balaguer, Filiberto Pla Bañón
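    The quantization-and-histogram stage described in this abstract follows the standard bag-of-visual-words recipe, sketched below with k-means for the vocabulary and a kernel SVM for the final classification; the vocabulary size and the SVM choice are assumptions, not values from the patent.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_vocabulary(all_stip_descriptors, n_words=100):
        """Quantize pooled STIP descriptors into a vocabulary of visual words."""
        return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_stip_descriptors)

    def volume_histogram(vocabulary, descriptors):
        """Normalized bag-of-visual-words histogram for one reconstructed 3D volume."""
        words = vocabulary.predict(descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_gesture_classifier(histograms, labels):
        """Classify each volume's histogram; the gesture label of a volume is
        the predicted class of its histogram."""
        return SVC(kernel="rbf").fit(np.stack(histograms), labels)
    ```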
  • Patent number: 10469833
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Grant
    Filed: March 5, 2015
    Date of Patent: November 5, 2019
    Assignees: THE ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIVERSITY OF ARIZONA, THE UNIVERSITY OF CONNECTICUT
    Inventors: Hong Hua, Bahram Javidi
  • Publication number: 20190260982
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Application
    Filed: May 1, 2019
    Publication date: August 22, 2019
    Inventors: Hong Hua, Bahram Javidi
  • Publication number: 20190250558
    Abstract: Portable common-path shearing-interferometry-based holographic microscopy systems are disclosed herein. In one embodiment, a system positions a laser light source, a sample holder, a microscope objective lens, a shear plate, and an imaging device in a common-path shearing interferometry configuration. In some embodiments, the systems are relatively small and lightweight and exhibit good temporal stability. In one embodiment, a system for automatic cell identification and visualization using digital holographic microscopy with an augmented reality display device includes an imaging device mounted to the augmented reality display device and configured to capture one or more images of the hologram of a sample that is disposed on the sample holder and illuminated by a beam. The augmented reality display device includes a display and can render a pseudo-colored 3D visualization of the sample, together with information associated with the sample, on that display. (A generic numerical phase-reconstruction sketch follows this entry.)
    Type: Application
    Filed: February 11, 2019
    Publication date: August 15, 2019
    Applicant: University of Connecticut
    Inventors: Bahram Javidi, Adam Markman, Siddharth Rawat, Arun Anand
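    For context on the numerical side of such a shearing-interferometry digital holographic microscope, the sketch below shows a generic off-axis hologram phase reconstruction by Fourier filtering. The Fourier-domain filtering approach and the carrier-selection mask are assumptions; the patent abstract does not prescribe a particular reconstruction algorithm.

    ```python
    import numpy as np

    def reconstruct_phase(hologram, carrier_mask):
        """Generic off-axis hologram reconstruction: isolate one modulation order
        in the Fourier domain, re-centre it, invert, and take the wrapped phase.
        carrier_mask: boolean array (same shape as hologram) selecting the +1 order."""
        spectrum = np.fft.fftshift(np.fft.fft2(hologram))
        filtered = np.where(carrier_mask, spectrum, 0)
        ys, xs = np.nonzero(carrier_mask)
        cy, cx = int(ys.mean()), int(xs.mean())        # centre of the selected order
        centred = np.roll(filtered,
                          (hologram.shape[0] // 2 - cy, hologram.shape[1] // 2 - cx),
                          axis=(0, 1))
        field = np.fft.ifft2(np.fft.ifftshift(centred))
        return np.angle(field)    # wrapped phase map of the sample
    ```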
  • Publication number: 20190226972
    Abstract: The present disclosure provides improved systems and methods for automated cell identification and classification using shearing interferometry with a digital holographic microscope. In particular, it describes a compact, low-cost, field-portable 3D-printed system for automatic cell identification and classification based on common-path shearing interferometry with digital holographic microscopy. The system has demonstrated good results for sickle cell disease identification with human blood cells, indicating that a robust, low-cost cell identification and classification system based on shearing interferometry can be used for accurate cell identification.
    Type: Application
    Filed: February 11, 2019
    Publication date: July 25, 2019
    Applicant: University of Connecticut
    Inventors: Bahram Javidi, Adam Markman, Siddharth Rawat, Arun Anand
  • Patent number: 10326983
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Grant
    Filed: March 5, 2015
    Date of Patent: June 18, 2019
    Assignees: THE UNIVERSITY OF CONNECTICUT, ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIVERSITY OF ARIZONA
    Inventors: Hong Hua, Bahram Javidi
  • Publication number: 20180247106
    Abstract: Embodiments of the present disclosure include systems and methods for cell identification using a lens-less cell identification sensor. Randomly distributed cells are illuminated by a light source such as a laser, and the object beam is passed through one or more diffusers. Pattern recognition is applied to the captured optical signature to classify the cells: for example, features are extracted and a trained classifier assigns each signature to a cell class. The cell classes can be accurately identified even when multiple cells of the same class are inspected.
    Type: Application
    Filed: February 22, 2018
    Publication date: August 30, 2018
    Applicant: University of Connecticut
    Inventors: Bahram Javidi, Adam Markman, Siddharth Rawat, Satoru Komatsu
  • Patent number: 9785789
    Abstract: An optical security method is disclosed for object authentication using photon-counting encryption implemented with phase-encoded QR codes. By combining full-phase double-random-phase encryption with photon-counting imaging and applying an iterative Huffman coding technique, an image containing primary information about the object is both encrypted and compressed. The resulting data can be stored inside an optically phase-encoded QR code for robust readout, decryption, and authentication. The optically encoded QR code is verified by examining the speckle signature of the optical masks using statistical analysis. (A minimal encryption sketch follows this entry.)
    Type: Grant
    Filed: April 8, 2015
    Date of Patent: October 10, 2017
    Assignee: University of Connecticut
    Inventors: Bahram Javidi, Adam Markman, Mohammad (Mark) Tehranipoor
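    A minimal sketch of the encryption side, assuming the classical amplitude-based double-random-phase encoding and a Poisson photon-counting model; the patent's full-phase variant, iterative Huffman coding, and QR-code storage are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def drpe_encrypt(image, phase_in, phase_fourier):
        """Double-random-phase encryption: a random phase mask in the input plane
        and a second random phase mask in the Fourier plane.
        phase_in, phase_fourier: arrays of values in [0, 1), same shape as image."""
        field = image * np.exp(2j * np.pi * phase_in)
        spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phase_fourier)
        return np.fft.ifft2(spectrum)           # complex-valued ciphertext

    def photon_count(encrypted, expected_photons=1e4):
        """Photon-counting model: a Poisson-limited record of the encrypted
        irradiance with a fixed expected total photon budget."""
        irradiance = np.abs(encrypted) ** 2
        probabilities = irradiance / irradiance.sum()
        return rng.poisson(expected_photons * probabilities)

    # Usage:
    #   phase_in, phase_f = rng.random(image.shape), rng.random(image.shape)
    #   sparse_record = photon_count(drpe_encrypt(image, phase_in, phase_f))
    ```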
  • Publication number: 20170102545
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Application
    Filed: March 5, 2015
    Publication date: April 13, 2017
    Inventors: Hong Hua, Bahram Javidi
  • Publication number: 20170078652
    Abstract: A wearable 3D augmented reality display and method are disclosed, which may include 3D integral imaging optics.
    Type: Application
    Filed: March 5, 2015
    Publication date: March 16, 2017
    Applicants: The Arizona Board of Regents on Behalf of the University of Arizona, University of Connecticut
    Inventors: Hong Hua, Bahram Javidi
  • Publication number: 20160360186
    Abstract: The present disclosure includes systems and methods for detecting and recognizing human gestures or activity in a group of 3D volumes using integral imaging. In exemplary embodiments, a camera array comprising multiple image-capturing devices captures multiple images of a point of interest in a 3D scene, and these images are used to reconstruct a group of 3D volumes of the point of interest. The system detects spatiotemporal interest points (STIPs) in the 3D volumes and assigns a descriptor to each STIP. The descriptors are quantized into a set of visual words to create clusters over the group of 3D volumes, and a histogram of visual words is built for each volume. Each 3D volume is then classified using its histogram of visual words, which yields the classification of the human gesture in that volume.
    Type: Application
    Filed: June 3, 2016
    Publication date: December 8, 2016
    Applicant: University of Connecticut
    Inventors: Bahram Javidi, Pedro Latorre Carmona, Eva Salvador Balaguer, Filiberto Pla Bañón
  • Patent number: 9344700
    Abstract: An imaging system is provided that is configured to produce three-dimensional data of a region of interest. The system comprises an optical unit and a control unit. The optical unit comprises a radiation collection unit and a detection unit. The radiation collection unit comprises at least two mask arrangements defining at least two radiation collection regions, respectively; the mask arrangements are configured to sequentially apply a predetermined number of spatial filtering patterns, each formed by a predetermined arrangement of apertures, to the collected radiation, thereby generating at least two elemental image data pieces corresponding to the radiation collected from the at least two collection regions. The control unit is configured to receive and process the at least two elemental image data pieces and to determine at least two restored elemental images that together are indicative of the three-dimensional arrangement of the region being imaged. (A heavily simplified restoration sketch follows this entry.)
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: May 17, 2016
    Assignees: UNIVERSITY OF CONNECTICUT, BAR ILAN UNIVERSITY
    Inventors: Zeev Zalevsky, Moshe Arie Ariel Schwarz, Amir Shemer, Bahram Javidi, Jingang Wang
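    The abstract above is written in claim-style language, but the core idea it describes, sequentially applying known aperture (mask) patterns to the collected radiation and computationally restoring image data from the resulting measurements, can be illustrated with a heavily simplified, single-detector coded-aperture model solved by least squares. The measurement model below is an assumption for illustration only and is not the patent's optical configuration.

    ```python
    import numpy as np

    def measure(scene, masks):
        """Simulate sequential coded measurements: each known binary mask is
        applied to the scene and the transmitted radiation is summed."""
        return np.array([float((scene * m).sum()) for m in masks])

    def restore(measurements, masks, shape):
        """Least-squares restoration of the (flattened) scene from the stack of
        coded measurements; regularization is omitted for brevity."""
        A = np.stack([m.ravel() for m in masks])         # one row per mask pattern
        x, *_ = np.linalg.lstsq(A, measurements, rcond=None)
        return x.reshape(shape)

    # Usage with random binary masks (illustrative):
    #   masks = [(np.random.rand(16, 16) > 0.5).astype(float) for _ in range(400)]
    #   y = measure(scene, masks); estimate = restore(y, masks, (16, 16))
    ```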
  • Patent number: 9232211
    Abstract: A system for three-dimensional visualization of an object in a scattering medium includes a sensor for receiving light from the object in the scattering medium and a computing device coupled to the sensor that receives a plurality of elemental images of the object from the sensor. The computing device magnifies the elemental images through a virtual pinhole array to create an overlapping pattern of magnified elemental images and averages the overlapping portions of the elemental images to form an integrated image. (A simplified reconstruction sketch follows this entry.)
    Type: Grant
    Filed: July 30, 2010
    Date of Patent: January 5, 2016
    Assignee: THE UNIVERSITY OF CONNECTICUT
    Inventors: Bahram Javidi, Inkyu Moon, Robert T. Schulein, Myungjin Cho, Cuong Manh Do
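    A simplified sketch of the reconstruction step described above, assuming a regular pinhole grid and a fixed magnification (both illustrative parameters, not taken from the patent): each elemental image is magnified, pasted at its virtual pinhole position, and the overlapping contributions are averaged.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def integral_reconstruct(elementals, pitch_px, magnification, out_shape):
        """Magnify each elemental image through a virtual pinhole array and
        average the overlapping regions to form the integrated image.
        elementals: dict mapping (row, col) grid index -> 2D elemental image."""
        acc = np.zeros(out_shape, dtype=float)
        count = np.zeros(out_shape, dtype=float)
        for (row, col), img in elementals.items():
            big = zoom(img.astype(float), magnification, order=1)   # magnified elemental image
            y0, x0 = row * pitch_px, col * pitch_px
            y1 = min(y0 + big.shape[0], out_shape[0])
            x1 = min(x0 + big.shape[1], out_shape[1])
            acc[y0:y1, x0:x1] += big[:y1 - y0, :x1 - x0]
            count[y0:y1, x0:x1] += 1.0
        return acc / np.maximum(count, 1.0)     # average where elemental images overlap
    ```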
  • Publication number: 20150381958
    Abstract: An imaging system is provided that is configured to produce three-dimensional data of a region of interest. The system comprises an optical unit and a control unit. The optical unit comprises a radiation collection unit and a detection unit. The radiation collection unit comprises at least two mask arrangements defining at least two radiation collection regions, respectively; the mask arrangements are configured to sequentially apply a predetermined number of spatial filtering patterns, each formed by a predetermined arrangement of apertures, to the collected radiation, thereby generating at least two elemental image data pieces corresponding to the radiation collected from the at least two collection regions. The control unit is configured to receive and process the at least two elemental image data pieces and to determine at least two restored elemental images that together are indicative of the three-dimensional arrangement of the region being imaged.
    Type: Application
    Filed: June 30, 2015
    Publication date: December 31, 2015
    Inventors: Zeev Zalevsky, Moshe Arie Ariel Schwarz, Amir Shemer, Bahram Javidi, Jingang Wang
  • Patent number: 9197877
    Abstract: A smart pseudoscopic-to-orthoscopic conversion (SPOC) protocol, and a method of using the SPOC protocol for three-dimensional imaging, are disclosed. The method allows full control over the optical display parameters of Integral Imaging (InI) monitors. From a given collection of elemental images, a new set of synthetic elemental images (SEIs) ready to be displayed on an InI monitor can be calculated, in which the pitch, microlens focal length, number of pixels per elemental cell, depth position of the reference plane, and grid geometry of the microlens array (MLA) can be selected to fit the conditions of the display architecture.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: November 24, 2015
    Assignees: UNIVERSITAT DE VALÈNCIA, UNIVERSITY OF CONNECTICUT
    Inventors: Bahram Javidi, Manuel Martinez-Corral, Raul Martinez-Cuenca, Genaro Saavedra-Tortosa, Hector Navarro Fructuoso