Patents by Inventor José M. F. Moura

José M. F. Moura has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240050020
    Abstract: Disclosed herein is a system and method implementing an automated, generalizable model for tracking cortical spreading depressions (CSDs) using EEG. The model comprises convolutional neural networks and graph neural networks to leverage both the spatial and the temporal properties of CSDs in the detection. The trained model is generalizable to different head models such that it can be applied to new patients without re-training. Further, the model is scalable to different densities of EEG electrodes, even when trained on a specific electrode density.
    Type: Application
    Filed: April 27, 2022
    Publication date: February 15, 2024
    Inventors: Alireza Chamanzar, Xujin Liu, Levender Y. Jiang, Kimon A. Vogt, José M.F. Moura, Pulkit Grover
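The model described in publication 20240050020 pairs temporal convolutions with graph operations over the electrode layout. Below is a minimal sketch of that general combination, not the patented model: a 1-D CNN extracts per-electrode temporal features and a single normalized-adjacency graph step mixes them across neighboring electrodes, so the same weights apply to any electrode density. All layer sizes, the adjacency matrix, and the toy data are illustrative assumptions.

```python
# Illustrative sketch only: CNN over time per electrode + one graph-mixing step.
import torch
import torch.nn as nn

class CSDDetectorSketch(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        # 1-D CNN applied independently to each electrode's time series
        self.temporal = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, feat_dim, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.mix = nn.Linear(feat_dim, feat_dim)   # graph mixing weights
        self.head = nn.Linear(feat_dim, 1)         # per-electrode CSD score

    def forward(self, eeg, adj):
        # eeg: (batch, n_electrodes, n_samples); adj: (n_electrodes, n_electrodes)
        b, n, t = eeg.shape
        x = self.temporal(eeg.reshape(b * n, 1, t)).reshape(b, n, -1)
        # row-normalized adjacency acts as one graph-convolution step, so the
        # same weights apply regardless of the number of electrodes
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        x = torch.relu(self.mix((adj / deg) @ x))
        return torch.sigmoid(self.head(x)).squeeze(-1)  # (batch, n_electrodes)

model = CSDDetectorSketch()
eeg = torch.randn(2, 64, 500)          # 2 windows, 64 electrodes, 500 samples
adj = (torch.rand(64, 64) > 0.8).float()
adj = ((adj + adj.t()) > 0).float()    # symmetric electrode neighborhood graph
print(model(eeg, adj).shape)           # torch.Size([2, 64])
```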
  • Publication number: 20220164580
    Abstract: Disclosed herein is a method for performing few-shot action classification and localization in untrimmed videos, where novel-class untrimmed testing videos are recognized with only a few trimmed training videos (i.e., few-shot learning), with prior knowledge transferred from non-overlapping base classes where only untrimmed videos and class labels are available (i.e., weak supervision).
    Type: Application
    Filed: November 17, 2021
    Publication date: May 26, 2022
    Inventors: José M.F. Moura, Yixiong Zou, Shanghang Zhang, Guangyao Chen, Yonghong Tian
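Publication 20220164580 concerns recognizing and localizing actions in untrimmed videos from only a few trimmed examples. As a rough illustration only (not the claimed method), the sketch below uses prototype-based few-shot matching: class prototypes are averaged features of the few support clips, and a sliding window over the untrimmed video's frame features is assigned to the nearest prototype by cosine similarity. The feature extractor is assumed to exist; all dimensions and thresholds are made up.

```python
# Illustrative sketch only: prototype matching over a sliding temporal window.
import numpy as np

def l2norm(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def prototypes(support_feats):
    # support_feats: dict class_name -> (n_shots, feat_dim) clip features
    return {c: l2norm(f.mean(axis=0)) for c, f in support_feats.items()}

def localize(frame_feats, protos, win=16, stride=8, thresh=0.5):
    # frame_feats: (n_frames, feat_dim) features of the untrimmed video
    detections = []
    for start in range(0, len(frame_feats) - win + 1, stride):
        w = l2norm(frame_feats[start:start + win].mean(axis=0))
        scores = {c: float(w @ p) for c, p in protos.items()}
        best = max(scores, key=scores.get)
        if scores[best] > thresh:
            detections.append((start, start + win, best, scores[best]))
    return detections

rng = np.random.default_rng(0)
support = {"jump": rng.normal(size=(5, 128)), "throw": rng.normal(size=(5, 128))}
video = rng.normal(size=(200, 128))
print(localize(video, prototypes(support))[:3])   # likely [] for random features
```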
  • Patent number: 11183051
    Abstract: Methods and software utilizing artificial neural networks (ANNs) to estimate density and/or flow (speed) of objects in one or more scenes, each captured in one or more images. In some embodiments, the ANNs and their training are configured to provide reliable estimates despite one or more challenges that include, but are not limited to, low-resolution images, low framerate image acquisition, high rates of object occlusions, large camera perspective, widely varying lighting conditions, and widely varying weather conditions. In some embodiments, fully convolutional networks (FCNs) are used in the ANNs. In some embodiments, a long short-term memory network (LSTM) is used with an FCN. In such embodiments, the LSTM can be connected to the FCN in a residual learning manner or in a directly connected manner. Also disclosed are methods of generating training images for training an ANN-based estimating algorithm that make training of the estimating algorithm less costly.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: November 23, 2021
    Assignees: Instituto Superior Tecnico, Carnegie Mellon University
    Inventors: José M. F. Moura, João Paulo Costeira, Shanghang Zhang, Evgeny Toropov
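Patent 11183051 and the related publications listed below share one abstract: a fully convolutional network (FCN) regresses an object-density map whose integral is the count, and an LSTM over consecutive frames refines the estimates, optionally through a residual connection. The sketch below is a minimal, assumption-laden rendering of that combination (layer sizes, the residual form, and the toy input are invented), not the patented architecture.

```python
# Illustrative sketch only: FCN density map + LSTM refinement with a residual.
import torch
import torch.nn as nn

class FcnLstmCounterSketch(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.fcn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),            # density map, one channel
        )
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        density = self.fcn(frames.reshape(b * t, c, h, w))        # (b*t, 1, H, W)
        counts = density.sum(dim=(1, 2, 3)).reshape(b, t, 1)      # per-frame counts
        refined, _ = self.lstm(counts)
        # residual connection: the LSTM predicts a correction to the FCN count
        return counts.squeeze(-1) + self.fc(refined).squeeze(-1)  # (batch, time)

model = FcnLstmCounterSketch()
print(model(torch.randn(2, 5, 3, 64, 64)).shape)  # torch.Size([2, 5])
```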
  • Publication number: 20200302781
    Abstract: Methods and software utilizing artificial neural networks (ANNs) to estimate density and/or flow (speed) of objects in one or more scenes, each captured in one or more images. In some embodiments, the ANNs and their training are configured to provide reliable estimates despite one or more challenges that include, but are not limited to, low-resolution images, low framerate image acquisition, high rates of object occlusions, large camera perspective, widely varying lighting conditions, and widely varying weather conditions. In some embodiments, fully convolutional networks (FCNs) are used in the ANNs. In some embodiments, a long short-term memory network (LSTM) is used with an FCN. In such embodiments, the LSTM can be connected to the FCN in a residual learning manner or in a directly connected manner. Also disclosed are methods of generating training images for training an ANN-based estimating algorithm that make training of the estimating algorithm less costly.
    Type: Application
    Filed: June 11, 2020
    Publication date: September 24, 2020
    Inventors: José M. F. Moura, João Paulo Costeira, Shanghang Zhang, Evgeny Toropov
  • Patent number: 10733876
    Abstract: Methods and software utilizing artificial neural networks (ANNs) to estimate density and/or flow (speed) of objects in one or more scenes, each captured in one or more images. In some embodiments, the ANNs and their training are configured to provide reliable estimates despite one or more challenges that include, but are not limited to, low-resolution images, low framerate image acquisition, high rates of object occlusions, large camera perspective, widely varying lighting conditions, and widely varying weather conditions. In some embodiments, fully convolutional networks (FCNs) are used in the ANNs. In some embodiments, a long short-term memory network (LSTM) is used with an FCN. In such embodiments, the LSTM can be connected to the FCN in a residual learning manner or in a directly connected manner. Also disclosed are methods of generating training images for training an ANN-based estimating algorithm that make training of the estimating algorithm less costly.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: August 4, 2020
    Assignees: CARNEGIE MELLON UNIVERSITY, INSTITUTO SUPERIOR TÉCNICO
    Inventors: José M. F. Moura, João Paulo Costeira, Shanghang Zhang, Evgeny Toropov
  • Publication number: 20200118423
    Abstract: Methods and software utilizing artificial neural networks (ANNs) to estimate density and/or flow (speed) of objects in one or more scenes, each captured in one or more images. In some embodiments, the ANNs and their training are configured to provide reliable estimates despite one or more challenges that include, but are not limited to, low-resolution images, low framerate image acquisition, high rates of object occlusions, large camera perspective, widely varying lighting conditions, and widely varying weather conditions. In some embodiments, fully convolutional networks (FCNs) are used in the ANNs. In some embodiments, a long short-term memory network (LSTM) is used with an FCN. In such embodiments, the LSTM can be connected to the FCN in a residual learning manner or in a directly connected manner. Also disclosed are methods of generating training images for training an ANN-based estimating algorithm that make training of the estimating algorithm less costly.
    Type: Application
    Filed: April 5, 2018
    Publication date: April 16, 2020
    Inventors: José M. F. Moura, João Paulo Costeira, Shanghang Zhang, Evgeny Toropov
  • Patent number: 10324068
    Abstract: A method performed by a processing device, the method comprising: obtaining first waveform data indicative of traversal of a first signal through a structure at a first time; obtaining second waveform data indicative of traversal of a second signal through the structure at a second time; applying a scale transform to the first waveform data and the second waveform data; computing, by the processing device and based on applying the scale transform, a scale-cross correlation function that promotes identification of scaling behavior between the first waveform data and the second waveform data; and performing one or more of: computing, by the processing device and based on the scale-cross correlation function, a scale factor for the first waveform data and the second waveform data; and computing, by the processing device and based on the scale-cross correlation function, a scale invariant correlation coefficient between the first waveform data and the second waveform data.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: June 18, 2019
    Assignee: Carnegie Mellon University
    Inventors: Joel B. Harley, Jose M. F. Moura
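Patent 10324068 (and publication 20140025316 below) builds on the scale transform: warping the time axis logarithmically turns a time-scaling between two waveforms into a simple shift, so an ordinary cross-correlation of the warped signals reveals the scale factor. The sketch below illustrates only that underlying idea under illustrative assumptions (grid sizes, interpolation, and the synthetic test signals are made up); it is not the patented method.

```python
# Illustrative sketch only: scale estimation via log-time warping + correlation.
import numpy as np

def log_warp(sig, t, n=4096):
    # resample sig(t) onto an exponentially spaced grid t = exp(tau)
    t0, t1 = t[1], t[-1]                      # skip t = 0 to keep log finite
    tau = np.linspace(np.log(t0), np.log(t1), n)
    return np.interp(np.exp(tau), t, sig), tau

def scale_factor(sig_a, sig_b, t):
    wa, tau = log_warp(sig_a, t)
    wb, _ = log_warp(sig_b, t)
    xc = np.correlate(wa - wa.mean(), wb - wb.mean(), mode="full")
    lag = np.argmax(xc) - (len(wa) - 1)       # shift in the warped domain
    dtau = tau[1] - tau[0]
    return np.exp(lag * dtau)                 # a shift in log-time is a scale factor

t = np.linspace(0, 1, 20000)
a_true = 1.05
first = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
second = np.sin(2 * np.pi * 50 * a_true * t) * np.exp(-5 * a_true * t)
print(round(scale_factor(first, second, t), 3))   # close to 1.05
```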
  • Patent number: 9141871
    Abstract: Feature-matching methods for attempting to match visual features in one image with visual features in another image. The feature-matching methods disclosed progressively sample the affine spaces of the images for visual features, starting with a coarse sampling and iteratively increasing the density of sampling. Once a predetermined threshold number of unambiguous matches has been satisfied, the iterative sampling and matching can be stopped. The iterative sampling and matching methodology is especially, but not exclusively, suited for use in fully affine invariant feature-matching applications and can be particularly computationally efficient for comparing images that have large differences in observational parameters, such as scale, tilt, object-plane rotation, and image-plane rotation. The feature-matching methods disclosed can be useful in object/scene recognition applications. The disclosed methods can be implemented in software and various object/scene recognition systems.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: September 22, 2015
    Assignee: Carnegie Mellon University
    Inventors: Bernardo Rodrigues Pires, José M. F. Moura
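Patent 9141871 describes progressively densified sampling of affine views until enough unambiguous feature matches are found. The sketch below shows one plausible rendering of that loop using OpenCV SIFT features and a ratio test over simulated rotations and tilts; the sampling schedule, the tilt values, the 0.7 ratio, and the stopping threshold are all illustrative assumptions, not the patented procedure.

```python
# Illustrative sketch only: coarse-to-fine affine view sampling with SIFT matching.
import cv2
import numpy as np

def affine_views(img, angles, tilts):
    h, w = img.shape[:2]
    for angle in angles:
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        for tilt in tilts:
            squeeze = np.diag([1.0, 1.0 / tilt]).astype(np.float32)
            m = squeeze @ rot                      # tilt composed with rotation
            yield cv2.warpAffine(img, m, (w, h))

def match_progressively(img1, img2, enough=50, max_rounds=4):
    # img1, img2: 8-bit grayscale images
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    if des1 is None:
        return []
    bf = cv2.BFMatcher()
    n_views = 2                                    # start with a coarse sampling
    for _ in range(max_rounds):
        good = []
        angles = np.linspace(0, 180, n_views, endpoint=False)
        for view in affine_views(img2, angles, tilts=[1.0, 2.0]):
            _, des2 = sift.detectAndCompute(view, None)
            if des2 is None:
                continue
            for pair in bf.knnMatch(des1, des2, k=2):
                if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                    good.append(pair[0])           # unambiguous (ratio-test) match
        if len(good) >= enough:
            return good                            # stop early: sampling is dense enough
        n_views *= 2                               # densify the affine sampling
    return good

# example: match_progressively(cv2.imread('a.png', 0), cv2.imread('b.png', 0))
```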
  • Publication number: 20140025316
    Abstract: A method performed by a processing device, the method comprising: obtaining first waveform data indicative of traversal of a first signal through a structure at a first time; obtaining second waveform data indicative of traversal of a second signal through the structure at a second time; applying a scale transform to the first waveform data and the second waveform data; computing, by the processing device and based on applying the scale transform, a scale-cross correlation function that promotes identification of scaling behavior between the first waveform data and the second waveform data; and performing one or more of: computing, by the processing device and based on the scale-cross correlation function, a scale factor for the first waveform data and the second waveform data; and computing, by the processing device and based on the scale-cross correlation function, a scale invariant correlation coefficient between the first waveform data and the second waveform data.
    Type: Application
    Filed: July 18, 2013
    Publication date: January 23, 2014
    Inventors: Joel B. Harley, Jose M.F. Moura
  • Patent number: 8330642
    Abstract: A high resolution imaging system is used to detect and locate targets using time reversal in rich scattering environments, where the number of scatterers is significantly larger than the number of antennas. Our imaging system performs two major tasks by time reversal: clutter mitigation and target focusing. Clutter mitigation is accomplished through waveform reshaping to suppress the clutter returns. After the suppressed clutter is subtracted from the returned signal, a second time reversal for target focusing is performed. A final image is then obtained by beamforming.
    Type: Grant
    Filed: July 9, 2008
    Date of Patent: December 11, 2012
    Assignee: Carnegie Mellon University
    Inventors: Yuanwei Jin, José M. F. Moura
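Patent 8330642 (and publication 20090076389 below) describes clutter mitigation followed by time-reversal focusing. The numpy sketch below captures only the core ideas in a heavily simplified setting: a single frequency, free-space Green's functions, Born-approximation scattering, clutter removal by subtracting a target-absent measurement, and focusing by back-propagating the phase-conjugated residual to each image pixel. Geometry, wavelength, and scatterer layout are invented; this is not the patented processing chain.

```python
# Illustrative sketch only: clutter subtraction + time-reversal (phase-conjugate) focusing.
import numpy as np

k = 2 * np.pi / 0.1                               # wavenumber, 0.1 m wavelength
array = np.stack([np.linspace(-1, 1, 16), np.zeros(16)], axis=1)   # 16 antennas on a line

def green(points_a, points_b):
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return np.exp(1j * k * d) / (d + 1e-9)        # free-space Green's function

def response(scatterers):
    g = green(array, scatterers)                  # antennas x scatterers
    return g @ g.T                                # Born-approximation multistatic matrix

rng = np.random.default_rng(0)
clutter = np.column_stack([rng.uniform(-1, 1, 30), rng.uniform(2.2, 3.8, 30)])
target = np.array([[0.2, 3.0]])

K_clutter = response(clutter)                     # target-absent measurement
K_total = response(np.vstack([clutter, target]))  # measurement with the target present
K_target = K_total - K_clutter                    # clutter mitigation by subtraction

# time-reversal focusing: back-propagate the conjugated residual to each pixel
xs, ys = np.linspace(-1, 1, 81), np.linspace(2.2, 3.8, 81)
image = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        g = green(array, np.array([[x, y]]))[:, 0]
        g = g / np.linalg.norm(g)                           # unit-norm steering vector
        image[iy, ix] = np.abs(np.conj(g) @ K_target @ np.conj(g))
peak = np.unravel_index(image.argmax(), image.shape)
print(xs[peak[1]], ys[peak[0]])                   # near the true target at (0.2, 3.0)
```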
  • Patent number: 7928896
    Abstract: A method and apparatus for target focusing and ghost image removal in synthetic aperture radar (SAR) is disclosed. Conventional SAR is not designed for imaging targets in a rich scattering environment. In this case, ghost images due to secondary reflections appear in the SAR images. We demonstrate how, starting from a rough estimate of the target location obtained from a conventional SAR image, time reversal techniques can be applied to SAR to focus on the target with improved resolution and to reduce or remove ghost images.
    Type: Grant
    Filed: July 9, 2008
    Date of Patent: April 19, 2011
    Assignee: Carnegie Mellon University
    Inventors: Yuanwei Jin, José M. F. Moura
  • Patent number: 7734075
    Abstract: A system and method are provided for contrast-invariant registration of images, the system including a processor, an imaging adapter or a communications adapter for receiving an image data sequence, a user interface adapter for selecting a reference frame from the image sequence or cropping a region of interest (ROI) from the reference frame, a tracking unit for tracking the ROI across the image sequence, and an estimation unit for segmenting the ROI in the reference frame or performing an affine registration for the ROI; and the method including receiving an image sequence, selecting a reference frame from the image sequence, cropping a region of interest (ROI) from the reference frame, tracking the ROI across the image sequence, segmenting the ROI in the reference frame, and performing an affine registration for the ROI.
    Type: Grant
    Filed: March 11, 2005
    Date of Patent: June 8, 2010
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Ying Sun, Marie-Pierre Jolly, Jose M. F. Moura
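Patent 7734075 covers contrast-invariant registration built around tracking and affinely registering a region of interest (ROI) across an image sequence. The sketch below is one possible illustration, not the patented system: it registers the ROI of each frame to a reference crop with OpenCV's ECC algorithm (cv2.findTransformECC with an affine motion model), which is insensitive to smooth contrast changes. The toy frame generator, ROI, and convergence criteria are assumptions.

```python
# Illustrative sketch only: per-frame affine registration of an ROI with ECC.
import cv2
import numpy as np

def track_roi_affine(frames, roi):
    # frames: iterable of single-channel images; roi: (x, y, w, h) in the first frame
    x, y, w, h = roi
    frames = iter(frames)
    template = np.float32(next(frames)[y:y + h, x:x + w])
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    warps = []
    for frame in frames:
        warp = np.eye(2, 3, dtype=np.float32)      # start from the identity warp
        _, warp = cv2.findTransformECC(
            template, np.float32(frame[y:y + h, x:x + w]),
            warp, cv2.MOTION_AFFINE, criteria, None, 5)
        warps.append(warp)                          # affine registration of the ROI
    return warps

# toy usage: a smooth blob drifting one pixel per frame
yy, xx = np.mgrid[0:120, 0:160].astype(np.float32)
frames = [np.exp(-((xx - 80 - t) ** 2 + (yy - 60 - t) ** 2) / 300.0) for t in range(5)]
print(np.round(track_roi_affine(frames, roi=(50, 30, 60, 60))[-1], 2))
# prints roughly the identity warp plus a 4-pixel translation
```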
  • Publication number: 20090076389
    Abstract: A high resolution imaging system is used to detect and locate targets using time reversal in rich scattering environments, where the number of scatterers is significantly larger than the number of antennas. Our imaging system performs two major tasks by time reversal: clutter mitigation and target focusing. Clutter mitigation is accomplished through waveform reshaping to suppress the clutter returns. After the suppressed clutter is subtracted from the returned signal, a second time reversal for target focusing is performed. A final image is then obtained by beamforming.
    Type: Application
    Filed: July 9, 2008
    Publication date: March 19, 2009
    Inventors: Yuanwei Jin, José M.F. Moura
  • Publication number: 20090033549
    Abstract: A method and apparatus for target focusing and ghost image removal in synthetic aperture radar (SAR) is disclosed. Conventional SAR is not designed for imaging targets in a rich scattering environment. In this case, ghost images due to secondary reflections appear in the SAR images. We demonstrate how, starting from a rough estimate of the target location obtained from a conventional SAR image, time reversal techniques can be applied to SAR to focus on the target with improved resolution and to reduce or remove ghost images.
    Type: Application
    Filed: July 9, 2008
    Publication date: February 5, 2009
    Inventors: Yuanwei Jin, Jose M. F. Moura
  • Patent number: 7394921
    Abstract: A system and method are provided for integrated registration of images, the system including a processor, a first registration portion for performing rough registration of an image, a first segmentation portion for performing segmentation of an object of interest in the image, a second registration portion for performing fine registration of the image, and a second segmentation portion for performing segmentation of structures of the object of interest in the image; and the method including receiving a sequence of images, selecting an image from the sequence, cropping a region of interest (ROI) from the selected image, performing rough registration of the cropped ROI, performing segmentation of an object of interest from the rough registered ROI, performing fine registration of the ROI, and performing segmentation of structures of the object of interest from the fine registered ROI.
    Type: Grant
    Filed: March 11, 2005
    Date of Patent: July 1, 2008
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Ying Sun, Marie-Pierre Jolly, José M. F. Moura
  • Patent number: 6760488
    Abstract: A system for generating a three-dimensional model of an object from a two-dimensional image sequence. According to one embodiment, the system includes an image sensor for capturing a sequence of two-dimensional images of a scene, the scene including the object, a two-dimensional motion filter module in communication with the image sensor for determining from the sequence of images a plurality of two-dimensional motion parameters for the object, and a three-dimensional structure recovery module in communication with the two-dimensional motion filter module for estimating a set of three-dimensional shape parameters and a set of three-dimensional motion parameters from the set of two-dimensional motion parameters using a rank 1 factorization of a matrix.
    Type: Grant
    Filed: July 12, 2000
    Date of Patent: July 6, 2004
    Assignee: Carnegie Mellon University
    Inventors: José M. F. Moura, Pedro M. Q. Aguiar
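Patent 6760488 recovers 3-D shape and motion from 2-D motion parameters via a rank-1 factorization. The sketch below demonstrates only the linear-algebra core (the best rank-1 factorization of a matrix, obtained from its leading singular vectors); the toy "motion parameter" matrix is synthetic, and the full patented recovery pipeline is not reproduced.

```python
# Illustrative sketch only: rank-1 factorization of a matrix via the SVD.
import numpy as np

rng = np.random.default_rng(0)
motion = rng.normal(size=(12, 1))        # one motion coefficient per frame
shape = rng.normal(size=(1, 40))         # one structure coefficient per feature
W = motion @ shape + 0.01 * rng.normal(size=(12, 40))   # observed parameters

U, s, Vt = np.linalg.svd(W, full_matrices=False)
motion_hat = U[:, :1] * s[0]             # best rank-1 factor over the frames
shape_hat = Vt[:1, :]                    # best rank-1 factor over the features
print(np.linalg.norm(W - motion_hat @ shape_hat) / np.linalg.norm(W))  # small residual
```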
  • Patent number: 6438180
    Abstract: A method of determining branch metric values in a detector. The method includes receiving a plurality of time variant signal samples, the signal samples having one of signal-dependent noise, correlated noise, and both signal-dependent and correlated noise associated therewith. The method also includes selecting a branch metric function at a certain time index and applying the selected function to the signal samples to determine the metric values.
    Type: Grant
    Filed: March 1, 1999
    Date of Patent: August 20, 2002
    Assignee: Carnegie Mellon University
    Inventors: Aleksandar Kavcic, Jose M. F. Moura
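Patents 6438180 and 6201839 (below) both concern choosing a branch metric function per trellis branch and time index so the detector can handle signal-dependent, correlated noise. The sketch below shows one plausible metric of that kind, a Gaussian log-likelihood with a branch-dependent mean and noise covariance; the branch models and observed samples are invented for illustration and would normally be estimated from training data.

```python
# Illustrative sketch only: a per-branch Gaussian log-likelihood branch metric.
import numpy as np

def branch_metric(samples, mean, cov):
    # samples: recent signal samples at this time index; mean/cov: branch noise model
    diff = samples - mean
    _, logdet = np.linalg.slogdet(cov)
    return logdet + diff @ np.linalg.solve(cov, diff)   # smaller = more likely

# two hypothetical trellis branches with different noise statistics
branches = {
    "0->0": (np.array([0.1, -0.1]), np.array([[0.30, 0.10], [0.10, 0.30]])),
    "0->1": (np.array([0.9,  0.8]), np.array([[0.05, 0.01], [0.01, 0.05]])),
}
window = np.array([0.85, 0.75])                         # observed samples
metrics = {b: branch_metric(window, m, c) for b, (m, c) in branches.items()}
print(min(metrics, key=metrics.get))                    # "0->1" wins here
```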
  • Patent number: 6201839
    Abstract: The present invention is directed to a method of determining branch metric values for branches of a trellis for a Viterbi-like detector. The method includes the step of selecting a branch metric function for each of the branches at a certain time index. The method also includes the step of applying the selected function to a plurality of time variant signal samples to determine the metric values.
    Type: Grant
    Filed: April 3, 1998
    Date of Patent: March 13, 2001
    Assignee: Carnegie Mellon University
    Inventors: Aleksandar Kavcic, Jose M. F. Moura
  • Patent number: 5900778
    Abstract: A high power amplifier system includes an on-line adaptive predistorter for generating predistorted complex data signals to a high power amplifier in response to receiving incoming complex data signals from a remote source. The predistorted complex data signals enable the high powered amplifier to output signals corresponding to the incoming complex data signals. The amplifier system includes an off-line adaptive predistorter which has an adaptive parametric forward filter for combining predistorted complex data signals and demodulated complex data signals, produced from the output of the high power amplifier, to produce an optimized forward amplitude filter that emulates the forward amplitude response of the amplifier, and an optimized inverse phase filter that emulates the inverse phase response of the amplifier.
    Type: Grant
    Filed: May 8, 1997
    Date of Patent: May 4, 1999
    Inventors: John T. Stonick, Virginia L. Stonick, Jose M. F. Moura
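Patent 5900778 describes on-line and off-line adaptive predistortion so that a nonlinear high-power amplifier reproduces its input faithfully. The sketch below illustrates only the basic predistortion idea, not the patented adaptive filter structure: the amplifier's AM/AM and AM/PM curves are measured on a grid and inverted by table lookup so the cascade of predistorter and amplifier is approximately linear. The Saleh-like amplifier model and all coefficients are assumptions.

```python
# Illustrative sketch only: AM/AM and AM/PM predistortion by inverse table lookup.
import numpy as np

def amplifier(x):
    r = np.abs(x)
    gain = 2.0 / (1.0 + 0.9 * r ** 2)            # AM/AM compression
    phase = 0.5 * r ** 2 / (1.0 + r ** 2)        # AM/PM rotation (radians)
    return gain * r * np.exp(1j * (np.angle(x) + phase))

# characterize the amplifier on a dense amplitude grid, then invert it
r_in = np.linspace(0, 1.0, 1000)
out = amplifier(r_in + 0j)
r_out, ph_out = np.abs(out), np.angle(out)

def predistort(x):
    r_desired = np.abs(x)                        # desired output amplitude
    r_pre = np.interp(r_desired, r_out, r_in)    # inverse AM/AM by table lookup
    ph_pre = -np.interp(r_pre, r_in, ph_out)     # cancel the AM/PM rotation
    return r_pre * np.exp(1j * (np.angle(x) + ph_pre))

symbols = 0.8 * np.exp(1j * 2 * np.pi * np.arange(8) / 8)     # toy PSK symbols
linearized = amplifier(predistort(symbols))
print(np.max(np.abs(linearized - symbols)))      # small residual distortion
```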
  • Patent number: 5854856
    Abstract: A content based method of compressing a segment of video is implemented in two stages. In a spatial integration stage, figures are represented in terms of compact models. In a temporal integration stage, which uses the information from the spatial integration stage, constructs (i.e., world images and data describing relationships between world images and frames) are generated. In operation, each frame in a series of frames is preprocessed to tessellate any moving figures and to obtain motion data for the moving figures and for the image background. The figure motion data is stored in a manner so as to associate the motion data with the original frame. Frame information identifying the size and position of each frame with respect to a background world image is also stored. Each tessellated figure is compared to the original frame to produce a template for each moving figure and a template for the background.
    Type: Grant
    Filed: July 19, 1995
    Date of Patent: December 29, 1998
    Assignee: Carnegie Mellon University
    Inventors: Jose M. F. Moura, Radu S. Jasinschi
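Patent 5854856 compresses video by separating compact figure models from background "world images". The sketch below illustrates only the world-image idea under strong simplifying assumptions (a purely translating camera, integer shifts, synthetic frames): per-frame translation is estimated by phase correlation, and the aligned frames are median-composited so the static background survives while moving figures are voted out. It is not the patented codec.

```python
# Illustrative sketch only: background "world image" by alignment + median compositing.
import numpy as np

def phase_shift(a, b):
    # integer translation of b relative to a via phase correlation
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return dy, dx

def world_image(frames):
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = phase_shift(ref, f)
        aligned.append(np.roll(f, shift=(dy, dx), axis=(0, 1)))  # undo the camera pan
    return np.median(np.stack(aligned), axis=0)   # moving figures are voted away

rng = np.random.default_rng(1)
background = rng.uniform(size=(64, 64))
frames = []
for t in range(7):
    f = np.roll(background, shift=(0, -2 * t), axis=(0, 1))          # camera pans right
    f[20:30, 10 + 4 * t:20 + 4 * t] = 1.0                            # a moving figure
    frames.append(f)
print(np.abs(world_image(frames) - background).mean())               # close to zero
```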