Patents by Inventor Tim Marks

Tim Marks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11926823
    Abstract: Compositions and methods are provided for genome modification of a nucleotide sequence located in or near a male fertility gene of MS9, MS22, MS26, or MS45 in the genome of a plant cell or plant to produce a male-sterile plant. In some examples, the methods and compositions employ a guide RNA/Cas endonuclease system for modifying or altering target sites located in or near a male fertility gene of MS9, MS22, MS26, or MS45 in the genome of a plant cell, plant, or seed to produce a male-sterile plant. Also provided are compositions and methods employing a guide polynucleotide/Cas endonuclease system for genome modification of a nucleotide sequence located in or near a male fertility gene of MS9, MS22, MS26, or MS45 in the genome of a plant cell to produce a male-sterile plant. Compositions and methods are also provided for restoring a Ms9, Ms22, Ms26, or Ms45 nucleotide sequence, and thereby fertility, to a male-sterile Ms9, Ms22, Ms26, or Ms45 plant produced using the methods and compositions described herein.
    Type: Grant
    Filed: November 11, 2021
    Date of Patent: March 12, 2024
    Assignee: PIONEER HI-BRED INTERNATIONAL, INC.
    Inventors: Andrew Mark Cigan, Tim Fox, Manjit Singh
  • Publication number: 20230267614
    Abstract: An imaging controller is provided for segmenting instances from depth images including objects to be manipulated by a robot. The imaging controller includes an input interface configured to receive a depth image that includes objects, a memory configured to store instructions and a neural network trained to segment instances from the objects in the depth image, and a processor, coupled with the memory, configured to perform the instructions to segment a pickable instance using the trained neural network.
    Type: Application
    Filed: February 25, 2022
    Publication date: August 24, 2023
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Anoop Cherian, Tim Marks, Alan Sullivan
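
The entry above describes a controller that segments object instances in a depth image with a trained network and selects a pickable instance. Below is a minimal, hypothetical Python/PyTorch sketch of that flow; the stand-in network, its shapes, and the pick-the-largest-mask rule are assumptions for illustration, not the patented design.

    # Hypothetical sketch: run a (stand-in) trained instance-segmentation network
    # on a depth image and pick the instance covering the most pixels.
    import numpy as np
    import torch
    import torch.nn as nn

    class InstanceSegmenter(nn.Module):
        """Toy stand-in for a trained instance-segmentation network."""
        def __init__(self, max_instances: int = 8):
            super().__init__()
            self.conv = nn.Conv2d(1, max_instances, kernel_size=3, padding=1)

        def forward(self, depth):              # depth: (1, 1, H, W)
            return self.conv(depth)            # per-pixel instance logits: (1, K, H, W)

    def pick_instance(depth_image: np.ndarray, model: nn.Module) -> np.ndarray:
        """Return a binary mask of the most prominent ('pickable') instance."""
        with torch.no_grad():
            x = torch.from_numpy(depth_image).float()[None, None]
            labels = model(x).argmax(dim=1)[0]             # (H, W) instance labels
        counts = torch.bincount(labels.flatten(), minlength=8)
        counts[0] = 0                                      # assume label 0 is background
        return (labels == int(counts.argmax())).numpy()

    depth = np.random.rand(64, 64).astype(np.float32)      # placeholder depth image
    mask = pick_instance(depth, InstanceSegmenter())
    print("pickable pixels:", int(mask.sum()))
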
  • Patent number: 11663798
    Abstract: The present disclosure provides an image processing system and method for manipulating two-dimensional (2D) images of three-dimensional (3D) objects of a predetermined class (e.g., human faces). A 2D input image of a 3D object of the predetermined class is manipulated by manipulating physical properties of the 3D object, such as its 3D shape, its albedo, its pose, and the lighting illuminating it. The physical properties are extracted from the 2D input image using a neural network that is trained to reconstruct the 2D input image. The 2D input image is reconstructed by disentangling the physical properties from the pixels of the 2D input image using multiple subnetworks. The disentangled physical properties produced by the multiple subnetworks are combined into a 2D output image using a differentiable renderer.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: May 30, 2023
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Tim Marks, Safa Medin, Anoop Cherian, Ye Wang
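
As a rough illustration of the disentangling idea in the patent above, the PyTorch sketch below splits an image into shape, albedo, pose, and lighting codes and recombines them under a reconstruction loss. All layer sizes, code dimensions, and the linear stand-in for the differentiable renderer are assumptions, not the patented architecture.

    import torch
    import torch.nn as nn

    class Disentangler(nn.Module):
        def __init__(self, feat=128):
            super().__init__()
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat), nn.ReLU())
            # One small subnetwork per physical property (sizes are illustrative).
            self.shape = nn.Linear(feat, 32)
            self.albedo = nn.Linear(feat, 32)
            self.pose = nn.Linear(feat, 6)      # e.g. rotation + translation parameters
            self.light = nn.Linear(feat, 9)     # e.g. spherical-harmonic lighting
            # Stand-in for a differentiable renderer: maps the codes back to an image.
            self.render = nn.Linear(32 + 32 + 6 + 9, 3 * 64 * 64)

        def forward(self, img):
            h = self.backbone(img)
            codes = [self.shape(h), self.albedo(h), self.pose(h), self.light(h)]
            recon = self.render(torch.cat(codes, dim=-1)).view(-1, 3, 64, 64)
            return recon, codes

    model = Disentangler()
    img = torch.rand(2, 3, 64, 64)                       # batch of 2D input images
    recon, (shape, albedo, pose, light) = model(img)
    loss = nn.functional.mse_loss(recon, img)            # reconstruction objective
    loss.backward()
    print(recon.shape, pose.shape)
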
  • Patent number: 11651497
    Abstract: A system and method for generating verisimilar images from real depth images. A generative adversarial network (GAN) is trained by accessing test depth images containing the same instances as a real depth image. The test depth images are input into the generator to generate estimated depth images representing an implicit three-dimensional model of the object. Each estimated depth image is input into a discriminator to obtain a loss and into a pose encoder to obtain a matching loss, and these steps are repeated iteratively until the losses fall below a threshold, ending training. The instances in the real image are identified using the trained GAN pose encoder, producing a pose transformation matrix for each instance in the real image. Pixels in the depth images corresponding to the instances of the real image are identified and merged to form an instance segmentation map for the real depth image.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: May 16, 2023
    Inventors: Anoop Cherian, Goncalo José Dias Pais, Tim Marks, Alan Sullivan
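
The following is a highly simplified PyTorch sketch of the training signals described in the abstract above: a generator produces estimated depth images, a discriminator supplies an adversarial loss, and a pose encoder supplies a matching loss against known poses. All module shapes are assumptions, and the discriminator's own update step is omitted.

    import torch
    import torch.nn as nn

    D_IMG = 32 * 32                               # flattened depth-image size (assumed)
    generator = nn.Sequential(nn.Linear(D_IMG, 256), nn.ReLU(), nn.Linear(256, D_IMG))
    discriminator = nn.Sequential(nn.Linear(D_IMG, 128), nn.ReLU(), nn.Linear(128, 1))
    pose_encoder = nn.Sequential(nn.Linear(D_IMG, 128), nn.ReLU(), nn.Linear(128, 12))  # 3x4 pose

    opt = torch.optim.Adam(list(generator.parameters()) + list(pose_encoder.parameters()), lr=1e-4)
    test_depth = torch.rand(8, D_IMG)             # rendered test depth images
    true_pose = torch.rand(8, 12)                 # their known pose matrices, flattened

    for step in range(3):                         # a few illustrative iterations
        fake = generator(test_depth)              # estimated depth images
        adv_loss = -discriminator(fake).mean()                              # fool the discriminator
        match_loss = nn.functional.mse_loss(pose_encoder(fake), true_pose)  # pose matching loss
        loss = adv_loss + match_loss
        opt.zero_grad(); loss.backward(); opt.step()
        # (the discriminator's own update is omitted for brevity)
    print(float(loss))
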
  • Patent number: 11635299
    Abstract: A navigation system for providing driving instructions to a driver of a vehicle traveling on a route is provided. The driving instructions are generated by executing a multimodal fusion method that comprises extracting features from sensor measurements, annotating the features with directions for the vehicle to follow the route with respect to objects sensed by the sensors, and encoding the annotated features with a multimodal attention neural network to produce encodings. The encodings are transformed into a common latent space, and the transformed encodings are fused using an attention mechanism producing an encoded representation of the scene. The method further comprises decoding the encoded representation with a sentence generation neural network to generate a driving instruction and submitting the driving instruction to an output device.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: April 25, 2023
    Inventors: Chiori Hori, Anoop Cherian, Siheng Chen, Tim Marks, Jonathan Le Roux, Takaaki Hori, Bret Harsham, Anthony Vetro, Alan Sullivan
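
A compact PyTorch sketch of the fusion step described in the entry above follows; the modality names, feature sizes, and single-layer attention scorer are assumptions. Per-modality encodings are projected into a common latent space and fused with attention weights into a single scene encoding.

    import torch
    import torch.nn as nn

    dims = {"camera": 512, "lidar": 256, "route": 64}   # example modalities and feature sizes
    latent = 128
    proj = nn.ModuleDict({m: nn.Linear(d, latent) for m, d in dims.items()})
    attn_score = nn.Linear(latent, 1)

    feats = {m: torch.rand(1, d) for m, d in dims.items()}           # stand-in annotated features
    z = torch.stack([proj[m](f) for m, f in feats.items()], dim=1)   # common latent space: (1, M, latent)
    w = torch.softmax(attn_score(z), dim=1)                          # attention over modalities
    fused = (w * z).sum(dim=1)                                       # encoded scene representation
    print(fused.shape)   # this encoding would feed a sentence-generation decoder
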
  • Publication number: 20230112302
    Abstract: The present disclosure provides an image processing system and method for manipulating two-dimensional (2D) images of three-dimensional (3D) objects of a predetermined class (e.g., human faces). A 2D input image of a 3D object of the predetermined class is manipulated by manipulating physical properties of the 3D object, such as its 3D shape, its albedo, its pose, and the lighting illuminating it. The physical properties are extracted from the 2D input image using a neural network that is trained to reconstruct the 2D input image. The 2D input image is reconstructed by disentangling the physical properties from the pixels of the 2D input image using multiple subnetworks. The disentangled physical properties produced by the multiple subnetworks are combined into a 2D output image using a differentiable renderer.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 13, 2023
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Tim Marks, Safa Medin, Anoop Cherian, Ye Wang
  • Publication number: 20230063221
    Abstract: An imaging photoplethysmography (iPPG) system is provided. The iPPG system receives a sequence of images of different regions of the skin of a person, where each region includes pixels of different intensities indicative of variations in the coloration of the skin. The iPPG system transforms the sequence of images into a multidimensional time-series signal, with each dimension corresponding to a different region of the skin. The iPPG system then processes the multidimensional time-series signal with a time-series U-Net neural network whose pass-through layers include a recurrent neural network (RNN) to generate a PPG waveform; a vital sign of the person is estimated from the PPG waveform, and the iPPG system renders the estimated vital sign.
    Type: Application
    Filed: September 28, 2021
    Publication date: March 2, 2023
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Tim Marks, Hassan Mansour, Suhas Lohit, Armand Comas Massague, Xiaoming Liu
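
Below is a toy PyTorch sketch of the signal path described in the publication above, under assumed sizes: per-region pixel intensities are averaged into a multidimensional time series, and a small 1-D encoder/decoder with a GRU on its pass-through (skip) path stands in for the time-series U-Net.

    import torch
    import torch.nn as nn

    class TinyTimeSeriesUNet(nn.Module):
        def __init__(self, regions=4, ch=16):
            super().__init__()
            self.down = nn.Conv1d(regions, ch, 4, stride=2, padding=1)
            self.skip_rnn = nn.GRU(ch, ch, batch_first=True)   # recurrent pass-through layer
            self.up = nn.ConvTranspose1d(ch, 1, 4, stride=2, padding=1)

        def forward(self, x):                    # x: (B, regions, T)
            h = torch.relu(self.down(x))         # (B, ch, T/2)
            h, _ = self.skip_rnn(h.transpose(1, 2))
            return self.up(h.transpose(1, 2))    # (B, 1, T) estimated PPG waveform

    frames = torch.rand(1, 4, 300, 10, 10)       # 4 skin regions, 300 frames, 10x10 pixels each
    signal = frames.mean(dim=(-1, -2))           # (1, 4, 300) multidimensional time series
    ppg = TinyTimeSeriesUNet()(signal)
    print(ppg.shape)                             # a vital sign would be estimated from this waveform
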
  • Patent number: 11582485
    Abstract: Embodiments of the present disclosure disclose a scene-aware video encoder system. The scene-aware encoder system transforms a sequence of video frames of a video of a scene into a spatio-temporal scene graph. The spatio-temporal scene graph includes nodes representing one or multiple static and dynamic objects in the scene. Each node of the spatio-temporal scene graph describes the appearance, location, and/or motion of one of the objects (static or dynamic) at different time instances. The nodes of the spatio-temporal scene graph are embedded into a latent space using a spatio-temporal transformer encoding different combinations of different nodes of the spatio-temporal scene graph corresponding to different spatio-temporal volumes of the scene. Each node of the different nodes encoded in each of the combinations is weighted with an attention score determined as a function of the similarities of the spatio-temporal locations of the different nodes in the combination.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: February 14, 2023
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Anoop Cherian, Chiori Hori, Jonathan Le Roux, Tim Marks, Alan Sullivan
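
The sketch below (PyTorch, with assumed node counts and dimensions) illustrates only the attention-weighting idea from the abstract above: node embeddings of a spatio-temporal scene graph are weighted by attention scores derived from the similarity of the nodes' spatio-temporal locations.

    import torch
    import torch.nn as nn

    N, feat_dim, d = 6, 32, 64                 # nodes, appearance-feature size, latent size
    node_feat = torch.rand(N, feat_dim)        # appearance/motion features per node
    node_loc = torch.rand(N, 3)                # (x, y, t) location of each node

    embed = nn.Linear(feat_dim, d)
    z = embed(node_feat)                                    # node embeddings (N, d)
    sim = -torch.cdist(node_loc, node_loc)                  # closer in space-time => higher score
    attn = torch.softmax(sim, dim=-1)                       # attention scores per node pair
    context = attn @ z                                      # each node attends to nearby nodes
    print(context.shape)   # these embeddings would feed the scene-aware video encoder
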
  • Publication number: 20220309672
    Abstract: A system and method for generating verisimilar images from real depth images. A generative adversarial network (GAN) is trained by accessing test depth images containing the same instances as a real depth image. The test depth images are input into the generator to generate estimated depth images representing an implicit three-dimensional model of the object. Each estimated depth image is input into a discriminator to obtain a loss and into a pose encoder to obtain a matching loss, and these steps are repeated iteratively until the losses fall below a threshold, ending training. The instances in the real image are identified using the trained GAN pose encoder, producing a pose transformation matrix for each instance in the real image. Pixels in the depth images corresponding to the instances of the real image are identified and merged to form an instance segmentation map for the real depth image.
    Type: Application
    Filed: March 25, 2021
    Publication date: September 29, 2022
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Anoop Cherian, Goncalo José Dias Pais, Tim Marks, Alan Sullivan
  • Patent number: 11445267
    Abstract: A scene captioning system is provided. The scene captioning system includes an interface configured to acquire a stream of scene data signals including frames and sound data; a memory to store a computer-executable scene captioning model including an audio-visual encoder, a timing decoder, a timing detector, and a caption decoder, wherein the audio-visual encoder is shared by the timing decoder, the timing detector, and the caption decoder; and a processor in connection with the memory. The processor is configured to perform the steps of extracting scene features from the scene data signals by use of the audio-visual encoder, determining a timing of generating a caption by use of the timing detector, wherein the timing is arranged at an early stage of the stream of scene data signals, and generating the caption based on the scene features by using the caption decoder according to the timing.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: September 13, 2022
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Chiori Hori, Takaaki Hori, Anoop Cherian, Tim Marks, Jonathan Le Roux
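
As a rough illustration of the timing idea in the entry above, the PyTorch sketch below streams audio-visual features through a shared encoder and lets a timing detector decide when the caption decoder should emit a caption. The GRU encoder, linear heads, and the 0.5 threshold are assumptions, not the patented model.

    import torch
    import torch.nn as nn

    encoder = nn.GRU(input_size=64, hidden_size=64, batch_first=True)   # shared audio-visual encoder
    timing_detector = nn.Linear(64, 1)                                  # P(enough context seen)
    caption_decoder = nn.Linear(64, 1000)                               # toy vocabulary logits

    stream = torch.rand(1, 30, 64)     # 30 incoming frames of fused audio-visual features
    hidden = None
    for t in range(stream.shape[1]):
        out, hidden = encoder(stream[:, t:t + 1], hidden)
        p_ready = torch.sigmoid(timing_detector(out[:, -1])).item()
        if p_ready > 0.5 or t == stream.shape[1] - 1:    # fire early (or at stream end as fallback)
            token_ids = caption_decoder(out[:, -1]).topk(5).indices
            print(f"caption emitted at frame {t}: token ids {token_ids.tolist()}")
            break
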
  • Patent number: 11264009
    Abstract: A computer-implemented method for training a dialogue response generation system, and the dialogue response generation system, are provided. The method includes arranging a first multimodal encoder-decoder for dialogue response generation or video description, having a first input and a first output, wherein the first multimodal encoder-decoder has been pretrained on training audio-video datasets with training video description sentences; arranging a second multimodal encoder-decoder for dialogue response generation, having a second input and a second output; providing first audio-visual datasets with first corresponding video description sentences to the first input of the first multimodal encoder-decoder, wherein the first encoder-decoder generates first output values based on the first audio-visual datasets with the first corresponding description sentences; and providing the first audio-visual datasets, excluding the first corresponding video description sentences, to the second multimodal encoder-decoder.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: March 1, 2022
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Chiori Hori, Anoop Cherian, Tim Marks, Takaaki Hori
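
A heavily simplified PyTorch sketch of the training setup described above follows; the linear stand-ins for both encoder-decoders and the MSE matching objective are assumptions. A pretrained model that sees audio-visual features plus description sentences supplies target outputs for a second model that must respond from the audio-visual features alone.

    import torch
    import torch.nn as nn

    av_dim, txt_dim, d = 64, 32, 128
    teacher = nn.Linear(av_dim + txt_dim, d)     # first multimodal encoder-decoder (pretrained)
    student = nn.Linear(av_dim, d)               # second encoder-decoder, no description input
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    av = torch.rand(16, av_dim)                  # audio-visual features
    desc = torch.rand(16, txt_dim)               # video-description features
    with torch.no_grad():
        target = teacher(torch.cat([av, desc], dim=-1))    # first output values
    loss = nn.functional.mse_loss(student(av), target)     # student mimics them without descriptions
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
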
  • Patent number: 11259710
    Abstract: A remote photoplethysmography (RPPG) system includes an input interface to receive a sequence of measurements of intensities of different regions of the skin of a person indicative of vital signs of the person; a solver to solve an optimization problem to determine frequency coefficients of photoplethysmographic waveforms corresponding to the measured intensities at the different regions, wherein the solver determines the frequency coefficients to reduce a distance between intensities of the skin reconstructed from the frequency coefficients and the corresponding measured intensities of the skin while enforcing joint sparsity on the frequency coefficients; and an estimator to estimate the vital signs of the person from the determined frequency coefficients of the photoplethysmographic waveforms.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: March 1, 2022
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Hassan Mansour, Tim Marks, Ewa Nowara, Yudai Nakamura, Ashok Veeraraghavan
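
The NumPy sketch below illustrates the joint-sparsity idea in the patent above under assumed parameters (frame rate, sparsity level, synthetic data); it is not the patented solver. The same few frequency bins are kept for every skin region, and a waveform and heart-rate estimate are recovered from those shared coefficients.

    import numpy as np

    fps, T, R, K = 30.0, 256, 5, 3               # frame rate, frames, skin regions, sparsity level
    t = np.arange(T) / fps
    pulse = np.sin(2 * np.pi * 1.2 * t)          # synthetic ~72 beats-per-minute waveform
    Y = pulse[:, None] + 0.5 * np.random.randn(T, R)   # noisy intensity trace per region

    C = np.fft.rfft(Y, axis=0)                   # frequency coefficients per region
    energy = np.sum(np.abs(C) ** 2, axis=1)      # joint (row-wise) energy across regions
    keep = np.argsort(energy)[-K:]               # the same K bins are kept for every region
    C_sparse = np.zeros_like(C)
    C_sparse[keep] = C[keep]                     # joint sparsity: shared support across regions
    ppg = np.fft.irfft(C_sparse, n=T, axis=0)    # reconstructed photoplethysmographic waveforms

    peak = int(np.argmax(energy[1:])) + 1        # dominant non-DC bin (approximate due to leakage)
    print(f"estimated heart rate ~ {peak * fps / T * 60:.0f} bpm (true rate 72 bpm)")
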
  • Patent number: 11127164
    Abstract: A controller for executing a task based on probabilistic image-based landmark localization uses a neural network, which is trained to process images of objects of a type having a structured set of landmarks to produce a parametric probability distribution defined by values of parameters for the location of each landmark in each processed image. The controller submits a set of input images to the neural network to produce the values of the parameters that define the parametric probability distribution over the location of each landmark in the structured set of landmarks of each input image. Further, the controller determines, for each input image, a global landmark uncertainty for the image based on the parametric probability distributions of the landmarks in the input image, and executes the task based on the parametric probability distributions of the landmarks in each input image and the global landmark uncertainty of each input image.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: September 21, 2021
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Tim Marks, Abhinav Kumar, Wenxuan Mou, Chen Feng, Xiaoming Liu
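
Below is a minimal PyTorch sketch of the probabilistic output described above, assuming an independent 2-D Gaussian per landmark (one possible parametric distribution) and toy layer sizes: the network predicts a mean and log-variance per landmark, and a per-image global uncertainty is aggregated from the predicted variances.

    import torch
    import torch.nn as nn

    class LandmarkNet(nn.Module):
        def __init__(self, n_landmarks=68):
            super().__init__()
            self.n = n_landmarks
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
            self.mean_head = nn.Linear(256, n_landmarks * 2)     # (x, y) mean per landmark
            self.logvar_head = nn.Linear(256, n_landmarks * 2)   # spread of each distribution

        def forward(self, img):
            h = self.backbone(img)
            mu = self.mean_head(h).view(-1, self.n, 2)
            logvar = self.logvar_head(h).view(-1, self.n, 2)
            return mu, logvar

    net = LandmarkNet()
    img = torch.rand(1, 64 * 64)                          # grayscale face crop, flattened
    mu, logvar = net(img)
    global_uncertainty = logvar.exp().mean(dim=(1, 2))    # one scalar per input image
    print(mu.shape, float(global_uncertainty))
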
  • Publication number: 20210247201
    Abstract: A navigation system is provided that is configured to provide driving instructions to a driver of a moving vehicle based on a real-time description of objects in a scene pertinent to driving the vehicle.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Chiori Hori, Anoop Cherian, Siheng Chen, Tim Marks, Jonathan Le Roux, Takaaki Hori, Bret Harsham, Anthony Vetro, Alan Sullivan
  • Publication number: 20210224983
    Abstract: A remote photoplethysmography (RPPG) system for estimating vital signs of a person is provided. The RPPG system is configured to receive a set of imaging photoplethysmography (iPPG) signals measured from different regions of the skin of a person. The RPPG system is further configured to determine frequency coefficients at the frequency bins of the quantized frequency spectrum of the measured iPPG signals by minimizing a distance between the measured iPPG signals and corresponding iPPG signals reconstructed from the determined frequency coefficients, while enforcing joint sparsity of the determined frequency coefficients subject to a sparsity-level constraint, such that the determined frequency coefficients of different iPPG signals have non-zero values at the same frequency bins; and to output one or a combination of the determined frequency coefficients, the iPPG signals reconstructed from the determined frequency coefficients, and a vital sign signal corresponding to the reconstructed iPPG signals.
    Type: Application
    Filed: March 12, 2021
    Publication date: July 22, 2021
    Applicants: Mitsubishi Electric Research Laboratories, Inc., Mitsubishi Electric Corporation
    Inventors: Tim Marks, Hassan Mansour, Ewa Nowara, Yudai Nakamura, Ashok Veeraraghavan
  • Publication number: 20210104068
    Abstract: A controller for executing a task based on probabilistic image-based landmark localization uses a neural network, which is trained to process images of objects of a type having a structured set of landmarks to produce a parametric probability distribution defined by values of parameters for the location of each landmark in each processed image. The controller submits a set of input images to the neural network to produce the values of the parameters that define the parametric probability distribution over the location of each landmark in the structured set of landmarks of each input image. Further, the controller determines, for each input image, a global landmark uncertainty for the image based on the parametric probability distributions of the landmarks in the input image, and executes the task based on the parametric probability distributions of the landmarks in each input image and the global landmark uncertainty of each input image.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Tim Marks, Abhinav Kumar, Wenxuan Mou, Chen Feng, Xiaoming Liu
  • Publication number: 20210082398
    Abstract: A computer-implemented method for training a dialogue response generation system, and the dialogue response generation system, are provided. The method includes arranging a first multimodal encoder-decoder for dialogue response generation or video description, having a first input and a first output, wherein the first multimodal encoder-decoder has been pretrained on training audio-video datasets with training video description sentences; arranging a second multimodal encoder-decoder for dialogue response generation, having a second input and a second output; providing first audio-visual datasets with first corresponding video description sentences to the first input of the first multimodal encoder-decoder, wherein the first encoder-decoder generates first output values based on the first audio-visual datasets with the first corresponding description sentences; and providing the first audio-visual datasets, excluding the first corresponding video description sentences, to the second multimodal encoder-decoder.
    Type: Application
    Filed: September 13, 2019
    Publication date: March 18, 2021
    Inventors: Chiori Hori, Anoop Cherian, Tim Marks, Takaaki Hori
  • Patent number: 10515259
    Abstract: A method and system determine a three-dimensional (3D) pose of an object and 3D locations of landmark points of the object by first obtaining a 3D point cloud of the object. 3D surface patches are extracted from the 3D point cloud, and a parametric model is fitted to each 3D surface patch to determine a set of descriptors. A set of correspondences between the set of descriptors and a set of descriptors of patches extracted from 3D point clouds of objects from the same object class with known 3D poses and known 3D locations of landmark points is determined. Then, the 3D pose of the object and 3D locations of the landmark points of the object are estimated from the set of correspondences.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: December 24, 2019
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Michael J Jones, Tim Marks, Chavdar Papazov
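
The NumPy sketch below illustrates the descriptor-and-correspondence idea from the patent above with made-up data: each 3D surface patch is summarized by least-squares coefficients of a quadric height function (one possible parametric patch model, assumed here), and descriptors are matched to a reference set by nearest neighbor. Pose estimation from the correspondences is only noted in a comment.

    import numpy as np

    def patch_descriptor(points: np.ndarray) -> np.ndarray:
        """points: (N, 3) patch from a point cloud -> 6 fitted quadric coefficients."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.stack([x * x, y * y, x * y, x, y, np.ones_like(x)], axis=1)
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)   # fit z = ax^2 + by^2 + cxy + dx + ey + f
        return coeffs

    rng = np.random.default_rng(0)
    query_patches = [rng.normal(size=(50, 3)) for _ in range(4)]   # patches extracted from the object
    reference_desc = rng.normal(size=(100, 6))                     # descriptors with known poses/landmarks

    for i, patch in enumerate(query_patches):
        d = patch_descriptor(patch)
        match = int(np.argmin(np.linalg.norm(reference_desc - d, axis=1)))
        print(f"patch {i} corresponds to reference patch {match}")
    # The resulting correspondences would then be used to estimate the object's
    # 3D pose and the 3D locations of its landmark points.
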
  • Publication number: 20190350471
    Abstract: A remote photoplethysmography (RPPG) system includes an input interface to receive a sequence of measurements of intensities of different regions of the skin of a person indicative of vital signs of the person; a solver to solve an optimization problem to determine frequency coefficients of photoplethysmographic waveforms corresponding to the measured intensities at the different regions, wherein the solver determines the frequency coefficients to reduce a distance between intensities of the skin reconstructed from the frequency coefficients and the corresponding measured intensities of the skin while enforcing joint sparsity on the frequency coefficients; and an estimator to estimate the vital signs of the person from the determined frequency coefficients of the photoplethysmographic waveforms.
    Type: Application
    Filed: October 23, 2018
    Publication date: November 21, 2019
    Inventors: Tim Marks, Hassan Mansour, Ewa Nowara, Yudai Nakamura, Ashok Veeraraghavan
  • Patent number: 10417498
    Abstract: A system for generating a word sequence includes one or more processors in connection with a memory and one or more storage devices storing instructions causing operations that include receiving first and second input vectors, extracting first and second feature vectors, estimating a first set of weights and a second set of weights, calculating a first content vector from the first set of weights and the first feature vectors, and calculating a second content vector, transforming the first content vector into a first modal content vector having a predetermined dimension and transforming the second content vector into a second modal content vector having the predetermined dimension, estimating a set of modal attention weights, generating a weighted content vector having the predetermined dimension from the set of modal attention weights and the first and second modal content vectors, and generating a predicted word using the sequence generator.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: September 17, 2019
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Chiori Hori, Takaaki Hori, John Hershey, Tim Marks
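
A compact PyTorch sketch of the modal-attention step described in the entry above follows; the feature sizes, the simplistic temporal attention, and the single-layer modal scorer are assumptions. Two modalities each yield a content vector, both are projected to a common dimension, and modal attention weights combine them into the weighted content vector used to predict the next word.

    import torch
    import torch.nn as nn

    d1, d2, d, vocab = 48, 32, 64, 1000
    feat1, feat2 = torch.rand(1, 10, d1), torch.rand(1, 8, d2)    # two input feature sequences

    proj1, proj2 = nn.Linear(d1, d), nn.Linear(d2, d)
    modal_score = nn.Linear(d, 1)
    word_head = nn.Linear(d, vocab)

    def temporal_content(feats):            # simple attention over time steps
        w = torch.softmax(feats.mean(dim=-1, keepdim=True), dim=1)
        return (w * feats).sum(dim=1)       # content vector for this modality

    c1, c2 = proj1(temporal_content(feat1)), proj2(temporal_content(feat2))   # common dimension
    modal = torch.stack([c1, c2], dim=1)                                      # (1, 2, d)
    alpha = torch.softmax(modal_score(modal), dim=1)                          # modal attention weights
    weighted = (alpha * modal).sum(dim=1)                                     # weighted content vector
    print("predicted word id:", int(word_head(weighted).argmax(dim=-1)))
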