Patents by Inventor Dennis Park

Dennis Park has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12361688
    Abstract: A method for multi-view dataset formation from fleet data is described. The method includes detecting at least a pair of vehicles within a vicinity of one another, and having overlapping viewing frustums of a scene. The method also includes triggering a capture of sensor data from the pair of vehicles. The method further includes synchronizing the sensor data captured by the pair of vehicles. The method also includes registering the sensor data captured by the pair of vehicles within a shared coordinate system to form a multi-view dataset of the scene.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: July 15, 2025
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Simon A. I. Stent, Dennis Park
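The pipeline in the abstract of patent 12361688 can be pictured as four steps: pair vehicles that are near one another, check that their viewing frustums overlap, synchronize the captured frames, and register everything in a shared frame. The sketch below is a minimal illustration of those steps under assumed data structures (a VehicleFrame with a 4x4 pose and Nx3 point cloud); the capture-triggering step is omitted, and none of these names come from the patent.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VehicleFrame:
    timestamp: float      # capture time in seconds
    pose: np.ndarray      # 4x4 vehicle-to-world transform
    points: np.ndarray    # Nx3 sensor points in the vehicle frame

def in_vicinity(pose_a, pose_b, max_dist=50.0):
    # Step 1: the two vehicles must be within max_dist metres of one another.
    return np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3]) <= max_dist

def frustums_overlap(pose_a, pose_b, fov_deg=90.0):
    # Step 1 (cont.): crude test that both cameras face a common scene point,
    # here taken to be the midpoint between the two vehicles.
    mid = 0.5 * (pose_a[:3, 3] + pose_b[:3, 3])
    for pose in (pose_a, pose_b):
        forward = pose[:3, 0]                  # assume +x is the viewing axis
        to_mid = mid - pose[:3, 3]
        dist = np.linalg.norm(to_mid)
        if dist > 1e-6 and forward @ to_mid / dist < np.cos(np.radians(fov_deg / 2)):
            return False
    return True

def synchronize(frames_a, frames_b, tol=0.05):
    # Step 3: pair each capture from vehicle A with the nearest-in-time capture from B.
    pairs = []
    for fa in frames_a:
        fb = min(frames_b, key=lambda f: abs(f.timestamp - fa.timestamp))
        if abs(fb.timestamp - fa.timestamp) <= tol:
            pairs.append((fa, fb))
    return pairs

def register(pairs):
    # Step 4: express both point clouds in the shared (world) coordinate system.
    def to_world(frame):
        homog = np.hstack([frame.points, np.ones((len(frame.points), 1))])
        return (frame.pose @ homog.T).T[:, :3]
    return [np.vstack([to_world(fa), to_world(fb)]) for fa, fb in pairs]
```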
  • Patent number: 12086695
    Abstract: A system for training a multi-task model includes a processor and a memory in communication with the processor. The memory has a multi-task training module having instructions that, when executed by the processor, cause the processor to provide simulation training data having a plurality of samples to a multi-task model capable of performing at least a first task and a second task using at least one shared parameter. The training module further causes the processor to determine a first value (gradient or loss) for the first task and a second value (gradient or loss) for the second task using the simulation training data and the at least one shared parameter, determine a task-induced variance between the first value and the second value, and iteratively adjust the at least one shared parameter to reduce the task-induced variance.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: September 10, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Dennis Park, Adrien David Gaidon
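As a rough illustration of the balancing idea in patent 12086695 above, the PyTorch snippet below computes a loss per task, treats the variance between the two losses as the task-induced variance, and adds it to the training objective so the shared weights are pushed to serve both tasks evenly. The tiny model, the synthetic data, and the weight on the variance term are assumptions made for the sketch, not details from the patent.

```python
import torch

shared = torch.nn.Linear(8, 8)                 # the shared parameters
head_a = torch.nn.Linear(8, 1)                 # task-1 head
head_b = torch.nn.Linear(8, 1)                 # task-2 head
opt = torch.optim.SGD(list(shared.parameters()) +
                      list(head_a.parameters()) +
                      list(head_b.parameters()), lr=1e-2)

x = torch.randn(32, 8)                         # simulated training samples
y_a, y_b = torch.randn(32, 1), torch.randn(32, 1)

for step in range(100):
    feats = shared(x)
    loss_a = torch.nn.functional.mse_loss(head_a(feats), y_a)   # first value
    loss_b = torch.nn.functional.mse_loss(head_b(feats), y_b)   # second value
    # Task-induced variance between the two per-task values.
    variance = torch.stack([loss_a, loss_b]).var()
    total = loss_a + loss_b + 1.0 * variance   # push the shared weights to balance the tasks
    opt.zero_grad()
    total.backward()
    opt.step()
```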
  • Patent number: 12067785
    Abstract: Systems, methods, and other embodiments described herein relate to evaluating a perception network in relation to the accuracy of depth estimates and object detections. In one embodiment, a method includes segmenting range data associated with an image according to bounding boxes of objects identified in the image to produce masked data. The method includes comparing the masked data with corresponding depth estimates in a depth map according to an evaluation mask that correlates the masked data with the depth map. The method includes providing a metric that quantifies the comparing to assess a network that generated the depth map and the bounding boxes.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: August 20, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Rares A. Ambrus, Dennis Park, Vitor Guizilini, Jie Li, Adrien David Gaidon
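A compact sketch of the kind of metric patent 12067785 describes, assuming the range data has already been projected into an image-aligned sparse depth image; the absolute-relative error used here is a common evaluation choice, not necessarily the metric in the patent.

```python
import numpy as np

def evaluate(depth_pred, range_image, boxes):
    """depth_pred, range_image: HxW arrays (range_image is 0 where no return);
    boxes: list of (x1, y1, x2, y2) detections in pixel coordinates."""
    box_mask = np.zeros_like(range_image, dtype=bool)
    for x1, y1, x2, y2 in boxes:               # keep range data only inside detections
        box_mask[y1:y2, x1:x2] = True
    eval_mask = box_mask & (range_image > 0)   # evaluation mask: masked data with valid depth
    if not eval_mask.any():
        return float("nan")
    abs_rel = np.abs(depth_pred[eval_mask] - range_image[eval_mask]) / range_image[eval_mask]
    return abs_rel.mean()                      # single metric quantifying the comparison
```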
  • Patent number: 12020489
    Abstract: Systems, methods, and other embodiments described herein relate to performing depth estimation and object detection using a common network architecture. In one embodiment, a method includes generating, using a backbone of a combined network, a feature map at multiple scales from an input image. The method includes decoding, using a top-down pathway of the combined network, the feature map to provide features at the multiple scales. The method includes generating, using a head of the combined network, a depth map from the features for a scene depicted in the input image, and bounding boxes identifying objects in the input image.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: June 25, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Dennis Park, Rares A. Ambrus, Vitor Guizilini, Jie Li, Adrien David Gaidon
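The structure in patent 12020489 above (shared backbone, top-down pathway, heads for depth and boxes) can be caricatured in a few PyTorch layers; the layer sizes, the two-scale pyramid, and the box parameterization below are placeholders rather than the patented design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedNet(nn.Module):
    def __init__(self, num_anchors=1):
        super().__init__()
        # Backbone: features at two scales (1/2 and 1/4 resolution).
        self.c1 = nn.Conv2d(3, 32, 3, stride=2, padding=1)
        self.c2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        # Top-down pathway: lateral 1x1 convolutions plus upsampling.
        self.lat2 = nn.Conv2d(64, 32, 1)
        self.lat1 = nn.Conv2d(32, 32, 1)
        # Heads: dense depth and per-location box regression (x, y, w, h) + score.
        self.depth_head = nn.Conv2d(32, 1, 3, padding=1)
        self.box_head = nn.Conv2d(32, num_anchors * 5, 3, padding=1)

    def forward(self, image):
        f1 = F.relu(self.c1(image))            # 1/2 scale
        f2 = F.relu(self.c2(f1))               # 1/4 scale
        p2 = self.lat2(f2)
        p1 = self.lat1(f1) + F.interpolate(p2, scale_factor=2, mode="nearest")
        depth = F.softplus(self.depth_head(p1))  # positive depth map
        boxes = self.box_head(p1)                # box parameters per location
        return depth, boxes

# Usage on a dummy image:
depth, boxes = CombinedNet()(torch.randn(1, 3, 128, 128))
```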
  • Patent number: 12008818
    Abstract: Systems, methods, and other embodiments described herein relate to a manner of training a depth prediction system using bounding boxes. In one embodiment, a method includes segmenting an image to mask areas beyond bounding boxes and identify unmasked areas within the bounding boxes. The method also includes training a depth model using depth losses from comparing weighted points associated with pixels of the image within the unmasked areas to ground-truth depth. The method also includes providing the depth model for object detection.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: June 11, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Rares A. Ambrus, Dennis Park, Vitor Guizilini, Jie Li, Adrien David Gaidon
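A minimal sketch of the box-masked depth loss idea in patent 12008818 above; the uniform per-pixel weights and the L1 form of the loss are assumptions made for illustration, not details taken from the patent.

```python
import torch

def masked_depth_loss(depth_pred, depth_gt, boxes, weights=None):
    """depth_pred, depth_gt: 1xHxW tensors; boxes: list of (x1, y1, x2, y2)."""
    mask = torch.zeros_like(depth_gt, dtype=torch.bool)
    for x1, y1, x2, y2 in boxes:               # keep only pixels inside the 2D boxes
        mask[..., y1:y2, x1:x2] = True
    mask &= depth_gt > 0                       # and require valid ground-truth depth
    if weights is None:
        weights = torch.ones_like(depth_gt)    # e.g. could up-weight points near box centres
    err = weights[mask] * (depth_pred[mask] - depth_gt[mask]).abs()
    return err.mean()
```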
  • Publication number: 20240153101
    Abstract: A method for scene synthesis from human motion is described. The method includes computing three-dimensional (3D) human pose trajectories of human motion in a scene. The method also includes generating contact labels of unseen objects in the scene based on the computing of the 3D human pose trajectories. The method further includes estimating contact points between human body vertices of the 3D human pose trajectories and the contact labels of the unseen objects that are in contact with the human body vertices. The method also includes predicting object placements of the unseen objects in the scene based on the estimated contact points.
    Type: Application
    Filed: October 25, 2023
    Publication date: May 9, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Sifan YE, Yixing WANG, Jiaman LI, Dennis PARK, C. Karen LIU, Huazhe XU, Jiajun WU
  • Patent number: 11972277
    Abstract: A method for ascertaining an emotional goal includes receiving, via an emotionally responsive computerized system having a user interface communicatively coupled to a networked user device including a processor device, user input concerning the purpose of a user's interaction with a software interface. It can include registering input indicating a target person to whom the purpose of the user's interaction pertains and prompting the user to provide a root motivator comprising a root emotion or a root reason for the interaction. Some variations include generating a user-perceptible output and a set of user interface elements that depend on the root motivator, obtaining the user's specific emotional goal with respect to the target person from the user's responses to a sequence of user interface elements, and providing the user, via the software interface, a recommendation regarding fulfillment of the specific emotional goal.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: April 30, 2024
    Assignee: LOVINGLY, LLC
    Inventors: Joseph Vega, Kenny Garland, Daniela Virginia Marquez, Marque Nolan Staneluis, Dennis Park-Rodriguez, Ryan Wesley A. Lowe, Lakshmi Pillai, James Craig Rosenthal, Matthew Zangen, Kaitlin Heather Schupp, Danielle Sarah Gorton
  • Publication number: 20240104905
    Abstract: A method for multi-view dataset formation from fleet data is described. The method includes detecting at least a pair of vehicles within a vicinity of one another, and having overlapping viewing frustums of a scene. The method also includes triggering a capture of sensor data from the pair of vehicles. The method further includes synchronizing the sensor data captured by the pair of vehicles. The method also includes registering the sensor data captured by the pair of vehicles within a shared coordinate system to form a multi-view dataset of the scene.
    Type: Application
    Filed: September 28, 2022
    Publication date: March 28, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Simon A.I. STENT, Dennis PARK
  • Patent number: 11918535
    Abstract: Systems and methods for a powered, robotic exoskeleton, or exosuit, for a user's limbs and body are provided. The exosuit may be equipped with airbag devices mounted at various locations on the suit. The exosuit may include on-board computing equipment that can sense, compute control commands in real-time, and actuate limbs and airbags to restore stability (fall prevention) and minimize injuries due to falls, should they happen (fall protection).
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: March 5, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jonathan Decastro, Soon Ho Kong, Nikos Arechiga Gonzalez, Frank Permenter, Dennis Park
  • Patent number: 11887248
    Abstract: Systems and methods described herein relate to reconstructing a scene in three dimensions from a two-dimensional image. One embodiment processes an image using a detection transformer to detect an object in the scene and to generate a NOCS map of the object and a background depth map; uses MLPs to relate the object to a differentiable database of object priors (PriorDB); recovers, from the NOCS map, a partial 3D object shape; estimates an initial object pose; fits a PriorDB object prior to align in geometry and appearance with the partial 3D shape to produce a complete shape and refines the initial pose estimate; generates an editable and re-renderable 3D scene reconstruction based, at least in part, on the complete shape, the refined pose estimate, and the depth map; and controls the operation of a robot based, at least in part, on the editable and re-renderable 3D scene reconstruction.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: January 30, 2024
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Dennis Park, Joshua Tenenbaum, Jiajun Wu, Fredo Durand, Vincent Sitzmann
  • Patent number: 11798288
    Abstract: Described are systems and methods for self-learned label refinement of a training set. In one example, a system includes a processor and a memory having a training set generation module that causes the processor to train a model using an image as an input to the model and 2D bounding boxes derived from 3D bounding boxes as ground truths, select a first subset from predicted 2D bounding boxes previously outputted by the model, retrain the model using the image as the input and the first subset as ground truths, select a second subset of predicted 2D bounding boxes previously outputted by the model, and generate the training set by selecting the 3D bounding boxes from a master set of 3D bounding boxes that have corresponding 2D bounding boxes that form the second subset.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: October 24, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Dennis Park, Rares A. Ambrus, Vitor Guizilini, Jie Li, Adrien David Gaidon
  • Publication number: 20230298199
    Abstract: Systems and methods for detecting occluded objects are disclosed. In one embodiment, a method of determining a shape and pose of an object occluded by an occlusion object includes receiving, by a generative model, a latent vector, and iteratively performing an optimization routine until a loss is less than a loss threshold. The optimization routine includes generating, by the generative model, a predicted object having a shape and a pose from the latent vector, generating a predicted shadow cast by the predicted object, calculating the loss by comparing the predicted shadow with an observed shadow, and modifying the latent vector when the loss is greater than the loss threshold. The method further includes selecting the predicted object as the object when the loss is less than the loss threshold.
    Type: Application
    Filed: February 10, 2023
    Publication date: September 21, 2023
    Applicants: Toyota Research Institute, Inc., Columbia University, Toyota Jidosha Kabushiki Kaisha
    Inventors: Simon A.I. Stent, Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Carl M. Vondrick
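The optimization routine in publication 20230298199 above maps naturally onto a gradient-based loop over the latent vector. In the toy sketch below, the generator and the shadow renderer are stand-ins invented for illustration; only the structure of the loop (predict an object, render its shadow, compare with the observed shadow, update the latent vector, stop once the loss falls below a threshold) follows the abstract.

```python
import torch

generator = torch.nn.Linear(16, 64)             # stand-in: latent -> flattened 8x8 silhouette
def render_shadow(obj):                         # stand-in shadow renderer
    return torch.sigmoid(obj).view(1, 1, 8, 8)

observed_shadow = torch.rand(1, 1, 8, 8)        # the shadow actually seen in the image
latent = torch.randn(16, requires_grad=True)    # latent vector being optimized
opt = torch.optim.Adam([latent], lr=0.05)
loss_threshold = 0.01

for _ in range(500):
    predicted_object = generator(latent)        # predicted shape/pose from the latent
    predicted_shadow = render_shadow(predicted_object)
    loss = torch.nn.functional.mse_loss(predicted_shadow, observed_shadow)
    if loss.item() < loss_threshold:            # accept the predicted object as the answer
        break
    opt.zero_grad()                             # otherwise modify the latent vector and retry
    loss.backward()
    opt.step()
```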
  • Patent number: D974066
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: January 3, 2023
    Assignee: ARPER S.p.A.
    Inventors: Jeannette Altherr, Delphine Désile, Dennis Park
  • Patent number: D975723
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: January 17, 2023
    Assignee: Apple Inc.
    Inventors: William Martin Bachman, Colin Bennett, Graham Clarke, Jennifer Lynn Carley Folse, Monika E. Gromek, Alexander W. Johnston, Dennis Park, Brian K. Shiraishi, Jeff Tan-Ang, Christopher Wilson
  • Patent number: D991721
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: July 11, 2023
    Assignee: ARPER S.p.A.
    Inventors: Jeannette Altherr, Delphine Désile, Dennis Park
  • Patent number: D991722
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: July 11, 2023
    Assignee: ARPER S.p.A.
    Inventors: Jeannette Altherr, Delphine Désile, Dennis Park
  • Patent number: D1037306
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: July 30, 2024
    Assignee: Apple Inc.
    Inventors: Anton Davydov, Dennis Park
  • Patent number: D1085104
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: July 22, 2025
    Assignee: Apple Inc.
    Inventors: Dennis Park, Florian Ponson, Policarpo Wood
  • Patent number: D1087131
    Type: Grant
    Filed: June 2, 2023
    Date of Patent: August 5, 2025
    Assignee: Apple Inc.
    Inventors: Jae Woo Chang, Jee Won Choi, Patrick Lee Coffman, Neil Patrick Cormican, Alexander W. Johnston, Dennis Park, Pavan Rajam
  • Patent number: D1087152
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: August 5, 2025
    Assignee: Apple Inc.
    Inventor: Dennis Park