Patents Assigned to Toyota Technological Institute
  • Patent number: 12175708
    Abstract: Systems and methods described herein relate to self-supervised learning of camera intrinsic parameters from a sequence of images. One embodiment produces a depth map from a current image frame captured by a camera; generates a point cloud from the depth map using a differentiable unprojection operation; produces a camera pose estimate from the current image frame and a context image frame; produces a warped point cloud based on the camera pose estimate; generates a warped image frame from the warped point cloud using a differentiable projection operation; compares the warped image frame with the context image frame to produce a self-supervised photometric loss; updates a set of estimated camera intrinsic parameters on a per-image-sequence basis using one or more gradients from the self-supervised photometric loss; and generates, based on a converged set of learned camera intrinsic parameters, a rectified image frame from an image frame captured by the camera.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: December 24, 2024
    Assignees: Toyota Research Institute, Inc., Toyota Technological Institute at Chicago
    Inventors: Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Igor Vasiljevic, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter
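    A minimal sketch of the differentiable warp this abstract walks through, assuming a pinhole camera model, PyTorch, and a simple L1 photometric loss; the tensor shapes, warping direction, and loss choice are illustrative assumptions, not the patented implementation:

      import torch
      import torch.nn.functional as F

      def photometric_warp_loss(depth, pose, K, current, context):
          """depth: (H,W); pose: (4,4) relative camera pose; K: (3,3) learnable
          intrinsics; current/context: (3,H,W) image frames."""
          H, W = depth.shape
          ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                                  torch.arange(W, dtype=torch.float32),
                                  indexing="ij")
          pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(3, -1)
          # Differentiable unprojection: pixels + depth -> 3D point cloud.
          pts = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)
          pts = torch.cat([pts, torch.ones(1, H * W)], dim=0)
          # Warp the point cloud by the estimated camera pose.
          warped_pts = (pose @ pts)[:3]
          # Differentiable projection back onto the image plane.
          proj = K @ warped_pts
          uv = proj[:2] / proj[2].clamp(min=1e-6)
          grid = torch.stack([uv[0] / (W - 1) * 2 - 1,       # grid_sample expects
                              uv[1] / (H - 1) * 2 - 1], -1)  # coords in [-1, 1]
          warped = F.grid_sample(context[None], grid.reshape(1, H, W, 2),
                                 align_corners=True)
          # Self-supervised photometric loss; its gradient w.r.t. K is what
          # drives the per-sequence intrinsics update described above.
          return (warped[0] - current).abs().mean()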
  • Publication number: 20240355042
    Abstract: A method for fusing neural radiance fields (NeRFs) is described. The method includes re-rendering a first NeRF and a second NeRF at different viewpoints to form synthesized images from the first NeRF and the second NeRF. The method also includes inferring a transformation between a re-rendered first NeRF and a re-rendered second NeRF based on the synthesized images from the first NeRF and the second NeRF. The method further includes blending the re-rendered first NeRF and the re-rendered second NeRF based on the inferred transformation to fuse the first NeRF and the second NeRF.
    Type: Application
    Filed: January 31, 2024
    Publication date: October 24, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA, TOYOTA TECHNOLOGICAL INSTITUTE AT CHICAGO
    Inventors: Jiading FANG, Shengjie LIN, Igor VASILJEVIC, Vitor Campagnolo GUIZILINI, Rares Andrei AMBRUS, Adrien David GAIDON, Gregory SHAKHNAROVICH, Matthew WALTER
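    A hedged sketch of the blending step described above, with each trained NeRF reduced to a black-box render function; the relative transform, the fixed 0.5 weight, and the pose convention are illustrative assumptions rather than the method of this application:

      import numpy as np

      def fuse_nerf_renders(render_a, render_b, T_ab, cam_pose, weight=0.5):
          """render_*: (4,4) camera pose -> (H,W,3) synthesized image.
          T_ab: inferred transform taking a pose in A's frame to B's frame."""
          img_a = render_a(cam_pose)         # re-render the first NeRF
          img_b = render_b(T_ab @ cam_pose)  # re-render the second NeRF, aligned
          return weight * img_a + (1 - weight) * img_b  # blend the renderings

      # Usage with dummy constant-color "fields":
      fused = fuse_nerf_renders(lambda p: np.zeros((4, 4, 3)),
                                lambda p: np.ones((4, 4, 3)),
                                np.eye(4), np.eye(4))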
  • Publication number: 20240331268
    Abstract: Systems, methods, and other embodiments described herein relate to generating an image by interpolating features estimated from a learning model. In one embodiment, a method includes sampling three-dimensional (3D) points of a light ray that crosses a frustum space associated with a single-view camera, the 3D points reflecting depth estimates derived from data that the single-view camera generates for a scene. The method also includes deriving feature values for the 3D points using tri-linear interpolation across feature planes of the frustum space, the feature planes being estimated by a learning model. The method also includes inferring an image in two dimensions (2D) by translating the feature values and compositing the data with volumetric rendering for the scene. The method also includes executing a control task by a controller using the image.
    Type: Application
    Filed: March 29, 2023
    Publication date: October 3, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, Toyota Technological Institute at Chicago
    Inventors: Jiading Fang, Vitor Guizilini, Igor Vasiljevic, Rares A. Ambrus, Gregory Shakhnarovich, Matthew R. Walter, Adrien David Gaidon
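    A minimal sketch of the sampling step in this abstract, assuming the feature planes form a (D,H,W,C) volume on a unit-spaced grid; tri-linear interpolation is then bilinear within a plane and linear across neighboring depth planes:

      import numpy as np

      def trilerp(planes, x, y, d):
          """planes: (D,H,W,C) feature planes of the frustum space;
          (x, y, d): continuous coordinates of a sampled 3D ray point."""
          D, H, W, _ = planes.shape
          x0, y0, d0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(d))
          x1, y1, d1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1), min(d0 + 1, D - 1)
          fx, fy, fd = x - x0, y - y0, d - d0

          def bilerp(k):  # bilinear interpolation inside feature plane k
              top = (1 - fx) * planes[k, y0, x0] + fx * planes[k, y0, x1]
              bot = (1 - fx) * planes[k, y1, x0] + fx * planes[k, y1, x1]
              return (1 - fy) * top + fy * bot

          # Linear interpolation across the two nearest depth planes.
          return (1 - fd) * bilerp(d0) + fd * bilerp(d1)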
  • Publication number: 20240320844
    Abstract: A method for scale-aware depth estimation using multi-camera projection loss is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes training a scale-aware depth estimation model and an ego-motion estimation model according to the multi-camera photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the scale-aware depth estimation model and the ego-motion estimation model. The method also includes planning a vehicle control action of the ego vehicle according to the 360° point cloud of the scene surrounding the ego vehicle.
    Type: Application
    Filed: June 5, 2024
    Publication date: September 26, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA TECHNOLOGICAL INSTITUTE AT CHICAGO
    Inventors: Vitor GUIZILINI, Rares Andrei AMBRUS, Igor VASILJEVIC, Gregory SHAKHNAROVICH
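    A structural sketch of how a multi-camera photometric objective might be aggregated across a rig; photometric(), the view-synthesis callables, and the neighbor pairing are placeholders, not the patented loss:

      def multi_camera_photometric_loss(frames, synth_temporal, synth_cross,
                                        neighbors, photometric):
          """frames: dict cam -> image; synth_temporal(cam) and
          synth_cross(cam, other) return warped images; neighbors pairs
          cameras with overlapping fields of view."""
          loss = 0.0
          for cam, image in frames.items():
              # Per-camera temporal term, as in single-camera self-supervision.
              loss += photometric(synth_temporal(cam), image)
          for cam, other in neighbors:
              # Cross-camera term: known rig extrinsics make depth scale-aware.
              loss += photometric(synth_cross(cam, other), frames[cam])
          return loss / (len(frames) + len(neighbors))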
  • Publication number: 20240249465
    Abstract: Systems and methods for enhanced computer vision capabilities, particularly depth synthesis, which may be applicable to autonomous vehicle operation, are described. A vehicle may be equipped with a geometric scene representation (GSR) architecture for synthesizing depth views at arbitrary viewpoints. The synthesized depth views enable advanced functions, including depth interpolation and depth extrapolation, that are useful for various computer vision applications for autonomous vehicles, such as predicting depth maps from unseen locations. For example, a vehicle includes a processor device synthesizing depth views at multiple viewpoints, where the multiple viewpoints are from image data of a surrounding environment of the vehicle.
    Type: Application
    Filed: January 19, 2023
    Publication date: July 25, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA, TOYOTA TECHNOLOGICAL INSTITUTE AT CHICAGO
    Inventors: VITOR GUIZILINI, Igor Vasiljevic, Adrien D. Gaidon, Greg Shakhnarovich, Matthew Walter, Jiading Fang, Rares A. Ambrus
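    One way to realize "synthesizing depth views at arbitrary viewpoints", as a hedged geometric sketch: splat a known 3D point cloud into a virtual camera and keep the nearest depth per pixel with a z-buffer. The pinhole model and nearest-point splatting are assumptions; the patent's GSR architecture is a learned model, not this shortcut.

      import numpy as np

      def render_depth_view(points, K, pose, H, W):
          """points: (N,3) world points; K: (3,3) intrinsics;
          pose: (4,4) world-to-camera transform of the virtual viewpoint."""
          pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
          cam = (pose @ pts_h.T)[:3]        # points in the camera frame
          z = cam[2]
          front = z > 1e-6                  # keep points in front of the camera
          uv = K @ cam[:, front]
          u = np.round(uv[0] / uv[2]).astype(int)
          v = np.round(uv[1] / uv[2]).astype(int)
          depth = np.full((H, W), np.inf)
          ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
          for ui, vi, zi in zip(u[ok], v[ok], z[front][ok]):
              depth[vi, ui] = min(depth[vi, ui], zi)  # z-buffer: nearest wins
          return depth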
  • Patent number: 12033341
    Abstract: A method for scale-aware depth estimation using multi-camera projection loss is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes training a scale-aware depth estimation model and an ego-motion estimation model according to the multi-camera photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the scale-aware depth estimation model and the ego-motion estimation model. The method also includes planning a vehicle control action of the ego vehicle according to the 360° point cloud of the scene surrounding the ego vehicle.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: July 9, 2024
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA TECHNOLOGICAL INSTITUTE AT CHICAGO
    Inventors: Vitor Guizilini, Rares Andrei Ambrus, Adrien David Gaidon, Igor Vasiljevic, Gregory Shakhnarovich
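    This grant shares its abstract with publication 20240320844 above, so instead of repeating the loss-aggregation sketch, here is a complementary fragment for the prediction step: once per-camera depth is available, the 360° point cloud around the ego vehicle can be assembled by unprojecting each depth map and mapping it into the ego frame with the known rig extrinsics. The pinhole unprojection and array shapes are illustrative assumptions.

      import numpy as np

      def assemble_surround_cloud(depths, intrinsics, extrinsics):
          """depths: list of (H,W) maps; intrinsics: list of (3,3) matrices;
          extrinsics: list of (4,4) camera-to-ego transforms."""
          clouds = []
          for depth, K, T in zip(depths, intrinsics, extrinsics):
              H, W = depth.shape
              v, u = np.mgrid[0:H, 0:W]
              pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
              pts = np.linalg.inv(K) @ pix * depth.ravel()  # unproject to camera
              pts = np.vstack([pts, np.ones(H * W)])
              clouds.append((T @ pts)[:3].T)                # map into ego frame
          return np.concatenate(clouds)                     # surrounding 360° cloud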
  • Publication number: 20240029286
    Abstract: A method of generating additional supervision data to improve learning of a geometrically-consistent latent scene representation with a geometric scene representation architecture is provided. The method includes receiving, with a computing device, a latent scene representation encoding a pointcloud from images of a scene captured by a plurality of cameras each with known intrinsics and poses, generating a virtual camera having a viewpoint different from viewpoints of the plurality of cameras, projecting information from the pointcloud onto the viewpoint of the virtual camera, and decoding the latent scene representation based on the virtual camera thereby generating an RGB image and depth map corresponding to the viewpoint of the virtual camera for implementation as additional supervision data.
    Type: Application
    Filed: February 16, 2023
    Publication date: January 25, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, Toyota Technological Institute at Chicago
    Inventors: Vitor Guizilini, Igor Vasiljevic, Adrien D. Gaidon, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter, Rares A. Ambrus
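    A hedged sketch of the virtual-camera step above: take a real camera pose, perturb it with a small random rotation (via the Rodrigues formula) and translation, and re-project the point cloud from the new viewpoint for extra supervision. The perturbation magnitudes and the axis-angle parameterization are assumptions.

      import numpy as np

      def virtual_camera_pose(pose, max_angle=0.1, max_shift=0.2, rng=np.random):
          """pose: (4,4) camera-to-world; returns a nearby virtual viewpoint."""
          axis = rng.standard_normal(3)
          axis /= np.linalg.norm(axis)                # random unit rotation axis
          angle = rng.uniform(-max_angle, max_angle)  # small angle, in radians
          S = np.array([[0, -axis[2], axis[1]],
                        [axis[2], 0, -axis[0]],
                        [-axis[1], axis[0], 0]])      # skew-symmetric cross matrix
          R = np.eye(3) + np.sin(angle) * S + (1 - np.cos(angle)) * (S @ S)
          T = np.eye(4)
          T[:3, :3] = R                                   # Rodrigues rotation
          T[:3, 3] = rng.uniform(-max_shift, max_shift, 3)  # small translation
          return T @ pose                                 # perturbed virtual pose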
  • Publication number: 20230080638
    Abstract: Systems and methods described herein relate to self-supervised learning of camera intrinsic parameters from a sequence of images. One embodiment produces a depth map from a current image frame captured by a camera; generates a point cloud from the depth map using a differentiable unprojection operation; produces a camera pose estimate from the current image frame and a context image frame; produces a warped point cloud based on the camera pose estimate; generates a warped image frame from the warped point cloud using a differentiable projection operation; compares the warped image frame with the context image frame to produce a self-supervised photometric loss; updates a set of estimated camera intrinsic parameters on a per-image-sequence basis using one or more gradients from the self-supervised photometric loss; and generates, based on a converged set of learned camera intrinsic parameters, a rectified image frame from an image frame captured by the camera.
    Type: Application
    Filed: March 11, 2022
    Publication date: March 16, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Technological Institute at Chicago
    Inventors: Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Igor Vasiljevic, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter
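    This publication shares its abstract with patent 12175708 at the top of the list; as a complement to the warping sketch there, this fragment shows the per-image-sequence update itself, treating the estimated intrinsics as ordinary learnable parameters driven by the photometric loss. The focal/principal-point parameterization, initial values, and optimizer are assumptions.

      import torch

      fx = torch.tensor(500.0, requires_grad=True)  # focal-length estimates
      fy = torch.tensor(500.0, requires_grad=True)
      cx = torch.tensor(320.0, requires_grad=True)  # principal-point estimates
      cy = torch.tensor(240.0, requires_grad=True)
      opt = torch.optim.Adam([fx, fy, cx, cy], lr=1e-3)

      def intrinsics():
          """Assemble K from the learnable parameters, keeping it differentiable."""
          zero = torch.tensor(0.0)
          return torch.stack([torch.stack([fx, zero, cx]),
                              torch.stack([zero, fy, cy]),
                              torch.tensor([0.0, 0.0, 1.0])])

      # Per-sequence loop; photometric_warp_loss is the earlier sketch's function.
      # for depth, pose, current, context in sequence:
      #     loss = photometric_warp_loss(depth, pose, intrinsics(), current, context)
      #     opt.zero_grad(); loss.backward(); opt.step()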
  • Patent number: 10839792
    Abstract: A method (and structure and computer product) for learning Out-of-Vocabulary (OOV) words in an Automatic Speech Recognition (ASR) system includes using an Acoustic Word Embedding Recurrent Neural Network (AWE RNN) to receive a character sequence for a new OOV word for the ASR system, the RNN providing an Acoustic Word Embedding (AWE) vector as an output thereof. The AWE vector output from the AWE RNN is provided as an input into an Acoustic Word Embedding-to-Acoustic-to-Word Neural Network (AWE→A2W NN) trained to provide an OOV word weight value from the AWE vector. The OOV word weight is inserted into a listing of Acoustic-to-Word (A2W) word embeddings used by the ASR system to output recognized words from an input of speech acoustic features, wherein the OOV word weight is inserted into the A2W word embeddings list relative to existing weights in the A2W word embeddings list.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: November 17, 2020
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, TOYOTA TECHNOLOGICAL INSTITUTE AT CHICAGO
    Inventors: Kartik Audhkhasi, Karen Livescu, Michael Picheny, Shane Settle
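    A loose sketch of the flow in this abstract, with every module reduced to a stand-in: a character-level RNN embeds the written form of the OOV word into an acoustic word embedding, a small network maps that AWE into the acoustic-to-word output space, and the result is appended as a new row of the A2W embedding table. All dimensions and layer choices are assumptions.

      import torch
      import torch.nn as nn

      char_rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)  # AWE RNN
      awe_to_a2w = nn.Linear(64, 256)          # stand-in for the AWE-to-A2W network
      a2w_embeddings = torch.randn(1000, 256)  # existing in-vocabulary word weights

      def add_oov_word(char_embeddings):
          """char_embeddings: (1, T, 32) embedded character sequence of the word."""
          _, (h, _) = char_rnn(char_embeddings)  # final hidden state as AWE vector
          new_weight = awe_to_a2w(h[-1])         # predicted A2W weight for the word
          return torch.cat([a2w_embeddings, new_weight], dim=0)  # extend A2W list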
  • Publication number: 20050044468
    Abstract: In one embodiment, a symbol error correction encoder effects block interleaving on recording data and thereafter performs first error correction encoding on the recording data. Next, a symbol error correction encoder performs encoding on the whole block. A reproducing processing circuit outputs likelihood information for the respective bits. A first error correction decoder corrects random errors produced upon recording and reproduction, using the likelihood information. Since iterative decoding can further improve performance against such random errors, the post-correction data is returned to the reproducing processing circuit. After this iterative processing is complete, the data is digitized by a hard decision and output to a symbol error correction decoder, which performs error correction in symbol units.
    Type: Application
    Filed: August 18, 2004
    Publication date: February 24, 2005
    Applicants: Hitachi Global Storage Technologies, Japan, Ltd., Toyota Technological Institute
    Inventors: Morishi Izumita, Terumi Takashi, Hideki Sawaguchi, Seiichi Mita
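    A small sketch of the block-interleaving step described above: symbols are written into a matrix row by row and read out column by column, so a burst error on the medium is spread across many inner codewords. The matrix shape is an illustrative choice; the symbol-level coding and iterative soft decoding stages are not reproduced here.

      def block_interleave(symbols, rows, cols):
          """Write row-wise, read column-wise; len(symbols) must be rows*cols."""
          assert len(symbols) == rows * cols
          matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
          return [matrix[r][c] for c in range(cols) for r in range(rows)]

      def block_deinterleave(symbols, rows, cols):
          """Inverse permutation: write column-wise, read row-wise."""
          return block_interleave(symbols, cols, rows)

      # A burst of adjacent errors in the interleaved stream lands in distinct
      # rows after deinterleaving, so each codeword sees at most a few errors.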
  • Patent number: 6564585
    Abstract: There is disclosed a second-order nonlinear glass material wherein a part having second-order nonlinearity contains Ge, H, and OH and has a second-order nonlinear optical constant d of 1 pm/V or more, and a method for producing the second-order nonlinear glass material comprising treating a porous glass material containing Ge with hydrogen, sintering it, and subjecting it to an ultraviolet poling treatment. There can thus be provided a second-order nonlinear glass material whose second-order nonlinearity is sufficiently high and sufficiently long-lived for practical use in optical functional elements or the like.
    Type: Grant
    Filed: May 9, 2001
    Date of Patent: May 20, 2003
    Assignees: Shin-Etsu Chemical Co., Ltd., Toyota Technological Institute
    Inventors: Jun Abe, Seiki Ejima, Akira J. Ikushima, Takumi Fujiwara
  • Patent number: 5618898
    Abstract: A process for producing a polymer of excellent weatherability, which comprises reacting a polymer having a thioether bond with a peroxide to oxidize the sulfur atom in the bond and convert it into a sulfone.
    Type: Grant
    Filed: October 5, 1994
    Date of Patent: April 8, 1997
    Assignees: Toagosei Chemical Industry Co., Ltd., Toyota Jidosha Kabushiki Kaisha, Toyota Technological Institute
    Inventors: Mitsuru Nagasawa, Kazuyuki Kuwano, Takeshi Kawakami, Mamoru Sugiura, Hiroshi Hibino, Shiro Kojima, Kishiro Azuma