Patents Assigned to Toyota Technological Institute
-
Patent number: 12175708
Abstract: Systems and methods described herein relate to self-supervised learning of camera intrinsic parameters from a sequence of images. One embodiment produces a depth map from a current image frame captured by a camera; generates a point cloud from the depth map using a differentiable unprojection operation; produces a camera pose estimate from the current image frame and a context image frame; produces a warped point cloud based on the camera pose estimate; generates a warped image frame from the warped point cloud using a differentiable projection operation; compares the warped image frame with the context image frame to produce a self-supervised photometric loss; updates a set of estimated camera intrinsic parameters on a per-image-sequence basis using one or more gradients from the self-supervised photometric loss; and generates, based on a converged set of learned camera intrinsic parameters, a rectified image frame from an image frame captured by the camera.
Type: Grant
Filed: March 11, 2022
Date of Patent: December 24, 2024
Assignees: Toyota Research Institute, Inc.; Toyota Technological Institute at Chicago
Inventors: Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Igor Vasiljevic, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter
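The abstract describes a differentiable unproject-warp-project loop whose photometric loss is backpropagated into the intrinsics themselves. Below is a minimal PyTorch sketch of that loop under a pinhole-camera assumption; the depth map, pose, and image frames are placeholder tensors standing in for the depth and pose networks, and all names are illustrative rather than the patented implementation.

```python
import torch
import torch.nn.functional as F

H, W = 64, 96

def make_K(fx, fy, cx, cy):
    # Assemble the pinhole matrix so autograd tracks the four parameters.
    z, o = torch.zeros(()), torch.ones(())
    return torch.stack([torch.stack([fx, z, cx]),
                        torch.stack([z, fy, cy]),
                        torch.stack([z, z, o])])

def unproject(depth, K):
    # Differentiable unprojection: lift every pixel into 3D using its depth.
    v, u = torch.meshgrid(torch.arange(H).float(), torch.arange(W).float(),
                          indexing="ij")
    pix = torch.stack([u.reshape(-1), v.reshape(-1), torch.ones(H * W)])
    return depth.reshape(1, -1) * (torch.linalg.inv(K) @ pix)   # (3, H*W)

def project(points, K):
    # Differentiable projection to normalized coordinates for grid_sample.
    cam = K @ points
    uv = cam[:2] / cam[2:].clamp(min=1e-6)
    grid = torch.stack([2 * uv[0] / (W - 1) - 1,
                        2 * uv[1] / (H - 1) - 1], dim=-1)
    return grid.reshape(1, H, W, 2)

# Placeholder inputs; in the described system the depth map and the pose come
# from learned depth and pose networks run on the current and context frames.
depth = torch.rand(H, W) + 1.0
R = torch.eye(3)
t = torch.tensor([[0.05], [0.0], [0.0]])
current = torch.rand(1, 3, H, W)
context = torch.rand(1, 3, H, W)

fx = torch.tensor(400.0, requires_grad=True)
fy = torch.tensor(400.0, requires_grad=True)
cx = torch.tensor(W / 2.0, requires_grad=True)
cy = torch.tensor(H / 2.0, requires_grad=True)
optim = torch.optim.Adam([fx, fy, cx, cy], lr=1.0)

for _ in range(100):                       # per-image-sequence refinement
    K = make_K(fx, fy, cx, cy)
    cloud = unproject(depth, K)            # point cloud from the depth map
    warped_cloud = R @ cloud + t           # warp by the camera pose estimate
    warped = F.grid_sample(context, project(warped_cloud, K),
                           align_corners=True)
    loss = (warped - current).abs().mean()  # self-supervised photometric loss
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Once the intrinsics converge for a sequence, they can be used to rectify subsequent frames from that camera, as the last step of the abstract describes.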
-
Publication number: 20240355042
Abstract: A method for fusing neural radiance fields (NeRFs) is described. The method includes re-rendering a first NeRF and a second NeRF at different viewpoints to form synthesized images from the first NeRF and the second NeRF. The method also includes inferring a transformation between the re-rendered first NeRF and the re-rendered second NeRF based on the synthesized images from the first NeRF and the second NeRF. The method further includes blending the re-rendered first NeRF and the re-rendered second NeRF based on the inferred transformation to fuse the first NeRF and the second NeRF.
Type: Application
Filed: January 31, 2024
Publication date: October 24, 2024
Applicants: Toyota Research Institute, Inc.; Toyota Jidosha Kabushiki Kaisha; Toyota Technological Institute at Chicago
Inventors: Jiading Fang, Shengjie Lin, Igor Vasiljevic, Vitor Campagnolo Guizilini, Rares Andrei Ambrus, Adrien David Gaidon, Gregory Shakhnarovich, Matthew Walter
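As a rough illustration of the blending step, the sketch below queries two radiance fields in a common coordinate frame, using a relative transform of the kind the method infers from re-rendered views, and blends their outputs. The callable field interface, the density-weighted blend, and the toy fields are assumptions made for illustration, not the patented method.

```python
import numpy as np

def query_blended(field_a, field_b, T_ab, points):
    """field_*: callables mapping (N, 3) points -> (density (N,), rgb (N, 3)).
    T_ab: 4x4 transform taking field-A coordinates into field-B coordinates,
    e.g. inferred by registering images re-rendered from both fields."""
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    pts_b = (T_ab @ pts_h.T).T[:, :3]     # express A-frame samples in B's frame
    sigma_a, rgb_a = field_a(points)
    sigma_b, rgb_b = field_b(pts_b)
    # Density-weighted blend: trust whichever field is more confident here.
    w = sigma_a / np.clip(sigma_a + sigma_b, 1e-8, None)
    sigma = w * sigma_a + (1 - w) * sigma_b
    rgb = w[:, None] * rgb_a + (1 - w)[:, None] * rgb_b
    return sigma, rgb

# Toy fields standing in for two trained NeRF MLPs.
def field_a(p):
    return np.exp(-np.sum(p ** 2, axis=1)), np.tile([1.0, 0, 0], (len(p), 1))

def field_b(p):
    return np.exp(-np.sum((p - 0.5) ** 2, axis=1)), np.tile([0, 0, 1.0], (len(p), 1))

sigma, rgb = query_blended(field_a, field_b, np.eye(4), np.random.rand(8, 3))
```

The blended densities and colors can then be composited with standard volume rendering, exactly as for a single field.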
-
Publication number: 20240331268
Abstract: Systems, methods, and other embodiments described herein relate to generating an image by interpolating features estimated from a learning model. In one embodiment, a method includes sampling three-dimensional (3D) points of a light ray that crosses a frustum space associated with a single-view camera, the 3D points reflecting depth estimates derived from data that the single-view camera generates for a scene. The method also includes deriving feature values for the 3D points using tri-linear interpolation across feature planes of the frustum space, the feature planes being estimated by a learning model. The method also includes inferring an image in two dimensions (2D) by translating the feature values and compositing the data with volumetric rendering for the scene. The method also includes executing a control task by a controller using the image.
Type: Application
Filed: March 29, 2023
Publication date: October 3, 2024
Applicants: Toyota Research Institute, Inc.; Toyota Jidosha Kabushiki Kaisha; Toyota Technological Institute at Chicago
Inventors: Jiading Fang, Vitor Guizilini, Igor Vasiljevic, Rares A. Ambrus, Gregory Shakhnarovich, Matthew R. Walter, Adrien David Gaidon
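The core of this abstract, trilinear interpolation of frustum feature planes followed by volumetric compositing, can be sketched in a few lines of PyTorch. The feature volume, the two linear heads, and the single-ray sampling below are illustrative assumptions; note that `grid_sample` in "bilinear" mode performs trilinear interpolation on 5-D inputs.

```python
import torch
import torch.nn.functional as F

D, H, W, C = 16, 32, 32, 8
# Feature planes stacked across frustum depth; in the described method these
# are estimated by a learning model from a single camera view.
planes = torch.randn(1, C, D, H, W)
to_density = torch.nn.Linear(C, 1)   # translate features to volume density
to_rgb = torch.nn.Linear(C, 3)       # translate features to color

# N points sampled along one light ray, in normalized frustum coordinates in
# [-1, 1] (x, y index the image plane; z indexes the depth slices).
N = 64
z = torch.linspace(-1, 1, N)
ray = torch.stack([0.1 * torch.ones(N), -0.2 * torch.ones(N), z], dim=-1)
grid = ray.view(1, N, 1, 1, 3)

# Tri-linear interpolation of the feature planes at the sampled 3D points.
feats = F.grid_sample(planes, grid, mode="bilinear", align_corners=True)
feats = feats.view(C, N).t()                               # (N, C)

# Standard volumetric rendering: alpha-composite the translated features.
sigma = F.softplus(to_density(feats)).squeeze(-1)          # (N,)
rgb = torch.sigmoid(to_rgb(feats))                         # (N, 3)
delta = 2.0 / N                                            # ray sample spacing
alpha = 1 - torch.exp(-sigma * delta)
trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
pixel = ((trans * alpha).unsqueeze(-1) * rgb).sum(dim=0)   # composited RGB
```

Repeating this for one ray per pixel yields the inferred 2D image that the controller then consumes.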
-
Publication number: 20240320844
Abstract: A method for scale-aware depth estimation using multi-camera projection loss is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes training a scale-aware depth estimation model and an ego-motion estimation model according to the multi-camera photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the scale-aware depth estimation model and the ego-motion estimation model. The method also includes planning a vehicle control action of the ego vehicle according to the 360° point cloud of the scene surrounding the ego vehicle.
Type: Application
Filed: June 5, 2024
Publication date: September 26, 2024
Applicants: Toyota Research Institute, Inc.; Toyota Technological Institute at Chicago
Inventors: Vitor Guizilini, Rares Andrei Ambrus, Igor Vasiljevic, Gregory Shakhnarovich
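A schematic sketch of the multi-camera photometric loss follows: each camera's predicted depth is warped into overlapping rig neighbours using the known metric extrinsics, so the loss is only small when the depth has the correct absolute scale. The helper names, the plain L1 error, and the omission of the temporal (ego-motion) term are simplifications for illustration, not the patented formulation.

```python
import torch
import torch.nn.functional as F

H, W = 48, 64

def warp(src, depth, T, K):
    # Unproject with the predicted depth, apply the 4x4 rigid transform T,
    # reproject with K, and sample the source image at the resulting pixels.
    v, u = torch.meshgrid(torch.arange(H).float(), torch.arange(W).float(),
                          indexing="ij")
    pix = torch.stack([u.reshape(-1), v.reshape(-1), torch.ones(H * W)])
    pts = depth.reshape(1, -1) * (torch.linalg.inv(K) @ pix)
    pts = T[:3, :3] @ pts + T[:3, 3:]
    cam = K @ pts
    uv = cam[:2] / cam[2:].clamp(min=1e-6)
    grid = torch.stack([2 * uv[0] / (W - 1) - 1,
                        2 * uv[1] / (H - 1) - 1], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)

def multi_camera_photometric_loss(images, depths, extrinsics, K):
    # extrinsics[i][j] is the known metric 4x4 transform taking camera-i
    # points into camera-j coordinates, for overlapping camera pairs. Because
    # these transforms are metric, the loss only vanishes when the predicted
    # depths have the correct absolute scale. The usual temporal term driven
    # by the ego-motion model is omitted here for brevity.
    loss, terms = 0.0, 0
    for i in extrinsics:
        for j in extrinsics[i]:
            synth_i = warp(images[j], depths[i], extrinsics[i][j], K)
            loss = loss + (synth_i - images[i]).abs().mean()
            terms += 1
    return loss / max(terms, 1)

# Two-camera toy rig: cameras 0.3 m apart, sharing one intrinsics matrix.
K = torch.tensor([[60.0, 0.0, W / 2], [0.0, 60.0, H / 2], [0.0, 0.0, 1.0]])
images = [torch.rand(1, 3, H, W) for _ in range(2)]
depths = [5.0 * torch.ones(H, W) for _ in range(2)]
T01 = torch.eye(4)
T01[0, 3] = 0.3
extrinsics = {0: {1: T01}, 1: {0: torch.linalg.inv(T01)}}
print(multi_camera_photometric_loss(images, depths, extrinsics, K))
```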
-
Publication number: 20240249465
Abstract: Systems and methods for enhanced computer vision capabilities, particularly depth synthesis, applicable to autonomous vehicle operation are described. A vehicle may be equipped with a geometric scene representation (GSR) architecture for synthesizing depth views at arbitrary viewpoints. The synthesized depth views enable advanced functions, including depth interpolation and depth extrapolation, which are useful for various computer vision applications for autonomous vehicles, such as predicting depth maps from unseen locations. For example, a vehicle includes a processor device that synthesizes depth views at multiple viewpoints, where the multiple viewpoints are derived from image data of the vehicle's surrounding environment.
Type: Application
Filed: January 19, 2023
Publication date: July 25, 2024
Applicants: Toyota Research Institute, Inc.; Toyota Jidosha Kabushiki Kaisha; Toyota Technological Institute at Chicago
Inventors: Vitor Guizilini, Igor Vasiljevic, Adrien D. Gaidon, Greg Shakhnarovich, Matthew Walter, Jiading Fang, Rares A. Ambrus
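The sketch below illustrates the viewpoint side of depth interpolation and extrapolation: an intermediate camera pose is built between two known viewpoints (quaternion slerp for rotation, lerp for translation) and handed to a decoder for a depth map. The quaternion math is standard; `decode_depth` is a hypothetical stand-in for the GSR decoder, which in the described system is a learned model.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical interpolation between unit quaternions (w, x, y, z)."""
    dot = np.dot(q0, q1)
    if dot < 0:                      # take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1, 1))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(pose0, pose1, t):
    """pose = (unit quaternion, translation). t in [0, 1] interpolates between
    the two source viewpoints; t outside [0, 1] extrapolates beyond them."""
    q = slerp(pose0[0], pose1[0], t)
    trans = (1 - t) * pose0[1] + t * pose1[1]
    return q, trans

def decode_depth(pose):
    # Hypothetical stand-in for the learned GSR decoder conditioned on the
    # query viewpoint; here it just returns a constant depth map.
    return np.full((32, 32), 10.0)

left = (np.array([1.0, 0, 0, 0]), np.zeros(3))
right = (np.array([0.9239, 0, 0.3827, 0]), np.array([1.0, 0, 0]))  # 45 deg yaw
midway = interpolate_pose(left, right, 0.5)
depth = decode_depth(midway)         # depth map from an unseen viewpoint
```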
-
Patent number: 12033341
Abstract: A method for scale-aware depth estimation using multi-camera projection loss is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes training a scale-aware depth estimation model and an ego-motion estimation model according to the multi-camera photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the scale-aware depth estimation model and the ego-motion estimation model. The method also includes planning a vehicle control action of the ego vehicle according to the 360° point cloud of the scene surrounding the ego vehicle.
Type: Grant
Filed: July 30, 2021
Date of Patent: July 9, 2024
Assignees: Toyota Research Institute, Inc.; Toyota Technological Institute at Chicago
Inventors: Vitor Guizilini, Rares Andrei Ambrus, Adrien David Gaidon, Igor Vasiljevic, Gregory Shakhnarovich
-
Publication number: 20240029286
Abstract: A method of generating additional supervision data to improve learning of a geometrically-consistent latent scene representation with a geometric scene representation architecture is provided. The method includes receiving, with a computing device, a latent scene representation encoding a pointcloud from images of a scene captured by a plurality of cameras, each with known intrinsics and poses; generating a virtual camera having a viewpoint different from the viewpoints of the plurality of cameras; projecting information from the pointcloud onto the viewpoint of the virtual camera; and decoding the latent scene representation based on the virtual camera, thereby generating an RGB image and depth map corresponding to the viewpoint of the virtual camera for use as additional supervision data.
Type: Application
Filed: February 16, 2023
Publication date: January 25, 2024
Applicants: Toyota Research Institute, Inc.; Toyota Jidosha Kabushiki Kaisha; Toyota Technological Institute at Chicago
Inventors: Vitor Guizilini, Igor Vasiljevic, Adrien D. Gaidon, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter, Rares A. Ambrus
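The projection step can be illustrated with a small numpy sketch: the pointcloud encoded by the latent scene representation is transformed into a virtual camera's frame and z-buffered into a sparse depth map usable as supervision at that viewpoint. The z-buffering scheme and the pose perturbation are illustrative assumptions; the RGB decoding path is omitted.

```python
import numpy as np

def render_virtual_depth(points, T_world_to_cam, K, H, W):
    """points: (N, 3) world-frame pointcloud; returns a sparse (H, W) depth map
    seen from the virtual camera defined by T_world_to_cam and K."""
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (T_world_to_cam @ pts_h.T)[:3]           # points in the virtual frame
    cam = cam[:, cam[2] > 1e-6]                    # keep points in front
    pix = K @ cam
    u = np.round(pix[0] / pix[2]).astype(int)
    v = np.round(pix[1] / pix[2]).astype(int)
    depth = np.full((H, W), np.inf)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for ui, vi, z in zip(u[ok], v[ok], cam[2][ok]):
        depth[vi, ui] = min(depth[vi, ui], z)      # keep the nearest surface
    depth[np.isinf(depth)] = 0.0                   # 0 marks "no supervision"
    return depth

# Hypothetical usage: perturb a real camera's pose to get a virtual viewpoint,
# then use the rendered depth to supervise the decoder at that viewpoint.
K = np.array([[100.0, 0, 64], [0, 100.0, 48], [0, 0, 1]])
T = np.eye(4)
T[0, 3] = -0.5                                     # virtual camera shifted 0.5 m
cloud = np.random.rand(5000, 3) * [4, 3, 1] + [0, 0, 4]
target_depth = render_virtual_depth(cloud, T, K, 96, 128)
```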
-
Publication number: 20230080638
Abstract: Systems and methods described herein relate to self-supervised learning of camera intrinsic parameters from a sequence of images. One embodiment produces a depth map from a current image frame captured by a camera; generates a point cloud from the depth map using a differentiable unprojection operation; produces a camera pose estimate from the current image frame and a context image frame; produces a warped point cloud based on the camera pose estimate; generates a warped image frame from the warped point cloud using a differentiable projection operation; compares the warped image frame with the context image frame to produce a self-supervised photometric loss; updates a set of estimated camera intrinsic parameters on a per-image-sequence basis using one or more gradients from the self-supervised photometric loss; and generates, based on a converged set of learned camera intrinsic parameters, a rectified image frame from an image frame captured by the camera.
Type: Application
Filed: March 11, 2022
Publication date: March 16, 2023
Applicants: Toyota Research Institute, Inc.; Toyota Technological Institute at Chicago
Inventors: Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Igor Vasiljevic, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter
-
Patent number: 10839792
Abstract: A method (and structure and computer product) for learning Out-of-Vocabulary (OOV) words in an Automatic Speech Recognition (ASR) system includes using an Acoustic Word Embedding Recurrent Neural Network (AWE RNN) to receive a character sequence for a new OOV word for the ASR system, the RNN providing an Acoustic Word Embedding (AWE) vector as its output. The AWE vector output from the AWE RNN is provided as an input into an Acoustic Word Embedding-to-Acoustic-to-Word Neural Network (AWE→A2W NN) trained to provide an OOV word weight value from the AWE vector. The OOV word weight is inserted into a listing of Acoustic-to-Word (A2W) word embeddings used by the ASR system to output recognized words from an input of speech acoustic features, wherein the OOV word weight is inserted relative to the existing weights in the A2W word embeddings list.
Type: Grant
Filed: February 5, 2019
Date of Patent: November 17, 2020
Assignees: International Business Machines Corporation; Toyota Technological Institute at Chicago
Inventors: Kartik Audhkhasi, Karen Livescu, Michael Picheny, Shane Settle
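Schematically, the flow is: character sequence → AWE vector → A2W weight row → inserted into the output embedding table. The PyTorch sketch below mirrors that flow with illustrative layer sizes and a plain append; in the patent the AWE RNN and the AWE→A2W network are trained models, not the randomly initialized stand-ins used here.

```python
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz'"
EMB, AWE_DIM, A2W_DIM, VOCAB = 32, 128, 256, 10000

char_emb = nn.Embedding(len(CHARS), EMB)
awe_rnn = nn.GRU(EMB, AWE_DIM, batch_first=True)      # AWE RNN over characters
awe_to_a2w = nn.Sequential(nn.Linear(AWE_DIM, A2W_DIM), nn.Tanh(),
                           nn.Linear(A2W_DIM, A2W_DIM))  # AWE -> A2W weight
a2w_weights = torch.randn(VOCAB, A2W_DIM)             # existing word embeddings

def add_oov_word(word):
    # Feed the OOV word's spelling through the AWE RNN; its final hidden
    # state serves as the acoustic word embedding vector.
    ids = torch.tensor([[CHARS.index(c) for c in word]])
    _, h = awe_rnn(char_emb(ids))
    new_row = awe_to_a2w(h.squeeze(0))                # OOV word weight (1, A2W_DIM)
    # Insert the new weight alongside the existing A2W embeddings so the
    # recognizer can now emit this word directly from acoustic features.
    return torch.cat([a2w_weights, new_row.detach()], dim=0)

expanded = add_oov_word("zyzzyva")
print(expanded.shape)    # torch.Size([10001, 256])
```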
-
Publication number: 20050044468
Abstract: In one embodiment, a symbol error correction encoder performs block interleaving on the recording data and then applies first error correction encoding to it. Next, a symbol error correction encoder encodes the whole block. A reproducing processing circuit outputs likelihood information for the respective bits. A first error correction decoder corrects random errors produced during recording and reproduction using the likelihood information. Since performance against random errors can be improved by repeated decoding at this stage, the post-correction data is returned to the reproducing processing circuit. After this iterative processing completes, the data is digitized, subjected to symbol-level error correction by hard decision, and output to a symbol error correction decoder.
Type: Application
Filed: August 18, 2004
Publication date: February 24, 2005
Applicants: Hitachi Global Storage Technologies Japan, Ltd.; Toyota Technological Institute
Inventors: Morishi Izumita, Terumi Takashi, Hideki Sawaguchi, Seiichi Mita
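The block-interleaving step that opens this pipeline is easy to illustrate: symbols are written into a matrix row by row and read out column by column, so a burst error on the medium is scattered across many codewords and appears random to the inner decoder. The sketch below shows only this permutation, with illustrative dimensions; the inner/outer codes and the iterative soft-decision decoding from the abstract are not reproduced.

```python
def interleave(data, rows, cols):
    # Write the symbols row-wise into a rows x cols matrix, read column-wise.
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # Inverse permutation: write column-wise, read row-wise.
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

block = list(range(12))
scattered = interleave(block, rows=3, cols=4)
assert deinterleave(scattered, rows=3, cols=4) == block
# A burst of 3 adjacent symbol errors in `scattered` lands in 3 different
# rows after deinterleaving, i.e. at most one error per row codeword.
```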
-
Patent number: 6564585
Abstract: Disclosed is a second-order nonlinear glass material in which the part having second-order nonlinearity contains Ge, H, and OH and has a second-order nonlinear optical constant d of 1 pm/V or more, together with a method for producing the second-order nonlinear glass material comprising treating a porous glass material containing Ge with hydrogen, sintering it, and subjecting it to an ultraviolet poling treatment. This provides a second-order nonlinear glass material whose nonlinearity is sufficiently high, and sufficiently long-lived, for practical use of the glass material in optical functional elements and the like.
Type: Grant
Filed: May 9, 2001
Date of Patent: May 20, 2003
Assignees: Shin-Etsu Chemical Co., Ltd.; Toyota Technological Institute
Inventors: Jun Abe, Seiki Ejima, Akira J. Ikushima, Takumi Fujiwara
-
Patent number: 5618898
Abstract: A process for producing a polymer of excellent weatherability, which comprises reacting a polymer having a thioether bond with a peroxide to oxidize the sulfur atom in the bond, converting it into a sulfone.
Type: Grant
Filed: October 5, 1994
Date of Patent: April 8, 1997
Assignees: Toagosei Chemical Industry Co., Ltd.; Toyota Jidosha Kabushiki Kaisha; Toyota Technological Institute
Inventors: Mitsuru Nagasawa, Kazuyuki Kuwano, Takeshi Kawakami, Mamoru Sugiura, Hiroshi Hibino, Shiro Kojima, Kishiro Azuma