Patents by Inventor Igor Gilitschenski

Igor Gilitschenski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250078362
    Abstract: A method for editing a local area of a target image using a diffusion model includes: receiving an input image; receiving a text instruction to edit the input image; generating a relevance map based on the diffusion model and the text instruction; generating a rendered image by performing a relevance-guided image editing method on the input image based on the generated relevance map; and providing, to a user, the generated rendered image.
    Type: Application
    Filed: August 27, 2024
    Publication date: March 6, 2025
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ashkan MIRZAEI, Tristan Ty Aumentado-Armstrong, Marcus A. Brubaker, Igor Gilitschenski, Aleksai Levinshtein, Konstantinos G. Derpanis
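The relevance-guided editing described in this abstract can be illustrated with a toy blend: a minimal sketch, assuming the relevance map reduces to a per-pixel weight in [0, 1] that mixes edited and original pixel values. The function name and flat-list image representation are invented for illustration; the actual patented method operates on diffusion-model internals.

```python
# Toy sketch of relevance-guided blending (not the patented algorithm):
# pixels with high relevance take the edited value, while low-relevance
# pixels keep the original, confining the edit to the relevant local area.
def relevance_guided_blend(original, edited, relevance):
    """Blend two equally sized images (flat lists of floats) per pixel.

    relevance[i] in [0, 1]: 1.0 means the edit fully applies at pixel i.
    """
    assert len(original) == len(edited) == len(relevance)
    return [r * e + (1.0 - r) * o
            for o, e, r in zip(original, edited, relevance)]

blended = relevance_guided_blend([0.0, 0.5, 1.0], [1.0, 1.0, 1.0], [1.0, 0.5, 0.0])
# blended == [1.0, 0.75, 1.0]: only the first pixel takes the full edit.
```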
  • Publication number: 20250069321
    Abstract: An electronic device includes: a camera; a memory; and a processor configured to: obtain a plurality of multiview color images of an object; obtain, from a latent field of the object, a plurality of multiview latent images and a plurality of camera parameters respectively corresponding to the plurality of multiview latent images; render, based on the plurality of multiview latent images and the plurality of camera parameters, a first feature map of the object by using the latent field and an autoencoder; train an improved neural radiance field (NeRF) by performing iterative operations based on the first feature map of the object; receive a request for a novel view of the object; generate, by using the improved NeRF, a second feature map from the novel view of the object; and generate, by a decoder of the autoencoder, an image of the novel view of the object based on the second feature map.
    Type: Application
    Filed: August 12, 2024
    Publication date: February 27, 2025
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Tristan AUMENTADO-ARMSTRONG, Ashkan MIRZAEI, Aleksai LEVINSHTEIN, Marcus Anthony BRUBAKER, Konstantinos G. DERPANIS, Igor GILITSCHENSKI
  • Patent number: 12168461
    Abstract: Systems and methods for predicting a trajectory of a moving object are disclosed herein. One embodiment downloads, to a robot, a probabilistic hybrid discrete-continuous automaton (PHA) model learned as a deep neural network; uses the deep neural network to infer a sequence of high-level discrete modes and a set of associated low-level samples, wherein the high-level discrete modes correspond to candidate maneuvers for the moving object and the low-level samples are candidate trajectories; uses the sequence of high-level discrete modes and the set of associated low-level samples, via a learned proposal distribution in the deep neural network, to adaptively sample the sequence of high-level discrete modes to produce a reduced set of low-level samples; applies a sample selection technique to the reduced set of low-level samples to select a predicted trajectory for the moving object; and controls operation of the robot based, at least in part, on the predicted trajectory.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: December 17, 2024
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Xin Huang, Igor Gilitschenski, Guy Rosman, Stephen G. McGill, Jr., John Joseph Leonard, Ashkan Mohammadzadeh Jasour, Brian C. Williams
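The two-level structure in the abstract (discrete maneuver modes on top, candidate trajectories underneath) can be mimicked with a small selection routine. This is a schematic sketch with invented mode names, probabilities, and scores; it is not the learned PHA model, whose proposal distribution and sampling are produced by a deep neural network.

```python
# Illustrative two-level selection loosely following the abstract's idea:
# each discrete maneuver mode carries candidate trajectories; we keep only
# samples from the most probable modes, then pick the best-scoring one.
def predict_trajectory(modes, keep_top_modes=2):
    """modes: list of (mode_prob, [(trajectory_id, score), ...])."""
    # Adaptive step: keep samples only from the highest-probability modes,
    # producing a reduced set of low-level samples.
    ranked = sorted(modes, key=lambda m: m[0], reverse=True)[:keep_top_modes]
    reduced = [sample for _, samples in ranked for sample in samples]
    # Sample-selection step: choose the best-scoring candidate trajectory.
    return max(reduced, key=lambda s: s[1])[0]

modes = [
    (0.6, [("keep_lane_a", 0.9), ("keep_lane_b", 0.7)]),
    (0.3, [("turn_left", 0.8)]),
    (0.1, [("u_turn", 0.99)]),   # pruned despite its high sample score
]
predict_trajectory(modes)  # -> "keep_lane_a"
```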
  • Publication number: 20240303789
    Abstract: Provided is a method of training a neural radiance field and producing a rendering of a 3D scene from a novel viewpoint with view-dependent effects. The neural radiance field is initially trained using a first loss associated with a plurality of unmasked regions associated with a reference image and a plurality of target images. The training may also be updated using a second loss associated with a depth estimate of a masked region in the reference image. The training may also be further updated using a third loss associated with a view-substituted image associated with a respective target image. The view-substituted image is a volume rendering from the reference viewpoint across pixels with view-substituted target colors. In some embodiments, the neural radiance field is additionally trained with a fourth loss. The fourth loss is associated with dis-occluded pixels in a target image.
    Type: Application
    Filed: November 13, 2023
    Publication date: September 12, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ashkan MIRZAEI, Tristan TY AUMENTADO-ARMSTRONG, Konstantinos G. DERPANIS, Igor GILITSCHENSKI, Aleksai LEVINSHTEIN, Marcus BRUBAKER
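The abstract's four-term training objective amounts to a weighted sum of losses. The sketch below is schematic, with placeholder weights and per-term values; in the actual method each term is computed from renderings of the neural radiance field.

```python
# Schematic total training loss combining the four terms the abstract
# lists (unmasked-region, depth, view-substitution, dis-occlusion).
# The weights and the per-term values are placeholders for illustration.
def total_loss(unmasked, depth, view_subst, disocclusion,
               weights=(1.0, 0.1, 0.5, 0.5)):
    terms = (unmasked, depth, view_subst, disocclusion)
    return sum(w * t for w, t in zip(weights, terms))

loss = total_loss(0.2, 0.4, 0.1, 0.0)
# loss ~= 1.0*0.2 + 0.1*0.4 + 0.5*0.1 + 0.5*0.0 = 0.29
```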
  • Publication number: 20240249180
    Abstract: Prediction training systems are disclosed that rely on small variation data sets instead of training the model using large passive data sets. The smaller variation data sets are used to add loss terms that may mimic intervention. One or more models may be included that mimic the intervention by training with variation data sets. The variation data sets may be collected from such interventions in real-world events. The model may mimic an intervention by replacing values in the prediction during a forward model computation.
    Type: Application
    Filed: January 20, 2023
    Publication date: July 25, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Guy ROSMAN, Igor GILITSCHENSKI, Xiongyi CUI, Stephen G. MCGILL
  • Patent number: 12024203
    Abstract: A method of generating an output trajectory of an ego vehicle is described. The method includes extracting high-level features from a bird-view image of a traffic environment of the ego vehicle. The method also includes generating, using an automaton generative network, an automaton including an automaton state distribution describing a behavior of the ego vehicle in the traffic environment according to the high-level features. The method further includes generating the output trajectory of the ego vehicle according to extracted bird-view features of the bird-view image and the automaton state distribution describing the behavior of the ego vehicle in the traffic environment.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: July 2, 2024
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Xiao Li, Brandon Araki, Sertac Karaman, Daniela Rus, Guy Rosman, Igor Gilitschenski, Cristian-Ioan Vasile
  • Publication number: 20240157977
    Abstract: Systems and methods for modeling and predicting scene occupancy in an environment of a robot are disclosed herein. One embodiment processes past agent-trajectory data, map data, and sensor data using one or more encoder neural networks to produce combined encoded input data; generates a weights vector for a Gaussian Mixture Model (GMM) based on the combined encoded input data; produces a volumetric spatio-temporal representation of occupancy in an environment of a robot by generating, for a plurality of modes of the GMM in accordance with the weights vector, corresponding sample probability distributions of scene occupancy based on respective means and variances of the plurality of modes, wherein the respective means and variances sample coefficients of a set of learned basis functions; and controls the operation of the robot based, at least in part, on the volumetric spatio-temporal representation of occupancy.
    Type: Application
    Filed: November 16, 2022
    Publication date: May 16, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Guy Rosman, Igor Gilitschenski, Xin Huang
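The weighted-mode structure of the occupancy model can be illustrated by evaluating a one-dimensional Gaussian mixture. The weights, means, and variances below are invented; in the patented system they (and the coefficients of the learned basis functions) are produced by neural networks from the encoded inputs.

```python
import math

# Toy evaluation of a 1-D Gaussian mixture occupancy estimate, echoing
# the abstract's weighted-mode structure (invented parameters).
def gaussian(x, mean, var):
    """Density of a univariate Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def occupancy(x, weights, means, variances):
    """Mixture density of occupancy at position x: sum of weighted modes."""
    return sum(w * gaussian(x, m, v)
               for w, m, v in zip(weights, means, variances))

p = occupancy(0.0, weights=[0.7, 0.3], means=[0.0, 2.0], variances=[1.0, 1.0])
```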
  • Publication number: 20240153046
    Abstract: A computer-implemented method of configuring an electronic device for inpainting source three-dimensional (3D) scenes, includes: receiving the source 3D scenes and a user's input about a first object of the source 3D scenes; generating accurate object masks about the first object of the source 3D scenes; and generating inpainted 3D scenes of the source 3D scenes by using an inpainting neural radiance field (NeRF) based on the accurate object masks.
    Type: Application
    Filed: July 31, 2023
    Publication date: May 9, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ashkan MIRZAEI, Tristan TY AUMENTADO-ARMSTRONG, Konstantinos G. DERPANIS, Marcus A. BRUBAKER, Igor GILITSCHENSKI, Aleksai LEVINSHTEIN
  • Publication number: 20240119857
    Abstract: System, methods, and other embodiments described herein relate to training a scene simulator for rendering 2D scenes using data from real and simulated agents. In one embodiment, a method includes acquiring trajectories and three-dimensional (3D) views for multiple agents from observations of real vehicles. The method also includes generating a 3D scene having the multiple agents using the 3D views and information from simulated agents. The method also includes training a scene simulator to render scene projections using the 3D scene. The method also includes outputting a 2D scene having simulated observations for a driving scene using the scene simulator.
    Type: Application
    Filed: September 27, 2022
    Publication date: April 11, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, Massachusetts Institute of Technology
    Inventors: Tsun-Hsuan Wang, Alexander Amini, Wilko Schwarting, Igor Gilitschenski, Sertac Karaman, Daniela Rus
  • Patent number: 11724691
    Abstract: Systems and methods described herein relate to estimating risk associated with a vehicular maneuver. One embodiment acquires a geometric representation of an intersection including a lane in which a vehicle is traveling and at least one other lane; discretizes the at least one other lane into a plurality of segments; determines a trajectory along which the vehicle will travel; estimates a probability density function for whether a road agent external to the vehicle is present in the respective segments; estimates a traffic-conflict probability of a traffic conflict in the respective segments conditioned on whether an external road agent is present; estimates a risk associated with the vehicle following the trajectory by integrating a product of the probability density function and the traffic-conflict probability over the at least one other lane and the plurality of segments; and controls operation of the vehicle based, at least in part, on the estimated risk.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: August 15, 2023
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Stephen G. McGill, Jr., Guy Rosman, Moses Theodore Ort, Alyssa Pierson, Igor Gilitschenski, Minoru Brandon Araki, Luke S. Fletcher, Sertac Karaman, Daniela Rus, John Joseph Leonard
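The risk estimate in this abstract has a simple discrete form: sum, over the discretized lane segments, the probability that an external road agent occupies a segment times the probability of a traffic conflict given that occupancy. A minimal numerical sketch follows; the probabilities are invented for illustration, and the real system estimates them from the intersection geometry and trajectory.

```python
# Discrete approximation of the risk integral: the product of the
# presence probability density and the conditional traffic-conflict
# probability, summed over lane segments (invented values).
def trajectory_risk(presence_probs, conflict_probs):
    """presence_probs[i]: P(agent in segment i); conflict_probs[i]:
    P(conflict in segment i | agent present)."""
    assert len(presence_probs) == len(conflict_probs)
    return sum(p * c for p, c in zip(presence_probs, conflict_probs))

presence = [0.1, 0.4, 0.2]   # chance a road agent occupies each segment
conflict = [0.5, 0.9, 0.1]   # chance of conflict if an agent is there
risk = trajectory_risk(presence, conflict)
# risk ~= 0.43; the vehicle could then prefer a lower-risk trajectory.
```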
  • Publication number: 20230062810
    Abstract: A method of generating an output trajectory of an ego vehicle is described. The method includes extracting high-level features from a bird-view image of a traffic environment of the ego vehicle. The method also includes generating, using an automaton generative network, an automaton including an automaton state distribution describing a behavior of the ego vehicle in the traffic environment according to the high-level features. The method further includes generating the output trajectory of the ego vehicle according to extracted bird-view features of the bird-view image and the automaton state distribution describing the behavior of the ego vehicle in the traffic environment.
    Type: Application
    Filed: July 9, 2021
    Publication date: March 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY, LEHIGH UNIVERSITY
    Inventors: Xiao LI, Brandon ARAKI, Sertac KARAMAN, Daniela RUS, Guy ROSMAN, Igor GILITSCHENSKI, Cristian-Ioan VASILE
  • Publication number: 20220410938
    Abstract: Systems and methods for predicting a trajectory of a moving object are disclosed herein. One embodiment downloads, to a robot, a probabilistic hybrid discrete-continuous automaton (PHA) model learned as a deep neural network; uses the deep neural network to infer a sequence of high-level discrete modes and a set of associated low-level samples, wherein the high-level discrete modes correspond to candidate maneuvers for the moving object and the low-level samples are candidate trajectories; uses the sequence of high-level discrete modes and the set of associated low-level samples, via a learned proposal distribution in the deep neural network, to adaptively sample the sequence of high-level discrete modes to produce a reduced set of low-level samples; applies a sample selection technique to the reduced set of low-level samples to select a predicted trajectory for the moving object; and controls operation of the robot based, at least in part, on the predicted trajectory.
    Type: Application
    Filed: December 1, 2021
    Publication date: December 29, 2022
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Xin Huang, Igor Gilitschenski, Guy Rosman, Stephen G. McGill, Jr., John Joseph Leonard, Ashkan Mohammadzadeh Jasour, Brian C. Williams
  • Patent number: 11436839
    Abstract: The present disclosure provides systems and methods to detect occluded objects using shadow information to anticipate moving obstacles that are occluded behind a corner or other obstacle. The system may perform a dynamic threshold analysis on enhanced images allowing the detection of even weakly visible shadows. The system may classify an image sequence as either “dynamic” or “static”, enabling an autonomous vehicle, or other moving platform, to react and respond to a moving, yet occluded object by slowing down or stopping.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: September 6, 2022
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Felix Maximilian Naser, Igor Gilitschenski, Guy Rosman, Alexander Andre Amini, Fredo Durand, Antonio Torralba, Gregory Wornell, William Freeman, Sertac Karaman, Daniela Rus
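The "dynamic" versus "static" classification the abstract describes can be caricatured with a threshold on frame-to-frame change. The sketch below is illustrative only: the mean-intensity signal and fixed threshold stand in for the patented dynamic threshold analysis on enhanced images.

```python
# Illustrative dynamic/static sequence labeling: a sequence is "dynamic"
# when frame-to-frame intensity change in a monitored region exceeds a
# threshold (stand-in for the patented shadow analysis).
def classify_sequence(region_means, threshold=0.05):
    """region_means: per-frame mean intensity of the monitored region."""
    deltas = [abs(b - a) for a, b in zip(region_means, region_means[1:])]
    return "dynamic" if max(deltas) > threshold else "static"

classify_sequence([0.50, 0.51, 0.50])   # -> "static": shadow is steady
classify_sequence([0.50, 0.62, 0.48])   # -> "dynamic": occluded motion
```

A "dynamic" label would then trigger the slow-down or stop response described above.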
  • Patent number: 11427210
    Abstract: Systems and methods for predicting the trajectory of an object are disclosed herein. One embodiment receives sensor data that includes a location of the object in an environment of the object; accesses a location-specific latent map, the location-specific latent map having been learned together with a neural-network-based trajectory predictor during a training phase, wherein the neural-network-based trajectory predictor is deployed in a robot; inputs, to the neural-network-based trajectory predictor, the location of the object and the location-specific latent map, the location-specific latent map providing, to the neural-network-based trajectory predictor, a set of location-specific biases regarding the environment of the object; and outputs, from the neural-network-based trajectory predictor, a predicted trajectory of the object.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: August 30, 2022
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Guy Rosman, Igor Gilitschenski, Arjun Gupta, Sertac Karaman, Daniela Rus
  • Patent number: 11295162
    Abstract: An approach to place recognition from an image makes use of the detection of objects at a set of known places as well as at an unknown place. Images of the detected objects in an image of the unknown place are processed to yield respective numerical descriptors, and these descriptors are used to compare the unknown place to the known places to recognize the unknown place. At least some embodiments make use of a trained parameterized image processor to transform an image of an object to an object descriptor, and the training of the processor is meant to preserve distinctions between different instances of a type of object, as well as distinctions between entirely different types of objects.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: April 5, 2022
    Assignees: Massachusetts Institute of Technology, ETH Zurich
    Inventors: Daniela Rus, Sertac Karaman, Igor Gilitschenski, Andrei Cramariuc, Cesar Cadena, Roland Siegwart
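The descriptor-based comparison in this abstract can be sketched as nearest-descriptor matching: score each known place by how well its object descriptors match those of the unknown place, and return the best. The 2-D descriptors, place names, and average-best-match score below are invented; the patented system learns descriptors with a trained parameterized image processor.

```python
import math

# Hypothetical sketch of descriptor-based place recognition: each place
# is a set of object descriptors; the unknown place is recognized as the
# known place whose descriptors best match its own (invented data).
def cosine(u, v):
    """Cosine similarity of two 2-D descriptors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def recognize(query_descriptors, known_places):
    """known_places: dict mapping place name -> list of descriptors."""
    def score(descs):
        # Average best-match similarity over the query's descriptors.
        return sum(max(cosine(q, d) for d in descs)
                   for q in query_descriptors) / len(query_descriptors)
    return max(known_places, key=lambda name: score(known_places[name]))

places = {"plaza": [(1.0, 0.0), (0.7, 0.7)], "garage": [(0.0, 1.0)]}
recognize([(0.9, 0.1)], places)  # -> "plaza"
```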
  • Publication number: 20210389776
    Abstract: A controller for an autonomous vehicle is trained using simulated paths on a roadway and simulated observations formed by transforming images previously acquired on similar paths on that roadway. An essentially unlimited number of paths may be simulated, enabling optimization approaches, including reinforcement learning, to be applied to optimize the controller.
    Type: Application
    Filed: June 11, 2021
    Publication date: December 16, 2021
    Inventors: Daniela Rus, Sertac Karaman, Igor Gilitschenski, Alexander Amini, Julia Moseyko, Jacob Phillips
  • Patent number: 11010622
    Abstract: A method of non-line-of-sight (NLoS) obstacle detection for an ego vehicle is described. The method includes capturing a sequence of images over a period with an image capture device. The method also includes storing the sequence of images in a cyclic buffer. The method further includes registering each image in the cyclic buffer to a projected image. The method includes performing the registering by estimating a homography H for each frame of the sequence of images to project to a view point of a first frame in the sequence of images and remove motion of the ego vehicle in the projected image. The method also includes enhancing the projected image. The method further includes classifying the projected image based on a scene determination. The method also includes issuing a control signal to the vehicle upon classifying the projected image.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: May 18, 2021
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Felix Maximilian Naser, Igor Gilitschenski, Alexander Andre Amini, Christina Liao, Guy Rosman, Sertac Karaman, Daniela Rus
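The cyclic buffer this abstract mentions is a fixed-length store that keeps only the most recent frames. A minimal sketch follows, using string placeholders for images; the class name is invented, and the registration and classification steps of the patented method are not shown.

```python
from collections import deque

# Sketch of a cyclic image buffer: a fixed-length buffer that retains
# only the most recent N frames (strings stand in for real images).
class CyclicFrameBuffer:
    def __init__(self, size):
        self._frames = deque(maxlen=size)

    def push(self, frame):
        self._frames.append(frame)   # oldest frame drops out when full

    def frames(self):
        return list(self._frames)    # oldest-to-newest order

buf = CyclicFrameBuffer(size=3)
for f in ["frame0", "frame1", "frame2", "frame3"]:
    buf.push(f)
buf.frames()  # -> ["frame1", "frame2", "frame3"]
```

Each frame in the buffer would then be warped by its estimated homography to the viewpoint of the first frame before enhancement and classification.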
  • Publication number: 20210133480
    Abstract: An approach to place recognition from an image makes use of the detection of objects at a set of known places as well as at an unknown place. Images of the detected objects in an image of the unknown place are processed to yield respective numerical descriptors, and these descriptors are used to compare the unknown place to the known places to recognize the unknown place. At least some embodiments make use of a trained parameterized image processor to transform an image of an object to an object descriptor, and the training of the processor is meant to preserve distinctions between different instances of a type of object, as well as distinctions between entirely different types of objects.
    Type: Application
    Filed: November 1, 2019
    Publication date: May 6, 2021
    Inventors: Daniela Rus, Sertac Karaman, Igor Gilitschenski, Andrei Cramariuc, Cesar Cadena, Roland Siegwart
  • Publication number: 20210081715
    Abstract: Systems and methods for predicting the trajectory of an object are disclosed herein. One embodiment receives sensor data that includes a location of the object in an environment of the object; accesses a location-specific latent map, the location-specific latent map having been learned together with a neural-network-based trajectory predictor during a training phase, wherein the neural-network-based trajectory predictor is deployed in a robot; inputs, to the neural-network-based trajectory predictor, the location of the object and the location-specific latent map, the location-specific latent map providing, to the neural-network-based trajectory predictor, a set of location-specific biases regarding the environment of the object; and outputs, from the neural-network-based trajectory predictor, a predicted trajectory of the object.
    Type: Application
    Filed: March 31, 2020
    Publication date: March 18, 2021
    Inventors: Guy Rosman, Igor Gilitschenski, Arjun Gupta, Sertac Karaman, Daniela Rus
  • Publication number: 20210049382
    Abstract: An object detection method includes receiving sensor data including a plurality of images associated with a sensor region as an actor traverses an environment, the plurality of images characterizing changes of illumination in the sensor region over time, the sensor region including a region to be traversed by the actor in the future, and processing the plurality of images to determine a change of illumination in the sensor region over time. The processing includes registering the plurality of images to a common coordinate system based at least in part on odometry data characterizing the actor's traversal of the environment, and determining the change of illumination in the sensor region over time based on the registered plurality of images. The method further includes determining an object detection result based at least in part on the change of illumination in the sensor region over time.
    Type: Application
    Filed: October 30, 2020
    Publication date: February 18, 2021
    Inventors: Felix Maximilian Naser, Igor Gilitschenski, Alexander Andre Amini, Christina Liao, Guy Rosman, Sertac Karaman, Daniela Rus