Patents by Inventor Ersin Yumer

Ersin Yumer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11989847
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: May 21, 2024
    Assignee: UATC, LLC
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
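As a purely illustrative aside, here is a minimal sketch of the geometry-aware compositing idea the abstract above describes: an added object is pasted into captured environment data only where its geometry is closer to the camera than the existing scene, so it is occluded correctly. Function names, array shapes, and numbers are hypothetical and are not taken from the patent.

```python
# Hypothetical sketch (not from the patent): depth-aware compositing of a rendered
# object into captured scene data so nearer scene geometry correctly occludes it.
import numpy as np

def composite_with_depth(scene_rgb, scene_depth, obj_rgb, obj_depth, obj_mask):
    """Paste the object into the scene only where it is both present (mask)
    and closer to the camera than the existing scene surface."""
    visible = obj_mask & (obj_depth < scene_depth)   # geometry-aware occlusion test
    out_rgb = scene_rgb.copy()
    out_depth = scene_depth.copy()
    out_rgb[visible] = obj_rgb[visible]
    out_depth[visible] = obj_depth[visible]
    return out_rgb, out_depth

# Toy example: a 4x4 scene where a near "wall" in row 0 occludes part of the object.
scene_rgb = np.zeros((4, 4, 3), dtype=np.uint8)
scene_depth = np.full((4, 4), 10.0)
scene_depth[0, :] = 2.0                               # nearer than the object
obj_rgb = np.full((4, 4, 3), 255, dtype=np.uint8)
obj_depth = np.full((4, 4), 5.0)
obj_mask = np.zeros((4, 4), dtype=bool)
obj_mask[:2, :2] = True
rgb, depth = composite_with_depth(scene_rgb, scene_depth, obj_rgb, obj_depth, obj_mask)
print(rgb[:, :, 0])   # object visible only where not hidden by the near wall
```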
  • Patent number: 11858536
    Abstract: Example aspects of the present disclosure describe determining, using a machine-learned model framework, a motion trajectory for an autonomous platform. The motion trajectory can be determined based at least in part on a plurality of costs, which in turn are based at least in part on a distribution of probabilities conditioned on the motion trajectory.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: January 2, 2024
    Assignee: UATC, LLC
    Inventors: Jerry Junkai Liu, Wenyuan Zeng, Raquel Urtasun, Mehmet Ersin Yumer
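The selection criterion sketched in the abstract above can be illustrated with a small, hypothetical example: each candidate trajectory is scored by combining per-scenario costs with a probability distribution conditioned on that trajectory, and the lowest expected cost is chosen. The helper names and numbers below are invented for illustration only.

```python
# Hypothetical sketch (not the patented method): expected-cost trajectory selection.
import numpy as np

def plan(candidates, cost_fn, prob_fn):
    """Return the candidate with the lowest expected cost, where the expectation uses
    a probability distribution conditioned on the candidate trajectory itself."""
    best, best_cost = None, np.inf
    for traj in candidates:
        probs = prob_fn(traj)                  # e.g. predicted actor behavior given this plan
        costs = cost_fn(traj)                  # e.g. collision / comfort / progress terms
        expected = float(np.dot(probs, costs))
        if expected < best_cost:
            best, best_cost = traj, expected
    return best, best_cost

# Toy usage with made-up numbers for two candidate maneuvers and two scenarios.
candidates = ["keep_lane", "nudge_left"]
cost_fn = lambda t: np.array([1.0, 5.0]) if t == "keep_lane" else np.array([2.0, 1.0])
prob_fn = lambda t: np.array([0.8, 0.2]) if t == "keep_lane" else np.array([0.5, 0.5])
print(plan(candidates, cost_fn, prob_fn))      # -> ('nudge_left', 1.5)
```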
  • Publication number: 20230365143
    Abstract: A system comprises an autonomous vehicle and a control device. The control device detects an event trigger that impacts the autonomous vehicle. In response to detecting the event trigger, the control device enters the autonomous vehicle into a first degraded autonomy mode. In the first degraded autonomy mode, the control device communicates sensor data to an oversight server. The control device receives one or more high-level commands from the oversight server; the one or more high-level commands indicate minimal-risk maneuvers for the autonomous vehicle. The control device also receives a maximum traveling speed for the autonomous vehicle from the oversight server. The control device then navigates the autonomous vehicle using adaptive cruise control according to the one or more high-level commands and the maximum traveling speed.
    Type: Application
    Filed: March 29, 2023
    Publication date: November 16, 2023
    Inventors: Mehmet Ersin Yumer, Xiaodi Hou
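The control flow in the abstract above, reduced to a hypothetical sketch: on an event trigger the control device switches into a degraded autonomy mode, shares sensor data with an oversight server, and follows the server's high-level commands while capping speed. All class names, the `OversightResponse` fields, and the numeric values are illustrative assumptions, not the patented interfaces.

```python
# Hypothetical sketch (interfaces invented for illustration): degraded autonomy mode.
from dataclasses import dataclass

@dataclass
class OversightResponse:
    commands: list          # minimal-risk maneuvers, e.g. ["pull_over"]
    max_speed_mps: float    # speed cap imposed by the oversight server

class ControlDevice:
    def __init__(self, oversight):
        self.oversight = oversight
        self.mode = "nominal"

    def on_event_trigger(self, sensor_data):
        self.mode = "degraded_autonomy_1"
        self.oversight.upload(sensor_data)            # share sensor data with oversight
        resp = self.oversight.poll()                  # high-level commands + speed cap
        return self.navigate(resp)

    def navigate(self, resp: OversightResponse):
        # Adaptive-cruise-style following: never command more than the allowed speed.
        target_speed = min(resp.max_speed_mps, 25.0)  # 25 m/s: assumed nominal limit
        return {"maneuvers": resp.commands, "speed_mps": target_speed}

class FakeOversight:
    def upload(self, data): pass
    def poll(self): return OversightResponse(["pull_over"], max_speed_mps=10.0)

print(ControlDevice(FakeOversight()).on_event_trigger({"camera": "..."}))
```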
  • Publication number: 20230359202
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) also include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Inventors: Raquel Urtasun, Yen-Chen Lin, Andrei Pokrovsky, Mengye Ren, Abbas Sadat, Ersin Yumer
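The two-stage structure described in the abstract above, as a minimal hypothetical sketch: a coarse behavioral stage and a finer trajectory stage both score options with the same unified cost, so the refined trajectory stays consistent with the selected behavior. The cost terms, behaviors, and trajectories are invented placeholders, not the disclosed model.

```python
# Hypothetical sketch (not the disclosed planner): two stages sharing one unified cost.
import numpy as np

def unified_cost(traj, scene):
    """Single cost shared by both stages: progress, comfort, and proximity terms."""
    progress = -traj[-1]                           # farther along is better (negative cost)
    comfort = np.sum(np.diff(traj, n=2) ** 2)      # penalize jerky acceleration
    proximity = scene.get("obstacle_penalty", 0.0)
    return progress + comfort + proximity

def behavioral_stage(scene):
    """Pick a coarse behavior by evaluating a representative trajectory per behavior."""
    behaviors = {"keep_lane": np.linspace(0, 30, 10), "slow_down": np.linspace(0, 15, 10)}
    return min(behaviors, key=lambda b: unified_cost(behaviors[b], scene))

def trajectory_stage(behavior, scene):
    """Refine the chosen behavior into a target trajectory under the same cost."""
    end = 30.0 if behavior == "keep_lane" else 15.0
    candidates = [np.linspace(0, end * s, 10) for s in (0.8, 0.9, 1.0)]
    return min(candidates, key=lambda t: unified_cost(t, scene))

scene = {"obstacle_penalty": 0.0}
behavior = behavioral_stage(scene)
print(behavior, trajectory_stage(behavior, scene)[-1])
```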
  • Patent number: 11755014
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) also include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: September 12, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 11734935
    Abstract: Methods and systems are disclosed for correlating synthetic LiDAR data to a real-world domain for use in training a model used by an autonomous vehicle when operating in an environment. To do this, the system will obtain a data set of synthetic LiDAR data, along with images of a real-world environment. The system will transfer the synthetic LiDAR data to a two-dimensional representation, then use the two-dimensional representation and the images to train a model that a vehicle can use to operate in a real-world environment.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: August 22, 2023
    Assignee: ARGO AI, LLC
    Inventors: Kevin Chen, James Hays, Ersin Yumer
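One common way to obtain the two-dimensional representation mentioned in the abstract above is a range image indexed by azimuth and elevation; the sketch below shows that projection step only. The field-of-view assumption, image size, and function name are illustrative and not taken from the patent.

```python
# Hypothetical sketch (not the patented pipeline): project a synthetic LiDAR point
# cloud into a 2D range image that could be paired with real imagery for training.
import numpy as np

def to_range_image(points, h=32, w=256):
    """Project (N, 3) LiDAR points into an h x w range image indexed by azimuth/elevation."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                                   # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    col = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    row = ((elevation + np.pi / 6) / (np.pi / 3) * (h - 1)).astype(int)  # assume +/-30 deg FOV
    row = np.clip(row, 0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)
    img[row, col] = r                                            # keep range value per cell
    return img

# Toy usage with random synthetic points.
pts = np.random.uniform(-20, 20, size=(1000, 3))
print(to_range_image(pts).shape)
```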
  • Patent number: 11620838
    Abstract: Systems and methods for answering region specific questions are provided. A method includes obtaining a regional scene question including an attribute query and a spatial region of interest for a training scene depicting a surrounding environment of a vehicle. The method includes obtaining a universal embedding for the training scene and an attribute embedding for the attribute query of the scene question. The universal embedding can identify sensory data corresponding to the training scene that can be used to answer questions concerning a number of different attributes in the training scene. The attribute embedding can identify aspects of an attribute that can be used to answer questions specific to the attribute. The method includes determining an answer embedding based on the universal embedding and the attribute embedding and determining a regional scene answer to the regional scene question based on the spatial region of interest and the answer embedding.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: April 4, 2023
    Assignee: UATC, LLC
    Inventors: Sean Segal, Wenjie Luo, Eric Randall Kee, Ersin Yumer, Raquel Urtasun, Abbas Sadat
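A toy, hypothetical rendering of the embedding combination described in the abstract above: per-cell scene features (the universal embedding) are matched against an attribute query embedding, and the answer is read out only over the spatial region of interest. The tensor shapes, threshold, and yes/no readout are invented for illustration.

```python
# Hypothetical sketch (not the disclosed network): region-specific question answering
# by combining a universal scene embedding with an attribute embedding.
import numpy as np

def answer_region_question(universal_emb, attribute_emb, region_mask):
    """universal_emb: (H, W, D) per-cell scene features; attribute_emb: (D,) query;
    region_mask: (H, W) boolean region of interest."""
    answer_map = universal_emb @ attribute_emb        # (H, W) relevance of each cell
    region_scores = answer_map[region_mask]
    # Toy readout: does the queried attribute appear anywhere in the region?
    return bool((region_scores > 0.5).any())

H, W, D = 8, 8, 16
rng = np.random.default_rng(0)
universal = rng.normal(size=(H, W, D)).astype(np.float32)
attribute = rng.normal(size=(D,)).astype(np.float32)
mask = np.zeros((H, W), dtype=bool)
mask[2:5, 2:5] = True                                 # spatial region of interest
print(answer_region_question(universal, attribute, mask))
```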
  • Patent number: 11551429
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: January 10, 2023
    Assignee: UATC, LLC
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
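To make the two-phase training idea in the abstract above concrete, here is a heavily simplified PyTorch sketch: a tiny encoder/decoder is first trained on light masks derived from LDR images, then the same weights are fine-tuned with HDR intensity targets. The layer sizes, losses, and data are invented; this is not the patented architecture.

```python
# Hypothetical sketch (architecture and data are illustrative, not from the patent):
# two-phase training of an encoder plus intensity decoder for illumination estimation.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                        nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

def train_phase(images, targets, loss_fn, steps=2, lr=1e-3):
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(steps):
        pred = decoder(encoder(images))       # intermediate representation -> intensity map
        loss = loss_fn(pred, targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Phase 1: binary light-mask supervision from (plentiful) LDR images.
ldr_images, ldr_masks = torch.rand(4, 3, 32, 32), (torch.rand(4, 1, 32, 32) > 0.9).float()
train_phase(ldr_images, ldr_masks, nn.BCEWithLogitsLoss())
# Phase 2: fine-tune the same weights with HDR intensity targets.
hdr_images, hdr_intensity = torch.rand(4, 3, 32, 32), torch.rand(4, 1, 32, 32) * 10
print(train_phase(hdr_images, hdr_intensity, nn.MSELoss()))
```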
  • Patent number: 11436842
    Abstract: Systems and methods are provided for representing a traffic signal device. The method includes receiving a digital image of a traffic signal device that includes one or more traffic signal elements, representing the traffic signal device as a raster image, each traffic signal element of the traffic signal device being represented by a mask corresponding to a location of the traffic signal element on the traffic signal device, representing each mask in a channel in the raster image, providing the raster image as an input to a neural network to classify a state for each of the one or more traffic signal elements, and receiving, from the neural network, a classified raster image, in which the classified raster image includes a plurality of masks, each mask representing a state of one of the one or more traffic signal elements.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: September 6, 2022
    Assignee: Argo AI, LLC
    Inventors: Guy Hotson, Richard L. Kwant, Ersin Yumer
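A minimal, hypothetical sketch of the input representation described in the abstract above: each traffic signal element becomes a mask in its own channel of a raster image, which is the form fed to the classification network. The bulb layout, raster size, and function name are assumptions made for illustration.

```python
# Hypothetical sketch (not the patented system): rasterize a traffic signal device so
# each signal element is a mask stored in its own channel.
import numpy as np

def rasterize_signal(device_shape, elements):
    """elements: list of (row, col, radius) locations of bulbs on the device face.
    Returns an array of shape (num_elements, H, W) with one mask per channel."""
    h, w = device_shape
    yy, xx = np.mgrid[0:h, 0:w]
    channels = []
    for (r, c, rad) in elements:
        mask = ((yy - r) ** 2 + (xx - c) ** 2) <= rad ** 2   # circular bulb footprint
        channels.append(mask.astype(np.float32))
    return np.stack(channels, axis=0)

# Toy 3-bulb vertical signal head on a 30x10 raster.
raster = rasterize_signal((30, 10), [(5, 5, 3), (15, 5, 3), (25, 5, 3)])
print(raster.shape, raster.sum(axis=(1, 2)))   # (3, 30, 10) and per-bulb pixel counts
```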
  • Publication number: 20220165043
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Application
    Filed: February 10, 2022
    Publication date: May 26, 2022
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
  • Publication number: 20210383616
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Application
    Filed: January 15, 2021
    Publication date: December 9, 2021
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
  • Publication number: 20210287023
    Abstract: Systems and methods are provided for representing a traffic signal device. The method includes receiving a digital image of a traffic signal device that includes one or more traffic signal elements, representing the traffic signal device as a raster image, each traffic signal element of the traffic signal device being represented by a mask corresponding to a location of the traffic signal element on the traffic signal device, representing each mask in a channel in the raster image, providing the raster image as an input to a neural network to classify a state for each of the one or more traffic signal elements, and receiving, from the neural network, a classified raster image, in which the classified raster image includes a plurality of masks, each mask representing a state of one of the one or more traffic signal elements.
    Type: Application
    Filed: March 13, 2020
    Publication date: September 16, 2021
    Inventors: Guy Hotson, Richard L. Kwant, Ersin Yumer
  • Patent number: 11115645
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 7, 2021
    Assignee: ADOBE INC.
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
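A greatly simplified, hypothetical sketch of the pipeline in the abstract above: the source view is warped toward the target viewpoint, the pixels the source cannot explain are marked as the disoccluded region, and a stand-in for the trained completion model fills them. The horizontal-shift "warp" and mean-color "completion" are placeholders, not the patented method.

```python
# Hypothetical sketch (not the patented method): intermediate view plus hole filling.
import numpy as np

def warp_to_target(source, shift):
    """Crude stand-in for a viewpoint change: shift the source image sideways; pixels
    with no source data form the disoccluded region (holes)."""
    h, w, _ = source.shape
    intermediate = np.zeros_like(source)
    hole = np.ones((h, w), dtype=bool)
    intermediate[:, shift:] = source[:, : w - shift]
    hole[:, shift:] = False                        # common region, copied from the source view
    return intermediate, hole

def complete_disoccluded(intermediate, hole):
    """Placeholder for the trained image-completion model: fill holes with the mean
    visible color instead of a learned prediction."""
    filled = intermediate.astype(np.float32)
    mean_color = filled[~hole].reshape(-1, 3).mean(axis=0)
    filled[hole] = mean_color
    return filled.astype(intermediate.dtype)

src = np.random.randint(0, 255, size=(4, 6, 3), dtype=np.uint8)
intermediate, hole = warp_to_target(src, shift=2)
target_view = complete_disoccluded(intermediate, hole)
print(hole.sum(), target_view.shape)               # 8 disoccluded pixels, (4, 6, 3) target view
```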
  • Patent number: 11106902
    Abstract: Certain embodiments detect human-object interactions in image content. For example, human-object interaction metadata is applied to an input image, thereby identifying contact between a part of a depicted human and a part of a depicted object. Applying the human-object interaction metadata involves computing a joint-location heat map by applying a pose estimation subnet to the input image and a contact-point heat map by applying an object contact subnet to the input image. The human-object interaction metadata is generated by applying an interaction-detection subnet to the joint-location heat map and the contact-point heat map. The interaction-detection subnet is trained to identify an interaction based on joint-object contact pairs, where a joint-object contact pair includes a relationship between a human joint location and a contact point. An image search system or other computing system is provided with access to the input image having the human-object interaction metadata.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: August 31, 2021
    Assignee: ADOBE INC.
    Inventors: Zimo Li, Vladimir Kim, Mehmet Ersin Yumer
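As a hypothetical illustration of the pairing step in the abstract above, the sketch below takes a joint-location heat map and a contact-point heat map, finds each map's peak, and flags an interaction when a joint and a contact point are close together. This distance rule is a stand-in for the trained interaction-detection subnet; the names and threshold are invented.

```python
# Hypothetical sketch (not the disclosed subnets): pair heat-map peaks into
# joint-object contact pairs.
import numpy as np

def peak(heatmap):
    """Return the (row, col) of the strongest response in a heat map."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def detect_interaction(joint_heatmaps, contact_heatmaps, max_dist=3.0):
    """Stand-in for the interaction-detection stage: compare joint/contact peak locations."""
    pairs = []
    for j_name, j_map in joint_heatmaps.items():
        for c_name, c_map in contact_heatmaps.items():
            jp, cp = np.array(peak(j_map)), np.array(peak(c_map))
            if np.linalg.norm(jp - cp) <= max_dist:
                pairs.append((j_name, c_name))
    return pairs

H = W = 16
hand = np.zeros((H, W)); hand[8, 8] = 1.0
cup_rim = np.zeros((H, W)); cup_rim[9, 8] = 1.0
print(detect_interaction({"right_hand": hand}, {"cup_rim": cup_rim}))
```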
  • Publication number: 20210263528
    Abstract: Methods and systems are disclosed for correlating synthetic LiDAR data to a real-world domain for use in training a model used by an autonomous vehicle when operating in an environment. To do this, the system will obtain a data set of synthetic LiDAR data, along with images of a real-world environment. The system will transfer the synthetic LiDAR data to a two-dimensional representation, then use the two-dimensional representation and the images to train a model that a vehicle can use to operate in a real-world environment.
    Type: Application
    Filed: May 7, 2021
    Publication date: August 26, 2021
    Inventors: Kevin Chen, James Hays, Ersin Yumer
  • Patent number: 11069099
    Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
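A loose, hypothetical sketch of the idea in the abstract above: a screen-space stroke is mapped onto the 3D object by snapping each sample to the nearest projected vertex (vertex position discovery) and chaining the results in order (path discovery); a real system would then smooth these points into the final curve. The orthographic projection, toy sphere, and function name are illustrative assumptions.

```python
# Hypothetical sketch (not the patented algorithm): snap a 2D stroke onto a 3D object.
import numpy as np

def snap_stroke_to_surface(stroke_2d, vertices, camera_xy=(0, 1)):
    """For each 2D stroke sample, pick the object vertex whose projection is closest,
    then return the ordered 3D points of the inferred curve."""
    proj = vertices[:, list(camera_xy)]                  # orthographic projection to screen
    path = []
    for p in stroke_2d:
        idx = np.argmin(np.linalg.norm(proj - p, axis=1))
        path.append(vertices[idx])
    return np.array(path)

# Toy object: points on a unit sphere; toy stroke: a horizontal sweep across the screen.
theta = np.linspace(0, np.pi, 20)
phi = np.linspace(0, 2 * np.pi, 40)
t, p = np.meshgrid(theta, phi)
sphere = np.stack([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], axis=-1).reshape(-1, 3)
stroke = np.stack([np.linspace(-1, 1, 10), np.zeros(10)], axis=1)
curve_3d = snap_stroke_to_surface(stroke, sphere)
print(curve_3d.shape)   # (10, 3): final curve construction would smooth these points
```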
  • Publication number: 20210200212
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) also include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Application
    Filed: March 20, 2020
    Publication date: July 1, 2021
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 11049296
    Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical ordering representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to the determining that the parameterized path is to modify or extend the existing path.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventor: Mehmet Ersin Yumer
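A small, hypothetical sketch of the workflow in the abstract above: raw input samples are fit with a parameterized curve, and a simple proximity test decides whether that curve extends the existing path or modifies it. The polynomial fit, tolerance, and decision rule are illustrative placeholders, not Adobe's implementation.

```python
# Hypothetical sketch (not the patented implementation): fit a parameterized path and
# decide between extending or modifying an existing path.
import numpy as np

def fit_parameterized(points, degree=3):
    """Fit x(t), y(t) polynomials over t in [0, 1] to un-parameterized input samples."""
    t = np.linspace(0.0, 1.0, len(points))
    px = np.polyfit(t, points[:, 0], degree)
    py = np.polyfit(t, points[:, 1], degree)
    return lambda s: np.stack([np.polyval(px, s), np.polyval(py, s)], axis=-1)

def extend_or_modify(existing, new_points, tol=0.5):
    """If the new stroke begins near the end of the existing path, treat it as an
    extension; otherwise treat it as a modification."""
    curve = fit_parameterized(new_points)
    start = curve(0.0)
    return ("extend" if np.linalg.norm(start - existing[-1]) < tol else "modify"), curve

existing_path = np.stack([np.linspace(0, 5, 20), np.zeros(20)], axis=1)
stroke = np.stack([np.linspace(5.1, 8, 12), 0.1 * np.linspace(5.1, 8, 12)], axis=1)
decision, curve = extend_or_modify(existing_path, stroke)
print(decision, curve(np.array([0.0, 1.0])))
```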
  • Patent number: 11016496
    Abstract: Methods and systems are disclosed for correlating synthetic LiDAR data to a real-world domain for use in training an autonomous vehicle to operate in an environment. To do this, the system will obtain a data set of synthetic LiDAR data, transfer the synthetic LiDAR data to a two-dimensional representation, use the two-dimensional representation to train a model of a real-world environment, and use the trained model of the real-world environment to train an autonomous vehicle.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: May 25, 2021
    Assignee: Argo AI, LLC
    Inventors: Kevin Chen, James Hays, Ersin Yumer