Patents by Inventor Alejandro Troccoli

Alejandro Troccoli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230088912
    Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 23, 2023
    Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
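    A minimal sketch of the pipeline summarized in the abstract above, assuming a per-object LSTM that encodes trajectory history into a state feature, a bi-directional LSTM over all observed objects that encodes a spatial feature, and separate heads for lateral and longitudinal maneuvers. Module names, dimensions, and maneuver classes are illustrative assumptions, not details from the patent.

```python
# Hypothetical maneuver-prediction sketch (PyTorch); names and dimensions are illustrative.
import torch
import torch.nn as nn

class ManeuverPredictor(nn.Module):
    def __init__(self, state_dim=8, hidden=64, n_lateral=3, n_longitudinal=2):
        super().__init__()
        # Encodes each object's trajectory history into a state feature.
        self.state_encoder = nn.LSTM(state_dim, hidden, batch_first=True)
        # A bi-directional LSTM over the state features of all observed objects
        # produces a spatial feature capturing object-to-object context.
        self.spatial_encoder = nn.LSTM(hidden, hidden, batch_first=True,
                                       bidirectional=True)
        self.lateral_head = nn.Linear(3 * hidden, n_lateral)            # e.g. left / keep / right
        self.longitudinal_head = nn.Linear(3 * hidden, n_longitudinal)  # e.g. brake / maintain

    def forward(self, histories):
        # histories: (num_objects, time_steps, state_dim)
        _, (state_feat, _) = self.state_encoder(histories)           # (1, N, hidden)
        state_feat = state_feat.squeeze(0)                           # (N, hidden)
        spatial, _ = self.spatial_encoder(state_feat.unsqueeze(0))   # (1, N, 2*hidden)
        fused = torch.cat([state_feat, spatial.squeeze(0)], dim=-1)  # (N, 3*hidden)
        return self.lateral_head(fused), self.longitudinal_head(fused)

objects = torch.randn(5, 20, 8)   # 5 observed objects, 20 past steps, 8 state values each
lat_logits, lon_logits = ManeuverPredictor()(objects)
print(lat_logits.shape, lon_logits.shape)   # torch.Size([5, 3]) torch.Size([5, 2])
```
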
  • Publication number: 20230079196
    Abstract: Techniques to generate driving scenarios for autonomous vehicles characterize a path in a driving scenario according to metrics such as narrowness and effort. Nodes of the path are assigned a time for action to avoid collision from the node. The generated scenarios may be simulated in a computer.
    Type: Application
    Filed: November 18, 2022
    Publication date: March 16, 2023
    Applicant: NVIDIA Corp.
    Inventors: Siva Kumar Sastry Hari, Iuri Frosio, Zahra Ghodsi, Anima Anandkumar, Timothy Tsai, Stephen W. Keckler, Alejandro Troccoli
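    A minimal sketch of the kind of path characterization the abstract above describes: score each node of a candidate path by narrowness (inverse clearance to the nearest obstacle) and effort (heading change), and assign it a time available for an avoidance action. The formulas, margins, and names are illustrative assumptions, not the patented metrics.

```python
# Hypothetical path-characterization sketch; formulas are illustrative only.
import math

def characterize_path(nodes, obstacles, speed_mps, reaction_margin_s=0.5):
    """nodes: [(x, y), ...]; obstacles: [(x, y), ...]; distances in meters."""
    scored = []
    for i, (x, y) in enumerate(nodes):
        clearance = min(math.hypot(x - ox, y - oy) for ox, oy in obstacles)
        narrowness = 1.0 / (clearance + 1e-6)           # tighter gaps score higher
        if 0 < i < len(nodes) - 1:
            # Effort approximated as the heading change required at this node.
            h_in = math.atan2(y - nodes[i - 1][1], x - nodes[i - 1][0])
            h_out = math.atan2(nodes[i + 1][1] - y, nodes[i + 1][0] - x)
            effort = abs(math.atan2(math.sin(h_out - h_in), math.cos(h_out - h_in)))
        else:
            effort = 0.0
        # Time available to act before reaching the nearest obstacle at the current speed.
        time_for_action = max(clearance / speed_mps - reaction_margin_s, 0.0)
        scored.append({"node": (x, y), "narrowness": narrowness,
                       "effort": effort, "time_for_action_s": time_for_action})
    return scored

path = [(0.0, 0.0), (5.0, 0.5), (10.0, 2.0)]
for entry in characterize_path(path, obstacles=[(6.0, 3.0)], speed_mps=10.0):
    print(entry)
```
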
  • Patent number: 11550325
    Abstract: Techniques to generate driving scenarios for autonomous vehicles characterize a path in a driving scenario according to metrics such as narrowness and effort. Nodes of the path are assigned a time for action to avoid collision from the node. The generated scenarios may be simulated in a computer.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: January 10, 2023
    Assignee: NVIDIA CORP.
    Inventors: Siva Kumar Sastry Hari, Iuri Frosio, Zahra Ghodsi, Anima Anandkumar, Timothy Tsai, Stephen W. Keckler, Alejandro Troccoli
  • Patent number: 11514293
    Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: November 29, 2022
    Assignee: NVIDIA Corporation
    Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
  • Publication number: 20210389769
    Abstract: Techniques to generate driving scenarios for autonomous vehicles characterize a path in a driving scenario according to metrics such as narrowness and effort. Nodes of the path are assigned a time for action to avoid collision from the node. The generated scenarios may be simulated in a computer.
    Type: Application
    Filed: June 10, 2020
    Publication date: December 16, 2021
    Applicant: NVIDIA Corp.
    Inventors: Siva Kumar Sastry Hari, Iuri Frosio, Zahra Ghodsi, Anima Anandkumar, Timothy Tsai, Stephen W. Keckler, Alejandro Troccoli
  • Publication number: 20210183088
    Abstract: Apparatuses, systems, and techniques to identify object distance with one or more cameras. In at least one embodiment, one or more cameras capture at least two images, where one image is transformed to the other, and a neural network determines whether an object is in front of or behind a known distance, whereby the object's distance may be determined after a set of known distances are analyzed.
    Type: Application
    Filed: December 13, 2019
    Publication date: June 17, 2021
    Inventors: Orazio Gallo, Abhishek Badki, Alejandro Troccoli
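    A minimal sketch of the search strategy implied by the abstract above: for each candidate distance, one image is warped toward the other as if the scene sat at that distance, a network answers "in front of or behind", and repeating this over a set of known distances brackets the object's distance. The binary search and the warp_to_distance / is_object_in_front_of callables are placeholder assumptions standing in for a real image warp and a trained network.

```python
# Hypothetical distance-bracketing sketch; the two callables are stand-ins.
def estimate_distance(image_a, image_b, candidate_distances,
                      warp_to_distance, is_object_in_front_of):
    lo, hi = 0, len(candidate_distances) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        d = candidate_distances[mid]
        warped = warp_to_distance(image_a, d)        # align image_a to image_b at depth d
        if is_object_in_front_of(warped, image_b):   # binary decision from the network
            hi = mid                                 # object is nearer than d
        else:
            lo = mid + 1                             # object is at or beyond d
    return candidate_distances[lo]

# Toy usage with stand-ins: pretend the object actually sits at 7 m.
distances = [1, 2, 4, 7, 12, 20, 35]
print(estimate_distance(None, None, distances,
                        warp_to_distance=lambda img, d: d,
                        is_object_in_front_of=lambda warped, ref: warped >= 7))   # -> 7
```
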
  • Publication number: 20210124353
    Abstract: Sensors measure information about actors or other objects near an object, such as a vehicle or robot, to be maneuvered. Sensor data is used to determine a sequence of possible actions for the maneuverable object to achieve a determined goal. For each possible action to be considered, one or more probable reactions of the nearby actors or objects are determined. This can take the form of a decision tree in some embodiments, with alternating levels of nodes corresponding to possible actions of the present object and probable reactive actions of one or more other vehicles or actors. Machine learning can be used to determine the probabilities, as well as to project out the options along the paths of the decision tree including the sequences. A value function is used to generate a value for each considered sequence, or path, and a path having a highest value is selected for use in determining how to navigate the object.
    Type: Application
    Filed: January 4, 2021
    Publication date: April 29, 2021
    Inventors: Bill Dally, Stephen Tyree, Iuri Frosio, Alejandro Troccoli
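    A minimal sketch of the planning scheme the abstract above describes: a shallow tree with alternating levels of ego actions and probabilistic reactions of nearby actors, scored by a value function, with the highest-value action sequence selected. The state, actions, probabilities, and value function here are toy assumptions.

```python
# Hypothetical decision-tree planning sketch; all models below are toys.
def plan(state, ego_actions, react_model, value_fn, transition, depth=2):
    """react_model(state, action) -> [(reaction, probability), ...]"""
    if depth == 0:
        return value_fn(state), []
    best_value, best_path = float("-inf"), []
    for action in ego_actions:
        expected, continuation = 0.0, []
        for reaction, prob in react_model(state, action):
            next_state = transition(state, action, reaction)
            value, path = plan(next_state, ego_actions, react_model,
                               value_fn, transition, depth - 1)
            expected += prob * value
            continuation = path              # keep one representative continuation
        if expected > best_value:
            best_value, best_path = expected, [action] + continuation
    return best_value, best_path

# Toy 1-D example: ego position vs. another actor's position along a lane.
value, path = plan(
    state=(0.0, 3.0),
    ego_actions=["keep", "accelerate"],
    react_model=lambda s, a: [("yield", 0.7), ("block", 0.3)],
    value_fn=lambda s: -abs(s[1] - s[0] - 2.0),   # prefer keeping a 2 m gap
    transition=lambda s, a, r: (s[0] + (1.0 if a == "accelerate" else 0.5),
                                s[1] + (1.0 if r == "yield" else 0.2)),
)
print(value, path)
```
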
  • Publication number: 20200249674
    Abstract: Sensors measure information about actors or other objects near an object, such as a vehicle or robot, to be maneuvered. Sensor data is used to determine a sequence of possible actions for the maneuverable object to achieve a determined goal. For each possible action to be considered, one or more probable reactions of the nearby actors or objects are determined. This can take the form of a decision tree in some embodiments, with alternating levels of nodes corresponding to possible actions of the present object and probable reactive actions of one or more other vehicles or actors. Machine learning can be used to determine the probabilities, as well as to project out the options along the paths of the decision tree including the sequences. A value function is used to generate a value for each considered sequence, or path, and a path having a highest value is selected for use in determining how to navigate the object.
    Type: Application
    Filed: February 5, 2019
    Publication date: August 6, 2020
    Inventors: Bill Dally, Stephen Tyree, Iuri Frosio, Alejandro Troccoli
  • Publication number: 20200082248
    Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 12, 2020
    Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
  • Patent number: 10027893
    Abstract: Real-time video stabilization for mobile devices based on on-board motion sensing. In accordance with a method embodiment of the present invention, a first image frame from a camera at a first time is accessed. A second image frame from the camera at a subsequent time is accessed. A crop polygon around scene content common to the first image frame and the second image frame is identified. Movement information describing movement of the camera in an interval between the first time and the second time is accessed. The crop polygon is warped to remove motion distortions of the second image frame using the movement information. The warping may include defining a virtual camera that remains static when the movement of the camera is below a movement threshold. The movement information may describe the movement of the camera at each scan line of the second image frame.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: July 17, 2018
    Assignee: Nvidia Corporation
    Inventors: Steven Bell, Alejandro Troccoli, Kari Pulli
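    A minimal sketch of the stabilization idea in the abstract above: a virtual camera holds still while measured device motion stays below a threshold and follows the real camera otherwise, and the difference between the two paths is used to shift the crop window (a simplified stand-in for the per-scan-line crop-polygon warp). The single-axis model, threshold, and follow rate are illustrative assumptions.

```python
# Hypothetical single-axis stabilizer sketch; constants are illustrative.
class Stabilizer:
    def __init__(self, movement_threshold=0.02, follow_rate=0.2):
        self.movement_threshold = movement_threshold   # radians
        self.follow_rate = follow_rate
        self.real_angle = 0.0      # integrated gyro angle of the device
        self.virtual_angle = 0.0   # angle of the virtual (stabilized) camera

    def update(self, gyro_rate, dt):
        self.real_angle += gyro_rate * dt
        drift = self.real_angle - self.virtual_angle
        # The virtual camera stays static for small motion, follows large motion.
        if abs(drift) > self.movement_threshold:
            self.virtual_angle += self.follow_rate * drift
        return self.real_angle - self.virtual_angle    # correction to warp the crop by

    def crop_offset_px(self, correction, focal_px):
        # Pixel shift to apply to the crop window for this frame.
        return -correction * focal_px

stab = Stabilizer()
for rate in [0.001, 0.001, 0.5, 0.4, 0.0]:             # gyro samples in rad/s
    correction = stab.update(rate, dt=1 / 30)
    print(round(stab.crop_offset_px(correction, focal_px=1500.0), 1))
```
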
  • Publication number: 20170332018
    Abstract: Real-time video stabilization for mobile devices based on on-board motion sensing. In accordance with a method embodiment of the present invention, a first image frame from a camera at a first time is accessed. A second image frame from the camera at a subsequent time is accessed. A crop polygon around scene content common to the first image frame and the second image frame is identified. Movement information describing movement of the camera in an interval between the first time and the second time is accessed. The crop polygon is warped to remove motion distortions of the second image frame using the movement information. The warping may include defining a virtual camera that remains static when the movement of the camera is below a movement threshold. The movement information may describe the movement of the camera at each scan line of the second image frame.
    Type: Application
    Filed: May 10, 2016
    Publication date: November 16, 2017
    Inventors: Steven Bell, Alejandro Troccoli, Kari Pulli
  • Publication number: 20170085656
    Abstract: Methods of determining an absolute orientation and position of a mobile computing device are described for use in augmented reality applications, for instance. In one approach, the framework implemented herein detects known objects within a frame of a video feed. The video feed is captured in real time from a camera connected to a mobile computing device such as a smartphone or tablet computer, and location coordinates are associated with one or more known objects detected in the video feed. Based on the location coordinates of the known objects within the video frame, the user's position and orientation are triangulated with a high degree of precision.
    Type: Application
    Filed: September 22, 2015
    Publication date: March 23, 2017
    Inventors: Joshua Abbott, James van Welzen, Alejandro Troccoli, Asad Ullah Naweed
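    A minimal sketch of the localization idea in the abstract above: given the known world coordinates of objects detected in a video frame and the bearing at which each appears to the camera, find the camera pose that best explains those bearings. A real system would use a proper solver; this coarse grid search over a 2-D pose is purely illustrative.

```python
# Hypothetical pose-from-known-objects sketch; grid search is for illustration only.
import math

def localize(landmarks, observed_bearings, search=range(0, 21)):
    """landmarks: [(x, y), ...] in meters; observed_bearings: radians in camera frame."""
    best_err, best_pose = float("inf"), None
    for x in search:
        for y in search:
            for heading_deg in range(0, 360, 2):
                heading = math.radians(heading_deg)
                err = 0.0
                for (lx, ly), b in zip(landmarks, observed_bearings):
                    expected = math.atan2(ly - y, lx - x) - heading
                    diff = math.atan2(math.sin(expected - b), math.cos(expected - b))
                    err += diff * diff
                if err < best_err:
                    best_err, best_pose = err, (x, y, heading_deg)
    return best_pose

# Camera truly at (5, 5) facing 0 degrees, observing three known objects.
landmarks = [(10, 5), (5, 12), (0, 5)]
bearings = [math.atan2(ly - 5, lx - 5) for lx, ly in landmarks]
print(localize(landmarks, bearings))   # -> (5, 5, 0)
```
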
  • Patent number: 9571818
    Abstract: Techniques for generating robust depth maps from stereo images are described. A robust depth map is generated from a set of stereo images captured with and without flash illumination. The depth map is more robust than depth maps generated using conventional techniques because a pixel-matching algorithm is implemented that weights pixels in a matching window according to the ratio of light intensity captured using different flash illumination levels. The ratio map provides a rough estimate of depth relative to neighboring pixels that enables the flash/no-flash pixel-matching algorithm to devalue pixels that appear to be located at different depths than the central pixel in the matching window. In addition, the ratio map may be used to filter the generated depth map to generate a smooth estimate for the depth of objects within the stereo image.
    Type: Grant
    Filed: June 7, 2012
    Date of Patent: February 14, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Kari Pulli, Alejandro Troccoli, Changyin Zhou
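    A minimal sketch of the weighting idea in the abstract above: the flash/no-flash intensity ratio falls off with distance, so pixels in a matching window whose ratio differs from the center pixel's likely lie at a different depth and are down-weighted in the stereo matching cost. Array names, the window size, and the Gaussian weighting are illustrative assumptions.

```python
# Hypothetical ratio-weighted matching-cost sketch; parameters are illustrative.
import numpy as np

def weighted_matching_cost(left, right, ratio_left, x, y, disparity,
                           window=3, sigma=0.1):
    half = window // 2
    cost, weight_sum = 0.0, 0.0
    center_ratio = ratio_left[y, x]
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            yy, xl, xr = y + dy, x + dx, x + dx - disparity
            if xr < 0:
                continue
            # Down-weight pixels whose flash/no-flash ratio differs from the
            # window center, i.e. pixels that likely lie at another depth.
            w = np.exp(-((ratio_left[yy, xl] - center_ratio) ** 2) / (2 * sigma ** 2))
            cost += w * abs(float(left[yy, xl]) - float(right[yy, xr]))
            weight_sum += w
    return cost / max(weight_sum, 1e-6)

rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, -4, axis=1)   # synthetic right view with true disparity 4
ratio = np.ones((32, 32))           # flat scene -> uniform flash/no-flash ratio
costs = {d: weighted_matching_cost(left, right, ratio, 16, 16, d) for d in range(8)}
print(min(costs, key=costs.get))    # -> 4
```
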
  • Publication number: 20130329015
    Abstract: Techniques for generating robust depth maps from stereo images are described. A robust depth map is generated from a set of stereo images captured with and without flash illumination. The depth map is more robust than depth maps generated using conventional techniques because a pixel-matching algorithm is implemented that weights pixels in a matching window according to the ratio of light intensity captured using different flash illumination levels. The ratio map provides a rough estimate of depth relative to neighboring pixels that enables the flash/no-flash pixel-matching algorithm to devalue pixels that appear to be located at different depths than the central pixel in the matching window. In addition, the ratio map may be used to filter the generated depth map to generate a smooth estimate for the depth of objects within the stereo image.
    Type: Application
    Filed: June 7, 2012
    Publication date: December 12, 2013
    Inventors: Kari Pulli, Alejandro Troccoli, Changyin Zhou
  • Publication number: 20110067038
    Abstract: The graphics co-processing technique includes loading a shim layer library. The shim layer library loads and initializes a device driver interface of a first class on the primary adapter and a device driver interface of a second class on an unattached adapter. The shim layer also translates calls between the first device driver interface of the first class on the primary adapter and the second device driver interface of the second class on the unattached adapter.
    Type: Application
    Filed: December 30, 2009
    Publication date: March 17, 2011
    Applicant: NVIDIA CORPORATION
    Inventors: Alejandro Troccoli, Franck Diard
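    A much-simplified sketch of the shim idea in the abstract above: a thin layer holds one driver interface for the primary adapter and one for the unattached adapter, and translates calls on one into calls on the other. The classes and method names below are invented for illustration; the patent itself concerns Windows display driver interfaces, which are not modeled here.

```python
# Hypothetical shim-layer sketch; all classes and methods are invented stand-ins.
class PrimaryAdapterInterface:
    def present(self, frame):
        print(f"primary adapter presents {frame}")

class UnattachedAdapterInterface:
    def render(self, scene):
        print(f"unattached adapter renders {scene}")
        return f"frame({scene})"

class ShimLayer:
    """Loads both interfaces and routes calls between them."""
    def __init__(self):
        self.primary = PrimaryAdapterInterface()        # interface of a first class
        self.unattached = UnattachedAdapterInterface()  # interface of a second class

    def draw(self, scene):
        # One incoming call is translated into render-on-unattached + present-on-primary.
        frame = self.unattached.render(scene)
        self.primary.present(frame)

ShimLayer().draw("triangle")
```
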
  • Publication number: 20110063304
    Abstract: The graphics co-processing technique includes receiving a display operation for execution by a graphics processing unit on an unattached adapter. The display operation is split into a copy from a frame buffer of the graphics processing unit on the unattached adapter to a buffer in system memory, a copy from the buffer in system memory to a frame buffer of the graphics processing unit on a primary adapter, and a present from the frame buffer of the graphics processing unit on the primary adapter to a display. Execution of the copy from the frame buffer of the graphics processing unit on the unattached adapter to the buffer in system memory and the copy from the buffer in system memory to the frame buffer of the graphics processing unit on the primary adapter are synchronized.
    Type: Application
    Filed: December 29, 2009
    Publication date: March 17, 2011
    Applicant: NVIDIA CORPORATION
    Inventors: Franck Diard, Alejandro Troccoli
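    A minimal sketch of the split the abstract above describes: a present on the unattached adapter becomes a copy from its frame buffer to system memory, a copy from system memory to the primary adapter's frame buffer, and a present, with the two copies synchronized so the second never reads a half-written staging buffer. Queues and threads stand in for frame buffers and driver-level synchronization.

```python
# Hypothetical synchronization sketch; a queue stands in for the staging buffer.
import threading
import queue

system_memory = queue.Queue(maxsize=1)    # staging buffer in system memory

def copy_from_unattached(frames):
    for frame in frames:
        system_memory.put(frame)          # blocks until the staging buffer is free

def copy_to_primary_and_present(n_frames):
    for _ in range(n_frames):
        frame = system_memory.get()       # blocks until the staging copy has completed
        print(f"present on primary adapter: {frame}")

frames = [f"frame-{i}" for i in range(3)]
producer = threading.Thread(target=copy_from_unattached, args=(frames,))
consumer = threading.Thread(target=copy_to_primary_and_present, args=(len(frames),))
producer.start(); consumer.start()
producer.join(); consumer.join()
```
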
  • Publication number: 20110063305
    Abstract: The graphics co-processing technique includes loading and initializing a device driver interface and a device specific kernel mode driver for a graphics processing unit on a primary adapter. A device driver interface and a device specific kernel mode driver for a graphics processing unit on an unattached adapter are also loaded and initialized without the device driver interface talking back to a runtime application programming interface or a thunk layer when a particular version of an operating system will not allow the device driver interface on the unattached adapter to be loaded.
    Type: Application
    Filed: December 29, 2009
    Publication date: March 17, 2011
    Applicant: NVIDIA CORPORATION
    Inventors: Franck Diard, Alejandro Troccoli