Patents by Inventor Alejandro Troccoli
Alejandro Troccoli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230088912
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Application
Filed: September 26, 2022
Publication date: March 23, 2023
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
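The pipeline this abstract describes (per-object state encoding, a spatial encoding over neighboring objects, then lateral/longitudinal maneuver prediction) can be sketched with toy stand-in encoders. The patent uses LSTMs for both encoding stages; the function names, feature shapes, and three-way maneuver classes below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_state(history):
    """Toy stand-in for the per-object state encoder (an LSTM in the
    patent): summarize a (T, 2) trajectory history into a fixed vector."""
    deltas = np.diff(history, axis=0)              # per-step displacement
    return np.concatenate([history[-1], deltas.mean(axis=0)])

def encode_spatial(state_feats):
    """Toy stand-in for the bi-directional LSTM over observed objects:
    pool every object's state feature into one spatial feature."""
    return np.mean(state_feats, axis=0)

def predict_maneuver(state, spatial, w_lat, w_lon):
    """Linear heads (assumed shapes) for lateral / longitudinal maneuvers."""
    x = np.concatenate([state, spatial])
    lat = np.argmax(w_lat @ x)   # e.g. 0=keep lane, 1=left, 2=right
    lon = np.argmax(w_lon @ x)   # e.g. 0=keep speed, 1=brake, 2=accelerate
    return lat, lon

# Two objects with 5-step (x, y) histories observed by the ego-vehicle.
histories = [rng.normal(size=(5, 2)) for _ in range(2)]
states = np.stack([encode_state(h) for h in histories])
spatial = encode_spatial(states)
w_lat, w_lon = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
print(predict_maneuver(states[0], spatial, w_lat, w_lon))
```

The predicted maneuver pair would then condition the future-location prediction that the abstract mentions.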
-
Publication number: 20230079196
Abstract: Techniques to generate driving scenarios for autonomous vehicles characterize a path in a driving scenario according to metrics such as narrowness and effort. Each node of the path is assigned a time within which action must be taken to avoid a collision from that node. The generated scenarios may be simulated on a computer.
Type: Application
Filed: November 18, 2022
Publication date: March 16, 2023
Applicant: NVIDIA Corp.
Inventors: Siva Kumar Sastry Hari, Iuri Frosio, Zahra Ghodsi, Anima Anandkumar, Timothy Tsai, Stephen W. Keckler, Alejandro Troccoli
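The abstract does not define its "time for action" or "narrowness" metrics, so the formulas below are plausible guesses for illustration only: time for action assumes constant speed and straight-line braking at a maximum deceleration, and narrowness is taken as lateral clearance relative to lane width.

```python
def time_for_action(node_dist_to_obstacle, speed, max_decel):
    """Assumed formulation: from a node, the latest time by which braking
    at max_decel must begin to avoid collision.  Braking distance is
    v^2 / (2a), leaving (d - v^2/(2a)) of free travel at speed v."""
    braking_dist = speed ** 2 / (2.0 * max_decel)
    free_dist = node_dist_to_obstacle - braking_dist
    return max(free_dist / speed, 0.0)

def narrowness(lane_width, vehicle_width):
    """Assumed metric: fraction of the lane left as lateral margin
    (0 = no margin, approaching 1 = wide open)."""
    return 1.0 - vehicle_width / lane_width

# 50 m to the obstacle at 10 m/s with 5 m/s^2 braking: brake within 4 s.
print(time_for_action(50.0, 10.0, 5.0))   # → 4.0
print(narrowness(4.0, 2.0))               # → 0.5
```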
-
Patent number: 11550325
Abstract: Techniques to generate driving scenarios for autonomous vehicles characterize a path in a driving scenario according to metrics such as narrowness and effort. Each node of the path is assigned a time within which action must be taken to avoid a collision from that node. The generated scenarios may be simulated on a computer.
Type: Grant
Filed: June 10, 2020
Date of Patent: January 10, 2023
Assignee: NVIDIA CORP.
Inventors: Siva Kumar Sastry Hari, Iuri Frosio, Zahra Ghodsi, Anima Anandkumar, Timothy Tsai, Stephen W. Keckler, Alejandro Troccoli
-
Patent number: 11514293
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Grant
Filed: September 9, 2019
Date of Patent: November 29, 2022
Assignee: NVIDIA Corporation
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
-
Publication number: 20210389769
Abstract: Techniques to generate driving scenarios for autonomous vehicles characterize a path in a driving scenario according to metrics such as narrowness and effort. Each node of the path is assigned a time within which action must be taken to avoid a collision from that node. The generated scenarios may be simulated on a computer.
Type: Application
Filed: June 10, 2020
Publication date: December 16, 2021
Applicant: NVIDIA Corp.
Inventors: Siva Kumar Sastry Hari, Iuri Frosio, Zahra Ghodsi, Anima Anandkumar, Timothy Tsai, Stephen W. Keckler, Alejandro Troccoli
-
Publication number: 20210183088
Abstract: Apparatuses, systems, and techniques to identify object distance with one or more cameras. In at least one embodiment, one or more cameras capture at least two images, where one image is transformed to the other, and a neural network determines whether said object is in front of or behind a known distance, whereby the object's distance may be determined once a set of known distances has been analyzed.
Type: Application
Filed: December 13, 2019
Publication date: June 17, 2021
Inventors: Orazio Gallo, Abhishek Badki, Alejandro Troccoli
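The core idea above — a binary "in front of or behind a known distance" decision repeated over a set of known distances — amounts to bracketing the object's depth. A minimal sketch, with the neural network replaced by a ground-truth comparison and the plane set and interval return being illustrative assumptions:

```python
def classify_in_front(true_depth, plane_depth):
    """Stand-in for the neural network in the abstract: True when the
    object lies in front of (closer than) the candidate plane."""
    return true_depth < plane_depth

def estimate_depth(classify, planes):
    """Bracket the object's depth by testing a sorted set of known
    distances; return the (near, far) interval the object falls in."""
    near = 0.0
    for plane in planes:
        if classify(plane):
            return (near, plane)
        near = plane
    return (near, float("inf"))

planes = [5.0, 10.0, 20.0, 40.0]
interval = estimate_depth(lambda p: classify_in_front(12.0, p), planes)
print(interval)  # → (10.0, 20.0)
```

Finer plane spacing (or a binary search over the planes) would tighten the bracket at the cost of more classifier evaluations.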
-
Publication number: 20210124353
Abstract: Sensors measure information about actors or other objects near an object, such as a vehicle or robot, to be maneuvered. Sensor data is used to determine a sequence of possible actions for the maneuverable object to achieve a determined goal. For each possible action to be considered, one or more probable reactions of the nearby actors or objects are determined. This can take the form of a decision tree in some embodiments, with alternating levels of nodes corresponding to possible actions of the present object and probable reactive actions of one or more other vehicles or actors. Machine learning can be used to determine the probabilities, as well as to project the options out along the paths of the decision tree, including the sequences. A value function is used to generate a value for each considered sequence, or path, and the path having the highest value is selected for use in determining how to navigate the object.
Type: Application
Filed: January 4, 2021
Publication date: April 29, 2021
Inventors: Bill Dally, Stephen Tyree, Iuri Frosio, Alejandro Troccoli
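The decision tree described above — our candidate actions alternating with other actors' probable reactions, scored by a value function — is essentially an expectimax search. A minimal sketch under that assumption; the toy 1-D dynamics, the specific lambdas, and returning only the best first action are all illustrative choices, not the patented method:

```python
def plan(state, depth, actions, reactions, transition, value):
    """Expectimax over the decision tree: at each level, try each of our
    actions, average the value over the other actor's probable reactions,
    and keep the action whose expected value is highest."""
    if depth == 0:
        return value(state), None
    best_v, best_a = float("-inf"), None
    for a in actions(state):
        expected = 0.0
        for r, p in reactions(state, a):       # probable reactions + probs
            v, _ = plan(transition(state, a, r), depth - 1,
                        actions, reactions, transition, value)
            expected += p * v
        if expected > best_v:
            best_v, best_a = expected, a
    return best_v, best_a

# Toy 1-D example: drive our position toward a goal at x = 3 while the
# other actor nudges us by 0 or +0.1 with equal probability.
actions = lambda s: [-1, 0, 1]
reactions = lambda s, a: [(0, 0.5), (1, 0.5)]
transition = lambda s, a, r: s + a + 0.1 * r
value = lambda s: -abs(s - 3)
print(plan(0.0, 2, actions, reactions, transition, value))
# The best first action is +1 (move toward the goal).
```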
-
Publication number: 20200249674
Abstract: Sensors measure information about actors or other objects near an object, such as a vehicle or robot, to be maneuvered. Sensor data is used to determine a sequence of possible actions for the maneuverable object to achieve a determined goal. For each possible action to be considered, one or more probable reactions of the nearby actors or objects are determined. This can take the form of a decision tree in some embodiments, with alternating levels of nodes corresponding to possible actions of the present object and probable reactive actions of one or more other vehicles or actors. Machine learning can be used to determine the probabilities, as well as to project the options out along the paths of the decision tree, including the sequences. A value function is used to generate a value for each considered sequence, or path, and the path having the highest value is selected for use in determining how to navigate the object.
Type: Application
Filed: February 5, 2019
Publication date: August 6, 2020
Inventors: Bill Dally, Stephen Tyree, Iuri Frosio, Alejandro Troccoli
-
Publication number: 20200082248
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Application
Filed: September 9, 2019
Publication date: March 12, 2020
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
-
Patent number: 10027893
Abstract: Real-time video stabilization for mobile devices based on on-board motion sensing. In accordance with a method embodiment of the present invention, a first image frame from a camera at a first time is accessed. A second image frame from the camera at a subsequent time is accessed. A crop polygon around scene content common to the first image frame and the second image frame is identified. Movement information describing movement of the camera in the interval between the first time and the second time is accessed. The crop polygon is warped using the movement information to remove motion distortions from the second image frame. The warping may include defining a virtual camera that remains static when the movement of the camera is below a movement threshold. The movement information may describe the movement of the camera at each scan line of the second image frame.
Type: Grant
Filed: May 10, 2016
Date of Patent: July 17, 2018
Assignee: Nvidia Corporation
Inventors: Steven Bell, Alejandro Troccoli, Kari Pulli
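The virtual-camera idea in this abstract can be sketched in a heavily simplified form: the full method warps a crop polygon, potentially per scan line, while the version below reduces everything to a 2-D translation with a single virtual-camera state. The threshold, smoothing constant, and function name are illustrative assumptions:

```python
import numpy as np

def stabilize(frame_offsets, threshold=2.0, smoothing=0.9):
    """Simplified virtual-camera stabilizer: the virtual camera holds
    still while measured motion stays under `threshold`, and otherwise
    eases toward the real camera.  `frame_offsets` are per-frame camera
    translations (e.g. integrated from the gyroscope).  Returns the
    per-frame crop offsets that cancel the residual shake."""
    virtual = np.zeros(2)
    crops = []
    for real in np.cumsum(frame_offsets, axis=0):
        err = real - virtual
        if np.linalg.norm(err) > threshold:
            # Large motion: follow the real camera (intentional pan).
            virtual = virtual + (1.0 - smoothing) * err
        crops.append(real - virtual)    # shift the crop to cancel shake
    return np.array(crops)

# Small jitter stays under the threshold: the virtual camera never moves,
# so the crop absorbs all of the motion.
print(stabilize(np.array([[0.5, 0.0], [0.5, 0.0]])))
```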
-
Publication number: 20170332018
Abstract: Real-time video stabilization for mobile devices based on on-board motion sensing. In accordance with a method embodiment of the present invention, a first image frame from a camera at a first time is accessed. A second image frame from the camera at a subsequent time is accessed. A crop polygon around scene content common to the first image frame and the second image frame is identified. Movement information describing movement of the camera in the interval between the first time and the second time is accessed. The crop polygon is warped using the movement information to remove motion distortions from the second image frame. The warping may include defining a virtual camera that remains static when the movement of the camera is below a movement threshold. The movement information may describe the movement of the camera at each scan line of the second image frame.
Type: Application
Filed: May 10, 2016
Publication date: November 16, 2017
Inventors: Steven Bell, Alejandro Troccoli, Kari Pulli
-
Publication number: 20170085656
Abstract: Methods of determining the absolute orientation and position of a mobile computing device are described, for use in augmented reality applications, for instance. In one approach, the framework implemented herein detects known objects within a frame of a video feed. The video feed is captured in real time from a camera connected to a mobile computing device such as a smartphone or tablet computer, and location coordinates are associated with one or more known objects detected in the video feed. Based on the location coordinates of the known objects within the video frame, the user's position and orientation are triangulated with a high degree of precision.
Type: Application
Filed: September 22, 2015
Publication date: March 23, 2017
Inventors: Joshua ABBOTT, James van WELZEN, Alejandro TROCCOLI, Asad Ullah NAWEED
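The triangulation step can be sketched as a least-squares intersection of bearing lines: each detected known object contributes one line through its known location toward the camera. This is a 2-D simplification with world-frame bearings assumed given (the patent works from image coordinates); the function name is illustrative:

```python
import numpy as np

def triangulate(landmarks, bearings):
    """Least-squares 2-D position fix from bearings to known landmarks.
    Each bearing t defines the line through its landmark with direction
    (cos t, sin t); the camera lies near the intersection of the lines.
    A point (x, y) on the line satisfies
        sin(t) * x - cos(t) * y = sin(t) * lx - cos(t) * ly."""
    A, b = [], []
    for (lx, ly), t in zip(landmarks, bearings):
        A.append([np.sin(t), -np.cos(t)])
        b.append(np.sin(t) * lx - np.cos(t) * ly)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Camera at (1, 2): landmark (5, 2) is due east (bearing 0) and
# landmark (1, 6) is due north (bearing pi/2).
print(triangulate([(5.0, 2.0), (1.0, 6.0)], [0.0, np.pi / 2]))
```

With more than two landmarks the system is overdetermined and the least-squares solve averages out detection noise, which is where the "high degree of precision" would come from.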
-
Patent number: 9571818
Abstract: Techniques for generating robust depth maps from stereo images are described. A robust depth map is generated from a set of stereo images captured with and without flash illumination. The depth map is more robust than depth maps generated using conventional techniques because a pixel-matching algorithm is implemented that weights pixels in a matching window according to the ratio of light intensity captured using different flash illumination levels. The ratio map provides a rough estimate of depth relative to neighboring pixels that enables the flash/no-flash pixel-matching algorithm to devalue pixels that appear to be located at different depths than the central pixel in the matching window. In addition, the ratio map may be used to filter the generated depth map to generate a smooth estimate for the depth of objects within the stereo image.
Type: Grant
Filed: June 7, 2012
Date of Patent: February 14, 2017
Assignee: NVIDIA CORPORATION
Inventors: Kari Pulli, Alejandro Troccoli, Changyin Zhou
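The ratio map and window weighting described above can be sketched as follows. The Gaussian form of the weighting and the sigma/epsilon constants are assumptions for illustration; the patent only specifies that pixels with dissimilar flash/no-flash ratios are devalued:

```python
import numpy as np

def ratio_map(flash, no_flash, eps=1e-6):
    """Flash-to-ambient intensity ratio per pixel.  Because flash
    intensity falls off with distance, similar ratios suggest similar
    depths; eps guards against division by zero in dark pixels."""
    return (flash + eps) / (no_flash + eps)

def matching_weights(window_ratios, sigma=0.5):
    """Assumed Gaussian weighting over a matching window: pixels whose
    ratio differs from the central pixel's ratio (i.e. that appear to
    sit at a different depth) contribute less to the match cost."""
    center = window_ratios[window_ratios.shape[0] // 2,
                           window_ratios.shape[1] // 2]
    return np.exp(-((window_ratios - center) ** 2) / (2 * sigma ** 2))

# A uniform window (all pixels at the same apparent depth) keeps full
# weight everywhere; an outlier ratio is heavily devalued.
r = ratio_map(np.full((3, 3), 4.0), np.full((3, 3), 2.0))
print(matching_weights(r))
```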
-
Publication number: 20130329015
Abstract: Techniques for generating robust depth maps from stereo images are described. A robust depth map is generated from a set of stereo images captured with and without flash illumination. The depth map is more robust than depth maps generated using conventional techniques because a pixel-matching algorithm is implemented that weights pixels in a matching window according to the ratio of light intensity captured using different flash illumination levels. The ratio map provides a rough estimate of depth relative to neighboring pixels that enables the flash/no-flash pixel-matching algorithm to devalue pixels that appear to be located at different depths than the central pixel in the matching window. In addition, the ratio map may be used to filter the generated depth map to generate a smooth estimate for the depth of objects within the stereo image.
Type: Application
Filed: June 7, 2012
Publication date: December 12, 2013
Inventors: Kari Pulli, Alejandro Troccoli, Changyin Zhou
-
Publication number: 20110067038
Abstract: The graphics co-processing technique includes loading a shim layer library. The shim layer library loads and initializes a device driver interface of a first class on the primary adapter and a device driver interface of a second class on an unattached adapter. The shim layer also translates calls between the first device driver interface of the first class on the primary adapter and the second device driver interface of the second class on the unattached adapter.
Type: Application
Filed: December 30, 2009
Publication date: March 17, 2011
Applicant: NVIDIA CORPORATION
Inventors: Alejandro Troccoli, Franck Diard
-
Publication number: 20110063304
Abstract: The graphics co-processing technique includes receiving a display operation for execution by a graphics processing unit on an unattached adapter. The display operation is split into a copy from a frame buffer of the graphics processing unit on the unattached adapter to a buffer in system memory, a copy from the buffer in system memory to a frame buffer of the graphics processing unit on a primary adapter, and a present from the frame buffer of the graphics processing unit on the primary adapter to a display. Execution of the copy from the frame buffer of the graphics processing unit on the unattached adapter to the buffer in system memory and the copy from the buffer in system memory to the frame buffer of the graphics processing unit on the primary adapter are synchronized.
Type: Application
Filed: December 29, 2009
Publication date: March 17, 2011
Applicant: NVIDIA CORPORATION
Inventors: Franck Diard, Alejandro Troccoli
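The split-and-synchronize pattern this abstract describes can be illustrated with ordinary threading primitives as stand-ins for the driver-level synchronization the patent implies; the class name and event-based mechanism below are illustrative assumptions, not the driver implementation:

```python
import threading

class CrossAdapterPresent:
    """Sketch of the split: a present on the unattached adapter becomes
    (1) copy GPU -> system memory, (2) copy system memory -> primary
    adapter, (3) present, with (1) and (2) synchronized so the staging
    buffer is never read before it has been written."""

    def __init__(self):
        self.staging = None                  # buffer in system memory
        self.written = threading.Event()     # stand-in for a GPU fence

    def copy_to_system_memory(self, frame):
        self.staging = frame                 # step 1: unattached GPU -> sysmem
        self.written.set()                   # signal: staging is valid

    def copy_to_primary(self):
        self.written.wait()                  # sync: wait for step 1
        frame = self.staging                 # step 2: sysmem -> primary GPU
        self.written.clear()                 # ready for the next frame
        return frame                         # step 3 would present this

p = CrossAdapterPresent()
p.copy_to_system_memory("frame0")
print(p.copy_to_primary())  # → frame0
```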
-
Publication number: 20110063305
Abstract: The graphics co-processing technique includes loading and initializing a device driver interface and a device specific kernel mode driver for a graphics processing unit on a primary adapter. A device driver interface and a device specific kernel mode driver for a graphics processing unit on an unattached adapter are also loaded and initialized, without the device driver interface talking back to a runtime application programming interface or a thunk layer, when particular versions of an operating system will not allow the device driver interface on the unattached adapter to be loaded.
Type: Application
Filed: December 29, 2009
Publication date: March 17, 2011
Applicant: NVIDIA CORPORATION
Inventors: Franck Diard, Alejandro Troccoli