Patents by Inventor Danilo P. Groppa

Danilo P. Groppa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11478701
Abstract: A rendering system includes a cloud component and a local edge component. The cloud component receives or retrieves legacy data from various sources and preprocesses the data into a common intermediate format. The local edge component receives the preprocessed data from the cloud component and performs the local rendering steps necessary to place the preprocessed data into a form suitable for a game engine. The game engine utilizes the preprocessed data to render an image stream. The system is embodied in a flight simulator.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: October 25, 2022
    Assignee: Rockwell Collins, Inc.
Inventors: Abhishek Verma, Jeanette M. Ling, Rishabh Kaushik, Danilo P. Groppa, Triston Thorpe
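The cloud/edge split described in the abstract above can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the record types, field names, and the pass-through transformations are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class LegacyRecord:
    """Hypothetical legacy datum from one of several heterogeneous sources."""
    source: str
    payload: dict

@dataclass
class PreprocessedRecord:
    """Hypothetical intermediate format produced by the cloud component."""
    payload: dict

def cloud_preprocess(records):
    """Cloud stage: normalize legacy data from various sources into a
    single intermediate format (here, tagging each payload with its source)."""
    return [PreprocessedRecord(payload={**r.payload, "source": r.source})
            for r in records]

def edge_prepare(preprocessed):
    """Edge stage: perform the local steps that put preprocessed data into a
    form the game engine can consume (here, a flat list of payloads)."""
    return [p.payload for p in preprocessed]

records = [LegacyRecord("terrain_db", {"tile": 1}),
           LegacyRecord("imagery", {"tile": 2})]
engine_input = edge_prepare(cloud_preprocess(records))
```

The design point the abstract makes is the division of labor: heavy normalization happens once in the cloud, while only the final engine-specific preparation runs on the edge device.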
  • Patent number: 11379034
Abstract: A mixed reality (MR) system is disclosed. The MR system may determine a first predicted head pose corresponding to a time that virtual reality imagery is rendered, determine a second predicted head pose corresponding to a selected point in time during a camera shutter period, and combine the virtual reality imagery with stereoscopic camera imagery based on the first predicted head pose and the second predicted head pose. A simulator that employs remote (e.g., cloud) rendering is also disclosed. The simulator/client device may determine a first pose (e.g., vehicle pose and/or head pose), receive video imagery rendered by a remote server based on the first pose, and apply a timewarp correction to the video imagery based on a comparison of the first pose and a second pose.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: July 5, 2022
    Assignee: Rockwell Collins, Inc.
    Inventors: Jason C. Wenger, Peter R. Bellows, Danilo P. Groppa, Jeanette M. Ling, Richard M. Rademaker
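A timewarp correction of the kind the abstract describes compensates for pose change between render time and display time. As a minimal one-axis sketch (not the patented method; the column-shift model, function name, and `deg_per_pixel` parameter are assumptions for illustration):

```python
import numpy as np

def timewarp(image, render_yaw_deg, display_yaw_deg, deg_per_pixel=0.1):
    """Shift image columns to compensate for the yaw change between the
    pose used to render the frame and the pose at display time."""
    delta_deg = display_yaw_deg - render_yaw_deg
    shift_px = int(round(delta_deg / deg_per_pixel))
    # A positive yaw change moves the view right, so the image shifts left.
    return np.roll(image, -shift_px, axis=1)

frame = np.array([[0, 1, 2, 3]])
corrected = timewarp(frame, render_yaw_deg=0.0, display_yaw_deg=0.2)
```

Real systems warp in full 3D (rotation and often translation), but the core idea is the same: a cheap late correction driven by the difference between two poses, which hides the latency of remote rendering.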
  • Patent number: 11016560
Abstract: A mixed reality (MR) system is disclosed. The MR system may determine a first predicted head pose corresponding to a time that virtual reality imagery is rendered, determine a second predicted head pose corresponding to a selected point in time during a camera shutter period, and combine the virtual reality imagery with stereoscopic camera imagery based on the first predicted head pose and the second predicted head pose. A simulator that employs remote (e.g., cloud) rendering is also disclosed. The simulator/client device may determine a first pose (e.g., vehicle pose and/or head pose), receive video imagery rendered by a remote server based on the first pose, and apply a timewarp correction to the video imagery based on a comparison of the first pose and a second pose.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: May 25, 2021
    Assignee: Rockwell Collins, Inc.
    Inventors: Jason C. Wenger, Peter R. Bellows, Danilo P. Groppa, Jeanette M. Ling, Richard M. Rademaker
  • Patent number: 10965929
    Abstract: A video processing device for a mixed reality system is disclosed. A mixed reality system may include a computer system configured to generate a virtual reality video stream and a head mounted device communicatively coupled to the computer system. The head mounted device may include a display, a depth sensor, and a stereoscopic camera system. The video processing device can be communicatively coupled to the computer system and the head mounted device. The video processing device can be configured to employ confidence-based fusion for depth mapping and/or exploit parallelism in high-speed video distortion correction.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: March 30, 2021
    Assignee: Rockwell Collins, Inc.
    Inventors: Peter R. Bellows, Danilo P. Groppa
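Confidence-based fusion for depth mapping, as mentioned in the abstract above, can be illustrated with a per-pixel weighted average. This is a sketch of the general technique, not the patented implementation; the weighting scheme and fallback behavior are assumptions:

```python
import numpy as np

def fuse_depth(depth_a, conf_a, depth_b, conf_b):
    """Fuse two depth maps per pixel, weighting each source by its
    confidence. Pixels where both confidences are zero return 0 (invalid)."""
    total = conf_a + conf_b
    safe_total = np.where(total > 0, total, 1.0)  # avoid divide-by-zero
    fused = (depth_a * conf_a + depth_b * conf_b) / safe_total
    return np.where(total > 0, fused, 0.0)
```

Here a sensor that is confident (e.g., an active depth sensor at close range) dominates the result, while regions where neither source is trustworthy are marked invalid rather than guessed.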
  • Patent number: 10269159
    Abstract: A head wearable device, a method, and a system. The head wearable device may include a display, a camera, a convolutional neural network (CNN) processor, and a processor. The CNN processor may be configured to: receive real scene image data from the camera; identify and classify objects in a real scene image; and generate object classification and position data. The processor may be configured to receive the real scene image data; receive the object classification and position data from the CNN processor; perform an image segmentation operation on the real scene image to fill in the objects; generate filled-in object data indicative of filled-in objects; generate a pixel mask; receive virtual scene image data; create mixed reality scene image data; and output the mixed reality scene image data to the display for presentation to a user.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: April 23, 2019
    Inventors: Peter R. Bellows, Danilo P. Groppa
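The final steps of the pipeline in this abstract (filled-in objects, pixel mask, mixed reality compositing) can be sketched without the CNN itself. This is an illustrative sketch under assumed data shapes; the box representation and function names are not from the patent:

```python
import numpy as np

def mask_from_boxes(shape, boxes):
    """Build a binary pixel mask from classified object regions, given as
    (y0, y1, x0, x1) boxes; filling the box stands in for segmentation."""
    mask = np.zeros(shape, dtype=np.uint8)
    for (y0, y1, x0, x1) in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

def composite(real, virtual, mask):
    """Where the mask marks a real-world object, keep the real-scene pixel;
    everywhere else show the virtual scene."""
    return np.where(mask[..., None] == 1, real, virtual)

real = np.full((2, 2, 3), 255, dtype=np.uint8)
virtual = np.zeros((2, 2, 3), dtype=np.uint8)
mask = mask_from_boxes((2, 2), [(0, 1, 0, 2)])  # top row is a real object
mixed = composite(real, virtual, mask)
```

In the claimed system the mask comes from CNN classification plus segmentation rather than hand-given boxes, but the compositing step is the same per-pixel selection.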
  • Patent number: 10235806
Abstract: Methods and systems for selectively merging real-world objects into a virtual environment are disclosed. The method may include: receiving a first input for rendering of a virtual environment, a second input for rendering of a real-world environment, and depth information regarding the rendering of the real-world environment; identifying at least one portion of the rendering of the real-world environment that is within a depth range and differentiable from a predetermined background; generating a merged rendering including the at least one portion of the rendering of the real-world environment into the rendering of the virtual environment; and displaying the merged rendering to a user.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: March 19, 2019
    Assignee: Rockwell Collins, Inc.
    Inventors: Danilo P. Groppa, Loyal J. Pyczynski
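The selection rule in this abstract, keep real pixels that fall inside a depth range and differ from a known background, can be sketched as a per-pixel test. A minimal sketch, not the patented method; the difference metric and threshold are assumptions:

```python
import numpy as np

def merge_by_depth(real, virtual, depth, near, far, background, thresh=30):
    """Replace virtual pixels with real ones where the real pixel lies in
    [near, far] depth AND differs from the reference background by more
    than `thresh` (summed absolute RGB difference)."""
    in_range = (depth >= near) & (depth <= far)
    differs = np.abs(real.astype(int) - background.astype(int)).sum(axis=-1) > thresh
    keep = in_range & differs
    return np.where(keep[..., None], real, virtual)

real = np.array([[[255, 255, 255], [10, 10, 10]]], dtype=np.uint8)
virtual = np.zeros((1, 2, 3), dtype=np.uint8)
background = np.zeros((1, 2, 3), dtype=np.uint8)
depth = np.array([[1.0, 1.0]])
merged = merge_by_depth(real, virtual, depth, near=0.5, far=2.0,
                        background=background)
```

Combining a depth gate with background differencing is what lets the system pull in, say, a pilot's hands and physical controls while discarding the rest of the room.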
  • Publication number: 20190035125
    Abstract: A head wearable device, a method, and a system. The head wearable device may include a display, a camera, a convolutional neural network (CNN) processor, and a processor. The CNN processor may be configured to: receive real scene image data from the camera; identify and classify objects in a real scene image; and generate object classification and position data. The processor may be configured to receive the real scene image data; receive the object classification and position data from the CNN processor; perform an image segmentation operation on the real scene image to fill in the objects; generate filled-in object data indicative of filled-in objects; generate a pixel mask; receive virtual scene image data; create mixed reality scene image data; and output the mixed reality scene image data to the display for presentation to a user.
    Type: Application
    Filed: July 27, 2017
    Publication date: January 31, 2019
    Inventors: Peter R. Bellows, Danilo P. Groppa
  • Patent number: 10152775
    Abstract: A head wearable device, a method, and a system. The head wearable device may include a display, a camera, a buffer, and a processor. The buffer may be configured to buffer a portion of real scene image data corresponding to a real scene image from the camera. The processor may be configured to: perform a combined distortion correction operation; perform a foreground separation operation; perform a smoothing operation on blending values; perform a chromatic aberration distortion correction operation; receive virtual scene image data corresponding to a virtual scene image; blend processed real scene image data with the virtual scene image data to create a mixed reality scene image as mixed reality scene image data; and output the mixed reality scene image data to the display for presentation to a user.
    Type: Grant
    Filed: August 8, 2017
    Date of Patent: December 11, 2018
    Assignee: Rockwell Collins, Inc.
    Inventors: Peter R. Bellows, Danilo P. Groppa, Brad A. Walker
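Two steps from this abstract, smoothing the blending values and then blending real with virtual imagery, can be sketched together. This is an illustrative sketch, not the patented pipeline; the box-filter smoothing and parameter names are assumptions:

```python
import numpy as np

def smooth_blend(real, virtual, alpha, kernel=3):
    """Box-smooth the per-pixel blending values `alpha` (0 = virtual,
    1 = real) to soften foreground edges, then alpha-blend the two scenes."""
    pad = kernel // 2
    padded = np.pad(alpha.astype(float), pad, mode="edge")
    h, w = alpha.shape
    smoothed = np.zeros((h, w), dtype=float)
    for dy in range(kernel):          # accumulate the kernel window
        for dx in range(kernel):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= kernel * kernel
    a = smoothed[..., None]
    return a * real + (1.0 - a) * virtual

real = np.ones((2, 2, 3))
virtual = np.zeros((2, 2, 3))
blended = smooth_blend(real, virtual, np.ones((2, 2)))
```

Smoothing the blend mask before compositing avoids the hard, aliased silhouettes that a binary keep/discard mask would produce around separated foreground objects.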
  • Publication number: 20160148429
Abstract: Methods and systems for selectively merging real-world objects into a virtual environment are disclosed. The method may include: receiving a first input for rendering of a virtual environment, a second input for rendering of a real-world environment, and depth information regarding the rendering of the real-world environment; identifying at least one portion of the rendering of the real-world environment that is within a depth range and differentiable from a predetermined background; generating a merged rendering including the at least one portion of the rendering of the real-world environment into the rendering of the virtual environment; and displaying the merged rendering to a user.
    Type: Application
    Filed: November 21, 2014
    Publication date: May 26, 2016
Applicant: Rockwell Collins, Inc.
    Inventors: Danilo P. Groppa, Loyal J. Pyczynski