Patents by Inventor Mark Bilinski
Mark Bilinski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20240412399
Abstract: A Method and System for Utilizing Virtual Cameras in Point Cloud Environments to Support Computer Vision Object Recognition. More specifically, a method of object recognition with a virtual camera, comprising providing a three-dimensional point cloud and an associated two-dimensional panoramic image, each comprising at least one object of interest, constructing a one-to-one grid map, performing detection and localization on the object of interest, constructing a 3D bounding box around the object of interest, forming a virtual camera system around the bounding box oriented towards the object of interest, rotating the virtual camera around the bounding box to generate a plurality of synthetic images, calculating a recognition score for each of the plurality of synthetic images, determining a best angle based on the recognition scores, generating a best synthetic image based on the best angle, and obtaining an object recognition prediction based on the best synthetic image. A method of text recognition with a virtual camera is also disclosed.
Type: Application
Filed: June 12, 2023
Publication date: December 12, 2024
Applicant: The United States of America as represented by the Secretary of the Navy
Inventors: Adrian Mai, Mark Bilinski, Raymond Chey Provost
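The angle-sweep at the heart of this abstract is easy to illustrate. The Python sketch below is not the patented system, only a minimal rendition of the loop it describes, under assumed interfaces: a hypothetical render_view that images the point cloud from a virtual camera orbiting the bounding box, and a recognizer callable that scores each synthetic image.

```python
import numpy as np

def render_view(points, bbox_center, radius, angle_deg):
    """Hypothetical renderer: place a virtual camera on a circle of the
    given radius around the bounding-box center and image the cloud.
    Here it returns only the camera pose as a stand-in for the image."""
    theta = np.radians(angle_deg)
    cam_pos = bbox_center + radius * np.array([np.cos(theta), np.sin(theta), 0.0])
    return {"cam_pos": cam_pos, "look_at": bbox_center}

def best_view(points, bbox_center, radius, recognizer, step_deg=15):
    """Orbit the virtual camera, score each synthetic image, and keep
    the best angle, its image, and its recognition score."""
    best_angle, best_image, best_score = None, None, float("-inf")
    for angle in range(0, 360, step_deg):
        image = render_view(points, bbox_center, radius, angle)
        score = recognizer(image)  # e.g. classifier confidence in [0, 1]
        if score > best_score:
            best_angle, best_image, best_score = angle, image, score
    return best_angle, best_image, best_score
```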
Publication number: 20240365009
Abstract: An unmanned aerial vehicle (UAV) system comprising at least one UAV platform and a user system. Each UAV platform can comprise a first camera system and a shading system. The user system can comprise a targeting system and a second camera system. The targeting system can select a target, the first camera system can detect the location of the target and at least one light source, and each UAV platform can move to an intermediary location between the target and one of the light sources. Additional embodiments can include a lighting system on the UAV platforms to illuminate targets.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: Daniel S. Jennings, Mark Bilinski, Danielle R. Matusiak, Robert C. Rocha
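The geometric core of the shading behavior is simple vector math: a shading UAV sits somewhere on the segment from the target to the light source. A minimal sketch, with the fraction parameter assumed (the abstract only says "intermediary location"):

```python
import numpy as np

def shading_position(target, light_source, fraction=0.5):
    """Point on the target-to-light segment where a UAV would block the
    light; fraction=0 is at the target, fraction=1 at the source."""
    target = np.asarray(target, dtype=float)
    light_source = np.asarray(light_source, dtype=float)
    return target + fraction * (light_source - target)

# Example: shade a target at the origin from a light high to the east.
print(shading_position([0, 0, 0], [100, 0, 100], fraction=0.1))  # [10.  0. 10.]
```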
Publication number: 20240331270
Abstract: A rendering system that includes at least one processor configured to receive a plurality of data points, receive at least one priority assignment, assign at least one priority value to the plurality of data points based on the at least one priority assignment, and render the plurality of data points based on the at least one priority value.
Type: Application
Filed: March 30, 2023
Publication date: October 3, 2024
Applicant: United States of America as represented by the Secretary of the Navy
Inventors: Garrison B. Price, Fred W. Greene, Mark Bilinski
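A priority assignment could take many concrete forms; one reading is a set of rules that map points to priority values, with rendering order (or a point budget) driven by those values. The sketch below assumes that representation, (predicate, priority) pairs, purely for illustration:

```python
def assign_priorities(points, rules):
    """Tag each point with the highest priority among matching rules.
    `rules` is a list of (predicate, priority) pairs -- an assumed
    encoding of the abstract's 'priority assignments'."""
    return [(p, max((pri for pred, pri in rules if pred(p)), default=0))
            for p in points]

def render_by_priority(tagged, budget):
    """Render (here: just collect) the highest-priority points first,
    stopping once the point budget is exhausted."""
    ordered = sorted(tagged, key=lambda t: t[1], reverse=True)
    return [p for p, _ in ordered[:budget]]

points = [(0, 0, 0), (5, 5, 5), (9, 9, 9)]
rules = [(lambda p: p[2] > 4, 2), (lambda p: p[0] < 1, 1)]
print(render_by_priority(assign_priorities(points, rules), budget=2))
```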
Publication number: 20230394771
Abstract: An apparatus, system, and method for augmented reality tracking of unmanned systems using multimodal input processing, comprising receiving multimodal inputs, calculating unmanned vehicle positions, providing identifiers associated with the unmanned vehicles' locations, and superimposing the identifiers on an augmented reality display. Further embodiments may provide an operator or pilot with telemetry information pertaining to the unmanned vehicles, task or assignment information, and more.
Type: Application
Filed: March 3, 2023
Publication date: December 7, 2023
Applicant: The United States of America as represented by the Secretary of the Navy
Inventors: Mark Bilinski, Shibin Parameswaran, Martin Thomas Jaszewski, Daniel Sean Jennings
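Superimposing an identifier over a vehicle reduces to projecting its calculated 3D position into display coordinates. A minimal pinhole-camera sketch, with made-up intrinsics and a world-to-camera pose assumed known from the headset's tracking:

```python
import numpy as np

def project_to_screen(world_pos, cam_pose, fx=800, fy=800, cx=640, cy=360):
    """Pinhole projection of a vehicle's world position into display
    pixels. cam_pose is (R, t) mapping world -> camera coordinates;
    the intrinsics are example values, not from the patent."""
    R, t = cam_pose
    p_cam = R @ np.asarray(world_pos, dtype=float) + t
    if p_cam[2] <= 0:            # behind the camera: nothing to draw
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (u, v)                # where to anchor the identifier label

# Identity camera at the origin; vehicle 10 m ahead, 1 m to the right.
print(project_to_screen([1, 0, 10], (np.eye(3), np.zeros(3))))  # (720.0, 360.0)
```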
Publication number: 20230206387
Abstract: The present invention relates to systems and methods for performing 3D localization of target objects in point cloud data using a corresponding 2D image. According to an illustrative embodiment of the present disclosure, a target environment is imaged with a camera to generate a 2D panorama and with a scanner to generate a 3D point cloud. The 2D panorama is mapped to the point cloud with a one-to-one grid map. The target objects are detected and localized in 2D before being mapped back to the 3D point cloud.
Type: Application
Filed: December 23, 2021
Publication date: June 29, 2023
Applicant: United States of America as represented by Secretary of the Navy
Inventors: Adrian Mai, Mark Bilinski, Raymond Chey Provost
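If, as a simplifying assumption, the scanner yields exactly one 3D point per panorama pixel in row-major order, the one-to-one grid map is just a reshape, and lifting a 2D detection into 3D is array indexing:

```python
import numpy as np

def build_grid_map(points, width, height):
    """Assumed setup: one 3D point per panorama pixel, row-major, so
    the one-to-one grid map is a (height, width, 3) reshape."""
    return np.asarray(points, dtype=float).reshape(height, width, 3)

def localize_3d(grid_map, bbox_2d):
    """Lift a 2D detection (u0, v0, u1, v1) into 3D by gathering the
    points mapped to the pixels inside the box, then returning an
    axis-aligned 3D bounding box (min corner, max corner)."""
    u0, v0, u1, v1 = bbox_2d
    patch = grid_map[v0:v1, u0:u1].reshape(-1, 3)
    return patch.min(axis=0), patch.max(axis=0)
```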
Patent number: 11477652
Abstract: The system and methods described herein aid in the defense of unmanned vehicles, such as aerial vehicles, from Wi-Fi cyber attacks. Such attacks usually do not last long, and in the case of many point-to-point command and control systems, they originate from close proximity to the unmanned vehicle. The system and methods described herein allow a team to rapidly identify and physically respond to an adversary trying to take control of the unmanned vehicle. Another aspect of the embodiments taught herein is locating a Wi-Fi signal in a hands-free manner by visualizing the source of the signal on an augmented reality display coupled to an antenna array.
Type: Grant
Filed: November 25, 2019
Date of Patent: October 18, 2022
Assignee: United States of America as represented by the Secretary of the Navy
Inventors: Mark Bilinski, Gerald Thomas Burnette, Fred William Greene, Garrison Buckminster Price
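The abstract leaves the signal-localization details to the specification. As a rough illustration of turning antenna-array readings into a bearing for an AR overlay, one could weight each antenna's pointing direction by its received signal strength. This is a crude, amplitude-only stand-in, and not the patented technique; real direction finding typically uses phase differences as well:

```python
import numpy as np

def estimate_bearing(antenna_bearings_deg, rssi_dbm):
    """Average the antennas' pointing directions, weighted by
    linearized signal strength, to get a single source bearing."""
    bearings = np.radians(np.asarray(antenna_bearings_deg, dtype=float))
    weights = 10 ** (np.asarray(rssi_dbm, dtype=float) / 10)  # dBm -> mW
    x = np.sum(weights * np.cos(bearings))
    y = np.sum(weights * np.sin(bearings))
    return np.degrees(np.arctan2(y, x)) % 360

# Four antennas facing 0/90/180/270 degrees; the source reads strongest
# on the 0- and 90-degree antennas, so the estimate falls between them.
print(round(estimate_bearing([0, 90, 180, 270], [-52, -48, -70, -75]), 1))
```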
Patent number: 11062523
Abstract: The invention relates to creating actual object data for mixed reality applications. In some embodiments, the invention includes using a mixed reality controller to (1) define a coordinate system frame of reference for a target object, the coordinate system frame of reference including an initial point of the target object and at least one directional axis that are specified by a user of the mixed reality controller, (2) define additional points of the target object, and (3) define interface elements of the target object. A 3D model of the target object is generated based on the coordinate system frame of reference, the additional points, and the interface elements. After receiving input metadata for defining interface characteristics for the interface elements displayed on the 3D model, the input metadata is used to generate a workflow for operating the target object in a mixed reality environment.
Type: Grant
Filed: July 15, 2020
Date of Patent: July 13, 2021
Assignee: The Government of the United States of America, as represented by the Secretary of the Navy
Inventors: Larry Clay Greunke, Mark Bilinski, Christopher James Angelopoulos, Michael Joseph Guerrero
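Step (1) amounts to constructing a coordinate frame from one point and one axis. A minimal sketch of one way to complete such a frame; the choice of the remaining two axes via a world "up" helper is an assumption for illustration, not the patent's construction:

```python
import numpy as np

def frame_from_point_and_axis(origin, axis):
    """Build a right-handed frame for the target object from the
    user-specified initial point and one directional axis; the other
    two axes come from Gram-Schmidt against a world 'up' helper."""
    origin = np.asarray(origin, dtype=float)
    z = np.asarray(axis, dtype=float)
    z = z / np.linalg.norm(z)
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up, z)) > 0.99:   # axis nearly vertical: swap helpers
        up = np.array([0.0, 1.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return origin, np.stack([x, y, z])  # origin and 3x3 rotation (rows)

def to_object_frame(point, origin, rotation):
    """Express a world-space point (e.g. a user-marked interface
    element) in the object's own coordinate frame."""
    return rotation @ (np.asarray(point, dtype=float) - origin)
```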
Publication number: 20210160696
Abstract: The system and methods described herein aid in the defense of unmanned vehicles, such as aerial vehicles, from Wi-Fi cyber attacks. Such attacks usually do not last long, and in the case of many point-to-point command and control systems, they originate from close proximity to the unmanned vehicle. The system and methods described herein allow a team to rapidly identify and physically respond to an adversary trying to take control of the unmanned vehicle. Another aspect of the embodiments taught herein is locating a Wi-Fi signal in a hands-free manner by visualizing the source of the signal on an augmented reality display coupled to an antenna array.
Type: Application
Filed: November 25, 2019
Publication date: May 27, 2021
Inventors: Mark Bilinski, Gerald Thomas Burnette, Fred William Greene, Garrison Buckminster Price
Publication number: 20210019947
Abstract: The invention relates to creating actual object data for mixed reality applications. In some embodiments, the invention includes using a mixed reality controller to (1) define a coordinate system frame of reference for a target object, the coordinate system frame of reference including an initial point of the target object and at least one directional axis that are specified by a user of the mixed reality controller, (2) define additional points of the target object, and (3) define interface elements of the target object. A 3D model of the target object is generated based on the coordinate system frame of reference, the additional points, and the interface elements. After receiving input metadata for defining interface characteristics for the interface elements displayed on the 3D model, the input metadata is used to generate a workflow for operating the target object in a mixed reality environment.
Type: Application
Filed: July 15, 2020
Publication date: January 21, 2021
Inventors: Larry Clay Greunke, Mark Bilinski, Christopher James Angelopoulos, Michael Joseph Guerrero
Patent number: 10528595
Abstract: A method for synchronizing datasets comprising the steps of: (1) partitioning each dataset into a plurality of bins according to a first partitioning rule, wherein each bin contains a random subset of elements of symmetric difference taken from a universe of all possible elements; (2) performing a first round of polynomial interpolation (PI) at a first encoding threshold on each bin of the first-partitioned datasets, wherein if any bin contains a number of elements that is less than or equal to the first encoding threshold, the elements contained therein are decoded during the first PI round, and wherein if any bin contains a number of elements that is greater than the first encoding threshold, the elements contained therein are not decoded during the first PI round; and (3) synchronizing the datasets based on the decoded elements.
Type: Grant
Filed: April 13, 2017
Date of Patent: January 7, 2020
Assignee: United States of America as represented by the Secretary of the Navy
Inventors: Mark Bilinski, Ryan Gabrys
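To make the binning-and-threshold structure concrete, the toy below hashes the differing elements into bins, "decodes" any bin at or below the threshold, and defers overfull bins to a later round. Unlike the real method, which recovers differences from compact polynomial-interpolation sketches without seeing both datasets, this toy computes the symmetric difference directly, so it illustrates only the control flow:

```python
import hashlib

def bin_of(element, num_bins):
    """Deterministic, pseudo-random bin assignment for an element."""
    h = hashlib.sha256(str(element).encode()).digest()
    return int.from_bytes(h[:4], "big") % num_bins

def first_round(set_a, set_b, num_bins, threshold):
    """Partition the symmetric difference into bins; bins within the
    encoding threshold decode now, overfull bins wait for round two."""
    diff = set_a ^ set_b                 # symmetric difference
    bins = {}
    for e in diff:
        bins.setdefault(bin_of(e, num_bins), set()).add(e)
    decoded, deferred = set(), []
    for b, elems in bins.items():
        if len(elems) <= threshold:
            decoded.update(elems)        # recovered in the first PI round
        else:
            deferred.append(b)           # handled in a later round
    return decoded, deferred
```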
Patent number: 10438413
Abstract: A method for using a virtual reality (VR) headset to view a two-dimensional (2D) technical drawing of a physical object in a real-world environment in three dimensions (3D), the method comprising: using LiDAR to produce a 3D point cloud of the real-world environment; scaling and aligning the 2D technical drawing to match the size and orientation of the physical object as depicted in the 3D point cloud; overlaying the 2D technical drawing (including all labels and dimensions) over the physical object as depicted in the 3D point cloud; and visually comparing the 3D point cloud representation of the physical object to the 2D technical drawing by simultaneously displaying the 3D point cloud of the real-world environment and the overlaid 2D technical drawing to a user with the VR headset.
Type: Grant
Filed: November 7, 2017
Date of Patent: October 8, 2019
Assignee: United States of America as represented by the Secretary of the Navy
Inventors: Mark Bilinski, Larry Clay Greunke
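The scaling-and-aligning step can be grounded with a least-squares fit over hand-picked correspondences. The sketch below solves only for a uniform scale and translation in the drawing plane; rotation is omitted for brevity, and the closed form is standard least squares rather than anything taken from the patent:

```python
import numpy as np

def fit_scale_and_offset(drawing_pts, cloud_pts):
    """Least-squares uniform scale s and translation t so that
    s * drawing + t best matches the corresponding point-cloud
    coordinates. Both inputs are N x 2 arrays of matched points."""
    D = np.asarray(drawing_pts, dtype=float)
    C = np.asarray(cloud_pts, dtype=float)
    d0, c0 = D.mean(axis=0), C.mean(axis=0)
    Dc, Cc = D - d0, C - c0
    s = np.sum(Dc * Cc) / np.sum(Dc * Dc)   # optimal uniform scale
    t = c0 - s * d0                          # translation to match centroids
    return s, t

s, t = fit_scale_and_offset([[0, 0], [10, 0]], [[2, 3], [7, 3]])
print(s, t)  # scale 0.5, offset [2. 3.]
```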
Publication number: 20190139306
Abstract: A method for using a virtual reality (VR) headset to view a two-dimensional (2D) technical drawing of a physical object in a real-world environment in three dimensions (3D), the method comprising: using LiDAR to produce a 3D point cloud of the real-world environment; scaling and aligning the 2D technical drawing to match the size and orientation of the physical object as depicted in the 3D point cloud; overlaying the 2D technical drawing (including all labels and dimensions) over the physical object as depicted in the 3D point cloud; and visually comparing the 3D point cloud representation of the physical object to the 2D technical drawing by simultaneously displaying the 3D point cloud of the real-world environment and the overlaid 2D technical drawing to a user with the VR headset.
Type: Application
Filed: November 7, 2017
Publication date: May 9, 2019
Inventors: Mark Bilinski, Larry Clay Greunke
Publication number: 20180350152
Abstract: An imaging system includes: a first imaging data receiving component that receives first three-dimensional image data of a first area and a portion of a second area; a first data storage component that stores the first three-dimensional image data; a second imaging data receiving component that receives second three-dimensional image data of the second area and a portion of the first area; a second data storage component that stores the second three-dimensional image data; a stitching component that stitches the first three-dimensional image data together with the second three-dimensional image data to produce stitched three-dimensional image data of the first area and the second area; and a stitched data storage component that stores the stitched three-dimensional image data. The first area is optically isolated from the second area except through a throughway.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Applicant: United States of America as represented by Secretary of the Navy
Inventors: Mark Bilinski, Stephen C. Cox, Larry C. Greunke, Nikolai V. Lukashuk
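A minimal sketch of the stitching idea, assuming the rigid transform between the two scans is already known (for example, recovered from the shared throughway): transform one cloud into the other's frame, then drop near-duplicate points in the overlap. The registration itself and the deduplication tolerance are assumptions, not the patent's procedure.

```python
import numpy as np

def stitch(cloud_a, cloud_b, R, t, tol=0.01):
    """Bring cloud_b into cloud_a's frame with a known rigid transform
    (R, t), then keep only B-points farther than `tol` meters from
    every A-point, so the overlap region is not duplicated."""
    A = np.asarray(cloud_a, dtype=float)
    B = np.asarray(cloud_b, dtype=float) @ R.T + t
    keep = [p for p in B if np.min(np.linalg.norm(A - p, axis=1)) > tol]
    return np.vstack([A, keep]) if keep else A
```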
Publication number: 20180300384
Abstract: A method for synchronizing datasets comprising the steps of: (1) partitioning each dataset into a plurality of bins according to a first partitioning rule, wherein each bin contains a random subset of elements of symmetric difference taken from a universe of all possible elements; (2) performing a first round of polynomial interpolation (PI) at a first encoding threshold on each bin of the first-partitioned datasets, wherein if any bin contains a number of elements that is less than or equal to the first encoding threshold, the elements contained therein are decoded during the first PI round, and wherein if any bin contains a number of elements that is greater than the first encoding threshold, the elements contained therein are not decoded during the first PI round; and (3) synchronizing the datasets based on the decoded elements.
Type: Application
Filed: April 13, 2017
Publication date: October 18, 2018
Inventors: Mark Bilinski, Ryan Gabrys