Patents by Inventor Jan Wei Pan

Jan Wei Pan has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11068721
    Abstract: An apparatus is provided for automated object tracking in a video feed. The apparatus receives and sequentially processes a plurality of frames of the video feed to track objects. In particular, a plurality of objects in a frame are detected, and each is assigned to a respective track fragment. A kinematic, visual, temporal, or machine learning-based feature of an object is then identified and stored in metadata associated with the track fragment. A track fragment for the object is identified in earlier frames based on a comparison of the feature and a corresponding feature in metadata associated with the earlier frames. The track fragments for the object in the frame and the object in the earlier frames are linked to form a track of the object. The apparatus then outputs the video feed with the track of the object as an overlay thereon. (A minimal tracklet-linking sketch appears after this listing.)
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: July 20, 2021
    Assignee: The Boeing Company
    Inventors: Jan Wei Pan, Hieu Tat Nguyen, Zachary Jorgensen, Yuri Levchuk
  • Publication number: 20200265630
    Abstract: A set of 3D user-designed images is used to create a high volume of realistic scenes or images which can be used for training and testing deep learning machines. The system creates a high volume of scenes having a wide variety of environmental, weather-related factors, as well as scenes that take into account camera noise, distortion, angle of view, and the like. A generative modeling process is used to vary objects contained in an image so that more images, each one distinct, can be used to train the deep learning model without the inefficiencies of creating videos of actual, real-life scenes. Object label data is known by virtue of a designer selecting an object from an image database and placing it in the scene. These and other methods are used to artificially create new scenes that do not have to be recorded in real-life conditions and that do not require costly and time-consuming manual labelling or tagging of objects. (A scene-generation sketch appears after this listing.)
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Applicant: The Boeing Company
    Inventors: Huafeng Yu, Tyler C. Staudinger, Zachary D. Jorgensen, Jan Wei Pan
  • Patent number: 10737446
    Abstract: A system for process control of a composite fabrication process comprises an automated composite placement head, a vision system, and a computer system. The automated composite placement head is configured to lay down composite material. The vision system is connected to the automated composite placement head and configured to produce image data during an inspection of the composite material, wherein the inspection takes place during or after laying down the composite material, or both. The computer system is configured to identify inconsistencies in the composite material visible within the image data, and to make a number of metrology decisions based on those inconsistencies. (An inspection-and-decision sketch appears after this listing.)
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: August 11, 2020
    Assignee: The Boeing Company
    Inventors: Jeffery Lee Marcoe, Jan Wei Pan
  • Patent number: 10643368
    Abstract: A set of 3D user-designed images is used to create a high volume of realistic scenes or images which can be used for training and testing deep learning machines. The system creates a high volume of scenes having a wide variety of environmental, weather-related factors, as well as scenes that take into account camera noise, distortion, angle of view, and the like. A generative modeling process is used to vary objects contained in an image so that more images, each one distinct, can be used to train the deep learning model without the inefficiencies of creating videos of actual, real-life scenes. Object label data is known by virtue of a designer selecting an object from an image database and placing it in the scene. These and other methods are used to artificially create new scenes that do not have to be recorded in real-life conditions and that do not require costly and time-consuming manual labelling or tagging of objects.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: May 5, 2020
    Assignee: The Boeing Company
    Inventors: Huafeng Yu, Tyler C. Staudinger, Zachary D. Jorgensen, Jan Wei Pan
  • Patent number: 10607463
    Abstract: An apparatus is provided for automated object and activity tracking in a live video feed. The apparatus receives and processes a live video feed to identify a plurality of objects and activities therein. The apparatus also generates natural language text that describes a storyline of the live video feed using the plurality of objects and activities so identified. The live video feed is processed using computer vision, natural language processing and machine learning, and a catalog of identifiable objects and activities. The apparatus then outputs the natural language text audibly or visually with a display of the live video feed. (A storyline-narration sketch appears after this listing.)
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: March 31, 2020
    Assignee: The Boeing Company
    Inventors: Jan Wei Pan, Yuri Levchuk, Zachary Jorgensen
  • Patent number: 10504236
    Abstract: A method of testing a battery includes causing a battery in a test environment to produce a fire having a flame that extends out from the battery, and capturing a digital image of a scene that includes at least a portion of the test environment and the flame, the digital image being formed using visible light. The method includes uploading the digital image to a computer configured to produce a quiver plot and identify points on the quiver plot that define a polygon that is an approximate outline of the flame. The computer is configured to determine dimensions of the polygon, and to translate the dimensions from the quiver plot to the digital image, and from the digital image to dimensions of the flame in the scene. The computer is also configured to generate a displayable report that includes at least the dimensions of the flame. (A flame-measurement sketch appears after this listing.)
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: December 10, 2019
    Assignee: The Boeing Company
    Inventors: Elizabeth Killelea, Joseph Gonzalez, Jan Wei Pan
  • Publication number: 20190213750
    Abstract: A method of testing a battery includes causing a battery in a test environment to produce a fire having a flame that extends out from the battery, and capturing a digital image of a scene that includes at least a portion of the test environment and the flame, the digital image being formed using visible light. The method includes uploading the digital image to a computer configured to produce a quiver plot and identify points on the quiver plot that define a polygon that is an approximate outline of the flame. The computer is configured to determine dimensions of the polygon, and to translate the dimensions from the quiver plot to the digital image, and from the digital image to dimensions of the flame in the scene. The computer is also configured to generate a displayable report that includes at least the dimensions of the flame.
    Type: Application
    Filed: January 8, 2018
    Publication date: July 11, 2019
    Inventors: Elizabeth Killelea, Joseph Gonzalez, Jan Wei Pan
  • Publication number: 20180374253
    Abstract: A set of 3D user-designed images is used to create a high volume of realistic scenes or images which can be used for training and testing deep learning machines. The system creates a high volume of scenes having a wide variety of environmental, weather-related factors, as well as scenes that take into account camera noise, distortion, angle of view, and the like. A generative modeling process is used to vary objects contained in an image so that more images, each one distinct, can be used to train the deep learning model without the inefficiencies of creating videos of actual, real-life scenes. Object label data is known by virtue of a designer selecting an object from an image database and placing it in the scene. These and other methods are used to artificially create new scenes that do not have to be recorded in real-life conditions and that do not require costly and time-consuming manual labelling or tagging of objects.
    Type: Application
    Filed: June 27, 2017
    Publication date: December 27, 2018
    Applicant: The Boeing Company
    Inventors: Huafeng Yu, Tyler C. Staudinger, Zachary D. Jorgensen, Jan Wei Pan
  • Publication number: 20180311914
    Abstract: A system for process control of a composite fabrication process comprises an automated composite placement head, a vision system, and a computer system. The automated composite placement head is configured to lay down composite material. The vision system is connected to the automated composite placement head and configured to produce image data during an inspection of the composite material, wherein the inspection takes place during or after laying down the composite material, or both. The computer system is configured to identify inconsistencies in the composite material visible within the image data, and to make a number of metrology decisions based on those inconsistencies.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 1, 2018
    Inventors: Jeffery Lee Marcoe, Jan Wei Pan
  • Publication number: 20180285648
    Abstract: An apparatus is provided for automated object tracking in a video feed. The apparatus receives and sequentially processes a plurality of frames of the video feed to track objects. In particular, a plurality of objects in a frame are detected, and each is assigned to a respective track fragment. A kinematic, visual, temporal, or machine learning-based feature of an object is then identified and stored in metadata associated with the track fragment. A track fragment for the object is identified in earlier frames based on a comparison of the feature and a corresponding feature in metadata associated with the earlier frames. The track fragments for the object in the frame and the object in the earlier frames are linked to form a track of the object. The apparatus then outputs the video feed with the track of the object as an overlay thereon.
    Type: Application
    Filed: March 30, 2017
    Publication date: October 4, 2018
    Inventors: Jan Wei Pan, Hieu Tat Nguyen, Zachary Jorgensen, Yuri Levchuk
  • Publication number: 20180165934
    Abstract: An apparatus is provided for automated object and activity tracking in a live video feed. The apparatus receives and processes a live video feed to identify a plurality of objects and activities therein. The apparatus also generates natural language text that describes a storyline of the live video feed using the plurality of objects and activities so identified. The live video feed is processed using computer vision, natural language processing and machine learning, and a catalog of identifiable objects and activities. The apparatus then outputs the natural language text audibly or visually with a display of the live video feed.
    Type: Application
    Filed: December 9, 2016
    Publication date: June 14, 2018
    Inventors: Jan Wei Pan, Yuri Levchuk, Zachary Jorgensen
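
Illustrative sketches

The Python sketches below illustrate the core techniques described in the abstracts above. Each is a minimal interpretation under stated assumptions, not the patented implementation; every class, function, and parameter name is hypothetical.

For the object-tracking invention (patent 11068721, publication 20180285648), a sketch of linking track fragments by comparing stored feature metadata, assuming appearance features are fixed-length vectors compared by cosine similarity:

```python
from dataclasses import dataclass, field

@dataclass
class TrackFragment:
    """A short run of detections believed to be one object (hypothetical structure)."""
    track_id: int
    frames: list = field(default_factory=list)   # frame indices covered by this fragment
    feature: list = field(default_factory=list)  # e.g. a mean appearance vector

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def link_fragment(new_frag, earlier_frags, threshold=0.8):
    """Link a new fragment to the best-matching earlier fragment, if any.

    The comparison of stored feature metadata mirrors the abstract; linked
    fragments form the track that would be overlaid on the output video.
    """
    best, best_sim = None, threshold
    for frag in earlier_frags:
        sim = cosine_similarity(new_frag.feature, frag.feature)
        if sim > best_sim:
            best, best_sim = frag, sim
    if best is not None:
        best.frames.extend(new_frag.frames)  # extend the existing track
        return best
    earlier_frags.append(new_frag)           # no match: the fragment starts a new track
    return new_frag
```

In a full tracker the feature could combine kinematic, visual, temporal, and learned components, as the abstract lists; a single vector per fragment keeps the sketch short.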
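
For the synthetic-scene invention (patent 10643368, publications 20200265630 and 20180374253), a sketch of generating distinct labeled scenes, with the label known by construction because the designer selects objects from a database. The database contents and parameter ranges are invented for illustration:

```python
import random

# Hypothetical object database; each entry carries its label by construction,
# so generated scenes need no manual tagging.
OBJECT_DB = {
    "aircraft": {"width_m": 40.0, "height_m": 12.0},
    "truck":    {"width_m": 8.0,  "height_m": 3.5},
    "person":   {"width_m": 0.5,  "height_m": 1.8},
}

WEATHER = ["clear", "rain", "fog", "snow"]

def generate_scene(n_objects=3, seed=None):
    """Compose one synthetic scene description with ground-truth labels.

    A production pipeline would render the 3D models and apply the camera
    noise and distortion; here that variation is reduced to a few
    randomized parameters.
    """
    rng = random.Random(seed)
    scene = {
        "weather": rng.choice(WEATHER),
        "camera_noise_sigma": rng.uniform(0.0, 0.05),
        "view_angle_deg": rng.uniform(-30.0, 30.0),
        "objects": [],
    }
    for _ in range(n_objects):
        label = rng.choice(sorted(OBJECT_DB))
        scene["objects"].append({
            "label": label,                  # ground truth known by construction
            "x_px": rng.uniform(0, 1920),
            "y_px": rng.uniform(0, 1080),
            "scale": rng.uniform(0.5, 1.5),  # generative variation of the object
        })
    return scene

# Each call yields a distinct labeled scene for training or testing.
dataset = [generate_scene(seed=i) for i in range(1000)]
```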
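
For the composite-fabrication invention (patent 10737446, publication 20180311914), a sketch of flagging inconsistencies in vision-system image data and mapping them to a metrology decision. The intensity-threshold detector and the accept/rework/stop rule are stand-ins, not the patented method:

```python
import numpy as np

def find_inconsistencies(image, expected_intensity=0.6, tolerance=0.15):
    """Flag pixels that deviate from the expected lay-down appearance.

    `image` is a 2D float array in [0, 1] from the placement-head vision
    system; this simple threshold stands in for the real defect detector.
    """
    return np.abs(image - expected_intensity) > tolerance  # boolean mask

def metrology_decision(mask, area_limit=50):
    """Toy decision rule: accept, rework, or stop, based on flagged area."""
    flagged = int(mask.sum())
    if flagged == 0:
        return "accept"
    if flagged < area_limit:
        return "rework"   # e.g. re-lay the affected course
    return "stop"         # halt placement for manual inspection

# Simulated inspection frame: nominal intensity plus sensor noise.
image = np.clip(0.6 + 0.05 * np.random.randn(128, 128), 0.0, 1.0)
print(metrology_decision(find_inconsistencies(image)))
```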
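
For the storyline-narration invention (patent 10607463, publication 20180165934), a sketch of turning detected (object, activity) pairs into natural language text against a catalog of identifiable objects and activities. Template filling stands in for the trained vision and language models a real system would use:

```python
CATALOG_OBJECTS = {"person", "forklift", "aircraft"}    # identifiable objects
CATALOG_ACTIVITIES = {"walking", "loading", "taxiing"}  # identifiable activities

def article(noun):
    """Choose an indefinite article for readable output."""
    return "An" if noun[0] in "aeiou" else "A"

def narrate(frame_events):
    """Turn (object, activity) pairs from one frame into storyline text."""
    sentences = [
        f"{article(obj)} {obj} is {activity}."
        for obj, activity in frame_events
        if obj in CATALOG_OBJECTS and activity in CATALOG_ACTIVITIES
    ]
    return " ".join(sentences) or "No catalogued activity detected."

print(narrate([("person", "walking"), ("aircraft", "taxiing")]))
# -> A person is walking. An aircraft is taxiing.
```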
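
For the battery-fire invention (patent 10504236, publication 20190213750), a sketch of estimating flame dimensions from a visible-light image. Interpreting the quiver plot as an image-gradient vector field is an assumption, and a bounding box of strong-gradient points stands in for the patent's polygon; `metres_per_pixel` is an assumed camera calibration:

```python
import numpy as np

def gradient_field(image):
    """Vector field over the image (what a quiver plot would display)."""
    gy, gx = np.gradient(image.astype(float))
    return gx, gy

def flame_outline_points(image, magnitude_threshold=0.2):
    """Points of strong gradient, taken as candidates for the flame outline."""
    gx, gy = gradient_field(image)
    magnitude = np.hypot(gx, gy)
    ys, xs = np.nonzero(magnitude > magnitude_threshold)
    return xs, ys

def flame_dimensions(image, metres_per_pixel, magnitude_threshold=0.2):
    """Approximate flame width and height, translated from pixels to the scene."""
    xs, ys = flame_outline_points(image, magnitude_threshold)
    if xs.size == 0:
        return {"width_m": 0.0, "height_m": 0.0}
    return {
        "width_m": float(xs.max() - xs.min()) * metres_per_pixel,
        "height_m": float(ys.max() - ys.min()) * metres_per_pixel,
    }
```

The returned dictionary corresponds to the dimensions that the patent's displayable report would include.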