Patents by Inventor Minh N. Do
Minh N. Do has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11521044
Abstract: Techniques regarding action detection based on motion in receptive fields of a neural network model are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, that can execute the computer executable components stored in the memory. The computer executable components can comprise a motion component that can extract a motion vector from a plurality of adaptive receptive fields in a deformable convolution layer of a neural network model. The computer executable components can also comprise an action detection component that can generate a spatio-temporal feature by concatenating the motion vector with a spatial feature extracted from the deformable convolution layer.
Type: Grant
Filed: May 17, 2018
Date of Patent: December 6, 2022
Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
Inventors: Khoi-Nguyen C. Mac, Raymond Alexander Yeh, Dhiraj Joshi, Minh N. Do, Rogerio Feris, Jinjun Xiong
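The core idea in the abstract above (reading motion out of a deformable convolution layer's adaptive receptive-field offsets, then concatenating it with a spatial feature to form a spatio-temporal feature) can be sketched with numpy. This is an illustrative toy, not the patented implementation: the tensor shapes and the "motion as offset difference between consecutive frames" estimate are assumptions.

```python
import numpy as np

def motion_from_offsets(offsets_t, offsets_t1):
    """Treat the change in a deformable layer's sampling offsets between
    two consecutive frames as a coarse per-location motion vector."""
    return offsets_t1 - offsets_t

def spatio_temporal_feature(spatial_feat, motion_vec):
    """Concatenate the motion vector with the spatial feature along the
    channel axis, as the abstract describes."""
    return np.concatenate([spatial_feat, motion_vec], axis=0)

# Toy tensors: 4 spatial channels and 2 offset channels (dy, dx) on an
# 8x8 feature grid, for two consecutive frames.
rng = np.random.default_rng(0)
spatial = rng.standard_normal((4, 8, 8))
off_t = rng.standard_normal((2, 8, 8))     # offsets at frame t
off_t1 = rng.standard_normal((2, 8, 8))    # offsets at frame t+1
st_feat = spatio_temporal_feature(spatial, motion_from_offsets(off_t, off_t1))
print(st_feat.shape)  # (6, 8, 8)
```

In a real network the offsets would come from the deformable layer's offset branch rather than random tensors, and the concatenated feature would feed the downstream action-detection head.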
-
Publication number: 20190354835
Abstract: Techniques regarding action detection based on motion in receptive fields of a neural network model are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, that can execute the computer executable components stored in the memory. The computer executable components can comprise a motion component that can extract a motion vector from a plurality of adaptive receptive fields in a deformable convolution layer of a neural network model. The computer executable components can also comprise an action detection component that can generate a spatio-temporal feature by concatenating the motion vector with a spatial feature extracted from the deformable convolution layer.
Type: Application
Filed: May 17, 2018
Publication date: November 21, 2019
Inventors: Khoi-Nguyen C. Mac, Raymond Alexander Yeh, Dhiraj Joshi, Minh N. Do, Rogerio Feris, Jinjun Xiong
-
Patent number: 10325360
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image, segments pixels and corresponding depth information to identify foreground regions associated with a target user.
Type: Grant
Filed: October 16, 2017
Date of Patent: June 18, 2019
Assignee: The Board of Trustees of the University of Illinois
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Publication number: 20180089814
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image, segments pixels and corresponding depth information to identify foreground regions associated with a target user.
Type: Application
Filed: October 16, 2017
Publication date: March 29, 2018
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Patent number: 9792676
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
Type: Grant
Filed: December 23, 2016
Date of Patent: October 17, 2017
Assignee: The Board of Trustees of the University of Illinois
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
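The FG/BG/UC pipeline in the abstract above can be sketched as follows. This is a hedged toy version: the depth thresholds, color tolerance, and exponential-moving-average background-history update are illustrative assumptions, not the claimed method.

```python
import numpy as np

def segment_by_depth(depth, near=1000, far=2500):
    """Split pixels into three regions by depth (mm).
    Labels: 1 = foreground (FG), 0 = background (BG), 2 = unclear (UC).
    The thresholds are hypothetical."""
    labels = np.full(depth.shape, 2, dtype=np.uint8)  # unclear by default
    labels[depth < near] = 1
    labels[depth > far] = 0
    return labels

def resolve_unclear(labels, color, bg_history, tol=30):
    """Categorize UC pixels: BG when their color matches the running
    background history (BGH) within a tolerance, else FG."""
    out = labels.copy()
    uc = labels == 2
    diff = np.abs(color.astype(int) - bg_history.astype(int)).sum(axis=-1)
    out[uc & (diff <= tol)] = 0
    out[uc & (diff > tol)] = 1
    return out

def update_bg_history(bg_history, color, labels, alpha=0.05):
    """Continually maintain the BGH with a moving average over pixels
    currently labeled BG."""
    bg = labels == 0
    bgh = bg_history.astype(float)
    bgh[bg] = (1 - alpha) * bgh[bg] + alpha * color[bg]
    return bgh.astype(np.uint8)

# Toy frame: a clearly-near pixel, an ambiguous pixel whose color matches
# the background history, and a clearly-far pixel.
depth = np.array([[500, 1500, 3000]])
color = np.array([[[200, 0, 0], [10, 10, 10], [90, 90, 90]]], dtype=np.uint8)
bgh = np.array([[[0, 0, 0], [12, 12, 12], [90, 90, 90]]], dtype=np.uint8)
labels = resolve_unclear(segment_by_depth(depth), color, bgh)
print(labels)  # [[1 0 0]]
```

The remaining steps of the abstract (smoothing FG boundaries and overlaying the FG regions on a new background) would operate on the `labels` mask produced here.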
-
Publication number: 20170223234
Abstract: A color image and a depth image of a live video are received. Each of the color image and the depth image is processed to identify the foreground and the background of the live video. The background of the live video is removed in order to create a foreground video that comprises the foreground of the live video. A control input may be received to control the embedding of the foreground video into a second background from a background feed. The background feed may also comprise virtual objects such that the foreground video may interact with the virtual objects.
Type: Application
Filed: April 17, 2017
Publication date: August 3, 2017
Inventors: Minh N. Do, Quang H. Nguyen, Dennis Lin, Sanjay Patel
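A minimal sketch of the embedding step described above: paste a masked foreground cut-out onto a new background frame at a placement a control input could adjust. The mask-and-paste compositing and the `offset` parameter are assumptions about one simple way to do it, not the patented mechanism.

```python
import numpy as np

def embed_foreground(fg, mask, bg, offset=(0, 0)):
    """Overlay the masked foreground pixels onto a copy of the background
    frame at the given (row, col) offset."""
    out = bg.copy()
    oy, ox = offset
    h, w = mask.shape
    region = out[oy:oy + h, ox:ox + w]   # view into the output frame
    region[mask] = fg[mask]              # paste only where the mask is set
    return out

# Toy frames: a 2x2 white foreground embedded into a 4x6 black background.
bg = np.zeros((4, 6, 3), dtype=np.uint8)
fg = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.ones((2, 2), dtype=bool)
frame = embed_foreground(fg, mask, bg, offset=(1, 2))
```

In the described system the control input would update `offset` (or a scale) frame by frame, and the background feed could carry virtual objects for the foreground to interact with.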
-
Patent number: 9654765
Abstract: A system is disclosed for executing depth image-based rendering of a 3D image by a computer having a processor and that is coupled with one or more color cameras and at least one depth camera. The color cameras and the depth camera are positionable at different arbitrary locations relative to a scene to be rendered. In some examples, the depth camera is a low resolution camera and the color cameras are high resolution. The processor is programmed to propagate depth information from the depth camera to an image plane of each color camera to produce a propagated depth image at each respective color camera, to enhance the propagated depth image at each color camera with the color and propagated depth information thereof to produce corresponding enhanced depth images, and to render a complete, viewable image from one or more enhanced depth images from the color cameras. The processor may be a graphics processing unit.
Type: Grant
Filed: February 3, 2014
Date of Patent: May 16, 2017
Assignee: THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel
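The depth-propagation step above (moving depth samples from the depth camera onto a color camera's image plane) can be sketched with pinhole-camera geometry: back-project each depth pixel to 3D, transform it into the color camera's frame, and project it onto that camera's image. The intrinsics, extrinsics, and nearest-pixel splatting below are illustrative assumptions.

```python
import numpy as np

def propagate_depth(depth, K_d, K_c, R, t, color_shape):
    """Splat a depth camera's samples onto a color camera's image plane.
    K_d, K_c: 3x3 intrinsics; R, t: depth-to-color rotation/translation."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel().astype(float)
    valid = z > 0
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])[:, valid]
    pts = (np.linalg.inv(K_d) @ pix) * z[valid]   # back-project to 3D
    pts_c = R @ pts + t[:, None]                  # into color-camera frame
    proj = K_c @ pts_c
    u = np.round(proj[0] / proj[2]).astype(int)   # nearest color pixel
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros(color_shape)
    ch, cw = color_shape
    ok = (u >= 0) & (u < cw) & (v >= 0) & (v < ch) & (proj[2] > 0)
    out[v[ok], u[ok]] = pts_c[2, ok]              # propagated depth image
    return out

# Sanity check: with identical cameras and an identity transform, the
# propagated depth image should reproduce the input.
K = np.array([[100.0, 0, 8], [0, 100.0, 6], [0, 0, 1]])
depth = np.full((12, 16), 1500.0)
prop = propagate_depth(depth, K, K, np.eye(3), np.zeros(3), (12, 16))
```

The subsequent enhancement step in the abstract would then fill and refine this sparse propagated depth image using the co-located color information.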
-
Publication number: 20170109872
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
Type: Application
Filed: December 23, 2016
Publication date: April 20, 2017
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Systems and methods for embedding a foreground video into a background feed based on a control input
Patent number: 9628722
Abstract: A color image and a depth image of a live video are received. Each of the color image and the depth image is processed to identify the foreground and the background of the live video. The background of the live video is removed in order to create a foreground video that comprises the foreground of the live video. A control input may be received to control the embedding of the foreground video into a second background from a background feed. The background feed may also comprise virtual objects such that the foreground video may interact with the virtual objects.
Type: Grant
Filed: March 30, 2011
Date of Patent: April 18, 2017
Assignee: PERSONIFY, INC.
Inventors: Minh N. Do, Quang H. Nguyen, Dennis Lin, Sanjay J. Patel
-
Patent number: 9530044
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
Type: Grant
Filed: July 21, 2015
Date of Patent: December 27, 2016
Assignee: THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Patent number: 9300946
Abstract: A method, apparatus, system, and computer program product for digital imaging. Multiple cameras comprising lenses and digital image sensors are used to capture multiple images of the same subject, and process the multiple images using difference information (e.g., an image disparity map, an image depth map, etc.). The processing commences by receiving a plurality of image pixels from at least one first image sensor, wherein the first image sensor captures a first image of a first color, receives a stereo image of the first color, and also receives other images of other colors. Having the stereo imagery, the method then constructs a disparity map and an associated depth map by searching for pixel correspondences between the first image and the stereo image. Using the constructed disparity map, captured images are converted into converted images, which are then combined with the first image, resulting in a fused multi-channel color image.
Type: Grant
Filed: July 9, 2012
Date of Patent: March 29, 2016
Assignee: PERSONIFY, INC.
Inventors: Minh N. Do, Quang H. Nguyen, Benjamin Chidester, Long Dang, Sanjay J. Patel
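The disparity-map construction above (searching for pixel correspondences between the first image and the stereo image) can be sketched with classic block matching, followed by the standard disparity-to-depth triangulation. SAD block matching and the focal-length/baseline values are assumed stand-ins, not the specific correspondence search the patent claims.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, win=2):
    """Per-pixel disparity via a sum-of-absolute-differences (SAD) search
    for the best-matching window along the same scanline."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    L, R = left.astype(float), right.astype(float)
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - win) + 1):
                cand = R[y - win:y + win + 1, x - d - win:x - d + win + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal, baseline):
    """Triangulate: depth = focal * baseline / disparity."""
    out = np.zeros_like(disp, dtype=float)
    nz = disp > 0
    out[nz] = focal * baseline / disp[nz]
    return out

# Synthetic textured stereo pair: the right view sees the scene shifted
# 3 px, so the true disparity is 3 everywhere it is observable.
xs, ys = np.meshgrid(np.arange(24), np.arange(12))
left = (7 * xs + 13 * ys) % 251
right = (7 * (xs + 3) + 13 * ys) % 251
disp = block_match_disparity(left, right)
```

Real pipelines (e.g. OpenCV's `StereoBM`) add cost aggregation and sub-pixel refinement; this sketch only shows the core search that yields the disparity and depth maps the abstract mentions.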
-
Publication number: 20150356716
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
Type: Application
Filed: July 21, 2015
Publication date: December 10, 2015
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Patent number: 9087229
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
Type: Grant
Filed: February 6, 2014
Date of Patent: July 21, 2015
Assignee: UNIVERSITY OF ILLINOIS
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Patent number: 9053573
Abstract: A color image and a depth image of a live video are received. A user is extracted from the information of the color image and the depth image. Spurious depth values may be corrected. Points or pixels of an image as seen from a viewpoint of a reference camera at a reference camera location are mapped to points of the image as would be seen from a viewpoint of a virtual camera at a virtual camera location. As such, a transformed color image is generated. Disoccluded pixels may be processed to address any gaps within the transformed color image.
Type: Grant
Filed: April 29, 2011
Date of Patent: June 9, 2015
Assignee: PERSONIFY, INC.
Inventors: Dennis Lin, Quang H. Nguyen, Minh N. Do, Sanjay J. Patel
-
Patent number: 9008457
Abstract: An RGB color image and an infrared intensity image of a live video are received. The RGB color image is converted to a colorspace image comprising a channel corresponding to a brightness value. Each pixel of the converted colorspace image is evaluated to determine whether the brightness channel of the pixel exceeds a threshold value. If the brightness channel of the pixel exceeds the threshold value, the infrared intensity value of a corresponding pixel from the infrared intensity image is mixed into the pixel's channel value that corresponds to brightness. The converted colorspace image is converted back to an RGB color image.
Type: Grant
Filed: May 31, 2011
Date of Patent: April 14, 2015
Assignee: Personify, Inc.
Inventors: Mert Dikmen, Sanjay J. Patel, Dennis Lin, Quang H. Nguyen, Minh N. Do
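The brightness-mixing step above can be sketched as follows. Using max(R, G, B) as the brightness channel (the HSV "value") and a fixed 50/50 blend weight are illustrative assumptions; the patent only specifies that IR intensity is mixed into the brightness channel of pixels above a threshold before converting back to RGB.

```python
import numpy as np

def mix_ir_into_brightness(rgb, ir, threshold=200, weight=0.5):
    """Where a pixel's brightness (here: HSV value = max channel) exceeds
    the threshold, blend in the IR intensity, then rescale the RGB
    channels to the new brightness (the 'convert back' step)."""
    rgbf = rgb.astype(float)
    v = rgbf.max(axis=-1)                        # brightness channel
    bright = v > threshold
    mixed = np.where(bright, (1 - weight) * v + weight * ir, v)
    scale = np.divide(mixed, v, out=np.zeros_like(v), where=v > 0)
    return np.clip(np.rint(rgbf * scale[..., None]), 0, 255).astype(np.uint8)

# One overexposed pixel (brightness 240) and one dim pixel (brightness 50).
rgb = np.array([[[240, 240, 240], [50, 0, 0]]], dtype=np.uint8)
ir = np.array([[100.0, 100.0]])
out = mix_ir_into_brightness(rgb, ir)
print(out[0, 0])  # [170 170 170] -- pulled toward the IR intensity
```

Scaling all three channels by the same factor keeps hue and saturation unchanged, which is the point of working in a brightness-separated colorspace.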
-
Publication number: 20140294288
Abstract: A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
Type: Application
Filed: February 6, 2014
Publication date: October 2, 2014
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel, Daniel P. Dabbelt, Dennis J. Lin
-
Publication number: 20140293010
Abstract: A system is disclosed for executing depth image-based rendering of a 3D image by a computer having a processor and that is coupled with one or more color cameras and at least one depth camera. The color cameras and the depth camera are positionable at different arbitrary locations relative to a scene to be rendered. In some examples, the depth camera is a low resolution camera and the color cameras are high resolution. The processor is programmed to propagate depth information from the depth camera to an image plane of each color camera to produce a propagated depth image at each respective color camera, to enhance the propagated depth image at each color camera with the color and propagated depth information thereof to produce corresponding enhanced depth images, and to render a complete, viewable image from one or more enhanced depth images from the color cameras. The processor may be a graphics processing unit.
Type: Application
Filed: February 3, 2014
Publication date: October 2, 2014
Inventors: Quang H. Nguyen, Minh N. Do, Sanjay J. Patel
-
Patent number: 8818028
Abstract: A color image and a depth image of a live video are received. Each of the color image and the depth image is processed to identify a foreground, a background, and an unknown region band of the live video. The unknown region band may comprise pixels between the foreground and the background. Further processing is performed to segment the pixels of the unknown region band between the foreground and the background. As such, processing is performed on the unknown region band in order to provide an improved user foreground video.
Type: Grant
Filed: April 8, 2011
Date of Patent: August 26, 2014
Assignee: Personify, Inc.
Inventors: Quang H. Nguyen, Greg Meyer, Minh N. Do, Dennis Lin, Sanjay J. Patel
-
Publication number: 20130010073
Abstract: A method, apparatus, system, and computer program product for digital imaging. Multiple cameras comprising lenses and digital image sensors are used to capture multiple images of the same subject, and process the multiple images using difference information (e.g., an image disparity map, an image depth map, etc.). The processing commences by receiving a plurality of image pixels from at least one first image sensor, wherein the first image sensor captures a first image of a first color, receives a stereo image of the first color, and also receives other images of other colors. Having the stereo imagery, the method then constructs a disparity map and an associated depth map by searching for pixel correspondences between the first image and the stereo image. Using the constructed disparity map, captured images are converted into converted images, which are then combined with the first image, resulting in a fused multi-channel color image.
Type: Application
Filed: July 9, 2012
Publication date: January 10, 2013
Inventors: Minh N. Do, Quang H. Nguyen, Benjamin Chidester, Long Dang, Sanjay J. Patel
-
Publication number: 20110293179
Abstract: An RGB color image and an infrared intensity image of a live video are received. The RGB color image is converted to a colorspace image comprising a channel corresponding to a brightness value. Each pixel of the converted colorspace image is evaluated to determine whether the brightness channel of the pixel exceeds a threshold value. If the brightness channel of the pixel exceeds the threshold value, the infrared intensity value of a corresponding pixel from the infrared intensity image is mixed into the pixel's channel value that corresponds to brightness. The converted colorspace image is converted back to an RGB color image.
Type: Application
Filed: May 31, 2011
Publication date: December 1, 2011
Inventors: Mert Dikmen, Sanjay J. Patel, Dennis Lin, Quang H. Nguyen, Minh N. Do