Patents by Inventor Chelhwon KIM

Chelhwon KIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10798359
    Abstract: Systems and methods for generating high resolution dewarped images for an image of a document captured by a 3D stereo digital camera system, or a mobile phone camera capturing a sequence of images, which may improve OCR performance. Example embodiments include a compact stereo camera with two sensors mounted at fixed locations, and a multi-resolution pipeline to process and to dewarp the images using a three dimensional surface model based on curve profiles of the computed depth map. Example embodiments also include a mobile phone including a camera which captures a sequence of images, and a processor which computes a disparity map using the captured sequence of image frames, computes a model of the at least one document page by generating a cylindrical three dimensional geometric surface using the computed disparity map, and renders a dewarped image from the computed model.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: October 6, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Patrick Chiu, Michael Patrick Cutter, Chelhwon Kim, Surendar Chandra
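    Illustrative sketch (not part of the patent): one way to approximate the pipeline described in the abstract is to compute a disparity map from the stereo pair with OpenCV, collapse it into a per-column curve profile under a cylindrical-page assumption, and remap the image to flatten it. The matcher settings and the polynomial surface fit below are assumptions, not the claimed method.

      import cv2
      import numpy as np

      def dewarp_stereo_page(left_gray, right_gray, color_img):
          # 1. Disparity map from the stereo pair (block matching stands in for
          #    whatever matcher the real system uses).
          matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
          disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

          # 2. Collapse depth to one value per column (cylindrical assumption:
          #    the page bends only along the horizontal axis) and smooth it.
          cols = np.arange(disparity.shape[1])
          profile = np.nanmedian(np.where(disparity > 0, disparity, np.nan), axis=0)
          profile = np.polyval(np.polyfit(cols, np.nan_to_num(profile), 4), cols)

          # 3. Turn the curve into per-column arc length and resample columns so
          #    equal arc lengths map to equal output widths (a crude flattening).
          arc = np.cumsum(np.sqrt(1.0 + np.gradient(profile) ** 2))
          arc = (arc - arc.min()) / (arc.max() - arc.min()) * (len(cols) - 1)
          map_x = np.tile(np.interp(cols, arc, cols),
                          (color_img.shape[0], 1)).astype(np.float32)
          map_y = np.tile(np.arange(color_img.shape[0], dtype=np.float32)[:, None],
                          (1, len(cols)))
          return cv2.remap(color_img, map_x, map_y, cv2.INTER_LINEAR)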
  • Publication number: 20200311468
    Abstract: A computer-implemented method of localization for an indoor environment is provided, including receiving, in real-time, a dynamic query from a first source, and static inputs from a second source; extracting features of the static inputs by applying a metric learning convolutional neural network (CNN), and aggregating the extracted features of the static inputs to generate a feature transformation; and iteratively extracting features of the dynamic query with a deep CNN used as an embedding network, fusing the feature transformation into the deep CNN, and applying a triplet loss function to optimize the embedding network and provide a localization result.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Inventors: Chelhwon Kim, Chidansh Amitkumar Bhatt, Miteshkumar Patel, Donald Kimber
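    Illustrative sketch (not part of the application): a minimal PyTorch version of the described fusion, assuming the static inputs have already been encoded into feature vectors and using a simple scale-and-shift fusion; the real architecture and fusion mechanism are defined by the application, not reproduced here.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class LocalizationEmbedder(nn.Module):
          def __init__(self, dim=128):
              super().__init__()
              self.backbone = nn.Sequential(                        # stand-in deep CNN
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
              self.static_encoder = nn.Linear(64, 2 * 64)           # feature transformation
              self.head = nn.Linear(64, dim)

          def forward(self, query, static_feats):
              x = self.backbone(query)                              # (B, 64) query features
              scale, shift = self.static_encoder(static_feats.mean(dim=1)).chunk(2, dim=-1)
              x = x * scale + shift                                 # fuse static context
              return F.normalize(self.head(x), dim=-1)              # embedding for matching

      # A triplet loss pulls a query toward its true location and away from others.
      triplet = nn.TripletMarginLoss(margin=0.2)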
  • Patent number: 10785421
    Abstract: A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real-time video conference stream containing the video of the user; detecting the background in the received real-time video conference stream and separating it from the user; and replacing the separated background with a background received from a system of a second user or with a pre-recorded background.
    Type: Grant
    Filed: December 8, 2018
    Date of Patent: September 22, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Laurent Denoue, Scott Carter, Chelhwon Kim
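    Illustrative sketch (not part of the patent): given a person mask from any segmentation model, compositing the user over a replacement background is a small alpha blend; the mask source and blur radius below are assumptions.

      import cv2
      import numpy as np

      def replace_background(frame_bgr, person_mask, new_bg_bgr):
          # person_mask: float values in [0, 1], 1 where the user is visible.
          bg = cv2.resize(new_bg_bgr, (frame_bgr.shape[1], frame_bgr.shape[0]))
          alpha = cv2.GaussianBlur(person_mask.astype(np.float32), (15, 15), 0)[..., None]
          return (alpha * frame_bgr + (1.0 - alpha) * bg).astype(np.uint8)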
  • Patent number: 10768692
    Abstract: In a telepresence scenario with remote users discussing a document or a slide, it can be difficult to follow which parts of the document are being discussed. One way to address this problem is to provide feedback by showing where on the document the user's hand is pointing, which also enables more expressive gestural communication than a simple remote cursor. An important practical problem is how to transmit this remote feedback efficiently with high resolution document images. This is not possible with standard videoconferencing systems, which have insufficient resolution. We propose a method based on using hand skeletons to provide the feedback. The skeleton can be captured using a depth camera or a webcam (with a deep network algorithm), and the small amount of data can be transmitted at a high frame rate (without a video codec).
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: September 8, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Chelhwon Kim, Patrick Chiu, Joseph Andrew Alkuino de la Pena, Laurent Denoue, Jun Shingu
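    Illustrative sketch (not part of the patent): the bandwidth argument in the abstract can be seen by serializing 21 hand keypoints per frame and sending them over UDP; the host, port, and payload layout are assumptions.

      import json
      import socket

      def send_hand_skeleton(sock, address, keypoints):
          # keypoints: 21 (x, y, z) tuples from any hand-tracking model.
          payload = json.dumps([[round(c, 4) for c in p] for p in keypoints]).encode()
          sock.sendto(payload, address)   # a few hundred bytes per frame, no video codec

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      # send_hand_skeleton(sock, ("receiver.example", 9000), keypoints)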
  • Patent number: 10691217
    Abstract: A method at a computer system includes obtaining an electronic document comprising document elements, and injecting into the document in association with one of the document elements one or more hotspot attributes, the hotspot attributes defining attributes of a hotspot that is displayable in conjunction with the document element when the document is displayed, the hotspot attributes being associated with predefined physical gestures and respective document actions; such that the hotspot, when displayed as part of a displayed document, indicates that a viewer of the displayed document can interact with the displayed document using the predefined physical gestures (i) performed at a position that overlaps a displayed version of the document in a field of view of a camera system and (ii) captured by the camera system, wherein a physical gesture results in a respective document action being performed on the displayed document.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: June 23, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Patrick Chiu, Joseph Andrew Alkuino de la Peña, Laurent Denoue, Chelhwon Kim
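    Illustrative sketch (not part of the patent): for an HTML document, injecting hotspot attributes could look like the snippet below; the data-hotspot-* attribute names and the BeautifulSoup dependency are assumptions, not the claimed schema.

      from bs4 import BeautifulSoup

      def inject_hotspot(html, element_id, gesture, action):
          soup = BeautifulSoup(html, "html.parser")
          el = soup.find(id=element_id)
          if el is not None:
              el["data-hotspot-gesture"] = gesture   # e.g. "point-and-hold"
              el["data-hotspot-action"] = action     # e.g. "open-figure"
          return str(soup)

      print(inject_hotspot('<p id="fig1">Figure 1</p>', "fig1", "point-and-hold", "open-figure"))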
  • Publication number: 20200186727
    Abstract: A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real-time video conference stream containing the video of the user; detecting the background in the received real-time video conference stream and separating it from the user; and replacing the separated background with a background received from a system of a second user or with a pre-recorded background.
    Type: Application
    Filed: December 8, 2018
    Publication date: June 11, 2020
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Laurent Denoue, Scott Carter, Chelhwon Kim
  • Publication number: 20200167715
    Abstract: Example implementations described herein are directed to systems and methods for skill assessment, such as hand washing compliance in hospitals, or assembling products in factories. Example implementations involve body part tracking (e.g., hands), skeleton tracking and deep neural networks to detect and recognize sub-tasks and to assess the skill on each sub-task. Furthermore, the order of the sub-tasks is checked for correctness. Beyond monitoring individual users, example implementations can be used for analyzing and improving workflow designs with multiple sub-tasks.
    Type: Application
    Filed: November 27, 2018
    Publication date: May 28, 2020
    Inventors: Chidansh Amitkumar Bhatt, Patrick Chiu, Chelhwon Kim, Qiong Liu, Hideto Oda, Yanxia Zhang
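    Illustrative sketch (not part of the application): once sub-tasks have been recognized, checking that they occur in the required order reduces to a subsequence test; the hand-washing step names are illustrative.

      def order_is_correct(recognized, required):
          # True if the required sub-tasks appear, in order, within the recognized sequence.
          it = iter(recognized)
          return all(step in it for step in required)

      required = ["wet", "soap", "scrub", "rinse", "dry"]
      print(order_is_correct(["wet", "soap", "scrub", "rinse", "dry"], required))  # True
      print(order_is_correct(["wet", "scrub", "soap", "rinse", "dry"], required))  # False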
  • Publication number: 20200103976
    Abstract: In example implementations described herein, there is a smart hat that is configured to receive gesture inputs through an accelerometer or through touch events on the hat, with light emitting diodes providing feedback to the wearer. The smart hat can be configured to be connected wirelessly to an external apparatus to control the apparatus by transmitting and receiving messages (e.g., commands) between the hat and the apparatus wirelessly. Further, a network of smart hats may be managed by an external device depending on the desired implementation.
    Type: Application
    Filed: October 1, 2018
    Publication date: April 2, 2020
    Inventors: Christine Marie Dierk, Scott Carter, Patrick Chiu, Anthony Dunnigan, Chelhwon Kim, Donald Kimber, Nami Tokunaga, Hajime Ueno
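    Illustrative sketch (not part of the application): a tap-style gesture can be detected by thresholding accelerometer magnitude and forwarding a command to the paired device; the threshold, debounce time, and command name are assumptions.

      import time

      def run_tap_loop(read_accel, send_command, threshold=2.5):
          # read_accel() returns (x, y, z) in g; send_command() transmits wirelessly.
          while True:
              x, y, z = read_accel()
              if (x * x + y * y + z * z) ** 0.5 > threshold:   # sharp spike ≈ tap
                  send_command("toggle")                        # hypothetical command
                  time.sleep(0.5)                               # debounce
              time.sleep(0.01)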
  • Publication number: 20200050353
    Abstract: Systems and methods described herein utilize a deep learning algorithm to recognize gestures and other actions on a projected user interface provided by a projector. A camera that incorporates depth information and color information records gestures and actions detected on the projected user interface. The deep learning algorithm can be configured to be engaged when an action is detected to save on processing cycles for the hardware system.
    Type: Application
    Filed: August 9, 2018
    Publication date: February 13, 2020
    Inventors: Patrick CHIU, Chelhwon KIM
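    Illustrative sketch (not part of the application): the compute-saving idea of engaging the deep model only when something happens can be implemented by gating on depth-frame change; the change threshold and pixel count below are assumptions.

      import numpy as np

      def gated_classify(depth_prev, depth_curr, color_frame, classify,
                         depth_delta=20, min_changed_px=500):
          changed = np.count_nonzero(
              np.abs(depth_curr.astype(np.int32) - depth_prev.astype(np.int32)) > depth_delta)
          if changed < min_changed_px:
              return None                   # idle: skip the deep model entirely
          return classify(color_frame)      # e.g. a CNN that labels the gesture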
  • Publication number: 20200012850
    Abstract: A real-time end-to-end system for capturing ink strokes written with ordinary pen and paper using a commodity video camera is described. Compared to traditional camera-based approaches, which typically separate out pen tip localization and pen up/down motion detection, the described system uses a unified approach that integrates these two steps with a deep neural network. Furthermore, the described system does not require manual initialization to locate the pen tip. A preliminary evaluation demonstrates the effectiveness of the described system on handwriting recognition for English and Japanese phrases.
    Type: Application
    Filed: July 3, 2018
    Publication date: January 9, 2020
    Inventors: Chelhwon Kim, Patrick Chiu, Hideto Oda
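    Illustrative sketch (not part of the application): a unified network can emit both a pen-tip heatmap and a pen up/down logit from shared features; the layer sizes below are guesses, not the described architecture.

      import torch.nn as nn

      class PenTracker(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
              self.tip_head = nn.Conv2d(32, 1, 1)                          # tip heatmap
              self.updown_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                               nn.Flatten(), nn.Linear(32, 1))

          def forward(self, frame):
              f = self.features(frame)
              return self.tip_head(f), self.updown_head(f)                 # heatmap, up/down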
  • Publication number: 20190347509
    Abstract: Systems and methods directed to utilizing a first camera system to capture first images of one or more people in proximity to a tabletop; utilizing a second camera system to capture second images of one or more documents in proximity to the tabletop; generating a query for a database derived from people recognition conducted on the first images and text extraction on the second images; determining a first ranked list of people and a second ranked list of documents based on results of the query, the results based on a calculated ranked list of two-mode networks; and providing an interface on a display to access information about one or more people from the first ranked list of people and one or more documents from the second ranked list of documents.
    Type: Application
    Filed: May 9, 2018
    Publication date: November 14, 2019
    Inventors: Patrick CHIU, Chelhwon KIM, Hajime UENO, Yulius TJAHJADI, Anthony DUNNIGAN, Francine CHEN, Jian ZHAO, Bee-Yian LIEW, Scott CARTER
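    Illustrative sketch (not part of the application): a two-mode (person-document) network can be built from co-occurrences observed at the tabletop and each mode ranked by association weight; the simple count-based weighting is an assumption.

      from collections import defaultdict

      def rank_two_mode(observations):
          # observations: (person, document) pairs seen together in one capture.
          weight = defaultdict(float)
          for person, document in observations:
              weight[(person, document)] += 1.0
          person_score, doc_score = defaultdict(float), defaultdict(float)
          for (person, document), w in weight.items():
              person_score[person] += w
              doc_score[document] += w
          rank = lambda d: sorted(d, key=d.get, reverse=True)
          return rank(person_score), rank(doc_score)

      people, docs = rank_two_mode([("alice", "specs.pdf"), ("alice", "notes.txt"),
                                    ("bob", "specs.pdf")])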
  • Publication number: 20190258311
    Abstract: In a telepresence scenario with remote users discussing a document or a slide, it can be difficult to follow which parts of the document are being discussed. One way to address this problem is to provide feedback by showing where on the document the user's hand is pointing, which also enables more expressive gestural communication than a simple remote cursor. An important practical problem is how to transmit this remote feedback efficiently with high resolution document images. This is not possible with standard videoconferencing systems, which have insufficient resolution. We propose a method based on using hand skeletons to provide the feedback. The skeleton can be captured using a depth camera or a webcam (with a deep network algorithm), and the small amount of data can be transmitted at a high frame rate (without a video codec).
    Type: Application
    Filed: February 21, 2018
    Publication date: August 22, 2019
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Chelhwon Kim, Patrick Chiu, Joseph Andrew Alkuino de la Pena, Laurent Denoue, Jun Shingu
  • Publication number: 20180307316
    Abstract: A method at a computer system includes obtaining an electronic document comprising document elements, and injecting into the document in association with one of the document elements one or more hotspot attributes, the hotspot attributes defining attributes of a hotspot that is displayable in conjunction with the document element when the document is displayed, the hotspot attributes being associated with predefined physical gestures and respective document actions; such that the hotspot, when displayed as part of a displayed document, indicates that a viewer of the displayed document can interact with the displayed document using the predefined physical gestures (i) performed at a position that overlaps a displayed version of the document in a field of view of a camera system and (ii) captured by the camera system, wherein a physical gesture results in a respective document action being performed on the displayed document.
    Type: Application
    Filed: April 20, 2017
    Publication date: October 25, 2018
    Inventors: Patrick Chiu, Joseph Andrew Alkuino de la Peña, Laurent Denoue, Chelhwon Kim
  • Publication number: 20180255287
    Abstract: Systems and methods for generating high resolution dewarped images for an image of a document captured by a 3D stereo digital camera system, or a mobile phone camera capturing a sequence of images, which may improve OCR performance. Example embodiments include a compact stereo camera with two sensors mounted at fixed locations, and a multi-resolution pipeline to process and to dewarp the images using a three dimensional surface model based on curve profiles of the computed depth map. Example embodiments also include a mobile phone including a camera which captures a sequence of images, and a processor which computes a disparity map using the captured sequence of image frames, computes a model of the at least one document page by generating a cylindrical three dimensional geometric surface using the computed disparity map, and renders a dewarped image from the computed model.
    Type: Application
    Filed: May 4, 2018
    Publication date: September 6, 2018
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Patrick CHIU, Michael Patrick CUTTER, Chelhwon KIM, Surendar CHANDRA
  • Patent number: 9992471
    Abstract: Systems and methods for generating high resolution dewarped images for an image of a document captured by a 3D stereo digital camera system, or a mobile phone camera capturing a sequence of images, which may improve OCR performance. Example embodiments include a compact stereo camera with two sensors mounted at fixed locations, and a multi-resolution pipeline to process and to dewarp the images using a three dimensional surface model based on curve profiles of the computed depth map. Example embodiments also include a mobile phone including a camera which captures a sequence of images, and a processor which computes a disparity map using the captured sequence of image frames, computes a model of the at least one document page by generating a cylindrical three dimensional geometric surface using the computed disparity map, and renders a dewarped image from the computed model.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: June 5, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Patrick Chiu, Michael Patrick Cutter, Chelhwon Kim, Surendar Chandra
  • Patent number: 9747499
    Abstract: Described are systems and methods for recognizing paper documents on a tabletop using an overhead camera mounted on pan-tilt servos. The described automated system first finds paper documents on a cluttered desk based on a text probability map, constructed using multiple images acquired at fixed grid positions, and then captures a sequence of high-resolution overlapping frames of the located document(s), which are then fused together and perspective-rectified, using computed homography, to reconstruct a fronto-parallel document image of sufficient quality for optical character recognition. The extracted textual information may be used, for example, for indexing and search, document repository and/or language translation applications.
    Type: Grant
    Filed: March 3, 2015
    Date of Patent: August 29, 2017
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Chelhwon Kim, Patrick Chiu, Hao Tang
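    Illustrative sketch (not part of the patent): the perspective-rectification step can be reproduced with OpenCV by mapping the four detected page corners to a fronto-parallel rectangle; the output size and corner ordering are assumptions.

      import cv2
      import numpy as np

      def rectify_document(frame, corners_px, out_w=1700, out_h=2200):
          # corners_px: page corners as (top-left, top-right, bottom-right, bottom-left).
          src = np.asarray(corners_px, dtype=np.float32)
          dst = np.float32([[0, 0], [out_w - 1, 0],
                            [out_w - 1, out_h - 1], [0, out_h - 1]])
          H, _ = cv2.findHomography(src, dst)          # frame -> page-plane homography
          return cv2.warpPerspective(frame, H, (out_w, out_h))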
  • Publication number: 20160259971
    Abstract: Described are systems and methods for recognizing paper documents on a tabletop using an overhead camera mounted on pan-tilt servos. The described automated system first finds paper documents on a cluttered desk based on a text probability map, constructed using multiple images acquired at fixed grid positions, and then captures a sequence of high-resolution overlapping frames of the located document(s), which are then fused together and perspective-rectified, using computed homography, to reconstruct a fronto-parallel document image of sufficient quality for optical character recognition. The extracted textual information may be used, for example, for indexing and search, document repository and/or language translation applications.
    Type: Application
    Filed: March 3, 2015
    Publication date: September 8, 2016
    Inventors: Chelhwon Kim, Patrick Chiu, Hao Tang
  • Publication number: 20130242054
    Abstract: Systems and methods for generating high resolution dewarped images for an image of a document captured by a 3D stereo digital camera system, or a mobile phone camera capturing a sequence of images, which may improve OCR performance. Example embodiments include a compact stereo camera with two sensors mounted at fixed locations, and a multi-resolution pipeline to process and to dewarp the images using a three dimensional surface model based on curve profiles of the computed depth map. Example embodiments also include a mobile phone including a camera which captures a sequence of images, and a processor which computes a disparity map using the captured sequence of image frames, computes a model of the at least one document page by generating a cylindrical three dimensional geometric surface using the computed disparity map, and renders a dewarped image from the computed model.
    Type: Application
    Filed: November 30, 2012
    Publication date: September 19, 2013
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Patrick CHIU, Michael Patrick CUTTER, Chelhwon KIM, Surendar CHANDRA