Patents by Inventor Chelhwon KIM

Chelhwon KIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230394740
    Abstract: Systems and methods are directed to generating a three-dimensional (3D) model of a first image set. The first image set (e.g., input image set) corresponds to different viewing angles of an object that is subject to 3D modeling. A volumetric density function is generated from the first image set. A second image set (e.g., a textured image set) is generated from the volumetric density function and from a predefined color function. The first image set is blended with the second image set to generate a third image set (e.g., an image set with temporary textures). To generate the 3D model, a 3D surface model is generated from the third image set. In addition, a texture map of the 3D surface model is generated from the first image set. A computing system is configured to render the 3D surface model and texture map for display.
    Type: Application
    Filed: August 21, 2023
    Publication date: December 7, 2023
    Inventors: Chelhwon Kim, Nicolas Dahlquist
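The abstract above describes blending the input image set with a textured image set rendered from the volumetric density function. Below is a minimal Python sketch of that blending step only, assuming both sets are already available as same-sized RGB arrays and that a simple per-pixel alpha blend is acceptable; the patent does not specify the blend function, and the alpha parameter is hypothetical.

```python
import numpy as np

def blend_image_sets(input_images, textured_images, alpha=0.5):
    """Blend each input image with its rendered textured counterpart.

    input_images, textured_images: lists of HxWx3 float arrays in [0, 1],
    one pair per viewing angle. alpha controls how much of the original
    appearance is kept (hypothetical parameter; not named in the patent).
    """
    blended = []
    for original, textured in zip(input_images, textured_images):
        assert original.shape == textured.shape
        blended.append(alpha * original + (1.0 - alpha) * textured)
    return blended

# Example with random stand-in images for two viewing angles.
views = [np.random.rand(64, 64, 3) for _ in range(2)]
textured = [np.random.rand(64, 64, 3) for _ in range(2)]
third_set = blend_image_sets(views, textured, alpha=0.7)
print(len(third_set), third_set[0].shape)
```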
  • Publication number: 20230396753
    Abstract: Systems and methods are directed to multiview format detection. A multiview image that comprises a plurality of tiled view images is accessed. A cross-correlation map from the multiview image may be generated by autocorrelating the multiview image with a shifted copy of the multiview image. The cross-correlation map may be sampled at a plurality of predefined locations of the cross-correlation map to identify a set of cross-correlation values. A multiview format of the multiview image may be detected by classifying a feature set of the multiview image that comprises the set of cross-correlation values. The multiview image may be configured to be rendered on a multiview display based on the multiview format.
    Type: Application
    Filed: August 15, 2023
    Publication date: December 7, 2023
    Inventors: Chelhwon Kim, Yiwen Hua, David A. Fattal
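The detection described above samples a correlation map of the tiled image against a shifted copy of itself. The sketch below is a simplified, rule-based stand-in: it computes normalized correlations at a few hand-picked shifts (half-width, half-height, and both) and picks the layout with the strongest response, whereas the application describes classifying a feature set built from such samples. The shift choices and format labels are assumptions.

```python
import numpy as np

def shift_correlation(img, dy, dx):
    """Normalized correlation between a grayscale image and a copy of
    itself shifted by (dy, dx), computed on the overlapping region."""
    h, w = img.shape
    a = img[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
    b = img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def detect_format(img):
    """Classify a tiled multiview image by sampling correlations at a few
    predefined shifts (hypothetical rule-based stand-in for the classifier)."""
    h, w = img.shape
    features = {
        "side_by_side": shift_correlation(img, 0, w // 2),
        "top_bottom": shift_correlation(img, h // 2, 0),
        "quad": shift_correlation(img, h // 2, w // 2),
    }
    return max(features, key=features.get), features

# Example: a synthetic side-by-side multiview image (same tile twice).
tile = np.random.rand(64, 64)
side_by_side = np.hstack([tile, tile])
print(detect_format(side_by_side)[0])
```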
  • Patent number: 11405735
    Abstract: A computer-implemented method, comprising detecting a first audio output in a first room, detecting a portion of the first audio output in a second room, determining whether the portion of the first audio output in the second room meets a trigger requirement, and, in response to determining that the portion meets the trigger requirement, providing an action to reduce the portion of the first audio output in the second room.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: August 2, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Matthew Len Lee, Chelhwon Kim, Patrick Chiu, Miteshkumar Patel
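A minimal sketch of the trigger check described above, assuming the "trigger requirement" can be approximated by comparing RMS levels of audio captured in the two rooms against a decibel threshold; the threshold value and the use of RMS are assumptions, not the patented criterion.

```python
import numpy as np

def leaked_level_db(room1_samples, room2_samples):
    """Level of room 2 relative to room 1 in dB, from the RMS energy of
    short mono audio buffers (float arrays)."""
    rms1 = np.sqrt(np.mean(np.square(room1_samples)) + 1e-12)
    rms2 = np.sqrt(np.mean(np.square(room2_samples)) + 1e-12)
    return 20.0 * np.log10(rms2 / rms1)

def should_trigger(room1_samples, room2_samples, threshold_db=-20.0):
    """Trigger an action (e.g., lower the speaker volume) when the audio
    detected in room 2 is within threshold_db of the room 1 output.
    threshold_db is a hypothetical value."""
    return leaked_level_db(room1_samples, room2_samples) > threshold_db

# Example: a tone played in room 1 and an attenuated copy leaking into room 2.
t = np.linspace(0, 1, 16000, endpoint=False)
room1 = np.sin(2 * np.pi * 440 * t)
room2 = 0.3 * room1
print(should_trigger(room1, room2))  # 0.3x amplitude is about -10.5 dB, above -20 dB
```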
  • Patent number: 11343445
    Abstract: A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real time video conference stream containing the video of the user; detecting and separating the background in the received real time video conference stream from the user; and replacing the separated background with a background received from a system of a second user or with a pre-recorded background.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: May 24, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Laurent Denoue, Scott Carter, Chelhwon Kim
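The background-replacement step above can be illustrated with a short compositing sketch. It assumes a person-segmentation mask is already available (the patent's detection and separation step is not reproduced); only the replacement compositing is shown.

```python
import numpy as np

def replace_background(frame, mask, new_background):
    """Composite a video-conference frame over a replacement background.

    frame, new_background: HxWx3 float arrays in [0, 1].
    mask: HxW float array in [0, 1], 1 where the user is, 0 for background
    (in practice produced by a person-segmentation model; here it is an input).
    """
    mask3 = mask[..., None]  # broadcast the mask over the color channels
    return mask3 * frame + (1.0 - mask3) * new_background

# Example with synthetic data: keep the left half of the frame as the "user".
frame = np.random.rand(120, 160, 3)
background = np.zeros((120, 160, 3))
mask = np.zeros((120, 160))
mask[:, :80] = 1.0
composited = replace_background(frame, mask, background)
print(composited.shape)
```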
  • Patent number: 11343446
    Abstract: A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real-time video conference stream containing the video of the user; detecting a first background in the received real-time video conference stream of the user; and matching the first background with a second background associated with a second user.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: May 24, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Laurent Denoue, Scott Carter, Chelhwon Kim
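A rough illustration of what "matching" two backgrounds could involve, using color-histogram intersection as a stand-in similarity measure. Both the criterion and the threshold are assumptions; the patent only states that the first and second backgrounds are matched.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram of an HxWx3 image in [0, 1], normalized to sum to 1."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 1), (0, 1), (0, 1)))
    return hist / (hist.sum() + 1e-12)

def backgrounds_match(bg_a, bg_b, threshold=0.5):
    """Decide whether two detected backgrounds 'match' by histogram
    intersection (hypothetical criterion and threshold)."""
    intersection = np.minimum(color_histogram(bg_a), color_histogram(bg_b)).sum()
    return intersection >= threshold, intersection

# Example: two noisy views of the same plain wall vs. an unrelated scene.
wall_a = 0.6 + 0.05 * np.random.rand(90, 120, 3)
wall_b = 0.6 + 0.05 * np.random.rand(90, 120, 3)
other = np.random.rand(90, 120, 3)
print(backgrounds_match(wall_a, wall_b)[0], backgrounds_match(wall_a, other)[0])
```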
  • Patent number: 11288871
    Abstract: Example implementations described herein are directed to the transmission of hand information from a user hand or other object to a remote device via browser-to-browser connections, such that the hand or other object is oriented correctly on the remote device based on orientation measurements received from the remote device. Such example implementations can facilitate remote assistance in which the user of the remote device needs to view the hand or object movement as provided by an expert for guidance.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: March 29, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Chelhwon Kim, Patrick Chiu, Yulius Tjahjadi, Donald Kimber, Qiong Liu
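The orientation-correction idea above can be sketched independently of the browser-to-browser transport (e.g., WebRTC data channels, not shown here). The snippet below simply counter-rotates 2D hand landmark points by the orientation angle reported from the remote device; the coordinate convention and angle sign are assumptions.

```python
import numpy as np

def reorient_hand_points(points, remote_rotation_deg):
    """Rotate 2D hand landmark points (Nx2, normalized around the view
    center) so they appear upright on a remote device whose orientation,
    in degrees, was reported back over the peer connection."""
    theta = np.deg2rad(-remote_rotation_deg)  # counter-rotate to compensate
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T

# Example: the remote tablet reports it is held at 90 degrees (landscape).
fingertips = np.array([[0.0, 0.5], [0.2, 0.4]])
print(reorient_hand_points(fingertips, 90.0).round(3))
```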
  • Patent number: 11227406
    Abstract: A computer-implemented method, comprising applying training images of an environment divided into zones to a neural network, and performing classification to label a test image based on a closest zone of the zones; extracting a feature from retrieved training images and pose information of the test image that match the closest zone; performing bundle adjustment on the extracted feature by triangulating map points for the closest zone to generate a reprojection error, and minimizing the reprojection error to determine an optimal pose of the test image; and for the optimal pose, providing an output indicative of a location or probability of a location of the test image at the optimal pose within the environment.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: January 18, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Miteshkumar Patel, Jingwei Song, Andreas Girgensohn, Chelhwon Kim
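A small single-camera stand-in for the reprojection-error minimization described above: given triangulated map points and their observed image locations, it refines a 6-DoF pose with least squares. The pinhole intrinsics and the use of SciPy's solver are assumptions for illustration, not the patented bundle-adjustment procedure.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rotvec, tvec, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project 3D map points into the image with a pinhole camera model."""
    cam = Rotation.from_rotvec(rotvec).apply(points_3d) + tvec
    return np.stack([fx * cam[:, 0] / cam[:, 2] + cx,
                     fy * cam[:, 1] / cam[:, 2] + cy], axis=1)

def refine_pose(points_3d, observed_2d, rotvec0, tvec0):
    """Minimize the reprojection error over the 6-DoF test-image pose
    (a single-view stand-in for the bundle-adjustment step)."""
    def residuals(pose):
        return (project(points_3d, pose[:3], pose[3:]) - observed_2d).ravel()
    result = least_squares(residuals, np.concatenate([rotvec0, tvec0]))
    return result.x[:3], result.x[3:], result.cost

# Example: recover a small pose offset for synthetic map points.
pts = np.random.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))
true_pose = np.array([0.02, -0.01, 0.0, 0.1, 0.05, 0.0])
obs = project(pts, true_pose[:3], true_pose[3:])
rot, t, cost = refine_pose(pts, obs, np.zeros(3), np.zeros(3))
print(np.round(t, 3), round(cost, 6))
```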
  • Publication number: 20210392451
    Abstract: A computer-implemented method, comprising detecting a first audio output in a first room, detecting a portion of the first audio output in a second room, determining whether the portion of the first audio output in the second room meets a trigger requirement, and, in response to determining that the portion meets the trigger requirement, providing an action to reduce the portion of the first audio output in the second room.
    Type: Application
    Filed: June 16, 2020
    Publication date: December 16, 2021
    Inventors: Matthew Len Lee, Chelhwon Kim, Patrick Chiu, Miteshkumar Patel
  • Publication number: 20210272317
    Abstract: A computer-implemented method, comprising applying training images of an environment divided into zones to a neural network, and performing classification to label a test image based on a closest zone of the zones; extracting a feature from retrieved training images and pose information of the test image that match the closest zone; performing bundle adjustment on the extracted feature by triangulating map points for the closest zone to generate a reprojection error, and minimizing the reprojection error to determine an optimal pose of the test image; and for the optimal pose, providing an output indicative of a location or probability of a location of the test image at the optimal pose within the environment.
    Type: Application
    Filed: February 28, 2020
    Publication date: September 2, 2021
    Inventors: Miteshkumar PATEL, Jingwei SONG, Andreas GIRGENSOHN, Chelhwon KIM
  • Patent number: 11093886
    Abstract: Example implementations described herein are directed to systems and methods for skill assessment, such as hand washing compliance in hospitals, or assembling products in factories. Example implementations involve body part tracking (e.g., hands), skeleton tracking and deep neural networks to detect and recognize sub-tasks and to assess the skill on each sub-task. Furthermore, the order of the sub-tasks is checked for correctness. Beyond monitoring individual users, example implementations can be used for analyzing and improving workflow designs with multiple sub-tasks.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: August 17, 2021
    Assignee: FUJIFILM BUSINESS INNOVATION CORP.
    Inventors: Chidansh Amitkumar Bhatt, Patrick Chiu, Chelhwon Kim, Qiong Liu, Hideto Oda, Yanxia Zhang
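One piece of the method above, checking that recognized sub-tasks occur in the correct order, lends itself to a tiny sketch. The hand-washing labels below are illustrative only, and collapsing consecutive repeated detections is an assumption about how frame-level labels would be handled.

```python
def subtask_order_correct(recognized, expected):
    """Check that the recognized sub-tasks occur in the expected order.

    recognized: sub-task labels as detected over time (may contain repeats
    from consecutive frames); expected: the required ordering. Collapses
    consecutive repeats, then verifies expected is a subsequence.
    """
    collapsed = [recognized[0]] if recognized else []
    for label in recognized[1:]:
        if label != collapsed[-1]:
            collapsed.append(label)
    it = iter(collapsed)
    return all(step in it for step in expected)

# Example: hand-washing sub-tasks (labels are illustrative only).
expected = ["wet", "soap", "scrub", "rinse", "dry"]
observed = ["wet", "wet", "soap", "scrub", "scrub", "rinse", "dry"]
print(subtask_order_correct(observed, expected))                  # True
print(subtask_order_correct(["soap", "wet", "rinse"], expected))  # False
```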
  • Patent number: 11069259
    Abstract: A computer implemented method is provided that includes embedding a received signal in a first modality, re-embedding the embedded received signal of the first modality into a signal of a second modality, and generating an output in the second modality, and based on the output, rendering a signal in the second modality that is configured to be sensed, wherein the embedding, re-embedding and generating applies a model that is trained by performing an adversarial learning operation associated with discriminating actual examples of the target distribution from the generated output, and performing a metric learning operation associated with generating the output having perceptual distances.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: July 20, 2021
    Assignee: FUJIFILM BUSINESS INNOVATION CORP.
    Inventors: Andrew Allan Port, Doga Buse Cavdir, Chelhwon Kim, Miteshkumar Patel, Donald Kimber, Qiong Liu
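A toy PyTorch sketch of combining the two training objectives named above: an adversarial term from a discriminator on the generated second-modality output, and a metric-learning term implemented here as a triplet margin loss. The layer sizes, the triplet construction, and the reading of "perceptual distances" are all assumptions; the patented architecture is not reproduced.

```python
import torch
import torch.nn as nn

# Toy encoder/generator/discriminator; only the two objectives are illustrated.
embed = nn.Linear(16, 8)       # embed a signal from the first modality
generate = nn.Linear(8, 32)    # re-embed into the second modality
discriminate = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())

adversarial = nn.BCELoss()
metric = nn.TripletMarginLoss(margin=1.0)

x = torch.randn(4, 16)         # batch of first-modality signals
real = torch.randn(4, 32)      # actual examples of the target distribution
fake = generate(embed(x))      # generated second-modality output

# Adversarial term: the generator tries to make 'fake' look real.
gen_adv_loss = adversarial(discriminate(fake), torch.ones(4, 1))

# Metric term: outputs for near-identical inputs stay close (anchor, positive),
# outputs for a different input stay far (negative) -- a hypothetical reading
# of "output having perceptual distances".
anchor = generate(embed(x))
positive = generate(embed(x + 0.01 * torch.randn_like(x)))
negative = generate(embed(torch.randn(4, 16)))
metric_loss = metric(anchor, positive, negative)

total_loss = gen_adv_loss + metric_loss
print(float(total_loss))
```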
  • Publication number: 20210142568
    Abstract: Example implementations described herein are directed to the transmission of hand information from a user hand or other object to a remote device via browser-to-browser connections, such that the hand or other object is oriented correctly on the remote device based on orientation measurements received from the remote device. Such example implementations can facilitate remote assistance in which the user of the remote device needs to view the hand or object movement as provided by an expert for guidance.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: Chelhwon KIM, Patrick CHIU, Yulius TJAHJADI, Donald KIMBER, Qiong LIU
  • Patent number: 10997402
    Abstract: A real-time end-to-end system for capturing ink strokes written with ordinary pen and paper using a commodity video camera is described. Compared to traditional camera-based approaches, which typically separate pen tip localization from pen up/down motion detection, the described approach unifies these two steps with a deep neural network. Furthermore, the described system does not require manual initialization to locate the pen tip. A preliminary evaluation demonstrates the effectiveness of the described system on handwriting recognition for English and Japanese phrases.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: May 4, 2021
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Chelhwon Kim, Patrick Chiu, Hideto Oda
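A toy PyTorch network illustrating the unified idea above: a single forward pass over a camera frame yields both a pen-tip location and a pen up/down probability. Layer sizes and the output heads are illustrative assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class PenTipNet(nn.Module):
    """Toy stand-in for the unified network: from one camera frame, predict
    the pen-tip location and a pen up/down probability in a single pass."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tip_xy = nn.Linear(32, 2)    # normalized pen-tip coordinates
        self.pen_down = nn.Linear(32, 1)  # logit: is the pen touching the paper?

    def forward(self, frame):
        h = self.features(frame)
        return torch.sigmoid(self.tip_xy(h)), torch.sigmoid(self.pen_down(h))

# One 64x64 RGB video frame (batch of 1).
xy, down = PenTipNet()(torch.randn(1, 3, 64, 64))
print(xy.shape, float(down))
```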
  • Patent number: 10990185
    Abstract: In example implementations described herein, there is a smart hat that is configured to receive gesture inputs through an accelerometer or through touch events on the hat, with light emitting diodes providing feedback to the wearer. The smart hat can be configured to be connected wirelessly to an external apparatus to control the apparatus by transmitting and receiving messages (e.g., commands) between the hat and the apparatus wirelessly. Further, a network of smart hats may be managed by an external device depending on the desired implementation.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: April 27, 2021
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Christine Marie Dierk, Scott Carter, Patrick Chiu, Anthony Dunnigan, Chelhwon Kim, Donald Kimber, Nami Tokunaga, Hajime Ueno
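A very small sketch of one gesture path mentioned above: detecting a tap from accelerometer magnitudes with a simple threshold. The threshold and sampling rate are hypothetical; the patent covers gestures, touch events, LED feedback, and wireless control more broadly.

```python
import numpy as np

def detect_tap(accel_magnitudes, threshold_g=2.0):
    """Report a 'tap' on the hat when the accelerometer magnitude spikes
    above threshold_g (a hypothetical threshold)."""
    return bool(np.max(accel_magnitudes) > threshold_g)

# Example: one second of readings at 50 Hz with a brief spike.
readings = np.full(50, 1.0)   # roughly 1 g at rest
readings[25] = 2.8            # spike from a tap on the brim
print(detect_tap(readings))   # True -> e.g., send a command to the paired device
```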
  • Patent number: 10977525
    Abstract: A computer-implemented method of localization for an indoor environment is provided, including receiving, in real-time, a dynamic query from a first source, and static inputs from a second source; extracting features of the static inputs by applying a metric learning convolutional neural network (CNN), and aggregating the extracted features of the static inputs to generate a feature transformation; and iteratively extracting features of the dynamic query on a deep CNN as an embedding network and fusing the feature transformation into the deep CNN, and applying a triplet loss function to optimize the embedding network and provide a localization result.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 13, 2021
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Chelhwon Kim, Chidansh Amitkumar Bhatt, Miteshkumar Patel, Donald Kimber
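A toy PyTorch sketch of the embedding side of the method above: dynamic-query features are embedded, the aggregated static-feature transformation is fused in (here by concatenation, an assumed fusion operator), and a triplet loss shapes the embedding space.

```python
import torch
import torch.nn as nn

class FusedEmbedding(nn.Module):
    """Toy embedding network: embed the dynamic query, fuse in the aggregated
    static-feature transformation, and project to the embedding space."""
    def __init__(self, query_dim=64, static_dim=16, out_dim=32):
        super().__init__()
        self.query_net = nn.Sequential(nn.Linear(query_dim, 64), nn.ReLU())
        self.head = nn.Linear(64 + static_dim, out_dim)

    def forward(self, query_features, static_transform):
        fused = torch.cat([self.query_net(query_features), static_transform], dim=1)
        return self.head(fused)

net = FusedEmbedding()
triplet = nn.TripletMarginLoss(margin=0.5)

static = torch.randn(8, 16)                  # aggregated static-input features
anchor = net(torch.randn(8, 64), static)
positive = net(torch.randn(8, 64), static)   # query from the same location
negative = net(torch.randn(8, 64), static)   # query from a different location
loss = triplet(anchor, positive, negative)
loss.backward()                              # optimize the embedding network
print(float(loss))
```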
  • Publication number: 20210097888
    Abstract: A computer implemented method is provided that includes embedding a received signal in a first modality, re-embedding the embedded received signal of the first modality into a signal of a second modality, and generating an output in the second modality, and based on the output, rendering a signal in the second modality that is configured to be sensed, wherein the embedding, re-embedding and generating applies a model that is trained by performing an adversarial learning operation associated with discriminating actual examples of the target distribution from the generated output, and performing a metric learning operation associated with generating the output having perceptual distances.
    Type: Application
    Filed: April 10, 2020
    Publication date: April 1, 2021
    Inventors: Andrew Allan PORT, Doga Buse CAVDIR, Chelhwon KIM, Miteshkumar PATEL, Donald KIMBER, Qiong LIU
  • Publication number: 20210006731
    Abstract: A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real time video conference stream containing the video of the user; detecting and separating the background in the received real time video conference stream from the user; and replacing the separated background with a background received from a system of a second user or with a pre-recorded background.
    Type: Application
    Filed: September 21, 2020
    Publication date: January 7, 2021
    Inventors: Laurent Denoue, Scott Carter, Chelhwon Kim
  • Publication number: 20210006732
    Abstract: A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real-time video conference stream containing the video of the user; detecting a first background in the received real-time video conference stream of the user; and matching the first background with a second background associated with a second user.
    Type: Application
    Filed: September 21, 2020
    Publication date: January 7, 2021
    Inventors: Laurent Denoue, Scott Carter, Chelhwon Kim
  • Patent number: 10810457
    Abstract: Systems and methods directed to utilizing a first camera system to capture first images of one or more people in proximity to a tabletop; utilizing a second camera system to capture second images of one or more documents in proximity to the tabletop; generating a query for a database derived from people recognition conducted on the first images and text extraction on the second images; determining a first ranked list of people and a second ranked list of documents based on results of the query, the results based on a calculated ranked list of two-mode networks; and providing an interface on a display to access information about one or more people from the first ranked list of people and one or more documents from the second ranked list of documents.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: October 20, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Patrick Chiu, Chelhwon Kim, Hajime Ueno, Yulius Tjahjadi, Anthony Dunnigan, Francine Chen, Jian Zhao, Bee-Yian Liew, Scott Carter
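A loose illustration of ranking over a two-mode (person-document) network as described above: one round of score propagation over a co-occurrence matrix built from the two camera systems. The matrix, names, and propagation rule are illustrative assumptions, not the patented ranking calculation.

```python
import numpy as np

# Toy two-mode (person x document) co-occurrence matrix: entry [i, j] counts
# how often person i appeared at the tabletop while document j was present.
people = ["Ada", "Grace", "Alan"]
documents = ["spec.pdf", "notes.docx", "budget.xlsx"]
cooccurrence = np.array([[3, 0, 1],
                         [1, 2, 0],
                         [0, 1, 4]], dtype=float)

def rank_for_query(query_doc_scores):
    """Given per-document relevance scores for a text query, rank people by
    their links to relevant documents, then re-rank documents by the people
    found (one round of propagation over the two-mode network)."""
    person_scores = cooccurrence @ query_doc_scores
    doc_scores = cooccurrence.T @ person_scores
    people_ranked = [people[i] for i in np.argsort(-person_scores)]
    docs_ranked = [documents[j] for j in np.argsort(-doc_scores)]
    return people_ranked, docs_ranked

# Example: the query's extracted text matches "budget.xlsx" most strongly.
print(rank_for_query(np.array([0.1, 0.0, 1.0])))
```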
  • Patent number: 10811055
    Abstract: A computer-implemented method is provided for coordinating sensed poses associated with real-time movement of a first object in a live video feed, and pre-recorded poses associated with movement of a second object in a video. The computer-implemented method comprises applying a matching function to determine a match between a point of one of the sensed poses and a corresponding point of the pre-recorded poses, and based on the match, determining a playtime and outputting at least one frame of the video associated with the second object for the playtime.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: October 20, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Donald Kimber, Laurent Denoue, Maribeth Joy Back, Patrick Chiu, Chelhwon Kim, Yanxia Zhang
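The pose-matching step above can be sketched as a nearest-neighbor search over the pre-recorded poses, with the best-matching frame index converted to a playtime. Mean keypoint distance is used as a stand-in matching function; the patented matching function is not reproduced.

```python
import numpy as np

def best_playtime(live_pose, recorded_poses, fps=30.0):
    """Match a sensed pose against pre-recorded poses and return the
    playtime (in seconds) of the closest recorded frame.

    live_pose: (K, 2) array of K keypoints; recorded_poses: (T, K, 2) array,
    one pose per pre-recorded frame.
    """
    distances = np.linalg.norm(recorded_poses - live_pose[None], axis=2).mean(axis=1)
    frame_index = int(np.argmin(distances))
    return frame_index, frame_index / fps

# Example: 90 recorded frames of 17 keypoints; the live pose matches frame 42.
recorded = np.random.rand(90, 17, 2)
live = recorded[42] + 0.01 * np.random.randn(17, 2)
print(best_playtime(live, recorded))
```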