Patents by Inventor Yun Fu

Yun Fu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11651526
    Abstract: An apparatus and corresponding method for frontal face synthesis. The apparatus comprises a decoder that synthesizes a high-resolution (HR) frontal-view (FV) image of a face from received features of a low-resolution (LR) non-frontal-view (NFV) image of the face. The HR FV image is of a higher resolution relative to the lower resolution of the LR NFV image. The decoder includes a main path and an auxiliary path. The auxiliary path produces auxiliary-path features from the received features and feeds them into the main path for synthesizing the HR FV image. The auxiliary-path features represent an HR NFV image of the face at the higher resolution. As such, an HR identity-preserved frontal face can be synthesized from one or many LR faces with various poses and may be used in various commercial applications, such as video surveillance.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: May 16, 2023
    Assignee: Northeastern University
    Inventors: Yun Fu, Yu Yin
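The two-path decoder described above can be illustrated in miniature. This is a hypothetical numpy sketch of the wiring only — nearest-neighbor upsampling stands in for learned deconvolution layers, and all function names are invented, not taken from the patent:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of an (H, W, C) feature map.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def auxiliary_path(features):
    # Produces HR non-frontal-view (NFV) features at the higher resolution.
    return upsample2x(features)

def main_path(features, aux_features):
    # Upsamples the LR NFV features and fuses the auxiliary HR NFV features
    # (here by simple addition) to synthesize the HR frontal-view output.
    return upsample2x(features) + aux_features

lr_features = np.random.rand(16, 16, 8)    # features of an LR non-frontal face
aux = auxiliary_path(lr_features)          # HR NFV representation
hr_frontal = main_path(lr_features, aux)   # fused HR FV synthesis
print(hr_frontal.shape)                    # (32, 32, 8)
```

The point of the auxiliary branch is that super-resolving the face in its original pose is an easier intermediate target, and feeding that representation into the main path anchors identity while the main path handles frontalization.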
  • Publication number: 20230126178
    Abstract: Embodiments identify joints of a multi-limb body in an image. One such embodiment unifies depth of a plurality of multi-scale feature maps generated from an image of a multi-limb body to create a plurality of feature maps each having a same depth. In turn, for each of the plurality of feature maps having the same depth, an initial indication of one or more joints in the image is generated. The one or more joints are located at an interconnection of a limb to the multi-limb body or at an interconnection of a limb to another limb. To continue, a final indication of the one or more joints in the image is generated using each generated initial indication of the one or more joints.
    Type: Application
    Filed: February 10, 2021
    Publication date: April 27, 2023
    Inventors: Yun Fu, Songyao Jiang, Bin Sun
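The depth-unification step can be sketched with a toy example: a 1x1 convolution projects each feature map to a common channel depth, an initial joint heatmap is produced per map, and the initial indications are fused into a final one. The equal spatial sizes and all names here are simplifying assumptions, not the patented method:

```python
import numpy as np

def unify_depth(fmap, W):
    # 1x1 convolution as a per-pixel matrix multiply: (H, W, C_in) -> (H, W, D).
    return fmap @ W

def joint_heatmap(fmap):
    # Initial indication of joints for one feature map: collapse channels.
    return fmap.mean(axis=-1)

# Multi-scale feature maps with differing channel depths (spatial sizes
# assumed equal here for brevity; in practice they would be resized).
maps = [np.random.rand(8, 8, 32), np.random.rand(8, 8, 64)]
target_depth = 16
unified = [unify_depth(m, np.random.rand(m.shape[-1], target_depth)) for m in maps]
initial = [joint_heatmap(u) for u in unified]   # one initial indication per map
final = np.mean(initial, axis=0)                # fuse into the final indication
print(final.shape)                              # (8, 8)
```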
  • Publication number: 20230110393
    Abstract: A method for image transformation including receiving, from an electronic device, an input image having represented therein an object having a predefined region with a selected characteristic, extracting, from the input image, an isolated image corresponding to the predefined region, inputting the isolated image into a baseline generator, trained by an offline unbalanced neural network, that generates a new image that represents a modification to the predefined region in which the selected characteristic is replaced with a baseline characteristic, and generating an output image that reflects a modification of the input image to include a representation of the new image.
    Type: Application
    Filed: December 15, 2022
    Publication date: April 13, 2023
    Applicant: Shiseido Company, Limited
    Inventors: Shuhui Jiang, Aneesh Bhat, Kai Ho Edgar Cheung, Yun Fu
  • Publication number: 20230106115
    Abstract: A system for quantifying viewer engagement with a video playing on a display includes at least one camera to acquire image data of a viewing area in front of the display. A microphone acquires audio data emitted by a speaker coupled to the display. The system also includes a memory to store processor-executable instructions and a processor. Upon execution of the processor-executable instructions, the processor receives the image data and the audio data and determines an identity of the video displayed on the display based on the audio data. The processor also estimates a first number of people present in the viewing area and a second number of people engaged with the video. The processor further quantifies the viewer engagement of the video based on the first number of people and the second number of people.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 6, 2023
    Applicant: TVision Insights, Inc.
    Inventors: Inderbir Sidhu, Yanfeng Liu, Yun Fu
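The final quantification step combines the two estimates. A minimal sketch, assuming a simple engaged-over-present fraction — the abstract does not specify the exact formula:

```python
def engagement_score(num_present, num_engaged):
    # Viewer engagement as the fraction of people in the viewing area
    # who are engaged with the identified video; 0.0 when the room is empty.
    if num_present == 0:
        return 0.0
    return num_engaged / num_present

print(engagement_score(4, 3))  # 0.75
```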
  • Publication number: 20230096349
    Abstract: A method is provided for fabricating a film layer. A cathode film layer for lithium-ion batteries is fabricated through atmospheric plasma spraying (APS) without using polymer adhesive, so the ratio of its active substance can reach 100%. Moreover, the cathode film layer fabricated by APS contains pores that, in coordination with a liquid electrolyte, provide electrolyte penetration paths and significantly increase the reaction area. Hence, the effective thickness of the film layer can be relatively large and the capacity of the battery is increased. As an example, the thickness of a lithium cobalt oxide film layer fabricated this way exceeds 100 microns, and its maximum electric capacity per unit area reaches 6 milliampere-hours per square centimeter (mAh/cm2). Thus, the performance of the follow-on solid-state lithium-ion battery is improved and its high-volume manufacturing cost is reduced.
    Type: Application
    Filed: September 27, 2021
    Publication date: March 30, 2023
    Inventors: Chun-Laing Chang, Chun-Huang Tsai, Chang-Shiang Yang, Cheng-Yun Fu, Min-Chuan Wang, Tien-Hsiang Hsueh
  • Publication number: 20220386759
    Abstract: The present disclosure provides systems and methods for virtual facial makeup simulation through virtual makeup removal and virtual makeup add-ons, virtual end effects, and simulated textures. In one aspect, the present disclosure provides a method for virtually removing facial makeup, the method comprising providing a facial image of a user with makeup applied thereto, locating facial landmarks from the facial image of the user in one or more regions, decomposing some regions into first channels that are fed to histogram matching to obtain a first image without makeup in that region, transferring other regions into color channels that are fed into histogram matching under different lighting conditions to obtain a second image without makeup in that region, and combining the images to form a resultant image with makeup removed in the facial regions.
    Type: Application
    Filed: May 29, 2022
    Publication date: December 8, 2022
    Applicant: Shiseido Company, Limited
    Inventors: Yun Fu, Shuyang Wang, Sumin Lee, Songyao Jiang, Bin Sun, Haiyi Mao, Kai Ho Edgar Cheung
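Histogram matching, the core operation in the removal steps above, can be sketched with numpy: the source region's CDF is mapped through the inverse of a reference CDF. This is a generic illustration of the technique with made-up values, not the patented pipeline:

```python
import numpy as np

def match_histogram(source, reference):
    # Remap source pixel values so their histogram matches the reference's,
    # by composing the source CDF with the inverse of the reference CDF.
    s_values, s_counts = np.unique(source, return_counts=True)
    r_values, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_values)   # inverse reference CDF lookup
    return mapped[np.searchsorted(s_values, source)].reshape(source.shape)

made_up = np.array([[200, 210], [220, 230]], dtype=float)  # bright, made-up region
bare = np.array([[100, 110], [120, 130]], dtype=float)     # reference bare-skin values
print(match_histogram(made_up, bare))
```

In the described method this runs per decomposed channel of a facial region, with the reference statistics drawn from makeup-free examples under comparable lighting.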
  • Patent number: 11509956
    Abstract: A system for quantifying viewer engagement with a video playing on a display includes at least one camera to acquire image data of a viewing area in front of the display. A microphone acquires audio data emitted by a speaker coupled to the display. The system also includes a memory to store processor-executable instructions and a processor. Upon execution of the processor-executable instructions, the processor receives the image data and the audio data and determines an identity of the video displayed on the display based on the audio data. The processor also estimates a first number of people present in the viewing area and a second number of people engaged with the video. The processor further quantifies the viewer engagement of the video based on the first number of people and the second number of people.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: November 22, 2022
    Assignee: TVision Insights, Inc.
    Inventors: Inderbir Sidhu, Yanfeng Liu, Yun Fu
  • Patent number: 11494938
    Abstract: Embodiments provide functionality for identifying joints and limbs in images. An embodiment extracts features from an image to generate feature maps and, in turn, processes the feature maps using a single convolutional neural network trained based on a target model that includes joints and limbs. The processing generates both a directionless joint confidence map indicating confidence with which pixels in the image depict one or more joints and a directionless limb confidence map indicating confidence with which the pixels in the image depict one or more limbs between adjacent joints of the one or more joints, wherein adjacency of joints is provided by the target model. To continue, indications of the one or more joints and the one or more limbs in the image are generated using the directionless joint confidence map, the directionless limb confidence map, and the target model. Embodiments can be deployed on mobile and embedded systems.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: November 8, 2022
    Assignee: Northeastern University
    Inventors: Yun Fu, Yue Wu
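Generating an indication of a joint from a directionless confidence map amounts to peak detection. A minimal hypothetical sketch — the threshold and names are assumptions, and the patented method also uses the limb map and target model to associate joints:

```python
import numpy as np

def peak_location(confidence_map, threshold=0.5):
    # Read a joint location off a directionless confidence map: take the
    # highest-confidence pixel, or report no detection below the threshold.
    idx = np.unravel_index(np.argmax(confidence_map), confidence_map.shape)
    idx = tuple(int(v) for v in idx)
    return idx if confidence_map[idx] >= threshold else None

cmap = np.zeros((8, 8))
cmap[3, 5] = 0.9            # the network is confident a joint lies at (3, 5)
print(peak_location(cmap))  # (3, 5)
```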
  • Publication number: 20220353102
    Abstract: Methods and systems for team cooperation with real-time recording of one or more moment-associating elements. For example, a method includes: delivering, in response to an instruction, an invitation to each member of one or more members associated with a workspace; granting, in response to acceptance of the invitation by one or more subscribers of the one or more members, subscription permission to the one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Application
    Filed: July 13, 2022
    Publication date: November 3, 2022
    Inventors: Simon Lau, Yun Fu, James Mason Altreuter, Brian Francis Williams, Xiaoke Huang, Tao Xing, Wen Sun, Tao Lu, Kaisuke Nakajima, Kean Kheong Chin, Hitesh Anand Gupta, Julius Cheng, Jing Pan, Sam Song Liang
  • Publication number: 20220343918
    Abstract: Computer-implemented method and system for processing and broadcasting one or more moment-associating elements. For example, the computer-implemented method includes granting subscription permission to one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Application
    Filed: July 13, 2022
    Publication date: October 27, 2022
    Inventors: Yun Fu, Tao Xing, Kaisuke Nakajima, Brian Francis Williams, James Mason Altreuter, Xiaoke Huang, Simon Lau, Sam Song Liang, Kean Kheong Chin, Wen Sun, Julius Cheng, Hitesh Anand Gupta
  • Patent number: 11431517
    Abstract: Methods and systems for team cooperation with real-time recording of one or more moment-associating elements. For example, a method includes: delivering, in response to an instruction, an invitation to each member of one or more members associated with a workspace; granting, in response to acceptance of the invitation by one or more subscribers of the one or more members, subscription permission to the one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: August 30, 2022
    Assignee: Otter.ai, Inc.
    Inventors: Simon Lau, Yun Fu, James Mason Altreuter, Brian Francis Williams, Xiaoke Huang, Tao Xing, Wen Sun, Tao Lu, Kaisuke Nakajima, Kean Kheong Chin, Hitesh Anand Gupta, Julius Cheng, Jing Pan, Sam Song Liang
  • Patent number: 11423911
    Abstract: Computer-implemented method and system for processing and broadcasting one or more moment-associating elements. For example, the computer-implemented method includes granting subscription permission to one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: August 23, 2022
    Assignee: Otter.ai, Inc.
    Inventors: Yun Fu, Tao Xing, Kaisuke Nakajima, Brian Francis Williams, James Mason Altreuter, Xiaoke Huang, Simon Lau, Sam Song Liang, Kean Kheong Chin, Wen Sun, Julius Cheng, Hitesh Anand Gupta
  • Publication number: 20220254157
    Abstract: Embodiments provide functionality for identifying joints and limbs in frames of video that use indications of joints and limbs from a previous frame. One such embodiment processes a current frame of video to determine initial predictions of joint and limb locations in the current frame. In turn, indications of the joint and limb locations in the current frame are generated by refining the initial predictions of the joint and limb locations based on indications of respective joint and limb locations from a previous frame. Embodiments provide results that are insensitive to occlusions and results that have less shaking and vibration.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 11, 2022
    Inventors: Yun Fu, Songyao Jiang
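The refinement step can be illustrated as blending the current frame's initial prediction with the previous frame's indication; a fixed exponential blend like this damps frame-to-frame shaking, though the patented refinement is a learned operation, not this simple average:

```python
import numpy as np

def refine_with_previous(current_pred, previous_pred, alpha=0.8):
    # Refine the current frame's joint/limb heatmap using the previous
    # frame's indication; blending suppresses shaking and vibration.
    return alpha * current_pred + (1 - alpha) * previous_pred

prev = np.zeros((4, 4)); prev[1, 1] = 1.0
curr = np.zeros((4, 4)); curr[1, 2] = 1.0   # noisy one-pixel jump
refined = refine_with_previous(curr, prev)
print(round(float(refined[1, 1]), 2), round(float(refined[1, 2]), 2))  # 0.2 0.8
```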
  • Patent number: 11404385
    Abstract: In a described example, an electrical apparatus includes: a metal layer formed over a non-device side of a semiconductor device die, the semiconductor device die having devices formed on a device side of the semiconductor device die opposite the non-device side; a first side of the metal layer bonded to a die mount pad on a package substrate; a second side of the metal layer formed over a roughened surface on the non-device side of the semiconductor device die, the roughened surface having an average surface roughness (Ra) between 40 nm and 500 nm; bond pads on the semiconductor device die electrically coupled to conductive leads on the package substrate; and mold compound covering at least a portion of the semiconductor device die and at least a portion of the conductive leads.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: August 2, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Qin Xu Yu, Jian Jun Kong, She Yu Tang, Yun Fu An
  • Patent number: 11344102
    Abstract: The present disclosure provides systems and methods for virtual facial makeup simulation through virtual makeup removal and virtual makeup add-ons, virtual end effects, and simulated textures. In one aspect, the present disclosure provides a method for virtually removing facial makeup, the method comprising providing a facial image of a user with makeup applied thereto, locating facial landmarks from the facial image of the user in one or more regions, decomposing some regions into first channels that are fed to histogram matching to obtain a first image without makeup in that region, transferring other regions into color channels that are fed into histogram matching under different lighting conditions to obtain a second image without makeup in that region, and combining the images to form a resultant image with makeup removed in the facial regions.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: May 31, 2022
    Assignee: Shiseido Company, Limited
    Inventors: Yun Fu, Haiyi Mao
  • Publication number: 20220159341
    Abstract: A system for quantifying viewer engagement with a video playing on a display includes at least one camera to acquire image data of a viewing area in front of the display. A microphone acquires audio data emitted by a speaker coupled to the display. The system also includes a memory to store processor-executable instructions and a processor. Upon execution of the processor-executable instructions, the processor receives the image data and the audio data and determines an identity of the video displayed on the display based on the audio data. The processor also estimates a first number of people present in the viewing area and a second number of people engaged with the video. The processor further quantifies the viewer engagement of the video based on the first number of people and the second number of people.
    Type: Application
    Filed: February 7, 2022
    Publication date: May 19, 2022
    Applicant: TVision Insights, Inc.
    Inventors: Inderbir Sidhu, Yanfeng Liu, Yun Fu
  • Publication number: 20220156554
    Abstract: A neural network (NN) and corresponding method employ an NN element (NNE) that includes a depthwise convolutional layer (DCL). The DCL outputs respective features by performing spatial convolution of respective input features having an original number of dimensions. The NNE includes a compression-expansion (CE) module that includes a first convolutional layer (CL) and second CL. The first CL outputs respective features as a function of respective input features. The respective features output from the first CL have a reduced number of dimensions relative to the original number of dimensions. The second CL outputs respective features, having the original number of dimensions, as a function of the respective features output from the first CL. The NNE further includes an add operator that outputs respective features as a function of the respective features output from the second CL and DCL.
    Type: Application
    Filed: June 3, 2020
    Publication date: May 19, 2022
    Inventors: Yun Fu, Bin Sun
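The NNE's structure — a depthwise convolutional branch in parallel with a compress-then-expand pair of 1x1 convolutions, joined by an add operator — can be sketched with numpy. Routing the CE module off the DCL output, and all shapes and weights, are assumptions made for illustration:

```python
import numpy as np

def depthwise_conv3x3(x, kernels):
    # Spatial 3x3 convolution applied independently per channel ('same' padding).
    H, W, C = x.shape
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i+3, j:j+3, :]              # (3, 3, C)
            out[i, j] = np.einsum('ijc,ijc->c', patch, kernels)
    return out

def pointwise(x, W):
    # 1x1 convolution: per-pixel matrix multiply across channels.
    return x @ W

C, reduced = 8, 2
x = np.random.rand(6, 6, C)
dcl = depthwise_conv3x3(x, np.random.rand(3, 3, C))           # DCL branch
compressed = pointwise(dcl, np.random.rand(C, reduced))       # first CL: fewer dims
expanded = pointwise(compressed, np.random.rand(reduced, C))  # second CL: restore dims
out = dcl + expanded                                          # add operator fuses both
print(out.shape)  # (6, 6, 8)
```

The compression to a reduced dimensionality is what makes the CE module cheap: the two 1x1 layers cost far fewer multiplies than a full-depth convolution would.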
  • Patent number: 11210549
    Abstract: A method includes: storing (i) a reference image of a chute for receiving objects, and (ii) a region of interest mask corresponding to a location of the chute in a field of view of an image sensor; at a processor, controlling the image sensor to capture an image of the chute; applying an illumination adjustment to the image; selecting, at the processor, a portion of the image according to the region of interest mask; generating a detection image based on a comparison of the selected portion and the reference image; determining, based on the detection image, a fullness indicator for the chute; and providing the fullness indicator to a notification system.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: December 28, 2021
    Assignee: Zebra Technologies Corporation
    Inventors: Bin Sun, Yan Zhang, Kevin J. O'Connell, Yun Fu
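The claimed pipeline — illumination adjustment, region-of-interest masking, comparison against the reference image, and a fullness indicator — can be sketched as follows. The simple gain adjustment, the fixed threshold, and defining fullness as the changed fraction of the ROI are all hypothetical stand-ins:

```python
import numpy as np

def fullness_indicator(image, reference, roi_mask, gain=1.0, threshold=30):
    # Illumination adjustment (a bare gain here), then compare the masked
    # region to the reference; fullness = fraction of ROI pixels that changed.
    adjusted = image * gain
    diff = np.abs(adjusted - reference)       # the detection image
    changed = (diff > threshold) & roi_mask   # pixels occupied by objects
    return changed.sum() / roi_mask.sum()

ref = np.zeros((4, 4))                        # empty chute reference
img = np.zeros((4, 4)); img[:2, :] = 255.0    # top half of the chute filled
mask = np.ones((4, 4), dtype=bool)
print(fullness_indicator(img, ref, mask))     # 0.5
```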
  • Publication number: 20210327454
    Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
    Type: Application
    Filed: March 23, 2021
    Publication date: October 21, 2021
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
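Text synchronized with the audio-form conversation implies a word-to-timestamp alignment; navigating the transformed conversation then reduces to a lookup like this hypothetical sketch (the transcript structure is invented for illustration):

```python
def word_at_time(transcript, t):
    # Find the transcript word synchronized with audio position t (seconds),
    # enabling click-to-seek style navigation of the presented conversation.
    for word, start, end in transcript:
        if start <= t < end:
            return word
    return None

transcript = [("hello", 0.0, 0.4), ("world", 0.4, 0.9)]
print(word_at_time(transcript, 0.5))  # world
```

The same alignment supports the reverse direction — clicking a word seeks the audio to that word's start time — which is what makes the presentation searchable and navigable.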
  • Publication number: 20210319797
    Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
    Type: Application
    Filed: April 28, 2021
    Publication date: October 14, 2021
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing