Patents by Inventor Hong-Ming Tseng

Hong-Ming Tseng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12100028
    Abstract: Disclosed embodiments describe text-driven AI-assisted short-form video creation. Text from a website is extracted to generate extracted text. Possible summary sentences are formed from the extracted text. The forming is based on natural language processing. The summary sentences are ranked according to an engagement metric. Summary sentences from the possible summary sentences are picked based on a threshold engagement metric value. A list of video scenes is generated based on the summary sentences. Each video scene is associated with a summary sentence. A media asset from a media asset library is chosen for each video scene within the list of video scenes. The choosing is accomplished by machine learning. The list of video scenes, including the media asset that was chosen for each video scene, is compiled into a short-form video. Media extracted from the website is included in the short-form video. The compiling includes a dynamically generated image.
    Type: Grant
    Filed: December 5, 2023
    Date of Patent: September 24, 2024
    Assignee: Loop Now Technologies, Inc.
    Inventors: Wu-Hsi Li, Jeremiah Kevin Tu, Hong-Ming Tseng, Xiaochen Zhang
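The ranking-and-thresholding flow described in the abstract above can be sketched in a few lines. This is an illustrative toy only: the scoring function, record layout, and threshold value are assumptions, not the patented implementation, whose engagement metric comes from natural language processing models.

```python
# Hypothetical sketch: rank candidate summary sentences by an engagement
# score, keep those meeting a threshold, and pair each with a scene.

def pick_summary_sentences(candidates, score_fn, threshold):
    """Return candidates whose score meets the threshold, highest first."""
    scored = sorted(((score_fn(s), s) for s in candidates), reverse=True)
    return [s for score, s in scored if score >= threshold]

def build_scene_list(sentences):
    """Associate each picked sentence with a video scene record."""
    return [{"scene": i, "summary": s} for i, s in enumerate(sentences, 1)]

# Toy engagement metric: longer sentences score higher (illustrative only).
candidates = ["Short one.", "A much longer and more detailed sentence here."]
picked = pick_summary_sentences(candidates, lambda s: len(s), threshold=20)
scenes = build_scene_list(picked)
```

Each scene record would then drive the machine-learning media-asset selection step before compilation.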
  • Publication number: 20240290024
    Abstract: Disclosed embodiments provide techniques for dynamic synthetic video chat agent replacement. A human host receives a request for a video chat initiated by a user. An image for a synthetic host including a representation of an individual is retrieved. The image of the synthetic host is selected based on information about the user. Aspects of the individual included in the image are extracted using one or more processors. A video performance by the human host responding to a statement or query by the user is captured. A synthetic host performance is created in which the video performance of the human host is dynamically replaced by the individual that was extracted, so that the synthetic host performance responds to the user statement or query. The synthetic host performance is rendered to the user and supplemented with additional synthetic host performances as the video chat continues.
    Type: Application
    Filed: February 23, 2024
    Publication date: August 29, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Wu-Hsi Li, Edwin Chiu, Jerry Ting Kwan Luk, Hong-Ming Tseng, Jeremiah Kevin Tu
  • Publication number: 20240292070
    Abstract: Disclosed embodiments provide techniques for iterative AI prompt optimization for video generation. A first text template to be read by a large language model (LLM) neural network is accessed. The template includes control parameters that are populated from within a website. The populated template is submitted as a request to the LLM neural network, which generates a first video script. The first video script is used to create a first short-form video. The first short-form video is evaluated based on one or more performance metrics. The text template, short-form video, website information, and evaluation are used to train a machine learning model that is used to create a second text template. The second text template can be used to generate a second short-form video. The evaluation of iterative text templates and resulting short-form videos continues until a usable video is produced.
    Type: Application
    Filed: April 10, 2024
    Publication date: August 29, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Wu-Hsi Li, Edwin Chiu, Hong-Ming Tseng, Xiaochen Zhang
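The iterative loop in the abstract above (populate template, generate, evaluate, refine, repeat until usable) can be sketched as follows. The generator, evaluator, and refiner here are toy stand-ins for the LLM, video pipeline, and trained refinement model; all names and the target score are assumptions.

```python
# Illustrative-only sketch of iterative prompt optimization:
# template -> video -> evaluation -> refined template, until the
# performance metric clears a bar or an iteration cap is hit.

def optimize_template(template, generate, evaluate, refine,
                      target=0.9, max_iters=10):
    """Iterate until a usable video is produced; return it with its score."""
    for _ in range(max_iters):
        video = generate(template)
        score = evaluate(video)
        if score >= target:
            return video, score
        template = refine(template, video, score)
    return video, score

# Toy stand-ins: each refinement nudges quality upward by 0.25.
state = {"quality": 0.5}
generate = lambda t: {"script": t, "quality": state["quality"]}
evaluate = lambda v: v["quality"]
def refine(template, video, score):
    state["quality"] += 0.25
    return template + "+"

video, score = optimize_template("template-v1", generate, evaluate, refine)
```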
  • Publication number: 20240292069
    Abstract: Disclosed embodiments provide techniques for synthesized realistic metahuman short-form videos. A photorealistic representation of an individual is accessed from media sources including videos, photographs, livestreams, or a 360-degree recording of a human host. The individual can be selected based on information on the viewer of the short-form video, such as purchase history, viewing history, and metadata. The photorealistic representation is isolated using machine learning and is used to create a three-dimensional (3D) model of the individual, based on a game engine. A realistic synthetic performance is created by combining the 3D model of the individual with animation generated by a game engine. The synthesized performance can include the voice of the human host. The synthesized performance is inserted into a metaverse environment and rendered to a viewer as a short-form video, including an ecommerce window with an on-screen product card and virtual purchase cart.
    Type: Application
    Filed: February 23, 2024
    Publication date: August 29, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Hong-Ming Tseng, Edwin Chiu, Wu-Hsi Li, Jeremiah Kevin Tu
  • Publication number: 20240267573
    Abstract: Disclosed embodiments provide techniques for livestream with synthetic scene insertion. A prerecorded livestream featuring a host is rendered to one or more viewers. An operator accesses a video segment related to the prerecorded livestream. The related video segment includes a performance by an individual. The operator retrieves an image of the host of the prerecorded livestream and creates, from the related video segment, a synthesized video segment in which the performance of the individual is accomplished by the host. One or more insertion points are determined within the prerecorded livestream for the insertion of the synthesized video segment. The operator inserts the synthesized video segment into the prerecorded livestream at the one or more determined insertion points. The insertion is accomplished dynamically and appears seamless to the viewer. The remainder of the prerecorded livestream is rendered to the one or more viewers after the insertion point.
    Type: Application
    Filed: February 2, 2024
    Publication date: August 8, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Hong-Ming Tseng, Edwin Chiu, Wu-Hsi Li, Jerry Ting Kwan Luk, Jeremiah Kevin Tu
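The splice described in the abstract above — inserting a synthesized segment at determined points, with the remainder rendered afterward — can be modeled on labeled chunks. Real systems operate on video frames and timestamps; the list-of-chunks model below is purely an assumption for illustration.

```python
# Hypothetical sketch: splice a synthesized segment into a prerecorded
# stream at each determined insertion index.

def insert_segments(stream, synthesized, insertion_points):
    """Insert the synthesized segment at each insertion index, then
    append the remainder of the stream."""
    out, prev = [], 0
    for point in sorted(insertion_points):
        out.extend(stream[prev:point])
        out.extend(synthesized)
        prev = point
    out.extend(stream[prev:])  # remainder rendered after the insertion
    return out

stream = ["intro", "demo", "qna", "outro"]
spliced = insert_segments(stream, ["synthetic"], [2])
# spliced == ["intro", "demo", "synthetic", "qna", "outro"]
```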
  • Publication number: 20240233775
    Abstract: Disclosed embodiments provide techniques for augmented performance replacement in a short-form video. A short-form video is accessed, including a performance by a first individual. Using one or more processors, the performance of the first individual is isolated. Specific elements of the performance including gestures, clothing, expressions, and accessories are included in the isolation process. An image of a second individual is retrieved and information on the second individual is extracted from the image. A second short-form video is created by replacing the performance of the first individual with the second individual. The second short-form video is augmented based on viewer interaction. The augmenting of the second short-form video occurs dynamically. The augmenting includes additional audio content based on comments, responses to live polls or surveys, or questions and answers from viewers. The augmenting includes switching audio content in the second short-form video with additional audio content.
    Type: Application
    Filed: January 9, 2024
    Publication date: July 11, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Wu-Hsi Li, Edwin Chiu, Jerry Ting Kwan Luk, Hong-Ming Tseng, Jeremiah Kevin Tu, Ziming Zhuang
  • Publication number: 20240185306
    Abstract: Disclosed embodiments describe text-driven AI-assisted short-form video creation. Text from a website is extracted to generate extracted text. Possible summary sentences are formed from the extracted text. The forming is based on natural language processing. The summary sentences are ranked according to an engagement metric. Summary sentences from the possible summary sentences are picked based on a threshold engagement metric value. A list of video scenes is generated based on the summary sentences. Each video scene is associated with a summary sentence. A media asset from a media asset library is chosen for each video scene within the list of video scenes. The choosing is accomplished by machine learning. The list of video scenes, including the media asset that was chosen for each video scene, is compiled into a short-form video. Media extracted from the website is included in the short-form video. The compiling includes a dynamically generated image.
    Type: Application
    Filed: December 5, 2023
    Publication date: June 6, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Wu-Hsi Li, Jeremiah Kevin Tu, Hong-Ming Tseng, Xiaochen Zhang
  • Publication number: 20240119509
    Abstract: Techniques for object highlighting in an ecommerce short-form video are disclosed. The object highlighting can be associated with a product within the video, defining a highlighted product. The object highlighting can be performed automatically utilizing computer-implemented techniques. A short-form video from a library of short-form videos is accessed. A plurality of objects from a catalog of products featured in the short-form video is recognized. At least one of the plurality of objects displayed by a host is identified. A first object from the plurality of objects is selected. The first object is highlighted, which causes it to be surrounded by a boundary overlay in the short-form video. A representation of the first object is dynamically inserted into an on-screen product card. An ecommerce purchase of the first object is enabled within the short-form video.
    Type: Application
    Filed: October 4, 2023
    Publication date: April 11, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Edwin Chiu, Shi Feng, Michael A. Shoss, Hong-Ming Tseng, Ziming Zhuang
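The highlighting flow in the abstract above (recognize catalog products, select one, surround it with a boundary overlay, attach an on-screen product card) can be sketched as below. The record shapes, field names, and selection callback are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch: pick one detected object that appears in the
# product catalog, then build its boundary overlay and product card.

def highlight_product(detections, catalog, pick):
    """Select a catalog product among detections; return overlay + card."""
    in_catalog = [d for d in detections if d["label"] in catalog]
    chosen = pick(in_catalog)
    overlay = {"type": "boundary", "box": chosen["box"]}
    card = {"product": chosen["label"], "price": catalog[chosen["label"]]}
    return overlay, card

detections = [{"label": "mug", "box": (10, 10, 50, 50)},
              {"label": "lamp", "box": (60, 20, 90, 80)}]
catalog = {"mug": 12.99}  # only the mug is a known product
overlay, card = highlight_product(detections, catalog, pick=lambda ds: ds[0])
```

The resulting card record would back the in-video ecommerce purchase step.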
  • Publication number: 20240020336
    Abstract: Disclosed embodiments provide techniques for search using generative model synthesized images. An image, or set of images, is generated based on a text query, and the generated image or images are used as input to an image-based search query. In some instances, this technique provides more comprehensive and effective search results. In cases where a user wishes to search for something for which no known image currently exists, disclosed embodiments can provide more effective results than a text-based query. Disclosed embodiments generate an input image based on the text query using generative model synthesized image techniques. The input image is used to perform an image-based search, and sorted results are returned. Disclosed embodiments are particularly useful when searching for visual art in the form of images and/or videos.
    Type: Application
    Filed: July 11, 2023
    Publication date: January 18, 2024
    Applicant: Loop Now Technologies, Inc.
    Inventors: Wu-Hsi Li, Hong-Ming Tseng
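The query flow in the abstract above — synthesize an image from the text query, then use it as input to an image-based search — can be sketched with embedding similarity. The synthesis and embedding functions below are toy stand-ins for generative and vision models; every name here is an assumption.

```python
# Hypothetical sketch: text query -> synthesized image -> embedding ->
# library items ranked by descending similarity (dot product).

def image_search(text_query, synthesize, embed, library):
    """Generate an image from text, then sort library items by
    similarity of their embeddings to the generated image's."""
    query_vec = embed(synthesize(text_query))
    def similarity(item):
        vec = embed(item)
        return sum(a * b for a, b in zip(query_vec, vec))
    return sorted(library, key=similarity, reverse=True)

# Toy stand-ins: an "image" is already its embedding vector.
synthesize = lambda text: [float(len(text)), 1.0]
embed = lambda image: image
library = [[2.0, 1.0], [9.0, 1.0], [5.0, 0.0]]
results = image_search("red shoes", synthesize, embed, library)
```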