Patents Examined by Leon-Viet Q. Nguyen
  • Patent number: 11961299
    Abstract: A method and an apparatus for generating a video fingerprint are disclosed. The method includes: performing shot boundary detection on the content of a video; determining a time duration of each shot according to the positional points of the shot boundaries; composing the time durations of the shots into a shot-boundary time-slice sequence; and obtaining video fingerprint information from the time-slice sequence.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: April 16, 2024
    Assignee: Alibaba Group Holding Limited
    Inventor: Changguo Chen
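The duration-sequence fingerprint described in the abstract can be sketched in a few lines; this assumes shot-boundary timestamps are already detected, and the SHA-256 hash is an illustrative encoding choice, not the patent's specified one:

```python
import hashlib

def shot_durations(boundaries):
    """Turn shot-boundary timestamps (seconds) into a sequence of shot durations."""
    return [round(b - a, 3) for a, b in zip(boundaries, boundaries[1:])]

def video_fingerprint(boundaries):
    """Derive fingerprint bytes from the duration sequence (hashing is an
    illustrative stand-in for the patent's fingerprint encoding)."""
    slices = shot_durations(boundaries)
    return hashlib.sha256(",".join(f"{d:.3f}" for d in slices).encode()).hexdigest()

# Boundaries at 0 s, 2.5 s, 7.0 s, 8.0 s give durations [2.5, 4.5, 1.0].
fp = video_fingerprint([0.0, 2.5, 7.0, 8.0])
```

Because the fingerprint depends only on shot durations, it is insensitive to re-encoding that preserves cut positions.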
  • Patent number: 11948664
    Abstract: Amino acid sequences of proteins can be produced using an autoencoder. For example, amino acid sequences of variant proteins can be produced by an autoencoder that is fed an amino acid sequence of a base protein as input. A decoding component of the autoencoder can include at least one or more components of a generative adversarial network.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: April 2, 2024
    Assignee: Just-Evotec Biologics, Inc.
    Inventors: Jeremy Martin Shaver, Tileli Amimeur, Randal Robert Ketchem
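The encode-perturb-decode idea in the abstract can be illustrated with a toy NumPy sketch; the random linear maps below stand in for a trained encoder and decoder (or GAN generator component), and the alphabet and latent size are arbitrary assumptions:

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids

def one_hot(seq):
    """One-hot encode an amino acid sequence as a (length, 20) matrix."""
    x = np.zeros((len(seq), len(AMINO)))
    for i, a in enumerate(seq):
        x[i, AMINO.index(a)] = 1.0
    return x

rng = np.random.default_rng(0)
LATENT = 8
# Untrained random weights stand in for the learned encoder/decoder.
W_enc = rng.normal(size=(len(AMINO), LATENT))
W_dec = rng.normal(size=(LATENT, len(AMINO)))

def variant(seq, noise=0.5):
    """Feed a base protein through the autoencoder with latent noise to get a variant."""
    z = one_hot(seq) @ W_enc                 # encode the base protein
    z = z + noise * rng.normal(size=z.shape)  # perturb the latent code
    logits = z @ W_dec                        # decode (a GAN generator could sit here)
    return "".join(AMINO[i] for i in logits.argmax(axis=1))

v = variant("ACDEFGH")
```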
  • Patent number: 11935281
    Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: March 19, 2024
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
  • Patent number: 11922651
    Abstract: A device may receive a first image. The device may process the first image to identify an object in the first image and a location of the object within the first image. The device may extract a second image from the first image based on the location of the object within the first image. The device may process the second image to determine at least one of a coarse-grained viewpoint estimate or a fine-grained viewpoint estimate associated with the object. The device may determine an object viewpoint associated with the object based on the at least one of the coarse-grained viewpoint estimate or the fine-grained viewpoint estimate. The device may perform one or more actions based on the object viewpoint.
    Type: Grant
    Filed: December 2, 2022
    Date of Patent: March 5, 2024
    Assignee: Verizon Connect Development Limited
    Inventors: Simone Magistri, Francesco Sambo, Douglas Coimbra De Andrade, Fabio Schoen, Matteo Simoncini, Luca Bravi, Stefano Caprasecca, Luca Kubin, Leonardo Taccari
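A common way to realize the coarse/fine split described above is to predict a discrete azimuth bin (coarse) plus a continuous within-bin offset (fine); the bin count and degree units below are illustrative assumptions, not taken from the patent:

```python
def coarse_viewpoint(bin_index, num_bins=8):
    """Coarse estimate: the center angle of the predicted azimuth bin, in degrees."""
    return (bin_index + 0.5) * 360.0 / num_bins

def object_viewpoint(bin_index, fine_offset_deg=0.0, num_bins=8):
    """Combine a coarse bin with a fine within-bin offset into one azimuth angle."""
    return (coarse_viewpoint(bin_index, num_bins) + fine_offset_deg) % 360.0
```

With 8 bins, bin 2 centers at 112.5 degrees, and a fine offset of -7.5 degrees refines it to 105.0.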
  • Patent number: 11922728
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras that include the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: March 5, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
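The server-side aggregation step can be sketched as confidence-weighted voting over the actor candidates returned by the cameras; this is an illustrative aggregation only, since the patent also weighs how favorably each image depicts the event:

```python
from collections import defaultdict

def associate_actor(camera_votes):
    """camera_votes: (actor_id, confidence) pairs gathered from the per-pixel
    vectors of all cameras that viewed the event location. Returns the actor
    with the highest total confidence."""
    totals = defaultdict(float)
    for actor, conf in camera_votes:
        totals[actor] += conf
    return max(totals, key=totals.get)
```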
  • Patent number: 11908241
    Abstract: The present invention relates to automation and computing technology, namely the field of processing image and video data, and specifically to correcting the eye images of interlocutors during video chats and video conferences for the purpose of gaze redirection. A method of correcting an eye image obtains at least one frame containing a person's face, determines the positions of the person's eyes in the image, forms two rectangular areas closely circumscribing the eyes, and finally replaces the color components of each pixel in the eye areas with the color components of a pixel shifted according to the prediction of a machine-learning predictor. The technical effect of the present invention is increased accuracy of eye-image correction for gaze redirection, with a decrease in the resources required to process a video image.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: February 20, 2024
    Inventors: Daniil Sergeyevich Kononenko, Victor Sergeyevich Lempitsky
  • Patent number: 11908118
    Abstract: The present disclosure provides a visual model for image analysis of material characterization and an analysis method thereof. By collecting and labeling big data of microscopic images, the present disclosure establishes an image data set for material characterization, and uses this data set for high-throughput deep learning to establish a neural network model and a dynamic statistical model that identify and locate atomic or lattice defects, automatically mark the lattice spacing, obtain classification and statistics of the true shapes of the material's microscopic particles, and quantitatively analyze the microstructure dynamics of the material.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: February 20, 2024
    Assignee: CITIC Dicastal Co., Ltd.
    Inventors: Zuo Xu, Yuancheng Cao, Wuxin Sha, Zhihua Zhu, Hanqi Wu, Fanpeng Cheng
  • Patent number: 11900566
    Abstract: An image capture device includes an image sensor and a processor. The image sensor is configured to capture a first plurality of frames, a second plurality of frames, and a third plurality of frames. The processor includes a first denoising layer and a second denoising layer. The first denoising layer includes a first denoiser, a second denoiser, and a third denoiser. The first denoiser is configured to denoise the first plurality of frames and output a first denoised frame. The second denoiser is configured to denoise the second plurality of frames and output a second denoised frame. The third denoiser is configured to denoise the third plurality of frames and output a third denoised frame. The second denoising layer includes a fourth denoiser. The fourth denoiser is configured to output a denoised frame based on the first denoised frame, the second denoised frame, and the third denoised frame.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: February 13, 2024
    Assignee: GoPro, Inc.
    Inventors: Matias Tassano Ferrés, Thomas Nicolas Emmanuel, Julie Delon
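The two-layer denoising structure in the abstract can be sketched with temporal averaging standing in for each learned denoiser; the averaging itself is an illustrative assumption, not the patent's denoiser:

```python
import numpy as np

def denoise_burst(frames):
    """First-layer denoiser: temporal average of one burst of frames
    (a stand-in for a learned denoiser)."""
    return np.mean(frames, axis=0)

def two_layer_denoise(burst1, burst2, burst3):
    """Three first-layer denoisers each reduce a burst to one frame; the
    second-layer (fourth) denoiser fuses those three outputs into one frame."""
    d1, d2, d3 = (denoise_burst(b) for b in (burst1, burst2, burst3))
    return denoise_burst([d1, d2, d3])
```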
  • Patent number: 11887343
    Abstract: Embodiments of the present disclosure provide a method and an apparatus for generating a simulation scene. The method includes: acquiring scene parameters of a benchmark scene, where the dimensionality of the scene parameters of the benchmark scene is M; inputting the scene parameters of the benchmark scene into a trained encoder and acquiring encoding parameters according to the output of the encoder, where the dimensionality of the encoding parameters is N, and N<M; adjusting the encoding parameters to obtain adjusted encoding parameters, and inputting each set of adjusted encoding parameters into a trained decoder; and generating a simulation scene according to the reconstructed scene parameters output by the decoder. In this way, the method can generate massive numbers of simulation scenes similar to the benchmark scene, meeting the diversity requirements for simulation scenes.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: January 30, 2024
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Junfei Zhang, Chen Yang, Qingrui Sun, Dun Luo, Jiming Mao, Fangfang Dong
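The encode, perturb, decode loop above can be sketched with linear maps; the random projections below are untrained stand-ins for the patent's learned encoder and decoder, and the dimensions and jitter are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 12, 4  # N < M, as in the abstract
# Untrained random projections stand in for the learned encoder/decoder.
W_enc = rng.normal(size=(M, N))
W_dec = rng.normal(size=(N, M))

def generate_scenes(benchmark_params, jitter=0.1, count=5):
    """Encode M-dim benchmark scene parameters to N dims, jitter the encoding
    parameters, and decode each variant back to M-dim scene parameters."""
    z = benchmark_params @ W_enc
    variants = z + jitter * rng.normal(size=(count, N))
    return variants @ W_dec

scenes = generate_scenes(np.zeros(M))
```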
  • Patent number: 11869184
    Abstract: The present invention relates to a method of assisting in the diagnosis of a target heart disease using a retinal image, the method including: obtaining a target retinal image by imaging a retina of a testee; obtaining heart disease diagnosis assistance information for the testee from the target retinal image, via a heart disease diagnosis assistance neural network model that derives, from the retinal image, diagnosis assistance information used for diagnosing the target heart disease; and outputting the heart disease diagnosis assistance information of the testee.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: January 9, 2024
    Assignee: Medi Whale Inc.
    Inventors: Tae Geun Choi, Geun Yeong Lee, Hyung Taek Rim
  • Patent number: 11861854
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
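The multiply-and-sum combination of per-scale features and attention described above can be sketched directly; the array shapes are assumptions for illustration:

```python
import numpy as np

def dense_features(feature_maps, attention):
    """feature_maps: (S, H, W, C) per-scale outputs of the feature net;
    attention: (S, H, W) soft per-scale weights from the attention net.
    Multiply each scale's features by its attention weights, then sum over
    scales to get dense (H, W, C) features for pixel matching."""
    return (feature_maps * attention[..., None]).sum(axis=0)
```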
  • Patent number: 11850728
    Abstract: Provided are a reception apparatus, a reception system, a reception method, and a storage medium that can naturally provide a personal conversation tailored to a user without requiring the user to register personal information in advance. The disclosure includes: a face information acquisition unit that acquires face information of a user; a face matching unit that matches the face information of the one user against face information registered in a user information database, in which user information including the user's face information and reception information is registered; and a user information management unit that, when the result of the matching performed by the face matching unit is unmatched, registers the user information of the one user in the user information database.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: December 26, 2023
    Assignee: NEC CORPORATION
    Inventors: Nobuaki Kawase, Makoto Igarashi
  • Patent number: 11856239
    Abstract: A method and apparatus for sample adaptive offset without sign coding. The method includes: selecting an edge offset type for at least a portion of an image; classifying at least one pixel of at least the portion of the image into an edge shape category; calculating an offset for the pixel; determining whether the offset is larger or smaller than a predetermined threshold; changing a sign of the offset based on the threshold determination; and performing entropy coding that accounts for the sign and the value of the offset.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: December 26, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Woo-Shik Kim, Do-Kyoung Kwon
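The edge-shape classification step can be sketched with the HEVC-style edge offset categories, which compare a pixel against its two neighbors along the chosen edge direction; using these particular five categories is an assumption based on standard SAO practice, not a detail stated in the abstract:

```python
def edge_category(left, center, right):
    """Classify a pixel into an edge-shape category from its two neighbors
    along the selected edge-offset direction (HEVC-style categories)."""
    if center < left and center < right:
        return 1  # local valley
    if (center < left and center == right) or (center == left and center < right):
        return 2  # concave corner
    if (center > left and center == right) or (center == left and center > right):
        return 3  # convex corner
    if center > left and center > right:
        return 4  # local peak
    return 0  # none: no offset applied
```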
  • Patent number: 11850113
    Abstract: A system includes one or more processors coupled to a non-transitory memory, where the one or more processors are configured to generate, using a training set comprising one or more two-dimensional (2D) training images of a dental arch and a three-dimensional (3D) dental arch model, a machine-learning model configured to generate 3D models of dental arches from 2D images of the dental arches, receive one or more 2D images of a dental arch of a user obtained by a user device of the user, and execute the machine-learning model using the one or more 2D images as input to generate a 3D model of the dental arch of the user.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: December 26, 2023
    Assignee: SDC U.S. SmilePay SPV
    Inventors: Jordan Katzman, Christopher Yancey, Tim Wucher
  • Patent number: 11849110
    Abstract: The present invention relates to an apparatus and method for encoding and decoding an image by skip encoding. The image-encoding method by skip encoding, which performs intra-prediction, comprises: performing a filtering operation on the signal which is reconstructed prior to an encoding object signal in an encoding object image; using the filtered reconstructed signal to generate a prediction signal for the encoding object signal; setting the generated prediction signal as a reconstruction signal for the encoding object signal; and not encoding the residual signal which can be generated on the basis of the difference between the encoding object signal and the prediction signal, thereby performing skip encoding on the encoding object signal.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: December 19, 2023
    Assignees: Electronics and Telecommunications Research Institute, Kwangwoon University Industry-Academic Collaboration Foundation, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Sung Chang Lim, Ha Hyun Lee, Se Yoon Jeong, Hui Yong Kim, Suk Hee Cho, Jong Ho Kim, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn, Dong Gyu Sim, Seoung Jun Oh, Gwang Hoon Park, Sea Nae Park, Chan Woong Jeon
  • Patent number: 11847661
    Abstract: Systems and methods for authenticating material samples are provided. Digital images of the samples are processed to extract computer-vision features, which are used to train a classification algorithm along with location and optional time information. The extracted features/information of a test sample are evaluated by the trained classification algorithm to identify the test sample. The results of the evaluation are used to track and locate counterfeits or authentic products.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: December 19, 2023
    Assignee: 3M Innovative Properties Company
    Inventors: Nicholas A. Asendorf, Jennifer F. Schumacher, Robert D. Lorentz, James B. Snyder, Golshan Golnari, Muhammad Jamal Afridi
  • Patent number: 11848091
    Abstract: A motion estimation system 80 includes a pose acquisition unit 81 and an action estimation unit 82. The pose acquisition unit 81 acquires, in time series, pose information representing a posture of one person and a posture of another person identified simultaneously in a situation in which a motion of the one person affects a motion of the other person. The action estimation unit 82 divides the acquired time series pose information on each person by unsupervised learning to estimate an action series that is a series of motions including two or more pieces of pose information.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: December 19, 2023
    Assignee: NEC CORPORATION
    Inventors: Yutaka Uno, Masahiro Kubo, Yuji Ohno, Masahiro Hayashitani, Yuan Luo, Eiji Yumoto
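The unsupervised division of the pose time series into an action series can be sketched with a simple change-point rule; thresholded pose jumps are an illustrative stand-in for the patent's unsupervised learning:

```python
import numpy as np

def action_series(poses, threshold=1.0):
    """Split a time series of pose vectors into motion segments: start a new
    segment whenever the jump between consecutive poses exceeds `threshold`
    (an illustrative stand-in for unsupervised segmentation)."""
    segments, current = [], [poses[0]]
    for prev, cur in zip(poses, poses[1:]):
        if np.linalg.norm(np.asarray(cur) - np.asarray(prev)) > threshold:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments
```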
  • Patent number: 11836910
    Abstract: Techniques are described for performing estimations based on image analysis. In some implementations, one or more images may be received of at least portion(s) of a physical object, such as a vehicle. The image(s) may show damage that has occurred to the portion(s) of the physical object, such as damage caused by an accident. The image(s) may be transmitted to an estimation engine that performs pre-processing operation(s) on the image(s), such as operation(s) to excerpt one or more portion(s) of the image(s) for subsequent analysis. The image(s), and/or the pre-processed image(s), may be provided to an image analysis service, which may analyze the image(s) and return component state information that describes a state (e.g., damage extent) of the portion(s) of the physical object shown in the image(s). Based on the component state information, the estimation engine may determine a cost estimate to repair and/or replace damaged component(s).
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: December 5, 2023
    Assignee: United Services Automobile Association (USAA)
    Inventors: Robert Kenneth Dohner, Kristina Tomasetti, John McChesney TenEyck, Jr., David Golia, Robert S. Welborn, III
  • Patent number: 11830234
    Abstract: An image processing method and an image processing apparatus are provided. The image processing method includes acquiring information of a first region of interest (ROI) in a first frame, estimating information of a second ROI in a second frame that is received after the first frame, based on the acquired information of the first ROI, and sequentially storing, in a memory, subframes that are a portion of the second frame, each of the subframes being a line of the second frame. The image processing method further includes determining whether a portion of the stored subframes includes the second ROI, based on the estimated information of the second ROI, and based on the portion of the stored subframes being determined to include the second ROI, processing the portion of the stored subframes.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: November 28, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jingu Heo, Byong Min Kang
  • Patent number: 11823477
    Abstract: A computer-implemented method for extracting data from a table comprising a plurality of cells is provided. The method comprises: receiving image data representing a physical layout of the table and words or characters in the table; generating a plurality of tokens, each token representing a semantic unit of the words or characters; and generating at least one language embedding and at least one position embedding associated with each token. The method further comprises: associating at least one token with a cell; receiving the tokens at an input of a cell classifier trained to generate a cell classification for each cell based on the language embeddings and the position embeddings of the at least one token associated with the cell; and determining a relationship exists between at least two cells based at least in part on the cell classifications and the position embeddings of the tokens.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: November 21, 2023
    Assignee: MOORE AND GASPERECZ GLOBAL, INC.
    Inventors: Mahdi Ramezani, Ghazal Sahebzamani, Robin Saxifrage, Marcel Jose Guevara
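The per-cell classification from token embeddings can be sketched as follows; concatenating the language and position embeddings and using a single linear layer with mean pooling are illustrative assumptions, since the abstract does not fix how the embeddings are combined or how the classifier is built:

```python
import numpy as np

def token_embedding(lang_emb, pos_emb):
    """Combine a token's language embedding and position embedding
    (concatenation is an illustrative choice)."""
    return np.concatenate([lang_emb, pos_emb])

def classify_cell(token_embs, W):
    """Toy cell classifier: mean-pool the embeddings of the tokens associated
    with a cell, apply a linear layer W, and take the argmax class."""
    pooled = np.mean(token_embs, axis=0)
    return int(np.argmax(pooled @ W))
```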