Patents Examined by Quan M Hua
  • Patent number: 11403850
    Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a surrounding environment of a vehicle. The system and method also include completing an action localization model to model a temporal context of actions occurring within the surrounding environment of the vehicle based on the video data and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses from the action localization model and the action adaption model to complete spatio-temporal action localization of individuals and actions that occur within the surrounding environment of the vehicle.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: August 2, 2022
    Assignee: Honda Motor Co., Ltd.
    Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
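The abstract above turns on combining the losses of two models into one training objective. Below is a minimal, hypothetical sketch of such a combination; the loss names and the weighted-sum scheme are illustrative assumptions, not the patented formulation.

```python
# Illustrative sketch only: a generic weighted combination of two training
# losses, loosely inspired by the abstract above. The weighting scheme is
# an assumption, not the patented method.

def combined_loss(localization_loss: float,
                  adaptation_loss: float,
                  lambda_adapt: float = 0.5) -> float:
    """Combine an action-localization loss with a domain-adaptation loss.

    lambda_adapt trades off how strongly the adaptation term influences
    the total objective used to update the shared model parameters.
    """
    return localization_loss + lambda_adapt * adaptation_loss


if __name__ == "__main__":
    # Example: losses produced by the two branches for one mini-batch.
    print(combined_loss(localization_loss=1.8, adaptation_loss=0.6))
```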
  • Patent number: 11398021
    Abstract: A method for inspecting a pillar-shaped honeycomb formed body before firing includes a step a1 of capturing at least one of a first end surface and a second end surface of a pillar-shaped honeycomb formed body before firing with a camera to generate an image of the at least one of the first end surface and the second end surface; a step b1 of measuring a size of an opening of each of a plurality of cells in the image generated by step a1; and a step c1 of identifying, from the cells, abnormal cells whose openings have a size deviating from a predetermined allowable range, based on the measurement result of step b1, and counting the number of abnormal cells.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: July 26, 2022
    Assignee: NGK Insulators, Ltd.
    Inventors: Hirotada Nakamura, Junya Hasegawa
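A minimal sketch of steps b1 and c1 from the abstract above, assuming the end-face image has already been binarized so that cell openings are foreground pixels; the binarization, the area-based size measure, and the allowable range are illustrative assumptions, not the patent's procedure.

```python
import numpy as np
from scipy import ndimage


def count_abnormal_cells(binary_end_face: np.ndarray,
                         min_area: int,
                         max_area: int) -> int:
    """Label each cell opening, measure its area in pixels, and count
    openings whose size falls outside the allowable [min_area, max_area] range."""
    labels, num_cells = ndimage.label(binary_end_face)
    # Pixel count per label; index 0 is the background, so skip it.
    areas = np.bincount(labels.ravel())[1:]
    abnormal = np.sum((areas < min_area) | (areas > max_area))
    return int(abnormal)


if __name__ == "__main__":
    demo = np.zeros((20, 20), dtype=np.uint8)
    demo[2:6, 2:6] = 1      # normal-sized opening (16 px)
    demo[10:12, 10:12] = 1  # undersized opening (4 px)
    print(count_abnormal_cells(demo, min_area=10, max_area=30))  # -> 1
```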
  • Patent number: 11367199
    Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: June 21, 2022
    Assignee: Adobe Inc.
    Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
  • Patent number: 11367307
    Abstract: Provided is a method for processing images. The method can include: acquiring a target face image, and performing face key point detection on the target face image; acquiring a first fusion image by fusing a virtual special effect and a face part matched in the target face image based on a face key point detection result; acquiring an occlusion mask of the target face image; and generating a second fusion image based on the occlusion mask and the first fusion image.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: June 21, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Shanshan Wu, Paliwan Pahaerding, Ni Ai
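The second-fusion step described above can be pictured as occlusion-aware blending. The sketch below is a generic per-pixel blend, assuming a mask convention of 1 = occluded; it is an illustration, not the patented processing chain.

```python
import numpy as np


def second_fusion(original: np.ndarray,
                  first_fusion_img: np.ndarray,
                  occlusion_mask: np.ndarray) -> np.ndarray:
    """Blend per pixel: occluded regions (e.g. a hand over the face) come from
    the original image, non-occluded regions from the image already fused
    with the virtual special effect."""
    mask = occlusion_mask[..., None].astype(np.float32)  # HxW -> HxWx1
    blended = mask * original + (1.0 - mask) * first_fusion_img
    return blended.astype(original.dtype)
```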
  • Patent number: 11357599
    Abstract: A method for producing an aligner for teeth of a patient, the method including initially capturing a virtual current model of the teeth with a current position of the teeth; segmenting the teeth in the virtual current model and interpolating interdental surfaces of the teeth; generating a virtual nominal model of the teeth including undercut portions from the virtual current model; automatically determining the undercut portions in the virtual nominal model; automatically removing all surface elements that are associated with the undercuts from the virtual nominal model; subsequently closing gaps in the virtual nominal model created by removing the surface elements wherein closing the gaps is performed by interpolation for blocking out; automatically blocking out the undercut portions in the virtual nominal model to produce a blocked out virtual nominal model; producing a real model of the teeth based on the blocked out virtual nominal model; applying a synthetic material foil to the real model by a deep draw
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: June 14, 2022
    Assignee: CA-DIGITAL GmbH
    Inventors: Yong-Min Jo, Daniela Dudai
  • Patent number: 11361194
    Abstract: The technology disclosed generates variation correction coefficients on a cluster-by-cluster basis to correct inter-cluster intensity profile variation for improved base calling. An amplification coefficient corrects scale variation. Channel-specific offset coefficients correct shift variation along respective intensity channels. The variation correction coefficients for a target cluster are generated based on combining analysis of historic intensity data generated for the target cluster at preceding sequencing cycles of a sequencing run with analysis of current intensity data generated for the target cluster at a current sequencing cycle of the sequencing run. The variation correction coefficients are then used to correct next intensity data generated for the target cluster at a next sequencing cycle of the sequencing run. The corrected next intensity data is then used to base call the target cluster at the next sequencing cycle.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: June 14, 2022
    Assignee: ILLUMINA, INC.
    Inventors: Eric Jon Ojard, Abde Ali Hunaid Kagalwalla, Rami Mehio, Nitin Udpa, Gavin Derek Parnaby, John S. Vieceli
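The correction described above amounts to maintaining per-cluster coefficients from earlier cycles and applying them to the next cycle's intensities. The sketch below is hypothetical: the exponential-moving-average update rule is an assumption for illustration, not Illumina's actual estimator.

```python
import numpy as np


class ClusterCorrection:
    def __init__(self, num_channels: int, smoothing: float = 0.1):
        self.amplification = 1.0                  # scale coefficient
        self.offsets = np.zeros(num_channels)     # per-channel shift coefficients
        self.smoothing = smoothing

    def update(self, observed_scale: float, observed_offsets: np.ndarray):
        """Fold the current cycle's estimates into the running coefficients."""
        a = self.smoothing
        self.amplification = (1 - a) * self.amplification + a * observed_scale
        self.offsets = (1 - a) * self.offsets + a * observed_offsets

    def correct(self, intensities: np.ndarray) -> np.ndarray:
        """Correct the next cycle's intensities (one value per channel)."""
        return (intensities - self.offsets) / self.amplification
```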
  • Patent number: 11356802
    Abstract: In one embodiment, a method comprises receiving, via a mobile station, contextual information or geographic location data relating to a plurality of members of the population within the geographic region, identifying a common element in the received contextual information relating to at least two members of the population as a basis for defining a geofence to include the at least two members, wherein the common element is identified upon a comparison of the contextual information received for the at least two members, and defining the boundary of the geofence.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: June 7, 2022
    Assignee: eBay Inc.
    Inventor: Matthew Scott Zises
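The idea above can be illustrated by grouping members on a shared contextual element and drawing a geofence around their reported locations. The sketch below uses a planar centroid-and-radius construction as a simplification; the context key and distance measure are assumptions.

```python
import math
from collections import defaultdict


def define_geofences(members):
    """members: list of dicts with 'context' (a shared keyword) and 'lat'/'lon'.
    Returns one (center, radius) geofence per context shared by >= 2 members."""
    groups = defaultdict(list)
    for m in members:
        groups[m["context"]].append((m["lat"], m["lon"]))

    fences = {}
    for context, points in groups.items():
        if len(points) < 2:
            continue  # a common element needs at least two members
        lat_c = sum(p[0] for p in points) / len(points)
        lon_c = sum(p[1] for p in points) / len(points)
        # Radius large enough to include every matching member (in degrees).
        radius = max(math.hypot(p[0] - lat_c, p[1] - lon_c) for p in points)
        fences[context] = ((lat_c, lon_c), radius)
    return fences
```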
  • Patent number: 11330447
    Abstract: Systems and methods for providing an improved cellular user quality of experience (QoE) are disclosed. The system can comprise a database built from multiple data points to monitor and analyze cellular user experiences holistically. The system supplements conventional quality of service (QoS) metrics with user-side, application provider, and internet provider data, among other things. The data can be used to create highly granular service maps. The data can also be used in methods for analyzing and solving network issues, including slowdowns, dropped calls, and network availability problems. Improved analysis of network, user equipment (UE), and application issues can locate and solve QoE issues, improving cellular customer satisfaction, retention, and loyalty.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: May 10, 2022
    Assignee: T-Mobile USA, Inc.
    Inventor: Kevin Lau
  • Patent number: 11321838
    Abstract: In one embodiment, a method for eye-tracking comprises capturing images of a user using one or more cameras, the captured images of the user depicting at least an eye of the user, storing the captured images of the user in a storage device, reading, from the storage device, a down-sampled version of the captured images of the user, detecting one or more first segments in the down-sampled version of the captured images by processing the down-sampled version of the captured images using a machine-learning model, the one or more first segments comprising features of the eye of the user, reading, from the storage device, one or more second segments in the captured images corresponding to the one or more first segments in the down-sampled version of the captured images, and computing a gaze of the user based on the one or more second segments in the captured images.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: May 3, 2022
    Assignee: Facebook Technologies, LLC.
    Inventors: Jeffrey Hung Wong, Martin Henrik Tall, Jixu Chen, Kapil Krishnakumar
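One step in the abstract above is mapping a region detected in the down-sampled image back to the stored full-resolution frame so that only that segment needs to be read. The sketch below assumes a fixed integer down-sampling factor, which is an illustrative simplification.

```python
import numpy as np


def read_full_res_segment(full_frame: np.ndarray,
                          bbox_downsampled: tuple,
                          factor: int = 4) -> np.ndarray:
    """bbox_downsampled = (x0, y0, x1, y1) in down-sampled pixel coordinates.
    Scale the box back to full resolution and slice only that region."""
    x0, y0, x1, y1 = (v * factor for v in bbox_downsampled)
    h, w = full_frame.shape[:2]
    x0, y0 = max(0, x0), max(0, y0)
    x1, y1 = min(w, x1), min(h, y1)
    return full_frame[y0:y1, x0:x1]
```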
  • Patent number: 11323859
    Abstract: Apparatuses and methods in a communication system are provided. The solution for transmitting a message to multiple vehicular recipients comprises determining (200) the vehicular recipients of the message in one or more coverage areas; estimating (202) the amount of resources needed to transmit the message using broadcast or separate unicast messages, or the number of vehicular recipients that cannot receive the broadcast message, or both; and transmitting (204) the message either as a single broadcast message to the vehicular recipients in each coverage area or as a unicast message separately to each vehicular recipient, or both, depending on the amount of resources needed for transmission or the number of vehicular recipients that cannot receive the broadcast message, or both.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: May 3, 2022
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Peter Szilagyi, Csaba Vulkán
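The transmission decision described above can be sketched as a simple cost comparison between one broadcast plus fallback unicasts and per-recipient unicasts. The cost model below is a hypothetical illustration, not the estimator in the patent.

```python
def choose_transmission(num_recipients: int,
                        num_cannot_receive_broadcast: int,
                        broadcast_cost: float,
                        unicast_cost_per_recipient: float):
    """Return a plan: ('broadcast', extra_unicasts) or ('unicast', unicasts)."""
    reachable = num_recipients - num_cannot_receive_broadcast
    all_unicast_cost = num_recipients * unicast_cost_per_recipient
    mixed_cost = (broadcast_cost
                  + num_cannot_receive_broadcast * unicast_cost_per_recipient)

    if reachable > 0 and mixed_cost <= all_unicast_cost:
        # One broadcast covers reachable recipients; unicast to the rest.
        return "broadcast", num_cannot_receive_broadcast
    return "unicast", num_recipients
```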
  • Patent number: 11317414
    Abstract: In an embodiment, a network entity (e.g., a base station, a location server, etc.) transmits, to a user equipment (UE), at least one base station almanac (BSA) message that indicates (i) a set of transmission point locations associated with at least one base station, the set of transmission point locations including at least one transmission point location of a base station that is based upon a plurality of different transmission point locations associated with the base station, and (ii) a mapping of each of a plurality of beams to the at least one transmission point location. The UE receives the transmitted at least one BSA message.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: April 26, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Bilal Sadiq, Junyi Li, Pavan Kumar Vitthaladevuni, Joseph Binamira Soriaga, Alexandros Manolakos
  • Patent number: 11308628
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system is trained to generate a matte for an input image utilizing contextual information within the image. Patches from the image and a corresponding trimap are extracted, and alpha values for each individual image patch are predicted based on correlations of features in different regions within the image patch. Predicting alpha values for an image patch may also be based on contextual information from other patches extracted from the same image. This contextual information may be determined by determining correlations between features in the query patch and context patches. The predicted alpha values for an image patch form a matte patch, and all matte patches generated for the patches are stitched together to form an overall matte for the input image.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: April 19, 2022
    Assignee: ADOBE INC.
    Inventor: Ning Xu
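The patch-wise workflow suggested by the abstract above (cut patches, predict an alpha patch for each, stitch the matte) is sketched below with non-overlapping patches. The function `predict_alpha_patch` is a placeholder standing in for the trained network; it and the patch size are assumptions.

```python
import numpy as np


def predict_alpha_patch(image_patch: np.ndarray, trimap_patch: np.ndarray) -> np.ndarray:
    # Placeholder predictor: keep the trimap's known values and mark the
    # unknown band with 0.5. A real system would run the neural network here.
    alpha = trimap_patch.astype(np.float32)
    alpha[(trimap_patch > 0) & (trimap_patch < 1)] = 0.5
    return alpha


def stitch_matte(image: np.ndarray, trimap: np.ndarray, patch: int = 64) -> np.ndarray:
    """Predict an alpha patch for every image/trimap patch and stitch them
    back into a full-size matte."""
    h, w = trimap.shape
    matte = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            img_p = image[y:y + patch, x:x + patch]
            tri_p = trimap[y:y + patch, x:x + patch]
            matte[y:y + patch, x:x + patch] = predict_alpha_patch(img_p, tri_p)
    return matte
```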
  • Patent number: 11308357
    Abstract: A data generation apparatus for automated travel, the data generation apparatus being a data collection apparatus characterized by comprising: obtaining means for obtaining external environment information; and labeling means for adding, to focus information included in the external environment information obtained by the obtaining means, a label corresponding to passing of a vehicle through a position at which the external environment information has been collected.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: April 19, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventor: Shun Iwasaki
  • Patent number: 11301971
    Abstract: The present disclosure relates to a method and device for obtaining a second image from a first image when the dynamic range of the luminance of the first image is greater than the dynamic range of the luminance of the second image. The disclosure describes deriving at least one component representative of the colors of the second image from the first image, and maximizing at least one derived component according to a maximum value depending on a linear-light luminance component of the first image.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: April 12, 2022
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Robin Le Naour, David Touze, Catherine Serre
  • Patent number: 11304040
    Abstract: Techniques described herein provide for identification of a mobile device that belongs to an observed pedestrian, by a vehicle. According to embodiments, a vehicle can receive a mobile device message including a first set of pedestrian-identifying features, and the vehicle can use vehicle sensor data to extract a second set of pedestrian-identifying features for an observed pedestrian. If the features match, the vehicle can determine that the observed pedestrian is in possession of the mobile device, and the vehicle can subsequently communicate with the mobile device as needed regarding the status of the pedestrian.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: April 12, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Anantharaman Balasubramanian, Saadallah Kassir, Kapil Gulati, Shuanshuan Wu
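The matching step described above can be pictured as comparing two pedestrian-identifying feature vectors, one reported by the mobile device and one extracted from vehicle sensors. The cosine-similarity test and threshold below are illustrative assumptions.

```python
import numpy as np


def is_same_pedestrian(device_features: np.ndarray,
                       observed_features: np.ndarray,
                       threshold: float = 0.9) -> bool:
    """Declare a match if the two feature vectors are sufficiently similar."""
    a = device_features / (np.linalg.norm(device_features) + 1e-9)
    b = observed_features / (np.linalg.norm(observed_features) + 1e-9)
    return float(np.dot(a, b)) >= threshold
```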
  • Patent number: 11301713
    Abstract: An object is to provide an information processing apparatus capable of preventing utilization percentage of PEs from decreasing in a series of processes in CNN. An information processing apparatus (1) according to the present disclosure includes a PE (Processing Element) Grid (20) configured to perform a convolution by using a plurality of Kernels for Input matrix data and thereby generate a different Output matrix data for each of the used Kernels, the PE Grid (20) including a plurality of PEs configured to calculate pixels constituting the Output matrix data, and a Parallelism Controller (10) configured to determine, based on the Input matrix data or a dimension of the Output matrix data, and the number of the Kernels, whether pixels included in respective Output matrix data should be calculated in parallel or a plurality of pixels included in one Output matrix data should be calculated in parallel.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: April 12, 2022
    Assignee: NEC CORPORATION
    Inventor: Salita Sombatsiri
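The parallelism decision described above (spread PEs across different Output matrices versus across pixels of one Output matrix) can be sketched as a simple rule based on kernel count and output size. The decision rule below is a hypothetical illustration, not the controller's actual logic.

```python
def choose_parallelism(num_kernels: int, output_height: int,
                       output_width: int, num_pes: int) -> str:
    pixels_per_output = output_height * output_width
    if num_kernels >= num_pes:
        # Enough independent Output matrices to keep every PE busy on a different one.
        return "parallel_across_outputs"
    if pixels_per_output >= num_pes:
        # Few kernels: keep PEs busy on many pixels of the same Output matrix.
        return "parallel_within_output"
    # Otherwise split PEs between outputs and pixels (not shown here).
    return "hybrid"
```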
  • Patent number: 11291864
    Abstract: The present disclosure provides a method for imaging of moving subjects. The method may include determining a motion range of a region of interest (ROI) of a subject in an axial direction. The method may also include causing a radiation source to emit, at each of a plurality of axial positions relative to the subject, radiation beams to the ROI to generate an image frame of the ROI. The radiation beams corresponding to the plurality of axial positions may jointly cover the motion range of the ROI in the axial direction. The method may further include determining a position of the ROI in the axial direction based on the image frames of the ROI, and determining, based on the positions of the ROI in the axial direction, at least one time bin in which therapeutic beams are to be emitted to the ROI.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: April 5, 2022
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventor: Jonathan Maltz
  • Patent number: 11270100
    Abstract: A face image detection method includes: recognizing facial features of a face image, and determining two pupil centers of the face image; connecting the two pupil centers, and determining a center point of a line segment whose endpoints are the two pupil centers; selecting K columns of pixels from a local image region including the center point, calculating a gradient value of each pixel in each column of pixels, and generating K gradient vectors including gradient values of all columns of pixels; and determining, based on a result of comparing the K gradient vectors with a specified threshold, whether glasses are worn on the face image.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: March 8, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hongwei Hu, Chen Dong, Wenmei Gao
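The gradient test described above can be sketched by sampling K pixel columns around the midpoint between the pupils and looking for a strong vertical edge (the eyeglass bridge). K, the region height, and the threshold below are illustrative values, not those of the patent.

```python
import numpy as np


def glasses_worn(gray: np.ndarray, center_xy: tuple,
                 k: int = 5, half_height: int = 20, threshold: float = 40.0) -> bool:
    cx, cy = center_xy
    y0, y1 = max(0, cy - half_height), min(gray.shape[0], cy + half_height)

    strong_edges = 0
    for dx in range(-(k // 2), k // 2 + 1):
        x = min(max(cx + dx, 0), gray.shape[1] - 1)
        column = gray[y0:y1, x].astype(np.float32)
        gradient = np.abs(np.diff(column))      # per-pixel vertical gradient
        if gradient.max() > threshold:          # a frame bridge produces a sharp edge
            strong_edges += 1
    # Declare glasses if most of the sampled columns contain a strong edge.
    return strong_edges > k // 2
```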
  • Patent number: 11270164
    Abstract: A system, including a processor and a memory, the memory including instructions to be executed by the processor to train a deep neural network based on a plurality of real-world images, determine that the accuracy of the deep neural network is below a threshold based on identifying one or more physical features, including one or more object types, by the deep neural network in the plurality of real-world images, and, based on the accuracy being below the threshold for identifying the one or more physical features, generate a plurality of synthetic images using a photo-realistic image rendering software program and a generative adversarial network. The instructions can include further instructions to retrain the deep neural network based on the plurality of real-world images and the plurality of synthetic images and output the retrained deep neural network.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: March 8, 2022
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Vijay Nagasamy, Deepti Mahajan, Rohan Bhasin, Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut murali
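The control flow described above (retrain with synthetic data only when accuracy on real data falls below a threshold) is sketched below. The helper functions passed in (train, evaluate_accuracy, generate_synthetic_images) are placeholders standing in for the training pipeline, the photo-realistic renderer, and the GAN; none of them are APIs defined by the patent.

```python
ACCURACY_THRESHOLD = 0.9  # illustrative value


def improve_model(model, real_images, labels,
                  train, evaluate_accuracy, generate_synthetic_images):
    model = train(model, real_images, labels)
    if evaluate_accuracy(model, real_images, labels) >= ACCURACY_THRESHOLD:
        return model  # real data alone is sufficient

    # Accuracy below threshold: augment with rendered and GAN-refined images
    # and retrain on the combined set.
    synthetic_images, synthetic_labels = generate_synthetic_images(real_images, labels)
    return train(model,
                 real_images + synthetic_images,
                 labels + synthetic_labels)
```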
  • Patent number: 11263750
    Abstract: Introduced here are computer programs and associated computer-implemented techniques for training and then applying computer-implemented models designed for segmentation of an object in the frames of video. By training and then applying the segmentation model in a cyclical manner, the errors encountered when performing segmentation can be eliminated rather than propagated. In particular, the approach to segmentation described herein allows the relationship between a reference mask and each target frame for which a mask is to be produced to be explicitly bridged or established. Such an approach ensures that masks are accurate, which in turn means that the segmentation model is less prone to distractions.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: March 1, 2022
    Assignee: Adobe Inc.
    Inventor: Ning Xu