Patents Examined by Ruiping Li
  • Patent number: 11954839
Abstract: A leak source specification assistance device includes a processing unit that performs processing, based on a movement of a first pixel that is one of the pixels constituting an image including a leaking fluid region (gas region), to determine a second pixel that is one of the pixels constituting the image and is a movement source pixel of the first pixel, and repeats the processing with the second pixel newly set as the first pixel to determine a final movement source pixel. The processing unit repeats the processing on each of a plurality of the pixels constituting the image to determine the final movement source pixel for each of the plurality of pixels.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: April 9, 2024
    Assignee: KONICA MINOLTA, INC.
    Inventor: Motohiro Asano
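The patent above traces each pixel of the gas region backward through its movement-source pixels until the chain reaches a fixed point. Below is a minimal numpy sketch of that iteration; the backward motion field `flow`, the rounding, and the fixed-point stopping rule are assumptions of mine, not details taken from the patent.

```python
import numpy as np

def trace_source(flow, y, x, max_steps=100):
    """Follow the movement-source chain of pixel (y, x) until it stops moving.

    flow[y, x] = (dy, dx) points from a pixel to its movement-source pixel,
    i.e. source = (y + dy, x + dx).  Assumed to be precomputed, e.g. by
    backward optical flow on consecutive gas-region frames.
    """
    h, w = flow.shape[:2]
    for _ in range(max_steps):
        dy, dx = flow[y, x]
        ny, nx = int(round(y + dy)), int(round(x + dx))
        ny, nx = int(np.clip(ny, 0, h - 1)), int(np.clip(nx, 0, w - 1))
        if (ny, nx) == (int(y), int(x)):   # fixed point: final movement-source pixel
            break
        y, x = ny, nx
    return int(y), int(x)

def source_map(flow, gas_mask):
    """Repeat the tracing for every pixel inside the leaking-fluid (gas) region."""
    sources = {}
    for y, x in zip(*np.nonzero(gas_mask)):
        sources[(int(y), int(x))] = trace_source(flow, y, x)
    return sources

# Toy example: a 4x4 field where everything drifts toward pixel (0, 0).
flow = -np.indices((4, 4)).transpose(1, 2, 0).astype(float) * 0.5
mask = np.ones((4, 4), dtype=bool)
print(source_map(flow, mask)[(3, 3)])   # -> (0, 0)
```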
  • Patent number: 11954769
Abstract: The invention provides a system that reduces the computational cost of using an iterative reconstruction algorithm. The system (100) comprises a providing unit (110) for providing CT projection data, a base image generation unit (120) for generating a base image based on the projection data, a modifying unit (130) for generating a modified image, wherein an image value of a voxel of the base image is modified based on the image value of the voxel, and an image reconstruction unit (140) for reconstructing an image using an iterative reconstruction algorithm that uses the modified image as a start image. Since the modifying unit is adapted to modify the base image, the base image can be modified to form an optimal start image for the chosen iterative reconstruction, so that faster convergence of the iterative reconstruction can be accomplished.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: April 9, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Claas Bontus
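As a hedged illustration of using a modified base image as the start image of an iterative reconstruction, the sketch below runs a generic Landweber iteration (a stand-in for whatever algorithm the patent's reconstruction unit uses) on a toy linear system, once from the raw base image and once from a voxel-wise modified copy. The system matrix, the modification rule, and the step size are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 32))          # toy "projector": 64 measurements, 32 voxels
x_true = np.abs(rng.normal(size=32))   # toy object
b = A @ x_true                         # simulated CT projection data

def landweber(x0, n_iter=200, step=None):
    """Generic Landweber iteration: x <- x + step * A^T (b - A x)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for _ in range(n_iter):
        x = x + step * (A.T @ (b - A @ x))
    return x

# Base image, e.g. from a cheap preliminary reconstruction (a few iterations here).
base = landweber(np.zeros(32), n_iter=5)

# Voxel-wise modification of the base image (placeholder rule: clip negatives, mild scaling).
modified = np.maximum(base, 0.0) * 1.1

for name, start in [("base start", base), ("modified start", modified)]:
    x = landweber(start, n_iter=50)
    print(name, "residual:", np.linalg.norm(b - A @ x))
```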
  • Patent number: 11948360
    Abstract: One embodiment of the present invention sets forth a technique for selecting a frame of video content that is representative of a media title. The technique includes applying an embedding model to a plurality of faces included in a set of frames of the video content to generate a plurality of face embeddings. The technique also includes aggregating the plurality of face embeddings into a plurality of clusters representing a plurality of characters included in the media title. The technique further includes computing a plurality of prominence scores for the plurality of characters based on one or more attributes of the plurality of clusters, and selecting, from the set of frames, a frame of video content as representative of the media title based on one or more prominence scores for one or more characters included in the frame.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: April 2, 2024
    Assignee: NETFLIX, INC.
    Inventors: Shervin Ardeshir Behrostaghi, Nagendra K. Kamath
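A minimal sketch of the selection pipeline in the abstract above: embed faces, cluster the embeddings into characters, score character prominence, and pick the frame whose visible characters score highest. The greedy cosine-threshold clustering and the cluster-size prominence measure are my assumptions; the actual embedding model and scoring attributes are not disclosed here.

```python
import numpy as np

def cluster_embeddings(embs, threshold=0.7):
    """Greedily group L2-normalized face embeddings whose cosine similarity to a
    cluster centroid exceeds `threshold` (a stand-in for the patent's clustering)."""
    centroids, members = [], []
    for i, e in enumerate(embs):
        sims = [float(c @ e) for c in centroids]
        if sims and max(sims) >= threshold:
            k = int(np.argmax(sims))
            members[k].append(i)
            c = np.mean(embs[members[k]], axis=0)
            centroids[k] = c / np.linalg.norm(c)
        else:
            centroids.append(e)
            members.append([i])
    return members

def select_frame(frame_of_face, embs, threshold=0.7):
    """Pick the frame whose visible characters have the highest total prominence,
    with prominence taken here simply as cluster size (an assumption)."""
    clusters = cluster_embeddings(embs, threshold)
    prominence = {i: len(m) for m in clusters for i in m}
    scores = {}
    for face_idx, frame in enumerate(frame_of_face):
        scores[frame] = scores.get(frame, 0) + prominence[face_idx]
    return max(scores, key=scores.get)

# Toy data: 5 detected faces across 3 frames, 4-d normalized embeddings.
rng = np.random.default_rng(1)
embs = rng.normal(size=(5, 4))
embs /= np.linalg.norm(embs, axis=1, keepdims=True)
print(select_frame([0, 0, 1, 2, 2], embs))
```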
  • Patent number: 11928594
Abstract: Training images can be synthesized in order to obtain enough data to train a model (e.g., a neural network) to recognize various classifications of a type of object. Images can be synthesized by blending images of objects labeled using those classifications into selected background images. To improve results, one or more operations are performed to determine whether the synthesized images can still be used as training data, such as by verifying that one or more objects of interest represented in those images are not occluded, or at least satisfy a threshold level of acceptance. The training images can be used with real world images to train the model.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: March 12, 2024
    Inventors: Jonathan Lwowski, Abhijit Majumdar
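A small sketch of the synthesis-and-acceptance idea above: blend a labeled object crop into a background, then keep the result only if the object of interest stays sufficiently visible. The alpha blending and the 60% visibility threshold are illustrative assumptions, not the patent's acceptance criteria.

```python
import numpy as np

def paste_object(background, obj_rgba, top, left):
    """Alpha-blend an RGBA object crop into a copy of the background image."""
    out = background.copy()
    h, w = obj_rgba.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = obj_rgba[..., 3:4] / 255.0
    region[:] = alpha * obj_rgba[..., :3] + (1 - alpha) * region
    return out

def visible_ratio(obj_mask, occluder_mask):
    """Fraction of the pasted object's pixels not covered by later occluders."""
    return 1.0 - (obj_mask & occluder_mask).sum() / max(obj_mask.sum(), 1)

MIN_VISIBLE = 0.6   # acceptance threshold (assumed value)

bg = np.zeros((64, 64, 3), dtype=np.float32)
obj = np.full((16, 16, 4), 255, dtype=np.float32)          # opaque white square
synth = paste_object(bg, obj, top=10, left=10)

obj_mask = np.zeros((64, 64), dtype=bool); obj_mask[10:26, 10:26] = True
occluder = np.zeros((64, 64), dtype=bool); occluder[18:40, 18:40] = True
if visible_ratio(obj_mask, occluder) >= MIN_VISIBLE:
    print("accept synthetic image for training")
else:
    print("reject: object too occluded")
```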
  • Patent number: 11922649
Abstract: The measurement data of each portion of the target object is provided with high accuracy. The measurement data calculation apparatus 1020 includes an obtaining unit 1024A, an extraction unit 1024B, a conversion unit 1024C, and a calculation unit 1024D. The obtaining unit 1024A obtains the image data in which the target object is photographed and the full-length data of the target object. The extraction unit 1024B extracts the shape data indicating the shape of the target object from the image data. The conversion unit 1024C converts the shape data into a silhouette based on the full-length data. The calculation unit 1024D uses the shape data converted by the conversion unit 1024C to calculate the measurement data of each portion of the target object.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: March 5, 2024
    Assignee: Arithmer Inc.
    Inventors: Daisuke Sato, Hiroki Yato, Chikashi Arita, Yoshihisa Ishibashi, Takashi Nakano, Ryosuke Sasaki, Ryosuke Tajima, Yoshihiro Ohta
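A minimal sketch of the measurement step described above, assuming the extraction unit has already produced a binary silhouette and the full-length data is the object's known height: the full length fixes a pixels-to-centimetres scale, and widths are then measured at a few fractional heights. The fractional-height sampling is my illustration of "each portion".

```python
import numpy as np

def measure_portions(silhouette, full_length_cm, fractions=(0.2, 0.5, 0.8)):
    """Given a binary silhouette of the target object and its known full length,
    return the width (in cm) of the object at a few fractional heights."""
    rows = np.nonzero(silhouette.any(axis=1))[0]
    top, bottom = rows.min(), rows.max()
    height_px = bottom - top + 1
    cm_per_px = full_length_cm / height_px          # scale from the full-length data
    measurements = {}
    for f in fractions:
        r = int(top + f * (height_px - 1))
        cols = np.nonzero(silhouette[r])[0]
        width_px = cols.max() - cols.min() + 1 if cols.size else 0
        measurements[f] = width_px * cm_per_px
    return measurements

# Toy silhouette: a 100-px tall, 30-px wide rectangle standing in for the object,
# with a stated full length of 170 cm.
sil = np.zeros((120, 60), dtype=bool)
sil[10:110, 15:45] = True
print(measure_portions(sil, full_length_cm=170.0))
```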
  • Patent number: 11915501
Abstract: An object detection method and apparatus include obtaining a point cloud of a scene that includes location information of points. The point cloud is mapped to a 3D voxel representation. A convolution operation is performed on the feature information of the 3D voxel representation to obtain a convolution feature set, and initial positioning information of a candidate object region is determined based on the convolution feature set. A target point located in the candidate object region in the point cloud is determined, and the initial positioning information of the candidate object region is adjusted based on the location information and target convolution feature information of the target point. Positioning information of a target object region is obtained to improve object detection accuracy.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: February 27, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yi Lun Chen, Shu Liu, Xiao Yong Shen, Yu Wing Tai, Jia Ya Jia
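Below is a minimal sketch of only the first stage of the pipeline above, mapping a point cloud into a 3D voxel feature volume. The per-voxel features (point count and mean height) and the grid resolution are assumptions, and the subsequent convolution, region-proposal, and refinement stages, which would normally run in a deep-learning framework, are omitted.

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid=(50, 50, 20)):
    """Map an (N, 3) point cloud into a 3D voxel grid whose features here are
    simply the point count and mean height per occupied voxel (an assumption;
    the patent's feature information may differ)."""
    idx = np.floor(points / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid)), axis=1)
    idx, pts = idx[keep], points[keep]
    count = np.zeros(grid)
    z_sum = np.zeros(grid)
    for (i, j, k), p in zip(idx, pts):
        count[i, j, k] += 1
        z_sum[i, j, k] += p[2]
    mean_z = np.divide(z_sum, count, out=np.zeros_like(z_sum), where=count > 0)
    return np.stack([count, mean_z], axis=-1)   # (X, Y, Z, 2) feature volume

# Toy scene: 1000 random points in a 10 m x 10 m x 4 m volume.
rng = np.random.default_rng(2)
pts = rng.uniform([0, 0, 0], [10, 10, 4], size=(1000, 3))
vox = voxelize(pts)
print(vox.shape, "occupied voxels:", int((vox[..., 0] > 0).sum()))
```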
  • Patent number: 11915355
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method includes receiving a source frame of a source video, where the source frame includes a head and a face of a source actor, generating source pose parameters corresponding to a pose of the head and a facial expression of the source actor; receiving a target image including a target head and a target face of a target person, determining target identity information associated with the target head and the target face of the target person, replacing source identity information in the source pose parameters with the target identity information to obtain further source pose parameters, and generating an output frame of an output video that includes a modified image of the target face and the target head adopting the pose of the head and the facial expression of the source actor.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: February 27, 2024
    Assignee: Snap Inc.
    Inventors: Yurii Volkov, Pavel Savchenkov, Nikolai Smirnov, Aleksandr Mashrabov
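A tiny sketch of the identity-replacement step in the abstract above, assuming (purely for illustration) that the pose parameters form a flat vector with separate identity, head-pose, and expression blocks; the actual parameterization and the rendering network that produces the output frame are not part of this sketch.

```python
import numpy as np

# Assumed layout of the pose-parameter vector (not from the patent):
# [ identity (0:80) | head pose (80:86) | facial expression (86:150) ]
ID, POSE, EXPR = slice(0, 80), slice(80, 86), slice(86, 150)

def retarget(source_params, target_identity):
    """Replace the source actor's identity block with the target person's
    identity while keeping the source head pose and facial expression."""
    out = source_params.copy()
    out[ID] = target_identity
    return out

rng = np.random.default_rng(3)
source_params = rng.normal(size=150)       # estimated from a source video frame
target_identity = rng.normal(size=80)      # estimated from the target image
mixed = retarget(source_params, target_identity)

assert np.allclose(mixed[POSE], source_params[POSE])      # pose kept
assert np.allclose(mixed[ID], target_identity)            # identity swapped
print("retargeted parameter vector ready for the rendering stage")
```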
  • Patent number: 11915519
    Abstract: A group to be authenticated in face authentication is efficiently registered in a system. An information processing system includes a face detection unit configured to detect a face from an image in which a plurality of faces of persons are shown, a determination unit configured to determine whether or not the face detected by the face detection unit satisfies a predetermined condition, and a registration information generation unit configured to generate registration information, the registration information being information in which a partial image of each of a plurality of faces that have been determined to satisfy the predetermined condition is associated with an identifier identifying a group to be authenticated in face authentication.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: February 27, 2024
    Assignee: NEC CORPORATION
    Inventor: Yuki Shimizu
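A minimal sketch of the registration-information step above, assuming an upstream face detector has already produced crops with sizes, and taking a minimum face size as the "predetermined condition" (the patent does not say what the condition is). The group identifier here is just a generated UUID.

```python
import uuid

MIN_FACE_SIDE = 80        # assumed "predetermined condition": a minimum face size in pixels

def build_registration(detected_faces, group_name):
    """Associate the accepted face crops with one identifier for the group to be
    authenticated together.  `detected_faces` is assumed to be a list of dicts
    {"crop": ..., "width": int, "height": int} from a face detection unit."""
    group_id = str(uuid.uuid4())
    accepted = [f["crop"] for f in detected_faces
                if f["width"] >= MIN_FACE_SIDE and f["height"] >= MIN_FACE_SIDE]
    return {"group_id": group_id,
            "group_name": group_name,
            "faces": accepted}

faces = [{"crop": "face_0.png", "width": 120, "height": 128},
         {"crop": "face_1.png", "width": 60,  "height": 64},    # too small: rejected
         {"crop": "face_2.png", "width": 96,  "height": 100}]
info = build_registration(faces, group_name="visitors-group")
print(len(info["faces"]), "faces registered under group", info["group_id"])
```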
  • Patent number: 11907816
    Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each being a training instance and corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances and a corresponding training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set and the system is then re-trained on the augmented training data set.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: February 20, 2024
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Nikaash Puri, Ayush Chopra, Anubha Kabra
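A small sketch of the augmentation loop described above: pick one minority-class and one majority-class instance by prediction entropy, mix them into a synthetic instance with a correspondingly mixed soft label, and append the tuple to the training set. The linear interpolation and the highest-entropy selection rule are my assumptions about how the combination is done.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Prediction entropy of a probability vector."""
    return float(-(p * np.log(p + eps)).sum())

def synthesize(x_min, x_maj, y_min, y_maj, lam=0.5):
    """Combine one minority-class and one majority-class instance into a
    synthetic instance with a correspondingly mixed (soft) label."""
    return lam * x_min + (1 - lam) * x_maj, lam * y_min + (1 - lam) * y_maj

# Toy setup: model probabilities per training instance (e.g. from the initially
# trained classifier), features, and one-hot labels for 2 classes.
rng = np.random.default_rng(4)
X = rng.normal(size=(6, 5))
Y = np.eye(2)[[0, 0, 0, 0, 1, 1]]                 # class 1 is the minority
probs = rng.dirichlet(np.ones(2), size=6)

minority = [i for i in range(6) if Y[i, 1] == 1]
majority = [i for i in range(6) if Y[i, 0] == 1]
# Pick the highest-entropy instance from each class (selection rule assumed).
i_min = max(minority, key=lambda i: entropy(probs[i]))
i_maj = max(majority, key=lambda i: entropy(probs[i]))

x_syn, y_syn = synthesize(X[i_min], X[i_maj], Y[i_min], Y[i_maj])
X_aug = np.vstack([X, x_syn])                     # augmented training data set
Y_aug = np.vstack([Y, y_syn])
print(X_aug.shape, Y_aug[-1])
```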
  • Patent number: 11904863
Abstract: A method for passing a curve may include: sensing, by a vehicle sensor, environment information regarding an environment of the vehicle; sensing at least one current propagation parameter of the vehicle; detecting, based on at least the environment information, (a) that the vehicle is about to reach the curve, and (b) one or more first road conditions of a first road segment that precedes the curve; determining one or more curve passing propagation parameters to be applied by the vehicle while passing the curve, wherein the determining is based, at least in part, on the one or more first road conditions and on the at least one current propagation parameter; and responding to the determining.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: February 20, 2024
    Assignee: Autobrains Technologies Ltd.
    Inventors: Igal Raichelgauz, Karina Odinaev
  • Patent number: 11908199
Abstract: Conventionally, image data captured from a traveling vehicle is transferred in full, and it is not possible to reduce the transmission band of the image data. It is assumed that a radar 4 mounted in a traveling vehicle 10 detects a certain distant three-dimensional object at a time T in a direction of a distance d1 [m] and an angle θ1. Since the vehicle 10 travels at a vehicle speed Y [km/h], it is predicted that a camera 3 is capable of capturing the distant three-dimensional object at a time (T+ΔT) and an angle θ1, or at a distance d2 [m]. Therefore, if a control unit 2 outputs a request to the camera 3 in advance to cut out an image of the angle θ1 or the distance d2 [m] at the time (T+ΔT), then, when the time (T+ΔT) comes, the camera 3 transfers a whole image and a high-resolution image, which is a cutout of only a partial image, to the control unit 2.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: February 20, 2024
    Assignee: Hitachi Astemo, Ltd.
    Inventors: Tetsuya Yamada, Teppei Hirotsu, Tomohito Ebina, Kazuyoshi Serizawa, Shouji Muramatsu
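The numbers below walk through the timing and geometry in the abstract above. Straight-line motion toward the object and the crop-size rule are assumptions added for illustration.

```python
# Worked example of the prediction in the abstract above (straight-line motion
# toward the object and the crop-size rule are assumptions of mine).
d1_m    = 100.0     # radar range to the distant object at time T      [m]
theta1  = 5.0       # radar bearing at time T                          [deg]
speed_y = 72.0      # vehicle speed Y                                  [km/h]
dt_s    = 2.0       # prediction interval (Delta T)                    [s]

v_ms = speed_y / 3.6                    # 72 km/h -> 20 m/s
d2_m = d1_m - v_ms * dt_s               # predicted range at T + Delta T -> 60 m

# Apparent size grows roughly with 1/distance, so the cutout requested around
# bearing theta1 is scaled up accordingly before being sent to the camera.
crop_w_at_d1 = 64                       # pixels (illustrative)
crop_w_at_d2 = int(round(crop_w_at_d1 * d1_m / d2_m))

print(f"request cutout at bearing {theta1} deg, range {d2_m:.0f} m, "
      f"{crop_w_at_d2} px wide, for time T + {dt_s:.0f} s")
```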
  • Patent number: 11902576
    Abstract: A three-dimensional data encoding method includes: classifying three-dimensional points included in point cloud data into layers, based on geometry information of the three-dimensional points; generating first information indicating whether to permit referring to, for a current three-dimensional point included in the three-dimensional points, attribute information of another three-dimensional point belonging to a same layer as the current three-dimensional point; and encoding attribute information of the current three-dimensional point to generate a bitstream, by or without referring to the attribute information of the other three-dimensional point according to the first information. The bitstream includes the first information.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: February 13, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Toshiyasu Sugio, Noritaka Iguchi
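A rough sketch of the reference-control idea above: points are classified into layers from their geometry, attributes are coded in layer order, and a flag (the "first information") decides whether a point's attribute may be predicted from already-coded points in the same layer. The distance-quantile layering and nearest-neighbour prediction are placeholders, not the codec's actual rules.

```python
import numpy as np

def assign_layers(points, n_layers=3):
    """Placeholder layer classification based on geometry: quantize each point's
    distance from the cloud centroid into n_layers bins (the real codec uses its
    own geometry-based rule)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    edges = np.quantile(d, np.linspace(0, 1, n_layers + 1)[1:-1])
    return np.digitize(d, edges)

def predict_attribute(i, points, attrs, layers, coded, allow_same_layer):
    """Predict point i's attribute from the nearest already-coded point,
    excluding same-layer points when the flag says referring is not permitted."""
    cands = [j for j in coded if allow_same_layer or layers[j] != layers[i]]
    if not cands:
        return 0.0
    j = min(cands, key=lambda c: np.linalg.norm(points[c] - points[i]))
    return attrs[j]

rng = np.random.default_rng(5)
pts = rng.uniform(size=(20, 3))
attrs = rng.uniform(size=20)                 # e.g. reflectance values
layers = assign_layers(pts)

allow_same_layer = False                     # the flag carried in the bitstream
coded, residuals = [], []
for i in np.argsort(layers):                 # code coarser layers first
    pred = predict_attribute(i, pts, attrs, layers, coded, allow_same_layer)
    residuals.append(float(attrs[i] - pred)) # residual would be entropy-coded
    coded.append(int(i))
print("mean |residual|:", float(np.mean(np.abs(residuals))))
```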
  • Patent number: 11899469
    Abstract: A method of integrity monitoring for visual odometry comprises capturing a first image at a first time epoch with stereo vision sensors, capturing a second image at a second time epoch, and extracting features from the images. A temporal feature matching process is performed to match the extracted features, using a feature mismatching limiting discriminator. A range, or depth, recovery process is performed to provide stereo feature matching between two images taken by the stereo vision sensors at the same time epoch, using a range error limiting discriminator. An outlier rejection process is performed using a modified RANSAC technique to limit feature moving events. Feature error magnitude and fault probabilities are characterized using overbounding Gaussian models.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: February 13, 2024
    Assignee: Honeywell International Inc.
    Inventors: Vibhor L Bageshwar, Xiao Cao, Yawei Zhai
  • Patent number: 11893828
    Abstract: System and method for training a human perception predictor to determine level of perceived similarity between data samples, the method including: receiving at least one media file, determining at least one identification region for each media file, applying at least one transformation on each identification region for each media file until at least one modified media file is created, receiving input regarding similarity between each modified media file and the corresponding received media file, and training a machine learning model with an objective function configured to predict similarity between media files by a human observer in accordance with the received input.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: February 6, 2024
    Assignee: DE-IDENTIFICATION LTD.
    Inventors: Gil Perry, Sella Blondheim, Eliran Kuta, Yoav Hacohen
  • Patent number: 11886493
Abstract: A method is disclosed for indexing 3D digital models, retrieving them, comparing them and displaying the results in a 3D space. The method comprises four complementary parts, i.e. displaying, comparing/searching, reconciling the faces, and classifying the results. These parts can overlap with each other or can be implemented separately. A method is described for retrieving 3D models that share certain similarities of form with a reference 3D model, involving a first step of analysis in order to generate representations (descriptors). The process of searching/comparing 3D models based on descriptors partially related to the faces optionally requires a process of pairing and reconciling the faces. The results are displayed in a single three-dimensional space, and a mark on the faces of the 3D models makes it possible to distinguish several types of difference between similar 3D models.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: January 30, 2024
    Assignee: 7893159 CANADA INC.
    Inventors: Roland Maranzana, Omar Msaaf
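A minimal sketch of the searching/comparing part described above, assuming each 3D model is represented by one descriptor vector per face: faces of two models are greedily paired by descriptor distance, a similarity score is formed from the paired distances plus a penalty for unpaired faces, and a small catalog is ranked against a reference model. The pairing and scoring rules are illustrative assumptions.

```python
import numpy as np

def pair_faces(desc_a, desc_b):
    """Greedily reconcile (pair) faces of two 3D models by nearest descriptor."""
    remaining = list(range(len(desc_b)))
    pairs = []
    for i, da in enumerate(desc_a):
        if not remaining:
            break
        j = min(remaining, key=lambda j: np.linalg.norm(da - desc_b[j]))
        pairs.append((i, j))
        remaining.remove(j)
    return pairs

def dissimilarity(desc_a, desc_b):
    """Score two models by the mean distance of their paired face descriptors,
    penalizing faces that could not be paired (scoring rule assumed)."""
    pairs = pair_faces(desc_a, desc_b)
    d = np.mean([np.linalg.norm(desc_a[i] - desc_b[j]) for i, j in pairs])
    return d + 0.5 * abs(len(desc_a) - len(desc_b))

# Toy per-face descriptors (e.g. area, perimeter, curvature statistics).
rng = np.random.default_rng(8)
reference = rng.normal(size=(6, 4))
catalog = {"model_a": reference + 0.01 * rng.normal(size=(6, 4)),
           "model_b": rng.normal(size=(8, 4))}
ranked = sorted(catalog, key=lambda k: dissimilarity(reference, catalog[k]))
print("retrieval order:", ranked)
```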
  • Patent number: 11880905
Abstract: An image processing method and an image processing device are provided. The image processing method includes: acquiring an input image; extracting first and second pixel groups in the input image, wherein the first pixel group comprises first pixels with different positions, and the second pixel group comprises second pixels with different positions, the positions of the first pixels are different from the positions of the second pixels, the number of first pixels is equal to the number of second pixels, and the position of each first pixel in the first pixel group corresponds to the position of a respective second pixel in the second pixel group; and, when a preset similarity condition is satisfied between the first pixel and the respective second pixel, determining a first processing result of the first pixel as a second processing result of the respective second pixel.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: January 23, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Ran Duan
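A small sketch of the result-reuse idea above, taking an absolute intensity difference as the "preset similarity condition" and a 3x3 mean as the stand-in for the expensive per-pixel processing; both choices are assumptions for illustration.

```python
import numpy as np

def process(img, p):
    """Stand-in for the expensive per-pixel processing (here: a 3x3 mean)."""
    y, x = p
    return float(img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].mean())

def process_groups(img, first_group, second_group, tol=2.0):
    """Compute results for the first pixel group; for each second-group pixel,
    reuse the corresponding first-group result when the intensities are similar
    (the similarity condition here is a simple absolute difference)."""
    first_results = {p: process(img, p) for p in first_group}
    second_results = {}
    for p1, p2 in zip(first_group, second_group):
        if abs(float(img[p1]) - float(img[p2])) <= tol:
            second_results[p2] = first_results[p1]      # reuse, skip recomputation
        else:
            second_results[p2] = process(img, p2)
    return first_results, second_results

rng = np.random.default_rng(6)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float32)
first = [(4, 4), (10, 10)]
second = [(4, 20), (10, 26)]
print(process_groups(img, first, second)[1])
```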
  • Patent number: 11881051
    Abstract: A communications device (100) for classifying an instance (110) using Machine Learning (ML) is provided. The communications device is operative to acquire a feature vector representing the instance, classify the instance using a local first ML model, calculate a confidence level, and, if the calculated confidence level is less than a threshold confidence level, acquire information identifying one or more other communications devices, and transmit a classification request message comprising the feature vector to the one or more other communications devices.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: January 23, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Tommy Arngren, Markus Andersson, Rickard Cöster, Tomas Frankkila
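A minimal sketch of the decision flow in the abstract above: classify locally, compute a confidence level, and, if it falls below the threshold, build a classification request carrying the feature vector for the identified peer devices. The softmax classifier, the 0.8 threshold, and the JSON payload are placeholders of mine.

```python
import json
import numpy as np

THRESHOLD = 0.8                        # threshold confidence level (assumed)
PEERS = ["device-17", "device-42"]     # identifiers of other communications devices

def local_classify(w, b, feature_vector):
    """Local first ML model: a simple softmax classifier (placeholder)."""
    logits = feature_vector @ w + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(p.argmax()), float(p.max())      # class label, confidence level

rng = np.random.default_rng(7)
w, b = rng.normal(size=(8, 3)), rng.normal(size=3)
feature_vector = rng.normal(size=8)

label, confidence = local_classify(w, b, feature_vector)
if confidence >= THRESHOLD:
    print("use local classification:", label)
else:
    # Confidence too low: ask the identified peer devices to classify the instance.
    request = {"type": "classification_request",
               "features": feature_vector.tolist()}
    for peer in PEERS:
        print(f"send to {peer}:", json.dumps(request))
```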
  • Patent number: 11875589
    Abstract: The invention relates to securing of an article against forgery and falsifying of its associated data, and particularly of data relating to its belonging to a specific batch of articles, while allowing offline or online checking of the authenticity of a secured article and conformity of its associated data with respect to that of a genuine article.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: January 16, 2024
    Assignee: SICPA HOLDING SA
    Inventors: Eric Decoux, Philippe Gillet, Philippe Thevoz, Elisabeth Wallace
  • Patent number: 11869256
    Abstract: Methods, systems, and programs are presented for simultaneous recognition of objects within a detection space utilizing three-dimensional (3D) cameras configured for capturing 3D images of the detection space. One system includes the 3D cameras, calibrated based on a pattern in a surface of the detection space, a memory, and a processor. The processor combines data of the 3D images to obtain pixel data and removes, from the pixel data, background pixels of the detection space to obtain object pixel data associated with objects in the detection space. Further, the processor creates a geometric model of the object pixel data, the geometric model including surface information of the objects in the detection space, generates one or more cuts in the geometric model to separate objects and obtain respective object geometric models, and performs object recognition to identify each object in the detection space based on the respective object geometric models.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: January 9, 2024
    Assignee: Mashgin Inc.
    Inventors: Abhinai Srivastava, Mukul Dhankhar
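A small sketch of the background-removal step described above, assuming calibration is already done and each 3D camera provides a depth map plus a captured empty-scene background; pixels whose depth deviates from the background become the object pixel data. The depth-difference threshold is an assumed value, and the later geometric-model and recognition stages are omitted.

```python
import numpy as np

def object_pixels(depth_maps, background_maps, tol=0.02):
    """Combine per-camera depth data and keep pixels whose depth differs from the
    empty detection-space background by more than `tol` metres (a simplified
    stand-in for the patent's background-removal step)."""
    masks = [np.abs(d - b) > tol for d, b in zip(depth_maps, background_maps)]
    return [np.column_stack(np.nonzero(m)) for m in masks]   # object pixel coords per camera

# Toy data: two 3D cameras looking at a flat counter 1 m away, with a box
# occupying part of camera 0's view.
bg = [np.full((48, 64), 1.00), np.full((48, 64), 1.00)]
depth = [bg[0].copy(), bg[1].copy()]
depth[0][20:30, 30:40] = 0.85                        # an object 15 cm tall

obj = object_pixels(depth, bg)
print("object pixels seen by camera 0:", len(obj[0]), "camera 1:", len(obj[1]))
```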
  • Patent number: 11861936
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov