Patents Examined by Amir Alavi
  • Patent number: 11860838
    Abstract: A data labeling method, apparatus and system are provided. The method includes: sampling a data source according to an evaluation task for the data source to obtain sampled data; generating a labeling task from the sampled data; sending the labeling task to a labeling device; and receiving a labeled result of the labeling task from the labeling device. As such, an automatic evaluation of data can be implemented by using the evaluation task, and evaluation efficiency is improved.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: January 2, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Guanchao Wang, Yuqian Jiang, Shuhao Zhang, Tao Jiang, Siqi Wang
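    A minimal Python sketch of the sampling-then-labeling flow the abstract describes (sample the data source per an evaluation task, build a labeling task, send it out, collect labeled results). The task fields, the sampling rate, and the `send_to_labeling_device` stub are illustrative assumptions, not the patented implementation.

```python
import random

def run_evaluation_task(data_source, sample_rate=0.01, seed=0):
    """Sample a data source per an evaluation task, build a labeling task,
    and collect labeled results (labeling device stubbed out below)."""
    rng = random.Random(seed)
    # Sample the data source according to the evaluation task.
    sampled = [item for item in data_source if rng.random() < sample_rate]
    # Generate a labeling task from the sampled data.
    labeling_task = {"task_id": "eval-001", "items": sampled}
    # Send the task to a labeling device and receive the labeled result.
    return send_to_labeling_device(labeling_task)

def send_to_labeling_device(task):
    # Stand-in for the labeling device: here every item gets a dummy label.
    return [{"item": item, "label": "ok"} for item in task["items"]]

print(run_evaluation_task(range(1000)))
```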
  • Patent number: 11847572
    Abstract: A method for detecting fraudulent interactions may include receiving interaction data, including a first plurality of interactions with first fraud labels and a second plurality of interactions without fraud labels. Second fraud label data for each of the second plurality of interactions may be generated with a first neural network (e.g., classifying whether each interaction is fraudulent or not). Generated interaction data and generated fraud label data may be generated with a second neural network. Discrimination data for each of the second plurality of interactions and generated interactions may be generated with a third neural network (e.g., classifying whether the respective interaction is real or not). Error data may be determined based on the discrimination data (e.g., whether the respective interaction is correctly classified). At least one of the neural networks may be trained based on the error data. A system and computer program product are also disclosed.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: December 19, 2023
    Assignee: Visa International Service Association
    Inventors: Hangqi Zhao, Fan Yang, Chiranjeet Chetia, Claudia Carolina Barcenas Cardenas
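    A toy numpy sketch of the three-network arrangement in the abstract: the first network pseudo-labels the unlabeled interactions, the second generates synthetic interactions, the third discriminates real from generated, and the discrimination error drives a parameter update. The linear models and the crude update rule are stand-ins for illustration only, not the claimed networks.

```python
import numpy as np

rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(200, 8))   # second plurality: interactions without fraud labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_cls = rng.normal(size=8)        # first network: fraud classifier
w_gen = rng.normal(size=(8, 8))   # second network: interaction generator
w_dis = rng.normal(size=8)        # third network: real-vs-generated discriminator

# 1) Generate fraud labels for the unlabeled interactions with the first network.
pseudo_labels = (sigmoid(unlabeled @ w_cls) > 0.5).astype(int)

# 2) Generate synthetic interactions, and fraud labels for them, with the second network.
generated = rng.normal(size=(200, 8)) @ w_gen
generated_labels = (sigmoid(generated @ w_cls) > 0.5).astype(int)

# 3) Discriminate real vs. generated interactions with the third network.
real_scores = sigmoid(unlabeled @ w_dis)     # ideally close to 1
fake_scores = sigmoid(generated @ w_dis)     # ideally close to 0

# 4) Error data: how far the discriminator is from classifying each sample correctly.
error = np.concatenate([1.0 - real_scores, fake_scores]).mean()
print(f"discriminator error: {error:.3f}, pseudo-fraud rate: {pseudo_labels.mean():.2f}, "
      f"generated-fraud rate: {generated_labels.mean():.2f}")

# 5) Train at least one of the networks based on the error (one crude gradient-style step).
w_dis += 0.1 * (unlabeled.T @ (1.0 - real_scores) - generated.T @ fake_scores) / 200
```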
  • Patent number: 11847549
    Abstract: Provided are an optical device which is capable of optically implementing an activation function of an artificial neural network and an optical neural network apparatus which includes the optical device. The optical device may include: a beam splitter splitting incident light into first light and second light; an image sensor disposed to sense the first light; an optical shutter configured to transmit or block the second light; and a controller controlling operations of the optical shutter, based on an intensity of the first light measured by the image sensor.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: December 19, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Duhyun Lee, Jaeduck Jang
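    One way to read the controller logic is as a thresholded shutter, which behaves like an optical ReLU: the sensor measures the split-off light and the shutter passes the rest only above a threshold. The split ratio and threshold below are assumptions for a simple simulation.

```python
def optical_activation(incident_intensity, split_ratio=0.5, threshold=0.2):
    """Simulate the device: a beam splitter sends part of the light to an
    image sensor; the controller opens the optical shutter for the rest only
    when the measured intensity exceeds a threshold (ReLU-like behavior)."""
    first_light = incident_intensity * split_ratio           # measured by the image sensor
    second_light = incident_intensity * (1.0 - split_ratio)  # gated by the optical shutter
    shutter_open = first_light > threshold                   # controller decision
    return second_light if shutter_open else 0.0

print([optical_activation(x) for x in (0.1, 0.5, 1.0)])
```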
  • Patent number: 11836960
    Abstract: An object detection device (1) includes an object detection unit (2) that detects an object from an image including the object by neural computation using a CNN. The object detection unit (2) includes: a feature amount extraction unit (2a) that extracts a feature amount of the object from the image; an information acquisition unit (2b) that obtains a plurality of object rectangles indicating candidates for the position of the object on the basis of the feature amount and obtains information and a certainty factor of a category of the object for each of the object rectangles; and an object tag calculation unit (2c) that calculates, for each of the object rectangles, an object tag indicating which object in the image the object rectangle is linked to, on the basis of the feature amount.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 5, 2023
    Assignee: Konica Minolta, Inc.
    Inventor: Fumiaki Sato
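    A small sketch of the object-tag idea from the abstract: each candidate rectangle gets a tag saying which object in the image it is linked to. Here overlap (IoU) between rectangles stands in for the learned, feature-based linking; the threshold is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def assign_object_tags(rectangles, iou_threshold=0.5):
    """Assign each candidate rectangle an object tag: rectangles that overlap
    an already-tagged rectangle inherit its tag, otherwise start a new object."""
    tags, next_tag = [], 0
    for rect in rectangles:
        for prev_rect, prev_tag in zip(rectangles, tags):
            if iou(rect, prev_rect) >= iou_threshold:
                tags.append(prev_tag)
                break
        else:
            tags.append(next_tag)
            next_tag += 1
    return tags

print(assign_object_tags([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]))
```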
  • Patent number: 11836972
    Abstract: A computing system receives, from a client device, an image of a content item uploaded by a user of the client device. The computing system divides the image into one or more overlapping patches. The computing system identifies, via a first machine learning model, one or more distortions present in the image based on the image and the one or more overlapping patches. The computing system determines that the image meets a threshold level of quality. Responsive to the determining, the computing system corrects, via a second machine learning model, the one or more distortions present in the image based on the image and the one or more overlapping patches. Each patch of the one or more overlapping patches is corrected. The computing system reconstructs the image of the content item based on the one or more corrected overlapping patches.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: December 5, 2023
    Assignee: INTUIT INC.
    Inventors: Saisri Padmaja Jonnalagedda, Xiao Xiao
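    A numpy sketch of the patch mechanics in the abstract: extract overlapping patches, correct each with a model, and reconstruct by averaging the overlapping corrections. The patch size, stride, and the default `correct` function are stand-ins, not the claimed machine learning models.

```python
import numpy as np

def correct_in_overlapping_patches(image, patch=8, stride=4, correct=lambda p: p - p.mean()):
    """Divide an image into overlapping patches, correct each patch with a
    stand-in model, and reconstruct by averaging the overlapping corrections."""
    out = np.zeros_like(image, dtype=float)
    counts = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out[y:y + patch, x:x + patch] += correct(image[y:y + patch, x:x + patch].astype(float))
            counts[y:y + patch, x:x + patch] += 1
    return out / np.maximum(counts, 1)

reconstructed = correct_in_overlapping_patches(np.random.rand(32, 32))
print(reconstructed.shape)
```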
  • Patent number: 11836623
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: December 5, 2023
    Assignee: UATC, LLC
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
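    A toy sketch of the clustering step described above: classified sensor points are grouped into object instances and per-point estimates are aggregated into instance-level properties. Simple same-class distance clustering and a centroid stand in for the learned point classification and property estimation.

```python
import numpy as np

def cluster_points_into_instances(points, classes, max_gap=1.0):
    """Greedily cluster same-class sensor points that lie within max_gap of an
    existing cluster member, then aggregate per-point estimates per instance."""
    instances = []  # each instance: {"class": c, "points": [indices]}
    for i, (p, c) in enumerate(zip(points, classes)):
        for inst in instances:
            if inst["class"] == c and any(
                    np.linalg.norm(p - points[j]) <= max_gap for j in inst["points"]):
                inst["points"].append(i)
                break
        else:
            instances.append({"class": c, "points": [i]})
    # Object-instance property estimation, e.g. the centroid of the clustered points.
    for inst in instances:
        inst["centroid"] = points[inst["points"]].mean(axis=0)
    return instances

pts = np.array([[0.0, 0.0], [0.5, 0.1], [5.0, 5.0]])
print(cluster_points_into_instances(pts, classes=["car", "car", "pedestrian"]))
```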
  • Patent number: 11823437
    Abstract: The present disclosure provides a target detection and model training method and apparatus, a device and a storage medium, and relates to the field of artificial intelligence, and in particular, to computer vision and deep learning technologies, which may be applied to smart city and intelligent transportation scenarios. The target detection method includes: performing feature extraction processing on an image to obtain image features of a plurality of stages of the image; performing position coding processing on the image to obtain a position code of the image; obtaining detection results of the plurality of stages of a target in the image based on the image features of the plurality of stages and the position code; and obtaining a target detection result based on the detection results of the plurality of stages.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: November 21, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Xiao Tan, Xiaoqing Ye, Hao Sun
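    A schematic numpy sketch of the flow in the abstract: per-stage image features are combined with a position code of the image, each stage produces a detection result, and the stage results are fused into the final detection. The sinusoidal position code, the power-law "features", and mean fusion are assumptions, not the patented method.

```python
import numpy as np

def multi_stage_detection(image, num_stages=3):
    """Extract features at several stages, add a position code, produce a
    per-stage detection score map, and fuse the stages into one result."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    position_code = np.sin(2 * np.pi * xs) + np.cos(2 * np.pi * ys)   # assumed encoding
    stage_results = []
    for s in range(1, num_stages + 1):
        features = image ** (1.0 / s)                # stand-in for stage-s image features
        stage_results.append(features + 0.1 * position_code)
    # The final target detection result fuses the per-stage detection results.
    return np.mean(stage_results, axis=0)

print(multi_stage_detection(np.random.rand(16, 16)).shape)
```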
  • Patent number: 11823441
    Abstract: A machine learning apparatus for extracting a region from an input image comprises: an inference unit configured to output the region by inference processing for the input image; and an augmentation unit configured to, when learning of the inference unit is performed based on training data, perform data augmentation by increasing the number of input images constituting the training data, wherein the augmentation unit performs the data augmentation such that no region in which the image information held by the input image is defective is included.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: November 21, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tsuyoshi Kobayashi
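    A small sketch of augmentation under the constraint the abstract describes: the number of training images is increased (here by random cropping) while every generated image stays fully inside the source, so no defective (information-free) region is introduced. Crop size and count are assumptions.

```python
import random
import numpy as np

def augment_without_defects(image, crop=64, count=4, seed=0):
    """Increase the number of training images by random cropping, keeping every
    crop fully inside the source image so no region lacks image information."""
    rng = random.Random(seed)
    h, w = image.shape[:2]
    crops = []
    for _ in range(count):
        y = rng.randint(0, h - crop)   # offsets chosen so the crop never
        x = rng.randint(0, w - crop)   # extends past the image border
        crops.append(image[y:y + crop, x:x + crop])
    return crops

print(len(augment_without_defects(np.zeros((128, 128)))))
```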
  • Patent number: 11810359
    Abstract: The present invention belongs to the technical field of computer vision and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module is responsible for segmenting image results and extracting the high-level features required by the data selection module; the data selection module selects a data subset with rich information at the image level and selects pixel blocks to be labeled at the pixel level; and the label propagation module realizes migration from image to video tasks and quickly completes the segmentation result of a video to obtain weakly-supervised data. The present invention can rapidly generate weakly-supervised data sets, reduce the cost of producing the data and optimize the performance of a semantic segmentation network.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: November 7, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xiaopeng Wei, Yu Qiao, Qiang Zhang, Baocai Yin, Haiyin Piao, Zhenjun Du
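    A sketch of the image-level selection step only: unlabeled frames are ranked by how informative they are and the top few are sent for labeling. Prediction-entropy ranking is an assumed stand-in for the data selection module's actual criterion.

```python
import numpy as np

def select_informative_frames(softmax_maps, budget=2):
    """Rank frames by mean per-pixel prediction entropy and return the indices
    of the most uncertain (most informative) frames to send for labeling."""
    entropies = []
    for probs in softmax_maps:                       # probs has shape (classes, H, W)
        entropy = -(probs * np.log(probs + 1e-9)).sum(axis=0).mean()
        entropies.append(entropy)
    return np.argsort(entropies)[::-1][:budget]

maps = [np.random.dirichlet(np.ones(5), size=(8, 8)).transpose(2, 0, 1) for _ in range(6)]
print(select_informative_frames(maps))
```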
  • Patent number: 11793491
    Abstract: The present disclosure provides systems and methods for predicting a disease state of a subject using ultrasound imaging.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: October 24, 2023
    Assignee: Shenzhen Mindray Bio-Medical Electronics Co., Ltd.
    Inventor: Glen W. McLaughlin
  • Patent number: 11798246
    Abstract: An electronic device is provided. The electronic device includes a camera, a display, and a processor configured to obtain a first image including one or more external objects by using the camera, output a three-dimensional (3D) object generated based on attributes related to a face among the one or more external objects by using the display, receive a selection of at least one graphic attribute from a plurality of graphic attributes which can be applied to the 3D object, generate a 3D avatar for the face based on the at least one graphic attribute, and generate a second image including at least one object reflecting a predetermined facial expression or motion using the 3D avatar.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: October 24, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wooyong Lee, Yonggyoo Kim, Byunghyun Min, Dongil Son, Chanhee Yoon, Kihuk Lee, Cheolho Cheong
  • Patent number: 11790644
    Abstract: Techniques and apparatus for generating dense natural language descriptions for video content are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to receive a source video comprising a plurality of frames, determine a plurality of regions for each of the plurality of frames, generate at least one region-sequence connecting the determined plurality of regions, apply a language model to the at least one region-sequence to generate description information comprising a description of at least a portion of content of the source video. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: October 17, 2023
    Assignee: INTEL CORPORATION
    Inventors: Yurong Chen, Jianguo Li, Zhou Su, Zhiqiang Shen
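    A toy sketch of the region-sequence idea: per-frame regions are connected across frames into one region-sequence, which is then handed to a language model to produce a description. The greedy overlap-based linking and the template "language model" are illustrative assumptions.

```python
def link_region_sequence(frames_regions):
    """Greedily connect the best-overlapping region in each successive frame
    to form one region-sequence (regions are (x1, y1, x2, y2) boxes)."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    sequence = [frames_regions[0][0]]
    for regions in frames_regions[1:]:
        sequence.append(max(regions, key=lambda r: iou(r, sequence[-1])))
    return sequence

def describe(region_sequence):
    # Stand-in for the language model: a template over the tracked region-sequence.
    return f"An object tracked across {len(region_sequence)} frames near {region_sequence[0][:2]}."

frames = [[(0, 0, 10, 10), (40, 40, 50, 50)], [(1, 1, 11, 11), (42, 42, 52, 52)]]
print(describe(link_region_sequence(frames)))
```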
  • Patent number: 11790549
    Abstract: A system includes a neural network implemented by one or more computers, in which the neural network includes an image depth prediction neural network and a camera motion estimation neural network. The neural network is configured to receive a sequence of images. The neural network is configured to process each image in the sequence of images using the image depth prediction neural network to generate, for each image, a respective depth output that characterizes a depth of the image, and to process a subset of images in the sequence of images using the camera motion estimation neural network to generate a camera motion output that characterizes the motion of a camera between the images in the subset. The image depth prediction neural network and the camera motion estimation neural network have been jointly trained using an unsupervised learning technique.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Reza Mahjourian, Martin Wicke, Anelia Angelova
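    The unsupervised signal for jointly training depth and camera-motion networks is typically a photometric reconstruction error: a neighboring frame is warped using the predicted depth and motion and compared with the target frame. A 1-D toy version of that loss, with the depth and motion predictions replaced by fixed stand-in values, is sketched below.

```python
import numpy as np

def photometric_loss(frame_a, frame_b, depth, camera_shift):
    """Toy 1-D version of the unsupervised training signal: warp frame_b toward
    frame_a using predicted depth and camera motion, then compare intensities."""
    positions = np.arange(len(frame_a))
    # Closer pixels (smaller depth) move more under the same camera translation.
    warped_positions = np.clip(positions + camera_shift / depth, 0, len(frame_a) - 1)
    warped_b = np.interp(positions, warped_positions, frame_b)
    return np.abs(frame_a - warped_b).mean()

frame_a = np.sin(np.linspace(0, 3, 50))
depth = np.full(50, 2.0)                     # stand-in for the depth network's output
# Simulate the next frame as frame_a shifted by the motion implied by depth and shift.
frame_b = np.interp(np.arange(50) + 1.0 / depth, np.arange(50), frame_a)
print(photometric_loss(frame_a, frame_b, depth, camera_shift=1.0))   # small when consistent
```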
  • Patent number: 11783508
    Abstract: A system comprises an encoder configured to compress and encode data for a three-dimensional mesh using a video encoding technique. To compress the three-dimensional mesh, the encoder determines sub-meshes and, for each sub-mesh, texture patches and geometry patches. Also, the encoder determines patch connectivity information and patch texture coordinates for the texture patches and geometry patches. The texture patches and geometry patches are packed into video image frames and encoded using a video codec. Additionally, the encoder determines boundary stitching information for the sub-meshes. A decoder receives a bit stream as generated by the encoder and reconstructs the three-dimensional mesh.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventors: Khaled Mammou, Alexandros Tourapis, Jungsun Kim
  • Patent number: 11781426
    Abstract: A line of coherent radiation is projected on a bed on which one or more particles are located, the one or more flowing particles produced as a result of a downhole operation in the borehole. An image of the bed is captured, wherein one or more particles on the bed deflect the line of coherent radiation. One or more image edges are detected based on the captured image. A subset of the one or more image edges is identified as corresponding to edges of the one or more particles, based in part on changes in intensity of the captured image. Information about the one or more particles, including information about size, shape, and volume, can be determined from the one or more image edges corresponding to the edges of the one or more particles.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 10, 2023
    Assignee: Halliburton Energy Services, Inc.
    Inventors: Abhijit Kulkarni, Prashant Shekhar
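    A minimal sketch of the edge criterion the abstract names: candidate particle edges are located where the captured intensity changes sharply along the projected laser line. The simulated intensity row and the gradient threshold are assumptions.

```python
import numpy as np

def particle_edges(intensity_row, gradient_threshold=0.2):
    """Find candidate particle edges along one image row of the projected laser
    line: positions where the intensity changes sharply."""
    gradient = np.abs(np.diff(intensity_row))
    return np.where(gradient > gradient_threshold)[0]

# Simulated row: flat bed with one particle deflecting the line between pixels 20 and 30.
row = np.zeros(64)
row[20:30] = 1.0
edges = particle_edges(row)
print(edges, "-> particle width:", edges[-1] - edges[0], "pixels")
```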
  • Patent number: 11775614
    Abstract: An apparatus includes an interface and a processor. The interface may be configured to receive pixel data from a capture device. The processor may be configured to (i) process the pixel data arranged as one or more video frames, (ii) extract features from the one or more video frames, (iii) generate fused maps for at least one of disparity and optical flow in response to the features extracted, (iv) generate regenerated image frames by performing warping on a first subset of the video frames based on (a) the fused maps and (b) first parameters, (v) perform a classification of a sample image frame based on second parameters, and (vi) update the first parameters and the second parameters in response to whether the classification is correct. The classification generally comprises indicating whether the sample image frame is one of a second subset of the video frames or one of the regenerated image frames.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: October 3, 2023
    Assignee: Ambarella International LP
    Inventor: Zhikan Yang
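    One reading of this abstract is a self-supervised scheme: frames regenerated by warping one subset of the video are pitted against untouched frames from another subset, and whether the classifier tells them apart correctly drives updates to both parameter sets. A 1-D toy of that reading follows; the integer-shift warp and linear classifier are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 32))       # pixel data arranged as 1-D "video frames"
flow = np.full(32, 2.0)                  # stand-in for a fused optical-flow map

def warp(frame, flow):
    """Regenerate a frame by shifting it according to the (integer) flow map."""
    return np.roll(frame, int(flow[0]))

def classify(frame, w):
    """Score whether a frame looks real (>0) or regenerated (<0)."""
    return float(frame @ w)

w = rng.normal(size=32)                              # "second parameters" (classifier)
regenerated = [warp(f, flow) for f in frames[:5]]    # first subset of frames, warped
real = list(frames[5:])                              # second subset, untouched

correct = sum(classify(f, w) > 0 for f in real) + sum(classify(f, w) <= 0 for f in regenerated)
print(f"correctly classified {correct} of {len(real) + len(regenerated)} frames")
# In training, the warping (first) parameters and classifier (second) parameters
# would both be updated based on whether these classifications are correct.
```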
  • Patent number: 11775612
    Abstract: In order to provide a learning data generating apparatus that is able to efficiently suppress erroneous detections, the learning data generating apparatus includes a data acquisition unit configured to acquire learning data including teacher data, and a generation unit configured to generate generated learning data based on the learning data and a generating condition, wherein the generation unit converts teacher data of a positive instance into teacher data of a negative instance according to a preset rule when generating the generated learning data.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: October 3, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Shinji Yamamoto, Takato Kimura
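    A small sketch of the conversion step: when generated learning data no longer satisfies a preset rule, the positive teacher label is flipped to negative. The size-based rule used here is an assumption standing in for whatever rule the generating condition defines.

```python
def generate_learning_data(samples, scale, min_visible_size=10):
    """Generate augmented learning data and, per a preset rule, relabel positive
    instances as negative when the generated sample no longer shows the target
    clearly (here: when its scaled bounding box becomes too small)."""
    generated = []
    for image_id, box_size, label in samples:          # teacher data: (id, size, label)
        new_size = box_size * scale
        new_label = label if new_size >= min_visible_size else "negative"
        generated.append((image_id, new_size, new_label))
    return generated

samples = [("img0", 40, "positive"), ("img1", 12, "positive")]
print(generate_learning_data(samples, scale=0.5))
```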
  • Patent number: 11775788
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also includes providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: October 3, 2023
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
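    A numpy sketch of the registration step: the geometric reference object supplies an origin, an axis direction, and a scale, and the measures saved for the fiducial are the feature points expressed in that coordinate system. The specific reference (origin plus one known axis point) is an assumed simplification.

```python
import numpy as np

def register_visual_feature(feature_points_px, ref_origin_px, ref_x_axis_px):
    """Express feature points in the coordinate system derived from a geometric
    reference object: origin at ref_origin_px, unit length and x-direction given
    by ref_x_axis_px. Returns the measures to store for the fiducial."""
    origin = np.asarray(ref_origin_px, dtype=float)
    x_axis = np.asarray(ref_x_axis_px, dtype=float) - origin
    unit = np.linalg.norm(x_axis)
    x_dir = x_axis / unit
    y_dir = np.array([-x_dir[1], x_dir[0]])            # perpendicular axis
    pts = np.asarray(feature_points_px, dtype=float) - origin
    return np.stack([pts @ x_dir, pts @ y_dir], axis=1) / unit

# At least four non-colinear feature points, in pixel coordinates.
points = [(120, 80), (200, 90), (180, 160), (110, 150)]
print(register_visual_feature(points, ref_origin_px=(100, 100), ref_x_axis_px=(150, 100)))
```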
  • Patent number: 11770551
    Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
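    A toy sketch of one piece of this pipeline: a vertex tracked in 2D along the plane underlying the bounding volume is lifted to 3D by back-projecting its pixel coordinates onto a known ground plane with a pinhole model. The camera intrinsics, camera height, and pixel-space plane shift are assumptions.

```python
import numpy as np

def lift_to_3d(uv, camera_height=1.5, focal=500.0, principal=(320.0, 240.0)):
    """Back-project a 2D vertex (u, v) onto a ground plane camera_height metres
    below a pinhole camera at the origin, giving its 3D coordinates."""
    u, v = uv
    ray = np.array([(u - principal[0]) / focal, (v - principal[1]) / focal, 1.0])
    # Only pixels below the horizon (ray[1] > 0 with image y pointing down)
    # intersect the ground plane in front of the camera.
    scale = camera_height / ray[1]
    return ray * scale

first_uv = (400.0, 300.0)                  # vertex's 2D coordinates in the first image
plane_shift = np.array([10.0, 0.0])        # tracked motion along the plane, in pixels
second_uv = tuple(np.array(first_uv) + plane_shift)

print("first 3D: ", lift_to_3d(first_uv))
print("second 3D:", lift_to_3d(second_uv))
```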
  • Patent number: 11769254
    Abstract: Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient. The system includes a communication interface configured to receive the medical image acquired by an image acquisition device. The system further includes at least one processor. The at least one processor is configured to detect a medical condition based on the medical image and automatically generate text information describing the medical condition. The at least one processor is further configured to construct the diagnosis report, where the diagnosis report includes at least one image view showing the medical condition and a report view including the text information describing the medical condition. The system also includes a display configured to display the diagnosis report.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: September 26, 2023
    Assignee: KEYA MEDICAL TECHNOLOGY CO., LTD.
    Inventors: Qi Song, Hanbo Chen, Zheng Te, Youbing Yin, Junjie Bai, Shanhui Sun
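    A minimal sketch of the report-assembly step only: a detected condition is turned into templated text and combined with an image view into one report structure. The condition detector is omitted, and the field names and report template are assumptions.

```python
def build_diagnosis_report(medical_image, condition, confidence):
    """Assemble a diagnosis report containing an image view showing the detected
    condition and a report view with auto-generated text describing it."""
    text = (f"Findings: {condition} detected with {confidence:.0%} confidence. "
            f"Impression: correlate clinically.")
    return {
        "image_view": {"image": medical_image, "highlight": condition},
        "report_view": {"text": text},
    }

report = build_diagnosis_report("chest_ct_001.dcm", condition="pulmonary nodule", confidence=0.87)
print(report["report_view"]["text"])
```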