Patents Examined by Amir Alavi
  • Patent number: 11886799
    Abstract: The presently disclosed inventive concepts are directed to systems, computer program products, and methods for intelligent screen automation. According to one embodiment, a method includes: determining one or more logical relationships between textual elements and non-textual elements of one or more images of a user interface; building a hierarchy comprising some or all of the non-textual elements and some or all of the textual elements in order to form a data structure representing functionality of the user interface; and outputting the data structure to a memory.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: January 30, 2024
    Assignee: KOFAX, INC.
    Inventors: Vadim Alexeev, Benjamin De Coninck Owe
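    As an illustration of the kind of hierarchy the abstract describes (not the patented implementation), the following Python sketch assumes each detected textual or non-textual element carries a bounding box and nests elements by containment; the Element class and its fields are hypothetical.
    ```python
    # Minimal sketch (not the patented implementation): group detected UI elements
    # into a containment hierarchy using their bounding boxes.
    from dataclasses import dataclass, field

    @dataclass
    class Element:
        kind: str                     # "textual" or "non-textual"
        box: tuple                    # (x0, y0, x1, y1) in screen coordinates
        children: list = field(default_factory=list)

    def contains(outer, inner):
        """True if box `outer` fully encloses box `inner`."""
        return (outer[0] <= inner[0] and outer[1] <= inner[1] and
                outer[2] >= inner[2] and outer[3] >= inner[3])

    def build_hierarchy(elements):
        """Attach each element to the smallest element that encloses it;
        elements with no enclosing parent become roots of the data structure."""
        roots = []
        for e in elements:
            parents = [p for p in elements if p is not e and contains(p.box, e.box)]
            if parents:
                # choose the tightest enclosing element as the parent
                parent = min(parents, key=lambda p: (p.box[2] - p.box[0]) * (p.box[3] - p.box[1]))
                parent.children.append(e)
            else:
                roots.append(e)
        return roots

    # Example: a text label sitting inside a button, both inside a window.
    window = Element("non-textual", (0, 0, 800, 600))
    button = Element("non-textual", (100, 100, 220, 140))
    label  = Element("textual",     (110, 110, 210, 130))
    tree = build_hierarchy([window, button, label])
    print(len(tree), len(tree[0].children))   # 1 root (window), 1 child (button)
    ```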
  • Patent number: 11882292
    Abstract: The present invention relates to an image encoding and decoding technique, and more particularly, to an image encoder and decoder using unidirectional prediction. The image encoder includes a dividing unit to divide a macro block into a plurality of sub-blocks, a unidirectional application determining unit to determine whether an identical prediction mode is applied to each of the plurality of sub-blocks, and a prediction mode determining unit to determine a prediction mode with respect to each of the plurality of sub-blocks based on a determined result of the unidirectional application determining unit.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: January 23, 2024
    Assignees: Electronics and Telecommunications Research Institute, Kwangwoon University Industry-Academic Collaboration Foundation, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Hae Chul Choi, Se Yoon Jeong, Sung-Chang Lim, Jin Soo Choi, Jin Woo Hong, Dong Gyu Sim, Seoung-Jun Oh, Chang-Beom Ahn, Gwang Hoon Park, Seung Ryong Kook, Sea-Nae Park, Kwang-Su Jeong
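    The sketch below illustrates one way the unidirectional decision could be modeled (not the codec's actual syntax or cost model): compare the best single shared intra mode against individually chosen modes and assign sub-block modes accordingly; the mode names and cost values are hypothetical.
    ```python
    # Illustrative sketch only: decide whether one identical intra prediction mode
    # should be applied to all sub-blocks of a macroblock, then assign modes.
    def decide_unidirectional(per_block_costs):
        """per_block_costs: list of {mode: cost} dicts, one per sub-block.
        Compare the best total cost of one shared mode against per-block choices."""
        modes = per_block_costs[0].keys()
        shared_costs = {m: sum(c[m] for c in per_block_costs) for m in modes}
        best_shared = min(shared_costs, key=shared_costs.get)
        individual_total = sum(min(c.values()) for c in per_block_costs)
        return shared_costs[best_shared] <= individual_total, best_shared

    def assign_modes(per_block_costs):
        """Identical mode for every sub-block if unidirectional application wins,
        otherwise each sub-block keeps its individually determined mode."""
        unidirectional, shared_mode = decide_unidirectional(per_block_costs)
        if unidirectional:
            return [shared_mode] * len(per_block_costs)
        return [min(c, key=c.get) for c in per_block_costs]

    # Four 8x8 sub-blocks of a 16x16 macroblock, with hypothetical mode costs.
    costs = [{"DC": 10, "VERTICAL": 12}, {"DC": 11, "VERTICAL": 12},
             {"DC": 9,  "VERTICAL": 15}, {"DC": 10, "VERTICAL": 11}]
    print(assign_modes(costs))   # one identical mode if that is not costlier overall
    ```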
  • Patent number: 11882299
    Abstract: A predictive contrastive representation method for multivariate time-series data processing includes: mapping temporal coding information at a current moment and future situational information by using a logarithmic bilinear model to obtain a similarity; training the similarity according to a noise contrastive estimation method and prediction situational label data, and constructing, based on a training result, a predictive contrastive loss function of the temporal coding information at the current moment and the future situational information; sampling the prediction situational label data based on a corresponding optimal loss in the predictive contrastive loss function, optimizing the predictive contrastive loss function by using a direct proportion property between the sampling probability and the similarity, constructing mutual information between the temporal coding information at the current moment and the future situational information based on the optimized predictive contrastive loss function, and pe
    Type: Grant
    Filed: June 7, 2023
    Date of Patent: January 23, 2024
    Assignee: NATIONAL UNIVERSITY OF DEFENSE TECHNOLOGY
    Inventors: Yanghe Feng, Yulong Zhang, Longfei Zhang
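    A minimal sketch of the general technique named in the abstract (a log-bilinear similarity trained with a noise-contrastive, InfoNCE-style loss), not the patented method itself; all tensor shapes and the bilinear parameter W are illustrative assumptions.
    ```python
    import torch
    import torch.nn.functional as F

    def log_bilinear_scores(c_t, z_future, W):
        """Similarity between the temporal code at the current moment (c_t) and
        candidate future situational codes (z_future) via a bilinear form c^T W z."""
        return (c_t @ W) @ z_future.t()          # shape: (batch, num_candidates)

    def info_nce_loss(c_t, z_positive, z_negatives, W):
        """Contrastive loss: the true future code is the positive; sampled codes are noise."""
        candidates = torch.cat([z_positive.unsqueeze(1), z_negatives], dim=1)  # (B, 1+K, D)
        scores = torch.einsum('bd,de,bke->bk', c_t, W, candidates)             # (B, 1+K)
        labels = torch.zeros(c_t.size(0), dtype=torch.long)                    # positive at index 0
        return F.cross_entropy(scores, labels)

    B, D, K = 8, 32, 15
    W = torch.randn(D, D, requires_grad=True)    # log-bilinear model parameter
    c_t = torch.randn(B, D)                      # temporal coding at the current moment
    z_pos = torch.randn(B, D)                    # true future situational code
    z_neg = torch.randn(B, K, D)                 # noise samples
    loss = info_nce_loss(c_t, z_pos, z_neg, W)
    loss.backward()
    print(float(loss))
    ```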
  • Patent number: 11861728
    Abstract: Techniques for building and managing data models are provided. According to certain aspects, systems and methods may enable a user to input parameters associated with building one or more data models, including parameters associated with sampling, binning, and other factors. The systems and methods may automatically generate program code that corresponds to the inputted parameters and display the program code for review by the user. The systems and methods may build the data models and generate charts and plots depicting aspects of the data models. Additionally, the systems and methods may combine data models and select champion data models.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: January 2, 2024
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Weixin Wu, Philip Sangpil Moon, Scott Farris
  • Patent number: 11860838
    Abstract: A data labeling method, apparatus and system are provided. The method includes: sampling a data source according to an evaluation task for the data source to obtain sampled data; generating a labeling task from the sampled data; sending the labeling task to a labeling device; and receiving a labeled result of the labeling task from the labeling device. As such, an automatic evaluation of data can be implemented by using the evaluation task, and evaluation efficiency is improved.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: January 2, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Guanchao Wang, Yuqian Jiang, Shuhao Zhang, Tao Jiang, Siqi Wang
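    A rough sketch of the described flow with hypothetical interfaces: sample the data source according to an evaluation task, wrap the sample as a labeling task, send it to a (stand-in) labeling device, and aggregate the labeled result.
    ```python
    import random

    def sample_data_source(data_source, sample_size, seed=0):
        """Sample the data source for the evaluation task."""
        random.seed(seed)
        return random.sample(data_source, min(sample_size, len(data_source)))

    def make_labeling_task(task_id, sampled_data):
        """Generate a labeling task from the sampled data."""
        return {"task_id": task_id, "items": sampled_data, "status": "pending"}

    def labeling_device(task):
        """Stand-in for the remote labeling device: returns a label per item."""
        return {"task_id": task["task_id"],
                "labels": [{"item": x, "label": "correct"} for x in task["items"]]}

    # Evaluation task: automatically check a sampled fraction of the data source.
    data_source = [f"record-{i}" for i in range(100)]
    task = make_labeling_task("eval-001", sample_data_source(data_source, 10))
    result = labeling_device(task)
    accuracy = sum(r["label"] == "correct" for r in result["labels"]) / len(result["labels"])
    print(f"evaluated {len(result['labels'])} samples, accuracy {accuracy:.0%}")
    ```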
  • Patent number: 11847572
    Abstract: A method for detecting fraudulent interactions may include receiving interaction data, including a first plurality of interactions with first fraud labels and a second plurality of interactions without fraud labels. Second fraud label data for each of the second plurality of interactions may be generated with a first neural network (e.g., classifying whether each interaction is fraudulent or not). Generated interaction data and generated fraud label data may be generated with a second neural network. Discrimination data for each of the second plurality of interactions and generated interactions may be generated with a third neural network (e.g., classifying whether the respective interaction is real or not). Error data may be determined based on the discrimination data (e.g., whether the respective interaction is correctly classified). At least one of the neural networks may be trained based on the error data. A system and computer program product are also disclosed.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: December 19, 2023
    Assignee: Visa International Service Association
    Inventors: Hangqi Zhao, Fan Yang, Chiranjeet Chetia, Claudia Carolina Barcenas Cardenas
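    A loose sketch of the three-network arrangement the abstract outlines (a labeling classifier, a generator of synthetic interactions, and a real-versus-generated discriminator); the dimensions, architectures, and single training step are illustrative assumptions, not the patented system.
    ```python
    import torch
    import torch.nn as nn

    n_features, noise_dim = 16, 8
    classifier    = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    generator     = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, n_features + 1))
    discriminator = nn.Sequential(nn.Linear(n_features + 1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    labeled_x   = torch.randn(64, n_features)           # first plurality: has fraud labels
    labeled_y   = torch.randint(0, 2, (64, 1)).float()
    unlabeled_x = torch.randn(64, n_features)           # second plurality: no fraud labels

    # 0) The first network is fit on the labeled interactions (one illustrative step).
    clf_loss = nn.BCELoss()(classifier(labeled_x), labeled_y)

    # 1) The first network generates fraud labels for the unlabeled interactions.
    pseudo_y = (classifier(unlabeled_x) > 0.5).float()

    # 2) The second network generates synthetic interaction data plus fraud labels.
    generated = generator(torch.randn(64, noise_dim))

    # 3) The third network scores whether an (interaction, label) pair looks real.
    real_pairs = torch.cat([unlabeled_x, pseudo_y], dim=1)
    disc_real = discriminator(real_pairs)
    disc_fake = discriminator(generated)

    # 4) Error data from the discrimination drives training of the networks.
    bce = nn.BCELoss()
    d_loss = bce(disc_real, torch.ones_like(disc_real)) + bce(disc_fake, torch.zeros_like(disc_fake))
    (clf_loss + d_loss).backward()
    print(float(d_loss))
    ```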
  • Patent number: 11847549
    Abstract: Provided are an optical device which is capable of optically implementing an activation function of an artificial neural network and an optical neural network apparatus which includes the optical device. The optical device may include: a beam splitter splitting incident light into first light and second light; an image sensor disposed to sense the first light; an optical shutter configured to transmit or block the second light; and a controller controlling operations of the optical shutter, based on an intensity of the first light measured by the image sensor.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: December 19, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Duhyun Lee, Jaeduck Jang
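    A toy numerical model of the control loop described above, with an assumed split ratio and threshold: the sensed intensity of the first light decides whether the shutter transmits the second light, giving a thresholded, ReLU-like optical activation on the transmitted beam.
    ```python
    def shutter_transmission(measured_intensity, threshold=0.2):
        """Controller rule (an assumption): block the second light below the
        threshold, transmit it above the threshold."""
        return 0.0 if measured_intensity < threshold else 1.0

    def optical_activation(incident, split_ratio=0.5, threshold=0.2):
        first_light  = split_ratio * incident          # sensed by the image sensor
        second_light = (1 - split_ratio) * incident    # passes through the shutter
        return second_light * shutter_transmission(first_light, threshold)

    for x in [0.1, 0.3, 0.8, 2.0]:
        print(x, optical_activation(x))   # small inputs are blocked, larger ones pass
    ```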
  • Patent number: 11836960
    Abstract: An object detection device (1) includes an object detection unit (2) that detects an object from an image including the object by neural computation using a CNN. The object detection unit (2) includes: a feature amount extraction unit (2a) that extracts a feature amount of the object from the image; an information acquisition unit (2b) that obtains a plurality of object rectangles indicating candidates for the position of the object on the basis of the feature amount and obtains information and a certainty factor of a category of the object for each of the object rectangles; and an object tag calculation unit (2c) that calculates, for each of the object rectangles, an object tag indicating which object in the image the object rectangle is linked to, on the basis of the feature amount.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 5, 2023
    Assignee: Konica Minolta, Inc.
    Inventor: Fumiaki Sato
  • Patent number: 11836972
    Abstract: A computing system receives, from a client device, an image of a content item uploaded by a user of the client device. The computing system divides the image into one or more overlapping patches. The computing system identifies, via a first machine learning model, one or more distortions present in the image based on the image and the one or more overlapping patches. The computing system determines that the image meets a threshold level of quality. Responsive to the determining, the computing system corrects, via a second machine learning model, the one or more distortions present in the image based on the image and the one or more overlapping patches. Each patch of the one or more overlapping patches is corrected. The computing system reconstructs the image of the content item based on the one or more corrected overlapping patches.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: December 5, 2023
    Assignee: INTUIT INC.
    Inventors: Saisri Padmaja Jonnalagedda, Xiao Xiao
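    A sketch of the patch pipeline only, with the two machine learning models replaced by trivial stand-ins: split the image into overlapping patches, "correct" each patch, and reconstruct the image by averaging the overlaps; patch size and stride are assumptions.
    ```python
    import numpy as np

    def extract_patches(img, size=64, stride=48):
        """Divide the image into overlapping patches with their top-left offsets."""
        patches = []
        h, w = img.shape[:2]
        for y in range(0, max(h - size, 0) + 1, stride):
            for x in range(0, max(w - size, 0) + 1, stride):
                patches.append(((y, x), img[y:y + size, x:x + size]))
        return patches

    def correct_patch(patch):
        """Placeholder for the second model: here, simple contrast normalization."""
        p = patch.astype(np.float64)
        return (p - p.min()) / (p.max() - p.min() + 1e-8)

    def reconstruct(shape, patches, size=64):
        """Rebuild the image by averaging corrected patches over their overlaps."""
        acc = np.zeros(shape, dtype=np.float64)
        weight = np.zeros(shape, dtype=np.float64)
        for (y, x), p in patches:
            acc[y:y + size, x:x + size] += p
            weight[y:y + size, x:x + size] += 1.0
        return acc / np.maximum(weight, 1.0)

    img = np.random.rand(128, 128)
    corrected = [((y, x), correct_patch(p)) for (y, x), p in extract_patches(img)]
    out = reconstruct(img.shape, corrected)
    print(out.shape)
    ```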
  • Patent number: 11836623
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: December 5, 2023
    Assignee: UATC, LLC
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
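    An illustrative-only sketch of clustering classified sensor points into object instances and summarizing each instance from its members; the greedy radius-based grouping and the chosen instance properties (centroid and point count) are assumptions, not the disclosed technique.
    ```python
    import numpy as np

    def cluster_points(points, labels, radius=1.0):
        """Greedy clustering: points share an instance if they have the same class
        label and lie within `radius` of an existing member of that instance."""
        instance_ids = -np.ones(len(points), dtype=int)
        next_id = 0
        for i in range(len(points)):
            if instance_ids[i] >= 0:
                continue
            instance_ids[i] = next_id
            members = [i]
            for j in range(i + 1, len(points)):
                if instance_ids[j] < 0 and labels[j] == labels[i]:
                    if np.linalg.norm(points[j] - points[members], axis=1).min() <= radius:
                        instance_ids[j] = next_id
                        members.append(j)
            next_id += 1
        return instance_ids

    points = np.array([[0.0, 0.0], [0.5, 0.2], [10.0, 10.0], [10.4, 9.8], [0.1, 0.4]])
    labels = np.array([1, 1, 1, 1, 2])      # per-point classification (e.g., 1 = vehicle, 2 = pedestrian)
    ids = cluster_points(points, labels)
    for k in np.unique(ids):
        cluster = points[ids == k]
        print(f"instance {k}: {len(cluster)} points, centroid {cluster.mean(axis=0)}")
    ```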
  • Patent number: 11823437
    Abstract: The present disclosure provides a target detection and model training method and apparatus, a device and a storage medium, and relates to the field of artificial intelligence, and in particular, to computer vision and deep learning technologies, which may be applied to smart city and intelligent transportation scenarios. The target detection method includes: performing feature extraction processing on an image to obtain image features of a plurality of stages of the image; performing position coding processing on the image to obtain a position code of the image; obtaining detection results of the plurality of stages of a target in the image based on the image features of the plurality of stages and the position code; and obtaining a target detection result based on the detection results of the plurality of stages.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: November 21, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Xiao Tan, Xiaoqing Ye, Hao Sun
  • Patent number: 11823441
    Abstract: A machine learning apparatus for extracting a region from an input image comprises: an inference unit configured to output the region by inference processing for the input image; and an augmentation unit configured to, when learning of the inference unit is performed based on training data, perform data augmentation by increasing the number of input images constituting the training data, wherein the augmentation unit performs the data augmentation such that a region in which the image information held by the input image is defective is not included.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: November 21, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tsuyoshi Kobayashi
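    A hedged illustration of the constraint described above: when augmenting by random cropping, sample only windows that lie entirely inside the source image, so no defective (information-free) region enters the enlarged training set. Crop sizes and counts are assumptions.
    ```python
    import numpy as np

    def augment_without_defects(image, crop_h, crop_w, n_copies, rng=None):
        """Increase the number of training images with random crops whose windows
        are always fully covered by valid pixels (no out-of-bounds padding)."""
        if rng is None:
            rng = np.random.default_rng(0)
        h, w = image.shape[:2]
        assert crop_h <= h and crop_w <= w, "crop must fit inside the image"
        crops = []
        for _ in range(n_copies):
            top  = rng.integers(0, h - crop_h + 1)   # valid offsets only
            left = rng.integers(0, w - crop_w + 1)
            crops.append(image[top:top + crop_h, left:left + crop_w].copy())
        return crops

    image = np.arange(100 * 120).reshape(100, 120)
    augmented = augment_without_defects(image, 64, 64, n_copies=8)
    print(len(augmented), augmented[0].shape)   # 8 crops, each 64x64, all defect-free
    ```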
  • Patent number: 11810359
    Abstract: The present invention belongs to the technical field of computer vision, and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module is responsible for producing image segmentation results and extracting the high-level features required by the data selection module; the data selection module selects a data subset with rich information at the image level, and selects pixel blocks to be labeled at the pixel level; and the label propagation module transfers results from the image task to the video task and quickly completes the segmentation of a video to obtain weakly-supervised data. The present invention can rapidly generate weakly-supervised data sets, reduce the cost of producing the data, and optimize the performance of a semantic segmentation network.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: November 7, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xiaopeng Wei, Yu Qiao, Qiang Zhang, Baocai Yin, Haiyin Piao, Zhenjun Du
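    An informal sketch of the uncertainty-based selection idea at both levels, with the segmentation network replaced by random per-pixel class probabilities; the entropy criterion, block size, and class count are assumptions.
    ```python
    import numpy as np

    def entropy(prob):                      # prob: (..., num_classes)
        return -(prob * np.log(prob + 1e-12)).sum(axis=-1)

    def select_images(prob_maps, k):
        """Image level: pick the k frames with the highest mean predictive entropy."""
        scores = [entropy(p).mean() for p in prob_maps]
        return np.argsort(scores)[-k:]

    def select_pixel_blocks(prob_map, block=32, k=4):
        """Pixel level: within a frame, pick the k most uncertain block x block regions."""
        h, w, _ = prob_map.shape
        blocks = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                blocks.append(((y, x), entropy(prob_map[y:y + block, x:x + block]).mean()))
        blocks.sort(key=lambda b: b[1], reverse=True)
        return [pos for pos, _ in blocks[:k]]

    rng = np.random.default_rng(0)
    frames = [rng.dirichlet(np.ones(19), size=(128, 128)) for _ in range(10)]  # 19-class maps
    picked = select_images(frames, k=3)
    print(picked, select_pixel_blocks(frames[picked[-1]], block=32, k=4))
    ```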
  • Patent number: 11793491
    Abstract: The present disclosure provides systems and methods for predicting a disease state of a subject using ultrasound imaging.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: October 24, 2023
    Assignee: Shenzhen Mindray Bio-Medical Electronics Co., Ltd.
    Inventor: Glen W. McLaughlin
  • Patent number: 11798246
    Abstract: An electronic device is provided. The electronic device includes a camera, a display, and a processor configured to obtain a first image including one or more external objects by using the camera, output, using the display, a three-dimensional (3D) object generated based on attributes related to a face among the one or more external objects, receive a selection of at least one graphic attribute from a plurality of graphic attributes which can be applied to the 3D object, generate a 3D avatar for the face based on the at least one graphic attribute, and generate a second image including at least one object reflecting a predetermined facial expression or motion using the 3D avatar.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: October 24, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wooyong Lee, Yonggyoo Kim, Byunghyun Min, Dongil Son, Chanhee Yoon, Kihuk Lee, Cheolho Cheong
  • Patent number: 11790644
    Abstract: Techniques and apparatus for generating dense natural language descriptions for video content are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to receive a source video comprising a plurality of frames, determine a plurality of regions for each of the plurality of frames, generate at least one region-sequence connecting the determined plurality of regions, apply a language model to the at least one region-sequence to generate description information comprising a description of at least a portion of content of the source video. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: October 17, 2023
    Assignee: INTEL CORPORATION
    Inventors: Yurong Chen, Jianguo Li, Zhou Su, Zhiqiang Shen
  • Patent number: 11790549
    Abstract: A system includes a neural network implemented by one or more computers, in which the neural network includes an image depth prediction neural network and a camera motion estimation neural network. The neural network is configured to receive a sequence of images. The neural network is configured to process each image in the sequence of images using the image depth prediction neural network to generate, for each image, a respective depth output that characterizes a depth of the image, and to process a subset of images in the sequence of images using the camera motion estimation neural network to generate a camera motion output that characterizes the motion of a camera between the images in the subset. The image depth prediction neural network and the camera motion estimation neural network have been jointly trained using an unsupervised learning technique.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Reza Mahjourian, Martin Wicke, Anelia Angelova
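    A heavily reduced sketch of the joint, unsupervised setup: one network predicts per-pixel depth from a single frame, another predicts camera motion from a frame pair, and both are updated by one self-supervised objective. The view-synthesis (photometric warping) loss that actually couples the two outputs is omitted for brevity; a depth-smoothness regularizer and a small motion penalty stand in for it, and all architectures are assumptions.
    ```python
    import torch
    import torch.nn as nn

    class DepthNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())
        def forward(self, frame):                     # (B, 3, H, W) -> (B, 1, H, W) depth
            return self.net(frame)

    class MotionNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6))
        def forward(self, frame_a, frame_b):          # -> (B, 6) translation + rotation
            return self.net(torch.cat([frame_a, frame_b], dim=1))

    depth_net, motion_net = DepthNet(), MotionNet()
    opt = torch.optim.Adam(list(depth_net.parameters()) + list(motion_net.parameters()), lr=1e-4)

    frame_t, frame_t1 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    depth = depth_net(frame_t)
    motion = motion_net(frame_t, frame_t1)

    # Stand-in objective: the omitted photometric warp loss would couple depth and
    # motion; here a depth-smoothness term plus a small motion penalty is used so
    # that gradients reach both networks in one joint step.
    smoothness = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs().mean() + \
                 (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs().mean()
    loss = smoothness + 1e-3 * motion.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(float(loss))
    ```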
  • Patent number: 11783508
    Abstract: A system comprises an encoder configured to compress and encode data for a three-dimensional mesh using a video encoding technique. To compress the three-dimensional mesh, the encoder determines sub-meshes and, for each sub-mesh, texture patches and geometry patches. The encoder also determines patch connectivity information and patch texture coordinates for the texture patches and geometry patches. The texture patches and geometry patches are packed into video image frames and encoded using a video codec. Additionally, the encoder determines boundary stitching information for the sub-meshes. A decoder receives a bit stream as generated by the encoder and reconstructs the three-dimensional mesh.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventors: Khaled Mammou, Alexandros Tourapis, Jungsun Kim
  • Patent number: 11781426
    Abstract: A line of coherent radiation is projected on a bed on which one or more particles is located, the one or more particles having been produced as a result of a downhole operation in a borehole. An image of the bed is captured wherein one or more particles on the bed deflect the line of coherent radiation. One or more image edges is detected based on the captured image. A subset of the one or more image edges is identified as corresponding to edges of the one or more particles, based in part on changes in intensity of the captured image. Information about the one or more particles, including information about size, shape, and volume, can be determined from the one or more image edges corresponding to the edges of the one or more particles.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 10, 2023
    Assignee: Halliburton Energy Services, Inc.
    Inventors: Abhijit Kulkarni, Prashant Shekhar
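    A simplified one-dimensional illustration (not the full method): treat large intensity changes along the imaged laser line as edges, pair consecutive edges as particle boundaries, and report particle widths; the threshold and the pairing heuristic are assumptions.
    ```python
    import numpy as np

    def detect_edges(intensity, threshold):
        """Indices where the intensity change between neighbouring pixels is large."""
        grad = np.diff(intensity.astype(np.float64))
        return np.where(np.abs(grad) > threshold)[0]

    def particle_widths(edges):
        """Pair consecutive edges as the boundaries of one particle (a heuristic)."""
        return [edges[i + 1] - edges[i] for i in range(0, len(edges) - 1, 2)]

    # Synthetic line profile: flat bed with two particles deflecting the laser line.
    line = np.ones(200) * 10.0
    line[40:55] += 80.0      # particle 1, ~15 px wide
    line[120:150] += 60.0    # particle 2, ~30 px wide
    edges = detect_edges(line, threshold=30.0)
    print(edges, particle_widths(edges))   # edge pixel indices and estimated widths
    ```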
  • Patent number: 11775614
    Abstract: An apparatus includes an interface and a processor. The interface may be configured to receive pixel data from a capture device. The processor may be configured to (i) process the pixel data arranged as one or more video frames, (ii) extract features from the one or more video frames, (iii) generate fused maps for at least one of disparity and optical flow in response to the features extracted, (iv) generate regenerated image frames by performing warping on a first subset of the video frames based on (a) the fused maps and (b) first parameters, (v) perform a classification of a sample image frame based on second parameters, and (vi) update the first parameters and the second parameters in response to whether the classification is correct. The classification generally comprises indicating whether the sample image frame is one of a second subset of the video frames or one of the regenerated image frames.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: October 3, 2023
    Assignee: Ambarella International LP
    Inventor: Zhikan Yang
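    A schematic sketch of the adversarial arrangement implied by the abstract: a flow estimate warps one frame toward another to regenerate it, and a classifier judges whether a frame is an original or a regenerated (warped) one, producing the error data that would drive updates of both parameter sets. All networks, shapes, and the single loss step are illustrative assumptions.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def warp(frame, flow):
        """Warp `frame` (B,C,H,W) with a dense flow field (B,2,H,W), in pixels."""
        _, _, h, w = frame.shape
        ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                                torch.arange(w, dtype=torch.float32), indexing="ij")
        gx = 2 * (xs.unsqueeze(0) + flow[:, 0]) / (w - 1) - 1   # normalize to [-1, 1]
        gy = 2 * (ys.unsqueeze(0) + flow[:, 1]) / (h - 1) - 1
        grid = torch.stack([gx, gy], dim=-1)
        return F.grid_sample(frame, grid, align_corners=True)

    flow_net = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 2, 3, padding=1))
    classifier = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                               nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    frame_a, frame_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    flow = flow_net(torch.cat([frame_a, frame_b], dim=1))
    regenerated = warp(frame_a, flow)

    # The classifier should label frame_b as real (1) and the regenerated frame as
    # fake (0); whether it is correct forms the error data that, in practice with
    # alternating update rules, adjusts both the warp and classifier parameters.
    logits = classifier(torch.cat([frame_b, regenerated], dim=0))
    targets = torch.tensor([[1.0], [0.0]])
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    loss.backward()
    print(float(loss))
    ```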