Patents Examined by David F Dunphy
-
Patent number: 11823699
Abstract: Methods and systems are provided for implementing source separation techniques, and more specifically performing source separation on mixed source single-channel and multi-channel audio signals enhanced by inputting lip motion information from captured image data, including selecting a target speaker facial image from a plurality of facial images captured over a period of interest; computing a motion vector based on facial features of the target speaker facial image; and separating, based on at least the motion vector, audio corresponding to a constituent source from a mixed source audio signal captured over the period of interest. The mixed source audio signal may be captured from single-channel or multi-channel audio capture devices. Separating audio from the audio signal may be performed by a fusion learning model comprising a plurality of learning sub-models. Separating the audio from the audio signal may be performed by a blind source separation (“BSS”) learning model.
Type: Grant
Filed: May 23, 2022
Date of Patent: November 21, 2023
Assignee: Alibaba Group Holding Limited
Inventor: Yun Li
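The "motion vector based on facial features" step can be pictured as frame-to-frame displacement of tracked lip landmarks. The sketch below is an illustrative assumption, not the patented method; the landmark coordinates and the simple averaging scheme are invented for the example.

```python
# Sketch (not the patented method): a lip-region motion vector computed as
# the mean (dx, dy) displacement of matched landmarks between two frames.
# Landmark positions and the averaging scheme are illustrative assumptions.

def lip_motion_vector(landmarks_prev, landmarks_curr):
    """Average (dx, dy) displacement over matched lip landmarks."""
    assert len(landmarks_prev) == len(landmarks_curr)
    n = len(landmarks_prev)
    dx = sum(c[0] - p[0] for p, c in zip(landmarks_prev, landmarks_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(landmarks_prev, landmarks_curr)) / n
    return (dx, dy)

prev = [(10.0, 20.0), (12.0, 21.0), (14.0, 20.5)]
curr = [(10.5, 21.0), (12.5, 22.0), (14.5, 21.5)]
print(lip_motion_vector(prev, curr))  # (0.5, 1.0)
```

A downstream separation model would consume a sequence of such vectors alongside the mixed audio.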
-
Patent number: 11816180
Abstract: Disclosed is a method for classifying mixed signals, comprising: receiving mixed signals; performing calculation on a matrix corresponding to the mixed signals by means of a preset Principal Component Analysis method to obtain to-be-classified mixed signals and to determine the number of types of signals contained in the to-be-classified mixed signals; determining a separation matrix based on the number of types of signals contained in the to-be-classified mixed signals; separating individual signals in the to-be-classified mixed signals by means of the separation matrix to obtain to-be-identified signals; calculating a preset number of high-order cumulants corresponding to each to-be-identified signal in the to-be-identified signals respectively; taking the calculated high-order cumulants as characteristics of the to-be-identified signal corresponding to the high-order cumulants respectively; inputting the characteristics of the to-be-identified signal into a preset classification model; and obtaining a modu…
Type: Grant
Filed: March 10, 2020
Date of Patent: November 14, 2023
Assignee: BEIJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
Inventors: Zhiyong Feng, Kezhong Zhang, Zhiqing Wei, Li Xu, Che Ji
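One concrete example of a "high-order cumulant" feature of the kind this abstract uses for classification is the fourth-order cumulant of a zero-mean real signal, c4 = E[x^4] - 3(E[x^2])^2. The patent does not specify which orders or how many cumulants are used; this is a minimal sketch of one such feature.

```python
# Fourth-order cumulant of a real signal: c4 = E[x^4] - 3*(E[x^2])^2
# after mean removal. Which cumulants the patent actually uses is not
# specified; this is one common choice for modulation classification.

def fourth_order_cumulant(x):
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]          # center the signal
    m2 = sum(v * v for v in xc) / n     # second-order moment
    m4 = sum(v ** 4 for v in xc) / n    # fourth-order moment
    return m4 - 3.0 * m2 * m2

# A unit-power constant-modulus (BPSK-like) sequence has c4 = -2:
sig = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
print(fourth_order_cumulant(sig))  # -2.0
```

Different modulation families produce characteristically different cumulant values, which is what makes such statistics usable as classifier inputs.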
-
Patent number: 11810659
Abstract: The present disclosure provides a medical image processing apparatus capable of readily creating, from a medical image, an electronic document that displays a three-dimensional body organ model. The medical image processing apparatus performs control to acquire patient information from DICOM additional information of medical image data designated when the creation of the electronic document has been instructed, and to create the electronic document of the three-dimensional body organ model corresponding to the medical image data, the electronic document containing the acquired patient information. To which patient the three-dimensional body organ model belongs can be identified on the electronic document.
Type: Grant
Filed: April 9, 2021
Date of Patent: November 7, 2023
Assignee: Canon Kabushiki Kaisha
Inventors: Yusuke Imasugi, Tsuyoshi Sakamoto, Noriaki Miyake
-
Patent number: 11810339
Abstract: Aspects of the disclosure relate to anomaly detection in cybersecurity training modules. A computing platform may receive information defining a training module. The computing platform may capture a plurality of screenshots corresponding to different permutations of the training module. The computing platform may input, into an auto-encoder, the plurality of screenshots corresponding to the different permutations of the training module, wherein inputting the plurality of screenshots corresponding to the different permutations of the training module causes the auto-encoder to output a reconstruction error value. The computing platform may execute an outlier detection algorithm on the reconstruction error value, which may cause the computing platform to identify an outlier permutation of the training module. The computing platform may generate a user interface comprising information identifying the outlier permutation of the training module.
Type: Grant
Filed: May 10, 2022
Date of Patent: November 7, 2023
Assignee: Proofpoint, Inc.
Inventor: Adam Jason
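The abstract names "an outlier detection algorithm" without fixing one. A common stand-in is median-absolute-deviation (MAD) screening of the reconstruction errors; the choice of MAD and the cutoff k are assumptions for illustration only.

```python
# Illustrative outlier-detection step: flag permutations whose auto-encoder
# reconstruction error deviates from the median by more than k MADs.
# The patent does not name a specific algorithm; MAD and k=3 are assumptions.
import statistics

def mad_outliers(errors, k=3.0):
    med = statistics.median(errors)
    mad = statistics.median([abs(e - med) for e in errors])
    if mad == 0:
        return [i for i, e in enumerate(errors) if e != med]
    return [i for i, e in enumerate(errors) if abs(e - med) / mad > k]

recon_errors = [0.11, 0.12, 0.10, 0.13, 0.11, 0.92]  # last permutation anomalous
print(mad_outliers(recon_errors))  # [5]
```

The flagged indices would then drive the user interface that identifies outlier permutations.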
-
Patent number: 11810304
Abstract: Depth information from a depth sensor, such as a LiDAR system, is used to correct perspective distortion for decoding an optical pattern in a first image acquired by a camera. Image data from the first image is spatially correlated with the depth information. The depth information is used to identify a surface in the scene and to distort the first image to generate a second image, such that the surface in the second image is parallel to an image plane of the second image. The second image is then analyzed to decode an optical pattern on the surface identified in the scene.
Type: Grant
Filed: July 12, 2022
Date of Patent: November 7, 2023
Assignee: Scandit AG
Inventors: Matthias Bloch, Christian Floerkemeier, Bernd Schoner
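The "distort the first image" step amounts to warping coordinates through a 3x3 projective homography derived from the depth-estimated plane. The sketch below only applies a given homography to a point, with the usual perspective division; estimating H from LiDAR depth is outside its scope, and the example matrix is an assumption.

```python
# Applying a 3x3 homography H to an image coordinate (x, y).
# Deriving H from the depth-identified plane is not shown here.

def apply_homography(H, x, y):
    """Map (x, y) through H with perspective division."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# A pure scaling homography doubles coordinates:
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 3.0, 4.0))  # (6.0, 8.0)
```

Warping every pixel of the first image through such an H yields the second, fronto-parallel image that the decoder then reads.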
-
Patent number: 11798263
Abstract: A computing system detects a defective object. An image is received of a manufacturing line that includes objects in a process of being manufactured. Each pixel included in the image is classified as a background pixel class, a non-defective object class, or a defective object class using a trained neural network model. The pixels included in the image that were classified as the non-defective object class or the defective object class are grouped into polygons. Each polygon is defined by a contiguous group of pixels classified as the non-defective object class or the defective object class. Each polygon is classified in the non-defective object class or in the defective object class based on a number of pixels included in a respective polygon that are classified in the non-defective object class relative to a number of pixels included in the respective polygon that are classified in the defective object class.
Type: Grant
Filed: April 4, 2023
Date of Patent: October 24, 2023
Assignee: SAS Institute Inc.
Inventors: Kedar Shriram Prabhudesai, Jonathan Lee Walker, Sanjeev Shyam Heda, Varunraj Valsaraj, Allen Joseph Langlois, Frederic Combaneyre, Hamza Mustafa Ghadyali, Nabaruna Karmakar
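The per-polygon decision described in the last sentence is a pixel-count comparison. A minimal sketch, with class encodings and the tie-breaking rule as assumptions:

```python
# Per-polygon vote: a polygon is labeled defective when its defective-class
# pixel count outweighs its non-defective count. The integer class codes
# and the tie-break toward non-defective are illustrative assumptions.

NON_DEFECTIVE, DEFECTIVE = 1, 2

def classify_polygon(pixel_classes):
    defect = sum(1 for c in pixel_classes if c == DEFECTIVE)
    ok = sum(1 for c in pixel_classes if c == NON_DEFECTIVE)
    return DEFECTIVE if defect > ok else NON_DEFECTIVE

print(classify_polygon([1, 1, 2, 1, 1]))  # 1 (non-defective)
print(classify_polygon([2, 2, 2, 1, 2]))  # 2 (defective)
```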
-
Patent number: 11798280
Abstract: A method for augmenting a point of interest in AR video. The method includes: extracting a first frame of the AR video depicting an object being tracked including the point of interest, by a first pipeline; locating, in the first frame, a set of feature points forming a boundary of the object; transferring the frame with the determined set of feature points to a second pipeline; and, by the second pipeline: determining a coordinate system of the set of feature points; calculating first location parameters of the feature points; selecting first and second reference points from the feature points, wherein first location parameters of the point of interest are defined by the first location parameters of the first and second reference points; transmitting the first location parameters of the point of interest from the second pipeline to the first pipeline; and augmenting, in the first pipeline, the AR video with the first location parameters of the point of interest received from the second pipeline.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 24, 2023
Assignee: Revieve Oy
Inventors: Jakke Kulovesi, Joonas Hamunen, Samuli Siivinen
-
Patent number: 11790652
Abstract: Systems and methods are presented for detecting physical contacts effectuated by actions performed by an entity participating in an event. An action, performed by the entity, is detected based on a sequence of pose data associated with the entity's performance in the event. A contact with another entity in the event is detected based on data associated with the detected action. The action and the contact detections are employed by neural-network based detectors.
Type: Grant
Filed: October 24, 2022
Date of Patent: October 17, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Justin Ali Kennedy, Kevin John Prince, Carlos Augusto Dietrich, Dirk Van Dall
-
Patent number: 11783582
Abstract: An eyewear device with camera-based compensation that improves the user experience for users having partial blindness or complete blindness. The camera-based compensation determines objects, converts determined objects to text, and then converts the text to audio that is indicative of the objects and that is perceptible to the eyewear user. The camera-based compensation may use a region-based convolutional neural network (RCNN) to generate a feature map including text that is indicative of objects in images captured by a camera. Relevant text of the feature map is then processed through a text-to-speech algorithm featuring a natural language processor to generate audio indicative of the objects in the processed images.
Type: Grant
Filed: July 29, 2022
Date of Patent: October 10, 2023
Assignee: Snap Inc.
Inventor: Stephen Pomes
-
Patent number: 11783601
Abstract: A driver fatigue detection method based on combining a pseudo-three-dimensional (P3D) convolutional neural network (CNN) and an attention mechanism includes: 1) extracting a frame sequence from a video of a driver and processing the frame sequence; 2) performing spatiotemporal feature learning through a P3D convolution module; 3) constructing a P3D-Attention module, and applying attention on channels and a feature map through the attention mechanism; and 4) replacing a 3D global average pooling layer with a 2D global average pooling layer to obtain more expressive features, and performing a classification through a Softmax classification layer. By analyzing the yawning behavior, blinking and head characteristic movements, the yawning behavior is well distinguished from the talking behavior, and it is possible to effectively distinguish between the three states of alert state, low vigilant state and drowsy state, thus improving the predictive performance of fatigue driving behaviors.
Type: Grant
Filed: August 18, 2020
Date of Patent: October 10, 2023
Assignee: Nanjing University of Science and Technology
Inventors: Yong Qi, Yuan Zhuang
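Step 4's 2D global average pooling reduces each channel's HxW feature map to a single scalar. A minimal sketch over nested lists (channel-first layout is an assumption):

```python
# 2D global average pooling: collapse each [H][W] channel to its mean.
# Input layout [C][H][W] is an illustrative assumption.

def global_avg_pool_2d(feature_maps):
    """[C][H][W] -> [C] channel means."""
    pooled = []
    for fmap in feature_maps:
        total = sum(sum(row) for row in fmap)
        count = len(fmap) * len(fmap[0])
        pooled.append(total / count)
    return pooled

fmaps = [[[1.0, 3.0], [5.0, 7.0]],   # channel 0 -> mean 4.0
         [[2.0, 2.0], [2.0, 2.0]]]   # channel 1 -> mean 2.0
print(global_avg_pool_2d(fmaps))  # [4.0, 2.0]
```

The pooled vector then feeds the Softmax classification layer mentioned in the abstract.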
-
Patent number: 11783192
Abstract: A computer implemented method for recognizing facial expressions by applying feature learning and feature engineering to face images. The method includes conducting feature learning on a face image, comprising feeding the face image into a first convolution neural network to obtain a first decision; conducting feature engineering on the face image, comprising the steps of automatically detecting facial landmarks in the face image, transforming the facial features into a two-dimensional matrix, and feeding the two-dimensional matrix into a second convolution neural network to obtain a second decision; computing a hybrid decision based on the first decision and the second decision; and recognizing a facial expression in the face image in accordance with the hybrid decision.
Type: Grant
Filed: March 9, 2022
Date of Patent: October 10, 2023
Assignee: Shutterfly, LLC
Inventor: Leo Cyrus
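One plain way to realize the "hybrid decision" is to average the two networks' class-probability vectors and take the argmax. The equal weighting is an assumption; the abstract does not fix the combination rule.

```python
# Hybrid decision as an equal-weight average of two probability vectors,
# followed by argmax. The 50/50 weighting is an illustrative assumption.

def hybrid_decision(probs_a, probs_b, labels):
    combined = [(a + b) / 2.0 for a, b in zip(probs_a, probs_b)]
    return labels[combined.index(max(combined))]

labels = ["neutral", "happy", "surprised"]
cnn1 = [0.2, 0.5, 0.3]   # feature-learning branch
cnn2 = [0.1, 0.3, 0.6]   # feature-engineering branch
print(hybrid_decision(cnn1, cnn2, labels))  # surprised
```

A learned or confidence-dependent weighting would slot into the same structure.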
-
Patent number: 11775578
Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data, which may expand the availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
Type: Grant
Filed: August 10, 2021
Date of Patent: October 3, 2023
Assignee: Adobe Inc.
Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
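A hedged reading of "losses computed separately": a positive-sample term pulls the matching image embedding toward the text embedding, while a negative-sample term pushes the non-matching one away, hinge-style. The margin and squared-distance choices below are assumptions, not the patented loss.

```python
# Two separately computed loss terms for one (text, positive, negative)
# triple. Squared Euclidean distance and margin=1.0 are assumptions.

def separate_losses(text_emb, pos_emb, neg_emb, margin=1.0):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    loss_pos = sq_dist(text_emb, pos_emb)                 # pull positive in
    loss_neg = max(0.0, margin - sq_dist(text_emb, neg_emb))  # push negative out
    return loss_pos, loss_neg

text = [0.0, 0.0]
positive = [0.1, 0.0]
negative = [2.0, 0.0]
print(separate_losses(text, positive, negative))  # positive ~0.01, negative 0.0
```

Keeping the two terms separate is what lets them be weighted or scheduled independently during training.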
-
Patent number: 11776257
Abstract: Disclosed embodiments provide systems and user devices for enhancing vehicle identification with preprocessing. The systems or user devices may comprise at least one memory device, an augmentation tool, and at least one processor. The at least one processor may be configured to execute instructions to receive an image depicting a vehicle, analyze the image, and determine a first predicted identity of the vehicle and a first confidence value distribution. The at least one processor may further select a processing technique for modifying the image and analyze the modified image to determine a second predicted identity of the vehicle and a second confidence value distribution. The system may further compare the second confidence value distribution to a predetermined threshold or to the first confidence value distribution to select the first or second predicted identity for transmission to a user.
Type: Grant
Filed: March 10, 2022
Date of Patent: October 3, 2023
Assignee: Capital One Services, LLC
Inventors: Micah Price, Chi-san Ho, Aamer Charania, Sunil Vasisht
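The selection logic at the end of the abstract can be sketched as: keep the second (post-preprocessing) prediction only when its top confidence clears a threshold or beats the first pass. The threshold value and vehicle labels below are illustrative assumptions.

```python
# Select between the first-pass and second-pass predictions by comparing
# top confidences. The 0.8 threshold and labels are assumptions.

def select_prediction(first, second, threshold=0.8):
    """Each argument is (predicted_identity, confidence_distribution)."""
    id1, conf1 = first
    id2, conf2 = second
    if max(conf2) >= threshold or max(conf2) > max(conf1):
        return id2
    return id1

first_pass = ("sedan-model-A", [0.55, 0.30, 0.15])
second_pass = ("sedan-model-B", [0.85, 0.10, 0.05])
print(select_prediction(first_pass, second_pass))  # sedan-model-B
```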
-
Patent number: 11756162
Abstract: A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.
Type: Grant
Filed: March 14, 2016
Date of Patent: September 12, 2023
Assignee: Imagination Technologies Limited
Inventors: Marc Vivet, Paul Brasnett
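The weighted combination step, shown in one dimension for brevity: each aligned image contributes to the output in proportion to its alignment score with the reference. How the scores are produced and normalized is not specified by the abstract; the values here are assumptions.

```python
# Alignment-weighted per-pixel average across a stack of images.
# Images are flattened to equal-length pixel rows for simplicity.

def weighted_combine(images, weights):
    """images: list of equal-length pixel rows; weights: one per image."""
    total = sum(weights)
    out = []
    for px in zip(*images):
        out.append(sum(w * v for w, v in zip(weights, px)) / total)
    return out

imgs = [[10.0, 20.0], [14.0, 24.0], [50.0, 60.0]]
wts = [1.0, 1.0, 0.0]   # badly aligned third image gets zero weight
print(weighted_combine(imgs, wts))  # [12.0, 22.0]
```

Down-weighting poorly aligned frames is exactly how the scheme suppresses ghosting while still averaging away noise.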
-
Patent number: 11756299
Abstract: Systems, computer program products, and methods are described herein for preserving image and acoustic sensitivity using reinforcement learning. The present invention is configured to initiate a file editing engine on the audiovisual file to separate the audiovisual file into a video component and an audio component; initiate a convolutional neural network (CNN) algorithm on the video component to identify one or more sensitive portions in the one or more image frames; initiate an audio word2vec algorithm on the audio component to identify one or more sensitive portions in the audio component; initiate a masking algorithm on the one or more image frames and the audio component; generate a masked video component and a masked audio component based on at least implementing the masking action policy; and bind, using the file editing engine, the masked video component and the masked audio component to generate a masked audiovisual file.
Type: Grant
Filed: October 28, 2022
Date of Patent: September 12, 2023
Assignee: BANK OF AMERICA CORPORATION
Inventor: Madhusudhanan Krishnamoorthy
-
Patent number: 11756676
Abstract: A plurality of analysis functions each corresponding to an organ are managed, and organ information is stored in such a manner as to correlate with a corresponding type of analysis function. The organ information indicates which of a plurality of regions included in the organ is to be subjected to thinning. Specification of one of the analysis functions is received from a user, and medical image data is acquired. A plurality of regions of an organ included in the acquired medical image data are identified. From the identified plurality of regions of the organ, a region to be subjected to thinning is determined on the basis of the stored organ information and the received type of the analysis function. Thinning is performed on the determined region of the organ. An image of the thinned region is displayed together with an image of a region not subjected to thinning.
Type: Grant
Filed: November 10, 2021
Date of Patent: September 12, 2023
Assignee: Canon Kabushiki Kaisha
Inventors: Tsuyoshi Sakamoto, Yusuke Imasugi
-
Patent number: 11747823
Abstract: The described positional awareness techniques employ sensory data gathering and analysis hardware and, with reference to specific example implementations, implement improvements in the use of sensors, techniques, and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
Type: Grant
Filed: September 20, 2021
Date of Patent: September 5, 2023
Assignee: Trifo, Inc.
Inventors: Zhe Zhang, Grace Tsai, Shaoshan Liu
-
Patent number: 11741756
Abstract: Systems and methods are presented for generating statistics associated with a performance of a participant in an event, wherein pose data associated with the participant, performing in the event, are processed in real time. Pose data associated with the participant may comprise positional data of a skeletal representation of the participant. Actions performed by the participant may be determined based on a comparison of segments of the participant's pose data to motion patterns associated with actions of interest.
Type: Grant
Filed: October 24, 2022
Date of Patent: August 29, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Kevin John Prince, Carlos Augusto Dietrich, Dirk Van Dall
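One way to realize "comparison of segments of pose data to motion patterns": mean Euclidean distance between a pose segment and each stored pattern, with the lowest-distance action winning. The distance metric, the 2D keypoints, and the pattern library are illustrative assumptions.

```python
# Nearest-pattern action matching over 2D keypoint trajectories.
# The metric and the two toy patterns are illustrative assumptions.
import math

def match_action(segment, patterns):
    def dist(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return min(patterns, key=lambda name: dist(segment, patterns[name]))

patterns = {
    "jump":  [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)],
    "slide": [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
}
segment = [(0.1, 0.0), (0.1, 1.1), (0.0, 1.9)]
print(match_action(segment, patterns))  # jump
```

Real systems typically use time-warping-tolerant metrics and full skeletal joint sets, but the matching structure is the same.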
-
Patent number: 11741687
Abstract: Systems, methods, and computer readable media that store instructions for configuring spanning elements of a signature generator.
Type: Grant
Filed: March 27, 2020
Date of Patent: August 29, 2023
Assignee: CORTICA LTD.
Inventors: Igal Raichelgauz, Adrian Kaho Chan
-
Patent number: 11741753
Abstract: Generating visual data by defining a first action into a first set of objects and corresponding first set of motions, and defining a second action into a second set of objects and corresponding second set of motions. A relationship is then determined for the second action to the first action in terms of relationships between corresponding constituent objects and motions. Objects and motions are detected from visual data of the first action. Visual data is composed for the second action by transforming the constituent objects and motions detected in the first action based on the corresponding determined relationships.
Type: Grant
Filed: November 23, 2021
Date of Patent: August 29, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Nalini K. Ratha, Sharathchandra Pankanti, Lisa Marie Brown