Patents Examined by David Perlman
-
Patent number: 12266181
Abstract: Embodiments are disclosed for receiving a user input and an input video comprising multiple frames. The method may include extracting a text feature from the user input. The method may further include extracting a plurality of image features from the frames. The method may further include identifying one or more keyframes from the frames that include the object. The method may further include clustering one or more groups of the one or more keyframes. The method may further include generating a plurality of segmentation masks for each group. The method may further include determining a set of reference masks corresponding to the user input and the object. The method may further include generating a set of fusion masks by combining the plurality of segmentation masks and the set of reference masks. The method may further include propagating the set of fusion masks and outputting a final set of masks.
Type: Grant
Filed: November 19, 2021
Date of Patent: April 1, 2025
Assignee: Adobe Inc.
Inventors: Shivam Nalin Patel, Kshitiz Garg, Han Guo, Ali Aminian, Aashish Misraa
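A minimal numpy sketch of one plausible mask-fusion step, assuming the segmentation and reference masks are soft masks in [0, 1] that are blended and thresholded; the weighting scheme and names are illustrative, not taken from the patent.

```python
# Hypothetical sketch (not the patented implementation): fuse per-group
# segmentation masks with text/object reference masks by a confidence-weighted
# blend, then threshold to a final binary mask per frame.
import numpy as np

def fuse_masks(segmentation_masks, reference_masks, alpha=0.5, threshold=0.5):
    """segmentation_masks, reference_masks: float arrays in [0, 1],
    shape (num_frames, height, width). alpha weights the two sources."""
    fused = alpha * segmentation_masks + (1.0 - alpha) * reference_masks
    return (fused >= threshold).astype(np.uint8)

if __name__ == "__main__":
    seg = np.random.rand(4, 8, 8)   # stand-in for per-group segmentation masks
    ref = np.random.rand(4, 8, 8)   # stand-in for reference masks
    final_masks = fuse_masks(seg, ref)
    print(final_masks.shape)        # (4, 8, 8)
```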
-
Systems and methods of deep learning for large-scale dynamic magnetic resonance image reconstruction
Patent number: 12265145
Abstract: A method for performing magnetic resonance imaging on a subject comprises obtaining undersampled imaging data, extracting one or more temporal basis functions from the imaging data, extracting one or more preliminary spatial weighting functions from the imaging data, inputting the one or more preliminary spatial weighting functions into a neural network to produce one or more final spatial weighting functions, and multiplying the one or more final spatial weighting functions by the one or more temporal basis functions to generate an image sequence. Each of the temporal basis functions corresponds to at least one time-varying dimension of the subject. Each of the preliminary spatial weighting functions corresponds to a spatially-varying dimension of the subject. Each of the final spatial weighting functions is an artifact-free estimation of the one of the one or more preliminary spatial weighting functions.
Type: Grant
Filed: September 10, 2020
Date of Patent: April 1, 2025
Assignee: CEDARS-SINAI MEDICAL CENTER
Inventors: Anthony Christodoulou, Debiao Li, Yuhua Chen
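The final step described above is a low-rank-style reconstruction: the image sequence is the product of spatial weighting functions and temporal basis functions. A hedged numpy sketch of that multiplication only, with illustrative shapes and names (U, Phi):

```python
# Illustrative reconstruction step only; the neural network that produces the
# final spatial weighting functions is out of scope here.
import numpy as np

num_voxels, num_frames, rank = 64 * 64, 100, 8

U = np.random.randn(num_voxels, rank)      # final spatial weighting functions
Phi = np.random.randn(rank, num_frames)    # temporal basis functions

image_sequence = U @ Phi                   # (num_voxels, num_frames)
image_sequence = image_sequence.reshape(64, 64, num_frames)
print(image_sequence.shape)                # (64, 64, 100)
```
-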
Patent number: 12260526
Abstract: One embodiment provides a computer-implemented method that includes providing a dynamic list structure that stores one or more detected object bounding boxes. Temporal analysis is applied that updates the dynamic list structure with object validation to reduce temporal artifacts. A two-dimensional (2D) buffer is utilized to store a luminance reduction ratio of a whole video frame. The luminance reduction ratio is applied to each pixel in the whole video frame based on the 2D buffer. One or more spatial smoothing filters are applied to the 2D buffer to reduce a likelihood of one or more spatial artifacts occurring in a luminance reduced region.
Type: Grant
Filed: August 9, 2022
Date of Patent: March 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kamal Jnawali, Joonsoo Kim, Chenguang Liu, Chang Su
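A hedged numpy sketch of the 2D-buffer idea, assuming the buffer holds one luminance reduction ratio per pixel, a simple mean filter stands in for the spatial smoothing filters, and the dimmed box is arbitrary:

```python
# Not the patented implementation: store a per-pixel luminance reduction ratio
# in a 2D buffer, smooth it to suppress spatial artifacts at the region border,
# then apply the ratio to every pixel of the frame.
import numpy as np

def smooth(buffer_2d, k=5):
    """Very simple mean filter via edge padding and window averaging."""
    pad = k // 2
    padded = np.pad(buffer_2d, pad, mode="edge")
    out = np.zeros_like(buffer_2d)
    h, w = buffer_2d.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

frame = np.random.rand(120, 160)            # stand-in luminance channel
ratio_buffer = np.ones((120, 160))          # the 2D buffer of reduction ratios
ratio_buffer[40:80, 60:120] = 0.6           # reduce luminance inside a detected box

dimmed = frame * smooth(ratio_buffer)       # ratio applied to each pixel
```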
-
Patent number: 12260649
Abstract: A device may process the surveillance video data to segment vehicles, and may utilize a segmentation guided attention network model with the vehicles to determine traffic density count data. The device may process an image segmentation map, with a regression analysis model, to derive traffic signal timing. The device may process the surveillance video data, with a deep learning model, to identify objects, and may utilize a YOLO model, with the objects, to determine object types. The device may utilize a curriculum loss model with the objects to determine crowd count data, and may process the surveillance video data, with a video analytics model, to identify first events. The device may process the surveillance video data, with a classifier and deep network models, to identify second events, and may process the determined information, with a dynamic text-based explanation model, to generate a text-based explanation and/or a failure prediction.
Type: Grant
Filed: November 4, 2022
Date of Patent: March 25, 2025
Assignee: Accenture Global Solutions Limited
Inventors: Subramaniaprabhu Jagadeesan, Bikash Chandra Mahato
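One small step of such a pipeline, sketched under assumptions: detections are already available as (frame, type) pairs from a YOLO-style detector, and vehicle density is a simple per-frame count. The other models named in the abstract are out of scope here.

```python
# Hypothetical fragment: turn per-frame object detections into per-frame
# vehicle density counts that a downstream timing/explanation model could use.
from collections import Counter

detections = [  # (frame_index, object_type) stand-ins for detector output
    (0, "car"), (0, "car"), (0, "person"), (1, "car"), (1, "truck"),
]

density_per_frame = Counter(
    frame for frame, obj_type in detections if obj_type in {"car", "truck"}
)
print(dict(density_per_frame))  # {0: 2, 1: 2}
```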
-
Patent number: 12260645
Abstract: Systems, methods, and computer program products of intelligent image analysis using object detection models to identify objects and locate and detect features in an image are disclosed. The systems, methods, and computer program products include automated learning to identify the location of an object to enable continuous identification and location of an object in an image during periods when the object may be difficult to recognize or during low visibility conditions.
Type: Grant
Filed: January 10, 2024
Date of Patent: March 25, 2025
Assignee: DEVON ENERGY CORPORATION
Inventors: Amos James Hall, Jared Lee Markes, Beau Travis Rollins, Michael Alan Adler, Jr.
-
Patent number: 12254604
Abstract: A system comprises a picture and metadata captured by a content capture system; a recognizable characteristic datastore configured to store recognizable characteristics of different users; a module configured to identify a time and a location associated with the picture based on the metadata, and to identify one or more potential target systems within a predetermined range of the location at the time; a characteristic recognition module configured to retrieve the recognizable characteristics of one or more potential users associated with the potential target systems, and evaluate whether the picture includes one or more representations of at least one actual target user from the potential users based on the recognizable characteristics of the potential users; a distortion module configured to distort a feature of the representations of the at least one actual target user in response to the determination; and a communication module configured to communicate the distorted picture to a computer network.
Type: Grant
Filed: September 21, 2023
Date of Patent: March 18, 2025
Assignee: Privowny, Inc.
Inventor: Herve Le Jouan
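A hedged sketch of two of the described steps, with assumed names and a flat-plane distance model: filtering potential target systems by range from the picture's location, and distorting (here, pixelating) a matched region of the picture.

```python
# Illustrative only; the characteristic recognition itself is out of scope.
import numpy as np

def within_range(photo_xy, system_xy, radius):
    """Flat-plane distance check between the photo location and a target system."""
    return float(np.hypot(photo_xy[0] - system_xy[0],
                          photo_xy[1] - system_xy[1])) <= radius

def pixelate(image, box, block=8):
    """Distort a rectangular region (y0, y1, x0, x1) by coarse pixelation.
    Assumes the region dimensions are exact multiples of `block`."""
    y0, y1, x0, x1 = box
    region = image[y0:y1, x0:x1]
    h, w = region.shape[:2]
    coarse = region[::block, ::block]
    image[y0:y1, x0:x1] = np.repeat(np.repeat(coarse, block, 0), block, 1)[:h, :w]
    return image

photo_location = (40.710, -74.000)                 # from the picture's metadata
systems = {"alice": (40.712, -74.001), "bob": (41.500, -73.200)}
nearby = [u for u, xy in systems.items() if within_range(photo_location, xy, 0.05)]
print(nearby)                                      # ['alice'] -> check her characteristics

picture = np.random.rand(64, 64)                   # stand-in grayscale picture
picture = pixelate(picture, (16, 32, 16, 32))      # distort the matched user's region
```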
-
Patent number: 12254086
Abstract: Systems, devices, and methods are disclosed for encoding behavioral information into an image format to facilitate image based behavioral identification.
Type: Grant
Filed: June 2, 2022
Date of Patent: March 18, 2025
Assignee: Fortinet, Inc.
Inventor: Sameer Khanna
-
Patent number: 12236686
Abstract: A distributed monitoring and analytics system is configured to automatically monitor conditions in a remote oil field. The distributed monitoring and analytics system generally includes one or more mobile monitoring units that each includes a vehicle, a sensor package within the vehicle that is configured to produce one or more sensor outputs as the mobile monitoring unit traverses the remote oil field, and an onboard computer configured to process the output from the sensor package. The sensor package can include any number of sensors, including a camera that outputs a video signal for computer vision analysis and a gas detector that outputs a gas detection signal based on the detection of fugitive gas emissions within the remote oil field.
Type: Grant
Filed: October 14, 2021
Date of Patent: February 25, 2025
Assignee: Baker Hughes Oilfield Operations LLC
Inventors: John Westerheide, Dustin Sharber, Jeffrey Potts, Mahendra Joshi, Xiaoqing Ge, Jeremy Van Dam
-
Patent number: 12217330
Abstract: A computer-implemented method for generating labeled training data for an artificial intelligence machine is provided.
Type: Grant
Filed: November 3, 2020
Date of Patent: February 4, 2025
Assignee: THALES
Inventors: Thierry Ganille, Guillaume Pabia, Christian Nouvel
-
Patent number: 12214487
Abstract: A vision-based tactile measurement method is provided, performed by a computer device (e.g., a chip) connected to a tactile sensor, the tactile sensor including a sensing face and an image sensing component, and the sensing face being provided with a marking pattern. The method includes: obtaining an image sequence collected by the image sensing component of the sensing face, each image of the image sequence comprising one instance of the marking pattern; calculating a difference feature of the marking patterns in adjacent images of the image sequence; and processing the difference feature of the marking patterns using a feedforward neural network to obtain a tactile measurement result, a quantity of hidden layers in the feedforward neural network being less than a threshold.
Type: Grant
Filed: July 7, 2021
Date of Patent: February 4, 2025
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yu Zheng, Zhongjin Xu, Zhengyou Zhang
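A minimal sketch, assuming the difference feature is the displacement of tracked marker positions between adjacent images and the shallow feedforward network is a single-hidden-layer MLP with random stand-in weights (not the trained model):

```python
# Illustrative only: difference feature + tiny feedforward network.
import numpy as np

def difference_feature(markers_prev, markers_curr):
    """markers_*: (num_markers, 2) pixel coordinates of the marking pattern."""
    return (markers_curr - markers_prev).ravel()

def tiny_mlp(x, w1, b1, w2, b2):
    hidden = np.maximum(0.0, x @ w1 + b1)     # one hidden layer (below the threshold)
    return hidden @ w2 + b2                   # e.g., a 3-D contact-force estimate

rng = np.random.default_rng(0)
prev = rng.random((16, 2))
curr = prev + 0.01 * rng.standard_normal((16, 2))
x = difference_feature(prev, curr)            # 32-dimensional difference feature

w1, b1 = rng.standard_normal((32, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 3)), np.zeros(3)
print(tiny_mlp(x, w1, b1, w2, b2))            # toy tactile measurement result
```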
-
Patent number: 12211306
Abstract: A method for counting suckling piglets based on self-attention spatiotemporal feature fusion is disclosed, which includes: detecting a side-lying sow in a video frame by using CenterNet to acquire a key frame of suckling piglets and a region of interest of the video frame, and to overcome the interference of the movement of non-suckling piglets on the spatiotemporal feature extraction for the region of interest; transforming spatiotemporal features extracted by a spatiotemporal two-stream convolutional network from a key frame video clip into a spatiotemporal feature vector, and inputting the obtained spatiotemporal feature vector into a temporal, a spatial and a fusion transformer to obtain a self-attention matrix; performing an element-wise product of the self-attention matrix and the fused spatiotemporal features to obtain a self-attention spatiotemporal feature map; and inputting the self-attention spatiotemporal feature map into a regression branch of the number of suckling piglets to complete the counting of the suckling piglets.
Type: Grant
Filed: January 5, 2023
Date of Patent: January 28, 2025
Inventors: Yueju Xue, Haiming Gan, Wenhao Hou, Chengguo Xu
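An illustrative numpy sketch of the element-wise product and regression steps only; the feature shapes, pooling, and linear head are assumptions rather than the disclosed architecture.

```python
# Toy version of: attention * fused features -> pooled feature -> count regression.
import numpy as np

rng = np.random.default_rng(1)
features = rng.random((256, 14, 14))          # fused spatiotemporal feature map (C, H, W)
attention = rng.random((14, 14))              # self-attention matrix over spatial positions

attended = features * attention               # element-wise product, broadcast over channels
pooled = attended.mean(axis=(1, 2))           # (256,) global average pooling
w = rng.standard_normal(256)
count = float(np.maximum(0.0, pooled @ w))    # regression branch output (non-negative count)
print(count)
```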
-
Patent number: 12211213
Abstract: In order to perform quantitative analysis on an object in an image, it is important to accurately identify the object, but when plural objects are in contact with each other, it is possible that a target portion cannot be accurately identified. An image is segmented into a foreground region and a background region, the foreground region being a region in which an object for which quantitative information is to be calculated is shown, and the background region being a region other than the foreground region. With respect to a first object and a second object in contact with each other in the image, a contact point between the first object and the second object is detected based on a region segmentation result output by a segmentation unit.
Type: Grant
Filed: February 6, 2020
Date of Patent: January 28, 2025
Assignee: HITACHI HIGH-TECH CORPORATION
Inventors: Anirban Ray, Hideharu Hattori, Yasuki Kakishita, Taku Sakazume
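A minimal sketch of contact-point detection between two touching objects, assuming the region segmentation already yields one binary mask per object: a pixel of the first object is a contact point if one of its 4-neighbours belongs to the second object.

```python
# Illustrative only; np.roll wraps at the image border, which is fine for this toy case.
import numpy as np

def contact_points(mask_a, mask_b):
    shifted_b = np.zeros_like(mask_b, dtype=bool)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted_b |= np.roll(mask_b, (dy, dx), axis=(0, 1))
    return np.argwhere(mask_a & shifted_b)     # (row, col) contact coordinates

a = np.zeros((10, 10), bool); a[2:6, 2:5] = True    # first object
b = np.zeros((10, 10), bool); b[2:6, 5:8] = True    # second object, touching along one column
print(contact_points(a, b))
```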
-
Patent number: 12211260
Abstract: Systems, apparatuses and methods may provide for technology that processes an inference workload in a first subset of layers of a neural network that prevents or inhibits data dependent branch operations, conducts an exit determination as to whether an output of the first subset of layers satisfies one or more exit criteria, and selectively bypasses processing of the output in a second subset of layers of the neural network based on the exit determination. The technology may also speculatively initiate the processing of the output in the second subset of layers while the exit determination is pending. Additionally, when the inference workloads include a plurality of batches, the technology may mask one or more of the plurality of batches from processing in the second subset of layers.
Type: Grant
Filed: November 27, 2023
Date of Patent: January 28, 2025
Assignee: Intel Corporation
Inventors: Haim Barad, Barak Hurwitz, Uzi Sarel, Eran Geva, Eli Kfir, Moshe Island
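A hedged sketch of the early-exit control flow with toy numpy "layers": run the first subset, test a confidence-based exit criterion, and only run the second subset when the criterion is not met. The criterion, layer shapes, and classifier are assumptions; the speculative-execution and batch-masking variants are not shown.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_subset(x, weights):
    for w in weights:
        x = np.maximum(0.0, x @ w)     # ReLU layers, no data-dependent branches
    return x

rng = np.random.default_rng(2)
first_subset = [rng.standard_normal((64, 64)) for _ in range(2)]
second_subset = [rng.standard_normal((64, 64)) for _ in range(2)]
classifier = rng.standard_normal((64, 10))

x = rng.random(64)
out = run_subset(x, first_subset)
probs = softmax(out @ classifier)
if probs.max() >= 0.9:                  # exit criterion satisfied: bypass later layers
    prediction = int(probs.argmax())
else:                                   # otherwise continue through the second subset
    out = run_subset(out, second_subset)
    prediction = int(softmax(out @ classifier).argmax())
print(prediction)
```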
-
Patent number: 12212762
Abstract: This application discloses a point cloud encoding method and apparatus, a point cloud decoding method and apparatus, and a storage medium for point cloud encoding and/or decoding, and belongs to the field of data processing. The method includes: first obtaining auxiliary information of a to-be-encoded patch, and then encoding the auxiliary information and a first index of the to-be-encoded patch into a bitstream. Values of the first index may be a first value, a second value, and a third value. Different values indicate different types of patches. Therefore, different types of patches can be distinguished by using the first index. For different types of patches, their corresponding auxiliary information encoded into a bitstream may comprise different contents. This can simplify a format of information encoded into the bitstream, reduce bit overheads of the bitstream, and improve encoding efficiency.
Type: Grant
Filed: September 17, 2021
Date of Patent: January 28, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Kangying Cai, Dejun Zhang
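A purely illustrative byte-level sketch of the idea that the per-patch index selects which auxiliary fields get written, so some patch types cost fewer bits. The field names, struct layout, and index values are assumptions, not the actual bitstream syntax.

```python
import struct

FIRST, SECOND, THIRD = 0, 1, 2          # the three values of the first index

def encode_patch(index, aux):
    payload = struct.pack("B", index)   # the first index distinguishes patch types
    if index == FIRST:                  # full auxiliary information
        payload += struct.pack("<4H", aux["u0"], aux["v0"], aux["du"], aux["dv"])
    elif index == SECOND:               # differential patch: only offsets
        payload += struct.pack("<2h", aux["du"], aux["dv"])
    # THIRD: auxiliary information inherited from a reference patch, nothing written
    return payload

bitstream = encode_patch(FIRST, {"u0": 3, "v0": 7, "du": 16, "dv": 16})
bitstream += encode_patch(THIRD, {})
print(len(bitstream))                   # 9 bytes + 1 byte
```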
-
Patent number: 12205341
Abstract: The present invention relates to a neural network-based high-resolution image restoration method and system, including: performing feature extraction on a target frame in a network input to obtain a first feature, performing feature extraction on a first frame and an adjacent frame and an optical flow between the first frame and the adjacent frame to obtain a second feature, and concatenating the first feature and the second feature to obtain a shallow layer feature; performing feature extraction and refinement on the shallow layer feature to obtain a plurality of output first features and a plurality of output second features; performing feature decoding on the plurality of output second features, and concatenating decoded features along channel dimensionality to obtain features; and performing weight distribution on the features to obtain final features, and restoring an image. The present invention can effectively help to improve image quality.
Type: Grant
Filed: January 7, 2021
Date of Patent: January 21, 2025
Assignee: SOOCHOW UNIVERSITY
Inventors: Jianling Hu, Lihang Gao, Dong Liao, Jianfeng Yang, Honglong Cao
-
Patent number: 12204653
Abstract: A system and method, in particular a computer-implemented method, for determining a perturbation for attacking and/or validating an association tracker. The method includes providing digital image data that includes an object, determining with the digital image data a first feature that characterizes the object, providing, in particular from a storage, a second feature that characterizes a tracked object, and determining the perturbation depending on a measure of a similarity between the first feature and the second feature.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 21, 2025
Assignee: ROBERT BOSCH GMBH
Inventors: Anurag Pandey, Jan Hendrik Metzen, Nicole Ying Finnie, Volker Fischer
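A toy, hedged sketch of the core idea: craft a perturbation that lowers the similarity between the feature of the current detection and the stored feature of a tracked object. A linear feature extractor, cosine similarity, and a single signed gradient step stand in for the real detector and tracker.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

rng = np.random.default_rng(3)
W = rng.standard_normal((128, 32 * 32))        # toy linear feature extractor
image = rng.random(32 * 32)                    # flattened image patch of the object
tracked_feature = W @ (image + 0.01 * rng.standard_normal(32 * 32))  # stored feature

# Gradient of the cosine similarity w.r.t. the image (exact for a linear extractor),
# followed by a small signed step that decreases the similarity measure.
f = W @ image
n_f, n_t = np.linalg.norm(f), np.linalg.norm(tracked_feature)
grad_f = tracked_feature / (n_f * n_t) - (f @ tracked_feature) * f / (n_f**3 * n_t)
grad_image = W.T @ grad_f
epsilon = 0.01
perturbed = np.clip(image - epsilon * np.sign(grad_image), 0.0, 1.0)

print(cosine(W @ image, tracked_feature), cosine(W @ perturbed, tracked_feature))
```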
-
Patent number: 12198412
Abstract: An information processing device is provided that includes an operation control unit which controls the operations of an autonomous mobile object that performs an action according to a recognition operation. Based on the detection of the start of teaching related to pattern recognition learning, the operation control unit instructs the autonomous mobile object to obtain information regarding the learning target that is to be learnt in a corresponding manner to a taught label. Moreover, an information processing method is provided that is implemented in a processor and that includes controlling the operations of an autonomous mobile object which performs an action according to a recognition operation. Based on the detection of the start of teaching related to pattern recognition learning, the controlling of the operations includes instructing the autonomous mobile object to obtain information regarding the learning target that is learnt in a corresponding manner to a taught label.
Type: Grant
Filed: November 14, 2023
Date of Patent: January 14, 2025
Assignee: SONY GROUP CORPORATION
Inventors: Masato Nishio, Yuhei Yabe, Tomoo Mizukami
-
Patent number: 12198451
Abstract: The present disclosure relates to a method of matching a text with a design performed by an apparatus for matching a text with a design. According to an embodiment of the present disclosure, the method may comprise acquiring an image from information including images and texts; learning features of the acquired image; extracting texts from the information and performing learning about a pair of an extracted text and the acquired image; extracting a trend word that appears at least a predetermined reference number of times among the extracted texts; performing learning about a pair of the trend word and the acquired image; and identifying a design feature matched with the trend word among learned features of the image.
Type: Grant
Filed: December 29, 2020
Date of Patent: January 14, 2025
Assignee: DESIGNOVEL
Inventors: Woo Sang Song, Ki Young Shin, Jian Ri Li
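A hedged sketch of the trend-word step only: count words across the texts extracted alongside each image, keep those appearing at least a reference number of times, and pair them with the image identifiers they came from. The data and threshold are invented for illustration.

```python
from collections import Counter, defaultdict

records = [  # (image_id, extracted_text) stand-ins for crawled posts
    ("img1", "oversized denim jacket"),
    ("img2", "denim skirt with pleats"),
    ("img3", "pleats everywhere this season denim"),
]
min_count = 2  # the predetermined reference number of times

counts = Counter(w for _, text in records for w in text.split())
trend_words = {w for w, c in counts.items() if c >= min_count}   # {'denim', 'pleats'}

pairs = defaultdict(list)
for image_id, text in records:
    for w in set(text.split()) & trend_words:
        pairs[w].append(image_id)       # (trend word, image) pairs used for learning
print(dict(pairs))
```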
-
Patent number: 12190561
Abstract: A method for the clustering and identification of animals in acquired images based on physical traits is provided. A trait feature is a scalar or vector quantity that is a property of a trait, and trait distance is a measure of the discrepancy between the same trait features of two animals. Several different ways of implementing clustering using trait features are also introduced.
Type: Grant
Filed: May 24, 2023
Date of Patent: January 7, 2025
Inventors: Stephanie Sujin Choi, Hyeong In Choi
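An illustrative sketch of trait features as vectors, trait distance as the summed discrepancy between the same traits of two animals, and one simple greedy threshold clustering over that distance; the traits and threshold are assumptions.

```python
import numpy as np

def trait_distance(a, b):
    """Sum of per-trait Euclidean discrepancies between two animals."""
    return sum(float(np.linalg.norm(a[t] - b[t])) for t in a)

animals = [
    {"coat_pattern": np.array([0.90, 0.10]), "body_size": np.array([1.20])},
    {"coat_pattern": np.array([0.88, 0.12]), "body_size": np.array([1.25])},
    {"coat_pattern": np.array([0.20, 0.80]), "body_size": np.array([0.70])},
]

clusters, threshold = [], 0.3
for animal in animals:
    for cluster in clusters:
        if trait_distance(animal, cluster[0]) <= threshold:  # join an existing cluster
            cluster.append(animal)
            break
    else:                                                    # otherwise start a new one
        clusters.append([animal])
print(len(clusters))    # 2: the first two animals group together
```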
-
Patent number: 12190596
Abstract: The travelling route recognition unit recognizes, based on an image captured by an imaging apparatus capturing an area in the vicinity of the vehicle, a travelling route where the vehicle travels. The classifying unit classifies the travelling route into at least one of a predetermined plurality of road models. The edge point extracting unit extracts edge points necessary for expressing the road model classified by the classifying unit among edge points indicating a boundary of the travelling route recognized by the travelling route recognition unit. The parameter generation unit correlates road model information indicating the road model classified by the classifying unit with the edge points extracted by the edge point extracting unit and generates a travelling route parameter indicating the travelling route recognized by the travelling route recognition unit. The transmission unit transmits the travelling route parameter generated by the parameter generation unit to the server.
Type: Grant
Filed: October 21, 2021
Date of Patent: January 7, 2025
Assignee: DENSO CORPORATION
Inventors: Kentarou Shiota, Naoki Nitanda, Kazuma Ishigaki, Shinya Taguchi
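A hedged sketch of the downstream idea: fit the extracted boundary edge points, classify the segment into a "straight" or "curved" road model, and keep only the parameters that model needs. The models, curvature threshold, and payload format are assumptions, not DENSO's implementation.

```python
import numpy as np

def fit_route(edge_points, curvature_threshold=1e-3):
    x, y = edge_points[:, 0], edge_points[:, 1]
    a2, a1, a0 = np.polyfit(x, y, 2)              # quadratic boundary fit
    if abs(a2) < curvature_threshold:             # near-zero curvature -> straight model
        return {"road_model": "straight", "params": list(np.polyfit(x, y, 1))}
    return {"road_model": "curved", "params": [a2, a1, a0]}

points = np.column_stack([np.linspace(0, 50, 20),
                          0.002 * np.linspace(0, 50, 20) ** 2 + 1.5])
travelling_route_parameter = fit_route(points)    # would be sent to the server
print(travelling_route_parameter)
```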