Patents Examined by David F Dunphy
  • Patent number: 11756676
    Abstract: A plurality of analysis functions, each corresponding to an organ, are managed, and organ information is stored so as to correlate with a corresponding type of analysis function. The organ information indicates which of a plurality of regions included in the organ is to be subjected to thinning. Specification of one of the analysis functions is received from a user, and medical image data is acquired. A plurality of regions of an organ included in the acquired medical image data are identified. From the identified plurality of regions of the organ, a region to be subjected to thinning is determined on the basis of the stored organ information and the received type of analysis function. Thinning is performed on the determined region of the organ. An image of the thinned region is displayed together with an image of a region not subjected to thinning.
    Type: Grant
    Filed: November 10, 2021
    Date of Patent: September 12, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tsuyoshi Sakamoto, Yusuke Imasugi
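The stored organ information described above amounts to a lookup from analysis function to the organ regions designated for thinning. A minimal sketch of that lookup, using hypothetical function names, organ labels, and region labels (none of these identifiers appear in the patent):

```python
# Hypothetical mapping: analysis function -> organ -> regions to thin.
# All names here are illustrative assumptions, not from the patent.
ORGAN_INFO = {
    "airway_analysis": {"lung": ["bronchi"]},
    "vessel_analysis": {"liver": ["portal_vein", "hepatic_artery"]},
}

def regions_to_thin(analysis_function, organ, identified_regions):
    """Return the subset of identified regions that should be thinned
    for the user-selected analysis function."""
    wanted = ORGAN_INFO.get(analysis_function, {}).get(organ, [])
    return [r for r in identified_regions if r in wanted]

# Only the regions relevant to the chosen analysis are thinned; the
# remaining regions would be displayed un-thinned alongside them.
selected = regions_to_thin("vessel_analysis", "liver",
                           ["portal_vein", "parenchyma", "hepatic_artery"])
```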
  • Patent number: 11747823
    Abstract: The described positional awareness techniques employ sensory data gathering and analysis hardware and, with reference to specific example implementations, introduce improvements in sensor use, techniques, and hardware design that enable specific embodiments to provide positional awareness to machines with improved speed and accuracy. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: September 5, 2023
    Assignee: Trifo, Inc.
    Inventors: Zhe Zhang, Grace Tsai, Shaoshan Liu
  • Patent number: 11741753
    Abstract: Generating visual data by defining a first action into a first set of objects and a corresponding first set of motions, and defining a second action into a second set of objects and a corresponding second set of motions. A relationship is then determined between the second action and the first action in terms of relationships between the corresponding constituent objects and motions. Objects and motions are detected from visual data of the first action. Visual data is composed for the second action by transforming the constituent objects and motions detected in the first action based on the corresponding determined relationships.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: August 29, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nalini K. Ratha, Sharathchandra Pankanti, Lisa Marie Brown
  • Patent number: 11741756
    Abstract: Systems and methods are presented for generating statistics associated with the performance of a participant in an event, wherein pose data associated with the participant, performing in the event, are processed in real time. Pose data associated with the participant may comprise positional data of a skeletal representation of the participant. Actions performed by the participant may be determined based on a comparison of segments of the participant's pose data to motion patterns associated with actions of interest.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: August 29, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Kevin John Prince, Carlos Augusto Dietrich, Dirk Van Dall
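The comparison of pose-data segments to stored motion patterns can be pictured as a nearest-template match. A minimal sketch under stated assumptions: the patterns, the distance measure (mean absolute difference), and the threshold are illustrative, not the patented method.

```python
# Illustrative matching of a pose-data segment against motion patterns.
def segment_distance(segment, pattern):
    """Mean absolute difference between two equal-length pose sequences."""
    return sum(abs(a - b) for a, b in zip(segment, pattern)) / len(pattern)

def classify_action(segment, motion_patterns, threshold=1.0):
    """Return the action whose pattern best matches the segment,
    or None if no pattern is close enough."""
    best_action, best_dist = None, float("inf")
    for action, pattern in motion_patterns.items():
        d = segment_distance(segment, pattern)
        if d < best_dist:
            best_action, best_dist = action, d
    return best_action if best_dist <= threshold else None

# Toy 1-D "pose" trajectories; real pose data would be joint positions.
patterns = {"jump": [0.0, 1.0, 2.0, 1.0, 0.0],
            "run":  [0.5, 0.5, 0.5, 0.5, 0.5]}
action = classify_action([0.1, 1.1, 1.9, 1.0, 0.1], patterns)
```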
  • Patent number: 11741368
    Abstract: In one aspect, hierarchical image segmentation is applied to an image formed of a plurality of pixels, by classifying the pixels according to a hierarchical classification scheme, in which at least some of those pixels are classified by a parent level classifier in relation to a set of parent classes, each of which is associated with a subset of child classes, and each of those pixels is also classified by at least one child level classifier in relation to one of the subsets of child classes, wherein each of the parent classes corresponds to a category of visible structure, and each of the subset of child classes associated with it corresponds to a different type of visible structure within that category.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: August 29, 2023
    Assignee: Five AI Limited
    Inventors: John Redford, Sina Samangooei
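The parent/child labelling scheme described in the abstract can be sketched as a two-stage dispatch: a parent-level classifier picks a category, then the child-level classifier for that category refines it. The class names and stand-in classifiers below are assumptions for illustration; real ones would be neural networks.

```python
# Parent classes map to disjoint subsets of child classes (illustrative).
PARENT_TO_CHILDREN = {
    "vehicle": ["car", "truck", "bus"],
    "road_marking": ["lane_line", "crosswalk"],
}

def classify_pixel(pixel, parent_clf, child_clfs):
    """Classify a pixel at the parent level, then refine with the
    child-level classifier associated with that parent class."""
    parent = parent_clf(pixel)
    child = child_clfs[parent](pixel)
    assert child in PARENT_TO_CHILDREN[parent]
    return parent, child

# Toy classifiers on a scalar "pixel" value.
parent_clf = lambda px: "vehicle" if px > 0.5 else "road_marking"
child_clfs = {
    "vehicle": lambda px: "car",
    "road_marking": lambda px: "lane_line",
}
label = classify_pixel(0.9, parent_clf, child_clfs)
```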
  • Patent number: 11741687
    Abstract: Systems, methods, and computer-readable media that store instructions for configuring spanning elements of a signature generator.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: August 29, 2023
    Assignee: CORTICA LTD.
    Inventors: Igal Raichelgauz, Adrian Kaho Chan
  • Patent number: 11734827
    Abstract: Systems and methods for user-guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame with the frame segmentation overlain on it, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: August 22, 2023
    Assignee: Matterport, Inc.
    Inventor: Gary Bradski
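The segment-correct-retrain loop described above can be sketched as follows. This is a minimal sketch, assuming a trivial stand-in "network" that memorizes corrections; the real method trains a segmentation network on user clicks.

```python
# Toy stand-in for a segmentation network that learns from corrections.
class TinySegmenter:
    def __init__(self):
        self.corrections = {}

    def segment(self, frame):
        # Default every pixel to background (0), then apply learned fixes.
        mask = {p: 0 for p in frame}
        mask.update(self.corrections)
        return mask

    def train(self, correction):
        self.corrections.update(correction)

def overtrain_on_frame(net, frame, user_corrections, iterations=3):
    """Iterate segment -> correct -> train on one frame, as described."""
    for _ in range(iterations):
        net.segment(frame)            # would be displayed to the user
        net.train(user_corrections)   # in practice, the user marks errors
    return net.segment(frame)

frame = ["p0", "p1", "p2"]
final_mask = overtrain_on_frame(TinySegmenter(), frame, {"p1": 1})
```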
  • Patent number: 11727585
    Abstract: Provided is an information processing device including an acquisition unit that acquires a first captured image, a second captured image, and a distance to a subject, and a derivation unit that derives an imaging position distance, which is the distance between the first imaging position and the second imaging position, on the basis of: a plurality of pixel coordinates specifying more than three pixels that are present in the same planar region as the emission position irradiated with the directional light beam in real space and that correspond to positions in real space in each of the first and second captured images acquired by the acquisition unit; emission position coordinates derived on the basis of the distance acquired by the acquisition unit; a focal length of an imaging lens; and dimensions of imaging pixels.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: August 15, 2023
    Assignee: FUJIFILM CORPORATION
    Inventor: Tomonori Masuda
  • Patent number: 11723344
    Abstract: Aspects of this disclosure include a system for providing non-contact, computer-vision based monitoring of the health and pollination activity of a beehive. The system may include a camera positioned proximate to a beehive. The camera may include an onboard processor that analyzes video of the beehive captured by the camera and calculates an activity value that estimates the number of bees moving about the beehive. The calculated activity values may be uploaded to a server where they can be accessed via a user device. The user device may allow the user to display interactive plots of the activity values over a variety of time bases. The disclosed beehive monitoring system relies on relatively low-cost hardware and requires neither modification to the hive nor special constraints on the placement of the camera.
    Type: Grant
    Filed: January 17, 2022
    Date of Patent: August 15, 2023
    Assignee: KELTRONIX, INC.
    Inventors: Kelton Temby, Jonathan Simpson
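One plausible way to compute a per-frame activity value of the kind described is frame differencing: count the pixels that change noticeably between consecutive frames. A hedged sketch; the threshold, flattened-frame format, and differencing approach are assumptions, not the patented algorithm.

```python
# Estimate activity as the number of pixels whose intensity changed
# by more than `threshold` since the previous frame.
def activity_value(prev_frame, frame, threshold=10):
    return sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > threshold)

# Two tiny grayscale frames flattened to lists: three pixels moved.
prev = [0, 0, 50, 200, 90]
curr = [0, 30, 50, 120, 60]
value = activity_value(prev, curr)
```

In practice the onboard processor would run this (or a more robust motion estimate) continuously and upload the resulting time series to the server for plotting.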
  • Patent number: 11721130
    Abstract: The present disclosure relates to a weakly supervised video activity detection method and system based on iterative learning. The method includes: extracting spatial-temporal features of a video that contains actions; constructing a neural network model group; training a first neural network model according to the class label of the video, a class activation sequence output by the first neural network model, and a video feature output by the first neural network model; training the next neural network model according to the class label of the video, a pseudo temporal label output by the current neural network model, a class activation sequence output by the next neural network model, and a video feature output by the next neural network model; and performing action detection on the test video according to the neural network model corresponding to the highest detection accuracy value.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: August 8, 2023
    Assignee: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Yan Song, Rong Zou, Xiangbo Shu
  • Patent number: 11720648
    Abstract: A deep learning machine includes a classification unit having a labeling criterion and configured to label input data according to the labeling criterion, a conversion unit configured to integerize input data labeled as a first type requiring integerization among the input data labeled by the classification unit, a first learning data unit configured to receive the input data of the first type integerized through the conversion unit and to infer output data, and a second learning data unit configured to receive input data labeled as a second type requiring no integerization and to infer the output data.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: August 8, 2023
    Assignee: HYUNDAI MOBIS CO., LTD.
    Inventor: Hyuk Lee
  • Patent number: 11721102
    Abstract: A method of identifying fixing in a tennis match includes collecting one or more metrics related to a player in the tennis match using one or more computing devices, comparing the collected metrics to one or more standards, and determining, based on the comparison and using an algorithm that identifies a pattern or recurrence of unusual metrics, whether the player has deliberately lost one or more points in the tennis match.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: August 8, 2023
    Inventor: Fredric Goldstein
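The comparison step can be pictured as outlier detection against a standard, with a flag raised only when unusual values recur. A minimal sketch; the metric, the z-score test, and the thresholds are illustrative assumptions, not the patented algorithm.

```python
def unusual_points(metrics, standard_mean, standard_std, z_thresh=2.0):
    """Return indices of points whose metric deviates from the standard
    by more than z_thresh standard deviations."""
    return [i for i, m in enumerate(metrics)
            if abs(m - standard_mean) > z_thresh * standard_std]

def flag_possible_fixing(metrics, standard_mean, standard_std, min_count=2):
    """Flag only a recurring pattern of unusual metrics, not a one-off."""
    return len(unusual_points(metrics, standard_mean, standard_std)) >= min_count

# Hypothetical double-fault rate per set vs. a tour standard of 2 +/- 1.
flagged = flag_possible_fixing([2.1, 8.0, 7.5],
                               standard_mean=2.0, standard_std=1.0)
```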
  • Patent number: 11715286
    Abstract: Disclosed is a method for recognizing a marine object based on hyperspectral data including collecting target hyperspectral data; preprocessing the target hyperspectral data; and detecting and identifying an object included in the target hyperspectral data based on a marine object detection and identification model, trained through learning of the detection and identification of the marine object. According to the present invention, the preprocessing and processing of the hyperspectral data collected in real time according to a communication state may be performed in the sky or on the ground.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: August 1, 2023
    Assignee: KOREA INSTITUTE OF OCEAN SCIENCE & TECHNOLOGY
    Inventors: Dongmin Seo, Sangwoo Oh
  • Patent number: 11710255
    Abstract: An object identification and collection method is disclosed. The method includes receiving a pick-up path that identifies a route in which to guide an object-collection system over a target geographical area to pick up objects, determining a current location of the object-collection system relative to the pick-up path, and guiding the object-collection system along the pick-up path over the target geographical area based on the current location. The method further includes capturing images in a direction of movement of the object-collection system along the pick-up path, identifying a target object in the images; tracking movement of the target object through the images, determining that the target object is within range of an object picker assembly on the object-collection system based on the tracked movement of the target object, and instructing the object picker assembly to pick up the target object.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: July 25, 2023
    Assignee: TerraClear Inc.
    Inventors: Brent Ronald Frei, Dwight Galen McMaster, Michael Racine, Jacobus du Preez, William David Dimmit, Isabelle Butterfield, Clifford Holmgren, Dafydd Daniel Rhys-Jones, Thayne Kollmorgen, Vivek Ullal Nayak
  • Patent number: 11710305
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for validating or rejecting automated detections of an entity being tracked within an environment in order to generate a track representative of a travel path of the entity within the environment. The automated detections of the entity may be generated by an artificial intelligence (AI) algorithm. The track may represent a travel path of the tracked entity across a set of image frames. The track may contain one or more tracklets, where each tracklet includes a set of validated detections of the entity across a subset of the set of image frames and excludes any rejected detections of the entity. Each tracklet may also contain one or more user-provided detections in scenarios in which the tracked entity is observed or otherwise known to be present in an image frame but automated detection of the entity did not occur.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: July 25, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Leah Anderson, Mark Montoya, Andrew Elder, Alisa Le, Ezra Zigmond, Jocelyn Rivero
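The track/tracklet structure described above can be sketched as a small data model in which validated and user-provided detections are kept and rejected AI detections are excluded. Field names and the validation flag are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    frame: int
    bbox: tuple
    source: str = "ai"      # "ai" (automated) or "user" (hand-marked)
    validated: bool = False

@dataclass
class Tracklet:
    detections: list = field(default_factory=list)

    def add(self, det):
        # Rejected AI detections never enter the tracklet; user-provided
        # detections are always kept.
        if det.validated or det.source == "user":
            self.detections.append(det)

t = Tracklet()
t.add(Detection(0, (1, 2, 3, 4), validated=True))
t.add(Detection(1, (1, 2, 3, 4), validated=False))   # rejected, dropped
t.add(Detection(2, (1, 2, 3, 4), source="user"))     # user-provided, kept
kept_frames = [d.frame for d in t.detections]
```

A full track would then be an ordered list of such tracklets covering the entity's travel path across the image frames.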
  • Patent number: 11704814
    Abstract: In various examples, an adaptive eye tracking machine learning model engine ("adaptive-model engine") for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline ("adaptive-model training pipeline") that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model: a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model, which may be trained on gaze vector estimation in surround environments and ensembled based on a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: July 18, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Hairong Jiang, Nishant Puri, Rajath Shetty, Shagan Sah
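The ensembling step can be pictured as combining gaze vectors from several variant models with per-model weights. A minimal sketch, assuming a simple weighted average and that the weights come from some per-model quality metric; the abstract does not specify the combination rule.

```python
def ensemble_gaze(predictions, weights):
    """Weighted average of per-model gaze vectors (x, y, z)."""
    total = sum(weights)
    return tuple(sum(w * p[k] for p, w in zip(predictions, weights)) / total
                 for k in range(3))

# Two variant models' gaze estimates; the second is weighted higher
# (e.g., because its landmark metrics look more reliable).
preds = [(0.1, 0.0, 1.0), (0.3, 0.0, 1.0)]
gaze = ensemble_gaze(preds, weights=[1.0, 3.0])
```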
  • Patent number: 11704810
    Abstract: Systems and techniques for detecting a crop-related row in an image are described herein. An image that includes several rows, the rows including crop rows and furrows, can be obtained. The image can be segmented to produce a set of image segments. A filter can be shifted across respective segments of the set of image segments to get a set of positions. A line can be fit to members of the set of positions, the line representing a crop row or furrow.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: July 18, 2023
    Assignee: Raven Industries, Inc.
    Inventors: Yuri Sneyders, John D. Preheim, Jeffrey Allen Van Roekel
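The steps listed in the abstract (shift a filter across each segment to get a position, then fit a line through the positions) can be sketched as follows. The filter, the toy pixel data, and the use of plain least squares are assumptions for illustration.

```python
def best_position(row, filt):
    """Slide `filt` across `row` and return the offset of the highest
    correlation response."""
    best_i, best_score = 0, float("-inf")
    for i in range(len(row) - len(filt) + 1):
        score = sum(r * f for r, f in zip(row[i:i + len(filt)], filt))
        if score > best_score:
            best_i, best_score = i, score
    return best_i

def fit_line(points):
    """Least-squares line y = m*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

# Each segment row has a bright crop band; the filter locates its offset.
segments = [[0, 0, 9, 9, 0, 0], [0, 0, 0, 9, 9, 0], [0, 0, 0, 0, 9, 9]]
positions = [(y, best_position(row, [1, 1])) for y, row in enumerate(segments)]
slope, intercept = fit_line(positions)
```

The fitted line (here drifting one pixel per segment) would represent the crop row or furrow in image coordinates.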
  • Patent number: 11698955
    Abstract: Some implementations provide input-triggered user verification. This may involve triggering a user verification (e.g., capture of an image, sound, fingerprint, etc.) to verify a user's identity based on input (e.g., typing) received at the device. Triggering the user verification based on receiving input may help ensure that the image, sound, fingerprint, etc. is captured at a time when the user is close to the device, touching the fingerprint sensor, and/or in view of the camera during the capturing. Some implementations provide user verification based on a user identification of a previously selected image. This may involve using an inmate-selected picture or other image to recover a forgotten alphanumeric reference. Some implementations of the invention disclosed herein provide user verification based on a computer-vision identification of a wearable identification tag. This may involve using an image of the user's identification tag worn on the user's wrist to verify the user's identity.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: July 11, 2023
    Assignee: Confinement Telephony Technology, LLC
    Inventors: Timothy Edwin Pabon, John Vincent Townsend, III, Robert James Deglman, Rick Allen Lubbehusen
  • Patent number: 11694428
    Abstract: Disclosed is a method for detecting Ophiocephalus argus cantor under intra-class occlusion based on cross-scale layered feature fusion, including image collection, image processing, and a network model. Collected images are labeled and image sizes are adjusted to obtain input images, which are input into an object detection network, integrated by convolution, and inserted into cross-scale layered feature fusion modules. The method includes dividing all features input into the cross-scale layered feature fusion modules into n layers composed of s feature mapping subsets, fusing the features of each feature mapping subset with those of the other feature mapping subsets, and connecting them; carrying out a convolution operation and outputting a training result; adjusting network parameters with a loss function to obtain parameters for a network model; and inputting final output candidate boxes into a non-maximum suppression module to screen correct prediction boxes, so that a prediction result is obtained.
    Type: Grant
    Filed: March 15, 2023
    Date of Patent: July 4, 2023
    Assignee: Ludong University
    Inventors: Jun Yue, Yifei Zhang, Qing Wang, Zhenbo Li, Guangjie Kou, Jun Zhang, Shixiang Jia, Ning Li
  • Patent number: 11694426
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for identifying traffic control features based on telemetry patterns within digital image representations of vehicle telemetry information. The disclosed systems can generate a digital image representation based on collected telemetry information to represent the frequency of different speed-location combinations for transportation vehicles passing through a traffic area. The disclosed systems can also apply a convolutional neural network to analyze the digital image representation and generate a predicted classification of a type of traffic control feature that corresponds to the digital image representation of vehicle telemetry information. The disclosed systems further train the convolutional neural network to determine traffic control features based on training data.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: July 4, 2023
    Assignee: Lyft, Inc.
    Inventors: Deeksha Goyal, Han Suk Kim, James Kevin Murphy, Albert Yuen
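The "digital image representation" described above is essentially a 2D histogram over speed-location combinations. A minimal sketch, assuming scalar locations along a road segment and illustrative bin counts and ranges:

```python
def telemetry_image(samples, n_loc_bins=4, n_speed_bins=4,
                    max_loc=100.0, max_speed=20.0):
    """Return an n_speed_bins x n_loc_bins grid counting how often each
    (location, speed) combination occurs in the telemetry samples."""
    grid = [[0] * n_loc_bins for _ in range(n_speed_bins)]
    for loc, speed in samples:
        i = min(int(speed / max_speed * n_speed_bins), n_speed_bins - 1)
        j = min(int(loc / max_loc * n_loc_bins), n_loc_bins - 1)
        grid[i][j] += 1
    return grid

# Vehicles slowing to a stop near location 50 (e.g., at a stop sign):
# a convolutional network could classify this pattern from the grid.
samples = [(10.0, 15.0), (30.0, 10.0), (48.0, 1.0), (52.0, 0.5)]
image = telemetry_image(samples)
```

Aggregating many vehicle traces this way yields a frequency image whose pattern (e.g., everyone stopping at one location vs. only some vehicles stopping) distinguishes traffic control features such as stop signs from traffic lights.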