Patents Examined by Soo Jin Park
-
Patent number: 11615527
Abstract: A system for automatically analyzing a video recording of a colonoscopy includes a processor and memory storing instructions, which when executed by the processor, cause the processor to receive the video recording of the colonoscopy performed on the colon and detect informative frames in the video recording. A frame is informative if the clarity of the frame is above a threshold or if the frame includes clinically relevant information about the colon. The instructions cause the processor to generate scores indicating severity levels of a disease for a plurality of the informative frames, estimate locations of the plurality of the informative frames in the colon, and generate an output indicating a distribution of the scores over one or more segments of the colon by combining the scores generated for the plurality of the informative frames and the estimated locations of the plurality of the informative frames in the colon.
Type: Grant
Filed: May 15, 2020
Date of Patent: March 28, 2023
Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Kayvan Najarian, Heming Yao, Sayedmohammadreza Soroushmehr, Jonathan Gryak, Ryan W. Stidham
-
Patent number: 11599993
Abstract: At least one processor of an apparatus functions as a generation unit that identifies at least an outer edge of a specific region in a surface layer of an object and generates outer edge candidates, and as a control unit that selects, based on an instruction from a user, an outer edge candidate from among the generated outer edge candidates.
Type: Grant
Filed: March 24, 2020
Date of Patent: March 7, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventors: Daisuke Gunji, Tomonobu Hiraishi
-
Patent number: 11594009
Abstract: A region indicating an object to be detected is accurately detected even when the object is not prominent in the images and the input includes images containing regions that are not the object to be detected but share a common appearance across the images. A local feature extraction unit 20 extracts a local feature of a feature point from each image included in an input image set. An image-pair common pattern extraction unit 30 extracts, from each image pair selected from images included in the image set, a common pattern constituted by a set of feature point pairs that have similar local features extracted by the local feature extraction unit 20 in images constituting the image pair, the set of feature point pairs being geometrically similar to each other.
Type: Grant
Filed: May 7, 2019
Date of Patent: February 28, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shuhei Tarashima, Takashi Hosono, Yukito Watanabe, Jun Shimamura, Tetsuya Kinebuchi
-
Patent number: 11592557
Abstract: A method, apparatus and computer program product for fusing information, to be performed by a device comprising a processor and a memory device, the method comprising: receiving one or more distance readings related to the environment from a Lidar device emitting light in a predetermined wavelength; receiving an image captured by a multi spectra camera, the multi spectra camera being sensitive at least to visible light and to the predetermined wavelength; identifying within the image points or areas having the predetermined wavelength; identifying one or more objects within the image; identifying correspondence between each of the light points or areas and one of the readings; associating the object with a distance, based on the reading and points or areas within the object; and outputting indication of the object and the distance associated with the at least one object.
Type: Grant
Filed: June 1, 2017
Date of Patent: February 28, 2023
Assignee: OSR ENTERPRISES AG
Inventors: Yosef Ben-Ezra, Samuel Hazak, Yaniv Ben-Haim, Yoni Schiff, Shai Nissim
-
Patent number: 11568167
Abstract: In some embodiments, a first plurality of representations are extracted from a first data set. A first set of distributions are generated based on the first plurality of representations. A machine learning model is trained based on the first plurality of representations and the first set of distributions. A second plurality of representations are extracted from a second data set different from the first data set. The machine learning model is executed based on the second plurality of representations to produce a second set of distributions. An anomaly score is determined for each datum from the second data set to produce a set of anomaly scores. The set of anomaly scores are determined based on the first set of distributions and the second set of distributions. A notification is generated when at least one anomaly score from the set of anomaly scores is larger than a predetermined threshold.
Type: Grant
Filed: May 25, 2022
Date of Patent: January 31, 2023
Assignee: Arthur AI, Inc.
Inventors: Keegan E. Hines, John P. Dickerson, Karthik Rao, Rowan Cheung, Reese M. E. Hyde
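The scoring step in the abstract above (compare distributions over test representations against distributions fit on training representations, and alert above a predetermined threshold) can be sketched minimally. This toy version fits a per-dimension Gaussian to the training representations and scores each test datum by its largest z-score; it is an illustrative stand-in, not the patented method, and the names (`fit_distribution`, `anomaly_scores`) are invented for the sketch.

```python
import numpy as np

def fit_distribution(train_reprs):
    # Per-dimension Gaussian over the training representations
    return train_reprs.mean(axis=0), train_reprs.std(axis=0) + 1e-8

def anomaly_scores(test_reprs, mean, std):
    # Each datum's score is its largest per-dimension z-score
    # against the training distribution
    return np.abs((test_reprs - mean) / std).max(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))
mean, std = fit_distribution(train)

test = rng.normal(0.0, 1.0, size=(10, 8))
test[0] += 10.0                 # inject one obvious outlier
scores = anomaly_scores(test, mean, std)

threshold = 5.0                 # predetermined threshold
alerts = np.flatnonzero(scores > threshold)
```

In the real system the "distributions" could be richer than per-dimension Gaussians, but the alert-on-threshold logic would look much the same.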
-
Patent number: 11557033
Abstract: A method, a computer program product, and a computer system for classifying bacteria. The method comprises extracting a morphology signature corresponding to one or more bacteria and extracting a motility signature corresponding to the one or more bacteria. The method further comprises merging the morphology signature and the motility signature into a merged vector signature and classifying the one or more bacteria based on the merged vector signature.
Type: Grant
Filed: August 9, 2019
Date of Patent: January 17, 2023
Assignee: International Business Machines Corporation
Inventors: Venkat K. Balagurusamy, Vince Siu, Sahil Dureja, Prabhakar Kudva, Joseph Ligman, Matthew Harrison Tong, Donna N Eng Dillenberger, Ashwin Dhinesh Kumar
-
Patent number: 11544953
Abstract: Systems, methods and media are disclosed for identifying the crossing of a virtual barrier. A person in a 3D image of a room may be circumscribed by a bounding box. The position of the bounding box may be monitored over time, relative to the virtual barrier. If the bounding box touches or crosses the virtual barrier, an alert may be sent to the person being monitored, a caregiver or a clinician. Bounding box tracking may be used in addition to or instead of an initial tracking process, such as skeletal tracking.
Type: Grant
Filed: May 12, 2021
Date of Patent: January 3, 2023
Assignee: CERNER INNOVATION, INC.
Inventor: Michael Kusens
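Once the person is circumscribed by an axis-aligned box, the barrier check described above reduces to simple geometry. A minimal sketch, assuming the virtual barrier is a vertical plane at a fixed x coordinate (the patent's barrier may be an arbitrary surface, and all names here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Axis-aligned box circumscribing a person in the 3D image
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

def crosses_barrier(box: BoundingBox, barrier_x: float) -> bool:
    # Alert condition: the box touches or crosses the plane x = barrier_x
    return box.x_min <= barrier_x <= box.x_max

in_bed = BoundingBox(0.0, 0.8, 0.0, 2.0, 0.0, 0.6)
leaving = BoundingBox(0.7, 1.5, 0.0, 2.0, 0.0, 0.6)
BARRIER_X = 1.0   # hypothetical barrier position at the bed's edge
```

Running the check per frame on the tracked box gives the over-time monitoring the abstract describes; the alert dispatch itself is omitted.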
-
Patent number: 11532164
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for group identification and monitoring. The methods, systems, and apparatus include actions of determining, using one or more first frames of a video sequence, that two people each of whom is depicted in at least one of the first frames satisfy a grouping criteria that indicates that the two people should be grouped for video analysis, determining, using one or more second frames of the video sequence at least some of which were captured after the one or more first frames and which depict at least one of the two people, that the two people satisfy a degrouping criteria that indicates that the two people should not be grouped for video analysis, determining that a physical distance between the two people violates a distance criteria, and providing an alert that the distance criteria is violated.
Type: Grant
Filed: May 19, 2021
Date of Patent: December 20, 2022
Assignee: Alarm.com Incorporated
Inventors: James Siemer, Allison Beach, Weihong Yin, Jeffrey A. Bedell, John Stanley
-
Patent number: 11531844
Abstract: A method is provided for non-invasively predicting characteristics of one or more cells and cell derivatives. The method includes training a machine learning model using at least one of a plurality of training cell images representing a plurality of cells and data identifying characteristics for the plurality of cells. The method further includes receiving at least one test cell image representing at least one test cell being evaluated, the at least one test cell image being acquired non-invasively and based on absorbance as an absolute measure of light, and providing the at least one test cell image to the trained machine learning model. Using machine learning based on the trained machine learning model, characteristics of the at least one test cell are predicted. The method further includes generating, by the trained machine learning model, release criteria for clinical preparations of cells based on the predicted characteristics of the at least one test cell.
Type: Grant
Filed: March 15, 2019
Date of Patent: December 20, 2022
Assignee: The United States of America, as represented by the Secretary, Department of Health & Human Services
Inventors: Kapil Bharti, Nathan A. Hotaling, Nicholas J. Schaub, Carl G. Simon
-
Patent number: 11508073
Abstract: A method for determining an angle of a tip of a ripper shank includes a controller receiving an input command. The controller estimates an angle of the tip based on one or more parameters of the input command. Further, the controller acquires a video feed of the ripper shank and detects an object in the video feed. The controller identifies the object as one of the ripper shank or a component movable with a movement of the ripper shank based on a match of a color of the object to a predefined color and a match of a profile of the object to a predefined profile. The controller correlates the profile to an angular value in a map table and sets the angular value as the actual angle of the tip, overriding the angle of the tip estimated based on the input command.
Type: Grant
Filed: April 9, 2021
Date of Patent: November 22, 2022
Assignee: Caterpillar Inc.
Inventors: David Conway Atkinson, Nolan Finch
-
Patent number: 11507771
Abstract: Provided are methods, including computer-implemented methods, devices, and computer-program products applying systems and methods for pallet identification. According to some embodiments of the invention, a pallet may be visually identified through photographs without attaching physical labels. Thus, the status of pallets may be monitored (e.g., their location and structural integrity) as they move through the supply chain.
Type: Grant
Filed: November 2, 2020
Date of Patent: November 22, 2022
Assignee: BXB Digital Pty Limited
Inventors: Michael Souder, Daniel Bricarello, Juan Castillo, Prasad Srinivasamurthy
-
Patent number: 11501447
Abstract: Systems and methods directed to performing video object segmentation are provided. In examples, video data representing a sequence of image frames and video data representing an object mask may be received at a video object segmentation server. Image features may be generated based on a first image frame of the sequence of image frames; image features may be generated based on a second image frame of the sequence of image frames; and object features may be generated based on the object mask. A transform matrix may be computed based on the image features of the first image frame and the image features of the second image frame; the transform matrix may be applied to the object features, resulting in transformed object features. A predicted object mask associated with the second image frame may be obtained by decoding the transformed object features.
Type: Grant
Filed: March 4, 2021
Date of Patent: November 15, 2022
Assignee: LEMON INC.
Inventors: Linjie Yang, Ziyu Jiang, Ding Liu, Longyin Wen
-
Patent number: 11501100
Abstract: A computer system and associated processes are disclosed for grouping similar real estate properties into contiguous neighborhoods, and for generating neighborhood-specific models capable of estimating property values within their respective neighborhoods. A clustering component uses various sources of property-level data to group properties based on measures of property similarity. For example, the clustering component may use features extracted from property images to identify properties with similar characteristics. As another example, the clustering component may measure property similarity based on how frequently specific properties are designated as comparable in appraisal reports. A model generator uses a machine learning process to determine, for specific neighborhoods, correlations between property attributes and values, and uses these correlations to generate the neighborhood-specific models.
Type: Grant
Filed: March 6, 2020
Date of Patent: November 15, 2022
Assignee: CoreLogic Solutions, LLC
Inventors: Wei Geng, Duncan Chen, Bin He, Jon Wierks, David Stiff, Howard A. Botts
-
Patent number: 11494784
Abstract: The evaluation information generating system of the present invention evaluates a ground area owned by a user, using satellite data observed by an artificial satellite, and includes a user information acquisition unit that acquires user information that is personal information of the user, a ground area information acquisition unit that acquires ground area information including a position of the ground area, a satellite data acquisition unit that acquires the satellite data, a situation detection unit that detects a situation of the ground area based on the satellite data, and an evaluation data generating unit that generates evaluation data of the user or the ground area based on the user information, ground area information, and situation of the ground area.
Type: Grant
Filed: December 28, 2017
Date of Patent: November 8, 2022
Assignee: SKY Perfect JSAT Corporation
Inventor: Daisuke Hirata
-
Patent number: 11493496
Abstract: Methods and systems for estimating a surface runoff based on a pixel scale are disclosed. In some embodiments, the method includes the following steps: (1) calculating a vegetation canopy interception water storage, a litterfall interception water storage, and a soil water storage according to an obtained original remote sensing dataset of a climate element in a study area; (2) calculating a vegetation-soil interception water conservation in the study area based on an established vegetation-soil interception water conservation estimation model according to the vegetation canopy interception water storage, the litterfall interception water storage, the soil water storage, and monthly precipitation; and (3) calculating a surface runoff in the study area based on an established water balance water conservation estimation model according to the monthly precipitation, monthly snowmelt, monthly actual evapotranspiration, and the vegetation-soil interception water conservation in the study area.
Type: Grant
Filed: April 30, 2020
Date of Patent: November 8, 2022
Assignee: Institute of Geochemistry, Chinese Academy of Sciences
Inventors: Xiaoyong Bai, Shijie Wang, Luhua Wu, Fei Chen, Miao Zhou, Yichao Tian, Guangjie Luo, Qin Li, Jinfeng Wang, Yuanhuan Xie, Yujie Yang, Chaojun Li, Yuanhong Deng, Zeyin Hu, Shiqi Tian, Qian Lu, Chen Ran, Min Liu
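Step (3) above can be read as a per-pixel monthly water balance: runoff = precipitation + snowmelt - actual evapotranspiration - vegetation-soil interception water conservation. A hedged numpy sketch of that reading (the patent's exact model terms and signs may differ; the clipping at zero is an assumption added here):

```python
import numpy as np

def surface_runoff(precip, snowmelt, evapotranspiration, conservation):
    # Monthly water balance per pixel:
    #   runoff = precipitation + snowmelt
    #            - actual evapotranspiration
    #            - vegetation-soil interception water conservation
    # Clipped at zero on the assumption that a pixel cannot
    # yield negative runoff.
    balance = precip + snowmelt - evapotranspiration - conservation
    return np.clip(balance, 0.0, None)
```

Each argument would be a raster (2D array) of monthly values in the same units (e.g., mm), so the whole study area is evaluated in one vectorized call.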
-
Patent number: 11481895
Abstract: Systems and methods are provided for automatically imaging and analyzing cell samples in an incubator. An actuated microscope operates to generate images of samples within wells of a sample container across days, weeks, or months. A plurality of images is generated for each scan of a particular well, and the images within such a scan are used to image and analyze metabolically active cells in the well. This analysis includes generating a "range image" by subtracting the minimum intensity value, across the scan, for each pixel from the maximum intensity value. This range image thus emphasizes cells or portions of cells that exhibit changes in activity over a scan period (e.g., neurons, myocytes, cardiomyocytes) while de-emphasizing regions that exhibit consistently high intensities when imaged (e.g., regions exhibiting a great deal of autofluorescence unrelated to cell activity).
Type: Grant
Filed: October 30, 2018
Date of Patent: October 25, 2022
Assignee: Sartorius BioAnalytical Instruments, Inc.
Inventors: Daniel Appledorn, Eric Endsley, Nevine Holtz, Brad Neagle, David Rock, Kirk Schroeder
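The "range image" computation is concrete enough to sketch: per-pixel maximum minus per-pixel minimum across the frames of one scan. Assuming the scan is stored as a `(n_frames, height, width)` array (a representation chosen here for illustration):

```python
import numpy as np

def range_image(scan):
    # scan: one well's scan as a (n_frames, height, width) stack.
    # Per-pixel max minus per-pixel min highlights pixels whose
    # intensity changes across the scan (metabolically active cells)
    # and suppresses constantly bright regions such as
    # activity-unrelated autofluorescence.
    return scan.max(axis=0) - scan.min(axis=0)
```

A pixel that stays constant over the scan maps to zero, regardless of how bright it is, which is exactly the de-emphasis behavior the abstract describes.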
-
Patent number: 11482022
Abstract: A method and apparatus of a device that classifies an image is described. In an exemplary embodiment, the method includes tiling at least one region of interest of the input image into a set of tiles. For each tile, the method includes extracting a feature vector of the tile by applying a convolutional neural network, wherein a feature is a local descriptor of the tile; and computing a score of the tile from the extracted feature vector, said tile score being representative of a contribution of the tile into a classification of the input image. The method also includes sorting a set of the tile scores and selecting a subset of the tile scores based on their value and/or their rank in the sorted set. The method also includes applying a classifier to the selected tile scores in order to classify the input image.
Type: Grant
Filed: February 23, 2021
Date of Patent: October 25, 2022
Assignees: Owkin, Inc., Owkin France SAS
Inventors: Pierre Courtiol, Eric W. Tramel, Marc Sanselme, Gilles Wainrib
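The sort-and-select stage over tile scores might look like the following toy sketch, assuming (as one plausible reading of "value and/or rank") that the highest- and lowest-scoring tiles are kept; the CNN feature extractor and the final classifier are omitted, and the function name is invented:

```python
import numpy as np

def select_tile_scores(tile_scores, k_top, k_bottom):
    # Sort the per-tile scores and keep only the k_top highest and
    # k_bottom lowest; these extreme tiles are the ones passed to
    # the final classifier.
    ordered = np.sort(tile_scores)
    return np.concatenate([ordered[:k_bottom], ordered[-k_top:]])
```

Keeping both extremes gives the classifier fixed-length evidence for and against the label even when the region of interest yields a variable number of tiles.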
-
Patent number: 11462054
Abstract: Embodiments of the present disclosure describe mechanisms for a radar-based indoor localization and tracking system. One example can include a monitoring unit that includes a radar source, a camera unit, and one or more processors coupled to the radar source and the camera unit. The monitoring unit is configured to generate point cloud data associated with an object; execute Point Cloud Library (PCL) preprocessing based, at least, on the point cloud data; execute Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering; execute multi-object tracking on the object; and execute an image PCL overlay based on the point cloud data to generate real-time data associated with the object.
Type: Grant
Filed: October 21, 2020
Date of Patent: October 4, 2022
Assignee: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY
Inventors: Foroohar Foroozan, Wassim Rafrafi
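DBSCAN clustering of a radar point cloud can be illustrated with a small self-contained implementation; this is a toy re-implementation for exposition, not the PCL-based pipeline the patent uses, and the data is synthetic:

```python
import numpy as np

def dbscan(points, eps, min_samples):
    # Minimal DBSCAN: a core point has >= min_samples neighbors
    # within eps (itself included); clusters grow by chaining core
    # points, and unreached points keep the noise label -1.
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_samples for nb in neighbors]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        stack = [i]                       # flood-fill one cluster
        labels[i] = cluster
        while stack:
            j = stack.pop()
            if not core[j]:
                continue                  # border point: keep, don't expand
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

# Two tight blobs of radar returns (two tracked objects) plus one stray return
points = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1], [0.9, 1.0], [1.0, 0.9],
                   [4.0, 2.0], [4.1, 2.0], [4.0, 2.1], [3.9, 2.0], [4.0, 1.9],
                   [8.0, 8.0]])
labels = dbscan(points, eps=0.3, min_samples=4)
```

DBSCAN suits radar point clouds because it needs no preset cluster count and naturally labels isolated spurious returns as noise, which is why it appears between PCL preprocessing and multi-object tracking in the pipeline above.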
-
Patent number: 11455836
Abstract: The present disclosure relates to methods and apparatuses for dynamic action detection and dynamic action control. A dynamic action detection method includes: adding an image frame in a video stream into a first queue, to obtain the first queue with a partially updated image frame; detecting a dynamic action in the partially updated image frame; and, in response to the dynamic action not matching an action detection result, updating the action detection result according to the dynamic action, the action detection result including an action reference result or a previously detected action detection result. Dynamic action detection in the present disclosure is executed according to the partially updated image frame in the first queue, so that the dynamic action can be detected more timely and determined more rapidly, making the action detection result more accurate and the detection more efficient.
Type: Grant
Filed: June 28, 2019
Date of Patent: September 27, 2022
Assignee: Shanghai SenseTime Intelligent Technology Co., Ltd.
Inventors: Tianyuan Du, Chendi Yu, Yadi Yang
-
Predicting overall survival in early stage lung cancer with feature driven local cell graphs (FeDeG)
Patent number: 11455718
Abstract: Embodiments include accessing an image of a region of tissue demonstrating cancerous pathology; detecting a plurality of cells represented in the image; segmenting a cellular nucleus of a first member of the plurality of cells and a cellular nucleus of at least one second, different member of the plurality of cells; extracting a set of nuclear morphology features from the plurality of cells; constructing a feature driven local cell graph (FeDeG) based on the set of nuclear morphology features and a spatial relationship between the cellular nuclei using a mean-shift clustering approach; computing a set of FeDeG features based on the FeDeG; providing the FeDeG features to a machine learning classifier; receiving, from the machine learning classifier, a classification of the region of tissue as a long-term or a short-term survivor, based, at least in part, on the set of FeDeG features; and displaying the classification.
Type: Grant
Filed: February 1, 2019
Date of Patent: September 27, 2022
Assignee: Case Western Reserve University
Inventors: Anant Madabhushi, Cheng Lu
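Mean-shift clustering over nuclear features is the step that groups nuclei into local cell graphs in the FeDeG construction above. A toy flat-kernel mean-shift on made-up 2-D per-nucleus features (here, area and mean intensity), written from scratch for illustration and not the patented pipeline:

```python
import numpy as np

def mean_shift(features, bandwidth, n_iter=30):
    # Flat-kernel mean-shift: each point's mode is repeatedly moved
    # to the mean of all original points within `bandwidth` of it;
    # points whose modes converge to the same place form one cluster.
    modes = features.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            near = features[np.linalg.norm(features - modes[i], axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    # Merge points whose modes coincide (within half the bandwidth)
    labels = np.full(len(modes), -1)
    cluster = 0
    for i in range(len(modes)):
        if labels[i] != -1:
            continue
        same = np.linalg.norm(modes - modes[i], axis=1) < bandwidth / 2
        labels[same] = cluster
        cluster += 1
    return labels

# Hypothetical per-nucleus features: (area, mean intensity) for eight nuclei
feats = np.array([[10.0, 0.20], [11.0, 0.25], [10.5, 0.22], [10.2, 0.18],
                  [30.0, 0.80], [31.0, 0.85], [30.5, 0.78], [29.5, 0.82]])
labels = mean_shift(feats, bandwidth=3.0)
```

In the patent the clustered space would also include the nuclei's spatial coordinates, so that each resulting cluster is a local graph of nearby, morphologically similar nuclei; graph-level statistics over those clusters are the FeDeG features.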