Patents Examined by Nay Maung
-
Patent number: 11900584
Abstract: A method for judging freshness of cultured fish product based on eye image recognition is provided, which relates to the field of image processing and includes the following steps: obtaining eye area and eye center point of each cultured fish product; obtaining a first data category and a second data category of each gray scale change sequence according to each gray scale change sequence of each eye area; calculating a first mean value and a second mean value of each gray scale change sequence; obtaining the fish eye turbidity of each eye area according to the first mean value and the second mean value; obtaining the fish eye plumpness of each eye area according to the first data category and the second data category; and obtaining the freshness of each cultured fish product according to the fish eye turbidity and fish eye plumpness in each eye area.
Type: Grant
Filed: August 31, 2023
Date of Patent: February 13, 2024
Assignee: SHANDONG UNIVERSITY OF TECHNOLOGY
Inventors: Lanlan Zhu, Xudong Wu, Xiuting Wei, Qingxiang Zhang, Hengjia Ni, Ruining Kang, Lei Liu
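The abstract leaves the category rule and the turbidity/plumpness formulas unstated, so the sketch below is only one plausible reading in Python: split a gray scale change sequence into two data categories around a brightness threshold, take the two means, and turn them into rough turbidity and plumpness scores. Every threshold and formula here is an assumption, not the patented method.

```python
# Hypothetical sketch of the two-category split described in the abstract.
# The threshold, category rule, and score formulas are assumptions.
import numpy as np

def eye_scores(gray_sequence, split_threshold=128):
    """Split a gray scale change sequence into two data categories and
    derive simple turbidity / plumpness proxies from their means."""
    seq = np.asarray(gray_sequence, dtype=float)
    first_category = seq[seq >= split_threshold]   # brighter samples
    second_category = seq[seq < split_threshold]   # darker samples
    first_mean = first_category.mean() if first_category.size else 0.0
    second_mean = second_category.mean() if second_category.size else 0.0
    # Turbidity proxy: a cloudy eye narrows the gap between the two means.
    turbidity = 1.0 - (first_mean - second_mean) / 255.0
    # Plumpness proxy: relative share of the brighter (specular) samples.
    plumpness = first_category.size / max(seq.size, 1)
    return turbidity, plumpness

print(eye_scores([210, 200, 190, 90, 80, 70, 60]))
```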
-
Patent number: 11896320
Abstract: A system is disclosed that includes an optical tracking device and a surgical computing device. The optical tracking device includes a structured light module and an optical module that includes an image sensor and is spaced from the structured light module at a known distance. The surgical computing device includes a display device, a non-transitory computer readable medium including instructions, and processor(s) configured to execute the instructions to generate a depth map from a first image captured by the image sensor during projection of a pattern into a surgical environment by the structured light module. The pattern is projected in a near-infrared (NIR) spectrum. The processor(s) are further configured to execute the stored instructions to reconstruct a 3D surface of anatomical structure(s) based on the generated depth map. Additionally, the processor(s) are configured to execute the stored instructions to output the reconstructed 3D surface to the display device.
Type: Grant
Filed: January 20, 2023
Date of Patent: February 13, 2024
Assignees: Smith & Nephew, Inc., Smith & Nephew Orthopaedics AG, Smith & Nephew Asia Pacific Pte. Limited
Inventors: Gaëtan Marti, Maurice Hälg, Ranjith Steve Sivagnanaselvam
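The known spacing between the structured light module and the image sensor is what turns the deformation of the projected NIR pattern into depth: for a rectified projector-camera pair, depth is baseline × focal length / disparity. The Python snippet below illustrates only that generic triangulation relation; the baseline and focal-length values are invented and this is not the patented reconstruction pipeline.

```python
# Generic structured-light triangulation sketch; parameter values are
# illustrative assumptions, not taken from the patent.
import numpy as np

def depth_from_disparity(disparity_px, baseline_m=0.10, focal_px=900.0):
    """depth = baseline * focal_length / disparity for a rectified pair."""
    disparity_px = np.maximum(np.asarray(disparity_px, dtype=float), 1e-6)
    return baseline_m * focal_px / disparity_px

# A 240x320 map of pattern disparities becomes a depth map in metres.
depth_map = depth_from_disparity(np.random.uniform(20, 60, size=(240, 320)))
```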
-
Patent number: 11861847
Abstract: A background image generator (21) stores, in a storage device (3), a skeleton image obtained by calculating a feature quantity for each of multiple first thermal images (Din1) obtained by imaging by a thermal image sensor (1) in the same field of view or multiple sorted images (Dc) generated from the first thermal images, generating an average image from the first thermal images or sorted images, sharpening the average image, and then extracting a skeleton component. An image corrector (22) corrects, by using the skeleton image stored in the storage device (3), a second thermal image (Din2) obtained by imaging by the thermal image sensor in the same field of view as the first thermal images, thereby generating a corrected thermal image. It is possible to generate a sharp thermal image with a high S/N ratio.
Type: Grant
Filed: March 11, 2019
Date of Patent: January 2, 2024
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventors: Kohei Kurihara, Koichi Yamashita, Daisuke Suzuki
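As a rough illustration of the averaging, sharpening, and correction steps, here is a minimal NumPy/SciPy sketch: average the first thermal images, sharpen with an unsharp mask, keep a high-frequency "skeleton" component, and add a weighted copy of it to a later noisy frame. The specific operators (Gaussian unsharp mask, additive correction) are assumptions; the abstract does not commit to them.

```python
# Minimal sketch, assuming a Gaussian unsharp mask stands in for the
# sharpening step and a simple additive correction for the image corrector.
import numpy as np
from scipy.ndimage import gaussian_filter

def build_skeleton_image(first_thermal_images, sigma=2.0):
    """Average the first thermal images, sharpen, and keep the
    high-frequency (skeleton) component."""
    average = np.mean(np.stack(first_thermal_images), axis=0)
    blurred = gaussian_filter(average, sigma=sigma)
    sharpened = average + (average - blurred)      # unsharp mask
    skeleton = sharpened - gaussian_filter(sharpened, sigma=sigma)
    return skeleton

def correct_thermal_image(second_thermal_image, skeleton, weight=0.5):
    """Reinforce structure in a noisy second frame using the stored skeleton."""
    return second_thermal_image + weight * skeleton

frames = [np.random.rand(64, 64) for _ in range(10)]
skeleton = build_skeleton_image(frames)
corrected = correct_thermal_image(np.random.rand(64, 64), skeleton)
```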
-
Patent number: 11861881
Abstract: Techniques for training a first electronic neural network classifier to identify a presence of a particular property in a novel supra-image while ignoring a spurious correlation of the presence of the particular property with a presence of an extraneous property are presented.
Type: Grant
Filed: September 22, 2021
Date of Patent: January 2, 2024
Assignee: PROSCIA INC.
Inventors: Julianna Ianni, Rajath Elias Soans, Kameswari Devi Ayyagari, Saul Kohn
-
Patent number: 11854703
Abstract: Systems and methods for providing a novel framework to simulate the appearance of pathology on patients who otherwise lack that pathology. The systems and methods include a “simulator” that is a generative adversarial network (GAN). Rather than generating images from scratch, the systems and methods discussed herein simulate the addition of disease-like appearance on existing scans of healthy patients. Focusing on simulating added abnormalities, as opposed to simulating an entire image, significantly reduces the difficulty of training GANs and produces results that more closely resemble actual, unmodified images. In at least some implementations, multiple GANs are used to simulate pathological tissues on scans of healthy patients to artificially increase the amount of available scans with abnormalities to address the issue of data imbalance with rare pathologies.
Type: Grant
Filed: June 10, 2019
Date of Patent: December 26, 2023
Assignee: ARTERYS INC.
Inventors: Hok Kan Lau, Jesse Lieman-Sifry, Sean Patrick Sall, Berk Dell Norman, Daniel Irving Golden, John Axerio-Cilies, Matthew Joseph Didonato
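The key idea, simulating an added abnormality rather than a whole image, can be pictured as a generator that outputs a residual which is added onto a healthy scan. The PyTorch sketch below shows only that structural idea; the architecture is an assumption, and the adversarial training the generator would need is omitted.

```python
# Minimal sketch of the "add abnormality, don't synthesize from scratch"
# idea: the generator outputs a residual that is added to a healthy scan.
# Layer choices and training details are assumptions.
import torch
import torch.nn as nn

class AbnormalityGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())

    def forward(self, healthy_scan):
        residual = self.net(healthy_scan)           # simulated disease-like appearance
        return torch.clamp(healthy_scan + residual, 0.0, 1.0)

generator = AbnormalityGenerator()
simulated = generator(torch.rand(1, 1, 128, 128))   # healthy scan -> simulated pathology
```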
-
Patent number: 11854191
Abstract: An image processing method is provided for displaying cells from a plurality of pathology images or overall images. A respective overall image represents a respective patient tissue sample or a respective patient cell sample.
Type: Grant
Filed: March 2, 2021
Date of Patent: December 26, 2023
Assignee: EUROIMMUN Medizinische Labordiagnostika AG
Inventors: Christian Marzahl, Stefan Gerlach, Joern Voigt, Christine Kroeger
-
Patent number: 11845464
Abstract: Driver behavior risk assessment and pedestrian awareness may include receiving an input stream of images of an environment including one or more objects within the environment, estimating an intention of an ego vehicle based on the input stream of images and a temporal recurrent network (TRN), generating a scene representation based on the input stream of images and a graph neural network (GNN), generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.
Type: Grant
Filed: January 29, 2021
Date of Patent: December 19, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Nakul Agarwal, Yi-Ting Chen
-
Patent number: 11842274
Abstract: A controlling method of an electronic apparatus may include: obtaining image data and metadata regarding the image data, the image data comprising a first image frame and a second image frame that is subsequent to the first image frame; obtaining information regarding a region of interest of the first image frame by inputting the first image frame to a first neural network model; obtaining a similarity between the first image frame and the second image frame based on motion vector information included in the metadata; and detecting whether there is a manipulated area in the second image frame based on the similarity.
Type: Grant
Filed: January 4, 2021
Date of Patent: December 12, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Gilwoo Song, Jaehyun Kwon, Jinwoo Nam, Heeseung Shin, Minseok Choi
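One way to read the abstract is that motion-vector metadata yields a cheap similarity score that decides how much of the second frame needs to be re-examined for manipulation. The sketch below is a hypothetical Python illustration of that gating; the similarity formula, the threshold, and the `detector` callable are all assumptions.

```python
# Hypothetical gating flow; the similarity rule and threshold are assumptions.
import numpy as np

def frame_similarity(motion_vectors):
    """Higher similarity when the motion vectors between frames are small."""
    magnitudes = np.linalg.norm(np.asarray(motion_vectors, dtype=float), axis=1)
    return 1.0 / (1.0 + magnitudes.mean())

def detect_manipulation(first_roi_info, motion_vectors, detector, frame2,
                        similarity_threshold=0.8):
    similarity = frame_similarity(motion_vectors)
    if similarity >= similarity_threshold:
        # Frames are close: reuse the region of interest from frame 1
        # and only inspect that region of frame 2.
        return detector(frame2, region=first_roi_info)
    # Frames differ too much: analyze the whole second frame.
    return detector(frame2, region=None)
```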
-
Patent number: 11841920
Abstract: The technology disclosed introduces two types of neural networks: “master” or “generalists” networks and “expert” or “specialists” networks. Both master networks and expert networks are fully connected neural networks that take a feature vector of an input hand image and produce a prediction of the hand pose. Master networks and expert networks differ from each other based on the data on which they are trained. In particular, master networks are trained on the entire data set. In contrast, expert networks are trained only on a subset of the entire dataset. In regards to the hand poses, master networks are trained on the input image data representing all available hand poses comprising the training data (including both real and simulated hand images).
Type: Grant
Filed: February 14, 2017
Date of Patent: December 12, 2023
Assignee: Ultrahaptics IP Two Limited
Inventors: Jonathan Marsden, Raffi Bedikian, David Samuel Holz
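A compact way to see the master/expert split is to train one regressor on the whole training set and one per pose subset. The scikit-learn sketch below uses small MLPs as stand-ins for the fully connected networks; the pose grouping, layer sizes, and synthetic data are illustrative assumptions only.

```python
# Hedged sketch of the master/expert split; architectures and grouping
# are stand-ins, not the patented networks.
import numpy as np
from sklearn.neural_network import MLPRegressor

features = np.random.rand(300, 32)            # feature vectors of hand images
poses = np.random.rand(300, 6)                # hand-pose targets
groups = np.random.randint(0, 3, size=300)    # which pose subset each sample is in

# Master ("generalist") network: trained on the entire data set.
master = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(features, poses)

# Expert ("specialist") networks: each trained only on its own subset.
experts = {
    g: MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
          .fit(features[groups == g], poses[groups == g])
    for g in np.unique(groups)
}
```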
-
Patent number: 11832582
Abstract: A leg (205) detection system comprising: a robotic arm (200) comprising a gripping portion (208) for holding a teat cup (203, 210) for attaching to a teat (1102, 1104, 1106, 1108, 203S, 203) of a dairy livestock (200, 202, 203); an imaging system coupled to the robotic arm (200) and configured to capture a first three-dimensional (3D) image (138, 2400, 2500) of a rearview of the dairy livestock (200, 202, 203) in a stall (402), the imaging system comprising a 3D camera (136, 138) or a laser (132), wherein each pixel of the first 3D image (138, 2400, 2500) is associated with a depth value; one or more memory (104) devices configured to store a reference 3D image (138, 2400, 2500) of the stall (402) without any dairy livestock (200, 202, 203); and a processor (102) communicatively coupled to the imaging system and the one or more memory (104) devices, the processor (102) configured to: access the first 3D image (138, 2400, 2500) and the reference 3D image (138, 2400, 2500); subtract the first 3D image
Type: Grant
Filed: August 17, 2017
Date of Patent: December 5, 2023
Assignee: Technologies Holdings Corp.
Inventors: Mark A. Foresman, Bradley J. Prevost, Marijn Van Aart, Peter Willem van der Sluis, Alireza Janani
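The subtraction of the reference 3D image from the first 3D image amounts to a per-pixel depth difference that flags where the animal occludes the empty stall. A minimal NumPy sketch of that single step, with an assumed difference threshold, is shown below; it is not the full leg-detection logic.

```python
# Sketch of the depth-subtraction step only; the threshold is an assumption.
import numpy as np

def livestock_mask(first_depth_image, reference_depth_image, min_difference=0.05):
    """Pixels whose depth differs from the empty-stall reference by more
    than min_difference (metres) are treated as belonging to the animal."""
    difference = np.abs(first_depth_image - reference_depth_image)
    return difference > min_difference

reference = np.full((120, 160), 2.0)          # empty stall, ~2 m to the back wall
observed = reference.copy()
observed[40:100, 60:90] = 1.2                 # animal closer to the camera
mask = livestock_mask(observed, reference)
print(mask.sum(), "pixels flagged as livestock")
```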
-
Patent number: 11836919
Abstract: A method for capturing digital data for fabricating a dental splint involves displaying a GUI on a display of a smartphone that provides an alignment feature for a user to align a camera of the smartphone to a first position that captures teeth of a person, receiving digital video of the teeth, overlaying the alignment feature on the digital video of the teeth on the display, moving the alignment feature on the screen in a manner that causes the user to move the smartphone relative to the teeth to maintain alignment with the alignment feature, capturing digital image information of the teeth as the alignment feature is moved, the captured digital image information including depth information, and transmitting the captured digital image information, including the depth information, from the smartphone for use in fabricating a dental splint.
Type: Grant
Filed: March 20, 2023
Date of Patent: December 5, 2023
Assignee: Asesso Health Inc.
Inventor: William C. Cliff
-
Patent number: 11825784
Abstract: A hydroponic system is managed by obtaining hydroponic data using a plurality of sensors in a hydroponic cultivator. Data from the plurality of sensors is communicated to a data aggregator, which renders the sensed data from the plurality of sensors in a predetermined data format, as aggregator output data. The aggregator output data is communicated to a fog computing unit, which, in turn, communicates with outside computing routed over the Internet backbone or “cloud”. The fog computing unit is used to control operation of the hydroponic cultivator based on direct control commands executed through the fog control unit and data inputs obtained by communications with outside computing routed over the Internet backbone or “cloud”.
Type: Grant
Filed: June 16, 2023
Date of Patent: November 28, 2023
Assignee: KING FAISAL UNIVERSITY
Inventor: Suresh Sankaranarayanan
-
Patent number: 11829449
Abstract: Techniques for determining a classification probability of an object in an environment are discussed herein. Techniques may include analyzing sensor data associated with an environment from a perspective, such as a top-down perspective, using multi-channel data. From this perspective, techniques may determine channels of multi-channel input data and additional feature data. Channels corresponding to spatial features may be included in the multi-channel input data and data corresponding to non-spatial features may be included in the additional feature data. The multi-channel input data may be input to a first portion of a machine-learned (ML) model, and the additional feature data may be concatenated with intermediate output data from the first portion of the ML model, and input into a second portion of the ML model for subsequent processing and to determine the classification probabilities. Additionally, techniques may be performed on a multi-resolution voxel space representing the environment.
Type: Grant
Filed: December 30, 2020
Date of Patent: November 28, 2023
Assignee: Zoox, Inc.
Inventor: Samir Parikh
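The split into a first portion that consumes the spatial multi-channel input and a second portion that consumes its intermediate output concatenated with non-spatial feature data maps naturally onto a two-part network. The PyTorch sketch below shows that wiring with made-up layer sizes; it illustrates the concatenation point only and is not the patented model.

```python
# Minimal two-part model sketch; layer sizes and channel counts are
# illustrative assumptions.
import torch
import torch.nn as nn

class TwoPartClassifier(nn.Module):
    def __init__(self, spatial_channels=8, extra_features=4, classes=3):
        super().__init__()
        # First portion: consumes the multi-channel top-down input.
        self.first = nn.Sequential(
            nn.Conv2d(spatial_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Second portion: consumes intermediate output + non-spatial features.
        self.second = nn.Sequential(
            nn.Linear(16 + extra_features, 32), nn.ReLU(),
            nn.Linear(32, classes))

    def forward(self, multichannel_input, additional_features):
        intermediate = self.first(multichannel_input)
        combined = torch.cat([intermediate, additional_features], dim=1)
        return self.second(combined).softmax(dim=1)   # classification probabilities

model = TwoPartClassifier()
probs = model(torch.rand(2, 8, 64, 64), torch.rand(2, 4))
```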
-
Patent number: 11830607
Abstract: A system for facilitating image finding analysis includes one or more processors and one or more hardware storage devices storing instructions that are executable by the one or more processors to configure the system to perform acts such as (i) presenting an image on a user interface, the image being one of a plurality of images provided on the user interface in a navigable format, (ii) obtaining a voice annotation for the image, the voice annotation being based on a voice signal of a user, and (iii) binding the voice annotation to at least one aspect of the image, wherein the binding modifies metadata of the image based on the voice annotation.
Type: Grant
Filed: September 7, 2022
Date of Patent: November 28, 2023
Assignee: AI METRICS, LLC
Inventors: Andrew Dennis Smith, Robert B. Jacobus, Paige Elaine Severino
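Binding a voice annotation to an image by modifying its metadata can be as simple as appending a structured record to the image's metadata. The sketch below is a hypothetical Python illustration; the field names and schema are assumptions, not the assignee's format.

```python
# Hypothetical metadata-binding sketch; schema and field names are assumptions.
from datetime import datetime, timezone

def bind_voice_annotation(image_record, transcription, audio_reference, user):
    """Attach a voice annotation to one aspect of the image by updating
    its metadata in place."""
    annotations = image_record.setdefault("metadata", {}).setdefault("annotations", [])
    annotations.append({
        "type": "voice",
        "text": transcription,
        "audio": audio_reference,
        "author": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return image_record

record = {"id": "series-12/image-034", "metadata": {}}
bind_voice_annotation(record, "2.3 cm lesion, segment VII", "clips/034.wav", "a.smith")
```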
-
Patent number: 11823392
Abstract: A method, system and computer program product for segmenting generic foreground objects in images and videos. For segmenting generic foreground objects in videos, an appearance stream of an image in a video frame is processed using a first deep neural network. Furthermore, a motion stream of an optical flow image in the video frame is processed using a second deep neural network. The appearance and motion streams are then joined to combine complementary appearance and motion information to perform segmentation of generic objects in the video frame. Generic foreground objects are segmented in images by training a convolutional deep neural network to estimate a likelihood that a pixel in an image belongs to a foreground object. After receiving the image, the likelihood that the pixel in the image is part of the foreground object as opposed to background is then determined using the trained convolutional deep neural network.
Type: Grant
Filed: August 2, 2022
Date of Patent: November 21, 2023
Assignee: Board of Regents, The University of Texas System
Inventors: Kristen Grauman, Suyog Dutt Jain, Bo Xiong
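The joining of the appearance and motion streams can be pictured as two convolutional branches whose features are concatenated before a per-pixel foreground prediction. The PyTorch sketch below shows that fusion in miniature; the layer choices are assumptions and the real networks are far deeper.

```python
# Hedged two-stream sketch: appearance and motion (optical flow) branches
# whose features are fused before the per-pixel foreground prediction.
import torch
import torch.nn as nn

class TwoStreamSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.appearance = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.motion = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(32, 1, 1)            # per-pixel foreground logit

    def forward(self, rgb_frame, optical_flow):
        joined = torch.cat([self.appearance(rgb_frame), self.motion(optical_flow)], dim=1)
        return torch.sigmoid(self.fuse(joined))    # likelihood a pixel is foreground

model = TwoStreamSegmenter()
mask = model(torch.rand(1, 3, 128, 128), torch.rand(1, 2, 128, 128))
```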
-
Patent number: 11823479
Abstract: A non-transitory computer readable medium (107, 127) stores instructions executable by at least one electronic processor (101, 113) to perform a component co-replacement recommendation method (200).
Type: Grant
Filed: June 2, 2020
Date of Patent: November 21, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Sarif Kumar Naik, Vidya Ravi, Ravindra Balasaheb Patil, Meru Adagouda Patil
-
Patent number: 11823436
Abstract: Systems and methods are disclosed for generating a specialized machine learning model by receiving a generalized machine learning model generated by processing a plurality of first training images to predict at least one cancer characteristic, receiving a plurality of second training images, the first training images and the second training images include images of tissue specimens and/or images algorithmically generated to replicate tissue specimens, receiving a plurality of target specialized attributes related to a respective second training image of the plurality of second training images, generating a specialized machine learning model by modifying the generalized machine learning model based on the plurality of second training images and the target specialized attributes, receiving a target image corresponding to a target specimen, applying the specialized machine learning model to the target image to determine at least one characteristic of the target image, and outputting the characteristic of the target image.
Type: Grant
Filed: March 31, 2022
Date of Patent: November 21, 2023
Assignee: Paige.AI, Inc.
Inventors: Belma Dogdas, Christopher Kanan, Thomas Fuchs, Leo Grady
-
Patent number: 11823404
Abstract: A method of image processing in a structured light imaging system is provided that includes receiving a captured image of a scene, wherein the captured image is captured by a camera of a projector-camera pair, and wherein the captured image includes a binary pattern projected into the scene by the projector, applying a filter to the rectified captured image to generate a local threshold image, wherein the local threshold image includes a local threshold value for each pixel in the rectified captured image, and extracting a binary image from the rectified captured image wherein a value of each location in the binary image is determined based on a comparison of a value of a pixel in a corresponding location in the rectified captured image to a local threshold value in a corresponding location in the local threshold image.
Type: Grant
Filed: May 26, 2021
Date of Patent: November 21, 2023
Assignee: Texas Instruments Incorporated
Inventor: Vikram Vijayanbabu Appia
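The local threshold image is, in effect, a per-pixel adaptive threshold. A minimal SciPy sketch using a mean (box) filter as the threshold-generating filter is shown below; the choice of filter and window size are assumptions, since the abstract only requires that some filter produce the local threshold value for each pixel.

```python
# Sketch of the local-threshold step, assuming a mean (box) filter produces
# the per-pixel threshold; the window size is illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def extract_binary_image(rectified_image, window=15):
    """Binary value at each location: 1 where the pixel exceeds its local
    threshold, 0 otherwise."""
    local_threshold_image = uniform_filter(rectified_image.astype(float), size=window)
    return (rectified_image > local_threshold_image).astype(np.uint8)

captured = np.random.rand(240, 320)   # stand-in for the rectified captured image
binary = extract_binary_image(captured)
```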
-
Patent number: 11812152
Abstract: A method and an apparatus for controlling a video frame image in a live classroom, and a computer readable storage medium and an electronic device are provided. The method includes: acquiring image information of a target person in the video frame image; determining a plurality of detection points according to the image information; determining distribution information of the plurality of detection points based on a relationship between the plurality of detection points and a preset first area; determining adjustment information of a camera based on the distribution information and camera parameters; and adjusting the camera according to the adjustment information, so that at least a part of the image information of the target person in the video frame image is located in a preset second area, wherein the second area is located in the first area.
Type: Grant
Filed: July 29, 2021
Date of Patent: November 7, 2023
Assignee: BEIJING AMBOW SHENGYING EDUCATION AND TECHNOLOGY CO., LTD.
Inventors: Jin Huang, Gang Huang, Kesheng Wang, Yin Yao, Qiaoling Xu
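A simple reading of the distribution and adjustment steps: count how many detection points fall inside the preset first area and convert the points' mean offset from its centre into pan/tilt corrections. The Python sketch below is only illustrative; the distribution measure, the pixels-per-degree factor, and the adjustment rule are assumptions.

```python
# Illustrative sketch; the distribution rule and adjustment formula are
# assumptions, not the patented procedure.
import numpy as np

def distribution_info(points, first_area):
    """Fraction of detection points inside the preset first area
    (x0, y0, x1, y1) and the mean offset of the points from its centre."""
    pts = np.asarray(points, dtype=float)
    x0, y0, x1, y1 = first_area
    inside = (pts[:, 0] >= x0) & (pts[:, 0] <= x1) & (pts[:, 1] >= y0) & (pts[:, 1] <= y1)
    centre = np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
    offset = pts.mean(axis=0) - centre
    return inside.mean(), offset

def camera_adjustment(offset, pixels_per_degree=30.0):
    """Convert the pixel offset into pan / tilt angles for the camera."""
    return offset[0] / pixels_per_degree, offset[1] / pixels_per_degree

fraction_inside, offset = distribution_info([[700, 300], [900, 420]], (400, 200, 880, 520))
pan, tilt = camera_adjustment(offset)
```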
-
Patent number: 11810326
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
Type: Grant
Filed: July 28, 2021
Date of Patent: November 7, 2023
Assignee: Adobe Inc.
Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman