Patents Examined by Tahmina N Ansari
  • Patent number: 11594056
    Abstract: A learning apparatus learns a machine learning model that performs semantic segmentation, that is, determines a plurality of classes in an input image in units of pixels, by extracting, for each layer, features which are included in the input image and lie in different spatial-frequency bands. A learning data analysis unit analyzes the frequency bands included in an annotation image of learning data. A learning method determination unit determines a learning method using the learning data based on an analysis result of the frequency bands by the learning data analysis unit. A learning unit learns the machine learning model via the determined learning method using the learning data.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: February 28, 2023
    Assignee: FUJIFILM Corporation
    Inventor: Takashi Wakui
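    A minimal, hypothetical sketch of the frequency-band analysis described in the abstract above: the annotation image's 2-D spectrum is split into radial bands and the band energies drive the choice of learning method. The band count, weighting rule, and function names are illustrative assumptions, not the patent's procedure.
    ```python
    # Hypothetical sketch: analyze the spatial-frequency content of an
    # annotation (label) image and derive a learning strategy from it.
    import numpy as np

    def band_energies(annotation, n_bands=3):
        """Split the 2-D FFT power spectrum into radial frequency bands."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(annotation.astype(float)))) ** 2
        h, w = annotation.shape
        yy, xx = np.mgrid[:h, :w]
        radius = np.hypot(yy - h / 2, xx - w / 2)
        edges = np.linspace(0, radius.max(), n_bands + 1)
        return np.array([spectrum[(radius >= lo) & (radius < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    def choose_learning_method(annotation):
        """Weight the loss of each feature-extraction layer by the band energy (assumed rule)."""
        energies = band_energies(annotation)
        weights = energies / energies.sum()          # one weight per frequency band / layer
        return {"layer_loss_weights": weights.tolist()}

    label = (np.random.rand(64, 64) > 0.5).astype(np.uint8)   # toy annotation image
    print(choose_learning_method(label))
    ```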
  • Patent number: 11593715
    Abstract: A system for training a neurome that emulates a brain of a user comprises: a non-invasive brain interface assembly configured for detecting neural activity of the user in response to analog instances of a plurality of stimuli peripherally input into the brain of the user from at least one source of content; memory configured for storing a neurome configured for outputting a plurality of determined brain states of an avatar in response to inputs of digital instances of the plurality of stimuli; and a neurome training processor configured for determining a plurality of brain states of the user based on the detected neural activity of the user, and modifying the neurome based on the plurality of determined brain states of the user and the plurality of determined brain states of the avatar.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: February 28, 2023
    Assignee: HI LLC
    Inventors: Bryan Johnson, Ethan Pratt, Jamu Alford, Husam Katnani, Julian Kates-Harbeck, Ryan Field, Gabriel Lerner, Antonio H. Lara
  • Patent number: 11583187
    Abstract: Methods and systems for controlling aneurysm initiation or formation in an individual are presented. The technique comprises: receiving morphological data of an artery indicative of at least first and second geometrical parameters of the artery along its trajectory; analyzing the data to identify at least one flow-diverting location along the artery satisfying first and second predetermined conditions on the geometrical parameters; classifying the individual as having or not having a disposition for future formation of an aneurysm, depending respectively on whether or not the at least one flow-diverting location is identified, and generating classification data; and generating prediction data for the individual with regard to future aneurysm formation.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: February 21, 2023
    Assignee: ANEUSCREEN LTD.
    Inventor: Haim Ezer
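    A minimal, hypothetical sketch of the screening logic described in the abstract above: two geometric parameters sampled along an artery centerline are scanned for a location where both predetermined conditions hold. The choice of parameters (diameter, curvature) and the thresholds are illustrative assumptions only.
    ```python
    # Hypothetical sketch: flag "flow-diverting" locations along an artery trajectory.
    import numpy as np

    def classify_disposition(diameter_mm, curvature_1_per_mm,
                             max_diameter=2.5, min_curvature=0.15):
        """Return (has_disposition, indices of candidate flow-diverting locations)."""
        cond1 = diameter_mm < max_diameter          # first geometrical condition (assumed)
        cond2 = curvature_1_per_mm > min_curvature  # second geometrical condition (assumed)
        candidates = np.flatnonzero(cond1 & cond2)
        return candidates.size > 0, candidates

    d = np.array([3.1, 2.9, 2.3, 2.2, 2.8])       # toy diameters along the trajectory
    k = np.array([0.05, 0.10, 0.20, 0.22, 0.08])  # toy curvatures along the trajectory
    print(classify_disposition(d, k))              # -> (True, array([2, 3]))
    ```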
  • Patent number: 11586863
    Abstract: Provided are an image fusion classification method and device. The method includes that: a three-dimensional weight matrix of a hyperspectral image is obtained by use of a Support Vector Machine (SVM) classifier (101); superpixel segmentation is performed on the hyperspectral image to obtain K superpixel images, K being a positive integer (102); the three-dimensional weight matrix is regularized by use of a superpixel-image-based segmentation method to obtain a regular matrix (103); and a class that a sample belongs to is determined according to the regular matrix (104).
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: February 21, 2023
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Sen Jia, Bin Deng, Jiasong Zhu, Lin Deng, Qingquan Li
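    A minimal, hypothetical sketch of steps (103)-(104) of the abstract above: a per-pixel class-probability matrix (as an SVM would produce) is regularized by averaging within each superpixel, and the class is taken as the arg-max. The SVM classifier and the superpixel segmentation themselves are assumed to be given.
    ```python
    # Hypothetical sketch: superpixel-based regularization of a pixel-wise probability map.
    import numpy as np

    def superpixel_regularize(prob, superpixels):
        """prob: (H, W, C) class probabilities; superpixels: (H, W) integer labels."""
        reg = np.empty_like(prob)
        for sp in np.unique(superpixels):
            mask = superpixels == sp
            reg[mask] = prob[mask].mean(axis=0)     # one averaged vector per superpixel
        return reg

    def classify(prob, superpixels):
        return superpixel_regularize(prob, superpixels).argmax(axis=-1)

    H, W, C = 8, 8, 3
    prob = np.random.dirichlet(np.ones(C), size=(H, W))     # toy "SVM" probability map
    superpixels = np.arange(H * W).reshape(H, W) // 16      # toy superpixels of 16 pixels each
    print(classify(prob, superpixels))
    ```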
  • Patent number: 11574387
    Abstract: A method of processing image data for an image, the image data including colour data expressed in a first colour space, transforms the colour data to a luminance-normalised colour space, and performs one or more image processing operations on the transformed colour data to generate processed image data.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: February 7, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Ruan Lakemond, Fabian Angarita
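    A minimal, hypothetical sketch of the pipeline described in the abstract above: RGB data is moved into a luminance-normalised space, an operation is applied, and the image is reassembled. The Rec.709 luma weights and the toy processing operation are assumptions, not the patent's transform.
    ```python
    # Hypothetical sketch: process colour data in a luminance-normalised colour space.
    import numpy as np

    def to_luma_normalised(rgb):
        luma = rgb @ np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luminance (assumed)
        chroma = rgb / np.maximum(luma, 1e-6)[..., None]      # luminance-normalised colour
        return luma, chroma

    def from_luma_normalised(luma, chroma):
        return chroma * luma[..., None]

    def process(rgb):
        luma, chroma = to_luma_normalised(rgb)
        luma = 0.5 * luma + 0.5 * luma.mean()                 # toy processing op on luminance
        return from_luma_normalised(luma, chroma)

    img = np.random.rand(4, 4, 3)
    print(process(img).shape)                                  # (4, 4, 3)
    ```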
  • Patent number: 11574411
    Abstract: In accordance with at least some embodiments of the present disclosure, a process to improve computed tomography (CT) to cone beam computed tomography (CBCT) registration is disclosed. The process may include receiving a CT image generated by CT-scanning of an object, and receiving a CBCT image generated by CBCT-scanning of the object. The process may include generating an image mask based on Digital Imaging and Communications in Medicine (DICOM) information extracted from the CBCT image. For a specific pixel in the CBCT image, the image mask contains a corresponding data-field indicating whether the specific pixel contains image data generated based on the CBCT-scanning of the object. The process may further include generating a registered image by utilizing the image mask to perform a deformable image registration (DIR) between the CT image and the CBCT image.
    Type: Grant
    Filed: January 23, 2021
    Date of Patent: February 7, 2023
    Inventors: Tobias Gass, Thomas Schwere, Marco Lessard, Joerg Desteffani
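    A minimal, hypothetical sketch of the masking step described above: a binary mask marks CBCT pixels that actually contain reconstructed data (here a circular field of view whose radius would come from DICOM metadata), and the mask restricts a similarity term. The DICOM fields and the simple similarity metric are assumptions; this is not the patent's DIR.
    ```python
    # Hypothetical sketch: mask out empty CBCT pixels when comparing CT and CBCT slices.
    import numpy as np

    def cbct_fov_mask(shape, fov_radius_px):
        h, w = shape
        yy, xx = np.mgrid[:h, :w]
        return np.hypot(yy - h / 2, xx - w / 2) <= fov_radius_px

    def masked_ssd(ct_slice, cbct_slice, mask):
        """Mean squared difference, ignoring pixels outside the CBCT field of view."""
        diff = (ct_slice - cbct_slice)[mask]
        return float((diff ** 2).mean())

    ct = np.random.rand(128, 128)
    cbct = np.random.rand(128, 128)
    mask = cbct_fov_mask(cbct.shape, fov_radius_px=60)   # radius from DICOM info in practice
    print(masked_ssd(ct, cbct, mask))
    ```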
  • Patent number: 11568624
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for managing virtual surveillance windows for video surveillance. The methods, systems, and apparatus include actions of obtaining an original video, generating a downscaled video from the original video, detecting a first event at a location from the downscaled video using a first classifier, generating a windowed video from the original video based on the location, detecting a second event from the windowed video, and performing an action in response to detecting the second event.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: January 31, 2023
    Assignee: ObjectVideo Labs, LLC
    Inventors: Narayanan Ramanathan, Allison Beach
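    A minimal, hypothetical sketch of the two-stage flow described in the abstract above: detect coarsely on a downscaled frame, then re-detect inside a window cropped from the full-resolution frame around the coarse location. Both classifiers are stand-in functions, not the patent's models.
    ```python
    # Hypothetical sketch: coarse detection on a downscaled frame, fine detection in a window.
    import numpy as np

    def downscale(frame, factor=4):
        return frame[::factor, ::factor]

    def coarse_detector(small_frame):
        """Stand-in first classifier: brightest-pixel location, or None if nothing bright."""
        if small_frame.max() < 0.9:
            return None
        return np.unravel_index(small_frame.argmax(), small_frame.shape)

    def fine_detector(window):
        """Stand-in second classifier."""
        return window.mean() > 0.5

    def process_frame(frame, factor=4, win=64):
        loc = coarse_detector(downscale(frame, factor))
        if loc is None:
            return False
        r, c = loc[0] * factor, loc[1] * factor       # map location back to full resolution
        window = frame[max(r - win, 0):r + win, max(c - win, 0):c + win]
        return fine_detector(window)                  # second event -> perform an action

    print(process_frame(np.random.rand(480, 640)))
    ```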
  • Patent number: 11562486
    Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 24, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
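    A minimal, hypothetical sketch of the recommendation step described above: individual points of the ground-truth operational data are perturbed, a yield model is re-run on each hypothetical variant, and the change with the largest predicted gain is recommended. The linear yield model is purely a stand-in for a trained predictor.
    ```python
    # Hypothetical sketch: pick the operational change with the best hypothetical yield gain.
    import numpy as np

    def yield_model(operational_data):
        """Stand-in crop-yield predictor (a trained ML model in practice)."""
        weights = np.array([0.5, 1.2, -0.3])          # irrigation, fertilizer, seeding density
        return float(operational_data @ weights)

    def recommend_change(ground_truth_ops, deltas=(+0.1, -0.1)):
        base = yield_model(ground_truth_ops)           # ground truth-based prediction
        best = (None, 0.0)
        for i in range(ground_truth_ops.size):
            for d in deltas:
                hypo = ground_truth_ops.copy()
                hypo[i] += d                           # one altered data point
                gain = yield_model(hypo) - base
                if gain > best[1]:
                    best = ((i, d), gain)
        return best                                    # (which data point to change, gain)

    print(recommend_change(np.array([1.0, 0.8, 1.1])))
    ```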
  • Patent number: 11556740
    Abstract: A method for automatically training a machine learning system to detect and identify a sensor triggering event associated with an internet of things (IoT) device is provided. The method may include capturing sensor data and capturing sound clips associated with the IoT device. The method may further include identifying the sensor triggering event associated with the IoT device. The method may further include sending an alert of the identified sensor triggering event. The method may also include correlating the captured sensor data, the captured sound clips, and the identified sensor triggering event. The method may further include identifying a second sensor triggering event by determining similarities between the correlated data associated with the identified sensor triggering event and additional sensor and sound data that is captured based on the second sensor triggering event.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: January 17, 2023
    Assignee: International Business Machines Corporation
    Inventors: Lisa Seacat DeLuca, David Kaminsky
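    A minimal, hypothetical sketch of the correlation idea described above: feature vectors for (sensor reading, sound clip) pairs captured around a known triggering event are stored, and a later capture is flagged as the same kind of event when it is sufficiently similar. The features, similarity measure, and threshold are all assumptions.
    ```python
    # Hypothetical sketch: correlate sensor and sound features to recognize a second event.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def is_same_event(known_pairs, new_sensor, new_sound, threshold=0.9):
        new_vec = np.concatenate([new_sensor, new_sound])
        return any(cosine(np.concatenate([s, a]), new_vec) > threshold
                   for s, a in known_pairs)

    known = [(np.array([1.0, 0.2]), np.array([0.9, 0.1, 0.0]))]   # sensor + sound features
    print(is_same_event(known, np.array([0.95, 0.25]), np.array([0.85, 0.15, 0.05])))
    ```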
  • Patent number: 11551027
    Abstract: Implementations of the subject matter described herein relate to object detection based on a deep neural network. With a given input image, it is desired to determine a class and a boundary of one or more objects within the input image. Specifically, a plurality of channel groups is generated from a feature map of an image, the image including at least a region corresponding to a first grid. A target feature map is extracted from at least one of the plurality of channel groups associated with a cell of the first grid. Information related to an object within the region is determined based on the target feature map. The information related to the object may be a class and/or a boundary of the object.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: January 10, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jingjing Fu, Yao Zhai, Yan Lu
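    A minimal, hypothetical sketch of the channel-group idea in the abstract above: the feature map's channels are split into groups, the group associated with a given grid cell is selected, and a class score and box offsets are read from it. The grouping rule and the tiny "heads" are assumptions for illustration.
    ```python
    # Hypothetical sketch: per-cell prediction from a selected channel group of a feature map.
    import numpy as np

    def predict_for_cell(feature_map, cell_index, n_groups=4, n_classes=5):
        c, h, w = feature_map.shape
        groups = np.split(feature_map, n_groups, axis=0)      # plurality of channel groups
        target = groups[cell_index % n_groups]                # group associated with the cell
        pooled = target.mean(axis=(1, 2))                     # target feature vector
        cls_scores = pooled[:n_classes]                       # toy class "head"
        box = pooled[n_classes:n_classes + 4]                 # toy boundary "head"
        return int(cls_scores.argmax()), box

    fm = np.random.rand(64, 16, 16)                           # C x H x W feature map
    print(predict_for_cell(fm, cell_index=3))
    ```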
  • Patent number: 11547804
    Abstract: An MRI system (100) is proposed (for generating one or more images of a body-part of a patient under analysis); the MRI system (100) comprises an injector head assembly (155), for injecting at least one medical fluid into the patient, having a clock unit (340) for providing a clock signal with a clock frequency. The MRI system (100) comprises means (420-425; 445-460) for adjusting the clock frequency in response to a manual command and/or to a detection of a degradation of the images. An injector system (155,165) for use in this MRI system (100) is also proposed. Moreover, a corresponding method (500) for managing the injector head assembly (155) is proposed. A computer program (400) for implementing the method (500) and a corresponding computer program product are also proposed.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: January 10, 2023
    Assignee: ACIST MEDICAL SYSTEMS, INC.
    Inventors: Steven Mark Guy Rolfe, Volker Kremer, Eckhard Buchholtz, Michael Van De Bruck, Gunter Bruckmann
  • Patent number: 11544927
    Abstract: The present application discloses a video type detection method, apparatus, electronic device and storage medium. A specific implementation solution is as follows: obtaining N key frames of a first video, where N is an integer greater than 1, and a type of the first video is to be detected; obtaining M confidence scores corresponding to each of the N key frames by inputting each of the N key frames into M algorithm models corresponding to the first video type respectively, where M is an integer greater than 1; determining a confidence score of the first video by a fusion strategy algorithm model according to N×M confidence scores of the N key frames; and comparing the confidence score of the first video with a confidence score threshold corresponding to a first video type, to determine whether the type of the first video is the first video type or not.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: January 3, 2023
    Inventors: Bing Dai, Zhi Ye, Yangxi Li
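    A minimal, hypothetical sketch of the fusion step described above: N key frames, each scored by M models, give an N x M matrix of confidences that a fusion rule collapses into one video-level score to threshold. A simple mean stands in for the patent's fusion strategy algorithm model.
    ```python
    # Hypothetical sketch: fuse N x M key-frame confidences into a video-type decision.
    import numpy as np

    def detect_video_type(frame_scores, threshold=0.6):
        """frame_scores: (N, M) confidences for N key frames and M models."""
        video_confidence = frame_scores.mean()        # stand-in fusion strategy
        return video_confidence >= threshold, float(video_confidence)

    scores = np.array([[0.70, 0.80, 0.65],            # toy N=4 key frames, M=3 models
                       [0.55, 0.70, 0.60],
                       [0.90, 0.85, 0.80],
                       [0.40, 0.50, 0.45]])
    print(detect_video_type(scores))                  # -> (True, 0.658...)
    ```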
  • Patent number: 11529056
    Abstract: According to some embodiments there is provided a method for structured light scanning of an intra-oral scene, comprising: projecting onto the intra-oral scene a color-coded pattern comprising an arrangement of entities having edges between them; each entity comprising a different narrow band of wavelengths; and detecting the projected pattern as a plurality of pixels in an acquired image of the scene using at least two narrowband filters, wherein for each pixel of at least 95% of the pixels of an entity of interest comprising a first band of wavelengths, a contribution of light from a second band of wavelengths of an adjacent entity is less than 10%. Some embodiments relate to a scanner system for structured light scanning of an intra-oral scene.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: December 20, 2022
    Assignee: Dentlytec G.P.L. Ltd.
    Inventors: Benny Pesach, Amitai Reuvenny, Georgy Melamed
  • Patent number: 11526762
    Abstract: Method and system of training a machine learning neural network (MLNN). The method comprises receiving a set of input features at respective input layers of the MLNN. The MLNN is implemented in a processor and comprises an output layer interconnected to the input layers via intermediate layers. The input features are associated with input feature data of a patient medical condition. A subset of the input layers is then selected, responsive to a data qualification threshold level, while a remainder of the set of input layers is deactivated. The intermediate layers are configured with an initial matrix of weights. The MLNN is then trained based at least in part upon adjusting the initial matrix of weights based on a supervised classification that provides, via the output layer, one of negative and positive patient diagnostic states.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: December 13, 2022
    Assignee: Ventech Solutions, Inc.
    Inventors: Satish Chenchal, Ravi Kunduru
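    A minimal, hypothetical sketch of the input-selection idea described above: only input features whose data quality meets a threshold are kept, the rest are zeroed out (deactivated), and a small supervised classifier is trained on the result. The quality scores, threshold, and one-layer logistic "network" are illustrative stand-ins for the patent's MLNN.
    ```python
    # Hypothetical sketch: deactivate low-quality input features, then train a classifier.
    import numpy as np

    def select_inputs(X, quality, threshold=0.8):
        active = quality >= threshold                 # data qualification threshold level
        X_sel = X.copy()
        X_sel[:, ~active] = 0.0                       # deactivate the remaining input layers
        return X_sel, active

    def train_logistic(X, y, lr=0.1, epochs=200):
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))        # supervised binary classification
            w -= lr * X.T @ (p - y) / len(y)
        return w

    X = np.random.rand(100, 5)
    y = (X[:, 0] > 0.5).astype(float)                 # toy negative/positive diagnostic label
    X_sel, active = select_inputs(X, quality=np.array([0.9, 0.6, 0.95, 0.5, 0.85]))
    print(active, train_logistic(X_sel, y))
    ```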
  • Patent number: 11487974
    Abstract: A computer-implemented method, which enables data to be clustered without performing any distance calculations among the points of the dataset, includes: assigning points of a dataset to cells of a cellular automaton; assigning each cell having a data point a distinct state value and a constant temperature value; assigning all cells to which no data point is assigned a unique state value, different from the state values used for cells having a data point, and a temperature lower than the constant temperature value; selecting a cell in the cellular automaton randomly; calculating the average temperature of the selected cell and its neighbor cells; setting the temperature of the cells having no data point to the average temperature; and, if a neighbor cell's temperature is above a predetermined threshold value, moving that neighbor cell to the state of the selected cell.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: November 1, 2022
    Assignee: YEDITEPE UNIVERSITESI
    Inventors: Emin Erkan Korkmaz, Enes Burak Dundar
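    A minimal, hypothetical sketch of one update step of the cellular-automaton clustering described above, on a small 2-D grid. Initial temperatures, the threshold, and the 8-neighbourhood are illustrative simplifications; no pairwise distances between data points are ever computed.
    ```python
    # Hypothetical sketch: temperature-driven cellular-automaton clustering.
    import numpy as np

    def ca_step(state, temp, threshold, rng):
        h, w = state.shape
        r, c = rng.integers(h), rng.integers(w)                 # select a random cell
        neigh = [((r + dr) % h, (c + dc) % w)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        avg = float(np.mean([temp[p] for p in neigh]))          # selected cell + 8 neighbours
        for p in neigh:
            if state[p] == 0:
                temp[p] = avg                                   # empty cells take the average
            elif state[r, c] != 0 and temp[p] > threshold:
                state[p] = state[r, c]                          # merge neighbour into this state
        return state, temp

    rng = np.random.default_rng(0)
    state = np.zeros((10, 10), dtype=int)                       # 0 = cell with no data point
    occupied = rng.choice(100, size=12, replace=False)
    state.flat[occupied] = np.arange(1, 13)                     # distinct state per data point
    temp = np.where(state > 0, 1.0, 0.2)                        # hot data cells, cold empty cells
    for _ in range(500):
        state, temp = ca_step(state, temp, threshold=0.6, rng=rng)
    print(np.unique(state[state > 0]).size, "clusters remain")
    ```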
  • Patent number: 11488337
    Abstract: A control method includes outputting an estimated feature value based on first layout data generated based on a first album creation parameter, generating a second album creation parameter based on the output estimated feature value, generating third layout data based on the generated second album creation parameter and an image used for second layout data generated by a user, updating the first album creation parameter to a third album creation parameter based on a feature value based on the generated third layout data and a feature value based on the second layout data, performing verification of the third album creation parameter, generating fourth layout data based on the third album creation parameter when a result of the verification of the third album creation parameter is a first result, and not generating the fourth layout data when the result of the verification of the third album creation parameter is a second result.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: November 1, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hiroyasu Kunieda, Shinjiro Hori, Takayuki Yamada
  • Patent number: 11481916
    Abstract: A method, system and computer program product for emulating depth data of a three-dimensional camera device are disclosed. The method includes concurrently operating a radar device and the 3D camera device to generate training radar data and training depth data, respectively. Each of the radar device and the 3D camera device has a respective field of view. The field of view of the radar device overlaps the field of view of the 3D camera device. The method also includes inputting the training radar and depth data to a neural network. The method also includes employing the training radar and depth data to train the neural network. Once trained, the neural network is configured to receive real radar data as input and to output substitute depth data.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: October 25, 2022
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Yanyan Hu, Kevin Piette, Pietro Russo, Mahesh Saptharishi
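    A minimal, hypothetical sketch of the training and emulation loop described above: pairs of co-captured radar frames and depth frames train a regressor that later maps radar alone to substitute depth. A linear least-squares model stands in for the patent's neural network; all data here is synthetic.
    ```python
    # Hypothetical sketch: learn a radar -> depth mapping from co-captured training pairs.
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, radar_dim, depth_dim = 500, 32, 64

    radar = rng.normal(size=(n_frames, radar_dim))               # training radar data
    true_map = rng.normal(size=(radar_dim, depth_dim))
    depth = radar @ true_map + 0.01 * rng.normal(size=(n_frames, depth_dim))  # training depth data

    W, *_ = np.linalg.lstsq(radar, depth, rcond=None)            # "train" the emulator

    new_radar = rng.normal(size=(1, radar_dim))                  # real radar data at run time
    substitute_depth = new_radar @ W                             # emulated depth output
    print(substitute_depth.shape)                                # (1, 64)
    ```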
  • Patent number: 11475574
    Abstract: Methods for determining a unit load device (ULD) door status are disclosed herein. An example method includes capturing a set of image data featuring the ULD. The example method further includes segmenting the set of image data to identify a top portion of the ULD, and determining an amplitude of the top portion of the ULD. The example method further includes determining the ULD door status based on whether the amplitude of the top portion of the ULD exceeds an amplitude threshold.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: October 18, 2022
    Assignee: Zebra Technologies Corporation
    Inventors: Justin F. Barish, Adithya H. Krishnamurthy
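    A minimal, hypothetical sketch of the door-status rule in the abstract above, applied to a depth image of a unit load device (ULD): the top rows serve as the "top portion", their amplitude (depth spread) is measured, and it is compared to a threshold. The segmentation rule and threshold are assumptions for illustration.
    ```python
    # Hypothetical sketch: ULD door status from the amplitude of the top portion of a depth image.
    import numpy as np

    def door_status(depth_image, top_fraction=0.25, amplitude_threshold=0.5):
        top_rows = depth_image[: int(depth_image.shape[0] * top_fraction)]
        amplitude = float(top_rows.max() - top_rows.min())   # spread of the top portion
        return ("open" if amplitude > amplitude_threshold else "closed"), amplitude

    depth = np.random.rand(100, 160) * 0.3                    # toy depth frame (metres)
    depth[:10, 40:60] += 1.0                                  # toy opening visible at the top
    print(door_status(depth))                                 # -> ('open', ...)
    ```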
  • Patent number: 11461904
    Abstract: According to some aspects, methods and systems may include receiving, by a computing device, metadata identifying an event occurring in a video program, and determining an expected motion of objects in the identified event. The methods and systems may further include analyzing motion energy in the video program to identify video frames in which the event occurs, and storing information identifying the video frames in which the event occurs.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: October 4, 2022
    Assignee: Comcast Cable Communications, LLC
    Inventors: Erik Schwartz, Jan Neumann, Hans Sayyadi, Stefan Deichmann
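    A minimal, hypothetical sketch of the motion-energy search described above: per-frame motion energy is computed by frame differencing, and the frames whose energy matches the motion expected for the event named in the metadata are kept. The energy measure, expected value, and tolerance are illustrative assumptions.
    ```python
    # Hypothetical sketch: locate event frames whose motion energy matches an expectation.
    import numpy as np

    def motion_energy(frames):
        """frames: (T, H, W) grayscale video; returns energy per frame transition."""
        return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

    def frames_for_event(frames, expected_energy, tolerance=0.05):
        energy = motion_energy(frames)
        return np.flatnonzero(np.abs(energy - expected_energy) < tolerance) + 1

    video = np.random.rand(30, 48, 64) * 0.1
    video[10:15] += np.random.rand(5, 48, 64)          # burst of motion around frame 10
    print(frames_for_event(video, expected_energy=0.3, tolerance=0.2))
    ```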
  • Patent number: 11462031
    Abstract: The present disclosure proposes a computer-implemented method of object recognition of an object to be identified, using a method for reconstruction of a 3D point cloud. The method comprises the steps of: acquiring, by a mobile device, a plurality of pictures of said object; sending the acquired pictures to a cloud server; reconstructing, by the cloud server, a 3D point cloud of the object; and performing a 3D match search in a 3D database using the 3D point cloud reconstruction to identify the object, the 3D match search comprising a comparison of the reconstructed 3D point cloud of the object with 3D point clouds of known objects stored in the 3D database.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: October 4, 2022
    Assignee: APPLICATIONS MOBILES OVERVIEW INC.
    Inventors: Danae Blondel, Laurent Juppe
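    A minimal, hypothetical sketch of the match-search step described above: a reconstructed point cloud is compared against clouds of known objects using a symmetric Chamfer distance, and the closest one is returned. The distance choice and the toy "database" are assumptions; the reconstruction itself (from the pictures sent to the server) is out of scope here.
    ```python
    # Hypothetical sketch: 3D match search of a reconstructed point cloud against a database.
    import numpy as np

    def chamfer(a, b):
        """Symmetric mean nearest-neighbour distance between (N, 3) and (M, 3) point clouds."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return float(d.min(axis=1).mean() + d.min(axis=0).mean())

    def match_object(query_cloud, database):
        scores = {name: chamfer(query_cloud, cloud) for name, cloud in database.items()}
        return min(scores, key=scores.get), scores

    rng = np.random.default_rng(1)
    db = {"mug": rng.normal(size=(200, 3)), "shoe": rng.normal(size=(200, 3)) + 5.0}
    query = db["mug"] + rng.normal(scale=0.01, size=(200, 3))   # noisy reconstruction of a mug
    print(match_object(query, db)[0])                           # -> 'mug'
    ```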