Patents Examined by Michelle M Entezari
  • Patent number: 11664112
    Abstract: The present disclosure provides a tissue density analysis system. The system includes an acquisition module configured to obtain image data and tissue density distribution data; a display module configured to display the obtained tissue density distribution data in one or more charts; a processing module configured to adjust the tissue density distribution data displayed in the one or more charts; and a storage module configured to store the image data, the tissue density distribution data and an instruction.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: May 30, 2023
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Shiquan Huang, Changyun Qiu, Weiwen Nie
  • Patent number: 11663456
    Abstract: A mechanism is described for facilitating the transfer of features learned by a context-independent, pre-trained deep neural network to a context-dependent neural network. The mechanism includes extracting, via a deep learning framework, a feature learned by a first deep neural network (DNN) model, wherein the first DNN model is a pre-trained DNN model for computer vision that enables context-independent classification of an object within an input video frame, and training, via the deep learning framework, a second DNN model for computer vision based on the extracted feature, the second DNN model being an update of the first DNN model, wherein training the second DNN model includes training it on a dataset including context-dependent data. An illustrative fine-tuning sketch follows this entry.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventor: Raanan Yonatan Yehezkel Rohekar
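    A minimal Python sketch (assuming PyTorch and torchvision ≥ 0.13 are available) of the kind of transfer the abstract describes: features learned by a pre-trained, context-independent model are frozen and reused, and a new head is trained on context-dependent data. The ResNet-18 backbone, the 10-class head, and the training-step helper are illustrative assumptions, not the patented mechanism itself.

```python
import torch
import torch.nn as nn
from torchvision import models

# First DNN model: pre-trained, context-independent classifier.
first_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Extract the learned features: everything except the final classification layer.
feature_extractor = nn.Sequential(*list(first_model.children())[:-1])
for p in feature_extractor.parameters():
    p.requires_grad = False  # keep the transferred features fixed

# Second DNN model: an update of the first, with a new context-dependent head.
num_context_classes = 10  # placeholder
second_model = nn.Sequential(
    feature_extractor,
    nn.Flatten(),
    nn.Linear(first_model.fc.in_features, num_context_classes),
)

def train_step(images, labels, optimizer, loss_fn=nn.CrossEntropyLoss()):
    """One training step of the second model on context-dependent data."""
    optimizer.zero_grad()
    loss = loss_fn(second_model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```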
  • Patent number: 11663697
    Abstract: A device for assembling at least two shots of a scene acquired by at least one sensor includes a memory and processing circuitry. The processing circuitry is configured to save, in the memory, a first data set contained in a first signal generated by each pixel of the sensor and indicative of a first shot of the scene, and a second data set contained in a second signal generated by each pixel of the sensor and indicative of a second shot of the scene. The processing circuitry is further configured to assemble the first and second shots on the basis of the content of the first and second data sets of a plurality of pixels in order to form a resulting scene. A pixel-wise assembly sketch follows this entry.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: May 30, 2023
    Assignee: STMICROELECTRONICS (GRENOBLE 2) SAS
    Inventors: Gregory Roffet, Stephane Drouard, Roger Monteith
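    A small numpy sketch loosely illustrating per-pixel assembly of two shots of the same scene from stored data sets. The saturation-based selection rule is an assumption chosen for illustration; the patent's actual assembly criterion is defined by its claims.

```python
import numpy as np

def assemble_shots(shot1: np.ndarray, shot2: np.ndarray, saturation: int = 250) -> np.ndarray:
    """Combine two exposures pixel by pixel: keep shot1 where it is well exposed,
    fall back to shot2 where shot1 is saturated."""
    assert shot1.shape == shot2.shape
    return np.where(shot1 >= saturation, shot2, shot1)

# Example with synthetic 8-bit frames.
rng = np.random.default_rng(0)
long_exposure = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
short_exposure = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
print(assemble_shots(long_exposure, short_exposure))
```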
  • Patent number: 11657651
    Abstract: An information processing apparatus monitors a plurality of persons on a premises by one or more image sensors installed on the premises. The information processing apparatus includes a controller configured to determine a tendency of behavior of at least one person of the plurality of persons according to a length of a blank time, the blank time being a time during which the at least one person does not appear in an image captured by the one or more image sensors. A blank-time calculation sketch follows this entry.
    Type: Grant
    Filed: August 19, 2021
    Date of Patent: May 23, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Takayuki Oidemizu, Kazuyuki Inoue, Ryosuke Kobayashi, Yurika Tanaka, Tomokazu Maya, Satoshi Komamine
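    A hedged Python sketch of the blank-time idea: given the timestamps at which a person appears in captured images, measure the gaps during which the person is not seen and derive a tendency from the longest gap. The two-hour threshold and the tendency labels are illustrative assumptions.

```python
from datetime import datetime, timedelta
from typing import List

def longest_blank_time(appearances: List[datetime]) -> timedelta:
    """Longest interval during which the person did not appear in any captured image."""
    ordered = sorted(appearances)
    gaps = [later - earlier for earlier, later in zip(ordered, ordered[1:])]
    return max(gaps, default=timedelta(0))

def behaviour_tendency(appearances: List[datetime],
                       threshold: timedelta = timedelta(hours=2)) -> str:
    """Placeholder labels; the key point is that the tendency depends on the blank time."""
    return "often away" if longest_blank_time(appearances) > threshold else "mostly present"
```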
  • Patent number: 11647213
    Abstract: The present disclosure generally relates to a device and a method of decoding a color picture from a bitstream.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: May 9, 2023
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Sebastien Lasserre, Fabrice Leleannec, Yannick Olivier
  • Patent number: 11640665
    Abstract: Aspects of the technology described herein relate to detecting degraded ultrasound imaging frame rate. Some embodiments include receiving ultrasound data from the ultrasound device, generating ultrasound images from the ultrasound data, taking one or more measurements of ultrasound imaging frame rate based on the ultrasound images, comparing the one or more measurements of ultrasound imaging frame rate to a reference ultrasound imaging frame rate value, and based on a result of comparing the one or more measurements of ultrasound imaging frame rate to the reference ultrasound imaging frame rate value, providing a notification and/or disabling an ultrasound imaging preset and/or an ultrasound imaging mode with which the ultrasound device was configured. A frame-rate check sketch follows this entry.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: May 2, 2023
    Assignee: BFLY OPERATIONS, INC.
    Inventors: Maxim Zaslavsky, Krishna Ersson, Vineet Shah, Abraham Neben, Benjamin Horowitz, Renee Esses, Kirthi Bellamkonda
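    A minimal Python sketch of the described check: measure the imaging frame rate from image timestamps, compare it with a reference value, and notify and/or disable the active preset if it has degraded. The callback names and the 80% tolerance are assumptions.

```python
from typing import Callable, List

def measure_frame_rate(frame_timestamps: List[float]) -> float:
    """Frames per second over the observed window (timestamps in seconds)."""
    if len(frame_timestamps) < 2:
        return 0.0
    span = frame_timestamps[-1] - frame_timestamps[0]
    return (len(frame_timestamps) - 1) / span if span > 0 else 0.0

def check_frame_rate(frame_timestamps: List[float],
                     reference_fps: float,
                     notify: Callable[[str], None],
                     disable_preset: Callable[[], None],
                     tolerance: float = 0.8) -> None:
    """Compare the measured frame rate against the reference and react if degraded."""
    measured = measure_frame_rate(frame_timestamps)
    if measured < tolerance * reference_fps:
        notify(f"Frame rate degraded: {measured:.1f} fps vs reference {reference_fps:.1f} fps")
        disable_preset()
```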
  • Patent number: 11620751
    Abstract: Systems and methods for automatically excluding artifacts from an analysis of a biological specimen image are disclosed. An exemplary method includes obtaining an immunohistochemistry (IHC) image and a control image, determining whether the control image includes one or more artifacts, upon a determination that the control image includes one or more artifacts, identifying one or more artifact regions within the IHC image by mapping the one or more artifacts from the control image to the IHC image, and performing image analysis of the IHC image where any identified artifact regions are excluded from the image analysis. An artifact-exclusion sketch follows this entry.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: April 4, 2023
    Assignee: VENTANA MEDICAL SYSTEMS, INC.
    Inventor: Anindya Sarkar
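    A hedged numpy sketch of excluding artifact regions found in a control image from analysis of the corresponding IHC image. The brightness-threshold artifact detector and the assumption that the two images are already registered one-to-one are simplifications for illustration.

```python
import numpy as np

def detect_artifacts(control: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Boolean mask of artifact pixels in the control image (assumed: unusually bright regions)."""
    return control > threshold

def analyze_ihc(ihc: np.ndarray, control: np.ndarray) -> float:
    """Run a toy analysis (mean stain intensity) over non-artifact pixels only."""
    artifact_mask = detect_artifacts(control)  # artifacts found in the control image
    mapped_mask = artifact_mask                # assume control and IHC images are registered 1:1
    valid = ~mapped_mask                       # exclude mapped artifact regions from the analysis
    return float(ihc[valid].mean()) if valid.any() else float("nan")
```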
  • Patent number: 11600006
    Abstract: An apparatus and method for encoding objects in a camera-captured image with a deep neural network pipeline including multiple convolutional neural networks or convolutional layers. After identifying at least a portion of the camera-captured image, a first convolutional layer is applied to the at least the portion of the camera-captured image and multiple subregion representations are pooled from the output of the first convolutional layer. One or more additional convolutions are performed. At least one deconvolution is performed and concatenated with the output of one or more convolutions. One or more final convolutions are performed. The at least the portion of the camera-captured image is classified as an object category in response to an output of the one or more final convolutions. A sketch of such a pipeline follows this entry.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: March 7, 2023
    Assignee: HERE Global B.V.
    Inventors: Souham Biswas, Sanjay Kumar Boddhu
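    A minimal PyTorch sketch of a pipeline with the ingredients the abstract lists: a first convolution, pooling of subregion representations, a further convolution, a deconvolution concatenated with an earlier convolution output, final convolutions, and a category classifier. All layer sizes are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class EncodeDecodeClassifier(nn.Module):
    def __init__(self, num_categories: int = 5):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                              # pools subregion representations
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.deconv = nn.ConvTranspose2d(32, 16, 2, stride=2)    # deconvolution back to conv1 size
        self.final = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.classify = nn.Linear(16, num_categories)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        c1 = self.conv1(image)                        # first convolutional layer
        c2 = self.conv2(self.pool(c1))                # additional convolution on pooled subregions
        up = self.deconv(c2)                          # deconvolution
        merged = torch.cat([up, c1], dim=1)           # concatenation with an earlier convolution
        feats = self.final(merged).mean(dim=(2, 3))   # final convolutions + global pooling
        return self.classify(feats)                   # object-category logits

logits = EncodeDecodeClassifier()(torch.randn(1, 3, 64, 64))  # portion of a camera-captured image
```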
  • Patent number: 11593597
    Abstract: A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame. A two-pass detection sketch follows this entry.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: February 28, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Yasen Hu, Shuqing Zeng
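    A hedged numpy sketch of the two-pass idea: fuse data from two sensor types, identify a first set of objects at reduced resolution, then selectively process full-resolution data only where the first pass responded. The simple threshold detector stands in for a real perception pipeline.

```python
import numpy as np

def downsample(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Block-average the frame to a lower resolution."""
    h, w = frame.shape
    cropped = frame[:h - h % factor, :w - w % factor]
    return cropped.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def detect(frame: np.ndarray, threshold: float) -> list:
    """Toy detector: return (row, col) cells whose response exceeds a threshold."""
    return list(zip(*np.where(frame > threshold)))

def two_pass_detection(camera: np.ndarray, radar: np.ndarray, factor: int = 4):
    fused = (camera + radar) / 2.0
    first_set = detect(downsample(fused, factor), threshold=0.5)   # low-resolution pass
    second_set = []
    for r, c in first_set:                                         # selective full-resolution pass
        window = fused[r * factor:(r + 1) * factor, c * factor:(c + 1) * factor]
        second_set.extend((r * factor + wr, c * factor + wc) for wr, wc in detect(window, 0.8))
    return first_set, second_set
```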
  • Patent number: 11587325
    Abstract: A method for detecting people entering and leaving a field is provided in an embodiment of the disclosure. The method includes the following. An event detection area corresponding to an entrance is set, and the event detection area includes an upper boundary, a lower boundary, and an internal area, and the lower boundary includes a left boundary, a right boundary, and a bottom boundary; a person image corresponding to a person in an image stream is detected and tracked; and whether the person passes through or does not pass through the entrance is determined according to a first detection result and a second detection result.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: February 21, 2023
    Assignee: Industrial Technology Research Institute
    Inventors: Shu-Hsin Chang, Yao-Chin Yang, Yi-Yu Su, Kun-Hsien Lu
  • Patent number: 11587346
    Abstract: Ink-processing technology is set forth herein for detecting a gesture that a user performs in the course of interacting with an ink document. The technology operates by identifying a grouping of ink strokes created by the user. The technology then determines whether the grouping expresses a gesture based on a combination of spatial information and image information, both of which describe the grouping. That is, the spatial information describes a sequence of positions traversed by the user in drawing the grouping of ink strokes using an ink capture device, while the image information refers to image content in an image produced by rendering the grouping into image form. The technology also provides a technique for identifying the grouping by successively expanding a region of analysis, to ultimately provide a spatial cluster of ink strokes for analysis. A sketch combining both kinds of information follows this entry.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: February 21, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oz Solomon, Oussama Elachqar, Sergey Aleksandrovich Doroshenko, Nima Mohajerin, Badal Yadav
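    A hedged Python sketch of combining the two signals the abstract mentions: spatial information (the sequence of pen positions in a grouping of ink strokes) and image information (the grouping rendered to a small bitmap). The feature choices and the toy decision rule stand in for the trained models the technology would use.

```python
import numpy as np

def spatial_features(strokes: list) -> np.ndarray:
    """Simple statistics over the (x, y) position sequences of all strokes in the grouping."""
    points = np.concatenate(strokes, axis=0)
    path_length = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    width, height = points.max(axis=0) - points.min(axis=0)
    return np.array([path_length, width, height, len(strokes)])

def render_to_image(strokes: list, size: int = 32) -> np.ndarray:
    """Rasterise the grouping of ink strokes into a small binary image."""
    points = np.concatenate(strokes, axis=0)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (size - 1) / np.maximum(maxs - mins, 1e-6)
    image = np.zeros((size, size))
    cols, rows = ((points - mins) * scale).astype(int).T
    image[rows, cols] = 1.0
    return image

def is_gesture(strokes: list) -> bool:
    """Toy rule standing in for a classifier that consumes both kinds of information."""
    spatial = spatial_features(strokes)
    ink_density = render_to_image(strokes).mean()
    return bool(spatial[0] > 50 and ink_density < 0.2)
```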
  • Patent number: 11576628
    Abstract: Emission imaging data are reconstructed to generate a low dose reconstructed image. Standardized uptake value (SUV) conversion (30) is applied to convert the low dose reconstructed image to a low dose SUV image. A neural network (46, 48) is applied to the low dose SUV image to generate an estimated full dose SUV image. Prior to applying the neural network, the low dose reconstructed image or the low dose SUV image is filtered using a low pass filter (32). The neural network is trained on a set of training low dose SUV images and corresponding training full dose SUV images to transform the training low dose SUV images to match the corresponding training full dose SUV images, using a loss function having a mean square error loss component (34) and a loss component (36) that penalizes loss of image texture and/or a loss component (38) that promotes edge preservation. A sketch of such a composite loss follows this entry.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: February 14, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Sydney Kaplan, Yang-Ming Zhu, Andriy Andreyev, Chuanyong Bai, Steven Michael Cochoff
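    A hedged PyTorch sketch of a composite loss with the three components the abstract names: a mean-square-error term, a term that penalizes loss of image texture (here, differences in local standard deviation), and a term that promotes edge preservation (here, differences in image gradients). The particular texture and edge terms and the weights are assumptions.

```python
import torch
import torch.nn.functional as F

def local_std(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Local standard deviation as a simple texture descriptor."""
    mean = F.avg_pool2d(x, k, stride=1, padding=k // 2)
    sq_mean = F.avg_pool2d(x * x, k, stride=1, padding=k // 2)
    return (sq_mean - mean * mean).clamp(min=0).sqrt()

def gradients(x: torch.Tensor):
    """Horizontal and vertical finite differences."""
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def composite_loss(pred, target, w_mse=1.0, w_texture=0.1, w_edge=0.1):
    mse = F.mse_loss(pred, target)                            # mean-square-error component
    texture = F.l1_loss(local_std(pred), local_std(target))   # texture-preservation component
    gx_p, gy_p = gradients(pred)
    gx_t, gy_t = gradients(target)
    edge = F.l1_loss(gx_p, gx_t) + F.l1_loss(gy_p, gy_t)      # edge-preservation component
    return w_mse * mse + w_texture * texture + w_edge * edge

# Usage with (batch, channel, H, W) estimated and true full dose SUV images:
loss = composite_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```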
  • Patent number: 11548310
    Abstract: This patent document discloses physical documents including metameric ink pairs. One claim recites a document comprising: a first surface; a second surface, in which the first surface comprises a first set of print structures and a second set of print structures, in which the first set of print structures and the second set of print structures collectively convey an encoded signal discernable from optical scan data representing at least a first portion of the first surface, in which the first set of print structures is provided on the first surface with a first ink and the second set of print structures is provided on the first surface with a second, different ink, and in which the first ink and the second, different ink comprise a metameric pair. Of course, other claims and combinations are described as well.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: January 10, 2023
    Assignee: Digimarc Corporation
    Inventors: Tony F. Rodriguez, Geoffrey B. Rhoads, Ravi K. Sharma
  • Patent number: 11514695
    Abstract: Technology is described herein for parsing an ink document having a plurality of ink strokes. The technology performs stroke-level processing on the plurality of ink strokes to produce stroke-level information, the stroke-level information identifying at least one characteristic associated with each ink stroke. The technology also performs object-level processing on individual objects within the ink document to produce object-level information, the object-level information identifying one or more groupings of ink strokes in the ink document. The technology then parses the ink document into constituent parts based on the stroke-level information and the object-level information. In some implementations, the technology converts the ink stroke data into an ink image. The stroke-level processing and/or the object-level processing may operate on the ink image using one or more neural networks.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: November 29, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oussama Elachqar, Badal Yadav, Oz Solomon, Sergey Aleksandrovich Doroshenko, Nima Mohajerin
  • Patent number: 11501543
    Abstract: The present invention discloses a system and a method for language-independent automatic license plate localization by analysis of a plurality of images in real time under daylight conditions without using any external light. In one embodiment, the system can work without any spatial constraints and/or demographic considerations and without any restriction on jurisdiction, and can effectively localize license plates (LPs) of any type consisting of alphanumeric characters and symbols. In another embodiment, methods for search-space reduction based on motion-based filtration and for LP localization based on edge-active-region filtration with high frame-per-second (FPS) throughput are described. In a further embodiment, a dual-binarization scheme is described for color-invariant LP localization. A dual-binarization sketch follows this entry.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: November 15, 2022
    Assignee: VIDEONETICS TECHNOLOGY PRIVATE LIMITED
    Inventors: Sudeb Das, Apurba Gorai, Tinku Acharya
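    A hedged numpy sketch of the dual-binarization idea mentioned in the abstract: binarize a grey-level frame with both polarities so that plates are captured whether they show dark characters on a light background or the reverse. The standard Otsu threshold is used here as a stand-in; the patent defines its own scheme.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Classic Otsu threshold for an 8-bit image (standard algorithm, not patent-specific)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def dual_binarize(gray: np.ndarray):
    """Return both binarization polarities of the frame."""
    t = otsu_threshold(gray)
    dark_on_light = gray < t    # plates with dark characters on a light background
    light_on_dark = gray >= t   # the opposite polarity
    return dark_on_light, light_on_dark
```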
  • Patent number: 11501572
    Abstract: In various examples, a set of object trajectories may be determined based at least in part on sensor data representative of a field of view of a sensor. The set of object trajectories may be applied to a long short-term memory (LSTM) network to train the LSTM network. An expected object trajectory for an object in the field of view of the sensor may be computed by the LSTM network based at least in part on an observed object trajectory. By comparing the observed object trajectory to the expected object trajectory, a determination may be made that the observed object trajectory is indicative of an anomaly. A trajectory-comparison sketch follows this entry.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: November 15, 2022
    Assignee: NVIDIA Corporation
    Inventors: Milind Naphade, Shuo Wang
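    A hedged PyTorch sketch of the described idea: an LSTM models object trajectories, its output gives an expected trajectory, and a large deviation between the observed and expected trajectories is flagged as an anomaly. The network size, the deviation metric, and the threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # predicts the next (x, y) position

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(positions)           # positions: (batch, time, 2)
        return self.head(out)                   # expected next position at every time step

def is_anomalous(model: TrajectoryLSTM, observed: torch.Tensor, threshold: float = 5.0) -> bool:
    """Compare the observed trajectory with the expected trajectory computed by the LSTM."""
    with torch.no_grad():
        expected = model(observed[:, :-1, :])                      # predictions for later steps
    deviation = (expected - observed[:, 1:, :]).norm(dim=-1).mean()
    return bool(deviation > threshold)

model = TrajectoryLSTM()                                           # would be trained on trajectories
observed = torch.cumsum(torch.randn(1, 20, 2), dim=1)              # one synthetic 20-step trajectory
print(is_anomalous(model, observed))
```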
  • Patent number: 11495074
    Abstract: A face recognition unlocking device includes a collection device configured to obtain information of a user, a controller configured to determine, based on the information of the user, whether face recognition of the user succeeds and to calculate a location of the user for success in the face recognition, and an output device configured to guide the user to move.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: November 8, 2022
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventor: Min Gu Park
  • Patent number: 11490986
    Abstract: Methods and systems for displaying an image of a medical procedure (e.g., an intraoperative image) with additional information (e.g., data) that can augment the image of the medical procedure are provided. Augmenting the image can include overlaying data from a first image onto a second image. Overlaying the data can involve determining, for a point or multiple points in the first image, a matching location or multiple matching locations in a second image. The first image and the second image can be of a patient. Determining the matching location can involve using a rotation and scale invariant geometrical relationship. The matching locations can be used as the basis for the overlay. A point-mapping sketch follows this entry.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: November 8, 2022
    Assignee: BEYEONICS SURGICAL LTD.
    Inventor: Rani Ben-Yishai
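    A hedged Python sketch of mapping a point from a first image to its matching location in a second image through a rotation- and scale-invariant geometrical relationship: the point is expressed relative to two anchor features visible in both images (a similarity relation written with complex numbers). The known anchor correspondences are an assumption; this is not the patented method itself.

```python
def matching_location(p, a1, b1, a2, b2):
    """Map point p, given anchors (a1, b1) in image 1, to its matching location
    relative to anchors (a2, b2) in image 2 (all points are (x, y) tuples)."""
    p, a1, b1, a2, b2 = (complex(*q) for q in (p, a1, b1, a2, b2))
    ratio = (p - a1) / (b1 - a1)        # invariant under rotation, scaling and translation
    q = a2 + ratio * (b2 - a2)
    return (q.real, q.imag)

# Example: image 2 is image 1 rotated by 90 degrees and scaled by 2 about the origin.
print(matching_location(p=(1.0, 1.0), a1=(0.0, 0.0), b1=(2.0, 0.0),
                        a2=(0.0, 0.0), b2=(0.0, 4.0)))   # -> (-2.0, 2.0)
```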
  • Patent number: 11488415
    Abstract: A three-dimensional facial shape estimating device (300) includes a face image acquiring unit (301) configured to acquire a plurality of image frames that capture a subject's face; a face information acquiring unit (302) having, preset therein, a predetermined number of facial feature points, the face information acquiring unit (302) being configured to acquire, for each of the plurality of image frames, face information that indicates a position of each of the predetermined number of facial feature points of the subject's face within the image frame; and a three-dimensional shape estimating unit (303) configured to perform mapping of each of the predetermined number of facial feature points of the subject's face between the plurality of image frames based on the face information of each of the plurality of image frames and to estimate the three-dimensional shape of the subject's face based on a result from the mapping.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: November 1, 2022
    Assignee: NEC CORPORATION
    Inventor: Akinori Ebihara
  • Patent number: 11484251
    Abstract: A method of generating a correction plan for a knee of a patient includes obtaining a ratio of reference bone density to reference ligament tension in a reference population. A bone of the knee of the patient may be imaged. From the image of the bone, a first dataset may be determined including at least one site of ligament attachment and existing dwell points of a medial femoral condyle and lateral femoral condyle of the patient on a tibia of the patient. Desired positions of contact in three dimensions of the femoral condyles of the patient with the tibia of the patient may be obtained by determining a relationship in which a ratio of bone density to ligament tension of the patient is substantially equal to the ratio of reference bone density to reference ligament tension.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: November 1, 2022
    Assignee: Howmedica Osteonics Corp.
    Inventors: Gokce Yildirim, Sally Liarno, Mark Gruczynski