Patents Examined by Delomia L Gilliard
  • Patent number: 11966901
    Abstract: Disclosed is a method for identifying and monitoring the shopping behavior of a user. The method includes capturing images from a depth camera mounted on a shelf unit; identifying a user from the captured images; identifying joints of the identified user by performing deep neural network (DNN) body joint detection on the captured images; detecting and tracking actions of the identified user over a first time period; tracking an object from the bins over a second time period by associating the object with one or more joints, among the identified joints, that have entered the bins within the shelf unit; and determining an action of the identified user based at least in part on the object associated with the one or more joints and on results of deep learning identification on the bounding box.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: April 23, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Amir Hossein Khalili, Bhooshan Supe, Jung Ick Guack, Shantanu Patel, Gaurav Saraf, Baisub Lee, Helder Silva, Julie Huynh, Jaigak Song
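The joint-object association and action-determination steps above can be sketched as follows; the joint names, coordinate units, and the depth-based take/return rule are illustrative assumptions, not the patented implementation:

```python
import math

def associate_object_with_joint(object_xy, joints_in_bin):
    """Associate a tracked object with the nearest joint that entered the bin.

    object_xy     -- (x, y) position of the object (hypothetical units)
    joints_in_bin -- dict mapping joint name -> (x, y) for joints inside the bin
    Returns the name of the closest joint, or None if no joint entered the bin.
    """
    if not joints_in_bin:
        return None
    return min(joints_in_bin,
               key=lambda j: math.dist(object_xy, joints_in_bin[j]))

def classify_action(depth_before, depth_after):
    """Infer a take/return action from bin depth readings (illustrative rule)."""
    if depth_after > depth_before:
        return "take"    # bin reads deeper -> an object was removed
    if depth_after < depth_before:
        return "return"  # bin reads shallower -> an object was placed back
    return "none"
```

Associating the object with the joint that entered the bin, rather than tracking the object directly, sidesteps occlusion of the object by the shopper's hand.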
  • Patent number: 11965901
    Abstract: A management system including a processor configured to: acquire an image obtained by imaging the outer surface of each of plural sample containers and a boundary container, each sample container containing a sample and carrying, on its outer surface, subject information of the subject from whom the sample was collected, and the boundary container carrying, on its outer surface, group boundary information indicating a boundary between plural groups of subjects; recognize the subject information and the group boundary information based on the image; and associate a test result related to the sample contained in each of the sample containers with a test order which includes the subject information and in which the groups are divided corresponding to the group boundary information, based on a result of the recognition and the test order.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: April 23, 2024
    Assignee: FUJIFILM CORPORATION
    Inventors: Yoshihiro Seto, Haruyasu Nakatsugawa
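The grouping logic described above amounts to splitting a recognized rack sequence at each boundary container; a minimal sketch, in which the dict keys marking boundary and sample containers are assumptions for illustration:

```python
def split_into_groups(containers):
    """Split a recognized container sequence into subject groups.

    containers -- list of dicts in rack order; a boundary container is marked
                  {"boundary": True}, a sample container carries {"subject": id}.
    Returns a list of groups, each a list of subject IDs.
    """
    groups, current = [], []
    for c in containers:
        if c.get("boundary"):
            groups.append(current)  # close the group at the boundary container
            current = []
        else:
            current.append(c["subject"])
    groups.append(current)          # the final group has no trailing boundary
    return groups

demo_groups = split_into_groups([
    {"subject": "A1"}, {"subject": "A2"},
    {"boundary": True},
    {"subject": "B1"}])
```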
  • Patent number: 11961230
    Abstract: A discerning device that discerns a cell mass includes: a storage unit that stores a trained model that has been subjected to machine learning on the basis of training data in which an index associated with a first cell mass out of a predetermined index including at least one index indicating a feature of a cell mass is correlated with information indicating whether a state of the first cell mass is a first state or a second state that is different from the first state; an image-analyzing unit that acquires an index associated with a second cell mass out of the predetermined index; and a discerning-processing unit that discerns whether a state of the second cell mass is the first state or the second state on the basis of the index associated with the second cell mass and the trained model.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: April 16, 2024
    Assignee: JSR Corporation
    Inventor: Daichi Suemasa
  • Patent number: 11954911
    Abstract: For enabling safer operation of freight trains, a fixed state inspection apparatus includes: a frame image group acquisition unit acquiring frame image group information being information including a plurality of frame images acquired by continuously capturing images of a freight train along a traveling direction, at least one of the frame images including a fixing mechanism that is provided in association with a container in the freight train and can switch between a fixed state and a released state between members; a detection unit detecting the fixing mechanism in an unfixed state different from the fixed state in the freight train, based on the frame image group information; and an output unit generating and outputting detection position information being information allowing determination of a position of the detected fixing mechanism in the freight train, based on a detection frame image being a frame image including the detected fixing mechanism.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: April 9, 2024
    Assignee: NEC CORPORATION
    Inventor: Rina Yakuwa
  • Patent number: 11954914
    Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data: two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of an environment around an autonomous machine.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: April 9, 2024
    Assignee: NVIDIA Corporation
    Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
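The final conversion step, lifting a 2.5D motion vector back into 3D, can be sketched by unprojecting both endpoints of the flow from range-image (spherical) coordinates; the azimuth/elevation parameterization is a common LiDAR convention assumed here, not taken from the patent:

```python
import math

def range_pixel_to_xyz(az, el, depth):
    """Spherical (azimuth, elevation, range) -> Cartesian, a common LiDAR model."""
    return (depth * math.cos(el) * math.cos(az),
            depth * math.cos(el) * math.sin(az),
            depth * math.sin(el))

def flow_2p5d_to_3d(az, el, depth, d_az, d_el, d_depth):
    """Lift a 2.5D motion vector (pixel flow + depth flow) to a 3D motion vector."""
    x0 = range_pixel_to_xyz(az, el, depth)
    x1 = range_pixel_to_xyz(az + d_az, el + d_el, depth + d_depth)
    return tuple(b - a for a, b in zip(x0, x1))

# Pure radial motion: a point straight ahead at 10 m moving 1 m away.
demo_flow = flow_2p5d_to_3d(0.0, 0.0, 10.0, 0.0, 0.0, 1.0)
```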
  • Patent number: 11953313
    Abstract: Provided is a three-dimensional measurement device, including an illumination system (I) and an imaging system (II). The illumination system includes, along an illumination light path, a light source (8), a light beam shaping apparatus (8), a pattern modulation apparatus (6), and a projection lens (2). The pattern modulation apparatus is configured to form a coded pattern. The light beam shaping apparatus is configured to shape light emitted by the light source into near-parallel light. The projection lens is configured to project the coded pattern onto a target object. The imaging system includes an imaging lens (3), a first beam-splitting system (12, 13), and an image sensor group including N image sensors (9, 10, 11). The first beam-splitting system is configured to transmit the coded pattern received by the imaging lens and projected onto the target object to the N image sensors of the image sensor group.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: April 9, 2024
    Assignee: CHENGDU PIN TAI DING FENG BUSINESS ADMINISTRATION
    Inventors: Qianwen Xiang, Wanjia Ao, Youmin Zhuang, Ruojia Wang, Zixin Xie, Mintong Wu, Ruihan Zhang
  • Patent number: 11956519
    Abstract: A method, apparatus and computer program product are provided to signal grouping types in an image container file. Relative to the construction of an image container file, the method, apparatus and computer program product construct an image container file having a group box with a grouping type associated with burst-captured images, time-synchronized images captured by a plurality of image capture devices or an image item associated with an audio track. With respect to the processing of an image container file, the method, apparatus and computer program product permit an image container file having a group box with a grouping type associated with one of burst-captured images, time-synchronized images captured by a plurality of image capture devices or an image item associated with an audio track to be processed to cause one or more image items from the image container file to be rendered in accordance with the grouping type.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: April 9, 2024
    Assignee: Nokia Technologies Oy
    Inventors: Emre Aksu, Miska Hannuksela, Olli Kilpelainen, Jonne Juhani Mäkinen, Juha-Pekka Hippeläinen, Jani Kattelus
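A group box pairing a grouping type with member entity IDs, and rendering members according to that type, might look like the sketch below; the four-character codes and item fields are hypothetical placeholders (the real codes are defined by the file-format specification), and only the burst case is given an ordering rule:

```python
from dataclasses import dataclass, field

# Hypothetical grouping-type 4CCs, mirroring the ISOBMFF convention of
# four-character identifiers; the actual codes come from the specification.
BURST, TIME_SYNC, AUDIO_PAIR = "brst", "tsyn", "aupr"

@dataclass
class EntityGroup:
    grouping_type: str                               # one of the 4CCs above
    entity_ids: list = field(default_factory=list)   # member item/track IDs

def render_order(group, items):
    """Return a group's items in rendering order (illustrative rule)."""
    by_id = {i["id"]: i for i in items}
    members = [by_id[e] for e in group.entity_ids if e in by_id]
    if group.grouping_type == BURST:
        members.sort(key=lambda i: i.get("capture_time", 0))  # burst: by time
    return members

demo_items = [{"id": 2, "capture_time": 5}, {"id": 1, "capture_time": 3}]
demo_order = [i["id"] for i in render_order(EntityGroup(BURST, [2, 1]), demo_items)]
```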
  • Patent number: 11948400
    Abstract: An action detection method based on human skeleton features, and a storage medium, belong to the field of computer vision. The method includes: for each person, extracting a series of body keypoints in every frame of the video as the human skeleton feature; calculating a body structure center point and an approximate rigid motion area from the human skeleton feature as the calculated value of the skeleton feature state, and predicting an estimated value for the next frame; performing target matching according to the estimated and calculated values, correlating the human skeleton features belonging to the same target to obtain a skeleton feature sequence, and then correlating the features of each keypoint in the temporal domain to obtain a spatial-temporal domain skeleton feature; and inputting the skeleton feature into an action detection model to obtain an action category. In the disclosure, the accuracy of action detection is improved.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: April 2, 2024
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Li Yu, Han Yu
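The predict-then-match step can be sketched with a constant-velocity center prediction and greedy nearest-center matching; the matching strategy and distance threshold are simplifying assumptions, not the method claimed in the patent:

```python
def predict_center(prev_center, velocity):
    """Constant-velocity prediction of the body-structure center point."""
    return (prev_center[0] + velocity[0], prev_center[1] + velocity[1])

def match_targets(predicted, detected, max_dist=50.0):
    """Greedy nearest-center matching between predicted and detected skeletons.

    predicted -- dict target_id -> (x, y) estimated centers for this frame
    detected  -- list of (x, y) calculated centers in the current frame
    Returns dict target_id -> index into `detected` (unmatched targets omitted).
    """
    matches, used = {}, set()
    for tid, p in predicted.items():
        best, best_d = None, max_dist
        for i, d in enumerate(detected):
            if i in used:
                continue
            dist = ((p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) ** 0.5
            if dist < best_d:
                best, best_d = i, dist
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

demo_matches = match_targets({1: (10.0, 10.0), 2: (100.0, 100.0)},
                             [(98.0, 101.0), (12.0, 9.0)])
```

Matched detections extend each target's skeleton feature sequence, which is then correlated per keypoint over time.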
  • Patent number: 11940290
    Abstract: A navigation system may include a processor programmed to receive, from a camera of the host vehicle, one or more images captured from an environment of the host vehicle, and analyze the one or more images to detect an indicator of an intersection. The processor may also be programmed to determine, based on output received from at least one sensor of the host vehicle, a stopping location of the host vehicle relative to the detected intersection, and analyze the one or more images to determine an indicator of whether one or more other vehicles are in front of the host vehicle. The processor may further be programmed to send the stopping location of the host vehicle and the indicator of whether one or more other vehicles are in front of the host vehicle to a server for use in updating a road navigation model.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: March 26, 2024
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Tomer Eshet, Daniel Braunstein
  • Patent number: 11935298
    Abstract: A system and method of predicting a team's formation on a playing surface are disclosed herein. A computing system retrieves one or more sets of event data for a plurality of events. Each set of event data corresponds to a segment of the event. A deep neural network, such as a mixture density network, learns to predict an optimal permutation of players in each segment of the event based on the one or more sets of event data. The deep neural network learns a distribution of players for each segment based on the corresponding event data and optimal permutation of players. The computing system generates a fully trained prediction model based on the learning. The computing system receives target event data corresponding to a target event. The computing system generates, via the trained prediction model, an expected position of each player based on the target event data.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: March 19, 2024
    Assignee: STATS LLC
    Inventors: Jennifer Hobbs, Sujoy Ganguly, Patrick Joseph Lucey
  • Patent number: 11935248
    Abstract: System and methods are provided for detecting an occupancy state in a vehicle having an interior passenger compartment. The systems and methods include: analyzing data of the interior passenger compartment to yield at least one probability of a current occupancy state of the vehicle; further analyzing additional data of the interior passenger compartment to yield predicted probabilities of the occupancy state of the vehicle, wherein each of the predicted probabilities relates to the probability of changing the current occupancy state to a different occupancy state; combining the current at least one probability of the current occupancy state with the predicted probabilities of the occupancy state to yield an updated probability of an updated occupancy state of the vehicle; determining the current occupancy state based on the updated probability of the updated occupancy state of the vehicle; and generating an output to control one or more devices or applications based on the determined occupancy state.
    Type: Grant
    Filed: February 17, 2020
    Date of Patent: March 19, 2024
    Assignee: GENTEX CORPORATION
    Inventor: Yuval Gronau
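The combining step reads like a Markov-style prediction update: the current state distribution is mixed with change probabilities and renormalized. A minimal sketch, with state names and probabilities invented for illustration:

```python
def update_occupancy(current, transition):
    """Combine current state probabilities with predicted change probabilities.

    current    -- dict state -> P(state) from the current-frame analysis
    transition -- dict (from_state, to_state) -> P(change), from the
                  additional-data analysis
    Returns the updated, normalized state distribution.
    """
    updated = {s: 0.0 for s in current}
    for (src, dst), p in transition.items():
        updated[dst] += current[src] * p   # mass flowing from src into dst
    total = sum(updated.values()) or 1.0
    return {s: p / total for s, p in updated.items()}

demo = update_occupancy(
    {"empty": 0.8, "occupied": 0.2},
    {("empty", "empty"): 0.9, ("empty", "occupied"): 0.1,
     ("occupied", "occupied"): 0.95, ("occupied", "empty"): 0.05})
```

The determined state would then be the argmax of the updated distribution.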
  • Patent number: 11930160
    Abstract: A polarization imaging unit 20 acquires a polarized image including polarization pixels with a plurality of polarization directions. An information compression unit 40 sets reference image information based on polarized image information of reference polarization pixels with at least a plurality of polarization directions in the polarized image and generates difference information between the polarized image information of each of polarization pixels different from the reference polarization pixels in the polarized image and the reference image information. In addition, the information compression unit 40 reduces the amount of information of the difference information generated for each of the polarization pixels with the plurality of polarization directions to generate compressed image information including the reference image information and the difference information with the reduced amount of information.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: March 12, 2024
    Assignee: SONY CORPORATION
    Inventors: Yuhi Kondo, Yasutaka Hirasawa, Takefumi Nagumo, Toshinori Ihara, Teppei Kurita, Legong Sun
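Reference-plus-difference coding with a reduced-precision difference can be sketched as below; the quantization step of 4 and the angle-keyed pixel layout are arbitrary choices for illustration, not the compression actually claimed:

```python
def compress_polarization(pixels, reference_key):
    """Reference-plus-difference compression of polarization pixel values.

    pixels        -- dict polarization_angle -> intensity value
    reference_key -- angle whose pixel serves as the reference image information
    Differences are coarsely quantized (step of 4 here) to reduce the amount
    of information, as in lossy difference coding.
    """
    ref = pixels[reference_key]
    diffs = {a: (v - ref) // 4 for a, v in pixels.items() if a != reference_key}
    return {"reference": ref, "diffs": diffs}

def decompress_polarization(compressed, reference_key):
    """Rebuild per-angle values from the reference and quantized differences."""
    ref = compressed["reference"]
    out = {reference_key: ref}
    out.update({a: ref + d * 4 for a, d in compressed["diffs"].items()})
    return out

demo_c = compress_polarization({0: 100, 45: 108, 90: 96, 135: 104}, 0)
demo_d = decompress_polarization(demo_c, 0)
```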
  • Patent number: 11918295
    Abstract: According to some embodiments, the process includes the steps of: a) taking a sagittal preoperative x-ray of the vertebral column of the patient to be treated, extending from the cervical vertebrae to the femoral heads; b) on that x-ray, identifying points on S1, S2, T12 and C7; c) depicting, on the said x-ray, curved segments beginning at the center of the plate of S1 and going to the center of the plate of C7; e) identifying, on that x-ray, the correction(s) to be made to the vertebral column, including the identification of posterior osteotomies to make; f) pivoting portions of said x-ray relative to other portions of that x-ray, according to the osteotomies to be made; g) performing, on said x-ray, a displacement of the sagittal curvature segment extending over the vertebral segment to be corrected; h) from a straight vertebral rod (TV), producing the curvature of that rod according to the shape of said sagittal curvature segment in said displacement position.
    Type: Grant
    Filed: October 6, 2021
    Date of Patent: March 5, 2024
    Assignee: MEDICREA INTERNATIONAL
    Inventors: Thomas Mosnier, David N. Ryan
  • Patent number: 11922658
    Abstract: A pose tracking method, a pose tracking device and an electronic device. The method comprises: acquiring continuous multiple-frame images of a scanned object and an initial pose of an image capturing unit (S10); taking the initial pose as an initial value, acquiring, on the basis of a previous frame image and a current frame image in the continuous multiple-frame images, a first calculated pose of the current frame image by using a first algorithm (S12); taking the first calculated pose as an initial value, acquiring, on the basis of the current frame image and a current frame image reconstruction model, a second calculated pose of the current frame image by using a second algorithm (S14); and updating the initial pose of the image capturing unit according to the second calculated pose, repeating the described steps to achieve pose tracking of the image capturing unit (S16).
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: March 5, 2024
    Assignee: ArcSoft Corporation Limited
    Inventors: Long Zhang, Chao Zhang, Jin Wang
  • Patent number: 11922640
    Abstract: A method for 3D object tracking is described. The method includes inferring first 2D semantic keypoints of a 3D object within a sparsely annotated video stream. The method also includes matching the first 2D semantic keypoints of a current frame with second 2D semantic keypoints in a next frame of the sparsely annotated video stream using embedded descriptors within the current frame and the next frame. The method further includes warping the first 2D semantic keypoints to the second 2D semantic keypoints to form warped 2D semantic keypoints in the next frame. The method also includes labeling a 3D bounding box in the next frame according to the warped 2D semantic keypoints in the next frame.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: March 5, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun Bhargava, Sudeep Pillai, Kuan-Hui Lee
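The matching-and-warping step can be sketched as nearest-descriptor assignment: each annotated keypoint moves to the next-frame candidate whose embedded descriptor is closest. The descriptors, candidate sets, and distance metric below are illustrative assumptions:

```python
def warp_keypoints(descriptors, next_candidates, next_descriptors):
    """Propagate 2D semantic keypoints to the next frame by descriptor matching.

    descriptors      -- dict keypoint name -> embedded descriptor (list of floats)
    next_candidates  -- list of (x, y) candidate locations in the next frame
    next_descriptors -- descriptors for each candidate, same order
    Each keypoint is warped to the candidate with the closest descriptor.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    warped = {}
    for name, desc in descriptors.items():
        best = min(range(len(next_candidates)),
                   key=lambda i: sqdist(desc, next_descriptors[i]))
        warped[name] = next_candidates[best]
    return warped

demo_warped = warp_keypoints(
    {"nose": [1.0, 0.0]},
    [(50, 50), (12, 11)],
    [[0.0, 1.0], [0.9, 0.1]])
```

The warped keypoints would then anchor the 3D bounding-box label in the otherwise unannotated frame.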
  • Patent number: 11914851
    Abstract: Provided is an object detection device or the like which efficiently generates good-quality training data. This object detection device is provided with: a detection unit which uses a dictionary to detect objects from an input image; a reception unit which displays, on a display device, the input image accompanied by a display emphasizing partial areas of detected objects, and receives, from one operation of an input device, a selection of a partial area and an input of the class of the selected partial area; a generation unit which generates training data from the image of the selected partial area and the inputted class; and a learning unit which uses the training data to learn the dictionary.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: February 27, 2024
    Assignee: NEC CORPORATION
    Inventor: Yusuke Takahashi
  • Patent number: 11917195
    Abstract: A method and a device for encoding/decoding an image according to the present invention may determine a reference region for intra prediction of a current block, derive an intra prediction mode of the current block on the basis of a predetermined MPM candidate group, and perform intra prediction on the current block on the basis of the reference region and the intra prediction mode.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: February 27, 2024
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
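An MPM candidate group built from neighboring blocks' modes can be sketched as below, in the spirit of HEVC/VVC-style intra coding; the padding rule, list length, and mode numbering are generic assumptions, since the abstract does not specify the derivation:

```python
def build_mpm_list(left_mode, above_mode, planar=0, dc=1, num_modes=67):
    """Sketch of a most-probable-mode (MPM) candidate group.

    left_mode / above_mode -- intra modes of neighboring blocks (or None).
    Returns a list of distinct candidate modes; the list is padded with
    angular neighbors of the first angular candidate, with wrap-around.
    """
    mpm = []
    for m in (left_mode, above_mode, planar, dc):
        if m is not None and m not in mpm:
            mpm.append(m)
    angular = [m for m in mpm if m > dc]
    if angular:
        base = angular[0]
        for delta in (-1, 1, -2, 2):
            cand = 2 + (base - 2 + delta) % (num_modes - 2)  # stay angular
            if cand not in mpm:
                mpm.append(cand)
    return mpm[:6]

demo_mpm = build_mpm_list(left_mode=30, above_mode=30)
```

The decoder would signal an index into this list when the chosen mode is a member, falling back to a longer code otherwise.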
  • Patent number: 11915493
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11899713
    Abstract: A system and method for making categorized music tracks available to end user applications. The tracks may be categorized based on computer-derived rhythm, texture and pitch (RTP) scores, which are derived from high-level acoustic attributes that are in turn based on low-level data extracted from the tracks. RTP scores are stored in a universal database common to all of the music publishers so that the same track, once RTP-scored, does not need to be re-scored by other music publishers. End user applications access an API server to import collections of tracks published by publishers, to create playlists, and to initiate music streaming. Each end user application is sponsored by a single music publisher, so that only tracks capable of being streamed by that music publisher are available to the sponsored end user application.
    Type: Grant
    Filed: January 5, 2023
    Date of Patent: February 13, 2024
    Assignee: APERTURE INVESTMENTS, LLC
    Inventors: Jacquelyn Fuzell-Casey, Skyler Fuzell-Casey
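The score-once guarantee of the universal database can be sketched with a content-keyed cache; the field names, hashing scheme, and scorer interface are illustrative assumptions:

```python
import hashlib

class RTPDatabase:
    """Universal store of rhythm/texture/pitch (RTP) scores, keyed by a
    content hash so a track already scored by one publisher is never
    re-scored by another."""

    def __init__(self):
        self._scores = {}

    @staticmethod
    def track_key(audio_bytes):
        return hashlib.sha256(audio_bytes).hexdigest()

    def get_or_score(self, audio_bytes, scorer):
        key = self.track_key(audio_bytes)
        if key not in self._scores:
            self._scores[key] = scorer(audio_bytes)  # expensive analysis runs once
        return self._scores[key]

db = RTPDatabase()
calls = []
def fake_scorer(b):
    calls.append(1)
    return {"rhythm": 7, "texture": 3, "pitch": 5}

first = db.get_or_score(b"track-bytes", fake_scorer)
second = db.get_or_score(b"track-bytes", fake_scorer)  # cache hit, no re-score
```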
  • Patent number: 11900629
    Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
    Type: Grant
    Filed: February 27, 2023
    Date of Patent: February 13, 2024
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Yue Wu, Michael Grabner, Cheng-Chieh Yang
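The sliding-window profile and bump detection described above can be sketched in 1D; the window statistic (maximum height), window size, and threshold are simplifying assumptions over points already filtered to drivable free-space:

```python
def height_profile(points, window=1.0, step=1.0, max_x=5.0):
    """1D height profile along the heading (x) direction from filtered points.

    points -- (x, height) pairs already restricted to drivable free-space and
              to a threshold height band
    A bounding window slides along x; each window contributes the maximum
    height inside it (other statistics would also work).
    """
    profile = []
    x = 0.0
    while x < max_x:
        inside = [h for px, h in points if x <= px < x + window]
        profile.append(max(inside) if inside else 0.0)
        x += step
    return profile

def detect_bumps(profile, threshold=0.05):
    """Indices of windows whose height exceeds a protuberance threshold."""
    return [i for i, h in enumerate(profile) if h > threshold]

demo_profile = height_profile(
    [(0.5, 0.01), (1.5, 0.02), (2.5, 0.12), (3.5, 0.01), (4.5, 0.0)])
demo_bumps = detect_bumps(demo_profile)
```

The detected window indices localize the protuberance along the heading direction; a 2D grid of windows would additionally recover its lateral extent and orientation.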