Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11030541
    Abstract: Described is a system and method for proactively determining and resolving events having low confidence scores and/or resolving disputed events. For example, when an event, such as a pick of an item from an inventory location within a materials handling facility occurs, the event aspects (e.g., user involved in the event, item involved in the event, action performed) are determined based on information provided from one or more input components (e.g., camera, weight sensor). If each event aspect cannot be determined with a high degree of confidence, the event information is provided to an associate for resolution.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: June 8, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Wyatt David Camp, Dilip Kumar, Amber Autrey Taylor, Jason Michael Famularo, Thomas Meilandt Mathiesen, Jared Joseph Frank
  • Patent number: 11031128
    Abstract: Augmented reality-based training and troubleshooting is described for medical devices. An electronic mobile device can be equipped with an AR application that, when executed, causes the electronic mobile device to provide augmented reality-based training on how to set up, or perform maintenance on, one or more components of a medical device. The AR application, when executed, can also cause the electronic mobile device to provide augmented reality-based troubleshooting for one or more components of a medical device.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: June 8, 2021
    Assignee: Fresenius Medical Care Holdings, Inc.
    Inventors: Kulwinder S. Plahey, Stephen A. Merchant, James Peterson, Harvey Cohen, Matthew Buraczenski, John E. Tremblay
  • Patent number: 11031044
    Abstract: A method, system and computer program product for self-learned and probabilistic-based prediction of inter-camera object movement is disclosed. The method includes building and storing a transition model defined by transition probability and transition time distribution data generated during operation of a first video camera and one or more other video cameras over time. The method also includes employing at least one balance flow algorithm on the transition probability and transition time distribution data to determine a subset of the video cameras to initiate a search for an object based on a query. The method also includes running the search for the object over the subset of the video cameras.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: June 8, 2021
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Chia Ying Lee, Aleksey Lipchin, Ying Wang, Kangyan Liu
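The subset-selection step described in the abstract above can be sketched as a probability-threshold filter over a learned transition model. This is an illustrative reading, not the patented implementation: the camera names, probabilities, and cutoff below are hypothetical, and the actual method additionally uses transition time distributions and a balance flow algorithm.

```python
def search_subset(transitions, source_camera, min_probability=0.2):
    """Return the cameras worth searching for an object last seen on
    source_camera, ranked by learned transition probability."""
    candidates = transitions.get(source_camera, {})
    kept = [(cam, p) for cam, p in candidates.items() if p >= min_probability]
    return [cam for cam, _ in sorted(kept, key=lambda cp: cp[1], reverse=True)]

# Hypothetical transition probabilities accumulated while the cameras operate.
transitions = {
    "cam_lobby": {"cam_hall": 0.60, "cam_exit": 0.30, "cam_garage": 0.05},
}
```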
  • Patent number: 11030811
    Abstract: A system for augmented reality layout includes an augmented reality layout server and an augmented reality layout device, including a processor; a non-transitory memory; an input/output; a model viewer providing two-dimensional top, three-dimensional, and augmented reality views of a design model; a model editor; a model synchronizer, which aligns and realigns the design model with a video stream of an environment; a model cache; and an object cache. Also disclosed is a method for augmented reality layout, including creating a model outline, identifying an alignment vector, creating a layout, verifying the design model, editing the design model, and realigning the design model.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: June 8, 2021
    Assignee: Orbit Technology Corporation
    Inventors: Satoshi Sakai, Kalev Kask, Anand H. Subbaraman, Alexey A. Kiskachi
  • Patent number: 11032465
    Abstract: An imaging element of an imaging apparatus has a configuration in which a plurality of pixels provided with a plurality of photoelectric conversion units for receiving light fluxes transmitting each of different pupil partial regions of an image forming optical system are arrayed. A CPU performs control to acquire a plurality of viewpoint images corresponding to different pupil partial regions from the imaging element and to generate output images using an image processing unit. The CPU and the image processing unit set a detection range of an image shift amount using a photographing condition of an input image and a conversion coefficient for converting an image shift amount into a defocus amount, generate an image shift amount distribution of the detection range on the basis of a plurality of viewpoint images, and generate an image shift difference amount distribution. Image processing in accordance with the image shift amount distribution is performed to generate refocused images as output images.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: June 8, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Akihiko Kanda, Koichi Fukuda
  • Patent number: 11030454
    Abstract: A machine learning scheme can be trained on a set of labeled training images of a subject in different poses, with different textures, and with different background environments. The label or marker data of the subject may be stored as metadata to a 3D model of the subject or rendered images of the subject. The machine learning scheme may be implemented as a supervised learning scheme that can automatically identify the labeled data to create a classification model. The classification model can classify a depicted subject in many different environments and arrangements (e.g., poses).
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: June 8, 2021
    Assignee: Snap Inc.
    Inventors: Xuehan Xiong, Zehao Xue
  • Patent number: 11030456
    Abstract: A method and system for geo-localization in sensor-deprived or sensor-limited environments includes receiving an image file for geo-localization of a location depicted by the image file in a sensor-deprived environment; applying a plurality of geo-localization modules to the image file; generating, by each of the plurality of geo-localization modules, a module output, each module output including a module geolocation for the location and a module confidence score for the module geolocation; generating an ensemble geolocation output, the ensemble geolocation output including an ensemble geolocation for the location and an ensemble confidence score for the ensemble geolocation, the ensemble geolocation output being a weighted combination of the module outputs; and displaying the ensemble geolocation output on a display.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: June 8, 2021
    Assignee: BOOZ ALLEN HAMILTON INC.
    Inventors: Courtney Crosby, Jessica Yang, Adam Letcher, Alec McLean
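The ensemble step above can be sketched as a confidence-weighted average of the module geolocations. The `(lat, lon, confidence)` tuple layout is an assumption, and a plain arithmetic mean of latitude/longitude is only reasonable when the candidate locations are close together; a production system would average on the sphere.

```python
def ensemble_geolocation(module_outputs):
    """Combine per-module (lat, lon, confidence) outputs into one
    confidence-weighted geolocation plus an ensemble confidence score."""
    total = sum(conf for _, _, conf in module_outputs)
    lat = sum(la * conf for la, _, conf in module_outputs) / total
    lon = sum(lo * conf for _, lo, conf in module_outputs) / total
    return lat, lon, total / len(module_outputs)
```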
  • Patent number: 11030757
    Abstract: A queue analyzing method is applied to an image monitoring apparatus for determining whether a rear object belongs to a queue of a front object. The queue analyzing method includes computing an angle difference and an interval between the rear object and the front object, transforming an original interval threshold into an amended interval threshold via the angle difference, comparing the interval with the amended interval threshold, and determining that the rear object and the front object belong to the same queue when the interval is smaller than the amended interval threshold.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: June 8, 2021
    Assignee: VIVOTEK INC.
    Inventor: Cheng-Chieh Liu
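The threshold-amendment logic above can be sketched in a few lines. The cosine transform is a hypothetical choice for illustration; the abstract only states that the original threshold is transformed via the angle difference.

```python
import math

def same_queue(angle_diff_deg, interval, original_threshold):
    """Shrink the interval threshold as the angle between front and rear
    objects grows, then decide queue membership by comparison."""
    amended_threshold = original_threshold * math.cos(math.radians(angle_diff_deg))
    return interval < amended_threshold
```

Intuitively, a rear object standing far off the queue's axis must be closer to the front object before it is counted as part of the same queue.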
  • Patent number: 11030355
    Abstract: User interface systems and methods for roof estimation are described. Example embodiments include a roof estimation system that provides a user interface configured to facilitate roof model generation based on one or more aerial images of a building roof. In one embodiment, roof model generation includes image registration, image lean correction, roof section pitch determination, wire frame model construction, and/or roof model review. The described user interface provides user interface controls that may be manipulated by an operator to perform at least some of the functions of roof model generation. The user interface is further configured to concurrently display roof features onto multiple images of a roof. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: June 8, 2021
    Assignee: Eagle View Technologies, Inc.
    Inventor: Chris Pershing
  • Patent number: 11030358
    Abstract: User interface systems and methods for roof estimation are described. Example embodiments include a roof estimation system that provides a user interface configured to facilitate roof model generation based on one or more aerial images of a building roof. In one embodiment, roof model generation includes image registration, image lean correction, roof section pitch determination, wire frame model construction, and/or roof model review. The described user interface provides user interface controls that may be manipulated by an operator to perform at least some of the functions of roof model generation. In one embodiment, the user interface provides user interface controls that facilitate the determination of pitch of one or more sections of a building roof. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: June 8, 2021
    Assignee: Eagle View Technologies, Inc.
    Inventor: Chris Pershing
  • Patent number: 11030555
    Abstract: An issue tracking system for tracking software development tasks is described herein. The issue tracking system may be configured to receive new issue requests from a client device and associate the new issue requests with one or more clusters of previously stored issue records. The issue tracking system may also determine similarity between issues in a first cluster of stored issue records and issues in a second cluster that is associated with a different software development project. Based on a determination that the issue similarity exceeds a threshold, the user may be prompted with one or more recommendations for a subsequent issue request or issue request content.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: June 8, 2021
    Assignees: ATLASSIAN PTY LTD., ATLASSIAN INC.
    Inventors: Noam Bar-on, Sukho Chung
  • Patent number: 11030768
    Abstract: Cameras capture time-stamped images of predefined areas. Individuals and items are tracked in the images. Occluded items detected in the images are preprocessed to remove pixels associated with occluded information, and the remaining pixels associated with the items are cropped. The preprocessed and cropped images are provided to a trained machine-learning algorithm or a trained neural network trained to classify and identify the items. Output received from the trained neural network provides item identifiers for the items that are present in the original images.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: June 8, 2021
    Assignee: NCR Corporation
    Inventors: Brent Vance Zucker, Adam Justin Lieberman
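The preprocessing described above (drop occluded pixels, then crop what remains) can be sketched with NumPy. The H x W x 3 array layout and the boolean occlusion-mask convention are assumptions for the sketch.

```python
import numpy as np

def preprocess_item(image, occlusion_mask):
    """Zero out occluded pixels, then crop to the bounding box of the
    pixels that remain visible."""
    cleaned = np.where(occlusion_mask[..., None], 0, image)  # remove occluded info
    visible = cleaned.any(axis=-1)                           # pixels still carrying data
    rows = np.flatnonzero(visible.any(axis=1))
    cols = np.flatnonzero(visible.any(axis=0))
    return cleaned[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```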
  • Patent number: 11030770
    Abstract: The present invention provides a marker that can uniquely estimate the posture within a wide-angle range while maintaining flatness, and a posture estimating method. The marker includes a two-dimensional pattern code and a posture detection pattern emitting different light depending on an observation direction of the pattern code around a first axis on a two-dimensional plane formed by the pattern code. The method using the marker has Step-1 of using a captured image of the marker to identify a color of the light or a pattern drawn by the light, Step-2 of determining, depending on the color or the pattern identified in Step-1, a posture of the marker around the first axis, and Step-3 of determining a posture of the marker depending on a positional relation between the posture around the first axis determined in Step-2 and the elements constituting the captured image.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: June 8, 2021
    Assignee: National Institute of Advanced Industrial Science and Technology
    Inventor: Hideyuki Tanaka
  • Patent number: 11030525
    Abstract: Presented are deep learning-based systems and methods for fusing sensor data, such as camera images, motion sensors (GPS/IMU), and a 3D semantic map to achieve robustness, real-time performance, and accuracy of camera localization and scene parsing useful for applications such as robotic navigation and augmented reality. In embodiments, a unified framework accomplishes this by jointly using camera poses and scene semantics in training and testing. To evaluate the presented methods and systems, embodiments use a novel dataset that is created from real scenes and comprises dense 3D semantically labeled point clouds, ground truth camera poses obtained from high-accuracy motion sensors, and pixel-level semantic labels of video camera images. As demonstrated by experimental results, the presented systems and methods are mutually beneficial for both camera poses and scene semantics.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: June 8, 2021
    Assignees: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
    Inventors: Peng Wang, Ruigang Yang, Binbin Cao, Wei Xu
  • Patent number: 11023717
    Abstract: The present application provides a method, an apparatus, a device and a system for processing commodity identification and a storage medium, where the method includes: receiving image information transmitted by a camera apparatus and a distance signal transmitted by a distance sensor corresponding to the camera apparatus; determining a start frame and an end frame for a pickup behavior of a user according to the image information and the distance signal; and determining, according to the start frame and the end frame for the pickup behavior of the user, information of a commodity taken by the user. By performing commodity identification on the start frame and the end frame for the pickup behavior of the user, and determining the information of the commodity taken by the user, commodity identification efficiency is effectively improved.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: June 1, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Peng Fu, Kuipeng Wang, Yuexiang Hu, Qiang Zhou, Yanwen Fan, Haofeng Kou, Shengyi He, Renyi Zhou, Yanghua Fang, Yingze Bao
  • Patent number: 11021172
    Abstract: Disclosed is a system for controlling a host vehicle, including one or more image sensors, one or more radar sensors, and a controller configured to recognize a target vehicle, measure a front coordinate of the target vehicle, and generate a warning according to whether the front coordinate of the target vehicle is located in a preset blind spot alert area of the host vehicle. The present disclosure may determine whether to activate a warning based on a front coordinate of a target vehicle and prevent a malfunction when activating the warning, thereby providing a driver with driving safety and driving convenience.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: June 1, 2021
    Assignee: MANDO CORPORATION
    Inventors: Tak Gen Kim, Jae Suk Kim
  • Patent number: 11023739
    Abstract: The present invention provides a technique for enhancing the added value of flow line information. The flow line combining device is provided with: an acquisition unit for acquiring first flow line information indicating a trail of positions determined by using a first method and second flow line information indicating a trail of positions determined by using a second method which is different from the first method; a determination unit for assessing overlap in the trails respectively indicated by the acquired first flow line information and the second flow line information; and a combining unit for generating third flow line information which combines the first flow line information and second flow line information if the trail overlap assessed by the determination unit meets a predetermined condition.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: June 1, 2021
    Inventor: Akiko Oshima
  • Patent number: 11023730
    Abstract: Obtain access to a three-dimensional point cloud representation of an object including poses of a scanning digital camera and corresponding video frames. Down-sample the three-dimensional point cloud representation to obtain a set of region-of-interest candidates. Filter the region-of-interest candidates to select those of the region-of-interest candidates having appearance changes, which distinguish different visual states, as selected regions of interest, based at least in part on the poses of the camera. Generate region of interest images for the selected regions of interest from corresponding ones of the video frames; and train a deep learning recognition model based on the region of interest images. The trained deep learning recognition model can be used, for example, to determine a visual state of the object for repair instructions.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: June 1, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bing Zhou, Sinem Guven Kaya, Shu Tao
  • Patent number: 11023041
    Abstract: A system for producing images for a display apparatus. The system includes one or more image sources and a processor. The processor is configured to obtain information indicative of the angular size of the field of view providable by an image renderer of the display apparatus; obtain information indicative of the gaze direction of a user; receive a sequence of images from the image source(s); and process the sequence of images to generate a sequence of processed images. When processing the sequence of images, the processor is configured to crop a given image, based on the gaze direction of the user and the angular size of the field of view, to generate a processed image. The angular size of the field of view represented by the processed image is larger than the angular size of the field of view providable by the image renderer.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: June 1, 2021
    Assignee: Varjo Technologies Oy
    Inventors: Klaus Melakari, Urho Konttori
  • Patent number: 11023039
    Abstract: A visual line detection apparatus includes a light source, a position detection unit configured to detect positions of pupil centers and positions of corneal reflection centers, a curvature radius calculation unit configured to calculate corneal curvature radii from a position of the light source and the positions of the corneal reflection centers, a viewpoint detection unit configured to detect viewpoints from the positions of the pupil centers and the corneal curvature radii, an extraction unit configured to extract pupil parameters indicating sizes of the pupils of the right and left respective eyeballs from the image of the eyeballs, and a correction unit configured to correct the viewpoint of the left eyeball and the viewpoint of the right eyeball on the basis of the pupil parameters and calculate a synthesized viewpoint.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: June 1, 2021
    Assignee: JVC KENWOOD Corporation
    Inventor: Shuji Hakoshima
  • Patent number: 11023095
    Abstract: An interactive virtual world having avatars. Scenes in the virtual world as seen by the eyes of the avatars are presented on user devices controlling the avatars. In one approach, a method includes identifying a location of an avatar in a virtual world, and a point of gaze of the avatar; adjusting, based on the point of gaze, a lens that directs available light received by the lens so that the lens can focus on objects at all distances; collecting, using the adjusted lens, image data; and generating a scene of the virtual world as seen by the avatar, the scene based on the collected image data, the location of the avatar, and the point of gaze of the avatar.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: June 1, 2021
    Assignee: Cinemoi North America, LLC
    Inventor: Daphna Davis Edwards Ziman
  • Patent number: 11023125
    Abstract: Disclosed are a mobile terminal having a display unit for outputting screen information and capable of enhancing a user's convenience related to the screen information, and a method for controlling the same. The mobile terminal includes: a touch screen configured to display screen information, and a controller configured to select a region of the touch screen based on first and second touch inputs when the first and second touch inputs applied to different points of the touch screen are maintained for a reference amount of time without being released, and configured to execute a function related to the selected region when a touch input corresponding to a preset condition is applied to the touch screen while the first and second touch inputs are maintained without being released.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: June 1, 2021
    Assignee: LG Electronics Inc.
    Inventors: Jinok Park, Seonggoo Kang
  • Patent number: 11023731
    Abstract: Disclosed is a data recognition model construction apparatus. The data recognition model construction apparatus includes a video inputter configured to receive a video, an image composition unit configured to, based on a common area included in each of a plurality of images that form at least a portion of the video, generate a composition image by overlaying at least a portion of the plurality of images, a learning data inputter configured to receive the generated composition image, a model learning unit configured to train a data recognition model using the generated composition image, and a model storage configured to store the trained data recognition model.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: June 1, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ji-man Kim, Chan-jong Park, Do-jun Yang, Hyun-woo Lee
  • Patent number: 11017218
    Abstract: The present invention provides a technology that can reduce erroneous detection and detect a suspicious person from an image with high accuracy. A suspicious person detection device according to one example embodiment of the present invention includes: an eye direction detection unit that detects an eye direction of a subject; a face direction detection unit that detects a face direction of the subject; an environment information acquisition unit that acquires environment information indicating arrangement of an object around the subject; and a determination unit that, based on the face direction, the eye direction, and the environment information, determines whether or not the subject is showing suspicious behavior.
    Type: Grant
    Filed: July 3, 2017
    Date of Patent: May 25, 2021
    Assignee: NEC CORPORATION
    Inventor: Atsushi Moriya
  • Patent number: 11017217
    Abstract: A method and system of controlling an appliance includes: receiving, from a first home appliance, a request to start video image processing for detecting a motion gesture of a user; processing a sequence of image frames captured by a camera corresponding to the first home appliance to identify a first motion gesture; selecting a second home appliance as a target home appliance for the first motion gesture in accordance with one or more target selection criteria, including first target selection criteria based on a location of the user relative to the first home appliance and second target selection criteria based on a level of match between the first motion gesture and a first control gesture corresponding to the second home appliance; and generating a control command to control the second home appliance in accordance with the first control gesture corresponding to the second home appliance.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: May 25, 2021
    Assignee: MIDEA GROUP CO., LTD.
    Inventors: Mohamad Al Jazaery, Suresh Mani
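The target-selection criteria named above (user location relative to the appliance, plus gesture-match level) suggest a simple weighted scoring scheme. The field names and equal weighting below are hypothetical; the abstract does not specify how the two criteria are combined.

```python
def select_target(candidates):
    """Pick the target appliance by combining proximity to the user with
    how well the observed gesture matches that appliance's control gesture."""
    def score(c):
        proximity = 1.0 / (1.0 + c["distance_from_user"])  # closer -> higher
        return 0.5 * proximity + 0.5 * c["gesture_match"]
    return max(candidates, key=score)
```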
  • Patent number: 11017236
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include evaluating sequence pairs representing segments of object trajectories. Assuming the objects interact, each of the sequences of the sequence pair may be mapped to a sequence cluster of an adaptive resonance theory (ART) network. A rareness value for the pair of sequence clusters may be determined based on learned joint probabilities of sequence cluster pairs. A statistical anomaly model, which may be specific to an interaction type or general to a plurality of interaction types, is used to determine an anomaly temperature, and alerts are issued based at least on the anomaly temperature. In addition, the ART network and the statistical anomaly model are updated based on the current interaction.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: May 25, 2021
    Assignee: Intellective Ai, Inc.
    Inventors: Kishor Adinath Saitwal, Dennis G. Urech, Wesley Kenneth Cobb
  • Patent number: 11017233
    Abstract: A method includes receiving an input onto a graphical user interface at a client device, capturing an image frame at the client device, the image frame comprising a depiction of an object, identifying the object within the image frame, accessing media content associated with the object within a media repository in response to identifying the object, and causing presentation of the media content within the image frame at the client device.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: May 25, 2021
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Hao Hu, Yanjia Li, Xing Mei, Kevin Dechau Tang
  • Patent number: 11017507
    Abstract: An image processing device that can accurately detect and correct areas affected by clouds, even when multiple types or layers of clouds are present in images, is disclosed. The device includes: a cloud spectrum selection unit for selecting at least one spectrum for each of pixels from spectra of one or more clouds present in an input image; an endmember extraction unit for extracting spectra of one or more endmembers other than the one or more clouds from those of the input image; and an unmixing unit for deriving fractional abundances of the respective spectra of one or more endmembers and a selected spectrum of one of the one or more clouds for the each of pixels in the input image.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: May 25, 2021
    Assignee: NEC CORPORATION
    Inventors: Madhuri Mahendra Nagare, Eiji Kaneko, Hirofumi Aoki
  • Patent number: 11019254
    Abstract: An image processing apparatus that causes an effect of emission of virtual light to be applied in a captured image includes an acquisition unit, a determination unit, and a correction unit. The acquisition unit is configured to acquire ambient light distribution information in the captured image. The determination unit is configured, based on the ambient light distribution information acquired by the acquisition unit, to determine a color characteristic of a virtual light source that is a light source of the virtual light. The correction unit is configured, based on the color characteristic of the virtual light source determined by the determination unit, to make a correction by which the effect of the virtual light is applied in a partial area of the captured image.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 25, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Shinya Hirai, Kaori Tajima, Shun Matsui
  • Patent number: 11017558
    Abstract: Described herein is a method of registering a position and an orientation of one or more cameras in a camera imaging system. The method includes receiving, from a depth imaging device, data indicative of a three dimensional image of the scene. The three dimensional image is calibrated with a reference frame relative to the scene. The reference frame includes a reference position and reference orientation. A three dimensional position of each of the cameras within the three dimensional image is determined relative to the reference frame. An orientation of each camera is determined relative to the reference frame. The position and orientation of each camera are combined to determine a camera pose relative to the reference frame.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: May 25, 2021
    Assignee: SEEING MACHINES LIMITED
    Inventors: John Noble, Timothy James Henry Edwards
  • Patent number: 11019013
    Abstract: Aspects of the subject disclosure may include, for example, receiving an image, delivery instructions, and metadata associated with the image from a first device associated with a first user. The delivery instructions indicate to deliver the image to a second device associated with a second user, and the delivery instructions comprise a plurality of security features and the metadata comprises a plurality of security preferences for delivery. Further, the plurality of security features and the plurality of security preferences are implemented on the image. In response to determination of a security risk due to the implemented security features or security preferences, the image is not delivered to the second device and a message is delivered to the first device indicating that the image was not delivered. Other embodiments are disclosed.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: May 25, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Robert J. Sayko, Chi-To Lin, Douglas M. Nortz, Russell P. Sharples
  • Patent number: 11017551
    Abstract: The present teaching relates to a method, system, medium, and implementations for identifying an object of interest. Image data acquired by a camera with respect to a scene are received. One or more users are detected, during a period of time, from the image data who are present at the scene. Three dimensional (3D) gazing rays of the one or more users during the period of time are estimated. One or more intersections of such 3D gazing rays are identified and are used to determine at least one object of interest of the one or more users.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: May 25, 2021
    Assignee: DMAI, INC.
    Inventor: Nishant Shukla
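Two 3D gaze rays estimated from noisy image data will almost never meet exactly, so the "intersection" step above is commonly approximated as the midpoint of the rays' closest points. The sketch below uses that standard construction as one plausible reading of the abstract, not the patented method.

```python
import numpy as np

def gaze_intersection(p1, d1, p2, d2):
    """Approximate the intersection of rays p1 + t*d1 and p2 + s*d2 as
    the midpoint of their closest points."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    nn = n @ n  # zero when the rays are parallel (not handled here)
    t = np.cross(p2 - p1, d2) @ n / nn
    s = np.cross(p2 - p1, d1) @ n / nn
    return (p1 + t * d1 + p2 + s * d2) / 2.0
```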
  • Patent number: 11019322
    Abstract: According to one embodiment, an estimation system includes a monocular imaging unit and processing circuitry. The monocular imaging unit acquires, at a time of capturing, an image and first data relating to an actual distance to an object captured in the image. The processing circuitry estimates a position of the imaging unit by using the image and the first data.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: May 25, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Tatsuma Sakurai, Takuma Yamamoto, Nao Mishima
  • Patent number: 11013477
    Abstract: A system for assisting an x-ray operator with properly positioning a patient's body part to be x-rayed. The system uses a range sensor and/or a camera supported on an x-ray emitter to collect data about the patient's body part to be x-rayed. The data is transmitted to a processor and compared to a selected reference envelope or image. The processor provides an x-ray operator with a positive or negative notification based on its analysis of the collected data and the selected reference envelope or image. A negative notification indicates that the patient's body part needs to be adjusted. A positive notification indicates that the patient's body part is ready to be x-rayed.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: May 25, 2021
    Inventor: Jonathan Ross Vanhooser
  • Patent number: 11017513
    Abstract: Active sensor fusion systems and methods may include a plurality of sensors, a plurality of detection algorithms, and an active sensor fusion algorithm. Based on detection hypotheses received from the plurality of detection algorithms, the active sensor fusion algorithm may instruct or direct modifications to one or more of the plurality of sensors or the plurality of detection algorithms. In this manner, operations of the plurality of sensors or processing of the plurality of detection algorithms may be refined or adjusted to provide improved object detection with greater accuracy, speed, and reliability.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: May 25, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Pradeep Krishna Yarlagadda, Jean-Guillaume Dominique Durand
  • Patent number: 11009946
    Abstract: The embodiments of the disclosure provide a pupil center positioning apparatus and method, and a virtual reality device. The pupil center positioning apparatus may comprise: a matrix operation circuit configured to obtain a set of linear normal equations according to received N boundary point coordinates of a pupil area, N being a positive integer greater than 5; a parameter operation circuit configured to obtain parameters of an elliptic equation employing Cramer's rule according to the set of linear normal equations; and a coordinate operation circuit configured to obtain the center coordinate of the pupil area according to the parameters of the elliptic equation.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: May 18, 2021
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Gaoming Sun
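The ellipse-fitting step above can be sketched as follows. With the leading coefficient normalized to 1, the conic x^2 + Bxy + Cy^2 + Dx + Ey + F = 0 gives one linear equation per boundary point, and the least-squares normal equations are 5x5. The patent solves them with Cramer's rule; this sketch uses numpy's generic solver instead. The center formula comes from setting the conic's gradient to zero.

```python
import numpy as np

def pupil_center(points):
    """Fit x^2 + Bxy + Cy^2 + Dx + Ey + F = 0 to >= 6 boundary points
    via least-squares normal equations, then return the ellipse center."""
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * y, y * y, x, y, np.ones_like(x)])
    rhs = -x * x
    # Normal equations (M^T M) p = M^T rhs; a generic solver stands in
    # for the patent's explicit Cramer's-rule expansion.
    B, C, D, E, F = np.linalg.solve(M.T @ M, M.T @ rhs)
    A = 1.0
    # Center: gradient of the conic vanishes, i.e.
    # 2A*xc + B*yc + D = 0 and B*xc + 2C*yc + E = 0.
    det = 4 * A * C - B * B
    xc = (B * E - 2 * C * D) / det
    yc = (B * D - 2 * A * E) / det
    return xc, yc
```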
  • Patent number: 11010918
    Abstract: An apparatus for angle-based localization of a position on a surface of an object includes an orientation sensor and a programmable device with a processor and a memory. The orientation sensor is configured to be arranged in a known relationship relative to a position, to be identified, on the surface of the object at at least one measurement time, and to capture angle information items in respect of its current orientation at the measurement time. The memory contains instructions and at least one assignment prescription for the object, in which angle information items are assigned associated positions on the surface of the object. The instructions cause the programmable device to receive the angle information items captured at the measurement time by the orientation sensor and to establish the position to be identified by assigning the captured angle information items to a position on the basis of the assignment prescription.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: May 18, 2021
    Assignee: SIEMENS ENERGY GLOBAL GMBH & CO. KG
    Inventors: Andrey Mashkin, Florian Röhr, Guido Schmidt
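The assignment prescription above amounts to a lookup table from angle readings to surface positions. A minimal nearest-neighbor sketch (illustrative only; the patent does not publish its exact assignment rule):

```python
import numpy as np

def locate(angles, table_angles, table_positions):
    """Map an orientation reading (e.g. pitch/roll in degrees) to the
    stored surface position whose table entry is closest in angle space."""
    a = np.asarray(angles, float)
    ta = np.asarray(table_angles, float)
    i = int(np.argmin(np.linalg.norm(ta - a, axis=1)))
    return table_positions[i]
```

For example, a table {(0, 0) -> "base", (90, 0) -> "side"} would assign a reading of (85, 5) to "side".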
  • Patent number: 11010621
    Abstract: An apparatus and method automatically detects and positions structure faces. After receiving data points describing a geographical area, neighborhoods are defined based on the data points and classified as linear, planar, or volumetric. Neighborhoods are merged into at least one cluster based on local surface normals. At least one bounding frame is fit to the at least one cluster and modified based on a field of interest.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: May 18, 2021
    Assignee: HERE Global B.V.
    Inventors: David Doria, Engin Anil
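The linear/planar/volumetric classification above is commonly done from the eigenvalues of the neighborhood's covariance matrix; a sketch under that assumption (the eigenvalue-ratio threshold is illustrative, not from the patent):

```python
import numpy as np

def classify_neighborhood(points, ratio=10.0):
    """Classify a 3D point neighborhood as 'linear', 'planar', or
    'volumetric' from the sorted covariance eigenvalues l1 >= l2 >= l3:
    one dominant eigenvalue -> linear, two -> planar, else volumetric.
    (The local surface normal of a planar patch is the eigenvector of
    the smallest eigenvalue.)"""
    pts = np.asarray(points, float)
    cov = np.cov(pts.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    eps = 1e-12
    if lam[0] > ratio * (lam[1] + eps):
        return "linear"
    if lam[1] > ratio * (lam[2] + eps):
        return "planar"
    return "volumetric"
```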
  • Patent number: 11010605
    Abstract: Disclosed is an object-detection system configured to utilize multiple object-detection models that generate respective sets of object-detection conclusions to detect objects of interest within images of scenes. The object-detection system is configured to implement a series of functions to reconcile any discrepancies that exist in its multiple sets of object-detection conclusions in order to generate one set of conclusions for each perceived object of interest within a given image.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: May 18, 2021
    Assignee: Rapiscan Laboratories, Inc.
    Inventors: Claire Nord, Simanta Gautam, Bruno Brasil Ferrari Faviero, Steven Posun Yang, Jay Harshadbhai Patel, Ian Cinnamon
  • Patent number: 11010601
    Abstract: An intelligent assistant device is configured to communicate non-verbal cues. Image data indicating presence of a human is received from one or more cameras of the device. In response, one or more components of the device are actuated to non-verbally communicate the presence of the human. Data indicating context information of the human is received from one or more of the sensors. Using at least this data, one or more contexts of the human are determined, and one or more components of the device are actuated to non-verbally communicate the one or more contexts of the human.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: May 18, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Steven Nabil Bathiche, Vivek Pradeep, Alexander Norman Bennett, Daniel Gordon O'Neil, Anthony Christian Reed, Krzysztof Jan Luchowiec, Tsitsi Isabel Kolawole
  • Patent number: 11012629
    Abstract: An image capturing apparatus includes: an image capturing unit that includes an imaging device in which pixels are arranged in a two-dimensional array and that outputs an image signal obtained by image capturing of a photographic subject by the imaging device; an image processing unit that generates a captured image based on the image signal; and a CPU that performs an exposure control process to determine whether an image of a moving subject is included in the captured image, that controls the amount of exposure of the imaging device by performing a first process in a case where it is determined that an image of a moving subject is included in the captured image, and that controls the amount of exposure of the imaging device by performing a second process in a case where it is determined that no moving-subject image is included in the captured image.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: May 18, 2021
    Assignee: FUJIFILM Corporation
    Inventors: Tomonori Masuda, Mototada Otsuru
  • Patent number: 11010646
    Abstract: Embodiments relate to tracking and determining a location of an object in an environment surrounding a user. A system includes one or more imaging devices and an object tracking unit. The system identifies an object in a search region, determines a tracking region that is smaller than the search region corresponding to the object, and scans the tracking region to determine a location associated with the object. The system may generate a ranking of objects, determine locations associated with the objects, and generate a model of the search region based on the locations associated with the objects.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: May 18, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Michael Hall, Byron Taylor
  • Patent number: 11010590
    Abstract: Provided is an image processing device including: a processor comprising hardware, the processor configured to: smooth a brightness value of a cell image including a plurality of cell clusters each including a plurality of cells so as to generate a smoothed image in which a gap existing between the cells in each of the cell clusters is filled in; binarize the smoothed image into a background region and a non-background region of each cell cluster; and segment the non-background region of the binarized smoothed image into a region for each of the cell clusters.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: May 18, 2021
    Assignee: OLYMPUS CORPORATION
    Inventor: Hideya Aragaki
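The smooth-binarize-segment pipeline above can be sketched with a box blur, a global threshold, and 4-connected component labeling (a minimal numpy stand-in; the patent does not specify these exact operators):

```python
import numpy as np
from collections import deque

def segment_clusters(img, blur=1, thresh=None):
    """Smooth a grayscale cell image, binarize it against the background,
    and label each connected non-background region as one cluster."""
    img = np.asarray(img, float)
    # Box blur: fills small gaps between cells within a cluster.
    pad = np.pad(img, blur, mode="edge")
    k = 2 * blur + 1
    sm = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)) / (k * k)
    if thresh is None:
        thresh = sm.mean()          # crude global threshold
    fg = sm > thresh
    # 4-connected flood fill: assign one label per cluster.
    labels = np.zeros(img.shape, int)
    count = 0
    for y, x in zip(*np.nonzero(fg)):
        if labels[y, x]:
            continue
        count += 1
        labels[y, x] = count
        q = deque([(y, x)])
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                        and fg[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count
```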
  • Patent number: 11010613
    Abstract: The present disclosure relates to systems and methods for target identification in a video. The method may include obtaining a video including a plurality of frames of video data. The method may further include sampling one or more frames from the plurality of frames, each pair of consecutive sampled frames being spaced apart by at least one frame of the plurality of frames of the video data. The method may further include identifying, from the one or more sampled frames, a reference frame of video data using an identification model. The method may still further include determining a start frame and an end frame including the target object.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: May 18, 2021
    Assignee: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
    Inventor: Guangda Yu
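The sample-identify-expand idea above can be sketched on a per-frame detector score. The `scores`, `stride`, and `thresh` names are illustrative assumptions; the patent's identification model is a learned one.

```python
def locate_target_span(scores, stride=2, thresh=0.5):
    """Sample every `stride`-th frame (so consecutive samples are spaced
    by at least one frame), take the best-scoring sample as the reference
    frame, and expand to the contiguous run of frames whose detector
    score clears `thresh`. Returns (start, reference, end) or None."""
    sampled = list(range(0, len(scores), max(2, stride)))
    ref = max(sampled, key=lambda i: scores[i])
    if scores[ref] < thresh:
        return None                 # target not found in sampled frames
    start = end = ref
    while start > 0 and scores[start - 1] >= thresh:
        start -= 1
    while end < len(scores) - 1 and scores[end + 1] >= thresh:
        end += 1
    return start, ref, end
```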
  • Patent number: 11004714
    Abstract: A load port includes a door and a mapping sensor. The door moves upward and downward between a closing position for closing an opening connected into a container with multiple stages for placing a plurality of substrates and an opening position for opening the opening. The mapping sensor is disposed integrally with the door and detects a state of the substrates. The mapping sensor includes a light emitting portion and an imaging portion. The light emitting portion emits an imaging light toward the substrates. The imaging portion captures an image of a reflected light of the imaging light.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: May 11, 2021
    Assignee: TDK CORPORATION
    Inventors: Tadamasa Iwamoto, Takuya Kudo
  • Patent number: 11006108
    Abstract: An image processing apparatus is provided. The image processing apparatus includes a processor configured to, in response to an image including a plurality of frames being input, change a predetermined parameter to a parameter corresponding to a compression rate of each of the plurality of frames for each frame, and process the input image by using the parameter changed for each frame, and an output interface configured to output the processed image.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: May 11, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ki-heum Cho, Yong-sup Park, Jae-yeon Park, Chang-han Kim, Il-jun Ahn, Hee-seok Oh, Tammy Lee, Min-su Cheon
  • Patent number: 11003925
    Abstract: An event prediction system includes an accumulation unit and a generator. The accumulation unit accumulates a plurality of pieces of learning data each including history information representing a situation of a mobile object upon occurrence of an event associated with driving of the mobile object. The generator generates a prediction model for prediction of relative coordinates of an occurrence place of the event relative to the mobile object by using the plurality of pieces of learning data. Each of the plurality of pieces of learning data further includes label information representing the relative coordinates of the occurrence place of the event relative to the mobile object.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: May 11, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Shohei Hayashi, Toshiya Mori, Tadashi Shibata, Nobuyuki Nakano, Masanaga Tsuji
  • Patent number: 11003928
    Abstract: A system uses video of a vehicle or other object to detect and classify an active turn signal on the object. The system generates an image stack by scaling and shifting a set of digital image frames from the video to a fixed scale, yielding a sequence of images over a time period. The system processes the image stack with a classifier to determine a pose of the object, as well as the state and class of each visible turn signal on the object. When the system determines that a turn signal is active, the system will predict an action that the object will take based on the class of that signal.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: May 11, 2021
    Assignee: Argo AI, LLC
    Inventors: Rotem Littman, Gilad Saban, Noam Presman, Dana Berman, Asaf Kagan
  • Patent number: 11003243
    Abstract: The disclosure discloses a calibration method and device, a storage medium and a processor. The method includes that: a calibration point is controlled to move on a predetermined motion track; a gaze track of a target object is acquired in a movement process of the calibration point; and a specified parameter is calibrated according to the predetermined motion track and the gaze track, where the specified parameter is used to predict a gaze point of the target object.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: May 11, 2021
    Assignee: BEIJING 7INVENSUN TECHNOLOGY CO., LTD.
    Inventors: Dongchun Ren, Wei Liu, Zhijiang Lou, Xiaohu Gong, Jian Wang, Fengmei Nie, Meng Yang
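The calibration step above fits a predictive mapping from raw gaze measurements to the known positions of the moving calibration point. A least-squares affine sketch (the affine parameter form is an assumption; the abstract leaves the parameter unspecified):

```python
import numpy as np

def calibrate_gaze(raw, target):
    """Least-squares affine map from raw gaze measurements to calibration-
    point positions: target ~= raw @ A + b. `raw` and `target` are (N, 2)
    arrays sampled while the calibration point moves along its track."""
    raw = np.asarray(raw, float)
    target = np.asarray(target, float)
    X = np.column_stack([raw, np.ones(len(raw))])   # homogeneous coords
    # Solve X @ P ~= target for the 3x2 parameter matrix P = [A; b].
    P, *_ = np.linalg.lstsq(X, target, rcond=None)
    return P

def predict_gaze_point(P, raw_sample):
    """Apply the calibrated parameters to a new gaze measurement."""
    x, y = raw_sample
    return np.array([x, y, 1.0]) @ P
```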
  • Patent number: 11003956
    Abstract: A method for training, using a plurality of training images with corresponding six-degrees-of-freedom camera poses for a given environment and a plurality of reference images, each reference image depicting an object-of-interest in the given environment and having a corresponding two-dimensional to three-dimensional correspondence for the given environment, a neural network to provide visual localization by: for each training image, detecting and segmenting objects-of-interest in the training image; generating a set of two-dimensional to two-dimensional matches between the detected and segmented objects-of-interest and corresponding reference images; generating a set of two-dimensional to three-dimensional matches from the generated set of two-dimensional to two-dimensional matches and the two-dimensional to three-dimensional correspondences corresponding to the reference images; and determining localization, for each training image, by solving a perspective-n-point problem using the generated set of two-dimensional to three-dimensional matches.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 11, 2021
    Inventors: Philippe Weinzaepfel, Gabriela Csurka, Yohann Cabon, Martin Humenberger
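The final perspective-n-point step above can be approximated linearly by the direct linear transform (DLT), which estimates a 3x4 projection matrix from 2D-3D matches. This is a classical stand-in, not the patent's exact solver:

```python
import numpy as np

def dlt_projection(points3d, points2d):
    """Estimate a 3x4 projection matrix from >= 6 non-coplanar 2D-3D
    matches via the direct linear transform: stack two linear equations
    per match and take the null vector of the resulting system."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Least-squares null vector: right singular vector of the smallest
    # singular value (the projection matrix is defined up to scale).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, point3d):
    """Project a 3D point with matrix P and dehomogenize."""
    h = P @ np.append(point3d, 1.0)
    return h[:2] / h[2]
```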