Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11367219
    Abstract: A video analysis apparatus is connected to an image capturing apparatus including a plurality of cameras, in which the video analysis apparatus, for a plurality of persons in images captured by the cameras, tracks at least one of the plurality of persons, detects a plurality of parts of the tracked person, and, on the basis of information defining scores used to determine a best shot of each of the parts, computes a score for each part in each frame of the videos from the security cameras, compares, for each of the parts, the scores computed for each of the frames to determine a best shot of each of the parts, stores, for each of the parts, a feature value in association with the part, and compares the feature values of each of the parts of the plurality of persons to determine the identity of the persons. (An illustrative sketch of the best-shot selection follows this entry.)
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: June 21, 2022
    Assignee: HITACHI, LTD.
    Inventors: Naoto Akira, Atsushi Hiroike, Tsutomu Imada, Hiromu Nakamae
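    A minimal sketch of the per-part best-shot bookkeeping described in the abstract above. The part list, the quality_score and extract_feature placeholders, and the cosine-similarity identity check are assumptions for illustration; the patent only specifies that scores are computed per part per frame, the best shot per part is kept with its feature value, and features are compared across persons.
```python
from dataclasses import dataclass, field

# Hypothetical part list; the patent only states that scores are computed per part.
PARTS = ["face", "torso", "bag"]

@dataclass
class BestShot:
    score: float = float("-inf")
    feature: list = field(default_factory=list)

def quality_score(part_crop: dict) -> float:
    """Placeholder for the score defined by the scoring information
    (e.g. sharpness, size, frontal pose)."""
    return float(part_crop.get("sharpness", 0.0))

def extract_feature(part_crop: dict) -> list:
    """Placeholder feature extractor for a detected part."""
    return part_crop.get("embedding", [])

def update_best_shots(best: dict, frame_parts: dict) -> None:
    """Compare this frame's per-part scores with the best seen so far
    and keep the higher-scoring shot together with its feature value."""
    for part in PARTS:
        crop = frame_parts.get(part)
        if crop is None:
            continue
        s = quality_score(crop)
        if s > best.setdefault(part, BestShot()).score:
            best[part] = BestShot(score=s, feature=extract_feature(crop))

def same_person(best_a: dict, best_b: dict, threshold: float = 0.8) -> bool:
    """Identity check: compare stored per-part features between two persons."""
    def cos(u, v):
        num = sum(x * y for x, y in zip(u, v))
        den = (sum(x * x for x in u) ** 0.5) * (sum(y * y for y in v) ** 0.5)
        return num / den if den else 0.0
    sims = [cos(best_a[p].feature, best_b[p].feature)
            for p in PARTS if p in best_a and p in best_b]
    return bool(sims) and sum(sims) / len(sims) >= threshold
```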
  • Patent number: 11367355
    Abstract: In an approach for contextual event awareness, a processor receives contextual information including motion and sound detection. A processor analyzes the contextual information to determine a proximity sequencing including context of an event. A processor applies machine learning to the proximity sequencing to form a risk assessment based on the context of the event impacting a user. A processor determines that the risk assessment exceeds a predetermined threshold. A processor provides a notification of a risk to the user.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: June 21, 2022
    Assignee: International Business Machines Corporation
    Inventors: Shikhar Kwatra, Jeremy R. Fox, Zachary James Goodman, H. Ramsey Bissex, Ernest Bernard Williams, Jr.
  • Patent number: 11367267
    Abstract: Systems and methods are disclosed for locating a retroreflective object in a digital image and/or identifying a feature of the retroreflective object in the digital image. In certain environmental conditions, e.g. on a sunny day, or when the retroreflective material is damaged or soiled, it may be more challenging to locate the retroreflective object in the digital image and/or to identify a feature of the object in the digital image. The systems and methods disclosed herein may be particularly suited for object location and/or feature identification in situations in which there is a strong source of ambient light (e.g. on a sunny day) and/or when the retroreflective material on the object is damaged or soiled.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: June 21, 2022
    Assignee: GENETEC INC.
    Inventors: Louis-Antoine Blais-Morin, Pablo Agustin Cassani
  • Patent number: 11367291
    Abstract: A traffic signal display estimation system recognizes, based on the position information of a vehicle and a traffic signal information, a traffic signal included in a camera image, identifies a traffic signal display for each recognized traffic signal, and calculates, for each traffic signal, a first evaluation value indicating the certainty of the identified traffic signal display. The system integrates, based on a traffic-signal-to-traffic-signal relational information, a forward traffic signal that is ahead of the travelling direction and that the vehicle should follow and a traffic signal correlated with the forward traffic signal in terms of the traffic signal display, among a plurality of recognized traffic signals. When there is an inconsistency in traffic signal displays identified between a plurality of integrated traffic signals, the system determines a first estimated traffic signal display of the forward traffic signal, based on the first evaluation value for each traffic signal.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: June 21, 2022
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Yusuke Hayashi, Taichi Kawanai, Kentaro Ichikawa, Taisuke Sugaiwa
  • Patent number: 11367176
    Abstract: A commodity management device includes a camera interface circuit connectable to a camera to capture an image of one of the commodities arranged in a row along a first direction from a back plate to a front of the shelf. The device includes a sensor interface circuit connectable to a sensor measuring a distance to the commodities in the row. A processor is configured to detect a change in the distance measured by the sensor, identify the commodities based on the image, then acquire a thickness of each commodity in the row, and calculate the number of commodities removed from the shelf based on the distance change and the thickness of the commodities. (A sketch of this calculation follows this entry.)
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: June 21, 2022
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventors: Takayuki Sawada, Tetsuya Nobuoka
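    A sketch of the removal count implied by the abstract above, assuming items are taken from the front of the row so the measured distance grows by one thickness per removed commodity; the function name and rounding rule are illustrative, not taken from the patent.
```python
def removed_count(distance_before_mm: float,
                  distance_after_mm: float,
                  thickness_mm: float) -> int:
    """Number of commodities removed from the front of the row.

    When items are taken from the front, the sensor-to-front-item distance
    grows by one thickness per item; rounding absorbs measurement noise.
    """
    change = distance_after_mm - distance_before_mm
    if change <= 0 or thickness_mm <= 0:
        return 0
    return round(change / thickness_mm)

# Example: distance grew from 120 mm to 210 mm, items are 30 mm thick -> 3 removed.
assert removed_count(120.0, 210.0, 30.0) == 3
```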
  • Patent number: 11367266
    Abstract: Provided is an image recognition system that can easily perform image recognition on a side face of an item. An image recognition system according to one example embodiment of the present invention includes: a placement stage used for placing an item below an image capture device provided so as to capture images in a downward direction; a support structure configured to support the item at a predetermined angle relative to a top face of the placement stage; and an image recognition apparatus that identifies the item by performing image recognition on an image of the item acquired by the image capture device.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 21, 2022
    Assignee: NEC CORPORATION
    Inventors: Ryoma Iio, Kota Iwamoto, Hideo Yokoi, Ryo Takata, Kazuya Koyama
  • Patent number: 11367309
    Abstract: Provided is an imaging system capable of stereo photographing with both visible and infrared images and of improving color reproducibility in visible-light photographing. The imaging system includes two imaging sensors 1, and two DBPFs 5 that have transmittance characteristics in a visible light band and a second wavelength band, are respectively provided so as to correspond to the two imaging sensors, and serve as optical filters. The imaging system has: at least four kinds of filters, which have mutually different spectral transmission characteristics corresponding to wavelengths in the visible light band and whose transmissions in the second wavelength band approximate each other; and two color filters provided so as to respectively correspond to the two imaging sensors. The imaging system measures a distance to a target based on two visible or infrared image signals.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: June 21, 2022
    Assignee: MAXELL, LTD.
    Inventors: Osamu Kawamae, Chiyo Ohno, Hiroyasu Otsubo, Osamu Ishizaki
  • Patent number: 11365974
    Abstract: A navigation system uses markers that are identifiable in images of an environment being navigated to determine the location of a portable device in the environment. The portable device takes images of the environment, and those images are analysed to identify markers in the images and the pose of the portable device based on the image of the marker. The identified marker and the determined pose of the portable device are then used to determine the location and orientation of the portable device in the environment being navigated.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: June 21, 2022
    Assignees: Arm Limited, Apicai Limited
    Inventors: Roberto Lopez Mendez, Daren Croxford
  • Patent number: 11367172
    Abstract: There is described a method of labeling wood products moved across a handling area of a production line. The method generally involves: using a camera, generating an image representing a wood product moving across the handling area at a first moment in time; using a controller, determining coordinates of the wood product at the first moment in time based on the image, and anticipating coordinates of the wood product at a second moment in time assuming an incremental movement of the wood product at a given speed from the determined coordinates at the first moment in time; and, using a light projector, projecting, at the second moment in time, a wood product label at the anticipated coordinates of the wood product. (A sketch of the anticipation step follows this entry.)
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: June 21, 2022
    Assignee: TIMBER TECHNOLOGY INC.
    Inventors: Marc Voyer, Marc-Antoine Paquet
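    A sketch of the coordinate-anticipation step from the abstract above, assuming constant conveyor speed along a known direction between the two moments; names and the 2D coordinate representation are illustrative.
```python
from typing import Tuple

def anticipate_coordinates(xy_at_t1: Tuple[float, float],
                           line_speed_m_per_s: float,
                           dt_s: float,
                           direction: Tuple[float, float] = (1.0, 0.0)
                           ) -> Tuple[float, float]:
    """Predict where the wood product will be dt_s after the image was taken,
    assuming it moves at the line speed along the conveyor direction."""
    x, y = xy_at_t1
    dx, dy = direction
    dist = line_speed_m_per_s * dt_s
    return (x + dx * dist, y + dy * dist)

# Board seen at (2.0 m, 0.5 m); conveyor runs at 1.5 m/s; label projected 0.2 s later.
label_xy = anticipate_coordinates((2.0, 0.5), 1.5, 0.2)
print(label_xy)  # approximately (2.3, 0.5) -> send to the light projector
```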
  • Patent number: 11367174
    Abstract: A transfer system includes a first imaging and detection system that detects an object by using a captured first image, an industrial machine that operates by using information detected by the first imaging and detection system, a second imaging and detection system disposed upstream of the first imaging and detection system to detect an object by using a captured second image, and an operation terminal that outputs, to the second imaging and detection system, an instruction instructing that detection be performed, and to change imaging and detection system data used in the second imaging and detection system. The first imaging and detection system includes a receiving unit to receive the imaging and detection system data from the second imaging and detection system. The second imaging and detection system includes a transmitting unit to transmit the imaging and detection system data to the first imaging and detection system.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: June 21, 2022
    Assignee: FANUC CORPORATION
    Inventor: Masaru Oda
  • Patent number: 11361420
    Abstract: The present disclosure provides an apparatus and method for evaluating the quality of a three-dimensional (3D) point cloud. The apparatus comprises an image segmenter to generate a segmented two-dimensional (2D) image for each of a plurality of images; a 2D mask generator to generate a 2D mask for each of the plurality of images from the 3D point cloud; a comparator to compare the segmented 2D image with the 2D mask to obtain a comparison result for each image; and an evaluator to evaluate the quality of the 3D point cloud based on aggregated comparison results for the plurality of images. (A sketch of the comparison step follows this entry.)
    Type: Grant
    Filed: September 26, 2020
    Date of Patent: June 14, 2022
    Assignee: INTEL CORPORATION
    Inventors: Jiansheng Chen, Xiaofeng Tong, Wenlong Li, Chen Ling, Amir Atzmoni
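    A sketch of the per-image comparison and aggregation described above, assuming binary masks and using intersection-over-union as one plausible comparison; the patent does not specify IoU or these function names.
```python
import numpy as np

def compare_masks(segmented: np.ndarray, projected: np.ndarray) -> float:
    """Per-image comparison result: intersection-over-union of the
    segmentation mask and the mask projected from the 3D point cloud."""
    seg = segmented.astype(bool)
    proj = projected.astype(bool)
    union = np.logical_or(seg, proj).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(seg, proj).sum() / union

def evaluate_point_cloud(segmented_masks, projected_masks) -> float:
    """Aggregate the per-image comparison results into one quality score."""
    scores = [compare_masks(s, p) for s, p in zip(segmented_masks, projected_masks)]
    return float(np.mean(scores)) if scores else 0.0
```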
  • Patent number: 11361543
    Abstract: An object detection system may include an imager configured to generate image data indicative of an environment in which a machine is present, and a sensor configured to generate sensor data indicative of the environment in which the machine is present. The object detection system may further include an object detection controller including one or more object detection processors configured to receive an image signal indicative of the image data, identify an object associated with the image data, and determine a first location of the object relative to the position of the imager. The one or more object detection processors may also be configured to receive a sensor signal indicative of the sensor data, and determine, based at least in part on the sensor signal, the presence or absence of the object at the first location.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: June 14, 2022
    Assignee: Caterpillar Inc.
    Inventors: Jacob Charles Maley, Pranay Kumar Reddy Kontham, Robert Scott McFarland, Vamsi Krishna Pannala, Peter Joseph Petrany
  • Patent number: 11361536
    Abstract: A system and method of identifying and tracking objects comprises registering an identity of a person who visits an area designated for holding objects, capturing an image of the area designated for holding objects, submitting a version of the image to a deep neural network trained to detect and recognize objects in images like those objects held in the designated area, detecting an object in the version of the image, associating the registered identity of the person with the detected object, retraining the deep neural network using the version of the image if the deep neural network is unable to recognize the detected object, and tracking a location of the detected object while the detected object is in the area designated for holding objects.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: June 14, 2022
    Assignee: Position Imaging, Inc.
    Inventors: Narasimhachary Nallana Chakravarty, Guohua Min, Edward L. Hill, Brett Bilbrey
  • Patent number: 11360928
    Abstract: A processor is configured with a learning framework to characterize the residuals of attribute information and its coherence with network information for improved anomaly detection.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: June 14, 2022
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Jundong Li, Harsh Dani, Xia Hu, Huan Liu
  • Patent number: 11361202
    Abstract: Predictive analytics techniques are used to produce leading indicators of economic activity based on factors determined from a range of available data sources, such as public and/or private transportation data. A fee-based subscription system may be provided for the sharing of leading indicators to users. A consistent, semantic metadata structure is described as well as a hypothesis generating and testing system capable of generating predictive analytics models in a non-supervised or partially supervised mode.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: June 14, 2022
    Inventor: Brian McCarson
  • Patent number: 11361470
    Abstract: A method, apparatus and system for visual localization includes extracting appearance features of an image, extracting semantic features of the image, fusing the extracted appearance features and semantic features, pooling and projecting the fused features into a semantic embedding space having been trained using fused appearance and semantic features of images having known locations, computing a similarity measure between the projected fused features and embedded, fused appearance and semantic features of images, and predicting a location of the image associated with the projected, fused features. An image can include at least one image from a plurality of modalities such as a Light Detection and Ranging image, a Radio Detection and Ranging image, or a 3D Computer Aided Design modeling image, and an image from a different sensor, such as an RGB image sensor, captured from a same geo-location, which is used to determine the semantic features of the multi-modal image.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: June 14, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Zachary Seymour, Karan Sikka, Supun Samarasekera, Rakesh Kumar, Niluthpol Mithun
  • Patent number: 11361550
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods provide systems and methods for automatically creating a caption comprising a sequence of words in connection with digital content.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: June 14, 2022
    Assignee: YAHOO ASSETS LLC
    Inventors: Simao Herdade, Armin Kappeler, Kofi Boakye, Joao Vitor Baldini Soares
  • Patent number: 11361425
    Abstract: The disclosure discloses a method for dynamically monitoring the content of a rare earth element (REE) component based on a time-series feature. The method includes: using an image information acquisition device to periodically acquire a time-series image of a rare earth (RE) solution to be monitored; extracting a time-series feature of the time-series image in a mixed color space; and determining whether a time-series feature value of the time-series image is in an expected interval of the mixed color space. If the determination result indicates no, a histogram intersection distance is calculated between the time-series image and each sample image in a sample data set in the HSV color space, and the content of the REE component corresponding to the time-series image is determined according to the component content corresponding to the sample image with the larger histogram intersection distance; otherwise, the method directly waits for the acquisition of a time-series image at the next sampling time point.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: June 14, 2022
    Assignee: EAST CHINA JIAOTONG UNIVERSITY
    Inventors: Rongxiu Lu, Mingming Chen, Hui Yang, Jianyong Zhu, Gang Yang
  • Patent number: 11361592
    Abstract: A monitoring system, comprising: at least one device, comprising at least one processing circuitry configured for: receiving from at least one image sensor connected to the at least one processing circuitry, at least one digital image captured by the at least one image sensor; identifying a nature of at least one relationship between at least one first body part of at least one first person and at least one second body part of at least one second person, where the at least one first person and the at least one second person are identified in the at least one digital image; identifying at least one offending relationship according to the nature of the at least one relationship and a set of relationship rules; and outputting an indication of the at least one offending relationship.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: June 14, 2022
    Assignee: NEC Corporation Of America
    Inventors: Tsvi Lev, Yaacov Hoch
  • Patent number: 11361505
    Abstract: Techniques are provided for selecting one or more three-dimensional models representing one or more objects. For example, an input image including one or more objects can be obtained. From the input image, a location field can be generated for each object of the one or more objects. A location field descriptor can be determined for each object of the one or more objects, and a location field descriptor for an object of the one or more objects can be compared to a plurality of location field descriptors for a plurality of three-dimensional models. A three-dimensional model can be selected from the plurality of three-dimensional models for each object of the one or more objects. A three-dimensional model can be selected for the object based on comparing a location field descriptor for the object to the plurality of location field descriptors for the plurality of three-dimensional models.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: June 14, 2022
    Assignee: QUALCOMM Technologies, Inc.
    Inventors: Alexander Grabner, Peter Michael Roth, Vincent Lepetit
  • Patent number: 11361559
    Abstract: A method for cargo management in a motor vehicle includes identifying the motor vehicle. The method gathers dimensional information of at least one object and gathers internal dimensions of a volume within the motor vehicle. The dimensional information of the at least one object is compared to the internal dimensions of the volume, and feedback is provided to a user. The feedback is one of: yes, the at least one object will fit; or no, the at least one object will not fit. (A sketch of this comparison follows this entry.)
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: June 14, 2022
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Robert C Jablonski, Ki Hyun Ahn, Joseph F Szczerba, Spencer W Chamberlain
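    A sketch of the fit check described above, assuming a simple axis-aligned comparison with dimensions sorted largest to smallest; real cargo loading (rotation, packing around obstructions) is ignored, and all names are illustrative.
```python
def will_it_fit(object_dims_cm, cargo_dims_cm) -> str:
    """Compare object dimensions with the vehicle's cargo volume.

    Both dimension triples are sorted so the largest object dimension is
    checked against the largest cargo dimension, and so on.
    """
    obj = sorted(object_dims_cm, reverse=True)
    vol = sorted(cargo_dims_cm, reverse=True)
    fits = all(o <= v for o, v in zip(obj, vol))
    return ("yes, the object will fit" if fits
            else "no, the object will not fit")

# Example: a 120 x 60 x 40 cm box against a 150 x 100 x 45 cm trunk.
print(will_it_fit((120, 60, 40), (150, 100, 45)))  # yes, the object will fit
```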
  • Patent number: 11361249
    Abstract: Provided is an image processing device having an acquisition unit that acquires an image for learning used for machine learning; a target region setting unit that sets, to the image, a target region including a detection target; a detection region setting unit that sets, to the image, a detection region in which a teaching signal is required to be set; and a teaching signal setting unit that sets, to the detection region, a teaching signal that may take three or more values in accordance with a relevance between the detection region and the target region.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: June 14, 2022
    Assignee: NEC CORPORATION
    Inventor: Masato Yuki
  • Patent number: 11361548
    Abstract: This disclosure relates generally to multi-instance visual tracking based on observer motion modelling. Conventional state-of-the-art methods face major challenges with the error rate and accuracy of tracking moving objects when observer motion is also considered. Embodiments of the present disclosure provide a method for multi-instance visual tracking that considers the motion of the observer. The method tracks moving objects in an input video comprising a set of image frames by detecting the moving objects in each image frame. The method detects the moving objects in each image frame by calculating trifocal tensors and the fundamental matrix of the previous frames. The disclosed multi-instance visual tracking can be used in remote surveillance for tracking moving objects.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: June 14, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Hrishikesh Sharma, Balamuralidhar Purushothaman, Veera Hari Krishna Nukala
  • Patent number: 11361554
    Abstract: A system for performing object and activity recognition based on data from a camera and a radar sensor. The system includes a camera, a radar sensor, and an electronic processor. The electronic processor is configured to receive an image from the camera and determine a portion of the image including an object. The electronic processor is also configured to receive radar data from the radar sensor and determine radar data from the radar sensor associated with the object in the image from the camera. The electronic processor is also configured to convert the radar data associated with the object to a time-frequency image and analyze the time-frequency image and the image of the object to classify the object and an activity being performed by the object. (A sketch of the time-frequency conversion follows this entry.)
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: June 14, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Rajani Janardhana, Krishna Mohan Chinni
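    A sketch of converting the radar data associated with a detected object into a time-frequency image using a short-time Fourier transform (spectrogram); the STFT parameters and normalisation are assumptions, since the patent only states that a time-frequency image is produced.
```python
import numpy as np
from scipy.signal import spectrogram

def radar_to_time_frequency_image(radar_samples: np.ndarray,
                                  sample_rate_hz: float) -> np.ndarray:
    """Convert a 1-D radar return associated with one object into a
    2-D time-frequency image suitable for a joint camera/radar classifier."""
    freqs, times, power = spectrogram(radar_samples, fs=sample_rate_hz,
                                      nperseg=128, noverlap=64)
    # Log scale and min-max normalise so it can be stacked with the image crop.
    img = 10.0 * np.log10(power + 1e-12)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return img

# Example with synthetic data: 1 s of radar samples at 10 kHz.
tf_image = radar_to_time_frequency_image(np.random.randn(10_000), 10_000.0)
print(tf_image.shape)  # (frequency bins, time steps)
```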
  • Patent number: 11361452
    Abstract: An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: June 14, 2022
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yusuke Konishi, Hiroshi Yamada, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Ryo Kawai, Takuya Ogawa
  • Patent number: 11361545
    Abstract: A monitoring device and an operation method thereof are provided to detect whether an object of interest appears in a video stream. The monitoring device includes a motion calculation circuit, a motion region determination circuit and a computing engine. The motion calculation circuit performs motion calculation on a current frame in the video stream to generate a motion map. The motion region determination circuit determines a motion region in the current frame according to the motion map. The motion region determination circuit notifies the computing engine of the motion region in the current frame. The computing engine performs object-of-interest detection on the motion region in the current frame of the video stream to generate a detection result. The motion region determination circuit determines whether to ignore the motion region in a subsequent frame after the current frame according to the detection result. (A sketch of this flow follows this entry.)
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: June 14, 2022
    Assignee: HIMAX TECHNOLOGIES LIMITED
    Inventors: Chin-Kuei Hsu, Ti-Wen Tang
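    A software sketch of the motion-gated detection flow above, using frame differencing for the motion map and a bounding box for the motion region; the patent implements these steps in dedicated circuits, and the thresholds here are assumptions.
```python
import numpy as np

def motion_map(prev_frame: np.ndarray, cur_frame: np.ndarray,
               threshold: int = 25) -> np.ndarray:
    """Motion calculation: absolute frame difference, thresholded to a binary map."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def motion_region(mmap: np.ndarray):
    """Motion region determination: bounding box of all moving pixels, or None."""
    ys, xs = np.nonzero(mmap)
    if ys.size == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

def should_ignore_region(detection_result: bool) -> bool:
    """If the computing engine found no object of interest in the region,
    the region can be ignored in the subsequent frame."""
    return not detection_result
```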
  • Patent number: 11361453
    Abstract: A method and apparatus for detecting and tracking a target, an electronic device and a storage medium are provided. The method includes: providing cameras, detection modules, tracking modules and storage queues in advance, where the numbers of cameras, storage queues and tracking modules are equal and there is a one-to-one correspondence between the cameras and the storage queues and between the cameras and the tracking modules; distributing data streams collected by the cameras to the plurality of detection modules; detecting, by the detection modules, the received data streams, and sending detection results of the data streams collected by the cameras to the storage queues corresponding to the cameras; and extracting, by the tracking modules corresponding to the cameras, the detection results from the storage queues corresponding to the cameras, and using the detection results for tracking the target. (A sketch of this queue layout follows this entry.)
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: June 14, 2022
    Assignee: APOLLO INTELLIGENT CONNECTIVITY (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Xiaoxing Zhu, Yongyi Sun, Chengfa Wang
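    A sketch of the one-queue-per-camera layout described above, with a detection worker feeding each camera's storage queue and a paired tracking worker consuming it; module internals are stubbed and every name is an assumption.
```python
import queue
import threading

NUM_CAMERAS = 3
storage_queues = [queue.Queue() for _ in range(NUM_CAMERAS)]  # one queue per camera

def detect(frame):
    """Stub detection module: return detection results for one frame."""
    return {"boxes": [], "frame": frame}

def detection_worker(camera_id: int, frames):
    """Detect on this camera's stream and push results to its storage queue."""
    for frame in frames:
        storage_queues[camera_id].put(detect(frame))
    storage_queues[camera_id].put(None)  # end-of-stream marker

def tracking_worker(camera_id: int):
    """Tracker paired with one camera: consume its queue and update tracks."""
    while True:
        result = storage_queues[camera_id].get()
        if result is None:
            break
        # ... associate result["boxes"] with existing tracks here ...

# Example wiring: one detection thread and one tracking thread per camera.
frames_per_camera = [[object()] * 5 for _ in range(NUM_CAMERAS)]
threads = []
for cam in range(NUM_CAMERAS):
    threads.append(threading.Thread(target=detection_worker, args=(cam, frames_per_camera[cam])))
    threads.append(threading.Thread(target=tracking_worker, args=(cam,)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```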
  • Patent number: 11362657
    Abstract: A light switch network comprises a plurality of light switch units, each comprising a gesture interface to sense a user gesture by receiving at least one gesture signal from a sensing zone, and configured to exchange one or more gesture status signals with at least one other switch unit in the network in relation to the received gesture signal; each switch being enabled, on receiving the gesture signal: in a first mode, to change a designated switch mode and/or state in response to the gesture signal; or in a second mode, to not change the designated switch mode and/or state according to one or more conditions associated with the status signals received from the other switch unit.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: June 14, 2022
    Inventors: Andrew H. Lohbihler, Michael Kosic, Kevin Kowalchuk, Valentin M. Burtea
  • Patent number: 11354903
    Abstract: Techniques related to training and implementing a bidirectional pairing architecture for object detection are discussed. Such techniques include generating a first enhanced feature map for each frame of a video sequence by processing the frames in a first direction, generating a second enhanced feature map for each frame by processing the frames in a second direction opposite the first, and determining object detection information for each frame using the first and second enhanced feature maps for the frame.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: June 7, 2022
    Assignee: Intel Corporation
    Inventors: Yan Hao, Zhi Yong Zhu, Lu Li, Ciyong Chen, Kun Yu
  • Patent number: 11356641
    Abstract: At least one image of an interior of a vehicle is captured by at least one camera present in the vehicle. Image data of the at least one image that are denoted in reference to the vehicle are transmitted to an external server. A user is authorized by the external server to access the image data denoted in reference to the vehicle, for display to the user via VR goggles. An object detected by numerical methods is highlighted in color in the image depicted in the VR goggles.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: June 7, 2022
    Assignee: AUDI AG
    Inventor: Tobias Müller
  • Patent number: 11354870
    Abstract: A system for accurately positioning augmented reality (AR) content within a coordinate system such as the World Geodetic System (WGS) may include AR content tethered to trackable physical features. As the system is used by mobile computing devices, each mobile device may calculate and compare relative positioning data between the trackable features. The system may connect and group the trackable features hierarchically, as measurements are obtained. As additional measurements are made of the trackable features in a group, the relative position data may be improved, e.g., using statistical methods.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: June 7, 2022
    Assignee: YouAR INC.
    Inventors: Oliver Clayton Daniels, David Morris Daniels, Raymond Victor Di Carlo, Carlo J. Calica, Luke Timothy Hartwig
  • Patent number: 11354891
    Abstract: There is provided an image capturing apparatus that captures a plurality of images, calculates a three-dimensional position from the plurality of images, and outputs the plurality of images and information about the three-dimensional position. The image capturing apparatus includes an image capturing unit, a camera parameter storage unit, a position calculation unit, a position selection unit, and an image complementing unit. The image capturing unit outputs the plurality of images using at least three cameras. The camera parameter storage unit stores in advance camera parameters including occlusion information. The position calculation unit calculates three dimensional positions of a plurality of points. The position selection unit selects a piece of position information relating to a subject area that does not have an occlusion, and outputs selected position information. The image complementing unit generates a complementary image, and outputs the complementary image and the selected position information.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: June 7, 2022
    Assignee: Panasonic Intellectual Property Management Co., Ltd.
    Inventors: Kunio Nobori, Satoshi Sato, Takeo Azuma
  • Patent number: 11354855
    Abstract: Embodiments of the present disclosure relate to systems and methods for generating 3D models of floor plans using two-dimensional (2D) image inputs. According to an embodiment, a three-dimensional building model generation system is disclosed that can include a two-dimensional image receiving module to receive a 2D image pertaining to the floor plan, an image processing module to process the two-dimensional image to generate a binary image, a two-dimensional floor plan graph generation module to extract two-dimensional geometry from the binary image to generate a two-dimensional floor plan graph, and a three-dimensional model generation module to generate a 3D model of the floor plan by performing geometric extrusion of the two-dimensional floor plan graph based on one or more cyclic wall graphs and one or more connectives.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: June 7, 2022
    Assignee: SNAPTRUDE TECHNOLOGIES PRIVATE LIMITED
    Inventor: Syed Altaf Hussain Ganihar
  • Patent number: 11354909
    Abstract: In an approach for detecting queuing information, a processor analyzes a video monitoring a queue area. A processor detects a queue barrier in the queue area using an instance segmentation technique based on the video. A processor identifies a queue in the queue area using a heuristic technique. A processor recognizes a number of people in the queue. A processor provides an estimation of a wait time for the queue. (A sketch of the wait-time estimate follows this entry.)
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: June 7, 2022
    Assignee: International Business Machines Corporation
    Inventors: Chang Xu, Junsong Wang, Hang Liu, Yan Gong
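    A sketch of the wait-time estimation step, assuming a people-count times average-service-time model; the patent only states that an estimate is provided from the recognized number of people, so the model and defaults here are assumptions.
```python
def estimate_wait_minutes(people_in_queue: int,
                          avg_service_minutes_per_person: float = 1.5,
                          open_service_points: int = 1) -> float:
    """Estimate queue wait time from the number of people recognized in the queue."""
    if open_service_points < 1:
        raise ValueError("at least one service point must be open")
    return people_in_queue * avg_service_minutes_per_person / open_service_points

# Example: 12 people, 2 counters open, ~1.5 min per person -> 9 minutes.
print(estimate_wait_minutes(12, 1.5, 2))
```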
  • Patent number: 11354794
    Abstract: A deposit detection device according to an embodiment includes an extraction module and an exclusion module. The extraction module extracts a candidate region for a deposit from a captured image captured by an imaging device. The exclusion module excludes from the candidate region a region that satisfies a predetermined exception condition. The exclusion module excludes from the candidate region a region that satisfies a first exception condition when adhesion of a deposit to the imaging device is not detected, and excludes from the candidate region a region that satisfies a second exception condition different from the first exception condition when adhesion of the deposit to the imaging device is detected.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: June 7, 2022
    Assignee: DENSO TEN Limited
    Inventors: Nobuhisa Ikeda, Nobunori Asayama, Takashi Kono, Yasushi Tani, Daisuke Yamamoto, Daisuke Shiota, Teruhiko Kamibayashi
  • Patent number: 11354879
    Abstract: Techniques are described for detecting a periphery of a surface based on a point set representing the surface. The surface may correspond to a display medium upon which content is projected. A shape model may be matched and aligned to a contour of the point set. A periphery or edge of the surface and corresponding display medium may be determined based on the aligned shape model.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: June 7, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Ning Yao, Qiang Liu
  • Patent number: 11348245
    Abstract: A scanning window is used to scan an image frame of a sensor when doing object detection. In one approach, positions within the image frame are stored in memory. Each position corresponds to an object detection at that position for a prior frame of data. A first area of the image frame is determined based on the stored positions. When starting to analyze a new frame of data, the first area is scanned to detect at least one object. After scanning within the first area, at least one other area of the new image frame is scanned. (A sketch of this scanning order follows this entry.)
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: May 31, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Gil Golov
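    A sketch of the prioritized scanning order described above: stored prior-detection positions define a first area that is scanned before the remainder of the frame. The window size, stride, and padding margin are assumptions.
```python
def first_area_from_history(prior_positions, margin: int = 32):
    """Bounding box around stored prior-detection positions, padded by a margin."""
    if not prior_positions:
        return None
    xs = [x for x, _ in prior_positions]
    ys = [y for _, y in prior_positions]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def scan_order(frame_w: int, frame_h: int, prior_positions,
               window: int = 64, stride: int = 32):
    """Yield scanning-window origins, first inside the prior-detection area,
    then over the remainder of the frame."""
    area = first_area_from_history(prior_positions)
    inside, outside = [], []
    for y in range(0, frame_h - window + 1, stride):
        for x in range(0, frame_w - window + 1, stride):
            if area and area[0] <= x <= area[2] and area[1] <= y <= area[3]:
                inside.append((x, y))
            else:
                outside.append((x, y))
    yield from inside   # scan the first area before the rest of the frame
    yield from outside
```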
  • Patent number: 11348255
    Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: May 31, 2022
    Assignee: Track160, Ltd.
    Inventors: Michael Tamir, Michael Birnboim, Antonio Dello Iacono, Yaacov Chernoi, Tamir Anavi, Michael Priven, Alexander Yudashkin
  • Patent number: 11348266
    Abstract: A method for monitoring headway to an object performable in a computerized system including a camera mounted in a moving vehicle. The camera acquires in real time multiple image frames including respectively multiple images of the object within a field of view of the camera. An edge is detected in the images of the object. A smoothed measurement is performed of a dimension of the edge. Range to the object is calculated in real time, based on the smoothed measurement. (A sketch of the range calculation follows this entry.)
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: May 31, 2022
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Gideon P. Stein, Andras D. Ferencz, Ofer Avni
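    A sketch of the range calculation implied by the abstract above, combining an exponentially smoothed edge measurement with the standard pinhole relation range = focal_length x real_width / image_width; the smoothing scheme and constants are assumptions, not the patent's stated method.
```python
def smooth(prev_value: float, new_value: float, alpha: float = 0.3) -> float:
    """Exponential smoothing of the measured edge dimension (in pixels)."""
    return alpha * new_value + (1.0 - alpha) * prev_value

def range_to_vehicle_m(edge_width_px: float,
                       focal_length_px: float = 1000.0,
                       real_vehicle_width_m: float = 1.8) -> float:
    """Pinhole-camera range estimate from the smoothed width of the vehicle edge."""
    if edge_width_px <= 0:
        raise ValueError("edge width must be positive")
    return focal_length_px * real_vehicle_width_m / edge_width_px

# Example: the smoothed edge width is about 90 px -> headway of about 20 m.
width_px = smooth(prev_value=92.0, new_value=85.3)
print(round(range_to_vehicle_m(width_px), 1))  # ~20.0
```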
  • Patent number: 11348299
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: May 31, 2022
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
  • Patent number: 11348275
    Abstract: Embodiments of the present application disclose methods and apparatuses for determining a bounding box of a target object, media, and devices. The method includes: obtaining attribute information of each of a plurality of key points of a target object; and determining a bounding box position of the target object according to the attribute information of each of the plurality of key points of the target object and to a preset neural network. The implementations of the present application can improve the efficiency and accuracy of determining a bounding box of a target object.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: May 31, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO. LTD.
    Inventors: Buyu Li, Quanquan Li, Junjie Yan
  • Patent number: 11348354
    Abstract: A human body tracking method, apparatus, and device, and a storage medium. The method includes: obtaining a current frame image captured by a target photographing device at a current moment; detecting each human body in the current frame image to obtain first position information of each human body in the current frame image; calculating second position information of a first human body in the current frame image; and determining target position information of each human body in the current frame image according to the second position information of the first human body in the current frame image, the first position information of each human body in the current frame image, and pedestrian features of all tracked pedestrians stored in a preset list.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: May 31, 2022
    Inventors: Zhigang Wang, Jian Wang, Xubin Li, Le Kang, Zeyu Liu, Xiao Liu, Hao Sun, Shilei Wen, Yingze Bao, Mingyu Chen, Errui Ding
  • Patent number: 11347046
    Abstract: Described herein are systems and methods for assessing a biological sample. The methods include: characterizing a speckled pattern to be applied by a diffuser; positioning a biological sample relative to at least one coherent light source such that at least one coherent light source illuminates the biological sample; diffusing light produced by the at least one coherent light source; capturing a plurality of illuminated images with the embedded speckle pattern of the biological sample based on the diffused light; iteratively reconstructing the plurality of speckled illuminated images of the biological sample to recover an image stack of reconstructed images; stitching together each image in the image stack to create a whole slide image, wherein each image of the image stack at least partially overlaps with a neighboring image; and identifying one or more features of the biological sample. The methods may be performed by a near-field Fourier Ptychographic system.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: May 31, 2022
    Assignee: Pathware Inc.
    Inventors: Rodger Michael Moore, Cordero Derrell Core, Jaron Nathaniel Nix, Thomas Karl, Samatha Hastings, Samir Yhann
  • Patent number: 11338753
    Abstract: An information processing apparatus (10) includes a computation unit (110), a selection unit (120), and a processing unit (130). The computation unit (110) analyzes a sensing result acquired by sensing a vehicle running on a road, and thereby computes the number of occupants in the vehicle and a certainty factor of the number of occupants. As one example, the computation unit (110) computes reliability in the case of determining that a person exists in relation to each seat of the vehicle, and uses the reliability for each seat to compute a certainty factor for each possible number of occupants in the vehicle. The selection unit (120) selects, as an analysis result, a sensing result whose computed certainty factor of the number of occupants does not satisfy a predetermined criterion. The processing unit (130) makes the sensing result selected by the selection unit (120) distinguishable from the other sensing results.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: May 24, 2022
    Assignee: NEC CORPORATION
    Inventor: Shinichi Miyamoto
  • Patent number: 11340709
    Abstract: In an example method, a target object is detected via a camera in a mobile device based on an embedded identifier on the target object. Sensor data of the mobile device is tracked to estimate a relative location or a relative orientation of the mobile device in relation to the target object. A relative gesture is detected via the mobile device based on the relative location or the relative orientation of the mobile device. One or more actions are performed in response to detecting the relative gesture.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: May 24, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Mithra Vankipuram, Craig Peter Sayers
  • Patent number: 11341731
    Abstract: A method for immersively displaying a scanned environment of a region to a set of users in a training environment wearing augmented reality head display units. The training environment includes a pseudo-GPS system, which allows position tracking over time. This enables rehearsing military operations before they occur.
    Type: Grant
    Filed: May 31, 2021
    Date of Patent: May 24, 2022
    Assignee: TIPPING POINT MEDICAL IMAGES, LLC
    Inventor: Robert Edwin Douglas
  • Patent number: 11341680
    Abstract: A gaze point estimation processing apparatus in an embodiment includes a storage configured to store a neural network as a gaze point estimation model and one or more processors. The storage stores a gaze point estimation model generated through learning based on an image for learning and information relating to a first gaze point for the image for learning. The one or more processors estimate information relating to a second gaze point with respect to an image for estimation from the image for estimation using the gaze point estimation model.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: May 24, 2022
    Assignee: PREFERRED NETWORKS, INC.
    Inventor: Masaaki Fukuda
  • Patent number: 11340706
    Abstract: Systems and methods for gesture recognition based on depth information from a camera include, at an electronic device having a camera system, capturing a video frame and depth information associated with the video frame, identifying a foreground portion of the video frame based on the depth information, and determining whether the foreground portion matches a predefined gesture in a database. In accordance with a determination that the foreground portion matches a predefined gesture in the database, the device determines whether one or more subsequent video frames matches the one or more predefined gestures to produce a recognized gesture.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: May 24, 2022
    Assignee: Zyetric Gaming Limited
    Inventors: Pak Kit Lam, Peter Han Joo Chong, Xiang Yu
  • Patent number: 11340701
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: May 24, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Patent number: 11343412
    Abstract: An electronic device receives depth sensor data that includes depths sensed in multiple zones in the field of view of a depth sensor. The device determines whether a user is in front of the device based on the depth sensor data. If the user is determined to be present, then the device causes a display to enter an operational mode. Otherwise, the device causes the display to enter a standby mode. The device may also determine whether the user's attention is on the device by determining whether the depth sensor data indicates that the user is facing the device. If so, the device causes the display to enter the operational mode. Otherwise, the device causes the display to enter a power saving mode. The device may use a machine learning algorithm to determine whether the depth sensor data indicates that the user is present and/or facing the device. (A sketch of this decision follows this entry.)
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: May 24, 2022
    Assignee: Intel Corporation
    Inventors: Divyashree-Shivakumar Sreepathihalli, Michael Daniel Rosenzweig, Uttam K. Sengupta, Soethiha Soe, Prasanna Krishnaswamy
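    A sketch of the presence and attention decision from multi-zone depth data; the zone thresholds and the symmetric-spread rule for "facing the device" are crude stand-ins for the machine-learning model the patent mentions, and all names are assumptions.
```python
from enum import Enum

class DisplayMode(Enum):
    OPERATIONAL = "operational"
    POWER_SAVING = "power_saving"
    STANDBY = "standby"

def choose_display_mode(zone_depths_mm, presence_max_mm: int = 1200,
                        facing_spread_mm: int = 150) -> DisplayMode:
    """Pick a display mode from per-zone depth readings.

    A user is "present" if any zone reports a depth closer than presence_max_mm.
    The user is treated as "facing" the device when the nearest zones are
    roughly symmetric (small spread), a crude stand-in for a learned model.
    """
    near = [d for d in zone_depths_mm if 0 < d < presence_max_mm]
    if not near:
        return DisplayMode.STANDBY           # nobody in front of the device
    if max(near) - min(near) <= facing_spread_mm:
        return DisplayMode.OPERATIONAL       # present and likely facing the device
    return DisplayMode.POWER_SAVING          # present but attention elsewhere

print(choose_display_mode([800, 850, 820, 3000]))  # DisplayMode.OPERATIONAL
```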