Object Tracking Patents (Class 348/169)
  • Patent number: 11555910
    Abstract: A technique for tracking objects includes: determining a set of detected measurements based on a received return signal; determining a group that includes a set of group measurements and a set of group tracks; creating a merged factor, including a merged set of track state hypotheses associated with a merged set of existing tracks including a first set of existing tracks and a second set of existing tracks, by calculating the cross-product of a first set of previous track state hypotheses and a second set of previous track state hypotheses; determining a first new factor and a second new factor; calculating a first set of new track state hypotheses for the first new factor based on a first subset of the group measurements; and calculating a second set of new track state hypotheses for the second new factor based on a second subset of the group measurements.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: January 17, 2023
    Assignee: Motional AD LLC
    Inventor: Lingji Chen
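The merged-factor step in the abstract above is essentially a cross-product of two hypothesis sets. A minimal sketch, assuming each hypothesis is just a dictionary mapping track IDs to state estimates (the function and variable names are illustrative, not from the patent):

```python
from itertools import product

def merge_factors(hypotheses_a, hypotheses_b):
    """Form the merged hypothesis set as the cross-product of two
    previous hypothesis sets, as in the merged-factor step above.

    Each hypothesis is a dict mapping track id -> state estimate."""
    merged = []
    for h_a, h_b in product(hypotheses_a, hypotheses_b):
        joint = dict(h_a)
        joint.update(h_b)          # joint hypothesis over both sets of existing tracks
        merged.append(joint)
    return merged

# Example: two hypotheses per factor -> four merged hypotheses.
factor_a = [{"t1": 0.0}, {"t1": 0.3}]
factor_b = [{"t2": 1.0}, {"t2": 1.2}]
print(len(merge_factors(factor_a, factor_b)))  # 4
```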
  • Patent number: 11553129
    Abstract: Systems, apparatuses and methods may provide for technology that detects an unidentified individual at a first location along a trajectory in a scene based on a video feed of the scene, wherein the video feed is to be associated with a stationary camera, and selects a non-stationary camera from a plurality of non-stationary cameras based on the trajectory and one or more settings of the selected non-stationary camera. The technology may also automatically instruct the selected non-stationary camera to adjust at least one of the one or more settings, capture a face of the individual at a second location along the trajectory, and identify the unidentified individual based on the captured face of the unidentified individual.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: January 10, 2023
    Assignee: Intel Corporation
    Inventors: Mateo Guzman, Javier Turek, Marcos Carranza, Cesar Martinez-Spessot, Dario Oliver, Javier Felip Leon, Mariano Tepper
  • Patent number: 11546451
    Abstract: An electronic device is provided in the disclosure. The electronic device includes: a body; an image capture device, rotatably disposed on the body to capture an image of an object; a display, configured at a first side of the body and including a display zone, wherein the display zone is configured to display the image of the object; a motor set, electronically connected with the image capture device; and a processor, electronically connected with the image capture device, the display, and the motor set and configured to control the motor set, wherein the display zone includes a center part, and when at least part of the object displayed in the display zone is not in the center part, the processor controls the motor set to drive the image capture device to track the object.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: January 3, 2023
    Assignee: ASUSTEK COMPUTER INC.
    Inventors: I-Hsi Wu, Jen-Pang Hsu, Ching-Hsuan Chen
  • Patent number: 11544503
    Abstract: A domain alignment technique for cross-domain object detection tasks is introduced. During a preliminary pretraining phase, an object detection model is pretrained to detect objects in images associated with a source domain using a source dataset of images associated with the source domain. After completing the pretraining phase, a domain adaptation phase is performed using the source dataset and a target dataset to adapt the pretrained object detection model to detect objects in images associated with the target domain. The domain adaptation phase may involve the use of various domain alignment modules that, for example, perform multi-scale pixel/patch alignment based on input feature maps or perform instance-level alignment based on input region proposals.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Christopher Tensmeyer, Vlad Ion Morariu, Varun Manjunatha, Tong Sun, Nikolaos Barmpalios, Kai Li, Handong Zhao, Curtis Wigington
  • Patent number: 11539911
    Abstract: In general, the present disclosure is directed to an artificial window system that can simulate the user experience of a traditional window in environments where exterior walls are unavailable or other constraints make traditional windows impractical. In an embodiment, an artificial window consistent with the present disclosure includes a window panel, a panel driver, and a camera device. The camera device captures a plurality of image frames representative of an outdoor environment and provides the same to the panel driver. A controller of the panel driver sends the image frames as a video signal to cause the window panel to visually output the same. The window panel may further include light panels, and the controller may extract light characteristics from the captured plurality of image frames to send signals to the light panels to cause the light panels to mimic outdoor lighting conditions.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 27, 2022
    Assignee: DPA VENTURES, INC.
    Inventors: Pooja Devendran, Partha Dutta, Saurabh Ullal, Anand Devendran, Kedar Gupta, Mark Pettus
  • Patent number: 11537211
    Abstract: A display apparatus that includes a movement amount acquirer, a movement amount corrector, and an input processor is provided. The display apparatus acquires a first movement amount of a user's finger in a vertical direction with respect to a virtual operation surface and a second movement amount thereof in a horizontal direction. The display apparatus corrects the first movement amount or the second movement amount when it determines that the input operation is a predetermined operation, and inputs an input operation based on the first movement amount and the second movement amount. When it is determined that the input operation is to move the user's finger in the vertical direction with respect to the virtual operation surface, the display apparatus corrects the second movement amount.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: December 27, 2022
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Yasuhiro Miyano, Takashi Yamamoto, Kohichi Sugiyama
  • Patent number: 11528407
    Abstract: A method includes dividing a field of view into a plurality of zones and sampling the field of view to generate a photon count for each zone of the plurality of zones, identifying a focal sector of the field of view and analyzing each zone to select a final focal object from a first prospective focal object and a second prospective focal object.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: December 13, 2022
    Assignees: STMICROELECTRONICS SA, STMICROELECTRONICS, INC., STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LIMITED
    Inventors: Darin K. Winterton, Donald Baxter, Andrew Hodgson, Gordon Lunn, Olivier Pothier, Kalyan-Kumar Vadlamudi-Reddy
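The zone-sampling step in the abstract above can be pictured as summing photon counts over a grid of zones and then looking for the most promising zone. A rough sketch under that assumption (the grid size, and picking the highest-count zone as the focal sector, are illustrative simplifications, not the patent's selection logic):

```python
import numpy as np

def zone_photon_counts(photon_map, rows=4, cols=4):
    """Divide a field of view into a rows x cols grid of zones and
    return the summed photon count per zone (illustrative only)."""
    h, w = photon_map.shape
    counts = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            zone = photon_map[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            counts[r, c] = zone.sum()
    return counts

photon_map = np.random.poisson(5, size=(64, 64)).astype(float)
counts = zone_photon_counts(photon_map)
focal_zone = np.unravel_index(np.argmax(counts), counts.shape)
print(focal_zone)   # zone most likely to contain the focal object
```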
  • Patent number: 11514616
    Abstract: In a system for providing augmented reality to a person disposed in a real-world, physical environment, a camera is configured to capture multiple real-world images of a physical environment. The system includes a processor configured to use the real-world images to generate multiple images of a virtual object that correspond to the multiple real-world images. The system further includes a display configured to display to the person in real time, a succession of the generated images that correspond to then-current multiple real-world images, such that the person perceives the virtual object to be positioned within the physical environment.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: November 29, 2022
    Assignee: Strathspey Crown, LLC
    Inventor: Robert Edward Grant
  • Patent number: 11508085
    Abstract: A display system including: display apparatus; display-apparatus-tracking means; input device; processor. The processor is configured to: detect input event and identify actionable area of input device; process display-apparatus-tracking data to determine pose of display apparatus in global coordinate space; process first image to identify input device and determine relative pose thereof with respect to display apparatus; determine pose of input device and actionable area in global coordinate space; process second image to identify user's hand and determine relative pose thereof with respect to display apparatus; determine pose of hand in global coordinate space; adjust poses of input device and actionable area and pose of hand such that adjusted poses align with each other; process first image to generate extended-reality image in which virtual representation of hand is superimposed over virtual representation of actionable area; and render extended-reality image.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: November 22, 2022
    Assignee: Varjo Technologies Oy
    Inventors: Ari Antti Peuhkurinen, Aleksei Romanov, Evgeny Zuev, Yuri Popov, Tomi Lehto
  • Patent number: 11509623
    Abstract: A method includes identifying a plurality of local tracklets from a plurality of targets, creating a plurality of global tracklets from the plurality of local tracklets, wherein each global tracklet comprises a set of local tracklets of the plurality of local tracklets, wherein the set of local tracklets corresponds to a target of the plurality of targets; extracting motion features of the target from each global tracklet of the plurality of global tracklets, wherein the motion features of each target of the plurality of targets from each global tracklet of the plurality of global tracklets are distinguishable from the motion features of remaining targets of the plurality of targets from remaining global tracklets; transforming the motion features into an address code by using a hashing process; and transmitting a plurality of address codes and a transformation parameter of the hashing process to a communication device.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: November 22, 2022
    Assignee: Purdue Research Foundation
    Inventors: He Wang, Siyuan Cao
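The abstract above leaves the hashing process unspecified; a random-projection (locality-sensitive) hash is one plausible way to turn motion features into compact address codes, with the projection matrix playing the role of the transmitted transformation parameter. A hedged sketch:

```python
import numpy as np

def hash_motion_features(features, projection):
    """Transform a motion-feature vector into a compact binary address
    code with a random-projection hash (one possible hashing process).

    `projection` is the transformation parameter that would be
    transmitted alongside the address codes."""
    bits = (features @ projection) > 0
    # Pack the sign bits into an integer address code.
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
projection = rng.standard_normal((6, 16))   # 6-D motion features -> 16-bit code
feat_a = rng.standard_normal(6)             # e.g. mean speed, heading, turn rate...
feat_b = rng.standard_normal(6)
print(hash_motion_features(feat_a, projection),
      hash_motion_features(feat_b, projection))
```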
  • Patent number: 11501455
    Abstract: A tracking system includes a camera subsystem that includes cameras that capture video of a space. Each camera is coupled with a camera client that determines local coordinates of people in the captured video. The camera clients generate frames that include color frames and depth frames labeled with an identifier number of the camera and their corresponding timestamps. The camera clients generate tracks that include metadata describing historical people detections, tracking identifications, timestamps, and the identifier number of the camera. The camera clients send the frames and tracks to cluster servers that maintain the frames and tracks such that they are retrievable using their corresponding labels. A camera server queries the cluster servers to receive the frames and tracks using their corresponding labels. The camera server determines the physical positions of people in the space based on the determined local coordinates.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: November 15, 2022
    Assignee: 7-ELEVEN, INC.
    Inventors: Jon Andrew Crain, Sailesh Bharathwaaj Krishnamurthy, Kyle Dalal, Shahmeer Ali Mirza
  • Patent number: 11490015
    Abstract: A method and apparatus for capturing digital video includes displaying a preview of a field of view of the imaging device in a user interface of the imaging device. A sequence of images is captured. A main subject and a background in the sequence of images is determined, wherein the main subject is different than the background. A sequence of modified images for use in a final video is obtained, wherein each modified image is obtained by combining two or more images of the sequence of images such that the main subject in the modified image is blur free and the background is blurred. The sequence of modified images is combined to obtain the final video, which is stored in a memory of the imaging device, and displayed in the user interface.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: November 1, 2022
    Assignee: CLEAR IMAGING RESEARCH, LLC
    Inventor: Fatih M. Ozluturk
  • Patent number: 11482009
    Abstract: A method for generating depth information of a street view image using a two-dimensional (2D) image includes calculating distance information of an object on a 2D map using the 2D map corresponding to a street view image; extracting semantic information on the object from the street view image; and generating depth information of the street view image based on the distance information and the semantic information.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 25, 2022
    Assignee: NAVER LABS CORPORATION
    Inventors: Donghwan Lee, Deokhwa Kim
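One way to picture the fusion described above: take the map-derived distance for each image column and spread it over the pixels that the semantic segmentation assigns to a given class. This is a simplification for illustration only; the class ID and column-wise distance format are assumptions, not details from the patent.

```python
import numpy as np

def depth_from_map_and_semantics(distance_per_column, semantic_mask, class_id):
    """Assign each pixel of a given semantic class the distance that the
    2D map provides for its image column (a simplification of the
    distance/semantics fusion described above)."""
    h, w = semantic_mask.shape
    depth = np.full((h, w), np.nan)
    for col in range(w):
        rows = semantic_mask[:, col] == class_id
        depth[rows, col] = distance_per_column[col]
    return depth

semantic_mask = np.zeros((4, 5), dtype=int)
semantic_mask[1:3, 2:4] = 7                          # class 7 = "building" (illustrative)
distance_per_column = np.array([10.0, 12.0, 15.0, 15.5, 20.0])
print(depth_from_map_and_semantics(distance_per_column, semantic_mask, 7))
```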
  • Patent number: 11475664
    Abstract: The invention relates to a system (1) for identifying a device using a camera and for remotely controlling the identified device. The system is configured to obtain an image (21) captured with a camera. The image captures at least a surrounding of a remote controllable device (51). The system is further configured to analyze the image to recognize one or more objects (57) and/or features in the surrounding of the remote controllable device and select an identifier associated with at least one of the one or more objects and/or features from a plurality of identifiers stored in a memory. The memory comprises associations between the plurality of identifiers and remote controllable devices and the selected identifier is associated with the remote controllable device. The system is further configured to determine a control mechanism for controlling the remote controllable device and control the remote controllable device using the determined control mechanism.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: October 18, 2022
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Bartel Marinus Van De Sluis, Dzmitry Viktorovich Aliakseyeu, Mustafa Tolga Eren, Dirk Valentinus Rene Engelen
  • Patent number: 11477393
    Abstract: A method of view selection in a teleconferencing environment includes receiving a frame of image data from an optical sensor such as a camera, detecting one or more conference participants within the frame of image data, and identifying an interest region for each of the conference participants. Identifying the interest region comprises estimating head poses of participants to determine where a majority of the participants are looking and determining if there is an object in that area. If a suitable object is in the area at which the participants are looking, such as a whiteboard or another person, the image data corresponding to the object will be displayed on a display device or sent to a remote teleconference endpoint.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: October 18, 2022
    Assignee: PLANTRONICS, INC.
    Inventors: David A. Bryan, Wei-Cheng Su, Stephen Paul Schaefer, Alain Elon Nimri, Casey King
  • Patent number: 11475541
    Abstract: An image recognition apparatus includes circuitry. The circuitry is configured to input an image of an object captured by a camera. The circuitry is further configured to divide, based on a predetermined positioning point, the image into a plurality of regions, set a process region that includes the respective divided region, and set a rotation of the respective process region so that a positional relationship between up and down of the object in the respective process region matches up. The circuitry is further configured to perform the rotation to the image corresponding to the respective process region and perform a recognition process to the image after rotation.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: October 18, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Tomokazu Kawahara, Tomoyuki Shibata
  • Patent number: 11468684
    Abstract: A system for situational awareness monitoring within an environment, wherein the system includes one or more processing devices configured to receive an image stream including a plurality of captured images from each of a plurality of imaging devices, the plurality of imaging devices being configured to capture images of objects within the environment and at least some of the imaging devices being positioned within the environment to have at least partially overlapping fields of view, identify overlapping images in the different image streams, the overlapping images being images captured by imaging devices having overlapping fields of view, analyse the overlapping images to determine object locations within the environment, analyse changes in the object locations over time to determine object movements within the environment, compare the object movements to situational awareness rules and use results of the comparison to identify situational awareness events.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: October 11, 2022
    Assignee: COMMONWEALTH SCIENTIFIC AND INDUSTRIAL RESEARCH ORGANISATION
    Inventors: Paul Damien Flick, Nicholas James Panitz, Peter John Dean, Marc Karim Elmouttie, Gregoire Krahenbuhl, Sisi Liang
  • Patent number: 11461736
    Abstract: To present to a user whether a person to be visited is staying in a target area and whether the person to be visited is in a state where the person is unable to deal with a visitor, the present invention detects whether there is any person staying in the target area based on images from a camera, detects whether each person staying in the target area is in a state where the person is unable to deal with a visitor, and generates display information displaying state information regarding whether each person is in the state where the person is unable to deal with a visitor, together with stay information regarding whether there is any person staying in the target area.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: October 4, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Sonoko Hirasawa, Takeshi Fujimatsu
  • Patent number: 11449149
    Abstract: Implementations set forth herein relate to effectuating device arbitration in a multi-device environment using data available from a wearable computing device, such as computerized glasses. The computerized glasses can include a camera, which can be used to provide image data for resolving issues related to device arbitration. In some implementations, a direction that a user is directing their computerized glasses, and/or directing their gaze (as detected by the computerized glasses with prior permission from the user), can be used to prioritize a particular device in a multi-device environment. A detected orientation of the computerized glasses can also be used to determine how to simultaneously allocate content between a graphical display of the computerized glasses and another graphical display of another client device. When content is allocated to the computerized glasses, content-specific gestures can be enabled and actionable at the computerized glasses.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: September 20, 2022
    Assignee: GOOGLE LLC
    Inventors: Alexander Chu, Jarlan Perez
  • Patent number: 11450111
    Abstract: A video scene detection machine learning model is provided. A computing device receives feature vectors corresponding to audio and video components of a video. The computing device provides the feature vectors as input to a trained neural network. The computing device receives from the trained neural network, a plurality of output feature vectors that correspond to shots of the video. The computing device applies optimal sequence grouping to the output feature vectors. The computing device further trains the trained neural network based, at least in part, on the applied optimal sequence grouping.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: September 20, 2022
    Assignee: International Business Machines Corporation
    Inventors: Daniel Nechemia Rotman, Rami Ben-Ari, Udi Barzelay
  • Patent number: 11442022
    Abstract: Imaging device and method for reading an image sensor in the imaging device. The imaging device has optics with which the imaging device can be focused on objects. The image sensor has a plurality of sensor lines, wherein each sensor line comprises a plurality of preferably linearly arranged, preferably individually readable pixel elements. A pixel range is defined with the pixel range comprising at least a section of a sensor line. The reading of the image sensor is restricted to the pixel elements (6) in the pixel range.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: September 13, 2022
    Assignee: B&R INDUSTRIAL AUTOMATION GMBH
    Inventors: Walter Walkner, Gerhard Beinhundner, Andreas Waldl
  • Patent number: 11445119
    Abstract: An image capturing control apparatus is provided and detects a first target subject and a second target subject, converts intra-angle-of-view coordinates of each of the first and second target subjects into pan and tilt coordinate values, stores the pan coordinate value and the tilt coordinate value of each of the first and the second target subject, determines an angle of view so as to include the first and second target subjects based on the stored pan and tilt coordinate values, and controls an angle of view of the image capturing apparatus based on the determined angle of view.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: September 13, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takuya Toyoda
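The coordinate conversion and framing steps above can be sketched with a simple linear pixel-to-pan/tilt mapping and a bounding angle of view that spans both stored subject coordinates plus a margin. The linear mapping and the fixed margin are assumptions made for illustration:

```python
def to_pan_tilt(x, y, width, height, pan, tilt, hfov, vfov):
    """Convert intra-angle-of-view pixel coordinates to absolute
    pan/tilt values, assuming a simple linear mapping (illustrative)."""
    pan_val = pan + (x / width - 0.5) * hfov
    tilt_val = tilt + (y / height - 0.5) * vfov
    return pan_val, tilt_val

def framing_for(subjects, margin_deg=5.0):
    """Choose a pan/tilt centre and angle of view wide enough to
    include every stored (pan, tilt) coordinate plus a margin."""
    pans = [p for p, _ in subjects]
    tilts = [t for _, t in subjects]
    center = ((min(pans) + max(pans)) / 2, (min(tilts) + max(tilts)) / 2)
    fov = (max(pans) - min(pans) + 2 * margin_deg,
           max(tilts) - min(tilts) + 2 * margin_deg)
    return center, fov

s1 = to_pan_tilt(400, 300, 1920, 1080, pan=10.0, tilt=0.0, hfov=60.0, vfov=35.0)
s2 = to_pan_tilt(1500, 600, 1920, 1080, pan=10.0, tilt=0.0, hfov=60.0, vfov=35.0)
print(framing_for([s1, s2]))
```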
  • Patent number: 11445305
    Abstract: A hearing aid comprises a sensor configured for detecting a focus of an end user on a real sound source, a microphone assembly configured for converting sounds into electrical signals, a speaker configured for converting the electrical signals into sounds, and a control subsystem configured for modifying the direction and/or distance of a greatest sensitivity of the microphone assembly based on detected focus. A virtual image generation system comprises memory storing a three-dimensional scene, a sensor configured for detecting a focus of the end user on a sound source, a speaker configured for conveying sounds to the end user, and a control subsystem configured for causing the speaker to preferentially convey a sound originating from the sound source in response to detection of the focus, and for rendering image frames of the scene, and a display subsystem configured for sequentially displaying the image frames to the end user.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: September 13, 2022
    Assignee: Magic Leap, Inc.
    Inventors: George Alistair Sanger, Samuel A. Miller, Brian Schmidt, Anastasia Andreyevna Tajik
  • Patent number: 11417013
    Abstract: Disclosed herein are apparatuses and methods for iteratively mapping a layout of an environment. The implementations include receiving a visual stream from a camera installed in the environment, wherein the visual stream depicts a view of the environment, and wherein positional parameters of the camera and dimensions of the environment are set to arbitrary values. The implementations include monitoring a plurality of persons in the visual stream. For each person in the plurality of persons, the implementations further include identifying a respective path that the person moves along in the view, updating the dimensions of the environment captured in the view based on an estimated height of the person and movement speed along the respective path, and updating the positional parameters of the camera based on the updated dimensions of the environment. The implementations further include mapping a layout of the environment captured in the view of the camera.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: August 16, 2022
    Assignee: Sensormatic Electronics, LLC
    Inventor: Michael C. Stewart
  • Patent number: 11418708
    Abstract: A method includes operating a remote image capture system within a defined geolocation region. The remote image capture system includes a camera, position orientation controls for the camera, ambient sensors and a control unit. A user of a client device that enters the geolocation region is designated based upon identifying information received from the client device and user facial recognition data. Attributes of the geolocation region are communicated to the client device. An image capture location within the geolocation region is communicated to the client device. The client device is provided a prompt to activate the camera.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: August 16, 2022
    Assignee: Super Selfie, Inc.
    Inventors: Paul Laine, Greg Shirakyan
  • Patent number: 11402843
    Abstract: The technology relates to controlling a vehicle in an autonomous driving mode. For example, sensor data identifying a plurality of objects may be received. Pairs of objects of the plurality of objects may be identified. For each identified pair of objects of the plurality of objects, a similarity value which indicates whether the objects of that identified pair of objects can be responded to by the vehicle as a group may be determined. The objects of one of the identified pairs of objects may be clustered together based on the similarity score. The vehicle may be controlled in the autonomous mode by responding to each object in the cluster in a same way.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: August 2, 2022
    Assignee: Waymo LLC
    Inventors: Jared Stephen Russell, Fang Da
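Grouping objects by pairwise similarity, as described above, can be illustrated with a small union-find pass over the similarity values; the threshold and the union-find mechanics are illustrative choices, since the abstract does not specify how the clustering is performed:

```python
def cluster_objects(objects, similarity, threshold=0.8):
    """Group objects whose pairwise similarity value exceeds a threshold,
    so the vehicle can respond to each cluster as a group (a simple
    union-find sketch, not the patent's clustering mechanics)."""
    parent = list(range(len(objects)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if similarity(objects[i], objects[j]) >= threshold:
                parent[find(i)] = find(j)   # union the two objects' clusters

    clusters = {}
    for i in range(len(objects)):
        clusters.setdefault(find(i), []).append(objects[i])
    return list(clusters.values())

# Two pedestrians walking together form one cluster; a distant car stays alone.
objs = [("ped", 0.0), ("ped", 1.0), ("car", 30.0)]
sim = lambda a, b: 1.0 if (a[0] == b[0] and abs(a[1] - b[1]) < 5) else 0.0
print(cluster_objects(objs, sim))
```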
  • Patent number: 11393190
    Abstract: An object identification method determines whether a first monitoring image and a second monitoring image captured by a monitoring camera apparatus have the same object. The object identification method includes acquiring the first monitoring image at a first point of time to analyze a first object inside a first angle of view of the first monitoring image, acquiring the second monitoring image at a second point of time different from the first point of time to analyze a second object inside the first angle of view of the second monitoring image, estimating a first similarity between the first object inside the first angle of view of the first monitoring image and the second object inside the first angle of view of the second monitoring image; and determining whether the first object and the second object are the same according to a comparison result of the first similarity with a threshold.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: July 19, 2022
    Assignee: VIVOTEK INC.
    Inventor: Cheng-Chieh Liu
  • Patent number: 11392280
    Abstract: Image selection apparatus includes: information acquisition unit and server information acquisition unit configured to acquire position information of plurality of moving objects; display unit; display stop unit configured to input an image stop command; display control unit configured to control the display unit so that a plurality of icon images corresponding to the plurality of moving objects whose position information is acquired by the information acquisition unit and server information acquisition unit is displayed on the display unit and, when the image stop command is inputted by the display stop unit, motion of the plurality of icon images displayed on the display unit is stopped; and collection destination assignment unit configured to select an arbitrary icon image from among the plurality of icon images in response to user operation while the motion of the plurality of icon images is stopped by the display control unit.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: July 19, 2022
    Assignee: Honda Motor Co., Ltd.
    Inventors: Takahiro Oyama, Yuki Hagiwara
  • Patent number: 11386306
    Abstract: As agents move about a materials handling facility, tracklets representative of the position of each agent are maintained along with a confidence score indicating a confidence that the position of the agent is known. If the confidence score falls below a threshold level, image data of the agent associated with the low confidence score is obtained and processed to generate one or more embedding vectors representative of the agent at a current position. Those embedding vectors are then compared with embedding vectors of other candidate agents to determine a set of embedding vectors having a highest similarity. The candidate agent represented by the set of embedding vectors having the highest similarity score is determined to be the agent and the position of that candidate agent is updated to the current position, thereby re-identifying the agent.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: July 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Tian Lan, Jayakrishnan Eledath, Hoi Cheung Pang
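The re-identification step above boils down to comparing the current agent's embedding vectors with each candidate's stored embeddings and keeping the most similar candidate. A minimal sketch using mean cosine similarity (the similarity measure and data layout are assumptions):

```python
import numpy as np

def reidentify(query_embeddings, candidates):
    """Compare the query agent's embedding vectors against each
    candidate's stored embeddings and return the candidate with the
    highest mean cosine similarity (a simplified re-identification step)."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    best_id, best_score = None, -1.0
    for cand_id, cand_vecs in candidates.items():
        score = np.mean([cosine(q, c) for q in query_embeddings for c in cand_vecs])
        if score > best_score:
            best_id, best_score = cand_id, score
    return best_id, best_score

rng = np.random.default_rng(1)
candidates = {"agent_a": [rng.standard_normal(128)], "agent_b": [rng.standard_normal(128)]}
query = [candidates["agent_b"][0] + 0.05 * rng.standard_normal(128)]
print(reidentify(query, candidates))   # expected: agent_b
```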
  • Patent number: 11381751
    Abstract: A handheld gimbal includes a handheld part and a gimbal. The handheld part is configured with a human-machine interface component. The gimbal is mounted at the handheld part and configured to mount a camera device to photograph a target object. The human-machine interface component includes a display screen and a processor. The display screen is configured to display a photographing image captured by the camera device. The photographing image includes an image of the target object. The processor is configured to automatically recognize the target object, obtain a motion instruction of controlling a motion of the gimbal according to a motion status of the image of the target object, and control the motion of the gimbal according to the motion instruction.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: July 5, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventor: Guanliang Su
  • Patent number: 11373277
    Abstract: A motion detection method using an image processing device may include: based on the noise level of a current frame, obtaining a weighted average of a sum of absolute differences (SAD) value and an absolute difference of sums (ADS) value; setting the weighted average as an initial motion detection value of each pixel; and selectively performing max filtering on motion detection values of the pixels to obtain a final motion detection value.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: June 28, 2022
    Assignee: HANWHA TECHWIN CO., LTD.
    Inventor: Eun Cheol Choi
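The SAD/ADS blend described above has an intuitive reading: zero-mean noise tends to cancel inside a block sum, so the noisier the frame, the more weight goes to the ADS term. A loose per-block sketch (the patent works per pixel and applies max filtering selectively; the weighting rule and block size here are illustrative):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def motion_detection(prev, curr, noise_level, block=8):
    """Blend SAD and ADS per block with a noise-dependent weight, then
    apply max filtering to the motion detection values."""
    h, w = curr.shape
    alpha = np.clip(noise_level, 0.0, 1.0)   # more noise -> lean on ADS
    scores = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            d = curr[i:i+block, j:j+block].astype(float) - prev[i:i+block, j:j+block]
            sad = np.abs(d).sum()            # sum of absolute differences
            ads = abs(d.sum())               # absolute difference of sums
            scores[i // block, j // block] = (1 - alpha) * sad + alpha * ads
    # Max filtering of the detection values (the patent applies this selectively).
    return maximum_filter(scores, size=3)

prev = np.zeros((32, 32)); curr = np.zeros((32, 32)); curr[8:16, 8:16] = 50
print(motion_detection(prev, curr, noise_level=0.2).max())
```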
  • Patent number: 11367124
    Abstract: An object tracking system includes a sensor, a weight sensor, and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to detect a weight increase on the weight sensor and to determine a weight increase amount on the weight sensor. The tracking system is further configured to receive the frame, to determine a pixel location for a person, and to determine that the person is within a predefined zone associated with the rack. The tracking system is further configured to identify a plurality of items in a digital cart associated with the person, to identify an item from the digital cart with an item weight that is closest to the weight increase amount, and to remove the identified item from the digital cart associated with the person.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: June 21, 2022
    Assignee: 7-ELEVEN, INC.
    Inventors: Shahmeer Ali Mirza, Sarath Vakacharla, Sailesh Bharathwaaj Krishnamurthy, Deepanjan Paul
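The cart-resolution step above is a nearest-weight lookup: pick the cart item whose catalog weight is closest to the measured weight increase. A tiny sketch with an assumed dictionary-based cart format:

```python
def identify_removed_item(cart, weight_increase):
    """Pick the cart item whose stored weight is closest to the weight
    increase measured on the rack's weight sensor (illustrative names)."""
    return min(cart, key=lambda item: abs(item["weight"] - weight_increase))

cart = [{"name": "soda", "weight": 355.0}, {"name": "chips", "weight": 70.0}]
removed = identify_removed_item(cart, weight_increase=68.5)
cart.remove(removed)          # remove the identified item from the digital cart
print(removed["name"], cart)
```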
  • Patent number: 11351436
    Abstract: A golf launch monitor is configured to determine a flight characteristic of a golf ball. The golf launch monitor includes two low-speed cameras, a trigger device, and a processor. The trigger device is configured to detect a golf swing. The processor is configured to instruct, upon the trigger device detecting said golf swing, the first camera to capture the first ball image; instruct the second camera to capture the second ball image after a time interval, wherein the time interval is less than the first frame rate and the second frame rate; and determine, based at least in part on the first ball image and the second ball image, the flight characteristic of the golf ball.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: June 7, 2022
    Inventors: David M. Hendrix, Caleb A. Pinter, Elliot S. Wilder, Jeffrey B. Wigh, Maxwell C. Goldberg, William Perry Copus, Jr.
  • Patent number: 11354683
    Abstract: A method and system for creating an anonymous shopper panel based on multi-modal sensor data fusion. The anonymous shopper panel can serve as the same traditional shopper panel who reports their household information, such as household size, income level, demographics, etc., and their purchase history, yet without any voluntary participation. A configuration of vision sensors and mobile access points can be used to detect and track shoppers as they travel a retail environment. Fusion of those modalities can be used to form a trajectory. The trajectory data can then be associated with Point of Sale data to form a full set of shopper behavior data. Shopper behavior data for a particular visit can then be compared to data from previous shoppers' visits to determine if the shopper is a revisiting shopper. The shopper's data can then be aggregated for multiple visits to the retail location. The aggregated shopper data can be filtered using application-specific criteria to create an anonymous shopper panel.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: June 7, 2022
    Assignee: VideoMining Corporation
    Inventors: Joonhwa Shin, Rajeev Sharma, Youngrock R Yoon, Donghun Kim
  • Patent number: 11333535
    Abstract: Provided is a method for more accurately correcting position coordinates of a point on an object to be imaged, the coordinates being identified based on values detected by linear scales. A visual field is moved to a measurement point defined on a recessed portion formed on a calibration plate, and an image is captured (step S13-1), edges are detected from an image of sides of the recessed portion (step S13-2), an intersection of the edges is calculated (step S13-3), values of the intersection as actually measured by the linear scales are saved (step S13-4), and position coordinates of the point on the object to be imaged as detected by the linear scales are corrected by using a true value and a difference.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: May 17, 2022
    Assignee: OMRON CORPORATION
    Inventors: Keita Ebisawa, Shingo Hayashi
  • Patent number: 11336831
    Abstract: An image processing device adjusts an imaging direction and an imaging magnification by performing driving control of an imaging device for each image processing process and sets an imaging position of a measurement target. The image processing device calculates an evaluation value of set feature parts in a captured image input at the time of measurement on the basis of captured image data and compares the evaluation value with determination conditions. If the determination condition is satisfied, the image processing device performs predetermined image processing and outputs a result of the image processing to a control device.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: May 17, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Tomoya Asanuma, Genki Cho, Hiroto Oka
  • Patent number: 11320912
    Abstract: Techniques for gesture-based device connections are described. For example, a method may comprise receiving video data corresponding to motion of a first computing device, receiving sensor data corresponding to motion of the first computing device, comparing, by a processor, the video data and the sensor data to one or more gesture models, and initiating establishment of a wireless connection between the first computing device and a second computing device if the video data and sensor data correspond to gesture models for the same gesture. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: May 3, 2022
    Assignee: INTEL CORPORATION
    Inventors: Giuseppe Raffa, Sangita Sharma
  • Patent number: 11320913
    Abstract: Techniques for gesture-based device connections are described. For example, a method may comprise receiving video data corresponding to motion of a first computing device, receiving sensor data corresponding to motion of the first computing device, comparing, by a processor, the video data and the sensor data to one or more gesture models, and initiating establishment of a wireless connection between the first computing device and a second computing device if the video data and sensor data correspond to gesture models for the same gesture. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: May 3, 2022
    Assignee: INTEL CORPORATION
    Inventors: Giuseppe Raffa, Sangita Sharma
  • Patent number: 11298049
    Abstract: A method includes obtaining, by a device and from a dynamic vision sensor (DVS), a set of images of an object that identifies that the object has moved. The method includes determining, by the device, that the object is associated with a predetermined posture based on the set of images. The method includes determining, by the device, a group to which the object belongs based on an attribute of the object and an attribute of the group. The method includes determining, by the device, whether the object is associated with a dangerous situation based on identifying that the object is associated with the predetermined posture and based on setting information associated with the group to which the object belongs.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: April 12, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Joon-Ho Kim, Joo-Young Kim, Hyun-Jae Baek, Do-Jun Yang
  • Patent number: 11295179
    Abstract: Methods, systems, and techniques for monitoring an object-of-interest within a region involve receiving at least data from two sources monitoring a region and correlating that data to determine that an object-of-interest depicted or represented in data from one of the sources is the same object-of-interest depicted or represented in data from the other source. Metadata identifying that the object-of-interest from the two sources is the same object-of-interest is then stored for later use in, for example, object tracking.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: April 5, 2022
    Assignee: AVIGILON CORPORATION
    Inventors: Moussa Doumbouya, Yanyan Hu, Kevin Piette, Pietro Russo, Mahesh Saptharishi, Bo Yang Yu
  • Patent number: 11279497
    Abstract: A gimbal rotation method includes controlling a driving assembly based on an angle command indicating a target angle to drive a gimbal to rotate to the target angle in a first time period having a pre-set length, determining whether a new angle command is received within the first time period, and, if not, estimating an estimated target angle based on gimbal rotation angles indicated by a plurality of previously-received angle commands and controlling the driving assembly to drive the gimbal to rotate from the target angle to the estimated target angle in a second time period having the pre-set length.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: March 22, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Yifan Wu, Huaiyu Liu
  • Patent number: 11270108
    Abstract: An object tracking apparatus for a sequence of images, wherein a plurality of tracks have been obtained for the sequence of images, and each of the plurality of tracks is obtained by detecting an object in several images included in the sequence of images. The apparatus comprises a matching track pair determining unit configured to determine a matching track pair from the plurality of tracks, wherein the matching track pair comprises a previous track and a subsequent track which correspond to the same object and are discontinuous, and a combining unit configured to combine the previous track and the subsequent track included in the matching track pair.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: March 8, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Shiting Wang, Qi Hu, Dongchao Wen
  • Patent number: 11270122
    Abstract: An image processing system has a memory storing a video depicting a multi-entity event, a trained reinforcement learning policy and a plurality of domain specific language functions. A graph formation module computes a representation of the video as a graph of nodes connected by edges. A trained machine learning system recognizes entities depicted in the video and recognizes attributes of the entities. Labels are added to the nodes of the graph according to the recognized entities and attributes. The trained machine learning system computes a predicted multi-entity event depicted in the video. For individual ones of the edges of the graph, a domain specific language function is selected from the plurality of domain specific language functions and assigned to the edge, the selection being made at least according to the reinforcement learning policy. An explanation is formed from the domain specific language functions.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 8, 2022
    Assignee: 3D Industries Limited
    Inventors: Sukrit Shankar, Seena Rejal
  • Patent number: 11270564
    Abstract: Various embodiments of the present invention provide systems and methods for monitoring physical movement in relation to regions where movement is either unconditionally or conditionally unauthorized.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: March 8, 2022
    Assignee: BI INCORPORATED
    Inventor: Joseph P. Newell
  • Patent number: 11263903
    Abstract: Provided is an information provision system capable of reducing secondary damage in the case of a disaster. The information provision system includes: a vehicle; and an information provision apparatus communicatively connected to the vehicle and configured to provide information to the vehicle. The information provision apparatus includes an acquirer that acquires fire-related information indicating information about fire, a fire detector that detects an occurrence of fire based on the fire-related information, and a fire information generator that generates, in response to the detection of the occurrence of fire by the fire detector, fire avoidance information for avoiding the fire, and transmits the fire avoidance information to the vehicle. The vehicle includes an information receiver that receives the fire avoidance information, and a display controller that causes a display to display the fire avoidance information.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: March 1, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventor: Shuichi Kazuno
  • Patent number: 11263761
    Abstract: A method for controlling a movable object to track a target object includes determining a change in one or more features between a first image frame and a second image frame, and adjusting a movement of the movable object based on the change in the one or more features between the first image frame and the second image frame. The one or more features are associated with the target object, and the first image frame and the second image frame are captured at different points in time using an imaging device on the movable object.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: March 1, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Jie Qian, Cong Zhao, Xuyang Feng
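One concrete instance of "adjusting movement based on a feature change" is reacting to the change in the tracked target's bounding-box area between the two frames, something the abstract permits but does not spell out. A hedged sketch:

```python
def adjust_movement(bbox_prev, bbox_curr, target_area_ratio=1.0, gain=0.5):
    """Derive a forward/backward speed adjustment from the change in a
    tracked target's bounding-box area between two frames (one simple
    way to use a feature change; boxes are (x, y, w, h))."""
    area_prev = bbox_prev[2] * bbox_prev[3]
    area_curr = bbox_curr[2] * bbox_curr[3]
    ratio = area_curr / max(area_prev, 1e-6)
    # Target shrinking in the image -> it is getting farther away -> speed up.
    return gain * (target_area_ratio - ratio)

print(adjust_movement((100, 100, 80, 160), (105, 102, 72, 144)))  # positive -> accelerate
```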
  • Patent number: 11265469
    Abstract: A device has a camera that is moveable by one or more actuators. During operation the camera moves. For example, the camera may move to follow a user as they move within the physical space. Mechanical limitations result in the camera movement exhibiting discontinuities, such as small jerks or steps from one orientation to another while panning. If the camera is acquiring video data while moving, the resulting video data may appear jittery and be unpleasant for a user to view. An offset is determined between an intended orientation of the camera at a specified time and an actual orientation of the camera at that time. A portion of raw image data acquired at the specified time is cropped using the offset to produce cropped image data that is free from jitter due to the movement discontinuities.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: March 1, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Om Prakash Gangwal, Jonathan Ross
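The jitter-removal step above amounts to shifting a fixed-size crop window by the offset between where the camera was supposed to point and where it actually pointed. A simplified sketch that expresses that offset directly in pixels (the real system would derive it from orientation data):

```python
import numpy as np

def stabilized_crop(raw_frame, intended_yx, actual_yx, out_shape=(480, 640)):
    """Crop the raw frame using the offset between the camera's intended
    and actual orientation (given here as pixel coordinates) so the
    output stays centred on the intended view."""
    dy = intended_yx[0] - actual_yx[0]
    dx = intended_yx[1] - actual_yx[1]
    h, w = out_shape
    top = (raw_frame.shape[0] - h) // 2 + dy
    left = (raw_frame.shape[1] - w) // 2 + dx
    top = int(np.clip(top, 0, raw_frame.shape[0] - h))
    left = int(np.clip(left, 0, raw_frame.shape[1] - w))
    return raw_frame[top:top + h, left:left + w]

raw = np.zeros((720, 1280), dtype=np.uint8)
frame = stabilized_crop(raw, intended_yx=(360, 640), actual_yx=(364, 637))
print(frame.shape)   # (480, 640)
```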
  • Patent number: 11250943
    Abstract: Sample traceability device and method for medical research and/or diagnosis. The invention relates to a sample traceability device for medical research and/or diagnosis, comprising a one-dimensional or two-dimensional optical code, the device comprising a system for reading one-dimensional or two-dimensional optical codes and a sample tracing control device comprising a sample tracing database manager, and a user interface screen. Said traceability device also comprises: an area for depositing at least two samples, a system for illuminating the deposit area, at least one digital camera oriented towards said deposit area, and a device for processing the image, said image processing device comprising: a module for locating said optical codes, and a module for reading the located optical codes, the sample tracing control device automatically receiving the information generated by the module for reading the located optical codes.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: February 15, 2022
    Assignee: EMS PATHOLOGY MEDICAL SOLUTIONS, S.L.
    Inventors: Francesc Roig Munill, Daniel Badia Pey, Joan Xufre Neto, Kilian Gozalbo Torne
  • Patent number: 11252323
    Abstract: A visual tracker can be configured to obtain profile data associated with a pose of a living entity. In response to detecting a person, a camera can be selected from cameras. Additionally, in response to selecting the camera, the system can receive video data from the camera representative of a stance of the person. Consequently, the stance of the person can be estimated, resulting in an estimated stance.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: February 15, 2022
    Assignee: The Hong Kong University of Science And Technology
    Inventors: Carlos Bermejo Fernandez, Pan Hui
  • Patent number: 11250259
    Abstract: Provided is a method for blending of agricultural product utilizing hyperspectral imaging. At least one region along a sample of agricultural product is scanned using at least one light source of different wavelengths. Hyperspectral images are generated from the at least one region. A spectral fingerprint for the sample of agricultural product is formed from the hyperspectral images. A plurality of samples of agricultural product is blended based on the spectral fingerprints of the samples according to parameters determined by executing a blending algorithm.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: February 15, 2022
    Assignee: Altria Client Services LLC
    Inventors: Seetharama C. Deevi, Henry M. Dante, Qiwei Liang, Samuel Timothy Henry