Object Tracking Patents (Class 348/169)
  • Patent number: 11595614
    Abstract: Intelligent reframing techniques are described in which content (e.g., a movie) can be generated in a different aspect ratio than previously provided. These techniques include obtaining various video frames having a first aspect ratio. Various objects can be identified within the frames. An object having the highest degree of importance in a frame can be selected and a focal point can be calculated based at least in part on that object. A modified version of the content can be generated in a second aspect ratio that is different from the first aspect ratio. The modified version can be generated using the focal point calculated based on the object having the greatest degree of importance. Using these techniques, the content can be provided in a different aspect ratio while ensuring that the most important features of the frame still appear in the new version of the content.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: February 28, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Hooman Mahyar, Arjun Cholkar
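    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the reframing idea in the abstract above, assuming hypothetical inputs (per-frame object boxes with importance scores); the crop is centered on the focal point of the most important object and clamped to the frame.

      def reframe_crop(frame_w, frame_h, objects, target_ratio):
          """Compute a crop box in a new aspect ratio centered on the most
          important object's focal point. `objects` is a hypothetical list of
          (x, y, w, h, importance) tuples; target_ratio is width / height."""
          # Pick the object with the highest degree of importance.
          x, y, w, h, _ = max(objects, key=lambda o: o[4])
          focal_x, focal_y = x + w / 2.0, y + h / 2.0   # focal point from that object

          # Fit the largest crop with the target aspect ratio inside the frame.
          if frame_w / frame_h > target_ratio:          # frame is wider than the target
              crop_h, crop_w = frame_h, frame_h * target_ratio
          else:                                         # frame is taller than the target
              crop_w, crop_h = frame_w, frame_w / target_ratio

          # Center the crop on the focal point, then clamp it to the frame bounds.
          left = min(max(focal_x - crop_w / 2.0, 0), frame_w - crop_w)
          top = min(max(focal_y - crop_h / 2.0, 0), frame_h - crop_h)
          return left, top, crop_w, crop_h

      # Example: a 16:9 frame reframed to 9:16 around a face detected near (1500, 400).
      print(reframe_crop(1920, 1080, [(1450, 350, 100, 100, 0.9)], 9 / 16))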
  • Patent number: 11594033
    Abstract: Multiple cameras capture videos within a secure room. When individuals are detected as entering the room, identities of the individuals are resolved. When an asset is exposed in a field of view of one of the cameras, the individuals' eye and head movements are tracked from the videos with respect to one another and the asset. Additionally, touches made by any of the individuals on the asset are tracked from the videos. The eye and head movements are correlated with the touches or lack of touches according to a security policy for the asset. Any violations of the security policy are written to a secure audit log for the room and the asset.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: February 28, 2023
    Assignee: NCR Corporation
    Inventors: Sudip Rahman Khan, Matthew Robert Burris, Christopher John Costello, Gregory Joseph Hartl
  • Patent number: 11594031
    Abstract: A system and method to automatically generate a secondary video stream based on an incoming primary video stream. The method includes performing video analytics on the primary video stream to generate one or more analysis results, detecting a first target of interest using the analysis results, automatically extracting a first secondary video stream that captures at least a portion of the first target of interest and has a field of view smaller than that of the primary video stream, tracking the first target of interest, displaying the first secondary video stream, detecting a second target of interest using the analysis results, automatically adapting the first secondary video stream from the primary video stream to capture a portion of the first and second targets of interest, tracking the second target of interest, and displaying the first secondary stream including the portion of the first and second targets of interest.
    Type: Grant
    Filed: January 18, 2021
    Date of Patent: February 28, 2023
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: W. Andrew Scanlon, Andrew J. Chosak, John I. W. Clark, Robert A. Cutting, Alan J. Lipton, Gary W. Myers
  • Patent number: 11587243
    Abstract: A tracking system includes a camera subsystem that includes cameras that capture video of a space. Each camera is coupled with a camera client that determines local coordinates of people in the captured video. The camera clients generate frames that include color frames and depth frames labeled with an identifier number of the camera and their corresponding timestamps. The camera clients generate tracks that include metadata describing historical people detections, tracking identifications, timestamps, and the identifier number of the camera. The camera clients send the frames and tracks to cluster servers that maintain the frames and tracks such that they are retrievable using their corresponding labels. A camera server queries the cluster servers to receive the frames and tracks using their corresponding labels. The camera server determines the physical positions of people in the space based on the determined local coordinates.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: February 21, 2023
    Assignee: 7-ELEVEN, INC.
    Inventors: Jon Andrew Crain, Sailesh Bharathwaaj Krishnamurthy, Kyle Dalal, Shahmeer Ali Mirza
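    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the label-keyed storage and retrieval pattern in the abstract above, assuming a hypothetical in-memory store keyed by (camera id, timestamp); the patented system distributes this role across cluster servers.

      from collections import defaultdict

      class FrameTrackStore:
          """Hypothetical in-memory stand-in for the cluster servers: frames and
          tracks are stored under (camera_id, timestamp) labels and retrieved by
          the same labels."""
          def __init__(self):
              self.frames = {}                    # (camera_id, ts) -> frame payload
              self.tracks = defaultdict(list)     # (camera_id, ts) -> track metadata

          def put_frame(self, camera_id, ts, color_frame, depth_frame):
              self.frames[(camera_id, ts)] = {"color": color_frame, "depth": depth_frame}

          def put_track(self, camera_id, ts, tracking_id, local_coords):
              self.tracks[(camera_id, ts)].append(
                  {"tracking_id": tracking_id, "local_coords": local_coords})

          def query(self, camera_id, ts):
              """What a camera server would call to fetch data by its labels."""
              return self.frames.get((camera_id, ts)), self.tracks.get((camera_id, ts), [])

      store = FrameTrackStore()
      store.put_frame(camera_id=3, ts=1700000000.0, color_frame=b"...", depth_frame=b"...")
      store.put_track(camera_id=3, ts=1700000000.0, tracking_id=17, local_coords=(120, 85))
      print(store.query(3, 1700000000.0))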
  • Patent number: 11587284
    Abstract: In one implementation, a virtual-world simulator includes a computing platform having a hardware processor and a memory storing a software code, a tracking system communicatively coupled to the computing platform, and a projection device communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to obtain a map of a geometry of a real-world venue including the virtual-world simulator, to identify one or more virtual effects for display in the real-world venue, and to use the tracking system to track a moving perspective of one of a user in the real-world venue or a camera in the real-world venue. The hardware processor is further configured to execute the software code to control the projection device to simulate a virtual-world by conforming the identified one or more virtual effects to the geometry of the real-world venue from a present vantage point of the tracked moving perspective.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: February 21, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Dane M. Coffey, Evan M. Goldberg, Steven M. Chapman, Daniel L. Baker, Matthew Deuel, Mark R. Mine
  • Patent number: 11583260
    Abstract: The invention relates to a method for predicting and testing physiological conditions of a female mammal related to an increased level of ferning present in a dried mucous body fluid sample of the female mammal, comprising: capturing an image of the dried mucous body fluid sample via a camera of a mobile telecommunication device through a magnifying lens releasably coupled to an objective of the camera, detecting the presence of crystals in the sample by processing the image, determining the crystal density within the sample from the detected crystals, predicting the physiological condition of the female mammal by comparing the crystal density to reference crystal density data, wherein increased crystal density is indicative of increased ferning level.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: February 21, 2023
    Inventor: Zajzon Bodo
  • Patent number: 11583898
    Abstract: Method for producing and maintaining an assignment of object data of an object to a changing physical position of the object in a sorting device, with the steps: feeding in an object at an entry point to the sorting device, capturing and storing identity data as part of the object data of the object; on the basis of sorting data, determining a transfer point assigned to the object; transporting the item as far as the specified transfer point; discharging the object at the specified transfer point; capturing optical object data of each object ejected at the transfer point once the object has reached a predetermined position on the sorting device, at the transfer point or along a delivery path extending from the transfer point to a removal point, and storing the optical object data as a further part of the object data of the object; transporting the object along the delivery path; and during that transportation, tracking the discharged object on the basis of the optical object data by means of tracking, and sto
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: February 21, 2023
    Assignee: BEUMER Group GmbH & Co. KG
    Inventors: Jan Behling, Jan Josef Jesper
  • Patent number: 11589138
    Abstract: A method and apparatus for use in a digital imaging device for correcting image blur in digital images by combining a plurality of images. The plurality of images that are combined include a main subject that can be selected by user input or automatically by the digital imaging device. Blur correction can be performed to make the main subject blur-free while the rest of the image is blurred. All of the image may be made blur-free, or the main subject can be made blur-free at the expense of the rest of the image. The result is a blur-corrected image that is recorded in a memory.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: February 21, 2023
    Assignee: CLEAR IMAGING RESEARCH, LLC
    Inventor: Fatih M. Ozluturk
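    Illustrative sketch (editor's addition, not part of the patent record): a minimal NumPy sketch of one way to combine frames so a chosen main subject stays sharp while the rest blurs, assuming the subject's position in each frame is already known (the hypothetical subject_xy input); frames are shifted so the subject overlaps and then averaged.

      import numpy as np

      def combine_on_subject(frames, subject_xy):
          """Shift each frame so the main subject lands at a common position,
          then average. The aligned subject stays sharp; misaligned background
          detail averages out (blurs). `frames` is a list of HxW grayscale
          arrays, `subject_xy` the subject's (x, y) in each frame."""
          ref_x, ref_y = subject_xy[0]
          acc = np.zeros_like(frames[0], dtype=np.float64)
          for frame, (x, y) in zip(frames, subject_xy):
              dx, dy = int(round(ref_x - x)), int(round(ref_y - y))
              # np.roll is used only as a simple stand-in for real image warping.
              acc += np.roll(np.roll(frame.astype(np.float64), dy, axis=0), dx, axis=1)
          return (acc / len(frames)).astype(frames[0].dtype)

      # Toy usage with random frames and a subject drifting one pixel per frame.
      frames = [np.random.randint(0, 255, (120, 160), dtype=np.uint8) for _ in range(4)]
      positions = [(80, 60), (81, 60), (82, 61), (83, 61)]
      print(combine_on_subject(frames, positions).shape)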
  • Patent number: 11586974
    Abstract: A system and method for multi-agent reinforcement learning in a multi-agent environment that include receiving data associated with the multi-agent environment in which an ego agent and a target agent are traveling and learning a single agent policy that is based on the data associated with the multi-agent environment and that accounts for operation of at least one of: the ego agent and the target agent individually. The system and method also include learning a multi-agent policy that accounts for operation of the ego agent and the target agent with respect to one another within the multi-agent environment. The system and method further include controlling at least one of: the ego agent and the target agent to operate within the multi-agent environment based on the multi-agent policy.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: February 21, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: David Francis Isele, Kikuo Fujimura, Anahita Mohseni-Kabir
  • Patent number: 11580651
    Abstract: An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: February 14, 2023
    Assignee: MIM SOFTWARE, INC.
    Inventor: Jonathan William Piper
  • Patent number: 11580930
    Abstract: Systems, methods, and apparatuses are disclosed for overcoming latency and loss of signal detection in remote control displays. An exemplary system includes a remote control, a host computing device, and one or more target systems communicatively coupled to each other over a wired and/or wireless network. One method includes receiving, by the remote control and from a host computing device, a first video frame captured by a target device, determining a first time corresponding to receipt of the first video frame, receiving, from the host computing device, a second video frame, determining a second time corresponding to receipt of the second video frame, comparing the difference between the first time and the second time to a latency threshold, and causing an alert graphic element to be displayed indicating a latency in communication.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: February 14, 2023
    Assignee: Mason Electric Co.
    Inventors: Robert Jay Myles, Thomas C. Ruberto, Christopher L. Sommerville, Harout Markarian, Kelly P. Ambriz
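    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the latency check in the abstract above, assuming hypothetical receipt times for two consecutive video frames and a configurable threshold.

      def check_latency(first_frame_time, second_frame_time, latency_threshold_s):
          """Compare the inter-frame receipt interval against a threshold and
          signal whether a latency alert graphic should be displayed."""
          delta = second_frame_time - first_frame_time
          return delta > latency_threshold_s

      # Hypothetical values: frames received 0.45 s apart, threshold 0.25 s.
      if check_latency(10.00, 10.45, latency_threshold_s=0.25):
          print("display latency alert graphic")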
  • Patent number: 11580717
    Abstract: A method and a device for determining a placement region of an item are disclosed. The method according to the present disclosure comprises: acquiring position information of an electronic identification at a bar display screen; and determining the placement region of the item according to the position information and a preset mapping relationship.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: February 14, 2023
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Shu Wang, Hui Rao, Zhiguo Zhang, Xin Li, Xiaohong Wang
  • Patent number: 11568660
    Abstract: In one embodiment, a computing system is configured to, during a first tracking session, detect first landmarks in a first image of the environment surrounding a user, and determine a first location of the user by comparing detected first landmarks to a landmark database. During a second tracking session, the computing system captures motion data and estimates a second location of the user based on the motion data and first user location. Based on the motion data and first user location, the computing system detects landmarks in a second image at a second location. The system accesses expected landmarks from the landmark database visible at the second location and determines the estimated second location of the user is inaccurate by comparing the expected landmarks with the second landmarks. The computing system re-localizes the user by comparing the landmarks in the landmark database and third landmarks in a third image.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: January 31, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventor: Christian Forster
  • Patent number: 11566762
    Abstract: A socket with a night light function includes a housing, a connecting unit and a light emitting unit. The housing includes a base, a transmitting light member and a cover. The transmitting light member is sandwiched between the base and the cover. The connecting unit is disposed in the housing and is electrically connected to a plug. The light emitting unit includes a light emitting element disposed in the base, a photosensitive switch, a touch switch and a controller. The light emitting element, the photosensitive switch and the touch switch are all in signal connection with the controller. The photosensitive switch collaborates with the touch switch to control the operation of the light emitting element; the light intensity around the socket is increased by means of the light emitting element and the transmitting light member. The socket is convenient to use and has multiple functions.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: January 31, 2023
    Assignee: TONGHUA INTELLIGENT TECHNOLOGY CO., LTD
    Inventor: Zhijin Li
  • Patent number: 11555910
    Abstract: A technique for tracking objects includes: determining a set of detected measurements based on a received return signal; determining a group that includes a set of group measurements and a set of group tracks; creating a merged factor, including a merged set of track state hypotheses associated with a merged set of existing tracks including a first set of existing tracks and a second set of existing tracks, by calculating the cross-product of a first set of previous track state hypotheses and a second set of previous track state hypotheses; determining a first new factor and a second new factor; calculating a first set of new track state hypotheses for the first new factor based on a first subset of the group measurements; and calculating a second set of new track state hypotheses for the second new factor based on a second subset of the group measurements.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: January 17, 2023
    Assignee: Motional AD LLC
    Inventor: Lingji Chen
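    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the cross-product step in the abstract above, assuming each track state hypothesis is a hypothetical tuple of (track id, measurement assignment) pairs; merging two factors enumerates every pairing of their previous hypothesis sets.

      from itertools import product

      def merge_factors(hypotheses_a, hypotheses_b):
          """Build a merged factor whose hypothesis set is the cross-product of
          two factors' previous hypothesis sets. Each element of the result is
          one joint hypothesis over both sets of existing tracks."""
          return [h_a + h_b for h_a, h_b in product(hypotheses_a, hypotheses_b)]

      # Hypothetical hypotheses: each pair maps a track to a measurement index
      # (None = missed detection).
      factor_1 = [(("track1", 0),), (("track1", None),)]
      factor_2 = [(("track2", 1),), (("track2", 0),)]
      print(merge_factors(factor_1, factor_2))   # 4 joint hypotheses: the 2 x 2 cross-product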
  • Patent number: 11553129
    Abstract: Systems, apparatuses and methods may provide for technology that detects an unidentified individual at a first location along a trajectory in a scene based on a video feed of the scene, wherein the video feed is to be associated with a stationary camera, and selects a non-stationary camera from a plurality of non-stationary cameras based on the trajectory and one or more settings of the selected non-stationary camera. The technology may also automatically instruct the selected non-stationary camera to adjust at least one of the one or more settings, capture a face of the individual at a second location along the trajectory, and identify the unidentified individual based on the captured face of the unidentified individual.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: January 10, 2023
    Assignee: Intel Corporation
    Inventors: Mateo Guzman, Javier Turek, Marcos Carranza, Cesar Martinez-Spessot, Dario Oliver, Javier Felip Leon, Mariano Tepper
  • Patent number: 11544503
    Abstract: A domain alignment technique for cross-domain object detection tasks is introduced. During a preliminary pretraining phase, an object detection model is pretrained to detect objects in images associated with a source domain using a source dataset of images associated with the source domain. After completing the pretraining phase, a domain adaptation phase is performed using the source dataset and a target dataset to adapt the pretrained object detection model to detect objects in images associated with the target domain. The domain adaptation phase may involve the use of various domain alignment modules that, for example, perform multi-scale pixel/path alignment based on input feature maps or perform instance-level alignment based on input region proposals.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Christopher Tensmeyer, Vlad Ion Morariu, Varun Manjunatha, Tong Sun, Nikolaos Barmpalios, Kai Li, Handong Zhao, Curtis Wigington
  • Patent number: 11546451
    Abstract: An electronic device is provided in the disclosure. The electronic device includes: a body; an image capture device, rotatably disposed on the body to capture an image of an object; a display, configured at a first side of the body and including a display zone, wherein the display zone is configured to display the image of the object; a motor set, electronically connected with the image capture device; and a processor, electronically connected with the image capture device, the display, and the motor set and configured to control the motor set, wherein the display zone includes a center part, and when at least part of the object displayed at the display zone is not in the center part, the processor controls the motor set to drive the image capture device to track the object.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: January 3, 2023
    Assignee: ASUSTEK COMPUTER INC.
    Inventors: I-Hsi Wu, Jen-Pang Hsu, Ching-Hsuan Chen
  • Patent number: 11537211
    Abstract: A display apparatus that includes a movement amount acquirer, a movement amount corrector, and an input processor is provided. The display apparatus acquires a first movement amount of a user's finger in a vertical direction with respect to a virtual operation surface and a second movement amount thereof in a horizontal direction. The display apparatus corrects the first movement amount or the second movement amount when it determines that the input operation is a predetermined operation, and inputs an input operation based on the first movement amount and the second movement amount. When it is determined that the input operation is to move the user's finger in the vertical direction with respect to the virtual operation surface, the display apparatus corrects the second movement amount.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: December 27, 2022
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Yasuhiro Miyano, Takashi Yamamoto, Kohichi Sugiyama
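    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of one possible correction rule consistent with the abstract above; the rule, threshold, and damping factor are assumptions, not the patented method. When vertical motion toward the virtual operation surface dominates, incidental horizontal drift is damped.

      def correct_movement(vertical_mm, horizontal_mm, press_ratio=2.0, damping=0.2):
          """If vertical motion dominates (a press toward the virtual surface),
          damp the horizontal component; otherwise leave both amounts unchanged.
          press_ratio and damping are hypothetical tuning parameters."""
          if abs(vertical_mm) >= press_ratio * abs(horizontal_mm):
              horizontal_mm *= damping          # treat sideways drift as incidental
          return vertical_mm, horizontal_mm

      print(correct_movement(vertical_mm=12.0, horizontal_mm=3.0))   # press: drift damped
      print(correct_movement(vertical_mm=2.0, horizontal_mm=9.0))    # swipe: left untouched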
  • Patent number: 11539911
    Abstract: In general, the present disclosure is directed to an artificial window system that can simulate the user experience of a traditional window in environments where exterior walls are unavailable or other constraints make traditional windows impractical. In an embodiment, an artificial window consistent with the present disclosure includes a window panel, a panel driver, and a camera device. The camera device captures a plurality of image frames representative of an outdoor environment and provides the same to the panel driver. A controller of the panel driver sends the image frames as a video signal to cause the window panel to visually output the same. The window panel may further include light panels, and the controller may extract light characteristics from the captured plurality of image frames to send signals to the light panels to cause the light panels to mimic outdoor lighting conditions.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 27, 2022
    Assignee: DPA VENTURES, INC.
    Inventors: Pooja Devendran, Partha Dutta, Saurabh Ullal, Anand Devendran, Kedar Gupta, Mark Pettus
  • Patent number: 11528407
    Abstract: A method includes dividing a field of view into a plurality of zones and sampling the field of view to generate a photon count for each zone of the plurality of zones, identifying a focal sector of the field of view and analyzing each zone to select a final focal object from a first prospective focal object and a second prospective focal object.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: December 13, 2022
    Assignees: STMICROELECTRONICS SA, STMICROELECTRONICS, INC., STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LIMITED
    Inventors: Darin K. Winterton, Donald Baxter, Andrew Hodgson, Gordon Lunn, Olivier Pothier, Kalyan-Kumar Vadlamudi-Reddy
  • Patent number: 11514616
    Abstract: In a system for providing augmented reality to a person disposed in a real-world, physical environment, a camera is configured to capture multiple real-world images of a physical environment. The system includes a processor configured to use the real-world images to generate multiple images of a virtual object that correspond to the multiple real-world images. The system further includes a display configured to display to the person in real time, a succession of the generated images that correspond to then-current multiple real-world images, such that the person perceives the virtual object to be positioned within the physical environment.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: November 29, 2022
    Assignee: Strathspey Crown, LLC
    Inventor: Robert Edward Grant
  • Patent number: 11508085
    Abstract: A display system including: display apparatus; display-apparatus-tracking means; input device; processor. The processor is configured to: detect input event and identify actionable area of input device; process display-apparatus-tracking data to determine pose of display apparatus in global coordinate space; process first image to identify input device and determine relative pose thereof with respect to display apparatus; determine pose of input device and actionable area in global coordinate space; process second image to identify user's hand and determine relative pose thereof with respect to display apparatus; determine pose of hand in global coordinate space; adjust poses of input device and actionable area and pose of hand such that adjusted poses align with each other; process first image, to generate extended-reality image in which virtual representation of hand is superimposed over virtual representation of actionable area; and render extended-reality image.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: November 22, 2022
    Assignee: Varjo Technologies Oy
    Inventors: Ari Antti Peuhkurinen, Aleksei Romanov, Evgeny Zuev, Yuri Popov, Tomi Lehto
  • Patent number: 11509623
    Abstract: A method includes identifying a plurality of local tracklets from a plurality of targets, creating a plurality of global tracklets from the plurality of local tracklets, wherein each global tracklet comprises a set of local tracklets of the plurality of local tracklets, wherein the set of local tracklets corresponds to a target of the plurality of targets; extracting motion features of the target from each global tracklet of the plurality of global tracklets, wherein the motion features of each target of the plurality of targets from each global tracklet of the plurality of global tracklets are distinguishable from the motion features of remaining targets of the plurality of targets from remaining global tracklets; transforming the motion features into an address code by using a hashing process; and transmitting a plurality of address codes and a transformation parameter of the hashing process to a communication device.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: November 22, 2022
    Assignee: Purdue Research Foundation
    Inventors: He Wang, Siyuan Cao
  • Patent number: 11501455
    Abstract: A tracking system includes a camera subsystem that includes cameras that capture video of a space. Each camera is coupled with a camera client that determines local coordinates of people in the captured video. The camera clients generate frames that include color frames and depth frames labeled with an identifier number of the camera and their corresponding timestamps. The camera clients generate tracks that include metadata describing historical people detections, tracking identifications, timestamps, and the identifier number of the camera. The camera clients send the frames and tracks to cluster servers that maintain the frames and tracks such that they are retrievable using their corresponding labels. A camera server queries the cluster servers to receive the frames and tracks using their corresponding labels. The camera server determines the physical positions of people in the space based on the determined local coordinates.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: November 15, 2022
    Assignee: 7-ELEVEN, INC.
    Inventors: Jon Andrew Crain, Sailesh Bharathwaaj Krishnamurthy, Kyle Dalal, Shahmeer Ali Mirza
  • Patent number: 11490015
    Abstract: A method and apparatus for capturing digital video includes displaying a preview of a field of view of the imaging device in a user interface of the imaging device. A sequence of images is captured. A main subject and a background in the sequence of images is determined, wherein the main subject is different than the background. A sequence of modified images for use in a final video is obtained, wherein each modified image is obtained by combining two or more images of the sequence of images such that the main subject in the modified image is blur free and the background is blurred. The sequence of modified images is combined to obtain the final video, which is stored in a memory of the imaging device, and displayed in the user interface.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: November 1, 2022
    Assignee: CLEAR IMAGING RESEARCH, LLC
    Inventor: Fatih M. Ozluturk
  • Patent number: 11482009
    Abstract: A method for generating depth information of a street view image using a two-dimensional (2D) image includes calculating distance information of an object on a 2D map using the 2D map corresponding to a street view image; extracting semantic information on the object from the street view image; and generating depth information of the street view image based on the distance information and the semantic information.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 25, 2022
    Assignee: NAVER LABS CORPORATION
    Inventors: Donghwan Lee, Deokhwa Kim
  • Patent number: 11475664
    Abstract: The invention relates to a system (1) for identifying a device using a camera and for remotely controlling the identified device. The system is configured to obtain an image (21) captured with a camera. The image captures at least a surrounding of a remote controllable device (51). The system is further configured to analyze the image to recognize one or more objects (57) and/or features in the surrounding of the remote controllable device and select an identifier associated with at least one of the one or more objects and/or features from a plurality of identifiers stored in a memory. The memory comprises associations between the plurality of identifiers and remote controllable devices and the selected identifier is associated with the remote controllable device. The system is further configured to determine a control mechanism for controlling the remote controllable device and control the remote controllable device using the determined control mechanism.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: October 18, 2022
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Bartel Marinus Van De Sluis, Dzmitry Viktorovich Aliakseyeu, Mustafa Tolga Eren, Dirk Valentinus Rene Engelen
  • Patent number: 11475541
    Abstract: An image recognition apparatus includes circuitry. The circuitry is configured to input an image of an object captured by a camera. The circuitry is further configured to divide, based on a predetermined positioning point, the image into a plurality of regions, set a process region that includes the respective divided region, and set a rotation of the respective process region so that a positional relationship between up and down of the object in the respective process region matches up. The circuitry is further configured to perform the rotation to the image corresponding to the respective process region and perform a recognition process to the image after rotation.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: October 18, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Tomokazu Kawahara, Tomoyuki Shibata
  • Patent number: 11477393
    Abstract: A method of view selection in a teleconferencing environment includes receiving a frame of image data from an optical sensor such as a camera, detecting one or more conference participants within the frame of image data, and identifying an interest region for each of the conference participants. Identifying the interest region comprises estimating head poses of participants to determine where a majority of the participants are looking and determining if there is an object in that area. If a suitable object is in the area at which the participants are looking, such as a whiteboard or another person, the image data corresponding to the object will be displayed on a display device or sent to a remote teleconference endpoint.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: October 18, 2022
    Assignee: PLANTRONICS, INC.
    Inventors: David A. Bryan, Wei-Cheng Su, Stephen Paul Schaefer, Alain Elon Nimri, Casey King
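    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the interest-region selection in the abstract above, assuming hypothetical coarse gaze labels per participant and a lookup of displayable objects by direction.

      from collections import Counter

      def select_view(gaze_directions, objects_by_direction):
          """Pick the region to display: the direction most participants look
          toward, if a displayable object (whiteboard, another person) is there.
          Inputs are hypothetical: gaze_directions is a list of coarse labels,
          objects_by_direction maps a label to an object name or None."""
          direction, _ = Counter(gaze_directions).most_common(1)[0]
          target = objects_by_direction.get(direction)
          return target if target is not None else "default room view"

      gazes = ["left", "left", "center", "left", "right"]
      objects = {"left": "whiteboard", "center": None, "right": "speaker_2"}
      print(select_view(gazes, objects))   # expected: whiteboard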
  • Patent number: 11468684
    Abstract: A system for situational awareness monitoring within an environment, wherein the system includes one or more processing devices configured to receive an image stream including a plurality of captured images from each of a plurality of imaging devices, the plurality of imaging devices being configured to capture images of objects within the environment and at least some of the imaging devices being positioned within the environment to have at least partially overlapping fields of view, identify overlapping images in the different image streams, the overlapping images being images captured by imaging devices having overlapping fields of view, analyse the overlapping images to determine object locations within the environment, analyse changes in the object locations over time to determine object movements within the environment, compare the object movements to situational awareness rules and use results of the comparison to identify situational awareness events.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: October 11, 2022
    Assignee: COMMONWEALTH SCIENTIFIC AND INDUSTRIAL RESEARCH ORGANISATION
    Inventors: Paul Damien Flick, Nicholas James Panitz, Peter John Dean, Marc Karim Elmouttie, Gregoire Krahenbuhl, Sisi Liang
  • Patent number: 11461736
    Abstract: To present to a user whether a person to be visited is staying in a target area and whether the person to be visited is in a state where the person is unable to deal with a visitor, the present invention detects whether there is any person staying in the target area based on images from a camera, detects whether each person staying in the target area is in a state where the person is unable to deal with a visitor, and generates display information displaying state information regarding whether each person is in the state where the person is unable to deal with a visitor, together with stay information regarding whether there is any person staying in the target area.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: October 4, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Sonoko Hirasawa, Takeshi Fujimatsu
  • Patent number: 11450111
    Abstract: A video scene detection machine learning model is provided. A computing device receives feature vectors corresponding to audio and video components of a video. The computing device provides the feature vectors as input to a trained neural network. The computing device receives, from the trained neural network, a plurality of output feature vectors that correspond to shots of the video. The computing device applies optimal sequence grouping to the output feature vectors. The computing device further trains the trained neural network based, at least in part, on the applied optimal sequence grouping.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: September 20, 2022
    Assignee: International Business Machines Corporation
    Inventors: Daniel Nechemia Rotman, Rami Ben-Ari, Udi Barzelay
  • Patent number: 11449149
    Abstract: Implementations set forth herein relate to effectuating device arbitration in a multi-device environment using data available from a wearable computing device, such as computerized glasses. The computerized glasses can include a camera, which can be used to provide image data for resolving issues related to device arbitration. In some implementations, a direction that a user is directing their computerized glasses, and/or directing their gaze (as detected by the computerized glasses with prior permission from the user), can be used to prioritize a particular device in a multi-device environment. A detected orientation of the computerized glasses can also be used to determine how to simultaneously allocate content between a graphical display of the computerized glasses and another graphical display of another client device. When content is allocated to the computerized glasses, content-specific gestures can be enabled and actionable at the computerized glasses.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: September 20, 2022
    Assignee: GOOGLE LLC
    Inventors: Alexander Chu, Jarlan Perez
  • Patent number: 11445119
    Abstract: An image capturing control apparatus is provided and detects a first target subject and a second target subject, converts intra-angle-of-view coordinates of each of the first and second target subjects into pan and tilt coordinate values, stores the pan coordinate value and the tilt coordinate value of each of the first and second target subjects, determines an angle of view so as to include the first and second target subjects based on the stored pan and tilt coordinate values, and controls an angle of view of the image capturing apparatus based on the determined angle of view.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: September 13, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takuya Toyoda
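    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the framing step in the abstract above, assuming the two subjects' stored pan/tilt coordinates are in degrees and a hypothetical margin; the target view is centered between the subjects and wide enough to include both.

      def frame_two_subjects(pan_tilt_1, pan_tilt_2, margin_deg=5.0):
          """Return (center_pan, center_tilt, required_span_deg) so that both
          stored subject coordinates fall inside the controlled angle of view."""
          (p1, t1), (p2, t2) = pan_tilt_1, pan_tilt_2
          center_pan = (p1 + p2) / 2.0
          center_tilt = (t1 + t2) / 2.0
          span = max(abs(p1 - p2), abs(t1 - t2)) + 2 * margin_deg
          return center_pan, center_tilt, span

      # Hypothetical subjects at pan/tilt (-20, 5) and (15, -3) degrees.
      print(frame_two_subjects((-20.0, 5.0), (15.0, -3.0)))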
  • Patent number: 11442022
    Abstract: Imaging device and method for reading an image sensor in the imaging device. The imaging device has optics with which the imaging device can be focused on objects. The image sensor has a plurality of sensor lines, wherein each sensor line comprises a plurality of preferably linearly arranged, preferably individually readable pixel elements. A pixel range is defined with the pixel range comprising at least a section of a sensor line. The reading of the image sensor is restricted to the pixel elements (6) in the pixel range.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: September 13, 2022
    Assignee: B&R INDUSTRIAL AUTOMATION GMBH
    Inventors: Walter Walkner, Gerhard Beinhundner, Andreas Waldl
  • Patent number: 11445305
    Abstract: A hearing aid comprises a sensor configured for detecting a focus of an end user on a real sound source, a microphone assembly configured for converting sounds into electrical signals, a speaker configured for converting the electrical signals into sounds, and a control subsystem configured for modifying the direction and/or distance of a greatest sensitivity of the microphone assembly based on detected focus. A virtual image generation system comprises memory storing a three-dimensional scene, a sensor configured for detecting a focus of the end user on a sound source, a speaker configured for conveying sounds to the end user, and a control subsystem configured for causing the speaker to preferentially convey a sound originating from the sound source in response to detection of the focus, and for rendering image frames of the scene, and a display subsystem configured for sequentially displaying the image frames to the end user.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: September 13, 2022
    Assignee: Magic Leap, Inc.
    Inventors: George Alistair Sanger, Samuel A. Miller, Brian Schmidt, Anastasia Andreyevna Tajik
  • Patent number: 11418708
    Abstract: A method includes operating a remote image capture system within a defined geolocation region. The remote image capture system includes a camera, position orientation controls for the camera, ambient sensors and a control unit. A user of a client device that enters the geolocation region is designated based upon identifying information received from the client device and user facial recognition data. Attributes of the geolocation region are communicated to the client device. An image capture location within the geolocation region is communicated to the client device. The client device is provided a prompt to activate the camera.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: August 16, 2022
    Assignee: Super Selfie, Inc
    Inventors: Paul Laine, Greg Shirakyan
  • Patent number: 11417013
    Abstract: Disclosed herein are apparatuses and methods for iteratively mapping a layout of an environment. The implementations include receiving a visual stream from a camera installed in the environment, wherein the visual stream depicts a view of the environment, and wherein positional parameters of the camera and dimensions of the environment are set to arbitrary values. The implementations include monitoring a plurality of persons in the visual stream. For each person in the plurality of persons, the implementations further include identifying a respective path that the person moves along in the view, updating the dimensions of the environment captured in the view, based on an estimated height of the person and movement speed along the respective path, and updating the positional parameters of the camera based on the updated dimensions of the environment. The implementations further include mapping a layout of the environment captured in the view of the camera.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: August 16, 2022
    Assignee: Sensormatic Electronics, LLC
    Inventor: Michael C. Stewart
  • Patent number: 11402843
    Abstract: The technology relates to controlling a vehicle in an autonomous driving mode. For example, sensor data identifying a plurality of objects may be received. Pairs of objects of the plurality of objects may be identified. For each identified pair of objects of the plurality of objects, a similarity value which indicates whether the objects of that identified pair of objects can be responded to by the vehicle as a group may be determined. The objects of one of the identified pairs of objects may be clustered together based on the similarity value. The vehicle may be controlled in the autonomous mode by responding to each object in the cluster in a same way.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: August 2, 2022
    Assignee: Waymo LLC
    Inventors: Jared Stephen Russell, Fang Da
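    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of clustering identified pairs by similarity value, assuming a hypothetical threshold; union-find merges any pair scoring above it so the vehicle can respond to the cluster as a group.

      def cluster_objects(object_ids, pair_similarities, threshold=0.8):
          """Union-find clustering: objects are joined whenever an identified
          pair's similarity value exceeds the threshold."""
          parent = {obj: obj for obj in object_ids}

          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]   # path compression
                  x = parent[x]
              return x

          for (a, b), score in pair_similarities.items():
              if score >= threshold:
                  parent[find(a)] = find(b)

          clusters = {}
          for obj in object_ids:
              clusters.setdefault(find(obj), []).append(obj)
          return list(clusters.values())

      # Hypothetical scene: pedestrians A and B walking together, cyclist C separate.
      print(cluster_objects(["A", "B", "C"], {("A", "B"): 0.93, ("B", "C"): 0.31}))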
  • Patent number: 11392280
    Abstract: Image selection apparatus includes: information acquisition unit and server information acquisition unit configured to acquire position information of plurality of moving objects; display unit; display stop unit configured to input an image stop command; display control unit configured to control the display unit so that a plurality of icon images corresponding to the plurality of moving objects whose position information is acquired by the information acquisition unit and server information acquisition unit is displayed on the display unit and, when the image stop command is inputted by the display stop unit, motion of the plurality of icon images displayed on the display unit is stopped; and collection destination assignment unit configured to select an arbitrary icon image from among the plurality of icon images in response to a user operation while the motion of the plurality of icon images is stopped by the display control unit.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: July 19, 2022
    Assignee: Honda Motor Co., Ltd.
    Inventors: Takahiro Oyama, Yuki Hagiwara
  • Patent number: 11393190
    Abstract: An object identification method determines whether a first monitoring image and a second monitoring image captured by a monitoring camera apparatus have the same object. The object identification method includes acquiring the first monitoring image at a first point of time to analyze a first object inside a first angle of view of the first monitoring image, acquiring the second monitoring image at a second point of time different from the first point of time to analyze a second object inside the first angle of view of the second monitoring image, estimating a first similarity between the first object inside the first angle of view of the first monitoring image and the second object inside the first angle of view of the second monitoring image; and determining whether the first object and the second object are the same according to a comparison result of the first similarity with a threshold.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: July 19, 2022
    Assignee: VIVOTEK INC.
    Inventor: Cheng-Chieh Liu
  • Patent number: 11386306
    Abstract: As agents move about a materials handling facility, tracklets representative of the position of each agent are maintained along with a confidence score indicating a confidence that the position of the agent is known. If the confidence score falls below a threshold level, image data of the agent associated with the low confidence score is obtained and processed to generate one or more embedding vectors representative of the agent at a current position. Those embedding vectors are then compared with embedding vectors of other candidate agents to determine a set of embedding vectors having a highest similarity. The candidate agent represented by the set of embedding vectors having the highest similarity score is determined to be the agent and the position of that candidate agent is updated to the current position, thereby re-identifying the agent.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: July 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Tian Lan, Jayakrishnan Eledath, Hoi Cheung Pang
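    Illustrative sketch (editor's addition, not part of the patent record): a minimal NumPy sketch of the re-identification step in the abstract above, assuming hypothetical fixed-length embedding vectors; the candidate agent whose embedding has the highest cosine similarity to the low-confidence agent's current embedding is selected.

      import numpy as np

      def reidentify(query_embedding, candidate_embeddings):
          """Return the candidate agent id with the highest cosine similarity to
          the query embedding. Embeddings are hypothetical fixed-length vectors."""
          query = query_embedding / np.linalg.norm(query_embedding)
          best_id, best_score = None, -1.0
          for agent_id, emb in candidate_embeddings.items():
              score = float(np.dot(query, emb / np.linalg.norm(emb)))
              if score > best_score:
                  best_id, best_score = agent_id, score
          return best_id, best_score

      rng = np.random.default_rng(0)
      candidates = {"agent_7": rng.normal(size=128), "agent_9": rng.normal(size=128)}
      query = candidates["agent_9"] + 0.05 * rng.normal(size=128)   # noisy re-observation
      print(reidentify(query, candidates))    # expected: agent_9 with similarity near 1.0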
  • Patent number: 11381751
    Abstract: A handheld gimbal includes a handheld part and a gimbal. The handheld part is configured with a human-machine interface component. The gimbal is mounted at the handheld part and configured to mount a camera device to photograph a target object. The human-machine interface component includes a display screen and a processor. The display screen is configured to display a photographing image captured by the camera device. The photographing image includes an image of the target object. The processor is configured to automatically recognize the target object, obtain a motion instruction of controlling a motion of the gimbal according to a motion status of the image of the target object, and control the motion of the gimbal according to the motion instruction.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: July 5, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventor: Guanliang Su
  • Patent number: 11373277
    Abstract: A motion detection method using an image processing device may include: based on the noise level of a current frame, obtaining a weighted average of a sum of absolute differences (SAD) value and an absolute difference of sums (ADS) value; setting the weighted average as an initial motion detection value of each pixel; and selectively performing max filtering on motion detection values of the pixels to obtain a final motion detection value.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: June 28, 2022
    Assignee: HANWHA TECHWIN CO., LTD.
    Inventor: Eun Cheol Choi
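    Illustrative sketch (editor's addition, not part of the patent record): a minimal NumPy/SciPy sketch of the described combination, with a hypothetical noise-dependent weighting rule: blockwise SAD and ADS between consecutive frames are blended per pixel, and max filtering spreads strong responses to neighboring pixels.

      import numpy as np
      from scipy.ndimage import uniform_filter, maximum_filter

      def motion_detect(prev, cur, noise_level, window=5, noise_max=30.0):
          """Per-pixel motion value: a noise-dependent blend of local SAD and ADS,
          followed by max filtering. The weighting rule is a hypothetical choice:
          the noisier the frame, the more the (noise-robust) ADS term counts."""
          prev = prev.astype(np.float64)
          cur = cur.astype(np.float64)

          sad = uniform_filter(np.abs(cur - prev), size=window)        # mean |diff| per window
          ads = np.abs(uniform_filter(cur, size=window) -
                       uniform_filter(prev, size=window))              # |diff of window means|

          w = min(max(noise_level / noise_max, 0.0), 1.0)              # 0 = clean, 1 = very noisy
          initial = (1.0 - w) * sad + w * ads                          # initial motion detection value

          return maximum_filter(initial, size=3)                       # spread strong responses

      prev = np.zeros((60, 80), dtype=np.uint8)
      cur = prev.copy()
      cur[20:30, 30:45] = 200                                          # a moving object appears
      print(motion_detect(prev, cur, noise_level=8.0).max())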
  • Patent number: 11367124
    Abstract: An object tracking system that includes a sensor, a weight sensor, and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to detect a weight increase on the weight sensor and to determine a weight increase amount on the weight sensor. The tracking system is further configured to receive the frame, to determine a pixel location for the first person, and to determine a person is within the predefined zone associated with the rack. The tracking system is further configured to identify the plurality of items in a digital cart associated with the person, to identify an item from the digital cart with an item weight that is closest to the weight increase amount, and to remove the identified item from the digital cart associated with the person.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: June 21, 2022
    Assignee: 7-ELEVEN, INC.
    Inventors: Shahmeer Ali Mirza, Sarath Vakacharla, Sailesh Bharathwaaj Krishnamurthy, Deepanjan Paul
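    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the cart-reconciliation step in the abstract above, assuming a hypothetical digital cart that maps item names to unit weights; the item whose weight is closest to the detected weight increase is removed from the cart.

      def reconcile_put_back(digital_cart, weight_increase_g):
          """Remove from the digital cart the item whose weight is closest to the
          weight increase detected on the rack's weight sensor."""
          item = min(digital_cart, key=lambda name: abs(digital_cart[name] - weight_increase_g))
          removed_weight = digital_cart.pop(item)
          return item, removed_weight

      cart = {"soda_can": 390.0, "candy_bar": 52.0, "water_bottle": 510.0}
      print(reconcile_put_back(cart, weight_increase_g=49.0))   # expected: candy_bar
      print(cart)                                               # candy_bar removed from the cart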
  • Patent number: 11354683
    Abstract: A method and system for creating an anonymous shopper panel based on multi-modal sensor data fusion. The anonymous shopper panel can serve as the same traditional shopper panel who reports their household information, such as household size, income level, demographics, etc., and their purchase history, yet without any voluntary participation. A configuration of vision sensors and mobile access points can be used to detect and track shoppers as they travel a retail environment. Fusion of those modalities can be used to form a trajectory. The trajectory data can then be associated with Point of Sale data to form a full set of shopper behavior data. Shopper behavior data for a particular visit can then be compared to data from previous shoppers' visits to determine if the shopper is a revisiting shopper. The shopper's data can then be aggregated for multiple visits to the retail location. The aggregated shopper data can be filtered using application-specific criteria to create an anonymous shopper panel.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: June 7, 2022
    Assignee: VideoMining Corporation
    Inventors: Joonhwa Shin, Rajeev Sharma, Youngrock R Yoon, Donghun Kim
  • Patent number: 11351436
    Abstract: A golf launch monitor is configured to determine a flight characteristic of a golf ball. The golf launch monitor includes two low-speed cameras, a trigger device, and a processor. The trigger device is configured to detect a golf swing. The processor is configured to instruct, upon the trigger device detecting said golf swing, the first camera to capture the first ball image; instruct the second camera to capture the second ball image after a time interval, wherein the time interval is less than the first frame rate and the second frame rate; and determine, based at least in part on the first ball image and the second ball image, the flight characteristic of the golf ball.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: June 7, 2022
    Inventors: David M. Hendrix, Caleb A. Pinter, Elliot S. Wilder, Jeffrey B. Wigh, Maxwell C. Goldberg, William Perry Copus, Jr.
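    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of one flight characteristic derivable from the two captures described above, assuming hypothetical ball positions in meters extracted from the first and second ball images and a known time interval; speed and launch angle follow from the displacement.

      import math

      def launch_characteristics(pos1_m, pos2_m, interval_s):
          """Estimate ball speed and launch angle from two ball positions captured
          interval_s apart. Positions are hypothetical (horizontal, vertical)
          coordinates in meters extracted from the two ball images."""
          dx = pos2_m[0] - pos1_m[0]
          dy = pos2_m[1] - pos1_m[1]
          speed_mps = math.hypot(dx, dy) / interval_s
          launch_angle_deg = math.degrees(math.atan2(dy, dx))
          return speed_mps, launch_angle_deg

      # Ball moves 0.50 m forward and 0.12 m up between captures 8 ms apart.
      print(launch_characteristics((0.0, 0.0), (0.50, 0.12), interval_s=0.008))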
  • Patent number: 11336831
    Abstract: An image processing device adjusts an imaging direction and an imaging magnification by performing driving control of an imaging device for each image processing process and sets an imaging position of a measurement target. The image processing device calculates an evaluation value of set feature parts in a captured image input at the time of measurement on the basis of captured image data and compares the evaluation value with determination conditions. If the determination condition is satisfied, the image processing device performs predetermined image processing and outputs a result of the image processing to a control device.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: May 17, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Tomoya Asanuma, Genki Cho, Hiroto Oka
  • Patent number: 11333535
    Abstract: Provided is a method for more accurately correcting position coordinates of a point on an object to be imaged, the coordinates being identified based on values detected by linear scales. A visual field is moved to a measurement point defined on a recessed portion formed on a calibration plate, and an image is captured (step S13-1), edges are detected from an image of sides of the recessed portion (step S13-2), an intersection of the edges is calculated (step S13-3), values of the intersection as actually measured by the linear scales are saved (step S13-4), and position coordinates of the point on the object to be imaged as detected by the linear scales are corrected by using a true value and a difference.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: May 17, 2022
    Assignee: OMRON CORPORATION
    Inventors: Keita Ebisawa, Shingo Hayashi
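    Illustrative sketch (editor's addition, not part of the patent record): a minimal Python sketch of the calibration idea in the abstract above, assuming the two detected edges are parameterized as lines y = m*x + b (a hypothetical representation); their intersection is compared with the known true coordinates of the measurement point, and the resulting difference corrects later linear-scale readings.

      def line_intersection(m1, b1, m2, b2):
          """Intersection of two edge lines y = m*x + b (assumes m1 != m2)."""
          x = (b2 - b1) / (m1 - m2)
          return x, m1 * x + b1

      def correct_reading(measured_xy, offset_xy):
          """Apply the stored calibration offset to a linear-scale reading."""
          return measured_xy[0] - offset_xy[0], measured_xy[1] - offset_xy[1]

      # Hypothetical calibration: the edges of the recessed portion intersect at the
      # measured point; the calibration plate's true coordinates for it are known.
      measured = line_intersection(m1=0.02, b1=10.00, m2=-45.0, b2=460.0)
      true_xy = (9.98, 10.18)
      offset = (measured[0] - true_xy[0], measured[1] - true_xy[1])

      # Later scale readings are corrected by the stored offset.
      print(correct_reading((120.40, 75.22), offset))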