Patents Issued on February 14, 2023
  • Patent number: 11580705
    Abstract: A method for culling parts of a 3D reconstruction volume is provided. The method makes available to a wide variety of mobile XR applications fresh, accurate and comprehensive 3D reconstruction data with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from the field of view of an image sensor from which the image data used to create the 3D reconstruction is obtained.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: February 14, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Frank Thomas Steinbrücker, David Geoffrey Molyneaux, Zhongle Wu, Xiaolin Wei, Jianyuan Min, Yifu Zhang
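    A minimal Python sketch of the two culling tests described above, assuming simple pinhole intrinsics (fx, fy, cx, cy) and reconstruction blocks represented by their center points in the camera frame; this is an illustration, not the patented implementation:
    ```python
    import numpy as np

    def cull_blocks(block_centers, depth_image, fx, fy, cx, cy, margin=0.1):
        """Return a boolean mask of blocks to keep: inside the frustum and not far behind a surface."""
        h, w = depth_image.shape
        X, Y, Z = block_centers[:, 0], block_centers[:, 1], block_centers[:, 2]
        z_safe = np.where(Z > 0, Z, 1.0)              # avoid dividing by zero for points behind the camera
        u = fx * X / z_safe + cx                      # project block centers into the depth image
        v = fy * Y / z_safe + cy
        in_frustum = (Z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui = np.clip(u.astype(int), 0, w - 1)
        vi = np.clip(v.astype(int), 0, h - 1)
        surface_depth = depth_image[vi, ui]           # observed distance to the surface at that pixel
        not_occluded = Z <= surface_depth + margin    # cull blocks well behind the observed surface
        return in_frustum & not_occluded
    ```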
  • Patent number: 11580706
    Abstract: Dynamic virtual content(s) to be superimposed on a representation of a real 3D scene complies with a scenario defined before run-time and involving real-world constraints (23). Real-world information (22) is captured in the real 3D scene and the scenario is executed at runtime (14) in the presence of the real-world constraints. When the real-world constraints are not identified (12) from the real-world information, a transformation of the representation of the real 3D scene to a virtually adapted 3D scene is carried out (13) before executing the scenario, so that the virtually adapted 3D scene fulfills those constraints, and the scenario is executed in the virtually adapted 3D scene instead of the real 3D scene. Application to mixed reality.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: February 14, 2023
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Anthony Laurent, Matthieu Fradet, Caroline Baillard
  • Patent number: 11580707
    Abstract: An adjustable frame assembly for augmented reality eyewear. The frame assembly includes a face portion for supporting at least one waveguide that creates an eye box, a support rest for supporting the face portion on a user, and a coupling for adjusting the position of the face portion relative to the support rest. This enables movement of the waveguide eye box relative to the support rest to position the eye box in front of the wearer's eyes.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: February 14, 2023
    Assignee: Snap Inc.
    Inventors: Julio Cesar Castañeda, Samuel Bryson Thompson
  • Patent number: 11580708
    Abstract: Provided herein are methods, apparatuses, and computer program products for generating a first and a second three-dimensional interactive environment. The first three-dimensional interactive environment may contain one or more engageable virtual interfaces that correspond to one or more items. Upon engagement with a virtual interface, the second three-dimensional interactive environment is produced to provide a virtual simulation related to the one or more items.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: February 14, 2023
    Assignee: GROUPON, INC.
    Inventor: Scott Werner
  • Patent number: 11580709
    Abstract: An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: February 14, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Javier Antonio Busto, Jonathan Brodsky
  • Patent number: 11580710
    Abstract: A multiuser, collaborative augmented reality (AR) system employs individual AR devices for viewing real-world anchors, that is, physical models that are recognizable to the camera and image processing module of the AR device. To mitigate ambiguous configurations when used in the collaborative mode, each anchor is registered with a server to ensure that only uniquely recognizable anchors are simultaneously active at a particular location. The system permits collaborative AR to span multiple sites by associating a portal with an anchor at each site. Using the location of each participant's AR device as a proxy for that participant's position, the system provides AR renditions of the other participating users. This AR system is particularly well suited for games.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: February 14, 2023
    Inventors: Jordan Kent Weisman, William Gibbens Redmann
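    An illustrative sketch of the server-side uniqueness check described above (class and method names are hypothetical): a registry refuses to activate a second anchor with the same visual signature at the same site, keeping simultaneously active anchors uniquely recognizable.
    ```python
    class AnchorRegistry:
        """Keeps only uniquely recognizable anchors active at each site."""

        def __init__(self):
            self._active = {}                    # site_id -> set of active anchor signatures

        def register(self, site_id, signature):
            active = self._active.setdefault(site_id, set())
            if signature in active:
                return False                     # ambiguous: an identical anchor is already active here
            active.add(signature)
            return True

        def release(self, site_id, signature):
            self._active.get(site_id, set()).discard(signature)
    ```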
  • Patent number: 11580711
    Abstract: Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: February 14, 2023
    Assignee: Multinarity Ltd
    Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev, Doron Assayas Terre
  • Patent number: 11580712
    Abstract: An information processing apparatus creates a first virtual object expressing a physical object that is detected from physical object information obtained from a physical object information acquisition unit. The information processing apparatus determines a display state of the first virtual object in accordance with a result of detecting a collision between the first virtual object and a second virtual object. On the basis of a virtual space including the first virtual object and the second virtual object, a position and orientation of an HMD, the determined display state, and a physical space image obtained from the HMD, the information processing apparatus creates a mixed reality image that combines an image of the virtual space with the physical space image, and displays the created mixed reality image on the HMD.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: February 14, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Akinao Mihara, Yasumi Tanaka
  • Patent number: 11580713
    Abstract: Examples are disclosed that relate to motion compensation on a single photon avalanche detector (SPAD) array camera. One example provides a method enacted on an imaging device comprising a SPAD array camera and a motion sensor, the SPAD array camera comprising a plurality of pixels. The method comprises acquiring a plurality of subframes of image data. Each subframe of image data comprises a binary value for each pixel. Based upon motion data from the motion sensor, the method further comprises determining a change in pose of the imaging device between adjacent subframes, applying a positional offset to a current subframe based upon the motion data to align a location of a stationary imaged feature in the current subframe with a location of the stationary imaged feature in a prior subframe to create aligned subframes, summing the aligned subframes to form an image, and outputting the image.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: February 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
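    A rough sketch of the align-and-sum step, assuming the pose change between subframes has already been reduced to an integer pixel offset per subframe; deriving that offset from the motion-sensor data is not modeled here.
    ```python
    import numpy as np

    def accumulate_subframes(subframes, offsets):
        """Shift each binary subframe by its (dy, dx) offset and sum the aligned subframes into one image."""
        h, w = subframes[0].shape
        image = np.zeros((h, w), dtype=np.int32)
        for frame, (dy, dx) in zip(subframes, offsets):
            # np.roll wraps around the borders; a real implementation would pad or crop instead.
            image += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        return image
    ```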
  • Patent number: 11580714
    Abstract: Methods and systems are disclosed for displaying an augmented reality virtual object on a multimedia device. One method comprises detecting, in an augmented reality environment displayed using a first device, a virtual object; detecting, within the augmented reality environment, a second device, the second device comprising a physical multimedia device; and generating, at the second device, a display comprising a representation of the virtual object.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: February 14, 2023
    Assignee: Worldpay Limited
    Inventors: Kevin Gordon, Charlotte Spender
  • Patent number: 11580715
    Abstract: A notched 2D shape may encode information. For instance, a physical tag may display, form or include a polygon that is modified by notches and by one or more holes. This notched 2D shape may encode data that identifies, or provides information regarding, a physical product to which the tag is physically attached. Alternatively, this notched 2D shape may encode any other type of information, such as information about what we sometimes call a product shape or shape matrix. The notched shape may be an octagon that is modified by notches and by one or more holes.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: February 14, 2023
    Assignee: Shape Matrix Geometric Instruments, LLC
    Inventors: Jonathan Cramer, Mehdi Ben Slama, Nicholas Petraco
  • Patent number: 11580716
    Abstract: An apparatus for color-dependent detection of image contents includes a light input coupling apparatus, carrier medium, measuring region, output coupling region, and camera apparatus. The light input coupling apparatus includes a light source to emit light at a first wavelength. The carrier medium receives the light and transmits the light by internal reflection to the measuring region. The measuring region includes a first diffraction structure that outputs light at the first wavelength. The first diffraction structure is formed as a multiplex diffraction structure to input light in a second wavelength range. The output coupling region includes a second diffraction structure formed as a multiplex diffraction structure that outputs light at the first wavelength and the second wavelength range. The camera apparatus captures light output from the carrier medium to the camera apparatus, and provides the light in a form of image data which correlates with the light.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: February 14, 2023
    Assignee: AUDI AG
    Inventors: Markus Klug, Tobias Moll, Johannes Scheuchenpflug
  • Patent number: 11580717
    Abstract: A method and a device for determining a placement region of an item are disclosed. The method according to the present disclosure comprises: acquiring position information of an electronic identification at a bar display screen; and determining the placement region of the item according to the position information and a preset mapping relationship.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: February 14, 2023
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Shu Wang, Hui Rao, Zhiguo Zhang, Xin Li, Xiaohong Wang
  • Patent number: 11580718
    Abstract: A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: February 14, 2023
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: Christopher Grant Padwick, William Louis Patzoldt, Benjamin Kahn Cline, Olgert Denas, Sonali Subhash Tanna
  • Patent number: 11580719
    Abstract: A method for dynamically quantizing feature maps of a received image. The method includes convolving an image based on a predicted maximum value, a predicted minimum value, trained kernel weights and the image data. The input data is quantized based on the predicted minimum value and predicted maximum value. The output of the convolution is computed into an accumulator and re-quantized. The re-quantized value is output to an external memory. The predicted min value and the predicted max value are computed based on the previous max values and min values with a weighted average or a pre-determined formula. Initial min value and max value are computed based on known quantization methods and utilized for initializing the predicted min value and predicted max value in the quantization process.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: February 14, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Kumar Desappan, Manu Mathew, Pramod Kumar Swami, Praveen Eppa
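    A small sketch of the prediction-and-quantization idea: the predicted extrema are a weighted average of previously observed values (the smoothing factor alpha is an assumption, since the abstract only says "weighted average or a pre-determined formula"), and data is quantized with the predicted range.
    ```python
    import numpy as np

    def update_prediction(predicted, observed, alpha=0.9):
        """Weighted average of the previous prediction and the newly observed extremum."""
        return alpha * predicted + (1 - alpha) * observed

    def quantize(x, min_pred, max_pred, n_bits=8):
        """Affine quantization of x using the predicted minimum and maximum values."""
        scale = max((max_pred - min_pred) / (2 ** n_bits - 1), 1e-12)
        zero_point = round(-min_pred / scale)
        q = np.clip(np.round(x / scale) + zero_point, 0, 2 ** n_bits - 1)
        return q.astype(np.uint8), scale, zero_point
    ```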
  • Patent number: 11580720
    Abstract: In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device 100 comprises a detection unit 101 and an environment acquisition unit 102. The detection unit 101 detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit 102 acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: February 14, 2023
    Assignee: NEC CORPORATION
    Inventor: Hiroo Ikeda
  • Patent number: 11580721
    Abstract: The information processing apparatus (2000) includes a feature point detection unit (2020), a determination unit (2040), an extraction unit (2060), and a comparison unit (2080). A feature point detection unit (2020) detects a plurality of feature points from the query image. The determination unit (2040) determines, for each feature point, one or more object images estimated to include the feature point. The extraction unit (2060) extracts an object region estimated to include the object in the query image in association with the object image of the object estimated to be included in the object region, on the basis of the result of the determination. The comparison unit (2080) cross-checks the object region with the object image associated with the object region and determines an object included in the object region.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: February 14, 2023
    Assignee: NEC CORPORATION
    Inventor: Kota Iwamoto
  • Patent number: 11580722
    Abstract: Provided herein are a calibration method for a fingerprint sensor and a display device using the calibration method, where, in the calibration method for a fingerprint sensor, the fingerprint sensor includes a substrate, a light-blocking layer located on a first surface of the substrate and having openings formed in a light-blocking mask, a light-emitting element layer located on the light-blocking layer and having a plurality of light-emitting elements, and a sensor layer located on a second surface of the substrate and having a plurality of photosensors; and the calibration method includes generating calibration data through white calibration and dark calibration, and applying offsets to the plurality of photosensors using the calibration data.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: February 14, 2023
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Mun Su Kim, Kee Yong Kim, Jung Hun Sin, Han Su Cho
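    A sketch of how white and dark calibration frames can yield per-photosensor offsets and gains; the flat-field formula below is a standard assumption, not taken from the patent.
    ```python
    import numpy as np

    def calibration(dark_frame, white_frame):
        """Derive a per-sensor offset (dark level) and gain (response equalization)."""
        span = np.maximum(white_frame - dark_frame, 1e-6)
        gain = span.mean() / span
        offset = -dark_frame
        return gain, offset

    def apply_calibration(raw, gain, offset):
        return (raw + offset) * gain
    ```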
  • Patent number: 11580723
    Abstract: Embodiments described herein provide systems and processes for scene-aware object detection. This can involve an object detector that modulates its operations based on image location. The object detector can be a neural network detector or a scanning window detector, for example.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: February 14, 2023
    Assignee: INVISION AI INC.
    Inventors: Karim Ali, Carlos Joaquin Becker
  • Patent number: 11580724
    Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: February 14, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
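    A sketch of estimating the relative transform between matched descriptor locations in the task image and the teaching image, using the standard Procrustes/Kabsch least-squares solution (the abstract does not specify the estimator).
    ```python
    import numpy as np

    def relative_transform(task_pts, teach_pts):
        """task_pts, teach_pts: (N, 2) arrays of matched keypoint coordinates."""
        mu_a, mu_b = task_pts.mean(0), teach_pts.mean(0)
        A, B = task_pts - mu_a, teach_pts - mu_b
        U, _, Vt = np.linalg.svd(A.T @ B)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # enforce a proper rotation (no reflection)
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_b - R @ mu_a
        return R, t                        # maps task-image points onto the teaching image
    ```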
  • Patent number: 11580725
    Abstract: Systems, devices, and methods are provided for imaging at least one target object within a medium, including acquiring multiple sets of RF signals, generating a plurality of DAS images, analyzing the plurality of DAS images to detect one or more target objects in the plurality of DAS images, and further visualizing the at least one target object.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: February 14, 2023
    Assignee: VAYYAR IMAGING LTD.
    Inventors: Matan Birger, Yuval Shamuel Lomnitz, Tanya Chernyakova
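    A brute-force sketch of delay-and-sum (DAS) image formation from multiple sets of RF signals: every voxel accumulates the RF samples delayed by the round-trip time for each transmit/receive antenna pair (geometry, sampling rate fs and propagation speed c are illustrative inputs).
    ```python
    import numpy as np

    def das_image(rf, tx_pos, rx_pos, voxels, fs, c):
        """rf[i, j, t]: RF signal for transmitter i and receiver j; returns one DAS value per voxel."""
        image = np.zeros(len(voxels))
        for v, p in enumerate(voxels):
            for i, tp in enumerate(tx_pos):
                for j, rp in enumerate(rx_pos):
                    delay = (np.linalg.norm(p - tp) + np.linalg.norm(p - rp)) / c
                    s = int(round(delay * fs))            # sample index of the round-trip delay
                    if s < rf.shape[2]:
                        image[v] += rf[i, j, s]
        return image
    ```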
  • Patent number: 11580726
    Abstract: An image acquisition and interpretation method used for articles of commerce, applied to one or more gondolas or shelving units (A) in which shelves (X) are installed, to which four repair marks (B, E) are added. In combination with a mobile device with a photographic camera (D), the marks allow images to be taken and processed in an automated way using areas designed for this purpose (C) where reference marks (F, G) are displayed, correcting the perspective deformation and cropping the image to the area delimited by the repair marks, and then sending the processed images to a computer server that can index data such as the quantity and details of articles, the number of units of each, physical characteristics, prices, etc.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: February 14, 2023
    Inventor: Eduardo Gersberg
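    A sketch of the perspective-correction and cropping step using standard OpenCV calls, assuming the four shelf marks have already been detected and ordered; mark detection and the server upload are omitted.
    ```python
    import cv2
    import numpy as np

    def rectify_shelf(image, marks, out_w=1200, out_h=800):
        """marks: four (x, y) mark positions ordered top-left, top-right, bottom-right, bottom-left."""
        src = np.float32(marks)
        dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
        H = cv2.getPerspectiveTransform(src, dst)              # homography from the detected marks
        return cv2.warpPerspective(image, H, (out_w, out_h))   # deskewed, cropped shelf image
    ```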
  • Patent number: 11580727
    Abstract: Systems and methods for processing audio signals are disclosed. In one implementation, a system may comprise a wearable camera configured to capture a plurality of images from an environment of a user; a microphone configured to capture sounds from the environment of the user; and a processor. The processor may be configured to receive at least one image of the plurality of images, the at least one image comprising a plurality of image portions associated with corresponding image portion timestamps; receive at least one audio signal representative of the sounds captured by the microphone; identify an audio timestamp associated with a portion of the audio signal; identify an image portion from among the plurality of image portions, the image portion having an image portion timestamp associated with the audio timestamp; and analyze the image portion to identify a voice originating from an object represented in the image.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: February 14, 2023
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 11580728
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a storage device, for electric grid asset detection are disclosed. An electric grid asset detection method includes: obtaining overhead imagery of a geographic region that includes electric grid wires; identifying the electric grid wires within the overhead imagery; and generating a polyline graph of the identified electric grid wires. The method includes replacing curves in polylines within the polyline graph with a series of fixed lines and endpoints; identifying, based on characteristics of the fixed lines and endpoints, a location of a utility pole that supports the electric grid wires; detecting an electric grid asset from street level imagery at the location of the utility pole; and generating a representation of the electric grid asset for use in a model of the electric grid.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: February 14, 2023
    Assignee: X Development LLC
    Inventors: Ananya Gupta, Phillip Ellsworth Stahlfeld, Bangyan Chu
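    One standard way to realize the "replace curves in polylines with a series of fixed lines and endpoints" step is Douglas-Peucker simplification; the abstract does not name an algorithm, so the sketch below is an assumption.
    ```python
    import numpy as np

    def simplify(points, tol):
        """points: (N, 2) polyline; returns the endpoints of an approximating series of straight lines."""
        if len(points) < 3:
            return [points[0], points[-1]]
        start, end = points[0], points[-1]
        seg = end - start
        seg_len = np.hypot(seg[0], seg[1]) or 1.0
        mid = points[1:-1]
        # Perpendicular distance of every interior point to the chord start-end.
        d = np.abs(seg[0] * (mid[:, 1] - start[1]) - seg[1] * (mid[:, 0] - start[0])) / seg_len
        i = int(np.argmax(d)) + 1
        if d[i - 1] > tol:
            return simplify(points[:i + 1], tol)[:-1] + simplify(points[i:], tol)
        return [start, end]
    ```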
  • Patent number: 11580729
    Abstract: A pattern recognition system including an image gathering unit that gathers at least one digital representation of a field, an image analysis unit that pre-processes the at least one digital representation of a field, an annotation unit that provides a visualization of at least one channel for each of the at least one digital representation of the field, where the image analysis unit generates a plurality of image samples from each of the at least one digital representation of the field, and the image analysis unit splits each of the image samples into a plurality of categories.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: February 14, 2023
    Inventors: Naira Hovakymian, Hrant Khachatrian, Karen Ghandilyan
  • Patent number: 11580730
    Abstract: A method for image-guided agriculture includes receiving images; processing the images to generate reflectance maps respectively corresponding to spectral bands; synthesizing the reflectance maps to generate a multispectral image including vegetation index information of a target area; receiving crop information in regions of the target area; and assessing crop conditions for the regions based on the identified crop information and the vegetation index information.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: February 14, 2023
    Assignee: GEOSAT Aerospace & Technology
    Inventors: Cheng-Fang Lo, Kuang-Yu Chen, Te-Che Lin, Hsiu-Hsien Wen, Ting-Jung Chang
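    The abstract describes synthesizing reflectance maps into vegetation index information; as a worked example, the widely used NDVI combines the near-infrared and red reflectance maps per pixel (NDVI is an assumed choice, the patent does not name a specific index).
    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from per-pixel NIR and red reflectance maps."""
        return (nir - red) / np.clip(nir + red, 1e-6, None)
    ```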
  • Patent number: 11580731
    Abstract: Methods, devices, and systems may be utilized for detecting one or more properties of a plant area and generating a map of the plant area indicating at least one property of the plant area. The system comprises an inspection system associated with a transport device, the inspection system including one or more sensors configured to generate data for a plant area, including capturing at least 3D image data and 2D image data and generating geolocational data. A datacenter is configured to: receive the 3D image data, 2D image data, and geolocational data from the inspection system; correlate the 3D image data, 2D image data, and geolocational data; and analyze the data for the plant area. A dashboard is configured to display a map with icons corresponding to the proper geolocation, together with the image data and the analysis.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: February 14, 2023
    Assignee: Adroit Robotics
    Inventors: Jose Angelo Gurzoni, Jr., Plinio Thomaz Aquino, Jr., Milton Perez Cortez, Jr.
  • Patent number: 11580732
    Abstract: An electronic device according to various embodiments includes a communication circuit, a memory, and a processor, and the processor is configured to: receive a first image from a first external electronic device by using the communication circuit; perform image recognition with respect to the first image by using the first image; generate information regarding an external object included in the first image, based on a result of the recognition; based on the information regarding the external object satisfying a first designated condition, transmit at least a portion of the first image to a second external electronic device corresponding to the first designated condition; and, based on the information regarding the external object satisfying a second designated condition, transmit the at least portion of the first image to a third external electronic device corresponding to the second designated condition.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: February 14, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dasom Lee, Jungeun Lee, Sungoh Kim, Hyunhee Park
  • Patent number: 11580733
    Abstract: Systems, methods and techniques for automatically recognizing two-dimensional real world objects with an augmented reality display device, and augmenting or enhancing the display of such real world objects by superimposing virtual images such as a still or video advertisement, a story or other virtual image presentation. In non-limiting embodiments, the real world object includes visible features including visible security features and a recognition process takes the visible security features into account when recognizing the object and/or displaying superimposed virtual images.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: February 14, 2023
    Assignee: AR, LLC
    Inventors: Stefan W. Herzberg, Megan Herzberg
  • Patent number: 11580734
    Abstract: Aspects of the subject disclosure may include, for example, a camera positioned to capture image information of an immersive experience presented to one or more users engaged in the immersive experience and located in an immersive experience space, a processing system and a memory that stores executable instructions to facilitate performance of operations including receiving the image information from the camera, detecting objects located in the immersive experience space with the one or more users, the objects including at least one virtual object created by the immersive experience, determining the at least one virtual object is a projected virtual object of the immersive experience, generating a signal indicating the at least one virtual object is a projected virtual object, and a projector, responsive to the signal, to provide a visual indication in the immersive experience space to identify the projected virtual object as a virtual object to the one or more users engaged in the immersive experience.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: February 14, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Joseph Soryal
  • Patent number: 11580735
    Abstract: Aspects of the subject disclosure may include, for example, observing a plurality of objects viewed through a smart lens, wherein the plurality of objects are in a frame of an image viewed by the smart lens, determining an identification for an object of the plurality of objects, assigning tag information for the object based on the identification, storing the tag information for the object and the frame in which the object was observed, receiving a recall request for the object, retrieving the tag information for the object and the frame responsive to the receiving the recall request, and displaying the tag information and the frame. Other embodiments are disclosed.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: February 14, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Roque Rios, III
  • Patent number: 11580736
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for parallel processing of video frames using neural networks. One of the methods includes receiving a video sequence comprising a respective video frame at each of a plurality of time steps; and processing the video sequence using a video processing neural network to generate a video processing output for the video sequence, wherein the video processing neural network includes a sequence of network components, wherein the network components comprise a plurality of layer blocks each comprising one or more neural network layers, wherein each component is active for a respective subset of the plurality of time steps, and wherein each layer block is configured to, at each time step at which the layer block is active, receive an input generated at a previous time step and to process the input to generate a block output.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: February 14, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Simon Osindero, Joao Carreira, Viorica Patraucean, Andrew Zisserman
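    A toy sketch of the scheduling idea in the abstract: each layer block is active only on its own subset of time steps and, when active, processes an input generated at a previous time step; the blocks below are placeholder callables, not neural network layers.
    ```python
    def run_pipeline(frames, blocks, periods):
        """blocks[i] is a callable active at steps t where t % periods[i] == 0."""
        latest = [None] * (len(blocks) + 1)        # latest[i]: newest input available to block i
        outputs = []
        for t, frame in enumerate(frames):
            # Deeper blocks run first, so each consumes an input produced at an earlier time step.
            for i in reversed(range(len(blocks))):
                if t % periods[i] == 0 and latest[i] is not None:
                    latest[i + 1] = blocks[i](latest[i])
            latest[0] = frame                      # this frame becomes the first block's input at a later step
            outputs.append(latest[-1])
        return outputs
    ```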
  • Patent number: 11580737
    Abstract: Methods and systems provide for search results within segmented communication session content. In one embodiment, the system receives a transcript and video content of a communication session between participants, the transcript including timestamps for a number of utterances associated with speaking participants; processes the video content to extract textual content visible within the frames of the video content; segments frames of the video content into a number of contiguous topic segments; determines a title for each topic segment; assigns a category label for each topic segment; receives a request from a user to search for specified text within the video content; determines one or more titles or category labels for which a prediction of relatedness with the specified text is present; and presents content from at least one topic segment associated with the one or more titles or category labels for which a prediction of relatedness is present.
    Type: Grant
    Filed: July 31, 2022
    Date of Patent: February 14, 2023
    Assignee: Zoom Video Communications, Inc.
    Inventors: Andrew Miller-Smith, Renjie Tao, Ling Tsou
  • Patent number: 11580738
    Abstract: Systems and methods for improved operations of ski lifts increase skier safety at on-boarding and off-boarding locations by providing an always-on, always-alert system that “watches” these locations, identifies developing problem situations, and initiates mitigation actions. One or more video cameras feed live video to a video processing module. The video processing module feeds resulting sequences of images to an artificial intelligence (AI) engine. The AI engine makes an inference regarding existence of a potential problem situation based on the sequence of images. This inference is fed to an inference processing module, which determines if the inference processing module should send an alert or interact with the lift motor controller to slow or stop the lift.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: February 14, 2023
    Assignee: Clone667, LLC
    Inventor: Bryan Scott Queen
  • Patent number: 11580739
    Abstract: A detection apparatus includes one or more processors. The processors set at least one time-period candidate. The processors input, to a first model that inputs a feature acquired from a plurality of time-series images and the time-period candidate and outputs at least one first likelihood indicating a likelihood of occurrence of at least one action previously determined as a detection target and correction information for acquisition of at least one correction time period resulting from correction of the at least one time-period candidate, the feature and the time-period candidate, and acquire the first likelihood and the correction information output from the first model. The processors detect, based on the at least one correction time period acquired based on the correction information and the first likelihood, the action included in the time-series images and a start time and a finish time of a time period of occurrence of the action.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: February 14, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Yuta Shirakawa
  • Patent number: 11580740
    Abstract: Methods, systems, and media for adaptive presentation of a video content item based on an area of interest are provided.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: February 14, 2023
    Assignee: Google LLC
    Inventors: Scott Davies, Justin Lewis
  • Patent number: 11580741
    Abstract: Disclosed are a method and an apparatus for detecting abnormal objects in a video. The method reconstructs a restored batch by applying each input batch, to which an inpainting pattern has been applied, to a trained auto-encoder model, and fuses a spatial domain reconstruction error, obtained using a restored frame produced by combining the reconstructed batches, with a time domain reconstruction error, obtained using time domain restored frames produced by extracting and restoring a time domain feature point from a plurality of successive frames applied to a trained LSTM auto-encoder model, to estimate the area where an abnormal object is positioned.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: February 14, 2023
    Assignee: INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
    Inventors: Yong Guk Kim, Long Thinh Nguyen
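    A sketch of the final fusion step only, assuming the spatial and temporal reconstructions are already available: the two per-pixel reconstruction errors are combined with an assumed weighted sum and thresholded to localize the abnormal area.
    ```python
    import numpy as np

    def abnormal_region(frame, spatial_recon, temporal_recon, w=0.5, thresh=0.2):
        """Return a boolean mask estimating where an abnormal object is positioned."""
        e_spatial = np.abs(frame - spatial_recon)      # spatial domain reconstruction error
        e_temporal = np.abs(frame - temporal_recon)    # time domain reconstruction error
        fused = w * e_spatial + (1 - w) * e_temporal
        return fused > thresh
    ```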
  • Patent number: 11580742
    Abstract: Provided are a target character video clip playing method, system and apparatus, and a storage medium. The method comprises: using image recognition technology to perform target character recognition on an entire video, locating a plurality of video clips containing target characters, and obtaining a first playing time period set corresponding to the video clips; according to audio clips corresponding to each character marked within the entire video, obtaining a second playing time period set corresponding to the audio clips of the various characters; merging the time periods included in the playing time period sets to obtain a sum playing time period set of the target characters; and, according to an ordering of the playing timelines within the sum playing time period set, playing the video of the target characters.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: February 14, 2023
    Assignee: Shenzhen TCL New Technology Co., Ltd.
    Inventor: Jian Bao
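    The merging of the two playing time period sets amounts to a union of intervals; a minimal sketch:
    ```python
    def merge_periods(periods):
        """periods: (start, end) tuples from both the video and audio sets; returns the merged, sorted set."""
        merged = []
        for start, end in sorted(periods):
            if merged and start <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))   # overlaps: extend the last interval
            else:
                merged.append((start, end))
        return merged
    ```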
  • Patent number: 11580743
    Abstract: A system and method for providing unsupervised domain adaptation for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaptation model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: February 14, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
  • Patent number: 11580744
    Abstract: An information processing apparatus includes a frame image acquisition section adapted to acquire a plurality of consecutive frame images included in a moving image displayed on a screen, and a matching process section adapted to perform, for each of the plurality of acquired frame images, a matching process of detecting an area that matches a template image representing appearance of a display element to be detected. An area in which the display element is being displayed on the screen is identified on a basis of a result of performing the matching process on the plurality of frame images.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: February 14, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Yuta Murata, Takanori Shimizu, Ryuichi Hayashida
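    A sketch of the per-frame matching step with OpenCV template matching; the display-element area is reported where the match score stays high across the consecutive frames (the score threshold is an assumption).
    ```python
    import cv2
    import numpy as np

    def locate_element(frames, template, thresh=0.9):
        """Return (x, y, w, h) of the detected display element, or None if it never matches."""
        hits = []
        for frame in frames:
            result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, top_left = cv2.minMaxLoc(result)
            if score >= thresh:
                hits.append(top_left)
        if not hits:
            return None
        xs, ys = zip(*hits)
        h, w = template.shape[:2]
        return int(np.median(xs)), int(np.median(ys)), w, h
    ```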
  • Patent number: 11580745
    Abstract: Methods and systems for detecting visual relations in a video are disclosed.
    Type: Grant
    Filed: August 17, 2018
    Date of Patent: February 14, 2023
    Assignee: NATIONAL UNIVERSITY OF SINGAPORE
    Inventors: Tongwei Ren, Jingfan Guo, Tat-Seng Chua, Xindi Shang
  • Patent number: 11580746
    Abstract: Some embodiments relate to a system for automated gaming recognition, the system comprising: at least one image sensor configured to capture image frames of a field of view including a table game; at least one depth sensor configured to capture depth of field images of the field of view; and a computing device configured to receive the image frames and the depth of field images, and configured to process the received image frames and depth of field images in order to produce an automated recognition of at least one gaming state appearing in the field of view. Embodiments also relate to methods and computer-readable media for automated gaming recognition. Further embodiments relate to methods and systems for monitoring game play and/or gaming events on a gaming table.
    Type: Grant
    Filed: February 15, 2021
    Date of Patent: February 14, 2023
    Assignee: SenSen Networks Group Pty Ltd
    Inventors: Nhat Dinh Minh Vo, Subhash Challa, Zhi Li
  • Patent number: 11580747
    Abstract: Systems, methods, and computer-readable media for multi-spatial scale object detection include generating one or more object trackers for tracking at least one object detected from on one or more images. One or more blobs are generated for the at least one object based on tracking motion associated with the at least one object. One or more tracklets are generated for the at least one object based on associating the one or more object trackers and the one or more blobs, the one or more tracklets including one or more scales of object tracking data for the at least one object. One or more uncertainty metrics are generated using the one or more object trackers and an embedding of the one or more tracklets. A training module for detecting and tracking the at least one object using the embedding and the one or more uncertainty metrics is generated using deep learning techniques.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: February 14, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Hugo Mike Latapie, Franck Bachet, Enzo Fenoglio, Sawsen Rezig, Carlos M. Pignataro, Guillaume Sauvage De Saint Marc
  • Patent number: 11580748
    Abstract: A scalable tracking system processes video of a space to track the positions of objects within a space. The tracking system determines local coordinates for the objects within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for an object during that time window.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: February 14, 2023
    Assignee: 7-ELEVEN, INC.
    Inventors: Sailesh Bharathwaaj Krishnamurthy, Sarath Vakacharla, Trong Nghia Nguyen, Shahmeer Ali Mirza, Madan Mohan Chinnam, Caleb Austin Boulio
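    A simplified sketch of the window-and-cluster idea: local coordinates are bucketed by the time window of their frame, and nearby coordinates within the same window are averaged into a combined coordinate (the distance-based clustering rule here is an assumption).
    ```python
    from collections import defaultdict
    import numpy as np

    def combine_coordinates(detections, window_s=1.0, radius=0.75):
        """detections: (timestamp, x, y) tuples; returns {window_index: [combined (x, y), ...]}."""
        buckets = defaultdict(list)
        for ts, x, y in detections:
            buckets[int(ts // window_s)].append(np.array([x, y]))
        combined = {}
        for window, points in buckets.items():
            clusters = []
            for p in points:
                for cluster in clusters:
                    if np.linalg.norm(p - np.mean(cluster, axis=0)) < radius:
                        cluster.append(p)
                        break
                else:
                    clusters.append([p])
            combined[window] = [tuple(np.mean(c, axis=0)) for c in clusters]
        return combined
    ```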
  • Patent number: 11580749
    Abstract: A scalable tracking system processes video of a space to track the positions of people within a space. The tracking system determines local coordinates for the people within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for a person during that time window.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: February 14, 2023
    Assignee: 7-ELEVEN, INC.
    Inventors: Sailesh Bharathwaaj Krishnamurthy, Sarath Vakacharla, Trong Nghia Nguyen, Shahmeer Ali Mirza, Madan Mohan Chinnam, Caleb Austin Boulio
  • Patent number: 11580750
    Abstract: A recording apparatus includes: a captured data acquisition unit configured to acquire captured data captured by a camera that captures an image of an outside of a vehicle; an event detection unit configured to detect an event with respect to the vehicle; an attachment/detachment detection unit configured to detect an attachment/detachment state of the recording apparatus with respect to the vehicle; and a recording controller configured to store, when the event detection unit has detected the event, captured data for a predetermined period of time due to the detected event as first event recording data, invalidate, when it is detected by the attachment/detachment detection unit that the recording apparatus has been detached from the vehicle, the detection of the event by the event detection unit after the detection of the detachment, and store captured data after the detection of the detachment as second event recording data.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: February 14, 2023
    Assignee: JVCKENWOOD CORPORATION
    Inventors: Keita Hayashi, Toshitaka Murata, Yasutoshi Sakai, Hirofumi Taniyama
  • Patent number: 11580751
    Abstract: A drive recorder according to an embodiment of the present disclosure includes: an imaging unit that is mounted on a vehicle and captures a video of the surroundings of the vehicle; a video recording unit that has, recorded therein, video data captured; a network connecting unit that receives accident information including a time and date when an accident occurred and a place where the accident occurred; and a video retrieving unit that determines whether any video data captured in a predetermined time period and in a predetermined region are available in the video data recorded in the video recording unit, the predetermined time period including the time and date when the accident occurred, the predetermined region including the place where the accident occurred.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: February 14, 2023
    Assignee: JVCKENWOOD Corporation
    Inventors: Manamu Takahashi, Hideki Takehara, Akinori Suyama, Tatsumi Naganuma, Satoru Hirose, Takeshi Aoki
  • Patent number: 11580752
    Abstract: A method and an apparatus for supporting a camera-based environment recognition by a means of transport using road wetness information from a first ultrasonic sensor. The method includes: recording a first signal representing an environment of the means of transport by the first ultrasonic sensor of the means of transport; recording a second signal representing the environment of the means of transport by a camera of the means of transport; obtaining road wetness information on the basis of the first signal; selecting a predefined set of parameters from a plurality of predefined sets of parameters as a function of the road wetness information; and performing an environment recognition on the basis of the second signal in conjunction with the predefined set of parameters.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: February 14, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Paul Ruhnau, Simon Weissenmayer
  • Patent number: 11580753
    Abstract: A license plate detection and recognition (LPDR) system receives training data comprising images of license plates. The system prepares ground truth data from the training data based on predefined parameters. The system trains a first machine learning algorithm based on the ground truth data to generate a license plate detection model. The license plate detection model is configured to detect one or more regions in the images. The one or more regions contains a candidate for a license plate. The LPDR system generates a bounding box for each region. The LPDR system trains a second machine learning algorithm based on the ground truth data and the license plate detection model to generate a license plate recognition model. The license plate recognition model generates a sequence of alphanumeric characters with a level of recognition confidence for the sequence.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: February 14, 2023
    Assignee: Nortek Security & Control LLC
    Inventors: Ilya Popov, Dmitry Yashunin, Semen Budenkov, Krishna Khadloya
  • Patent number: 11580754
    Abstract: A system and method for large-scale lane marking detection using multimodal sensor data are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on a vehicle; receiving point cloud data from a distance and intensity measuring device mounted on the vehicle; fusing the image data and the point cloud data to produce a set of lane marking points in three-dimensional (3D) space that correlate to the image data and the point cloud data; and generating a lane marking map from the set of lane marking points.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: February 14, 2023
    Assignee: TUSIMPLE, INC.
    Inventors: Xue Mei, Xiaodi Hou, Dazhou Guo, Yujie Wei
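    A sketch of the image/point-cloud fusion step, assuming the LiDAR points are already in the camera frame and a lane-marking mask has been produced from the image: points projecting onto lane pixels keep their 3D coordinates (the camera matrix K and the mask are hypothetical inputs).
    ```python
    import numpy as np

    def lane_points_3d(points_xyz, lane_mask, K):
        """points_xyz: (N, 3) LiDAR points in the camera frame; lane_mask: HxW boolean lane-marking image."""
        h, w = lane_mask.shape
        pts = points_xyz[points_xyz[:, 2] > 0]            # keep points in front of the camera
        uvw = (K @ pts.T).T                               # pinhole projection
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        on_lane = np.zeros(len(pts), dtype=bool)
        on_lane[valid] = lane_mask[v[valid], u[valid]]
        return pts[on_lane]                               # 3D lane marking points for map generation
    ```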