Sporting Event Patents (Class 348/157)
  • Patent number: 11553126
    Abstract: Systems and methods to control operations of a camera based on one or more sensors attached to one or more actors. Sensor data collected from the sensors is analyzed to identify a state of an actor. The state of the actor is used to determine an operation parameter of the camera, such as the zoom level of the camera and/or the direction of the camera, and control the operation of the camera. For example, an actor who is in a state about to perform an interesting action can be selected from a plurality of actors; and the direction and the zoom level of the camera can be adjusted to focus on the selected actor to capture one or more subsequent images.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: January 10, 2023
    Assignee: AlpineReplay, Inc.
    Inventors: David J. Lokshin, Anatole M. Lokshin
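The selection-and-aiming flow described in the entry above (sensor state, actor selection, then pan/zoom) can be illustrated with a minimal sketch. This is not AlpineReplay's implementation; the Actor fields, the acceleration-based scoring heuristic, and the aim_camera geometry are all illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    x: float      # field position (m)
    y: float
    accel: float  # magnitude from a body-worn accelerometer (m/s^2)

def state_score(actor: Actor) -> float:
    # Toy heuristic: a sharp acceleration suggests the actor is about to act.
    return actor.accel

def aim_camera(actors, cam_x=0.0, cam_y=-30.0):
    """Pick the most 'interesting' actor and return (actor, pan_deg, zoom)."""
    target = max(actors, key=state_score)
    dx, dy = target.x - cam_x, target.y - cam_y
    pan_deg = math.degrees(math.atan2(dx, dy))   # pan relative to the camera's forward (+y) axis
    distance = math.hypot(dx, dy)
    zoom = max(1.0, distance / 10.0)             # farther target -> tighter zoom
    return target, pan_deg, zoom

if __name__ == "__main__":
    actors = [Actor("A", 5, 20, 2.1), Actor("B", -12, 35, 9.8), Actor("C", 0, 10, 0.4)]
    target, pan, zoom = aim_camera(actors)
    print(f"aim at {target.name}: pan={pan:.1f} deg, zoom={zoom:.1f}x")
```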
  • Patent number: 11528308
    Abstract: Technologies for end of frame marking and detection in streaming digital media content include a source computing device communicatively coupled to a destination computing device. The source computing device is configured to encode a frame of digital media content and insert an end of frame marker into a transport stream header of a network packet that includes an encoded payload corresponding to a chunk of data of the frame of digital media content. The destination computing device is configured to de-packetize received network packets and parse the transport stream headers of the received network packets to determine whether the network packet corresponds to an end of frame of the frame of digital media content. The destination computing device is further configured to transmit the encoded payloads of the received network packets to a decoder in response to a determination that the end of frame network packet has been received. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: December 13, 2022
    Assignee: Intel Corporation
    Inventors: Brian E. Rogers, Karthik Veeramani
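A minimal sketch of the end-of-frame marking scheme described in the entry above: a one-bit flag in a small per-packet header marks the final chunk of a frame, and the receiver forwards the reassembled payload to a decoder only after that flag is seen. The 5-byte header layout and the chunk size are assumptions for illustration, not Intel's transport stream format.

```python
import struct

HEADER = struct.Struct("!IB")   # (frame_id, flags); bit 0 of flags marks end-of-frame

def packetize(frame_id: int, frame: bytes, chunk_size: int = 1316):
    """Split one encoded frame into packets, flagging the last packet as end-of-frame."""
    chunks = [frame[i:i + chunk_size] for i in range(0, len(frame), chunk_size)] or [b""]
    for i, chunk in enumerate(chunks):
        eof = 1 if i == len(chunks) - 1 else 0
        yield HEADER.pack(frame_id, eof) + chunk

def depacketize(packets):
    """Reassemble payloads; hand a frame to the decoder once its EOF packet arrives."""
    buffer = []
    for pkt in packets:
        frame_id, flags = HEADER.unpack(pkt[:HEADER.size])
        buffer.append(pkt[HEADER.size:])
        if flags & 1:                       # end-of-frame marker seen
            yield frame_id, b"".join(buffer)
            buffer = []

if __name__ == "__main__":
    frame = bytes(5000)
    for fid, payload in depacketize(packetize(7, frame)):
        print(fid, len(payload))            # 7 5000
```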
  • Patent number: 11527021
    Abstract: The present invention is directed to solving an issue arising when a background image is generated. An image processing system generates a foreground image containing a foreground object based on an image captured by an imaging apparatus included in a first imaging apparatus group. The image processing system generates a background image not containing the foreground object based on an image of the imaging region captured by an imaging apparatus included in a second imaging apparatus group different from the first imaging apparatus group. The image processing system generates a virtual viewpoint image based on the generated foreground image and background image.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: December 13, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hiroyasu Ito
  • Patent number: 11527055
    Abstract: A system capable of determining which recognition algorithms should be applied to regions of interest within digital representations is presented. A preprocessing module utilizes one or more feature identification algorithms to determine regions of interest based on feature density. The preprocessing module leverages the feature density signature for each region to determine which of a plurality of diverse recognition modules should operate on the region of interest. A specific embodiment that focuses on structured documents is also presented. Further, the disclosed approach can be enhanced by the addition of an object classifier that classifies types of objects found in the regions of interest.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: December 13, 2022
    Assignee: NANT HOLDINGS IP, LLC
    Inventors: Mustafa Jaber, Jeremi M. Sudol, Bing Song
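A rough sketch of routing by feature-density signature, as the entry above describes. The density threshold, the two recognizer labels, and the synthetic keypoints are illustrative assumptions; a real preprocessing module would use detector keypoints (e.g. FAST or SIFT) and a richer signature than a single scalar.

```python
def feature_density(keypoints, region):
    """Keypoints per unit area inside region = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    inside = sum(1 for x, y in keypoints if x0 <= x < x1 and y0 <= y < y1)
    area = max((x1 - x0) * (y1 - y0), 1)
    return inside / area

def route_region(keypoints, region, dense_threshold=0.002):
    """Pick a recognition module for a region of interest from its feature-density signature."""
    if feature_density(keypoints, region) >= dense_threshold:
        return "document/OCR recognizer"    # dense, regular features suggest structured text
    return "generic object recognizer"      # sparse features suggest ordinary objects

if __name__ == "__main__":
    # Synthetic keypoints clustered in the top-left quadrant of a 640x480 image.
    keypoints = [(15 + 3 * i % 300, 10 + 7 * i % 200) for i in range(250)]
    print(route_region(keypoints, (0, 0, 320, 240)))       # document/OCR recognizer
    print(route_region(keypoints, (320, 240, 640, 480)))   # generic object recognizer
```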
  • Patent number: 11526267
    Abstract: A setting method for setting a parameter of a virtual viewpoint relating to a virtual viewpoint video to be generated based on images captured by a plurality of cameras includes receiving a first user operation relating to a first parameter relating to a position of a virtual viewpoint, determining a settable range of a second parameter of the virtual viewpoint by using the first parameter based on the received first user operation, and setting the first parameter based on the received first user operation and the second parameter that is based on a second user operation different from the first user operation and falls within the determined settable range as parameters of the virtual viewpoint.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: December 13, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takashi Hanamoto
  • Patent number: 11482049
    Abstract: A media verification device receives baseline media, which includes videos confirmed to include a target subject. The device determines, based on the baseline media for the target subject, a set of baseline features associated with the target subject. A baseline profile is determined for the target subject based on the set of baseline features. When test media, which includes a video purported to include the target subject, is received, test features are determined for the test media. A test profile is determined for the test media based on the set of test features. The test profile is compared to the baseline profile for the target subject. Based on this comparison, a confidence score is determined. If the confidence score is not greater than a threshold value, the test media is determined to include a synthetic video of the target subject, and an alert is provided.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: October 25, 2022
    Assignee: Bank of America Corporation
    Inventors: Elena Kvochko, George A. Albero, Daniel Joseph Serna, Michael Emil Ogrinz
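A minimal sketch of the baseline-versus-test comparison described in the entry above, assuming profiles are fixed-length feature vectors and the confidence score is cosine similarity; the threshold value and the example vectors are placeholders, not the patent's scoring method.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def verify_media(baseline_profile, test_profile, threshold=0.85):
    """Compare a test profile against the target subject's baseline profile.

    Returns (confidence, is_synthetic): if the confidence score does not exceed
    the threshold, the test media is flagged as likely synthetic.
    """
    confidence = cosine_similarity(baseline_profile, test_profile)
    return confidence, confidence <= threshold

if __name__ == "__main__":
    baseline = [0.12, 0.80, 0.33, 0.05]       # e.g. averaged facial/vocal features
    genuine  = [0.10, 0.78, 0.36, 0.07]
    spoofed  = [0.70, 0.10, 0.05, 0.60]
    for name, profile in [("genuine", genuine), ("spoofed", spoofed)]:
        score, synthetic = verify_media(baseline, profile)
        print(f"{name}: confidence={score:.2f} synthetic={synthetic}")
```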
  • Patent number: 11477252
    Abstract: It is possible to capture video information using one or more body mounted cameras, to transmit that information over a wireless communication channel, and to process that information, such as by using angular momentum information captured by gyroscopes, to obtain an image which is suitable for viewing in real time. This technology can be applied in a variety of contexts, such as sporting events, and can also be applied to information which is captured and stored for later use, either in addition to, or as an alternative to, streaming that information for real time viewing. Such video information can be captured by components fully enclosed within a hat clip enclosure that is mountable on a brim of a hat.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: October 18, 2022
    Assignee: Action Streamer, LLC
    Inventors: Christopher S. McLennan, Edward Jay Harnish, II
  • Patent number: 11475603
    Abstract: An apparatus and method for three-dimensional (3D) geometric data compression, includes storage of a first 3D geometric mesh of a first data size, which includes a 3D representation of a plurality of objects in a 3D space. The apparatus includes circuitry that receives motion tracking data of the plurality of objects from a plurality of position trackers. The motion tracking data includes motion information of each of the plurality of objects from a first position to a second position in the 3D space. The 3D geometric mesh is segmented into a plurality of 3D geometric meshes corresponding to the plurality of objects, based on the motion tracking data. As a result of the segmentation of the 3D geometric mesh before encoding and the use of motion tracking data, the plurality of 3D geometric meshes are efficiently encoded.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: October 18, 2022
    Assignee: SONY CORPORATION
    Inventor: Danillo Graziosi
  • Patent number: 11463624
    Abstract: An imaging device includes a first imaging unit that captures a first capturing region, a second imaging unit that captures a second capturing region of the first capturing region, a holding unit that holds position information corresponding to the second capturing region captured by the second imaging unit, an output unit that outputs a cutout image of the second capturing region corresponding to the position information held by the holding unit out of a captured image captured by the first imaging unit, and a change unit that, when one of a plurality of cutout images output by the output unit is selected, controls the second imaging unit so as to capture the second capturing region corresponding to the selected cutout image.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: October 4, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kentaro Fukunaga
  • Patent number: 11446812
    Abstract: A cable-driven robot has a base, a platform movable with respect to the base, a plurality of motors mounted on the base, and a plurality of movement cables of the platform each fixed at a first end at a respective fixing point to the platform and at a second end thereof to a respective motor. The robot further includes a shaft rotatably mounted on the platform, a supplementary movement cable, and a supplementary activating motor mounted at a respective position on the base. The supplementary movement cable is fixed, at a first end, to the supplementary activating motor and is wound with a portion of its length on the shaft.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 20, 2022
    Assignee: MARCHESINI GROUP S.P.A.
    Inventor: Giuseppe Monti
  • Patent number: 11451757
    Abstract: Methods, systems and apparatuses may provide for technology that automatically determines, based on camera calibration data and trajectory data associated with a projectile in a game, a plurality of camera angles. The technology may also automatically generate, based on the plurality of camera angles, a camera path for a volumetric content replay of a three-dimensional (3D) region of interest around a highlight moment in the game.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: September 20, 2022
    Assignee: Intel Corporation
    Inventors: Qiang Li, Wenlong Li, Doron T Houminer, Chen Ling, Diego Prilusky
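A toy sketch of generating a camera path around a 3D region of interest near a highlight moment, as the entry above describes. The circular-arc parameterization and the pose format (position, look-at target) are assumptions; the patent derives its camera angles from calibration data and the projectile's trajectory.

```python
import math

def camera_path(roi_center, radius=15.0, height=6.0, start_deg=0.0, end_deg=180.0, steps=60):
    """Generate a smooth arc of camera poses orbiting a 3D region of interest.

    Each pose is (position, look_at); the arc sweeps from start_deg to end_deg.
    """
    cx, cy, cz = roi_center
    poses = []
    for i in range(steps):
        t = i / (steps - 1)
        ang = math.radians(start_deg + t * (end_deg - start_deg))
        pos = (cx + radius * math.cos(ang), cy + radius * math.sin(ang), cz + height)
        poses.append((pos, roi_center))
    return poses

if __name__ == "__main__":
    # e.g. a region of interest centered near the projectile's apex at the highlight moment
    path = camera_path((50.0, 25.0, 2.0))
    print(len(path), path[0][0], path[-1][0])
```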
  • Patent number: 11425317
    Abstract: Systems and processes are provided for interactive reassignment of character faces in an audio video program including receiving, via an audio video input, an audio video program, receiving, via a user interface, a request to substitute an original character face within the audio video program with an alternative character face, delaying, using a buffer, the audio video program to generate a delayed audio video program, detecting, with a processor, an occurrence of the original character face within the audio video program, the processor being further operative for replacing an image of the original character face in the delayed audio video program with an image of the alternative character face to generate a modified delayed audio video program and coupling the modified delayed audio video program to a display and loudspeaker.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: August 23, 2022
    Assignee: Sling Media Pvt. Ltd.
    Inventor: Jayaraghavendra Chintakunta
  • Patent number: 11417106
    Abstract: A method for real-time crowd management includes receiving LiDAR point cloud data from an area of interest, forming a 3D static surface model of the area of interest from the LiDAR point cloud data, obtaining real-time CCTV camera images of the area of interest, adding the real-time CCTV camera images to the 3D static surface model to generate a real-time dynamic 3D model, identifying dynamic objects in the 3D model, generating a density map of the area of interest, adding the density map to the 3D model, identifying which dynamic objects are people, replacing each person with an animated character, displaying the real-time dynamic 3D model, monitoring the 3D model of the area of interest for dangerous situations, simulating evacuation of the area of interest by manipulating the animated characters onto pathways leading away from the area of interest, forming an evacuation strategy for the crowd, and transmitting a notice of the dangerous situation and the evacuation strategy to an emergency authority.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: August 16, 2022
    Assignee: King Abdulaziz University
    Inventors: Bander A. Alzahrani, Ahmed Barnawi, Min Chen
  • Patent number: 11379517
    Abstract: A photography searching system that is used to organize, share, and/or output event photography for event participants. Best used for races or large events, the system helps to organize photographs using associated available data such as the date, time, or location of where the photograph was taken, the name of an event participant, a number corresponding to a number worn by the event participant (bib number or participant number), a color corresponding to a clothing color worn by the event participant, as well as the net time it takes participants to complete an event, etc., also known as data search terms. A photographer uploads these photographs to the system, where they are sorted and categorized in the database; as in most events, each event participant will have multiple photographs taken of them. This system presents an interface on which a user inputs at least one query parameter, and relevant photos are then presented to them.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: July 5, 2022
    Inventor: Griffin Edward Kelly
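A minimal sketch of the query-parameter search the entry above describes, assuming each uploaded photo carries a metadata record (bib number, clothing color, capture time, location). The field names and the matching rule (exact equality on every supplied parameter) are illustrative.

```python
from datetime import datetime

photos = [  # illustrative metadata records uploaded by a photographer
    {"file": "img_001.jpg", "bib": 1421, "shirt_color": "red",
     "taken": datetime(2022, 5, 1, 9, 14), "location": "mile 3"},
    {"file": "img_002.jpg", "bib": 88, "shirt_color": "blue",
     "taken": datetime(2022, 5, 1, 10, 2), "location": "finish"},
]

def search(photos, **query):
    """Return photos whose metadata matches every supplied query parameter."""
    results = []
    for p in photos:
        if all(p.get(key) == value for key, value in query.items()):
            results.append(p["file"])
    return results

print(search(photos, bib=88))                                  # ['img_002.jpg']
print(search(photos, shirt_color="red", location="mile 3"))    # ['img_001.jpg']
```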
  • Patent number: 11373318
    Abstract: A kinematic analysis system captures and records participant motion via a plurality of video cameras. A participant feature and participant pose are identified in a frame of video data. The feature and pose are correlated across a plurality of frames. A three-dimensional path of the participant is determined based on correlating the feature and pose across the plurality of frames. A potential impact is identified based on analysis of the participant's path.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: June 28, 2022
    Assignee: Vulcan Inc.
    Inventors: Samuel Allan McKennoch, Tariq Osman Omar, Cecil Lee Quartey, Keith Rosema, Alan Caplan, Richard Earl Simpkinson, Istvan Fulop
  • Patent number: 11361380
    Abstract: Apparatuses, systems, and methods are provided for the usage of enhanced pictures (e.g., photos) of tangible objects (e.g., property, cars, etc.) damaged in an accident and answers to questions about the accident to better assess the effect of the damage (e.g., repair expenses and accompanying changes to an insurance policy). A pre-FNOL system may receive responses to one or more questions regarding an accident and one or more enhanced pictures of the tangible property damaged in the accident. The pre-FNOL system may use the responses to the one or more questions and the one or more enhanced pictures to determine repair costs associated with the damaged property and accompanying changes to the insurance policy if an insurance claim were to be filed to cover the determined repair costs.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: June 14, 2022
    Assignee: Allstate Insurance Company
    Inventors: John P. Kelsh, Clint J. Marlow, Nicole M. Hildebrandt
  • Patent number: 11317073
    Abstract: An information processing apparatus comprises an identification unit configured to identify a position and an orientation of an image capturing apparatus; a reception unit configured to receive an input associated with a switching operation for switching an image to be output between a captured image acquired by the image capturing apparatus and a virtual viewpoint image generated based on a plurality of captured images captured from different directions; and a determination unit configured to determine a position and an orientation of a virtual viewpoint corresponding to the virtual viewpoint image such that, in a specific period after the input associated with the switching operation is received, the position and the orientation of the virtual viewpoint and the identified position and the identified orientation of the image capturing apparatus match each other.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: April 26, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazuna Maruyama
  • Patent number: 11295138
    Abstract: A method includes: for each of a plurality of successive images in a camera video stream, searching for at least one person present in the image and defining, in the image, for each person found, a field, called a person field, at least partially surrounding that person; for each of at least one person found, gathering into a track segment several person fields derived from successive images and at least partially surrounding that same person; and, for each track segment, identifying the person in that track segment by a visual signature of that person, this identification including: for each person field in the track segment, determining a visual signature of the person in that track segment, called a local visual signature; determining an aggregated visual signature from the local visual signatures; and identifying the person in that track segment from the aggregated visual signature.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: April 5, 2022
    Assignee: BULL SAS
    Inventors: Rémi Druihle, Cécile Boukamel-Donnou, Benoit Pelletier
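A small sketch of aggregating per-frame local visual signatures into one signature per track segment and identifying the person from it, as described in the entry above. Mean pooling and cosine similarity against a gallery are assumed stand-ins for whatever aggregation and matching the patent actually claims.

```python
import math

def aggregate_signature(local_signatures):
    """Aggregate per-field local signatures into one signature per track segment
    (here: element-wise mean)."""
    n = len(local_signatures)
    return [sum(sig[i] for sig in local_signatures) / n
            for i in range(len(local_signatures[0]))]

def identify(track_signature, gallery):
    """Match the aggregated signature against known identities by cosine similarity."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
    return max(gallery, key=lambda name: cos(track_signature, gallery[name]))

if __name__ == "__main__":
    segment = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.85, 0.15, 0.15]]  # per-frame signatures
    gallery = {"person_A": [0.9, 0.1, 0.1], "person_B": [0.1, 0.9, 0.4]}
    print(identify(aggregate_signature(segment), gallery))   # person_A
```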
  • Patent number: 11288438
    Abstract: Systems and methods are provided for performing a video-grounded dialogue task by a neural network model using bi-directional spatial-temporal reasoning. According to some embodiments, the systems and methods implement a dual network architecture or framework. This framework includes one network or reasoning module that learns dependencies between text and video in the spatial→temporal direction, and another network or reasoning module that learns in the temporal→spatial direction. The outputs of the multimodal reasoning modules may be combined to learn dependencies between language features in dialogues. The resulting joint representation is used as a contextual feature for the decoding components, which allows the model to generate semantically meaningful responses to the users. In some embodiments, pointer networks are extended to the video-grounded dialogue task to allow the model to point to specific tokens from multiple source sequences to generate responses.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: March 29, 2022
    Assignee: salesforce.com, inc.
    Inventors: Hung Le, Chu Hong Hoi
  • Patent number: 11263467
    Abstract: The present disclosure provides an image processing apparatus and system which downscales an image which is generated from data provided by a sensor. The downscaled image is then analyzed to determine the location of one or more regions of interest in the image. The regions of interest can then be cropped from the original image and those cropped regions of interest processed by a computer vision engine.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: March 1, 2022
    Assignee: Apical Limited
    Inventors: Alexey Kornienko, David Hanwell
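A compact sketch of the downscale-detect-crop pipeline described in the entry above: region-of-interest detection runs on a downscaled copy, and the boxes are mapped back so the crops are taken from the full-resolution original. The box-mean downscaler and the brightness-threshold "detector" are placeholders for a real detector feeding a computer vision engine.

```python
import numpy as np

def downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Cheap box downscale by an integer factor (crops any remainder rows/columns)."""
    h, w = image.shape[:2]
    return image[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def detect_rois(small: np.ndarray, threshold=200):
    """Placeholder detector: returns bounding boxes (y0, x0, y1, x1) in small-image coords."""
    ys, xs = np.where(small[..., 0] > threshold)
    if len(ys) == 0:
        return []
    return [(ys.min(), xs.min(), ys.max() + 1, xs.max() + 1)]

def crop_full_res(image, rois_small, factor):
    """Map ROIs found on the downscaled image back to the original and crop there."""
    return [image[y0 * factor:y1 * factor, x0 * factor:x1 * factor]
            for (y0, x0, y1, x1) in rois_small]

if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.float32)
    img[100:180, 300:420] = 255.0                       # a bright region of interest
    small = downscale(img, 4)
    crops = crop_full_res(img, detect_rois(small), 4)
    print([c.shape for c in crops])                     # [(80, 120, 3)]
```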
  • Patent number: 11247099
    Abstract: A system and method for projecting a collimated beam of light from a light source to form an illuminated spot on a play surface, where an athlete is positioned adjacent to the illuminated spot. The light source is controlled with logic stored in a training pattern database, whereby the illuminated spot follows a predetermined training pattern on the play surface, prompting the athlete to move on the play surface in a movement pattern corresponding to the training pattern of the spot on the play surface.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: February 15, 2022
    Inventor: Lombro James Ristas
  • Patent number: 11250485
    Abstract: A system and method for filtering digital images stored on a blockchain database to locate one or more specific digital images from a corpus of digital images from an event includes receiving search criteria from a user for searching through the corpus of digital images stored on the blockchain database, filtering the corpus of digital images stored on the blockchain database based on a plurality of factors that match the search criteria, locating the one or more specific digital photographs that match the search criteria among the corpus of digital images stored on the blockchain database, as a function of the filtering, presenting the one or more specific digital photographs to the user for selection and purchase of the one or more specific digital photographs, and processing a purchase order for the one or more specific digital photographs selected for purchase by the user.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: February 15, 2022
    Assignee: International Business Machines Corporation
    Inventor: Jeremy R. Fox
  • Patent number: 11240569
    Abstract: Systems and methods for video presentation and analytics for a sporting event are disclosed. In one embodiment, the sporting event is an auto racing event. A server platform is provided to collect and analyze real-time raw data and historical raw data, and compare drivers/vehicles from a current auto racing event and/or a historical auto racing event. The server platform is operable to overlay a ghost driver/vehicle on the images of a driver/vehicle in the current auto racing event based on the comparison. The server platform also provides a GUI for displaying the current auto racing event with enhanced features.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: February 1, 2022
    Assignee: SPORTSMEDIA TECHNOLOGY CORPORATION
    Inventor: Gerard J. Hall
  • Patent number: 11185755
    Abstract: A system includes at least one processor and at least one non-transitory computer-readable media communicatively coupled to the at least one processor. In some embodiments, the at least one non-transitory computer-readable media stores instructions which, when executed, cause the processor to perform operations including receiving a first set of sensor data within a first time frame and receiving a set of skycam actions within the first time frame. In certain embodiments, the operations also include generating a set of reference actions corresponding to the first set of sensor data and the set of skycam actions. In some embodiments, the operations also include receiving a second set of sensor data associated with a second game status, a second game measurement, or both. The operations also include generating a sequence of skycam actions based on a comparison between the second set of sensor data and the set of reference actions.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: November 30, 2021
    Assignee: Intel Corporation
    Inventors: Fai Yeung, Patrick Youngung Shon, Shaun Peter Carrigan, Gilson Goncalves de Lima, Vasanthi Jangala Naga
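A minimal sketch of generating skycam actions by comparing new sensor data against a set of reference actions, as in the entry above. The sensor feature vectors, action labels, and nearest-neighbour comparison are illustrative assumptions, not the patented generation method.

```python
import math

# Reference actions learned in a first time frame: sensor feature vector -> skycam action.
REFERENCE = [
    ((0.9, 0.1, 40.0), "track_ball_tight"),
    ((0.2, 0.8, 10.0), "wide_crowd_shot"),
    ((0.5, 0.5, 25.0), "follow_play_medium"),
]

def nearest_reference(sensor_vec):
    """Pick the reference action whose sensor signature is closest to the new data."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE, key=lambda ref: dist(ref[0], sensor_vec))[1]

def generate_sequence(sensor_stream):
    """Map a second set of sensor data to a sequence of skycam actions."""
    return [nearest_reference(vec) for vec in sensor_stream]

print(generate_sequence([(0.85, 0.2, 38.0), (0.3, 0.7, 12.0)]))
# ['track_ball_tight', 'wide_crowd_shot']
```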
  • Patent number: 11179600
    Abstract: A method for calculating the position of an athlete, referred to as the target, on a sports field, including estimating an approximate position of the target using a radio-based positioning system, the system including tracking sensors attached to several athletes on the sports field and antennas installed around the sports field, defining a search space around the approximate position, detecting an athlete in the search space using an optical-based positioning system, the system including cameras installed above and/or around the sports field and an image recognition device, determining an accurate position of the detected athlete, and attributing the accurate position to the target.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: November 23, 2021
    Assignee: SWISS TIMING LTD
    Inventors: Alexander Hiemann, Thomas Kautz
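A small sketch of the two-stage positioning described in the entry above: a coarse radio-based estimate defines a search space, and the optical detection inside it closest to that estimate is attributed to the target athlete. The 2 m search radius and the fallback behaviour are assumptions.

```python
import math

def fuse_position(radio_estimate, optical_detections, search_radius=2.0):
    """Refine a coarse radio-based position with an accurate optical detection.

    Only optical detections inside the search space around the radio estimate are
    considered; the closest one is attributed to the target athlete.
    """
    rx, ry = radio_estimate
    candidates = [(x, y) for x, y in optical_detections
                  if math.hypot(x - rx, y - ry) <= search_radius]
    if not candidates:
        return radio_estimate                       # fall back to the coarse estimate
    return min(candidates, key=lambda p: math.hypot(p[0] - rx, p[1] - ry))

if __name__ == "__main__":
    radio = (41.3, 17.8)                                   # metres, coarse radio fix
    optical = [(40.9, 18.1), (35.0, 12.0), (60.2, 3.3)]    # camera-based detections
    print(fuse_position(radio, optical))                   # (40.9, 18.1)
```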
  • Patent number: 11182926
    Abstract: A system for recognizing collision positions of a plurality of moving objects in a screen includes an infrared camera configured to obtain an image frame when a plurality of moving objects move toward a screen, a memory configured to store a program for calculating a collision position in the screen on the basis of the image frame, and a processor configured to execute the program stored in the memory. By executing the program, the processor detects all of the plurality of moving objects from the image frame to generate a graph where moving objects included in a successive image frame are connected to one another, and calculates a collision position of the moving object included in the image frame and a collision position of the moving object in the screen on the basis of the graph.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: November 23, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong Sung Kim, Myung Gyu Kim, Woo Suk Kim, Seong Min Baek, Sang Woo Seo, Sung Jin Hong
  • Patent number: 11182642
    Abstract: A system and method of generating a player tracking prediction are described herein. A computing system retrieves a broadcast video feed for a sporting event. The computing system segments the broadcast video feed into a unified view. The computing system generates a plurality of data sets based on the plurality of trackable frames. The computing system calibrates a camera associated with each trackable frame based on the body pose information. The computing system generates a plurality of sets of short tracklets based on the plurality of trackable frames and the body pose information. The computing system connects each set of short tracklets by generating a motion field vector for each player in the plurality of trackable frames. The computing system predicts a future motion of a player based on the player's motion field vector using a neural network.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: November 23, 2021
    Assignee: STATS LLC
    Inventors: Long Sha, Sujoy Ganguly, Xinyu Wei, Patrick Joseph Lucey, Aditya Cherukumudi
  • Patent number: 11176705
    Abstract: A method for optimizing camera layout for areas requiring surveillance comprises constructing a three-dimensional model of a scene subject to surveillance and related scene variables, configuring a computation range, constructing a plurality of simulation scenes using the three-dimensional model and the scene variables and recording the framing of pixels in the plurality of simulation scenes by a plurality of cameras according to the computation range, and further calculating the number of pixels required for visibility of an object to be recognized from the recorded framing of pixels. A camera set is selected from the plurality of cameras according to a convergence requirement, and a computation as to camera optimization layout is performed with the camera set to obtain one or more layout schemes.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: November 16, 2021
    Assignee: Shenzhen Fugui Precision Ind. Co., Ltd.
    Inventors: Chang-Ching Liao, Shao-Wen Wang, Shih-Cheng Wang
  • Patent number: 11164394
    Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a method is provided that comprises receiving, by a system operatively coupled to a processor, a two-dimensional image, and determining, by the system, auxiliary data for the two-dimensional image, wherein the auxiliary data comprises orientation information regarding a capture orientation of the two-dimensional image. The method further comprises deriving, by the system, three-dimensional information for the two-dimensional image using one or more neural network models configured to infer the three-dimensional information based on the two-dimensional image and the auxiliary data.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: November 2, 2021
    Assignee: Matterport, Inc.
    Inventor: David Alan Gausebeck
  • Patent number: 11141645
    Abstract: The present invention relates to physical athletic ball games played using augmented reality smart glasses, and more specifically, to physical real-time athletic ball games where at least two players at different game locations anywhere in the world play athletic ball games against one another using smart glasses. The present invention also relates to the reading of statistical sports data and implementing that data through augmented reality smart glasses in the form of visual graphics so the player wearing the glasses can interact with real sports players in an augmented reality setting.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: October 12, 2021
    Assignee: Real Shot Inc.
    Inventor: Paul Anton
  • Patent number: 11138470
    Abstract: Systems, methods, and computer readable media related to training and/or using a neural network model. The trained neural network model can be utilized to generate (e.g., over a hidden layer) a spectral image based on a regular image, and to generate output indicative of one or more features present in the generated spectral image (and present in the regular image since the spectral image is generated based on the regular image). As one example, a regular image may be applied as input to the trained neural network model, a spectral image generated over multiple layers of the trained neural network model based on the regular image, and output generated over a plurality of additional layers based on the spectral image. The generated output may be indicative of various features, depending on the training of the additional layers of the trained neural network model.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: October 5, 2021
    Assignee: GOOGLE LLC
    Inventor: Alexander Gorban
  • Patent number: 11130019
    Abstract: A system and method for recording and broadcasting motion of a sports ball and individual players includes a sports ball with cameras embedded within for recording trajectory and locomotion, a sensor module for storing video footage and telemetry metadata, and cameras mounted on the sports equipment of individual players. The sensor module includes an inertial measurement unit, a transceiver, a memory, a power source, and a processor, all operatively connected to one another. The sports ball sends data to a wireless data transmission grid mounted under a sports pitch and/or to antennas for transfer to a data processing server, which determines a real ball direction that is then sent to a stadium camera system that generates further 360-degree, action-focused broadcasting data. The data is sent to the processing server, which processes the raw footage received from the sports ball and individual players to produce clean footage for various applications.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: September 28, 2021
    Assignee: THE CALANY Holding S. À R.L.
    Inventor: Cevat Yerli
  • Patent number: 11127156
    Abstract: A method of device tracking is provided. Based on a captured image containing a marker, a first spatial position is acquired. Based on a captured image of a scene, a second spatial position is acquired. Based on at least one of the first spatial position and the second spatial position, a terminal device may be positioned and tracked.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: September 21, 2021
    Assignee: GUANGDONG VIRTUAL REALITY TECHNOLOGY CO., LTD.
    Inventors: Yongtao Hu, Guoxing Yu, Jingwen Dai
  • Patent number: 11107327
    Abstract: A system and method of placing a location-based bet is presented in embodiments herein. Bets, offers, and incentives, may be presented to a user via a mobile device based on a location of the mobile device relative to a geographic region. The geographic region may be associated with a sporting event or a sporting venue such as a sport arena. Different bets, offers, and incentives may be presented inside the geographic region than outside the geographic region based on the different experiences for the fans in each geographic region.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: August 31, 2021
    Assignee: FanThreeSixty, LLC
    Inventors: Nic Kline, Bart Hampton, Jason Houseworth
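A minimal sketch of location-dependent offer selection, as the entry above describes: different bets and offers are served depending on whether the mobile device is inside or outside the geographic region around the venue. The haversine check, the 400 m radius, and the offer lists are illustrative assumptions; a production system would use proper geofence polygons.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def offers_for(device_lat, device_lon, venue_lat, venue_lon, radius_m=400):
    """Serve in-venue offers inside the geographic region, remote offers outside it."""
    if haversine_m(device_lat, device_lon, venue_lat, venue_lon) <= radius_m:
        return ["in-seat concession discount", "next-play prop bet"]
    return ["watch-party ticket offer", "pre-game futures bet"]

# Device roughly 15 m from the venue centre -> in-venue offers.
print(offers_for(39.0490, -94.4840, 39.0489, -94.4839))
```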
  • Patent number: 11093025
    Abstract: Provided is a virtual-reality provision system having: a tracking data acquisition unit that acquires tracking data on a flying object thrown by an athlete, the data being obtained from sensor information from a sensor that tracks the flying object; a three-dimensional-display-data generation unit that uses the acquired tracking data to generate three-dimensional display data used to display, in a virtual space, a flight video of a virtual flying object corresponding to said flying object; at least one virtual-space provision system that displays the flight video of the virtual flying object in the virtual space by using the three-dimensional display data; and a transmission unit that transmits the three-dimensional display data to the at least one virtual-space provision system.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: August 17, 2021
    Assignee: BASCULE INC.
    Inventors: Yoshio Kakehashi, Masayoshi Boku
  • Patent number: 11068705
    Abstract: Disclosed are systems, methods, and computer-readable media for a hybrid cloud structure for machine-learning based object recognition. In one aspect, a system includes one or more video-capable access points; and one or more processors configured to receive image data from the one or more video-capable access points; perform, at a first processor of the one or more processors, a first process to detect one or more objects of interest in the image data; generate vector IDs for one or more objects detected in the image data; perform, at a second processor of the one or more processors, a second process to identify the one or more objects in the vector IDs; and generate at least one offline trail for the one or more objects based on statistics associated with the one or more objects identified.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: July 20, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Ashutosh Arwind Malegaonkar, Haihua Xiao, Rizhi Chen, Li Kang, Siqi Ling, Mingen Zheng
  • Patent number: 11069142
    Abstract: A system or method includes a platform to allow users to coordinate images captured by a separated worn camera with images captured by a held camera, displaying such images at real-time to a user through a user interface on the held camera or a user interface on a separate device. The separate worn camera is contemplated to provide a separate image feed to provide one or more augmentations to an image captured by the held camera.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: July 20, 2021
    Assignee: WORMHOLE LABS, INC.
    Inventors: Curtis Hutten, Robert D. Fish
  • Patent number: 11050845
    Abstract: Aspects of the subject disclosure may include, for example, partitioning content of a plurality of media streams into media segments to generate a plurality of media segments associated with a media event, determining a first set of media segments from the plurality of media segments according to social media information associated with a social media group, transmitting the first set of media segments to first user equipment of a first member of the social media group, detecting a change in membership of the social media group, updating the first set of media segments according to the change in membership of the social media group to generate a modified set of media segments, and transmitting the modified set of media segments to the first user equipment for presentation at the first user equipment. Other embodiments are disclosed.
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: June 29, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Venson Shaw, Sangar Dowlatkhah, Zhi Cui
  • Patent number: 11050977
    Abstract: Systems and methods are described for immersive remote participation in live events hosted by interactive environments and experienced by users in immersive realities.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: June 29, 2021
    Assignee: TMRW Foundation IP & Holding SARL
    Inventor: Cevat Yerli
  • Patent number: 11025942
    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 1, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
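A schematic sketch of multi-stage decoding with an early exit, as the entry above describes: each stage partially decodes the stream, and later stages are skipped once a sufficiently confident computer-vision decision is available. The three stage functions and the confidence threshold are invented placeholders, not Samsung's codec pipeline.

```python
def progressive_decode(stages, confidence_threshold=0.9):
    """Run multi-stage decoding, stopping as soon as a computer-vision decision is possible.

    `stages` is an ordered list of callables; each performs one decoding stage and
    returns (partial_result, decision, confidence). Later stages are skipped once a
    sufficiently confident decision exists.
    """
    result, decision = None, None
    for i, stage in enumerate(stages, start=1):
        result, decision, confidence = stage(result)
        if decision is not None and confidence >= confidence_threshold:
            return decision, f"decided after stage {i} of {len(stages)}"
    return decision, "full decode required"

# Illustrative stages: entropy decode -> motion-vector analysis -> full pixel reconstruction.
def entropy_stage(_):
    return "syntax elements", None, 0.0

def motion_vector_stage(prev):
    return (prev, "motion vectors"), "no person present", 0.95   # cheap early decision

def pixel_stage(prev):
    return "pixels", "no person present", 0.99

print(progressive_decode([entropy_stage, motion_vector_stage, pixel_stage]))
# ('no person present', 'decided after stage 2 of 3')
```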
  • Patent number: 11025999
    Abstract: A system for automatic creation of a scenario video clip with a predefined object or a group of objects in the frame comprises: a shooting unit; a data storage module; a unit for identifying the predefined object or group of objects in primary video data; and a data input unit for entering object or group-of-objects data used for their identification. The system additionally comprises: a unit for retrieving relevant video data with the predefined object or the group of objects in the frame; a relevant video data processing unit; and at least one scenario pattern including a data set for operation of the shooting unit, the retrieval unit, and the processing unit.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: June 1, 2021
    Assignee: FUN EDITOR LLC
    Inventors: Anton Vladimirovich Rozhenkov, Sergey Sergeevich Klyuev, Denis Evgenyevich Kalinichenko, Dmitry Vyacheslavovich Gurichev
  • Patent number: 11017588
    Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: May 25, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazuhiro Yoshimura, Kaori Taya, Shugo Higuchi, Tatsuro Koizumi
  • Patent number: 11004267
    Abstract: An information processing apparatus enables a user viewing a displayed virtual viewpoint image to easily understand the state in a generation target scene of the virtual viewpoint image. The information processing apparatus generates a layout that is a figure representing a position of an object included in an imaging target area captured by a plurality of imaging units from different directions, and controls a display unit to display a virtual viewpoint image and the generated layout. The virtual viewpoint image is generated based on images acquired by the plurality of imaging units and viewpoint information indicating a virtual viewpoint.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: May 11, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kazuhiro Yoshimura, Yosuke Okubo
  • Patent number: 10998870
    Abstract: There is provided an information processing apparatus, an information processing method, and a program that enable output of a sound to be heard at an assumed viewing/listening position of a zoom image when an image is displayed as the zoom image. In the case of image content such as a sports broadcast, the individual location information, direction and posture information, and audio data of each player as an object are stored separately for direct sound and reverberant sound, at the time of recording. At the time of reproducing a zoom image, the direct sound and the reverberant sound are mixed according to the direction of a player as an object with respect to an assumed viewing/listening position in the zoom image, so that a sound to be heard at the assumed viewing/listening position is output. The present disclosure can be applied to a content reproduction apparatus.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: May 4, 2021
    Assignee: SONY CORPORATION
    Inventors: Keiichi Aoyama, Satoshi Suzuki, Koji Furusawa
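A toy sketch of mixing an object's separately stored direct and reverberant sound according to an assumed viewing/listening position, as described in the entry above. The inverse-distance weighting is an assumption standing in for whatever mixing rule the patent specifies, and direction-dependent panning is omitted for brevity.

```python
import math

def mix_object_audio(direct, reverberant, player_pos, listener_pos, ref_distance=5.0):
    """Mix an object's direct and reverberant recordings for an assumed listening position.

    Closer (zoomed-in) listening positions weight the direct sound more heavily;
    distant positions let the reverberant component dominate. Inputs are equal-length
    sample lists; output is a per-sample weighted sum.
    """
    distance = math.dist(player_pos, listener_pos)
    w_direct = ref_distance / (ref_distance + distance)   # approaches 1.0 near the player
    w_reverb = 1.0 - w_direct
    return [w_direct * d + w_reverb * r for d, r in zip(direct, reverberant)]

if __name__ == "__main__":
    direct = [0.5, -0.4, 0.3]
    reverberant = [0.1, 0.05, -0.02]
    near = mix_object_audio(direct, reverberant, (0, 0), (2, 1))     # zoomed-in viewpoint
    far = mix_object_audio(direct, reverberant, (0, 0), (40, 25))    # wide viewpoint
    print([round(x, 3) for x in near], [round(x, 3) for x in far])
```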
  • Patent number: 10999571
    Abstract: A display control apparatus configured to perform display control so as to display information on a plurality of image capturing apparatuses configured to capture images for generating a virtual viewpoint image includes acquisition means configured to acquire information on the plurality of image capturing apparatuses, and display control means configured to cause a display unit to display information on a communication connection of the plurality of image capturing apparatuses for transmitting an image captured by each of the plurality of image capturing apparatuses based on the information acquired by the acquisition means.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: May 4, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasushi Shikata, Yoshiki Iwakiri
  • Patent number: 10950276
    Abstract: Upon capture of video data for a match of a sport at a first time, an apparatus performs detection of event information from the captured video data during a first time-period starting from the first time, where the event information includes information identifying an occurrence timing of an event that occurs in the match of the sport, an event type of the event, and an occurrence position of the event. The apparatus reproduces the video data, on a display screen, with a delay of a second time-period obtained by adding a third time-period longer than or equal to a predetermined time-period to the first time-period, and, upon detection of the event information, continues displaying the event type and the occurrence position of the event, for the predetermined time-period, from a timing that is the predetermined time-period before the occurrence timing of the event within the reproduced video data.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: March 16, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Tomohiro Takizawa, Hiroto Motoyama, Kazuhiro Arima, Shinichi Akiyama
  • Patent number: 10948922
    Abstract: A method of navigating an autonomous vehicle includes receiving pulsed illumination from an object in the vehicle environment and decoding the pulsed illumination. The object is identified using the decoded pulsed illumination, and the autonomous vehicle is navigated through the vehicle environment based on the identification of the object. Obstacle avoidance methods and navigation systems for autonomous vehicles are also described.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: March 16, 2021
    Assignee: Sensors Unlimited, Inc.
    Inventors: Curt Dvonch, Jonathan Nazemi
  • Patent number: 10939203
    Abstract: An audio forecasting algorithm that is adjusted (or trained), by machine learning, prior to a sports contest that will be broadcast. The audio forecasting algorithm is then used to position a set of mobile microphones on an ongoing basis during the sports contest. In some embodiments, a band forecasting algorithm is used in the audio forecasting algorithm. In some embodiments, a swarm based correlation algorithm is used in the audio forecasting algorithm.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: March 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Gary William Reiss, Eduardo Morales, Nancy Anne Greco, David Alvra Wood, III
  • Patent number: 10937185
    Abstract: A system for detecting an articulate body pose from an imagery content includes an imaging module for capturing the imagery content, and a processor that is operable to obtain a top-down view of the imagery content, and process the top-down view to detect the articulate body pose using a machine learning algorithm, wherein the articulate body pose includes a plurality of joints. The processing includes creating a part confidence map corresponding to each joint of the articulate body pose, generating a heatmap by projecting the part confidence map on the top-down view of the imagery content, creating a part affinity map corresponding to each body part, generating a vector map by projecting the part affinity map on the top-down view of the imagery content, and generating a body-framework corresponding to the articulate body pose, using the heatmap and the vector map.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: March 2, 2021
    Assignee: Everseen Limited
    Inventor: Dan Pescaru
  • Patent number: 10917621
    Abstract: An information processing apparatus according to this invention includes a generating unit configured to generate a virtual viewpoint image in accordance with a position and/or line-of-sight direction of a viewpoint, and a notifying unit configured to send a notification of information about quality of the virtual viewpoint image generated by the generating unit.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: February 9, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yoshiki Iwakiri