IMAGE CAPTURE, PROCESSING AND DELIVERY AT GROUP EVENTS
Methods, systems, and devices are disclosed for image acquisition and distribution of individuals at large events. In one aspect, a method for providing an image of attendees at an event includes operating one or more image capturing devices to record images of attendees of an event situated at locations in an event venue, processing the images to form a processed image, and distributing the processed image to the individual. The processing includes mapping the locations to a grid including coordinates corresponding to predetermined positions associated with the event venue, defining an image space containing an individual at a particular location in the event venue based on the coordinates, and forming the processed image based on the image space.
This patent document claims the benefit of priority of U.S. Provisional Patent Application No. 61/739,586 entitled “IMAGE CAPTURE, PROCESSING AND DELIVERY IN GROUP EVENTS” filed on Dec. 19, 2012. The entire content of the above patent application is incorporated by reference as part of the disclosure of this patent document.
TECHNICAL FIELD
This patent document relates to systems, devices, and processes for image capture, processing and delivery to various users at group events, including attendees at sporting events and entertainment events.
BACKGROUND
Group events typically bring large crowds of people to event venues for watching live activities or performances, often to the enjoyment of the spectator. During various group events, particularly large group events including sports or concerts, the reactions of individuals watching the live performances can be highly animated. A photograph taken at such an event may provide the individual with pleasant memories of the event.
Photos are becoming more commonly shared through social media using online social networks. An online social network is an online service, platform, or site that focuses on social networks and relations between individuals, groups, organizations, etc., that forms a social structure determined by their interactions, e.g., which can include shared interests, activities, backgrounds, or real-life connections. A social network service can include a representation of each user (e.g., as a user profile), social links, and a variety of additional services. For example, user profiles can include photos, lists of interests, contact information, and other personal information. Online social network services are web-based and provide means for users to interact over the Internet, e.g., such as private or public messaging, e-mail, instant messaging, etc. Social networking sites allow users to share photos, ideas, activities, events, and interests within their individual networks.
SUMMARY
Techniques, systems, and devices are disclosed for implementing an image-capture, processing and delivery system to obtain the reaction images of individuals at large events, e.g., including sports games, and to provide a crowd sourced security system.
In one aspect, a method for providing an image of attendees at an event includes operating one or more image capturing devices to record images of attendees of an event situated at locations in an event venue, processing the images to form a processed image, and distributing the processed image to the individual. The processing includes mapping the locations to a grid including coordinates corresponding to predetermined positions associated with the event venue, defining an image space containing an individual at a particular location in the event venue based on the coordinates, and forming the processed image based on the image space.
Implementations of the method can optionally include one or more of the following features. For example, the event venue can include at least one of a stadium, an arena, a ballpark, an auditorium, a music hall, an amphitheater, a building to host the event, or an outdoor area to host the event. For example, the attendees can include fans or spectators at a sporting event. For example, the predetermined positions can include seating in the event venue. In some implementations of the method, for example, the operating the one or more image capturing devices can include manually triggering them to record the images at an operator-selected instance based on an occurrence of the event. In some implementations of the method, for example, the operating the one or more image capturing devices can include automatically triggering them to record the images based on at least one of sound, visual stimulus, or mechanical perturbation generated at the event venue. In some implementations of the method, for example, the operating the one or more image capturing devices can include temporally capturing a series of images of the attendees after one of a manual triggering or an automatic triggering of the one or more image capturing devices. For example, the series of images can be captured at a speed of at least two images per second. For example, the one or more image capturing devices can be automated to record the images by continuously panning in one or both of horizontal and vertical directions along a predetermined trajectory to capture the series of images with a predetermined focusing of the locations in the event venue. For example, the one or more image capturing devices can be automated to record the images by moving to and stopping at a plurality of imaging positions along a predetermined trajectory to capture the series of images while stopped at the corresponding imaging position, in which the one or more image capturing devices are configured to have a predetermined focusing of the locations in the event venue. For example, the one or more image capturing devices can be configured to have a predetermined focusing of the locations in the event venue. In some implementations of the method, for example, the forming the processed image based on the image space can include producing a segmented image. For example, the producing the segmented image can include cropping at least one of the recorded images to a size defined by the image space. For example, the producing the segmented image can further include overlapping two or more of the recorded images to form a merged image. In some implementations of the method, for example, the distributing can include wirelessly transmitting the processed image to a mobile device of the individual. In some implementations, for example, the method can further include producing a graphical user interface on the mobile device to present the processed image to the individual. For example, the graphical interface can also present event-related content with the processed image. For example, such event-related content can include information associated with the event and an image of an occurrence of the event, in which the occurrence temporally corresponds to the processed image. For example, the graphical interface can include an interface to report a security-related incident to authorities at the event venue.
In some implementations of the method, for example, the processing the images can further include attaching metadata with image data of the processed image, which, in some examples, can provide links to external websites as part of the processed image. In some implementations, for example, the method further includes wirelessly transmitting a message to prompt the individual at the event to provide location information via the graphical user interface on the mobile device.
In another aspect, an imaging service system includes a plurality of cameras arranged in an event venue to capture images of attendees at an event corresponding to an occurrence of the event, a trigger module in communication with the plurality of cameras to initiate the capture of the images, and one or more computers in communication with the cameras to receive the captured images and provide coordinates to the captured images that correspond to locations in the event venue to associate individuals among the attendees to respective locations in the event venue.
Implementations of the imaging service system can optionally include one or more of the following features. For example, the captured images of the attendees display one or more attendees' reaction to the occurrence of the event. For example, the event venue can include at least one of a stadium, an arena, a ballpark, an auditorium, a music hall, an amphitheater, a building to host the event, or an outdoor area to host the event. For example, the attendees can include fans or spectators at a sporting event. For example, the locations can correspond to seating in the event venue. For example, the plurality of cameras are arranged in the event venue to capture the images of the attendees from multiple directions. For example, the plurality of cameras can temporally capture a series of images of the attendees. In some implementations of the system, for example, the one or more computers can form a processed image of an individual or individuals proximate the location of the individual using the coordinates. In some implementations of the system, for example, the one or more computers can distribute the processed image to the individual using wireless communication to a mobile device of the individual. For example, the one or more computers can send the processed image to a social network site. For example, the one or more computers can allow purchase of the processed image by the individual. In some implementations of the system, for example, the trigger module can be a manual trigger to initiate the capture of the images at an operator-selected instance based on the occurrence of the event. In some implementations of the system, for example, the trigger module can be an automatic trigger to initiate the capture of the images based on a detection of at least one of a sound, visual stimulus, or mechanical perturbation at the event. In some implementations, for example, the system can further include a plurality of lighting devices to direct light at selected sections of the event venue corresponding to sections where the plurality of cameras capture the images, in which the lighting devices are in communication with the trigger module and configured to emit light when triggered on the selected sections to be imaged. For example, the plurality of lighting devices can be configured to direct the light at the selected sections with angles corresponding to imaging angles formed between the camera and the section to be imaged.
In another aspect, an imaging system for providing images of attendees at an event includes a plurality of cameras arranged in an event venue to capture images of attendees at an event corresponding to an occurrence of the event, and one or more computers in communication with the cameras to receive the captured images and provide coordinates to the captured images that correspond to locations in the event venue to associate individuals among the attendees to respective locations in the event venue, in which the captured images of the attendees display one or more attendees' reaction to the occurrence of the event.
Implementations of the system can optionally include one or more of the following features. In some implementations, for example, the system can further include a trigger module in communication with the plurality of cameras to initiate the capture of the images. In some implementations, for example, the system can further include a plurality of lighting devices to direct light at selected sections of the event venue corresponding to sections where the plurality of cameras capture the images, in which the lighting devices are in communication with the trigger module and configured to emit light when triggered on the selected sections to be imaged. For example, the trigger module can be a manual trigger to initiate the capture of the images at an operator-selected instance based on the occurrence of the event. For example, the trigger module can be an automatic trigger to initiate the capture of the images based on a detection of at least one of a sound, visual stimulus, or mechanical perturbation at the event. For example, the event venue can include at least one of a stadium, an arena, a ballpark, an auditorium, a music hall, an amphitheater, a building to host the event, or an outdoor area to host the event. For example, the attendees can include fans or spectators at a sporting event. For example, the locations can correspond to seating in the event venue. For example, the plurality of cameras are arranged in the event venue to capture the images of the attendees from multiple directions. For example, the plurality of cameras can temporally capture a series of images of the attendees. In some implementations of the system, for example, the one or more computers can form a processed image of an individual or individuals proximate the location of the individual using the coordinates. In some implementations of the system, for example, the one or more computers can distribute the processed image to the individual using wireless communication to a mobile device of the individual. For example, the one or more computers can send the processed image to a social network site. For example, the one or more computers can allow purchase of the processed image by the individual.
In another aspect, a method for providing crowd sourcing for security at an event includes operating one or more image capturing devices to capture images of attendees of an event situated at locations in an event venue, processing the captured images to form security reference images, in which the processing includes mapping the locations of the attendees in the captured images to a grid including coordinates corresponding to predetermined positions associated with the event venue, distributing at least one of the security reference images to at least some of the attendees, receiving a message from an attendee identifying at least one of a position or an object in the security reference image, in which the message indicates an alleged disturbance in the event venue, processing the message to determine the location of the alleged disturbance using the identified position or object in the security reference image, and providing an alert message to an authority associated with the event to alert the authority of the alleged disturbance, the alert message including the determined location.
Implementations of the crowd sourcing security method can optionally include one or more of the following features. For example, the one or more image capturing devices can be configured to capture the images of attendees at one or more instances prior to and during the event. For example, each of the security reference images can be associated with a particular section or sections of the event venue. In some implementations of the method, the processing can further include defining an image space based on a particular location in the event venue using the coordinates, and segmenting the captured images to a size defined by the image space to form a reduced-size security reference image. In some implementations of the method, for example, the distributing can include wirelessly transmitting the security reference images to a mobile device of the attendee. For example, the message received from the attendee can be an anonymous message.
The subject matter described in this patent document can be implemented in specific ways that provide one or more of the following features. For example, some implementations of the disclosed technology include a hardware and software system and a user interface, e.g., for capturing, processing, distributing and viewing images of crowds during large events. The hardware system can include digital cameras and imagers that are used to rapidly capture images of attendees in the crowd at an event, e.g., in short periods of time, during specific moments of the event. A variety of mechanisms can be used to adjust the camera-viewing/imaging angle and ensure the images captured are of the correct subject and the image quality and capture speed are high. For example, the captured images can be processed using attendee location information and predetermined locations of the event venue, e.g., including mapping the images to a grid. In some implementations, for example, the image processing can include segmenting, overlapping, and/or dividing the captured images. For example, the processed images can be packaged so the individuals in the crowd can easily and rapidly obtain their photograph. The exemplary packaging system can also provide a tool in which individuals can drop indicators on the images to identify other people in the crowd, e.g., based on the grids. The content generated by these hardware and software systems can be viewed in an interface that combines event-related content with images captured during a moment that generates reactions by the attendees, in which such combined content is presented and displayed to a user in a specific manner, in real time during or after the event.
During various group events, particularly large group events including sports or concerts, the reactions of individuals watching the live performances are highly animated. A photograph of these situations provides a unique and yet highly beneficial and desired memento or keepsake for a spectator, especially if the image can be captured at a precise moment, tailored to remind the spectator of that specific moment, and easily and rapidly obtained. However, achieving this presents many technical difficulties. For example, some main issues or difficulties include capturing the image or images in a short period of time and at just the right moment, capturing the image or images with the individual spectator and/or group of spectators in focus in the context of the moment, and preparing the captured image or images so they can be easily and rapidly accessed, e.g., such as delivering the image or images directly to the user and/or integrating the image content and/or the image or images into a social network, e.g., particularly a social network with a series of specific mechanisms and a unique interface.
One of the easiest forms of communication is through photos. Photos capture and convey special moments, and sharing them is a way to show others that moment. Images form the core of an interactive social media network. Today, large social network users are starting to experience social media fatigue. For example, large social network users can have too many friends who share information they are not interested in, and thus there is an opportunity for smaller, niche social networks focusing on a specific interest. As the social media market continues to segment, users may no longer spend a majority of time on one network and instead may visit a number of smaller networks that are more in line with their interests. The disclosed technology can be used to address this social networking shift, e.g., such as providing a sports-specific social network.
Techniques, systems, and devices are disclosed for rapid acquisition, processing, and delivery of images that capture the reaction images of individuals at group events, e.g., including, but not limited to, large events such as sports games, concerts, etc. The disclosed techniques, systems, and devices can also be implemented to provide a crowd sourced security system.
The disclosed technology can include a platform to capture photos of individuals at an event and process and distribute the photos to users of the platform. For example, a series of images can be taken and made available rapidly, providing a virtual layout of the individuals in the crowd during the event. When shared, the photos show images of users enjoying themselves, which is an entirely new medium through which fans and advertisers/brands can interact with one another. Also, this allows a unique security function for other crowd members to highlight any issues caused by other individuals. For example, using a mobile application, a user can visually identify (e.g., using their image as a reference) the inappropriate individual and give a reason as to why they are raising this issue. This can alert event security staff and can provide an image and pre-mapped seat number of the accused perpetrator.
In some aspects, the disclosed technology includes a camera system that captures images of individuals in a crowd during time periods of an event at an event venue, e.g., in which the time periods can be associated with crowd reactions to instances, moments, or occurrences of the event. For example, the camera system can include one or more camera devices or modules configured in the event venue to capture one or more images of locations corresponding to predetermined positions (e.g., seats, aisles, sections, sky boxes, locations on the field, court, etc.) in the event venue. The disclosed technology includes a processing system that processes the captured images to produce one or more processed images corresponding to any individual in the crowd (e.g., based on the predetermined positions) during the captured time period showing the individual's reaction for the associated instance, moment, or occurrence of the event. In some implementations, the processing system can distribute the processed images to the individuals via an application on a mobile device.
The disclosed camera system can include a trigger system to activate the camera devices or modules to take the images. For example, the trigger system can be activated manually, e.g., by an on-site person at the event venue or remotely by an off-site person viewing the event live from an off-site facility. For example, the trigger system can be used to trigger individual or multiple designated modules or devices of the camera system.
The disclosed camera system can be configured to upload the images to a server from which the individuals can access their own images. For example, the disclosed technology includes an image content management system that allows the individuals to share their images on social networks.
In some implementations, the camera system can include static cameras focused on specific sections of the crowd (e.g., the predetermined positions).
In some implementations, the camera system can include the one or more camera devices or modules in a panning system that captures images while moving. For example, the cameras in the panning system can be configured such that the focus is pre-set to change as it pans and is timed to be at an optimum focus when an image is taken.
The disclosed camera system can include a mechanism to rapidly move the one or more camera devices or modules to focus on a specific area of the crowd and then stop it to take an image, e.g., which can continue for a series of images. For example, the camera can be a DSLR camera with a telephoto lens, which is attached to a stepper motor to rapidly change the camera angle and stop to take an image and then move to the next position to capture the next image in the series. For example, the camera can be configured with a pre-set focus to change as it pans and is timed to be at an optimum focus when an image is taken, e.g., when stopped. The camera system can include a mechanism (e.g., such as another stepper motor attached to the camera) to adjust the camera angle along a different axis, in which the movements (e.g., of the motors) are timed together. In some examples, the mechanism can be configured such that a physical blocking mechanism accurately stops the camera moving mechanism. In other examples, the mechanism can be configured such that a friction mechanism accurately stops the camera moving mechanism. For example, the mechanism can be configured such that elastic tension is used to act as the camera moving mechanism. For example, the mechanism can be configured such that a spring force is used to act as the camera moving mechanism. For example, the mechanism can be configured such that gas or liquid injection is used to act as the camera moving mechanism. In some examples, the one or more camera devices or modules are triggered by the stopping/stabilization movement mechanism, in which the image can be captured when the camera is static and stabilized, and once the image has been taken, this relays to the camera moving mechanism so that the next movement can occur. In some implementations, once the series of images has been taken, the camera moving mechanism can return the camera devices or modules to their original position and ready the devices to be retriggered. In some implementations, once the series of images has been taken, the camera moving mechanism of the camera system can stay in the finished position so that when the system is next triggered, the images are taken in the reverse manner.
In some implementations, the one or more camera devices or modules of the camera system can focus into a mirror such that the mirror changes angle to adjust the section of the crowd being captured. In some implementations, the one or more camera devices or modules of the camera system can be attached to one or more vibration-damping platforms to isolate them from shaking of the environment in which they are housed. In some implementations, a gyro can be used to pre-gauge how much overshoot or shake a movement causes when stopping and stabilizing, in which the gyro can then cancel out the movement, e.g., using pre-set calibrations to balance the over/undershoot movement, e.g., using precise timings. For example, an external counterbalance can be used to cancel out the over/undershoot.
The disclosed processing system can be configured to label each captured image corresponding to what section of the crowd it was captured for and to what instance, moment, or occurrence it was captured at (e.g., when the camera system was triggered). For example, the processing system can attach the label to pre-made grids, e.g., specific to the section of the crowd of which the image was captured. For example, the processing system can form the processed images corresponding to defined sections (e.g., by assigning sections to the grid for a particular location) to include a particular individual that is coded to his/her location in the event venue, e.g., in which the processed image can be sent to the individual. For example, the individuals can use a mobile device, via an application, to access the images. For example, the processed images can be sent to the particular individual's mobile device, e.g., via the application. For example, the particular individual can be displayed as part of the image they are in. In some implementations, the disclosed processing system can be configured to label the images taken, and specific grids are added to these specific labels.
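As a non-authoritative illustration of this labeling step, the following minimal Python sketch shows how a captured image could be tagged with its crowd section, its trigger moment, and the pre-made grid for that section; the names (CapturedImage, SECTION_GRIDS, label_image) and values are assumptions for illustration only and are not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class CapturedImage:
        file_path: str
        section_id: str    # crowd section the camera was aimed at
        moment_id: int     # trigger instance (occurrence of the event)
        grid_id: str = ""  # pre-made grid associated with this section

    # Pre-made grids, one per crowd section (assumed identifiers).
    SECTION_GRIDS = {"NW-UPPER": "grid_nw_upper", "SE-LOWER": "grid_se_lower"}

    def label_image(path: str, section_id: str, moment_id: int) -> CapturedImage:
        """Attach section, moment, and grid labels to a newly captured image."""
        image = CapturedImage(path, section_id, moment_id)
        image.grid_id = SECTION_GRIDS.get(section_id, "")
        return image

    # Example: an image of section NW-UPPER captured at trigger moment 3.
    labeled = label_image("cam2_shot_017.jpg", "NW-UPPER", moment_id=3)
    print(labeled)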
In some implementations, the described system can include an image flow from the camera system to a cloud or laptop, and then from the cloud or laptop to a mobile device. For example, the individuals can request these images by providing the details of their specific area or location during the event. The processing system can pool the captured images together for each set of images taken of the individual. For example, this can allow these images to be accessed together by using coordinates of the predetermined positions to quickly direct the user to the correct image locations or by moving the segmented images to a separate location to be viewed in sequence. In some implementations, the captured images can be connected to specific information of the moment captured. For example, the captured images can be connected to specific images of the event moment. In some examples, the images can be captured to include a slight overlapping area, in which the processing system processes the areas to be split into sections, thereby producing a choice depending on the individual's location that ensures that when an image is requested, a cropped image isn't displayed/delivered. For example, the captured images can be instantly overlaid on other images.
The described system can also be utilized for crowd source security using the images captured by the camera system. For example, in some implementations, the camera system can capture images of the crowd rapidly, e.g., and the captured images can be provided to the attendees at the event, e.g., via their mobile device, to aid event security. The exemplary crowd source security system can utilize the grid to reference the images taken. For example, the exemplary crowd source security system can send a reference image to attendees displaying their location in order to locate other individuals in the crowd, e.g., including images taken prior to the start of the event or at intermittent instances during the event. For example, the exemplary system can send the attendees their location displayed in real time, e.g., based on a request, in order to locate other individuals in the crowd. The attendees can drop identifying markers or tags on other individuals. For example, the attendees can choose from a list of reasons as to why they were identifying another individual, e.g., such as ‘disturbing other attendees’, or the attendees can write a statement as to why they were identifying an individual. The exemplary crowd source security system can send an image of the accused individual to event security staff. For example, a dropped marker can be included on the image sent to event security staff that is associated with seating or position information of the accused individual (e.g., based on the location grid) so that the seat or location is identified. The exemplary crowd source security system can be configured such that only a selected group of the total event staff are sent notifications of images and/or seat locations of the accused individual(s), e.g., the selected group corresponding to the specific section of the event that they are working in. The exemplary crowd source security system can be configured to notify users that are located (e.g., checked-in) close to the accused individual(s) to confirm or disconfirm the allegations.
The described system can also include a user interface implemented in a software and/or mobile application (‘app’) or an Internet site, such as a web portal, and executed on a variety of devices to be operated by various types of users.
In some implementations, the user interface can be implemented on a mobile device as a mobile device application or as a website accessed in a browser on the mobile device that receives image content, e.g., the processed image from the processing system. For example, the user interface can use a user's mobile device location signal to display the event they are attending. For example, the user interface can use a user's mobile device location signal and the time of the event to notify the user to access the application content during and/or after the event. For example, the user interface can use a user's mobile device to store the user's location at the event so that the content can be sent or accessed later. For example, users can specify their exact location or seating during the event so that a specific image, or series of images, can be sent to the user after they have been captured. For example, the series of images can be of the same moment or of different moments of the event. For example, the user interface can include a user profile to store user data, e.g., including location or seat number for a series of events or a season.
In some implementations, for example, the user interface can be operated by the individual such that the individual can choose the image he/she desires and a size or portion (e.g., how much of the photograph) desired within that image, in which the size can be reduced by cropping it. The user interface can include linking the crowd content images to images of the event moment (e.g., such as a sports player scoring the goal), e.g., in which these images are ‘twinned’, such that they can be uploaded to a social network and associated together. For example, the ‘twinned’ images can include one of the user's reaction moments and the other one of the moment causing the reaction. The user interface can include linking the crowd content images to information of that event, e.g., such as emblems or text, which can be displayed in a connected manner, such that they can be uploaded to a social network and associated together. In some implementations, the user interface can be operated to collate the image content to produce a social network newsfeed, e.g., each specific to that user's connections. For example, the connections can be from other social networks. In some implementations, the user interface can be operated such that, when displaying the two images in a social network newsfeed, they interact by one overlapping the top of the other until scrolled; when scrolled, these images adjust to reveal the image below, to provide a seamless adjustment in image display. For example, the overlapping image reduces in size as the interface is scrolled. Also, for example, the overlapping image reduces in visibility as the interface is scrolled. Also, for example, the overlapping image slides out of the way as the interface is scrolled. In some implementations, the user interface can be operated such that, when displaying the two images in a social network newsfeed, they interact when scrolled, by adjusting the prominence of one of the images.
In some implementations, the user interface can be operated to display a panoramic image in which a series of images are loaded with various levels of resolution, e.g., all specific to the requested image, so that the user can rapidly see the detail of their specific image without waiting for all the other images' detail to load, but is still able to witness the scale of the panoramic. For example, when viewing the panoramic image, the sections of images are loaded in specific areas around the user's image, e.g., which can be horizontally or vertically loaded after viewing each section. The user interface can be operated such that, when the user checks in or receives image content for a specific game, the data of the game (e.g., such as the result or scorers) are logged and combined with other game stats the user has collected from other games to produce a statistics display, personalized to the user.
In some implementations, images and links can be displayed attached to each piece of content, such as images, added by users, e.g., in which these links can be specific to the viewer's data and data in the news feed of their social network. For example, the user interface can be operated to display images and content when checking into games and when obtaining a reaction photo.
In some implementations, the user interface can provide the user with an option to purchase a hardcopy of their image after viewing it. In some examples, the user interface can provide the user with an option to purchase a hardcopy of another user's image after viewing it. For example, the user interface can be configured to store the user's payment details, e.g., so the hardcopy image can be purchased with fewer steps required, including quick click-through and saved details for repeat purchases.
In some implementations, the user interface can enable the processed image to be shared on social networks. For example, each image shared on social networks displays a specific website link, which is attached and can be viewed by other social network users. For example, the attached link can be specific to a variety of factors, e.g., determined by the user, the event, the moment captured, or the sharing time, which alter the content within the link or the link itself. The content that is displayed in the link is specific to the user that supplied the link, e.g., this content can be other images taken of that user during different moments of the event. The content that is displayed in the link is specific to that event/moment.
The processing system can include an image database, in which the images are collated from the cloud and labeled under each moment. For example, these images are pre-grouped so that once captured they go to the appropriate group, e.g., so users and/or image publishers can locate a desired image faster, e.g., groups including families, best quality, passionate fans, etc.
The described system can include a lighting system, which can be configured to operate with the disclosed camera system. In some implementations, the lighting system can focus light rays on specific sections of a crowd when an image of that section is being captured, e.g., in which the light rays move with the camera angle. The lighting system can be configured to utilize a mirror or reflective surface to redirect light rapidly instead of the whole lighting system moving.
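One way to keep the lighting angle matched to the imaging angle is sketched below in Python, assuming a simple venue coordinate frame; the positions, numbers, and function name are purely illustrative assumptions and do not come from the disclosure.

    import math

    def aim_angles(source_xyz, target_xyz):
        """Pan (azimuth) and tilt (elevation), in degrees, from a light or
        camera position toward the centre of the crowd section being imaged."""
        dx = target_xyz[0] - source_xyz[0]
        dy = target_xyz[1] - source_xyz[1]
        dz = target_xyz[2] - source_xyz[2]
        pan = math.degrees(math.atan2(dy, dx))
        tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
        return pan, tilt

    # Aim a light at the same section a camera is about to image, so the
    # lighting angle tracks the imaging angle.
    section_centre = (40.0, 25.0, 12.0)  # assumed venue coordinates (metres)
    light_pan, light_tilt = aim_angles((0.0, 0.0, 20.0), section_centre)
    print(round(light_pan, 1), round(light_tilt, 1))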
In an aspect of the disclosed technology, a method for providing an image of an attendee at an event includes operating one or more image capturing devices to record images of attendees of an event situated at locations in an event venue; processing the images, in which the processing includes mapping the locations to a grid including coordinates corresponding to predetermined positions associated with the event venue, defining an image space containing an individual at a particular location in the event venue based on the coordinates, and forming a processed image based on the image space; and distributing the processed image to the individual, or focusing the individual's display on the location and area of that processed image that is specific to them.
In another aspect, an imaging service system of the disclosed technology includes a plurality of cameras arranged in an event venue to capture images of attendees at an event, and one or more computers in communication with the cameras to receive the captured images and provide coordinates to the captured images that correspond to locations in the event venue to associate individuals among the attendees to respective locations in the event venue.
Exemplary Hardware
The present technology includes a series of rapidly moving mechanisms that alters either the specific position of the camera's or cameras' angle, to focus on specific areas, or the angle of a reflective mirrored surface that a camera or cameras are facing, allowing a series of photographs to be taken. Each camera system focuses on a specific section of the crowd.
A variety of mechanisms can be used to rapidly move the camera's or mirror's angle, stop at specific positions and then rapidly stabilize for the image to be taken, if the camera or mirror requires stopping at all. Magnets/electromagnets (including stepper motors), an electric motor, elastic tension, a spring mechanism, pistons or compression supplying a movement through a gas or liquid medium, and gravity can power the movement of the camera's or mirror's position. The camera can take images while continually moving, or during the mirror's continual movement, or be stopped and stabilized to take the image by using magnets/electromagnets, a physical barrier, friction, or a stop in power to stop the movement mechanism. There are multiple combinations of these rapid camera/mirror movement mechanisms combined with the precise and rapid stop/stabilization mechanisms to photograph fan reaction images. Multiple cameras can be held by the moving mechanism, and multiple cameras can focus on one moving mirror mechanism.
If the images are taken while the system is panning, the images are timed with the moving mechanism to ensure each shot is taken of a specific and predetermined area.
If stopping the camera/mirror to take the image, the camera is triggered remotely. The focus function of the camera can be triggered as the movement for each shot is nearing the static position or when fully static and stabilized, to ensure the correct focus is used for that section of crowd to obtain a clear image. The image can be captured when the camera is static and stabilized. Once the image has been taken, this relays to the moving mechanism via optics or another signal, such as an electric signal, so that the next movement can occur. The number of degrees of movement can be varied for each shot to ensure that only the specific people in that area are captured. The movement could also be timed in between image captures, with both mechanisms triggered at the same time.
If stopping the camera to take an image, the system is triggered remotely and an image is taken; this triggers the movement of the motor or motors to the next position and also the next focus position, both of which have been preset. When the camera arrives at the next position and once the focus is correct, the next image is triggered. This sequence continues until the series of images has been taken; the system is then placed in standby, ready to be retriggered.
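This triggered sequence can be summarized by the following Python sketch; the stub StepperMotor and Camera classes, the preset values, and the run_series name are assumptions for illustration only and are not the actual control software.

    import time

    class StepperMotor:
        """Stub for the camera-moving mechanism (hypothetical interface)."""
        def move_to(self, steps): print(f"move to {steps} steps")
        def wait_until_stopped(self): time.sleep(0.02)

    class Camera:
        """Stub for a remotely triggered camera (hypothetical interface)."""
        def set_focus(self, value): print(f"focus {value}")
        def capture(self): print("capture")

    # Preset (motor position, lens focus) pairs for one triggered series.
    SHOT_SEQUENCE = [(0, 1.20), (400, 1.35), (800, 1.50), (1200, 1.70)]

    def run_series(motor, camera, settle_s=0.05):
        """Move to each preset position, apply its preset focus, capture while
        static, then move on; finally return to the start, ready to retrigger."""
        for steps, focus in SHOT_SEQUENCE:
            motor.move_to(steps)        # rapid move to the next preset angle
            motor.wait_until_stopped()  # stop and stabilize
            time.sleep(settle_s)        # brief settling time before the shot
            camera.set_focus(focus)     # preset focus for this crowd section
            camera.capture()            # image taken while static
        motor.move_to(SHOT_SEQUENCE[0][0])

    run_series(StepperMotor(), Camera())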
If multiple cameras are used for one moving mechanism, the movement can be delayed until all cameras have signaled that the shot has been taken, or until a specific number (for example, 5 of 8) are complete, so that the next movement is made and the delay does not heavily affect the system. If each camera controls its own robotics, then the above point does not need to apply.
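A minimal sketch of this completion-threshold rule is given below; the function name and values are illustrative assumptions rather than part of the disclosure.

    def ready_to_move(shots_done: int, total_cameras: int, threshold: int) -> bool:
        """Allow the next movement once enough cameras have reported their shot
        (e.g., 5 of 8), so one slow camera does not stall the whole series."""
        return shots_done >= min(threshold, total_cameras)

    assert ready_to_move(shots_done=5, total_cameras=8, threshold=5)
    assert not ready_to_move(shots_done=3, total_cameras=8, threshold=5)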
If the images are being captured without the system stopping, the focus of the lens on the cameras can be adjusted in a predetermined manner, either mechanically, using a robotic adaptation that turns the lens dial to a specific point, or electronically. This is timed so that during the camera/mirror panning, in between the shots, the focus is adjusted so that when the camera angle is pointing at the desired subject area, the shot is taken with the specific focus corresponding to it. Therefore each image will have its own predetermined focus setting. This mechanism can also briefly stop for the image to be taken, without having to reduce the system's speed by waiting for the focus to adjust.
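A sketch of such a predetermined focus schedule, timed to the pan, is shown below; the times and focus values are illustrative assumptions, not calibration data from the disclosure.

    # Predetermined (shot time from trigger, lens focus) schedule for a
    # continuously panning camera; values are illustrative only.
    PAN_SCHEDULE = [
        (0.5, 1.20),
        (1.0, 1.35),
        (1.5, 1.55),
    ]

    def next_shot(elapsed_s: float):
        """Return the (time, focus) of the next shot so the lens can already be
        adjusted when the camera angle reaches the desired subject area."""
        for shot_time, focus in PAN_SCHEDULE:
            if elapsed_s < shot_time:
                return shot_time, focus
        return None  # series complete

    print(next_shot(0.7))  # -> (1.0, 1.35): pre-set the focus before the 1.0 s shot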
If the camera does not require a manual focus adjustment, the images are taken at times matched specifically to the mechanism's movement.
This cycle continues until a series of images have been captured; once complete, the camera or mirror can then be moved back to its starting position, ready to repeat the process. The camera or mirror may not need to be moved back to its starting position; it could remain at its finishing position and move in the reverse direction relative to the previous series of images. Both methods could be repeated after the system or systems have been triggered, to produce repeat images of the series that has been taken after the same trigger moment.
The system setup involves one or more stations within the zoom proximity of the cameras' focus. The cameras and moving mechanism modules are then positioned on or are attached to these areas. Once calibrated, these modules are placed in the same specific area for each use. The triggering of the modules, to start taking photographs and movement, is done remotely and manually. A central trigger can control multiple mechanisms, ensuring all images are captured from the same reaction moment and reducing the number of operators required.
The mirror mechanism can also capture large numbers of images in a different setup. This is a series of mirrors aligned at different angles which rapidly drop down after each image has been taken.
This movement can again be controlled, as stated above, using the camera, can use multiple cameras focusing on the aligned mirrors, and can use the manual focus-altering mechanism.
A final method that involves no movement is to have a series of cameras or image-taking devices, each set to a specific subject, that are triggered at specific moments during the event to take an image of only that subject.
Due to potential blurring of images, caused by vibration from either the adjacent mechanism's movement or vibrations from the stadium, a stabilization platform will absorb any unwanted shake from the system.
Exemplary Software
The series of images captured can be labeled in a specific order and then relayed to an area in which software adds a specific grid to each, or the labeled images can be added/overlaid on the specific grid. This grid is coded to specific predetermined co-ordinates which will relate to individual sections of each image once the grid and image have been combined. This area could be an on-site laptop, a cloud system or an external computer.
The image flow is from the camera to the cloud/laptop; the labeled image and specific grid are then assigned together.
Individuals can request these images by providing the details of their specific area during the event; for example, this could be seat numbers. The area or seat code specified by an individual relates to a specific code on the specific grid that is attached to the image taken of them. They can then be sent the specific area of the specific image that they are in. This means that every image request for each seat or area will obtain a specific part of the image that is sent. This could also operate using another method which directs the individual to the part of the image/grid they have requested, sending them to a specific location on the specific image taken.
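A minimal Python sketch of this seat-to-image lookup is shown below, assuming an illustrative grid layout and seat codes that are not part of the disclosure; it returns the area of the photograph that corresponds to a requested seat.

    # Grid for one labeled photograph: seat code -> (left, top, right, bottom)
    # pixel box within that image (illustrative values only).
    SEAT_GRID = {
        "SEC104-R12-S07": (1800, 950, 2300, 1400),
        "SEC104-R12-S08": (2250, 950, 2750, 1400),
    }

    def region_for_seat(seat_code: str, margin: int = 100):
        """Look up the image space for a requested seat, padded by a margin."""
        left, top, right, bottom = SEAT_GRID[seat_code]
        return (left - margin, top - margin, right + margin, bottom + margin)

    print(region_for_seat("SEC104-R12-S07"))

The returned box could then be passed to an image library's cropping routine (e.g., Pillow's Image.crop) to produce the segmented image sent to the individual, or used directly to pan the viewer to that location in the full photograph.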
Multiple images can be captured per moment when the camera system is triggered and set to repeat its cycle. When the images are requested, or position of an individual is identified, the images of each moment captured are pooled so that when the user wishes to view them they can quickly see each image after one another. This can be done by the images being prepared or pooled together for each set of images taken of the individual, or by loading or pooling the co-ordinates to quickly direct the user to the correct image. These two methods can also be implemented for all images taken throughout the event, so the user can view different moments instantly too.
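This pooling can be sketched as a simple grouping of the captured shots by seat and moment; the record structure and file names below are assumptions for illustration only.

    from collections import defaultdict

    shots = [
        {"moment": 3, "seat": "SEC104-R12-S07", "file": "m3_a.jpg"},
        {"moment": 3, "seat": "SEC104-R12-S07", "file": "m3_b.jpg"},
        {"moment": 5, "seat": "SEC104-R12-S07", "file": "m5_a.jpg"},
    ]

    pooled = defaultdict(list)
    for shot in shots:
        pooled[(shot["seat"], shot["moment"])].append(shot["file"])

    # All images of this seat for moment 3, viewable one after another.
    print(pooled[("SEC104-R12-S07", 3)])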
The images taken are connected to specific information and images of the moment captured.
So that all of the users can be captured in a group for every moment in the same photograph, each image can be taken of an overlapping area. The software ensures that when an image is requested, it can deliver the image from the appropriate photograph, which has the larger area away from the photograph edge. This ensures no individuals are cut in half if they are at the edge of a photograph, so that individuals can be reliably supplied with a full image for each moment captured.
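One way this selection could work is sketched below in Python, using assumed pixel geometry: the photograph in which the requested subject lies farthest from the image edge is delivered. The names and values are illustrative only.

    def distance_from_edge(box, image_width):
        """Smallest horizontal distance between a subject box and the photo edge."""
        left, _, right, _ = box
        return min(left, image_width - right)

    def choose_source(candidates):
        """candidates: list of (image_path, subject_box, image_width). Prefer the
        photograph where the subject sits farthest from the edge, so nobody at a
        seam between overlapping shots receives a cut-off image."""
        return max(candidates, key=lambda c: distance_from_edge(c[1], c[2]))[0]

    picked = choose_source([
        ("cam1_shot.jpg", (3900, 900, 4090, 1300), 4096),  # subject near the edge
        ("cam2_shot.jpg", (600, 900, 790, 1300), 4096),    # same subject, well inside
    ])
    print(picked)  # cam2_shot.jpg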
These images may also be instantly overlaid on a previously made panoramic image, to portray the impression that the whole panoramic was produced during the period in which the ‘moment’ images were taken.
For the security side, a series of images are rapidly taken during a period in the event; these images are attached with the seating/area grids and are available to be accessed by the users during the event. Users can then identify another individual by placing a marker on their image. This marker can identify the seat number using the attached predefined grid, either by the marker being placed in an area of the grid relating to a seat number/area or by the marker being closest to a specific point, again attached to a specific seat number or area. The image and/or seat number of the accused is then sent to a database or to mobile devices. Each image/seat number/area is coded to the specific area that a device is operating in, e.g., only the specific image/seat number is sent to the event steward who has that particular area of the crowd to manage. The users can also identify what the accused individual has supposedly done by using a drop-down option or by commenting on the incident.
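A minimal sketch of resolving a dropped marker to a seat and routing the alert to the steward covering that section is shown below; the seat coordinates, section naming, and device identifiers are illustrative assumptions and not part of the disclosure.

    # Seat centres on the reference image and steward routing (assumed values).
    SEAT_CENTRES = {
        "SEC104-R12-S07": (2050, 1175),
        "SEC104-R12-S08": (2500, 1175),
    }
    STEWARD_FOR_SECTION = {"SEC104": "steward_device_17"}

    def seat_for_marker(marker_xy):
        """Map a marker dropped on the reference image to the nearest seat."""
        mx, my = marker_xy
        return min(SEAT_CENTRES, key=lambda s: (SEAT_CENTRES[s][0] - mx) ** 2 +
                                               (SEAT_CENTRES[s][1] - my) ** 2)

    def build_alert(marker_xy, reason):
        """Build the alert sent only to the steward managing that section."""
        seat = seat_for_marker(marker_xy)
        section = seat.split("-")[0]
        return {"seat": seat, "reason": reason,
                "send_to": STEWARD_FOR_SECTION.get(section)}

    print(build_alert((2080, 1160), "disturbing other attendees"))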
The software system can differentiate the security image set taken from the reaction moment images taken, and both sets of images are placed into separate functional routes.
Exemplary App and Internet Site
In some aspects, the disclosed technology includes a software application (‘app’) to provide users of the disclosed technology with a unique experience to receive, enjoy, and share content associated with the events attended by the users. The exemplary app can be implemented by the user on his/her mobile device in real-time during an attended event and after such events, as well as on his/her computer devices.
For example, images can be accessible via an Internet site or mobile application. This content will form a social network in which individuals can connect with other users to share photographs of the events attended. A personal profile can be used to store the images they have requested or have taken. When utilizing the mobile application, the user's device will utilize its location signal to prioritize the event they are attending; it will also notify them if they are in the vicinity of the event, during the event.
The individual will be able to specify their exact location or seating during the event so that the specific image, or series of images, can be sent to them after they have been captured. This relates to the grids on the coded photographs produced by the software. There is also an option to keep this specific area/seat location saved for the duration of multiple events at the same venue, e.g., during a season.
The specific image, or code to access the image, within the photograph is sent to the individual after every ‘moment’ captured and can be pooled together so the individual can browse multiple images taken during the same moment or different moments. The individual can choose the image they desire and how much of the photograph they desire within that image. They can also edit the image by adding a variety of personalization options, such as filters, captions, templates, joining images accessed within the network together etc.
The images taken can be linked to images of the event moment and information of that event, such as emblems or text, and are displayed in a connected manner. Specifically, this could be by combining the user image with an image of the moment. The interface in which the two images, one of the user and one of the moment, interact is that one is on top of the other until scrolled. This adjusts the user's view of one of the images so the other becomes more prominent. This could be the image reducing in size, sliding out of the way, or fading. This interface allows a viewer to scroll through other users' images in a ‘newsfeed’ to view both the moment and the reaction to the moment in an uninterrupted manner on a small screen. As the vertical or horizontal movement of the scrolling news feed occurs, one of the overlapping images alters to reveal the ‘twinned’ image, during the same scrolling movement.
When an individual accesses their specific image, in order for them to experience the scale of the panoramic moment captured without a large loading time, a series of images are loaded with various levels of resolution, all specific to the requested image. This starts with the whole panoramic at very low resolution; each image that is further zoomed into, towards the desired image, has a higher resolution. This is a seamless process and allows the user to witness the manual or automatic appearance of a high-resolution rapid-zoom interface, when only the final requested image area is loaded in full resolution.
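The coarse-to-fine loading order can be sketched as below; the zoom levels and tile names are illustrative assumptions, not the actual loading pipeline.

    def loading_plan(target_tile: str, max_level: int = 3):
        """Return (zoom level, tile) pairs from the whole low-resolution
        panoramic down to the full-resolution tile the user requested."""
        plan = [(0, "panorama_lowres")]  # whole scene at very low resolution
        for level in range(1, max_level + 1):
            plan.append((level, f"{target_tile}@z{level}"))  # progressively sharper
        return plan

    for level, tile in loading_plan("sec104_block3"):
        print(f"load level {level}: {tile}")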
If accessing another image in the series of images of that user, the final images also load during the same panoramic loading period, so the user can quickly access each one. The same previously loaded, low-resolution panoramic images are kept for the same experience to be repeated, but the final few zoomed-in images of the different moment are replaced. After the loading has completed, images around the user's image gradually begin to load.
Due to the large number of high-resolution images being viewed, and for a better user experience, the pixels are loaded in specific areas of each photograph, which could be horizontally or vertically loaded, as opposed to waiting for the whole image to load. As the user views images in the panoramic, the adjacent images load; the images that are activated for loading also take into account scrolling behavior and the direction the user is generally moving towards.
Another interface option for mobile applications involves having a specific image load depending on which direction is swiped/pressed, representing a batch of images that are viewed, as opposed to a seamless transition.
The finalized image can then be shared on the social network as well as other social networks or email addresses.
For each image shared, a specific link to additional website content is attached. A variety of factors, determined by the user, the event, the moments captured, and the sharing time, alter the content within the link or alter the link itself. This content/link can be adjusted based on a predetermined formula related to specific content of the image sharer.
The mobile application is the primary point of interaction for users. The app is meant to enhance the event experience through a number of features. Primarily, it provides a simple means to connect with friends and fellow fans to share photos of each other that were captured with the camera technology during the event. Second, if used at a sports event, the mobile app will record the outcome of the game (win or loss) to the profile of the user. This data will become the basis of a “Stat Tracking” system that allows sports fans to keep track of their team's performance specifically when they (the users) are in attendance. Third, the mobile app will allow users to anonymously report other fans that are being disruptive, aggressive or ruining the match day experience for others. When referring to sports events, the following can also be applied to all events which draw large crowds, such as concerts, festivals, celebrations, etc. When referencing sports or games, this can also be replaced with other events mentioned previously. Seat numbers may also be replaced with a different method of locating an individual, such as stand names, sections or areas, etc.
Exemplary Advantages
For example, the disclosed technology can be implemented at sports events, in which the camera system and data processing systems of the disclosed technology operate together to create a unique user experience and an entirely new advertising medium that directly benefits sports teams, fans, and advertisers. This is achieved by capturing emotional photos with the hardware, uploading these photos to a cloud server and then using an app and social network platform to retrieve and deliver this data/photos to users. These images can then be shared on a social platform or through a variety of existing platforms.
Advertisers, brands, and sports teams are in a constant battle to create and deliver new and engaging content that allows them to connect better with their consumers. For example, nothing conveys emotion and feeling better than a reaction image during the event.
Some exemplary advantages of the hardware include the following. The rapid image capture will ensure each image can be taken as close as possible in time to the previous image, to capture the same reaction moment. This will mean fewer cameras will be required to capture images of the entire audience. Having manual control of the focus will further reduce the delay time. The flow of controls means that the images captured will be in focus and also respond as quickly as possible.
If using mirrors with multiple cameras, fewer moving modules will be required to produce the same series of images. If using drop-down mirrors, this allows a very quick change in camera angle by having no need for a stopping or stabilization delay time. The manual remote triggering will allow the specific moments to be captured at an accurate point by reacting to crowd behavior.
Some exemplary advantages of the software include the following. The photographs being instantly assigned to grids allows individuals to access their images as soon as they are available. Having individual grids with each image allows the user to receive their exact location within the image. This allows them to instantly view the area of the image they are in without searching within the photograph; it brings the image to the user. The images being pooled for each moment allows the user to instantly see a series of images capturing them for each moment. The specific information the image is assigned to allows the user to view this information on a mobile or website interface, acting as a reference to the event and moment their image is assigned to.
Ensuring that the most suitable image is sent, without photograph edge or ‘stitching’ issues, provides reliable image quality. Overlaying the ‘moment’ images on a previously made panoramic enables the users to experience the scale of the image taken instantly, without the time delay required to stitch the panoramic images together.
Being able to rapidly take the images and build a virtual map of the crowd allows an interface in which users can anonymously and rapidly notify security of issues as soon as the event begins. This not only provides event security with specific alerts on issues occurring during the event but also provides a list of faces and seat numbers to deal with after the hectic event.
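As an illustration of the anonymous reporting idea, the following sketch (hypothetical class and field names, not from this disclosure) shows how reports tied only to a seat or area could be collected and summarized for venue security:

```python
# Hypothetical sketch: collect anonymous reports tied to seat locations
# and build the list of locations for venue security to review.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Report:
    seat: str      # reported seat or area, e.g. "B-204-07"
    reason: str    # e.g. "aggressive behavior"

@dataclass
class SecurityDesk:
    reports: List[Report] = field(default_factory=list)

    def file_report(self, seat, reason):
        # No reporter identity is stored, so the report stays anonymous.
        self.reports.append(Report(seat=seat, reason=reason))

    def open_locations(self):
        return sorted({r.seat for r in self.reports})

desk = SecurityDesk()
desk.file_report("B-204-07", "aggressive behavior")
print(desk.open_locations())  # ['B-204-07']
```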
Some exemplary advantages of the app/site include the following. Notifying users of the application who are near a venue during the event period provides a specific, precisely timed, and targeted reminder to access the images, avoiding the annoyance of content that does not relate to that user.
Allowing the user to send information that locates them within all the images allows the images, or the image locations, to be sent to them, avoiding the hassle of locating themselves manually. Pooling these images together enables the user to rapidly compare the images taken of them during the event.
A ‘twinned’ image interface allows two (or more) images to be associated with each other, displaying both the moment of the event and the user's reaction in a seamless scrolling interface on a newsfeed, allowing rapid viewing of many different user ‘moments’.
A quick panorama-loading method allows users to gain a sense of the scale of the event without waiting for the loading of many groups of pixels they do not wish to view in detail.
Changing the content or link associated with the uploaded images based on the user, event, and moment allows the content to remain dynamic and specific, so there is more incentive to click through.
In some implementations of the disclosed technology, for example, the disclosed image capture, image processing, and social networking platform can be directed to sporting events. Sports are filled with dramatic moments: the Hail Mary, the walk-off home run, the buzzer beater. But after it's all over, how can people preserve and share these great memories? What if a fan were able to go back and relive these moments? The disclosed technology allows fans to capture and share pictures of themselves during the most amazing moments in sports, e.g., without ever touching a camera. For example, the camera technology is preinstalled in stadiums, and the camera system captures images of every fan during key moments, e.g., including the historic, highly emotional moments of games or matches. Fans then type their seat number or spectator location into a user interface implemented on a mobile application or website to access and share their photos, e.g., with friends, with other fans, or with the sporting organizations. Exemplary images captured by the camera system can offer users content that cannot otherwise be captured and has never been seen before. For example, the reactions of passionate fans when their team scores are uncontrollable, which is what makes this content hilarious, entertaining, and timeless.
For example, the exemplary hardware component of the described systems can capture photos to generate the base of the content that forms the backbone of the social network and photo-sharing platform. These photos are of specific reaction moments of fans watching the live events. For example, the sports social network is focused on sharing users' experiences from sporting events, creating a simple way for fans to keep track of each other and the events they attend. These photos represent visual souvenirs of the most interesting moments. Users can download the app of the disclosed technology to access their photos, and this can be interfaced with existing social networks, e.g., such as Twitter, Facebook, and Instagram, or others. This photo content can leverage these existing social networks to create impressions of the images and bring greater demand for them.
Exemplary EmbodimentsAn image capture, processing, and delivery system of the disclosed technology includes a plurality of the image capturing modules 104 arranged in the event venue 101 to capture the images of attendees at an event. In some implementations of the system, for example, the image capturing modules 104 are configured within stations, which are fixed to the event venue 101. These exemplary stations house the image capturing modules 104 (e.g., camera modules, each of which includes a camera moving mechanism holding a camera/lens). The exemplary camera modules can each have power and Internet cabling connected to the module, e.g., from the existing infrastructure of the event venue 101, to supply electricity to the module and transfer data to/from it.
In some implementations, for example, to capture an image of the moment the crowd is reacting to, separate cameras can be located either on or off the station. These can have a set position or be remotely controlled to follow the action, take continuous shots of the action or be triggered manually.
The image capture, processing, and delivery system can be used to acquire images at daytime events in outdoor event venues where ambient light is present or in indoor event venues that provide adequate lighting. The image capture, processing, and delivery system is also capable of taking images at night events and/or poorly lit events, e.g., such as concerts or poorly lit sporting events deep within some stands. To provide light to the subject without affecting the user's experience, the disclosed image capture, processing, and delivery system can include one or more light sources that can be timed to focus on the section of crowd from which the images are to be taken and pulsed while the image is being captured. For example, implementation of the exemplary lighting system of the disclosed technology can ensure that a user does not receive constant glare, and the light sources can be used for multiple sections of crowd. The light sources can move to focus on a desired focus point or can be focused on a moving mirror that reflects the light to the required angle, e.g., producing an easier method to rapidly move the focus area of the light. For example, the camera and the pulsing and positioning of the light sources are configured to interact so that the timings are precise.
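The pulsed lighting interaction might be coordinated roughly as in the sketch below; the function names and the stand-in hardware callables are assumptions for illustration, not part of the disclosed system:

```python
# Hypothetical sketch: pulse a light on the target crowd section only for
# the duration of the exposure, so spectators do not get constant glare.
import time

def pulsed_capture(aim_light, light_on, light_off, capture, settle_s=0.02):
    """Aim the light, pulse it, and capture while the section is lit."""
    aim_light()            # move the light (or its mirror) onto the section
    light_on()
    time.sleep(settle_s)   # short settle so the pulse covers the exposure
    image = capture()
    light_off()
    return image

# Example wiring with stand-in callables:
frame = pulsed_capture(
    aim_light=lambda: print("aim at section 12"),
    light_on=lambda: print("light on"),
    light_off=lambda: print("light off"),
    capture=lambda: "raw-image-bytes",
)
```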
In one embodiment, for example, the moving mechanism 114 of an image capturing module can be configured as an electromagnetic stepper motor 114a.
The combination of the rapid moving mechanisms and the rapid stopping and stabilization mechanism can form a hardware robotics system of the disclosed technology for unmanned control of the image capturing system configured at an event venue, which rapidly moves an imaging unit (e.g., a camera, or a mirror in the imaging direction of the camera) to change the image angle and then stops and stabilizes it to capture an image.
For example, multiple movement mechanisms and stop/stabilization mechanisms can be combined to enhance speed and precision of the image capturing modules. For example, such movement can be horizontal and vertical panning. In some implementations, for example, the cameras or mirrors may not need to be stopped/stabilized while panning to capture the photographs in focus. In some implementations, for example, the images can be timed with the movement speed to ensure each image is taken of a particular section of crowd. For example, positional and temporal data can be associated with each captured image. In some implementations, for example, the cameras may also not need to be moved at all, in which each is positioned to focus on a section of crowd and the images are taken when triggered.
Referring to
In another embodiment, for example, the moving mechanism 114 of an image capturing module can be configured as an electromagnetic stepper motor 114b.
In another embodiment, for example, the moving mechanism 114 can be configured to rapidly stop/stabilize the moving camera/lens 113 (or mirror) by using a physical block that halts the movement at a specific place. This block can be timed to move into place or may receive a trigger when a specific part of the moving piece is in the correct place to be stopped.
In other implementations, friction can also be used to stop the movement of the moving mechanism 114 to stop/stabilize the camera module 112 (or mirror) during image capturing to aid in the rapid capture of reaction images of spectators during an event.
Another method to cause rapid movement of the camera module 112 (or of a mirror module that assists in image capture by the camera module 112) to the next angle position is to apply an elastic force to the module by releasing elastic tension that has previously been stored.
In some implementations, for example, the moving mechanism 114 can be configured to cause rapid movement of camera module 112 (or mirror) to the next angle position via exerting and releasing tension from a spring, as shown in
In some implementations, for example, the moving mechanism 114 can be configured to move the camera module 112 (or mirror) by using air pressure to apply force against a moving piece.
For example, when the camera's or the mirror's angle is changed, the subject in a crowd will be at a different distance, which means a different focus is required for the camera to take an image that is not blurred. Adjusting the focus of a camera can take time. The disclosed image capturing technology can rapidly capture images by pre-setting the focus parameters of the camera 113 (or mirror) for each position at which an image is pre-determined to be acquired, thereby reducing any delay in capturing each image and making the system faster. In some implementations, for example, this can be done electronically, and in other implementations, this can be done manually.
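One possible way to realize such pre-set focus parameters is a simple lookup keyed by capture position, as in the hypothetical sketch below (the position indices and focus values are illustrative only):

```python
# Hypothetical sketch: pre-calibrated focus values keyed by camera position,
# applied while the camera is still moving so no refocus delay is added.
FOCUS_BY_POSITION = {   # calibrated before the event
    0: 3.2,             # position index -> lens focus distance in metres
    1: 5.8,
    2: 9.5,
}

def focus_for(position_index, set_focus):
    """Drive the lens to the stored focus value for the next position."""
    set_focus(FOCUS_BY_POSITION[position_index])

focus_for(1, set_focus=lambda value: print(f"lens driven to {value} m"))
```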
This process is illustrated in the diagram of
For example, after the user has identified his/her location in the crowd, each of the sections of the processed images that he/she appears in can be pooled together so that the user can quickly obtain a series of images of himself/herself; when requested, each of the processed images from the series can then be rapidly displayed or provided to the user, e.g., on the user device.
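A minimal sketch of such pooling is shown below; the seat labels, moment tags, and file names are hypothetical placeholders:

```python
# Hypothetical sketch: pool every processed image of one seat across the
# moments of an event so the series can be shown to the user at once.
from collections import defaultdict

# (seat, moment) -> image reference, as produced by the processing step.
processed = {
    ("A-101-12", "goal 23'"): "img_0231_crop.jpg",
    ("A-101-12", "goal 67'"): "img_0812_crop.jpg",
    ("A-101-13", "goal 23'"): "img_0232_crop.jpg",
}

def pool_by_seat(processed_images):
    pooled = defaultdict(list)
    for (seat, moment), image in processed_images.items():
        pooled[seat].append((moment, image))
    return pooled

print(pool_by_seat(processed)["A-101-12"])
```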
In some implementations, for example, during an event, each moment captured of the crowd can be associated with the information associated with that event.
For example, because each image capture module is set up to capture images of multiple, different sections of crowd in response to a particular moment of the event, some people could potentially be cut off at the edges of each photograph, resulting in poor image quality for some attendees during that moment. The disclosed image capturing and processing technology resolves this potential issue.
In some implementations, for example, the image capturing and processing technique can rapidly produce a panoramic image of the crowd after specific moments by using a premade panoramic image and overlaying the specific images at particular points to fill in the crowd.
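Assuming the paste offsets for each section are known from calibration, the overlay could be performed roughly as sketched below using the Pillow imaging library; the file names and offsets are hypothetical:

```python
# Hypothetical sketch: paste the freshly captured section images onto a
# pre-made panorama at known offsets instead of stitching from scratch.
from PIL import Image

def overlay_moment(panorama_path, section_images, out_path):
    """section_images: list of (image_path, (x, y)) paste offsets."""
    panorama = Image.open(panorama_path).copy()
    for path, offset in section_images:
        panorama.paste(Image.open(path), offset)
    panorama.save(out_path)

# Example call (placeholder file names and offsets):
# overlay_moment("venue_panorama.jpg",
#                [("section_12.jpg", (4800, 1200)),
#                 ("section_13.jpg", (5600, 1200))],
#                "moment_panorama.jpg")
```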
One of the main aspects of the disclosed technology is an application (app) that provides the foundation for a mobile, sports-specific social network. Other functionality such as navigation, security, log-in, etc., that also contributes to the app experience is detailed in a list of exemplary functional features. For example, a user may log in using Facebook credentials, Twitter, or another social network, or create a new profile to access the mobile app. If a user logs in with Facebook credentials, then his/her friends list can be pulled in, e.g., automatically. A user may use his/her email, Facebook, or phone contact list to find and invite new users to the app. Users can “follow” each other to be able to view each other's photos. For example, a geo-location feature and the current date/time from the user's smart phone can allow the app to identify which sports game the user is attending. For example, by entering his/her seat number, the user “checks in” to the game, and this data is saved to the user's profile. For example, “checking in” to a game allows the user to browse his/her photos from the match. While browsing photos, users can view the metadata (e.g., touchdown, home run, etc.) of the moment on the field, e.g., including pictures, from which the users demonstrating their reactions to that moment were photographed. For example, a user may apply different photographic filters to the user's images. For example, a user may use native multi-touch features to pinch and spread to zoom his/her photo to the desired crop. For example, a user may add a caption to his/her photo. For example, photos can be shared on the Feed of the app, in which these photos are visible to those contacts that follow the user. For example, photos can be ‘liked’, ‘cheered’, or ‘booed’ and commented on by other users. A user can share his/her photos on Facebook, Twitter, Instagram, or other social networks if the user grants permission to do so. For example, the app can provide a repository to view past photos organized by the game attended. When checked in to a game, the result of the game (win or loss) is recorded to the user's profile. For example, a user profile can include a chosen profile photo, an editable text bio, and stats from the game(s) attended, including statistics personalized to the user's experience and interaction with particular events (e.g., a particular team): for example, a team's and/or a user's win-loss record, win %, win streak, team win % when the user attends, and team win % when the user does not attend. For example, a user can use a search function to find event-, team-, and player-specific photo content from other users who are not friends, using hashtags and ‘@’ symbols. A user will be able to upload photos to the app from his/her camera roll or take a photo himself/herself. A user will be able to “drop a pin” on other fans in the venue and report them for aggressive or disruptive behavior. A user will be able to set an alert, which signifies that there is an issue (e.g., regarding safety or security) in that area.
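As an illustration of the “Stat Tracking” feature, the sketch below computes a team's win percentage when the user attends versus when the user does not; the sample results are hypothetical placeholders:

```python
# Hypothetical sketch of the "Stat Tracking" idea: record each checked-in
# game's result and compute the team's win rate when the user attends.
def win_pct(results):
    wins = sum(1 for r in results if r == "win")
    return 100.0 * wins / len(results) if results else 0.0

attended = ["win", "win", "loss"]        # results from games checked in to
not_attended = ["loss", "win", "loss"]   # the team's other games

print(f"win % when attending: {win_pct(attended):.0f}%")          # 67%
print(f"win % when not attending: {win_pct(not_attended):.0f}%")  # 33%
```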
The app can be used in a variety of cases. Some examples include the following: marketing materials at sporting events can prompt downloading of the app by the user; “checking in” to a game can also be prompted at the game, e.g., “Fans, remember to check in to the game using the app to get your free photos after the game”; and the app can notify the user when at an event to enter his/her seat number. The app can be used during games (e.g., directly after a moment of the game captured by the image capturing system), immediately after games, and any time after games.
The UI of the Fan Feed can include a persistent navigation bar, e.g., found along the bottom of the screen, which can function similarly to other mobile applications as the primary means of navigation. For example, the navigation bar can include exemplary icons like those shown in
- Home Icon—Brings user to the “Feed” that shows all of their own and friends' photos posted, along with any associated “likes”, “boos”, or comments.
- Magnifying Glass—Brings user to the search function to find event-, team-, and player-specific photo content from other users who are not friends, using hashtags and ‘@’ symbols similar to Twitter (e.g., #yankees, #homeruns, #buzzerbeater).
- Camera—Brings user to a split screen that allows a swipe to choose between browsing the photos generated for them or taking and/or uploading their own.
- Bar Graph—Brings user to their profile page that shows their profile photo, bio, games attended, and accumulated stats.
- Map Pin—Brings user to the “Check-in” screen to allow them to check-in to a new game.
- Cog (off of the profile tab)—Access to the “Report a fan” feature along with settings for sharing, privacy, version number, legal and help menus.
The exemplary app features described here also apply to a website-based interface.
For example, in some implementations, each image that is shared on a social network can be configured with a link to an external website added to it. This can allow various brands to advertise in conjunction with the image content produced. For example, these links can be adjusted according to the user's data, the event at which the image was taken, the moment the image captures, and the time the image was shared.
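Such link adjustment might amount to a simple lookup keyed by user segment, event, and moment, as in the hypothetical sketch below; the campaign table and URLs are placeholders:

```python
# Hypothetical sketch: choose the sponsored link attached to a shared image
# from the user, event, and moment it belongs to, so the link stays dynamic.
def link_for(user_segment, event_id, moment_tag):
    campaigns = {
        ("season-ticket", "NYY-2013-09-01", "homerun"): "https://example.com/a",
        ("casual", "NYY-2013-09-01", "homerun"): "https://example.com/b",
    }
    return campaigns.get((user_segment, event_id, moment_tag),
                         "https://example.com/default")

print(link_for("casual", "NYY-2013-09-01", "homerun"))
```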
For example, in some implementations, the app can also include an option to purchase hard copy versions of the image content generated.
In some aspects, for example, a method for image capture and delivery to one or more attendees at an event can include capturing images of the attendees in a crowd while they are viewing the event at the venue, in which the images are captured during the attendees' emotional reaction to exciting moments during the event, and the reaction images are captured using one or more cameras in communication with a data processing system (e.g., a server or servers). The attendee can provide information about his/her location at the venue to a server so that, when the reaction images are captured and processed, a specific image or set of images of the attendee's reaction to the event moment is sent to the attendee. The cameras capture the entire crowd during reaction image capture, the location information can be used to recall images, the images can be sent or made available to the attendee during the event, and the reaction images can be accessed on a mobile or personal device via an application or website.
In some aspects, for example, a method for capturing and processing crowd images, in which the images are of a crowd reacting to an instance during an event at a venue, e.g., using an unmanned robotic camera system, can include the following. Images can be captured during specific moments of the crowd's reaction, in response to an instance in the event, which triggers the image capture sequence. The imaging sequence can be calibrated prior to the event, and the positions are stored on the server. The robotics can include electric motors to provide multiple-axis camera movement. Servers are in communication with the camera and the movement robotics; for example, the server can be part of each camera unit, and/or the server can be at the event venue or remote. The camera can be configured with calibrated, pre-defined movement positions, and this sequence can be stored on a server and applied to the robotics when triggered. The robotics can move the camera to each position in the sequence to capture an image of a crowd area at each pre-defined position. The method can include capturing images with short periods of time between each image taken, e.g., capturing at least two images per second. The robotics can stop the camera to capture the image at each pre-defined position in the imaging sequence. The robotics can capture images while the robotics are still moving the camera through the imaging sequence. The robotics can also slow the camera at each pre-defined position in the imaging sequence so that the image is captured during the slower movement. For example, during the image capture sequence, the image capture on the camera can be triggered when the camera has reached its pre-defined position. For example, during the image capture sequence, the image capture on the camera can be triggered when the camera has reached its pre-defined position and is stable, using a feedback mechanism. For example, once the image has been captured, the camera feedback can relay a message to the server, which triggers the robotic motors to move the camera to the next pre-defined position in the sequence to capture the next image; this continues through the series of images in the sequence. For example, for each capture position, the focus value on the camera can be preset so that the correct focus value is driven to the camera as it is moving to each position. In some examples, the focus drive on the camera can be controlled by sending information to the camera from the server. In some examples, the focus drive on the camera can be controlled manually, by sending information from the server to an electric motor that drives the camera lens to a pre-defined value associated with its image capture position. Once the sequence is complete, the camera position is set at its next position, ready to be triggered at the next instance during the event that causes a crowd reaction.
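A minimal sketch of this triggered sequence, assuming simple callables for the motor, focus drive, stability feedback, and shutter (hypothetical interfaces, not the actual robotics API), might look like the following:

```python
# Hypothetical sketch of the triggered capture sequence: step through the
# calibrated positions, drive the preset focus while moving, wait for the
# stability feedback, capture, then move on to the next position.
import time

def run_sequence(positions, move_to, set_focus, is_stable, capture,
                 timeout_s=0.5):
    images = []
    for pos in positions:                         # calibrated before the event
        set_focus(pos["focus"])                   # focus drives while moving
        move_to(pos["angle"])
        deadline = time.monotonic() + timeout_s
        while not is_stable() and time.monotonic() < deadline:
            time.sleep(0.005)                     # feedback: wait for stability
        images.append((pos["label"], capture()))  # capture at this position
    return images

# Demo with stand-in hardware callables:
seq = [{"label": "sec-1", "angle": 10, "focus": 4.0},
       {"label": "sec-2", "angle": 25, "focus": 6.5}]
shots = run_sequence(seq,
                     move_to=lambda a: print(f"move to {a} deg"),
                     set_focus=lambda f: print(f"focus {f} m"),
                     is_stable=lambda: True,
                     capture=lambda: "frame")
print(shots)
```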
In some aspects, for example, a method for capturing a calibrated sequence of images of a crowd and processing the images, in which the images of the crowd are of them reacting to an instance during an event at a venue, can include the following. For example, each image of the calibrated sequence can be used as a reference to define an image space within the larger captured image that corresponds to a potential attendee location. For example, this image space can be an iterative cropped area of the larger image. For example, a series of image spaces can be defined in each image to create an index that represents a series of attendee locations during an event. For example, this index can later be applied to an image captured during the event to produce a series of smaller individual images that were part of the larger captured image and that are specific to the individual image space of an attendee. For example, the image processing can occur after the images are captured, using pre-defined information/index/mapping. For example, the attendee can provide his/her location information (e.g., including seat assignment), which can be used as the location to process the series of specific images of the attendee at the event. For example, the location information can be manually entered by a user via a website, mobile device, computer application, or other means. For example, the location information can be obtained automatically from a mobile device using geo-location. For example, each image space can be labelled, and the pre-defined label can be added to each image once captured. For example, the attendee location information can be assigned to the image space label so this image can be delivered to each attendee. For example, the attendees can be sent their individual images via a website, mobile device, computer application, or directly, etc. For example, servers can be used to process the images either at the venue or remotely.
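For illustration, the sketch below applies a calibrated index of image spaces (crop boxes keyed by seat label) to a captured frame using the Pillow imaging library; the seat labels and pixel boxes are hypothetical:

```python
# Hypothetical sketch: a calibrated index of image spaces (crop boxes keyed
# by seat label) applied to a captured frame to produce per-seat images.
from PIL import Image

CROP_INDEX = {                        # built from the calibration images
    "A-101-12": (1680, 540, 1920, 720),
    "A-101-13": (1920, 540, 2160, 720),
}

def crops_for_frame(frame_path, index=CROP_INDEX):
    frame = Image.open(frame_path)
    return {seat: frame.crop(box) for seat, box in index.items()}

# Example call (placeholder file names):
# crops = crops_for_frame("moment_0231_full.jpg")
# crops["A-101-12"].save("moment_0231_A-101-12.jpg")
```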
In some implementations, for example, a triggering system can be used to capture the images of the crowd reacting to an instance during the event, in which the triggering system is communicatively coupled to the image capture system. For example, the trigger can initiate due to an instance during the event, e.g., caused by an audio, visual, or mechanical perturbation stimulus. For example, all camera units in the venue can be triggered to capture images from the same trigger instance. For example, the trigger can include a manual trigger from an operator in communication with a server. For example, the trigger can be automatic, triggered by a sound, e.g., such as a threshold decibel level or a sound profile. For example, the trigger can be automatic from another detection system, such as a visual or paired system. For example, the trigger can be based on emotions displayed in the crowd or on a movement threshold of the crowd monitored by the cameras, and can be used to identify the best images if they are being continually captured.
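A sound-based trigger of this kind might compare a rough loudness estimate against a threshold, as in the hypothetical sketch below; the sample values and threshold are illustrative only:

```python
# Hypothetical sketch of an automatic audio trigger: fire the capture
# sequence when the crowd noise level crosses a decibel threshold.
import math

def rms_db(samples, ref=32768.0):
    """Rough loudness of signed 16-bit samples in dB relative to full scale."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-9) / ref)

def maybe_trigger(samples, start_sequence, threshold_db=-12.0):
    if rms_db(samples) >= threshold_db:
        start_sequence()

maybe_trigger([20000, -21000, 22000], start_sequence=lambda: print("capture!"))
```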
In some aspects, a method of image processing and delivery of images of a crowd, in which the captured images are of the crowd reacting to an instance during an event at a venue, can include associating with the captured images information used to describe the instance or moment that the crowd is reacting to at the event. For example, this information can be text or images; this information can be pre-constructed or added during the event after each instance; this information can be added to the image as metadata; and/or this information can be overlaid on the attendee's cropped individual image.
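The association of moment information with an attendee's image could be represented as simply as in the sketch below; the field names and sample metadata are hypothetical:

```python
# Hypothetical sketch: attach a description of the moment to each cropped
# image so the app can show what the attendee was reacting to.
from dataclasses import dataclass

@dataclass
class MomentPhoto:
    seat: str
    image_path: str
    moment: dict     # metadata describing the instance reacted to

photo = MomentPhoto(
    seat="A-101-12",
    image_path="moment_0231_A-101-12.jpg",
    moment={"event": "NYY vs BOS", "time": "Bottom 9th",
            "description": "walk-off home run"},
)
print(photo.moment["description"])
```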
In some aspects, a method of image processing and delivery of images of a crowd, in which the captured images are of the crowd reacting to an instance during an event at a venue, can include using lighting that is focused on the areas of crowd being captured (e.g., such as at a dark venue). For example, the lighting system can emit light in pulses timed with the image capture sequence. For example, the lighting system can move its focus with the image sequence movement. For example, the lighting system's movement or pulsing can be connected to the camera and server, with feedback to coordinate the movement or the light pulsing/flashing. For example, the lighting system can be implemented to remain static while being reflected in a mirror to focus on the crowd being captured.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
Claims
1. A method for providing an image of attendees at an event, comprising:
- operating one or more image capturing devices to record images of attendees of an event situated at locations in an event venue;
- processing the images, the processing including: mapping the locations to a grid including coordinates corresponding to predetermined positions associated with the event venue, defining an image space containing an individual at a particular location in the event venue based on the coordinates, and forming a processed image based on the image space; and
- distributing the processed image to the individual.
2. The method of claim 1, wherein the event venue includes at least one of a stadium, an arena, a ballpark, an auditorium, a music hall, an amphitheater, a building to host the event, or an outdoor area to host the event.
3. The method of claim 1, wherein the attendees include fans or spectators at a sporting event.
4. The method of claim 1, wherein the predetermined positions include seating in the event venue.
5. The method of claim 1, wherein the operating includes manually triggering the one or more image capturing devices to record the images at an operator-selected instance based on an occurrence of the event.
6. The method of claim 1, wherein the operating includes automatically triggering the one or more image capturing devices to record the images based on at least one of sound, visual stimulus, or mechanical perturbation generated at the event venue.
7. The method of claim 1, wherein the operating includes temporally capturing a series of images of the attendees after one of a manual triggering or an automatic triggering of the one or more image capturing devices.
8. The method of claim 7, wherein the series of images are captured at a speed of at least two images per second.
9. The method of claim 7, wherein the one or more image capturing devices are automated to record the images by continuously panning in one or both of horizontal and vertical directions along a predetermined trajectory to capture the series of images with a predetermined focusing of the locations in the event venue.
10. The method of claim 7, wherein the one or more image capturing devices are automated to record the images by moving to and stopping at a plurality of imaging positions along a predetermined trajectory to capture the series of images while stopped at the corresponding imaging position, wherein the one or more image capturing devices are configured to have a predetermined focusing of the locations in the event venue.
11. The method of claim 1, wherein the one or more image capturing devices are configured to have a predetermined focusing of the locations in the event venue.
12. The method of claim 1, wherein the forming the processed image based on the image space includes producing a segmented image.
13. The method of claim 12, wherein the producing the segmented image includes cropping at least one of the recorded images to a size defined by the image space.
14. The method of claim 13, wherein the producing the segmented image further includes overlapping two or more of the recorded images to form a merged image.
15. The method of claim 1, wherein the distributing includes wirelessly transmitting the processed image to a mobile device of the individual.
16. The method of claim 15, further comprising producing a graphical user interface on the mobile device to present the processed image to the individual.
17. The method of claim 16, wherein the graphical user interface further presents event-related content with the processed image.
18. The method of claim 17, wherein the event-related content includes one or both of information associated with the event and an image of an occurrence of the event, the occurrence temporally corresponding to the processed image.
19. The method of claim 16, wherein the graphical user interface includes an interface to report a security-related incident to authorities at the event venue.
20. The method of claim 1, wherein the processing the images further includes attaching meta data with image data of the processed image.
21. The method of claim 1, wherein the processed images include links to external websites.
22. The method of claim 1, further comprising wirelessly transmitting a message to prompt the individual of the event to provide location information via the graphical user interface on the mobile device.
23. An imaging service system, comprising:
- a plurality of cameras arranged in an event venue to capture images of attendees at an event corresponding to an occurrence of the event;
- a trigger module in communication with the plurality of cameras to initiate the capture of the images; and
- one or more computers in communication with the cameras to receive the captured images and provide coordinates to the captured images that correspond to locations in the event venue to associate individuals among the attendees to respective locations in the event venue.
24. The system of claim 23, wherein the event venue includes at least one of a stadium, an arena, a ballpark, an auditorium, a music hall, an amphitheater, a building to host the event, or an outdoor area to host the event.
25. The system of claim 23, wherein the attendees include fans or spectators at a sporting event.
26. The system of claim 23, wherein the locations correspond to seating in the event venue.
27. The system of claim 23, wherein the plurality of cameras are arranged in the event venue to capture the images of the attendees at multiple directions.
28. The system of claim 23, wherein the plurality of cameras temporally capture a series of images of the attendees.
29. The system of claim 23, wherein the one or more computers form a processed image of an individual or individuals proximate the location of the individual using the coordinates.
30. The system of claim 29, wherein the one or more computers distribute the processed image to the individual using wireless communication to a mobile device of the individual.
31. The system of claim 29, wherein the one or more computers send the processed image to a social network site.
32. The system of claim 29, wherein the one or more computers allow purchase of the processed image by the individual.
33. The system of claim 23, wherein the trigger module is a manual trigger to initiate the capture of the images at an operator-selected instance based on the occurrence of the event.
34. The system of claim 23, wherein the trigger module is an automatic trigger to initiate the capture of the images based on a detection of at least one of a sound, visual stimulus, or mechanical perturbation at the event.
35. The system of claim 23, wherein the captured images of the attendees display one or more attendees' reaction to the occurrence of the event.
36. The system of claim 23, further comprising:
- a plurality of lighting devices to direct light at selected sections of the event venue corresponding to sections where the plurality of cameras capture the images,
- wherein the lighting devices are in communication with the trigger module and configured to emit light when triggered on the selected sections to be imaged.
37. The system of claim 36, wherein the plurality of lighting devices are configured to direct the light at the selected sections with angles corresponding to imaging angles formed between the camera and the section to be imaged.
38. An imaging system for providing images of attendees at an event, comprising:
- a plurality of cameras arranged in an event venue to capture images of attendees at an event corresponding to an occurrence of the event; and
- one or more computers in communication with the cameras to receive the captured images and provide coordinates to the captured images that correspond to locations in the event venue to associate individuals among the attendees to respective locations in the event venue,
- wherein the captured images of the attendees display one or more attendees' reaction to the occurrence of the event.
39. The system of claim 38, further comprising:
- a trigger module in communication with the plurality of cameras to initiate the capture of the images.
40. The system of claim 39, further comprising:
- a plurality of lighting devices to direct light at selected sections of the event venue corresponding to sections where the plurality of cameras capture the images,
- wherein the lighting devices are in communication with the trigger module and configured to emit light when triggered on the selected sections to be imaged.
41. The system of claim 39, wherein the trigger module is a manual trigger to initiate the capture of the images at an operator-selected instance based on the occurrence of the event.
42. The system of claim 39, wherein the trigger module is an automatic trigger to initiate the capture of the images based on a detection of at least one of a sound, visual stimulus, or mechanical perturbation at the event.
43. The system of claim 38, wherein the event venue includes at least one of a stadium, an arena, a ballpark, an auditorium, a music hall, an amphitheater, a building to host the event, or an outdoor area to host the event.
44. The system of claim 38, wherein the attendees include fans or spectators at a sporting event.
45. The system of claim 38, wherein the locations correspond to seating in the event venue.
46. The system of claim 38, wherein the plurality of cameras are arranged in the event venue to capture the images of the attendees at multiple directions.
47. The system of claim 38, wherein the plurality of cameras temporally capture a series of images of the attendees.
48. The system of claim 38, wherein the one or more computers form a processed image of an individual or individuals proximate the location of the individual using the coordinates.
49. The system of claim 48, wherein the one or more computers distribute the processed image to the individual using wireless communication to a mobile device of the individual.
50. The system of claim 48, wherein the one or more computers send the processed image to a social network site.
51. The system of claim 48, wherein the one or more computers allow purchase of the processed image by the individual.
52. A method for providing crowd sourcing for security at an event, comprising:
- operating one or more image capturing devices to capture images of attendees of an event situated at locations in an event venue;
- processing the captured images to form security reference images, the processing including mapping the locations of the attendees in the captured images to a grid including coordinates corresponding to predetermined positions associated with the event venue;
- distributing at least one of the security reference images to at least some of the attendees;
- receiving a message from an attendee identifying at least one of a position or an object in the security reference image, the message indicating an alleged disturbance in the event venue;
- processing the message to determine the location of the alleged disturbance using the identified position or object in the security reference image; and
- providing an alert message to an authority associated with the event to alert the authority of the alleged disturbance, the alert message including the determined location.
53. The method of claim 52, wherein the one or more image capturing devices capture the images of attendees at one or more instances prior to and during the event.
54. The method of claim 52, wherein each of the security reference images is associated with a particular section or sections of the event venue.
55. The method of claim 52, wherein the processing further includes:
- defining an image space based on a particular location in the event venue using the coordinates, and
- segmenting the captured images to a size defined by the image space to form a reduced-size security reference image.
56. The method of claim 52, wherein the distributing includes wirelessly transmitting the security reference images to a mobile device of the attendee.
57. The method of claim 52, wherein the message received from the attendee is an anonymous message.
Type: Application
Filed: Dec 19, 2013
Publication Date: Dec 3, 2015
Applicant: FANPICS, LLC (San Diego, CA)
Inventors: William Dickinson (San Diego, CA), Daniel Magy (Encinitas, CA), Marco Correia (San Diego, CA)
Application Number: 14/654,485