Patents by Inventor Anders Astrom
Anders Astrom has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180189532
Abstract: This document describes systems, methods, devices, and other techniques for video camera self-calibration based on video information received from the video camera. In some implementations, a computing device receives video information characterizing a video showing a scene from a field of view of a video camera; detects an object that appears in the scene of the video; identifies a visual marking that appears on the detected object; determines a particular visual marking among a plurality of pre-defined visual markings that matches the visual marking that appears on the detected object; identifies one or more object characteristics associated with the particular visual marking; evaluates one or more features of the video with respect to the one or more object characteristics; and based on a result of evaluating the one or more features of the video with respect to the one or more object characteristics, sets a parameter of the video camera.
Type: Application
Filed: December 30, 2016
Publication date: July 5, 2018
Inventors: Cyrille Bataller, Anders Astrom
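The flow described above (detect an object, read its visual marking, match it against a catalogue of pre-defined markings, then use the associated object characteristics to set a camera parameter) can be pictured with a minimal sketch. Everything concrete below is an illustrative assumption: the MARKING_CATALOGUE entries, the Detection fields, and the use of a known object height to derive a metres-per-pixel scale are not taken from the application.

```python
# Minimal sketch of marking-based self-calibration; catalogue and fields are
# illustrative assumptions, not the patented technique.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    marking: str          # visual marking read off the detected object
    pixel_height: float   # apparent height of the object in the frame, in pixels

# Hypothetical catalogue of pre-defined markings and the real-world height
# (metres) of the object type that carries each marking.
MARKING_CATALOGUE = {
    "route-7-bus": 3.2,
    "parcel-van": 2.5,
}

def calibrate_scale(detection: Detection) -> Optional[float]:
    """Return metres-per-pixel if the marking matches a known object, else None."""
    real_height = MARKING_CATALOGUE.get(detection.marking)
    if real_height is None:
        return None
    return real_height / detection.pixel_height

scale = calibrate_scale(Detection(marking="route-7-bus", pixel_height=160.0))
if scale is not None:
    print(f"estimated scale: {scale:.4f} m/pixel")  # would feed a camera parameter
```

The abstract leaves the evaluated feature and the adjusted parameter open; the sketch only shows the match-then-calibrate shape of the flow.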
-
Patent number: 10007849
Abstract: Computerized methods and systems, including computer programs encoded on a computer storage medium, may detect events shown within digital video content captured by one or more video cameras, and correlate these detected events to real-world conditions that may not be captured within the digital video data. For example, a computing system may detect events shown within digital video content captured by one or more video cameras, and may obtain data that identifies at least one external event. The computing system may establish a predictive model that correlates values of event parameters that characterize the detected and external events during a first time period, and may apply the predictive model to an event parameter that characterizes an additional event detected during a second time period. Based on an outcome of the predictive model, the computing system may determine an expected value of the external event parameter during the second time period.
Type: Grant
Filed: May 27, 2016
Date of Patent: June 26, 2018
Assignee: Accenture Global Solutions Limited
Inventors: Cyrille Bataller, Anders Astrom
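As a rough illustration of the correlation step, the sketch below fits a plain least-squares line between a detected event parameter from a first time period and an external event parameter, then applies it to a value observed in a second period. The data, the choice of linear regression, and the variable names are assumptions; the patent does not specify this model.

```python
# Illustrative sketch: correlate per-interval counts of detected video events
# with an external measurement, then predict the external value for a new
# interval. A least-squares line stands in for the unspecified predictive model.
import numpy as np

# Hypothetical training data from the first time period:
# vehicles counted per hour in the video vs. measured kerbside queue length.
detected_counts = np.array([12, 25, 40, 33, 51], dtype=float)
external_values = np.array([3.0, 6.5, 11.0, 8.8, 14.2])

slope, intercept = np.polyfit(detected_counts, external_values, deg=1)

# Second time period: a newly detected event parameter.
new_count = 45.0
expected_external = slope * new_count + intercept
print(f"expected external value: {expected_external:.1f}")
```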
-
Patent number: 9996749
Abstract: Computerized methods and systems, including computer programs encoded on a computer storage medium, may detect events shown within digital video content captured by one or more video cameras during a prior time period, and predict an occurrence of an additional event during a future time period based on time-varying patterns among the detected events. For example, a computing system may detect events shown within digital video content captured by one or more video cameras, and may establish a predictive model that identifies one or more time-varying patterns in event parameter values that characterize the detected events within the prior time period. Based on an outcome of the predictive model, the computing system may determine an expected value of one of the event parameters during the future time period.
Type: Grant
Filed: May 27, 2016
Date of Patent: June 12, 2018
Assignee: Accenture Global Solutions Limited
Inventors: Cyrille Bataller, Anders Astrom
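A minimal sketch of the time-varying-pattern idea follows, assuming a simple hour-of-day average as the learned pattern; the history values and the averaging scheme are illustrative only and are not the patented model.

```python
# Illustrative sketch: learn a simple hour-of-day pattern from events detected
# in a prior period and use it to predict the event rate for a future hour.
from collections import defaultdict

# (hour_of_day, number_of_detected_events) observed over the prior period.
history = [(8, 14), (8, 16), (9, 22), (9, 20), (17, 35), (17, 31)]

by_hour = defaultdict(list)
for hour, count in history:
    by_hour[hour].append(count)

pattern = {hour: sum(counts) / len(counts) for hour, counts in by_hour.items()}

def expected_events(hour: int) -> float:
    """Predicted event count for a future occurrence of this hour of day."""
    return pattern.get(hour, 0.0)

print(expected_events(17))  # -> 33.0
```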
-
Patent number: 9875392
Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
Type: Grant
Filed: August 23, 2017
Date of Patent: January 23, 2018
Assignee: ACCENTURE GLOBAL SERVICES LIMITED
Inventors: Cyrille Bataller, Anders Astrom
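The track, select, and fuse steps of this family of filings (this abstract recurs in several entries below) can be sketched as follows. The quality field, the top-k frame selection, and score-averaging fusion are stand-ins; the patent covers selecting among multiple fusion techniques rather than this particular one.

```python
# Illustrative sketch: keep the best frames of a tracked face, score each
# against a gallery, then fuse the scores. Quality and matching are placeholders.
from typing import Dict, List

def select_best(frames: List[dict], k: int = 3) -> List[dict]:
    """Keep the k frames with the highest quality score for identification."""
    return sorted(frames, key=lambda f: f["quality"], reverse=True)[:k]

def fuse_scores(per_frame_scores: List[Dict[str, float]]) -> Dict[str, float]:
    """Score-level fusion: average each candidate's score over the selected frames."""
    candidates = {name for scores in per_frame_scores for name in scores}
    return {
        name: sum(s.get(name, 0.0) for s in per_frame_scores) / len(per_frame_scores)
        for name in candidates
    }

# Hypothetical tracked face: two frames, each already matched against a gallery.
frames = [
    {"quality": 0.9, "scores": {"alice": 0.82, "bob": 0.40}},
    {"quality": 0.7, "scores": {"alice": 0.78, "bob": 0.55}},
]
best = select_best(frames, k=2)
fused = fuse_scores([f["scores"] for f in best])
print(max(fused, key=fused.get), fused)  # -> alice, with its fused score
```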
-
Publication number: 20170351907
Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
Type: Application
Filed: August 23, 2017
Publication date: December 7, 2017
Applicant: Accenture Global Services Limited
Inventors: Cyrille BATALLER, Anders ASTROM
-
Patent number: 9773157
Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
Type: Grant
Filed: December 7, 2015
Date of Patent: September 26, 2017
Assignee: ACCENTURE GLOBAL SERVICES LIMITED
Inventors: Cyrille Bataller, Anders Astrom
-
Publication number: 20160350599
Abstract: This document describes systems, methods, devices, and other techniques for video camera scene translation. In some implementations, a computing device accesses a first video that shows a first 2D scene of an environment; defines a trip wire at a first position of the first 2D scene; obtains a 3D model of at least a portion of the environment shown in the first 2D scene; maps the trip wire from the first position of the first 2D scene to a first position of the 3D model that corresponds to the first position of the first 2D scene; accesses a second video that shows a second 2D scene of the environment; and projects the trip wire from the first position of the 3D model to a first position of the second 2D scene that corresponds to a same location in the environment as the first position of the first 2D scene.
Type: Application
Filed: May 31, 2016
Publication date: December 1, 2016
Inventors: Cyrille Bataller, Anders Astrom, Philippe Daniel
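One way to picture the scene-translation step is a pair of plane-to-plane mappings: image A to a shared ground plane, then ground plane to image B. The sketch below uses hand-written homographies for this; the application works from a 3D model of the environment, which this example does not reproduce, and the matrices and endpoints are invented.

```python
# Illustrative sketch: map a trip wire from camera A's image into camera B's
# image through a shared ground plane, using assumed per-camera homographies.
import numpy as np

def apply_h(H: np.ndarray, pt: tuple) -> tuple:
    """Apply a 3x3 homography to a 2D point in homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

# Hypothetical homographies: image A -> ground plane, ground plane -> image B.
H_a_to_world = np.array([[0.02, 0.0, -5.0], [0.0, 0.02, -3.0], [0.0, 0.0, 1.0]])
H_world_to_b = np.array([[40.0, 0.0, 300.0], [0.0, 40.0, 150.0], [0.0, 0.0, 1.0]])

tripwire_a = [(100.0, 400.0), (500.0, 400.0)]        # endpoints in camera A pixels
tripwire_world = [apply_h(H_a_to_world, p) for p in tripwire_a]
tripwire_b = [apply_h(H_world_to_b, p) for p in tripwire_world]
print(tripwire_b)                                    # endpoints in camera B pixels
```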
-
Publication number: 20160350597
Abstract: Computerized methods and systems, including computer programs encoded on a computer storage medium, may detect events shown within digital video content captured by one or more video cameras, and correlate these detected events to real-world conditions that may not be captured within the digital video data. For example, a computing system may detect events shown within digital video content captured by one or more video cameras, and may obtain data that identifies at least one external event. The computing system may establish a predictive model that correlates values of event parameters that characterize the detected and external events during a first time period, and may apply the predictive model to an event parameter that characterizes an additional event detected during a second time period. Based on an outcome of the predictive model, the computing system may determine an expected value of the external event parameter during the second time period.
Type: Application
Filed: May 27, 2016
Publication date: December 1, 2016
Inventors: Cyrille Bataller, Anders Astrom
-
Publication number: 20160350587
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying people depicted in images. In one aspect, a process includes receiving an image from a camera. A face of a person is detected in the image. The image is compared to a set of images stored in a local cache. A determination is made whether the face of the person matches a face of a person depicted by at least one image of the set of images. In response to determining that the face of the person does not match a face of a person depicted by at least one image of the set of images, a selection is made of a highest quality image of the face of the person. A server system compares the highest quality image to images from data for multiple people to identify the person.
Type: Application
Filed: May 31, 2016
Publication date: December 1, 2016
Inventors: Cyrille Bataller, Anders Astrom, Vitalie Schiopu, Hakim Khalafi
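A minimal sketch of this cache-first identification flow follows, assuming a dot-product similarity over face embeddings and a stubbed server matcher; the threshold, the embedding format, and the server_match callable are invented for the example.

```python
# Illustrative sketch: try a small local cache first; only on a miss send the
# highest-quality face image to a server-side matcher. Both matchers are stubs.
from typing import List, Optional

def match_local(face_embedding, cache: dict, threshold: float = 0.8) -> Optional[str]:
    """Return a cached identity whose stored embedding is close enough, else None."""
    for identity, stored in cache.items():
        similarity = sum(a * b for a, b in zip(face_embedding, stored))
        if similarity >= threshold:
            return identity
    return None

def identify(face_images: List[dict], cache: dict, server_match) -> str:
    best = max(face_images, key=lambda f: f["quality"])   # highest-quality image
    hit = match_local(best["embedding"], cache)
    if hit is not None:
        return hit
    return server_match(best)                             # fall back to the server

# Hypothetical usage with unit-length embeddings and a fake server call.
cache = {"alice": [1.0, 0.0]}
faces = [{"quality": 0.6, "embedding": [0.9, 0.1]},
         {"quality": 0.9, "embedding": [0.95, 0.05]}]
print(identify(faces, cache, server_match=lambda f: "unknown"))  # -> alice
```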
-
Publication number: 20160350334
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for storing facial recognition image data in a cache. One of the methods includes receiving an image from a camera, detecting, in the received image, a face of a person, searching a biometric data cache based on the detected face, in response to searching the biometric data cache based on the detected face, determining whether the biometric data cache includes data for the person, in response to a determination that the biometric data cache includes data for the person, using the data from the biometric data cache to determine an identifier for the person, and in response to a determination that the biometric data cache does not include data for the person: searching a data storage system based on the detected face of the person to determine whether the data storage system includes data for the person.
Type: Application
Filed: January 12, 2016
Publication date: December 1, 2016
Inventors: Cyrille Bataller, Anders Astrom, Vitalie Schiopu, Hakim Khalafi
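The caching behaviour can be sketched with a bounded cache backed by a slower data storage system. Matching is reduced to exact key lookups here so the cache logic stays visible; the capacity, the eviction rule, and the face_key representation are assumptions, not the patented matching.

```python
# Illustrative sketch: bounded biometric cache with fallback to a data storage
# system on a miss; the looked-up result is cached for next time.
from collections import OrderedDict

class BiometricCache:
    def __init__(self, capacity: int = 1000):
        self._entries = OrderedDict()   # face key -> person identifier
        self.capacity = capacity

    def get(self, face_key: str):
        return self._entries.get(face_key)

    def put(self, face_key: str, person_id: str) -> None:
        self._entries[face_key] = person_id
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)   # evict the oldest entry

def identify(face_key: str, cache: BiometricCache, storage: dict):
    person = cache.get(face_key)
    if person is None:                          # cache miss: search the data storage system
        person = storage.get(face_key)
        if person is not None:
            cache.put(face_key, person)
    return person

storage = {"face-123": "alice"}
cache = BiometricCache()
print(identify("face-123", cache, storage))     # -> alice (now cached locally)
```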
-
Publication number: 20160350596
Abstract: Computerized methods and systems, including computer programs encoded on a computer storage medium, may detect events shown within digital video content captured by one or more video cameras during a prior time period, and predict an occurrence of an additional event during a future time period based on time-varying patterns among the detected events. For example, a computing system may detect events shown within digital video content captured by one or more video cameras, and may establish a predictive model that identifies one or more time-varying patterns in event parameter values that characterize the detected events within the prior time period. Based on an outcome of the predictive model, the computing system may determine an expected value of one of the event parameters during the future time period.
Type: Application
Filed: May 27, 2016
Publication date: December 1, 2016
Inventors: Cyrille Bataller, Anders Astrom
-
Publication number: 20160104033
Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
Type: Application
Filed: December 7, 2015
Publication date: April 14, 2016
Applicant: ACCENTURE GLOBAL SERVICES LIMITED
Inventors: Cyrille BATALLER, Anders ASTROM
-
Patent number: 9230157
Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
Type: Grant
Filed: January 30, 2013
Date of Patent: January 5, 2016
Assignee: ACCENTURE GLOBAL SERVICES LIMITED
Inventors: Cyrille Bataller, Anders Astrom
-
Patent number: 8971254
Abstract: A network node is disclosed that communicates with a user equipment node in a communications system. The network node repetitively transmits first uplink transmission power control, TPC, commands on a first physical channel with a first channel configuration while repetitively transmitting second uplink TPC commands on a second physical channel with a second channel configuration. The first and second uplink TPC commands control uplink transmission power from the user equipment node to the network node. Related user equipment nodes and methods are disclosed.
Type: Grant
Filed: July 26, 2013
Date of Patent: March 3, 2015
Assignee: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Göran Kronquist, Anders Aström, Per Löfving
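As a toy illustration of the closed loop described above (this abstract also appears in two related entries below), the sketch has a network node issue up/down TPC commands on two channels each slot and a user equipment apply them. The 1 dB step, the target level, and the rule for combining the two commands are assumptions, not the 3GPP procedure.

```python
# Illustrative sketch of a two-channel TPC loop: the node decides +1/-1 per
# channel, the UE combines the commands and adjusts its uplink power.
TARGET_DBM = 10.0
STEP_DB = 1.0

def tpc_command(measured_dbm: float) -> int:
    """Network node decides +1 (increase) or -1 (decrease) from received power."""
    return 1 if measured_dbm < TARGET_DBM else -1

uplink_power_dbm = 4.0
for slot in range(8):
    # Two physical channels, possibly differently configured, each carry a command.
    cmd_ch1 = tpc_command(uplink_power_dbm)
    cmd_ch2 = tpc_command(uplink_power_dbm)
    combined = cmd_ch1 if cmd_ch1 == cmd_ch2 else 0   # assumed combining rule
    uplink_power_dbm += combined * STEP_DB
    print(f"slot {slot}: {uplink_power_dbm:.1f} dBm")
```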
-
Publication number: 20150035990
Abstract: Impact time between image sensing circuitry and an object in relative motion at least partially towards, or away from, the image sensing circuitry can be computed. Image data associated with a respective image frame of a sequence (1 . . . N) of image frames sensed by said image sensing circuitry, which image frames image said object, can be received. For each one (i) of multiple pixel positions, a respective duration value (f(i)) indicative of the largest duration of consecutively occurring local extreme points in said sequence (1 . . . N) of image frames can be computed. A local extreme point is present in a pixel position (i) when an image data value of the pixel position (i) is a maximum or minimum in relation to the image data values of those pixel positions that are the closest neighbours to said pixel position (i).
Type: Application
Filed: January 20, 2012
Publication date: February 5, 2015
Inventors: Robert Forchheimer, Anders Aström
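The per-pixel duration value f(i) can be sketched directly: mark, frame by frame, which pixels are local extreme points relative to their nearest neighbours, and track the longest consecutive run per pixel. The sketch below does this for a 1-D line sensor with random data; the neighbourhood choice and the final step from durations to an impact-time estimate are simplifications not taken from the application.

```python
# Illustrative sketch: per-pixel longest run of consecutive frames in which the
# pixel is a local extreme point among its two 1-D neighbours.
import numpy as np

def local_extreme_mask(frame: np.ndarray) -> np.ndarray:
    """True where a pixel is strictly larger or smaller than both neighbours."""
    mask = np.zeros_like(frame, dtype=bool)
    inner = frame[1:-1]
    mask[1:-1] = ((inner > frame[:-2]) & (inner > frame[2:])) | \
                 ((inner < frame[:-2]) & (inner < frame[2:]))
    return mask

def longest_runs(frames: np.ndarray) -> np.ndarray:
    """Per-pixel largest duration of consecutively occurring local extreme points."""
    best = np.zeros(frames.shape[1], dtype=int)
    current = np.zeros(frames.shape[1], dtype=int)
    for frame in frames:
        is_extreme = local_extreme_mask(frame)
        current = np.where(is_extreme, current + 1, 0)
        best = np.maximum(best, current)
    return best

rng = np.random.default_rng(0)
frames = rng.random((20, 64))     # 20 frames from a 64-pixel line sensor
print(longest_runs(frames))       # duration values f(i), one per pixel position
```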
-
Publication number: 20140063191
Abstract: Virtual access control may include detecting entry of a person into a virtual controlled zone, and counting and/or identifying people including the person entering into the virtual controlled zone. Virtual access control may further include determining an authorization of the person to continue through the virtual controlled zone based on a facial identification of the person, and alerting the person to stop, exit from, or continue through the virtual controlled zone based on the determined authorization. An alarm may be generated if the person violates directions provided by the alert.
Type: Application
Filed: August 27, 2013
Publication date: March 6, 2014
Applicant: Accenture Global Services Limited
Inventors: Cyrille Bataller, Alastair Partington, Anders Aström, Alessio Cavallini, David Mark Irish
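A minimal sketch of the decision flow follows, assuming a fixed authorised set and a boolean compliance flag; the AUTHORISED list, the three instructions, and the alarm condition are placeholders for whatever policy a real deployment would use.

```python
# Illustrative sketch: entry into the virtual zone -> authorisation decision ->
# alert -> alarm if the alert's directions are violated.
from typing import Optional

AUTHORISED = {"alice", "bob"}

def decide(identity: Optional[str]) -> str:
    """Map a facial identification result to an instruction for the person."""
    if identity in AUTHORISED:
        return "continue"
    if identity is None:
        return "stop"      # unidentified: hold at the virtual boundary
    return "exit"          # identified but not authorised

def handle_entry(identity: Optional[str], complied: bool) -> None:
    instruction = decide(identity)
    print(f"alert: please {instruction}")
    if instruction != "continue" and not complied:
        print("ALARM: directions provided by the alert were violated")

handle_entry("alice", complied=True)     # authorised, continues, no alarm
handle_entry("mallory", complied=False)  # told to exit, ignores it -> alarm
```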
-
Publication number: 20130308585
Abstract: A network node is disclosed that communicates with a user equipment node in a communications system. The network node repetitively transmits first uplink transmission power control, TPC, commands on a first physical channel with a first channel configuration while repetitively transmitting second uplink TPC commands on a second physical channel with a second channel configuration. The first and second uplink TPC commands control uplink transmission power from the user equipment node to the network node. Related user equipment nodes and methods are disclosed.
Type: Application
Filed: July 26, 2013
Publication date: November 21, 2013
Applicant: Telefonaktiebolaget L M Ericsson (Publ)
Inventors: Göran Kronquist, Anders Aström, Per Löfving
-
Patent number: 8520622
Abstract: A network node is disclosed that communicates with a user equipment node in a communications system. The network node repetitively transmits first uplink transmission power control, TPC, commands on a first physical channel with a first channel configuration while repetitively transmitting second uplink TPC commands on a second physical channel with a second channel configuration. The first and second uplink TPC commands control uplink transmission power from the user equipment node to the network node. Related user equipment nodes and methods are disclosed.
Type: Grant
Filed: July 6, 2011
Date of Patent: August 27, 2013
Assignee: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Goran Kronquist, Anders Astrom, Per Lofving
-
Publication number: 20130195316
Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
Type: Application
Filed: January 30, 2013
Publication date: August 1, 2013
Applicant: Accenture Global Services Limited
Inventors: Cyrille BATALLER, Anders ASTROM
-
Publication number: 20130069986
Abstract: A mobile device, a method in a mobile device, a server and a method in a server for augmented reality.
Type: Application
Filed: June 1, 2010
Publication date: March 21, 2013
Applicant: SAAB AB
Inventors: Niclas Fock, Gert Johansson, Anders Åström