Patents by Inventor Francesco Cricri

Francesco Cricri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200104711
    Abstract: A method, apparatus and computer program product provide an automated neural network training mechanism. The method, apparatus and computer program product receive a decoded noisy image and a set of input parameters for a neural network configured to optimize the decoded noisy image. A denoised image is generated based on the decoded noisy image and the set of input parameters. A denoised noisy error is computed representing an error between the denoised image and the decoded noisy image. The neural network is trained using the denoised noisy error and the set of input parameters, and a ground truth noisy error value is received representing an error between the original image and the encoded image. The ground truth noisy error value is compared with the denoised noisy error to determine whether a difference between the ground truth noisy error value and the denoised noisy error is within a pre-determined threshold.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 2, 2020
    Inventors: Caglar AYTEKIN, Francesco CRICRI, Xingyang NI
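A hedged sketch of the convergence check described in the entry above: the decoded noisy image is denoised, the error between the denoised image and the decoded noisy image is computed, and that value is compared with the ground-truth error within a threshold. The `denoiser` callable, the mean-squared-error measure and the tolerance are illustrative assumptions, not the patent's specified implementation.

```python
import numpy as np

def check_denoiser(denoiser, decoded_noisy, params, ground_truth_noisy_error, tol=1e-3):
    # Generate the denoised image from the decoded noisy image and the input parameters.
    denoised = denoiser(decoded_noisy, params)
    # "Denoised noisy error": error between the denoised image and the decoded noisy image
    # (mean squared error is an assumption; the abstract does not fix the error measure).
    denoised_noisy_error = float(np.mean((denoised - decoded_noisy) ** 2))
    # Compare against the ground-truth noisy error (original vs. encoded image) to decide
    # whether the two agree within a pre-determined threshold.
    within_threshold = abs(ground_truth_noisy_error - denoised_noisy_error) <= tol
    return denoised_noisy_error, within_threshold
```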
  • Publication number: 20200015021
    Abstract: An apparatus for identifying which sound sources are associated with which microphone audio signals, the apparatus comprising a processor configured to: determine/receive a position/orientation of at least one sound source relative to a microphone array; receive at least one microphone audio signal, each microphone audio signal received from a microphone; receive an audio-focussed audio signal from the microphone array, wherein the audio-focussed audio signal is directed from the microphone array towards the one of the at least one sound source so as to enhance the audio-focussed audio signal; compare the audio-focussed audio signal against each microphone audio signal to identify a match between one of the at least one microphone audio signal and the audio-focussed audio signal; and associate the one of the at least one microphone with the at least one sound source, based on the identified match.
    Type: Application
    Filed: November 20, 2017
    Publication date: January 9, 2020
    Inventors: Jussi LEPPANEN, Antti ERONEN, Francesco CRICRI, Arto LEHTINIEMI
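One way to picture the matching step above is to score each microphone signal against the beam-focused signal and associate the best-scoring microphone with the sound source. The sketch below uses peak normalized cross-correlation as the similarity measure, which is an assumption; the publication does not commit to a particular metric.

```python
import numpy as np

def best_matching_microphone(focused_signal, mic_signals):
    """Return the index of the microphone signal that best matches the audio-focussed signal."""
    focused = (focused_signal - focused_signal.mean()) / (focused_signal.std() + 1e-12)
    scores = []
    for sig in mic_signals:
        candidate = (sig - sig.mean()) / (sig.std() + 1e-12)
        # Full cross-correlation tolerates small time offsets between the two signals.
        corr = np.correlate(focused, candidate, mode="full") / len(focused)
        scores.append(np.max(np.abs(corr)))
    return int(np.argmax(scores))  # this microphone gets associated with the sound source
```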
  • Publication number: 20200008004
    Abstract: A method comprising: causing analysis of a portion of a visual scene; causing modification of a first sound object to modify a spatial extent of the first sound object in dependence upon the analysis of the portion of the visual scene corresponding to the first sound object; and causing rendering of the visual scene and the corresponding sound scene including the modified first sound object with modified spatial extent.
    Type: Application
    Filed: November 29, 2017
    Publication date: January 2, 2020
    Applicant: NOKIA TECHNOLOGIES OY
    Inventors: Antti ERONEN, Jussi LEPPÄNEN, Francesco CRICRI, Arto LEHTINIEMI
  • Patent number: 10521940
    Abstract: A method, apparatus, and computer product for: determining that the location of a user satisfies at least one spatial boundary condition; and in response to said determination, causing the presentation of an avatar to the user, wherein the presentation of the avatar comprises presenting an instruction given by the avatar to the user.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: December 31, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricri, Jukka Saarinen
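A minimal sketch of the location test in the entry above, assuming the spatial boundary condition is "within a circular region around a point of interest" (the claim only requires some boundary condition); the avatar presentation and its instruction are placeholders.

```python
import math

def maybe_present_avatar(user_position, boundary_center, boundary_radius, instruction):
    # Assumed boundary condition: the user's location lies inside a circular region.
    distance = math.hypot(user_position[0] - boundary_center[0],
                          user_position[1] - boundary_center[1])
    if distance <= boundary_radius:
        # Condition satisfied: present the avatar and have it give the instruction.
        return {"show_avatar": True, "instruction": instruction}
    return {"show_avatar": False, "instruction": None}
```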
  • Patent number: 10524074
    Abstract: A method comprising: automatically applying a selection criterion or criteria to a sound object; if the sound object satisfies the selection criterion or criteria then performing one of correct or incorrect rendering of the sound object; and if the sound object does not satisfy the selection criterion or criteria then performing the other of correct or incorrect rendering of the sound object, wherein correct rendering of the sound object comprises at least rendering the sound object at a correct position within a rendered sound scene compared to a recorded sound scene and wherein incorrect rendering of the sound object comprises at least rendering of the sound object at an incorrect position in a rendered sound scene compared to a recorded sound scene or not rendering the sound object in the rendered sound scene.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: December 31, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Antti Eronen, Jussi Leppänen, Arto Lehtiniemi, Francesco Cricri
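The sketch below illustrates the selection logic of the patent above with one of the two conventions the claim allows: objects satisfying the criteria are rendered correctly (at their recorded position) and the rest incorrectly (moved or omitted). The criteria callables and the "incorrect" position are illustrative placeholders.

```python
def choose_rendering(sound_object, criteria, recorded_position, incorrect_position=None):
    # Automatically apply the selection criterion or criteria to the sound object.
    satisfied = all(criterion(sound_object) for criterion in criteria)
    if satisfied:
        # Correct rendering: keep the position from the recorded sound scene.
        return {"render": True, "position": recorded_position}
    if incorrect_position is None:
        # Incorrect rendering, variant 1: do not render the object at all.
        return {"render": False, "position": None}
    # Incorrect rendering, variant 2: render at an incorrect position.
    return {"render": True, "position": incorrect_position}
```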
  • Patent number: 10482641
    Abstract: A method comprises providing video data representing at least part of virtual space to a user for viewing, identifying a current viewed sector of the virtual space based on user position, determining a sub-portion of said viewed sector, identifying an event occurring in a non-viewed sector of the virtual space, and displaying content indicative of the event in the sub-portion of said current viewed sector. The displaying step may comprise displaying a graphical notification of the event in the sub-portion, or in alternative embodiments, displaying video data showing the event in the sub-portion.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: November 19, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricri, Jukka Pentti Paivio Saarinen
  • Publication number: 20190313174
    Abstract: An apparatus for controlling a controllable position/orientation of at least one audio source within an audio scene, the audio scene including the at least one audio source and a capture device, the apparatus including a processor configured to: receive a physical position/orientation of the at least one audio source relative to a capture device capture orientation; receive an earlier physical position/orientation of the at least one audio source relative to the capture device capture orientation; receive at least one control parameter; and control a controllable position/orientation of the at least one audio source, the controllable position being between the physical position/orientation of the at least one audio source relative to the capture device capture orientation and the earlier physical position/orientation of the at least one audio source relative to the capture device capture orientation and based on the control parameter.
    Type: Application
    Filed: November 20, 2017
    Publication date: October 10, 2019
    Inventors: Jussi LEPPANEN, Arto LEHTINIEMI, Antti ERONEN, Francesco CRICRI
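The controllable position above lies between the source's current and earlier physical positions relative to the capture orientation, steered by a control parameter. A hedged reading is plain linear interpolation, sketched below; the patent does not prescribe this particular blend.

```python
import numpy as np

def controllable_position(current_pos, earlier_pos, control):
    """Blend between the earlier and current physical positions of an audio source.
    control = 0 keeps the earlier position, control = 1 follows the current one;
    linear interpolation is an assumption, not the claimed mechanism itself."""
    control = float(np.clip(control, 0.0, 1.0))
    return (1.0 - control) * np.asarray(earlier_pos) + control * np.asarray(current_pos)
```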
  • Publication number: 20190311259
    Abstract: According to the present disclosure, an apparatus includes at least one processor; and at least one memory including computer program code. The at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to receive media content for streaming to a user device; to train a neural network to be overfitted to at least a first portion of the media content; and to send the trained neural network and the first portion of the media content to the user equipment. In addition, another apparatus includes at least one processor; and at least one memory including computer program code. The at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to receive at least a first portion of media content and a neural network trained to be overfitted to the first portion of the media content; and to process the first portion of the media content using the overfitted neural network.
    Type: Application
    Filed: April 9, 2018
    Publication date: October 10, 2019
    Inventors: Francesco Cricri, Caglar Aytekin, Emre Baris Aksu, Miika Sakari Tupala, Xingyang Ni
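The first apparatus above deliberately overfits a neural network to one portion of the media content before streaming both; the receiver then applies the network as a content-specific post-filter to exactly that portion. The PyTorch sketch below shows the idea with an illustrative tiny convolutional network, an L2 loss and a fixed step count, all of which are assumptions rather than the claimed design.

```python
import torch
import torch.nn as nn

def overfit_to_portion(decoded_portion, original_portion, steps=500, lr=1e-3):
    """Deliberately overfit a tiny post-filter to a single content portion.
    Inputs are (1, 3, H, W) tensors: the decoded (degraded) portion and the
    original it should be restored towards."""
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(decoded_portion), original_portion)
        loss.backward()
        opt.step()
    # The sender streams decoded_portion together with this overfitted network;
    # the receiver then runs net(decoded_portion) to enhance that portion only.
    return net
```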
  • Patent number: 10437874
    Abstract: A method, an apparatus and computer program code are provided. The method comprises: responding to user input by making at least one alteration to a recording of a real scene in a first image content item; determining at least one altered characteristic of the recording of the real scene; determining whether one or more further image content items, different from the first image content item, have a recording of a real scene comprising the at least one determined altered characteristic; and causing at least one further image content item, having a recording of a real scene comprising the at least one determined altered characteristic, to be indicated to a user.
    Type: Grant
    Filed: August 10, 2016
    Date of Patent: October 8, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Jussi Leppänen, Francesco Cricri, Antti Eronen, Arto Lehtiniemi
  • Patent number: 10405123
    Abstract: This specification describes a method comprising determining whether an estimated position of an audio capture device which captures audio data is within boundaries of a predetermined area, and in response to a determination that the estimated position is not within the boundaries of the predetermined area, associating the captured audio data with an adjusted position.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: September 3, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricri, Jukka Saarinen
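The patent above associates captured audio with an adjusted position whenever the capture device's estimated position falls outside a predetermined area. One natural adjustment, used in the sketch below, is clamping the estimate to the area's bounding box; the rectangular area and the clamping rule are assumptions.

```python
def adjust_position(estimated, area_min, area_max):
    """Clamp an (x, y) position estimate into a rectangular predetermined area.
    Inside the area the estimate is returned unchanged; outside it, the captured
    audio data would be associated with the clamped (adjusted) position instead."""
    x = min(max(estimated[0], area_min[0]), area_max[0])
    y = min(max(estimated[1], area_min[1]), area_max[1])
    return (x, y)
```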
  • Patent number: 10397722
    Abstract: Apparatus including a processor configured to: receive a spatial audio signal associated with a microphone array configured to provide spatial audio capture and at least one additional audio signal associated with an additional microphone, the at least one additional microphone signal having been delayed by a variable delay determined such that the audio signals are time aligned; receive a relative position between a first position associated with the microphone array and a second position associated with the additional microphone; generate at least two output audio channel signals by processing and mixing the spatial audio signal and the at least one additional audio signal based on the relative position between the first position and the second position such that the at least two output audio channel signals present an augmented audio scene.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: August 27, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Antti Eronen, Jussi Leppanen, Arto Lehtiniemi, Matti Hamalainen, Sujeet Mate, Francesco Cricri, Mikko-Ville Laitinen, Mikko Tammi, Ville-Veikko Mattila
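The mixing stage above expects the additional (close-up) microphone signal to have been time-aligned to the array signal by a variable delay. The NumPy sketch below estimates that delay by cross-correlation against one array channel and then mixes the aligned signal into two output channels with a simple constant-power pan derived from the relative azimuth; the delay estimator and the panning law are illustrative assumptions, and all signals are assumed to be equal-length mono arrays.

```python
import numpy as np

def align_and_mix(spatial_left, spatial_right, extra, azimuth_rad, gain=0.7):
    # Estimate the variable delay (in samples) of the extra microphone via cross-correlation.
    corr = np.correlate(spatial_left, extra, mode="full")
    delay = int(np.argmax(corr)) - (len(extra) - 1)
    aligned = np.roll(extra, delay)              # crude alignment; real systems interpolate
    # Constant-power pan from the relative azimuth (0 rad = straight ahead).
    pan = 0.5 * (1.0 + np.sin(azimuth_rad))      # 0 -> fully left, 1 -> fully right
    left = spatial_left + gain * np.sqrt(1.0 - pan) * aligned
    right = spatial_right + gain * np.sqrt(pan) * aligned
    return left, right
```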
  • Publication number: 20190251360
    Abstract: The invention relates to a method, an apparatus and a computer program product for analyzing media content. The method comprises receiving media content; performing feature extraction of the media content at a plurality of convolution layers to produce a plurality of layer-specific feature maps; transmitting from the plurality of convolution layers a corresponding layer-specific feature map to a corresponding de-convolution layer of a plurality of de-convolution layers via a recurrent connection between the plurality of convolution layers and the plurality of de-convolution layers; and generating a reconstructed media content based on the plurality of feature maps.
    Type: Application
    Filed: September 27, 2017
    Publication date: August 15, 2019
    Inventors: Francesco Cricri, Mikko Honkala, Emre Baris Aksu, Xingyang Ni
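The network above passes each convolutional layer's feature map to the matching de-convolution layer through a recurrent connection. The PyTorch sketch below keeps that encoder/decoder shape but uses plain, non-recurrent skip connections for brevity; the layer sizes are illustrative and the recurrent link is the part being approximated.

```python
import torch
import torch.nn as nn

class ConvDeconvSkip(nn.Module):
    """Tiny conv/de-conv reconstruction network; the per-layer skip connections stand in
    for the recurrent conv-to-deconv connections described in the publication."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)

    def forward(self, x):
        f1 = self.enc1(x)        # layer-specific feature map from the first convolution layer
        f2 = self.enc2(f1)       # layer-specific feature map from the second convolution layer
        d2 = self.dec2(f2) + f1  # feature map handed to the corresponding de-convolution layer
        return self.dec1(d2)     # reconstructed media content
```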
  • Publication number: 20190187954
    Abstract: A method, apparatus and computer program code are provided. The method comprises: causing display of a virtual object at a first position in virtual space, the virtual object having a visual position and an aural position at the first position; processing positional audio data based on the aural position of the virtual object being at the first position; causing positional audio to be output to a user based on the processed positional audio data; changing the aural position of the virtual object from the first position to a second position in the virtual space, while maintaining the visual position of the virtual object at the first position; further processing positional audio data based on the aural position of the virtual object being at the second position; and causing positional audio to be output to the user based on the further processed positional audio data, while maintaining the visual position of the virtual object at the first position.
    Type: Application
    Filed: August 22, 2017
    Publication date: June 20, 2019
    Inventors: Francesco Cricri, Arto Lehtiniemi, Antti Eronen, Jussi Leppänen
  • Publication number: 20190139312
    Abstract: An apparatus configured to, based on a location of a plurality of distinct audio sources in virtual reality content captured of a scene, a first virtual reality view providing a view of the scene from a first point of view, wherein at least two of said audio sources are one or more of: a) within a first predetermined angular separation of one another in the first virtual reality view, b) positioned in the scene such that not all are within the field of view, provide for display of a second virtual reality view from a second point of view satisfying a predetermined criterion, the predetermined criterion comprising a point of view from which said audio sources are separated by at least a second predetermined angular separation and are within a field of view of the second virtual reality view to provide for control of audio properties of said audio sources.
    Type: Application
    Filed: April 12, 2017
    Publication date: May 9, 2019
    Inventors: Jussi Leppänen, Arto Lehtiniemi, Antti Eronen, Francesco Cricrì
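The criterion above asks for a second point of view from which the audio sources are separated by at least a minimum angle and all lie inside the field of view. The helper below checks that criterion for one candidate viewpoint in 2-D; the planar geometry and the threshold parameters are assumptions.

```python
import numpy as np

def viewpoint_ok(viewpoint, sources, min_separation_deg, fov_deg, view_dir_deg=0.0):
    """True if, seen from `viewpoint`, every pair of 2-D source positions is separated by
    at least `min_separation_deg` and all sources fall within the field of view centred
    on `view_dir_deg`."""
    angles = [np.degrees(np.arctan2(s[1] - viewpoint[1], s[0] - viewpoint[0])) for s in sources]
    rel = [(a - view_dir_deg + 180.0) % 360.0 - 180.0 for a in angles]  # wrap to [-180, 180)
    if any(abs(a) > fov_deg / 2.0 for a in rel):
        return False                              # not every source is within the field of view
    for i in range(len(rel)):
        for j in range(i + 1, len(rel)):
            sep = abs((rel[i] - rel[j] + 180.0) % 360.0 - 180.0)
            if sep < min_separation_deg:
                return False                      # two sources are too close together angularly
    return True
```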
  • Publication number: 20190130193
    Abstract: An apparatus configured to: in respect of virtual reality content comprising video imagery configured to provide a virtual reality space for viewing in virtual reality, wherein a virtual reality view presented to a user provides for viewing of the virtual reality content, the virtual reality view comprising a spatial portion of the video imagery that forms the virtual reality space and being smaller in spatial extent than the spatial extent of the video imagery of the virtual reality space; and based on one or more of: i) a viewing direction in the virtual reality space of at least one virtual reality view provided to the user, and ii) a selected object in the video imagery; providing for one or more of generation or display of causal summary content comprising selected content from the virtual reality content at least prior to a time point in the virtual reality content currently viewed by the user, the causal summary content at least focussed on an object or event appearing in the at least one virtual reality view.
    Type: Application
    Filed: April 12, 2017
    Publication date: May 2, 2019
    Applicant: Nokia Technologies Oy
    Inventors: Jussi Leppänen, Arto Lehtiniemi, Antti Eronen, Francesco Cricrì
  • Publication number: 20190129598
    Abstract: A method comprising: causing definition of a display window in a displayed virtual scene; displaying window content inside the display window, in the displayed virtual scene; in dependence upon a first user action, causing a first change in the window content displayed inside the display window to first window content different to the window content, without changing the display window; and in dependence upon a second user action, causing a second change in the window content displayed in the display window to second window content, different to the first window content and the window content, and causing a variation in the display window to become a modified display window different to the display window.
    Type: Application
    Filed: April 27, 2017
    Publication date: May 2, 2019
    Inventors: Francesco Cricrì, Arto Lehtiniemi, Antti Eronen, Jussi Leppänen
  • Publication number: 20190122072
    Abstract: The invention relates to a method comprising receiving, by a neural network, a first image comprising at least one target object; receiving, by the neural network, a second image comprising at least one query object; and determining, by the neural network, whether the query object corresponds to the target object, wherein the neural network comprises a discriminator neural network of a generative adversarial network (GAN). The invention further relates to an apparatus and a computer program product that perform the method.
    Type: Application
    Filed: October 10, 2018
    Publication date: April 25, 2019
    Inventors: Francesco Cricrì, Emre Aksu, Xingyang Ni
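The matcher above reuses the discriminator of a generative adversarial network as a verifier: it receives the target and the query image and decides whether they show the same object. The PyTorch sketch below concatenates the two images along the channel dimension and predicts a match probability; the architecture and the concatenation choice are assumptions, not the claimed network.

```python
import torch
import torch.nn as nn

class MatchDiscriminator(nn.Module):
    """Discriminator-style network deciding whether a query object matches a target object."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 6 = target RGB + query RGB
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, target, query):
        x = torch.cat([target, query], dim=1)     # stack the two images channel-wise
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # probability that the query matches the target
```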
  • Publication number: 20190113598
    Abstract: Certain examples of the present invention relate to a method, apparatus, system and computer program for controlling a positioning module and/or an audio capture module. Certain examples provide a method (100) comprising: associating (101) one or more positioning modules (501) with one or more audio capture modules (502); and controlling (102) one or more operations of the one or more positioning modules (501) and/or the associated one or more audio capture modules (502) in dependence upon: one or more pre-determined times (202(a)), and one or more pre-determined positions (202(b)).
    Type: Application
    Filed: May 16, 2017
    Publication date: April 18, 2019
    Inventors: Jussi Leppänen, Arto Lehtiniemi, Antti Eronen, Francesco Cricrì
  • Publication number: 20190102627
    Abstract: A method comprising: creating a visual indicator based on at least one of visual analysis or audio analysis performed for a content comprising at least one visual element, wherein the visual indicator is selectable such that upon a selection of the visual indicator, access to the content is provided.
    Type: Application
    Filed: March 27, 2017
    Publication date: April 4, 2019
    Inventors: Jussi LEPPÄNEN, Antti ERONEN, Arto LEHTINIEMI, Francesco CRICRI
  • Patent number: 10242289
    Abstract: A method for operating a computer graphic system, the method comprising: inputting a media content object (MCO) into a feature extractor comprising semantic abstraction levels; extracting feature maps from the MCO on each of the semantic layers; selecting at least a portion of the MCO to be analyzed; determining, based on the analysis of the feature maps from the portion of the MCO and the analysis of a previous state of a recognition unit, one or more feature maps selected from the feature maps of the semantic layers; determining a weight for each feature map; repeating the determining steps N times, each time processing, based on the analysis, each feature map by applying the corresponding weight; inputting the processed feature maps to the recognition unit; and analyzing a number of the processed feature maps until a prediction about the portion of the MCO is output.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: March 26, 2019
    Assignee: Nokia Technologies Oy
    Inventor: Francesco Cricri
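The recognition loop above repeatedly re-weights the layer-specific feature maps based on the recognition unit's previous state, i.e. an attention mechanism over the semantic abstraction levels. The NumPy sketch below shows one such weighting pass with a softmax over state-dependent scores; the scoring function, the global-average pooling and the assumption that all levels share the same channel count are illustrative simplifications, not the patented procedure.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_over_levels(feature_maps, state, score_weights):
    """One attention pass over layer-specific feature maps.
    feature_maps:  list of arrays shaped (D, H, W), all with the same channel count D (a simplification).
    state:         the recognition unit's previous state, shape (D,).
    score_weights: illustrative scoring parameter, shape (D,)."""
    pooled = [fm.reshape(fm.shape[0], -1).mean(axis=1) for fm in feature_maps]  # global average pool
    scores = np.array([float(np.dot(score_weights * state, p)) for p in pooled])
    weights = softmax(scores)                         # one weight per semantic abstraction level
    weighted = [w * fm for w, fm in zip(weights, feature_maps)]
    return weighted, weights                          # weighted maps are fed to the recognition unit
```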