Patents by Inventor Andrew Lovitt
Andrew Lovitt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200359158
Abstract: A shared communication channel allows for transmitting and receiving audio content between multiple users. Each user is associated with a headset configured to transmit and receive audio data to and from headsets of other users. After the headset of a first user receives audio data corresponding to a second user, the headset spatializes the audio data based upon the relative positions of the first and second users, such that when the audio data is presented to the first user, the sounds appear to originate at a location corresponding to the second user. The headset reinforces the audio data based upon the deviation between the location of the second user and the gaze direction of the first user, allowing the first user to more clearly hear audio from the users they are paying attention to.
Type: Application
Filed: May 26, 2020
Publication date: November 12, 2020
Inventors: William Owen Brimijoin, II, Philip Robinson, Andrew Lovitt
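The gaze-based reinforcement described above can be sketched as a simple gain curve. This is an illustrative toy, not the patented method: the function `reinforcement_gain` and its parameters (`max_boost_db`, `cutoff_deg`) are hypothetical names assuming a linear fall-off of boost with the angular deviation between gaze and talker.

```python
def reinforcement_gain(source_angle_deg, gaze_angle_deg,
                       max_boost_db=6.0, cutoff_deg=90.0):
    """Boost a talker's level as the listener's gaze approaches them.

    The boost falls off linearly with the angular deviation between the
    gaze direction and the talker's direction, reaching 0 dB at
    `cutoff_deg` or beyond (all values are assumptions for illustration).
    """
    # Wrap the deviation into [0, 180] degrees.
    deviation = abs((source_angle_deg - gaze_angle_deg + 180.0) % 360.0 - 180.0)
    if deviation >= cutoff_deg:
        return 1.0  # no reinforcement outside the attention cone
    boost_db = max_boost_db * (1.0 - deviation / cutoff_deg)
    return 10.0 ** (boost_db / 20.0)  # convert dB boost to linear gain
```

A talker the user looks at directly gets the full boost; one 45 degrees off-gaze gets half the boost in dB terms.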
-
Patent number: 10819953
Abstract: The disclosed computer-implemented method may include (1) receiving, at a first device, a first stream that includes first media data from a first media object, (2) receiving, at the first device, a second stream that includes second media data from a second media object, (3) mixing, at the first device, the first media data and the second media data into a third stream, (4) compiling, while mixing the third stream, a metadata stream that includes information enabling separation of the first media data and the second media data from the third stream, (5) transmitting, from the first device to a second device, the third stream, and (6) transmitting, from the first device to the second device, the metadata stream to enable the second device to separate the first media data and the second media data from the third stream. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: October 26, 2018
Date of Patent: October 27, 2020
Assignee: Facebook Technologies, LLC
Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
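A minimal sketch of the mix-plus-metadata idea, under a deliberately simple assumption: here the hypothetical metadata stream carries the mixing gains and the second stream's samples, which is trivially "information enabling separation" of both sources from the mix. The actual patent does not specify this encoding.

```python
def mix_streams(a, b, gain_a=1.0, gain_b=1.0):
    """Mix two equal-length sample streams and compile a metadata stream.

    Toy stand-in: the metadata carries the gains and stream B itself,
    so a receiver can subtract B out of the mix to recover A.
    """
    mixed = [gain_a * x + gain_b * y for x, y in zip(a, b)]
    metadata = {"gain_a": gain_a, "gain_b": gain_b, "stream_b": list(b)}
    return mixed, metadata

def separate_streams(mixed, metadata):
    """Recover both source streams from the mix using the metadata."""
    b = metadata["stream_b"]
    a = [(m - metadata["gain_b"] * y) / metadata["gain_a"]
         for m, y in zip(mixed, b)]
    return a, b
```

Sending the mix plus this metadata lets the second device present either source alone or re-balance them.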
-
Publication number: 20200314583
Abstract: Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and the acoustic properties of those spaces, wherein a location in the virtual model corresponds to a physical location of the headset. A location in the virtual model for the headset is determined based on information, received from the headset, describing at least a portion of the local area. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with that location. The headset presents audio content using the set of acoustic parameters received from the mapping server.
Type: Application
Filed: April 22, 2020
Publication date: October 1, 2020
Inventors: Philip Robinson, Carl Schissler, Peter Henry Maresh, Andrew Lovitt, Sebastià Vicenç Amengual Gari
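The mapping-server lookup reduces to a keyed query with a fallback. The sketch below is an assumption-laden toy: the location identifiers, parameter names (`rt60_s` for reverberation time, `drr_db` for direct-to-reverberant ratio), and values are all invented for illustration.

```python
# Hypothetical virtual model: each known physical location maps to the
# acoustic parameters measured or simulated for that space.
VIRTUAL_MODEL = {
    "conference_room_2": {"rt60_s": 0.45, "drr_db": 8.0},
    "atrium":            {"rt60_s": 1.80, "drr_db": 2.0},
}

# Generic defaults for spaces the mapping server has never seen.
DEFAULT_PARAMETERS = {"rt60_s": 0.50, "drr_db": 6.0}

def acoustic_parameters_for(location_id):
    """Return the acoustic parameter set for a headset's reported location,
    falling back to defaults when the model has no entry for the space."""
    return VIRTUAL_MODEL.get(location_id, DEFAULT_PARAMETERS)
```

The headset would then render its audio content with whichever parameter set the server returns.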
-
Patent number: 10721521
Abstract: An audio system generates virtual acoustic environments with three-dimensional (3-D) sound from legacy video with two-dimensional (2-D) sound. The system relocates sound sources within the video from 2-D into a 3-D geometry to create an immersive 3-D virtual scene of the video that can be viewed using a headset. Accordingly, an audio processing system obtains a video that includes flat mono or stereo audio being generated by one or more sources in the video. The system isolates the audio from each source by segmenting the individual audio sources. Reverberation is removed from the audio from each source to obtain each source's direct sound component. The direct sound component is then re-spatialized to the 3-D local area of the video to generate the 3-D audio based on acoustic characteristics obtained for the local area in the video.
Type: Grant
Filed: June 24, 2019
Date of Patent: July 21, 2020
Assignee: Facebook Technologies, LLC
Inventors: Philip Robinson, Sebastià Vicenç Amengual Gari, Andrew Lovitt, Carl Schissler, Peter Henry Maresh
-
Patent number: 10708706
Abstract: A shared communication channel allows for transmitting and receiving audio content between multiple users. Each user is associated with a headset configured to transmit and receive audio data to and from headsets of other users. After the headset of a first user receives audio data corresponding to a second user, the headset spatializes the audio data based upon the relative positions of the first and second users, such that when the audio data is presented to the first user, the sounds appear to originate at a location corresponding to the second user. The headset reinforces the audio data based upon the deviation between the location of the second user and the gaze direction of the first user, allowing the first user to more clearly hear audio from the users they are paying attention to.
Type: Grant
Filed: May 7, 2019
Date of Patent: July 7, 2020
Assignee: Facebook Technologies, LLC
Inventors: William Owen Brimijoin, II, Philip Robinson, Andrew Lovitt
-
Patent number: 10679602
Abstract: The disclosed computer-implemented method may include applying, via a sound reproduction system, sound cancellation that reduces an amplitude of various sound signals. The method further includes identifying, among the sound signals, an external sound whose amplitude is to be reduced by the sound cancellation. The method then includes analyzing the identified external sound to determine whether the identified external sound is to be made audible to a user and, upon determining that the external sound is to be made audible to the user, the method includes modifying the sound cancellation so that the identified external sound is made audible to the user. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: October 26, 2018
Date of Patent: June 9, 2020
Assignee: Facebook Technologies, LLC
Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
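The selective-passthrough decision can be sketched as a gain switch keyed on a sound classifier's output. Everything here is a hypothetical stand-in: the class names, the `base_attenuation` value, and the idea of modeling cancellation as a simple residual gain.

```python
# Hypothetical classes of external sound the user should still hear.
ALLOWED_SOUND_CLASSES = {"siren", "speech_directed_at_user", "alarm"}

def cancellation_gain(sound_class, base_attenuation=0.05):
    """Residual gain the cancellation leaves on an external sound.

    Sounds the user should hear pass through at full level; everything
    else is attenuated by the base cancellation factor.
    """
    if sound_class in ALLOWED_SOUND_CLASSES:
        return 1.0  # modify the cancellation: let this sound through
    return base_attenuation

def apply_cancellation(samples, sound_class):
    """Apply the class-dependent residual gain to a block of samples."""
    g = cancellation_gain(sound_class)
    return [g * s for s in samples]
```

A real system would drive this decision from an acoustic classifier rather than a label, but the control flow is the same.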
-
Patent number: 10674307
Abstract: Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and the acoustic properties of those spaces, wherein a location in the virtual model corresponds to a physical location of the headset. A location in the virtual model for the headset is determined based on information, received from the headset, describing at least a portion of the local area. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with that location. The headset presents audio content using the set of acoustic parameters received from the mapping server.
Type: Grant
Filed: March 27, 2019
Date of Patent: June 2, 2020
Assignee: Facebook Technologies, LLC
Inventors: Philip Robinson, Carl Schissler, Peter Henry Maresh, Andrew Lovitt, Sebastià Vicenç Amengual Gari
-
Patent number: 10674259
Abstract: The disclosed computer-implemented method may include establishing and implementing a virtual microphone. The method may include receiving an input specifying a location for a virtual microphone that is configured to capture audio as if located in the specified location. The method may next include initializing physical microphones to begin capturing audio as if located at the specified location. The physical microphones may be electronically or physically oriented to listen from the specified location. The method may then include combining audio streams from the physical microphones to generate a combined audio signal that sounds as if recorded at the specified location. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: October 26, 2018
Date of Patent: June 2, 2020
Assignee: Facebook Technologies, LLC
Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
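Combining physical-microphone streams into one virtual-microphone signal is classically done with delay-and-sum beamforming. The sketch below is a minimal integer-delay version, not the patent's implementation: channels are assumed pre-aligned up to whole-sample delays.

```python
def delay_and_sum(channels, delays):
    """Combine microphone streams into one virtual-microphone signal.

    `channels` is a list of equal-length sample lists; `delays` gives
    the integer sample delay applied to each channel so that sound
    arriving from the virtual microphone's target direction lines up
    before the streams are summed and averaged.
    """
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays):
        for i in range(n):
            j = i - d  # read each channel shifted by its delay
            if 0 <= j < n:
                out[i] += ch[j]
    return [s / len(channels) for s in out]
```

With the right per-channel delays, a sound from the chosen location adds coherently while off-axis sounds partially cancel, which is what makes the combined signal "sound as if recorded at the specified location."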
-
Patent number: 10645520
Abstract: An audio system on a headset presents, to a user, audio content simulating a target artificial reality environment. The system receives audio content from an environment and analyzes the audio content to determine a set of acoustic properties associated with the environment. The audio content may be user generated or ambient sound. After receiving a set of target acoustic properties for a target environment, the system determines a transfer function by comparing the set of acoustic properties and the target environment's acoustic properties. The system adjusts the audio content based on the transfer function and presents the adjusted audio content to the user. The presented adjusted audio content includes one or more of the target acoustic properties for the target environment.
Type: Grant
Filed: June 24, 2019
Date of Patent: May 5, 2020
Assignee: Facebook Technologies, LLC
Inventors: Sebastià Vicenç Amengual Gari, Carl Schissler, Peter Henry Maresh, Andrew Lovitt, Philip Robinson
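One very reduced reading of "determine a transfer function by comparing" the two property sets is a per-band ratio. This is an assumption for illustration only; the band names and the idea of characterizing each environment by per-band magnitudes are invented here.

```python
def band_transfer_function(source_props, target_props):
    """Per-band gains that morph captured audio toward the target room.

    Both inputs map band name -> measured magnitude (a toy stand-in for
    a full acoustic characterization); the transfer function is simply
    the ratio of target to source in each band.
    """
    return {band: target_props[band] / source_props[band]
            for band in source_props}

def apply_transfer(band_levels, transfer):
    """Adjust the audio's per-band levels using the transfer function."""
    return {band: level * transfer[band]
            for band, level in band_levels.items()}
```

Applying the transfer to the captured content yields audio that carries the target environment's character in each band.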
-
Publication number: 20200135163
Abstract: The disclosed computer-implemented method may include applying, via a sound reproduction system, sound cancellation that reduces an amplitude of various sound signals. The method further includes identifying, among the sound signals, an external sound whose amplitude is to be reduced by the sound cancellation. The method then includes analyzing the identified external sound to determine whether the identified external sound is to be made audible to a user and, upon determining that the external sound is to be made audible to the user, the method includes modifying the sound cancellation so that the identified external sound is made audible to the user. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Inventors: Andrew Lovitt, Antonio John Miller, Phillip Robinson, Scott Selfon
-
Publication number: 20200134026
Abstract: The disclosed computer-implemented method for performing natural language translation in AR may include accessing an audio input stream that includes words spoken by a speaking user in a first language. The method may next include performing active noise cancellation on the words in the audio input stream so that the spoken words are suppressed before reaching a listening user. Still further, the method may include processing the audio input stream to identify the words spoken by the speaking user, and translating the identified words spoken by the speaking user into a second, different language. The method may also include generating spoken words in the second, different language using the translated words, and replaying the generated spoken words in the second language to the listening user. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: October 25, 2018
Publication date: April 30, 2020
Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
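The translation pipeline has four stages: suppress, recognize, translate, synthesize. The sketch below compresses it into a single function with toy stand-ins; the dictionary lookup replaces a real ASR front end and machine-translation model, and the ANC and speech-synthesis stages are elided as comments.

```python
# Toy stand-in lexicon; a real system would use ASR and a
# machine-translation model here.
ES_TO_EN = {"hola": "hello", "mundo": "world"}

def translate_utterance(words, lexicon=ES_TO_EN):
    """Pipeline sketch for in-AR translation.

    Stage 1 (elided): active noise cancellation suppresses the original
    speech before it reaches the listener.
    Stage 2: recognize the spoken words (lower-casing stands in for ASR).
    Stage 3: translate into the listener's language.
    Stage 4 (elided): synthesize and replay the translated speech.
    """
    recognized = [w.lower() for w in words]
    translated = [lexicon.get(w, w) for w in recognized]
    return " ".join(translated)
```

Words missing from the lexicon pass through unchanged, which mirrors how untranslatable names would be handled.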
-
Publication number: 20200137488
Abstract: The disclosed computer-implemented method may include establishing and implementing a virtual microphone. The method may include receiving an input specifying a location for a virtual microphone that is configured to capture audio as if located in the specified location. The method may next include initializing physical microphones to begin capturing audio as if located at the specified location. The physical microphones may be electronically or physically oriented to listen from the specified location. The method may then include combining audio streams from the physical microphones to generate a combined audio signal that sounds as if recorded at the specified location. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
-
Patent number: 10595149
Abstract: The disclosed computer-implemented method for performing directional beamforming according to an anticipated position may include accessing environment data indicating a sound source within an environment. The method may be performed by a device that includes various audio hardware components configured to generate steerable audio beams. The method may also include identifying the location of the sound source within the environment based on the accessed environment data, and then steering the audio beams of the device to the identified location of the sound source within the environment. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: December 4, 2018
Date of Patent: March 17, 2020
Assignee: Facebook Technologies, LLC
Inventors: Andrew Lovitt, Scott Phillip Selfon, Antonio John Miller
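Steering a beam toward a known source location boils down to computing per-microphone delays that align the source's wavefront across the array. A minimal 2-D sketch under stated assumptions (free-field propagation, integer sample delays, invented array geometry):

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, approximate

def steering_delays(mic_positions, source_position, sample_rate=48000):
    """Per-microphone sample delays that steer a beam toward a source.

    Positions are 2-D coordinates in metres. Microphones nearer the
    source receive the wavefront earlier, so they are delayed until
    every channel aligns on the same wavefront before summation.
    """
    dists = [math.dist(m, source_position) for m in mic_positions]
    farthest = max(dists)
    return [round((farthest - d) / SPEED_OF_SOUND * sample_rate)
            for d in dists]
```

Feeding these delays into a delay-and-sum combiner steers the array's sensitivity toward the identified source location.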
-
Patent number: 10574472
Abstract: The disclosed computer-implemented method may include (1) establishing a communication channel to indirectly convey a conversation, (2) receiving, via the communication channel, a portion of the conversation, (3) presenting the portion of the conversation to a user, (4) receiving, via the communication channel, an additional portion of the conversation, (5) detecting an additional communication channel capable of conveying the conversation, (6) determining a human-perceivable difference between how the conversation has been conveyed via the communication channel and how the conversation will be conveyed via the additional communication channel, and (7) compensating for the human-perceivable difference when presenting the additional portion of the conversation to the user in order to smoothly transition the conversation from the communication channel to the additional communication channel. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: November 1, 2018
Date of Patent: February 25, 2020
Assignee: Facebook Technologies, LLC
Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
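One concrete human-perceivable difference between channels is loudness. As an illustrative reduction of step (7), the hypothetical `handoff_gain` below matches the new channel's RMS level to the old one's so the transition does not jump in volume; the patent covers far more than this one dimension.

```python
def rms(samples):
    """Root-mean-square level of a block of samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def handoff_gain(old_channel_tail, new_channel_head):
    """Gain for the new channel so its loudness matches what the user
    was hearing on the old channel, smoothing the hand-off."""
    new_rms = rms(new_channel_head)
    if new_rms == 0.0:
        return 1.0  # silent new channel: nothing to compensate
    return rms(old_channel_tail) / new_rms
```

Scaling the new channel's samples by this gain is the compensation step applied while the conversation transitions.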
-
Patent number: 10394580
Abstract: Systems and computer program products for facilitating the dynamic addition and removal of operating system components on computing devices, based on application or user interaction over time, are disclosed. Such systems and computer program products provide one or more API intercept layers, a rules engine, and a hydrate engine that together facilitate the ability to dynamically rehydrate or hydrate operating system components. In an embodiment, a minimal (or core) operating system image is deployed on a computing device. Then, required components may be dynamically added (i.e., "streamed" or "rehydrated") from storage, a server, or a cloud service as required by an executing application program on the computing device. In another embodiment, a complete operating system image is deployed on a computing device. Then, unused components may be dynamically removed (i.e., "dehydrated") from the computing device over time based on application or user interaction.
Type: Grant
Filed: July 19, 2016
Date of Patent: August 27, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Hall, Andrew Lovitt, Jeremiah Spradlin
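The hydrate/dehydrate flow can be sketched as a small state machine: an API-intercept layer pulls a component in on first use, and a rules engine later drops components that have gone unused. This class and its method names are invented for illustration; actual component streaming is reduced to set membership.

```python
class HydrateEngine:
    """Toy sketch of dynamic OS-component hydration.

    `available` stands in for components hosted on a server or cloud
    service; `installed` is what currently lives on the device.
    """

    def __init__(self, available_components):
        self.available = set(available_components)
        self.installed = set()

    def intercept_call(self, component):
        """API intercept layer: rehydrate a component on first use."""
        if component not in self.installed:
            if component not in self.available:
                raise LookupError(f"unknown component: {component}")
            self.installed.add(component)  # stand-in for streaming it down
        return f"{component}: call forwarded"

    def dehydrate_unused(self, recently_used):
        """Rules engine decision: drop components not recently used."""
        self.installed &= set(recently_used)
```

In the first embodiment the device starts with `installed` empty and grows it on demand; in the second it starts full and `dehydrate_unused` shrinks it over time.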
-
Publication number: 20180341330
Abstract: A controller is adapted to recognize an input from a user using an input interface, determine whether user gaze information indicates that the user is gazing at a device, and, when the user gaze information indicates that the user is gazing at the device, route response information to the device.
Type: Application
Filed: August 7, 2018
Publication date: November 29, 2018
Inventors: Crystal Lee Parker, Mark Louis Wilson O'Hanlon, Andrew Lovitt, Jason Ryan Farmer
-
Patent number: 10067563
Abstract: User gaze information, which may include a user line of sight, user point of focus, or an area that a user is not looking at, is determined from user body, head, eye, and iris positioning. The user gaze information is used to select a context and interaction set for the user. The interaction sets may include grammars for a speech recognition system, movements for a gesture recognition system, physiological states for a user health parameter detection system, or other possible inputs. When a user focuses on a selected object or area, an interaction set associated with that object or area is activated and used to interpret user inputs. Interaction sets may also be selected based upon areas that a user is not viewing. Multiple devices can share gaze information so that a device does not require its own gaze detector.
Type: Grant
Filed: October 27, 2017
Date of Patent: September 4, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Crystal Lee Parker, Mark Louis Wilson O'Hanlon, Andrew Lovitt, Jason Ryan Farmer
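Selecting an interaction set from gaze is, at its core, a lookup followed by constrained interpretation. A toy sketch, with hypothetical object names and command grammars:

```python
# Hypothetical interaction sets keyed by the object the user is gazing at:
# each set is the speech grammar active while that object has focus.
INTERACTION_SETS = {
    "thermostat": {"warmer", "cooler", "set temperature"},
    "tv":         {"play", "pause", "volume up", "volume down"},
}

def active_grammar(gaze_target):
    """Select the grammar for the gazed-at object; an empty set means
    no gaze-specific commands are currently active."""
    return INTERACTION_SETS.get(gaze_target, set())

def interpret(utterance, gaze_target):
    """Accept an utterance only if the active interaction set allows it."""
    return utterance if utterance in active_grammar(gaze_target) else None
```

Because the grammar is gaze-scoped, "pause" spoken while looking at the TV is accepted, but the same word while looking at the thermostat is ignored, which is exactly what makes the recognizer more robust.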
-
Patent number: 10031737
Abstract: Multiple devices having different architectures or platforms may be supported by the same application store. The related devices are used to synchronize the state of each device in a group, such as all the devices owned or used by a particular user. The devices themselves are used as separate payload delivery systems that are capable of sharing software, such as new or updated applications and operating systems, among the different types of devices in the group. A server may create a software payload that contains software for more than one device architecture. The payload may include segments targeted at different platforms or architectures. Once the payload is loaded on one device, that device can then send the payload to the other devices within the group. Each device that receives the payload uses the appropriate software segment for its particular architecture or platform.
Type: Grant
Filed: February 16, 2012
Date of Patent: July 24, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventor: Andrew Lovitt
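The payload structure and the per-device selection step can be sketched in a few lines. The function names, architecture labels, and payload layout below are assumptions for illustration only:

```python
def build_payload(segments):
    """Server side: bundle per-architecture software segments into one
    payload, e.g. {'arm64': b'...', 'x64': b'...'}."""
    return {"segments": dict(segments)}

def install_and_forward(payload, device_arch, peer_devices):
    """Device side: pick the segment matching this device's architecture,
    then pass the whole payload on to peer devices in the group."""
    segment = payload["segments"].get(device_arch)
    forwarded = list(peer_devices)  # the payload travels onward unchanged
    return segment, forwarded
```

Each device keeps only its own segment but forwards the complete payload, so one download from the store can fan out across a user's heterogeneous devices.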
-
Publication number: 20180059781
Abstract: User gaze information, which may include a user line of sight, user point of focus, or an area that a user is not looking at, is determined from user body, head, eye, and iris positioning. The user gaze information is used to select a context and interaction set for the user. The interaction sets may include grammars for a speech recognition system, movements for a gesture recognition system, physiological states for a user health parameter detection system, or other possible inputs. When a user focuses on a selected object or area, an interaction set associated with that object or area is activated and used to interpret user inputs. Interaction sets may also be selected based upon areas that a user is not viewing. Multiple devices can share gaze information so that a device does not require its own gaze detector.
Type: Application
Filed: October 27, 2017
Publication date: March 1, 2018
Inventors: Crystal Lee Parker, Mark Louis Wilson O'Hanlon, Andrew Lovitt, Jason Ryan Farmer
-
Publication number: 20180025731
Abstract: Specialized recognition engines are configured to recognize acoustic objects. A policy engine can consume a recognition policy that defines the conditions under which specialized recognition engines are to be activated or deactivated. An arbitrator receives events fired by the specialized recognition engines and provides the events to listeners that have registered to receive notification of the occurrence of the events. If a specialized recognition engine recognizes an acoustic object, the policy engine can utilize the recognition policy to identify the specialized recognition engines that are to be activated or deactivated. The identified specialized recognition engines can then be activated or deactivated in order to implement a particular recognition scenario and to meet a particular power consumption requirement.
Type: Application
Filed: July 21, 2016
Publication date: January 25, 2018
Inventor: Andrew Lovitt
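The policy-driven activation step can be sketched as a table mapping recognized events to engine-set changes. The event names, engine names, and policy shape below are hypothetical; a real policy would also encode power-consumption conditions.

```python
# Hypothetical recognition policy: recognizing an event activates some
# specialized engines and deactivates others (e.g. to save power, only
# the low-power wake-word engine runs until a wake word is heard).
POLICY = {
    "wake_word": {"activate": {"command_engine"},
                  "deactivate": {"wake_engine"}},
    "silence":   {"activate": {"wake_engine"},
                  "deactivate": {"command_engine"}},
}

def apply_policy(active_engines, recognized_event, policy=POLICY):
    """Return the new set of active recognition engines after an event."""
    rule = policy.get(recognized_event,
                      {"activate": set(), "deactivate": set()})
    return (set(active_engines) | rule["activate"]) - rule["deactivate"]
```

Cycling between the two events swaps which engine is live, implementing a particular recognition scenario while keeping only the engines that scenario needs powered up.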