Patents by Inventor Jason Francis Harrison
Jason Francis Harrison has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10979669
Abstract: In one embodiment, a method includes accessing foreground visual data that comprises a set of coordinate points that correspond to a plurality of surface points of a person in an environment; generating a bounding box for the set of coordinate points, wherein the bounding box comprises every coordinate point in the set of coordinate points; providing instructions to collect background visual data for an area in the environment that is outside of the bounding box; and providing the foreground visual data and the background visual data to an intelligent director associated with the computing device.
Type: Grant
Filed: October 5, 2018
Date of Patent: April 13, 2021
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
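The bounding-box step this abstract describes can be illustrated with a minimal sketch. This is not code from the patent; the function names and the 2-D point representation are assumptions for illustration only:

```python
def bounding_box(points):
    """Axis-aligned box covering every (x, y) coordinate point in the set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def is_background(point, box):
    """True if a point lies outside the bounding box, i.e. in the area
    for which background visual data would be collected."""
    x_min, y_min, x_max, y_max = box
    x, y = point
    return not (x_min <= x <= x_max and y_min <= y <= y_max)
```

In this sketch, foreground surface points define the box and everything outside it is treated as background.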
-
Patent number: 10915776
Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. A user viewing video data captured by another user's client device identifies an object of interest in the video data to the other user's client device. The other user's client device modifies captured video data so a focal point of the captured video data is the object of interest and so the object of interest is magnified in the captured video data. Subsequently, the modified video data is transmitted to the client device of the user viewing the captured video data.
Type: Grant
Filed: October 5, 2018
Date of Patent: February 9, 2021
Assignee: Facebook, Inc.
Inventors: Eric W. Hwang, Jason Francis Harrison, Timo Juhani Ahonen
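One common way to make an object the focal point and magnify it is to crop a region centered on the object and upscale the crop to the full frame. The sketch below shows only the crop-geometry step; it is an assumed implementation, not taken from the patent:

```python
def focal_crop(frame_w, frame_h, obj_x, obj_y, zoom):
    """Compute a crop rectangle centered on the object of interest.
    Upscaling this rectangle back to (frame_w, frame_h) magnifies the
    object by the given zoom factor."""
    crop_w, crop_h = frame_w / zoom, frame_h / zoom
    # Center the crop on the object, clamped so it stays inside the frame.
    left = min(max(obj_x - crop_w / 2, 0), frame_w - crop_w)
    top = min(max(obj_y - crop_h / 2, 0), frame_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)
```

Clamping keeps the focal point as close to center as the frame boundary allows, which avoids black borders when the object sits near an edge.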
-
Patent number: 10848687
Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a sending client device captures and transmits video data to a receiving client, while receiving one or more video presentation settings of the receiving client device. The sending client device applies one or more models to the captured video data and compares output from the models to the video presentation settings of the receiving client device. Based on the comparison, the sending client device provides suggested modifications to one or more video presentation settings to the receiving client device. For example, the sending client device provides a suggestion to reorient a display device of the receiving client device.
Type: Grant
Filed: October 5, 2018
Date of Patent: November 24, 2020
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Timo Juhani Ahonen, Eric W. Hwang, Belmer Perrella Garcia Negrillo
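The reorientation example in the abstract amounts to comparing the captured video's orientation against the receiving display's orientation. A minimal sketch of that comparison, with assumed parameter names and suggestion strings (the patent does not specify these):

```python
def suggest_orientation(content_w, content_h, display_w, display_h):
    """Suggest reorienting the receiving display when its orientation
    (landscape vs. portrait) mismatches the captured video's orientation."""
    content_landscape = content_w >= content_h
    display_landscape = display_w >= display_h
    if content_landscape == display_landscape:
        return None  # orientations already match; no suggestion needed
    if content_landscape:
        return "rotate display to landscape"
    return "rotate display to portrait"
```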
-
Patent number: 10838689
Abstract: In one embodiment, a method includes receiving audio input during an audio-video communication session. The audio input is generated by a first sound source within an environment and a second sound source within the environment. The method includes receiving video input depicting the first sound source and the second sound source in the environment. The method includes identifying the first sound source and the second sound source using the audio input and the video input. The method includes predicting a first engagement metric for the first sound source and a second engagement metric for the second sound source based on the identifying. The method includes processing the audio input to generate an audio output signal based on a comparison of the first engagement metric and the second engagement metric. The method includes providing the audio output signal to a computing device associated with the audio-video communication session.
Type: Grant
Filed: September 19, 2019
Date of Patent: November 17, 2020
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
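One simple way to generate an output signal "based on a comparison of" two engagement metrics is to mix the sources with engagement-proportional weights. The patent does not disclose this formula; the sketch below is an illustrative assumption:

```python
def mix_by_engagement(samples_a, samples_b, engagement_a, engagement_b):
    """Mix two per-sample audio streams, weighting each source by its
    engagement metric so the more engaging source dominates the output."""
    total = engagement_a + engagement_b
    w_a, w_b = engagement_a / total, engagement_b / total
    return [w_a * a + w_b * b for a, b in zip(samples_a, samples_b)]
```

With engagement metrics of 3.0 and 1.0, the first source contributes 75% of the output energy.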
-
Patent number: 10834358
Abstract: A system includes multiple client devices that are capable of capturing and displaying video data, in which at least two of the client devices have different amounts of processing power. A connection is established at a client device having more processing power to a client device having less processing power. The client device having more processing power receives video data captured by the client device having less processing power as well as metadata associated with the video data via the connection as the video data are being captured by the client device having less processing power. The client device having more processing power processes the video data based on the metadata associated with the video data within a duration of the connection, thereby enhancing a quality of the video data. The processed video data are then displayed at the client device having more processing power.
Type: Grant
Filed: December 31, 2018
Date of Patent: November 10, 2020
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Timo Juhani Ahonen
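As a toy illustration of metadata-driven enhancement on the more powerful device, the sketch below applies a capture-time gain from the metadata to 8-bit luminance values. The metadata key and the gain-based correction are assumptions for illustration, not details from the patent:

```python
def enhance_with_metadata(frame, metadata):
    """Correct a frame of 8-bit luminance samples using capture-time
    metadata sent alongside the video by the less powerful device."""
    gain = metadata.get("exposure_gain", 1.0)  # hypothetical metadata field
    # Scale each sample and clamp to the valid 8-bit range.
    return [min(int(p * gain), 255) for p in frame]
```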
-
Patent number: 10659731
Abstract: In one embodiment, a method includes accessing input data from one or more different input sources. The input sources include: one or more cameras, one or more microphones, and a social graph maintained by a social-networking system. The method includes, based on the input data, generating a current descriptive model for a current audio-video communication session that comprises one or more descriptive characteristics about (1) an environment associated with the current audio-video communication session, (2) one or more people within the environment, or (3) one or more contextual elements associated with the current audio-video communication session. The method also includes generating one or more instructions for the current audio-video communication session that are based on the one or more descriptive characteristics; and sending the one or more instructions to a computing device associated with the one or more cameras and the one or more microphones.
Type: Grant
Filed: October 5, 2018
Date of Patent: May 19, 2020
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
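The pipeline the abstract describes (inputs → descriptive model → instructions) can be sketched as follows. The specific characteristics, thresholds, and instruction strings here are invented for illustration; the patent does not enumerate them:

```python
def descriptive_model(people_count, ambient_noise_db, relationship):
    """Combine camera, microphone, and social-graph inputs into a
    descriptive model, then derive instructions from its characteristics."""
    model = {
        "people_in_environment": people_count,      # from cameras
        "noisy_environment": ambient_noise_db > 60, # from microphones
        "close_relationship": relationship in ("friend", "family"),  # from social graph
    }
    instructions = []
    if model["people_in_environment"] > 1:
        instructions.append("widen camera framing")
    if model["noisy_environment"]:
        instructions.append("enable noise suppression")
    return model, instructions
```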
-
Publication number: 20200112690
Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a sending client device captures and transmits video data to a receiving client, while receiving one or more video presentation settings of the receiving client device. The sending client device applies one or more models to the captured video data and compares output from the models to the video presentation settings of the receiving client device. Based on the comparison, the sending client device provides suggested modifications to one or more video presentation settings to the receiving client device. For example, the sending client device provides a suggestion to reorient a display device of the receiving client device.
Type: Application
Filed: October 5, 2018
Publication date: April 9, 2020
Inventors: Jason Francis Harrison, Timo Juhani Ahonen, Eric W. Hwang, Belmer Perrella Garcia Negrillo
-
Publication number: 20200110958
Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. A user viewing video data captured by another user's client device identifies an object of interest in the video data to the other user's client device. The other user's client device modifies captured video data so a focal point of the captured video data is the object of interest and so the object of interest is magnified in the captured video data. Subsequently, the modified video data is transmitted to the client device of the user viewing the captured video data.
Type: Application
Filed: October 5, 2018
Publication date: April 9, 2020
Inventors: Eric W. Hwang, Jason Francis Harrison, Timo Juhani Ahonen
-
Publication number: 20200050420
Abstract: In one embodiment, a method includes receiving audio input data from a microphone array of at least two microphones. The audio input data is generated by a first sound source at a first location and a second sound source at a second location. The method also includes calculating a first engagement metric for the first sound source and a second engagement metric for the second sound source. The first engagement metric approximates an interest level of a receiving user for the first sound source, and the second engagement metric approximates an interest level from the receiving user for the second sound source. The method also includes determining that the first engagement metric is greater than the second engagement metric, and processing the audio input data to generate an audio output signal. The audio output signal may amplify sound generated by the first sound source relative to the second sound source.
Type: Application
Filed: September 19, 2019
Publication date: February 13, 2020
Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
-
Patent number: 10523864
Abstract: In one embodiment, a method includes identifying, from a set of coordinate points that correspond to a plurality of surface points of a person in an environment, a coordinate point that corresponds to a facial feature of the person; generating a facial structure for a face of the person, wherein the facial structure: covers a plurality of facial features of the person; and substantially matches a pre-determined facial structure; generating a body skeletal structure for the person, wherein the body skeletal structure substantially matches a pre-determined body skeletal structure and substantially aligns with the generated facial structure in at least one dimension of a two-dimensional coordinate plane; and associating the generated body skeletal structure and facial structure with the person in the environment.
Type: Grant
Filed: October 5, 2018
Date of Patent: December 31, 2019
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
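The "substantially aligns … in at least one dimension" test can be sketched as comparing the horizontal centers of the two structures within a tolerance. The center-of-extent measure and the tolerance are illustrative assumptions, not the patent's actual criterion:

```python
def x_center(points):
    """Horizontal center of a structure's extent over (x, y) points."""
    xs = [p[0] for p in points]
    return (min(xs) + max(xs)) / 2

def substantially_aligned(facial_points, skeletal_points, tolerance=0.1):
    """True when the facial structure and body skeletal structure align
    horizontally within a tolerance expressed as a fraction of body width."""
    body_xs = [p[0] for p in skeletal_points]
    width = (max(body_xs) - min(body_xs)) or 1  # avoid division by zero
    offset = abs(x_center(facial_points) - x_center(skeletal_points))
    return offset / width <= tolerance
```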
-
Patent number: 10511808
Abstract: In one embodiment, a method includes, at a first time during an audio-video communication session: (1) determining that a first participant is located in an environment; (2) locating a first body region of the first participant; (3) generating a first color histogram of the first body region that represents a first distribution of colors of the first body region. The method also includes, at a second time: (1) determining that a second participant is located in the environment; (2) locating a second body region of the second participant; (3) generating a second color histogram of the second body region that represents a second distribution of one or more colors of the second body region. The method also includes determining that the first and second color histograms are substantially alike, and based on the determination, determining that the first participant is the same as the second participant.
Type: Grant
Filed: October 5, 2018
Date of Patent: December 17, 2019
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
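A minimal sketch of the histogram comparison this abstract describes, using single-channel intensities and histogram intersection as the "substantially alike" test. The bin count, the intersection measure, and the threshold are assumptions for illustration; the patent does not specify them:

```python
def color_histogram(pixels, bins=4):
    """Normalized coarse histogram of 8-bit intensity values for a body region."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [count / n for count in hist]

def histograms_alike(h1, h2, threshold=0.9):
    """Histogram intersection: sums the per-bin overlap. Values near 1.0
    suggest the two body regions belong to the same participant."""
    return sum(min(a, b) for a, b in zip(h1, h2)) >= threshold
```

In practice a per-channel (e.g. RGB or HSV) histogram would be used, but the comparison logic is the same per channel.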
-
Patent number: 10462422
Abstract: In one embodiment, a method includes receiving audio input data from a microphone array of at least two microphones. The audio input data is generated by a first sound source at a first location and a second sound source at a second location. The method also includes calculating a first engagement metric for the first sound source and a second engagement metric for the second sound source. The first engagement metric approximates an interest level of a receiving user for the first sound source, and the second engagement metric approximates an interest level from the receiving user for the second sound source. The method also includes determining that the first engagement metric is greater than the second engagement metric, and processing the audio input data to generate an audio output signal. The audio output signal may amplify sound generated by the first sound source relative to the second sound source.
Type: Grant
Filed: April 9, 2018
Date of Patent: October 29, 2019
Assignee: Facebook, Inc.
Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
-
Publication number: 20190311480
Abstract: In one embodiment, a method includes accessing foreground visual data that comprises a set of coordinate points that correspond to a plurality of surface points of a person in an environment; generating a bounding box for the set of coordinate points, wherein the bounding box comprises every coordinate point in the set of coordinate points; providing instructions to collect background visual data for an area in the environment that is outside of the bounding box; and providing the foreground visual data and the background visual data to an intelligent director associated with the computing device.
Type: Application
Filed: October 5, 2018
Publication date: October 10, 2019
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
-
Publication number: 20190313054
Abstract: In one embodiment, a method includes receiving audio input data from a microphone array of at least two microphones. The audio input data is generated by a first sound source at a first location and a second sound source at a second location. The method also includes calculating a first engagement metric for the first sound source and a second engagement metric for the second sound source. The first engagement metric approximates an interest level of a receiving user for the first sound source, and the second engagement metric approximates an interest level from the receiving user for the second sound source. The method also includes determining that the first engagement metric is greater than the second engagement metric, and processing the audio input data to generate an audio output signal. The audio output signal may amplify sound generated by the first sound source relative to the second sound source.
Type: Application
Filed: April 9, 2018
Publication date: October 10, 2019
Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
-
Publication number: 20190313056
Abstract: In one embodiment, a method includes, at a first time during an audio-video communication session: (1) determining that a first participant is located in an environment; (2) locating a first body region of the first participant; (3) generating a first color histogram of the first body region that represents a first distribution of colors of the first body region. The method also includes, at a second time: (1) determining that a second participant is located in the environment; (2) locating a second body region of the second participant; (3) generating a second color histogram of the second body region that represents a second distribution of one or more colors of the second body region. The method also includes determining that the first and second color histograms are substantially alike, and based on the determination, determining that the first participant is the same as the second participant.
Type: Application
Filed: October 5, 2018
Publication date: October 10, 2019
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
-
Publication number: 20190313058
Abstract: In one embodiment, a method includes accessing input data from one or more different input sources. The input sources include: one or more cameras, one or more microphones, and a social graph maintained by a social-networking system. The method includes, based on the input data, generating a current descriptive model for a current audio-video communication session that comprises one or more descriptive characteristics about (1) an environment associated with the current audio-video communication session, (2) one or more people within the environment, or (3) one or more contextual elements associated with the current audio-video communication session. The method also includes generating one or more instructions for the current audio-video communication session that are based on the one or more descriptive characteristics; and sending the one or more instructions to a computing device associated with the one or more cameras and the one or more microphones.
Type: Application
Filed: October 5, 2018
Publication date: October 10, 2019
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
-
Publication number: 20190313013
Abstract: In one embodiment, a method includes identifying, from a set of coordinate points that correspond to a plurality of surface points of a person in an environment, a coordinate point that corresponds to a facial feature of the person; generating a facial structure for a face of the person, wherein the facial structure: covers a plurality of facial features of the person; and substantially matches a pre-determined facial structure; generating a body skeletal structure for the person, wherein the body skeletal structure substantially matches a pre-determined body skeletal structure and substantially aligns with the generated facial structure in at least one dimension of a two-dimensional coordinate plane; and associating the generated body skeletal structure and facial structure with the person in the environment.
Type: Application
Filed: October 5, 2018
Publication date: October 10, 2019
Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq