Patents by Inventor Rune Oistein Aas

Rune Oistein Aas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230121654
    Abstract: Presented herein are techniques for cropping video streams to create an optimized layout in which participants of a meeting are a similar size. A user device receives a plurality of video streams, each video stream including at least one face of a participant participating in a video communication session. Faces in one or more of the plurality of video streams are cropped so that faces in the plurality of video streams are approximately equal in size, to produce a plurality of processed video streams. The plurality of processed video streams are sorted according to video stream widths to produce sorted video streams, and the plurality of sorted video streams are distributed for display across the smallest number of rows possible on a display of the user device.
    Type: Application
    Filed: February 28, 2022
    Publication date: April 20, 2023
    Inventors: Kristian Tangeland, Rune Øistein Aas, Benoit Rouger
  • Patent number: 11418758
    Abstract: In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera of the video conference endpoint based on the detected one or more participants. The video conference endpoint may send the output of the camera of the video conference endpoint to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may send data descriptive of the one or more alternative framings of the output of the camera to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: August 16, 2022
    Assignee: Cisco Technology, Inc.
    Inventors: Christian Fjelleng Theien, Rune Øistein Aas, Kristian Tangeland
  • Patent number: 11356488
    Abstract: During an online meeting, captured video content generated at an endpoint is analyzed. A participant at the endpoint is identified from the captured video content utilizing face recognition analysis of an isolated facial image of the participant within the captured video content. Identified participant information is generated, modified and/or maintained in response to one or more changes associated with the captured video content, where the one or more changes include an identification of each participant at the endpoint and/or a change in location of one or more identified participants at the endpoint. In response to a determination of one or more criteria being satisfied, the identified participant information is provided in video content for transmission to a remote endpoint (to facilitate display of identifiers for one or more identified participants in the display at the remote endpoint).
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: June 7, 2022
    Assignee: Cisco Technology, Inc.
    Inventors: Paul Thomas Mackell, Christian Fjelleng Theien, Rune Øistein Aas
  • Publication number: 20210144337
    Abstract: In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera of the video conference endpoint based on the detected one or more participants. The video conference endpoint may send the output of the camera of the video conference endpoint to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may send data descriptive of the one or more alternative framings of the output of the camera to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
    Type: Application
    Filed: December 4, 2020
    Publication date: May 13, 2021
    Inventors: Christian Fjelleng Theien, Rune Øistein Aas, Kristian Tangeland
  • Patent number: 10917612
    Abstract: In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera of the video conference endpoint based on the detected one or more participants. The video conference endpoint may send the output of the camera of the video conference endpoint to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may send data descriptive of the one or more alternative framings of the output of the camera to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: February 9, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Christian Fjelleng Theien, Rune Øistein Aas, Kristian Tangeland
  • Publication number: 20200344278
    Abstract: During an online meeting, captured video content generated at an endpoint is analyzed. A participant at the endpoint is identified from the captured video content utilizing face recognition analysis of an isolated facial image of the participant within the captured video content. Identified participant information is generated, modified and/or maintained in response to one or more changes associated with the captured video content, where the one or more changes include an identification of each participant at the endpoint and/or a change in location of one or more identified participants at the endpoint. In response to a determination of one or more criteria being satisfied, the identified participant information is provided in video content for transmission to a remote endpoint (to facilitate display of identifiers for one or more identified participants in the display at the remote endpoint).
    Type: Application
    Filed: April 24, 2019
    Publication date: October 29, 2020
    Inventors: Paul Thomas Mackell, Christian Fjelleng Theien, Rune Øistein Aas
  • Patent number: 10708544
    Abstract: In one embodiment, a method is provided to intelligently frame groups of participants in a meeting. This gives a more pleasing experience with fewer switches, better contextual understanding, and more natural framing, as would be seen in a video production made by a human director. Furthermore, in accordance with another embodiment, conversational framing techniques are provided. During speaker tracking, when two local participants are addressing each other, a method is provided to show a close-up framing showing both participants. By evaluating the direction participants are looking and a speaker history, it is determined if there is a local discussion going on, and an appropriate framing is selected to give far-end participants the most contextually rich experience.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: July 7, 2020
    Assignee: Cisco Technology, Inc.
    Inventors: Kristian Tangeland, Rune Oistein Aas, Christian Fjelleng Theien
  • Publication number: 20200068172
    Abstract: In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera of the video conference endpoint based on the detected one or more participants. The video conference endpoint may send the output of the camera of the video conference endpoint to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may send data descriptive of the one or more alternative framings of the output of the camera to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
    Type: Application
    Filed: October 28, 2019
    Publication date: February 27, 2020
    Inventors: Christian Fjelleng Theien, Rune Øistein Aas, Kristian Tangeland
  • Patent number: 10516852
    Abstract: In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera of the video conference endpoint based on the detected one or more participants. The video conference endpoint may send the output of the camera of the video conference endpoint to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may send data descriptive of the one or more alternative framings of the output of the camera to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: December 24, 2019
    Assignee: Cisco Technology, Inc.
    Inventors: Christian Fjelleng Theien, Rune Øistein Aas, Kristian Tangeland
  • Publication number: 20190356883
    Abstract: In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera of the video conference endpoint based on the detected one or more participants. The video conference endpoint may send the output of the camera of the video conference endpoint to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may send data descriptive of the one or more alternative framings of the output of the camera to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
    Type: Application
    Filed: May 16, 2018
    Publication date: November 21, 2019
    Inventors: Christian Fjelleng Theien, Rune Øistein Aas, Kristian Tangeland
  • Publication number: 20190199967
    Abstract: In one embodiment, a method is provided to intelligently frame groups of participants in a meeting. This gives a more pleasing experience with fewer switches, better contextual understanding, and more natural framing, as would be seen in a video production made by a human director. Furthermore, in accordance with another embodiment, conversational framing techniques are provided. During speaker tracking, when two local participants are addressing each other, a method is provided to show a close-up framing showing both participants. By evaluating the direction participants are looking and a speaker history, it is determined if there is a local discussion going on, and an appropriate framing is selected to give far-end participants the most contextually rich experience.
    Type: Application
    Filed: February 27, 2019
    Publication date: June 27, 2019
    Inventors: Kristian Tangeland, Rune Oistein Aas, Christian Fjelleng Theien
  • Patent number: 10257465
    Abstract: In one embodiment, a method is provided to intelligently frame groups of participants in a meeting. This gives a more pleasing experience with fewer switches, better contextual understanding, and more natural framing, as would be seen in a video production made by a human director. Furthermore, in accordance with another embodiment, conversational framing techniques are provided. During speaker tracking, when two local participants are addressing each other, a method is provided to show a close-up framing showing both participants. By evaluating the direction participants are looking and a speaker history, it is determined if there is a local discussion going on, and an appropriate framing is selected to give far-end participants the most contextually rich experience.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: April 9, 2019
    Assignee: Cisco Technology, Inc.
    Inventors: Kristian Tangeland, Rune Oistein Aas, Christian Fjelleng Theien
  • Publication number: 20180249124
    Abstract: In one embodiment, a method is provided to intelligently frame groups of participants in a meeting. This gives a more pleasing experience with fewer switches, better contextual understanding, and more natural framing, as would be seen in a video production made by a human director. Furthermore, in accordance with another embodiment, conversational framing techniques are provided. During speaker tracking, when two local participants are addressing each other, a method is provided to show a close-up framing showing both participants. By evaluating the direction participants are looking and a speaker history, it is determined if there is a local discussion going on, and an appropriate framing is selected to give far-end participants the most contextually rich experience.
    Type: Application
    Filed: March 1, 2018
    Publication date: August 30, 2018
    Inventors: Kristian Tangeland, Rune Oistein Aas, Christian Fjelleng Theien
  • Patent number: 9986360
    Abstract: A system that automatically calibrates multiple speaker tracking systems with respect to one another based on detection of an active speaker at a collaboration endpoint is presented herein. The system collects a first data point set of an active speaker at the collaboration endpoint using at least a first camera and a first microphone array. The system then receives a plurality of second data point sets from one or more secondary speaker tracking systems located at the collaboration endpoint. Once enough data points have been collected, a reference coordinate system is determined using the first data point set and the one or more second data point sets. Finally, after a reference coordinate system has been determined, the system generates the locations of the one or more secondary speaker tracking systems with respect to the first speaker tracking system.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: May 29, 2018
    Assignee: Cisco Technology, Inc.
    Inventors: Rune Øistein Aas, Kristian Tangeland, Erik Hellerud
  • Patent number: 9942518
    Abstract: In one embodiment, a method is provided to intelligently frame groups of participants in a meeting. This gives a more pleasing experience with fewer switches, better contextual understanding, and more natural framing, as would be seen in a video production made by a human director. Furthermore, in accordance with another embodiment, conversational framing techniques are provided. During speaker tracking, when two local participants are addressing each other, a method is provided to show a close-up framing showing both participants. By evaluating the direction participants are looking and a speaker history, it is determined if there is a local discussion going on, and an appropriate framing is selected to give far-end participants the most contextually rich experience.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: April 10, 2018
    Assignee: Cisco Technology, Inc.
    Inventors: Kristian Tangeland, Rune Oistein Aas, Christian Fjelleng Theien
  • Patent number: 9883143
    Abstract: A video conference endpoint includes a camera to capture video and a microphone array to sense audio. One or more preset views are defined. Images in the captured video are processed with a face detection algorithm to detect faces. Active talkers are detected from the sensed audio. The camera is controlled to capture video from the preset views, and from dynamic views created without user input and which include a dynamic overview and a dynamic close-up view. The camera is controlled to dynamically adjust each of the dynamic views to track changing positions of detected faces over time, and dynamically switch the camera between the preset views, the dynamic overview, and the dynamic close-up view over time based on positions of the detected faces and the detected active talkers relative to the preset views and the dynamic views.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: January 30, 2018
    Assignee: Cisco Technology, Inc.
    Inventors: Kristian Tangeland, Rune Oistein Aas, Erik Hellerud
  • Patent number: 9712783
    Abstract: A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and independently of detecting faces, motion is detected across the view. It is determined if any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are not both met, the endpoint reframes the view.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: July 18, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Glenn Robert Grimsrud Aarrestad, Rune Øistein Aas, Kristian Tangeland
  • Publication number: 20170099462
    Abstract: A video conference endpoint includes a camera to capture video and a microphone array to sense audio. One or more preset views are defined. Images in the captured video are processed with a face detection algorithm to detect faces. Active talkers are detected from the sensed audio. The camera is controlled to capture video from the preset views, and from dynamic views created without user input and which include a dynamic overview and a dynamic close-up view. The camera is controlled to dynamically adjust each of the dynamic views to track changing positions of detected faces over time, and dynamically switch the camera between the preset views, the dynamic overview, and the dynamic close-up view over time based on positions of the detected faces and the detected active talkers relative to the preset views and the dynamic views.
    Type: Application
    Filed: December 19, 2016
    Publication date: April 6, 2017
    Inventors: Kristian Tangeland, Rune Oistein Aas, Erik Hellerud
  • Patent number: 9584763
    Abstract: A video conference endpoint includes one or more cameras to capture video of different views and a microphone array to sense audio. One or more preset views are defined. The endpoint detects faces in the captured video and active audio sources from the sensed audio. The endpoint detects any active talker detected faces that coincide positionally with detected active audio sources, and also detects whether any active talker is in one of the preset views. Based on whether an active talker is detected in any of the preset views, the endpoint switches between capturing video of one of the preset views, and capturing video of a dynamic view.
    Type: Grant
    Filed: November 6, 2014
    Date of Patent: February 28, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Kristian Tangeland, Rune Oistein Aas, Erik Hellerud
  • Publication number: 20160227163
    Abstract: A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and independently of detecting faces, motion is detected across the view. It is determined if any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are not both met, the endpoint reframes the view.
    Type: Application
    Filed: March 3, 2016
    Publication date: August 4, 2016
    Inventors: Glenn Robert Grimsrud Aarrestad, Rune Øistein Aas, Kristian Tangeland
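
The layout technique described in publication 20230121654 (crop so faces are equal in size, sort tiles by width, distribute across the fewest rows) can be illustrated with a short sketch. All function names, the target face height, and the greedy first-fit row packing are assumptions for illustration; the abstract specifies only the high-level steps, and a greedy heuristic does not guarantee the true minimum row count in every case.

```python
def equalize_face_sizes(tiles, target_face_height=120):
    """Scale each (width, height, face_height) tile so its detected
    face height matches the target, making faces roughly equal."""
    scaled = []
    for width, height, face_height in tiles:
        factor = target_face_height / face_height
        scaled.append((round(width * factor), round(height * factor)))
    return scaled

def pack_rows(tiles, display_width):
    """Greedily place width-sorted (width, height) tiles into rows,
    starting a new row only when the current one would overflow."""
    rows, current, used = [], [], 0
    for width, height in sorted(tiles, key=lambda t: t[0], reverse=True):
        if current and used + width > display_width:
            rows.append(current)
            current, used = [], 0
        current.append((width, height))
        used += width
    if current:
        rows.append(current)
    return rows
```

For example, three streams whose faces are 60, 120, and 120 pixels tall would be scaled to a common face size first, then packed: `pack_rows(equalize_face_sizes(tiles), display_width)`.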
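
The reframing rule of patent 9712783 can likewise be sketched as a small decision function: when a previously detected face disappears, hold the current framing through a timeout, and reframe only if no motion coinciding with the lost face's position occurs before the timeout expires. The names and the axis-aligned rectangle overlap test below are illustrative assumptions, not the patented implementation.

```python
def boxes_overlap(a, b):
    """True if axis-aligned boxes (x0, y0, x1, y1) intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def should_reframe(lost_face_box, motion_events, timeout):
    """motion_events: (seconds_since_loss, box) detections.
    Keep the framing if motion coincides with the lost face's
    position before the timeout; otherwise reframe the view."""
    for t, box in motion_events:
        if t < timeout and boxes_overlap(lost_face_box, box):
            return False  # person likely still present: hold framing
    return True  # no coinciding motion in time: reframe
```

This mirrors the abstract's conditions (i) and (ii): only when both fail does the endpoint reframe.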
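
The cross-calibration idea in patent 9986360 rests on paired observations: both tracking systems localize the same active talker, and enough position pairs determine the secondary system's pose in the first system's reference frame. The sketch below is a deliberate simplification that assumes the systems' axes are already aligned, so only a translation offset remains; a full solution would also estimate rotation (for instance with a Kabsch/Procrustes fit over the point pairs).

```python
def estimate_offset(primary_points, secondary_points):
    """Estimate the secondary tracker's position in the primary
    tracker's frame by averaging per-pair coordinate differences.
    Each point is an (x, y, z) measurement of the same talker."""
    n = len(primary_points)
    return tuple(
        sum(p[i] - s[i] for p, s in zip(primary_points, secondary_points)) / n
        for i in range(3)
    )
```

Averaging over many pairs suppresses per-measurement noise, which is why the abstract waits until "enough data points have been collected" before fixing the reference coordinate system.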