Patents by Inventor Johan Ludvig Nielsen
Johan Ludvig Nielsen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11671753
Abstract: In one embodiment, a multi-microphone system for an endpoint device receives input signals for a remote conference between the endpoint device and at least one other endpoint device. The multi-microphone system may include at least a top microphone unit and a bottom microphone unit. A signal degradation event that causes degradation of signals received by the top microphone unit or the bottom microphone unit is detected. Then, based on information regarding the signal degradation event, it is determined whether the signal degradation event affects one or both of the top microphone unit and the bottom microphone unit. In response, an output signal is generated for transmission to the at least one other endpoint device, and the output signal uses a portion of the input signals that excludes signals received by the top microphone unit and/or the bottom microphone unit determined to be affected by the signal degradation event.
Type: Grant
Filed: August 27, 2021
Date of Patent: June 6, 2023
Assignee: Cisco Technology, Inc.
Inventors: Johan Ludvig Nielsen, Patrick Ryan Tompion Achtelik, Per Arild Kahrs Dykesteen, Ragnvald Balch Barth
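The selection logic in this abstract can be sketched minimally as follows. The function and flag names (`mix_output`, `top_degraded`, `bottom_degraded`) are illustrative assumptions, not terms from the patent, and the fallback of averaging both units is one plausible reading of "uses a portion of the input signals":

```python
def mix_output(top_samples, bottom_samples, top_degraded, bottom_degraded):
    """Build the output from whichever microphone units are unaffected.

    Excludes a unit flagged as degraded; when neither (or both) is
    degraded, falls back to averaging the two units sample by sample.
    """
    if top_degraded and not bottom_degraded:
        return list(bottom_samples)   # exclude the degraded top unit
    if bottom_degraded and not top_degraded:
        return list(top_samples)      # exclude the degraded bottom unit
    return [(t + b) / 2 for t, b in zip(top_samples, bottom_samples)]
```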
-
Publication number: 20230063260
Abstract: In one embodiment, a multi-microphone system for an endpoint device receives input signals for a remote conference between the endpoint device and at least one other endpoint device. The multi-microphone system may include at least a top microphone unit and a bottom microphone unit. A signal degradation event that causes degradation of signals received by the top microphone unit or the bottom microphone unit is detected. Then, based on information regarding the signal degradation event, it is determined whether the signal degradation event affects one or both of the top microphone unit and the bottom microphone unit. In response, an output signal is generated for transmission to the at least one other endpoint device, and the output signal uses a portion of the input signals that excludes signals received by the top microphone unit and/or the bottom microphone unit determined to be affected by the signal degradation event.
Type: Application
Filed: August 27, 2021
Publication date: March 2, 2023
Inventors: Johan Ludvig Nielsen, Patrick Ryan Tompion Achtelik, Per Arild Kahrs Dykesteen, Ragnvald Balch Barth
-
Patent number: 11425502
Abstract: Methods and a system that automatically determine the spatial relationship of microphone assemblies with respect to a camera of a video conference endpoint through audio signal processing. The video conference endpoint may include at least a microphone assembly and a loudspeaker. The microphone assembly may include a plurality of co-located directional microphones. The video conference endpoint may detect, by the plurality of co-located directional microphones of the microphone assembly, audio emitted from the loudspeaker of the video conference endpoint. The video conference endpoint may then generate data representing a spatial relationship of the microphone assembly with respect to the loudspeaker based on a compilation of the audio detected by the co-located directional microphones of the microphone assembly.
Type: Grant
Filed: September 18, 2020
Date of Patent: August 23, 2022
Assignee: Cisco Technology, Inc.
Inventors: Johan Ludvig Nielsen, Håvard Kjellmo Arnestad, David Alejandro Rivas Méndez, Jochen C. Schirdewahn
-
Patent number: 11399100
Abstract: A loudspeaker is driven with a loudspeaker signal to generate sound, and sound is converted to one or more microphone signals with one or more microphones. The microphone signals are concurrently transformed into far-field beam signals and near-field beam signals. The far-field beam signals and the near-field beam signals are concurrently processed to produce one or more far-field output signals and one or more near-field output signals, respectively. Echo is detected and canceled in the far-field beam signals and in the near-field beam signals. When the echo is not detected above a threshold, the one or more far-field output signals are outputted. When the echo is detected above the threshold, the one or more near-field output signals are outputted. A signal based on the one or more output signals is transmitted.
Type: Grant
Filed: June 28, 2019
Date of Patent: July 26, 2022
Assignee: Cisco Technology, Inc.
Inventors: Haohai Sun, Johan Ludvig Nielsen
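The thresholded switch between far-field and near-field outputs can be sketched as below. The normalized-correlation echo indicator and the 0.5 threshold are illustrative stand-ins; the abstract does not specify how echo level is measured:

```python
def echo_level(beam, loudspeaker_ref):
    """Crude echo indicator: normalized correlation between a beam
    signal and the loudspeaker reference (illustrative only)."""
    num = sum(b * r for b, r in zip(beam, loudspeaker_ref))
    den = (sum(b * b for b in beam) * sum(r * r for r in loudspeaker_ref)) ** 0.5
    return abs(num) / den if den else 0.0

def select_output(far_outputs, near_outputs, level, threshold=0.5):
    """Output the near-field beams when echo is detected above threshold,
    otherwise the far-field beams."""
    return near_outputs if level > threshold else far_outputs
```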
-
Publication number: 20220095052
Abstract: Methods and a system that automatically determine the spatial relationship of microphone assemblies with respect to a camera of a video conference endpoint through audio signal processing. The video conference endpoint may include at least a microphone assembly and a loudspeaker. The microphone assembly may include a plurality of co-located directional microphones. The video conference endpoint may detect, by the plurality of co-located directional microphones of the microphone assembly, audio emitted from the loudspeaker of the video conference endpoint. The video conference endpoint may then generate data representing a spatial relationship of the microphone assembly with respect to the loudspeaker based on a compilation of the audio detected by the co-located directional microphones of the microphone assembly.
Type: Application
Filed: September 18, 2020
Publication date: March 24, 2022
Inventors: Johan Ludvig Nielsen, Håvard Kjellmo Arnestad, David Alejandro Rivas Méndez, Jochen C. Schirdewahn
-
Patent number: 11115625
Abstract: At a video conference endpoint including a camera, a microphone array, and one or more microphone assemblies, the video conference endpoint may divide a video output of the camera into one or more tracking sectors and detect a head position for each participant in the video output. The video conference endpoint may determine within which tracking sector each detected head position is located. The video conference endpoint may determine active sound source positions of the actively speaking participants based on sound being detected or captured by the microphone array and microphone assemblies, and may determine within which tracking sector the active sound source positions are located. For each tracking sector that contains an active sound source position, the video conference endpoint may update the positional audio metadata for that particular tracking sector based on the active sound source positions and the detected head positions located in that tracking sector.
Type: Grant
Filed: December 14, 2020
Date of Patent: September 7, 2021
Assignee: Cisco Technology, Inc.
Inventors: Øivind Stuan, Johan Ludvig Nielsen
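The sector bookkeeping described here can be sketched as a simple horizontal partition of the frame. Dividing only along the x axis, and the function names themselves, are assumptions for illustration; the patent does not say how sectors are shaped:

```python
def sector_of(x, frame_width, num_sectors):
    """Map a horizontal pixel coordinate to a tracking-sector index."""
    sector_width = frame_width / num_sectors
    return min(int(x // sector_width), num_sectors - 1)

def heads_by_sector(head_xs, frame_width, num_sectors):
    """Group detected head positions by the tracking sector they fall in,
    so positional audio metadata can be updated per sector."""
    sectors = {}
    for x in head_xs:
        sectors.setdefault(sector_of(x, frame_width, num_sectors), []).append(x)
    return sectors
```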
-
Publication number: 20200275199
Abstract: A microphone array includes one or more front-facing microphones disposed on a front surface of the collaboration endpoint and a plurality of secondary microphones disposed on a second surface of the collaboration endpoint. The sound signals received at each of the one or more front-facing microphones and the plurality of secondary microphones are converted into microphone signals. When the sound signals have a frequency below a threshold frequency, an output signal is generated from microphone signals generated by the one or more front-facing microphones and the plurality of secondary microphones. When the sound signals have a frequency at or above a threshold frequency, an output signal is generated from microphone signals generated by only the one or more front-facing microphones.
Type: Application
Filed: May 13, 2020
Publication date: August 27, 2020
Inventors: Gisle Langen Enstad, Haohai Sun, Johan Ludvig Nielsen
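The frequency-dependent microphone selection in this family of filings reduces to a simple rule, sketched below. The 1 kHz threshold is an illustrative placeholder, not a value from the patent:

```python
def active_microphones(signal_freq_hz, front_mics, secondary_mics,
                       threshold_hz=1000.0):
    """Choose which microphone signals contribute to the output.

    Below the threshold frequency all microphones are combined; at or
    above it, only the front-facing microphones are used.
    """
    if signal_freq_hz < threshold_hz:
        return front_mics + secondary_mics
    return front_mics
```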
-
Patent number: 10687139
Abstract: A microphone array includes one or more front-facing microphones disposed on a front surface of the collaboration endpoint and a plurality of secondary microphones disposed on a second surface of the collaboration endpoint. The sound signals received at each of the one or more front-facing microphones and the plurality of secondary microphones are converted into microphone signals. When the sound signals have a frequency below a threshold frequency, an output signal is generated from microphone signals generated by the one or more front-facing microphones and the plurality of secondary microphones. When the sound signals have a frequency at or above a threshold frequency, an output signal is generated from microphone signals generated by only the one or more front-facing microphones.
Type: Grant
Filed: September 20, 2019
Date of Patent: June 16, 2020
Assignee: Cisco Technology, Inc.
Inventors: Gisle Langen Enstad, Haohai Sun, Johan Ludvig Nielsen
-
Publication number: 20200120418
Abstract: A microphone array includes one or more front-facing microphones disposed on a front surface of the collaboration endpoint and a plurality of secondary microphones disposed on a second surface of the collaboration endpoint. The sound signals received at each of the one or more front-facing microphones and the plurality of secondary microphones are converted into microphone signals. When the sound signals have a frequency below a threshold frequency, an output signal is generated from microphone signals generated by the one or more front-facing microphones and the plurality of secondary microphones. When the sound signals have a frequency at or above a threshold frequency, an output signal is generated from microphone signals generated by only the one or more front-facing microphones.
Type: Application
Filed: September 20, 2019
Publication date: April 16, 2020
Inventors: Gisle Langen Enstad, Haohai Sun, Johan Ludvig Nielsen
-
Patent number: 10491995
Abstract: A microphone array includes one or more front-facing microphones disposed on a front surface of the collaboration endpoint and a plurality of secondary microphones disposed on a second surface of the collaboration endpoint. The sound signals received at each of the one or more front-facing microphones and the plurality of secondary microphones are converted into microphone signals. When the sound signals have a frequency below a threshold frequency, an output signal is generated from microphone signals generated by the one or more front-facing microphones and the plurality of secondary microphones. When the sound signals have a frequency at or above a threshold frequency, an output signal is generated from microphone signals generated by only the one or more front-facing microphones.
Type: Grant
Filed: October 11, 2018
Date of Patent: November 26, 2019
Assignee: Cisco Technology, Inc.
Inventors: Gisle Langen Enstad, Haohai Sun, Johan Ludvig Nielsen
-
Publication number: 20190342456
Abstract: A loudspeaker is driven with a loudspeaker signal to generate sound, and sound is converted to one or more microphone signals with one or more microphones. The microphone signals are concurrently transformed into far-field beam signals and near-field beam signals. The far-field beam signals and the near-field beam signals are concurrently processed to produce one or more far-field output signals and one or more near-field output signals, respectively. Echo is detected and canceled in the far-field beam signals and in the near-field beam signals. When the echo is not detected above a threshold, the one or more far-field output signals are outputted. When the echo is detected above the threshold, the one or more near-field output signals are outputted. A signal based on the one or more output signals is transmitted.
Type: Application
Filed: June 28, 2019
Publication date: November 7, 2019
Inventors: Haohai Sun, Johan Ludvig Nielsen
-
Patent number: 10440322
Abstract: A system that automatically configures the behavior of the display devices of a video conference endpoint. The controller may detect, at a microphone array having a predetermined physical relationship with respect to a camera, audio emitted from one or more loudspeakers, each loudspeaker having a predetermined physical relationship with respect to at least one of one or more display devices in a conference room. The controller may then generate data representing a spatial relationship between the one or more display devices and the camera based on the detected audio. Finally, the controller may assign video sources received by the endpoint to each of the one or more display devices based on the data representing the spatial relationship and the content of each received video source, and may also assign outputs from multiple video cameras to an outgoing video stream based on the data representing the spatial relationship.
Type: Grant
Filed: March 1, 2018
Date of Patent: October 8, 2019
Assignee: Cisco Technology, Inc.
Inventors: Glenn R. G. Aarrestad, Lennart Burenius, Jochen Christof Schirdewahn, Johan Ludvig Nielsen
-
Patent number: 10389885
Abstract: A loudspeaker is driven with a loudspeaker signal to generate sound, and sound is converted to one or more microphone signals with one or more microphones. The microphone signals are concurrently transformed into far-field beam signals and near-field beam signals. The far-field beam signals and the near-field beam signals are concurrently processed to produce one or more far-field output signals and one or more near-field output signals, respectively. Echo is detected and canceled in the far-field beam signals and in the near-field beam signals. When the echo is not detected above a threshold, the one or more far-field output signals are outputted. When the echo is detected above the threshold, the one or more near-field output signals are outputted. A signal based on the one or more output signals is transmitted.
Type: Grant
Filed: February 1, 2017
Date of Patent: August 20, 2019
Assignee: Cisco Technology, Inc.
Inventors: Haohai Sun, Johan Ludvig Nielsen
-
Patent number: 10177859
Abstract: In one embodiment, a method includes receiving a plurality of ultrasound frequency sweeps in a sound receiving device. Each of the plurality of ultrasound frequency sweeps is centered on one of at least two predetermined frequencies. The method also includes converting the ultrasound frequency sweeps into an ultrasound message based on a central frequency of each of the ultrasound frequency sweeps received, and placing the ultrasound message into a receive buffer. Then at least a network address is extracted from the ultrasound message, and the network address is used to establish a communication session over a data network with a telecommunications device.
Type: Grant
Filed: September 30, 2016
Date of Patent: January 8, 2019
Assignee: Cisco Technology, Inc.
Inventors: Ragnvald Barth, Sverre Huseby, Johan Ludvig Nielsen, Dan Peder Eriksen
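The sweep-to-message conversion can be sketched as mapping each detected center frequency to a symbol and packing the symbols into bytes. The specific frequencies (20.5 kHz and 21.5 kHz), the binary encoding, and the tolerance are hypothetical placeholders; the abstract only says sweeps are centered on at least two predetermined frequencies:

```python
# Hypothetical mapping from sweep center frequency (Hz) to a bit value.
CENTER_FREQS = {20500.0: 0, 21500.0: 1}

def decode_sweeps(center_freqs, tolerance=200.0):
    """Convert a sequence of detected sweep center frequencies to bits,
    matching each against the predetermined frequencies within tolerance."""
    bits = []
    for f in center_freqs:
        for ref, bit in CENTER_FREQS.items():
            if abs(f - ref) <= tolerance:
                bits.append(bit)
                break
    return bits

def bits_to_bytes(bits):
    """Pack decoded bits (MSB first) into a bytes message for the buffer."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```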
-
Patent number: 10091575
Abstract: A method and system for obtaining an audio signal. In one embodiment, the method comprises receiving a first sound signal at a first microphone arranged at a first height vertically above a substantially flat surface; receiving a second sound signal at a second microphone arranged at a second height vertically above the substantially flat surface; processing a signal provided by the first microphone using a low pass filter; processing a signal provided by the second microphone using a high pass filter; adding the signals processed by the low pass filter and the high pass filter to form a sum signal; and outputting the sum signal as an audio signal.
Type: Grant
Filed: July 1, 2015
Date of Patent: October 2, 2018
Assignee: Cisco Technology, Inc.
Inventors: Johan Ludvig Nielsen, Gisle Langen Enstad
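The low-pass/high-pass crossover sum described here can be sketched with complementary one-pole filters. The one-pole design and the cutoff are stand-ins for whatever filters the patent actually specifies:

```python
import math

def one_pole_coeff(cutoff_hz, fs_hz):
    """Smoothing coefficient for a one-pole low-pass filter."""
    return math.exp(-2.0 * math.pi * cutoff_hz / fs_hz)

def crossover_sum(low_mic, high_mic, cutoff_hz, fs_hz):
    """Low-pass the first microphone, high-pass the second, and sum.

    The high-pass is formed as the complement (input minus its own
    low-pass), so the two branches add back to a full-band signal.
    """
    a = one_pole_coeff(cutoff_hz, fs_hz)
    out = []
    lp_lo = lp_hi = 0.0
    for lo, hi in zip(low_mic, high_mic):
        lp_lo = (1.0 - a) * lo + a * lp_lo   # low-pass of the first mic
        lp_hi = (1.0 - a) * hi + a * lp_hi
        out.append(lp_lo + (hi - lp_hi))     # LP branch + complementary HP branch
    return out
```

With identical inputs on both microphones the two branches reconstruct the input exactly, which is the point of a complementary crossover.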
-
Publication number: 20180220007
Abstract: A loudspeaker is driven with a loudspeaker signal to generate sound, and sound is converted to one or more microphone signals with one or more microphones. The microphone signals are concurrently transformed into far-field beam signals and near-field beam signals. The far-field beam signals and the near-field beam signals are concurrently processed to produce one or more far-field output signals and one or more near-field output signals, respectively. Echo is detected and canceled in the far-field beam signals and in the near-field beam signals. When the echo is not detected above a threshold, the one or more far-field output signals are outputted. When the echo is detected above the threshold, the one or more near-field output signals are outputted. A signal based on the one or more output signals is transmitted.
Type: Application
Filed: February 1, 2017
Publication date: August 2, 2018
Inventors: Haohai Sun, Johan Ludvig Nielsen
-
Publication number: 20180192002
Abstract: A system that automatically configures the behavior of the display devices of a video conference endpoint. The controller may detect, at a microphone array having a predetermined physical relationship with respect to a camera, audio emitted from one or more loudspeakers, each loudspeaker having a predetermined physical relationship with respect to at least one of one or more display devices in a conference room. The controller may then generate data representing a spatial relationship between the one or more display devices and the camera based on the detected audio. Finally, the controller may assign video sources received by the endpoint to each of the one or more display devices based on the data representing the spatial relationship and the content of each received video source, and may also assign outputs from multiple video cameras to an outgoing video stream based on the data representing the spatial relationship.
Type: Application
Filed: March 1, 2018
Publication date: July 5, 2018
Inventors: Glenn R. G. Aarrestad, Lennart Burenius, Jochen Christof Schirdewahn, Johan Ludvig Nielsen
-
Publication number: 20180124354
Abstract: A system that automatically configures the behavior of the display devices of a video conference endpoint. The controller may detect, at a microphone array having a predetermined physical relationship with respect to a camera, audio emitted from one or more loudspeakers, each loudspeaker having a predetermined physical relationship with respect to at least one of one or more display devices in a conference room. The controller may then generate data representing a spatial relationship between the one or more display devices and the camera based on the detected audio. Finally, the controller may assign video sources received by the endpoint to each of the one or more display devices based on the data representing the spatial relationship and the content of each received video source, and may also assign outputs from multiple video cameras to an outgoing video stream based on the data representing the spatial relationship.
Type: Application
Filed: October 31, 2016
Publication date: May 3, 2018
Inventors: Glenn R. G. Aarrestad, Lennart Burenius, Jochen Christof Schirdewahn, Johan Ludvig Nielsen
-
Patent number: 9942513
Abstract: A system that automatically configures the behavior of the display devices of a video conference endpoint. The controller may detect, at a microphone array having a predetermined physical relationship with respect to a camera, audio emitted from one or more loudspeakers, each loudspeaker having a predetermined physical relationship with respect to at least one of one or more display devices in a conference room. The controller may then generate data representing a spatial relationship between the one or more display devices and the camera based on the detected audio. Finally, the controller may assign video sources received by the endpoint to each of the one or more display devices based on the data representing the spatial relationship and the content of each received video source, and may also assign outputs from multiple video cameras to an outgoing video stream based on the data representing the spatial relationship.
Type: Grant
Filed: October 31, 2016
Date of Patent: April 10, 2018
Assignee: Cisco Technology, Inc.
Inventors: Glenn R. G. Aarrestad, Lennart Burenius, Jochen Christof Schirdewahn, Johan Ludvig Nielsen
-
Patent number: 9674453
Abstract: At a video conference endpoint including a microphone array and a camera, different camera framings are established to frame different views of a talker based on different sets of pan, tilt, and focal length settings of the camera. Different video frames of the different views are captured using the different camera framings, respectively. A sound source direction of the talker relative to the microphone array in a fixed three-dimensional (3D) global coordinate system is determined for the different views based on sound from the talker detected by the microphone array. The sound source direction relative to the microphone array is converted to different sound source positions in planar coordinates relative to the different video frames based on the different sets of pan, tilt, and focal length settings, respectively. The different video frames, the sound, and the different sound source positions in planar coordinates are transmitted.
Type: Grant
Filed: October 26, 2016
Date of Patent: June 6, 2017
Assignee: Cisco Technology, Inc.
Inventors: Kristian Tangeland, Glenn R. G. Aarrestad, Johan Ludvig Nielsen, Øivind Stuan
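The direction-to-planar-coordinates conversion can be sketched with a pinhole-camera model: express the source direction relative to the camera's pan/tilt, then project with the focal length to pixel coordinates. This is one standard way to perform such a conversion, not necessarily the patent's method, and all parameter names are illustrative:

```python
import math

def source_to_image(az_deg, el_deg, pan_deg, tilt_deg, focal_px,
                    width_px, height_px):
    """Project a sound-source direction (azimuth/elevation in the global
    frame) onto the current camera framing as pixel coordinates.

    Subtracting pan and tilt rotates the direction into the camera frame;
    tan() and the focal length (in pixels) perform the pinhole projection.
    """
    az = math.radians(az_deg - pan_deg)   # direction relative to camera axis
    el = math.radians(el_deg - tilt_deg)
    x = width_px / 2 + focal_px * math.tan(az)
    y = height_px / 2 - focal_px * math.tan(el)
    return x, y
```

A source straight down the camera axis lands at the frame center regardless of zoom, while off-axis sources move outward as the focal length (zoom) increases.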