Patents by Inventor Anuj Dev
Anuj Dev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230290333
Abstract: The present disclosure relates to a hearing apparatus comprising: a bone conduction sensor configured to convert bone vibrations of voice sound information into a bone conduction signal; a signal processing unit configured to implement a synthetic speech generation process, the synthetic speech generation process implementing a speech model; wherein the synthetic speech generation process receives the bone conduction signal as a control input and outputs a synthetic speech signal.
Type: Application
Filed: October 25, 2021
Publication date: September 14, 2023
Applicant: GN Hearing A/S
Inventors: Andreas Tiefenau, Brian Dam Pedersen, Antonie Johannes Hendrikse, Anuj Dev
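The signal path this abstract describes (bone-conduction sensor, signal processing unit, synthetic speech output) can be sketched as follows. This is a minimal illustration, not the patented apparatus: the peak normalization step and the callable `speech_model` are assumptions standing in for the real trained speech model.

```python
def hearing_pipeline(bone_vibration_frames, speech_model):
    """Sketch of the abstract's signal path: bone-conduction frames are
    normalized and fed to a speech model as control input; the concatenated
    model output is the synthetic speech signal."""
    # Normalize by the global peak so the control input lies in [-1, 1].
    peak = max((abs(s) for frame in bone_vibration_frames for s in frame), default=1)
    synthetic = []
    for frame in bone_vibration_frames:
        control = [s / peak for s in frame]   # control input for the model
        synthetic.extend(speech_model(control))
    return synthetic
```

With an identity model standing in for a real generator, the pipeline simply emits the normalized frames in order.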
-
Patent number: 11073969
Abstract: A method of providing audiovisual content to a client device configured to be coupled to a display. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from the server toward the client device, according to the determined transmission mode, for display on the display.
Type: Grant
Filed: March 4, 2019
Date of Patent: July 27, 2021
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald Alexander Brockmann, Anuj Dev, Gerrit Hiddink
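The four-factor transmission-mode decision can be illustrated with a small sketch. The `ClientCaps` type, the mode names, and the specific rule below are hypothetical; the abstract only states that the mode is a function of these four inputs.

```python
from dataclasses import dataclass

@dataclass
class ClientCaps:
    supported_codecs: set      # e.g. {"h264", "hevc"} -- factor (i)
    can_overlay_images: bool   # factor (iv)

def choose_transmission_mode(caps, video_codec, full_screen):
    """Illustrative rule: pass the stream through untouched when the client
    can decode its codec (ii) and either plays it full screen (iii) or can
    overlay image data itself (iv); otherwise composite on the server."""
    if video_codec in caps.supported_codecs and (full_screen or caps.can_overlay_images):
        return "passthrough"
    return "server_composite"
```

The key design point the abstract implies is that the decision is made per selection, so the same client can receive different modes for different content items.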
-
Publication number: 20200004408
Abstract: A method of providing audiovisual content to a client device configured to be coupled to a display. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from the server toward the client device, according to the determined transmission mode, for display on the display.
Type: Application
Filed: March 4, 2019
Publication date: January 2, 2020
Inventors: Ronald Alexander Brockmann, Anuj Dev, Gerrit Hiddink
-
Patent number: 10275128
Abstract: A method of providing audiovisual content to a client device configured to be coupled to a display. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from the server toward the client device, according to the determined transmission mode, for display on the display.
Type: Grant
Filed: March 17, 2014
Date of Patent: April 30, 2019
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald Alexander Brockmann, Anuj Dev, Gerrit Hiddink
-
Patent number: 10200744
Abstract: A method of generating a blended output including an interactive user interface and one or more supplemental images. At a client device, a video stream containing an interactive user interface is received from a server using a first data communications channel configured to communicate video content, and a command is transmitted to the server that relates to a user input received through the interactive user interface. In response to the transmitting, an updated user interface is received using the first data communications channel, and one or more supplemental images are received using a second data communications channel. Each supplemental image is associated with a corresponding transparency coefficient. The updated user interface and the one or more supplemental images are blended according to the transparency coefficient for each supplemental image to generate a blended output, and the blended output is transmitted toward the display device for display thereon.
Type: Grant
Filed: April 26, 2016
Date of Patent: February 5, 2019
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald A. Brockmann, Onne Gorter, Anuj Dev, Gerritt Hiddink
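The per-image transparency blending can be sketched as standard "over" compositing applied image by image. The per-pixel tuple representation and the `blend` helper are assumptions for illustration; the abstract only says each supplemental image carries a transparency coefficient used in the blend.

```python
def blend(ui_pixel, images):
    """Blend a UI pixel (r, g, b) with supplemental image pixels, each paired
    with a transparency coefficient alpha in [0, 1]. Applied in order, each
    image is composited over the result so far."""
    out = ui_pixel
    for pixel, alpha in images:
        out = tuple(round(alpha * p + (1 - alpha) * o) for p, o in zip(pixel, out))
    return out
```

With no supplemental images the UI pixel passes through unchanged; a fully opaque image (alpha = 1.0) replaces it entirely.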
-
Publication number: 20170055023
Abstract: A method of generating a blended output including an interactive user interface and one or more supplemental images. At a client device, a video stream containing an interactive user interface is received from a server using a first data communications channel configured to communicate video content, and a command is transmitted to the server that relates to a user input received through the interactive user interface. In response to the transmitting, an updated user interface is received using the first data communications channel, and one or more supplemental images are received using a second data communications channel. Each supplemental image is associated with a corresponding transparency coefficient. The updated user interface and the one or more supplemental images are blended according to the transparency coefficient for each supplemental image to generate a blended output, and the blended output is transmitted toward the display device for display thereon.
Type: Application
Filed: April 26, 2016
Publication date: February 23, 2017
Inventors: Ronald A. Brockmann, Onne Gorter, Anuj Dev, Gerritt Hiddink
-
Patent number: 9326047
Abstract: A method of combining an interactive user interface with one or more supplemental images to generate a blended output. At a client device remote from a server, a video stream that contains an interactive user interface is received from the server using a first data communications channel configured to communicate video content, and a command that relates to the interactive user interface is transmitted to the server. In response to the transmitting, an updated user interface is received from the server using the first data communications channel, and one or more supplemental images for supplementing the interactive user interface are received using a second data communications channel different from the first data communications channel. The updated user interface and the one or more supplemental images are blended to generate a blended output, which is transmitted toward the display device for display thereon.
Type: Grant
Filed: June 6, 2014
Date of Patent: April 26, 2016
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald A. Brockmann, Onne Gorter, Anuj Dev, Gerritt Hiddink
-
Patent number: 9294785
Abstract: A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene state for the application is compared to a previous scene state, wherein each scene state includes a plurality of objects. A video construction engine determines if properties of one or more objects have changed based upon a comparison of the scene states. If properties of one or more objects have changed based upon the comparison, the delta between the object's states is determined, and this information is used by a fragment encoding module if the fragment has not been encoded before. The information is used to define, for example, the motion vectors for use by the fragment encoding module in construction of the fragments to be used by the stitching module to build the composited video frame sequence.
Type: Grant
Filed: April 25, 2014
Date of Patent: March 22, 2016
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald Alexander Brockmann, Anuj Dev, Maarten Hoeben
-
Patent number: 9219922
Abstract: A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene graph state for the application is compared to a previous scene graph state, wherein each scene graph state includes a plurality of hierarchical nodes that represent one or more objects at each node. A video construction engine determines if one or more objects have moved based upon a comparison of the scene graph states. If one or more objects have moved based upon the scene graph comparison, motion information about the objects is determined and forwarded to a stitcher module. The motion information is used to define motion vectors for use by the stitcher module in construction of the composited video frame sequence.
Type: Grant
Filed: June 6, 2013
Date of Patent: December 22, 2015
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald Alexander Brockmann, Anuj Dev, Maarten Hoeben
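The scene-graph comparison step can be sketched as a diff over node positions. The flat dict-of-positions representation is an assumption for brevity; the abstract describes hierarchical node states, but the core idea (compare states, emit a motion vector per moved object) is the same.

```python
def motion_vectors(prev_scene, curr_scene):
    """Compare two scene states, each a dict mapping node id -> (x, y)
    position, and return a motion vector for every object present in both
    states that has moved. Newly appearing nodes have no prior position,
    so they produce no motion vector."""
    vectors = {}
    for node_id, (x, y) in curr_scene.items():
        if node_id in prev_scene:
            px, py = prev_scene[node_id]
            if (px, py) != (x, y):
                vectors[node_id] = (x - px, y - py)
    return vectors
```

A stitcher consuming this output could reuse encoded content for unmoved objects and encode only motion-compensated fragments for the rest.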
-
Patent number: 9204203
Abstract: Systems and methods are provided for reducing and controlling playback latency in an unmanaged, buffered data network. A delay cost function is determined, the function representing the effect of playback latency on end user experience. An encoder transmits audiovisual data through the network to a client device. Network latency is measured, and the delay cost function is evaluated to establish an encoding bitrate for the encoder. The encoding of the audiovisual data is altered in response to dynamic network conditions, thereby controlling end-to-end playback latency of the system, which is represented by the playout length of data buffered between the encoder and the client device.
Type: Grant
Filed: April 3, 2012
Date of Patent: December 1, 2015
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald A. Brockmann, Anuj Dev, Gerrit Hiddink, Joshua Dahlby, Lena Y. Pavlovskaia
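The bitrate-selection step can be sketched as evaluating a delay cost for each candidate bitrate and taking the cheapest. The quadratic latency penalty, the rough queueing estimate, and the quality term below are invented for illustration; the abstract only states that a delay cost function capturing the user-experience impact of latency is evaluated to set the encoding bitrate.

```python
def pick_bitrate(measured_latency_ms, candidates, throughput_kbps, target_ms=200):
    """Pick the candidate encoding bitrate (kbps) with the lowest combined
    cost of projected latency and lost picture quality. All constants are
    illustrative assumptions."""
    def cost(bitrate_kbps):
        # Rough queueing estimate: extra delay grows as the bitrate
        # approaches the measured link throughput.
        queue_ms = measured_latency_ms * bitrate_kbps / max(throughput_kbps - bitrate_kbps, 1)
        latency_cost = ((measured_latency_ms + queue_ms) / target_ms) ** 2
        quality_cost = 1.0 / bitrate_kbps  # lower bitrate -> worse picture
        return latency_cost + 1000 * quality_cost
    return min(candidates, key=cost)
```

Under this sketch a congested, high-latency network pushes the controller toward a lower bitrate, while a fast network lets quality dominate the trade-off.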
-
Patent number: 9123084
Abstract: Systems and methods are provided to cache encoded graphical objects that may be subsequently combined with other encoded video data to form a data stream decodable by a client device according to a format specification. Paint instructions relating to a graphical object are sent from a layout engine to a rendering library. A shim intercepts these instructions and determines whether the graphical object has already been rendered and encoded. If so, a cached copy of the object is transmitted to the client device. If not, the shim transparently passes the instructions to the rendering library, and the object is rendered, encoded, and cached. Hash values are used for efficiency. Methods are disclosed to detect and cache animations, and to cut and splice cached objects into encoded video data.
Type: Grant
Filed: April 12, 2012
Date of Patent: September 1, 2015
Assignee: ActiveVideo Networks, Inc.
Inventors: Ronald A. Brockmann, Anuj Dev, Onne Gorter, Gerrit Hiddink, Maarten Hoeben
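The shim's cache lookup can be sketched by hashing the intercepted paint instructions, matching the abstract's note that hash values are used for efficiency. The `RenderCacheShim` class name and the `render_and_encode` callable are hypothetical stand-ins for the real rendering library interface.

```python
import hashlib

class RenderCacheShim:
    """Sits between a layout engine and a rendering library: intercepts paint
    instructions, and if an identical instruction sequence was seen before,
    returns the cached encoded bytes instead of re-rendering."""

    def __init__(self, render_and_encode):
        self.render_and_encode = render_and_encode  # the real library call
        self.cache = {}
        self.hits = 0

    def paint(self, instructions):
        # Hash the instruction sequence so lookups avoid comparing full
        # instruction lists or rendered pixels.
        key = hashlib.sha256(repr(instructions).encode()).hexdigest()
        if key in self.cache:
            self.hits += 1
        else:
            # Cache miss: pass through transparently, then cache the result.
            self.cache[key] = self.render_and_encode(instructions)
        return self.cache[key]
```

Because the shim keys on the instructions rather than the output, identical UI elements repeated across frames cost one render plus cheap hash lookups thereafter.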
-
Publication number: 20140362930
Abstract: A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene state for the application is compared to a previous scene state, wherein each scene state includes a plurality of objects. A video construction engine determines if properties of one or more objects have changed based upon a comparison of the scene states. If properties of one or more objects have changed based upon the comparison, the delta between the object's states is determined, and this information is used by a fragment encoding module if the fragment has not been encoded before. The information is used to define, for example, the motion vectors for use by the fragment encoding module in construction of the fragments to be used by the stitching module to build the composited video frame sequence.
Type: Application
Filed: April 25, 2014
Publication date: December 11, 2014
Applicant: ActiveVideo Networks, Inc.
Inventors: Ronald Alexander Brockmann, Anuj Dev, Maarten Hoeben
-
Publication number: 20140362086
Abstract: A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene graph state for the application is compared to a previous scene graph state, wherein each scene graph state includes a plurality of hierarchical nodes that represent one or more objects at each node. A video construction engine determines if one or more objects have moved based upon a comparison of the scene graph states. If one or more objects have moved based upon the scene graph comparison, motion information about the objects is determined and forwarded to a stitcher module. The motion information is used to define motion vectors for use by the stitcher module in construction of the composited video frame sequence.
Type: Application
Filed: June 6, 2013
Publication date: December 11, 2014
Inventors: Ronald Alexander Brockmann, Anuj Dev, Maarten Hoeben
-
Publication number: 20140366057
Abstract: A method of combining an interactive user interface with one or more supplemental images to generate a blended output. At a client device remote from a server, a video stream that contains an interactive user interface is received from the server using a first data communications channel configured to communicate video content, and a command that relates to the interactive user interface is transmitted to the server. In response to the transmitting, an updated user interface is received from the server using the first data communications channel, and one or more supplemental images for supplementing the interactive user interface are received using a second data communications channel different from the first data communications channel. The updated user interface and the one or more supplemental images are blended to generate a blended output, which is transmitted toward the display device for display thereon.
Type: Application
Filed: June 6, 2014
Publication date: December 11, 2014
Inventors: Ronald A. Brockmann, Onne Gorter, Anuj Dev, Gerritt Hiddink
-
Publication number: 20140289627
Abstract: A method of providing audiovisual content to a client device configured to be coupled to a display. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from the server toward the client device, according to the determined transmission mode, for display on the display.
Type: Application
Filed: March 17, 2014
Publication date: September 25, 2014
Inventors: Ronald Alexander Brockmann, Anuj Dev, Gerrit Hiddink
-
Publication number: 20130272394
Abstract: Systems and methods are provided to cache encoded graphical objects that may be subsequently combined with other encoded video data to form a data stream decodable by a client device according to a format specification. Paint instructions relating to a graphical object are sent from a layout engine to a rendering library. A shim intercepts these instructions and determines whether the graphical object has already been rendered and encoded. If so, a cached copy of the object is transmitted to the client device. If not, the shim transparently passes the instructions to the rendering library, and the object is rendered, encoded, and cached. Hash values are used for efficiency. Methods are disclosed to detect and cache animations, and to cut and splice cached objects into encoded video data.
Type: Application
Filed: April 12, 2012
Publication date: October 17, 2013
Applicant: ActiveVideo Networks, Inc.
Inventors: Ronald A. Brockmann, Anuj Dev, Onne Gorter, Gerrit Hiddink, Maarten Hoeben
-
Publication number: 20120257671
Abstract: Systems and methods are provided for reducing and controlling playback latency in an unmanaged, buffered data network. A delay cost function is determined, the function representing the effect of playback latency on end user experience. An encoder transmits audiovisual data through the network to a client device. Network latency is measured, and the delay cost function is evaluated to establish an encoding bitrate for the encoder. The encoding of the audiovisual data is altered in response to dynamic network conditions, thereby controlling end-to-end playback latency of the system, which is represented by the playout length of data buffered between the encoder and the client device.
Type: Application
Filed: April 3, 2012
Publication date: October 11, 2012
Applicant: ActiveVideo Networks, Inc.
Inventors: Ronald A. Brockmann, Anuj Dev, Gerrit Hiddink, Joshua Dahlby, Lena Y. Pavlovskaia