Patents by Inventor Eugene Yasman
Eugene Yasman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11683251
Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
Type: Grant
Filed: March 21, 2022
Date of Patent: June 20, 2023
Assignee: Intel Corporation
Inventors: Eugene Yasman, Nir Gerber, Sumit Mohan, Jean-Pierre Giacalone
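The buffer-mode switching described in this abstract amounts to a small state machine with hysteresis: the producer spills from a fast local buffer to a larger remote buffer when the local buffer fills and returns once the remote buffer drains below a low-water mark, while the consumer enters a catch-up mode when the local buffer exceeds a high-water mark. The C sketch below illustrates that control flow under assumed buffer sizes, thresholds, and names (LOCAL_CAP, LOW_MARK, HIGH_MARK, and so on); it is an illustration of the idea, not the claimed implementation.

```c
/* Minimal single-threaded sketch of the buffer-mode switching summarized above.
 * All names, capacities, and thresholds are illustrative assumptions, not taken
 * from the patent claims. */
#include <stdio.h>
#include <stdbool.h>

#define LOCAL_CAP   8   /* assumed small, fast local buffer */
#define REMOTE_CAP  64  /* assumed larger, slower remote buffer */
#define LOW_MARK    2   /* remote buffer below this -> producer returns to local mode */
#define HIGH_MARK   6   /* local buffer above this -> consumer enters catch-up mode */

typedef enum { LOCAL_MODE, REMOTE_MODE } buf_mode_t;

typedef struct {
    int local_count;
    int remote_count;
    buf_mode_t producer_mode;
    buf_mode_t consumer_mode;
    bool catch_up;
} stream_t;

/* Producer: write to the local buffer until it fills, then spill to the remote
 * buffer; switch back once the remote buffer drains below the low threshold. */
static void produce(stream_t *s)
{
    if (s->producer_mode == LOCAL_MODE && s->local_count == LOCAL_CAP)
        s->producer_mode = REMOTE_MODE;
    else if (s->producer_mode == REMOTE_MODE && s->remote_count < LOW_MARK)
        s->producer_mode = LOCAL_MODE;

    if (s->producer_mode == LOCAL_MODE && s->local_count < LOCAL_CAP)
        s->local_count++;
    else if (s->remote_count < REMOTE_CAP)
        s->remote_count++;
}

/* Consumer: drain whichever buffer matches its mode; if the local buffer grows
 * past the high threshold, enter a catch-up mode that drains more per tick. */
static void consume(stream_t *s)
{
    s->catch_up = (s->local_count > HIGH_MARK);
    int drains = s->catch_up ? 2 : 1;

    while (drains-- > 0) {
        if (s->consumer_mode == LOCAL_MODE && s->local_count > 0)
            s->local_count--;
        else if (s->remote_count > 0) {
            s->consumer_mode = REMOTE_MODE;
            s->remote_count--;
        } else {
            s->consumer_mode = LOCAL_MODE;
        }
    }
}

int main(void)
{
    stream_t s = { 0, 0, LOCAL_MODE, LOCAL_MODE, false };
    for (int tick = 0; tick < 20; tick++) {
        produce(&s);
        produce(&s);   /* produce faster than we consume to force a spill */
        consume(&s);
        printf("tick %2d: local=%d remote=%d catch_up=%d\n",
               tick, s.local_count, s.remote_count, s.catch_up);
    }
    return 0;
}
```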
-
Publication number: 20220210037
Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
Type: Application
Filed: March 21, 2022
Publication date: June 30, 2022
Inventors: Eugene Yasman, Nir Gerber, Sumit Mohan, Jean-Pierre Giacalone
-
Patent number: 11283700
Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
Type: Grant
Filed: November 13, 2019
Date of Patent: March 22, 2022
Assignee: Intel Corporation
Inventors: Eugene Yasman, Nir Gerber, Sumit Mohan, Jean-Pierre Giacalone
-
Patent number: 10986044
Abstract: In some examples, a computing device for processing data streams includes storage to store instructions and a processor to execute the instructions. The processor is to execute the instructions to receive respective data streams provided from a plurality of data producer sensors. The processor is also to execute the instructions to stagger a time of triggering of a first of the plurality of data producer sensors relative to a time of triggering of a second of the plurality of data producer sensors to minimize a concurrency of data frames of the data stream received from the first data producer sensor and data frames of the data stream received from the second of the plurality of data producer sensors. The processor is also to execute the instructions to process the data streams from the plurality of data producer sensors in a time-shared manner. The processor is also to execute the instructions to provide the processed data streams to one or more consumers of the processed data streams.
Type: Grant
Filed: September 28, 2018
Date of Patent: April 20, 2021
Assignee: Intel Corporation
Inventors: Eugene Yasman, Liron Ain-Kedem
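The staggering described here can be pictured as spreading each sensor's trigger time across a shared frame period so that frame arrivals from different sensors do not coincide, leaving the processor free to service the streams one after another. The sketch below assumes a fixed frame period, an even spread of trigger offsets, and a simple round-robin processing order; all of these are illustrative choices rather than details from the patent.

```c
/* Illustrative sketch of staggering sensor trigger times so frames from different
 * sensors arrive at different points in the frame period and can be processed in
 * a time-shared (round-robin) fashion. Sensor count, frame period, and the
 * processing model are assumptions for illustration only. */
#include <stdio.h>

#define NUM_SENSORS      4
#define FRAME_PERIOD_US  33333   /* assumed ~30 fps frame period */

int main(void)
{
    /* Spread trigger times evenly across the frame period so frame arrivals
     * from different sensors do not pile up at the same instant. */
    unsigned trigger_offset_us[NUM_SENSORS];
    for (int i = 0; i < NUM_SENSORS; i++)
        trigger_offset_us[i] = (unsigned)((long)FRAME_PERIOD_US * i / NUM_SENSORS);

    for (int i = 0; i < NUM_SENSORS; i++)
        printf("sensor %d triggers at +%u us into each frame period\n",
               i, trigger_offset_us[i]);

    /* Time-shared processing: in each frame period the processor handles the
     * sensors one after another, in the order their frames become available. */
    for (int frame = 0; frame < 3; frame++)
        for (int i = 0; i < NUM_SENSORS; i++)
            printf("frame %d: processing stream from sensor %d at ~%u us\n",
                   frame, i,
                   (unsigned)(frame * FRAME_PERIOD_US + trigger_offset_us[i]));
    return 0;
}
```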
-
Patent number: 10915258
Abstract: Systems and techniques for bi-directional negotiation for dynamic data chunking are described herein. A set of available features for a memory subsystem may be identified, including the latency of buffer locations of the memory subsystem. An indication of a first latency requirement of a first data consumer and a second latency requirement of a second data consumer may be obtained. A first buffer location of the memory subsystem for a data stream based on the first latency requirement may be negotiated with the first data consumer. A second buffer location of the memory subsystem for the data stream based on the second latency requirement may be negotiated with the second data consumer. An indication of the first buffer location may be provided to the first data consumer and an indication of the second buffer location may be provided to the second data consumer.
Type: Grant
Filed: December 28, 2017
Date of Patent: February 9, 2021
Assignee: Intel Corporation
Inventors: Eugene Yasman, Liron Ain-Kedem, Nir Gerber
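One way to read this negotiation is: the memory subsystem advertises buffer locations with known access latencies, each data consumer states a latency requirement, and a location satisfying that requirement is selected and reported back to the consumer. The sketch below assumes three example locations, example latencies, and a "slowest location that still meets the requirement" selection policy; all of these are illustrative assumptions, not details taken from the claims.

```c
/* Hedged sketch of the negotiation idea: the memory subsystem advertises buffer
 * locations with different access latencies, each data consumer states a latency
 * requirement, and the negotiation picks a location that satisfies it. The
 * structures, latencies, and selection policy are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    const char *name;
    unsigned latency_ns;   /* assumed access latency of this buffer location */
} buf_location_t;

/* Advertised features of the memory subsystem: a cache-resident buffer,
 * on-package memory, and system DRAM (example values only). */
static const buf_location_t locations[] = {
    { "cache-resident buffer", 50 },
    { "on-package memory",     200 },
    { "system DRAM",           800 },
};

/* Pick the highest-latency location that still meets the consumer's requirement,
 * so the lowest-latency locations stay free for consumers that need them. */
static const buf_location_t *negotiate(unsigned required_latency_ns)
{
    const buf_location_t *best = NULL;
    for (size_t i = 0; i < sizeof locations / sizeof locations[0]; i++)
        if (locations[i].latency_ns <= required_latency_ns &&
            (best == NULL || locations[i].latency_ns > best->latency_ns))
            best = &locations[i];
    return best;   /* NULL if no location can satisfy the requirement */
}

int main(void)
{
    unsigned consumer_reqs[] = { 100, 500 };   /* two consumers, two requirements */
    for (int c = 0; c < 2; c++) {
        const buf_location_t *loc = negotiate(consumer_reqs[c]);
        printf("consumer %d (needs <= %u ns): %s\n", c, consumer_reqs[c],
               loc ? loc->name : "no suitable buffer location");
    }
    return 0;
}
```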
-
Publication number: 20200092185
Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
Type: Application
Filed: November 13, 2019
Publication date: March 19, 2020
Inventors: Eugene Yasman, Nir Gerber, Sumit Mohan, Jean-Pierre Giacalone
-
Patent number: 10511509
Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
Type: Grant
Filed: April 7, 2017
Date of Patent: December 17, 2019
Assignee: Intel Corporation
Inventors: Eugene Yasman, Nir Gerber, Sumit Mohan, Jean-Pierre Giacalone
-
Publication number: 20190044891
Abstract: In some examples, a computing device for processing data streams includes storage to store instructions and a processor to execute the instructions. The processor is to execute the instructions to receive respective data streams provided from a plurality of data producer sensors. The processor is also to execute the instructions to stagger a time of triggering of a first of the plurality of data producer sensors relative to a time of triggering of a second of the plurality of data producer sensors to minimize a concurrency of data frames of the data stream received from the first data producer sensor and data frames of the data stream received from the second of the plurality of data producer sensors. The processor is also to execute the instructions to process the data streams from the plurality of data producer sensors in a time-shared manner. The processor is also to execute the instructions to provide the processed data streams to one or more consumers of the processed data streams.
Type: Application
Filed: September 28, 2018
Publication date: February 7, 2019
Applicant: INTEL CORPORATION
Inventors: Eugene Yasman, Liron Ain-Kedem
-
Publication number: 20190042123
Abstract: Systems and techniques for bi-directional negotiation for dynamic data chunking are described herein. A set of available features for a memory subsystem may be identified, including the latency of buffer locations of the memory subsystem. An indication of a first latency requirement of a first data consumer and a second latency requirement of a second data consumer may be obtained. A first buffer location of the memory subsystem for a data stream based on the first latency requirement may be negotiated with the first data consumer. A second buffer location of the memory subsystem for the data stream based on the second latency requirement may be negotiated with the second data consumer. An indication of the first buffer location may be provided to the first data consumer and an indication of the second buffer location may be provided to the second data consumer.
Type: Application
Filed: December 28, 2017
Publication date: February 7, 2019
Inventors: Eugene Yasman, Liron Ain-Kedem, Nir Gerber
-
Publication number: 20180295039
Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
Type: Application
Filed: April 7, 2017
Publication date: October 11, 2018
Inventors: Eugene Yasman, Nir Gerber, Sumit Mohan, Jean-Pierre Giacalone
-
Publication number: 20150178032
Abstract: Aspects disclosed in the detailed description include apparatuses and methods for using remote multimedia sink devices. Exemplary aspects of the present disclosure provide a multimedia remote display system comprising a multimedia source device configured to discover a remote multimedia sink device, which has a graphics processing unit (GPU) and supports a wireless network interface. The multimedia source device is also configured to handle the remote multimedia sink device as a local high-speed peripheral device, and opportunistically apply compression to a multimedia stream before rendering the multimedia stream on the remote multimedia sink device. By handling the remote multimedia sink device as a local high-speed peripheral device, and opportunistically applying compression to the multimedia stream, high-definition (HD) multimedia content may be rendered on the remote multimedia sink device without adversely impacting quality of the HD multimedia content.
Type: Application
Filed: November 5, 2014
Publication date: June 25, 2015
Inventors: Alexander Gantman, Eugene Yasman
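The "opportunistic" compression described in this abstract can be illustrated with a simple policy: send the stream uncompressed whenever the wireless link can carry the raw bitrate, and compress only when it cannot, so quality is preserved whenever the link allows. The bitrates, headroom factor, and decision rule in the sketch below are assumptions for illustration only, not the method claimed in the application.

```c
/* Illustrative sketch of opportunistic compression for a remote sink handled
 * like a local peripheral: compress the multimedia stream only when the raw
 * stream would exceed the wireless link's available bandwidth. All values and
 * the decision policy are assumptions, not taken from the publication. */
#include <stdio.h>
#include <stdbool.h>

/* Decide whether to compress before sending frames to the remote sink. */
static bool should_compress(unsigned raw_kbps, unsigned link_kbps)
{
    /* Leave ~10% headroom on the link; compress only when the raw stream
     * would not fit comfortably, so quality is preserved whenever possible. */
    return raw_kbps > (link_kbps / 10) * 9;
}

int main(void)
{
    unsigned raw_kbps = 1500000;   /* assumed raw HD stream bitrate (~1.5 Gbps) */
    unsigned link_samples_kbps[] = { 2000000, 900000, 1400000 };

    for (int i = 0; i < 3; i++) {
        bool compress = should_compress(raw_kbps, link_samples_kbps[i]);
        printf("link %u kbps: %s stream to remote sink\n",
               link_samples_kbps[i],
               compress ? "compress" : "send uncompressed");
    }
    return 0;
}
```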