Patents by Inventor Victor Lee
Victor Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10855635
Abstract: An electronic mail (email) message is received at an email transport infrastructure and has a traffic type identifier identifying a traffic type. Function processing logic in the email transport infrastructure conditionally processes the email message based on the traffic type.
Type: Grant
Filed: November 14, 2016
Date of Patent: December 1, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Krishna Kumar Parthasarathy, Wayne M. Cranston, William J. Whalen, Neelamadhaba Mahapatro, Wilbert De Graaf, Piyush Gupta, Victor Lee, Mingfeng Xiong
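The conditional processing the abstract describes can be sketched as a dispatch on the traffic type identifier. This is an illustrative model only; the traffic types and handler names below are invented, not taken from the patent.

```python
def process_message(message: dict, handlers: dict) -> str:
    """Dispatch an email message to processing logic selected by its
    traffic type identifier, falling back to default handling."""
    traffic_type = message.get("traffic_type", "default")
    handler = handlers.get(traffic_type, handlers["default"])
    return handler(message)

# Hypothetical handlers keyed by traffic type (illustrative names).
HANDLERS = {
    "bulk": lambda m: "queued-low-priority",
    "transactional": lambda m: "delivered-immediately",
    "default": lambda m: "standard-pipeline",
}

result = process_message({"traffic_type": "bulk", "body": "..."}, HANDLERS)
```

Messages carrying an unrecognized or missing traffic type fall through to the default pipeline rather than failing.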
-
Patent number: 10846041
Abstract: An example electronic device includes: an audio and video (AV) processor, a computer subsystem, a control mechanism, and a media playing device. The AV processor includes an AV input port to receive external AV signals from an external media source. The computer subsystem is to provide internal AV signals to the AV processor. The control mechanism is to control the AV processor to select between the external AV signals and the internal AV signals, and enable the AV processor to transmit selected AV signals to the media playing device. The media playing device is to play the selected AV signals.
Type: Grant
Filed: May 18, 2015
Date of Patent: November 24, 2020
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Tao-Sheng Chu, Maureen Min-Chaun Lu, Yi-Ling Lo, Victor Lee, Chan-Liang Lin, Candy Wu
-
Publication number: 20200334161
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
Type: Application
Filed: July 6, 2020
Publication date: October 22, 2020
Applicant: Intel Corporation
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
-
Patent number: 10768989
Abstract: Methods and apparatus to provide virtualized vector processing are described. In one embodiment, one or more operations corresponding to a virtual vector request are distributed to one or more processor cores for execution.
Type: Grant
Filed: January 16, 2018
Date of Patent: September 8, 2020
Assignee: Intel Corporation
Inventors: Anthony Nguyen, Engin Ipek, Victor Lee, Daehyun Kim, Mikhail Smelyanskiy
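The distribution step in the abstract — splitting the operations of one virtual vector request across multiple cores — can be sketched with a worker pool standing in for the processor cores. This is a software analogy only; the chunking scheme and element-wise add are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def virtual_vector_add(a: list, b: list, num_cores: int = 4) -> list:
    """Split an element-wise add across `num_cores` workers (standing in
    for processor cores) and reassemble the partial results in order."""
    n = len(a)
    chunk = (n + num_cores - 1) // num_cores           # ceil-divide the work
    spans = [(i, min(i + chunk, n)) for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        # map() preserves input order, so results reassemble correctly
        parts = pool.map(lambda s: [a[i] + b[i] for i in range(*s)], spans)
    out = []
    for part in parts:
        out.extend(part)
    return out
```

Because `map` yields results in submission order, the combined output matches a sequential execution of the same request.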
-
Patent number: 10719317
Abstract: Methods and apparatuses relating to a vector instruction with a register operand with an elemental offset are described. In one embodiment, a hardware processor includes a decode unit to decode a vector instruction with a register operand with an elemental offset to access a first number of elements in a register specified by the register operand, wherein the first number is a total number of elements in the register minus the elemental offset, access a second number of elements in a next logical register, wherein the second number is the elemental offset, and combine the first number of elements and the second number of elements as a data vector, and an execution unit to execute the vector instruction on the data vector.
Type: Grant
Filed: June 8, 2018
Date of Patent: July 21, 2020
Assignee: Intel Corporation
Inventors: Victor Lee, Ugonna Echeruo, George Chrysos, Naveen Mellempudi
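The element selection the abstract describes — the upper (total minus offset) elements of the named register concatenated with the first (offset) elements of the next logical register — can be modeled directly on Python lists. This is a behavioral sketch under that reading of the abstract, not the hardware semantics.

```python
def vector_with_offset(registers: list, reg_index: int, offset: int) -> list:
    """Model the described data-vector formation: combine the elements of
    the specified register from `offset` onward with the first `offset`
    elements of the next logical register."""
    reg = registers[reg_index]
    nxt = registers[reg_index + 1]
    first = reg[offset:]     # total number of elements minus the offset
    second = nxt[:offset]    # exactly `offset` elements
    return first + second

regs = [[0, 1, 2, 3], [4, 5, 6, 7]]
vec = vector_with_offset(regs, 0, 1)
```

With an offset of 1, the data vector is the last three elements of register 0 followed by the first element of register 1.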
-
Patent number: 10705967
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
Type: Grant
Filed: October 15, 2018
Date of Patent: July 7, 2020
Assignee: Intel Corporation
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
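The two-stage compilation flow the abstract describes — model to intermediate DSL, DSL to one instruction set per layer, each set assigned to an SRAM array — can be outlined as a toy pipeline. Every function and string format below is a stand-in invented for illustration; the real compilers and ISA are not public in this listing.

```python
def compile_to_dsl(model_layers: list) -> list:
    """Stand-in for the high-level compiler: model layers -> intermediate DSL ops."""
    return [f"dsl_layer({name})" for name in model_layers]

def compile_to_isa(dsl_ops: list) -> list:
    """Stand-in for the low-level compiler: one instruction set per DSL op,
    so each set corresponds to a single layer of the model."""
    return [[f"isa_op<{op}>"] for op in dsl_ops]

def assign_to_sram_arrays(instruction_sets: list) -> dict:
    """Map each per-layer instruction set to a distinct SRAM array index
    for in-memory execution."""
    return {array_id: iset for array_id, iset in enumerate(instruction_sets)}

model = ["conv1", "relu1", "fc1"]
schedule = assign_to_sram_arrays(compile_to_isa(compile_to_dsl(model)))
```

The key invariant the sketch preserves is the one the abstract states: a one-to-one correspondence between model layers, instruction sets, and SRAM arrays.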
-
Publication number: 20200179933
Abstract: A digital microfluidics (DMF) device can be used to extract plasma from whole blood and manipulate the extracted plasma. The device can have a plasma separation membrane disposed between a sample inlet and sample outlet that leads into the DMF device. Once the plasma contacts the actuation electrodes of the DMF device, the plasma can be actively extracted from the whole blood sample by actuating the actuation electrodes to pull the plasma through the plasma separation membrane.
Type: Application
Filed: July 23, 2018
Publication date: June 11, 2020
Inventors: Mais J. Jebrail, Jorge Abraham Soto Moreno, Victor Lee
-
Patent number: 10656944
Abstract: Methods and apparatuses relating to a prefetch instruction to prefetch a multidimensional block of elements from a multidimensional array into a cache. In one embodiment, a hardware processor includes a decoder to decode a prefetch instruction to prefetch a multidimensional block of elements from a multidimensional array into a cache, wherein at least one operand of the prefetch instruction is to indicate a system memory address of an element of the multidimensional block of elements, a stride of the multidimensional block of elements, and boundaries of the multidimensional block of elements, and an execution unit to execute the prefetch instruction to generate system memory addresses of the other elements of the multidimensional block of elements, and load the multidimensional block of elements into the cache from the system memory addresses.
Type: Grant
Filed: June 8, 2018
Date of Patent: May 19, 2020
Assignee: Intel Corporation
Inventors: Victor Lee, Mikhail Smelyanskiy, Alexander Heinecke
-
Publication number: 20200061621Abstract: Digital microfluidic (DMF) apparatuses, systems, devices and associated fluid manipulation and extraction devices, and methods of using them are presented. The devices may be useful for analysis of clinical, laboratory, chemical, or biological samples. A fluid application and extraction interface device may include a waste reservoir with a fluid trap and a transfer conduit extending through the waste reservoir so that fluid may pass from the transfer conduit into the waste reservoir and be trapped within the waste chamber. A transfer conduit may be configured to double back on itself and to hold a fluid sample. A DMF apparatus may be configured to hold and process large sample volumes.Type: ApplicationFiled: June 27, 2019Publication date: February 27, 2020Inventors: Mais J. JEBRAIL, Alexandra J. Cho, Victor Lee
-
Patent number: 10476659
Abstract: A system can include a digital oversampler configured to oversample an input data stream; a rate generator configured to select a frequency that is not less than an expected frequency of the input data stream; a rate generator clock of the rate generator configured to output a clock signal that has the selected frequency; a sample receiver configured to receive at least one sample of the input data stream from the digital oversampler; a sample counter configured to be incremented by each received sample responsive to a determination that the sample receiver has received at least one sample of the input data stream from the digital oversampler; a sample rate converter configured to accumulate samples from the sample receiver at the rate of a “toothless” clock signal, wherein the sample counter is configured to be decremented by the “toothless” clock signal at the selected frequency responsive to a determination that the sample receiver has not received at least one sample of the input data stream from the digital oversampler…
Type: Grant
Filed: July 30, 2018
Date of Patent: November 12, 2019
Assignee: Avnera Corporation
Inventors: Samuel J. Peters, II, Eric P. Etheridge, Victor Lee Hansen, Alexander C. Stange
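The counter behavior the abstract describes — incremented by each received sample, decremented by the "toothless" clock tick when no sample arrived — can be modeled as a small event loop. The event encoding below (a "tick" event is only emitted when no sample accompanied it) is an illustrative assumption about how the two conditions interleave.

```python
def run_sample_counter(events: list) -> list:
    """Process a stream of 'sample' / 'tick' events and return the counter
    value after each event. A received sample increments the counter; a
    toothless-clock tick with no accompanying sample decrements it."""
    count = 0
    trace = []
    for event in events:
        if event == "sample":
            count += 1   # sample receiver got a sample from the oversampler
        elif event == "tick":
            count -= 1   # toothless clock fired with no sample received
        trace.append(count)
    return trace

trace = run_sample_counter(["sample", "sample", "tick", "sample"])
```

When the input stream runs at the expected rate, samples and ticks roughly alternate and the counter hovers near a steady value; a persistent drift in either direction signals a rate mismatch.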
-
Publication number: 20190232148
Abstract: The present invention discloses an exercise device used to perform squat exercises. The device comprises a support bar, a height-adjustable support tube or column, a base, an electronic unit, and a seat. The support bar is positioned on a top portion of the support tube via a set of springs. The springs are configured to reposition the support bar to a horizontal position after the user leaves the seat and rises back into the upright body position. The electronic unit is configured to receive a set threshold value for the maximum squat count from a user. The electronic unit records the squat count and provides an alert when the squat count exceeds the set threshold, thereby preventing injuries caused by overexertion during the exercise session.
Type: Application
Filed: January 31, 2019
Publication date: August 1, 2019
Inventor: Victor Lee Franklin
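The electronic unit's described behavior — record a squat count and alert once it exceeds the user's configured maximum — reduces to a small counter with a threshold check. The class and method names below are invented for illustration; the patent does not specify an interface.

```python
class SquatCounter:
    """Minimal sketch of the electronic unit: counts squats and signals
    an alert when the count exceeds the user-set maximum."""

    def __init__(self, max_count: int):
        self.max_count = max_count  # user-configured threshold
        self.count = 0

    def record_squat(self) -> bool:
        """Record one squat; return True when an alert should fire
        because the count now exceeds the configured maximum."""
        self.count += 1
        return self.count > self.max_count

counter = SquatCounter(max_count=2)
alerts = [counter.record_squat() for _ in range(3)]
```

The third squat crosses the threshold of two and triggers the alert.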
-
Publication number: 20190214128
Abstract: The invention concerns an Interoperability Environment comprising: a core software engine comprising means to collect and transfer electronic data from any number of sources, including medical devices, clinical information systems (CIS), and hospital information systems; a means to apply rules to improve compliance with hospital-approved protocols, standards, and guidance; a means to update all subsystems using any given parameter when the parameter is updated in the officially recognized source of truth for that parameter; a means to populate the CIS with all required patient information, while at the same time maintaining all quality and process control data in a format supporting advanced analytics separate from the CIS data; a means to communicate notifications to any number of remote electronic devices without limitation of platform; and a hardware ecosystem comprising means to collect, translate, store, and send electronic data to the core software engine from any electronic source via communication methods…
Type: Application
Filed: January 2, 2019
Publication date: July 11, 2019
Inventors: Gary Colister, Bishoy Magdalla, Giuseppe Saracino, William Murphy, Harish Lecamwasam, Victor Lee, Matt Schumacher, Kevin Gallagher
-
Patent number: 10322061
Abstract: Among other things, a spa includes a spa shell and a water feature disposed on the spa shell. The water feature includes a ridge disposed in an interior area of the spa shell and having a top disposed near a water line of the spa shell. The water feature also includes a water inlet disposed adjacent to the ridge opposite a main body of water area in the interior area of the spa shell, the water inlet being placed lower than the top of the ridge. In addition, the water feature includes a slope descending from the top of the ridge in the direction of the water inlet, the slope having a patterned top surface configured to interact with water flowing over the ridge, down the slope and toward the water inlet.
Type: Grant
Filed: August 25, 2015
Date of Patent: June 18, 2019
Assignee: New Dimension One Spas, Inc.
Inventors: Victor Lee Walker, Timothy P. Pflueger
-
Publication number: 20190140816
Abstract: A system can include a digital oversampler configured to oversample an input data stream; a rate generator configured to select a frequency that is not less than an expected frequency of the input data stream; a rate generator clock of the rate generator configured to output a clock signal that has the selected frequency; a sample receiver configured to receive at least one sample of the input data stream from the digital oversampler; a sample counter configured to be incremented by each received sample responsive to a determination that the sample receiver has received at least one sample of the input data stream from the digital oversampler; a sample rate converter configured to accumulate samples from the sample receiver at the rate of a “toothless” clock signal, wherein the sample counter is configured to be decremented by the “toothless” clock signal at the selected frequency responsive to a determination that the sample receiver has not received at least one sample of the input data stream from the digital oversampler…
Type: Application
Filed: July 30, 2018
Publication date: May 9, 2019
Inventors: Samuel J. Peters, II, Eric P. Etheridge, Victor Lee Hansen, Alexander C. Stange
-
Publication number: 20190138309
Abstract: Methods and apparatuses relating to a prefetch instruction to prefetch a multidimensional block of elements from a multidimensional array into a cache. In one embodiment, a hardware processor includes a decoder to decode a prefetch instruction to prefetch a multidimensional block of elements from a multidimensional array into a cache, wherein at least one operand of the prefetch instruction is to indicate a system memory address of an element of the multidimensional block of elements, a stride of the multidimensional block of elements, and boundaries of the multidimensional block of elements, and an execution unit to execute the prefetch instruction to generate system memory addresses of the other elements of the multidimensional block of elements, and load the multidimensional block of elements into the cache from the system memory addresses.
Type: Application
Filed: June 8, 2018
Publication date: May 9, 2019
Inventors: Victor Lee, Mikhail Smelyanskiy, Alexander Heinecke
-
Publication number: 20190138305
Abstract: Methods and apparatuses relating to a vector instruction with a register operand with an elemental offset are described. In one embodiment, a hardware processor includes a decode unit to decode a vector instruction with a register operand with an elemental offset to access a first number of elements in a register specified by the register operand, wherein the first number is a total number of elements in the register minus the elemental offset, access a second number of elements in a next logical register, wherein the second number is the elemental offset, and combine the first number of elements and the second number of elements as a data vector, and an execution unit to execute the vector instruction on the data vector.
Type: Application
Filed: June 8, 2018
Publication date: May 9, 2019
Inventors: Victor Lee, Ugonna Echeruo, George Chrysos, Naveen Mellempudi
-
Publication number: 20190057304
Abstract: The present disclosure is directed to systems and methods of implementing an analog neural network using a pipelined SRAM architecture (“PISA”) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last level cache (LLC) circuitry. One or more physical parameters, such as a stored charge or voltage, may be used to permit the generation of an in-memory analog output using a SRAM array. The generation of an in-memory analog output using only word-line and bit-line capabilities beneficially increases the computational density of the PISA circuit without increasing power requirements.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
-
Publication number: 20190057300
Abstract: The present disclosure is directed to systems and methods of bit-serial, in-memory, execution of at least an nth layer of a multi-layer neural network in a first on-chip processor memory circuitry portion contemporaneous with prefetching and storing layer weights associated with the (n+1)st layer of the multi-layer neural network in a second on-chip processor memory circuitry portion. The storage of layer weights in on-chip processor memory circuitry beneficially decreases the time required to transfer the layer weights upon execution of the (n+1)st layer of the multi-layer neural network by the first on-chip processor memory circuitry portion. In addition, the on-chip processor memory circuitry may include a third on-chip processor memory circuitry portion used to store intermediate and/or final input/output values associated with one or more layers included in the multi-layer neural network.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
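The overlap the abstract describes — executing layer n from one on-chip memory portion while the weights for layer n+1 are staged into another — is classic double buffering. The sketch below models only the scheduling pattern, with a plain function standing in for in-memory bit-serial execution; everything named here is illustrative.

```python
def run_layers(layer_weights: list, execute) -> list:
    """Alternate two buffers: while 'executing' layer n out of one buffer,
    stage layer n+1's weights into the other, so execution never waits
    on a weight transfer. `execute(weights)` stands in for bit-serial
    in-memory execution of one layer."""
    buffers = [None, None]
    buffers[0] = layer_weights[0]                        # stage the first layer
    outputs = []
    for n in range(len(layer_weights)):
        if n + 1 < len(layer_weights):
            buffers[(n + 1) % 2] = layer_weights[n + 1]  # prefetch next layer
        outputs.append(execute(buffers[n % 2]))          # execute current layer
    return outputs

# Toy "execution": sum the weights of each layer.
results = run_layers([[1], [2, 2], [3, 3, 3]], execute=sum)
```

In real hardware the prefetch and the execution proceed concurrently; here they are sequential, but the buffer alternation — the part the abstract claims — is the same.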
-
Publication number: 20190056885
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory, bit-serial, mathematical operations performed by a pipelined SRAM architecture (bit-serial PISA) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last level cache (LLC) circuitry. The bit-serial PISA circuitry is coupled to PISA memory circuitry via a relatively high-bandwidth connection to beneficially facilitate the storage and retrieval of layer weights by the bit-serial PISA circuitry during execution. Direct memory access (DMA) circuitry transfers the neural network model and input data from system memory to the bit-serial PISA memory and also transfers output data from the PISA memory circuitry to system memory circuitry.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
-
Publication number: 20190057036
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma