Patents by Inventor Hanhua Feng
Hanhua Feng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9667735
Abstract: A caching system is provided. The computing infrastructure runs off of a centralized storage, and data stored on the centralized store can also be retrieved from nearby machines that are part of the local infrastructure and have recently accessed the centralized store. Address-to-digest mappings are used to find a digest of the desired data block. That digest is then used to determine where the data block is being cached. In some embodiments, the digest is hashed and the hash of the digest is used to determine where the data block is being cached. The data block is accessed from the cache using its digest, therefore different addresses may result in the retrieval of the same data block. For example, in a virtual machine environment, two different nodes may retrieve the same data block using different addresses.
Type: Grant
Filed: March 14, 2014
Date of Patent: May 30, 2017
Assignee: Infinio Systems, Inc.
Inventors: Daniel Rubenstein, Vishal Misra, Hanhua Feng, Martin C. Martin
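The address-to-digest-to-node lookup chain this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the `DigestCache` class, the peer list, SHA-256 as the digest function, and modulo placement are all assumptions made here for clarity.

```python
import hashlib

class DigestCache:
    """Illustrative sketch: an address-to-digest map resolves a block
    address to a content digest, and a hash of that digest selects the
    peer node caching the block, so two different addresses that map to
    the same digest retrieve one shared block."""

    def __init__(self, peers):
        self.peers = peers                        # participating cache nodes
        self.addr_to_digest = {}                  # address -> content digest
        self.peer_store = {p: {} for p in peers}  # per-peer: digest -> block

    def put(self, address, block):
        digest = hashlib.sha256(block).hexdigest()
        self.addr_to_digest[address] = digest
        self.peer_store[self._peer_for(digest)][digest] = block

    def get(self, address):
        digest = self.addr_to_digest.get(address)
        if digest is None:
            return None                           # fall back to central storage
        return self.peer_store[self._peer_for(digest)].get(digest)

    def _peer_for(self, digest):
        # The hash of the digest determines where the block is cached.
        return self.peers[int(digest, 16) % len(self.peers)]
```

As in the abstract's virtual-machine example, two different addresses that store the same content resolve to the same cached block.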
-
Patent number: 9264506
Abstract: Systems, methods, and products for pull data transfer in a request-response model are provided herein. One aspect provides for generating output data utilizing at least one data generation station; and communicating via the at least one data generation station output data related to at least one data request received from at least one data requesting station responsive to at least one criterion, the at least one criterion comprising one of expiration of a time period or generation of a threshold amount of output data. Other embodiments and aspects are also described herein.
Type: Grant
Filed: May 11, 2012
Date of Patent: February 16, 2016
Assignee: International Business Machines Corporation
Inventors: Hanhua Feng, Anton Viktorovich Riabov
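The two release criteria named in this abstract (expiration of a time period, or generation of a threshold amount of output) can be illustrated with a small buffering class. The class name, polling structure, and parameters below are assumptions made for illustration, not the patent's design.

```python
import time

class PullStation:
    """Illustrative sketch: a data generation station buffers output and
    answers a pending request only when one of two criteria is met -
    a time period has expired, or a threshold amount of output has
    accumulated."""

    def __init__(self, threshold, period_s):
        self.threshold = threshold
        self.period_s = period_s
        self.buffer = []
        self.request_time = None

    def request(self, now=None):
        # A requesting station registers interest; the response is deferred.
        self.request_time = time.monotonic() if now is None else now

    def generate(self, item):
        self.buffer.append(item)

    def poll(self, now=None):
        # Release buffered output only if either criterion is satisfied.
        if self.request_time is None:
            return None
        now = time.monotonic() if now is None else now
        timed_out = (now - self.request_time) >= self.period_s
        if timed_out or len(self.buffer) >= self.threshold:
            out, self.buffer, self.request_time = self.buffer, [], None
            return out
        return None
```

A pending request is answered early once enough output exists, or late (with whatever has accumulated) once the time period runs out.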
-
Patent number: 9104506
Abstract: A method and computer program product for assembling and deploying multi-platform flow based applications. An information processing flow that produces a result is assembled, the information processing flow includes components connected by data links, a component includes software code that describes at least one of an input constraint or an output constraint of the component, and at least two of the components are deployable on different computing platforms. The information processing flow is partitioned into sub-flows, such that for each sub-flow every component in the sub-flow is deployable on the same computing platform. The sub-flows are deployed on their respective computing platforms.
Type: Grant
Filed: November 27, 2009
Date of Patent: August 11, 2015
Assignee: International Business Machines Corporation
Inventors: Eric Bouillet, Mark D. Feblowitz, Hanhua Feng, Anand Ranganathan, Anton V. Riabov, Octavian Udrea
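The partitioning step described in this abstract (group components so that every component in a sub-flow targets the same platform) can be sketched as a union-find over the flow's data links. The data structures below are illustrative assumptions, not the patent's representation of a flow.

```python
def partition_flow(components, links):
    """Illustrative sketch: `components` maps a component name to its
    target platform; `links` are (src, dst) data links. Endpoints of a
    link are merged into one sub-flow only when both sides are
    deployable on the same platform."""
    parent = {c: c for c in components}

    def find(c):
        # Union-find root lookup with path compression.
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for src, dst in links:
        if components[src] == components[dst]:
            parent[find(src)] = find(dst)

    groups = {}
    for c in components:
        groups.setdefault(find(c), set()).add(c)
    return list(groups.values())
```

Each returned group is platform-homogeneous and can be deployed on its own computing platform; links that cross groups become the cross-platform boundaries.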
-
Publication number: 20140280689
Abstract: A caching system is provided. The computing infrastructure runs off of a centralized storage, and data stored on the centralized store can also be retrieved from nearby machines that are part of the local infrastructure and have recently accessed the centralized store. Address-to-digest mappings are used to find a digest of the desired data block. That digest is then used to determine where the data block is being cached. In some embodiments, the digest is hashed and the hash of the digest is used to determine where the data block is being cached. The data block is accessed from the cache using its digest, therefore different addresses may result in the retrieval of the same data block. For example, in a virtual machine environment, two different nodes may retrieve the same data block using different addresses.
Type: Application
Filed: March 14, 2014
Publication date: September 18, 2014
Applicant: Infinio Systems Inc.
Inventors: Daniel Rubenstein, Vishal Misra, Hanhua Feng, Martin C. Martin
-
Publication number: 20140067908
Abstract: Systems, methods, and products for pull data transfer in a request-response model are provided herein. One aspect provides for generating output data utilizing at least one data generation station; and communicating via the at least one data generation station output data related to at least one data request received from at least one data requesting station responsive to at least one criterion, the at least one criterion comprising one of expiration of a time period or generation of a threshold amount of output data. Other embodiments and aspects are also described herein.
Type: Application
Filed: May 11, 2012
Publication date: March 6, 2014
Applicant: International Business Machines Corporation
Inventors: Hanhua Feng, Anton Viktorovich Riabov
-
Publication number: 20130324863
Abstract: A guide wire arrangement, a strip arrangement, a method of forming a guide wire arrangement, and a method of forming a strip arrangement are provided. The guide wire arrangement includes a strip; a sensor being disposed on a first portion of the strip; a chip being disposed next to the sensor on a second portion of the strip, wherein the second portion of the strip is next to the first portion of the strip; wherein the strip is folded at a folding point between the first portion of the strip and the second portion of the strip such that the first portion of the strip and the second portion of the strip form a stack of strip portions.
Type: Application
Filed: November 2, 2011
Publication date: December 5, 2013
Inventors: Daquan Yu, Woo Tae Park, Li Shiah Lim, Muhammad Hamidullah, Rama Krishna Kotlanka, Vaidyanathan Kripesh, Hanhua Feng
-
Patent number: 8380965
Abstract: An apparatus to facilitate design of a stream processing flow that satisfies an objective, wherein the flow includes at least three processing groups, wherein a first processing group includes a data source and an operator, a second processing group includes a data source and an operator and a third processing group includes a join operator at its input and another operator, wherein data inside each group is organized by channels and each channel is a sequence of data, wherein an operator producing a data channel does not generate new data for the channel until old data of the channel is received by all other operators in the same group, and wherein data that flows from the first and second groups to the third group is done asynchronously and is stored in a queue if not ready for processing by an operator of the third group.
Type: Grant
Filed: June 16, 2009
Date of Patent: February 19, 2013
Assignee: International Business Machines Corporation
Inventors: Eric Bouillet, Hanhua Feng, Zhen Liu, Anton V. Riabov
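The asynchronous hand-off into the third group's join operator can be sketched with a queue per input: data arriving from either upstream group is buffered until the join is ready to process it. The `JoinOperator` class and its pairwise matching policy are assumptions for illustration, not details from the patent.

```python
from collections import deque

class JoinOperator:
    """Illustrative sketch of the asynchronous inter-group hand-off:
    data flowing from the two upstream groups into the join is stored
    in a queue if not ready for processing, and a joined pair is
    emitted once both inputs have an item available."""

    def __init__(self):
        self.queues = (deque(), deque())  # one queue per upstream group

    def receive(self, side, item):
        # Arrival is asynchronous; buffer the item on its input side.
        self.queues[side].append(item)
        if self.queues[0] and self.queues[1]:
            # Both sides ready: consume one item from each and join.
            return (self.queues[0].popleft(), self.queues[1].popleft())
        return None
```

Within a group, by contrast, the abstract describes synchronous channels: a producer does not emit new channel data until all consumers in the group have received the old data.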
-
Patent number: 8290939
Abstract: In a method for visualizing query results in stream processing systems, a visualization service receives a query from a client to visualize data in a stream processing application. The query is sent from the visualization service to a query-able operator of the stream processing application. At the query-able operator, an operation is performed using history data in the query-able operator to produce a first result that satisfies the query and the first result is sent to the visualization service. At the query-able operator, another operation is performed using new data received by the query-able operator to produce a second result that satisfies the query and the second result is sent to the visualization service. The first and second results are output from the visualization service to the client.
Type: Grant
Filed: June 30, 2010
Date of Patent: October 16, 2012
Assignee: International Business Machines Corporation
Inventors: Eric Bouillet, Hanhua Feng, Anton V. Riabov
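The two-phase answer described in this abstract (a first result from the operator's history data, then further results from newly received data) can be sketched as follows. The class shape and callback interface are illustrative assumptions, not the patented interfaces.

```python
class QueryableOperator:
    """Illustrative sketch: a query is first evaluated over retained
    history to produce an immediate result, then re-evaluated against
    each newly arriving item, with every result pushed back to the
    visualization service via a sink callback."""

    def __init__(self, history):
        self.history = list(history)
        self.queries = []  # registered (predicate, sink) pairs

    def query(self, predicate, sink):
        # First result: an operation over the operator's history data.
        sink([x for x in self.history if predicate(x)])
        self.queries.append((predicate, sink))

    def on_new_data(self, item):
        # Subsequent results: operations over newly received data.
        self.history.append(item)
        for predicate, sink in self.queries:
            if predicate(item):
                sink([item])
```

The visualization service would own the sink callback and relay both the history-based result and the streaming results to the client.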
-
Publication number: 20110302196
Abstract: In a method for visualizing query results in stream processing systems, a visualization service receives a query from a client to visualize data in a stream processing application. The query is sent from the visualization service to a query-able operator of the stream processing application. At the query-able operator, an operation is performed using history data in the query-able operator to produce a first result that satisfies the query and the first result is sent to the visualization service. At the query-able operator, another operation is performed using new data received by the query-able operator to produce a second result that satisfies the query and the second result is sent to the visualization service. The first and second results are output from the visualization service to the client.
Type: Application
Filed: June 30, 2010
Publication date: December 8, 2011
Applicant: International Business Machines Corporation
Inventors: Eric Bouillet, Hanhua Feng, Anton V. Riabov
-
Publication number: 20110131557
Abstract: A method and computer program product for assembling and deploying multi-platform flow based applications. An information processing flow that produces a result is assembled, the information processing flow includes components connected by data links, a component includes software code that describes at least one of an input constraint or an output constraint of the component, and at least two of the components are deployable on different computing platforms. The information processing flow is partitioned into sub-flows, such that for each sub-flow every component in the sub-flow is deployable on the same computing platform. The sub-flows are deployed on their respective computing platforms.
Type: Application
Filed: November 27, 2009
Publication date: June 2, 2011
Applicant: International Business Machines Corporation
Inventors: Eric Bouillet, Mark D. Feblowitz, Hanhua Feng, Anand Ranganathan, Anton V. Riabov, Octavian Udrea
-
Patent number: 7924718
Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
Type: Grant
Filed: August 5, 2009
Date of Patent: April 12, 2011
Assignee: International Business Machines Corporation
Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
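The primal-dual decomposition this abstract outlines can be caricatured in a few lines: a source adjusts its admitted rate against an advertised shadow price (primal step), while a node's virtual queue tracks backlog net of sink credits and sets that price (dual step). The step sizes, update rules, and class names below are all illustrative assumptions, not the patent's algorithm.

```python
class SourceNode:
    """Illustrative primal step: the source moves its admitted workflow
    rate toward the point where marginal utility balances the shadow
    price advertised by downstream nodes."""

    def __init__(self, step=0.1):
        self.rate = 1.0
        self.step = step

    def update(self, shadow_price, marginal_utility):
        self.rate = max(0.0, self.rate + self.step * (marginal_utility - shadow_price))
        return self.rate


class RelayNode:
    """Illustrative dual step: the virtual queue accounts for queue
    backlog minus credits pulled by sinks, and the shadow price grows
    with the virtual queue to signal congestion."""

    def __init__(self, price_step=0.05):
        self.price_step = price_step
        self.virtual_queue = 0.0
        self.shadow_price = 0.0

    def update(self, arrivals, service, credits):
        self.virtual_queue = max(0.0, self.virtual_queue + arrivals - service - credits)
        self.shadow_price = self.price_step * self.virtual_queue
        return self.shadow_price
```

Iterating the two updates lets congestion at the node push back on admission at the source, which is the load-shedding behavior the abstract describes.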
-
Patent number: 7889651
Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
Type: Grant
Filed: June 6, 2007
Date of Patent: February 15, 2011
Assignee: International Business Machines Corporation
Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
-
Publication number: 20100318768
Abstract: An apparatus, including a memory device for storing a program, and a processor in communication with the memory device, the processor operative with the program to facilitate design of a stream processing flow that satisfies an objective, wherein the stream processing flow includes at least three processing groups, wherein a first processing group includes a data source and an operator, a second processing group includes a data source and an operator and a third processing group includes a join operator at its input and another operator, wherein data inside each group is organized by channels and each channel is a sequence of data, wherein an operator producing a data channel does not generate new data for the channel until old data of the channel is received by all other operators in the same group, and wherein data that flows from the first and second groups to the third group is done asynchronously and is stored in a queue if not ready for processing by an operator of the third group, and deploy the stream processing flow.
Type: Application
Filed: June 16, 2009
Publication date: December 16, 2010
Applicant: International Business Machines Corporation
Inventors: Eric Bouillet, Hanhua Feng, Zhen Liu, Anton V. Riabov
-
Publication number: 20100288689
Abstract: A microfluidic filtration unit for trapping particles of a predetermined nominal size present in a fluid is provided. The unit comprises a fluid chamber connected to an inlet for introducing the fluid to be filtered and an outlet for discharging filtered fluid, a filtration barrier arranged within the fluid chamber, said filtration barrier comprising a plurality of pillars arranged substantially perpendicular to the path of fluid flow when fluid is introduced into the fluid chamber, said pillars being aligned to form at least one row extending across said path of fluid flow, wherein each of said at least one row of pillars in the filtration barrier comprises at least one fine filtration section comprising a group of pillars that are spaced apart to prevent particles to be filtered from the fluid from moving between adjacent pillars, and at least one coarse filtration section comprising a group of pillars that are spaced apart to permit the movement of particles between adjacent pillars.
Type: Application
Filed: August 22, 2006
Publication date: November 18, 2010
Applicant: Agency for Science, Technology and Research
Inventors: Liang Zhu, Wen-Tso Liu, Hanhua Feng, Hong Miao Ji, William Cheng Yong Teo, Ramana Murthy Badam
-
Publication number: 20090300183
Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
Type: Application
Filed: August 5, 2009
Publication date: December 3, 2009
Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
-
Publication number: 20080304516
Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
Type: Application
Filed: June 6, 2007
Publication date: December 11, 2008
Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
-
Patent number: 7132307
Abstract: A silicon condenser microphone is described. The silicon condenser microphone of the present invention comprises a perforated backplate comprising a portion of a single crystal silicon substrate, a support structure formed on the single crystal silicon substrate, and a floating silicon diaphragm supported at its edge by the support structure and lying parallel to the perforated backplate and separated from the perforated backplate by an air gap.
Type: Grant
Filed: December 15, 2003
Date of Patent: November 7, 2006
Assignee: Knowles Electronics, LLC
Inventors: Zhe Wang, Qingxin Zhang, Hanhua Feng
-
Patent number: 6908825
Abstract: The invention relates to a method of making an integrated circuit inductor that comprises a silicon substrate and an oxide layer on the silicon substrate. In one aspect, the method comprises depositing an inductive loop on the oxide layer, and making a plurality of apertures in the oxide layer beneath the inductive loop. The method also comprises providing a plurality of bridges adjacent the apertures and provided by portions of the oxide layer between an inner region within the inductive loop and an outer region of the oxide layer without the inductive loop, the inductive loop being supported on the bridges. The method comprises forming a trench in the silicon substrate beneath the bridges, to provide an air gap between the inductive loop and the silicon substrate.
Type: Grant
Filed: November 14, 2002
Date of Patent: June 21, 2005
Assignee: Institute of Microelectronics
Inventors: Shuming Xu, Hanhua Feng, Pang Dow Foo, Bai Xu, Uppili Sridhar
-
Patent number: 6855640
Abstract: When using hot alkaline etchants such as KOH, the wafer front side, where various devices and/or circuits are located, must be isolated from any contact with the etchant. This has been achieved by using two chambers that are separated from each other by the wafer that is to be etched. Etching solution in one chamber is in contact with the wafer's back surface while deionized water in the other chamber contacts the front surface. The relative liquid pressures in the chambers are arranged to be slightly higher in the chamber facing the front surface so that leakage of etchant through a pin hole from back surface to front surface does not occur. As a further precaution, a monitor to detect the etchant is located in the DI water so that, if need be, etching can be terminated before irreparable damage is done.
Type: Grant
Filed: February 26, 2002
Date of Patent: February 15, 2005
Assignee: Institute of Microelectronics
Inventors: Zhe Wang, Qingxin Zhang, Pang Dow Foo, Hanhua Feng
-
Publication number: 20040179705
Abstract: A silicon condenser microphone is described. The silicon condenser microphone of the present invention comprises a perforated backplate comprising a portion of a single crystal silicon substrate, a support structure formed on the single crystal silicon substrate, and a floating silicon diaphragm supported at its edge by the support structure and lying parallel to the perforated backplate and separated from the perforated backplate by an air gap.
Type: Application
Filed: December 15, 2003
Publication date: September 16, 2004
Inventors: Zhe Wang, Qingxin Zhang, Hanhua Feng