Patents by Inventor Hanhua Feng

Hanhua Feng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9667735
    Abstract: A caching system is provided. The computing infrastructure runs off of a centralized storage, and data stored on the centralized store can also be retrieved from nearby machines that are part of the local infrastructure and have recently accessed the centralized store. Address-to-digest mappings are used to find the digest of the desired data block. That digest is then used to determine where the data block is being cached. In some embodiments, the digest is hashed and the hash of the digest is used to determine where the data block is being cached. The data block is accessed from the cache using its digest, therefore different addresses may result in the retrieval of the same data block. For example, in a virtual machine environment, two different nodes may retrieve the same data block using different addresses.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: May 30, 2017
    Assignee: Infinio Systems, Inc.
    Inventors: Daniel Rubenstein, Vishal Misra, Hanhua Feng, Martin C. Martin
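    Example sketch: the abstract above describes a two-level lookup in which an address resolves to a content digest and a hash of that digest selects which nearby machine caches the block, so different addresses naming identical content land on the same cache entry. The Python below is a minimal illustration of that idea under assumed details (SHA-256 digests, a fixed node list, dictionary-based mappings); it is not the patented implementation.
```python
import hashlib

# Illustrative peer nodes in the local infrastructure (assumed names).
NODES = ["node-a", "node-b", "node-c"]

# Address -> digest mapping, populated as blocks are read from central storage.
address_to_digest = {}
# Per-node caches keyed by digest: digest -> data block.
node_caches = {name: {} for name in NODES}


def digest_of(block: bytes) -> str:
    """Content digest of a data block (SHA-256 assumed for illustration)."""
    return hashlib.sha256(block).hexdigest()


def node_for(digest: str) -> str:
    """Hash the digest to pick which nearby machine caches the block."""
    bucket = int(hashlib.sha256(digest.encode()).hexdigest(), 16) % len(NODES)
    return NODES[bucket]


def read_block(address: str, central_store: dict) -> bytes:
    """Resolve address -> digest -> caching node; fall back to central storage."""
    digest = address_to_digest.get(address)
    if digest is not None:
        cached = node_caches[node_for(digest)].get(digest)
        if cached is not None:
            return cached                      # served from a nearby cache
    block = central_store[address]             # miss: go to centralized storage
    digest = digest_of(block)
    address_to_digest[address] = digest
    node_caches[node_for(digest)][digest] = block
    return block


if __name__ == "__main__":
    store = {"vm1/blk42": b"hello", "vm2/blk99": b"hello"}  # same content, two addresses
    block = read_block("vm1/blk42", store)
    # In the real system the address-to-digest mapping for the second address
    # would come from metadata; here we register it directly to show that the
    # lookup is keyed by digest and hits the same cached entry.
    address_to_digest["vm2/blk99"] = digest_of(block)
    assert read_block("vm2/blk99", store) == b"hello"  # served from a peer cache
```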
  • Patent number: 9264506
    Abstract: Systems, methods, and products for pull data transfer in a request-response model are provided herein. One aspect provides for generating output data utilizing at least one data generation station; and communicating via the at least one data generation station output data related to at least one data request received from at least one data requesting station responsive to at least one criterion, the at least one criterion comprising one of expiration of a time period or generation of a threshold amount of output data. Other embodiments and aspects are also described herein.
    Type: Grant
    Filed: May 11, 2012
    Date of Patent: February 16, 2016
    Assignee: International Business Machines Corporation
    Inventors: Hanhua Feng, Anton Viktorovich Riabov
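    Example sketch: the claimed pull model has a data generation station hold its output and answer a request only once a criterion holds, either a time period expiring or a threshold amount of output accumulating. The Python below sketches that gating logic; the class name, threshold, and period values are illustrative assumptions.
```python
import time

class DataGenerationStation:
    """Buffers generated output and releases it to a requesting station only
    when a criterion holds: a time period has expired or a threshold amount
    of output has accumulated. Threshold and period values are illustrative."""

    def __init__(self, threshold: int = 5, period_s: float = 2.0):
        self.threshold = threshold
        self.period_s = period_s
        self.buffer = []
        self.window_start = time.monotonic()

    def generate(self, item) -> None:
        """Produce one unit of output data."""
        self.buffer.append(item)

    def handle_request(self):
        """Answer a pull request only if a criterion is satisfied; otherwise
        return None so the requesting station keeps waiting."""
        period_expired = time.monotonic() - self.window_start >= self.period_s
        threshold_met = len(self.buffer) >= self.threshold
        if not (period_expired or threshold_met):
            return None
        batch, self.buffer = self.buffer, []
        self.window_start = time.monotonic()
        return batch


if __name__ == "__main__":
    station = DataGenerationStation(threshold=3, period_s=10.0)
    for i in range(3):
        station.generate(f"record-{i}")
    print(station.handle_request())  # threshold criterion met -> batch released
```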
  • Patent number: 9104506
    Abstract: A method and computer program product for assembling and deploying multi-platform flow based applications. An information processing flow that produces a result is assembled, the information processing flow includes components connected by data links, a component includes software code that describes at least one of an input constraint or an output constraint of the component, and at least two of the components are deployable on different computing platforms. The information processing flow is partitioned into sub-flows, such that for each sub-flow every component in the sub-flow is deployable on the same computing platform. The sub-flows are deployed on their respective computing platforms.
    Type: Grant
    Filed: November 27, 2009
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Eric Bouillet, Mark D. Feblowitz, Hanhua Feng, Anand Ranganathan, Anton V. Riabov, Octavian Udrea
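    Example sketch: the method partitions an information processing flow into sub-flows whose components all target the same computing platform and then deploys each sub-flow on its platform. The Python below sketches one simple partitioning of a topologically ordered flow; the component and platform names are made up, and printing stands in for actual deployment.
```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One step in an information processing flow (names are illustrative)."""
    name: str
    platform: str                                # computing platform this component runs on
    inputs: list = field(default_factory=list)   # upstream component names (data links)

def partition_into_subflows(flow: list[Component]) -> list[list[Component]]:
    """Split a topologically ordered flow into sub-flows such that every
    component in a sub-flow targets the same computing platform."""
    subflows: list[list[Component]] = []
    for comp in flow:
        if subflows and subflows[-1][0].platform == comp.platform:
            subflows[-1].append(comp)            # extend the current sub-flow
        else:
            subflows.append([comp])              # platform change: start a new sub-flow
    return subflows

def deploy(subflows: list[list[Component]]) -> None:
    """Hand each sub-flow to its platform's deployer (printing stands in here)."""
    for sf in subflows:
        names = ", ".join(c.name for c in sf)
        print(f"deploying [{names}] on {sf[0].platform}")

if __name__ == "__main__":
    flow = [
        Component("ingest", "stream-engine"),
        Component("parse", "stream-engine", inputs=["ingest"]),
        Component("aggregate", "map-reduce", inputs=["parse"]),
        Component("report", "app-server", inputs=["aggregate"]),
    ]
    deploy(partition_into_subflows(flow))
```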
  • Publication number: 20140280689
    Abstract: A caching system is provided. The computing infrastructure runs off of a centralized storage, and data stored on the centralized store can also be retrieved from nearby machines that are part of the local infrastructure and have recently accessed the centralized store. Address-to-digest mappings are used to find the digest of the desired data block. That digest is then used to determine where the data block is being cached. In some embodiments, the digest is hashed and the hash of the digest is used to determine where the data block is being cached. The data block is accessed from the cache using its digest, therefore different addresses may result in the retrieval of the same data block. For example, in a virtual machine environment, two different nodes may retrieve the same data block using different addresses.
    Type: Application
    Filed: March 14, 2014
    Publication date: September 18, 2014
    Applicant: Infinio Systems, Inc.
    Inventors: Daniel Rubenstein, Vishal Misra, Hanhua Feng, Martin C. Martin
  • Publication number: 20140067908
    Abstract: Systems, methods, and products for pull data transfer in a request-response model are provided herein. One aspect provides for generating output data utilizing at least one data generation station; and communicating via the at least one data generation station output data related to at least one data request received from at least one data requesting station responsive to at least one criterion, the at least one criterion comprising one of expiration of a time period or generation of a threshold amount of output data. Other embodiments and aspects are also described herein.
    Type: Application
    Filed: May 11, 2012
    Publication date: March 6, 2014
    Applicant: International Business Machines Corporation
    Inventors: Hanhua Feng, Anton Viktorovich Riabov
  • Publication number: 20130324863
    Abstract: A guide wire arrangement, a strip arrangement, a method of forming a guide wire arrangement, and a method of forming a strip arrangement are provided. The guide wire arrangement includes a strip; a sensor being disposed on a first portion of the strip; a chip being disposed next to the sensor on a second portion of the strip, wherein the second portion of the strip is next to the first portion of the strip; wherein the strip is folded at a folding point between the first portion of the strip and the second portion of the strip such that the first portion of the strip and the second portion of the strip form a stack of strip portions.
    Type: Application
    Filed: November 2, 2011
    Publication date: December 5, 2013
    Inventors: Daquan Yu, Woo Tae Park, Li Shiah Lim, Muhammad Hamidullah, Rama Krishna Kotlanka, Vaidyanathan Kripesh, Hanhua Feng
  • Patent number: 8380965
    Abstract: An apparatus to facilitate design of a stream processing flow that satisfies an objective, wherein the flow includes at least three processing groups, wherein a first processing group includes a data source and an operator, a second processing group includes a data source and an operator and a third processing group includes a join operator at its input and another operator, wherein data inside each group is organized by channels and each channel is a sequence of data, wherein an operator producing a data channel does not generate new data for the channel until old data of the channel is received by all other operators in the same group, and wherein data that flows from the first and second groups to the third group is done asynchronously and is stored in a queue if not ready for processing by an operator of the third group.
    Type: Grant
    Filed: June 16, 2009
    Date of Patent: February 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Eric Bouillet, Hanhua Feng, Zhen Liu, Anton V. Riabov
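    Example sketch: within each processing group data moves in lock step over channels, while data crossing from the first two groups to the third is transferred asynchronously and queued until the join operator can consume it. The Python below sketches only that cross-group queueing and join behaviour, with heavily simplified groups; it is an illustration, not the patented design.
```python
from collections import deque

class Group:
    """A processing group whose operators advance in lock step: the producing
    operator emits a new item on its channel only after the other operators in
    the group have consumed the previous one (simplified to one step here)."""

    def __init__(self, name: str, source):
        self.name = name
        self.source = source            # iterator standing in for the data source

    def step(self):
        """Produce the next item on the group's output channel, or None when done."""
        return next(self.source, None)


def run(group1: Group, group2: Group, joined_sink: list) -> None:
    """Data crossing group boundaries is asynchronous: items wait in a queue
    until the join operator in the third group has a matching item from the
    other input."""
    q1, q2 = deque(), deque()
    while True:
        a, b = group1.step(), group2.step()
        if a is not None:
            q1.append(a)
        if b is not None:
            q2.append(b)
        # Join operator: fires only when both inputs are ready.
        while q1 and q2:
            joined_sink.append((q1.popleft(), q2.popleft()))
        if a is None and b is None:
            break   # both sources exhausted; any unmatched items stay queued


if __name__ == "__main__":
    out: list = []
    run(Group("g1", iter(range(3))), Group("g2", iter("abc")), out)
    print(out)   # [(0, 'a'), (1, 'b'), (2, 'c')]
```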
  • Patent number: 8290939
    Abstract: In a method for visualizing query results in stream processing systems, a visualization service receives a query from a client to visualize data in a stream processing application. The query is sent from the visualization service to a query-able operator of the stream processing application. At the query-able operator, an operation is performed using history data in the query-able operator to produce a first result that satisfies the query and the first result is sent to the visualization service. At the query-able operator, another operation is performed using new data received by the query-able operator to produce a second result that satisfies the query and the second result is sent to the visualization service. The first and second results are output from the visualization service to the client.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: October 16, 2012
    Assignee: International Business Machines Corporation
    Inventors: Eric Bouillet, Hanhua Feng, Anton V. Riabov
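    Example sketch: a query-able operator first answers a forwarded query from the history data it retains, then keeps emitting results as new data arrives, with the visualization service relaying both results to the client. The Python below sketches such an operator; the predicate-plus-callback query form is an assumption made for illustration.
```python
class QueryableOperator:
    """Stream operator that can be queried: it first answers from the history
    it has retained, then continues to emit results as new data arrives.
    The predicate-based query form is an illustrative assumption."""

    def __init__(self, history_size: int = 100):
        self.history = []
        self.history_size = history_size
        self.subscriptions = []          # (predicate, callback) pairs

    def register_query(self, predicate, callback) -> list:
        """First result: matches from retained history. Later results are
        delivered through the callback as new tuples are processed."""
        self.subscriptions.append((predicate, callback))
        return [t for t in self.history if predicate(t)]

    def process(self, data) -> None:
        """Normal stream processing path; also feeds registered queries."""
        self.history.append(data)
        self.history = self.history[-self.history_size:]
        for predicate, callback in self.subscriptions:
            if predicate(data):
                callback(data)           # second (incremental) result


if __name__ == "__main__":
    op = QueryableOperator()
    for value in (3, 8, 15):
        op.process(value)
    # Visualization service forwards a client query: values greater than 5.
    first = op.register_query(lambda v: v > 5, lambda v: print("new match:", v))
    print("history matches:", first)     # [8, 15]
    op.process(42)                        # prints "new match: 42"
```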
  • Publication number: 20110302196
    Abstract: In a method for visualizing query results in stream processing systems, a visualization service receives a query from a client to visualize data in a stream processing application. The query is sent from the visualization service to a query-able operator of the stream processing application. At the query-able operator, an operation is performed using history data in the query-able operator to produce a first result that satisfies the query and the first result is sent to the visualization service. At the query-able operator, another operation is performed using new data received by the query-able operator to produce a second result that satisfies the query and the second result is sent to the visualization service. The first and second results are output from the visualization service to the client.
    Type: Application
    Filed: June 30, 2010
    Publication date: December 8, 2011
    Applicant: International Business Machines Corporation
    Inventors: Eric Bouillet, Hanhua Feng, Anton V. Riabov
  • Publication number: 20110131557
    Abstract: A method and computer program product for assembling and deploying multi-platform flow based applications. An information processing flow that produces a result is assembled, the information processing flow includes components connected by data links, a component includes software code that describes at least one of an input constraint or an output constraint of the component, and at least two of the components are deployable on different computing platforms. The information processing flow is partitioned into sub-flows, such that for each sub-flow every component in the sub-flow is deployable on the same computing platform. The sub-flows are deployed on their respective computing platforms.
    Type: Application
    Filed: November 27, 2009
    Publication date: June 2, 2011
    Applicant: International Business Machines Corporation
    Inventors: Eric Bouillet, Mark D. Feblowitz, Hanhua Feng, Anand Ranganathan, Anton V. Riabov, Octavian Udrea
  • Patent number: 7924718
    Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
    Type: Grant
    Filed: August 5, 2009
    Date of Patent: April 12, 2011
    Assignee: International Business Machines Corporation
    Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
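    Example sketch: admission control and resource allocation are decomposed primal-dual style, with each node keeping a virtual queue and a shadow price per workflow that signals congestion back toward the sources. The Python below is a deliberately tiny single-node, single-workflow illustration of that feedback loop; the log utility, step size, and capacity are assumptions, not values from the patent.
```python
# A minimal primal-dual sketch of utility-driven admission control, assuming a
# single node, a single workflow, log utility, and a fixed service capacity.

STEP = 0.05          # dual (price) update step size
CAPACITY = 4.0       # data units the node can process per round
ROUNDS = 200

def admitted_rate(price: float, max_rate: float = 10.0) -> float:
    """Primal step at the source: with utility U(x) = log(x), the rate that
    maximizes U(x) - price * x is x = 1 / price, clipped to a maximum."""
    return min(max_rate, 1.0 / max(price, 1e-6))

def main() -> None:
    price = 1.0          # shadow price the node advertises for the workflow
    backlog = 0.0        # virtual queue: backlog net of credits from the sink
    for _ in range(ROUNDS):
        rate = admitted_rate(price)                    # source admits workflow
        backlog = max(0.0, backlog + rate - CAPACITY)  # served units act as credits
        # Dual step: the price tracks the virtual queue backlog, signalling
        # congestion back to the source (and, in a network, to neighbor nodes).
        price = max(1e-6, price + STEP * (rate - CAPACITY))
    print(f"admitted rate ~ {admitted_rate(price):.2f} vs capacity {CAPACITY}")

if __name__ == "__main__":
    main()
```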
  • Patent number: 7889651
    Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
    Type: Grant
    Filed: June 6, 2007
    Date of Patent: February 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
  • Publication number: 20100318768
    Abstract: An apparatus, including a memory device for storing a program, and a processor in communication with the memory device, the processor operative with the program to facilitate design of a stream processing flow that satisfies an objective, wherein the stream processing flow includes at least three processing groups, wherein a first processing group includes a data source and an operator, a second processing group includes a data source and an operator and a third processing group includes a join operator at its input and another operator, wherein data inside each group is organized by channels and each channel is a sequence of data, wherein an operator producing a data channel does not generate new data for the channel until old data of the channel is received by all other operators in the same group, and wherein data that flows from the first and second groups to the third group is done asynchronously and is stored in a queue if not ready for processing by an operator of the third group, and deploy the stream processing flow.
    Type: Application
    Filed: June 16, 2009
    Publication date: December 16, 2010
    Applicant: International Business Machines Corporation
    Inventors: Eric Bouillet, Hanhua Feng, Zhen Liu, Anton V. Riabov
  • Publication number: 20100288689
    Abstract: A microfluidic filtration unit for trapping particles of a predetermined nominal size present in a fluid is provided. The unit comprises a fluid chamber connected to an inlet for introducing the fluid to be filtered and an outlet for discharging filtered fluid, a filtration barrier arranged within the fluid chamber, said filtration barrier comprising a plurality of pillars arranged substantially perpendicular to the path of fluid flow when fluid is introduced into the fluid chamber, said pillars being aligned to form at least one row extending across said path of fluid flow, wherein each of said at least one row of pillars in the filtration barrier comprises at least one fine filtration section comprising a group of pillars that are spaced apart to prevent particles to be filtered from the fluid from moving between adjacent pillars, and at least one coarse filtration section comprising a group of pillars that are spaced apart to permit the movement of particles between adjacent pillars.
    Type: Application
    Filed: August 22, 2006
    Publication date: November 18, 2010
    Applicant: Agency for Science, Technology and Research
    Inventors: Liang Zhu, Wen-Tso Liu, Hanhua Feng, Hong Miao Ji, William Cheng Yong Teo, Ramana Murthy Badam
  • Publication number: 20090300183
    Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
    Type: Application
    Filed: August 5, 2009
    Publication date: December 3, 2009
    Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
  • Publication number: 20080304516
    Abstract: Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes.
    Type: Application
    Filed: June 6, 2007
    Publication date: December 11, 2008
    Inventors: Hanhua Feng, Zhen Liu, Honghui Xia, Li Zhang
  • Patent number: 7132307
    Abstract: A silicon condenser microphone is described. The silicon condenser microphone of the present invention comprises a perforated backplate comprising a portion of a single crystal silicon substrate, a support structure formed on the single crystal silicon substrate, and a floating silicon diaphragm supported at its edge by the support structure and lying parallel to the perforated backplate and separated from the perforated backplate by an air gap.
    Type: Grant
    Filed: December 15, 2003
    Date of Patent: November 7, 2006
    Assignee: Knowles Electronics, LLC.
    Inventors: Zhe Wang, Qingxin Zhang, Hanhua Feng
  • Patent number: 6908825
    Abstract: The invention relates to a method of making an integrated circuit inductor that comprises a silicon substrate and an oxide layer on the silicon substrate. In one aspect, the method comprises depositing an inductive loop on the oxide layer, and making a plurality of apertures in the oxide layer beneath the inductive loop. The method also comprises providing a plurality of bridges adjacent the apertures and provided by portions of the oxide layer between an inner region within the inductive loop and an outer region of the oxide layer without the inductive loop, the inductive loop being supported on the bridges. The method comprises forming a trench in the silicon substrate beneath the bridges, to provide an air gap between the inductive loop and the silicon substrate.
    Type: Grant
    Filed: November 14, 2002
    Date of Patent: June 21, 2005
    Assignee: Institute of Microelectronics
    Inventors: Shuming Xu, Hanhua Feng, Pang Dow Foo, Bai Xu, Uppili Sridhar
  • Patent number: 6855640
    Abstract: When using hot alkaline etchants such as KOH, the wafer front side, where various devices and/or circuits are located, must be isolated from any contact with the etchant. This has been achieved by using two chambers that are separated from each other by the wafer that is to be etched. Etching solution in one chamber is in contact with the wafer's back surface while deionized water in the other chamber contacts the front surface. The liquid pressure is arranged to be slightly higher in the chamber on the front-surface side so that leakage of etchant through a pin hole from the back surface to the front surface does not occur. As a further precaution, a monitor to detect the etchant is located in the DI water so that, if need be, etching can be terminated before irreparable damage is done.
    Type: Grant
    Filed: February 26, 2002
    Date of Patent: February 15, 2005
    Assignee: Institute of Microelectronics
    Inventors: Zhe Wang, Qingxin Zhang, Pang Dow Foo, Hanhua Feng
  • Publication number: 20040179705
    Abstract: A silicon condenser microphone is described. The silicon condenser microphone of the present invention comprises a perforated backplate comprising a portion of a single crystal silicon substrate, a support structure formed on the single crystal silicon substrate, and a floating silicon diaphragm supported at its edge by the support structure and lying parallel to the perforated backplate and separated from the perforated backplate by an air gap.
    Type: Application
    Filed: December 15, 2003
    Publication date: September 16, 2004
    Inventors: Zhe Wang, Qingxin Zhang, Hanhua Feng