Patents by Inventor Eng Lim Goh

Eng Lim Goh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240133680
    Abstract: There is provided an optical encoding system including a photodiode array and a code disk opposite each other. The code disk is arranged with multiple code slits at a ring area corresponding to the photodiode array. A length direction of each photodiode of the photodiode array has at least one deviation angle with respect to a length direction of the multiple code slits to reduce the total harmonic distortion in photocurrents.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 25, 2024
    Inventors: Meng-Yee LIM, Priscilla Tze-Wei GOH, Kuan-Choong SHIM, Gim-Eng CHEW
  • Patent number: 11464133
    Abstract: Example containers are provided to retain and cool electronic devices in environments where power and/or coolant (e.g., airflow) is limited or finite. In examples, a container can include a housing and a conduit system. The housing can include a plurality of sides including a front side and a back side, and can be structured to retain at least a first computing device. In addition, the housing can provide for a first and second inlet opening and a first and second outlet opening on the back side of the container. The conduit system can be provided within the housing to guide the airflow received from each of the first and second inlet openings through an interior volume of the container to cause the airflow to exit from each of the first and second outlet openings.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: October 4, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Michael Scott, Bret Cuda, David Petersen, Eng Lim Goh, Mark R. Fernandez, John Kichury, Robert Behringer, Calandra Szulgit
  • Patent number: 11029847
    Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second (“Exascale” computing levels). Limitations caused by hierarchical memory architectures, where data is temporarily stored in slower or less available memories, will increasingly prevent high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, time spent and power consumed copying data into and out of a slower tier memory will increase costs associated with high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization and computational steering without the need for a data copy or movement.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: June 8, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
  • Patent number: 10809779
    Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: October 20, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
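The thermal-management idea above (monitor ambient and system temperatures, then control processor speed as a function of the readings) can be sketched roughly as follows. This is an illustrative sketch only; the function names, thresholds, and the linear scaling policy are assumptions, not details taken from the patent.

```python
# Hypothetical sketch: scale a node's clock speed as a function of the
# hottest monitored temperature. All thresholds are illustrative.

def throttled_speed(ambient_c, node_temps_c, max_speed_ghz=3.0,
                    safe_c=70.0, critical_c=90.0):
    """Return a clock speed scaled down linearly between safe and critical temps."""
    hottest = max([ambient_c] + list(node_temps_c))
    if hottest <= safe_c:
        return max_speed_ghz                # fully cool: run at full speed
    if hottest >= critical_c:
        return max_speed_ghz * 0.5          # floor: never fully stall the node
    # Linear interpolation between full speed and the floor.
    frac = (hottest - safe_c) / (critical_c - safe_c)
    return max_speed_ghz * (1.0 - 0.5 * frac)
```

A monitoring loop would call this per node and apply the result through whatever frequency-scaling interface the platform provides.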
  • Publication number: 20200311583
    Abstract: Decentralized machine learning to build models is performed at nodes where local training datasets are generated. A blockchain platform may be used to coordinate decentralized machine learning (ML) over a series of iterations. For each iteration, a distributed ledger may be used to coordinate the nodes communicating via a decentralized network. A master node on the decentralized network can include fault tolerance features. Fault tolerance involves determining whether the number of computing nodes in a population participating in an iteration of training is above a threshold. The master node ensures that the minimum number of computing nodes for a population, indicated by the threshold, is met before continuing with an iteration. Thus, the master node can prevent decentralized ML from continuing with an insufficient population of participating nodes, which may impact the precision of the model and/or the overall learning ability of the decentralized ML system.
    Type: Application
    Filed: April 1, 2019
    Publication date: October 1, 2020
    Inventors: Sathyanarayanan Manamohan, Krishnaprasad Lingadahalli Shastry, Vishesh Garg, Eng Lim Goh
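The fault-tolerance check described above (proceed with a training iteration only when the participating-node population meets a minimum threshold) reduces to a simple gate. This is a hedged sketch, not the patented implementation; the names and the "deferred" behavior are assumptions for illustration.

```python
# Illustrative sketch of the master node's population check before an
# iteration of decentralized training.

def can_start_iteration(ready_nodes, min_population):
    """Return True when enough nodes are available for this iteration."""
    return len(ready_nodes) >= min_population

def run_iteration(ready_nodes, min_population):
    if not can_start_iteration(ready_nodes, min_population):
        # Too few participants could degrade model precision, so defer.
        return "deferred"
    return "training with %d nodes" % len(ready_nodes)
```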
  • Publication number: 20200272945
    Abstract: Decentralized machine learning to build models is performed at nodes where local training datasets are generated. A blockchain platform may be used to coordinate decentralized machine learning over a series of iterations. For each iteration, a distributed ledger may be used to coordinate the nodes communicating via a blockchain network. A node can have a local training dataset that includes raw data, where the raw data is accessible locally at the computing node. Further, a node can train a local model based on the local training dataset during a first iteration of training a machine-learned model. The node can generate shared training parameters based on the local model in a manner that precludes any requirement for the raw data to be accessible by each of the other nodes on the blockchain network to perform the decentralized machine learning, while preserving privacy of the raw data.
    Type: Application
    Filed: February 21, 2019
    Publication date: August 27, 2020
    Inventors: Sathyanarayanan Manamohan, Krishnaprasad Lingadahalli Shastry, Vishesh Garg, Eng Lim Goh
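The privacy-preserving idea in the abstract above (each node trains on its private raw data and shares only derived parameters) can be sketched in a federated-averaging style. This is purely illustrative: the single-weight "model" and the averaging merge are assumptions standing in for real training and aggregation.

```python
# Minimal sketch: nodes share derived parameters, never raw data.

def local_train(raw_data):
    """Stand-in for local training: derive a parameter from private data."""
    return sum(raw_data) / len(raw_data)    # e.g. a single learned weight

def merge_shared_params(params):
    """Combine parameters shared by all nodes (raw data never leaves a node)."""
    return sum(params) / len(params)

node_a = local_train([1.0, 2.0, 3.0])       # raw data stays on node A
node_b = local_train([5.0, 7.0])            # raw data stays on node B
global_param = merge_shared_params([node_a, node_b])
```

Only `node_a` and `node_b` (the derived parameters) cross the network; the lists of raw data remain local.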
  • Publication number: 20200229319
    Abstract: Example containers are provided to retain and cool electronic devices in environments where power and/or coolant (e.g., airflow) is limited or finite. In examples, a container can include a housing and a conduit system. The housing can include a plurality of sides including a front side and a back side, and can be structured to retain at least a first computing device. In addition, the housing can provide for a first and second inlet opening and a first and second outlet opening on the back side of the container. The conduit system can be provided within the housing to guide the airflow received from each of the first and second inlet openings through an interior volume of the container to cause the airflow to exit from each of the first and second outlet openings.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Michael Scott, Bret Cuda, David Petersen, Eng Lim Goh, Mark R. Fernandez, John Kichury, Robert Behringer, Calandra Szulgit
  • Patent number: 10429909
    Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: October 1, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
  • Patent number: 10360701
    Abstract: A system may provide a visualization function during computational functions performed by a host system. Access to a library of functions including a visualization function is provided. Then, a computing application is executed. The execution of the computing application includes generating multi-dimensional data, invoking the visualization function from the library, and providing a visual representation of at least a portion of the multi-dimensional data for display within the computing application using the visualization function.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: July 23, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Eng Lim Goh, Hansong Zhang, Chandrasekhar Murthy
  • Publication number: 20180157299
    Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
    Type: Application
    Filed: February 6, 2018
    Publication date: June 7, 2018
    Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
  • Publication number: 20170139607
    Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second (“Exascale” computing levels). Limitations caused by hierarchical memory architectures, where data is temporarily stored in slower or less available memories, will increasingly prevent high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, time spent and power consumed copying data into and out of a slower tier memory will increase costs associated with high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization and computational steering without the need for a data copy or movement.
    Type: Application
    Filed: November 16, 2016
    Publication date: May 18, 2017
    Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
  • Publication number: 20170034946
    Abstract: A server is implemented within a disk drive device or other drive device. The server-drive device may be used within a server tray having many disk drive devices, along with multiple other server trays in a cabinet of trays. One or more disk drive devices may be implemented in a server tray. The server-drive device may also be used in other applications. By implementing the server within the disk drive, valuable space is saved in a computing device.
    Type: Application
    Filed: July 28, 2016
    Publication date: February 2, 2017
    Inventors: Eng Lim Goh, John Kichury, Lance Evans
  • Publication number: 20160364249
    Abstract: A system may provide a visualization function during computational functions performed by a host system. Access to a library of functions including a visualization function is provided. Then, a computing application is executed. The execution of the computing application includes generating multi-dimensional data, invoking the visualization function from the library, and providing a visual representation of at least a portion of the multi-dimensional data for display within the computing application using the visualization function.
    Type: Application
    Filed: July 6, 2016
    Publication date: December 15, 2016
    Inventors: Eng Lim Goh, Hansong Zhang, Chandrasekhar Murthy
  • Publication number: 20160349812
    Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
    Type: Application
    Filed: May 6, 2016
    Publication date: December 1, 2016
    Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
  • Publication number: 20160335131
    Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
    Type: Application
    Filed: July 25, 2016
    Publication date: November 17, 2016
    Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
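The policy-driven loop described above (compare monitored operational parametrics against administrator-configured policies and flag workloads for adjustment when limits are approached or exceeded) can be sketched as a threshold comparison. The metric names, limits, and the 90% "approaching" headroom here are invented for illustration, not taken from the patent.

```python
# Hedged sketch of comparing operational metrics against policy limits.

policies = {"power_watts": 4000, "error_rate": 0.01}

def violations(metrics, policies, headroom=0.9):
    """Return names of metrics at or above `headroom` of their policy limit."""
    return [name for name, limit in policies.items()
            if metrics.get(name, 0) >= headroom * limit]

current = {"power_watts": 3900, "error_rate": 0.002}
to_adjust = violations(current, policies)   # power is within 10% of its limit
```

A scheduler would then rebalance or migrate the workloads responsible for each flagged metric.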
  • Patent number: 9424098
    Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: August 23, 2016
    Assignee: Silicon Graphics International Corp.
    Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
  • Patent number: 9389760
    Abstract: A system may provide a visualization function during computational functions performed by a host system. Access to a library of functions including a visualization function is provided. Then, a computing application is executed. The execution of the computing application includes generating multi-dimensional data, invoking the visualization function from the library, and providing a visual representation of at least a portion of the multi-dimensional data for display within the computing application using the visualization function.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: July 12, 2016
    Assignee: Silicon Graphics International Corporation
    Inventors: Eng Lim Goh, Hansong Zhang, Chandrasekhar Murthy
  • Publication number: 20150163954
    Abstract: A server is implemented within a disk drive device or other drive device. The server-drive device may be used within a server tray having many disk drive devices, along with multiple other server trays in a cabinet of trays. One or more disk drive devices may be implemented in a server tray. The server-drive device may also be used in other applications. By implementing the server within the disk drive, valuable space is saved in a computing device.
    Type: Application
    Filed: December 9, 2014
    Publication date: June 11, 2015
    Inventors: Eng Lim Goh, John Kichury, Lance Evans
  • Publication number: 20140068627
    Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
    Type: Application
    Filed: June 29, 2013
    Publication date: March 6, 2014
    Applicant: Silicon Graphics International Corp.
    Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
  • Patent number: RE44958
    Abstract: A method and apparatus for processing a primitive for potential display on a display device (having a plurality of pixels) determines if the primitive intersects at least a predetermined number of pixel fragments on the display device. The predetermined number is no less than one. The method and apparatus then cull the primitive as a function of whether the primitive intersects at least the predetermined number of pixel fragments. If it is culled, the primitive is not raster processed (i.e., not subjected to raster processing, whether or not complete).
    Type: Grant
    Filed: July 20, 2006
    Date of Patent: June 24, 2014
    Assignee: RPX Corporation
    Inventors: Stephen Moffitt, Eng Lim Goh
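The culling test in the abstract above (skip raster processing for a primitive unless it intersects at least a predetermined number of pixel fragments) amounts to a threshold gate in the pipeline. The sketch below is illustrative only; the fragment count is a stand-in for whatever intersection test the actual hardware performs.

```python
# Illustrative sketch of fragment-count-based primitive culling.

def should_cull(fragments_intersected, min_fragments=1):
    """Cull when the primitive covers fewer than `min_fragments` fragments."""
    return fragments_intersected < min_fragments

def process_primitive(fragments_intersected, min_fragments=1):
    if should_cull(fragments_intersected, min_fragments):
        return "culled"                     # never reaches raster processing
    return "rasterized"
```

With the default threshold of one, a primitive that covers no fragment at all (e.g. one falling between sample points) is discarded before rasterization.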