Patents by Inventor Eng Lim Goh
Eng Lim Goh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240133680
Abstract: There is provided an optical encoding system including a photodiode array and a code disk opposite to each other. The code disk is arranged with multiple code slits at a ring area corresponding to the photodiode array. A length direction of each photodiode of the photodiode array has at least one deviation angle with respect to a length direction of the multiple code slits to reduce the total harmonic distortion in photocurrents.
Type: Application
Filed: October 20, 2022
Publication date: April 25, 2024
Inventors: Meng-Yee Lim, Priscilla Tze-Wei Goh, Kuan-Choong Shim, Gim-Eng Chew
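As a rough illustration of why tilting the photodiodes suppresses harmonics, the sketch below models the photocurrent as a square-wave-like signal whose k-th harmonic (amplitude ~ 1/k) is attenuated by spatial averaging over the tilt-induced window. The dimensions, the 1/k harmonic model, and the sinc attenuation factor are illustrative assumptions, not details from the patent.

```python
import math

def thd(pitch, length, theta_rad, harmonics=(1, 3, 5, 7)):
    """Total harmonic distortion of a modelled photocurrent. A deviation
    angle theta averages the slit pattern over a window w = L*tan(theta),
    attenuating harmonic k by |sinc(k * w / pitch)| (assumed model)."""
    w = length * math.tan(theta_rad)

    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

    amps = {k: (1.0 / k) * abs(sinc(k * w / pitch)) for k in harmonics}
    fundamental = amps[harmonics[0]]
    higher = math.sqrt(sum(a * a for k, a in amps.items() if k != harmonics[0]))
    return higher / fundamental

# Hypothetical geometry: 40 um slit pitch, 120 um photodiode length.
aligned = thd(pitch=40e-6, length=120e-6, theta_rad=0.0)
tilted = thd(pitch=40e-6, length=120e-6, theta_rad=math.radians(5))
```

Under this toy model the tilted configuration yields a lower THD than the aligned one, matching the effect the abstract claims.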
-
Patent number: 11464133
Abstract: Example containers are provided to retain and cool electronic devices in environments where power and/or coolant (e.g., airflow) is limited or finite. In examples, a container can include a housing and a conduit system. The housing can include a plurality of sides including a front side and a back side, and can be structured to retain at least a first computing device. In addition, the housing can provide for a first and second inlet opening and a first and second outlet opening on the back side of the container. The conduit system can be provided within the housing to guide the airflow received from each of the first and second inlet openings through an interior volume of the container to cause the airflow to exit from each of the first and second outlet openings.
Type: Grant
Filed: January 14, 2019
Date of Patent: October 4, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Michael Scott, Bret Cuda, David Petersen, Eng Lim Goh, Mark R. Fernandez, John Kichury, Robert Behringer, Calandra Szulgit
-
Patent number: 11029847
Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Limitations caused by hierarchical memory architectures where data is temporarily stored in slower or less available memories will increasingly limit high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, time spent and power consumed copying data into and out of a slower tier memory will increase costs associated with high performance computing at an accelerating rate. New technologies, such as the novel Zero Copy Architecture disclosed herein, where each compute node writes locally for performance, yet can quickly access data globally with low latency, will be required. The result is the ability to perform burst buffer operations and in situ analytics, visualization and computational steering without the need for a data copy or movement.
Type: Grant
Filed: November 16, 2016
Date of Patent: June 8, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
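The write-locally, read-globally-without-copying idea can be illustrated in miniature with Python memoryviews over one shared buffer. This is only a single-process analogy for the zero-copy concept, not the patented multi-node architecture; the segment layout and helper names are invented for the sketch.

```python
# One shared buffer stands in for a global address space; each "node"
# owns a segment. Writes are local, and reads elsewhere go through
# memoryviews, so no bytes are duplicated.
NODES, SEG = 4, 8
storage = bytearray(NODES * SEG)
segments = [memoryview(storage)[i * SEG:(i + 1) * SEG] for i in range(NODES)]

def local_write(node, data):
    """A node writes into its own segment for performance."""
    segments[node][:len(data)] = data

def global_read(node):
    """Any node can read another segment via a zero-copy view."""
    return segments[node]

local_write(2, b"in-situ!")
view = global_read(2)  # a view into `storage`, not a copy
```

Because `view` aliases `storage` directly, in-situ analytics could scan another node's data without the copy step the abstract identifies as the bottleneck.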
-
Patent number: 10809779
Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
Type: Grant
Filed: February 6, 2018
Date of Patent: October 20, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
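A minimal sketch of "processing speed as a function of monitored temperature" is a linear throttling curve. The thresholds, frequencies, and linear mapping below are illustrative assumptions, not values from the patent.

```python
def target_frequency(temps_c, f_max=3.5e9, f_min=1.2e9,
                     t_throttle=70.0, t_limit=90.0):
    """Scale the clock linearly from f_max down to f_min as the hottest
    monitored temperature moves from t_throttle to t_limit (both in C)."""
    hottest = max(temps_c)
    if hottest <= t_throttle:
        return f_max                      # cool enough: full speed
    if hottest >= t_limit:
        return f_min                      # at the limit: minimum speed
    frac = (hottest - t_throttle) / (t_limit - t_throttle)
    return f_max - frac * (f_max - f_min)
```

For example, readings of 65 C leave the node at full speed, while 80 C (halfway between the thresholds) lands halfway between the two frequencies.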
-
Publication number: 20200311583
Abstract: Decentralized machine learning to build models is performed at nodes where local training datasets are generated. A blockchain platform may be used to coordinate decentralized machine learning (ML) over a series of iterations. For each iteration, a distributed ledger may be used to coordinate the nodes communicating via a decentralized network. A master node on the decentralized network can include fault tolerance features. Fault tolerance involves determining whether the number of computing nodes in a population for participating in an iteration of training is above a threshold. The master node ensures that the minimum number of computing nodes for a population, indicated by the threshold, is met before continuing with an iteration. Thus, the master node can prevent decentralized ML from continuing with an insufficient population of participating nodes that may impact the precision of the model and/or the overall learning ability of the decentralized ML system.
Type: Application
Filed: April 1, 2019
Publication date: October 1, 2020
Inventors: Sathyanarayanan Manamohan, Krishnaprasad Lingadahalli Shastry, Vishesh Garg, Eng Lim Goh
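The fault-tolerance gate described above reduces to a simple population check before each iteration. The threshold value and function names are hypothetical; this only sketches the check, not the ledger or blockchain machinery.

```python
MIN_POPULATION = 3  # hypothetical minimum population threshold

def ready_for_iteration(enrolled_nodes, threshold=MIN_POPULATION):
    """Master-node check: start a training iteration only when enough
    nodes have enrolled, so a sparse population cannot degrade the
    precision of the shared model."""
    return len(enrolled_nodes) >= threshold

def run_iteration(enrolled_nodes):
    """Proceed or wait, depending on the population check."""
    if not ready_for_iteration(enrolled_nodes):
        return "waiting"    # too few participants: defer this iteration
    return "training"
```

With the threshold at 3, two enrolled nodes cause the iteration to be deferred, while three or more let it proceed.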
-
Publication number: 20200272945
Abstract: Decentralized machine learning to build models is performed at nodes where local training datasets are generated. A blockchain platform may be used to coordinate decentralized machine learning over a series of iterations. For each iteration, a distributed ledger may be used to coordinate the nodes communicating via a blockchain network. A node can have a local training dataset that includes raw data, where the raw data is accessible locally at the computing node. Further, a node can train a local model based on the local training dataset during a first iteration of training a machine-learned model. The node can generate shared training parameters based on the local model in a manner that precludes any requirement for the raw data to be accessible by each of the other nodes on the blockchain network to perform the decentralized machine learning, while preserving privacy of the raw data.
Type: Application
Filed: February 21, 2019
Publication date: August 27, 2020
Inventors: Sathyanarayanan Manamohan, Krishnaprasad Lingadahalli Shastry, Vishesh Garg, Eng Lim Goh
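The share-parameters-not-data idea can be sketched with a toy one-parameter model: each node trains on its private batch and only the updated weight leaves the node, after which the weights are averaged. The learning rule and averaging scheme are illustrative stand-ins, not the patented method.

```python
def local_update(weights, raw_batch, lr=0.1):
    """Train on private raw data; only the updated weights leave the node.
    Toy rule: nudge the single weight toward the batch mean."""
    grad = weights[0] - sum(raw_batch) / len(raw_batch)
    return [weights[0] - lr * grad]

def merge(parameter_sets):
    """Combine the shared parameters from all nodes (simple average);
    no node ever sees another node's raw_batch."""
    return [sum(ws[0] for ws in parameter_sets) / len(parameter_sets)]

node_a = local_update([0.0], raw_batch=[1.0, 3.0])  # private to node A
node_b = local_update([0.0], raw_batch=[5.0, 7.0])  # private to node B
global_w = merge([node_a, node_b])
```

Only `node_a` and `node_b` (the shared parameters) cross node boundaries; the raw batches stay local, which is the privacy property the abstract describes.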
-
Publication number: 20200229319
Abstract: Example containers are provided to retain and cool electronic devices in environments where power and/or coolant (e.g., airflow) is limited or finite. In examples, a container can include a housing and a conduit system. The housing can include a plurality of sides including a front side and a back side, and can be structured to retain at least a first computing device. In addition, the housing can provide for a first and second inlet opening and a first and second outlet opening on the back side of the container. The conduit system can be provided within the housing to guide the airflow received from each of the first and second inlet openings through an interior volume of the container to cause the airflow to exit from each of the first and second outlet openings.
Type: Application
Filed: January 14, 2019
Publication date: July 16, 2020
Inventors: Michael Scott, Bret Cuda, David Petersen, Eng Lim Goh, Mark R. Fernandez, John Kichury, Robert Behringer, Calandra Szulgit
-
Patent number: 10429909
Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
Type: Grant
Filed: May 6, 2016
Date of Patent: October 1, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
-
Patent number: 10360701
Abstract: A system may provide a visualization function during computational functions performed by a host system. Access to a library of functions including a visualization function is provided. Then, a computing application is executed. The execution of the computing application includes generating multi-dimensional data, invoking the visualization function from the library, and providing a visual representation of at least a portion of the multi-dimensional data for display within the computing application using the visualization function.
Type: Grant
Filed: July 6, 2016
Date of Patent: July 23, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Eng Lim Goh, Hansong Zhang, Chandrasekhar Murthy
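The generate-data, invoke-library-function, display pattern can be sketched as below. The "library" and its text-mode renderer are invented placeholders standing in for a real graphics library; none of the names come from the patent.

```python
def visualize(data):
    """Render a crude text view of a 2-D dataset (a stand-in for a real
    graphics visualization function exported by a library)."""
    return "\n".join(" ".join(f"{v:4.1f}" for v in row) for row in data)

library = {"visualize": visualize}  # hypothetical function library

def run_application():
    """The computing application: generate multi-dimensional data, then
    invoke the visualization function from the library mid-computation."""
    data = [[x * y for x in range(3)] for y in range(2)]
    frame = library["visualize"](data)
    return data, frame
```

The application never implements rendering itself; it pulls `visualize` from the shared library, mirroring the access pattern the abstract describes.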
-
Publication number: 20180157299
Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
Type: Application
Filed: February 6, 2018
Publication date: June 7, 2018
Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
-
Publication number: 20170139607
Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Limitations caused by hierarchical memory architectures where data is temporarily stored in slower or less available memories will increasingly limit high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, time spent and power consumed copying data into and out of a slower tier memory will increase costs associated with high performance computing at an accelerating rate. New technologies, such as the novel Zero Copy Architecture disclosed herein, where each compute node writes locally for performance, yet can quickly access data globally with low latency, will be required. The result is the ability to perform burst buffer operations and in situ analytics, visualization and computational steering without the need for a data copy or movement.
Type: Application
Filed: November 16, 2016
Publication date: May 18, 2017
Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
-
Publication number: 20170034946
Abstract: A server is implemented within a disk drive device or other drive device. The server-drive device may be used within a server tray having many disk drive devices, along with multiple other server trays in a cabinet of trays. One or more disk drive devices may be implemented in a server tray. The server-drive device may also be used in other applications. By implementing the server within the disk drive, valuable space is saved in a computing device.
Type: Application
Filed: July 28, 2016
Publication date: February 2, 2017
Inventors: Eng Lim Goh, John Kichury, Lance Evans
-
Publication number: 20160364249
Abstract: A system may provide a visualization function during computational functions performed by a host system. Access to a library of functions including a visualization function is provided. Then, a computing application is executed. The execution of the computing application includes generating multi-dimensional data, invoking the visualization function from the library, and providing a visual representation of at least a portion of the multi-dimensional data for display within the computing application using the visualization function.
Type: Application
Filed: July 6, 2016
Publication date: December 15, 2016
Inventors: Eng Lim Goh, Hansong Zhang, Chandrasekhar Murthy
-
Publication number: 20160349812
Abstract: An apparatus and method thermally manage a high performance computing system having a plurality of nodes with microprocessors. To that end, the apparatus and method monitor the temperature of at least one of a) the environment of the high performance computing system and b) at least a portion of the high performance computing system. In response, the apparatus and method control the processing speed of at least one of the microprocessors on at least one of the plurality of nodes as a function of at least one of the monitored temperatures.
Type: Application
Filed: May 6, 2016
Publication date: December 1, 2016
Inventors: Eng Lim Goh, Patrick Donlin, Andrew Warner
-
Publication number: 20160335131
Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically corresponding to a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include but are not limited to those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
Type: Application
Filed: July 25, 2016
Publication date: November 17, 2016
Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
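The compare-parametrics-to-policies step can be sketched as a threshold scan: metrics approaching their administrator-configured limits are flagged for workload adjustment. The policy names, units, and 90% headroom factor are invented for illustration.

```python
policies = {            # administrator-configured ceilings (hypothetical)
    "power_kw": 500.0,  # total draw limit for the data center
    "error_rate": 0.01, # acceptable error rate
}

def adjust_candidates(parametrics, policies, headroom=0.9):
    """Return the metrics at or above `headroom` of their policy limit;
    a scheduler would migrate or throttle workloads tied to these to
    keep cost of ownership down."""
    return [name for name, limit in policies.items()
            if parametrics.get(name, 0.0) >= headroom * limit]

flagged = adjust_candidates(
    {"power_kw": 480.0, "error_rate": 0.001}, policies)
```

Here power draw (480 of 500 kW) has crossed the 90% headroom line and gets flagged, while the error rate has not.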
-
Patent number: 9424098
Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically corresponding to a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include but are not limited to those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
Type: Grant
Filed: June 29, 2013
Date of Patent: August 23, 2016
Assignee: Silicon Graphics International Corp.
Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
-
Patent number: 9389760
Abstract: A system may provide a visualization function during computational functions performed by a host system. Access to a library of functions including a visualization function is provided. Then, a computing application is executed. The execution of the computing application includes generating multi-dimensional data, invoking the visualization function from the library, and providing a visual representation of at least a portion of the multi-dimensional data for display within the computing application using the visualization function.
Type: Grant
Filed: June 29, 2013
Date of Patent: July 12, 2016
Assignee: Silicon Graphics International Corporation
Inventors: Eng Lim Goh, Hansong Zhang, Chandrasekhar Murthy
-
Publication number: 20150163954
Abstract: A server is implemented within a disk drive device or other drive device. The server-drive device may be used within a server tray having many disk drive devices, along with multiple other server trays in a cabinet of trays. One or more disk drive devices may be implemented in a server tray. The server-drive device may also be used in other applications. By implementing the server within the disk drive, valuable space is saved in a computing device.
Type: Application
Filed: December 9, 2014
Publication date: June 11, 2015
Inventors: Eng Lim Goh, John Kichury, Lance Evans
-
Publication number: 20140068627
Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically corresponding to a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include but are not limited to those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
Type: Application
Filed: June 29, 2013
Publication date: March 6, 2014
Applicant: Silicon Graphics International Corp.
Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
-
Patent number: RE44958
Abstract: A method and apparatus for processing a primitive for potential display on a display device (having a plurality of pixels) determines if the primitive intersects at least a predetermined number of pixel fragments on the display device. The predetermined number is no less than one. The method and apparatus then cull the primitive as a function of whether the primitive intersects at least the predetermined number of pixel fragments. If it is culled, the primitive is not raster processed (i.e., not subjected to raster processing, whether or not complete).
Type: Grant
Filed: July 20, 2006
Date of Patent: June 24, 2014
Assignee: RPX Corporation
Inventors: Stephen Moffitt, Eng Lim Goh
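A minimal sketch of small-primitive culling: estimate how many pixel fragments a primitive's screen-space bounding box can touch, and skip raster processing when the count falls below the predetermined number. The bounding-box estimate and thresholds are illustrative assumptions, not the patented coverage test.

```python
import math

def fragment_count(bbox):
    """Conservative count of pixel fragments overlapped by a primitive's
    screen-space bounding box ((x0, y0), (x1, y1)), one-unit pixels."""
    (x0, y0), (x1, y1) = bbox
    return ((math.floor(x1) - math.floor(x0) + 1) *
            (math.floor(y1) - math.floor(y0) + 1))

def should_cull(bbox, min_fragments=2):
    """Cull (skip raster processing) when the primitive intersects fewer
    than the predetermined number of fragments (here assumed to be 2)."""
    return fragment_count(bbox) < min_fragments

tiny = ((10.2, 10.2), (10.4, 10.4))   # sub-pixel primitive: 1 fragment
large = ((0.0, 0.0), (3.0, 2.0))      # spans many fragments
```

The sub-pixel primitive touches only one fragment and is culled before rasterization; the larger one proceeds through the pipeline.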