Patents by Inventor Thomas BERNARD
Thomas BERNARD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240117479
Abstract: A hydrocarbon fluid containment article having a wall with a surface that is wetted by hydrocarbon fluid. The surface includes an anti-coking coating. The anti-coking coating includes a copper salt, a silver salt, or a combination thereof. A gas turbine engine component including a wall having a first surface and an anti-coking coating on the first surface of the wall that is wetted by hydrocarbon fluid. The anti-coking coating including a copper salt, a silver salt, or a combination thereof that prevents the formation of gum or coke on a surface thereon. Methods for reducing the deposition of thermal decomposition products on a wall of an article are also provided.
Type: Application
Filed: August 3, 2023
Publication date: April 11, 2024
Inventors: Lawrence Bernard Kool, Bangalore Aswatha Nagaraj, Thomas George Holland, Alfred Albert Mancini, Michael Anthony Benjamin
-
Publication number: 20240090491
Abstract: An insert for an arthropod trapping device. The insert comprising a substrate and a frame for supporting the substrate, where a surface of the substrate has an adhesive disposed thereon, an optional mounting bracket spaced apart from the adhesive surface of the insert and located at a first end of the insert, and an optional graspable tab extending from the frame at a second end of the insert.
Type: Application
Filed: November 20, 2023
Publication date: March 21, 2024
Inventors: Christopher Lawrence SMITH, Benjamin Patrick HALE, Adam James BURT, Erik John HASENOEHRL, Danilo ROSSI, Andrea PEDROTTI, Walter SORDO, Alessio GIOVANELLI, Brian Lee FLOYD, Hirotaka UCHIYAMA, Thomas Bernard WALKER, III, Anthony Xavier Jean-Yves CLERC
-
Patent number: 11873732
Abstract: A vane of a turbine engine blade includes a first portion of structural resistance formed by two end portions including the leading and trailing edges and end strips of the lower surface and the upper surface, and of a core joining them. Two other portions of the blade are constructed of light material, composite for example, between the end portions to reconstitute the complete vane. The core has an oblique or diagonal extension between the end portions.
Type: Grant
Filed: April 1, 2021
Date of Patent: January 16, 2024
Assignee: SAFRAN AIRCRAFT ENGINES
Inventors: Rémi Philippe Guy Onfray, Dorian Alexandre Alban Bantwell, Alix Thomas Bernard Lejeune
-
Publication number: 20240012615
Abstract: In an approach, a processor receives a plurality of first operand values, where the first operand values are integer values. A processor adds, using binary addition, the plurality of first operand values resulting in a sum value S. A processor determines a single combined modular correction term D for a binary sum of all operand values based on leading bits of the sum value S. A processor performs a modular addition of S and D resulting in a modular sum of said plurality of said first operand values.
Type: Application
Filed: July 7, 2022
Publication date: January 11, 2024
Inventors: Silvia Melitta Mueller, Ulrich Mayer, Dominik Steenken, Yvo Thomas Bernard Mulder, Manoj Kumar
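A minimal sketch of the scheme this abstract describes, with simplifications assumed: all operands are summed once with plain binary addition, and a single correction term D (a multiple of the modulus, which hardware would derive from the leading bits of S) is applied in one final modular addition. Computing the multiple by division here stands in for the leading-bits logic.

```python
def modular_sum(operands, modulus):
    # Plain binary addition of all first operand values -> sum value S.
    s = sum(operands)
    # Single combined modular correction term D: subtracts the right
    # multiple of the modulus (hardware reads this off S's leading bits;
    # here it is computed directly for clarity).
    d = -(s // modulus) * modulus
    # Final modular addition of S and D.
    return s + d

# Usage: three operands reduced modulo 251.
print(modular_sum([200, 180, 77], 251))
```

The point of the single correction is that intermediate sums never need individual reductions, which matters for very wide operands.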
-
Patent number: 11860789
Abstract: A cache purge simulation system includes a device under test with a cache skip switch. A first cache skip switch includes a configurable state register to indicate whether all of an associated cache is purged upon receipt of a cache purge instruction from a verification system or whether a physical partition that is smaller than the associated cache is purged upon receipt of the cache purge instruction from the verification system. A second cache skip switch includes a configurable start address register comprising a start address that indicates a beginning storage location of a physical partition of an associated cache and a configurable stop address register comprising a stop address that indicates an ending storage location of the physical partition of the associated cache.
Type: Grant
Filed: March 21, 2022
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation
Inventors: Yvo Thomas Bernard Mulder, Ralf Ludewig, Huiyuan Xing, Ulrich Mayer
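A toy software model of the configurable registers this abstract describes, under assumed names: a state register selecting full versus partial purge, and start/stop address registers bounding the physical partition that a purge instruction touches.

```python
class CacheSkipSwitch:
    """Hypothetical model of the cache skip switch registers."""

    def __init__(self, cache_size):
        self.cache = [1] * cache_size   # 1 = valid line, 0 = purged
        self.purge_full = True          # configurable state register
        self.start = 0                  # configurable start address register
        self.stop = cache_size - 1      # configurable stop address register

    def purge(self):
        """Apply a cache purge instruction from the verification system."""
        if self.purge_full:
            lo, hi = 0, len(self.cache) - 1
        else:
            lo, hi = self.start, self.stop  # only the physical partition
        for addr in range(lo, hi + 1):
            self.cache[addr] = 0

# Usage: purge only lines 2..4 of an 8-line cache.
sw = CacheSkipSwitch(8)
sw.purge_full = False
sw.start, sw.stop = 2, 4
sw.purge()
print(sw.cache)
```

In simulation, skipping most of the cache this way shortens purge verification runs while still exercising the purge datapath.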
-
Publication number: 20230418558
Abstract: Generation of test data for verifying a modular correction of a modular multiplication performed by a multiplier unit for very wide operands includes performing, by a multiplier unit using a computer, a modular multiplication by correcting a binary multiplication of two operands by a coarse-grained and a fine-grained correction. The computer selects adjacent intervals of the intermediate result, defines a sub-interval closely around a boundary between the adjacent intervals, and selects a value in the sub-interval. Moreover, the computer uses a first factorization algorithm for the value V for determining operands A′, B′, where the modular multiplication result of the operands corrected by the coarse-grained correction is in the sub-interval. The computer repeatedly determines A′ plus varying Δ-values as A′ values, and determines B′ values, so that the modular multiplication corrected by the coarse-grained correction is in the sub-interval.
Type: Application
Filed: June 24, 2022
Publication date: December 28, 2023
Inventors: Yvo Thomas Bernard Mulder, Michael Johannes Jaspers, Silvia Melitta Mueller, Ulrich Mayer
-
Patent number: 11809190
Abstract: Performance anomalies in an autonomous vehicle can be difficult to identify, and the impact of such anomalies on systems within the autonomous vehicle may be difficult to understand. In examples, systems of the autonomous vehicle are modeled as nodes in a probabilistic graphical network. Probabilities of the data generated at each of the nodes are determined. The probabilities are used to determine capabilities associated with higher-level functions of the autonomous vehicle.
Type: Grant
Filed: April 30, 2021
Date of Patent: November 7, 2023
Assignee: Zoox, Inc.
Inventors: Andreas Christian Reschka, Thomas Bernard Gacka, Collin MacGregor
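A toy sketch of the idea, with the graph structure and independence assumption invented for illustration: each vehicle system is a node with a probability that its output is nominal, and a higher-level capability is scored from the probabilities of the nodes it depends on.

```python
import math

def capability_scores(node_probs, capabilities):
    """Score each capability as the product of the probabilities of the
    nodes it depends on (independence assumed for this sketch)."""
    return {cap: math.prod(node_probs[n] for n in nodes)
            for cap, nodes in capabilities.items()}

# Usage: hypothetical nodes and capabilities.
node_probs = {"lidar": 0.99, "camera": 0.90, "planner": 0.95}
capabilities = {
    "perception": ["lidar", "camera"],
    "driving": ["lidar", "camera", "planner"],
}
print(capability_scores(node_probs, capabilities))
```

A real probabilistic graphical network would model dependencies between nodes rather than multiplying marginals, but the mapping from per-node probabilities to capability scores is the same shape.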
-
Patent number: 11786503
Abstract: This disclosure provides methods and pharmaceutical compositions for reducing or eliminating cardiotoxicity, particularly cardiotoxicity induced by a cancer treatment or other therapy. In some cases, the methods and compositions prevent or reduce cardiotoxicity caused by anthracycline treatment. The methods provided herein often comprise administering a protective agent such as myricetin, tricetin, robinetin, fisetin, vitexin, quercetin, dihydrorobinetin, kaempferol, 7,3′,4′,5′-tetrahydroxyflavone, and myricitrin in conjunction with the administration of a cancer drug or other treatment. They may comprise administering a protective agent in combination with dexrazoxane. The compositions provided herein include co-formulations of a protective agent with a different protective agent or with a cancer treatment (e.g., anthracycline drug).
Type: Grant
Filed: October 7, 2021
Date of Patent: October 17, 2023
Assignees: Auransa Inc., SCT II LLC
Inventors: Christopher G. Armstrong, Kevin J. Kim, Lisa Maria Lucia Pham, Eunhye Park, Zhong Zhong, Guanyi Huang, Joseph C. Wu, Sidney Paul Elmer, Viwat Visuthikraisee, Eithon Michael G. Cadag, Thomas Bernard Freeman, Pek Yee Lum
-
Publication number: 20230297509
Abstract: A cache purge simulation system includes a device under test with a cache skip switch. A first cache skip switch includes a configurable state register to indicate whether all of an associated cache is purged upon receipt of a cache purge instruction from a verification system or whether a physical partition that is smaller than the associated cache is purged upon receipt of the cache purge instruction from the verification system. A second cache skip switch includes a configurable start address register comprising a start address that indicates a beginning storage location of a physical partition of an associated cache and a configurable stop address register comprising a stop address that indicates an ending storage location of the physical partition of the associated cache.
Type: Application
Filed: March 21, 2022
Publication date: September 21, 2023
Inventors: Yvo Thomas Bernard Mulder, Ralf Ludewig, Huiyuan Xing, Ulrich Mayer
-
Publication number: 20230284607
Abstract: An insert for an arthropod trapping device. The insert comprising a substrate and a frame for supporting the substrate, where a surface of the substrate has an adhesive disposed thereon, an optional mounting bracket spaced apart from the adhesive surface of the insert and located at a first end of the insert, and an optional graspable tab extending from the frame at a second end of the insert.
Type: Application
Filed: March 14, 2023
Publication date: September 14, 2023
Inventors: Christopher Lawrence SMITH, Benjamin Patrick HALE, Adam James BURT, Erik John HASENOEHRL, Danilo ROSSI, Andrea PEDROTTI, Walter SORDO, Alessio GIOVANELLI, Brian Lee FLOYD, Hirotaka UCHIYAMA, Thomas Bernard WALKER, III, Anthony Xavier Jean-Yves CLERC
-
Publication number: 20230175402
Abstract: A vane of a turbine engine blade includes a first portion of structural resistance formed by two end portions including the leading and trailing edges and end strips of the lower surface and the upper surface, and of a core joining them. Two other portions of the blade are constructed of light material, composite for example, between the end portions to reconstitute the complete vane. The core has an oblique or diagonal extension between the end portions.
Type: Application
Filed: April 1, 2021
Publication date: June 8, 2023
Applicant: SAFRAN AIRCRAFT ENGINES
Inventors: Rémi Philippe Guy ONFRAY, Dorian Alexandre Alban BANTWELL, Alix Thomas Bernard LEJEUNE
-
Publication number: 20230116629
Abstract: A DNN accelerator includes multiple compute tiles for sharing a workload of running a convolution. A halo pipeline in a compute tile can facilitate replications of halo data from the compute tile where the halo data is generated into another compute tile. The halo pipeline may receive a memory transaction for writing a data block. The halo pipeline may determine that the data block falls into a halo region in an input tensor of the convolution. The halo pipeline may generate a remote address for storing the data block in a memory of the other compute tile, e.g., based on a local address of the data block in a memory of the compute tile. The halo pipeline may adjust the remote address, e.g., based on a difference in dimensions of a tensor to be used by the compute tile and a tensor to be used by the other compute tile.
Type: Application
Filed: October 13, 2022
Publication date: April 13, 2023
Applicant: Intel Corporation
Inventors: Martin-Thomas Grymel, David Thomas Bernard, Niall Hanrahan
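A minimal sketch of the address translation step, with the layout assumed to be simple row-major 2-D tensors: the remote address is derived from the local address, then adjusted for the difference in tensor dimensions (here, row strides) between the two tiles.

```python
def remote_halo_address(local_addr, local_row_stride,
                        remote_row_stride, remote_base):
    """Map a local address of a halo data block to a remote address in the
    other tile's memory, adjusting for differing tensor row strides.
    All parameter names are assumptions for this sketch."""
    # Recover the (row, column) position implied by the local address.
    row, col = divmod(local_addr, local_row_stride)
    # Re-linearize with the other tile's stride, offset by its base.
    return remote_base + row * remote_row_stride + col

# Usage: local tensor rows are 16 elements wide, remote rows are 20.
print(remote_halo_address(37, 16, 20, 0x1000))
```

The adjustment matters because neighboring tiles generally hold tensors of different spatial sizes, so the same (row, column) lands at different linear offsets.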
-
Publication number: 20230072082
Abstract: A system includes a first memory, a compiler, and a DNN accelerator. The DNN accelerator includes a DMA engine, an acceleration module, and a compute block. The compute block includes a second memory. The compiler may generate a task for transferring activations from the second memory to the first memory. The DMA engine may receive the task and read the activations from the second memory. The acceleration module may compress the activations to generate compressed activation data and write the compressed activation data into the external memory. The acceleration module may also store a size of the compressed activation data in the local memory, which may be used by the DMA engine to read the activation from the first memory to the second memory later. The compressed activation data may include non-zero activations and sparsity bitmaps. The compressed activation data may also include a header or zeropoint marker.
Type: Application
Filed: October 28, 2022
Publication date: March 9, 2023
Inventors: Sudheendra Kadri, Andrea Deidda, Hassan Kamal, Martin-Thomas Grymel, Alfonso Tarazona Martinez, David Thomas Bernard
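A sketch of the compressed layout the abstract names, with the exact format assumed: non-zero activations plus a sparsity bitmap, preceded by a header that records the compressed size so a DMA engine could later read the data back and reconstruct the dense tensor.

```python
def compress_activations(acts):
    """Compress a dense activation list into (header, bitmap, non-zeros).
    The header field name is an assumption for this sketch."""
    bitmap = [1 if a != 0 else 0 for a in acts]   # sparsity bitmap
    nonzero = [a for a in acts if a != 0]         # packed non-zero values
    header = {"size": len(nonzero)}               # size for the DMA read-back
    return header, bitmap, nonzero

def decompress_activations(bitmap, nonzero):
    """Rebuild the dense activations from bitmap + packed non-zeros."""
    it = iter(nonzero)
    return [next(it) if b else 0 for b in bitmap]

# Usage: a sparse activation vector round-trips through the format.
acts = [0, 3, 0, 0, 7, 1]
header, bitmap, nz = compress_activations(acts)
print(header["size"], bitmap, nz)
```

With typical DNN sparsity, storing only the non-zeros plus one bit per element cuts the memory traffic the DMA engine has to move.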
-
Publication number: 20230059976
Abstract: A DNN accelerator may include a PE array performing MAC operations. The PE array may include PEs capable of MAC operations on quantized values. A PE may include subtractors for subtracting zeropoints from quantized activations and quantized weights to generate intermediate activations and intermediate weights. The intermediate activations and intermediate weights may be stored in data storage units in the PE and may be used by a MAC unit in the PE. The subtractors may be placed outside the MAC unit but inside the PE. The MAC unit may perform sequential cycles of MAC operations. The MAC unit may include a plurality of multipliers. The intermediate activations and intermediate weights stored in the data storage units may be reused by different multipliers in different cycles of MAC operations. An output of the MAC unit or of the PE may be multiplied with a quantization scale to produce a floating-point value.
Type: Application
Filed: October 18, 2022
Publication date: February 23, 2023
Applicant: Intel Corporation
Inventors: Deepak Abraham Mathaikutty, Arnab Raha, Raymond Jit-Hung Sung, Martin Power, Umer Iftikhar Cheema, David Thomas Bernard
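A minimal sketch of the datapath this abstract describes, with parameter names assumed: zero-points are subtracted first (as the abstract's subtractors do, outside the MAC unit), the intermediate values feed the multiply-accumulate, and the accumulator is finally multiplied by a quantization scale to produce a floating-point output.

```python
def quantized_mac(q_acts, q_weights, zp_act, zp_weight, scale):
    """MAC on quantized values with up-front zero-point subtraction.
    All names here are assumptions for this sketch."""
    acc = 0
    for qa, qw in zip(q_acts, q_weights):
        ia = qa - zp_act        # intermediate activation
        iw = qw - zp_weight     # intermediate weight
        acc += ia * iw          # MAC on zero-point-corrected values
    return acc * scale          # dequantize the accumulated output

# Usage: two-element dot product in uint8-style quantization.
print(quantized_mac([130, 120], [7, 9], 128, 8, 0.5))
```

Subtracting the zero-points once, before the values enter the MAC unit, is what lets the stored intermediates be reused by different multipliers across cycles without repeating the correction.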
-
Publication number: 20230016455
Abstract: A deconvolution can be decomposed into multiple convolutions. Results of the convolutions constitute an output of the deconvolution. Zeros may be added to an input tensor of the deconvolution to generate an upsampled input tensor. Subtensors having the same size as the kernel of the deconvolution may be identified from the upsampled input tensor. A subtensor may include one or more input activations and one or more zeros. Subtensors having same distribution patterns of input activations may be used to generate a reduced kernel. The reduced kernel includes a subset of the kernel. The position of a weight in the reduced kernel may be the same as the positions of an input activation in the subtensor. Multiple reduced kernels may be generated based on multiple subtensors having different distribution patterns of activations. Each of the convolutions may use the input tensor and a different one of the reduced kernels.
Type: Application
Filed: September 26, 2022
Publication date: January 19, 2023
Inventors: Alessandro Palla, David Thomas Bernard, Niall Hanrahan
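A 1-D sketch of the zero-insertion baseline the abstract starts from (the decomposition into reduced kernels then avoids multiplying the inserted zeros): zeros are stuffed between input activations, and an ordinary convolution over the upsampled input yields the deconvolution output.

```python
def upsample_with_zeros(x, stride):
    """Insert stride-1 zeros between input activations (1-D case)."""
    out = []
    for i, v in enumerate(x):
        out.append(v)
        if i < len(x) - 1:
            out.extend([0] * (stride - 1))
    return out

def conv1d_valid(x, kernel):
    """Plain valid-mode 1-D convolution (no flipping, as in DNN usage)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

# Usage: stride-2 deconvolution of [1, 2, 3] with kernel [1, 2, 3],
# computed the naive way via zero-insertion.
up = upsample_with_zeros([1, 2, 3], 2)
print(up, conv1d_valid(up, [1, 2, 3]))
```

In this example the sliding windows alternate between two zero patterns, so only the kernel subsets {w0, w2} and {w1} ever multiply real activations; those subsets are exactly the "reduced kernels" the abstract builds its multiple convolutions from.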
-
Publication number: 20230017662
Abstract: A DNN accelerator includes a DMA engine that can rearrange weight data layout. The DMA engine may read a weight tensor from a memory (e.g., DRAM). The weight tensor includes weights arranged in a 3D matrix. The DMA engine may partition the weight tensor into a plurality of virtual banks based on a structure of a PE array, e.g., based on the number of activated PE columns in the PE array. Then the DMA engine may partition a virtual bank into a plurality of virtual sub-banks. The DMA engine may also identify data blocks from different ones of the plurality of virtual sub-banks. A data block may include a plurality of input channels and may have a predetermined spatial size and storage size. The DMA engine forms a linear data structure by interleaving the data blocks. The DMA engine can write the linear data structure into another memory (e.g., SRAM).
Type: Application
Filed: September 16, 2022
Publication date: January 19, 2023
Inventors: Sudheendra Kadri, Darren Crews, Deepak Abraham Mathaikutty, Andrea Deidda, Arnab Raha, Kevin Brady, David Thomas Bernard
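A minimal sketch of the interleaving step, with the partitioning already done and represented simply as lists of data blocks per virtual sub-bank (an assumption; the real layout is a 3-D weight tensor): blocks are taken round-robin across sub-banks to form the linear data structure.

```python
def interleave_blocks(virtual_subbanks):
    """Form a linear data structure by interleaving data blocks taken
    round-robin from the virtual sub-banks (equal lengths assumed)."""
    linear = []
    for blocks in zip(*virtual_subbanks):  # one block from each sub-bank
        linear.extend(blocks)
    return linear

# Usage: two sub-banks, two data blocks each.
print(interleave_blocks([["a0", "a1"], ["b0", "b1"]]))
```

Interleaving puts the blocks in the order the PE columns will consume them, so the SRAM can be read sequentially during compute.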
-
Publication number: 20230014656
Abstract: A memory array of a compute tile may store activations or weights of a DNN. The memory array may include databanks for storing contexts, context MUXs, and byte MUXs. A databank may store a context with flip-flop arrays, each of which includes a sequence of flip-flops. A logic gate and an ICG unit may gate flip-flops and control whether states of the flip-flops can be changed. The data gating can prevent a context not selected for the databank from inadvertently toggling and wasting power. A context MUX may read a context from different flip-flop arrays in a databank based on gray-coded addresses. A byte MUX can combine bits from different bytes in a context read by the context MUX. The memory array may be implemented with bit packing to reduce distance between the context MUX and byte MUX to reduce lengths of wires connecting the context MUXs and byte MUXs.
Type: Application
Filed: September 23, 2022
Publication date: January 19, 2023
Inventors: Raymond Jit-Hung Sung, Deepak Abraham Mathaikutty, Amit Agarwal, David Thomas Bernard, Steven Hsu, Martin Power, Conor Byrne, Arnab Raha
-
Publication number: 20230020929
Abstract: A compute tile includes a WCB that receives a workload of writing an output tensor of a convolution into a local memory of the compute tile. The local memory may be a SRAM. The WCB receives write transactions. A write transaction includes a data block, which is a part of the output tensor, and metadata describing one or more attributes of the data block. The WCB may store write transactions in its internal buffers. The WCB may determine whether to combine two write transactions, e.g., based on an operation mode or metadata in the write transactions. In embodiments where the WCB determines to combine the two write transactions, the WCB may combine the two write transactions into a new write transaction and write the new write transaction into the local memory or an internal memory of the WCB. The total number of write transactions for the workload can be reduced.
Type: Application
Filed: September 16, 2022
Publication date: January 19, 2023
Inventors: Martin-Thomas Grymel, David Thomas Bernard, Martin Power, Niall Hanrahan, Kevin Brady
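A minimal sketch of write combining, with the transaction format assumed to be (address, data_bytes) and "combinable" simplified to "address-adjacent": adjacent transactions are merged into one, reducing the total number of writes that reach the local memory.

```python
def combine_writes(transactions):
    """Merge address-adjacent write transactions into single transactions.
    Each transaction is (address, data_bytes); real WCB metadata checks
    (operation mode, block attributes) are abstracted away in this sketch."""
    combined = []
    for addr, data in sorted(transactions):
        if combined and combined[-1][0] + len(combined[-1][1]) == addr:
            # This write starts exactly where the buffered one ends: combine.
            last_addr, last_data = combined[-1]
            combined[-1] = (last_addr, last_data + data)
        else:
            combined.append((addr, data))
    return combined

# Usage: three writes become two (the first pair is contiguous).
writes = [(0, b"\x01\x02"), (2, b"\x03\x04"), (8, b"\x05")]
print(combine_writes(writes))
```

Fewer, wider writes use the SRAM port more efficiently, which is the stated goal of reducing the transaction count for the workload.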
-
Publication number: 20230018857
Abstract: Sparsity processing within a compute block can be done on unpacked data. The compute block includes a sparsity decoder that generates a combined sparsity vector from an activation sparsity vector and a weight sparsity vector. The activation sparsity vector indicates positions of non-zero valued activations in an activation context. The weight sparsity vector indicates positions of non-zero valued weights in a weight context. The combined sparsity vector comprises one or more zero valued bits and one or more non-zero valued bits. The sparsity decoder may determine the position of a non-zero valued bit in the combined sparsity vector and determine an address for the non-zero valued activation and the non-zero valued weight based on the position of the non-zero valued bit. The non-zero valued activation and the non-zero valued weight may be provided to a PE for performing MAC operations.
Type: Application
Filed: September 19, 2022
Publication date: January 19, 2023
Inventors: Martin Power, Conor Byrne, Niall Hanrahan, Deepak Abraham Mathaikutty, Arnab Raha, Raymond Jit-Hung Sung, David Thomas Bernard, Kevin Brady, Martin-Thomas Grymel
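A minimal sketch of the decoding the abstract describes, with names and packing assumed: the combined sparsity vector is the bitwise AND of the activation and weight sparsity vectors, and the address of each matching non-zero value is found by counting the earlier set bits (a popcount) in that value's own bitmap.

```python
def mac_pairs(act_bitmap, weight_bitmap, acts_dense, weights_dense):
    """Return the (activation, weight) pairs a PE would actually multiply.
    Bitmaps are lists of 0/1; dense inputs are shown for clarity, though
    hardware stores only the packed non-zeros."""
    acts_nz = [a for a, b in zip(acts_dense, act_bitmap) if b]
    wts_nz = [w for w, b in zip(weights_dense, weight_bitmap) if b]
    combined = [a & w for a, w in zip(act_bitmap, weight_bitmap)]
    pairs = []
    for pos, bit in enumerate(combined):
        if bit:
            a_addr = sum(act_bitmap[:pos])     # popcount of earlier set bits
            w_addr = sum(weight_bitmap[:pos])
            pairs.append((acts_nz[a_addr], wts_nz[w_addr]))
    return pairs

# Usage: only positions where BOTH operands are non-zero produce work.
print(mac_pairs([1, 0, 1, 1], [1, 1, 0, 1], [5, 0, 7, 2], [3, 4, 0, 6]))
```

Positions where either operand is zero contribute nothing to the MAC result, so skipping them via the combined vector saves cycles without changing the dot product.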
-
Publication number: 20230008622
Abstract: A DNN accelerator may perform 1×N kernel decomposition to decompose a convolutional kernel into kernel vectors, each of which includes multiple weights. Through the kernel decomposition, a weight operand may be generated from a filter. The DNN accelerator converts an input tensor into input operands. An input operand includes activations and has the same size as the weight operand. The DNN accelerator may read a first activation in the input operand from memory to an internal memory of a first PE and read a second activation in the input operand from the memory to an internal memory of a second PE. The first PE may receive the second activation from the second PE through activation broadcasting between the two PEs and perform MAC operations on the input operand and weight operand. The second PE may perform MAC operations on another input operand in the input tensor and the weight operand.
Type: Application
Filed: September 22, 2022
Publication date: January 12, 2023
Inventors: Richard Boyd, David Thomas Bernard, Deepak Abraham Mathaikutty, Martin Power, Niall Hanrahan
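A minimal sketch of 1×N kernel decomposition, with the PE-level broadcasting abstracted away: the K×K kernel is split into K row vectors (the weight operands), and each output accumulates one 1-D MAC per row over a matching row of activations (the input operands).

```python
def conv2d_via_1xN(image, kernel):
    """2-D valid convolution computed as K accumulated 1xN MACs.
    `image` and `kernel` are lists of lists; no flipping (DNN convention)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for r, row_kernel in enumerate(kernel):    # one weight operand per row
        for i in range(oh):
            for j in range(ow):
                # Input operand: kw consecutive activations from row i + r.
                out[i][j] += sum(image[i + r][j + c] * row_kernel[c]
                                 for c in range(kw))
    return out

# Usage: 3x3 all-ones kernel over a 3x3 all-ones image.
print(conv2d_via_1xN([[1] * 3] * 3, [[1] * 3] * 3))
```

Decomposing into 1×N operands is what makes the activation sharing in the abstract possible: adjacent input operands overlap in all but one activation, so a PE can broadcast the activation its neighbor is missing instead of both fetching it from memory.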