Patents by Inventor Eric Bainville

Eric Bainville has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11914737
    Abstract: Embodiments described herein provide a compressed container format that enables the container to be decrypted and decompressed in a streaming manner. One embodiment provides a container format for encrypted archives in which data is compressed and encrypted in a segmented manner. A segment of the archive can be decompressed, decrypted, and checked for integrity before the entire archive is received. Metadata for the encrypted archive is also encrypted to secure details of data stored within the archive.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Frederic Jacobs, Eric Bainville, Yannick L. Sierra
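
The abstract of 11914737 only names the properties of the segmented format, so the following C sketch is a hypothetical illustration of why segmentation enables streaming: each segment carries its own length and integrity value, so a receiver can verify and unpack one segment before later segments arrive. Real encryption, compression, and MAC primitives are deliberately replaced with trivial stand-ins, and every function and field name is an assumption rather than the actual container layout.

```c
/* Toy model of a segmented container that can be consumed in a streaming
 * fashion.  Each segment is framed as [4-byte length][payload][4-byte sum].
 * Real encryption/compression are intentionally omitted; the point is only
 * that every segment can be checked and handed to the consumer before the
 * rest of the archive has arrived.  All names here are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t toy_checksum(const uint8_t *p, size_t n) {
    uint32_t s = 0;                       /* stand-in for a real MAC/hash */
    for (size_t i = 0; i < n; i++) s = s * 31u + p[i];
    return s;
}

/* Append one framed segment to the output buffer; returns bytes written. */
static size_t emit_segment(uint8_t *out, const uint8_t *payload, uint32_t len) {
    uint32_t sum = toy_checksum(payload, len);
    memcpy(out, &len, 4);
    memcpy(out + 4, payload, len);
    memcpy(out + 4 + len, &sum, 4);
    return 4 + len + 4;
}

/* Consume as many complete segments as are currently available. */
static size_t drain_segments(const uint8_t *buf, size_t avail) {
    size_t off = 0;
    while (avail - off >= 8) {
        uint32_t len, sum;
        memcpy(&len, buf + off, 4);
        if (avail - off < 8 + len) break;          /* segment not complete yet */
        memcpy(&sum, buf + off + 4 + len, 4);
        const uint8_t *payload = buf + off + 4;
        if (toy_checksum(payload, len) != sum) { puts("integrity failure"); break; }
        printf("segment ok: %.*s\n", (int)len, (const char *)payload);
        off += 8 + len;                            /* advance past this segment */
    }
    return off;                                    /* bytes fully consumed */
}

int main(void) {
    uint8_t stream[256];
    size_t n = 0;
    n += emit_segment(stream + n, (const uint8_t *)"first", 5);
    n += emit_segment(stream + n, (const uint8_t *)"second", 6);
    /* Deliver the stream in two chunks to mimic partial network arrival. */
    size_t consumed = drain_segments(stream, 10);      /* only part of seg 1 */
    consumed += drain_segments(stream + consumed, n - consumed);
    printf("consumed %zu of %zu bytes\n", consumed, n);
    return 0;
}
```
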
  • Patent number: 11914983
    Abstract: Aspects and features include using a virtual disk image to improve computational performance when applying a software patch. Compressed extents within a stored disk image are detected. The compressed extents are virtually reordered to form compressed forks within a virtual disk image and the compressed forks are selected for decompression based on code to be patched. A decompressed fork with the patch is virtually written to the same or another virtual disk image as an updated fork, and the virtual disk image is used to write to storage, either to overwrite the same stored disk image or to produce an updated, compressed disk image. In some examples, the virtual disk image is validated prior to writing to the compressed image by comparing an output hash from the compressed disk image with a known hash to validate the virtual disk image.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Christian T. Martelock, Ali Sazegari, Eric Bainville
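
As a rough model of the "virtually reordered" extents in 11914983 (a sketch only: the structures, field names, and toy layout are assumptions, and compression, patching, and hash validation are omitted), the C fragment below treats a fork as nothing more than an ordered list of extent indices into the stored image, so selecting a fork for decompression touches only the extents the patch actually needs.

```c
/* Toy illustration of building "virtual" forks over compressed extents
 * without moving any data: a fork is only an ordered list of extent
 * indices into the stored image.  Compression, patching and hashing are
 * omitted; all structure and field names are illustrative assumptions. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    size_t offset;     /* where the compressed extent lives in the image  */
    size_t length;     /* compressed length                               */
    int    fork_id;    /* which logical fork this extent belongs to       */
    int    seq;        /* position of the extent within its fork          */
} extent_t;

#define MAX_EXTENTS 16

/* Collect, in order, the indices of the extents that make up one fork.
 * The data itself is never copied or reordered on disk.                  */
static size_t build_fork(const extent_t *ext, size_t n, int fork_id,
                         size_t *out_idx) {
    size_t count = 0;
    for (int seq = 0; ; seq++) {           /* walk the fork in sequence order */
        size_t found = n;
        for (size_t i = 0; i < n; i++)
            if (ext[i].fork_id == fork_id && ext[i].seq == seq) found = i;
        if (found == n) break;
        out_idx[count++] = found;
    }
    return count;
}

int main(void) {
    /* Extents as they happen to be laid out in the stored disk image.    */
    extent_t ext[] = {
        { 0,     4096, 1, 1 },   /* second piece of fork 1 */
        { 4096,  2048, 2, 0 },   /* first piece of fork 2  */
        { 6144,  4096, 1, 0 },   /* first piece of fork 1  */
        { 10240,  512, 2, 1 },   /* second piece of fork 2 */
    };
    size_t idx[MAX_EXTENTS];
    int fork_to_patch = 1;       /* fork containing the code to be patched */
    size_t m = build_fork(ext, 4, fork_to_patch, idx);
    printf("fork %d is extents:", fork_to_patch);
    for (size_t i = 0; i < m; i++)
        printf(" #%zu (offset %zu)", idx[i], ext[idx[i]].offset);
    printf("\n");   /* only these extents would be decompressed and patched */
    return 0;
}
```
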
  • Publication number: 20230393830
    Abstract: Aspects and features include using a virtual disk image to improve computational performance when applying a software patch. Compressed extents within a stored disk image are detected. The compressed extents are virtually reordered to form compressed forks within a virtual disk image and the compressed forks are selected for decompression based on code to be patched. A decompressed fork with the patch is virtually written to the same or another virtual disk image as an updated fork, and the virtual disk image is used to write to storage, either to overwrite the same stored disk image or to produce an updated, compressed disk image. In some examples, the virtual disk image is validated prior to writing to the compressed image by comparing an output hash from the compressed disk image with a known hash to validate the virtual disk image.
    Type: Application
    Filed: August 29, 2022
    Publication date: December 7, 2023
    Applicant: Apple Inc.
    Inventors: Christian T. Martelock, Ali Sazegari, Eric Bainville
  • Patent number: 11822921
    Abstract: In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
    Type: Grant
    Filed: November 9, 2022
    Date of Patent: November 21, 2023
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Ali Sazegari
  • Publication number: 20230344445
    Abstract: A method for encoding text includes grouping text as a sequence of bytes, the text comprising a string of characters, each byte corresponding to a character in the text. For each byte of the sequence of bytes: (a) each bit is processed from most significant bit to least significant bit to generate a context; and (b) a subsequent bit is predicted, using a prediction model, based on the context generated based on previously processed bits, prediction of the prediction model being a combination of predictions of a plurality of sub-models. An encoded bitstream is output based on the predicted bits. The encoded bitstream includes encoded data corresponding to the text.
    Type: Application
    Filed: December 7, 2022
    Publication date: October 26, 2023
    Inventors: Christian T. Martelock, Ali Sazegari, Eric Bainville
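
The abstract of 20230344445 specifies bit-by-bit prediction with mixed sub-models but not the models themselves, so the sketch below is a toy stand-in: bytes are scanned MSB-first, the context is the partial byte seen so far (optionally paired with the previous byte), and two counting sub-models are averaged into the probability an entropy coder would consume. The entropy-coding stage itself is omitted, and all model choices are assumptions. Build with -lm.

```c
/* Toy, order-1 model of the bit-wise prediction described above: every
 * byte is scanned from the most significant bit to the least significant
 * bit, a context is formed from the bits already seen, and the final
 * probability is a blend of two counting sub-models.  A real encoder would
 * feed these probabilities to an arithmetic coder; that stage is omitted.
 * All model choices here are assumptions for illustration only.          */
#include <stdio.h>
#include <string.h>
#include <math.h>

static unsigned m0[256][2];        /* sub-model 0: partial-byte context     */
static unsigned m1[65536][2];      /* sub-model 1: previous byte + partial  */

static double predict(unsigned c0, unsigned c1) {
    double p0 = (m0[c0][1] + 1.0) / (m0[c0][0] + m0[c0][1] + 2.0);
    double p1 = (m1[c1][1] + 1.0) / (m1[c1][0] + m1[c1][1] + 2.0);
    return 0.5 * (p0 + p1);        /* simple average of sub-model predictions */
}

int main(void) {
    const char *text = "the quick brown fox jumps over the lazy dog";
    size_t n = strlen(text);
    unsigned prev = 0;             /* previous byte, for the order-1 sub-model */
    double logloss = 0.0;
    long bits = 0;

    for (size_t i = 0; i < n; i++) {
        unsigned byte = (unsigned char)text[i];
        unsigned partial = 1;      /* leading 1 marks how many bits are known */
        for (int b = 7; b >= 0; b--) {
            unsigned bit = (byte >> b) & 1u;
            unsigned c0 = partial & 0xFFu;
            unsigned c1 = ((prev << 8) | c0) & 0xFFFFu;
            double p = predict(c0, c1);          /* P(bit == 1)               */
            double q = bit ? p : 1.0 - p;        /* prob. given to actual bit */
            logloss += -log2(q);
            m0[c0][bit]++;                       /* update both sub-models    */
            m1[c1][bit]++;
            partial = (partial << 1) | bit;
            bits++;
        }
        prev = byte;
    }
    printf("model cost: %.2f bits for %ld input bits\n", logloss, bits);
    return 0;
}
```
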
  • Publication number: 20230121984
    Abstract: In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
    Type: Application
    Filed: November 9, 2022
    Publication date: April 20, 2023
    Inventors: Eric Bainville, Ali Sazegari
  • Publication number: 20230081845
    Abstract: The subject technology groups received data into data blocks having a predetermined number of bytes. For each received data block, a compressed data block is written to an output buffer. The compressed data block includes a mask block having the same number of bits as the predetermined number, and a subsequent block. The mask block includes, in the same order as the bytes within the corresponding data block, a zero corresponding to each zero byte within the data block and a one corresponding to each non-zero byte within the data block. The subsequent block includes the non-zero bytes of the corresponding data block in the same order as they appear within the data block.
    Type: Application
    Filed: January 26, 2022
    Publication date: March 16, 2023
    Inventors: Christian Martelock, Eric Bainville, Ali Sazegari
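
The mask scheme of 20230081845 is concrete enough to model directly. Assuming 8-byte blocks and least-significant-bit-first mask ordering (both assumptions, not necessarily the patented encoding), each output block is one mask byte followed by the block's non-zero bytes in order:

```c
/* Minimal sketch of the mask-based scheme described above, assuming
 * 8-byte data blocks: each output block is one mask byte (bit i set iff
 * byte i of the input block is non-zero) followed by the non-zero bytes
 * in their original order.  Block size and bit order are assumptions.   */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK 8

/* Encode n bytes (n a multiple of BLOCK for simplicity); returns output size. */
static size_t encode(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; i += BLOCK) {
        uint8_t mask = 0;
        size_t mask_pos = o++;            /* reserve a slot for the mask byte */
        for (size_t j = 0; j < BLOCK; j++) {
            if (in[i + j] != 0) {
                mask |= (uint8_t)(1u << j);
                out[o++] = in[i + j];     /* keep non-zero bytes, in order    */
            }
        }
        out[mask_pos] = mask;
    }
    return o;
}

/* Decode is the mirror image: the mask says where the stored bytes go.  */
static size_t decode(const uint8_t *in, size_t in_len, uint8_t *out) {
    size_t o = 0, i = 0;
    while (i < in_len) {
        uint8_t mask = in[i++];
        for (size_t j = 0; j < BLOCK; j++)
            out[o++] = (mask & (1u << j)) ? in[i++] : 0;
    }
    return o;
}

int main(void) {
    uint8_t data[16] = { 0, 0, 7, 0, 0, 0, 9, 0,   1, 2, 0, 0, 0, 0, 0, 3 };
    uint8_t packed[32], restored[16];
    size_t packed_len = encode(data, sizeof data, packed);
    size_t restored_len = decode(packed, packed_len, restored);
    printf("16 bytes -> %zu bytes packed, round-trip ok: %d\n",
           packed_len, restored_len == 16 &&
                       !memcmp(data, restored, sizeof data));
    return 0;
}
```
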
  • Patent number: 11537399
    Abstract: In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: December 27, 2022
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Ali Sazegari
  • Publication number: 20220092208
    Abstract: Embodiments described herein provide a compressed container format that enables the container to be decrypted and decompressed in a streaming manner. One embodiment provides a container format for encrypted archives in which data is compressed and encrypted in a segmented manner. A segment of the archive can be decompressed, decrypted, and checked for integrity before the entire archive is received. Metadata for the encrypted archive is also encrypted to secure details of data stored within the archive.
    Type: Application
    Filed: April 27, 2021
    Publication date: March 24, 2022
    Applicant: Apple Inc.
    Inventors: Frederic Jacobs, Eric Bainville, Yannick L. Sierra
  • Publication number: 20210342154
    Abstract: In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
    Type: Application
    Filed: July 12, 2021
    Publication date: November 4, 2021
    Inventors: Eric Bainville, Ali Sazegari
  • Patent number: 11086625
    Abstract: In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: August 10, 2021
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Ali Sazegari
  • Patent number: 11042373
    Abstract: In an embodiment, a computation engine is configured to perform vector multiplications, producing either vector results or outer product (matrix) results. The instructions provided to the computation engine specify a matrix mode or a vector mode for the instructions. The computation engine performs the specified operation. The computation engine may perform numerous computations in parallel, in an embodiment. In an embodiment, the instructions may also specify an offset within the input memories, providing additional flexibility in the location of operands. More particularly, the computation engine may be configured to perform numerous multiplication operations in parallel and to accumulate results in a result memory, performing multiply-accumulate operations for each matrix/vector element in the targeted locations of the output memory.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: June 22, 2021
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Jeffry E. Gonion, Ali Sazegari, Gerard R. Williams, III
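
To make the two modes in 11042373 concrete, here is a scalar reference model; the sizes, the offset parameter, and the elementwise/outer-product formulas are illustrative assumptions rather than the engine's actual instruction semantics. Vector mode multiply-accumulates element by element into a result vector, while matrix mode accumulates the outer product of the two inputs into a result matrix.

```c
/* Scalar reference for the two modes described above.  Sizes and the
 * starting offset are illustrative assumptions, not the real ISA.       */
#include <stdio.h>

#define N 4

/* Vector mode: z[i] += x[off + i] * y[i]                                 */
static void vec_mac(float *z, const float *x, const float *y, int off) {
    for (int i = 0; i < N; i++)
        z[i] += x[off + i] * y[i];
}

/* Matrix mode: Z[i][j] += x[off + i] * y[j]  (outer-product accumulate)  */
static void outer_mac(float z[N][N], const float *x, const float *y, int off) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            z[i][j] += x[off + i] * y[j];
}

int main(void) {
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[N] = {10, 20, 30, 40};
    float zv[N] = {0};
    float zm[N][N] = {{0}};
    vec_mac(zv, x, y, 2);        /* offset 2 into the input memory */
    outer_mac(zm, x, y, 2);
    printf("vector result: %g %g %g %g\n", zv[0], zv[1], zv[2], zv[3]);
    printf("matrix row 0 : %g %g %g %g\n",
           zm[0][0], zm[0][1], zm[0][2], zm[0][3]);
    return 0;
}
```
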
  • Publication number: 20210132942
    Abstract: A novel software updating method is provided. A target file is divided into segments, where some segments are updated by patching, while other segments are updated by archiving. The segmentation of the update allows very large files such as DYLD shared caches to be patched in-place, i.e., by using free space available within the file to perform patching rather than requiring enough free space on disk to store both the new version and the old version of the file. The segmentation of the update also allows each segment to be updated individually by the most optimal update method (copy, patch, or archive) so that the size of the update file can be minimized.
    Type: Application
    Filed: November 16, 2020
    Publication date: May 6, 2021
    Inventors: Eric Bainville, Ali Sazegari
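
A toy planner for the per-segment decision in 20210132942 (segment size, the patch cost model, and the thresholds are all assumptions, and real binary diffing and archiving are omitted): each segment of the old file is compared with the corresponding segment of the new file, and the cheapest of copy, patch, or archive is chosen for that segment.

```c
/* Toy per-segment update planner for the scheme sketched above: each
 * segment is updated by whichever of COPY / PATCH / ARCHIVE is cheapest.
 * Segment size, the "patch" cost model, and thresholds are assumptions. */
#include <stdio.h>
#include <stddef.h>

#define SEG 8

typedef enum { COPY, PATCH, ARCHIVE } method_t;

static method_t plan_segment(const unsigned char *old_seg,
                             const unsigned char *new_seg) {
    size_t diff = 0;
    for (size_t i = 0; i < SEG; i++)
        if (old_seg[i] != new_seg[i]) diff++;
    if (diff == 0) return COPY;                 /* nothing to ship          */
    /* toy cost: a patch stores (offset, byte) pairs, an archive stores SEG */
    return (2 * diff < SEG) ? PATCH : ARCHIVE;
}

int main(void) {
    unsigned char old_file[24] = "AAAAAAAABBBBBBBBCCCCCCC";
    unsigned char new_file[24] = "AAAAAAAABBBBBBXBDDDDDDD";
    static const char *name[] = { "copy", "patch", "archive" };
    for (size_t s = 0; s < sizeof old_file; s += SEG) {
        method_t m = plan_segment(old_file + s, new_file + s);
        printf("segment %zu: %s\n", s / SEG, name[m]);
    }
    return 0;
}
```
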
  • Publication number: 20210124574
    Abstract: The embodiments set forth a technique that generates a multi-version patch file at a server computing device. The technique includes modifying a first file to produce a plurality of versions associated with the first file, in which the plurality of versions includes: (i) a latest version associated with the first file, and (ii) at least two previous versions relative to the latest version. The technique also includes identifying a difference between the latest version and each of the two previous versions to produce first and second delta versions of the first file. Furthermore, the technique includes generating the multi-version patch file for installation by a client computing device, in which the multi-version patch file (i) includes the first and second delta versions, and (ii) causes a second file stored on the client computing device to be updated to the latest version using at least one of the first and second delta versions.
    Type: Application
    Filed: January 4, 2021
    Publication date: April 29, 2021
    Inventors: Eric Bainville, Ali Sazegari
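
The sketch below is a hypothetical model of the multi-version patch container in 20210124574; the structures, the lookup key, and the use of plain replacement text in place of a real binary delta are assumptions. The server bundles one delta per supported base version, and the client selects the delta whose base version matches what it has installed.

```c
/* Hypothetical model of a multi-version patch file: one delta per
 * supported base version, keyed by that version's identifier.  The
 * "delta" here is just replacement text; a real system would carry a
 * binary diff.  All names and layouts are illustrative assumptions.     */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *base_version;   /* version the delta applies on top of    */
    const char *delta;          /* stand-in for the binary delta payload  */
} delta_entry_t;

typedef struct {
    const char *latest_version;
    delta_entry_t entries[2];   /* deltas from the two previous versions  */
} multi_patch_t;

/* Client side: pick the delta matching the locally installed version.   */
static const delta_entry_t *select_delta(const multi_patch_t *p,
                                         const char *installed) {
    for (size_t i = 0; i < 2; i++)
        if (strcmp(p->entries[i].base_version, installed) == 0)
            return &p->entries[i];
    return NULL;                /* no applicable delta; full download needed */
}

int main(void) {
    multi_patch_t patch = {
        "3.0",
        { { "2.0", "delta 2.0 -> 3.0" },
          { "1.0", "delta 1.0 -> 3.0" } }
    };
    const delta_entry_t *d = select_delta(&patch, "1.0");
    if (d)
        printf("installed 1.0, applying \"%s\" to reach %s\n",
               d->delta, patch.latest_version);
    return 0;
}
```
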
  • Patent number: 10990401
    Abstract: In an embodiment, a computation engine may perform dot product computations on input vectors. The dot product operation may have a first operand and a second operand, and the dot product may be performed on a subset of the vector elements in the first operand and each of the vector elements in the second operand. The subset of vector elements may be separated in the first operand by a stride that skips one or more elements between each element to which the dot product operation is applied. More particularly, in an embodiment, the input operands of the dot product operation may be a first vector having second vectors as elements, and the stride may select a specified element of each second vector.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: April 27, 2021
    Assignee: Apple Inc.
    Inventors: Tal Uliel, Eric Bainville, Jeffry E. Gonion, Ali Sazegari
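
A scalar reference for the strided dot product of 10990401, with sizes, the offset parameter, and the view of the first operand as a vector of short vectors chosen purely for illustration: only every stride-th element of the first operand participates, paired with consecutive elements of the second operand, which amounts to selecting one component from each sub-vector.

```c
/* Scalar reference for a strided dot product: only every `stride`-th
 * element of x (starting at `offset`) participates, paired with the
 * consecutive elements of y.  Sizes and parameters are illustrative.     */
#include <stdio.h>

static float strided_dot(const float *x, const float *y,
                         int n, int offset, int stride) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += x[offset + i * stride] * y[i];
    return acc;
}

int main(void) {
    /* x viewed as four 4-element sub-vectors laid out contiguously.      */
    float x[16] = { 1, 2, 3, 4,   5, 6, 7, 8,
                    9,10,11,12,  13,14,15,16 };
    float y[4]  = { 1, 1, 1, 1 };
    /* stride 4, offset 2: selects element 2 of each sub-vector (3,7,11,15). */
    printf("dot = %g\n", strided_dot(x, y, 4, 2, 4));
    return 0;
}
```
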
  • Patent number: 10970078
    Abstract: In an embodiment, a computation engine may perform computations on input vectors having vector elements of a first precision and data type. The computation engine may convert the vector elements from the first precision to a second precision and may also interleave the vector elements as specified by an instruction issued by the processor to the computation engine. The interleave may be based on a ratio of a result precision and the second precision. An extract instruction may be supported to extract results from the computations and convert and deinterleave the vector elements to provide a compact result in a desired order.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: April 6, 2021
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Tal Uliel, Jeffry E. Gonion, Ali Sazegari, Erik K. Norden
  • Publication number: 20210072994
    Abstract: In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 11, 2021
    Inventors: Eric Bainville, Ali Sazegari
  • Patent number: 10877754
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: December 29, 2020
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
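
A plain scalar model of the multiply-accumulate behaviour in 10877754 (the 2x2 size and float element type are assumptions; the engine operates on many elements in parallel): the result memory is accumulated into rather than overwritten, so repeated operations sum a sequence of matrix products.

```c
/* Scalar model of matrix multiply-accumulate: C += A * B, so the result
 * memory accumulates across repeated operations.  The 2x2 size and float
 * type are illustrative assumptions; the engine works on many elements
 * in parallel.                                                           */
#include <stdio.h>

#define N 2

static void matmul_acc(float c[N][N], const float a[N][N], const float b[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                c[i][j] += a[i][k] * b[k][j];
}

int main(void) {
    float a[N][N] = { {1, 2}, {3, 4} };
    float b[N][N] = { {5, 6}, {7, 8} };
    float c[N][N] = { {0, 0}, {0, 0} };
    matmul_acc(c, a, b);          /* c  = A*B               */
    matmul_acc(c, a, b);          /* c += A*B a second time */
    printf("%g %g\n%g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);
    return 0;
}
```
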
  • Patent number: 10860310
    Abstract: A novel software updating method is provided. A target file is divided into segments, where some segments are updated by patching, while other segments are updated by archiving. The segmentation of the update allows very large files such as DYLD shared caches to be patched in-place, i.e., by using free space available within the file to perform patching rather than requiring enough free space on disk to store both the new version and the old version of the file. The segmentation of the update also allows each segment to be updated individually by the most optimal update method (copy, patch, or archive) so that the size of the update file can be minimized.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: December 8, 2020
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Ali Sazegari
  • Patent number: 10831488
    Abstract: In an embodiment, a computation engine may offload work from a processor (e.g. a CPU) and efficiently perform computations such as those used in LSTM and other workloads at high performance. In an embodiment, the computation engine may perform computations on input vectors from input memories in the computation engine, and may accumulate results in an output memory within the computation engine. The input memories may be loaded with initial vector data from memory, incurring the memory latency that may be associated with reading the operands. Compute instructions may be performed on the operands, generating results in an output memory. One or more extract instructions may be supported to move data from the output memory to the input memory, permitting additional computation on the data in the output memory without moving the results to main memory.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: November 10, 2020
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Jeffry E. Gonion, Ali Sazegari, Gerard R. Williams, III, Andrew J. Beaumont-Smith
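
Finally, a toy model of the load/compute/extract flow in 10831488, with memory sizes, the elementwise compute, and the extract semantics chosen as assumptions: operands are loaded from main memory once, compute accumulates into the engine's output memory, and extract copies results back into an input memory so a second compute can run without another trip to main memory.

```c
/* Toy model of the load / compute / extract flow described above: the
 * input memories are filled from "main memory" once, compute accumulates
 * into the output memory, and extract moves output data back into an
 * input memory so further computation avoids another main-memory trip.
 * Sizes and the elementwise compute are illustrative assumptions.        */
#include <stdio.h>
#include <string.h>

#define LEN 4

typedef struct {
    float in_x[LEN];    /* input memory X            */
    float in_y[LEN];    /* input memory Y            */
    float out[LEN];     /* output (result) memory    */
} engine_t;

static void load(engine_t *e, const float *x, const float *y) {
    memcpy(e->in_x, x, sizeof e->in_x);     /* the only main-memory reads */
    memcpy(e->in_y, y, sizeof e->in_y);
}

static void compute(engine_t *e) {          /* out[i] += x[i] * y[i]      */
    for (int i = 0; i < LEN; i++)
        e->out[i] += e->in_x[i] * e->in_y[i];
}

static void extract_to_x(engine_t *e) {     /* output memory -> input memory */
    memcpy(e->in_x, e->out, sizeof e->in_x);
}

int main(void) {
    float x[LEN] = {1, 2, 3, 4}, y[LEN] = {2, 2, 2, 2};
    engine_t e = {0};
    load(&e, x, y);
    compute(&e);                 /* out = {2, 4, 6, 8}                        */
    extract_to_x(&e);            /* feed results back without touching RAM    */
    compute(&e);                 /* out = {2,4,6,8} + {2,4,6,8} * {2,2,2,2}   */
    printf("%g %g %g %g\n", e.out[0], e.out[1], e.out[2], e.out[3]);
    return 0;
}
```
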