Patents by Inventor Daren Croxford

Daren Croxford has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220129534
    Abstract: Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more processing devices to facilitate and/or support one or more operations and/or techniques for authenticating an identity of a subject. In particular, some embodiments are directed to techniques for authentication of an identity of a subject as being an identity of a particular unique individual based, at least in part, on involuntary responses by the subject to sensory stimuli.
    Type: Application
    Filed: July 26, 2019
    Publication date: April 28, 2022
    Inventors: Daren Croxford, Roberto Lopez Mendez, Mbou Eyole, Matthew James Horsnell
  • Publication number: 20220129321
    Abstract: An information processing apparatus is described for processing a workload. The information processing apparatus comprises a processor and a memory element connected to the processor via a data link. In advance of processing a workload, the information processing apparatus estimates an access time required to transfer an amount of the workload that is to be transferred from the external memory element to the processor, and estimates a processing time for the processor to process the workload. A processing rate characteristic of the processor and/or a data transfer rate between the memory and the processor is set in dependence upon the estimated processing time and estimated access time. Methods for varying a quality of service (QoS) value of requests to the external memory element are also described.
    Type: Application
    Filed: October 28, 2020
    Publication date: April 28, 2022
    Inventors: Daren CROXFORD, Sharjeel SAEED, Jayavarapu Srinivasa RAO, Aaron DEBATTISTA
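
A minimal Python sketch of the rate-setting idea in publication 20220129321: estimate how long the transfer and the compute will take, then pick a memory QoS and processor rate so that neither side waits on the other. The choose_rates helper, the byte/operation counts, and the "memory-bound means high QoS" policy are illustrative assumptions, not the claimed implementation.

```python
def estimate_access_time(bytes_to_transfer: float, transfer_rate_bps: float) -> float:
    """Time to move the workload from external memory to the processor."""
    return bytes_to_transfer / transfer_rate_bps


def estimate_processing_time(operations: float, processor_rate_ops: float) -> float:
    """Time for the processor to execute the workload's operations."""
    return operations / processor_rate_ops


def choose_rates(bytes_to_transfer, operations, transfer_rate_bps, processor_rate_ops):
    """Pick a processing rate and a memory-request QoS from the two estimates."""
    access_t = estimate_access_time(bytes_to_transfer, transfer_rate_bps)
    proc_t = estimate_processing_time(operations, processor_rate_ops)

    if access_t > proc_t:
        # Memory-bound: raise the QoS of requests to external memory and lower
        # the processor rate so it finishes just as the data arrives.
        return {"qos": "high", "processor_rate": processor_rate_ops * proc_t / access_t}
    # Compute-bound: a normal memory QoS suffices; keep the processor rate.
    return {"qos": "normal", "processor_rate": processor_rate_ops}


print(choose_rates(bytes_to_transfer=64e6, operations=1e9,
                   transfer_rate_bps=8e9, processor_rate_ops=2e12))
```
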
  • Publication number: 20220126837
    Abstract: A vehicle-assist system comprising one or more sensors to monitor an environment of a vehicle and an eye-tracking system, including an eye-tracking sensor, to determine a gaze characteristic of a driver of the vehicle. The vehicle-assist system is to detect a hazard, and determine a hazard location of the hazard, in the environment of the vehicle. Based on the hazard location and the gaze characteristic of the driver, the vehicle-assist system is to output an indication of the hazard to the driver.
    Type: Application
    Filed: October 27, 2020
    Publication date: April 28, 2022
    Inventor: Daren CROXFORD
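
A rough sketch of the gaze-aware alerting in publication 20220126837, assuming unit-vector gaze directions and a hypothetical attention_cone_rad threshold; the actual decision logic is not specified by the abstract.

```python
import math

def angle_between(gaze_dir, to_hazard):
    """Angle (radians) between the driver's gaze vector and the direction to the hazard."""
    dot = sum(g * h for g, h in zip(gaze_dir, to_hazard))
    norm = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(h * h for h in to_hazard))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def indicate_hazard(gaze_dir, hazard_offset, attention_cone_rad=0.35):
    """Only alert the driver when the hazard lies outside their current gaze cone."""
    if angle_between(gaze_dir, hazard_offset) > attention_cone_rad:
        return "warn: hazard outside driver's field of attention"
    return "no alert: driver is already looking towards the hazard"

# Hazard to the right of the vehicle while the driver looks straight ahead.
print(indicate_hazard(gaze_dir=(0.0, 0.0, 1.0), hazard_offset=(1.0, 0.0, 0.2)))
```
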
  • Patent number: 11315303
    Abstract: When a programmable execution unit of a graphics processor is executing a graphics processing program to render a frame that represents a view of a scene using a ray tracing process, and the ray tracing process requires the determination of geometry that will be intersected by a ray, the programmable execution unit sends a message to a ray tracing acceleration data structure traversal circuit of the graphics processor, for the ray tracing acceleration data structure traversal circuit to perform a traversal of a ray tracing acceleration data structure for the scene to determine geometry for the scene that may be intersected by the ray. The ray tracing acceleration data structure traversal circuit then returns to the programmable execution unit an indication of geometry that may be intersected by the ray, and the programmable execution unit uses the indicated geometry to determine any geometry that is intersected by the ray.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: April 26, 2022
    Assignees: Arm Limited, Apical Limited
    Inventors: Sharjeel Saeed, Daren Croxford, Mathieu Jean Joseph Robart
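
The division of work in patent 11315303 can be sketched in Python: traverse stands in for the acceleration-structure traversal circuit (returning geometry that may be hit), and trace stands in for the programmable execution unit (exact intersection tests on the returned candidates). Spheres and axis-aligned boxes are simplifications of this sketch; the patent does not prescribe these primitives.

```python
import math
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sphere:                      # simplified stand-in for scene geometry
    centre: tuple
    radius: float

    def hit_distance(self, origin, direction) -> Optional[float]:
        # Exact ray/sphere test (direction assumed normalised), done by the
        # programmable execution unit after traversal returns candidates.
        oc = tuple(o - c for o, c in zip(origin, self.centre))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

@dataclass
class Node:                        # one level of the acceleration structure
    bounds_min: tuple
    bounds_max: tuple
    children: List["Node"] = field(default_factory=list)
    geometry: List[Sphere] = field(default_factory=list)

    def ray_hits_bounds(self, origin, direction) -> bool:
        t_near, t_far = -math.inf, math.inf
        for o, d, lo, hi in zip(origin, direction, self.bounds_min, self.bounds_max):
            if abs(d) < 1e-12:
                if not (lo <= o <= hi):
                    return False
                continue
            t0, t1 = sorted(((lo - o) / d, (hi - o) / d))
            t_near, t_far = max(t_near, t0), min(t_far, t1)
        return t_near <= t_far and t_far >= 0.0

def traverse(root: Node, origin, direction) -> List[Sphere]:
    """Stand-in for the traversal circuit: return geometry that *may* be hit."""
    candidates, stack = [], [root]
    while stack:
        node = stack.pop()
        if not node.ray_hits_bounds(origin, direction):
            continue
        candidates.extend(node.geometry)
        stack.extend(node.children)
    return candidates

def trace(root: Node, origin, direction):
    """Stand-in for the execution unit: request candidates, then test exactly."""
    hits = [(s.hit_distance(origin, direction), s) for s in traverse(root, origin, direction)]
    hits = [(t, s) for t, s in hits if t is not None]
    return min(hits, default=None)

scene = Node((-10, -10, 0), (10, 10, 20),
             geometry=[Sphere((0.0, 0.0, 5.0), 1.0), Sphere((4.0, 0.0, 9.0), 1.0)])
print(trace(scene, origin=(0.0, 0.0, -1.0), direction=(0.0, 0.0, 1.0)))
```
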
  • Patent number: 11308682
    Abstract: A method comprising the steps of generating a first representation and a second representation, where the first representation represents a first view of a computer-generated scene obtained from a first virtual camera and the second representation represents a second view of the computer-generated scene obtained from a second virtual camera. Each of the first and second representations comprises a plurality of rays which intersect with objects of the scene. A relationship is determined between a ray of the first representation and a ray of the second representation, and the rays are grouped based on the relationship to form a group of substantially similar rays. One or more of the groups of substantially similar rays are processed substantially simultaneously to produce a first and a second rendered view of the computer-generated scene. The first and the second rendered views are output to one or more display devices.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: April 19, 2022
    Assignees: Apical Limited, Arm Limited
    Inventors: Daren Croxford, Mathieu Jean Joseph Robart
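
A toy sketch of the ray-grouping idea in patent 11308682: quantise ray directions so that substantially similar rays from the two views fall into the same group, then trace each group as one coherent batch. The bucket size and the dictionary-based grouping are assumptions for illustration only.

```python
def ray_key(direction, bucket=0.05):
    """Quantise a (unit) ray direction so nearly parallel rays share a key."""
    return tuple(round(c / bucket) for c in direction)

def group_similar_rays(left_rays, right_rays, bucket=0.05):
    """Group rays from the first (left-eye) and second (right-eye) views whose
    directions are substantially similar, so each group can be traced together."""
    groups = {}
    for view, rays in (("left", left_rays), ("right", right_rays)):
        for ray in rays:
            groups.setdefault(ray_key(ray["dir"], bucket), []).append((view, ray))
    return groups

def render_stereo(left_rays, right_rays, trace_group):
    """Trace each group as one batch and scatter results to both rendered views."""
    left_img, right_img = {}, {}
    for group in group_similar_rays(left_rays, right_rays).values():
        results = trace_group([ray for _, ray in group])      # coherent batch
        for (view, ray), colour in zip(group, results):
            (left_img if view == "left" else right_img)[ray["pixel"]] = colour
    return left_img, right_img

rays_l = [{"pixel": (0, 0), "dir": (0.0, 0.0, 1.0)}]
rays_r = [{"pixel": (0, 0), "dir": (0.01, 0.0, 1.0)}]     # nearly parallel to the left ray
print(render_stereo(rays_l, rays_r, lambda batch: ["sky"] * len(batch)))
```
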
  • Patent number: 11301728
    Abstract: A method of processing image data representative of at least part of an image using a computing system to detect at least one class of object in the image. The method comprises processing the image data using a neural network system selected from a plurality of neural network systems including a first neural network system arranged to detect a class of objects, and a second neural network system arranged to detect the class of objects. The first neural network system comprises a first plurality of layers and the second neural network system comprises a second plurality of layers. The second neural network system has at least one of: more layers than the first neural network system; more neurons than the first neural network system; and more interconnections between neurons than the first neural network system. The method comprises obtaining a trigger and, on the basis of the trigger, processing the image data using a selected one of the first and second neural network systems.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: April 12, 2022
    Assignee: Apical Ltd.
    Inventor: Daren Croxford
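
A minimal sketch of the network-selection scheme in patent 11301728, using a low-confidence result from the smaller network as the trigger for re-running the larger one. The confidence threshold and the lambda stand-ins for the two neural network systems are hypothetical.

```python
class ObjectDetector:
    """Run a small detector by default; switch to a larger one on a trigger."""

    def __init__(self, small_net, large_net, confidence_threshold=0.6):
        self.small_net = small_net          # fewer layers / neurons / connections
        self.large_net = large_net          # deeper, more accurate, more costly
        self.confidence_threshold = confidence_threshold

    def detect(self, image):
        detections = self.small_net(image)
        # Trigger: the small network is unsure, so re-run with the large one.
        if not detections or min(d["score"] for d in detections) < self.confidence_threshold:
            detections = self.large_net(image)
        return detections

# Toy stand-ins for the two neural network systems.
small = lambda img: [{"label": "person", "score": 0.4}]
large = lambda img: [{"label": "person", "score": 0.9}]
print(ObjectDetector(small, large).detect(image=None))
```
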
  • Publication number: 20220099836
    Abstract: A method and apparatus for measuring depth using a time-of-flight (ToF) depth sensor is described. The apparatus includes an emitter configured to emit a signal towards a scene comprising one or more regions with light or sound, this emitter being controllable to adjust at least one of an intensity and a modulation frequency of the signal output from the emitter. The apparatus also includes a signal sensor, configured to detect an intensity of the signal from the emitter that has been reflected by the scene. A controller is configured to receive context information about the scene for depth capture by the time-of-flight depth sensor and to adjust at least one of the intensity and modulation frequency of the signal output by the emitter in dependence on the context information.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Daren CROXFORD, Roberto LOPEZ MENDEZ, Viacheslav CHESNOKOV, Maxim NOVIKOV, Mina Ivanova DIMOVA
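
A sketch of the context-driven emitter control in publication 20220099836. The unambiguous-range relation c / (2 * f_mod) for a continuous-wave ToF sensor is standard, but the frequency cap, intensity curve, and context fields below are illustrative assumptions.

```python
def configure_tof_emitter(context):
    """Pick emitter intensity and modulation frequency from scene context.
    A higher modulation frequency gives finer depth resolution but a shorter
    unambiguous range; more distant or brighter scenes get a stronger signal."""
    SPEED_OF_LIGHT = 3.0e8  # m/s

    # Unambiguous range of a continuous-wave ToF sensor is c / (2 * f_mod).
    f_mod = SPEED_OF_LIGHT / (2.0 * context["max_expected_range_m"])
    f_mod = min(f_mod, 100e6)                      # cap at the emitter's limit

    intensity = 0.2 + 0.8 * min(context["max_expected_range_m"] / 10.0, 1.0)
    if context.get("bright_ambient_light"):
        intensity = min(1.0, intensity + 0.2)      # fight ambient interference
    return {"modulation_frequency_hz": f_mod, "intensity": intensity}

# Indoor close-range scene vs. outdoor long-range scene.
print(configure_tof_emitter({"max_expected_range_m": 2.0}))
print(configure_tof_emitter({"max_expected_range_m": 15.0, "bright_ambient_light": True}))
```
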
  • Patent number: 11270492
    Abstract: A method of operating a graphics processing system that generates “spacewarped” frames for display is disclosed. Primitive motion vectors are used to determine the motion of objects appearing in rendered application frames. The so-determined motion is then used to generate “spacewarped” versions of the rendered application frames.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 8, 2022
    Assignees: Arm Limited, Apical Limited
    Inventors: Daren Croxford, Roberto Lopez Mendez
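
The "spacewarp" extrapolation in patent 11270492 can be sketched as shifting each primitive along the motion vector measured from the rendered application frames, scaled by how far into the next frame interval the warped frame falls. The data layout and the 0.5 extrapolation factor are assumptions.

```python
def spacewarp(primitives, extrapolation_factor=0.5):
    """Extrapolate ('spacewarp') a rendered frame: move each primitive along the
    motion vector determined from the last two application frames."""
    warped = []
    for prim in primitives:
        dx, dy = prim["motion_vector"]          # screen-space motion per frame
        x, y = prim["position"]
        warped.append({
            "id": prim["id"],
            "position": (x + dx * extrapolation_factor,
                         y + dy * extrapolation_factor),
        })
    return warped

# A triangle that moved 8 pixels right between the last two rendered frames is
# shifted a further 4 pixels for the intermediate ("spacewarped") frame.
print(spacewarp([{"id": 0, "position": (100.0, 50.0), "motion_vector": (8.0, 0.0)}]))
```
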
  • Publication number: 20220066207
    Abstract: A head-mounted unit for assisting a user, such as a hearing-impaired user, is provided. The head-mounted unit comprises tracking sensors for monitoring a user wearing the head-mounted unit in order to determine a gaze direction in which the user is looking. A sensor detects a sound source located in the identified gaze direction. Sound from the sound source may be recognised using speech recognition on captured audio from the sound source or computer vision on images of the sound source. A user interface provides information to the user to assist the user in recognising sound from the sound source.
    Type: Application
    Filed: August 23, 2021
    Publication date: March 3, 2022
    Inventors: Daren CROXFORD, Laura Johanna LÄHTEENMÄKI, Sean Tristram LeGuay ELLIS
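
A rough control-flow sketch of the assistance described in publication 20220066207: pick the sound source best aligned with the gaze direction, then caption it via speech recognition or, failing that, computer vision. The alignment scoring and the stand-in recognition callbacks are hypothetical.

```python
def assist_hearing(gaze_direction, sound_sources, speech_to_text, describe_visually):
    """Find the sound source the wearer is looking at and turn its sound into
    information the wearer can read."""
    def alignment(source):
        return sum(g * d for g, d in zip(gaze_direction, source["direction"]))

    if not sound_sources:
        return "no sound source detected"
    target = max(sound_sources, key=alignment)           # source in the gaze direction
    if target.get("audio") is not None:
        return speech_to_text(target["audio"])           # speech-recognition path
    return describe_visually(target["image"])            # computer-vision path

sources = [{"direction": (0.0, 1.0), "audio": b"...", "image": None},
           {"direction": (1.0, 0.0), "audio": None, "image": "barking dog"}]
print(assist_hearing((1.0, 0.0), sources,
                     speech_to_text=lambda a: "caption: 'hello there'",
                     describe_visually=lambda i: f"caption: {i}"))
```
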
  • Publication number: 20220067203
    Abstract: Provided is a technology including an apparatus in the form of a privacy-aware model-based machine learning engine comprising a dispatcher responsive to receipt of a data request from an open model-based machine learning engine to initiate data capture; a data capture component responsive to the dispatcher to capture data comprising sensitive and non-sensitive data to a first dataset; a sensitive data detector operable to scan the first dataset to detect the sensitive data; a sensitive data obscuration component responsive to the sensitive data detector to create an obscured representation of the sensitive data to be stored with the non-sensitive data in a second dataset; and a delivery component operable to deliver the second dataset to the open model-based machine learning engine.
    Type: Application
    Filed: August 23, 2021
    Publication date: March 3, 2022
    Inventors: Remy POTTIER, Yves Thomas LAPLANCHE, Daren CROXFORD
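
A small sketch of the capture-detect-obscure-deliver pipeline in publication 20220067203. Treating email addresses as the sensitive data and hashing them into tokens are assumptions of this sketch; the publication covers sensitive data and obscuration methods generally.

```python
import hashlib
import re

SENSITIVE_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")   # e.g. email addresses

def detect_sensitive(record: str):
    """Sensitive-data detector: scan a captured record for sensitive fields."""
    return SENSITIVE_PATTERN.findall(record)

def obscure(value: str) -> str:
    """Obscuration component: replace the value with an irreversible token so the
    open machine-learning engine never sees the raw sensitive data."""
    return "token:" + hashlib.sha256(value.encode()).hexdigest()[:12]

def prepare_dataset(captured_records):
    """Dispatcher/delivery path: capture -> detect -> obscure -> second dataset."""
    second_dataset = []
    for record in captured_records:
        for value in detect_sensitive(record):
            record = record.replace(value, obscure(value))
        second_dataset.append(record)
    return second_dataset

print(prepare_dataset(["user jane.doe@example.com clicked buy", "anonymous visit"]))
```
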
  • Publication number: 20220068243
    Abstract: When a graphics processor is processing data for an application on a host processor, the graphics processor generates, in advance of their being required for display by the application, a plurality of frame sequences corresponding to a plurality of different possible "future states" for the application. The graphics processing system, when producing a frame in a sequence of frames corresponding to a given future state for the application, determines one or more region(s) of the frame that are to be produced at a first, higher quality, and produces the determined region(s) of the frame at that first, higher quality, whereas other regions of the frame are produced at a second, lower quality.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 3, 2022
    Applicant: Arm Limited
    Inventors: Daren Croxford, Guy Larri
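
One possible reading of publication 20220068243, sketched in Python: render a frame per possible future state ahead of time, spending high quality only on the regions tied to that state and reusing a low-quality rendering elsewhere. The state names and the region split are purely illustrative.

```python
def render_speculative_frames(future_states, render_region, state_dependent_regions):
    """Produce one frame per possible future state in advance. Regions whose
    content depends on which state occurs are rendered at high quality; the
    remaining regions are rendered once at low quality and shared."""
    shared_background = render_region(region="background", quality="low", state=None)
    frames = {}
    for state in future_states:
        regions = {"background": shared_background}
        for region in state_dependent_regions[state]:
            regions[region] = render_region(region=region, quality="high", state=state)
        frames[state] = regions
    return frames

render = lambda region, quality, state: f"{region}@{quality}" + (f" for {state}" if state else "")
print(render_speculative_frames(
    future_states=["turn_left", "turn_right"],
    render_region=render,
    state_dependent_regions={"turn_left": ["left_corridor"], "turn_right": ["right_corridor"]}))
```
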
  • Publication number: 20220066739
    Abstract: A system includes a fixed-point accumulator for storing numbers in an anchored fixed-point number format, a data interface arranged to receive a plurality of weight values and a plurality of data values represented in a floating-point number format, and logic circuitry. The logic circuitry is configured to: determine an anchor value indicative of a value of a lowest significant bit of the anchored fixed-point number format; convert at least a portion of the plurality of data values to the anchored fixed-point number format; perform MAC operations between the converted at least portion and respective weight values, using fixed-point arithmetic, to generate an accumulation value in the anchored fixed-point number format; and determine an output element of a layer of a neural network in dependence on the accumulation value.
    Type: Application
    Filed: August 27, 2020
    Publication date: March 3, 2022
    Inventors: Daren CROXFORD, Guy LARRI
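
The anchored fixed-point accumulation in publication 20220066739 can be illustrated numerically: with an anchor giving the weight of the least-significant bit, floating-point values become integers, the MAC runs in integer arithmetic, and the result is rescaled at the end. The anchor value of -16 and the conversion of both data and weights are assumptions of this sketch.

```python
def to_anchored_fixed_point(value: float, anchor: int) -> int:
    """Convert a float to an integer whose least-significant bit has weight 2**anchor."""
    return round(value / (2 ** anchor))

def anchored_mac(data_values, weight_values, anchor=-16):
    """Accumulate sum(data * weight) with integer arithmetic in an anchored
    fixed-point format, then convert back to floating point at the end.
    A product of two anchored values carries twice the anchor weight."""
    accumulator = 0                                       # fixed-point accumulator
    for d, w in zip(data_values, weight_values):
        accumulator += to_anchored_fixed_point(d, anchor) * to_anchored_fixed_point(w, anchor)
    return accumulator * (2 ** (2 * anchor))              # back to floating point

data = [0.25, -1.5, 3.0]
weights = [0.5, 0.125, -2.0]
# Fixed-point result matches the floating-point reference (-6.0625).
print(anchored_mac(data, weights), sum(d * w for d, w in zip(data, weights)))
```
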
  • Publication number: 20220060481
    Abstract: A computer-implemented method for an augmented-reality system is provided. The computer-implemented method comprises obtaining sensed data, representing an environment in which the AR system is located, determining that the AR system is in a location associated with a first authority characteristic, and controlling access to the sensed data for one or more applications operating in the AR system. Each of the one or more applications is associated with a respective authority characteristic. Controlling access to the sensed data for a said application is performed in dependence on the first authority characteristic and a respective authority characteristic associated with the said application. An AR system comprising one or more sensors, storage for storing sensed data, one or more application modules, and one or more processors arranged to perform the computer-implemented method is provided.
    Type: Application
    Filed: August 24, 2020
    Publication date: February 24, 2022
    Inventors: Roberto LOPEZ MENDEZ, Daren CROXFORD, Ioan-Cristian SZABO, Mina Ivanova DIMOVA
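
A minimal sketch of the location-dependent access control in publication 20220060481, assuming a simple ordered ranking of authority characteristics; the publication does not restrict how the two characteristics are compared.

```python
AUTHORITY_RANK = {"public": 0, "private": 1, "restricted": 2}

def may_access(location_authority: str, application_authority: str) -> bool:
    """An application sees the sensed data only if its authority characteristic
    is at least as high as the one associated with the AR system's location."""
    return AUTHORITY_RANK[application_authority] >= AUTHORITY_RANK[location_authority]

def dispatch_sensed_data(sensed_data, location_authority, applications):
    """Control access per application for the location the AR system is in."""
    return {name: (sensed_data if may_access(location_authority, authority) else None)
            for name, authority in applications.items()}

apps = {"navigation": "public", "enterprise_overlay": "restricted"}
print(dispatch_sensed_data("camera frame", location_authority="restricted", applications=apps))
```
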
  • Patent number: 11257468
    Abstract: A user-mountable extended reality (XR) device capable of receiving and storing at least one of a plurality of user vision capability profiles. The user-mountable XR device comprises a data processing system configured to process input data representative of an input image to perform a modification of the input image based on performing a selection of a given profile of the at least one of the plurality of user vision capability profiles, thereby generating output data representative of an output image for display by the user-mountable XR device. Also described is a method of controlling such a device.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: February 22, 2022
    Assignee: Arm Limited
    Inventors: Daren Croxford, Roberto Lopez Mendez
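
A toy sketch of the profile-driven modification in patent 11257468: look up the selected user vision capability profile and remap the input image accordingly. The contrast/brightness fields and their values are illustrative assumptions, not the claimed processing.

```python
def modify_for_user(input_image, profiles, selected_profile_id):
    """Select one of the stored user vision capability profiles and modify the
    input image accordingly before it is displayed."""
    profile = profiles[selected_profile_id]
    gain = profile.get("contrast_gain", 1.0)
    shift = profile.get("brightness_shift", 0)
    return [[max(0, min(255, int(pixel * gain + shift))) for pixel in row]
            for row in input_image]

profiles = {"low_contrast_sensitivity": {"contrast_gain": 1.6, "brightness_shift": -20}}
image = [[10, 120, 240]]                      # one row of 8-bit pixel values
print(modify_for_user(image, profiles, "low_contrast_sensitivity"))
```
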
  • Publication number: 20220038270
    Abstract: A data processing system including storage. The data processing system also includes at least one processor to generate output data using at least a portion of a first neural network layer and generate a key associated with at least the portion of the first neural network layer. The at least one processor is further operable to obtain the key from the storage and obtain a version of the output data for input into a second neural network layer. Using the key, the at least one processor is further operable to determine whether the version of the output data differs from the output data.
    Type: Application
    Filed: July 28, 2020
    Publication date: February 3, 2022
    Inventors: Sharjeel SAEED, Daren CROXFORD, Dominic Hugo SYMES
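
The integrity check in publication 20220038270 can be sketched with a hash standing in for the key: compute it when the first layer's output is produced, then recompute and compare before the second layer consumes the data. The use of SHA-256 and pickle here is an assumption; the publication only requires a key derived from the output data.

```python
import hashlib
import pickle

def make_key(output_data) -> bytes:
    """Generate a key (here a hash) over a layer's output as it is produced."""
    return hashlib.sha256(pickle.dumps(output_data)).digest()

def run_two_layers(layer1, layer2, input_data, storage: dict):
    # First layer: produce output data and store the key alongside it.
    output = layer1(input_data)
    storage["key"] = make_key(output)
    storage["activations"] = output            # e.g. written out to memory

    # Second layer: re-read the data and check it has not changed in transit.
    version = storage["activations"]
    if make_key(version) != storage["key"]:
        raise RuntimeError("layer output was modified or corrupted before reuse")
    return layer2(version)

double, square = (lambda xs: [2 * x for x in xs]), (lambda xs: [x * x for x in xs])
print(run_two_layers(double, square, [1, 2, 3], storage={}))
```
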
  • Publication number: 20220036577
    Abstract: A system for estimating a current camera pose corresponding to a current point in time using a previous camera pose corresponding to a previous point in time, of a camera configured to generate a sequence of image frames. The system performs operations, including: generating, using one or more neural networks, a neural network pose prediction for the current image frame; and adjusting a previous camera pose using inertial measurement unit data representing a motion of the camera between the previous point in time and the current point in time, to provide an inertial measurement unit pose prediction for the current point in time. The inertial measurement unit pose prediction and the neural network pose prediction are combined in order to estimate the current camera pose.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Roberto LOPEZ MENDEZ, Daren CROXFORD, Mina Ivanova DIMOVA, Mohamed Nour Nader Fathy ABOUELSEOUD
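
A simplified sketch of the pose fusion in publication 20220036577: integrate IMU data forward from the previous pose, then blend that prediction with the neural-network prediction. Restricting the state to 2D translation and using a fixed blend weight are assumptions made to keep the example short.

```python
def integrate_imu(previous_pose, imu_samples, dt):
    """Adjust the previous camera pose with IMU data (double-integrate
    acceleration; a real system would also integrate gyroscope rates)."""
    x, y = previous_pose["position"]
    vx, vy = previous_pose["velocity"]
    for ax, ay in imu_samples:
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    return {"position": (x, y), "velocity": (vx, vy)}

def fuse_poses(imu_pose, nn_pose, nn_weight=0.3):
    """Combine the IMU pose prediction with the neural-network pose prediction;
    the weight would in practice reflect each estimate's confidence."""
    fused = tuple((1 - nn_weight) * i + nn_weight * n
                  for i, n in zip(imu_pose["position"], nn_pose["position"]))
    return {"position": fused, "velocity": imu_pose["velocity"]}

previous = {"position": (0.0, 0.0), "velocity": (1.0, 0.0)}
imu_prediction = integrate_imu(previous, imu_samples=[(0.0, 0.0)] * 10, dt=0.01)
nn_prediction = {"position": (0.12, 0.01)}
print(fuse_poses(imu_prediction, nn_prediction))
```
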
  • Patent number: 11222394
    Abstract: A device has a content processing component operable in first and second content processing states, a display, at least one sensor operable to output sensor data indicative of at least one eye positional characteristic of a user, and a processor. The processor is configured to process the data, and in the first processing state, determine a region of the display corresponding to a foveal region of an eye of a user, and perform foveated processing of content to be displayed on the display such that a relatively high-quality video content is generated for display in the region and a relatively low-quality video content is generated for display outside the region. The second processing state is entered in response to a trigger. In the second processing state, the foveated processing used is overridden such that relatively low-quality video content is generated for display in at least a portion of the region.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: January 11, 2022
    Assignee: Arm Limited
    Inventors: Daren Croxford, Mbou Eyole
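
A minimal sketch of the two processing states in patent 11222394: foveated quality selection per tile in the first state, and an override to low quality when the trigger is active. Tile-sized regions and a circular fovea are assumptions of this sketch.

```python
def render_frame(gaze_xy, fovea_radius, frame_size, trigger_active):
    """First state: high quality inside the foveal region, low quality elsewhere.
    Second state (entered on a trigger, e.g. eye tracking lost or power saving):
    the foveated behaviour is overridden and the foveal region also drops to low quality."""
    width, height = frame_size
    tiles = {}
    for ty in range(height):
        for tx in range(width):
            in_fovea = (tx - gaze_xy[0]) ** 2 + (ty - gaze_xy[1]) ** 2 <= fovea_radius ** 2
            if trigger_active:
                tiles[(tx, ty)] = "low"          # override: low quality even in the fovea
            else:
                tiles[(tx, ty)] = "high" if in_fovea else "low"
    return tiles

frame = render_frame(gaze_xy=(2, 2), fovea_radius=1, frame_size=(4, 4), trigger_active=False)
print(sum(q == "high" for q in frame.values()), "high-quality tiles")
```
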
  • Patent number: 11205077
    Abstract: A method is described for operating on a frame of a video to generate a feature map of a neural network. The method determines if a block of the frame is an inter block or an intra block, and performs an inter block process in the event that the block is an inter block and/or an intra block process in the event that the block is an intra block. The inter block process determines a measure of differences between the block of the frame and a reference block of a reference frame of the video, and performs either a first process or a second process based on the measure to generate a segment of the feature map. The intra block process determines a measure of flatness of the block of the frame, and performs either a third process or a fourth process based on the measure to generate a segment of the feature map.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 21, 2021
    Assignee: Arm Limited
    Inventors: Jayavarapu Srinivasa Rao, Daren Croxford, Dominic Hugo Symes
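
A toy sketch of the per-block decisions in patent 11205077: an inter block whose difference from its reference block is small reuses the reference's feature-map segment, and a flat intra block is computed from a single value; otherwise the full computation runs. The thresholds and the doubling stand-in for a layer are hypothetical.

```python
def block_difference(block, reference_block):
    """Inter measure: sum of absolute differences against the reference block."""
    return sum(abs(a - b) for a, b in zip(block, reference_block))

def block_flatness(block):
    """Intra measure: spread of values within the block (0 means perfectly flat)."""
    return max(block) - min(block)

def feature_map_segment(block, reference_block, previous_segment, compute_segment,
                        is_inter, diff_threshold=4, flat_threshold=2):
    """Choose the cheap or the full path for one block of the frame."""
    if is_inter:
        if block_difference(block, reference_block) < diff_threshold:
            return previous_segment          # first process: reuse the reference result
        return compute_segment(block)        # second process: full computation
    if block_flatness(block) < flat_threshold:
        return compute_segment([block[0]]) * len(block)   # third process: one value suffices
    return compute_segment(block)            # fourth process: full computation

compute = lambda values: [v * 2 for v in values]           # toy stand-in for a layer
print(feature_map_segment([5, 5, 6, 5], [5, 5, 5, 5], previous_segment=[10, 10, 10, 10],
                          compute_segment=compute, is_inter=True))
```
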
  • Publication number: 20210390777
    Abstract: An AR system is provided, the AR system including one or more sensors, storage, one or more communications modules, and one or more processors. The one or more sensors generate sensed data representing at least part of an environment in which the AR system is located. The one or more communications modules transmit localization data to be used in determining the location and orientation of the AR system. The one or more processors are arranged to obtain sensed data representing an environment in which the AR system is located, process the sensed data to identify a first portion of the sensed data which represents redundant information, derive localization data, wherein the localization data is derived from the sensed data and the first portion is obscured during the derivation of the localization data, and transmit at least a portion of the localization data using the one or more communication modules.
    Type: Application
    Filed: June 10, 2020
    Publication date: December 16, 2021
    Inventors: Roberto LOPEZ MENDEZ, Daren CROXFORD, Ioan-Cristian SZABO, Mina Ivanova DIMOVA
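
A small sketch of the derivation step in publication 20210390777: zero out the portion of the sensed image identified as redundant before extracting the features that form the localization data. The rectangular regions and the bright-pixel "features" are toy assumptions.

```python
def derive_localization_data(image, redundant_regions, extract_features):
    """Obscure the redundant portion of the sensed data (here, rectangles of an
    image) before deriving the localization data that leaves the device."""
    rows = [row[:] for row in image]                     # work on a copy
    for (x0, y0, x1, y1) in redundant_regions:
        for y in range(y0, y1):
            for x in range(x0, x1):
                rows[y][x] = 0                           # obscured pixel
    return extract_features(rows)

# Toy feature extractor: coordinates of bright pixels; the obscured region at
# (1,1)-(3,3) no longer contributes to the transmitted localization data.
image = [[0, 0, 0, 9], [0, 9, 9, 0], [0, 9, 9, 0], [9, 0, 0, 0]]
features = derive_localization_data(
    image, redundant_regions=[(1, 1, 3, 3)],
    extract_features=lambda img: [(x, y) for y, row in enumerate(img)
                                  for x, v in enumerate(row) if v > 5])
print(features)
```
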
  • Publication number: 20210383673
    Abstract: An AR system includes a user interface, one or more sensors arranged to generate sensor data representing part of an environment in which a user of the AR system is located, and a memory. The memory is arranged to store object association data associating the user with one or more objects in the environment, and object location data indicating a respective location of each of the one or more objects. The AR system is arranged to determine a position of the user; determine an updated location of one of the one or more objects in dependence on the generated sensor data and the determined position of the user, update the stored object location data to indicate the determined updated location of said one of the one or more objects, and output information depending on the updated location of said one of the one or more objects via the user interface.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Daren Croxford, Sean Tristram LeGuay Ellis, Laura Johanna Lähteenmäki
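
A minimal sketch of the bookkeeping in publication 20210383673: objects associated with a user have their stored locations updated from sensed offsets combined with the user's position, and can later be queried through the user interface. The dictionary-based storage and the where_is query are illustrative assumptions.

```python
class ObjectTracker:
    """Keep the stored object location data up to date as the user moves around,
    and answer queries such as 'where did I leave my keys?'."""

    def __init__(self):
        self.object_association = {}     # user -> set of tracked object names
        self.object_location = {}        # object name -> world-space position

    def associate(self, user, obj):
        self.object_association.setdefault(user, set()).add(obj)

    def on_sensor_data(self, user, user_position, detections):
        """detections: object name -> offset from the user, as seen by the sensors."""
        for obj, offset in detections.items():
            if obj in self.object_association.get(user, set()):
                world = tuple(p + o for p, o in zip(user_position, offset))
                self.object_location[obj] = world            # update stored location

    def where_is(self, obj):
        return self.object_location.get(obj, "location unknown")

tracker = ObjectTracker()
tracker.associate("alice", "keys")
tracker.on_sensor_data("alice", user_position=(2.0, 0.0, 1.0),
                       detections={"keys": (0.5, 0.0, -0.2)})
print(tracker.where_is("keys"))
```
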