Patents by Inventor David A. Hayward
David A. Hayward has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190089926
Abstract: RAW camera images may be processed by a computer system using either a particular application or a system level service. In either case, at least some parameters needed for the processing are preferably separated from the executable binary of the application or service, and are provided in separate, non-executable, data-only files. Each of these files can correspond to a particular camera or other imaging device. When a user of the system attempts to open a RAW image file from an unsupported device, the local system may contact a server for on-demand download and on-the-fly installation of the required support resource.
Type: Application
Filed: November 19, 2018
Publication date: March 21, 2019
Inventors: Michael Balle-Pedersen, David Hayward, Travis W. Brown
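A minimal Python sketch of the on-demand profile idea in this abstract. The directory layout, the JSON profile format, the `load_profile` name, and the server URL are hypothetical illustrations, not taken from the patent.

```python
import json
import urllib.request
from pathlib import Path

# Hypothetical store of non-executable, data-only camera profiles (JSON here).
PROFILE_DIR = Path.home() / ".raw_profiles"
PROFILE_SERVER = "https://example.com/raw-profiles"  # placeholder URL

def load_profile(camera_model: str) -> dict:
    """Return decode parameters for a camera, downloading them on demand.

    Because the parameters live in plain data files separate from the
    decoder binary, supporting a new camera does not require shipping a
    new executable, only a new profile file.
    """
    local = PROFILE_DIR / f"{camera_model}.json"
    if not local.exists():
        # Unsupported device: fetch and install the profile on the fly.
        PROFILE_DIR.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(f"{PROFILE_SERVER}/{camera_model}.json") as resp:
            local.write_bytes(resp.read())
    return json.loads(local.read_text())

# A RAW decoder would then consume load_profile("SomeCamera") to pick its
# demosaic and color-matrix parameters before rendering the image.
```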
-
Patent number: 10232333
Abstract: An apparatus for membrane emulsification. In one embodiment, the apparatus comprises a membrane defining a plurality of apertures connecting a first phase on a first side of the membrane to a second phase on a second, different side of the membrane, such that egression of the first phase into the second phase via the plurality of apertures creates an emulsion, and wherein the membrane is an oscillating cylindrical membrane.
Type: Grant
Filed: July 12, 2016
Date of Patent: March 19, 2019
Assignee: MICROPORE TECHNOLOGIES LTD.
Inventors: Bruce Williams, Richard Holdich, Iain Cumming, Pedro Silva, David Hayward
-
Publication number: 20190043165
Abstract: This invention provides methods for spatially localized image editing. For example, an input image is divided into multiple bins in each dimension. For each bin, a histogram is computed, along with local image statistics such as the mean, median, and cumulative histogram. Next, for each tile, a type of adjustment is determined and applied, including adjustments associated with Exposure, Brightness, Shadows, Highlights, Contrast, and Blackpoint. The adjustments are done for all tiles in the input image to render a small adjustment image. The small image is then interpolated, for example using an edge-preserving interpolation, to get a full-size adjustment image with an adjustment curve for each pixel. Subsequently, per-pixel image adjustments can be performed across an entire input image to render a final adjusted image.
Type: Application
Filed: October 11, 2018
Publication date: February 7, 2019
Applicant: Apple Inc.
Inventors: Garrett M. Johnson, David Hayward
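A rough Python/NumPy sketch of the tile pipeline described above: per-bin statistics, a small per-tile adjustment image, and upsampling back to full resolution. The tile count, the exposure-style gain heuristic, and the use of plain bilinear interpolation (the abstract calls for an edge-preserving interpolation) are simplifying assumptions, not the patented method.

```python
import numpy as np

def tile_stats(img: np.ndarray, tiles: int = 8):
    """Per-tile mean, median and cumulative histogram for a grayscale image
    with values in [0, 1]. Using `tiles` bins per dimension is arbitrary."""
    h, w = img.shape
    means = np.zeros((tiles, tiles))
    medians = np.zeros((tiles, tiles))
    cdfs = np.zeros((tiles, tiles, 256))
    ys = np.linspace(0, h, tiles + 1, dtype=int)
    xs = np.linspace(0, w, tiles + 1, dtype=int)
    for i in range(tiles):
        for j in range(tiles):
            t = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            means[i, j] = t.mean()
            medians[i, j] = np.median(t)
            hist, _ = np.histogram(t, bins=256, range=(0.0, 1.0))
            cdfs[i, j] = np.cumsum(hist) / t.size
    return means, medians, cdfs

def small_adjustment_image(means: np.ndarray, target: float = 0.5) -> np.ndarray:
    """One exposure-like gain per tile: the 'small adjustment image'.
    The per-tile decision logic here is a stand-in, not the patent's."""
    return target / np.clip(means, 1e-3, None)

def upsample_bilinear(small: np.ndarray, shape) -> np.ndarray:
    """Plain bilinear upsampling of the small adjustment image; the abstract
    uses an edge-preserving interpolation at this step instead."""
    h, w = shape
    yi = np.linspace(0, small.shape[0] - 1, h)
    xi = np.linspace(0, small.shape[1] - 1, w)
    y0, x0 = np.floor(yi).astype(int), np.floor(xi).astype(int)
    y1 = np.minimum(y0 + 1, small.shape[0] - 1)
    x1 = np.minimum(x0 + 1, small.shape[1] - 1)
    fy, fx = (yi - y0)[:, None], (xi - x0)[None, :]
    top = small[y0][:, x0] * (1 - fx) + small[y0][:, x1] * fx
    bot = small[y1][:, x0] * (1 - fx) + small[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

# Per-pixel application across the whole input image:
# gain = upsample_bilinear(small_adjustment_image(tile_stats(img)[0]), img.shape)
# adjusted = np.clip(img * gain, 0.0, 1.0)
```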
-
Patent number: 10191636
Abstract: This disclosure pertains to systems, methods, and computer readable media for mapping particular user interactions, e.g., gestures, to the input parameters of various image processing routines, e.g., image filters, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the software developer. Such techniques may handle the processing of both “relative” gestures, i.e., those gestures having values dependent on how much an input to the device has changed relative to a previous value of the input, and “absolute” gestures, i.e., those gestures having values dependent only on the instant value of the input to the device. Additionally, inputs to the device beyond user-input gestures may be utilized as input parameters to one or more image processing routines. For example, the device's orientation, acceleration, and/or position in three-dimensional space may be used as inputs to particular image processing routines.
Type: Grant
Filed: December 1, 2016
Date of Patent: January 29, 2019
Assignee: Apple Inc.
Inventors: David Hayward, Chendi Zhang, Alexandre Naaman, Richard R. Dellinger, Giridhar S. Murthy
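To make the relative/absolute distinction concrete, here is a small Python sketch of how a gesture (or a non-gesture sensor input) might drive a single filter parameter. The `FilterParam` class, the sensitivity factor, and the roll-angle example are illustrative assumptions, not Apple's API.

```python
from dataclasses import dataclass

@dataclass
class FilterParam:
    """A single image-filter input, e.g. a blur radius (name is hypothetical)."""
    value: float
    lo: float = 0.0
    hi: float = 1.0

    def clamp(self) -> None:
        self.value = max(self.lo, min(self.hi, self.value))

def apply_relative_gesture(param: FilterParam, delta: float, sensitivity: float = 1.0) -> None:
    """'Relative' gesture: the new value depends on how much the input has
    changed relative to its previous value (e.g. a pan gesture's delta-x)."""
    param.value += delta * sensitivity
    param.clamp()

def apply_absolute_gesture(param: FilterParam, position: float) -> None:
    """'Absolute' gesture: the value depends only on the instant input,
    e.g. the normalized x position of a touch on the screen."""
    param.value = param.lo + position * (param.hi - param.lo)
    param.clamp()

# Non-gesture device inputs can feed the same mapping; e.g. device roll from
# a motion sensor treated as an absolute input normalized to [0, 1]:
# apply_absolute_gesture(vignette_radius, (roll_degrees + 90.0) / 180.0)
```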
-
Patent number: 10147166
Abstract: This invention provides methods for spatially localized image editing. For example, an input image is divided into multiple bins in each dimension. For each bin, a histogram is computed, along with local image statistics such as the mean, median, and cumulative histogram. Next, for each tile, a type of adjustment is determined and applied, including adjustments associated with Exposure, Brightness, Shadows, Highlights, Contrast, and Blackpoint. The adjustments are done for all tiles in the input image to render a small adjustment image. The small image is then interpolated, for example using an edge-preserving interpolation, to get a full-size adjustment image with an adjustment curve for each pixel. Subsequently, per-pixel image adjustments can be performed across an entire input image to render a final adjusted image.
Type: Grant
Filed: September 23, 2016
Date of Patent: December 4, 2018
Assignee: Apple Inc.
Inventors: Garrett M. Johnson, David Hayward
-
Patent number: 10136097
Abstract: RAW camera images may be processed by a computer system using either a particular application or a system level service. In either case, at least some parameters needed for the processing are preferably separated from the executable binary of the application or service, and are provided in separate, non-executable, data-only files. Each of these files can correspond to a particular camera or other imaging device. When a user of the system attempts to open a RAW image file from an unsupported device, the local system may contact a server for on-demand download and on-the-fly installation of the required support resource.
Type: Grant
Filed: February 2, 2012
Date of Patent: November 20, 2018
Assignee: Apple Inc.
Inventors: Michael Balle-Pedersen, David Hayward, Travis W. Brown
-
Publication number: 20180305086
Abstract: A plastic container for holding fluid material, with an attachable handle, is provided. The container includes a body, a plurality of walls, a spout, an attachable handle and a sliding structure. The spout is a hollow, cylindrical portion that extends from an opening in one of the walls. The cylindrical portion is configured to insert or remove the fluid material from the container. The attachable handle includes a rail structure that further includes a pair of non-parallel offset rails and a latch. The latch has a latching surface that is located at the ends of the offset rails farthest from each other. The sliding structure includes a generally rectangular projection. The projection further includes a pair of non-parallel offset grooves and a latching surface. The latching surface is located where the grooves are farthest from each other. The offset grooves are adapted to mate with the offset rails and are fully engaged by the offset rails when the latch is engaged with the latching surface.
Type: Application
Filed: June 25, 2018
Publication date: October 25, 2018
Inventors: Laura Flanagan-Kent, Stephen J. Kocis, David A. Hayward, Frederick P. Minkemeyer, Gary L. Mengeu, Edmund L. White
-
Patent number: 10029823
Abstract: A plastic container for holding fluid material, with an attachable handle, is provided. The container includes a body, a plurality of walls, a spout, an attachable handle and a sliding structure. The spout is a hollow, cylindrical portion that extends from an opening in one of the walls. The cylindrical portion is configured to insert or remove the fluid material from the container. The attachable handle includes a rail structure that further includes a pair of non-parallel offset rails and a latch. The latch has a latching surface that is located at the ends of the offset rails farthest from each other. The sliding structure includes a generally rectangular projection. The projection further includes a pair of non-parallel offset grooves and a latching surface. The latching surface is located where the grooves are farthest from each other. The offset grooves are adapted to mate with the offset rails and are fully engaged by the offset rails when the latch is engaged with the latching surface.
Type: Grant
Filed: June 23, 2015
Date of Patent: July 24, 2018
Assignee: Silgan Plastics LLC
Inventors: Laura Flanagan-Kent, Stephen J. Kocis, David A. Hayward, Frederick P. Minkemeyer, Gary L. Mengeu, Edmund L. White
-
Publication number: 20180089799
Abstract: This invention provides methods for spatially localized image editing. For example, an input image is divided into multiple bins in each dimension. For each bin, a histogram is computed, along with local image statistics such as the mean, median, and cumulative histogram. Next, for each tile, a type of adjustment is determined and applied, including adjustments associated with Exposure, Brightness, Shadows, Highlights, Contrast, and Blackpoint. The adjustments are done for all tiles in the input image to render a small adjustment image. The small image is then interpolated, for example using an edge-preserving interpolation, to get a full-size adjustment image with an adjustment curve for each pixel. Subsequently, per-pixel image adjustments can be performed across an entire input image to render a final adjusted image.
Type: Application
Filed: September 23, 2016
Publication date: March 29, 2018
Inventors: Garrett M. Johnson, David Hayward
-
Publication number: 20180088776
Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
Type: Application
Filed: October 2, 2017
Publication date: March 29, 2018
Inventors: Ricardo Motta, Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
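A minimal Python sketch of the sensor side of this idea: integrating gyrometer samples to maintain a running 3D frame-of-reference whose columns are the display's X, Y and Z (normal) vectors. Sensor fusion with the accelerometer/compass and the head-position estimate from the optical sensors are omitted, and the class and function names are hypothetical.

```python
import numpy as np

def rotation_from_gyro(omega: np.ndarray, dt: float) -> np.ndarray:
    """Rotation matrix for one angular-rate sample (rad/s) over dt seconds."""
    theta = omega * dt                      # rotation vector
    wx, wy, wz = theta
    K = np.array([[0.0, -wz,  wy],          # skew-symmetric form of theta
                  [ wz, 0.0, -wx],
                  [-wy,  wx, 0.0]])
    angle = np.linalg.norm(theta)
    if angle < 1e-9:
        return np.eye(3) + K                # first-order small-angle update
    K_unit = K / angle
    # Rodrigues' formula for the exact rotation about the unit axis.
    return np.eye(3) + np.sin(angle) * K_unit + (1 - np.cos(angle)) * (K_unit @ K_unit)

class DeviceFrame:
    """Running estimate of the display's frame of reference. Columns of R are
    the display's X, Y and Z (normal) vectors expressed in world space."""
    def __init__(self) -> None:
        self.R = np.eye(3)

    def update(self, gyro_sample: np.ndarray, dt: float) -> np.ndarray:
        self.R = self.R @ rotation_from_gyro(gyro_sample, dt)
        return self.R

# Feeding gyroscope samples as they arrive keeps a continuous 3D
# frame-of-reference; graphical objects can then be projected with respect
# to it and to the inferred (or camera-derived) head position.
```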
-
Publication number: 20180089800
Abstract: Disclosed herein are methods and systems for fast and edge preserving upsampling of a small data image based on one or more guide images. During the upsampling process, selected data from the one or more guide images are combined with data from the data image to generate an upsampled pixel in an upsampled image. The upsampling can occur directly from the data image or sequentially via one or more intermediate upsampled images.
Type: Application
Filed: June 26, 2017
Publication date: March 29, 2018
Inventors: David Hayward, Garrett M. Johnson
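The abstract does not spell out the interpolation, so the sketch below uses textbook joint bilateral upsampling as a stand-in: the range weight comes from the full-size guide image, which is what keeps edges sharp while the small data image is enlarged. The parameter names and defaults are assumptions, and this is not the specific method claimed in the publication.

```python
import numpy as np

def joint_bilateral_upsample(data_lo: np.ndarray, guide_hi: np.ndarray,
                             radius: int = 2, sigma_s: float = 1.0,
                             sigma_r: float = 0.1) -> np.ndarray:
    """Edge-preserving upsampling of a small grayscale data image using a
    full-size grayscale guide image (values in [0, 1])."""
    H, W = guide_hi.shape
    h, w = data_lo.shape
    sy, sx = h / H, w / W                  # high-res -> low-res scale factors
    out = np.zeros((H, W))
    for Y in range(H):
        for X in range(W):
            cy, cx = Y * sy, X * sx        # position in low-res coordinates
            y0, x0 = int(cy), int(cx)
            wsum = vsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = y0 + dy, x0 + dx
                    if 0 <= y < h and 0 <= x < w:
                        # Spatial weight, measured in low-res space.
                        ws = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma_s ** 2))
                        # Range weight taken from the *guide* image: this is
                        # the term that preserves edges.
                        gy = min(H - 1, int(y / sy))
                        gx = min(W - 1, int(x / sx))
                        wr = np.exp(-(guide_hi[Y, X] - guide_hi[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        vsum += ws * wr * data_lo[y, x]
            out[Y, X] = vsum / wsum if wsum > 0 else data_lo[y0, x0]
    return out
```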
-
Publication number: 20180059511
Abstract: At least certain embodiments described herein provide a continuous autofocus mechanism for an image capturing device. The continuous autofocus mechanism can perform an autofocus scan for a lens of the image capturing device and obtain focus scores associated with the autofocus scan. The continuous autofocus mechanism can determine an acceptable band of focus scores based on the obtained focus scores. Next, the continuous autofocus mechanism can determine whether a current focus score is within the acceptable band of focus scores. A refocus scan may be performed if the current focus score is outside of the acceptable band of focus scores.
Type: Application
Filed: July 28, 2017
Publication date: March 1, 2018
Inventors: Ralph Brunner, David Hayward
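A small Python sketch of the acceptable-band logic described above. The band heuristic (a fixed fraction of the scan's peak focus score) and the class shape are assumptions made for illustration; the patent defines the band from the obtained focus scores without prescribing this particular rule.

```python
class ContinuousAutofocus:
    """Keep the lens still while the scene's focus score stays inside a band
    derived from the last autofocus scan; trigger a refocus scan otherwise."""

    def __init__(self, band_fraction: float = 0.8) -> None:
        self.band_fraction = band_fraction   # assumed heuristic
        self.band = None                     # (low, high) acceptable focus scores

    def finish_scan(self, scan_scores: list) -> int:
        """Record the band from a completed scan; return the best lens index."""
        peak = max(scan_scores)
        self.band = (self.band_fraction * peak, float("inf"))
        return scan_scores.index(peak)

    def needs_refocus(self, current_score: float) -> bool:
        """True when the current focus score falls outside the acceptable band."""
        if self.band is None:
            return True
        low, high = self.band
        return not (low <= current_score <= high)

# Usage:
# af = ContinuousAutofocus()
# best_position = af.finish_scan(scores_from_scan)
# if af.needs_refocus(latest_score):
#     ...perform a new autofocus scan...
```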
-
Publication number: 20180015432
Abstract: An apparatus for membrane emulsification. In one embodiment, the apparatus comprises a membrane defining a plurality of apertures connecting a first phase on a first side of the membrane to a second phase on a second, different side of the membrane, such that egression of the first phase into the second phase via the plurality of apertures creates an emulsion, and wherein the membrane is an oscillating cylindrical membrane.
Type: Application
Filed: July 12, 2016
Publication date: January 18, 2018
Applicant: Micropore Technologies Ltd
Inventors: Bruce Williams, Richard Holdich, Iain Cumming, Pedro Silva, David Hayward
-
Patent number: 9778815
Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
Type: Grant
Filed: July 13, 2016
Date of Patent: October 3, 2017
Assignee: Apple Inc.
Inventors: Ricardo Motta, Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
-
Patent number: 9720302
Abstract: At least certain embodiments described herein provide a continuous autofocus mechanism for an image capturing device. The continuous autofocus mechanism can perform an autofocus scan for a lens of the image capturing device and obtain focus scores associated with the autofocus scan. The continuous autofocus mechanism can determine an acceptable band of focus scores based on the obtained focus scores. Next, the continuous autofocus mechanism can determine whether a current focus score is within the acceptable band of focus scores. A refocus scan may be performed if the current focus score is outside of the acceptable band of focus scores.
Type: Grant
Filed: June 24, 2014
Date of Patent: August 1, 2017
Assignee: Apple Inc.
Inventors: Ralph Brunner, David Hayward
-
Publication number: 20170115846
Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
Type: Application
Filed: July 13, 2016
Publication date: April 27, 2017
Inventors: Ricardo Motta, Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
-
Publication number: 20170083218
Abstract: This disclosure pertains to systems, methods, and computer readable media for mapping particular user interactions, e.g., gestures, to the input parameters of various image processing routines, e.g., image filters, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the software developer. Such techniques may handle the processing of both “relative” gestures, i.e., those gestures having values dependent on how much an input to the device has changed relative to a previous value of the input, and “absolute” gestures, i.e., those gestures having values dependent only on the instant value of the input to the device. Additionally, inputs to the device beyond user-input gestures may be utilized as input parameters to one or more image processing routines. For example, the device's orientation, acceleration, and/or position in three-dimensional space may be used as inputs to particular image processing routines.
Type: Application
Filed: December 1, 2016
Publication date: March 23, 2017
Inventors: David Hayward, Chendi Zhang, Alexandre Naaman, Richard R. Dellinger, Giridhar S. Murthy
-
Patent number: 9531947
Abstract: This disclosure pertains to systems, methods, and computer readable media for mapping particular user interactions, e.g., gestures, to the input parameters of various image processing routines, e.g., image filters, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the software developer. Such techniques may handle the processing of both “relative” gestures, i.e., those gestures having values dependent on how much an input to the device has changed relative to a previous value of the input, and “absolute” gestures, i.e., those gestures having values dependent only on the instant value of the input to the device. Additionally, inputs to the device beyond user-input gestures may be utilized as input parameters to one or more image processing routines. For example, the device's orientation, acceleration, and/or position in three-dimensional space may be used as inputs to particular image processing routines.
Type: Grant
Filed: May 2, 2014
Date of Patent: December 27, 2016
Assignee: Apple Inc.
Inventors: David Hayward, Chendi Zhang, Alexandre Naaman, Richard R. Dellinger, Giridhar S. Murthy
-
Publication number: 20160358309
Abstract: Embodiments are directed toward systems and methods that segment an input image for execution of a warp kernel by a graphics processing unit (GPU). The GPU first executes the warp kernel on an array of dummy data, wherein cells of the array are populated with data representing the cells' respective locations within the array. A segmentation size is then determined from an output array obtained from execution of the warp kernel on the dummy data, and the GPU builds an output image from the input image by executing the warp kernel on the input image according to the segmentation size.
Type: Application
Filed: June 3, 2016
Publication date: December 8, 2016
Inventors: David Hayward, Alexandre Naaman
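A CPU-side Python sketch of the dummy-data probe: populate an array with each cell's own coordinates, run the warp on it, and use the resulting source footprints to pick a segmentation (tile) size before building the output image tile by tile. The shrink-by-half policy, the footprint limit, and the nearest-neighbour gather are assumptions made for the sketch, not the claimed method.

```python
import numpy as np

def make_dummy(h: int, w: int) -> np.ndarray:
    """H x W x 2 array whose cells hold their own (row, col) location."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([ys, xs], axis=-1).astype(np.float32)

def probe_segment_size(warp, h: int, w: int, tile: int = 64,
                       max_source_extent: int = 256) -> int:
    """`warp` maps an H x W x 2 coordinate array to the source coordinates it
    samples. Shrink the tile size until every output tile's source footprint
    fits within `max_source_extent` pixels."""
    sampled = warp(make_dummy(h, w))            # H x W x 2 source coordinates
    while tile > 1:
        ok = True
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                src = sampled[y:y + tile, x:x + tile].reshape(-1, 2)
                extent = (src.max(axis=0) - src.min(axis=0)).max()
                if extent > max_source_extent:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            return tile
        tile //= 2
    return 1

def build_output(warp, image: np.ndarray, tile: int) -> np.ndarray:
    """Apply the warp tile by tile (nearest-neighbour gather for brevity)."""
    h, w = image.shape[:2]
    coords = warp(make_dummy(h, w))
    out = np.zeros_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            c = coords[y:y + tile, x:x + tile]
            yy = np.clip(np.round(c[..., 0]).astype(int), 0, h - 1)
            xx = np.clip(np.round(c[..., 1]).astype(int), 0, w - 1)
            out[y:y + tile, x:x + tile] = image[yy, xx]
    return out
```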
-
Patent number: 9417763
Abstract: The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
Type: Grant
Filed: December 15, 2014
Date of Patent: August 16, 2016
Assignee: Apple Inc.
Inventors: Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke