Patents by Inventor William Y. Conwell

William Y. Conwell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10007964
    Abstract: Imagery captured by an autonomous robot is analyzed to discern digital watermark patterns. In some embodiments, identical but geometrically-inconsistent digital watermark patterns are discerned in an image frame, to aid in distinguishing multiple depicted instances of a particular item. In other embodiments, actions of the robot are controlled or altered in accordance with image processing performed by the robot on a digital watermark pattern. The technology is particularly described in the context of retail stores in which the watermark patterns are encoded, e.g., on product packaging, shelving, and shelf labels. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: June 26, 2018
    Assignee: Digimarc Corporation
    Inventors: Sean Calhoon, William Y. Conwell
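
A minimal sketch of one idea named in the abstract above: detections of the same watermark payload whose recovered geometric poses disagree likely come from distinct physical copies of the item. The detection fields, tolerances, and clustering below are illustrative assumptions, not Digimarc's implementation.

```python
from dataclasses import dataclass
from collections import defaultdict
import math

@dataclass
class WatermarkDetection:
    payload: str          # decoded watermark identifier (e.g., a GTIN)
    x: float              # estimated position of the watermark origin in the frame
    y: float
    scale: float          # recovered scale of the embedded pattern
    rotation_deg: float   # recovered rotation of the embedded pattern

def count_item_instances(detections, pos_tol=40.0, scale_tol=0.15, rot_tol_deg=10.0):
    """Group detections by payload, then split each group into geometrically
    consistent clusters; the cluster count approximates how many physical
    instances of that item are visible in the frame."""
    by_payload = defaultdict(list)
    for d in detections:
        by_payload[d.payload].append(d)

    counts = {}
    for payload, group in by_payload.items():
        clusters = []  # each cluster holds mutually consistent detections
        for d in group:
            placed = False
            for cluster in clusters:
                ref = cluster[0]
                if (math.hypot(d.x - ref.x, d.y - ref.y) < pos_tol
                        and abs(d.scale - ref.scale) / ref.scale < scale_tol
                        and abs(d.rotation_deg - ref.rotation_deg) < rot_tol_deg):
                    cluster.append(d)
                    placed = True
                    break
            if not placed:
                clusters.append([d])
        counts[payload] = len(clusters)
    return counts
```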
  • Publication number: 20180174620
    Abstract: Arrangements involving portable devices (e.g., smartphones and tablet computers) are disclosed. One arrangement enables a content creator to select software with which that creator's content should be rendered—assuring continuity between artistic intention and delivery. Another utilizes a device camera to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography.
    Type: Application
    Filed: November 27, 2017
    Publication date: June 21, 2018
    Inventors: Bruce L. Davis, Tony F. Rodriguez, Geoffrey B. Rhoads, William Y. Conwell, Jerrine K. Owen, Adnan M. Alattar, Eliot Rogers, Brett A. Bradley, Alastair M. Reed, Robert Craig Brandis
  • Publication number: 20180158133
    Abstract: A decade from now, a visit to the supermarket will be a very different experience than the familiar experiences of decades past. Product packaging will come alive with interactivity—each object a portal into a rich tapestry of experiences, with contributions authored by the product brand, by the store selling the product, and by other shoppers. The present technology concerns arrangements for authoring and delivering such experiences. A great variety of other features and technologies are also detailed.
    Type: Application
    Filed: November 9, 2017
    Publication date: June 7, 2018
    Inventors: Bruce L. Davis, Geoffrey B. Rhoads, Tony F. Rodriguez, Edward B. Knudson, William Y. Conwell
  • Patent number: 9886845
    Abstract: Mobile phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some aspects relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others concern user interface improvements. Other aspects relate to imaging architectures, in which a mobile phone's image sensor is one in a chain of stages that successively act on packetized instructions/data, to capture and later process imagery. Still other aspects relate to distribution of processing tasks between the mobile device and remote resources (“the cloud”). Elemental image processing (e.g., simple filtering and edge detection) can be performed on the mobile phone, while other operations can be referred out to remote service providers.
    Type: Grant
    Filed: August 11, 2014
    Date of Patent: February 6, 2018
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Nicole Rhoads, Brian T. MacIntosh, William Y. Conwell
  • Patent number: 9836929
    Abstract: A variety of haptic improvements useful in mobile devices are detailed. In one, a smartphone captures image data from a physical object, and discerns an object identifier from the imagery (e.g., using watermark, barcode, or fingerprint techniques). This identifier is sent to a remote data structure, which returns data defining a distinct haptic signature associated with that object. This smartphone then renders this haptic signal to the user. (Related embodiments identify the object using other means, such as location, or NFC chip.) In another arrangement, haptic feedback signals social network information about a product or place (e.g., the user's social network friends “Like” a particular brand of beverage). In yet another arrangement, the experience of watching a movie on a television screen is augmented by tactile effects issued by a tablet computer on the viewer's lap.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: December 5, 2017
    Assignee: Digimarc Corporation
    Inventors: Tony F. Rodriguez, William Y. Conwell
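
A minimal sketch of the lookup-and-render flow described in the abstract above. The decoder stub, the haptic "database," and the vibrate callback are hypothetical stand-ins; none of them name a real Digimarc or device interface.

```python
import time

HAPTIC_DB = {
    # object identifier -> haptic signature as (on_ms, off_ms) pulse pairs (hypothetical)
    "gtin:00012345678905": [(80, 40), (80, 40), (200, 0)],
    "gtin:00076808500103": [(30, 30)] * 5,
}

def decode_object_id(image_bytes):
    """Placeholder for watermark/barcode/fingerprint decoding of captured imagery."""
    raise NotImplementedError

def fetch_haptic_signature(object_id):
    """Stands in for a query to the remote data structure keyed by object id."""
    return HAPTIC_DB.get(object_id)

def render_haptics(signature, vibrate):
    """Play the signature through a device-specific vibrate(duration_ms) callback."""
    for on_ms, off_ms in signature:
        vibrate(on_ms)
        time.sleep(off_ms / 1000.0)
```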
  • Patent number: 9830950
    Abstract: Arrangements involving portable devices (e.g., smartphones and tablet computers) are disclosed. One arrangement enables a content creator to select software with which that creator's content should be rendered—assuring continuity between artistic intention and delivery. Another utilizes a device camera to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: November 28, 2017
    Assignee: Digimarc Corporation
    Inventors: Tony F. Rodriguez, William Y. Conwell
  • Patent number: 9818150
    Abstract: A decade from now, a visit to the supermarket will be a very different experience than the familiar experiences of decades past. Product packaging will come alive with interactivity—each object a portal into a rich tapestry of experiences, with contributions authored by the product brand, by the store selling the product, and by other shoppers. The present technology concerns arrangements for authoring and delivering such experiences. A great variety of other features and technologies are also detailed.
    Type: Grant
    Filed: January 10, 2014
    Date of Patent: November 14, 2017
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, William Y. Conwell
  • Patent number: 9785841
    Abstract: Both fingerprinting and watermark decoding processes are applied to received items of audio-visual content. Further processing is applied as well. This further processing depends on output data from the watermark decoding process, and can cause two items of seemingly-identical audio-visual content to be further-processed in different ways.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: October 10, 2017
    Assignee: Digimarc Corporation
    Inventors: Bruce L. Davis, William Y. Conwell
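
An illustrative sketch of the dual-path idea in the abstract above: fingerprinting identifies what the content is, while the decoded watermark payload, which can differ between otherwise identical copies, steers how each copy is handled. The helper functions and routing choices are invented for illustration.

```python
def compute_fingerprint(media):
    """Placeholder for a content-fingerprint (perceptual hash) computation."""
    raise NotImplementedError

def decode_watermark(media):
    """Placeholder returning watermark payload fields, e.g. {'distributor': 'authorized'}."""
    raise NotImplementedError

def process_item(media, fingerprint_index):
    content_id = fingerprint_index.get(compute_fingerprint(media), "unknown")
    payload = decode_watermark(media)
    # Two items with the same fingerprint (seemingly identical content) can still be
    # routed differently based on what, if anything, the watermark carries.
    if payload is None:
        return f"{content_id}: no watermark -> flag for manual review"
    if payload.get("distributor") == "authorized":
        return f"{content_id}: watermark ok -> pass through"
    return f"{content_id}: unexpected payload {payload} -> escalate"
```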
  • Patent number: 9788043
    Abstract: A portable device, such as a cell phone, is used to “forage” media content from a user's environment. For example, it may listen to a television viewed by a traveler in an airport lounge. By reference to digital watermark or other data extracted from the content, the device can identify the television program, and enable a variety of actions. For example, the device may instruct a DVR to record the remainder of the television program—or determine when the program will be rebroadcast, and instruct the DVR to record the program in its entirety at that later time. The device may also identify content that preceded (or follows) the foraged content. Thus, a user who tunes-in just at the end of an exciting sporting event can capture one of the following commercials, identify the preceding program, and download same for later viewing.
    Type: Grant
    Filed: November 14, 2008
    Date of Patent: October 10, 2017
    Assignee: Digimarc Corporation
    Inventors: Bruce L. Davis, Tony F. Rodriguez, Brian T. MacIntosh, William Y. Conwell
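
A minimal sketch of the "foraging" flow in the abstract above: decode an identifier from ambient audio, resolve it to a program and its schedule, then ask a DVR to record the remainder now or the full rebroadcast later. The decoder, program lookup, and DVR interface are hypothetical stand-ins.

```python
from datetime import datetime

def forage_and_record(audio_samples, decode_audio_watermark, lookup_program, dvr):
    payload = decode_audio_watermark(audio_samples)
    if payload is None:
        return "no identifier found in captured audio"
    # lookup_program is assumed to return e.g. {'title': ..., 'ends_at': ..., 'rebroadcast_at': ...}
    program = lookup_program(payload)
    now = datetime.now()
    if program.get("rebroadcast_at") and program["rebroadcast_at"] > now:
        dvr.schedule(program["title"], start=program["rebroadcast_at"])
        return f"scheduled full recording of {program['title']} at rebroadcast"
    dvr.record_now(program["title"], until=program["ends_at"])
    return f"recording remainder of {program['title']}"
```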
  • Publication number: 20170257474
    Abstract: A smart phone senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns phone-based cognition of a scene viewed by the phone's camera. The image processing tasks applied to the scene can be selected from among various alternatives by reference to resource costs, resource constraints, other stimulus information (e.g., audio), task substitutability, etc. The phone can apply more or less resources to an image processing task depending on how successfully the task is proceeding, or based on the user's apparent interest in the task. In some arrangements, data may be referred to the cloud for analysis, or for gleaning. Cognition, and identification of appropriate device response(s), can be aided by collateral information, such as context. A great number of other features and arrangements are also detailed.
    Type: Application
    Filed: March 23, 2017
    Publication date: September 7, 2017
    Inventors: Geoffrey B. Rhoads, Tony F. Rodriguez, Gilbert B. Shaw, Bruce L. Davis, William Y. Conwell
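
A minimal sketch of the resource-aware task selection described in the abstract above: candidate image-processing tasks are scored by expected value per unit cost, chosen greedily under a budget, and a task's standing shrinks if it keeps failing or the user shows no interest. All task fields, weights, and update rules are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost: float                 # e.g., estimated ms of CPU/GPU time per frame
    expected_value: float       # prior usefulness given current context/stimulus
    success_rate: float = 0.5   # updated as the task succeeds or fails
    user_interest: float = 0.5  # e.g., inferred from dwell on the task's results

    def priority(self) -> float:
        return self.expected_value * self.success_rate * self.user_interest / self.cost

def select_tasks(tasks, budget):
    """Greedily pick the highest-priority tasks that fit the per-frame budget."""
    chosen, spent = [], 0.0
    for t in sorted(tasks, key=Task.priority, reverse=True):
        if spent + t.cost <= budget:
            chosen.append(t)
            spent += t.cost
    return chosen

def report_outcome(task, succeeded, alpha=0.2):
    """Exponentially update the success estimate after each frame."""
    task.success_rate = (1 - alpha) * task.success_rate + alpha * (1.0 if succeeded else 0.0)
```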
  • Publication number: 20170249491
    Abstract: In some arrangements, product packaging is digitally watermarked over most of its extent to facilitate high-throughput item identification at retail checkouts. Imagery captured by conventional or plenoptic cameras can be processed (e.g., by GPUs) to derive several different perspective-transformed views—further minimizing the need to manually reposition items for identification. Crinkles and other deformations in product packaging can be optically sensed, allowing such surfaces to be virtually flattened to aid identification. Piles of items can be 3D-modelled and virtually segmented into geometric primitives to aid identification, and to discover locations of obscured items. Other data (e.g., including data from sensors in aisles, shelves and carts, and gaze tracking for clues about visual saliency) can be used in assessing identification hypotheses about an item. Logos may be identified and used—or ignored—in product identification. A great variety of other features and arrangements are also detailed.
    Type: Application
    Filed: March 17, 2017
    Publication date: August 31, 2017
    Inventors: Brian T. MacIntosh, Tony F. Rodriguez, Bruce L. Davis, Geoffrey B. Rhoads, John D. Lord, Alastair M. Reed, Eric D. Evans, Rebecca L. Gerlach, Yang Bai, John F. Stach, Tomas Filler, Marc G. Footen, Sean Calhoon, William Y. Conwell
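
A minimal OpenCV sketch of one idea in the abstract above: from a single captured frame, synthesize several perspective-transformed views and run the reader on each, reducing the need to reposition items by hand. The tilt angles and the decode_frame stand-in are illustrative assumptions, not the patented method.

```python
import numpy as np
import cv2

def synthesize_views(frame, tilts_deg=(-30, -15, 0, 15, 30)):
    """Yield the frame re-projected as if the camera were tilted about the y-axis."""
    h, w = frame.shape[:2]
    f = float(max(h, w))                      # crude focal-length assumption
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1.0]])
    for tilt in tilts_deg:
        a = np.deg2rad(tilt)
        R = np.array([[np.cos(a), 0, np.sin(a)],
                      [0, 1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
        H = K @ R @ np.linalg.inv(K)          # homography induced by a pure rotation
        yield tilt, cv2.warpPerspective(frame, H, (w, h))

def identify(frame, decode_frame):
    """Try the (hypothetical) watermark/barcode reader on each synthesized view."""
    for tilt, view in synthesize_views(frame):
        item_id = decode_frame(view)
        if item_id is not None:
            return item_id, tilt
    return None, None
```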
  • Publication number: 20170243317
    Abstract: A sequence of images depicting an object is captured, e.g., by a camera at a point-of-sale terminal in a retail store. The object is identified, such as by a barcode or watermark that is detected from one or more of the images. Once the object's identity is known, such information is used in training a classifier (e.g., a machine learning system) to recognize the object from others of the captured images, including images that may be degraded by blur, inferior lighting, etc. In another arrangement, such degraded images are processed to identify feature points useful in fingerprint-based identification of the object. Feature points extracted from such degraded imagery aid in fingerprint-based recognition of objects under real life circumstances, as contrasted with feature points extracted from pristine imagery (e.g., digital files containing label artwork for such objects).
    Type: Application
    Filed: March 1, 2017
    Publication date: August 24, 2017
    Inventors: Tony F. Rodriguez, Osama M. Alattar, Hugh L. Brunk, Joel R. Meyer, William Y. Conwell, Ajith Mulki Kamath
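
A minimal sketch of the self-labeling idea in the abstract above: whenever a frame in a checkout sequence yields a barcode or watermark decode, that identifier labels all frames of the same sequence, blurry ones included, so a recognizer learns to handle degraded views. The toy features, the decode_id stand-in, and the use of scikit-learn's SGDClassifier are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def frame_features(frame):
    """Toy feature vector: a coarse grayscale intensity histogram."""
    hist, _ = np.histogram(frame, bins=32, range=(0, 255), density=True)
    return hist

def train_from_sequences(sequences, decode_id, known_ids):
    """sequences: iterable of frame lists, each list depicting one object."""
    clf = SGDClassifier(loss="log_loss")
    for frames in sequences:
        # Use the first successful decode in the sequence as the label for all frames.
        label = next((i for i in (decode_id(f) for f in frames) if i is not None), None)
        if label is None:
            continue
        X = np.stack([frame_features(f) for f in frames])
        y = np.full(len(frames), label)
        clf.partial_fit(X, y, classes=np.asarray(known_ids))
    return clf
```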
  • Publication number: 20170236407
    Abstract: Mobile phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some aspects relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others concern user interface improvements. Other aspects relate to imaging architectures, in which a mobile phone's image sensor is one in a chain of stages that successively act on packetized instructions/data, to capture and later process imagery. Still other aspects relate to distribution of processing tasks between the mobile device and remote resources (“the cloud”). Elemental image processing (e.g., simple filtering and edge detection) can be performed on the mobile phone, while other operations can be referred out to remote service providers.
    Type: Application
    Filed: August 11, 2014
    Publication date: August 17, 2017
    Inventors: Geoffrey B. Rhoads, Nicole Rhoads, Brian T. MacIntosh, William Y. Conwell
  • Publication number: 20170143249
    Abstract: Reference imagery of dermatological conditions is compiled in a crowd-sourced database (contributed by clinicians and/or the lay public), together with associated diagnosis information. A user later submits a query image to the system (e.g., captured with a smartphone). Image-based derivatives for the query image are determined (e.g., color histograms, FFT-based metrics, etc.), and are compared against similar derivatives computed from the reference imagery. This comparison identifies diseases that are not consistent with the query image, and such information is reported to the user. Depending on the size of the database, and the specificity of the data, 90% or more of candidate conditions may be effectively ruled-out, possibly sparing the user from expensive and painful biopsy procedures, and granting some peace of mind (e.g., knowledge that an emerging pattern of small lesions on a forearm is probably not caused by shingles, bedbugs, malaria or AIDS).
    Type: Application
    Filed: November 28, 2016
    Publication date: May 25, 2017
    Inventors: Bruce L. Davis, Tony F. Rodriguez, Alastair M. Reed, John Stach, Geoffrey B. Rhoads, William Y. Conwell, Shankar Thagadur Shivappa, Ravi K. Sharma, Richard F. Gibson
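
A minimal sketch of the rule-out logic in the abstract above: compute simple image-based derivatives (here just a color histogram) for the query photo, compare them with the same derivatives from crowd-sourced reference images per condition, and report conditions whose references are all far from the query. The distance metric and threshold are invented for illustration.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalized joint RGB histogram as a flat vector (image is HxWx3, uint8)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()

def rule_out(query_image, reference_db, threshold=0.5):
    """reference_db maps condition name -> list of reference histograms."""
    q = color_histogram(query_image)
    ruled_out = []
    for condition, refs in reference_db.items():
        # L1 distance to the closest reference image for this condition.
        best = min(np.abs(q - r).sum() for r in refs)
        if best > threshold:        # no reference is even roughly similar
            ruled_out.append(condition)
    return ruled_out
```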
  • Patent number: 9609107
    Abstract: A smart phone senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns phone-based cognition of a scene viewed by the phone's camera. In one detailed arrangement, image processing tasks applied to the scene are selected from among various alternatives by reference to resource costs, resource constraints, other stimulus information (e.g., audio), task substitutability, etc. The phone applies more or less resources to an image processing task depending on how successfully the task is proceeding, or based on the user's apparent interest in the task. In another detailed arrangement, data is referred to the cloud for analysis, or for gleaning. In still another detailed arrangement, cognition, and identification of appropriate device response(s), is aided by collateral information, such as context. A great number of other features and arrangements are also detailed.
    Type: Grant
    Filed: July 2, 2013
    Date of Patent: March 28, 2017
    Assignee: Digimarc Corporation
    Inventors: Tony F. Rodriguez, Gilbert B. Shaw, William Y. Conwell
  • Patent number: 9609117
    Abstract: The present technology concerns improvements to smart phones and related sensor-equipped systems. Some embodiments involve spoken clues, e.g., by which a user can assist a smart phone in identifying what portion of imagery captured by a smart phone camera should be processed, or identifying what type of image processing should be conducted. Some arrangements include the degradation of captured content information in accordance with privacy rules, which may be location-dependent, or based on the unusualness of the captured content, or responsive to later consultation of the stored content information by the user. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: March 28, 2017
    Assignee: Digimarc Corporation
    Inventors: Bruce L. Davis, Tony F. Rodriguez, Geoffrey B. Rhoads, William Y. Conwell, John Stach
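
A minimal sketch of the location-dependent privacy-rule idea in the abstract above: captured content is kept, degraded, or discarded according to where it was captured. The zone table, policies, and pixelation step are invented for illustration and do not come from the patent.

```python
import numpy as np

PRIVACY_RULES = {          # location zone -> retention policy (hypothetical)
    "home": "keep",
    "workplace": "blur",
    "medical_facility": "discard",
}

def pixelate(image, block=16):
    """Crude degradation: replace each block of pixels with its mean value."""
    h, w = image.shape[:2]
    small = image[:h - h % block, :w - w % block]
    blocks = small.reshape(h // block, block, w // block, block, -1).mean(axis=(1, 3))
    return np.repeat(np.repeat(blocks, block, axis=0), block, axis=1).astype(image.dtype)

def apply_privacy_rules(image, zone):
    policy = PRIVACY_RULES.get(zone, "blur")   # degrade content from unknown zones
    if policy == "keep":
        return image
    if policy == "blur":
        return pixelate(image)
    return None                                # "discard": store nothing
```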
  • Patent number: 9594983
    Abstract: A sequence of images depicting an object is captured, e.g., by a camera at a point-of-sale terminal in a retail store. The object is identified, such as by a barcode or watermark that is detected from one or more of the images. Once the object's identity is known, such information is used in training a classifier (e.g., a machine learning system) to recognize the object from others of the captured images, including images that may be degraded by blur, inferior lighting, etc. In another arrangement, such degraded images are processed to identify feature points useful in fingerprint-based identification of the object. Feature points extracted from such degraded imagery aid in fingerprint-based recognition of objects under real life circumstances, as contrasted with feature points extracted from pristine imagery (e.g., digital files containing label artwork for such objects).
    Type: Grant
    Filed: August 1, 2014
    Date of Patent: March 14, 2017
    Assignee: Digimarc Corporation
    Inventors: Osama M. Alattar, William Y. Conwell
  • Patent number: 9557162
    Abstract: A smart phone senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns phone-based cognition of a scene viewed by the phone's camera. The image processing tasks applied to the scene can be selected from among various alternatives by reference to resource costs, resource constraints, other stimulus information (e.g., audio), task substitutability, etc. The phone can apply more or less resources to an image processing task depending on how successfully the task is proceeding, or based on the user's apparent interest in the task. In some arrangements, data may be referred to the cloud for analysis, or for gleaning. Cognition, and identification of appropriate device response(s), can be aided by collateral information, such as context. A great number of other features and arrangements are also detailed.
    Type: Grant
    Filed: July 16, 2013
    Date of Patent: January 31, 2017
    Assignee: Digimarc Corporation
    Inventors: Tony F. Rodriguez, William Y. Conwell
  • Publication number: 20160322082
    Abstract: Arrangements involving portable devices (e.g., smartphones and tablet computers) are disclosed. One arrangement enables a content creator to select software with which that creator's content should be rendered—assuring continuity between artistic intention and delivery. Another utilizes a device camera to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography.
    Type: Application
    Filed: April 27, 2016
    Publication date: November 3, 2016
    Inventors: Bruce L. Davis, Tony F. Rodriguez, Geoffrey B. Rhoads, William Y. Conwell, Jerrine K. Owen, Adnan M. Alattar, Eliot Rogers, Brett A. Bradley, Alastair M. Reed, Robert Craig Brandis
  • Patent number: 9445763
    Abstract: Audio sounds are captured from a subject's body, e.g., using a smartphone or a worn array of microphones. Plural features are derived from the captured audio, and serve as fingerprint information. One such feature may be a time interval over which a threshold part of spectral energy in the audio is expressed. Another may be a frequency bandwidth within which a second threshold part of the spectral energy is expressed. Such fingerprint information is provided to a knowledge base that contains reference fingerprint data and associated metadata. The knowledge base matches the fingerprint with reference fingerprint data, and provides associated metadata in return—which can comprise diagnostic information related to the captured sounds. In some arrangements, an audio signal or pressure waveform stimulates the body at one location, and is sensed at another, to discern information about the intervening transmission medium. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: September 20, 2016
    Assignee: Digimarc Corporation
    Inventors: Bruce L. Davis, Tony F. Rodriguez, William Y. Conwell, Shankar Thagadur Shivappa, Ravi K. Sharma, Richard F. Gibson
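
A minimal numpy sketch of the two example fingerprint features named in the abstract above: the time interval over which a threshold fraction of the captured signal's energy is expressed, and the frequency bandwidth containing a second threshold fraction of the spectral energy. The thresholds, windowing, and test signal are illustrative choices, not values from the patent.

```python
import numpy as np

def energy_time_interval(signal, sample_rate, frac=0.9):
    """Seconds spanned by the central `frac` portion of cumulative signal energy."""
    energy = np.cumsum(signal.astype(float) ** 2)
    energy /= energy[-1]
    lo = np.searchsorted(energy, (1 - frac) / 2)
    hi = np.searchsorted(energy, 1 - (1 - frac) / 2)
    return (hi - lo) / sample_rate

def energy_bandwidth(signal, sample_rate, frac=0.9):
    """Hz spanned by the central `frac` portion of cumulative spectral energy."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    cum = np.cumsum(spectrum)
    cum /= cum[-1]
    lo = np.searchsorted(cum, (1 - frac) / 2)
    hi = np.searchsorted(cum, 1 - (1 - frac) / 2)
    return freqs[min(hi, len(freqs) - 1)] - freqs[lo]

if __name__ == "__main__":
    # Example: a synthetic heartbeat-like pulse train sampled at 4 kHz.
    sr = 4000
    t = np.arange(0, 2.0, 1.0 / sr)
    sig = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
    print(energy_time_interval(sig, sr), energy_bandwidth(sig, sr))
```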