Abstract: Methods, apparatuses and systems directed to detecting objects in user-uploaded multimedia such as photos and videos, determining the location at which the media was captured, inferring a set of users of a social network who were physically present at the time and place of capture, and pushing remarketing content for the detected objects, or alternatively for competitors associated with the detected objects, to the set of inferred users.
Type:
Grant
Filed:
August 18, 2011
Date of Patent:
June 6, 2017
Assignee:
Facebook, Inc.
Inventors:
Justin Mitchell, Samuel Odio, David Harry Garcia
Abstract: A history generating apparatus includes an image generating section, an image reading section and a history generating section. The image generating section generates a bitmap first image and generates initial attribute information indicating attributes of respective pixels of the first image. The image reading section reads an image based on the first image formed on paper so as to generate a bitmap second image. The history generating section separates the second image into image regions by attribute indicated by the initial attribute information, performs image processing on each of the image regions according to the attribute and combines the image regions so as to generate a history image.
Abstract: An information processing method and an electronic device are provided in the present disclosure. The method includes: acquiring an adjustment instruction for an icon object in display content, where the adjustment instruction includes parameter information, a display mode of at least one of an icon object of a first type and an icon object of a second type is determined according to the parameter information, and the icon object of the first type is configured to carry the icon object of the second type; and adjusting the icon object in the display content based on the parameter information in response to the adjustment instruction, so that the icon object presents an adjusted display effect via the display unit.
Abstract: An image processing apparatus including: an acquisition unit configured to acquire a boundary between a target area and a non-target area from an image; a setting unit configured to set an undefined area on the periphery of the boundary in a width based on feature quantities of peripheral pixels of the boundary; and a generating unit configured to define an area excluding the undefined area from the target area as the foreground area, define an area excluding the undefined area from the non-target area as the background area, and generate area information that specifies the foreground area, the background area, and the undefined area.
Abstract: Various embodiments enable a computing device to perform tasks such as processing an image to recognize text or an object in an image to identify a particular product or related products associated with the text or object. In response to recognizing the text or the object as being associated with a product available for purchase from an electronic marketplace, one or more advertisements or product listings associated with the product can be displayed to the user. Accordingly, additional information for the associated product can be displayed, enabling the user to learn more about and purchase the product from the electronic marketplace through the portable computing device.
Type:
Grant
Filed:
September 23, 2015
Date of Patent:
May 9, 2017
Assignee:
A9.com, Inc.
Inventors:
Xiaofan Lin, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Atul Kumar, Yu Lou
Abstract: A method for creating an announcement stream for a geographic region is provided. The method receives, at a designated computer system, characterizing metadata for a first audio/video stream; analyzes a second audio/video stream to obtain characterizing metadata for the second stream; compares, with the computer system, the characterizing metadata for the first stream to the characterizing metadata for the second stream to generate offset data; and calculates timing information corresponding to segment boundaries for the second stream using the offset data.
Abstract: A computer-implemented system and method for generating and placing cluster groups is provided. A set of clusters each having one or more documents is maintained. A portion of the clusters is assigned to two or more spines. The cluster spines are ordered by cluster length. Those cluster spines that are unique are identified in decreasing cluster length order and placed. One or more of the remaining spines are placed in relation to at least one of the placed unique spines. At least one of the remaining clusters is placed in relation to one or more of the unique placed spines. Groups of clusters are displayed. Each group includes two or more of one such unique placed spine, the remaining spines placed in relation to that placed unique spine; and the remaining clusters placed in relation to the unique placed spine.
Abstract: Provided are an apparatus and method for automatically generating a visual annotation with respect to a massive image based on a visual language. The apparatus for automatically generating a visual annotation based on a visual language includes an image input unit configured to receive an input image, an image analyzing unit configured to extract feature information of the input image received by the image input unit, a searching unit configured to search for a similar image with respect to the input image and text information included in the similar image by using the feature information extracted by the image analyzing unit, and a visual annotation configuring unit configured to configure a visual annotation with respect to the input image by using the text information found by the searching unit.
Type:
Grant
Filed:
June 30, 2014
Date of Patent:
March 28, 2017
Assignee:
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Abstract: A handwriting input/output system that allows letters, characters and figures to be input by hand. The handwriting input/output system includes an imaging device that captures and transmits the image of a medium provided with a writing area having a dot pattern that defines coordinate information and code information, a dot pattern analysis device that calculates trajectory information by storing the image data of the dot pattern and analyzing the code of the image data, a trajectory recognition device that recognizes the trajectory information of the letter, character, and figure traced on the writing area based on a change in the analyzed coordinate information, and a process instruction device that transmits a process instruction based on the recognized information together with the trajectory information to an information processing device, whereby a letter, character, or figure can be input by hand.
Abstract: A method, apparatus and computer program product are provided for image registration in the gradient domain. A method is provided including receiving a first image and second image; and registering the first and second images in a gradient domain. The registration of the first and second images in the gradient domain includes applying an energy minimization function based on total variation.
Abstract: An electronic device having a memory, a display, control apparatus and a data processor. When the data processor executes a program stored in the memory of the electronic device, the electronic device receives an image containing a person, the image further containing a face component, the face component containing the face of the person; displays the image on the display; locates the face component within the image; emphasizes the face component; receives image annotation information concerning the face component; and saves the image annotation information to the memory of the electronic device. The image annotation information may take the form of contact information for use in a contact database. A method uses the face component to locate other images in the database that also contain the person, and adds the annotation information to those images.
Abstract: The invention relates to a method for predicting a pixel block of an image using a weighted sum of pixel blocks belonging to patches of a dictionary from a set of candidate patches, each patch being formed of a pixel block of an image and a causal neighborhood of this pixel block. The method is characterized in that a subset of candidate patches is obtained from the set of candidate patches and the dictionary is formed of a patch of the subset of candidate patches, called the first patch, and at least one other patch of said set of candidate patches, called the second patch.
Abstract: Disclosed here is a system for a vehicle including a personal electronic device in communication with a vehicle head unit, the personal electronic device synchronized with the vehicle head unit for controlling which application icons appear on the vehicle head unit. Also disclosed is a computer storage media having embodied thereon computer-useable instructions that, when executed, perform a method, the method includes synchronizing the display of a plurality of applications that each operate a respective vehicle system between a vehicle head unit and a personal electronic device.
Type:
Grant
Filed:
November 20, 2014
Date of Patent:
January 10, 2017
Assignee:
Toyota Motor Engineering & Manufacturing North America, Inc.
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for tagging media files from a media capture device with location information gathered from a portable device when the portable device determines that a user of the media capture device frequents the location and that the media item was captured in the location.
Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask correspond to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
Type:
Grant
Filed:
May 28, 2015
Date of Patent:
January 3, 2017
Assignee:
Adobe Systems Incorporated
Inventors:
Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
Abstract: A method (100) is provided for characterizing a functionality of a flash (14) from at least one image (200) captured with a camera (12) using the flash (14). The method (100) includes: analyzing the image (200) by segmenting (106) the image (200) into at least one sub-region (206, 208), and applying a metric (108) to the sub-region (206, 208) to measure an image property within the sub-region (206, 208); and determining (110) the functionality of the flash (14) in response to a result of the applied metric.
Type:
Grant
Filed:
May 15, 2012
Date of Patent:
January 3, 2017
Assignee:
Palo Alto Research Center, Incorporated
Inventors:
Ajay Raghavan, Juan Liu, Robert R. Price
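As a rough illustration of the segment-and-measure approach in the flash-characterization abstract above, the sketch below splits a grayscale image into horizontal bands and applies a mean-brightness metric to each band. The band count, the metric, and the threshold are all assumptions for illustration; the patent does not fix any of them.

```python
def subregions(img, n):
    """Segment a 2-D grayscale image (list of rows) into n horizontal bands."""
    band = len(img) // n
    return [img[i * band:(i + 1) * band] for i in range(n)]

def mean_brightness(region):
    """The applied metric: average pixel intensity within a sub-region."""
    pixels = [p for row in region for p in row]
    return sum(pixels) / len(pixels)

def flash_ok(img, n=2, threshold=100):
    """Judge the flash functional if every sub-region meets the metric."""
    return all(mean_brightness(r) >= threshold for r in subregions(img, n))
```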
Abstract: In a combined image obtained by superposing a character image on an image such that a second text rendered in the character image overlaps a first text rendered in the image, an image processing device according to the present invention determines, as a deficient pixel D, a part of a character portion constituting a pre-translation character string that is not covered by non-transparent pixels constituting a post-translation character string, and corrects the color attribute of the deficient pixel D in the combined image by utilizing the color attribute of a part of the combined image or of the image.
Abstract: This invention relates to a method for mapping Wi-Fi fingerprints in a given geographical area. A user, provided with a receiver, sequentially collects Wi-Fi signal strength measurements by moving along the path thereof, with each measurement being constituted by a vector of RSS levels received from the various Wi-Fi access points. A graph that is representative of the path is determined from the sequence of measurements. Thanks to a geometric model of the area, a topographical characterization of the path is carried out on the basis of the previously obtained graph. The positions of the various measurements of the sequence are provided by the topographical characterization and stored with said measurements in a database.
Type:
Grant
Filed:
February 18, 2013
Date of Patent:
December 6, 2016
Assignee:
Pole Star
Inventors:
Baptiste Godefroy, Nicolas Etienne, Jean-Baptiste Prost, Cyril Laderriere
Abstract: A message can be encoded in an image file by mapping at least one bit of the message onto each sub-unit of the image file, and adjusting a distinguishable characteristic of each sub-unit according to the corresponding bit to produce a modified image file. The message can be decoded from the message file by comparing each sub-unit of the modified image file with a corresponding sub-unit of the original image file, and identifying at least one bit of the message based on each comparison.
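A minimal sketch of the encode/decode scheme described above, assuming each sub-unit is a single grayscale pixel and the distinguishable characteristic is its least-significant bit; neither choice is specified by the abstract.

```python
def encode(original, bits):
    """Map one message bit onto each pixel: flip the pixel's LSB for a
    1 bit, leave the pixel unchanged for a 0 bit."""
    modified = list(original)
    for i, b in enumerate(bits):
        if b:
            modified[i] ^= 1
    return modified

def decode(original, modified, n_bits):
    """Recover the message by comparing each sub-unit of the modified
    image with the corresponding sub-unit of the original image."""
    return [int(o != m) for o, m in zip(original, modified)][:n_bits]
```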
Abstract: An information processing apparatus is openable by a first case and a second case being unfolded, and includes an imaging section which is provided in the second case and opposed to the first case where a read target medium is placed; a recognition section which, when a predetermined indicator on the first case side is photographed by the imaging section, recognizes the predetermined indicator by analyzing a photographed image; a judgment section which judges whether or not the first case and the second case have been closed into a predetermined state, based on a result of recognition of the indicator by the recognition section; and a determination section which determines the photographed image taken by the imaging section as a storage target, when the judgment section judges that the first case and the second case have been closed into the predetermined state.
Abstract: Provided is a new hierarchical methodology having a series of computational steps such as adaptive window creation, 2-D SWT application, masking, and boundary tracing. The techniques and systems are able to detect and quantify fractures as well as to generate recommendations for decision-making and treatment planning in traumatic pelvic injuries.
Type:
Grant
Filed:
November 1, 2013
Date of Patent:
November 1, 2016
Assignee:
Virginia Commonwealth University
Inventors:
Jie Wu, Rosalyn Hobson Hargraves, Kayvan Najarian, Ashwin Belle, Kevin R. Ward
Abstract: Embodiments of the present disclosure provide a method for guiding a user to capture an image of a target object using an image capturing device. In an embodiment, the method of the present disclosure comprises determining a bounding area for the image to be captured and capturing at least one frame of the image upon detecting that the image is inside the bounding area. Then, the target object in the captured at least one frame is segmented by separating the target object from the rest of the image. Further, at least one of symmetry and self-similarity of the segmented target object is determined. In addition, at least one image parameter is determined by a sensor. The method then provides inputs for guiding the user to capture a final image of the target object, based on at least one of the determined symmetry, the self-similarity, and the at least one image parameter.
Abstract: An image forming apparatus includes a forming unit that forms a toner image with a substantially flat toner containing a substantially flat metal pigment on a movable body; a transfer unit that forms a nip with the movable body and transfers the toner image on a medium transported to the nip; and a controller that, if at least one of a first condition and a second condition is satisfied, the first condition being that an image width from data for allowing the forming unit to form the toner image is larger than a predetermined width, and the second condition being that an area coverage from the data is higher than a predetermined area coverage, causes the forming unit to form a toner image with a corrected area coverage lower than the area coverage from the data.
Abstract: An apparatus and method of controlling a mobile terminal by detecting a face or an eye in an input image are provided. The method includes performing face recognition on an input image captured by an image input unit equipped on the front face of the mobile terminal; determining, based on the face recognition, user state information that includes whether a user exists, a direction of the user's face, a distance from the mobile terminal, and/or a position of the user's face; and performing a predetermined function of the mobile terminal according to the user state information. According to the method, functions of the mobile terminal may be controlled without direct inputs from the user.
Type:
Grant
Filed:
December 9, 2015
Date of Patent:
October 4, 2016
Assignee:
Samsung Electronics Co., Ltd
Inventors:
Byung-Jun Son, Hong-Il Kim, Tae-Hwa Hong
Abstract: An estimator training method and a pose estimating method using a depth image are disclosed, in which the estimator training method may train an estimator configured to estimate a pose of an object, based on an association between synthetic data and real data, and the pose estimating method may estimate the pose of the object using the trained estimator.
Abstract: In one embodiment, a security label comprises a random arrangement of printed LEDs. During fabrication of the label, the LEDs are energized, and the resulting dot pattern is converted into a unique digital first code and stored in a database. The label is then attached to an object to be later authenticated, or the LEDs are printed directly on the object, such as a passport, license, bank note, certificate, etc. For authenticating the object, the LEDs are energized and the dot pattern is converted into a code. The code is compared to the first code stored in the database. If there is a match, the object is authenticated. The label may also have a printed second code associated with the first code, and both codes must match codes stored in the database for authentication. The general shape of the printed pattern may convey the proper orientation of the pattern.
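One way to picture the pattern-to-code step in the security-label abstract above: canonicalize the dot positions and hash them into a digital code, then authenticate by database lookup. The SHA-256 encoding and the (x, y) dot representation are purely illustrative assumptions; the patent does not prescribe how the dot pattern becomes a code.

```python
import hashlib

def pattern_code(dots):
    """Convert the (x, y) positions of the energized printed LEDs into a
    unique digital code; sorting makes the code independent of the order
    in which the dots are read out."""
    canonical = ",".join(f"{x}:{y}" for x, y in sorted(dots))
    return hashlib.sha256(canonical.encode()).hexdigest()

def authenticate(dots, database):
    """The object is authenticated if its code matches a stored code."""
    return pattern_code(dots) in database
```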
Abstract: According to one embodiment, an existent person count estimation apparatus includes motion sensors and the following units. The collection unit generates human sensing information. The instance prediction unit predicts second instances from the first instances by using the transition matrix. The likelihood calculation unit calculates likelihoods of the second instances using the time information items. The instance selection unit selects one or more third instances having likelihoods higher than a threshold. The output unit generates output information including estimate values of existent person counts for the first areas included in the third instances.
Abstract: A one-touch application is launched by a user of a mobile communication device (MCD) selecting a single selector element on the MCD, and in response the MCD automatically and without further user interaction establishes near field communication (NFC) with an electronic product, receives identifying data from the electronic product over NFC, and uploads the identifying data wirelessly to a cloud server associated with a manufacturer of the electronic product. The server sends back to the MCD and/or to the electronic product information about the electronic product.
Type:
Grant
Filed:
July 16, 2014
Date of Patent:
September 13, 2016
Assignee:
Sony Corporation
Inventors:
Christopher Mark Ohren, Charles Donald Hedrick, Jr., Herbert Sleeper, Sarvesh Chinnappa, Samuel David Rosewall, Jr., Jason Leigh Transfiguracion
Abstract: In embodiments, one or more computer-readable media may have instructions stored thereon which, when executed by a processor of a computing device provide the computing device with a redaction module. The redaction module may be configured to receive a request to redact a selection of text from a document and identify instances of the text occurring within the document through an analysis of word coordinate information of an image of the document. The redaction module may further be configured to generate redaction information, including redaction coordinates, the redaction coordinates may be based on the word coordinate information associated with respective instances of the text occurring within the document. The redactions, when applied to the image in accordance with the redaction coordinates, may redact the respective instances of the text. Other embodiments may be described and/or claimed.
Type:
Grant
Filed:
September 6, 2013
Date of Patent:
September 6, 2016
Assignee:
Lighthouse Document Technologies, Inc.
Inventors:
Christopher Byron Dahl, Debora Noemi Motyka Jones, Kevin Patrick O'Neill, Geoffrey Alan David Belger, Vladas Walter Mazelis, Nathaniel Byington, Beau Hodges Holt, John Charles Olson
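The coordinate-based redaction described in the abstract above can be sketched as follows, with the document's word coordinate information modeled as (text, bounding-box) pairs; the data layout and the case-insensitive match are assumptions for illustration.

```python
def redaction_coordinates(word_coords, selection):
    """Identify every instance of the selected text in the word
    coordinate information of a document image and return one redaction
    box (x, y, width, height) per instance."""
    target = selection.lower()
    return [box for word, box in word_coords if word.lower() == target]
```

Applying opaque rectangles at the returned coordinates would then redact each instance in the image.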
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a location relative to an object and a type of a light source that illuminated the object when the image was captured, are described. A method performed by a process executing on a computer system includes identifying an object of interest in a digital image. The method further includes projecting at least a portion of the digital image corresponding to the object of interest onto a three dimensional (3D) model that includes a polygon-mesh corresponding to the object's shape. The method further includes determining one or more properties of a light source that illuminated the object in the digital image at an instant that the image was captured based at least in part on a characteristic of one or more polygons in the 3D model onto which the digital image portion was projected.
Abstract: Various embodiments provide methods and systems for identifying text in an image by applying suitable text detection parameters in text detection. The suitable text detection parameters can be determined based on parameter metric feedback from one or more text identification subtasks, such as text detection, text recognition, preprocessing, character set mapping, pattern matching and validation. In some embodiments, the image can be defined into one or more image regions by performing glyph detection on the image. Text detection parameters applying to each of the one or more image regions can be adjusted based on one or more measured parameter metrics in the respective image region.
Type:
Grant
Filed:
August 3, 2015
Date of Patent:
September 6, 2016
Inventors:
Xiaofan Lin, Adam Wiggen Kraft, Yu Lou, Douglas Ryan Gray, Colin Jon Taylor
Abstract: An example image processing method includes a step of acquiring data containing an image; an extraction step of extracting a first image region from the acquired data based on the type of software that generated the acquired data; an extraction step of extracting a second image region that is the same as or similar to each of the images held in a storage unit from the acquired data by comparing the acquired data and each of the images; an extraction step of extracting a third image region that is the same as or similar to each of the images from the acquired data by comparing the image feature amount of the acquired data and the image feature amount of each of the images; and a step of identifying an image in the acquired data to be stored in the storage unit based on the image regions and reliabilities of the extraction steps.
Abstract: Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Type:
Grant
Filed:
August 17, 2015
Date of Patent:
August 23, 2016
Assignee:
Battelle Memorial Institute
Inventors:
Ryan E. Hohimer, Frank L. Greitzer, Shawn D. Hampton
Abstract: An object detection apparatus mounted in a system for detecting a target object in various changing environmental conditions. In the apparatus, an acquirer acquires either or both of information indicative of an external environment around the system and information indicative of an operating state of the system. A determiner determines an environmental condition with respect to an input image according to the acquired information. A setter sets, for each of plural image recognition methods each being a combination of one of the plural image recognition dictionaries and one of the plural image recognition techniques, a weighting factor according to the environmental condition determined by the determiner. A detector detects the object in the input image by applying each of the plural image recognition methods to the input image to obtain recognition results reflecting the weighting factors, and collectively evaluating the recognition results.
Abstract: A system and method is presented for assembling information contained within a financial instrument for subsequent verification of the financial instrument, wherein the financial instrument includes an identifier corresponding to a payee of the financial instrument. An imaging device, such as, for example, a mobile phone or the like, is used to obtain an image of the financial instrument prior to the financial instrument being transferred to the payee. Information is extracted from the image of the financial instrument, and the information is transferred to a storage area that is separate and remote from the imaging device.
Type:
Grant
Filed:
February 27, 2013
Date of Patent:
August 2, 2016
Assignee:
Bottomline Technologies (DE) Inc.
Inventors:
Michael Vigue, Sean Mallean, Jessica Cheney
Abstract: The present technology relates to an image processing device and method, and program, whereby a high-quality loop moving image can be obtained more easily. Upon continuous shot images of multiple frames serving as material of a loop moving image being input, the continuous shot images are divided into multiple segments including motion segments including a moving subject and a still segment including a subject with no motion. An image of the motion segment region of the continuous shot images is formed into a loop moving image, and a segment loop moving image is generated. Also, an image of a still segment region of the continuous shot images is clipped out, and a segment image which is a still image is generated. Further, these segment loop moving image and segment image are integrated, to form one loop moving image. The present invention can be applied to an image processing device.
Abstract: A method for processing a batch of scanned images is provided. The method comprises processing the scanned images into documents. For documents of multiple pages, the method comprises maintaining a page-based coordinate system to specify a location of structures within a page and joining the pages to form a multi-page sheet having a sheet-based coordinate system to specify a location of structures within the multi-page sheet. The method comprises performing a data extraction operation to extract data from each document, said data extraction operation including a page mode wherein structures are detected on individual pages using the page-based coordinate system and a document mode wherein structures are detected within the entire document using the sheet-based coordinate system.
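The two coordinate systems in the abstract above can be related by a simple offset once pages are joined into a multi-page sheet; the vertical join order and uniform page height below are assumptions for illustration.

```python
def page_to_sheet(page_index, page_height, x, y):
    """Translate a structure's page-based (x, y) position into
    sheet-based coordinates on the joined multi-page sheet, assuming
    pages are stacked vertically in order."""
    return (x, y + page_index * page_height)
```

A data extraction operation in document mode could then locate structures anywhere in the sheet using these translated coordinates.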
Abstract: Provided is a medical image processing device capable of notifying the diagnosis personnel that a segmentation error has occurred or may have occurred during tissue segmentation processing. This medical image processing device specifies a gray matter image of a subject, smoothes the gray matter image, and, in accordance with an evaluation function for calculating an absolute Z score, calculates an evaluation value. Next, the medical image processing device compares the evaluation value with a pre-defined threshold value and determines the segmentation result, and, if the segmentation result is determined to be abnormal, warns that the segmentation result is abnormal and displays a segmentation result display screen showing the segmentation result.
Abstract: There is provided a method of processing a digital image including: (a) obtaining a plurality of images; (b) converting the plurality of images into histograms; (c) setting one of the plurality of images as a reference image and another of the plurality of images as a comparison target image; (d) adjusting a distribution of the histogram of the reference image to match a distribution of the histogram of the comparison target image to produce an adjusted reference image; (e) comparing a difference between the adjusted reference image and the comparison target image to produce a masking image; (f) applying the masking image to the comparison target image to produce an adjusted comparison target image; and (g) combining the reference image and the adjusted comparison target image to produce a high dynamic range (HDR) image. Accordingly, even if there is a complex motion on a subject, a clear image without an image overlap or a ghost effect may be obtained when producing the HDR image.
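Step (d) above, adjusting the reference image's histogram distribution to match the comparison target's, is classically done by CDF matching. A small grayscale sketch follows; the exact matching rule is an assumption, since the abstract does not specify it.

```python
def histogram(img, levels):
    """Per-level pixel counts for a flat list of intensities."""
    h = [0] * levels
    for p in img:
        h[p] += 1
    return h

def cdf(hist, total):
    """Cumulative distribution of a histogram, normalized to [0, 1]."""
    c, running = [], 0
    for count in hist:
        running += count
        c.append(running / total)
    return c

def match_histogram(src, tgt, levels=256):
    """Remap src so its intensity distribution approximates tgt's:
    each source level maps to the target level with the nearest CDF."""
    src_cdf = cdf(histogram(src, levels), len(src))
    tgt_cdf = cdf(histogram(tgt, levels), len(tgt))
    lut, j = [], 0
    for i in range(levels):
        while j < levels - 1 and tgt_cdf[j] < src_cdf[i]:
            j += 1
        lut.append(j)
    return [lut[p] for p in src]
```

Steps (e)-(g) would then difference the adjusted reference against the target to build the masking image before combining into the HDR result.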
Abstract: A method (2304) of decoding a QR code having two initially detected finder patterns (2901, 2902; 2911, 2912) is provided. The method forms (2402) a pattern matching template (2700, 2800) based on characteristics of the detected finder patterns and determines (2403) at least one candidate region (2904, 2905; 2913, 2914) about the detected finder patterns. The candidate region is typically based at least on the relative positions of the detected finder patterns. The method detects (2404) a previously undetected third finder pattern of the QR code in the at least one candidate region by correlating content of the candidate region with the pattern matching template. With the identified third finder pattern and each of the two initially detected finder patterns, decoding (2305) the QR code can then be performed. Also disclosed is a method of detecting a two-dimensional code comprising known target features and coded data in an image.
Type:
Grant
Filed:
December 1, 2009
Date of Patent:
May 31, 2016
Assignee:
CANON KABUSHIKI KAISHA
Inventors:
James Swayn, Alvin Wai Leong Yeap, Stephen Edward Ecob
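The correlation step (2404) in the QR-decoding abstract above, scoring a candidate region against the pattern matching template, can be pictured on binary grids as the fraction of agreeing cells. Real implementations typically use normalized cross-correlation on grayscale data; the grids and threshold here are illustrative assumptions.

```python
def correlate(region, template):
    """Score a candidate region against the template as the fraction of
    matching cells (1.0 = perfect match)."""
    flat = [r == t for reg_row, tpl_row in zip(region, template)
            for r, t in zip(reg_row, tpl_row)]
    return sum(flat) / len(flat)

def find_finder_pattern(candidates, template, threshold=0.9):
    """Return the first candidate region whose correlation with the
    template clears the threshold, or None if no region qualifies."""
    for region in candidates:
        if correlate(region, template) >= threshold:
            return region
    return None
```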
Abstract: Systems, methods, and computer program products for smart, automated capture of textual information using optical sensors of a mobile device are disclosed. The textual information is provided to a mobile application or workflow without requiring user intervention such as manual data entry or a copy/paste operation. The capture and provision are context-aware, and can normalize or validate the captured textual information prior to entry in the workflow or mobile application. Other information needed by the workflow and available to the mobile device optical sensors may also be captured and provided in a single automatic process. As a result, the overall process of capturing information from optical input using a mobile device is significantly simplified and improved in terms of accuracy of data transfer/entry, speed and efficiency of workflows, and user experience.
Abstract: Determining and using word information entropies includes: determining one or more categories that correspond to a plurality of queries; sorting the plurality of queries into one or more groups based at least in part on the determined categories of the plurality of queries; segmenting queries that correspond to each of the one or more groups into a first plurality of phrases, wherein each phrase includes one or more words; determining occurrence probabilities for the plurality of phrases; and determining word information entropies for the plurality of phrases based at least in part on the determined occurrence probabilities.
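The entropy computation described above can be sketched briefly. This is a hedged illustration, not the patented pipeline: it assumes queries are already grouped by category and pre-segmented into phrases, and it computes each phrase's information entropy over its occurrence distribution across categories (a high entropy suggests the phrase is not specific to any category).

```python
import math
from collections import Counter, defaultdict

def phrase_entropies(grouped_queries):
    """grouped_queries: dict mapping a category name to a list of
    queries, each query pre-segmented into a list of phrases.
    Returns phrase -> entropy of its occurrence distribution
    across categories."""
    counts = defaultdict(Counter)          # phrase -> Counter(category)
    for category, queries in grouped_queries.items():
        for query in queries:
            for phrase in query:
                counts[phrase][category] += 1
    entropies = {}
    for phrase, per_cat in counts.items():
        total = sum(per_cat.values())
        entropies[phrase] = -sum(
            (c / total) * math.log2(c / total) for c in per_cat.values()
        )
    return entropies
```

For example, a phrase appearing equally often in two categories gets entropy 1.0 bit, while a phrase confined to one category gets 0.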
Abstract: A terminal device, an information display system and a method of controlling the terminal device are disclosed. The terminal device includes a camera unit configured to capture a picture that includes an external device connected to a network, a control unit configured to detect the external device in the captured picture, a communication unit configured to transmit and receive data, and a display unit configured to display the captured picture. If the detected external device in the captured picture is selected, the control unit controls the communication unit to receive information on the external device based on a sensing history of the external device, and controls the display unit to display the received information.
Abstract: A method for modeling and tracking a subject using image depth data includes locating the subject's trunk in the image depth data and creating a three-dimensional (3D) model of the subject's trunk. Further, the method includes locating the subject's head in the image depth data and creating a 3D model of the subject's head. The 3D models of the subject's head and trunk can be exploited by removing pixels from the image depth data corresponding to the trunk and the head of the subject, and the remaining image depth data can then be used to locate and track an extremity of the subject.
Type:
Grant
Filed:
December 19, 2013
Date of Patent:
May 3, 2016
Assignee:
Intel Corporation
Inventors:
Gershom Kutliroff, Amit Bleiweiss, Itamar Glazer, Maoz Madmoni
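The removal-then-track idea in the abstract above can be sketched in a few lines. This is a simplified stand-in, assuming the trunk and head models have already been projected into 2D boolean masks over the depth map; finding the remaining foreground pixel farthest from the trunk centre is a crude proxy for extremity localization, not the patented tracker.

```python
import numpy as np

def locate_extremity(depth, trunk_mask, head_mask, trunk_center):
    """Remove trunk and head pixels from the depth map, then return
    the remaining foreground pixel (depth > 0) farthest from the
    trunk centre, or None if nothing remains."""
    remaining = (depth > 0) & ~trunk_mask & ~head_mask
    ys, xs = np.nonzero(remaining)
    if ys.size == 0:
        return None
    cy, cx = trunk_center
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2   # squared pixel distance
    i = np.argmax(d2)
    return int(ys[i]), int(xs[i])
```

Subtracting the modeled trunk and head first is what makes the remaining data dominated by the limbs, so even a simple farthest-point heuristic becomes informative.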
Abstract: A glass type mobile terminal including a transparent screen; a frame configured to secure the transparent screen in front of the eyes of a user wearing the glass type mobile terminal; a camera mounted to the frame and configured to photograph an image in front of the user's eyes; a memory; an image recognition unit configured to extract information from the image photographed by the camera; and a controller configured to compare the extracted information with related information stored in the memory, and display the related information on the transparent screen along with the captured image.
Abstract: An information display apparatus creates a determination image which indicates whether a reference object in a plurality of objects arranged in a row satisfies a predetermined rule, creates an image to be displayed by superimposing the determination image on the acquired image, and displays the image to be displayed.
Abstract: A method, system, and computer-readable storage medium for displaying a message on a display according to a movement pattern of an object. The system first receives a first signal that indicates a presence of an object at a first location with respect to a display. Based on the first signal, the system then selects a message to display at the display. Finally, the system displays the message at the display based, at least in part, on a first temporal pattern associated with the first signal.
Abstract: Provided is an information processing apparatus including a reception section that receives a request for a process relative to management of a publication, a code acquisition section that acquires plural codes affixed to the publication, a determination section that determines whether or not a code corresponding to the process is present in the plural acquired codes, and a process execution section that executes the process based on the code when it is determined that the code corresponding to the process is present.
Abstract: A system and method for automatically recognizing license plate information, the method comprising receiving an image of a license plate, and generating a plurality of image processing data sets, wherein each image processing data set of the plurality of image processing data sets is associated with a score of a plurality of scores by a scoring process comprising determining one or more image processing parameters, generating the image processing data set by processing the image using the one or more image processing parameters, generating the score based on the image processing data, and associating the image processing data set with the score.
Type:
Grant
Filed:
May 7, 2012
Date of Patent:
March 22, 2016
Assignee:
Xerox Corporation
Inventors:
Aaron Michael Burry, Yonghui Zhao, Vladimir Kozitsky
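The scoring loop in the abstract above, generating multiple image processing data sets and keeping the best-scoring one, can be sketched as follows. This is a hypothetical simplification: the "parameter" here is a single binarization threshold, and `score_fn` stands in for the patent's scoring process (for example, a downstream OCR confidence).

```python
import numpy as np

def score_parameter_sets(image, thresholds, score_fn):
    """For each candidate threshold, binarize the image, score the
    result with score_fn, and return the highest-scoring triple
    (threshold, processed_image, score)."""
    best = None
    for t in thresholds:
        processed = (image > t).astype(np.uint8)
        s = score_fn(processed)
        if best is None or s > best[2]:
            best = (t, processed, s)
    return best
```

A real system would sweep richer parameter sets (deskew angle, contrast stretch, morphology sizes) and could run the candidates in parallel, but the select-by-score structure is the same.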
Abstract: Approaches to enable a computing device, such as a phone or tablet computer, to detect when text contained in an image captured by the camera is sufficiently close to the edge of the screen and to infer whether the text is likely to be cut off by the edge of the screen such that the text contained in the image is incomplete. If the incomplete text corresponds to actionable text associated with a function that can be invoked on the computing device, the computing device may wait until the remaining portion of the actionable text is captured by the camera and made available for processing before invoking the corresponding function on the computing device.
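The edge-proximity inference described above reduces to a small geometric check. The following sketch assumes the text detector reports an axis-aligned bounding box in frame pixel coordinates; the function name and the fixed pixel margin are illustrative assumptions, not the patented heuristic.

```python
def text_likely_cut_off(bbox, frame_size, margin=5):
    """bbox = (x, y, w, h) of detected text; frame_size = (W, H).
    Infer that the text may be clipped if its box comes within
    `margin` pixels of any edge of the captured frame."""
    x, y, w, h = bbox
    W, H = frame_size
    return (x <= margin or y <= margin
            or x + w >= W - margin or y + h >= H - margin)
```

When this returns True for actionable text, the device would defer invoking the associated function until a later frame captures the full string.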