Reading Aids For The Visually Impaired Patents (Class 382/114)
  • Patent number: 10140507
    Abstract: A virtual reality (VR) headset configured to be worn by a user. The VR headset comprises: i) a forward-looking vision sensor for detecting objects in the forward field of view of the VR headset; ii) a downward-looking vision sensor for detecting objects in the downward field of view of the VR headset; iii) a controller coupled to the forward-looking vision sensor and the downward-looking vision sensor. The controller is configured to: a) detect a hand in a first image captured by the forward-looking vision sensor; b) detect an arm of the user in a second image captured by the downward-looking vision sensor; and c) determine whether the detected hand in the first image is a hand of the user.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: November 27, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Gaurav Srivastava
  • Patent number: 10126826
    Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under any of various conditions.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: November 13, 2018
    Assignee: Eyesight Mobile Technologies Ltd.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 10058454
    Abstract: An apparatus, system, or method for aiding the vision of visually impaired individuals having a retina with reduced functionality, which addresses the drawbacks of the background art by compensating for such reduced and/or uneven retinal function.
    Type: Grant
    Filed: August 19, 2013
    Date of Patent: August 28, 2018
    Assignee: IC INSIDE LTD.
    Inventors: Haim Chayet, Boris Greenberg, Lior Ben-Hur
  • Patent number: 9811885
    Abstract: Disclosed are systems, computer-readable mediums, and methods for detecting glare in a frame of image data. A frame of image data is preprocessed. A set of connected components in the preprocessed frame is determined. A set of statistics is calculated for one or more connected components in the set of connected components. A decision is made for the one or more connected components, using the calculated set of statistics, as to whether the connected component is a light spot over text. Whether glare is present in the frame is then determined.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: November 7, 2017
    Assignee: ABBYY DEVELOPMENT LLC
    Inventors: Konstantin Bocharov, Mikhail Kostyukov
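    A minimal sketch of such a pipeline, assuming an OpenCV/NumPy implementation with hypothetical thresholds (the function name detect_glare, bright_thresh, and min_area_frac are illustrative, not the patented method):
      import cv2
      import numpy as np

      def detect_glare(frame_bgr, bright_thresh=240, min_area_frac=0.005):
          # Preprocess: grayscale and smooth before isolating very bright pixels.
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          _, bright = cv2.threshold(blurred, bright_thresh, 255, cv2.THRESH_BINARY)
          # Connected components of the bright mask.
          num, labels, stats, _ = cv2.connectedComponentsWithStats(bright, connectivity=8)
          frame_area = gray.shape[0] * gray.shape[1]
          for i in range(1, num):  # label 0 is the background
              area = stats[i, cv2.CC_STAT_AREA]
              w = stats[i, cv2.CC_STAT_WIDTH]
              h = stats[i, cv2.CC_STAT_HEIGHT]
              fill_ratio = area / float(w * h)
              # Statistics-based decision: a large, compact bright blob is treated as a
              # light spot that may be washing out text, i.e. glare is present in the frame.
              if area > min_area_frac * frame_area and fill_ratio > 0.5:
                  return True
          return False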
  • Patent number: 9684055
    Abstract: A method and system are provided for controlling a measurement device remotely through gestures performed by a user. The method includes providing a relationship between each of a plurality of commands and each of a plurality of user gestures. A gesture is performed by the user with the user's body that corresponds to one of the plurality of user gestures. The gesture performed by the user is detected. A first command is determined from one of the plurality of commands based at least in part on the detected gesture. Then the first command is executed with the laser tracker.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: June 20, 2017
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Robert E. Bridges, David H. Parker, Kelley Fletcher
  • Patent number: 9626000
    Abstract: A reading machine that operates in various modes and includes image correction processing is described. The reading device pre-processes an image for optical character recognition by receiving the image and determining whether text in the image is too large or too small for optical character recognition, i.e., whether the text height falls outside the range in which the optical character recognition software will recognize text in a digitized image. If necessary, the image is resized according to whether the text is too large or too small.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: April 18, 2017
    Assignee: KNFB READER, LLC
    Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
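    A minimal sketch of the text-size check, assuming OpenCV/NumPy pre-processing with hypothetical pixel-height limits (min_px, max_px, and target_px are placeholders, not values from the patent):
      import cv2
      import numpy as np

      def resize_for_ocr(gray, min_px=15, max_px=60, target_px=30):
          # Binarize with text as foreground (assumes dark text on a light background).
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          num, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
          heights = [stats[i, cv2.CC_STAT_HEIGHT] for i in range(1, num)
                     if stats[i, cv2.CC_STAT_AREA] > 10]   # ignore speckle
          if not heights:
              return gray                                   # nothing text-like found
          text_height = float(np.median(heights))
          if min_px <= text_height <= max_px:
              return gray                                   # already in the recognizable range
          # Text is too large or too small for the OCR engine: rescale toward a target height.
          scale = target_px / text_height
          return cv2.resize(gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)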
  • Patent number: 9619688
    Abstract: Navigation techniques, both map based and object recognition based, especially adapted for use in a portable reading machine, are described.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: April 11, 2017
    Assignee: KNFB READER, LLC
    Inventor: Rafael Maya Zetune
  • Patent number: 9618748
    Abstract: A method and apparatus for displaying a magnified image, comprising obtaining an image of a scene using a camera with greater resolution than the display, and capturing the image at the native resolution of the display by either grouping pixels together or by capturing a smaller region of interest whose pixel resolution matches that of the display. The invention also relates to a method whereby the location of the captured region of interest may be determined by external inputs such as the location of a person's gaze in the displayed unmagnified image, or coordinates from a computer mouse. The invention further relates to a method whereby a modified image can be superimposed on an unmodified image, in order to maintain the peripheral information or context from which the modified region of interest has been captured.
    Type: Grant
    Filed: September 27, 2010
    Date of Patent: April 11, 2017
    Assignee: eSight Corp.
    Inventors: Rejean J. Y. B. Munger, Robert G. Hilkes, Marc Perron, Nirmal Sohi
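    A compact sketch of the two capture modes described above, assuming the camera frame is a NumPy array with more pixels than the display; the display dimensions and helper names (binned_view, magnified_view) are illustrative:
      import numpy as np

      def binned_view(frame, disp_h, disp_w):
          # Unmagnified view: group blocks of camera pixels into single display pixels.
          cam_h, cam_w = frame.shape[:2]
          bh, bw = cam_h // disp_h, cam_w // disp_w
          cropped = frame[:disp_h * bh, :disp_w * bw]
          return cropped.reshape(disp_h, bh, disp_w, bw, -1).mean(axis=(1, 3)).astype(frame.dtype)

      def magnified_view(frame, disp_h, disp_w, center_y, center_x):
          # Magnified view: crop a region of interest whose pixel size equals the display's,
          # centered on an external input such as the gaze point or mouse coordinates.
          y0 = min(max(center_y - disp_h // 2, 0), frame.shape[0] - disp_h)
          x0 = min(max(center_x - disp_w // 2, 0), frame.shape[1] - disp_w)
          return frame[y0:y0 + disp_h, x0:x0 + disp_w]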
  • Patent number: 9507561
    Abstract: Exemplary embodiments are described wherein an auxiliary sensor attachable to a touchscreen computing device provides an additional form of user input. When used in conjunction with an accessibility process in the touchscreen computing device, wherein the accessibility process generates audible descriptions of user interface features shown on a display of the device, actuation of the auxiliary sensor by a user affects the manner in which concurrent touchscreen input is processed and audible descriptions are presented.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 29, 2016
    Assignee: Verizon Patent and Licensing Inc.
    Inventor: Frank A. Mckiel, Jr.
  • Patent number: 9491836
    Abstract: Methods and apparatus for determining the relative electrical positions of lighting units (202a, 202b, 202c, 202d) arranged in a linear configuration along a communication bus (204) are provided. The methods may involve addressing each lighting unit (202a, 202b, 202c, 202d) of the linear configuration once, and counting a number of detected events at the position of each lighting unit. The number of detected events may be unique to each electrical position, thus providing an indication of the relative position of a lighting unit within the linear configuration. The methods may be implemented at least in part by a controller (210) common to multiple lighting units of a lighting system, or may be implemented substantially by the lighting units (202a, 202b, 202c, 202d) themselves.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: November 8, 2016
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Ihor Lys
  • Patent number: 9436887
    Abstract: Devices and a method for providing context-related feedback to a user are disclosed. In one implementation, the method comprises capturing real time image data from an environment of the user. The method further comprises identifying in the image data a hand-related trigger. Multiple context-based alternative actions are associated with the hand-related trigger. Further, the method comprises identifying in the image data an object associated with the hand-related trigger. The object is further associated with a particular context. Also, the method comprises selecting one of the multiple alternative actions based on the particular context. The method further comprises executing the selected action and outputting the context-related feedback based on a result of the executed action.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: September 6, 2016
    Assignee: OrCam Technologies, Ltd.
    Inventors: Yonatan Wexler, Erez Na'Aman, Amnon Shashua
  • Patent number: 9418407
    Abstract: Disclosed are systems, computer-readable mediums, and methods for detecting glare in a frame of image data. A frame of image data is preprocessed. A set of connected components in the preprocessed frame is determined. A set of statistics is calculated for one or more connected components in the set of connected components. A decision is made for the one or more connected components, using the calculated set of statistics, as to whether the connected component is a light spot over text. Whether glare is present in the frame is then determined.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: August 16, 2016
    Assignee: ABBYY Development LLC
    Inventors: Konstantin Bocharov, Mikhail Kostyukov
  • Patent number: 9389682
    Abstract: A method for presenting content on a display screen is provided. The method initiates with presenting first content on the display screen, the first content being associated with a first detected viewing position of a user that is identified in a region in front of the display screen. At least part of second content is presented on the display screen along with the first content, the second content being progressively displayed along a side of the display screen in proportional response to a movement of the user from the first detected viewing position to a second detected viewing position of the user.
    Type: Grant
    Filed: July 1, 2013
    Date of Patent: July 12, 2016
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Ryuji Nakayama
  • Patent number: 9377867
    Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under any of various conditions.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: June 28, 2016
    Assignee: EYESIGHT MOBILE TECHNOLOGIES LTD.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 9367126
    Abstract: A method for providing a dynamic perspective-based presentation of content on a cellular phone is provided, comprising: presenting a first portion of a content space on a display screen of the cellular phone; tracking a location of a user's head in front of the display screen; detecting a lateral movement of the user's head relative to the display screen; progressively exposing an adjacent second portion of the content space, from an edge of the display screen opposite a direction of the lateral movement, in proportional response to the lateral movement of the user's head relative to the display screen.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: June 14, 2016
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Ryuji Nakayama
  • Patent number: 9311917
    Abstract: A machine, system and method for user-guided teaching of deictic references and referent objects of deictic references to a conversational system. The machine includes a system bus for communicating data and control signals received from the conversational system to the computer system, a data and control bus for connecting devices and sensors in the machine, a bridge module for connecting the data and control bus to the system bus, respective machine subsystems coupled to the data and control bus, the respective machine subsystems having a respective user interface for receiving a deictic reference from a user, a memory coupled to the system bus for storing deictic references and objects of the deictic references learned by the conversational system and a central processing unit coupled to the system bus for executing the deictic references with respect to the objects of the deictic references learned.
    Type: Grant
    Filed: January 21, 2009
    Date of Patent: April 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Liam D. Comerford, Mahesh Viswanathan
  • Patent number: 9263026
    Abstract: A screen reader software product for low-vision users is provided, the software having a reader module that collects textual and non-textual display information generated by a web browser or word processor. Font styling, interface layout information, and the like are communicated to the end user by sounds broadcast simultaneously, rather than serially, with the synthesized speech to improve the speed and efficiency with which information may be digested by the end user.
    Type: Grant
    Filed: July 11, 2014
    Date of Patent: February 16, 2016
    Assignee: Freedom Scientific, Inc.
    Inventors: Christian D. Hofstader, Glen Gordon, Eric Damery, Ralph Ocampo, David Baker, Joseph K. Stephen
  • Patent number: 9213911
    Abstract: A device and method are provided for recognizing text on a curved surface. In one implementation, the device comprises an image sensor configured to capture from an environment of a user multiple images of text on a curved surface. The device also comprises at least one processor device. The at least one processor device is configured to receive a first image of a first perspective of text on the curved surface, receive a second image of a second perspective of the text on the curved surface, perform optical character recognition on at least parts of each of the first image and the second image, combine results of the optical character recognition on the first image and on the second image, and provide the user with a recognized representation of the text, including a recognized representation of a first portion of the text.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: December 15, 2015
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 9191554
    Abstract: Some implementations include using a trained classifier to identify page-turn events in a video. The video may be divided into multiple segments based on the page-turn events, with each segment of the multiple segments corresponding to a pair of adjacent pages in a book. Exemplar frames that provide non-redundant data compared to other frames may be chosen from each segment. The exemplar frames may be cropped to include content portions of pages. The exemplar frames may be aligned such that a pixel is located in a same position in each frame. Optical character recognition (OCR) may be performed on exemplar frames and the OCR for exemplar frames in each segment may be combined. The exemplar frames in each segment may be combined to create a composite image for each pair of adjacent pages in the book, and OCR may be performed on the composite image.
    Type: Grant
    Filed: November 14, 2012
    Date of Patent: November 17, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Vasant Manohar, Sridhar Godavarthy, Viswanath Sankaranarayanan
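    A high-level sketch of the flow described above; page_turn_classifier, crop_to_content, align, and ocr are hypothetical callables supplied by the caller, and the simple averaging used to build the composite stands in for the patent's combination step:
      def book_video_to_text(frames, page_turn_classifier, crop_to_content, align, ocr):
          # 1. Split the video into segments at detected page-turn events.
          segments, current = [], []
          for frame in frames:
              if page_turn_classifier(frame) and current:
                  segments.append(current)
                  current = []
              else:
                  current.append(frame)
          if current:
              segments.append(current)

          pages_text = []
          for segment in segments:
              # 2. Pick a few exemplar frames, crop them to the page content, and align
              #    them so that corresponding pixels share the same position.
              step = max(1, len(segment) // 3)
              exemplars = [crop_to_content(f) for f in segment[::step]]
              exemplars = [align(exemplars[0], f) for f in exemplars]
              # 3. Combine the exemplars into one composite image per page pair and OCR it.
              composite = sum(e.astype(float) for e in exemplars) / len(exemplars)
              pages_text.append(ocr(composite))
          return pages_text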
  • Patent number: 9165478
    Abstract: A method and system for use in a user system, for accessing information related to a physical document. An electronic copy of an existing physical document is identified and located. The electronic copy of the physical document is an exact replica of the physical document. One or more pages of the physical document are identified. A selected part of the physical document is identified using the position of points on the identified one or more pages of the physical document and in response, data related to the selected part of the physical document is retrieved from the electronic copy of the physical document. The retrieved data is presented visually to a visually impaired person or orally to a blind person on the user system, which enables the visually impaired person to see or hear, respectively, the retrieved data.
    Type: Grant
    Filed: April 15, 2004
    Date of Patent: October 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fernando Incertis Carro, Sharon M. Trewin
  • Patent number: 9129374
    Abstract: Embodiments of the present invention provide an image sharpening method and device. The method includes performing bilateral filtering processing and difference of Gaussians filtering processing on original image information to obtain first image-layer information and second image-layer information, respectively. The first image-layer information is subtracted from the original image information to obtain third image-layer information. Fusion and superimposition processing is performed on the second image-layer information and the third image-layer information to obtain fourth image-layer information. The original image information and the fourth image-layer information are added to obtain processed image information.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: September 8, 2015
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Xianxiang Xu
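    A minimal sketch of the layer arithmetic described above, assuming an OpenCV/NumPy implementation; the filter parameters and the simple weighted sum used for the fusion step are illustrative only:
      import cv2
      import numpy as np

      def sharpen(image_bgr):
          img = image_bgr.astype(np.float32)
          # First image layer: bilateral filtering (edge-preserving smoothing).
          first = cv2.bilateralFilter(image_bgr, 9, 75, 75).astype(np.float32)
          # Second image layer: difference of Gaussians (band-pass detail).
          g1 = cv2.GaussianBlur(img, (0, 0), 1.0)
          g2 = cv2.GaussianBlur(img, (0, 0), 2.0)
          second = g1 - g2
          # Third image layer: original minus the bilateral layer (residual detail).
          third = img - first
          # Fourth image layer: fuse/superimpose the two detail layers.
          fourth = 0.5 * second + 0.5 * third
          # Add the fused detail back onto the original image information.
          return np.clip(img + fourth, 0, 255).astype(np.uint8)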
  • Patent number: 8920174
    Abstract: An electro-tactile display includes an electrode substrate provided with a plurality of stimulation electrodes, a conductive gel layer positioned between the stimulation electrodes and the skin of a wearer, a switching circuit section electrically connected to the stimulation electrodes, a stimulation pattern generating section electrically connected to the switching circuit, and means for alleviating a sensation experienced by the wearer as a result of the stimulation electrodes. In one aspect, the means for alleviating a sensation is configured from the conductive gel layer. The conductive gel layer has a resistance value equivalent to that of the horny layer of the skin. In another aspect, the means for alleviating a sensation is configured from the stimulation determination means and the threshold value adjustment means.
    Type: Grant
    Filed: December 7, 2006
    Date of Patent: December 30, 2014
    Assignees: The University of Tokyo, Eye Plus Plus, Inc.
    Inventors: Susumu Tachi, Hiroyuki Kajimoto, Yonezo Kanno
  • Patent number: 8908995
    Abstract: A method of operating a dimensioning system to determine dimensional information for objects is disclosed. A number of images are acquired. Objects in at least one of the acquired images are computationally identified. One object represented in the at least one of the acquired images is computationally initially selected as a candidate for processing. An indication of the initially selected object is provided to a user. At least one user input indicative of an object selected for processing is received. Dimensional data for the object indicated by the received user input is computationally determined.
    Type: Grant
    Filed: January 12, 2010
    Date of Patent: December 9, 2014
    Assignee: Intermec IP Corp.
    Inventors: Virginie Benos, Vincent Bessettes, Franck Laffargue
  • Patent number: 8884899
    Abstract: Provided is a thin three-dimensional interactive display which enables multi-touch sensing and three-dimensional gesture recognition. The three-dimensional interactive display includes a light source for irradiating an object to be detected with a light, a light modulation layer, into which a scattered light generated by irradiating the object with the light from the light source enters, at least for modulating an intensity of the scattered light, a transparent light-receiving layer for receiving the light transmitted through the light modulation layer, and a display panel or a back light panel disposed on the opposite side of the transparent light-receiving layer from the light modulation layer. The transparent light-receiving layer has a two-dimensional array of light-receiving elements.
    Type: Grant
    Filed: May 17, 2012
    Date of Patent: November 11, 2014
    Assignee: Sony Corporation
    Inventors: Wei Luo, Yuichi Tokita, Yoshio Goto, Seiji Yamada, Satoshi Nakamaru
  • Publication number: 20140270398
    Abstract: An apparatus and method are provided for identifying and audibly presenting textual information within captured image data. In one implementation, a method is provided for audibly presenting text retrieved from a captured image. According to the method, at least one image of text is received from an image sensor, and the text may include a first portion and a second portion. The method includes identifying contextual information associated with the text, and accessing at least one rule associating the contextual information with at least one portion of text to be excluded from an audible presentation associated with the text. The method further includes performing an analysis on the at least one image to identify the first portion and the second portion, and causing the audible presentation of the first portion.
    Type: Application
    Filed: December 20, 2013
    Publication date: September 18, 2014
    Applicant: ORCAM TECHNOLOGIES LTD.
    Inventors: Yonatan WEXLER, Amnon SHASHUA
  • Patent number: 8792138
    Abstract: A method that includes receiving an image, automatically determining at least one region of interest in the image based on at least one color deficiency type from a plurality of color deficiency types, modifying the image by correcting the at least one region of interest and producing an output of the modified image.
    Type: Grant
    Filed: February 8, 2012
    Date of Patent: July 29, 2014
    Assignee: Lexmark International, Inc.
    Inventors: Aaron Jacob Boggs, Scott Timothy Cramer, Matthew Ryan Keniston, Rodney Evan Sproul, Daniel Lee Thomas, Lane Thomas Butler
  • Patent number: 8649551
    Abstract: An image processing system is described which is arranged to highlight information in image displays by selectively blurring less important areas of an image. By generating displays comprising areas which are in focus and areas which are out of focus, a viewer's attention is preferentially drawn towards those areas of an image which appear sharp. A display system arranged to generate such images therefore provides a means of directing a viewer's attention towards considering the sharp areas of the image display first. Further, selectively blurring portions of an image reduces rather than increases the amount of information presented to a viewer and hence reduces the likelihood that a viewer will become overloaded with information. Display systems of this type are therefore especially applicable to complex control environments as a means of directing the viewer's attention.
    Type: Grant
    Filed: March 30, 2005
    Date of Patent: February 11, 2014
    Assignee: University of Newcastle Upon Tyne
    Inventor: Yoav Tadmor
  • Publication number: 20140037149
    Abstract: Navigation techniques, both map based and object recognition based, especially adapted for use in a portable reading machine, are described.
    Type: Application
    Filed: October 8, 2013
    Publication date: February 6, 2014
    Applicant: K-NFB READING TECHNOLOGY, INC.
    Inventor: Rafael Maya Zetune
  • Patent number: 8633982
    Abstract: A monitoring system for use with a sewing machine. The monitoring system includes a camera assembly mounted to a base of the sewing machine with a camera that collects images from a bottom side of the fabric. The camera assembly delivers images of the back side of the fabric to a monitor assembly that includes a display device. The display device displays the images collected by the camera. The monitor can be mounted to an upper or arm portion of the sewing machine for convenient viewing by the operator during use of the sewing machine.
    Type: Grant
    Filed: December 26, 2008
    Date of Patent: January 21, 2014
    Assignee: A Quilter's Eye, Inc.
    Inventors: Susan Gylling, Ren Livingston
  • Patent number: 8605141
    Abstract: There is presented a system and method for providing real-time object recognition to a remote user. The system comprises a portable communication device including a camera, at least one client-server host device remote from and accessible by the portable communication device over a network, and a recognition database accessible by the client-server host device or devices. A recognition application residing on the client-server host device or devices is capable of utilizing the recognition database to provide real-time object recognition of visual imagery captured using the portable communication device to the remote user of the portable communication device. In one embodiment, a sighted assistant shares an augmented reality panorama with a visually impaired user of the portable communication device where the panorama is constructed from sensor data from the device.
    Type: Grant
    Filed: February 24, 2011
    Date of Patent: December 10, 2013
    Assignee: Nant Holdings IP, LLC
    Inventors: Orang Dialameh, Douglas Miller, Charles Blanchard, Timothy C. Dorcey, Jeremi M Sudol
  • Patent number: 8594387
    Abstract: Embodiments of the invention provide devices and methods for capturing text found in a variety of sources and transforming it into a different user-accessible formats or medium. For example, the device can capture text from a magazine and provide it to the user as spoken words through headphones or speakers. Such devices are useful for individuals such as those having reading difficulties (such as dyslexia), blindness, and other visual impairments arising from diabetic retinopathy, cataracts, age-related macular degeneration (AMD), and glaucoma.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: November 26, 2013
    Assignee: Intel-GE Care Innovations LLC
    Inventors: Lea Kobeli, Evelyne Chaubert, Jeffrey Salazar, Gretchen Anderson, Ben Foss, Matthew Wallace Peterson
  • Patent number: 8538087
    Abstract: The invention concerns an aid device for reading a printed text, comprising a data acquisition peripheral with a camera and a communication interface, said peripheral being movable by a user over a printed text to frame a portion of text, a processing unit, communication means between the peripheral and the processing unit, and a vocal reproduction device. The processing unit is programmed to acquire a sequence of images framed by the camera, to detect when the user has stopped on the text, to recognize at least one word which the user intends to read, and to reproduce the sound of said at least one word by vocal synthesis through the vocal reproduction device.
    Type: Grant
    Filed: July 8, 2009
    Date of Patent: September 17, 2013
    Assignee: Universita' Degli Studi di Brescia
    Inventors: Umberto Minoni, Mauro Bianchi
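    A rough sketch of the stop-detection loop described above, assuming OpenCV frame differencing; ocr_word and speak are hypothetical callables standing in for the recognition and vocal-synthesis components:
      import cv2
      import numpy as np

      def read_when_stopped(frames, ocr_word, speak, motion_thresh=2.0, stable_frames=5):
          prev, stable = None, 0
          for frame in frames:
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              if prev is not None:
                  # Mean absolute difference between consecutive frames as a motion measure.
                  motion = float(np.mean(cv2.absdiff(gray, prev)))
                  stable = stable + 1 if motion < motion_thresh else 0
                  if stable >= stable_frames:          # the user has stopped on the text
                      word = ocr_word(gray)            # recognize the framed word(s)
                      if word:
                          speak(word)                  # reproduce the word by vocal synthesis
                      stable = 0
              prev = gray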
  • Patent number: 8537279
    Abstract: A microform imaging apparatus comprising a chassis including a microform media support structure configured to support a microform media within a plane substantially orthogonal to a first optical axis, a fold mirror supported along the first optical axis to reflect light along a second optical axis that is angled with respect to the first optical axis, a lens supported along one of the first and second optical axes, an area sensor supported along the second optical axis, a first adjuster for moving the area sensor along at least a portion of the second optical axis, and a second adjuster for moving the lens along at least a portion of the one of the first and second optical axes.
    Type: Grant
    Filed: July 27, 2012
    Date of Patent: September 17, 2013
    Assignee: e-ImageData Corp.
    Inventor: Todd A. Kahle
  • Patent number: 8514239
    Abstract: An image processing apparatus includes a color converting unit that converts input image data into image forming data used for image formation; and a control unit that controls the image formation using the image forming data. The color converting unit converts each of a plurality of predetermined colors in the color space of the input image data that are difficult for colorblind people to distinguish from one another into the same color in the color space of the image forming data.
    Type: Grant
    Filed: June 11, 2010
    Date of Patent: August 20, 2013
    Assignee: Ricoh Company, Limited
    Inventor: Seiji Miyahara
  • Patent number: 8468021
    Abstract: Disclosed is a system and method for converting a digital number to text and for pronouncing the digital number. The system includes a filtration system for determining whether the digital number has nonnumeric symbols and for generating a filtrated number, an analyzing system for analyzing the filtrated number, a composition system configured to collect words associated with ternary units of the filtrated number, a linking system configured to link the words, and a pronouncing system for pronouncing the linked words.
    Type: Grant
    Filed: July 15, 2010
    Date of Patent: June 18, 2013
    Assignee: King Abdulaziz City for Science and Technology
    Inventors: Abdullah Al-Zamil, Fayez Al-Hargan
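    A minimal sketch of the filtration, ternary-unit analysis, composition, and linking steps, using English number words purely for illustration (the word tables and scale names are not from the patent):
      ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
              "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
              "seventeen", "eighteen", "nineteen"]
      TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]
      SCALES = ["", " thousand", " million", " billion"]   # supports numbers up to the billions

      def unit_to_words(n):
          # Words for a single ternary (three-digit) unit, 0 < n < 1000.
          words = []
          if n >= 100:
              words.append(ONES[n // 100] + " hundred")
              n %= 100
          if n >= 20:
              words.append(TENS[n // 10])
              n %= 10
          if n:
              words.append(ONES[n])
          return " ".join(words)

      def number_to_words(text):
          digits = "".join(ch for ch in text if ch.isdigit())   # filtration of nonnumeric symbols
          if not digits:
              return ""
          n = int(digits)
          if n == 0:
              return "zero"
          units, i = [], 0
          while n:                                              # analysis into ternary units
              n, unit = divmod(n, 1000)
              if unit:
                  units.append(unit_to_words(unit) + SCALES[i]) # composition: words per unit
              i += 1
          return " ".join(reversed(units))                      # linking the collected words

      # number_to_words("12,345") -> "twelve thousand three hundred forty five"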
  • Patent number: 8452057
    Abstract: A method controls a projection of a projector. The method predetermines hand gestures and assigns an operation function of an input device to each of the predetermined hand gestures. When an electronic file is projected onto a screen, the projector receives an image of a speaker captured by an image-capturing device connected to the projector. The projector identifies whether a hand gesture of the speaker matches one of the predetermined hand gestures. If it does, the projector executes the corresponding assigned operation function.
    Type: Grant
    Filed: October 6, 2010
    Date of Patent: May 28, 2013
    Assignee: Hon Hai Precision Industry Co., Ltd.
    Inventors: Chien-Lin Chen, Shao-Wen Wang
  • Patent number: 8412531
    Abstract: The present invention provides a user interface for press-to-talk interaction via a touch-anywhere-to-speak module on a mobile computing device. Upon receiving an indication of a touch anywhere on the screen of a touch screen interface, the touch-anywhere-to-speak module activates the listening mechanism of a speech recognition module to accept audible user input and displays dynamic visual feedback of a measured sound level of the received audible input. The touch-anywhere-to-speak module may also provide a user with a convenient and more accurate speech recognition experience by utilizing and applying data about the context of the touch (e.g., its relative location on the visual interface) in correlation with the spoken audible input.
    Type: Grant
    Filed: June 10, 2009
    Date of Patent: April 2, 2013
    Assignee: Microsoft Corporation
    Inventors: Anne K. Sullivan, Lisa Stifelman, Kathleen J. Lee, Su Chuin Leong
  • Patent number: 8406568
    Abstract: Briefly, in accordance with one or more embodiments, an image-processing system is capable of receiving an image containing text, applying optical character recognition to the image, and then audibly reproducing the text via text-to-speech synthesis. Prior to optical character recognition, an orientation corrector is capable of detecting an amount of angular rotation of the text in the image with respect to horizontal, and then rotating the image by an appropriate amount to sufficiently align the text with respect to horizontal for optimal optical character recognition. The detection may be performed using steerable filters to provide an energy versus orientation curve of the image data. A maximum of the energy curve may indicate the amount of angular rotation that may be corrected by the orientation corrector.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: March 26, 2013
    Assignee: Intel Corporation
    Inventor: Oscar Nestares
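    A simplified sketch of the orientation correction, substituting plain Sobel gradients for the steerable filters named above; the energy-versus-orientation curve and rotation step follow the abstract, while the bin count and angle conventions are illustrative:
      import cv2
      import numpy as np

      def deskew(gray, num_bins=180):
          gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
          gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
          magnitude = np.hypot(gx, gy)
          # Gradient orientation of each pixel, folded into [0, 180) degrees.
          angles = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
          # Energy-versus-orientation curve: gradient magnitude accumulated per angle bin.
          energy, _ = np.histogram(angles, bins=num_bins, range=(0, 180), weights=magnitude)
          dominant = np.argmax(energy) * (180.0 / num_bins)
          # For horizontal text, stroke and line edges concentrate gradient energy near 90
          # degrees, so the deviation from 90 is taken as the skew estimate.
          skew = dominant - 90.0
          h, w = gray.shape[:2]
          rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), skew, 1.0)
          # Rotate by the estimated skew to bring the text back toward horizontal for OCR.
          return cv2.warpAffine(gray, rot, (w, h), flags=cv2.INTER_LINEAR,
                                borderMode=cv2.BORDER_REPLICATE)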
  • Patent number: 8391566
    Abstract: A method of identifying a person by his iris through determining an interior limit and using a predefined exterior limit to form an analysis zone. A code associated with the analysis zone is generated and compared with a previously generated reference code. If there is no match another predefined exterior limit is used. The process repeats as long as predefined exterior limits exist or until a positive match is made.
    Type: Grant
    Filed: November 12, 2008
    Date of Patent: March 5, 2013
    Assignee: Morpho
    Inventor: Martin Cottard
  • Patent number: 8355542
    Abstract: Methods and apparatus for facilitating detection of a presence or an absence of at least one underground facility within a dig area. A digital image that does not include an aerial image of a geographic area including the dig area is displayed on a display device. Via a user input device associated with the display device, at least one indicator is added to the displayed digital image to provide at least one indication of the dig area and thereby generate a marked-up digital image. Information relating to the marked up digital image is electronically transmitted and/or electronically stored so as to facilitate the detection of the presence or the absence of the at least one underground facility within the dig area.
    Type: Grant
    Filed: January 16, 2009
    Date of Patent: January 15, 2013
    Assignee: Certusview Technologies, LLC
    Inventors: Steven E. Nielsen, Curtis Chambers
  • Patent number: 8331628
    Abstract: Methods and systems for providing vision assistance using a portable telephone with a built-in camera. In some embodiments, the system identifies the value of a bank note by determining the average number of transitions between black and white in each vertical line of pixels corresponding to a numeric digit. In other embodiments, the system captures an image and identifies an object in the image by comparing the value of each pixel in the image to a threshold intensity and marking the pixels that exceed the threshold. The system then generates a plurality of candidate groups by grouping marked pixels that are within a predetermined distance from other marked pixels. The object is identified based on the relative position of each candidate group to other candidate groups.
    Type: Grant
    Filed: December 8, 2009
    Date of Patent: December 11, 2012
    Inventors: Georgios Stylianou, Stavros Papastavrou
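    A small sketch of the transition-count feature for the bank-note case, assuming OpenCV/NumPy; the reference values in the usage comment are placeholders, not data from the patent:
      import cv2
      import numpy as np

      def average_vertical_transitions(digit_region_gray):
          # Binarize the region containing the numeric digit (Otsu threshold, values 0/1).
          _, binary = cv2.threshold(digit_region_gray, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          # A transition is any change of value between vertically adjacent pixels in a column.
          transitions_per_column = np.abs(np.diff(binary.astype(np.int8), axis=0)).sum(axis=0)
          return float(transitions_per_column.mean())

      # Hypothetical usage: compare the measured average against per-denomination references.
      # references = {"5": 2.1, "10": 3.4, "20": 4.0}
      # value = min(references, key=lambda k: abs(references[k] - average_vertical_transitions(roi)))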
  • Patent number: 8284999
    Abstract: A reading machine has processing for detecting common text between a pair of individual images. The reading machine combines the text from the pair of images into a file or data structure if common text is detected, and determines if incomplete text phrases are present in the common text. If incomplete text phrases are present, the machine signals a user to move an image input device in a direction to capture more of the text.
    Type: Grant
    Filed: November 22, 2010
    Date of Patent: October 9, 2012
    Assignee: K-NFB Reading Technology, Inc.
    Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson, Lev Lvovsky
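    A rough sketch of the text-combining step, operating on the OCR output of the two images rather than on pixels; the overlap threshold and the incomplete-phrase heuristic are illustrative:
      def merge_common_text(text_a, text_b, min_overlap_words=3):
          a, b = text_a.split(), text_b.split()
          # Find the longest suffix of the first text that is also a prefix of the second.
          for k in range(min(len(a), len(b)), min_overlap_words - 1, -1):
              if a[-k:] == b[:k]:
                  merged = a + b[k:]
                  break
          else:
              return None, None            # no common text detected between the pair of images
          # Incomplete-phrase heuristic: the merged text does not end with sentence punctuation.
          incomplete = not merged[-1].endswith((".", "!", "?"))
          hint = "move the camera to capture more of the text" if incomplete else None
          return " ".join(merged), hint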
  • Patent number: 8269890
    Abstract: A digital microform imaging apparatus which includes an approximately monochromatic illumination source transmitting an incident light through a diffuse window along a first optical axis of the apparatus. A microform media support is configured to support a microform media after the diffuse window and along the first optical axis. An approximately 45 degree fold mirror reflects the incident light transmitted through the microform media approximately 90 degrees along a second optical axis. An imaging subsystem includes a lens connected to a first carriage which is linearly adjustable approximately parallel with the second optical axis, and an area sensor connected to a second carriage which is linearly adjustable approximately parallel with the second optical axis.
    Type: Grant
    Filed: May 15, 2007
    Date of Patent: September 18, 2012
    Assignee: E-IMAGE Data Corporation
    Inventor: Todd A. Kahle
  • Patent number: 8265344
    Abstract: Methods and apparatus for generating a searchable electronic record of a locate operation performed by a locate technician, in which a presence or an absence of at least one underground facility within a dig area is identified. An image of a geographic area comprising the dig area is electronically received, and combined with image-related information so as to generate the searchable electronic record. The image-related information comprises at least a geographic location associated with the dig area, and a timestamp indicative of when the locate operation occurred. The searchable electronic record of the locate operation is electronically transmitted and/or electronically stored so that performance of the locate operation is verifiable.
    Type: Grant
    Filed: February 5, 2009
    Date of Patent: September 11, 2012
    Assignee: Certusview Technologies, LLC
    Inventors: Steven Nielsen, Curtis Chambers
  • Patent number: 8264716
    Abstract: A method for ringtone, voice, and sound notification of printer status, comprising obtaining status information from a printer, converting it into an audible report, and delivering the audible report. The method is especially useful for visually impaired users and for shared printers in crowded situations where it is difficult for each user to see the panel or monitor display. The methods also include detection by an events controller; a UI manager instructing an audio manager; a codec decoding an audio file within the firmware and hardware organization; job owner identification information embedded into a print job with a unique tag; user identification sound data embedded in a print job; an audible report for multiple jobs in a job queue, with positional information; text-to-speech conversion; and a unique ringtone melody for each user, comprising a department prefix, higher pitch modulation for higher priority, and automatic conversion of an alphanumeric character into the corresponding note.
    Type: Grant
    Filed: April 26, 2006
    Date of Patent: September 11, 2012
    Assignees: KYOCERA Document Solutions Inc., KYOCERA Document Solutions Development America, Inc.
    Inventors: Zheila L. Ola, Arthur E. Alacar, Barry Sia, Tomoyuki Tanaka
  • Patent number: 8254686
    Abstract: The present invention discloses an on-line method of identifying hand-written Arabic letters. Its multilayer coarse classification algorithm fully utilizes the various local characteristics of Arabic letters: a first candidate letter aggregation matching the inputted hand-written letter is obtained from a first-level coarse classification based on the letter's stroke number, and a second candidate letter aggregation is then obtained from the other local characteristics and the first candidate letter aggregation. With this algorithm, the inputted hand-written Arabic letter only needs to be matched against the standard letters in the predetermined letter library that correspond to the second candidate letter aggregation.
    Type: Grant
    Filed: November 21, 2008
    Date of Patent: August 28, 2012
    Assignee: Ningbo Sunrun Elec. & Info. St & D Co., Ltd.
    Inventors: Jiaming He, Jianfen Wen, Dexiang Jia, Jing Chen, Ping Chen, Chengchen Ma, Zhouyi Fan, Hongzhen Ding, Zhihui Shi, Aijun Shi, Linghui Fan
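    A toy sketch of the two-level coarse classification, with stroke count as the first-level characteristic and dot count standing in for the other local characteristics; the letter library below is a hypothetical stand-in for the predetermined library:
      LIBRARY = [
          {"letter": "ا", "strokes": 1, "dots": 0},
          {"letter": "ب", "strokes": 2, "dots": 1},
          {"letter": "ت", "strokes": 3, "dots": 2},
          {"letter": "ث", "strokes": 4, "dots": 3},
      ]

      def coarse_candidates(stroke_count, dot_count):
          # Level 1: candidates whose stroke number matches the handwritten input.
          level1 = [e for e in LIBRARY if e["strokes"] == stroke_count]
          # Level 2: narrow the level-1 candidates using another local characteristic.
          level2 = [e for e in level1 if e["dots"] == dot_count]
          # Only these candidates need fine matching against stored standard letters.
          return level2 or level1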
  • Patent number: 8249309
    Abstract: A portable reading machine detects poor image conditions for performing optical character recognition processing. The portable reading machine receives an image of sufficient resolution to distinguish lines of text but not necessarily of sufficient resolution to distinguish individual characters and processes the image to determine imaging conditions from the image. The reading machine reports imaging conditions to the user.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: August 21, 2012
    Assignee: K-NFB Reading Technology, Inc.
    Inventors: Raymond C. Kurzweil, Paul Albrecht, James Gashel, Lucy Gibson, Lev Lvovsky
  • Patent number: 8239032
    Abstract: In one embodiment, a vision substitution system for communicating audio and tactile representations of features within visual representations includes (1) selecting activity-related parameters; (2) obtaining images or other visual representations according to the activity-related parameters; (3) acquiring features, including shapes and corners, related to the visual representations according to the activity-related parameters; and (4) outputting effects related to the features on audio and/or tactile displays according to the activity-related parameters. The corners of shapes and other lineal features are emphasized via special audio and tactile effects while apparently-moving effects trace out perimeters and/or shapes. Coded impulse effects communicate categorical visual information. Special speech and braille codes can communicate encoded categorical properties and the arrangements of properties.
    Type: Grant
    Filed: August 29, 2007
    Date of Patent: August 7, 2012
    Inventor: David Charles Dewhurst
  • Patent number: 8233671
    Abstract: In some embodiments, disclosed is a reading device that comprises a camera, at least one processor, and a user interface. The camera scans at least a portion of a document having text to generate a raster file. The processor processes the raster file to identify text blocks. The user interface allows a user to hierarchically navigate the text blocks as they are read to the user.
    Type: Grant
    Filed: December 27, 2007
    Date of Patent: July 31, 2012
    Assignee: Intel-GE Care Innovations LLC
    Inventors: Gretchen Anderson, Jeff Witt, Ben Foss, J M Van Thong
  • Patent number: 8218827
    Abstract: Methods and apparatus for facilitating detection of a presence or an absence of at least one underground facility within a dig area. A digital image of a geographic area including the dig area is electronically received, and at least a portion of the received digital image is displayed on a display device. The dig area is delimited on the displayed digital image, via a user input device associated with the display device, so as to generate a marked-up digital image including a delimited dig area, without acquiring geographic coordinates to delimit the dig area. Information relating to the dig area is electronically transmitted and/or electronically stored so as to facilitate the detection of the presence or the absence of the at least one underground facility within the dig area.
    Type: Grant
    Filed: January 16, 2009
    Date of Patent: July 10, 2012
    Assignee: Certusview Technologies, LLC
    Inventors: Steven E. Nielsen, Curtis Chambers