Reading Aids For The Visually Impaired Patents (Class 382/114)
  • Patent number: 11938672
    Abstract: Methods are described for creating a correspondence between percentages of a spot color and print material thicknesses. For example, a method can include printing a set of printed regions on a substrate, wherein each printed region is printed according to a different percentage of a selected spot color. The method can further comprise measuring the thickness of each printed region. The method can further comprise comparing the thickness of each printed region with a target thickness for the printed region. The target thickness for the printed region can be determined according to the percentage of the selected spot color used for printing the printed region. The method can further comprise, for each target thickness, determining an adjusted spot color percentage required to print a layer of structural print material having the target thickness.
    Type: Grant
    Filed: March 6, 2023
    Date of Patent: March 26, 2024
    Assignee: NIKE, Inc.
    Inventor: Todd W. Miller
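    The calibration in 11938672 maps measured thicknesses back to print percentages. A minimal sketch of that inversion step, assuming a monotonic percentage-to-thickness relationship and plain linear interpolation (function and variable names are illustrative, not the patent's):
```python
import numpy as np

def adjusted_percentages(printed_pcts, measured_thicknesses, target_thicknesses):
    """For each target thickness, estimate the spot color percentage expected
    to produce it, by linear interpolation over the measured calibration points."""
    pcts = np.asarray(printed_pcts, dtype=float)
    thick = np.asarray(measured_thicknesses, dtype=float)
    order = np.argsort(thick)                     # np.interp needs increasing x values
    return np.interp(target_thicknesses, thick[order], pcts[order])

# Example: regions printed at 25/50/75/100% measured thinner than intended,
# so the adjusted percentages for thicknesses 0.10/0.20/0.30 come out higher.
print(adjusted_percentages([25, 50, 75, 100], [0.08, 0.17, 0.26, 0.34], [0.10, 0.20, 0.30]))
```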
  • Patent number: 11881005
    Abstract: It is possible to inhibit deterioration of the extraction precision of a subject and to reliably extract the subject even when the colors of the subject and the background are the same or similar. An image processing device 1 includes an input unit 11 configured to input a first invisible light image containing only a background, without the subject, and a second invisible light image containing both the subject and the background, and a subject region extraction unit 15 configured to calculate a difference between a pixel value of each pixel of the second invisible light image and a pixel value of the corresponding pixel of the first invisible light image, determine whether each pixel is in a subject region or a background region according to whether the difference is equal to or greater than a predetermined threshold, and extract the subject region from the second invisible light image.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: January 23, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Jiro Nagao, Mariko Yamaguchi, Hidenobu Nagata, Kota Hidaka
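    A minimal sketch of the thresholded pixel-difference extraction described in 11881005, assuming the two invisible-light images are aligned, single-channel arrays of equal size (the threshold value is illustrative):
```python
import numpy as np

def extract_subject_mask(background_img, subject_img, threshold=25):
    """Return a boolean mask that is True where the per-pixel difference between
    the two invisible-light images is equal to or greater than the threshold,
    i.e. where the pixel is taken to belong to the subject region."""
    diff = np.abs(subject_img.astype(np.int32) - background_img.astype(np.int32))
    return diff >= threshold

# Usage: mask = extract_subject_mask(background_frame, subject_frame)
#        subject_only = subject_frame * mask
```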
  • Patent number: 11794898
    Abstract: The present disclosure provides an air combat maneuvering method based on parallel self-play, including the steps of constructing a UAV (unmanned aerial vehicle) maneuver model, constructing a red-and-blue motion situation acquiring model to describe a relative combat situation of red and blue sides, constructing state spaces and action spaces of both red and blue sides and a reward function according to a Markov process, followed by constructing a maneuvering decision-making model structure based on a soft actor-critic (SAC) algorithm, training the SAC algorithm by performing air combat confrontations to realize parallel self-play, and finally testing a trained network, displaying combat trajectories and calculating a combat success rate. The level of confrontations can be effectively enhanced and the combat success rate of the decision-making model can be increased.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: October 24, 2023
    Assignee: NORTHWESTERN POLYTECHNICAL UNIVERSITY
    Inventors: Bo Li, Kaifang Wan, Xiaoguang Gao, Zhigang Gan, Shiyang Liang, Kaiqiang Yue, Zhipeng Yang
  • Patent number: 11726500
    Abstract: An unmanned aerial vehicle (UAV) landing method includes detecting, via one or more visual sensors, a gesture or movement of an operator of a UAV; and controlling to decelerate, with aid of one or more processors and in response to the detected gesture or movement, one or more rotor blades of the UAV to cause the UAV to land autonomously.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: August 15, 2023
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventor: Mingyu Wang
  • Patent number: 11450035
    Abstract: Embodiments of the present disclosure relate to computer storage, methods, and systems for the optimization of accessible color themes. Systems and methods are disclosed that leverage the use of confusion lines to identify and highlight relationships between colors that may be inaccessible (e.g., indistinguishable) for a person with a vision impairment, such as a color vision deficiency. In some embodiments, a graphical user interface is provided that, based on a selection of colors in a color wheel, visually indicates curves of confusion for each color in the selection of colors. Each curve of confusion visually indicates a confusion of colors for a type of vision impairment, such as a CVD.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: September 20, 2022
    Assignee: Adobe Inc.
    Inventors: Jose Ignacio Echevarria Vallespi, Adrian Cristian Brojbeanu, Bernard James Kerr
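    One way to compute a curve (line) of confusion of the kind described in 11450035, using standard colorimetry rather than anything Adobe-specific: project the color into CIE 1931 xy chromaticity and draw the line through it and the copunctal point for the chosen deficiency. The copunctal coordinates below are the commonly cited approximate values; the helper names are illustrative.
```python
import numpy as np

# Approximate copunctal (confusion) points in CIE 1931 xy chromaticity space.
COPUNCTAL = {"protan": (0.747, 0.253), "deutan": (1.400, -0.400), "tritan": (0.175, 0.000)}

def srgb_to_xy(rgb):
    """Convert an sRGB triple with channels in 0-1 to CIE 1931 xy chromaticity."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    X, Y, Z = M @ lin
    total = X + Y + Z
    return X / total, Y / total

def confusion_line(rgb, deficiency="deutan"):
    """Return (slope, intercept) of the line through the color's chromaticity and
    the copunctal point; chromaticities on this line can be indistinguishable
    for a viewer with the given color vision deficiency."""
    x0, y0 = srgb_to_xy(rgb)
    cx, cy = COPUNCTAL[deficiency]
    slope = (y0 - cy) / (x0 - cx)
    return slope, y0 - slope * x0
```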
  • Patent number: 11386590
    Abstract: Methods and systems disclosed relate to color controls for visual accessibility within applications. Within a content editor of an application, a user may choose one or more colors for a content element. Upon choosing the color for the content element, a color control generates a contrast ratio between the chosen color of the content element and a background color upon which the content element may be seen. If a contrast ratio is not met or exceeded, an indicator is provided to a user. In some embodiments, the color control may further recommend an accessible color to the user in place of the chosen color, such that the contrast ratio between the accessible color and the background color meets or exceeds the threshold.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: July 12, 2022
    Assignee: OPENGOV, INC.
    Inventors: Michael Bonfiglio, Andrew Reder, Seth McLeod
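    A minimal sketch of a contrast-ratio check like the one described in 11386590, using the WCAG 2.x relative-luminance and contrast-ratio formulas (the abstract does not say which formula the product uses, so WCAG is an assumption):
```python
def relative_luminance(rgb):
    """rgb: (r, g, b) with channels in 0-255; returns WCAG relative luminance."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    lighter, darker = sorted((relative_luminance(foreground), relative_luminance(background)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def meets_threshold(foreground, background, threshold=4.5):
    """4.5:1 is the WCAG AA minimum for normal-size body text."""
    return contrast_ratio(foreground, background) >= threshold

# contrast_ratio((0, 0, 0), (255, 255, 255)) == 21.0, the maximum possible ratio.
```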
  • Patent number: 11308317
    Abstract: An electronic device according to an embodiment disclosed in the present document may comprise: an imaging device for generating image data; a communication circuit; at least one processor operatively connected to the imaging device and the communication circuit; and a memory operatively connected to the processor, for storing a command.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: April 19, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Juyong Choi, Jinhyun Kim, Misu Kim, Jeongin Choe, Yeunwook Lim
  • Patent number: 11150472
    Abstract: The display system includes a first storage unit storing standardized data composed of chromaticity values and luminance values. An information acquirer acquires luminance values and chromaticity values of a visual target and of its background. A standardization unit standardizes the chromaticity values and the luminance values of the visual target and the background, based on those acquired values and on the standardized data stored in the first storage unit. A visual target contrast calculator calculates the contrast of the visual target to the background by measuring the distance in a color space between the visual target and the background, each defined by the standardized luminance and chromaticity values. A second storage unit stores an expression defining a relation between that contrast and a size of the visual target.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: October 19, 2021
    Assignees: DENSO CORPORATION, THE KITASATO INSTITUTE
    Inventors: Hiroaki Ogawa, Takeshi Enya, Takushi Kawamorita
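    A minimal sketch of the contrast measure described in 11150472: the distance in a color space between the visual target and its background, each represented by standardized luminance and chromaticity values. The standardization itself is not specified in the abstract, so the coordinates are taken here as given.
```python
import math

def target_contrast(target_lxy, background_lxy):
    """target_lxy / background_lxy: (L, x, y) tuples of standardized luminance
    and chromaticity values; returns their Euclidean distance as the contrast."""
    return math.dist(target_lxy, background_lxy)

# Usage: contrast = target_contrast((0.8, 0.31, 0.33), (0.2, 0.45, 0.41))
# The stored expression would then relate this contrast to a legible target size.
```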
  • Patent number: 11079844
    Abstract: An electronic device includes a contact portion that comes into contact with a ventral side of a finger and performs at least one of presenting stimulation to the finger or acquiring information from the finger. The electronic device is mounted on the finger such that a portion of the finger from a first joint to a fingertip on the ventral side of the finger is exposed except for a portion of the finger, which comes into contact with the contact portion.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: August 3, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Satoru Tsuto
  • Patent number: 11012559
    Abstract: A system and method for enhancing communication between multiple parties includes a first user accessing a communication device; initiating a communication connection to a receiving communication device of a second user; and wherein at least one of the communication devices includes a list of enabled universal communication attributes of the user, utilizing one or more of the enabled communication attributes to complete the communication connection between the initiating and receiving communication devices. A user can select a desired communication attribute or multiple attributes which can be stored in the user's profile. The enabled attributes can be utilized by a network accessing the user's profile to complete the communication connection.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: May 18, 2021
    Assignee: Rochester Institute of Technology
    Inventors: Gary Behm, Brian Trager, Shareef Ali, Mark Jeremy, Byron Behm
  • Patent number: 10976575
    Abstract: Improved eyewear is disclosed. The eyewear comprises a frame member and a lens. The eyewear also includes circuitry within the frame member for enhancing the use of the eyewear. A system and method in accordance with the present invention is directed to a variety of ways to enhance the use of eyeglasses. They are: (1) media focals, that is, utilizing the eyewear for its intended purpose and enhancing that use by using imaging techniques to improve the vision of the user; (2) telecommunications enhancements that allow the eyeglasses to be integrated with telecommunication devices such as cell phones or the like; and (3) entertainment enhancements that allow the eyewear to be integrated with devices such as MP3 players, radios, or the like.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: April 13, 2021
    Assignee: Percept Technologies Inc
    Inventor: Scott W. Lewis
  • Patent number: 10970458
    Abstract: Techniques are disclosed for clustering text. The techniques may be employed to cluster text blocks that are received in either sequential reading order or arbitrary order. A methodology implementing the techniques according to an embodiment includes receiving text blocks comprising elements that may include one or more of glyphs, characters, and/or words. The method further includes determining an order of the received text blocks as one of arbitrary order or sequential reading order. Text blocks received in sequential reading order progress from left to right and from top to bottom for horizontal oriented text, and from top to bottom and left to right for vertical oriented text. The method further includes performing z-order text clustering in response to determining that the received text blocks are in sequential reading order and performing sorted order text clustering in response to determining that the received text blocks are not in sequential reading order.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: April 6, 2021
    Assignee: Adobe Inc.
    Inventors: Praveen Kumar Dhanuka, Matthew Fisher, Arushi Jain
  • Patent number: 10956699
    Abstract: In determining a distance of an object captured by a remote camera, a controller receives an image of the object from another controller coupled to a camera over a data network. The image includes a label image of a label associated with the object. The controller determines a label dimension of the label that includes a real world size of the label and determines a label image dimension of the label image that includes a size of the label image. The controller calculates a label distance using optical characteristics of the camera, the label dimension, and the label image dimension, and announces the label distance using an output component coupled to the controller. When the controller receives a command to operate the camera input by a user, the controller sends at least one instruction to operate the camera according to the command to the other controller over the data network.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: March 23, 2021
    Inventors: Chi Fai Ho, Augustine Junda Ho
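    A minimal sketch of the distance calculation described in 10956699, using the standard pinhole-camera relation between real-world size, image size, and focal length (names are illustrative, not the patent's):
```python
def label_distance(focal_length_px, label_width, label_width_px):
    """Distance to the label, in the same units as label_width, given the camera's
    focal length in pixels and the label's width in the captured image in pixels."""
    return focal_length_px * label_width / label_width_px

# Example: a 0.20 m wide label spanning 80 px with a 1000 px focal length
# is roughly 1000 * 0.20 / 80 = 2.5 m away.
```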
  • Patent number: 10955678
    Abstract: In certain embodiments, enhancement of a field of view of a user may be facilitated via one or more dynamic display portions. In some embodiments, one or more changes related to one or more eyes of a user may be monitored. Based on the monitoring, one or more positions of one or more transparent display portions of a wearable device may be adjusted, where the transparent display portions enable the user to see through the wearable device. A live video stream representing an environment of the user may be obtained via the wearable device. A modified video stream derived from the live video stream may be displayed on one or more other display portions of the wearable device.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: March 23, 2021
    Assignee: University of Miami
    Inventors: Mohamed Abou Shousha, Ahmed Sayed
  • Patent number: 10867449
    Abstract: A method of augmenting sight in an individual. The method comprises obtaining an image of a scene using a camera carried by the individual; transmitting the obtained image to a processor carried by the individual; selecting an image modification to be applied to the image by the processor; operating upon the image to create a modified image using either analog or digital imaging techniques, and displaying the modified image on a display device worn by the individual. The invention also relates to an apparatus augmenting sight in an individual. The apparatus comprises a camera, carried by the individual, for obtaining an image of a scene viewed by the individual; a display carried by the individual; an image modification input device carried by the individual; and a processor, carried by the individual. The processor modifies the image and displays the modified image on the display carried by the individual.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: December 15, 2020
    Assignee: eSight Corp.
    Inventors: Conrad Lewis, Daniel Mathers, Robert Hilkes, Rejean Munger, Roger Colbeck
  • Patent number: 10817675
    Abstract: Methods and systems are provided for communicating an announcement to passengers on a transportation vehicle. For example, one method includes providing an information system on the vehicle having at least one of a wireless access point and a plurality of seat display devices and operating the information system to communicate with the wireless access point or the seat display devices. The method includes playing audio corresponding to the announcement over a public address system of the vehicle, and causing text corresponding to the audio to display on the seat display devices or personal electronic devices in communication with the wireless access point.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: October 27, 2020
    Assignee: Panasonic Avionics Corporation
    Inventors: Philip Watson, Steven Bates
  • Patent number: 10776999
    Abstract: A system and method are provided for generating textured 3D building models from ground-level imagery. Ground-level images for the sides/corners of building objects are collected for identification of key architectural features, corresponding key façade geometry planes, and generation of a 3D building façade geometry. The 3D building model is properly geo-positioned, scaled and textured.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: September 15, 2020
    Assignee: Hover Inc.
    Inventors: Shaohui Sun, Ioannis Pavlidis, Adam J. Altman
  • Patent number: 10776929
    Abstract: The present invention relates to a method, system and non-transitory computer-readable recording medium for determining a region of interest for photographing ball images. According to one aspect of the invention, there is provided a method for determining a region of interest for photographing ball images, comprising the steps of: recognizing a location of a ball whose physical quantity is to be measured, in a state in which shot preparation is completed; and dynamically determining a region of interest to be photographed to acquire images including an appearance of the ball, with reference to the location of the ball and at least one of a predicted moving direction of the ball and a location of at least one camera configured to photograph the ball.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: September 15, 2020
    Assignee: CREATZ INC.
    Inventors: Yong Ho Suk, Jey Ho Suk
  • Patent number: 10713515
    Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes receiving a first image from a first camera depicting a first view of a physical item, where the physical item displays a plurality of characters. The method includes receiving a second image from a second camera depicting a second view of the physical item. The method includes performing optical character recognition on the first image to identify first characters and a first layout in the first image and on the second image to identify second characters and a second layout in the second image. The method includes combining the first characters with the second characters by comparing the first characters with the second characters and the first layout with the second layout. The method includes storing the combined first and second characters.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: July 14, 2020
    Assignee: ABBYY PRODUCTION LLC
    Inventors: Aleksey Ivanovich Kalyuzhny, Aleksey Yevgen'yevich Lebedev
  • Patent number: 10649536
    Abstract: Hand dimensions are determined for hand and gesture recognition with a computing interface. An input sequence of frames is received from a camera. Frames of the sequence are identified in which a hand is recognized. Points are identified in the identified frames corresponding to features of the recognized hand. A value is determined for each of a set of different feature lengths of the recognized hand using the identified points for each identified frame. Each different feature length value is collected for the identified frames independently of each other feature length value. Each different feature length value is analyzed to determine an estimate of each different feature length, and the estimated feature lengths are applied to a hand tracking system, the hand tracking system for applying commands to a computer system.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Alon Lerner, Shahar Fleishman
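    A minimal sketch of estimating fixed hand dimensions from per-frame measurements, as described in 10649536, where each feature length is collected independently across the identified frames. The median is used here as the estimate; the abstract does not say which statistic the patent applies.
```python
from collections import defaultdict
from statistics import median

def estimate_feature_lengths(per_frame_measurements):
    """per_frame_measurements: iterable of dicts such as
    {"palm_width": 8.1, "index_length": 7.4}, one per frame in which the hand
    was recognized. Returns one robust estimate per feature length."""
    samples = defaultdict(list)
    for frame in per_frame_measurements:
        for name, value in frame.items():
            samples[name].append(value)
    return {name: median(values) for name, values in samples.items()}

# The resulting estimates would then be handed to the hand tracking system.
```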
  • Patent number: 10649706
    Abstract: The disclosure describes a non-transitory computer-readable recording medium storing a virtual label display process program for executing steps. The steps include a composite image generating step, a composite image output step, a determining step, and a notifying step. In the composite image generating step, real image data of a desired field of view and virtual image data of a label are combined. In the composite image output step, the composite image data is output to a display device, and a virtual image of the label is superimposed and displayed on the display device. In the determining step, it is determined whether a desired suitability is satisfied between the exterior appearance of a background object and the exterior appearance of the label, based on the real image data and the virtual image data. In the notifying step, a predetermined suitability notification is made.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: May 12, 2020
    Assignee: BROTHER KOGYO KABUSHIKI KAISHA
    Inventors: Feng Zhu, Keigo Kako
  • Patent number: 10555034
    Abstract: What is disclosed is a video system. The video system includes a digital video recorder comprising a first camera interface configured to receive video captured from a first plurality of cameras, a packet interface configured to receive in a packet format video captured by a second plurality of cameras, and a storage system configured to store the video captured by the first plurality of cameras and the video captured by the second plurality of cameras. The video system also includes a video encoder coupled to the digital video recorder by a packet link, where the video encoder includes a second camera interface configured to receive video captured from the second plurality of cameras and an output interface configured to transfer in the packet format the video captured by the second plurality of cameras for delivery to the digital video recorder over the packet link.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 4, 2020
    Assignee: Verint Americas Inc.
    Inventors: Hugo Martel, Charles Gregory Lampe, Louis Marchand, Jim Moran
  • Patent number: 10387485
    Abstract: A method, computer program product, and system includes a processor(s) monitoring, via an image capture device communicatively coupled to the one or more processors, visual focus of a user to identify a focal point of the user on an area of an image of at least one object displayed in a graphical user interface communicatively coupled to the one or more processors. The processor(s) derives shape geometry of the object, creating a three-dimensional model. The processor(s) obtains, via the image capture device, a physical gesture by the user. The processor(s) performs a contextual analysis of the physical gesture to determine an application of the physical gesture to a portion of the object depicted in the area of the image. The processor(s) formulates search criteria based on determining the application and the area. The processor(s) executes a search based on the search criteria and displays a search result.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: August 20, 2019
    Assignee: International Business Machines Corporation
    Inventors: Munish Goyal, Wing L. Leung, Sarbajit K. Rakshit, Kimberly Greene Starks
  • Patent number: 10386641
    Abstract: Configurations are disclosed for a health system to be used in various healthcare applications, e.g., for patient diagnostics, monitoring, and/or therapy. The health system may comprise a light generation module to transmit light or an image to a user, one or more sensors to detect a physiological parameter of the user's body, including their eyes, and processing circuitry to analyze an input received in response to the presented images to determine one or more health conditions or defects.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: August 20, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Nicole Elizabeth Samec, John Graham Macnamara, Christopher M. Harrises, Brian T. Schowengerdt, Rony Abovitz, Mark Baerenrodt
  • Patent number: 10354116
    Abstract: A method and apparatus for authenticating a fingerprint image captured through an optical sensor. For at least some embodiments, light scattering characteristics associated with a fingerprint are determined and compared to a reference light scattering characteristic. The fingerprint is authenticated when the light scattering characteristics are within a threshold difference of the reference light scattering characteristic. For some embodiments, the light scattering characteristics associated with the fingerprint are compared to light scattering characteristics associated with one or more reference (enrollment) images. For at least some embodiments, the light scattering characteristics may be based on a correlation value based on identified pixels and one or more pixels neighboring the identified pixel.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: July 16, 2019
    Assignee: SYNAPTICS INCORPORATED
    Inventor: Scott Dattalo
  • Patent number: 10140507
    Abstract: A virtual reality (VR) headset configured to be worn by a user. The VR headset comprises: i) a forward-looking vision sensor for detecting objects in the forward field of view of the VR headset; ii) a downward-looking vision sensor for detecting objects in the downward field of view of the VR headset; iii) a controller coupled to the forward-looking vision sensor and the downward-looking vision sensor. The controller is configured to: a) detect a hand in a first image captured by the forward-looking vision sensor; b) detect an arm of the user in a second image captured by the downward-looking vision sensor; and c) determine whether the detected hand in the first image is a hand of the user.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: November 27, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Gaurav Srivastava
  • Patent number: 10126826
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: November 13, 2018
    Assignee: Eyesight Mobile Technologies Ltd.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 10058454
    Abstract: An apparatus, system, or method is provided for aiding the vision of visually impaired individuals having a retina with reduced functionality, which overcomes the drawbacks of the background art by compensating for such reduced and/or uneven retinal function.
    Type: Grant
    Filed: August 19, 2013
    Date of Patent: August 28, 2018
    Assignee: IC INSIDE LTD.
    Inventors: Haim Chayet, Boris Greenberg, Lior Ben-Hur
  • Patent number: 9811885
    Abstract: Disclosed are systems, computer-readable mediums, and methods for detecting glare in a frame of image data. A frame of image data is preprocessed. A set of connected components in the preprocessed frame is determined. A set of statistics is calculated for one or more connected components in the set of connected components. A decision is made for the one or more connected components, using the calculated set of statistics, as to whether the connected component is a light spot over text. Whether glare is present in the frame is then determined.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: November 7, 2017
    Assignee: ABBYY DEVELOPMENT LLC
    Inventors: Konstantin Bocharov, Mikhail Kostyukov
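    A minimal sketch in the spirit of 9811885: find bright connected components in a preprocessed frame, compute a simple statistic per component, and flag large near-saturated blobs as possible glare. The actual statistics and decision rule are not given in the abstract, so the thresholds here are illustrative only.
```python
import cv2

def has_glare(gray, bright_thresh=240, min_area_frac=0.01):
    """gray: single-channel uint8 frame. Returns True if any near-saturated
    connected component covers at least min_area_frac of the frame."""
    _, bright = cv2.threshold(gray, bright_thresh, 255, cv2.THRESH_BINARY)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(bright, connectivity=8)
    frame_area = gray.shape[0] * gray.shape[1]
    for i in range(1, num_labels):                 # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area_frac * frame_area:
            return True
    return False
```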
  • Patent number: 9684055
    Abstract: A method and system are provided for controlling a measurement device remotely through gestures performed by a user. The method includes providing a relationship between each of a plurality of commands and each of a plurality of user gestures. A gesture is performed by the user with the user's body that corresponds to one of the plurality of user gestures. The gesture performed by the user is detected. A first command is determined from one of the plurality of commands based at least in part on the detected gesture. Then the first command is executed with the laser tracker.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: June 20, 2017
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Robert E. Bridges, David H. Parker, Kelley Fletcher
  • Patent number: 9626000
    Abstract: A reading machine that operates in various modes and includes image correction processing is described. The reading device pre-processes an image for optical character recognition by receiving the image and determining whether text in the image is too large or too small for optical character recognition processing, by determining whether the text height falls outside of a range in which optical character recognition software will recognize text in a digitized image. If necessary, the image is resized according to whether the text is too large or too small.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: April 18, 2017
    Assignee: KNFB READER, LLC
    Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
  • Patent number: 9619688
    Abstract: Navigation techniques including map based and object recognition based and especially adapted for use in a portable reading machine are described.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: April 11, 2017
    Assignee: KNFB READER, LLC
    Inventor: Rafael Maya Zetune
  • Patent number: 9618748
    Abstract: A method and apparatus of displaying a magnified image comprising obtaining an image of a scene using a camera with greater resolution than the display, and capturing the image in the native resolution of the display by either grouping pixels together, or by capturing a smaller region of interest whose pixel resolution matches that of the display. The invention also relates to a method whereby the location of the captured region of interest may be determined by external inputs such as the location of a person's gaze in the displayed unmagnified image, or coordinates from a computer mouse. The invention further relates to a method whereby a modified image can be superimposed on an unmodified image, in order to maintain the peripheral information or context from which the modified region of interest has been captured.
    Type: Grant
    Filed: September 27, 2010
    Date of Patent: April 11, 2017
    Assignee: eSight Corp.
    Inventors: Rejean J. Y. B. Munger, Robert G. Hilkes, Marc Perron, Nirmal Sohi
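    A minimal sketch of the second capture mode described in 9618748: crop a region of interest from the higher-resolution sensor frame whose pixel size matches the display's native resolution, centred on an externally supplied point such as a gaze position or mouse coordinate (names are illustrative).
```python
import numpy as np

def capture_roi(sensor_frame, center_xy, display_w, display_h):
    """sensor_frame: HxW(xC) array from a camera with more pixels than the display.
    Returns a display_h x display_w crop around center_xy, clamped to stay inside
    the frame, so the crop maps 1:1 onto display pixels (effective magnification)."""
    frame_h, frame_w = sensor_frame.shape[:2]
    cx, cy = center_xy
    x0 = int(np.clip(cx - display_w // 2, 0, frame_w - display_w))
    y0 = int(np.clip(cy - display_h // 2, 0, frame_h - display_h))
    return sensor_frame[y0:y0 + display_h, x0:x0 + display_w]
```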
  • Patent number: 9507561
    Abstract: Exemplary embodiments are described wherein an auxiliary sensor attachable to a touchscreen computing device provides an additional form of user input. When used in conjunction with an accessibility process in the touchscreen computing device, wherein the accessibility process generates audible descriptions of user interface features shown on a display of the device, actuation of the auxiliary sensor by a user affects the manner in which concurrent touchscreen input is processed and audible descriptions are presented.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 29, 2016
    Assignee: Verizon Patent and Licensing Inc.
    Inventor: Frank A. Mckiel, Jr.
  • Patent number: 9491836
    Abstract: Methods and apparatus for determining the relative electrical positions of lighting units (202a, 202b, 202c, 202d) arranged in a linear configuration along a communication bus (204) are provided. The methods may involve addressing each lighting unit (202a, 202b, 202c, 202d) of the linear configuration once, and counting a number of detected events at the position of each lighting unit. The number of detected events may be unique to each electrical position, thus providing an indication of the relative position of a lighting unit within the linear configuration. The methods may be implemented at least in part by a controller (210) common to multiple lighting units of a lighting system, or may be implemented substantially by the lighting units (202a, 202b, 202c, 202d) themselves.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: November 8, 2016
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Ihor Lys
  • Patent number: 9436887
    Abstract: Devices and a method are provided for providing context-related feedback to a user. In one implementation, the method comprises capturing real time image data from an environment of the user. The method further comprises identifying in the image data a hand-related trigger. Multiple context-based alternative actions are associated with the hand-related trigger. Further, the method comprises identifying in the image data an object associated with the hand-related trigger. The object is further associated with a particular context. Also, the method comprises selecting one of the multiple alternative actions based on the particular context. The method further comprises outputting the context-related feedback based on a result of the executed alternative action.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: September 6, 2016
    Assignee: OrCam Technologies, Ltd.
    Inventors: Yonatan Wexler, Erez Na'Aman, Amnon Shashua
  • Patent number: 9418407
    Abstract: Disclosed are systems, computer-readable mediums, and methods for detecting glare in a frame of image data. A frame of image data is preprocessed. A set of connected components in the preprocessed frame is determined. A set of statistics is calculated for one or more connected components in the set of connected components. A decision is made for the one or more connected components, using the calculated set of statistics, as to whether the connected component is a light spot over text. Whether glare is present in the frame is then determined.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: August 16, 2016
    Assignee: ABBYY Development LLC
    Inventors: Konstantin Bocharov, Mikhail Kostyukov
  • Patent number: 9389682
    Abstract: A method for presenting content on a display screen is provided. The method initiates with presenting first content on the display screen, the first content being associated with a first detected viewing position of a user that is identified in a region in front of the display screen. At least part of second content is presented on the display screen along with the first content, the second content being progressively displayed along a side of the display screen in proportional response to a movement of the user from the first detected viewing position to a second detected viewing position of the user.
    Type: Grant
    Filed: July 1, 2013
    Date of Patent: July 12, 2016
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Ryuji Nakayama
  • Patent number: 9377867
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: June 28, 2016
    Assignee: EYESIGHT MOBILE TECHNOLOGIES LTD.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 9367126
    Abstract: A method for providing a dynamic perspective-based presentation of content on a cellular phone is provided, comprising: presenting a first portion of a content space on a display screen of the cellular phone; tracking a location of a user's head in front of the display screen; detecting a lateral movement of the user's head relative to the display screen; progressively exposing an adjacent second portion of the content space, from an edge of the display screen opposite a direction of the lateral movement, in proportional response to the lateral movement of the user's head relative to the display screen.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: June 14, 2016
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Ryuji Nakayama
  • Patent number: 9311917
    Abstract: A machine, system and method for user-guided teaching of deictic references and referent objects of deictic references to a conversational system. The machine includes a system bus for communicating data and control signals received from the conversational system to the computer system, a data and control bus for connecting devices and sensors in the machine, a bridge module for connecting the data and control bus to the system bus, respective machine subsystems coupled to the data and control bus, the respective machine subsystems having a respective user interface for receiving a deictic reference from a user, a memory coupled to the system bus for storing deictic references and objects of the deictic references learned by the conversational system and a central processing unit coupled to the system bus for executing the deictic references with respect to the objects of the deictic references learned.
    Type: Grant
    Filed: January 21, 2009
    Date of Patent: April 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Liam D. Comerford, Mahesh Viswanathan
  • Patent number: 9263026
    Abstract: A screen reader software product for low-vision users, the software having a reader module collecting textual and non-textual display information generated by a web browser or word processor. Font styling, interface layout information and the like are communicated to the end user by sounds broadcast simultaneously rather than serially with the synthesized speech to improve the speed and efficiency in which information may be digested by the end user.
    Type: Grant
    Filed: July 11, 2014
    Date of Patent: February 16, 2016
    Assignee: Freedom Scientific, Inc.
    Inventors: Christian D. Hofstader, Glen Gordon, Eric Damery, Ralph Ocampo, David Baker, Joseph K. Stephen
  • Patent number: 9213911
    Abstract: A device and method are provided for recognizing text on a curved surface. In one implementation, the device comprises an image sensor configured to capture from an environment of a user multiple images of text on a curved surface. The device also comprises at least one processor device. The at least one processor device is configured to receive a first image of a first perspective of text on the curved surface, receive a second image of a second perspective of the text on the curved surface, perform optical character recognition on at least parts of each of the first image and the second image, combine results of the optical character recognition on the first image and on the second image, and provide the user with a recognized representation of the text, including a recognized representation of the first portion of text.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: December 15, 2015
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 9191554
    Abstract: Some implementations include using a trained classifier to identify page-turn events in a video. The video may be divided into multiple segments based on the page-turn events, with each segment of the multiple segments corresponding to a pair of adjacent pages in a book. Exemplar frames that provide non-redundant data compared to other frames may be chosen from each segment. The exemplar frames may be cropped to include content portions of pages. The exemplar frames may be aligned such that a pixel is located in a same position in each frame. Optical character recognition (OCR) may be performed on exemplar frames and the OCR for exemplar frames in each segment may be combined. The exemplar frames in each segment may be combined to create a composite image for each pair of adjacent pages in the book, and OCR may be performed on the composite image.
    Type: Grant
    Filed: November 14, 2012
    Date of Patent: November 17, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Vasant Manohar, Sridhar Godavarthy, Viswanath Sankaranarayanan
  • Patent number: 9165478
    Abstract: A method and system for use in a user system, for accessing information related to a physical document. An electronic copy of an existing physical document is identified and located. The electronic copy of the physical document is an exact replica of the physical document. One or more pages of the physical document are identified. A selected part of the physical document is identified using the position of points on the identified one or more pages of the physical document and in response, data related to the selected part of the physical document is retrieved from the electronic copy of the physical document. The retrieved data is presented visually to a visually impaired person or orally to a blind person on the user system, which enables the visually impaired person to see or hear, respectively, the retrieved data.
    Type: Grant
    Filed: April 15, 2004
    Date of Patent: October 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fernando Incertis Carro, Sharon M. Trewin
  • Patent number: 9129374
    Abstract: Embodiments of the present invention provide an image sharpening method and device. The method includes performing bilateral filtering processing and difference-of-Gaussians filtering processing on original image information to obtain first image-layer information and second image-layer information, respectively. The first image-layer information is subtracted from the original image information to obtain third image-layer information. Fusion and superimposition processing is performed on the second image-layer information and the third image-layer information to obtain fourth image-layer information. The original image information and the fourth image-layer information are added to obtain processed image information.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: September 8, 2015
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Xianxiang Xu
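    A minimal sketch of the layering described in 9129374, using OpenCV: a bilateral-filtered layer, a difference-of-Gaussians layer, the detail removed by the bilateral filter, a fused layer, and the sum with the original. The abstract does not specify the fusion weights, so a simple weighted sum is assumed here.
```python
import cv2
import numpy as np

def sharpen(img, sigma1=1.0, sigma2=2.0, w_dog=0.5, w_detail=0.5):
    """img: uint8 image (grayscale or BGR). Returns a sharpened uint8 image."""
    f = img.astype(np.float32)
    layer1 = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75).astype(np.float32)
    layer2 = cv2.GaussianBlur(f, (0, 0), sigma1) - cv2.GaussianBlur(f, (0, 0), sigma2)  # DoG
    layer3 = f - layer1                                  # detail removed by the bilateral filter
    layer4 = w_dog * layer2 + w_detail * layer3          # fusion/superimposition (assumed weights)
    return np.clip(f + layer4, 0, 255).astype(np.uint8)
```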
  • Patent number: 8920174
    Abstract: An electro-tactile display includes an electrode substrate provided with a plurality of stimulation electrodes, a conductive gel layer positioned between the stimulation electrodes and the skin of a wearer, a switching circuit section electrically connected to the stimulation electrodes, a stimulation pattern generating section electrically connected to the switching circuit, and means for alleviating a sensation experienced by the wearer as a result of the stimulation electrodes. In one aspect, the means for alleviating a sensation is configured from the conductive gel layer. The conductive gel layer has a resistance value equivalent to that of the horny layer of the skin. In another aspect, the means for alleviating a sensation is configured from the stimulation determination means and the threshold value adjustment means.
    Type: Grant
    Filed: December 7, 2006
    Date of Patent: December 30, 2014
    Assignees: The University of Tokyo, Eye Plus Plus, Inc.
    Inventors: Susumu Tachi, Hiroyuki Kajimoto, Yonezo Kanno
  • Patent number: 8908995
    Abstract: A method of operating a dimensioning system to determine dimensional information for objects is disclosed. A number of images are acquired. Objects in at least one of the acquired images are computationally identified. One object represented in the at least one of the acquired images is computationally initially selected as a candidate for processing. An indication of the initially selected object is provided to a user. At least one user input indicative of an object selected for processing is received. Dimensional data for the object indicated by the received user input is computationally determined.
    Type: Grant
    Filed: January 12, 2010
    Date of Patent: December 9, 2014
    Assignee: Intermec IP Corp.
    Inventors: Virginie Benos, Vincent Bessettes, Franck Laffargue
  • Patent number: 8884899
    Abstract: Provided is a thin three-dimensional interactive display which enables multi-touch sensing and three-dimensional gesture recognition. The three-dimensional interactive display includes a light source for irradiating an object to be detected with a light, a light modulation layer, into which a scattered light generated by irradiating the object with the light from the light source enters, at least for modulating an intensity of the scattered light, a transparent light-receiving layer for receiving the light transmitted through the light modulation layer, and a display panel or a back light panel disposed on the opposite side of the transparent light-receiving layer from the light modulation layer. The transparent light-receiving layer has a two-dimensional array of light-receiving elements.
    Type: Grant
    Filed: May 17, 2012
    Date of Patent: November 11, 2014
    Assignee: Sony Corporation
    Inventors: Wei Luo, Yuichi Tokita, Yoshio Goto, Seiji Yamada, Satoshi Nakamaru
  • Publication number: 20140270398
    Abstract: An apparatus and method are provided for identifying and audibly presenting textual information within captured image data. In one implementation, a method is provided for audibly presenting text retrieved from a captured image. According to the method, at least one image of text is received from an image sensor, and the text may include a first portion and a second portion. The method includes identifying contextual information associated with the text, and accessing at least one rule associating the contextual information with at least one portion of text to be excluded from an audible presentation associated with the text. The method further includes performing an analysis on the at least one image to identify the first portion and the second portion, and causing the audible presentation of the first portion.
    Type: Application
    Filed: December 20, 2013
    Publication date: September 18, 2014
    Applicant: ORCAM TECHNOLOGIES LTD.
    Inventors: Yonatan WEXLER, Amnon SHASHUA