Reading Aids For The Visually Impaired Patents (Class 382/114)
-
Patent number: 12142012
Abstract: There is provided a system and method of re-projecting and combining sensor data of a scene from a plurality of sensors for visualization. The method including: receiving the sensor data from the plurality of sensors; re-projecting the sensor data from each of the sensors into a new viewpoint; localizing each of the re-projected sensor data; combining the localized re-projected sensor data into a combined image; and outputting the combined image. In a particular case, the receiving and re-projecting can be performed locally at each of the sensors.
Type: Grant
Filed: June 13, 2023
Date of Patent: November 12, 2024
Assignee: INTERAPTIX INC.
Inventors: Dae Hyun Lee, Tyler James Doyle
-
Patent number: 12079395
Abstract: Collaborative sessions in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, a participant crops media content by use of a hand gesture to produce an image segment that can be associated to the collaborative object. The hand gesture resembles a pair of scissors and the camera and processor of the client device track a path of the hand gesture to identify an object within a displayed image to create virtual content of the identified object. The virtual content created by the hand gesture is then associated to the collaborative object.
Type: Grant
Filed: August 31, 2022
Date of Patent: September 3, 2024
Assignee: Snap Inc.
Inventors: Youjean Cho, Chen Ji, Fannie Liu, Andrés Monroy-Hernández, Tsung-Yu Tsai, Rajan Vaish
-
Patent number: 12033381
Abstract: Various implementations disclosed herein include devices, systems, and methods for performing scene-to-text conversion. In various implementations, a device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, a method includes obtaining environmental data corresponding to an environment. Based on the environmental data, a plurality of objects that are in the environment are identified. An audio output describing at least a first object of the plurality of objects in the environment is generated based on a characteristic value associated with a user of the device. The audio output is outputted.
Type: Grant
Filed: September 3, 2021
Date of Patent: July 9, 2024
Assignee: APPLE INC.
Inventor: Jack Greasley
-
Patent number: 12020499
Abstract: This biological information acquisition device comprises: a first camera which captures a first hand image of a user, the hand being inserted into an imaging region; a second camera which captures a second hand image of the user used for biological authentication; and a processor which causes the second camera to capture the second hand image when a stop of the user's hand is detected on the basis of the hand image captured by the first camera.
Type: Grant
Filed: October 29, 2021
Date of Patent: June 25, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Hideyuki Nakamura, Risa Komatsu
-
Patent number: 12015842
Abstract: Eyewear having an image signal processor (ISP) dynamically operable in a camera pipeline for augmented reality (AR) and computer vision (CV) systems. Multi-purpose cameras are used for simultaneous image capture and CV on wearable AR devices. The cameras are coupled to a frame and configured to generate images, wherein the cameras and the ISP are configured to operate in a first AR mode and capture images having a first resolution suitable for use in AR, and are configured to operate in a second CV mode to provide the images having a second resolution suitable for use in CV. The first resolution in the AR mode is higher than the second resolution in the CV mode, and the cameras and the ISP consume less power in the second CV mode than the first AR mode. The cameras and the ISP save significant system power by operating in the low-power CV mode.
Type: Grant
Filed: August 29, 2021
Date of Patent: June 18, 2024
Assignee: Snap Inc.
Inventors: Bo Ding, Chintan Doshi, Alexander Kane, John James Robertson, Dmitry Ryuma
-
Patent number: 11938672
Abstract: Methods are described for creating a correspondence between percentages of a spot color and print material thicknesses. For example, a method can include printing a set of printed regions on a substrate, wherein each printed region is printed according to a different percentage of a selected spot color. The method can further comprise measuring the thickness of each printed region. The method can further comprise comparing the thickness of each printed region with a target thickness for the printed region. The target thickness for the printed region can be determined according to the percentage of the selected spot color used for printing the printed region. The method can further comprise, for each target thickness, determining an adjusted spot color percentage required to print a layer of structural print material having the target thickness.
Type: Grant
Filed: March 6, 2023
Date of Patent: March 26, 2024
Assignee: NIKE, Inc.
Inventor: Todd W. Miller
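The abstract above describes building a correspondence between spot-color percentages and measured layer thicknesses, then inverting it to find the percentage needed for a target thickness. A minimal Python sketch of that idea follows; the percentages, thickness values, and the use of linear interpolation are illustrative assumptions, not values or methods taken from the patent.

```python
import numpy as np

# Hypothetical calibration data: measured layer thickness (mm) for each
# printed spot-color percentage on a test substrate.
measured_pct = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
measured_thickness_mm = np.array([0.04, 0.11, 0.23, 0.36, 0.50])

def adjusted_percentage(target_thickness_mm: float) -> float:
    """Invert the measured percentage-to-thickness curve (by linear
    interpolation) to get the spot-color percentage expected to print a
    layer of the target thickness."""
    return float(np.interp(target_thickness_mm, measured_thickness_mm, measured_pct))

print(adjusted_percentage(0.30))  # about 63% under these sample numbers
```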
-
Patent number: 11881005
Abstract: It is possible to inhibit deterioration of extraction precision of a subject and reliably extract the subject even when colors of the subject and a background are the same or similar. An image processing device 1 includes an input unit 11 configured to input a first invisible light image of only a background in which a subject is not included and a second invisible light image in which the subject and the background are included and a subject region extraction unit 15 configured to calculate a difference between a pixel value of each pixel of the second invisible light image and a pixel value of a corresponding pixel of the first invisible light image, determine whether the pixel is in a subject region or a background region in accordance with whether the difference is equal to or greater than a predetermined threshold, and extract the subject region from the second invisible light image.
Type: Grant
Filed: July 31, 2019
Date of Patent: January 23, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Jiro Nagao, Mariko Yamaguchi, Hidenobu Nagata, Kota Hidaka
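A minimal sketch of the pixel-difference test the abstract describes: subtract the background-only invisible-light image from the image containing the subject, and treat pixels whose difference meets a threshold as subject. The array names and threshold value are illustrative assumptions.

```python
import numpy as np

def extract_subject_mask(background_ir: np.ndarray,
                         scene_ir: np.ndarray,
                         threshold: int = 12) -> np.ndarray:
    """Per-pixel absolute difference between the image containing the
    subject and the background-only image; pixels at or above the
    threshold are classified as subject, the rest as background."""
    diff = np.abs(scene_ir.astype(np.int16) - background_ir.astype(np.int16))
    return diff >= threshold  # boolean subject-region mask
```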
-
Patent number: 11794898
Abstract: The present disclosure provides an air combat maneuvering method based on parallel self-play, including the steps of constructing a UAV (unmanned aerial vehicle) maneuver model, constructing a red-and-blue motion situation acquiring model to describe a relative combat situation of red and blue sides, constructing state spaces and action spaces of both red and blue sides and a reward function according to a Markov process, followed by constructing a maneuvering decision-making model structure based on a soft actor-critic (SAC) algorithm, training the SAC algorithm by performing air combat confrontations to realize parallel self-play, and finally testing a trained network, displaying combat trajectories and calculating a combat success rate. The level of confrontations can be effectively enhanced and the combat success rate of the decision-making model can be increased.
Type: Grant
Filed: October 13, 2021
Date of Patent: October 24, 2023
Assignee: NORTHWESTERN POLYTECHNICAL UNIVERSITY
Inventors: Bo Li, Kaifang Wan, Xiaoguang Gao, Zhigang Gan, Shiyang Liang, Kaiqiang Yue, Zhipeng Yang
-
Patent number: 11726500
Abstract: An unmanned aerial vehicle (UAV) landing method includes detecting, via one or more visual sensors, a gesture or movement of an operator of a UAV; and controlling to decelerate, with aid of one or more processors and in response to the detected gesture or movement, one or more rotor blades of the UAV to cause the UAV to land autonomously.
Type: Grant
Filed: April 2, 2021
Date of Patent: August 15, 2023
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventor: Mingyu Wang
-
Patent number: 11450035
Abstract: Embodiments of the present disclosure relate to computer storage, methods, and systems for the optimization of accessible color themes. Systems and methods are disclosed that leverage the use of confusion lines to identify and highlight relationships between colors that may be inaccessible (e.g., indistinguishable) for a person with a vision impairment, such as a color vision deficiency. In some embodiments, a graphical user interface is provided that, based on a selection of colors in a color wheel, visually indicates curves of confusion for each color in the selection of colors. Each curve of confusion visually indicates a confusion of colors for a type of vision impairment, such as a CVD.
Type: Grant
Filed: November 13, 2019
Date of Patent: September 20, 2022
Assignee: Adobe Inc.
Inventors: Jose Ignacio Echevarria Vallespi, Adrian Cristian Brojbeanu, Bernard James Kerr
-
Patent number: 11386590
Abstract: Methods and systems disclosed relate to color controls for visual accessibility within applications. Within a content editor of an application, a user may choose one or more colors for a content element. Upon choosing the color for the content element, a color control generates a contrast ratio between the chosen color of the content element and a background color upon which the content element may be seen. If a contrast ratio is not met or exceeded, an indicator is provided to a user. In some embodiments, the color control may further recommend an accessible color to the user in place of the chosen color, such that the contrast ratio between the accessible color and the background color meets or exceeds the threshold.
Type: Grant
Filed: January 20, 2021
Date of Patent: July 12, 2022
Assignee: OPENGOV, INC.
Inventors: Michael Bonfiglio, Andrew Reder, Seth McLeod
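The check described above hinges on a contrast ratio between a chosen color and its background compared against a threshold. The abstract does not say which formula is used; the sketch below assumes the widely used WCAG relative-luminance ratio and a 4.5:1 threshold.

```python
def _linearize(channel: int) -> float:
    # sRGB channel (0-255) to linear-light value, per the WCAG definition
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background) -> float:
    lighter, darker = sorted((relative_luminance(foreground),
                              relative_luminance(background)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def meets_threshold(foreground, background, threshold: float = 4.5) -> bool:
    return contrast_ratio(foreground, background) >= threshold

print(meets_threshold((119, 119, 119), (255, 255, 255)))  # just under 4.5:1 -> False
```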
-
Patent number: 11308317
Abstract: An electronic device according to an embodiment disclosed in the present document may comprise: an imaging device for generating image data; a communication circuit; at least one processor operatively connected to the imaging device and the communication circuit; and a memory operatively connected to the processor, for storing a command.
Type: Grant
Filed: February 18, 2019
Date of Patent: April 19, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Juyong Choi, Jinhyun Kim, Misu Kim, Jeongin Choe, Yeunwook Lim
-
Patent number: 11150472
Abstract: The display system includes a first storage unit storing standardized data composed of chromaticity values and luminance values. An information acquirer acquires luminance values and chromaticity values of a visual target and luminance and chromaticity values of a background thereof. A standardization unit standardizes the chromaticity values and the luminance values of the visual target and the background based on these chromaticity values and the luminance values of the visual target and the background thereof and the standardized data stored in the first storage unit. A visual target contrast calculator calculates a contrast of a visual target to a background by measuring a distance in a color space between the visual target and the background each defined by the standardized luminance and chromaticity values. A second storage unit stores an expression defining a relation between the contrast thereof to the background and a size of the visual target.
Type: Grant
Filed: May 7, 2020
Date of Patent: October 19, 2021
Assignees: DENSO CORPORATION, THE KITASATO INSTITUTE
Inventors: Hiroaki Ogawa, Takeshi Enya, Takushi Kawamorita
-
Patent number: 11079844
Abstract: An electronic device includes a contact portion that comes into contact with a ventral side of a finger and performs at least one of presenting stimulation to the finger or acquiring information from the finger. The electronic device is mounted on the finger such that a portion of the finger from a first joint to a fingertip on the ventral side of the finger is exposed except for a portion of the finger, which comes into contact with the contact portion.
Type: Grant
Filed: September 25, 2018
Date of Patent: August 3, 2021
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Satoru Tsuto
-
Patent number: 11012559
Abstract: A system and method for enhancing communication between multiple parties includes a first user accessing a communication device; initiating a communication connection to a receiving communication device of a second user; and wherein at least one of the communication devices includes a list of enabled universal communication attributes of the user, utilizing one or more of the enabled communication attributes to complete the communication connection between the initiating and receiving communication devices. A user can select a desired communication attribute or multiple attributes which can be stored in the user's profile. The enabled attributes can be utilized by a network accessing the user's profile to complete the communication connection.
Type: Grant
Filed: February 14, 2020
Date of Patent: May 18, 2021
Assignee: Rochester Institute of Technology
Inventors: Gary Behm, Brian Trager, Shareef Ali, Mark Jeremy, Byron Behm
-
Patent number: 10976575
Abstract: Improved eyewear is disclosed. The eyewear comprises a frame member and a lens. The eyewear also includes circuitry within the frame member for enhancing the use of the eyewear. A system and method in accordance with the present invention is directed to a variety of ways to enhance the use of eyeglasses. They are: (1) media focals, that is, utilizing the eyewear for its intended purpose and enhancing that use by using imaging techniques to improve the vision of the user; (2) telecommunications enhancements that allow the eyeglasses to be integrated with telecommunication devices such as cell phones or the like; and (3) entertainment enhancements that allow the eyewear to be integrated with devices such as MP3 players, radios, or the like.
Type: Grant
Filed: January 3, 2019
Date of Patent: April 13, 2021
Assignee: Percept Technologies Inc
Inventor: Scott W. Lewis
-
Patent number: 10970458
Abstract: Techniques are disclosed for clustering text. The techniques may be employed to cluster text blocks that are received in either sequential reading order or arbitrary order. A methodology implementing the techniques according to an embodiment includes receiving text blocks comprising elements that may include one or more of glyphs, characters, and/or words. The method further includes determining an order of the received text blocks as one of arbitrary order or sequential reading order. Text blocks received in sequential reading order progress from left to right and from top to bottom for horizontal oriented text, and from top to bottom and left to right for vertical oriented text. The method further includes performing z-order text clustering in response to determining that the received text blocks are in sequential reading order and performing sorted order text clustering in response to determining that the received text blocks are not in sequential reading order.
Type: Grant
Filed: June 25, 2020
Date of Patent: April 6, 2021
Assignee: Adobe Inc.
Inventors: Praveen Kumar Dhanuka, Matthew Fisher, Arushi Jain
-
Patent number: 10955678
Abstract: In certain embodiments, enhancement of a field of view of a user may be facilitated via one or more dynamic display portions. In some embodiments, one or more changes related to one or more eyes of a user may be monitored. Based on the monitoring, one or more positions of one or more transparent display portions of the wearable device may be adjusted, where the transparent display portions enable the user to see through the wearable device. A live video stream representing an environment of the user may be obtained via the wearable device. A modified video stream derived from the live video stream may be displayed on one or more other display portions of the wearable device.
Type: Grant
Filed: September 4, 2019
Date of Patent: March 23, 2021
Assignee: University of Miami
Inventors: Mohamed Abou Shousha, Ahmed Sayed
-
Patent number: 10956699
Abstract: In determining a distance of an object captured by a remote camera, a controller receives an image of the object from another controller coupled to a camera over a data network. The image includes a label image of a label associated with the object. The controller determines a label dimension of the label that includes a real world size of the label and determines a label image dimension of the label image that includes a size of the label image. The controller calculates a label distance using optical characteristics of the camera, the label dimension, and the label image dimension, and announces the label distance using an output component coupled to the controller. When the controller receives a command to operate the camera input by a user, the controller sends at least one instruction to operate the camera according to the command to the other controller over the data network.
Type: Grant
Filed: November 18, 2019
Date of Patent: March 23, 2021
Inventors: Chi Fai Ho, Augustine Junda Ho
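The distance calculation above combines the camera's optical characteristics, the label's real-world size, and the label's size in the image. One standard way to relate these is the pinhole-camera model, distance = focal length × real width / image width; the function and sample numbers below are an illustrative sketch under that assumption, not the patent's exact formula.

```python
def label_distance_m(focal_length_px: float,
                     label_width_m: float,
                     label_image_width_px: float) -> float:
    """Pinhole-camera estimate: an object of known real-world width W that
    spans w pixels in an image taken with focal length f (in pixels) lies
    at distance f * W / w."""
    return focal_length_px * label_width_m / label_image_width_px

# Hypothetical numbers: a 0.10 m wide label imaged 80 px wide by a camera
# with a 1000 px focal length is about 1.25 m away.
print(label_distance_m(1000.0, 0.10, 80.0))
```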
-
Patent number: 10867449
Abstract: A method of augmenting sight in an individual. The method comprises obtaining an image of a scene using a camera carried by the individual; transmitting the obtained image to a processor carried by the individual; selecting an image modification to be applied to the image by the processor; operating upon the image to create a modified image using either analog or digital imaging techniques, and displaying the modified image on a display device worn by the individual. The invention also relates to an apparatus for augmenting sight in an individual. The apparatus comprises a camera, carried by the individual, for obtaining an image of a scene viewed by the individual; a display carried by the individual; an image modification input device carried by the individual; and a processor, carried by the individual. The processor modifies the image and displays the modified image on the display carried by the individual.
Type: Grant
Filed: March 4, 2019
Date of Patent: December 15, 2020
Assignee: eSight Corp.
Inventors: Conrad Lewis, Daniel Mathers, Robert Hilkes, Rejean Munger, Roger Colbeck
-
Patent number: 10817675
Abstract: Methods and systems are provided for communicating an announcement to passengers on a transportation vehicle. For example, one method includes providing an information system on the vehicle having at least one of a wireless access point and a plurality of seat display devices and operating the information system to communicate with the wireless access point or the seat display devices. The method includes playing audio corresponding to the announcement over a public address system of the vehicle, and causing text corresponding to the audio to display on the seat display devices or personal electronic devices in communication with the wireless access point.
Type: Grant
Filed: November 20, 2018
Date of Patent: October 27, 2020
Assignee: Panasonic Avionics Corporation
Inventors: Philip Watson, Steven Bates
-
Patent number: 10776999
Abstract: A system and method is provided for generating textured 3D building models from ground-level imagery. Ground-level images for the sides/corners of building objects are collected for identification of key architectural features, corresponding key façade geometry planes, and generation of a 3D building façade geometry. The 3D building model is properly geo-positioned, scaled and textured.
Type: Grant
Filed: September 2, 2016
Date of Patent: September 15, 2020
Assignee: Hover Inc.
Inventors: Shaohui Sun, Ioannis Pavlidis, Adam J. Altman
-
Patent number: 10776929
Abstract: The present invention relates to a method, system and non-transitory computer-readable recording medium for determining a region of interest for photographing ball images. According to one aspect of the invention, there is provided a method for determining a region of interest for photographing ball images, comprising the steps of: recognizing a location of a ball whose physical quantity is to be measured, in a state in which shot preparation is completed; and dynamically determining a region of interest to be photographed to acquire images including an appearance of the ball, with reference to the location of the ball and at least one of a predicted moving direction of the ball and a location of at least one camera configured to photograph the ball.
Type: Grant
Filed: June 29, 2017
Date of Patent: September 15, 2020
Assignee: CREATZ INC.
Inventors: Yong Ho Suk, Jey Ho Suk
-
Patent number: 10713515
Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes receiving a first image from a first camera depicting a first view of a physical item, where the physical item displays a plurality of characters. The method includes receiving a second image from a second camera depicting a second view of the physical item. The method includes performing optical character recognition on the first image to identify first characters and a first layout in the first image and on the second image to identify second characters and a second layout in the second image. The method includes combining the first characters with the second characters by comparing the first characters with the second characters and the first layout with the second layout. The method includes storing the combined first and second characters.
Type: Grant
Filed: September 25, 2017
Date of Patent: July 14, 2020
Assignee: ABBYY PRODUCTION LLC
Inventors: Aleksey Ivanovich Kalyuzhny, Aleksey Yevgen'yevich Lebedev
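As a rough illustration of combining OCR results from two views, the sketch below aligns the two character streams and keeps the spans they agree on, taking insertions from the second reading and otherwise preferring the first. This is a deliberately naive stand-in for the comparison of characters and layouts the abstract describes; the function and its inputs are hypothetical.

```python
from difflib import SequenceMatcher

def combine_ocr(first: str, second: str) -> str:
    """Merge two OCR readings of the same text: keep matching spans, take
    insertions from the second reading, and otherwise prefer the first
    reading (a real system would also weigh layout and per-character
    confidence)."""
    merged = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, first, second).get_opcodes():
        merged.append(second[j1:j2] if op == "insert" else first[i1:i2])
    return "".join(merged)

print(combine_ocr("Invoce No. 1234", "Invoice No. 1234"))  # -> "Invoice No. 1234"
```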
-
Patent number: 10649706
Abstract: The disclosure discloses a non-transitory computer-readable recording medium storing a virtual label display process program for executing steps. The steps include a composite image generating step, a composite image output step, a determining step, and a notifying step. In the composite image generating step, a real image data of a desired field of view and a virtual image data of a label are combined. In the composite image output step, a composite image data is output to a display device, and a virtual image of the label on the display device is superimposed and displayed. In the determining step, it is determined whether a desired suitability is satisfied between an exterior appearance of a background object and an exterior appearance of the label based on the real image data and the virtual image data. In the notifying step, a predetermined suitability notification is made.
Type: Grant
Filed: September 25, 2017
Date of Patent: May 12, 2020
Assignee: BROTHER KOGYO KABUSHIKI KAISHA
Inventors: Feng Zhu, Keigo Kako
-
Patent number: 10649536
Abstract: Hand dimensions are determined for hand and gesture recognition with a computing interface. An input sequence of frames is received from a camera. Frames of the sequence are identified in which a hand is recognized. Points are identified in the identified frames corresponding to features of the recognized hand. A value is determined for each of a set of different feature lengths of the recognized hand using the identified points for each identified frame. Each different feature length value is collected for the identified frames independently of each other feature length value. Each different feature length value is analyzed to determine an estimate of each different feature length, and the estimated feature lengths are applied to a hand tracking system, the hand tracking system for applying commands to a computer system.
Type: Grant
Filed: November 24, 2015
Date of Patent: May 12, 2020
Assignee: Intel Corporation
Inventors: Alon Lerner, Shahar Fleishman
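The abstract collects each hand feature length independently across frames and then derives an estimate per feature. A small sketch of that aggregation step follows, assuming the per-frame measurements are already available and using the median as the estimator; the abstract does not name a specific statistic, so that choice and the feature names are illustrative.

```python
import statistics
from collections import defaultdict

def estimate_feature_lengths(per_frame_measurements):
    """per_frame_measurements: iterable of dicts mapping a feature name
    (e.g. 'index_finger_length') to its measured length in one frame.
    Each feature is collected independently across frames and summarised
    with the median as a simple, robust estimate."""
    collected = defaultdict(list)
    for frame in per_frame_measurements:
        for feature, length in frame.items():
            collected[feature].append(length)
    return {feature: statistics.median(values)
            for feature, values in collected.items()}

frames = [{"index_finger_length": 7.4, "palm_width": 8.9},
          {"index_finger_length": 7.6, "palm_width": 9.1},
          {"index_finger_length": 7.5, "palm_width": 8.8}]
print(estimate_feature_lengths(frames))  # median per feature
```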
-
Patent number: 10555034
Abstract: What is disclosed is a video system. The video system includes a digital video recorder comprising a first camera interface configured to receive video captured from a first plurality of cameras, a packet interface configured to receive in a packet format video captured by a second plurality of cameras, and a storage system configured to store the video captured by the first plurality of cameras and the video captured by the second plurality of cameras. The video system also includes a video encoder coupled to the digital video recorder by a packet link, where the video encoder includes a second camera interface configured to receive video captured from the second plurality of cameras and an output interface configured to transfer in the packet format the video captured by the second plurality of cameras for delivery to the digital video recorder over the packet link.
Type: Grant
Filed: June 29, 2018
Date of Patent: February 4, 2020
Assignee: Verint Americas Inc.
Inventors: Hugo Martel, Charles Gregory Lampe, Louis Marchand, Jim Moran
-
Patent number: 10387485
Abstract: A method, computer program product, and system includes a processor(s) monitoring, via an image capture device communicatively coupled to the one or more processors, visual focus of a user to identify a focal point of a user on an area of an image of at least one object displayed in a graphical user interface communicatively coupled to the one or more processors. The processor(s) derives shape geometry of the object, creating a three-dimensional model. The processor(s) obtains, via the image capture device, a physical gesture by the user. The processor(s) performs a contextual analysis of the physical gesture to determine an application of the physical gesture to a portion of the object depicted in the area of the image. The processor(s) formulates search criteria, based on determining the application and the area. The processor(s) executes a search based on the search criteria and displays a search result.
Type: Grant
Filed: March 21, 2017
Date of Patent: August 20, 2019
Assignee: International Business Machines Corporation
Inventors: Munish Goyal, Wing L. Leung, Sarbajit K. Rakshit, Kimberly Greene Starks
-
Patent number: 10386641
Abstract: Configurations are disclosed for a health system to be used in various healthcare applications, e.g., for patient diagnostics, monitoring, and/or therapy. The health system may comprise a light generation module to transmit light or an image to a user, one or more sensors to detect a physiological parameter of the user's body, including their eyes, and processing circuitry to analyze an input received in response to the presented images to determine one or more health conditions or defects.
Type: Grant
Filed: September 19, 2016
Date of Patent: August 20, 2019
Assignee: Magic Leap, Inc.
Inventors: Nicole Elizabeth Samec, John Graham Macnamara, Christopher M. Harrises, Brian T. Schowengerdt, Rony Abovitz, Mark Baerenrodt
-
Patent number: 10354116
Abstract: A method and apparatus for authenticating a fingerprint image captured through an optical sensor. For at least some embodiments, light scattering characteristics associated with a fingerprint are determined and compared to a reference light scattering characteristic. The fingerprint is authenticated when the light scattering characteristics are within a threshold difference of the reference light scattering characteristic. For some embodiments, the light scattering characteristics associated with the fingerprint are compared to light scattering characteristics associated with one or more reference (enrollment) images. For at least some embodiments, the light scattering characteristics may be based on a correlation value based on identified pixels and one or more pixels neighboring the identified pixel.
Type: Grant
Filed: July 6, 2017
Date of Patent: July 16, 2019
Assignee: SYNAPTICS INCORPORATED
Inventor: Scott Dattalo
-
Patent number: 10140507
Abstract: A virtual reality (VR) headset configured to be worn by a user. The VR headset comprises: i) a forward-looking vision sensor for detecting objects in the forward field of view of the VR headset; ii) a downward-looking vision sensor for detecting objects in the downward field of view of the VR headset; iii) a controller coupled to the forward-looking vision sensor and the downward-looking vision sensor. The controller is configured to: a) detect a hand in a first image captured by the forward-looking vision sensor; b) detect an arm of the user in a second image captured by the downward-looking vision sensor; and c) determine whether the detected hand in the first image is a hand of the user.
Type: Grant
Filed: December 29, 2015
Date of Patent: November 27, 2018
Assignee: Samsung Electronics Co., Ltd.
Inventor: Gaurav Srivastava
-
Patent number: 10126826
Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
Type: Grant
Filed: June 27, 2016
Date of Patent: November 13, 2018
Assignee: Eyesight Mobile Technologies Ltd.
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
-
Patent number: 10058454
Abstract: An apparatus, system or method for aiding the vision of visually impaired individuals having a retina with reduced functionality, which overcomes the drawbacks of the background art by overcoming such reduced and/or uneven retinal function.
Type: Grant
Filed: August 19, 2013
Date of Patent: August 28, 2018
Assignee: IC INSIDE LTD.
Inventors: Haim Chayet, Boris Greenberg, Lior Ben-Hur
-
Patent number: 9811885
Abstract: Disclosed are systems, computer-readable mediums, and methods for detecting glare in a frame of image data. A frame of image data is preprocessed. A set of connected components in the preprocessed frame is determined. A set of statistics is calculated for one or more connected components in the set of connected components. A decision for the one or more connected components is made, using the calculated set of statistics, if the connected component is a light spot over text. Whether glare is present in the frame is determined.
Type: Grant
Filed: August 4, 2016
Date of Patent: November 7, 2017
Assignee: ABBYY DEVELOPMENT LLC
Inventors: Konstantin Bocharov, Mikhail Kostyukov
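A very rough sketch of the glare-detection pipeline this abstract outlines: binarise near-saturated pixels, label connected components, compute simple per-component statistics, and flag glare when a component looks like a large, nearly uniform bright spot. The thresholds are illustrative assumptions, and the described method also checks whether the spot lies over text, which this sketch omits.

```python
import numpy as np
from scipy import ndimage

def frame_has_glare(gray: np.ndarray,
                    bright_threshold: int = 245,
                    min_area_px: int = 500) -> bool:
    """Label connected components of near-saturated pixels and flag glare
    when any component is both large and almost uniformly bright."""
    bright = gray >= bright_threshold
    labels, count = ndimage.label(bright)
    for component in range(1, count + 1):
        mask = labels == component
        area = int(mask.sum())
        mean_brightness = float(gray[mask].mean())
        if area >= min_area_px and mean_brightness >= bright_threshold + 5:
            return True
    return False
```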
-
Patent number: 9684055
Abstract: A method and system are provided for controlling a measurement device remotely through gestures performed by a user. The method includes providing a relationship between each of a plurality of commands and each of a plurality of user gestures. A gesture is performed by the user with the user's body that corresponds to one of the plurality of user gestures. The gesture performed by the user is detected. A first command is determined from one of the plurality of commands based at least in part on the detected gesture. Then the first command is executed with the laser tracker.
Type: Grant
Filed: December 12, 2016
Date of Patent: June 20, 2017
Assignee: FARO TECHNOLOGIES, INC.
Inventors: Robert E. Bridges, David H. Parker, Kelley Fletcher
-
Patent number: 9626000
Abstract: A reading machine that operates in various modes and includes image correction processing is described. The reading device pre-processes an image for optical character recognition by receiving the image and determining whether text in the image is too large or small for optical character recognition processing by determining that text height falls outside of a range in which optical character recognition software will recognize text in a digitized image. If necessary the image is resized according to whether the text is too large or too small.
Type: Grant
Filed: October 27, 2014
Date of Patent: April 18, 2017
Assignee: KNFB READER, LLC
Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
-
Patent number: 9618748
Abstract: A method and apparatus of displaying a magnified image comprising obtaining an image of a scene using a camera with greater resolution than the display, and capturing the image in the native resolution of the display by either grouping pixels together, or by capturing a smaller region of interest whose pixel resolution matches that of the display. The invention also relates to a method whereby the location of the captured region of interest may be determined by external inputs such as the location of a person's gaze in the displayed unmagnified image, or coordinates from a computer mouse. The invention further relates to a method whereby a modified image can be superimposed on an unmodified image, in order to maintain the peripheral information or context from which the modified region of interest has been captured.
Type: Grant
Filed: September 27, 2010
Date of Patent: April 11, 2017
Assignee: eSight Corp.
Inventors: Rejean J. Y. B. Munger, Robert G. Hilkes, Marc Perron, Nirmal Sohi
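The two capture modes named in this abstract can be illustrated as follows: grouping sensor pixels (block averaging) so the whole scene fits the display's native resolution, versus cropping a display-sized region of interest so each sensor pixel maps to one display pixel and the region appears magnified. This is a sketch for a single-channel image; the function names and grouping factor are assumptions, not the patent's implementation.

```python
import numpy as np

def capture_full_scene(sensor: np.ndarray, factor: int) -> np.ndarray:
    """Group factor x factor sensor pixels into one display pixel by block
    averaging (unmagnified view of the whole scene)."""
    h = sensor.shape[0] - sensor.shape[0] % factor
    w = sensor.shape[1] - sensor.shape[1] % factor
    blocks = sensor[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(sensor.dtype)

def capture_magnified_roi(sensor: np.ndarray, cx: int, cy: int,
                          display_h: int, display_w: int) -> np.ndarray:
    """Crop a display-sized region of interest centred on (cx, cy); shown
    1:1 on the display, the region appears magnified relative to the
    grouped full-scene view."""
    y0 = max(0, min(cy - display_h // 2, sensor.shape[0] - display_h))
    x0 = max(0, min(cx - display_w // 2, sensor.shape[1] - display_w))
    return sensor[y0:y0 + display_h, x0:x0 + display_w]
```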
-
Patent number: 9619688
Abstract: Navigation techniques including map based and object recognition based and especially adapted for use in a portable reading machine are described.
Type: Grant
Filed: October 8, 2013
Date of Patent: April 11, 2017
Assignee: KNFB READER, LLC
Inventor: Rafael Maya Zetune
-
Patent number: 9507561
Abstract: Exemplary embodiments are described wherein an auxiliary sensor attachable to a touchscreen computing device provides an additional form of user input. When used in conjunction with an accessibility process in the touchscreen computing device, wherein the accessibility process generates audible descriptions of user interface features shown on a display of the device, actuation of the auxiliary sensor by a user affects the manner in which concurrent touchscreen input is processed and audible descriptions are presented.
Type: Grant
Filed: March 15, 2013
Date of Patent: November 29, 2016
Assignee: Verizon Patent and Licensing Inc.
Inventor: Frank A. Mckiel, Jr.
-
Patent number: 9491836
Abstract: Methods and apparatus for determining the relative electrical positions of lighting units (202a, 202b, 202c, 202d) arranged in a linear configuration along a communication bus (204) are provided. The methods may involve addressing each lighting unit (202a, 202b, 202c, 202d) of the linear configuration once, and counting a number of detected events at the position of each lighting unit. The number of detected events may be unique to each electrical position, thus providing an indication of the relative position of a lighting unit within the linear configuration. The methods may be implemented at least in part by a controller (210) common to multiple lighting units of a lighting system, or may be implemented substantially by the lighting units (202a, 202b, 202c, 202d) themselves.
Type: Grant
Filed: June 22, 2009
Date of Patent: November 8, 2016
Assignee: KONINKLIJKE PHILIPS N.V.
Inventor: Ihor Lys
-
Patent number: 9436887
Abstract: Devices and a method are provided for providing context-related feedback to a user. In one implementation, the method comprises capturing real time image data from an environment of the user. The method further comprises identifying in the image data a hand-related trigger. Multiple context-based alternative actions are associated with the hand-related trigger. Further, the method comprises identifying in the image data an object associated with the hand-related trigger. The object is further associated with a particular context. Also, the method comprises selecting one of the multiple alternative actions based on the particular context. The method further comprises outputting the context-related feedback based on a result of the executed alternative action.
Type: Grant
Filed: December 20, 2013
Date of Patent: September 6, 2016
Assignee: OrCam Technologies, Ltd.
Inventors: Yonatan Wexler, Erez Na'Aman, Amnon Shashua
-
Patent number: 9418407
Abstract: Disclosed are systems, computer-readable mediums, and methods for detecting glare in a frame of image data. A frame of image data is preprocessed. A set of connected components in the preprocessed frame is determined. A set of statistics is calculated for one or more connected components in the set of connected components. A decision for the one or more connected components is made, using the calculated set of statistics, if the connected component is a light spot over text. Whether glare is present in the frame is determined.
Type: Grant
Filed: December 9, 2014
Date of Patent: August 16, 2016
Assignee: ABBYY Development LLC
Inventors: Konstantin Bocharov, Mikhail Kostyukov
-
Patent number: 9389682
Abstract: A method for presenting content on a display screen is provided. The method initiates with presenting first content on the display screen, the first content being associated with a first detected viewing position of a user that is identified in a region in front of the display screen. At least part of second content is presented on the display screen along with the first content, the second content being progressively displayed along a side of the display screen in proportional response to a movement of the user from the first detected viewing position to a second detected viewing position of the user.
Type: Grant
Filed: July 1, 2013
Date of Patent: July 12, 2016
Assignee: Sony Interactive Entertainment Inc.
Inventor: Ryuji Nakayama
-
Patent number: 9377867
Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
Type: Grant
Filed: August 8, 2012
Date of Patent: June 28, 2016
Assignee: EYESIGHT MOBILE TECHNOLOGIES LTD.
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
-
Patent number: 9367126
Abstract: A method for providing a dynamic perspective-based presentation of content on a cellular phone is provided, comprising: presenting a first portion of a content space on a display screen of the cellular phone; tracking a location of a user's head in front of the display screen; detecting a lateral movement of the user's head relative to the display screen; progressively exposing an adjacent second portion of the content space, from an edge of the display screen opposite a direction of the lateral movement, in proportional response to the lateral movement of the user's head relative to the display screen.
Type: Grant
Filed: September 30, 2014
Date of Patent: June 14, 2016
Assignee: Sony Interactive Entertainment Inc.
Inventor: Ryuji Nakayama
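The key behaviour in this abstract is exposing the adjacent content portion in proportion to the detected lateral head movement. A toy sketch of that mapping follows; the maximum head displacement and panel width are assumed parameters, not values from the patent.

```python
def exposed_width_px(head_dx_px: float,
                     max_head_dx_px: float = 120.0,
                     panel_width_px: int = 320) -> int:
    """Map lateral head displacement (relative to the screen centre) to how
    many pixels of the adjacent content portion slide in from the opposite
    edge, proportionally and clamped to the panel width."""
    fraction = max(0.0, min(1.0, abs(head_dx_px) / max_head_dx_px))
    return int(round(fraction * panel_width_px))

print(exposed_width_px(60.0))   # 160 px: half the panel exposed
print(exposed_width_px(200.0))  # 320 px: fully exposed (clamped)
```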
-
Patent number: 9311917
Abstract: A machine, system and method for user-guided teaching of deictic references and referent objects of deictic references to a conversational system. The machine includes a system bus for communicating data and control signals received from the conversational system to the computer system, a data and control bus for connecting devices and sensors in the machine, a bridge module for connecting the data and control bus to the system bus, respective machine subsystems coupled to the data and control bus, the respective machine subsystems having a respective user interface for receiving a deictic reference from a user, a memory coupled to the system bus for storing deictic references and objects of the deictic references learned by the conversational system and a central processing unit coupled to the system bus for executing the deictic references with respect to the objects of the deictic references learned.
Type: Grant
Filed: January 21, 2009
Date of Patent: April 12, 2016
Assignee: International Business Machines Corporation
Inventors: Liam D. Comerford, Mahesh Viswanathan
-
Patent number: 9263026
Abstract: A screen reader software product for low-vision users, the software having a reader module collecting textual and non-textual display information generated by a web browser or word processor. Font styling, interface layout information and the like are communicated to the end user by sounds broadcast simultaneously rather than serially with the synthesized speech to improve the speed and efficiency in which information may be digested by the end user.
Type: Grant
Filed: July 11, 2014
Date of Patent: February 16, 2016
Assignee: Freedom Scientific, Inc.
Inventors: Christian D. Hofstader, Glen Gordon, Eric Damery, Ralph Ocampo, David Baker, Joseph K. Stephen
-
Patent number: 9213911
Abstract: A device and method are provided for recognizing text on a curved surface. In one implementation, the device comprises an image sensor configured to capture from an environment of a user multiple images of text on a curved surface. The device also comprises at least one processor device. The at least one processor device is configured to receive a first image of a first perspective of text on the curved surface, receive a second image of a second perspective of the text on the curved surface, perform optical character recognition on at least parts of each of the first image and the second image, combine results of the optical character recognition on the first image and on the second image, and provide the user with a recognized representation of the text, including a recognized representation of the first portion of text.
Type: Grant
Filed: December 20, 2013
Date of Patent: December 15, 2015
Assignee: OrCam Technologies Ltd.
Inventors: Yonatan Wexler, Amnon Shashua
-
Patent number: 9191554
Abstract: Some implementations include using a trained classifier to identify page-turn events in a video. The video may be divided into multiple segments based on the page-turn events, with each segment of the multiple segments corresponding to a pair of adjacent pages in a book. Exemplar frames that provide non-redundant data compared to other frames may be chosen from each segment. The exemplar frames may be cropped to include content portions of pages. The exemplar frames may be aligned such that a pixel is located in a same position in each frame. Optical character recognition (OCR) may be performed on exemplar frames and the OCR for exemplar frames in each segment may be combined. The exemplar frames in each segment may be combined to create a composite image for each pair of adjacent pages in the book, and OCR may be performed on the composite image.
Type: Grant
Filed: November 14, 2012
Date of Patent: November 17, 2015
Assignee: Amazon Technologies, Inc.
Inventors: Vasant Manohar, Sridhar Godavarthy, Viswanath Sankaranarayanan
-
Patent number: 9165478
Abstract: A method and system for use in a user system, for accessing information related to a physical document. An electronic copy of an existing physical document is identified and located. The electronic copy of the physical document is an exact replica of the physical document. One or more pages of the physical document are identified. A selected part of the physical document is identified using the position of points on the identified one or more pages of the physical document and in response, data related to the selected part of the physical document is retrieved from the electronic copy of the physical document. The retrieved data is presented visually to a visually impaired person or orally to a blind person on the user system, which enables the visually impaired person to see or hear, respectively, the retrieved data.
Type: Grant
Filed: April 15, 2004
Date of Patent: October 20, 2015
Assignee: International Business Machines Corporation
Inventors: Fernando Incertis Carro, Sharon M. Trewin