Aid For The Blind Patents (Class 348/62)
-
Patent number: 11938083
Abstract: It is determined whether a traffic light is recognized in an image based on each of a result of a recognition operation by a traffic light first recognition unit (a recognition operation of the traffic light for an image acquired by a camera using a learned model based on pre-annotated data) and a result of a recognition operation by a traffic light second recognition unit (a recognition operation of the traffic light for an image acquired by the camera based on a feature amount of the traffic light). This makes it possible to obtain sufficient recognition accuracy for the traffic light and to give an appropriate instruction to a pedestrian according to the state of the traffic light.
Type: Grant
Filed: March 7, 2022
Date of Patent: March 26, 2024
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kohei Shintani, Hiroaki Kawamura
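For illustration only, the agreement check described in this abstract can be reduced to a few lines. This is a minimal sketch, not Toyota's implementation: the two detector outputs are stand-ins for the learned-model and feature-based recognition units, and the instruction strings are assumed.

```python
def combine_recognitions(model_state, feature_state):
    """Instruct the pedestrian only when both recognition paths agree."""
    if model_state is not None and model_state == feature_state:
        action = "you may cross" if model_state == "green" else "please wait"
        return f"Traffic light is {model_state}: {action}."
    return "Traffic light not reliably recognized: please wait."

print(combine_recognitions("green", "green"))  # agreement -> crossing instruction
print(combine_recognitions("green", "red"))    # disagreement -> cautious fallback
```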
-
Patent number: 11812280
Abstract: A method for controlling the position of an updating agent in a decentralised multi-agent system, the method comprising: identifying a first neighbouring agent within a communicative range of the updating agent; estimating a first distance to the first neighbouring agent and a first direction to the first neighbouring agent; determining a movement direction based on the first direction to the first neighbouring agent; and determining a movement magnitude based on an activation function, the first distance to the first neighbouring agent, and a desired reference distance. The activation function is configured such that the greater the difference between the first distance to the neighbouring agent and the desired reference distance, the larger the movement magnitude. The method further comprises moving the updating agent based on the movement direction and the movement magnitude.
Type: Grant
Filed: June 1, 2021
Date of Patent: November 7, 2023
Assignee: Kabushiki Kaisha Toshiba
Inventors: Marius Jurt, Anthony Portelli, Adnan Aijaz
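As an illustration of the movement rule, here is a minimal 2-D sketch. The tanh activation, the maximum step size, and the approach/retreat sign convention are assumptions; the abstract only requires that the magnitude grow with the gap between the measured and desired distances.

```python
import math

def movement_update(dx, dy, reference_distance, max_step=1.0):
    """dx, dy: estimated offset to the neighbouring agent."""
    distance = math.hypot(dx, dy)
    direction = math.atan2(dy, dx)               # bearing to the neighbour
    gap = distance - reference_distance          # positive: too far, negative: too close
    magnitude = max_step * math.tanh(abs(gap))   # larger gap -> larger step
    sign = 1.0 if gap > 0 else -1.0              # approach if too far, retreat if too close
    return (sign * magnitude * math.cos(direction),
            sign * magnitude * math.sin(direction))

# Example: neighbour is 5 m away, desired spacing 2 m -> step toward it.
print(movement_update(3.0, 4.0, reference_distance=2.0))
```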
-
Patent number: 11804006
Abstract: In one implementation, an enhanced vision system includes a portable user device, a base station including a hardware processor and a memory storing a virtual effects rendering software code, and a display device communicatively coupled to the base station. The hardware processor executes the virtual effects rendering software code to detect the presence of the portable user device in a real-world environment, obtain a mapping of the real-world environment, and identify one or more virtual effect(s) for display in the real-world environment. The hardware processor further executes the virtual effects rendering software code to detect actuation of the portable user device, and to control the display device to display the virtual effect(s) in the real-world environment based on the mapping, and the position and orientation of the portable user device during the detected actuation.
Type: Grant
Filed: June 3, 2020
Date of Patent: October 31, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Steven M. Chapman, Matthew Deuel, Daniel Baker, Dane Coffey, Mark R. Mine, Evan Goldberg
-
Patent number: 11796804
Abstract: Illumination light is emitted from a light source. The illumination light is directed to an eyebox region via a lightguide. Illumination light and visible light beams are incoupled into the lightguide by a tiltable reflector. A tracking signal is generated with a sensor in response to a returning light becoming incident on the sensor.
Type: Grant
Filed: March 28, 2022
Date of Patent: October 24, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Robin Sharma, Maxwell Parsons, Andrew John Ouderkirk, Sascha Hallstein
-
Patent number: 11763646
Abstract: A device including sensors and a processor. The sensors are configured to detect movements of a user. The processor is configured to categorize the movements of the user as a micro-movement or a macro-movement; quantify a number of the micro-movements; quantify a number of the macro-movements; determine based upon the number of micro-movements whether a body part of interest of a user is supported; and provide feedback to the user if the body part of interest is unsupported, and continue to monitor the body part of interest without providing any feedback if it is supported.
Type: Grant
Filed: July 12, 2021
Date of Patent: September 19, 2023
Assignees: Zepp, Inc., Anhui Huami Health Technology Co., Ltd.
Inventors: Yan Vule, Artem Galeev, Vahid Zakeri, Kongqiao Wang
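A toy sketch of the support check follows. The amplitude boundary and the micro-movement count that triggers feedback are assumed values, since the abstract does not quantify them.

```python
MICRO_LIMIT = 0.5             # assumed amplitude boundary between micro and macro
UNSUPPORTED_MICRO_COUNT = 10  # assumed count above which feedback is triggered

def classify(movements):
    """Split detected movement amplitudes into micro and macro counts."""
    micro = sum(1 for amplitude in movements if amplitude < MICRO_LIMIT)
    macro = len(movements) - micro
    return micro, macro

def check_support(movements):
    micro, _macro = classify(movements)
    if micro > UNSUPPORTED_MICRO_COUNT:
        return "feedback: body part appears unsupported"
    return None  # supported: keep monitoring silently

print(check_support([0.1] * 15))  # many micro-movements -> feedback
print(check_support([0.1] * 3))   # few micro-movements -> None
```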
-
Patent number: 11727858
Abstract: A 1D scanning micro-display architecture for high-resolution image visualization in compact AR and Head Mounted Displays (“HMDs”). A display driver is configured to drive a plurality of display pixels of a tri-linear microdisplay, wherein the tri-linear microdisplay defines one or more stripes. Each of the stripes is constructed of one or more rows of pixels and is used in the 1D-scanning display system to create high-resolution images in an augmented reality (“AR”) or Head Mounted Display.
Type: Grant
Filed: March 16, 2021
Date of Patent: August 15, 2023
Assignee: Auroratech Company
Inventors: Ziqi (Kelly) Peng, Eric Schmid
-
Patent number: 11705018
Abstract: A personal navigation system includes a sensing suite (50) that is configured to find a safety route for a person; a set of modules (200) including a generation module that is configured to generate an auditory-haptic content (70) that is not perceived visually but perceived by auditory sense or haptic sense and indicates the found safety route; and a communication device (30) that is configured to communicate the auditory-haptic content to the person.
Type: Grant
Filed: February 21, 2018
Date of Patent: July 18, 2023
Inventors: Haley Brathwaite, Amaranath Somalapuram, Krishnaraj Bhat, Ramanathan Venkataraman, Prakash Murthy
-
Patent number: 11681496
Abstract: A communication support device comprises an imaging unit, a counterpart detector, a distance measuring unit, an expression determination unit, a motion determination unit, and a voice output unit. The imaging unit captures an image of a surrounding environment of a user. The counterpart detector detects a predetermined counterpart in the captured image. The distance measuring unit measures a distance between the counterpart and the imaging unit based on the captured image. The expression determination unit determines a facial expression of the counterpart based on the captured image. The motion determination unit determines a motion of the counterpart based on the captured image. The voice output unit notifies the user of identification information for identifying the counterpart by a voice when the distance measured by the distance measuring unit is an interaction distance equal to or less than a first threshold.
Type: Grant
Filed: June 3, 2021
Date of Patent: June 20, 2023
Assignee: OMRON CORPORATION
Inventors: Endri Rama, Kazuo Yamamoto, Tomohiro Yabuuchi, Naoto Iwamoto
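The gating rule for the voice output unit can be sketched as below; the 1.5 m interaction threshold and the announcement wording are assumptions, not details from the patent.

```python
INTERACTION_DISTANCE = 1.5   # assumed first threshold, in metres

def speak(text):             # stand-in for the device's voice output unit
    print(f"[voice] {text}")

def maybe_announce(counterpart_name, expression, motion, distance_m):
    """Announce the counterpart only within the interaction distance."""
    if distance_m <= INTERACTION_DISTANCE:
        speak(f"{counterpart_name} is nearby, looking {expression} and {motion}.")

maybe_announce("Alice", "happy", "waving", distance_m=1.2)        # announced
maybe_announce("Bob", "neutral", "walking past", distance_m=4.0)  # silent
```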
-
Patent number: 11675217
Abstract: Eyewear having a light array and vibration sensors for indicating to a user the direction and distance of an object relative to the eyewear to help a user understand and avoid objects. To compensate for partial blindness, the front portion of the eyewear frame may include the light array, where one or more lights of the light array is illuminated to indicate a corresponding direction of a proximate object. The relative brightness of the one or more lights indicates how close the object is, where brighter light(s) indicates a close object. To compensate for more severe partial blindness or complete blindness, the eyewear has haptics, such as a plurality of vibration devices on the front portion of the eyewear, such as in the bridge, which selectively vibrate to indicate a direction of the object. The stronger the vibration, the closer the object.
Type: Grant
Filed: November 29, 2021
Date of Patent: June 13, 2023
Assignee: Snap Inc.
Inventors: Stephen Pomes, Yu Jiang Tham
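A rough sketch of the direction-to-light and distance-to-intensity mapping described above; the eight-light array, the 5 m range, and the linear mapping are assumptions, not details from the patent.

```python
NUM_LIGHTS = 8
MAX_RANGE_M = 5.0

def indicate(object_bearing_deg, object_distance_m):
    """bearing: -90 (far left) .. +90 (far right) relative to the wearer."""
    index = round((object_bearing_deg + 90) / 180 * (NUM_LIGHTS - 1))
    closeness = max(0.0, min(1.0, 1.0 - object_distance_m / MAX_RANGE_M))
    brightness = closeness            # brighter light = closer object
    vibration = closeness             # stronger vibration = closer object
    return index, brightness, vibration

print(indicate(-45, 1.0))  # object to the left and near -> bright/strong
print(indicate(30, 4.5))   # object to the right and far -> dim/weak
```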
-
Patent number: 11650421
Abstract: A method may include identifying, by one or more processors, an object in a field of view of a wearable display, where the object is identified for a presbyopic compensation. The presbyopic compensation is performed by the one or more processors on image data of the object to generate compensated image data of the object. The one or more processors render an image in response to the compensated image data of the object on a display of the wearable display.
Type: Grant
Filed: May 23, 2019
Date of Patent: May 16, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Ian Erkelens, Larry Richard Moore, Jr., Kevin James MacKenzie
-
Patent number: 11631246
Abstract: An event-based sensor having a sensing part that produces events asynchronously. The output signal includes information relating to at least some of the produced events. The method comprises: estimating a rate of production of events by the sensing part; while the estimated rate is less than a threshold, transmitting the signal having a variable rate; and when the estimated rate is above the threshold, transmitting the signal including information relating to only some of the produced events, such that a rate of events for which information is included in the signal remains within the threshold.
Type: Grant
Filed: December 26, 2018
Date of Patent: April 18, 2023
Assignee: PROPHESEE
Inventors: Stephane Laveau, Xavier Lagorce
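The rate-limiting behaviour can be sketched as follows; the keep-every-k-th drop policy is an assumption, since the abstract only requires that the transmitted event rate stay within the threshold.

```python
def transmit(events, estimated_rate, rate_threshold):
    """Send all events at low rates; thin them out when the estimated rate is too high."""
    if estimated_rate <= rate_threshold:
        return list(events)                                       # variable rate: send everything
    keep_every = -(-int(estimated_rate) // int(rate_threshold))   # ceiling division
    return events[::keep_every]                                   # send only a subset of events

events = list(range(20))
print(transmit(events, estimated_rate=100, rate_threshold=100))  # all 20 events
print(transmit(events, estimated_rate=300, rate_threshold=100))  # every 3rd event
```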
-
Patent number: 11521515
Abstract: A method and a wearable system which includes distance sensors, cameras and headsets, which all gather data about a blind or visually impaired person's surroundings and are all connected to a portable personal communication device, the device being configured to use scenario-based algorithms and an AI to process the data and transmit sound instructions to the blind or visually impaired person to enable him/her to independently navigate and deal with his/her environment by providing identification of objects and reading of local text.
Type: Grant
Filed: February 11, 2020
Date of Patent: December 6, 2022
Assignee: Can-U-C Ltd.
Inventors: Leeroy Solomon, Doron Solomon
-
Patent number: 11501498
Abstract: According to one implementation, an augmented reality image generation system includes a display, and a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to receive a camera image depicting one or more real-world object(s), and to identify one or more reference point(s) corresponding to the camera image, each of the reference point(s) having a predetermined real-world location. The software code further maps the real-world object(s) to their respective real-world location(s) based on the predetermined real-world location(s) of the reference point(s), merges the camera image with a virtual object to generate an augmented reality image including the real-world object(s) and the virtual object, and renders the augmented reality image on the display. The location of the virtual object in the augmented reality image is determined based on the real-world location(s) of the real-world object(s).
Type: Grant
Filed: May 8, 2018
Date of Patent: November 15, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Michael P. Goslin, Steven M. Chapman, Mark Arana
-
Patent number: 11432989
Abstract: A controller of a white stick system determines a walking direction of a person who acts without using eyesight based on sensor data acquired from built-in sensors, such as a camera, and generates a first notification to notify the person of the determined walking direction. A wireless communication unit receives information regarding movement of a mobile object present around the person from a data communication module and a mobile terminal. The controller generates a second notification to warn of a collision between the person who moves without using eyesight and the mobile object when the collision is predicted based on the information regarding movement of the mobile object.
Type: Grant
Filed: April 23, 2021
Date of Patent: September 6, 2022
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Mashio Taniguchi, Mariko Higuchi, Masatoshi Kakutani, Yukiko Ohnishi, Hiroaki Kawamura
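A minimal sketch of the second (collision) notification, assuming straight-line motion over a short horizon and an assumed safe distance; the patent does not specify the prediction method.

```python
import math

def predict_collision(person_pos, person_vel, obj_pos, obj_vel,
                      horizon_s=5.0, safe_distance_m=1.0, step_s=0.1):
    """Sample future positions and warn if the paths come within the safe distance."""
    t = 0.0
    while t <= horizon_s:
        px = person_pos[0] + person_vel[0] * t
        py = person_pos[1] + person_vel[1] * t
        ox = obj_pos[0] + obj_vel[0] * t
        oy = obj_pos[1] + obj_vel[1] * t
        if math.hypot(px - ox, py - oy) <= safe_distance_m:
            return f"warning: possible collision in {t:.1f} s"
        t += step_s
    return None  # no collision predicted within the horizon

# Pedestrian walking east, object approaching head-on from 10 m away.
print(predict_collision((0, 0), (1.2, 0), (10, 0), (-1.2, 0)))
```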
-
Patent number: 11409942
Abstract: Devices and methods use a processor, a scanner, and a display screen. The processor is used to recognize Braille characters within the field of view of the scanner. The processor is used to convert the Braille characters into text, and the display screen is used to display the text visibly.
Type: Grant
Filed: January 6, 2021
Date of Patent: August 9, 2022
Assignee: Xerox Corporation
Inventors: Rajana M. Panchani, Michael D. McGrath, Patrick W. B. Gerrits, Peter Granby
-
Patent number: 11300857
Abstract: A variety of wearable mounts for a portable camera are disclosed. The variety of wearable mounts includes a ring mount, a necklace mount, a hat mount, and an eyewear mount.
Type: Grant
Filed: November 12, 2019
Date of Patent: April 12, 2022
Assignee: Opkix, Inc.
Inventors: Christopher Lawrence Greaves, John McGuinness, Shahin Amirpour, Ryan Mikah Fuller, Christopher Steven David Albanese
-
Patent number: 11270451
Abstract: A system for providing information about an environment to a user within the environment is featured. An electronic processor is configured to receive input including a user selection of an object of interest from among potential objects of interest. The electronic processor is further configured to provide output to guide the user to move the detection apparatus to position the object of interest near a reference point on a field of view of the detection apparatus, obtain multiple images of the object of interest during the user's movement of the detection apparatus, and crop each of the images to keep the object of interest near a reference point on each of the images.
Type: Grant
Filed: March 16, 2018
Date of Patent: March 8, 2022
Assignee: The Schepens Eye Research Institute, Inc.
Inventors: Eliezer Peli, JaeHyun Jung, Cheng Qiu
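The cropping step can be sketched as below, assuming each frame comes with a detected object centre in pixel coordinates and using the crop centre as the reference point; the window size is an arbitrary choice for illustration.

```python
def crop_around(frame, obj_x, obj_y, crop_w=200, crop_h=200):
    """frame: 2-D list (rows of pixels); return a window keeping the object near its centre."""
    h, w = len(frame), len(frame[0])
    left = max(0, min(w - crop_w, obj_x - crop_w // 2))
    top = max(0, min(h - crop_h, obj_y - crop_h // 2))
    return [row[left:left + crop_w] for row in frame[top:top + crop_h]]

# Example with a dummy 480x640 frame and an object detected at (500, 300).
frame = [[0] * 640 for _ in range(480)]
cropped = crop_around(frame, obj_x=500, obj_y=300)
print(len(cropped), len(cropped[0]))  # 200 200
```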
-
Patent number: 11215846
Abstract: Eyewear having a light array and vibration sensors for indicating to a user the direction and distance of an object relative to the eyewear to help a user understand and avoid objects. To compensate for partial blindness, the front portion of the eyewear frame may include the light array, where one or more lights of the light array is illuminated to indicate a corresponding direction of a proximate object. The relative brightness of the one or more lights indicates how close the object is, where brighter light(s) indicates a close object. To compensate for more severe partial blindness or complete blindness, the eyewear has haptics, such as a plurality of vibration devices on the front portion of the eyewear, such as in the bridge, which selectively vibrate to indicate a direction of the object. The stronger the vibration, the closer the object.
Type: Grant
Filed: August 20, 2020
Date of Patent: January 4, 2022
Assignee: Snap Inc.
Inventors: Stephen Pomes, Yu Jiang Tham
-
Patent number: 11186225
Abstract: A motor vehicle includes a left side view camera attached to the left lateral side of the body and having a field of view in a rearward direction. A right side view camera is attached to the right lateral side of the body and has a field of view in a rearward direction. There is a right back up camera and a left back up camera. An optics module receives resulting video signals and produces light fields such that the light fields are reflected off of windows of the motor vehicle and are then visible to a driver of the motor vehicle as virtual images.
Type: Grant
Filed: October 31, 2019
Date of Patent: November 30, 2021
Assignee: Panasonic Automotive Systems Company of America, Division of Panasonic Corporation of North America
Inventor: Benjamin David Sweet
-
Patent number: 11190753
Abstract: Various implementations are disclosed for detecting moving objects that are in a field of view of a head-mountable device (HMD). In various implementations, the HMD includes a display, an event camera, a non-transitory memory, and a processor coupled with the display, the event camera and the non-transitory memory. In some implementations, the method includes synthesizing a first optical flow characterizing one or more objects in a field of view of the event camera based on depth data associated with the one or more objects. In some implementations, the method includes determining a second optical flow characterizing the one or more objects in the field of view of the event camera based on event image data provided by the event camera. In some implementations, the method includes determining that a first object of the one or more objects is moving based on the first optical flow and the second optical flow.
Type: Grant
Filed: June 22, 2018
Date of Patent: November 30, 2021
Assignee: APPLE INC.
Inventor: Peter Meier
-
Patent number: 11132793
Abstract: A method, computer system, and a computer program product for case-adaptive image quality assessment is provided. The present invention may include detecting a current set of features in a current exam associated with a patient. The present invention may also include calculating a current set of quality measurements for the current exam based on the detected current set of features. The present invention may further include, in response to determining that the calculated current set of quality measurements for the current exam is below a patient-specific image quality threshold defined by at least one prior exam associated with the patient, automatically registering a negative quality assessment for the current exam associated with the patient.
Type: Grant
Filed: August 1, 2019
Date of Patent: September 28, 2021
Assignee: International Business Machines Corporation
Inventors: Maria Victoria Sainz de Cea, David Richmond
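A toy sketch of the patient-specific threshold check, assuming each exam reduces to one scalar quality score and the threshold is a fixed fraction of the mean prior score; the patent leaves the exact measurements and threshold rule open.

```python
def patient_threshold(prior_scores, margin=0.9):
    """Assumed rule: threshold is a fraction of the mean quality of prior exams."""
    return margin * sum(prior_scores) / len(prior_scores)

def assess(current_score, prior_scores):
    if current_score < patient_threshold(prior_scores):
        return "negative quality assessment registered"
    return "quality acceptable for this patient"

print(assess(0.62, prior_scores=[0.80, 0.85, 0.78]))  # below the patient-specific threshold
print(assess(0.81, prior_scores=[0.80, 0.85, 0.78]))  # acceptable
```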
-
Patent number: 11102449
Abstract: Embodiments of the present invention are directed towards methods and systems for providing an enhanced telepresence experience to users participating in a videoconferencing session (VCS). In the embodiments, a camera is configured and arranged to capture image data covering objects within a substantial portion of the field of view (FOV) of a display device, without capturing image data encoding images displayed on the display device. That is, the camera's FOV is aligned with the display device's FOV. As such, the camera captures image data encoding images in a substantial portion of the display's FOV. Accordingly, users within a VCS may approach their display without falling outside their camera's FOV. This provides an enhanced telepresence experience, where the users may interact with each other through what appears to be a transparent window or barrier.
Type: Grant
Filed: May 26, 2020
Date of Patent: August 24, 2021
Inventor: Noah Zimmerman
-
Patent number: 11089289
Abstract: An image processing device includes depth acquisition circuitry that uses a parallax corresponding to image data to electronically generate a depth map of an image and object detection circuitry that uses distance information and the depth map to electronically detect a specific object in the image by identifying specific pixels in the image data. The depth map includes information that pertains to distances from a reference position for each pixel in the image.
Type: Grant
Filed: April 22, 2016
Date of Patent: August 10, 2021
Assignee: Sony Semiconductor Solutions Corporation
Inventors: Takeo Ohishi, Kazuyuki Yamamoto, Shuzo Sato, Takayuki Yoshigahara, Jun Murayama, Ken Nishida, Kazuhiro Yamada
-
Patent number: 11033187
Abstract: A text-to-Braille service is disclosed herein that includes an imaging module. The imaging module includes multiple cameras arranged to image part of a page of text. Each camera has a different field of view of the page, so each camera images a unique portion of the page. The multiple images can be combined to form a single image upon which optical character recognition is performed. The text of the page can be converted into Braille characters and displayed on a refreshable Braille device.
Type: Grant
Filed: December 9, 2016
Date of Patent: June 15, 2021
Inventors: Grace Li, Chen Wang, Chandani Doshi, Tania Yu, Jialin Shi, Charlene Xia
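The final conversion step might look like the sketch below, with a stand-in OCR function and a deliberately tiny Grade-1 letter map built from Unicode braille patterns; the actual device stitches several camera views together before recognition.

```python
# Partial Grade-1 map: 'c' = dots 1,4; 'a' = dot 1; 't' = dots 2,3,4,5; space = blank cell.
BRAILLE = {"c": "\u2809", "a": "\u2801", "t": "\u281E", " ": "\u2800"}

def recognize_text(stitched_image):
    """Stand-in for optical character recognition on the combined image."""
    return "cat"

def to_braille(text):
    """Map recognized characters to braille cells; unknown characters become '?'."""
    return "".join(BRAILLE.get(ch, "?") for ch in text.lower())

print(to_braille(recognize_text(None)))  # ⠉⠁⠞
```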
-
Patent number: 10983351
Abstract: Configurations are disclosed for a health system to be used in various healthcare applications, e.g., for patient diagnostics, monitoring, and/or therapy. The health system may comprise a light generation module to transmit light or an image to a user, one or more sensors to detect a physiological parameter of the user's body, including their eyes, and processing circuitry to analyze an input received in response to the presented images to determine one or more health conditions or defects.
Type: Grant
Filed: August 12, 2019
Date of Patent: April 20, 2021
Assignee: Magic Leap, Inc.
Inventors: Nicole Elizabeth Samec, John Graham Macnamara, Christopher M. Harrises, Brian T. Schowengerdt, Rony Abovitz, Mark Baerenrodt
-
Patent number: 10963999
Abstract: A system and methods for contrast sensitivity compensation provide for correcting the vision of users whose vision is deficient in discerning high spatial frequencies. The system and methods use measurements of the user's contrast detection as a function of spatial frequency in the image to correct images in real time. The system includes a head-mountable device that includes a camera and a processor that can provide enhanced images at video framing rates.
Type: Grant
Filed: February 13, 2019
Date of Patent: March 30, 2021
Assignee: Irisvision, Inc.
Inventors: Frank Werblin, Robert Massof, Chris Bradley
-
Patent number: 10902214
Abstract: A personality model is created for a population and used as an input to a text generation system. Alternative texts are created based upon the emotional effect of the generated text. Certain words or phrases are “pinned” in the output, reducing the variability of the generated text so as to preserve required information content, and a number of tests provide input to a discriminator network so that proposed outputs match an outside objective regarding the information content, emotional affect, and grammatical acceptability. A feedback loop provides new “ground truth” data points for refining the personality model and associated generated text.
Type: Grant
Filed: January 19, 2020
Date of Patent: January 26, 2021
Assignee: Jungle Disk, L.L.C.
Inventor: Michael DeFelice
-
Patent number: 10900788
Abstract: Described herein is a system for providing range and navigation information for visually impaired persons using range finders, image recognition, and non-visual sensual signals. The system provides information about the identity and distance of objects and potential obstacles in the vicinity of the user in a non-visual form that can be perceived by a visually impaired person.
Type: Grant
Filed: December 3, 2018
Date of Patent: January 26, 2021
Inventor: Sidharth Anantha
-
Patent number: 10901234
Abstract: An electronic frame for an optical device, the frame including a front frame element able to partially house at least one lens and including at least one electronic component, the front element extending, on either side of said at least one lens, over a retained length of said at least one lens. The front element includes a reinforcing element extending at least substantially over all said retained length of said at least one lens.
Type: Grant
Filed: September 8, 2016
Date of Patent: January 26, 2021
Assignee: Essilor International
Inventor: Eric Patin
-
Patent number: 10902263
Abstract: A device includes a camera and an image processing system. A first identity of a first object represented in a first image is determined. The first image is captured at a first time. A first geographical location of the camera and a second geographical location of the first object are determined at the first time. A second identity of a second object represented in a second image is determined. The second image is captured at a second time. A third geographical location of the camera and a fourth geographical location of the second object are determined at the second time. The first object and the second object are determined to be the same when the first identity matches the second identity and the second geographical location is within a threshold distance of the fourth geographical location. The device generates an output message including information about the first object.
Type: Grant
Filed: June 26, 2018
Date of Patent: January 26, 2021
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Justin-Joseph Angel, Robert Steven Murdock, Milena Sadée, Kyle Crouse, Eric Alan Breitbard, Dan Jeffrey Wilday, Sche I. Wang, Sophia Chonghyun An, Sarah Glazer, Dennis Harrington, Paul Leland Mandel
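The matching rule can be sketched as follows; the 25 m threshold and the flat-earth distance approximation are assumptions suitable only for short ranges.

```python
import math

def same_object(id1, lat1, lon1, id2, lat2, lon2, threshold_m=25.0):
    """Treat two detections as the same object if identities match and positions are close."""
    if id1 != id2:
        return False
    metres_per_deg = 111_320.0
    dx = (lon2 - lon1) * metres_per_deg * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * metres_per_deg
    return math.hypot(dx, dy) <= threshold_m

# Same identity, roughly 13 m apart -> treated as the same object.
print(same_object("bus_stop", 47.6205, -122.3493,
                  "bus_stop", 47.6206, -122.3492))
```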
-
Patent number: 10887635
Abstract: A broadcast receiver includes a tuner configured to receive a broadcast signal; a frequency deinterleaver configured to frequency deinterleave data in the broadcast signal based on an address generator; a time deinterleaver configured to time deinterleave a Time Interleaving (TI) block including the frequency deinterleaved data, the TI block including one or more Forward Error Correction (FEC) blocks interleaved by a TI scheme, the TI scheme including linear-writing one or more FEC blocks in a memory and diagonal-reading the one or more FEC blocks based on the TI block by skipping one or more virtual FEC blocks that are ahead of the one or more FEC blocks in the TI block; a decoder configured to decode the broadcast signal, the decoded broadcast signal including a signal frame including: one or more components included in a content of a service and content information describing the content, the content information including component information including role information for at least one of an audio component …
Type: Grant
Filed: April 14, 2020
Date of Patent: January 5, 2021
Assignee: LG ELECTRONICS INC.
Inventors: Minsung Kwak, Seungryul Yang, Kyoungsoo Moon, Woosuk Ko, Sungryong Hong
-
Patent number: 10867527
Abstract: Wearable device for helping a user and process for using such device, which device includes at least one image and/or video acquisition unit of the stereoscopic or multicamera type, a data processing unit connected to the image and/or video acquisition unit, and a unit indicating information processed by the processing unit to the user. The device also includes additional sensors connected to the processing unit and is intended to perform a plurality of functions, which are activated and/or deactivated for defining a plurality of device operating states alternative to each other, there being provided a unit analyzing the three-dimensional structure of a scene and the signals generated by the sensors for assigning the operating state.
Type: Grant
Filed: August 31, 2015
Date of Patent: December 15, 2020
Assignee: 5LION HORUS TECH LLC.
Inventors: Saverio Murgia, Luca Nardelli
-
Patent number: 10843299
Abstract: An apparatus configured to perform object recognition is provided that includes a camera, an assistance feedback device, and processing circuitry. The processing circuitry may be configured to receive a plurality of images from the camera and repeatedly determine characteristic features within the plurality of images and compare the characteristic features within the plurality of images to an object identification dataset to determine object matches for identified objects within the plurality of images. The processing circuitry may be further configured to determine a name for identified objects from the object identification dataset, output, using the assistance feedback device, a position indicator for identified objects, and output the name of identified objects within the assistance feedback device field of view.
Type: Grant
Filed: July 9, 2019
Date of Patent: November 24, 2020
Assignee: The Johns Hopkins University
Inventors: Seth D. Billings, Kapil D. Katyal
-
Patent number: 10733448
Abstract: A system for contextual interpretation of a three-dimensional scene includes an object recognition engine that analyzes scene data collected from the three-dimensional scene to identify at least one object present in the three-dimensional scene. The system further includes a contextual inference engine trained on a context data training set to analyze context of the scene by identifying a potential contextual inference associated in memory with the at least one object identified by the object recognition engine; comparing the scene data to a subset of the context data training set identified as satisfying the potential contextual inference; and outputting scene context information conveying the potential contextual inference responsive to a determination that the scene data and the subset of the context data training set satisfy a predetermined correlation.
Type: Grant
Filed: March 15, 2018
Date of Patent: August 4, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zachary T. Zimmerman, Alex Jungyeop Woo, Donna K. Long, Julie Hubschman, Kendall C. York, Saqib Shaikh
-
Patent number: 10628950
Abstract: A device and method use multiple light emitters with a single, multi-spectrum imaging sensor to perform multi-modal infrared light based depth sensing and visible light based Simultaneous Localization and Mapping (SLAM). The multi-modal infrared based depth sensing may include, for example, any combination of infrared-based spatial mapping, infrared based hand tracking and/or infrared based semantic labeling. The visible light based SLAM may include head tracking, for example.
Type: Grant
Filed: March 1, 2017
Date of Patent: April 21, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raymond Kirk Price, Michael Bleyer, Denis Demandolx
-
Patent number: 10607585
Abstract: Provided is a signal processing device including a control unit that performs a sound signal process on a waveform of a signal generated on a basis of movement of an object, and causes sound corresponding to a signal generated on a basis of the sound signal process to be output within a predetermined period of time. The signal processing device is capable of aurally exaggerating movement of an object itself and providing the aurally exaggerated movement of the object by performing a sound signal process on a waveform of a signal generated on the basis of the movement of the object.
Type: Grant
Filed: November 1, 2016
Date of Patent: March 31, 2020
Assignee: SONY CORPORATION
Inventors: Heesoon Kim, Masahiko Inami, Kouta Minamizawa, Yuta Sugiura, Mio Yamamoto
-
Patent number: 10583290
Abstract: The present disclosure provides a computer-implemented method for enhancing vision for a vision impaired user. The method comprises, for a point in an input image, determining (210) a weight for the point based on visual importance of the point in the input image; comparing (220) the weight for the point to a threshold; and if the weight for the point meets the threshold, determining (230) a first output value for an imaging element of a vision enhancement apparatus so that a difference between the first output value and an intensity level of a portion of the input image neighbouring the point increases with the weight, wherein the difference is at least one Just-Noticeable-Difference of the vision enhancement apparatus, such that when the first output value is applied to the imaging element of the vision enhancement apparatus to create a first visual stimulus, the first visual stimulus is substantially perceivable by the vision impaired user.
Type: Grant
Filed: September 10, 2015
Date of Patent: March 10, 2020
Assignee: National ICT Australia Limited
Inventors: Chris McCarthy, Nick Barnes
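A minimal sketch of the weight gating and JND rule, assuming 8-bit intensities, a fixed JND of 8 levels, and an importance threshold of 0.4; the claim only requires the difference to be at least one JND and to increase with the weight.

```python
JND = 8            # assumed just-noticeable difference of the display
THRESHOLD = 0.4    # assumed visual-importance threshold

def output_value(weight, neighbourhood_intensity):
    """Return an enhanced output value for an important point, or None otherwise."""
    if weight < THRESHOLD:
        return None                       # point not important enough to enhance
    difference = JND * (1 + weight)       # difference grows with the weight, >= one JND
    value = neighbourhood_intensity + difference
    return int(max(0, min(255, value)))

print(output_value(0.2, 120))   # None: below the importance threshold
print(output_value(0.9, 120))   # enhanced value, more than one JND above 120
```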
-
Patent number: 10585563
Abstract: Techniques are disclosed for providing accessible reading modes in electronic computing devices. The user can transition between a manual reading mode and an automatic reading mode using a transition gesture. The manual reading mode may allow the user to navigate through content, share content with others, aurally sample and select content, adjust the reading rate, font, volume, or configure other reading and/or device settings. The automatic reading mode facilitates an electronic device reading automatically and continuously from a predetermined point with a selected voice font, volume, and rate, and only responds to a limited number of command gestures that may include scrolling to the next or previous sentence, paragraph, page, chapter, section or other content boundary. For each reading mode, earcons may guide the selection and/or navigation techniques, indicate content boundaries, confirm user actions or selections, or otherwise provide an intuitive and accessible user experience.
Type: Grant
Filed: May 22, 2017
Date of Patent: March 10, 2020
Assignee: Nook Digital, LLC
Inventors: Harold E. Cohn, Luis D. Mosquera, Matthew Pallakoff
-
Patent number: 10565898
Abstract: One embodiment of a system for presenting audio and tactile representations of visual and non-visual items includes obtaining (1) items, acquiring (2) a primary item and acquiring (3) secondary items according to criteria, and processing (4) the acquired items into the form of categorically-perceived audio and/or tactile effects (for example speech sounds), the primary and secondary items being distinguishable via categorically-distinct effects such as echo, reverberation, voice character, tactile effects, and stereophonic and/or tactile location. The resultant effects are typically presented (6) to an auditory and/or tactile display, allowing people to have their focus of attention directed to primary items while simultaneously being made aware of secondary items. The magnitude of the effects can relate to the values of certain properties of the items (5).
Type: Grant
Filed: June 19, 2017
Date of Patent: February 18, 2020
Inventor: David Charles Dewhurst
-
Patent number: 10549198
Abstract: A method and system for verifying a client device's location in a parallel reality game hosted by a server. The client transmits its location to the server and receives verification instructions comprising a landmark and a verification pathway. The client prompts a player to capture image data of the landmark and, in response, receives a first set of image data of the landmark from an initial perspective. The client determines whether the first set of image data matches the landmark before prompting the player to move along the verification pathway while capturing image data. The client receives a second set of image data of the landmark from a moving perspective. The client determines whether the second set of image data matches an expected change in perspective of the landmark. Upon completion of the verification instructions, the client confirms the client's location to the server.
Type: Grant
Filed: October 30, 2018
Date of Patent: February 4, 2020
Assignee: Niantic, Inc.
Inventor: Hansong Zhang
-
Patent number: 10540446
Abstract: A personality model is created for a population and used as an input to a text generation system. Alternative texts are created based upon the emotional effect of the generated text. Certain words or phrases are “pinned” in the output, reducing the variability of the generated text so as to preserve required information content, and a number of tests provide input to a discriminator network so that proposed outputs match an outside objective regarding the information content, emotional affect, and grammatical acceptability. A feedback loop provides new “ground truth” data points for refining the personality model and associated generated text.
Type: Grant
Filed: January 31, 2018
Date of Patent: January 21, 2020
Assignee: JUNGLE DISK, L.L.C.
Inventor: Michael DeFelice
-
Patent number: 10510248
Abstract: An auxiliary identification device for indicator objects and an auxiliary identification and display method therefor are provided. The auxiliary identification device includes a camera module, a controller, and a display. The camera module is configured to obtain a video in a first direction. The controller is coupled to the camera module and is configured to capture and identify multiple indicator objects in the video. Each of the indicator objects includes indication information. The controller sorts the indicator objects to determine a priority display order of the indicator objects, and further generates a display image signal according to the priority display order. The display is coupled to the controller and is configured to sequentially display the indicator objects according to the display image signal.
Type: Grant
Filed: December 29, 2017
Date of Patent: December 17, 2019
Assignee: Wistron Corporation
Inventors: Chia-Chang Hou, Yu-Yen Chen
-
Patent number: 10436593
Abstract: An augmented reality assistance system can include a wearable augmented reality device. The augmented reality device can include an imaging device, a speaker, a microphone, and one or more sensors. The system can also include a server communicably coupled to the wearable augmented reality device. The server can include processing circuitry configured to receive location information via at least one of the wearable augmented reality device, the one or more sensors, and the microphone, determine amplified location information based on the location information, receive image data from the imaging device, perform a web crawl of the image data, determine a navigation recommendation based on the amplified location information and the web crawl of the image data, and output the amplified location information and the navigation recommendation via the speaker.
Type: Grant
Filed: November 8, 2016
Date of Patent: October 8, 2019
Inventor: Reem Jafar Alataas
-
Patent number: 10387114
Abstract: The present invention relates to a system for a visually impaired user comprising a micro camera coupled to eyewear, proximity sensors coupled to wearable objects, and a handheld electronic device coupled to both the micro camera and the proximity sensors and to a database. The visually impaired user instructs the micro camera and proximity sensors through the handheld electronic device to capture data about proximally placed objects, places or people. The captured data from both the micro camera and the proximity sensors provides complete information about the captured object, place or people and is transmitted to the database of the handheld electronic device. This captured data is then processed by the database, and the relevant audio output corresponding to the captured data is transmitted to the ear wear or the speaker of the handheld electronic device.
Type: Grant
Filed: September 16, 2018
Date of Patent: August 20, 2019
Inventor: Manouchehr Shahbaz
-
Patent number: 10296178
Abstract: A system and methods for facilitation of user interactions with an electronic device. A number of user interface methods are described and may be used alone or in combination with one another to present an enhanced interface to a user. A method of providing user interaction using a compact status indicator is described. A method for providing a virtual scroll wheel to a user for interaction with content sets is described. A method for allowing a user to dynamically modify a scalable user interface is described. A method for providing gesture based input to a user via a virtual gesture pad is described. A method of providing an interactive graphic search query interface is described. A method for indicating and selecting available content type is described.
Type: Grant
Filed: December 15, 2010
Date of Patent: May 21, 2019
Assignee: Universal Electronics, Inc.
Inventors: Christopher Chambers, Wayne Scott, Cheryl Scott, Allen Yuh, Paul D. Arling
-
Patent number: 10282057
Abstract: Image editing on a wearable device includes a system which obtains sensor data via the wearable device. The sensor data includes a representation of hand movement, head movement or voice command associated with a user. The system executes an application for editing an image based on the obtained sensor data. The system provides for display a list of image adjustment types associated with the application. The system selects an image adjustment type based on one or more of the hand movement, the head movement or the voice command. The system provides for display a prompt having options to adjust a property of the selected image adjustment type. The system selects one of the options included in the prompt. The system modifies an image based on the selected option. The system then provides the modified image for storage in a data structure of a memory unit in the wearable device.
Type: Grant
Filed: July 28, 2015
Date of Patent: May 7, 2019
Assignee: Google LLC
Inventors: Thomas Binder, Ronald Frank Wotzlaw
-
Patent number: 10149101
Abstract: A method for reminding a user to take a weather-related object with him or her when going outside, applied to an electronic device, includes obtaining weather information at the present time or for one or more predetermined future time points or periods. A reminder is given to the user according to the obtained weather information, wherein the reminder includes a weather-related object relevant to the obtained weather information.
Type: Grant
Filed: February 21, 2017
Date of Patent: December 4, 2018
Assignee: Chiun Mai Communication Systems, Inc.
Inventor: Cheng-Kuo Yang
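A toy sketch of the reminder rule, with an assumed keyword-to-object mapping; the device would pull real forecast data and pick the relevant reminder.

```python
REMINDERS = {"rain": "umbrella", "snow": "gloves", "sunny": "sunglasses"}

def reminder_for(forecast):
    """Return a reminder string for the first matching weather keyword, if any."""
    for keyword, weather_object in REMINDERS.items():
        if keyword in forecast.lower():
            return f"Don't forget your {weather_object}."
    return None

print(reminder_for("Light rain expected this afternoon"))  # umbrella reminder
```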
-
Patent number: 10146406
Abstract: A meta guiding mode can be implemented on a computing device to alter or disable touch inputs that the computing device otherwise recognizes as input.
Type: Grant
Filed: September 30, 2014
Date of Patent: December 4, 2018
Assignee: RAKUTEN KOBO INC.
Inventors: Benjamin Landau, Vanessa Ghosh
-
Patent number: 10102856
Abstract: Ambient assistance is described. An assistant device can operate in an active experience mode in which a response to a user's speech is provided if it includes a hardware activation phrase. Based on characteristics of the environment, the mode can be adjusted to a passive experience mode in which a response to speech is provided even if it does not include the hardware activation phrase.
Type: Grant
Filed: May 18, 2017
Date of Patent: October 16, 2018
Assignee: ESSENTIAL PRODUCTS, INC.
Inventors: Mara Clair Segal, Manuel Roman, Dwipal Desai, Andrew E. Rubin
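The mode switch can be sketched as below; the activation phrase is a placeholder, and the passive-mode rule here simply answers any speech.

```python
ACTIVATION_PHRASE = "hey device"   # placeholder hardware activation phrase

def should_respond(speech, mode):
    """Active mode requires the activation phrase; passive mode responds to any speech."""
    if mode == "active":
        return speech.lower().startswith(ACTIVATION_PHRASE)
    return True

print(should_respond("hey device, what's the weather?", "active"))  # True
print(should_respond("what's the weather?", "active"))              # False
print(should_respond("what's the weather?", "passive"))             # True
```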
-
Patent number: 9955862
Abstract: Techniques related to vision correction are disclosed. The techniques involve establishing a visual model associated with a patient. The visual model includes data related to a quality of the patient's vision. A boundary is established as a function of the data associated with the visual model. The boundary is indicative of an area to be corrected within the patient's vision. A retinal map is established as a function of the boundary. An image from a camera associated with the patient is captured and corrections are applied to the image based on the retinal map to generate a corrected image. The corrected image is presented to the eye of the patient.
Type: Grant
Filed: March 17, 2016
Date of Patent: May 1, 2018
Assignee: Raytrx, LLC
Inventors: Richard C. Freeman, Michael H. Freeman, Mitchael C. Freeman, Chad Boss, Jordan Boss