Human Body Observation Patents (Class 348/77)
  • Patent number: 10621424
    Abstract: A multi-level state detecting system includes an image capture device for capturing an image of a subject; a site detecting unit for detecting a person in the image; a face recognition unit for detecting a face in the image via a face recognition database; a multi-level state identification unit for determining a corresponding state and probability of the subject; a neural network prediction model database storing a trained neural network prediction model associated with state classifications, the multi-level state identification unit identifying a corresponding state of the subject when the face recognition unit does not detect the face; and a multi-level state updating unit for generating a final state according to a current state and at least one previous state received from the multi-level state identification unit.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: April 14, 2020
    Assignee: Wistron Corporation
    Inventor: Zhao-Yuan Lin
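A minimal sketch of the final-state update described in the abstract of patent 10621424 above: the current state and its probability are combined with recent states by a probability-weighted vote. The history length, the vote, and all names are illustrative assumptions, not the patented implementation.

```python
from collections import Counter, deque

def make_state_updater(history_len=5):
    """Return a closure that smooths per-frame (state, probability) estimates."""
    history = deque(maxlen=history_len)  # current state plus recent previous states

    def update_state(current_state, probability):
        history.append((current_state, probability))
        # Weight each observed state by its reported probability and pick the
        # highest-scoring state as the final (smoothed) state.
        scores = Counter()
        for state, prob in history:
            scores[state] += prob
        final_state, _ = scores.most_common(1)[0]
        return final_state

    return update_state

# A brief "sleeping" misdetection is outvoted by the surrounding frames.
update = make_state_updater()
for state, prob in [("awake", 0.9), ("awake", 0.8), ("sleeping", 0.6), ("awake", 0.7)]:
    final = update(state, prob)
print(final)  # -> awake
```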
  • Patent number: 10616560
    Abstract: A system includes a first scanner having an inflatable membrane configured to be inflated with a medium to conform an exterior surface of the inflatable membrane to an interior shape of a cavity. The medium attenuates, at a first rate per unit length, light having a first optical wavelength, and attenuates, at a second rate per unit length, light having a second optical wavelength. An emitter is configured to generate light to illuminate the interior surface and a detector is configured to receive light from the interior surface. The scanner further includes a processor configured to generate a first electronic representation of the interior shape based on the light. A design computer is configured to modify the first electronic representation into a three-dimensional shape corresponding to at least a portion of the interior shape, and a fabricator is configured to fabricate, based at least on the modified first electronic representation, an earbud.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: April 7, 2020
    Assignee: LANTOS TECHNOLOGIES, INC.
    Inventors: Robert J. Fei, Michael L. Rishton, Jonathan Aguilar, Lydia Gregoret, Keith Guggenberger, Brett Zubiate, Brian J. Fligor, Xiaowei Chen, David J. Wilfert
  • Patent number: 10616425
    Abstract: An image forming apparatus of an embodiment includes a sensor unit, an operation identification unit, and an object detection unit. The sensor unit receives an electromagnetic wave and outputs information indicating a physical quantity of the received electromagnetic wave. The operation identification unit identifies an input operation to its own apparatus, based on the information indicating the physical quantity of the electromagnetic wave. The object detection unit detects the presence or absence of an object around its own apparatus, based on the information indicating the physical quantity of the electromagnetic wave.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: April 7, 2020
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Atsushi Ishihara
  • Patent number: 10607503
    Abstract: A blush guide device is provided to guide a user to draw a blush. The blush guide device includes an image capturing unit, a processing unit and a display unit. The image capturing unit captures a user face image. The processing unit receives the user face image, and obtains a plurality of face feature points according to the user face image. The processing unit performs calculation according to the face feature points to obtain at least one blush guide block. The display unit displays the user face image and the corresponding blush guide block, and guides the user to put on makeup to the blush guide block. The disclosure further provides a blush guide method adapted to the blush guide device.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: March 31, 2020
    Assignee: CAL-COMP BIG DATA, INC.
    Inventors: Shyh-Yong Shen, Min-Chang Chi, Cheng-Hsuan Tsai
  • Patent number: 10599933
    Abstract: A biometric image capturing apparatus includes an irradiating unit configured to irradiate a subject with light, a camera configured to capture an image of the subject, a polarizing unit which is disposed on an optical path between the irradiating unit and the camera and configured to allow passage of light in a first polarization direction and light in a second polarization direction, and a processor configured to perform computation using a first image captured by the camera according to the light in the first polarization direction that has passed through the polarizing unit and a second image captured by the camera according to the light in the second polarization direction that has passed through the polarizing unit, and to determine whether the subject is a biological body according to a result of the computation.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: March 24, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Soichi Hama, Takahiro Aoki, Isao Iwaguchi
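The determination step in patent 10599933 above compares images captured under two polarization directions; the abstract does not disclose the computation, so the sketch below substitutes a simple mean-absolute-difference statistic and an assumed threshold purely for illustration.

```python
import numpy as np

def is_biological(first_image: np.ndarray, second_image: np.ndarray,
                  threshold: float = 0.1) -> bool:
    """Decide liveness from two polarization images. Real skin scatters and
    depolarizes light beneath the surface, so the two images of a live subject
    tend to differ more than those of a flat copy; the statistic and the
    threshold here are illustrative assumptions."""
    a = first_image.astype(float) / 255.0
    b = second_image.astype(float) / 255.0
    return float(np.mean(np.abs(a - b))) >= threshold

# Synthetic arrays standing in for the first and second captured images.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (64, 64))
img2 = rng.integers(0, 256, (64, 64))
print(is_biological(img1, img2))
```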
  • Patent number: 10599143
    Abstract: A system may include one or more sensors configured to acquire data associated with a driver of a vehicle and a processor. The processor may receive the data and determine whether the data is within baseline data associated with expected behavior of the driver. The processor may then control one or more operations of the vehicle in response to the data being outside the baseline data.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: March 24, 2020
    Assignee: United Services Automobile Association (USAA)
    Inventors: Kade L. Scott, Benjamin D. Ethington, Richard Daniel Graham, Matthew T. Flachsbart
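A compact illustration of the baseline check described in patent 10599143 above: sensor data is compared against ranges representing expected driver behavior, and a vehicle operation is adjusted only when the data falls outside them. The keys, ranges, and response are illustrative assumptions.

```python
def within_baseline(sample: dict, baseline: dict) -> bool:
    """True when every measured value falls inside its baseline range."""
    return all(lo <= sample[key] <= hi for key, (lo, hi) in baseline.items())

# Baseline ranges for expected driver behavior (illustrative values only).
baseline = {"heart_rate_bpm": (50, 110), "gaze_off_road_s": (0.0, 2.0)}
sample = {"heart_rate_bpm": 135, "gaze_off_road_s": 3.5}

if not within_baseline(sample, baseline):
    # In the described system this is where one or more vehicle operations
    # would be controlled in response to out-of-baseline data.
    print("out of baseline: adjust vehicle operation")
```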
  • Patent number: 10600204
    Abstract: A system and a method are described for preventing pressure ulcers in a medical care environment by monitoring adherence to a pressure ulcer prevention protocol. The method includes identifying a first subset of pixels, from a plurality of potential locations within a field of view of a camera, as representing a bed and/or a seating platform. The method also includes identifying a second subset of pixels within the field of view of the camera as representing an object (e.g., a subject, such as a patient, medical personnel; bed; chair; patient tray; medical equipment; etc.) proximal to the bed and/or seating platform. The method also includes determining an orientation of the object with respect to the bed and/or seating platform, and determining changes in the orientation and/or position of the object over a period of time. In implementations, the method further includes issuing an electronic communication alert based upon the determined orientation and/or position of the object over the period of time.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: March 24, 2020
    Assignee: Ocuvera
    Inventors: Benjamin D. Rush, Joshua M. Brown-Kramer, Lucas A. Sabalka, Brian S. Gansemer, Clay T. Upton, Paul B. Bauer, Andrew R. Unterseher, Benjamin J. Origas, Douglas W. Durham
  • Patent number: 10593044
    Abstract: An information processing apparatus includes a depth image acquisition unit configured to acquire a depth image from a measurement apparatus that has measured a distance to an object, an image acquisition unit configured to acquire a captured image from an image capturing apparatus that has captured an image of the object, and an estimation unit configured to estimate a shape of the object based on the depth image and the captured image. The estimation unit acquires information about a contour of the object from the captured image, corrects the information about the contour based on the depth image, and estimates the shape of the object based on the corrected information about the contour.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: March 17, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Keita Masuda
  • Patent number: 10592728
    Abstract: A method for enhancing user liveness detection is provided that includes calculating, by a computing device, parameters for each frame in a video of captured face biometric data. Each parameter results from movement of at least one of the computing device and the biometric data during capture of the biometric data. The method also includes creating a signal for each parameter and calculating a similarity score. The similarity score indicates the similarity between the signals. Moreover, the method includes determining the user is live when the similarity score is at least equal to a threshold score.
    Type: Grant
    Filed: November 29, 2016
    Date of Patent: March 17, 2020
    Assignee: DAON HOLDINGS LIMITED
    Inventor: Mircea Ionita
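The core decision in patent 10592728 above is a similarity score between per-frame parameter signals, compared against a threshold. The abstract does not name the similarity measure; the sketch below uses Pearson correlation and an assumed threshold as stand-ins.

```python
import numpy as np

def similarity_score(signal_a, signal_b) -> float:
    """Pearson correlation between two per-frame parameter signals."""
    return float(np.corrcoef(np.asarray(signal_a, float),
                             np.asarray(signal_b, float))[0, 1])

def is_live(signal_a, signal_b, threshold: float = 0.7) -> bool:
    # The user is judged live when the signals move together closely enough.
    return similarity_score(signal_a, signal_b) >= threshold

# Device motion and apparent face motion that track each other frame by frame.
device_motion = [0.0, 0.2, 0.5, 0.4, 0.1, -0.2]
face_motion   = [0.0, 0.18, 0.48, 0.42, 0.12, -0.19]
print(is_live(device_motion, face_motion))  # -> True
```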
  • Patent number: 10587822
    Abstract: An audio-video distribution system includes a base station 12 having inputs and outputs for connection of audio-video sources and display screens respectively. In most embodiments, any audio-video source will be selectable for viewing on any display screen. Infra-red transmitters 14a,b,c,d are provided for controlling connected source devices and connected display screens. An application running on a mobile computer 16 can cause the infra-red transmitters 14a,b,c,d to transmit infra-red commands in response to user input. This is achieved with very low latency by loading machine code instructions into a secondary computer, and causing the secondary computer to execute those instructions in response to user input on the mobile computer 16. The machine code instructions can be stored on the mobile computer.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: March 10, 2020
    Assignee: HD Connectivity Ltd.
    Inventor: Dillan Pattni
  • Patent number: 10586332
    Abstract: Embodiments of the present disclosure provide a software program that displays both a volume as images and segmentation results as surface models in 3D. Multiple 2D slices are extracted from the 3D volume. The 2D slices may be interactively rotated by the user to best follow an oblique structure. The 2D slices can “cut” the surface models from the segmentation so that only half of the models are displayed. The border curves resulting from the cuts are displayed in the 2D slices. The user may click a point on the surface model to designate a landmark point. The corresponding location of the point is highlighted in the 2D slices. A 2D slice can be reoriented such that the line lies in the slice. The user can then further evaluate or refine the landmark points based on both surface and image information.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: March 10, 2020
    Assignee: Smith & Nephew, Inc.
    Inventors: Yangqiu Hu, Gaetano Calabrese
  • Patent number: 10580265
    Abstract: Various arrangements for handling a potential security situation using a home automation system are presented. During a defined active monitoring time period, a biometric measurement of a user may be received. The biometric measurement of the user may exceed a defined threshold value for the biometric measurement. A security alert from a home automation device in wireless communication with the home automation host system may be received. If the biometric measurement occurs within a threshold time period of the security alert being received, a security response action may be performed.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: March 3, 2020
    Assignee: DISH Ukraine L.L.C.
    Inventor: Zane Eaton
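The triggering condition in patent 10580265 above pairs an out-of-range biometric measurement with a security alert received close in time. A minimal sketch, with an assumed five-minute window and threshold:

```python
from datetime import datetime, timedelta

def should_respond(biometric_value: float, biometric_time: datetime,
                   biometric_threshold: float, alert_time: datetime,
                   window: timedelta = timedelta(minutes=5)) -> bool:
    """Perform a security response only when the biometric measurement exceeds
    its threshold and falls within the time window around the alert."""
    exceeded = biometric_value > biometric_threshold
    close_in_time = abs(alert_time - biometric_time) <= window
    return exceeded and close_in_time

now = datetime.now()
print(should_respond(biometric_value=120.0, biometric_time=now,
                     biometric_threshold=100.0,
                     alert_time=now + timedelta(minutes=2)))  # -> True
```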
  • Patent number: 10574819
    Abstract: Techniques are described for analyzing communications sent during a service session to provide (e.g., customer) service on a social media channel, the analysis to determine a quality of service provided during the session. Natural language processing, lexical analysis, pattern matching, or other types of analysis may be used to determine an empathy factor and a conversational factor for communications between a service representative (SR) and a user during a session. The factors may be combined with other factors, such as a timely acknowledgement factor and a timely resolution factor, to generate a response quality index (RQI) for the session. Based on the RQI, feedback information may be generated and sent to the SR. In some implementations, the communications may be analyzed and feedback information sent to the SR in real time during the session, to dynamically improve service quality.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: February 25, 2020
    Assignee: United Services Automobile Association (USAA)
    Inventors: Renee Lynette Horne, Julie Finlay
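Patent 10574819 above combines an empathy factor, a conversational factor, a timely-acknowledgement factor, and a timely-resolution factor into a response quality index. The weighted average below is one plausible combination; the weights and the formula are assumptions, not the disclosed method.

```python
def response_quality_index(empathy: float, conversational: float,
                           timely_ack: float, timely_resolution: float,
                           weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Combine per-session factors (each in [0, 1]) into a single RQI score."""
    factors = (empathy, conversational, timely_ack, timely_resolution)
    return sum(w * f for w, f in zip(weights, factors))

rqi = response_quality_index(empathy=0.8, conversational=0.9,
                             timely_ack=1.0, timely_resolution=0.5)
print(f"RQI = {rqi:.2f}")  # feedback to the service representative can key off this
```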
  • Patent number: 10561051
    Abstract: A movement error detection apparatus includes a movable region inside mark made of a projected image arranged in a movable region of a movable conveyor of each of first and second substrate conveyance devices; a pair of movable region outside marks respectively provided on both sides of the movable region inside mark and arranged at a position outside the movable region of the movable conveyor and outward of a substrate at a work position; an imaging device which moves together with a mounting head; and a control device that selects two marks, among the marks, which are positioned on both outer sides of the substrate and are closest to the substrate, causes the imaging device to image the two marks, and obtains a movement error of the mounting head on the basis of the two mark images.
    Type: Grant
    Filed: January 8, 2016
    Date of Patent: February 11, 2020
    Assignee: YAMAHA HATSUDOKI KABUSHIKI KAISHA
    Inventors: Tarou Kawai, Ryousuke Nakamura, Kota Ito
  • Patent number: 10554871
    Abstract: According to one aspect of the present disclosure, an apparatus for taking pictures or videos triggered by pre-selected changes in a user's biorhythms is provided. The apparatus includes at least one biosensor configured to detect at least one biorhythm of a user and to generate a biorhythm signal based on the detected at least one biorhythm of the user, a camera for taking one or more pictures and/or videos, and at least one processor configured to receive the biorhythm signal from the at least one biosensor, the processor configured to compare the biorhythm signal to a biorhythm threshold and, in response to the biorhythm signal exceeding the biorhythm threshold, to simultaneously trigger the camera to take one or more pictures or videos. Further aspects and methods are also disclosed.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: February 4, 2020
    Assignee: Kristin Blume Slater
    Inventor: Kristin Blume Slater
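The trigger logic of patent 10554871 above reduces to comparing a biosensor signal against a threshold and firing the camera when it is exceeded. A minimal sketch with an assumed callback interface:

```python
def process_biorhythm_signal(signal_value: float, threshold: float,
                             trigger_camera) -> bool:
    """Compare a biosensor reading to its threshold; trigger the camera on exceedance."""
    if signal_value > threshold:
        trigger_camera()
        return True
    return False

# Example: a heart-rate spike above 120 bpm triggers a capture.
captured = []
process_biorhythm_signal(signal_value=135.0, threshold=120.0,
                         trigger_camera=lambda: captured.append("photo"))
print(captured)  # -> ['photo']
```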
  • Patent number: 10548482
    Abstract: A medical imaging apparatus includes a gantry with a tunnel-shaped opening, an examination region and a holder. In an embodiment, the holder includes a light source for illuminating the examination region and the holder is arranged on the gantry projecting above the tunnel-shaped opening.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: February 4, 2020
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Daniel Lerch, Carsten Thierfelder, Andreas Wiesinger
  • Patent number: 10536588
    Abstract: An image forming apparatus includes a sensor, an operation section, and a controller. The sensor detects a sensor target. The operation section receives an operation instruction. The controller executes a setting process for controlling the sensor. The setting process is a process to adjust a sensitivity of the sensor according to detection precision. The detection precision is a relationship, in a first unit period, between the number of times the sensor detects the sensor target and the number of times the operation section receives the operation instruction.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: January 14, 2020
    Assignee: KYOCERA Document Solutions Inc.
    Inventors: Yukihiro Shibata, Akira Ohashi, Koji Tagaki, Satoshi Sato, Yuki Yamamoto, Yusuke Okazaki
  • Patent number: 10534952
    Abstract: A system and method for providing a matching process that determines a conforming pattern match to a pattern-under-test from a set of matching patterns in a pattern storage. A matching process tests every template against the pattern-under-test and resolves a conforming match condition when multiple matches to the pattern-under-test are found. In some embodiments, a matcher engine using variable match thresholds may be used to differentiate among matching templates to identify a conforming-matching template with respect to the pattern-under-test.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: January 14, 2020
    Assignee: IDEX ASA
    Inventor: Roger A. Bauchspies
  • Patent number: 10529102
    Abstract: An acquiring unit acquires a face image. A display control unit displays the face image acquired by the acquiring unit on a display unit. A detection unit detects positions on the face image designated by a pointer 20. A drawing unit draws makeup objects 34 and 35 on the face image along the trajectory specified by sequentially connecting the positions detected by the detection unit. An overlap control unit increases a density of an overlapping portion 36 between the makeup object 34 and the makeup object 35. A correction unit corrects the density of the overlapping portion 36 at a position corresponding to a point P3 when the trajectory has a point P3 at which the direction changes by an amount equal to or greater than a threshold value.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: January 7, 2020
    Assignee: OPTiM Corporation
    Inventor: Shunji Sugaya
  • Patent number: 10523855
    Abstract: A dual sensor imaging system is described for visible and infrared light. One example includes a first image sensor to detect the luminance of a scene, a second image sensor to detect the visible light chrominance of the scene and to detect an infrared image of the scene, and an image processor to receive the luminance from the first image sensor and the chrominance from the second sensor to generate a visible light image of the scene, the image processor to further receive the infrared image from the second image sensor and to extract the infrared image from the visible light chrominance of the scene.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: December 31, 2019
    Assignee: INTEL CORPORATION
    Inventor: Richmond Hicks
  • Patent number: 10517483
    Abstract: A fluorescence imaging device detects fluorescence in parts of the visible and invisible spectrum, and projects the fluorescence image directly on the human body, as well as on a monitor, with improved sensitivity, video frame rate and depth of focus, and enhanced capabilities of detecting distribution and properties of multiple fluorophores. Direct projection of three-dimensional visible representations of fluorescence on three-dimensional body areas advantageously permits viewing of it during surgical procedures, including during cancer removal, reconstructive surgery and wound care, etc. A NIR laser and a human visible laser (HVL) are aligned coaxially and scanned over the operating field of view. When the NIR laser passes over the area where the fluorescent dye is present, it energizes the dye, which emits at a shifted NIR frequency detected by a photodiode. The HVL is turned on when emission is detected, providing a visual indication of those positions.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: December 31, 2019
    Assignee: AccuVein, INC.
    Inventors: Fred Wood, Dmitry Yavid, Joe Zott, Ron Goldman
  • Patent number: 10515102
    Abstract: Data is received that is derived from a plurality of geo-spatial sensors that respectively generate data characterizing a plurality of sources within a zone of interest. The data includes a series of time-stamped frames for each of the sensors, and at least one of the sources has two or more associated sensors. The received data can be sorted and processed, for each sensor on a sensor-by-sensor basis, using a sliding window. The sorted and processed data can then be correlated and written into a data storage application. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: December 24, 2019
    Assignee: SAP SE
    Inventors: Torsten Abraham, Florian Foebel, Boris Gruschko, Gerrit Simon Kazmaier, Christian Lahmer, Nico Licht, Marcus Lieberenz, Lars Volker
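Patent 10515102 above sorts and processes time-stamped frames per sensor with a sliding window before correlation and storage. The sketch below uses a moving average as the window computation; the frame layout, window size, and reduction are illustrative assumptions.

```python
from collections import defaultdict, deque

def process_frames(frames, window_size=4):
    """Group frames by sensor, sort each group by timestamp, and apply a
    sliding-window moving average per sensor."""
    per_sensor = defaultdict(list)
    for frame in frames:                        # frame: (sensor_id, timestamp, value)
        per_sensor[frame[0]].append(frame)

    results = []
    for sensor_id, sensor_frames in per_sensor.items():
        sensor_frames.sort(key=lambda f: f[1])  # sensor-by-sensor time ordering
        window = deque(maxlen=window_size)
        for _, ts, value in sensor_frames:
            window.append(value)
            results.append((sensor_id, ts, sum(window) / len(window)))
    return results                              # would then be correlated and stored

frames = [("s1", 2, 10.0), ("s1", 1, 8.0), ("s2", 1, 3.0), ("s1", 3, 12.0)]
print(process_frames(frames))
```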
  • Patent number: 10509967
    Abstract: An apparatus for detecting when a subject has exited an item of furniture is provided. The apparatus comprises a camera adapted to be arranged, when in use, below the underside of an item of furniture having an upper side for supporting a subject, the underside being opposite to the upper side, and a processing unit. The camera captures sequential images that do not include the upper side of the item of furniture, the images having a foreground corresponding to a region below the underside of the item of furniture and a background corresponding to a region adjacent the item. The processing unit receives images from the camera; detects, for each image, an edge corresponding to an edge of the item; detects the appearance of a feature contiguous with the edge; monitors how a detected feature changes over a time period; determines whether a change to the detected feature satisfies at least one predefined criterion for a subject exit event; and outputs a signal based on the determining.
    Type: Grant
    Filed: August 10, 2016
    Date of Patent: December 17, 2019
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Ihor Olehovych Kirenko, Mukul Julius Rocque
  • Patent number: 10503986
    Abstract: A passenger information detection device includes: an acquisition unit that acquires an image imaged by an imaging device that is provided in an interior space of a vehicle to image a passenger seated on a seat and a detection value of a load sensor provided on the seat; a first calculation unit that calculates first information that is information on a face of the passenger from the image; and a second calculation unit that calculates second information that is information on a body size of the passenger based on the first information and the detection value.
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: December 10, 2019
    Assignee: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Hiroyuki Fujii, Shin Osuga
  • Patent number: 10495882
    Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes multiple image capture devices positioned within and on the HMD to capture portions of a face of a user wearing the HMD. Multiple image capture devices are included within the HMD to capture different portions of the face of the user within the HMD, and one or more other image capture devices are positioned to capture portions of the face of the user external to the HMD. Captured images from various image capture devices may be communicated to a console or a controller that generates a graphical representation of the user's face based on the captured images.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: December 3, 2019
    Assignee: Facebook Technologies, LLC
    Inventors: Hernan Badino, Yaser Sheikh, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch, Jason Saragih, Shih-En Wei
  • Patent number: 10492873
    Abstract: A system and method for medical spatial orientation is disclosed including a robotic arm; an image sensor coupled to the robotic arm; a visualization headset; and a computing subsystem coupled to the robotic arm and the visualization headset. The computing subsystem includes a processor; a non-transitory machine-readable medium communicatively coupled to the processor; and instructions stored on the non-transitory machine-readable medium that, when loaded and executed by the processor, cause the processor to create a first image of a first portion of a surgical field at an image sensor coupled to the robotic arm; detect a movement of a visualization headset indicating a second portion of the surgical field; determine a movement of the robotic arm to position the image sensor at the second portion of the surgical field; move, based on the determination, the robotic arm; and create a second image of the second portion of the surgical field.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: December 3, 2019
    Assignee: Novartis AG
    Inventors: Paul Hallen, Joshua Anderson
  • Patent number: 10477785
    Abstract: Proactively identifying and interdicting transport of commodities associated with illicit nuclear materials and nuclear weapons shielded by high Z-number materials, such as lead, can help ensure effective nuclear nonproliferation. In an embodiment, a method for imaging an object on a surface includes exciting a surface with ultrasonic excitation from an ultrasonic transmitter having an ultrasonic transducer in contact with the surface. The method further includes imaging, at a processor, a two-dimensional representation of the object acoustically coupled to the surface based on the ultrasonic reflections received at an ultrasonic receiver via a receiving transducer in contact with the surface. This method can complement existing x-ray screening systems to increase the odds of detecting radiological materials.
    Type: Grant
    Filed: October 21, 2015
    Date of Patent: November 19, 2019
    Assignee: Northeastern University
    Inventors: Jose A. Martinez-Lorenzo, Yuri Alvarez Lopez
  • Patent number: 10466474
    Abstract: Embodiments for facilitating communication between users sharing a visual cue. A gaze location of a first user is determined as an approximation in three dimensional space. The gaze location is transmitted to a second user. A gaze focus of the first user based on the determined gaze location is displayed to the second user.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: November 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David B. Lection, Sarbajit K. Rakshit, Mark B. Stevens, John D. Wilson
  • Patent number: 10446052
    Abstract: A lip gloss guide device including an image capturing unit, a processing unit and a display unit is provided to guide a user to draw a lip gloss. The image capturing unit captures a face image of the user, where the face image includes a lip of the user. The processing unit receives the face image, and obtains a plurality of lip feature points according to the face image. The processing unit performs calculation according to the lip feature points and a predetermined ratio between an upper lip and a lower lip to obtain an upper lip gloss guide block and a lower lip gloss guide block. The display unit displays a lip image and the corresponding upper lip gloss guide block and lower lip gloss guide block, and guides the user to put on makeup to the upper lip gloss guide block and the lower lip gloss guide block.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: October 15, 2019
    Assignee: CAL-COMP BIG DATA, INC.
    Inventors: Shyh-Yong Shen, Min-Chang Chi, Cheng-Hsuan Tsai
  • Patent number: 10433095
    Abstract: An image of a pinna is captured. Based on the image of the pinna, a non-linear transfer function is determined which characterizes how sound is transformed at the pinna. A signal is output indicative of one or more audio cues to facilitate spatial localization of sound via the pinna, where the one or more audio cues is based on the non-linear transfer function.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: October 1, 2019
    Assignee: EmbodyVR, Inc.
    Inventor: Kapil Jain
  • Patent number: 10432911
    Abstract: A method for identifying contamination upon a lens of a stereoscopic camera is disclosed. The stereoscopic camera is arranged such that it has the same capturing area over time, and is provided with a first camera providing first images of said capturing area and a second camera providing second images of said capturing area. The first and second images are divided into at least one evaluation area correspondingly located in the respective images. A traffic surveillance system is also disclosed in which contamination upon a lens of a stereoscopic camera is identified according to said method.
    Type: Grant
    Filed: July 1, 2014
    Date of Patent: October 1, 2019
    Assignee: Kapsch TrafficCom AB
    Inventor: Björn Crona
  • Patent number: 10416143
    Abstract: A computing system performs a method of determining cumulative exposure to a gas. The computing system receives data that correspond to local concentrations of a gas from a plurality of stationary gas sensors in a home. Respective stationary gas sensors are located at respective fixed locations in respective rooms in the home. The computing system also receives data that correspond to occupancy of the home, including occupancy by a first occupant. The computing system determines a cumulative exposure of the first occupant to the gas in the home, based at least in part on the received data that correspond to local concentrations of the gas and the received data that correspond to occupancy of the home. The computing system performs and/or sends instructions to perform one or more predefined operations in accordance with the determined cumulative exposure of the first occupant.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: September 17, 2019
    Assignee: GOOGLE LLC
    Inventors: David Sloo, Yoky Matsuoka
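The cumulative-exposure computation in patent 10416143 above amounts to integrating each room's gas concentration over the intervals during which the occupant was present there. A minimal sketch; the data layouts and units (ppm-hours) are assumptions.

```python
def cumulative_exposure(samples, occupancy):
    """samples: (room, start_hour, end_hour, ppm); occupancy: (room, start_hour, end_hour).
    Returns the occupant's total exposure in ppm-hours."""
    total_ppm_hours = 0.0
    for room, s_start, s_end, ppm in samples:
        for o_room, o_start, o_end in occupancy:
            if room != o_room:
                continue
            overlap = min(s_end, o_end) - max(s_start, o_start)
            if overlap > 0:
                total_ppm_hours += ppm * overlap
    return total_ppm_hours

samples = [("kitchen", 8.0, 9.0, 12.0), ("bedroom", 22.0, 23.0, 4.0)]
occupancy = [("kitchen", 8.5, 9.5), ("bedroom", 21.0, 24.0)]
print(cumulative_exposure(samples, occupancy))  # 12*0.5 + 4*1.0 = 10.0 ppm-hours
```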
  • Patent number: 10383532
    Abstract: Provided is a method of measuring a heart rate in an electronic device. The method includes obtaining a face image of a subject by using the electronic device; determining a target region within the obtained face image; analyzing color information within the determined target region; and determining a heart rate of the subject on the basis of the analyzed color information. The analyzing of color information is performed on the basis of a differential arithmetic operation between a first color component and a second color component within the target region.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: August 20, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Toshihiro Kitajima, Edwardo Murakami, Sang-on Choi
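Patent 10383532 above derives heart rate from a differential between two color components of the target face region. The sketch below takes per-frame mean RGB values, forms a green-minus-red differential, and reads the pulse off the dominant spectral peak; the component choice, frequency band, and FFT peak pick are illustrative assumptions.

```python
import numpy as np

def estimate_heart_rate(rgb_means, fps: float) -> float:
    """rgb_means: per-frame mean (R, G, B) values of the target region."""
    frames = np.asarray(rgb_means, dtype=float)       # shape (n_frames, 3)
    signal = frames[:, 1] - frames[:, 0]              # green-minus-red differential
    signal = signal - signal.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)            # plausible pulse band: 42-240 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                           # Hz -> beats per minute

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse in the green channel.
t = np.arange(300) / 30.0
rgb = np.stack([np.full_like(t, 90.0),
                100.0 + 2.0 * np.sin(2 * np.pi * 1.2 * t),
                np.full_like(t, 80.0)], axis=1)
print(round(estimate_heart_rate(rgb, fps=30.0)))      # -> 72
```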
  • Patent number: 10382556
    Abstract: Providing a registry of sensor devices may comprise obtaining a device, determining one or more information types returned by the device, determining one or more communication protocols used by the device for transmitting information, determining one or more encoding schemes used by the device to format the information, adding the device to the registry of sensor devices including at least the one or more information types, the one or more communication protocols and the one or more encoding schemes, and allowing access to the registry of sensor devices.
    Type: Grant
    Filed: August 5, 2013
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Wilfredo Ferre, Dimitri Kanevsky, Peter K. Malkin, Marc P. Yvon
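A registry entry as described in patent 10382556 above pairs a device with its information types, communication protocols, and encoding schemes. The data structure below is a minimal sketch; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RegistryEntry:
    """One registered sensor device: what it reports, how it talks, how it encodes."""
    device_id: str
    information_types: List[str]
    protocols: List[str]
    encodings: List[str]

class SensorRegistry:
    def __init__(self) -> None:
        self._entries: Dict[str, RegistryEntry] = {}

    def add(self, entry: RegistryEntry) -> None:
        self._entries[entry.device_id] = entry      # adding the device to the registry

    def lookup(self, device_id: str) -> RegistryEntry:
        return self._entries[device_id]             # allowing access to the registry

registry = SensorRegistry()
registry.add(RegistryEntry("thermo-01", ["temperature"], ["MQTT"], ["JSON"]))
print(registry.lookup("thermo-01").protocols)       # -> ['MQTT']
```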
  • Patent number: 10380425
    Abstract: A wearable apparatus is provided for capturing and processing images from an environment of a user. In one implementation, the wearable apparatus is used for identifying a contextual situation related to a wearer. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from an environment of the wearer. The wearable apparatus further includes at least one processing device. The at least one processing device is programmed to analyze the plurality of images to identify the contextual situation related to the wearer; determine information associated with the contextual situation; and cause the transmitter to transmit the determined information to a device paired with the wearable apparatus to cause the paired device to provide at least one alert to the wearer based on the determined information associated with the contextual situation.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: August 13, 2019
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 10378882
    Abstract: Examples of a light field metrology system for use with a display are disclosed. The light field metrology may capture images of a projected light field, and determine focus depths (or lateral focus positions) for various regions of the light field using the captured images. The determined focus depths (or lateral positions) may then be compared with intended focus depths (or lateral positions), to quantify the imperfections of the display. Based on the measured imperfections, an appropriate error correction may be performed on the light field to correct for the measured imperfections. The display can be an optical display element in a head mounted display, for example, an optical display element capable of generating multiple depth planes or a light field display.
    Type: Grant
    Filed: November 2, 2016
    Date of Patent: August 13, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Ivan L. Yeoh, Lionel E. Edwin, Samuel A. Miller
  • Patent number: 10376318
    Abstract: Provided are a surgical assistance apparatus and related methods that make it possible to check more accurately, in advance of a surgical operation, the state that an affected part is to be in after the surgical operation has started. A surgical assistance apparatus includes an image processing unit. The image processing unit creates projection image data based on measurement data that has been obtained by measuring an affected part of a patient, and thus creates a projection image as an image that is specified by the projection image data. This projection image is an image that is to be projected onto the patient, and includes an image that shows a state that the affected part is to be in after a surgical operation has started.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: August 13, 2019
    Assignee: KYOCERA CORPORATION
    Inventors: Shinya Tsusaka, Masahiko Hashida
  • Patent number: 10366297
    Abstract: The technology disclosed relates to coordinating motion-capture of a hand by a network of motion-capture sensors having overlapping fields of view. In particular, it relates to designating a first sensor among three or more motion-capture sensors as having a master frame of reference, observing motion of a hand as it passes through overlapping fields of view of the respective motion-capture sensors, synchronizing capture of images of the hand within the overlapping fields of view by pairs of the motion-capture devices, and using the pairs of the hand images captured by the synchronized motion-capture devices to automatically calibrate the motion-capture sensors to the master frame of reference.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 30, 2019
    Assignee: Leap Motion, Inc.
    Inventor: David S. Holz
  • Patent number: 10354436
    Abstract: An image processing apparatus includes a processor including hardware, wherein the processor is configured to: construct first and second pieces of three-dimensional image data based on two different subject images received; arrange first and second three-dimensional shapes included in the first and second pieces of three-dimensional image data at positions corresponding to pieces of image pickup position information; calculate a deviation angle based on at least one of a degree of similarity between hues, a degree of similarity between textures and a degree of similarity between edges, and correct the deviation angle and arrange the first and second three-dimensional shapes in three-dimensional space to generate a three-dimensional shape image.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: July 16, 2019
    Assignee: OLYMPUS CORPORATION
    Inventors: Syunya Akimoto, Junichi Onishi
  • Patent number: 10339896
    Abstract: A head-mounted display apparatus (1) includes an image display part (10) configured to display an image, an optical unit (20, 30, 50) configured to lead the image to an eye of a wearer of the display apparatus (1), a light intensity detector (60) configured to detect light intensity (A) of external light, a dimmer filter (40) configured to vary transmittance to adjust the intensity (B) of the external light reaching the eye of the wearer, and a controller (70) configured to adjust the transmittance of the dimmer filter (40) and the light intensity (C) of a light source of the image display part (10) based on the light intensity (A) of the external light obtained by the light intensity detector (60).
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: July 2, 2019
    Assignee: Ricoh Company, Ltd.
    Inventors: Yuuma Usui, Shigenobu Hirano, Ikue Kawashima, Yasuo Katano, Aino Hasegawa, Atsushi Ohshima
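Patent 10339896 above adjusts both the dimmer filter's transmittance and the image light source from the measured external light intensity. The linear control laws below are illustrative assumptions only, not the controller disclosed in the patent.

```python
def adjust_display(external_lux: float, target_eye_lux: float,
                   max_source_level: float):
    """Pick a dimmer transmittance that brings external light at the eye near a
    target level, then scale the image light source against what gets through."""
    transmittance = min(1.0, max(0.05, target_eye_lux / max(external_lux, 1e-6)))
    through_lux = transmittance * external_lux      # external light reaching the eye
    source_level = min(max_source_level,
                       max_source_level * through_lux / max(target_eye_lux, 1e-6))
    return transmittance, source_level

# Bright outdoor scene: the filter darkens and the display runs at full power.
print(adjust_display(external_lux=20000.0, target_eye_lux=300.0, max_source_level=100.0))
# Dim room: the filter stays clear and the display is dimmed accordingly.
print(adjust_display(external_lux=60.0, target_eye_lux=300.0, max_source_level=100.0))
```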
  • Patent number: 10324563
    Abstract: Examples disclosed herein relate to identifying a target touch region of a touch-sensitive surface based on an image. Examples include detecting a touch input at a location of a touch-sensitive surface, capturing, with a camera, an image representing an object disposed between the camera and the touch-sensitive surface, identifying at least one target touch region of the touch-sensitive surface based on the image, and rejecting the detected touch input when the location of the detected touch input is not within any of the at least one identified target touch region of the touch-sensitive surface.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: June 18, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Daniel R Tretter, Jinman Kang, Kar Han Tan, Wei Hong, Bradley N Suggs, David Bradley Short, Otto Sievert
  • Patent number: 10311624
    Abstract: Presented herein are systems and methods configured to generate virtual entities representing real-world users. In some implementations, the systems and/or methods are configured to capture user appearance information with imaging devices and sensors, determine correspondence values conveying correspondences between the appearance of the user's body or user's head and individual ones of default body models and/or default head models, and modify a set of values defining a base body model and/or base head model based on the determined correspondence values and sets of base values defining the default body models and/or default head models. The base body model and/or base head model may be modified to model the appearance of the body and/or head of the user.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: June 4, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth Mitchell, Charles Malleson, Ivan Huerta Casado, Martin Klaudiny, Malgorzata Edyta Kosek
  • Patent number: 10296096
    Abstract: According to one embodiment, there is provided an operation recognition device comprising a computer including a hardware processor. The hardware processor is configured to acquire movement information associated with a movement of a user, and area information corresponding to a first operation section; determine, based at least in part on the movement information and the area information corresponding to the first operation section, an estimated position corresponding to a subsequent operation by the user; and determine whether the subsequent operation by the user is directed to the first operation section based at least in part on the estimated position corresponding to the subsequent operation.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: May 21, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Toshiaki Nakasu, Tsukasa Ike, Yasunobu Yamauchi
  • Patent number: 10275882
    Abstract: An observation apparatus includes an imaging unit, a driving mechanism, an operation circuit and a control circuit. The imaging unit generates image data by imaging a target object. The driving mechanism moves the imaging unit to change an imaging position of the target object. The operation circuit calculates information on imaging conditions of the imaging unit based on an auxiliary information calculation image, which is image data of auxiliary information calculation light. The control circuit controls the imaging performed by the imaging unit, using the information on the imaging conditions.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: April 30, 2019
    Assignee: Olympus Corporation
    Inventors: Shigeru Kato, Tsuyoshi Yaji, Osamu Nonaka
  • Patent number: 10275630
    Abstract: A fingerprint identification apparatus including a light-transmitting device, an image-capture device disposed opposite to the light-transmitting device, and a collimator disposed between the light-transmitting device and the image-capture device is provided. The collimator includes a plurality of light-shielding layers and a plurality of light-transmitting layers that are alternately stacked. Each of the light-shielding layers has a plurality of openings respectively corresponding to a plurality of pixel regions of the image-capture device. Openings of the plurality of light-shielding layers corresponding to one pixel region are arranged along an oblique direction. The oblique direction and a normal direction of a pressing surface of the light-transmitting device have an included angle θ, where 0° < θ < 90°.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: April 30, 2019
    Assignee: Gingy Technology Inc.
    Inventors: Chuck Chung, Jen-Chieh Wu
  • Patent number: 10257405
    Abstract: An automatic focus method includes: capturing image data in a focus area; processing the image data to obtain a contrast value curve via a contrast value algorithm; obtaining a contrast weight for each contrast value; calculating an adapted contrast value by multiplying each contrast value by its contrast weight; and adjusting the focus according to the adapted contrast value curve. An automatic focus system using the automatic focus method is further disclosed.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: April 9, 2019
    Assignee: NANNING FUGUI PRECISION INDUSTRIAL CO., LTD.
    Inventors: Pi-Lun Cheng, Hsuan-Kuei Huang
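The adapted-contrast step of patent 10257405 above multiplies each contrast value by its weight and focuses where the adapted curve peaks. How the weights are derived is not reproduced here; the values below are illustrative.

```python
def best_focus_position(contrast_curve, weights):
    """Return the index of the lens position with the largest adapted contrast."""
    adapted = [c * w for c, w in zip(contrast_curve, weights)]
    return max(range(len(adapted)), key=adapted.__getitem__)

# Contrast values measured at successive lens positions in the focus area, with
# weights that de-emphasize the noisy ends of the sweep (illustrative values).
contrast_curve = [10.0, 14.0, 22.0, 30.0, 26.0, 15.0]
weights        = [0.5, 0.8, 1.0, 1.0, 1.0, 0.5]
print(best_focus_position(contrast_curve, weights))  # -> 3
```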
  • Patent number: 10254756
    Abstract: A cleaning robot including a main body, a moving assembly to move the main body, a cleaning tool provided at a bottom part of the main body to collect foreign substances on a floor, and an imager to collect images around the main body. The cleaning robot also includes a controller to recognize motion of a hand by performing image processing of the collected images, identify a control command corresponding to the motion of the hand, plan a moving direction and a moving distance of the main body as movement information based on the control command, and control operations of the moving assembly and the cleaning tool based on the planned movement information. Since the user directly controls movement of the cleaning robot, interactivity between the user and the cleaning robot is improved, the effort required of the user is reduced, and convenience is increased.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: April 9, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dong Hun Lee, Heum yong Park
  • Patent number: 10248845
    Abstract: A facial recognition method includes identifying a first makeup pattern in a region of interest (ROI) of a first image, applying one of the first makeup pattern and a candidate makeup pattern to a ROI of a second image corresponding to the ROI of the first image to generate a third image and recognizing a face based on the first image and the third image.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: April 2, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungju Han, Sungjoo Suh, Jungbae Kim, Jaejoon Han
  • Patent number: 10244099
    Abstract: Embodiments of the present disclosure provide a method and a device for determining a status of a terminal, and a terminal. The method includes: transmitting, by a bone-conduction acoustic generator, a detection signal outwards; receiving, by a microphone, a reflection signal of the detection signal reflected by an external object; obtaining a distance between the terminal and the external object according to the detection signal and the reflection signal; and determining the status of the terminal relative to the external object based on the distance.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: March 26, 2019
    Inventor: Haiping Zhang
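Patent 10244099 above obtains the distance from the detection signal and its reflection; a time-of-flight reduction is the obvious reading. The sketch below assumes the delay between emission and reception is already known and that sound travels in air, both simplifications for illustration.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def distance_from_echo(delay_seconds: float) -> float:
    """Convert a round-trip echo delay into a one-way distance."""
    return SPEED_OF_SOUND_M_S * delay_seconds / 2.0

def terminal_status(delay_seconds: float, near_threshold_m: float = 0.05) -> str:
    # e.g. decide whether the phone is held against the ear or away from it.
    distance = distance_from_echo(delay_seconds)
    return "near object (e.g. at ear)" if distance <= near_threshold_m else "away from object"

print(terminal_status(0.0002))  # ~3.4 cm -> near object
print(terminal_status(0.0030))  # ~51 cm  -> away from object
```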
  • Patent number: 10238009
    Abstract: Cooling control methods include measuring a temperature of at least one component of each of multiple nodes and finding a maximum component temperature across all such nodes, comparing the maximum component temperature to first and second component thresholds and comparing an air temperature to first and second air thresholds, and controlling a proportion of coolant flow and a coolant flow rate to an air-to-liquid heat exchanger and the nodes based on the comparisons.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: March 19, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy J. Chainer, Milnes P. David, Madhusudan K. Iyengar, Pritish R. Parida, Robert E. Simons
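The control step of patent 10238009 above keys the coolant split and flow rate off threshold comparisons of the maximum component temperature and the air temperature. The policy and every numeric value below are illustrative assumptions, not the claimed method.

```python
def cooling_control(component_temps_by_node, air_temp,
                    comp_thresholds=(70.0, 85.0), air_thresholds=(30.0, 40.0)):
    """Return (share of coolant sent to the nodes, normalized flow rate)."""
    max_component = max(t for temps in component_temps_by_node.values() for t in temps)

    if max_component >= comp_thresholds[1]:
        node_share, flow_rate = 0.8, 1.0          # hot components: favor the nodes
    elif max_component >= comp_thresholds[0]:
        node_share, flow_rate = 0.6, 0.7
    else:
        node_share, flow_rate = 0.5, 0.4

    if air_temp >= air_thresholds[1]:
        node_share = max(0.0, node_share - 0.2)   # hot air: shift flow to the heat exchanger
        flow_rate = 1.0
    elif air_temp >= air_thresholds[0]:
        flow_rate = max(flow_rate, 0.7)

    return node_share, flow_rate

temps = {"node1": [65.0, 72.0], "node2": [68.0, 90.0]}
print(cooling_control(temps, air_temp=35.0))      # -> (0.8, 1.0)
```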