User Positioning (e.g., Parallax) Patents (Class 348/14.16)
  • Patent number: 10349010
    Abstract: An imaging apparatus includes: a first imaging unit that captures an image of a subject and generates first video data; a first communication unit that communicates with a plurality of electronic devices, and receives second video data generated by the plurality of electronic devices and evaluation information indicating priorities of the second video data; an image processor that performs synthesis processing of synthesizing a first video and a second video, the first video being indicated by the first video data, and the second video being indicated by the second video data received from at least one electronic device of the plurality of electronic devices; and a first controller that controls the synthesis processing in the image processor.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: July 9, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Hideto Kobayashi
  • Patent number: 10321123
    Abstract: Gaze is corrected by adjusting multi-view images of a head. Image patches containing the left and right eyes of the head are identified and a feature vector is derived from plural local image descriptors of the image patch in at least one image of the multi-view images. A displacement vector field representing a transformation of an image patch is derived, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector produced by machine learning. The multi-view images are adjusted by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: June 11, 2019
    Assignee: RealD Spark, LLC
    Inventors: Eric Sommerlade, Michael G. Robinson
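The lookup-and-warp pipeline this abstract describes (feature vector → reference displacement field → patch transform) can be sketched in a few lines. This is a toy illustration, not RealD's method: `REFERENCE_DATA`, the two-element feature vectors, and the 3×3 patch are all invented for the example; in the patent the reference fields are produced by machine learning.

```python
import math

# Hypothetical reference data: maps feature-vector values to displacement
# vector fields (one (dx, dy) per pixel of a 3x3 eye patch).
REFERENCE_DATA = {
    (0.0, 0.0): [[(0, 0)] * 3 for _ in range(3)],  # no correction needed
    (1.0, 0.0): [[(0, 1)] * 3 for _ in range(3)],  # sample one row below
}

def nearest_field(feature_vec):
    """Look up the reference field whose feature value is closest."""
    key = min(REFERENCE_DATA, key=lambda k: math.dist(k, feature_vec))
    return REFERENCE_DATA[key]

def warp_patch(patch, field):
    """Transform an eye patch according to a displacement vector field."""
    h, w = len(patch), len(patch[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = field[y][x]
            sy = min(max(y + dy, 0), h - 1)   # clamp to patch borders
            sx = min(max(x + dx, 0), w - 1)
            out[y][x] = patch[sy][sx]
    return out

patch = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
field = nearest_field((0.9, 0.1))    # closest to the (1.0, 0.0) entry
corrected = warp_patch(patch, field)
```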
  • Patent number: 10319356
    Abstract: Various implementations described herein are directed to a transducer array. The transducer array may include a first receiver having a first aperture width. The transducer array may include a second receiver having a second aperture width that is substantially equal to the first aperture width. The transducer array may also include a transceiver having a third aperture width that is larger than the first aperture width and the second aperture width.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: June 11, 2019
    Assignee: NAVICO HOLDING AS
    Inventor: David Parks
  • Patent number: 10299062
    Abstract: A method for generating loudspeaker signals associated with a target screen size is disclosed. The method includes receiving a bit stream containing encoded higher order ambisonics signals, the encoded higher order ambisonics signals describing a sound field associated with a production screen size. The method further includes decoding the encoded higher order ambisonics signals to obtain a first set of decoded higher order ambisonics signals representing dominant components of the sound field and a second set of decoded higher order ambisonics signals representing ambient components of the sound field. The method also includes combining the first set of decoded higher order ambisonics signals and the second set of decoded higher order ambisonics signals to produce a combined set of decoded higher order ambisonics signals.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: May 21, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Peter Jax, Johannes Boehm, William Redmann
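The final combining step of the claimed method admits a short sketch: once the dominant and ambient HOA signal sets are decoded, they are summed channel by channel, sample by sample. The channels × samples layout and the values below are assumptions for illustration, not Dolby's signal format.

```python
def combine_hoa(dominant, ambient):
    """Sum two sets of decoded HOA signals channel by channel,
    sample by sample, to rebuild the full sound field."""
    assert len(dominant) == len(ambient)
    return [[d + a for d, a in zip(dc, ac)]
            for dc, ac in zip(dominant, ambient)]

# Two HOA channels, two samples each (values chosen to be exact floats).
dominant = [[0.5, 0.25], [0.125, 0.0]]   # dominant (direct) components
ambient = [[0.25, 0.25], [0.0, 0.5]]     # ambient (diffuse) components
combined = combine_hoa(dominant, ambient)
```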
  • Patent number: 10282862
    Abstract: Techniques and systems for digital image generation and capture hint data are described. In one example, a request is formed by an image capture device for capture hint data. The request describes a characteristic of an image scene that is to be the subject of a digital image. A communication is received via a network by the image capture device in response to the request. The communication includes capture hint data that is based at least in part on the characteristic. The capture hint data is displayed by a display device of the image capture device. The digital image of the image scene is then captured by the image capture device subsequent to the display of the capture hint data.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: May 7, 2019
    Assignee: Adobe Inc.
    Inventors: Abhay Vinayak Parasnis, Oliver I. Goldman
  • Patent number: 10261596
    Abstract: Techniques are disclosed for processing a video stream to reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed. The techniques are particularly well suited for hand-based navigational gesture processing. In one example case, for instance, the techniques are implemented in a computer system wherein initial threshold detection (image disturbance) and optionally user presence (hand image) processing components are proximate to or within the system's camera, and the camera is located in or proximate to the system's primary display. In some cases, image processing, and the communication of pixel information between processing stages, is suppressed for pixels that lie outside a markered region. In some embodiments, the markered region is aligned with a mouse pad, a designated desk area, or a user input device such as a keyboard. Pixels evaluated by the system can be limited to a subset of the markered region.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventor: Jeremy Burr
  • Patent number: 10182208
    Abstract: An automatic process for producing professional, directed, production-crew-quality video for videoconferencing is described. Rule-based logic is integrated into an automatic process for producing director-quality video for videoconferencing. An automatic process can include a method for composing a display for use in a video system having an active talker video stream and a panoramic view video stream showing more than one person. The method can include determining a region of interest in a panoramic view video using motion detection and presence sensors, and preparing the panoramic view video by centering the region of interest and by zooming towards the region of interest, based upon the location of persons in the panoramic view video. The method includes determining placement of the panoramic view video on a composite display to prevent the panoramic view video from overlaying the display of an active talker on the active talker video stream.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: January 15, 2019
    Assignee: Polycom, Inc.
    Inventors: Alain Elon Nimri, Shu Gao, Stephen Schaefer, Robert Murphy
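The centering-and-zooming step above reduces to a crop-rectangle computation: center the crop on the region of interest, shrink it by the zoom factor, and clamp it to the frame. `frame_roi`, its parameters, and the zoom value are illustrative names for this sketch, not Polycom's API.

```python
def frame_roi(frame_w, frame_h, roi, zoom=1.5):
    """Center a crop rectangle on the region of interest (ROI) and
    zoom toward it, clamping the crop to the frame borders.

    roi is (x, y, w, h); returns (x, y, w, h) of the crop.
    """
    cx = roi[0] + roi[2] / 2          # ROI center
    cy = roi[1] + roi[3] / 2
    crop_w = frame_w / zoom           # keep the frame aspect ratio
    crop_h = frame_h / zoom
    x = min(max(cx - crop_w / 2, 0), frame_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), frame_h - crop_h)
    return (x, y, crop_w, crop_h)

# A 1080p panoramic frame with people detected around x = 800..1000.
crop = frame_roi(1920, 1080, roi=(800, 400, 200, 300), zoom=2.0)
```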
  • Patent number: 10171522
    Abstract: A method, computer program product, and system for video commentary is described. A method may comprise providing particular media content to two or more user computing devices. The method may further comprise receiving, from a first user computing device of the two or more user computing devices, a selection to view comments from a second user computing device of the two or more user computing devices. The method may also comprise receiving one or more comments from the second user computing device of the two or more user computing devices. The one or more comments from the second user computing device may be associated with video media content. The method may additionally comprise transmitting the one or more comments from the second user computing device to the first user computing device based upon, at least in part, the selection from the first user computing device to view the comments.
    Type: Grant
    Filed: January 13, 2012
    Date of Patent: January 1, 2019
    Assignee: Google LLC
    Inventors: Sean Liu, Nikhyl Singhal
  • Patent number: 10009568
    Abstract: A system and method for displaying visual focus points of meeting participants uses an image capture device to generate a real-time graphical representation of a physical meeting space containing collocated meeting participants. Remote display devices display the real-time graphical representation of the physical meeting space. Each remote display device is associated with a remote meeting participant located at a remote location. A viewpoint monitoring mechanism determines a remote participant visual point of focus within the real-time graphical representation. A remote participant simulator located in the physical meeting space has a unique remote participant representation for each remote meeting participant and a remote participant visual point of focus indicator associated with each remote participant representation to simulate the remote participant visual point of focus.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: June 26, 2018
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, TECHNISCHE UNIVERSITEIT EINDHOVEN
    Inventors: Jason Benjamin Ellis, Thomas D. Erickson, Karin Niemantsverdriet, Bin Xu
  • Patent number: 10004915
    Abstract: A compact and economical transcranial magnetic stimulation system is provided which allows a patient to carry out transcranial magnetic stimulation therapy routinely and repeatedly in, for example, his or her house or a neighborhood primary-care medical facility. The system (1) has a magnetic field generating means for generating a magnetic field used to provide magnetic stimulation to a specific part of the patient's head. The magnetic field generating means has a magnetic coil (2) for generating a variable magnetic field, a holder (10) for holding the magnetic coil (2), cameras (150a and another) for recognizing respective reference markings (24) or specific portions of the ears of the patient M (tragi (24)), and laser beam oscillators (160a and another). According to the system, the coil (2) is set in a proper posture with respect to the specific part of the patient by aligning the optical axes of the cameras and the laser beam oscillators with the respective tragi (24).
    Type: Grant
    Filed: October 24, 2012
    Date of Patent: June 26, 2018
    Assignees: Teijin Pharma Limited, OSAKA UNIVERSITY
    Inventors: Youichi Saitoh, Kenji Tojo, Atsushi Asahina
  • Patent number: 10002309
    Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: June 19, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
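The frame-by-frame association step (linking clusters to detection results by spatial proximity) can be sketched with a greedy nearest-centroid rule. The patent's actual clustering framework is more elaborate; all names and the distance threshold here are invented for illustration.

```python
import math

def associate(clusters, detections, max_dist=50.0):
    """Greedily link each detection (x, y) to the nearest cluster
    centroid, ignoring detections farther than max_dist from every
    cluster. A stand-in for the patent's clustering framework."""
    links = {}
    for det in detections:
        best_id, best_d = None, max_dist
        for cid, centroid in clusters.items():
            d = math.dist(centroid, det)
            if d < best_d:
                best_id, best_d = cid, d
        if best_id is not None:
            links[det] = best_id
    return links

clusters = {0: (100.0, 100.0), 1: (400.0, 120.0)}   # per-object clusters
detections = [(105.0, 98.0), (390.0, 130.0), (900.0, 500.0)]
links = associate(clusters, detections)   # the far detection is dropped
```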
  • Patent number: 9961301
    Abstract: In a system and method for video communications, a user in an observation zone views a display image rendered on a vertically oriented screen through a two-way mirror in front of the screen. The two-way mirror is vertically angled at substantially 45 degrees, and reflects a part of a side wall in the observation zone. The backdrop for the light-emitting portion of the display image is provided by the reflection that is superimposed by the two-way mirror onto the non-light-emitting portion of the display image. The backdrop provides the light-emitting portion of the display image with a depth relationship that is observable when the user views the display image along a line of vision that extends straight through the two-way mirror to the screen behind the two-way mirror. A camera embedded in the side wall captures video of the user that is reflected by the two-way mirror from the observation zone back into the camera.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: May 1, 2018
    Assignee: Telepresence Technologies, LLC
    Inventor: Peter McDuffie White
  • Patent number: 9930269
    Abstract: A method of cropping an image in an apparatus having a camera includes displaying an image, identifying a face area in the image if a crop mode is selected, setting and displaying a crop mark including the identified face area, and displaying a crop image by cropping an image in the crop mark if a crop execution is requested.
    Type: Grant
    Filed: January 2, 2014
    Date of Patent: March 27, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Kyunghwa Kim
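The two operations the abstract names, setting a crop mark around the identified face area and cropping the image to that mark, can be sketched directly. The margin parameter and function names are assumptions for this illustration.

```python
def crop_mark(face, margin=0.25):
    """Expand the identified face area (x, y, w, h) by a margin on
    every side to form the crop mark rectangle."""
    x, y, w, h = face
    mx, my = w * margin, h * margin
    return (x - mx, y - my, w + 2 * mx, h + 2 * my)

def crop(image, mark):
    """Crop a row-major image (list of pixel rows) to an integer mark."""
    x, y, w, h = (int(v) for v in mark)
    return [row[x:x + w] for row in image[y:y + h]]

mark = crop_mark((30, 30, 100, 100))        # -> (5.0, 5.0, 150.0, 150.0)
image = [[r * 10 + c for c in range(6)] for r in range(6)]
region = crop(image, (1, 2, 3, 2))
```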
  • Patent number: 9881529
    Abstract: A display device and an operating method thereof are provided. The display device includes a display module for providing an original image, an optical component, a sensing module, and a control module. The optical component is for projecting a translating image of the original image and includes a first LC GRIN lens array, and a second LC GRIN lens array arranged parallel to the first LC GRIN lens array. The control module is for receiving an object information from the sensing module and adjusting the translating image by applying a first bias arrangement to the first LC GRIN lens array and a second bias arrangement to the second LC GRIN lens array according to the object information.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: January 30, 2018
    Assignee: INNOLUX CORPORATION
    Inventor: Naoki Sumi
  • Patent number: 9883138
    Abstract: The description relates to remote collaboration via a telepresence experience. One example can include an interactive digital display. The example can also include a virtual user presentation component configured to generate a graphical user interface that includes a virtual representation of a remote user on the interactive digital display. The graphical user interface can be configured to present the remote user in a side by side or mirror image relationship to a local user of the interactive digital display.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: January 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yinpeng Chen, Philip A. Chou, Zhengyou Zhang
  • Patent number: 9813605
    Abstract: Apparatuses, methods, systems, and program products are disclosed for tracking items. An identification module identifies an item using one or more sensors of an information handling device. A location module receives location data for the item in response to identifying the item. A communication module shares the location data with one or more different information handling devices.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: November 7, 2017
    Assignee: Lenovo (Singapore) PTE. LTD.
    Inventors: Nathan J. Peterson, John Carl Mese, Russell Speight VanBlon, Rod D. Waltermann, Arnold S. Weksler
  • Patent number: 9794692
    Abstract: A method is disclosed for determining a relative orientation of speakers that receive audio signals from a portable audio source device. In an embodiment, a microphone coupled with the portable audio source device receives a first sound from a first speaker and a second sound from a second speaker. An orientation detector determines a volume of at least one of the first and second sounds and the portable audio source device detects a movement of the portable audio source device. The orientation detector detects a variation in the determined volume and determines a relative orientation of the first and second speakers based, at least in part, on the detected movement and the detected variation in the determined volume.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: October 17, 2017
    Assignee: International Business Machines Corporation
    Inventors: Kulvir S. Bhogal, Jonathan F. Brunn, Jeffrey R. Hoy, Asima Silva
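The core inference, combining a detected device movement with a detected volume variation to place a speaker, can be reduced to a small decision rule. The sign conventions and function name below are assumptions for this sketch, not IBM's orientation detector.

```python
def relative_orientation(movement, volume_change):
    """Infer which side a speaker is on from one probe movement.

    movement: signed lateral displacement of the portable audio source
    device (positive = toward the listener's right); volume_change:
    change in that speaker's measured volume during the movement. If
    moving toward a side makes the speaker louder, the speaker is on
    that side.
    """
    if movement == 0 or volume_change == 0:
        return "unknown"
    return "right" if (movement > 0) == (volume_change > 0) else "left"

side = relative_orientation(0.3, 2.0)   # got louder while moving right
```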
  • Patent number: 9754526
    Abstract: Some embodiments include a mobile device with a light sensor positioned under a display (e.g., a touchscreen). The mobile device can identify a command to capture an image. The mobile device can adjust at least a target portion of an opaqueness adjustable region of the display directly over the light sensor. The opaqueness adjustable region is capable of transforming from substantially opaque to substantially transparent. The mobile device can capture the image utilizing at least the light sensor while the target portion of the opaqueness adjustable region is substantially transparent.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: September 5, 2017
    Assignee: ESSENTIAL PRODUCTS, INC.
    Inventors: David John Evans, V, Xinrui Jiang, Andrew E. Rubin, Matthew Hershenson, Xiaoyu Miao, Joseph Anthony Tate, Jason Sean Gagne-Keats, Rebecca Schultz Zavin
  • Patent number: 9736611
    Abstract: A system enhances spatialization in an audio signal at a receiving location. The system applies a phase difference analysis to signals received from an array of spaced apart input devices that convert sound into electrical signals. The system derives spatial or directional information about the relative locations of the sound sources. The converted signals may be mixed using weights derived from the spatial information to generate a multichannel output signal that, when processed by a remote or local audio system, generates, at the receiving location, a representation of the relative locations of the sound sources at the originating location.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: August 15, 2017
    Assignee: 2236008 Ontario Inc.
    Inventors: Phillip A. Hetherington, Mark Fallat
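The directional cue the patent extracts from spaced microphones can be illustrated with a time-difference-of-arrival estimate, a time-domain cousin of the phase-difference analysis the abstract describes. The impulse signals and lag convention are invented for the example.

```python
def xcorr_lag(a, b, max_lag):
    """Estimate the sample lag of signal b relative to signal a by
    maximizing the cross-correlation. A positive lag means the sound
    reached microphone A first, i.e. the source sits closer to A."""
    n = len(a)
    best_lag, best = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        if s > best:
            best, best_lag = s, lag
    return best_lag

# An impulse arriving at mic A at sample 5 and at mic B at sample 8.
mic_a = [0.0] * 16
mic_b = [0.0] * 16
mic_a[5] = 1.0
mic_b[8] = 1.0
lag = xcorr_lag(mic_a, mic_b, max_lag=10)   # -> 3
```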
  • Patent number: 9712783
    Abstract: A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and independently of detecting faces, motion is detected across the view. It is determined if any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are not both met, the endpoint reframes the view.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: July 18, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Glenn Robert Grimsrud Aarrestad, Rune Øistein Aas, Kristian Tangeland
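The two-condition test in this abstract maps directly onto a small predicate: after a face is lost, reframe unless some motion both coincides with the lost face position and occurs before the timeout. The names and the coincidence radius are assumptions for this sketch.

```python
import math

def should_reframe(motion_events, timeout, lost_face_pos, radius=40.0):
    """Decide whether to reframe after a previously detected face is
    lost. motion_events is a list of (t, x, y) motion detections made
    after the loss; keep the current framing only if some motion both
    (i) coincides with the lost face position and (ii) occurs before
    the timeout expires."""
    for t, x, y in motion_events:
        if t < timeout and math.dist((x, y), lost_face_pos) <= radius:
            return False    # the person is probably still there
    return True             # both conditions unmet: reframe the view

keep_framing = should_reframe([(1.0, 102.0, 98.0)], timeout=5.0,
                              lost_face_pos=(100.0, 100.0))
```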
  • Patent number: 9672412
    Abstract: Provided are methods and apparatus for tracking a head pose with online face template reconstruction. The method comprises the steps of retrieving a plurality of frames of images of the user; comparing each of the retrieved frames with a predetermined face template to determine one or more head poses that are monitored successfully and obtain head pose information of the determined one or more head poses; and reconstructing, during the step of comparing, the face template from the obtained head pose information; wherein the reconstructed face template is compared with subsequently retrieved images such that the head poses of the user are tracked in time.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: June 6, 2017
    Assignee: THE CHINESE UNIVERSITY OF HONG KONG
    Inventors: King Ngi Ngan, Songnan Li
  • Patent number: 9665804
    Abstract: A method for tracking an object by an electronic device is described. The method includes detecting an object position in an initial frame to produce a detected object position. The method also includes measuring one or more landmark positions based on the detected object position or a predicted object position. The method further includes predicting the object position in a subsequent frame based on the one or more landmark positions. The method additionally includes determining whether object tracking is lost. The method also includes avoiding performing object detection for the subsequent frame in a case that object tracking is maintained.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: May 30, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Michel Adib Sarkis, Yingyong Qi, Magdi Abuelgasim Mohamed
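The detect-once-then-track control flow the claims describe can be sketched as a loop that re-runs detection only when tracking is lost. All callables below are toy stand-ins, not QUALCOMM's detectors or landmark models.

```python
def track(frames, detect, measure, predict, lost):
    """Skeleton of the claimed loop: run full detection once, then
    follow the object via landmark measurements, re-running detection
    only when tracking is lost."""
    pos = detect(frames[0])
    path = [pos]
    for frame in frames[1:]:
        landmarks = measure(frame, pos)
        if lost(landmarks):
            pos = detect(frame)            # fall back to detection
        else:
            pos = predict(landmarks)       # cheap tracking update
        path.append(pos)
    return path

detection_calls = []

def detect(frame):
    detection_calls.append(frame)
    return frame["pos"]

def measure(frame, prev_pos):
    return [frame["pos"]]                  # one landmark, toy version

def predict(landmarks):
    return sum(landmarks) / len(landmarks)

def lost(landmarks):
    return not landmarks

frames = [{"pos": p} for p in (10.0, 12.0, 14.0)]
path = track(frames, detect, measure, predict, lost)
```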
  • Patent number: 9646198
    Abstract: In an approach to determine the sentiment of an attendee of a video conference, the computer receives a video of an attendee of a video conference and then determines, based at least in part on the video of the attendee, a first sentiment of the attendee. Furthermore, in the approach the computer receives an indication of an attendee activity on a first application and determines, based in part on the attendee activity, whether the first sentiment of the attendee is related to the video conference.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: May 9, 2017
    Assignee: International Business Machines Corporation
    Inventors: Hernan A. Cunico, Asima Silva
  • Patent number: 9648061
    Abstract: In an approach to determine the sentiment of an attendee of a video conference, the computer receives a video of an attendee of a video conference and then determines, based at least in part on the video of the attendee, a first sentiment of the attendee.
    Type: Grant
    Filed: August 8, 2014
    Date of Patent: May 9, 2017
    Assignee: International Business Machines Corporation
    Inventors: Hernan A. Cunico, Asima Silva
  • Patent number: 9582707
    Abstract: A three-dimensional pose of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector of the pose of the head relative to a reference pose is determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided along with corresponding depth data to an Extended Kalman filter with states including a rotation matrix and a translation vector associated with the reference pose for the head and a current orientation and a current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: February 28, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Piyush Sharma, Ashwin Swaminathan, Ramin Rezaiifar, Qi Xue
  • Patent number: 9582895
    Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: February 28, 2017
    Assignees: International Business Machines Corporation, The University of Queensland
    Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
  • Patent number: 9544578
    Abstract: A portable electronic equipment (1) has an autostereoscopic display (11), a sensor device (2), and a controller (6, 7). The autostereoscopic display (11) comprises a display panel and an image directing device. The sensor device (2) is configured to capture distance information indicative of a distance of a user from the autostereoscopic display (11) and a direction in which the user is positioned. The controller (6, 7) is coupled to the autostereoscopic display (11) to control the display panel (12) and the image directing device based on the distance information and/or direction information. The controller (6, 7) is configured to compute plural images to be output based on the distance information, to control the display panel to output the computed plural images, and to control the image directing device based on the distance information and/or direction information.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: January 10, 2017
    Assignee: Sony Ericsson Mobile Communications AB
    Inventors: Anders Linge, Martin Ek, Jonas Gustavsson
  • Patent number: 9544540
    Abstract: Computing devices may implement dynamic display of video communication data. Video communication data for a video communication may be received at a computing device where another application is currently displaying image data on an electronic display. A display location may be determined for the video communication data according to display attributes that are configured by the other application at runtime. Once determined, the video communication data may then be displayed in the determined location. In some embodiments, the video communication data may be integrated with other data displayed on the electronic display for the other application.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: January 10, 2017
    Assignee: Apple Inc.
    Inventors: Xiaosong Zhou, Hsi-Jung Wu, Chris Y. Chung, James O. Normile, Joe S. Abuan, Hyeonkuk Jeong, Yan Yang, Gobind Johar, Thomas Jansen
  • Patent number: 9521362
    Abstract: A virtual camera pose determiner is configured to determine a position and an orientation of a virtual camera. The position of the virtual camera is determined on the basis of a display position of a displayed representation of a remote participant on a display. The orientation of the virtual camera is determined on the basis of a geometrical relation between the display position of the remote participant on the display, and a position of a local participant. The virtual camera is configured to transmit an image or a sequence of images to the remote participant, so that an image provided by the virtual camera has the view of the local participant as if viewed from the display position. Further embodiments provide a video communication system having a virtual camera pose determiner for providing a virtual camera pose on the basis of the display position and the position of the local participant.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: December 13, 2016
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Nicole Atzpadin, Ingo Feldmann, Peter Kauff, Oliver Schreer, Wolfgang Waizenegger
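The geometric relation the abstract describes, a camera positioned at the remote participant's display position and oriented toward the local participant, reduces in 2D to a look-at angle. This is an illustration, not Fraunhofer's implementation; the yaw angle stands in for the full orientation.

```python
import math

def virtual_camera_pose(display_pos, participant_pos):
    """Place the virtual camera at the remote participant's display
    position and aim it at the local participant (2D sketch; the
    returned yaw angle is in degrees)."""
    dx = participant_pos[0] - display_pos[0]
    dy = participant_pos[1] - display_pos[1]
    return display_pos, math.degrees(math.atan2(dy, dx))

# Display tile at the origin, local participant at 45 degrees from it.
pos, yaw = virtual_camera_pose((0.0, 0.0), (1.0, 1.0))
```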
  • Patent number: 9521329
    Abstract: A display device includes an imaging unit configured to capture an image of an object and to generate image data of the object, a display unit configured to display the image corresponding to the image data generated by the imaging unit, and a display controller configured to cause the display unit to display a sample image in which at least a line of sight of the object has been changed when the imaging unit captures the image of the object. An imaging direction of the imaging unit and a display direction of the display unit are matched. The sample image is an object image in which the line of sight has changed from front to a direction other than the front.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: December 13, 2016
    Assignee: OLYMPUS CORPORATION
    Inventors: Osamu Nonaka, Mai Yamaguchi, Yuiko Uemura, Tomomi Uemura, Sachie Yamamoto
  • Patent number: 9521363
    Abstract: A live video conference session can be established between a first device and a second, remotely located device, each having a camera as an input peripheral. The first and second cameras can capture first and second video streams of first and second session participants. During the live video conference session, the first and second video streams can be continuously conveyed in real time over a network to displays of each device. The first video stream can be analyzed to programmatically determine that a position of the first participant is non-optimal as seen by the second participant. An eye guide can be presented on a user interface to assist the first participant in focusing their eyes on a new location indicated by the eye guide. If the system subsequently detects improved eye contact with the second participant, the eye guide can be dismissed.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: December 13, 2016
    Inventors: Brian K. Buchheit, Satesh Ramcharitar
  • Patent number: 9497413
    Abstract: In an approach to video filtering, a computer receives a first video frame of a presenter from a video feed that includes an audio feed. The computer receives a second video frame. The computer determines whether a difference between the first video frame and the second video frame exceeds a pre-defined threshold. In response to determining the difference between the first video frame and the second video frame exceeds the pre-defined threshold, the computer determines whether the difference between the first video frame and the second video frame is expected. In response to determining the difference between the first video frame and the second video frame is expected, the computer discards video data associated with the difference between the first video frame and the second video frame. The computer creates a third video frame, based, at least in part, on non-discarded video data.
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Abdullah Q. Chougle, Akash U. Dhoot, Shailendra Moyal
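The decision rule in this abstract, difference exceeds a threshold AND the difference is expected, so discard the associated data and build a third frame from what remains, can be sketched per pixel. The patent applies the test per frame with richer expectation logic, so treat this as a minimal illustration with invented names.

```python
def filter_frame(prev, curr, threshold, expected):
    """Per-pixel sketch of the decision rule: when a change exceeds
    the threshold and is expected (e.g. known presenter motion to be
    filtered out), the changed data is discarded and the previous
    frame's pixel is reused to build the third frame."""
    out = []
    for p, c in zip(prev, curr):
        if abs(c - p) > threshold and expected:
            out.append(p)    # discard the expected change
        else:
            out.append(c)    # keep the new data
    return out

third = filter_frame([10, 10, 10], [10, 90, 10], threshold=20,
                     expected=True)
```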
  • Patent number: 9445154
    Abstract: First video showing content displayed on a touch sensitive screen is combined with second video from a camera showing a user's interaction with the touch sensitive screen. The second video is filtered (digitally or physically) to prevent the camera from capturing the content of the touch sensitive screen. Combining the first video with the second video creates a combined video containing both a high quality image of the graphic user interface of an application appearing on the touch sensitive screen as well as how a person operating the application is gesturing and otherwise touching the touch sensitive screen when interacting with the application.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: September 13, 2016
    Assignee: GLANCE NETWORKS, INC.
    Inventor: Richard L. Baker
  • Patent number: 9437011
    Abstract: A method of estimating a pose of a head for a person, includes estimating the pose of the head for the person based on a content, and generating a three-dimensional (3D) model of a face for the person. The method further includes generating pictorial structures of the face based on the estimated pose and the 3D model, and determining a refined pose of the head by locating parts of the face in the pictorial structures.
    Type: Grant
    Filed: June 3, 2013
    Date of Patent: September 6, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hariprasad Kannan, Anant Vidur Puri
  • Patent number: 9380262
    Abstract: The present invention relates to a mobile terminal and a method for operating the same. According to an embodiment of the present invention, a method for operating a mobile terminal includes the steps of: forming an audio beam based on at least one of an image photographed by a camera and motion information from a motion sensor; receiving an audio signal from a speaker through a plurality of microphones; and processing the received audio signal based on the formed audio beam. This improves convenience of use.
    Type: Grant
    Filed: September 9, 2013
    Date of Patent: June 28, 2016
    Assignee: LG ELECTRONICS INC.
    Inventor: Ju Yeon Shin
  • Patent number: 9360939
    Abstract: An object of the present invention is to provide a technical method for achieving a simulated-experience remote control button for video games, one that lets the player experience and react to tactile sensations corresponding to the video game software. According to the present invention, the button portion of a video-game remote control is replaced with a tool and a mechanical device that create tactile sensations matching the content of a video game and present them to the human operator as a reaction. The operator's response, applied with human power, generates a load; this load is quantified as a value X, and X is converted into a button operation of the remote control, thereby achieving a remote control button that provides a simulated tactile experience.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: June 7, 2016
    Inventor: Shinji Nishimura
  • Patent number: 9342759
    Abstract: Described is a system for improving object recognition. Object detection results and classification results for a sequence of image frames are received as input. Each object detection result is represented by a detection box and each classification result is represented by an object label corresponding to the object detection result. A pseudo-tracklet is formed by linking object detection results representing the same object in consecutive image frames. The system determines whether there are any inconsistent object labels or missing object detection results in the pseudo-tracklet. Finally, the object detection results and the classification results are improved by correcting any inconsistent object labels and missing object detection results.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: May 17, 2016
    Assignee: HRL Laboratories, LLC
    Inventors: Yang Chen, Changsoo S. Jeong, Deepak Khosla, Kyungnam Kim, Shinko Y. Cheng, Lei Zhang, Alexander L. Honda
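A pseudo-tracklet, as described above, links detections of the same object across consecutive frames; the sketch below links boxes greedily by overlap. The (x1, y1, x2, y2) box format, the IoU threshold, and the greedy strategy are illustrative assumptions, not the patented system.

```python
# Hedged sketch of linking detection boxes into a pseudo-tracklet by overlap.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def link_pseudo_tracklet(frames, threshold=0.5):
    """Greedily link the best-overlapping detection in each consecutive
    frame to the previous one; returns the linked chain of boxes."""
    tracklet = [frames[0][0]] if frames and frames[0] else []
    for detections in frames[1:]:
        if not tracklet or not detections:
            break
        best = max(detections, key=lambda b: iou(tracklet[-1], b))
        if iou(tracklet[-1], best) < threshold:
            break
        tracklet.append(best)
    return tracklet
```

Once such a chain exists, inconsistent object labels along it can be voted on and corrected, which is the improvement the abstract describes.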
  • Patent number: 9298007
    Abstract: Aspects of the present invention relate to methods and systems for imaging, recognizing, and tracking the eye of a user who is wearing a head-worn computer (HWC). Aspects further relate to the processing of images reflected from the user's eye and controlling displayed content in accordance therewith.
    Type: Grant
    Filed: March 17, 2014
    Date of Patent: March 29, 2016
    Assignee: Osterhout Group, Inc.
    Inventor: John N. Border
  • Patent number: 9270943
    Abstract: Embodiments of the present invention provide a novel system and/or method for performing over-the-network collaborations and interactions between remote end-users. Embodiments of the present invention produce the perceived effect of each user sharing a same physical workspace while each person is actually located in separate physical environments. In this manner, embodiments of the present invention allow for more seamless interactions between users while relieving them of the burden of using common computer peripheral devices such as mice, keyboards, and other hardware often used to perform such interactions.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: February 23, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Jana Ehmann, Liang Zhou, Onur G. Guleryuz, Fengjun Lv, Fengqing Zhu, Naveen Dhar
  • Patent number: 9269012
    Abstract: Systems and approaches are provided for tracking an object using multiple tracking processes. By combining multiple lightweight tracking processes, object tracking can be robust, use a limited amount of power, and enable a computing device to respond to input corresponding to the motion of the object in real time. The multiple tracking processes can be run in parallel to determine the position of the object by selecting the results of the best performing tracker under certain heuristics or combining the results of multiple tracking processes in various ways. Further, other sensor data of a computing device can be used to improve the results provided by one or more of the tracking processes.
    Type: Grant
    Filed: August 22, 2013
    Date of Patent: February 23, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: David Allen Fotland
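One way to read "selecting the results of the best performing tracker under certain heuristics" is a confidence-weighted pick among parallel trackers; the sketch below assumes a per-tracker confidence score, which is an illustrative heuristic rather than the patented one.

```python
# Hedged sketch of fusing parallel lightweight trackers by confidence.

def select_position(tracker_outputs):
    """Pick the position reported by the highest-confidence tracker."""
    best = max(tracker_outputs, key=lambda t: t["confidence"])
    return best["position"]

# Hypothetical outputs from two trackers running in parallel.
outputs = [
    {"name": "optical_flow", "position": (120, 80), "confidence": 0.6},
    {"name": "template",     "position": (122, 79), "confidence": 0.9},
]
```

Other sensor data (e.g., device motion) could adjust the confidence values before this selection, in line with the abstract's final point.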
  • Patent number: 9258520
    Abstract: A mobile terminal includes a communication unit, a display, a camera for obtaining images, and a processor for causing an image to be displayed on the display and for causing images to be communicated via the communication unit. The processor may be further configured to obtain a first image from the camera, obtain a substantial mirror image of the first image to form a second image, display the second image on the display, and communicate the first image to a receiving device via a wireless communication link.
    Type: Grant
    Filed: April 5, 2013
    Date of Patent: February 9, 2016
    Assignee: LG ELECTRONICS INC.
    Inventor: Eun Young Lee
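The mirror-image step in this abstract amounts to a horizontal flip of the captured frame, shown locally while the unflipped frame is transmitted; the row-major frame layout below is an illustrative assumption.

```python
# Hedged sketch: local preview is mirrored, the transmitted frame is not.

def mirror_image(frame):
    """Return a horizontally flipped copy of a row-major frame."""
    return [list(reversed(row)) for row in frame]

captured = [[1, 2, 3],
            [4, 5, 6]]
preview = mirror_image(captured)   # shown on the local display
to_send = captured                 # communicated to the receiving device
```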
  • Patent number: 9253443
    Abstract: In an approach to video filtering, a computer receives a first video frame of a presenter from a video feed that includes an audio feed. The computer extracts a face of the presenter and a background from the first video frame. The computer receives a second video frame. The computer determines whether a difference between the first video frame and the second video frame exceeds a pre-defined threshold. The computer converts the audio feed from speech to text. The computer determines whether the difference between the first video frame and the second video frame is expected, based, at least in part, on the converted audio feeds. The computer discards video data associated with the difference between the first video frame and the second video frame. The computer creates a third video frame, based, at least in part, on non-discarded video data.
    Type: Grant
    Filed: March 3, 2015
    Date of Patent: February 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Abdullah Q. Chougle, Akash U. Dhoot, Shailendra Moyal
  • Patent number: 9215408
    Abstract: A view morphing algorithm is applied to synchronized collections of video images from at least two video imaging devices and, by interpolating between the images, creates a composite image view of the local participant. This composite image approximates what might be seen from a point between the video imaging devices, and it is presented to the other video session participants.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: December 15, 2015
    Assignee: APPLIED INVENTION, LLC
    Inventors: W. Daniel Hillis, Bran Ferren, Russel Howe
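Full view morphing involves rectifying the two camera views and warping along pixel correspondences; the sketch below shows only the interpolation step between two already-aligned frames, with the blend weight `alpha` as an assumed parameter.

```python
# Hedged sketch of the interpolation step between two aligned views.

def blend_views(left, right, alpha=0.5):
    """Pixel-wise linear interpolation between two aligned frames;
    alpha=0 reproduces the left view, alpha=1 the right."""
    return [[(1 - alpha) * l + alpha * r for l, r in zip(row_l, row_r)]
            for row_l, row_r in zip(left, right)]
```

Varying `alpha` corresponds to moving the synthesized viewpoint between the two imaging devices.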
  • Patent number: 9182935
    Abstract: Methods and devices for selectively presenting a user interface on a second screen. More particularly, the method includes changing the display mode of a multiple-screen device from a first screen to a second screen while the device is closed. The change in the display mode may be made in a menu, which can be rendered in response to a request. The change to the second screen can be received by the selection of a user interface device in the menu. In response to the selection, the device can render the user interface on the second screen.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: November 10, 2015
    Assignee: Z124
    Inventors: Sanjiv Sirpal, Mohammed Selim
  • Patent number: 9111171
    Abstract: A method and mobile terminal for correcting a gaze of a user in an image includes setting eye outer points that define an eye region of the user in an original image, transforming the set eye outer points to a predetermined reference camera gaze direction, and transforming the eye region of the original image based on the transformed eye outer points.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: August 18, 2015
    Assignees: Samsung Electronics Co., Ltd, Postech Academy-Industry
    Inventors: Byung-Jun Son, Dai-Jin Kim, Jong-Ju Shin, In-Ho Choi, Tae-Hwa Hong
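Transforming eye outer points toward a reference camera gaze direction might look like the blend below; the (x, y) point format and the linear shift are illustrative assumptions, not the patented transform.

```python
# Hedged sketch of moving eye outer points toward reference positions.

def shift_toward_reference(points, reference, weight=1.0):
    """Move each (x, y) point a fraction `weight` toward its reference point."""
    return [(x + weight * (rx - x), y + weight * (ry - y))
            for (x, y), (rx, ry) in zip(points, reference)]
```

The eye region of the original image would then be warped to follow the shifted points, which is the final step the abstract describes.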
  • Patent number: 9094571
    Abstract: A video chatting method and system are provided. The method and system describe collection of facial vector data, audio data, and interactive motion information of a user of a first client. The collected data may be transmitted to a second client. The second client, in turn, may generate a virtual avatar model of the user of the first client based on the received data. The second client may further display the virtual model and play the sound in the audio data. The second client may also render the interactive motion information and facial data information of a user of the second client, and generate and display a virtual avatar model of the user of the second client. The provided method and system may decrease the amount of data transferred over the network, which may keep the data transmission rate during video communication high enough for smooth operation.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: July 28, 2015
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shang Yu, Feng Rao, Yang Mo, Jun Qiu, Fei Wang
  • Patent number: 9077974
    Abstract: Disclosed herein is a 3D teleconferencing apparatus and method enabling eye contact. The 3D teleconferencing apparatus enabling eye contact according to the present invention includes an image acquisition unit for acquiring depth images and color images by manipulating cameras in real time in consideration of images obtained by capturing a subject that is a teleconference participant and images received over a network and corresponding to a counterpart involved in the teleconference; a full face generation unit for generating a final depth image and a final color image corresponding to a full face of the participant for eye contact using the depth images and the color images; and a 3D image generation unit for generating a 3D image corresponding to the counterpart and displaying the 3D image on a display device.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: July 7, 2015
    Assignee: CENTER OF HUMAN-CENTERED INTERACTION FOR COEXISTENCE
    Inventors: Bum-Jae You, Eun-Kyung Lee, Ji-Yong Lee, Jai-Hi Cho, Shin-Young Kim
  • Patent number: 9041764
    Abstract: Embodiments of the present invention provide a method, device, and system for highlighting a party of interest in video conferencing, relating to the communication field and capable of effectively utilizing network bandwidth and enhancing conference efficiency. The method includes: converting received audio and video signals sent by multiple conferencing terminals into multiple independent video images corresponding to the multiple conferencing terminals, and displaying the multiple video images through a display device; and adjusting display factors of the multiple video images according to obtained video image display priority signals, so that the multiple video images present different visual characteristics in the display device. The embodiments of the present invention are applied in video conferencing.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: May 26, 2015
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Zongbo Wang, Ying Lu, Shilu Ma
  • Patent number: 9041767
    Abstract: A system and method is disclosed for adapting a continuous presence videoconferencing layout according to interactions between conferees. Using regions of interest found in video images, the arrangement of images of conferees may be dynamically arranged as displayed by endpoints. Arrangements may be responsive to various metrics, including the position of conferees in a room and dominant conferees in the videoconference. Video images may be manipulated as part of the arrangement, including cropping and mirroring the video image. As interactions between conferees change, the layout may be automatically rearranged responsive to the changed interactions.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: May 26, 2015
    Assignee: Polycom, Inc.
    Inventors: Eyal Leviav, Niv Wagner, Efrat Be'ery
  • Patent number: 9025002
    Abstract: A method and an apparatus for playing audio of an attendant at a remote end and a remote video conference system are provided. The method includes: receiving audio of an attendant at a remote site; and, by means of two or more loudspeakers mounted at a top and a bottom of a remote image presentation device at a local site, simulating an audio transmission path of the audio of the attendant at the remote site between the two or more loudspeakers and an attendant at the local site through a predetermined algorithm, where the audio transmission path is simulated between an image of the head of the attendant at the remote site displayed on the remote image presentation device and the head of the attendant at the local site.
    Type: Grant
    Filed: December 11, 2012
    Date of Patent: May 5, 2015
    Assignee: Huawei Device Co., Ltd.
    Inventors: Wuzhou Zhan, Dongqi Wang