User Positioning (e.g., Parallax) Patents (Class 348/14.16)
-
Patent number: 10536669
Abstract: Methods, systems, and apparatus for conducting a video conference. A location of one or more sets of eyes in an image may be determined. The relative location of an image capture device and/or a portion of a display device may be adjusted based on the determined location.
Type: Grant
Filed: June 12, 2017
Date of Patent: January 14, 2020
Assignee: eBay Inc.
Inventor: Jeremiah Joseph Akin
-
Patent number: 10491857
Abstract: An asymmetric video conferencing system comprises a panoramic video conferencing device in a first location and a head mounted video conferencing device in a second location. The panoramic video conferencing device acquires panoramic images of multiple participants via a 360 degree camera, identifies a face or other region of interest (ROI) in the panoramic images, and captures video data of the ROI. To reduce bandwidth utilized, the ROI video data is transmitted to the head mounted video conferencing device used by a remote participant, who can command alternative views and audio from the first location. Images and audio of the remote participant can be transmitted to the first location for viewing and hearing by the multiple participants, enabling virtual face-to-face discussions. A method implementing the system is also disclosed herein.
Type: Grant
Filed: November 7, 2018
Date of Patent: November 26, 2019
Assignee: NANNING FUGUI PRECISION INDUSTRIAL CO., LTD.
Inventor: Yung-Chang Huang
-
Patent number: 10477143
Abstract: Provided is an information processing device, including: a camera that captures a real space; a communication unit that communicates with a terminal device used by a remote user; a streaming control unit that streams a first video captured by the camera from the communication unit to the terminal device if the information processing device itself is selected by the remote user from among a plurality of devices that capture the real space; and a display that displays a user image of the remote user while the first video is being streamed to the terminal device. The information processing device reduces inconvenience for the remote user during remote communication, and realizes rich remote communication with a higher degree of freedom.
Type: Grant
Filed: January 4, 2018
Date of Patent: November 12, 2019
Assignee: SONY CORPORATION
Inventor: Hiroaki Tobita
-
Patent number: 10423830
Abstract: Techniques related to eye contact correction to provide a virtual user gaze aligned with a camera while the user views a display are discussed. Such techniques may include encoding an eye region of a source image using a pretrained neural network to generate compressed features, applying a pretrained classifier to the features to determine a motion vector field for the eye region, and warping and inserting the eye region into the source image to generate an eye contact corrected image.
Type: Grant
Filed: April 22, 2016
Date of Patent: September 24, 2019
Assignee: Intel Corporation
Inventors: Edmond Chalom, Or Shimshi
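The warp-and-insert step this abstract describes can be sketched roughly as below. The per-pixel flow-field layout, the nearest-neighbour sampling, and all function names are illustrative assumptions, not taken from the patent:

```python
# Sketch of warping an eye-region patch by a motion vector field and
# pasting it back into the source image. Names and data layout are
# assumptions for illustration only.

def warp_patch(patch, flow):
    """Backward-warp a 2-D patch (list of rows) by a per-pixel flow field.

    flow[y][x] = (dy, dx) gives, for each output pixel, the offset into
    the source patch to sample from (nearest-neighbour, clamped at the
    patch borders).
    """
    h, w = len(patch), len(patch[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            sy = min(max(int(round(y + dy)), 0), h - 1)
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y][x] = patch[sy][sx]
    return out

def insert_patch(image, patch, top, left):
    """Paste a warped patch back into the source image in place."""
    for y, row in enumerate(patch):
        image[top + y][left:left + len(row)] = row
    return image
```

A real implementation would use sub-pixel interpolation (e.g. bilinear) and blend the patch edges rather than hard-pasting them.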
-
Patent number: 10384144
Abstract: Systems and methods herein are directed to a travel case for a portable Pepper's Ghost Illusion setup. In particular, various embodiments are described that provide a road case that folds out into a Pepper's Ghost Illusion system, allowing for extended portability. Specifically, in one embodiment, the portable case may be built for air travel (e.g., less than 50 pounds and less than or equal to 62 linear inches (H+W+D)), using a panel display and space-saving designs for legs, holographic foil and frame, and other components (e.g., remotes, wires, etc.). For example, foam used for cushioning and packaging can double as a stage.
Type: Grant
Filed: March 11, 2016
Date of Patent: August 20, 2019
Assignee: VENTANA 3D, LLC
Inventors: Ashley Crowder, Benjamin Conway, Troy P. Senkiewicz
-
Patent number: 10349010
Abstract: An imaging apparatus includes: a first imaging unit that captures an image of a subject and generates first video data; a first communication unit that communicates with a plurality of electronic devices, and receives second video data generated by the plurality of electronic devices and evaluation information indicating priorities of the second video data; an image processor that performs synthesis processing of synthesizing a first video and a second video, the first video being indicated by the first video data, and the second video being indicated by the second video data received from at least one electronic device of the plurality of electronic devices; and a first controller that controls the synthesis processing in the image processor.
Type: Grant
Filed: August 4, 2017
Date of Patent: July 9, 2019
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventor: Hideto Kobayashi
-
Patent number: 10319356
Abstract: Various implementations described herein are directed to a transducer array. The transducer array may include a first receiver having a first aperture width. The transducer array may include a second receiver having a second aperture width that is substantially equal to the first aperture width. The transducer array may also include a transceiver having a third aperture width that is larger than the first aperture width and the second aperture width.
Type: Grant
Filed: December 20, 2017
Date of Patent: June 11, 2019
Assignee: NAVICO HOLDING AS
Inventor: David Parks
-
Patent number: 10321123
Abstract: Gaze is corrected by adjusting multi-view images of a head. Image patches containing the left and right eyes of the head are identified and a feature vector is derived from plural local image descriptors of the image patch in at least one image of the multi-view images. A displacement vector field representing a transformation of an image patch is derived, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector produced by machine learning. The multi-view images are adjusted by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field.
Type: Grant
Filed: January 4, 2017
Date of Patent: June 11, 2019
Assignee: RealD Spark, LLC
Inventors: Eric Sommerlade, Michael G. Robinson
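The lookup step in this abstract (using a derived feature vector to select among reference displacement vector fields) can be sketched as a nearest-neighbour lookup. The reference-table format and the Euclidean metric are assumptions for illustration; the patent's learned reference data would be richer:

```python
# Sketch: pick the reference displacement field whose associated feature
# vector is nearest (squared Euclidean distance) to the derived one.
# `reference` is an assumed list of (feature_vector, displacement_field)
# pairs, as machine learning might have produced.

def lookup_displacement_field(feature, reference):
    """Return the displacement field paired with the nearest reference
    feature vector."""
    best = min(
        reference,
        key=lambda entry: sum((a - b) ** 2 for a, b in zip(entry[0], feature)),
    )
    return best[1]
```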
-
Patent number: 10299062
Abstract: A method for generating loudspeaker signals associated with a target screen size is disclosed. The method includes receiving a bit stream containing encoded higher order ambisonics signals, the encoded higher order ambisonics signals describing a sound field associated with a production screen size. The method further includes decoding the encoded higher order ambisonics signals to obtain a first set of decoded higher order ambisonics signals representing dominant components of the sound field and a second set of decoded higher order ambisonics signals representing ambient components of the sound field. The method also includes combining the first set of decoded higher order ambisonics signals and the second set of decoded higher order ambisonics signals to produce a combined set of decoded higher order ambisonics signals.
Type: Grant
Filed: July 27, 2016
Date of Patent: May 21, 2019
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Peter Jax, Johannes Boehm, William Redmann
-
Patent number: 10282862
Abstract: Techniques and systems for digital image generation and capture hint data are described. In one example, a request is formed by an image capture device for capture hint data. The request describes a characteristic of an image scene that is to be a subject of a digital image. A communication is received via a network by the image capture device in response to the request. The communication includes capture hint data that is based at least in part on the characteristic. The capture hint data is displayed by a display device of the image capture device. The digital image of the image scene is then captured by the image capture device subsequent to the display of the capture hint data.
Type: Grant
Filed: June 20, 2017
Date of Patent: May 7, 2019
Assignee: Adobe Inc.
Inventors: Abhay Vinayak Parasnis, Oliver I. Goldman
-
Patent number: 10261596
Abstract: Techniques are disclosed for processing a video stream to reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed. The techniques are particularly well-suited for hand-based navigational gesture processing. In one example case, for instance, the techniques are implemented in a computer system wherein initial threshold detection (image disturbance) and optionally user presence (hand image) processing components are proximate to or within the system's camera, and the camera is located in or proximate to the system's primary display. In some cases, image processing, and communication of pixel information between the various processing stages, is suppressed for pixels that lie outside a markered region. In some embodiments, the markered region is aligned with a mouse pad, a designated desk area, or a user input device such as a keyboard. Pixels evaluated by the system can be limited to a subset of the markered region.
Type: Grant
Filed: July 3, 2018
Date of Patent: April 16, 2019
Assignee: Intel Corporation
Inventor: Jeremy Burr
-
Patent number: 10182208
Abstract: An automatic process for producing professional, directed, production-crew-quality video for videoconferencing is described. Rule-based logic is integrated into an automatic process for producing director-quality video for videoconferencing. An automatic process can include a method for composing a display for use in a video system having an active talker video stream and a panoramic view video stream having more than one person in video. The method can include determining a region of interest in a panoramic view video using motion detection and presence sensors, and preparing the panoramic view video by centering the region of interest and by zooming towards the region of interest, based upon the location of persons in the panoramic view video. The method includes determining placement of panoramic view video on a composite display to prevent the panoramic view video overlaying display of an active talker on the active talker video stream.
Type: Grant
Filed: December 12, 2017
Date of Patent: January 15, 2019
Assignee: Polycom, Inc.
Inventors: Alain Elon Nimri, Shu Gao, Stephen Schaefer, Robert Murphy
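The center-and-zoom step this abstract describes can be sketched as computing a crop window: shrink the frame by the zoom factor, center it on the region of interest, and clamp it to the frame bounds. The function signature and box convention are illustrative assumptions:

```python
# Sketch: crop window that centers a region of interest (ROI) with a given
# zoom factor. Boxes are (x, y, width, height) in pixels; names are
# illustrative, not from the patent.

def center_zoom_crop(frame_w, frame_h, roi, zoom):
    """Return (x, y, w, h) of a crop window of the frame, shrunk by
    `zoom` and centered on `roi`, clamped so it stays inside the frame."""
    rx, ry, rw, rh = roi
    cx, cy = rx + rw / 2, ry + rh / 2        # ROI center
    w, h = frame_w / zoom, frame_h / zoom    # zoomed window size
    x = min(max(cx - w / 2, 0), frame_w - w) # clamp to frame
    y = min(max(cy - h / 2, 0), frame_h - h)
    return int(x), int(y), int(w), int(h)
```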
-
Patent number: 10171522
Abstract: A method, computer program product, and system for video commentary is described. A method may comprise providing particular media content to two or more user computing devices. The method may further comprise receiving, from a first user computing device of the two or more user computing devices, a selection to view comments from a second user computing device of the two or more user computing devices. The method may also comprise receiving one or more comments from the second user computing device of the two or more user computing devices. The one or more comments from the second user computing device may be associated with video media content. The method may additionally comprise transmitting the one or more comments from the second user computing device to the first user computing device based upon, at least in part, the selection from the first user computing device to view the comments.
Type: Grant
Filed: January 13, 2012
Date of Patent: January 1, 2019
Assignee: Google LLC
Inventors: Sean Liu, Nikhyl Singhal
-
Patent number: 10004915
Abstract: A compact and economical transcranial magnetic stimulation system is provided which allows a patient to carry out transcranial magnetic stimulation therapy routinely and repeatedly in, for example, his or her house or a neighborhood primary-care medical facility. The system (1) has a magnetic field generating means for generating a magnetic field to be used for providing magnetic stimulation to a specific part of the patient's head. The magnetic field generating means has a magnetic coil (2) for generating a variable magnetic field, a holder (10) for holding the magnetic coil (2), cameras (150a and another) for recognizing respective reference markings or specific portions of the ears of the patient M (tragi (24)), and laser beam oscillators (160a and another). According to the system, the coil (2) is set in a proper posture with respect to the specific part of the patient by aligning the optical axes of the cameras and the laser beam oscillators with the respective tragi (24).
Type: Grant
Filed: October 24, 2012
Date of Patent: June 26, 2018
Assignees: Teijin Pharma Limited, OSAKA UNIVERSITY
Inventors: Youichi Saitoh, Kenji Tojo, Atsushi Asahina
-
Patent number: 10009568
Abstract: A system and method for displaying visual focus points of meeting participants uses an image capture device to generate a real-time graphical representation of a physical meeting space containing collocated meeting participants. Remote display devices display the real-time graphical representation of the physical meeting space. Each remote display device is associated with a remote meeting participant located at a remote location. A viewpoint monitoring mechanism determines a remote participant visual point of focus within the real-time graphical representation. A remote participant simulator located in the physical meeting space has a unique remote participant representation for each remote meeting participant and a remote participant visual point of focus indicator associated with each remote participant representation to simulate the remote participant visual point of focus.
Type: Grant
Filed: April 21, 2017
Date of Patent: June 26, 2018
Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, TECHNISCHE UNIVERSITEIT EINDHOVEN
Inventors: Jason Benjamin Ellis, Thomas D. Erickson, Karin Niemantsverdriet, Bin Xu
-
Patent number: 10002309
Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
Type: Grant
Filed: November 10, 2016
Date of Patent: June 19, 2018
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
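The frame-to-frame association step this abstract describes can be sketched as a greedy nearest-centroid match: each new cluster centroid is linked to the closest existing track within a distance gate, and unmatched centroids start new tracks. The data layout and gating rule are assumptions for illustration, far simpler than the patent's spatio-temporal framework:

```python
# Sketch: associate per-frame cluster centroids with existing tracks by
# nearest centroid within a maximum distance. Names and the greedy
# strategy are illustrative assumptions.

def associate_clusters(tracks, centroids, max_dist):
    """tracks: dict track_id -> last known centroid (x, y).
    centroids: list of (x, y) cluster centroids from the current frame.
    Returns an updated copy of tracks; unmatched centroids get new ids."""
    next_id = max(tracks, default=-1) + 1
    updated = dict(tracks)
    for cx, cy in centroids:
        best, best_d = None, max_dist
        for tid, (tx, ty) in tracks.items():
            d = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = tid, d
        if best is None:
            updated[next_id] = (cx, cy)  # no track close enough: new track
            next_id += 1
        else:
            updated[best] = (cx, cy)     # extend the nearest track
    return updated
```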
-
Patent number: 9961301
Abstract: In a system and method for video communications, a user in an observation zone views a display image rendered on a vertically oriented screen through a two-way mirror in front of the screen. The two-way mirror is vertically angled at substantially 45 degrees, and reflects a part of a side wall in the observation zone. The backdrop for the light-emitting portion of the display image is provided by the reflection that is superimposed by the two-way mirror onto the non-light-emitting portion of the display image. The backdrop provides the light-emitting portion of the display image with a depth relationship that is observable when the user views the display image along a line of vision that extends straight through the two-way mirror to the screen behind the two-way mirror. A camera embedded in the side wall captures video of the user that is reflected by the two-way mirror from the observation zone back into the camera.
Type: Grant
Filed: October 13, 2017
Date of Patent: May 1, 2018
Assignee: Telepresence Technologies, LLC
Inventor: Peter McDuffie White
-
Patent number: 9930269
Abstract: A method of cropping an image in an apparatus having a camera includes displaying an image, identifying a face area in the image if a crop mode is selected, setting and displaying a crop mark including the identified face area, and displaying a crop image by cropping an image in the crop mark if a crop execution is requested.
Type: Grant
Filed: January 2, 2014
Date of Patent: March 27, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Kyunghwa Kim
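The crop-mark step this abstract describes (a crop region that includes the detected face area) can be sketched as expanding the face box by a margin and clamping it to the frame. The margin rule and names are illustrative assumptions:

```python
# Sketch: build a crop mark around a detected face area. Boxes are
# (x, y, width, height); the proportional margin is an assumption.

def crop_mark(face, margin, frame_w, frame_h):
    """Expand a face box by `margin` (fraction of its size) on each side,
    clamped to the frame bounds, and return the resulting crop box."""
    x, y, w, h = face
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(x - mx, 0), max(y - my, 0)
    x1, y1 = min(x + w + mx, frame_w), min(y + h + my, frame_h)
    return x0, y0, x1 - x0, y1 - y0
```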
-
Patent number: 9883138
Abstract: The description relates to remote collaboration via a telepresence experience. One example can include an interactive digital display. The example can also include a virtual user presentation component configured to generate a graphical user interface that includes a virtual representation of a remote user on the interactive digital display. The graphical user interface can be configured to present the remote user in a side-by-side or mirror-image relationship to a local user of the interactive digital display.
Type: Grant
Filed: July 16, 2014
Date of Patent: January 30, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yinpeng Chen, Philip A. Chou, Zhengyou Zhang
-
Patent number: 9881529
Abstract: A display device and an operating method thereof are provided. The display device includes a display module for providing an original image, an optical component, a sensing module, and a control module. The optical component is for projecting a translating image of the original image and includes a first LC GRIN lens array, and a second LC GRIN lens array arranged parallel to the first LC GRIN lens array. The control module is for receiving object information from the sensing module and adjusting the translating image by applying a first bias arrangement to the first LC GRIN lens array and a second bias arrangement to the second LC GRIN lens array according to the object information.
Type: Grant
Filed: June 12, 2015
Date of Patent: January 30, 2018
Assignee: INNOLUX CORPORATION
Inventor: Naoki Sumi
-
Patent number: 9813605
Abstract: Apparatuses, methods, systems, and program products are disclosed for tracking items. An identification module identifies an item using one or more sensors of an information handling device. A location module receives location data for the item in response to identifying the item. A communication module shares the location data with one or more different information handling devices.
Type: Grant
Filed: October 31, 2014
Date of Patent: November 7, 2017
Assignee: Lenovo (Singapore) PTE. LTD.
Inventors: Nathan J. Peterson, John Carl Mese, Russell Speight VanBlon, Rod D. Waltermann, Arnold S. Weksler
-
Patent number: 9794692
Abstract: A method is disclosed for determining a relative orientation of speakers that receive audio signals from a portable audio source device. In an embodiment, a microphone coupled with the portable audio source device receives a first sound from a first speaker and a second sound from a second speaker. An orientation detector determines a volume of at least one of the first and second sounds and the portable audio source device detects a movement of the portable audio source device. The orientation detector detects a variation in the determined volume and determines a relative orientation of the first and second speakers based, at least in part, on the detected movement and the detected variation in the determined volume.
Type: Grant
Filed: April 30, 2015
Date of Patent: October 17, 2017
Assignee: International Business Machines Corporation
Inventors: Kulvir S. Bhogal, Jonathan F. Brunn, Jeffrey R. Hoy, Asima Silva
-
Patent number: 9754526
Abstract: Some embodiments include a mobile device with a light sensor overlaid under a display (e.g., a touchscreen). The mobile device can identify a command to capture an image. The mobile device can adjust at least a target portion of an opaqueness adjustable region of the display directly over the light sensor. The opaqueness adjustable region is capable of transforming from substantially opaque to substantially transparent. The mobile device can capture the image utilizing at least the light sensor while the target portion of the opaqueness adjustable region is substantially transparent.
Type: Grant
Filed: September 30, 2016
Date of Patent: September 5, 2017
Assignee: ESSENTIAL PRODUCTS, INC.
Inventors: David John Evans, V, Xinrui Jiang, Andrew E. Rubin, Matthew Hershenson, Xiaoyu Miao, Joseph Anthony Tate, Jason Sean Gagne-Keats, Rebecca Schultz Zavin
-
Patent number: 9736611
Abstract: A system enhances spatialization in an audio signal at a receiving location. The system applies a phase difference analysis to signals received from an array of spaced apart input devices that convert sound into electrical signals. The system derives spatial or directional information about the relative locations of the sound sources. The converted signals may be mixed using weights derived from the spatial information to generate a multichannel output signal that, when processed by a remote or local audio system, generates, at the receiving location, a representation of the relative locations of the sound sources at the originating location.
Type: Grant
Filed: April 17, 2015
Date of Patent: August 15, 2017
Assignee: 2236008 Ontario Inc.
Inventors: Phillip A. Hetherington, Mark Fallat
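The phase-difference analysis this abstract describes rests on estimating the time (phase) offset between spaced microphones; a common way to do that is cross-correlation over candidate lags. The brute-force time-domain search below is a simplified stand-in for the patent's phase analysis, and all names are illustrative:

```python
# Sketch: estimate the inter-microphone delay (in samples) between two
# signals by maximizing their cross-correlation over a bounded lag range.
# A positive lag means the sound reaches microphone B after microphone A.

def tdoa_samples(sig_a, sig_b, max_lag):
    """Return the lag (in samples) that best aligns sig_a with sig_b."""
    best_lag, best_score = 0, float("-inf")
    n = len(sig_a)
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < len(sig_b):
                score += sig_a[i] * sig_b[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Given the microphone spacing and the speed of sound, the lag converts to an angle of arrival, which is the directional information used to derive the mixing weights.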
-
Patent number: 9712783
Abstract: A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and, independently of detecting faces, motion is detected across the view. It is determined if any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are not both met, the endpoint reframes the view.
Type: Grant
Filed: March 3, 2016
Date of Patent: July 18, 2017
Assignee: Cisco Technology, Inc.
Inventors: Glenn Robert Grimsrud Aarrestad, Rune Øistein Aas, Kristian Tangeland
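The reframing decision described above can be sketched as a predicate: reframe unless motion overlapping the lost face's position was seen before the timeout expired. The box-overlap test and parameter names are illustrative assumptions:

```python
# Sketch of the reframing conditions: keep the current view only if
# detected motion (i) coincides with the lost face position and
# (ii) occurred before the timeout expired. Boxes are (x, y, w, h).

def _overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def should_reframe(face_lost, motion_regions, lost_face_box, timeout_expired):
    """Return True when the endpoint should reframe the view."""
    if not face_lost:
        return False  # all faces still detected: nothing to decide
    motion_at_face = any(_overlaps(m, lost_face_box) for m in motion_regions)
    # Reframe unless both conditions (i) and (ii) hold.
    return not (motion_at_face and not timeout_expired)
```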
-
Patent number: 9672412
Abstract: Provided are methods and apparatus for tracking a head pose with online face template reconstruction. The method comprises the steps of retrieving a plurality of frames of images of the user; comparing each of the retrieved frames with a predetermined face template to determine one or more head poses that are monitored successfully and obtain head pose information of the determined one or more head poses; and reconstructing, during the step of comparing, the face template from the obtained head pose information; wherein the reconstructed face template is compared with subsequently retrieved images such that the head poses of the user are tracked in time.
Type: Grant
Filed: June 24, 2014
Date of Patent: June 6, 2017
Assignee: THE CHINESE UNIVERSITY OF HONG KONG
Inventors: King Ngi Ngan, Songnan Li
-
Patent number: 9665804
Abstract: A method for tracking an object by an electronic device is described. The method includes detecting an object position in an initial frame to produce a detected object position. The method also includes measuring one or more landmark positions based on the detected object position or a predicted object position. The method further includes predicting the object position in a subsequent frame based on the one or more landmark positions. The method additionally includes determining whether object tracking is lost. The method also includes avoiding performing object detection for the subsequent frame in a case that object tracking is maintained.
Type: Grant
Filed: November 12, 2014
Date of Patent: May 30, 2017
Assignee: QUALCOMM Incorporated
Inventors: Michel Adib Sarkis, Yingyong Qi, Magdi Abuelgasim Mohamed
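The detection-skipping loop this abstract describes can be sketched as: run the (expensive) detector only when tracking is lost, otherwise predict the position from the landmarks. Using the landmark centroid as the prediction is an illustrative simplification, not the patent's method:

```python
# Sketch: landmark-based position prediction with detection skipping.
# `detect` stands in for an expensive full-frame object detector.

def predict_position(landmarks):
    """Predict the object position as the centroid of landmark points."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def track_step(landmarks, tracking_lost, detect):
    """One per-frame step: fall back to detection only when tracking is
    lost; otherwise avoid it and use the landmark-based prediction."""
    if tracking_lost:
        return detect()
    return predict_position(landmarks)
```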
-
Patent number: 9646198
Abstract: In an approach to determine a sentiment of an attendee of a video conference, the computer receives a video of an attendee of a video conference and then determines, based at least in part on the video of the attendee, a first sentiment of the attendee. Furthermore, in the approach the computer receives an indication of an attendee activity on a first application and determines, based in part on the attendee activity, whether the first sentiment of the attendee is related to the video conference.
Type: Grant
Filed: December 15, 2014
Date of Patent: May 9, 2017
Assignee: International Business Machines Corporation
Inventors: Hernan A. Cunico, Asima Silva
-
Patent number: 9648061
Abstract: In an approach to determine a sentiment of an attendee of a video conference, the computer receives a video of an attendee of a video conference and then determines, based at least in part on the video of the attendee, a first sentiment of the attendee.
Type: Grant
Filed: August 8, 2014
Date of Patent: May 9, 2017
Assignee: International Business Machines Corporation
Inventors: Hernan A. Cunico, Asima Silva
-
Patent number: 9582895
Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
Type: Grant
Filed: May 22, 2015
Date of Patent: February 28, 2017
Assignees: International Business Machines Corporation, The University of Queensland
Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
-
Patent number: 9582707
Abstract: A three-dimensional pose of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector of the pose of the head relative to a reference pose is determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided along with corresponding depth data to an Extended Kalman filter with states including a rotation matrix and a translation vector associated with the reference pose for the head and a current orientation and a current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.
Type: Grant
Filed: April 25, 2012
Date of Patent: February 28, 2017
Assignee: QUALCOMM Incorporated
Inventors: Piyush Sharma, Ashwin Swaminathan, Ramin Rezaiifar, Qi Xue
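The pose representation in this abstract, a rotation matrix R and translation vector t relative to a reference pose, acts on 3-D points as p' = R p + t. A minimal sketch of applying such a pose (the filtering that estimates R and t is far beyond this snippet, and the names are illustrative):

```python
# Sketch: apply a head pose (rotation matrix R, translation vector t)
# to 3-D points, i.e. p' = R p + t. R is a 3x3 row-major list of lists.

def apply_pose(R, t, points):
    """Transform each 3-D point by the rigid pose (R, t)."""
    out = []
    for p in points:
        q = [sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]
        out.append(q)
    return out
```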
-
Patent number: 9544540
Abstract: Computing devices may implement dynamic display of video communication data. Video communication data for a video communication may be received at a computing device where another application is currently displaying image data on an electronic display. A display location may be determined for the video communication data according to display attributes that are configured by the other application at runtime. Once determined, the video communication data may then be displayed in the determined location. In some embodiments, the video communication data may be integrated with other data displayed on the electronic display for the other application.
Type: Grant
Filed: June 4, 2014
Date of Patent: January 10, 2017
Assignee: Apple Inc.
Inventors: Xiaosong Zhou, Hsi-Jung Wu, Chris Y. Chung, James O. Normile, Joe S. Abuan, Hyeonkuk Jeong, Yan Yang, Gobind Johar, Thomas Jansen
-
Patent number: 9544578
Abstract: A portable electronic equipment (1) has an autostereoscopic display (11), a sensor device (2), and a controller (6, 7). The autostereoscopic display (11) comprises a display panel and an image directing device. The sensor device (2) is configured to capture distance information indicative of a distance of a user from the autostereoscopic display (11) and a direction in which the user is positioned. The controller (6, 7) is coupled to the autostereoscopic display (11) to control the display panel (12) and the image directing device based on the distance information and/or direction information. The controller (6, 7) is configured to compute plural images to be output based on the distance information, to control the display panel to output the computed plural images, and to control the image directing device based on the distance information and/or direction information.
Type: Grant
Filed: January 17, 2012
Date of Patent: January 10, 2017
Assignee: Sony Ericsson Mobile Communications AB
Inventors: Anders Linge, Martin Ek, Jonas Gustavsson
-
Patent number: 9521362
Abstract: A virtual camera pose determiner is configured to determine a position and an orientation of a virtual camera. The position of the virtual camera is determined on the basis of a display position of a displayed representation of a remote participant on a display. The orientation of the virtual camera is determined on the basis of a geometrical relation between the display position of the remote participant on the display, and a position of a local participant. The virtual camera is configured to transmit an image or a sequence of images to the remote participant, so that an image provided by the virtual camera has the view on the local participant as if viewed from the display position. Further embodiments provide a video communication system having a virtual camera pose determiner for providing a virtual camera pose on the basis of the display position and the position of the local participant.
Type: Grant
Filed: May 27, 2014
Date of Patent: December 13, 2016
Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors: Nicole Atzpadin, Ingo Feldmann, Peter Kauff, Oliver Schreer, Wolfgang Waizenegger
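The geometrical relation this abstract describes, the virtual camera sits at the display position and points toward the local participant, reduces to a direction vector between the two positions. A minimal sketch, with coordinate conventions and names assumed for illustration:

```python
# Sketch: orientation of a virtual camera placed at the remote
# participant's display position, aimed at the local participant.
# Positions are 3-D points in a shared coordinate frame.

def virtual_camera_orientation(display_pos, participant_pos):
    """Return the unit vector from the display position toward the
    local participant (the virtual camera's viewing direction)."""
    d = [p - q for p, q in zip(participant_pos, display_pos)]
    norm = sum(c * c for c in d) ** 0.5
    return tuple(c / norm for c in d)
```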
-
Patent number: 9521363
Abstract: A live video conference session can be established between a first device and a second, remotely located device, each having a camera as an input peripheral. The first and second cameras can capture first and second video streams of a first and a second session participant. During the live video conference session, the first and second video streams can be continuously conveyed in real time over a network to displays of each device. The first video stream can be analyzed to programmatically determine that a position of the first participant is non-optimal as seen by the second participant. An eye guide can be presented on a user interface to assist the first participant in focusing their eyes on a new location indicated by the eye guide. If the system subsequently detects improved eye focus with the second participant, the eye guide can be dismissed.
Type: Grant
Filed: May 17, 2016
Date of Patent: December 13, 2016
Inventors: Brian K. Buchheit, Satesh Ramcharitar
-
Patent number: 9521329
Abstract: A display device includes an imaging unit configured to capture an image of an object and to generate image data of the object, a display unit configured to display the image corresponding to the image data generated by the imaging unit, and a display controller configured to cause the display unit to display a sample image in which at least a line of sight of the object has been changed when the imaging unit captures the image of the object. An imaging direction of the imaging unit and a display direction of the display unit are matched. The sample image is an object image in which the line of sight has changed from front to a direction other than the front.
Type: Grant
Filed: February 27, 2015
Date of Patent: December 13, 2016
Assignee: OLYMPUS CORPORATION
Inventors: Osamu Nonaka, Mai Yamaguchi, Yuiko Uemura, Tomomi Uemura, Sachie Yamamoto
-
Patent number: 9497413
Abstract: In an approach to video filtering, a computer receives a first video frame of a presenter from a video feed that includes an audio feed. The computer receives a second video frame. The computer determines whether a difference between the first video frame and the second video frame exceeds a pre-defined threshold. In response to determining that the difference exceeds the pre-defined threshold, the computer determines whether the difference is expected. In response to determining that the difference is expected, the computer discards video data associated with the difference. The computer creates a third video frame based, at least in part, on non-discarded video data.
Type: Grant
Filed: December 2, 2015
Date of Patent: November 15, 2016
Assignee: International Business Machines Corporation
Inventors: Abdullah Q. Chougle, Akash U. Dhoot, Shailendra Moyal
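The threshold-and-discard flow in the abstract can be sketched roughly as follows. The per-pixel list representation, the function name, and the idea of marking "expected" regions by index are all assumptions made for a compact illustration; the patent does not specify this representation.

```python
def filter_frame(prev, curr, threshold, expected_regions):
    """Sketch: compare two frames pixel by pixel; where the change exceeds
    the threshold AND is expected, discard the new data (keep the previous
    value); otherwise keep the new frame's data."""
    out = []
    for i, (p, c) in enumerate(zip(prev, curr)):
        if abs(p - c) > threshold and i in expected_regions:
            out.append(p)   # expected change: discard the differing video data
        else:
            out.append(c)   # keep the new data
    return out

# Pixel 1 changes as expected (discarded); pixel 2 changes unexpectedly (kept).
frame = filter_frame([10, 10, 10], [10, 90, 90], threshold=50, expected_regions={1})
```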
-
Patent number: 9445154
Abstract: First video showing content displayed on a touch sensitive screen is combined with second video from a camera showing a user's interaction with the touch sensitive screen. The second video is filtered (digitally or physically) to prevent the camera from capturing the content of the touch sensitive screen. Combining the first video with the second video creates a combined video containing both a high quality image of the graphical user interface of an application appearing on the touch sensitive screen and how a person operating the application is gesturing and otherwise touching the touch sensitive screen when interacting with the application.
Type: Grant
Filed: July 8, 2014
Date of Patent: September 13, 2016
Assignee: GLANCE NETWORKS, INC.
Inventor: Richard L. Baker
-
Patent number: 9437011
Abstract: A method of estimating the pose of a person's head includes estimating the pose of the head based on content, and generating a three-dimensional (3D) model of the person's face. The method further includes generating pictorial structures of the face based on the estimated pose and the 3D model, and determining a refined pose of the head by locating parts of the face in the pictorial structures.
Type: Grant
Filed: June 3, 2013
Date of Patent: September 6, 2016
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hariprasad Kannan, Anant Vidur Puri
-
Patent number: 9380262
Abstract: The present invention relates to a mobile terminal and a method for operating the same. According to an embodiment of the present invention, a method for operating a mobile terminal includes the steps of: forming an audio beam based on at least one of a photographed image from a camera and motion information from a motion sensor; receiving an audio signal from a speaker through a plurality of microphones; and processing the received audio signal based on the formed audio beam. Thus, convenience of use is improved.
Type: Grant
Filed: September 9, 2013
Date of Patent: June 28, 2016
Assignee: LG ELECTRONICS INC.
Inventor: Ju Yeon Shin
-
Patent number: 9360939
Abstract: A problem addressed by the present invention is to provide a technique for a simulated-experience remote control button for video games that allows the player to experience and react to tactile sensations corresponding to the video game software. According to the present invention, the button portion of a video-game remote control is replaced with a tool and a mechanical device that create tactile sensations corresponding to the content of the video game. The human operator reacts to the created tactile sensation by operating the button with their own strength, which generates a load. The generated load is quantified, the quantified value is set as X, and X is converted into a button operation of the video-game remote control, thereby achieving a simulated-experience remote control button that provides a simulated experience of tactile sensation.
Type: Grant
Filed: July 10, 2014
Date of Patent: June 7, 2016
Inventor: Shinji Nishimura
-
Patent number: 9342759
Abstract: Described is a system for improving object recognition. Object detection results and classification results for a sequence of image frames are received as input. Each object detection result is represented by a detection box, and each classification result is represented by an object label corresponding to the object detection result. A pseudo-tracklet is formed by linking object detection results representing the same object in consecutive image frames. The system determines whether there are any inconsistent object labels or missing object detection results in the pseudo-tracklet. Finally, the object detection results and the classification results are improved by correcting any inconsistent object labels and missing object detection results.
Type: Grant
Filed: March 11, 2014
Date of Patent: May 17, 2016
Assignee: HRL Laboratories, LLC
Inventors: Yang Chen, Changsoo S. Jeong, Deepak Khosla, Kyungnam Kim, Shinko Y. Cheng, Lei Zhang, Alexander L. Honda
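Two pieces of the pseudo-tracklet idea lend themselves to a short sketch: linking detections across consecutive frames (here via intersection-over-union of their boxes, a common association heuristic that the abstract does not itself prescribe) and correcting inconsistent labels along a tracklet (here by majority vote, again an illustrative assumption).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    A high IoU across consecutive frames suggests the same object."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def correct_labels(tracklet_labels):
    """Replace inconsistent labels along one pseudo-tracklet with the
    majority label (an assumed correction rule, for illustration)."""
    majority = max(set(tracklet_labels), key=tracklet_labels.count)
    return [majority] * len(tracklet_labels)

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))
labels = correct_labels(["car", "car", "truck", "car"])
```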
-
Patent number: 9298007
Abstract: Aspects of the present invention relate to methods and systems for imaging, recognizing, and tracking the eye of a user who is wearing a head-worn computer (HWC). Aspects further relate to the processing of images reflected from the user's eye and to controlling displayed content in accordance therewith.
Type: Grant
Filed: March 17, 2014
Date of Patent: March 29, 2016
Assignee: Osterhout Group, Inc.
Inventor: John N. Border
-
Patent number: 9270943
Abstract: Embodiments of the present invention provide a novel system and/or method for performing over-the-network collaborations and interactions between remote end-users. Embodiments of the present invention produce the perceived effect of each user sharing the same physical workspace while each person is actually located in a separate physical environment. In this manner, embodiments of the present invention allow for more seamless interactions between users while relieving them of the burden of using common computer peripheral devices such as mice, keyboards, and other hardware often used to perform such interactions.
Type: Grant
Filed: March 31, 2014
Date of Patent: February 23, 2016
Assignee: Futurewei Technologies, Inc.
Inventors: Jana Ehmann, Liang Zhou, Onur G. Guleryuz, Fengjun Lv, Fengqing Zhu, Naveen Dhar
-
Patent number: 9269012
Abstract: Systems and approaches are provided for tracking an object using multiple tracking processes. By combining multiple lightweight tracking processes, object tracking can be robust, use a limited amount of power, and enable a computing device to respond to input corresponding to the motion of the object in real time. The multiple tracking processes can be run in parallel to determine the position of the object, by selecting the results of the best-performing tracker under certain heuristics or by combining the results of multiple tracking processes in various ways. Further, other sensor data of a computing device can be used to improve the results provided by one or more of the tracking processes.
Type: Grant
Filed: August 22, 2013
Date of Patent: February 23, 2016
Assignee: Amazon Technologies, Inc.
Inventor: David Allen Fotland
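The "select the best-performing tracker" heuristic can be sketched in a few lines. The confidence-score representation and the function name are assumptions; the patent covers many combination strategies beyond this simple selection.

```python
def fuse_trackers(estimates):
    """Sketch: each lightweight tracker reports (position, confidence);
    pick the position from the highest-confidence tracker, one simple
    instance of selecting the best-performing process."""
    return max(estimates, key=lambda e: e[1])[0]

# Three trackers report candidate positions with confidences.
best = fuse_trackers([((10, 20), 0.4), ((11, 19), 0.9), ((50, 50), 0.2)])
```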
-
Patent number: 9258520
Abstract: A mobile terminal includes a communication unit, a display, a camera for obtaining images, and a processor for causing an image to be displayed on the display and for causing images to be communicated via the communication unit. The processor may be further configured to obtain a first image from the camera, obtain a substantial mirror image of the first image to form a second image, display the second image on the display, and communicate the first image to a receiving device via a wireless communication link.
Type: Grant
Filed: April 5, 2013
Date of Patent: February 9, 2016
Assignee: LG ELECTRONICS INC.
Inventor: Eun Young Lee
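The display-mirrored/transmit-original split described above is easy to sketch. Modeling a frame as a list of pixel rows is an assumption for illustration; mirroring is then just reversing each row.

```python
def prepare_views(frame):
    """Sketch: show a mirror image locally (as users expect from a mirror)
    while transmitting the unmirrored frame to the receiving device.
    A frame is modeled as a list of pixel rows."""
    mirrored = [row[::-1] for row in frame]   # horizontal flip for local display
    return mirrored, frame                    # (local display image, transmitted image)

local, sent = prepare_views([[1, 2, 3], [4, 5, 6]])
```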
-
Patent number: 9253443
Abstract: In an approach to video filtering, a computer receives a first video frame of a presenter from a video feed that includes an audio feed. The computer extracts the presenter's face and a background from the first video frame. The computer receives a second video frame. The computer determines whether a difference between the first video frame and the second video frame exceeds a pre-defined threshold. The computer converts the audio feed from speech to text. The computer determines whether the difference between the two frames is expected, based, at least in part, on the converted audio feed. The computer discards video data associated with the difference between the frames. The computer creates a third video frame based, at least in part, on non-discarded video data.
Type: Grant
Filed: March 3, 2015
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventors: Abdullah Q. Chougle, Akash U. Dhoot, Shailendra Moyal
-
Patent number: 9215408
Abstract: A view morphing algorithm is applied to synchronous collections of video images from at least two video imaging devices and, by interpolating between the images, creates a composite image view of the local participant. This composite image approximates what might be seen from a point between the video imaging devices, presenting the image to other video session participants.
Type: Grant
Filed: June 14, 2013
Date of Patent: December 15, 2015
Assignee: APPLIED INVENTION, LLC
Inventors: W. Daniel Hillis, Bran Ferren, Russel Howe
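The interpolation step at the heart of view morphing can be sketched as a linear blend of corresponding feature points from the two camera views. This is only the blending stage; full view morphing also prewarps and postwarps the images with homographies, which is omitted here. The function name and point representation are assumptions.

```python
def morph_points(left_pts, right_pts, alpha=0.5):
    """Sketch: linearly interpolate corresponding points from two
    synchronized views to approximate a viewpoint between the cameras.
    alpha=0.0 gives the left view, alpha=1.0 the right view."""
    return [
        ((1 - alpha) * lx + alpha * rx, (1 - alpha) * ly + alpha * ry)
        for (lx, ly), (rx, ry) in zip(left_pts, right_pts)
    ]

# Midpoint (alpha=0.5) between two matched point sets.
mid = morph_points([(0.0, 0.0), (10.0, 4.0)], [(2.0, 0.0), (14.0, 8.0)])
```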
-
Patent number: 9182935
Abstract: Methods and devices for selectively presenting a user interface on a second screen. More particularly, the method includes changing the display mode of a multiple-screen device from a first screen to a second screen while the device is closed. The change in the display mode may be made in a menu. The menu can be rendered in response to a request. The change to the second screen can be received by the selection of a user interface device in the menu. In response to the selection, the device can render the user interface on the second screen.
Type: Grant
Filed: December 30, 2011
Date of Patent: November 10, 2015
Assignee: Z124
Inventors: Sanjiv Sirpal, Mohammed Selim
-
Patent number: 9111171
Abstract: A method and mobile terminal for correcting the gaze of a user in an image include setting eye outer points that define an eye region of the user in an original image, transforming the set eye outer points to a predetermined reference camera gaze direction, and transforming the eye region of the original image based on the transformed eye outer points.
Type: Grant
Filed: February 1, 2013
Date of Patent: August 18, 2015
Assignees: Samsung Electronics Co., Ltd; Postech Academy-Industry
Inventors: Byung-Jun Son, Dai-Jin Kim, Jong-Ju Shin, In-Ho Choi, Tae-Hwa Hong
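The point-transformation step the abstract names can be illustrated with a toy sketch: detected eye outer points are moved toward reference points corresponding to a camera-facing gaze, after which the eye region would be warped to match. The function, the reference-point pairing, and the `strength` blending parameter are all assumptions for illustration, not the patented method.

```python
def correct_gaze_points(eye_points, reference_points, strength=1.0):
    """Sketch: shift each detected eye outer point toward the
    corresponding reference point for the target gaze direction.
    strength=0.0 leaves points unchanged; 1.0 moves them fully."""
    return [
        (ex + strength * (rx - ex), ey + strength * (ry - ey))
        for (ex, ey), (rx, ry) in zip(eye_points, reference_points)
    ]

# Full-strength correction maps each point onto its reference point.
pts = correct_gaze_points([(0.0, 0.0), (4.0, 2.0)], [(1.0, 0.0), (5.0, 2.0)])
```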