Patent Applications Published on July 10, 2014
  • Publication number: 20140192123
    Abstract: According to an embodiment, an image erasing apparatus includes an erasing unit, an input unit, and a control unit. The erasing unit erases an image formed on a recording medium. The input unit inputs identification data which identifies an apparatus which forms the image on the recording medium. The control unit controls an operation of the erasing unit based on the identification data input by the input unit. According to another embodiment, an image forming apparatus includes an image forming unit which forms an image on a recording medium, the erasing unit, the input unit, and the control unit.
    Type: Application
    Filed: July 15, 2013
    Publication date: July 10, 2014
    Inventor: Tomoaki Kubo
  • Publication number: 20140192124
    Abstract: According to one embodiment, an image forming apparatus having an image forming function to form an image on a recording medium, and an image erasing function to erase an image formed on a recording medium is provided. The image forming apparatus has an erasing unit which erases the image formed on the recording medium under an erasing condition changeable in accordance with whether or not erasable coloring material used in the image formed on the recording medium is predetermined coloring material, when the image erasing function is performed.
    Type: Application
    Filed: October 15, 2013
    Publication date: July 10, 2014
    Applicants: TOSHIBA TEC KABUSHIKI KAISHA, KABUSHIKI KAISHA TOSHIBA
    Inventor: Yasuharu ARIMA
  • Publication number: 20140192125
    Abstract: According to an embodiment, an image erasing apparatus that erases an image formed on a recording medium is provided. The image erasing apparatus includes an erasing unit that erases an image, a reading unit that reads a size of the recording medium, a classification device that classifies the erasing-processed recording medium, and a control unit. The control unit recognizes the size of the erasing-processed recording medium based on data about the size, and controls the classification device to classify each recording medium from which the image has been erased for each different size.
    Type: Application
    Filed: October 15, 2013
    Publication date: July 10, 2014
    Applicants: TOSHIBA TEC KABUSHIKI KAISHA, KABUSHIKI KAISHA TOSHIBA
    Inventor: Yasuharu ARIMA
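
The classification step in publication 20140192125 (recognize each erased sheet's size, then route sheets so that each output receives only one size) can be pictured with the minimal sketch below. Everything here is a hypothetical illustration, not code from the patent: the `Sheet` record, `classify_by_size`, and the tray mapping are all assumed names.

```python
from collections import defaultdict


# Hypothetical record for an erased sheet; the size would come from the reading unit.
class Sheet:
    def __init__(self, sheet_id, size):
        self.sheet_id = sheet_id
        self.size = size  # e.g. "A4", "A3", "LETTER"


def classify_by_size(erased_sheets):
    """Group erased sheets so that each output tray holds a single size."""
    trays = defaultdict(list)
    for sheet in erased_sheets:
        trays[sheet.size].append(sheet)
    return trays


if __name__ == "__main__":
    sheets = [Sheet(1, "A4"), Sheet(2, "A3"), Sheet(3, "A4")]
    for size, tray in classify_by_size(sheets).items():
        print(size, [s.sheet_id for s in tray])
```
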
  • Publication number: 20140192126
    Abstract: According to an embodiment, an image erasing apparatus that erases an image printed on a sheet is provided. When it is determined that a sheet is not reusable after an erasing process of the image, the image erasing apparatus accommodates the sheet in a reject tray in a state where a face with a small information amount of the erased image is directed in a stated direction. When it is determined that the sheet is reusable after the erasing process of the image, the image erasing apparatus accommodates the sheet in a reuse tray in a state where a face with a large information amount of the erased image is directed in the stated direction.
    Type: Application
    Filed: October 15, 2013
    Publication date: July 10, 2014
    Applicants: TOSHIBA TEC KABUSHIKI KAISHA, KABUSHIKI KAISHA TOSHIBA
    Inventor: Yasuharu ARIMA
  • Publication number: 20140192127
    Abstract: According to an embodiment, an image forming apparatus is provided that has an image forming function of forming an image on a recording medium and an erasing function of erasing an image formed on a recording medium. The image forming apparatus includes an image forming unit, a delivery unit, and an image erasing unit. The delivery unit delivers information requesting users to supply the apparatus with a recording medium having an image formed thereon by an erasable color material.
    Type: Application
    Filed: November 26, 2013
    Publication date: July 10, 2014
    Applicants: TOSHIBA TEC KABUSHIKI KAISHA, KABUSHIKI KAISHA TOSHIBA
    Inventor: Yasuharu ARIMA
  • Publication number: 20140192128
    Abstract: Provided are an image erasing apparatus which erases an image printed on a sheet and an image forming apparatus having an image erasing function. The image erasing apparatus and the image forming apparatus each include a readout unit, an erasing unit, and a control unit. The control unit acquires authentication information of a user who uses the apparatuses and allows the erasing unit to erase the image when determining that the acquired authentication information coincides with authentication information read out by the readout unit.
    Type: Application
    Filed: November 26, 2013
    Publication date: July 10, 2014
    Applicants: TOSHIBA TEC KABUSHIKI KAISHA, KABUSHIKI KAISHA TOSHIBA
    Inventor: Yasuharu ARIMA
  • Publication number: 20140192129
    Abstract: A thermal image receiver element has a dry image receiving layer that has a Tg of at least 25° C. and is the outermost layer. The dry image receiving layer has a dry thickness of at least 0.5 µm and up to and including 5 µm. It comprises a water-dispersible release agent and a polymer binder matrix that consists essentially of: (1) a water-dispersible acrylic polymer comprising chemically reacted or chemically non-reacted hydroxyl, phospho, phosphonate, sulfo, sulfonate, carboxy, or carboxylate groups, and (2) a water-dispersible polyester that has a Tg of 30° C. or less. The water-dispersible acrylic polymer is present in an amount of at least 55 weight % and at a dry ratio to the water-dispersible polyester of at least 1:1. The thermal image receiver element can be used to prepare thermal dye images after thermal transfer from a thermal donor element.
    Type: Application
    Filed: February 14, 2014
    Publication date: July 10, 2014
    Applicant: Kodak Alaris Inc.
    Inventor: Teh-Ming Kung
  • Publication number: 20140192130
    Abstract: An optical disk label writing method for writing a label on an optical disc uses a similar writing operation to that used to write data to the disc. The disc has a label side including a laser reactive material for forming the label image, and a tracking format that can be tracked by a writing laser in a similar way to a writing operation. A computer program is provided for converting a label image to a disk image file for writing to the label side.
    Type: Application
    Filed: July 1, 2013
    Publication date: July 10, 2014
    Applicant: Fortium Technologies Ltd.
    Inventors: Anthony Miles, Robert Glyn Miles
  • Publication number: 20140192131
    Abstract: An image forming device includes a photoreceptor drum including a target surface that is scanned in a main scanning direction and a sub-scanning direction, an exposure head including a plurality of light emitting segments aligned in parallel to the main scanning direction, an exposure driving unit which selectively drives the plural light emitting segments, a storing unit which stores a profile where the respective positions of the plural light emitting segments correspond to a correction amount from the main scanning direction toward the sub-scanning direction at every position, and a correcting unit which smoothes a local change of the correction amount in the profile.
    Type: Application
    Filed: January 8, 2014
    Publication date: July 10, 2014
    Applicants: Toshiba Tec Kabushiki Kaisha, Kabushiki Kaisha Toshiba
    Inventor: TAKAHIRO HAMANAKA
  • Publication number: 20140192132
    Abstract: Processes, systems, and devices for the automated scheduling of visits with persons having limited access to communications or limited ability to travel are provided. This involves the receipt of a machine-readable visit request. The visit request is used to schedule either a videoconference or a contact visit using a scheduling database. Systems for the automated arrangement of confidential visits (both in person and via videoconference) and systems for dynamically revising scheduled visits in response to updates from a jail management system are also provided.
    Type: Application
    Filed: June 29, 2012
    Publication date: July 10, 2014
    Applicant: iWebVisit.com, LLC
    Inventors: Robert Clayton Avery, Thomas Edward Viloria
  • Publication number: 20140192133
    Abstract: Systems, apparatus, articles, and methods are described including operations for content aware selective adjusting of motion estimation.
    Type: Application
    Filed: March 28, 2012
    Publication date: July 10, 2014
    Inventors: Kin-Hang Cheung, Ping Liu
  • Publication number: 20140192134
    Abstract: A mobile device user interface method activates a camera module to support a video chat function and acquires an image of a target object using the camera module. In response to detecting a face in the captured image, the facial image data is analyzed to identify an emotional characteristic of the face by identifying a facial feature and comparing the identified feature with a predetermined feature associated with an emotion. The identified emotional characteristic is compared with a corresponding emotional characteristic of previously acquired facial image data of the target object. In response to the comparison, an emotion indicative image is generated and the generated emotion indicative image is transmitted to a destination terminal used in the video chat.
    Type: Application
    Filed: December 31, 2013
    Publication date: July 10, 2014
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jeehye JUNG, Bokun CHOI, Doosuk KANG, Changho LEE, Sae Mee YIM, Euichang JUNG
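
A rough sketch of the comparison loop described in publication 20140192134: classify an emotional characteristic from a facial measurement, compare it with the characteristic from the previously acquired facial image, and generate an emotion-indicative image only when it changes. The threshold values and every function name are assumptions made for illustration, not Samsung's implementation.

```python
# Hypothetical classifier: maps a coarse facial-feature measurement
# (mouth-corner lift in pixels) to an emotion label.
def classify_emotion(mouth_corner_lift: float) -> str:
    if mouth_corner_lift > 3.0:
        return "happy"
    if mouth_corner_lift < -3.0:
        return "sad"
    return "neutral"


def on_new_frame(mouth_corner_lift: float, previous_emotion: str):
    """Return (emotion, indicator); the indicator is generated only on a change."""
    emotion = classify_emotion(mouth_corner_lift)
    indicator = None
    if emotion != previous_emotion:
        indicator = f"<emoticon:{emotion}>"  # stand-in for the generated emotion image
    return emotion, indicator


if __name__ == "__main__":
    state = "neutral"
    for lift in [0.5, 4.2, 4.0, -5.1]:
        state, indicator = on_new_frame(lift, state)
        if indicator:
            print("send to video-chat peer:", indicator)
```
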
  • Publication number: 20140192135
    Abstract: A child-monitoring system includes a parent unit and a child unit. The child-monitoring system is configured to communicate audio and video signals between the parent unit and the child unit through a wireless connection so that the parent unit and the child unit may be maintained in a spaced-apart relation to one another.
    Type: Application
    Filed: January 7, 2014
    Publication date: July 10, 2014
    Inventors: Roger J. Babineau, Helena C. Silva, John Ristuccia
  • Publication number: 20140192136
    Abstract: A video chatting method and system are provided. The method and system describe collection of facial vector data, audio data, and interactive motion information of a user of a first client. The collected data may be transmitted to a second client. The second client, in turn, may generate a virtual avatar model of the user of the first client based on the received data. The second client may further display the virtual avatar model and play the sound in the audio data. The second client may also render the interactive motion information and facial data information of a user of the second client, and generate and display a virtual avatar model of the user of the second client. The provided method and system may decrease the amount of data transferred over the network, which may keep the data transmission rate during video communication high enough for smooth operation.
    Type: Application
    Filed: February 22, 2013
    Publication date: July 10, 2014
    Inventors: Shang Yu, Feng Rao, Yang Mo, Jun Qiu, Fei Wang
  • Publication number: 20140192137
    Abstract: A method and an apparatus for obtaining image data for video communication in an electronic device are provided. In an embodiment, the method and apparatus provide a video communication function that simultaneously uses image data obtained via a plurality of cameras. In the method, first image data is obtained using a first camera. Second image data is obtained using a second camera. The first image data is merged with the second image data. The merged image data is transmitted to a second electronic device. Other embodiments are possible.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 10, 2014
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Hyun-Kyoung Kim, Dae-Sung Kim, So-Ra Kim, Hang-Kyu Park, Seung-Kyung Lim
  • Publication number: 20140192138
    Abstract: A method for displaying information in a videoconference is disclosed. Video information from a first endpoint of the videoconference may be received. The video information may include an image of a participant at the first endpoint. Participant information for the participant (e.g., name, phone number, job title, etc.) may also be received. The video information and the participant information for the participant may be displayed together on a display screen at a second endpoint of the videoconference.
    Type: Application
    Filed: March 10, 2014
    Publication date: July 10, 2014
    Applicant: Logitech Europe S.A.
    Inventor: Michael L. Kenoyer
  • Publication number: 20140192139
    Abstract: The disclosure relates to a method and system for audio/video communication, and a client. The method includes setting up a connection with a server for audio/video communication and opening multiple windows for the audio/video communication; obtaining an enabling instruction for enabling audio/video communication of any one of the multiple windows for audio/video communication; and disabling audio/video communication for the other windows for audio/video communication amongst the multiple windows according to the enabling instruction.
    Type: Application
    Filed: March 13, 2014
    Publication date: July 10, 2014
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zhongnan Li, Pu Wang, Xiaoyu Liu, Jiajun Chen
  • Publication number: 20140192140
    Abstract: Various embodiments provide an interactive, shared, story-reading experience in which stories can be experienced from remote locations. Various embodiments enable augmentation or modification of audio and/or video associated with the story-reading experience. This can include augmentation and modification of a reader's voice, face, and/or other content associated with the story as the story is read.
    Type: Application
    Filed: January 7, 2013
    Publication date: July 10, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Alan W. Peevers, John C. Tang, Nizamettin Gok, Gina Danielle Venolia, Kori Inkpen Quinn, Nitin Khanna, Simon Andrew Longbottom, Kurt A. Thywissen, Koray Can Oztekin, Vijay Chandrasekaran
  • Publication number: 20140192141
    Abstract: A computer implemented method is disclosed, the method including but not limited to detecting an event of interest in video conference data for a plurality of video conference participants and notifying an end user of the event of interest. A computer readable medium is also disclosed containing a computer program for performing the method. A computer implemented method is also disclosed for receiving, at an end user device, a notification of an event of interest in a video teleconference, the method including but not limited to receiving at the end user device, from a server, a notification indicating a detection of the event of interest in video conference data from the video teleconference for a plurality of video conference participants; and sending data from the end user device to the server requesting a transcription of comments from a speaker in the video teleconference.
    Type: Application
    Filed: March 10, 2014
    Publication date: July 10, 2014
    Applicant: AT&T Intellectual Property I, LP
    Inventors: Lee Begeja, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky, Andrea Basso, David Crawford Gibbon, Sumit Kumar
  • Publication number: 20140192142
    Abstract: A videoconference may be initiated between a plurality of endpoints. At least one of the endpoints may be coupled to a recording server, which may be configured to record the videoconference. A configuration may be selected (e.g., automatically or manually) for performing the recording. The endpoint (e.g., acting as an MCU) may transmit information to endpoints and may transmit recording information to the recording server. The recording information may be different from the videoconference information. For example, it may be in a “streaming friendly” format, at a different bit rate, encoded differently, have different inputs, etc. The manner in which the videoconference is stored and/or recorded may be based on the selected configuration. Clients may be configured to receive and display the videoconference from the recording server and may be configured to change the provided layout to different layouts, e.g., based on user input.
    Type: Application
    Filed: March 10, 2014
    Publication date: July 10, 2014
    Applicant: LOGITECH EUROPE S.A.
    Inventors: Ashish Goyal, Binu Kaiparambil Shanmukhadas, Vivek Wamorkar, Keith C. King, Stefan F. Slivinski, Raphael Anuar, Boby S. Pullamkottu, Sunil George
  • Publication number: 20140192143
    Abstract: A system is disclosed including but not limited to a processor in data communication with a non-transitory computer readable medium; a computer program stored in the computer readable medium, the computer program including but not limited to instructions to send from a first client device to a server, data indicating a first list designating a first group of video conference participants' end user devices' addresses; and instructions to send from the client device to the server, first video conference data, the first video conference data to be sent from the server over a single video conference channel in the video system to the first group of video conference participants' end user devices' addresses, wherein each of a plurality of groups of end user devices receives a different one of a plurality of video conference data streams over a single video conference channel.
    Type: Application
    Filed: March 12, 2014
    Publication date: July 10, 2014
    Applicant: AT&T Intellectual Property I, LP
    Inventor: Edward A. Walter
  • Publication number: 20140192144
    Abstract: One aspect of the present invention provides a simple, cost-effective, efficient solution directed to generating the source material for still panoramic images. The precision optical alignment among all the mounted lenses, provided by the precision rectangular mounting rig, greatly reduces or eliminates stitching errors. Stitching errors often result in noticeable defects in the final image which require human technical assistance to remedy (if the defect is of the repairable type). Accurate, error-free source material enables virtually full automation of the panoramic imaging process, wherein the end product is high quality and quickly achieved.
    Type: Application
    Filed: January 4, 2014
    Publication date: July 10, 2014
    Inventor: Patrick A. St. Clair
  • Publication number: 20140192145
    Abstract: A system and method are presented for estimating the orientation of a panoramic camera mounted on a vehicle relative to the vehicle coordinate frame. An initial pose estimate of the vehicle is determined based on global positioning system data, inertial measurement unit data, and wheel odometry data of the vehicle. Image data from images captured by the camera is processed to obtain one or more tracks, each track including a sequence of matched feature points stemming from a same three-dimensional location. A correction parameter determined from the initial pose estimate and tracks can then be used to correct the orientations of the images captured by the camera. The correction parameter can be optimized by deriving a correction parameter for each of a multitude of distinct subsequences of one or more runs. Statistical analysis can be performed on the determined correction parameters to produce robust estimates.
    Type: Application
    Filed: March 12, 2014
    Publication date: July 10, 2014
    Applicant: GOOGLE INC.
    Inventors: Dragomir Anguelov, Daniel Joseph Filip
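
The final step of publication 20140192145 derives a correction parameter from each of many distinct subsequences of a run and combines them statistically into a robust estimate. The sketch below illustrates only that combination step; the per-subsequence solver is stubbed, and using the median as the robust statistic is an assumption rather than the patent's stated choice.

```python
import statistics


def estimate_correction_for_subsequence(tracks):
    """Placeholder: would solve for the camera-to-vehicle yaw offset (radians)
    from the matched feature tracks of one subsequence of a run."""
    ...


def robust_camera_orientation(subsequence_estimates):
    """Combine per-subsequence yaw-offset estimates into one robust value."""
    return statistics.median(subsequence_estimates)


if __name__ == "__main__":
    # Pretend three subsequences produced these yaw offsets (radians);
    # the last one is an outlier caused by a poor initial pose estimate.
    estimates = [0.021, 0.019, 0.150]
    print("robust yaw correction:", robust_camera_orientation(estimates))
```
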
  • Publication number: 20140192146
    Abstract: Provided is an apparatus and method for displaying a hologram image that may display a hologram image created by tracking a position of a pupil of a user using an acquired user image, tracking a position of a light source of a reflection hologram image that is reflected from the appearance of the user, and correcting a position of a light source of a display hologram image based on the position of the pupil of the user.
    Type: Application
    Filed: December 2, 2013
    Publication date: July 10, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Min Sik PARK, Kyung Ae MOON, Jin Woong KIM
  • Publication number: 20140192147
    Abstract: A system for generating automatically tracking mattes that rapidly integrates live action and virtual composite images.
    Type: Application
    Filed: March 13, 2014
    Publication date: July 10, 2014
    Inventors: NEWTON ELIOT MACK, PHILIP R. MASS
  • Publication number: 20140192148
    Abstract: Methods (700, 900) for providing information from an encoder (220) to a decoder (230) concerning a spatial validity range, at which view synthesis of an image at a virtual camera position can be performed with sufficient visual quality, based on a view of at least one real camera (210-1) comprised in a set of real cameras (210-1, 210-2, 210-3, 210-4). The methods (700, 900) comprise determining (701) the spatial validity range of the at least one real camera (210-1), which spatial validity range specifies for the decoder (230) what information to use for synthesising the image of the virtual camera position. Also, the determined (701) spatial validity range is transmitted (706) to the decoder (230).
    Type: Application
    Filed: May 24, 2012
    Publication date: July 10, 2014
    Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
    Inventors: Apostolos Georgakis, Andrey Norkin, Thomas Rusert
  • Publication number: 20140192149
    Abstract: A device obtains, from a bitstream that includes an encoded representation of the video data, a non-nested Supplemental Enhancement Information (SEI) message that is not nested within another SEI message in the bitstream. Furthermore, the device determines a layer of the bitstream to which the non-nested SEI message is applicable. The non-nested SEI message is applicable to layers for which video coding layer (VCL) network abstraction layer (NAL) units of the bitstream have layer identifiers equal to a layer identifier of a SEI NAL unit that encapsulates the non-nested SEI message. A temporal identifier of the SEI NAL unit is equal to a temporal identifier of an access unit containing the SEI NAL unit. Furthermore, the device processes, based in part on one or more syntax elements in the non-nested SEI message, video data of the layer of the bitstream to which the non-nested SEI message is applicable.
    Type: Application
    Filed: September 25, 2013
    Publication date: July 10, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Ye-Kui Wang, Ying Chen, Adarsh Krishnan Ramasubramonian
  • Publication number: 20140192150
    Abstract: An image processing device (10a) that receives a plurality of individual images including at least two images that form a stereoscopic image and generates a display image displaying the plurality of input individual images simultaneously on a display unit. The image processing device includes an image conversion unit (111) that converts at least one image among the input images that form the stereoscopic image into a planar image, and an image generation unit (151) that generates the display image by synthesizing the planar image that has been converted by the image conversion unit (111) and an image among the plurality of input images that has not been converted into a planar image by the image conversion unit (111).
    Type: Application
    Filed: May 30, 2012
    Publication date: July 10, 2014
    Applicant: SHARP KABUSHIKI KAISHA
    Inventor: Yuhji Tanaka
  • Publication number: 20140192151
    Abstract: Techniques for encapsulating video streams containing multiple coded views in a media file are described herein. In one example, a method includes parsing a track of video data, wherein the track includes one or more views. The method further includes parsing information to determine whether the track includes only texture views, only depth views, or both texture and depth views. Another example method includes composing a track of video data, wherein the track includes one or more views and composing information that indicates whether the track includes only texture views, only depth views, or both texture and depth views.
    Type: Application
    Filed: December 20, 2013
    Publication date: July 10, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Ye-Kui Wang, Ying Chen
  • Publication number: 20140192152
    Abstract: Techniques for encapsulating video streams containing multiple coded views in a media file are described herein. In one example, a method includes parsing a track of video data, wherein the track includes one or more views. The method further includes parsing information to determine whether a texture view or a depth view of a reference view is required for decoding at least one of the one or more views in the track. Another example method includes composing a track of video data, wherein the track includes one or more views and composing information that indicates whether a texture view or a depth view of a reference view is required for decoding at least one of the one or more views in the track.
    Type: Application
    Filed: December 20, 2013
    Publication date: July 10, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Ye-Kui Wang, Ying Chen
  • Publication number: 20140192153
    Abstract: Techniques for encapsulating video streams containing multiple coded views in a media file are described herein. In one example, a method includes parsing a track of multiview video data, wherein the track includes at least one depth view. The method further includes parsing information to determine a spatial resolution associated with the depth view, wherein decoding the spatial resolution does not require parsing of a sequence parameter set of the depth view. Another example method includes composing a track of multiview video data, wherein the track includes the one or more views. The example method further includes composing information to indicate a spatial resolution associated with the depth view, wherein decoding the spatial resolution does not require parsing of a sequence parameter set of the depth view.
    Type: Application
    Filed: December 20, 2013
    Publication date: July 10, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Ye-Kui Wang, Ying Chen
  • Publication number: 20140192154
    Abstract: A method and apparatus for decoding the depth map of multi-view video data are provided. The method includes splitting a block of a restored multi-view color video frame into partitions based on pixel values of the block of the prediction-encoded and restored multi-view color video frame; obtaining a parameter indicating correlation between block partitions of the multi-view color video frame and block partitions of the depth map frame using peripheral pixel values of the block partitions of the multi-view color video frame and peripheral pixel values of the block partitions of the depth map frame corresponding to the block partitions of the multi-view color video frame with respect to each of the block partitions of the restored multi-view color video frame; and obtaining prediction values of corresponding block partitions of the depth map frame from the block partitions of the restored multi-view color video frame using the obtained parameter.
    Type: Application
    Filed: August 9, 2012
    Publication date: July 10, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seung-soo Jeong, Byeong-doo Choi, Jeong-hoon Park
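
One way to picture the parameter of publication 20140192154 is as a simple linear relation between reconstructed color values and depth values, fitted on the peripheral pixels of a partition and then applied inside it. The least-squares fit below is a plausible instantiation chosen for brevity, not necessarily the exact model used in the patent.

```python
import numpy as np


def fit_color_to_depth(peripheral_color, peripheral_depth):
    """Fit depth ~ a * color + b on the peripheral pixels of one partition."""
    a, b = np.polyfit(peripheral_color, peripheral_depth, deg=1)
    return a, b


def predict_depth_partition(partition_color, a, b):
    """Predict a depth-map partition from the corresponding restored color partition."""
    return a * partition_color + b


if __name__ == "__main__":
    # Peripheral pixel pairs (color luma, depth) around one block partition.
    color = np.array([60.0, 70.0, 80.0, 90.0])
    depth = np.array([120.0, 118.0, 115.0, 113.0])
    a, b = fit_color_to_depth(color, depth)
    block = np.array([[65.0, 75.0], [85.0, 95.0]])
    print(predict_depth_partition(block, a, b))
```
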
  • Publication number: 20140192155
    Abstract: A method and apparatus for encoding multi-view video data and a method and apparatus for decoding multi-view video data are provided. The method of encoding multi-view video data includes obtaining a multi-view color video frame and a depth map frame corresponding to the multi-view color video frame, prediction-encoding the multi-view color video frame, and prediction-encoding the depth map frame, based on a result of prediction-encoding the multi-view color video frame.
    Type: Application
    Filed: August 9, 2012
    Publication date: July 10, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Byeong-doo Choi, Seung-soo Jeong, Jeong-Hoon Park
  • Publication number: 20140192156
    Abstract: A stereo-image processing apparatus capable of adaptively converting, regarding stereo vision, a parallax distribution of a stereo image in accordance with human visual performance is provided. The stereo-image processing apparatus includes a continuity detection unit (31) that detects a parallax continuous region on the basis of the discontinuity of parallax values in a stereo image and a conversion processing unit (32) that performs processing for enhancing the parallax gradient of the parallax continuous region.
    Type: Application
    Filed: August 24, 2012
    Publication date: July 10, 2014
    Applicants: KOCHI UNIVERSITY OF TECHNOLOGY, SHARP KABUSHIKI KAISHA
    Inventors: Hisao Kumai, Ikuko Tsubaki, Mikio Seto, Hiroaki Shigemasu
  • Publication number: 20140192157
    Abstract: In an example, a method of decoding video data includes determining whether a reference index for a current block corresponds to an inter-view reference picture, and when the reference index for the current block corresponds to the inter-view reference picture, obtaining, from an encoded bitstream, data indicating a view synthesis prediction (VSP) mode of the current block, where the VSP mode for the reference index indicates whether the current block is predicted with view synthesis prediction from the inter-view reference picture.
    Type: Application
    Filed: January 9, 2014
    Publication date: July 10, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Ying Chen, Ye-Kui Wang, Li Zhang
  • Publication number: 20140192158
    Abstract: The description relates to stereo image matching to determine depth of a scene as captured by images. More specifically, the described implementations can involve a two-stage approach where the first stage can compute depth at highly accurate but sparse feature locations. The second stage can compute a dense depth map using the first stage as initialization. This improves accuracy and robustness of the dense depth map.
    Type: Application
    Filed: January 4, 2013
    Publication date: July 10, 2014
    Applicant: Microsoft Corporation
    Inventors: Oliver Whyte, Adam G. Kirk, Shahram Izadi, Carsten Rother, Michael Bleyer, Christoph Rhemann
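
The two-stage structure in publication 20140192158 (accurate but sparse depths first, then a dense pass initialized from them) is outlined below. Both stages are trivial placeholders; the function names and the nearest-neighbour initialization are assumptions used only to show how the first stage seeds the second.

```python
import numpy as np


def sparse_stage(left, right):
    """Stage 1 placeholder: return (row, col, depth) triples at a few
    high-confidence feature locations."""
    return [(10, 12, 2.0), (40, 50, 3.5), (70, 20, 1.2)]


def dense_stage(left, right, sparse_depths):
    """Stage 2 placeholder: initialize every pixel from the nearest sparse
    sample; a real system would then refine this with a dense matcher."""
    h, w = left.shape
    dense = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            nearest = min(sparse_depths, key=lambda s: (s[0] - r) ** 2 + (s[1] - c) ** 2)
            dense[r, c] = nearest[2]
    return dense


if __name__ == "__main__":
    left = np.zeros((80, 60))
    right = np.zeros((80, 60))
    depth_map = dense_stage(left, right, sparse_stage(left, right))
    print(depth_map[10, 12], depth_map[70, 20])
```
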
  • Publication number: 20140192159
    Abstract: Apparatus, systems, and methods may operate to receive a real image or real images of a coverage area of a surveillance camera. Building Information Model (BIM) data associated with the coverage area may be received. A virtual image may be generated using the BIM data. The virtual image may include at least one three-dimensional (3-D) graphic that substantially corresponds to the real image. The virtual image may be mapped with the real image. Then, the surveillance camera may be registered in a BIM coordination system using an outcome of the mapping.
    Type: Application
    Filed: June 14, 2011
    Publication date: July 10, 2014
    Applicant: METROLOGIC INSTRUMENTS, INC.
    Inventors: Henry Chen, Xiaoli Wang, Hao Bai, Saad J Ros, Tom Plocher
  • Publication number: 20140192160
    Abstract: A three-dimensional image sensing device includes a light source, a sensing module, and a signal processing module. The sensing module includes a pixel array, a control unit, and a light source driver. The light source generates flashing light with a K multiple of a frequency of flicker noise or a predetermined frequency. The pixel array samples the flashing light to generate a sampling result. The control unit executes an image processing on the sampling result to generate a spectrum. The light source driver drives the light source according to the K multiple of the frequency or the predetermined frequency. The signal processing module generates the K multiple of the frequency according to the spectrum, or outputs the predetermined frequency to the light source driver, and generates depth information according to a plurality of first images/a plurality of second images during turning-on/turning-off of the light source included in the sampling result.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 10, 2014
    Applicant: EMINENT ELECTRONIC TECHNOLOGY CORP. LTD.
    Inventors: TOM CHANG, Kao-Pin Wu, Kun-Huang Tsai, Shang-Ming Hung, Cheng-Ta Chuang, Chih-Jen Fang, Tseng Kuo-Tsai
  • Publication number: 20140192161
    Abstract: An apparatus and method may be used to create images, e.g., three-dimensional images, based on received radio-frequency (RF), e.g., millimeter wave, signals carrying image data. The RF signals may be modulated onto optical carrier signals, and the resulting modulated optical signals may be cross-correlated. The resulting cross-correlations may be used to extract image data that may be used to generate three-dimensional images.
    Type: Application
    Filed: January 8, 2014
    Publication date: July 10, 2014
    Applicant: PHASE SENSITIVE INNOVATIONS, INC.
    Inventors: Janusz Murakowski, Garrett Schneider, Shouyuan Shi, Christopher A. Schuetz, Dennis W. Prather
  • Publication number: 20140192162
    Abstract: After AE/AF/AWB operation, a subject distance is calculated for each pixel, and a histogram which shows the distance distribution is created based thereon. The class with the highest frequency which is the peak at the side nearer than the focus distance is searched based on the histogram, and a rectangular area Ln which includes pixels which have a subject distance within the searched range is set. The average parallax amount Pn which is included in the rectangular area Ln is calculated and it is confirmed whether Pn is within a range of parallax amounts a and a−t1. In a case where Pn is not within the range of parallax amounts a and a−t1 which is set in advance, the aperture value is adjusted such that Pn is within the range of the parallax amounts a and a−t1.
    Type: Application
    Filed: March 12, 2014
    Publication date: July 10, 2014
    Applicant: FUJIFILM Corporation
    Inventors: Takashi AOKI, Youichi SAWACHI
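
A minimal sketch of the check described in publication 20140192162: histogram the per-pixel subject distances, take the most frequent class on the near side of the focus distance, and test whether the average parallax of that region lies inside the preset band from a − t1 to a. All numeric values and helper names are illustrative assumptions.

```python
import numpy as np


def nearest_peak_class(distances, focus_distance, bins=16):
    """Histogram the per-pixel subject distances and return the most frequent
    class (bin range) that lies entirely nearer than the focus distance."""
    counts, edges = np.histogram(distances, bins=bins)
    near = [i for i in range(bins) if edges[i + 1] <= focus_distance]
    peak = max(near, key=lambda i: counts[i])
    return edges[peak], edges[peak + 1]


def parallax_within_band(avg_parallax, a, t1):
    """True if the average parallax Pn lies inside the preset band [a - t1, a]."""
    return (a - t1) <= avg_parallax <= a


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    distances = np.concatenate([rng.normal(1.5, 0.1, 2500),   # near subject
                                rng.normal(3.0, 0.2, 2000)])  # focused background
    lo, hi = nearest_peak_class(distances, focus_distance=3.0)
    print("peak near-side class:", lo, hi)
    print("adjust aperture?", not parallax_within_band(avg_parallax=0.9, a=1.0, t1=0.3))
```
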
  • Publication number: 20140192163
    Abstract: An imaging device generates distance information for each object in a plurality of images having the same viewpoint. During the generation, the imaging device detects distances from the viewpoint to some of the objects intermittently, and estimates the distances from the viewpoint to the other objects using the detected distances. The imaging device extracts object areas from the images, estimates the correspondence between the object areas of a target image targeted for distance estimation and the object areas of a reference image having been subjected to distance detection by a comparison therebetween, and allocates, for each of the object areas of the target image, the distance information of the corresponding object area of the reference image.
    Type: Application
    Filed: August 21, 2012
    Publication date: July 10, 2014
    Inventor: Kenji Shimizu
  • Publication number: 20140192164
    Abstract: A system and method for determining individualized depth information in an augmented reality scene are described. The method includes receiving a plurality of images of a physical area from a plurality of cameras, extracting a plurality of depth maps from the plurality of images, generating an integrated depth map from the plurality of depth maps, and determining individualized depth information corresponding to a point of view of the user based on the integrated depth map and a plurality of position parameters.
    Type: Application
    Filed: January 7, 2013
    Publication date: July 10, 2014
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Hian-Kun TENN, Yao-Yang TSAI, Ko-Shyang WANG, Po-Lung CHEN
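
The integration step in publication 20140192164 (fuse the depth maps extracted from several cameras into one integrated depth map before deriving the user's individualized view) might look like the sketch below. Per-pixel median fusion of pre-registered maps is an assumption made only to keep the example small.

```python
import numpy as np


def integrate_depth_maps(depth_maps):
    """Fuse per-camera depth maps (already registered to a common grid) into
    one integrated depth map by taking the per-pixel median."""
    return np.median(np.stack(depth_maps, axis=0), axis=0)


if __name__ == "__main__":
    d1 = np.array([[1.0, 2.0], [3.0, 4.0]])
    d2 = np.array([[1.1, 2.2], [2.9, 9.0]])   # 9.0 is a spurious reading
    d3 = np.array([[0.9, 2.1], [3.1, 4.1]])
    print(integrate_depth_maps([d1, d2, d3]))
```
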
  • Publication number: 20140192165
    Abstract: An encoder and a method therein for providing an update message relating to at least one of camera parameters and depth parameters “the parameters”, a decoder and a method therein for decoding the update message, a first device comprising the encoder and a second device comprising the decoder are provided. The parameters enable the decoder to synthesize a first view for a first camera position based on a second view for a second camera position and the parameters of the second view. The encoder detects which of the parameters are changing over time. Next, the encoder modularizes the parameters into a respective module. Furthermore, the encoder encodes each respective module into the update message and sends the update message to the decoder. Next, the decoder decodes each respective module of the update message to obtain the parameters which are to be updated.
    Type: Application
    Filed: June 1, 2012
    Publication date: July 10, 2014
    Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
    Inventors: Andrey Norkin, Zhuangfei Wu, Thomas Rusert
  • Publication number: 20140192166
    Abstract: Optical systems utilize waveplates to simultaneously encode information for increasing image depth of field and for providing a depth map of the imaged object or sample. These waveplates are configured to result in a focus-invariant point spread function in one focal region, and to result in point spread functions that vary as a function of range within the imaged object in a different focal region. For example, a basic compound microscope might have a specially shaped waveplate inserted at the back aperture plane of the microscope objective to manipulate the phase of the wavefront. An image formed on one side of the plane of best focus is focus invariant, and is brought into focus by a restoring algorithm. An image formed on the other side of the plane of best focus captures point spread functions comprising rings that vary with depth within the imaged object.
    Type: Application
    Filed: March 14, 2013
    Publication date: July 10, 2014
    Applicant: The Regents of the University of Colorado, a body corporate
  • Publication number: 20140192167
    Abstract: In the stereoscopic imaging device including a single imaging optical system, lens information (focal length, F-stop range) is acquired (step S18), and a parallax priority program diagram (F-stop is fixed) which uses a lens F-stop and a focal length within a range capable of obtaining a minimum parallax amount or more is set (step S20). In a first mode, exposure conditions including the F-stop capable of carrying out the stereoscopic imaging with a parallax amount equal to or more than the minimum parallax amount are calculated using the set parallax priority program diagram to set exposure during main imaging (steps S26 and S28). In a second mode, it is determined whether or not the focal length and the F-stop which are set by a user are within a range equal to or more than the minimum parallax amount, and the user is notified of the determination result (steps S38 to S42).
    Type: Application
    Filed: March 13, 2014
    Publication date: July 10, 2014
    Applicant: FUJIFILM Corporation
    Inventors: Junji HAYASHI, Yoshihiro SATODATE
  • Publication number: 20140192168
    Abstract: According to an embodiment, an image processing device provides a stereoscopic image to be displayed on a display and includes an acquisition unit, first and second calculators, a selector, and a determiner. The acquisition unit acquires observer information. The first calculator calculates a viewpoint vector pointing from one to the other of the observer position of each observer and the display, and an eye vector pointing from one to the other eye of the observer, based on the observer information. The second calculator calculates a weight indicating a degree of desirability of stereoscopic viewing for each observer when the stereoscopic image according to one of the display parameters is displayed on the display, by using the viewpoint vector and the eye vector of the observer. The selector selects one display parameter based on the weight. The determiner determines the stereoscopic image according to the selected display parameter.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 10, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kenichi SHIMOYAMA, Nao Mishima, Takeshi Mita, Ryusuke Hirai
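
The selection step in publication 20140192168 scores each candidate display parameter by how well every tracked observer could perceive the stereoscopic image and then picks the best-scoring parameter. In the sketch below the per-observer weights are given as plain numbers; in the device itself they would be computed from the viewpoint and eye vectors.

```python
def select_display_parameter(weights_per_parameter):
    """weights_per_parameter maps a candidate parameter id to the list of
    per-observer weights (degree of desirability of stereoscopic viewing);
    the parameter that serves all observers best overall is returned."""
    return max(weights_per_parameter, key=lambda p: sum(weights_per_parameter[p]))


if __name__ == "__main__":
    # Hypothetical weights for two observers under three candidate parameters.
    candidates = {
        "param_A": [0.9, 0.2],
        "param_B": [0.6, 0.7],
        "param_C": [0.4, 0.4],
    }
    print("selected:", select_display_parameter(candidates))  # -> param_B
```
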
  • Publication number: 20140192169
    Abstract: According to an embodiment, a stereoscopic image display device includes a display, an optical element, a detector, a calculator, a deriver, and an applier. The display has a display surface including pixels arranged thereon. The optical element has a refractive-index distribution that changes according to an applied voltage. The detector detects a viewpoint position representing a position of a viewer. The calculator calculates a gravity point of the viewpoint positions when a plurality of viewpoint positions are detected. The deriver derives a drive mode according to the gravity point, where the drive mode is indicative of a voltage to be applied to the optical element. The applier applies a voltage to the optical element according to the drive mode such that a visible area within which a display object displayed on the display is stereoscopically viewable is set at the gravity point.
    Type: Application
    Filed: March 11, 2014
    Publication date: July 10, 2014
    Inventors: Masako KASHIWAGI, Ayako Takagi, Shinichi Uehara, Masahiro Baba
  • Publication number: 20140192170
    Abstract: A method for reducing cross-talk in a 3D display is disclosed. The cross-talk in the 3D display is characterized with a plurality of test signals to generate a forward transformation model. Input image signals are applied to the forward transformation model to generate modeled signals. The modeled signals are applied to a visual model to generate a visual measure. The input signals are modified based on the visual measure.
    Type: Application
    Filed: August 25, 2011
    Publication date: July 10, 2014
    Inventors: Ramin Samadani, Nelson Liang An Chang
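
The feedback structure of publication 20140192170 (push the input through a forward cross-talk model, evaluate the result with a visual model, and modify the input based on that measure) can be written as a small iterative loop. The linear leakage model, the mean-squared visual measure, and the update rule below are assumptions chosen to keep the sketch short.

```python
import numpy as np


def forward_crosstalk(left, right, leak=0.1):
    """Toy forward model (as if fitted from test signals): each eye's image
    leaks a fixed fraction into the other eye."""
    return left + leak * right, right + leak * left


def visual_measure(modeled, target):
    """Toy visual model: mean squared visibility of the unwanted residue."""
    return float(np.mean((modeled - target) ** 2))


def reduce_crosstalk(left, right, steps=50, rate=0.5):
    """Iteratively pre-compensate the inputs so the modeled output approaches
    the intended left/right images."""
    comp_l, comp_r = left.copy(), right.copy()
    for _ in range(steps):
        model_l, model_r = forward_crosstalk(comp_l, comp_r)
        comp_l -= rate * (model_l - left)
        comp_r -= rate * (model_r - right)
    return comp_l, comp_r


if __name__ == "__main__":
    left = np.full((2, 2), 0.8)
    right = np.full((2, 2), 0.2)
    comp_l, comp_r = reduce_crosstalk(left, right)
    out_l, out_r = forward_crosstalk(comp_l, comp_r)
    print("residual visibility:", visual_measure(out_l, left) + visual_measure(out_r, right))
```
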
  • Publication number: 20140192171
    Abstract: A method for displaying three-dimensional integral images using a mask and time division multiplexing which is configured in such a way that a three-dimensional image is displayed in a space as an element image obtained from a three-dimensional object is passed through a lenslet and a mask, the mask consisting of a blocking region through which an element image does not pass and a transmission region through which an element image passes, for thereby displaying three-dimensional images. The present invention is advantageous in that it plays back a three-dimensional image whose resolution is enhanced in a depth-based integral imaging method using a time division display of an element image and a masked image.
    Type: Application
    Filed: April 25, 2013
    Publication date: July 10, 2014
    Applicant: DONGSEO University Technology Headquarters
    Inventors: Dong-Hak SHIN, Yong-Seok OH, Byung-Gook LEE
  • Publication number: 20140192172
    Abstract: A 3D display apparatus is provided, which includes a display panel which displays a multi-viewpoint image, a barrier arranged on one side of the display panel unit, and a controller which controls the barrier to alternately form light transmitting areas and light blocking areas. The barrier includes a liquid crystal layer, a plurality of upper electrodes arranged to be spaced apart from one another on an upper surface of the liquid crystal layer, and a plurality of lower electrodes arranged to be spaced apart from one another on a lower surface of the liquid crystal layer.
    Type: Application
    Filed: September 30, 2011
    Publication date: July 10, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki-Hyung Kang, Dong-Choon Hwang, Sang-Moo Park, Jung-Hoon Yoon, Soo-Bae Moon