Patents by Inventor Yasuhiro Ohki
Yasuhiro Ohki has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240162340
Abstract: A semiconductor device includes a semiconductor layer including an electron transit layer and an electron supply layer; a gate electrode, a source electrode and a drain electrode, the gate electrode, the source electrode and the drain electrode being disposed on the semiconductor layer; and a metal film connected to the gate electrode, wherein the semiconductor layer includes an active region, and an inactive region surrounding the active region in plan view, wherein the gate electrode includes, in plan view, a first region overlapping the active region, and two second regions having the first region interposed therebetween, the two second regions both overlapping the inactive region, and wherein the metal film contacts the two second regions.
Type: Application
Filed: September 18, 2023
Publication date: May 16, 2024
Applicant: Fujitsu Limited
Inventors: Yusuke KUMAZAKI, Shirou OZAKI, Naoya OKAMOTO, Yasuhiro NAKASHA, Toshihiro OHKI
-
Patent number: 9277202
Abstract: A motion vector detecting unit 12 detects a motion vector V1 of an input image X(n), and an interpolating vector generating unit 13 generates an interpolating vector V2 in accordance with the motion vector V1. In a two-dimensional display mode, the input image X(n) is output as an original image, and an image generating unit 16 generates an interpolated image X(n+0.5) in accordance with the interpolating vector V2. In a three-dimensional display mode, the input image X(n) is output as a left-eye image L(n), and the image generating unit 16 generates a right-eye image R(n+0.5) in accordance with a sum of the interpolating vector V2 and a parallax vector V3 input from the outside. The image generating unit is shared between the frame rate conversion process and the three-dimensional conversion process, and the right-eye image is generated at the same position on the time axis as the interpolated image. In this way, image quality is improved when a moving image is displayed, with a small amount of circuitry.
Type: Grant
Filed: January 12, 2012
Date of Patent: March 1, 2016
Assignee: SHARP KABUSHIKI KAISHA
Inventors: Masafumi Ueno, Xiaomang Zhang, Yasuhiro Ohki
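The shared-generator idea in this abstract can be sketched in NumPy. This is a minimal nearest-neighbour illustration under stated assumptions, not the patented circuit: the `warp` helper, the 0.5 scaling of V1, and the uniform parallax field are all assumptions made for the example.

```python
import numpy as np

def warp(frame, disp):
    """Shift each pixel of a grayscale frame by a per-pixel displacement
    field disp of shape (H, W, 2) holding (dy, dx); nearest-neighbour
    sampling with edge clamping."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(ys - disp[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs - disp[..., 1]).astype(int), 0, w - 1)
    return frame[sy, sx]

def interpolated_frame(x_n, v1):
    """2D mode: X(n+0.5) generated from the interpolating vector V2 = 0.5*V1."""
    return warp(x_n, 0.5 * v1)

def right_eye_frame(x_n, v1, v3):
    """3D mode: the same generator, driven by the sum V2 + V3 (parallax)."""
    return warp(x_n, 0.5 * v1 + v3)
```

Both modes call the single `warp` routine, mirroring how the patent reuses one image generating unit and only changes the displacement field fed into it.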
-
Patent number: 9215353
Abstract: A three-dimensional noise reduction processing unit performs a recursive noise reduction process on an input image X(n), using a motion vector MV detected by a motion vector detecting unit. A three-dimensional noise reduced image B(n) is output as a corrected original image Y(n). A two-dimensional noise reduction filter processing unit applies a two-dimensional noise reduction filter to the input image X(n). Using the motion vector MV, an interpolated image generating unit generates an interpolated image Y(n+0.5) based on a two-dimensional noise reduced image A(n). Significant degradation of the interpolated image due to false detection of a motion vector is prevented by generating the interpolated image from the image obtained without the recursive noise reduction process.
Type: Grant
Filed: April 4, 2011
Date of Patent: December 15, 2015
Assignee: SHARP KABUSHIKI KAISHA
Inventors: Masafumi Ueno, Xiaomang Zhang, Yasuhiro Ohki
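The two noise-reduction paths can be sketched as follows. The 3x3 box filter and the blend factor `k` are illustrative stand-ins; the patent does not specify these particular filters.

```python
import numpy as np

def nr2d(img):
    """2D noise reduction: a 3x3 box filter with edge padding
    (a stand-in for whatever spatial filter the unit applies)."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def nr3d(x_n, b_prev, k=0.25):
    """Recursive temporal (3D) noise reduction:
    B(n) = (1 - k)*B(n-1) + k*X(n). In the real device, B(n-1)
    would first be motion-compensated using the detected MV."""
    return (1.0 - k) * b_prev + k * x_n
```

The design point in the abstract: the interpolated image Y(n+0.5) is built from A(n) = nr2d(X(n)) rather than from B(n), so a falsely detected motion vector cannot drag the recursive temporal history into the interpolated frame.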
-
Patent number: 8897569
Abstract: The disclosed image enlargement device is provided with: an image enlargement filter (1) that enlarges an input image, generating a first enlarged image; a first wavelet transformation unit (2) that performs a wavelet transformation on the first enlarged image; a second wavelet transformation unit (3) that performs a wavelet transformation on the first enlarged image; and an accentuation processing unit (6, 7, 8, 10) that performs an accentuation process using a first edge signal (EDGE_CDF9/7), generated from the output of the first wavelet transformation unit, and a second edge signal (EDGE_Harr), generated from the output of the second wavelet transformation unit. The first wavelet transformation unit and the second wavelet transformation unit perform different wavelet transformations.
Type: Grant
Filed: November 9, 2010
Date of Patent: November 25, 2014
Assignee: Sharp Kabushiki Kaisha
Inventors: Xiaomang Zhang, Masafumi Ueno, Yasuhiro Ohki
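The "two different wavelet transforms, two edge signals" idea can be sketched for a single row of pixels. The Haar high-pass below is genuine; the second filter is an illustrative short high-pass standing in for the CDF 9/7 analysis filter and does NOT use the real 9/7 coefficients.

```python
import numpy as np

def haar_detail(row):
    """One-level Haar high-pass: half-differences of adjacent sample pairs."""
    return (row[0::2] - row[1::2]) / 2.0

def smooth_highpass_detail(row):
    """Illustrative high-pass standing in for the CDF 9/7 analysis
    filter (these are NOT the true 9/7 coefficients)."""
    h = np.array([-0.25, 0.5, -0.25])
    return np.convolve(row, h, mode='same')[0::2]

def edge_signals(row):
    """Two edge signals from two different transforms, feeding the
    accentuation stage described in the abstract."""
    return np.abs(haar_detail(row)), np.abs(smooth_highpass_detail(row))
```

Using two different transforms gives edge responses with different spatial support, which the accentuation unit can combine; this sketch only shows how the two signals are derived.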
-
Patent number: 8791932
Abstract: A display device includes: an LED control section (4) for carrying out control in which (i) an output luminance of an LED (10) whose measured luminance is deviated from a reference luminance or (ii) output luminances of peripheral LEDs (10) which are provided around the LED (10) is or are corrected, respectively, by using control information of the plurality of LEDs, which control information contains (a) information on measured luminances of the plurality of LEDs, the information being obtained by the plurality of photosensors (11) and (b) positional information of the plurality of LEDs, the positional information being obtained by the plurality of photosensors (11), and a liquid crystal display control section (3) for controlling, based on (i) video signals which have been subjected to the video signal process and are supplied from a video signal processing section (2) and (ii) the control information supplied from the LED control section (4), (a) levels of video signals to be supplied to pixels correspondi
Type: Grant
Filed: February 17, 2010
Date of Patent: July 29, 2014
Assignee: Sharp Kabushiki Kaisha
Inventors: Masafumi Ueno, Hiroyuki Furukawa, Kazuyoshi Yoshiyama, Yasuhiro Ohki, Kenji Takase, Takashi Ishizumi
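The per-LED correction step can be sketched as a simple proportional adjustment. The duty-cycle model and clamping bounds are assumptions for the example, not the patent's actual correction logic.

```python
def corrected_duty(measured, reference, duty, d_min=0.0, d_max=1.0):
    """Rescale an LED's drive duty so its photosensor-measured
    luminance is pulled back toward the reference luminance
    (a simple proportional stand-in for the correction)."""
    if measured <= 0.0:
        return duty  # sensor reading unusable; leave the LED unchanged
    return min(d_max, max(d_min, duty * reference / measured))
```

The patent additionally corrects the *peripheral* LEDs around a deviating one, using the positional information obtained from the photosensors; this sketch shows only the per-LED proportional step.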
-
Publication number: 20130300827
Abstract: A motion vector detecting unit 12 detects a motion vector V1 of an input image X(n), and an interpolating vector generating unit 13 generates an interpolating vector V2 in accordance with the motion vector V1. In a two-dimensional display mode, the input image X(n) is output as an original image, and an image generating unit 16 generates an interpolated image X(n+0.5) in accordance with the interpolating vector V2. In a three-dimensional display mode, the input image X(n) is output as a left-eye image L(n), and the image generating unit 16 generates a right-eye image R(n+0.5) in accordance with a sum of the interpolating vector V2 and a parallax vector V3 input from the outside. The image generating unit is shared between the frame rate conversion process and the three-dimensional conversion process, and the right-eye image is generated at the same position on the time axis as the interpolated image. In this way, image quality is improved when a moving image is displayed, with a small amount of circuitry.
Type: Application
Filed: January 12, 2012
Publication date: November 14, 2013
Applicant: SHARP KABUSHIKI KAISHA
Inventors: Masafumi Ueno, Xiaomang Zhang, Yasuhiro Ohki
-
Publication number: 20130069922
Abstract: A three-dimensional noise reduction processing unit performs a recursive noise reduction process on an input image X(n), using a motion vector MV detected by a motion vector detecting unit. A three-dimensional noise reduced image B(n) is output as a corrected original image Y(n). A two-dimensional noise reduction filter processing unit applies a two-dimensional noise reduction filter to the input image X(n). Using the motion vector MV, an interpolated image generating unit generates an interpolated image Y(n+0.5) based on a two-dimensional noise reduced image A(n). Significant degradation of the interpolated image due to false detection of a motion vector is prevented by generating the interpolated image from the image obtained without the recursive noise reduction process.
Type: Application
Filed: April 4, 2011
Publication date: March 21, 2013
Applicant: SHARP KABUSHIKI KAISHA
Inventors: Masafumi Ueno, Xiaomang Zhang, Yasuhiro Ohki
-
Publication number: 20120321194
Abstract: The disclosed image enlargement device is provided with: an image enlargement filter (1) that enlarges an input image, generating a first enlarged image; a first wavelet transformation unit (2) that performs a wavelet transformation on the first enlarged image; a second wavelet transformation unit (3) that performs a wavelet transformation on the first enlarged image; and an accentuation processing unit (6, 7, 8, 10) that performs an accentuation process using a first edge signal (EDGE_CDF9/7), generated from the output of the first wavelet transformation unit, and a second edge signal (EDGE_Harr), generated from the output of the second wavelet transformation unit. The first wavelet transformation unit and the second wavelet transformation unit perform different wavelet transformations.
Type: Application
Filed: November 9, 2010
Publication date: December 20, 2012
Applicant: Sharp Kabushiki Kaisha
Inventors: Xiaomang Zhang, Masafumi Ueno, Yasuhiro Ohki
-
Publication number: 20120113164
Abstract: A liquid crystal data calculation section forms, on the basis of input image data, liquid crystal data to display an image on a liquid crystal panel. In at least one example embodiment, an LED data calculation section forms, on the basis of the input image data, LED data for adjusting an amount of light of an LED backlight. An LED control section controls an amount of an output current of an LED power source on the basis of the LED data, and includes a protection function of limiting the amount of the output current so that the amount of the output current does not exceed a predetermined upper limit. In a case where the amount of the output current of the LED power source is reduced to the upper limit by the LED control section, a liquid crystal transmittance correction section corrects the liquid crystal data and increases transmittance so as to compensate for the reduction in luminance of the backlight.
Type: Application
Filed: March 31, 2010
Publication date: May 10, 2012
Applicant: SHARP KABUSHIKI KAISHA
Inventors: Hiroyuki Furukawa, Kazuyoshi Yoshiyama, Yasuhiro Ohki, Masafumi Ueno, Takashi Ishizumi, Kenji Takase
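The clamp-then-compensate interaction can be sketched in a few lines. Modeling backlight luminance as proportional to current and transmittance as a 0..1 pixel drive level are assumptions for the example.

```python
def clamp_current(requested, limit):
    """Protection function: cap the LED power-source output current
    at the predetermined upper limit."""
    return min(requested, limit)

def compensated_pixel(pixel, requested, limit, t_max=1.0):
    """If the current was clamped, raise the liquid crystal
    transmittance by the luminance shortfall factor so on-screen
    brightness is preserved (up to the panel's maximum)."""
    actual = clamp_current(requested, limit)
    if actual >= requested:
        return pixel  # no clamping occurred; no correction needed
    return min(t_max, pixel * requested / actual)
```

Note the `min(t_max, ...)`: once the pixel drive saturates, the lost backlight luminance can only be partially recovered, which is why the clamp and the transmittance correction work as a pair.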
-
Publication number: 20120075274
Abstract: A display device includes: an LED control section (4) for carrying out control in which (i) an output luminance of an LED (10) whose measured luminance is deviated from a reference luminance or (ii) output luminances of peripheral LEDs (10) which are provided around the LED (10) is or are corrected, respectively, by using control information of the plurality of LEDs, which control information contains (a) information on measured luminances of the plurality of LEDs, the information being obtained by the plurality of photosensors (11) and (b) positional information of the plurality of LEDs, the positional information being obtained by the plurality of photosensors (11), and a liquid crystal display control section (3) for controlling, based on (i) video signals which have been subjected to the video signal process and are supplied from a video signal processing section (2) and (ii) the control information supplied from the LED control section (4), (a) levels of video signals to be supplied to pixels correspondi
Type: Application
Filed: February 17, 2010
Publication date: March 29, 2012
Applicant: SHARP KABUSHIKI KAISHA
Inventors: Masafumi Ueno, Hiroyuki Furukawa, Kazuyoshi Yoshiyama, Yasuhiro Ohki, Kenji Takase, Takashi Ishizumi
-
Publication number: 20110316426
Abstract: An illumination device detecting section (6) detects data on the position of each illumination device (7) installed in the audio-visual environment space for a viewer. An illumination control data generating section (9) generates illumination control data for controlling each illumination device installed in the audio-visual environment space for the viewer, with use of the data on the position of each illumination device (7). The illumination control data allows suitable control of each illumination device installed in the audio-visual environment space, in correspondence with its installation position, thereby improving the realistic atmosphere experienced by the viewer.
Type: Application
Filed: December 25, 2007
Publication date: December 29, 2011
Applicant: SHARP KABUSHIKI KAISHA
Inventors: Takuya Iwanami, Taiji Nishizawa, Yasuhiro Yoshida, Yasuhiro Ohki, Takashi Yoshii, Manabu Ishikawa
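Position-dependent control of each lamp could look like the following. The distance-based falloff model is entirely hypothetical, chosen only to show drive levels varying with detected installation position; the patent does not specify any particular formula.

```python
import math

def lamp_drive(lamp_pos, screen_pos, scene_level):
    """Hypothetical falloff model: attenuate a scene-derived drive
    level by the lamp's detected distance from the screen, so each
    device is driven according to its installation position."""
    d = math.dist(lamp_pos, screen_pos)
    return scene_level / (1.0 + d)
```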
-
Publication number: 20100031298
Abstract: A data transmission device (1) sends image data together with (i) reference data relating to a position of at least one illumination device in a virtual audio-visual environment space and (ii) illumination control data for controlling each of the at least one illumination device in the virtual audio-visual environment space. An image receiving device (audio-visual environment control device) (4) uses the audio-visual environment reference data and audio-visual environment data on the position of each illumination device in the actual audio-visual environment space so as to convert the illumination control data into illumination control data for appropriately controlling each illumination device in the actual audio-visual environment. This allows appropriate illumination control according to the actual audio-visual environment.
Type: Application
Filed: December 25, 2007
Publication date: February 4, 2010
Applicant: SHARP KABUSHIKI KAISHA
Inventors: Takuya Iwanami, Yasuhiro Yoshida, Yasuhiro Ohki, Takashi Yoshii, Manabu Ishikawa
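The conversion step on the receiving side can be sketched as a mapping from virtual lamp positions to actual ones. Nearest-neighbour assignment is a deliberately crude stand-in for whatever conversion the actual device performs.

```python
import math

def convert_control(virtual_lamps, actual_positions):
    """virtual_lamps: [(position, control_value), ...] in the
    reference (virtual) audio-visual space. Each actual lamp takes
    the control value of the nearest virtual lamp."""
    return [min(virtual_lamps, key=lambda vc: math.dist(vc[0], p))[1]
            for p in actual_positions]
```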