Patents by Inventor Takaaki Shiratori
Takaaki Shiratori has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230316007
Abstract: In a method for improving machine translation results, a processor receives a target document in a first language. A processor may also translate the target document into a first translated document in a second language using a first neural machine translation (NMT) model, determine a target attribute value from the target document and a first translated attribute value for the first translated document using a natural language understanding (NLU) model, and compare the target attribute value to the first translated attribute value to determine a first comparison score for the first NMT model.
Type: Application
Filed: March 31, 2022
Publication date: October 5, 2023
Inventors: Takaaki Shiratori, Takehiko Ishii, Tomoka Mochizuki
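A minimal sketch of the scoring idea in this abstract, assuming hypothetical stand-ins for the NMT and NLU components (the `nmt_translate` and `nlu_extract_attribute` callables and the closeness formula are illustrative, not taken from the patent): translate the target document, extract a comparable attribute value from both the source and the translation, and score the NMT model by how closely the two agree.

```python
def compare_nmt_model(target_doc, nmt_translate, nlu_extract_attribute):
    """Score an NMT model by comparing NLU-derived attribute values.

    nmt_translate: callable mapping source-language text to target-language text.
    nlu_extract_attribute: callable mapping text to a numeric attribute value
    (e.g., a sentiment score); both callables are assumed stand-ins.
    """
    translated_doc = nmt_translate(target_doc)                  # first translated document
    target_value = nlu_extract_attribute(target_doc)            # attribute of the source
    translated_value = nlu_extract_attribute(translated_doc)    # attribute of the translation
    # One simple comparison score: closeness of the two attribute values, in (0, 1].
    return 1.0 / (1.0 + abs(target_value - translated_value))
```

Running the same comparison against a second NMT model and keeping the higher-scoring one would mirror the model-selection step a full implementation might take.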
-
Publication number: 20230229722
Abstract: An embodiment includes determining semantic attributes of a bookmark. The embodiment also includes determining respective attribute values for each of the semantic attributes of the bookmark. The embodiment also includes rendering a three-dimensional (3D) virtual space for display to a user, where the virtual space is defined by three orthogonal axes, each associated with a respective one of the semantic attributes. The embodiment also includes displaying a symbol representative of the bookmark in the virtual space. The symbol is positioned in the virtual space at an intersection of perpendicular projections from locations on the three axes corresponding with respective attribute values of the attributes associated with the respective axes.
Type: Application
Filed: January 17, 2022
Publication date: July 20, 2023
Inventors: Takehiko Ishii, Tomoka Mochizuki, Takaaki Shiratori
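A rough sketch of the placement rule described above, assuming three invented semantic attributes (`recency`, `relevance`, `frequency`) whose values simply become the bookmark symbol's coordinates along the three orthogonal axes; the attribute names and the direct value-to-coordinate mapping are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class BookmarkSymbol:
    url: str
    position: tuple  # (x, y, z) location in the 3D virtual space

def place_bookmark(url, attribute_values, axis_order=("recency", "relevance", "frequency")):
    """Assign one semantic attribute per axis; the symbol sits at the intersection
    of the perpendicular projections from those axis locations."""
    position = tuple(attribute_values[name] for name in axis_order)
    return BookmarkSymbol(url=url, position=position)

# Example: a bookmark whose three attribute values are 0.9, 0.4, and 0.7.
symbol = place_bookmark("https://example.com",
                        {"recency": 0.9, "relevance": 0.4, "frequency": 0.7})
```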
-
Publication number: 20230005204
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Application
Filed: September 6, 2022
Publication date: January 5, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
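A very small sketch of the first step this abstract describes, scanning a body gesture into an initial 3D shape. The joint-position input and the sphere-union representation are illustrative assumptions; the patent does not specify either.

```python
import numpy as np

def initial_shape_from_pose(joint_positions, radius=0.1):
    """Turn a scanned body pose into a crude initial 3D shape.

    joint_positions: (N, 3) array of tracked joint coordinates from an imaging
    device that senses the user's movement (the acquisition API is assumed).
    Returns a list of (center, radius) spheres whose union approximates the pose.
    """
    spheres = []
    joints = np.asarray(joint_positions, dtype=float)
    # Place one sphere at every joint and one midway along each consecutive pair,
    # so limbs the user holds out become elongated parts of the object.
    for i, p in enumerate(joints):
        spheres.append((p, radius))
        if i + 1 < len(joints):
            spheres.append(((p + joints[i + 1]) / 2.0, radius))
    return spheres
```

Subsequent gestures would then edit this shape, and the finished object could be bound to the same joints so it animates with the user, as the abstract outlines.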
-
Patent number: 11461950
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Grant
Filed: January 11, 2021
Date of Patent: October 4, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Publication number: 20220248947
Abstract: Provided are an ophthalmologic device, and a method of controlling the same, that can accurately and reliably acquire ocular characteristics of a subject eye. The ophthalmologic device includes: a face supporting unit configured to support a face of an examinee; an anterior ocular segment image acquiring unit configured to repeatedly acquire an anterior ocular segment image of the subject eye of the face supported by the face supporting unit; a pupil image detecting unit configured to detect a pupil image of the subject eye for each anterior ocular segment image based on the anterior ocular segment image repeatedly acquired by the anterior ocular segment image acquiring unit; and a determining unit configured to determine whether or not the face is properly supported by the face supporting unit based on a result of detection of the pupil image for each anterior ocular segment image by the pupil image detecting unit.
Type: Application
Filed: April 27, 2022
Publication date: August 11, 2022
Applicant: Topcon Corporation
Inventors: Takaaki Shiratori, Takuya Oki, Hiroyuki Aoki, Yusuke Ono
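A simplified sketch of the determining unit's logic, assuming a hypothetical `detect_pupil` routine that returns a pupil center (or None) per anterior segment image; the detection-ratio rule and threshold are invented for illustration and are not the patent's actual criterion.

```python
def face_properly_supported(anterior_images, detect_pupil, min_detection_ratio=0.8):
    """Decide whether the examinee's face is properly supported.

    anterior_images: a recent window of repeatedly acquired anterior segment images.
    detect_pupil: callable returning a pupil center (x, y) or None per image
    (an assumed stand-in for the pupil image detecting unit).
    The face is judged properly supported when the pupil is found in a
    sufficiently large fraction of the recent frames.
    """
    detections = [detect_pupil(img) for img in anterior_images]
    found = sum(1 for d in detections if d is not None)
    return found / max(len(detections), 1) >= min_detection_ratio
```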
-
Patent number: 11182947
Abstract: In one embodiment, a system may access a codec that encodes an appearance associated with a subject and comprises codec portions that respectively correspond to body parts of the subject. The system may generate a training codec that comprises a first subset of the codec portions (a first set of body parts) and a modified second subset of the codec portions (muted body parts). The system may decode the training codec using a machine-learning model to generate a mesh of the subject. The system may transform the mesh of the subject based on a predetermined pose. The system may update the machine-learning model based on a comparison between the transformed mesh and a target mesh of the subject having the predetermined pose. The system in the present application can train a machine-learning model to render an avatar with a pose using uncorrelated codec portions corresponding to different body parts.
Type: Grant
Filed: April 17, 2020
Date of Patent: November 23, 2021
Assignee: Facebook Technologies, LLC
Inventors: Chenglei Wu, Jason Saragih, Tomas Simon Kreuz, Takaaki Shiratori
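A rough PyTorch-style training-step sketch of the muting idea above: keep one subset of codec portions, zero out (mute) the rest, decode to a mesh, pose it, and compare against a target mesh. The decoder architecture, tensor sizes, and the zeroing-as-muting choice are assumptions for illustration, not the patent's actual model.

```python
import torch
import torch.nn as nn

# Illustrative sizes: 4 body-part codec portions of 16 values each, a 300-vertex mesh.
NUM_PARTS, PART_DIM, NUM_VERTS = 4, 16, 300

decoder = nn.Sequential(nn.Linear(NUM_PARTS * PART_DIM, 256), nn.ReLU(),
                        nn.Linear(256, NUM_VERTS * 3))   # stand-in machine-learning model
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def training_step(codec_portions, target_mesh, keep_idx, transform):
    """One update with a training codec.

    codec_portions: (NUM_PARTS, PART_DIM) appearance codec split by body part.
    target_mesh: (NUM_VERTS, 3) mesh of the subject in the predetermined pose.
    keep_idx: indices of the first subset of portions; all other portions are muted.
    transform: callable posing the decoded mesh into the predetermined pose (assumed).
    """
    training_codec = torch.zeros_like(codec_portions)
    training_codec[keep_idx] = codec_portions[keep_idx]            # muted elsewhere
    mesh = decoder(training_codec.flatten()).view(NUM_VERTS, 3)    # decode to a mesh
    loss = torch.mean((transform(mesh) - target_mesh) ** 2)        # compare to target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```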
-
Patent number: 11087521
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Grant
Filed: January 29, 2020
Date of Patent: August 10, 2021
Assignee: Facebook Technologies, LLC
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
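A bare-bones sketch of the data flow this abstract describes (texture plus geometry jointly encoded into a latent vector, geometry decoded from the latent, texture decoded conditioned on the predicted viewpoint), written as PyTorch-style code with invented layer sizes; it illustrates the flow only, not the patented network.

```python
import torch
import torch.nn as nn

TEX_DIM, GEO_DIM, VIEW_DIM, LATENT = 1024, 512, 3, 128  # illustrative sizes

class ViewDependentAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Jointly encode flattened texture and geometry into one latent vector.
        self.encoder = nn.Sequential(nn.Linear(TEX_DIM + GEO_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, LATENT))
        # Decode geometry from the latent alone; decode texture conditioned on viewpoint.
        self.geo_decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                         nn.Linear(256, GEO_DIM))
        self.tex_decoder = nn.Sequential(nn.Linear(LATENT + VIEW_DIM, 256), nn.ReLU(),
                                         nn.Linear(256, TEX_DIM))

    def forward(self, texture, geometry, predicted_viewpoint):
        latent = self.encoder(torch.cat([texture, geometry], dim=-1))
        inferred_geometry = self.geo_decoder(latent)
        inferred_texture = self.tex_decoder(torch.cat([latent, predicted_viewpoint], dim=-1))
        return inferred_geometry, inferred_texture
```

A rendering module would then rasterize the inferred geometry with the inferred view-dependent texture to reconstruct the image for the predicted viewpoint.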
-
Publication number: 20210134043
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Application
Filed: January 11, 2021
Publication date: May 6, 2021
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Patent number: 10916047
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Grant
Filed: January 16, 2020
Date of Patent: February 9, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Publication number: 20200151935
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Application
Filed: January 16, 2020
Publication date: May 14, 2020
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Patent number: 10586370
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Grant
Filed: July 31, 2018
Date of Patent: March 10, 2020
Assignee: Facebook Technologies, LLC
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
-
Patent number: 10573049
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Grant
Filed: February 5, 2018
Date of Patent: February 25, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Publication number: 20190213772
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Application
Filed: July 31, 2018
Publication date: July 11, 2019
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
-
Publication number: 20180158226
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Application
Filed: February 5, 2018
Publication date: June 7, 2018
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Patent number: 9928634
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Grant
Filed: March 1, 2013
Date of Patent: March 27, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Publication number: 20160379366
Abstract: Systems, methods, and computer-readable storage media are provided for aligning three-dimensional point clouds that each includes data representing at least a portion of an area-of-interest. The area-of-interest is divided into multiple regions, each region having a closed-loop structure defined by a plurality of border segments, each border segment including a plurality of fragments. Point clouds representing the fragments that make up each closed-loop region are aligned with one another in a parallelized manner, for instance, utilizing a Simultaneous Generalized Iterative Closest Point (SGICP) technique, to create aligned point cloud regions. Aligned point cloud regions sharing a common border segment portion are aligned with one another to create a single, consistent, aligned point cloud having data that accurately represents the area-of-interest.
Type: Application
Filed: June 25, 2015
Publication date: December 29, 2016
Inventors: Chintan Anil Shah, Jerome Francois Berclaz, Michael L. Harville, Yasuyuki Matsushita, Takaaki Shiratori, Taoyu Li, Taehun Yoon, Stephen Edward Shiller, Timo P. Pylvaenaeinen
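A high-level orchestration sketch of the two-stage alignment the abstract describes: the fragments of each closed-loop region are aligned in parallel, and the resulting region clouds are then aligned with one another. The `align_point_clouds` argument is a placeholder for an SGICP-style solver, which is not reproduced here.

```python
from concurrent.futures import ProcessPoolExecutor

def align_area_of_interest(regions, align_point_clouds):
    """Two-stage alignment of an area-of-interest.

    regions: list of closed-loop regions, each a list of fragment point clouds.
    align_point_clouds: a solver (e.g., SGICP-style) that jointly aligns a list
    of point clouds and returns a single aligned cloud; supplied by the caller.
    """
    # Stage 1: align the fragments inside every region, in parallel.
    with ProcessPoolExecutor() as pool:
        aligned_regions = list(pool.map(align_point_clouds, regions))
    # Stage 2: align the region clouds sharing common border segments into one
    # consistent point cloud representing the whole area-of-interest.
    return align_point_clouds(aligned_regions)
```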
-
Patent number: 9317112
Abstract: An optical flow of depth video of a depth camera imaging a human subject is recognized. An energy field created by motion of the human subject is generated as a function of the optical flow and specified rules of a physical simulation of the virtual environment. The energy field is mapped to a virtual position in the virtual environment. A property of a virtual object in the virtual environment is adjusted based on a plurality of energy elements of the energy field in response to the virtual object interacting with the virtual position of the energy field.
Type: Grant
Filed: November 19, 2013
Date of Patent: April 19, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiang Cao, Takaaki Shiratori, Xin Tong, Feng Xu, Thomas Gersten, Tommer Leyvand
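A toy sketch of the pipeline in this abstract using OpenCV's dense optical flow. Treating per-pixel flow magnitude as the "energy elements" and applying their sum as a push on a virtual object are invented simplifications, not the patent's actual rules.

```python
import cv2
import numpy as np

def energy_field_from_depth_frames(prev_depth, next_depth):
    """Compute dense optical flow between two consecutive depth frames and use
    the per-pixel flow magnitude as a grid of energy elements.

    prev_depth, next_depth: single-channel 8-bit images derived from the depth video.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_depth, next_depth, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2)  # one energy element per pixel

def push_virtual_object(object_velocity, energy_field, object_region, gain=0.01):
    """Adjust a virtual object's velocity from the energy elements that overlap
    the object's mapped position in the virtual environment (a simplification)."""
    y0, y1, x0, x1 = object_region                 # footprint of the object's mapped position
    energy = energy_field[y0:y1, x0:x1].sum()
    return object_velocity + gain * energy         # push proportional to the local energy
```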
-
Publication number: 20160027199
Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
Type: Application
Filed: March 1, 2013
Publication date: January 28, 2016
Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
-
Publication number: 20150138063
Abstract: An optical flow of depth video of a depth camera imaging a human subject is recognized. An energy field created by motion of the human subject is generated as a function of the optical flow and specified rules of a physical simulation of the virtual environment. The energy field is mapped to a virtual position in the virtual environment. A property of a virtual object in the virtual environment is adjusted based on a plurality of energy elements of the energy field in response to the virtual object interacting with the virtual position of the energy field.
Type: Application
Filed: November 19, 2013
Publication date: May 21, 2015
Applicant: Microsoft Corporation
Inventors: Xiang Cao, Takaaki Shiratori, Xin Tong, Feng Xu, Thomas Gersten, Tommer Leyvand
-
Patent number: 8814363
Abstract: Techniques are disclosed for presenting display frames projected using a handheld projector. Embodiments detect a marker embedded in a display surface. A first display frame is projected onto the display surface from the handheld projector, where the first display frame projects, on the display surface, one or more animated objects positioned relative to the detected marker embedded in the display surface. Embodiments generate one or more subsequent display frames projected from the handheld projector, where at least a first one of the animated objects in the one or more subsequent display frames is positioned relative to the detected marker embedded in the display surface.
Type: Grant
Filed: October 26, 2012
Date of Patent: August 26, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Karl D. D. Willis, Takaaki Shiratori
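A small sketch of the frame-generation loop the abstract describes, using OpenCV's ArUco module (OpenCV 4.7+) as a stand-in for whatever marker detection the patent actually uses; placing the animated object at a fixed pixel offset from the marker center is likewise an invented simplification.

```python
import cv2
import numpy as np

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

def render_frame(camera_image, sprite, offset=(40, 0)):
    """Build the next display frame with an animated object (sprite) positioned
    relative to the marker detected in the display surface.

    camera_image and sprite are assumed to have the same number of channels.
    """
    frame = np.zeros_like(camera_image)
    corners, ids, _ = detector.detectMarkers(camera_image)
    if ids is None:
        return frame  # no marker found; project an empty frame
    marker_center = corners[0][0].mean(axis=0)          # (x, y) of the first marker
    x, y = (marker_center + np.array(offset)).astype(int)
    h, w = sprite.shape[:2]
    # Draw the sprite beside the marker, clipped to the frame bounds.
    y2, x2 = min(y + h, frame.shape[0]), min(x + w, frame.shape[1])
    frame[y:y2, x:x2] = sprite[: y2 - y, : x2 - x]
    return frame
```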