Individual Object Patents (Class 715/849)
  • Patent number: 11755790
    Abstract: A product visualization and manufacturing system and method which bridges two-dimensional (2D) and three-dimensional (3D) technologies in order to quickly and effectively display the product. The system and methods are helpful for many different product types, but especially for custom-designed jewelry products. The 2D/3D bridging invention enables a user to generate a three-dimensional generic base model of a product, modify the three-dimensional generic base model using two-dimensional image manipulation, and display a three-dimensional customized base model of a customized product. Templates, material libraries, HDRI maps, and lighting schemes may be employed.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: September 12, 2023
    Inventors: Christopher W. Hancock, Jill M. Goodson
  • Patent number: 11733861
    Abstract: A user computing device displays a three-dimensional virtual space via a user interface. The user computing device detects a gesture input at a location of the user interface. The user computing device translates the gesture input into a user interface input by predicting, based on the gesture input, a design intended by the gesture input and mapping, based on the design and the location of the gesture input on the user interface, the design to the user interface to generate the user interface input. The user computing device executes, in response to the user interface input, an operation to add an object in the three-dimensional virtual space. The user computing device renders an updated three-dimensional space displaying the object.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: August 22, 2023
    Assignee: Trimble Inc.
    Inventors: Michael Tadros, Robert Banfield, Ross Stump, Wei Wang
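    The abstract above describes an algorithmic flow: predict a design from a gesture, map it to the interface, and add the resulting object to the 3D space. A minimal sketch of that flow follows; the shape heuristic, the SceneObject class, and the screen_to_world callback are invented for illustration and are not taken from the patent.
    ```python
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        kind: str        # e.g. "box" or "cylinder"
        position: tuple  # (x, y, z) in world space

    def predict_design(stroke):
        """Guess the intended shape from a closed 2D stroke (a list of (x, y) points)."""
        xs, ys = zip(*stroke)
        width, height = max(xs) - min(xs), max(ys) - min(ys)
        # Crude stand-in heuristic: near-square bounding box -> box, otherwise cylinder.
        return "box" if abs(width - height) < 0.2 * max(width, height) else "cylinder"

    def add_object_from_gesture(scene, stroke, screen_to_world):
        """Translate the gesture into a user-interface input that adds an object."""
        kind = predict_design(stroke)
        cx = sum(p[0] for p in stroke) / len(stroke)
        cy = sum(p[1] for p in stroke) / len(stroke)
        scene.append(SceneObject(kind, screen_to_world(cx, cy)))
        return scene[-1]

    square = [(0, 0), (10, 0), (10, 10), (0, 10)]
    print(add_object_from_gesture([], square, lambda x, y: (x, 0.0, y)))
    ```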
  • Patent number: 11710280
    Abstract: Disclosed herein is an environmental scanning tool that generates a digital model representing the surroundings of a user of an extended reality head-mounted display device. The environment is imaged in both a depth map and in visible light for some select objects of interest. The selected objects exist within the digital model at higher fidelity and resolution than the remaining portions of the model in order to manage the storage size of the digital model. In some cases, the objects of interest are selected, or their higher fidelity scans are directed, by a remote user. The digital model further includes time stamped updates of the environment such that users can view a state of the environment according to various timestamps.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: July 25, 2023
    Assignee: United Services Automobile Association (USAA)
    Inventors: Ravi Durairaj, Marta Argumedo, Sean C. Mitchem, Ruthie Lyle, Nolan Serrao, Bharat Prasad, Nathan L. Post
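    A small data-structure sketch of the variable-fidelity, time-stamped environment model described in the abstract above. The class and field names (ObjectScan, EnvironmentSnapshot, state_at, and so on) are assumptions made for the example, not the patent's terminology.
    ```python
    import bisect
    from dataclasses import dataclass, field

    @dataclass
    class ObjectScan:
        object_id: str
        resolution: str       # "high" for selected objects of interest, "low" otherwise
        mesh: bytes = b""     # placeholder for mesh/texture data

    @dataclass
    class EnvironmentSnapshot:
        timestamp: float
        coarse_depth_map: list                       # low-fidelity background geometry
        objects: dict = field(default_factory=dict)  # object_id -> ObjectScan

    class EnvironmentModel:
        """Digital model built from time-stamped updates of the scanned environment."""
        def __init__(self):
            self._snapshots = []                     # kept sorted by timestamp

        def add_update(self, snapshot):
            self._snapshots.append(snapshot)
            self._snapshots.sort(key=lambda s: s.timestamp)

        def state_at(self, timestamp):
            """Return the most recent snapshot at or before the requested time."""
            times = [s.timestamp for s in self._snapshots]
            idx = bisect.bisect_right(times, timestamp)
            return self._snapshots[max(idx - 1, 0)]

    model = EnvironmentModel()
    model.add_update(EnvironmentSnapshot(10.0, coarse_depth_map=[],
                                         objects={"couch": ObjectScan("couch", "high")}))
    print(model.state_at(12.0).objects["couch"].resolution)   # -> high
    ```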
  • Patent number: 11689704
    Abstract: A space in which an augmented reality (AR) computer simulation is played is mapped by multiple cameras. An AR video game player can select a location in space from which the AR player wishes to have a view of the space, including himself. Using the mapping of the space a synthesized video is generated as if from a camera located at the location in space selected by the player.
    Type: Grant
    Filed: August 13, 2022
    Date of Patent: June 27, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn Black, Michael Taylor
  • Patent number: 11151773
    Abstract: This disclosure describes a method and an apparatus for adjusting a viewing angle in a virtual environment. The method includes: displaying a first viewing angle picture, the first viewing angle picture including a virtual object having a first orientation; receiving a drag instruction for a viewing angle adjustment control; adjusting the first viewing angle direction according to the drag instruction, to obtain a second viewing angle direction; and displaying a second viewing angle picture, the second viewing angle picture including the virtual object having the first orientation.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: October 19, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Han Wang
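    A sketch of the drag-to-view-angle step, under assumed names: only the viewing direction changes while the virtual object keeps its first orientation. The Camera fields, sensitivity constant, and pitch clamping are illustrative assumptions.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Camera:
        yaw: float = 0.0     # degrees, rotation around the vertical axis
        pitch: float = 0.0   # degrees, clamped to avoid flipping over the poles

    def apply_drag(camera, dx, dy, sensitivity=0.2):
        """Adjust the first viewing-angle direction according to the drag instruction."""
        camera.yaw = (camera.yaw + dx * sensitivity) % 360.0
        camera.pitch = max(-89.0, min(89.0, camera.pitch + dy * sensitivity))
        return camera

    cam = Camera()
    apply_drag(cam, dx=120.0, dy=-40.0)   # the object's orientation is untouched;
    print(cam)                            # only the camera's view direction changes
    ```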
  • Patent number: 11126341
    Abstract: The disclosure provides an object manipulating method, a host device and a computer readable storage medium. The method includes the following steps: in response to an operable object being selected, showing a cursor on the operable object; moving the cursor on the operable object in response to a movement of a controller; and moving the operable object according to the movement of the controller in response to a movement triggering event.
    Type: Grant
    Filed: January 20, 2020
    Date of Patent: September 21, 2021
    Assignee: HTC Corporation
    Inventor: Wei-Jen Chung
  • Patent number: 11089130
    Abstract: A computer system including program instructions to receive a message at a source gateway of the local network, the message including message data corresponding to a plurality of message elements, assign a unique group ID based on the type of message received at the source gateway, extract a message format from the received message, the message format defining how the message data is organized with respect to the message elements, and associate the extracted message format with the unique group ID. The computer system further includes program instructions to store, locally, the extracted message format together with the associated unique group ID, establish a dedicated connection between the source gateway and a target gateway of the remote network based on the unique group ID, encode the message based on the extracted message format, and send the encoded message from the source gateway to the target gateway across the dedicated connection.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: August 10, 2021
    Assignee: International Business Machines Corporation
    Inventors: Wei-Ting Chou, Chih-Hsiung Liu, Xin Peng Liu, Hao-Ting Shih, Joey H. Y. Tseng
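    The following sketch illustrates, under assumed names and a JSON stand-in encoding, the source-gateway flow from the abstract: derive a group ID from the message type, cache the extracted message format, and encode the message for a per-group dedicated connection.
    ```python
    import hashlib
    import json

    format_registry = {}   # group_id -> extracted message format (ordered field names)
    connections = {}       # group_id -> placeholder for a "dedicated connection"

    def assign_group_id(message_type):
        """Unique group ID derived from the type of message received."""
        return hashlib.sha1(message_type.encode()).hexdigest()[:8]

    def handle_at_source_gateway(message):
        group_id = assign_group_id(message["type"])
        # "Extract" the message format: here, simply the ordered field names.
        format_registry.setdefault(group_id, list(message["data"].keys()))
        connections.setdefault(group_id, f"conn-to-target-gateway/{group_id}")
        # Encode according to the stored format and send over the dedicated connection.
        ordered = [message["data"][k] for k in format_registry[group_id]]
        return json.dumps({"group": group_id, "values": ordered}).encode()

    msg = {"type": "sensor-report", "data": {"id": 7, "temp": 21.5, "unit": "C"}}
    print(handle_at_source_gateway(msg))
    ```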
  • Patent number: 11023113
    Abstract: Visual manipulation of a digital object, such as three-dimensional digital object manipulation on a two-dimensional display surface, is described that overcomes the challenges of explicit specification of axis manipulation for each of the three axes one at a time. In an example, a multipoint gesture applied to a digital object is received on a display surface, which generates an axis of manipulation based on a position of the multipoint gesture relative to the digital object. Then a manipulation gesture is recognized, indicative of a manipulation of the digital object relative to the axis of manipulation, and a visual manipulation of the digital object about the axis of manipulation is generated based on the manipulation gesture.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: June 1, 2021
    Assignee: Adobe Inc.
    Inventor: Erik Jon Natzke
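    A hedged sketch of the two-step interaction described above: derive an axis of manipulation from a two-point touch relative to the object, then rotate the object about that axis. The vector math (Rodrigues' rotation) and all names are illustrative, not Adobe's implementation.
    ```python
    import numpy as np

    def axis_from_multipoint(touch_a, touch_b, object_center):
        """Use the two touch points, relative to the object, to define the axis."""
        a = np.asarray(touch_a, float) - object_center
        b = np.asarray(touch_b, float) - object_center
        axis = b - a
        return axis / np.linalg.norm(axis)

    def rotate_about_axis(points, axis, angle_rad):
        """Rotate object vertices about the derived axis (Rodrigues' formula)."""
        k = np.asarray(axis, float)
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)
        return points @ R.T

    axis = axis_from_multipoint((0.1, 0.0, 0.0), (0.9, 0.0, 0.0), np.zeros(3))
    cube = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, 1.0]])
    print(rotate_about_axis(cube, axis, np.pi / 2))   # vertices swung about the x-axis
    ```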
  • Patent number: 11019469
    Abstract: A method of defining a set of parameters for a user computing device in a geographic location system includes receiving, through a user interface on the user device, a signal activating a share profile, wherein a share profile comprises a pre-defined set of parameters for operation of the system with regard to the user device, presenting, on the user interface, options to allow the user to define the share profile, wherein the options include location sharing, profile sharing, and activation, and receiving, from the user device, a set of parameters, wherein the set of parameters define how the geographic location system interacts with the user device while that share profile is active.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: May 25, 2021
    Assignee: CLICK INC
    Inventor: Quan Nguyen
  • Patent number: 10958755
    Abstract: A method including receiving a message at a source gateway of the local network, the message including message data corresponding to a plurality of message elements, assigning a unique group ID based on the type of message received at the source gateway, extracting a message format from the received message, the message format defining how the message data is organized with respect to the message elements, and associating the extracted message format with the unique group ID. The method further includes storing, locally, the extracted message format together with the associated unique group ID, establishing a dedicated connection between the source gateway and a target gateway of the remote network based on the unique group ID, encoding the message based on the extracted message format, and sending the encoded message from the source gateway to the target gateway across the dedicated connection.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: March 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Wei-Ting Chou, Chih-Hsiung Liu, Xin Peng Liu, Hao-Ting Shih, Joey H. Y. Tseng
  • Patent number: 10867452
    Abstract: A computer-implemented method of and system for converting a two-dimensional drawing into a navigable three-dimensional computer graphics representation of a scene that includes inputting the two-dimensional drawing, embedding some portion of the two-dimensional drawing onto one or more two-dimensional planes, arranging the two-dimensional planes in a virtual three-dimensional space, and outputting the arranged two-dimensional planes into the three-dimensional computer graphics representation of the scene.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: December 15, 2020
    Assignee: Mental Canvas LLC
    Inventors: Julie Dorsey, Steven Gortler, Leonard McMillan, Sydney Shea
  • Patent number: 10796490
    Abstract: Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include projecting a ray from a user device to a ground plane and determining an angle at which the projected ray touches the ground plane. The method further helps determine a level for the ground plane for positioning the 3D model of the object in the 2D environment.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: October 6, 2020
    Assignee: Atheer, Inc.
    Inventor: Milos Jovanovic
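    The ray-to-ground-plane step lends itself to a short worked example: intersect the ray from the user device with the plane y = 0 and compute the angle at which it touches the plane. The function name and coordinate convention are assumptions.
    ```python
    import numpy as np

    def ray_ground_intersection(origin, direction):
        """Intersect a ray with the ground plane y = 0; return hit point and angle (deg)."""
        origin, direction = np.asarray(origin, float), np.asarray(direction, float)
        if abs(direction[1]) < 1e-9:
            return None, None                      # ray parallel to the ground
        t = -origin[1] / direction[1]
        if t < 0:
            return None, None                      # the plane is behind the device
        hit = origin + t * direction
        # Angle between the ray and the plane = 90 deg minus the angle to the normal.
        cos_to_normal = abs(direction[1]) / np.linalg.norm(direction)
        angle = 90.0 - np.degrees(np.arccos(cos_to_normal))
        return hit, angle

    point, angle = ray_ground_intersection(origin=(0.0, 1.5, 0.0), direction=(0.0, -1.0, 2.0))
    print(point, round(angle, 1))   # ground hit point and a roughly 26.6 degree grazing angle
    ```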
  • Patent number: 10652105
    Abstract: A display apparatus is provided. The display apparatus includes a display configured to display at least one first Graphic User Interface (GUI) representing a domain which provides an execution screen according to a depth, a user interface configured to receive a user command, and a controller configured to, when one of the at least one first GUI is selected according to the user command, display at least one second GUI representing a sub domain which is available in the domain represented by the selected first GUI, and when one of the at least one second GUI is selected, provide an execution screen corresponding to the sub domain based on sub domain information which is mapped with the selected second GUI.
    Type: Grant
    Filed: July 29, 2014
    Date of Patent: May 12, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Kyung-soo Lim
  • Patent number: 10643362
    Abstract: For presenting the message in a location of a display based on a location of a user's limb, methods, apparatus, and systems are disclosed. One apparatus includes a display that presents a first view, a processor, and a memory that stores code executable by the processor. Here, the processor receives a first message to be presented to a user within the first view. The processor determines a location of a limb of the user relative to the first view. Moreover, the processor presents the first message in a location of the first view based on the limb location.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: May 5, 2020
    Assignee: Lenovo (Singapore) PTE LTD
    Inventors: Russell Speight VanBlon, John Carl Mese, Nathan J. Peterson
  • Patent number: 10459526
    Abstract: In one embodiment, a method includes displaying, on a client system, a visual scene with one or more first objects and one or more second objects. The one or more second objects are associated with an augmented reality context. A first touch event handler, associated with an operating system running on the client system, may receive a set of touch events and send the set of touch events to a second touch event handler running on the client system. The second touch event handler may detect a first subset of touch events relating to the one or more second objects. The second touch event handler may process the first subset of touch events and send a second subset of touch events relating to the one or more first objects to the first touch event handler. The first touch event handler may process the second subset of touch events.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: October 29, 2019
    Assignee: Facebook, Inc.
    Inventors: Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Michael Slater, Sergei Viktorovich Anpilov
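    A schematic sketch of the two-handler routing described in the abstract: the OS-level handler forwards touch events to the AR handler, which keeps the subset hitting AR objects and returns the remainder. The handler classes and event dicts are invented for illustration.
    ```python
    class ARTouchHandler:
        def __init__(self, ar_object_ids):
            self.ar_object_ids = set(ar_object_ids)

        def handle(self, events):
            """Consume events on AR objects; return the remainder to the OS handler."""
            mine = [e for e in events if e["target"] in self.ar_object_ids]
            rest = [e for e in events if e["target"] not in self.ar_object_ids]
            for event in mine:
                print("AR handler processed", event)
            return rest

    class OSTouchHandler:
        def __init__(self, ar_handler):
            self.ar_handler = ar_handler

        def dispatch(self, events):
            remaining = self.ar_handler.handle(events)   # second handler filters first
            for event in remaining:                      # OS handles ordinary objects
                print("OS handler processed", event)

    os_handler = OSTouchHandler(ARTouchHandler({"ar-mask", "ar-sticker"}))
    os_handler.dispatch([{"target": "ar-mask", "type": "tap"},
                         {"target": "photo", "type": "tap"}])
    ```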
  • Patent number: 10430692
    Abstract: In some implementations, a training platform may receive data for generating synthetic models of a body part, such as a hand. The data may include information relating to a plurality of potential poses of the hand. The training platform may generate a set of synthetic models of the hand based on the information, where each synthetic model, in the set of synthetic models, represents a respective pose of the plurality of potential poses. The training platform may derive an additional set of synthetic models based on the set of synthetic models by performing one or more processing operations with respect to at least one synthetic model in the set of synthetic models, and may cause the set of synthetic models and the additional set of synthetic models to be provided to a deep learning network to train the deep learning network to perform image segmentation, object recognition, or motion recognition.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: October 1, 2019
    Assignee: Capital One Services, LLC
    Inventors: Reza Farivar, Kenneth Taylor, Austin Walters, Joseph Ford, III, Rittika Adhikari
  • Patent number: 10394314
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: August 27, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
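    A toy sketch of the distance-driven adjustment: given the user-to-display distance derived from the depth data, pick feature sizes for the interface. The thresholds and returned fields are invented.
    ```python
    def adjust_ui(distance_m):
        """Pick UI feature sizes appropriate to how far the located user stands."""
        if distance_m < 1.0:          # close: dense layout, small text
            return {"font_px": 18, "tile_count": 12, "show_details": True}
        if distance_m < 3.0:          # mid-range: larger touch/read targets
            return {"font_px": 32, "tile_count": 6, "show_details": True}
        return {"font_px": 56, "tile_count": 3, "show_details": False}   # far away

    for d in (0.6, 2.0, 4.5):
        print(d, adjust_ui(d))
    ```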
  • Patent number: 10379733
    Abstract: An apparatus comprises at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to cause a view of a three-dimensional graphical user interface to be displayed on a touch-sensitive display, the three-dimensional graphical user interface comprising a three-dimensional arrangement of a plurality of graphical objects, each of the graphical objects, when displayed, having an associated display parameter, to identify at least one graphical object for which the associated display parameter satisfies a predetermined criterion, and to enable individual selectability in respect of the identified at least one graphical object, wherein each individually selectable graphical object is selectable with a touch input and wherein individually selecting a graphical object with a touch input causes an action to be performed in respect of the selected graphical object.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: August 13, 2019
    Assignee: NOKIA TECHNOLOGIES OY
    Inventor: Ashley Colley
  • Patent number: 10168768
    Abstract: Presented herein are systems and methods to facilitate interactions in an interactive space. Interactions may include interactions between one or more real-world objects and one or more virtual objects. The interactions with a virtual object may be facilitated through the use of a secondary virtual object that may be provided as part of the virtual object. Interactions with the secondary virtual object may be translated into interactions with the virtual object.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: January 1, 2019
    Assignee: Meta Company
    Inventor: Zachary R. Kinstner
  • Patent number: 10147239
    Abstract: A server for content creation is described. A content creation tool of the server receives, from a first device, a content identifier of a physical object, a virtual object content, and a selection of a template corresponding to an interactive feature for the virtual object content. The content creation tool generates a content dataset based on the content identifier of the physical object, the virtual object content, and the selected template. The content creation tool provides the content dataset to a second device, the second device configured to display the interactive feature corresponding to the selected template.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: December 4, 2018
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 10074345
    Abstract: A mobile terminal includes a display unit to display a first surface of a multifaceted graphical object, a motion detecting unit to detect a motion of the mobile terminal, and a control unit to switch the displayed first surface to a second surface of the multifaceted graphical object based on the detected motion. A method for switching a display surface of a multifaceted graphical object includes displaying a first surface of the multifaceted graphical object on a mobile terminal, detecting a motion of the mobile terminal, and switching the displayed first surface to a second surface of the multifaceted graphical object based on the detected motion.
    Type: Grant
    Filed: January 15, 2013
    Date of Patent: September 11, 2018
    Assignee: Pantech Inc.
    Inventors: Sang-Wook Park, Hyoung-Il Park, Seung-Jin Ahn
  • Patent number: 10055059
    Abstract: An electronic device including a touch screen configured to display at least one graphic object for executing an operation on the electronic device; at least first and second magnetic sensors configured to detect a spatial position of an input device having a magnetic field generating unit; and a controller configured to, in response to a touch applied to the graphic object using the input device for executing the operation, execute a hold mode of holding the execution of the operation while the spatial position of the input device is moved away from the touch screen while being maintained within a reference range, and release the hold mode and execute the operation when the spatial position of the input device is moved out of the reference range.
    Type: Grant
    Filed: November 17, 2016
    Date of Patent: August 21, 2018
    Assignee: LG ELECTRONICS INC.
    Inventors: Suyoung Lee, Yongjae Kim, Yoonchan Won
  • Patent number: 10037614
    Abstract: Approaches provide for minimizing variations in the height of a camera of a computing device when estimating the distance to objects represented in image data captured by the camera. For example, a front-facing camera of a computing device can be used to capture a live camera view of a user. An application can analyze the image data to locate features of the user's face for purposes of aligning the user with the computing device. As the position and/or orientation of the device changes with respect to the user, the image data can be analyzed to detect whether a location of a representation of a feature of the user aligns with the alignment element. Once the feature is aligned with the alignment element, a rear-facing camera (or other camera) can capture second image data of an object.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: July 31, 2018
    Assignee: A9.COM, INC.
    Inventors: Eran Borenstein, Arunkumar Devadoss, Zur Nehushtan
  • Patent number: 9928662
    Abstract: A system includes a first and a second hand tracking device, the first and second hand tracking devices being configured to provide position information associated with hands of a user; hardware processors; a memory; and a temporal manipulation module. The temporal manipulation module is configured to perform operations including recording changes made in a virtual reality (VR) environment involving the user, thereby creating a recorded content, detecting a first revolution gesture performed by the user with the first and second hand tracking devices, the first revolution gesture including the first and second hand tracking devices revolving in a circular motion about a common axis, stopping the recording based on detecting the first revolution gesture, rewinding through the recorded content based on the first revolution gesture and, during the rewinding, displaying the recorded content in the VR environment, the displaying including altering the VR environment based on the recorded content.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: March 27, 2018
    Assignee: Unity IPR ApS
    Inventor: Gregory Lionel Xavier Jean Palmaro
  • Patent number: 9805256
    Abstract: A method for setting a tridimensional shape detection classifier for detecting tridimensional shapes from depth images in which each pixel represents a depth distance from a source to a scene, the classifier comprising a forest of at least one binary tree (T) for obtaining the class probability (p) of a given shape, the tree comprising nodes associated with a distance function (f) that, taking at least a pixel position in a patch, calculates a pixel distance.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: October 31, 2017
    Assignee: EXIPPLE STUDIO, S.L.
    Inventors: Marcel Alcoverro Vidal, Adolfo Lopez Mendez, Xavier Suau Cuadros
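    A sketch of the classifier's ingredients follows: a distance function f that compares depths at two offsets around a pixel position in a patch, threshold nodes that route the patch down a binary tree, and a forest that averages leaf class probabilities. The tree structure, offsets, and probabilities are made up for the example.
    ```python
    from dataclasses import dataclass

    def depth_feature(depth_patch, pixel, offset_a, offset_b):
        """f: compare depths at two offsets around a pixel position in the patch."""
        (r, c), (ra, ca), (rb, cb) = pixel, offset_a, offset_b
        return depth_patch[r + ra][c + ca] - depth_patch[r + rb][c + cb]

    @dataclass
    class Node:
        pixel: tuple = (1, 1)
        offset_a: tuple = (0, 0)
        offset_b: tuple = (0, 1)
        threshold: float = 0.0
        left: "Node" = None          # children reached by the threshold test
        right: "Node" = None
        class_prob: dict = None      # set only at leaves, e.g. {"hand": 0.9}

    def classify(tree, depth_patch):
        node = tree
        while node.class_prob is None:
            f = depth_feature(depth_patch, node.pixel, node.offset_a, node.offset_b)
            node = node.left if f < node.threshold else node.right
        return node.class_prob

    def forest_probability(trees, depth_patch, label):
        return sum(classify(t, depth_patch).get(label, 0.0) for t in trees) / len(trees)

    leaf_near = Node(class_prob={"hand": 0.9})
    leaf_far = Node(class_prob={"hand": 0.2})
    tree = Node(threshold=0.05, left=leaf_near, right=leaf_far)
    patch = [[1.00, 1.02, 1.01], [1.00, 1.01, 1.30], [1.02, 1.01, 1.00]]
    print(forest_probability([tree], patch, "hand"))   # -> 0.9
    ```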
  • Patent number: 9704028
    Abstract: An information processing system that acquires image data corresponding to a target object that is a target for gesture recognition captured by an imaging device; determines whether a distance between the target object and the imaging device is inadequate for recognition of a gesture made by the target object; and outputs a notification when the determining determines that the distance between the target object and the imaging device is inadequate for recognition of a gesture made by the target object.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: July 11, 2017
    Assignee: SONY CORPORATION
    Inventor: Jun Kimura
  • Patent number: 9679416
    Abstract: A server for content creation is described. A content creation tool of the server receives, from a first device, a content identifier of a physical object, a virtual object content, and a selection of a template corresponding to an interactive feature for the virtual object content. The content creation tool generates a content dataset based on the content identifier of the physical object, the virtual object content, and the selected template.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: June 13, 2017
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 9286709
    Abstract: In rendering a computer-generated animation sequence, pieces of animation corresponding to shots of the computer-generated animation sequence are obtained. Measurements of action in the shots are obtained. Frame rates, which can be different, for the shots are determined based on the determined measurements of action in the shots. The shots are rendered based on the determined frame rates for the shots. The rendered shots with frame rate information indicating the frame rates used in rendering the shots are stored.
    Type: Grant
    Filed: July 27, 2009
    Date of Patent: March 15, 2016
    Assignee: DreamWorks Animation LLC
    Inventor: Erik Nash
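    A toy sketch of the per-shot frame-rate decision: map a measurement of action to a frame rate and store that rate alongside the rendered shot. The thresholds and rates are invented, not DreamWorks' values.
    ```python
    def frame_rate_for(action_measure):
        """More action -> higher frame rate; quiet dialogue shots can use fewer frames."""
        if action_measure < 0.2:
            return 24
        if action_measure < 0.6:
            return 36
        return 60

    def render_sequence(shots):
        rendered = []
        for shot in shots:
            fps = frame_rate_for(shot["action"])
            rendered.append({"name": shot["name"], "fps": fps})  # rate stored with the shot
        return rendered

    print(render_sequence([{"name": "dialogue", "action": 0.1},
                           {"name": "chase", "action": 0.8}]))
    ```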
  • Patent number: 9269395
    Abstract: Disclosed is a display control apparatus for performing display control in order to display an image on a display apparatus; the display control apparatus comprising an input unit configured to input image group data composed of a plurality of images to be sequentially played; an acquiring unit configured to acquire information of a display size of a specific object included in each of the images; and a deciding unit configured to decide a play speed at which the plurality of images are sequentially played one by one in accordance with the display size of the specific object included in each of the images.
    Type: Grant
    Filed: August 2, 2013
    Date of Patent: February 23, 2016
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Masatsugu Sasaki, Takashi Yamamoto
  • Patent number: 9262865
    Abstract: A server for content creation is described. A content creation tool of the server generates an experience content dataset using a template to process a content identifier and virtual object content. An experience generator of the server provides the experience content dataset to a device that recognizes the content identifier, to generate an interactive experience with the virtual object content at the device.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: February 16, 2016
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 9256348
    Abstract: Computer simulation generates improved 3D images of human movement involving an object associated with the human character. A set of axes in 3 dimensional space is originally defined for tracking orientation of the human character in a 3D image. This set of axes is subsequently automatically applied to and used for object(s) carried by the human character. The object is displayed at a constant (same, unchanged) orientation while the human character is illustrated moving in certain ways in succeeding 3D images.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: February 9, 2016
    Assignee: Dassault Systemes Americas Corp.
    Inventors: Prasad Belvadi, André Chamberland, David Brouillette
  • Patent number: 9240075
    Abstract: A server for campaign optimization is described. An experience content dataset is generated for an augmented reality application of a device based on analytics results. The analytics results are generated based on analytics data received from the device. The experience content dataset is provided to the device. The device recognizes a content identifier of the experience content dataset and generates an interactive experience with a presentation of virtual object content that is associated with the content identifier.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: January 19, 2016
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 9207756
    Abstract: An apparatus and method for controlling a 3-dimensional (3D) image using a virtual tool are provided. The apparatus may detect a user object that controls the 3D image, and determine a target virtual tool matching movement of the user object. In addition, the apparatus may display the determined target virtual tool along with the 3D image.
    Type: Grant
    Filed: August 14, 2012
    Date of Patent: December 8, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kwon Ju Yi, Sung Joo Suh, Du Sik Park, Byung In Yoo, Chang Kyu Choi
  • Patent number: 9069403
    Abstract: A display operating device includes a position detecting section, an axis decision section, a movement determining section, a display section, and a first rotation processing section. The position detecting section is configured to detect three positions on a touch surface. The axis decision section is configured to decide a first axis using first and second positions which are two of the three positions. The movement determining section is configured to determine whether or not a third position, which is the remaining one of the three positions, is moving. The first rotation processing section is configured to rotate the 3D object by a first predetermined angle about the first axis when, with the 3D object displayed on the display section, the three positions are detected and the third position is determined to be moving.
    Type: Grant
    Filed: July 25, 2014
    Date of Patent: June 30, 2015
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Norie Fujimoto
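    A sketch of the three-touch logic: the first two touch positions decide the rotation axis, and rotation is triggered only when the third touch is determined to be moving. The data shapes, movement threshold, and fixed rotation step are assumptions.
    ```python
    def detect_rotation(touches_before, touches_after, step_deg=5.0):
        """Return (axis, angle) if the third touch moved while the first two define the axis."""
        if len(touches_before) != 3 or len(touches_after) != 3:
            return None
        p1, p2, p3_old = touches_before
        _, _, p3_new = touches_after
        axis = (p2[0] - p1[0], p2[1] - p1[1])          # first axis from touches 1 and 2
        moved = (p3_new[0] - p3_old[0]) ** 2 + (p3_new[1] - p3_old[1]) ** 2 > 4.0
        return (axis, step_deg) if moved else None

    before = [(100, 100), (300, 100), (200, 400)]
    after = [(100, 100), (300, 100), (200, 430)]       # only the third finger moved
    print(detect_rotation(before, after))              # ((200, 0), 5.0)
    ```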
  • Patent number: 9041734
    Abstract: Image information displayed on an electronic device can be modified based at least in part upon a relative position of a user with respect to a device. Mapping, topological or other types of positional data can be used to render image content from a perspective that is consistent with a viewing angle for the current relative position of the user. As that viewing angle changes, as a result of movement of the user and/or the device, the content can be re-rendered or otherwise updated to display the image content from a perspective that reflects the change in viewing angle. Simulations of effects such as parallax and occlusions can be used with the change in perspective to provide a consistent user experience that provides a sense of three-dimensional content even when that content is rendered on a two-dimensional display. Lighting, shading and/or other effects can be used to enhance the experience.
    Type: Grant
    Filed: August 12, 2011
    Date of Patent: May 26, 2015
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Howard D. Look, Leo B. Baldwin, Kenneth M. Karakotsios, Dennis Hodge, Isaac S. Noble, Volodymyr V. Ivanchenko, Jeffrey P. Bezos
  • Publication number: 20150143302
    Abstract: A method of providing a virtual reality-based three-dimensional interface for a web object search and real-time metadata representations, and a web search system using the three-dimensional interface, are provided. In this method, a plurality of nodes, each written together with a node identification label string to which an information page is linked, and a plurality of links connecting the plurality of nodes are displayed in a three-dimensional form that is either static or spinning. Thereafter, by performing an action of reduction, enlargement, movement, rotation, expanding, hiding, removal, or addition on a three-dimensional object that is formed with the nodes and links displayed in the three-dimensional form according to a user input, the three-dimensional object is displayed or an information page that is linked to a node selected by the user is displayed.
    Type: Application
    Filed: July 9, 2014
    Publication date: May 21, 2015
    Inventors: Seongju Chang, Apurva Gupta
  • Patent number: 9037486
    Abstract: A method for disabling and re-enabling third-party advertisements is disclosed. An Internet accessible "virtual world" or interactive on-line community can have its advertisements disabled by the entering into a website, and subsequent validation, of a registration code that is associated with a toy; once validated, the method includes displaying a virtual representation of the toy on the website, providing virtual world content so that the virtual representation of the toy is caused to interact with the virtual world content and the toy virtual representations of other users, displaying advertisements on the website in a first mode, and allowing customization of the virtual world content, including the disabling of advertisements, in a second mode. In a similar manner the third-party advertisements can be re-enabled.
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: May 19, 2015
    Assignee: GANZ
    Inventors: Howard Ganz, Karl Joseph Borst, Jessica Boyd
  • Patent number: 9032336
    Abstract: The present invention provides an input interface that a human can operate naturally and intuitively using a human gesture (motion), by including means for acquiring motion information based on a gesture and input interface means for generating information for operating an object on a desktop of a computer on the basis of the motion information. In this case, the motion information is matched against a template for recognizing a motion of a user and a matched event is outputted so that the object is operated. The object includes a pie menu in which menu items are disposed in a circular form. A user is allowed to select a desired menu item in the pie menu in accordance with an angle at which the user twists the wrist.
    Type: Grant
    Filed: September 7, 2006
    Date of Patent: May 12, 2015
    Assignee: Osaka Electro-Communication University
    Inventors: Hirotaka Uoi, Katsutoshi Kimura
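    The pie-menu selection reduces to mapping a wrist-twist angle onto one of the circularly arranged items, as the short sketch below illustrates (the menu labels and angles are hypothetical).
    ```python
    def pie_menu_selection(items, twist_deg):
        """Map a wrist-twist angle to one of the menu items arranged in a circle."""
        sector = 360.0 / len(items)
        index = int((twist_deg % 360.0) // sector)
        return items[index]

    menu = ["open", "copy", "paste", "delete"]
    for angle in (10, 100, 200, 350):
        print(angle, "->", pie_menu_selection(menu, angle))
    ```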
  • Patent number: 9032334
    Abstract: According to the present invention, disclosed is an electronic device having a three-dimensional display, comprising a sensor obtaining information about a gesture's motion; a three-dimensional display displaying a pointer and/or an object moving in three-dimensional space according to the gesture's motion; and a controller checking applications in execution, determining a movement distance of the pointer and/or the object in proportion to a movement distance of the gesture by taking account of gesture sensitivity selected according to the type of the checked application, and controlling the display to move the pointer and/or the object as much as the determined movement distance.
    Type: Grant
    Filed: December 21, 2011
    Date of Patent: May 12, 2015
    Assignee: LG Electronics Inc.
    Inventors: Sunjin Yu, Taehyeong Kim, Sang Ki Kim, Soungmin Im, Sangki Kim
  • Patent number: 9003358
    Abstract: Techniques and a system for creating a vendor-independent computer language and compiling the language into an architecture specification language, allowing for taking a source data stream (file, WSDL, XML) and passing it through a language parser, populating a storage medium with a plurality of technical inputs and vendor technical specifications for generic technologies and probable technologies required for desired architectures generated by the language parser, and optimizing the inputs and creating relationships between technologies and groups of technologies and storing results in the storage medium.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: April 7, 2015
    Inventor: Russell Sellers
  • Patent number: 8990705
    Abstract: Modifying display of an object in a display of part of a virtual universe is provided. In one embodiment, the process obtains avatar tracking data that identifies a location of an avatar in relation to a range of the object. The range includes a viewable field. The process then selects a data collection method based on the location of the set of avatars. In response to detecting an event for triggering modification of the object, the process calculates a set of color modifiers based on display setting data to form a modified color. Thereafter, the process renders the object using the modified color when the location of the set of avatars is within the range of the object.
    Type: Grant
    Filed: July 1, 2008
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Peter George Finn, Rick Allen Hamilton, II, Brian Marshall O'Connell, Clifford Alan Pickover, Keith Raymond Walker
  • Publication number: 20150082180
    Abstract: Approaches enable three-dimensional (3D) display and interaction with interfaces (such as a webpage, an application, etc.) when the device is operating in a 3D view mode. For example, interface elements can be highlighted, emphasized, animated, or otherwise altered in appearance, and/or arrangement in the renderings of those interfaces based at least in part on an orientation of the device or a position of a user using the device. Further, the 3D view mode can provide for an animated 3D departure and appearance of elements as the device navigates from a current page to a new page. Further still, approaches provide for the ability to specify 3D attributes (such as the appearance, action, etc.) of the interface elements. In this way, a developer of such interfaces can use information (e.g., tags, CSS, JavaScript, etc.) to specify a 3D appearance change to be applied to at least one interface element when the 3D view mode is activated.
    Type: Application
    Filed: September 17, 2013
    Publication date: March 19, 2015
    Applicant: Amazon Technologies, Inc.
    Inventors: Charley Ames, Dennis Pilarinos, Peter Frank Hill, Sasha Mikhael Perez, Timothy Thomas Gray
  • Publication number: 20150082253
    Abstract: A method for a computer system includes determining a plurality of positions of portions of a hand of a user simultaneously placed upon a user interface device of the computer system, retrieving a set of display icons in response to the plurality of positions of the portions of the user hand, displaying the display icons from the set of display icons on a display relative to the plurality of positions of the portions of the user hand; while displaying the display icons on the display, determining a user selection of a display icon from the display icons, and performing a function in response to the user selection of the display icon.
    Type: Application
    Filed: September 2, 2014
    Publication date: March 19, 2015
    Inventors: Tony DeRose, Kenrick Kin
  • Publication number: 20150074610
    Abstract: Embodiments include a system for providing blood flow information for a patient. The system may include at least one computer system including a touchscreen. The at least one computer system may be configured to display, on the touchscreen, a three-dimensional model representing at least a portion of an anatomical structure of the patient based on patient-specific data. The at least one computer system may also be configured to receive a first input relating to a first location on the touchscreen indicated by at least one pointing object controlled by a user, and the first location on the touchscreen may indicate a first location on the displayed three-dimensional model. The at least one computer system may be further configured to display first information on the touchscreen, and the first information may indicate a blood flow characteristic at the first location.
    Type: Application
    Filed: November 13, 2014
    Publication date: March 12, 2015
    Inventors: Gregory R. Hart, John H. Stevens
  • Patent number: 8954853
    Abstract: An after-action, mission review tool that displays camera and navigation sensor data allowing a user to pan, tilt, and zoom through the images from front and back cameras on a vehicle, while simultaneously viewing time/date information, along with any available navigation information such as the latitude and longitude of the vehicle at that time instance. Also displayed is a visual representation of the path the vehicle traversed; when the user clicks on the path, the image is automatically changed to the image corresponding to that position. If aerial images of the area are available, the path can be plotted on the geo-referenced image.
    Type: Grant
    Filed: September 6, 2012
    Date of Patent: February 10, 2015
    Assignee: Robotic Research, LLC
    Inventors: Alberto Daniel Lacaze, Karl Nicholas Murphy, Anne Rachel Schneider, Raymond Paul Wilhelm, III
  • Publication number: 20150020031
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Application
    Filed: July 3, 2014
    Publication date: January 15, 2015
    Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
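    A minimal sketch of the coincidence test: map the sensed position of the user object into the shared display space and invoke the image's programmed function when the two positions are substantially coincident. The mapping, tolerance, and callback are assumptions.
    ```python
    import math

    def to_display_space(sensor_pos, scale=1.0, offset=(0.0, 0.0, 0.0)):
        """Map a position in sensor space into the display's coordinate system."""
        return tuple(scale * p + o for p, o in zip(sensor_pos, offset))

    def check_coincidence(hand_sensor_pos, image_pos, on_activate, tolerance=0.03):
        hand = to_display_space(hand_sensor_pos)
        dist = math.dist(hand, image_pos)
        if dist <= tolerance:            # user object and image substantially coincident
            on_activate()                # execute the function programmed for the image
        return dist

    check_coincidence((0.10, 0.20, 0.31), (0.10, 0.20, 0.30),
                      on_activate=lambda: print("button pressed"))
    ```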
  • Publication number: 20150012890
    Abstract: Described is a virtual environment built by drawing stacks of three-dimensional objects (e.g., discrete blocks) as manipulated by a user. A user manipulates one or more objects, resulting in stack heights being changed, e.g., by adding, removing or moving objects to/from stacks. The stack heights are maintained as sample points, e.g., each point indexed by its associated horizontal location. A graphics processor expands height-related information into visible objects or stacks of objects by computing the vertices for each stack to draw that stack's top surface, front surface and/or side surface based upon the height-related information for that stack. Height information for neighboring stacks may be associated with the sample point, whereby a stack is only drawn to where it is occluded by a neighboring stack, that is, by computing the lower vertices for a surface according to the height of a neighboring stack where appropriate.
    Type: Application
    Filed: September 22, 2014
    Publication date: January 8, 2015
    Applicant: MICROSOFT CORPORATION
    Inventors: Mark T. Finch, Matthew B. MacLaurin, Stephen B. Coy, Eric S. Anderson, Lili Cheng
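    A sketch of the height-sample idea: stack heights indexed by horizontal location, with a stack's visible front face drawn only down to where the neighbouring stack occludes it. The dict layout, the choice of "front" neighbour, and the helper names are assumptions.
    ```python
    heights = {(0, 0): 3, (0, 1): 1, (1, 0): 2}     # (x, z) -> stack height in blocks

    def front_face_span(x, z):
        """Vertical extent of the visible front surface of the stack at (x, z)."""
        top = heights.get((x, z), 0)
        occluder = heights.get((x, z + 1), 0)        # stack directly in front
        bottom = min(occluder, top)                  # hidden below the neighbour's top
        return (bottom, top) if top > bottom else None

    def add_block(x, z):
        heights[(x, z)] = heights.get((x, z), 0) + 1  # user manipulation changes heights

    print(front_face_span(0, 0))   # (1, 3): lower part hidden by the stack at (0, 1)
    add_block(0, 1)
    print(front_face_span(0, 0))   # (2, 3) after the occluding stack grows
    ```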
  • Patent number: 8930844
    Abstract: In a network-linked computer graphics image rendering system serving to render images of objects in scenes, these objects are so rendered from high-resolution 3D models and textures that are, in particular, stored and maintained on one or more server computers in one or more libraries that are secure. Using stand-in object models and textures, design professionals at client computers are able to "fine-tune" and preview designs that incorporate objects stored securely in the server's(s') models' library(ies). Yet the high-resolution, 3D, relatively expensive, and proprietary object models remain completely secure at one (i.e., centralized) or more (i.e., distributed) server computers. 2D perspective-view or stereo in-situ photorealistic images of scenes incorporating these objects are rendered at the one or more server computers, for subsequent remote viewing at the one or more client computers.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: January 6, 2015
    Inventor: Bruce Carlin
  • Patent number: 8913057
    Abstract: There is provided an information processing device that includes a virtual space recognition unit for analyzing 3D space structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display unit for displaying the object arranged in the virtual space, on a display device, a detection unit for detecting device information of the display device, and an execution unit for executing predetermined processing toward the object based on the device information.
    Type: Grant
    Filed: February 9, 2011
    Date of Patent: December 16, 2014
    Assignee: Sony Corporation
    Inventors: Hiroyuki Ishige, Kazuhiro Suzuki, Akira Miyashita
  • Patent number: 8907943
    Abstract: A three-dimensional ("3D") display environment for a mobile device is disclosed that uses orientation data from one or more onboard sensors to automatically determine and display a perspective projection of the 3D display environment based on the orientation data, without the user physically interacting with (e.g., touching) the display.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: December 9, 2014
    Assignee: Apple Inc.
    Inventor: Patrick Piemonte