Picking 3D Objects Patents (Class 715/852)
-
Patent number: 12039679
Abstract: Systems and methods for data asset acquisition and obfuscation can be helpful for retrieving augmented reality rendering data assets from third parties. The sending of a software development kit and receiving back data assets can ensure the data assets are compatible with the augmented reality rendering experience in the user interface. The data acquisition system with obfuscation can also ensure the code generated by third parties is stripped of semantics and has reduced readability.
Type: Grant
Filed: November 23, 2022
Date of Patent: July 16, 2024
Assignee: GOOGLE LLC
Inventors: Ivan Neulander, Ian Joseph Roth, Hao Wang, Agustin III Olivan Venezuela, Subramanian Shyamsunder Mathur, Xuemei Zhao, Valdrin Koshi, James Sraw Singh
-
Patent number: 12008208
Abstract: A method includes displaying a plurality of computer-generated objects, including a first computer-generated object at a first position within an environment and a second computer-generated object at a second position within the environment. The first computer-generated object corresponds to a first user interface element that includes a first set of controls for modifying a content item. The method includes, while displaying the plurality of computer-generated objects, obtaining extremity tracking data. The method includes moving the first computer-generated object from the first position to a third position within the environment based on the extremity tracking data. The method includes, in accordance with a determination that the third position satisfies a proximity threshold with respect to the second position, merging the first computer-generated object with the second computer-generated object in order to generate a third computer-generated object for modifying the content item.
Type: Grant
Filed: March 15, 2023
Date of Patent: June 11, 2024
Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
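The proximity-threshold merge described in this abstract can be pictured with a short sketch. This is an illustration only, not the patented implementation; the class names, coordinates, and the 0.1 m threshold are assumptions.

```python
# Minimal sketch: merge two UI objects when they are dragged close enough.
from dataclasses import dataclass
import math

@dataclass
class UIObject:
    name: str
    position: tuple  # (x, y, z) in environment coordinates

def maybe_merge(first: UIObject, second: UIObject, threshold: float = 0.1):
    """Return a combined object when the two are within `threshold` meters."""
    if math.dist(first.position, second.position) <= threshold:
        # A merged object could expose the union of both control sets.
        return UIObject(name=f"{first.name}+{second.name}", position=second.position)
    return None

# Example: drag one control panel next to another; within 10 cm they merge.
panel_a = UIObject("color_controls", (0.05, 1.2, -0.5))
panel_b = UIObject("brush_controls", (0.10, 1.2, -0.5))
print(maybe_merge(panel_a, panel_b))
```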
-
Patent number: 11934625
Abstract: Provided is a mobile terminal which allows pieces of furniture to be virtually arranged. A mobile terminal according to one embodiment of the present invention comprises: a wireless communication unit which is capable of communicating with an external server or an external device; a display unit for displaying an execution screen of a certain application; and a control unit, wherein the execution screen at least comprises: a first area for displaying a first image corresponding to a certain area; a second area for displaying information on each of a plurality of pieces of furniture which can virtually be arranged on the first image; and a third area which includes a chat room for exchanging opinions related to the virtual arrangement of the pieces of furniture on the first image, with a user of at least one predetermined external device on which the certain application is installed.
Type: Grant
Filed: July 6, 2021
Date of Patent: March 19, 2024
Assignee: LG ELECTRONICS INC.
Inventors: Euikyeom Kim, Kyungtae Oh, Seungju Choi, Yoonjung Son
-
Patent number: 11860324
Abstract: A method is described for estimating seismic velocity from seismic data by training a neural network using a subset of a seismic dataset and the velocity model; estimating a second velocity model using the neural network and a second subset of the seismic dataset; and displaying the second velocity model on a graphical user interface. The method may be executed by a computer system.
Type: Grant
Filed: July 7, 2021
Date of Patent: January 2, 2024
Assignee: Chevron U.S.A. Inc.
Inventor: Enning Wang
-
Patent number: 11809507
Abstract: The subject technology causes, at a client device, display of a graphical interface comprising a plurality of selectable graphical items, each selectable graphical item corresponding to a respective content item associated with a different geolocation. The subject technology receives, at the client device, a selection of a first selectable graphical item from the plurality of selectable graphical items, the first selectable graphical item corresponding to a particular geolocation. The subject technology causes display, at the client device, of a second plurality of selectable graphical items, each of the second plurality of selectable graphical items corresponding to a particular second geolocation of an activity or place of business within a geographical area associated with the particular geolocation.
Type: Grant
Filed: January 12, 2021
Date of Patent: November 7, 2023
Assignee: SNAP INC.
Inventors: Kaveh Anvaripour, Virginia Drummond, Erika Michele Kehrwald, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
-
Patent number: 11521358
Abstract: Systems and methods for data asset acquisition and obfuscation can be helpful for retrieving augmented reality rendering data assets from third parties. The sending of a software development kit and receiving back data assets can ensure the data assets are compatible with the augmented reality rendering experience in the user interface. The data acquisition system with obfuscation can also ensure the code generated by third parties is stripped of semantics and has reduced readability.
Type: Grant
Filed: January 7, 2021
Date of Patent: December 6, 2022
Assignee: GOOGLE LLC
Inventors: Ivan Neulander, Ian Joseph Roth, Hao Wang, Agustin III Olivan Venezuela, Subramanian Shyamsunder Mathur, Xuemei Zhao, James Sraw Singh, Valdrin Koshi
-
Patent number: 11257170
Abstract: In one embodiment, a method includes rendering, based on a three-dimensional model, a virtual object in a three-dimensional virtual environment, where the rendering is customized for a user of the virtual environment, the customized rendering being based on a current stage of the user in a course of stages, wherein the course comprises a plurality of stages including a first stage, a final stage, and one or more intermediate stages, each stage being associated with one or more transition conditions, and wherein the course comprises at least one path through the stages from the first stage to the final stage; detecting, in the virtual environment, one or more actions by the user; updating the current stage of the user in response to the detected actions; and providing information relating to the updated current stage of the user to an interested party.
Type: Grant
Filed: March 5, 2019
Date of Patent: February 22, 2022
Assignee: Meta Platforms, Inc.
Inventor: Amod Ashok Dange
-
Patent number: 11204974
Abstract: A method includes displaying and capturing image data containing an object and accessing a plurality of records related to objects, selecting a record related to the captured object, obtaining an identifier of a vendor of the object of the selected data record, combining the selected data record and the vendor identifier to form a search record, displaying, based on the search record, an augmented reality interface to receive a first interactive action for saving the search record, receiving the first interactive action, saving, in response to receiving the first interactive action, the search record into a searchable data structure, receiving a second interactive action, retrieving, in response to receiving the second interactive action, the search record from the searchable data structure, updating the vendor identifier based on the retrieved search record, and displaying information related to the search record.
Type: Grant
Filed: July 19, 2019
Date of Patent: December 21, 2021
Assignee: Capital One Services, LLC
Inventors: Staevan Duckworth, Daniel Martinez, William Hardin, Victoria Blacksher, Jonathan Castaneda, Stewart Youngblood
-
Patent number: 11132054
Abstract: An electronic apparatus is provided. The electronic apparatus according to an embodiment includes a storage, a communicator comprising communication circuitry, and a processor configured to render a virtual reality (VR) image including a first object corresponding to a first display device based on VR image information stored in the storage, wherein the processor is further configured to receive motion information of a second display device from the second display device through the communicator, to render one area of the VR image including the first object based on the first object being included in a view of a second object corresponding to the second display device based on the motion information of the second display device, and to control the communicator to transmit the rendered one area of the VR image to the second display device.
Type: Grant
Filed: August 13, 2019
Date of Patent: September 28, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Bonggil Bak, Donghwan Ji
-
Patent number: 10937224
Abstract: Systems and methods for rendering an Augmented Reality (“AR”) object. The methods comprise: drawing a first bitmap of a first AR object rendered by a server on a display of a client device; selecting/focusing on a second AR object or a part of the first AR object shown on the display; communicating a request for the second AR object or the part of the first AR object from the client device to the server; obtaining, by the server, an object file for the second AR object or part of the first AR object; providing the object file to the client device; locally rendering, by the client device, the second AR object or part of the first AR object as a second bitmap; superimposing the second bitmap on the first bitmap to generate a third bitmap; and drawing the third bitmap on the display of the client device.
Type: Grant
Filed: January 10, 2020
Date of Patent: March 2, 2021
Assignee: Citrix Systems, Inc.
Inventors: Pawan Kumar Dixit, Mudit Mehrotra
-
Patent number: 10650606
Abstract: Provided herein are a method, apparatus, and computer program product for generating a first and second three-dimensional interactive environment. The first three-dimensional interactive environment may contain one or more engageable virtual interfaces that correspond to one or more items. Upon engagement with a virtual interface, the second three-dimensional interactive environment is produced to provide a virtual simulation related to the one or more items.
Type: Grant
Filed: June 20, 2019
Date of Patent: May 12, 2020
Assignee: Groupon, Inc.
Inventor: Scott Werner
-
Patent number: 10525355
Abstract: A processing method includes mapping a first physical object into a first virtual object in a virtual space; generating a collision signal in response to the first virtual object overlapping a second virtual object in the virtual space; and providing the collision signal to the first physical object, so that the first physical object operates according to the collision signal.
Type: Grant
Filed: August 18, 2017
Date of Patent: January 7, 2020
Assignee: HTC Corporation
Inventor: Wei-Cheng Chiu
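The overlap-then-signal flow in this abstract amounts to a collision test on the mapped object plus a callback to the physical device. Below is a minimal sketch, assuming simple bounding spheres and an arbitrary signal format; it is not the patented implementation.

```python
# Sketch: map a tracked physical object into the virtual space and emit a
# collision signal (e.g., to trigger vibration) when it overlaps another object.
from dataclasses import dataclass
import math

@dataclass
class VirtualObject:
    center: tuple   # (x, y, z) in virtual-space coordinates
    radius: float

def overlaps(a: VirtualObject, b: VirtualObject) -> bool:
    """Sphere-sphere overlap test."""
    return math.dist(a.center, b.center) < (a.radius + b.radius)

def update(tracked_pose, mapped_radius, other: VirtualObject, send_signal):
    """Map the physical object's pose and send a signal on collision."""
    mapped = VirtualObject(center=tracked_pose, radius=mapped_radius)
    if overlaps(mapped, other):
        send_signal({"type": "collision"})

# Example: a hand-held controller mapped as a 5 cm sphere touches a virtual wall.
update((0.0, 1.0, -0.3), 0.05, VirtualObject((0.02, 1.0, -0.3), 0.05),
       send_signal=print)
```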
-
Patent number: 10303306
Abstract: A projection display unit (1) includes a body (10) and an invisible light application unit (30). The body (10) includes a projection optical system and a detection optical system. The projection optical system projects an image onto a projection surface (110). The detection optical system acquires an imaging signal based on invisible light. The invisible light application unit (30) applies the invisible light along a surface in the vicinity of the projection surface while being placed on a surface that is an extension of the projection surface. The body (10) is movable with respect to an output opening (31) of the invisible light application unit, and a position of the body is adjustable with respect to the projection surface.
Type: Grant
Filed: August 28, 2015
Date of Patent: May 28, 2019
Assignee: SONY CORPORATION
Inventors: Takaki Hasuike, Hajime Ishihara, Masaharu Sakata, Yasutaka Sakata
-
Patent number: 10120462
Abstract: An information processing apparatus includes a display, a sensor, and a controller. The display has a screen. The sensor is configured to detect an inclination. The controller is configured to display a first object on the screen and display a second object associated with the first object on the screen in accordance with the inclination detected by the sensor.
Type: Grant
Filed: September 20, 2017
Date of Patent: November 6, 2018
Assignee: SONY CORPORATION
Inventors: Yusuke Miyazawa, Seiji Suzuki, Yasushi Okumura
-
Patent number: 9842419
Abstract: The disclosure discloses a method and device for displaying a picture, relates to the field of electronic information, and is intended to address the problems of a long modeling period and inefficient modeling when the picture is displayed three-dimensionally. Particularly, the method includes: obtaining at least one picture sequence number, wherein a picture sequence number corresponds to a picture; substituting the at least one picture sequence number into a preset set of equations to calculate location information of at least one picture, wherein the preset set of equations is a set of equations created in a virtual three-dimensional coordinate system, and location information of a picture corresponds to a picture sequence number; and displaying the at least one picture according to the location information of the at least one picture. The disclosure is applicable to display of a picture.
Type: Grant
Filed: May 8, 2015
Date of Patent: December 12, 2017
Assignees: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY CO., LTD., HISENSE USA CORPORATION, HISENSE INTERNATIONAL CO., LTD.
Inventor: Te Bi
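The abstract does not state the preset set of equations, so the sketch below arranges pictures on an assumed circular arc in a virtual 3D coordinate system purely to illustrate the sequence-number-to-location mapping; the radius and layout are invented for illustration.

```python
# Sketch: map a picture's sequence number to an (x, y, z) location via a
# preset equation defined in a virtual 3D coordinate system (here: a circle).
import math

def location_for(sequence_number: int, total: int, radius: float = 5.0):
    """Place picture `sequence_number` of `total` evenly around a circle."""
    angle = 2.0 * math.pi * sequence_number / max(total, 1)
    return (radius * math.cos(angle), 0.0, radius * math.sin(angle))

# Example: compute locations for a 12-picture gallery.
for n in range(12):
    x, y, z = location_for(n, total=12)
    print(n, (round(x, 2), round(y, 2), round(z, 2)))
```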
-
Patent number: 9224234
Abstract: This disclosure includes a method for electronically generating a single image for product visualization. The method comprises receiving a selection of a first variation of a first consumer product layer with a first depth attribute from a plurality of variations of the first consumer product layer, each variation comprising at least one surface. The method further includes receiving a selection of a second variation of a second consumer product layer with a second depth attribute from a plurality of variations of the second consumer product layer, each variation comprising at least one surface. The method also includes layering the first variation of the first consumer product layer in the single image based at least on the first depth attribute; and layering the second variation of the second consumer product layer in the single image based at least on the second depth attribute. Related systems and apparatuses are also disclosed.
Type: Grant
Filed: February 15, 2013
Date of Patent: December 29, 2015
Assignee: MICRO*D, INC.
Inventors: Manoj Nigam, Mark McCuistion, Ron Gordon, Marek Scholaster
-
Patent number: 9195794
Abstract: A system for determining the pose of an articulated model of a virtual subject in carrying out a task within a vehicle occupant packaging design is described. The system uses an initial posture of the articulated model prior to the virtual subject carrying out the task as a starting point. The system determines pose throughout the carrying out of the task by the virtual subject. The pose is determined based on the parameters of the virtual subject, the design, the task to be completed, and a set of constraints restricting the motion of the virtual subject. The pose can be analyzed to determine the feasibility of a design for a human subject, without the need for live subject testing. The method is analytically derived and results in a kinematically and dynamically consistent posture in real-time without requiring iterative optimization.
Type: Grant
Filed: March 12, 2013
Date of Patent: November 24, 2015
Assignee: HONDA MOTOR CO., LTD.
Inventor: Behzad Dariush
-
Patent number: 9087662
Abstract: A keyboard includes a membrane switch circuit module, a metallic supporting plate, plural keys, an induction antenna assembly, and a signal processing unit. The induction antenna assembly includes a first partition plate, a second partition plate and an antenna layer. The first partition plate is arranged between the metallic supporting plate and the keycap of a specified key. The second partition plate is connected with the first partition plate and arranged between the first partition plate and the keycap of the specified key. The antenna layer is arranged between the first partition plate and the second partition plate. The signal processing unit is electrically connected with the antenna layer. When the antenna layer senses that an object enters the sensing range, the signal processing unit issues a sensing signal.
Type: Grant
Filed: December 12, 2013
Date of Patent: July 21, 2015
Assignee: PRIMAX ELECTRONICS LTD.
Inventors: Bo-An Chen, Hsien-Tsan Chang
-
Patent number: 9043732
Abstract: In accordance with an example embodiment of the present invention, a method for proximity based input is provided, comprising: detecting presence of an object in close proximity to an input surface, detecting a displayed virtual layer currently associated with the object on the basis of distance of the object to the input surface, detecting a hovering input by the object, and causing a display operation to move at least a portion of the associated virtual layer in accordance with the detected hovering input.
Type: Grant
Filed: October 21, 2010
Date of Patent: May 26, 2015
Assignee: Nokia Corporation
Inventor: Mikko Nurmi
-
Patent number: 9030498
Abstract: A method including presenting, by a computer, multiple interactive items on a display coupled to the computer, and receiving, from a depth sensor, a sequence of three-dimensional (3D) maps containing at least a hand of a user of the computer. An explicit select gesture performed by the user toward one of the interactive items is detected in the maps, and the one of the interactive items is selected responsively to the explicit select gesture. Subsequent to selecting the one of the interactive items, a TimeClick functionality is actuated for subsequent interactive item selections to be made by the user.
Type: Grant
Filed: August 14, 2012
Date of Patent: May 12, 2015
Assignee: Apple Inc.
Inventors: Micha Galor, Jonathan Pokrass, Amir Hoffnung, Ofir Or
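A dwell timer is one common way to realize TimeClick-style selection once an explicit select gesture has armed it. The sketch below is an illustration under assumptions (the sample format, the 1.0 s dwell duration, and the item names are invented), not the patented method.

```python
# Sketch: yield an item whenever the tracked hand keeps pointing at it for a
# full dwell period, then require re-pointing before the next "click".
def timeclick_selections(samples, dwell_seconds=1.0):
    """`samples` is an iterable of (timestamp, pointed_item_or_None)."""
    current, since = None, None
    for t, item in samples:
        if item != current:
            current, since = item, t                # pointing target changed
        elif item is not None and (t - since) >= dwell_seconds:
            yield item                              # dwell completed: select
            current, since = None, None             # reset for the next click

# Example: the hand stays on "photo_3" for just over a second.
samples = [(0.0, "photo_3"), (0.4, "photo_3"), (1.1, "photo_3"), (1.5, None)]
print(list(timeclick_selections(samples)))  # -> ['photo_3']
```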
-
Patent number: 9032336
Abstract: The present invention provides an input interface that a human can operate naturally and intuitively using a human gesture (motion), by including means for acquiring motion information based on a gesture and input interface means for generating information for operating an object on a desktop of a computer on the basis of the motion information. In this case, the motion information is matched against a template for recognizing a motion of a user, and a matched event is outputted so that the object is operated. The object includes a pie menu in which menu items are disposed in a circular form. A user is allowed to select a desired menu item in the pie menu in accordance with the angle at which the user twists the wrist.
Type: Grant
Filed: September 7, 2006
Date of Patent: May 12, 2015
Assignee: Osaka Electro-Communication University
Inventors: Hirotaka Uoi, Katsutoshi Kimura
-
Patent number: 9013469
Abstract: A method and device for displaying a three-dimensional view of the surface of a viewed object is disclosed, wherein a subset of the three-dimensional data from the entire image of the viewed object in a region of interest is determined and displayed to provide enhanced detail in the region of interest.
Type: Grant
Filed: March 4, 2011
Date of Patent: April 21, 2015
Assignee: General Electric Company
Inventor: Clark Alexander Bendall
-
Publication number: 20150089453
Abstract: A system and method for providing a 3D gesture based interaction system for a projected 3D user interface is disclosed. A user interface display is projected onto a user surface. Image data of the user interface display and an interaction medium are captured. The image data includes visible light data and IR data. The visible light data is used to register the user interface display on the projected surface with the Field of View (FOV) of at least one camera capturing the image data. The IR data is used to determine gesture recognition information for the interaction medium. The registration information and gesture recognition information are then used to identify interactions.
Type: Application
Filed: September 25, 2014
Publication date: March 26, 2015
Inventors: Carlo Dal Mutto, Abbas Rafii, Britta Hummel
-
Patent number: 8966400
Abstract: Technologies are generally described for a system for interpreting user movement in computer generated reality. In some examples, the system includes a user interface effective to generate movement data relating to movement of the user interface. In some examples, the system further includes a processor effective to receive the movement data. In some examples, the processor is further effective to define a coordinate system based on the movement data and map the movement data to the coordinate system to produce mapped movement data. In some examples, the processor is further effective to determine a feature of the mapped movement data and to map the feature to a code. In some examples, the processor is further effective to send the code to the application and receive application data from the application in response to the code. In some examples, the processor is further effective to generate an image based on the application data.
Type: Grant
Filed: June 7, 2010
Date of Patent: February 24, 2015
Assignee: Empire Technology Development LLC
Inventor: Tralvex Yeap
-
Patent number: 8954273
Abstract: A navigation system, in particular for a motor vehicle, for determining a route to a destination, wherein the navigation system includes an input device for alphanumeric input of the destination, a display for displaying changing information, and a processing unit for displaying potential destinations in a representation of a map by means of the display during input of the destination.
Type: Grant
Filed: June 25, 2012
Date of Patent: February 10, 2015
Assignee: Volkswagen Aktiengesellschaft
Inventors: Moritz Neugebauer, Gordon Seitz, Stefan Schulz, Imke Gaus, Oliver Meyer
-
Publication number: 20150040074
Abstract: Methods and systems for enabling creation of augmented reality content on a user device including a digital imaging part, a display, a user input part and an augmented reality client, wherein said augmented reality client is configured to provide an augmented reality view on the display of the user device using a live image data stream from the digital imaging part, are disclosed. User input is received from the user input part to augment a target object that is at least partially seen on the display while in the augmented reality view. A graphical user interface is rendered to the display part of the user device, said graphical user interface enabling a user to author augmented reality content for the two-dimensional image.
Type: Application
Filed: August 18, 2011
Publication date: February 5, 2015
Applicant: Layar B.V.
Inventors: Klaus Michael Hofmann, Raimo Jahani Van Der Klein, Ronald Van Der Lingen, Klasien Van De Zandschulp
-
Publication number: 20150012891
Abstract: In accordance with one implementation, a method is illustrated that allows a computing device to determine a user input. The method includes detecting one or more user input objects in a 3-dimensional field relative to a 2-dimensional surface. The method also includes determining coordinates for the one or more user input objects relative to the 2-dimensional surface. And, the method further includes determining a user input based on the coordinates.
Type: Application
Filed: September 24, 2014
Publication date: January 8, 2015
Inventors: Lai Xue, Darren Lim
-
Publication number: 20150007114
Abstract: Technology is described for a web-like hierarchical menu interface which displays a menu in a web-like hierarchical menu display configuration in a near-eye display (NED). The web-like hierarchical menu display configuration links menu levels and menu items within a menu level with flexible spatial dimensions for menu elements. One or more processors executing the interface select a web-like hierarchical menu display configuration based on the available menu space and the user head view direction determined from a 3D mapping of the NED field of view data and stored user head comfort rules. Activation parameters in menu item selection criteria are adjusted to be user specific based on user head motion data tracked from one or more sensors when the user wears the NED. Menu display layout may be triggered by changes in the head view direction of the user and the available menu space about the user's head.
Type: Application
Filed: June 28, 2013
Publication date: January 1, 2015
Inventors: Adam G. Poulos, Anthony J. Ambrus, Cameron G. Brown, Jason Scott, Brian J. Mount, Daniel J. McCulloch, John Bevis, Wei Zhang
-
Publication number: 20150007087
Abstract: Data visualization that interactively rotates data about a particular axis or translates data in a particular plane based on input received outside the axis space. Data to be visualized is accessed by a data visualization application. The data may be structured or unstructured, filtered and analyzed. The accessed data may be displayed through an interface of the visualization application for a user. The coordinate system for displaying the data may also be displayed. A user may rotate data about a particular axis of the coordinate system or translate data in a particular plane by providing a continuous input within a graphics portion of an interface. The input may be associated with a virtual track ball.
Type: Application
Filed: June 28, 2013
Publication date: January 1, 2015
Applicant: Silicon Graphics International Corp.
Inventor: Marc David Hansen
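A "virtual track ball" is commonly implemented as an arcball: a 2D drag is mapped onto a unit hemisphere and converted into a rotation axis and angle. The sketch below shows that generic technique only; it is not necessarily the implementation in this publication, and the normalized-coordinate convention is an assumption.

```python
# Sketch of a classic arcball mapping from a 2D drag to a 3D rotation.
import math

def to_sphere(x, y):
    """Map normalized screen coordinates in [-1, 1] onto a unit hemisphere."""
    d = x * x + y * y
    z = math.sqrt(1.0 - d) if d < 1.0 else 0.0
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

def drag_rotation(p_from, p_to):
    """Return (axis, angle_in_radians) for a drag from p_from to p_to."""
    a, b = to_sphere(*p_from), to_sphere(*p_to)
    axis = (a[1] * b[2] - a[2] * b[1],          # cross product a x b
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return axis, math.acos(dot)

# Example: a horizontal drag produces a rotation about the vertical axis.
print(drag_rotation((0.0, 0.0), (0.3, 0.0)))
```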
-
Patent number: 8918715
Abstract: According to a preferred aspect of the instant invention, there is provided a system and method that allows the user to present standard 2D multimedia data (photo, video) to an audience in a stereoscopic 3D presentation. The system allows the user to transfer standard 2D multimedia content into a stereoscopic 3D multimedia work by automatically placing the individual 2D multimedia input material into specific placeholder sections in specially prepared and provided S3D multimedia themes.
Type: Grant
Filed: November 16, 2011
Date of Patent: December 23, 2014
Assignee: Magix AG
Inventors: Tilman Herberger, Titus Tost
-
Publication number: 20140372957
Abstract: A head mounted display allows user selection of a virtual object through multi-step focusing by the user. Focus on the selectable object is determined and then a validation object is displayed. When user focus moves to the validation object, a timeout determines that a selection of the validation object, and thus of the selectable object, has occurred. The technology can be used in see-through head mounted displays to allow a user to effectively navigate an environment with a multitude of virtual objects without unintended selections.
Type: Application
Filed: June 18, 2013
Publication date: December 18, 2014
Inventors: Brian E. Keane, Ben J. Sugden, Robert L. Crocco, Jr., Daniel Deptford, Tom G. Salter, Laura K. Massey, Alex Aben-Athar Kipman, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda
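The two-step focus/validation selection can be read as a small state machine over focus samples. The sketch below is a loose illustration with invented element names and an assumed timeout; it does not reproduce the publication's method.

```python
# Sketch: focus on an object reveals a validation target; holding focus on
# that target past a timeout confirms the selection.
def two_step_select(samples, timeout=0.8):
    """`samples` is an iterable of (time, focused_element_name_or_None)."""
    pending, validation_since = None, None
    for t, focused in samples:
        if pending is None:
            if focused and focused.startswith("obj:"):
                pending = focused                       # show validation target
        elif focused == "validation:" + pending:
            validation_since = validation_since or t
            if t - validation_since >= timeout:
                return pending                          # selection confirmed
        else:                                           # focus moved away: reset
            pending = focused if focused and focused.startswith("obj:") else None
            validation_since = None
    return None

samples = [(0.0, "obj:door"), (0.3, "validation:obj:door"), (1.2, "validation:obj:door")]
print(two_step_select(samples))  # -> 'obj:door'
```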
-
Patent number: 8913057
Abstract: There is provided an information processing device that includes a virtual space recognition unit for analyzing the 3D space structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display unit for displaying the object arranged in the virtual space on a display device, a detection unit for detecting device information of the display device, and an execution unit for executing predetermined processing toward the object based on the device information.
Type: Grant
Filed: February 9, 2011
Date of Patent: December 16, 2014
Assignee: Sony Corporation
Inventors: Hiroyuki Ishige, Kazuhiro Suzuki, Akira Miyashita
-
Patent number: 8907943
Abstract: A three-dimensional (“3D”) display environment for a mobile device is disclosed that uses orientation data from one or more onboard sensors to automatically determine and display a perspective projection of the 3D display environment based on the orientation data without the user physically interacting with (e.g., touching) the display.
Type: Grant
Filed: July 7, 2010
Date of Patent: December 9, 2014
Assignee: Apple Inc.
Inventor: Patrick Piemonte
-
Publication number: 20140344762
Abstract: Methods, systems, computer-readable media, and apparatuses for generating an Augmented Reality (AR) object are presented. The method may include capturing an image of one or more target objects, wherein the one or more target objects are positioned on a pre-defined background. The method may also include segmenting the image into one or more areas corresponding to the one or more target objects and one or more areas corresponding to the pre-defined background. The method may additionally include converting the one or more areas corresponding to the one or more target objects to a digital image. The method may further include generating one or more AR objects corresponding to the one or more target objects, based at least in part on the digital image.
Type: Application
Filed: May 12, 2014
Publication date: November 20, 2014
Applicant: QUALCOMM Incorporated
Inventors: Raphael Grasset, Hartmut Seichter
-
Patent number: 8881059
Abstract: A virtual object display determination unit identifies from real object display determination information a priority corresponding to a movement of a user indicated by user movement information notified by a state communication unit and, at the same time, identifies from real object attribute information a priority corresponding to a state change indicated by state change information notified by the state communication unit. By comparing the two identified priorities, the virtual object display determination unit determines whether or not to change a display mode of a virtual object. A UI generation unit generates a UI to be presented to the user based on a determination result of the virtual object display determination unit, and causes the UI to be displayed by a UI display unit.
Type: Grant
Filed: February 9, 2012
Date of Patent: November 4, 2014
Assignee: Panasonic Intellectual Property Corporation of America
Inventor: Takao Adachi
-
Publication number: 20140317575
Abstract: Systems and methods for digitally drawing on virtual 3D object surfaces using a 3D display system. A 3D drawing mode may be enabled, and a display screen of the system may correspond to a zero parallax plane of a 3D scene that may present a plurality of surfaces at non-zero parallax planes. User input may be received at a location on the display screen, and in response, a surface may be specified, rendered, and displayed at the zero parallax plane. Further, additional user input on the display screen may be received specifying drawing motion across the rendered and displayed surface. The drawing motion may start at the location and continue across a boundary between the surface and another contiguous surface. Accordingly, in response to the drawing motion crossing the boundary, the contiguous surface may be rendered and displayed at the zero parallax plane along with results of the drawing motion.
Type: Application
Filed: April 21, 2014
Publication date: October 23, 2014
Applicant: zSpace, Inc.
Inventors: Peter F. Ullmann, Clifford S. Champion
-
METHOD AND SYSTEM FOR RESPONDING TO USER'S SELECTION GESTURE OF OBJECT DISPLAYED IN THREE DIMENSIONS
Publication number: 20140317576
Abstract: The present invention relates to a method for responding to a user's selection gesture of an object displayed in three dimensions. The method comprises displaying at least one object using a display, detecting a user's selection gesture captured using an image capturing device, and, based on the image capturing device's output, determining whether an object among said at least one object is selected by said user as a function of the eye position of the user and of the distance between the user's gesture and the display.
Type: Application
Filed: December 6, 2011
Publication date: October 23, 2014
Applicant: THOMSON LICENSING
Inventors: Jianping Song, Lin Du, Wenjuan Song
-
Publication number: 20140317574
Abstract: In accordance with one implementation, a method is illustrated that allows a computing device to determine a user input. The method includes detecting one or more user input objects in a 3-dimensional field relative to a 2-dimensional surface. The method also includes determining coordinates for the one or more user input objects relative to the 2-dimensional surface. And, the method further includes determining a user input based on the coordinates.
Type: Application
Filed: March 14, 2014
Publication date: October 23, 2014
Inventors: Lai Xue, Darren Lim
-
Publication number: 20140282073
Abstract: The present invention is an apparatus for communicating information comprised of an electronic display device, a touch sensitive input device positioned to substantially cover said display device, a computerized device in electronic communication with the display and the input device, and an algorithm executing on the computerized device. The algorithm causes a two-dimensional array of three-dimensional shapes to appear on said display and rotate about an axis and undulate in a wave-like manner without interaction from a viewer during an attract mode. After the touch sensitive input device is contacted by a user, the algorithm may display an interaction mode wherein the shapes may be caused to rotate about an axis by a user's contacting the touch sensitive input device at a point on the display device corresponding to a three-dimensional shape and moving such a contact point in the direction of rotation to expose information displayed on the surface of the three-dimensional shape.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Applicant: Micro Industries Corporation
Inventors: Amanda Curran, Michael A. Curran, Jon DeGenova
-
Publication number: 20140282267
Abstract: A display device for a three-dimensional virtual scenario for selecting objects in the virtual scenario provides feedback upon successful selection of an object. The display device is designed to output a haptic or tactile, optical or acoustic feedback upon selection of a virtual object.
Type: Application
Filed: September 6, 2012
Publication date: September 18, 2014
Applicant: EADS Deutschland GmbH
Inventors: Leonhard Vogelmeier, David Wittmann
-
Patent number: 8839136
Abstract: A method of controlling a viewpoint of a user or a virtual object on a two-dimensional (2D) interactive display is provided. The method may convert a user input to at least 6 degrees of freedom (DOF) structured data according to a number of touch points, a movement direction thereof, and a rotation direction thereof. Any one of the virtual object and the viewpoint of the user may be determined as a manipulation target based on a location of the touch point.
Type: Grant
Filed: May 7, 2009
Date of Patent: September 16, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Byung In Yoo, Chang Kyu Choi
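Converting touch features (point count, movement, pinch, twist) into 6-DOF structured data can be sketched as a simple mapping function. The gesture-to-axis assignments below are assumptions for illustration and are not taken from the patent.

```python
# Sketch: map simple multi-touch features to a 6-DOF delta
# (tx, ty, tz, rx, ry, rz) for either the viewpoint or a virtual object.
def touch_to_6dof(num_points, move_dx, move_dy, pinch_delta=0.0, twist_delta=0.0):
    """Return a (tx, ty, tz, rx, ry, rz) tuple from touch features."""
    if num_points == 1:
        return (move_dx, move_dy, 0.0, 0.0, 0.0, 0.0)          # pan in x/y
    if num_points == 2:
        return (0.0, 0.0, pinch_delta, 0.0, 0.0, twist_delta)   # zoom + roll
    if num_points == 3:
        return (0.0, 0.0, 0.0, move_dy, move_dx, 0.0)           # tilt / yaw
    return (0.0,) * 6

# Example: a two-finger pinch-in with a slight twist.
print(touch_to_6dof(2, 0.0, 0.0, pinch_delta=-0.2, twist_delta=0.3))
```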
-
Patent number: 8839150
Abstract: A first graphical object on a user interface of a device can be transformed to a second graphical object on the user interface. The second graphical object can be manipulated by a user on the user interface using touch input or by physically moving the device. When manipulated, the object can be animated to appear to have mass that responds to real-world, physical forces, such as gravity, friction or drag. The data represented by the second graphical object can be compressed or archived using a gesture applied to the second graphical object. Graphical objects can be visually sorted on the user interface based on their mass (size). The visual appearance of graphical objects on the user interface can be adjusted to indicate the age of data represented by the graphical objects.
Type: Grant
Filed: February 10, 2010
Date of Patent: September 16, 2014
Assignee: Apple Inc.
Inventors: Nicholas V. King, Brett Bilbrey, Todd Benjamin
-
Publication number: 20140250412
Abstract: A representation device for representing and interacting with a three-dimensional virtual scenario includes an input unit and at least one representation region for representing the three-dimensional scenario. A marking element may be moved on a virtual surface area with two translational degrees of freedom such that each virtual object in the three-dimensional virtual scenario may be selected with the marking element.
Type: Application
Filed: September 5, 2012
Publication date: September 4, 2014
Applicant: EADS Deutschland GmbH
Inventors: Leonhard Vogelmeier, David Wittmann
-
Patent number: 8826184
Abstract: A mobile terminal and a method of controlling an image display thereof are disclosed. A display module for a mobile terminal as disclosed herein may include a display for displaying an image that includes one or more objects, a user input interface to receive an input to change the image between a 2D display and a 3D display, and a controller configured to change the displayed image between the 2D display and the 3D display based on the received input. The controller may control the display to sequentially display one or more intermediate images in order to gradually change an extent to which at least one of the one or more objects is perceived to protrude from or recede into the display during the change in the displayed image.
Type: Grant
Filed: March 16, 2011
Date of Patent: September 2, 2014
Assignee: LG Electronics Inc.
Inventors: Joonwon Kwak, Kisun Lee, Jonghwan Kim, Seonhwi Cho
-
Patent number: 8826151
Abstract: An information processing apparatus is connected via a network to both an MFP and a virtual-space management server that manages a virtual space that contains a virtual device created by virtualizing the MFP. The information processing apparatus includes a display control unit that displays map information on a display unit, wherein a device symbol that corresponds to the MFP is present in the map information at a position corresponding to the position of the MFP in the real world; a receiving unit that receives a selected device symbol in the map information; and an output control unit that outputs image data either to a first data storage unit of the MFP that corresponds to the selected device symbol or to a second data storage unit for the virtual device that is created by virtualizing the MFP and corresponds to the selected device symbol.
Type: Grant
Filed: September 14, 2010
Date of Patent: September 2, 2014
Assignee: Ricoh Company, Limited
Inventors: Yasuhiro Tabata, Takashi Yano, Katsuyuki Kaji, Kenta Nozaki
-
Publication number: 20140237366
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
Type: Application
Filed: February 19, 2013
Publication date: August 21, 2014
Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
-
Patent number: 8797317
Abstract: The present disclosure relates to a mobile terminal and a control method thereof for allowing a touch input to a three-dimensional stereoscopic image. The method disclosed herein may include displaying a three-dimensional stereoscopic image including a plurality of objects, detecting the location of a detection target in a detection region corresponding to the three-dimensional stereoscopic image, selecting a first object based on the location of the detection target, moving the first object along the movement of the detection target in a state in which the first object is selected, and generating at least one object between the first and the second object when a distance between the first and the second object is increased in one direction due to the movement of the first object.
Type: Grant
Filed: July 8, 2011
Date of Patent: August 5, 2014
Assignee: LG Electronics Inc.
Inventor: Jonghwan Kim
-
Patent number: 8791962
Abstract: There is provided an information processing device including a display section configured to display a first object in a virtual three-dimensional space having a depth direction of a display screen, an operation section configured to acquire an operation for moving the first object in at least the depth direction, and a controller configured to move the first object on the display screen in accordance with the acquired operation, to execute, when a region of the first object overlaps a first overlap determination region, a first process on one or both of the first and second objects, and to execute, when the region of the first object overlaps a second overlap determination region, a second process on one or both of the first and second objects. The first overlap determination region may be a region obtained by extending the second overlap determination region in at least the depth direction.
Type: Grant
Filed: June 3, 2011
Date of Patent: July 29, 2014
Assignee: Sony Corporation
Inventors: Takuro Noda, Akihiro Komori, Nariaki Satoh, Osamu Shigeta, Kazuyuki Yamamoto
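The two overlap determination regions, with the first obtained by extending the second along the depth direction, can be illustrated with axis-aligned boxes. The margin value and the example "first/second process" labels below are assumptions for illustration, not taken from the patent.

```python
# Sketch: classify an overlap against a tight region and a depth-extended one.
from dataclasses import dataclass

@dataclass
class Box:
    min_corner: tuple  # (x, y, z)
    max_corner: tuple

def extend_in_depth(box: Box, extra: float) -> Box:
    """Grow a box along the z (depth) axis in both directions."""
    (x0, y0, z0), (x1, y1, z1) = box.min_corner, box.max_corner
    return Box((x0, y0, z0 - extra), (x1, y1, z1 + extra))

def overlaps(a: Box, b: Box) -> bool:
    return all(a.min_corner[i] <= b.max_corner[i] and
               b.min_corner[i] <= a.max_corner[i] for i in range(3))

def classify(dragged: Box, target: Box, depth_margin: float = 0.5) -> str:
    wide = extend_in_depth(target, depth_margin)     # first, depth-extended region
    if overlaps(dragged, target):
        return "second process (e.g., attach)"
    if overlaps(dragged, wide):
        return "first process (e.g., highlight)"
    return "no overlap"

# Example: the dragged object is behind the target but within the depth margin.
print(classify(Box((0, 0, 1.2), (1, 1, 1.4)), Box((0, 0, 0), (1, 1, 1))))
```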
-
Publication number: 20140206448
Abstract: A method for restricting the number of consequential interactions to further virtual objects having a relationship with a first virtual object, resulting from an interaction with said first virtual object. The method comprises: defining a maximum number of consequential interactions, counting consequential interactions, and stopping further interaction when the maximum number of consequential interactions is reached.
Type: Application
Filed: March 24, 2014
Publication date: July 24, 2014
Applicant: Technion Research & Development Foundation Limited
Inventors: Gershon Elber, Orit Shaked, Oded Shmueli
-
Publication number: 20140208272
Abstract: A method, technology, and system for user-controlled realistic 3D simulation and interaction are disclosed for providing a realistic and enhanced digital object viewing and interaction experience with improved three-dimensional (3D) visualization effects. A solution is provided to make available 3D models carrying properties similar to those of the real object, where user-controlled realistic interactions selected from extrusive interactions, intrusive interactions, time-bound-change-based interactions, and real-environment-mapping-based interactions are made possible as per user choice.
Type: Application
Filed: July 19, 2013
Publication date: July 24, 2014
Inventors: Nitin Vats, Gaurav Vats