Abstract: Some embodiments provide a navigation application with a novel declutter navigation mode. In some embodiments, the navigation application has a declutter control that, when selected, directs the navigation application to simplify a navigation presentation by removing or de-emphasizing non-essential items that are displayed in the navigation presentation. In some embodiments, the declutter control is a mode-selecting control that allows the navigation presentation to toggle between a normal first navigation presentation and a simplified second navigation presentation, which below is also referred to as a decluttered navigation presentation. During normal mode operation, the navigation presentation of some embodiments provides (1) a representation of the navigated route, (2) representations of the roads along the navigated route, (3) representations of major and minor roads that intersect or are near the navigated route, and (4) representations of buildings and other objects in the navigated scene.
Type:
Grant
Filed:
December 23, 2022
Date of Patent:
July 23, 2024
Assignee:
Apple Inc.
Inventors:
Jonathan L. Berk, Bradley L. Spare, Edward J. Cooper
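The declutter mode in the navigation abstract above amounts to filtering the scene's drawable items by an "essential" classification. A minimal sketch, assuming a hypothetical `MapItem` type and an illustrative set of essential kinds (the patent does not enumerate which items are non-essential):

```python
from dataclasses import dataclass

@dataclass
class MapItem:
    name: str
    kind: str  # e.g. "route", "road", "minor_road", "building"

# Hypothetical classification of which item kinds survive decluttering;
# an assumption for illustration, not taken from the patent.
ESSENTIAL_KINDS = {"route", "road"}

def declutter(items, enabled):
    """Items to draw: everything in normal mode; only essential
    items when the declutter mode is enabled."""
    if not enabled:
        return list(items)
    return [it for it in items if it.kind in ESSENTIAL_KINDS]

scene = [MapItem("I-80", "route"), MapItem("Main St", "road"),
         MapItem("Elm Alley", "minor_road"), MapItem("City Hall", "building")]
```

Toggling the mode-selecting control then simply flips the `enabled` flag between the two presentations.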
Abstract: A system may include a processor that may receive account information having a plurality of transactions associated with a bank account. The processor may then determine an expected burn rate of funds based on the account information, determine an expected savings balance at a first time based on the account information and the expected burn rate, generate an event in response to the expected savings balance being below a threshold, and send an alert including the event to a computing device associated with a user of the bank account. The alert may indicate a status of the expected savings balance via an electronic display of the computing device.
Type:
Grant
Filed:
June 7, 2023
Date of Patent:
July 16, 2024
Assignee:
United Services Automobile Association (USAA)
Inventors:
Nathan J. Rowe, Michael Aaron McGlasson, Stephen Holloway, Lea B. Sims, Joseph Wall, Richard R. Rohrbough
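The projection-and-alert pipeline in the USAA abstract can be sketched with a very simple burn-rate model. This is an illustrative stand-in: the patent does not specify the rate estimator, and the function names and event shape below are assumptions.

```python
from datetime import date

def expected_burn_rate(transactions):
    """Average daily outflow over the observed period, as a simple
    proxy for the 'expected burn rate' (exact model unspecified)."""
    if not transactions:
        return 0.0
    debits = [amt for _, amt in transactions if amt < 0]
    first = min(d for d, _ in transactions)
    last = max(d for d, _ in transactions)
    days = max((last - first).days, 1)
    return -sum(debits) / days

def expected_savings_balance(current_balance, burn_rate, days_ahead):
    """Projected balance at a future time under the estimated burn rate."""
    return current_balance - burn_rate * days_ahead

def maybe_generate_event(current_balance, burn_rate, days_ahead, threshold):
    """Return an alert event when the projection falls below the
    threshold, mirroring the abstract's event step; otherwise None."""
    projected = expected_savings_balance(current_balance, burn_rate, days_ahead)
    if projected < threshold:
        return {"event": "low_savings", "projected_balance": projected}
    return None

txns = [(date(2024, 1, 1), -100.0), (date(2024, 1, 11), -100.0)]
rate = expected_burn_rate(txns)  # 200 spent over 10 days -> 20.0/day
alert = maybe_generate_event(1000.0, rate, days_ahead=30, threshold=500.0)
```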
Abstract: A first image processing section performs a blurring process on a target region that is at least a part of a content image. A second image processing section performs predetermined image processing on the blurring-processed target region by using brightness values of individual pixels in the target region that has not yet been blurring-processed. A text processing section displays text over the target region that is subjected to the predetermined image processing.
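The two-stage processing in the abstract above (blur the target region, then post-process it using the *pre-blur* brightness values) can be sketched on a plain 2D brightness array. The box blur and the attenuation rule are illustrative assumptions; the patent leaves the "predetermined image processing" unspecified.

```python
def box_blur(img, k=1):
    """(2k+1)x(2k+1) box blur over a 2D list of brightness values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - k), min(h, y + k + 1))
                    for nx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def darken_bright_areas(blurred, original, limit=128, factor=0.5):
    """Example second-stage processing: attenuate blurred pixels whose
    pre-blur brightness exceeded a limit, e.g. so overlaid text stays
    legible. The rule is illustrative, not from the patent."""
    return [[b * factor if o > limit else b
             for b, o in zip(brow, orow)]
            for brow, orow in zip(blurred, original)]
```

Text would then be drawn over the resulting region by the text processing section.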
Abstract: A system and method enabling per-user-optimized computing, rendering, and provisioning within virtual worlds. The system comprises a server including memory and at least one processor, the memory storing a persistent virtual world system comprising a data structure in which at least one virtual replica of at least one corresponding real object is represented, and a computing optimization platform configured to store and provide rules for optimizing the computing, rendering and data provisioning to users via user devices. A plurality of connected devices connected to the server via a network provide multi-source data, user input, or combinations thereof, to the persistent virtual world system, updating the virtual replicas. The server retrieves user location, viewing position and orientation from the one or more user devices to determine a user interaction radius, thereby optimizing via the computing optimization platform the relevant computing, rendering and provisioning for the one or more user devices.
Abstract: A system includes: a screen configured for wear by a user, the screen configured to display a 2-dimensional (2D) element; a processing unit coupled to the screen; and a user input device configured to generate a signal in response to a user input for selecting the 2D element displayed by the screen; wherein the processing unit is configured to obtain a 3-dimensional (3D) model associated with the 2D element in response to the generated signal.
Type:
Grant
Filed:
April 7, 2023
Date of Patent:
July 2, 2024
Assignee:
Magic Leap, Inc.
Inventors:
Christopher Richard Williams, Damian Franco
Abstract: A display control method includes: acquiring a standard time; synchronizing an internal time referred to in processing by one display device with the standard time; detecting a reference time, which is the internal time when an image represented by an input image signal satisfies a predetermined condition; outputting the reference time to another display device of a plurality of display devices; acquiring a reference time detected in the another display device; and delaying a vertical synchronization signal by a time period from the reference time of the one display device to the reference time of the another display device, when the reference time of the another display device is later than the reference time of the one display device.
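The delay rule at the end of the display-control abstract reduces to a small comparison: a device shifts its vertical synchronization signal only when another device detected its reference time later. A minimal sketch (time values in seconds; the name is an assumption):

```python
def vsync_delay(local_ref_time, remote_ref_times):
    """Per the abstract: delay the local vertical sync by the gap to
    the latest remote reference time, but only when some remote device
    detected its reference later than this one; otherwise no delay."""
    latest_remote = max(remote_ref_times)
    return max(0.0, latest_remote - local_ref_time)
```

For example, a device whose reference time is 10.0 s, paired with remotes at 9.0 s and 12.5 s, would delay its sync by 2.5 s; with only earlier remotes it applies no delay.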
Abstract: A system for relationship information evaluation and management. The system incorporates relationship quality parameters that define the contextual parameters of quality, which are mutually validated and are accepted by the respective parties within an existing or prospective relationship. A computer, computing device, written documents or other means are used to capture the perspectives of each person, representative, or entity within the relationship or prospective relationship. This can be done automatically, on a random, periodic, or scheduled basis, or upon manual initiation. The information or data may then be analyzed and coalesced into a meaningful whole (or segments thereof), which may then be archived, printed, reported, and presented to one or both of the parties, or others.
Abstract: A system for detecting and incorporating three-dimensional objects into a video stream reads an input video data stream. The user specifies areas of attention, referred to as hotspots. Movement of the hotspots is tracked, generating a trajectory of at least one object. A cloud of points is generated and tracked to detect configurations of points most similar to the initially defined hotspot. A three-dimensional topology is obtained, defining a volume of interest in a three-dimensional space. Virtual structures or pseudo objects are built and placed within a spherical environment generated on the input video.
Type:
Grant
Filed:
March 4, 2023
Date of Patent:
June 11, 2024
Assignee:
VR DRIVE SP. Z O.O.
Inventors:
Arkadiusz Rogozinski, Radoslaw Gezella, Pawel Tryzno, Piotr Kozinski
Abstract: Automating conversion of drawings to indoor maps and plans. One example is a computer-implemented method of creating an indoor map from a CAD drawing, the method comprising: preprocessing an original CAD drawing to create a modified CAD drawing, a text database containing text from the original CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; creating a floor-level outline; sensing furniture depicted on the floor, wherein the sensing creates a set of furniture entities; identifying a room depicted in the CAD drawing; and creating the indoor map for a floor using the floor-level outline, a room-level outline, and the room identity.
Type:
Grant
Filed:
August 17, 2023
Date of Patent:
June 11, 2024
Assignee:
Pointr Limited
Inventors:
Ege Çetintaş, Melih Peker, Umeyr Kiliç, Can Tunca
Abstract: Systems and methods are described for selecting a 3D object for display in an extended reality environment. A space in an extended reality environment is determined for placement of a 3D object. A set of space parameters are determined comprising: an amount of memory available for generating the display of the extended reality environment and an amount of computing power available for generating the display of the extended reality environment. The 3D object is selected for display in the space based on the amount of memory and the amount of computing power available.
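The selection step in the extended-reality abstract above is a constrained choice: pick an object variant that fits both the memory and the compute budget. A minimal sketch, with illustrative field names (the patent does not define the object representation):

```python
def select_object(candidates, mem_available, compute_available):
    """Pick the highest-detail variant of a 3D object that fits both
    the memory and compute budgets; None if nothing fits. Field names
    ('mem', 'compute', 'detail') are assumptions for illustration."""
    feasible = [c for c in candidates
                if c["mem"] <= mem_available and c["compute"] <= compute_available]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["detail"])

variants = [{"name": "high", "detail": 3, "mem": 100, "compute": 50},
            {"name": "low", "detail": 1, "mem": 10, "compute": 5}]
```

With a tight budget the low-detail variant wins; with ample budget the high-detail variant is displayed.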
Abstract: An image search system (1) is provided with: a search key information acquisition unit (110) that acquires color specifying information indicating a plurality of colors and reference information indicating a reference of proportions of the plurality of colors being occupied in an image area; a specified color proportion information acquisition unit (120) that acquires specified color proportion information indicating proportions of the plurality of colors associated with the color specifying information, being occupied in an image to be processed; and an image search unit (130) that identifies whether the image to be processed is a search target image, based on the specified color proportion information and the reference information.
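The image-search abstract above compares measured color proportions against reference proportions. A minimal sketch over a flat list of labeled pixels; the tolerance-based match is an assumption, since the patent does not fix how the proportions and the reference are compared:

```python
def color_proportions(pixels, colors):
    """Fraction of the image area occupied by each specified color."""
    n = len(pixels)
    return {c: sum(p == c for p in pixels) / n for c in colors}

def is_search_target(pixels, reference, tol=0.05):
    """True when every specified color's measured proportion is within
    `tol` of the reference proportion (illustrative matching rule)."""
    props = color_proportions(pixels, reference.keys())
    return all(abs(props[c] - ref) <= tol for c, ref in reference.items())

pixels = ["red"] * 6 + ["blue"] * 4
```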
Abstract: A game system includes a processor, the processor being configured to perform a reception process of receiving an operation input by a player; a virtual space setting process of setting a virtual space in which at least one character is disposed; a virtual camera setting process of setting a plurality of virtual cameras; a character process of setting parts constituting the at least one character based on the operation input; and a display process of generating a plurality of character images, which are images of the at least one character viewed from the plurality of virtual cameras, and displaying a character-creation image in which the plurality of character images are arranged on a display section. In the display process, when the process of setting the parts is performed, the processor performs a process of displaying the character-creation image on which the result of the setting process is reflected.
Type:
Grant
Filed:
January 3, 2022
Date of Patent:
May 14, 2024
Assignees:
BANDAI NAMCO ENTERTAINMENT INC., BANDAI NAMCO ONLINE INC., BANDAI NAMCO STUDIOS INC.
Abstract: A display method for an electronic device including a first display area and a second display area. In the method, the electronic device determines a first application mode, adjusts, based on the first application mode, intensity values of color channels stored in a hardware composer (HWC), determines, using the HWC, each first layer corresponding to the first display area, and overlays, using the HWC, the first layer and a background color layer corresponding to adjusted intensity values of the color channels. In an overlay process, the background color layer is located below the first layer, and the background color layer corresponds to the second display area. Then, the display is used to display an overlaid image.
Abstract: In some examples, the disclosure describes a device, comprising: a processor resource, and a non-transitory memory resource storing machine-readable instructions stored thereon that, when executed, cause the processor resource to: determine an emotion based on a facial expression of a user captured within an image, apply a plurality of alterations to the image to exaggerate the facial expression of the user when the emotion has continued for a threshold quantity of time, and remove the plurality of alterations to the image when the emotion of the user has changed.
Type:
Grant
Filed:
June 25, 2021
Date of Patent:
May 7, 2024
Assignee:
Hewlett-Packard Development Company, L.P.
Abstract: In various embodiments, a workflow application generates and evaluates designs that reflect stylistic preferences. In operation, the workflow application determines a target style based on input received via a graphical user interface (GUI). Notably, the target style characterizes a first set of designs. The workflow application then generates stylized design(s) based on stylization algorithm(s) associated with the target style. Subsequently, the workflow application, displays a subset of the stylized design(s) via the GUI. A stylized design included in the subset of stylized design(s) is ultimately selected for production via the GUI. Advantageously, because the workflow application can substantially increase the number of designs that can be generated and evaluated based on the target style in a given amount of time, relative to more manual prior art techniques, the overall quality of the stylized design selected for production can be improved.
Abstract: The disclosure relates to system and method for generating recommendations for capturing images of a real-life object with essential features. The method includes detecting an Augmented Reality (AR) plane for a target object. The method further includes capturing a set of poses corresponding to the target object and a set of coordinate points in the AR plane. The set of poses includes a tracking marker, and the set of coordinate points indicates a location of the target object. The method further includes determining an instant distance between the AR imaging device and the target object, and an instant angle of the AR imaging device with respect to the target object. The method further includes dynamically generating the recommendations for adjusting a position and an orientation of the AR imaging device with respect to the target object.
Abstract: Graphical user interface (GUI) based systems and methods are disclosed for regionizing full-size process plant displays for rendering on mobile user interface devices. A regionizer application receives a full-size process plant display that graphically represents at least a portion of a process plant that includes graphic representations of a plurality of process plant entities. The regionizer app determines display region(s) of the full-size process plant display that define corresponding view portions of the full-size process plant display. The display regions are transmitted to a mobile user interface device for rendering by a mobile display navigation app. The GUI based systems and methods can also automatically detect graphical process control loop display portions within full-size process plant displays for rendering on mobile user interface devices.
Type:
Grant
Filed:
November 7, 2022
Date of Patent:
May 7, 2024
Assignee:
FISHER-ROSEMOUNT SYSTEMS, INC.
Inventors:
Cristopher Ian Sarmiento Uy, Ryan Gallardo Valderama, Dino Anton Yu, Mariana Dionisio, Daniel R. Strinden, Mark J. Nixon
Abstract: Some embodiments herein can include methods and systems for predicting next poses of a character within a virtual gaming environment. The pose prediction system can identify a current pose of a character, generate a Gaussian distribution representing a sample of likely poses based on the current pose, and apply the Gaussian distribution to the decoder. The decoder can be trained to generate a predicted pose based on a Gaussian distribution of likely poses. The system can then render the predicted next pose of the character within the three-dimensional virtual gaming environment. Advantageously, the pose prediction system can apply a decoder that does not include or use input motion capture data that was used to train the decoder.
Type:
Grant
Filed:
January 21, 2021
Date of Patent:
April 30, 2024
Assignee:
Electronic Arts Inc.
Inventors:
Fabio Zinno, George Cheng, Hung Yu Ling, Michiel van de Panne
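The sample-then-decode loop in the pose-prediction abstract can be sketched with a toy latent space. Both the linear "decoder" and the latent dimensions are placeholders: the patented system uses a trained neural decoder, and nothing below reproduces its weights or architecture.

```python
import random

random.seed(0)  # deterministic sketch

def sample_latents(mean, std, n):
    """Draw n latent vectors from a Gaussian centred on the current
    pose's encoding -- a stand-in for the distribution of likely poses."""
    return [[random.gauss(m, s) for m, s in zip(mean, std)] for _ in range(n)]

def toy_decoder(latent):
    """Placeholder decoder (a fixed linear map to 'joint angles'); the
    real decoder is a trained network that, per the abstract, needs no
    motion-capture input at inference time."""
    return [2.0 * z + 0.1 for z in latent]

# Predict candidate next poses from a 2-D latent around the current pose.
candidate_poses = [toy_decoder(z) for z in sample_latents([0.0, 1.0], [0.1, 0.1], 3)]
```

The renderer would then display one of the decoded candidates as the character's next pose.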
Abstract: Technology is described herein for facilitating a user's interaction with a digital ink document. The technology internally represents the ink document using a data structure having a hierarchy of nodes. The nodes describe respective elements in the ink document. The technology leverages the data structure to identify a set of nodes that grows upon the user's repeated selection of a particular part of the ink document. At each stage of the selection, the technology highlights a set of elements in the ink document that correspond to the current set of identified nodes. According to another illustrative aspect, the technology produces the data structure by modifying an original data structure provided by a text analysis engine. The technology performs this task with the objective of accommodating structured interaction by the user with the ink document.
Type:
Grant
Filed:
August 28, 2022
Date of Patent:
April 30, 2024
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Oz Solomon, Erich Søren Finkelstein, Gary Lee Caldwell, Nathan James Fish, Sergey Aleksandrovich Doroshenko
Abstract: A content distribution system according to an embodiment is provided with at least one processor. The at least one processor: specifies a real image region and a farthest region in a space represented with a content image showing a first virtual object; disposes a second virtual object in the farthest region; and displays, on a user terminal, a content image representing a space in which the second virtual object is disposed.
Abstract: A visual data operation method, system, device, and medium. For one or more pieces of data, one or more corresponding value objects are created and arranged as a tree structure; for the one or more value objects, corresponding database records are created, with fields of the database records defined by the one or more value objects. Data and database records correspond to each other and are mutually converted by using value objects. A developer only needs to arrange the value objects as a tree structure, and the system can automatically complete the remaining data operations, including creating database records, synchronizing database records, and the like. A developer can quickly design a data storage format taking a user as a unit, and the design presentation is intuitive and understandable to other developers.
Abstract: A client computing device (115) determines a desired adjustment to a transmission rate of a media stream (245) received from the content server device (110), and encodes the desired adjustment to the transmission rate in an object ordering priority (255) field of a request (250) for a media portion (215). The client computing device (115) sends the request (250) to the content server device (110) to adjust the transmission rate of the media stream (245) with respect to the media portion (215). The content server device (110) receives the request (250) for the media portion (215) from the client computing device (115), and adjusts the transmission rate of the media stream (245) based on the object ordering priority (255). The content server device (110) transmits the media portion (215) to the client computing device (115) via the media stream (245) at the adjusted transmission rate.
Type:
Grant
Filed:
December 2, 2016
Date of Patent:
April 9, 2024
Assignee:
TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors:
Geza Szabo, Daniel Bezerra, Wesley Davison Braga Melo, Djamel Fawzi Hadj Sadok, Jairo Matheus Vilaça Alves, Igor Nogueira de Oliveira, Sándor Rácz, Maria Silvia Ito
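Carrying a signed rate adjustment inside an unsigned priority field, as the Ericsson abstract describes, needs some encoding convention. The bias scheme below is an assumption for illustration; the patent only states that the adjustment travels in the object ordering priority field of the request.

```python
def encode_rate_adjustment(field_bits, adjustment_steps):
    """Client side: pack a signed rate adjustment into the unsigned
    object-ordering priority field using a bias (illustrative scheme)."""
    bias = 1 << (field_bits - 1)
    value = adjustment_steps + bias
    if not 0 <= value < (1 << field_bits):
        raise ValueError("adjustment out of range for the field width")
    return value

def decode_rate_adjustment(field_bits, field_value):
    """Server side: recover the signed adjustment from the field and
    apply it to the stream's transmission rate."""
    return field_value - (1 << (field_bits - 1))
```

With an 8-bit field, an adjustment of -3 steps is carried as 125 and a zero adjustment as 128.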
Abstract: A system and method for monitoring and tracking browsing activity of a user on a client device. The method includes generating, based on browsing activity information of a user interacting with at least a page displayed over the client device and page information identifying in part the page displayed over the client device, an exposure map at a page-level view, wherein the exposure map indicates a salience of each area of a page-view with respect to the page displayed over the client device and visited by the user.
Abstract: A method for computer animation includes receiving an input file that includes an asset geometry, where the asset geometry defines an asset mesh structure, may exclude an internal support frame, and may include logic for custom deformation steps, all in a portable fashion made to produce consistent results across multiple different software and/or hardware platform environments and across real-time and/or offline scenarios. The method also includes applying at least one deformer to the asset mesh structure, where the at least one deformer includes a plurality of user-selectable deformer channels, and where each deformer channel is associated with at least a portion of the asset mesh structure and is configured to adjust a visual appearance of the associated portion of the asset mesh structure.
Type:
Grant
Filed:
April 6, 2023
Date of Patent:
April 2, 2024
Assignee:
O3 Story Technologies, Inc.
Inventors:
Eric A. Soulvie, Richard R. Hurrey, R. Jason Bickerstaff, Clifford S. Champion, Peter E. McGowan, Robert Ernest Schnurstein
Abstract: A vision-aided inertial navigation system (VINS) comprises an image source for producing image data along a trajectory. The VINS further comprises an inertial measurement unit (IMU) configured to produce IMU data indicative of motion of the VINS and an odometry unit configured to produce odometry data. The VINS further comprises a processor configured to compute, based on the image data, the IMU data, and the odometry data, state estimates for a position and orientation of the VINS for poses of the VINS along the trajectory. The processor maintains a state vector having states for a position and orientation of the VINS and positions within the environment for observed features for a sliding window of poses. The processor applies a sliding window filter to compute, based on the odometry data, constraints between the poses within the sliding window and compute, based on the constraints, the state estimates.
Type:
Grant
Filed:
May 29, 2019
Date of Patent:
March 26, 2024
Assignee:
Regents of the University of Minnesota
Inventors:
Stergios I. Roumeliotis, Kejian J. Wu, Chao Guo, Georgios Georgiou
Abstract: A method, computer program, and computer system is provided for streaming immersive media. Content is ingested in a first two-dimension format or a first three-dimensional format, whereby the format references a neural network. The ingested content is converted to a second two-dimensional or a second three-dimensional format based on the referenced neural network. The converted content is streamed to a client end-point, such as a television, a computer, a head-mounted display, a lenticular light field display, a holographic display, an augmented reality display, or a dense light field display.
Abstract: Configuration discrepancies, such as server drift among different servers or malicious code installed on one or more servers, can be identified using system attribute information regarding processes, CPU usage, memory usage, etc. The system attribute information can be used to generate an image, which can be compared to other images to determine if a configuration discrepancy exists. Image recognition algorithms can be used to facilitate image comparison for different systems. By identifying configuration discrepancies, downtime and other issues can be mitigated and system performance can be improved.
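One way to realise the attributes-to-image step in the drift-detection abstract is to hash the attribute set into a small grayscale fingerprint and compare fingerprints. This sketch is an illustrative assumption, not the patented encoding, and the pixel-exact comparison stands in for the abstract's more tolerant image-recognition comparison.

```python
import hashlib

def attributes_to_image(attrs, size=8):
    """Render system attributes (processes, CPU usage, memory usage,
    ...) into a small grayscale image by hashing them; SHA-512 yields
    exactly the 64 bytes needed for an 8x8 image."""
    digest = hashlib.sha512(repr(sorted(attrs.items())).encode()).digest()
    return [list(digest[r * size:(r + 1) * size]) for r in range(size)]

def drift_detected(img_a, img_b):
    """Trivial pixel-exact comparison: any differing pixel signals a
    configuration discrepancy between the two systems."""
    return img_a != img_b
```

Identical attribute sets produce identical images, so matched servers compare clean, while a drifted or tampered server yields a visibly different fingerprint.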
Abstract: An electronic device is provided with a flexible display. The electronic device includes a housing, a flexible display, a display support structure including a plurality of support bars supporting a rear surface of the flexible display, and a display support disposed to correspond to the flexible display, a first guide rail including a recess formed along a path in which the display support structure is moved when the flexible display is drawn outside the housing or introduced into the inner space of the housing, a second guide rail including a recess formed along the path and into which end portions of the plurality of support bars are inserted, and a rotation part.
Abstract: A computer-implemented method for designing a three-dimensional (3D) mesh in a 3D scene. The method comprises displaying a 3D mesh in a 3D scene, providing a global orientation, and selecting, with a pointing device, one or more vertices of the 3D mesh, thereby forming a set of one or more vertices. The method comprises computing at least one picking zone that surrounds each vertex of the set. The method comprises providing a first manipulator for controlling a displacement of each vertex of the set along one or more NUV directions and determining whether the pointing device is maintained within the picking zone. If not, the method comprises providing a second manipulator for controlling a displacement of the one or more vertices of the set along one or more directions defined by the global orientation. The method improves user interactions for switching back and forth between the first and second manipulators.
Type:
Grant
Filed:
July 15, 2021
Date of Patent:
March 26, 2024
Assignee:
DASSAULT SYSTEMES
Inventors:
Yani Sadoudi, Frédéric Letzelter, Christophe Dufau
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for using modularized digital editing action blocks in a graphical user interface to generate and apply a variety of advanced modifications to a digital image. The disclosed systems can categorize the digital editing action blocks into digital editing action categories from which a user can select a digital editing action block and insert into a field of a node compositor. Once the digital editing action block is arranged at a compatible field within the digital editing user interface, the snap effects system can execute the digital editing action block to create a particular graphical effect according to the positional configuration of the digital editing action block within the node compositor. In turn, the snap effects system can save the action-block configuration within the node compositor and facilitate additional use and/or sharing thereof.
Abstract: A method and system for assessing a machine learning model providing a prediction as to the disease state of a patient from a 2D or 3D image of the patient or a sample obtained therefrom. The machine learning model produces a prediction of the disease state from the image. The method involves presenting on a display of a workstation the image of the patient or a sample obtained therefrom along with a risk score or classification associated with the prediction. The image is further augmented with highlighting to indicate one or more regions in the image which affected the prediction produced by the machine learning model. Tools are provided by which the user may highlight one or more regions of the image which the user deems to be suspicious for the disease state. Inference is performed on the user-highlighted areas by the machine learning model. The results of the inference are presented to the user via the display.
Abstract: In implementations of systems for generating spacing guides for objects in perspective views, a computing device implements a guide system to determine groups of line segments of perspective bounding boxes of objects displayed in a user interface of a digital content editing application. Interaction data is received describing a user interaction with a particular object of the objects displayed in the user interface. The guide system identifies a particular group of the groups of line segments based on a line segment of a perspective bounding box of the particular object. An indication of a guide is generated for display in the user interface based on the line segment and a first line segment included in the particular group.
Abstract: An information processing apparatus comprises a decoder, a managing unit which manages storage of history information relating to communication with an image capturing apparatus, a receiving unit which, in a case of communication with the image capturing apparatus, receives an image file held in the image capturing apparatus and saves the image file to a predetermined storage unit; and a display control unit which controls a display of an image of the image file saved to the storage unit, wherein, when displaying an image file stored in the storage unit, the display control unit extracts, from the image file, identification information of the image capturing apparatus, and, based on the extracted identification information and the managed history information, determines whether or not to use the decoder to decode the image file.
Abstract: An object of the present disclosure is to provide a mechanism capable of performing thickening processing of an object irrespective of the color of a line even in a case where black overprint is valid. One embodiment of the present invention is an image forming apparatus comprising a control unit configured to control whether or not to perform thickening processing to thicken an object in an upper layer and an object in a lower layer based on a raster operation code designating drawing processing in a case where the object in the upper layer and the object in the lower layer overlap, wherein the control unit does not perform the thickening processing in a case where a value of the raster operation code is MERGEPEN and a color of the object in the upper layer is not a black color.
Abstract: The various embodiments described herein include methods, devices, and systems for generating object meshes. In some embodiments, a method includes obtaining a trained classifier, and an input observation of a 3D object. The method further includes generating a three-pole signed distance field from the input observation using the trained classifier. The method also includes generating an output mesh of the 3D object from the three-pole signed distance field; and generating a display of the 3D object from the output mesh.
Abstract: Described are systems and methods that enable secure real time communication (“RTC”) sessions that may be used, for example, for editing and movie production. Client devices may interact with an RTC management system to obtain color calibration information so that the color presented on the different client devices is consistent with each other and corresponds to the intended color of the video for which collaboration is to be performed. In addition, on-going multifactor authentication may be performed for each client device of an RTC session during the RTC session. Still further, to improve the quality of the exchanged video information and to reduce transmission requirements, in response to detection of events, such as a pause event, a high resolution image of a paused video may be generated and sent for presentation on the display of each client device, instead of continuing to stream a paused video.
Type:
Grant
Filed:
December 31, 2020
Date of Patent:
February 13, 2024
Assignee:
Evercast, LLC
Inventors:
Damien Phelan Stolarz, Roger Patrick Barton, Brad Thomas Ahlf, Chad Andrew Furman, Steven Barry Cohen
Abstract: This application provides a video splitting method and an electronic device. When the method is performed by a server, the server processes a long video into a plurality of short videos, and then a terminal obtains the short videos from the server and plays them; or when the method is performed by a terminal, the terminal obtains a long video from a server, then processes the long video into a plurality of short videos, and plays the plurality of short videos.
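The core of the video-splitting method above is computing segment boundaries for the long video. A minimal sketch using fixed-length segments; the actual splitting rule (e.g. content-aware cut points) is not specified by the application, so the fixed-length assumption is purely illustrative:

```python
def split_points(duration_s, target_len_s):
    """Start/end times (seconds) for cutting a long video into short
    clips of roughly `target_len_s` seconds each; the final clip holds
    whatever remains."""
    if duration_s <= 0 or target_len_s <= 0:
        raise ValueError("durations must be positive")
    points, t = [], 0.0
    while t < duration_s:
        points.append((t, min(t + target_len_s, duration_s)))
        t += target_len_s
    return points
```

Either the server or the terminal can run this step, matching the two placements described in the abstract.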
Abstract: Systems and methods are provided for performing operations on an augmented reality (AR) device. The system accesses, by the AR device, movement data comprising inertial measurement data and camera data. The system determines three-dimensional (3D) movement of the AR device based on the movement data. The system presents, by the AR device, an AR object on a real-world environment being viewed using the AR device. The system, in response to determining the 3D movement of the AR device, modifies the AR object by the AR device.
Type:
Grant
Filed:
November 8, 2022
Date of Patent:
February 6, 2024
Assignee:
SNAP INC.
Inventors:
Lien Le Hong Tran, William Miles Miller
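The modification step in the Snap abstract (adjust the AR object in response to the device's 3D movement) can be sketched as countering the device's measured translation so the object stays anchored in the real-world scene. This is a minimal stand-in: rotation handling, and how the inertial and camera data are fused into the translation estimate, are omitted.

```python
def update_ar_object(object_pos, device_translation):
    """Shift the AR object's device-relative position opposite to the
    device's 3D translation so the object appears fixed in the
    real-world environment (illustrative; rotations omitted)."""
    return tuple(p - t for p, t in zip(object_pos, device_translation))
```

For example, if the device moves 0.5 m along x, an object at (1, 2, 3) in device coordinates is redrawn at (0.5, 2, 3).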
Abstract: An electronic device according to various embodiments of the disclosure includes: a communication module comprising communication circuitry and a processor operatively connected to the communication module. The processor may be communicatively connected to an augmented reality (AR) device through the communication module, and be configured to receive image information obtained by a camera of the AR device from the AR device, to detect an object based on the received image information, to acquire virtual information corresponding to the object, to control the communication module to transmit the virtual information to the AR device, to determine, based on the received image information, whether the object is out of a viewing range of the AR device, and to change a transfer interval of the virtual information for the AR device based on the determination.
Abstract: A method of measuring a structure includes acquiring azimuth and tilt readings at a first location and second location. Images of the structure are also acquired from the first and second location. The respective distances from the first and second locations to a first and second point on the structure are measured. A scale is established from two positions of the structure depicted in the first or second image of the structure. The distance between the first and second points on the structure is found using the established scale. This distance is used with the azimuth and tilt readings and measured distances from the first and second location to build an epipolar model of the structure. The structure may be a utility pole. Also disclosed are methods of assisting photogrammetric measurements and estimating the class of a utility pole, and methods of determining the compliance status of a utility pole.
Type:
Grant
Filed:
April 3, 2020
Date of Patent:
January 23, 2024
Assignee:
IKEGPS Group Limited
Inventors:
Jeremy James Gold, Leon Mathieu Lammers van Toorenburg
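The scale-establishment step in the abstract above can be sketched numerically: two image positions of the structure with a known real-world separation yield a metres-per-pixel scale, which then converts the pixel distance between the first and second points into a physical distance. This is a simplified single-image sketch of one step, not the full epipolar model; all names are hypothetical.

```python
import math

def pixel_distance(p: tuple[float, float], q: tuple[float, float]) -> float:
    """Euclidean distance between two image positions, in pixels."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def scale_from_reference(ref_a: tuple[float, float], ref_b: tuple[float, float],
                         real_separation_m: float) -> float:
    """Metres-per-pixel scale from two positions whose real separation is known."""
    return real_separation_m / pixel_distance(ref_a, ref_b)

def measure(p1: tuple[float, float], p2: tuple[float, float], scale: float) -> float:
    """Distance between two points on the structure using the established scale."""
    return pixel_distance(p1, p2) * scale
```

For a utility pole, the reference separation might be a marking of known height; the full method additionally fuses azimuth, tilt, and rangefinder distances from two locations.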
Abstract: A robot system includes a robot body, a memory, an operation controlling module, a manipulator, and a limit range setting module configured to set a limit range of the corrective manipulation by the manipulator. The operation controlling module executes a given limiting processing when a corrective manipulation is performed beyond the limit range from an operational position based on automatic operation information. The limit range setting module calculates a positional deviation between the operational position based on the automatic operation information before the correction and an operational position based on the corrected operation information, and when the positional deviation is at or below a first threshold, narrows the limit range in the next corrective manipulation by the manipulator.
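The limit-range narrowing logic above can be sketched in one dimension: when the operator's corrective manipulation deviates little from the automatic-operation position (at or below a first threshold), the allowed range for the next correction is tightened. The narrowing factor and function names are hypothetical assumptions, not from the patent.

```python
def update_limit_range(limit: float, auto_pos: float, corrected_pos: float,
                       first_threshold: float, narrow_factor: float = 0.5) -> float:
    """Return the limit range to use for the next corrective manipulation.

    If the positional deviation between the automatic-operation position and
    the corrected position is at or below first_threshold, the limit range is
    narrowed; otherwise it is left unchanged.
    """
    deviation = abs(corrected_pos - auto_pos)
    if deviation <= first_threshold:
        return limit * narrow_factor  # small correction: tighten the allowed range
    return limit
```

A companion limiting processing would then reject or clamp any correction that exceeds the current limit range.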
Abstract: Examples disclosed herein relate to digital mark-up in a three dimensional (3D) environment. An example device for digital mark-up in a 3D environment includes a processor, a display for showing a view of the 3D environment, and a memory storing instructions that, when executed on the processor, cause the processor to generate an anchor point in response to an author input, wherein the anchor point includes a virtual location, and to generate a mark-up object associated with the anchor point, wherein the mark-up object includes mark-up dimensions, a virtual authoring location, and a selectable association that, in response to being selected, instructs the processor to adjust the view shown in the display to be a view from the virtual authoring location at the time the mark-up object was authored.
Type:
Grant
Filed:
March 22, 2018
Date of Patent:
January 23, 2024
Assignee:
Hewlett-Packard Development Company, L.P.
Abstract: A method for authentication is provided. The method includes storing, by a first electronic device, first path information in a memory. The first path information includes data representing a first path of input on a touch interface associated with the first electronic device. One or more first portions of the first path information are communicated to a second electronic device. One or more second portions of second path information are received from the second electronic device. The one or more second portions are compared to the first path information. The second electronic device is authenticated based on similarity between the one or more second portions and the first path information. The method may allow for separate authentication to be performed by each device.
Type:
Grant
Filed:
November 19, 2019
Date of Patent:
January 23, 2024
Assignee:
International Business Machines Corporation
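The comparison step in the authentication method above can be sketched as a similarity check between two touch paths, each a sequence of (x, y) points: a device is authenticated when the received path portion lies within a tolerance of the stored path. The similarity metric and tolerance value here are hypothetical assumptions for illustration.

```python
import math

def path_similarity(a: list[tuple[float, float]], b: list[tuple[float, float]]) -> float:
    """Mean point-to-point distance between two equal-length touch paths.

    Lower values indicate more similar paths.
    """
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

def authenticate(stored_portion: list[tuple[float, float]],
                 received_portion: list[tuple[float, float]],
                 tolerance: float = 5.0) -> bool:
    """Authenticate the second device if its path portion matches the stored one."""
    if len(stored_portion) != len(received_portion):
        return False
    return path_similarity(stored_portion, received_portion) <= tolerance
```

Because each device holds its own portion of the path information, both sides can run this comparison independently, which is what permits the separate authentication noted in the abstract.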
Abstract: The disclosure provides a method for determining a two-handed gesture, a host, and a computer readable medium. The method includes: providing a visual content of a reality system; tracking a first hand gesture and a second hand gesture performed by two hands; and, in response to determining that the first hand gesture and the second hand gesture form a two-handed gesture, activating a system function of the reality system, wherein the system function of the reality system is independent of the visual content.
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type:
Grant
Filed:
July 25, 2022
Date of Patent:
January 16, 2024
Assignee:
Fyusion, Inc.
Inventors:
Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
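The frame-selection step above, in which angular views estimated from inertial measurement unit data are used to choose among live images, can be sketched as keeping only frames separated by a minimum rotation. The angular step and function name are hypothetical assumptions, not taken from the patent.

```python
def select_frames(frame_yaws: list[float], step_deg: float = 10.0) -> list[int]:
    """Pick frame indices so each kept frame is at least step_deg of
    estimated rotation (e.g., IMU-derived yaw) away from the previous kept frame.

    The kept frames then show the object from distinct camera views,
    so playing them back approximates a 3-D rotation without a polygon model.
    """
    if not frame_yaws:
        return []
    kept = [0]
    for i in range(1, len(frame_yaws)):
        if abs(frame_yaws[i] - frame_yaws[kept[-1]]) >= step_deg:
            kept.append(i)
    return kept

print(select_frames([0.0, 3.0, 11.0, 14.0, 22.0]))  # [0, 2, 4]
```

A real multi-view representation would also compensate for translation and wrap-around at 360 degrees.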
Abstract: A system includes one or more processors and software, executable by the one or more processors that, when executed by the one or more processors, cause the system to retrieve Environmental, Social & Governance (ESG) related data from one or more first databases, retrieve financial-related data from one or more second databases, and generate a dynamically interactive graphical user interface (GUI), containing a combination of ESG and financial information, the GUI having multiple GUI Modules including a Screening GUI Module, a Ratings GUI Module, a Climate GUI Module, and a Sustainability GUI Module.
Type:
Grant
Filed:
September 13, 2023
Date of Patent:
January 16, 2024
Assignee:
MORGAN STANLEY SERVICES GROUP INC.
Inventors:
Srijan Sharma, Andrew Kyle Ford, David Anthony Senn, Sowmya Viswanath, Haridass Ramachandran, Amit Kumar Goel, Anirudh Gopal
Abstract: The technology disclosed relates to simplifying updating of a predictive model using clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
Type:
Grant
Filed:
January 30, 2023
Date of Patent:
January 9, 2024
Assignee:
Ultrahaptics IP Two Limited
Inventors:
David S. Holz, Kevin Horowitz, Raffi Bedikian, Hua Yang
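The clustering step described above (grouping observed points by their surface normal directions) can be sketched as a greedy pass that adds each point to an existing cluster when its normal is within an angular tolerance of that cluster's representative normal. The tolerance and data layout are hypothetical assumptions; the patented method also uses adjacency, which this sketch omits.

```python
import math

Point = tuple[float, float, float]

def cluster_by_normal(points_with_normals: list[tuple[Point, Point]],
                      angle_tol_deg: float = 15.0) -> list[list[tuple[Point, Point]]]:
    """Greedily cluster (point, unit_normal) pairs by surface normal direction."""
    def angle_deg(n1: Point, n2: Point) -> float:
        dot = sum(a * b for a, b in zip(n1, n2))
        dot = max(-1.0, min(1.0, dot))  # normals assumed unit-length
        return math.degrees(math.acos(dot))

    clusters: list[list[tuple[Point, Point]]] = []
    for pt, n in points_with_normals:
        for cluster in clusters:
            if angle_deg(n, cluster[0][1]) <= angle_tol_deg:
                cluster.append((pt, n))
                break
        else:
            clusters.append([(pt, n)])
    return clusters
```

Each resulting cluster would then be matched to a segment of the predictive hand model, and the matched clusters used to refine that segment's position.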
Abstract: A display control method includes: obtaining, by a terminal, orientations and display statuses of a first screen and a second screen; determining, by the terminal, whether a trigger event used for adjusting the display statuses of the first screen and the second screen occurs; and when the trigger event occurs, displaying, by the terminal, adjusted display content on the first screen and the second screen based on the orientations and the display statuses of the first screen and the second screen. By obtaining an orientation and a display status of a terminal screen, when the trigger event occurs, the terminal can adjust the display status of the screen in a timely manner, so that a user can perform interaction operations, with interactive display, across the different screens.
Abstract: A management system of the present invention includes a design database storing member identification information, member coordinates, and member shapes of construction members constituting a construction object, a management member selecting unit configured to select a management member to be managed among the construction members stored in the design database, a related member selecting unit configured to select a related member adjacent to the management member from the design database, a boundary information creating unit configured to select an adjacent portion between the management member and the related member, and with respect to the adjacent portion, as boundary information, associate identification information, adjacent portion coordinates, an adjacent portion shape of the adjacent portion, member identification information of the management member, and member identification information of the related member with each other, and a comparison result database configured to store information associate