Abstract: The disclosure relates to a system and method for generating recommendations for capturing images of a real-life object with essential features. The method includes detecting an Augmented Reality (AR) plane for a target object. The method further includes capturing a set of poses corresponding to the target object and a set of coordinate points in the AR plane. The set of poses includes a tracking marker, and the set of coordinate points indicates a location of the target object. The method further includes determining an instant distance between the AR imaging device and the target object, and an instant angle of the AR imaging device with respect to the target object. The method further includes dynamically generating the recommendations for adjusting a position and an orientation of the AR imaging device with respect to the target object.
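The distance/angle determination and recommendation step in the abstract above can be sketched as simple 3D geometry. This is an illustrative sketch, not the patented implementation; the function name, the coordinate convention (y-up world frame), and the "ideal" thresholds are all assumptions:

```python
import math

def camera_guidance(device_pos, target_pos, ideal_dist=1.5, ideal_angle=30.0):
    """Compute the instant distance and elevation angle from an AR imaging
    device to a target, and emit adjustment hints. Positions are (x, y, z)
    tuples in the AR plane's world frame; names and thresholds are
    hypothetical, not from the patent."""
    dx, dy, dz = (t - d for t, d in zip(target_pos, device_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    horizontal = math.hypot(dx, dz)
    angle = math.degrees(math.atan2(dy, horizontal))  # elevation above horizon

    hints = []
    if distance > ideal_dist:
        hints.append("move closer")
    elif distance < ideal_dist:
        hints.append("move back")
    if angle < ideal_angle:
        hints.append("raise the device")
    return distance, angle, hints
```

A guidance loop would recompute these values every frame as the tracked pose updates and surface the hints in the capture UI.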
Abstract: A display method for an electronic device including a first display area and a second display area. In the method, the electronic device determines a first application mode, adjusts, based on the first application mode, intensity values of color channels, stored in a hardware composer (HWC), determines, using the HWC, each first layer corresponding to the first display area, and overlays, using the HWC, the first layer and a background color layer corresponding to adjusted intensity values of the color channels. In an overlay process, the background color layer is located below the first layer, and the background color layer corresponds to the second display area. Then, the display is used to display an overlaid image.
Abstract: In various embodiments, a workflow application generates and evaluates designs that reflect stylistic preferences. In operation, the workflow application determines a target style based on input received via a graphical user interface (GUI). Notably, the target style characterizes a first set of designs. The workflow application then generates stylized design(s) based on stylization algorithm(s) associated with the target style. Subsequently, the workflow application displays a subset of the stylized design(s) via the GUI. A stylized design included in the subset of stylized design(s) is ultimately selected for production via the GUI. Advantageously, because the workflow application can substantially increase the number of designs that can be generated and evaluated based on the target style in a given amount of time, relative to more manual prior art techniques, the overall quality of the stylized design selected for production can be improved.
Abstract: In some examples, the disclosure describes a device, comprising: a processor resource, and a non-transitory memory resource storing machine-readable instructions stored thereon that, when executed, cause the processor resource to: determine an emotion based on a facial expression of a user captured within an image, apply a plurality of alterations to the image to exaggerate the facial expression of the user when the emotion has continued for a threshold quantity of time, and remove the plurality of alterations to the image when the emotion of the user has changed.
Type:
Grant
Filed:
June 25, 2021
Date of Patent:
May 7, 2024
Assignee:
Hewlett-Packard Development Company, L.P.
Abstract: Graphical user interface (GUI)-based systems and methods are disclosed for regionizing full-size process plant displays for rendering on mobile user interface devices. A regionizer application receives a full-size process plant display that graphically represents at least a portion of a process plant that includes graphic representations of a plurality of process plant entities. The regionizer application determines display region(s) of the full-size process plant display that define corresponding view portions of the full-size process plant display. The display regions are transmitted to a mobile user interface device for rendering by a mobile display navigation application. The GUI-based systems and methods can also automatically detect graphical process control loop display portions within full-size process plant displays for rendering on mobile user interface devices.
Type:
Grant
Filed:
November 7, 2022
Date of Patent:
May 7, 2024
Assignee:
FISHER-ROSEMOUNT SYSTEMS, INC.
Inventors:
Cristopher Ian Sarmiento Uy, Ryan Gallardo Valderama, Dino Anton Yu, Mariana Dionisio, Daniel R. Strinden, Mark J. Nixon
Abstract: Technology is described herein for facilitating a user's interaction with a digital ink document. The technology internally represents the ink document using a data structure having a hierarchy of nodes. The nodes describe respective elements in the ink document. The technology leverages the data structure to identify a set of nodes that grows upon the user's repeated selection of a particular part of the ink document. At each stage of the selection, the technology highlights a set of elements in the ink document that correspond to the current set of identified nodes. According to another illustrative aspect, the technology produces the data structure by modifying an original data structure provided by a text analysis engine. The technology performs this task with the objective of accommodating structured interaction by the user with the ink document.
Type:
Grant
Filed:
August 28, 2022
Date of Patent:
April 30, 2024
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Oz Solomon, Erich Søren Finkelstein, Gary Lee Caldwell, Nathan James Fish, Sergey Aleksandrovich Doroshenko
Abstract: Some embodiments herein can include methods and systems for predicting next poses of a character within a virtual gaming environment. The pose prediction system can identify a current pose of a character, generate a Gaussian distribution representing a sample of likely poses based on the current pose, and apply the Gaussian distribution to the decoder. The decoder can be trained to generate a predicted pose based on a Gaussian distribution of likely poses. The system can then render the predicted next pose of the character within the three-dimensional virtual gaming environment. Advantageously, the pose prediction system can apply a decoder that does not include or use input motion capture data that was used to train the decoder.
Type:
Grant
Filed:
January 21, 2021
Date of Patent:
April 30, 2024
Assignee:
Electronic Arts Inc.
Inventors:
Fabio Zinno, George Cheng, Hung Yu Ling, Michiel van de Panne
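The sample-from-a-Gaussian-then-decode step in the pose-prediction abstract above can be sketched as follows. The decoder here is a caller-supplied stand-in for the trained network, and the sampling/aggregation scheme is an illustrative assumption, not Electronic Arts' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_next_pose(current_pose, decode, latent_dim=8, n_samples=64):
    """Sketch of decoder-only pose prediction: at inference time no motion
    capture data is needed, only latent samples from a standard Gaussian.
    `decode` stands in for the trained decoder and maps
    (current_pose, latent_sample) -> candidate next pose."""
    z = rng.standard_normal((n_samples, latent_dim))        # Gaussian samples
    candidates = np.stack([decode(current_pose, zi) for zi in z])
    return candidates.mean(axis=0)                          # pose to render
```

A game loop would call this each frame, feeding the rendered pose back in as the next `current_pose`.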
Abstract: A content distribution system according to an embodiment is provided with at least one processor. The at least one processor: specifies a real image region and a farthest region in a space represented with a content image showing a first virtual object; disposes a second virtual object in the farthest region; and displays, on a user terminal, a content image representing a space in which the second virtual object is disposed.
Abstract: A visual data operation method, system, device, and medium: for one or more pieces of data, one or more corresponding value objects are created and arranged as a tree structure; and for the one or more value objects, corresponding database records are created, with fields of the database records defined by the one or more value objects. Data and database records correspond to each other and are mutually converted by using the value objects. A developer only needs to arrange the value objects as a tree structure, and the system can then automatically complete the remaining data operations, including creating database records, synchronizing database records, and the like. A developer can quickly design a data storage format by taking a user as a unit, and the design presentation is intuitive and understandable to other developers.
Abstract: A client computing device (115) determines a desired adjustment to a transmission rate of a media stream (245) received from the content server device (110), and encodes the desired adjustment to the transmission rate in an object ordering priority (255) field of a request (250) for a media portion (215). The client computing device (115) sends the request (250) to the content server device (110) to adjust the transmission rate of the media stream (245) with respect to the media portion (215). The content server device (110) receives the request (250) for the media portion (215) from the client computing device (115), and adjusts the transmission rate of the media stream (245) based on the object ordering priority (255). The content server device (110) transmits the media portion (215) to the client computing device (115) via the media stream (245) at the adjusted transmission rate.
Type:
Grant
Filed:
December 2, 2016
Date of Patent:
April 9, 2024
Assignee:
TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors:
Geza Szabo, Daniel Bezerra, Wesley Davison Braga Melo, Djamel Fawzi Hadj Sadok, Jairo Matheus Vilaça Alves, Igor Nogueira de Oliveira, Sándor Rácz, Maria Silvia Ito
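The abstract above packs a desired rate adjustment into an object ordering priority field of a request. One possible bit-packing is sketched below; the 12-bit offset encoding is an assumption for illustration, not the patent's actual wire format:

```python
def encode_rate_adjustment(priority: int, rate_delta_kbps: int) -> int:
    """Pack a signed rate adjustment into the low 12 bits of a priority
    field: the upper bits keep the ordinary ordering priority, and the
    delta is stored offset by 2048 so negative values fit. Illustrative
    encoding only."""
    assert -2048 <= rate_delta_kbps < 2048
    return (priority << 12) | (rate_delta_kbps + 2048)

def decode_rate_adjustment(field: int) -> tuple[int, int]:
    """Recover (priority, rate_delta_kbps) on the server side."""
    return field >> 12, (field & 0xFFF) - 2048
```

The server decodes the field on receipt and applies the delta to the stream's transmission rate for the requested media portion.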
Abstract: A method for computer animation includes receiving an input file that includes an asset geometry, where the asset geometry defines an asset mesh structure, where the asset geometry may exclude an internal support frame, and where logic for custom deformation steps may be included, all in a portable fashion that produces consistent results across multiple software and/or hardware platform environments and across real-time and/or offline scenarios. The method also includes applying at least one deformer to the asset mesh structure, where the at least one deformer includes a plurality of user-selectable deformer channels, and where each deformer channel is associated with at least a portion of the asset mesh structure and is configured to adjust a visual appearance of the associated portion of the asset mesh structure.
Type:
Grant
Filed:
April 6, 2023
Date of Patent:
April 2, 2024
Assignee:
O3 Story Technologies, Inc.
Inventors:
Eric A. Soulvie, Richard R. Hurrey, R. Jason Bickerstaff, Clifford S. Champion, Peter E. McGowan, Robert Ernest Schnurstein
Abstract: A system and method for monitoring and tracking browsing activity of a user on a client device. The method includes generating, based on browsing activity information of a user interacting with at least a page displayed over the client device and page information identifying in part the page displayed over the client device, an exposure map at a page-level view, wherein the exposure map indicates a salience of each area of a page-view with respect to the page displayed over the client device and visited by the user.
Abstract: A computer-implemented method for designing a three-dimensional (3D) mesh in a 3D scene. The method comprises displaying a 3D mesh in a 3D scene and providing a global orientation and selecting, with a pointing device, one or more vertices of the 3D mesh, thereby forming a set of one or more vertices. The method comprises computing at least one picking zone that surrounds each vertex of the set. The method comprises providing a first manipulator for controlling a displacement of each vertex of the set along one or more NUV directions and determining whether the pointing device is maintained within the picking zone. If not, the method comprises providing a second manipulator for controlling a displacement of the one or more vertices of the set along one or more directions defined by the global orientation. The method improves user interactions when switching back and forth between the first and second manipulators.
Type:
Grant
Filed:
July 15, 2021
Date of Patent:
March 26, 2024
Assignee:
DASSAULT SYSTEMES
Inventors:
Yani Sadoudi, Frédéric Letzelter, Christophe Dufau
Abstract: Configuration discrepancies, such as server drift among different servers or malicious code installed on one or more servers, can be identified using system attribute information regarding processes, CPU usage, memory usage, etc. The system attribute information can be used to generate an image, which can be compared to other images to determine if a configuration discrepancy exists. Image recognition algorithms can be used to facilitate image comparison for different systems. By identifying configuration discrepancies, downtime and other issues can be mitigated and system performance can be improved.
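The attributes-to-image comparison idea in the drift-detection abstract above can be sketched in miniature. Rendering metrics as a tiny grayscale grid and scoring pixel differences is an illustrative analogue; the real system would use full image recognition algorithms, and all names here are assumptions:

```python
def metrics_to_image(metrics, width=4):
    """Render system attributes (CPU %, memory %, process counts, ...) as a
    tiny grayscale 'image': a list of rows of 0-255 pixel values."""
    pixels = [min(255, max(0, int(v))) for v in metrics]
    pixels += [0] * (-len(pixels) % width)            # pad to full rows
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

def drift_score(img_a, img_b):
    """Mean absolute per-pixel difference; a large score flags a possible
    configuration discrepancy between two servers."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    return sum(abs(a - b) for a, b in zip(flat_a, flat_b)) / len(flat_a)
```

Comparing each server's image against a known-good baseline image then reduces drift detection to thresholding the score.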
Abstract: A method, computer program, and computer system is provided for streaming immersive media. Content is ingested in a first two-dimensional format or a first three-dimensional format, whereby the format references a neural network. The ingested content is converted to a second two-dimensional or a second three-dimensional format based on the referenced neural network. The converted content is streamed to a client end-point, such as a television, a computer, a head-mounted display, a lenticular light field display, a holographic display, an augmented reality display, or a dense light field display.
Abstract: A vision-aided inertial navigation system (VINS) comprises an image source for producing image data along a trajectory. The VINS further comprises an inertial measurement unit (IMU) configured to produce IMU data indicative of motion of the VINS and an odometry unit configured to produce odometry data. The VINS further comprises a processor configured to compute, based on the image data, the IMU data, and the odometry data, state estimates for a position and orientation of the VINS for poses of the VINS along the trajectory. The processor maintains a state vector having states for a position and orientation of the VINS and positions within the environment for observed features for a sliding window of poses. The processor applies a sliding window filter to compute, based on the odometry data, constraints between the poses within the sliding window and compute, based on the constraints, the state estimates.
Type:
Grant
Filed:
May 29, 2019
Date of Patent:
March 26, 2024
Assignee:
Regents of the University of Minnesota
Inventors:
Stergios I. Roumeliotis, Kejian J. Wu, Chao Guo, Georgios Georgiou
Abstract: An electronic device is provided with a flexible display. The electronic device includes a housing, a flexible display, a display support structure including a plurality of support bars supporting a rear surface of the flexible display, and a display support disposed to correspond to the flexible display, a first guide rail including a recess formed along a path in which the display support structure is moved when the flexible display is drawn outside the housing or introduced into the inner space of the housing, a second guide rail including a recess formed along the path and into which end portions of the plurality of support bars are inserted, and a rotation part.
Abstract: A method and system for assessing a machine learning model providing a prediction as to the disease state of a patient from a 2D or 3D image of the patient or a sample obtained therefrom. The machine learning model produces a prediction of the disease state from the image. The method involves presenting on a display of a workstation the image of the patient or a sample obtained therefrom along with a risk score or classification associated with the prediction. The image is further augmented with highlighting to indicate one or more regions in the image which affected the prediction produced by the machine learning model. Tools are provided by which the user may highlight one or more regions of the image which the user deems to be suspicious for the disease state. Inference is performed on the user-highlighted areas by the machine learning model. The results of the inference are presented to the user via the display.
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for using modularized digital editing action blocks in a graphical user interface to generate and apply a variety of advanced modifications to a digital image. The disclosed systems can categorize the digital editing action blocks into digital editing action categories from which a user can select a digital editing action block and insert into a field of a node compositor. Once the digital editing action block is arranged at a compatible field within the digital editing user interface, the snap effects system can execute the digital editing action block to create a particular graphical effect according to the positional configuration of the digital editing action block within the node compositor. In turn, the snap effects system can save the action-block configuration within the node compositor and facilitate additional use and/or sharing thereof.
Abstract: In implementations of systems for generating spacing guides for objects in perspective views, a computing device implements a guide system to determine groups of line segments of perspective bounding boxes of objects displayed in a user interface of a digital content editing application. Interaction data is received describing a user interaction with a particular object of the objects displayed in the user interface. The guide system identifies a particular group of the groups of line segments based on a line segment of a perspective bounding box of the particular object. An indication of a guide is generated for display in the user interface based on the line segment and a first line segment included in the particular group.
Abstract: An information processing apparatus comprises a decoder, a managing unit which manages storage of history information relating to communication with an image capturing apparatus, a receiving unit which, in a case of communication with the image capturing apparatus, receives an image file held in the image capturing apparatus and saves the image file to a predetermined storage unit, and a display control unit which controls a display of an image of the image file saved to the storage unit, wherein, when displaying an image file stored in the storage unit, the display control unit extracts, from the image file, identification information of the image capturing apparatus, and, based on the extracted identification information and the managed history information, determines whether or not to use the decoder to decode the image file.
Abstract: An object of the present disclosure is to provide a mechanism capable of performing thickening processing of an object irrespective of the color of a line even in a case where black overprint is valid. One embodiment of the present invention is an image forming apparatus comprising a control unit configured to control whether or not to perform thickening processing to thicken an object in an upper layer and an object in a lower layer based on a raster operation code designating drawing processing in a case where the object in the upper layer and the object in the lower layer overlap, wherein the control unit does not perform the thickening processing in a case where a value of the raster operation code is MERGEPEN and a color of the object in the upper layer is not a black color.
Abstract: The various embodiments described herein include methods, devices, and systems for generating object meshes. In some embodiments, a method includes obtaining a trained classifier, and an input observation of a 3D object. The method further includes generating a three-pole signed distance field from the input observation using the trained classifier. The method also includes generating an output mesh of the 3D object from the three-pole signed distance field; and generating a display of the 3D object from the output mesh.
Abstract: Described are systems and methods that enable secure real time communication (“RTC”) sessions that may be used, for example, for editing and movie production. Client devices may interact with an RTC management system to obtain color calibration information so that the color presented on the different client devices is consistent with each other and corresponds to the intended color of the video for which collaboration is to be performed. In addition, on-going multifactor authentication may be performed for each client device of an RTC session during the RTC session. Still further, to improve the quality of the exchanged video information and to reduce transmission requirements, in response to detection of events, such as a pause event, a high resolution image of a paused video may be generated and sent for presentation on the display of each client device, instead of continuing to stream a paused video.
Type:
Grant
Filed:
December 31, 2020
Date of Patent:
February 13, 2024
Assignee:
Evercast, LLC
Inventors:
Damien Phelan Stolarz, Roger Patrick Barton, Brad Thomas Ahlf, Chad Andrew Furman, Steven Barry Cohen
Abstract: This application provides a video splitting method and an electronic device. When the method is performed by a server, the server processes a long video into a plurality of short videos, and then a terminal obtains the short videos from the server and plays them; or when the method is performed by a terminal, the terminal obtains a long video from a server, then processes the long video into a plurality of short videos, and plays the plurality of short videos.
Abstract: An electronic device according to various embodiments of the disclosure includes: a communication module comprising communication circuitry and a processor operatively connected to the communication module. The processor may be communicatively connected to an augmented reality (AR) device through the communication module, and be configured to receive image information obtained by a camera of the AR device from the AR device, to detect an object based on the received image information, to acquire virtual information corresponding to the object, to control the communication module to transmit the virtual information to the AR device, to determine, based on the received image information, whether the object is out of a viewing range of the AR device, and to change a transfer interval of the virtual information for the AR device based on the determination.
Abstract: Systems and methods are provided for performing operations on an augmented reality (AR) device. The system accesses, by the AR device, movement data comprising inertial measurement data and camera data. The system determines three-dimensional (3D) movement of the AR device based on the movement data. The system presents, by the AR device, an AR object on a real-world environment being viewed using the AR device. The system, in response to determining the 3D movement of the AR device, modifies the AR object by the AR device.
Type:
Grant
Filed:
November 8, 2022
Date of Patent:
February 6, 2024
Assignee:
SNAP INC.
Inventors:
Lien Le Hong Tran, William Miles Miller
Abstract: Examples disclosed herein relate to digital mark-up in a three dimensional (3D) environment. An example device for digital mark-up in a 3D environment includes a processor, a display for showing a view of the 3D environment, and a memory including instructions on the processor. When the memory-stored instructions are executed on the processor, they cause the processor to generate an anchor point in response to an author input, wherein the anchor point includes a virtual location. When the memory-stored instructions are executed on the processor, they cause the processor to generate a mark-up object associated with the anchor point, wherein the mark-up object includes mark-up dimensions, a virtual authoring location, and a selectable association that, in response to being selected, instructs the processor to adjust the view shown in the display to be a view from the virtual authoring location at the time the mark-up object was authored.
Type:
Grant
Filed:
March 22, 2018
Date of Patent:
January 23, 2024
Assignee:
Hewlett-Packard Development Company, L.P.
Abstract: A method for authentication is provided. The method includes storing, by a first electronic device, first path information in a memory. The first path information includes data representing a first path of input on a touch interface associated with the first electronic device. One or more first portions of the first path information are communicated to a second electronic device. One or more second portions of second path information are received from the second electronic device. The one or more second portions are compared to the first path information. The second electronic device is authenticated based on similarity between the one or more second portions and the first path information. The method may allow for separate authentication to be performed by each device.
Type:
Grant
Filed:
November 19, 2019
Date of Patent:
January 23, 2024
Assignee:
International Business Machines Corporation
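The path-comparison step in the authentication abstract above can be sketched as a point-wise similarity test. The scoring formula, threshold, and function names below are hypothetical; a real system would first resample and normalize the paths, and would compare only the exchanged portions:

```python
import math

def path_similarity(path_a, path_b):
    """Score two touch paths (lists of (x, y) points) in [0, 1] by mean
    point-to-point distance: identical paths score 1.0, distant paths
    approach 0. Assumes equal-length, pre-resampled paths."""
    if len(path_a) != len(path_b):
        return 0.0
    mean = sum(math.dist(p, q) for p, q in zip(path_a, path_b)) / len(path_a)
    return 1.0 / (1.0 + mean)

def authenticate(stored_portion, received_portion, threshold=0.8):
    """Accept the peer device if its path portion matches closely enough."""
    return path_similarity(stored_portion, received_portion) >= threshold
```

Because each device compares the portion it receives against its own stored path, each side can reach an independent authentication decision.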
Abstract: A method of measuring a structure includes acquiring azimuth and tilt readings at a first location and second location. Images of the structure are also acquired from the first and second location. The respective distances from the first and second locations to a first and second point on the structure are measured. A scale is established from two positions of the structure depicted in the first or second image of the structure. The distance between the first and second points on the structure is found using the established scale. This distance is used with the azimuth and tilt readings and measured distances from the first and second location to build an epipolar model of the structure. The structure may be a utility pole. Also disclosed are methods of assisting photogrammetric measurements and estimating the class of a utility pole, and methods of determining the compliance status of a utility pole.
Type:
Grant
Filed:
April 3, 2020
Date of Patent:
January 23, 2024
Assignee:
IKEGPS Group Limited
Inventors:
Jeremy James Gold, Leon Mathieu Lammers van Toorenburg
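The scale-establishment step in the measurement abstract above reduces to metres-per-pixel arithmetic: two image points a known real-world distance apart fix the scale, which then converts any other pixel distance in the same image. A minimal sketch (function names are assumptions):

```python
import math

def pixel_scale(p1_px, p2_px, known_length_m):
    """Metres per pixel, from two image points a known distance apart."""
    return known_length_m / math.dist(p1_px, p2_px)

def measure(pa_px, pb_px, scale):
    """Real-world distance between two image points at the given scale."""
    return math.dist(pa_px, pb_px) * scale
```

The resulting distances, combined with the azimuth and tilt readings from the two capture locations, feed the epipolar model of the structure.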
Abstract: A robot system includes a robot body, a memory, an operation controlling module, a manipulator, and a limit range setting module configured to set a limit range of the corrective manipulation by the manipulator. The operation controlling module executes a given limiting processing when a corrective manipulation is performed beyond the limit range from an operational position based on automatic operation information. The limit range setting module calculates a positional deviation between the operational position based on the automatic operation information before the correction and an operational position based on the corrected operation information, and when the positional deviation is at or below a first threshold, narrows the limit range in the next corrective manipulation by the manipulator.
Abstract: The disclosure provides a method for determining a two-handed gesture, a host, and a computer readable medium. The method includes: providing a visual content of a reality system; tracking a first hand gesture and a second hand gesture performed by two hands; in response to determining that the first hand gesture and the second hand gesture form a two-handed gesture, activating a system function of the reality system, wherein the system function of the reality system is independent of the visual content.
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type:
Grant
Filed:
July 25, 2022
Date of Patent:
January 16, 2024
Assignee:
Fyusion, Inc.
Inventors:
Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
Abstract: A system includes one or more processors and software that, when executed by the one or more processors, causes the system to retrieve Environmental, Social & Governance (ESG) related data from one or more first databases, retrieve financial-related data from one or more second databases, and generate a dynamically interactive graphical user interface (GUI) containing a combination of ESG and financial information, the GUI having multiple GUI Modules including a Screening GUI Module, a Ratings GUI Module, a Climate GUI Module, and a Sustainability GUI Module.
Type:
Grant
Filed:
September 13, 2023
Date of Patent:
January 16, 2024
Assignee:
MORGAN STANLEY SERVICES GROUP INC.
Inventors:
Srijan Sharma, Andrew Kyle Ford, David Anthony Senn, Sowmya Viswanath, Haridass Ramachandran, Amit Kumar Goel, Anirudh Gopal
Abstract: A management system of the present invention includes a design database storing member identification information, member coordinates, and member shapes of construction members constituting a construction object, a management member selecting unit configured to select a management member to be managed among the construction members stored in the design database, a related member selecting unit configured to select a related member adjacent to the management member from the design database, a boundary information creating unit configured to select an adjacent portion between the management member and the related member, and with respect to the adjacent portion, as boundary information, associate identification information, adjacent portion coordinates, an adjacent portion shape of the adjacent portion, member identification information of the management member, and member identification information of the related member with each other, and a comparison result database configured to store information associate
Abstract: The technology disclosed relates to simplifying updating of a predictive model using clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
Type:
Grant
Filed:
January 30, 2023
Date of Patent:
January 9, 2024
Assignee:
Ultrahaptics IP Two Limited
Inventors:
David S. Holz, Kevin Horowitz, Raffi Bedikian, Hua Yang
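The normal-direction clustering step in the hand-tracking abstract above can be sketched greedily: a point joins the first cluster whose seed normal is within an angular threshold, otherwise it seeds a new cluster. This is a simplified sketch (it omits the adjacency test the full method also applies, and the threshold is an assumption):

```python
import numpy as np

def cluster_by_normal(normals, angle_thresh_deg=20.0):
    """Greedily label unit surface normals by direction. Each normal
    corresponds to one observed 3D point; points sharing a label would be
    matched to the same segment of the predictive hand model."""
    cos_t = np.cos(np.radians(angle_thresh_deg))
    seeds, labels = [], []
    for n in normals:
        for k, s in enumerate(seeds):
            if np.dot(n, s) >= cos_t:      # within the angular threshold
                labels.append(k)
                break
        else:                              # no cluster close enough
            seeds.append(n)
            labels.append(len(seeds) - 1)
    return labels
```

The matched clusters then refine the positions of the corresponding model segments, as the abstract describes.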
Abstract: A display control method includes: obtaining, by a terminal, orientations and display statuses of a first screen and a second screen; determining, by the terminal, whether a trigger event used for adjusting the display statuses of the first screen and the second screen occurs; and when the trigger event occurs, displaying, by the terminal, adjusted display content on the first screen and the second screen based on the orientations and the display statuses of the first screen and the second screen. By obtaining an orientation and a display status of a terminal screen, when the trigger event occurs, the terminal can adjust the display status of the screen in a timely manner, so that a user can perform an interaction operation and interaction display between different screens.
Abstract: Systems, methods and non-transitory computer readable media for generating visual content consistent with aspects of a visual brand language are provided. An indication of at least one aspect of a visual brand language may be received. Further, an indication of a desired visual content may be received. A new visual content consistent with the visual brand language and corresponding to the desired visual content may be generated based on the indication of the at least one aspect of the visual brand language and the indication of the desired visual content. The new visual content may be provided in a format ready for presentation.
Abstract: This application is directed to matching edges of polygons representing neighboring regions in a map user application. A computer system obtains a first polygon and a second polygon, and the second polygon is connected to the first polygon via polygon edges that are at least partially mismatched. Automatically and without user intervention, the computer system combines the first polygon and the second polygon to form a joined polygon defined by an outline of the first polygon and the second polygon. A region defined by the second polygon is subtracted from a region defined by the joined polygon to form a new region. The computer system defines an updated first polygon as an outline of the new region and renders, on a screen, the updated first polygon and the second polygon with matching edges.
Type: Grant
Filed: January 28, 2022
Date of Patent: December 19, 2023
Assignee: Tableau Software, LLC
Inventors: Zhengxiao Li, Daniel Robert Strebe, Matthew Nathaniel Kenny, Jimmy Y Sun, Bryan Harold Haber, Ryan Milton Whitley, Aysegul Yeniaras-Kramer, Steven Richard Hollasch
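The union-then-subtract operation described in this abstract can be sketched with the Shapely geometry library standing in for the system's internal polygon representation (an assumption; the patent does not name a library):

```python
from shapely.geometry import Polygon

def match_edges(first, second):
    """Rebuild `first` so its boundary coincides with `second`.

    Union the two polygons into a joined outline, subtract the
    second polygon's region, and return the remainder as the
    updated first polygon, mirroring the steps in the abstract.
    """
    joined = first.union(second)            # outline of both polygons
    new_region = joined.difference(second)  # remove second's region
    return new_region
```

After this step the updated first polygon and the second polygon share an exactly matching edge, so rendering them leaves no sliver gaps or overlaps.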
Abstract: A machine translation method includes: an electronic device displays a first user interface, where source text content is displayed in the first user interface; after detecting an operation of triggering scrolling screenshot taking by a user, the electronic device automatically starts to take a scrolling screenshot; the electronic device obtains a first picture through scrolling screenshot taking; the electronic device obtains translation content corresponding to the source text content displayed on the first picture; and the electronic device automatically displays a second user interface, where a part or all of the translation content is displayed in the second user interface.
Abstract: A data sharing method includes providing a receiver and at least one transmitter, changing a first hardware registration identification code of the at least one transmitter to a second hardware registration identification code of a virtual camera device corresponding to at least one communication software program by the receiver, and using the virtual camera device for converting at least one image data signal transmitted from the at least one transmitter to video stream data supported by the at least one communication software program after the receiver receives the at least one image data signal.
Abstract: A motion estimation system 80 includes a pose acquisition unit 81 and an action estimation unit 82. The pose acquisition unit 81 acquires, in time series, pose information representing a posture of one person and a posture of another person identified simultaneously in a situation in which a motion of the one person affects a motion of the other person. The action estimation unit 82 divides the acquired time series pose information on each person by unsupervised learning to estimate an action series that is a series of motions including two or more pieces of pose information.
Abstract: A method may include obtaining an image of a scene from a first perspective, the image including an object, and detecting the object in the image using a machine learning process, where the object may be representative of a known shape with at least four vertices at a first set of points. The method may also include automatically predicting a second set of points corresponding to the at least four vertices of the object in a second perspective of the scene based on the known shape of the object. The method may additionally include constructing, without user input, a transformation matrix to transform a given image from the first perspective to the second perspective based on the first set of points and the second set of points.
Type: Grant
Filed: June 4, 2021
Date of Patent: December 12, 2023
Assignee: FUJITSU LIMITED
Inventors: Nannan Wang, Xi Wang, Paparao Palacharla
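A standard way to construct such a perspective transformation from four vertex correspondences is the direct linear transform (DLT). The sketch below is a generic illustration of that construction, not necessarily the patent's own formulation:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 matrix mapping four source points to four
    destination points via the direct linear transform.

    `src` and `dst` are 4x2 arrays of (x, y) vertices, mirroring
    the first and second sets of points in the abstract.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

The resulting matrix can then warp any image from the first perspective into the second, as the abstract describes.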
Abstract: Systems and methods are provided for receiving a first plurality of media content items during a first time interval, identifying, from the first plurality of media content items, a first subset of media content items based on a first characteristic, and identifying, from the first subset of media content items, a second subset of media content items based on a second characteristic. The systems and methods are also provided for generating a first sequenced content collection including the first subset and the second subset and causing a first content collection interface to be displayed by the first computing device, the first content collection interface comprising the first sequenced content collection.
Type: Grant
Filed: May 21, 2020
Date of Patent: December 12, 2023
Assignee: SNAP INC.
Inventors: Alexander Collins, Benedict Copping, Justin Huang
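The two-stage filtering and sequencing described above can be sketched as follows; the predicate-based interface and the concatenation order are assumptions for illustration:

```python
def build_sequenced_collection(items, first_pred, second_pred):
    """Filter media items by a first characteristic, narrow that
    subset by a second characteristic, and concatenate both subsets
    into one sequenced content collection.
    """
    first_subset = [item for item in items if first_pred(item)]
    second_subset = [item for item in first_subset if second_pred(item)]
    return first_subset + second_subset
```

The returned collection would then be surfaced through the content collection interface on the receiving device.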
Abstract: A system uses range and Doppler velocity measurements from a lidar system and images from a video system to estimate a six degree-of-freedom (6DOF) trajectory of a target. The 6DOF transformation parameters are used to transform multiple images to the frame time of a selected image, thus obtaining multiple images at the same frame time. These multiple images may be used to increase the resolution of the image at each frame time, yielding a collection of super-resolution images.
Type: Grant
Filed: August 8, 2022
Date of Patent: December 12, 2023
Assignee: Aeva, Inc.
Inventors: Richard L. Sebastian, Anatoley T. Zheleznyak
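The core alignment step, re-expressing data captured at one frame time in the coordinate frame of a selected reference frame, can be sketched with rigid (R, t) poses. The (R, t) convention and names are assumptions, not the patent's parameterization:

```python
import numpy as np

def to_reference_frame(points, pose, ref_pose):
    """Re-express 3D points captured under `pose` in the frame of
    `ref_pose`, the basic step in aligning multiple frames to one
    frame time before combining them.

    Each pose is (R, t): a 3x3 rotation and a 3-vector translation
    taking sensor coordinates to world coordinates.
    """
    R, t = pose
    R_ref, t_ref = ref_pose
    world = points @ R.T + t            # sensor -> world
    return (world - t_ref) @ R_ref      # world -> reference sensor
```

With all frames expressed at the same frame time, the overlapping samples can be fused to raise the effective resolution, as the abstract outlines.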
Abstract: An information processing apparatus to execute object snapping to arrange a new object on a spread page area so that the new object is automatically aligned with another object arranged on the spread page area includes a determination unit and an execution unit. The determination unit determines a snapping point relating to the object snapping based on an arranged object arranged on the spread page area and the spread page area. The execution unit executes the object snapping in a case where the new object to be newly arranged on the spread page area is arranged in a predetermined range based on the snapping point. In a case where the object snapping is executed, the new object to be arranged on the spread page area is arranged at the snapping point by the object snapping.
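The snapping rule this abstract describes, snap when the new object falls within a predetermined range of a snapping point, can be sketched as follows; the 2D tuples and the threshold value are illustrative assumptions:

```python
def snap_position(new_pos, snap_points, threshold=10.0):
    """Return the snapping point nearest to `new_pos` if it lies
    within `threshold` units; otherwise keep the original position.
    """
    best, best_dist = None, threshold
    for sp in snap_points:
        dist = ((new_pos[0] - sp[0]) ** 2 + (new_pos[1] - sp[1]) ** 2) ** 0.5
        if dist <= best_dist:
            best, best_dist = sp, dist
    return best if best is not None else new_pos
```

A real layout engine would derive the candidate snapping points from the arranged objects and the spread page area, per the determination unit above.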
Abstract: Automating conversion of drawings to indoor maps and plans. One example is a computer-implemented method comprising: preprocessing an original CAD drawing to create a modified CAD drawing, a text database containing text from the original CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; determining a floor depicted in the CAD drawing, the determining results in a floor-level bounding line; sensing furniture depicted on the floor by applying the floor-level bounding line, the CAD vector-image, and the text database to machine-learning algorithms, the sensing results in a plurality of furniture entities and associated location information; identifying each room depicted in the CAD drawing, the identifying results in a plurality of room outlines; and creating an indoor map for the floor by combining the plurality of furniture entities and associated location information with the plurality of room outlines.
Type: Grant
Filed: November 4, 2022
Date of Patent: December 5, 2023
Assignee: Pointr Limited
Inventors: Ege Çetintaş, Melih Peker, Umeyr Kiliç, Can Tunca
Abstract: Methods are disclosed for the generation and editing of layer delineations within three-dimensional tomography scans. Cross sections of a subject are generated and presented to an operator, who has the ability to edit layer delineations within the cross section, or determine parameters used to generate new cross sections. By guiding an operator through a set of displayed cross sections, the methods can allow for a more rapid, efficient, and error-free segmentation of the subject. The cross sections can be nonplanar in shape or planar and non-axis-aligned. The cross sections can be restricted to exclude one or more user-defined regions of the subject, or to include only one or more user-defined regions of the subject. The cross sections can be localized to a point-of-interest. Iterative implementations of the methods can be used to arrive at a segmentation deemed satisfactory by the user.
Type: Grant
Filed: March 24, 2023
Date of Patent: November 28, 2023
Assignee: Voxeleron, LLC
Inventors: Daniel B. Russakoff, Jonathan D. Oakley
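Extracting a planar, non-axis-aligned cross section from a volume, one of the section types mentioned above, can be sketched with nearest-neighbour sampling; the parameterization and names are illustrative assumptions:

```python
import numpy as np

def oblique_cross_section(volume, origin, u_dir, v_dir, shape):
    """Sample a planar, possibly non-axis-aligned cross section from
    a 3D volume by nearest-neighbour lookup.

    `origin` is a voxel-space point on the plane; `u_dir` and
    `v_dir` are the in-plane step vectors; `shape` is (rows, cols).
    """
    rows, cols = shape
    us = np.arange(cols)
    vs = np.arange(rows)
    # Voxel coordinates of every sample on the plane, shape (rows, cols, 3).
    coords = (np.asarray(origin)
              + vs[:, None, None] * np.asarray(v_dir)
              + us[None, :, None] * np.asarray(u_dir))
    idx = np.clip(np.rint(coords).astype(int), 0,
                  np.array(volume.shape) - 1)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]
```

An interactive workflow would present such sections to the operator for layer-delineation editing, then fold the edits back into the 3D segmentation.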
Abstract: A handwritten content removing method and device and a storage medium. The handwritten content removing method comprises: acquiring an input image of a text page to be processed, the input image comprising a handwritten region, which comprises a handwritten content (S10); identifying the input image so as to determine the handwritten content in the handwritten region (S11); and removing the handwritten content in the input image so as to obtain an output image (S12).
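The removal step (S12) can be sketched as masking out the pixels flagged as handwritten and filling them with the page background colour; the mask is assumed to come from the identification step (S11), and a production system would likely inpaint rather than flat-fill:

```python
import numpy as np

def remove_handwriting(image, handwritten_mask, fill=255):
    """Erase pixels flagged as handwritten by filling them with the
    page background colour (white by default), leaving the printed
    content untouched.
    """
    output = image.copy()
    output[handwritten_mask] = fill
    return output
```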