Abstract: A head-mounted display device includes a body, two extension members and an adjustable fixing member. The body is adapted to lean against a front portion of a head of a user. The two extension members are disposed opposite each other. First ends of the two extension members are respectively coupled to two opposite ends of the body and configured to rotate with respect to the body so as to lean against two side portions of the head. The adjustable fixing member is coupled to each second end of the two extension members to lean against a back portion or a top portion of the head, wherein the body is fixed on the head through the two extension members and the adjustable fixing member. Each of the extension members is adapted to contact or depart from the two side portions of the head by rotating with respect to the body.
Abstract: A head-mounted display device including a body, a display element, and two lens groups is provided. The display element is disposed on the body and is adapted to provide an image beam. The two lens groups are disposed on a transmission path of the image beam. Each lens group includes a plurality of lenses, wherein one of the lenses has an astigmatism surface on at least one side of the display element. The astigmatism surface is non-rotationally symmetrical, and the image beam has astigmatic aberration after passing through the astigmatism surface.
Abstract: A communication device for handling connection comprises a storage device for storing instructions and a processing circuit coupled to the storage device. The processing circuit is configured to execute the instructions stored in the storage device. The instructions comprise prioritizing selection of a long term evolution (LTE) network over a new radio (NR) network when the communication device is configured to enable a voice service.
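The prioritization rule described in this abstract could be sketched as a simple selection function; the function name, network labels, and fallback order below are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of the RAT-selection rule: when a voice service is
# enabled, an LTE network is preferred over an NR network. The labels and
# fallback behavior are assumptions for illustration only.

def select_network(available_networks, voice_service_enabled):
    """Return the preferred radio access technology from the available list."""
    if voice_service_enabled and "LTE" in available_networks:
        return "LTE"  # prioritize LTE when a voice service is enabled
    if "NR" in available_networks:
        return "NR"   # otherwise use NR when present
    return available_networks[0] if available_networks else None
```

Under this sketch, enabling the voice service flips the preference toward LTE whenever both networks are available.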
Abstract: A method, a mixed reality (MR) system, and a recording medium for detecting a real-world light source in MR are provided. The method is applicable to the MR system having a computing apparatus and a head-mounted display (HMD). In the method, a plurality of light regions of a plurality of continuous images is analyzed, and each of the light regions is projected according to a coordinate position of the HMD corresponding to the plurality of continuous images to generate three-dimensional vectors. Then, the three-dimensional vectors are clustered to predict a directional light in a current image. Finally, a plurality of intersection points between the three-dimensional vectors is calculated, and a point light in the current image is predicted according to the intersection points.
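The point-light step above — computing intersection points among the projected three-dimensional vectors — could be sketched as a pairwise closest-point search over rays; the ray representation, function names, and averaging strategy are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of the point-light prediction step: each light region is
# projected as a 3D ray (origin, direction); the closest points between ray
# pairs approximate intersections, and their mean predicts the point light.

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment connecting two (possibly skew) rays."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = _dot(d1, d1), _dot(d1, d2), _dot(d2, d2)
    d, e = _dot(d1, w0), _dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # rays are parallel; no unique intersection
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * x for p, x in zip(p1, d1)]
    q2 = [p + t2 * x for p, x in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

def predict_point_light(rays):
    """Average the pairwise intersection estimates of all (origin, direction) rays."""
    points = []
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            p = closest_point_between_rays(*rays[i], *rays[j])
            if p is not None:
                points.append(p)
    n = len(points)
    return [sum(p[k] for p in points) / n for k in range(3)] if points else None
```

Using midpoints of the shortest connecting segments makes the estimate tolerant of rays that nearly, but not exactly, intersect, which is the common case for vectors projected from noisy image regions.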
Abstract: A computer-aided medical method includes the following steps. An initial symptom of a patient and context information are collected through an interaction interface. A series of actions is sequentially generated according to candidate prediction models and the initial symptom. Each of the actions corresponds to one of the inquiry actions or one of the disease prediction actions. If the latest of the sequential actions corresponds to one of the disease prediction actions, potential disease predictions are generated in a first ranking evaluated by the candidate prediction models. The first ranking is adjusted into a second ranking according to the context information. A result prediction corresponding to the potential disease predictions is generated according to the second ranking.
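The ranking-adjustment step could be sketched as a context-driven re-weighting of the model scores; the score representation and the multiplicative adjustment below are assumptions for illustration, not the patented method.

```python
# Illustrative sketch: disease predictions scored by the models (first ranking)
# are re-weighted by context-derived factors to produce the second ranking.
# The multiplicative adjustment is an assumption, not the patent's method.

def rerank_predictions(first_ranking, context_weights):
    """first_ranking: list of (disease, score); context_weights: disease -> factor."""
    adjusted = [
        (disease, score * context_weights.get(disease, 1.0))
        for disease, score in first_ranking
    ]
    return sorted(adjusted, key=lambda item: item[1], reverse=True)
```

For example, context information such as a recent outbreak could raise the weight of one candidate so it overtakes a prediction the models alone ranked higher.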
July 3, 2020
Date of Patent: June 14, 2022
Inventors: Kai-Fu Tang, Edward Chang, Hao-Cheng Kao
Abstract: A head mounted display device includes a display, a light waveguide element, and a light shutter. The display periodically provides a display image. The light waveguide element receives the display image, generates a projection image according to the display image, projects the projection image from a second surface, and projects the projection image to a target zone from a first surface. The light shutter is adjacent to the second surface of the light waveguide element and is coupled to the light waveguide element. The light shutter is periodically disabled and enabled in an alternating manner.
November 19, 2020
Date of Patent: June 14, 2022
Inventors: Cheng-Hsiu Tsai, Wei-Jen Chang, Fu-Cheng Fan
Abstract: A wireless communication device is provided. The wireless communication device includes a housing, a circuit board, a radio frequency module and an antenna. The housing has a frame and a back cover to define a receiving space. The circuit board is disposed in the receiving space, and defines a clearance area from the housing in the receiving space. The circuit board includes a ground terminal, a first feeding point, and a second feeding point. The antenna includes at least one metal conductor coupled to the first feeding point and the second feeding point, respectively, to provide a low frequency resonant path, a first middle frequency resonant path, a second middle frequency resonant path and a high frequency resonant path.
Abstract: A speaker module adapted to be disposed on a wearable device is provided. The speaker module includes at least one driving unit and an enclosure. The driving unit is configured to produce sound. The enclosure contains the driving unit and has a front chamber and a rear chamber. The front chamber and the rear chamber are individually located at two opposite sides of the driving unit. The enclosure has a front opening, a first rear opening, and a second rear opening. The front opening communicates with the front chamber. The first rear opening and the second rear opening individually communicate with the rear chamber. A sum of sound outputted from the front opening, the first rear opening, and the second rear opening has directivity.
December 1, 2021
Date of Patent: June 2, 2022
Inventors: Yen-Chieh Wang, Sung Jen Wang, Yu-Zhen He
Abstract: A method, an electronic apparatus and a recording medium for automatically configuring a plurality of sensing devices, applicable to an electronic apparatus having at least one sensor and a communication device, are provided. In the method, first sensing data is detected by using the at least one sensor. A plurality of second sensing data is respectively received from the plurality of sensing devices by using the communication device. The first sensing data and each of the second sensing data are analyzed to obtain a moving pattern of the electronic apparatus and of each of the sensing devices. A position on a user's body of each of the sensing devices is configured by comparing the moving patterns with at least one movement model.
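The comparison step — matching each device's moving pattern against movement models to assign a body position — might be sketched as a nearest-model search; the Euclidean distance metric and the data shapes below are assumptions for illustration only.

```python
# Hypothetical sketch: each sensing device's moving pattern (a short sequence
# of motion samples) is compared against named movement models, and the device
# is assigned the body position of the closest model.

def _distance(pattern_a, pattern_b):
    """Euclidean distance between two equal-length motion-sample sequences."""
    return sum((a - b) ** 2 for a, b in zip(pattern_a, pattern_b)) ** 0.5

def configure_positions(device_patterns, movement_models):
    """device_patterns: device id -> samples; movement_models: position -> samples."""
    positions = {}
    for device_id, pattern in device_patterns.items():
        positions[device_id] = min(
            movement_models,
            key=lambda pos: _distance(pattern, movement_models[pos]),
        )
    return positions
```

A real system would likely compare richer features (frequency content, correlation with the apparatus's own motion), but the nearest-model assignment conveys the configuration idea.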
Abstract: An electronic device includes a circuit board, a package on package structure, a heat-conducting cover, and a heat-conducting fluid. The circuit board has a first surface and a second surface opposite to each other. The package on package structure is disposed on the first surface. The package on package structure has at least one heat generating element. The heat-conducting cover is disposed on the second surface and is in thermal contact with the circuit board. The heat-conducting cover and the second surface form an enclosed space. The heat-conducting fluid fills the enclosed space.
Abstract: A first base station (BS) for handling radio bearer (RB) configurations of radio access technologies (RATs) comprises at least one storage device; and at least one processing circuit, coupled to the at least one storage device. The at least one storage device stores, and the at least one processing circuit is configured to execute, instructions of: configuring a first RB configuration of a first RAT and a second RB configuration of a second RAT to a first communication device, wherein the first RB configuration and the second RB configuration are associated with a first RB; communicating first data associated with the first RB with the first communication device according to the first RB configuration and the second RB configuration; and transmitting the first RB configuration and the second RB configuration to a second BS in a handover preparation procedure for the first communication device.
Abstract: A method of generating user-interactive objects is provided. The method includes the following operations: receiving a picture of a physical environment; identifying a target surface and multiple target objects located on the target surface from the picture to generate an identifying result; generating a virtual surface and multiple virtual three-dimensional (3D) objects located on the virtual surface according to the identifying result, in which the multiple virtual 3D objects are user-interactive objects; and setting multiple operational behaviors of the multiple virtual 3D objects according to a configuration file. The multiple operational behaviors correspond to multiple input operations, respectively. The virtual surface and the multiple virtual 3D objects are for being displayed and manipulated in a virtual environment.
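The configuration-file step — binding input operations to operational behaviors of the virtual 3D objects — could be sketched as a lookup table; the configuration schema, object types, and behavior names below are purely illustrative assumptions.

```python
# Illustrative sketch: a configuration maps each identified object type to the
# behaviors triggered by input operations. The schema here is an assumption.

CONFIG = {
    "cup": {"grab": "attach_to_hand", "tap": "play_clink_sound"},
    "book": {"grab": "attach_to_hand", "tap": "open_pages"},
}

def set_operational_behaviors(virtual_objects, config):
    """Attach an operation -> behavior mapping to each identified virtual object."""
    for obj in virtual_objects:
        obj["behaviors"] = dict(config.get(obj["type"], {}))
    return virtual_objects

# Usage: objects identified from the picture receive their behaviors.
objects = [{"type": "cup"}, {"type": "book"}]
set_operational_behaviors(objects, CONFIG)
```

Driving the behaviors from a configuration file keeps the identification step decoupled from the interaction logic, so new object types only require new config entries.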
Abstract: A method for managing a virtual environment includes the following operations: controlling an audio-visual device to display a virtual environment, in which the virtual environment includes multiple virtual characters, and the multiple virtual characters include a first virtual character corresponding to the audio-visual device; in response to a grouping signal, grouping the multiple virtual characters into multiple virtual groups at different locations in the virtual environment, in which the multiple virtual groups include a first virtual group including the first virtual character; and in response to a first selecting signal, controlling the audio-visual device to stop playing audio of one or more of the multiple virtual groups other than the first virtual group.
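The selective-audio operation — silencing groups other than the one containing the device's own character — could be sketched as a simple filter over the grouping; the data shapes and function name are illustrative assumptions.

```python
# Hypothetical sketch: after grouping, the audio-visual device mutes the
# characters belonging to every virtual group except its own.

def characters_to_mute(groups, own_group_id):
    """groups: group id -> list of character ids; returns characters to silence."""
    return [
        character
        for group_id, members in groups.items()
        if group_id != own_group_id
        for character in members
    ]
```

In practice the selecting signal might mute only some of the other groups; this sketch shows the all-but-own case.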
Abstract: A waveguide device includes a first diffractive element, a second diffractive element, a third diffractive element, and a waveguide element. The first diffractive element has a first grating configured to diffract light of a wavelength to propagate with a first diffraction angle. The second diffractive element has a second grating configured to diffract the light of the wavelength to propagate with a second diffraction angle. The third diffractive element has a third grating and a fourth grating. The third grating is configured to diffract the light of the wavelength to propagate with the first diffraction angle. The fourth grating is configured to diffract the light of the wavelength to propagate with the second diffraction angle. The waveguide element is configured to guide light propagated from the first diffractive element and the second diffractive element to the third diffractive element.
Abstract: An eye-tracking apparatus includes a first lens group, a light splitting device, a display, an image sensor, a second lens group, and a plurality of light sources. The light splitting device receives a first beam, generates a second beam, and transmits the second beam to a second surface of the first lens group. The display projects a reference mark to a target area through the light splitting device and the first lens group. The image sensor captures a detection image on the target area through the first lens group, the light splitting device, and the second lens group. The second lens group is disposed between the light splitting device and the image sensor. The light sources are disposed around the image sensor and project a plurality of beams to the target area through the first lens group, the light splitting device, and the second lens group.
Abstract: The disclosure provides a head-mounted display (HMD) including a chamber, a light emitter, a camera, and a processor. The chamber has a lens and a display, wherein the lens is coated with a reflective layer and faces a target eye of a wearer, and the reflective layer has at least one specific location. The light emitter emits a first light to the reflective layer, wherein for an i-th specific location, the first light is scattered as multiple second lights by the i-th specific location, the second lights are scattered as multiple third lights by the target eye, and the third lights are scattered as multiple fourth lights by multiple reference locations on the reflective layer. The camera captures the fourth lights as an image corresponding to the i-th specific location. The processor estimates an eye pose of the target eye based on the image corresponding to each specific location.
Abstract: A head mounted display device, including a main body, a headband component, two earphone components, and two driving modules, is provided. The headband component includes two opposite headband connectors. The two headband connectors are respectively rotatably arranged on two opposite sides of the main body along a first axis. The two earphone components respectively include two earphone connectors. The two earphone connectors are respectively rotatably arranged on the two sides of the main body along two parallel second axes, and the two earphone connectors are located beside the two headband connectors of the headband component. The two driving modules are respectively arranged between the two headband connectors and the two earphone connectors, and the two earphone connectors are linked to the two headband connectors.