PTC Reality Lab

We research the new, the uncharted, and the possible next technologies.

Research publications

The PTC Reality Lab is committed to exploring the future of Human-Machine Interaction and Spatial Computing. Core to our approach is sharing our findings with the research community in order to advance the state of the art.

  • Transferring Graphic Objects Between Non-Augmented Reality and Augmented Reality Media Domains

    Reynolds, B., Vazquez, C., & Heun, V. (2021). Transferring Graphic Objects Between Non-Augmented Reality and Augmented Reality Media Domains. US Patent US20200051337A1.

    Abstract: A display of an augmented reality (AR)-enabled device, such as a mobile phone, can be used to transfer a graphical object between a secondary display, such as a computer monitor, that is captured by a camera of the AR device, and AR space, where the object is visible only through the AR interface of the AR device. A graphical object can be selected through the AR interface and, for example, moved around on a canvas of the secondary display by the user of the AR device. When the AR interface is used to move an enabled object near an edge of the canvas or physical boundary of the secondary display, the object as shown on the secondary display can be made to disappear from the secondary display to be replaced by a virtual object shown only on the AR interface in a similar location.

    Read More
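
    The handoff described in this abstract can be summarized with a short sketch. The TypeScript below is a minimal illustration under stated assumptions, not the patented implementation: it assumes the AR device already tracks the pose and physical size of the secondary display, and all type and function names (ScreenPose, canvasToWorld, maybeHandOff) are hypothetical.

    // Minimal illustration (hypothetical names, not the patented implementation) of
    // handing an object off from a tracked secondary display into AR space when the
    // user drags it near the edge of the screen's canvas.

    type Vec3 = { x: number; y: number; z: number };

    interface ScreenPose {
        origin: Vec3;         // world position of the canvas' top-left corner
        right: Vec3;          // unit vector along the canvas' horizontal axis (world space)
        down: Vec3;           // unit vector along the canvas' vertical axis (world space)
        widthMeters: number;  // physical width of the canvas
        heightMeters: number; // physical height of the canvas
    }

    // Map a normalized canvas coordinate (u, v in 0..1) onto the tracked plane of the
    // secondary display, yielding a world-space position for the AR counterpart.
    function canvasToWorld(u: number, v: number, s: ScreenPose): Vec3 {
        const dx = u * s.widthMeters;
        const dy = v * s.heightMeters;
        return {
            x: s.origin.x + s.right.x * dx + s.down.x * dy,
            y: s.origin.y + s.right.y * dx + s.down.y * dy,
            z: s.origin.z + s.right.z * dx + s.down.z * dy,
        };
    }

    // If the dragged object is within `margin` of the canvas boundary, return the world
    // position at which the AR-only object should appear (and the screen object should
    // be hidden); otherwise return null and leave the object on the secondary display.
    function maybeHandOff(u: number, v: number, s: ScreenPose, margin = 0.05): Vec3 | null {
        const nearEdge = u < margin || u > 1 - margin || v < margin || v > 1 - margin;
        return nearEdge ? canvasToWorld(u, v, s) : null;
    }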


  • Kinetic AR: A Framework for Robotic Motion Systems in Spatial Computing

    Fuste, A., Reynolds, B., Hobin, J., & Heun, V. (2020, April). Kinetic AR: A Framework for Robotic Motion Systems in Spatial Computing. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-8).

    Abstract: We present Kinetic AR, a holistic user experience framework for visual programming of robotic motion systems in Augmented Reality. The Kinetic AR framework facilitates human-robot collaboration in a co-located environment. Our goal is to present a deployable guide for the creation and visualization of manifold robotic interfaces while maintaining a low entry barrier to complex spatial hardware programming. A two-phase validation process has been conducted to assess our work. As an initial phase, we have performed a set of interviews with robotics experts. Based on these interviews, we have established three main areas that our framework tackles in different time domains. In a second phase, we have developed a set of prototypes using mobile Augmented Reality that apply the principles of Kinetic AR to multiple hardware actors, including an AGV, a robotic arm, and a prosthetic system. Additional feedback from experts indicates the potential of the Kinetic AR framework.

    Read More

  • Kinetic AR: Robotic Motion Planning and Programming using Augmented Reality Interfaces

    Fuste, A., Reynolds, B., Hobin, J., Braga, A., & Heun, V. (2020, March). Kinetic AR: Robotic Motion Planning and Programming using Augmented Reality Interfaces. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 641-641).

    Abstract: Using Augmented Reality (AR) interfaces, motion paths and tasks for co-bots in the factory can become visible and interactive. We present Kinetic AR, a system to control and manipulate motion of an MIR100 Automated Guided Vehicle (AGV) in Augmented Reality using the Reality Editor platform. The MIR100 robot performs a mapping of the environment using laser scanners. We synchronize the coordinate systems recognized by the smartphone and the AGV by performing spatial mapping. This allows for a seamless interaction where the user can control the motion of the AGV in an intuitive and spatial manner without any technical requirements beyond a mobile phone. The user can perform path planning and visualize the motion of the AGV in real time in AR. The synchronization of both environments allows for a usable manipulation where the AGV is aware of the position of the phone at all times and can perform actions such as following the user or moving towards the position where the phone is pointing on the floor. Moreover, motion checkpoints can be actionable and visually connected to other equipment in order to program the coordinated behavior of multiple systems. The platform is spatially aware and allows for a co-located seamless interaction between machines. We envision this technology as a usable interface for the creation and visualization of manifold AGV operations while maintaining a low entry barrier to complex spatial hardware programming.

    Read More
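
    As a rough illustration of the coordinate synchronization described in this abstract (a sketch under assumptions, not the published implementation), the TypeScript below aligns the phone's AR frame with the AGV's laser-scanned map frame using a single shared reference pose, then converts a point the user taps on the floor into a navigation goal for the robot. All names (FrameAlignment, alignFrames, arPointToMapGoal) are hypothetical.

    // Hypothetical sketch: align the phone's AR coordinate frame with the AGV's map
    // frame via a 2D rigid transform, then express an AR floor point as a map-frame goal.

    type Point2 = { x: number; y: number };

    interface FrameAlignment {
        theta: number;        // rotation from the AR frame to the map frame (radians)
        translation: Point2;  // translation from the AR frame to the map frame (meters)
    }

    // Estimate the alignment from one shared observation: the AGV's position and heading
    // as seen in the AR frame and as reported by the robot in its own map frame.
    function alignFrames(agvInAR: Point2, headingInAR: number,
                         agvInMap: Point2, headingInMap: number): FrameAlignment {
        const theta = headingInMap - headingInAR;
        const cos = Math.cos(theta), sin = Math.sin(theta);
        return {
            theta,
            // translation = agvInMap - R(theta) * agvInAR
            translation: {
                x: agvInMap.x - (cos * agvInAR.x - sin * agvInAR.y),
                y: agvInMap.y - (sin * agvInAR.x + cos * agvInAR.y),
            },
        };
    }

    // Convert a floor point tapped in AR coordinates into a navigation goal in the
    // AGV's map frame: goal = R(theta) * point + translation.
    function arPointToMapGoal(p: Point2, a: FrameAlignment): Point2 {
        const cos = Math.cos(a.theta), sin = Math.sin(a.theta);
        return {
            x: cos * p.x - sin * p.y + a.translation.x,
            y: sin * p.x + cos * p.y + a.translation.y,
        };
    }

    With the two frames aligned, the same transform can run in reverse so that the AGV can reason about the phone's position, which is what enables the follow-me and point-to-drive behaviors mentioned in the abstract.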

  • AIR: Augmented Intersection of Realities

    Vazquez, C., Reynolds, B., Bedri, H., Fusté, A., & Heun, V. (2019). AIR: Augmented Intersection of Realities. In ACM SIGGRAPH 2019 Virtual, Augmented, and Mixed Reality (pp. 1-1).

    Abstract: The way we interact with a computer’s digital space is disconnected from the way we interact with the physical space around us. Augmented Reality (AR) introduces a new hybrid notion of space, where digital elements coexist within the physical domain. This presents an opportunity to change the way we interact with computers, using physical space as the medium through which we can interact with content. In our interactive installation, we present research prototypes that augment the intersection of realities (AIR). These experiences leverage AR-enabled devices to visualize the flow of digital content between computing systems, using congruent spatial metaphors to make the interaction with complex computer systems more intuitive. This is presented with two interactive applications. The first application, HybridUI, lets users transition 2D digital user interfaces between a digital display and its adjacent mixed reality space to create customizable HMIs. The second application, ScreenToPrint, extends the AIR model to accommodate 3D content, allowing users to extract a digital model from a computer’s screen, drop it on a 3D printer, and print it in one seamless experience. Our prototypes show how cross-domain interactions can be leveraged to reduce the number of context switches a user must undergo to interact with a system of smart devices, using AR to create a seamless flow between previously disjoint domains that mimics the intuitive spatial metaphors of real-world interaction.

    Read More

  • Remote Spatial Programming and Collaboration using a Real-time Volumetric Capture Space

    Reynolds, B., Bedri, H., Heun, V., Fusté, A., & Vazquez, C. (2019). Remote Spatial Programming and Collaboration Using a Real-time Volumetric Capture Space. In ACM SIGGRAPH 2019 Virtual, Augmented, and Mixed Reality (pp. 1-1).

    Abstract: We present a 3D user interface to enable remote collaborative programming of an industrial machine. Operators can use the interface to fly around a real-time volumetric video of a machine as they make changes to its programming using a spatial drag-and-drop interface. This interface allows a user to alter the functioning of the machine while being able to view and debug the alterations in real life, enabling the programmer to operate the machine from afar. Programming a machine usually requires a specialized programmer to be next to the machine. While programs can be deployed onto machines remotely, it is dangerous for a programmer to make changes to an industrial machine without seeing the effect in real life. This system uses an intuitive 3D visual interface to allow any operator to configure industrial machines in person. This 3D user interface works for those in physical proximity to the machine through an augmented reality mobile device. Extending this to a remote user is difficult because they do not have a 3D spatial view of the machine (or so-called perspective freedom).

    Read More

Demo for BMW
A collection of capabilities put together for BMW by the @PTCRealityLab, featuring #augmentedreality, #cad, and #plm.