Skylight Part Two – Hello World

To use Skylight to its full potential, it is necessary to develop AddIns. As I previously mentioned, AddIns work with Skylight to provide additional information, such as guides, directions, or sensor data, that is specific to the company using it. Through AddIns, tasks, messages, and even points can be sent automatically to specific users in response to particular events or conditions, making Skylight an incredibly useful asset to any individual worker or team.

I went through the process of creating a basic “Hello World” AddIn (for Skylight R4, not R5), which helped me understand the extent of what Skylight can do. To create the AddIn, I got access to the Skylight server, created a user account, and downloaded the client application onto the Vuzix M100 smart glasses that Progress Software had purchased for this purpose.

I also downloaded the files for the AddIn Host Component, which acts as an intermediary between the AddIn and Skylight (for those familiar with Android development, its function is similar to that of an adapter). The host component makes AddIns remarkably easy to work with: once all the files are set up and linked correctly, an AddIn can do virtually anything, from sending sensor data to processing images and using machine learning to send relevant directions to a user.

To use the AddIn Host, it first had to be configured through the XMPP server. This involved creating an account for an external component (the AddIn), changing the settings to match those of the AddIn, and granting the privileges needed for messages to be sent and received.
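For context, registering an external component like this follows the standard XMPP component protocol (XEP-0114) on most servers – I’m assuming Skylight’s setup works the same way, since the account is created as an "external component". The component proves it knows the shared secret configured on the server by hashing it together with the stream ID the server hands back. A minimal C# sketch of that handshake digest:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class ComponentHandshake
{
    // XEP-0114: a component authenticates by sending the lowercase hex
    // SHA-1 of (stream ID + shared secret) inside a <handshake/> element.
    // The stream ID comes from the server's opening stream header.
    static string Digest(string streamId, string sharedSecret)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(
                Encoding.UTF8.GetBytes(streamId + sharedSecret));
            var hex = new StringBuilder(hash.Length * 2);
            foreach (byte b in hash)
                hex.Append(b.ToString("x2"));
            return hex.ToString();
        }
    }

    static void Main()
    {
        // Example values only; real ones come from the server and config.
        Console.WriteLine(Digest("3BF96D32", "addin-secret"));
    }
}
```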

Messages I received in my console when running the AddIn.

To actually write the AddIn, I used Microsoft Visual Studio; the code was all written in C#. While AddIns can also be written in Python or Java, those require additional libraries, so at the moment C# is the most straightforward choice. My code first initialized an instance of the host, enabling communication with Skylight. It then listened for a presence, which signals that a user is connected to the server. If the user was currently online, another event was triggered that sent a message to the user’s glasses, which the user could then open and read.
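In outline, the code looked something like the sketch below. Note that SkylightHost, PresenceReceived, and the other names here are placeholders I’m using for illustration, not the actual identifiers from the AddIn Host library:

```csharp
using System;

// Illustrative sketch only: SkylightHost, e.User, and the other
// member names are placeholders, not the real AddIn Host API.
class HelloWorldAddIn
{
    static void Main()
    {
        // Initialize the host, which opens the connection to Skylight
        // through the XMPP component account configured earlier.
        var host = new SkylightHost("helloworld.addin", "addin-secret");

        // A presence event fires when a user's online status changes.
        host.PresenceReceived += (sender, e) =>
        {
            if (e.User.IsOnline)
            {
                // Push a message that appears on the user's glasses.
                host.SendMessage(e.User, "Hello, World!");
            }
        };

        host.Connect();
        Console.ReadLine(); // keep the AddIn alive to handle events
    }
}
```

The important part is the event-driven shape: the AddIn does nothing until the host raises a presence event, and only then reacts.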

While the AddIn I wrote was extremely basic, I learned an incredible amount about the preparation that goes into letting the entire Skylight process use an AddIn securely and reliably. Throughout the process, I also had the great fortune of direct access to some of the core employees of APX Labs, all of whom provided me with immediate and thorough assistance every step of the way. I came away from this experience with a newfound appreciation for Skylight; with R5 and further development in progress, it has enormous potential to simplify and enhance the way industry currently runs.

Skylight – Industrial Augmented Reality

APX Labs display from the Augmented Reality conference earlier in June.

For the past few weeks I have been learning about Skylight, a software platform that allows companies to develop applications for smart glasses. The software was developed by a Virginia-based startup, APX Labs, with the goal of giving industry workers access to on-the-job training and information. Skylight is already in use at the aerospace company Boeing and at other notable companies in industries such as automotive, telecom, oil and gas, manufacturing, and utilities.

Skylight enables workers to send and receive media files, make video calls, follow step-by-step instructions and guides, and view the locations of important objects and other users. So, how does this all work?

At the core of Skylight is the server, which manages and processes the communication between the other components. The server uses XMPP, the Extensible Messaging and Presence Protocol, a communication protocol typically used for online presence detection and instant messaging. XMPP lets users interact with other online users and determines whether actions such as sending a message or placing a video call can be performed. The server has several other components: the DataCatalog (which stores and manages media files), Points (which represent GPS coordinates of pertinent locations relative to the user’s location), TaskManager (which tracks and manages all the tasks a particular user has to perform), Gateway (which allows calls to an external system), and AddInHost (which connects Skylight to a company-specific AddIn).
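As a rough mental model (mine, not APX Labs’ actual code), you can picture those components as a handful of services sitting behind the XMPP core:

```csharp
// Conceptual model only: these interfaces mirror the component names
// described above, not the real Skylight server code.
interface IDataCatalog { void Store(string name, byte[] media); byte[] Fetch(string name); }
interface IPoints      { void Add(string label, double latitude, double longitude); }
interface ITaskManager { void Assign(string userId, string task); }
interface IGateway     { string CallExternalSystem(string system, string request); }
interface IAddInHost   { void RegisterAddIn(string componentName, string sharedSecret); }
```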

For users wearing smart glasses (or potentially other wearables), all of this information is accessed through the Skylight client. The client is essentially an app that runs on the glasses and works with both binocular displays (which overlay data across your field of vision) and monocular displays (which show data on a small screen above or below your immediate field of vision). This allows the client to run on a diverse range of glasses, from Google Glass to the Vuzix M100 to the Epson Moverio. Once users have registered their device through the dashboard, they can record videos, take pictures, read messages, receive video and phone calls, view tasks, and navigate according to points, all from the client.

The dashboard I just mentioned is primarily for users who are not wearing smart glasses but are instead acting as overseers or administrators. It is a browser-based tool that provides access to all the functionality supported by the server, and lets these users manage a group of workers or add new tasks and points.

While the dashboard allows users to perform a variety of actions, developers are also able to build AddIns, which work with Skylight to provide a service that is more tailored towards specific company needs. This can involve adding company-related documents or data to Skylight, or sending particular messages based upon certain actions, just to name a few possibilities. I will cover AddIns more in my next blog post, as I go through the process of creating one, so look for more updates soon!

Attending the 2015 Augmented World Expo

This past Tuesday, I had the opportunity to attend the 6th Augmented World Expo in Santa Clara. It is the biggest augmented reality conference in the world, with over 200 companies and 3,000 attendees, proving that the hype around augmented and virtual reality has not yet died down. Looking back on all the technologies I saw, it’s easy to see why: every one of the 200 companies demoing had a unique concept or angle behind the product it was developing, displaying the endless possibilities of augmented and virtual reality in everyday life. On a more personal note, it also showed me that augmented reality has incredible potential in the field of robotics, by enhancing computer vision capabilities and leading to more automation and less teleoperation.

The most interesting application of smart glasses I found was from VA-ST, a company in the startup alley. VA-ST was founded by students at Oxford University with the intention of using computer vision and smart glasses to help visually impaired people see more clearly. This is done by locating relevant contours and then highlighting them with emphasized contrast and bold lines.

Real-time feed of what the VA-ST display looks like.

The really clever aspect of this process is that they also use distance as a filter: people and objects that are closer are shown in great detail while those further away are not, keeping the field of vision from becoming cluttered and confusing. Equally impressive was the demonstration of the glasses in a poorly lit area – the darker it was, the better they performed. VA-ST is currently testing the product with users and has met with huge success so far, so it’s very possible that it will become available soon!

On a more well-known note, I also got the chance to try out a prototype of the Oculus Rift with Intel’s new RealSense camera! The RealSense camera contains three lenses: an infrared camera, an infrared laser projector, and a regular 2D camera. Together, these enable RealSense to create 3D scans of physical objects, use depth perception to judge distances in a scene, and show holographic displays, to name a few capabilities. When used with the Oculus Rift, it has the potential to make the virtual gaming headset even more immersive by letting users use their hands, interact with objects in the real world, and more accurately view the world around them. While wearing the Oculus Rift, I played a simple riddle game that involved looking at a (real) glass on a (real) table, watching the objects be manipulated virtually, and following virtual instructions to find the passcode to a lock. While the game was very primitive and intended solely as a proof of concept, it was obvious that these technologies could greatly enhance gaming and entertainment.

Getting a chance to try the Oculus Rift with RealSense!

Night vision goggles with the ARC4 attached.

Other booths of interest were ARA, Atheer, and Optinvent. ARA is a research and engineering company that has geared its augmented reality work towards the military. Its device is ingeniously designed to work with the night vision goggles and helmets that soldiers already use, and provides a display with a map of their teammates that updates as they turn their head, along with pertinent information such as objectives and communications.

Atheer created the AiR SmartGlasses, currently the only glasses that are both gesture-controlled and able to show an overlay of information over the task being performed. The demo showed a surgeon using them to see a patient’s critical health information while operating, and a worker receiving step-by-step instructions for something they were building. Gesture control is especially valuable in these jobs, where workers cannot afford to keep reaching up to tap a selection on their glasses.

Optinvent’s headphones.

A company that took a more unique approach was Optinvent: they focused on user interaction by creating a set of headphones with a built-in monocular virtual display. This not only sidesteps the short battery life and bulk of smart glasses, but also looks more natural on a consumer – a small display for watching music videos or movies while listening feels like a natural extension of a pair of headphones. This was one of the few products that truly seemed tailored towards the general public, so I’m interested to see how they do.

The final booth that we visited was APX Labs. APX Labs provides a platform for smart glasses known as Skylight. The platform is typically used in industry to provide entry-level workers with instructions, distances to objects, or on-site training. Another useful feature is that an overseer can view exactly what a worker sees, making it possible to remotely train multiple people at once, saving both time and money.

APX Labs display!

At the end of last year, Boeing selected Skylight as the platform for its workers to use during assembly and repairs. Boeing is not the only company to do so; Microsoft and SAP, among others, have also begun using the platform.

As far as my work this summer at Progress Software goes, I will be collaborating with APX Labs to develop more applications and uses for Skylight, and I am incredibly excited to get started!

**NOTE: All photo credit goes to Eduardo Pelegri-Llopart**