YOUR WORLD ON MACHINES’ TERMS.
Unlike virtual reality, augmented reality isn’t restricted to wearables such as AR glasses and headsets; it’s being tested and implemented on phones, projectors, and PCs alike. Yet despite its enormous potential for widespread adoption, the public understands this technology less well than virtual reality.
Because computers exist outside of us, everything we do with them has to be done according to their rules; this has been the history of human-computer interfaces. The machine vanishes the moment we take our gaze away from its fixed display.
Our chances of being understood if the keyboard malfunctions, disconnects, or fails to appear on the screen due to a bug are slim. Good luck to us if we attempt to bypass the keyboard by speaking to the device’s built-in voice assistant.
It’s even worse in software, and it shouldn’t be. To make a single edit to a file, we navigate through countless levels of dialog boxes. Many people would rather suffer than learn shortcuts for the task, because setting them up is so time-consuming.
Because of this, the companies that supposedly make our lives easier make amends by providing 24-hour phone support, extensive community websites, vast libraries of instructional videos, and even in-person genius bars.
SPATIAL COMPUTING IS NOW PERVASIVE COMPUTING
This is no joke: things are about to get much better. The reason is something we’ve discussed before: technology that is mature and well-aligned. Thanks to the same process that got them here in the first place, we are on the verge of seeing computers do what they always promised: disappear.
For more than three decades, we’ve been predicting the rise of ubiquitous computing, which we now refer to as “spatial computing.”
For example, in the last decade, the success of various unicorn startups has depended on the digitization of spatial information about the real world—for vehicle and residence sharing, home task services, on-demand food delivery, etc.
Another example of this emerging phenomenon is the rise of “digital twins,” digital models synced to their physical counterparts, with rules of interaction, transparency, and verifiable history.
The smartphone has been our primary interface to this digital world, and it’s difficult to remember a time when it wasn’t around, even though only about fifteen years have passed. The “mobile revolution” taught us the value of carrying the internet with us at all times.
It has also shown that we’re still operating according to rules set by the computer. We are, in fact, even more enmeshed in our electronic devices than before. Having the internet at our fingertips seemed futuristic just fifteen years ago, but now that we can see it on our phones and tablets, it’s easy to lose track of where we are in the real world.
Peripheral vision is more critical than you might imagine for judging the speed and distance of objects in your field of view. Central vision, despite its fine resolution, can provide only a rough estimate of movement toward or away from you, based on changes in size or on the parallax angle between your eyes.
Augmented Reality Rewires the Brain
Photoreceptors in the retina’s periphery are activated when moving objects enter your field of vision, so you get more accurate information about how fast things are moving. In your peripheral field, your brain detects objects and determines whether or not they’re moving. Interfering with this process can lead to a misunderstanding of relative motion, resulting in a tripping hazard or even a car accident.
The irony is striking. Even if you buy an AR device in the hopes of enhancing your abilities, you may experience many of the same difficulties as people who are blind or visually impaired; and unlike them, you will not have adapted to your vision loss, which may leave you more vulnerable.
People who worry about running into an object ahead slow down and approach more cautiously than those confident they can avoid it. In experiments, people whose vision was artificially narrowed did not automatically add that extra margin, because they still had faith in their capabilities.
Consider fighter pilots. They must keep their focus on what is in front of them while accurately judging the speed and distance of objects around them. Head-up displays present information in the center of the visual field without interfering with peripheral vision’s role in this task, but only because the data is reduced to a bare minimum of lines and symbols.
Fiction Meets Reality
Recent augmented reality wearables, such as Microsoft’s HoloLens and the headset from Magic Leap, a Dania Beach, Florida-based stealth startup, appear to rely more on the center of the visual field than on the periphery to display objects integrated into the real world. And these appear to be realistic objects with fine detail and full color, rather than simple graphics. That raises a second issue.
Designers are likely to make AR graphics far more exciting than the spare lines and numbers of a fighter pilot’s display, and thus far more immersive. Through the projected image, you should still be able to see the real world.
Our brains’ pre-existing neural wiring favors images of people, even when the people are virtual and the competing objects are real. If the AR image includes people (even simple shapes resembling human forms) while you are looking at something non-human in the real world, AR will win the battle for your attention.
Dot patterns resembling human stick figures have been used in experiments since the 1970s to study this phenomenon; scientists call them point-light walkers. These patterns of 11 to 15 points show how strongly our perception is biased toward recognizing human forms with minimal prompting. Although it takes a person one to two seconds to take two steps, we can identify a dozen or so moving dots as a human within 200 milliseconds.
Snap Out of the Real World and Welcome to an Augmented World
Researchers have shown that an image can easily pique the interest of a passerby. It’s a safe bet that content providers will try to make attention-grabbing applications. Human-like shapes in apps will draw more attention from people, obscuring the real world and putting them at greater risk.
We know something about the effects of augmented reality’s human-like images; other, non-human images may also be problematic, but we know less about them. Kaiser Permanente, for instance, has been testing virtual images for use in controlled exposure therapy to help people overcome their fears.
Even very low-resolution images can arouse some of these fears, such as a fear of spiders or other living things. AR app designers are likely to use many biological forms to convey information, which means that for a small percentage of the population these apps could elicit very negative responses.
The news isn’t all bad, however. Augmented reality could be a lifesaver for those who find it difficult to get around in the real world. Because an AR device like Google Glass has sensors such as a camera and an accelerometer, it can track the user’s surroundings and movement in real time and generate cues that improve the user’s safety.
To assist the blind, Australian researchers are working on a SURF-based algorithm that lets AR devices recognize traffic lights and pedestrian signals and predict potential collisions with moving people and objects from video data. So far, the algorithm can identify 90 percent of the people and 80 percent of the objects.
This technology, used in an AR device and coupled with an audible output that doesn’t mask the real-world sounds blind users need to navigate, could be extremely beneficial. Even though these users don’t require the visual display that AR glasses provide, the form factor is right, the mass-produced hardware is inexpensive, and the camera is perfectly positioned to “see” what’s ahead.
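One ingredient of such collision warning can be illustrated with a toy calculation. This is not the SURF pipeline itself; it is a minimal sketch of a common heuristic, estimating time to contact from how fast an object’s apparent size grows between camera frames (the function name and the numbers are assumptions for illustration):

```python
def time_to_contact(size_prev: float, size_curr: float, dt: float) -> float:
    """Seconds until contact, estimated as tau = size / (d size / dt).

    `size_prev` and `size_curr` are the apparent size (e.g. bounding-box
    height in pixels) of the same object in two frames `dt` seconds apart.
    Returns infinity if the object is not growing, i.e. not approaching.
    """
    growth_rate = (size_curr - size_prev) / dt
    if growth_rate <= 0:
        return float("inf")
    return size_curr / growth_rate

# A pedestrian whose bounding box grows from 100 to 110 pixels in 0.1 s
# is roughly 1.1 s from contact -- close enough to warrant an audible cue.
```

A real system would combine many such estimates with object recognition, but the core idea, that faster apparent growth means less time to react, is this simple.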
AR Application with Parkinson’s Patients
Patients with Parkinson’s disease, which causes tremors and stiffness, can benefit from AR wearables. A perplexing characteristic of the disease, motor block or freezing of gait (FOG), the sudden inability to begin a step or stride, puts these people at risk for serious falls. FOG is more likely to occur when a person is approaching a turn, changing direction, navigating a narrow space, or passing through a doorway.
With the right visual cues, some Parkinson’s patients with FOG can walk with a nearly normal stride. Thomas Riess, a podiatrist and Parkinson’s patient himself, developed several early augmented reality devices in the 1990s to exploit this. Head-mounted LEDs projected a scrolling ladder of bars onto a transparent screen in front of the wearer. In trials, his device improved his own ability, and that of other Parkinson’s sufferers, to walk without freezing.
In 2002, Eric Sabelman, at the VA Rehabilitation R&D Center in Palo Alto, Calif., also experimented to see whether computer-generated cues from wearables could reduce the amount of time Parkinson’s patients spent in FOG.
In this experiment, the virtual cues were shown only when needed. LEDs and 3-axis accelerometers mounted at the corners of eyeglass frames measured head movement and flashed lights when the computer detected the onset of FOG. Patients in the early stages of a FOG episode tilt their heads forward to see where their feet are.
The lights flashed in a pattern representing continuation of the intended motion, which was determined from the head-tilt angle and other sensor inputs.
If the patient froze while trying to turn left, the lights flashed more slowly on the left side of the frames than on the right, because motion is more visible on the outside of a turn. If the patient froze while moving forward, the LEDs flashed alternately in the rhythm of the patient’s steps before the freeze.
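As an illustration only, this cueing logic can be sketched in a few lines of Python. The class, the thresholds, and the 1.5× slowdown factor are assumptions for the sketch, not values from Sabelman’s study:

```python
from dataclasses import dataclass

# Assumed thresholds -- illustrative only, not from the study.
HEAD_TILT_FOG_DEG = 25.0   # forward head tilt suggesting freezing onset
FORWARD_SPEED_MIN = 0.1    # m/s; below this the patient has stalled

@dataclass
class SensorFrame:
    head_tilt_deg: float   # forward tilt from the 3-axis accelerometer
    forward_speed: float   # estimated walking speed, m/s
    turn_direction: str    # "left", "right", or "none"

def fog_detected(frame: SensorFrame) -> bool:
    """FOG heuristic: head tilted forward while forward motion has stalled."""
    return (frame.head_tilt_deg > HEAD_TILT_FOG_DEG
            and frame.forward_speed < FORWARD_SPEED_MIN)

def flash_periods(frame: SensorFrame, step_period_s: float) -> dict:
    """LED flash periods (seconds per flash) for each side of the frames.

    Freezing mid-turn: the inside of the turn flashes more slowly, since
    apparent motion is greater on the outside of a turn.  Freezing
    mid-stride: both sides alternate at the pre-freeze step cadence.
    """
    if not fog_detected(frame):
        return {"left": None, "right": None}   # no cueing needed
    if frame.turn_direction == "left":
        return {"left": step_period_s * 1.5, "right": step_period_s}
    if frame.turn_direction == "right":
        return {"left": step_period_s, "right": step_period_s * 1.5}
    return {"left": step_period_s, "right": step_period_s}
```

The point of the sketch is the asymmetry: the cue is not a fixed pattern but is derived from what the patient was trying to do when the freeze began.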
Using this system, FOG was reduced by nearly 30 percent over a 10-meter walk. In our opinion, newer AR devices with displays for both eyes and significantly faster processors could outperform Google Glass at this task.
Researchers at Brunel University in London are working on systems that project lines in front of patients, for example. Although these are cumbersome to use, they demonstrate the potential of AR to solve this particular issue.
Words of Caution
All of this research carries a clear message. Because their users’ vision may be impaired, the designers of AR hardware and applications owe it to the world to be cautious. They need to test their products with people of all ages and physical abilities.
They need to monitor how quickly users respond to their apps so they can improve them. They need real-world obstacle courses that give the wearer a chance to stumble. And they must figure out how much content an AR app can show before it becomes dangerous.
Manufacturers should also educate consumers about the dangers of using these devices, perhaps by requiring them to play mandatory training games before they can use them in the real world.
When we get our hands on new gadgets, we tend to start using them right away and read the instructions later, if at all. Anyone just getting started with wearable AR devices should practice at home or in a nearby park. That way, users would be aware of the risks and could accept or reject them.
App developers who wanted to build diagnostics into their wearables, measuring things like reaction time and balance, could do so using AR and VR software that includes cognitive-assessment tools.
With these tools, apps can help users learn about the dangers of using the app and guide them safely through the initial training. The future of this technology will depend on how rigorously new products are tested using improved tools.
AR GLASSES WILL BE THE UP-AND-COMING HARDWARE PLATFORM.
It’s now clear that the smartphone is a transitional device. Looking at your phone’s map as you walk down a busy city street is an example of people changing their natural, biology-based behavior to accommodate computers. Having a map in front of you, on the other hand, fulfills a basic human need.
Consider:
- Humans have two eyes that work together to see in three dimensions.
- The visual system occupies 30% of the human brain.
- The eyes contain 70% of the human sensory receptors.
- Processing visual information uses half of the brain’s capacity.
That’s why we need a 3D binocular-based technology interface—specifically one that matches our biology. There is no doubt that eyeglasses with built-in computers are the logical choice for the next hardware computing platform, and this is not because they are “sexier” or “more exciting.” Attempting anything else would be a waste of time and resources.
Furthermore, the moment is rapidly approaching. Glasses-based spatial interfaces are improving in quality and falling in price just as supporting infrastructure, such as 5G networks, matures. I’ll say it again: mature and well-aligned technology.
The general public, on the other hand, has a hard time getting behind the idea of carrying around a camera that records everything they see. Early Google Glass didn’t help matters: its wearers were nicknamed “Glassholes,” and most people object to being video-recorded as part of everyday life.
And, of course, the word “sexy” poses its own difficulties. Eyeglasses with a Star Trek-like appearance are unpopular with most people. For at least 30 years, engineering and design have struggled with the major challenge of making glasses that look “normal” while containing all the technology of the next computing platform. We’re not quite there yet, but it’s only a matter of time before we can wear “normal glasses.”
AUGMENTED REALITY IS OUR CHOICE.
When these glasses are worn, a layer of augmented reality (AR) will be overlaid on the real world. From Pokémon Go to city walking tours, AR has already found a home in smartphone apps. But using AR on a handheld screen immediately breaks the immersion.
Virtual reality (VR) goggles, on the other hand, deliberately block out the outside world to create the illusion of an alternate reality. Facebook’s acquisition of Oculus VR shows that “full immersion,” primarily for gaming, is a real market, though Mark Zuckerberg admits he doesn’t yet see a single set of glasses that can do both. He is nonetheless aggressively pursuing AR glasses.
When it comes to working in the real world, our focus is on AR’s ability to overlay stored and live information on the environment as a heads-up display. This capability is already in use across a wide range of applications: surgeons routinely use AR glasses to help align artificial joints and place screws and pins during knee, shoulder, and elbow surgeries.
In the wake of the COVID-19 pandemic and the global warming crisis, the use of AR in medicine has increased. When doctors couldn’t or didn’t want to travel, surgeons on site have shared their “eyes” with distant colleagues, receiving spoken advice or even diagrams drawn in the “air” before them.
Other immediate applications of AR include any work environment where mistakes are costly and time is of the essence, such as a factory floor. Assembling and repairing automobiles, oil wells, plants, and warehouses, as well as calculating damage assessments for insurance agents, all draw on extensive databases of information.
WE MUST LEARN FROM THE MISTAKES OF THE PAST.
The merging of atom-based and bit-based worlds has long been discussed. The physical computing hardware layer of the Internet of Things (IoT) is capturing and distributing data for computation and storage via blockchain, edge, and mesh networks, and this is happening now.
Humans, on the other hand, will soon be outmatched by our environments. Smart contracts will populate every designed phenomenon, from buildings to manufactured goods, and our built surroundings will essentially become sentient.
Our ability to see reality through a magical microscope will be enhanced by quantum computers, which operate in a realm where classical physics does not apply. This will allow us to decode traffic patterns or the previously indecipherable pulse of global markets.
As we move into this new confluence of spatial technologies, we cannot build the spatial web on the current web’s existing frameworks and surveillance-capitalism schemes.
Malicious actors, both human and algorithmic, can take advantage of these structures, which are designed to store and analyze our behavior on someone else’s servers. The hacking of websites and apps will give way to the hacking of our biology and our brains. From there, true dystopia can be modeled.
Can AR Glasses Be an Ethical Alternative?
We must establish ethical guidelines and limits for the use of spatial computing and the spatial web as we move forward with their development. In particular, we need to avoid “lock-in” where proprietary technologies become a permanent part of global systems’ infrastructure. Traditional software business models that rely on centralized power and siloed platforms will stifle not only innovation but basic human rights if we don’t challenge them.
Organizations need to understand both the positive and negative aspects of augmented reality (AR) to ensure a successful implementation of the technology in their businesses.
With the merging of the digital and real worlds brought about by AR, the actual world we live in will also be reshaped. Augmented reality (AR) is finding new applications in a wide range of fields, from the military to medicine, to enhance the sharing of real-time visual information. AR, like any other new technology, has its share of drawbacks and hazards. Organizations that plan to use AR in their processes, whether for their employees or customers, should focus on overcoming these risks.
#1. Overwhelmed by Information
Since the advent of the internet and the rise of social media, there has been an explosion in the production and dissemination of information. Wearable AR technology, such as AR smart glasses, will make it even easier for people to access vast amounts of information from a variety of sources.
Having too much information can cause stress, indecisiveness, and even inaction, which is counterproductive to AR’s goal of enabling quick action with real-time information. For organizations looking to implement AR, limiting the amount of data that can be accessed through various AR applications should be a top priority.
#2. Impairment of Perception
Wearable AR glasses make up a significant portion of the technology, but they also carry the greatest risk. Depending on the application, low-quality AR lenses and glasses can distort the wearer’s field of vision and warp their perception in dangerous ways. AR equipment must meet the highest standards of quality and safety so that the technology can be used without exposing users to these risks.
#3. Distraction
As useful as AR can be, it can also be a distraction at times. AR may prove to be more of a distraction and a hindrance than an enabler for employees who are just getting started with the technology. When using AR for activities like driving or surgery, distractions can be especially dangerous. AR-induced distractions can be avoided if users are properly trained so that they can seamlessly switch from non-AR operations to AR-enabled ones.
#4. Privacy Concerns

An AR device first captures a real-world scene, analyzes the data it contains, and then adds visual information, the “augmentation,” to the scene. AR therefore relies heavily on data to work properly. Besides collecting data on their users, AR devices also collect data on the people being viewed.
This may not be the best option for protecting individual privacy, which will be compromised as augmented reality becomes more widely used. Organizations will face challenges in preserving personal privacy despite the widespread use of AR.
#5. Security Threats

Like any other connected technology, AR is susceptible to hacking and malware. Denial-of-service attacks and falsified information overlays can have catastrophic consequences. A hacker could, for example, cause an accident by misdirecting a driver using an AR-powered navigation system.
Potential Dangers of AR
Despite all the benefits of augmented reality, organizations should not overlook the potential dangers of AR. Risks associated with this technology must be addressed early on to avoid major setbacks and complications during the full-scale implementation of AR, which may be counterproductive to its stated goal of making things easier.
The question is, can augmented reality live up to its promise? Even where the technology improves patient care, as with diseases that impair mobility (Sabelman), we are concerned about the dangers that loom.
Research into the long-term effects of augmented reality on vision and mobility is still in its infancy. We found several reasons to be concerned after reviewing the existing research on how people perceive and interact with the world around them.
For example, you may underestimate your reaction time and overlook real-world dangers when you’re interacting with augmented reality. Even worse, you won’t know you’re at greater risk of harm until something bad happens.
Can Improvements be Made?
Of course, solutions to this problem seem simple enough. When a wearable device has a built-in GPS receiver, designers can use it to disable notifications while the wearer is moving.
On many AR wearables, which are equipped with cameras, a safe mode could also be activated based on image analysis. Simple solutions to this problem do exist. It is unlikely, however, that they will be adopted: buyers of wearables never want their data feed cut off. The whole point of the device is staying connected at all times.
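The GPS-based approach could be gated with logic as simple as this sketch; the speed threshold and priority labels are assumptions, not a shipping design:

```python
# Hypothetical safe-mode gate for an AR wearable: while GPS indicates the
# wearer is moving faster than a slow walk, only critical alerts get through.
WALKING_SPEED_MPS = 0.7  # assumed threshold, roughly a slow walking pace

def should_deliver(priority: str, speed_mps: float) -> bool:
    """Return True if a notification should be shown right now."""
    if speed_mps < WALKING_SPEED_MPS:
        return True                      # stationary or nearly so: show all
    return priority == "critical"        # in motion: critical alerts only
```

The engineering is trivial; as noted above, the obstacle is commercial, not technical.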
Why Would Augmented Reality Be Both Beneficial and Harmful?
A heads-up display like augmented-reality glasses obscures some of the user’s view and can be distracting, so it may not be the best solution. An aircraft head-up display typically presents information in a highly symbolic, minimalist way, with little text and no images of people (we’ll discuss later why that matters), and pilots undergo extensive training to interpret that data quickly and accurately.
Your ability to focus is impaired by presbyopia, as well as by farsightedness and nearsightedness.
Many conditions cause tunnel vision, which obscures objects in the peripheral visual field. These include diabetes, glaucoma, and retinitis pigmentosa. Aging-related macular degeneration reverses this effect by blurring everything in one’s field of view. A poorly designed augmented reality interface could have the same effect on vision as these diseases.
First, think about your general ability to focus. Depending on the severity of their impairment, people with poor vision either wear corrective lenses or have difficulty getting around. Do you find it hard to read street signs at a distance, or do you drive more cautiously at night because it is harder to focus your eyes in low light? To use Google Glass, the wearer must quickly switch between the real world in the distance and images projected on the retina as if they were 2.5 meters away.
People with good distance vision or corrective lenses should be able to focus at this distance without strain, but learning to comfortably adjust focus to see an AR display clearly has a learning curve similar to adapting to bifocals.
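For scale, accommodation demand in diopters is the reciprocal of the viewing distance in meters, so an image placed at 2.5 m asks far less of the eye than a phone held at reading distance:

```latex
D = \frac{1}{d} \quad\Rightarrow\quad
D_{\text{2.5 m display}} = \frac{1}{2.5\,\text{m}} = 0.4\,\text{D},
\qquad
D_{\text{phone at 30 cm}} = \frac{1}{0.3\,\text{m}} \approx 3.3\,\text{D}
```

The 0.4-diopter demand itself is small; the strain some users report likely comes from repeatedly switching between that focal plane and optical infinity, a shift that presbyopia slows.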
As we get older, presbyopia makes it more difficult to quickly shift our focus. Some 5% to 10% of the people we tested Google Glass on gave up because of the eye strain they experienced. After about 20 seconds, they’d struggle to focus and then look away because it was so uncomfortable. People who struggled to focus initially but persevered were more likely to succeed with Glass than those who gave up.
However, shifting focus is only part of the problem. Wearable AR devices also reduce your field of vision. Designers of augmented-reality wearables take great care to avoid blocking the user’s central vision, at least when the user is looking straight ahead.
Because of this, AR displays have notifications tucked away on the side. To check these alerts, you have to temporarily shift your gaze away from the road. This doesn’t eliminate the risk of distraction, which is why so many people look at their phones while driving.
It’s easy to get into trouble if you keep your gaze off to the side for an extended period. These intrusions are dangerous even if you resist the urge to look down at a notification that appears at the very edge of your vision — perhaps by waiting until you have finished crossing the street.
A 2004 study from the Salisbury Eye Evaluation demonstrated this. The researchers evaluated the ability of 1,344 adults, ages 72 to 92, to distinguish between central and peripheral objects on a computer display.
The researchers then had the participants walk through an obstacle course, scoring them on how many things they bumped into, along with other factors such as their visual fields and balance. All else being equal, a 10 percent drop in vision-test scores predicted a 4 percent increase in run-ins with obstacles. This suggests that people who cannot properly track objects in their peripheral vision are the ones most prone to collisions and falls.