Remember Google Glass, also known as Google's failed attempt at making smart glasses a thing? Although that first concept misfired, Google has recently been working intently behind the scenes to perfect a new product. Google just acquired North, a tech company founded in 2012 that specialises in smart glasses. Are we about to get a new and improved version of Glass?
North's Focals, the company's own attempt at smart glasses, launched in 2019, but the company itself, then called Thalmic Labs, was founded in 2012, around the same time Google Glass appeared. Early on, Thalmic Labs built Myo, an armband that read the electrical signals our brains send to our forearm muscles, allowing wearers to control computers, mobile phones or even drones with the flick of a hand. Only later did North shift its focus to Focals, its everyday smart glasses with direct retinal projection and prescription compatibility.
Looking back, both North and Myo were ahead of their time in many ways. Apple, after all, only presented us with its first iPhone in 2007, and its first Apple Watch in 2015. Now, Apple's smartwatches continuously pop up with notifications at any given moment. Their purpose is similar to that of Google Glass, the one big difference between the two products being where they sit on our bodies: with smart glasses, we look ahead, and with smartwatches, we look down. All in all, it looks like we may soon have smartwatches on our faces, whether we need prescription glasses or not.
North's recent announcement informs us that as of 31 July 2020, customers who already own Focals 1.0 will no longer be able to connect their glasses to the app or access their North accounts. Any purchases made on or after 30 June are also eligible for a refund.
Business Insider reported that the company's next planned product will never ship; instead, it will be stripped for parts to develop North and Google's next chapter. Rick Osterloh, the SVP of hardware at Google, has clearly stated his mission to help tech help us, and for it to eventually "fade into the background." But what happens if it fades so far into the background that it becomes invisible? He uses the term "ambient computing". Could that be the real goal here?
With direct retinal projection and prescription compatibility, you could argue this is the first real step we'll be taking towards human enhancement: hands-free augmented reality for all. In turn, this could also give us the ability to be in two places at once, while doing two or more things at once.
While we don't know for sure what Google plans to work on with the help of North's technology, we must think about how much control over ourselves we are willing to surrender to these advancements in technology. The tech industry is only getting closer to perfecting AR products such as smart glasses. Are we ready to let technology become part of us? And will the 'off' button continue to exist?
Networked and smart home devices are increasingly commonplace—from Alexa to Nest to Amazon's new voice-activated device—although these developments aren't always for the best. While some have raised concerns about privacy, the idea of collecting more information about how people react to specific environments has appealed to researchers in interior design and neuroscience for many years. The smart office is becoming more commonplace too, using novel technologies to maximise productivity and create a better working environment overall.
This year, at the Salone del Mobile in Milan, Google and Johns Hopkins University teamed up to unveil an interactive installation focused on neuroaesthetics, the field that examines the relationship between the brain and visual input. The installation, called Space for Being, consisted of a series of themed rooms: Google's head of design for all hardware products, Ivy Ross, and architect Suchi Reddy chose a range of furniture, colours and materials for each one to evoke specific emotions. Upon entry, each visitor was fitted with a Google-designed biometric band which measured their heart activity, breathing rate, temperature and body motion, all of which was collected for assessment by researchers from Johns Hopkins University at the end of the visit.
In some sense, these fields of research have existed for many years already—interior design, architecture and other areas of research around the built environment look at how people respond to the space around them. Common knowledge dictates that a child's nursery shouldn't be painted in tints of bright red or lime green, and increasingly, modern offices tend towards minimalism and neutral decors so they can accommodate multiple businesses or companies. Research has indicated that the environment you're in can affect your mood.
When it comes to the modern workplace, these concerns get turned into fears around productivity: how it could be increased, and where the problem areas are. If workplaces were able to collect data about how employees actually react to their physical environments, the logic goes, they should be able to make fixes or tweaks which could improve productivity.
To some extent, this kind of monitoring, and this small-scale collection of data about the minute details of our lives, is already very commonplace. Fitbits have surged in popularity in the last few years, and there is a whole host of other tracking technologies which people already willingly use. So perhaps it's unsurprising that the new workplace (and the people who design it) would want to use some of those insights.
In theory, this could mean that smart offices are able to measure some of the mildly uncomfortable things about working in an office environment. Do you spend the majority of your day shivering, or constantly fidgeting because your chair is uncomfortable? Does your concentration drop off once a meeting runs past a certain length? Those ideas sound innocuous enough, especially if they're geared toward making minor changes around convenience. But they could be potentially troubling too.
As other research has suggested, networked devices and smart home gadgets can often be turned into tools of monitoring, 1984-style. At least in a house, the idea is that people can turn off these devices or disable their capabilities. But in a professional environment, individual employees don't have that much autonomy if an employer decides to deploy a new, productivity-enforcing technology. That could include monitoring how much people use their phones at work. It's also worth remembering that these conversations around smart offices ignore that blue-collar workers—such as those who work in Amazon warehouses—are already subject to this kind of micro-monitoring, through biometric bands, but have few other options, or ways that they can meaningfully resist.
On the one hand, this could have benefits for employees on a very minor scale, such as adjusting temperatures in different rooms, or potentially reducing the length of meetings. But at the end of the day, that's the kind of problem which could be solved by better communication, or by asking employees for feedback regularly. In reality, no one really wants their employer to have information on everything they do, whether it's networked devices in the office measuring how long you've been away from your computer, or how sleepy you feel after a meeting. A careful line has to be walked between minimally invasive products and ones which could be actively harmful, particularly once workplace dynamics are taken into consideration.
There are privacy concerns as well. In the Google installation, visitors could see their data being erased right in front of their eyes. Although the data collected through bands like these is arguably not that incriminating—whose heartbeat doesn't speed up slightly when giving a presentation, for example?—it's more the principle of a data leak which could be worrying. If the workplace of the future can surveil employees in these ways, some would argue it's a slippery slope until they can start tracking employees in other ways too.