
Google is working on a smart office project. What should we expect?

Networked and smart home devices are increasingly commonplace, from Alexa to Nest to Amazon's new voice-activated device, although these developments aren't always for the best. While some have raised concerns about privacy, the idea of collecting more information about how people react to specific environments has appealed to researchers in interior design and neuroscience for many years. The smart office is becoming more common too, using novel technologies to maximise productivity and create a better working environment overall.

This year, at the Salone del Mobile in Milan, Google and Johns Hopkins University teamed up to unveil an interactive installation focused on neuroaesthetics, the field that examines the relationship between the brain and visual input. The installation, called Space for Being, consisted of a series of named, themed rooms; Google's head of design for all hardware products, Ivy Ross, and architect Suchi Reddy chose a range of furniture, colours and materials for each to evoke specific emotions. Upon entry, every visitor was fitted with a Google-designed biometric band that measured their heart activity, breathing rate, temperature and body motion, all of which were collected for assessment by researchers from Johns Hopkins University at the end.


In some sense, these fields of research have existed for many years: interior design, architecture and other disciplines concerned with the built environment all look at how people respond to the space around them. Common knowledge dictates that a child's nursery shouldn't be painted in tints of bright red or lime green, and, increasingly, modern offices tend towards minimalism and neutral decor that can absorb multiple businesses or companies. Research has indicated that the environment you're in can affect your mood.

When it comes to the modern workplace, these concerns turn into questions about productivity: how it could be increased, and where the problem areas are. If workplaces could collect data about how employees actually react to their physical environments, the logic goes, they should be able to make fixes or tweaks that improve productivity.

To some extent, this kind of monitoring, and this small-scale collection of data about the minute details of our lives, is already very commonplace. Fitbits have surged in popularity in the last few years, and there is a whole host of other tracking technologies which people already willingly use. So perhaps it's unsurprising that the new workplace (and the people who design it) would want to use some of those insights.

In theory, this could mean that smart offices are able to measure some of the mildly uncomfortable things about working in an office environment. Do you spend the majority of your day shivering, or constantly fidgeting because your chair is uncomfortable? Does your concentration drop off once a meeting runs past a certain length? Those ideas sound innocuous enough, especially if they're geared toward making minor changes around convenience. But they could be potentially troubling too.

As other research has suggested, networked devices and smart home gadgets can often be turned into tools of monitoring, 1984-style. At least in a house, the idea is that people could turn off these devices or disable their capabilities. But in a professional environment, individual employees don't have that much autonomy if an employer decides to deploy a new, productivity-enforcing technology. That could include monitoring how much people use their phones at work. It's also worth remembering that these conversations around smart offices ignore the fact that blue-collar workers, such as those in Amazon warehouses, are already subject to this kind of micro-monitoring through biometric bands, and have few options or ways to meaningfully resist.

On one hand, this could benefit employees on a very minor scale, such as adjusting temperatures in different rooms or potentially reducing the length of meetings. But at the end of the day, those are the kinds of problems that could be solved by better communication, or by asking employees for feedback regularly. In reality, no one really wants their employer to have information on everything they do, whether it's networked devices in the office measuring how long you've been away from your computer, or how sleepy you feel after a meeting. A careful line has to be trodden between minimally invasive products and ones that could be actively harmful, particularly once workplace dynamics are taken into consideration.

There are privacy concerns as well. In the Google installation, visitors could see their data being erased right in front of their eyes. And although the data collected through bands like these is arguably not that incriminating (whose heartbeat doesn't speed up slightly when they're giving a presentation, for example?), it's more the principle of a data leak that could be worrying. If the workplace of the future can surveil employees in these ways, some would argue it's a slippery slope until they start tracking employees in other ways too.

Opinion

Does Silicon Valley have a conscience again? Spoiler: not really

By Wade Wallerstein

There's currently a huge shift happening in Silicon Valley. Big Tech is just starting to figure out that any technology's design can have unintended consequences for users, and that any intended consequences may have unexpected long-term effects, a realisation that should have been obvious from the beginning.

Lately, this movement has been championed by former Google Design Ethicist Tristan Harris, whose treatises on the ways that contemporary technologies “hijack our psychological vulnerabilities” led him to quit the big G and found the Center for Humane Technology. On April 23, Harris presented Humane: A New Agenda for Tech, which laid out the organisation's mission to combat the extractive techniques of the modern attention economy. According to Harris and the Center for Humane Technology, the current technological paradigm preys on human weaknesses, exploiting us for financial profit rather than compensating for the vulnerabilities hardwired into our nature and making the world an easier place to live in.

While organisations like the Electronic Frontier Foundation and the World Economic Forum's Centre for the Fourth Industrial Revolution have existed for years to help preserve the relationship between humans and technology, it is only recently that these topics have taken centre stage in American conversations about the ethical application of technology. As we settle into the post-post-internet era, the critical need to understand the long-term effects of the information economy on a global audience's ability to tell fact from fiction or exercise self-control has reached a fever pitch. Something's gotta give.

Online advertising employs certain kinds of algorithmic technology that have made the tension between human needs and the coercive pull of technological design especially apparent. By design, a social media platform like Facebook works to extract attention from a captive audience. The more time a user spends watching or scrolling, the more money the platform makes. Our data is mined and algorithmically exploited to spew content at us that will keep our eyes locked on a screen for longer. YouTube's recommendation algorithm, for example, actively works to erode free will by serving videos that are increasingly difficult to resist clicking on. This has led to the propagation of radical content across the network, as increasingly extreme videos are particularly effective at captivating viewers.
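To make that incentive concrete, here is a minimal, purely illustrative sketch in Python of a recommender whose only objective is predicted watch time. All names and numbers are invented for illustration, and it reflects no real platform's code; the point is simply that when nothing in the ranking objective rewards accuracy, wellbeing or diversity, the most captivating items win every time.

```python
# A hypothetical sketch of engagement-only ranking. The names and numbers are
# invented; this is not YouTube's or Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the model's guess at how long this holds a viewer

def rank_feed(candidates: list[Video], slots: int = 3) -> list[Video]:
    # Greedy: surface whatever is expected to extract the most attention.
    # No term for accuracy, wellbeing or diversity appears in the objective.
    return sorted(candidates,
                  key=lambda v: v.predicted_watch_minutes,
                  reverse=True)[:slots]

feed = rank_feed([
    Video("Calm explainer", 2.1),
    Video("Mild hot take", 5.4),
    Video("Outrage compilation", 11.8),
])
print([v.title for v in feed])  # the most attention-grabbing items rank first
```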

While data use, on the one hand, is a major issue, perhaps more frightening are the ways that black-box algorithms invisibly shape the digital bubbles that surround us online, practically without us knowing it. Facebook's advertising rules, for example, prohibit (among other things) any discussion of mental health, cryptocurrency or sex. These rules don't just apply to products or services, but also to non-profit groups and news publications. Take a small tech news business (like Screen Shot, for instance) that almost exclusively deals with these kinds of topics in relation to digital culture. Or, alternatively, a mental health group that wants to advertise its group grief counselling sessions to a wider audience that might need some help.

While it's true that the engineers behind these decisions probably had somewhat benevolent intentions, something along the lines of protecting young and vulnerable users from inappropriate content or pyramid schemes, the unintended consequences have been devastating. Advertisers who post content that violates these rules, often without in-depth knowledge of the subtleties of the policies, risk content deletion, account suspension or, worse, shadow banning. Moreover, these policies stifle rich possibilities for conversation and the cross-pollination of ideas.

As users, our options for combating these forms of algorithmic control remain slim. We naturally want to connect with other people, and we also naturally enjoy targeted advertisements that more closely fit our needs. These advertising methods work, and e-commerce economies rely on the very high conversion rates that social media platforms offer, which is why these practices have only ramped up.

But technologies of the extractive attention economy throw up major blinders that prevent users from branching into new digital ecosystems, instead confining us to shrinking online spheres. Rather than protecting users, algorithmic surveillance, biased content moderation and artificially imposed social advertising regulations threaten free speech and damage the ability of smaller entities to compete in their markets.

The solution? Hard to say, beyond ‘don’t use social media’. Raising your voice and letting your chosen social media provider know that you disagree with some of its content moderation strategies might be a good option, but then again, it's hard to imagine Facebook changing its mind about this stuff. These same damaging regulatory technologies also prevent child pornography, money laundering scams and illegal goods from flooding your timeline, provide crucial targeted advertising for smaller entities, and pay the bills of your favourite influencers so you can get more piping hot tea delivered to your feed on a regular basis.

More than ever, it is vital that every internet user pays attention to the frame, rather than just the content that passes across the surface of the screen.