A now-viral video has left the internet astonished after it demonstrated a potential AI-piloted aircraft that would never actually land. Designed by science communicator and video producer Hashem Al-Ghaili, who’s best known for his infographics on scientific breakthroughs, the concept of the ‘Sky Cruise’ is basically a flying hotel that boasts 20 nuclear-powered engines with the capacity to carry up to 5,000 passengers. Let me explain.
Al-Ghaili has billed the aircraft as the "future of transport," explaining that conventional airliners would ferry passengers to and from Sky Cruise, which would never touch the ground and would have all repairs carried out in-flight. Because its electric engines would be powered by nuclear energy, the aircraft would never run out of fuel. That said, the clip also explained that the hotel could "remain suspended in the air for several years," indicating that at some point, it would need to land.
Reporting on Al-Ghaili’s design, the Daily Star noted that when asked how many pilots it would take to fly the Sky Cruise, he responded, “All this technology and you still want pilots? I believe it will be fully autonomous.”
Sky Cruise would still need a substantial onboard staff, since it would include a shopping mall, pools, gyms, restaurants, luxurious rooms and even a "big hall that offers a 360-degree view of your surroundings."
For those of you who have already started listing the cons of holidaying in the sky, here are a few answers to some potential worries. If you're concerned about the risks of living above the clouds with no access to medical care should something happen to a guest, the video explained that Sky Cruise would not only use AI to predict air turbulence (and, obviously, avoid it when needed) but would also feature a facility equipped with the latest technology aimed at keeping passengers "safe, healthy and fit."
While the launch date for Sky Cruise is yet to be announced, netizens are already sharing their opinions on Al-Ghaili’s idea, with one person writing in the comment section below the YouTube video: “Sounds like a disaster waiting to happen. Looks cool anyways.”
“Hilarious! It’s like someone got in a time machine, travelled to 2070, found a retrofuturism video based on our era (as opposed to the 1950s or 1800s) depicting what people from our era thought our future would look like,” wrote another.
“This is literally just the Axiom ad from Wall-e,” one user joked, while another added: “You can draw it. You can hire voice actors. You cannot hide that this is an impossibility.”
Back in December 2021, COVID-19 deaths in South Korea had hit a record high when former Prime Minister Kim Boo-kyum admitted that the country could be forced to take “extraordinary measures” to tackle the surge. The plans included the use of AI and facial recognition by leveraging thousands of closed-circuit video cameras to track citizens infected with the virus.
At the time, the public raised several concerns about the technology’s attack on privacy and consent. Is the exchange of personal data for convenience, order and safety a fair trade-off for citizens? Or are governments using the pandemic as an excuse to normalise surveillance?
Now, reports are surfacing that the police in China are buying technology that harnesses vast surveillance data to predict crime and protests before they happen. What's worse is that the systems in question target people flagged as potential troublemakers in the eyes of an algorithm and the Chinese authorities: not only citizens with a criminal past but also vulnerable groups like ethnic minorities, migrant workers, people with a history of mental illness and those diagnosed with HIV.
According to a New York Times (NYT) report, more than 1.4 billion people living in China are being recorded by police cameras that are installed everywhere from street corners and subway ceilings to hotel lobbies and apartment buildings. Heck, even their phones are being tracked, their purchases monitored and their online chats censored. “Now, even their future is under surveillance,” the publication noted.
The latest generation of the technology is capable of warning the police if a drug user makes too many calls to the same number or a victim of fraud travels to Beijing to petition the government for payment. "They can signal officers each time a person with a history of mental illness gets near a school," NYT added.
Procurement details and other documents reviewed by the publication also highlighted how the technology extends the boundaries of social and political controls and incorporates them ever deeper into people's lives. "At their most basic, they justify suffocating surveillance and violate privacy, while in the extreme they risk automating systemic discrimination and political repression," the report mentioned.
In 2020, authorities in southern China allegedly denied a woman's request to move to Hong Kong to be with her husband after software warned them that the marriage was suspicious. An investigation later revealed that the two were "not often in the same place at the same time and had not spent the Spring Festival holiday together." The police then concluded that the marriage had been faked to obtain a migration permit.
So, given that Chinese authorities don't require warrants to collect personal information, how can we know the future has been accurately predicted if the police intervene before it even happens? According to experts, even if the software fails to deduce human behaviour, it can be considered 'successful' since the surveillance itself helps curb unrest and crime to a certain extent.
“This is an invisible cage of technology imposed on society,” said Maya Wang, a senior China researcher with Human Rights Watch. “The disproportionate brunt of it being felt by groups of people that are already severely discriminated against in Chinese society.”
In 2017, entrepreneur Yin Qi, who founded an artificial intelligence start-up called Megvii, first introduced a computer system capable of predicting crimes. At the time, he told Chinese state media that if cameras detected a person spending hours at a stretch at a train station, the system could flag a possible pickpocket.
Fast forward to 2022, and the police in Tianjin have reportedly bought software made by Hikvision, a Megvii competitor, that aims to predict protests. At its core, the system collects data on Chinese petitioners, a general term for people who try to file complaints about local officials with higher authorities in the country. The model then analyses each of these citizens' likelihood of petitioning based on their social and family relationships, past trips and personal situations, helping authorities create individual profiles, with fields for officers to describe the temperament of the protester, including "paranoid," "meticulous" and "short-tempered."
“It would be scary if there were actually people watching behind the camera, but behind it is a system,” Qi told state media back in 2017. “It’s like the search engine we use every day to surf the internet—it’s very neutral. It’s supposed to be a benevolent thing.” He also went on to add that with such surveillance, “the bad guys have nowhere to hide.”