In an attempt to allay users' fears about privacy infringement and consent, Zoom updated its terms of service Monday to clarify that it will not use customer content to train artificial intelligence models without consent. Legal experts contend the tech giant's terms of service still give the company wide latitude to infringe on user privacy.
When Zoom updated its terms of service (TOS) in late July, the tech company sparked criticism and outrage among cybersecurity experts and users concerned that Zoom would use customers' data, without their consent, to train artificial intelligence (AI) models.
Zoom stated three times in its blog post, "Zoom does not use any of your audio, video, chat, screen-sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom's or third-party artificial intelligence models."
According to Zoom, the AI features are turned off by default. However, if a meeting administrator — typically the host — turns them on, the only way for participants to opt out is to leave the meeting.
Referencing a January ruling by the Court of Justice of the European Union against Meta’s forced consent practices, TechCrunch reported, “It goes without saying that telling your users the equivalent of ‘hey, you’re free to leave’ does not sum to a free choice over what you’re doing with their data.”
Data is ‘the new gold’
Zoom states that it retains all rights to service-generated data, which means the company can modify, distribute, process, share, maintain and store such data.
Attorney W. Scott McCollough, whose experience includes writing TOS and acceptable use policies for the tech/telecommunication industry, told The Defender:
“Let us not be fooled and this should be no surprise. Zoom is freeware, although it has [a] subscription, too.
“When the ‘service’ is free the user is the product. There is always a route to user data appropriation and monetization no matter what the terms imply or what boxes the user clicks.”
Although there are reams of federal and state privacy laws, the exception to all of them is consent, attorney Greg Glaser told The Defender:
“There’s a huge value in data. Data is the new gold. Zoom is trying to get around the invasion of privacy laws by saying you consented to Zoom using and training AI on your own video [call].”
The ultimate goal? To use advanced AI to create experiences that can be sold, said Glaser.
‘They have big plans for our data’
According to Glaser, Zoom, along with other tech behemoths like Meta, is trying to establish early in its terms that it has legitimate access to consumer information. That would give the companies carte blanche to create and use "emulates" — computer versions of actual people.
“It is no accident that organizations like Meta, Zoom and Google utilize AI on our private videos. They have big plans for our data, and we’re not entirely necessary in their weird view. According to Robin Hanson in [his] TED Talk about emulates [or “ems” — digital copies of human minds] … ‘Your job is to retire and die.’
“What we’re experiencing is a great showdown between the forces of Western civilization and neo-dystopia.”
And don’t forget Zoom’s ownership structure, said McCollough.
“They are public, although management insiders own almost 60%. Among them is Lt. Gen. (Ret.) H.R. McMaster, former national security advisor to Trump famously tweet-fired for not doing enough to end the forever war in Afghanistan.
“There are also some familiar faces in the Class A lineup: Vanguard and T. Rowe Price. Both big in Pharma. Vanguard is big in telecom, too. The Class B shares appear to be held by others large into surveillance capitalism.”
What are these AI features?
Similar to Slack’s ChatGPT bot, Zoom’s latest AI features — branded as Zoom IQ — let users generate chat responses to colleagues, create whiteboards based on text prompts, provide recaps of meetings and summarize threads in Zoom Team Chat.
Digital privacy advocate Open Rights Group said it’s concerned that the Zoom IQ tools are available to customers on a free trial basis. The group told the BBC the free trial encourages customers to “opt in,” making Zoom’s TOS revisions “more alarming.”
People who turn on the AI tools would “be presented with a transparent consent process for training our AI models using your customer content,” Smita Hashim, Zoom’s chief product officer, told the BBC.
In its blog post, Zoom emphasized that “account owners and administrators control whether to enable these AI features for their accounts.”
But the user’s interaction with the options menu is often itself monitored, said McCollough. “To these folks and their intelligence service handlers, opting out — or refusing to opt in — is a subversive act and probably leads to a demerit against your social credit score,” he added.
According to Axios, “This controversy is just the tip of an enormous iceberg of conflicts over intellectual property rights and privacy that the arrival of generative AI is sending our way.”
A history of privacy law violations
Zoom is no stranger to run-ins with privacy laws. Last April, the company paid $85 million to settle a class action lawsuit over security lapses that enabled hackers to disrupt virtual meetings.
According to the lawsuit, participants in Zoom meetings “had their computer screens hijacked and their control buttons disabled while being forced to watch pornographic video footages,” including images of child sex abuse and physical abuse.
The plaintiffs also accused Zoom of unlawfully sharing data with unauthorized third parties such as Facebook, Google and LinkedIn and of misrepresenting the strength of its end-to-end encryption protocols.