By The Hill
Video app TikTok, which has come under intense scrutiny from the U.S. government, sidestepped Google policy and collected user-specific data from Android phones that allowed the company to track users without allowing them to opt out, according to an analysis conducted by The Wall Street Journal.
The report released Tuesday comes on the heels of President Trump signing an executive order that targets Beijing-based ByteDance, the parent company of TikTok. The order essentially gives the Chinese tech company 45 days to divest from the app or see it banned in the U.S.
“The spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China continues to threaten the national security, foreign policy, and economy of the United States,” the executive order states. “At this time, action must be taken to address the threat posed by one mobile application in particular, TikTok.”
The White House has grown increasingly wary of TikTok, with the administration claiming that TikTok is selling American user data to the Chinese government. TikTok has repeatedly said that it has not and would never do so.
The data taken from the Android phones is a 12-digit code called a “media access control” (MAC) address, according to the Journal. Each MAC address is unique, and the identifiers are standard in all internet-ready electronic devices. MAC addresses are useful for apps trying to drive targeted ads because they can’t be changed or reset, allowing tech companies to build consumer profiles based on the content that users view.
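To illustrate why a MAC address makes such a durable identifier, here is a minimal Python sketch, using only the standard library, that reads the local machine's hardware address and formats it as the familiar 12-hex-digit string. The exact value is hardware-specific, and on some systems `uuid.getnode()` falls back to a random number, so treat this strictly as an illustration.

```python
import uuid

def device_mac() -> str:
    """Return this machine's MAC address as the familiar
    12-hex-digit string (six colon-separated octet pairs)."""
    node = uuid.getnode()  # 48-bit hardware address as an integer
    octets = [(node >> shift) & 0xFF for shift in range(40, -8, -8)]
    return ":".join(f"{o:02x}" for o in octets)

# The value is tied to the network hardware, not a user account,
# so it survives reinstalls and factory resets -- which is exactly
# what makes it attractive for persistent tracking.
print(device_mac())
```

Calling the function twice on the same device returns the same string, which is the property trackers exploit: unlike an advertising ID, the user has no reset button.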
Under the Children’s Online Privacy Protection Act, MAC addresses are considered by the Federal Trade Commission to be personally identifiable information.
A 2018 study from AppCensus, a mobile-app firm that analyzes companies’ privacy practices, showed that roughly 1 percent of Android apps collect MAC addresses.
“It’s a way of enabling long-term tracking of users without any ability to opt-out,” Joel Reardon, co-founder of AppCensus, told the Journal. “I don’t see another reason to collect it.”
Back in 2013, Apple safeguarded its phones’ MAC addresses and Google did the same with Android phones in 2015. However, TikTok got around this by accessing a backdoor that allows apps to get a phone’s MAC address in a roundabout way, the Journal’s analysis reveals.
The Journal says that TikTok utilized MAC addresses for 15 months, ending with an update in November 2019.
“We are committed to protecting the privacy and safety of the TikTok community,” a TikTok spokesperson told The Hill in a statement, citing the “decades of experience” of company chief information security officer Roland Cloutier.
The spokesperson added: “We constantly update our app to keep up with evolving security challenges, and the current version of TikTok does not collect MAC addresses. We have never given any US user data to the Chinese government nor would we do so if asked.”
Google told the Journal that it was “committed to protecting the privacy and safety of the TikTok community. Like our peers, we constantly update our app to keep up with evolving security challenges.”
Microsoft, which has said that it is actively working to purchase the wildly popular app, declined the Journal’s request for comment.
When Facebook started out, most Americans thought they were getting a free service to help them connect with family and friends and that Facebook would be funded by the advertisements on their computer screens. Almost no one understood that their private information was being used to create detailed personal profiles that tracked virtually everything — where they live, who their friends are, what they like and dislike, where they shop, what products they buy, what news or events interest them, and what their political views are. Monetizing each of its users is how Facebook became a billion dollar business. But very few understood that they were, in fact, the product being sold and monetized when they signed up.
We are about to see this same phenomenon on replay with new high-tech home security systems. But this time it will be on steroids, because firms like Amazon will have access to a lot more than just the things we choose to post online. Products like Ring can store this information, and it can be accessed months or years later.
They will have microphones and cameras in and around our homes. They could conceivably have access to the most intensely private and personal information and even have video and photos and sound files with our voices from inside and around our homes. How will this valuable private data be used?
These security firms store this information, and it can be accessed months or years later. The question is: accessed by whom, and for what purposes? If past experience is any indicator, your private information will be available to whoever is willing to pay for it, and for whatever purpose generates income. But you’re not being told that when you buy these new products.
In the past, home security systems used high-tech solutions to monitor doors, windows, glass breakage and smoke, and to notify you and/or call 911 when there was a break-in or a fire. But they were not collecting your private information. They were not recording your conversations. They were not recording video inside your home, or even who might be coming and going from your home. But all that is changing. The new frontier in home security appears to be the Facebook model: make the client the product the company is actually selling, but don’t make that clear up front.
Many firms have used high tech automation to lower monitoring costs, and some offer lower prices because they will make it back the same way Facebook did. If you thought Facebook was gathering information about you and your family, wait until you see what they and others can do with your private conversations in the most intimate settings at the front door and within your home.
With devices in our homes that listen to our voices so that they can turn lights on or off, adjust temperatures, turn on the television, or do a hundred other things, we now know that employees who review the recordings have held parties where they share the most embarrassing or strange events they have overheard. Simply stated, employees have saved and replayed private conversations recorded in our homes and used them for their personal amusement. I’m pretty confident that wasn’t in the “User Agreement.” So we have to understand that the potential for abuse of our private information is real and, in fact, likely.
If consumers want security services that record voice and video in and around their home, they have the right to choose that. But to be a real choice, there must be a full and complete disclosure in plain English and there must be real legal accountability for violations of the agreement.
We cannot make an informed decision when the marketing of these devices suggests that they are simply a lower cost, higher tech home security solution. That’s deceptive and it is designed to mislead consumers and lull them into a false sense that their privacy isn’t at risk.
We have the right to know what private information, voice recordings, photos and video are being recorded and stored. How will that information be used? Will it be sold? Will it be used at employee parties to get a laugh? Who has access to your private and intimate data? If you talk about something in the privacy of your bedroom, will you begin receiving push advertisements on that exact topic?
Policymakers should create clear standards that allow consumers to make informed choices. Consumers have every right to invite companies and their employees into their private lives. But it shouldn’t come as a surprise to them what the real deal is. Disclosure allows Americans to decide if they want a home security system or if they want to invite a large corporation into their home to surveil them so that they can expand their profits.
Dear Chairman Simons,
The Federal Trade Commission (“FTC” or “Commission”) should open an investigation into Ring—a subsidiary of Amazon—and its data-sharing practices with law enforcement officials. Ring’s conduct raises a number of concerns, including fears that (1) the emerging technology may result in discriminatory law enforcement activity, (2) sensitive consumer data may be jeopardized as a result of misuse by Amazon, and (3) consumers may be subjected to heightened physical security risks. Given these concerns, which are outlined in greater detail below, and Amazon’s history of data mishandling, the FTC should more deeply examine the damaging effects of these practices.
While innovative, Ring’s home security doorbell and its use of consumer data are cause for significant concern, as this conduct has the potential to result in considerable consumer harm. So-called “smart home” technology is still very much in its infancy, and its misuse has the potential to cause lasting damage to consumers if the necessary precautions are not taken.
Despite the potential benefits of “smart home” technology like the Ring “smart” doorbell, the data collected by Amazon opens consumers to exposure under the promise of additional security. As a result, not only is consumer data made more vulnerable, but their physical safety is put at unprecedented risk.
As the Commission is well aware, as more data is collected by Amazon, potential data breaches become more damaging. A data breach of consumers’ home security system by nefarious actors could have direct consequences on consumer physical safety. For example, should home security video footage fall into the wrong hands, consumers’ daily routines—including when they leave home and when they are alone and most vulnerable—would be easily discernible by criminals intending to cause harm.
According to reports, Ring has already misled consumers about its data handling practices. The Washington Post reports that Ring has partnered with over 400 police departments in the U.S., “granting them potential access to homeowners’ camera footage.” Amazon was able to secure hundreds of partnerships by capitalizing on artificially low prices funded through taxpayer resources. Making matters worse, Ring engages in these partnerships without first informing its users. This deceptive practice raises, at best, tremendous ethical concerns.
Amazon’s record on data security is already cause for concern. Recently, two prominent senators have asked the Commission to investigate Amazon’s role in the Capital One data breach, which affected nearly 100 million customers. Given Amazon’s potential involvement in this historic breach and its reckless handling of consumer data captured through Ring, it would be unwise to allow this activity to continue without at least some examination from the Commission.
In addition to the data security concerns, Ring’s video-sharing arrangement raises questions about the potential for profiling. In an open letter to lawmakers, more than 30 civil rights action groups described the threat to civil liberties posed by Ring’s partnership with law enforcement. In the letter, the organizations explain the dangers of this arrangement:
“With no oversight and accountability, Amazon’s technology creates a seamless and easily automated experience for police to request and access footage without a warrant, and then store it indefinitely. In the absence of clear civil liberties and rights-protective policies to govern the technologies and the use of their data, once collected, stored footage can be used by law enforcement to conduct facial recognition searches, target protesters exercising their First Amendment rights, teenagers for minor drug possession, or shared with other agencies […].”
These sentiments were echoed by another prominent senator in a letter to Amazon CEO Jeff Bezos. In the letter, the lawmaker outlined the privacy and civil liberty concerns noted above. Amazon has not yet responded to this letter—a clear indication that, unless pressured by government officials, the company will only act in accordance with its own interests, rather than address the genuine threats expressed here. Because of this, it would be wise for the FTC to act before the situation spirals out of control.
Inaction in light of these facts would subject consumers to risks that are all too dangerous. As the top “cop on the beat,” the FTC has a public responsibility to protect consumers from unfair and deceptive business practices. Given the data security and civil liberty concerns, it would be wise for the FTC to undertake a review of the partnership between Amazon and Ring and law enforcement authorities.
This issue—that of data security and physical safety—is bipartisan in nature. In fact, it transcends politics entirely.
Thank you for your attention to this matter.
 Harwell, D. (2019, August 28). Doorbell-camera firm Ring has partnered with 400 police forces, extending surveillance concerns. Retrieved October 24, 2019, from https://www.washingtonpost.com/technology/2019/08/28/doorbell-camera-firm-ring-has-partnered-with-police-forces-extending-surveillance-reach/.
 Guariglia, M. (2019, August 30). Five Concerns about Amazon Ring’s Deals with Police. Retrieved October 24, 2019, from https://www.eff.org/deeplinks/2019/08/five-concerns-about-amazon-rings-deals-police.
 Fight for the Future (2019, October 7). Open letter calling on elected officials to stop Amazon’s doorbell surveillance partnerships with police. Retrieved October 24, 2019, from https://www.fightforthefuture.org/news/2019-10-07-open-letter-calling-on-elected-officials-to-stop/.
The video doorbell is by far one of the most ubiquitous smart home devices. In 2018 alone, consumers spent more than $530 million on a total of 3.4 million units, putting electronic eyes on the doorsteps of homes across the country.
In the interest of disclosure, I must admit that I own several smart home products, including a video doorbell, and am relatively happy with its performance and functionality. But like many consumers I am concerned about a rash of recent reports highlighting previously undisclosed privacy concerns associated with these devices.
It was recently reported that Ring has entered into surveillance partnerships with over 400 law enforcement agencies across the country. Participating jurisdictions are provided access to a “Law Enforcement Neighborhood Portal” that allows them to directly request video without a warrant, and then store it indefinitely. That raises serious questions about civil rights and liberties and understandably has elicited significant community opposition.
Andrew Ferguson, a law professor at the University of the District of Columbia, succinctly sums up the privacy dynamics at play with these partnerships:
“The pushback they [Amazon] are getting comes from a failure to recognize that there is a fundamental difference between empowering the consumer with information and empowering the government with information. The former enhances an individual’s freedom and choice, the latter limits an individual’s freedom and choice.”
When law enforcement agencies are the customers, he concludes, a company has an obligation to slow down.
To be clear, I have no problem with civic-minded citizens volunteering resources to help solve crimes and Ring doorbells may help modernize crime fighting. But these partnerships, as they currently exist, threaten to create a new level of government surveillance at the front door if oversight cannot keep up.
The use of facial recognition, a feature increasingly found in today’s smart cameras and one that often goes hand in hand with many of these law enforcement efforts, has also been receiving increased scrutiny.
Already used by a host of Google Nest products, including the Nest Hello doorbell, the recently released Google Nest Hub Max has for the first time brought full facial recognition into the home. These cameras flag known users through a “familiar faces” feature, which according to a Nest spokesperson is “not shared across users or used in other homes.” That’s for now, at least. When asked about future uses, Nest did not provide additional comment. As Google looks to expand its facial recognition offerings to private sector and government clients, more clarification needs to be provided about potential future applications.
While Ring cameras do not currently employ facial recognition technology, their parent company Amazon has filed a patent to put its proprietary video scanning software, Rekognition, into its doorbells. Ostensibly it would be used to identify “suspicious people” and alert users when these individuals are caught on camera. While a spokesperson for the company has said “the application was designed only to explore future possibilities,” Rekognition’s other applications indicate this development warrants further examination.
The software has been used by law enforcement to match suspects caught on surveillance footage against mugshot databases. More recently, Rekognition has been marketed to law enforcement as a way to identify people captured on video in real time. If this technology is connected to video doorbells in the future, this could raise some serious privacy concerns.
Will Ring use this as a way to create a visitor log, in real time, of all guests who visit your house? If a “suspicious person” on your doorstep is “face matched” will law enforcement be alerted or add you to some sort of a watch list? Given the software’s propensity for generating false positives, this latter point is especially concerning and must be addressed.
Privacy concerns also extend beyond civil liberties. Some of these products have been released without simple safeguards such as two-factor password authentication and end-to-end encryption of videos, leaving sensitive information vulnerable to cyber attacks, stalkers, and foreign governments. Other times it has resulted in software bugs that could be exploited to spy on users.
One company even has a team of workers that are watching hundreds of clips per day, some of which capture very intimate moments, to train artificial intelligence algorithms. The fact of the matter is the move fast and break things mentality of the tech world doesn’t work when such sensitive information is at stake.
In order to regain consumer trust these companies must move more deliberately, provide greater transparency over how data will be used, and offer greater user control over their products.
Google, for one, has issued a set of plain-English privacy commitments that tell users what kind of data is collected and how it is used. Amazon, in recognition of the ongoing privacy backlash against its cameras, will roll out a “Home Mode” function this fall that will allow owners to turn off audio and video recording while they are home. Both companies, along with a host of other tech companies, have asked for clearer government regulation of facial recognition technology. These are steps in the right direction, but there is still a long way to go.
There does not need to be a false choice when it comes to the utility and privacy of smart home devices. Consumers are demanding more transparency over what data is collected and control over how their data is used. Policymakers at all levels should get involved and provide proper oversight. At the end of the day these products can be valuable tools, but it is incumbent on us to set the rules now that will prevent Big Doorbell in the future.
Facebook faces what some are calling an “existential crisis” over revelations that its user data fell into the hands of the Trump campaign. Whether or not the attacks on the social media giant are justified, the fact is that the Obama campaign used Facebook (FB) data in the same way in 2012. But the reaction from the pundits and press back then was, shall we say, somewhat different.
According to various news accounts, a professor at Cambridge University built a Facebook app around 2014 that involved a personality quiz. About 270,000 users of the app agreed to share some of their Facebook information, as well as data from people on their friends list. As a result, tens of millions ended up part of this data-mining operation.
Consulting firm Cambridge Analytica, which paid for the research, later worked with the Trump campaign to help them target advertising campaigns on Facebook, using the data they’d gathered on users.
According to the Government Accountability Office (GAO), there were over 5.3 million vehicle crashes in 2011, resulting in 2.2 million injuries and about 32,000 deaths.
What if it were possible to prevent up to 80 percent of those crashes simply by incorporating a few new pieces of technology into automobiles? Driver-assisted vehicles boast both the ability to increase road safety (by electronically calculating potential road hazards better than a human driver can) and to potentially drive themselves, freeing the vehicle operator to focus on other tasks. To the outside observer, it would seem an easy conclusion that American automobile consumers would want to see the expansion of driver-assisted vehicles in the coming years.
But underlying the proposed safety and lifestyle benefits of driver-assisted and driverless vehicles are serious privacy and personal liberty concerns (the ability of the government or a third party to track where you are driving, digitized limits on vehicle speed, etc.) that need to be addressed before the public will be comfortable backing widespread adoption. As the automobile industry seeks to expand driver-assisted and driverless vehicle technology, the balance between safety and privacy is an automotive freedom issue that the industry and the American public will have to grapple with for some time.
As early as 1939, American engineers were developing driverless vehicles and the road systems on which they would travel. While the technology was not ready then, it is now at the point where science fiction meets reality. At a recent House Transportation and Infrastructure Committee hearing, National Highway Traffic Safety Administration (NHTSA) Administrator David Strickland said his agency will “decide this year whether to further advance” vehicle-to-vehicle communications technology “through regulatory action, additional research, or a combination of both.”
Vehicle-to-Vehicle (V2V) technology enables driver assisted and even autonomous driving. Central to this technology is a “black box,” a computer that interacts with sensors throughout an automobile to control functions such as speed and direction, and also communicates with other vehicles to avoid collisions. V2V technology has the potential to reduce collisions by as much as 80% and according to Administrator Strickland’s testimony “will give drivers information needed to make safe decisions on the road that cameras and radars just cannot provide.”
While full vehicle automation is still several years down the road, in the short term V2V technology will be a marginal improvement over current sensor technology, which assists with blind spots, braking and lane drifting. One of the first true V2V technologies adopted could be an adaptive cruise control that would automatically adjust a vehicle’s speed to maintain a safe driving distance. This has the potential to increase lane capacity, reduce collisions and improve fuel economy by up to 20 percent. The benefits of driverless vehicles also extend beyond increased safety and could offer increased mobility to disadvantaged populations, such as senior citizens and the disabled.
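The adaptive cruise control idea described above can be sketched as a simple rule: if the gap to the lead vehicle is shorter than a desired time headway, slow down; otherwise match the lead vehicle's speed. This is an illustrative toy controller, not how any production system works — the two-second headway and the proportional slowdown are assumptions for the sketch.

```python
def acc_target_speed(own_speed_mps: float,
                     lead_speed_mps: float,
                     gap_m: float,
                     headway_s: float = 2.0) -> float:
    """Toy adaptive-cruise rule: when the current gap is shorter than
    the desired time headway, scale speed down in proportion to how
    badly the gap is violated; otherwise track the lead vehicle."""
    desired_gap = max(own_speed_mps * headway_s, 1.0)  # metres; never zero
    if gap_m < desired_gap:
        return min(own_speed_mps, lead_speed_mps) * (gap_m / desired_gap)
    return lead_speed_mps

# Tailgating at 30 m/s with only a 30 m gap -> back off sharply.
print(acc_target_speed(30.0, 28.0, 30.0))   # 14.0
# A comfortable 100 m gap -> simply match the lead vehicle.
print(acc_target_speed(30.0, 28.0, 100.0))  # 28.0
```

A real controller would smooth these adjustments over time rather than jump to the target speed, but even this toy version shows why V2V data (precise gap and lead-vehicle speed) is more useful than a camera estimate alone.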
V2V technology has the potential to alter more than just the way we drive. It also has the potential to alter the way driving is taxed. Currently motorists fund the majority of road infrastructure through a series of state and federal gasoline excise taxes. The federal tax, set at 18.4 cents/gallon, has not been increased since 1993. This has resulted in a structural deficit in the Highway Trust Fund of roughly $15 billion.
The advent of more fuel-efficient vehicles, coupled with a tax that some argue has not kept up with inflation, has sparked discussions on how to reform the taxes used to fund transportation projects in order to rebuild crumbling road infrastructure. Some advocates for reform argue that the excise tax should be replaced with a miles-traveled tax that could be calculated and tracked by the same black-box-type device that would enable driver-assisted and autonomous driving.
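The arithmetic behind that structural deficit is straightforward: the per-mile revenue of a per-gallon tax falls as fuel economy rises. A quick sketch using the 18.4-cents-per-gallon figure from the text; the fuel-economy values are illustrative.

```python
GAS_TAX_PER_GALLON = 0.184  # federal excise tax in dollars, unchanged since 1993

def gas_tax_per_mile(mpg: float) -> float:
    """Effective federal gas tax paid per mile at a given fuel economy."""
    return GAS_TAX_PER_GALLON / mpg

# As fuel economy improves, per-mile revenue shrinks -- the structural
# problem a miles-traveled tax is meant to fix.
for mpg in (20, 35, 50):
    print(f"{mpg} mpg -> {gas_tax_per_mile(mpg) * 100:.2f} cents/mile")
```

A 20-mpg car pays roughly 0.92 cents per mile, while a 50-mpg hybrid pays about 0.37 cents for the same road use — which is why a flat per-mile tax looks attractive to reformers, and why it requires the tracking hardware discussed next.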
While paying by the mile driven may be a fairer way to tax road use, it also illustrates many of the privacy concerns V2V communications systems present. As with income taxes, some individuals would likely try to tamper with their devices to reduce their tax burden. In 2011, the IRS identified roughly two million tax returns that were fraudulent, leading to an increase in audits and other agency actions. Other black boxes may simply malfunction. A government agency, much like the IRS, could easily be deemed necessary to ensure the integrity of the mileage tax. As a result, it is not difficult to envision a scenario in which vehicles equipped with V2V technology would be subjected to regular, perhaps even weekly or monthly, black box inspections.
There are additional privacy concerns that the government admits must be addressed before the V2V technology is more widely adopted. According to a November 2013 GAO report, one of the greatest challenges facing the development of V2V technology is establishing “technical specifications for a system that attempts to maintain users’ privacy while providing security for over the air transmission of data.”
Any technology that can upload mileage information to the government could also be used to determine and regulate a variety of other factors. The radio system underpinning V2V technology broadcasts a number of data points, including “a vehicle’s latitude, longitude, time, heading angle, speed, lateral acceleration, longitudinal acceleration, yaw rate, throttle position, brake status, steering angle, headlight status, turn signal status, vehicle width, vehicle mass, bumper height and the number of occupants in the vehicle.” In theory, the government could use this information to determine your location and perhaps limit the speed you can drive. Third-party companies such as automobile insurers could raise premiums based on driving habits, and law enforcement could use the data to assign fault in accidents, aid criminal investigations, and issue remote tickets for traffic violations.
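To make the scope of that broadcast concrete, here is a hypothetical sketch of the payload, with one field per data point named in the quote above. The field names and types are illustrative; actual standardized V2V message formats differ.

```python
from dataclasses import dataclass

@dataclass
class V2VBroadcast:
    # One field per data point listed in the article; names and types
    # are illustrative, not a real message schema.
    latitude: float
    longitude: float
    time: float
    heading_angle: float
    speed: float
    lateral_acceleration: float
    longitudinal_acceleration: float
    yaw_rate: float
    throttle_position: float
    brake_status: bool
    steering_angle: float
    headlight_status: bool
    turn_signal_status: int
    vehicle_width: float
    vehicle_mass: float
    bumper_height: float
    occupants: int

def is_trackable(msgs: list[V2VBroadcast]) -> bool:
    """Even two timestamped positions are enough to start
    reconstructing a vehicle's route -- the core privacy concern."""
    return len(msgs) >= 2
```

Note that even without an explicit vehicle ID, the static fields (width, mass, bumper height) arguably act as a fingerprint that lets an observer link successive broadcasts to the same car.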
For example, in 2011, The New York Times reported that Timothy Murray, then Lieutenant Governor of Massachusetts, was involved in a car accident in his government-issued vehicle. Mr. Murray initially claimed that he was wearing his seat belt and was not speeding. When the police checked the on-board computer the state had installed to monitor the vehicle, it was discovered that he was driving 100 mph and was not wearing a seat belt. As a result, Mr. Murray, who claims he was asleep at the wheel, was issued a $555 ticket.
A continuous connection between your vehicle and a network could also pose a threat to how and when you drive. Eric Peters of the National Motorists Association recently reported on the Ford MyKey system and its broader applications across a fleet of smart vehicles. One of the things most striking to the author was an icon within the speedometer that uses GPS location to constantly inform the driver of a road’s given speed limit. The vehicle, through certain administrator privileges has the ability to set a maximum speed. By wedding these two technologies, the author contends that in the near future cars may be programmed to never exceed the speed limit on a given road. Peters also believes that a centralized control technology means “cars could also simply be turned off, individually (as when you haven’t paid a fine or done some other thing to incur the government’s displeasure) or en masse — in the event of some ‘national emergency.’”
As with many new technologies, there are significant advantages to be realized but also serious concerns that must be addressed. That is why automotive freedom and consumer choices are an important component to the proper development of V2V technology.
When considering the adoption of V2V vehicles, two very different scenarios are possible. In one scenario, the government could promote V2V technology the way it has fuel economy standards: through a command-and-control approach that has shunned new technologies like turbo engines, which could increase fuel economy without sacrificing performance. Instead, government regulations have forced companies to manufacture lighter and smaller vehicles, effectively restricting consumer choice.
In a second scenario, private industry rises to meet a market demand, absent government mandates. This is true of antilock brakes, which were organically created to address a safety issue and have since evolved into a standard feature on most vehicles. When left with a choice, the consumer benefits most from the market leading innovation and responding to demand. This is, in essence, the heart of automotive freedom.
Vehicle-to-vehicle technology, and its eventual path to autonomous driving, has the potential to be one of the most consequential technologies of this generation, but public acceptance is contingent on performance meeting expectations and privacy concerns being addressed. This is why automotive freedom is such an important component of V2V expansion. Motorists should be vigilant as this technology develops to ensure that the proper balance of both performance and personal liberty is achieved.
. . . . . . . . . . . . . . . . . . . . . . . .
Travis Korson is the Communications Director for Frontiers of Freedom, a think tank with a mission to promote free markets, limited government and free enterprise. Visit them at www.ff.org to learn more about their automotive freedom project.