By Aron Ravin
New York City mayor Bill de Blasio’s ability to govern terribly is second to none. He has overseen a historic crime spike coinciding with a drastic decline in community–police relations. The homelessness crisis remains unaddressed, and drug abuse runs amok in the city. So when he manages to outdo himself, the man deserves some serious applause.
De Blasio recently announced that residents will be required to provide proof of vaccination to gain entry into “indoor dining, indoor fitness facilities, indoor entertainment facilities.” Already, even Democrats are identifying the myriad problems this policy poses.
For one, it’s an egregious encroachment on liberty. The nanny state (city?) is saying that it is within its power to segregate private facilities, so long as it is an issue of public health. Hopefully, this does not set some sort of precedent. Imagine if municipalities had had this authority during the height of the AIDS scare: “No admission unless you have proof that you are HIV-negative.” It’s a colossal privacy concern that can easily double as a political weapon.
But the shark most ominously lurking in the waters is the different rates of vaccination among demographic groups. Only about 40 percent of African Americans in New York City have received at least one dose. Among Latinos, the number is higher (63.5 percent) but still nothing to boast about. Conveniently, the health department does not publicize vaccination statistics for non-Hispanic whites. But if New York follows the trend in other liberal cities such as Portland, Ore. (and I see no reason why it would not), non-Hispanic white people are likely overrepresented among the vaccinated. This sounds like a rather inequitable policy, Mayor de Blasio. More black and brown people getting denied service than white people? That’s what could easily ensue.
The hypocrisy of de Blasio never ceases to astonish. No one has forgotten his grotesque double standard when he kept houses of worship shut down as he personally marched in the massive, superspreader riots that engulfed the city last summer. The argument for allowing those mostly peaceful protests, however, was that the cause of the activists was so important, so just, that they simply had to be permitted. The inequities, the systemic racism, of America just had to be addressed. But vaccine passports are inequitable, are they not?
If it was not completely clear before, it is now. De Blasio and the other politicians taking notes from him do not actually care about lofty leftist ideals like equity. They demand obedience to their ill-considered rules. It shows that they merely desire power.
By The Hill
Video app TikTok, which has come under intense scrutiny from the U.S. government, sidestepped Google policy and collected user-specific data from Android phones that allowed the company to track users without allowing them to opt out, according to an analysis conducted by The Wall Street Journal.
The report released Tuesday comes on the heels of President Trump signing an executive order that targets Beijing-based ByteDance, the parent company of TikTok. The order essentially gives the Chinese tech company 45 days to divest from the app or see it banned in the U.S.
“The spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China continues to threaten the national security, foreign policy, and economy of the United States,” the executive order states. “At this time, action must be taken to address the threat posed by one mobile application in particular, TikTok.”
The White House has grown increasingly wary of TikTok, with the administration claiming that TikTok is selling American user data to the Chinese government. TikTok has repeatedly said that it has not and would never do so.
The data taken from the Android phones is a 12-digit code called a “media access control” (MAC) address, according to the Journal. Each MAC address is unique, and MAC addresses are standard in all internet-ready electronic devices. They are useful for apps trying to drive targeted ads because they cannot be changed or reset, allowing tech companies to build consumer profiles based on the content that users view.
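To make the opt-out problem concrete, here is a minimal sketch (hypothetical code, not TikTok’s actual implementation) of why collecting a hardware identifier defeats privacy resets: any tracking key derived from the MAC address is exactly as permanent as the MAC itself.

```python
import hashlib

def tracking_key(mac_address: str) -> str:
    """Derive a stable per-device key from a MAC address.

    Because the MAC cannot be changed or reset by the user, this key
    survives app reinstalls and advertising-ID resets -- which is why
    collecting it defeats any opt-out the platform offers.
    """
    normalized = mac_address.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same device always yields the same key, no matter what the user resets.
print(tracking_key("AC:DE:48:00:11:22"))
```

By contrast, a platform advertising ID is just a random value the operating system can regenerate on demand, which is exactly the property a MAC-derived key lacks.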
Under the Children’s Online Privacy Protection Act, MAC addresses are considered by the Federal Trade Commission to be personally identifiable information.
A 2018 study from AppCensus, a mobile-app firm that analyzes companies’ privacy practices, showed that roughly 1 percent of Android apps collect MAC addresses.
“It’s a way of enabling long-term tracking of users without any ability to opt-out,” Joel Reardon, co-founder of AppCensus, told the Journal. “I don’t see another reason to collect it.”
Back in 2013, Apple safeguarded its phones’ MAC addresses, and Google did the same with Android phones in 2015. However, TikTok got around this by exploiting a loophole that allows apps to obtain a phone’s MAC address in a roundabout way, the Journal’s analysis reveals.
The Journal says that TikTok utilized MAC addresses for 15 months, ending with an update in November 2019.
“We are committed to protecting the privacy and safety of the TikTok community,” a TikTok spokesperson told The Hill in a statement, citing the “decades of experience” of company chief information security officer Roland Cloutier.
The spokesperson added: “We constantly update our app to keep up with evolving security challenges, and the current version of TikTok does not collect MAC addresses. We have never given any US user data to the Chinese government nor would we do so if asked.”
Google told the Journal that it was “committed to protecting the privacy and safety of the TikTok community. Like our peers, we constantly update our app to keep up with evolving security challenges.”
Microsoft, which has said that it is actively working to purchase the wildly popular app, declined the Journal’s request for comment.
When Facebook started out, most Americans thought they were getting a free service to help them connect with family and friends and that Facebook would be funded by the advertisements on their computer screens. Almost no one understood that their private information was being used to create detailed personal profiles that tracked virtually everything — where they live, who their friends are, what they like and dislike, where they shop, what products they buy, what news or events interest them, and what their political views are. Monetizing each of its users is how Facebook became a billion dollar business. But very few understood that they were, in fact, the product being sold and monetized when they signed up.
We are about to see this same phenomenon on replay when it comes to new high-tech home security systems. But this time it will be on steroids, because firms like Amazon will have access to a lot more than just the things we choose to post online.
They will have microphones and cameras in and around our homes. They could conceivably have access to the most intensely private and personal information and even have video and photos and sound files with our voices from inside and around our homes. How will this valuable private data be used?
These security firms store this information and it can be accessed months or years later. The question is — accessed by whom and for what purposes? If past experience is any indicator, your private information will be available to whomever is willing to pay for it, and for whatever purpose generates income. But you’re not being told that when you buy these new products.
In the past, home security systems used high-tech solutions to monitor doors, windows, glass breakage, and smoke, and to notify you and/or call 911 when there was a break-in or a fire. But they were not collecting your private information. They were not recording your conversations. They were not recording video inside your home, or even who might be coming and going from your home. But all that is changing. The new frontier in home security appears to be the Facebook model — make the client the product that the company is actually selling, but don’t make that clear up front.
Many firms have used high tech automation to lower monitoring costs, and some offer lower prices because they will make it back the same way Facebook did. If you thought Facebook was gathering information about you and your family, wait until you see what they and others can do with your private conversations in the most intimate settings at the front door and within your home.
With devices in our homes that listen to our voices so that they can turn lights on or off, adjust temperatures, turn on the television, or do a hundred other things, we now know that employees who review the recordings have held parties where they share the most embarrassing or strange events they have overheard. Simply stated, employees have saved and replayed private conversations recorded in our homes and used them for their personal amusement. I’m pretty confident that wasn’t in the “User Agreement.” So we have to understand that the potential for abuse of our private information is real and, in fact, likely.
If consumers want security services that record voice and video in and around their home, they have the right to choose that. But to be a real choice, there must be a full and complete disclosure in plain English and there must be real legal accountability for violations of the agreement.
We cannot make an informed decision when the marketing of these devices suggests that they are simply a lower cost, higher tech home security solution. That’s deceptive and it is designed to mislead consumers and lull them into a false sense that their privacy isn’t at risk.
We have the right to know what private information, voice recordings, photos and video are being recorded and stored. How will that information be used? Will it be sold? Will it be used at employee parties to get a laugh? Who has access to your private and intimate data? If you talk about something in the privacy of your bedroom, will you begin receiving push advertisements on that exact topic?
Policymakers should create clear standards that allow consumers to make informed choices. Consumers have every right to invite companies and their employees into their private lives. But it shouldn’t come as a surprise to them what the real deal is. Disclosure allows Americans to decide if they want a home security system or if they want to invite a large corporation into their home to surveil them so that they can expand their profits.
The video doorbell is by far one of the most ubiquitous smart home devices. In 2018 alone, consumers spent more than $530 million on a total of 3.4 million units, putting electronic eyes on the doorsteps of homes across the country.
In the interest of disclosure, I must admit that I own several smart home products, including a video doorbell, and am relatively happy with its performance and functionality. But like many consumers I am concerned about a rash of recent reports highlighting previously undisclosed privacy concerns associated with these devices.
It was recently reported that Ring has entered into surveillance partnerships with over 400 law enforcement agencies across the country. Participating jurisdictions are provided access to a “Law Enforcement Neighborhood Portal” that allows them to directly request video without a warrant, and then store it indefinitely. That raises serious questions about civil rights and liberties and understandably has elicited significant community opposition.
Andrew Ferguson, a law professor at the University of the District of Columbia, succinctly sums up the privacy dynamics at play with these partnerships:
“The pushback they [Amazon] are getting comes from a failure to recognize that there is a fundamental difference between empowering the consumer with information and empowering the government with information. The former enhances an individual’s freedom and choice, the latter limits an individual’s freedom and choice.”
When law enforcement agencies are the customers, he concludes, a company has an obligation to slow down.
To be clear, I have no problem with civic-minded citizens volunteering resources to help solve crimes and Ring doorbells may help modernize crime fighting. But these partnerships, as they currently exist, threaten to create a new level of government surveillance at the front door if oversight cannot keep up.
The use of facial recognition, a feature that is increasingly found in today’s smart cameras and often goes hand in hand with many of these law enforcement efforts, has also been receiving increased scrutiny.
Already used by a host of Google Nest products, including the Nest Hello doorbell, the recently released Google Nest Hub Max has for the first time brought full facial recognition into the home. These cameras flag known users through a “familiar faces” feature, which, according to a Nest spokesperson, is “not shared across users or used in other homes.” That’s for now, at least. When asked about future uses, Nest did not provide additional comment. As Google looks to expand its facial recognition offerings to private-sector and government clients, more clarification needs to be provided about potential future applications.
While Ring cameras do not currently employ facial recognition technology, their parent company Amazon has filed a patent to put its proprietary video scanning software, Rekognition, into its doorbells. Ostensibly it would be used to identify “suspicious people” and alert users when these individuals are caught on camera. While a spokesperson for the company has said “the application was designed only to explore future possibilities,” Rekognition’s other applications indicate this development warrants further examination.
The software has been used by law enforcement to match suspects caught on surveillance footage against mugshot databases. More recently, Rekognition has been marketed to law enforcement as a way to identify people captured on video in real time. If this technology is connected to video doorbells in the future, this could raise some serious privacy concerns.
Will Ring use this as a way to create a real-time visitor log of all guests who visit your house? If a “suspicious person” on your doorstep is “face matched,” will law enforcement be alerted, or will that person be added to some sort of watch list? Given the software’s propensity for generating false positives, this latter point is especially concerning and must be addressed.
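To see why false positives are structural rather than incidental, consider how a face matcher works: it reduces each face to a numeric embedding and flags any visitor whose embedding is similar enough to a watch-list entry. A toy sketch (hypothetical numbers and threshold, not Rekognition’s actual model):

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(visitor_embedding, watchlist, threshold=0.9):
    # A "face match" is just similarity above a threshold; anyone whose
    # embedding happens to land near a watch-list entry gets flagged.
    return any(cosine_similarity(visitor_embedding, w) >= threshold
               for w in watchlist)

watchlist = [[0.9, 0.1, 0.4]]
print(is_match([0.91, 0.12, 0.38], watchlist))  # near the entry: flagged
print(is_match([0.10, 0.90, 0.20], watchlist))  # unrelated face: not flagged
```

Lowering the threshold to catch more “suspicious people” necessarily flags more innocent visitors; there is no setting that eliminates one error without inflating the other.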
Privacy concerns also extend beyond civil liberties. Some of these products have been released without simple safeguards such as two-factor password authentication and end-to-end encryption of videos, leaving sensitive information vulnerable to cyber attacks, stalkers, and foreign governments. Other times it has resulted in software bugs that could be exploited to spy on users.
One company even has a team of workers who watch hundreds of clips per day, some of which capture very intimate moments, to train artificial-intelligence algorithms. The fact of the matter is that the “move fast and break things” mentality of the tech world doesn’t work when such sensitive information is at stake.
In order to regain consumer trust, these companies must move more deliberately, provide greater transparency over how data will be used, and offer users greater control over their products.
Google, for one, has issued a set of plain-English privacy commitments that tell users what kind of data is collected and how it is used. Amazon, in recognition of the ongoing privacy backlash against its cameras, will roll out a “Home Mode” function this fall that will allow owners to turn off audio and video recording while they are home. Both companies, along with a host of other tech companies, have asked for clearer government regulation of facial recognition technology. These are steps in the right direction, but there is still a long way to go.
There is no need for a false choice between the utility and the privacy of smart home devices. Consumers are demanding more transparency over what data is collected and control over how their data is used. Policymakers at all levels should get involved and provide proper oversight. At the end of the day these products can be valuable tools, but it is incumbent on us to set the rules now that will prevent Big Doorbell in the future.
A Chinese company, Ant Financial, largely owned by the government of China, is intent on taking over MoneyGram, a leading US-based financial payments company. This planned acquisition raises serious questions as to whether ownership of MoneyGram would be part of China’s strategic plan to obtain sensitive personal and financial information of Americans and westerners worldwide as well as to undermine American economic strength. This acquisition should be stopped for that reason.
The Committee on Foreign Investment in the United States (CFIUS) exists to review the national security implications of foreign investments in US companies. CFIUS comprises representatives from a number of US agencies and departments — including the Departments of Defense, Homeland Security, State, and Commerce. CFIUS can block foreign sales and investments that would result in a foreign power acquiring assets and intellectual property that would harm America’s national security.
There are a number of important national security and strategic reasons that CFIUS should reject Ant Financial’s proposed takeover of MoneyGram.
Representative Chaffetz had been investigating the scandal-plagued protective agency — the habitual drunkenness and whoring of its agents, among other things — when Secret Service personnel improperly accessed his protected records in a hunt for dirt. The aim was made clear by assistant director Ed Lowery, who wrote to assistant director Faron Paramore: “Some information that he might find embarrassing needs to get out.”
Critics are saying that the agency’s brass — at least 18 of them were culpably aware of the plan, and 45 employees illegally viewed the congressman’s information — have violated the Privacy Act. They certainly have, but that is the least of it. They have illegally accessed protected federal records, which is fraud under federal law and carries a ten-year prison sentence.
In the wake of a security breach last month that resulted in the theft of personal taxpayer data, experts are now raising concerns over a government data warehouse that keeps the information of Obamacare enrollees forever.
The system, known as the Multidimensional Insurance Data Analytics System, or MIDAS, is vaguely described on the federal healthcare.gov site as a “perpetual central repository.” When asked by the Associated Press how many people have direct access to the database, officials refused to say.
The decision to keep the personal information of enrollees forever has raised the ire of experts. Lee Tien, a senior staff attorney for the Electronic Frontier Foundation, stated that it is irresponsible of the government to retain data any longer than is necessary. Similarly, Michael Astrue, a former Social Security commissioner, has argued that there is no justification for keeping this data permanently. In addition, he worries that the federal government is illegally expanding MIDAS by adding personal information from state-run Obamacare exchanges without proper privacy consent.
Minnesota insurance broker Jim Koester was looking for information about assisting with Obamacare implementation; instead, what landed in his inbox was a document filled with the names, Social Security numbers and other pieces of personal information belonging to his fellow Minnesotans.
In one of the first breaches of the new Obamacare online marketplaces, an employee of the Minnesota marketplace, called MNsure, accidentally emailed Koester a document containing personally identifying information for more than 2,400 insurance agents, the Minnesota Star Tribune reported. MNsure was able to quickly undo the damage because Koester cooperated with them, but the incident left him unnerved.
“The more I thought about it, the more troubled I was,” Koester told the newspaper. “What if this had fallen into the wrong hands? It’s scary. If this is happening now, how can clients of MNsure be confident their data is safe?”