Twitter, the social media giant that dominates online chatter, on Friday suspended the account of the ballot-integrity group “True the Vote,” alleging the group’s tweets about military ballots and voting deadlines violated the platform’s rules.
True the Vote President Catherine Engelbrecht responded angrily to the move, the latest in a series of actions by the media platform that have some accusing it of trying to stifle debate and the free flow of information during the election season to the detriment of conservative candidates and activists.
Twitter temporarily suspended the group’s account, according to a statement from Engelbrecht, after a Sept. 15 post that encouraged citizens and potential voters to confirm their counties were following the rules for mailing out ballots to members of the military serving in other states and overseas.
Twitter and other social media sites have in recent months announced new policies to protect against tampering by foreign nationals and security agencies seeking to affect the 2020 election. The increased supervision of posts began after congressional investigating committees and an inquiry overseen by former FBI Director Robert Mueller all concluded the Russians had penetrated U.S. social media platforms with misleading messages during the 2016 campaign. No evidence was ever produced, however, that demonstrated beyond a reasonable doubt the Trump campaign colluded with Moscow in these activities as many Democrats charged then and still maintain was the case.
Advocates for the military have for some time complained that ballots for local, state, and federal elections are often not mailed out early enough for soldiers, sailors, and Marines serving overseas to receive them, fill them out, and return them in time for them to be counted. Effectively, they say, this leaves America’s troops in the field – many of whom are presumed to vote Republican – disenfranchised.
“True the Vote, an election integrity advocacy organization, was sending out information of public interest regarding deadlines for our military voters, pursuant to the ‘Military and Overseas Voter Empowerment’ Act, federal law, which requires states to send absentee ballots to UOCAVA voters at least 45 days before federal elections,” Engelbrecht said, adding that information “in no way” violated Twitter’s terms of service.
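The 45-day requirement translates into a concrete mailing deadline that anyone can check. A minimal sketch, assuming the Nov. 3, 2020 general election date:

```python
from datetime import date, timedelta

# The MOVE Act requires states to transmit absentee ballots to
# UOCAVA voters at least 45 days before a federal election.
election_day = date(2020, 11, 3)  # 2020 general election
mailing_deadline = election_day - timedelta(days=45)

print(mailing_deadline)  # 2020-09-19
```

By that arithmetic, counties needed ballots in the mail by mid-September, which is what the group’s Sept. 15 post urged voters to verify.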
The now-controversial tweet was “retweeted” by President Donald J. Trump two days after it was initially posted, an act Engelbrecht suggested in a statement might have provoked the ire of Trump opponents inside Twitter supervising what goes up on the platform while searching for electoral disinformation.
True the Vote is appealing the sanction and said it fully expects to have its access to the site restored in short order. Officials at Twitter could not be reached for comment.
Larry Dean is not as famous as he deserves to be but, as the man who developed the code that allows automatic teller machines to accept cards from other banks and outlets, he birthed a revolution in banking that forever changed the way people shop and get cash. His innovation allowed debit cards to function like credit cards, taking money directly from accounts and pushing the nation and world closer to a cashless economy.
Dean’s innovation made banking easier for millions. The application programming interfaces – the APIs – he developed were protected by copyright, meaning his intellectual labors produced great wealth. Outside Atlanta, he built Dean Gardens, a 33,000-square-foot, 15-bedroom home so extravagant the annual upkeep alone cost $1.5 million. Infamous for the iconic “Liberace Meets Napoleon” style later imposed by Dean’s son – who lived there until 1994 – it featured a Moroccan theater, 24-karat gold sinks, a gallery of Hawaiian art, 13 fireplaces, an 18-hole golf course, and a 14-seat dining room whose most prominent feature was a wall-sized aquarium known as the “Predator Tank.”
This monument to conspicuous consumption, which might have given even pre-presidential Donald Trump pause, was bulldozed into rubble ten years ago. What endures is his code, though a legal push by Google seeking to eliminate the copyright protections coders enjoy for the APIs they develop might make innovators like Dean a thing of the past.
Whether that happens is in the hands of the United States Supreme Court which, in a matter of weeks, will finally hear oral arguments in the matter of Google v. Oracle, a landmark case that will decide the course of intellectual property development going forward. If a majority of the justices side with Google, then future innovations like what Dean wrought will likely be few and far between.
The case stretches back over a decade. At one time, hard as it may be to believe, Google was losing out to Bing in the critical mobile search engine market while the Apple iPhone was beating its brains out in the competition among smartphones.
Seeking to improve its competitive position, Google took 11,500 lines from Java’s API code and used them to construct the Android mobile operating platform, installing its search engine as the default option. As Android grew more popular, so did the Google search engine, creating a boom for the company without, the suit alleges, paying licensing fees for the use of Java to its owner Oracle.
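The distinction at the heart of the case is between an API’s declarations, which name a function and its parameters, and the implementing code behind them. A rough, hypothetical sketch in Python (the case itself concerns Java declarations):

```python
# Illustrative analogy for the API question in Google v. Oracle:
# the first line is the "declaration" (name plus signature); the
# body is the "implementation" that a rival could rewrite itself.

def maximum(a: int, b: int) -> int:   # declaration: name + parameters
    """Return the larger of two values."""
    return a if a >= b else b          # implementation: the actual logic

print(maximum(3, 7))  # 7
```

Copying the declarations lets existing programmers and programs keep calling familiar names, which is precisely why the copied lines were commercially valuable.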
Google does not dispute it took the code. What its briefs do argue is that these types of software APIs may not be copyrightable and, even if they are, that Oracle cannot force it to pay for using them because what it did is covered under the fair use doctrine – a copyright law exception often invoked when news stories are reposted and circulated for comment but seldom in commercial situations.
As even those who are not lawyers may recognize, Google’s interpretation of the copyrightability of software and the fair use doctrine as applied in this situation cannot be sustained by historical and legal precedent or by common sense, not that it bothers the biggest of big tech very much.
Google’s lawyers have already admitted the company “doesn’t care much about precedent or law” when it comes to copyright. When the company didn’t like the licensing terms offered to it by Sun Microsystems (then the owner of Java), Andy Rubin, the father of Android, bluntly wrote in an e-mail the company would simply “do Java anyway and defend our decision, perhaps making enemies along the way.”
Being big doesn’t allow you to ignore the law. Google’s lack of concern for intellectual property doesn’t come as a surprise – some have argued its business model depends on using the IP of others without paying for it. And the court would do well to note that others have made similar complaints in the past, including the Association of American Publishers, which settled a case alleging Google had posted books online without the permission of the authors; a lawsuit by PayPal arguing an ex-employee turned over trade secret IP used to construct Google Wallet; and a suit settled with Viacom over videos posted without permission to YouTube.
These issues persist, in part because of the lack of clarity in the law protecting intellectual property and because the white shoe lawyers employed by big tech make fortunes of their own finding, exploiting, even creating loopholes that end up exploiting consumers and inventors alike. The Supreme Court is being asked to slam the door on this kind of exploitation and should.
No one likes government interference in the marketplace or the court making law from the bench, but that is not what a decision favorable to Oracle would do. A decision favorable to Google would set a precedent adversely affecting software development and every other industry that relies on innovation and creativity to maintain and enhance its market position. For the sake of private property and our nation’s founding principles, the court must come down firmly on the side of protecting intellectual property rather than affirm the idea that loopholes exist allowing big tech to take the innovations of others for their use without compensation or consent. That’s not the American way.
By The Hill
Video app TikTok, which has come under intense scrutiny from the U.S. government, sidestepped Google policy and collected user-specific data from Android phones that allowed the company to track users without allowing them to opt out, according to an analysis conducted by The Wall Street Journal.
The report released Tuesday comes on the heels of President Trump signing an executive order that targets Beijing-based ByteDance, the parent company of TikTok. The order essentially gives the Chinese tech company 45 days to divest from the app or see it banned in the U.S.
“The spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China continues to threaten the national security, foreign policy, and economy of the United States,” the executive order states. “At this time, action must be taken to address the threat posed by one mobile application in particular, TikTok.”
The White House has grown increasingly wary of TikTok, with the administration claiming that TikTok is selling American user data to the Chinese government. TikTok has repeatedly said that it has not and would never do so.
The data that was taken from the Android phones is a 12-digit code called a “media access control” (MAC) address, according to the Journal. Each MAC address is unique and standard in all internet-ready electronic devices. MAC addresses are useful for apps trying to drive targeted ads because they can’t be changed or reset, allowing tech companies to create consumer profiles based on the content that users view.
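A hedged illustration, using hypothetical data, of why a permanent identifier like a MAC address is so valuable for profile-building: unlike an advertising ID, it survives app reinstalls and resets, so it can anchor a profile indefinitely.

```python
# Sketch of MAC-keyed profiling (hypothetical data). Because the
# device's MAC address never changes, it serves as a stable key
# that links viewing activity across reinstalls and ID resets.
profiles = {}

def record_view(mac_address, content):
    # Accumulate viewed content under the device's permanent MAC.
    profiles.setdefault(mac_address, []).append(content)

record_view("00:1A:2B:3C:4D:5E", "cooking videos")
record_view("00:1A:2B:3C:4D:5E", "sneaker ads")

print(profiles["00:1A:2B:3C:4D:5E"])  # ['cooking videos', 'sneaker ads']
```

This is exactly the kind of long-term tracking, with no opt-out, that the AppCensus researchers quoted below object to.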
Under the Children’s Online Privacy Protection Act, MAC addresses are considered by the Federal Trade Commission to be personally identifiable information.
A 2018 study from AppCensus, a mobile-app firm that analyzes companies’ privacy practices, showed that roughly 1 percent of Android apps collect MAC addresses.
“It’s a way of enabling long-term tracking of users without any ability to opt-out,” Joel Reardon, co-founder of AppCensus, told the Journal. “I don’t see another reason to collect it.”
Back in 2013, Apple safeguarded its phones’ MAC addresses and Google did the same with Android phones in 2015. However, TikTok got around this by accessing a backdoor that allows apps to get a phone’s MAC address in a roundabout way, the Journal’s analysis reveals.
The Journal says that TikTok utilized MAC addresses for 15 months, ending with an update in November 2019.
“We are committed to protecting the privacy and safety of the TikTok community,” a TikTok spokesperson told The Hill in a statement, citing the “decades of experience” of company chief information security officer Roland Cloutier.
The spokesperson added: “We constantly update our app to keep up with evolving security challenges, and the current version of TikTok does not collect MAC addresses. We have never given any US user data to the Chinese government nor would we do so if asked.”
Google told the Journal that it was “committed to protecting the privacy and safety of the TikTok community. Like our peers, we constantly update our app to keep up with evolving security challenges.”
Microsoft, which has said that it is actively working to purchase the wildly popular app, declined the Journal’s request for comment.
The dynamic nature of our tech sector fosters a constant flow of new startups entering markets. The speed at which companies can collaborate and innovate can significantly influence which may be the next Apple or Google and which will fail in their first year. These innovations, often the result of tireless investment in R&D, are frequently safeguarded through our system of intellectual property – through protections like patents and trade secrets.
However, abuse exists in nearly every system, and a 2018 Texas trade secrets decision shows both the record-setting spoils available to potential abusers and how a so-called expert witness can derail a jury. This case, if left unchecked, is a stark warning of just how high the cost of collaboration can be.
As I’ve previously written, since the enactment of the Defend Trade Secrets Act (DTSA) in May 2016, the United States has experienced a rapid spike in trade secret lawsuit filings – with the number of civil trade secrets cases filed in federal and state courts increasing by 30%.
For example, a suit involving autonomous-driving technology trade secrets between Uber and Waymo resulted in a $245 million settlement in San Francisco in early 2018. In another California-based lawsuit, a jury awarded the U.S. branch of a Dutch semiconductor maker, ASML, $223 million in its suit against a local rival, XTAL, for misappropriating trade secrets.
But the $740 million award handed down in Bexar County, Texas’ Title Source v. HouseCanary takes the cake for 2018’s most costly verdict in a trade secrets case. This record-setting award is especially concerning because just days following the decision it emerged that the victor, HouseCanary, may never have possessed any of the trade secrets at issue in the first place.
Title Source, now known as Amrock, sued Silicon Valley-based HouseCanary for breach of contract when the company failed to develop an automated valuation model (AVM) mobile application. In the meantime, Amrock developed its own AVM based on common industry practices and publicly available information. HouseCanary, in turn, accused Amrock of trade secret misappropriation.
The case is currently under appeal with the Texas Fourth Court of Appeals; the appeal rests on new evidence that emerged only after the award was handed down. Whistleblower testimony from four former HouseCanary employees confirms that the app they were hired to develop was not a “functioning product,” was “vapor ware,” and had “none of the [promised] capabilities.”
Notably, the high-dollar remedy was awarded in reliance on suspect “expert” testimony from HouseCanary’s witness, Walter Bratic. As it turns out, Bratic knows his way to the witness stand, having built a career as an expert witness who doesn’t let logic or facts inhibit his “expertise,” so long as the check clears. Several of his lofty, uncorroborated damages estimates have ultimately been reversed on appeal, with his logic-defying damages figures cited prominently among the reasons the lower court erred in its initial findings.
In what would have been one of the largest patent awards to date, a U.S. District Judge in East Texas in 2011 overturned a $625 million jury verdict in Mirror Worlds LLC v. Apple Inc. The judge pointed out “the scope of Mirror Worlds’ case and Apple’s potential liability exposure changed during the course of trial” and “Mr. Bratic did not adjust his damages calculations after dismissal of Mirror Worlds’ indirect infringement claims,” – which would have reduced damages by approximately 50% to about $300 million.
In his opinion overturning the verdict, the judge wrote that the record “lacks substantial evidence to support the jury’s award of damages,” taking particular issue with Bratic’s dubious valuation suggesting Apple should pay whopping royalties.
However, such damages are available only if it is proven that the infringed patent was so central to the entire product that it can be considered a key driver of customer demand. On whether that standard was met, the judge stated, “The record lacks substantial evidence to support the jury’s award of damages. The Court grants Apple’s request for Judgment as a Matter of Law to vacate the jury’s damages award.”
Bratic’s handiwork doesn’t end there. In 2013, his testimony in Xpertuniverse v. Cisco was deemed baseless and thrown out after he employed a bizarre “hypothetical negotiation” in which he contended both parties would have agreed to a $32.5 million lump sum royalty.
In the 2015 case IVS v. Microsoft, in which IVS accused Microsoft’s Xbox and Kinect of infringing on a facial recognition patent, the court ruled that Bratic erred in opining that royalty damages should be a running royalty of 3x the court-ordered royalty rate based on a prior case involving handheld controllers.
Likewise, in his damages valuation testimony for HouseCanary, Bratic used an outrageous price of $11 per use for the AVM, which emails between Amrock employees from February 2015 make clear would never have been the agreed-upon rate.
This case is the epitome of trade secrets litigation abuse – potentially an ominous high-dollar indication of an even costlier problem with broader impact on American innovation and competitiveness in a global technology sector. If the decision stands, the established precedent will further open the floodgates for abuse of IP protections – offering an attractive option for companies looking for a way around fair market competition and innovation.
What began as a $5 million contract has morphed into three-quarters of a billion dollars and a legal spectacle – in large part due to the faulty reasoning and voodoo math of an expert witness with a history of overvaluing for his clients and being overturned by the courts. Legal scholars and the entire tech sector are closely watching to see how this case plays out for the future of American innovation.
When Facebook started out, most Americans thought they were getting a free service to help them connect with family and friends and that Facebook would be funded by the advertisements on their computer screens. Almost no one understood that their private information was being used to create detailed personal profiles that tracked virtually everything — where they live, who their friends are, what they like and dislike, where they shop, what products they buy, what news or events interest them, and what their political views are. Monetizing each of its users is how Facebook became a billion dollar business. But very few understood that they were, in fact, the product being sold and monetized when they signed up.
We are about to see this same phenomenon on replay when it comes to new high-tech home security systems. But this time it will be on steroids — because firms like Amazon will have access to a lot more than just the things we choose to post online. Products like Ring are able to store this information, and it can be accessed months or years later.
They will have microphones and cameras in and around our homes. They could conceivably have access to the most intensely private and personal information and even have video and photos and sound files with our voices from inside and around our homes. How will this valuable private data be used?
The question is — accessed by whom and for what purposes? If past experience is any indicator, your private information will be available to whomever is willing to pay for it, and for whatever purpose generates income. But you’re not being told that when you buy these new products.
In the past, home security systems used high-tech solutions to monitor doors, windows, glass breakage, and smoke in order to notify you and/or call 911 when there was a break-in or a fire. But they were not collecting your private information. They were not recording your conversations. They were not recording video inside your home or even who might be coming and going from your home. But all that is changing. The new frontier in home security appears to be the Facebook model — make the client the product that the company is actually selling, but don’t make that clear up front.
Many firms have used high tech automation to lower monitoring costs, and some offer lower prices because they will make it back the same way Facebook did. If you thought Facebook was gathering information about you and your family, wait until you see what they and others can do with your private conversations in the most intimate settings at the front door and within your home.
With devices in our homes that listen to our voice so that they can turn on or off lights or adjust temperatures or turn on the television, or a hundred other things, we now know that employees who listen to the devices have held parties where they all share the most embarrassing or strange events that they’ve overheard. Simply stated, employees have saved and replayed private conversations that were recorded in our homes and used them for their personal amusement. I’m pretty confident that wasn’t in the “User Agreement.” So we have to understand the potential for abuse of our private information is real and, in fact, likely.
If consumers want security services that record voice and video in and around their home, they have the right to choose that. But to be a real choice, there must be a full and complete disclosure in plain English and there must be real legal accountability for violations of the agreement.
We cannot make an informed decision when the marketing of these devices suggests that they are simply a lower cost, higher tech home security solution. That’s deceptive and it is designed to mislead consumers and lull them into a false sense that their privacy isn’t at risk.
We have the right to know what private information, voice recordings, photos and video are being recorded and stored. How will that information be used? Will it be sold? Will it be used at employee parties to get a laugh? Who has access to your private and intimate data? If you talk about something in the privacy of your bedroom, will you begin receiving push advertisements on that exact topic?
Policymakers should create clear standards that allow consumers to make informed choices. Consumers have every right to invite companies and their employees into their private lives. But it shouldn’t come as a surprise to them what the real deal is. Disclosure allows Americans to decide if they want a home security system or if they want to invite a large corporation into their home to surveil them so that they can expand their profits.
FILE – The Twitter app icon on a mobile phone in Philadelphia, April 26, 2017. (AP Photo/Matt Rourke, File)
On Thursday, President Trump issued an executive order calling for new regulations under Section 230 of the 1996 Communications Decency Act that, he says, will prevent Big-Tech platforms from continuing what many believe is a pattern of discrimination against conservatives.
We’re not sure that’s the case — just as we’re not sure how much of it, if any, will survive the inevitable challenges it will face in the courts. What we do know is that his effort to change the interpretation of Section 230 of the 1996 Communications Decency Act, just like his call for reform of libel laws during the 2016 campaign, should spark a national conversation about free speech that would be healthy for our republic.
Instead, the whole thing will grind down into pitched rhetoric passing back and forth between the president’s supporters and those who believe he is single-handedly responsible for the destruction of the nation, especially its core values and its reputation for having a civilized political process.
It seems clear that Twitter’s Jack Dorsey, by allowing the presidential tweets to be footnoted, is acting like an editor, commenting on posts and making decisions about what other people can see. On its face, this would seem to put his platform outside the safe harbor Section 230 establishes to protect tech companies from being held liable in civil suits for things posted by platform users.
“In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to handpick the speech that Americans may access and convey on the internet,” the order says. “This practice is fundamentally un-American and anti-democratic. When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.”
That ought to be a nifty jumping-off point for a robust discussion of speech and how the protections provided by the First Amendment factor in — or don’t — to the part of the national conversation carried on in cyberspace. Legal scholars can point to numerous decisions upholding the idea that the government cannot infringe on speech, defined broadly to include campaign contributions, flag burning, and pornography, as well as the written and spoken word when it occurs in the public square. That’s clear, and it has shaped a culture whose values generally extend into private space.
But what if the “public square,” however one defines it, now exists predominantly in a place that is privately owned? It’s worth discussing whether information carriers and conveyors like Twitter, YouTube, Facebook and Google have a responsibility to keep the space they own and operate open to all points of view, including the ones with which they disagree as well as the ones they may find abhorrent.
A strict reading of the U.S. Constitution would say as a matter of law, they don’t. But what about, to borrow a phrase so popular these days with those who would regulate just about every other aspect of the U.S. economy, their corporate social responsibility?
Further, the potential removal of Section 230 protections from any platform — which, as a matter of full disclosure, we also enjoy concerning the comments posted by readers of this or anything else we publish, but not for the things we publish online or in print — is an opportunity for a vigorous discussion of the costs imposed on speech by the threat someone might get sued.
On the one hand, as we’ve seen an awful lot in the Trump era, people on both sides of the aisle have told outrageous lies and fabrications, made egregious exaggerations, and sullied the reputations of political leaders in both parties, journalists and entrepreneurs.
This has added an unpleasantly coarse overtone to the national debate yet, because of the way charges of libel, slander and defamation are viewed by the courts under existing case law, the victims of these slurs are often left without recourse and unable to recoup damages. Tort reform is long overdue, we have long held, but some fresh eyes on this issue might help restore some sanity to a news business, forgive our obvious bias, driven by breaking television segments rather than the more thoughtful approach often taken by print media.
What the president has ordered is likely more a tempest in a teapot than a challenge to the constitutional order. But it raises issues worth talking about, intensely and for a long time in search of a new consensus concerning the role Big Tech plays in conveying information to the American people. Facebook’s Mark Zuckerberg has it right when he says these platforms shouldn’t be “arbiters of truth.” That doesn’t mean we shouldn’t have a conversation about what they should be.
The video doorbell is one of the most ubiquitous smart home devices. In 2018 alone, consumers spent more than $530 million on a total of 3.4 million units, putting electronic eyes on the doorsteps of homes across the country.
In the interest of disclosure, I must admit that I own several smart home products, including a video doorbell, and am relatively happy with its performance and functionality. But like many consumers I am concerned about a rash of recent reports highlighting previously undisclosed privacy concerns associated with these devices.
It was recently reported that Ring has entered into surveillance partnerships with over 400 law enforcement agencies across the country. Participating jurisdictions are provided access to a “Law Enforcement Neighborhood Portal” that allows them to directly request video without a warrant, and then store it indefinitely. That raises serious questions about civil rights and liberties and understandably has elicited significant community opposition.
Andrew Ferguson, a law professor at the University of the District of Columbia, succinctly sums up the privacy dynamics at play with these partnerships:
“The pushback they [Amazon] are getting comes from a failure to recognize that there is a fundamental difference between empowering the consumer with information and empowering the government with information. The former enhances an individual’s freedom and choice, the latter limits an individual’s freedom and choice.”
When law enforcement agencies are the customers, he concludes, a company has an obligation to slow down.
To be clear, I have no problem with civic-minded citizens volunteering resources to help solve crimes and Ring doorbells may help modernize crime fighting. But these partnerships, as they currently exist, threaten to create a new level of government surveillance at the front door if oversight cannot keep up.
The use of facial recognition, a feature increasingly found in today’s smart cameras and one that often goes hand in hand with many of these law enforcement efforts, has also been receiving increased scrutiny.
Already used by a host of Google Nest products, including the Hello Doorbell, the recently released Google Nest Hub Max has for the first time brought full facial recognition into the home. These cameras flag known users through a “familiar faces” feature, which according to a Nest spokesperson is “not shared across users or used in other homes.” That’s for now, at least. When asked about future uses, Nest did not provide additional comment. As Google looks to expand its facial recognition offerings to private sector and government clients, more clarification needs to be provided about potential future applications.
While Ring cameras do not currently employ facial recognition technology, their parent company Amazon has filed a patent to put its proprietary video scanning software, Rekognition, into its doorbells. Ostensibly it would be used to identify “suspicious people” and alert users when these individuals are caught on camera. While a spokesperson for the company has said “the application was designed only to explore future possibilities,” Rekognition’s other applications indicate this development warrants further examination.
The software has been used by law enforcement to match suspects caught on surveillance footage against mugshot databases. More recently, Rekognition has been marketed to law enforcement as a way to identify people captured on video in real time. If this technology is connected to video doorbells in the future, this could raise some serious privacy concerns.
Will Ring use this as a way to create a visitor log, in real time, of all guests who visit your house? If a “suspicious person” on your doorstep is “face matched” will law enforcement be alerted or add you to some sort of a watch list? Given the software’s propensity for generating false positives, this latter point is especially concerning and must be addressed.
Privacy concerns also extend beyond civil liberties. Some of these products have been released without simple safeguards such as two-factor password authentication and end-to-end encryption of videos, leaving sensitive information vulnerable to cyber attacks, stalkers, and foreign governments. Other times it has resulted in software bugs that could be exploited to spy on users.
One company even has a team of workers that are watching hundreds of clips per day, some of which capture very intimate moments, to train artificial intelligence algorithms. The fact of the matter is the move fast and break things mentality of the tech world doesn’t work when such sensitive information is at stake.
In order to regain consumer trust these companies must move more deliberately, provide greater transparency over how data will be used, and offer greater user control over their products.
Google, for one, has issued a set of plain-English privacy commitments that tell users what kind of data is collected and how it is used. Amazon, in recognition of the ongoing privacy backlash against its cameras, will roll out a “Home Mode” function this fall that will allow owners to turn off audio and video recording while they are home. Both companies, as well as a host of other tech companies, have asked for clearer government regulation of facial recognition technology. These are steps in the right direction, but there is still a long way to go.
There does not need to be a false choice when it comes to the utility and privacy of smart home devices. Consumers are demanding more transparency over what data is collected and control over how their data is used. Policymakers at all levels should get involved and provide proper oversight. At the end of the day these products can be valuable tools, but it is incumbent on us to set the rules now that will prevent Big Doorbell in the future.