The dynamic nature of our tech sector fosters a constant flow of new startups entering the market. The speed at which companies can collaborate and innovate can significantly influence which may be the next Apple or Google and which will fail in their first year. These innovations, often the result of tireless investment in R&D, are frequently safeguarded through our system of intellectual property – protections like patents and trade secrets.
However, abuse exists in nearly every system, and a 2018 Texas trade secrets decision shows both the record-setting spoils available to potential abusers and how a so-called expert witness can derail a jury. This case, if left unchecked, is a stark warning of just how high the cost of collaboration can be.
As I’ve previously written, since the enactment of the Defend Trade Secrets Act (DTSA) in May 2016, the United States has experienced a rapid spike in trade secret lawsuit filings – with the number of civil trade secrets cases filed in federal and state courts increasing by 30%.
For example, a suit involving autonomous-driving technology trade secrets between Uber and Waymo was settled in San Francisco for $245 million in early 2018. In another California-based lawsuit, a jury awarded the U.S. branch of the Dutch semiconductor maker ASML $223 million in its suit against a local rival, XTAL, for misappropriating trade secrets.
But the $740 million award handed down in Bexar County, Texas’ Title Source v. HouseCanary takes the cake for 2018’s most costly verdict in a trade secrets case. This record-setting award is especially concerning because just days following the decision it emerged that the victor, HouseCanary, may never have possessed any of the trade secrets at issue in the first place.
Title Source, now known as Amrock, sued Silicon Valley-based HouseCanary for breach of contract when the company failed to develop an automated valuation model (AVM) mobile application. In the meantime, Amrock developed its own AVM based on common industry practices and publicly available information. HouseCanary, in turn, accused Amrock of trade secret misappropriation.
The case is currently on appeal before the Texas Fourth Court of Appeals, an appeal that rests on new evidence that emerged only after the award was handed down. Whistleblower testimony from four former HouseCanary employees confirms that the app they were hired to develop was not a “functioning product,” was “vaporware,” and had “none of the [promised] capabilities.”
Notably, the high-dollar remedy was awarded in reliance on suspect “expert” testimony from HouseCanary’s witness, Walter Bratic. As it turns out, Bratic knows his way to the witness stand, having built a career as an expert witness who doesn’t let logic or facts inhibit his “expertise,” so long as the check clears. Several of his lofty, uncorroborated damages estimates have ultimately been reversed on appeal, with his logic-defying damages figures cited prominently among the reasons the lower court had erred in its initial findings.
In what would have been one of the largest patent awards to date, a U.S. District Judge in East Texas in 2011 overturned a $625 million jury verdict in Mirror Worlds LLC v. Apple Inc. The judge pointed out that “the scope of Mirror Worlds’ case and Apple’s potential liability exposure changed during the course of trial” and that “Mr. Bratic did not adjust his damages calculations after dismissal of Mirror Worlds’ indirect infringement claims” – an adjustment that would have reduced damages by approximately 50%, to about $300 million.
In his opinion overturning the verdict, the judge wrote that the record “lacks substantial evidence to support the jury’s award of damages,” taking particular issue with Bratic’s dubious valuation suggesting Apple should pay whopping royalties.
However, such damages are available only if it is proven that the infringed patent was so central to the entire product that it can be considered the key driver of customer demand. On whether that standard was met, the judge stated, “The record lacks substantial evidence to support the jury’s award of damages. The Court grants Apple’s request for Judgment as a Matter of Law to vacate the jury’s damages award.”
Bratic’s handiwork doesn’t end there. In 2013, his testimony in XpertUniverse v. Cisco was deemed baseless and thrown out after he employed a bizarre “hypothetical negotiation” in which he contended both parties would have agreed to a $32.5 million lump-sum royalty.
In the 2015 case IVS v. Microsoft, in which IVS accused Microsoft’s Xbox and Kinect of infringing on a facial recognition patent, the court ruled that Bratic erred in opining that royalty damages should be a running royalty of 3x the court-ordered royalty rate based on a prior case involving handheld controllers.
Likewise, in his damages valuation testimony for HouseCanary, Bratic used an outrageous price of $11 per use for the AVM – a rate that emails between Amrock employees from February 2015 make clear would never have been agreed upon.
This case is the epitome of trade secrets litigation abuse – potentially an ominous, high-dollar indication of an even costlier problem with broader impact on American innovation and competitiveness in a global technology sector. If the decision stands, the precedent it establishes will further open the floodgates for abuse of IP protections – offering an attractive option for companies looking for a way around fair market competition.
What began as a $5 million contract has morphed into three-quarters of a billion dollars and a legal spectacle – in large part due to the faulty reasoning and voodoo math of an expert witness with a history of overvaluing for his clients and of being overturned by the courts. Legal scholars and the entire tech sector are closely watching to see how this case plays out for the future of American innovation.
When Facebook started out, most Americans thought they were getting a free service to help them connect with family and friends, and that Facebook would be funded by the advertisements on their computer screens. Almost no one understood that their private information was being used to create detailed personal profiles that tracked virtually everything — where they live, who their friends are, what they like and dislike, where they shop, what products they buy, what news or events interest them, and what their political views are. Monetizing each of its users is how Facebook became a billion-dollar business. But very few understood that they were, in fact, the product being sold and monetized when they signed up.
We are about to see the same phenomenon on replay when it comes to new high-tech home security systems. But this time it will be on steroids — because firms like Amazon will have access to a lot more than just the things we choose to post online.
These systems will have microphones and cameras in and around our homes. They could conceivably have access to the most intensely private and personal information — even video, photos, and sound files of our voices from inside and around our homes. How will this valuable private data be used?
These security firms store this information and it can be accessed months or years later. The question is — accessed by whom and for what purposes? If past experience is any indicator, your private information will be available to whomever is willing to pay for it, and for whatever purpose generates income. But you’re not being told that when you buy these new products.
In the past, home security systems used high-tech solutions to monitor doors, windows, glass breakage, and smoke, and to notify you and/or call 911 when there was a break-in or a fire. But they were not collecting your private information. They were not recording your conversations. They were not recording video inside your home, or even who might be coming and going from it. But all that is changing. The new frontier in home security appears to be the Facebook model — make the client the product that the company is actually selling, but don’t make that clear up front.
Many firms have used high-tech automation to lower monitoring costs, and some offer lower prices because they will make it back the same way Facebook did. If you thought Facebook was gathering information about you and your family, wait until you see what these firms and others can do with your private conversations in the most intimate settings — at the front door and within your home.
With devices in our homes that listen to our voices so that they can turn lights on or off, adjust temperatures, turn on the television, or do a hundred other things, we now know that employees who listen to the devices have held parties where they share the most embarrassing or strange events they’ve overheard. Simply stated, employees have saved and replayed private conversations recorded in our homes and used them for their personal amusement. I’m pretty confident that wasn’t in the “User Agreement.” So we have to understand that the potential for abuse of our private information is real and, in fact, likely.
If consumers want security services that record voice and video in and around their home, they have the right to choose that. But to be a real choice, there must be a full and complete disclosure in plain English and there must be real legal accountability for violations of the agreement.
We cannot make an informed decision when the marketing of these devices suggests that they are simply a lower cost, higher tech home security solution. That’s deceptive and it is designed to mislead consumers and lull them into a false sense that their privacy isn’t at risk.
We have the right to know what private information, voice recordings, photos and video are being recorded and stored. How will that information be used? Will it be sold? Will it be used at employee parties to get a laugh? Who has access to your private and intimate data? If you talk about something in the privacy of your bedroom, will you begin receiving push advertisements on that exact topic?
Policymakers should create clear standards that allow consumers to make informed choices. Consumers have every right to invite companies and their employees into their private lives. But it shouldn’t come as a surprise to them what the real deal is. Disclosure allows Americans to decide if they want a home security system or if they want to invite a large corporation into their home to surveil them so that they can expand their profits.
Photo: The Twitter app icon on a mobile phone in Philadelphia, April 26, 2017. (AP Photo/Matt Rourke, File)
On Thursday, President Trump issued an executive order calling for new regulations under Section 230 of the 1996 Communications Decency Act that, he says, will prevent Big Tech platforms from continuing what many believe is a pattern of discrimination against conservatives.
We’re not sure that’s the case — just as we’re not sure how much of the order, if any, will survive the inevitable challenges it will face in the courts. What we do know is that his effort to change the interpretation of Section 230, just like his call for reform of libel laws during the 2016 campaign, should spark a national conversation about free speech that would be healthy for our republic.
Instead, the whole thing will likely be ground down in pitched rhetoric passing back and forth between the president’s supporters and those who believe he is single-handedly responsible for the destruction of the nation, especially its core values and its reputation for a civilized political process.
It seems clear that Twitter’s Jack Dorsey, by allowing the presidential tweets to be footnoted, is acting like an editor — commenting on posts and making decisions about what other people can see. On its face, this would seem to put his platform outside the safe harbor Section 230 establishes to protect tech companies from being held liable in civil suits for things posted by platform users.
“In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to handpick the speech that Americans may access and convey on the internet,” the order says. “This practice is fundamentally un-American and anti-democratic. When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.”
That ought to be a nifty jumping-off point for a robust discussion of speech and how the protections provided by the First Amendment factor in — or don’t — to the part of the national conversation carried on in cyberspace. Legal scholars can point to numerous decisions upholding the idea that the government cannot infringe on speech, defined broadly to include campaign contributions, flag burning, and pornography, as well as the written and spoken word when it occurs in the public square. That’s clear, and it has shaped a culture whose values generally extend into private space.
But what if the “public square,” however one defines it, now exists predominantly in places that are privately owned? It’s worth discussing whether information carriers and conveyors like Twitter, YouTube, Facebook and Google have a responsibility to keep the space they own and operate open to all points of view, including the ones with which they disagree as well as the ones they may find abhorrent.
A strict reading of the U.S. Constitution would say as a matter of law, they don’t. But what about, to borrow a phrase so popular these days with those who would regulate just about every other aspect of the U.S. economy, their corporate social responsibility?
Further, the potential removal of Section 230 protections from any platform — which, as a matter of full disclosure, we also enjoy for the comments posted by readers of this or anything else we publish, though not for what we ourselves publish online or in print — is an opportunity for a vigorous discussion of the costs imposed on speech by the threat that someone might get sued.
On the one hand, as we’ve seen an awful lot in the Trump era, people on both sides of the aisle have told outrageous lies and fabrications, made egregious exaggerations, and sullied the reputations of political leaders in both parties, journalists and entrepreneurs.
This has added an unpleasantly coarse overtone to the national debate. Yet, because of the way charges of libel, slander and defamation are viewed by the courts under existing case law, the victims of these slurs are often left without recourse and unable to recoup damages. Tort reform is long overdue, we have long held, but some fresh eyes on this issue might help restore some sanity to a news business (forgive our obvious bias) driven by breaking television segments rather than the more thoughtful approach often taken by print media.
What the president has ordered is likely more a tempest in a teapot than a challenge to the constitutional order. But it raises issues worth talking about, intensely and for a long time in search of a new consensus concerning the role Big Tech plays in conveying information to the American people. Facebook’s Mark Zuckerberg has it right when he says these platforms shouldn’t be “arbiters of truth.” That doesn’t mean we shouldn’t have a conversation about what they should be.
The video doorbell is one of the most ubiquitous smart home devices. In 2018 alone, consumers spent more than $530 million on a total of 3.4 million units, putting electronic eyes on the doorsteps of homes across the country.
In the interest of disclosure, I must admit that I own several smart home products, including a video doorbell, and am relatively happy with its performance and functionality. But like many consumers I am concerned about a rash of recent reports highlighting previously undisclosed privacy concerns associated with these devices.
It was recently reported that Ring has entered into surveillance partnerships with over 400 law enforcement agencies across the country. Participating jurisdictions are provided access to a “Law Enforcement Neighborhood Portal” that allows them to directly request video without a warrant, and then store it indefinitely. That raises serious questions about civil rights and liberties and understandably has elicited significant community opposition.
Andrew Ferguson, a law professor at the University of the District of Columbia, succinctly sums up the privacy dynamics at play in these partnerships:
“The pushback they [Amazon] are getting comes from a failure to recognize that there is a fundamental difference between empowering the consumer with information and empowering the government with information. The former enhances an individual’s freedom and choice, the latter limits an individual’s freedom and choice.”
When law enforcement agencies are the customers, he concludes, a company has an obligation to slow down.
To be clear, I have no problem with civic-minded citizens volunteering resources to help solve crimes and Ring doorbells may help modernize crime fighting. But these partnerships, as they currently exist, threaten to create a new level of government surveillance at the front door if oversight cannot keep up.
The use of facial recognition — a feature increasingly found in today’s smart cameras that often goes hand in hand with many of these law enforcement efforts — has also been receiving increased scrutiny.
Already used by a host of Google Nest products, including the Nest Hello doorbell, the recently released Google Nest Hub Max has for the first time brought full facial recognition into the home. These cameras flag known users through a “familiar faces” feature, which, according to a Nest spokesperson, is “not shared across users or used in other homes.” That’s for now, at least. When asked about future uses, Nest did not provide additional comment. As Google looks to expand its facial recognition offerings to private sector and government clients, more clarification needs to be provided about potential future applications.
While Ring cameras do not currently employ facial recognition technology, their parent company Amazon has filed a patent to put its proprietary video scanning software, Rekognition, into its doorbells. Ostensibly it would be used to identify “suspicious people” and alert users when these individuals are caught on camera. While a spokesperson for the company has said “the application was designed only to explore future possibilities,” Rekognition’s other applications indicate this development warrants further examination.
The software has been used by law enforcement to match suspects caught on surveillance footage against mugshot databases. More recently, Rekognition has been marketed to law enforcement as a way to identify people captured on video in real time. If this technology is connected to video doorbells in the future, this could raise some serious privacy concerns.
Will Ring use this as a way to create a visitor log, in real time, of all guests who visit your house? If a “suspicious person” on your doorstep is “face matched” will law enforcement be alerted or add you to some sort of a watch list? Given the software’s propensity for generating false positives, this latter point is especially concerning and must be addressed.
Privacy concerns also extend beyond civil liberties. Some of these products have been released without simple safeguards such as two-factor authentication and end-to-end encryption of videos, leaving sensitive information vulnerable to cyber attacks, stalkers, and foreign governments. In other cases, software bugs could be exploited to spy on users.
One company even has a team of workers watching hundreds of clips per day — some of which capture very intimate moments — to train artificial intelligence algorithms. The fact of the matter is that the “move fast and break things” mentality of the tech world doesn’t work when such sensitive information is at stake.
In order to regain consumer trust, these companies must move more deliberately, provide greater transparency over how data will be used, and offer users greater control over their products.
Google, for one, has issued a set of plain-English privacy commitments that tell users what kind of data is collected and how it is used. Amazon, in recognition of the ongoing privacy backlash against its cameras, will roll out a “Home Mode” function this fall that will allow owners to turn off audio and video recording while they are home. Both companies, along with a host of other tech companies, have asked for clearer government regulation of facial recognition technology. These are steps in the right direction, but there is still a long way to go.
There does not need to be a false choice when it comes to the utility and privacy of smart home devices. Consumers are demanding more transparency over what data is collected and control over how their data is used. Policymakers at all levels should get involved and provide proper oversight. At the end of the day these products can be valuable tools, but it is incumbent on us to set the rules now that will prevent Big Doorbell in the future.