At least that’s what the FTC thinks. The FTC charged BJ’s Wholesale Club with failing to maintain adequate computer security—the first time it has used Section 5(a) (the section that makes unfair or deceptive acts or practices in or affecting commerce unlawful) in a case premised on inadequate data security. The FTC cited failures to encrypt consumer information, the storage of sensitive information for a needlessly long time in files protected only by common or default passwords, and lax measures for preventing unauthorized access and for detecting and investigating security breaches. The complaint alleged that, taken together, these failures meant BJ’s did not provide legally adequate security for sensitive consumer information. The Chairman of the FTC has called for Congress to enact legislation requiring notification to consumers when there is a significant risk of identity theft, and has asked Congress to consider extending the Gramm-Leach-Bliley Safeguards Rule, currently applicable to financial institutions, to non-financial institutions.
Intermix Media has reportedly agreed to pay $7.5 million to settle a lawsuit filed by the New York Attorney General, and if true, this represents the largest fine in a consumer online privacy action to date. In addition to hiring a Chief Privacy Officer, Intermix must stop distributing its adware/spyware and redirect programs, which the NYAG alleged were downloaded to consumers’ personal computers with inadequate notice and then hidden to make them difficult to remove. Beyond the annoyance consumers rail about, such hidden programs can be part of more elaborate identity theft and security breaches, sometimes without the knowledge of the company that created them. The lawsuit’s primary claims were false advertising and deceptive business practices under New York’s General Business Law.
Most of you have read about the security issues confronting LexisNexis and ChoicePoint, and each day we learn of more systems and databases that have been, or may have been, compromised. Here’s a secret: “Google hacking” is easier. The term describes the simple act of using publicly available search engines (no, not only Google) to find information that criminals and wrongdoers can use.
Several months ago, The Wall Street Journal reported that some security experts held a contest to demonstrate how effective Google hacking can be. Limited to using only Google’s search engine, contestants unearthed in less than one hour enough information to perpetrate financial fraud on about 25 million people—including useful combinations of names, birth dates, credit card numbers and Social Security numbers. In one such experiment, a team of contestants found a directory of more than 70 million Social Security numbers—all belonging to individuals who are no longer alive.
Most of you know “spyware” as pesky programs that install themselves on your computer—often tacked on to programs you intend to install—that do everything from tracking online browsing habits to stealing passwords and getting at sensitive data on your computer. But what about those programs that automatically download and patch your software or update your anti-virus definitions, or cookies that enable sites you visit to recognize you and customize your experience? Of course, you have also heard of “adware”—programs that trigger the delivery of online advertising (did I say pop-ups?) targeted to consumer preferences and activities.
Confused by the distinctions and attempts to sort out the definitions? There is clearly a legislative drive to prohibit programs from being installed on consumers’ computers without consent or knowledge, and at least three spyware bills are winding their way through the U.S. Congress. Although it is unlikely any bill will reconcile the differences among them and reach the President for signature this session, there is clearly impetus to “do something,” and interests on all sides are lining up to shape the contours of legislation so as not to do away with all those “good” programs!
Confused about the definitions, worried Congress might get it wrong, or just wondering who cares? Pay attention. Much of the utility and appeal of the Internet is interactivity. Browsers and websites interact. Navigational tools and features that make browsing more efficient, save time, and provide a more customized (and thus more useful) experience depend on programs working in the background that are helpful and desirable if properly used—“properly” being the operative issue. If worded too broadly, legislation could prohibit tools that make sense. Imagine every advertiser, website owner, merchant and search engine being required to go to every user with a new consent (“opt-in”) form! And how will legislation be enforced if the website owner is in another jurisdiction? Need to follow this issue? Want to know more? Want your voice heard? Call Rimon—we can help.
California has done it again! Home of the nation’s toughest anti-spam law and the first database security breach notification law, it is now the first state to require commercial website owners and online service providers to adopt and communicate privacy policies, ensure those policies satisfy certain minimum standards, and pay penalties if they fail to conform.
In April 1995, Datapro Reports on Information Security published a Disaster Avoidance brief (IS38-200-101) entitled “Avoiding a Legal Disaster: Business Continuity Planning for Multinationals.” In that paper, the author analogizes a famous 1932 “technology” case decided by the U.S. Court of Appeals for the Second Circuit to the growing potential liability of users in managing their technology and information security resources. Specifically, the article states that “In 1932, a famous case entitled The T.J. Hooper (60 F.2d 737; 2nd Circuit, 1932) held that the failure to take advantage of existing and available technology—even though it was not in widespread or common use—was not evidence that the defendant’s duty to take reasonable care had been fulfilled. By analogy, when a disaster occurs, it will not be a defense to argue that a recovery or security system or preventive measure is not commonly in use, especially if using it would have averted the disaster or minimized the loss.”
The article, which focuses on what organizations can do to minimize risk, goes on to note that, “The more reliant business and operations become on technology, the more available preventive and risk management tools become, the less excusable a failure to implement meaningful measures and exercise due diligence over company assets will become to government, employees, customers, suppliers, and shareholders—all potential plaintiffs.”
Now this article and its author would probably have been relegated to obscurity but for an interesting piece on I.T. litigation that has just appeared in the February 1, 2004 issue of CIO Magazine, entitled “Courts Make Users Liable for Security Glitches.” The author notes an interesting turning point in the wake of 9/11: in October 2001, Hartford Insurance removed computer damages from its general commercial liability policy coverage. The article goes on to cite three recent cases that are beginning to look a lot like a legal trend in this area. First, a case in which Verizon asked a court to order the State of Maine to refund money because Verizon wasn’t using Maine’s network while it was “down” due to the “Slammer” worm. Verizon had not implemented the Slammer patch, and last April the court ruled that while a worm attack may not be controllable, such attacks are foreseeable—no refund (Maine Public Utilities Commission v. Verizon).
In Cobell v. Norton, the U.S. Department of the Interior’s website and computer security became an issue in a case involving benefits allegedly owed to American Indians. The court was sufficiently irritated by the Department’s conduct related to security audits that the judge actually commenced contempt proceedings! Finally, in the last case cited by the article, the American Civil Liberties Union hoped to avoid liability for accidentally publishing donor information by pleading that it had outsourced its security to a third-party vendor. Although the case settled, it is doubtful such a defense would have worked, and it is almost certain that regulated companies will not be able to escape accountability for compliance by outsourcing regulated activities—the responsibility will remain theirs!
There appears to be an increasing, and not-so-subtle, shift away from the notion that programming errors related to security breaches, computer viruses, worms, logic bombs and other malicious code, or hacker and denial-of-service attacks, are somehow equivalent to unpredictable natural disasters like earthquakes or fires—and thus not subject to a “fault” analysis but more appropriately covered by “accident” insurance. Indeed, these and other cases arising in the courts treat breaches of security as fair game for negligence lawsuits—especially where damage has been done to a consumer (e.g., identity theft) or where the assets of a company, tangible or intellectual property, have been compromised. As the 1995 article anticipated, courts are increasingly likely to hold both providers and users of technology liable for failure to implement available security where negligence can be shown—or even reckless disregard where safety or the protection of assets is concerned. You can read the CIO Magazine article here and, by the way, the obscure author of the 1995 Datapro article can be reached at firstname.lastname@example.org should anyone wish to see a copy or discuss the issues raised—then or now!