Wednesday, December 18, 2013

Important Social Media Guidance Issued for Financial Institutions

The Federal Financial Institutions Examination Council (FFIEC) issued final supervisory guidance that financial institutions are expected to use in "their efforts to ensure that their policies and procedures provide oversight and controls commensurate with the risks posed by their involvement with social media."

The FFIEC is the formal inter-agency body empowered to prescribe uniform principles, standards, and report forms for the federal examination of financial institutions by, among others, the Federal Reserve System, the Federal Deposit Insurance Corporation (FDIC), and the Consumer Financial Protection Bureau (CFPB).  The memorandum issued by the Council, "Social Media: Consumer Compliance Risk Management Guidance," is meant to "address the applicability of federal consumer protection and compliance laws, regulations, and policies to activities conducted via social media by banks, savings associations, and credit unions, as well as nonbank entities supervised" by the CFPB.  Compliance officers and other senior managers at financial institutions would be well served to review the Council's Guidance, not only because of their own responsibilities and obligations as outlined in the memorandum, but also because the memorandum provides a brief yet substantive overview of a wide variety of laws applicable to the financial sector's use of social media.  The Guidance makes reference to, and provides relevant summaries of, a variety of laws including, but not limited to, the Truth in Savings Act, the Equal Credit Opportunity Act, the Truth in Lending Act, and the Fair Debt Collection Practices Act.

The Guidance states that a "financial institution should have a risk management program that allows it to measure, monitor, and control the risks related to social media."  It also specifies that the risk management program should provide guidance and training for employees' official use of social media.  The components of the risk management program include, in brief, the following:

  • a governance structure with clear roles and responsibilities;
  • policies and procedures regarding the use and monitoring of social media and compliance with all applicable consumer protection laws and regulations, and incorporation of guidance as appropriate;
  • a risk management process for selecting and managing third-party relationships in connection with social media;
  • an employee training program;
  • an oversight process for monitoring information posted to the financial institution's social media site;
  • audit and compliance functions to ensure ongoing compliance; and
  • parameters for providing appropriate reporting to the financial institution's directors and senior management for periodic evaluation.
The Guidance points out that "Since this form of customer interaction tends to be both informal and dynamic, and may occur in a less secure environment, it can present some unique challenges to financial institutions."

Wednesday, December 4, 2013

Court Says Social Media Sites Off Limits to Sex Offenders

A New Jersey appellate court has upheld the state parole board’s restriction prohibiting convicted sex offenders from accessing social media and other comparable websites.


Superior Court Judge Jack Sabatino, writing for the three-judge panel, said, “we are satisfied that the Internet restrictions adopted here by the Parole Board have been constitutionally tailored to attempt to strike a fair balance.”  Judge Sabatino continued, “We recognize that websites such as Facebook and LinkedIn have developed a variety of uses apart from interactive communications with third parties.  Even so, the Parole Board has reasonably attempted to draw the line of permitted access in a fair manner that balances the important public safety interests at stake with the offenders’ interests in free expression and association.”

The defendants, several convicted sex offenders whose cases were consolidated, challenged the constitutionality of the restrictions as an infringement of their First Amendment rights to free speech and association, a violation of their due process rights, and a violation of corresponding rights under New Jersey’s Constitution.  The restrictions stem from Megan’s Law, a series of laws, originally passed in New Jersey, aimed at sex offenders.  One component of Megan’s Law requires that persons convicted between 1994 and 2004 of certain sexual offenses serve, in addition to any existing sentence, a special sentence of “community supervision for life,” while those convicted after that date range are sentenced to “parole supervision for life.”

The New Jersey Parole Board’s restriction does allow parolees to seek special permission to access certain sites for work or another “reasonable purpose.”  The state’s Deputy Attorney General said, “It is not the Parole Board’s intention that these provisions bar appellants from having Internet access to news, entertainment, and commercial transactions.”

The New Jersey restriction is hardly novel; similar cases have been sprouting up throughout the nation with varied outcomes.  You can read the full opinion here.

Thursday, November 21, 2013

Law Enforcement and the Social Media Stakeout

Law enforcement techniques that were previously available only to federal agencies are becoming more readily accessible to law enforcement at the local level.

Police sitting in a car, with a cup of coffee in hand, waiting for something to "go down" at the building across the street is a scene we have all watched countless times in movies over the years.  While possibly not as dramatic for cinematic purposes, today police are able to participate in big data stakeouts from their own desks.  At a meeting last month of the International Association of Chiefs of Police (IACP), a cloud-based service was unveiled that will allow local law enforcement to monitor social networks for evidence and clues of crimes committed in the brick-and-mortar world.

A piece in Ars Technica noted that a poll of 1,200 law enforcement officers, conducted by LexisNexis, found that four out of five officers are now using social media as part of their investigations.  New SaaS programs allow police to aggregate information culled from social media sources and then link to public records databases, enabling law enforcement to cross-reference the information gathered.  The article also noted that one of the services providing this type of assistance will even "monitor the general mood of postings and pick up potential threats of violence."

While police have been using social media for some time as an aid to investigations, new technology and services are providing them with more elaborate tools to assist them with their online efforts.

Wednesday, November 13, 2013

Creating a Fake Profile of Your Competitor on LinkedIn…Bad Idea

If you think making bad choices on social media is limited to high school students and politicians, you should take a look at AvePoint, Inc. and AvePoint Public Sector, Inc. v. Power Tools, Inc. d/b/a Axceler and Michael X. Burns.

In this Virginia federal district court case, the court refused to dismiss most of the counts in the complaint brought by AvePoint, Inc. against its software competitor, Axceler.  The complaint alleges that Axceler and its agents made false, defamatory, and deceptive claims and statements regarding AvePoint through both Twitter and LinkedIn, as well as through direct communications with customers and prospective customers.  Specifically, the allegations against Axceler state that the company attempted to confuse customers into falsely believing that (i) AvePoint is a Chinese company, not an American company; (ii) AvePoint’s software is not made, developed, or supported in the U.S.; (iii) AvePoint’s software is maintained in India; (iv) Axceler’s ControlPoint software is “Microsoft recommended” over AvePoint’s DocAve software; (v) AvePoint’s customers are “dumping out of 3 year deals in year 2” to buy Axceler’s ControlPoint; and (vi) Axceler uses its maintenance revenue to improve its customers’ existing products, whereas AvePoint uses its maintenance revenue to develop new products to which its customers have no access.

If all of the allegations are true, it appears the defendant went to remarkable lengths to execute its campaign against the plaintiff.  The complaint alleges that the defendant created a LinkedIn account for a fictitious AvePoint representative named Jim Chung and, in connection with the account, used the plaintiff’s registered trademark.  To emphasize the confusion caused by the defendant’s actions, the plaintiff pointed to Jim Chung’s list of LinkedIn connections.  Further, taking full advantage of the opportunities afforded by social media, the defendant’s Regional Vice President of Sales for Western North America, while at the SharePoint conference in Las Vegas, tweeted about the fictitious AvePoint representative, “Just ran into jim chung from avePoint Good guy.”  To add further credibility to Jim Chung’s existence, another Axceler employee tweeted, “@MICHAELBURNS Free Jimmy! #Axceler.”

The district court refused to dismiss most of the nine counts set out in AvePoint’s complaint.  The counts that survived included defamation, breach of contract (the defendant also allegedly acquired trial software from the plaintiff through deceptive means), trademark infringement, false association or false endorsement under the Lanham Act, false advertising under the Lanham Act, and certain violations of Virginia law.

The court’s full opinion is available here.

Wednesday, November 6, 2013

Facebook Considers Using Cursor Tracking Technology

The Wall Street Journal reports that Facebook is currently looking into technology that will enable the social network to track the location of a user’s cursor on their screen or interface.

The Journal noted that Facebook would not be the first company to engage in this type of behavioral tracking, as Shutterstock, a digital image marketplace, has already done so.  The article quotes Shutterstock CEO Jon Oringer as saying, “Today, we are looking at every move a user makes, in order to optimize the Shutterstock experience.”

The potential Facebook tracking technology could collect data on how long a user’s cursor hovers over a part of the website and whether a user’s newsfeed is visible at a specific moment on the user’s mobile phone.  Facebook is still testing the technology, but the Journal reports that the company should know within months whether it will proceed with it.
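
For readers curious about the mechanics, the short TypeScript sketch below illustrates one simple way a website could measure how long a visitor’s cursor hovers over a page element and whether that element was on screen at the time.  It is purely illustrative: the “.news-feed-item” selector and the “/analytics/hover” endpoint are hypothetical placeholders, and nothing in it is based on Facebook’s or Shutterstock’s actual implementation.

```typescript
// Hypothetical sketch of client-side hover tracking. The ".news-feed-item"
// selector and "/analytics/hover" endpoint are made up for illustration.

type HoverEvent = {
  target: string;      // identifier of the element that was hovered
  durationMs: number;  // how long the cursor remained over the element
  visible: boolean;    // whether the element was within the viewport
};

const hoverEvents: HoverEvent[] = [];
const hoverStart = new Map<Element, number>();

document.querySelectorAll(".news-feed-item").forEach((el) => {
  el.addEventListener("mouseenter", () => {
    hoverStart.set(el, performance.now());
  });

  el.addEventListener("mouseleave", () => {
    const start = hoverStart.get(el);
    if (start === undefined) return;
    hoverStart.delete(el);

    const rect = el.getBoundingClientRect();
    hoverEvents.push({
      target: el.id || el.className,
      durationMs: performance.now() - start,
      // crude visibility check: any part of the element is inside the viewport
      visible: rect.bottom > 0 && rect.top < window.innerHeight,
    });
  });
});

// Periodically ship the collected events to a (hypothetical) analytics endpoint.
setInterval(() => {
  if (hoverEvents.length === 0) return;
  navigator.sendBeacon("/analytics/hover", JSON.stringify(hoverEvents.splice(0)));
}, 10_000);
```

Even a sketch this small suggests how little instrumentation is needed to turn ordinary cursor movement into a stream of behavioral data.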

Ken Rudin, Facebook’s head of analytics, is working on increasing the volume of the company’s available data and storing it in a way that can be accessed more efficiently. He referred to the review of the new technology as a “never-ending phase” noting that it will not necessarily be rolled out.

Facebook is now considering this technology and, if it adopts it, will not be the first company to do so.  Cursor tracking would add yet another layer of behavioral tracking to the myriad ways data can be collected and used on social media platforms.

Tuesday, October 29, 2013

Potential Landlord Liability in Facebook Stalking Case

A recent ruling by an Ohio appellate court indicates that the landlord of an apartment complex could have liability in a negligence action brought in connection with a Facebook stalking incident.

The facts of this case, as outlined in the opinion of the Court of Appeals, Twelfth Appellate District, are particularly disturbing.  The case involves a single mother, Lindsay P., who resided with her young daughter in an Ohio apartment complex.  The mother complained to the management company, Towne Properties Asset Management Co., Ltd., about excessive noise, including fighting and loud music, emanating from the apartment below.  That apartment was occupied by the resident named on the lease as well as her live-in boyfriend, who was not a party to the lease and whose presence was not contemplated by the lease terms.  The dispute eventually led to the downstairs neighbor’s boyfriend banging on Lindsay P.’s door and engaging in other intimidating behavior, which eventually included contacting Lindsay P. through her Facebook account.  He “began the exchange by stating that he knew the two had differences, that he had seen Lindsay upset and crying, and that he knew things were not ‘easy for a single mom.’”  He proceeded to make apparently sexual overtures to Lindsay P. and even attached a link to a pornographic website showing a man and woman, who the court said “looked similar” to Lindsay P. and her neighbor’s boyfriend, having sexual relations.  After the matter continued to escalate and Lindsay P.’s concern and fear continued to grow, she allegedly informed the management company that she would like to leave her current residence and look for another place to live.  The management company told her that “was not an option,” but said that she could instead move to a different apartment managed by the company a few blocks away.  While not an ideal alternative, as termination of the lease appeared to have been rejected by the management company, Lindsay P. agreed to the move even though it was to a first-floor apartment that she expressed concern over “because of safety and accessibility reasons.”  Soon after she moved into the new apartment, the neighbor’s boyfriend broke in and raped her while her young daughter overheard the attack from a nearby room.

The record of the case indicates that the management company had been provided with a copy of the parties’ Facebook exchange and had advised Lindsay P. to contact the local police, which she did.  “It is undisputed that the police did not pursue charges against Haynes (the neighbor’s boyfriend) because of the Facebook exchange, nor did they investigate the matter.”  There was some dispute as to whether Lindsay P. had expressly requested that her lease be broken, and the court reasoned that such lack of clarity was an issue of credibility that “must be determined by the trier of fact.”  Moreover, while the landlord’s counsel “suggested at oral arguments that the record did not contain evidence that Towne Properties let tenants out of their leases…the record, however, does appear to contain such testimony.”

In its Lindsay P. v. Towne Properties Asset Management Co., Ltd. opinion, the court stated that it is “cognizant that the criminal acts of third parties are very difficult to predict and that a landlord does not generally have a duty to protect its tenants from the criminal acts of third parties.  However, there are issues of fact regarding whether Towne Properties should have reasonably foreseen Haynes’s criminal activity.”

Haynes was apprehended by the police, was tried and convicted of rape and aggravated burglary and was sentenced to nine years in prison.

Tuesday, October 22, 2013

Failure to Follow DMCA Safe Harbor Requirements Leads to Stormy Seas

Recent cases suggest that Internet Service Providers, or “ISPs,” need to understand, and act upon, the statutory requirements associated with the safe harbor provisions of the Digital Millennium Copyright Act (“DMCA”).  Recall that the DMCA’s safe harbor provisions protect service providers from copyright liability related to user-generated content that might infringe the rights of a third-party copyright holder.

In order to qualify for safe harbor protection, the service provider must first adhere to certain requirements including the following:

  (i) be a “service provider” as that term is defined in the DMCA;
  (ii) adopt and implement a repeat infringer policy; and
  (iii) not interfere with technical measures copyright owners use to protect their copyrighted works.

Once it is determined that the ISP meets the necessary qualifications for safe harbor protection, the next part of the analysis considers:

  (i) whether the ISP had actual knowledge of the infringement at issue, or was aware of facts or circumstances from which the infringement was apparent (the “red flag” test);
  (ii) whether the ISP received any direct financial benefit as a result of the infringement; and
  (iii) whether the ISP acted expeditiously to remove or disable access to the infringing material.

In a recent Southern District of New York case, Capitol Records v. Vimeo, the court refused to hold, as a matter of law, that all of the content at issue in the claims brought by Capitol Records and EMI Blackwood Music against Vimeo, a video upload site, fell within the DMCA’s safe harbor protections.  While the court did find that much of the content was protected, it also found that a sizeable portion required a fact finder’s assessment to determine whether the statutory requirements were met.

In Vimeo, certain materials had been uploaded by employees of the site itself, which raised the issue of whether the content was stored at the direction of users or uploaded by the site’s own employees.  In fact, labels identifying the content as having been uploaded by “STAFF” appeared on the site alongside the related content.  In addition, implicating the “red flag” test, Vimeo employees had placed certain content in specific sections or categories of the site, including on employee-only channels, and had commented on some of the content as well.  As a result, the court found that the content associated with these actions presented triable issues of fact.

It should also be noted that this case follows on the heels of a recent case from the U.S. District Court for the Southern District of Florida, Disney Enterprises, Inc. v. Hotfile Corp., which found no safe harbor protection where a site failed to take action against repeat infringers after receiving proper takedown notices from rights holders.