

04/28/2016
Stephen Carlisle
Monday, April 18, 2016 saw the Supreme Court of the United States deny the Authors Guild's petition for writ of certiorari in the long running lawsuit against Google over its mass digitization of library books.[ref]Challenge to Google Books Is Declined by Supreme Court[/ref] News outlets immediately rushed to trumpet the move as "Supreme Court Rejects Google Books Appeal,"[ref]Supreme Court Rejects Google Books Appeal[/ref] which isn't really true. All the Supreme Court said was that it wasn't going to hear the case, and nothing more. While the refusal does leave the decision below in place, the Supreme Court usually does not give a reason for its action, and did not do so here. Reasons can include:
  • We think the decision below is correct.
  • We think the decision below is incorrect, but we have more important issues to decide.
  • We think the issue is important, but this case does not lend itself to a complete consideration or resolution of the issue.
And it is the last reason why I am not totally unhappy with the Supreme Court's action. Since the refusal of certiorari is not a decision on the merits of the case, not much has changed, and what we might have gotten could have been worse.

Coincidentally, a few weeks before saw the latest ruling in the long running Georgia State e-books case, which once again found the majority of the digital scanning done was fair use.[ref]Georgia State Case[/ref] The difference was that the District Court, following the ruling of the Eleventh Circuit Court of Appeals, proceeded along the lines that "the excerpts are non-transformative because they are mirror image copies of the book."[ref]Id. at page 5.[/ref] This is the exact opposite of what the Second Circuit ruled in the Authors Guild case, where Google made exact copies of entire books, not just portions of them, yet the court declared the use "transformative."[ref]Authors Guild v Google, Inc. 804 F3d 202 Second Circuit Court of Appeals 2015[/ref] In Google Books, Google scanned complete copies of books provided by participating libraries, made them searchable, and, when asked, displays verbatim portions of the text.[ref]Id. at 207[/ref]

The Second Circuit seemed so overwhelmed by what it saw as the positive aspects of Google's book scanning project that it gave little or no consideration to contrary points of view: "Google's making of a digital copy to provide a search function is a transformative use, which augments public knowledge by making available information about Plaintiffs' books without providing the public with a substantial substitute for matter protected by the Plaintiffs' copyright interests in the original works or derivatives of them."[ref]Id. at 207[/ref]

Yet, as the Eleventh Circuit tartly observed: "[A]ll unpaid copying could be said to promote the spread of knowledge..."[ref]Cambridge University Press v Patton, 769 F.3d 1232 at 1282 Eleventh Circuit Court of Appeals 2014[/ref] Similarly, the Sixth Circuit ruled: "If you make verbatim copies of 95 pages of a 316 page book, you have not transformed the 95 pages very much-even if you juxtapose excerpts from other works."[ref]Princeton University Press v Michigan Document Services, Inc. 99 F.3d 1381 at 1389 Sixth Circuit Court of Appeals 1996[/ref] And this admonition from the Supreme Court of the United States: "We also agree with the Court of Appeals that whether ‘a substantial portion of the infringing work was copied verbatim' from the copyrighted work is a relevant question, (Citation omitted) for it may reveal a dearth of transformative character or purpose under the first factor, or a greater likelihood of market harm under the fourth; a work composed primarily of an original, particularly its heart, with little added or changed, is more likely to be a merely superseding use, fulfilling demand for the original."[ref]Campbell v. Acuff-Rose Music, Inc. 510 U.S. 569 at 587-588 Supreme Court of the United States 1994[/ref]

The fact that the information is presented in what the Second Circuit continually characterizes as "snippets" carried great weight with that Court. The District Court on the Georgia State remand characterized excerpts of 12.45% of a single book as "extremely large."[ref]Cambridge University Press (retrial) at 67[/ref] Yet, because the information in Google Books is contained in "snippets," the fact that one could access as much as 16% of an entire book was deemed inconsequential: "The fragmentary and scattered nature of the snippets revealed, even after a determined, assiduous, time-consuming search, results in a revelation that is not ‘substantial,' even if it includes an aggregate 16% of the text of the book. 
If snippet view could be used to reveal a coherent block amounting to 16% of a book, that would raise a very different question beyond the scope of our inquiry."[ref]Authors Guild v Google, Inc. at 223[/ref]

Again, in the Georgia State retrial, the Court found that "the excerpted portions of the work are dominated by author, opinion, analysis, evaluation and subjective description. Thus factor two disfavors fair use."[ref]Cambridge University Press (retrial) at 67[/ref] The Second Circuit dismisses this out of hand, again, because of "snippets": "Even if the snippet reveals some authorial expression, because of the brevity of a single snippet and the cumbersome, disjointed, and incomplete nature of the aggregation of snippets made available through snippet view, we think it would be a rare case in which the searcher's interest in the protected aspect of the author's work would be satisfied by what is available from snippet view, and rarer still—because of the cumbersome, disjointed, and incomplete nature of the aggregation of snippets made available through snippet view—that snippet view could provide a significant substitute for the purchase of the author's book."

Surely the Second Circuit has heard of the case of Harper and Row Publishers v Nation Enterprises.[ref]471 U.S. 539 Supreme Court of the United States 1985[/ref] Of course it has; it is cited in the opinion. What was overlooked by the Second Circuit was the truly minimal amount of copying done by the Defendant there: 300 to 400 words of verbatim quotes[ref]Id. at 539[/ref] from a book that is 454 pages long.[ref]A Time to Heal: The Autobiography of Gerald R. Ford[/ref] This amounted to only about 13% of the entire infringing article of some 2,250 words. One can only imagine the microscopic percentage when weighed against the entire book. Yet this verbatim copying was held to be infringing, and not fair use, by the Supreme Court of the United States. 
So, to state that such a case of infringement might be "rare" doesn't mean it could not happen, and the author could be damaged, as in Harper and Row.

But here's the kicker, and why it may be a good thing that the Supreme Court passed over the case. Google will remove a book from "snippet view" at the request of the author, by the submission of an online form.[ref]Authors Guild v Google, Inc. at 210[/ref] The question then becomes: where is the damage to the author if the snippet is not viewable? (As an aside, gee, Google, it would be nice if you did this with YouTube, but for some reason you don't.) The Supreme Court, in the three fair use cases it has undertaken, has consistently treated the fourth factor of fair use, the possibility of negative market effect, as the most important.[ref]Harper and Row at 566[/ref] So, the Supreme Court might look at the author's ability to withdraw the book, along with the rather attenuated damage of closing the market to a competing database, and reasonably find there is no significant market harm to the authors, affirming the finding of fair use. This would be far worse than the decision we have now, which is not binding Supreme Court precedent and is counter-balanced by contrary rulings from the Sixth and Eleventh Circuits.

So, my reaction today is the same as when I first heard about the refusal of the certiorari petition. I was partly sorry that the case would not be heard, because Lord knows the problems with "transformative use" are legion and need to be resolved, but I'm not sure that this was the best case to address the issue. Far better to take up the latest atrocity from Richard Prince (e.g., Richard Prince plays tic-tac-toe in the corner of a bunch of photographs, and a court declares this action "transformative") than what we might have gotten with a decision in the Authors Guild case, namely an affirmance, on other grounds, of a decision of questionable reasoning.
04/21/2016
Stephen Carlisle
The folks over at Digital Music News have had a busy week. They have uncovered not one, but two previous musical compositions that sound an awful lot like the opening guitar melody of "Stairway to Heaven."[ref]Exclusive: 'Stairway to Heaven' Is Actually In the Public Domain and Bombshell Emerges In 'Stairway to Heaven' Plagiarism Case (LISTEN)[/ref] This riff is the subject of a rather heated court battle between the Trust of Spirit guitarist Randy California (real name Randy Wolfe) and Jimmy Page and Robert Plant of Led Zeppelin.[ref]Will the 'Stairway to Heaven' Jury Hear About Jimmy Page's Drug Use and "Serial Plagiarism"?[/ref] Where it gets interesting is that one of the compositions is so old it is in the public domain. That composition, "Sonata di Chittarra, e Violino, con il suo Basso Continuo," is by Giovanni Battista Granata, and dates from the 17th century. No need to listen to the entire song; the relevant part occurs at 0:32.

https://youtu.be/zKpbJ5Kjy2I

The other composition is "Cry Me a River" by British guitarist Davy Graham. The familiar riff appears several times in this film, which dates from 1959.

https://youtu.be/tWeejHJxGjs

For comparison's sake, here is the song "Taurus" by Spirit.

https://youtu.be/gFHLO_2_THg

And the song that should need no introduction, "Stairway to Heaven":

https://youtu.be/CPSkNFODVRE

To my ears, "Taurus" bears the least resemblance to "Stairway to Heaven," followed by "Cry Me a River" and then "Sonata di Chittarra, e Violino, con il suo Basso Continuo," in which the noted section seems very similar to the famous Led Zeppelin classic. And it's in the public domain! But does this get Jimmy Page off the hook? Not necessarily. The key here is the doctrine of independent creation. One can hold a valid copyright in a work even though it is very similar to another work, as long as one was not copied from the other. This is what got the Bee Gees out of a copyright infringement suit over "How Deep Is Your Love." 
Said the Court: "Proof of copying is crucial to any claim of copyright infringement because no matter how similar the two works may be (even to the point of identity), if the defendant did not copy the accused work, there is no infringement."[ref]Selle v. Gibb, 741 F.2d 896, Seventh Circuit Court of Appeals 1984[/ref] So, if Jimmy copied Giovanni Battista Granata, whose composition is in the public domain, then there is no infringement. But currently there is no evidence that Jimmy was aware of the composer or the musical composition in question.

"Taurus" is a different story. As reported by The Hollywood Reporter ESQ: "Page has admitted that Spirit's album was in his record collection, but maintains there are many albums he's collected that he hasn't listened to. The defendants also admit that the two bands appeared at the same venue on the same day on multiple occasions, but assert there is no "chain of events" establishing that Led Zeppelin's members heard the Wolfe song." Page has been dogged for years with accusations of plagiarism. So much so that the Wolfe Trust wishes to bring it up in court.[ref]Will the 'Stairway to Heaven' Jury Hear About Jimmy Page's Drug Use and "Serial Plagiarism"?[/ref] "[Plaintiff's attorney] writes that Page and Plant admitted in depositions to routinely taking other people's songs. Among the works that have come up include ‘Dazed and Confused' (allegedly derived from Jake Holmes), ‘Whole Lotta Love' (allegedly Willie Dixon by way of The Small Faces) and ‘Babe I'm Gonna Leave You' (allegedly Anne Bredon from a song sang by Joan Baez)."[ref]Id.[/ref] "'There is no way any rational reasonable person listens to these songs and can conclude anything but that they were lifted, as Page and Plant admitted,' states a legal brief. ‘There is no other band in rock history who has been compelled to change the writing credits on its songs so many times, as Plant admitted at his deposition.'"[ref]Id.[/ref] But Page is also said to be a big fan of Davy Graham. 
According to Digital Music News: "Jimmy Page has cited Davey Graham as an influence.  Indeed, Graham is regarded as a highly influential and genre-pushing artist, and oftentimes credited with driving a resurgence in folk in 1960s Britain.  As an instrumentalist, his guitar-plucking style has often been imitated, especially by artists of the 60s and 70s."[ref]Bombshell Emerges In 'Stairway to Heaven' Plagiarism Case (LISTEN)[/ref] So what if the person Page copied wasn't Randy Wolfe, but Davy Graham? This would get him off the hook for copying "Taurus," but it might mean a potential lawsuit from Graham's heirs, since he died in 2008.[ref]Davey Graham[/ref] So a last-minute admission of this sort would be unlikely. But what of Randy Wolfe? Is it possible he copied from Davy Graham? Possible, but since he's dead we cannot go back and ask him. Which leaves Jimmy Page with two remaining strategies, outside of outright denial, of course. The first is the question of substantial similarity. The Wolfe estate must prove that the two songs are substantially similar to each other.[ref]Selle v. Gibb, 741 F.2d 896, Seventh Circuit Court of Appeals 1984 at 900[/ref] The problem is, "Stairway to Heaven" is a long song (8:02 in fact) and has many portions that are not similar to "Taurus" at all. Indeed, starting with the lyrics "If there's a bustle in your hedgerow…" on through the heavy final section, there is little to no similarity between the two. There is, however, the legal concept of "fragmented literal similarity." This is defined as "where ... parts of the pre-existing work are copied ... note for note ..., [t]he similarity, although literal, is not comprehensive."  Put slightly differently, "[f]ragmented literal similarity exists where the defendant copies a portion of the plaintiff's work exactly or nearly exactly, without appropriating the work's overall essence or structure."[ref]TufAmerica, Inc. v. 
Diamond 968 F.Supp.2d 588 United States District Court, S.D. New York (2013)[/ref] But, comparing the two, IMHO, "Stairway" and "Taurus" are indeed similar, but not close enough to get the Plaintiff to the point of proving "fragmented literal similarity," which would require a "nearly exact" copy.

So let's bring back "Sonata di Chittarra, e Violino, con il suo Basso Continuo" and factor that into the mix. The fact that this song is in the public domain does not immediately place "Cry Me a River," "Taurus" or "Stairway to Heaven" in the public domain, but it does weaken the copyright protection afforded them. For, assuming that neither Wolfe nor Graham was aware of this 17th century musical composition, and that Wolfe was not aware of Graham's composition, this would mean that three different composers (four if you count Jimmy Page) came up with very similar melodies using arpeggiated chords. This would indicate that the melodies themselves are kind of "stock" motifs that naturally follow from using standard rules of European musical theory, and really should receive little, if any, copyright protection. According to this website, both "Taurus" and "Stairway to Heaven" are "based on a commonly used A minor descending chromatic walk."[ref]Spirit - Taurus tab[/ref] It goes on to state that the songs are "vastly different in composition and complexity. Only an amateur would confuse them for the same song."[ref]Spirit - Taurus tab[/ref] For comparison, here is the guitar tab for both "Taurus" and "Stairway to Heaven."[ref]Spirit - Taurus tab[/ref]

In other words, "Taurus" was not the only source from which the melody could have emanated. Jimmy Page could have composed "Stairway to Heaven" without reference to either "Taurus" or "Cry Me a River," because it uses fairly standard techniques and harmonic progressions. As a songwriter myself, I can assure you that composing is a mysterious process, in which the brain guides you in both conscious and unconscious ways. 
It is very possible that Jimmy Page had heard "Taurus" or "Cry Me a River" (or both) and simply did not remember it when composing "Stairway to Heaven." As George Harrison found out, it's still copyright infringement, even if done unintentionally and subconsciously.[ref]Bright Tunes Music Corp. v. Harrisongs Music, Ltd. 420 F.Supp. 177 United States District Court, S. D. New York. (1976)[/ref] The Harrison case is further worth noting in that it shows us what substantial similarity really is. Taken side by side, it is clear that they are the same song with different lyrics. https://youtu.be/sYiEesMbe2I I cannot say, in my experience, that the same is true for "Taurus" and "Stairway to Heaven." I have been wrong before, the "Blurred Lines" case being the most recent example.[ref]The "Blurred Lines" Verdict: What It Means For Music Now and In the Future[/ref] Yet, as I discussed that verdict with an old attorney friend, and fellow musician, we both uttered the same thought at nearly the same time… "Gosh, there are only 12 notes in an octave to choose from." Given that limited a palette to paint from, similarities are bound to occur.
04/16/2016
Stephen Carlisle

On March 29, 2016, a huge study on the volume and effectiveness of DMCA takedowns was published. 1 In the study, titled “Notice and Takedown in Everyday Practice,” the authors made some rather stunning assertions which were widely reported in the news media. For example, this web blog reported that “[a]lmost 30 percent of all piracy takedown requests are problematic.” 2

There’s only one problem. This “fact” is not true. Anyone who had actually taken the time to read the study would know it’s not true. The study only examined those notices sent to Lumens (formerly known as Chilling Effects) which are predominantly those directed to Google. To quote directly from the report:

“The dominance of Google notices in our dataset limits our ability to draw broader conclusions about the notice ecosystem…Google’s dominant position in search and the extraordinary number of notices it receives also make it unusual. This makes the Lumen dataset useful for studying an important part of the takedown system, but also means that the characteristics of these notices cannot be extrapolated to the entire world of notice sending.” 3

What really followed was a rather stunning failure of journalists across the country to read at all, much less with a critical eye, a Google funded study 4 that has fundamental flaws and exhibits a rather slanted viewpoint.

The study reaches the conclusion that the majority of faulty takedown notices are generated by small and individual copyright holders, not “professional content companies.” 5

The central problem is that the study lumps takedown notices containing mere clerical errors together with those that might be abusive, and seeks to tar them all with the same brush. This leads to the rather curious assertion that a multi-billion dollar company like Google needs better legal protection from small copyright owners and individual artists. What the authors propose is the creation of brand new draconian legal penalties to be assessed against those small and individual copyright owners for the crime of making a clerical error. 6

At 147 pages, it’s a lot to digest. I’ve gone through it so many times I wore out two highlighters and my copy has so many sticky notes pasted to it, it looks like a peacock. So in case you don’t have the time, I’ve included the quick hit points. But the beauty of a blog is that I have no space constraints, so I can dig a lot deeper than a 500 word article. For those who are interested, a more detailed analysis follows the bullet points.

Here are the highlights, or should I say lowlights?

  • Google funded the report. 7
  • Google funded other projects by the participants. 8
  • The study directly adopts several recommendations from Public Knowledge, 9 also funded by Google. 10
  • The other Google funded projects include those in which the participants align themselves with the extremist Electronic Frontier Foundation, 11 which has itself received over $1 million in funding from Google. 12
  • Recommends five specific legal reforms, all of which increase burdens and penalties on persons sending takedown notices and suggests no proposals which increase burdens, responsibilities or liabilities for Online Service Providers. 13
  • Fails to consider how “whack-a-mole” contributes to the overall volume of takedown notices, and provides no suggestions for curbing the problem. As near as I can tell, the entire “whack-a-mole” problem receives less than one full page of discussion in the 147 page report. 14
  • The study fails to factually reconcile its assertion of a 30% “bad” notice rate against Google’s own transparency report, which shows a positive compliance rate of 97% of all takedown notices received. 15
  • The study flatly states certain types of works “weigh favorably towards fair use” without citing a single court case which found these types of works were in fact “fair use.” 16
  • Did not actually perform a full fair use analysis on any of the files it contended were fair use. 17
  • The study did not (as reported) review 108 million 18 or even 100 million takedown notices. 19 The actual number reviewed by the study was 1,826 20 sent to Google generally and 1,732 sent to Google Images. 21
  • In the Google Images subset, almost 80% of the notices were sent by persons outside the United States 22 and 52.9% were sent by a single individual, 23 a lady in Sweden, who very well may be off her rocker. 24
  • Fails to consider how Google’s deliberately obtuse and obstructionist takedown form/process might contribute to errors in takedown notices, 25 while contending that misidentification of the infringing work appears to be the major source of faulty takedown notices. 26
  • Contends that there are no significant barriers to individuals and small companies using the takedown system because the maybe-crazy lady in Sweden is capable of sending out a whole bunch of notices. (Seriously) 27
  • Failed to interview any small business or individual copyright holders who sent takedown notices to find out why or how problems occur. 28
  • Failed to interview a single person who had either been the “target” of a notice or filed a counter-notice. 29
  • Despite not interviewing even one “target” of a takedown notice, the study contends that the counter-notice provisions are “ineffective, empowering unscrupulous users and subjecting legitimate ones to legal jeopardy. Targets were widely considered to lack sufficient information to respond to mistaken or abusive notices.” 30
  • The section headed “Repeat Infringer Policies and ‘Strikes’” fails to mention even once the Grooveshark case 31 or the Cox Communications case, 32 despite the fact that the failure to terminate repeat infringers were central to the Court’s ruling in each case.
  • Devotes a page and half to a pseudonymously identified web site “struggling” with a flood of DMCA notices, 33 while failing to engage in equivalent scrutiny with web sites like MegaUpload and  Grooveshark, which despite receiving hundreds of thousands of legitimate notices, made DMCA non-compliance a part of their business model.
  • Devotes two pages to giving examples of mistaken ID in takedown notices. 34 By contrast, independent musician and composer Maria Schneider’s passionate testimony before Congress about how the need to send out repetitive notices for the same files is ruining her business is dismissed in a footnote. 35
  • Throughout the study, makes no distinction between takedown notices with bad information and an abusive takedown notice.

If you’re satisfied with the box score, then thanks for reading and be on your way. If you want something a little more meaty to chew on, then pull up a chair beside me and let’s get at it.

Birds of a Feather Flock Together

Let’s start at the top. Copied into the endnote are six web posts that covered the release of the study. These news outlets include the Washington Post, Variety and CNBC. 36 Not one of them reported that Google funded the study. It’s really hard to miss this point. It’s in the preface to the report.

“This work would not have been possible without both data and funding resources for the coding effort… We are grateful for funding support from Google Inc. as a gift to The American Assembly…” 37

So Google funded the creation of the study, but it doesn’t stop there. Google also donated funds to the Takedown Project run by the American Assembly, which employs one of the study’s co-authors. 38 The two other co-authors are employed by the Berkeley Law School’s Samuelson Law Technology & Public Policy Center, which is one of the main partners in “Chilling Effects” (now Lumens), 39 along with (surprise) the Electronic Frontier Foundation, which has received more than $1 million from Google. 40 And finally, the study directly adopts several recommendations from Public Knowledge, 41 also funded by Google. 42 Kind of cozy, wouldn’t you say?

Now, the authors of the survey state that “[n]either funder directed our approach in any way, and neither funder reviewed any methods, data, results, or reporting before public release.” 43 I have no doubt that this is the case. But there’s also a saying that “birds of a feather flock together.”

So the fact that Google had no review or editorial power over the study does not appear to matter much. The authors’ assertions and legal positions line up favorably with the interests of Google and the online service providers (OSPs), and it shows throughout the study.

Here’s a good example. In a footnote devoted to filmmaker Ellen Seidler’s attempt to get her takedown notice taken off Chilling Effects, 44 the study could have linked to her original blog post found here:

http://voxindie.org/i-sent-chilling-effects-a-dmca-takedown-notice/

Instead, the study linked to a post at Tech Dirt which ridiculed her, calling her efforts “stupidity” and posits “Ellen Seidler — anti-piracy activist and tilter at windmills — continues down the road to irrelevance with her latest post…” 45

That the authors would choose to cite to a highly critical blog post that spews insults instead of linking to the author’s firsthand account speaks volumes. Birds of a feather…

And it goes a long way to explain how a 147 page report could come to the conclusion that a multi-billion dollar corporation like Google needs more legal protection from small businesses and individual artists trying desperately to protect their livelihood.

 

The Actual Numbers Could Be Off by Millions, and One Person Sent 52.9% of the Notices

Here are some more things that were reported about the study which are simply not true, and could have been corrected if the persons involved had simply read the study.

“[T]he researchers reviewed more than 108 million takedown requests and found that 28.4 percent ‘… had characteristics that raised clear questions about their validity.’” 46

“Yet, according to the researchers’ detailed look at a database of more than 100 million requests, incorrect takedowns like those are frequently made and sometimes enforced. That’s a big problem for the Digital Millennium Copyright Act procedures that are often considered the legal backbone of the Internet.” 47

“Using data Google provides to the Lumen database, the researchers reviewed the accuracy of more than 108 million takedown requests.” 48

None of these statements are true.

That’s because the real number of takedowns reviewed is not 108 million or even 100 million.

The real number is 1,826. 49

This is because the researchers used a randomized selection process to create a statistical sample. 50 You can note from the endnote that this fact is revealed on page 1 of the study.

There is nothing wrong or inherently flawed about this approach. I am not a statistician, but in playing with several “statistical sample” generators I have confirmed that you can indeed generate results for 108 million instances from a sample as small as 1,800. But as with any statistical sample, the results are subject to error. The study itself admits that, given the large size of the entire database, its results could be off by as much as 7 million instances. 51
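As a back-of-the-envelope illustration (my own sketch, not the study’s methodology), the standard normal-approximation formula for the margin of error of a sample proportion shows how a sample this small can still speak for 108 million notices, and how even a small percentage of uncertainty balloons at population scale. The numbers plugged in below are the study’s reported figures; the formula and confidence level are my assumptions:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Figures reported by the study: a 28.4% "questionable" rate in a sample of
# 1,826 notices, projected onto a population of roughly 108 million.
p, n, population = 0.284, 1826, 108_000_000

moe = margin_of_error(p, n)   # about 0.021, i.e. roughly 2.1 percentage points
spread = moe * population     # roughly 2.2 million notices either way
print(f"margin: +/-{moe:.1%}, about +/-{spread / 1e6:.1f} million notices")
```

This simple formula yields a spread of around two million notices; the study’s own figure of 7 million presumably reflects different parameters or a different interval calculation, but the principle is the same: a small sample carries real uncertainty when projected onto a population of millions.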

Further, the sample was populated using random generators. The problem with random is that it’s, well, random. There is no guarantee that the random selection process will not generate skewed and unrepresentative results. This happened in the sample for notices sent to Google Images. It seems that one person had sent 52.9% of all of the notices. 52 According to the study, this one person (a lady in Sweden) sent notices complaining about material that she alleged “is defamatory, harassing, slanderous or threatening.” 53 In other words, this lady doesn’t know what she is doing and might be off her rocker. Yet, instead of realizing that the random generator had given them a bad data set, and that they should try again, the researchers plow on ahead.

The Study Ignores the Effect of Whack-A-Mole

As I plowed through this 147-page study, I kept asking myself, “When are they going to address the issue of whack-a-mole?” (the need to send repetitive notices for the same infringing material). The answer is that the whack-a-mole problem is virtually ignored. This is a trend amongst tech companies and their supporters, who seem to think that if the whole whack-a-mole problem is ignored, then maybe it will just go away. Here, the study follows the lead of the EFF, who, when directly confronted with a question regarding whack-a-mole, ignored it and pretended that it did not exist.

The EFF filed comments with the Copyright Office with regards to Section 512 takedowns. In the notice, the Copyright Office specifically asked for a response to this question:

  1. Does the notice-and-takedown process sufficiently address the reappearance of infringing material previously removed by a service provider in response to a notice? If not, what should be done to address this concern?

The EFF completely ducked this question, going from question #9 to question #12. 54

There can be no question that the need to send out repetitive notices greatly increases the total number of notices sent, and increases the need for automated bots to seek out the material. It is very simple for a bot to determine if a file is the same file which has been subject to a previous takedown notice. It can scan it in seconds. A human would have to watch the entire movie or TV show to determine if it was exactly the same.
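The bot-versus-human asymmetry can be illustrated with a toy sketch (my own, not drawn from the study; the sample data and names are hypothetical): a bot never watches the file at all, it just compares content fingerprints against the hashes of files already taken down.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content fingerprint: identical bytes always produce the same digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical store of digests for files removed under prior takedown notices.
previously_noticed = {sha256_of(b"infringing-upload-v1")}

def is_known_reupload(data: bytes) -> bool:
    """Bot-style check: flag a new upload if its digest matches a prior takedown."""
    return sha256_of(data) in previously_noticed

print(is_known_reupload(b"infringing-upload-v1"))  # True: byte-identical re-upload
print(is_known_reupload(b"infringing-upload-v2"))  # False: any change defeats an exact-hash match
```

Note that exact-hash matching only catches byte-identical re-uploads; systems like YouTube’s Content ID rely on perceptual fingerprints that survive re-encoding, which is harder, but the point stands: the machine’s comparison takes seconds where a human’s would take hours.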

The main complaint of the study is:

“The increased use of automated systems by large rightsholders, as well as DMCA Auto and DMCA Plus OSPs, however, raised questions of accuracy and due process. Though rightsholders and OSPs generally use some accuracy checks today, we identified a clear need for better mechanisms to check the accuracy of algorithms, more consistent human review, and a willingness by both rightsholders and OSPs to develop the capacity to identify and reject inappropriate takedown requests.” 55

So, let’s reduce the number of problem notices by reducing the number of total notices that need to be sent. If protections are agreed upon by both parties, and are put in place to help eliminate repeat infringements, this would reduce the total number of notices, and therefore total number of problem notices. As is typical for this study, these contrary facts are ignored, even when they are presented in their own paper.

“In 2011, five of the major ISPs announced a deal with several rightsholders groups to adopt and standardize procedures for warning and—after some number of warning notices—sanctioning repeat infringers… For some ISPs, this has also mitigated the floods of DMCA notices they were receiving before the agreement. According to several respondents, participating ISPs no longer receive DMCA notices from the Motion Picture Association of America (“MPAA”), RIAA, and their partnering groups. One ISP respondent described the agreement as superseding section 512 and generally reducing the number of headaches associated with managing notices. ‘Going from 1 million to around 100,000 notices per year, he observed, makes a big difference.’” 56

Gosh, going from over 1 million notices to 100,000. Sounds like a good idea that should be pursued, right?

Nope. On page 122, the study spends an entire page discussing such voluntary agreements without once acknowledging that these agreements work or suggesting that this might be a good idea.

Just Because You Say It’s Fair Use Doesn’t Make It So

This is where the ideological alignment with tech interests really starts to show up. The study states that it concluded that 7.3% of the takedowns were “flagged with characteristics that weighed favorably towards fair use.” 57 What do the study authors consider “fair use”? “Mashups, remixes or covers.” 58 This is not an accurate statement of existing law. Mashups and remixes are presumptively derivative works under Section 106(2) and would require permission. A cover version under Section 115 absolutely requires a license.

In the post-“transformative use” era, the 2nd, 7th, 9th and 11th Circuit Courts of Appeal have all weighed in on fair use, and none of these cases involved remixes, mash-ups or cover versions. 59 In fact, in the one case which most closely resembles a “mash-up,” the case of The Cat Not In the Hat, the 9th Circuit affirmed a finding that the book was copyright infringement and not fair use. 60

Further, this expansive view of mash-ups and remixes as fair use has been severely questioned by the Courts.

In particular, the Seventh Circuit Court of Appeals stated “[w]e’re skeptical of [the Second Circuit’s]…approach, because asking exclusively whether something is “transformative” not only replaces the list in § 107 but also could override 17 U.S.C. § 106(2), which protects derivative works. To say that a new use transforms the work is precisely to say that it is derivative and thus, one might suppose, protected under § 106(2). Cariou and its predecessors in the Second Circuit do not explain how every “transformative use” can be “fair use” without extinguishing the author’s rights under § 106(2). We think it best to stick with the statutory list.” 61

Let’s see a quick show of hands from all those making mash-ups and remixes who can recite the four factors of fair use from Section 107, and explain how to apply them correctly. Anybody?

The study could have asked them, but they didn’t. As a matter of fact, they did not interview a single person who was a “target” of a takedown. 62

And if they had, they would have found that these posters’ understanding of what constitutes fair use is close to zero. Jonathan Bailey over at Plagiarism Today 63 found this exchange on Reddit, from an attorney trying to help with fair use claims:

“VideoGameAttorney here to answer questions about fair use, copyright, or whatever the heck else you want to know!”

“I’ve received over 700 emails this past week alone from content creators. I’m truly trying to help everyone I can, but it became overwhelming fast… For those truly being abused, we’re here to help. The tricky bit is that most I speak with aren’t being bullied unfairly. They are infringing and are properly being taken down. An important distinction.”

(Follow up question from a Reddit reader)

“[Q:] Are they contacting you knowing that they are in the wrong or just oblivious?

[A:] Mostly the second. A good portion of the Internet feels no one owns anything and everything is fair use. It’s not.” 64

And, of course, fair use is not a simple analysis, and every case is different. I direct you to this recent decision in the closely watched Georgia State fair use case. 65 There, it takes the Judge 213 pages to analyze 48 claimed instances of fair use. And that is in a case where the first factor was deemed to favor Georgia State every time.

Fair use is not simple, is not easy, and requires rigorous analysis. Fair use requires more than ipse dixit. So, the study’s claim that there could be as many as 8 million cases of takedowns aimed at “potential fair use” 66 just does not hold up, given their over-expansive position on what constitutes actual fair use.

How Can You Reach Conclusions Without Actually Talking to People?

As noted above, the study reached broad conclusions about the counter-notice and why it was under-used. The study contends counter-notice provisions are “ineffective, empowering unscrupulous users and subjecting legitimate ones to legal jeopardy. Targets were widely considered to lack sufficient information to respond to mistaken or abusive notices.” 67

The study came to this conclusion without actually talking to a single person who was the “target” of a takedown notice. 68 How did they come to this conclusion? Well, they talked to the online service providers about their interactions with the “targets.” 69 This is what lawyers call “hearsay,” and it’s generally inadmissible in Court.

Perhaps, one reason that the counter-notice is underused is because it is far easier to re-post the material than to file the counter-notice. This, of course, is the whack-a-mole problem that the study consistently ignores.

The other possibility as noted above by Jonathan Bailey of Plagiarism Today is that what many of the posters claim is fair use is in fact infringing content.

Additionally, the study notes that “OSPs noted that in their experience, these ‘small senders’ are most likely to misunderstand the notice and takedown process, mistake the statutory requirements, or use it for clearly improper purposes.” 70 In other words, the small senders create most of the problems and it is these people that Google needs enhanced protection from.

So, did the study actually talk to any “small senders” to determine why errors were made, and how they might be corrected?

Nope. 71

How Come Your Data Is So Different From Google’s?

Google’s own transparency report states that as recently as 2011, Google processed 97.3% of all takedowns as correct. The study claims that bad notices constitute nearly 30%. What’s the explanation for this rather large discrepancy? The study posits that Google must have acted on many non-compliant notices anyway, but offers no data to back this up.

Are There Other Factors Which Cause Bad Notices?

The study could have taken a look at just how Google makes it extremely difficult to find out what the “correct” URL for an infringing file is. Here’s Ellen Seidler’s account of her personal experience, from the Vox Indie website, published more than a month before the subject study was posted.

“Let’s say eventually (once you create a Google account and login) you end up at the right DMCA form, guess what? You’re still not finished. You’ll need to have the correct URL to report. That seems simple enough…you already copied the URL from the address bar of the pirated stream so that’s the link to report, right? WRONG! That would be too easy.

If you did report that URL nothing will happen. You must READ THE FINE PRINT on the DMCA form silly. Apparently the URL that appears in the address bar that appears–when you click a film to watch it in a new window– is NOT really the link to the pirated movie. It’s actually the URL for the Drive folder and, as the fine print notes, this type of URL is ‘unacceptable.’

So how [do] you find the right URL to report? Google doesn’t offer up any tips on that front. To find an actual link specific to the pirated movie requires some further detective work. Return to the page with the pirated stream and poke around. There’s nothing that says “here’s the video link” but after clicking various clickable things you may, if you’re lucky, eventually discover that by navigating to menu bar and sliding your cursor over the three dots–an option for more actions appears.

From there if you click report abuse you’ll arrive at yet another page with a new URL. You’re asked to click the type of abuse….Warning, if you select report “copyright abuse” then ruh roh, you end up back at the beginning [of] the DMCA maze that Google has so conveniently created to impede and confuse you once again.

It’s an absurd maze, but for Google, it’s clearly no accident.” 72

The fact that Google deliberately obfuscates what constitutes an acceptable URL, and how this might factor into the rate of incorrect notices, is never addressed by the study.

Note also that under the law Google is required to have a DMCA agent whose job it is to receive takedown requests. If you do a search for “Google DMCA agent,” Google will send you to this page 73 which not only does not provide the name and address of who the DMCA agent is, but never mentions the words “DMCA agent” at all. For that, you have to go to the Copyright Office website. 74 It provides this URL: http://www.google.com/dmca.html as the place to “Submit Notices of Claimed Infringement.” This link will take you to this page 75 which states “to file a notice of infringement with us, please file a complaint using the steps available at our legal troubleshooter.” Clicking on that link takes you back to the same page noted at endnote 73, which of course makes no mention of the DMCA at all and puts you at about step 14 out of the 46+ steps as noted in last week’s blog post. 76

Do You Understand the English Language?

So back to the study. Recall that in the study of notices sent to Google Images, one person had sent 52.9% of all of the incorrect notices. 77 So, let’s take her out of the equation. Of the remaining notices, 56.5% were sent by people outside the United States, including Germany, China, and Israel. 78 Gosh, do you think that the fact that almost 80% of these notices are being sent by people who a) do not speak English as their primary language and b) are trying to interpret some fairly complex requirements of U.S. Copyright law might have something to do with the rate of incorrect notices? This possibility is never addressed.
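The “almost 80%” figure can be checked with simple arithmetic, on the reading that it combines the single prolific sender with the foreign-sent share of the remaining notices:

```python
# Share of incorrect notices sent by the single prolific sender (study, p. 100)
single_sender = 0.529
# Of the remaining notices, the share sent from outside the U.S. (study, p. 102)
foreign_share = 0.565

# Single sender plus the foreign portion of everyone else
combined = single_sender + (1 - single_sender) * foreign_share
print(f"{combined:.1%}")  # 79.5%
```

Which works out to 79.5%, i.e. “almost 80%.”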

It’s Not a Violation of Due Process If You Don’t Defend

Here’s another head scratcher. The study concludes that since so few people use the counter-notice provisions, this amounts to a denial of due process.

“The quantitative studies, which showed no use of the counter notice process, reinforced this concern. Without better accuracy requirements for notices, a reasonable ability to respond before action is taken, and an unbiased adjudicator to decide whether takedown is appropriate, counter notice and putback procedures fail to offer real due process protection to targets.” 79

If you are sued in a lawsuit and don’t respond or defend the suit, a default judgment will be entered against you. That’s the way the system works. Due process exists to give you an opportunity to respond to the charges (civil or criminal). If you don’t contest them, you lose the case.

If you get a takedown notice and don’t respond with a counter-notice, then your stuff will be taken down and it will stay down. That’s because you did not respond once given notice and opportunity to respond. This is not a denial of due process.

The “unbiased adjudicator” called for by the study is called a judge. That’s what you get when you file a counter-notice and the copyright owner takes you to Court. There is absolutely no need to inject another “unbiased adjudicator” into the process.

Recommends Changes to the Copyright Law That Only Burden Persons Sending Notices

Amongst the recommended changes to the law, the study recommends only those that place increased burdens on those who send notices, not on those who receive them. They include: 80

  • Require takedown notices to be made under penalty of perjury, and create statutory damages for incorrect notices.
  • Require immediate putback of files for which a counter-notice is served.
  • Remove the “per work infringed” metric from statutory damages.

Why no requirement of “take down, stay down”? Because that would be too “burdensome.” 81 Never mind that these are the businesses that are directly profiting from the posting of infringing material. Instead, they would rather create quasi-criminal penalties for making a simple mistake, or, in the case of fair use, for taking a position in an area in which there can be (and is) room for substantial disagreement.

Consider the call for “immediate putback” of work for which a counter notice is filed. The study claims that:

“The ten-day waiting period is routinely criticized for jeopardizing expression, especially time-sensitive expression. Given the very small number of counter notices received by OSPs and the high social cost of censoring expression, any costs related to this change would be far outweighed by the benefit of fixing this problem.” 82

Except for the fact that piracy is also time-sensitive, and occurs with the greatest frequency when a new work is released. Consider this report from 2014:

“The latest episode of Game of Thrones has broken the record for the most people sharing a file simultaneously via BitTorrent. More than 193,000 people shared a single copy yesterday evening, and roughly 1.5 million people downloaded the episode during the first day.” 83

So a bogus counter-notice gets your infringing content put back up, during the short period of time when it is most valuable, and acts of piracy are the greatest. Clearly, the authors have not thought that one through.

And what’s the remedy for a bogus counter-notice, such as one in which the sender falsely claims to be the owner of the material, makes a phony assertion of “mistake or misidentification,” or gives a totally fictitious name and address? What’s the remedy for that?

There currently is no remedy. Why don’t the authors suggest one?

It does not appear the authors have considered that question. Because at the end of the day, it’s not Google’s problem, is it?

Notes:

  1. Jennifer M. Urban, Joe Karaganis, and Brianna L. Schofield, Notice and Takedown in Everyday Practice
  2. Study Finds Major Flaws in DMCA Takedown Procedure and Policing the Pirates: 30% of Takedown Requests Are Questionable (Study)
  3. Notice and Takedown in Everyday Practice, at 79 (emphasis added)
  4. Notice and Takedown in Everyday Practice, at “Acknowledgement and Disclosures”
  5. Notice and Takedown in Everyday Practice, at 40
  6. Notice and Takedown in Everyday Practice, at 128-129
  7. Notice and Takedown in Everyday Practice, at “Acknowledgement and Disclosures”
  8. The Takedown Project
  9. Notice and Takedown in Everyday Practice, at 128
  10. U.S. Public Policy – Transparency
  11. Lumen – About Us
  12. The EFF and Google: Too Close for Comfort?
  13. Notice and Takedown in Everyday Practice, at 128-129
  14. Notice and Takedown in Everyday Practice, at 60
  15. Google Transparency Report FAQ
  16. Notice and Takedown in Everyday Practice, at 95
  17. Notice and Takedown in Everyday Practice, at 96
  18. 28% of Piracy Takedown Requests Are “Questionable”
  19. Blame the Robots for Copyright Notice Dysfunction
  20. Notice and Takedown in Everyday Practice, at 81
  21. Notice and Takedown in Everyday Practice, at 98
  22. Notice and Takedown in Everyday Practice, at 102
  23. Notice and Takedown in Everyday Practice, at 100
  24. “Most pointed to written material she alleges is defamatory, harassing, slanderous or threatening.” Notice and Takedown in Everyday Practice, at 100. The study contends most likely 100% of her notices are defective. Notice and Takedown in Everyday Practice, at 107.
  25. Why Does Google Make It So Damn Difficult to Send a DMCA Notice?
  26. Notice and Takedown in Everyday Practice, at 95
  27. Notice and Takedown in Everyday Practice, at 110
  28. Notice and Takedown in Everyday Practice, at 141
  29. Notice and Takedown in Everyday Practice, at 27
  30. Notice and Takedown in Everyday Practice, at 74
  31. Grooveshark Is Now Deadshark: How an Illegal Streaming Service Hid Behind the DMCA for Nearly 10 Years
  32. 14 Strikes and You’re Out! (Maybe): How Cox Communications Lost its DMCA Safe Harbor
  33. Notice and Takedown in Everyday Practice, at 66-67
  34. Notice and Takedown in Everyday Practice, at 91-92
  35. Notice and Takedown in Everyday Practice, at 33 footnote 93
  36. Study Finds Major Flaws in DMCA Takedown Procedure, Columbia University Researchers Claim 28% of Google’s URL Takedown Requests Are Invalid, How We’re Unwittingly Letting Robots Censor the Web, Blame the Robots for Copyright Notice Dysfunction, 28% of Piracy Takedown Requests Are “Questionable”, Policing the Pirates: 30% of Takedown Requests Are Questionable (Study)
  37. Notice and Takedown in Everyday Practice, at “Acknowledgement and Disclosures”
  38. The Takedown Project
  39. Lumen – About Us
  40. The EFF and Google: Too Close for Comfort? Not to mention, former EFF Senior Staff Counsel Fred von Lohmann is now Google Senior Copyright Counsel, and was also a visiting researcher with the Berkeley Center for Law and Technology. Fred von Lohmann
  41. Notice and Takedown in Everyday Practice, at 128
  42. Google U.S. Public Policy – Transparency
  43. Notice and Takedown in Everyday Practice, at “Acknowledgement and Disclosures”
  44. Notice and Takedown in Everyday Practice, at 51, footnote 155
  45. Anti-Piracy Activist Issues Takedown To Chilling Effects To Take Down Her Takedown Notice To Google
  46. Columbia University Researchers Claim 28% of Google’s URL Takedown Requests Are Invalid
  47. Blame the Robots for Copyright Notice Dysfunction
  48. 28% of Piracy Takedown Requests Are “Questionable”
  49. Notice and Takedown in Everyday Practice, at 81
  50. Notice and Takedown in Everyday Practice, at 1
  51. Notice and Takedown in Everyday Practice, at 88 footnote 246
  52. Notice and Takedown in Everyday Practice, at 100
  53. Id.
  54. Copyright Office Section 512 Study – EFF 512 Study Comments
  55. Notice and Takedown in Everyday Practice, at 3
  56. Notice and Takedown in Everyday Practice, at 61
  57. Notice and Takedown in Everyday Practice, at 95
  58. Id.
  59. Marching Bravely Into the Quagmire: The Complete Mess that the “Transformative” Test Has Made of Fair Use, and Copyright Blog Update: Court of Appeals Rejects “Transformative Use” Test & Malibu Media Marches Along
  60. Dr. Seuss Enterprises v. Penguin Books USA, Inc. 109 F3d 1394 Ninth Circuit Court of Appeals 1997
  61. Kienitz v. Sconnie Nation, LLC, 2014 WL 4494835, Seventh Circuit Court of Appeals 2014
  62. Notice and Takedown in Everyday Practice, at 27
  63. My Comments to the Copyright Office on DMCA Safe Harbor
  64. VideoGameAttorney here to answer questions about fair use, copyright, or whatever the heck else you want to know!
  65. Georgia State Fair Use Case
  66. Notice and Takedown in Everyday Practice, at 97
  67. Notice and Takedown in Everyday Practice, at 74
  68. Notice and Takedown in Everyday Practice, at 27
  69. Notice and Takedown in Everyday Practice, at 27
  70. Notice and Takedown in Everyday Practice, at 40
  71. Notice and Takedown in Everyday Practice, at 141
  72. Why Does Google Make It So Damn Difficult to Send a DMCA Notice?
  73. Google Legal Help – Legal Removal Requests
  74. Google Amended Interim Designation of Agent to Receive Notification of Claimed Infringement
  75. Google Legal Help – Digital Millennium Copyright Act
  76. How to Send a Takedown Notice to Google in 46 (or more) Easy Steps!
  77. Notice and Takedown in Everyday Practice, at 100
  78. Notice and Takedown in Everyday Practice, at 102
  79. Notice and Takedown in Everyday Practice, at 3
  80. Notice and Takedown in Everyday Practice, at 128-129
  81. Notice and Takedown in Everyday Practice, at 123 and 140
  82. Notice and Takedown in Everyday Practice, at 128
  83. Game of Thrones Sets New Torrent Swarm Record