10/29/24 4:52pm
Well, a wrong has been righted. Kind of. And for how long, no one really knows.

Texas is on the leading edge of book censorship in the United States. You know, the land most famous for its freedoms, one of which is the famous/infamous (depending on who you ask) First Amendment. It's second only to another state run by the Party of Free Speech, according to data gathered by people who actually firmly believe in protecting First Amendment rights:

Texas is second in the nation in banning books, with more than 1,500 titles removed from 2021 to 2023, according to PEN America, a literary freedom non-profit. Only Florida has banned more, with 5,100 titles removed.

The censorship will continue to escalate, of course, despite this recent concession. Both Florida and Texas are run by publicly-funded bigots who not only encourage the worst of their constituents to engage in speech-threatening activism, but provide them with the codified weapons they need to accomplish this task.

But one county in Texas was on the receiving end of negative national press when its censorship board (the Montgomery County Citizens Review Committee) decided it wasn't going to limit itself to banning books from public schools and libraries. It was also going to decide what is or isn't a fact, based on its subjective feelings about the facts themselves:

The Montgomery County Commissioners Court ordered librarians there to reclassify the nonfiction children's book "Colonization and the Wampanoag Story" as fiction.

This reclassification decision is a consequence of a contentious policy change in March. Right-wing activists pressured the Montgomery County Commissioners Court to remove librarians from the review process for challenged children's, young adult and parenting books.

That's the sort of thing we expect from dictatorships and dystopian novels. It's not the sort of thing we expect to see in the United States: a literal, unilateral decision to declare some facts to be fiction, simply because a government board, whose meetings are closed to the public, decided it would rather not allow children to have access to factual depictions of past violence against indigenous people.

Fortunately, the nonfiction book has now been restored to its proper classification. (h/t Techdirt reader Eric Knapp)

A Texas county reversed its decision to place Colonization and the Wampanoag Story, a children's history book about the Native American experience, in the fiction category at local libraries. […]

The Texas community of Montgomery county, near Houston, reclassified the book after creating a citizen review committee, making the committee's meetings secret and removing librarians from deliberations – changes driven by a conservative Christian group.

This move towards greater and more creative censorship is one of the expected side effects of allowing activists with religious agendas an outsized voice in day-to-day government. In this county, the propelling force is Michele Nuckolls, the founder of Two Moms and Some Books, a group whose innocuous name might make some people believe this is a grassroots effort that just wants what's best for all children. In reality, it's a self-described Christian conservative group that wishes to see as many people harmed as possible, especially those who don't describe themselves as Christian conservatives.
The group advocates for books, primarily those about sexuality and transgender identity, to be moved to more "restrictive" adult sections of the library and for more Christian titles to be added to shelves.

Nuckolls is also an annoyance at local school board meetings, a place where she shouldn't really be allowed to speak, considering she homeschools her children and is generally unaffected by any of the school board's decisions. But Nuckolls is going to face a bit more of an uphill battle the next time she and her bigoted buddies start leaning on the Citizens Review Committee to ban more books and/or declare facts that don't portray white Christian conservatives in the most flattering light to be fiction.

At an Oct. 22 meeting, the Montgomery County Commissioners Court issued a stay against all actions of the citizens reconsideration committee since Oct. 1 and put any future decisions on hold.

The commissioners also created another committee to review and revise library policy, including the rules around the citizens reconsideration group. It will be made up of employees from different commissioners' offices and advised by the county attorney's office.

For now, Montgomery County is incapable of further embarrassing itself on the national stage. But once the stay is lifted, it will be up to the committee overseeing the Citizens Review Committee to prevent such embarrassments from recurring. Given that this new committee will be comprised of other representatives of the same county government that allowed the first debacle to happen, I don't have particularly high hopes that the county won't try something just as stupid again once it's been given the opportunity to do so.

But at least it's something, no matter how minimal. It means at least a few people on the inside are aware these actions are not only unwise, but unconstitutional. What Montgomery County needs is more of those people making decisions, rather than handing them off to the most ignorant policymakers and constituents in its midst.

[Category: 1st amendment, book ban, censorship, libraries, montgomery county, texas]

10/29/24 2:32pm
The release of a bipartisan draft of the American Privacy Rights Act (APRA) reinvigorated the effort to pass a federal consumer privacy law, only for that effort to sputter and stall amid concerns raised from across the political spectrum. All that is gone, however, is not forgotten: it is only a matter of time before Congress returns its institutional gaze to consumer privacy. When it does, Congress should pay careful attention to the implications of the APRA's policy choices for AI development.

The APRA proposed to regulate AI development and use in two key ways. First, it required impact assessments and audits of algorithms used to make "consequential decisions" in areas such as housing, employment, healthcare, insurance, and credit, and provided consumers with rights to opt out of the use of such algorithms. House drafters subsequently struck these provisions. Second, and perhaps more importantly – and the focus of this article – the APRA also prohibited the use of personal data to train multipurpose AI models. This prohibition is not explicit in the APRA text. Rather, it is a direct implication of the "data minimization" principle that serves as the bedrock of the entire bill.

Data Minimization as a Framework for Consumer Privacy

Data minimization is the principle that data collection should be limited to only what is required to fulfill a specific purpose. It has both procedural and substantive components. Procedural data minimization, which is a hallmark of both European Union and United States privacy law, focuses on disclosure and consumer consent. Virginia's Consumer Data Protection Act, for example, requires data collected and processed to be "adequate, relevant, and reasonably necessary" for its purposes as disclosed to the consumer. Privacy statutes modeled on procedural data minimization might make it difficult to process certain kinds of personal information, but with sufficient evidence of disclosure, they tend to remain agnostic about the data's ultimate use.

Substantive data minimization goes further by limiting the ability of controllers to use consumer data for purposes beyond those expressly permitted under the law. Maryland's Online Data Privacy Act, enacted earlier this year, is an example of this. The Maryland law permits covered businesses to collect, process or share sensitive data when it is reasonably necessary and proportionate to provide or maintain a specific product or service requested by the consumer. Although Maryland permits consumers to consent to additional uses, practices that are by default legal under Virginia's and similar statutes — such as a local boat builder using data on its current customers' employment or hobbies to predict who else in the area is likely to be interested in its business — would generally not be permissible in Maryland.

The APRA adopts a substantive data minimization approach, but it goes further than Maryland. The APRA mandates that covered entities shall not collect or process covered data "beyond what is necessary, proportionate, and limited to provide or maintain a specific product or service requested by the individual to whom the data pertains," or alternatively "for a purpose other than those expressly permitted." The latter category would permit data to be used only for purposes explicitly authorized in the legislation — described as "permitted purposes" — but does not permit consumers to consent to additional uses, or even to several such "permitted purposes" at the same time.
The APRA proposes what is essentially a whitelist approach to data collection and processing. It does not permit personal data to be used for a range of socially beneficial purposes, such as detecting and preventing identity theft, fraud and harassment, activities that are essential to a functioning economy. And because the development of AI models is not among the permitted purposes, no personal data could be used to train AI models – even if consumers were to consent, and even if the information was never disclosed. In contrast, current U.S. laws permit collection and processing of personal data subject to a series of risk-based regulations.

The substantive data minimization approach reflected in the APRA represents a potential sea change in norms for consumer privacy law in the United States. Each of the 19 state consumer privacy laws now in effect has by and large adopted a procedural data minimization approach in which data collection and processing is presumptively permissible. They have generally avoided substantive minimization restrictions. Even Maryland, the most stringent of these, has stopped well short of the APRA's proposal to restrict data collection and processing to only those uses specified in the bill itself.

The GDPR's Minimization Approach

The APRA's approach to data minimization has more in common with the EU General Data Protection Regulation (GDPR) than with U.S. state privacy laws. The GDPR follows a substantive data minimization model, allowing collection only for a set of "specified, explicit, and legitimate" purposes. Unlike the APRA, however, a data controller may use data if a consumer provides affirmative express consent. As such, compliance practitioners typically advise companies operating in Europe that intend to "reuse" data for multiple purposes, such as to train multimodal AI models, to simply obtain a consumer's consent to use any data sets that would undergird future technological development of these models.[1]

Even with the permission to use data pursuant to consumer consent, the GDPR framework has been widely criticized for slowing innovation that relies on data. Some have attributed the slow pace of European AI development, compared to the United States and China, to the GDPR's restriction of data use. Notably, enforcement actions by EU regulators, as well as general uncertainty over the legality of training multimodal AI under the GDPR, have already forced even large companies operating in the EU to altogether stop offering their consumer AI applications within the jurisdiction.

How the APRA Would Cut Off AI Development

The APRA, if enacted in its current form, would have a starker impact on AI development than even the GDPR. This is because the APRA would not permit any "reuse" of data, nor permit the use of data for any purpose outside the bill's whitelist, even in cases where a consumer affirmatively consents. That policy choice moves the APRA from the GDPR's already restrictive framework into a new kind of exclusively substantive privacy regulation that would hamstring AI development.

Multifaceted requests by end users form the foundation of generative AI. Flexibility in consumer applications is these models' purpose and promise. If data collected and processed for one purpose may never be reused for another purpose, regardless of consumer consent or any clear criteria, training and offering multipurpose generative AI applications is rendered facially illegal.
The AI developer that could comply with the GDPR by obtaining affirmative consent to enable the reuse of data for multiple productive applications could not do so under the APRA.

Training entire AI models to serve only one purpose would have negative effects on both safety and reliability. Responsible AI practices include a multitude of safeguards that build off each other and their underlying data set to optimize machine learning applications for accuracy, consumer experience, and even data minimization itself. These improvements would not be feasible if every model used for a new purpose were forced to "start from scratch." For example, filtering for inaccurate data and efforts to avoid duplicative datasets, both of which depend on well-developed training data, would be rendered ineffective. Consumers would also need to reset preferences, parameters and data output safeguards for each model, leading to user fatigue.

Moreover, the APRA approach would prevent developers from building AI tools designed to enhance privacy. For example, the creation of synthetic data based on well-developed datasets, which is then substituted for consumers' personal data — a privacy-protective goal — is impossible in the absence of well-developed underlying data. Paradoxically, consumers' personal data would instead need to be duplicated to serve each model and each purpose.

The sole provision in the APRA that would generally permit personal data to be used in technological development is a specific permitted purpose that allows covered entities to "develop or enhance a product or service." This subsection, however, applies only to de-identified data. Filtering out all personal data from AI training data sets presents an impossible challenge at scale. Models are not capable of distinguishing whether, for example, a word is a name, or what data may be linked to it. Filters attempting to weed out all personal data from a training data set would inevitably also remove large swaths of non-personal data – a phenomenon known as "false positives." High false positive rates are especially detrimental to training data sets because they mean that large amounts of valuable data that are not personal data get removed, leading to unpredictable and potentially biased results (a toy sketch below illustrates the problem). Even if this were feasible, filtering all personal data out of training data would itself lower the quality of the data set, further biasing outputs.

Furthermore, many AI models include anti-bias output safeguards that would also be diminished in the absence of the data they use to control for bias. Thus, a lack of relevant training data can bias outputs, yet so too can an inherently biased model whose output safeguards are rendered ineffective because they lack the personal information necessary to accomplish their task. Unfortunately, both of these harms are almost certain to materialize under a regime that wholly excludes personal information from training data.

Where to Go From Here

As the APRA falters and Congress looks forward to a likely redraft of federal privacy legislation, it is critical to avoid mothballing domestic AI development with a poorly scoped overhaul of U.S. privacy norms. For several years preceding the APRA's introduction, privacy advocates have advanced a narrative that the U.S. experiment with "notice and choice," or notifying consumers and presenting an opportunity to opt out of data collection, has failed to protect consumer data.
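To make the false-positive problem described above concrete, here is a toy sketch (in Python, with invented example sentences; no real filtering pipeline works this crudely, but the failure mode is the same): a filter that drops any sentence containing a name-like capitalized token also throws away perfectly good non-personal data.

```python
import re

# Crude stand-in for a PII filter: treat any mid-sentence capitalized
# word as a potential personal name and drop the whole sentence.
NAME_LIKE = re.compile(r"[A-Z][a-z]+")

def naive_pii_filter(sentences):
    kept, dropped = [], []
    for s in sentences:
        tokens = [t.strip(".,") for t in s.split()[1:]]  # skip the sentence-initial capital
        if any(NAME_LIKE.fullmatch(t) for t in tokens):
            dropped.append(s)  # flagged as "personal data"
        else:
            kept.append(s)
    return kept, dropped

corpus = [
    "Alice Smith lives at 12 Oak Lane.",                # true positive
    "The Paris Agreement entered into force in 2016.",  # false positive
    "Mount Everest is the tallest mountain on Earth.",  # false positive
    "Transformers rely on attention mechanisms.",       # kept
]

kept, dropped = naive_pii_filter(corpus)
print(kept)     # only one of the three harmless sentences survives
print(dropped)  # two of the three dropped sentences were never personal data
```

Smarter named-entity recognition narrows the gap but never closes it; at web scale, even a small false positive rate removes enormous amounts of legitimate training data.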
Improving the notice-and-choice framework in a way that gives consumers greater control over their data is possible, and even desirable, via federal legislation. Yet a framework built around permitting only predetermined uses of data would have unintended, unforeseen and potentially disastrous consequences, both for domestic technological development and for U.S. competitiveness on the world stage.

[1] The GDPR does not generally permit data collected for one permitted purpose to be used for others. Although the law includes a series of criteria for when such reuse is allowed, these criteria are vague. They include 1) a link between the new and original purpose, 2) the context of collection, in particular regarding the relationship between data subjects and the controller, 3) the nature and sensitivity of the personal data, 4) the possible consequences of the new processing for data subjects, and 5) appropriate technical safeguards. The GDPR also specifically articulates that these criteria may not capture all relevant contextual considerations, rendering compliance uncertain in the majority of cases.

Paul Lekas is Senior Vice President and Head of Global Public Policy and Government Affairs at the Software & Information Industry Association (SIIA). Anton van Seventer is Counsel for Privacy and Data Policy at SIIA.

[Category: ai, apra, data minimization, gdpr, generative ai, permission, privacy]

10/29/24 1:04pm
Last week, Bluesky, where I am on the board (so feel free to consider this as biased as can be), announced that it had raised a $15 million seed round and, with it, some plans for building out subscriptions and helping to make the site sustainable (some of which may be very cool — stay tuned). A few days prior to that, Bluesky hit 13 million users, and it continues to grow. It's still relatively small, but it has now done way more with a smaller team and less money than Twitter did at a similar point in its evolution.

I'm excited about where things are trending with Bluesky for a few reasons, but I wanted to actually talk about something else. Just before I joined the board, I had met up with a group of supporters of "decentralized social media" who leaned more towards ActivityPub/Mastodon/Threads than Bluesky. Even though I wasn't officially representing Bluesky, they knew I was a fan and asked me how I viewed the overall decentralized social media landscape. Similar questions have come up a few times in the last few months, and I thought it made sense to write up my thoughts on the wider decentralized social media ecosystem, just as we've hit the two-year anniversary of Elon Musk taking over Twitter. Since then, he's wiped out billions of dollars in value and revenue, and turned what had been a pretty neutral open speech platform, one that fought globally for free speech, into a one-sided, bot-filled partisan platform that only fights for free speech when it disagrees with the government, but is happy to cave if the authoritarians in charge are friendly with Musk.

But the one key thing is that the decentralized social media landscape has been invigorated and supercharged, almost entirely because of Elon Musk. Thank you, Elon.

I previously told the story of my attendance at a conference in New York in October of 2022, where there was a very interesting presentation predicting the adoption of decentralized alternatives to centralized social media, built around a chart of "events that trigger disillusion." As I noted, that chart, and the "events that trigger disillusion" in particular, struck me as a bit too underpants gnomey: what those "events that trigger disillusion" actually are becomes pretty damn important. So I had asked a question to that effect at the event. For years since my Protocols, Not Platforms paper came out, I had struggled with what would actually lead to real change. I didn't find the presenter's answer all that satisfying, but little did I know that literally while that presentation was happening, Elon Musk was officially saying that he would drop his attempt to get out of buying Twitter and would move forward with the acquisition.

At that point, Bluesky was still just a concept of a protocol. It was far from any sort of app (it wasn't even clear it was going to be an app). But in the events that followed over the next few weeks and months, as Elon's approach to dismantling basically everything that he claimed he supported with ExTwitter became clear, Bluesky realized it needed to build its own app.

Indeed, it's astounding how much Elon has become the one-man "events that trigger disillusion" from that chart. With it, he has become a singular force driving adoption of alternative platforms.
I'm not betraying any internal secrets in noting that people within Bluesky have referred to some of the big influxes of new users on the platform as "EMEs: Elon Musk Events." Whenever he chooses to do something reckless — ban popular users, launch a poorly planned fight with a Brazilian judge, take away the block feature — it seems to drive floods of traffic to Bluesky. But also to other new alternative platforms. Thank you, Elon, for continuing to supply "events that trigger disillusion."

But waiting for Elon to fuck up again and again is not a long-term strategy, even if it keeps happening. It is introducing more and more people to the alternatives, and many people are liking what they've found. For example, well-known engineer Kelsey Hightower recently left ExTwitter and explained how ATProtocol (which underlies Bluesky and enables much of what's great about it from a technical standpoint) is one of the most exciting things he's seen in years:

The more I dig into Bluesky, and more importantly the AT Protocol, the more I get that feeling I had when I first got involved with the Kubernetes project.

— Kelsey Hightower (@kelseyhightower.com) 2024-10-20T19:01:10.972Z

But the reality is that no one quite knows what is going to really "click" to make decentralized social media more appealing, long term and for more people, than centralized social media. Many of us have theories, but what makes something really click and go from a niche (or dying!) thing to essential is only possible to understand in retrospect, not prospectively. Just as I spent a few years trying to work out what kinds of things might be "events that trigger disillusion," I think we're still in the discovery stage of "events that trigger lasting value."

People leaving the old place because they're disillusioned is a starting point. It's an opportunity to show them there are alternatives. But to make it last, we need to create things that people find real value in that weren't available at the old place. The key to every "killer app" on a new system, even ones that start out mimicking the old paradigm, is enabling something that couldn't be done on the old system. That's when things get really fun. Early TV was just radio with video until people figured out how to embrace the medium. Smartphones were initially just tiny computers, until services that embraced native features like location were better understood.

We need that for decentralized social media. But right now, we don't really know what that trigger is going to be. I think some of Bluesky's features — things like domains as handles, using standardized decentralized IDs, composable and stackable moderation, and algorithmic choice — are part of what will get us there, but I don't know for sure what the big breakthroughs will be. And neither does anyone else. (For a feel of how domains as handles work, see the short sketch at the end of this post.)

As such, we need more experiments and experimenting, and not all of it should be done directly within the ATProtocol system (the ATmosphere). Because even while I think ATProtocol is extremely clever in what it enables, the choices made in its approach might limit some things enabled by other approaches. So I don't so much see other decentralized social media systems like ActivityPub (Mastodon, Threads, etc.), nostr, Farcaster, Lens, and DSNP as competitors. Rather, I see them as all presenting unique experiments to see where the real value can show up. I think there's a ton to learn from all of them.
For example, I think Mastodon's focus on local community and the power of defederation is a fascinating experiment. We're also seeing some interesting new systems built on ActivityPub that challenge the way we think about decentralized apps. I think nostr's simplicity, which makes it ridiculously easy for anyone to build clients and relays, is important. Farcaster has a number of really cool ideas, including things like Frames that allow you to create apps within social feeds.

In other words, there is a lot of experimentation going on right now, and all of it helps the wider ecosystem of decentralized social media, because we can all learn from each other. We already see that Mastodon has been making changes in response to the things that people like about Bluesky. I'm sure that everyone working on all of these systems is looking at what the others are doing and learning from it.

The simple reality is that right now, no one really knows what will "click." We don't know what the real "killer app" is that convinces more people to switch over from centralized systems to decentralized ones. "Events that trigger disillusion" are great for getting people to look. But getting people to stay and eagerly participate requires adding real value. I'm happy to see all this experimentation going on to figure out what that is. Just "being decentralized" is not a value that attracts most users. It has to be what that decentralization enables, preferably the kinds of things that a centralized system can't actually match, that will create the next breakthrough.

Since no one can predict exactly what that breakthrough is, the best way to find out what will really make it work is having the wider decentralized ecosystem all experimenting. This isn't even a "rising tide lifts all boats" kinda thing. It's more of a "we need lots of folks digging holes to see where the oil is" kinda thing. Letting each of these systems test things out with their own unique approach is the best way to discover what will actually excite and attract users positively, rather than just in response to yet another Elon Musk Event.

I'm enthusiastic about Bluesky's approach. I think the ATProtocol gives us the best chance of reaching that breakthrough. But I'm happy to see others trying different ideas as well, because all of these experiments will help bring us to a world where more people embrace decentralized systems (whether they know it or not) and move away from old walled gardens. Not because of "events that trigger disillusion," but because what's happening over here is just that much more useful and powerful.
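As promised above, a quick illustration of the "domains as handles" idea. On ATProtocol, a handle is just a domain name that maps to a stable decentralized identifier (DID), which is what makes identity portable across hosts. A minimal sketch, assuming the public Bluesky XRPC endpoint and response shape as they exist at the time of writing:

```python
import requests

def resolve_handle(handle: str) -> str:
    """Resolve an ATProtocol handle (a domain name) to its DID."""
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle",
        params={"handle": handle},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["did"]  # e.g. "did:plc:..."

# The DID stays the same even if the account moves hosts or changes its
# domain, so the identity isn't locked inside any one company's silo.
print(resolve_handle("bsky.app"))
```

That split (human-readable name on top, stable identifier underneath) is exactly the kind of thing a centralized platform can't match, since your @name there belongs to the platform, not to you.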

[Category: bluesky, farcaster, mastodon, twitter, x, atprotocol, competition, decentralization, decentralized social media, elon musk, social media]

10/29/24 11:51am
Donald Trump and the politicians who either think like him, or think that saying things like he says might make him like them, continue to pretend major US cities are besieged by violent criminals. While there have been a few spikes in certain cities, for the most part, crime rates are returning to their normal, historic lows following aberrations generated by the once-in-a-lifetime worldwide pandemic.

These politicians don't have the facts on their side. Fortunately for them, many of their supporters are far less interested in facts than they are in feelings, especially the feeling that anyone described as progressive is anti-American and that pretty much all crime in the US can be traced back to (1) minorities or (2) undocumented immigrants. These are the people who claim facts don't care about your feelings, but when it comes to facts they don't care for, they're more than willing to let their feelings take control of the conversation.

They're wrong in every case. And this report, compiled by the Brennan Center, isn't going to change their minds. But for those of us who still care about facts, here's another set of data points that makes it clear progressive prosecution policies neither hamstring police nor embolden criminals.

Previous research on this subject has, with some exceptions, found little to no relationship between the inauguration of a pro-reform prosecutor and a measurable increase in crime, even after using sophisticated statistical strategies. Our analysis, described below, also finds no clear relationship between the pro-reform prosecutorial approach and the incidence of crime.

Using data collected by the Council on Criminal Justice, we compared aggravated assault, larceny, and homicide trends in cities with pro-reform prosecutors to trends in cities without pro-reform prosecutors. Assault and larceny were selected because of their frequency, allowing clearer analysis, and because they are more likely to be affected by prosecutorial decision-making. Murder was chosen because of its seriousness and because those crimes spiked sharply during the first two years of the Covid-19 pandemic.

In terms of homicide rates, the data shows that if there's any difference between progressive and regular prosecutors, it's that cities with progressive prosecutors are seeing fewer homicides.

The Brennan Center freely admits it's working with limited data, not because there isn't a wealth of crime data available, but because the number of cities with progressive prosecutors (itself a term open to some interpretation) is extremely low compared to the number of cities overseen by prosecutors no one has ever labeled progressive, if they've ever bothered to label them at all. Even when the contrast isn't nearly as stark as it is in the homicide numbers, the data shows that, at worst, progressive prosecutors and policies aren't resulting in abnormally high crime rates in comparison to other, less progressive cities.

So, are these reform-minded prosecutors better or worse for large cities? The answer likely depends on far more than aggregate crime data. But what data is available shows some progressive prosecutors are presiding over some pretty impressive crime rate decreases, despite the vociferous protestations of those who believe any mild criminal justice reform must be to blame for whatever recent criminal act they saw covered on the evening news, or its nearest social media equivalent, Facebook.
Notably, the graphs show that crime trends in pro-reform prosecutor jurisdictions largely match those in their comparison groups. Where they do not match, they indicate lower crime rates in cities with pro-reform prosecutors. That is not what we would expect to see if, as some critics claim, jurisdictions with pro-reform prosecutors experience rising or higher crime.

In Los Angeles, a pandemic-era rise and plateau in aggravated assault rates is mirrored by trends in the comparison group of cities — both before and after the inauguration of pro-reform prosecutor George Gascón. In Austin, the decline in larceny rates overseen by José Garza, another district attorney elected on a reform platform, outpaces that of the comparison group. And assault trends in Boston are particularly notable. Aggravated assault rates began to decline sharply in the first year of then–District Attorney Rachael Rollins's administration and have remained significantly below 2018 rates since then, even after Rollins's departure in 2022. No other city or combination of cities in our sample could match Boston's steep drop in aggravated assaults.

Correlation is not causation and all of that, but what the Brennan Center points out is that progressive prosecution policies, like not spending time arresting and prosecuting people for low-level offenses (minor drug possession, unpaid tolls/parking tickets, and non-violent misdemeanors), have freed up prosecutors, police officers, and investigators to spend more time dealing with more serious and violent criminal acts. And that alone might explain why violent crime rates continue to drop in areas where prosecutorial discretion, diversion programs, and a refusal to turn local jails into debtors' prisons have prioritized tackling the sort of crime most people think law enforcement should be focusing its resources on.

Like I said, this data won't stop Trump and others from pretending any liberal city is a crime-ridden wasteland. But it still matters for the rest of us, including those who live in these cities, who might be experiencing new historic lows in violent crime, even if they don't actually agree with the prosecutorial policies currently in place.

[Category: brennan center, crime data, crime rates, criminal justice reform, progressive prosecutors, prosecution]

10/29/24 11:46am
The Complete Cisco Training Bundle has 6 courses to help you get ready to become certified. Courses cover all you need to know as a CCNA, CCEA, and more. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

[Category: daily deal]

10/29/24 10:27am
Hey Jeff,

Since I know you'll never actually read this, I figured the best way to do this was as an open letter. One that you should read, but never will.

It appears that your stupendously cowardly decision to block the Washington Post from publishing its planned endorsement of Kamala Harris just days before the election is not working out quite the way you hoped. While it's pretty common for people to claim they're canceling a subscription whenever a newspaper does something bad, this time it appears they are actually doing so. In droves. Oops!

Reports note that over 200,000 subscriptions have been cancelled, or around 8% of your subscribers, since the news came out on Friday (delivered via the publisher rather than you directly). And it sounds like more cancellations are coming in:

More than 200,000 people had canceled their digital subscriptions by midday Monday, according to two people at the paper with knowledge of internal matters. Not all cancellations take effect immediately. Still, the figure represents about 8% of the paper's paid circulation of roughly 2.5 million subscribers, which includes print as well. The number of cancellations continued to grow Monday afternoon.

Last night, I saw you took to the pages of the newspaper whose credibility you just destroyed to give a sanitized explanation for this decision. All I can say is, Jeff, fire whichever lackey wrote this. They're terrible.

Let's be clear: there are plenty of good reasons not to do endorsements. At Techdirt, we don't do endorsements. There's no requirement to do endorsements. And, honestly, in many cases, endorsements for things like President are kinda silly. I get that part.

But this isn't actually about the decision not to publish an endorsement. The real issue is you stepping in as owner to block the endorsement at the perfect time to show that you capitulated in advance to an authoritarian bully who has attacked your business interests in the past and has indicated he has a plan to exact revenge on all who wronged him. The principled response to such threats is to continue doing good journalism and not back down. The cowardly shit is to suddenly come up with an excuse for not publishing an endorsement that had already been planned.

Your explanation gets everything backwards.

In the annual public surveys about trust and reputation, journalists and the media have regularly fallen near the very bottom, often just above Congress. But in this year's Gallup poll, we have managed to fall below Congress. Our profession is now the least trusted of all. Something we are doing is clearly not working.

It's true. The mainstream media is not trusted. You want to know why? Because time and time again the media shows that it is unfit to cover the world we live in. It pulls punches. It equivocates. It sanewashes authoritarian madness. All of that burns trust. As does a billionaire owner stepping in to block an already written opinion piece. That is why people are canceling. You just destroyed their trust.

This is particularly stupid at this moment because trust is at an all-time low, as you note. But the ones who already trust the Washington Post to tell them what's up in this moment of uncertainty are the subscribers to your newspaper. And they're now leaving in droves. Because you destroyed their trust. It's one thing to have never won people's trust in the first place. It's another thing entirely to destroy the trust people already had in the Washington Post.
One reason why credibility is so low is because it's believed that the wealthy elite billionaires "control" the news and push their personal beliefs.

Jeff, you know what helps reinforce that belief? You, the billionaire elite owner of the Washington Post, stepping in to overrule your editorial team on a political endorsement in a manner that suggests you wish to put your thumb on the scale in order to maintain more control.

Then your piece gets worse.

Let me give an analogy. Voting machines must meet two requirements. They must count the vote accurately, and people must believe they count the vote accurately. The second requirement is distinct from and just as important as the first. Likewise with newspapers. We must be accurate, and we must be believed to be accurate. It's a bitter pill to swallow, but we are failing on the second requirement. Most people believe the media is biased. Anyone who doesn't see this is paying scant attention to reality, and those who fight reality lose. Reality is an undefeated champion. It would be easy to blame others for our long and continuing fall in credibility (and, therefore, decline in impact), but a victim mentality will not help. Complaining is not a strategy. We must work harder to control what we can control to increase our credibility.

This is exactly correct in isolation. Of course newspapers must increase their credibility. You know how a newspaper does that? By not having its billionaire owner step in and tell its editorial team not to publish an endorsement days before an election, in a manner that makes it look like you're willing to interfere in their editorial choices to curry favor with politicians.

You literally did the exact opposite of what you claim you're trying to do. And for what? Do you think that MAGA folks are suddenly going to come rushing to subscribe to the Washington Post now? Do you think this built up your credibility with a crew of folks who have made it clear they only wish to surround themselves with propaganda and bullshit? Is that who you want credibility with? If so, hire a propagandist and fire your journalists. Those people are never going to "trust" you, because they are looking for confirmation bias. And if the truth goes against what they want, they'll refuse to trust you.

Do you think this will make Donald Trump leave you alone? Have you never read a single history book that Amazon sells? Trump will see your capitulation as a sign of weakness. He will see it as a sign that he can squeeze you for more and more, and that you'll give. Because rather than stand up for truth, you caved. Like a coward.

Presidential endorsements do nothing to tip the scales of an election. No undecided voters in Pennsylvania are going to say, "I'm going with Newspaper A's endorsement." None.

Even if this is true, you should have made this decision clear a year or two ago and given your reasons then, instead of stepping in a week before the election, destroying all credibility, interfering with the editorial independence of your newspaper, and looking like a simp for Trump. And, even worse, announcing it without an explanation until this hastily penned joke of an attempt at justification. If you want to build up more credibility and trust in news, that's great. But you did the opposite.

Lack of credibility isn't unique to The Post. Our brethren newspapers have the same issue. And it's a problem not only for media, but also for the nation.
Many people are turning to off-the-cuff podcasts, inaccurate social media posts and other unverified news sources, which can quickly spread misinformation and deepen divisions.

And you think the best way to correct that is for a billionaire owner to step in and overrule the editorial team?

While I do not and will not push my personal interest, I will also not allow this paper to stay on autopilot and fade into irrelevance — overtaken by unresearched podcasts and social media barbs — not without a fight. It's too important. The stakes are too high. Now more than ever the world needs a credible, trusted, independent voice, and where better for that voice to originate than the capital city of the most important country in the world?

And you will do that by pushing your personal interest and blocking the editorial team, allowing them to be overtaken in credibility by podcasts and social media barbs? Also, "not without a fight"? Dude, you just forfeited the fucking fight. The stakes are high, and you just told your newspaper, "Sit this one out, folks." You took yourself out of the fight.

Yes, the world needs a credible, trusted, independent voice. You just proved that the Washington Post cannot be that voice, because it has a billionaire owner willing to step in, destroy that credibility and trust, and make it clear to the world that its editorial team has no independence.

The Washington Post has some amazing journalists, and you just undermined them. For what? Absolutely nothing.

[Category: washington post, credibility, endorsements, jeff bezos, journalism, trust]

10/29/24 6:22am
Earlier this month the FTC announced it was modifying some existing rules to crack down on companies that make it extremely difficult to cancel services. The agency's revamp of its 1973 "Negative Option Rule" requires that companies be completely transparent about the limitations of deals and promotions, requires that consumers actively consent to having read terms and deal restrictions, and generally makes cancelling a service as easy as signing up.

So of course, cable and media giants like Comcast and Charter, who've built an entire industry on being overtly hostile to consumers, are suing. Under the banner of the NCTA (The Internet & Television Association), Comcast and Charter filed a lawsuit late last week in the Republican-heavy 5th Circuit, claiming the new rules are arbitrary, onerous, capricious, and an abuse of the agency's existing authority. The Interactive Advertising Bureau (IAB) (with members ranging from Disney to Google) also joined the lawsuit.

Corporate members of most of these organizations have a long, proud history of misleading promotions and making it difficult to cancel services. The Wall Street Journal, for example, historically made it annoyingly difficult to cancel digital subscriptions. And telecoms, of course, have historically made misleading their customers via fine print a high art form.

Kind of like hidden and misleading fees, we've cultivated a U.S. business environment where being a misleading asshole to the consumer is simply viewed as a sort of business creativity, not the fraud it actually is. That (plus corruption) has historically resulted in a regulatory environment where this stuff is only fecklessly and inconsistently enforced. Usually with piddly fines and wrist slaps.

Telecoms have always been at the forefront of insisting that any effort to change this paradigm is regulatory over-reach. Thanks to recent Supreme Court rulings like Loper Bright (specifically designed to turn U.S. regulators into the policy equivalent of decorative gourds), they have more legal leverage than ever to crush corporate oversight with the help of a very broken and corrupt MAGA-heavy court system.

The flimsy logic pushed by the extraction class to justify the dismantling of Chevron deference was that feckless U.S. regulators (who again, in reality, can rarely take action against the worst offenders on a good day) had somehow gotten too bold, and that recent Supreme Court decisions had rebalanced things so that out-of-control regulators can't act without the specific approval of Congress.

But corporations didn't lobby the unelected Supreme Court because they were honestly concerned about the balance of policy power among "unelected bureaucrats." They did it because they know they've already lobbied Congress into absolute, corrupt dysfunction on nearly all meaningful reform and corporate oversight (guns, health, whatever). Now they're taking aim at the already shaky authority of federal U.S. regulators. Once they're done there, they'll take aim at state consumer protection power. Companies like Comcast envision a world in which there's really no functional state or federal corporate oversight whatsoever.

It really doesn't matter what the subject is (net neutrality, transparency label requirements, privacy, efforts to stop racial discrimination in broadband deployment, annoying cancellation practices). They sue, claiming regulatory overreach.
And thanks to the corrupt Supreme Court and decades of demonization of the regulatory state as uniquely and purely harmful, corporations have a better chance of winning than ever. And of course this isn't just happening in telecom and media. It's happening across every industry that touches every aspect of U.S. life, often in potentially deadly or hazardous ways (see this ProPublica report).

Having federal regulators that can't do anything without it being dismantled by the whims of an errant, logic-optional 5th Circuit Republican judge will cause endless legal chaos and grind most meaningful reform to a halt, just the way industry designed it. It's the culmination of a fifty-year strategy by large corporations, it won't be in any way subtle, and annoying cancellation obstacles will likely be the least of our worries as the chaos mounts in the years to come. Court reform (Supreme Court term limits and court expansion chief among them) is utterly essential, unless we really do want a world in which corporate power is the only power that matters.

[Category: charter, comcast, esa, iab, ncta, 5th circuit, cancel, chevron, click to cancel, consumers, fine print, ftc, lina khan, loper bright, telecom]

10/28/24 8:50pm
There must be something about the alcohol business that creates silly trademark disputes over geographic terms. We've seen this several times in the past, such as in the whole Ravinia Festival dispute, or the time two breweries fought over a trademark for the neighborhood one of them operated out of. While these don't always turn out the way I think they should, the fact is that trademarks for geographic locations are supposed to be granted very, very narrowly. But it gets even weirder when we're talking about homonyms.

In this case, it concerns a trademark dispute between the Douro & Port Wine Institute, an industry association for makers of those types of wine, and a Scottish distillery.

The European Intellectual Property Office has ruled that an Edinburgh distillery may partially continue to use the phrase 'Port of Leith' on its bottles after an attempt was made by an association of Port winemakers to ban it. The ruling states that the distillery is permitted to use the term 'Port' on its registered spirit products, but not on any future Port or Sherry products.

It was claimed by the Douro and Port Wine Institute that consumers would be "confused" by the distiller's use of the word 'Port' and that the whisky makers would unfairly benefit from the positive historical association with the fortified wine.

You've probably already guessed where this is going. The port in Port of Leith has nothing to do with wine. Or alcohol, for that matter. It's a literal waterway port north of Edinburgh. The reason for the partial ruling is that the distillery actually does make some port wines along with sherry and whisky. Those wines may need to be discontinued or renamed, though I would argue against that. But the distillery is absolutely free to continue using its name and branding on packaging and in marketing for non-port wine products.

That the DPWI ever thought otherwise is absurd. Telling a business it can't operate using the geographic name of its own home was never the purpose of trademark law. That's why demonstrable public confusion, or the very clear potential for such confusion, is so paramount in trademark law. The point isn't to allow a business to lock up common language, but to keep the public from being fooled or confused as to the origin or association of a particular service or good.

Thankfully, unlike what we saw in the Ravinia case, the EUIPO got this one right.

[Category: douro & port wine institute, euipo, port, port of leith, trademark]

10/28/24 4:29pm
Considering how to increase competition in the search space without damaging end users is a trickier question than it seems at first. Many of the suggestions people have tossed out have tended to focus on ideas that are purely punitive to Google, but which would also have negative impacts on users (and even some competitors). As we reach the stage of the antitrust battle where remedies are actually being considered, it's crucial that we focus on solutions that will truly promote competition and benefit users, not just score political points against Google.

Earlier this year, I was left troubled by the end result of the ruling against Google in the first (of a few) antitrust cases against it. I think the (currently ongoing) case about the company's practices regarding advertising is a lot stronger. The case that was ruled on this summer, though, was about Google's massive payments to Apple and Mozilla to make Google search the default on Apple devices/Safari and on Firefox. At the time, we pointed out that it was difficult to think of any remedies that would actually help solve the situation. Both Apple and Mozilla more or less admitted during the trial that users effectively demanded Google search be the default, and any attempt to use other search engines resulted in angry users. If the court demanded Google stop paying the billions of dollars to Apple or the hundreds of millions of dollars to Mozilla, it wouldn't hurt Google. Indeed, it would seem to help the company: since both Apple and Mozilla admitted that users were demanding Google as the default, little would change other than Google getting to keep even more money, and Apple and (especially) Mozilla losing a ton of revenue. That didn't seem very helpful at all.

In the intervening months, I've had a few conversations with folks about possible remedies that make sense. The most reasonable suggestion seemed to be DuckDuckGo's main one: allow other search engines to build off of Google's search corpus by enabling API access on Fair, Reasonable and Non-Discriminatory (FRAND) terms.

The best and fastest way to level this playing field is for Google to provide access to its search results via real-time APIs (Application Programming Interfaces) on fair, reasonable, and non-discriminatory (FRAND) terms. That means for any query that could go in a search engine, a competitor would have access to the same search results: everything that Google would serve on their own search results page in response to that query. If Google is forced to license its search results in this manner, this would allow existing search engines and potential market entrants to build on top of Google's various modules and indexes and offer consumers more competitive and innovative alternatives.

Today, we believe that we already offer a compelling search alternative with more privacy and fewer ads, relative to Google. We've also been working for fifteen years to make our search results on par in terms of feature set and quality by combining our own search indexes with those of partners like Apple, Microsoft, TripAdvisor, Wikipedia, and Yelp. However, we know that many consumers still prefer Google's results due to the benefits of scale discussed above, and this intervention would erase that advantage, instantly making us and others much more competitive.

This remedy would certainly allow for more competition to arise, which has proven difficult today.
No one (not even Microsoft's Bing) really has the reach and comprehensiveness of Google's index. DuckDuckGo is mostly built on Bing (I know it insists it's more than that, but in practice, it appears to be mostly Bing — as we discovered when Bing banned Techdirt, and we also disappeared from DDG). Every attempt to build a competing search engine seems to run into the scale problem eventually without access to Google results. Even Kagi, which was briefly a darling among folks looking for a search alternative, apparently makes use of Google's search tech on the backend. It seems like a pretty reasonable idea to let others license access to the API and build Google results into alternative search products, as this gets at the actual issues underlying this case.

A few weeks ago, the Justice Department filed its preliminary thoughts on remedies, and there is a wide mix of ideas in there, some crazier than others. A lot of the headlines that filing generated were around big "break up" ideas: spinning off Chrome or Android. These seem preposterous and unlikely. Under antitrust law, while breakups ("structural remedies") are certainly one tool in the toolbox, they are supposed to be related to the violation at hand. Given that the antitrust problem in this case was about the search payments, and not anything specific to Chrome or Android, it's difficult to see how such remedies would even be allowed under the law, let alone make sense. Indeed, without Chrome and Android being attached to Google, those products would likely suffer, as both are subsidized by Google, and that would do a lot to harm users. That doesn't seem like a good result either.

So the proposals from the DOJ that match DDG's suggestion of API access are much more interesting (and probably better) overall.

Plaintiffs are considering remedies that will offset this advantage and strengthen competition by requiring, among other things, Google to make available, in whole or through an API, (1) the indexes, data, feeds, and models used for Google search, including those used in AI-assisted search features, and (2) Google search results, features, and ads, including the underlying ranking signals, especially on mobile

Again, this actually targets the issue. It creates a scenario for increased competition without a corresponding harm to users or to other competitors. Many of the other sections do not.

Also, arguably, the DOJ could have gone even further, conferring on users more ability to designate access to their own information and data as a way to escape the Google silo. This is a bigger issue and one that doesn't get as much attention, but the ability of large companies to lock in users has diminished the ability of competitors to grow and challenge the network effects of existing businesses. For some users of Google, the fact that it tracks your history is not seen as creepy or privacy-invading, but rather as a benefit (and yes, this is not true for everyone!). But if users could retain control over their own search histories and preferences, and could allow third-party search engines to access them with permission, it would also help users get out of an existing silo. Just as one example, Google knows a fair bit about what I normally search on and click on. But if I could make use of that history and give DuckDuckGo or Kagi or someone else access to it for the sake of improving their own search results to my queries, that would be potentially useful for competition.
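To make that idea concrete, here is a purely hypothetical sketch of what user-controlled history sharing could look like. Nothing here corresponds to any real Google or DuckDuckGo API; every name is invented, and the point is only the shape of the design: the user holds the grant, scopes what gets shared, and can revoke it.

```python
import json
from dataclasses import dataclass, field

@dataclass
class SearchHistoryGrant:
    """Hypothetical user-scoped, revocable grant of search-history access."""
    user_id: str
    grantee: str                      # e.g. an alternative search engine
    scopes: list = field(default_factory=lambda: ["query", "clicked_url"])
    revoked: bool = False

    def export_for_grantee(self, history: list) -> str:
        if self.revoked:
            raise PermissionError("grant was revoked by the user")
        # Only the fields the user consented to are shared; anything
        # else (IP addresses, timestamps, etc.) is stripped out.
        redacted = [
            {k: v for k, v in entry.items() if k in self.scopes}
            for entry in history
        ]
        return json.dumps({"grantee": self.grantee, "entries": redacted})

history = [
    {"query": "best hiking boots", "clicked_url": "rei.com",
     "ip": "203.0.113.7", "timestamp": "2024-10-28T14:02:11Z"},
]
grant = SearchHistoryGrant(user_id="u123", grantee="example-search-engine")
print(grant.export_for_grantee(history))   # no IP, no timestamp
grant.revoked = True                       # the user changes their mind
# grant.export_for_grantee(history)        # would now raise PermissionError
```

The design choice that matters is that the grant lives with the user, not with either company, which is what would distinguish this from today's take-it-or-leave-it data silos.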
And all it's really doing is saying that the user who generated that history and metadata should have some control over it as well, including the ability to separate it from the underlying Google product. Yes, this would have to be done carefully, to avoid (say) exposing more sensitive data regarding searches to these other companies. But if it were done in a way that was transparent, and which the end user had control over, it could be really valuable.

Not surprisingly, Google is very, very upset about all these potential remedies. It suggests that if it were forced to share such things with others, it would lead to privacy and security risks:

Forcing Google to share your search queries, clicks, and results with competitors risks your privacy and security. It's widely recognized, including explicitly by the DOJ in its outline, that forcing the sharing of your searches with other companies could create major privacy and security risks. The search queries you share with Google are often sensitive and personal and are protected by Google's strict security standards; in the hands of a different company without strong security practices, bad actors could access them to identify you and your search history — as we've seen before. Additionally, while sharing Google's search results with others might create a few copycats, it could also decrease incentives for other companies to actually innovate in search.

This very much depends on what information is shared, with whom, and how. I still think that simply giving the user more control over it, rather than just letting companies fight over access, solves some of Google's stated concerns.

On the whole, the larger structural remedies (spinning off lines of business) don't seem to target the underlying issue, seem mainly punitive, and won't do much to help competition or users. But the idea of opening up access to search systems and data, especially if it gives more control to the end user, actually seems like a really good way of increasing competition and improving the situation for users. Google's statements about security and privacy are still worth considering, but there are ways to deal with those issues, mainly by providing more power to the end user, rather than just opening up that info directly to other search engines.

[Category: duckduckgo, google, android, antitrust, api, breakups, chrome, competition, data, doj, frand, remedies, search, search history, silos, structural remedies]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/28/24 2:02pm
As Mike and others have pointed out, the Los Angeles Times and Washington Post have utterly failed the public. While it is of course their right to endorse, or not endorse, anyone they choose, the refusal to provide any such endorsement in an election with such high stakes abandons the important role the press plays in helping ensure that the electorate is as informed as it needs to be to make its self-governance choices. They join outlets like the New York Times, CNN, the Wall Street Journal, and others who have also pulled their punches in headlines and articles about the racist threats being made in the course of the presidential campaign, or inaccurately painted a false equivalence between the candidates in their headlines and articles, and in doing so kept the public from understanding what is at stake. The First Amendment protects the press so that it can be free to perform that critical role of informing the public of what it needs to know. A press that instead chooses to be silent is of no more use than a press that can’t speak. The issue here is not that the LA Times and Washington Post could not muster opinions (in fact, one could argue that their silence is actually expressing one). The issue is more how they’ve mischaracterized endorsements as some sort of superfluous expression of preference and not a meaningful synthesis of the crucial reporting they have done. In other words, despite their protests, the endorsement is supposed to be reporting, a handy packaging of coverage for readers to conveniently review before voting. If it turns out that a publication can draw a conclusion no better than a low-information voter, when it, as press, should have the most information of all, then it can no longer be trusted as a useful source of that information. While both the LA Times and Washington Post have still produced some helpful political reporting, their editorial reluctance to embrace their own coverage makes one wonder what else they have held back that the public really needed to know about before heading to the ballot box. Especially when it seems the Times in particular also nixed the week-long series of Trump-focused articles it had been planning, which would have culminated in the editorial against him – the absence of that reporting, too, raises the strong suspicion that other relevant reporting has also been suppressed. This crucial educative role the press plays, informing the public discourse so necessary for democracy to function, is now going unserved by the publications that have abdicated that important job. Which is, of course, their choice: it is their choice whether and how to exercise the editorial discretion of what to cover and what to conclude. The press freedom the First Amendment protects includes the freedom to be absolutely awful in one’s reporting decisions. No law could constitutionally demand anything otherwise and still leave that essential press freedom intact. But if these incumbent outlets are not going to do it, then someone else will need to. The problem we are faced with is that not only are these publications refusing to play this critical democracy-defending role, but they are also actively trying to prevent anyone else from doing it. Because that’s the upshot of all the “link taxes” they and the organizations they support keep lobbying for. As we’ve discussed many times, link taxes destroy journalism by making that journalism much more difficult to find.
The link sharing people are now able to do freely on social media and elsewhere would now require permission, which would necessarily deter it. The idea behind link taxes is that they would raise revenue if people had to pay for the permission needed to link to articles. But all such a law would be sure to do is cut media outlets off from their audiences by deliberately cutting off a main way readers get linked to their work. While the goal of the policy, to support journalism, may be noble, the intention cannot redeem such a counterproductive policy when its inevitable effect will be the exact opposite. It is, in short, a dumb idea. But if link taxes are imposed, it will be a dumb idea everyone has to live with, no matter how much it hurts them. And it will hurt plenty. Because even if it manages to generate some money, the only outlets likely to ever see any of it would be the big incumbents – the same ones currently failing us. Smaller outlets, by being smaller, would be unlikely to benefit – compulsory licensing schemes such as this one rarely return much to the long tail of supposed beneficiaries. Yet for those smaller outlets keen to build audiences and then monetize that attention in ways most appropriate for it, these link tax schemes will be crippling obstacles, preventing their work from even getting seen and leaving them without either revenue or audience. That will make it impossible for them to survive and carry the reporting baton that the larger outlets have now dropped. Which therefore means that the public will still have to go without the reporting it needs, because the bigger outlets aren’t doing it and the smaller ones now can’t. Laws that impose regulatory schemes like these are of dubious constitutionality, especially in how they directly interfere with the operation of the press by suppressing these smaller outlets. But what is perhaps most alarming here is the utter hypocrisy of these incumbent outlets claiming link taxes are needed to “save” journalism while not actually doing the journalism that needs saving, yet demanding a regulatory scheme that would effectively silence anyone interested in doing better. If they wonder why journalism is struggling, the thing they need to do is look in the mirror. The way to save journalism is to actually practice journalism. No link tax is going to make the LA Times or Washington Post play the role they have chosen not to play anymore. But link taxes will make it so that no one else can play it either. And that’s no way to save journalism; that’s how you kill it for good. And with it the democracy that depends on it.

[Category: 1, la times, washington post, cjpa, endorsements, jcpa, journalism, link taxes, politics]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/28/24 12:12pm
Well, this is certainly one of the more entertaining decisions I’ve ever read, even though most of it deals with the more boring side of civil rights litigation, i.e., questions of standing and mootness. I mean, those can be interesting, but they’re far less interesting than seeing a court dig into cases where either the rights violations are egregious or, conversely, the lawsuit is being brought by unserious people who think anything anyone does to them, whether it’s a government agency or a private company, must be a violation of their rights. That being said, this is more the latter than the former. Cambridge Christian School is the plaintiff. And it fervently believes (I mean, faith is pretty much its whole deal) it has been wronged because the Florida High School Athletic Association (FHSAA) violated its rights by not allowing it to do an over-the-air broadcast of its preferred prayer prior to a championship football game it played [re-reads ruling] in 2015. Cambridge believes its First Amendment rights have been violated by the FHSAA’s refusal to allow it to broadcast its prayers over venue PA systems prior to championship games. It can only point to one such alleged violation, since the last time it played in a championship game was nearly a decade ago. The FHSAA says there’s no discrimination here, nor any denial of Cambridge’s free speech rights. It has had a PA policy in place for years that requires announcers to be as firmly neutral as humanly possible, restricting their speech to only facts about what’s happening on the field. From the Eleventh Circuit Appeals Court decision [PDF]:

The FHSAA creates scripts for all playoff football games, including state championship games, and expects PA announcers to follow those scripts. It also has a protocol that governs the use of PA systems at playoff games. According to that protocol, PA announcers must follow the PA scripts the FHSAA gives them for promotional announcements, player introductions, and awards ceremonies. The protocol limits all other announcements to: emergencies; lineups for the participating teams; messages provided by host school management (for the non-championship playoff games when there is a host school); announcements about the sale of FHSAA merchandise and concessions; and other “practical” announcements (e.g., there is a car with its lights on). As for game play, the PA protocol instructs PA announcers to recognize players attempting to make or making a play and to report penalties, substitutions, and timeouts. PA announcers may not call the “play-by-play” or provide “color commentary” (as if they were announcing for a radio or television broadcast), and they may not make comments that might advantage or criticize either team.

There’s nothing in this policy that even suggests it might be OK to let either competitor roll into the announcing booth and broadcast its prayer of choice. Giving it to one team could be perceived as unfair to the other team. Giving it to either might suggest the FHSAA has a preference in deities and supports whatever message is delivered by the team’s prayer. Giving it to both schools (if both are religious schools) just doubles down on that message. Therefore, the safest route is the one governed by these rules and delivered (in championship games) by a neutral announcer who works for neither school participating in the game. Nonetheless, the 2015 denial still weighs heavily on the school, which has chosen to neither forgive nor forget.
It also fervently believes its First Amendment concerns outweigh the First Amendment issues that would be raised if a government entity (like the FHSAA) decided to start blending some church in with its state action, even as limited as it would be in this sort of situation. But going beyond that, there’s a question of whether Cambridge even has standing to bring this lawsuit at all. After all, obtaining the injunction it’s seeking (one blocking the FHSAA from blocking its PA prayers) relies on demonstrating there’s some sort of foreseeable and ongoing injury from being denied its prayer requests. And that’s where this decision almost veers into snark. And it would be forgivable if it had. But this court is far more restrained than I could ever be. It simply points out the facts: this football program hasn’t done a damn thing for most of a decade. Any injury from prayer blockage at championship games isn’t foreseeable. It’s imaginary.

The school seeks “an injunction barring FHSAA from enforcing the Prayer Ban and prohibiting FHSAA from discriminating against religious speech over the loudspeaker.” It defines the “Prayer Ban” as the FHSAA’s 2015 “policy prohibiting schools participating in the football state championship game from using the stadium loudspeaker for pregame prayer.” In other words, the school has limited its request for equitable relief to pregame prayer over the PA system at FHSAA state championship football matches. As Cambridge Christian puts it: “[Cambridge Christian] annually competes to make it to the championship game and, if it reaches that game, it will be denied the ability to engage in its constitutionally protected religious practice and speech.” But only, we would add, if it wins all of its playoff games leading to the state championship game, the final one.

Kudos to the clerk that formatted this decision. Because that’s the last sentence of the 19th page, one that gives readers only the tiniest hint of what’s to come. You have to scroll to the next page to see the last sentence explained in the context of the school’s claims: two solid paragraphs of verge-of-snark writing that make it clear why Cambridge has no standing to sue. (All emphasis mine.)

Here’s the problem with Cambridge Christian’s position. Its football team has not returned to the FHSAA state championship since 2015. In fact, 2015 is the only year the team has ever made it to the state championship since the school started its football program in 2003. Only once in two decades. Cambridge Christian acknowledges that its standing theory relies on “speculation” that it “will make it to another championship game,” but the school contends that that speculation does not defeat standing because there’s no need to prove that future harm is certain. True, Cambridge Christian is not required to demonstrate “that it is literally certain that the harms [it] identif[ies] will come about.” But the school does need to demonstrate that future injury is “certainly impending,” or at the very least, that there is a “substantial risk” that the harm will occur. And given the Lancers’ past performance on the gridiron, it cannot meet that standard. All the more so because as Cambridge Christian admits, the “competitiveness” of its football team “has waned” over the last few seasons, and the team is now in what it calls a “rebuilding phase” that it expects to last for a “few years.” Hope springs eternal but standing cannot be built on hope.
With all due respect to the Cambridge Christian Fighting Lancers, there’s nothing to suggest that the team’s participation in a future football state championship is imminent or even likely.

Yikes. It’s one thing to witness the year-to-year failure to compete. It’s quite another to have that pointed out to you by appellate-level judges. Maybe next year? Or the year after that? This assessment of the future of the Fighting Lancers aside, there’s another problem with the lawsuit. The rules have changed since the alleged injury from nearly 10 years ago that the school has been suing about for the better part of a decade. (It filed this suit in December 2016.) A recently passed state law allows school reps to take over the PA for a few pre-game Hail Marys or whatever prior to high school sporting events.

In May 2023, the Florida legislature passed House Bill 225, which required the FHSAA to “adopt bylaws, policies, or procedures that provide each school participating in a high school championship contest or series of contests under the direction and supervision of the association the opportunity to make brief opening remarks, if requested by the school, using the public address system at the event.” The law became effective on July 1, 2023. In response, the FHSAA adopted a policy that allows schools participating in state championship events to make brief opening remarks over the PA system. According to the new policy, the remarks may not exceed two minutes per school and may not be derogatory, rude, or threatening. And “[b]efore the opening remarks, the announcement must be made that the content of any opening remarks by a participating school is not endorsed by and does not reflect the views and/or opinions of the FHSAA.”

And there’s the mootness. Even if Cambridge somehow finds a way to field a competitive team within the next decade, it can fire off a 2-minute prayer over the PA system prior to taking the field during championship games. Even granting its wildest speculation of instant competitiveness, the injury it claimed to have suffered in 2015 (when it was denied its request to broadcast a prayer over the PA) is even less likely to re-occur than the school’s sudden return to championship form. That’s (mercifully) the end of this lawsuit. I mean, I would hope. The court sends it back down to the lower court with instructions to vacate the ruling in favor of the FHSAA on the injunction request and replace it with a declaration that there’s no lawsuit to be had here. The other part of the prior ruling (the one dismissing the school’s First Amendment claims) is upheld. It’s actually two losses in one. But if there’s anything this school is familiar with at this point in its history, it’s a steady string of losses in one arena or another.

[Category: cambridge christian school, florida high school athletic association, 11th circuit appeals court, 1st amendment, church and state]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/28/24 12:07pm
StackSkills is the premier online learning platform for mastering today’s most in-demand skills. Now, with this exclusive limited-time offer, you’ll gain access to 1000+ StackSkills courses for life! Whether you’re looking to earn a promotion, make a career change, or pick up a side hustle to make some extra cash, StackSkills delivers engaging online courses featuring the skills that matter most today, both personally and professionally. It’s on sale for a limited time only for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

[Category: 1, daily deal]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/28/24 10:30am
The NY Times has real difficulty not misrepresenting Section 230. Over and over and over and over and over again it has misrepresented how Section 230 works, once even having to run an astounding correction to an article whose half-page headline blamed Section 230. A day later, it had to run another correction on a different article also misrepresenting Section 230. You would think, with all these mistakes and corrections, that the editors at the NY Times might take things a bit more slowly when either a reporter or a columnist submits a piece purportedly about Section 230. Apparently not. Julia Angwin has done some amazing reporting on privacy issues in the past and has exposed plenty of legitimately bad behavior by big tech companies. But, unfortunately, she appears to have been sucked into nonsense about Section 230. She recently wrote a terribly misleading opinion piece, bemoaning social media algorithms and blaming Section 230 for their existence. The piece is problematic and wrong on multiple levels. It’s disappointing that it ever saw the light of day without someone pointing out its many flaws.

A history lesson:

Before we get to the details of the article, let’s take a history lesson on recommendation algorithms, because it seems that many people have very short memories. The early internet was both great and a mess. It was great because anyone could create anything and communicate with anyone. But it was a mess because that came with a ton of garbage and slop. There were attempts to organize that information and make it useful. Things like Yahoo became popular not because they had a search engine (that came later!) but because they were an attempt to “organize” the internet (Yahoo originally stood for “Yet Another Hierarchical Officious Oracle,” recognizing that there were lots of attempts to “organize” the internet at that time). After that, searching and search algorithms became a central way of finding stuff online. In its simplest form, search is a recommendation algorithm: the keywords you provide are run against an index. In the early days, Google cracked the code to make that recommendation algorithm work for content on the wider internet. The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.” The next generation of the internet was content in various silos. Some of those were user-generated silos of content, such as Facebook and YouTube. And some of them were professional content, like Netflix or iTunes. But, once again, it wasn’t long before users felt overwhelmed with the sheer amount of content at their fingertips. Again, they sought out recommendation algorithms to help them find the relevant or good content, and to avoid the less relevant “bad” content. Netflix’s algorithm isn’t very different from Google’s recommendation engine. It’s just that, rather than “here’s what’s most relevant for your search keywords,” it’s “here’s what’s most relevant based on your past viewing history.” Indeed, Netflix somewhat famously perfected the content recommendation algorithm in those years, even offering up a $1 million prize to anyone who could build a better version. Years later, a team of researchers won the award, but Netflix never implemented it, saying that the marginal gains in quality were not worth the expense.
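To ground the claim that search is, at bottom, a recommendation opinion, here is a deliberately naive keyword-scoring sketch. It is a toy invented for illustration and says nothing about how Google, Bing, or Netflix actually rank anything:

```python
# Toy illustration: "search" is just ranking documents by a relevance
# opinion. Real engines layer on link analysis, personalization, spam
# demotion, and more, but the basic shape is the same.
from collections import Counter

docs = {
    "1": "how to grow tomatoes in pots",
    "2": "tomato sauce recipes for pasta",
    "3": "fixing a flat bicycle tire",
}

def score(query: str, text: str) -> int:
    # The relevance "opinion": how many query terms appear in the document.
    terms = Counter(text.lower().split())
    return sum(terms[word] for word in query.lower().split())

def search(query: str, k: int = 2) -> list[str]:
    # The recommendation: the k documents the algorithm thinks fit best.
    ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
    return [docs[d] for d in ranked[:k] if score(query, docs[d]) > 0]

print(search("tomato recipes"))  # ['tomato sauce recipes for pasta']
```

Note what the cutoff does: everything below it is content the engine has, in effect, recommended against, which is exactly the filtering-out role the next paragraph gets into.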
Either way, though, it was clearly established that the benefit and the curse of the larger internet is that, in enabling anyone to create and access content, too much content is created for anyone to deal with. Thus, curation and recommendation are absolutely necessary. And handling both at scale requires some sort of algorithm. Yes, some personal curation is great, but it does not scale well, and the internet is all about scale. People also seem to forget that recommendation algorithms aren’t just telling you what content they think you’ll want to see. They’re also helping to minimize the content you probably don’t want to see. Search engines choosing which links show up first are also choosing which links they won’t show you. My email is only readable because of the recommendation engines I run against it (more than just a spam filter, I also run algorithms that automatically put emails into different folders based on likely importance and priority). Algorithms aren’t just a necessary part of making the internet usable today. They’re a key part of improving our experiences. Yes, sometimes algorithms get things wrong. They could recommend something you don’t want. Or demote something you do. Or maybe they recommend some problematic information. But sometimes people get things wrong too. Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion, and not wholly outsourcing your brain to the algorithm. If the problem is people outsourcing their brains to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them. That it’s just a suggestion or a recommendation is also important from a legal standpoint, because recommendation algorithms are simply opinions. They are opinions about what content that algorithm thinks is most relevant to you at the time, based on what information it has at that time. And opinions are protected free speech under the First Amendment. If we held anyone liable for opinions or recommendations, we’d have a massive speech problem on our hands. If I go into a bookstore, and the guy behind the counter recommends a book to me that makes me sad, I have no legal recourse, because no law has been broken. If we say that tech company algorithms mean they should be liable for their recommendations, we’ll create a huge mess: spammers will be able to sue if email is filtered to spam. Terrible websites will be able to sue search engines for downranking their nonsense. On top of that, First Amendment precedent has long been clear that the only way a distributor can be held liable for even a harmful recommendation is if the distributor had actual knowledge of the law-violating nature of the recommendation. I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. GP Putnam, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.

We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty.
Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

It’s not hard to transpose this to the internet. If Google recommends a link that causes someone to poison themselves, precedent says we can hold the author liable, but not the distributor/recommender, unless they have actual knowledge of the illegal nature of the content. Absent that, there is nothing to actually sue over. And that’s good. Because you can’t demand that anyone recommending anything know with certainty whether the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield. Note that Section 230 does not come up even once in this history lesson. All that Section 230 does is say that websites and users (that’s important!) are immune from liability over their editorial choices for third-party content. That doesn’t change the underlying First Amendment protections for their editorial discretion; it just allows them to get cases tossed out earlier (at the very earliest motion-to-dismiss stage) rather than having to go through expensive discovery/summary judgment and possibly even all the way to trial.

Section 230 isn’t the issue here:

Now back to Angwin’s piece. She starts out by complaining about Mark Zuckerberg talking up Meta’s supposedly improved algorithms. Then she takes the trite and easy route of dunking on that by pointing out that Facebook is full of AI slop and clickbait. That’s true! But… that’s got nothing to do with legal liability. That simply has to do with… how Facebook works and how you use Facebook? My Facebook feed has no AI slop or clickbait, perhaps because I don’t click on that stuff (and I barely use Facebook). If there were no 230 and Facebook were somehow incentivized to do less algorithmic recommendation, feeds would still be full of nonsense. That’s why the algorithms were created in the first place. Indeed, studies have shown that when you remove algorithms, feeds are filled with more nonsense, because the algorithms don’t filter out the crap anymore. But Angwin is sure that Section 230 is to blame and thinks that if we change it, it will magically make the algorithms better.

Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices. Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.

So, again, this is wrong. From the earliest days of the internet, we always relied on recommendation systems and moderation, as noted above. And “social media” didn’t even come into existence until years after Section 230 was created. So, it’s not just wrong to say that Section 230’s protections made sense for early social media, it’s backwards.
Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1996 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act, put forth by then-Reps. Cox and Wyden as an alternative to the CDA. Then Congress, in its infinite stupidity, took both bills and merged them. But it was also intended to help protect companies from being sued for recommendations. Indeed, two years ago, Cox and Wyden explained this to the Supreme Court in a case about recommendations:

At the same time, Congress drafted Section 230 in a technology-neutral manner that would enable the provision to apply to subsequently developed methods of presenting and moderating user-generated content. The targeted recommendations at issue in this case are an example of a more contemporary method of content presentation. Those recommendations, according to the parties, involve the display of certain videos based on the output of an algorithm designed and trained to analyze data about users and present content that may be of interest to them. Recommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230. And because Section 230 is agnostic as to the underlying technology used by the online platform, a platform is eligible for immunity under Section 230 for its targeted recommendations to the same extent as any other content presentation or moderation activities.

So the idea that 230 wasn’t meant for recommendation systems is wrong and ahistorical. It’s strange that Angwin would just claim otherwise, without backing up that statement. Then Angwin presents a very misleading history of court cases around 230, pointing out cases where Section 230 has been successful in getting bad cases dismissed at an early stage, but in a way that makes it sound like the cases would have succeeded absent 230:

Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.

But again, these links misrepresent and misunderstand how Section 230 functions under the umbrella of the First Amendment. None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable. All Section 230 did was speed up the resolution of those cases, without stopping the plaintiffs from taking legal action against those actually responsible for the harms. And, similarly, we could point to another list of cases where Section 230 shielded tech firms from consequences for things we actually want them shielded on, like spam filters, kicking Nazis off your platform, fact-checking vaccine misinformation and election denial disinformation, removing hateful content, and much, much more. Remove 230 and you lose that ability as well. And those two functions are tied together at the hip. You can’t get rid of the protections for the stuff Julia Angwin says is bad without also losing the protections for the things we want to protect. At least not without violating the First Amendment. This is the part that 230 haters refuse to understand.
Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, those critics think that taking away Section 230 would magically lead to more removals of bad content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo become a bigger challenge, since those choices will cost much more to defend (even if you’d win on First Amendment grounds years later). Angwin’s issue (as is the issue with so many Section 230 haters) is that she wants to blame tech companies for harms created by users of those technologies. At its simplest level, Section 230 just puts the liability on the party actually responsible. Angwin’s mad because she’d rather blame tech companies than the people actually selling drugs, sexually harassing people, selling illegal arms, or engaging in human trafficking. And I get the instinct. Big tech companies suck. But pinning liability on them won’t fix that. It’ll just push them to give up important editorial discretion (making everything worse) while simultaneously building up bigger legal teams, making sure competitors can never enter the space. That’s the underlying issue. Because if you blame the tech companies, you don’t get less of those underlying activities. You get companies who won’t even look to moderate such content, because doing so would be used in lawsuits against them as a sign of “knowledge.” Or, if the companies do decide to more aggressively moderate, you would get any attempt to speak out about sexual harassment blocked (goodbye to the #MeToo movement… is that what Angwin really wants?).

Changing 230 would make things worse, not better:

From there, Angwin takes the absolutely batshit crazy 3rd Circuit opinion in Anderson v. TikTok, which explicitly ignored a long list of other cases based on misreading a non-binding throwaway line in a Supreme Court ruling, and gave no other justification for its ruling, and presents it as a good thing:

If the court holds platforms liable for their algorithmic amplifications, it could prompt them to limit the distribution of noxious content such as nonconsensual nude images and dangerous lies intended to incite violence. It could force companies, including TikTok, to ensure they are not algorithmically promoting harmful or discriminatory products. And, to be fair, it could also lead to some overreach in the other direction, with platforms having a greater incentive to censor speech.

Except it won’t do that. Because of the First Amendment, it does the opposite. The First Amendment requires actual knowledge of the violative actions and content, so doing this will mean one of two things: companies taking either a much less proactive stance or, alternatively, one that will be much quicker to remove any controversial content (so goodbye #MeToo, #BlackLivesMatter, or protests against the political class). Even worse, Angwin seems to have spoken to no one with actual expertise on this if she thinks this is the end result:

My hope is that the erection of new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.

As someone who is actively working to help create systems that give control back to users, I will say flat out that Angwin gets this backwards. Without Section 230, it becomes way more difficult to do so.
Because the users themselves would now face much greater liability, and unlike the big companies, users won’t have buildings full of lawyers willing and able to fight such bogus legal threats. If you face liability for giving users more control, users get less control. And, I mean, it’s incredible to say we need legal guardrails and less 230, and then say this:

In the meantime, there are alternatives. I’ve already moved most of my social networking to Bluesky, a platform that allows me to manage my content moderation settings. I also subscribe to several other feeds — including one that provides news from verified news organizations and another that shows me what posts are popular with my friends. Of course, controlling our own feeds is a bit more work than passive viewing. But it’s also educational. It requires us to be intentional about what we are looking for — just as we decide which channel to watch or which publication to subscribe to.

As a board member of Bluesky, I can say that those content moderation settings, and the ability of others to make feeds that Angwin can then choose from, are possible in large part due to Section 230. Without Section 230 to protect both Bluesky and its users, it would be much more difficult to defend lawsuits over those feeds. Angwin literally has this backwards. Without Section 230, is Bluesky as open to offering up third-party feeds? Is it as open to allowing users to create their own feeds? Under the world that Angwin claims to want, where platforms have to crack down on “bad” content, it would be a lot more legally risky to allow user control and third-party feeds. Not because providing the feeds would lead to legal losses, but because without 230 it would encourage more bogus lawsuits, and cost way more to get those lawsuits tossed out under the First Amendment. Bluesky doesn’t have a building full of lawyers like Meta has. If Angwin got her way, Bluesky would need one if it wanted to continue offering the features Angwin claims she finds so encouraging. This is certainly not the first time that the NY Times has directly misled the public about how Section 230 works. But Angwin certainly knows many of the 230 experts in the field. It appears she spoke to none of them and wrote a piece that gets almost everything backwards. Angwin is a powerful and important voice for fixing many of the downstream problems of tech companies. I just wish that she would spend some time understanding the nuances of 230 and the First Amendment to be more accurate in her recommendations. I’m quite happy that Angwin likes Bluesky’s approach to giving power to end users. I only wish she wasn’t advocating for something that would make that way more difficult.

[Category: 1, 1st amendment, algorithms, content moderation, free speech, history, julia angwin, recommendations, section 230]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/28/24 6:29am
After countless years pondering the idea, the FCC in 2022 announced that it would start politely asking the nation’s lumbering telecom monopolies to affix a sort of “nutrition label” to broadband connections. The labels will clearly disclose the speed and latency (ping) of your connection, any hidden fees users will encounter, and whether the connection comes with usage caps or overage fees. Initially just a voluntary measure, the labels became mandatory for bigger ISPs back in April. Smaller ISPs had to start using them as of October 10. In most instances, they’re supposed to look something like a food nutrition label, with those disclosures laid out in a standardized format. As far as regulatory efforts go, it’s not the worst idea. Transparency is lacking in broadband land, and U.S. broadband and cable companies have a 30+ year history of ripping off consumers with an absolute cavalcade of weird restrictions, fees, surcharges, and connection limitations. Here’s the thing, though: knowing you’re being ripped off doesn’t necessarily stop you from being ripped off. A huge number of Americans live under a broadband monopoly or duopoly, meaning they have no other choice in broadband access. As such, Comcast or AT&T or Verizon can rip you off, and you have absolutely no alternative options that allow you to vote with your wallet. That wouldn’t be as much of a problem if U.S. federal regulators had any interest in reining in regional telecom monopoly power, but they don’t. In fact, members of both parties are historically incapable of even admitting monopoly harm exists. Democrats are notably better at at least trying to do something, even if that something often winds up being decorative regulatory theater. The other problem: with the help of a corrupt Supreme Court, telecoms and their Republican and libertarian besties are currently engaged in an effort to dismantle what’s left of the FCC’s consumer protection authority under the pretense that this unleashes free market innovation. It, of course, doesn’t; regional monopolies like Comcast just double down on all of their worst impulses, unchecked. If successful, even fairly basic efforts like this one won’t be spared, as the FCC won’t have the authority to enforce much of anything. It’s all very demonstrative of a U.S. telecom industry that’s been broken by monopoly power, a lack of competition, and regulatory capture. As a result, even the most basic attempts at consumer protection are constantly undermined by folks who’ve dressed up greed as some elaborate and intellectual ethos.
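For a sense of what the labels standardize, the required disclosures amount to a small structured record. Here is an illustrative sketch; the field names and the provider are made up for this post, and the FCC’s actual label format is defined in its own rules:

```python
# Illustrative record of what a broadband "nutrition label" discloses.
# Field names are hypothetical; the FCC defines the real layout.
from dataclasses import dataclass

@dataclass
class BroadbandLabel:
    provider: str
    plan_name: str
    monthly_price_usd: float
    typical_download_mbps: float
    typical_upload_mbps: float
    typical_latency_ms: float
    one_time_fees_usd: dict[str, float]  # previously "hidden" fees, itemized
    data_cap_gb: int | None              # None means no usage cap
    overage_fee_usd: float | None        # charged when the cap is exceeded

label = BroadbandLabel(
    provider="Example Cable Co",         # hypothetical ISP
    plan_name="Gigabit Extra",
    monthly_price_usd=89.99,
    typical_download_mbps=940.0,
    typical_upload_mbps=35.0,
    typical_latency_ms=14.0,
    one_time_fees_usd={"installation": 99.0, "early_termination": 120.0},
    data_cap_gb=1200,
    overage_fee_usd=10.0,
)
print(f"{label.plan_name}: {label.typical_download_mbps} Mbps down, "
      f"cap {label.data_cap_gb} GB, ${label.monthly_price_usd}/mo")
```

Which is the article’s point in miniature: a clean record like this tells you exactly how you’re being overcharged, but only competition lets you do anything about it.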

[Category: 1, broadband, consumers, fcc, fees, high speed internet, nutrition label, telecom, usage caps]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/27/24 1:00pm
This week, our first place winner on the insightful side is an anonymous comment about the legality of Elon Musk’s vote-buying scheme:

My trainers say it’s illegal (volunteer voter registrar). We were taught that under no circumstances could we induce or attempt to induce anyone to register to vote, let alone to vote, let alone to vote any particular way. Our role is to register anyone who wishes to be registered, and to do so according to federal and state election laws (which govern things like proof of identity). That’s it. This means, for example, that while I can sit at a table outside a sporting event with a sign that says “register to vote here”, I cannot initiate a conversation with anyone passing by in an attempt to get them to register. I have to passively wait until they initiate the process, and then of course I can converse with them, explain how it’s done, go through the process, etc. Yes, I know: following the law is for the raggedy poor people, not for the rich elite. But this is yet another reason Musk should be stripped of everything and deported. The US doesn’t need this malignant cancer.

In second place, it’s another anonymous comment about Musk’s gullibility and fondness for spreading misinformation:

The amount of energy required to refute bullshit is an order of magnitude greater than to produce it.

For editors’ choice on the insightful side, we’ve got a pair of comments from That One Guy. First, it’s another comment about Musk’s million-dollar vote gambit:

Two truisms apply here: 1) Every accusation a confession, every self-given label a rejection of. Of course the person who turned his dumpsterfire of a social media platform into a platform to push his preferred (convicted felon) candidate, and created a lotto to collect personal data for campaign purposes to push people to register and vote, accused someone else of spending a bunch of money to create a ‘propaganda machine’; the best way to know what a republican is doing/wants to do/plans on doing is merely to pay attention to what they claim someone else is doing. 2) Laws are for the little people. Is it possible that he’s breaking the law here? Absolutely. Will he face any penalty whatsoever beyond a wrist slap at most if he is and is found guilty in court? Not a chance in hell and he knows it.

Next, it’s a comment on our post about conspiracy theories and how they are fueled by not trusting anything:

I reject your demonstrable reality and substitute the one given to me. “Everything is a conspiracy theory when you don’t trust anything.” I’m not sure this one is quite accurate either, since I’d argue that not trusting things is only half the problem, with the other, more impactful half being the trust in people that definitely should not be trusted over demonstrable, observable reality, such that what the person says is given higher priority over what can be observed, tested and measured.

Over on the funny side, our first place winner is another anonymous comment, once again about Musk and misinformation:

Truth does another hit piece on Musk.

In second place, it’s an anonymous comment on our post about Chris Rufo abusing academic plagiarism’s absurd norms, replying to another commenter who attempted some sort of weird irony in saying they didn’t expect us to be pro-Rufo:

I didn’t expect to be abducted by aliens either, but that also didn’t happen, so I didn’t otherwise think it was relevant to mention.
For editors’ choice on the funny side, we start out with an anonymous reply to the first place funny winner above:

More anti-liar hate speech from TruthDirt.

Finally, it’s Toom1275 with a reply to a screeching accusation of ORANGE MAN BAD:

Orange fan SAD.

That’s all for this week, folks!
[*] [+] [-] [x] [A+] [a-]  
[l] at 10/26/24 2:01pm
Five Years Ago
This week in 2019, Congress rushed to pass a bill empowering copyright trolls to shake people down, while Ajit Pai was complaining about the state-level net neutrality laws he helped create. Donald Trump was threatening to sue CNN for its coverage based on a dumb legal theory, while we wrote about how a cop’s bogus defamation lawsuit nearly put a small Iowa newspaper out of business. And we looked once again at why Section 230 is not a free pass for internet companies, and dedicated an episode of the podcast to the story of Backpage versus the feds.
Ten Years Ago
This week in 2014, we noted that even the FBI itself knew Director James Comey was wrong about encryption, while he called on Congress to fix the problem. Mike Rogers was ramping up the rhetoric and calling for Ed Snowden to be charged with murder, while a former NSA official was saying the government shouldn’t hire anyone who justified Snowden’s leaks. Marvel went DMCA crazy over the leaked trailer for Avengers 2 shortly before putting it on its own YouTube page, Microsoft got a bunch of non-infringing videos taken down because of product keys posted in the comments, and we wrote about how copyright law stifles artistic criticism.
Fifteen Years Ago
This week in 2009, Monster Energy drink was getting aggressive, hiring trademark bullies to go after a beverage review site and a movie monster, but it also backed down from another fight it had started a few weeks earlier against a Vermont brewery. Hollywood studios were starting to put anti-Twitter clauses in contracts with actors, AT&T was asking its employees to hide their affiliation while protesting net neutrality laws, and we asked whether the AP could be trusted to report on its own lawsuit with Shepard Fairey.

[Category: 1, history, look back]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/25/24 8:39pm
Newspaper presidential endorsements may not actually matter that much, but billionaire media owners blocking editorial teams from publishing their endorsements out of concern over potential retaliation from a future Donald Trump presidency should matter a lot. If people were legitimately worried about the “weaponization of government” and the idea that companies might silence speech over threats from the White House, what has happened over the past few days should raise alarm bells. But somehow I doubt we’ll be seeing the folks who were screaming bloody murder over the nothingburger that was the Murthy lawsuit saying a word of concern about billionaire media owners stifling the speech of their editorial boards to curry favor with Donald Trump. In 2017, the Washington Post changed its official slogan to “Democracy Dies in Darkness.” The phrase was apparently a favorite of Bob Woodward, who was one of the main reporters who broke the Watergate story decades ago. Lots of people criticized the slogan at the time (and have continued to do so since then), but never more so than today, as Jeff Bezos apparently stepped in to block the newspaper from endorsing Kamala Harris for President.

An endorsement of Harris had been drafted by Post editorial page staffers but had yet to be published, according to two people who were briefed on the sequence of events and who spoke on the condition of anonymity because they were not authorized to speak publicly. The decision to no longer publish presidential endorsements was made by The Post’s owner, Amazon founder Jeff Bezos, according to the same two people.

This comes just days after a similar situation at the LA Times, whose billionaire owner, Patrick Soon-Shiong, similarly blocked the editorial board from publishing its planned endorsement of Harris. Soon-Shiong tried to “clarify” by claiming he had asked the team to instead publish something looking at the pros and cons of each candidate. However, as members of the editorial board noted in response, that’s what you’d expect the newsroom to do. The editorial board is literally supposed to express its opinion. In the wake of that decision, at least three members of the LA Times editorial board have resigned. Mariel Garza quit almost immediately, and Robert Greene and Karin Klein followed a day later. As of this writing, it appears at least one person, editor-at-large Robert Kagan, has resigned from the Washington Post. Or, as the Missing The Point account on Bluesky noted, perhaps the Washington Post is changing its slogan to “Hello Darkness My Old Friend.” Marty Baron, who had been the Executive Editor of the Washington Post when it chose “Democracy Dies in Darkness” as a slogan, called out Bezos’ decision as “cowardice” and warned that Trump would see this as a victory for his intimidation techniques, one that would embolden him. The thing is, for all the talk over the past decade or so about “free speech” and “the weaponization of government,” this sure looks like two billionaires suppressing speech from their organizations over fear of how Trump will react, should he be elected. During his last term, Donald Trump famously targeted Amazon in retaliation for coverage he didn’t like from the Washington Post. His anger at WaPo coverage caused him to ask the Postmaster General to double Amazon’s postage rates. Trump also told his Secretary of Defense James Mattis to “screw Amazon” and to kill a $10 billion cloud computing deal the Pentagon had lined up.
For all the (misleading) talk about the Biden administration putting pressure on tech companies, what Trump did there seemed like legitimate First Amendment violations. He punished Amazon for speech he didn’t like. It’s funny how all the “weaponization of the government” people never made a peep about any of that. As for Soon-Shiong, it’s been said that he angled for a cabinet-level “health care czar” position in the last Trump administration, so perhaps he’s hoping to increase his chances this time around. In both cases, though, this sure looks like Trump’s past retaliations and direct promises of future retaliation against all who have challenged him are having a very clear censorial impact. In the last few months, Trump has been pretty explicit that, should he win, he intends to punish media properties that reported on him in ways he dislikes. These are all reasons why anyone who believes in free speech should be speaking out about the danger Donald Trump poses to our most cherished First Amendment rights. Especially those in the media. Bezos and Soon-Shiong are acting like cowards. Rather than standing up and doing what’s right, they’re pre-caving, before the election has even happened. It’s weak and pathetic, and Trump will take it (accurately) to mean that he can continue to walk all over them, and continue to get the media to pull punches by threatening retaliation. If democracy dies in darkness, it’s because Bezos and Soon-Shiong helped turn off the light they were carrying.

[Category: amazon, la times, washington post, censorship, cowardice, donald trump, endorsements, free speech, jeff bezos, journalism, kamala harris, patrick soon-shiong, presidential endorsements]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/25/24 4:43pm
Over a decade ago, we wrote about how the flurry of trademark lawsuits seen at that time, in which companies sued competitors for buying up Google AdWords so that their ads displayed when the competitor was searched for, might finally be coming to an end. While these types of suits have certainly declined in number, based on anecdotal evidence, they have not disappeared entirely. And they make no more sense today than they did a decade ago. Buying an AdWord so that your ad appears when a prospective buyer searches for a direct competitor isn’t trademark infringement, except in the rare cases where the ads are constructed such that actual, substantial customer confusion occurs. Otherwise, it’s no different than ads and coupons in retail stores appearing next to a competing product. Because, you know, that’s where the potential customer is. If I go down the aisle looking for Oreos and next to them is a coupon for Chips Ahoy, that isn’t infringement. Buying Google AdWords for competitors’ search terms is no different. You would think law firms, of all groups, would know this sort of thing. One national law firm, Lerner & Rowe, appears to need several court-taught lessons on the matter. It brought one of these suits against a competitor in Arizona, the Accident Law Group (ALG), lost, and then recently lost again on appeal.

The 9th U.S. Circuit Court of Appeals upheld a lower court’s ruling that granted a bid by the Arizona firm, the Accident Law Group, for summary judgment in the trademark infringement lawsuit brought by Lerner & Rowe over ALG’s ads that appeared on Lerner & Rowe’s Google search results. Lerner & Rowe had accused ALG of attaching ads for its firm to search terms or keywords associated with Lerner & Rowe and siphoning off potential clients. The appeals court said that despite Lerner & Rowe’s strong trademark and its expenditure of more than $100 million on marketing in Arizona, data from Google and ALG showed that only a tiny fraction of people who called ALG about potential legal representation mentioned Lerner & Rowe and therefore may have been confused.

As the court went on to note in its analysis, that’s likely because ALG didn’t actually engage in anything deceptive beyond buying the AdWords. The ads it displayed made it plain that the ad was for ALG and not Lerner & Rowe. The two firms’ branding is otherwise not confusing. There’s just nothing here, other than the AdWord buy itself. Which is why the number of people who even mentioned Lerner & Rowe to ALG is so tiny.

In 2023, U.S. District Judge David Campbell granted ALG’s bid for summary judgment, in part relying on data from ALG’s intake department, which said it received a little more than 200 phone calls from people who specifically mentioned “Lerner & Rowe.” In contrast, ALG’s ads appeared on “Lerner & Rowe” searches more than 109,000 times between 2017 and 2021, Campbell said. The appeals court on Tuesday said that the district court was correct to conclude that the case was one of the rare trademark infringement cases susceptible to summary judgment.

While this shouldn’t be surprising any longer, it is nice to note when the courts get these sorts of trademark questions correct.

[Category: 1, alg, lerner & rowe, adwords, keyword advertising, lawyers, likelihood of confusion, trademark]

[*] [+] [-] [x] [A+] [a-]  
[l] at 10/25/24 2:30pm
The copyright world is currently trying to assert its control over the new world of generative AI through a number of lawsuits, several of which have been discussed previously on Walled Culture. We now have our first decision in this area, from the regional court in Hamburg. Andres Guadamuz has provided an excellent detailed analysis of a ruling that is important for the German judges’ discussion of how EU copyright law applies to various aspects of generative AI. The case concerns the freely-available dataset from LAION (Large-scale Artificial Intelligence Open Network), a German non-profit. As the LAION FAQ says: “LAION datasets are simply indexes to the internet, i.e. lists of URLs to the original images together with the ALT texts found linked to those images.” Guadamuz explains:

The case was brought by German photographer Robert Kneschke, who found that some of his photographs had been included in the LAION dataset. He requested the images to be removed, but LAION argued that they had no images, only links to where the images could be found online. Kneschke argued that the process of collecting the dataset had included making copies of the images to extract information, and that this amounted to copyright infringement. LAION admitted making copies, but said that it was in compliance with the exception for text and data mining (TDM) present in German law, which is a transposition of Article 3 of the 2019 EU Copyright Directive.

The German judges agreed:

The court argued that while LAION had been used by commercial organisations, the dataset itself had been released to the public free of charge, and no evidence was presented that any commercial body had control over its operations. Therefore, the dataset is non-commercial and for scientific research. So LAION’s actions are covered by section 60d of the German Copyright Act.

That’s good news for LAION and its dataset, but perhaps more interesting for the general field of generative AI is the court’s discussion of how the EU Copyright Directive and its exceptions apply to AI training. It’s a key question because copyright companies claim that they don’t apply, and that when such training involves copyright material, permission is needed to use it. Guadamuz summarizes that point of view as follows:

the argument is that the legislators didn’t intend to cover generative AI when they passed the [EU Copyright Directive], so text and data mining does not cover the training of a model, just the making of a copy to extract information from it. The argument is that making a copy to extract information to create a dataset is fine, as the court agreed here, but the making of a copy in order to extract information to make a model is not. I somehow think that this completely misses the way in which a model is trained; a dataset can have copies of a work, or in the case of LAION, links to the copies of the work. A trained model doesn’t contain copies of the works with which it was trained, and regurgitation of works in the training data in an output is another legal issue entirely.

The judgment from the Hamburg court says that while legislators may not have been aware of generative AI model training in 2019, when they drew up the EU Copyright Directive, they certainly are now. The judges use the EU’s 2024 AI Act as evidence of this, citing a paragraph that makes explicit reference to AI models complying with the text and data mining regulation in the earlier Copyright Directive.
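As an aside, it helps to picture what an “index to the internet” actually holds. Below is a minimal, hypothetical sketch of building one entry; it is not LAION’s real code, but it captures the structure the court examined: any image copy is transient (the text and data mining step), while only the URL and ALT text are kept.

```python
# Hypothetical sketch of a LAION-style index entry: only the link and
# text survive; any image copy made along the way is transient.
import urllib.request
from dataclasses import dataclass

@dataclass
class IndexEntry:
    url: str        # pointer to the original image, not the image itself
    alt_text: str   # the ALT text found linked to that image

def index_image(url: str, alt_text: str) -> IndexEntry | None:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = resp.read()   # the transient copy at issue in the case
    except OSError:
        return None              # dead link: nothing worth indexing
    if not data:
        return None
    # The bytes are discarded when this function returns; the dataset
    # row is just the pointer plus the text.
    return IndexEntry(url=url, alt_text=alt_text)
```

That split, with links and text retained while the pixels are discarded, is why LAION could tell the court it holds no images at all.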
As Guadamuz writes in his post, the court's point about the AI Act is important, but its legal impact may be limited. The judgment is only the view of a single local German court, so other jurisdictions may reach different results. Moreover, the original plaintiff, Robert Kneschke, may appeal, and the decision could be overturned. Furthermore, the ruling only concerns the use of text and data mining to create a training dataset, not the actual training itself, although the judges' comments on the latter suggest that it would be legal too. In other words, this local outbreak of good sense in Germany is welcome, but we are still a long way from complete legal clarity on the training of generative AI systems on copyright material.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

[Category: laion, ai, copyright, copyright directive, germany, hamburg, reading, robert kneschke, tdm, text and data mining, training]

Posted 10/25/24 1:06pm
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation's Ben Whitelaw. Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

Ben and Mike are technically off this week, but we decided to run an experiment. After discussing Google's NotebookLM and its ability to create AI-generated podcasts about any content, Mike tested how it would handle one of the stories he and Ben discussed last week: Daphne Keller's "The Rise of the Compliant Speech Platform" on Lawfare. Mike explains why we're running this and the work that went into it, and shares his thoughts on the experiment, followed by the AI-generated version.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

[Category: 1, google, ai, artificial intelligence, content moderation, daphne keller, notebookLM]

Posted 10/25/24 11:57am
Not all cops are terrible people, but a whole lot of terrible people seem to be cops. For some reason, a police officer (who has not been officially identified by the district) has a side gig as a substitute English teacher. I don't know what qualifications this officer brings to that job, but absolutely none of them were on display last week, when he earned himself a permanent ban from the district for teaching anything but English to an English class at Woodbury High School. (h/t Chris Ingraham)

A substitute teacher has been banned from a Twin Cities metro area school district after reportedly using a student to reenact the police actions that led to the murder of George Floyd. The teacher also reportedly made racist and sexist comments to students at Woodbury High School on Monday, among other actions that prompted school officials to remove him from the school. "It was very disturbing to us as a school district that something like that would ever occur in one of our classrooms," South Washington County Schools Superintendent Julie Nielsen told MPR's Minnesota Now on Wednesday, adding that in more than 30 years as an educator, "I have never heard of such poor judgment in a classroom."

We've always taken the position that putting cops in schools is a generally terrible idea, but that was in the context of cops handling the disciplinary issues that have historically been handled by teachers, parents, and school administrators. Going forward, we will be taking the position that putting cops in classrooms as instructors is a generally terrible idea, pretty much entirely because of this guy: Officer Steven Dwight Williams of the Prescott, Wisconsin police department (as reported by the Minneapolis Star-Tribune), who has been outed by his employer, which has placed him on administrative leave. What he was doing offering his dubious English teaching skills to a school in a suburb of St. Paul, Minnesota is anyone's guess.

That being said, the reporting from MPR leaves something to be desired in terms of its cop-washing of the most eye-grabbing part of Officer Williams' teaching methods: "reenact the police actions that led to the murder of George Floyd." No, he absolutely performed the very act that murdered George Floyd, using a Black high school student to represent the Black adult murdered by Minnesota police officer Derek Chauvin. Much like Chauvin's act, Officer Williams' reenactment of a murder committed by a police officer was captured by a bystander's phone.

But even that line is better than the phrase used in Woodbury News Net's coverage of this debacle: "Substitute Teacher Banned After Allegedly Reenacting George Floyd Restraint At Woodbury High." First, there's footage of the incident and the corroboration of every student in the class. There's also the letter sent to parents of students at the school, which affirms this did, in fact, happen. Finally, calling a murder a "restraint" may be technically accurate, but it downplays the insanity of this reenactment, which was performed by a cop, utilizing a Black student. It's like teaching history by reenacting a lynching, but at least in that case, it might have something to do with the subject being taught.

This ain't English. And neither is anything else that fell out of that cop's mouth during what is hopefully his final substitute teaching job ever. This is from the letter/email sent to parents by the district, which includes a long list of other terrible things Officer Williams did during the four classes he taught that day.
During class time, some of the things students reported were that the substitute teacher:

● Put a student on the ground in front of the class as part of a reenactment of the police actions that resulted in the murder of George Floyd.
● Twisted a student's arm behind the student's back and showed pressure points on the chin and face.
● Spoke about a bar fight and fake punched a student with his fist "really close" to the student's face.
● "Invaded students' space" and mimicked holding up a gun and pointing it at students.
● Repeatedly made racially harmful comments.
● Told sexist jokes.
● Spoke in disturbing detail about dead bodies he had seen, and shared explicit details about two sexual assault cases he investigated.
● Shared specific names of people he arrested.
● Stated that "cops would be the best criminals" and that "they know how to get away with stuff," stating that he once got an "A" on a paper about how to get away with murder.
● Spoke at length about his gun collection.
● Stated that "police brutality isn't real."

That's insane. Unless the purpose of the class was to instruct students that cops are terrible, untrustworthy people, there's no way anyone came away from these classes with a better understanding of the subject matter. And I can only imagine how horrified the teacher Officer Williams replaced must be at this development, if only because it's clear he didn't follow the lesson plans they left for him.

How this guy has held down this side gig for multiple years is beyond my comprehension. This can't be the only time he decided to regale classes with his cop-centric view of the world. Or maybe this was just the day he decided he wanted to get fired and chose to drag four consecutive classes down with him. Either way, more schools should probably start running background checks on their substitute teacher rosters. Administrators probably assumed someone with a law enforcement background would be a solid pick for open teaching slots. That assumption has now been completely undermined by one officer's actions. Maybe no other cop/sub would do the things this officer did. But as cops themselves will tell you when they're trying to impose their will on you, better safe than sorry.

[Category: 1, george floyd, minnesota, police brutality, school cops, steven williams, woodbury high school]
