‘Breaking Bad’ Elevated Television

If you’ve been out of the country for the past six years, you have an excuse for being unfamiliar with Breaking Bad, perhaps the best show that’s ever been on television.

The story of Walter White, a humble high school chemistry teacher who, upon learning he has lung cancer, decides to team up with a former student to make methamphetamine, BB portrays the transformation of White from “Mr. Chips into Scarface,” as the show’s creator, Vince Gilligan, describes it.

Fresh off its Emmy award for best drama series, a recognition too long in coming, the show prompts the question: When will we see another TV series that is as astonishingly good?  And another: Why is it so hard for truly excellent programming to get air time?

In his book Difficult Men, Brett Martin recounts the lengthy and harrowing path traversed by Gilligan on the way to securing a deal with AMC, one of the several channels that make up AMC Networks.

Martin tells the tale of Gilligan’s meeting with executives of the TNT cable network, who liked the show but were afraid of the drug-making aspect of it: “We don’t want to be stereotypical philistine executives, but does it have to be meth?  We love this, but if we buy it, we’ll be fired.”

Nor was TNT the only cable network that turned thumbs down on Breaking Bad.  So too did Showtime, HBO, and FX, meaning, as Gilligan put it, “there was no place left in the known universe.”

Elsewhere in his book, Martin usefully recounts the words of the AMC executive (Rob Sorcher) who decided to take a chance on the show: “We had had success with Mad Men,” he said.  “And once you’ve had that cookie it tastes good.  You want another one.  The decision to go another way, believe me, it was … terrifying.  But once you did, once you chose quality over everything else … you could do anything.”

At a time when so much video programming – film as well as TV – is demographically driven, PC-themed, and/or scripted for cardboard characters, Breaking Bad is something very different.

With its tremendous writing, directing, acting, and visuals, BB delivered television marked by ambiguity, complexity, surprise, and sophistication.

As many have noted, in recent years the Emmys have been dominated by cable rather than broadcast network programming.  Indeed, both pay and basic cable channels have gained a reputation as the place to find smarter, edgier original series like Mad Men, The Sopranos, and of course Breaking Bad (despite the initial drug-themed hesitation about BB).  And this raises the question of why much of the best programming has been gravitating to cable.

One explanation is that broadcasting is much more heavily regulated.  For this reason, programming that is marked by sexual or violent content carries greater risk for broadcasters than for cable networks.  And the risks involved don’t issue from government only.

A case in point is the Showtime program Dexter, a series that, though critically acclaimed, features both sexual situations and violence.  In 2007, CBS announced that it was considering broadcasting reruns of Dexter over the air.  In response, a conservative group, the Parents Television Council, warned CBS affiliates to preempt the show, and threatened the show’s advertisers.

As it happened, CBS edited the reruns down to a TV-14 rating and aired them on its affiliates, but only for a single season.

None of this is to suggest that violence equals excellence, or that excellence can only be achieved with the inclusion of violence – only that where violence is a necessary ingredient in the excellent telling of a good story, its inclusion ought not to preempt the airing of it.

For years now, many people have bemoaned the “dumbing down” of America, a phenomenon defined by Wikipedia as “the deliberate diminishment of the intellectual level of the content of schooling and education, of literature and cinema, and of news and culture.”

The popular and critical success of Breaking Bad demonstrates that there is both the talent and the audience for something better.

                                               

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.  A version of this article appeared in the online edition of USA Today on Sept. 29, 2013.

Google, the FTC, and ‘Plausible’ Justifiability

Though it was surely not its intention, the Federal Trade Commission’s conclusion last week of its investigation of Google invites the question: What useful function does the FTC serve?

Not content, after two years of investigation on the taxpayers’ dime, merely to look past the mountain of evidence of marketplace harm caused by Google’s search and advertising practices, the Commission compounded that error by declining to issue a formal consent order, leaving it to Google itself, without the prospect of penalty, to change some of its business practices.

As even Commissioner J. Thomas Rosch said in his statement of concurrence and dissent, the FTC’s “settlement” with Google “creates very bad precedent and may lead to the impression that well-heeled firms such as Google will receive special treatment at the Commission.”

In elaboration of his dissent from the settlement procedure, Comm. Rosch added this:

Instead of following standard Commission procedure and entering into a binding consent agreement to resolve the majority’s concerns, Google has instead made non-binding commitments with respect to its search practices….

Our settlement with Google is not in the form of a binding consent order and, as a result, the Commission cannot enforce it by initiating contempt proceedings.  The inability to enforce Google’s commitments through contempt proceedings is particularly problematic given that the Commission has charged Google with violating a prior consent agreement.

What Comm. Rosch delicately calls “special treatment,” the more cynical of us would recognize as political influence peddling, a practice that Google has become quite adept at employing.  First it bankrolled the codification, at the Federal Communications Commission, of “net neutrality” regulations, thereby providing a solution to a nonexistent problem; then it led the successful opposition to the PIPA and SOPA copyright bills, the better to protect its investment in YouTube; now it has neutered the FTC, with the consequence being that it can continue to game its search results in ways that favor companies it controls.

So how has Google managed such political feats?  Well, would you believe that money has played a role?  In the FTC investigation alone, Google reportedly spent some $25 million lobbying on the matter.  To give an idea of the magnitude of this kind of spending, it equals 10 percent of the FTC’s total annual budget of $250 million.

But in addition to its FTC-specific lobbying, it’s well known that Google has cast its lot, through munificent campaign contributions and public policy support, with the current administration. Though it failed to come to pass, there was undoubtedly substance to the rumor that Google’s Eric Schmidt was being considered for a cabinet post in the Obama Administration.

Even so, there is evidence that the FTC commissioners know what they have done.  Their concluding statement about Google’s search practices, for instance, displays an almost comical defensiveness as they contend that, even if Google’s search practices favor its own companies, that is arguably okay:

In sum, we find that the evidence presented at this time does not support the allegation that Google’s display of its own vertical content at or near the top of its search results page was a product design change undertaken without a legitimate business justification.  Rather, we conclude that Google’s display of its own content could plausibly (emphases added) be viewed as an improvement in the overall quality of Google’s search product….  Although at points in time various vertical websites have experienced demotions, we find that this was a consequence of algorithm changes that also could plausibly be viewed as an improvement in the overall quality of Google’s search results….

Although our careful review of the evidence in this matter supports our decision to close this investigation, we will remain vigilant and continue to monitor Google for conduct that may harm competition and consumers.

Such limp-wristed rhetoric aside, there is a chance that Google will be brought to heel, just not by American authorities.  As it happens, the European Commission has also been investigating Google’s misdeeds, and the odds are good that, lacking the kind of political clout in Europe that it has in the USA, the company may actually receive from the Europeans something more than just a slap on the wrist.  On Dec. 18 the Commission gave the company 30 days to provide it with proposals to settle its complaints, something that could cost Google billions if it fails to do so.

Whatever the Europeans do, however, there remains the FTC’s foozled play, well put in a Bloomberg News editorial:

The FTC missed an opportunity to explore publicly one of the paramount issues of our day: Is Google abusing its role as gatekeeper to the digital economy?  Lawmakers, economists, other regulators, and consumers should all be in on this important debate over whether Google is leveraging its overwhelming dominance of search into unassailable market power in other areas. 

                                               

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.

The ITU and the Internet

In 1971, when China was first admitted to the United Nations, William Rusher quipped that it was “a case of loosing a China in the bullshop.”  Such is the first thought that comes to mind in reflection on the latest bit of mischief to issue from the UN, in this case courtesy of that body’s International Telecommunication Union (ITU).

The second thought is of the power of precedents in law and policymaking.  Policywise, precedents can be likened to the engine of a train, the caboose of which is incremental or galloping movement in the same direction.

So the take-away from the vote last week in Dubai by 89 countries, including such freedom-loving regimes as those of China, Russia, Iran, and Venezuela (you know, the usuals), is that it’s just a matter of time before many of those same countries claim the right, under the UN charter, to control the Internet through such things as filtering, identifying users, and surveillance.

Defenders of last week’s vote, like the head of the ITU, disingenuously claim that “The conference was not about Internet control or Internet governance….  And indeed there are no treaty provisions on the Internet.”  The key word here is “treaty,” since tucked away in the appendices, as reported by Ars Technica, is this sentence:

[WCIT-12 resolves to invite member states] to elaborate on their respective positions on international Internet-related technical, development and public-policy issues within the mandate of ITU at various ITU forums including, inter alia, the World Telecommunications/ICT Policy Forum, the Broadband Commission for Digital Development and ITU study groups. 

So for the first time, the precedent has been established that the UN is an appropriate body for the deliberation of policy issues affecting the Internet.  Never mind that this resolution is not binding on those countries, like the United States, which voted against the International Telecommunication Regulations.  The point survives: From this time forward the UN’s ITU will provide cover for those nations that wish to wall their citizens off from the open Internet.

Nor is this the only dangerous precedent to be noted in the context of the WCIT.  As warned two years ago by Ambassador Philip Verveer, the adoption by this country of so-called “net neutrality” regulations itself provides an opportunity for international mischief making.

As Robert McDowell, than whom no other FCC commissioner in memory has been right more often, put it in congressional testimony earlier this month:

Should the FCC’s regulation of Internet network management be overturned by the court, in lieu of resorting to the destructive option of classifying, for the first time, broadband Internet access services as common carriage under Title II, the FCC should revive a concept I proposed nearly five years ago – that is to use the tried and true multi-stakeholder model for resolution of allegations of anti-competitive conduct by Internet service providers….

If we are going to preach the virtues of the multi-stakeholder model at the pending World Conference on International Telecommunications (WCIT) in Dubai, we should practice what we preach.  Not only would the U.S. then harmonize its foreign policy with its domestic policy, but such a course correction would yield better results for consumers as well. 

                                               

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.

 

Reconsidering the FCC’s Political File Rule

The FCC’s recently minted rule requiring certain broadcast stations to post their political ad files online rather than, as is currently the case, in their local public inspection files, is not the kind of issue that is likely to stir the nation’s passions.  Regardless of how challenges to the rule pan out, very few people are going to run off and join the circus if things don’t go a certain way.

Still, it’s a more interesting issue than, on its face, it would appear to be – and there’s evidence that defenders of the rule, along with reporters, are not paying attention to some of the finer points being made in opposition to it.

As of today there are three separate challenges to the rule – one at the FCC, one at the Office of Management and Budget, and one in the U.S. Court of Appeals for the D.C. Circuit.  The petition for reconsideration at the FCC, signed by 12 TV station groups, is the most nuanced of the complaints.

As with the others, the FCC petitioners are mostly concerned about having to reveal online their spot-by-spot ad rates, but with this difference: The petitioners propose to aggregate such data in a way that would not reveal their ad rates but would actually make it easier for everyone, journalists included, to understand who is contributing to whom, and in what amounts, and in addition to include online the same kind of information for state and local candidates, something the FCC rule does not require.

Why the broadcasters are opposed to having to reveal online their political ad rates, when they already provide this information in their local public files, takes a little explaining.

Currently, broadcasters are required by law to offer political advertising to candidates for federal office at the “lowest unit rate,” which is the rate they charge their best commercial advertisers.  But these data are not that user friendly, and reviewing them requires, in any event, that someone physically go to a TV station for the purpose.  (For anyone so disposed, the process only grows more cumbersome as the date of an election draws near, because TV stations update their political files more frequently at that time.)

Campaign representatives sometimes do check these files to ensure that their candidates are not being charged more than their opponents, but commercial advertisers do not, and that fact touches on one of the main worries among the broadcasters: They fear that if they have to reveal online their spot-by-spot ad rates, some of their commercial advertisers (knowing that the political rates are based on what the stations charge their best commercial customers) will demand these rates for themselves.

It’s also bothersome to broadcasters that their media competitors, both in broadcasting and cable, would have access to this information, and it’s further been suggested that, as written, the FCC rule may encourage trial lawyers to file frivolous lawsuits against TV stations on behalf of losing candidates.

So in the case of the FCC petitioners, the question isn’t why broadcasters don’t want to provide their political files online (they are willing to do that), but why defenders of the FCC rule insist on requiring the online display of stations’ ad rates.

After all, one of the main goals of the campaign finance laws is to provide, in a timely way, information about candidate and issue expenditures.  It’s not the goal of these laws to compel TV stations to divulge their competitive secrets about ad rates and the like.

When asked about the unwillingness of the FCC to approve this simple modification to its rule – the Commission had this suggestion before it prior to its vote in late April – a communications lawyer prominently involved in the matter said that, in the wake of the Citizens United decision, everything touching on campaign finance has taken on a kind of “religious aspect,” such that advocates of campaign finance laws are these days unwilling even to grant such harmless accommodations as those presented by the petitioners.

Notable by their absence from the FCC petition are the station groups owned and operated by the Big Four TV networks.  Lawyers for the petitioners note that the networks supported the suggested “aggregation” approach prior to the FCC’s vote, and aver that they support the petition now.

That may be right, but if so it’s hard to confirm.  It may be, instead, that the networks don’t like the odds that the FCC will accommodate the petitioners, or that they are unhappy about the petitioners’ proposed inclusion of political ad information about candidates for local office.

For its part, the National Association of Broadcasters has appealed the FCC’s rule to the OMB, claiming that the obligation to put the political files online is unduly burdensome, and in conflict with the Paperwork Reduction Act.

There may well be real merit in these other concerns, and in the arguments to be fleshed out in the broadcasters’ lawsuit in the D.C. Circuit, but it’s the modest proposal made by the FCC petitioners that shines the brightest light on how hard it is these days to forge reasonable compromises in a deeply divided nation.

                                  

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.

Google and the First Amendment

By guest blogger KURT WIMMER, ESQ., partner at Covington & Burling LLP in Washington, D.C., and chairman of The Media Institute’s First Amendment Advisory Council.

I just had the privilege of participating in a panel discussion at an American Antitrust Institute conference.  My panel included such luminaries as Eli Noam of Columbia, Gene Kimmelman of the Antitrust Division of the Department of Justice, and Susan DeSanti of the Federal Trade Commission.  Unlike many of my colleagues on the panel, I’m far from being an antitrust expert.  My topic was a more familiar one – whether enforcement of antitrust law against a search and advertising provider would violate the First Amendment. 

The question arises because of a novel proposition being advanced by Google.  The Federal Trade Commission is investigating claims that Google has violated antitrust law by manipulating search results to favor its own services and bury the services offered by vertical search engines that might compete with Google.  Google has argued that it is absolutely immune from antitrust liability because its search results constitute speech protected by the First Amendment – in fact, it asserts that the First Amendment actually “blocks” the application of antitrust law to it.  Google analogizes its work to that of a newspaper editor selecting information for publication, and seeks the same “absolute” protection that a newspaper editor would receive under the First Amendment.

But wait – newspaper editors don’t receive absolute protection under the First Amendment.  If editors’ work is absolutely protected, why did I spend last night discussing a story with an editor to mitigate defamation risk?  Why did I defend a deposition last week of a reporter attempting to keep his source confidential?  Why have reporters gone to prison in the United States to protect sources?  Why are some in Congress talking about doubling down on legal restrictions to stop leaks to the press?

The First Amendment is not absolute, and never has been, for anyone, whether they run a newspaper, a blog, or a search-and-advertising business. False and deceptive speech, as Google’s manipulated search results are alleged to be, falls outside the protection of the First Amendment.  Jon Leibowitz, chairman of the FTC, made precisely this point in an All Things Digital interview just this month, and he’s precisely right as a matter of constitutional law.  Otherwise, the FTC would have no jurisdiction to enforce privacy laws or laws against false advertising and deceptive trade practices.

Of course, non-deceptive speech also may be regulated in many circumstances.  The antitrust laws, which regulate commercial behavior to promote competition, are an example of laws that may permissibly restrict certain kinds of speech.  The plain fact is that “the First Amendment does not provide blanket protection to restraints of trade effectuated through speech,” in the words of the Department of Justice.  This principle has been applied consistently since the Supreme Court affirmed an antitrust judgment against the Associated Press in 1945, and remains the law today.

Google’s arguments that it is uniquely immune from antitrust liability, regardless of how it has abused its massive market share, remind me of the quaint musings of early Internet pioneers that law cannot apply in “cyberspace.”  But the same law that applies offline generally applies online (in the absence of online-specific legislation such as Section 230), and damage to competition that may occur on the Internet can destroy real businesses in the real world.  No one is above the law – not even Google.  Whether any of the allegations against Google can be proved, of course, remains to be seen.  But to assert at the very outset that the First Amendment actually “blocks” liability, regardless of what the FTC or a court might find, ignores the law.

If you’d like to read more, the Media Institute has graciously agreed to host my paper (available here) that addresses these issues in more depth.

                                  

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.

Julius Genachowski and Broadband Billing

Comments made earlier this week by FCC chairman Julius Genachowski have raised hackles at Free Press and kindred groups.  The occasion was the Cable Show in Boston, and the offending subject was what is called “usage-based billing” – the radical notion that people who use more of a thing should pay more than those who use less.

In a Q&A session with Michael Powell, former FCC chairman and current CEO of the National Cable and Telecommunications Association, Genachowski avowed that there was much to like about broadband providers basing their charges on usage (rather than on a one-size-fits-all basis).

This wasn’t the first time Genachowski had endorsed this practice – it was part of the net neutrality regulations that the FCC promulgated a couple of years ago – but it was enough to provoke the simple folk at Free Press into eruptions of their usual blather.

The last time broadband billing was discussed in this blog (April 2009), the news was Time Warner Cable’s decision, under fire from people and organizations like Free Press, Public Knowledge, and Sen. Charles Schumer, to suspend its trials of this kind of billing in a handful of cities.

As reported at the time, the air was thick with celebration as the “victors” issued triumphant statements on the occasion.  Triumphant no more, they have been reduced, in response to Genachowski’s comments on Tuesday, to broadsides and bromides like this one from Matt Wood, policy director of Free Press: “The data caps being pushed by the biggest cable companies are bad for consumers … and the FCC should be investigating these caps, not endorsing them.”

But enough about broadband billing per se.  The more noteworthy thing about Genachowski’s comment is that this marks at least the third time that he has demonstrated his independence from the louder voices among communications policy outfits.

The first time was with the FCC’s adoption of what came to be called “net neutrality lite,” and the second was when he hired Steve Waldman to head up the agency’s “future of media” report, a document that steered clear of the most intrusive and inappropriate kinds of recommendations that had been proposed for it.

None of this is to say – nor would the gentleman necessarily welcome our saying – that Mr. Genachowski is the very model of what one looks for in an FCC chairman.  Though the net neutrality regulations are much better than what they might have been, better still would be no such regulations at all.

Still, in an environment as divisive as Washington’s, it’s probably a good idea once in a while to step outside of it all and give credit where credit is due.  So props to Julius Genachowski for his embrace of usage-based broadband billing.  ’Tis a fine thing he’s done.

                                     

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.

 

Locking Up Reporters at the DOL

If, like many people, you’re an investor, you are already familiar with the market-moving impact of government data, like the Department of Labor’s monthly payrolls and unemployment figures.  What you probably don’t know is how the DOL has for decades arranged for the release of this information, or that it plans to change the procedure in July.

In order to ensure the simultaneous release of the data, the Department conducts what it calls a “press lock-up.”  It works this way: At 7:30 a.m. on the day figures are to be released, about a dozen reporters arrive at the DOL; at 8 a.m. they surrender their mobile devices, are locked in a room with the electricity cut, and are given the data.  The reporters then use their own computers and software to write their stories, often with analysis and graphs, so that when the DOL restores electricity at the release time, 8:30 a.m., they can transmit their stories over their own dedicated lines.

By all accounts, this procedure has worked well and has provided the public with timely and important information, delivered in context by professional news organizations.

On April 10, however, the department announced major changes in this procedure, the most important of which are these: All computers and communication lines, which to date have been owned by the participating news organizations, are to be removed and replaced with government-owned computers and telecommunications lines; all participating news organizations’ press credentials will expire, and those news organizations that wish to participate in the future must apply for new credentials; and those groups that do apply “will be considered as an overall group and not necessarily on an individual basis (that) distributes a variety of news products that reach a wide and diverse audience.”

There are some disturbing policy aspects as well as practical problems with this scheme, and an important question that the Labor Department refuses to answer: What’s wrong with the old system, and why change it?

By requiring reporters to draft their stories on government computers, the DOL is, in effect, obliging them to turn over their notebooks.  Moreover, there appears to be less security in the new plan, since all data would be released through the Internet rather than, as is presently done, through dedicated and redundant lines owned by the participating media.

Because the new policy was announced without notice or comment, the DOL arranged a conference call with reporters on April 16, presided over by the department’s press spokesman, Carl Fillichio.

In the same way that great truths are sometimes revealed, if unwittingly, by the smallest people, the transcript of this call speaks volumes.

Witness, for instance, this exchange during the call:

Daniel Moss (Bloomberg News): I’m just wondering, why is the Labor Department choosing to do this now?  What is the problem that you believe you are trying to fix given the master switch is already in place and working effectively?

Carl Fillichio: It’s been, as I mentioned, 10 years since we took a holistic view of the lock-up, and times have certainly changed.  Why now rather than any other time?  Now is the right time to do it.

Daniel Moss: What is the problem that you imagine you’re trying to fix given there is an effective master switch there already that controls access out of the room for the information?

Carl Fillichio: There’s nothing we necessarily expect.  I think we’re doing prudent business management of reviewing our systems and looking at the changes in technology and the way that the news is delivered and have decided that now is the correct time to institute these changes….

Daniel Moss: Do I interpret your response, Carl, as meaning there is no current problem?

Carl Fillichio: What I’m trying to do is prevent a problem, Daniel.

Daniel Moss: What is the problem you think, you imagine, that this will prevent?

Carl Fillichio: I think we’re going to move on.  Operator, we’ll take the next question. 

There’s more like this, lots more, with some of the better ones being Fillichio’s exchanges with Steven Goldstein of MarketWatch, and Mark Tapscott of the Washington Examiner.  You can read the whole of the transcript here.

Though he never says that any violations of the lock-ups are the cause of the new policy, nor that the new policy will correct for any such violations, Fillichio does aver that two reporters in the past were “suspended” from the lock-ups.  Since he refuses to elaborate on these alleged past infractions, much less to say that they were of the sort that necessitated the new policy, one is left to wonder.

Seems hard to believe that the problem would have been early public release of the data since, if anyone did so, the other news organizations would know about it and loudly object.  Perhaps there were instances where the data fell into the hands of traders who used it to buy or sell stocks in the pre-market, but if so these would likely be seen as a form of “front running” or “insider trading,” both of which are illegal and in the province of the SEC.

Apart from the practical and policy problems with Labor’s new lock-up plan, there is the interesting question of the wisdom in it.  Owing to the growing concern with invasions of privacy by corporations like Google, and governmental bodies like the Department of Homeland Security, why would anyone think this is the right time to formulate a policy that further widens government’s reach, even if benignly, into reporting of the news?

As this note is being written, May 5, there are reports that a coalition of media and "open government" organizations may soon file a letter with the Department of Labor asking that the new policy be delayed until after some further discussion of it with media representatives.

One hopes the coalition will do so, and also that, in step with the “holistic” approach that Fillichio cites no less than three times in the Q&A, the DOL will see the wisdom in stopping, looking, and listening.

 

Orts and All

Regulating the ’Net.  Much has been alleged in recent days about the risks to the independence of the Internet were the copyright bills currently before Congress to become law.  As mentioned here and here, the most extravagant of these allegations are flummery of the first water, but copyright issues aside, the ’net is indeed on the cusp of a significant transformation.

Evidence of this can be seen in the actions of the FCC, whether on its own initiative or by its implementation of regulations after passage of legislation into law.  The Commission’s codification of  "net neutrality" rules was the first example of the Internet’s capture.  The action currently underway by the FCC to promulgate regulations re the 21st Century Communications and Video Accessibility Act, a law which, among other things, mandates captioning for online video, is another.

Goes without saying that making online video accessible to the deaf is a nice thing to do, and for many that’s the end of the story.  But people who are familiar with the way laws and regulatory policies evolve know that things like these have a precedential impact in Congress, the courts, and the regulatory agencies, and that very often these precedents are then offered up in justification of other laws or rules that are not so nice.

In any case, the point here is that it’s already too late in the day for people who have an idealistic interest in the Internet to fret the future loss of its independence.  Thanks to the majority at the FCC and/or in Congress, the Internet’s pristine independence has already been lost.

Media Matters.  The organization called Media Matters for America, which exists to demean and (where possible) destroy conservative journalists and organizations like FOX News, has now come out with a contrived accusation against George Will.

The gravamen of MMA’s contrivance is that, as a Board member of a conservative grant-giving group (the Bradley Foundation), Will should be required to mention this connection whenever he writes about or cites the work of any of the groups to which Bradley contributes!

Given that Bradley funds a very large number of conservative think tanks and other enterprises, this would mean, as a practical matter, that Will would have to include this disclosure pretty much all the time since he is, after all, a conservative himself and cites these organizations’ work frequently.

As the Washington Post’s executive editor put it, in reply to a request from MMA for comment: “Is it seriously a surprise to you that George Will quotes experts from conservative think tanks more often than he quotes experts from liberal think tanks?”

What a relief! The latest news is that Keith Olbermann, who is faithfully viewed nightly by at least 16 people, may be staying on at Current TV, a network that captures the imagination of dozens.  

It’s been a close call for the past few days, but as this is being written word is out that Olbermann and management of Current, who have been at loggerheads over something or other, have resolved their differences.  So a country that has been paralyzed with fear that things might not work out can breathe again. What a happy day.

                                  

The opinions expressed above are those of the writer and not of The Media Institute, its Board, contributors, or advisory councils.

Net Neutrality’s Poison Petition

For those in the communications policy business, perhaps the most jaw-dropping datum to issue from Tuesday’s elections is this: Of the 95 candidates for the House and Senate who signed a petition encouraging “net neutrality” regulation, all of them lost.  Not some of them.  Not most of them.  All of them.

It’s really quite remarkable.  Not even the Black Death killed everybody.  But there it is, a new world record for political toxicity.  The humorous aspects of this debacle aside, there is a serious lesson here: There is no appetite in this country for regulatory schemes whose effect is to promote government (and a few companies) at the expense of private-sector investment generally.

Yet this is precisely what net neutrality regulations, whether Lite or industrial strength, would do.  Intended or not, codified regulations would inevitably lead to government meddling in this freest part of the communications industry, and frustrate the kind of investment in the broadband infrastructure without which there can be no growth in this vital sector of the economy.

And for what?  As mentioned here, net neutrality is the condition that obtains today!  Nobody is being deprived of, or disadvantaged by, anything worth talking about.  Indeed, a quick look at the kinds of organizations that have been promoting net neutrality pretty much says it all.

On the one hand we have groups like Free Press, whose interest in the subject is precisely because of the potential in governmental oversight to yoke communications companies to the agenda of the nation’s “progressives.”  While on the other you have a company like Google that, in the best tradition of crony capitalism, wants to tilt public policy in a direction that benefits its private interests.

It is widely believed that FCC Chairman Genachowski would like the FCC to be relieved of the task of codifying the net neutrality rules.  He is to be commended for his reservations, especially since he is under great pressure from the net neutrality lobbies to act.

The wise course now would be to let the clock run out on any kind of FCC action.  If the Republican gains in Tuesday’s elections don’t speak clearly enough about the matter, surely the fate of the hapless signers of the net neutrality petition does.

[Updated 11-4-10, 1:50 p.m. EDT, to reflect latest election results.]

                                                   

               
The opinions expressed above are those of the writer and not necessarily of The Media Institute, its Board, contributors, or advisory councils.
 

Shedding Light on Title II and the First Amendment

Now that FCC Chairman Julius Genachowski has proposed what Broadcasting & Cable’s John Eggerton artfully calls a “Title II Lite” approach to broadband regulation, it’s a good time to take a second look (or maybe your first) at a recent paper by Robert Corn-Revere.

Bob wrote a Perspectives policy paper for The Media Institute titled “Defining Away the First Amendment,” which we released May 4.

This noted First Amendment attorney makes a crucial point – but a point that has not received adequate attention: “The FCC’s current ability to change the level of First Amendment protection for a medium simply by changing its regulatory definition is quite limited, if not nonexistent.”

Whoa, you mean there’s a First Amendment dimension to this reclassification debate?  You’d never know it by listening to the FCC, or to “net neutrality” supporters like Free Press.  Maybe that’s not surprising, since the First Amendment could very well prove an unwelcome stumbling block for Chairman Genachowski and his net-neutrality ilk.  Easier for them just to ignore it.

But, I would suggest to you, the First Amendment is far too important to ignore here.  In his issue paper, Bob Corn-Revere has shed some much-needed light on a pivotal concern that the FCC has tried to keep in the shadows.  Taking a “lite” approach to Title II reclassification doesn’t absolve the FCC of its constitutional obligations.  If anything, we need more “light” from Bob and others who are willing to hold the FCC accountable for the First Amendment ramifications of its regulatory agenda.