Even though the Supreme Court has just ruled, in Reno v. American Civil Liberties Union, that Congress' attempt to stop indecency on the Internet, the Communications Decency Act, is unconstitutional, it is likely that Congress will try again sometime soon. Already, bills have been introduced to try to stop unwanted commercial e-mail. But rather than trying to control Internet content through censorship laws, Congress should provide a simple tool that enhances technological solutions to unwanted material and should challenge the Internet community to bring such material under control.
Instead of trying to criminalize offensive content, Congress should simply criminalize the knowing misrepresentation of a work by the attachment of a fraudulent content rating. Content rating would not be required by this law, but providing one that is deliberately deceptive would be a punishable offense.
It is not necessary to make the provision of ratings a legal requirement. The laws of economics will do that job for Congress. As more and more users (or their parents) set their viewing software to reject unrated works automatically lest they contain unsuitable material, content providers that want to reach the largest possible audience will be driven by market forces to rate their works. Software providers should offer browsers that reject unrated material by default and should instruct clients as to how to alter that default behavior to suit their own needs.
There would be no need for the proposed law to specify a particular ratings system. Any system would be acceptable so long as its criteria were clearly specified and understandable to content providers and users. New systems can evolve, based on experience and user preference, with no need for legislative oversight.
Most rating systems would include digital codes, recognizable by filters, that identify the particular system being used; it would therefore be unlikely that anyone could accidentally produce an incorrect content rating.
And by defining "rating system" broadly, "spamming" -- the massive and inappropriate sending of messages to newsgroups or e-mail users -- could be stopped. If, for instance, a rating could be attached to all legitimate messages indicating that they are not unsolicited advertising, or that a particular mass posting is appropriate to a newsgroup, users would be able to filter out unwanted messages. There would be little economic incentive to spam if there were few recipients and if criminal sanctions could be invoked against those who used the ratings improperly.
The ratings that currently exist are of limited value. Most must be provided by a service, which cannot possibly view the millions of pages of information now on the Internet. Furthermore, individuals with Web sites are unlikely to pay for a rating service when they are providing free information. Self-rating will be necessary if most Internet content is to be rated and not rejected by user filters. At present, self-rating is rare and is, in any case, not trusted by concerned users. Criminalizing misrating would increase users' confidence in self-ratings.
Like the Communications Decency Act, the proposed law would provide no control over content originating abroad. But the same economic solution would apply. If a database mapping network addresses to countries existed, the country associated with a particular address could be determined, and users could boycott works from nations that failed to enforce a valid ratings system. This loss of an audience would encourage legitimate content providers in foreign countries to press for legislation on false marking similar to the law proposed here.
A database that identifies the country of an Internet address could serve another function: it could be used by content providers anxious to avoid breaking other laws, either at home or abroad.
Why resort to criminal law when there are technological solutions? Because there will always be people who don't want to follow rules. Some people simply decide that their economic advantage outweighs any breach of "netiquette." There are those who take pleasure in flouting convention, or who feel that, as a matter of principle, the Internet should have no rules. And if even a small number of providers misrate their content to get around filters, the information necessary for filters can't be trusted. Criminal sanctions are, unfortunately, the only guarantee.
The Communications Decency Act was based on control of content and consequently ran afoul of the First Amendment. This proposal would not criminalize content; it would not limit anyone's right to place anything on the Internet. It would simply require that content providers, if they choose to rate a work, do so honestly. Anyone could define a ratings system, and subscribing to one would be voluntary. These factors should be sufficient to allay First Amendment fears.
Anyone would be able to opt out without penalty, albeit at the loss of some portion of their audience. But users would be able to decide what they want to view, just as they can choose not to listen to a speaker they do not wish to hear.
In arguments against the Communications Decency Act, the Internet community has said that filtering based on ratings could solve the problem of unwanted material. Congress should pass the simple criminal provision that would make this system enforceable. The Internet community, inventive as it is, should develop and promote self-rating and educate users about its advantages. If this works, a heavy-handed censorship law won't be necessary. If it doesn't, Congress will have the justification it needs for a new decency law.