Congress: Don't censor the internet, but help users avoid unwanted material

Lee A. Hollaar
Professor of Computer Science
University of Utah


Copyright © 1997 by Lee A. Hollaar. You may copy or distribute for non-commercial purposes, as long as you do not make any changes and include this notice. For commercial distribution, contact the author at hollaar@cs.utah.edu.


Even though the Supreme Court has just ruled that Congress' attempt to stop indecency on the Internet, the Communications Decency Act (CDA), is unconstitutional, it is likely that Congress will try again to censor the Internet. Bills* already introduced try to control commercial email. But rather than trying to control Internet content by censorship laws, Congress should provide a simple tool that enhances technological solutions to unwanted material, and challenge the Internet community to bring unwanted material under control.

Just as digital technology, with its ability to make perfect copies at little or no cost, has substantially affected copyright law, the Internet has established a new regime for the distribution of works. For under $10 a month paid to an Internet service provider, a person can become a worldwide distributor of material.

But unlike past distribution systems for books or magazines, where you could get an idea of the nature of a work from where it was found (adult material is not found in toy stores) or from its packaging (while you can't tell a book by its cover, you can make an educated guess about its suitability), you don't know what an Internet work contains until you access it. Browsing on the Internet can be like going into a store marked only "Books" where every book has a blank cover. It is hard to avoid unwanted material while trying to find what you really want.

Material can be unwanted for a variety of reasons. The Communications Decency Act treats any indecent material as unwanted by minors (or, at least, unwanted for them by their parents). Adults may also find indecent material unwanted. Junk email, newsgroup "spam" -- massive posting to unrelated newsgroups -- or unrequested commercial solicitations may also be unwanted material. But the difficulty with the CDA was that it tried to block the distribution of material based on its content, with content providers unsure whether their material was indecent, or how to make sure it could not reach children while still allowing adults to access it anonymously.

To illustrate the difficulties posed by the CDA, when I placed this article on this Web site, I also included some background material on the CDA. As part of the CDA legislative history, the Supreme Court case FCC v. Pacifica is cited for its discussion of indecency and for the proposition that indecency can be regulated, at least in some electronic communications contexts. But Pacifica includes an appendix reproducing the material that was judged indecent -- George Carlin's comedy routine on the seven words that can't be said in a radio or television broadcast. Because I included Pacifica and its appendix, under the CDA I would be required to have some way of assuring that nobody reading it is a minor, even though the only indecency is found in the Supreme Court decision that underlies the CDA promoters' argument that the CDA was constitutional.

An alternative solution to the problem of receiving unwanted material is to have the supplier of material indicate its nature in a way that can be reviewed by the user or a filter before the actual material is viewed. Filters were proposed by the opponents of the CDA as an alternative means for preventing indecent material from reaching children. However, companies providing filters are unable to rate more than a small fraction of the sites on the Internet, and cannot continually review a site to assure that its content remains acceptable. Self-rating is necessary, but nothing now prevents the provider of material from intentionally misrating the material.

Rather than trying to criminalize offensive content, Congress should instead criminalize knowingly misrepresenting a work by attaching a fraudulent content rating. In particular, two related acts would be criminal offenses -- intentionally using a content certification mark without the permission of its owner, and marking a work with a rating of an established content rating system knowing that the work does not meet the standards for that rating. (For more details on these two provisions, see the proposed legislation.) This is similar to section 43 of the Trademark Act, which imposes civil liability for the false description of goods or services, or the criminal provisions of the Trademark Counterfeiting Act.

Like the CDA, this proposed law would not be a replacement for the current obscenity laws. The constitutionality of regulating obscenity has been consistently upheld by the Supreme Court. It is only when a regulation burdens material that is indecent but not obscene, as the CDA attempted to do, that constitutional difficulties arise.

The proposed law is not limited to digital works on the Internet. A covered work would include any writing, image, sound recording, video game, or other carrier of information available in the United States through interstate or foreign commerce, including works distributed by telecommunications systems. This would protect not only Internet content rating systems, but also those for video tapes, games, books, or magazines. On the Internet, rating systems could be developed for Web pages, electronic mail, newsgroups and mailing lists, and chat rooms.

The first provision of the proposed law protects content certification marks, so that they are used only with works that meet the conditions set by the person or group establishing the mark. This is similar to current certification marks that identify that a product meets some specified criteria, such as quality grade marks for plywood. A content certification mark could be an image, a digital code, or some other indication whose presence could be recognized by the user or the user's content-viewing software. The owner of the content certification mark establishes the rules for using that mark with a work, and may actually review the content before allowing the use of the mark, or may allow the use of the mark by anybody meeting the stated criteria. This would allow interested groups, such as the PTA, to establish standards for Internet information and allow their "seal of approval" to be included with any work that meets those standards.
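
To make the mechanism concrete, here is a minimal sketch (in Python, with hypothetical mark identifiers and a hypothetical metadata field) of how content-viewing software might check the certification marks a work claims against the marks its user has chosen to trust. The software checks only that a mark is present; the honesty of the claim is what the proposed law would police.

    # Minimal sketch: check the certification marks a work claims against the
    # marks this user trusts. The mark identifiers and the "X-Content-Cert"
    # metadata field are hypothetical; a deployed scheme might instead embed
    # marks as images, META tags, or digital codes.

    TRUSTED_MARKS = {
        "pta-approved/1.0",     # hypothetical "seal of approval" mark
        "plywood-grade-a",      # placeholder for any certifier-defined mark
    }

    def claimed_marks(metadata):
        """Return the set of certification marks a work claims in its metadata."""
        field = metadata.get("X-Content-Cert", "")
        return {mark.strip() for mark in field.split(",") if mark.strip()}

    def carries_trusted_mark(metadata):
        """True if the work claims at least one mark the user trusts."""
        return bool(claimed_marks(metadata) & TRUSTED_MARKS)

    print(carries_trusted_mark({"X-Content-Cert": "pta-approved/1.0"}))  # True
    print(carries_trusted_mark({}))                                      # False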

The second provision protects a particular type of content certification mark -- one that indicates that a work meets the criteria of a particular rating within a content rating system. A content rating system consists of a series of ratings that specify the suitability of a work for particular age groups, particular audiences, or any other way of characterizing the contents of a work into one or more categories.

Content ratings would not be required by the legislation, but intentionally giving an inaccurate rating under an established content rating system would be punishable. The laws of economics, rather than Congress, will make content providers rate their content. Since many users (or their parents) will have their browsers reject works that are not rated, because an unrated work may contain unwanted material, content providers wishing to reach the largest audience will have a strong economic incentive to rate their works. Software providers can help by having browsers reject unrated material by default, and by educating users on how to alter that default behavior to block only material they don't want.
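
A minimal sketch of that default behavior, again in Python and with placeholder rating names, is below; the single "block_unrated" setting is the default a user or parent could later relax.

    # Sketch of a browser filter that rejects unrated works by default.
    # The rating names are placeholders for whatever system the user selects;
    # block_unrated is the default setting a user (or parent) could relax.

    ACCEPTABLE_RATINGS = {"suitable-for-children", "parental-guidance"}

    def should_display(rating, block_unrated=True):
        """Decide whether to show a work, given its claimed rating (or None)."""
        if rating is None:               # the work carries no rating at all
            return not block_unrated     # rejected unless the user opts in
        return rating in ACCEPTABLE_RATINGS

    print(should_display("suitable-for-children"))    # True
    print(should_display(None))                       # False by default
    print(should_display(None, block_unrated=False))  # True once relaxed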

Just as the proposed law doesn't require marking, it doesn't specify a particular rating system. Any rating system whose criteria are clearly specified by its promoter will be protected. New rating systems can evolve based on experience and user preference, with no need for legislative oversight. The Platform for Internet Content Selection (PICS) allows the promoter of a rating system to create a description of the system that can be incorporated into a Web browser or other content-viewing program to support the rating system.
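
As an illustration, here is a simplified, PICS-style label (the rating service URL and category values are made up) together with a small Python sketch of how a content-viewing program might read it and compare it against a user's limits.

    import re

    # Illustrative, simplified PICS-style label. The rating service URL and
    # the category values are made up; a real label would follow the
    # promoter's published description of its categories and scales.
    LABEL = ('(PICS-1.1 "http://ratings.example.org/v1" labels '
             'ratings (language 2 nudity 0 violence 1))')

    def parse_ratings(label):
        """Extract {category: value} pairs from the label's ratings clause."""
        match = re.search(r"ratings \(([^)]*)\)", label)
        if not match:
            return {}
        tokens = match.group(1).split()
        return {tokens[i]: int(tokens[i + 1]) for i in range(0, len(tokens), 2)}

    # The user's limit for each category; anything above a limit is blocked.
    LIMITS = {"language": 2, "nudity": 0, "violence": 1}

    def acceptable(label, limits=LIMITS):
        ratings = parse_ratings(label)
        return all(ratings.get(cat, 0) <= limit for cat, limit in limits.items())

    print(parse_ratings(LABEL))  # {'language': 2, 'nudity': 0, 'violence': 1}
    print(acceptable(LABEL))     # True with the limits above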

For a marking to be protected, the rating system must already have been established at the time the work is marked -- its criteria must be clearly spelled out and available both to the people who will be rating content and to those who will be using it to block unwanted content. Since most ratings would involve particular codewords or other items specified by the rating system, it would be difficult for persons misrating a work to claim that they were unaware of the rating system's criteria.

Ratings could be assigned by submitting a work to a rating panel, answering a questionnaire from the group that established the rating system, or simply reading the rules for the rating and attaching the appropriate rating to the work. While motion pictures are now rated by a panel, and video games and some Internet content are rated by completing a questionnaire, given the large number of pages accessible through the World Wide Web, most of which don't contain objectionable material, self-rating will probably become the preferred alternative, especially with criminal sanctions against those who knowingly misrate their works.

Existing rating systems range from the very simple (suitable for children, parental guidance suggested, unsuitable for children) to systems with many factors and a variety of ratings for each factor, and new systems can be developed by content providers and users based on experience with existing systems. Content providers marking their works are not required by the proposed law to update their marking as rating systems change, since the act covered by the law takes place with the marking of the work, not its continued delivery.

Rating systems would generally be defined so that a restricted rating also covers works suitable for a wider audience. Using the movie rating system as an example, a work that could be rated G could also carry a rating of PG, R, or even X without it being a misrating of the work. While this could result in a person searching for X-rated works occasionally finding something less prurient, it allows a content provider to err on the conservative side when it is not clear whether a work crosses the line between one rating and another. Alternatively, the content provider could use a rating system that better describes the work, although that may cost it any audience that uses one rating system and not another. Content providers wanting to reach the largest audience will rate their content under most rating systems so that it will be accepted by most filters, and users will want to filter using a number of systems, so that suitable works rated under only a single system will still be viewable.
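
A sketch of how a filter might treat such an ordered rating system, using the movie ratings above purely as an illustration:

    # Sketch: a rating system whose ratings are ordered from least to most
    # restrictive, using the familiar movie ratings only as an illustration.
    ORDER = ["G", "PG", "R", "X"]

    def allowed(work_rating, user_ceiling):
        """Show a work only if its rating is at or below the user's ceiling."""
        return ORDER.index(work_rating) <= ORDER.index(user_ceiling)

    # A provider unsure whether a work is G or PG can attach PG (or even R)
    # and stay on the conservative side: stricter users simply won't see it,
    # and no one receives material marked as milder than it actually is.
    print(allowed("PG", "PG"))  # True
    print(allowed("R", "PG"))   # False: the conservative rating blocks it
    print(allowed("G", "X"))    # True: a searcher for X-rated works may see it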

In the rare case when no rating system can accurately describe a work, the content provider could create a descriptive page that could be rated, which then links to the unrated content. The descriptive page would indicate that the linked page is unrated, and any user filter would have to accept unrated content if that page were to be viewed. Filters such as the one in Microsoft's Internet Explorer bring up a message window when unrated content is received and blocking of unrated content has been selected, allowing the user (or, more likely, the user's parent) to type a password to view the work. For example, if I were unsure how to rate the Pacifica decision because of its appendix (which contains the "seven dirty words"), I could set up a descriptive page indicating that the work is a Supreme Court decision, but one that reproduces language that may offend in order to illustrate what was being decided. I would then link to the decision, but not rate it. I could also include a link to an edited version, for those who want to read the opinion but avoid the appendix. The user would then have the information needed to decide whether to see or avoid language that may be unwanted.

While ratings now exist for Internet material, they are rarely used because most require that material be rated by a service, which cannot possibly view the millions of pages of information now on the Internet. And individuals with Web sites are unlikely to pay for a rating service when they are providing information for free. Self-rating will be necessary if most Internet content is to be rated and not rejected by user filters. But few people currently filter based on self-ratings because they can't be trusted. Criminalizing misrating would increase users' confidence in self-ratings.

While most people think of ratings in terms of the appropriateness for children of a work, based on its language or images, the proposed law views ratings in a broader sense -- anything that characterizes the content of a work. This could be used to reduce spamming -- the posting of inappropriate material to a number of newsgroups -- by attaching a rating indicating that the posting is appropriate for a newsgroup, allowing users to filter out any posting not having the rating. Any spammer using the rating necessary to get through the filter would violate the proposed law and could be prosecuted. (Of course, you would have to get a federal prosecutor to bring the case, but even the potential of prosecution could make people reconsider their spamming.) A similar approach could be used to reduce unsolicited junk email, by having a rating indicating that the message is not an unsolicited advertisement or other category of undesirable email. There will be little economic incentive for junk email if there are few recipients, and those who improperly use these special ratings to avoid users' filters could be criminally sanctioned.
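
For the junk-email case, a minimal sketch follows, assuming a hypothetical header name and rating value; the point is that a sender who attaches the rating untruthfully to slip past the filter would be misrating the message under the proposed law.

    import email

    # Sketch: keep only mail that carries a "not an unsolicited advertisement"
    # rating. The header name and rating value are hypothetical; a spammer who
    # attached this rating untruthfully to slip past the filter would be
    # misrating the message and could be prosecuted under the proposed law.
    REQUIRED_HEADER = "X-Content-Rating"
    REQUIRED_VALUE = "not-unsolicited-ad"

    def keep_message(raw_message):
        msg = email.message_from_string(raw_message)
        return msg.get(REQUIRED_HEADER, "").strip() == REQUIRED_VALUE

    wanted = "From: friend@example.org\nX-Content-Rating: not-unsolicited-ad\n\nHello"
    junk = "From: bulk@example.org\nSubject: BUY NOW\n\nGreat deal!"
    print(keep_message(wanted))  # True: the rating is present
    print(keep_message(junk))    # False: filtered out, shrinking the audience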

Like the Communications Decency Act, the proposed law provides no control over content originating from a foreign country. However, it is possible to develop a listing of Internet addresses or network numbers for sites located in countries that do not have a law addressing false rating of content. Concerned users could then treat works from such countries as if they were not marked. This loss of an audience would encourage legitimate content providers in a foreign country to press for legislation on false marking in their country.

A database indicating the country in which a particular Internet address is located would also be important to content providers who are concerned about a work reaching a particular country, whether because of export restrictions (such as those on cryptographic software), content restricted by the destination country (such as Germany's restrictions on Nazi material), agreements limiting the distribution of a work to particular countries, or simply a desire to avoid the jurisdiction of a particular country by avoiding any substantial contact with it.
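
A hedged sketch of such a country lookup follows, using a small local table of placeholder network prefixes (a real database would be far larger and regularly updated). The same lookup serves both uses: treating works from countries without a false-rating law as unrated, and letting a provider check where a request comes from.

    import ipaddress

    # Sketch: look up the country for a network address from a small local
    # table. The prefixes and country assignments are placeholders
    # (documentation address blocks), not real allocations.
    PREFIX_TO_COUNTRY = {
        ipaddress.ip_network("192.0.2.0/24"): "US",
        ipaddress.ip_network("198.51.100.0/24"): "DE",
    }

    # Countries assumed (for this sketch) to have a false-rating law; works
    # arriving from anywhere else could be treated as if they were unrated.
    HAS_FALSE_RATING_LAW = {"US", "DE"}

    def country_of(address):
        addr = ipaddress.ip_address(address)
        for network, country in PREFIX_TO_COUNTRY.items():
            if addr in network:
                return country
        return None   # unknown: the cautious choice is to treat works as unrated

    def trust_rating_from(address):
        return country_of(address) in HAS_FALSE_RATING_LAW

    print(country_of("192.0.2.10"))          # 'US'
    print(trust_rating_from("192.0.2.10"))   # True
    print(trust_rating_from("203.0.113.5"))  # False: treat its works as unrated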

But why a criminal law when there seem to be technological solutions to unwanted material? Because there will always be people who don't want to follow "netiquette" -- otherwise there would be no spamming of newsgroups or email. Some people see their economic advantage as far outweighing any ill will from Internet users. Others like to flout conventions, perhaps because they feel the Internet should have no rules. But if even a few people misrate their content to get around filters, the information the filters depend on can't be trusted. Without the fall-back of criminal sanctions, a few bad apples can spoil the technological solutions.

Unlike content-based laws such as the CDA, the proposed law poses no First Amendment problem. It would not criminalize any particular content. It would not limit anyone's right to place anything on the Internet. Anyone can opt out without penalty, except for the loss of some audience. It only requires that content providers who rate a work not give it a fraudulent rating. There is no free speech right to misuse a trademark to cause confusion among users, or to use another's certification mark on goods that don't meet the requirements of that mark.

The proposed law lets users trust the rating attached to a work, knowing that anybody providing a false rating can be criminally prosecuted. Rating filters can also let the user decide how to handle unrated works, and a user concerned about unwanted content would tell the filter to treat an unrated work as if it had an unsuitable rating. This is what will encourage content providers to rate their works -- to reach the widest audience, a work will have to be rated. The First Amendment allows one to speak freely, but does not command an audience for that speech. The proposed law allows users to decide what they want to view, just as anyone has the right to avoid or not listen to a speaker they do not wish to hear.

In its arguments against the CDA, the Internet community said that filtering based on ratings could solve the problem of unwanted material. Congress should pass the simple criminal law necessary to assure that people don't knowingly misrate their material and render the technological solutions worthless, and should challenge the Internet community to solve the problem by promoting self-rating and educating users on how to filter out unwanted material. If the technological solutions work, then a censorship law like the CDA won't be necessary. If they don't, then Congress will have the justification it needs for a new decency law.


* The "Netizens Protection Act of 1997" (H.R. 1748), the "Unsolicited Commercial Electronic Mail Choice Act of 1997" (S. 771), and "Electronic Mailbox Protection Act of 1997" (S. 875). (You didn't think you'd read a law paper without a footnote, did you?)
[Note: none of these bills passed the 105th Congress.]