However, the political climate in Washington had shifted. Declassification efforts were underfunded, while conservatives' fears about the threat of espionage by agents of the Chinese government undermined efforts to develop less onerous classification policies.45 “The vast secrecy system,” Senator Daniel Patrick Moynihan complained, “shows no signs of receding.”46

After September 11, secrecy became even more deeply entrenched, once again raising fears about the harm being done to civil and political rights behind closed doors. Hundreds of aliens were detained by the U.S. government, which refused to reveal their names or their place of detention; many were subsequently deported following hearings that were closed to the public. Hundreds of alleged “enemy combatants” – many held on slight evidence and having little or no value as sources of intelligence – were hidden at a Defense Department facility in Guantanamo Bay. The Central Intelligence Agency ran its own network of secret detention facilities, as well as a secret program to seize suspected terrorists covertly from other nations.47 Much of this was deeply disturbing, but nonetheless familiar: It was the sort of behavior one expected to see from regimes that had allowed security concerns to overwhelm concern for human rights. However, Americans also saw a new form of secrecy emerging after September 11, as organizations not typically counted within the security establishment began to restrict access to information already in the public domain.

The withdrawn material was of two types. The first was information about so-called “critical infrastructure” – such as refineries, pipelines, dams, nuclear plants, power lines, and other physical assets, as well as less tangible assets such as computer systems – that seemed vulnerable to terror attacks. In the months following 9/11, several federal agencies – hoping to avoid providing a “road map for terrorists” – restricted or eliminated access to maps that showed the location of critical infrastructure, or reports that assessed the risks that these facilities posed to neighboring communities.48 For example, the Federal Energy Regulatory Commission, which regulates key components of the nation's energy infrastructure such as hydroelectric dams and natural gas terminals and pipelines, withdrew a substantial amount of material from its web-accessible docket, instead making the information available to selected individuals subject to restrictions on its use.49 The Department of Homeland Security also adopted new rules that allowed it to deny requests for “critical infrastructure information” provided to it by industry.50

A second type of now-restricted information related to the monitoring and inspection work of federal agencies. Two weeks after the 9/11 attacks, the Federal Aviation Administration blocked public access to its database of enforcement actions, which journalists had used to identify security lapses by airlines and airports.51 Federal officials also denied access to the results of “detection tests” undertaken to check whether weapons would be discovered at airport security checkpoints.52 The Transportation Security Administration, formed in the aftermath of 9/11, later received broader statutory authority to withhold “sensitive security information” without regard to the requirements of the Freedom of Information Act.53 The Customs Service refused to release information about its inspection practices for incoming shipping containers,54 while the Nuclear Regulatory Commission decided that it would no longer release scorecards showing the results of its inspections of the physical security of nuclear plants or information about enforcement actions on matters relating to plant security.55

These new policies reflected a fundamental shift in perceptions about the character of the security threat confronting the United States. In the era of the Cold War, security policy had been premised on the assumption that the principal threat to national security would be posed by other states, and that those threats would be manifested through overt military confrontations rather than sporadic acts of terror or sabotage within national borders. The 9/11 attacks compelled a reconsideration of this view, weakening the concept of the “impenetrable nation state”56 and inducing “a level of vulnerability that Americans have not seen since they were living on the edge of a dangerous frontier 150 years ago.”57

Fears about the United States' susceptibility to domestic attack, already stoked by the attacks, were further heightened in the following months. Bush administration officials said that documents found in al Qaeda's Tora Bora cave complex in eastern Afghanistan in December 2001 gave evidence of further plotting: maps of the Washington subway system, blueprints of nuclear power plants and water distribution systems, photographs of the Seattle waterfront, and trade publications of the American chemical industry.58 In January 2002, the computer of a suspected al Qaeda member was found to contain detailed information about dams and water systems in the United States.59

The attempt to restrict access to information that might reveal domestic vulnerabilities was subject to three main criticisms. The first was a fatalistic view about the likely effectiveness of such efforts in a world of “information abundance.”60 In 2002, a George Washington University law student, Air Force Major Joseph Jacobson, demonstrated that information comparable to that contained in the EPA's now-inaccessible risk-management plans could be compiled from other sources on the internet. Producing a list of chemical plants that could be potential targets was straightforward, and enough information on production processes for specific plants could be obtained to reach conclusions about the “off-site consequences” of an accident that were roughly comparable to the conclusions provided by plant owners to the EPA. “Not posting this information on the Internet,” Jacobson concluded, “simply forces a would-be terrorist to spend a few extra minutes on the computer researching available ‘target’ data that would otherwise be conveniently assembled by the EPA.”61

A 2004 RAND study reached a similar conclusion, observing that in many cases information similar to that provided by government sources was available from “a diverse set of non-federal sources” – and that in any case “direct access or observation” of potential targets was more likely to be the first choice for collecting information needed to plan an attack. A survey of hundreds of federal data-sets revealed none whose contents were “critical to meeting attacker needs.” The study also noted that attackers had the advantage of a “broad range” of targets: If access to information about one potential target was blocked, another could easily be found.62

A second criticism of these new restrictions emphasized the harm done to citizens, whose capacity to monitor government or business actions with an important impact on their well-being was undermined. In the three years following the September 11 attacks, complaints about the erosion of these rights were common, although the evidence was still inchoate. In one prominent case, a Utah-based environmental group, Living Rivers, challenged the Interior Department's refusal to provide maps that showed the likely impact of a failure of the Glen Canyon Dam on the Colorado River, the second-highest concrete-arch dam in the United States. Government officials justified their refusal by arguing that the maps would reveal that the dam could be turned into a “weapon of mass destruction,” threatening down-river communities.63 Living Rivers retorted that residents were being kept “in the dark” about risks posed by the dam; however, the group conceded that it had been able – through other sources – to document those risks.64

Critics complained that new rules to protect “critical infrastructure information” also undercut their rights. Community organizers in Virginia said that FERC's new rules to protect energy infrastructure had compromised their ability to learn the proposed route of a new natural gas pipeline, constraining residents' ability to mobilize against a route that crossed their property and created a significant safety risk.65 At the same time, activists in Alabama claimed that FERC's rules would restrict access to information about the safety of a liquefied natural gas terminal proposed for the Port of Mobile.66 Many journalists also protested FERC's insistence that, before receiving information, they sign agreements allowing FERC staff to undertake a pre-publication review of stories based on that information.67 On the other hand, FERC asserted in 2004 that it had not received any complaints that a participant in a Commission proceeding had been denied access to information needed to participate in the proceeding.68

Early decisions to withhold information sometimes failed to recognize the distinction between information that revealed previously unknown vulnerabilities and information that merely confirmed the magnitude of known risks. The failure of the Glen Canyon Dam was a known risk, particularly after a government official affirmed under oath that its failure could cause “mass destruction”; the security interest in withholding details about the precise dimensions of the likely destruction was less clear. It was similarly obvious that a substantial risk would be posed by a liquefied natural gas terminal located in a populated area. The case for withholding information about hidden weaknesses – for example, about the location of airports or nuclear plants that frequently failed security tests – seemed clearer.

There were many critics who were prepared to challenge even this position, however. This was the third criticism made against the new pattern of secrecy: Rather than promoting security, the unwillingness to disclose information about vulnerabilities actually weakened it. The decision to withhold details about gaps in security was predicated on the assumption that officials or businesses that held the information would take steps to remedy the problems. But here was the fundamental question: Could large bureaucracies – public or private – be trusted to act vigorously without being prodded by journalists or advocacy groups who shared knowledge of security defects?

Skepticism about the public's ability to rely on the vigilance of officials in fixing security problems pervaded the post-9/11 debate over the withholding of information. Rena Steinzor, a sharp critic of rules to protect “critical infrastructure information” collected by the Department of Homeland Security, warned:

Disclosure leads to accountability not just for information but for eliminating the vulnerability the information describes. As a matter of human nature, the absence of this powerful incentive for action will lead to failures to address security problems, ultimately making people less safe, not more. These outcomes will occur even if the individuals who know about a vulnerability are well-meaning and patriotic because it is very difficult for Americans to combat institutional inertia from a wide variety of sources. . . . The dilemma is not whether information will fall into terrorist hands, but rather whether suppression of such information . . . will lead to even graver outcomes.69

As if to validate Steinzor's complaint, the Department of Homeland Security announced in 2004 that it had sharply reduced the number of chemical plants it regarded as serious terror risks, a decision that limited plant owners' obligation to invest in new security measures.70

The Nuclear Regulatory Commission's restrictions on access were challenged for similar reasons. Advocacy groups that had long complained about industry influence over the regulator argued that the NRC's decision to withhold new security standards would simply hide its unwillingness to set rigorous rules on the protection of nuclear plants against terror attacks.71 “Without public pressure,” a Greenpeace spokesman said in August 2004, when the Commission announced further restrictions on inspection data, “these guys go back to sleep.”72 A month later, federal auditors validated that complaint: A report by the independent Government Accountability Office criticized the Commission for its slowness in improving security, suggesting that its efforts had been compromised by close relationships with plant owners.73 A few months later, the Commission was chastised again, this time for withholding data from a National Academy of Sciences panel charged with assessing the vulnerability to terrorist attack of spent-fuel cooling pools at some reactor sites. The panel ultimately concluded that the Commission had not taken adequate measures to limit risks.74

Transparency and security
Criticisms such as these pose a challenge to a precept that has, for many years, sustained the security establishment as an enclave in which the right to information has little hold: the presumed identity of security and secrecy. The assumption that the defense of national security demands strict controls on the flow of information is deeply embedded in bureaucratic – and popular – culture. But events following the 2001 terror attacks give reason for holding an alternative view: that in robust democracies, the path to improved security may actually lie in a policy that encourages the free flow of information.

The 9/11 Commission, like the earlier Joint Congressional Inquiry into 9/11, concluded that informational blockages contributed to the failure of federal agencies to anticipate the terror attacks. Most of the ten “operational opportunities” to deter the attacks that the Commission identified in its 2004 report involved failures to share information within or between agencies.75 By the summer of 2001, CIA Director George Tenet told the Commission, senior officials responsible for counterterrorism had deep concern about an impending attack: In Tenet's words, “the system was blinking red.” But no warning was distributed to lower-level officials responsible for dealing with attacks within the United States, and investigators working on late-emerging leads on potential threats did not connect them to broader concerns about impending attacks.76

For the Commission, one of the essential steps in reform following the 9/11 attacks was overcoming the bureaucratic and technical hurdles to the sharing of information within the federal government. In its final report, the Commission urged abandoning the “‘need-to-know’ culture of information protection” in favor of a “‘need-to-share’ culture” that rewards information sharing. By doing this, the Commission argued, analysts and investigators would have a better chance of “connecting the dots” to anticipate impending threats.77 Other commentators reached the same conclusion. “Today,” says Bruce Berkowitz, “effective warning often means getting information in front of as many people as possible so as to improve the odds
