
Jay Rockefeller should demand a 9/11 investigation if cybersecurity is a threat


Rebelitarian:
Anyone named Rockefeller is a globalist traitor.

Dig:
Computer Security in Aviation:
Vulnerabilities, Threats, and Risks
ftp://ftp.csl.sri.com/pub/users/neumann/air.html
Peter G. Neumann
Principal Scientist, Computer Science Laboratory, SRI International, Menlo Park CA 94025-3493
Telephone 1-415-859-2375, valid until March 1998 (1-650-859-2375 after 1 Aug 1997)
E-mail Neumann@CSL.SRI.com ; WorldWideWeb http://www.csl.sri.com/neumann.html

International Conference on Aviation Safety and Security in the 21st Century, 13-15 January 1997; White House Commission on Safety and Security, and George Washington University

Abstract. Concerning systems that depend on computers and communications, we define security to involve the prevention of intentional and -- to a considerable extent -- accidental misuse whose occurrence could compromise desired system behavior. This position paper addresses some of the fundamental security-related risks that arise in the context of aviation safety and reliability. We observe that many of the past accidents could alternatively have been caused intentionally -- and in some cases could be recreated maliciously today.

We first examine characteristic security vulnerabilities and risks with respect to aviation and its supporting infrastructure, and recall some previous incidents. We consider primarily commercial air travel, but also note some related problems in military applications. We then consider what crises are possible or indeed likely, and what we might do proactively to prevent disasters in the future.

Brief Summary of Security-Relevant Problems

An overall system perspective is essential. Security is tightly coupled with safety and reliability, and must not be ignored or relegated to incidental concerns. We take a broad view here of the problems of attaining security and safety, and consider these problems as a unified global system/network/enterprise problem. (See References 2 and 3 for extensive background, with considerable emphasis on safety as well as security, together with an integrative view that encompasses both. See also Reference 4 for some specific recommendations relating to the computer-communication security infrastructure, the role of cryptography, and the system development process.)

Security vulnerabilities are ubiquitous. Most computer operating systems have weak authentication and are relatively easy to penetrate. Most such systems have weak access controls and tend to be poorly configured, and are as a result relatively easy to misuse once initial access is attained. These systems often have monitoring facilities that are ill adapted to determining when threats are mounting and what damage may have occurred. Consequently, misuse by outsiders and insiders is potentially easy to achieve and sometimes very difficult to detect.
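To make the flavor of these weaknesses concrete, here is a minimal configuration-audit sketch in Python. It is purely illustrative and not from Neumann's paper; the sensitive paths and default account names are assumptions chosen for the example.

```python
import os
import stat

# Illustrative lists -- a real audit would be far more extensive.
SENSITIVE_PATHS = ["/etc/passwd", "/etc/hosts.equiv", "/etc/inetd.conf"]
DEFAULT_ACCOUNTS = {"guest", "demo", "field", "service"}

def audit_world_writable(paths):
    """Flag sensitive files that any local user may modify
    (one symptom of a poorly configured system)."""
    findings = []
    for path in paths:
        try:
            mode = os.stat(path).st_mode
        except OSError:
            continue  # absent or unreadable on this host
        if mode & stat.S_IWOTH:  # world-writable bit is set
            findings.append(path)
    return findings

def audit_default_accounts(passwd_file="/etc/passwd"):
    """Flag well-known default account names that still exist
    (one symptom of weak authentication)."""
    try:
        with open(passwd_file) as f:
            return [line.split(":", 1)[0] for line in f
                    if line.split(":", 1)[0] in DEFAULT_ACCOUNTS]
    except OSError:
        return []

if __name__ == "__main__":
    print("World-writable sensitive files:", audit_world_writable(SENSITIVE_PATHS))
    print("Default accounts present:", audit_default_accounts())
```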

System safety depends on many factors. System safety typically depends upon adequate system security and adequate system reliability (as well as many other factors). It can be impaired by hardware and software problems, as well as by human fallibility and nonbenevolent operating environments. As a consequence, in many of the cases discussed here, an event that occurred accidentally could alternatively have been triggered intentionally, with or without malice. A conclusion from that observation is that a sensible approach to security must encompass a sensible approach to system safety and overall system reliability.

Threats to security and safety are ubiquitous. The range of threats that can exploit these vulnerabilities is enormous, stemming from possible terrorist activities, sabotage, espionage, industrial or national competition, copycat crimes, mechanical malfunctions, and human error. Attacks may involve Trojan-horse insertion and physical tampering, including retributive acts or harassment by disgruntled employees or former employees. Denial-of-service attacks are particularly insidious, because they are so difficult to defend against and because their effects can be devastating. Systems connected to the Internet or available by dial-up lines are potential victims of external penetrations. Even systems that appear to be completely isolated are subject to internal misuse. In addition, many of those seemingly isolated systems can be compromised remotely because of their facilities for remote diagnostics and remote maintenance. Electromagnetic interference is a particularly complex type of threat. Unanticipated acts of God are also a source of threat -- for example, from lightning or extreme weather conditions. Of increasing concern in aviation is the omnipresent threat of terrorism. In addition, with respect to safety, References 2 and 3 provide a chilling history of relevant computer-related problems.

The risks are ubiquitous. The consequences of these vulnerabilities and associated threats imply that the risks can be very considerable. Computer-related misuse may (for example) result in loss of confidentiality, loss of system integrity when systems are corrupted, loss of data integrity when data is altered, denials of service that render resources unavailable, or seemingly innocuous thefts of service. Such misuse may be intentional or accidental. It may be very difficult to detect, as in the case of a latent Trojan horse, or blatantly obvious, as in the case of a complete system wipeout -- with the usual spectrum of difficulty in between. More broadly, overall system risks include major air-traffic-control outages, airport closures, loss of aircraft, deaths of many passengers, and other major disturbances.

The interrelationships are complex. As stated above, security, safety, and reliability are closely interrelated, and the interrelationships can be subtle. In general, if a system is not adequately secure, it cannot be dependably reliable and it cannot have any predictable availability; misuses could happen at any time. Similarly, if a system is not adequately reliable, it cannot be dependably secure; the security controls could be vitiated at any time. A simple example of a security-related reliability flaw is provided by the time when MIT's CTSS (the first time-sharing system) spewed out the entire password file as the logon message of the day. (See Reference 3 for a more detailed discussion of the interrelationships.)
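The CTSS failure is usually attributed to an editor that kept its scratch copy under one fixed temporary file name for all users, so two concurrent editing sessions (one on the password file, one on the message of the day) swapped contents. Below is a sketch of the unsafe pattern and the standard remedy; the file names are illustrative, not from the original system.

```python
import tempfile
import os

# Unsafe pattern (illustrative): every editing session writes to the same
# fixed scratch name, so two concurrent sessions can overwrite each other --
# the usually cited mechanism behind the CTSS password/motd mix-up.
FIXED_SCRATCH = "/tmp/editor.scratch"

def save_unsafe(contents):
    with open(FIXED_SCRATCH, "w") as f:   # collides across sessions
        f.write(contents)
    return FIXED_SCRATCH

def save_safe(contents):
    # Remedy: a per-session unique temporary file, created atomically.
    fd, path = tempfile.mkstemp(prefix="editor.", dir="/tmp")
    with os.fdopen(fd, "w") as f:
        f.write(contents)
    return path
```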

A Review of Past Incidents

Among a large collection of riskful events, Reference 2 includes many aviation-related cases -- with a wide variety of causes and an enormous range of effects. Two sections in that list are of particular interest here, namely, those relating to commercial aviation and to military aviation. We consider here just a few cases from that list. (The sections of that list on space and defense are also instructive, as are the lengthy sections relating to security and privacy.)

Radio-frequency spoofing of air-traffic control. Several people have masqueraded as air-traffic controllers on designated radio frequencies (in Miami, in Manchester, England, and in Virginia -- the ``Roanoke Phantom''), altering flight courses and causing serious confusion. (Some communication authentication might help mitigate problems of this type.)
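As a sketch of what "communication authentication" could mean here, the following appends a keyed message-authentication code (HMAC) and a sequence number to each transmission, assuming a pre-shared key. This is illustrative only; a real ATC scheme would need key distribution, replay windows, and voice-channel integration, none of which the paper addresses.

```python
import hmac
import hashlib
import os

# Illustrative pre-shared key; real deployments need key management.
KEY = os.urandom(32)

def tag_message(key, sequence, payload):
    """Compute an HMAC tag binding the payload to a sequence number
    (the sequence number resists simple replay)."""
    msg = sequence.to_bytes(8, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_message(key, sequence, payload, tag):
    expected = tag_message(key, sequence, payload)
    return hmac.compare_digest(expected, tag)

# Usage sketch: a forged or altered instruction fails verification.
tag = tag_message(KEY, 1, b"turn left heading 270")
assert verify_message(KEY, 1, b"turn left heading 270", tag)
assert not verify_message(KEY, 1, b"turn right heading 090", tag)
```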

Power and telecommunication infrastructural problems. Vulnerabilities of the power infrastructure and other computer problems have seriously affected air-traffic control (Chicago, Oakland, Miami, Washington DC, Dallas-Fort Worth, Cleveland, all three New York airports, Pittsburgh, etc.). An FAA report listed 114 major telecom outages in a 12-month period in 1990-91. Twenty air-traffic control centers were downed by a fiber-optic cable inadvertently cut by a farmer burying his cow (4 May 1991). The Kansas City ATC was brought down by a beaver-chewed cable (1990); other outages were due to lightning strikes, misplaced backhoe buckets, blown fuses, and various computer problems, as well as a 3-hour outage and airport delays in Boston that resulted from unmarked electronic components being switched. The AT&T outage of 17 September 1991 blocked 5 million calls and crippled air travel, with 1,174 flights cancelled or delayed. Many such cases have been recorded. (Much greater recognition is needed of the intricate ways in which air-traffic control depends on the power and telecommunication infrastructures.)

Fatal aircraft incidents. The list of computer-related aircraft accidents is not encouraging. Undeserved faith in the infallibility of computer systems and the people who use them played a role in the Korean Airlines 007 shootdown, the Vincennes' Aegis shootdown of the Iranian Airbus, the F-15 shootdowns of two U.S. BlackHawks over Iraq, the Air New Zealand crash into Mt Erebus, the Lauda Air thrust-reverser problem, NW flight 255, the British Midlands 737 crash, several Airbus A320 crashes, the American Airlines Cali crash, the Ilyushin Il-114 crash -- to name just a few.

Near-misses and near-accidents. Numerous near-misses have also been reported, and probably many more have not. The recent missile observed passing AA 1170 over Wallops Island reminds us that accidents can be caused by friendly fire (as was indeed the case with the two UH-60 BlackHawks shot down by our own F-15Cs over Iraq). The sections in References 2 and 3 on commercial and military aviation are particularly well worth reviewing.

Electromagnetic interference. Interference seems to be a particularly difficult type of threat, although its effects on aircraft computers and communications are still inadequately understood. Passenger laptops with cable-attached devices appear to be a particularly risky source of in-flight radiation. EMI was considered as one possible explanation for the U.S. Air Force F-16 accidentally dropping a bomb on rural West Georgia on 4 May 1989. EMI was the cited cause of several UH-60 BlackHawk helicopter hydraulic failures. Australia's Melbourne Airport reported serious effects on their RF communications, which were finally traced to a radiating video cassette recorder near the airport.

Risks inherent in developing complex systems. Computer-communication system difficulties associated with air-traffic control are of particular concern. Significant problems have arisen in computer-communication systems for air-traffic control and in procurements for military and commercial aviation and defense systems. Unfortunately, these problems are not unique to the aviation industry. There have been real fiascos elsewhere in attempts to develop large infrastructural computer-communication systems, which are increasingly dominated by their software complexity. For example, the experiences of system development efforts for the Social Security Administration, the IRS Tax Systems Modernization effort, and law enforcement merely reinforce the conclusion that the development of large systems can be a risky business. Another example is provided by the C-17 software and hardware problems; this case was cited by a GAO report as ``a good example of how not to approach software development when procuring a major weapons system.'' Unfortunately, we have too many such horrible ``good'' examples of what not to do, and very few examples of how systems can be developed successfully. In general, efforts to develop and operate complex computer-based systems and networks that must meet critical requirements have been monumentally unsuccessful -- particularly with respect to security, reliability, and survivability. We desperately need the ability to develop complex systems within budget, on schedule, and with high assurance of compliance with their stated requirements. (References 2 and 3 provide numerous examples of development fiascos.)

In some aircraft incidents, system design and implementation were problematic; in other cases, the human-computer interface design was implicated; in further cases, human error was involved. In some cases, there were multiple causes and the blame can be distributed. Unfortunately, catastrophes are often attributed to ``human error'' (on the part of pilots or traffic controllers) for problems that really originated within the systems or that can be attributed to poor interface design (which, ultimately, should be attributed to human problems -- on the part of designers, system developers, maintainers, operators, and users!).

There are many common threads among these cases (as well as many dissimilarities), which makes a careful study of causes and effects imperative. In particular, although most of the cases seem to have had some accidental contributing factors (except for the masqueraders and various terrorist incidents such as the PanAm Lockerbie disaster), and some cases appear not to be computer related (TW 800), there is much that can be learned concerning the potential security risks. As we discuss in the following section, many of the accidentally caused cases could alternatively have been triggered intentionally.

Possible Future Incidents

If accidental outages and unintended computer-related problems can cause this much trouble, just think what maliciously conceived coordinated attacks could do -- particularly, well conceived attacks striking at weak links in the critical infrastructure! On one hand, attacks need not be very high-tech -- under various scenarios, bribes, blackmail, explosives, and other strong-arm techniques may be sufficient; well-aimed backhoes can evidently have devastating effects. On the other hand, once a high-tech attack is conceived, its very sophisticated attack methods can be posted on underground bulletin boards and may then be exploited by others without extensive knowledge or understanding. Thus, a high level of expertise is no longer a prerequisite.

It is perhaps unwise in this written statement to be too explicit about scenarios for bringing down major components of the aviation infrastructure. There are always people who might want to try those scenarios, and one incident can rapidly be replicated; the copycat has at least nine lives (virtually). Instead, we consider here some of the factors that must be considered in assessing future risks to security, in assessing the safety and reliability that in turn depend upon adequate security, and in efforts to avoid future disasters.

Targets. The air-traffic-control system is itself a huge target. Physical and logical attacks on computers, communications, and radars are all possible. Any use of the Internet for intercommunications could create further risks. Many airports represent vital targets, and the disruptions caused by outages in any major airport are typically felt worldwide. Individual aircraft of course also present highly vulnerable targets. In principle, sectoring of the en-route air-traffic facility provides some redundancy in the event only a single ATC center is affected; however, that is not a sufficient defense against coordinated simultaneous attacks. Overall, the entire gamut of security threats noted above is potentially relevant.

Attack modes. We must anticipate misuse by insiders and attacks by outsiders, including privacy violations, Trojan horses and other integrity attacks, extensive denials of service, physical attacks such as cable cuts and bombs, and electromagnetic and other forms of interference -- to name just a few. There are also more benign attacks, such as wiretaps and electronic eavesdropping -- perhaps gathering information useful for subsequent attacks.

Weak links. Many of the illustrative-risks cases cited in Reference 2 required a confluence of several causes rather than just a single-point failure. The 1980 ARPAnet collapse resulted from bits dropped in a memory that did not have any error checking, combined with an overly lazy garbage collection algorithm. The 1986 separation of New England from the rest of the ARPAnet resulted because seven trunk lines all went through the same cable, which was cut in White Plains, NY. Security is a weak-link problem, but compromises of security often involve exploitation of multiple vulnerabilities, and in many instances multiple exploitations are not significantly more difficult to perpetrate than single-point exploitations. Consequently, trying to avoid single weak links is not enough to ensure the absence of security risks. The basic difficulty is that there are too many weak links, and in some cases -- it would seem -- nothing but weak links. Indeed, the situation is not generally improving, and we can expect systems in the future to continue to have many vulnerabilities -- although some defenses may be locally stronger.
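A sketch of the kind of error checking whose absence contributed to the 1980 collapse: frame each record with a checksum and refuse to propagate any record that fails verification, so a dropped bit is detected rather than spread network-wide. Illustrative only, not the actual ARPAnet IMP logic.

```python
import zlib

def frame(record: bytes) -> bytes:
    """Append a CRC-32 so corruption (e.g., dropped bits) is detectable."""
    crc = zlib.crc32(record)
    return record + crc.to_bytes(4, "big")

def unframe(data: bytes) -> bytes:
    """Verify the CRC before trusting the record; raise on mismatch."""
    record, crc = data[:-4], int.from_bytes(data[-4:], "big")
    if zlib.crc32(record) != crc:
        raise ValueError("corrupted record -- discard instead of propagating")
    return record

good = frame(b"routing update #7")
assert unframe(good) == b"routing update #7"

corrupted = bytearray(good)
corrupted[0] ^= 0x04          # simulate a dropped/flipped bit
try:
    unframe(bytes(corrupted))
except ValueError:
    pass  # detected and discarded, not propagated network-wide
```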

Global problems with local causes. Global problems can result from seemingly isolated events, as exhibited by the 1960s power-grid collapses, the 1980 ARPAnet collapse, which began at a single node and soon brought down every node, the self-propagating 1990 AT&T long-distance collapse, and a new flurry of widespread west-coast power outages in the summer of 1996 -- all of which seemingly began with single-point failures.

Malicious intent versus accidents. In many cases, air-traffic control and aviation are dependent on our critical infrastructures (e.g., telecommunications, power distribution, and many associated control systems). As noted above, some of the types of situations that did or could occur accidentally could also have been or could still be triggered intentionally. Many of the far-reaching single-point failures that involve cable cuts could have been triggered maliciously. In addition, there are various application areas in which intentional illegal acts can masquerade as apparent accidents.

Terrorism and sabotage. Incentives seem to be on the rise for increased terrorist and other information-warfare activities. The potential for massive widespread disruption or for intense local disruption is ever greater -- especially including denial-of-service attacks. Increasingly, the widespread availability of system-cracking software tools suggests that certain types of attacks may become more frequent as the attack techniques become widely known and adequate defenses fail to materialize. For example, the SYN-flooding denial-of-service attack on the Internet service provider PANIX recently inspired an even more aggressive and more damaging attack on WebCom that affected 3000 websites, over an outage period of about 40 hours on a very busy pre-Christmas business weekend. (See the on-line Risks Forum, volume 18, issues 45, 48, and 69 for further details.)
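The PANIX episode led to what is now the standard SYN-cookie defense: instead of holding state for every half-open connection, the server derives its initial sequence number from a keyed hash of the connection identifiers and a coarse timestamp, committing resources only when a valid final ACK returns. The following is a simplified sketch of that idea, not a protocol implementation; real versions also encode the MSS into the cookie.

```python
import hmac
import hashlib
import os
import time

SECRET = os.urandom(16)  # illustrative server secret

def syn_cookie(src, sport, dst, dport, now=None):
    """Derive a 32-bit 'initial sequence number' from the connection
    4-tuple, a coarse time counter, and a server secret -- so the server
    keeps no state for half-open connections."""
    t = int((now or time.time()) // 64)   # 64-second time slots
    msg = f"{src}:{sport}-{dst}:{dport}-{t}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_ack(src, sport, dst, dport, isn, now=None):
    """On the final ACK (whose ACK field minus one is the ISN we sent),
    recompute the cookie for the current and previous time slot; a match
    proves the client completed the handshake from that address."""
    now = now or time.time()
    for slot in (now, now - 64):
        if syn_cookie(src, sport, dst, dport, slot) == isn:
            return True
    return False

c = syn_cookie("10.0.0.5", 40000, "192.0.2.1", 80)
assert check_ack("10.0.0.5", 40000, "192.0.2.1", 80, c)
```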

The feasibility and likelihood of coordinated attacks. Because of the increased use of the Internet, information exchange is very easy and inexpensive. Furthermore, even a single individual can develop simultaneous attacks launched from many different sites that can attack globally wherever vulnerabilities exist. We must recognize the fact that our computer-communication infrastructure is badly flawed, and that our electronic defenses are fundamentally inadequate. Not surprisingly, our ability to resist well-conceived coordinated attacks is even worse. Consequently, we must expect to see large-scale coordinated attacks that will be very difficult to detect, diagnose, and prevent -- and difficult to contain once they are initiated. We must plan our defenses accordingly.

Effects of system and operational complexity. Systems with critical requirements tend to have substantial amounts of software devoted to attaining security, safety, and reliability. Attempts to develop large and very complex systems that are really dependable tend to introduce new risks, particularly when the additional defensive software is used only in times of extreme and often unpredictable circumstances. In many critical systems, as much as half of the software may be dedicated to techniques for attempting to increase security, reliability, and safety.

Increasingly widespread opportunities for misuse. Everyone seems to be jumping on the Internet and the WorldWideWeb, with their inherent dependence on software of completely unknown trustworthiness. The ease with which web pages of the CIA, DoJ, NASA, and the U.S. Air Force have been altered by intruders merely hints at the depth of the problem. Furthermore, those intruders typically acquired the privileges necessary to do much greater damage than was actually observed. As air-industry-related activities become more Internet and computer-system dependent, the risks become ever greater. One example relevant to public transportation is provided by the recent breakdown of the Amtrak ticket and information system, which on 29 November 1996 brought the rail system to its knees; employees had to resort to manual ticketing operations, but with no on-line schedules and no hardcopy backups.

International scope. The problems of the Internet are worldwide, just as are the problems of ensuring the safety and security of air travel. We are increasingly confronted with problems that are potentially worldwide in scope -- and in some cases beyond our control.

There are no easy answers. Security, safety, and reliability are separately each very difficult problems. The combination of all three together is seemingly even more complicated. But that combination cries out for a much more fundamental approach -- one that characterizes the overall system requirements a priori, carefully controls system procurements and developments, enforces compliance with the requirements, and continues that control throughout system operation. Simplistic solutions are very risky.

Conclusions

Total integration. Security, safety, and reliability of the aviation infrastructure must be thoroughly integrated throughout the entire infrastructure, addressing computer systems, computer networks, public-switched networks, power-transmission and -distribution facilities, the air-traffic-control infrastructure, and all of the interactions and interdependencies among them.

Technology. Potentially useful technology is emerging from the R&D community, but is typically lacking in robustness. The desired functionality is difficult to attain using only commercially available systems. Further research and prototype development are fundamentally needed, particularly with respect to composing dependable systems out of less dependable components in a way that leads to predictable results. However, greater incentives are needed to stimulate the development of much more robust infrastructures.

Products. The public, our Government, and indeed our entire public infrastructure are vitally dependent on commercial technological developments for the dependability of the infrastructure. We are particularly dependent on our computer-communication systems. The Government must encourage developers to provide better security as a part of their normal product line, and to address safety and reliability much more consistently. Operating systems, networking, and cryptographic policy all play a role.

People. People are always a potential source of risk, even if they are well meaning. Much greater awareness is essential -- of the threats, vulnerabilities, and risks -- on the part of everyone involved. Better education and training are absolutely essential, with respect to all of the attributes of security, safety, and reliability. Computer literacy is increasingly necessary for all of us.

Historical perspective. This is not a new topic. The author worked with Alex Blumenstiel in 1987 in developing an analysis of the threats perceived at that time. Those threats are still with us -- perhaps even more intensely than before -- and have been a continuing source of study. (See Reference 1.)

We have been fortunate thus far, in that security-relevant attacks have been relatively limited in their effects. However, the fact that so many reliability and safety violations have occurred reminds us that comparable intentional attacks could have been mounted; the potential for enormous damage is clearly present. We must not be complacent. Proactive prevention of serious consequences requires foresight and a commitment to the challenge ahead. The technology is ready for much better security than we have at present, although there will always be some risks. The Government has a strong role to play in ensuring that the information infrastructure is ready for prime time.

Perhaps the most fundamental question today is this: How much security is enough? The answer in any particular application must rely on a realistic consideration of all of the significant risks. In general, security is not a positive contributor to the bottom line, although it can be a devastating negative contributor following a real crisis. As a consequence, organizations tend not to devote adequate attention to security until after they have been burned. However, the potential risks in aviation are enormous, and are often much worse than imagined. Above all, there is a serious risk of ignoring risks that are difficult to deal with -- unknown, unanticipated, or seemingly unlikely but with very serious consequences. For situations with potentially very high risks, as is the case in commercial aviation, significantly greater attention to security is prudent.

References

1. Alexander D. Blumenstiel, Guidelines for National Airspace System Electronic Security, DOT/RSPA/Volpe Center, 1987. This report considers the electronic security of NAS Plan and other FAA ADP systems. See also Alex D. Blumenstiel and Paul E. Manning, Advanced Automation System Vulnerabilities to Electronic Attack, DoT/RSPA/TSC, 11 July 1986, and an almost annual subsequent series of reports -- for example, addressing accreditation (1990, 1991, 1992), certification (1992), air-to-ground communications (1993), ATC security (1993), and communications, navigation, and surveillance (1994). For further information, contact Alex at 1-617-494-2391 (Blumenstie@volpe1.dot.gov) or Darryl Robbins, FAA Office of Civil Aviation Security Operations, Internal and AIS Branch.

2. Peter G. Neumann, Illustrative Risks to the Public in the Use of Computer Systems and Related Technology. [This document is updated at least eight times a year, and is available for anonymous ftp as a PostScript file at ftp://ftp.csl.sri.com/illustrative.PS or ftp://ftp.sri.com/risks/illustrative.PS . If you cannot print PostScript, I would be delighted to send you a hardcopy. The compilation of mostly one-line summaries is currently 19 pages, double-columned, in 8-point type. It grows continually.]

3. Peter G. Neumann, Computer-Related Risks, Addison-Wesley, 1995.

4. Peter G. Neumann, Security Risks in the Emerging Infrastructure, Testimony for the U.S. Senate Permanent Subcommittee on Investigations of the Senate Committee on Governmental Affairs, 25 June 1996. [http://www.csl.sri.com/neumann.html/neumannSenate96.html , or browsable from within my web page http://www.csl.sri.com/neumann.html .]

5. Computers at Risk: Safe Computing in the Information Age, National Academy Press, 5 December 1990. [Final report of the National Research Council System Security Study Committee.]

6. Information Security: Computer Attacks at Department of Defense Pose Increasing Risks, U.S. General Accounting Office, May 1996, GAO/AIMD-96-84.

7. Cryptography's Role In Securing the Information Society, National Academy Press, prepublication copy, 30 May 1996; bound version in early August 1996. [Final report of the National Research Council System Cryptographic Policy Committee. The executive summary is on the World-Wide Web at http://www2.nas.edu/cstbweb .]

8. The Unpredictable Certainty: Information Infrastructure Through 2000, National Academy Press, 1996. [Final report of the NII 2000 Steering Committee.]

Personal Background

In my 43 and one-half years in various capacities as a computer professional, I have long been concerned with security, reliability, human safety, system survivability, and privacy in computer-communication systems and networks, and with how to develop systems that can dependably do what is expected of them. For example, I have been involved in designing operating systems (Multics) and networks, secure database-management systems (SeaView), and systems that can monitor system behavior and seek to identify suspicious or otherwise abnormal behavior (IDES/NIDES/EMERALD). I have also been seriously involved in identifying and preventing risks. Some of this experience is distilled into my recent book, Computer-Related Risks (Reference 3).

In addition to projects involving computer science and systems, I have worked in many application areas -- including (for example) national security, law enforcement, banking, process control, air-traffic control, aviation, and secure space communications (CSOC). I participated in SRI projects for NASA, one in the early 1970s on a prototype ultrareliable fly-by-wire system, and another in 1985 in which I provided preliminary computer-communication security requirements for the space station. Perhaps most relevant here, the 1987 study I did for Alex Blumenstiel and Bob Wiseman (Department of Transportation, Cambridge, Mass.) specifically addressed computer-communication security risks in aviation. Alex has continued to refine that analysis. (See Reference 1.)

I was a member of the 1994-96 National Research Council committee study of U.S. cryptographic policy, Cryptography's Role In Securing the Information Society (Reference 7), and the 1988-90 National Research Council study report, Computers at Risk (Reference 5). I am chairman of the Association for Computing Machinery (ACM) Committee on Computers and Public Policy, and Moderator of its widely read Internet newsgroup Risks Forum (comp.risks). (Send one-line message ``subscribe'' to risks-request@CSL.sri.com for automated subscription to the on-line newsgroup.)

I am a Fellow of the American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery (ACM). My present title is Principal Scientist in the Computer Science Laboratory at SRI International.

Dig:
National Airspace System: Free Flight Tools Show Promise, but Implementation Challenges Remain
GAO-01-932 August 31, 2001
Full Report (PDF, 30 pages)     Recommendations (HTML)
http://www.gao.gov/products/GAO-01-932

Summary

This report reviews the Federal Aviation Administration's (FAA) progress on implementing the Free Flight Program, which would provide more flexibility in air traffic operations. This program would increase collaboration between FAA and the aviation community. By using a set of new automated technologies (tools) and procedures, free flight is intended to increase the capacity and efficiency of the nation's airspace system while helping to minimize delays. GAO found that the scheduled March 2002 date will be too early for FAA to make an informed investment decision about moving to phase 2 of its Free Flight Program because of significant technical and operational issues. Furthermore, FAA's schedule for deploying these tools will not allow sufficient time to collect the data needed to fully analyze their expected benefits. Currently, FAA lacks the data to demonstrate that these tools can be relied upon to provide accurate information.


Recommendations

Our recommendations from this work are listed below with a Contact for more information. Status will change from "In process" to "Open," "Closed - implemented," or "Closed - not implemented" based on our follow-up work.

Director:    Gerald L. Dillingham
Team:    Government Accountability Office: Physical Infrastructure
Phone:    (202) 512-4803



Recommendations for Executive Action

Recommendation: To make the most informed decision about moving to phase two of the Free Flight program, the Secretary of Transportation should direct the FAA Administrator to collect and analyze sufficient data in phase 1 to ensure that the User Request Evaluation Tool can effectively work with other air traffic control systems.

Agency Affected: Department of Transportation

Status: Closed - implemented

Comments: FAA has taken several steps from design through testing to help ensure that the User Request Evaluation Tool can effectively work with other air traffic control systems. For example, FAA conducted extensive testing of the User Request Evaluation Tool with such key systems as the Display System Replacement, HOST, Bandwidth Manager, and Weather and Radar Processor. In addition, FAA formulated a highly detailed synchronization strategy to help ensure that the User Request Evaluation Tool would work effectively within the National Airspace System. The strategy proved successful, with the contractor completing government acceptance testing during August and September 2001. On December 3, 2001, Kansas City Center declared the system ready for use.

Recommendation: To make the most informed decision about moving to phase two of the Free Flight program, the Secretary of Transportation should direct the FAA Administrator to improve the development and the provision of local training to enable field personnel to become proficient with the free flight tools.

Agency Affected: Department of Transportation

Status: Closed - implemented

Comments: The Free Flight Program Office has developed a national training plan that describes courses, training outcomes, delivery methods, and suggested timeframes for training site personnel. Key to this plan is a core cadre of trainers who will train personnel residing at the facilities, who will in turn train site staff. Contractor personnel and subject matter experts will also facilitate the implementation of training at each site.

Recommendation: To make the most informed decision about moving to phase two of the Free Flight program, the Secretary of Transportation should direct the FAA Administrator to determine that the goals established in phase 1 result in a positive return on investment and collect data to verify that the goals are being met at each location.

Agency Affected: Department of Transportation

Status: Closed - implemented

Comments: The Free Flight Program Office has established goals for phase I based on previous experience at prototype sites. FAA has also established a formal measurement process to ensure accountability for the free flight investments. Goals and results are presented on a monthly basis through the FAA's Monthly Performance Report. Details of the measurement process are provided semi-annually through the Free Flight Performance Metrics Report, which is published every June and December. For Free Flight phase II, goals will be established for future sites based on experience from phase I. The formal measurement process will continue in phase II to ensure accountability and provide feedback to stakeholders.

Dig:
A PRIVACY Forum Special Report -- 11/1/99
Lauren Weinstein (lauren@vortex.com)
http://www.vortex.com/priv-sis.html
PRIVACY Forum Moderator

Greetings. As the percentage of computer users with either on-demand or permanent connections to the Internet continues to creep ever closer to 100%, some techniques are beginning to appear in software that can only be described as underhanded--apparently implemented by software firms who consider it their right to pry into your behavior.

It's becoming increasingly popular for various software packages, which would not otherwise seem to have any need for a network connection, to establish "secret" links back to servers to pass along a variety of information or to establish hidden control channels.

One rising star in this area of abuse is remote software control. Various firms now promote packages and libraries that can be "invisibly" added to *other* software, to provide detailed "command and control" over the software's use, often without any clue to the user as to what's actually going on. These firms advertise that they can monitor usage, remotely disable the software, gather statistics--anything you can imagine. The oft-cited major benign justification for such systems is piracy control, leading to gathering of information such as site IP numbers, for example. If the software seems to be running on the "wrong" machine, it can be remotely disabled. But information gathering and control certainly need not stop there!

Another example is the use of such systems in "demo" software. I recently received promotional material from a firm touting their package's ability to prevent demo software from running without it first "signing in" to a remote server on each run, which would then report all usage of the demo--so the demo producer could figure out who to target for more contacts ("buy now!") or to disable the demo whenever they wished--or whatever might be desired.

It is frequently the case that software using such techniques will establish network connections without even asking the user (though I did succeed in getting one such firm to promise to change this policy after a long phone conversation with their president). But as a general rule, you cannot assume that you'll ever know that software is establishing a "hidden" channel, except in cases with dialup modems where you might actually hear the process. With permanent net connections, there'd typically be no clue.

If you think that your firewalls will protect you against such systems, think again. The protocol of choice for such activities is HTTP--the standard web protocol--meaning that these control and monitoring activities will typically flow freely through most firewalls and proxies that permit web browsing.
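One defensive response is egress review: examine proxy logs for requests to hosts outside an expected set. Below is a minimal sketch, assuming a hypothetical "client host path" log format and an illustrative allowlist--neither comes from the original advisory.

```python
from collections import Counter

# Illustrative allowlist of expected destinations.
EXPECTED_HOSTS = {"www.example.com", "intranet.example.com"}

def flag_unexpected(log_lines):
    """Count proxy-log requests to hosts not on the expected list.
    Assumes a simple 'client_ip host path' line format (illustrative)."""
    suspicious = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        client, host = parts[0], parts[1]
        if host not in EXPECTED_HOSTS:
            suspicious[(client, host)] += 1
    return suspicious

log = [
    "10.1.1.7 www.example.com /index.html",
    "10.1.1.7 telemetry.vendor.example /report",  # phone-home?
    "10.1.1.7 telemetry.vendor.example /report",
]
for (client, host), n in flag_unexpected(log).items():
    print(f"{client} -> {host}: {n} request(s) outside the expected set")
```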

Other examples of such "backchannels" have also been appearing, such as e-mail messages containing "hidden" HTTP keys which will indicate to the sender when the e-mail was viewed by the recipient (assuming the e-mail was read in an HTML-capable mail package). Is this any of the firms' business? No, of course not. They just think they're being cute, and do it since they can. If you care about this sort of thing, read your e-mail in text-based packages--they're safer from a wide variety of e-mail "surprises" (including viruses) in any case. In the Unix/Linux world, "mh" is a good choice.
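A small detector for this trick: scan an HTML message part for <img> tags that fetch from remote servers before rendering it. This is a sketch only; the sample message and tracker URL below are invented for illustration.

```python
import email
from html.parser import HTMLParser

class RemoteImageFinder(HTMLParser):
    """Collect src attributes of <img> tags that point at remote servers --
    the 'hidden HTTP key' mechanism described above."""
    def __init__(self):
        super().__init__()
        self.remote_images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value and value.startswith(("http://", "https://")):
                    self.remote_images.append(value)

def find_web_bugs(raw_message: str):
    """Return the remote image URLs embedded in any text/html part."""
    msg = email.message_from_string(raw_message)
    finder = RemoteImageFinder()
    for part in msg.walk():
        if part.get_content_type() == "text/html":
            finder.feed(part.get_payload())
    return finder.remote_images

raw = """From: sender@example.com
Content-Type: text/html

<html><body>Hi!<img src="http://tracker.example/pixel?id=12345" width="1" height="1"></body></html>
"""
print(find_web_bugs(raw))  # ['http://tracker.example/pixel?id=12345']
```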

Whether one cares to view any particular application of these sorts of "network spy" technologies as trivial or critical will vary of course. Some people probably couldn't care less. Others (especially in business and government, where hidden flows of information can have serious consequences indeed) will be much more concerned.

Unfortunately, until such a time as it is clearly illegal for such packages to siphon information from, or remotely control, users' computers without their knowledge or permissions, such abuses are likely only to continue growing in scope and risks. We haven't seen anything yet.

--Lauren--
Lauren Weinstein
lauren@vortex.com
Moderator, PRIVACY Forum --- http://www.vortex.com
Member, ACM Committee on Computers and Public Policy

Dig:
Information Security: Weaknesses Place Commerce Data and Operations at Serious Risk
GAO-01-751 August 13, 2001
Full Report (PDF, 48 pages)     Recommendations (HTML)
http://www.gao.gov/products/GAO-01-751

Summary

The Department of Commerce generates and disseminates important economic information that is of great interest to U.S. businesses, policymakers, and researchers. The dramatic rise in the number and sophistication of cyberattacks on federal information systems is of growing concern. This report provides a general summary of the computer security weaknesses in the unclassified information systems of seven Commerce organizations as well as in the management of the department's information security program. The significant and pervasive weaknesses in the seven Commerce bureaus place the data and operations of these bureaus at serious risk. Sensitive economic, personnel, financial, and business confidential information is exposed, allowing potential intruders to read, copy, modify, or delete these data. Moreover, critical operations could effectively cease in the event of accidental or malicious service disruptions. Poor detection and response capabilities exacerbate the bureaus' vulnerability to intrusions. As demonstrated during GAO's testing, the bureaus' general inability to notice GAO's activities increases the likelihood that intrusions will not be detected in time to prevent or minimize damage. These weaknesses are attributable to the lack of an effective information security program with a lack of centralized management, a risk-based approach, up-to-date security policies, security awareness and training, and continuous monitoring of the bureaus' compliance with established policies and the effectiveness of implemented controls. These weaknesses are exacerbated by Commerce's highly interconnected computing environment. A compromise in a single poorly secured system can undermine the security of the multiple systems that connect to it.


Recommendations

Our recommendations from this work are listed below with a Contact for more information. Status will change from "In process" to "Open," "Closed - implemented," or "Closed - not implemented" based on our follow-up work.

Director:    Gregory C. Wilshusen
Team:    Government Accountability Office: Information Technology
Phone:    (202) 512-3317



Recommendations for Executive Action

Recommendation: The Secretary of Commerce should direct the Office of the Chief Information Officer (CIO) and the bureaus to develop and implement an action plan for strengthening access controls for the department's sensitive systems commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or modification of information resulting from unauthorized access. Targeted timeframes for addressing individual systems should be determined by their order of criticality. This will require ongoing cooperative efforts between the Office of the CIO and the Commerce bureaus' CIOs and their staff. Specifically, this action plan should address the logical access control weaknesses that are summarized in this report and will be detailed, along with corresponding recommendations, in a separate report designated for "Limited Official Use." These weaknesses include password management control, operating systems controls, and network controls.

Agency Affected: Department of Commerce

Status: Closed - implemented

Comments: Action completed 10/15/2001. In compliance with requirements of the Government Information Security Reform Act (GISRA), the Department prepared an agency Plan of Action and Milestones (POA&M). The agency POA&M included time frames and interim milestone tasks for correcting system weaknesses at individual operating units as detailed in GAO-02-164 (LOUO). In April 2002, the Department CIO issued a memo to all operating units directing completion of corrective actions for the POA&M weaknesses, which include the GAO recommendations, no later than 30 September 2002. The Department is on track to meet this target.

Recommendation: The Secretary of Commerce should direct the Office of the CIO and the Commerce bureaus to establish policies to identify and segregate incompatible duties and to implement controls, such as reviewing access activity, to mitigate the risks associated with the same staff performing these incompatible duties.

Agency Affected: Department of Commerce

Status: Closed - implemented

Comments: IT Security Program Policy and Minimum Implementation Standards were issued January 25, 2003. The policy addresses segregation of incompatible duties by stating that compensating management controls must be implemented to ensure changes to the security posture are properly authorized.
