Author Topic: 9/11- Schlesinger | MITRE | Hanscom ESC | CAASD | GIG Infrastructure  (Read 38709 times)


"9/11 is a prime example of what Indira Singh calls "extreme event risk." In the months prior to the attack, she was working on a program capable of providing data in real time to prevent these types of events from happening. She had pitched this very idea just one week before 9/11 at In-Q-Tel headquarters (CIA) in Virginia, giving a final presentation for an ICH (Interoperability Clearinghouse) project code-named "Blue Prophet." Indira explained to the CIA why she was supporting ICH in this project.

She told the In-Q-Tel team, "The intelligence and other agencies need this now." One of the men from the CIA looked at Indira with what she describes as "the blackest, coldest look anyone has ever given her."

Official Document
WN 99W0000066
Adapting Information Engineering for the
National Airspace System and Its Application to
Flight Planning

September 1999

Michael A. Hermes
Sally E. Stalnaker
Dr. Nels A. Broste
Gary L. Smith

© 1999 The MITRE Corporation.  All rights reserved.  This is the copyright work of The MITRE Corporation and was produced for the U.S. Government under Contract Number DTFA01-93-C-00001 and is subject to Federal Acquisition Regulation Clause 52.227-14, Rights in Data-General, Alt. III (JUN 1987) and Alt. IV (JUN 1987).  No other use other than that granted to the U.S. Government, or to those acting on behalf of the U.S. Government, under that Clause is authorized without the express written permission of The MITRE Corporation.  For further information, please contact The MITRE Corporation, Contracts Office, 1820 Dolley Madison Blvd., McLean, VA 22102, (703) 983-6000.
Federal Aviation Administration


For internal use and not an official position of The MITRE Corporation
Center for Advanced Aviation System Development
McLean, Virginia

A combined team from the FAA, the aviation community, contractors, and CAASD jointly adapted the information engineering process for the information flows in the National Airspace System.  Information engineering was then applied to the information flows necessary for flight planning in a Free Flight environment.  The combined team created high level information engineering products and an interactive prototype.  CAASD documented an overview of the information engineering approach in this assessment.  The application of information engineering to flight planning shows that an enhanced and more dynamic flight planning process is necessary to implement Free Flight advances for improved access, predictability, flexibility, and capacity in the National Airspace System.  The study also demonstrates the power of the information engineering process in assessing system needs.

KEYWORDS: Free Flight, Flight plan, information engineering, object-oriented


The concepts documented in this paper were the results of the combined effort of the
Flight Object Working Group under the aegis of the NAS Information Architecture Committee (NIAC).  The Working Group comprised FAA, CAASD, SETA, DMR, Ptech, and ILOG staff.  The Flight Object Working Group included:
FAA: Josh Hung (project leader), Felix Rausch, Carol Uri
SETA: Dick Sullivan (TRW), Phil Prassee (JTA), Tony Rhodes (TRW)
DMR:  Bill Holden
Ptech: Dr. Samer Minkara, Walid Assad
ILOG: Alain Neyroud, Olivier Nicolas

In addition, Long Truong and Ron Schwarz of CAASD provided significant insight and commentary on this work.  The following operations experts at CAASD also provided valuable assistance: Jerry Baker, Dusty Rhodes, Don Olvey, and Larry Newman. The authors would also like to thank Lynn McDonald and Patricia Palmer for documentation, cleanup, and administrative support.

6.2  Object-based Development

The development of the Dynamic Flight Planning demonstration integrated elements across many different NAS user and FAA domains.  These domains included NAS and engineering elements.  Figure 6-1 illustrates many of these elements and their interrelationships.  The NAS operations were reviewed to identify and develop scenarios for assessment.  The NAS services and capabilities provided definition for the prototype.  The object-oriented prototype development required the definition of sequence diagrams and use cases.  The assessment of information sources, sinks, and flows required reevaluation of scenarios and use cases.  Several iterations were required to produce the final demonstration.  The Ptech Framework object modeling tool was used to develop the Dynamic Flight Plan framework and rules for moving and processing information.  Figure 6-2 is a high-level view of the much more detailed object models developed for the demonstration [Rumbaugh, 1991].  The shaded portions of Figure 6-2 were the focus of this study.  The detailed models reside in the tools themselves and are to be documented at a later date.  Many of the objects were derived from the entity-relationship model developed for the NAS data model analysis (see Figure 3-4).
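As a rough illustration of the kind of object model described above (the real Ptech Framework models are far more detailed and reside in the tool itself), the sketch below defines hypothetical flight-object classes and a route-amendment operation. All class and attribute names here are illustrative assumptions, not the actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Aerodrome:
    icao_id: str

@dataclass
class RouteSegment:
    from_fix: str
    to_fix: str
    altitude_ft: int

@dataclass
class FlightObject:
    """Hypothetical flight object; attributes are illustrative only."""
    callsign: str
    departure: Aerodrome
    destination: Aerodrome
    route: list[RouteSegment] = field(default_factory=list)

    def amend_route(self, new_route: list[RouteSegment]) -> None:
        """Dynamic flight planning: the route can be revised after filing."""
        self.route = new_route

plan = FlightObject("CAASD01", Aerodrome("KIAD"), Aerodrome("KBOS"))
plan.amend_route([RouteSegment("KIAD", "SWANN", 33000),
                  RouteSegment("SWANN", "KBOS", 35000)])
print(len(plan.route))  # 2
```

The point of the sketch is the separation the Working Group pursued: static entities (aerodromes, segments) are distinguished from the mutable flight object, whose route can be amended dynamically as conditions change.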

Figure 6-3.  High-Level Scenario Flow (see the document for image)

The Dynamic Flight Plan demonstration was developed with a commercial set of prototype development tools.  This tool set, from ILOG Inc., used the object models developed with the Ptech Framework modeling tool to populate the object core, see Figure 6-4.  The use cases and sequential meetings of the development team allowed the evolution of the views and validation of the core objects and rules.  As the actor views were developed, the scenarios were played out.  Each session uncovered more opportunities for improvement in the demonstration, and even more importantly, a broader conception of the extent that dynamic flight planning could be exercised under the appropriate circumstances.  
Official Document
MTR 00W0000097

FAA Data Registry (FDR) Concepts of Use and

September 2000

Dr. Nels A. Broste
Ronald G. Rhoades
Ronald A. Schwarz

The MITRE Corporation.  All rights reserved.
Center for Advanced Aviation System Development
McLean, Virginia


3.3.2  Additional Attributes
The basic attribute list can be extended to meet FAA administrative requirements.  An FDR prototype developed in FY99 using Ptech, Inc.'s Framework environment [AOP] identified a requirement for additional attributes beyond those suggested in ISO/IEC 11179.  A combined list of additional attributes is shown in Table 3-2.
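To make the attribute discussion concrete, here is a minimal sketch of a registry entry combining ISO/IEC 11179-style basic attributes with administrative extensions. The field names beyond the ISO basics are assumptions for illustration, not the actual Table 3-2 list.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    # Basic attributes in the spirit of ISO/IEC 11179
    name: str
    definition: str
    datatype: str
    registration_status: str = "candidate"
    # Hypothetical FAA administrative extensions (illustrative only)
    steward_org: str = ""
    effective_date: str = ""

# A registry maps data element names to their descriptions.
registry: dict[str, DataElement] = {}

def register(elem: DataElement) -> None:
    registry[elem.name] = elem

register(DataElement(
    name="aircraftIdentification",
    definition="Callsign or registration identifying a flight.",
    datatype="string",
    steward_org="FAA",
))
print(registry["aircraftIdentification"].registration_status)  # candidate
```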

July 2001,
Volume 5
Number 2

Information Support to Multinational Operations

A Global Diplomatic Common Platform

New Architecture to Ensure Interoperability of the NATO Bi-Strategic Command Automated Information System with U.S. and Allied Systems

Worldwide Air Traffic Control Analysis

Bringing Visibility, Efficiency, and Velocity to America's Mobility Forces

Joint Force Integration - A Challenge for the Warfighter

Global Information Grid Architecture

Implications and Challenges of the Global Combat Support System

Homeland Defense

IDEX II Replacement Project: Leveraging MITRE's Unique Role and Global Presence

Hexagon: A US Joint Force Command Solution to Coalition Interoperability

Worldwide Information Systems Issue!

Joint Vision 2020, building on Joint Vision 2010, focuses on interoperability in joint, multinational, and interagency operations, with particular emphasis on alliance/coalition operations. One of the fundamental issues confronting commanders in a multinational operation is the sharing of information among participants. “Need to know” has been operationally overtaken by “need to share.” MITRE has played a significant role in this shift. A number of approaches are identified in this article for addressing this “need to share” issue.

Two principal aspects of information sharing are what to share and how to share it. How the information is shared is a technical issue and the more easily solved of the two. What to share is constrained by the foreign disclosure policies under which the sharing takes place. The first step in sharing is getting the information into a form that permits foreign disclosure. In many ways this first step is the most difficult and time consuming: it is currently a manual process that has not kept pace with the growing volume of command and control information, especially intelligence. The second step is providing the system capabilities for access to the information after it has been cleared for disclosure.

Deciding how to share the information can be driven by the information itself, the update rate required, the fidelity required for the mission, and the capabilities of the systems employed by the participating nations. For example, in the case of nine Partnership for Peace (PfP) countries, the objective is to share air situation information between military and civil air traffic control systems and across bordering countries. The PfP nations determined that defining common systems and interfaces that provide the level of interoperability required and acquiring that same system for all participating nations was the best approach. The PfP countries now have a common baseline with common business rules to build upon as the capabilities of their systems grow. The interfaces were defined to facilitate the exchange of track data with the NATO command and control centers as the PfP countries are accepted into NATO. One of the challenges is for the nations to continue to maintain functionally equivalent baselines in the future in order to enhance interoperability.

From a broader perspective, a number of technical solutions are being employed to share information within the political constraints. One of these is to maintain a set of servers at an alliance or coalition level of security that can be accessed by all participants. An example of this is the U.S. European Command’s Linked Ops/Intel Centers/Europe (LOCE). LOCE consists of a set of servers containing databases and imagery information and a network supporting more than six hundred workstations distributed among U.S. and multinational users. LOCE has been used in support of operations in both Bosnia and Kosovo. LOCE is also an element of the Battlefield Information Collection and Exploitation System (BICES). BICES is a network connecting the National Contribution Databases (NCDs) of participating NATO nations. As with LOCE, BICES makes use of a dedicated communications network with bulk encryption devices. LOCE and BICES have been evolving over a number of years and have been making ever-greater use of Internet technology to facilitate the movement of information. The U.S. Central Command (USCENTCOM) is carrying out Proof of Concept experiments for a Coalition Wide-Area Network employing a capability similar to LOCE. This LOCE-derivative is called the Central Region Information Exchange System (CENTRIXS), with servers to be placed in Tampa and the USCENTCOM area of responsibility (AOR) for use with coalition partners in the Middle East.

The U.S. Southern Command and 20 Caribbean nations have adopted a BICES approach as a model for sharing information among themselves and have fielded a capability called the Caribbean Information Sharing Network (CISN). The CISN employs commercial encryption for sharing unclassified but sensitive information and uses the Internet rather than a dedicated communications network as used with BICES.

A number of other approaches for information sharing in a multinational environment are being assessed. Two of these are the Multi-Domain Dissemination System (MDDS) at the U.S. Pacific Command and the Content-Based Information System (CBIS) at the U.S. Joint Forces Command. (See the related article on Hexagon.)

Briefly, the MDDS effort is investigating ways of providing access from multiple security domains to one source of Web-based information, a common data set of authoritative information. MDDS will acquire information from multiple sources, provide a security review, and place the information in appropriate repositories for access by those who have been cleared for access to the repository.

CBIS is taking a different approach. Information is protected based on its content. Two elements of this approach are content labeling and user security attributes. Information is labeled and encrypted electronically to identify classification, dissemination, and release controls. At a user’s workstation, the user is identified by a personal authenticator smart card that includes, among other things, biometric templates, cryptographic keys, and a digital certificate providing authorization to access secure material.
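The content-labeling and user-attribute scheme described above can be sketched as a simple access check. The classification ordering, label fields, and user attributes below are illustrative assumptions, not the CBIS design.

```python
# Ordered classification levels (an assumption for this sketch).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_access(user: dict, label: dict) -> bool:
    """Content-based check: the user needs sufficient clearance AND
    membership in at least one group the item is releasable to."""
    if LEVELS[user["clearance"]] < LEVELS[label["classification"]]:
        return False
    return bool(set(user["groups"]) & set(label["releasable_to"]))

# An item's electronic label carries classification and release controls.
item_label = {"classification": "SECRET", "releasable_to": {"US", "NATO"}}

# User attributes would come from the smart-card authenticator.
analyst = {"clearance": "SECRET", "groups": {"NATO"}}
observer = {"clearance": "CONFIDENTIAL", "groups": {"NATO"}}

print(may_access(analyst, item_label))   # True
print(may_access(observer, item_label))  # False
```

The design point is that protection travels with the content: no per-repository firewalling is needed, because every access decision is computed from the item's label and the requester's attributes.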

There have been proposals for the development of a generic U.S. Multinational Information Infrastructure (USMII) that is derived from the LOCE capability and experience. The USMII would be developed to facilitate deployment and employment in a multinational operation as situations demand. It would support both intelligence and operations, connecting the U.S. with coalition partners worldwide for the two-way exchange of information. Also, it would provide U.S.-releasable information such as intelligence products, target folders, missile alerts, and mission plans. It would also be connected through necessary guards with appropriate U.S.-only capabilities. Operational missions could include support to offensive or defensive operations, drug interdiction, humanitarian assistance, and peacekeeping.

How can these various initiatives be brought together to capitalize on their strengths and avoid duplication of effort? One approach is to develop a set of coalition servers, similar to those used in the USMII, whose primary purpose would be the support of worldwide multinational operations. The servers would be transportable, carry a set of software applicable to a range of situations, and make use of Web technology with Web browsers to facilitate employment in multiple theaters. Another approach is to couple servers employing Web technology as the coalition servers; the coalition servers could have interfaces with a Web access secure server as well as local U.S. and allied servers. Similarly, a server using the CBIS technology could serve as the coalition server, with users from multiple nations having access to the information as a function of the content-based protection mechanism. As World Wide Web technology evolves, it will provide new opportunities for nations to more easily share information in support of multinational operations.

MITRE believes a worldwide information infrastructure, relying upon many different technologies, will evolve to support the multitude of multinational operations across defense commands and civil organizations.

In 1999, the President, Secretary of State, and members of Congress commissioned a blue ribbon panel, the Overseas Presence Advisory Panel (OPAP), to assess needs and make recommendations for improving our ability to conduct foreign affairs overseas. OPAP’s November 1999 report begins: “The United States overseas presence, which has provided the essential underpinnings of U.S. foreign policy for many decades, is near a state of crisis. Insecure and often decrepit facilities, obsolete information technology, outmoded administrative and human resources practices, poor allocation of resources, and competition from the private sector for talented staff threaten to cripple our nation’s overseas capability, with far-reaching consequences for national security and prosperity.”

Among its key recommendations, the report proposes the creation of a common operating platform to enable the more than 40 agencies operating in nearly 200 countries to increase U.S. global engagement and influence in an increasingly complex and dangerous world. A few months after the November release of the report, the Department of State proposed creating a “collaboration zone,” providing a common platform for the worldwide diplomatic community. On June 22, 2000, because of MITRE’s extensive systems engineering and research involvement in distributed collaboration systems (see The Edge Collaboration issue), MITRE testified as the industry expert before the Congressional Committee on International Relations, alongside the Department of State Chief Information Officer, the Head of Diplomatic Services, and the General Accounting Office. MITRE’s statements and recommendations to Congress included:

   1. Collaboration and knowledge management technologies offer great promise for helping to create a common platform to enhance our overseas presence across multiple countries and operating agencies.
   2. The infrastructure must be secure to manage risk.
   3. Success comes from a step-by-step creation of a solution with milestones leading toward the implementation of a clear vision with explicit objectives and measured outcomes.
   4. Effective collaboration between technical and operational experts and organizational commitment are necessary for success.
   5. Cultural change is required to fully realize business process improvements.

Establishing a system to support the varied needs of more than 40 agencies in nearly 200 countries with varied communications infrastructure is a great challenge for the foreign affairs community. MITRE continues to share its knowledge with key decision makers. For example, the Chairman of the International Relations Committee, Rep. Benjamin A. Gilman (R-NY), requested that MITRE share its extensive collaboration knowledge with Department of State executives. Acting in the public interest, MITRE held a series of technical exchanges at MITRE and the Department of State with former Undersecretary of Management Bonnie Cohen, Chief Information Officer Fernando Burbano, and senior intelligence personnel to share MITRE’s corporate expertise in the areas of extranets, expert finding, and automated information management.

United States military forces are and will continue to be involved in coalition operations involving NATO and NATO allies. To remain successful, these coalition operations will need a greater degree of interoperability than required in the past. This can only be achieved if the U.S., NATO, and NATO allies utilize the same set of specified standards and, to the greatest degree possible, the same set of tested interoperable products.

To achieve this goal, MITRE is supporting the U.S. Mission to NATO in NATO’s efforts to develop a NATO Consultation, Command, and Control (C3) technical architecture for implementing a Bi-Strategic Command (SC) Automated Information System (AIS). The Bi-SC AIS will provide a single core capability for the command, control, and information systems for both of NATO’s Strategic Commands—Allied Command Europe (ACE) and Allied Command Atlantic (ACLANT).

NATO policy requires the Bi-SC AIS be implemented with the mandatory core set of standards and products specified in the NATO C3 Technical Architecture (NC3TA). MITRE is actively supporting U.S. efforts to align our national systems (e.g., Global Command and Control System (GCCS)) with the Bi-SC AIS in order to support multinational operations. To promote interoperability, Bi-SC AIS must be implemented with the standards specified in the NATO Common Standards Profile (NCSP) and the products specified in the NATO Common Operating Environment (NCOE). These are volumes 4 and 5 respectively of the five-volume NC3TA, which includes:

    * Volume 1: Management
    * Volume 2: Architectural Models and Description
    * Volume 3: Base Standards and Profiles
    * Volume 4: NATO Common Standards Profile (NCSP)
    * Volume 5: NATO Common Operating Environment (NCOE)

As part of our support on this initiative, MITRE is chairing the NATO Ad-Hoc Working Group for NCOE, which has responsibility for developing the NCSP and NCOE.  Volume 4, the NCSP, provides guidance on mandated and emerging standards for NATO information system acquisition.  Volume 5, the NCOE, is an NCSP standards-based computing and communications infrastructure, composed of selected off-the-shelf products and supporting services, that provides the structural foundation necessary to build interoperable and open systems.

Version 2 of the NC3TA was completed in December 2000, with approval by the NATO C3 Board (NC3B) expected in May 2001. With NC3B approval, the standards in the NCSP and the products in the NCOE become mandatory for the Bi-SC AIS. By aligning NATO’s NCSP and NCOE with the analogous U.S. Joint Technical Architecture (JTA) and Defense Information Infrastructure (DII) Common Operating Environment (COE), the NATO Bi-SC AIS will be interoperable with GCCS and thus facilitate U.S. leadership or participation in any NATO-led operation.

The United States air traffic control (ATC) system currently handles as many as 100,000 passengers per hour on 4,000 aircraft, or about 650 million passengers per year. The volume of air traffic is increasing at least as fast as the general economy. During the 10 years ending in 1998, the number of domestic passenger-kilometers flown increased at a 3.8 percent annual rate. Overseas the growth is even higher, increasing at a 6.3 percent average annual rate. Because the numbers of airports and runways are growing more slowly, and the volume of airspace is static, increased air traffic leads to increased congestion and its resulting delays. Increasing congestion is a global phenomenon, and air traffic analysts must be prepared to think and act globally to resolve near- and far-term problems.
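A quick check of what the cited average annual growth rates imply when compounded over the 10-year period:

```python
# Compound growth implied by the cited average annual rates over the
# 10-year period ending in 1998 (illustrative arithmetic only).
domestic_rate, overseas_rate = 0.038, 0.063
years = 10

print(f"domestic traffic multiple:  x{(1 + domestic_rate) ** years:.2f}")  # ~1.45
print(f"overseas traffic multiple:  x{(1 + overseas_rate) ** years:.2f}")  # ~1.84
```

So even the "slower" domestic rate means roughly 45% more passenger-kilometers in a decade, against an essentially fixed volume of airspace, which is why congestion compounds as well.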

MITRE’s Center for Advanced Aviation Systems Development has been actively involved in ameliorating overseas ATC problems for a number of years. MITRE currently has active long-term projects with Egypt, Belgium, Japan, and Canada. Additionally, it is engaged in projects in Mexico, Latin America, Switzerland, and other regions. Within these areas, MITRE has worked on air traffic safety analysis through infrastructure evaluation, modernization, and project management support.

MITRE’s international program has succeeded in providing substantial and tangible support in every region. For example, MITRE’s current work in Latin America focuses on configuring the airport and airspace regions for both Sao Paulo, Brazil and Buenos Aires, Argentina. Although the volume of air traffic is small relative to other regions of the world, these large metropolitan areas in Latin America have a substantial delay problem due to their airport and airspace configuration. MITRE is not only helping Brazil and Argentina identify and solve these specific problems, but is also teaching customers and local resources in the area about the use of modeling tools and the process of analysis so that they develop the ability to solve future problems themselves.

We need to be continually evaluating air traffic volume, procedures, and growth both on a local and a global level. In many cases, problems are not unique to any one country, but are common across national boundaries. Solutions to air traffic problems in one region can often be implemented in other regions. Additionally, air traffic problems that appear to be local often affect distant airports either on the same continent or overseas. These factors combine to create a continuing need for a global view of air traffic and for access to tools that can analyze problems globally.

The Detailed Policy Assessment Tool

One such tool that has been used for international aviation analysis is a MITRE-built simulation model called the Detailed Policy Assessment Tool (DPAT). First developed as a MITRE-Sponsored Research project in fiscal 1994 and 1995, DPAT has subsequently been used in numerous national and international projects by MITRE staff. Its extensive use of data-driven software, its parallel computation for processing speed, and its avoidance of built-in ATC logic that could have made it region-specific enable its use internationally.

DPAT’s main contribution to aviation analysis is its ability to predict and measure congestion-related delays. Congestion delays occur when system resources are overworked. Over 70 percent of the time this situation occurs during bad weather; the remaining delays occur for a variety of other reasons, including airline scheduling practices, equipment outages, unplanned incidents, and so forth. Because DPAT can be configured with data representing an historical situation, the current live situation from radar feeds, or a hypothetical future situation, it has great use in predicting traffic throughput and delays.
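The mechanism behind congestion delay can be illustrated with a toy deterministic queue, far simpler than DPAT itself: when a weather event cuts capacity below demand, a backlog forms, and delay keeps accruing after the weather clears until the backlog is worked off. The demand and capacity figures below are invented for illustration.

```python
def total_delay(demand: list[int], capacity: list[int]) -> int:
    """Deterministic queue: returns total aircraft-periods of delay."""
    queue = delay = 0
    for arriving, cap in zip(demand, capacity):
        queue += arriving
        queue -= min(queue, cap)   # serve up to capacity this period
        delay += queue             # each queued aircraft waits one more period
    return delay

demand  = [30, 30, 30, 30, 30, 30]
good_wx = [32, 32, 32, 32, 32, 32]
bad_wx  = [32, 32, 20, 20, 32, 32]  # weather cuts mid-day throughput

print(total_delay(demand, good_wx))  # 0
print(total_delay(demand, bad_wx))   # 64
```

Note that in the bad-weather run, delay continues to accumulate in periods 5 and 6 even though capacity has recovered, because the backlog from the weather event has not yet cleared; this propagation is exactly what a congestion model must capture.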

DPAT is derived from an earlier MITRE product called the National Airspace System Performance Analysis Capability (NASPAC), which was originally built as a domestic-only model.  DPAT, by contrast, places no limits on the number of airports, flights, or en route airspace sectors it can simulate, so it can be configured for different regions of the world.  Already, DPAT has been used for analysis in the United States, East Asia, Latin America, Canada, and Egypt.  Its versatility is enhanced through an interface that makes it accessible through Web browsers.

The Use of DPAT for Global Air Traffic Evaluation

One of the original DPAT studies concentrated on East Asian air traffic. The team produced air traffic forecasts for each year from 1995 (the study year) through 2015, looking at expected arrivals and departures from each airport as well as at the expected cities of origin and destination that are served by each flight. In developing the forecast, the team consulted with a variety of industry and government sources, fitting exponential models to predictions of regional Asian growth rates. Subsequently, the team ran DPAT with over 400 different configurations, constituting a sensitivity study of every combination of forecasted demand and capacity. Among other findings, the team isolated several Asian airports where delayed construction programs could result in severe delays that would propagate to surrounding airports.

A second example of how DPAT can be used to evaluate global air traffic is an analysis of two Canadian airports, one in Montreal (Dorval) and the other in Toronto. In both cases, the Canadian aviation authority, NAV CANADA, requested an analysis of the expected reduction in delay due to the installation of new metering systems and procedures. The team considered up to six different aircraft metering technologies, and used DPAT to show how system delay would be affected as the capacities of the airports changed with the different metering technologies.

Because DPAT contains a Web-based interface, the first of its kind when MITRE produced it in 1995, it is a useful system for demonstrating MITRE capabilities at conferences and to potential international sponsors. DPAT’s Web-based capabilities have been demonstrated during two different Asia-Pacific aerospace conferences on the West Coast, as well as during conferences and technical exchange meetings with the Egyptian government. Additionally, DPAT has been integrated with a Java-based flight visualization tool for use in graphically displaying current air traffic in Asia.


As the world economy continues to globalize and as worldwide air traffic continues to rise, it becomes increasingly important to maintain a global focus on air traffic management. Problems in distant regions can propagate and become local problems. Solutions to local problems can often become solutions to problems in distant regions. Economies of scale, efficient use of resources, and a common multinational understanding of complex problems all become powerful motivators for international collaboration and a global focus on problem solving—both generally and specifically for aviation.

The contents of this material reflect the views of the author and/or the Director of the Center for Advanced Aviation Systems Development. Neither the Federal Aviation Administration nor the Department of Transportation makes any warranty or guarantee, or promise, expressed or implied, concerning the content or accuracy of the views expressed herein.

The Air Force Air Mobility Command (AMC) provides global air support for the entire Department of Defense. This includes air support for the President, day-to-day operational support, natural disaster support, humanitarian support, aerial refueling, and, as job number one, warfighter support during operations in hostile environments. AMC’s aircraft fleet contains more airframes than the combined fleets of seven major commercial airlines, averages over 900 sorties each day and, in 1999, provided air service to 153 of 197 countries worldwide. AMC may not actually go where "no man has gone before," but it does go to many locations where commercial aircraft do not. AMC’s annual costs exceed $7.0 billion, and its per annum projected growth by geographic region ranges from 3.7% to 13.1%.

Consequently, AMC’s daily challenges are not trivial. Further, in a time of shrinking resources, AMC must seek ways to perform its mission more effectively and efficiently. One major response to this challenge is Mobility 2000 (M2K), a comprehensive AMC initiative to modernize Command and Control (C2) enterprise architecture in order to increase operational effectiveness, save on use of personnel and aircraft, and improve safety. M2K leverages new technologies in communications and information systems to significantly enhance the ability of AMC to plan for use of assets, schedule personnel and aircraft, task operating units, and execute operations using America’s air mobility forces worldwide.

M2K is a system integration effort to improve the visibility of all AMC resources to AMC decision-makers, enable more efficient decisions, and increase movement of goods and people through the Defense Transportation System. A major M2K goal is to reduce aircrew workload by eliminating time spent on the ground in airports so that they can make more efficient use of their flying duty day. M2K will revolutionize C2 communications and data flow to position the Command for more efficient and responsive air mobility operations in the 21st century.

Current operations are hampered by limited support to aircrews, who receive an inadequate near-real time view of the mission and less-than-current information on changes in weather and airport capabilities. Limited connectivity to the aircraft, characterized by only basic voice communications and limited data transmissions, creates problems for the dynamic decision making required in a changing environment. Improved communications to and from aircraft is critical for positive command and control of AMC assets.

The AMC MITRE Team provides multi-faceted and wide-ranging support to the M2K concept through its participation in the programmatic approach used to implement it. For example, MITRE personnel were instrumental in developing the vision and concept of operations. Further, the MITRE Team includes leads on corporate architecture, database engineering, and security engineering projects who play key roles in the implementation of M2K. The AMC MITRE Team is a key contributor to the re-engineering, within M2K, of the business practices and supporting information technology in the Tanker Airlift Control Center (TACC). (TACC is the organization responsible for worldwide management of AMC assets and is the command center from which flight managers develop and file flight plans and monitor the flights in progress.)

MITRE personnel play key roles in all three M2K critical subcategories: Aircraft Enabling Technology, Communication Pipeline, and Integrated Flight Management (IFM).

Aircraft Enabling Technology

The vast growth in air traffic presents increasing challenges. All operators of aircraft, both military and commercial, are competing for the best slot times and most fuel-efficient air routes. Civil aviation authorities (as highlighted in the article on global air traffic) are implementing an air traffic architecture that will increase system capacity, flight efficiency, and flight safety, and that will culminate, in 2010, with the attainment of dynamic routing (a.k.a. "free flight"). Dynamic routing will give users the freedom to choose their own routes, speeds, and altitudes in near real-time, representing a shift from air traffic control to air traffic management. As a user of the same airspace, AMC must change its business practices to operate within the same architecture. As part of its work with M2K, MITRE is evaluating and recommending new technology for use in a new Advanced Computer Flight Plan (ACFP) system to select the most fuel-efficient flight plan. The new system is anticipated to reduce fuel costs by up to 3%; that equates to $20 million per year in savings. The ACFP provides a capability to file flight plans with the FAA electronically, thereby speeding up the flight planning process and reducing the time that the crew spends on the ground developing flight plans. The new ACFP is projected to be operational in FY01.
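At its core, the selection step described above is a minimization over candidate flight plans. The sketch below illustrates the idea only: the route names, distances, winds, and the crude burn model are all invented for illustration, and the real ACFP optimizes over far more variables (altitudes, slot times, forecast weather).

```python
# Toy sketch of a fuel-efficient flight plan selection step.
# All route data and the burn model are illustrative, not real AMC data.

def estimate_burn(distance_nm, headwind_kt, burn_per_nm=22.0):
    """Rough fuel estimate (lb): treat headwind as extra effective distance."""
    effective_nm = distance_nm * (1 + max(headwind_kt, 0) / 450.0)
    return effective_nm * burn_per_nm

candidates = [
    {"route": "NAT-A", "distance_nm": 3150, "headwind_kt": 40},
    {"route": "NAT-B", "distance_nm": 3075, "headwind_kt": 65},
    {"route": "NAT-C", "distance_nm": 3220, "headwind_kt": 10},
]

# Pick the plan with the lowest estimated burn.
best = min(candidates,
           key=lambda r: estimate_burn(r["distance_nm"], r["headwind_kt"]))
print(best["route"])  # the longest route wins here because of its light winds
```

Even this toy version shows why the choice is not obvious: the shortest route is not the cheapest once winds are considered.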

Global Air Traffic Management (GATM) and Aircraft Communication Addressing and Reporting Systems (ACARS) are two other key M2K aircraft technologies that will keep AMC in sync with the civil aviation community. By leveraging GATM equipment installation and digital data link technologies, AMC will realize real-time, global, end-to-end data connectivity among the TACC, air traffic control systems, and mobility aircrew. MITRE contributes to the implementation of GATM through technology evaluation and implementation planning and execution.

Communications Pipeline

Other emerging communications capabilities are being exploited to support M2K communications needs through cooperative relationships with such military and civilian agencies as the Air Force Research Lab, ARINC GLOBALink Services, and the Aerospace Command, Control, Intelligence, Surveillance, and Reconnaissance Center (AC2ISRC). These relationships form an active, strategic, long-term partnership for continuous insertion and logical development of AMC information technology. MITRE engineers also support the Air Force-wide deployed communications that are critical to AMC.

The more robust communication pipeline provided by emerging digital data technology and enhanced applications software will facilitate improvements to C2 processes inside and outside of the TACC and enhance information flow to aircrews planning and flying AMC missions. As a result, real-time connectivity with the aircraft worldwide and automated reporting to the TACC will become a reality.

Integrated Flight Management

Integrated Flight Management (IFM) is the catalyst for team collaboration among aircrews flying AMC missions, air traffic control, and the TACC, producing more effective planning, communications, and resource utilization. At the core of IFM are the aircrews and the flight managers. Flight managers will be the aircrews’ primary conduit to the full support of the TACC. In a program modeled after commercial airline operations, they will assist aircrew in flight planning and flight following, and will act as a resource to aircrews as they perform their missions. Those who fulfill such supporting functions as weather reporting and maintenance are part of a team that provides the flight manager with resources, planning, and information for decision making.

The AMC MITRE Team has played integral roles in developing the first of many tools to implement M2K by supporting the new concept of flight managers. With MITRE providing project management, logical and physical database design, systems interface design, and implementation, an Integrated Management Tool (IMT) prototype was installed, tested, and made ready for use by the flight managers in the TACC in July 2000. The system is projected to be fully operational by 2003, ensuring that AMC can continue to operate effectively and globally in air traffic management. Through the IMT, flight managers can develop flight plans using ACFP, file the plans electronically with the FAA, provide the crews with electronic versions of all documentation required for the flight, and monitor the flight. Flight managers can also use IMT to inform in-flight aircrews dynamically about changes and re-routing as well as to receive information about such changes from the crews. Real-time, global connectivity paired with IFM operations will be a force multiplier. It will put the full complement of TACC resources at the aircrew’s fingertips. Closer coordination and shared responsibilities between the crew and dispatcher will create improved efficiencies in ground time, route and alternate route selection, and bad weather avoidance. These efficiencies will result in significant dollar savings across the command; more importantly, they will result in safer flight operations.

Lessons Learned

One of the benefits of the AMC MITRE Team’s efforts in supporting the M2K implementation has been the capture of principles by which AMC will build systems in the future. The embodiment of these principles was presented to and endorsed by the AC2ISRC as the method by which to develop C2 systems. Some of these principles, adhered to by the program managers during the implementation of M2K to date, are shown in the table below.

Principles, Not Standards, Make AMC System Development a Success

Know the mission and take advantage of opportunities.
Make the business more efficient and more effective.
Build the business case (for DoD, the "Mission Case").
Keep efforts small.
Provide quality data that can be transformed into useful information.
Use mainstream techniques.
Involve the user of the future system when developing architecture.
Don’t settle on one architecture or development tool; take advantage of many.
Recognize that IT provides tools for business use, not the business itself.
Focus on the task at hand.
Create and maintain a repository to capture the processes and data.

Employing these principles has allowed the AMC MITRE Team to contribute to the successful implementation of the M2K concept to date.

The Goldwater-Nichols Reorganization Act of 1986 aimed to integrate service capabilities and strengthen Department of Defense (DoD) joint elements. It also designated the chairman of the Joint Chiefs of Staff as the principal military adviser to the president, National Security Council, and the Secretary of Defense. The act established the position of vice chairman and streamlined the operational chain of command from the president to the Secretary of Defense to the unified commanders and made the unified commanders fully responsible for accomplishing the missions of their commands. As part of the implementation of Goldwater-Nichols, three significant roles were assigned to the then United States Atlantic Command (USACOM) in 1993: Joint Force Provider, Joint Force Trainer, and Joint Force Integrator. The goal of assigning these roles was to provide a foundation to formulate and conduct Joint Operations throughout all levels of conflict that would have the focused attention of a Commander in Chief (CINC). USACOM was redesignated as the United States Joint Forces Command (USJFCOM) on 1 October 1999.

The evolution of Joint Vision 2010/2020 has focused the DoD on four strategic pillars: Dominant Maneuver, Precision Engagement, Focused Logistics, and Full Dimension Protection. These lead to Information Superiority in the battlespace. In an effort to formulate the concepts and support for future warfighting needs and capabilities, the role of DoD-wide Joint Experimentation was assigned by the President to USJFCOM in 1999.

MITRE’s Role
The MITRE Corporation has been a contributor in different roles at USJFCOM (previously USACOM) since 1980, when MITRE was asked to establish operating locations at most of the CINC locations throughout the world. Our engineers were involved with developing concepts, designing and engineering network infrastructure based on what is known today as "Internet Technology," and assisting the intelligence community in establishing automated intelligence production capabilities. Today, we operate offices at the Unified CINC locations, as well as other client locations throughout the world.

The Challenge of Joint Force Integration
The role of Joint Force Integration has been defined by USJFCOM as a "collection of activities whose purpose is the synergistic blending of doctrine, organization, training, material, leadership, personnel and facilities (DOTMLPF) from the different Military Services to improve interoperability and enhance joint capabilities." The supporting principles are: Future Oriented, Fully Interoperable, Functional Across the Entire Spectrum of Conflict, and Enhanced Competitive Advantage. One of the key factors in achieving the goal of a fully integrated joint force that supports various levels of conflict is the development of a successful Joint Experimentation Program.

MITRE was an original member of the Joint Experimentation Strategic Concepts Team formed in 1998. Since the needs of the USJFCOM team required skills that MITRE could offer, strategic partnerships were formed across the company to draw on Command, Control, Communications, Computer, and Intelligence, Surveillance, and Reconnaissance (C4ISR) operational, functional, and technical skills. As the role of Joint Experimentation evolves, MITRE expertise and the ability to reach back into the corporation are key elements to our providing C4ISR-based systems engineering skills.

Joint Experimentation Program
The Joint Experimentation Program is a "concept-based" experimentation program. Promising new concepts are developed and examined through a rigorous experimentation campaign. At present, USJFCOM is actively working five concepts: Rapid Decisive Operations (RDO), Attack Operations Against Critical Mobile Targets, Adaptive Joint Command and Control, Joint Interactive Planning, and Common Relevant Operational Picture. RDO is our overarching "integrating concept," while the others are considered to be "functional concepts," as each enables RDO. While MITRE is supporting the concept development and experimentation programs for each of the concepts, MITRE is the co-lead on the latter three, referred to as the Information Superiority-Command & Control (IS-C2) concepts.

RDO Concept
The essence of RDO is to conduct rapid, simultaneous, and parallel operations aimed at destroying the coherence of the enemy’s military capability through an early, direct, and distributed attack against his critical assets. The objective is to rapidly coerce the enemy to do our will.

RDO pulls together the best features of the Services’ programs as well as competing private sector ideas to examine alternative operational concepts focused on bringing future conflicts to quick closure. The operational concepts under consideration offer potential improvements in the ability of the Joint Force Commander to generate rapid and decisive outcomes in small-scale contingencies.

IS-C2 Vision
To prepare for the demands of future warfighting, the U.S. military is undergoing an unprecedented transformation fueled, in large part, by the rapid advances in information technology.

Future lighter, smaller, more mobile forces will require proportionally smaller C2 headquarters in theater that employ information technology to provide reachback to supporting staffs and world class experts distributed worldwide. The need to operate inside an adversary’s decision cycle will require that planning and execution transition from the current serial hierarchical process to a more parallel, collaborative process. The need for unity of effort to ensure the rapid and decisive accomplishment of the desired effects when and where needed will require superior shared battlespace awareness and a common understanding of the commander’s intent and the current operational plan. Joint forces of the future will be seamlessly interoperable because they will operate from common, shared data that will provide the right information, at the right time, in the right format. Real-time information sharing will enable combat identification and the reduction of fratricide. Multimedia display technologies will be used to ensure that the information is readily recognized and understood by the warfighter.

Information technology will provide tools for course of action analysis that will shorten planning times and allow dynamic, continuous plan modification during execution. These same tools will support realistic mission rehearsal and training. Commanders will use collaboration tools to confer with other commanders, their distributed staffs, and subject matter experts for planning and battle management.

Information superiority requires not only the acquisition of information but that the information be kept secure from attack by our adversary. Thus, information assurance must be a key element of any IS-C2 concept. Conversely, we must be able to share critical information with coalition partners and selected non-governmental agencies. Multi-level security tools will be employed to ensure that appropriate information, and only appropriate information, is released when needed. All of these capabilities will be supported by a worldwide information infrastructure that will provide seamless, secure, transparent connectivity.
Navy - Air Force - DARPA - BMDO - DTRA - SOCOM - CBD - NIMA

---------- AF ----------

375 Phase I Selections from the 00.1 Solicitation

(In Topic Number Order)

160 Federal Street
Boston, MA 02110
(800) 747-5608

Hussein Ibrahim
Topic#: AF 00-116
Title: Advanced C2 Process Modeling and Requirements Analysis Technology
Abstract: This effort will demonstrate the ability to develop an innovative C2 investment decision support system. The objective system springs from a completely original conceptualization of the problem. It will support "product and process modeling of integrated operational and system architectures" and will produce results that can be used within the Air Force spiral development process, C2 management philosophy, and PPBS. This system will improve access to mathematically rigorous, token-based architecture by orders of magnitude. Ptech and George Mason University's System Architectures Laboratory will integrate object-oriented C2 architecture modeling and a Discrete Event System model to construct a software system that can:

- Synthesize Colored Petri Nets from a set of object-oriented products. We will develop and employ a file-based interface between Ptech's FrameWork modeling environment and Design/CPN.
- Verify the logical soundness and behaviors of architectures by executing the models and using token-based, state space, and behavioral analysis techniques against an agreed set of measures.
- Report results in a variety of agreed graphical and textual formats.

This Phase I effort includes early proof-of-concept demonstrations to enable the Team to gain favorable position for development funding or approval for FAST TRACK funding.
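The "synthesize and execute Petri nets" idea in the abstract can be illustrated, in drastically simplified form, with a plain place/transition net in Python. Everything here is an invented stand-in: place and transition names are hypothetical, and real Design/CPN models use colored (typed) tokens and far richer state-space analysis than this deterministic firing loop.

```python
# Toy Petri net executor: fire enabled transitions until quiescence and
# record each reachable marking (a tiny stand-in for state-space analysis).
# Place and transition names are invented for illustration.

marking = {"order_received": 1, "plan_built": 0, "plan_filed": 0}

# transition name -> (tokens consumed, tokens produced)
transitions = {
    "build_plan": ({"order_received": 1}, {"plan_built": 1}),
    "file_plan":  ({"plan_built": 1},     {"plan_filed": 1}),
}

def enabled(name, m):
    """A transition is enabled when every input place holds enough tokens."""
    pre, _ = transitions[name]
    return all(m[p] >= n for p, n in pre.items())

def fire(name, m):
    """Consume input tokens and produce output tokens (returns a new marking)."""
    pre, post = transitions[name]
    m = dict(m)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return m

visited = [dict(marking)]            # the markings reached so far
while True:
    fireable = [t for t in transitions if enabled(t, marking)]
    if not fireable:                 # net is quiescent: analysis complete
        break
    marking = fire(fireable[0], marking)
    visited.append(dict(marking))

print(len(visited), marking)
```

Executing the model and inspecting the reached markings is the essence of the "verify the logical soundness and behaviors" step the abstract describes; here the net simply proves the order flows to a filed plan.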

Experimentation Strategy
The three IS-C2 concepts described above have the potential to significantly transform the way future U.S. forces will do command and control. In order to mature our IS-C2 concepts, MITRE has assisted the Joint Experimentation Program with the development of a rigorous experimentation campaign guided by a strategy that provides a roadmap from ideas to refined military capabilities and DOTMLPF recommendations. Through a series of spiral events, experimentation results are used to further develop and refine the concepts under study. As shown in Figure 1, systematic progression of experimentation activities begins with exploratory or discovery experiments that lead to better understanding of the concepts, issues, and scientific hypotheses. These are followed by confirmatory experiments that seek to test the hypotheses, and finish with demonstrations of enhanced military capabilities.

Figure 1. Nature of "spiral" experimentation.

The Experimentation Strategy incorporates events that span the spectrum of experimentation venues: seminars, workshops, wargames, controlled laboratory experiments, analytical studies, constructive simulations, virtual (man-in-the-loop) simulations, and live simulations. In addition, real operations like Kosovo frequently provide a great opportunity to examine specific concepts in operation.

Figure 2 illustrates considerations for venue selection. Venues at the base of the hierarchy are lower cost and will usually offer greater scientific control and reproducibility. However, they generally have little operational credibility. On the other hand, virtual simulations, live events, and real operations have much greater operational credibility and greater cost, but usually cannot be controlled sufficiently to be considered very rigorous or scientific.

The mix of venues compensates for the shortfalls of individual venues and supports the spiral approach for concept development. In the early stages, the venues will tend to be at the lower end of the hierarchy. The spiral concept suggests that experiment strategies should start with simple, relatively inexpensive, low-fidelity experiments when there is little knowledge, and increase complexity and fidelity with more resource-intensive virtual and live simulations as our knowledge matures and the concept is refined. The experiment strategy should take immature concepts, mature them through experiments, and turn them into demonstrated capabilities.

Major Experiments
Four of our major experiments to date are outlined below:

(1) RDO Wargame—Conducted May–June 2000, this was a modeling and simulation-supported wargame with an active opposing force. Participants examined a baseline and three alternatives for conducting RDO. The wargame highlighted the need to focus future experiments on reorganizing the RDO C2 organization to better support effects-based operations.

(2) Millennium Challenge 2000—From August through mid-September 2000, USJFCOM conducted the first in a series of live joint experiments in concert with the four military services to explore concepts that may shape how DoD conducts business in the future. MITRE was a contributor in the design and preparation of this joint experiment, the main focus of which was on exploring the IS-C2 concepts. An overarching joint context for the four service experiments allowed the Joint Experimentation Program to work with each of the services to examine the IS-C2 joint concepts. These experiments examined how a robust, joint information environment, coupled with the use of collaborative tools, increases shared battlespace awareness and concurrent, parallel crisis action planning to support more timely and effective decision making.

(3) Attack Operations Virtual Simulation—From August through October 2000, this virtual simulation experiment focused on the “battle management (decide)” portion of the attack operations chain (detect, decide, deliver). The Joint Semi-Automated Forces model was used to simulate participants in a proposed new Time Critical Targets Cell. The experiment used the MITRE-developed After Action Reporting System to collect data and provide quick-look reports. It also used various constructive simulations, including the MITRE-supported Pegasus Federation and a “quick and dirty” model developed by MITRE using EXTEND, a dynamic modeling software application.

(4) Unified Vision 2001—In May 2001, this virtual simulation explored the RDO concept, focusing upon the structure of the Joint Task Force organization, the conduct of an Operational Net Assessment, and the production of an Effects Tasking Order.

Joint Experimentation is one of the key ingredients for the Joint Integration role of USJFCOM. The joint concepts being developed and explored by the Joint Experimentation program offer the potential to significantly transform the way future U.S. forces accomplish their missions, and MITRE is playing a key role in this activity.

Circa 2000—An F-15 is flying at 25,000 feet over unfriendly territory. The aircraft’s multifunction color display portrays static lines and colors specified by the pilot before takeoff. Command and control information is developed over several hours at a large forward-based command center, where the intelligence and other information is manually aggregated and synthesized from multiple systems. It is sent to the pilot through the Airborne Warning and Control System over a voice channel that may or may not be jam resistant and secure. There is no change to the multifunction color display. The result is a minimum capability information environment with potential to turn a low-risk mission into a high-risk mission.

Circa 2010—An F-22 is flying at 25,000 feet over unfriendly territory. A multifunction color display portrays dynamically changing lines and colors based on assured command and control information developed at a small forward-based command center, relying on a robust integrated set of information. It enters the cockpit via a joint, automated secure data link. The result is a timely information environment that enables mission success and decreases risk.

Both warfighting environments are information intense. Warfighters need access to fused information about the battlespace to be able to know the enemy’s whereabouts and capabilities, and to determine what weapons and friendly assets are available. This includes targeting, intelligence, and battle information and support information on supplies, transportation, and medical capabilities. They need communications and computing tools to be able to receive and understand the vast array of information, and they need dissemination, management, and security tools to enable the communications and computing to operate in the most effective manner.

Today’s threats present a wide array of asymmetric challenges to warfighting capability across a variety of missions—joint, service, and in multinational environments. These missions are ongoing around the world in support of ad hoc military and civil organizations. The current information technology (IT) infrastructure no longer provides the best solution to meet the globally distributed information superiority needs of warfighters and sustainers within the increasingly important context of coalition operations. The Global Information Grid (GIG) will provide the joint and coalition warfighter with a single, end-to-end information system capability that includes a secure network environment, allowing users to access shared data and applications regardless of location, and is supported by a robust network/information-centric infrastructure. MITRE is supporting the development of the GIG Architecture.

Offline 911aware

great research!  thanks!  not sure i completely understand yet...

what i'm getting from this is that all military action can be taken over at anytime by remote control by ???
It's a dog eat dog world, and I'm wearing Milkbone underwear.  -norm

Offline Elvis

email text & link to someone - then read it. :)
"A great civilization is not conquered from without until it has destroyed itself from within." - Will Durant


  • Guest
The GIG Architecture Approach
Currently, the GIG concept is supported by the Department of Defense Chief Information Officer (DoD CIO) memorandum “Global Information Grid,” dated September 22, 1999, validating the requirement for this initiative. The memorandum describes GIG as the globally interconnected, end-to-end set of information capabilities, associated processes and personnel for collecting, processing, storing, disseminating, and managing information on demand to warfighters, policy makers, and support personnel. The GIG includes all owned and leased communications and computing systems and services, software (including applications), data, security services, and other associated services necessary to achieve information superiority. It also includes national security systems as defined in section 5142 of the Clinger-Cohen Act of 1996. The GIG supports all DoD, national security, and related intelligence community missions and functions (strategic, operational, tactical, and business), in war and in peace. The GIG provides capabilities from all operating locations (bases, posts, camps, stations, facilities, mobile platforms, and deployed sites). The GIG provides interfaces to coalition, allied, and non-DoD users and systems.

The GIG encompasses all of DoD’s warfighting, combat support, and business IT. Building a supporting architecture is a daunting task. MITRE has worked with the GIG architecture team to develop a highly streamlined approach to develop an integrated architecture. Many architectures currently exist in various forms. The GIG approach will leverage the existing architectures—even those still in progress—and is organized into three interrelated track activities: the Joint Operational Architecture (JOA), the Combat Support and Business Area Architectures, and the Communications and Computing System Architecture. MITRE believes providing a means to use existing architectures in a “plug and play” manner is essential to building this global enterprise view.

The GIG architecture will be a blueprint of a current or postulated future configuration of resources, rules, and relationships. Three logically combined, mission-oriented architectural perspectives or views will be developed: an operational view, a systems view, and a technical view. To achieve the needed compatibility, flexibility, and interoperability, the GIG elements will be made available in a plug and play “toolbox” from which the required system configuration can be assembled. The blueprint will be substantive, but not prescriptive in detail. This will allow flexibility to accommodate local needs and innovation.

Concurrent with DoD CIO-led development of the GIG architecture, U.S. Joint Forces Command (JFCOM) is developing the GIG Capstone Requirements Document (CRD). The purpose of the CRD is to describe the overarching capabilities and requirements for a globally interconnected, interoperable, and secured system of systems designed for collecting, processing, storing, disseminating, and managing information. JFCOM has a seat on the general officer-level GIG Architecture Integration Panel to ensure GIG architecture development and GIG CRD development are complementary.

Transitioning from AS IS to TO BE Architectures
One of the more difficult, but absolutely essential, aspects of developing a viable warfighting architecture is to describe adequately how the architecture accommodates both the AS IS environment and the TO BE environment. Tackling this architecture transition starts with a rigorous, top-down, traceable approach to defining required joint warfare capabilities at the activity/task/information exchange level for both the AS IS and TO BE time frames. The basic architectural tenets identified in the DoD architecture framework, and its associated DoD-wide integrated architecture concept, make it very clear that operational views (OVs) provide the warfighting context for deriving and developing system views (SVs) that identify potential warfighting system solutions. An invalid OV may result in an invalid SV and associated invalid warfighting systems.

To be of value to the Office of the Secretary of Defense (OSD), joint, and service decision makers, the OVs must delineate required joint warfare capabilities both now and in the future. This will enable DoD to develop valid SVs and procure the correct, capabilities-based, TO BE warfighting systems. A necessary element in creating a worldwide capability is a supportive, worldwide, joint, multinational procurement and investment approach.

Defining TO BE architectures will require high-level, long-range OSD and military guidance and direction. MITRE is involved with developing an approach to address this transition and believes simulation will have a major role in the recursive definition and validation of both AS IS and TO BE operational and systems views. Definition of the TO BE architecture will require multiple iterations of executable AS IS and objective architectures that reflect the multiple worldwide futures that must be accommodated. This thorough approach to TO BE architecture development will contribute to development of a robust TO BE integrated architecture based on future warfare requirements.

The architecture results will be used to determine the DoD CIO information technology issues and investment strategies for the future. This work will also demonstrate the ability to integrate extant architectures and develop the methodology for moving from current to future architectures. The GIG architecture will be used by various commanders in chief, services, and other government agencies to assist in the formulation of their mission-specific applications and required infrastructure support, to assist in meeting their Clinger-Cohen Act requirements, and to formulate budget strategies for planning, programming, and budgeting system (PPBS) and program objective memorandum (POM) activities. Although the GIG architecture will not be the sole tool required to guarantee interoperable DoD information systems, the architecture goes a long way in providing the missing enterprise roadmap necessary for enabling interoperability among DoD systems.

The Global Combat Support System for the Commander in Chief and Joint Task Force Commander (GCSS(CINC/JTF)) is a member of the GCSS family of systems that addresses the age-old need for accurate, timely, and complete combat support information to aid in making command decisions. GCSS will be the means by which the CINC/JTF commander accesses disparate data stored in a variety of formats on a number of theater and global data sources, manipulates and converts the data to useful information, and displays it in a manner that assists the commander in making warfighting decisions.

Though the system is still in the relatively early stages of its development, several prototype versions of GCSS(CINC/JTF) have proved useful, over the last three years, during military exercises in the Pacific Command (PACOM) Area of Responsibility. With MITRE leading the engineering teams addressing technical and data challenges, GCSS(CINC/JTF) prototypes have been successfully demonstrated to the Chairman of the Joint Chiefs of Staff and other high-ranking Pentagon officials during military and humanitarian operations in Bosnia and Kosovo. When completely mature, GCSS(CINC/JTF) will provide the warfighter with a powerful new decision support capability.

GCSS(CINC/JTF) will be approved for operational use on the Secret Internet Protocol Router Network (SIPRNET) this year. This transition to operational use has major implications for the GCSS(CINC/JTF) program beyond those normally associated with turning a prototype into an operational system. MITRE has provided guidance to the Department of Defense (DoD) in making GCSS(CINC/JTF) a worldwide capability, particularly in the areas of user access to data sources and security.

The system provides users with access to data sources from both a traditional heavyweight client and web browsers in a secure environment. For client access, public key certificates and Lightweight Directory Access Protocol (LDAP) directories support user authentication and access control. Users must have a certificate issued by the DoD Public Key Infrastructure (PKI) to establish a GCSS(CINC/JTF) account. Certificates and directory data are imported from the DoD PKI into a GCSS(CINC/JTF) LDAP directory, which is replicated to GCSS(CINC/JTF) server sites worldwide. Users access GCSS(CINC/JTF) web servers over Secure Sockets Layer connections. GCSS(CINC/JTF) uses its LDAP directory to authenticate the user and check the user’s privileges.
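The authentication flow described above — a DoD PKI certificate identifies the user, and the replicated LDAP directory maps that identity to an account and its privileges — can be sketched as follows. This is a mock, not the fielded implementation: a plain dict stands in for the LDAP directory, and the DN, account fields, and privilege names are all hypothetical.

```python
# Sketch of the GCSS(CINC/JTF)-style authentication/authorization flow:
# the certificate presented over the SSL connection identifies the user;
# a directory lookup maps that identity to privileges. A dict stands in
# for the replicated LDAP directory; all entries and names are invented.

directory = {
    # certificate subject DN -> directory entry
    "CN=SMITH.JANE.1234567890,OU=USAF,O=U.S. Government": {
        "account_active": True,
        "privileges": {"query_logistics", "view_reports"},
    },
}

def authorize(cert_subject_dn, requested_privilege):
    """Authenticate the certificate subject against the directory,
    then check whether the account holds the requested privilege."""
    entry = directory.get(cert_subject_dn)
    if entry is None or not entry["account_active"]:
        return False  # unknown user or disabled account: deny
    return requested_privilege in entry["privileges"]
```

Because the directory is replicated to each regional server site, a check like this can be made locally at the server handling the request rather than reaching back across the wide-area network.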

GCSS(CINC/JTF) will be the first DoD IT system using DoD PKI certificates on the SIPRNET to become operational worldwide. This makes it an important priority to establish an operational DoD PKI supporting classified users, complete with clearly defined processes and security community acceptance. Hence, part of the GCSS(CINC/JTF) fielding process must be to work out the roles and responsibilities for user validation at each site where GCSS(CINC/JTF) will be fielded. User and site responsibilities for certificate protection must be identified and supported by users. It has become very clear that, while a worldwide capability obviously entails hardware and software, people and procedures are equally critical components.

Security approval of GCSS(CINC/JTF) software and servers has proven to be another challenge. Web servers and data access middleware servers will be deployed worldwide, with five regional servers planned. The heavyweight client software is installed on Global Command and Control System (GCCS) client workstations because GCSS(CINC/JTF) extends GCCS capabilities. Thus the client software must be approved by GCCS accreditors and certifiers, while accreditation and certification of the regional servers require both local site approval and global approval by the Joint Staff. It has been a challenge to reach agreement on who should be the type and site Designated Approval Authorities for GCSS(CINC/JTF).

GCSS(CINC/JTF) currently runs only on SIPRNET and can access only data sources that also reside on that network. However, much of the data needed to support warfighter requirements exists only on unclassified networks. To access data on unclassified servers, a high assurance guard must be inserted between the unclassified and classified networks to ensure that data transferred from the unclassified side to the classified side compromises neither the integrity of the classified network nor the data on it. Most available guard technologies use file-based transfer protocols that limit the integration of the networks to data replication. This leads to potential data timeliness issues because of replication delay and the resource cost of maintaining the replicated information on the high side. New guard technologies using socket-based connections are evolving and may allow a dynamic query capability from clients on the high side to servers on the low side. Either approach raises significant issues that will lead to changes in the existing GCSS(CINC/JTF) operational concept.

MITRE security experts have had significant involvement with GCSS(CINC/JTF) for some time. MITRE has identified security requirements, design alternatives, and potential issues with software testing, operational deployment, and future capabilities. They recommended that GCSS(CINC/JTF) address security issues and involve the agencies who would do security testing and accreditation as soon as possible. MITRE has monitored evolving Joint Staff policies on GCCS security, PKI, directories, and related concerns and advised DISA on their impacts and approaches to complying with those policies.

Worldwide fielding poses challenges for providing information to the warfighter. Many data sources contain only data of interest to a specific region or CINC. For example, the ‘XYZ’ database in European Command (EUCOM) will contain data relevant to EUCOM; a different database with the same name and structure in PACOM will contain data relevant to PACOM; and still other databases will share the same name and structure while holding different data again. This presents a challenge in providing a global view of a particular Combat Support situation. The commercial software packages and custom software components GCSS(CINC/JTF) uses do not currently support access to multiple databases having the same name and structure but different contents. GCSS(CINC/JTF) can access only one instance of such a database, providing the user with a view of the data specific to the particular region or CINC that owns it. MITRE believes that military operations are no longer isolated theater actions, especially in the worldwide supply-chain management environment of GCSS(CINC/JTF). Future versions of GCSS(CINC/JTF) must access all the databases, fuse the data, and provide a consolidated view to the user, allowing global access to data. MITRE Systems Engineering is leading the effort to revise the GCSS(CINC/JTF) architecture and software to provide this more robust global view of information.
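The consolidated view discussed above can be illustrated with a small sketch. This is not MITRE's actual design; it simply shows the idea that identically named and structured regional databases with different contents must each be queried and their rows fused, tagged with the owning region.

```python
# Illustrative federation sketch: several regional 'XYZ' tables share the
# same structure but hold different data. A global view queries each
# instance and fuses the results, preserving data ownership per region.

REGIONAL_XYZ = {
    "EUCOM": [{"item": "fuel", "qty": 120}],
    "PACOM": [{"item": "fuel", "qty": 300}, {"item": "rations", "qty": 50}],
}

def global_view(table_by_region):
    """Fuse identically structured regional tables into one consolidated view."""
    fused = []
    for region, rows in sorted(table_by_region.items()):
        for row in rows:
            fused.append({**row, "region": region})  # tag row with its owner
    return fused

for row in global_view(REGIONAL_XYZ):
    print(row)
```

The real engineering problem, of course, is that the regional instances live behind different networks and middleware; the fusion step is the easy part once all instances are reachable.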

With MITRE’s System Engineering and Integration assistance, GCSS(CINC/JTF) will continue to enhance its architecture in an effort to respond to the needs of the Combat Support community for both regional and global users at all echelons and in both an unclassified and classified environment.

As Cold War threats have diminished, a new set of threat types has moved to the forefront. These threats present unique difficulties and pose a greater homeland defense challenge than previously encountered. They include weapons of mass destruction in the hands of terrorists; Internet hackers attacking the nation’s critical infrastructure; global environmental changes that cause an increase in natural catastrophes; the worldwide spread of infectious diseases; the widespread contamination of food, water, and the environment; global organized crime; and narcotics. These are fundamentally worldwide threats that can impact any country at the national, regional, and local levels. In the United States, responsibility for most of these threats is divided among many segmented organizations across federal, state, and local governments. The result is that no unified information capability exists to support the mission of overall protection management for homeland defense.

But many of these potentially catastrophic threats present a critical need for rapid indications and warnings (I&W) similar to the Strategic Air Command’s I&W function during the Cold War. Thus, the challenge for homeland defense is immense. The solution is long overdue.

A concept was initiated at MITRE in June 2000 to develop an internal MITRE prototype information service for homeland defense. Several teams were established to develop an internal Homeland Defense Information Service (HDIS) Web site, the HDIS information domain watches, and the HDIS information technology applications.

The HDIS owes a lot to the well established Intelink, a secure, Web-based repository of information that provides uniform access to intelligence information. In 1993, the Intelligence Community called upon MITRE to help standardize the way intelligence information was disseminated to its customers. Our sponsors approved our concept of using emerging Web technology to tie together all of the United States intelligence capability, and asked us to prototype the concept—with real intelligence information. Based on MITRE’s long association with the Intelligence Community, and with MITRE people working at U.S. government sites around the world, we were able to quickly set up servers and content that could be shared over the classified intelligence network. The prototype was a phenomenal success.

Using now-familiar Web techniques, Intelink provides an Intelligence Community “information space” where analysts and operations users can browse for needed information, thus eliminating the need for unique systems. Intelink became operational in December 1994, when then Deputy Secretary of Defense John Deutch and Director of Central Intelligence James Woolsey jointly declared Intelink the strategic direction for all Intelligence Community “finished intelligence” dissemination systems.

Web technology presents the best opportunity to rapidly develop and deploy an integrated information infrastructure for homeland defense that can provide synergy along with the necessary all-source information, collaboration, and multicultural perspective on the diverse set of threats.

Homeland Defense Information Service (HDIS)
A strawman HDIS structure was formulated and applied to information management functions. Within several weeks, an initial HDIS capability had evolved. The intent is to build the prototype to such a level of robustness that it can be made available to a set of external users for test and evaluation. The vision is to build a show-the-way exemplar HDIS that could support all appropriate government users at local, state, regional, and national levels. The exemplar would provide all applicable threat information domains, with all-source information in a secure infrastructure.

The HDIS Cyber Analyst
As envisioned, a key capability in HDIS is support for analysts responsible for threat management and consequence management using all-source information. The goal is to build into HDIS analytical tools that help analysts discover and visualize information and collaborate with other analysts for optimum decision making. The support tools provide a “sixth sense” for rapidly finding information and analyzing the information space.

HDIS Web Site
The HDIS Web site is the entry point into a collection of web resources and tools for managing threats and consequences and exploring I&W. The HDIS structure is populated with links to a variety of open source materials (e.g., news reports, historical documents, and online communities of interest) organized by “watch” category. These are valuable resources for fulfilling the homeland defense mission and are mostly static, with some news feeds for automatically collected information providing more up-to-date information. The tools facilitate accessing information and sharing it among analysts.

HDIS Analyst Support Tools
Keeping track of what is happening in the world is no easy task. Many military and civilian command centers rely on CNN and other news agencies to stay informed and even put television monitors in prominent places to watch late breaking news. The HDIS analyst tools also help with discovering information, but they don’t stop there; they provide capabilities for managing and analyzing the information through custom software and software integration.

For example, the I&W activity helps trained analysts to find open source news reports that could indicate a potential threat to citizens. Each watch, as shown on the HDIS homepage, has a set of indicators or triggers used to convey the level of concern regarding a specific type of event. The status of the indicator is changed to reflect the news reports associated with it. For example, the biological watch analyst is responsible for monitoring the spread of disease and the use of biological weapons. Indicators in the biological watch include outbreak, suspicious deaths, pathogen threat, etc. Given that each watch is monitoring a different area, the indicators in each area are different. Indicators may have a status of normal, meaning no real concern; possible, meaning an incident is possible; or probable, meaning an incident is inevitable or has already occurred. A biological watch analyst who comes across a report of three deaths caused by the West Nile virus might change the indicator status to reflect heightened concern, perhaps by changing it to possible.
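The watch/indicator model above can be sketched as a small data structure. The class and method names below are invented for illustration; the three status levels and the report-to-indicator association come from the description in the text.

```python
# Minimal sketch of an HDIS watch indicator: an analyst associates news
# reports with an indicator and moves it among the three concern levels
# (normal, possible, probable) described in the text.

STATUSES = ("normal", "possible", "probable")

class Indicator:
    def __init__(self, name):
        self.name = name
        self.status = "normal"  # "normal" means no real concern
        self.reports = []

    def associate(self, report, new_status):
        """Attach a report and change the level of concern."""
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.reports.append(report)
        self.status = new_status

# The West Nile example from the text: three deaths raise the outbreak
# indicator from "normal" to "possible".
bio_outbreak = Indicator("outbreak")
bio_outbreak.associate("Three deaths attributed to West Nile virus", "possible")
print(bio_outbreak.status)  # possible
```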

Software was written to help the analyst find reports and maintain indicators. Watch analysts have work screens for discovering new reports or events, associating them with indicators, changing the status of indicators, and producing reports. The Watch Indicator interface is designed for easy integration with link, timeline, or geospatial analysis software.

Another individual, perhaps a state governor, uses a high-level view—the Watch Summary—of the watches to monitor changes and to take appropriate action when necessary. HDIS serves as a live situation report, allowing the governor to stay abreast of local or national concerns.

Indications and Warnings Implementation
The custom software written for indications and warnings consists of two interactive views: one provides synoptic views of all the watches and their status; the other shows the information for a single watch.

The watch analyst’s screen is split into two parts: event discovery and watch indicators. Event discovery is supported by a search engine operating on a focused collection of documents. After doing a search, the analyst can drag an interesting article and drop it on an indicator where it will be stored. An analyst who believes the article should trigger the indicator will press the status bar at the appropriate location to change the indicator status. From the same screen, the analyst can create reports or change the status of the watch.

Future versions of the HDIS software will use technology developed by the Defense Advanced Research Projects Agency (DARPA) Translingual Information Detection, Extraction, and Summarization (TIDES) project for enhanced information retrieval and report discovery. There will also be a profile mechanism to push information to analysts based on preset criteria.

The Warning Summary view displays all the watches and their statuses, with access to more detail. All the indicators for every watch are shown on the right side of the display. The displays show events that have been associated with the indicators. All the data seen in this view can be passed to other applications. This information sharing technique was initially developed for the DARPA Translingual Information Detection, Extraction, and Summarization Portal prototype, to pass query results from multiple search engines to the Geospatial News On Demand Environment (GeoNODE), which provides a more effective basis for navigating and reasoning over an ever increasing news space.

Powered by a Java server, the system tracks all the watches, their indicators, and events and reports. It stores data in XML format and serves it to clients over the Web. The client software uses Microsoft’s Internet Explorer support for XML handling, and JavaScript to present highly interactive and dynamic pages.
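The storage approach described above, watches and indicators kept as XML and served to web clients, can be sketched briefly. The element and attribute names here are invented; only the XML-over-the-Web pattern is from the text.

```python
# Hedged sketch of the HDIS storage format: one watch and its indicator
# statuses serialized as XML, ready to be served to a browser client.
import xml.etree.ElementTree as ET

def watch_to_xml(watch_name, indicators):
    """Serialize one watch and its indicator statuses as an XML document."""
    root = ET.Element("watch", name=watch_name)
    for name, status in indicators.items():
        ET.SubElement(root, "indicator", name=name, status=status)
    return ET.tostring(root, encoding="unicode")

doc = watch_to_xml("biological", {"outbreak": "possible", "pathogen-threat": "normal"})
print(doc)
```

On the client side, the text notes that Internet Explorer's built-in XML handling plus JavaScript turned documents like this into interactive pages.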

Defending the U.S. homeland requires an extensive worldwide I&W and information management system. MITRE believes application of current technology can make HDIS a reality across national, state, and local government.

Watching over the world is a formidable task. The National Imagery and Mapping Agency (NIMA) manages the information systems to meet this need. Currently, NIMA is in the process of replacing one of the most complex and critical Imagery Intelligence exploitation tools in its arsenal, the Imagery Data Exploitation System (IDEX) II. According to the IDEX II Replacement Project (IRP) Concept of Operations (CONOPS), Version 1.1, dated November 24, 1999:

“The purpose of NIMA’s IDEX Replacement Project (IRP) is to replace the functionality of IDEX II with new Commercial Off-The-Shelf (COTS)-based United States Imagery and Geospatial Information Service (USIGS) components. The IRP is a system of systems comprised of new USIGS capabilities: Command Information Library (CIL), Integrated Exploitation Capability (IEC), Enhanced Analyst Client (EAC), and Imagery Access Service/Common Client (IAS/CC), and enhanced legacy systems: Imagery Exploitation Support System (IESS) and Dissemination Element (DE). These systems are designed to work together in providing imagery, imagery metadata and management information to the analyst and exploitation manager to support imagery analysis and production of imagery products. [The] IRP was mandated by Congressional direction to [NIMA to] migrate to COTS-based solutions and integration and eliminate the high per seat cost of maintaining the existing legacy IDEX II.”

The success of this high priority/high visibility project depends on more than the successful integration of its independently developed component systems. It also depends on its successful integration into a number of sites around the world—U.S. Joint Forces Command (USJFCOM), U.S. Pacific Command (USPACOM), U.S. Central Command (USCENTCOM), U.S. European Command (USEUCOM), U.S. Strategic Command (USSTRATCOM), and the National Air Intelligence Center (NAIC)—each possessing diverse network and workstation infrastructures. Additionally, the IRP must be integrated into its users’ Concepts of Operations (CONOPS), which are as different as the USPACOM and USEUCOM Areas of Responsibility (AORs). How is NIMA effecting this worldwide integration and fielding effort?

During its long history of supporting NIMA and its predecessor organizations, MITRE has played a key role in the development of the USIGS architecture and its documentation suite. MITRE’s past performance, global presence, and unique customer-oriented role at the Commands made us the ideal candidate to assist in the IRP implementation at these Commands. So in 1998, NIMA funded MITRE engineers at the IDEX-equipped Commands to assist with site integration planning for the IEC. The IEC consists primarily of imagery servers, Imagery Analyst workstations, and software that can run on IEC or Commercial Analyst Workstations (CAWs). This task expanded to include support of the entire IRP integration effort.

MITRE’s early work consisted of documenting our customers’ IDEX-based imagery CONOPS, identifying network architectures, and determining site-unique requirements. USJFCOM analysts, for example, relied heavily on the IDEX Display Broker facility, which allowed analysts to download only the portion of an image currently displayed on their workstations. This was an important capability when dealing with images as large as 1 GB and a Sun SPARCStation 10 or 20 with 64 MB of memory and a switched 10 Mbps Ethernet connection. The information MITRE gathered was provided to the IEC contractor and supported the development of a “Display Broker”-like capability within IEC.

The “Display Broker” capability is a case in point where MITRE presence at the commands has helped identify differences in worldwide needs and architectures that have led to changes in IEC configurations delivered to the sites. For example, USJFCOM preferred to use the IEC Tiling service to access images, since it required less robust networking and workstations and less additional storage for raw imagery. USPACOM, on the other hand, preferred to use a file-based interface to the IRP, wherein the image is downloaded to a server before display, because its analysts are geographically dispersed and share a Wide Area Network (WAN) connection with other users.

Differences in the networks at each site have also led to customizations of IRP site integration plans. The IRP, whose components are interconnected with Fore Asynchronous Transfer Mode (ATM) switches, has to interface with each command’s network and workstation infrastructure. The IRP will be essentially the same at each site but each site’s overall configuration will be quite different. For example, USJFCOM has a Fore ATM network while USPACOM has Gigabit Ethernet. USEUCOM, on the other hand, has a Cisco Systems ATM network.

Subsequent to the requirements gathering phase, MITRE continued to participate in the IRP implementation, from design reviews and formal testing through installation and acceptance testing at USJFCOM.

MITRE site engineers informally shared information throughout the entire process. Beginning in late 1999 and early 2000, as the IRP began to come together in the NIMA Integration and Test Facility (ITF), information sharing became more intense. This was especially true among the USJFCOM, USPACOM, and USEUCOM engineers, whose commands were to receive the IRP in that order.

By the end of September 2000, the IRP had been installed and tested at USJFCOM and was at Full Operational Capability (FOC). The installation and test evolution at USJFCOM was another period of heavy information sharing among MITRE engineers, as many lessons were learned. Several of the most significant follow:

Site Infrastructure. The infrastructure (network, workstations, file servers, and their software configurations) must be stable and ready to support the load presented by the IRP.

Training. Comprehensive IRP training must be accomplished prior to the onset of formal testing.

Additionally, analysts must know how to use the Electronic Light Table (ELT) software supported by the IRP at their site. They must also understand the overall flow of imagery within the IRP in order to use it successfully.

On-site Coordination. Email accounts should be created for IRP staff on the site’s Secret mail system to facilitate communication among IRP and site personnel located in different rooms or buildings and working on different shifts.

Concept of Operations (CONOPS). Imagery Analysts must begin to review their production CONOPS and make adjustments to accommodate the IRP before its installation. At USJFCOM this has been an iterative process that began before the installation of the IRP and has continued beyond IRP FOC. All phases of the imagery exploitation cycle, from collection requirements through product dissemination must be examined as part of this process.

Site Buy-in. Finally, managers at all levels must be briefed on the IRP and its installation, and buy into it.

The IRP is an extremely complex system encompassing a major portion of the imagery cycle. Its implementation is providing a classic example of the application of MITRE’s system engineering skills and of the exchange of ideas and of lessons learned made possible by MITRE’s presence at the Commands. In particular, lessons learned from the USJFCOM installation (and subsequent installations) and shared with MITRE engineers at the other Commands will pay increasing dividends as the deployment activity progresses. The MITRE “value added” demonstrated by the command team during the IRP has been significant, and has laid the groundwork for continued involvement with NIMA as the USIGS continues to evolve.

For worldwide system implementation, MITRE uses all the tools available. One of the best tools is being there with knowledge of the needs and challenges and striving for as much sharing as possible.

Support to coalition operations in the future is the Information Assurance challenge of today. As each coalition operation (Haiti, Somalia, Bosnia, Kosovo) comes and goes, the lessons learned always yield cries for better interoperability among coalition members. The tough part of coalition information sharing is creating the mechanism by which any nation transfers information outside its own system. MITRE believes true interoperability with our coalition partners will come only after we have an information exchange system that has been designed from the ground up for use by coalition forces.

The United States Joint Forces Command (USJFCOM), with MITRE as the lead engineer, has prototyped such a system. It is called the Coalition MLS Hexagon Prototype (CMHP), or, simply, Hexagon. Hexagon, as the name implies, is built around six functions that allow the exchange of information with our coalition partners in a secure and flexible manner.

Side One of Hexagon, Marking Standards, uses the classification and control marking standards adopted by the U.S. intelligence community. These standards were coordinated by the Controlled Access Program Coordinating Office (CAPCO), assisted by MITRE technical staff who support intelligence community members.

Side Two of Hexagon is called Document Marking. With USJFCOM direction, MITRE developed the Electronic Document Marking System (EDMS) to implement human-readable markings. EDMS enables the originator of the information to mark Microsoft Word, PowerPoint, and Excel documents in accordance with CAPCO and Executive Order 12958 standards. The marking is a simple operation. It is done with the point and click of a mouse and pull-down menus that provide the user choices for classification, handling caveats, and "release to" options for countries, operations, organizations, and exercises. The "human-readable" markings are stored as "computer-readable" electronic document property labels.

Side Three of Hexagon is called Digital Labels. The saved file is encrypted using a dynamically generated encryption key based on the document properties, or computer-readable labels. Saving the document also generates a plain text metadata file that the "Coalition Server," an Oracle 8 Relational Database Management System, parses in order to facilitate searches.
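The key idea above, deriving the encryption key from the document's own labels, can be illustrated with a toy sketch. The hash-based derivation and the XOR "cipher" below are stand-ins chosen for brevity, not the actual CMHP cryptography, and all names are invented.

```python
# Toy illustration of label-derived encryption: identical labels always
# yield the same key, so anyone holding matching credentials can rederive
# it. The XOR step is a placeholder for a real cipher.
import hashlib
from itertools import cycle

def key_from_labels(labels: dict) -> bytes:
    """Derive a repeatable key from the document's sorted label properties."""
    canonical = ";".join(f"{k}={v}" for k, v in sorted(labels.items()))
    return hashlib.sha256(canonical.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

labels = {"classification": "SECRET", "releaseto": "AUS,CAN,GBR"}
key = key_from_labels(labels)
ciphertext = xor_crypt(b"operations plan", key)
assert xor_crypt(ciphertext, key) == b"operations plan"  # round-trips
```

The design consequence is the one the text emphasizes: protection travels with the information object itself rather than with the network it happens to be on.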

Hexagon’s fourth side, Personal Authentication, is the linchpin of CMHP. A "smartcard" personal token called a HexCard is used to identify the user and all of his or her security attributes. The HexCard stores a user’s fingerprint template and a cryptographic credential set based on his or her clearance levels, citizenship, and need-to-know roles, along with any organization memberships. The HexCard also generates public/private keys, using a Public Key Cryptography System (PKCS) to store a user’s signed X.509v3 digital certificate. The standard Windows NT logon has been replaced with a CMHP logon that requires a user ID, password, live scan of the finger and, of course, the HexCard. The user ID and password are "hashed" together using the Secure Hash Algorithm version 1 (SHA-1) to create a decryption key that opens the private storage area of the HexCard. The stored fingerprint templates are then compared with the live scan before the user can gain access to the desktop. As the user logs on to the workstation, the stored X.509v3 digital certificate is read from the HexCard and used to populate the Windows NT "current user" registry hive. The EDMS software uses the registry values to display the tailored marking options available to the user.
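The hashing step in the logon sequence above can be sketched directly. The text states that the user ID and password are hashed together with SHA-1 to form the key that opens the HexCard's private storage; the exact concatenation rule below is our assumption.

```python
# Sketch of the CMHP logon key derivation: SHA-1 over the user ID and
# password together. The ":" separator is an illustrative choice, not
# documented in the source.
import hashlib

def hexcard_unlock_key(user_id: str, password: str) -> bytes:
    """Hash user ID and password together (SHA-1) to unlock the HexCard."""
    return hashlib.sha1((user_id + ":" + password).encode()).digest()

key = hexcard_unlock_key("janedoe", "s3cret")
print(key.hex())  # a 20-byte SHA-1 digest
```

Because the key is derived rather than stored, neither the workstation nor the card needs to hold the password itself.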

Side Five of Hexagon is the system’s Workstations and Server hardware. This includes NT workstations equipped with fingerprint scanners and smartcard readers, and requisite software for marking, encrypting, and decrypting documents. It also includes the two servers, one used as the enrollment station and certificate authority, the other running an Internet Information Server version 5 Web server and an Oracle database. The Web server communicates with the client workstations using a Secure Sockets Layer (SSL) protocol established by presenting the digital certificate stored on the HexCard. When establishing the SSL session, the user’s security attributes (from the user’s digital certificate) are used to compose the database query. Search results will display only those documents that match both the search criteria and security attributes.

Hexagon’s sixth side is Security Management. A special staff security officer must be assigned to coordinate system security requirements and to generate and issue HexCards to CMHP participants. The staff security officer must also operate and maintain the certificate authority (CA) and understand the information assurance requirements.

The Hexagon concept provides the flexibility required in coalition-supported Joint Task Force operations by encrypting and protecting the information objects (e.g., a Word document, PowerPoint briefing, etc.) as opposed to protecting only the network. This is the key difference between the CMHP and other Multi-level Security (MLS) solutions. MLS, according to the NSTISSC 4009 definition, is the "concept of processing information with different classifications and categories that simultaneously permits access by users with different security clearances and denies access to users who lack authorization."

Using information object protection, we can compare the attributes of an individual with the attributes of objects that reside on the server. If there is a match, the coalition participant can retrieve and decrypt the (document) object.
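A toy version of that attribute comparison makes the point concrete. The attribute names and the clearance ordering below are invented for illustration; the text only says that a user's attributes are compared with an object's attributes and the object is released on a match.

```python
# Illustrative object-protection check: the user must meet the object's
# classification level AND hold at least one of its "release to" tags.

def can_retrieve(user, obj):
    """Compare a user's attributes with a stored object's attributes."""
    clearance_order = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
    cleared = (clearance_order.index(user["clearance"])
               >= clearance_order.index(obj["classification"]))
    releasable = obj["release_to"] & user["memberships"]  # set intersection
    return cleared and bool(releasable)

user = {"clearance": "SECRET", "memberships": {"AUS", "JTF-X"}}
doc = {"classification": "CONFIDENTIAL", "release_to": {"AUS", "CAN"}}
print(can_retrieve(user, doc))  # True
```

Note that both conditions must hold: a fully cleared user from a non-releasable nation is still denied, which is exactly what makes object-level protection useful in a coalition.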

In August, the Joint C4ISR Battle Center (JBC) conducted a formal military utility assessment of the Hexagon prototype, which was also an integral part of the JBC-sponsored exercise Millennium Challenge 2000 that same month.

MITRE, as the lead engineer and system integrator, was responsible for bringing the six sides of the Hexagon together to satisfy the CINC’s MLS requirement. Both the technical concept and system engineering have been spearheaded by MITRE. This was recognized by the Director of Central Intelligence, Mr. George Tenet, who presented MITRE’s Allan McClure the Intelligence Community Seal Medallion during a ceremony held at CIA headquarters this past June.

The Hexagon prototype formed the basis for the Fiscal Year 2000 "proof of concept" Content Based Information Security (CBIS) Advanced Concept Technology Demonstration (ACTD). MITRE, again, has been asked by USJFCOM and SPAWAR Systems Center to play a key role in the technical and operational development of the CBIS ACTD. In order to work across the breadth of worldwide operations, capabilities like those of CMHP and CBIS ACTD are critical.

ILOG Server

ILOG Server is a C++ library designed to speed creation of visualization servers by aiding development of distribution and synchronization software. ILOG Server provides:

    * Modeling interface with Rational Rose and other third-party CASE vendors
    * Built-in CORBA-compliant distribution mechanism to exploit CORBA architectures
    * Sophisticated real-time synchronization mechanism to ensure consistent display across multiple operator interfaces

Simplified application development
As networks increase in size and complexity, new applications must distribute just-in-time information and synchronize workflow in real time. With applications distributing a wide variety of graphical interfaces, development time represents a huge investment. ILOG Server makes development easy, providing integration and synchronization for many C++- or Java-based graphical user interfaces (GUIs).

ILOG Server

    * Connecting supervision GUIs
    * Highly scalable modeling framework
    * Synchronize Java™ and C++ GUIs
    * High-performance notification engine
    * Rapid prototyping
    * Web-enabled supervision

“For one of the biggest telecommunications users in the U.S., we delivered a telecom inventory management prototype with a fully populated server and a highly interactive Java GUI in less than one man-month of development effort. [ILOG Server’s] dynamic modeling, as well as the Java data sources, dramatically cut our development period by two-thirds. They smoothly supported a lot of iterations in the development process in order to meet the end user's requirements.”
Olivier Nicolas
Chief Technical Officer

OpTech Software Corp.

Rapid response and complete connectivity
The supervision centers of complex networks like those in telecommunications, transportation and gas distribution must share information among scores of operators in real time to enable them to respond within seconds to a malfunction or other unexpected event. This makes the software linking their graphical user interfaces (GUIs) invaluable. It must deliver high performance and scalability as a middle-tier mediation server, and lend itself readily to maintenance and expansion.

Stay in sync

ILOG Server can synchronize hundreds of GUIs and connect them to the data flow of a network. When an alarm is triggered in the network, it is sent directly to an operator, and the operator’s response is instantly shared with other operators throughout the supervision system. ILOG Server shortens the time between alarm acquisition and display even when several events per second must be processed and the system has thousands of objects.

Supports knowledge integration into supervision systems

ILOG Server can efficiently map a physical description of a network to one or more graphical displays. These displays are then shared among computers to ensure communication among operators. ILOG Server provides the high-performance synchronization services needed for supervision applications. It frees system integrators to concentrate on delivering their core expertise instead of spending valuable time building a server support system to link widely distributed GUIs. Supervision systems are delivered faster and serve longer with ILOG Server.

Highly scalable modeling framework

ILOG Server is a highly scalable C++ object framework that provides powerful business modeling facilities for representing the elements and topology of a supervised system as shared in-memory services. Its modeling abstractions match those offered by object-oriented design notations such as UML, bridging the gap between business model design and implementation. Objects stored in the ILOG Server-based mediation server are active, meaning that all business events, such as object modifications and structural changes, are registered and buffered for forwarding to subscribing clients.
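The notion of "active" objects, whose modifications are automatically registered and buffered for subscribers, can be sketched as follows (hypothetical names; the real framework does this in C++, this Python stand-in only illustrates the idea):

```python
# Sketch of an "active" business object: every attribute modification is
# recorded as an event and buffered for later forwarding to clients.
# Illustrative only; not the actual ILOG Server C++ framework.

class ActiveObject:
    def __init__(self, name):
        # Bypass the interception below while constructing the object.
        object.__setattr__(self, "name", name)
        object.__setattr__(self, "event_buffer", [])

    def __setattr__(self, attr, value):
        # Register the modification as a business event, then apply it.
        self.event_buffer.append((self.name, attr, value))
        object.__setattr__(self, attr, value)

    def flush(self):
        """Forward buffered events to subscribing clients, then clear."""
        events = list(self.event_buffer)
        self.event_buffer.clear()
        return events

node = ActiveObject("router-7")
node.status = "down"
print(node.flush())  # [('router-7', 'status', 'down')]
```

Buffering (rather than forwarding each change immediately) is what lets a server batch hundreds of events per second toward slow or remote clients.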

Synchronize hundreds of Java or C++ GUIs

ILOG Server allows developers to define one or more mappings from the physical system’s object model to the graphical model. It provides ready-to-use graphical models for the Java Swing controls in the ILOG JViews Component Suite and ILOG Views Data Access. GUI clients subscribe to a view of the system, such as a subpart of the system or a category of events, through bidirectional connectors called Dynamic Views. The clients are notified when the system is modified, and can send modification requests to the server. The GUI can be connected using the fast communication layer for Java and C++ provided with ILOG Server, or a CORBA bus based on IONA Orbix or Borland VisiBroker.
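A Dynamic-View-style connector can be approximated as a filtered, two-way subscription: a client receives only the events matching its view, and can push modification requests back through the same channel. A hedged sketch, with invented names (not the product's API):

```python
# Sketch of a Dynamic-View-like connector: a client subscribes to a
# category of events and can send modification requests back to the server.
# Names are illustrative assumptions, not ILOG Server's API.

class Server:
    def __init__(self):
        self.views = []   # (predicate, client) pairs: each client's view
        self.state = {}   # system object model: object id -> status

    def subscribe(self, predicate, client):
        self.views.append((predicate, client))

    def apply(self, obj_id, status):
        """Apply a modification (from the network or a client request)."""
        self.state[obj_id] = status
        event = (obj_id, status)
        for predicate, client in self.views:
            if predicate(event):          # only matching views are notified
                client.received.append(event)

class Client:
    def __init__(self, server):
        self.received = []
        self.server = server

    def request_change(self, obj_id, status):
        # Bidirectional: the client pushes a modification request upstream.
        self.server.apply(obj_id, status)

server = Server()
alarms_only = Client(server)
server.subscribe(lambda e: e[1] == "alarm", alarms_only)
server.apply("link-3", "ok")                  # filtered out of this view
server.apply("link-7", "alarm")               # delivered
alarms_only.request_change("link-9", "alarm") # round-trips back to the client
print(alarms_only.received)  # [('link-7', 'alarm'), ('link-9', 'alarm')]
```

Filtering at the server is the design choice that matters: a GUI watching one region never pays for events from the rest of the network.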

High-performance notification engine

ILOG Server implements sophisticated optimization to provide the high performance required by supervision applications. This allows hundreds of clients to be connected to a single server managing thousands of objects and hundreds of events per second. For supervising very large systems, like countrywide telecommunications networks with millions of monitored elements, ILOG Server supports a pyramid architecture in which low-level servers receive events for part of the network and report to a higher-level server that aggregates the events for a wider part of the network.
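The pyramid architecture amounts to hierarchical event aggregation: each low-level server watches its slice of the network and reports a summary upward. An illustrative sketch, with per-region event counts standing in for real telemetry (names invented):

```python
# Sketch of the pyramid architecture: low-level servers each receive events
# for part of the network; a higher-level server aggregates their streams
# into a wider view. Illustrative only.

class LowLevelServer:
    def __init__(self, region):
        self.region = region
        self.events = []

    def receive(self, event):
        self.events.append(event)

    def summary(self):
        # Report a compact rollup instead of every raw event.
        return (self.region, len(self.events))

class AggregatingServer:
    def __init__(self, children):
        self.children = children

    def report(self):
        # Per-region event counts for the wider network view.
        return dict(child.summary() for child in self.children)

east, west = LowLevelServer("east"), LowLevelServer("west")
east.receive("alarm-1"); east.receive("alarm-2"); west.receive("alarm-3")
top = AggregatingServer([east, west])
print(top.report())  # {'east': 2, 'west': 1}
```

Because each level forwards summaries rather than raw events, the top of the pyramid can cover millions of monitored elements without seeing millions of messages.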

Rapid prototyping

System integrators often need to rapidly demonstrate a prototype of their solution for scalability or functionality tests. ILOG Server provides powerful services for developing customized prototypes in record time. The business object model corresponding to the supervised system can be defined using the XMI format, enabling communication with production specification environments, including Rational Rose. In addition, ILOG Server features an implementation of the well-known JavaScript language, which enables code to be added to the server without the need for C++ compilation. ILOG Server Studio enables developers to rapidly create a GUI through drag-and-drop editing, and connect it directly to the business object model.

Web-enabled supervision

Control-room managers often need a consolidated view of a system showing the status of the problems requiring attention. ILOG Server provides a specific client, based on the Java Servlet technology, that can generate reports for the Web. By providing both Java integration and thin-client capabilities, ILOG Server has proven to be ideal for creating Web-enabled supervision applications.
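At its core, the thin-client report described here is a server-side render of current object status into HTML. A rough Python stand-in for what the Java Servlet client would produce (the function and field names are assumptions for illustration):

```python
# Rough equivalent of a server-side Web status report: render only the
# supervised objects needing attention into an HTML table.
# The product does this with a Java Servlet; this stand-in is illustrative.

def render_report(status):
    """status: dict of object id -> state; report only the problems."""
    rows = "".join(
        f"<tr><td>{obj}</td><td>{state}</td></tr>"
        for obj, state in sorted(status.items())
        if state != "ok"
    )
    return f"<table><tr><th>Object</th><th>Status</th></tr>{rows}</table>"

html = render_report({"link-1": "ok", "link-2": "alarm", "pump-9": "degraded"})
print("link-2" in html and "link-1" not in html)  # True: only problems shown
```

The browser needs nothing installed, which is the whole appeal of the thin-client approach for control-room overviews.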

ILOG delivers software and services that empower customers to make better decisions faster and manage change and complexity. Over 2,000 global corporations and more than 400 leading software vendors rely on ILOG’s market-leading business rule management system (BRMS), optimization and visualization software components, to achieve dramatic returns on investment, create market-defining products and services, and sharpen their competitive edge. The BRMS market share leader, ILOG was founded in 1987 and employs more than 600 people worldwide.

ILOG Worldwide Information Center - Tel: 1-800-FOR-ILOG (US only) or 1-775-881-2800 (International)
Australia - ILOG - Sydney - Tel: +61 (0) 2 9955 7210 - E-mail: [email protected]
China - ILOG (S) Pte. Ltd. - Beijing Representative Office - Tel. +86 10 8518 1080 - E-mail: [email protected]
France - ILOG S.A. - Gentilly - Tel: +33 (0)1 49 08 35 00 - E-mail: [email protected]
Germany - ILOG Deutschland GmbH - Bad Homburg v.d.H. - Tel: +49 6172 40 60 - 0 - E-mail: [email protected]
Japan - ILOG Co., Ltd - Tokyo - Tel: +81 3 5211 5770 - E-mail: [email protected]
Singapore - ILOG (S) Pte. Ltd. - Singapore - Tel: +65 67 73 06 26 - E-mail: [email protected]
Spain - ILOG S.A. - Madrid - Tel: +34 91 710 2480 - E-mail: [email protected]
UK - ILOG Ltd. - Bracknell - Tel: +44 (0) 1344 66 16 00 - E-mail: [email protected]
USA - ILOG, Inc. - Mountain View, CA - Tel: +1 650 567-8000 - E-mail: [email protected]
Representatives and distributors in other countries
ILOG, CPLEX and the ILOG logotype are registered trademarks, and all ILOG product names are trademarks of ILOG. All other brand, product and company names are trademarks or registered trademarks of their respective holders. The information presented in this brochure is summary in nature, subject to change, non-contractual, and intended only for general information.

Offline Revolt426

  • Member
  • *****
  • Posts: 6,190
Very detailed - I would suggest researching "Global Hawk" technology - that is what Tarpley/Griffin exposed as being behind the 9/11 auto-piloted planes, and it's been around for well over a decade.
"Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate … It will purge the rottenness out of the system..." - Andrew Mellon, Secretary of Treasury, 1929.

Offline TheHouseMan

  • Member
  • *****
  • Posts: 3,837
I think it would help if Anti_Illuminati explained what his info was about.. instead of posting stuff in stupid colors and fonts.. I sense extreme paranoia.

Offline 911aware

  • Member
  • *****
  • Posts: 519
I think it would help if Anti_Illuminati explained what his info was about.. instead of posting stuff in stupid colors and fonts.. I sense extreme paranoia.

i think we're all paranoid at this point...and for good reason.
It's a dog eat dog world, and I'm wearing Milkbone underwear.  -norm

Offline Matt Hatter

  • Member
  • *****
  • Posts: 1,543
Anti, can you upload all your research somewhere, maybe a torrent?

why when I read this thread do I get notice certificates from.  ???

Offline Dig

  • All eyes are opened, or opening, to the rights of man.
  • Member
  • *****
  • Posts: 63,090
    • Git Ureself Edumacated
I think it would help if Anti_Illuminati explained what his info was about.. instead of posting stuff in stupid colors and fonts.. I sense extreme paranoia.

Ummm...this information is fairly self-explanatory.  The colors highlight key sentences detailing the connections between terrorist-owned risk management software, the military industrial complex, false flag cover provided by insane war games, and the companies/government agencies behind the scenes.  Paranoia is based on the unrealistic idea that the world is out to get you; Anti_Illuminati has exposed the connections concerning the companies/agencies/people that create and maintain tyranny control grids throughout our country.

The ones that are paranoid are the elites that mandate these fricking control structures of deception, oppression, terror, and secrecy.
All eyes are opened, or opening, to the rights of man. The general spread of the light of science has already laid open to every view the palpable truth, that the mass of mankind has not been born with saddles on their backs, nor a favored few booted and spurred, ready to ride them legitimately

Offline Elvis

  • Member
  • *****
  • Posts: 1,356
  • just one
The ones that are paranoid are the elites that mandate these fricking control structures of deception, oppression, terror, and secrecy.
Well, just because they're paranoid doesn't mean ... oh yeah.
"A great civilization is not conquered from without until it has destroyed itself from within." - Will Durant


  • Guest
Re: Underpinnings to Global Govt. Exposed/C4ISR-AFCEA-Ptech-ILOG-GIG-NCOIC
« Reply #10 on: February 27, 2009, 09:59:44 pm »
Some of the following information is repetitious, but it will be left intact, as the delivery is powerful and significantly drives home the meaning of all this.

The PROMIS of 9/11 and beyond
By Jerry Mazza
Online Journal Associate Editor

Oct 17, 2006, 00:45

As whistleblower Richard Grove points out, SilverStream (the software company he worked for at the time of 9/11) served not only AIG (American International Group), it also built trading applications for Merrill Lynch, Deutsche Bank, Banker’s Trust, Alex Brown, Morgan Stanley, and Marsh McLennan. With this impressive list, according to Grove, “you pretty much had the major players involved in the financial aspect of the 9/11 fraudulent trading activity.” I might add that fraudulent activity extended from finance to our government as well.

In the weeks preceding 9/11, many elves on Wall Street were working into the night to make Marsh and others’ SilverStream fully operable. The software, as Grove pointed out to me, “was an internet portal framework. Panacya is AI (Artificial Intelligence). PROMIS and P-Tech are the grandparents if you will.” So let’s consider the grandparents to fully understand the power of the grandchildren. Sort of like Prescott Bush in relation to George W., Jeb, Neil and Marvin Bush.

PROMIS software (originally Prosecutor Management Intelligence System) appeared in the early 1980s. It was developed by a small Washington, DC, company, Inslaw Inc., and proved to be the perfect intelligence tool. Though designed for the Department of Justice to help prosecutors in case management, it hooked the attention of corrupt officials and Israeli intelligence. Subsequently stolen from Inslaw, the software was hacked and given a “trap door.” This trojan gave it the power to retrieve info for the US and Israel from the very foreign intelligence services and banks it had been sold to in some 40 countries.

The software helped the US win the Cold War against the Soviets, but also helped the Russian mafia, Saddam Hussein, Osama bin Laden & Company, and any number of spies and crooks. In 1985, Mossad spy and British media tycoon Robert Maxwell opened the “trap door” secret to Chinese Military Intelligence (PLA-2), at the same time selling them a copy of PROMIS for $9 million, turning it against the US. Unfortunately, in the mid-90s PLA-2 hacked the databases of the Los Alamos and Sandia laboratories to cop US nuclear secrets.

The KGB also bought PROMIS from Maxwell, and received the back door trojan to plant in a tender part of the FBI. Yes, there is no honor among thieves. We also provided PROMIS to Russia and China to backdoor their intelligence, figuring the 64 federal agencies they could expose did not outweigh the many other look-sees PROMIS provided the US. Actually, using the same PROMIS bought from Russia, Saddam and his regime shifted major money through the banking system. Some of these funds still feed Iraqi anti-coalition and resistance fighters.

Unfortunately, when Maxwell tried to extort more money from the KGB to pay off his huge corporate debts, he ended up falling off the back of a yacht into the deep blue drink, stung by a hot shot needle, this with a little help from his friends. Nevertheless PROMIS was as Michael Ruppert described in Crossing the Rubicon . . .

“ . . . software that could think, understand every major language in the world, that provided peepholes into everyone else’s computer ‘dressing rooms,’ that could insert data into computers without people’s knowledge, that could fill in blanks beyond human reasoning, and also predict what people would do — before they did it? You would probably use it wouldn’t you? But PROMIS is not a virus. It has to be installed as a program on the computer systems that you want to penetrate. Being as uniquely powerful as it is, this is usually not a problem. Once its power and advantages are demonstrated, most corporations, banks, or nations are eager to be a part of the 'exclusive' club that has it. And, as is becoming increasingly confirmed by sources connected to this story, especially in the worldwide banking system, not having PROMIS -- by whatever name it is offered -- can exclude you from participating in the ever more complex world of money transfers and money laundering. As an example, look at any of the symbols on the back of your ATM card. Picture your bank refusing to accept the software that made it possible to transfer funds from LA to St. Louis or from St. Louis to Rome.”

PROMIS Plus P-Tech Equal Disaster

The disaster I refer to is 9/11 . . . and is referenced in a FTW article by Jamey Hecht, with research assistance by Michael Kane and editorial comment by Michael C. Ruppert. The article is startlingly titled “PROMIS Connections to Cheney Control of 9/11 Attacks Confirmed.” It’s part of an equally startling piece “PTECH, 9/11, and USA-SAUDI TERROR -- Part 1." In it, is an interview between FTW and Wall Street whistleblower Indira Singh. Here’s a piece of it . . .

“FTW: You said at the 9/11 Citizens' Commission hearings, you mentioned -- it's on page 139 of transcript - that Ptech was with Mitre Corporation in the basement of the FAA for 2 years prior to 9/11 and their specific job was to look at interoperability issues the FAA had with NORAD and the Air Force, in case of an emergency [italics added].

“Indira Singh: Yes, I have a good diagram for that.

“FTW: And that relationship had been going on mediated by Ptech for 2 years prior to 9/11. You elsewhere say that the Secret Service is among the government entities that had a contract with Ptech. Mike Ruppert's thesis in Crossing the Rubicon, as you know, is that the software that was running information between FAA & NORAD was superseded by a parallel, subsuming, version of itself that was being run by the Secret Service on state of the art parallel equipment in the PEOC with a nucleus of Secret Service personnel around Cheney. In your view, might it have been the case that Cheney was using Ptech to surveil the function of the people in FAA & NORAD who wanted to do their jobs on 9/11, and then intervene to turn off the legitimate response?

“Indira Singh: Is it possible from a software point of view? Absolutely it's possible. Did he (Cheney) have such a capability? I don't know. But that's the ideal risk scenario - to have an overarching view of what's going on in data. That's exactly what I wanted for JP Morgan. You know what's ironic about this - I wanted to take my operational risk blueprint which is for an operational event going wrong and I wanted to make it generic for extreme event risk to surveil across intelligence networks. What you're describing is something that I said, 'boy if we had this in place maybe 9/11 wouldn't have happened.' When I was going down to DARPA and getting these guys excited about creating an extreme event risk blueprint to do this, I'm thinking of doing exactly what you're saying Cheney might have already had!

“I believe that Dick Cheney also had the ability using evolutions of the PROMIS software, to penetrate and override any other radar computer or communications system in the government.

(Mike Ruppert, in "Summation: Ladies and Gentlemen of the Jury," from Crossing The Rubicon, p.592.)

Also of prime importance is this second statement from the same piece . . .

“The Ptech story is a crucial piece of 9/11 because the software was used to simultaneously coordinate the FAA with NORAD and the Secret Service. But it transcends 9/11 because that terror attack is continuous with preceding decades of violent Islamic extremism epitomized in the international Muslim Brotherhood, of which al Qaeda is only one, relatively recent, incarnation. Worse, the Muslim Brotherhood has from its first days been linked to the Nazi party and its Swiss neo-Nazi epigones. Anti-Soviet projects of the CIA and the Pentagon (from 11-22-63 to the Afghan War) have long been recognized as continuous with the absorption of Nazi SS personnel into what became the CIA. The connection of the Bush crime family to the political economy of the Nazi movement is familiar from the excellent work of former Justice Department Nazi war crimes prosecutor John Loftus and others.9 Its triangulation with the Bush-Saudi alliance forms a powerful explanatory paradigm - one to which FTW will be paying further attention in the sequel to this story.”

And in another place, FTW reports, “September 1996/ Ptech already working with the DoD’s research group DARPA: ‘Ptech, based in Cambridge, Mass., offers an integrated set of object-oriented tools that enable users to create interactive blueprints of business processes. Software code can be generated from the hierarchical layout, providing rapid and consistent application development. The [Defense] Advanced Research Projects Agency is using [Ptech’s program called] Framework to help transfer commercial software methodologies to the defense sector.’”

The point of all this is aptly summed up in the “CODA: Knowledge is Power":

“The computational power of the Ptech evolution of PROMIS software represents a daunting new surveillance-and-intervention capability in the hands of the same elites who planned 9/11, prosecute the subsequent resource wars, and are presiding over what may become a full economic and military disaster for the resource-consuming citizens of America and the world. Since the ‘War On Terror’ and this coming dollar / natural gas collapse will necessitate new levels of domestic repression, this is just the capability those elites require. Ptech is Total Information Awareness . . .

“Programs based on datamining are powerful analytical tools; finding meaningful patterns in an ocean of information is very useful. But when such a tool is driven by a high-caliber artificial intelligence core [P-tech], its power gets spooky. The datamining capability becomes a smart search tool of the AI [Artificial Intelligence] program, and the system begins to learn.

“ . . . ’Neural Network’ programming is modeled on the computational techniques used by the human brain - an electrochemical computer that uses neurons instead of semiconductors; the firing or non-firing of neurons instead of ones and zeros.

With neural networking, software has become much smarter than it had been . . .

“ . . . Ptech's Framework can exploit the patterns it detects and extrapolate future probabilities. Then it can integrate itself with the computers from which it's getting the information and intervene in their functioning. The result is a tool for surveillance and intervention. The program can identify suspect streams of cash in a banking network and allow a bank officer to freeze the suspect assets. Of course, a user could direct the same program to prevent detection. It can discover salient anomalies in a person's movements through a city and either flag those anomalies for further scrutiny, or erase them from the record. And it can find errant flights in an air traffic map and initiate an intercept response. Or not.”

We seem to have arrived not only at 1984 but taken off for an unbidden future of governmental "Total Information Awareness" (TIA from DARPA) to be used in a new kind of warfare, not only on enemies, but on the people, too. For instance, IBM, in a 2001 newsletter/promotion piece, boasts that “IBM Enterprise Architecture Method [is] enabled through Ptech Framework” for commercial purposes. But considering Ptech’s nasty beginning and development through financial support from a trilogy of elites, neo-Nazis plus Muslim Brotherhood forces, we should take a look back at IBM’s historic contributions to the Nazi effort in Final Solutions: How IBM Helped Automate the Nazi Death Machine in Poland by Edwin Black.

“When Adolf Hitler came to power in 1933, most of the world saw a menace to humanity. But IBM saw Nazi Germany as a lucrative trading partner. Its president, Thomas J. Watson, engineered a strategic business alliance between IBM and the Reich, beginning in the first days of the Hitler regime and continuing right through World War II. This alliance catapulted Nazi Germany to become IBM's most important customer outside the U.S. IBM and the Nazis jointly designed, and IBM exclusively produced, technological solutions that enabled Hitler to accelerate and in many ways automate key aspects of his persecution of Jews, homosexuals, Jehovah's Witnesses, and others the Nazis considered enemies. Custom-designed, IBM-produced punch cards, sorted by IBM machines leased to the Nazis, helped organize and manage the initial identification and social expulsion of Jews and others, the confiscation of their property, their ghettoization, their deportation, and, ultimately, even their extermination”


[INSERT:  And today, unknown to virtually everyone in the world who is against the New World Order, Dr. Samer Minkara, once of Ptech Inc. (who also worked directly with Felix Rausch, the man who installed the software on NORAD's and the FAA's systems), now works at IBM.]

Some background on Minkara:

54. Samer Minkara, "Microelectronic Realization of Toda's Lattice and Its Application to Soliton Gates," U. of MD, 07/1992.
independent computer system consultant; Instructor, University College (UMD)

Samer Minkara, '83 M.S. EE, is a senior consultant for IBM in Massachusetts.

A Method for Aligning Architecture Frameworks and System Requirements
(See the PDF file for graphs)

Richard Eilers, Kevin Hall, Mark Rhoades, Dr Samer Minkara

NDIA System Engineering Conference 2008
22 October 2008


• Objective
• Description of Model Driven Systems Development Process
• FEAF and DODAF Application Examples
• Observations

• Communicate on-going IBM Federal Systems, Public Sector’s work translating Architecture Framework (FEAF, DODAF) material into Architectural Components and
  – Employing IBM’s Service Oriented Modeling and Architecture
  – Model Driven Systems Development/Engineering (MDSD/E) Methods and Tools supporting System of Systems (SoS)

Both examples are works in progress at different levels of maturity.

Description of IBM Model Driven Systems Development Process (© 2008 IBM Corporation)

• IBM is an acknowledged world leader in Service Oriented Architecture methods application in enterprise-level transformational and distributed systems development
• Over the last several years our Service Oriented Modeling and Architecture (SOMA) methods have been applied to increasingly challenging opportunities within the Federal Government
• IBM is evolving SOMA using model driven system development and engineering methods coupled with improvements in Rational Tools (which includes System Architect and DOORS) to increase process automation and product configuration control

Current IBM SOMA Design and Development Approach (Architecture Lead/SE Follow)

IBM’s Evolving Model Driven Systems Development (MDSD) Approach for System of Systems Engineering (SoSE): our evolutionary approach involves a collaborative interaction of business, architecture and system engineering skills.

Application Examples

IBM’s Federal Agency Transformation MDSD

• 55 System Interfaces
• 47 Business Processes
• 65 SOA Services
• 19 Applications (of 240): Build Custom / Buy-Enhance COTS / Enhance Existing
• 116 Data Entities
• 54 Integrators/Hosts
• 39 COTS Packages
• 50 Logical Server Images
• 65 Physical Servers & Appliances

Time to initial architecture and requirements definition: < 3 months

IBM’s DoD Distributed System Example (in Progress)

Standards Architecture Governance

Infrastructure and Hosting Integration


• System of systems engineering (SoSE) requires more simultaneous collaboration among participating organizations
  – Interaction between the business, architecture and engineering functions using an integrated methodology tied into an integrated toolkit
• SoSE projects require a cadre of highly experienced personnel to plan, orchestrate and lead the tightly coupled synthesis efforts
• SoSE’s role is to orchestrate this interaction using traditional SE control mechanisms

IBM Points of Contact

Richard  L. Eilers                     
IBM Global Business Services
Chief Engineer, DHS/NS&J Programs

[email protected]

Mark Rhoades   
IBM Global Business Services
Distinguished Engineer, Chief
Engineer, DoD Programs

[email protected]
Kevin W Hall     
IBM Global Business Services
Executive IT Architect

[email protected]

Dr Samer Minkara
IBM Global Business Services
Senior IT Architect

[email protected]
Enterprise Architecture: Roadmap for Modernization

Rick Tucker and Dennis Debrosse

Every year, federal and state governments spend billions of dollars building roads. The majority fulfill the mission for which they were designed: offering drivers a better way to get from point A to point B. Every year these governments also spend billions of dollars (or more) on modernizing information technology (IT) and services. Unfortunately, the final result of those purchases isn't always as smooth as the newly opened roads.

While it's hard to build a road without a lot of cooperation and coordination, it's not very difficult to buy software and hardware that fulfill the needs of a single division without considering the needs of the entire organization. If each division of an organization develops its own business processes and IT infrastructure, the end result may be lack of interoperability, duplicated components, functional gaps, and inability to share information. [INSERT:  Now are you beginning to see the big lie with this, and how this sets up world government?]

To avoid these problems, the federal government now mandates the use of enterprise architectures (EAs) by federal agencies seeking to obtain funding for any significant IT investment. Enterprise architectures act as a kind of roadmap for the design, development, and acquisition of complex, mission-oriented information systems. The goals of the planned capability might be general, such as achieving system-wide interoperability for daily operations, or specific, such as gathering and disseminating the intricate information needed to launch a surgically precise military strike. The key concept now is the mission, whereas in the past the focus was on technologies, in general, or specific systems, often within a single business unit in the overall organization.

An EA should describe all aspects of an organization (its mission, organizational structure, business processes, information exchanges, software applications, and underlying technical infrastructure), as well as the overarching need for information security. A change in one of these dimensions may impact the other enterprise dimensions. The ultimate EA, in conjunction with enterprise life cycle processes and enterprise engineering methods, should allow an organization to evolve and achieve near-term and long-term strategic business goals—while continuing to function efficiently day to day.

One driver behind the government's requirement for EAs—codified in the Office of Management and Budget's (OMB's) Circular A-130—is the recognition that federal agencies should operate more like commercial corporations. Market forces dictate that for-profit companies make choices, particularly IT and enterprise management decisions, that are based on improving productivity, performance, and efficiency—and, ultimately, the bottom line. Many of the e-government initiatives coming out of the OMB are compelling agencies to adopt programs similar to those of private companies to become more efficient and customer-focused.

Standard Roadsigns

EAs focus on identifying and selecting components that can be assembled into solutions to support business functions—and on the interactions among these various components. Individual elements will change over time (and should, based on improvements in technology, as well as other change drivers). But consideration of how changes in each of the various enterprise dimensions affect the other dimensions, including the overall mission, must remain paramount in order to manage change successfully. Many government agencies and other organizations have spent millions of dollars on massive IT infrastructure purchases or projects, only to find in the end that the new purchases or systems don't support their business needs or are not accepted within the organization. Such projects sometimes fail because they promote a narrow point of view; their designers and developers haven't considered the impact on the overall enterprise.

Each EA is different, reflecting the unique characteristics of the organization and its goals. But according to OMB's definition, all EAs should have three major components: the "As-Is," or baseline, architecture, which captures the organization's current architecture; the "To-Be," or target, architecture, which describes the organization's desired architecture as designed to achieve strategic goals; and a transition plan, which uses a phased approach to get from "As-Is" to "To-Be." An organization must also have a structured process for managing change to its EA, which needs to change as the organization changes—in a continuous process.
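The As-Is/To-Be/transition-plan structure can be made concrete with a toy gap analysis: diffing the component sets of the baseline and target architectures yields the work items of the transition plan. A sketch with made-up component names:

```python
# Toy gap analysis between an "As-Is" (baseline) and a "To-Be" (target)
# enterprise architecture. Component names are invented for illustration.

as_is = {"mainframe-billing", "dept-file-shares", "legacy-crm"}
to_be = {"soa-billing-service", "shared-data-hub", "legacy-crm"}

transition_plan = {
    "retire":  sorted(as_is - to_be),   # in baseline, not in target
    "acquire": sorted(to_be - as_is),   # in target, not in baseline
    "keep":    sorted(as_is & to_be),   # unchanged across the transition
}
print(transition_plan)
# {'retire': ['dept-file-shares', 'mainframe-billing'],
#  'acquire': ['shared-data-hub', 'soa-billing-service'],
#  'keep': ['legacy-crm']}
```

A real transition plan phases the "retire" and "acquire" items over time and manages the dependencies between them; the set difference is only the starting inventory.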

Most EAs are documented in the form of "work products," such as models, graphics, and other descriptions of the enterprise's environment and design. To enforce some level of consistency among architecture description content and format, EAs are based on templates called "frameworks," which specify models for describing and documenting the individual architectures.

Critical Intersections

MITRE has worked with a variety of government customers to help them build their EAs and establish EA programs, most often in the context of supporting their overall enterprise modernization or transformation programs.

A key skill that MITRE staff bring to the world of enterprise modernization is an understanding of the intersection among business needs, information technology, and people—the point where EAs become central to enterprise modernization. We also bring world-class capabilities in strategic planning, enterprise engineering, investment management, program acquisition, and program management to help our sponsors effectively use their EAs to achieve their modernization goals.

MITRE is now involved in EA development for the IRS, Customs and Border Protection, U.S. Coast Guard, Department of Treasury, Department of Defense (DOD), Federal Aviation Administration, Department of Homeland Security (DHS), and the Peace Corps. We are the guiding force behind the Air Force's overall EA program and have leveraged this experience to help the Army, Marine Corps, and Navy with their EAs and modernization programs.

MITRE's Center for Enterprise Modernization (CEM) is currently conducting research into the complex problem of multi-agency enterprise architecture, i.e., how can different government entities share their business processes, information stores, technical systems, and human resources in a cohesive, secure way to accomplish a common mission?

In a post-9/11 world, this type of EA has grown in importance. For example, DHS is a very prominent driver for establishing multi-agency missions and architectures.

MITRE has also conducted many pivotal EA-related studies, providing timely recommendations that help federal agencies make crucial decisions on the road to modernization. In recognition of our groundbreaking efforts developing both foundational EA frameworks and specific architectures, the Federal CIO Council chose MITRE to coordinate, edit, and produce the seminal work, A Practical Guide to Federal Enterprise Architecture, in 2001 (available for download). The book gives a thorough overview of the steps required to develop an EA and describes the most commonly used frameworks.

Creating an enterprise architecture requires participation from many areas of the organization and a great deal of communication to plan and implement each stage of the process. The result is a roadmap that guides an organization through the modernization process and enables it to achieve its goals.

Air Force

All branches of the U.S. military have undertaken the overarching mission of modernization and transformation: fielding 21st century forces using the latest technologies. Central to the DOD's transformation is the development of EAs. Our longstanding relationship with the Air Force's Electronic Systems Center (ESC)—first as chief engineer and now also as chief architect—made us a natural choice to assist in the creation of the Air Force's Chief Architect's Office, or CAO. ESC's Architecture Councils have been the guiding force behind the evolution of an EA for the infrastructure and mission applications that make up the Air Force Command and Control Constellation.

The stakes are high: the CAO, in conjunction with the Air Force's chief information officer, is committed not only to establishing and overseeing Air Force architecture policy, guidance, standards, and products, but also to making sure the EA works to enhance warfighter IT-based capabilities. Moreover, those capabilities must function in a joint environment with other service branches—and often in a coalition environment with other countries or organizations such as NATO.

Working through the CAO, our staff is helping the Air Force move toward several EA goals, such as building architectural processes through collaborative models, integrating the architecture with core Air Force processes, and aligning the architectural responsibilities with assigned Air Force roles. Some of our other objectives in supporting DOD transformation include creating a set of IT standards that leverage commercial-off-the-shelf products as much as possible and developing a multi-level architecture training program for Air Force stakeholders.

Coast Guard

Today, the U.S. Coast Guard (USCG) faces perhaps its greatest period of change since its creation more than 200 years ago. Besides undertaking two long-term modernization programs, Deepwater and Rescue 21, the entire service has moved under the jurisdiction of the newly founded DHS.

In June 2002, the USCG initiated an EA program. The initial focus has been on command, control, communications, computers, intelligence, surveillance, and reconnaissance capabilities, with four basic objectives: provide guidance for coordinating modernization, identify enterprise-wide standards to enhance interoperability, support the capital planning and investment processes, and comply with federal requirements for EAs.

A CEM team has undertaken a central role in developing the Coast Guard's EA. For example, we functioned as integrator across the plans and architectures generated by various groups and contractors documenting USCG capabilities and needs, while also coordinating requirements from the DOD and DHS.

The most important result of our efforts is the Coast Guard's Enterprise Transition Plan. As primary author, one of our main responsibilities is to examine the pathway from "As-Is" to "To-Be" and identify any gaps, overlaps, omissions, and incompatibilities. The first increment of the target architecture is focused on capabilities to be deployed in 2005. The MITRE team, which supports the needs of the capital investment cycle, has already learned valuable lessons. For example, the EA process must be integrated with the USCG's budget planning cycle to support the decision-making needs of the leadership and enable buy-in from across the organization. In addition, it is vital to facilitate enterprise change management early on by identifying stakeholders and communicating the details of the architecture process, products, usage, and impact.

A man grieves outside the World Trade Center site in New York September 11, 2006. Stress brought on by the September 11 attacks in New York and Washington in 2001 led to heart problems for some Americans, even if they had no personal connection to the events, a study released on Monday found.
Detailed Collaborative Analysis
    Samer Minkara, Ptech Inc.
This demonstration is based on a philosophy of cooperative teamwork and a unique approach for specifying the requirements of collaborative systems. A methodology for analyzing and prototyping collaborative systems is supported by the TeamWork feature of FrameWork from Ptech Inc. The tool for capturing specifications allows users to extend the concepts for requirements modeling, making it possible to capture system stipulations from various points of view. The development and prototyping environments use a graphical interface that simplifies design and analysis and makes them visible to all levels of workers. We will show a design example for a multiuser locking mechanism from which scenarios can be modeled and prototyped.

The bolded part shows how the Ptech Framework facilitates extreme collaboration, engineering, and process modeling, pooling the development skills of many people.  This is the next generation think tank.  The New World Order is WEAK, and MUST rely on these systems, created as a result of hoarding technology and brutally murdering any who stood in their way for decades.  Like Danny Casolaro, whom I will talk about after this post.
This opening paragraph of the article from the Village Voice, March 27 - April 2, 2002, should not only give you pause but hopefully propel you through the rest of the piece. You will see specifically how IBM of 1933 and beyond enabled the organization of the death camps -- thanks to the chairman of IBM, Thomas Watson; the New York Madison Avenue branch of IBM; the German subsidiary, acronym “Dehomag”; and Watson Business Machines, headquartered in Warsaw, which continued operating well past 1941 under German management, preserving and delivering profits on all the information-organization machines of IBM.

Here we have a military industrial complex at one of its lowest ebbs in history. Perhaps it is about to be matched in the use of various software technology and its intelligence gathering capability to initiate and implement the War on Terror vis-a-vis 9/11, the New Pearl Harbor, and in the ongoing neocon march to world hegemony. That may be the PROMIS of our future. Or do we have something to say about it as a people?

If this all seems terribly grim to you, its purpose is to help keep you aware and alive, to enjoy the beauty and power of life; and also to understand just how low sectors of humanity can sink to reach the greatest heights of political power, which inevitably ends up destroying untold numbers of innocent people around the world.

This theme is explored as well in Carolyn Baker’s FTW article, The War On You: U.S. Government Targeting of American Dissidents, Part II, provided free. It not only connects many dots but blows its whistle loud enough to wake up the most “doubting Thomas” as to what was and is being planned for the near future. My thanks to Michael Ruppert, wherever he is in the Wilderness, for having so diligently put his intelligence and life on the line to first inform us. What began as software to help prosecutors manage their intelligence data has evolved into a Big Brother Prosecutor with a superhuman intelligence to manage our existence through data mining.

Have a good and conscious day.
Jerry Mazza is a freelance writer living in New York. Reach him at [email protected].

Copyright © 1998-2007 Online Journal
Email Online Journal Editor


  • Guest
Re: Technological underpinnings to Global Govt./
« Reply #11 on: February 28, 2009, 03:43:38 am »

Folks, when you read this, keep in mind this was in 2003!  They are much more hardcore and advanced now than what they talk about here.  But MITRE themselves are basically coming right out and telling you flat out one of the main ways the NWO (their Masters obviously; the people at MITRE are low-level minions) is achieving world government, without coming right out and saying it.  It is AS obvious as the fact that you are breathing right now.

Frameworks Are Valuable Templates for Developing Enterprise Architectures

Ann Reedy

It's the rare house that's built without an architectural blueprint. Whether Colonial or Victorian, split-level or ranch, architects follow certain common practices in developing the specifications for building the entire house from various perspectives: foundation, walls, electricity, ventilation, plumbing, etc. In the world of enterprise modernization for government organizations, designers and developers also use common frameworks to integrate multiple perspectives and stakeholder interests within an enterprise architecture (EA). A common framework allows all stakeholders to understand and use an agency's EA, just as homeowners, contractors, and subcontractors build a house from a set of blueprints.  [INSERT:  One problem:  The New World Order is the ONLY provider of the frameworks!  Which gives them total control over everything on the planet--hence their term e-Governance!  They aren't playing games like Alex has always said!]

A framework establishes a common terminology for cross-organization communication, and a common structure for classifying and organizing the EA contents. Frameworks provide a template for collecting information and they define the information to be stored in an EA repository. They frequently include guidance in terms of formats and detailed information about the contents of architecture work products (such as diagrams, models, or other descriptions of the enterprise's environment and design), as well as graphs, charts, and other tools to help communicate what is contained in the EA.

In the federal government, EAs need to be consistent not only horizontally with other organizations at the same level, but also vertically with their higher-level managing organizations (and with external organizations). [INSERT:  Remember how Alex always talks about how the NWO vertically integrates everything?  They have to, to have total control] For example, EAs for agencies operating under the Department of Treasury need to be horizontally consistent to ensure they can cooperate in Treasury-wide missions and objectives, and EAs for the same agencies need to be consistent with the Department of Treasury's own EA. Moreover, if multiple agencies use the same EA framework, the work products could be used to enhance communication and interoperability among these agencies with respect to business processes and IT systems.

MITRE's role in the development of many EA frameworks within the federal government puts us in a position to help ensure these frameworks are converging as they evolve. Our experience has allowed us to share lessons learned from multiple organizations with customers and incorporate the lessons into successive framework updates. MITRE's Center for Enterprise Modernization (CEM) is also participating in working groups that support the recently formed Federal Enterprise Architecture (FEA) [INSERT:  Felix Rausch's Institute is the FEAC, Federal Enterprise Architecture Certification] Program Management Office (PMO). An important FEA PMO objective is to guide federal agencies in implementing architectures that focus on each agency's primary mission and core competencies and eliminate replication of support functions that could be centralized or outsourced.

The four major frameworks used by U.S. government organizations are described below, along with information on the framework adopted by the National Association of State Chief Information Officers and the rapidly progressing FEA.

Zachman Framework

Though not a federal EA framework, the Zachman Framework has been widely accepted as a model for basic enterprise architecture. Many commercial and government frameworks base at least part, if not all, of their underlying structure on Zachman.

John Zachman is credited with introducing the concept of an enterprise architecture framework in a 1987 issue of the IBM Systems Journal. His framework is a conceptual structure, in the form of a matrix, which organizes EA information in a standardized manner. It provides perspectives into the enterprise for five layers of enterprise stakeholders (planner, owner, designer, builder, and subcontractor) and orthogonal views based on functional areas (what, how, where, who, when, and why). Zachman does not specify architecture work products, although his publications have suggested the types of information to detail for each "cell" of his matrix. (For more information, see the Zachman Institute for Framework Advancement.)

One area the Zachman Framework does not cover is information security. Many organizations attempt to retrofit security features into an architecture after it has reached the design or development stages. MITRE is strongly encouraging organizations to build security into the earliest concepts of an architecture. We are sponsoring internal research and development work on a new approach, using a concept called "security patterns" (analogous to design patterns) to build information security into each layer of an EA from initial planning through implementation. Although the MITRE project is based on the Zachman model, the security patterns concept could be used with any structured EA framework.
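The matrix structure Zachman describes, five stakeholder rows against six interrogative columns, can be sketched as a simple lookup table (a hypothetical Python illustration only, not MITRE or Zachman tooling; the cell entry shown is an invented placeholder, not an actual Zachman work product):

```python
# Minimal sketch of the Zachman matrix as a lookup table keyed by
# (perspective, interrogative). All cell contents here are invented.
PERSPECTIVES = ["planner", "owner", "designer", "builder", "subcontractor"]
INTERROGATIVES = ["what", "how", "where", "who", "when", "why"]

# Each cell holds whatever work products an organization chooses to file there.
matrix = {(p, q): [] for p in PERSPECTIVES for q in INTERROGATIVES}

# For example, the owner's "how" cell might hold a business-process model.
matrix[("owner", "how")].append("business process model")
```

The point of the structure is simply that every piece of EA information has exactly one home cell, so stakeholders at different layers can find their own view of the enterprise.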

Since the passage of the Clinger-Cohen Act and the revision of the OMB's Circular A-130 in 2000, federal agencies have been required to develop EAs to manage their information technology investments and projects. Starting in 2004, agencies must correlate their budget submissions with their EAs.  [So there you have an element of FORCED COMPLIANCE, hence the word "must".]

DOD/C4ISR Framework

Before EA frameworks were adopted by other government organizations—and before they were required by the Office of Management and Budget (OMB)—the Department of Defense (DOD) had begun developing an overall standardized architecture framework. DOD's C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) Architecture Framework, Version 1, was produced in June 1996. For several years prior to that time, MITRE worked with the DOD to develop standard technical architectures (such as the Joint Technical Architecture) and technical reference models. Since 1998, C4ISR Version 2 has been the mandated framework for development of DOD architectures and has influenced the development of several other civilian and military architecture frameworks around the world, including NATO's and the Federal Enterprise Architecture Framework (FEAF).

The C4ISR Architecture Framework is undergoing an evolutionary revision and is re-emerging as the DOD Architecture Framework. The DOD Architecture Framework is expected to become mandatory for all architectures within the Department of Defense, not only for C4ISR-related architectures, but for other elements as well, such as acquisitions, logistics, and financial management. In addition to being one of the principal authors of the original C4ISR framework, MITRE has contributed to its ongoing development, particularly during this crucial time of transition.  [The crucial time being when you needed to carry out 9/11 to help DoDAF move through its transition phase, because it was known that it would be the groundwork for the final GIG for a total global dictatorship from hell.]

The C4ISR/DOD Architecture Framework defines three views and their relationships: (1) operational: tasks, activities, operational nodes, and information flows associated with a mission or business; (2) systems: capabilities that implement operational requirements; and (3) technical: standards and other criteria for systems. Each view has an associated set of architecture work products. Products can be selected and used based on the specific purpose of the architecture. (For more information, see the Assistant Secretary of Defense for Networks and Information Integration.)
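The three-view structure can be illustrated with a minimal data structure (a hedged Python sketch; the `Architecture` class and the product names are invented for illustration and are not DOD or MITRE code):

```python
from dataclasses import dataclass, field

# The three views defined by the C4ISR/DOD framework, per the text above.
VIEWS = ("operational", "systems", "technical")

@dataclass
class Product:
    name: str
    view: str  # must be one of VIEWS

@dataclass
class Architecture:
    products: list = field(default_factory=list)

    def add(self, name, view):
        # Every work product is filed under exactly one of the three views.
        if view not in VIEWS:
            raise ValueError(f"unknown view: {view}")
        self.products.append(Product(name, view))

    def by_view(self, view):
        return [p.name for p in self.products if p.view == view]

# Invented example products, one per view.
arch = Architecture()
arch.add("high-level operational concept", "operational")
arch.add("systems interface description", "systems")
arch.add("technical standards profile", "technical")
```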


Treasury Enterprise Architecture Framework

MITRE played a strong role in the development of the Treasury EAF (TEAF), which is an adaptation of the Zachman Framework that has a specified set of work products similar to those used in the DOD. The TEAF is a common framework for the Treasury Department and its bureaus (including its largest, the IRS), enabling them to produce their EAs in a coherent and consistent manner.

Recently, the Customs and Border Protection (CBP) bureau developed an EA based on the TEAF for its enterprise modernization program. This year Customs moved from the Treasury to become part of the Department of Homeland Security (DHS), along with more than 20 other government agencies, including the U.S. Coast Guard.

The Coast Guard had also recently initiated an enterprise modernization program and had developed an EA based on the C4ISR Architecture Framework. Some agencies in DHS, including CBP and the Coast Guard, will need to continue modernizing their organizations according to pre-DHS plans, while simultaneously transitioning many of their functions to multiagency missions managed by DHS. MITRE is continuing to help Customs and Border Protection, as well as the Coast Guard and other agencies, with their modernization and architecture programs during this transition. (For more information about TEAF, see the U.S. Department of the Treasury.)


Federal Enterprise Architecture Framework

In 1999, the Federal CIO Council developed the FEAF to provide architecture guidance for federal cross-agency or "segment" architectures. The idea is that a full Federal Enterprise Architecture is sufficiently complex that it must be built up out of segment architectures. Segments represent government business areas in which multiple agencies need to cooperate, either to coordinate activities or to share resources.
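The segment idea can be made concrete with a toy example (hypothetical Python; the segment names and agency assignments are invented to illustrate composition and are not taken from the actual FEA):

```python
# Invented segments: each maps a cross-agency business area to the set of
# agencies that must cooperate in it. Assignments are illustrative only.
segments = {
    "international-trade": {"CBP", "Coast Guard"},
    "tax-administration": {"IRS"},
    "grants-management": {"HHS", "Treasury"},
}

def agencies_in(segment_names):
    """Agencies that must cooperate across the chosen segments."""
    out = set()
    for name in segment_names:
        out |= segments[name]
    return out

# A "full" architecture question spans segments, pulling in every participant.
participants = agencies_in(["international-trade", "grants-management"])
```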

Like the TEAF, the FEAF is based on the Zachman Framework. Unlike the DOD/C4ISR Architecture Framework, the current FEAF does not specify architecture work products, but instead focuses on introducing federal EA concepts and references. MITRE has been working with the Federal CIO Council's Architecture and Infrastructure Committee to update federal architecture guidance and practices in areas such as architecture work products, EA governance, tailored usage by small agencies, criteria for selecting EA tools and repositories, multi-agency planning, and information security. (For more information about FEAF, see the Chief Information Officers Council.)

As MITRE continues to participate in working groups that advise the Federal CIO Council in updating federal architecture guidance, we will help determine how best to establish a "federated" architecture framework that integrates the best practices in frameworks and architectures across government agencies.  [Remember what I highlighted in red here, it is very important to understand everything else.]

NASCIO Framework

In addition to the federal government EA frameworks, the National Association of State Chief Information Officers (NASCIO) has established an EA framework for use by states. Compatibility with the NASCIO framework is important to federal government agencies such as the Department of Homeland Security, which must coordinate with state and local agencies and first responders. (For more information, see the National Association of State Chief Information Officers.)


Federal Enterprise Architecture

In 2001, a Presidential task force identified 24 e-government initiatives that could both transform the federal government and simplify common business and technical functions implemented by multiple government agencies. Driven partly by these e-government initiatives, OMB is facilitating implementation of a top-level Federal Enterprise Architecture that will identify common lines of business, performance measures, and data across federal agencies, and will specify common information technologies, components, and services for use by federal agencies.

To help carry out this ambitious plan, OMB established the FEA Program Management Office in 2002. The OMB's goals for this office are to provide outstanding service to citizens, manage performance, and take advantage of market successes (products and processes) to improve government. The FEA is being constructed as a set of interrelated "reference models," which represent both unique and common functions in government agencies and serve as a basis for identifying duplicative investments, gaps, and opportunities for collaboration within and across federal agencies.

In June 2003, the FEA PMO released the newest version of its Business Reference Model, along with initial versions of the Service Component Model and Technical Reference Models. In August, it released the initial draft of the Performance Reference Model. As part of their annual budget submissions to OMB, agencies will be required to report information about their planned investments in terms of these reference models starting in FY2005.

Using Interoperable Process Models in a Multi-agency Planning Toolkit for Enterprise and C4ISR Architecture Analysis

David Payne, Kenneth Hoffman, Kangmin Zheng

Increasingly, government initiatives such as e-Government, along with new governmental challenges such as homeland security, require groups of agencies to interoperate at the mission, activity, and organizational levels, as well as at the information system support level. The collaborating agencies typically have incompatible information or enterprise architectures at varied stages of development and sophistication. Key aspects of successful interagency collaboration depend on the interplay of human, IT, and non-IT systems forming a workflow or activity network. Issues include timing, synchronization, priorities, bandwidth, systems compatibility, and agency-unique syntaxes. The syntax issues exist at both the data and information exchange levels, with information exchange syntax differences impacting both human and machine information exchanges.

A MITRE research team is exploring the issues surrounding enterprise modernization planning in a multi-agency environment. Its objective is to support integrated mission and information technology planning covering enterprise strategic plans, performance management, investment plans, and information resource management plans through more detailed representations of mission and business processes that are coupled to the Enterprise Architecture (EA). The team is developing the Multi-agency Planning Toolkit, which will include use of process models developed in several commercial process modeling environments, as well as static information architecture products conforming to defined architecture frameworks. One of our goals is to find ways to easily move architectural and process information between the static architecture environments and the dynamic process model environments. A second goal is to explore environment to environment level interoperability of the process models built in different commercial-off-the-shelf (COTS) environments. This capability will support use of existing models from individual agencies as active components of a larger multi-agency planning environment.

The work is being funded by MITRE's Center for Enterprise Modernization, a federally funded research and development center sponsored by the Internal Revenue Service.

Planning Framework

Specific multi-agency missions we are considering in the design of the planning framework include:

    * International trade and the sequence of activities starting with the entry request and shipping documentation, through tracking and tracing, targeting for inspection, and the release or seizure of the shipment [Hey, wait!  I thought this was for Al-Qaeda!?]
    * E-government services to the citizen
    * Joint military missions
    * Civil-military operations and collaborations
    * Leveraging of the existing federal processes to deliver new or expanded programs or services (e.g., the new health care tax credit requires cooperation between the Internal Revenue Service and the Department of Health and Human Services.)

Planning and analysis for multi-agency missions covers:

    * Modernization and performance improvement programs
    * Investment analysis and portfolio management for mission/business and information systems
    * Systems acquisition
    * Operations and resource allocation

Such planning is a significant challenge that requires a comprehensive base of information on mission objectives and the resources to be applied. Resource categories that must be addressed include the workforce, facilities, technology, and information. Much of the required information base for planning is provided through the significant effort devoted to enterprise architectures by individual agencies. Critical information on assets, processes, and location may be contained directly in the architecture or may be identified as an external information resource.

The dynamic changes in technology and business practice have imposed greater pressures on many government agencies to provide faster delivery of more accurate and up-to-date information to customers and to all levels of their organizations. To tackle these challenges, the Office of Management and Budget has established a policy in its Circular Number A-130 requiring each government agency to develop and use an enterprise architecture as the primary mechanism for enterprise modernization. These architectures also form the principal basis for justifying and managing future information technology investments, as part of the U.S. federal budget process.

Many agencies have already developed EAs, and most others are in the process of developing one. A complete enterprise architecture contains a number of specifically formatted information products depicting: the agency's concept of operations or business model; information flows within the agency and between external mission partners; dependencies and relationships among business activities; information systems; data elements; and the interconnections of the hardware, software, and telecommunications within the agency. The ability to represent and efficiently use EA data in a uniform way—regardless of the source, platform, and formats—is crucial for successful and effective multi-agency mission planning.

Responding to the Challenge: The Planner's Workbench

Our response to the multi-agency planning challenge is a research and development effort to produce what we call the "Planner's Workbench."

Our vision of this Planner's Workbench organizes and integrates a variety of analytical tools and methods that will provide planners and analysts access to process, geographical, and technical information pertinent to planning mission and support activities involving multiple agencies. The Workbench will integrate process models developed in several commercial process modeling environments with static information architecture products conforming to defined architecture frameworks.

Our concept for the Planner's Workbench includes a toolkit, containing integrating tools to easily move architectural and process information between different architecture environments, different process modeling environments, and between the static architecture environments and the dynamic process model environments. The toolkit will also provide a data repository engine to store and exchange architecture data between the various architecture products and tools. A third component will be a geographic information system (GIS) interface that will allow the user to easily enhance architecture products with accurate geospatial information, keyed to a scenario timeline. The final component of the toolkit is a generic high level architecture (HLA) link for COTS environment-to-environment interoperability of the process models. This will support use of existing models from individual agencies as active components of a larger multi-agency planning environment.

In addition to the toolkit, the Planner's Workbench will also provide a non-volatile architecture metadata repository to store information about the architecture products and their components. This repository can grow with use, enhancing and speeding future multi-agency architecture efforts by supporting reuse of agency-specific architecture information. The Workbench will also have a "Users Guide," in the form of a detailed multi-agency architecture analysis and improvement methodology. We hope to provide this methodology to users via a Web-based interface with links to the supporting Toolkit components.


Figure 1 diagrams the integration view of the toolkit, along with the Planner's Workbench and enterprise data. Below, we discuss the core elements of the toolkit.

Figure 1: Multi-agency Enterprise Planning Framework


Translators

Translators are programs that take either the native file format or some supported export file format from one application or environment and convert to the native file format or some supported import file format of another application or environment. The Toolkit needs translators because there is no mandated or standard format for information architecture products. Architectures are often documented in a mix of document systems (often Microsoft Word or Adobe Acrobat), presentation or graphics systems (such as Microsoft PowerPoint or Visio), and spreadsheets (such as Microsoft Excel). Additionally, specialized architecting environments are becoming more common, and each has its own file format.

Common architecting environments include Popkin System Architect, pTech, Rational Rose, and netViz. Finally, many process models built in process-simulation environments document key details of enterprise and information architecture and are, of course, useful for analyzing the static architectures depicted in the architecture tool files. A rich set of translator tools provides the means to rapidly get the information documented in a number of formats into a single format for additional analysis and development, and into an executable format for dynamic analysis through simulation.

Our toolkit will leverage a MITRE-developed translator suite called ICAMS. Where ICAMS falls short, we will extend its capabilities, identify and use other translator tools, or develop our own. Note also that many tools and environments support import and export capabilities; when the export format of the source environment matches a supported import format of the target environment, no additional translator tool is needed.
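The translator pattern described here, mapping a source tool's export format to a target tool's import format, might be sketched as a small registry (an assumption-laden Python illustration; the format names and the conversion body are invented, and this does not model ICAMS itself):

```python
# Hypothetical translator registry: each entry converts one tool's export
# format into another tool's import format. Format names are invented.
translators = {}

def register(src_fmt, dst_fmt):
    """Decorator that files a conversion function under (src, dst)."""
    def wrap(fn):
        translators[(src_fmt, dst_fmt)] = fn
        return fn
    return wrap

@register("visio-xml", "sa-import")
def visio_to_sa(doc):
    # A real translator would parse the Visio export and emit a System
    # Architect import file; here we just tag the payload to show the flow.
    return {"format": "sa-import", "payload": doc}

def translate(doc, src_fmt, dst_fmt):
    if (src_fmt, dst_fmt) in translators:
        return translators[(src_fmt, dst_fmt)](doc)
    raise KeyError(f"no translator from {src_fmt} to {dst_fmt}")

result = translate("<shapes/>", "visio-xml", "sa-import")
```

A matching export/import pair would simply bypass the registry, which is the "no additional translator needed" case noted above.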

Architecture Environment and Repository

We have selected Popkin System Architect (SA) as our target architecture environment, based on availability within MITRE and the fact that several of our MITRE sponsors have also chosen to use this environment. What we hope to do with SA, however, is to leverage architecture metadata to allow us to chain a number of architecture products either done in SA or translated to SA into one virtual, multi-agency architecture.

We then intend to use this virtual multi-agency architecture to generate executable process models to first support benchmarking the current performance of the multi-agency coalition and then re-engineer the multi-agency process to produce a quantifiable predicted performance improvement.

Finally, we will reverse the translation process to import the improved processes back into SA, modify or develop the rest of the "to-be" architecture package in SA, and then export the improved "to-be" architecture products into the native (that is, preferred) file formats used by the agencies in the coalition. These products will then support systems design, acquisition, systems modification or development, and integration of the new processes and information technology required to field the new multi-agency capability.

We plan to leverage the repository capabilities built into SA for the initial version of the toolkit. Later on we hope to develop a full and open metadata repository that will serve correctly formatted data files to a number of architecture and modeling environments, all based on one common architecture repository. This will help ensure that the architecture data remains consistent and make it easily available to users regardless of the architecture tool they choose to use.

GIS Location Tool

Early in our exploratory research we noticed that many multi-agency missions, and hence the architectures that support those missions, are dependent on the geospatial location of the multi-agency nodes participating in a process or scenario. Moreover, this geometry changes over time. Existing architecture tools and the other applications commonly used to build architecture products all seem to share several common weaknesses in documenting node locations.

The biggest problem is that the location attributes either provided (as in the architecture tools, which have a library of predefined entity types) or defined by the user (as in the non-specific applications used to document architecture) use general, subjective descriptions of nodes and node locations. What we need is a way to precisely describe locations in a recognized coordinate system. We believe we can use a GIS interface based on ArcView to allow users to precisely locate nodes in the world, relate their coordinates to other nodal attributes, and relate sets of interconnected nodes at specific times in mission scenarios or business cases.
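The shift from subjective location text to a recognized coordinate system can be illustrated with a small sketch: each node carries WGS84 latitude/longitude, and node-to-node separation falls out of the standard haversine great-circle formula. The node names and coordinates are hypothetical; a real implementation would sit behind an ArcView interface.

```python
# Illustrative sketch: architecture nodes with precise coordinates, plus a
# great-circle distance between them.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

nodes = {
    "HQ_Agency_A": {"lat": 38.9187, "lon": -77.2311},   # e.g., near McLean, VA
    "Field_Node_B": {"lat": 42.4699, "lon": -71.2890},  # e.g., near Bedford, MA
}

a, b = nodes["HQ_Agency_A"], nodes["Field_Node_B"]
separation = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
```

With coordinates attached to each node, the same records can be related to other nodal attributes and to the interconnection geometry at specific times in a scenario.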

Providing Interoperability

In addition to being a source of enterprise architecture information, simulations of key processes in member agencies of a multi-agency coalition often exist and offer a tempting opportunity to leverage past work. These simulations are typically executable process models built in a COTS modeling environment. The modeled processes typically take an input that originates in another organization or from the public, perform some internal work, and produce an output to another agency. The initial agency may eventually provide an output to the public (or other initiator), but the inputs and outputs to other agencies are typically limited and occur at discrete points in the simulation. Hence it is attractive to try to chain several process simulations together to model an entire end-to-end process. However, the component models may be built in different COTS environments, or the total process chain may overwhelm the capacity of the typical desktop or laptop computers supporting the models. Distributing the models across an interoperable federation is attractive as a way to address either problem.

Our toolkit will attempt to use the HLA simulation interoperability standard to link several COTS modeling environments at the environment level. Our intent is to ease the use of HLA for non-HLA users, such as the typical modelers and analysts using the COTS process modeling environments. Our current strategy is to design a federation object model (FOM) and a base object model (BOM) that will allow a standard, though limited, level of model interaction. The FOM will define the interchange objects and guide design of HLA gateways for COTS environments where they don't yet exist (at least one COTS vendor has already developed a gateway for its tool), and the BOM will serve as a template to build export and import components in the native form for each supported COTS environment. The COTS modeling environments we are targeting support component-based graphical modeling interfaces, so the idea of portal components should be easily grasped by the users of the COTS tools. We tentatively call this interoperability capability "COTSFed."

COTSFed will help us make maximum use of existing process models, even if those models are implemented in different COTS simulation environments.

Metadata Repository

The long range goal for the Planners Workbench is to build a capability to link inconsistent data, different technologies, and diverse EA application models for the multi-agency planning environment. In order to build such an advanced capability, we recognize the need to use the most current available technologies and industrial standards. The technologies considered include the OMG Model Driven Architecture strategy, providing platform-independent models that can be mapped to evolving technical platforms, and the HLA standard for simulation interoperability. We are also giving close attention to the impact of widely accepted standards and technologies—such as the Common Warehouse Metamodel, the metadata standards (MOF, XMI, JMI, and XML), software agent technology, data warehousing, EAI technology, and electronic commerce—to address mapping data between diverse EA application models dynamically.  [This is THE foundation for DHS's no buy or sell/criminal/dhsMMR/"terror" database and all other databases used by LE.  They had to get this in place to be able to carry out future martial law to be able to have "actionable intelligence" of the enemies of the NWO.]

Multi-agency Planning Framework Methodology

The Multi-agency Planning Framework methodology is designed to plan and integrate:

    * Mission planning—identifying the specific resources needed to perform the mission and/or support activities
    * Investment Resource Management planning and investment analysis—identifying and prioritizing appropriate information services to support mission activities and internal operations
    * The requirements, acquisition, operations, and maintenance life cycle—putting the changes in place to modernize and/or improve processes and activities
    * Mission performance planning and analysis to meet tactical and strategic objectives
    * Budgeting and cost accounting for specific mission and support activities

The planning methodology is based on the concept of an Enterprise Work Breakdown Structure (EWBS) describing the multi-agency mission and support activities, and uses principles of Activity Based Planning and Management. Given a complete and precise EWBS, the architecture and supporting process models can be developed at the appropriate level of detail to provide a comprehensive knowledge base applicable to all of the above planning functions.

The first step in integrating architecture from several agencies is of course to collect the architectures. If each agency has a recent architecture and it is accurate and complete, this is a trivial step. If not, then "step 0" is to collect and construct at least a minimal agency enterprise architecture. Several methods exist to do this, but this topic is outside the scope of this paper. The next step is to get all of the architecture products into a common format. This is where the translator tools in the toolkit will come into play. Our short-term approach is to translate the various architecture products from the various agencies into a set of equivalent System Architect products.

Next we propose to link the various agency architecture products using a minimal "meta-architecture." This meta-architecture will consist primarily of pointers into agency architecture products, maximizing leverage of the existing products. Then the absolute minimum of additional linkages, information exchanges, and activities may have to be added to complete the multi-agency architecture. We plan to do the additional architecture work in System Architect, as well as devise extensions to it to support the linkage pointers into the component System Architect single agency architecture libraries.
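The "minimal meta-architecture as pointers" idea can be sketched as data. The agency names, product identifiers, and exchange below are invented for illustration only; a real version would live as extensions inside System Architect libraries.

```python
# Hypothetical sketch: a multi-agency layer that stores only references into
# each agency's own architecture library, plus the few added cross-agency
# exchanges needed to complete the multi-agency architecture.

agency_libraries = {
    "FAA": {"OV-5:FlightPlanApproval": {"activities": ["receive", "validate"]}},
    "DoD": {"OV-5:AirspaceControl": {"activities": ["monitor", "deconflict"]}},
}

meta_architecture = {
    "pointers": [
        ("FAA", "OV-5:FlightPlanApproval"),
        ("DoD", "OV-5:AirspaceControl"),
    ],
    # Only the added cross-agency linkage lives at the meta level.
    "added_exchanges": [
        {"from": ("FAA", "OV-5:FlightPlanApproval"),
         "to": ("DoD", "OV-5:AirspaceControl"),
         "information": "approved flight plan"},
    ],
}

def resolve(pointer):
    """Follow a meta-architecture pointer into the owning agency's library."""
    agency, product = pointer
    return agency_libraries[agency][product]

resolved = [resolve(p) for p in meta_architecture["pointers"]]
```

Keeping the meta level down to pointers plus a short list of added exchanges is what maximizes leverage of the existing single-agency products.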

Architecture Analysis and Improvement

At this point we will be able to export architecture products to build process models in an automated fashion, as well as link whatever existing process models already exist. The goal is to establish a dynamic test bed of the multi-agency enterprise, which will support performance benchmarking and quantify the effects of process changes. Typical metrics will include cycle times, queuing delays, the number of completed and missed actions, and a wide range of resource related activity-based costing (ABC) metrics. The specific metrics for a given multi-agency enterprise will of course be driven by the goals and objectives of the enterprise and the enterprise leadership.
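The cycle-time and queuing-delay metrics named above can be demonstrated with a toy process model: a single-server FIFO queue with fixed arrival and service times so the numbers are reproducible. A real test bed would draw times from distributions and add the resource-related ABC metrics.

```python
# Toy benchmarking sketch: one server, FIFO discipline, deterministic times.

def simulate_fifo(arrivals, service_time):
    """Return per-item queue delay and cycle time for a single server."""
    server_free_at = 0.0
    results = []
    for arrive in arrivals:
        start = max(arrive, server_free_at)   # wait if the server is busy
        finish = start + service_time
        server_free_at = finish
        results.append({"queue_delay": start - arrive,
                        "cycle_time": finish - arrive})
    return results

metrics = simulate_fifo(arrivals=[0.0, 1.0, 2.0, 3.0], service_time=1.5)
mean_cycle = sum(m["cycle_time"] for m in metrics) / len(metrics)
```

Even this tiny model shows the benchmarking pattern: delays grow as the arrival rate outpaces the service rate, and a proposed process change can be scored by its effect on the same metrics.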

Process changes that improve overall mission accomplishment or resource efficiency, and pass return on investment tests, will make it into the so-called "to-be" process. The to-be process will in turn lead to a to-be architecture. We will then modify the System Architect meta-architecture and component agency architecture products to reflect the to-be architecture. Finally, we will again use the translator tools in the toolkit to generate agency to-be architecture products in the formats used by each agency. These to-be architectures will then support design, acquisition, and implementation of the new processes and information systems that will support the improved multi-agency enterprise.


  • Guest
Re: Raytheon ADMITS using Ptech to develop DoDAF for GIG & FCS
« Reply #12 on: March 02, 2009, 11:57:22 pm »

Official Raytheon document

See the above full document for diagrams, and a lot more information.  This is an excerpt with the critical damning info:

Commercial, industrial, and military knowledge workers frequently find themselves immersed in data smog, with far more capability to create information than to find and retrieve it when needed. Because of this, huge amounts of amorphous, unstructured data overwhelm us, just when we need pertinent actionable data for informed decisions, e.g., when the decisions are time-critical.

Technologies to help us manage our search and retrieval efforts are discussed below (metadata for data descriptions, taxonomies for data categories, and ontologies for data relationships). Such applications have been driven by commercial needs to identify information on the Semantic Web and to provide web services that deliver the right information to consumers. The value of such technologies to military applications has been recognized by DARPA, which sponsored development and deployment of the DARPA Agent Markup Language (DAML), a machine-processable ontology description language.

Motivation

Problems arise when users and systems employ different terminology for the same concepts and information, or the same terminology for different concepts and information. However, a number of World Wide Web Consortium (W3C) [2] thrusts show promise in improving this situation. Although the original impetus was Web knowledge management and e-Commerce, the resulting benefits clearly can apply to any situation where a common vocabulary and established terms and synonyms are needed. A common language with defined semantics supports more accurate and precise retrieval of information by intelligent software agents.

Definitions

These definitions will help you understand the language that is being used to describe W3C initiatives:

Data – specific instances of an information category. Example: for the category books, a specific instance is “War and Peace”.

Metadata – information that describes other information (or data about data). Example: for the category books, information about the instances includes author, publisher, date of publication, etc.

Taxonomies – categories of entities defined for a particular purpose. The term itself comes from biology, where it is used to define the single location for a species within a complex hierarchy. Example: a taxonomy for publications could include books, dictionaries, thesauruses, journals, magazines, newspapers, etc.

Ontologies – categories of entities and the relations among them. Example: for the taxonomy publications, some relations could be that dictionaries are a subclass of books, thesauruses are a subclass of books, journals are disjoint from magazines, etc. Here the relations are “subclass of" and “disjoint from”.

The Semantic Web – an extension of the current Web where embedded ontology descriptors specify the meaning of the information in a manner suitable for machine processing. The Semantic Web is similar to Hypertext Markup Language (HTML), which is interpreted by Web browsers to format the display of Web content for human use: both are languages interpreted by agents to control the interpretation and processing of content. Current languages are built on Extensible Markup Language (XML); these include Resource Description Framework (RDF) to specify properties, and DAML to specify relations.

Web Services – software that supports interaction among systems on a network. One language that supports such interaction is built on DAML, DAML for Services (DAML-S) [3].
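The publications example running through these definitions can be sketched as data: "subclass of" and "disjoint from" relations over a small taxonomy, with a transitive subclass check. This is an illustration of the concepts only, not DAML or RDF syntax.

```python
# Toy ontology: the taxonomy plus the relations from the definitions above.

subclass_of = {
    "dictionaries": "books",
    "thesauruses": "books",
    "books": "publications",
    "journals": "publications",
    "magazines": "publications",
}
disjoint_from = {("journals", "magazines")}

def is_subclass(child, ancestor):
    """Walk the subclass chain upward (transitive 'subclass of')."""
    while child in subclass_of:
        child = subclass_of[child]
        if child == ancestor:
            return True
    return False

def are_disjoint(a, b):
    """'Disjoint from' is symmetric: check both orderings."""
    return (a, b) in disjoint_from or (b, a) in disjoint_from
```

An agent holding these relations can answer queries a flat keyword index cannot, e.g., that a search over "publications" should also return dictionaries.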

Ontologies Applications

Technology Taxonomy – An enterprise project currently underway will define a taxonomy to organize technology information. The objectives are to make it easier to find specific information and identify subject matter experts (SMEs). A taxonomy of technologies at Raytheon is being defined by analyzing the capabilities and restrictions of commonly used technology categorization schemes, notably the Defense Technical Information Center (DTIC) taxonomy, the UK Ministry of Defence (MOD) Taxonomy, and the ACM Computing Classification System. This effort will define a controlled vocabulary and create a thesaurus containing synonyms, to ensure standardized terminology. Technical papers and SME biographies will be marked up using a set of metadata elements to identify title, subject, description, creator, date, and other relevant publication characteristics. A commonly adopted metadata definition for documents is the Dublin Core [4], which is an option under evaluation.
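The markup-for-retrieval idea can be sketched in a few lines: documents tagged with Dublin Core-style elements (title, subject, creator, date), then queried by subject to surface likely SMEs. The document titles, names, and subjects below are invented for illustration.

```python
# Hypothetical sketch: metadata-tagged papers and a subject-based SME lookup.

documents = [
    {"title": "Radar Waveform Design Notes", "subject": "radar",
     "creator": "J. Doe", "date": "2003-05-01"},
    {"title": "Ontology Primer", "subject": "knowledge management",
     "creator": "A. Smith", "date": "2003-11-12"},
    {"title": "Phased Array Tutorial", "subject": "radar",
     "creator": "J. Doe", "date": "2002-01-20"},
]

def experts_for(subject):
    """Return creators who have written on the given subject, deduplicated."""
    return sorted({d["creator"] for d in documents if d["subject"] == subject})

radar_smes = experts_for("radar")
```

A controlled vocabulary matters here: without it, "radar" and "RF sensing" would index the same expertise under different terms and the lookup would silently miss people.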

Reference Architecture – A reference architecture models, evaluates, compares, and improves the implementation and performance of operational concepts and system designs to support Network Centric Operations (NCO). An established set of ontologies ensures that common concept definitions for architecture elements and information are used in the modeling and simulation predictions.

Military Application – Military information users must make life-critical decisions based on large amounts of time-sensitive, rapidly changing inputs from multiple sources.

The Common Operating Picture (COP) is a distributed database, currently packed with disparate and sometimes incompatible data. In the future, it will be generated by human operators and software agents marking up information from sensors or sources in accordance with (IAW) military standardized ontologies.

Figure 2 shows how the marked-up sensor and source information is stored in the Information Grid and retrieved by subscribers. [See the Document at top]

The Common Relevant Operating Picture (CROP) is obtained by consumers (humans or software agents) subscribing to relevant information specified IAW the same ontologies used to create the COP; information irrelevant to the consumer’s context is suppressed. The ontologies are stored and maintained in the Information Grid, ensuring identical specifications are accessible to both producers and consumers.
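The COP/CROP relationship described above can be sketched as a publish-and-subscribe filter: producers mark up reports against a shared vocabulary, and each consumer's subscription suppresses everything outside its declared context. The topic names, regions, and payloads are illustrative only.

```python
# Minimal sketch: derive a consumer's CROP from the shared COP by filtering
# on subscription terms drawn from the same vocabulary producers used.

cop = [  # the Common Operating Picture: everything published
    {"topic": "air_track", "region": "north", "payload": "track 101"},
    {"topic": "air_track", "region": "south", "payload": "track 102"},
    {"topic": "logistics", "region": "north", "payload": "fuel state"},
]

def crop(subscription):
    """Common Relevant Operating Picture: only records in the consumer's context."""
    return [r for r in cop
            if r["topic"] in subscription["topics"]
            and r["region"] in subscription["regions"]]

north_air_consumer = {"topics": {"air_track"}, "regions": {"north"}}
relevant = crop(north_air_consumer)
```

The filtering only works because producer and consumer draw from one shared vocabulary, which is why the text stresses that the ontologies live in the Information Grid where both sides can reach identical specifications.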

Summary

Employing a single, consistently applied meaning (semantics) for concepts, categories, and relationships reduces confusion, misinterpretation, and mistakes. This approach also reduces cognitive overload by supplying users with information that is relevant to their location, situation, and responsibilities. Taxonomies and ontologies support both improvements.

1 – The DARPA Agent Markup Language Homepage
2 – World Wide Web Consortium (W3C)
3 – DAML Services
4 – Dublin Core Metadata Initiative
5 – A Customer Focused Process for Reference Architecture Development, previous article in this Technology Today issue

Raytheon recognizes the importance of mastering emerging engineering methods and tools to successfully perform its role as Mission Systems Integrator on large complex programs. These methods and tools support holistic systems thinking and promote engineering best practices for implementing System of Systems solutions. The capabilities necessary to excel as a Mission System Integrator fall into the following major categories (with modeling and simulation integrated across all areas):

•   Mission Analysis & Architecture
•   Performance Analysis & Prediction
•   System Design & Specification
•   Specialty Engineering
•   Verification Analysis & Execution

Activities conducted in each of these areas rely on specialized subject matter expertise, information management and engineering processes and tools. Each capability must be developed with an eye on the Integrated System Model (Figure 1) to support continuous improvement and accommodate growth as a world-class Mission System Integrator.

The activities shown in Figure 1 may be performed independently when working on small programs, resulting in islands of analysis, where there is limited sharing of culture and information between models. Even if each “island” represented a world class capability, maximum benefit could not be achieved without a higher level of integration. The need for Integrated System Models increases as the scope and complexity of the system or Systems of Systems (SoS) solution increases. This article describes efforts currently underway within Raytheon to establish the framework by which this integrated model can leverage Mission Systems Integration (MSI) techniques for maximum stakeholder benefit.

Mission Analysis & Architecture
The Raytheon Enterprise Architecture Process (REAP) uses the Department of Defense Architecture Framework (DoDAF) to develop, present and integrate architecture descriptions. DoDAF products are used to define the operational domains, rules and constraints by which all systems in the domain operate to perform specified missions. DoDAF products set the mission context, define required activities, identify participating elements and manage pertinent mission information. Figure 2 uses a color-coded number to place each product into one of four DoDAF views: All, Operational, System, or Technical Standard.

Icons represent the type of data each product documents, and lines between products show the relationships between objects modeled in each product. These products also contain information related to the other MSI models. Table 1 provides a mapping between DoDAF and these other models.

DoDAF tools such as Popkin System Architect (SA) provide editors for each product; an integrated dictionary manages the inter-relationships between them. Application and use of DoDAF and REAP are growing within Raytheon through individual efforts and the efforts of the Architecture Technical Interest Group (TIG) sponsored by the Systems Engineering Technology Network (SETN). The DD(X) program, along with other programs in Garland, uses Popkin SA for CAF/DoDAF architecture development, linking their architectural elements to requirements in DOORS. DD(X) has extended the Popkin SA meta-model to accommodate specific interface definition and specification tree structures.

Raytheon has also used the Ptech Enterprise tool to develop Zachman and DoDAF views. Ptech’s concordant knowledge base was used to create the AV-2 and the OV-3 for a portion of the Military Information Architecture Accelerator (MIAA). Raytheon used the Ptech software to publish a CD, allowing the customer to examine the architecture through a web browser and also exported the OV-5 activity model as code for a Colored Petri Net simulation.

Other programs use the Extend modeling tool to run system performance analysis, helping them better understand and derive complex system requirements. Extend has also helped Missile Systems model Netted Weapons Systems architectures, using simulation to measure the "power" of alternate IR&D and NetCentric approaches. Network Centric Systems has built elaborate Extend models used to test out SoS architectures, building executable Concepts of Operations.

As the DoDAF products mature, information can be flowed down to the system analysis and design models for additional analysis and refinement at the system level. Table 1 shows how the DoDAF products can provide useful information to the Performance Analysis, System Design & Specification, and Specialty Engineering capabilities. The following paragraphs define each of these advanced capabilities in more detail.

System Design and Specification

Raytheon has considerable experience applying sophisticated system modeling tools like Foresight, RDD-100, CORE and StateMate to solve complex system design problems. These tools are used to develop static and dynamic holistic models of the system, as represented in Figure 3. By carefully controlling the level of abstraction of the system’s internal components and interfaces, over-specification and unnecessary design detail can be avoided. Raytheon continues to gain experience in using automation to extract specifications from well-structured system models.

This powerful capability has provided unprecedented insight into the system design as it matures. By checking requirements traceability, functional allocation, interface completeness, and dynamic execution of the system model, Raytheon has shown that the completeness and consistency of the resulting specification can be assured. In addition, a well structured system model provides a flexible framework for relating performance budgets to system components, identifying detailed analysis needs and relating analysis results back to the overall system context. The analytical techniques that are used have included closed form expressions, discrete event models and network analysis.

The Unified Modeling Language (UML), a popular software modeling language, is now showing promise as a systems modeling language. Raytheon is an active participant in SysML Partners, currently defining conventions for the use of UML 2.0 in Systems Engineering. SysML emphasizes a flexible component-based, model-driven approach to system specification, promising the encapsulation and reuse of Object Oriented techniques as well as the strong interface management necessary for successful MSI. SysML supports allocation of function to form and facilitates system trade-off and technology upgrade analysis.


Use Cases, a methodology used to identify, clarify and organize system requirements, promotes good systems engineering practices by segregating analysis and design. On large complex systems, it is easy to get carried away and produce “Use Case Explosions,” decomposing use cases as if they were functional hierarchies. Raytheon is building on successes in applying Use Case methods to requirements development. Figure 5 shows a proven method for parsing Mission Level Use Cases into a minimal set of Design Level Use Cases.

The System Holistic Architecture and Requirements Process (SHARP) pilot program initiated by Raytheon IDS in 2003 has already provided initial standardized guidelines for complex system specification using Use Cases. This pilot will continue into 2004, leveraging Raytheon's strategic alliance with IBM/Rational Software, and capitalizing on the Rational Unified Process for Systems Engineering (RUP SE). Building on this work, the Raytheon SEC has also provided funding to incorporate Object-Oriented model-driven system design practices and enablers into IPDS in 2004.

Performance Analysis

After enjoying initial successes implementing Design-for-Six Sigma (DFSS) on programs, Raytheon is now moving into a broader deployment. DFSS helps ensure that the system meets our customer’s needs, supports design optimization through Critical Parameter Management techniques, and guides product development using statistically enabled engineering methods and metrics. One tool currently being evaluated within Raytheon is the Six Sigma Cockpit (SSC) by Cognition, with general capabilities shown in Figure 6.

SSC provides a framework for requirements development through Quality Function Deployment, enabling capture of the Voice of the Customer and derivation of requirements using a tiered House-of-Quality approach. Transfer functions implemented within the tool model the relationships between all Critical-to-Quality parameters (CTQs) and associated input parameters. A series of scorecards permit tracking all of the CTQs and input parameters, taking into consideration the variability of each input and defined transfer function. Interfaces to external statistical design tools allow for leveraging data and analysis performed outside of the Cockpit.
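The transfer-function-and-scorecard idea can be illustrated with a small Monte Carlo sketch: a transfer function relates input parameters to a Critical-to-Quality output, and input variability is propagated to estimate the fraction of outcomes inside a spec limit. The transfer function, distributions, and spec limit below are invented, not drawn from the SSC tool.

```python
# Illustrative CTQ scorecard: propagate input variability through a
# hypothetical transfer function and estimate in-spec yield by Monte Carlo.

import random

random.seed(42)  # reproducible sketch

def transfer_function(gain, offset):
    """Hypothetical CTQ model: output = gain * nominal_input + offset."""
    return gain * 10.0 + offset

def ctq_yield(n_trials, spec_upper):
    """Fraction of simulated outcomes at or below the upper spec limit."""
    in_spec = 0
    for _ in range(n_trials):
        gain = random.gauss(1.0, 0.05)    # input parameter variability
        offset = random.gauss(0.0, 0.5)
        if transfer_function(gain, offset) <= spec_upper:
            in_spec += 1
    return in_spec / n_trials

estimated_yield = ctq_yield(n_trials=10_000, spec_upper=11.5)
```

This is the core of Critical Parameter Management: the scorecard tracks not just nominal CTQ values but how input variability erodes the margin to each limit.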

Specialty Engineering

Raytheon Specialty Engineering is evolving to support Mission Systems Integration. Our customers are mandating order- of-magnitude system reliability improvements to meet extremely aggressive SoS cost and availability goals. Methods and tools have undergone a transformation from a point estimation focus to a model-driven simulation approach that comprehends probabilistic distributional variability. SoS mission success and performance objectives are being improved through simulation-based Specialty Engineering Measures of Effectiveness and trades analysis.

[INSERT:  The above was one of the motives to carry out the 9/11 black op, to help them fast-track the above performance objectives, they needed massive funding for this--9/11 secured that for them.]

The U.S. Army Materiel Systems Analysis Activity is a key advocate of implementing “Ultra-Reliability” requirements in many SoS proposals. Raytheon is addressing these challenges by implementing design process and tool changes from the device level through the system level. During system requirements analysis, specialty engineering personnel now take a more active role in determining and characterizing the end user’s environment and mission profiles.

Raytheon is working with the University of Maryland's Computer Aided Life Cycle Engineering (CALCE) Electronic Products & Systems Center to better understand the impacts that temperature, vibration and shock have on the performance and expected life of the product over its entire life cycle (including production, shipping, storage and mission execution). Tools such as CALCE PWA allow the Specialty Engineer to utilize use-case and environmental characterization data to perform Physics of Failure-based analysis to calculate fatigue life, accounting for probabilistic considerations. Resulting design improvements in packaging and mounting/supports have proven to drastically reduce stresses and board displacements, thereby significantly increasing reliability.

Specialty engineers are now able to perform engineering analysis based on a thorough understanding of failure mechanism modeling and simulation. Raytheon has successfully used tools such as Clockwork Solutions' SPAR on the DD(X) program to model and simulate system performance for a given scenario of design, operations and maintenance parameters. By leveraging system configuration data, including statistical performance properties of the major subsystems, the system's behavior can be simulated through time-dependent Monte Carlo techniques, identifying the most appropriate configuration, operating doctrine and maintenance practices to achieve performance and cost objectives.
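A stripped-down version of time-dependent Monte Carlo reliability simulation looks like the sketch below: component lifetimes drawn from an exponential distribution, a fixed repair time, and availability estimated over a mission window. The failure rate, repair time, and mission length are invented, not DD(X) parameters.

```python
# Hypothetical sketch: Monte Carlo estimate of mission availability for a
# single repairable item with exponentially distributed times to failure.

import random

random.seed(7)  # reproducible sketch

def simulate_availability(mttf, repair_time, mission_length, n_runs):
    """Fraction of mission time the system is up, averaged over runs."""
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < mission_length:
            life = random.expovariate(1.0 / mttf)   # time to next failure
            up += min(life, mission_length - t)     # credit uptime inside mission
            t += life + repair_time                 # system down during repair
        total_up += up / mission_length
    return total_up / n_runs

availability = simulate_availability(
    mttf=500.0, repair_time=10.0, mission_length=5000.0, n_runs=2000)
```

The estimate converges toward the familiar steady-state value MTTF / (MTTF + repair time), and the same machinery extends naturally to comparing maintenance doctrines or configurations by their effect on availability.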

Achieving the Ultra-Reliability performance required in complex SoS architectures, such as the U.S. Army's Future Combat Systems, requires a comprehensive understanding of the health of all components. Raytheon is utilizing tools such as DSI's eXpress to help assess, derive and implement an integrated diagnostics and health management strategy. Specialty engineers use eXpress's hierarchical functional dependency modeling to enable the flow down of test and diagnostics requirements for the optimized balance between embedded, runtime diagnostics and ground-based maintenance diagnostics.

Verification Analysis and Execution

Verification analysis starts during the requirements analysis phase and continues through product development, integration and test phases.  Verification tools must a) have extensible schemas to support program tailoring; b) provide traceability back to the system requirements; and c) be able to interface well with our other engineering design tools. Most Raytheon programs use DOORS for Requirements Management. DOORS supports the extensibility, traceability and interface requirements necessary to adequately implement the verification process. NCS has recently implemented a DOORS Standard Project Image which is available for use on all new program starts. It includes a set of templates useful for verification planning and execution.
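The traceability property the text asks of verification tools can be sketched in a few lines: every system requirement must map to at least one verification event, and orphans are reported. The requirement IDs and verification methods below are invented, not an actual DOORS schema.

```python
# Hypothetical sketch: requirements-to-verification traceability check.

requirements = ["SYS-001", "SYS-002", "SYS-003"]

verification_events = [
    {"req": "SYS-001", "method": "test"},
    {"req": "SYS-002", "method": "analysis"},
    {"req": "SYS-002", "method": "demonstration"},
]

def untraced(reqs, events):
    """Requirements with no verification event tracing back to them."""
    covered = {e["req"] for e in events}
    return [r for r in reqs if r not in covered]

orphans = untraced(requirements, verification_events)
```

Running a check like this continuously, from requirements analysis through integration and test, is what makes the extensible schema and traceability features of the chosen tool pay off.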


  • Guest
Re: Raytheon ADMITS using Ptech to develop DoDAF for GIG & FCS
« Reply #13 on: March 03, 2009, 01:04:12 am »

Offline lordssyndicate

  • Member
  • *****
  • Posts: 1,141
  • Stop The New World Order
    • LinkedIn Profile
When coupled with the thread you have on the NIAC document, we have conclusive proof, as well as links to all the key players involved in this.

There is no denying exactly who was responsible for the attacks on 9/11: they put their names on the documents and patents, and planned all of the drills...

We now have solid, court-admissible evidence that 9/11 was an inside job, implicating at least all of the people mentioned in this thread, as well as NIAC and others of similar note.

Also, by their very design, these projects required prior knowledge of what these people (demons) were designing, building, and testing this system for.

If you delve deeper into these people's pasts, you find all of them have direct ties to many NWO Bilderberg / CFR / Trilateral shills, if not being exactly such themselves, time and again proving themselves traitorous vipers.

"Biotechnology it's not so bad. It's just like all technologies it's in the wrong HANDS!"- Sepultura


  • Guest
Re: 9/11- Schlesinger | MITRE | Hanscom ESC | CAASD | GIG Infrastructure
« Reply #15 on: May 07, 2009, 05:10:22 pm »

MITRE also has a headquarters in McLean, Va., on a campus it shares with Northrop Grumman.

The MITRE Corp. is a major defense contracting organization headed by the former Director of Central Intelligence (DCI), Dr. James Rodney Schlesinger. Schlesinger, who was reportedly made DCI at the request of Henry Kissinger in 1973, later served as Secretary of Defense.

Schlesinger, a former director of strategic studies at the RAND Corp., was described in a 1973 biography as a "devout Lutheran," although he was born in New York in 1929 to immigrant Jewish parents from Austria and Russia. Schlesinger earned three degrees from Harvard University.

Schlesinger's father, an accountant, founded the accounting firm Schlesinger & Haas, and was a trustee and chairman of the budget of the Stephen Wise Free Synagogue. His father was also a member of the New York State Grand Lodge of Masons.

The MITRE Corp., of which Schlesinger is chairman of the board of trustees, is connected to the Massachusetts Institute of Technology (MIT), MIT's Lincoln Laboratory, and Mitretek Systems of Falls Church, Va.

Schlesinger is a senior advisor for the Lehman Brothers investment firm and a member of the Defense Policy Board and advisory council for the Department of Homeland Security (DHS).

The MITRE Corp. has provided computer and information technology to the FAA and the U.S. Air Force since the late 1950s. MITRE is a Federally Funded Research and Development Center (FFRDC) for the Dept. of Defense, the FAA, and the Internal Revenue Service.

The chairman of the board of trustees of Mitretek Systems, a spin-off of MITRE Corp., is Martin R. Hoffmann, who served as Secretary of the Army when the "perfect terrorist plan" was reportedly prepared in 1976.

MITRE's Command, Control, Communications, and Intelligence (C3I) FFRDC for the Dept. of Defense was established in 1958. The C3I "supports a broad and diverse set of sponsors within the Department of Defense and the Intelligence Community. These include the military departments, defense and intelligence agencies, the combatant commands, and elements of both the Office of the Secretary of Defense and the office of the Joint Chiefs of Staff," according to MITRE's website. "Information systems technology," it says, "coupled with domain knowledge, underpin the work of the C3I FFRDC."

The U.S. Air Force maintains its Electronic Systems Center (ESC) at the Hanscom AFB in Bedford, Mass. The ESC manages the development and acquisition of electronic command and control (C2) systems used by the Air Force.

The ESC is the Air Force's "brain for information, command and control systems," according to Charles Paone, a civilian employee of the ESC. It is the "product center" for the Air Force's Airborne Warning and Control System (AWACS) and Joint Surveillance Target Attack Radar System (J-STARS), Paone said.

Asked about MITRE's role at the ESC, Paone said, "MITRE does the front-end engineering. It's basically our in-house engineer." MITRE employees operate the computer systems at Hanscom AFB, Paone said.

MIT's Lincoln Laboratory, the parent of MITRE, is located at Hanscom AFB.

A second FFRDC, the Center for Advanced Aviation System Development (CAASD), provides computer engineering and technology to the FAA. MITRE's support of the FAA began in 1958, when the company was created.

The FAA's Airspace Management Handbook of May 2004, for example, was written and published by the MITRE Corp.

Posted by plunger at 10/04/2006 @ 06:07am


  • Guest
Re: 9/11- Schlesinger | MITRE | Hanscom ESC | CAASD | GIG Infrastructure
« Reply #16 on: May 07, 2009, 10:03:51 pm »
I got a pop-up message: "To help protect your security, IE has blocked this website from displaying content with security certificate errors. Click for options."



Offline Satyagraha

  • Global Moderator
  • Member
  • *****
  • Posts: 8,941
Re: 9/11- Schlesinger | MITRE | Hanscom ESC | CAASD | GIG Infrastructure
« Reply #18 on: October 14, 2009, 10:09:29 am »

Boston to Pilot Solar-Powered Evacuation System
Oct 13, 2009, News Report

Boston -- one of the U.S. Department of Energy's (DOE) Solar America Cities Special Projects -- has been awarded a DOE grant to establish a solar evacuation route, according to a release from Mayor Thomas M. Menino's Office. The $1,343,020 ARRA grant will allow the city to create a pilot solar evacuation route that will feature a backup photovoltaic (PV) system, as well as solar-powered traffic control and monitoring equipment, lighting and emergency radio repeaters. Grant funds will be used to make the Washington Street evacuation route operable on solar energy in the case of an emergency, such as a blackout or a natural disaster. Street lights, traffic lights, video cameras, message boards and gas pumps along the evacuation route will be solar powered.

And the King shall answer and say unto them, Verily I say unto you,
Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me.

Matthew 25:40

Mike Philbin

  • Guest
Damn straight!

Now, let's see some action in court!



When coupled with the thread you have on the NIAC document, we have conclusive proof as well as links to all the key players involved in this.

There is no denying exactly who was responsible for the attacks on 9/11: they put their names on the documents and patents, and planned all of the drills...

We now have solid, court-admissible evidence that 9/11 was an inside job, as well as evidence of the involvement of at least all of the people mentioned in this thread, plus NIAC and others of similar note.

Also, by their very design these projects required prior knowledge of what these people (demons) were designing, building, and testing this system for.

If you delve deeper into these people's pasts, you find that all of them have direct ties to NWO Bilderberg / CFR / Trilateral shills, if they are not exactly such themselves, time and again proving themselves traitorous vipers.

Offline squarepusher

  • Member
  • *****
  • Posts: 2,013
Re: 9/11- Schlesinger | MITRE | Hanscom ESC | CAASD | GIG Infrastructure
« Reply #20 on: February 23, 2010, 11:32:20 am »
Just in case someone wanted to read the document/press release/white paper by IBM entitled 'IBM Enterprise Architecture Method enabled through Ptech FrameWork', here it is:

IBM Enterprise Architecture Method enabled through Ptech FrameWork

Infowars Wiki - Help make this become the official wiki of - contribute!

Offline birther truther tenther

  • Member
  • *****
  • Posts: 2,726
  • Against all forms of tyranny
Re: 9/11- Schlesinger | MITRE | Hanscom ESC | CAASD | GIG Infrastructure
« Reply #21 on: October 14, 2010, 10:04:58 pm »
160 Federal Street
Boston, MA 02110
Phone: (800) 747-5608
Hussein Ibrahim
AF 00-116
Title: Advanced C2 Process Modeling and Requirements Analysis Technology
Abstract: This effort will demonstrate the ability to develop an innovative C2 investment decision support system. The objective system springs from a completely original conceptualization of the problem. It will support "product and process modeling of integrated operational and system architectures" and will produce results that can be used within the Air Force spiral development process, C2 management philosophy, and PPBS. This system will improve access to mathematically rigorous, token-based architecture by orders of magnitude. Ptech and George Mason University's System Architectures Laboratory will integrate object-oriented C2 architecture modeling and a Discrete Event System model to construct a software system that can:
- Synthesize Colored Petri Nets from a set of object-oriented products. We will develop and employ a file-based interface between Ptech's FrameWork modeling environment and Design/CPN.
- Verify the logical soundness and behaviors of architectures by executing the models and using token-based, state space, and behavioral analysis techniques against an agreed set of measures.
- Report results in a variety of agreed graphical and textual formats.
This Phase I effort includes early proof-of-concept demonstrations to enable the Team to gain favorable position for development funding or approval for FAST TRACK funding.
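For readers unfamiliar with the jargon, the core idea in the abstract (executing a token-based Petri net model and enumerating its state space to check an architecture's behavior) can be sketched in a few lines. This is a toy illustration only, not Ptech FrameWork or Design/CPN code; the net, place names, and the "C2 tasking" example are all hypothetical.

```python
from collections import Counter

class ColoredPetriNet:
    """Minimal colored Petri net: a marking is a Counter of
    (place, color) tokens; each transition lists the tokens it
    consumes and the tokens it produces."""

    def __init__(self, transitions):
        # transitions: {name: (consumed, produced)}, each a list
        # of (place, color) pairs
        self.transitions = {
            name: (Counter(cons), Counter(prod))
            for name, (cons, prod) in transitions.items()
        }

    def enabled(self, marking, name):
        cons, _ = self.transitions[name]
        return all(marking[tok] >= n for tok, n in cons.items())

    def fire(self, marking, name):
        cons, prod = self.transitions[name]
        assert self.enabled(marking, name), f"{name} not enabled"
        return marking - cons + prod  # Counter math drops zeros

    def reachable(self, initial):
        """Enumerate every reachable marking -- the 'token-based,
        state space' analysis the abstract refers to."""
        seen, frontier = set(), [Counter(initial)]
        while frontier:
            m = frontier.pop()
            key = frozenset(m.items())
            if key in seen:
                continue
            seen.add(key)
            for name in self.transitions:
                if self.enabled(m, name):
                    frontier.append(self.fire(m, name))
        return seen

# Hypothetical C2 tasking flow: a request is approved, then executed.
net = ColoredPetriNet({
    "approve": ([("inbox", "request")], [("queue", "tasking")]),
    "execute": ([("queue", "tasking")], [("done", "report")]),
})
states = net.reachable([("inbox", "request")])
print(len(states))  # 3 markings: request -> tasking -> report
```

Real CPN tools add typed color sets, arc expressions, and guards on transitions; the point here is only that "verifying logical soundness by executing the model" means exhaustively firing transitions and inspecting the resulting markings for deadlocks or unreachable states.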

I think I just s**ted out a brick reading that.