Implementing SMS

Information Sharing & Data Management & Taxonomies

An SMS comprises a vast amount of data, and most of its processes refer to data collection, data acquisition and data extrapolation. Data is therefore a critical component of the system. For an effective SMS, data must be right-sized: the undue burden of over-collection can disrupt operations, create confusion and produce a lost-cause situation, where there are so many data that they become too hard to analyse. Before starting data collection, one should ask: “Do we have the data we need?”, then “What data do we need?” and then “How do we get the data we need?”. As an SMS is a live system that will evolve and mature over time, the essential information and data will change as it progresses. Data management is therefore an ongoing process throughout the life of the system.

It is important to remember that data gathering is not always as simple as it sounds, as much of the data is outside the control of the organisation. Much of the required information will come from third-party providers and other organisations, yet a mature SMS requires both inside and outside data. This is where information sharing plays a vital role in the industry. No organisation can individually access all the required data, so collective groups of organisations share data. On top of that, there are the data collected by the authorities. Information and data can be gathered from industry-specific fora, information-sharing programmes, alliances, meetings, reports from authorities, etc. The key in data gathering is to have a standardised framework and similar target datasets.

The principles of data management aim to create a growing database of reactive and proactive data for the generation of reliable, valid, reproducible, stable and traceable results. The following six principles will guide the successful creation of data sources with these features.

1. Data source management

Two requirements will result in an established and continuously contributing source of information. The first is source assessment, which identifies how the data will be used and which gaps they fill, and assesses and defines the requirements for the inputs, processing and outputs of the data. The second is data stewardship, which defines the responsibilities and accountabilities for managing the source: who has overall oversight of the data, how the data are protected, which methods of distribution are used, and which mechanisms support the development and coordination of the data.
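As a minimal sketch of how both requirements could be recorded side by side, the following register entry captures source assessment and stewardship for a data source. All field names and the example values are illustrative assumptions, not drawn from any SMS standard:

```python
from dataclasses import dataclass

@dataclass
class DataSourceRecord:
    """Hypothetical register entry for a managed SMS data source."""
    name: str
    purpose: str        # source assessment: which gap the data fill
    inputs: list        # required input feeds
    outputs: list       # products generated from the source
    steward: str        # data stewardship: accountable person/role
    protection: str     # how the data are protected
    distribution: str   # approved distribution method

# Illustrative entry for a flight data monitoring (FDM) source
flight_data = DataSourceRecord(
    name="FDM programme",
    purpose="Proactive monitoring of flight parameter exceedances",
    inputs=["QAR downloads"],
    outputs=["Monthly exceedance report"],
    steward="Safety Manager",
    protection="De-identified, access-controlled",
    distribution="Safety Review Board only",
)
```

Keeping assessment and stewardship fields in one record makes the oversight, protection and distribution decisions explicit for every source in the register.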

2. Data model and taxonomies Standards

Through taxonomies and standards, we aim to maximise the utility of the data source and provide the capability of fusion with other data sources. Taxonomies are a set of concepts established to provide a frame of reference. They create the effectiveness required for analysing the data and for retrieving information. For correct taxonomies, we first define each category through a data dictionary; we then create the groups and sub-groups, which must be comprehensive, mutually exclusive and clearly defined. Taxonomies can be custom-built or adopted from other sources, according to their use, purpose, scope and extent of application. A successful system is easy to use and complies with industry standards, allowing easy integration and fusion of data. The aviation industry has several taxonomy standards. The most common is the CAST/ICAO Common Taxonomy Team (CICTT) standard, which is updated every quarter. Some of the categories included are: aircraft make/model/series, engine make/model, phases of flight, occurrence categories, human factors, aerodrome, etc.
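The data-dictionary approach can be sketched as follows. The category names loosely echo CICTT categories, but the structure, definitions and value lists here are illustrative assumptions, not the actual standard:

```python
# Hypothetical mini-taxonomy: each category carries a dictionary
# definition and a closed list of clearly defined values.
taxonomy = {
    "phase_of_flight": {
        "definition": "Period of operation during which the occurrence happened",
        "values": ["Taxi", "Take-off", "Climb", "Cruise", "Approach", "Landing"],
    },
    "occurrence_category": {
        "definition": "Top-level classification of the occurrence",
        "values": ["Runway excursion", "Loss of control in flight", "Bird strike"],
    },
}

def classify(category, value):
    """Accept only values defined in the data dictionary (exclusive, defined)."""
    if value not in taxonomy[category]["values"]:
        raise ValueError(f"'{value}' is not a defined {category}")
    return (category, value)
```

Rejecting undefined values at entry is what keeps the groups mutually exclusive and the later retrieval and analysis consistent.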

Some other aviation related taxonomies are:

  • UT ASAP Pilot Reporting
  • Aviation Safety Reporting System (ASRS) Anomaly Codes
  • BA Safety Information System (BASIS)
  • ICAO Accident/Incident Data Reporting (ADREP) 2000
  • Human Factors Analysis and Classification System (HFACS)
  • Aviation Causal Contributors for Event Reporting System (ACCERS)
  • Threat and Error Management (TEM)

3. Reliability Framework

Data sources must be supported by a multi-tiered, configuration-managed data acquisition and processing system. This provides a reliable and flexible architecture for task support, through which we achieve the capturing, mapping, transcribing, loading, transformation and expansion of data. It also improves the quality of our monitoring.
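A tiered acquisition-and-processing chain of this kind might be sketched as below. The stage names mirror the text, but the implementation is purely illustrative:

```python
def capture(raw):
    """Tier 1: capture the raw report text."""
    return raw.strip()

def map_fields(text):
    """Tier 2: map the captured text into named fields."""
    return {"narrative": text}

def transform(record):
    """Tier 3: transform/expand the record with derived fields."""
    record["word_count"] = len(record["narrative"].split())
    return record

def load(record, store):
    """Tier 4: load the processed record into the managed store."""
    store.append(record)
    return store

store = []
load(transform(map_fields(capture("  Bird strike on final approach  "))), store)
```

Because each tier is a separate, configuration-managed step, an individual stage can be changed or monitored without disturbing the rest of the chain.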

4. Data Quality

The data must be fit to support their intended uses; they need to be applicable to the purpose. The data must be quality-checked in terms of validity, consistency, accessibility, completeness, accuracy, timeliness and security. Data quality needs to be an ongoing process with continuous monitoring.
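A few of those checks can be sketched as an automated screen over incoming records. Which checks an SMS actually runs, the required fields and the 30-day timeliness threshold are all assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical required fields for an occurrence record
REQUIRED = {"date", "flight_phase", "narrative"}

def quality_check(record, today=None):
    """Return a list of quality issues found in the record."""
    today = today or date.today()
    issues = []
    missing = REQUIRED - record.keys()
    if missing:                                       # completeness
        issues.append(f"completeness: missing {sorted(missing)}")
    if "date" in record and record["date"] > today:   # validity
        issues.append("validity: date in the future")
    if "date" in record and today - record["date"] > timedelta(days=30):
        issues.append("timeliness: report older than 30 days")
    return issues
```

Running such checks on every record as it arrives, rather than in one-off audits, is what turns data quality into the continuous monitoring process described above.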

5. Data Fusion

Fusing data sources should be an ideal goal, as it reduces our exposure to bad-quality data and harmonises our analysis process. Fusion is a very sensitive process, and data sources must be closely monitored before fusion begins. Fusion can take the form of expanding or enhancing one source using data from a different source, or of two different sources being fused into a new one.
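The first form, enhancing one source with fields from another, might look like the sketch below. The join key and field names are assumptions for illustration:

```python
def enhance(primary, secondary, key):
    """Expand records in `primary` with extra fields looked up in `secondary`."""
    lookup = {rec[key]: rec for rec in secondary}
    return [{**rec, **lookup.get(rec[key], {})} for rec in primary]

# Occurrence reports enhanced with fleet data, joined on tail number
occurrences = [{"tail": "SX-ABC", "event": "Hard landing"}]
fleet = [{"tail": "SX-ABC", "type": "A320"}]
fused = enhance(occurrences, fleet, key="tail")
```

Note that a fused record is only as good as the join key shared by the two sources, which is one reason the sources must be closely monitored (and their taxonomies aligned) before fusion begins.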

6. Configuration Management

Configuration management is defined as the establishment and maintenance of consistency for the information source. It is a form of data quality assurance that provides reproducible and traceable results.