INFORMATION ASSURANCE AND CORPORATE STRATEGY 109
statement from the above category, for example, was “Including IA metrics in general IT reports”. These statements were then combined and used for the second stage of the research—the Delphi study.
The first step in the Delphi procedure is to choose an expert panel (Brancheau et al., 1996; Larreche & Montgomery, 1977; Malhotra, Steele, & Grover, 1994). This is a particularly important step because it is the panel that lends content validity to the task (Anderson et al., 1994). Preble’s (1984) research has found that there is little difference between a panel of members chosen from a single organization and a panel of experts chosen from multiple organizations. The latter, however, provides a greater range of views and helps improve the generalizability of the results (Nambisan, Agarwal, & Tanniru, 1999; Okoli & Pawlowski, 2004).
We selected the second method and chose two different types of panelists. The first type included senior managers who are prominent members of the information security community (Mitchell & McGoldrick, 1994). Each has at least five years of practical experience within the IA field and is renowned for competence in this area. The second type of panelists comprised academics who have expertise in information assurance (Guimaraes, Borges-Andrade, Machado, & Vargas, 2001; Okoli & Pawlowski, 2004). This provided a wider knowledge base and a greater range of experience. There were 36 members in the panel (see Appendix B for more information on the participants).
The Delphi approach started with two preliminary rounds (Schmidt et al., 2001). The initial stage involved generating the concepts that would be evaluated in later rounds. In some research studies these have been supplied for the panel as a starting point for idea generation (Anderson et al., 1994; Guimaraes et al., 2001; Nambisan et al., 1999; Saunders & Jones, 1992), while in others the panel commences with a completely blank sheet of paper (Okoli & Pawlowski, 2004; Schmidt et al., 2001; Schmidt, 1997). We preferred to follow the example of the former studies and used the results from our interviews to provide a list of factors that influence information assurance alignment. The panel members were free to amend or comment upon these ideas as well as generate their own concepts. The comments produced by the panel in each round were always fed back to the participants in the next round (Schmidt, 1997). This provided them with qualitative information on the thoughts, ideas and questions raised by other panel members. In addition, many panelists developed a rationale for why certain statements were important—or less important—to them, and this was presented anonymously to the rest of the panel in subsequent rounds. This helped the group to better understand the concepts and encouraged a form of nominal group debate (Malhotra et al., 1994).
Once the ideas had been collected and consolidated, the terminology was clarified and exact duplicates were removed. The resulting list was then sent back to the panel members for the second preliminary round. The objective here was to reduce the number of concepts to a manageable list. We achieved this by asking the panel to rate the concepts in terms of desirability and feasibility on a scale of one to six. The aggregate mean for each concept was calculated for the desirability score, and those with a very low mean—that is, those that were deemed to be undesirable—were either refined for clarity or removed. The resulting list—which consisted of 29 statements—was then sent back to the panel. The members were again asked to rate the concepts in terms of desirability and feasibility. This was the first of the consensus rounds. After each round, the panel’s responses were assessed for consensus using the standard deviation. A standard deviation of less than one implied a high consensus for that statement, and it was therefore removed from the list and set aside for later consideration during the theory-building process. If the consensus was low, however, the statement was left on the list. The amended list was subsequently sent back to the panel with the aggregated means for each statement and a record of the comments made by the members so that they were aware of the reasons for particular scores. This continued for three rounds until consensus was achieved. The resulting list of statements was then used to develop our theory (a more detailed summary of the analysis process is shown in Appendix D). This was achieved in the following way:
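As an illustration, the per-round consensus check described above can be sketched as follows. The ratings and statement labels here are hypothetical examples rather than the study’s data; only the rule itself (rate on a one-to-six scale, compute the mean, treat a standard deviation below one as consensus) comes from the text:

```python
# Sketch of one Delphi consensus round (illustrative data, not the study's).
from statistics import mean, stdev

# Each statement maps to the panel's desirability ratings on a 1-6 scale.
ratings = {
    "A": [6, 6, 5, 6, 5, 6],   # tight spread -> high consensus
    "G": [2, 5, 3, 6, 1, 4],   # wide spread  -> returned for another round
}

consensus, remaining = {}, {}
for stmt, scores in ratings.items():
    m, sd = mean(scores), stdev(scores)
    # SD below one implies high consensus: set the statement aside
    # for the theory-building stage; otherwise keep it on the list.
    (consensus if sd < 1 else remaining)[stmt] = round(m, 2)

print(consensus)   # statements removed from the list with their means
print(remaining)   # statements sent back to the panel with their means
```

In practice, the remaining statements would be recirculated together with the aggregated means and panel comments, as described above, until consensus is reached.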
• The final statements were categorized into the four key groups.
• The statements for each group were plotted on a graph which showed the relationship between desirability and feasibility.
• Each graph was divided into four quadrants denot- ing the levels of desirability and feasibility. This was achieved by plotting the mean for desirability and feasibility in each category.
• Finally, we developed a number of models showing the relationships between the concepts (Anderson et al., 1994; Strauss & Corbin, 1998).
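The quadrant step in the list above can be sketched in a similar way. The scores below are hypothetical; the placement rule (compare each statement’s mean desirability and feasibility against the category-wide means) follows the description above, while the quadrant names are generic stand-ins for the labels used in Figure 2:

```python
# Illustrative quadrant placement for one category (hypothetical scores).
from statistics import mean

scores = {  # statement: (mean desirability, mean feasibility)
    "A": (5.8, 5.1),
    "G": (3.2, 4.6),
    "H": (4.9, 2.8),
    "C": (3.0, 2.5),
}

# The category means define the dividing lines between the four quadrants.
d_mid = mean(d for d, _ in scores.values())
f_mid = mean(f for _, f in scores.values())

def quadrant(d, f):
    if d >= d_mid and f >= f_mid:
        return "high desirability / high feasibility"
    if d >= d_mid:
        return "high desirability / low feasibility"
    if f >= f_mid:
        return "low desirability / high feasibility"
    return "low desirability / low feasibility"

for stmt, (d, f) in scores.items():
    print(stmt, quadrant(d, f))
```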
4. RESULTS

As stated above, the 29 statements were classified using the four categories from the literature review. These are discussed in more detail below.
4.1. Options for Developing IA Goals and CSFs

The panel developed a consensus regarding ten desirable goals and critical success factors pertaining to information assurance alignment. As for all the options put to the panel, we asked for the CSFs to be given a feasibility rating, shown in Figure 2.
The most desirable critical success factor was considered to be acquiring senior management support for information assurance (Statement A). According to the panel of experts,
110 E. MCFADZEAN ET AL.
Key to Figure 2
A Gaining senior executive support for information assurance
B Instilling IA values and awareness amongst employees
C Anticipating IA threats
D Developing a security architecture that can rapidly respond to changes in the business environment
E Clarifying individual IA roles and responsibilities for all employees in the organization
F Developing IA policy beyond legislation and regulation
G Developing a 3 to 5 year IA strategy
H Working together with members of the same industry to develop solutions for IA issues
I Responding to changing organizational needs by providing flexible IA procedures and regulations
J Using the latest security technology, when appropriate
K Improving communication between IA and business functions
L Aligning IA measures with business objectives
M Prioritising IT/IA projects in line with organizational goals
N Improving the knowledge of both IA and Corporate goals and requirements for all relevant personnel
O Involving the IA function in corporate strategy development
P Developing collaboration between IA and the organization’s other functions
Q Discussing key strategic dilemmas pertaining to IA at board level (e.g., sharing information vs. tight security)
R Ensuring IA practitioners discuss how IA processes can support or restrict corporate strategy when undertaking IA changes
S Dedicating resources to making the IA practices responsive to changes in the environment
T Identifying different (internal and external) stakeholders’ requirements in terms of IA
U Determining information assurance success by qualitative as well as quantitative measures
V Using metrics to measure information assurance
W Evaluating employees’ IA practices
X Benchmarking IA against external organizations (best practices/standards)
Y Having IA metrics which focus on time performance (for example, how long did it take to discover incidents and how long did it take to recover)
Z Providing non-technical reports to the Board of Directors so that they can understand and approve IA policy
(a) Reporting to the board on how IA goals are being achieved
(b) Frequent auditing of IA policies
(c) Including IA metrics in general IT reports
[Figure 2: statements plotted by desirability and feasibility; quadrant labels include “Incomplete Options” and “Premier Choices”]