
Wednesday, April 25, 2012

Performance Measures Analysis


For my program evaluation course, we had to analyze the performance measures used in a particular program. I chose to write about the Terrorism Prevention program run by the Department of Homeland Security (DHS) and the Transportation Security Administration (TSA):


April 10, 2012

To: Assistant Secretary Barnow, Transportation Security Administration
From: Allison Primack, Special Assistant
Re: Performance Management System for Preventing Terrorism
______________________________________________________________________________


I.          Purpose
The purpose of this report is to assess the new performance management system for the Terrorism Prevention Program under the Transportation Security Administration (TSA) in the Department of Homeland Security (DHS). These measures were established as a result of the DHS Bottom-Up Review of November 2009, to be used for FY 2011 and beyond. The review was part of the Quadrennial Homeland Security Review (QHSR), which in February 2010 laid out a strategic framework to “guide the activities of the homeland security enterprise toward a common end – a homeland that is safe, secure, and resilient against terrorism and other hazards.”

II.        Current Measures and Standards
The current measures and standards for this program are clearly outlined in the United States DHS Annual Performance Report, Fiscal Years 2011-2013. “Preventing Terrorism and Enhancing Security” is one of six missions of the QHSR outlined in this report.

This mission has three goals to achieve:
  1. Prevent Terrorist Attacks
  2. Prevent the Unauthorized Acquisition or Use of CBRN Materials and Capabilities
  3. Manage Risks to Critical Infrastructure, Key Leadership, and Events


This report will focus on the first goal of the program, preventing terrorist attacks. This goal is supported by five objectives:
  1. Understand the threat
  2. Deter and disrupt operations
  3. Protect against terrorist capabilities
  4. Stop the spread of violent extremism
  5. Engage communities


To determine whether the goal and objectives are being met, the following performance measures are currently in place:

Measure: Percent of intelligence reports rated “satisfactory” or higher in customer feedback that enable customers to understand the threat
Type of Measure: Output Measure of Objective 1, “Understand the threat”
Results: No results yet; data collection begins in FY 2012
Projected Targets: FY 2012 (80%); FY 2013 (80%)

Measure: Percent of air carriers operating from domestic airports in compliance with leading security indicators
Type of Measure: Process Measure of Objective 3, “Protect against Terrorist Capabilities”**
Results: FY 2008 (96%); FY 2009 (98%); FY 2010 (98%); FY 2011 (99.2%)
Projected Targets: FY 2011 (100%); FY 2012 (100%); FY 2013 (100%)

Measure: Percent of domestic air enplanements vetted against the terrorist watch list through Secure Flight
Type of Measure: Input Measure of Objective 2, “Deter and Disrupt Operations”
Results: FY 2011 (100%)
Projected Targets: FY 2011 (100%); FY 2012 (100%); FY 2013 (100%)

Measure: Percent of international air enplanements vetted against the terrorist watch list through Secure Flight
Type of Measure: Input Measure of Objective 2, “Deter and Disrupt Operations”
Results: FY 2011 (100%)
Projected Targets: FY 2011 (100%); FY 2012 (100%); FY 2013 (100%)

Measure: Percent of air cargo screened on commercial passenger flights originating from the United States and territories
Type of Measure: Input Measure of Objective 2, “Deter and Disrupt Operations”
Results: FY 2011 (100%)
Projected Targets: FY 2011 (100%); FY 2012 (100%); FY 2013 (100%)

Measure: Percent of inbound air cargo screened on international passenger flights originating from outside of the United States and territories
Type of Measure: Input Measure of Objective 2, “Deter and Disrupt Operations”
Results: No results yet; new measure in FY 2011
Projected Targets: FY 2012 (85%); FY 2013 (100%)

Measure: Average number of days for DHS Traveler Redress Inquiry Program (TRIP) redress requests to be closed
Type of Measure: Output Measure of Objective 3, “Protect against Terrorist Capabilities”
Results: FY 2011 (99 days)
Projected Targets: FY 2011 (<100 days); FY 2012 (<97 days); FY 2013 (<95 days)

Measure: Percent of law enforcement officials trained in methods to counter terrorism and other violent acts who rate the training as effective
Type of Measure: Input Measure of Objective 2, “Deter and Disrupt Operations”
Results: FY 2011 (84%)
Projected Targets: FY 2011 (80%); FY 2012 (82%); FY 2013 (84%)

**This is the only measure carried over into this program from the previous set of performance measures, which explains why it has results prior to FY 2011.

In addition, this mission is evaluated against two cross-cutting performance measures that apply to every DHS mission. These standards were carried over from the previous performance measurement program, so they have a longer history of results to analyze:

Measure: Percent of breaking homeland security situations integrated and disseminated to designated partners within targeted timeframes
Type of Measure: Process Measure contributing to Objective 2, “Deter and Disrupt Operations”
Results: FY 2009 (88%); FY 2010 (100%); FY 2011 (100%)
Projected Targets: FY 2011 (95%); FY 2012 (95%); FY 2013 (95%)

Measure: Percent of Partner Organizations satisfied that the Federal Law Enforcement Training Center training programs address the right skills needed for their officers/agents to perform their law enforcement duties
Type of Measure: Output Measure contributing to Objective 3, “Protect against Terrorist Capabilities”
Results: FY 2008 (79.75%); FY 2009 (82%); FY 2010 (96%); FY 2011 (98.5%)
Projected Targets: FY 2011 (84%); FY 2012 (97%); FY 2013 (97%)



III.      Appropriateness of Current Measures and Standards
In a perfect world, the performance measures would be identical to the goals and objectives of this program. However, because these objectives are not easy to observe and quantify, the performance measures above were created. Below is a breakdown of how appropriate the current performance measures are for the terrorism prevention program, based on the criteria outlined by Theodore H. Poister. The Appendix of Measure Descriptions and Data Collection Methodologies, published with the DHS Annual Performance Report, was used to make these assessments:

Reliability and Validity: To ensure the completeness and reliability of the data being used, the DHS has a “two-pronged approach”: checking its measures against the GPRA Performance Measure Checklist for Completeness and Reliability, and an Independent Assessment of the Completeness and Reliability of GPRA Performance Measures. The first checklist is used to increase the overall quality of rating justifications, while the second comes from a pilot-program report on best practices and on how the missions could improve their data collection and reporting processes. Additionally, during FY 2011 all department heads were asked to verify that the DHS Performance and Accountability Reports were “complete and reliable.” All but two measures passed, and neither of those was classified under this mission or program.

Meaningful and Understandable: These measures have a high level of stakeholder credibility. Program managers are required by the DHS to “assess results and summarize their findings” on a quarterly basis. This gives them an understanding of the program’s results to date, and they are asked to look ahead and assess whether they will reach their targets by the end of the year. If targets are not going to be hit, managers are also asked to outline corrective measures. This process occurs more formally at the end of each fiscal year. Because of this constant interaction with the standards, one would expect the measures to be meaningful to those in charge of the program.

Balanced and Comprehensive: As indicated in the appendix, data is gathered from several sources in order to create a complete picture of what is happening with terrorism prevention. From surveys of the citizens/customers who must participate in the program, to interviews with those in the training processes, to concrete operational counts, a wide range of sources is referenced to draw conclusions. This richness and diversity of data also helps increase the validity of the measures.

Timely and Actionable Results: By requiring quarterly reports, the measurements must be addressed frequently by managers. Because the data is almost entirely quantitative, it is relatively easy to keep up to date and analyze. This allows for adjustments, and gives managers more time to hit annual targets.

Resistant to Goal Displacement: A common problem with performance management systems is goal displacement, in which the true goal of the program is lost in attempts to hit the performance measurement targets. All of the targets listed above connect directly to a goal and promote the prevention of terrorism as a whole. By striving to achieve the performance measures outlined, the DHS will not be compromising the anti-terrorism mission.

Nonredundant: It is also essential to make sure each of the measures analyzes a different aspect of the program. As seen in the table, each measure collects a separate set of statistics and focuses on a different aspect of the program. The only apparent redundancy is the same measure appearing in both domestic and international contexts; however, it is smart to keep these separate because domestic and international travel pose different types and levels of terrorist threats.

IV.      Suggestions for New Measures
The performance measures currently being utilized are relatively new and up to date because of the Bottom-Up Review, which was conducted by the DHS in November 2009 to ensure that all of the department’s systems and measures aligned with the QHSR missions and standards. That said, there are still a few more measures that would be useful to include.

As indicated in the table above, there are many measures supporting the first three objectives of this program, but none that directly link to the last two objectives: stopping the spread of violent extremism and engaging communities. In order to get a complete perspective on how well the DHS and TSA are preventing terrorism, it is essential to create measures that evaluate these objectives. However, it is understandable why no such measures exist yet. Because these objectives are so broad, and outside the scope of current terrorism prevention, it is difficult to quantify their success. In most cases the TSA’s current actions are more reactive than preventative: after a terrorist attack occurs, it looks for the warning signs and tries to stop repeat cases. It appears that the DHS does not currently have programs that engage communities or work to stop the spread of violent extremism, so it would be unfair to measure them at this point.

There are still some measures not currently being utilized that would be useful for analyzing the first three objectives. One such measure would be the percent of customers who are pulled aside for additional screening and actually turn out to be security threats, contributing to terrorism prevention. This would be a process variable, because it would measure the effectiveness of the extra security process. To keep customers happier and make the system more efficient, while still promoting safety, the DHS should find a way to better select whom it brings into additional screening, targeting fewer people while still protecting citizens from terrorist threats. A sketch of how such a measure could be computed follows.
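For illustration, here is a minimal sketch of how such a hit-rate measure could be computed from screening records. Every field, value, and function name is hypothetical and invented for this example; none of it reflects an actual DHS or TSA data system.

```python
# Minimal sketch of the proposed screening measure; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    passenger_id: str
    selected_for_additional_screening: bool
    confirmed_security_threat: bool

def screening_hit_rate(records):
    """Percent of passengers selected for additional screening who were
    later confirmed to be actual security threats."""
    screened = [r for r in records if r.selected_for_additional_screening]
    if not screened:
        return 0.0
    hits = sum(1 for r in screened if r.confirmed_security_threat)
    return 100.0 * hits / len(screened)

# Invented data: four passengers, three screened, one confirmed threat.
records = [
    ScreeningRecord("P001", True, False),
    ScreeningRecord("P002", True, True),
    ScreeningRecord("P003", True, False),
    ScreeningRecord("P004", False, False),
]
print(f"Screening hit rate: {screening_hit_rate(records):.1f}%")  # 33.3%
```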

Additionally, it would be enlightening to see a measure of customer satisfaction. This would be an output variable, because it would measure the customer experience after going through the security program. DHS is planning to measure how well customers understand the threat, but it is not planning to measure how safe customers feel as a result of these terrorism prevention measures, or how effective they perceive the TSA to be. This measure would have to be collected with care, because measurement errors can arise when collecting this kind of survey data.
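One modest safeguard, sketched below, is to report any survey-based satisfaction figure together with its sampling margin of error rather than as a bare percentage. The sketch assumes a simple random sample, and the counts are invented for illustration.

```python
# Sketch: report a satisfaction rate with its 95% sampling margin of error.
import math

def satisfaction_with_margin(satisfied, sampled, z=1.96):
    """Return (percent satisfied, margin of error in percentage points)."""
    p = satisfied / sampled
    margin = z * math.sqrt(p * (1 - p) / sampled)  # normal approximation
    return 100 * p, 100 * margin

pct, moe = satisfaction_with_margin(satisfied=412, sampled=500)
print(f"{pct:.1f}% satisfied, +/- {moe:.1f} points")  # 82.4% satisfied, +/- 3.3 points
```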

V.        Setting the Standard
Setting the standard is crucial to this program, because the standard sets the tone for what level of performance is expected from management. The standard for terrorism prevention should be set extremely high. While the desire to set achievable targets is valid, when it comes to the safety of the United States the department should not settle for anything less than perfect. As seen in the table, the DHS currently takes this into consideration, since many of the targets are 100%.

Due to the novelty of these standards, it may take some time to decide whether they are the best measures for determining the success of the terrorism prevention program. For example, for the standard on the percent of air cargo screened, where the target is 100% and the result in the initial year was 100%, the measure may not be the best gauge of how well the program is functioning; meeting that standard may simply be expected, rather than an achievement worth noting or striving for. It is crucial for the DHS to continue to collect as much data and information as possible in case different standards need to be set that would better determine the success of the program.

VI.      Adjusting the Current Standards
The current performance standards should not be adjusted. This program should not have different standards based on demographics, location, economic situation, or any other factor; terrorism prevention should be in full force for all citizens, at all times. Neither the percent of baggage scanned, the percent of cargo searched, nor the percentage of adequately trained officers should vary in standard based on where in the country they are or the people they serve. While the goals may be harder to achieve in larger, busier airports, that is no excuse to lower the standard; these are the airports that likely need the high standard the most, as seen in the attacks of September 11, 2001. Nothing would be gained by curbing the standards: more people would be put in danger, and it would ultimately work against the goals of the mission.

In addition, regression-based models are more difficult to understand, which would create issues when managers are asked to make their quarterly reports. Adjustment does not add much explanatory power to the regression, and it may lead to unintended results and consequences. The simpler the numbers are, the easier it is for managers to create useful and comprehensible reports.
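To illustrate the added machinery, here is a toy sketch of what even the simplest regression-based adjustment would entail: every airport gets its own predicted target instead of one fixed standard. The data and variables below are entirely invented; this is not an actual DHS model.

```python
# Toy regression-adjusted standard; all data below is invented.
import numpy as np

# Hypothetical airports: daily enplanements (thousands) vs. compliance (%).
enplanements = np.array([5.0, 12.0, 30.0, 55.0, 80.0])
compliance = np.array([99.5, 99.0, 98.4, 97.9, 97.2])

# Fit compliance = a + b * enplanements by ordinary least squares.
X = np.column_stack([np.ones_like(enplanements), enplanements])
(a, b), *_ = np.linalg.lstsq(X, compliance, rcond=None)

# Each airport would then be held to its own adjusted target.
for e in (10.0, 60.0):
    print(f"Adjusted target at {e:.0f}k daily enplanements: {a + b * e:.1f}%")
```

Even this stripped-down version asks managers to reason about coefficients and predicted values rather than a single memorable number, which is the comprehensibility cost noted above.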

VII.    Suggestions on Improving the Performance Management System in the Future
One of the best components of this performance measurement program is its consistently evolving nature; by checking in with managers frequently, the department is able to adjust the measures to better address current needs. Because terrorism and its methods are constantly evolving, it does not make sense to check against static performance measures. This will keep the DHS from falling into goal displacement, striving to perform well on the measurements instead of focusing on emerging terrorist threats.

This set of standards is a solid start for this new program. Even though the list of measures should be kept as concise as possible, it is essential to have quantitative and qualitative measures for every objective, which this performance management system currently lacks. In addition, it would be wise to use these measures in conjunction with a more thorough program evaluation every couple of years. That type of evaluation would be able to measure the impact of the program, which is difficult to capture in performance measures alone.


VIII.  References
Appendix A: Measure Descriptions and Data Collection Methodologies. U.S. Department of Homeland Security, Annual Performance Report, Fiscal Years 2011-2013.

Barnow, Burt and Heinrich, Carolyn. “One Standard Fits All? The Pros and Cons of Performance Standard Adjustments.” Public Administration Review, Jan./Feb. 2010.

Blalock, Ann B. and Barnow, Burt S. “Is the New Obsession with ‘Performance Management’ Masking the Truth about Social Programs?” In Dall Forsythe, ed., Quicker, Better, Cheaper: Managing Performance in American Government. Albany, NY: Rockefeller Institute Press, 2001.

Poister, Theodore H. “Chapter Five: Performance Management: Monitoring Program Outcomes.” Handbook of Practical Program Evaluation, 2nd edition, pages 98-125. Viewed online.

U.S. Department of Homeland Security. Annual Performance Report, Fiscal Years 2011-2013.
