
Appendix C. TRB Letter Report

June 9, 2008

The Honorable Mary E. Peters
Secretary of Transportation
U.S. Department of Transportation
1200 New Jersey Ave, SE
Washington, D.C. 20590

Dear Secretary Peters:

We are pleased to transmit to you this first letter report of the Committee for a Review of U.S. Department of Transportation (USDOT) Study on Implementation of Changes to the Section 4(f) Process.

As described in the report, Section 6009 of the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) called for USDOT to implement regulations to streamline the process of evaluating impacts of transportation projects on 4(f) resources and further specified the following:

SECTION 6009. PARKS, RECREATION AREAS, WILDLIFE AND WATERFOWL REFUGES, AND HISTORIC SITES.
(c) IMPLEMENTATION STUDY.—
(1) IN GENERAL.—The Secretary shall—

  • (A) conduct a study on the implementation of this section and the amendments made by this section; and
  • (B) commission an independent review of the study plan and methodology, and any associated conclusions, by the Transportation Research Board of the National Academy of Sciences.

A committee appointed by the National Research Council (NRC) was convened under the auspices of the Transportation Research Board (TRB) to carry out this charge through a series of brief letter reports. This first report in the series was developed through the following process: the committee reviewed USDOT's draft study plan and other relevant background information, held an open-session meeting to hear presentations from and engage in dialogue with USDOT staff and other stakeholders, and developed the report through closed-session meeting discussions and subsequent correspondence.

The report then went through peer review, following standard NRC procedures.

We note that the committee's charge is not to evaluate whether regulatory decisions made by USDOT with regard to 4(f) projects under the new law are appropriate; rather, its role is limited to (a) advice on the study design for appropriate methodology and data collection for evaluating impacts and (b) review of the final study report to determine whether the findings and conclusions are justified by the data collected and methods applied in interpreting the data.

The committee recognizes the many challenges faced by USDOT and its contractors in designing a draft study plan to evaluate the consequences of changes in procedures and regulations required by SAFETEA-LU. As discussed in the following sections, however, we believe that the study plans can be strengthened in a number of ways to ensure a rigorous, well-balanced assessment of how 4(f) resources are being affected by streamlining processes. We hope you will find our guidance to be a constructive aid to your efforts.

Sincerely,

[Signature of Michael Meyer]

Michael Meyer

Committee Chair

Enclosure A: Illustrative List of Suggested Interview Questions
Enclosure B: Membership, Committee for a Review of U.S. Department of Transportation Study on Implementation of Changes to the Section 4(f) Process
Enclosure C: Speakers and Guests at Committee Meeting, March 31 - April 1, 2008

Letter Report

Transportation Research Board / National Academies

 

Evaluating the Implementation of Section 4(f) Streamlining Provisions

A Review of the U.S. Department of Transportation's

Phase I Draft Study Plans

1. Background context

Established in the U.S. Department of Transportation Act of 1966, Section 4(f) was designed to protect publicly owned parks, recreational areas, wildlife and waterfowl refuges, and public and private historical sites [all of which are referred to herein as “4(f) resources”] from use by transportation projects, unless the Administration determines that there is no “feasible and prudent” avoidance alternative and that “all possible planning to minimize harm” has occurred. Consideration of 4(f) resources is included as part of the environmental analyses typically conducted under the National Environmental Policy Act (NEPA) process followed by transportation and environmental regulatory agencies, but 4(f) considerations have separate legislative and regulatory authority.

Section 4(f) originated during the peak period of Interstate highway construction, with the goal of preserving urban parks and historical sites that were in jeopardy of being destroyed. In its 1971 Overton Park decision, the U.S. Supreme Court articulated a high standard for compliance with Section 4(f). In the years that followed, however, federal courts applied the Overton Park ruling differently in similar situations, reaching diverse conclusions about the extent to which certain mitigating factors may be considered in determining whether an avoidance alternative is feasible and prudent.

Congress amended Section 4(f) in Section 6009 of SAFETEA-LU in August 2005, leading to two important changes intended to streamline the Section 4(f) process. First, USDOT was granted authority to approve a project that results in a use that is so minor that it does not “adversely affect the activities, features, and attributes” of the Section 4(f) resource, referred to as a de minimis impact. When USDOT determines that an impact is de minimis and the responsible officials with jurisdiction over the resource agree, compliance with Section 4(f) is complete. No analysis of avoidance alternatives is necessary. Second, USDOT was directed to clarify the factors to consider and standards to apply for determining the "prudence and feasibility" of alternatives that avoid the use of Section 4(f) property.

Most transportation agency officials believe that the streamlining provisions of SAFETEA-LU reduce costs and save time in cases where alternatives are not available or when the impact on a 4(f) resource is minor. Although historic preservation and environmental protection groups generally agreed that some modification of 4(f) was acceptable in principle, development of the regulations for implementing changes to evaluate “prudent and feasible avoidance alternatives” proved contentious, and many of these groups remain wary of how the regulations will be applied. (Implementation of the de minimis rulings did not require regulations.) Partly in anticipation of these controversies, SAFETEA-LU legislation specified that USDOT conduct a study of the effectiveness of implementing the streamlining regulations. Specifically, USDOT was asked to report on the following:

  • “Efficiencies that may result from implementation of Section 6009,”
  • “The post-construction effectiveness of impact mitigation and avoidance commitments,” and
  • “Quantity of projects with impacts that are considered de minimis.”

SAFETEA-LU also specified that USDOT commission an independent review of the study plan and methodology and any associated conclusions by the Transportation Research Board (TRB) of the National Academy of Sciences.

2. The process for this activity

USDOT intends to conduct the mandated study in two phases. Phase I will focus on examining how the de minimis provision has been applied since it was enacted in August 2005. The Phase I study will also describe the feasible and prudent avoidance alternative standard and review the process used to develop the regulations. Phase II of the study will focus on evaluating implementation of the “feasible and prudent avoidance alternatives” standards, as well as updating and extending the evaluation of the de minimis impact provision.

In accordance with this plan, TRB agreed to carry out its independent review in three stages. In the first stage, the TRB committee has reviewed USDOT's Phase I draft study plans. In the next stage, the TRB committee will review the Phase I draft report and the Phase II draft study plans. In the final stage, the TRB committee will review the draft Phase II report. This letter report represents the outcomes of the TRB committee's first stage of work, focusing on the USDOT plans for evaluating implementation of the de minimis provisions.

The draft study plan developed by USDOT (and designated contractors at the Volpe Center) was shared with the committee and discussed at its meeting on March 31, 2008. Representatives of the following relevant stakeholder organizations were also invited to participate in the meeting and share their views about the USDOT study plans:

  • National Trust for Historic Preservation,
  • Advisory Council on Historic Preservation,
  • National Conference of State Historic Preservation Officers,
  • National Recreation and Park Association,
  • Rails-to-Trails Conservancy,
  • American Association of State Highway and Transportation Officials/Standing Committee on the Environment,
  • State department of transportation office (Ohio), and
  • U.S. Department of the Interior.

The committee also had the opportunity to examine the basic data that USDOT has been gathering on cases where the de minimis provision has been applied since implementation of the new rules in 2005. This database remains a work in progress, but at the time of the committee's meeting, USDOT had documented 237 projects with de minimis impact findings, with 11 of these projects having actually completed construction.

3. Evaluation of the draft study plans

The USDOT draft study plans are based on the following general structure: collect and analyze information on existing de minimis cases; conduct initial explorations on this full set of de minimis cases through written surveys and phone interviews with relevant stakeholders; on the basis of this input, identify cases for in-depth interviews and site visits; and evaluate the data and information collected in the steps above. We support the overall architecture of the study plans in a general sense but believe that the draft plans are too vague about many key details and fall short of what is needed for a sound study design on a number of levels, including the following:

  • Framing the study based on interpretation of statutorily defined questions and defining the study questions in measurable terms;
  • Articulating clear strategies for collecting data and survey information; ensuring that an adequately representative set of cases is examined; and clearly delineating between the evaluation of historic properties and that of parks, recreation areas, and wildlife/waterfowl refuges; and
  • Weighing alternative study designs and analytical methodologies for drawing conclusions from the information collected.

In the following sections, we offer some specific suggestions and general strategies for addressing these shortcomings.

3.1 Framing the study questions

In response to questions from the committee, USDOT acknowledged that Congress had provided little guidance with regard to the interpretation of the three study questions included in the SAFETEA-LU legislation. In the absence of more precise guidance, USDOT should clearly articulate its interpretation of the questions and identify how that interpretation relates to study design choices. The committee offers the following suggestions:

  • Question 1 directs USDOT to evaluate the efficiencies that may result from implementation of the Section 6009 amendments. This language suggests an “efficiency evaluation” approach that would consider the impacts of the amendments in terms of both cost and outcomes.
  • Question 2 directs USDOT to evaluate the post-construction mitigation and avoidance commitments adopted as part of projects conducted under Section 6009 and its amendments. The intent of this question is less transparent, and the appropriate evaluation approach will depend on USDOT's interpretation of the question. The following questions represent possible alternative interpretations:
    • Are the 4(f) resources better off than they would have been under the processes that existed prior to the amendments [i.e., were the 4(f) resources adequately protected]? If this is the intent, this would suggest an “impact evaluation” approach.
    • Do the post-construction conditions meet the expectations embodied in mitigation and avoidance commitments? If this is the intent, this would suggest an evaluation of the processes used to define and implement commitments (i.e., a “process evaluation”).
  • Question 3 directs USDOT to collect information about projects that are considered de minimis, including location, size, and cost. The question does not state an evaluation purpose. This information could be used to help inform the first two study questions. It could also be used as the starting point for a process evaluation to assess trends and identify possible issues in the implementation of the de minimis provisions.

Articulating the interpretation of these three questions and translating this interpretation into a clear study framework will help focus evaluation design choices and implementation. It will also help USDOT communicate its choices and help the audience of the study better interpret its findings.

3.2 Operationalizing the study questions and defining key terms

Once the study questions are clearly articulated, the USDOT study plan should ensure these questions can be “operationalized” (i.e., applied in a way that elicits clear, meaningful answers) and should define key concepts in measurable terms. This could include, for example, concepts such as efficiency, cost-effectiveness, time/duration, resource outcomes, and transportation outcomes, all of which could be interpreted differently by the various study participants. The study plan should likewise be clear in designating the direction of an impact on efficiency, effectiveness, or resource or transportation outcomes (e.g., what would constitute an “improvement” or “success”). Clearly defining these concepts will help in developing the data collection methodology, improving the comparability of feedback from different participants, and interpreting the findings. Cases where USDOT is defining concepts in ways that are inconsistent with definitions used in relevant statutes and regulations, if any, should be explicitly noted.

One general concern is that the draft study plan gives an impression that USDOT has conceived of a narrow interpretation of “efficiency” (or in more general terms, of “success” in implementing the streamlining provisions) based largely or entirely on time and cost savings. We suggest that the studies also evaluate success on the basis of outcomes for the 4(f) resource and for the transportation project. There may, in fact, be instances where a de minimis designation leads to greater than average time and costs (e.g., as a result of extra efforts going into mitigation measures), but the project could still be viewed as a success if the mitigation measures led to better outcomes for the 4(f) resource and the transportation project.

A related concern is how one would actually measure the timeline for a 4(f) analysis in a way that would show the impact of the Section 6009 amendments. As USDOT well recognizes, defining the time and costs saved from use of de minimis provisions can present a difficult methodological challenge. Section 4(f) analyses are often embedded in long, complex NEPA analyses, and technically the 4(f) process does not conclude until the larger NEPA process is concluded. This timeline may be determined by a myriad of factors that reach well beyond the 4(f) process itself, and thus it can be difficult to define when the clock starts or stops for the 4(f) process. USDOT will need to evaluate whether different states are using consistent methodologies to define the timeline of 4(f) evaluations. Ideally, USDOT would develop and recommend guidelines for how states should do this sort of time accounting in the future to help ensure that consistent data are available for future studies.
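
For illustration only, the sketch below (in Python) shows one way a state could record a small, fixed set of milestone dates for each Section 4(f) evaluation and compute an elapsed time from them. The record fields, dates, and the choice of start and stop events are hypothetical; they are not drawn from any USDOT or state system:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FourFTimeline:
    """Hypothetical milestone record for one Section 4(f) evaluation."""
    project_id: str
    use_identified: date                 # date a use of 4(f) property was identified
    de_minimis_request: Optional[date]   # date concurrence was requested (if applicable)
    concurrence_received: Optional[date] # date the official with jurisdiction concurred
    nepa_decision: date                  # date of the decision document for the larger NEPA process

    def days_4f(self) -> Optional[int]:
        """Elapsed days from identifying the use to receiving concurrence."""
        if self.concurrence_received is None:
            return None
        return (self.concurrence_received - self.use_identified).days

# Example record with consistent milestone dates (values are invented)
record = FourFTimeline("OH-0123", date(2006, 3, 1), date(2006, 5, 15),
                       date(2006, 7, 10), date(2007, 2, 28))
print(record.days_4f())  # 131

Whatever fields are ultimately chosen, the essential point is that every state would compute durations from the same start and stop events, so that timelines reported to USDOT are comparable.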

A related key issue to consider is defining the appropriate evaluation frame. For example, efficiency can be defined in terms of the efficiency within an organization (e.g., USDOT), within a group of organizations (e.g., state DOTs), or for society as a whole. If the study is framed primarily in terms of impacts on U.S. or state DOT operations, for example, this should be clearly stated and the implications of this decision for the interpretation of the study findings should be considered. Because of the nature of the Section 6009 amendments, it is possible that in some cases, time and costs saved in one arena (e.g., by USDOT officials) could result in shifting time or cost burdens to other stakeholders in the process (e.g., to state historic preservation officers (SHPOs) or local officials with jurisdiction who are given new responsibilities as a result of the streamlining provisions). Likewise, cost and time savings could extend to more than one organization. These net costs and broader impacts should be considered, particularly if efficiency is to be defined in terms of a broader societal outcome.

3.3 Weighing alternative study designs

The draft study plan includes a brief section outlining its rationale for choosing “qualitative methods” to evaluate efficiencies and impacts of the Section 6009 amendments. The committee is concerned that this approach does not fully consider alternative study designs that could be used to maximize the study's validity within the context of the existing data and resource constraints.

As outlined in Section 3.1, the study could potentially involve two types of evaluation: impact evaluation and process evaluation. Elements of the study that are focused on impacts are necessarily concerned with causal assessment of relationships between the Section 6009 amendments and observations regarding cost, time, and 4(f) resource and transportation outcomes. Causal assessment depends on quantitative methods that control for threats to internal validity (that is, methods to control for possible confounding variables, which may lead to alternative explanations for the observed outcomes). In contrast, the process evaluation elements of the study would be less concerned with causation and, as such, quantitative methods would be less critical.

The committee acknowledges that USDOT will not be able to address the study questions by using a classical experimental design, for a variety of reasons. Section 4(f) projects are each unique in that they depend upon a myriad of local factors, such as the type, size, and distinctive attributes of the 4(f) resource in question; the type and extent of the use of that property; and the goals and actions of the various players involved. Also, the “subjects” potentially affected by the SAFETEA-LU legislation are 4(f) projects that exist for a finite period, and the characteristics of interest (e.g., cost, impacts of mitigation) reflect a single event; they are realized either before or after implementation of the Section 6009 amendments. It is not possible to measure, for example, the cost of a project utilizing a 4(f) resource before and after the amendments took effect. In addition, because these projects are not undertaken within the context of a controlled study, conditions for random assignment cannot be met.

These factors all pose significant challenges in terms of study design. We do not believe, however, that defaulting to an entirely qualitative study will allow USDOT to generate supportable conclusions. Rather, pursuing multiple lines of evidence that combine a variety of investigative strategies (as is often done in social science research and program evaluation efforts) will greatly increase the chance of producing meaningful results. If quantitative studies are combined with consistent qualitative evidence, this may collectively provide a sufficient weight of evidence to allow USDOT to draw reasonably sound conclusions from the study process. Toward that end, we recommend that the study plan include a structured assessment of mixed-method evaluation strategies that include quantitative methods to address internal validity, sampling strategies to address external validity (i.e., the extent to which the findings can be generalized beyond the sample included in the study), and qualitative inquiry to better understand and interpret the quantitative results.

Two possible approaches to the quantitative component of the study include:

  • Ex post facto design with nonequivalent control. USDOT could collect data from states (where sufficient data are available) and evaluate the duration, costs, and outcomes associated with two comparable sets of 4(f) evaluations - one set where a de minimis designation was applied and another, pre-amendment set where the characteristics of the project suggest that a de minimis designation would have been applied. The historic data would be considered a nonequivalent control [1], and, depending on the types and quality of data tracked in the state database(s), statistical analyses could be conducted to measure the comparability between the control and treatment groups. One or more within-state analyses could be conducted and evaluated relative to the generalizability of findings, or control group data could be pooled and compared with the full de minimis data set. (A minimal sketch of this type of comparison appears after this list.)
  • Proxy pre-post design (“scenario-based” strategy). USDOT could examine a set of cases where the de minimis provisions have been applied and ask the following: What would have happened in the absence of the de minimis provision if the project had instead been carried out under a programmatic or regular 4(f) analysis? How would this have changed time, costs, protection of the 4(f) property, and outcome of the transportation project? In assessing the validity of this approach, it would be critical to consider the variability in the alternative outcomes (e.g., cost) for projects for which de minimis designations are sought, the sources of information that interviewees/survey participants would use to answer these questions (e.g., a database versus personal experience), and asymmetries in the experience of interviewees/survey participants (e.g., differences between DOT and local stakeholders) [2].
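
For illustration of the first approach, the sketch below (in Python) compares 4(f) evaluation durations for a hypothetical de minimis group and a pre-amendment comparison group. The field names, values, and the choice of a rank-based test are assumptions made for illustration only; they are not elements of USDOT's draft plan:

import pandas as pd
from scipy import stats

# Hypothetical extract from a state DOT tracking database (values are invented)
projects = pd.DataFrame({
    "group":    ["de_minimis"] * 5 + ["pre_amendment"] * 5,
    "months":   [6, 9, 7, 11, 8,   14, 10, 18, 12, 16],   # 4(f) evaluation duration
    "resource": ["historic", "park", "historic", "refuge", "park",
                 "historic", "park", "historic", "park", "refuge"],
})

treat = projects.loc[projects.group == "de_minimis", "months"]
control = projects.loc[projects.group == "pre_amendment", "months"]

# First check on comparability of the nonequivalent control: distribution
# of an observable covariate (here, resource type) across the two groups
print(pd.crosstab(projects.group, projects.resource))

# Rank-based comparison of durations (robust to small, skewed samples)
stat, p = stats.mannwhitneyu(treat, control, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")

Richer state data would permit more formal adjustment for covariates (e.g., regression on project size, resource type, and class of action) rather than the simple crosstabulation shown here.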

USDOT will need to assess the pros and cons of these different quantitative evaluation methods in terms of threats to internal validity; the type and quality of data available from state DOT databases; the sampling, data collection, and statistical techniques that could be used to improve internal and external validity; and the associated budget and timing implications. Regardless of the quantitative method used, it will be important to consider the limitations of the method and ways in which qualitative methods could be used to address those limitations.

For instance, evaluation of outcomes for de minimis 4(f) projects that have yet to be constructed (e.g., as part of the evaluation of “efficiencies”) may need to rely largely on questions about expected outcomes. To help assess the validity of these data, the evaluation of “post-construction effectiveness of impact mitigation and avoidance commitments” could include questions about whether the actual outcomes were consistent with expectations. In this way (if variables were included to help compare the representativeness of these post-construction projects to the overall de minimis dataset), the post-construction evaluation could be used to help assess the validity of the quantitative findings of the efficiency evaluation.

The evaluation of post-construction effectiveness will be largely qualitative in nature, due to the very limited number of completed projects available for study, but this could still provide useful information and have great value for “telling a story” from the participant's viewpoint. And as noted above, one can combine quantitative and qualitative methods (e.g., through use of structured survey questions, representative sampling, emergent design, logical analysis approaches) to assist in the interpretation of this information.

Other inherent limitations in the validity of the quantitative evaluation approaches outlined above include the fact that the comparison groups for the ex post facto design would not be truly equivalent. The challenge of defining equivalent groups would be a particular concern with respect to collecting information about impacts on 4(f) resources, such as ecological impacts on wildlife and habitat (which are often non-linear and take several years to fully emerge). Statistical techniques such as cluster analysis can be used to evaluate equivalence of comparison groups, but these techniques rely on data that may not be available at a sufficient level of detail for 4(f) resource outcomes. In addition, depending on the inherent variability in the data, the proxy pre-post approach could be a highly speculative exercise that is colored by the subjective judgments of the participants and evaluator.
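
As one illustration of how cluster analysis might be used for this purpose, the sketch below (in Python) groups projects on two hypothetical covariates and checks whether the de minimis and comparison groups are distributed similarly across the resulting clusters. The covariates, values, and cluster count are arbitrary choices made for illustration:

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical project covariates (values are invented)
df = pd.DataFrame({
    "group":        ["de_minimis"] * 6 + ["comparison"] * 6,
    "project_cost": [2.1, 0.8, 5.4, 1.2, 3.3, 0.5, 2.4, 6.1, 0.9, 1.5, 4.8, 0.7],  # $M
    "acres_used":   [0.1, 0.3, 0.05, 0.2, 0.4, 0.1, 0.15, 0.6, 0.2, 0.1, 0.5, 0.3],
})

X = StandardScaler().fit_transform(df[["project_cost", "acres_used"]])
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# If the two groups occupy the clusters in similar proportions, they are at
# least roughly comparable on these covariates; markedly different profiles
# would signal that outcome differences may reflect the projects themselves.
print(pd.crosstab(df.group, df.cluster, normalize="index").round(2))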

All of these various limitations notwithstanding, we suggest that where true scientific experiments are not possible, a well-conceived quasi-experimental design, if executed with statistical sophistication and in recognition of its limitations, will provide better information than a purely qualitative exercise. Reasonable inferences can be drawn from such studies, which, combined with consistent qualitative evidence, could yield useful findings.

3.4 Gathering study information

The core of the USDOT studies will be the efforts to compile data on 4(f) cases and gather additional information and opinions through written or telephone surveys and face-to-face interviews. It is thus of central importance that these steps be strategically designed to assess a representative sample of cases and to elicit the needed information from the right stakeholders. Some specific concerns and suggestions are discussed below.

3.4.1 Assuring a representative study sample

One could argue that the existing population of de minimis cases (237 cases at the time of the committee's meeting) is itself relatively small for a study of this nature and should be evaluated in its entirety. We recognize, however, that this may be deemed infeasible by USDOT, due to the time and costs required. Furthermore, there is no guarantee that the 237 cases are, as a whole, truly representative of the “universe” of de minimis cases that are likely to arise over time. Thus, it seems reasonable for USDOT to adhere to its representative sampling strategy. It is imperative, however, that the study plans provide ample detail on how this representative sample will be chosen. The study plans acknowledge some, but not necessarily all, of the various representativeness factors that need to be considered, including the following:

  • Type of 4(f) resource (e.g., historical properties, parks, recreation areas, wildlife refuges, including cases involving federal and tribal lands),
  • Type of transportation project (e.g., highway, bridge, bike/pedestrian facility, transit project, including New Starts transit project),
  • Geographic location (e.g., at least one state in each major geographical region of the country and, ideally, within each U.S. circuit court of appeals jurisdiction), and
  • Class of action (e.g., Categorical Exclusion, Environmental Assessment, Environmental Impact Statement).

The existing database of de minimis cases is not evenly distributed among these various factors. For example, there is a large proportion of cases involving historical properties and very few involving wildlife refuges. It will be important to ensure that the less represented categories are explored as well (for example, through targeted selection of case studies), because they may present unique problems or challenges with respect to implementing the streamlining provisions.
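
One simple way to guarantee that the sparse categories are examined is to stratify the case database on factors such as those listed above and sample within each stratum, with a minimum number of cases per stratum so that rare categories (e.g., wildlife refuges) are not missed. The sketch below (in Python) is illustrative only; the field names, case counts, sampling fraction, and floor are hypothetical:

import pandas as pd

def stratified_sample(cases: pd.DataFrame, strata: list,
                      frac: float = 0.2, floor: int = 2,
                      seed: int = 0) -> pd.DataFrame:
    """Sample a fraction of each stratum, but never fewer than `floor`
    cases (or the whole stratum, if it is smaller than the floor)."""
    picks = []
    for _, grp in cases.groupby(strata):
        n = max(floor, int(round(frac * len(grp))))
        picks.append(grp.sample(n=min(n, len(grp)), random_state=seed))
    return pd.concat(picks)

# Hypothetical de minimis case list (values are invented)
cases = pd.DataFrame({
    "case_id":         range(1, 11),
    "resource_type":   ["historic"] * 7 + ["park", "park", "refuge"],
    "class_of_action": ["CE", "CE", "EA", "CE", "EIS", "CE", "EA", "CE", "EA", "EA"],
})

sample = stratified_sample(cases, ["resource_type"])
print(sample["resource_type"].value_counts())  # floor guarantees rare strata appear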

While the USDOT studies should of course focus primarily on places where de minimis provisions have been frequently applied, it would also be instructive to investigate places where de minimis provisions are not being applied. In some cases, few de minimis findings may reflect the fact that there are few 4(f) properties (or uses of these properties) in the state, or no cases where these uses could legitimately be considered as de minimis impacts. But in other scenarios, few de minimis findings may reflect a reluctance on the part of relevant officials to apply the new provisions, a lack of understanding or information about the process, or other factors. Assessing the factors that affect the use (or lack of use) of the de minimis provisions is an important part of assessing the overall effectiveness of the streamlining process and should be explicitly considered in determining a representative study sample.

3.4.2 Collecting baseline data

USDOT does not routinely collect information about individual 4(f) evaluations; instead, this task has historically been left up to individual state DOT offices. In preparation for this study, USDOT has compiled information about all of the projects where 4(f) impacts have been designated as de minimis since implementation of the Section 6009 amendments. We appreciate that collecting these data has been a formidable task, given the fact that the state DOT offices have used widely varying methodologies and degrees of rigor in their data collection processes and have exhibited varying degrees of cooperation in making the data available.

We do, however, have a fundamental concern that gathering data on de minimis cases alone is not sufficient as a basis for a sound study design. Also needed is comparable information (about time, costs, outcomes) on non-de minimis cases, to provide a baseline for evaluating how the streamlining provisions have actually changed the efficiency and outcomes of 4(f) analyses. We suggest that USDOT make a concerted effort to canvass the state DOT offices to find out what types of baseline 4(f) evaluation data might actually be available.

In the longer term, this study process could be used as an opportunity for USDOT to work with the state DOTs to identify and define best practices and standard procedures for documenting Section 4(f) evaluation timelines, costs, and outcomes. While it may be infeasible for USDOT to actually mandate specific data collection procedures, encouraging the use of a more common framework among the states (as well as within each state organization) may help create a stronger foundation for future studies of the 4(f) process.

3.4.3 Selecting survey and interview participants

A critically important part of the study design is identifying who should be asked to provide input, both in the broad-based surveys and in the in-depth case studies. Interviewing an inappropriate or insufficiently broad set of individuals will not only waste valuable time and resources but also could lead to inaccurate or biased conclusions.

The draft study proposal naturally includes plans to seek input from state DOT officials (as well as others playing key roles in the 4(f) process: SHPOs, tribal historic preservation officers (THPOs), local officials with jurisdiction), but it generally does not specify who within the agencies will be interviewed. We note that people in different positions within the state DOT structures may have different perspectives and knowledge, and it will thus be important to aim for consistency in terms of who is interviewed in order to ensure a reasonable degree of comparability among states.

Another important consideration is finding people who have been directly involved for the duration of the 4(f) project in question and can help assess the “end-to-end” process. Ideally, the interviews should also include individuals whose tenure stretches across the pre- and post-de minimis eras and individuals who have been exposed to many, rather than few, 4(f) projects. This sort of institutional memory is needed to obtain a broad, firsthand perspective on how the 4(f) process has changed since implementation of the new provisions.

The draft study plans are unclear as to whether USDOT intends to engage many stakeholders beyond those mentioned above. We strongly suggest that USDOT seek input from other key players: for example, representatives of state and local resource agencies, professional societies (e.g., national and state parks and recreation associations), and environmental and historical protection groups. They could offer valuable insights into certain aspects of the streamlining provisions, in particular with regard to questions about consultation and public notification processes, changes in roles and responsibilities, and satisfaction with the outcomes for 4(f) resources. The direct knowledge and experience of these other stakeholders may often be limited to just one or two 4(f) cases but nevertheless could provide a necessary complement to the perspectives obtained from state DOT officials.

The study plan does not address the issue of the confidentiality and anonymity of participants in surveys or interviews (including people interviewed as part of case studies). We suggest that the study plan include some explanation of measures that will be taken to protect the identities of study participants, in cases where this may be warranted or desired.

3.4.4 Designing survey and interview questions

Designing the survey and interview questions is not a simple matter. Poorly or unclearly worded questions can lead to inadequate, biased, or ambiguous results. It is important to consider not only the content, scope, and purpose of the questions being asked but also the precise wording and response format of the questions.

The survey/interview questions included in the draft study plan are made up to a large extent of open-ended, unstructured questions. For example, respondents are asked to describe the process by which streamlining provisions have been implemented or to explain what time and cost savings have occurred as a result of these provisions. These types of questions are appropriate and useful for certain contexts, but we suggest that USDOT explore a complementary strategy of including more closed-ended, structured questions aimed at drawing out quantifiable information and imposing a degree of uniformity (and hence comparability) on the responses.

For example, dichotomous (yes/no, true/false) questions, multiple choice questions, or various sorts of nominal/ordinal response questions, in which respondents are asked to rank or assign a numerical value to their feelings about certain statements, could be asked. Another possibility is to ask participants to rate (e.g., on a 1-9 scale) their agreement with statements about how the streamlining provisions have improved outcomes with respect to time or cost, collaboration and consultation processes, protection of 4(f) resources, and so forth. Box 1 offers a simple example of the various ways a particular question could be worded; USDOT could draw on numerous references in the social science literature in developing more sophisticated survey design strategies [3].

Through these types of structured questioning methods, qualitative information can be assigned meaningful numerical values (“quantitatively coded”) and then manipulated in a variety of ways to examine hypotheses and achieve greater insight into the meaning of the responses. For example, to support evaluation of the processes associated with post-construction mitigation and avoidance alternatives, one could use a log-linear analysis of contingency tables, in which stakeholders are asked to rate the extent to which DOT fulfilled mitigation commitments on a scale ranging from “completely” to “not at all.” Likewise, the USDOT studies could ask stakeholders to rate their perceptions of the impact of the de minimis process on 4(f) resources and use this, in combination with cost and/or time data, to help evaluate perceived efficiencies. One could also evaluate whether different types of stakeholders feel differently about whether the process is working or whether perceptions suggest that the process is working well in some contexts but not others.
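
As a deliberately simplified illustration of this kind of analysis, the sketch below (in Python) codes a hypothetical fulfillment rating by stakeholder type into a contingency table and applies a likelihood-ratio (G) test, the basic building block of a log-linear analysis. All categories and counts are invented for illustration:

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded survey responses: how fully were mitigation commitments
# fulfilled, by type of respondent (counts are invented)
responses = pd.DataFrame({
    "stakeholder": ["state_dot"] * 20 + ["shpo"] * 15 + ["local_official"] * 12,
    "fulfilled":   (["completely"] * 14 + ["partially"] * 5 + ["not_at_all"] * 1 +
                    ["completely"] * 7  + ["partially"] * 6 + ["not_at_all"] * 2 +
                    ["completely"] * 5  + ["partially"] * 5 + ["not_at_all"] * 2),
})

table = pd.crosstab(responses.stakeholder, responses.fulfilled)
print(table)

# Likelihood-ratio (G) test of independence between stakeholder type and rating
g, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(f"G = {g:.2f}, df = {dof}, p = {p:.3f}")

A small p-value would suggest that perceptions of fulfillment differ systematically by stakeholder type, a pattern that could then be explored further in the qualitative interviews.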

Box 1
Simple Example of Alternative Formats for Structuring Interview Questions

Version A: Open Response
How do you feel the use of the de minimis provision has affected the outcome for the 4(f) resource, compared with what the outcome might have been if a programmatic or regular 4(f) evaluation had been carried out?

Version B: Multiple Choice
Do you feel that, as a result of use of the de minimis provision, the outcome for protection of the 4(f) resource is

  1. better than it would have been if it had proceeded as a programmatic or regular 4(f) evaluation,
  2. the same as it would have been if it had proceeded as a programmatic or regular 4(f) evaluation, or
  3. worse than it would have been if it had proceeded as a programmatic or regular 4(f) evaluation?

Version C: Rating System
On a scale of 1-9, do you feel that use of the de minimis provisions has resulted in the 4(f) resource being worse off/the same/better than it would have otherwise been, if it had proceeded as a programmatic or regular 4(f) evaluation (1 = worst possible outcome, 9 = best possible outcome)?

Version C, in particular, could lend itself to a variety of options for comparing and analyzing responses. For example, statistical analyses could be carried out to compare the responses for different types of stakeholders, for different states, and for different types of 4(f) resources. Furthermore, the responses to this question could be arrayed in a matrix against the responses to analogous questions (perhaps about the impacts of the de minimis provision on cost outcomes), providing a wealth of useful information about the relative strengths, weaknesses, and trade-offs of the streamlining provisions. Figure 1 is a hypothetical example of what the resulting matrix could look like.
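
A matrix of the kind shown in Figure 1 can be generated directly from coded responses. The sketch below (in Python) cross-tabulates hypothetical cost and resource-outcome ratings (both on the 1-9 scale of Box 1) and shows how the same table can be split by state; all values are invented for illustration:

import pandas as pd

# Hypothetical paired ratings from Version C-style questions (1 = worst, 9 = best)
ratings = pd.DataFrame({
    "state":           ["State 1", "State 1", "State 2", "State 3", "State 3", "State 3"],
    "resource_rating": [9, 7, 4, 2, 5, 7],
    "cost_rating":     [1, 2, 7, 5, 5, 8],
})

# Rows: cost rating; columns: resource-outcome rating (compare Figure 1)
print(pd.crosstab(ratings.cost_rating, ratings.resource_rating))

# The same matrix can be split by state to compare patterns across states
print(pd.crosstab([ratings.state, ratings.cost_rating], ratings.resource_rating))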

The list of survey/interview questions in the USDOT draft study plan is a reasonable starting point but could be improved in a number of ways. It is not the committee's role to design the final questionnaires, but for illustrative purposes we have suggested some revisions and additions to USDOT's original list (see Enclosure A). These questions will likely need to be further sorted, for example, (a) among those pertaining to individual 4(f) projects and those pertaining to broad, general experiences and (b) among those directed to state DOT officials and those directed to SHPOs, local resource officials, and others.

USDOT would be wise to pre-test the survey questions on a small group of prospective respondents and revise any questions that these test audiences find to be unclear, difficult to answer, or obviously leading. We also strongly recommend sending out the survey questions well in advance of the interview dates, since many of the questions being asked require significant forethought and, in some cases, require significant data collection and evaluation efforts on the part of the respondent.

Finally, we suggest that in addition to the phone surveys and personal interviews, USDOT explore the idea of convening interview focus groups at events that bring together large numbers of relevant stakeholders. Conferences and meetings organized by the American Association of State Highway and Transportation Officials, the National Conference of State Historic Preservation Officers, and the National Recreation and Park Association are examples. Such convening events could provide uniquely valuable opportunities to seek input from a broad cross section of peer groups from around the country.

3.4.5 Planning case studies and site visits

The case studies and site visits will provide valuable opportunities to explore not only the post-construction impacts on the cases selected but also the entire end-to-end 4(f) project development process (including factors such as consultation, coordination, and evaluation of alternatives) to gain insights into the institutional and interpersonal dynamics and to probe deeper into complex or sensitive matters that may arise in the broad-based surveys. Given the time-consuming and labor-intensive nature of carrying out detailed case studies, however, they must be strategically planned. We concur with USDOT's general strategy of using the telephone survey results as a basis for identifying the most fruitful case study candidates. Meanwhile, the draft study plans should include more detailed forethought on the protocol for organizing the site visits (e.g., who will be interviewed, what questions will be asked, how the site visits will correspond to the development of case studies).

There are, of course, practical limits on the number of in-depth case studies that can be carried out. However, we are concerned with USDOT's suggestion that among the 11 de minimis cases for which construction has been completed, only a subset of these (perhaps five) may be examined as case studies. It may be difficult to draw sound conclusions from such a small sample of cases. We recommend that the full set of 11 completed cases be considered as possible case studies and that this set be augmented with additional case studies as needed (and as possible, within time and budgetary constraints) to ensure an adequately representative sample set. In identifying cases for this broader sample, we suggest focusing on states with a large, diverse set of de minimis cases to examine and a strong system of tracking and collecting data on 4(f) evaluations (from the period both before and after implementation of the streamlining provisions).

We note also the importance of planning the case studies in a way that will allow distinct processes for analyzing streamlining impacts on the two general classes of 4(f) resources: historic sites versus parks, recreation areas, and wildlife/waterfowl refuges. Given the significant statutory distinctions between these classes of resources, the evaluation of the latter should not be viewed as a mere variation on the theme of the evaluation of historic properties.

The interviews carried out during the site visits would be a good opportunity to explore one particularly sensitive question: whether the new streamlining provisions lead to situations of undue influence wherein, for example, SHPOs or local officials with jurisdiction face political pressure to concur with a state DOT's de minimis findings (or conversely, where DOT officials face exorbitant mitigation demands in exchange for a concurrence with de minimis findings). Such concerns could plausibly arise in cases where the relationship between the relevant DOT and 4(f) resource-responsible representatives is highly asymmetrical in terms of political power or knowledge about the 4(f) process and the transportation projects involved, and in cases where the state DOT and resource-responsible representatives involved both report to the same boss (e.g., a governor). In addition to interviewing SHPOs, THPOs, and local resource officials about their specific experiences with such matters, the relevant state DOT officials should be asked to explain exactly how they arrive at “no adverse effect” findings and whether any system of checks and balances has been implemented for these processes.

4. Key findings and recommendations

USDOT has made a reasonable start in designing its Phase I draft study plans; however, the committee finds that the plans fall short of the standards for a rigorous, well-designed analysis in several respects:

  • The plans fail to document a systematic approach to study design, such as interpreting the statutorily defined questions in terms of study objectives, defining and operationalizing key concepts and terms, and considering alternative evaluation designs.
  • The study plans rely too heavily on the collection of information reflecting unstructured subjective perceptions, without sufficient consideration of how to incorporate more rigorous, quantitative, and quasi-experimental analysis strategies.
  • The study plans lack sufficiently detailed strategies for ensuring that a representative sample of Section 4(f) cases is evaluated, that a full range of relevant stakeholders is engaged in the analyses, and that the surveys and site visits will elicit the needed information.

Given the inherently heterogeneous nature of the Section 4(f) evaluation process and the lack of a consistent set of baseline data, a truly experimental study that will lead to scientifically verifiable conclusions and allow USDOT to fully answer the questions posed by Congress does not appear possible. We suggest, however, that the study plans can be strengthened in a number of ways to help address the weaknesses identified above. We recommend the following as priority actions:

  • The study plans should begin with an explanation of how USDOT is interpreting the statutorily defined study questions and with clear definitions of key terms and concepts. This includes recognition of the need to define “success” beyond just time and cost savings, to include impacts on the protection of 4(f) resources.
  • The plans should clearly identify the objectives for the study, weigh alternative study designs that could be used to accomplish these objectives, and present a rationale for design choices.
  • The study plans should describe methodologies for ensuring that a sufficiently diverse set of Section 4(f) cases are examined, and that an appropriately representative and knowledgeable set of stakeholders are identified and engaged in the evaluation.
  • Vigorous efforts should be made to collect and evaluate whatever 4(f) project data are available from state DOT offices, including data from the time period before implementation of the streamlining provisions, in order to establish a baseline for comparison.
  • The surveys and interviews should include “structured” questions that are designed to collect information that is at least semi-measurable and intercomparable in nature.
  • Interview survey questions should be pre-tested to ensure the questions are clear and meaningful to the intended audience, and should be distributed sufficiently in advance to allow the respondents to gather the needed information.

Figure 1. Hypothetical example of a matrix analysis of responses to survey questions (see Box 1).

[Figure 1 shows a 9-by-9 matrix of hypothetical responses. Columns give the resource-outcome rating (1 through 9), grouped from Resource Worse Off through Resource Status Same to Resource Better Off; rows give the cost rating (1 through 9), grouped as Lower Cost (rows 1-3), Same Cost (rows 4-6), and Higher Cost (rows 7-9). Each star marks one project's pair of ratings: green stars denote projects in State 1, red stars denote projects in State 2, and white stars denote projects in State 3.]



Enclosure A
Illustrative Interview Questions

Explanatory notes:

  • This list is suggested for illustrative purposes only, not as a recommendation for a final, complete questionnaire. It includes both new questions proposed by the committee and select questions taken from the list originally proposed by USDOT in the draft study plan.
  • The list contains a mix of project-specific and general-experience questions, which will need to be more clearly segregated when interviews are conducted. Many of the project-specific questions (especially for projects that have completed construction) will need to be explored in depth as part of the site visits.
  • An asterisk indicates examples of questions that should be directed to stakeholders in addition to transportation agency officials (e.g., to SHPOs, local resource agency officials, historical and environmental protection groups).
  • The letter report (Section 3.4.4) includes a suggestion to employ “structured” survey and interview questions, but we leave it to the discretion of USDOT to determine exactly how and where these types of questions should be incorporated into the list below.

Baseline data

Prior to implementation of the de minimis impact provisions (in December 2005), what was the average number of Section 4(f) evaluations per year carried out in your state over the previous 5 years? How many of these, on average, were done as programmatic evaluations? As full 4(f) analyses?

Following the implementation of the de minimis impact provisions, how many Section 4(f) determinations have been made in your state? How many of these were done as programmatic evaluations? As full 4(f) analyses? How many of these cases have been designated as de minimis?

How many (if any) of your state's de minimis determinations have ultimately been overturned by USDOT as not satisfying de minimis designation criteria?

Section 4(f) resource outcomes

In projects where there was a use of Section 4(f) property, what was the process for determining whether the impact was de minimis? What criteria did you use to make the de minimis impact finding? [Specify the different processes/criteria used for different types of 4(f) resources.]

To what extent have mitigation measures played a role in arriving at determinations of de minimis impact?

Were mitigation measures planned for the purpose of minimizing impacts on the 4(f) property? To what extent have planned mitigation measures actually been implemented or committed? Describe any factors that may have prevented the implementation of these mitigation commitments.

Who generally defines the key “activities, features, and attributes” of the 4(f) properties? [Specify for different types of 4(f) resources.]
Have these key attributes been affected (positively or negatively) as a result of the de minimis process?

How do you define the users of a Section 4(f) property?

For projects where a de minimis finding was made and that have been constructed, has the use of the property changed as a result of the transportation project? If so, how?

* Has user satisfaction with the property changed as a result of the project? If so, how? How was this evaluated?

* On the basis of your experiences thus far, how do you feel the use of the de minimis provisions has generally affected the protection of Section 4(f) resources? Has it resulted in any unanticipated positive or negative impacts compared with the outcomes that would likely have resulted from a programmatic evaluation or regular 4(f) analysis?

Transportation project outcomes

For implemented projects that have had a de minimis finding, how did the use of de minimis impact provisions affect the outcome of the transportation project? Did it result in any unanticipated positive or negative impacts on the outcome of the project?

For projects that have had a de minimis finding (including projects that might not yet be implemented), did the project design change to achieve a de minimis finding? If so, how?

Were any specific avoidance, minimization, mitigation, or enhancement measures taken to achieve a de minimis finding?

On New Starts/Small Starts transit projects, has the cost-effectiveness constraint affected the inclusion of avoidance or minimization because of the additional project cost needed for these items? If so, has that influenced the use of de minimis?

Time and cost implications

If a de minimis finding had not been available for this project, would it have been evaluated as a programmatic evaluation or a regular 4(f) analysis?

If so, how would that have changed the time required to conduct the 4(f) process? For example, how much time was saved from

  • Eliminating the need for DOI review?
  • Reducing the FHWA/FTA review?
  • Eliminating FHWA/FTA legal sufficiency reviews?
  • Eliminating the need to design and evaluate alternatives?

How would that have changed the total time required to complete the environmental compliance process (EIS/EA/CE, including 404 permitting, Section 7)?

How would that have changed the cost of conducting the 4(f) process (including the impacts of eliminating the need to design and evaluate alternatives)? If possible, please identify where specific cost savings may have occurred.

* Have any of the time/cost savings achieved by FHWA/transit agency authorities occurred as a result of shifting these time/cost burdens to other stakeholders in the process (e.g., to SHPOs, local officials with jurisdiction)? If yes, please describe.

Time/cost specific to historic properties

What effect has relying on the Section 106 determination had on the time to reach a 4(f) determination?

How have costs changed by not duplicating the Section 106 determination?

How has the de minimis impact provision affected the time and costs of the coordination process (up/down, by how much)?

How much (if any) time was eliminated in the review process through a finding of “no adverse effect”?
If there was a decrease in the time for the review process, can this be attributed to the relationship between the no adverse effect determination and de minimis impact determination? Please explain.

Time/cost specific to parks, recreation areas, and refuges

How has the de minimis impact provision affected the time and costs of the coordination process (up/down, by how much)?

What time and costs have been associated with conducting public notice and public comment review?

How has the need for written concurrence from officials with jurisdiction affected time and costs?

Are there state laws protecting the resource that would affect the timeliness of this process?

Does the 6(f) process (required by the Land and Water Conservation Fund Act, 16 U.S.C. 4601-8(f)(3)) reduce the timeliness of the de minimis process?

General institutional/process issues

* Have the de minimis impact provisions changed the way the DOT or transit agency interacts with the relevant stakeholders (e.g., SHPOs, THPOs, officials with jurisdiction, and resource agencies)? Explain how.

* Do these stakeholders have sufficient knowledge about the relevant processes to participate in the de minimis impact process effectively?

* Have stakeholders been reluctant to engage in the de minimis impact process? Why? How have they expressed their reluctance?

If the de minimis impact process has not been used in your state, can you identify why (are there particular factors impeding or discouraging the use of this process)?

For de minimis impact cases, how are public notice requirements being fulfilled to ensure that the relevant stakeholders are involved?

Has the state DOT updated the public involvement procedures to include the de minimis impact process? How are transit agencies fulfilling the public involvement requirement?

Can you identify examples of public involvement affecting the outcomes of de minimis projects?

Have the de minimis provisions created any new challenges related to communicating with the general public about the process or outcomes of 4(f) evaluations?

* Have the new provisions led to any new concerns about political pressures on stakeholders (e.g., SHPOs, officials with jurisdiction) related to concurrence with de minimis determinations?

Have you experienced, as a result of the de minimis provisions, any new challenges related to documentation, particularly as it relates to the need to make NEPA documents understandable to the public? [This may be most applicable to situations where there is an individual Section 4(f) and a de minimis 4(f) in the same document.]

Are there specific types of information, guidance, or training that could be provided by USDOT to assist your agency in effectively implementing the new provisions? Are there other ways in which USDOT can assist your efforts to implement new provisions?



Enclosure B
Membership, Committee for a Review of U.S. Department of Transportation Study on
Implementation of Changes to the Section 4(f) Process

Michael D. Meyer, Chair
Professor, School of Civil and Environmental Engineering, Georgia Institute of Technology
Atlanta

Allyson Brooks
State Historic Preservation Officer, Department of Archeology and Historic Preservation
Olympia, Washington

Sarah C. Campbell
Capital Budget Coordinator, District of Columbia Council
Washington, D.C.

William R. Hauser
Administrator, Office of Stewardship and Compliance, New Hampshire Department of Transportation
Concord

Mary E. Ivey
Director, Office of Environment, New York State Department of Transportation
Albany

James C. Kozlowski
Associate Professor, School of Recreation, Health, and Tourism, George Mason University
Manassas, Virginia

Jeffrey Lerner
Director, Conservation Planning, Defenders of Wildlife
Washington, D.C.

Virginia McAfee
Principal and Group Manager, Jacobs, Carter and Burgess
Denver, Colorado

William R. Michaud
Principal, Environmental and Organizational Services, SRA International, Inc.
New Hartford, Connecticut

Joseph R. Trnka
Senior Cultural Resources Management Specialist, HDR, Inc.
Minneapolis, Minnesota

Paul Tufts
Environmental Program Specialist (Retired)
Flossmoor, Illinois

Jonathan E. Upchurch
Transportation Engineering Consultant, Grand Canyon National Park
Grand Canyon, Arizona



Enclosure C
Speakers and Guests at Committee Meeting
March 31- April 1, 2008

Cassandra Allwell, Volpe Center
Bethaney Bacher-Gresock, U.S. Department of Transportation, Federal Highway Administration
Carol Braegelmann, U.S. Department of Transportation, Federal Highway Administration
Joe Burns, U.S. Department of Agriculture, National Forest Service
Richard Dolesh, National Recreation and Park Association
Sharon Chan Edmiston, Volpe Center
Shannon Eggleston, American Association of State Highway and Transportation Officials/Standing Committee on the Environment
Andrea Ferster, Rails-to-Trails Conservancy
Gina Filosa, Volpe Center
Sean Furniss, U.S. Department of the Interior, Fish and Wildlife Service
Katry Harris, Advisory Council on Historic Preservation
Tim Hill, Ohio Department of Transportation
Jacob Hoogland, U.S. Department of the Interior, National Park Service
Betsy Merritt, National Trust for Historic Preservation
Joe Ossi, U.S. Department of Transportation, Federal Transit Administration
Amy Phillips, BNA Publications
Nancy Schamu, National Conference of State Historic Preservation Officers
Lamar Smith, U.S. Department of Transportation, Federal Highway Administration
Neel Vanikar, U.S. Department of Transportation, Federal Highway Administration



[1] “Non-equivalence” refers to experimental groups that are not randomly selected and may differ in ways other than just the independent variable being tested. This limits the confidence with which the results can be attributed to the independent variable.

[2] USDOT may wish to consider, for example, the “constructed counterfactual” approach developed as part of the “Systematic Evaluation of Environmental and Economic Results (SEEER)” method currently being applied by the U.S. Environmental Protection Agency and U.S. Department of the Interior, among others.

[3] The following are examples: (a) Trochim, W. 2001. The Research Methods Knowledge Base, 2nd ed. Atomic Dog Publishing. (b) Rossi, P. H., M. W. Lipsey, and H. E. Freeman. 1999. Evaluation: A Systematic Approach, 6th ed. Sage Publications. (c) Patton, M. Q. 2002. Qualitative Research and Evaluation Methods. Sage Publications. (d) Dillman, D. A. 1997. Mail and Internet Surveys: The Tailored Design Method, 2nd ed. John Wiley and Sons.