Section A1: The Scope of the Report
A1.1: A Focus on Public Institutions
In Leaders & Laggards, we chose to focus almost entirely on public institutions of higher education. While we recognize the important role that private, nonprofit institutions play in many states’ higher education portfolios, we analyzed public systems for two basic reasons. First, public two-year and four-year colleges educate the majority of postsecondary students. The latest data from the National Center for Education Statistics reveal that in 2010, 13.8 million of the 18.6 million undergraduates (74%) were enrolled in public institutions. At the four-year level, public enrollments made up about 6.5 million of the 10.4 million enrollments (62%), while the proportion at the two-year level was much larger (just under 7.3 million of 7.85 million, or 93%, of enrolled students were at public institutions).i
Second, state-level policymakers have the most direct control over public institutions in their state. State legislatures provide direct subsidies to state institutions of higher education, while state agencies set the rules and regulations that govern college and university behavior. Because we were focused on the policies states have developed to promote productivity, transparency, and student success, a focus on the institutions that are most likely to be shaped by those policies is warranted.
A1.2: Institutions Included and How They Were Categorized
For the data-driven metrics in areas 1 (Student Access and Success) and 2 (Efficiency and Cost-Effectiveness), within the set of public postsecondary institutions we chose to focus on four-year degree-granting, two-year degree-granting, and two-year non-degree-granting institutions. We excluded less-than-two-year institutions, which enroll just under 71,000 of the 13.8 million students enrolled in public postsecondary institutions and often include vocational high schools.
In contrast to some other state-by-state comparisons, we chose to include two-year, non-degree-granting institutions because they have become the subject of considerable national attention and a significant part of many states’ sub-baccalaureate portfolios.ii Including these institutions provides a more complete picture of the sub-baccalaureate sector in each state.
We excluded tribal colleges, service academies, and stand-alone institutions with a basic Carnegie classification as “special focus” medical schools or medical centers; Historically Black Colleges and Universities (HBCUs) were included. Because stand-alone medical centers receive large amounts of state money for medical research and other non-educational functions, we excluded them from the analysis where possible. Four-year institutions that house medical schools were included, as it is not possible to disaggregate the data across the various schools and colleges within a given institution. We also excluded the District of Columbia from the analysis, as it houses only one public four-year university and one public two-year community college.
Lastly, we disaggregated our measures across the four-year and two-year sectors where appropriate. To do so, we categorized public institutions according to the primary credential awarded rather than their formal designation as a “Public, four-year” or “Public, two-year” institution. Some state systems include institutions that are considered “four-year, primarily associate’s, public”: these institutions typically grant some bachelor’s (BA) degrees, but their primary focus is on sub-baccalaureate programs.iii Some existing efforts to compare states using data from the Integrated Postsecondary Education Data System (IPEDS) include these types of institutions in their analyses of four-year colleges.iv Given their explicit focus on sub-baccalaureate credentials, we included these institutions in our two-year college category.
In practice, this meant starting with all institutions in the “Public, four-year” and “Public, two-year” sectors and then subdividing those institutions according to the “institutional category” variable in IPEDS. Public institutions labeled “degree-granting, primarily baccalaureate or above” were included in our four-year analyses. Those labeled “degree-granting, not primarily baccalaureate or above,” “degree-granting, associate’s and certificates,” and “non-degree-granting, sub-baccalaureate” were included in the two-year analyses. In all, 81 non-tribal institutions classified as both “Public, four-year” and “degree-granting, not primarily baccalaureate or above” were included in our two-year category.
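To make the bucketing concrete, here is a minimal sketch of that logic in Python; it assumes the IPEDS sector and institutional-category fields have already been decoded into the descriptive labels quoted above (actual IPEDS extracts encode these as numeric codes, and the function name is ours):

```python
# Sector assignment per the rules described above. Labels are the decoded
# IPEDS strings; real IPEDS files encode these as numeric codes.
TWO_YEAR_CATEGORIES = {
    "Degree-granting, not primarily baccalaureate or above",
    "Degree-granting, associate's and certificates",
    "Non-degree-granting, sub-baccalaureate",
}

def assign_sector(sector: str, institutional_category: str) -> str | None:
    """Bucket a public institution into our four-year or two-year category."""
    if sector not in ("Public, four-year", "Public, two-year"):
        return None  # private institutions are out of scope for this report
    if institutional_category == "Degree-granting, primarily baccalaureate or above":
        return "four-year"
    if institutional_category in TWO_YEAR_CATEGORIES:
        return "two-year"
    return None  # excluded categories
```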
A1.3: Disaggregating Across Sectors
For areas 1, 2, 3 (Meeting Labor Market Demand) and 4 (Transparency and Accountability) of the report, we disaggregated results across the four-year and two-year sectors. For area 5 (Policy Environment) we reported one grade per state. For area 6, we reported separate grades for each state’s openness to innovative providers and for its encouragement of online learning.
Section A2: Weighting Credentials
Many of our metrics are “per-unit” measurements, where the unit is a completed degree or certificate. Higher education analysts face a dilemma in that these units (degree and certificate programs) often vary considerably in length and in the cost of providing instruction. Treating all credentials equally when calculating per-unit measures is problematic when comparing institutions or states that may have very different mixes of programs. At the two-year level, for instance, one institution may award mostly short-term certificates, while another produces mainly associate’s (AA) degrees. The problem is even more acute at institutions that award both baccalaureate and advanced graduate degrees in research, medicine, or law. Treating all completions equally ignores clear differences in program length and cost of delivery.
Researchers have developed various ways to account for differences in the mix of credentials produced.v One approach is to attach different weights to different types of credentials and calculate per-unit measures based on a weighted sum of completions. For instance, in a 2009 Delta Cost Project paper, Patrick Kelly developed a weighting scheme for measuring productivity that weighted credentials according to their labor market value. Kelly described the logic of weighting degrees succinctly: “If two state higher education systems have similar funding levels and credentials awarded but one state produces more bachelor’s degrees and the other state produces more certificates, the state that awards more bachelor’s degrees would be considered more ‘productive’ relative to the other state.”vi We agree with this logic but opted for a simpler approach: we weighted degrees according to their normal time to degree and, for credentials above the baccalaureate, their cost of delivery.
At the undergraduate level, our weighting technique is straightforward. Credentials are weighted according to how their normal time to completion compares with the reference category (for the two-year sector, the reference category is the associate’s degree; for the four-year sector, the reference category is the bachelor’s degree). For certificates of uncertain length (those denoted “less than one year” or “more than one year but less than two”), we assumed a time to degree in the middle of that range: six months for the former, 18 months for the latter. In contrast to other state-by-state comparisons, we included certificates of less than one year but weighted them less heavily than an associate’s degree to acknowledge differences in program length.
Using this framework, undergraduate degrees and certificates were weighted at the following values.
Two-year metric (completions per 100 FTE)
- AA degrees: weighted at 1
- Certificates of less than one year: weighted at ¼ (assuming six months to completion)
- Certificates of at least one year but less than two: weighted at ¾ (assuming 18 months to completion)
- Certificates of at least two years but less than four: weighted at 1
- BA degrees: weighted at 2
Four-year metric (completions per 100 FTE)
- BA degrees: weighted at 1
- AA degrees: weighted at ½
- Certificates of less than one year: weighted at 1/8
- Certificates of at least one year but less than two: weighted at 3/8
- Certificates of at least two years but less than four: weighted at ½
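Expressed as code, these undergraduate weights amount to one lookup table per sector. A minimal Python sketch (the credential keys are our own shorthand) shows how a weighted completion count would be computed:

```python
# Undergraduate credential weights, keyed by sector, per the lists above.
UNDERGRAD_WEIGHTS = {
    "two-year": {   # reference category: associate's degree
        "associate": 1.0,
        "cert_lt_1yr": 0.25,
        "cert_1_to_2yr": 0.75,
        "cert_2_to_4yr": 1.0,
        "bachelor": 2.0,
    },
    "four-year": {  # reference category: bachelor's degree
        "bachelor": 1.0,
        "associate": 0.5,
        "cert_lt_1yr": 0.125,
        "cert_1_to_2yr": 0.375,
        "cert_2_to_4yr": 0.5,
    },
}

def weighted_completions(counts: dict[str, int], sector: str) -> float:
    """Weighted sum of completions for one institution-year."""
    weights = UNDERGRAD_WEIGHTS[sector]
    return sum(weights[kind] * n for kind, n in counts.items())
```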
Weighting graduate degrees is more challenging, as programs differ in both the length of required coursework and the cost of delivery. Because of data limitations, national estimates of the cost per credit hour at various levels of postsecondary education are nonexistent. An ongoing State Higher Education Executive Officers (SHEEO) study of four states has found that undergraduate credit hours are less expensive to deliver than graduate-level credit hours. For instance, the 2010 SHEEO study found that the cost of an upper-division undergraduate credit hour was about half that of “graduate 1”-level courses, which make up about 12% to 15% of credit hours. The Illinois Board of Higher Education’s annual cost study found that in 2010, the cost of upper-division undergraduate credit hours was about 60% of the cost of “graduate 1” credit hours and 40% of that for “graduate 2” credit hours.vii
While undergraduate courses are less expensive, bachelor’s degrees typically require more course credits. Compared with the four-year, 120-credit bachelor’s degree, master’s programs typically require between one-quarter and one-half as many credits.viii Doctoral programs typically require 90 credits above the BA, plus independent dissertation research. Professional practice doctoral degrees (previously categorized as “first professional” in IPEDS) include programs like medicine, dentistry, and optometry that require four or more years of coursework, as well as three-year law degrees.
We developed a simple weighting scheme that reflects this information about the cost of graduate instruction and typical program lengths. For simplicity, we assumed that graduate instruction is, on average, about twice as expensive to deliver as upper-division undergraduate courses. The weights are therefore equivalent to two times the ratio of the coursework required for a graduate degree to the program length of a BA. While we acknowledge that some disciplines are more expensive to deliver than others, we did not distinguish between program types.
For baccalaureate and above:
- BA degrees are weighted at 1 (reference category).
- MA degrees are weighted at ¾ of a BA degree (twice as expensive to provide but between ¼ and ½ as long).
- Research PhDs are weighted at 1.5 times a BA degree (twice as expensive, with three-quarters as many credits).
- Professional practice doctorates (also known as “first professional” degrees) are weighted at 2 times the BA degree.
- Graduate certificates (post-baccalaureate and post-master’s) are weighted at ½.
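These graduate weights extend the four-year lookup table from the sketch above; each weight is roughly two times the ratio of required credits to the 120-credit BA. A small worked example follows (category names are our own shorthand):

```python
# Graduate-level weights for the four-year sector, per the list above.
# Each weight is roughly 2 x (credits required / 120 BA credits).
GRADUATE_WEIGHTS = {
    "master": 0.75,                 # 2 x (roughly 1/4 to 1/2 of BA length)
    "research_doctorate": 1.5,      # 2 x (90/120 credits)
    "professional_doctorate": 2.0,  # medicine, dentistry, law, etc.
    "graduate_certificate": 0.5,
}

# Example: one institution-year's graduate completions.
grad_counts = {"master": 400, "research_doctorate": 60, "graduate_certificate": 20}
weighted_grad = sum(GRADUATE_WEIGHTS[k] * n for k, n in grad_counts.items())
print(weighted_grad)  # 400*0.75 + 60*1.5 + 20*0.5 = 400.0
```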
We readily acknowledge that this weighting technique is not perfect. In the absence of more granular cost data or a consensual method for weighting degrees, however, we believe this approach is reasonable and simple to understand. Other researchers should experiment with alternative weighting schemes.
Section A3: How the Grades Were Tabulated
We calculated grades using three different grading techniques.
For areas 1, 2, and 3, we used the following technique:
First, for each metric we rated states on a five-point scale according to how many standard deviations above or below the national mean the state’s performance fell. We used a standard grading curve to assign ratings:
- states that were 1.5 or more standard deviations above the mean received the top rating (5 points);
- states that were between 0.5 and 1.5 standard deviations above the mean received 4 points;
- states that were within 0.5 standard deviations of the mean (above or below) received 3 points;
- states that were between 0.5 and 1.5 standard deviations below the mean received 2 points; and
- states that were 1.5 or more standard deviations below the mean received the lowest rating (1 point).
To prevent outliers from skewing the mean and standard deviation, we eliminated the highest and lowest values when calculating each. We rounded standardized differences to the second decimal place and assigned ratings as described above. To tabulate a total grade for the area, we summed the individual ratings, weighting each metric equally, to produce a total score. We then assigned overall grades on the basis of how this final score compared to the average total score, using the same standard deviation scale.ix
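A sketch of this rating step in Python. Two details are our assumptions rather than stated in the text: band edges are treated as shown (e.g., a rounded value of exactly 1.50 earns 5 points), and the sample standard deviation is used:

```python
import statistics

def rate_states(values: dict[str, float]) -> dict[str, int]:
    """Assign 1-5 point ratings from standardized distance to the mean."""
    trimmed = sorted(values.values())[1:-1]  # drop the highest and lowest
    mean = statistics.mean(trimmed)
    sd = statistics.stdev(trimmed)
    ratings = {}
    for state, value in values.items():
        z = round((value - mean) / sd, 2)  # rounded to two decimals
        if z >= 1.5:
            ratings[state] = 5
        elif z >= 0.5:
            ratings[state] = 4
        elif z > -0.5:
            ratings[state] = 3
        elif z > -1.5:
            ratings[state] = 2
        else:
            ratings[state] = 1
    return ratings
```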
In areas 4 and 5, we rated states on whether they had particular policies and practices in place. To ensure that the top ratings were reserved for states with policies that most closely met our criteria, we rated states on an absolute scale rather than how they compared to one another.
To do so, we generated coding criteria for each individual metric (each criterion is described in detail below). We then rated each state according to how many elements of those criteria the state had fulfilled. This coding process produced a set of scales of different lengths, ranging from a three-point scale for measuring learning outcomes to a seven-point scale for online learning. Within a given area, we combined those scales by converting them to decimals, weighting each of them equally, and then summing those weighted scores across the metrics. Multiplying that sum by 100 produced scores ranging from zero to 100. We then assigned overall grades based on quintiles (scores of 0–19 received an F, 20–39: D, 40–59: C, 60–79: B, 80–100: A). To grade support for online learning, we simply used the seven-point scale described below.
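For areas 4 and 5, this aggregation reduces to an equal-weight average of each metric’s fraction of possible points, scaled to 0–100 and mapped to a quintile letter. A minimal sketch (function names are ours):

```python
def area_score(points: list[int], max_points: list[int]) -> float:
    """Equal-weight combination of metric scales, scaled to 0-100."""
    n = len(points)
    return 100 * sum((p / m) / n for p, m in zip(points, max_points))

def letter_grade(score: float) -> str:
    """Quintile letters: 0-19 F, 20-39 D, 40-59 C, 60-79 B, 80-100 A."""
    return "FDCBA"[min(int(score // 20), 4)]
```

Using the Minnesota four-year example worked out under Area 4 below, `area_score([4, 2, 3, 3], [5, 3, 4, 3])` returns roughly 80.4, and `letter_grade(80.4)` returns "A".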
For area 6, openness to innovative providers, Eduventures used the following grading technique:
Eduventures rated each state on three criteria: regulatory jurisdiction, financial burden, and approval process burden (the criteria are described in detail below). For each criterion, they rated states on a scale from 40% to 100%. To calculate an overall grade, they calculated a grade point average, weighting each of the criteria equally. The grades were assigned as follows:
- Overall score of 90%–100%: A
- Overall score of 80%–89%: B
- Overall score of 70%–79%: C
- Overall score of 60%–69%: D
- Overall score of 59% or below: F
In some cases, Eduventures analysts made slight adjustments in letter grade assignments for certain states when appropriate. For example, a state with a numerical score of 88% or 89% may have received a letter grade of A, based on Eduventures’ knowledge of that state’s application process or fee structure.
Section A4: Description of Data Sources and Methodology for Individual Metrics
Area 1: Student Access and Success
1. Percentage of undergraduate students receiving Pell Grants.
Data Source: The Postsecondary Education Opportunity database compiled by the Mortenson Seminar on Public Policy Analysis of Opportunity for Postsecondary Education (available via subscription at: www.postsecondary.org).
Postsecondary.org compiles a database of Pell Grant recipient data as a percentage of undergraduate enrollment by state and control of institution. The data are drawn from the federal Pell Grant End of the Year Report issued by the Department of Education as well as the IPEDS database.
Calculation: We summed data on public four-year and two-year institutions from the two most recent years available in this database, 2008–2009 and 2009–2010, and calculated the percentage of undergraduates receiving Pell Grants across the two years.
Numerator: Number of Pell Grant Recipients in Public four-year (two-year) institutions.
Denominator: Undergraduate Enrollments in Public four-year (two-year) institutions.
Result: Percentage of undergraduate students receiving Pell Grants.
2. Retention Rates
Data Source: IPEDS full-time student retention data. IPEDS calculates full-time retention rates by having institutions report the full-time, adjusted cohort of students from the prior year and the number of students from that cohort who were enrolled again in the current year. We used IPEDS retention data from the three most recent years available: 2008, 2009, and 2010.
Calculation: To calculate the overall retention rate, we summed the full-time, adjusted cohorts and the number of returning students from across the three cohorts for each institution, then summed those numbers across all institutions in the state (in a given category). The retention rate is simply the percentage of students from the combined adjusted cohort who were enrolled the following fall.
Numerator: Students from the full-time adjusted fall cohort (2007, 2008, 2009) enrolled in fall 2008, 2009, and 2010.
Denominator: Full-time, adjusted fall cohorts for 2007, 2008, and 2009.
Result: Overall retention rate for full-time students across the three cohorts.
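Note that rates here are pooled by summing numerators and denominators before dividing, rather than by averaging yearly rates. A minimal sketch of that arithmetic (the same pattern applies to the completion rates below):

```python
def pooled_rate(pairs: list[tuple[int, int]]) -> float:
    """Overall percentage from (numerator, adjusted cohort) pairs pooled
    across cohort years and institutions."""
    numerator = sum(n for n, _ in pairs)
    cohort = sum(d for _, d in pairs)
    return 100 * numerator / cohort

# Example, three cohorts at one institution:
# pooled_rate([(800, 1000), (750, 1000), (900, 1200)]) -> 76.5625
```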
3. Completion Rates
Data Source: IPEDS “Student Right to Know” graduation rate across three cohorts. Federal law requires institutions to report the number of first-time, full-time, degree- or certificate-seeking students who complete their programs within 150% of the normal time to degree. As was the case with retention rates, the graduation rate is simply the number of first-time, full-time students in the adjusted cohort who received a degree or certificate within the allotted time (six years for BA seekers, 150% time for AA and certificate seekers).
For the analysis of four-year colleges, we used the bachelor’s degree completion rate: the percentage of first-time, full-time bachelor’s-degree-seeking students who finished bachelor’s degrees within six years. For the analysis of two-year institutions, we used the overall graduation rate (the number of degree or certificate completers within 150% of normal time, divided by the adjusted cohort of first-time, full-time students).x We used data from the three most recent years available: 2008, 2009, and 2010.
Numerator: Number of first-time, full-time, degree- or certificate-seeking students completing a credential (BA for four-years; any degree or certificate for two-years) within 150% of normal time, summed across the three cohorts.
Denominator: The adjusted cohort of first-time, full-time, degree- or certificate-seeking students (BA seekers for four-year colleges; all certificate or degree seekers for two-year colleges), summed across the three cohorts.
Result: Overall completion rate across the three cohorts.
4. Credentials produced per 100 undergraduate FTE
Data Source: IPEDS data on undergraduate completions and reported 12-month full-time equivalent (FTE) undergraduate enrollment. We used data from the three most recent years available: 2007–2008, 2008–2009, and 2009–2010.
Calculation: We divided our three-year, weighted sum of undergraduate degrees and certificates by the three-year sum of reported full-time equivalent undergraduate enrollment and multiplied the quotient by 100.
Four-year metric:
Numerator: ((# of BAs) + 0.5 * (# of AAs) + 0.125 * (# of certificates of less than one year) + 0.375 * (# of certificates of at least one year, less than two) + 0.5 * (# of certificates of at least two years, less than four))
Denominator: Reported 12-month full-time equivalent undergraduate enrollment.
Two-year metric:
Numerator: ((# of AAs) + 2 * (# of BAs) + 0.25 * (# of certificates of less than one year) + 0.75 * (# of certificates of at least one year, less than two) + (# of certificates of at least two years, less than four))
Denominator: Reported 12-month full-time equivalent undergraduate enrollment.
Result: Weighted credentials produced per 100 undergraduate FTE.
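Putting the pieces together, a sketch of the metric that reuses the `weighted_completions` helper from the weighting section (input shapes are illustrative):

```python
def completions_per_100_fte(yearly_data, sector: str) -> float:
    """Weighted credentials per 100 undergraduate FTE, pooled over three years.

    yearly_data: list of (credential_counts, undergrad_fte) tuples, one per
    year, where credential_counts feeds weighted_completions() sketched above.
    """
    total_weighted = sum(weighted_completions(counts, sector)
                         for counts, _ in yearly_data)
    total_fte = sum(fte for _, fte in yearly_data)
    return 100 * total_weighted / total_fte
```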
5. Risk-adjusted completion rates
Data Source: IPEDS data on the percentage of first-time, full-time undergraduates receiving Pell Grants and the 150% graduation rate for the total cohort. We used data from the three most recent years available. For graduation rate data: 2008, 2009, 2010. For percent Pell: 2007–08, 2008–09, 2009–10.
Calculation: In an ideal world, we would measure how well institutions and states serve low-income students by asking how many Pell Grant recipients graduate within 150% time. Unfortunately, institutions are not currently obligated to report this metric to IPEDS.
In the absence of a better metric, we used a simple bivariate regression model to calculate a “risk-adjusted” measure of completion rates. Essentially, this measure asks how well a given institution does (in terms of its graduation rate) after controlling for how many Pell Grant recipients are in its incoming class.
To calculate this measure, we ran a bivariate regression model that regressed the graduation rate on the percentage of Pell Grant recipients for each of the three years. We looked at the relationship cross-sectionally (i.e., the 2010 graduation rate was regressed on the 2009–2010 Pell Grant percentage). We then calculated a standardized residual, or “z-score,” for each institution. In simplest terms, this z-score measures how far an institution’s actual graduation rate falls above or below the rate the model predicts given its Pell Grant percentage.
These regressions produced three z-scores for each institution, one for each year. We averaged these to produce a mean z-score. To account for institutions of different sizes, we weighted each mean z-score by the full-time equivalent undergraduate enrollment for 2009–2010 at that institution, and summed those weighted average z-scores across all institutions in a given category in a state. We then calculated a weighted average z-score for the entire state:
Numerator: Sum((average z-score) * (Undergraduate FTE)).
Denominator: Sum(Undergraduate FTE).
Result: Weighted average z-score for the state.
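A hedged sketch of the full calculation, using NumPy’s least-squares polynomial fit for the bivariate regression (the report does not specify its software; residuals are standardized by their sample standard deviation here):

```python
import numpy as np

def risk_adjusted_state_score(pell_pct: np.ndarray,
                              grad_rate: np.ndarray,
                              fte: np.ndarray) -> float:
    """FTE-weighted mean standardized residual for one state's sector.

    pell_pct and grad_rate have one row per institution and one column per
    year; fte holds 2009-10 undergraduate FTE per institution.
    """
    yearly_z = []
    for year in range(grad_rate.shape[1]):
        x, y = pell_pct[:, year], grad_rate[:, year]
        slope, intercept = np.polyfit(x, y, 1)      # bivariate OLS fit
        resid = y - (intercept + slope * x)          # actual minus predicted
        yearly_z.append(resid / resid.std(ddof=1))   # standardize residuals
    mean_z = np.mean(yearly_z, axis=0)               # average over the years
    return float(np.average(mean_z, weights=fte))    # FTE-weighted state score
```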
Area 2: Efficiency and Cost-Effectiveness
Inflation: All spending figures were converted to 2010 dollars via the Consumer Price Index (CPI). We used the reported CPI from the Federal Reserve Bank of Minneapolis to generate conversion factors for 2008 and 2009 expenditures. For 2008 dollars, we used a conversion factor of 0.9872. For 2009 dollars, the conversion factor was 0.9835.xi
Regional Differences in Cost of Labor: The cost of labor, goods, and services varies considerably across states. Failure to adjust for these differences can penalize states where costs are high and reward states where costs are lower. We adjusted estimates of per-unit costs to account for state-by-state differences using the Comparable Wage Index (CWI), a tool developed by education finance experts to compare per-pupil funding and teacher salaries in K–12 education across states and districts. NCES issued an official version of the CWI in 2006 for this purpose. We used an updated version of the CWI for 2010 provided by Dr. Lori Taylor of the Bush School of Government and Public Service. Dr. Taylor notes that the 2010 version was not subject to the exhaustive NCES peer review process.
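A small sketch of both adjustments. The CPI factors are those reported above; we assume nominal dollars are divided by the year’s factor to express them in 2010 dollars, and that a per-unit cost is divided by the state’s CWI value (both directions are our reading, not spelled out in the text):

```python
# Conversion factors reported above (2010 is the base year).
CPI_FACTOR = {2008: 0.9872, 2009: 0.9835, 2010: 1.0}

def to_2010_dollars(amount: float, year: int) -> float:
    """Inflate a nominal figure to 2010 dollars (assumed: divide by factor)."""
    return amount / CPI_FACTOR[year]

def cwi_adjust(cost_per_unit: float, state_cwi: float) -> float:
    """Normalize a per-unit cost for regional differences in labor costs."""
    return cost_per_unit / state_cwi
```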
1. Cost Per Completion
Data Source: The Delta Cost Project on Postsecondary Costs uses information from the IPEDS finance survey to produce yearly estimates of “education and related” (“E and R”) expenses. We used the E and R data for the two most recent years available: 2008 and 2009.xiii We merged those data with IPEDS completion data for 2007–2008 and 2008–2009. Because these expenditure data are sometimes aggregated across multiple institutions, data were merged by hand to ensure that all costs and degrees were properly allocated.xiv
Calculation: We summed education and related expenses across 2008 and 2009 for each institution, then summed across all institutions in a given category in a state. This provides a state-level estimate of the total amount spent on education and related expenses across the two years. We then divided that total by our weighted sum of degrees produced in 2007–08 and 2008–09 (including degrees above the baccalaureate at the four-year level; see the weights above).
Numerator: Sum of Delta Cost Project’s estimates of education and related expenses, 2008 and 2009.
Denominator: Weighted sum of degrees awarded, 2007–08, 2008–09.xv
Result: Cost per weighted completion in a given state.
We then adjusted this raw estimate of cost per completion using the CWI.
2. State and Local Funding Per Completion (also: State, Local, and Tuition Funding per completion).
Data Source: IPEDS finance and IPEDS completion surveys, 2007–08, 2008–09, 2009–10. To obtain state and local funding figures, we summed the following revenue categories in IPEDS for each institution for each year: state appropriations, local appropriations, state operating grants and contracts, local operating grants and contracts, state non-operating grants and contracts, and local non-operating grants and contracts (for institutions using Governmental Accounting Standards Board principles).xvi To examine state, local, and tuition funding per completion, we simply added reported tuition revenue to this total. As with the cost-per-degree analysis, not all institutions report finance data independently; some do so through a partner or parent institution, and care must be taken to ensure that funding and degrees are allocated properly.xvii
Calculation: We summed the revenue categories for each institution, then summed those revenues across the three years for each institution (after adjusting for inflation). We then added those three-year revenues across all institutions in a given category in a given state. We divided the three-year total by the weighted sum of degrees across the three years (including degrees above the baccalaureate at the four-year level; see previously mentioned weights).
Numerator: Sum of state and local appropriations, operating grants and contracts, and nonoperating grants and contracts, 2007–08, 2008–09, 2009–10.
Denominator: Weighted sum of degrees awarded, 2007–08, 2008–09, 2009–10.xviii
Result: State and local funding per weighted completion.
We then adjusted this raw estimate of funding per completion using the CWI.
Adding Tuition Revenue:
Numerator: Sum of tuition revenue, state and local appropriations, operating grants and contracts, and nonoperating grants and contracts, 2007–08, 2008–09, 2009–2010.
Denominator: Weighted sum of degrees awarded, 2007–08, 2008–09, 2009–10.
Result: State, local, and tuition revenue per weighted completion.
3. Combined Measure
Data Source: The standardized differences on cost per degree and state and local funding per degree. The combined measure rewards states based on how far above or below the mean (in terms of standard deviations) the state’s cost per completion and state and local funding per completion fall.
Calculation: This combined measure ran from a maximum of eight points to a minimum of negative eight (-8) points. States could receive a maximum of four points on each cost-effectiveness measure: four points if their cost per completion was 1 or more standard deviations below the mean cost per completion, three if it was at least ¾ of a standard deviation below the mean, two if it was at least ½ a standard deviation below, and one point if it was at least ¼ of a standard deviation below. The scale was identical for state and local funding per completion. States lost points if a cost-effectiveness measure was above the mean, using the mirror image of that scale: -1 point if the cost per completion was more than ¼ of a standard deviation above the average cost per completion, and so on.
Result: A combined rating for each state ranging from 8 to -8 that captures how well they perform on both dimensions of cost-effectiveness.
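A sketch of the combined measure. The band edges (e.g., whether exactly ¼ of a standard deviation earns a point) are our reading of the scale, and values within ¼ of a standard deviation of the mean earn zero:

```python
def measure_points(z: float) -> int:
    """Points for one measure from its standardized difference; negative z
    (below-average cost) is rewarded, positive z penalized."""
    for cutoff, pts in [(-1.0, 4), (-0.75, 3), (-0.5, 2), (-0.25, 1)]:
        if z <= cutoff:
            return pts
    for cutoff, pts in [(1.0, -4), (0.75, -3), (0.5, -2), (0.25, -1)]:
        if z >= cutoff:
            return pts
    return 0  # within a quarter standard deviation of the mean

def combined_measure(z_cost: float, z_funding: float) -> int:
    """Combined rating from -8 to 8 across both cost-effectiveness measures."""
    return measure_points(z_cost) + measure_points(z_funding)
```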
Area 3: Meeting Labor Market Demand
General Notes: To estimate employment and wage gaps between workers at different attainment levels, we used data from the American Community Survey (ACS). We obtained data for the 2008–2010 ACS from the Integrated Public Use Microdata Series (IPUMS), a data clearinghouse run by the Minnesota Population Center at the University of Minnesota. We excluded active duty military from the analysis.
We recognize that these measures do not directly capture the quality of the credentials or the degree of the “match” between what public institutions of higher education are producing and the demands of employers. But in the absence of better data on labor market outcomes, these are the best data available.
1. Wage Gap Between Degree Holders and High School Graduates (overall and for youngest workers)
Data Source: We used the ACS variables that capture educational attainment to isolate those respondents with a high school diploma or equivalent, those with an associate’s degree, and those with a bachelor’s degree.xix We then calculated the median wage for workers in a given credential category (using the variable incwage). We included only workers who reported non-zero income and who reported typically working more than 35 hours a week. We calculated the median wages for each age group of interest: all workers 25–64, and the subset of the youngest workers, 25–34.
Calculation: Using these estimates of the median wage, we calculated both the raw wage gap between respondents of different educational attainment as well as a percentage that captures how much higher the median wage is for a degree-holder than it is for a high school graduate. We rated states on that percentage for both age groups (all workers, and the youngest cohort).
Bachelor’s degree gap:
Numerator: Median wage for workers with a bachelor’s degree.
Denominator: Median wage for workers with a high school diploma or equivalent.
Associate’s degree gap:
Numerator: Median wage for workers with an associate’s degree.
Denominator: Median wage for workers with a high school diploma or equivalent.
Result: Multiplied by 100, each quotient expresses the degree holder’s median wage as a percentage of the median wage for a high school graduate. The higher the ratio, the larger the earnings premium for degree holders relative to their peers with a high school diploma.
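A sketch of the wage-gap calculation over ACS-style microdata. The row keys mimic IPUMS variable names (incwage, uhrswork, age) but are stand-ins, and the educ levels are our own labels:

```python
import statistics

def wage_premium(rows: list[dict], credential: str,
                 age_range: tuple[int, int] = (25, 64)) -> float:
    """Degree holders' median wage as a percentage of the high school median.

    rows: list of dicts with keys 'educ', 'incwage', 'uhrswork', and 'age',
    mimicking the IPUMS variables; only full-time workers (more than 35
    usual hours) with non-zero wage income are included.
    """
    def median_wage(level: str) -> float:
        return statistics.median(
            r["incwage"] for r in rows
            if r["educ"] == level
            and r["incwage"] > 0
            and r["uhrswork"] > 35
            and age_range[0] <= r["age"] <= age_range[1]
        )
    return 100 * median_wage(credential) / median_wage("high_school")
```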
2. Unemployment Ratio Between High School Graduates and Degree Holders
Data Source: We used the ACS variables that capture labor market status (whether or not respondents are in the active labor market) and employment status to generate estimates of the unemployment rates for various levels of educational attainment. We included only those who report that they are in the labor force. The unemployment rate for a given level of educational attainment is simply the percentage of workers who report being unemployed out of the civilian population that is in the labor force.xx
Calculation: We calculated both the raw gap between the unemployment rates for those with a high school diploma and each group of degree-holders as well as the ratio of the unemployment rates for those groups. We used the ratio because we did not want to penalize states that had low unemployment across the board. If the high school unemployment rate is low in a given state (as it is in some of the western states), this precludes that state from having as large a gap in the unemployment rate as another state where unemployment is high among those with a high school diploma. Rating states on the raw gap would penalize states that have a tight labor market across the board. Instead, the ratio captures how much more likely those with a high school diploma are to be unemployed and, conversely, how much better workers with a college degree fare in a given state.
Bachelor’s degree ratio:
Numerator: Unemployment rate for those with a high school diploma or equivalent.
Denominator: Unemployment rate for those with a bachelor’s degree.
Associate’s degree ratio:
Numerator: Unemployment rate for those with a high school diploma or equivalent.
Denominator: Unemployment rate for those with an associate’s degree.
Result: This ratio provides an estimate of how much more likely high school graduates are to be unemployed than those with a postsecondary degree. The higher the value, the less likely respondents with an undergraduate degree are to be unemployed when compared to their high school-educated peers. It is a proxy for the employment premium that goes along with having a postsecondary degree.
Area 4: Transparency and Public Accountability
General Notes: This area was divided into four metrics regarding state policies and practices around transparency in higher education. Specifically, we looked at two basic areas. First, does the state measure student outcomes beyond degree completion, such as student learning outcomes or student labor market success? Second, does the state provide information to the public and to prospective students about the performance of individual institutions and the system as a whole? To determine which states had policies and practices in place, we contacted officials in each state to ask specifically what efforts the state was making in these areas. Each official was asked the following questions:
1. Does your state provide or submit an annual report that includes student outcome information?
   - Please provide a link to the report here:
2. Does your state have benchmarks to gauge progress in higher education?
   - If so, please provide a link to state benchmarks here:
3. Does your state have a policy in place that routinely makes comparable information about costs and outcomes at state institutions of higher education available to prospective students and families?
   - If so, please provide a link to the policy here:
4. Whether or not a policy is in place (see question 3), does the state provide a resource to inform and help consumers, such as prospective families, make a choice about where to attend college? (If this is the same report as your state’s accountability report, please indicate as much.)
   - If so, please provide a link to the consumer information here:
5. Does your state fund institutions of higher education based in part on outcomes metrics like course and degree completion? (See Area 5.)
6. Does your state link student-level data on postsecondary education to labor market outcomes?
7. Does your state require state colleges and universities to measure student learning in a way that is comparable across institutions?
A total of 41 states responded to our questionnaire.
We then followed up by searching various online sources for evidence of these policies, including any relevant higher education websites (such as a state higher education office, university or community college system, governor’s office, and legislative records). We also consulted previous studies and experts in the field to fact-check our ratings and help ensure that we hadn’t missed any policies or practices.xxi Note: given how fluid state policies can be, assessing these questions is a bit of a moving target. In general, we focused on whether states had formally enacted and implemented these policies by mid-April 2012.
We rated the states on four metrics in this area. A description of the grading for each of the four individual metrics follows, in addition to how we turned the raw numbers into letter grades for the final report.
1. Transparency: Public Accountability
Data source and measurement: This metric looked at what kinds of information and data state systems collect and make public to inform key stakeholders about the productivity, performance, and costs of the state’s higher education system. We were ecumenical in the type of resources that we examined. Some states had detailed annual reports that reported on multiple outcomes; others had websites or dashboards that provided information on a particular outcome measure; still others had institution-by-institution performance reports. We rated whatever resources we could find.
We looked at four- and two-year institutions separately for this metric. In the event that a state had multiple entities within a given sector (e.g., the University of Minnesota system and the Minnesota State Colleges and Universities system), we looked at the reporting in both.
We rated state policies, practices, and resources on the following criteria:
- Does the state regularly report on basic student outcomes like graduation or retention rates? (1 point)xxii
- Does the state collect and report on additional student outcomes, such as:
  - student learning outcomes, student engagement, licensure exam passage rates, or other measures of student learning or engagement? (1 point)
  - data on graduates’ experiences in the labor market? (1 point)
- Does the state report on measures of institutional efficiency and cost-effectiveness? (1 point)
- Does the state use benchmarks to place performance data in context and measure progress? (Up to 1 point: states that used external benchmarks, such as the performance of other states or the nation as a whole, received a full point; states that compared performance to goals set by the state received ½ point. Every state reported year-to-year performance, which served as our baseline benchmark.)
Result: A maximum of five points each for four- and two-year systems.
Note: For the second criterion, measuring student outcomes beyond graduation, we took a deliberately broader view than in our more specific measures of labor market outcomes and learning described below. For example, we awarded a point if the state provided licensure passage rates, results from the National Survey of Student Engagement, or alumni surveys of post-graduation employment and wages. This is in contrast to the discrete measurements of student learning and labor market linkages described below, for which we applied a more restrictive standard. The point here was to credit states that are making some attempt to report on student outcomes. We also cast a broad net for measures of efficiency, giving credit for a variety of measures, including cost per degree, savings from operational reforms, and graduates with excess credits. Importantly, we did not consider reporting on revenues and expenditures alone to fulfill this criterion.
2. Transparency: Consumer Information
This metric assessed state efforts to provide prospective students and families with useful information about the costs and student outcomes at the state’s colleges and universities. The goal was to reward states that had made an effort to help consumers make informed investment decisions. Note: we saw consumer information as a distinct piece of transparency policy. Here, we attempted to see which states provide consumers with essential, comparable information on costs and student outcomes at institutions to better inform their decisions. Again, we examined any official resources that we could find on this question.
We looked at both four- and two-year institutions separately for this metric, with states receiving points for the following criteria:
- Does the state have a website geared specifically towards students and parents with information on basic student outcomes like graduation rates? (1 point)
- Does this consumer information source include additional student outcomes, such as labor market outcomes or licensure passage rates? (1 point)
- Does the state include information about the net price of state institutions (e.g., a “net price calculator”)? (1 point)
Result: A maximum of three points each for four- and two-year systems.
3. Linking Labor Market Outcomes to Postsecondary Programs
This metric looked at states that have made a concerted effort to develop longitudinal data systems that track students from institutions of higher education into the workforce in order to measure the employment outcomes of graduates. We made a key distinction between institutions that survey graduates on their employment and wages (which is fairly common) and more precise programs that link unit records across postsecondary and employment systems, calculate labor market outcomes like employment rates and wages, and make those data public. For states that have done so, we also rated the extent to which they disaggregated these data. The highest marks were reserved for states that link labor market outcomes to particular programs at particular institutions. We looked at four- and two-year institutions separately for this metric, with states receiving points for the following criteria:
- Does the state track student performance in the labor market and make those data public? (1 point)
- Does the state connect labor market outcomes to individual institutions? (1 point)
- Does the state connect labor market outcomes to particular programs of study or majors? (1 point)
- Does the state disaggregate data across both institutions and programs, such that stakeholders can see the labor market outcomes for a particular major at a particular institution? (1 point)
Result: A maximum of four points each for four- and two-year systems.
4. Measuring Student Learning Outcomes
This metric looked at which states were making an effort to measure student learning outcomes in a systematic fashion across institutions. Note that states were not graded on the use of a particular assessment, nor on how they used the assessment (e.g., whether it was mandated for graduation). In some cases, states mandated that institutions use a particular national assessment or choose from a set of available assessments (such as the Collegiate Learning Assessment or the Collegiate Assessment of Academic Proficiency). In other cases, the state simply required each institution to assess learning in whatever way the institution decided. Similarly, some states mandate that institutions assess students before they can graduate (South Dakota), or use the results as part of a funding formula, while other states use assessments only for internal monitoring purposes. We did not pass judgment on which particular assessment a state used or how the state decided to use it.
However, given our emphasis on transparency and comparability, we did reward states using an assessment that allows for comparisons across both institutions and states. Likewise, we rewarded states that made assessment results public to stakeholders and consumers. We looked at both four- and two-year institutions separately for this metric, with states receiving points for the following criteria:
- Does the state or state system have a policy of measuring student learning in a systematic fashion across institutions? (1 point)
- Does the state policy call for the use of a national assessment that allows for comparison across institutions and states? (1 point)
- Does the state make the assessment results public? (1 point)
Result: A maximum of three points each for four- and two-year systems.
Once points were tallied for each metric in this area, we aggregated those scales, weighting each metric equally. To do this, we divided a state’s score on an individual metric by the total possible points on that metric, multiplied that figure by one-quarter, and then added the resulting weighted scores together. The result was multiplied by 100, generating a point total on a 0–100 scale. We then assigned letter grades using quintiles, where:
- A: 100–80
- B: 79–60
- C: 59–40
- D: 39–20
- F: 19–0
For example, for Minnesota four-year institutions, the state received four points for Transparency: Public Accountability; two points for Transparency: Consumer Information; three points for Labor Market Outcomes; and three points for Student Learning Outcomes. Each of these was divided by the total possible points for that metric, weighted equally as a quarter of the grade, and added together:
(4/5)*(1/4) + (2/3)*(1/4) + (3/4)*(1/4) + (3/3)*(1/4) = 0.804; 0.804 * 100 = 80.4
Minnesota’s raw score was 80.4, so the state received an A for four-year institutions in this area.
Area 5: Policy Environment
General Notes: This area was divided into three aspects of state policy aimed at fostering student access and success and encouraging postsecondary productivity. We looked at three areas: state-sanctioned goals for higher education, outcomes-based funding, and credit transfer and articulation. To determine state policies, we searched various sources online for evidence of these policies, including any relevant higher education websites (such as a state higher education office, university or community college system, governor’s office, and legislative records). In the case of outcomes-based funding, we included a question on the survey to state officials asking if the state funded institutions in part on the basis of outcomes metrics like course and degree completion.
Because state goals and credit transfer policies are public-facing policies, they were more readily identifiable. Moreover, to our mind they should be easy for stakeholders to locate.
Note: As was the case in Area 4, given the fluidity of state policies, we focused on whether states had formally enacted and implemented these policies by mid-April 2012.
1. State Goals for Higher Education
This metric looked at which states had espoused specific goals for their higher education systems. Like the public accountability area, we were ecumenical in the types of statements we included here: state goals, a strategic vision, or a higher education master plan. The key was whether it was a tangible effort to establish clear, publicly-espoused goals. For many states, goals for both four- and two-year institutions were contained in the same document. In the event they were not, we searched for separate documents for both systems and looked at both to award points.
States were rated using the following criteria:
- Does the state have a public set of goals focused on student outcomes? (Maximum of two points: 1 point each for the four-year and two-year systems)
- Does the state express its goals in terms of concrete, empirical targets (e.g., a 60% attainment rate by 2025)? (Maximum of two points: 1 point each for the four-year and two-year systems)
- Does the set of state goals include goals that go beyond basic student outcomes (e.g., student learning outcomes or student labor market outcomes)? (1 point)
- Does the set of state goals include goals for institutional and system-level efficiency (e.g., cost per degree)? (1 point)xxiii
Result: a maximum of six points.
2. Outcomes-Based Funding
This metric measured which states were tying a portion of an institution’s budget to various outcomes, such as course or degree completion, and not merely to enrollment. Because we acknowledge the potential for outcomes-based funding to harm access goals, we also measured whether the state’s policies included safeguards to ensure that institutions continue to enroll underrepresented and low-income students. States received points for the following criteria:
- Does the state have an outcomes-based funding system? (1 point)
- Is the funding that is tied to outcomes a component of an institution’s base funding, or is it an “add-on” that institutions can earn on top of base funding? (1 point for base funding)
- Does the funding formula include safeguards to ensure institutions continue to enroll traditionally underrepresented students (e.g., do states provide an extra reward for graduating low-income students)? (1 point)
Result: A maximum of three points each for four- and two-year systems (for a maximum of six total points).
3. Credit Transfer and Articulation
This metric looked at which states were making a concerted effort to facilitate the transfer of credit between institutions within a state, regardless of the sending and receiving institution (e.g., a student could transfer from a four-year institution to another four-year institution, or from a two-year to a four-year institution, etc.).
The main issue in credit transfer is uncertainty about which credits will transfer; we rewarded states that had enacted clear policies and expectations around the transferability of credits. For instance, states can identify a set of core courses that are guaranteed to transfer across any institution, ensure that students earning an associate’s degree can transfer to a four-year college with junior standing, and develop common course numbering to remove uncertainty. Our goal was to identify and reward states that are leading the pack in facilitating the transfer of credit.
States received points for the following criteria:
- Does the state have a formal policy governing articulation and credit transfer? (1 point)
- Does the policy permit a student to transfer an associate’s degree from a community college in full and enter with junior standing at a four-year institution? (1 point)
- Does the policy also permit students to transfer individual courses, or is transferability limited to blocks of courses (e.g., a full associate’s degree or a set of predefined general education courses)? (1 point if individual classes transfer)
- Does the state specifically identify which courses transfer between campuses? (1 point)
- Does the state have a common course numbering system? (1 point)
Result: A maximum of five points.
Once points were tallied for each metric, we graded the states by converting the rating to a decimal, weighting each of these scale scores equally, and summing the weighted scores across the metrics (identical to Area 4: Transparency and Public Accountability). In this case, each metric counted for one-third of the state’s overall grade. The result was multiplied by 100 to get a point total on a 0–100 scale.
For example, for Kansas, the state received five points for State Goals; two points for Outcomes-Based Funding; and four points for Articulation and Credit Transfer. Each of these was divided by the total possible points for that metric, weighted equally as one-third of the grade, and added together:
(5/6)*(1/3) + (2/6)*(1/3) + (4/5)*(1/3) = 0.656; 0.656 * 100 = 65.6
Kansas’ raw score was 65.6.
Letter grades were again assigned on a quintile system, where:
- A: 100–80
- B: 79–60
- C: 59–40
- D: 39–20
- F: 19–0
Kansas received a B in this area.
Area 6: Innovation
General Notes: This area was designed to identify states that are encouraging innovative higher education models. We looked at two pieces of the innovation equation. First, have states encouraged the development and use of online delivery at their public institutions? Second, have states erected regulatory barriers to innovative providers who wish to serve students within their borders? For the first question, we developed the criteria described below. For the second, the Institute for a Competitive Workforce contracted with Eduventures, a higher education consulting firm, to assess state regulatory environments.
1. Support for Online Learning
This metric identified states that had set a goal to expand online learning in their public systems and those that had made an effort to facilitate access to the online offerings (both degree programs and individual courses) available across their campuses. For state goals, we used the same documents gathered for the state goals metric in Area 5 (described previously). To determine which states were facilitating access to online learning, we asked a simple question: which states have developed a clearinghouse that provides information about, and access to, the online course and degree offerings that are available? To find these resources, we searched state higher education websites (such as a state higher education office or university or community college system) to identify portals that aggregated online offerings across campuses.
For this metric, we graded four- and two-year institutions together. In many cases, states had a single portal that aggregated all online courses at both types of institutions, as well as a single statewide goal for online learning. However, we are aware that these systems are often governed independently, with their own policies, set of goals, and, in this case, online portals. As such, in the event that a given state portal only covered one institutional level, we searched for other online learning resources available that covered the other level.
States received points for the following criteria:
- Has the state developed a goal to promote online learning? (1 point)
- Is the goal expressed using an empirical target (e.g., to increase the number of online degrees by 50%)? (1 point)
Encouraging Access to Online Learning
- Does the state have a portal that provides information on the online programs and courses available across institutions? (Up to 1 point: ½ point if there is a portal for four-year institutions and ½ point if there is a portal for two-year institutions; a single portal covering both types of institutions receives the full point)
- Do the online learning offerings include full degree programs? (up to 1 point, ½ a point for four-year institutions and ½ a point for two-year institutions)
- Do the online offerings include individual courses? (up to 1 point, ½ a point for four-year institutions and ½ a point for two-year institutions)
- Has the state enacted policies that allow students to take online courses from other state institutions and be certain that those credits will transfer to a home campus? Are there clear expectations and agreements as to which online courses from which institutions will transfer? (Up to 2 points: 1 point for four-year institutions and 1 point for two-year institutions)
TOTAL: a maximum of seven points
To convert from a raw score to a letter grade, we simply converted the fractions to percentages (e.g., 7/7 points is a 100%, 6/7 points is an 85.7%, etc.) and assigned grades on a quintile system like Areas 4 and 5, where:
- A: 100–80
- B: 79–60
- C: 59–40
- D: 39–20
- F: 19–0
2. Openness to Innovative Providers
Data Source: To gauge the openness of state regulatory environments, we worked with Eduventures, a firm that specializes in advising institutions of higher education on regulatory issues. Eduventures looked at three characteristics: regulatory jurisdiction, financial burden, and approval process burden. Here is how Eduventures described their criteria and processes.
Each U.S. state has been assigned a letter grade based on the regulatory climate for higher education institutions operating in that state.
The assessment of a state’s regulatory climate was based on the requirements faced by a postsecondary, degree-granting institution of higher education. In most but not all states, non-degree-granting postsecondary schools face a similar regulatory climate.
Although some states regulate institutions domiciled in their state differently than out-of-state schools delivering education in their state, most do not at this time. The letter grades assigned reflect regulatory developments involving cross-border activity, but also represent the overall regulatory climate affecting higher education institutions in most states.
Although the important role regulation plays in the protection of consumers is recognized and valued, no effort has been made to measure or incorporate this value here.
This grading of state regulatory climate does not take into account tax codes that may affect for-profit or nonprofit institutions, nor does it capture state regulations with regard to solicitation and fundraising by nonprofits.
Grades are based on the simplified Grade Point Average system listed below, although Eduventures has made slight adjustments in letter grade assignments for certain states when appropriate. For example, a state with a numerical score of 88% or 89% may have received a letter grade of A, based on Eduventures’ knowledge of that state’s application process or fee structure.
- A: 90% and higher
- B: 80%–89%
- C: 70%–79%
- D: 60%–69%
- F: 59% or lower
A three-factor scoring rubric was designed by Eduventures to assess the overall regulatory climate faced by postsecondary, degree-granting institutions of higher education. Each state earned a separate, numerical score for each of the three factors. These three scores were then averaged to produce a final numerical grade. The following items were measured, in most cases on a relative scale, for each state:
- Regulatory Jurisdiction
- Financial Burden
- Approval Process Burden
Regulatory Jurisdiction—This score was determined based on the institutional activities that require state oversight and approval. States care in varying degrees about these activities, which include advertising to state residents, having recruiters and conducting recruiting in the state, hiring state residents as faculty, requiring students residing in the state to complete internships and practicums as part of a degree program, maintaining physical space within the state, or simply enrolling state residents. These factors have varying impact on the quality of higher education delivery, and some of them are much more integral to a school’s program and ability to innovate. For example, the presence of recruiting agents in any particular state does not directly affect the quality or innovation of the education delivered to residents of that state, and can be withdrawn without impacting either. Internships and practicums have a much greater impact on the quality of an educational program, so there is a higher “cost” for schools when these activities trigger state regulation. A state received a high score (100%) if there were no institutional activities that would require the state’s oversight. High-scoring states rely instead on accreditors and “home” states to ensure that institutions are delivering quality education. A low score (40%) was assigned if the state’s oversight was triggered simply by an institution enrolling students in that state.
100%: None, if no brick-and-mortar presence or listed activities
80%: Recruiting agents/activity in the state
60%: Hiring of faculty in the state
50%: Internship, practicum, clinical, or student teaching in the state
40%: Enrollment/delivery of education to any person in the state
Financial Burden—This score was developed by taking the average of three separate financial scores based on: (1) fees for an institution’s initial approval to operate in a state, (2) fees for initial approval of an institution’s degree programs, and (3) fees for maintaining or renewing the institution’s approvals in that state. In each part, the score was based on the state’s variation from the median fee level.
100%: None, or less than $500
80%: Less than the median
60%: More than the median
40%: Most expensive outliers
Fees in each of these categories can vary within a state depending on whether the institution is accredited and whether it offers degrees. The baseline for analysis in this scoring system was an accredited, degree-granting, nonprofit institution; however, extreme variations were taken into consideration in the final scoring. For example, if two states had a similar fee structure for accredited, degree-granting, nonprofit institutions, but one of the states had significantly higher fees for unaccredited institutions, that state may have received a slightly lower overall Financial Burden score.
Variation also occurs in the timing of renewal fees. Some states require annual or biennial reauthorization (and fee submittal); others only require reauthorization every five years. This category was scored based on what the annual fee would be. For example, if a state required a $5,000 renewal fee every five years, and there was some question about whether that state would be an outlier or just above the median fee, the annual fee (in this case, $1,000) was used to compare the state to the median.
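A minimal sketch of this annualization and median comparison follows. The function names are ours, and the cutoff for the “most expensive outliers” band (here, three times the median) is an assumption; the report does not specify how outliers were identified.

```python
from statistics import median

def annualized_fee(total_fee: float, cycle_years: int) -> float:
    """Convert a renewal fee charged every `cycle_years` years to a per-year amount."""
    return total_fee / cycle_years

def renewal_fee_score(state_fee_per_year: float,
                      all_fees_per_year: list[float],
                      outlier_multiple: float = 3.0) -> int:
    """Score one state's annualized renewal fee against the national median.

    The 100/80/60/40 bands mirror the scale above; the outlier cutoff
    is an assumed threshold, not one stated in the report.
    """
    med = median(all_fees_per_year)
    if state_fee_per_year < 500:
        return 100
    if state_fee_per_year < med:
        return 80
    if state_fee_per_year <= outlier_multiple * med:
        return 60
    return 40

# Example: a $5,000 fee due every five years is compared as $1,000 per year.
fees = [0, 250, 1000, 1500, 2500, 12000]
print(renewal_fee_score(annualized_fee(5000, 5), fees))  # -> 80
```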
Approval Process Burden—This score was determined by the amount and type of information required of institutions applying for authorization in each state combined with the time and effort required to undergo authorization.
100%: No action needed/no application required
80%: Simple application with basic information required
60%: “Typical” application requiring names, addresses, and backgrounds of staff/faculty/owners; program information; facility, library, and student services; financials; organization; and student records
50%: Extensive application required (detailed data on personnel, programs, or both, e.g., Social Security numbers and resumes for staff/faculty, syllabi for all courses), including details such as outcomes or program demand data
40%: Extensive application details required (personnel, program) plus an additional burden (e.g., in-person attendance at an orientation or meeting, circulating program proposals to all in-state institutions, or submitting applications to multiple state departments)
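Putting the pieces together, a minimal sketch of the final grade computation, assuming a simple unweighted average of the three factor scores as described above:

```python
def final_numerical_grade(jurisdiction: float, financial: float, approval: float) -> float:
    """Average the three factor scores (each 0-100) into a final numerical grade."""
    return (jurisdiction + financial + approval) / 3

# A state scoring 80 on jurisdiction, 60 on financial burden, and 60 on
# approval process burden averages 66.7%, which maps to a D on the scale
# above before any manual adjustment.
score = final_numerical_grade(80, 60, 60)  # 66.66...
```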
- See Institute of Education Sciences (2012), “First Look: Enrollment in Postsecondary Institutions, Fall 2010; Financial Statistics, Fiscal Year 2010; and Graduation Rates, Selected Cohorts, 2002–07.” U.S. Department of Education, National Center for Education Statistics, NCES 2012-280.
- For instance, the Chronicle of Higher Education’s excellent new resource, College Completion: Who graduates from college, who doesn’t, and why it matters, does not include two-year, non-degree granting institutions in its analysis of public two-year colleges. Available at: http://collegecompletion.chronicle.com/
- For instance, Florida’s network of “colleges” (e.g., Miami Dade College, St. Petersburg College), several of Ohio’s branch campuses (e.g., Kent State-Geauga, Ohio University-Chillicothe, Wright State-), Indiana’s Vincennes University, and others are classified as “Public, four-year, primarily associate’s.”
- See Chronicle of Higher Education, College Completion: Who graduates from college, who doesn’t, and why it matters.
- The State Higher Education Executive Officers (SHEEO) uses an index that measures “enrollment mix” to account for the fact that the distribution of enrollments across different types of institutions (research universities, master’s colleges, community colleges, and so on) varies considerably across states. States with high community college enrollment would have an “enrollment mix index” below one, while those with large research university enrollments would have a higher index.
- Patrick Kelly (2009), “The Dreaded ‘P Word’: An examination of productivity in postsecondary education.” Washington, DC: The Delta Cost Project. http://www.deltacostproject.org/resources/pdf/Kelly07-09_WP.pdf. p. 10
- Illinois Board of Higher Education (2012), “FY 2010 Cost Study Summary Tables.” Available at: http://www.ibhe.state.il.us/Data Bank/costStudies/default.asp
- The IPEDS definition of an MA degree is “an award that requires the successful completion of a program of study of at least the full-time equivalent of 1 but not more than 2 academic years of work beyond the bachelor’s degree.”
- In the event the standard deviation scale produced few top or bottom grades, we also examined the raw sum of the standardized differences across the metrics to identify additional leaders or laggards.
- Using the overall graduation rate ensured that we provided credit to the “four-year, primarily associate’s degree-granting” colleges for their degree-seeking students who completed bachelor’s degrees.
- The Consumer Price Index was 218.1 in 2010, 214.5 in 2009, and 215.3 in 2008. See Federal Reserve Bank of Minneapolis: http://www.minneapolisfed.org/community_education/teacher/calc/hist1913.cfm
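Assuming these index values were used to express earlier-year figures in 2010 dollars, a 2008 amount would be multiplied by 218.1/215.3 ≈ 1.013 and a 2009 amount by 218.1/214.5 ≈ 1.017.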
- See NCES (2006), “Comparable Wage Index.” Washington, DC: Department of Education. Available at: http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2006865.
- Some institutions are not individually reported in the Delta Cost Project’s longitudinal dataset, which can complicate efforts to merge them directly with IPEDS data. The Delta Cost Project’s online analysis tool, “Trends in College Spending Online” (TCS Online), allows users to generate institution-level estimates of education and related (E&R) expenses per completion for most institutions in a given state (reported in 2009 dollars). For the four-year and two-year degree-granting institutions, we used TCS Online to obtain E&R-per-completion estimates, which we then multiplied by the number of completions in a given year to yield overall E&R spending. We then merged these estimates with IPEDS completions data. For the two-year, non-degree-granting institutions (not reported in TCS Online), we obtained E&R estimates from the longitudinal dataset.
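A minimal sketch of this computation, with illustrative unitids and column names (the actual TCS Online and IPEDS extracts are laid out differently):

```python
import pandas as pd

# Illustrative unitids and column names; not the real extract layouts.
tcs = pd.DataFrame({
    "unitid": [111111, 222222],
    "er_per_completion": [35000.0, 42000.0],  # E&R spending per completion, 2009 dollars
})
ipeds = pd.DataFrame({
    "unitid": [111111, 222222],
    "completions": [1200, 3400],
})

merged = tcs.merge(ipeds, on="unitid")
# Overall E&R spending = per-completion estimate x completions in that year.
merged["er_total"] = merged["er_per_completion"] * merged["completions"]
```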
- For instance, in some cases the education and related expenses for a two-year college are aggregated with the data for an allied four-year college. Thankfully, the vigilant researchers at the Delta Cost Project have noted where this is the case, so we were able to account correctly for costs and degrees: we made sure that all degrees and certificates completed were attributed to the college for which the education and related expense data were reported.
For two-year institutions, the weighted sum of degrees for a given year =
(# of AA’s) +
(# of certificates of at least two but less than four years) +
0.75 × (# of certificates of at least one but less than two years) +
0.25 × (# of certificates of less than one year) +
2 × (# of BA’s)
For four-year institutions, the weighted sum of degrees for a given year =
(# of BA’s) +
0.5 × (# of AA’s) +
0.5 × (# of certificates of at least two but less than four years) +
0.125 × (# of certificates of less than one year) +
0.375 × (# of certificates of at least one but less than two years) +
1.5 × (# of research PhD’s and PhD’s in “other”) +
2 × (# of first-professional degrees and PhD’s in professional practice)
IPEDS shifted its label for “first-professional degrees” to “PhD—professional practice” between 2008–09 and 2009–10. Both types of degrees are weighted at 2.
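A minimal sketch of these weighted sums in Python (function and parameter names are ours):

```python
def weighted_degrees_two_year(aa, cert_2to4, cert_1to2, cert_lt1, ba):
    """Weighted credential count for a two-year institution (weights from this note)."""
    return aa + cert_2to4 + 0.75 * cert_1to2 + 0.25 * cert_lt1 + 2 * ba

def weighted_degrees_four_year(ba, aa, cert_2to4, cert_lt1, cert_1to2,
                               phd_research_other, first_prof_and_practice):
    """Weighted credential count for a four-year institution (weights from this note)."""
    return (ba + 0.5 * aa + 0.5 * cert_2to4 + 0.125 * cert_lt1
            + 0.375 * cert_1to2 + 1.5 * phd_research_other
            + 2 * first_prof_and_practice)
```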
- For schools that adhere to FASB, we summed: state appropriations, local appropriations, state grants and contracts, and local grants and contracts (with tuition added to that sum for the analysis of state, local, and tuition funding per degree).
- An example from Missouri’s system of two-year colleges illustrates the point. St. Louis Community College consists of four campuses (Florissant Valley, Forest Park, Meramec, and Wildwood). Those campuses report completions to IPEDS independently, but they do not report finance data, and their revenue fields contain zeros. Instead, the system reports its revenues via its administrative unit, “St. Louis Community College—Central Office” (IPEDS unit id: 179283; classified as an administrative unit). A similar reporting issue applies to Metropolitan Community College in Missouri, where finance data are reported through Metropolitan Community College—Kansas City, an administrative unit. Using IPEDS to create a dataset of institutions that grant credentials thus risks counting the completions without correctly accounting for the public funding that pays for them.
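A minimal sketch of the kind of roll-up this requires, assuming a hand-built crosswalk from campus unitids to the finance-reporting unit (the campus unitids below are hypothetical; 179283 is the Central Office unitid cited above):

```python
# Hypothetical crosswalk from campus unitids to the unitid that reports
# finance data; a real crosswalk would be built from IPEDS parent/child
# flags or assembled by hand.
finance_parent = {
    999001: 179283,  # e.g., a St. Louis Community College campus
    999002: 179283,
}

def roll_up_completions(completions_by_unitid: dict[int, int]) -> dict[int, int]:
    """Attribute campus-level completions to the unit that reports revenues."""
    rolled: dict[int, int] = {}
    for unitid, count in completions_by_unitid.items():
        parent = finance_parent.get(unitid, unitid)
        rolled[parent] = rolled.get(parent, 0) + count
    return rolled
```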
- See note 16 for a description.
- Unfortunately, the 2008-2010 ACS did not have variables that capture whether an individual received a postsecondary certificate.
- We counted workers classified as “has job, not working” among the employed.
- We found the following resources particularly useful:
Kevin Carey and Chad Aldeman, “Ready to Assemble: A Model State Higher Education Accountability System,” Education Sector, 2009; Stacey Zis, Marianne Boeke, and Peter Ewell, “State Policies on the Assessment of Student Learning Outcomes: Results of a Fifty-State Inventory,” NCHEMS, 2010; Government Accountability Office, “Many States Collect Graduates’ Employment Information, but Clearer Guidance on Student Privacy Is Needed,” September 2010; Lumina Foundation for Higher Education, “Critical Connections: Linking States’ Unit Record Systems to Track Student Progress,” January 2007; State Higher Education Executive Officers, “Strong Foundations: The State of State Postsecondary Data Systems,” July 2010.
In addition, a series of conversations with higher education analysts who have studied these policies was particularly helpful. Specifically, we thank Natasha Jankowski, research analyst at the National Institute for Learning Outcomes Assessment.
- In the event that a state provided prospective students with a direct link to “College Navigator” pages for the institutions in its state, we gave the state credit both for having a resource listing basic student outcomes and for providing access to net price information.
- For these final two items in the criteria, states that had separate goals for their two-year and four-year systems received credit if either set of goals fulfilled the criteria in question.