
Higher Ed Impact

Predicting Student Success: When SAT and GPA Are Not Enough


Historically, admissions officers and enrollment managers assessing a student's potential for strong academic performance and persistence have focused on cognitive potential, measured most frequently by past academic performance (high school GPA) and standardized test scores (SAT, ACT). Yet there is a growing awareness among enrollment managers, driven and confirmed by recent research, that these two measures, taken by themselves, offer limited predictive accuracy.

Test scores and high school GPA account for only about 20 percent of the variability we see in student outcomes. Some students with a respectable GPA and high scores underperform academically in college and drop out, while others who appear academically under-prepared go on to perform well. This means that some of the students you are losing are in good academic standing; they don't appear to be "at-risk" students. To ensure that programming to improve student success is effective, we need better predictors of student success.

Paul Gore, University of Utah

To learn more, we turned to Paul Gore, who serves as the student success special projects coordinator at the University of Utah in addition to his roles as professor, training director for graduate counseling programs, and director of institutional research. Gore has also served as the director of the Career Transitions Research Department at ACT in Iowa City.

Citing recent research, Gore emphasizes the need to adopt assessments of non-cognitive skills, in order to:

  • Identify those students who enter your institution with an average GPA and average test scores, but who are nevertheless likely to be at risk.
  • Identify those first-generation, academically under-prepared students who did not perform as well on the standardized test, but who are engaged, resilient, confident, driven to succeed, and who have the psychological constitution to thrive under stress.
  • Develop, based on this information, more targeted and effective student services programming to support students' academic success and persistence.

A TOOLKIT FOR MEASURING AND IMPROVING ACADEMIC "GRIT" IN FIRST-YEAR STUDENTS

Join us online on October 30, 2013, and hear from Paul Gore on how to promote persistence by helping first-year students develop an educational action plan to improve four key non-cognitive skills:

  • Self-efficacy
  • Academic engagement
  • Educational commitment
  • Campus engagement

Following this webcast you will receive a toolkit with resources to help you use non-cognitive assessment data with first-year students, including checklists and a sample goal-setting chart.

The Critical Non-Cognitive Variables to Assess

Directing attention to a 2004 meta-analysis, Gore notes six non-cognitive variables that appear to have the greatest impact on an institution's ability to identify those students who are likely to succeed. These are not the only non-cognitive variables that affect student success (for example, communication skills are also important), but these are the six variables that, when assessed alongside traditional cognitive variables, offer an incremental increase in predictive accuracy.
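The "incremental increase in predictive accuracy" Gore describes is what researchers call incremental validity: the additional outcome variance a new predictor explains beyond the predictors already in the model. The sketch below illustrates the idea with entirely simulated data (the variable names, coefficients, and effect sizes are illustrative assumptions, not figures from the research Gore cites): a regression on cognitive measures alone explains a modest share of college GPA variance, and adding a non-cognitive measure raises it.

```python
import numpy as np

# Simulated illustration of incremental validity. All data and
# coefficients here are made up for demonstration purposes.
rng = np.random.default_rng(0)
n = 5000
test_score = rng.normal(size=n)   # standardized test score (z-scored)
hs_gpa = rng.normal(size=n)       # high school GPA (z-scored)
engagement = rng.normal(size=n)   # a non-cognitive measure (z-scored)

# College GPA depends on all three predictors, plus substantial noise.
college_gpa = (0.3 * test_score + 0.3 * hs_gpa + 0.4 * engagement
               + rng.normal(scale=1.0, size=n))

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cognitive = r_squared([test_score, hs_gpa], college_gpa)
r2_full = r_squared([test_score, hs_gpa, engagement], college_gpa)
print(f"Cognitive predictors only: R^2 = {r2_cognitive:.2f}")
print(f"Adding non-cognitive:      R^2 = {r2_full:.2f} "
      f"(incremental: {r2_full - r2_cognitive:+.2f})")
```

In this simulation the cognitive-only model lands in the neighborhood of the roughly 20 percent figure quoted above, and the non-cognitive measure adds predictive power on top of it; the point is the shape of the comparison, not the specific numbers.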

The first two variables are predictors of academic performance:

  • Academic engagement or academic conscientiousness: in other words, how seriously does the student take the business of being a student? Does the student turn in assignments on time? Attend class diligently? Ask for help when needed?
  • Academic efficacy: the student's belief and confidence in their ability to achieve key academic milestones (such as the confidence to complete a research paper with a high degree of quality, or to complete the core classes with a B average or better, or their confidence in their ability to choose a major that will be right for them)

The next two variables are predictors of academic persistence:

  • Educational commitment: This refers to a student's level of understanding of why they are in college. Students with a high level of educational commitment are not attending college merely because it is "what I do next" after high school, or simply in order to land a job or improve their quality of life; these students have a more complex understanding of the benefits of their higher education and are more likely to resist threats to their academic persistence. (On an assessment instrument, a sample question that would help measure educational commitment would be: "If I were offered a good job, I would leave college..." If a student answers "Strongly Agree," that response is not a predictor of persistence -- even though the student may show high academic performance.)
  • Campus engagement: This is the intent or desire to become involved in extracurricular or cocurricular activities. Does the student show interest in taking a leadership role in a student organization, or participating in service learning opportunities, intramural sports, or other programs outside of the classroom?

Gore notes that the final two variables tend to provoke the most controversy, as they assess emotional intelligence and emotional development. Yet they are often key predictors of both academic performance and academic persistence:

  • Resiliency: How well does the student respond to stress? Do small setbacks throw the student "off track" emotionally, or are they able to draw on their support network and their own coping skills to manage that stress and proceed toward their goals?
  • Social comfort: Gore notes that "social comfort is related to student outcomes in a quadratic way -- a little bit of social comfort is a good thing, while a lot may be less likely to serve a student well, as this may distract their attention from academic and cocurricular pursuits." Assessing social comfort involves asking whether students make friends easily, work well in groups, and enjoy engaging with others. Gore adds that many high-performing students are introverts, and social comfort is not a prerequisite for student success -- but it is a variable that, when present, increases the chances of academic persistence.

Non-Cognitive Assessments

We asked Gore to direct us to some of the most prominent non-cognitive assessments available, and he offered the following list. These assessments were initially deployed as instruments that students would take at orientation or shortly after their arrival on campus. "Now, however," Gore emphasizes, "these instruments are often integrated into larger, comprehensive systems of student support."

  • Student Strengths Inventory (SSI). Currently owned and supported by Campus Labs, this instrument measures the six key non-cognitive variables and forms part of a larger interface between students and advisors.
  • College Student Inventory (CSI), developed by Noel-Levitz.
  • Personal Potential Index (PPI), developed by Educational Testing Service (ETS). Interestingly, Gore notes that ETS has not embedded the PPI in the SAT standardized test; instead, they are co-administering this instrument with the Graduate Record Examinations (GRE), using non-cognitive variables to predict the academic performance and persistence of graduate students. Among the non-cognitive variables assessed by the PPI: knowledge and creativity, resiliency, integrity, planning and organization (e.g., disciplined and self-regulatory learning), teamwork, social comfort, and conscientiousness.
  • Student Readiness Inventory (SRI), developed by ACT and now packaged as part of Encore.

Gore adds that a number of instruments not specifically developed to measure non-cognitive variables nevertheless include them. There is also the non-cognitive instrument William Sedlacek developed and described in Beyond the Big Test (Jossey-Bass, 2004), which is now being used by the Bill and Melinda Gates Foundation's Millennium Scholars program.

The Key Question

Gore offers this list with the caveat that he often hears questions about which instrument is best, or most effective, or least expensive. "These are not the most important questions," Gore cautions. "The reality is that you need to devote 10 percent (or less) of your effort to looking at the instrument and determining if it is right for your institution, and 90 percent (or more) of your time and effort to building the system and the programming on your campus that will enable you to do something with the results. So many institutions buy the instrument -- and then it sits there."

The correct question to ask at the outset is not Which instrument? but rather, How will I use the instrument to advance the success of my students? Remember the adage that taking the test never helped anyone. A non-cognitive assessment is not about completing the test; it's about what you and the student will be able to do with the information that's provided by the test.

Paul Gore, University of Utah


About the Authors

Daniel Fusch, Director of Publications & Research

Daniel provides strategic direction and content for AI’s electronic publication Higher Ed Impact, including market research and interviews with leading subject matter experts on critical issues. Since the publication’s launch in 2009, Daniel has written more than 350 articles on strategic issues ranging from student recruitment and retention to development and capital planning. If you have a question or a comment about this article, feel free to contact Daniel at daniel@academicimpressions.com.