Wednesday, September 23, 2015

What Are Students Learning? Not What Colleges Teach.

The majority of educational effort is applied to learning the art of passing set exams.  This typically involves heavy reliance on memory, with short-term memory maxed out going into the exam.  All of it fades far too soon after the exam, and actual mastery of the material itself may never be achieved.


The same holds true for writing skills.  These are not difficult to develop, particularly in today's technological environment.  What is wrong with asking each student to take the day's lecture and the related text and prepare a 500-word synopsis of the material?  Five hundred words is not difficult.  What is more, the student must read the material, digest it, and then write.  These are no longer mere notes; they are preparation for the next lecture and for the exam as well.


A simple demand like this will advance the student's skills.  Better still, that same student is assembling a day book to draw on for any papers required as well.


My point is that learning is proactive and ongoing and must not be allowed to go slack.  Our colleges fail because they are slack!  Demand what I have just described and no student can fail or fall behind.  I would go so far as to require that such summaries be turned in for weekly review, with immediate termination for noncompliance.  That is the complete antithesis of slack.


What Are Students Learning? Not What Colleges Teach.
What are we paying for at university?

Jesse Saffron

Friday, September 11, 2015

http://fee.org/anythingpeaceful/what-are-students-learning-not-what-colleges-teach/

It seems shocking that, in 2015, it is difficult to fully answer the question, “What are college students learning?”

After all, people can find out about admissions policies, degree programs, student debt levels, graduation rates, campus amenities, and financial aid options with the click of a mouse.

But if they want to find out how well a school stacks up in terms of building students’ knowledge, they’ll be largely dead-ended. And the information they might come across is worrisome.

For instance, a 2005 report produced by the Education Department’s National Center for Education Statistics indicated that college-educated Americans’ literacy declined “significantly” between 1992 and 2003. Only 25 percent of the 19,000 graduates surveyed were deemed capable of “using printed and written information to function in society, to achieve one’s goals, and to develop one’s knowledge and potential.”

Also, a scientific literacy survey of 10,000 college students conducted between 1988 and 2008 revealed that students who had taken two or three science courses had test gains of only 10-15 percent over individuals who had not taken those courses. (And many students indicated belief in pseudoscience, such as astrology and the notion that some people have lucky numbers.)

Students also appear weak when it comes to the humanities and social sciences. Since 2006, the Intercollegiate Studies Institute (ISI) has administered a 60-question civic and historical literacy exam to more than 28,000 college students from more than 80 schools. According to ISI, the average score has been about 54 percent (an “F”).

But there is a glimmer of hope. Over the last decade and a half, interest in assessing broad learning outcomes has grown. The dismal results from the tests and surveys that are available, along with complaints from the business community about graduates’ lack of workforce preparedness, have spurred efforts to create and improve learning measurements.

Some higher education organizations, as well as accreditors and state governments, have pushed for measurements of overarching skills such as critical thinking, problem solving, and written communication — skills that should transcend majors and academic disciplines and be possessed by all college graduates. In Missouri and Pennsylvania, for example, a portion of public universities’ funding is tied to assessment results.

In 2002, the Council for Aid to Education began developing the Collegiate Learning Assessment (CLA). The CLA is a “value-added” standardized test that shows how well colleges are building students’ general academic skills. It compares students’ baseline, freshman-year skill levels in areas such as critical thinking and written communication to their senior-year skill levels.
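To make the “value-added” idea concrete, here is a minimal sketch of the underlying comparison, in Python, using hypothetical students and scores. This is not the CLA’s actual scoring model, which relies on more sophisticated statistical adjustment; it shows only the basic freshman-to-senior gain calculation that the term implies.

# Illustrative "value-added" comparison: the same skill is measured at
# freshman intake and again in senior year, and the gain is reported
# rather than the raw senior score. Hypothetical data; this is not the
# CLA's actual scoring model.

def value_added(freshman_scores, senior_scores):
    """Return the mean gain between matched freshman and senior scores."""
    if len(freshman_scores) != len(senior_scores):
        raise ValueError("cohorts must be matched pairwise")
    gains = [s - f for f, s in zip(freshman_scores, senior_scores)]
    return sum(gains) / len(gains)

# Hypothetical critical-thinking scores (0-100 scale) for five students.
freshman = [52, 48, 61, 55, 47]
senior = [58, 49, 66, 57, 50]

print(f"Mean value-added gain: {value_added(freshman, senior):.1f} points")

On this view, a college’s contribution is the gain, not the senior-year score itself; a gain near zero means the institution added little, however able its entering students were.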

So far, the results have been bad — so bad that sociology professors Richard Arum and Josipa Roksa, drawing on CLA data, initiated a national debate about learning outcomes in 2011 with the release of Academically Adrift: Limited Learning on College Campuses.

Results from a CLA administered last year to 32,000 students from 169 colleges revealed that not much had changed in the intervening three years: 40 percent of college graduates were found to be unprepared for the white collar workforce because they had not sufficiently developed the skills mentioned above.

Other standardized tests, such as the ETS Proficiency Profile and the Collegiate Assessment of Academic Proficiency, have been used by colleges and universities to determine the efficacy of general education programs.

The website College Portraits, a byproduct of the Voluntary System of Accountability (a collaboration between the American Association of State Colleges and Universities and the Association of Public and Land-Grant Universities) was designed to provide the public with a wide range of college data — including results from employer and graduate surveys, the National Survey of Student Engagement, and standardized tests such as the CLA and ETS Proficiency Profile.

Each university chooses which test to use and how to use it. For example, North Carolina State University chose to use the ETS Proficiency Profile. Its results were alarming for a major research university with average SAT scores over 1200 (for math and reading): only 12 percent of seniors tested were considered “proficient” in terms of their critical thinking skills and about 60 percent were considered “not proficient.” (The remaining 28 percent were considered “marginal.”)

UNC-Chapel Hill, with average 2-test SAT scores of 1300, chose to use the CLA, but its results were much the same as NC State’s. For critical thinking, only 16 percent of seniors were considered at the level an exiting senior should be, while 9 percent of seniors were at the level of an entering freshman.

Also, neither university incorporated value-added testing, which would have shown students’ educational gains — or lack thereof — made during their time in school.

College Portraits has not gained much traction. After eight years, only about 300 colleges are listed on the site, and fewer than half provide learning outcomes information.

Another reform attempt was made by the New Leadership Alliance for Student Learning and Accountability. The Alliance was formed in 2009 with support from 40 accreditors and college associations. Its goal was to encourage colleges to gather more learning data and share assessment results with the public.

Eventually, more than 100 college and university leaders signed a pledge promising to devote more attention to assessment. Ultimately, however, the project failed because it never established a steady source of funding.

In general, the learning outcomes movement has not gained broad-based support in the academic community. Hundreds of schools have experimented with tests, but few share the results publicly, and almost none show how results inform curricular changes.

Lack of funding, however, has not been the major impediment for the assessment movement. The faculty have been highly resistant to attempts to measure their productivity, and many are skeptical of standardized tests of student learning.

They say that such tests lack reliability because they aren’t tied to coursework and have no impact on students’ grades (meaning that test takers lack motivation to perform well). They also say that the tests don’t provide feedback that allows for curricular improvement. That’s one reason why professors show stronger enthusiasm for initiatives that embed overarching learning goals (critical thinking, etc.) into coursework, allowing them to make adjustments as they see fit.

Hence the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics, crafted by the Association of American Colleges and Universities. They’re designed to help professors assess students’ competencies via traditional class assignments, papers, and so forth.

Ten states — Connecticut, Indiana, Kentucky, Maine, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island, and Utah — are participating in a $2.3 million initiative, funded by the Bill & Melinda Gates Foundation, that uses the rubrics.

Reflecting a desire on the part of faculty to avoid external regulation, the consortium’s guiding principles state that “[Assessment] should be based upon authentic student work and allow for the use of multiple measures of student learning…without a single mandated statewide test.”

Somewhat similar to VALUE rubrics in that it rejects the standardized testing approach is the Lumina Foundation’s Degree Qualifications Profile (DQP). It’s been used by more than 400 colleges and universities since 2011. It defines an array of skills and knowledge that graduates should possess, and provides professors with guidance on how to “fine-tune” courses to achieve specific learning outcomes.

But one wonders how the DQP would address, for instance, many students’ apparent political and historical illiteracy; its “civic learning” section emphasizes diversity issues and current events.

At any rate, the use of employer surveys, national standardized tests, and rubrics has never been more prevalent in higher education. However, a report released last year by the National Institute for Learning Outcomes Assessment indicated that more than two-thirds of schools do not share assessment results, and only 8 percent show how results are being used to shape curricula.

The report concluded, “Too often…the results of assessment…do not lead to action. [Governing boards] should expect to see annually a comprehensive set of student learning indicators and enough examples of productive use of assessment to be confident that…academic quality controls of the institution are operating effectively.”

Today, given the evidence we have of substandard learning outcomes, the longstanding assumption that colleges are adequately preparing students for life and work should be called into question by those who oversee our universities.

If universities wish to avoid micromanagement of curricula, they must provide more information about learning outcomes. If they don’t do so voluntarily, pressure from legislators, governing boards, employers, students, and parents will likely force them to act.
