How do national standards play a part in this approach to education, an approach which I am sure is practiced by most experienced teachers?
With regard to documenting improvement, I can suggest one approach. I am experimenting with in-class exercises in my course. Lecture stops and the students work with handouts. They work in small groups, in a class of 60 students, in a lecture hall that is not suited to this approach. But it seems to work. How do I determine the efficacy of this approach? Last semester, when I started doing this, I gave them the same multiple-choice final examination I have given for the last five semesters. Some of the questions change each semester, but the kinds of questions do not. The mean grade did not change last semester (the mean is a very stable statistic), but the variance was considerably lower than in the past (with one exception, which I can explain). So, until I get more data, I am hypothesizing that having students work together brings them closer together "knowledge-wise" than does the straight, traditional lecture approach.
This is an internal method of assessment, independent of what people do in other places, and, I think, is more meaningful than a comparison with scores on a national standard, because it tests the efficacy of a method, not the number of facts students have memorized for a test.
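To put a number on that variance comparison, a minimal sketch along these lines would do (Python; the scores are invented for illustration, not real class data):

    import numpy as np
    from scipy import stats

    # Hypothetical final-exam scores for two semesters; invented numbers,
    # standing in for real class data.
    traditional = np.array([55, 62, 70, 48, 81, 66, 74, 59, 88, 52])
    group_work = np.array([63, 68, 71, 65, 74, 69, 72, 66, 75, 64])

    # Similar means, but is the variance genuinely lower?
    f = np.var(traditional, ddof=1) / np.var(group_work, ddof=1)
    dfn, dfd = len(traditional) - 1, len(group_work) - 1
    p = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))  # two-sided F-test
    print(f"means: {traditional.mean():.1f} vs {group_work.mean():.1f}")
    print(f"variance ratio F = {f:.2f}, p = {p:.3f}")

    # Levene's test is more robust if the scores are not normally distributed.
    print(stats.levene(traditional, group_work))

With only a handful of semesters, any such test is suggestive at best; the point is just to make the comparison explicit.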
Pat de Caprariis
Dept. of Geology
Indiana-Purdue Univ
pdecaprr@iupui.edu
First: comparing physics and geology is like comparing birds and dinosaurs. They are both science, but that is where the similarity ends. Physics, like chemistry and especially math, does not have "out of the classroom" visualizations for students. As such, unlike geology, it is the same from New York to Houston to California. This leads me to my second point.
Second: I do not know how everyone else teaches intro geology classes, which have about 95% non-majors and maybe a few majors (and hopefully a few future ones), but I, at the risk of skipping a topic or two, structure my classes around what students see every day. I know I cover neotectonics and Cenozoic clastic sedimentation in much greater detail than, say, point bars, deltas, or Paleozoic tectonics, because that is what we have here. Tippecanoe transgressive sands are not going to be an integral part of my students' lives, but earthquakes and the Sierra Nevada are. I think a list of concepts that we think all students should know would be very useful, but not standardized exams.
Jon Sloan
Dept. of Geology
CSUN
at or near Northridge (pending the next quake) 91330
It is true that the textbooks have relatively standardized content; however, I have never met an instructor who felt the entire text could be taught in a single semester. Given the necessity of culling material, and the merits of customizing that material based on geography, interest, expertise, etc., standardized exams would only serve to make the course content more restricted, less interesting, and less relevant to the majority of those taking the course!
Lastly, I would also emphasize that in all cases, restricting content must be done without sacrificing standards, which is a separate issue entirely.
Bill
William R. Dupré
Associate Professor
Department of Geosciences
University of Houston
Warren
A practical proposal might be to a) find out how many folk use the question sets made available with introductory texts, then b) find out which questions from those sets are actually used, and c) think about whether the correlation of responses to the common questions might throw some light on teaching practices. (I haven't used such question sets, but I'm interested in starting.)
(I teach beginner Geology for Engineering students to classes of 30-100, and it isn't much like our beginner Earth Science classes for Science students. I also teach Geophysics to classes of 30-40 at senior undergraduate level. I'm always surprised at what they learn, though.)
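If step (c) got off the ground, even a quick correlation of per-question success rates would be a start. A minimal sketch in Python (all numbers invented, just to show the shape of the analysis):

    import numpy as np

    # Fraction of students answering each of 10 shared questions correctly
    # at two institutions.  Hypothetical numbers for illustration only.
    inst_a = np.array([0.91, 0.45, 0.78, 0.62, 0.35, 0.88, 0.52, 0.70, 0.41, 0.66])
    inst_b = np.array([0.87, 0.51, 0.80, 0.58, 0.30, 0.85, 0.60, 0.73, 0.38, 0.61])

    # High correlation: the questions rank the same way in difficulty at
    # both places.  Low correlation may flag differences in emphasis or
    # teaching practice worth a closer look.
    r = np.corrcoef(inst_a, inst_b)[0, 1]
    print(f"cross-institution item correlation: r = {r:.2f}")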
Lindsay Thomas
School of Earth Sciences, The University of Melbourne
Victoria 3010, Australia
Warren Huff
From an industry point of view, we still have good use for the textbook, large numbers of which are written in-house for specific pieces of equipment and techniques of use. (We also deliver to the field with all the texts in .pdf format for ease of transport.) We still need people who will pick a text up, read it through, and ask questions related to it, even if those questions amount to "Must I really do it this way? Or can I not do it by a safer/cheaper/more efficient method?" So there is a direct plea to all educationalists of the geosciences: keep on bringing out inquisitive people. As a trainer in industry, the worst-case scenario I am faced with is people who just sit and absorb, needing to be spoon-fed absolutely everything. You probably feel much the same!
Enough digression from the topic:
Where we expect personnel to be on the move frequently, such as in the oil industry, a text has an advantage. Texts do not need battery life the way a laptop does. A text is never going to get shut down by airlines insisting that electrical devices may affect the navigation systems of the aircraft! If you are struggling to sell printed works to students, perhaps casually informing them that print is read 25% faster than the electronic word may bring a bit of focus back. The downside is that a text has to be well written to be stimulating, and that is also linked to personal "tastes" and the qualities of the author. A lot can be attached to an electronic work in terms of video/sound demonstrations of processes, which makes it a very competitive medium. The only way a text could compete is to go to live action in the field!
Perhaps some of you would be surprised at the lack, or low quality, of intranets within industry. Maybe you should warn your students that what they have in front of them now may be the best they get until they have had some years in industry and industry has caught up! I am employed by a large multi-national service company. We have frequently come across problems where large oil companies have outsourced their network management to a third party. This is done on a strict budget, and perhaps the oil company's management does not fully comprehend the system they have outsourced; now they have one of limited use. At least two of the major oil companies cannot run anything more than IE3! That becomes a problem where companies are trying to feed their geologists "live" drilling data requiring a minimum browser standard of IE4. It is some indication of the system clashes that people have to learn to cope with outside education. I feel that will change again in the future, as the vogue for outsourcing gets reviewed on a case-by-case basis.
In my own case I have chosen text-based materials over e-publishing. I decided that for a basic-level Electric Log Interpretation course for geologists, I would use Asquith's book from the AAPG. The benefits were that I did not have to compete for computers in training rooms or guarantee that everyone had access to a PC to get at the material when they were away from the classroom. In the cost-benefit analysis, I could not justify producing my own course unless I needed more than 1000 copies, which approximates to more than 5 years' supply! The book I chose is now over 14 years old, a standard work known by many within the industry, and it still delivers. Who is using electronic material today that is 14 years old and still delivers the required message?
In another case I am looking to use in-house publishing to rewrite a self-teach drilling and engineering manual for logging geologists. There is little point in my writing the whole text, as I can use the current Directional Drillers', Surveyors', Drilling Fluids, and Wellbore Construction Engineers' manuals, which are already in place and in .pdf format, HTML-link to the relevant chapters/pages, and run it all off a CD-ROM. (Although I am still faced with a problem of access to PCs.)
I hope that texts and e-publishing will continue to co-exist; each has its strengths, which can be played to. It is a case of knowing your subject matter and your audience, and presenting accordingly.
Thanks for the interesting comments over the past weeks. For the person who noted that they still kept their college texts, I still have mine also and they are still in use 20 years on!
Regards
Stuart Pressage
Is the idea proposed from Melbourne worth pursuing? Take a look at the link above. Are there questions that can be asked that allow testing of mastery of a concept ... questions such that response (a) suggests one set of thought processes and response (b) another?
Of course, we might be forced into a discussion about what students should take away from an introductory geoscience course ... or what they bring in ... do we add value?
I agree with Lindsay's comment about "shared goals", and I too am interested in taking this another step.
John Butler
1. identify good practices (as discussed by Warren, Pat, and others ...);
2. encourage others to adopt them.
The second step may be more difficult than the first. There is a steadily increasing body of literature that discusses how to improve learning. Like Pat, I have added in-class group exercises to my courses, and they have changed the whole classroom experience for the better (and I have the anonymous surveys to prove it). I have also instituted short (2-4 question) reading quizzes every day to allow me to get away from regurgitating the text and to allow for more discussion in lecture (approx. 70 students). Miraculously, several students responded on the survey that they liked having a daily quiz because it provided an incentive to study (only 2 complained). So class now typically involves quiz/brief lecture/group exercise, with other components when necessary (student observations of images; group quizzes; minute essays). Initial results reveal higher average scores on the same questions on this semester's exams vs. last semester's, when class was more "traditional". However, the results are far from conclusive, and it is not clear that a multiple-choice test is really assessing improvements in learning.
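One way to firm up that same-questions comparison is a paired test on the matched questions, which controls for per-question difficulty, one of the confounds raised below. A minimal sketch, with invented numbers rather than actual class data:

    from scipy import stats

    # Percent correct on the same 8 exam questions in two semesters.
    # Hypothetical numbers, not actual class data.
    last_sem = [61, 55, 72, 48, 66, 59, 70, 52]
    this_sem = [68, 60, 75, 57, 71, 63, 74, 58]

    # Pairing by question controls for per-question difficulty.
    t, p = stats.ttest_rel(this_sem, last_sem)
    print(f"paired t = {t:.2f}, p = {p:.3f}")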
I'm convinced that this is a better method than the "sage on the stage" model. But what will it take to convince my colleagues of this? Improved test scores on our own exams can be explained away by a variety of factors (e.g. easier exams, less content, "easier" topics, "hints" in lecture) and the rest of the evidence is pretty anecdotal. In discussions about various teaching methods, one of my colleagues labeled reading quizzes and in-class discussions "tricks". For some, the only way they will be convinced of the utility of alternate teaching methods is to have an unbiased outside evaluation method. The most familiar method is a standard test or group of questions. Are there alternatives? Is there another mechanism that could be used to evaluate improvements in learning?
David McConnell
1. I believe our product is an earth scientist. We like to believe that this beast has at least a basic knowledge of his/her subject, has the skills to find out more and the understanding to analyse, synthesise and present.
2. How do we quantify the quality of our product? How do we check our quality from year to year? How do we check our quality against somebody else's?
The key things here are learning under 1 above and assessment under 2. Teaching may help in this process, but it is now well known that different students learn differently so what is "good" teaching for one student may be "less good" teaching for another.
Assessment has to be keyed to course aims and objectives by overtly addressing intended learning outcomes. Some survey work that I did demonstrates something which I think we already know: that there is much commonality in earth science programmes of study. For example, a sedimentology module will deal with the various classification schemes for sediments, with the various methods for recording and interpreting data (logs etc.), and with the processes and environments that can be deduced from these. Of course, the sample materials studied will differ, but the underlying generic core of knowledge, skills development, and understanding is common. It represents a benchmark.
Assessments that address such benchmarks can be used to test whether students have attained that level. I believe that computer-based assessments have great potential in this area because they are objective and can be set up to address actual learning outcomes, or benchmark knowledge, skills, understanding, etc. Other forms of assessment - essays, reports, dissertations - are still needed to test the range of skills and understanding not easily assessed by computer; however, such modes of assessment do suffer from the problems of objectivity associated with human marking.
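As a sketch of what "set up to address actual learning outcomes" could look like, a computer-marked quiz can report mastery per outcome rather than one raw score. A toy Python example, with outcomes, questions, and answer key all hypothetical:

    # Toy computer-marked quiz keyed to learning outcomes.  All names and
    # answers here are hypothetical.
    answer_key = {"Q1": "b", "Q2": "d", "Q3": "a", "Q4": "c"}
    outcome_of = {"Q1": "sediment classification", "Q2": "sediment classification",
                  "Q3": "process interpretation", "Q4": "process interpretation"}
    student = {"Q1": "b", "Q2": "a", "Q3": "a", "Q4": "c"}

    # Tally right/total per outcome instead of a single overall mark.
    tally = {}
    for q, correct in answer_key.items():
        right, total = tally.get(outcome_of[q], (0, 0))
        tally[outcome_of[q]] = (right + (student[q] == correct), total + 1)

    for outcome, (right, total) in tally.items():
        print(f"{outcome}: {right}/{total}")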
I realise that talking of benchmarks brings up the question of "national standards", or even "international standards", which I am uneasy with myself (would I agree with the "agreed" benchmarks?). However, the current presence of anomalous outcomes from year to year within an institution or between institutions must make us also feel uneasy about not having "national standards".
In my own part of the UK, three institutions comparable in size and in the quality of student intake (in all subject areas) have markedly different graduate output quality if final degree results are the measure. Degrees are classified in the UK, and the key measure is the proportion of first and upper-second class degrees (a mark average of 60% or greater) against lower second, third, and fail (less than 60%). Three institutions within 100 km of each other have intake quality measured (in 1998) at 180/250, 193/250, and 184/250, but percentages of graduating students scoring 60% or better of 45%, 58%, and 64% respectively. Now of course, it may be that the teaching at the third institution is absolutely fantastic and leads to greater value-added when it comes to measuring the learning attained. However, it may also mean that there are different standards operating. We do not have the data to test which hypothesis is true. This ought to make us feel uncomfortable as educators.

Any test of these hypotheses will involve some form of "national standard", and my suspicion is that such standards will happen. The UK government has already run benchmarking pilot studies in some subject areas, with the implication that these will take place in all subject areas. It is probably better to help frame benchmarking than it is to ignore it and then complain when you don't like the outcome.
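Laid side by side (using only the figures above), the mismatch is plain:

    # The figures above: 1998 intake quality (points out of 250) against
    # the percentage of graduates with "good" degrees (60%+ average).
    intake = [180, 193, 184]
    good = [45, 58, 64]

    for i, (quality_in, quality_out) in enumerate(zip(intake, good), start=1):
        print(f"institution {i}: intake {quality_in}/250 -> {quality_out}% good degrees")
    # The middling intake (184/250) produces the best output; intake
    # quality alone does not explain the spread.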
To sum up, we should be wary of imposed benchmarks, but we should not be so smug as to believe that our current practices are really that good. Perhaps some attempts at designing some sets of benchmarking questions and trying them out at different institutions might be a useful exercise. TRIADS would be an excellent medium for the tests.
Alan Boyle
University of Liverpool