Entrepreneurial Experiences No Better Than Textbooks, Says Study

posted by janrinok on Monday December 26 2016, @07:24PM

A new study challenges the common belief that learning by experience is the most effective way to teach entrepreneurship at university.

An analysis of more than 500 graduates found no significant difference between business schools that offered traditional courses and those that emphasised a 'learning-by-doing' approach to entrepreneurship education.

The research challenges the ongoing trend across higher education institutions (HEIs) of focusing on experiential learning, and suggests that universities need to reconsider their approach if they are to increase entrepreneurship among their students.

http://phys.org/news/2016-12-entrepreneurial-textbooks.html

[PhD Thesis]: Evaluation of the Outcomes of Entrepreneurship Education Revisited

[Related]: College can cultivate innovative entrepreneurial intentions

[Source]: http://www.aston.ac.uk/news/releases/2016/december/entrepreneurial-experiences-no-better-than-textbooks-says-study/


Original Submission

 
  • (Score: 0) by Anonymous Coward on Monday December 26 2016, @07:54PM

    by Anonymous Coward on Monday December 26 2016, @07:54PM (#446131)

    Statistical significance is a function of sample size. Let's look at the thesis:

    Data were collected from 16 entrepreneurship educators

    No surprise they found no "significant" difference.

  • (Score: 4, Insightful) by ikanreed on Monday December 26 2016, @08:31PM

    by ikanreed (3164) Subscriber Badge on Monday December 26 2016, @08:31PM (#446138) Journal

    Erm... the sample size for the study would be the students of those 16 educators, presumably reaching well into the hundreds.

    If there were a real effect size larger than a couple percentage points, that sample size would have found it at 3 sigma or better. Admittedly, something trivial could be hiding in there, but this is pretty substantial for discrediting the hypothesis "Experiential learning provides substantially better results (and thus is worth paying more for)".
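    For concreteness, here is one way to sanity-check what a sample that size can and cannot detect. This is a back-of-the-envelope sketch with assumed numbers (a binary outcome and roughly 250 students per arm; none of these figures come from the thesis itself):

        # Smallest detectable gap between two groups of ~250 students each,
        # for a binary outcome near 50% (the worst case for the variance).
        # Assumed design: two-proportion z-test, a two-sided "3 sigma"
        # threshold, 80% power. All numbers are hypothetical.
        from scipy.stats import norm

        n = 250                      # students per arm (assumed)
        p = 0.5                      # baseline rate; p*(1-p) peaks here
        z_alpha = 3.0                # the "3 sigma" threshold
        z_power = norm.ppf(0.80)     # ~0.84 for 80% power

        # Standard minimum-detectable-effect formula for a difference
        # in proportions: MDE = (z_alpha + z_power) * sqrt(2*p*(1-p)/n)
        mde = (z_alpha + z_power) * (2 * p * (1 - p) / n) ** 0.5
        print(f"minimum detectable difference: {mde:.1%}")   # ~17%

    So a "substantially better" program would indeed show up, but at 3 sigma and decent power, anything under roughly ten to fifteen percentage points could plausibly hide in a sample this size.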

    • (Score: 0) by Anonymous Coward on Monday December 26 2016, @08:51PM

      by Anonymous Coward on Monday December 26 2016, @08:51PM (#446143)

      The elephant in the room, of course, is that there is no objective reliable way to measure the effectiveness of a university course in this situation.

    • (Score: 1) by khallow on Monday December 26 2016, @09:02PM

      by khallow (3766) Subscriber Badge on Monday December 26 2016, @09:02PM (#446148) Journal
      I see this blurb from the abstract for the dissertation:

      Experiential EE was associated with higher skill-based and affective outcomes than traditional EE, but only in Estonia.

      So the author of the dissertation did find a significant difference, contrary to the assertion. The author also noted that "objective expressions of entrepreneurial behaviour" didn't show a similar difference.

      And I agree with the original poster: the sample size is far too small. Sure, there are almost 600 students in the sample, but they fall into just 16 slots. It may just be that the sampled experiential EE schools aren't good compared to the other half; they might even be worse if they attempted the traditional approach.

      • (Score: 2) by ikanreed on Monday December 26 2016, @09:24PM

        by ikanreed (3164) Subscriber Badge on Monday December 26 2016, @09:24PM (#446156) Journal

        Errr.... No. You're completely full of shit? And don't really know how longitudinal studies work? At all?

        If you can 100% verify that each of your "buckets" suits the independent variable condition, it doesn't matter how many there are.

        "Oh no!" says user khallow, "this drug trial only included 3000 patients from 3 hospitals. Better throw it out."

        I couldn't say why, but you're actively seeking an ad hoc reason why the null hypothesis couldn't possibly stand, which is ass-backwards science.

        • (Score: 2, Informative) by khallow on Monday December 26 2016, @10:31PM

          by khallow (3766) Subscriber Badge on Monday December 26 2016, @10:31PM (#446181) Journal

          If you can 100% verify that each of your "buckets" suits the independent variable condition

          There are two problems right there. The fewer the buckets, the more likely you're inserting hidden biases through your choice of buckets. And bucket partitions (technically a near-partition, since it is possible, though rare, to attend two or more schools) are never independent by definition.
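          A toy simulation makes the bucket problem concrete (entirely made-up numbers: 16 schools of ~37 students each, a per-school quality shock, and no true treatment effect at all):

              # 16 schools, ~600 students, NO real difference between the two
              # teaching styles, but each school has its own quality shock.
              # Treating the students as 600 independent observations then
              # yields "significant" results far more often than 5%.
              import numpy as np
              from scipy.stats import ttest_ind

              rng = np.random.default_rng(0)
              trials, false_pos = 2000, 0
              for _ in range(trials):
                  shocks = rng.normal(0, 0.5, size=16)     # per-school shock
                  schools = [s + rng.normal(0, 1, size=37) for s in shocks]
                  a = np.concatenate(schools[:8])          # "experiential" half
                  b = np.concatenate(schools[8:])          # "traditional" half
                  if ttest_ind(a, b).pvalue < 0.05:
                      false_pos += 1
              print(f"naive false-positive rate: {false_pos / trials:.0%}")  # ~50%, not 5%

          With any school-level variation at all, the naive student-level test is wildly overconfident, which is the hidden-bias risk of having so few buckets.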

    • (Score: 2) by VLM on Monday December 26 2016, @09:04PM

      by VLM (445) on Monday December 26 2016, @09:04PM (#446149)

      Isn't that lack of a difference proof in itself that something is wrong with the study?

      Those seem like wildly different learning experiences, so there should be at least some differences in outcome.

      Could be a lack of cause and effect between what's being tested and what's being measured, or perhaps what's being measured has nothing to do with what's learned.

      For example, a standard SN automobile analogy: I can try to run a scientific study on oil pressure vs. max output horsepower for a highly selected subpopulation, let's say all the engines have to be a very specific Toyota model arriving at a local dealership. All you're really going to measure is the noise level of various tolerances in the oiling system and the noise level of various engine computer sensor tolerances, which probably won't correlate.

      However, in an absolute sense it seems "obvious" that among a wide range of different-sized engines, the higher-power engines will have slightly lower oil pressures: all things being equal, the higher-power engine should run hotter, hot oil is lower viscosity, and thin, watery oil will pour through the engine faster, resulting in lower pressure. I think that's a reasonable hypothesis.

      This is assuming they didn't totally F up the study. The SN electronic-workbench analogy I'll provide: they decide to measure the voltage across a resistor and across a capacitor and then analyze the result, because they are idiots; furthermore, the battery in the voltmeter is dead, so they say "F it" and pencil-whip the results as 0 each time. (Assuming 3-sig-fig hobbyist cheap gear, and no true-but-irrelevant stuff about resistor noise voltage per Hz or cap dielectric charge absorption. I suppose my car analogy needs a similar disclaimer.)

      Oh wait, I've got an even better one: it's like giving one dude a shell account and telling him to figure out how to run "hello world", while giving another dude a copy of some 90s C++ textbook, like Deitel and Deitel's pink phone-book-sized text, but no computer access at all, and then having them turn in identical midterms. All that means is that one bastard cheated and copied the other's answers. Or you give one future "code hero" a bottle of vodka and the other a bottle of whiskey and act surprised at the identical midterm results. Or different brands of condoms.

      • (Score: 5, Insightful) by Anonymous Coward on Monday December 26 2016, @09:32PM

        by Anonymous Coward on Monday December 26 2016, @09:32PM (#446159)

        One explanation could be that neither experience nor book learning has much effect on entrepreneurial success.

  • (Score: 0) by Anonymous Coward on Tuesday December 27 2016, @09:00PM

    by Anonymous Coward on Tuesday December 27 2016, @09:00PM (#446455)

    16 groups vs 600 students:

    That means you have to 'cluster' your standard errors by professor, which reduces the statistical power of the sample depending on the intensity of the intra-cluster correlation. For example, if all students of a professor were 100% identical, it would be like having only 16 observations in your sample. Conversely, if there were no correlation between the professor and any characteristic of the students, you would have the full power of 600 observations.
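    That trade-off has a standard closed form, the Kish design effect (textbook survey statistics, not something pulled from the thesis): with m students per educator and intra-cluster correlation icc, n clustered students carry about as much information as n / (1 + (m - 1) * icc) independent ones.

        # Effective sample size under equal-sized clusters (Kish formula).
        # 600 students across 16 educators is ~37.5 students per cluster.
        def effective_n(n: float, m: float, icc: float) -> float:
            return n / (1 + (m - 1) * icc)

        print(effective_n(600, 600 / 16, 0.0))   # 600.0 -> students fully independent
        print(effective_n(600, 600 / 16, 0.1))   # ~129  -> a modest ICC already bites
        print(effective_n(600, 600 / 16, 1.0))   # 16.0  -> identical students, 16 observations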

    It looks like the author did not cluster her standard errors, leading to standard errors that are too narrow (and potentially to false positives), but to be perfectly frank I only say that based on a keyword search on 'cluster'; that thesis is an exasperating 416 pages.

    But clustering is not the only thing that threatens the internal validity of the research (forget external validity; the sample is not 'representative' of any population of schools, and that does not appear to be the goal anyway).

    The main issue is that those results come from simple regressions, which means the experimental design does not allow the author to make claims about causal inference. With such a design, one cannot say that any observed effect is 'caused' by exposure to the treatment: there could be, and most certainly is, omitted variable bias, for example, a major threat that often flips the signs of regression coefficients (like finding that an effect is positive when in reality it's negative).
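    The sign flip is easy to reproduce in a toy example (made-up numbers, nothing estimated from the thesis): the true effect of x on y is negative, but an omitted confounder z pushes both up, so the one-variable regression reports a positive coefficient.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000
        z = rng.normal(size=n)                        # omitted confounder
        x = z + rng.normal(size=n)                    # z drives x up
        y = -0.5 * x + 2.0 * z + rng.normal(size=n)   # true effect of x is -0.5

        # Simple regression y ~ x, i.e. what a thesis-style analysis sees:
        X1 = np.column_stack([np.ones(n), x])
        print(np.linalg.lstsq(X1, y, rcond=None)[0][1])   # ~ +0.5: wrong sign

        # Controlling for the confounder recovers the true effect:
        X2 = np.column_stack([np.ones(n), x, z])
        print(np.linalg.lstsq(X2, y, rcond=None)[0][1])   # ~ -0.5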

    Kinda sucks to be realistic about the statistical tools used (correlation does not imply causation here; find your obligatory XKCD). But the paper remains interesting exploratory research, and it looks like the author deserves funding to run an actual randomized controlled trial, or to find other quasi-experimental designs that have stronger claims to causality, such as regression discontinuity design (RDD), my favorite. I know there are such papers in the field with such designs, but I don't know if any compare traditional and experience-based entrepreneurship courses.
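    For flavor, a bare-bones RDD sketch (toy data only, not a real admissions rule): units at or above an arbitrary cutoff in a running variable get the treatment, units below do not, and the jump in outcomes at the cutoff estimates the causal effect.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5_000
        score = rng.uniform(-1, 1, size=n)       # running variable (e.g. entry exam)
        treated = (score >= 0).astype(float)     # sharp cutoff at zero
        y = 0.8 * score + 2.0 * treated + rng.normal(0, 0.5, size=n)

        # Local linear fit on each side of the cutoff, narrow bandwidth:
        h = 0.2
        left = (score > -h) & (score < 0)
        right = (score >= 0) & (score < h)
        fit = lambda mask: np.polyfit(score[mask], y[mask], 1)
        jump = np.polyval(fit(right), 0.0) - np.polyval(fit(left), 0.0)
        print(f"estimated effect at the cutoff: {jump:.2f}")   # ~2.0 by construction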

    - An actual economist