
posted by CoolHand on Saturday August 27 2016, @01:51AM   Printer-friendly
from the thought-entropy dept.

For many years now, I've been following the blog Knowing and Doing — Reflections of an Academic and Computer Scientist by University of Northern Iowa computer science professor Eugene Wallingford. I admire his efforts at understanding his students' perspectives and the pains he takes to help them grow and understand what they are doing, while also providing a solid foundation for future exploration.

I found this recent (August 7th) entry, Some Advice for How To Think, and Some Personal Memories, to be especially interesting (emphasis from original):

I've been reading a bunch of the essays on David Chapman's Meaningness website lately, after seeing a link to one on Twitter. (Thanks, @kaledic.) This morning I read How To Think Real Good, about one of Chapman's abandoned projects: a book of advice for how to think and solve problems. He may never write this book as he once imagined it, but I'm glad he wrote this essay about the idea.

[...] Artificial intelligence has always played a useful role as a reality check on ideas about mind, knowledge, reasoning, and thought. More generally, anyone who writes computer programs knows this, too. You can make ambiguous claims with English sentences, but to write a program you really have to have a precise idea. When you don't have a precise idea, your program itself is a precise formulation of something. Figuring out what that is can be a way of figuring out what you were really thinking about in the first place.

This is one of the most important lessons college students learn from their intro CS courses. It's an experience that can benefit all students, not just CS majors.

Chapman also includes a few heuristics for approaching the problem of thinking, basically ways to put yourself in a position to become a better thinker. Two of my favorites are:

Try to figure out how people smarter than you think.

Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.

This really is good advice. Subject matter is much easier to come by than deep understanding of how the discipline works, especially in these days of the web.

[...] Chapman's project is thinking about thinking, a step up the ladder of abstraction from "simply" thinking. An AI program must reason; an AI researcher must reason about how to reason.

This is the great siren of artificial intelligence, the source of its power and also its weaknesses: Anything you can do, I can do meta.


Have you ever stopped to think about how you think? Ever try to optimize your thinking processes? I often sense myself intuitively attempting to categorize new concepts in the fashion of "all A are B, but not all B are A"... in other words, finding encompassing abstractions that are proper supersets and subsets of other sets. What are the defining and distinguishing characteristics? How does this relate to other abstractions I've learned?
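
To make that relation concrete: in Python, "all A are B, but not all B are A" is just A being a proper subset of B (a toy example, nothing more):

    mammals = {"dog", "cat", "whale"}
    animals = {"dog", "cat", "whale", "crow", "salmon"}

    print(mammals < animals)    # True: every mammal is an animal...
    print(animals < mammals)    # False: ...but not every animal is a mammal
    print(mammals <= mammals)   # True: <= allows equality; < requires a proper subset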

As a concrete example, when trying to get up to speed on a new programming project, I have found it helpful to make three passes through the documentation. On the first (rapid) read-through, I seek to identify the vocabulary and the higher-level interdependencies between the terms used. On the next pass, I read slower and seek a much stronger mental model of how everything is defined and interrelated. On the third pass, I read with a critical eye to clearly distinguish dependencies and assumptions that may, or may not, hold — I seek the outliers and corner cases.

So, my fellow Soylentils, how do you think you think?


Original Submission

  • (Score: -1, Troll) by Anonymous Coward on Saturday August 27 2016, @02:29AM (#393836)

    Marty, you haven't sorted out what you were thinking hence the rambling. Try again.

  • (Score: 3, Interesting) by JNCF (4317) on Saturday August 27 2016, @02:50AM (#393842)

    As a concrete example, when trying to get up to speed on a new programming project, I have found it helpful to make three passes through the documentation. In the first (rapid) read-through, I seek to identify the vocabulary and the higher-level interconnections between the terms used. On the next pass, I read slower and get a much stronger mental model of how everything is defined and interrelated. On the third pass, I'm now reading with a critical eye to clearly distinguish dependencies and assumptions that may, or may not, hold — seeking the outliers and corner cases.

    This is sort of a less neurotic version of the WHATWG reading instructions [whatwg.org] (emphasis theirs, to denote heading):

    1.9.1 How to read this specification

    This specification should be read like all other specifications. First, it should be read cover-to-cover, multiple times. Then, it should be read backwards at least once. Then it should be read by picking random sections from the contents list and following all the cross-references.

    They aren't wrong, though... except for maybe the backwards part. I feel like picking semi-random sections of a specification to refresh yourself on is a helpful habit (at least if the spec in question is too big to digest in a single sitting). It refamiliarizes you with some of the parts that you wouldn't normally find yourself looking up.

    • (Score: 0) by Anonymous Coward on Saturday August 27 2016, @11:09AM (#393904)

      Wow, this Wallingford guy is in for a rude awakening when he realizes people won't even RTFS let alone RTFA three times!

  • (Score: 2) by jdavidb (5690) on Saturday August 27 2016, @03:57AM (#393854)

    Henry Hazlitt wrote an interesting book called Thinking As a Science which can be found online... I bought it a few years ago, but unfortunately I never made it through it; I merely skimmed it. Hazlitt probably would've been appalled.
    --
    ⓋⒶ☮✝🕊 Secession is the right of all sentient beings
    • (Score: -1, Flamebait) by Anonymous Coward on Saturday August 27 2016, @06:24PM (#393995)

      is that the lolbertarian scumbag? Just read Heidegger's Being and Time FFS

  • (Score: 3, Interesting) by TheLink (332) on Saturday August 27 2016, @06:43AM (#393869)

    1) Start with various goals for the current situation
    2) For each possible choice, predict outcomes using multiple existing prediction models (self, world, device, object, person-specific)
    3) Select a choice based on how good the probable outcomes are
    4) Compare the actual outcomes against each model's predictions
    5) Correct/modify/rerank the prediction models
    6) Remove/modify/add goals based on the outcomes and new predictions
    7) Go to 2) (see the code sketch below)
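
    In code, that loop might look something like the following sketch (Goal and PredictionModel here are made-up placeholders with predict/score/update/satisfied methods, not any real library):

        def think(goals, models, choices, act, observe, max_steps=100):
            for _ in range(max_steps):
                if not goals:
                    break  # nothing left to pursue

                # 2) predict an outcome for every candidate choice with every model
                predictions = {c: [m.predict(c) for m in models] for c in choices}

                # 3) pick the choice whose predicted outcomes score best against the goals
                best = max(choices,
                           key=lambda c: sum(g.score(p) for g in goals for p in predictions[c]))

                # 4) act, observe the real outcome, and compare it with each prediction
                act(best)
                outcome = observe()

                # 5) correct/re-rank each model according to how far off it was
                for model, predicted in zip(models, predictions[best]):
                    model.update(predicted, outcome)

                # 6) drop goals the outcome satisfied (new ones could be added here too)
                goals = [g for g in goals if not g.satisfied(outcome)]
                # 7) and around we go again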

    Smarter people have better prediction models and auxiliary calculation systems. Maybe a quantum computer can help run multiple predictions and consciousness is what happens when a quantum computer predicts itself :).

    Seriously though, most current AIs don't _know_ anything. It's obvious from the mistakes they make (see: http://www.aol.com/article/2011/02/17/the-watson-supercomputer-isnt-always-perfect-you-say-tomato/19848213/ ). A stupid person could copy answers from the smartest people, and might even be able to fake something that looks like zen insight using clever heuristics some smart person gave them. But the mistakes of understanding will show you how much that person actually understood - how close the person's model of the problem is, or whether the person has a model at all.

    That's why I know in many ways dogs are still smarter than IBM's Jeopardy bot. Dogs can create and maintain decent models of humans, other creatures and the environment. Based on what they do and the mistakes they make, I know that they know something.

    By the way, to me it's amazing that a crow with a walnut-sized brain is smarter than many mammals with much larger brains at many problem-solving tasks. But do crows do as well at maintaining models of other creatures? Many crows seem to have difficulty distinguishing between a human trying to help another injured crow and one trying to hurt it, whereas many other animals seem to be OK at figuring out whether we are helping or hurting another animal. But maybe most crows don't need to? Or perhaps my stats/data is skewed/wrong.

    • (Score: 0) by Anonymous Coward on Saturday August 27 2016, @11:12AM (#393906)

      I think you forgot step '0':
      0) Check Stack Overflow & Stack Exchange because someone there has probably already tried what you are thinking on some level.

    • (Score: 0) by Anonymous Coward on Friday September 09 2016, @06:44AM (#399506)

      Specifically on the bird thing, it's been found that bird brains have a much denser arrangement of neurons than mammalian brains.