Eureka moments shared by Chemists.
Interviews on creativity, as seen through the lens of Eureka moments, were conducted with 18 chemists spanning several subdisciplines of chemistry and a diversity of demographics. The experiences fell into three categories: (1) analytical problem-solving that can be reconstructed as a series of identifiable logical steps; (2) retrieval of previously acquired knowledge from memory; and (3) insights characterized by a sudden and unexpected understanding. There were variations in detail within each category. Suggestions for increasing the probability of experiencing Eureka moments are provided.
Derek Lowe shares his thoughts, always worth a read.
I've had 2 in my life. 11 y/o me was struggling with fractions when suddenly all became clear. Then a few years later while teaching myself Z-80 assembly I watched my debugger single step into a text string and suddenly how computers execute code crystallized. What Eureka moments have you had?
(Score: 3, Funny) by turgid on Monday November 18, @08:24PM
I once shouted Eureka! Someone piped up, "You don't smell so great yourself." It's a Scottish thing.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by JoeMerchant on Monday November 18, @08:29PM
For the longest time I was mystified about how a computer actually looked up the contents of a memory address.
Then I had the formal introduction to multiplexers... I knew about them before but never thought to scale them up to something like 16-bit addressing... That's a big damn multiplexer, and that's one reason why RAM ain't cheap.
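A minimal sketch of the scaling-up idea in Python (illustrative only; mux2 and read_word are made-up names, and a real address decoder is hardware, not a search loop): a 16-bit address select is effectively a 65,536-way multiplexer built from a tree of 1-bit selectors.

# Hypothetical illustration: a 2^N-way word select built from 1-bit muxes.
def mux2(sel_bit, low, high):
    """A 1-bit multiplexer: picks 'high' when the select bit is 1."""
    return high if sel_bit else low

def read_word(memory, address, bits=16):
    """Walk the address bits from most to least significant, halving the
    candidate range each time -- a selector tree with 2^bits leaves."""
    lo, hi = 0, len(memory)
    for i in reversed(range(bits)):
        bit = (address >> i) & 1
        mid = (lo + hi) // 2
        lo, hi = mux2(bit, (lo, mid), (mid, hi))
    return memory[lo]

ram = list(range(2 ** 16))      # 64K "words"
print(read_word(ram, 0xBEEF))   # 48879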
🌻🌻 [google.com]
(Score: 5, Insightful) by Rosco P. Coltrane on Monday November 18, @09:24PM (4 children)
because I'm not intelligent and knowledgeable enough to grasp the concepts. But Richard Feynman was such a gifted lecturer that he could make me understand quantum electrodynamics for 5 minutes 🙂 Or rather, he could make me feel as if I understood it. And then it slips away again and I still don't have a clue what quantum electrodynamics is.
You can feel the effect yourself by watching his introductory lecture on quantum electrodynamics [youtu.be].
I regularly watch this lecture just to feel the eureka moment when the lecture is over - the eureka moment that keeps vanishing and that I have to watch the lecture again to feel once more.
Feynman could make randos like me grasp, for a few minutes, concepts that I don't really have the ability to grasp. He was that good. The effect doesn't last very long, but it feels amazing while it does.
(Score: 4, Interesting) by Gaaark on Monday November 18, @10:32PM
I'm the same way with the BBC show Connections.
The way one discovery leads to another and leads to something the first discoverer could never conceive. He throws names and discoveries at you, and after the show is over I could watch it again because... 'duh' kicks in.
Getting old SUUUUUUUUUUUcks.
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 1, Interesting) by Anonymous Coward on Tuesday November 19, @12:36AM
Similar experience listening to Buckminster Fuller lecture. Once when I was a kid, pre-teen. And once when I was in college. Over several hours it seemed like he disassembled the world and all its major problems, and then put it back together better.
Then there are the opposite sorts of experiences, the opposite of eureka, where *nothing* makes sense. It happened the first time I watched the movie version of Catch-22. I was so confused and disconnected when I walked out that I went back to the ticket window, bought another ticket (very cheap, college movies) and watched it a second time. That was enough that I could "re-integrate" and get some sleep that night.
(Score: 0) by Anonymous Coward on Tuesday November 19, @05:48PM (1 child)
It's like anecdotes. They provide a brief a-ha then fade away like wisps of unearned knowledge.
(Score: 0) by Anonymous Coward on Tuesday November 19, @05:51PM
Gah, I meant aphorisms. Stupid stupid brain!
(Score: 5, Interesting) by Covalent on Monday November 18, @09:37PM (6 children)
I was about 10. I was coding on an Amiga 500 in BASIC. I had gotten a book that described some simple functions and challenged the reader to write programs to do various tasks. One task was to write a prime number finder. I wrote the code and watched in amazement as the numbers 2, 3, 5... appeared on the screen. But by 100 or so, the computer was slowing noticeably.
I talked to my uncle about it - he was in the air force and did computer work. He looked at my code and said "You know... you don't have to check ALL of the divisors..."
I thought about it for a minute. "Wait, even numbers obviously won't work!"
"And..."
"OH OH Numbers bigger than half..."
"not half exactly..."
"ooh ooh square root"
I ran back to the computer and edited my code. This time, it produced primes well into the hundreds before slowing down.
My mind was BLOWN. Such small changes could result in such incredible improvements in performance!
I became kinda obsessed with optimization, an obsession I never lost, because optimization is life.
Eureka!
You can't rationally argue somebody out of a position they didn't rationally get into.
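A minimal sketch of the two optimizations from the story above, in Python rather than Amiga BASIC (illustrative, not the original code): skip even divisors and stop at the square root.

import math

def is_prime(n):
    """Trial division, checking 2 and then only odd divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 100) if is_prime(n)])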
(Score: 3, Interesting) by turgid on Monday November 18, @09:44PM
My Eureka moment came when I suddenly realised that any computer could emulate any other in software, given enough time and space (memory). I was about 9 or 10 too. There have been many other moments like that since, and that's what keeps me interested. I love learning new things.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by DrkShadow on Monday November 18, @10:16PM (3 children)
> you only need to check the list of primes that you've generated so far, up to the square root of the current number.
(Score: 3, Insightful) by vux984 on Monday November 18, @10:53PM (2 children)
That requires storing the list of primes you've generated so far, which on an early computer with early BASIC was not necessarily simple.
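For comparison, a sketch of the approach in the parent comment -- divide only by the primes found so far, up to the square root -- which buys fewer divisions at the cost of the growing list mentioned above (Python, illustrative only):

import math

def primes_up_to(limit):
    """Keep the primes found so far and test each candidate only against
    those stored primes that are <= sqrt(candidate)."""
    primes = []                          # this growing list is the memory cost
    for n in range(2, limit + 1):
        root = math.isqrt(n)
        if all(n % p != 0 for p in primes if p <= root):
            primes.append(n)
    return primes

print(primes_up_to(100))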
(Score: 2) by DannyB on Tuesday November 19, @06:14PM
Yep! Imagine having only, say, 48K (on an Apple II) or 128K on an Apple ///, or if you really had money, 256 K of memory on an IBM PC!
We sooooo take for granted having gigabytes of memory.
Of course, if your integers are only 16 bit, you're not going to be computing with primes which are all that big.
One technique for testing the primality of a number is to GCD it (a fast algorithm) with a "highly composite number". [wikipedia.org]
Back in the day one of the first things I did to test if a number was prime was GCD a number with 5040 to see if it had any common factor. After that, I might try checking all the primes up to the square root -- excluding the primes that were part of the highly composite number.
But there can be no perfect prime minister . . .
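A sketch of that screening trick, on the assumption I've read it right: 5040 = 2^4 * 3^2 * 5 * 7, so for any n > 7 a common factor with 5040 means n is divisible by 2, 3, 5 or 7 and is therefore composite; only the survivors need trial division (Python, using plain odd divisors rather than a stored prime list, for brevity):

import math

def passes_gcd_screen(n):
    """Cheap first pass: gcd(n, 5040) > 1 rules n out immediately (for n > 7)."""
    return math.gcd(n, 5040) == 1

def is_prime(n):
    if n < 2:
        return False
    if n in (2, 3, 5, 7):
        return True
    if not passes_gcd_screen(n):        # shares a factor with 5040: composite
        return False
    # Survivors are coprime to 2, 3, 5 and 7; trial-divide by odd d >= 11.
    for d in range(11, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 120) if is_prime(n)])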
A more fun problem I worked on in BASIC: how to generate all of the primitive (i.e., not multiples of smaller ones) Pythagorean triples where a² + b² = c². Starting with 3, 4, 5 but excluding 6, 8, 10 because it is a multiple. Okay, you need an a, b and c. You would obviously try increasing the a's using only prime numbers of course, and then try b's which are higher than the current a. But how much higher can b get? Eventually, as b keeps increasing, a² becomes smaller than the gap between b² and the next square, (b+1)², so a² + b² can never land on a perfect square again and you can move on. That means this program not only needs a list of primes, but a list of squares. Or an abstract collection or sequence that dynamically generates the next prime or square upon demand. This is where modern language abstractions are handy, along with gigabytes of memory.
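A brute-force sketch of the primitive-triples problem (Python; this is a plain gcd test over candidate pairs, not the prime-based search described above):

import math

def primitive_triples(limit):
    """All primitive Pythagorean triples with c <= limit:
    a^2 + b^2 = c^2 and gcd(a, b) = 1, so no multiples of smaller triples."""
    triples = []
    for a in range(3, limit):
        for b in range(a + 1, limit):
            c2 = a * a + b * b
            c = math.isqrt(c2)
            if c > limit:
                break
            if c * c == c2 and math.gcd(a, b) == 1:
                triples.append((a, b, c))
    return triples

# Includes (3, 4, 5), (5, 12, 13), (8, 15, 17), but not the multiple (6, 8, 10).
print(primitive_triples(50))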
As a teen, even entering college at 18, I was definitely Dunning-Kruger'ed with BASIC. I thought I could do anything in BASIC! I read an AI book and saw the MINIMAX algorithm. Ah, so I just need to be able to clone TicTacToe boards. But BASIC didn't have data structures, and I didn't yet know what data structures were. But I hypothesized that any struct (not a term I knew) that had members, say x, y and z, could be simulated by three arrays:
DIM x(100), y(100), z(100)
Then each "instance" of the "struct" containing x, y, z, would be an index into the three arrays. So I was inventing heap managed structs in basic and having to implement the management myself.
Then I encountered Pascal.
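A sketch of that parallel-array trick in Python terms (the names are made up; the original was a set of BASIC DIM statements): each "record" is just an index shared across the arrays, and "allocation" is handing out the next free index.

# Parallel arrays standing in for a struct { x, y, z }, as in
# DIM X(100), Y(100), Z(100) -- the record is the shared index.
MAX_RECORDS = 100
xs = [0] * MAX_RECORDS
ys = [0] * MAX_RECORDS
zs = [0] * MAX_RECORDS
next_free = 0                      # crude "heap": bump allocation only

def new_record(x, y, z):
    """Hand out the next unused index and fill in its fields."""
    global next_free
    i = next_free
    next_free += 1
    xs[i], ys[i], zs[i] = x, y, z
    return i

board = new_record(1, 2, 3)        # an "instance" is just an integer handle
print(xs[board], ys[board], zs[board])   # 1 2 3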
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2) by namefags_are_jerks on Wednesday November 20, @01:16AM
Yep, array sizes were fixed (in Microsoft-originated BASICs at least), so a growing list of numbers was hard to manage.
One of my favorite techniques on Apple2/Commodores with an MS BASIC was to use the Integer "%" type -- "FOR A%=3 TO 32767 STEP 2", etc. -- as they used only 2 bytes of memory (instead of ~5) and were /slightly/ faster.
(Score: 2) by namefags_are_jerks on Wednesday November 20, @01:09AM
I didn't have an Uncle around, but I remember doing the exact same thing on my VIC20 when I was 13. :)
My most-fondly remembered young programmer Eureka:
I was 16, and writing a File Compression program (just RLE..) in machine code on the Commodore 64. To give me a nice humanized result display, I wanted the amount of size reduction shown as a decimal percentage, but the uint16_t words in my code weren't going to be usable by the int16_t INT2FLOAT routines in the 64's BASIC ROMs...
I looked into finding an easy way to knock up my own percentage calculator in 6502 machine code, to at least 3 digits of precision.. but it was going to add about ~150 bytes to the decompression routines..
While thinking about doing all that work for nothing, and saving the code to perhaps use in another program, the blinding flash of inspiration ZOTTED like it had never ZOTTED before..
...just divide the uint16_t's by 2
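A sketch of why that works (Python standing in for the 6502 code): halving both sizes preserves the ratio to well within 3 digits, and the halved values now fit in the signed 16-bit range the ROM conversion routines expect.

def percent_saved(original, compressed):
    """original and compressed are 16-bit unsigned sizes (0..65535).
    Halving both keeps the ratio essentially unchanged and keeps the
    values within 0..32767, i.e. inside signed 16-bit range."""
    orig_half, comp_half = original // 2, compressed // 2
    assert orig_half <= 32767 and comp_half <= 32767
    return 100.0 * (orig_half - comp_half) / orig_half

print(round(percent_saved(50000, 20000), 1))   # 60.0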
(Score: 5, Interesting) by stormreaver on Monday November 18, @09:48PM (4 children)
I was trying to learn to program multitasking (really multiprocessing) on the CoCo 3 using its IRQ interrupt, and was getting nowhere fast. The program locked the computer immediately upon starting. I had spent many hours trying to figure it out, but to no avail. It was late at night, so I went to bed angry.
That night, I had a dream where assembly code was scrolling from the bottom of a long, wide, imaginary paper tape to the top of the tape. I immediately recognized the IRQ patterns, and woke up when the scrolling stopped. I immediately jumped onto my computer and typed in the assembly code I remembered from my dream. And it worked! On the first try! I was stunned in two ways.
(Score: 3, Informative) by vux984 on Monday November 18, @10:49PM (1 child)
Heh what a trip... my first moment was also on a CoCo -- I was trying to write a game using the joystick, and the example in the manual simply sampled joy-x and joy-y and drew a pixel at the point. (kind of like a very terrible drawing app) and I was just completely stymied as to how to use this to control my 'little guy' for weeks.
I lived out in the sticks, no user groups, no support, nothing. And then one day, at school, it just clicked... Instead of setting the X,Y coordinates to what I read from the joystick, increment or decrement them based on whether the joystick was reading above or below the midpoint value. From there it was trivial. I couldn't wait to get home to try it... and have rarely felt prouder than when it worked. Haha.
It was shortly after that that I figured out I could control a whole glyph instead of just a pixel by drawing the other pixels relative to the first one. It was a couple years later before I started using a 'data model' for the game state, instead of using the screen itself and just reading pixel values right from the screen (e.g. to know if I was hitting a wall in a maze, or if I'd hit an enemy...)
So... yeah... here's the manual, with the one example they gave, "Painting with Joysticks", on page 116 -- I can't believe I remembered it so clearly:
https://colorcomputerarchive.com/repo/Documents/Manuals/Hardware/Color%20Computer%203%20Extended%20Basic%20(Tandy).pdf [colorcomputerarchive.com]
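A sketch of that control scheme (Python pseudo-game-loop rather than CoCo BASIC; the 0..63 joystick range is the CoCo's, everything else is made up): move relative to the current position based on which side of the stick's midpoint each reading falls.

# Hypothetical game loop step: the CoCo joystick returns 0..63 per axis,
# with roughly 31 as centre. Nudge the sprite instead of jumping straight
# to the sampled coordinates.
MID = 31

def step(x, y, joy_x, joy_y):
    """Move (x, y) one pixel toward whichever side of centre the stick is on."""
    if joy_x > MID:
        x += 1
    elif joy_x < MID:
        x -= 1
    if joy_y > MID:
        y += 1
    elif joy_y < MID:
        y -= 1
    return x, y

print(step(10, 10, 63, 0))   # (11, 9): nudged one pixel on each axis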
(Score: 2) by stormreaver on Thursday November 21, @01:02AM
I just skimmed through the book, and it brought back a ton of memories. That was one of the great things about getting into computing before Microsoft rose to power -- the computer came with a programming manual, and the language taught by that manual was in the computer's ROM. It took the CoCo 3 (and I presume Commodore and Apple computers) about 0.25 seconds to go from power-on to ready to use. It was an almost magical time.
(Score: 4, Interesting) by Tork on Tuesday November 19, @12:05AM
I've had similar moments many times, solving execution issues with dreams I mean. I actually couldn't wrap my head around classes in Python for the longest time. I had a couple people explain them to me, but for me it was like trying to eat metal wool. Then one night, in the middle of a dream, it was like a little voice in my head said: "Look, stupid, see how sometimes that command needs parentheseseses and sometimes it doesn't? That's because that object has functions AND variables! Put two and two together, dummy!" ... ok I'm not explaining it clearly here but in my dream I finally made that intuitive connection I needed to get what my buddies were saying.
Then I proceeded to make every function a class and over-designed a bunch of helper tools. :D
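For anyone else who needed that dream, a minimal Python example of the distinction: attributes are read without parentheses, methods are called with them.

class Counter:
    def __init__(self):
        self.value = 0          # an attribute: data living on the object

    def bump(self):             # a method: a function living on the object
        self.value += 1
        return self.value

c = Counter()
c.bump()                        # needs parentheses: it's a function
print(c.value)                  # no parentheses: it's just a variable -> 1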
🏳️🌈 Proud Ally 🏳️🌈
(Score: 2) by DannyB on Tuesday November 19, @06:16PM
You are not alone in having solved programming problems in a dream.
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 1, Insightful) by Anonymous Coward on Monday November 18, @10:14PM
An "external" eureka moment could be summed up as,
> I never thought of it like that. Huh.
It's not really new information, it's a way of combining things that you already know.
Generally I don't have "eureka" moments. It's always just small pieces fitting into a puzzle, finally linking multiple separate chunks together. Often it comes just because I needed the method (kubernetes, ugh), and I spend a *great deal* of time preparing for these moments by, e.g. reading the C23 list of new features, GCC attributes, the kubernetes object schemas, just for a recent version of what's-been-on-my-plate. You learn things, in some depth, and then you "eureka" them together with one connecting piece. When I need to relate knative "services" to "pods", it's all because I read the "dependencies" -- not because I suddenly discovered the "VirtualService" object.
It's just a small piece, and it doesn't make the whole puzzle -- it's just one small piece that would be meaningless without all the others, and it doesn't shape the puzzle any more than the pieces linking the rest do... whatever. I feel like "eureka" is overrated. You get older, things become much more methodical, you have *MUCH MUCH* more to draw on, and so things lose their magic. OTOH, when it feels like magic, you go "Eureka!" but IMO it's just not.
(Score: 2) by VLM on Tuesday November 19, @02:14PM (1 child)
In C, you have to learn ALL of pointers, arrays, strings, structs... then suddenly it all makes sense. You need to learn enough of it, all at the same time; until then it seems mysterious.
In LISP you have lambdas and anonymous functions and none of that makes sense individually until you understand the whole system, then, oh, that's how it works, pretty cool.
In automata theory there's tons of proofs and definitions to more or less memorize and then you see how it all works together and suddenly it makes sense.
Maybe trigonometry, at least its identities, is like that. Oh, I see, it's not just luck that you can express some trig functions in terms of some other trig functions; it's that there's a set where you can represent ALL trig functions in terms of ALL other trig functions. Oh, that's interesting now: it's a closed set of formulas, almost like one of those ring structure things in abstract algebra. AA is another "you have to understand this much to make sense of anything, after which it all makes sense".
(Score: 2) by DannyB on Tuesday November 19, @06:24PM
I remember when I first learned Lisp in late 1986. And it was profound. However the textbook I was using started with the basics. How to change the values of variables. How to create functions. How to do list manipulation, splicing and list surgery.
Then it introduced the idea that functions could be first class. That is, a function could be assigned to a variable, or stored as a member (and changed) in a data structure, or passed as a parameter. That was my first Eureka moment with Lisp.
Then it introduced anonymous functions. You didn't just take the name of a named function, and pass it as a parameter, or store it in a variable. A function might have no name and just be a value, like the value 5. You could store that function in a variable or pass it as a parameter. Next Eureka moment.
Then, later, when I was reading Common Lisp the Language (1st edition) and using Macintosh Common Lisp (MCL), I was introduced to the concept of closures. Another mind-blowing Eureka moment. An anonymous function could "capture" or "close over" variables in the current lexical environment outside of but surrounding the function definition. And each of these closures was unique and had captured a different, separate snapshot of the then-current lexical environment.
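Python closures behave the same way, which may make the idea concrete (an illustrative sketch, not Common Lisp): each call to the factory below closes over its own count, so every returned function carries a separate snapshot of that lexical environment.

def make_counter(start):
    count = start                  # the variable being closed over

    def bump():
        nonlocal count             # capture and mutate the enclosing binding
        count += 1
        return count

    return bump                    # a function value carrying its environment

a = make_counter(0)
b = make_counter(100)
print(a(), a(), b())               # 1 2 101 -- each closure has its own count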
The next eureka moment was for me to puzzle out how this could be implemented efficiently, and I did.
From 1986 to 1993 I was dazzled with Lisp. Eventually in 2014 I spent a few more years being dazzled with Clojure.
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2) by VLM on Tuesday November 19, @02:19PM
Perhaps a future format of education or higher education -- post-calculator, post-computer, post-search-engine, post-Wikipedia, post-AI -- would focus primarily on teaching all the Eureka moments, rather than mainly on mechanistic processes as now, while hoping (or assuming) the kids notice.
The idea of teaching math from the perspective of its overall shape and how it fits together, rather than starting by memorizing details and processes and then hoping they get the Eureka moment later on, is interesting.
Rather than "teaching databases" by memorizing MySQL statement formats and hoping they get the right idea, start by teaching them Codd's normal forms and why they want them. Maybe giving them the motivation to learn JOINs on their own would be more successful than forcing them to learn JOINs without understanding why they'd want to do one.
(Score: 3, Interesting) by VLM on Tuesday November 19, @02:29PM
I will toss out a single discrete example.
Let's say you're teaching kids the quadratic formula by demonstrating how to derive it.
The ones that memorize it because the teacher said so will grow up to be the most boring and unthinking "I freakin love science" authoritarians you can imagine.
The very small number of kids who realize they are watching a problem-solving process or strategy being turned into a formula... or being turned into a function... hey... isn't that the process of writing a computer program being done live on a blackboard, except they call it Algebra? I wonder what other processes could be turned into a program. Why, I bet someone could write a program that operates on Algebra itself; you could call it a computer algebra system -- oh wait, isn't that a commercial product? Maybe that Turing guy has some insight on what programs can be turned into other programs. Wait, isn't that exactly what a compiler does? So maybe any Turing-complete language can be translated into any other Turing-complete language, even if it's a PITA? And there seems to be only one "universe" of Turing-complete programs, separate from English conversational prose. A lot to think about, even if I didn't know the names and words for this stuff for a decade or so afterwards. Huh, so the quadratic formula is a computer program that runs on brain cells...
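In that spirit, the derivation really does become a function once it's written down (a Python sketch; cmath handles the complex-root case and a = 0 is rejected):

import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0: the completed square turned into code."""
    if a == 0:
        raise ValueError("not a quadratic")
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -3, 2))   # ((2+0j), (1+0j))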