(Score: 2, Interesting) by Dichzor on Tuesday August 09 2022, @11:15AM (1 child)
So hard to express it...
More like divining what direction the text is pointing to and from, and what it says nothing about... it is very subjective, like most text analysis, I think...
But instead of a goat's liver or tarot cards, I use text-composting tools people usually use for linguistic analysis...
No objectivity, just pattern recognition...
I'll give an example.
I was looking for how the concept of "synaptic current" in neurons was defined, to model it computationally for some unspecified reason.
So I downloaded Izhikevich's "The Geometry of Excitability and Bursting" and then all the texts from its reference list, then did the same for more high-impact papers, as well as Numenta's output and every paper their people ever wrote, etc. I think I ended up with several thousand PDF files. I <3 sci-hub.
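The corpus-building step is just "dump everything into one searchable pile". A minimal sketch of that stage, assuming the PDFs have already been converted to plain-text files (the converter tool and the `build_corpus` helper name are my own illustration, not anything the comment specifies):

```python
from pathlib import Path

def build_corpus(txt_dir: str, out_path: str) -> int:
    """Concatenate every .txt file under txt_dir into one corpus file.

    Returns the number of files merged, so you can sanity-check that
    all the converted papers actually made it in.
    """
    files = sorted(Path(txt_dir).glob("*.txt"))
    with open(out_path, "w", encoding="utf-8") as out:
        for f in files:
            # errors="ignore" because PDF-extracted text is often messy
            out.write(f.read_text(encoding="utf-8", errors="ignore"))
            out.write("\n")
    return len(files)
```

With several thousand papers this gets you the kind of multi-hundred-megabyte corpus described below.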
Then I made a corpus with that program, around 900 megabytes, and used that corpus to look up definitions of all kinds of things (neuron, synapse, hyperpolarisation, depolarisation, current) in all the texts simultaneously; that is what a concordancer does... Which, I think, gave me data on how many ways there currently are to understand and eventually model the synapses on a neuron. They can be seen as discrete units, as an aggregate with varying fidelity in dynamics, as a statistical distribution, as a binary digit if you're BlueBrain, and so on... It also told me which ways of modelling it are missing or considered infeasible, and so on: all the places worth looking into...
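The core of what a concordancer does is keyword-in-context (KWIC) display: every hit for a term, lined up with a slice of surrounding text, so you can skim hundreds of definitions at once. A bare-bones sketch of that idea (the `concordance` function and its parameters are illustrative, not the actual tool's API):

```python
import re

def concordance(text: str, term: str, width: int = 40) -> list[str]:
    """Return KWIC lines: each occurrence of `term` with up to
    `width` characters of context on either side."""
    hits = []
    for m in re.finditer(re.escape(term), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        # Right-align the left context so all the keywords line up
        hits.append(f"{left:>{width}} [{m.group()}] {right}")
    return hits
```

Running this for "synaptic current" over the whole corpus is exactly the move described above: every paper's phrasing of the definition, side by side, and the variation between them is the data.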
I did the same for Edward Said's textual output once, lol, and long ago for all the output of George W. Bush Jr... that is how I found out they both were full of.. but that's an entirely different matter, heheh
Guess I just love stats... *shudder*
(Score: 0) by Anonymous Coward on Wednesday August 10 2022, @11:19PM
That's pretty hardcore. Pretty awesome, but also pretty hardcore.