Cool Vandalism

Posted by takyon on Friday November 09 2018, @12:13AM (#3656)
2 Comments

Incredible success tonight!!!!

Posted by realDonaldTrump on Wednesday November 07 2018, @05:30AM (#3653)
10 Comments
News

Congrats to our brand-new Senators. I call them our Senators-Elect. Senator Cramer from North Dakota. Who's taking a seat away from the Dems. Mike Braun in Indiana, taking another Dem seat. Great job, guys!

Senator Elect Romney of Utah. Who said I'm PHONEY. Who said, I'm a fraud. And I said, Mitt, how would you like to be my Secretary of State? He said, "oh, thank you, President Trump!" What a dope, he couldn't tell I was just kidding! But he's taking over from Orrin. And that's a guy that believes the Crooked Failing Fake News @nytimes. When they said HORRIBLE things about my taxes. Things that according to my lawyer are 100 percent false and HIGHLY defamatory.

And we have so many more Senator-Elects. Marsha Blackburn of Tennessee. And from Tennessee. The upgrade from Liddle Bobby Corker. I did a BRILLIANT rally for Marsha, in Johnson City. They love me in Johnson City, Tennessee. Marsha's a tremendous woman. I'm sure Low I.Q. Taylor Swift has nothing or doesn't know anything about her.

Matt Rosendale in Montana, I gave Matt my FULL endorsement. I don't give these endorsements easily. And he's winning very solidly. Great job, Matt!

HUGE victory for our @GOP -- and for the American people. A BIG HAND to everyone that voted. And everyone that didn't. But especially, thanks to me. I've been working very hard. As everybody knows. Holding so many #MAGARally's. Letting the Forgotten People know they're FORGOTTEN NO MORE!!!! 👏 #RedTide

PSA: Google Becoming Crap at Finding Old SoylentNews Stories

Posted by takyon on Tuesday November 06 2018, @12:40AM (#3651)
10 Comments
Soylent

I have used Google to look for previous SN stories for a year or two now, mainly because there have been inconsistencies with SN's own internal search engine in the past (those may have been fixed since). I use the "site:soylentnews.org" search parameter plus the keywords, and I have a textbox that automatically adds the "site:soylentnews.org" bit to the query.
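
For the curious, here's roughly what that textbox boils down to, as a minimal Python sketch (nothing SN provides; the only real ingredients are the standard Google "q=" parameter and the "site:" operator):

    from urllib.parse import urlencode

    def sn_google_query(keywords: str) -> str:
        """Build a Google search URL restricted to soylentnews.org."""
        return "https://www.google.com/search?" + urlencode(
            {"q": f"site:soylentnews.org {keywords}"}
        )

    print(sn_google_query("china radio telescope"))
    # https://www.google.com/search?q=site%3Asoylentnews.org+china+radio+telescope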

While writing this submission:

China Still Has Trouble Staffing the World's Largest Radio Telescope

I'm pretty sure that the following story exists on site, although I don't know where it is or know the headline just yet:

China Can't Find Anyone Smart Enough to Run its Whizzbang $180M 500 Meter Radio Telescope

So I search for stuff like "china radio telescope" and "aperture spherical telescope". No dice.

I use SoylentNews internal search, looking for "Aperture Spherical Telescope", and it works.

Let's try "china radio telescope smart enough". Nope, nothing.

Let's try the exact title of the submission. Welp, there it is, finally. Except that just before finishing this journal entry a few minutes later, it no longer worked (I checked to see if it was a special character issue, and it doesn't seem to be).

This isn't the only example that I've come across, and it seems to have gotten worse in recent weeks. IIRC I had trouble looking for previous stories for an opioid-related submission. These have been popping up often enough that I may just ditch the GOOG for this purpose, especially since SN's search seems to work just fine.

AMD Event on Nov. 6th

Posted by takyon on Saturday November 03 2018, @12:47PM (#3646)
2 Comments
Hardware

AMD Investor Relations Announces “Next Horizon” Event for November 6th

On Election Day? Gee, what's the bad news?

Anyway, this is likely related to "7nm" Zen 2 Epyc server CPUs, which will debut well before desktop or mobile variants. They might also announce a Radeon RX 590 "12nm" Polaris GPU or talk about "7nm" Vega GPUs.

If Zen 2 Epyc has 64 cores, and Zen+/Zen 2 Threadripper has 32 cores, then Zen 2 Ryzen could have up to 16 cores.
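
Spelling out that arithmetic (pure speculation, just carrying the same halving forward from the first Zen generation's 32 / 16 / 8 split):

    # Guessed core counts, assuming the Epyc : Threadripper : Ryzen ratio
    # stays at 4 : 2 : 1, as it was for the first Zen generation (32 / 16 / 8).
    epyc_zen2 = 64
    threadripper = epyc_zen2 // 2     # 32
    ryzen_zen2 = threadripper // 2    # 16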

Fuck Paul Ryan. 🖕

Posted by realDonaldTrump on Wednesday October 31 2018, @08:26PM (#3639)
19 Comments
Topics

Paul Ryan should be focusing on holding the Majority rather than giving his opinions on Birthright Citizenship, something he knows nothing about! Vote Republican Tuesday! Our new Republican Majority will work on this, Closing the Immigration Loopholes and Securing our Boarders. This is a NATIONAL EMERGY! Must change laws!!!! #RedTide #14thAmendment #MAGA

Linkage again: Sunday, 2018/10/28

Posted by HiThere on Wednesday October 31 2018, @06:41PM (#3638)
0 Comments
Software

This raises a problem, though, as it implies that the central coordinator needs to know, i.e. have a link to, every object that _MIGHT_ be the foreground object, and that means all objects. This is readily doable by a program, which would just need to register the address of the foreground object, but neurons use a different connection protocol (i.e., physical proximity based connections), and appear to be limited to 10,000 or so connections. It’s hard to see how a network style connection would work, so it seems as if a hierarchical connection could be needed. This, however, would still raise problems with back connections. Also, connecting distant neurons to each other has no obvious mechanism quicker than growing an extension to the axon. Even that would require maintaining a particular chemical gradient for quite a long time, days or weeks.
This is a problem that repeatedly appears. Some known processes depend on a local chemical gradient to work properly, i.e. to properly localize action. Others seem to require coordination of distant connections between particular objects. Neither has an obvious analog in program logic, which implies that this is the wrong order of abstraction. Probably the correct physical analog is the neural column. Since a neural column is composed of thousands of neurons the arguments used above don’t apply. Unfortunately I don’t understand the properties of a neural column anywhere near as well, so it’s much less useful as a model.
Abandoning the analog, then, the active foreground object must register itself centrally so that it can be found by not-currently-linked objects that become active. The links established are two way, but stronger from the new object to the foreground object than in the reverse direction.
Now being the foreground object is a very transient condition.
… … ...
I note a bit of confusion here. The foreground object is not the same as the top object. The top object is the active object that contains all the other active objects. New objects need to be linked both to the top object and the foreground object. This enables “state specific memory”, as it means that, e.g., being in a location makes it more likely that other things learned in that location will be brought to mind, but as the link is weak it does not itself suffice. Of course it could be reinforced in the location sufficiently that merely thinking of the location would re-activate the memory.
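
Read as toy code, that registration scheme might look something like this (the class names and the 1.0 / 0.2 weights are invented, purely to illustrate the asymmetric two-way links to the foreground and top objects):

    class Obj:
        def __init__(self, name):
            self.name = name
            self.links = {}                 # other Obj -> link strength

        def link_to(self, other, strength):
            # create the link, or reinforce it if it already exists
            self.links[other] = self.links.get(other, 0.0) + strength


    class Registry:
        """Central place where the current foreground object registers itself."""
        def __init__(self, top):
            self.top = top                  # the object containing all active objects
            self.foreground = top           # very transient; changes constantly

        def set_foreground(self, obj):
            self.foreground = obj

        def activate(self, new_obj):
            # New objects are linked two-way to both the foreground and the top
            # object, but the links toward them are stronger than the links back.
            for target in (self.foreground, self.top):
                new_obj.link_to(target, 1.0)   # strong: new -> foreground/top
                target.link_to(new_obj, 0.2)   # weak back-link


    top = Obj("current scene")
    reg = Registry(top)
    reg.set_foreground(Obj("cat"))
    reg.activate(Obj("food bowl"))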

State Specific Memory: Saturday, 2018/10/27

Posted by HiThere on Wednesday October 31 2018, @06:19PM (#3637)
0 Comments
Software

All this makes the “state” of state specific memory quite crucial, and I haven’t yet defined it. To say “it’s all those things that current memories get attached to” is true, but not very useful. I tend to conceive of “state”, in this sense, as the active world image, but this is also a bit vague. Experiments have shown that consciously being aware of something isn’t necessary for it to participate in state. It doesn’t seem useful to conceptualize it as an object, though it is the top container of active objects. Perhaps it’s pure epiphenomenon, and not a real thing at all, but in that case one needs to explain how the activity of the rest of the system could create the illusion that “state” exists, i.e. would provide the effects.
Still, state existing as phenomenon rather than as epiphenomenon seems to create numerous problems. E.g. it seems to exist in innumerable variations and seems to experience partial activation. The only problem it really seems to solve is limiting the necessity for centralized communication. So I need to address that.
Possibly an answer lies in the hierarchical embedding of objects. So, for example, kitchen remains kitchen whether or not the cat is currently being fed. There’s a time linked variation in the “current state of activation” of kitchen. In other words, objects need to allow for components that are not always active (or even present).
The result of this is that objects linked into any nested subcomponent of the current foreground or top active object are linked into the entire chain, with the strongest link at the lowest level. Repeated stimulation will over time strengthen some links. Links that are not strengthened (unless above a threshold of strength) will decay. Perhaps there can also be degrees of strengthening, so that rubber will be strongly linked with tires, but linked to carts much more weakly.
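
A sketch of that chain linking and decay, again with invented names and constants (only the shape of the behaviour, strongest link at the lowest level, strengthening by repetition, decay of weak links, comes from the notes above):

    class Obj:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent            # enclosing object, None at the top
            self.links = {}                 # other Obj -> link strength

        def link_to(self, other, strength):
            self.links[other] = self.links.get(other, 0.0) + strength


    def link_into_chain(new_obj, component, weights=(1.0, 0.5, 0.25)):
        """Link new_obj into the subcomponent it appeared in and into every
        enclosing object up the chain, strongest link at the lowest level."""
        node, level = component, 0
        while node is not None:
            new_obj.link_to(node, weights[min(level, len(weights) - 1)])
            node = node.parent
            level += 1


    def decay_links(obj, rate=0.9, keep_threshold=2.0, floor=0.05):
        """Links that are never reinforced fade; links already above the
        keep threshold are left alone."""
        for other, strength in list(obj.links.items()):
            if strength >= keep_threshold:
                continue
            obj.links[other] = strength * rate
            if obj.links[other] < floor:
                del obj.links[other]


    cart = Obj("cart")
    tire = Obj("tire", parent=cart)
    rubber = Obj("rubber")
    link_into_chain(rubber, tire)    # rubber -> tire at 1.0, rubber -> cart at 0.5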

Separation/Connections: Friday, 2018/10/26

Posted by HiThere on Wednesday October 31 2018, @05:30PM (#3636)
0 Comments
Software

The “separation” of neurons needs consideration. Clearly sensations that are physically close should be considered close, and likely have direct physical contacts at the neuron level, but other linkages are more difficult. On the other hand, these other linkages probably only happen at a higher level (i.e. at a more abstract level). A snarl combined with teeth being bared is at a relatively high level; white spot combining with white spot to form a partial image that will be parsed as a tooth is at a much lower level. At what level is centralized communication needed, and on what basis should this be decided? Well, what’s the purpose of communication? One answer is to create links between objects, so perhaps only top level objects need to link … but this seems insufficient. Actually, the proposal seems roughly equivalent to “frames”, with lots of things left hanging, and that is known not to suffice.
I think that what is needed to solve this problem is the “state specific memory”. When a signal is already a part of the “state” then it doesn’t need the centralized communication, but can simply strengthen and expand the current state. Only when a new signal is being associated with the state does it need to communicate centrally to determine which state it is to be added to. Since this will generate lots of false or “noisy” connections, it’s important that weak connections fade over time.
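
The decision rule above as toy code (the dict-of-strengths representation of “state” and the 1.0 / 0.1 increments are invented; the point is only the branch: signals already in the state reinforce it locally, new signals go through the central step and start out weak):

    class CentralRegistry:
        """Stand-in for the central function that decides which state a
        genuinely new signal should be attached to."""
        def attach(self, signal, state):
            pass  # the costly, centralized step; details left open in the notes


    def handle_signal(signal, state, registry):
        """state: dict mapping currently-active signals to their link strength."""
        if signal in state:
            # Already part of the state: strengthen/expand it locally,
            # no centralized communication needed.
            state[signal] += 1.0
        else:
            # New signal: go through the central function to find its state,
            # then start it off with a weak, noisy link.
            registry.attach(signal, state)
            state[signal] = 0.1


    def fade(state, rate=0.9, floor=0.05):
        """Weak connections fade over time unless they keep being reinforced."""
        for sig in list(state):
            state[sig] *= rate
            if state[sig] < floor:
                del state[sig]


    state = {"kitchen": 1.0}
    handle_signal("kitchen", state, CentralRegistry())    # reinforced locally
    handle_signal("cat food", state, CentralRegistry())   # central step, weak link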

Connections: Thursday 2018/10/25

Posted by HiThere on Wednesday October 31 2018, @05:28PM (#3635)
0 Comments
Software

The co-occurrence of objects even in description is sufficient to create the perception of a connection. Consider how this is used in the “Grandfather’s Clock” song. It is not without reason that Crowley said that the basic rule of ritual magic is “invoke often”.
These seem to all be things that are implemented via Hebb’s Law, but the mechanism is obscure. When there is a synaptic connection, then the mechanism is reasonably clear, but when there’s no connection except synchronicity it’s harder to explain. It does take many repetitions, so even a weak connection would be reinforced, rather like math tables … in fact probably exactly like math tables … but that doesn’t explain the mechanism. We know that physiologically it’s connected somehow to the hippocampus, so some specialized mechanism is quite appropriate. It has to be done via “passive monitoring”, i.e. via receiving signals from the active neurons … but probably only at a rather high level. And we believe that unusual wiring in this area is behind synesthesia.
So … I am assuming that when a cluster of sensations above a threshold of strength is activated, a signal is sent to a central function that receives the signals sent during a small interval of time and establishes or reinforces a connection between them. This appears as if it might strengthen the perception of boundaries between different clusters of sensation. It would also seem to foster the creation of composite objects. Perhaps it also enables the invention of new composite objects from known pieces.
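
A toy version of that central function (the window length, threshold and increment are arbitrary; the idea is just: collect whatever fired above threshold within a short interval, then establish or reinforce pairwise connections among those clusters):

    from collections import defaultdict

    WINDOW = 0.05       # the "small interval of time", in seconds (arbitrary)
    THRESHOLD = 0.8     # activation strength needed before a cluster reports in

    connections = defaultdict(float)   # (a, b) -> connection strength


    def reinforce(events):
        """events: list of (time, cluster_name, strength) reports from clusters."""
        events = sorted((t, c) for t, c, s in events if s >= THRESHOLD)
        for i, (t0, c0) in enumerate(events):
            for t1, c1 in events[i + 1:]:
                if t1 - t0 > WINDOW:
                    break
                # establish or reinforce the pairwise (Hebbian) connection
                connections[tuple(sorted((c0, c1)))] += 1.0


    reinforce([(0.00, "snarl", 0.9), (0.01, "bared teeth", 0.95), (0.30, "kitchen", 0.9)])
    print(dict(connections))   # {('bared teeth', 'snarl'): 1.0}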

Persistence: Wednesday 2018/10/24

Posted by HiThere on Wednesday October 31 2018, @05:27PM (#3634)
0 Comments
Software

The persistence of objects means not only that they continue to exist when you can’t observe them, but also, and more primitively, that when you are watching them they remain the same object. This will probably be inherent in what it means to be an object, but such a concept cannot predate the concept of object.