SoylentNews is people


Journal of Non Sequor (1005)

Wednesday March 02, 16
02:27 AM
/dev/random

So, let's start with game theory. Picking a side of the road to drive on is a "coordination game". If you add someone who drives on the left to a population of right-side drivers, the situation rapidly corrects itself (in some fashion).
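The self-correcting dynamic is easy to make concrete. Here's my own toy sketch (the function name and the conform-to-the-majority rule are my assumptions, chosen only to illustrate the coordination game): each round, one driver switches to whichever convention is currently in the majority, and the minority side dies out.

```python
import random

def simulate(n_left, n_right, rounds=10_000, seed=0):
    """Toy coordination game: each round one driver conforms to the
    current majority convention. The minority side dies out quickly."""
    rng = random.Random(seed)
    left, right = n_left, n_right
    for _ in range(rounds):
        if left == 0 or right == 0:
            break                     # everyone already coordinated
        if left > right:              # majority drives on the left
            left, right = left + 1, right - 1
        elif right > left:            # majority drives on the right
            left, right = left - 1, right + 1
        elif rng.random() < 0.5:      # tie: break it at random
            left, right = left + 1, right - 1
        else:
            left, right = left - 1, right + 1
    return left, right

# One left-side driver dropped into 99 right-side drivers conforms fast.
print(simulate(1, 99))   # -> (0, 100)
```

Either convention is a stable equilibrium; which one wins depends only on the starting majority, not on any intrinsic merit of left versus right.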

People tend to think that truth is absolute and that interactions between rational actors should tend to cull untenable viewpoints. So when others' views diverge from their own, they conclude that the other side is irrational.

However, a set of beliefs consists of a mixture of valid, reasonable observations, partially valid heuristics, and the occasional comfortable lie. The question to consider is how people update these beliefs in light of new facts and after interactions with people holding opposing viewpoints.

It's entirely possible to imagine that you could have two or more sets of beliefs distributed in a population that are jointly stable, so that if there are two belief systems A and B, all interactions between members of the population reinforce both belief systems. When a member of A interacts with another member of A, they both adjust their beliefs to be closer to the center of A. When a member of A interacts with a member of B, they both adjust their beliefs to be further from each other.
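This A/B dynamic can be sketched as a toy attract/repel opinion model. Everything here is a hypothetical illustration of mine (the parameters, the threshold rule, and the initial camps are arbitrary choices, not a fitted model): like-minded agents move closer, opposed agents move apart, and both camps persist indefinitely.

```python
import random

def step(opinions, attract=0.2, repel=0.05, threshold=0.5, rng=random):
    """One pairwise interaction: similar opinions move toward each
    other; dissimilar ones move apart (clipped to [-1, 1])."""
    i, j = rng.sample(range(len(opinions)), 2)
    a, b = opinions[i], opinions[j]
    if abs(a - b) < threshold:   # like-minded: converge
        opinions[i] = a + attract * (b - a)
        opinions[j] = b + attract * (a - b)
    else:                        # opposed: diverge
        opinions[i] = max(-1.0, min(1.0, a - repel * (b - a)))
        opinions[j] = max(-1.0, min(1.0, b - repel * (a - b)))

rng = random.Random(1)
# Two loose camps: A around -0.75, B around +0.75.
ops = ([rng.uniform(-0.9, -0.6) for _ in range(20)]
       + [rng.uniform(0.6, 0.9) for _ in range(20)])
for _ in range(20_000):
    step(ops, rng=rng)
camp_a = [o for o in ops if o < 0]
camp_b = [o for o in ops if o >= 0]
print(len(camp_a), len(camp_b))   # both camps persist: 20 20
```

Every within-camp interaction tightens the camp and every cross-camp interaction pushes the camps apart, so the two-cluster state is jointly stable: no sequence of interactions erodes either belief system.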

Different groups can have different approaches to assimilating new information that maintain the stability of their set of beliefs. There may be limits to this: some developments may destabilize the system and force a transition to a new equilibrium.

In American political history, the migration of the Dixiecrats to the Republican party might be an example of this.

Thursday November 05, 15
01:34 AM
/dev/random

I have just seen the most revolting thing the human race has ever produced in a British baking show. Behold and tremble in terror:

http://www.pbs.org/food/recipes/rhubarb-prune-apple-pork-pies/

Mankind's greatest atrocity consists of a kilogram of flour, half a kilogram of lard, four kilograms of pork, a puree of rhubarb, prunes, and apples, and is garnished with edible flowers and bay leaves. This monstrosity was the day's winning dish.

The losing dish involved a lavender meringue, and stands as a small consolation to the civilized world by showing that there is some outer limit to what the British consider to be edible.

Saturday April 04, 15
01:53 AM
/dev/random

Lots of very smart people think that superhuman AI is just a matter of time. They say that once we get our foot in the door with an intelligence that can learn more quickly than we can, it will bootstrap itself past us. I see a few problems with that line of thinking.

1. There are barriers to making a given strategy for intelligence scale arbitrarily.

If you make intelligence denser, i.e. if you put more computational capacity in a smaller space, you increase heat dissipation problems and you decrease the ratio of I/O speed to processing speed. Both problems are tied to surface area, so you get a surface-area-versus-body-mass problem: the optimal size of a given design ends up being determined by how much surface area is needed to service the needs of a given amount of body mass.
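That surface-area-versus-body-mass tradeoff is the square-cube law, and a few lines make the scaling explicit (the sphere and the demand/supply labels are my illustrative framing):

```python
from math import pi

def demand_to_supply(r):
    """Square-cube law for a sphere of radius r: 'demand' (compute,
    heat) scales with volume ~ r^3, while 'supply' (cooling surface,
    I/O bandwidth) scales with area ~ r^2. The ratio works out to
    r/3, so it grows without bound as the design gets bigger."""
    volume = (4.0 / 3.0) * pi * r ** 3
    surface = 4.0 * pi * r ** 2
    return volume / surface

# Each doubling of radius doubles the heat per unit of cooling surface.
for r in (1, 2, 4, 8):
    print(f"r={r}: volume per unit of surface = {demand_to_supply(r):.2f}")
```

At some radius the surface can no longer service the volume, which is where the scaling of a dense design tops out.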

On the other hand if you make your intelligence more diffuse, i.e. you use a network of agents which individually have a small amount of intelligence but collectively have a larger amount of intelligence, you run into coordination, synchronization, and communications overhead problems as well as a lack of traction on problems that are inherently serial. Your nodes may spend more time talking to each other than they spend on the problem you want to solve or different regions of the network may be taking different tacks on the problem when a single coordinated approach is better.
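The "inherently serial" limit on the diffuse approach is usually stated as Amdahl's law: the serial fraction of a problem caps the speedup from adding nodes, no matter how many you add. A minimal sketch:

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Amdahl's law: overall speedup from n workers when some fraction
    of the work is inherently serial. The serial part caps the gain."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# With 10% of the work inherently serial, no number of nodes beats 10x,
# and that bound ignores all coordination and communication overhead.
for n in (10, 100, 1000):
    print(f"{n:>4} workers: {amdahl_speedup(0.1, n):.2f}x")
```

Real networks of agents do worse than this bound, since the formula charges nothing for the synchronization and communication overhead described above.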

There are, obviously, ways of working around these issues. Human brains are made up of diffuse nodes with some coordinating structures and the brain has a high surface area with key information processing being performed close to the surface. Humans in turn organize themselves into groups which are in turn part of a society. There are tradeoffs between individual autonomy and pursuit of one's own problem solving agenda and social organization toward a common problem solving agenda.

Improvement in problem-solving capability is possible by hill climbing along some of those tradeoffs; however, local maxima are frequently encountered, so there is no overall progression toward a global maximum, only ongoing negotiation between changing circumstances and newly developed opportunities.
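Getting stuck at a local maximum is easy to demonstrate with a greedy hill climber on a two-peaked toy landscape (the landscape and step size are arbitrary choices of mine, purely for illustration):

```python
def hill_climb(f, x0, step=0.1, max_iters=1000):
    """Greedy hill climbing: move to a neighbor only if it improves f.
    Stops at the first peak it reaches, which need not be the best one."""
    x = x0
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            break        # no improving neighbor: a local maximum
        x = best
    return x

def landscape(x):
    """Two peaks: a small one at x=1 (height 1), a big one at x=4 (height 3)."""
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 3 * (1 - (x - 4) ** 2))

print(round(hill_climb(landscape, x0=0.0), 1))  # stuck on the small peak: 1.0
print(round(hill_climb(landscape, x0=3.5), 1))  # found the big peak: 4.0
```

Which peak the climber reaches depends entirely on where it starts; nothing in the local improvement rule ever tells it a better peak exists elsewhere.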

2. Some social problems can't be resolved because the data needed to come to an objective resolution can't be collected; or can be collected, but is too readily polluted by subjective methodology decisions; or is only generated by natural and human processes that operate on a timescale slower than the timescale on which a decision is needed. As a result, people settle into stable equilibria of interpretations of the data, and of the methodologies they view as credible, because having a bad strategy for dealing with incomplete data tends to outperform making no decisions whatsoever.

We call these stable equilibria political beliefs. Just because an intelligence settles into one of these equilibria does not mean that the decisions consistent with those beliefs are right; it just means they are one of a number of reasonable strategies for reducing the inconsistencies in the body of data and its interpretations. If a superintelligence disagrees with you politically, is that because it's right and you're wrong, or is it just something it got stuck on because of its upbringing?

3. Lots of real world problems have upper bounds on their margin for improvement.

There are plenty of problems that, as currently framed, are bounded by thermodynamic or material-resource limits, or whose optimal solutions are not computationally tractable, so that only approximations are possible. Reframing problems to relax these constraints may impose new constraints on other problems, and this metaproblem is likely to be more difficult than the individual problems.
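The tractability point can be seen in the textbook 0/1 knapsack problem: the exact answer costs time exponential in the number of items, while a fast greedy heuristic can miss the optimum. The instance below is a standard teaching example, not anything from the post.

```python
from itertools import combinations

def greedy_value(items, capacity):
    """Greedy by value density: fast, but only an approximation."""
    value = weight = 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight + w <= capacity:
            value, weight = value + v, weight + w
    return value

def exact_value(items, capacity):
    """Exact answer by brute force over all 2^n subsets -- the cost
    that makes us settle for approximations on large instances."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) pairs
print(greedy_value(items, 50))   # 160 -- greedy grabs the densest item first
print(exact_value(items, 50))    # 220 -- the true optimum skips it
```

A superintelligence faces the same wall: it can approximate faster or reframe the problem, but the reframing is itself a harder optimization with its own constraints.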

4. Awareness of cognitive fallacies does not necessarily make one immune to those fallacies.

Many highly intelligent individuals are in many ways unreliable, and some turn their intelligence toward rationalizing and propagating things a majority of society sees as errors. This may be a consequence of items 1-3.

Altogether, these issues mean that while AI holds great possibilities for improving society, it's more likely to be a tool in our arsenal (and potentially a new type of individual in our society) than a victory condition for society. That leads me to expect an ongoing coevolution of humans and the intelligence embedded in our technology, rather than a point where the embedded intelligence hits some critical mass and runs off on its own.