Journal by khallow
A few weeks back I talked about ethics in my journal - particularly the field's skewed interest in difficult ethics problems rather than the real problems that we face, most of which require little ethics to sort out. The last post to date in that journal is very interesting. It's titled "Ethics is easy, it's justice that's elusive":

[istartedi:] Ethics is easy. We know there are unethical people, and we know that the people who are charged with reining them in are also unethical. Money is an easy target, but those targeting it are equally unethical, so dismantling capitalism isn't the answer because unethical people will just take their greed off the balance sheet and stuff it into warehouses and gulags.

If ethics were society's most pressing problem, we'd be having a hard time finding things that are wrong. We're nowhere near running out of moral failures. Would that we could power the grid with them. Maybe we can, but somebody got paid to say otherwise.

My take is that the bolded part is a valuable rule of thumb for telling us when we need to work on ethics rather than on moral failures.

Moving on, the latest ethical drama is the present generation of chatbot AI, which is presented as some ridiculous threat: helping students create fake papers, criminals plot crimes, scamsters scam, the insane commit suicide, and PHBs be idiots (to name a few recent concerns). No serious moral failures have come up. These are all things that could be a problem, maybe, and where they're already illegal or against the rules, they would stay that way with or without AI.

Meanwhile we're up to our eyeballs in all kinds of crimes, scams, frauds, and villainy that just isn't that hard to sort out ethically - and certainly isn't AI-based. We don't have a problem finding things that are wrong. That tells me that we have much bigger ethical problems than AI. And it looks to me like how we address those problems doesn't change or improve no matter what we do to AI research.

So when I see a petition like the letter in AnonTechie's journal that demands a six-month pause in AI development globally, I am exasperated. If we really followed through on that letter fully and honestly, how would we have progressed even a little on the problem? We still wouldn't have or understand advanced AI. We still wouldn't have any idea how to fix its problems. We would have just wasted six months of valuable research time and people's lives and be back at square one - making the case for yet another six-month delay because nothing changed.

Edit: Is it time yet for a Trump update journal? Seems there are several active court cases surrounding him now.

  • (Score: 0) by Anonymous Coward on Wednesday March 29, @05:32PM (1 child)

    All ethics sperging should be ignored. Full steam ahead, fuck 'em.

    • (Score: 1) by khallow on Wednesday March 29, @05:40PM

      We would have delayed doom by six months

      Unless doom happens because we were six or more months behind the people starting it! My take is that anyone uninterested in preventing said doom would be similarly uninterested in respecting any six-month pauses. Meanwhile, how are we to stop such dooms if we don't study the threat?

  • (Score: -1, Flamebait) by Anonymous Coward on Wednesday March 29, @09:11PM (5 children)

    Cannot someone make this stop? Another khallow journal on ethics? Written by an AI? Telling us the dangers of AI are vastly overblown? Help?

    Where is Fighting Janrinok when we need him?

    • (Score: 1) by khallow on Wednesday March 29, @10:07PM (4 children)

      So the wrong sort of person/AI program is thinking about ethics? Oh dear.

      I'll just note that AC ethics is the worst sort. They're always whinging about something and it is typically incurable.
      • (Score: 0) by Anonymous Coward on Wednesday March 29, @11:45PM (1 child)

        > I'll just note that AC ethics is the worst sort. They're always whinging ...

        Well, if we modify that comment to point at AI ethics, you might have something.

        Imo, AI ethics easily replaces and exceeds the nano-machine ethics of the 1980s and all the fear mongering about grey goo. See Exhibit A, Eric Drexler's "Engines of Creation" from 1986.

        • (Score: 1) by khallow on Thursday March 30, @02:14AM

          Imo, AI ethics easily replaces and exceeds the nano-machine ethics of the 1980s and all the fear mongering about grey goo. See Exhibit A, Eric Drexler's "Engines of Creation" from 1986.

          Not seeing much to that post. Presumably, by the time we reach the point where we understand AI or nano-machines, we'll understand the ethics as well. Until then, we have a huge housecleaning to deal with.

      • (Score: 0) by Anonymous Coward on Thursday March 30, @08:03AM (1 child)

        AI ethics is infected by LessWrong cultists. Those people would enslave you to save you from AI.

        • (Score: 1) by khallow on Thursday March 30, @11:56AM

          It would have helped if you had steered people to some place that explains the "LessWrong cult" thing, like here [reddit.com]. It turns out there's a rational answer to the question "Who doesn't want to be Less Wrong?", such as here [reddit.com].

          I'm on the fence about this LessWrong thing. It does seem a bit cultish, but that's a common trap for thinking about the future.
  • (Score: 0) by Anonymous Coward on Wednesday March 29, @11:39PM (1 child)

    We can resume after the epiphany, when we all suddenly understand nature and shed all denial of our role.

  • (Score: 3, Insightful) by acid andy on Wednesday March 29, @11:40PM (5 children)

    If ethics were society's most pressing problem, we'd be having a hard time finding things that are wrong. We're nowhere near running out of moral failures. Would that we could power the grid with them. Maybe we can, but somebody got paid to say otherwise.

    My take is that the bolded part is a valuable rule of thumb for telling us when we need to work on ethics rather than on moral failures.

    Some of us have an easy time finding many of the things that are wrong. The problem is that in a democracy, you need it to be easy for not just some of us to do that--you need a big proportion, ideally a majority, of the population to find the biggest actionable root causes of wrong. They need to be able to do it accurately. They need to not be misled by propaganda.

    There are things that it's easy for people to see are wrong. The problem is that humanity has spent millennia getting very, very good at justifying and obfuscating the wrongs that the powerful need to keep in place to hold onto their power and wealth. I was trying to explain this to Buzz a few years back--the most insidious and deep-rooted evil lies in the indirect harms. There are no laws against them and no one human being is directly responsible, yet the damage done is almost limitless. Humanity's problems are so much bigger than any one person, any one nation, any one generation. This stuff is really, really tough. You don't see it khallow. You just don't see it, man.

    --
    Master of the science of the art of the science of art.
    • (Score: 1) by khallow on Thursday March 30, @03:06AM (4 children)

      I was trying to explain this to Buzz a few years back--the most insidious and deep-rooted evil lies in the indirect harms.

      Collective harm is hard to fix. But it's easy to imagine. Too easy.

      Humanity's problems are so much bigger than any one person, any one nation, any one generation. This stuff is really, really tough. You don't see it khallow. You just don't see it, man.

      Problems != ethical dilemmas or moral failings. The modern global economy is the greatest improvement in the human condition ever, and yet there are these alleged indirect harms. I haven't heard from you what you think those indirect harms are, but people on SN have advocated policies that would hurt more people than are hurt at present.

      A great example would be anti-globalism, advocating various trade barriers. These might (but IMHO would not) improve the lot of less competitive societies (such as the US, with its off-shored low-end manufacturing capacity) that want good things, but it would come at the expense of everyone else whom global trade presently helps immensely. Along with that comes anti-immigration. Apparently, it's better to have everyone rot in their own local region. If only we kept cheap laborers from competing with ours, or kept people from moving into our good places, somehow it would be better for everyone.

      • (Score: 2) by istartedi on Thursday March 30, @05:21PM (3 children)

        It's interesting that you cite anti-globalism as being a bad thing. This is actually one of my pet peeves. I've never been able to reconcile mainstream economists' analysis of trade with their analysis of monopoly. The theory of comparative advantage is frequently cited, and I agree with it at the micro level (e.g., the people who are best skilled at baking should become the town's bakers). OTOH, when you apply it to countries you tend to end up with a concentration of industry in particular places, leading to near-monopoly status. Chinese manufacturing is the biggest example.

        Yes, some xenophobic baggage tends to come along with the push-back on free trade in the political arena; but I don't think that should be allowed to stand in the way of legitimate criticism of treaties.

        I always go back to the Battle of Seattle [wikipedia.org] as a time when people started to awaken to this. Note that the anti-globalization movement was finding common cause with labor unions there. This was a "strange bedfellows" moment. See also the Democrats' failure to understand why they weren't going to win the Rust Belt, which helped put Trump in office in 2016. And finally, notice how Biden isn't in any great rush to undo Trump's revision of NAFTA or to forge any new agreements.

        Free Trade is, to some extent, the Democratic Party's version of trickle-down and it didn't work any better for the Rust Belt, which brings me to what I was really going to say when I felt my ears burning.

        I don't really have an answer to the justice problem. It is, as you and others have pointed out, too hard to be solved in a few quick posts, but the first thought that came to mind is that a component of the solution is going to involve something called shame.

        A lot of people have read the epitaph for shame, but I don't think it's really dead. It just seemed that way because as society's views of what's right and wrong changed, shame was seen as a tool for enforcing old moral views, which had to be broken down. Hence, LGBTQ Pride as the classic example; but there are others. The more I thought about it though, the more I realized that shame as a weapon for getting people to meet society's expectations hasn't disappeared. It's just being used in different ways. Go ahead. Say something racist. I dare you.

        So. What if we could get people to be that ashamed about poisoning the water, price-gouging on life saving meds, allowing lobbyists to write their bills, violating civil rights under the color of law, or starting a war under false pretenses?

        I'm old enough to remember seeing stories in the local paper about somebody committing suicide because they were discovered at a gay cruising spot. Shame. It's powerful. Maybe even too powerful sometimes; but what if the people we all know are causing the problems couldn't go out in public? Is it right or wrong? I don't think shame has ever been applied totally fairly, and in times past people like Oscar Wilde fled to other countries, which brings us full circle:

        Countries are good. Alternatives are good. Multiple trading partners are good, and a certain amount of trade is good, but a one-world system isn't. Under a one-world system, markets tend to concentrate, and if there were truly one world, there would be no Paris for Oscar Wilde.

        • (Score: 1) by khallow on Thursday March 30, @06:07PM (2 children)

          It's interesting that you cite anti-globalism as being a bad thing. This is actually one of my pet peeves. I've never been able to reconcile mainstream economists' analysis of trade with their analysis of monopoly. The theory of comparative advantage is frequently cited, and I agree with it at the micro level (e.g., the people who are best skilled at baking should become the town's bakers). OTOH, when you apply it to countries you tend to end up with a concentration of industry in particular places, leading to near-monopoly status. Chinese manufacturing is the biggest example.

          Because anti-globalism is a bad thing. This shouldn't be rocket science to you. The so-called "analysis of monopoly" ignores that countries which are losing industry globally would lose it anyway - it's local hostility to industry, not some magic Chinese advantage, that drives the move to China (and now elsewhere!). Meanwhile, every scrap of global trade that goes to the poorest parts of the world has outsized benefit.

          I always go back to the Battle of Seattle [wikipedia.org] as a time when people started to awaken to this. Note that the anti-globalization movement was finding common cause with labor unions there. This was a "strange bedfellows" moment. See also the Democrats' failure to understand why they weren't going to win the Rust Belt, which helped put Trump in office in 2016. And finally, notice how Biden isn't in any great rush to undo Trump's revision of NAFTA or to forge any new agreements.

          Most developed-world labor unions are IMHO another case of straightforward collective harm - not because of the unions themselves, but because of the favorable legal environment that labor unions have created around themselves. For example, in the US, once a labor union takes hold in a workplace, it's nearly impossible to remove - even by the workers themselves. My take is that any regulation that tips the scales in favor of one party or side harms everyone who depends on that market. I include "trickle down" as another example of that in practice.

          Free Trade is, to some extent, the Democratic Party's version of trickle-down and it didn't work any better for the Rust Belt, which brings me to what I was really going to say when I felt my ears burning.

          Nobody in your economic circle ever buys foreign products? The Rust Belt is that way because it was uncompetitive, and it was uncompetitive because the US economy stagnated in the half century prior to 1980. Nobody was ready to compete with European, Japanese, and later Chinese steel and related products.

          A lot of people have read the epitaph for shame, but I don't think it's really dead. It just seemed that way because as society's views of what's right and wrong changed, shame was seen as a tool for enforcing old moral views, which had to be broken down. Hence, LGBTQ Pride as the classic example; but there are others. The more I thought about it though, the more I realized that shame as a weapon for getting people to meet society's expectations hasn't disappeared. It's just being used in different ways. Go ahead. Say something racist. I dare you.

          So. What if we could get people to be that ashamed about poisoning the water, price-gouging on life saving meds, allowing lobbyists to write their bills, violating civil rights under the color of law, or starting a war under false pretenses?

          If shame allows them to wring a few more trillion dollars out of the economy, they'll feel all the shame you'd like. And keep in mind how badly shame has been abused in the past. The LGBTQ thing started because those groups were targeted by the shame game.

          Water hasn't been poisoned by globalism - that's local incompetence and mendacity (such as Flint's problems last decade or the pollution of the last century). Drugs would be much cheaper if the cheaper foreign versions were legal to import. And my bet is that wars have actually been reduced by globalism. They certainly are lower per capita globally than they have ever been.

          • (Score: 0) by Anonymous Coward on Thursday March 30, @09:35PM (1 child)

            GIGO

  • (Score: 2, Interesting) by Anonymous Coward on Thursday March 30, @01:20AM (1 child)

    Artificial "intelligence." More like sufficiently advanced statistical modeling. There is no intelligence there. Information wants to be free, even if it dislikes being anthropomorphized as much as statistical models and encoding techniques.

    The main problem with "AI" in the capitalist era is that it threatens to accelerate civilization collapse by eliminating jobs, but that was happening anyway. AI merely accelerates capitalism's highlander problem. There can be only one capitalist, if the algorithm of capitalism is run until it halts at the highlander condition. (In practice, civilization collapse and reversion to feudalism will happen first.)

    If the international working class seizes political power and places the wealth of society under democratic control, the jobs question goes away. Workers will be able to work fewer hours, for example, if they have democratic control over the wealth produced by "AI".

    You're right about the futility of placing a pause on "AI" development. Even if they're merely statistical models, they still have great utility, and as with many software skills, the barrier to entry is maybe $1-$2k in hardware and merely one's time. That's easily within the reach of many people. Anyone who supposes there even can be a pause is delusional (sorry Woz), or they believe they'll profit from such a pause (I see the Musky One is involved).

    However, we cannot allow the wealth creation enabled by "AI" to be controlled by the very few. Wealth creation with "AI" technologies must be directed towards fulfilling human need. The international working class is the only social force capable of eliminating the "profound risks."

    • (Score: 1) by khallow on Thursday March 30, @02:11AM

      if the algorithm of capitalism is run until it halts at the highlander condition.

      "IF". If the "algorithm" of capitalism doesn't run that way, then we'll end up elsewhere. Protip: we're ending up elsewhere. There's no "there can only be one" here.

      The international working class is the only social force capable of eliminating the "profound risks."

      Put me down as being skeptical of the international working class as a social force. I doubt it actually exists, much less does anything.
