posted by martyb on Friday September 04 2015, @01:24AM
from the but-first-we-need-a-definition-of-genuine-intelligence dept.

When we talk about artificial intelligence (AI), what do we actually mean?

AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.

This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation Automated Weapons Systems (AWSs) [PDF] are treated under the laws of war.

True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts). But we have to start somewhere so, in the absence of a regulatory definition at the moment, let’s get the ball rolling.

http://theconversation.com/why-we-need-a-legal-definition-of-artificial-intelligence-46796


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1) by SomeGuy on Friday September 04 2015, @02:27AM

    by SomeGuy (5632) on Friday September 04 2015, @02:27AM (#232093)

    That makes use of data weighted by relevance, but does so in a highly nonlinear fashion so that in situations of typical complexity the results cannot be accurately predicted even by those familiar with the program's innards.

    And this is exactly why you don't want real "AI" in critical systems. You don't actually know what the results will be, and may not even know what it has actually "learned" that causes it to give its results. You can't mathematically audit it in any realistic manor.

    A non-"AI" system will be based on specifications and act using known algorithms. Well, the end result may be a trillion lines of if..then..else cranked out by outsourced monkeys, and equally unauditable (and the output will be "42" because you didn't understand your specifications)

    Informally, however, the public already has a second definition involving their common perception of fictionalized "AI", which usually includes getting happy, getting sad, laughing at your jokes, and KILLING ALL HUMANS.

  • (Score: 1, Troll) by c0lo on Friday September 04 2015, @04:17AM

    by c0lo (156) on Friday September 04 2015, @04:17AM (#232122)

    And this is exactly why you don't want real "AI" in critical systems. You don't actually know what the results will be, and may not even know what it has actually "learned" that causes it to give its results. You can't mathematically audit it in any realistic manor [sic].

    This is why you don't want electronics in critical systems: the flow of electricity is governed by quantum mechanics, so you are never going to know exactly what will happen next.
    This is why you don't want humans in control of critical systems: you never know when one of them is going to go berserk and start a mass shooting.
    ... Should I continue?...

    Mate, life is full of risks, deal with it.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 4, Interesting) by VortexCortex on Friday September 04 2015, @05:14AM

    by VortexCortex (4067) on Friday September 04 2015, @05:14AM (#232137)

    And this is exactly why you don't want real "AI" in critical systems. You don't actually know what the results will be, and may not even know what it has actually "learned" that causes it to give its results. You can't mathematically audit it in any realistic manor.

    Well, this is exactly the reason why I abandoned back propagation for my machine learning systems. You see, back prop actually does give you the ability to know what a neural net has actually "learned" and how it achieves its results, and you can even mathematically audit it in a realistic manner, which is great for reducing the processing power of the result via pruning. So, instead I use non-deterministic learning with unordered neural nets which continuously adapt to input over time rather than requiring an overseer to provide (and rate) a training set. This way, not even I know how they decide to solve the problem (initially). However, I have been able to reverse engineer the method by which many capabilities have emerged, similar to how neurologists reverse engineer organic brains. One of my goals is to catalog the least amount of complexity required to achieve certain cognitive tasks, and thus quantify intelligence itself in terms of information theory (and eventually blow philosophers' minds with ethics/epistemology backed by mathematical proofs and empirical evidence).
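
    For a flavour of the difference, here is a minimal Python sketch of one such online, unsupervised update (Oja's variant of the Hebbian rule stands in for the unordered, continuously adapting nets described above; it is only an illustration, and every name in it is invented):

    import random

    class HebbianUnit:
        """One unit that adapts online from whatever input it sees:
        no labelled training set, no error signal propagated backwards."""

        def __init__(self, n_inputs, rate=0.01):
            self.w = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
            self.rate = rate   # learning rate

        def activate(self, x):
            return sum(wi * xi for wi, xi in zip(self.w, x))

        def adapt(self, x):
            # Oja's rule: strengthen weights where input and output co-occur,
            # with a normalising term that keeps the weight vector bounded.
            y = self.activate(x)
            self.w = [wi + self.rate * y * (xi - y * wi)
                      for wi, xi in zip(self.w, x)]
            return y

    # The unit picks up whatever statistical structure is in its input stream
    # as it runs, which is exactly what makes it hard to audit after the fact.
    unit = HebbianUnit(n_inputs=3)
    for _ in range(1000):
        unit.adapt([random.gauss(0, 1), random.gauss(0, 1), 1.0])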

    For most AI applications I think it would be better to have a self-corrective, fault-tolerant AI that embraces a degree of chaos by design and is thus able to adapt "on the job", rather than one that hits a small hardware glitch and destroys an entire assembly line, or becomes completely non-functional. "Oops, I've got a division by zero in my pinky toe, I'm afraid I can't carry you to the medic now," isn't any better than a dumb BASIC terminal running an expert system. One can always apply additional levels of control, such as a simple expert system performing sanity checks (not in MFG plant anymore, shut down), or hardware-level controls such as disabling live rounds by pulling out the firing pin or adding a kill-switch (as some cars have).
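
    As a minimal sketch of that kind of sanity-check layer (toy Python; the factory-robot scenario, limits, and names are all invented): a dumb, fully auditable guard sits between the adaptive controller and the actuators and vetoes anything outside a known-safe envelope.

    # Safe operating envelope, written down and auditable like any other spec.
    SAFE_SPEED_RANGE = (0.0, 1.5)     # metres per second
    WORKCELL_BOUNDS = (0.0, 10.0)     # metres; the robot must stay inside the cell

    def guarded_command(ai_speed, ai_position):
        """Plain if/then rules the adaptive controller cannot override."""
        lo_pos, hi_pos = WORKCELL_BOUNDS
        if not (lo_pos <= ai_position <= hi_pos):
            return ("shutdown", 0.0)    # "not in MFG plant anymore, shut down"
        lo, hi = SAFE_SPEED_RANGE
        return ("run", max(lo, min(hi, ai_speed)))   # clamp whatever the AI asked for

    print(guarded_command(ai_speed=3.0, ai_position=12.0))   # -> ('shutdown', 0.0)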

    IMO, you're probably worried about the wrong thing. It's not the AI you should fear; it's the ones who own the hardware and deploy it against you that you really have to watch out for. I'm against removing the human element of battle. Support bots / machines, fine, but removing the human element from war means it becomes more of an economic war. Without dead soldiers being shipped back, we won't have as much of an incentive to end the war. AI and drones put more power in the hands of fewer people. Already a single drone operator could command a fleet of drones: the human-piloted drone gets knocked out, and he takes control of the next one in the swarm's holding pattern. Fewer humans means fewer people have to agree with the commands in order to carry them out. A soldier knows better than to obey orders blindly; first and foremost, they are sworn to protect the constitution from enemies both foreign and domestic. A drone? An AI? Even if you trained one to obey the constitution, a simple firmware update can turn it into an indiscriminate killing machine.

    Additionally, there will be a lot of AI devs who are saddened by your use of "real AI" here, as it would disqualify their asymptotic algorithms from counting as "real AI". Perhaps you meant "strong AI", as in nearly indistinguishable from human decision making? I'll take as close to human as I can get, even if it means a bit of unpredictability within a bounded set of actions rather than mindless, amoral drones.