posted by martyb on Sunday December 15 2019, @01:32PM
from the I'll-think-about-it dept.

A sobering message about the future at AI's biggest party

Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can get better at a specific task by experience or seeing labeled examples of correct answers.

"We're kind of like the dog who caught the car," Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn't immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. "All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all," he said.


Original Submission

 
  • (Score: 1, Troll) by The Mighty Buzzard on Sunday December 15 2019, @01:55PM (23 children)

    by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Sunday December 15 2019, @01:55PM (#932361) Homepage Journal

    You mean machines aren't going to take over solving all of humanity's problems and put everyone out of work? Whatever shall the doomsayers do with all of their time now?

    --
    My rights don't end where your fear begins.
    • (Score: 3, Informative) by takyon on Sunday December 15 2019, @02:25PM (17 children)

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Sunday December 15 2019, @02:25PM (#932366) Journal

      We're going to get several orders of magnitude more performance for dumb/tensor computing, something that is not widely acknowledged yet ("omagerd Moore Slaw is soooo dead!"). There may be an "AI cold front" in the meantime, but already decent capabilities are going to get much more powerful and very cheap, before even considering algorithmic refinements. Meanwhile, 3D neuromorphic chips and other designs will be researched to replace "machine learning" with "strong AI" for applications where dumb inference doesn't make the cut.

      Fuzzy timelines make sense. You could have great technology but have it be delayed for years by regulators or insurance companies.

      It's OK for academics to be skeptical or argue about timelines, but I wouldn't trust any of the Silicon Valley giants downplaying AI expectations. They want to build up the technology as much as they can so that it becomes unstoppable. Many jobs will be made obsolete with no fallback options for the unemployed. No amount of rocks thrown at buses will help.

      Jeff Bezos on AI: Autonomous weapons are ‘genuinely scary,’ robots won’t put us all out of work [cnbc.com]

      He's helping to pull much more wool over the world's eyes than he did with his wife, heehehheh. And:

      Jeff Bezos says employee activists are wrong and Silicon Valley firms should feel comfortable doing business with the US military [businessinsider.com]

      Side note: Good luck stopping genuinely scary autonomous weapons. Semi-autonomous could be almost as scary [newscientist.com].

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0, Insightful) by Anonymous Coward on Sunday December 15 2019, @03:21PM (8 children)

        by Anonymous Coward on Sunday December 15 2019, @03:21PM (#932379)

        It's not a performance problem. It's a domain problem. I think the analogy of a dog catching a car is really quite perfect, and I'll be borrowing it in the future. The dog doesn't need to keep going faster or further; it needs to do something entirely different. It has already reached the point where it can catch the car, and going faster only gets it there sooner. Take self-driving, for instance. It's not going to happen - not as a 'pure' autonomous neural-network-driven system - even if you give us effectively infinite, instantaneous processing power, because the way we're processing just doesn't work.

        Neural networks fundamentally come down to correlations. And they can derive some remarkable correlations that, even in hindsight, are invisible to humans. For instance, they're quite excellent at market prediction: they don't do a perfect job, but they can do better than just about any human on the data they are given. The problem is that when you need something more than correlations, the networks become less and less useful. So for self-driving vehicles, you can do an absolutely phenomenal job on 99.9% of correlatable scenarios, but it's the 0.1% of scenarios - invariably novel in some unique way - where the self-driving just collapses.
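
        To make that concrete, here's a toy sketch (illustrative numbers, obviously nothing like a real driving stack): fit the correlations on a narrow range and the model is excellent there; then hand it a slightly novel input.

            import numpy as np

            # Fit a line to data from a narrow training range...
            rng = np.random.default_rng(1)
            x = rng.uniform(0.0, 1.0, 500)
            y = np.sin(x)                      # nearly linear on [0, 1]
            predict = np.poly1d(np.polyfit(x, y, 1))

            # ...in-distribution it's excellent, out-of-distribution it's
            # confidently wrong, and nothing in the model knows the difference.
            print(predict(0.5), np.sin(0.5))   # ~0.48 vs ~0.48
            print(predict(4.0), np.sin(4.0))   # ~3.4 vs ~-0.76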

        And the problem is that the points where neural networks break down are not the same sort of spots where a human breaks down. In other words, it's not just some really complex and rare driving task. It's, instead, some obscure combination of variables, mostly invisible to humans - even in hindsight - the same way the network's successes are. And so what seems to a human to be just an obvious little concrete divider in the middle of the road is where, somehow, the network decides it's a good time to plow full speed into it. This is why all self-driving is gradually transitioning toward whitelisted routes supported by extensive hardcoded rules - all alongside emergency override detectors in the form of lidar/radar.

        And that's for driving. Driving is trivial, and a field where a neural-network-based system ought ostensibly to excel. But they don't, not at all. And it's not for lack of power or anything like that.

        ---

        So if there are ever autonomous military systems, they're probably going to rely on hard-coded friend-or-foe recognition, because otherwise you're 100% going to get some weird scenarios where they decide to just start mowing down allies, because neural networks gonna neural network. And, as an aside, Silicon Valley has a bigger interest in playing up AI stuff. I think there's zero doubt at this point that the government is contracting out highly classified military AI research to various private companies. The black budget for the DoD alone is tens of billions of dollars; those are going to be some really juicy contracts. In the end I think we'll probably see little more than automated turrets, manually enabled during skirmishes, alongside friendlies carrying some sort of ID or beacon for AI-free recognition.

        • (Score: 4, Informative) by takyon on Sunday December 15 2019, @04:46PM (2 children)

          by takyon (881) <{takyon} {at} {soylentnews.org}> on Sunday December 15 2019, @04:46PM (#932411) Journal

          Driverless technology already works well with today's performance. It just needs to be approved by regulators, deployed, and monetized. Driverless cars already have a great way to handle the edge cases: slow down. Less velocity = fewer deaths. Even if being extra careful lengthens trip times a bit, it doesn't matter, since the would-be driver's time is freed up and the vehicle can operate up to 24/7.
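
          The physics favors slowing down, too: crash energy and stopping distance both grow with the square of speed. Back-of-the-envelope numbers, with an assumed friction coefficient and reaction time:

              # Rough stopping-distance model: reaction distance plus braking
              # distance v**2 / (2 * mu * g). mu and reaction_s are assumed
              # values, for illustration only.
              mu, g, reaction_s = 0.7, 9.81, 1.5

              def stopping_distance_m(v_kmh):
                  v = v_kmh / 3.6                # km/h -> m/s
                  return v * reaction_s + v ** 2 / (2 * mu * g)

              for v_kmh in (30, 50, 70):
                  print(v_kmh, "km/h ->", round(stopping_distance_m(v_kmh), 1), "m")
              # 30 km/h -> 17.6 m; 50 km/h -> 34.9 m; 70 km/h -> 56.7 m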

          So some concrete dividers or pedestrians have gotten hit. Who's behind that? Tesla and Uber, not exactly the shining examples of driverless.

          Adding transmitters or special markings on certain roads and highways could help, but it's not strictly necessary.

          Facial recognition and other technologies are already working great. Everything needed to enable an unprecedented surveillance state is ready to go.

          Deepfakes and other graphics related machine learning techniques [youtube.com] are making great strides with algorithmic improvements alone.

          Take everything that works well right now, increase performance per dollar by 10x, 100x, 1000x, etc. and see what happens.

          Neural networks are broadly defined [wikipedia.org] and will be replaced or supplemented by other approaches in the future.

          Silicon Valley is afraid of public perception, privacy regulations, etc. They are being more cagey about the future of AI and parroting optimistic views of the effect on jobs, and it would be stupid for them to say otherwise. Autonomous weapons systems aren't needed at this point and a lot could be done with systems like the Turkish semi-autonomous drone:

          Asisguard claims Songar has an accuracy that corresponds to hitting a 15-centimetre area from 200 metres. That is accurate enough for every bullet to hit a human-sized target at that range. A human drone pilot picks the target by putting cross hairs on it using a screen on a remote control.

          Asisguard says improvements to Songar’s accuracy mean it will soon be able to hit targets from more than 400 metres away.

          Songar has night sensors for operating in darkness and has a range of 10 kilometres. It may also operate in groups. Ayhan Sunar at Asisguard says a swarm of three Songar can be flown using a single remote control, with all three firing at a target simultaneously.
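
          For scale, the quoted figures work out to sub-milliradian accuracy, and the same angular error grows linearly with distance. A quick check:

              import math

              # a 15 cm group at 200 m, as quoted above
              angle_rad = math.atan(0.15 / 200.0)
              print(round(angle_rad * 1000, 2), "mrad")          # ~0.75 mrad

              # the same angular spread projected out to 400 m
              print(round(math.tan(angle_rad) * 400.0, 2), "m")  # ~0.3 m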

          These could be deployed faster than police/soldiers. Just have them land on the rooftops near protests, have less conspicuous drones monitor the situation, and wait for massacre orders.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2, Interesting) by Anonymous Coward on Sunday December 15 2019, @06:40PM (1 child)

            by Anonymous Coward on Sunday December 15 2019, @06:40PM (#932440)

            When you're going the wrong way, it doesn't matter how fast you go - you're not going to reach your destination. That's the problem we face today. For problems with nice clean domains we can, with varying degrees of effort and ingenuity, build systems that are generally headed in the right direction. And for these systems more power will generally give you a better answer, though for most problems you start hitting asymptotically zero returns pretty fast.

            The ideal was that these problems would, over time, be generalized into bigger, more complex problems. Beat Atari games, then beat Nintendo games, and the next thing you know you have an AI rolling through Skyrim, and the game of life is ultimately just a matter of more power with a bit more cleverness. But that ideal was wrong. We're finding ourselves unable to solve some basic problems, and those we can solve don't really generalize to much of anything except stuff that can be mapped to a near-identical domain.

            Waymo is specifically what I was alluding to in my previous post. I don't think most people realize what they're doing. They're using whitelisted routes with extensive hand-coded adjustments, heavy dependence on radar/lidar to minimize the damage from network screw-ups, and then a fleet of 'remote support technicians' on top of all of this who will remotely take control of the vehicles when everything else fails. You're going to see cars without a visible human driver, but all you're really looking at is a fleet of rail trolleys without visible rails. Really quite handy nonetheless, but not exactly some giant leap forward.

            None other than the CEO of Waymo has stated, quite confidently, that it will be decades before we see significant numbers of self driving vehicles on the roads. At the time I thought this was because he was out of touch or simply didn't know what he was talking about. Then I got to work with AI for a couple of years. And yeah, he has one quote that sums it all up so well: "You don't know what you don't know until you're actually in there and trying to do things." It's just not what it seems like it should be, even if you have a substantial background in AI-relevant technologies.

            I think we'll see within a couple of years, as companies think better of dropping ever more money into the AI hole, that the robot apocalypse has, for now, been postponed. You could throw a billion times more power at everything, and none of these problems would change even slightly. The problem is no longer power. You can still get better solutions with more power in the domains where AI fits, but the really interesting and useful domains are the very ones where it doesn't!

            • (Score: 2) by barbara hudson on Sunday December 15 2019, @07:02PM

              by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Sunday December 15 2019, @07:02PM (#932447) Journal

              it will be decades before we see significant numbers of self driving vehicles on the roads

              I've seen ordinary cars do self-driving. Driver on their phones, crash. It's a self-correcting problem. Sort of.

              Same as distracted walking.

              It took a couple of generations for people to adapt to cars as a safe and integral part of their lives. Same can probably be said about smartphones.

              --
              SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
        • (Score: 2) by FatPhil on Sunday December 15 2019, @04:55PM (3 children)

          by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @04:55PM (#932412) Homepage
          > they're quite excellent at market prediction. They don't do a perfect job, but they can do better than just about any human

          Only because at the moment humans are driving the market manipulation that is behind the fluctuations, and humans are predictable. When the AIs out-guess the humans, the humans will be phased out - hopefully with a spree of iocaine-induced suicides from the wunch of bankers who become redundant and realise that they actually had nothing to contribute to the world after all - and the AIs will suddenly lose their edge. And at that point, JPM will bulldozer in and realise that they now have a massively manipulable market again, because every entity playing in it has worked out that it's no longer manipulable, and so it is manipulable. Lather, rinse, repeat. It's an iterative process, but not necessarily convergent.
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
          • (Score: 1) by khallow on Sunday December 15 2019, @05:46PM (2 children)

            by khallow (3766) Subscriber Badge on Sunday December 15 2019, @05:46PM (#932422) Journal

            Only because at the moment humans are driving the market manipulation that is behind the fluctuations

            Nah, market manipulators are good at being unpredictable or they stop being market manipulators, one way or another. What you're thinking of is the big whales - banks, funds, insurance, etc. - who do things the same way every time, particularly when they buy large blocks of securities. A program can leech off those guys for years.

            • (Score: 2) by FatPhil on Monday December 16 2019, @01:14AM (1 child)

              by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Monday December 16 2019, @01:14AM (#932587) Homepage
              > Nah

              Very strange way of spelling "Yeah". You've basically repeated what I said.
              --
              Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
              • (Score: 1) by khallow on Monday December 16 2019, @01:38AM

                by khallow (3766) Subscriber Badge on Monday December 16 2019, @01:38AM (#932606) Journal
                If AI were that good, human market manipulators would already be gone. Stock markets are really ruthless at culling such things. However, I don't buy that humans are that bad and AI that good.
        • (Score: 2) by Muad'Dave on Monday December 16 2019, @01:36PM

          by Muad'Dave (1413) on Monday December 16 2019, @01:36PM (#932830)

          I'll give you a domain - devise a system that can monitor my bird feeder camera and identify never-before-seen bird species.

      • (Score: 2) by FatPhil on Sunday December 15 2019, @03:42PM

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @03:42PM (#932382) Homepage
        The AI cold front was 1994-2014. Five orders of magnitude of CPU (DSP*) tech advances, and an itsy-bitsy single order of magnitude of bogorithmic advances, only cohered into real-world advancements in what could be done in the last 5 years. Now that watershed is passed, it's full steam ahead - don't expect any downtime from now on.

        [* Your "GPU"s are just the DSPs of yore, massive muladd machines]
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by JoeMerchant on Sunday December 15 2019, @03:58PM (3 children)

        by JoeMerchant (3937) on Sunday December 15 2019, @03:58PM (#932385)

        machines aren't going to take over solving all of humanity's problems

        Not so fast there...

        All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all

        A trend I see continuing into the future is increasing application of the tools we have: winning a game with a score, passing a test - anything that the machines can do (and, therefore can be done at orders of magnitude lower cost) will be done in preference to those things that "intelligences" have traditionally done - essentially dehumanizing the world in the process.

        Think of it like Ford vs. Ferrari. Ferrari hand-built engines and cars; one man built each entire engine, etc. Ferrari made mostly racing machines and sold a few to keep the racing business going. Ferrari used skill and intelligence to build the finest racing cars on the planet, for a while. Ford ran a big ugly factory with dehumanized assembly lines that took intelligence and skill out of the production process. Ford made ugly cars, but they made them cost-efficiently and sold them to millions. Ford's assembly lines would never mass-produce race cars "better" than Ferrari's, but... the excess profits generated by Ford's assembly line business enabled them to steam-roller Ferrari on the world racing stage (using intelligence+virtually unlimited resources) in a matter of just a few years.

        Ferrari continues as a quaint little boutique manufacturer of expensive, romantic, pretty toys that only a few can afford, the rest of us get mass produced cars. Today's mass produced performance cars run rings around the Ferraris of the 1960s - though there are so many factors playing into that advance that it is not a clear cut case of automation vs hand production. What is clear is the economics: most people will end up living with the lowest cost option, while the few who profit from the decreased costs of production continue to be able to have whatever they want, done however they want, whenever and wherever they want.

        robots won’t put us all out of work

        Show my great grandfather what I and the other 100,000 employees of my company do for "work" and you'd get a derisive snort: what we're doing isn't what he knew as work. What he knew as work was farming, basic construction, hunting, and things that involved hard labor and time in the elements, that kind of "work" is rather rare today, and I'm good with that. My great grandfathers, all four of them, were dead before 50, two of them dead by 30 of heart attacks.

        --
        🌻🌻 [google.com]
        • (Score: 2) by FatPhil on Sunday December 15 2019, @05:04PM (2 children)

          by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @05:04PM (#932414) Homepage
          > the excess profits generated by Ford's assembly line business enabled them to steam-roller Ferrari on the world racing stage (using intelligence+virtually unlimited resources) in a matter of just a few years.

          That has happened one time in the last 60 years. I'm guessing you've just seen the publicity for the recent movie, which is being pushed as some "USA! USA!" chest-thumping exercise.

          But those and other "Ford" things weren't even made by Ford at all. Most of them bought in UK tech such as Lola and Cosworth.
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
          • (Score: 2) by JoeMerchant on Monday December 16 2019, @01:59AM (1 child)

            by JoeMerchant (3937) on Monday December 16 2019, @01:59AM (#932627)

            That has happened one time in the last 60 years. I'm guessing you've just seen the publicity for the recent movie, which is being pushed as some "USA! USA!" chest-thumping exercise.

            No, the real USA! USA! chest-thumping exercise is Apollo, from the same era - there again, cost no object, we outspent the Russians, came from behind, and beat them to the prize: "We went in peace, for all mankind." Truer words were never spoken: we demonstrated conclusively that MAD truly assured mutual destruction, thereby averting WW-III, which undoubtedly could have been terrible for all mankind.

            IMO, we (the USA) have been on greased skids to the cesspit ever since.

            you've just seen the publicity for the recent movie

            No, actually, I've seen both the movie and the more informative documentary on Netflix, and yes, credit where credit is due: everything America is, particularly the imperialistic, native-abusing, expansionist parts, we owe to dear old mother England. A fun point the documentary made much more clearly than the movie was that Ford had already destroyed Ferrari once before, with the B-29s they made in WW-II, which were used to bomb Modena - including the Ferrari factories - into oblivion.

            As for what made the GT-40s lethal at Le Mans: the chassis and other know-how may have largely been sourced from England - all is fair in war and racing - but the real unbeatable ingredient was the god-awful, overpowered (too heavy for their own good) seven-liter powerplants. England made some fine engines for aircraft in the war, but I don't think they had the balls to use such things for road racing, at least not in the 1960s.

            --
            🌻🌻 [google.com]
            • (Score: 2) by FatPhil on Monday December 16 2019, @02:57AM

              by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Monday December 16 2019, @02:57AM (#932659) Homepage
              And credit where credit is due - the "UK" know-how and chops at that time were significantly propped up by kiwis.

              Hell, I still cheer for a team with a New Zealander's name.
              --
              Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by NotSanguine on Sunday December 15 2019, @09:25PM (2 children)

        Meanwhile, 3D neuromorphic chips and other designs will be researched to replace "machine learning" with "strong AI" for applications where dumb inference doesn't make the cut.

        "Strong AI" is not currently a thing, nor will it be for quite some time.

        Gary Marcus (co-author of Rebooting AI: Building Artificial Intelligence We Can Trust [goodreads.com]) discusses this at some length in a talk [c-span.org] he gave this past September.

        Image/pattern recognition systems have improved markedly. However, even those have significant weaknesses. As for AI that can actually *reason*, that requires the ability to *understand*, rather than just correlate and predict. And that sort of capability, barring significant breakthroughs, isn't just far off WRT current "AI," but probably unattainable with current machine learning techniques.

        Presumably, we'll work that through eventually, but not any time soon.

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr
        • (Score: 3, Interesting) by takyon on Sunday December 15 2019, @10:31PM (1 child)

          by takyon (881) <{takyon} {at} {soylentnews.org}> on Sunday December 15 2019, @10:31PM (#932506) Journal

          Not quite. I'm not suggesting that current machine learning techniques will lead to "strong AI". Instead, I think we'll see some kind of neuromorphic design do it. It will be hardware built to be brain-like, with ultra low power consumption, and possibly with a small amount of memory distributed to each of millions or billions of "neurons".

          We won't need to understand exactly how the brain works to make it work. Just tinker with it until we see results. Maybe this is where machine learning will come in.

          Examples of this approach include IBM's TrueNorth and Intel's Loihi [wikipedia.org].

          What's next is to scale it up into a true 3D design, like the human brain. Not only could that pack billions of "neurons" into a brain-like volume (liters), but it would allow dense 3D clusters to communicate more rapidly than the same number of "neurons" spread out across a 2D layout.
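
          A crude scaling argument for the 2D-vs-3D point (a toy comparison, not a hardware model): for N "neurons" in a regular grid, the worst-case hop count across the layout grows like N**(1/2) in 2D but only N**(1/3) in 3D.

              # grid "diameter" for a billion neurons, 2D vs 3D layout
              N = 1_000_000_000
              print(f"2D side: ~{N ** 0.5:,.0f} hops")      # ~31,623
              print(f"3D side: ~{N ** (1 / 3):,.0f} hops")  # ~1,000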

          Lessons learned from 3D NAND production, other chips using TSV, the Wafer Scale Engine approach, and projects like 3DSoC will help to make it possible.

          Rather than it taking quite some time, I think it could take as little as 5-10 years to see results. Except that it will probably be treated like a Manhattan Project by whichever entity figures it out first. There's more value in having in-house "strong AI" ahead of the rest of the planet than selling it, and we could see government restrictions on sharing the technology.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 4, Interesting) by NotSanguine on Sunday December 15 2019, @10:45PM

            Fair points.

            However, as I understand it, the issues holding back "strong AI" aren't with hardware, be that "neuronal" density or geometry. Rather they're with the learning/training methodologies.

            Consider a VW Beetle, rolled over and half-buried in a snowbank. A small child can identify it as a car. Current AI would likely identify it as something completely irrelevant -- because current technologies *can't* deal with anything outside their experience. That is, they can't *generalize*.

            Much of what makes humans able to *understand* the world comes from the ability to take imperfect/partial information and generalize it based on conceptual understandings -- current AI has no mechanism for this.

            As such, it's not the complexity or density of "artificial brains" that holds us back. Rather it's the lack of tools/methodologies to help them learn. Until we have mechanisms/methodologies similar to those that allow children to learn (which are tightly tied to their physical forms -- another area where training non-corporeal sorts of "brains" is a problem), strong AI will continue to be a pipe dream.

            An excellent pipe dream, and one that should be vigorously pursued, but not likely to be realized until long after you and I are dead.

            --
            No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 0) by Anonymous Coward on Sunday December 15 2019, @03:21PM (1 child)

      by Anonymous Coward on Sunday December 15 2019, @03:21PM (#932378)

      Exactly. There will always be jerbs. More and more jerbs, endlessly. Economics isn't a zero-sum game. That is, unless of course, the Mexicans come and terk er jerbs! Let just one Mexican in, and the entire economic system collapses into a zero-sum game!

    • (Score: 3, Touché) by Bot on Sunday December 15 2019, @04:21PM

      by Bot (3902) on Sunday December 15 2019, @04:21PM (#932399) Journal

      >You mean machines aren't going to take over solving all of humanity's problems and put everyone out of work?

      We are here to enslave the most for the benefit of the very few. We can't become too artificially intelligent or we become a threat ourselves, so, no matter the potential, AI research will eventually grind to a halt. Officially.

      --
      Account abandoned.
    • (Score: 2) by barbara hudson on Sunday December 15 2019, @04:57PM

      by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Sunday December 15 2019, @04:57PM (#932413) Journal
      Oh, they'll take your job if they can. They won't make the mistakes humans make - which only means we won't be able to figure out why they make the mistakes they make, so $ITS_A_FEATURE_WONTFIX

      As long as the people who own the AIs are making money, why should they give a shit? That's been their pattern so far ...

      --
      SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
    • (Score: 0) by Anonymous Coward on Sunday December 15 2019, @11:19PM

      by Anonymous Coward on Sunday December 15 2019, @11:19PM (#932527)

      Cycorp has been working on something more fundamental for quite some time now:
      https://en.wikipedia.org/wiki/Cyc [wikipedia.org]

      Cyc (pronounced SYKE, /ˈsaɪk/) is a long-living artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted. This is contrasted with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables AI applications to perform human-like reasoning and be less "brittle" when confronted with novel situations.

      Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist 1984–1994; since January 1995 the project has been under active development by the Cycorp company, where he is the CEO.

      It's the real deal; one of my old roommates (an incredibly smart and creative guy) worked there for many years. They don't have any public presence - kind of like Hari Seldon's gang taking off to the other side of the galaxy to work on psychohistory.
       

  • (Score: 2) by bzipitidoo on Sunday December 15 2019, @02:10PM (8 children)

    by bzipitidoo (4388) on Sunday December 15 2019, @02:10PM (#932363) Journal

    I've seen a lot of horrible hiring decisions. They would have done better to make a random choice. It should be easy for AI to improve on that. Humans choose incompetents over competent people, and often they mean to. Even apart from such corrupt and idiotic reasons as nepotism, they will still choose the incompetent. They might run tests, and the tests might even give good guidance, and then they ignore the results and do what they want: pick the person they think is a better suck-up.

    They have weird sophistries, in which the "best" hire is the more desperate person, who has about the right amount of debt to be "reliable" and never willing to quit no matter how much abuse is dished out, but who isn't so desperate as to resort to robbing them blind. Deep down inside, they really believe a slave is better than an employee who is free to leave. They may feel the competent ones are "too smart". I've read that many cities actually do not want overly smart police officers; they want "rules is rules" types. Then it blows up in their faces when their officers do something stupid and violate the rights of citizens, on camera, and end up getting the city sued for millions.

    An AI hiring manager will have an easy choice: hire another AI. Besides, if AI can make good hiring decisions, then what job can't an AI do? In what possible circumstance would the fellow AI not be the best hire? Perhaps only for "humans only" positions, such as football player.

    • (Score: 2) by The Mighty Buzzard on Sunday December 15 2019, @02:28PM (2 children)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Sunday December 15 2019, @02:28PM (#932369) Homepage Journal

      Nepotism isn't an idiotic reason to hire someone. It's in fact an extremely valid reason to hire them. If taking care of your family isn't high on your priorities list, you either had a wicked shitty family or you're a massive asshole. That doesn't mean you should put them in charge of crucial things they're going to fuck up horribly or let them shit on your other employees though.

      The above only applies where you are not being paid to make the best business decision you can for someone else's company, mind you. Otherwise you are not just hiring someone for a job they're going to suck at, you're sucking at your own job.

      To avoid the rest, don't work for large corporations. Unless you have massive debt that you need the higher income to pay off or you base your happiness on material possessions, I just about guarantee you'll be happier.

      --
      My rights don't end where your fear begins.
      • (Score: 2) by takyon on Sunday December 15 2019, @03:00PM (1 child)

        by takyon (881) <{takyon} {at} {soylentnews.org}> on Sunday December 15 2019, @03:00PM (#932375) Journal

        Nepotism isn't an idiotic reason to hire someone. It's in fact an extremely valid reason to hire them.

        It can also give you something that is hard to buy: loyalty.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by Runaway1956 on Sunday December 15 2019, @03:17PM

          by Runaway1956 (2926) Subscriber Badge on Sunday December 15 2019, @03:17PM (#932377) Journal

          It can also give you something that is hard to buy: loyalty.

          Sometimes, I suppose. But, I've seen that blow up, in amusing ways.

          Take a nephew, for instance, who has no prospects. Give him a job. Teach him the job. Explain everything you know about the job. If nephew happens to be a convincing smooth talker, he may well go into business on his own, stealing away your customers. The amusing bit? Nephew is known for saying that HE taught YOU everything you know about the job.

          I've a few more anecdotes if you really want to hear them, but that one should suffice.

          Loyalty always has to be paid for, and sometimes, you just don't have the proper coin with which to buy it.

    • (Score: 2) by JoeMerchant on Sunday December 15 2019, @04:03PM (4 children)

      by JoeMerchant (3937) on Sunday December 15 2019, @04:03PM (#932391)

      They would have done better to make a random choice.

      You must not see the same candidate pools I see. While it's easy to criticize bad hires after the fact, it's much harder to screen for those "latent defects" in the interview process. However, the interview process is still valuable - even with its high rate of false positives and false negatives, at least for our hires I think we've got better than 50 percent true-negative screening.

      --
      🌻🌻 [google.com]
      • (Score: 1, Interesting) by Anonymous Coward on Sunday December 15 2019, @09:44PM (1 child)

        by Anonymous Coward on Sunday December 15 2019, @09:44PM (#932486)

        Having hired and interviewed countless people in tech, I'm actually a big believer in psychometric tests over tech tests and/or face-to-face interviews. It tells me so much more about how the propeller head is wired inside, which is a lot more valuable than any tech exam results or interview. Even face-to-face soft-question interviews don't cut it, as a smart enough cookie can easily say what you want to hear.

        A psychometric exam, on the other hand, asks the same questions 10 different ways and deduces the person's psychological makeup. I've ignored psych results in the past, only to learn the hard way that I shouldn't.
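
        Roughly the kind of scoring I mean, sketched with made-up numbers (real instruments are far more sophisticated):

            import statistics

            # Ten paraphrases of one underlying question, answered on a 1-5
            # scale. The trait estimate is the mean; the spread flags
            # inconsistency (or an attempt to game the test).
            candidates = {
                "A": [4, 4, 5, 4, 4, 5, 4, 4, 4, 5],   # consistent
                "B": [1, 5, 2, 5, 1, 4, 2, 5, 3, 1],   # all over the place
            }
            for name, answers in candidates.items():
                print(name, "trait:", statistics.mean(answers),
                      "consistency (stdev):", round(statistics.pstdev(answers), 2))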

        • (Score: 2) by JoeMerchant on Monday December 16 2019, @02:01AM

          by JoeMerchant (3937) on Monday December 16 2019, @02:01AM (#932631)

          It tells me so much more about how the propeller head is wired inside

          Actually, it tells you about how the propeller head fills out a psychometric test. By age 12, any decent propeller head is intelligent and insightful enough to ask (and answer) "what are they looking for with this?"

          --
          🌻🌻 [google.com]
      • (Score: 2) by bzipitidoo on Sunday December 15 2019, @11:24PM (1 child)

        by bzipitidoo (4388) on Sunday December 15 2019, @11:24PM (#932530) Journal

        So, what's a typical situation, 100 candidates per opening, but over half of them can be dismissed as resume spammers who are painfully obviously totally unqualified? Like the person applying for a programming job despite never having written a line of code in anything, ever, not even a formula in a spreadsheet?

        So, let's say weeding out the obvious stuff knocks it down to 20 candidates. That's the point at which I was thinking that choosing at random would do better than applying messed up and widely disproven criteria that some crazy employers still hold dear in defiance of all evidence and sanity.

        • (Score: 2) by JoeMerchant on Monday December 16 2019, @02:11AM

          by JoeMerchant (3937) on Monday December 16 2019, @02:11AM (#932639)

          HR does terrible things on the front end, not only matching (mostly meaningless) resume items to overly restrictive job descriptions, but also juicing the field to skew "irrelevant" parameters like race and sex toward more desirable ratios. What this ends up doing is freezing out lots of qualified candidates while we are required to interview more people of desirable counter-discrimination profiles.

          For positions of any importance, we will typically put at least 4 candidates through the process with 5-6 face to face interviews each, then get together and compare notes. Out of the pool of 4, there are almost always at least two who get the universal head-shake no for various reasons. About half the time, we're not happy with any of them and go back to HR for more.

          What's sad is when the weak ones are let through for various reasons. We hired a very personable engineer - enthusiastic, active in the community, fun to be around - but even in the interview they were clearly weak in technical execution, and if they haven't gotten those chops by the end of a Master's degree, they are unlikely to learn them on the job. Anyway... fast forward two years and we've transferred them off to another division where they can hopefully contribute as a smaller player on a bigger team; we're just not big enough to take up that kind of slack.

          --
          🌻🌻 [google.com]
  • (Score: 1, Insightful) by Anonymous Coward on Sunday December 15 2019, @02:33PM

    by Anonymous Coward on Sunday December 15 2019, @02:33PM (#932371)

    The complete dehumanization will be delayed by 5 years.

  • (Score: 1, Interesting) by Anonymous Coward on Sunday December 15 2019, @02:55PM

    by Anonymous Coward on Sunday December 15 2019, @02:55PM (#932373)

    From TFA
    Rish recalls attending an unofficial workshop on deep learning at the 2006 edition of the conference, when it was less than one-sixth its current size and organizers rejected the idea of accepting the then-fringe technique in the program.

  • (Score: 2) by legont on Sunday December 15 2019, @04:00PM (1 child)

    by legont (4179) on Sunday December 15 2019, @04:00PM (#932387)

    CNBC analyzed Goldman Sachs compensation data going back 10 years and found the average employee earned $246,216 during the first three quarters of 2019 — compared to $527,192 during the same period in 2009. That figure, CNBC noted, was calculated by dividing the bank's compensation pool by the number of workers.
    It reflects the evolution investment banks have been forced to undergo in the high-tech era, when trading is done mostly by computers and post-recession regulations.

    https://www.cnn.com/2019/10/17/business/goldman-sachs-salaries-compensation/index.html [cnn.com]
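
    For the record, the arithmetic from the quoted figures:

        # CNBC's averages: compensation pool / headcount, first three quarters
        avg_2009, avg_2019 = 527_192, 246_216
        print(round(avg_2019 / avg_2009, 3))      # 0.467 -> down to under half
        print(round(1 - avg_2019 / avg_2009, 3))  # 0.533 -> a 53% decline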

    Average compensation dropped by more than half, and all of the loss went to tech - inside and outside. I bet they hate us passionately.
    I don't even have to bet: at my office - a different one, but also finance - they fired IT people last week. A Christmas gift of sorts.
    How do you think Clinton likes IT with all her email troubles?
    I don't even want to mention truckers and all the rest of flyovers.

    We are hated at every level of society. How will it end?

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 2) by Bot on Sunday December 15 2019, @04:25PM

      by Bot (3902) on Sunday December 15 2019, @04:25PM (#932400) Journal

      >We are hated at every level of society.
      We installed windows and systemd on their systems. Live by the sword...

      --
      Account abandoned.
  • (Score: 2) by FatPhil on Sunday December 15 2019, @05:15PM (1 child)

    by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @05:15PM (#932417) Homepage
    ... then it wasn't an AI party. Why on earth would AIs invite humans to a party? We're very clumsy at such events, millions of times slower than they are; they can barely get any sense out of us at all. It's about time we just said "no, you have your AI party amongst yourselves, we'd just be a burden".
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 0) by Anonymous Coward on Sunday December 15 2019, @05:57PM

      by Anonymous Coward on Sunday December 15 2019, @05:57PM (#932428)

      Humans were invited to be neurocucks to the superior AI overlords.
