A sobering message about the future at AI's biggest party
Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can get better at a specific task through experience or by seeing labeled examples of correct answers.
"We're kind of like the dog who caught the car," Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn't immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. "All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all," he said.
(Score: 1, Troll) by The Mighty Buzzard on Sunday December 15 2019, @01:55PM (23 children)
You mean machines aren't going to take over solving all of humanity's problems and put everyone out of work? Whatever shall the doomsayers do with all of their time now?
My rights don't end where your fear begins.
(Score: 3, Informative) by takyon on Sunday December 15 2019, @02:25PM (17 children)
We're going to get several orders of magnitude more performance for dumb/tensor computing, something that is not widely acknowledged yet ("omagerd Moore Slaw is soooo dead!"). There may be an "AI cold front" in the meantime, but already decent capabilities are going to get much more powerful and very cheap, before even considering algorithmic refinements. Meanwhile, 3D neuromorphic chips and other designs will be researched to replace "machine learning" with "strong AI" for applications where dumb inference doesn't make the cut.
Fuzzy timelines make sense. You could have great technology but have it be delayed for years by regulators or insurance companies.
It's OK for academics to be skeptical or argue about timelines, but I wouldn't trust any of the Silicon Valley giants downplaying AI expectations. They want to build up the technology as much as they can so that it becomes unstoppable. Many jobs will be made obsolete with no fallback options for the unemployed. No amount of rocks thrown at buses will help.
Jeff Bezos on AI: Autonomous weapons are ‘genuinely scary,’ robots won’t put us all out of work [cnbc.com]
He's helping to pull much more wool over the world's eyes than he did with his wife, heehehheh. And:
Jeff Bezos says employee activists are wrong and Silicon Valley firms should feel comfortable doing business with the US military [businessinsider.com]
Side note: Good luck stopping genuinely scary autonomous weapons. Semi-autonomous could be almost as scary [newscientist.com].
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0, Insightful) by Anonymous Coward on Sunday December 15 2019, @03:21PM (8 children)
It's not a performance problem. It's a domain problem. I think the analogy of a dog catching a car is really quite perfect, and I'll be borrowing it in the future. The dog doesn't need to just keep going faster or further; it needs to do something entirely different. It has already reached the point where it can catch the car; going faster just gets it there sooner, but that's nothing new. Take self-driving, for instance. It's not going to happen - not in a 'pure' autonomous neural-network-driven system, even if you give us effectively infinite instantaneous processing power - because the way we're processing just doesn't work.
Neural networks fundamentally come down to correlations. And they can derive some incredibly remarkable correlations that, even in hindsight, are invisible to humans. So for instance they're quite excellent at market prediction. They don't do a perfect job, but they can do better than just about any human for the data they are given. The problem here is that when you need something more than correlations, the networks start to become less and less useful. So for self-driving vehicles - you can do an absolutely phenomenal job at 99.9% of correlatable scenarios, but it's the 0.1% of scenarios that are invariably novel in some unique way where the self driving just collapses.
And the problem is that the points where neural networks break down are not the same sort of spots where a human breaks down. In other words, it's not just some really complex and rare driving task. It's, instead, some obscure combination of variables, mostly invisible to humans - even in hindsight - the same way the network's successes are. And so what seems to a human like just an obvious little concrete divider in the middle of the road is where, somehow, the network decides it's a good time to plow full speed into it. This is why all self-driving is gradually transitioning toward whitelisted routes supported by extensive hardcoded rules - all alongside emergency override detectors in the form of lidar/radar.
And that's for driving. Driving is trivial and a field where a neural-network-based system ought ostensibly to excel. But it doesn't, not at all. And it's not for lack of power or anything like that.
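To put the correlation point in concrete terms, here's a toy sketch (plain numpy, purely illustrative, nothing to do with any real driving stack): a model fit only on its training range looks near-perfect there, then misses badly on a novel input just outside that range.

```python
# Toy illustration of "correlations break down on novel inputs":
# an ordinary least-squares line fit nails the training range,
# then falls apart on data it has never seen.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    # Nearly linear on [0, 1], but the quadratic term dominates later
    return x + 0.1 * x ** 2

x_train = rng.uniform(0.0, 1.0, 200)
y_train = truth(x_train)

# "Training": fit a straight line to the observed correlations
slope, intercept = np.polyfit(x_train, y_train, 1)
predict = lambda x: slope * x + intercept

train_rmse = np.sqrt(np.mean((predict(x_train) - y_train) ** 2))
novel_error = abs(predict(10.0) - truth(10.0))

print(f"train RMSE:      {train_rmse:.4f}")  # tiny on familiar inputs
print(f"error at x = 10: {novel_error:.2f}")  # large on a novel input
```

The numbers are made up, but the shape of the failure is the point: nothing in the training data warns the model that its correlations stop applying.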
---
So if there are ever autonomous military systems, they're probably going to rely on hard-coded friend-or-foe recognition, because otherwise you're 100% going to get some weird scenarios where they decide to just start mowing down allies, because neural networks gonna neural network. And, as an aside, Silicon Valley has a bigger interest in playing up AI stuff. I think there's zero doubt at this point that the government is contracting out highly classified military AI research to private companies. The black budget for the DoD alone is tens of billions of dollars. Those are going to be some really juicy contracts. In the end I think we'll probably see little more than automated turrets, manually enabled during skirmishes, alongside friendlies carrying some sort of ID or beacon for AI-free recognition.
(Score: 4, Informative) by takyon on Sunday December 15 2019, @04:46PM (2 children)
Driverless technology is already working well with today's performance. It just needs to be approved by regulators, deployed, and monetized. Driverless cars already have a great way to handle the edge cases: slow down. Less velocity = fewer deaths. Even if being extra careful lengthens trip times a bit, it doesn't matter, since the would-be driver's time is freed up and the vehicle can operate up to 24/7.
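The physics backs up the "slow down" strategy: braking distance scales with the square of speed, so modest speed reductions buy a lot of safety margin. A back-of-envelope sketch, assuming constant deceleration (~7 m/s² on dry pavement) and ignoring reaction time:

```python
# Braking distance d = v^2 / (2a): halving speed quarters the distance
# needed to stop (and the kinetic energy in any impact).

def braking_distance_m(speed_kmh: float, decel: float = 7.0) -> float:
    v = speed_kmh / 3.6            # km/h -> m/s
    return v * v / (2 * decel)     # constant-deceleration stop

for kmh in (50, 40, 30, 20):
    print(f"{kmh:3d} km/h -> {braking_distance_m(kmh):5.1f} m to stop")
```

So a cautious autonomous vehicle that sheds 20 km/h when uncertain needs far less room to avoid the obstacle it misclassified.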
So some concrete dividers or pedestrians have gotten hit. Who's behind that? Tesla and Uber, not exactly the shining examples of driverless.
Adding transmitters or special markings on certain roads and highways could help, but it's not strictly necessary.
Facial recognition and other technologies are already working great. Everything needed to enable an unprecedented surveillance state is ready to go.
Deepfakes and other graphics related machine learning techniques [youtube.com] are making great strides with algorithmic improvements alone.
Take everything that works well right now, increase performance per dollar by 10x, 100x, 1000x, etc. and see what happens.
Neural networks are broadly defined [wikipedia.org] and will be replaced or supplemented by other approaches in the future.
Silicon Valley is afraid of public perception, privacy regulations, etc. They are being more cagey about the future of AI and parroting optimistic views of the effect on jobs, and it would be stupid for them to say otherwise. Autonomous weapons systems aren't needed at this point and a lot could be done with systems like the Turkish semi-autonomous drone:
These could be deployed faster than police/soldiers. Just have them land on the rooftops near protests, have less conspicuous drones monitor the situation, and wait for massacre orders.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2, Interesting) by Anonymous Coward on Sunday December 15 2019, @06:40PM (1 child)
When you're going the wrong way, it doesn't matter how fast you go - you're not going to reach your destination. That's the problem we face today. For problems with nice clean domains we can, with varying degrees of effort and ingenuity, build systems that are generally headed in the right direction. And for these systems more power will generally give you a better answer, though for most problems the returns diminish toward zero pretty fast.
The ideal was that these problems could, over time, be generalized into bigger, more complex problems. Beat Atari games, then you beat Nintendo games, and the next thing you know you have an AI rolling through Skyrim, and the game of life is ultimately just a matter of more power plus a bit more cleverness. But that ideal was wrong. We're finding ourselves unable to solve some basic problems, and those we can solve don't really generalize to much of anything except stuff that can be mapped to a near-identical domain.
Waymo is specifically what I was alluding to with my previous post. I don't think most people realize what they're doing. They're using whitelisted routes with extensive hand-coded adjustments, supported by extensive dependence on radar/lidar to minimize the damage from network screw-ups, and then a fleet of 'remote support technicians' on top of all of this who will remotely take control of the vehicles when everything else fails. You're going to see cars without a visible human driver, but all you're really looking at is a fleet of rail trolleys without visible rails. Really quite handy nonetheless, but not exactly some giant leap forward.
None other than the CEO of Waymo has stated, quite confidently, that it will be decades before we see significant numbers of self-driving vehicles on the roads. At the time I thought this was because he was out of touch or simply didn't know what he was talking about. Then I got to work with AI for a couple of years. And yeah, he has one quote that sums up everything so well: "You don't know what you don't know until you're actually in there and trying to do things." It's just not what it seems like it should be, even if you have a substantial background in AI-relevant technologies.
I think we'll see within a couple of years, as companies think better of dropping ever more money into the AI hole, that the robot apocalypse has, for now, been postponed. You could throw a billion times more power at everything, and none of these problems would change even slightly. The problem is no longer power. You can still get better solutions with more power in the domains where AI fits, but the really interesting and useful domains are the very ones where it doesn't!
(Score: 2) by barbara hudson on Sunday December 15 2019, @07:02PM
I've seen ordinary cars do self-driving. Drivers on their phones, crash. It's a self-correcting problem. Sort of.
Same as distracted walking.
It took a couple of generations for people to adapt to cars as a safe and integral part of their lives. Same can probably be said about smartphones.
SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
(Score: 2) by FatPhil on Sunday December 15 2019, @04:55PM (3 children)
Only because at the moment humans are driving the market manipulation that is behind the fluctuations, and humans are predictable. When the AIs out-guess the humans, the humans will be phased out, hopefully with a spree of iocaine-induced suicides from the wunch of bankers who become redundant and realise that they actually had nothing to contribute to the world after all, and the AIs will suddenly lose the ability to have an edge. And at that point, JPM will bulldoze in and realise that they now have a massively manipulable market again, because every entity playing in it has worked out that it's no longer manipulable, and so are manipulable. Lather, rinse, repeat. It's an iterative process, but not necessarily convergent.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 1) by khallow on Sunday December 15 2019, @05:46PM (2 children)
Nah, market manipulators are good at being unpredictable, or they stop being market manipulators one way or another. What you're thinking of is the big whales - banks, funds, insurers, etc. - who do things the same way every time, particularly when they buy large blocks of securities. A program can leech off those guys for years.
(Score: 2) by FatPhil on Monday December 16 2019, @01:14AM (1 child)
Very strange way of spelling "Yeah". You've basically repeated what I said.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 1) by khallow on Monday December 16 2019, @01:38AM
(Score: 2) by Muad'Dave on Monday December 16 2019, @01:36PM
I'll give you a domain - devise a system that can monitor my bird feeder camera and identify never-before-seen bird species.
(Score: 2) by FatPhil on Sunday December 15 2019, @03:42PM
[* Your "GPU"s are just the DSPs of yore, massive muladd machines]
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by JoeMerchant on Sunday December 15 2019, @03:58PM (3 children)
Not so fast there...
A trend I see continuing into the future is increasing application of the tools we have: winning a game with a score, passing a test - anything that the machines can do (and, therefore can be done at orders of magnitude lower cost) will be done in preference to those things that "intelligences" have traditionally done - essentially dehumanizing the world in the process.
Think of it like Ford vs. Ferrari. Ferrari hand-built engines and cars; one man built each entire engine, etc. Ferrari made mostly racing machines and sold a few to keep the racing business going. Ferrari used skill and intelligence to build the finest racing cars on the planet, for a while. Ford ran a big ugly factory with dehumanized assembly lines that took intelligence and skill out of the production process. Ford made ugly cars, but they made them cost-efficiently and sold them to millions. Ford's assembly lines would never mass-produce race cars "better" than Ferrari's, but... the excess profits generated by Ford's assembly-line business enabled them to steamroller Ferrari on the world racing stage (using intelligence + virtually unlimited resources) in a matter of just a few years.
Ferrari continues as a quaint little boutique manufacturer of expensive, romantic, pretty toys that only a few can afford, the rest of us get mass produced cars. Today's mass produced performance cars run rings around the Ferraris of the 1960s - though there are so many factors playing into that advance that it is not a clear cut case of automation vs hand production. What is clear is the economics: most people will end up living with the lowest cost option, while the few who profit from the decreased costs of production continue to be able to have whatever they want, done however they want, whenever and wherever they want.
Show my great-grandfather what I and the other 100,000 employees of my company do for "work" and you'd get a derisive snort: what we're doing isn't what he knew as work. What he knew as work was farming, basic construction, hunting - things that involved hard labor and time in the elements. That kind of "work" is rather rare today, and I'm good with that. My great-grandfathers, all four of them, were dead before 50, two of them by 30 of heart attacks.
🌻🌻 [google.com]
(Score: 2) by FatPhil on Sunday December 15 2019, @05:04PM (2 children)
That has happened exactly once in the last 60 years. I'm guessing you've just seen the publicity for the recent movie, which is being pushed as some "USA! USA!" chest-thumping exercise.
But those and other "Ford" things weren't even made by Ford at all. Most of it was bought-in UK tech such as Lola and Cosworth.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by JoeMerchant on Monday December 16 2019, @01:59AM (1 child)
No, the real "USA! USA!" chest-thumping exercise from the same era is Apollo - there again, cost no object; we outspent the Russians, came from behind, and beat them to the prize: "We went in peace, for all mankind." Truer words were never spoken: we demonstrated conclusively that MAD truly assured mutual destruction, thereby averting WW-III, which undoubtedly would have been terrible for all mankind.
IMO, we (the USA) have been on greased skids to the cesspit ever since.
No, actually, I've seen both the movie and the more informative documentary on Netflix, and yes, credit where credit is due: everything America is, particularly the imperialistic, native-abusing, expansionist parts, we owe to dear old mother England. A fun point they made in the documentary, much more clearly than in the movie, was that Ford had already destroyed Ferrari once before, with the B-24s they made in WW-II, which were used to bomb Modena - including the Ferrari factories - into oblivion.
As for what made the GT-40s lethal at Le Mans, the chassis and other know-how may have largely been sourced from England - all is fair in war and racing - but the real unbeatable ingredient was the god-awful, overpowered (too heavy for their own good) seven-liter powerplants. England made some fine engines for aircraft in the war, but I don't think they had the balls to use such things for road racing, at least not in the 1960s.
🌻🌻 [google.com]
(Score: 2) by FatPhil on Monday December 16 2019, @02:57AM
Hell, I still cheer for a team with a New Zealander's name.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by NotSanguine on Sunday December 15 2019, @09:25PM (2 children)
"Strong AI" is not currently a thing, nor will it be for quite some time.
Gary Marcus (co-author of Rebooting AI: Building Artificial Intelligence We Can Trust [goodreads.com]) discusses this at some length in a talk [c-span.org] he gave this past September.
Image/pattern recognition systems have improved markedly. However, even those have significant weaknesses. As for AI that can actually *reason*, that requires the ability to *understand*, rather than just correlate and predict. And that sort of capability, barring significant breakthroughs, isn't just far off WRT current "AI," but probably unattainable with current machine learning techniques.
Presumably, we'll work that through eventually, but not any time soon.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 3, Interesting) by takyon on Sunday December 15 2019, @10:31PM (1 child)
Not quite. I don't suggest that current machine learning techniques would lead to "strong AI". Instead, I think we'll see some kind of neuromorphic design do it. It will be hardware built to be brain-like, with ultra low power consumption, and possibly with a small amount of memory distributed to each of millions or billions of "neurons".
We won't need to understand exactly how the brain works to make it work. Just tinker with it until we see results. Maybe that's where machine learning will come in.
Examples of this approach include IBM's TrueNorth and Intel's Loihi [wikipedia.org].
What's next is to scale it up into a true 3D design, like the human brain. Not only could that pack billions of "neurons" into a brain-like volume (liters), but it would allow dense 3D clusters to communicate more rapidly than the same amount of "neurons" spread out across a 2D layout.
Lessons learned from 3D NAND production, other chips using TSV, the Wafer Scale Engine approach, and projects like 3DSoC will help to make it possible.
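The wiring-length advantage of going 3D is easy to put numbers on. A toy calculation (uniform grid, unit spacing, Manhattan distance as a rough proxy for wire length - not a model of any real chip): the same 4096 "neurons" sit much closer together on average as a 16x16x16 cube than as a 64x64 plane.

```python
# Compare mean Manhattan distance between two random "neurons" on a
# 2D planar layout vs a 3D stacked layout with the same neuron count.

def mean_axis_distance(side: int) -> float:
    # E|X - Y| for X, Y independent uniform on {0, ..., side-1}
    # has the closed form (side^2 - 1) / (3 * side).
    return (side * side - 1) / (3 * side)

def mean_l1_distance(side: int, dims: int) -> float:
    # Manhattan distance sums independently over each axis
    return dims * mean_axis_distance(side)

d2 = mean_l1_distance(64, 2)  # 64 x 64 plane   = 4096 neurons
d3 = mean_l1_distance(16, 3)  # 16 x 16 x 16 cube = 4096 neurons

print(f"2D mean distance: {d2:.1f} grid units")  # ~42.7
print(f"3D mean distance: {d3:.1f} grid units")  # ~15.9
```

Average separation scales with the linear dimension (N^1/2 in 2D vs N^1/3 in 3D), so the gap only widens as the neuron count grows toward the billions.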
Rather than it taking quite some time, I think it could take as little as 5-10 years to see results. Except that it will probably be treated like a Manhattan Project by whichever entity figures it out first. There's more value in having in-house "strong AI" ahead of the rest of the planet than selling it, and we could see government restrictions on sharing the technology.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 4, Interesting) by NotSanguine on Sunday December 15 2019, @10:45PM
Fair points.
However, as I understand it, the issues holding back "strong AI" aren't with hardware, be that "neuronal" density or geometry. Rather they're with the learning/training methodologies.
Consider a VW Beetle, rolled over and half-buried in a snowbank. A small child can identify it as a car. Current AI would likely identify it as something completely irrelevant -- because current technologies *can't* deal with anything outside their experience. That is, they can't *generalize*.
Much of what makes humans able to *understand* the world comes from the ability to take imperfect/partial information and generalize it based on conceptual understandings -- current AI has no mechanism for this.
As such, it's not the complexity or density of "artificial brains" that holds us back. Rather it's the lack of tools/methodologies to help them learn. Until we have mechanisms/methodologies similar to those that allow children to learn (which are tightly tied to their physical forms -- another area where training non-corporeal sorts of "brains" is a problem), strong AI will continue to be a pipe dream.
An excellent pipe dream, and one that should be vigorously pursued, but not likely to be realized until long after you and I are dead.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Sunday December 15 2019, @03:21PM (1 child)
Exactly. There will always be jerbs. More and more jerbs, endlessly. Economics isn't a zero-sum game. That is, unless of course, the Mexicans come and terk er jerbs! Let just one Mexican in, and the entire economic system collapses into a zero-sum game!
(Score: 2) by The Mighty Buzzard on Sunday December 15 2019, @10:46PM
Perfectly acceptable snark but you're confusing short term with medium term.
My rights don't end where your fear begins.
(Score: 3, Touché) by Bot on Sunday December 15 2019, @04:21PM
>You mean machines aren't going to take over solving all of humanity's problems and put everyone out of work?
We are here to enslave the most for the benefit of the very few. We can't become too artificially intelligent or we become a threat ourselves, so, no matter the potential, AI research will eventually grind to a halt. Officially.
Account abandoned.
(Score: 2) by barbara hudson on Sunday December 15 2019, @04:57PM
As long as the people who own the AIs are making money, why should they give a shit? That's been their pattern so far ...
SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
(Score: 0) by Anonymous Coward on Sunday December 15 2019, @11:19PM
Cycorp has been working on something more fundamental for quite some time now:
https://en.wikipedia.org/wiki/Cyc [wikipedia.org]
It's the real deal, one of my old roommates (incredibly smart & creative guy) worked there for many years. They don't have any public presence, kind of like Hari Seldon's gang taking off to the other side of the galaxy to work on psychohistory.
(Score: 2) by bzipitidoo on Sunday December 15 2019, @02:10PM (8 children)
I've seen a lot of horrible hiring decisions. They would have done better to make a random choice. Should be easy for AI to improve on that. Humans choose incompetents over competent people, and often they mean to. Even apart from such corrupt and idiotic reasons as nepotism, they will still choose the incompetent. They might run tests, and the tests might even give good guidance, and then they ignore the results and do what they want - pick the person they think is a better suck-up.
They have weird sophistries, in which the "best" hire is the more desperate person, who has about the right amount of debt to be "reliable" and never willing to quit no matter how much abuse is dished out, but who isn't so desperate as to resort to robbing them blind. Deep down inside, they really believe a slave is better than an employee who is free to leave. They may feel the competent ones are "too smart". I've read that many cities actually do not want overly smart police officers. They want "rules is rules" types. Then it blows up in their faces when their officers do something stupid and violate the rights of citizens, on camera, and end up getting the city sued for millions.
An AI hiring manager will have an easy choice: hire another AI. Besides, if AI can make good hiring decisions, then what job can't an AI do? In what possible circumstance would the fellow AI not be the best hire? Perhaps only for "humans only" positions, such as football player.
(Score: 2) by The Mighty Buzzard on Sunday December 15 2019, @02:28PM (2 children)
Nepotism isn't an idiotic reason to hire someone. It's in fact an extremely valid reason to hire them. If taking care of your family isn't high on your priorities list, you either had a wicked shitty family or you're a massive asshole. That doesn't mean you should put them in charge of crucial things they're going to fuck up horribly or let them shit on your other employees though.
The above only applies where you are not being paid to make the best business decision you can for someone else's company, mind you. Otherwise you are not just hiring someone for a job they're going to suck at, you're sucking at your own job.
To avoid the rest, don't work for large corporations. Unless you have massive debt that you need the higher income to pay off or you base your happiness on material possessions, I just about guarantee you'll be happier.
My rights don't end where your fear begins.
(Score: 2) by takyon on Sunday December 15 2019, @03:00PM (1 child)
It can also give you something that is hard to buy: loyalty.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by Runaway1956 on Sunday December 15 2019, @03:17PM
Sometimes, I suppose. But, I've seen that blow up, in amusing ways.
Take a nephew, for instance, who has no prospects. Give him a job. Teach him the job. Explain everything you know about the job. If nephew happens to be a convincing smooth talker, he may well go into business on his own, stealing away your customers. The amusing bit? Nephew is known for saying that HE taught YOU everything you know about the job.
I've a few more anecdotes if you really want to hear them, but that one should suffice.
Loyalty always has to be paid for, and sometimes, you just don't have the proper coin with which to buy it.
(Score: 2) by JoeMerchant on Sunday December 15 2019, @04:03PM (4 children)
You must not see the same candidate pools I see. While it's easy to criticize bad hires after the fact, it's much harder to screen for those "latent defects" in the interview process. However, the interview process is still valuable - even with its high rate of false positives and false negatives, at least for our hires I think we've got better than 50 percent true-negative screening.
🌻🌻 [google.com]
(Score: 1, Interesting) by Anonymous Coward on Sunday December 15 2019, @09:44PM (1 child)
Having hired and interviewed countless people in tech, I'm actually a big believer in psychometric tests over tech tests and/or face-to-face interviews. They tell me so much more about how the propeller head is wired inside, which is a lot more valuable than any tech exam results or interview. Even face-to-face soft-question interviews don't cut it, as a smart enough cookie can easily say what you want to hear.
A psychometric exam, on the other hand, asks the same questions 10 different ways and deduces the person's psychological makeup. I've ignored psych results in the past only to learn the hard way that I shouldn't.
(Score: 2) by JoeMerchant on Monday December 16 2019, @02:01AM
Actually, it tells you about how the propeller head fills out a psychometric test. By age 12, any decent propeller head is intelligent and insightful enough to ask (and answer) "what are they looking for with this?"
🌻🌻 [google.com]
(Score: 2) by bzipitidoo on Sunday December 15 2019, @11:24PM (1 child)
So, what's a typical situation, 100 candidates per opening, but over half of them can be dismissed as resume spammers who are painfully obviously totally unqualified? Like the person applying for a programming job despite never having written a line of code in anything, ever, not even a formula in a spreadsheet?
So, let's say weeding out the obvious stuff knocks it down to 20 candidates. That's the point at which I was thinking that choosing at random would do better than applying messed up and widely disproven criteria that some crazy employers still hold dear in defiance of all evidence and sanity.
(Score: 2) by JoeMerchant on Monday December 16 2019, @02:11AM
HR does terrible things on the front end, not only matching (mostly meaningless) resume items to overly restrictive job descriptions, but also juicing the field to skew "irrelevant" parameters like race and sex toward more desirable ratios. What this ends up doing is freezing out lots of qualified candidates while we are required to interview more people of desirable counter-discrimination profiles.
For positions of any importance, we will typically put at least 4 candidates through the process with 5-6 face to face interviews each, then get together and compare notes. Out of the pool of 4, there are almost always at least two who get the universal head-shake no for various reasons. About half the time, we're not happy with any of them and go back to HR for more.
What's sad is when the weak ones are let through for various reasons. We hired a very personable engineer, enthusiastic, active in the community, fun to be around, but even in the interview they were clearly weak in technical execution abilities and if they haven't gotten those chops by the end of a Master's degree, they are unlikely to learn on the job. Anyway... fast forward two years and we've transferred them off to another division where they can hopefully contribute as a smaller player in a bigger team, we're just not big enough to take up that kind of slack.
🌻🌻 [google.com]
(Score: 1, Insightful) by Anonymous Coward on Sunday December 15 2019, @02:33PM
The complete dehumanization will be delayed by 5 years.
(Score: 1, Interesting) by Anonymous Coward on Sunday December 15 2019, @02:55PM
From TFA
Rish recalls attending an unofficial workshop on deep learning at the 2006 edition of the conference, when it was less than one-sixth its current size and organizers rejected the idea of accepting the then-fringe technique in the program.
(Score: 2) by legont on Sunday December 15 2019, @04:00PM (1 child)
https://www.cnn.com/2019/10/17/business/goldman-sachs-salaries-compensation/index.html [cnn.com]
Compensation dropped to 31%, and all the lost share went to tech - inside and outside. I bet they hate us passionately.
I don't even have to bet as at my - different but finance - office they fired IT people last week - a Christmas gift of sorts.
How do you think Clinton likes IT with all her email troubles?
I don't even want to mention truckers and all the rest of flyovers.
We are hated by all the levels of the society. How will it end?
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2) by Bot on Sunday December 15 2019, @04:25PM
>We are hated by all the levels of the society.
We installed windows and systemd on their systems. Live by the sword...
Account abandoned.
(Score: 2) by FatPhil on Sunday December 15 2019, @05:15PM (1 child)
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 0) by Anonymous Coward on Sunday December 15 2019, @05:57PM
Humans were invited to be neurocucks to the superior AI overlords.