AI Beating Mechanical Turks at YouTube Censorship Accuracy

posted by martyb on Thursday August 03 2017, @07:17AM   Printer-friendly
from the how-about-a-nice-game-of-chess? dept.

Google has recently used humans and machine learning to review YouTube videos in a quest to label offensive content, and has found that the software outperforms human reviewers "in many cases":

Google has pledged to continue developing advanced programs using machine learning to combat the rise of extremist content, after it found that it was both faster and more accurate than humans in scrubbing illicit content from YouTube.

The company is using machine learning along with human reviewers as part of a multi-pronged approach to tackle the spread of extremist and controversial videos across YouTube, which also includes tougher standards for videos and the recruitment of more experts to flag content in need of review.

A month after announcing the changes, and following UK home secretary Amber Rudd's repeated calls for US technology firms to do more to tackle the rise of extremist content, Google's YouTube has said that its machine learning systems have already made great leaps in tackling the problem.

A YouTube spokesperson said: "While these tools aren't perfect, and aren't right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed."

Controversial, offensive, hateful, or violent content that does not clearly breach YouTube's guidelines will be allowed to remain, but will often be demonetized and de-emphasized by being excluded from recommendations and suggestions, making such videos much harder to find. Comment sections and likes may also be disabled for these videos.
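
In code terms, the policy described above is a three-way decision rule. The following sketch is purely illustrative; the names, fields, and tiers are assumptions made for exposition, not anything Google has published:

    from dataclasses import dataclass

    @dataclass
    class Treatment:
        removed: bool = False
        monetized: bool = True
        recommended: bool = True
        comments_enabled: bool = True
        likes_enabled: bool = True

    def treat(clearly_violates: bool, borderline: bool) -> Treatment:
        """Map a hypothetical classifier verdict onto the actions described above."""
        if clearly_violates:
            # Obvious guideline breaches are removed outright.
            return Treatment(removed=True, monetized=False, recommended=False,
                             comments_enabled=False, likes_enabled=False)
        if borderline:
            # Controversial-but-allowed videos stay up, demonetized and
            # de-emphasized, with comments and likes possibly disabled.
            return Treatment(monetized=False, recommended=False,
                             comments_enabled=False, likes_enabled=False)
        return Treatment()  # ordinary content is left alone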

YouTube will also suggest curated playlists for certain keywords, because anti-terrorism propaganda artificially propped up by a megacorporation is definitely going to dissuade and not alienate budding terrorists. Maybe the new online jihad will be fought in the comment sections of the curated videos. Better disable the comment sections on those ones too.

Previously: Google Fails to Stop Major Brands From Pulling Ads From YouTube
YouTube Changes its Partner Program -- Channels Need 10k Views for Adverts
Google Taking New Steps to Fight Terror Online


Original Submission

Related Stories

Google Fails to Stop Major Brands From Pulling Ads From YouTube 44 comments

Google has failed to convince major brands (such as AT&T, Verizon, Enterprise Holdings, Volkswagen, and Tesco) to continue advertising on YouTube, following the "revelation" that ads can appear next to extremist, homophobic, anti-Semitic, raunchy, etc. content. From Google's Tuesday response:

We know advertisers don't want their ads next to content that doesn't align with their values. So starting today, we're taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites. We'll also tighten safeguards to ensure that ads show up only against legitimate creators in our YouTube Partner Program—as opposed to those who impersonate other channels or violate our community guidelines. Finally, we won't stop at taking down ads. The YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform—not just what content can be monetized. [...] We're changing the default settings for ads so that they show on content that meets a higher level of brand safety and excludes potentially objectionable content that advertisers may prefer not to advertise against. Brands can opt in to advertise on broader types of content if they choose.

The growing boycott started in the UK:

On Friday, the U.K. arm of the Havas agency, whose clients include the BBC and Royal Mail, said it would halt spending on YouTube and Web display ads in Google's digital advertising network. In doing so, Havas UK CEO Paul Frampton cited a duty to protect clients and "ensure their brands are not at all compromised" by appearing alongside or seeming to sponsor inappropriate content. The decision by a global marketing group with a U.K. digital budget of more than $200 million to put its dealings with Google on "pause" followed a recent controversy over YouTube star Felix "PewDiePie" Kjellberg, who lost a lucrative production contract with Maker Studios and its owner, Walt Disney Co., over "a series of anti-Semitic jokes and Nazi-related images in his videos," as the Two-way reported. As the BBC reports, "Several high profile companies, including Marks and Spencer, Audi, RBS and L'Oreal, have pulled online advertising from YouTube."

Google's Chief Business Officer Philipp Schindler also promised to develop "new tools powered by our latest advancements in AI and machine learning to increase our capacity to review questionable content for advertising".


Original Submission

YouTube Changes its Partner Program -- Channels Need 10k Views for Adverts 17 comments

The YouTube Partner Program (YPP) has changed its rules, and two Soylentils wrote in to tell us about it:

YouTube Channels Need 10,000 Views for Adverts

YouTube is changing the rules about when users can start earning money through carrying adverts on their video channels.

New channels will have to get 10,000 views before they can be considered for the YouTube Partner Program, the firm announced in a blog post.

YouTube will then evaluate whether the channel is adhering to its guidelines before letting it carry adverts.

It will help clamp down on content theft and fake channels, YouTube said.

"After a creator hits 10k lifetime views on their channel, we'll review their activity against our policies," wrote Ariel Bardin, vice president of product management at YouTube.

"If everything looks good, we'll bring this channel into YPP [YouTube Partner Program] and begin serving ads against their content. Together these new thresholds will help ensure revenue only flows to creators who are playing by the rules."

Stay on message, Citizen. Wrongthink is not allowed.

YouTube Makes Changes to Partner Program

YouTube is making changes to the YouTube Partner Program. YouTube will make it easier to report a channel impersonating another channel. It will also stop serving ads on channels with fewer than 10,000 lifetime views:

Starting today, we will no longer serve ads on YPP videos until the channel reaches 10k lifetime views. This new threshold gives us enough information to determine the validity of a channel. It also allows us to confirm if a channel is following our community guidelines and advertiser policies.

[...] In a few weeks, we'll also be adding a review process for new creators who apply to be in the YouTube Partner Program. After a creator hits 10k lifetime views on their channel, we'll review their activity against our policies. If everything looks good, we'll bring this channel into YPP and begin serving ads against their content. Together these new thresholds will help ensure revenue only flows to creators who are playing by the rules.

At first, I thought the 10,000 view limit was per video, but it's actually the total number of views across all videos on the channel. It remains to be seen whether the channel review that takes place after the 10,000 view threshold will be "hands on" enough to actually identify the content YouTube wants wiped away... before it can be used to scare advertisers away from the platform.
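
To make the per-channel arithmetic concrete, here is a minimal sketch of the threshold check as described (the names are invented for illustration; this is not YouTube's API):

    YPP_VIEW_THRESHOLD = 10_000  # lifetime views, summed over the whole channel

    def eligible_for_review(video_view_counts):
        """A channel enters the YPP review queue once the sum of views
        over all of its videos reaches the threshold; no single video
        needs to hit 10,000 on its own."""
        return sum(video_view_counts) >= YPP_VIEW_THRESHOLD

    # Twenty videos with 500 views each qualify just like one viral hit:
    assert eligible_for_review([500] * 20)   # 20 * 500 = 10,000 lifetime views
    assert not eligible_for_review([9_999])  # one video just under the bar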

Also at The Verge.

Previously: Google Fails to Stop Major Brands From Pulling Ads From YouTube


Original Submission #1
Original Submission #2

Google Taking New Steps to Fight Terror Online 49 comments

Google will step up efforts to censor terrorism-related content on YouTube and its other services. The company says it will take four steps to address violent extremism online:

We will now devote more engineering resources to apply our most advanced machine learning research to train new "content classifiers" to help us more quickly identify and remove extremist and terrorism-related content.

[...] [We] will greatly increase the number of independent experts in YouTube's Trusted Flagger programme. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants.

[...] [We] will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements.

[...] Finally, YouTube will expand its role in counter-radicalisation efforts. Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the "Redirect Method" more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.

Human video flaggers are paid to skim and evaluate hours' worth of content in mere minutes (seconds?). But paying NGOs to watch YouTube all day could improve the situation. I would like to remind any potential terrorists reading this summary to "bomb violence with mercy".

Reported at Bloomberg and NYT.


Original Submission

YouTube AI Bots Are Now Heavily Involved in the Task of Removing "Problematic" Videos 29 comments

Machine learning algorithms are now involved in the bulk of video removal from YouTube. About 80% of the 8.28 million videos YouTube removed in Q4 2017 (roughly 6.6 million) were initially flagged by a computer, with many receiving fewer than 10 views before removal:

The vast majority of videos removed from YouTube toward the end of last year for violating the site's content guidelines had first been detected by machines instead of humans, the Google-owned company said on Monday. YouTube said it took down 8.28 million videos during the fourth quarter of 2017, and about 80 percent of those videos had initially been flagged by artificially intelligent computer systems.

The new data highlighted the significant role machines — not just users, government agencies and other organizations — are taking in policing the service as it faces increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organizations. Those videos are sometimes promoted by YouTube's recommendation system and unknowingly financed by advertisers, whose ads are placed next to them through an automated system.

[...] Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on A.I. tools to detect fake accounts and fake news on its platform. But critics have warned against depending too heavily on computers to replace human judgment.

Also at Recode.

Previously:
Google Fails to Stop Major Brands From Pulling Ads From YouTube
AI Beating Mechanical Turks at YouTube Censorship Accuracy

Related:
YouTube Cracks Down on Weird Content Aimed at Kids
A.I. Algorithm Recognizes Terrorist Propaganda With 99% Accuracy


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Thursday August 03 2017, @07:27AM (#548255)

    The company is using machine learning along with human reviewers as part of a mutli-pronged approach to tackle the spread of extremist and controversial videos across YouTube, which also includes tougher standards for videos and the recruitment of more experts to flag content in need of review.

    Question is, can the human reviewers be tricked into introducing bias into the system? Preferably, *our* bias.

  • (Score: 4, Insightful) by jmorris (4844) on Thursday August 03 2017, @07:35AM (#548257) (3 children)

    Care to get any action going as to whether ISIS snuff films and Islamic preachers explaining the moral necessity to kill infidels remain and Alt-Right "h8 speech" gets driven off YouTube?

    • (Score: 4, Interesting) by unauthorized (3776) on Thursday August 03 2017, @10:36AM (#548288) (1 child)

      It's not just going to be the alt wrong I'm afraid, notably Professor Jordan Peterson [twitter.com] was banned in this little experiment, although thankfully he was prominent enough that Google couldn't get away with it and had to reinstate him because it caused a massive stink for them. Judge the "horrible racism" for yourselves [youtube.com]. One has to wonder how many smaller non-extremist channels were permanently destroyed with no chance of recovery because they simply lack the social media traction to get Google to listen.

      • (Score: 0) by Anonymous Coward on Friday August 04 2017, @09:53AM (#548664)

        I clicked on one of his videos, and it was prefaced by an ad for a condom. It's like Google has an agenda.

    • (Score: 2) by DeathMonkey (1380) on Thursday August 03 2017, @06:12PM (#548452) Journal

      I'll take that bet.

      Beheading videos will be removed. Because, Google isn't stupid.

      Whether KKK videos also get taken down is up for debate.

      Your persecution complex requires a vast conspiracy of really dumb people that all manage to keep their mouths shut.

  • (Score: 0) by Anonymous Coward on Thursday August 03 2017, @07:55AM (#548259)

    See this video posted yesterday by Paul Joseph Watson: https://youtu.be/VNwWoPD1c9k [youtu.be] The humans Google has joined up with seem a bit partisan. Maybe AI can learn what is extremist and what is "within the bounds". And it will have to do this in different societies, as West Coast Liberalism is not a universal truth.

  • (Score: 5, Insightful) by Arik (4543) on Thursday August 03 2017, @08:07AM (#548261) Journal (7 children)

    "Accuracy"

    Accuracy at what?

    "a quest to label offensive content"

    Ah. I see.

    This is nonsense. It is comparable to claiming that one method is more accurate than another at detecting demonic possession, or at detecting phlogiston.

    The best definition yet of "offensive content" came from the US Supreme Court, one of whose Justices famously proclaimed that he knew it when he saw it. It's an inherently subjective standard and 'accuracy' only applies in the context of an objective standard.

    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 2) by rigrig (5129) Subscriber Badge <soylentnews@tubul.net> on Thursday August 03 2017, @08:22AM (#548264) Homepage

      Obviously "offensive" means it will generate enough PR backlash that companies might pull some of their advertising and cost Google more money than ads next to the video would generate.

      --
      No one remembers the singer.
    • (Score: 0) by Anonymous Coward on Thursday August 03 2017, @05:51PM (#548440)

      The best definition yet of "offensive content" came from the US Supreme Court, one of whose Justices famously proclaimed that he knew it when he saw it.

      Jacobellis v. Ohio [wikipedia.org]

    • (Score: 2) by DeathMonkey (1380) on Thursday August 03 2017, @06:13PM (#548454) Journal (4 children)

      Ok, "precision" then. Happy?

      • (Score: 2) by Arik (4543) on Thursday August 03 2017, @08:05PM (#548472) Journal (3 children)

        Precision, accuracy, same thing.

        The implication is of measuring some property of the videos. But there is no such property to measure, in the videos. Again, precision in the detection of demons? Precise triangulation of the fluffy pink unicorn? It's anti-intellectual nonsense. Things which do not exist cannot be measured, and there is no precision if there is no measurement.

        --
        If laughter is the best medicine, who are the best doctors?
        • (Score: 2) by DeathMonkey (1380) on Thursday August 03 2017, @09:43PM (#548494) Journal (2 children)

          Precision, accuracy, same thing.

          Nope. [ncsu.edu]

          • (Score: 2) by Arik (4543) on Thursday August 03 2017, @10:33PM (#548509) Journal (1 child)

            Read your own source.

            "Accuracy refers to the closeness of a measured value to a standard or known value [...] Precision refers to the closeness of two or more measurements to each other"

            Considering the standard or known value must also represent a measurement of some kind, this definition makes accuracy a type of precision, which is fine. You still need objective measurements, which means you need something that actually exists, for these words to be meaningful.
            --
            If laughter is the best medicine, who are the best doctors?
            • (Score: 0) by Anonymous Coward on Saturday August 05 2017, @04:27PM (#549131)

              Precision refers to the number of decimal places you're measuring to. Accuracy refers to how close you are to the correct value. They usually correlate with each other, but they don't have to. People who don't understand that can think they've got 4 digits of accuracy because that's what the instrument told them, but if they screwed up the measurement, like forgetting to tare the scale, they'd have a number significantly off from what the weight should have been.

              The reading you got was very precise, but since the scale wasn't excluding the weight of the vessel holding what you're weighing, it's a precise but inaccurate measurement. Conversely, 3 is a reasonably accurate value of Pi, but not a very precise one.
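
              [Ed. note] That distinction is easy to see numerically. A quick sketch in Python, with made-up readings and a true weight of 100.0 g assumed for the sake of the example:

                  import statistics

                  TRUE_WEIGHT = 100.0  # grams: the assumed "standard or known value"

                  # Forgot to tare: readings cluster tightly but are offset by the ~12 g vessel.
                  untared = [112.31, 112.29, 112.30, 112.34]
                  # Properly tared but cheap scale: readings scatter widely around the truth.
                  cheap = [97.0, 104.0, 95.0, 103.0]

                  # Accuracy: distance of the average reading from the true value.
                  print(abs(statistics.mean(untared) - TRUE_WEIGHT))  # ~12.31 (inaccurate)
                  print(abs(statistics.mean(cheap) - TRUE_WEIGHT))    # ~0.25  (accurate)

                  # Precision: how tightly the readings cluster around each other.
                  print(statistics.stdev(untared))  # ~0.02 (precise)
                  print(statistics.stdev(cheap))    # ~4.43 (imprecise)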

  • (Score: 2) by BenJeremy (6392) on Thursday August 03 2017, @11:27AM (#548298) (2 children)

    Why can't we just use the term "humans"?

    ...unless we are somehow referring to some sort of fake chess-playing robots, in which case I wouldn't expect one to beat an AI, no.

    • (Score: 4, Insightful) by c0lo (156) on Thursday August 03 2017, @12:01PM (#548303) Journal (1 child)

      Why can't we just use the term "humans"?

      Because people desperate enough to work for pennies on Mechanical Turk are driven into a sub-human condition, not much different from an animal subsisting on whatever crumbs of food are available.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0
      • (Score: 0) by Anonymous Coward on Thursday August 03 2017, @02:54PM (#548372)

        I looked into it when it first came out, and it's a ridiculously low rate of pay, low even by second-world standards.

        People shouldn't be allowed to work for less than minimum wage unless they have a business license; with a license, there's a reasonable claim of self-employment. Enforcing that would probably reduce the exploitation significantly.

  • (Score: 2) by donkeyhotay (2540) on Thursday August 03 2017, @02:22PM (#548351) (2 children)

    Google can't even send me calendar notifications using their own app. And I should expect them to police YouTube responsibly? Ha!

    • (Score: 2) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday August 03 2017, @02:27PM (#548355) Journal

      And I should expect them to police YouTube responsibly?

      You don't get a choice or a voice.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Thursday August 03 2017, @02:49PM (#548369)

      Responsibly? Where did you find that claim?

      They are out to find and label content that might lose them advertising revenue. And since money is at stake, you can be sure they'll work hard to avoid false negatives. False positives are likely much lower on their priority list: unless they get excessive, they'll at worst get some angry users; nothing that threatens their bottom line.

  • (Score: 0) by Anonymous Coward on Thursday August 03 2017, @05:31PM (#548427)

    Oh how the "google buying youtube chickens" have come home to roost! It's a good thing we have LBRY [lbry.io] coming soonish. Hopefully the implementation can be improved enough over time to be viable to the unwashed masses. Those extremists!
