posted by janrinok on Friday November 29, @02:12PM   Printer-friendly
from the good-luck-with-that dept.

"This bill seeks to set a new normative value in society that accessing social media is not the defining feature of growing up in Australia. There is wide acknowledgement that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content unfiltered and infinite.

(Michelle Rowland, Minister for Communications, Australian Parliament, Nov 21)

Australia's House of Representatives has passed a bill that would ban access to the social media platforms TikTok, Facebook, Snapchat, Reddit, X and Instagram for youngsters under 16. The bill passed by 102 votes to 13.

Once the bill gets through the Senate -- expected this week -- the platforms would have a year to work out how to implement the age restriction, without using government-issued identity documents (passport, driving licenses), and without digital identification through a government system.

The leaders of all eight Australian states and mainland territories have unanimously backed the plan, although Tasmania, the smallest state, would have preferred the threshold was set at 14.

There are some counter-noises though (no, not you, Elon). More than 140 academics signed an open letter to Prime Minister Anthony Albanese condemning the 16-year age limit as "too blunt an instrument to address risks effectively."

The writers of that open letter fear that the responsibility of giving access to social media will fall on the parents, and "not all parents will be able to manage the responsibility of protection in the digital world".

Further, " Some social media 'type' services appear too integral to childhood to be banned, for example short form video streamers. But these too have safety risks like risks of dangerous algorithms promoting risky content. A ban does not function to improve the products children will be allowed to use."

The open letter pleads instead for systemic regulation, which "has the capacity to drive up safety and privacy standards on platforms for all children and eschews the issues described above. Digital platforms are just like other products, and can have safety standards imposed."

Australia's ban on social media will be a world first, with fines of up to 50 million Australian dollars for each failure to prevent youngsters from having a social media account.

Under the laws, which won't come into force for another 12 months, social media companies could be fined up to $50 million for failing to take "reasonable steps" to keep under 16s off their platforms. There are no penalties for young people or parents who flout the rules. Social media companies also won't be able to force users to provide government identification, including the Digital ID, to assess their age.

From "ban children under the age of 16 from accessing social media" we also get the following:

Social media, or an "age-restricted social media platform", has been defined in the legislation as including services where:

  1. the "sole purpose, or a significant purpose" is to enable "online social interaction" between people
  2. people can "link to, or interact with" others on the service
  3. people can "post material", or
  4. it falls under other conditions as set out in the legislation.

Original Submission

 
This discussion was created by janrinok (52) for logged-in users only.
  • (Score: 3, Insightful) by zocalo on Friday November 29, @04:52PM (5 children)

    by zocalo (302) on Friday November 29, @04:52PM (#1383779)
    I do think there's a genuine problem here - social media was never great, but the toxicity has skyrocketed over the last few years - but I also think Australia is trying for the wrong legislative solution to the problem, and doing so with far too broad a sword at that. The only way I can see this approach going for Australia is "wrong".

    An ISP-provided net nanny service wouldn't be flawless, but would definitely be a better approach than this, and they could even charge a little extra for it to soften the blow of making providing it a legal requirement for ISPs over a given threshold (number of users or revenue, maybe). Give parents a simple dashboard that controls what kinds of content/specific sites are allowed and when, perhaps aided by some optional site-provided metadata akin to game or movie ratings to define the types of content being hosted, and you're nearly there. Alternatively, maybe require parents (or legal guardians) to actually take responsibility for being a parent, including by making them at least partly legally responsible for their offspring's misdeeds while they are still a minor (subject to due process, naturally).

    The fact is that kids are well aware of Real World Shit at an earlier age than many people, and especially parents, like to think, and that includes a LOT of topics that might not be a parent's first choice to talk about, but it's still a much better option than them finding out the "facts" from the playground rumour mill or some random Internet site. The responsible thing for a parent to do is to be open about this kind of stuff, talk to their kids about it, make sure they know that while it's OK to be curious there is some nasty stuff out there and it's probably for the best if they don't go too far down the rabbit hole, and - most importantly - that they can talk frankly about it if they want to.

    That so many parents feel they can abrogate the implicit responsibility to provide parental guidance is perhaps the most compelling argument for actually having something of a Nanny State, and why governments feel they need to act. That's supposed to be *your* job, not your brat's peers, school teachers, neighbours, the police, and certainly not your government's, but that's the slope they're sliding down. And, no, ignorance of the Internet, that content filtering exists, and so on isn't really an argument for parents any more. Most parents with tweens & teens today are going to be in their 30s & 40s, meaning they were kids themselves during the .COM boom, so it's incredibly unlikely that if they are in a position to provide their kids with Internet access they have not had exposure to it themselves on either a personal or professional level.
    --
    UNIX? They're not even circumcised! Savages!
  • (Score: 3, Insightful) by pTamok on Friday November 29, @05:15PM (4 children)

    by pTamok (3042) on Friday November 29, @05:15PM (#1383782)

    How would you implement an ISP provided Net-nanny service?

    Assuming you don't install an ISP-provided certificate and exclusively use an ISP-proxy, TLS-protected traffic is opaque to casual inspection for filtering. Installing the ISP certificate gives the ISP access to all your traffic, including that to banks...
    If you want to refuse to do DNS lookups for certain domains, then DNS-over-HTTPS or DNS-over-TLS stops that.
    So you could play Whac-A-Mole™ with filtering particular IP addresses, which would need to be extended to all known VPN gateways...
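
    As a rough sketch of the DNS-lookup refusal idea (hypothetical blocklist and domain names; a toy model, not a real resolver):

    ```python
    # Hypothetical sketch of ISP-side DNS filtering: answer NXDOMAIN for
    # blocked domains. A client that switches to DNS-over-HTTPS never asks
    # this resolver at all, so the filter is trivially bypassed.
    BLOCKLIST = {"example-social.test", "example-video.test"}

    def resolve(qname, upstream_lookup):
        """Return an IP for qname, or None (NXDOMAIN) if it is blocked."""
        # Match the domain itself and any subdomain of it.
        labels = qname.lower().rstrip(".").split(".")
        for i in range(len(labels)):
            if ".".join(labels[i:]) in BLOCKLIST:
                return None  # pretend the name does not exist
        return upstream_lookup(qname)

    # Stub upstream resolver, for demonstration only.
    def fake_upstream(qname):
        return "203.0.113.10"

    print(resolve("www.example-social.test", fake_upstream))  # None (blocked)
    print(resolve("news.example.test", fake_upstream))        # 203.0.113.10
    ```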

    Perhaps there is a way that I have missed. I don't claim to be an expert here, and am willing to be educated.

    • (Score: 2) by zocalo on Friday November 29, @06:48PM (3 children)

      by zocalo (302) on Friday November 29, @06:48PM (#1383788)
      Some ISPs already do this as a value-add, which is why I suggested it - e.g. Plusnet SafeGuard and Sky's Parental Controls. One method is done via a combination of DNS filtering and a content-filtering web proxy, the latter either configured in the browser or (more securely) enforced by the ISP on a per-customer connection basis at the back-end. Depending on the system used, there are usually workarounds, including using VPNs and DoH/DoT, and yes, it does have some privacy implications. Another is to use some client-side software, usually secured by a parental password, that does the filtering locally and can be enabled on a per system/user basis, but again there are workarounds. Booting a Linux live distro and using that to access the web will get you round pretty much anything, of course.

      Frankly, I don't think there are any totally effective technical or legislative solutions to this that will work in all situations, but there are potential solutions in both camps that may at least solve part of the overall problem and work for some subset of parents and kids - as long as the parents know the options exist and how to apply them if they feel it's needed. The only solution that will pretty much work in every circumstance is effective parenting, by the actual parents/guardians, built on a mutually understood level of trust.
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 2) by gnuman on Friday November 29, @09:57PM (2 children)

        by gnuman (5013) on Friday November 29, @09:57PM (#1383795)

        Some ISPs already do this as a value-add, which is why I suggested it - e.g. Plusnet SafeGuard and Sky's Parental Controls. One method is done via a combination of DNS filtering and a content-filtering web proxy

        Re: proxies, these are almost unused these days. TLS has killed the proxy. The best that can be done is at the "App" level. This is already happening with things like youtube-kids, where parents control what kids can see. The kids will work around that via HTTPS access. I mean, a 6 year old, you can control. 12 year olds??

        This law is mostly a compliance checkbox. "Are you under 16? Yes - google, No - welcome!" and then there should be some way of detecting who is breaking the rule via content inspection.

        The thing is, this is not meant to have 100% compliance. Since most people follow the rules, it should make an impact in the psychological damage done to young kids by weird echo chambers. Especially if they start teaching kids about echo chambers in school before they are exposed to them.

        PS. SoylentNews is not affected by this. We are not much of a "social media" site. We are not even an echo chamber like 8chan or Reddit. But we could probably add to Terms of Service and signup page that this site is not meant for minors, and the checkbox is checked.
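
        That compliance checkbox could be as thin as a self-attested birthdate check - a hypothetical sketch, with nothing verifying the claim:

        ```python
        # Hypothetical sketch of the "compliance checkbox": a self-attestation
        # age gate at signup. It trusts whatever birthdate the user claims.
        from datetime import date

        MIN_AGE = 16

        def may_sign_up(claimed_birthdate, today):
            """Self-attested age check against the claimed birthdate."""
            years = today.year - claimed_birthdate.year
            # Subtract one if the birthday hasn't happened yet this year.
            if (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day):
                years -= 1
            return years >= MIN_AGE

        print(may_sign_up(date(2010, 6, 1), date(2024, 11, 29)))  # False (14)
        print(may_sign_up(date(2005, 6, 1), date(2024, 11, 29)))  # True (19)
        ```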

        • (Score: 2) by janrinok on Friday November 29, @11:12PM (1 child)

          by janrinok (52) Subscriber Badge on Friday November 29, @11:12PM (#1383805) Journal

          I'm not sure that I agree that the site is not for minors. We never ask for a person's age.

          If a teen joins in a technical conversation, perhaps searching for help setting up a server or something, I don't see why we have to immediately ban him. I know 12-14 year olds who have set up their own simple web pages and I do not see any harm in that. I also know that if their parents have told them not to do something then they probably wouldn't but, you know, kids will be kids. Much older as I am now I can still remember building HF radios at the same age. I got help from adults then too, it just wasn't on a computer.

          We did have a mid-teen probably 6 or 7 years ago as a member - I think he was 15 at the time but it is not a clear recollection. I will have to scroll the usernames to see if I can remember him.

          --
          I am not interested in knowing who people are or where they live. My interest starts and stops at our servers.
          • (Score: 2) by gnuman on Saturday November 30, @12:28AM

            by gnuman (5013) on Saturday November 30, @12:28AM (#1383813)

            I'm not sure that I agree that the site is not for minors. We never ask for a person's age.

            I agree with your disagreement. But if you want to be in compliance with Australian law, then this can be added. Or maybe conditionally, on detecting connections from Australian IPs. Of course, then you need to have a GDPR notice, because the EU has ruled that an IP address is "identifying, private information", even hashed.

            https://law.stackexchange.com/questions/61076/storing-ips-and-gdpr-compliance [stackexchange.com]

            ¯\_(ツ)_/¯
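
            As an aside on why even hashed IPs count as identifying: the IPv4 space is only 2^32 values, so an unsalted hash can be reversed by brute force. A toy illustration with made-up documentation addresses:

            ```python
            # Hypothetical illustration: a hashed IPv4 address is still
            # "identifying" because the whole address space can be scanned
            # and rehashed until the stored hash matches.
            import hashlib
            import ipaddress

            def hash_ip(ip):
                return hashlib.sha256(ip.encode()).hexdigest()

            # What a "hashed" log entry might contain.
            stored = hash_ip("192.0.2.77")

            def recover(stored_hash, network):
                """Brute-force the hash over a candidate range; a full
                IPv4 scan works the same way, it just takes longer."""
                for host in ipaddress.ip_network(network).hosts():
                    if hash_ip(str(host)) == stored_hash:
                        return str(host)
                return None

            print(recover(stored, "192.0.2.0/24"))  # 192.0.2.77
            ```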

            If a teen joins in a technical conversation, perhaps searching for help setting up a server or something, I don't see why we have to immediately ban him. I know 12-14 year olds who have set up their own simple web pages and I do not see any harm in that.

            US had this policy about gays in military -- don't ask, don't tell. This is probably a good policy for "underage Australians" on public forums!

            I agree with your sentiment though. I've had some interesting discussions on FidoNet and NNTP servers back in the day -- math and astronomy groups were interesting, though they've had some strange people there too. I think I was in that pre-16 age group at the time.