
posted by janrinok on Saturday February 03, @11:53AM
from the think-of-the-AI-generated-children dept.

https://arstechnica.com/tech-policy/2024/01/surge-of-fake-ai-child-sex-images-thwarts-investigations-into-real-child-abuse/

Law enforcement is continuing to warn that a "flood" of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, every attorney general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, while only a few states have specifically banned AI-generated non-consensual intimate imagery.
[...]
"Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation," Steve Grocki, the chief of the Justice Department's child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm's way and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested "for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft." A search of Thompson's iCloud revealed "four additional instances" in which Thompson allegedly recorded other minors in the lavatory, as well as "over 50 images of a 9-year-old unaccompanied minor" sleeping in her seat. While attempting to identify these victims, police also alleged that "hundreds of images of AI-generated child pornography" were found on Thompson's phone.
[...]
The NYT report noted that in 2002, the Supreme Court struck down a 1996 law that had banned "virtual" or "computer-generated" child pornography. South Carolina's attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of images depicting fake AI porn of pop star Taylor Swift went viral this month.
[...]
There's also the "Preventing Deepfakes of Intimate Images Act," which seeks to "prohibit the non-consensual disclosure of digitally altered intimate images." That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years' imprisonment for sharing harmful images.

Previously on SoylentNews:
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
Cheer Mom Used Deepfake Nudes and Threats to Harass Daughter's Teammates, Police Say - 20210314


Original Submission

Related Stories

Cheer Mom Used Deepfake Nudes and Threats to Harass Daughter’s Teammates, Police Say 67 comments

Cheer mom used deepfake nudes and threats to harass daughter's teammates, police say:

An anonymous cyberbully in Pennsylvania seemed to have one goal in mind: Force a trio of cheerleaders off their formidable local team, the Victory Vipers.

Doctored images were sent to the coach of the competitive squad that appeared to show the teen girls in humiliating or compromising situations that could get them kicked off the team, like appearing nude, drinking alcohol and using drugs, according to the criminal complaint.

In anonymous texts and calls, the bully told one girl "you should kill yourself."

When police unmasked the alleged culprit late last year, they found the bully hiding within the Victory Viper circle.

Raffaela Spone, a local cheer mom whose daughter is on the team, was charged last week with three misdemeanor counts of cyber harassment of a child and related offenses, according to the Bucks County District Attorney.

[...] If convicted, Spone could face between six months and a year in prison, though Weintraub, the district attorney, said the maximum penalty is unlikely for low-level misdemeanors.

Citron said the criminal justice system still lags behind deepfake technology when it comes to investigations and prosecutions. She and Weintraub each said deepfakes and similar technology pose a broader threat to the truth by muddying the information ecosystem.

"It's disturbing to me because we rely on being able to authenticate evidence as a foundation of the criminal justice system," Weintraub said. "If everyday people are capable of using deepfakes, that's going to make doing our job a lot more difficult."


Original Submission

AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action 70 comments

https://arstechnica.com/information-technology/2023/09/ai-generated-child-sex-imagery-has-every-us-attorney-general-calling-for-action/

On Wednesday, American attorneys general from all 50 states and four territories sent a letter to Congress urging lawmakers to establish an expert commission to study how generative AI can be used to exploit children through child sexual abuse material (CSAM). They also call for expanding existing laws against CSAM to explicitly cover AI-generated materials.

"As Attorneys General of our respective States and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions," the letter reads. "And while Internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult."

In particular, open source image synthesis technologies such as Stable Diffusion allow the creation of AI-generated pornography with ease, and a large community has formed around tools and add-ons that enhance this ability. Since these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation's top prosecutors. (It's worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)

"Creating these images is easier than ever," the letter reads, "as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are 'open source,' the tools can be run in an unrestricted and unpoliced way."

As we have previously covered, it has also become relatively easy to create AI-generated deepfakes of people without their consent using social media photos.


Original Submission

Microsoft Accused of Selling AI Tool That Spews Violent, Sexual Images to Kids 13 comments

https://arstechnica.com/tech-policy/2024/03/microsoft-accused-of-selling-ai-tool-that-spews-violent-sexual-images-to-kids/

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

[...] Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." And it can also be used to generate "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

[...] Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions. [...] Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New "Stable Video Diffusion" AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
"Meaningful Harm" From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 1, Insightful) by Anonymous Coward on Saturday February 03, @12:30PM (1 child)

    by Anonymous Coward on Saturday February 03, @12:30PM (#1342949)

    There are going to be more hit pieces like this to push AI censorship and keep taylor swift "safe".

    • (Score: 4, Insightful) by janrinok on Saturday February 03, @03:14PM

      by janrinok (52) Subscriber Badge on Saturday February 03, @03:14PM (#1342967) Journal

Rubbish. The original problem was highlighted last year. It was agreed at that time that the US should perhaps have laws to cover such things, as many other countries already do. The source actually says that the real problem is that the lawmakers are not doing what they said they would at that time. This has nothing at all to do with Taylor Swift. You just want to score a political point where it isn't actually justified.

  • (Score: 2) by Opportunist on Saturday February 03, @01:35PM (19 children)

    by Opportunist (5545) on Saturday February 03, @01:35PM (#1342952)

Taking some innocent-looking picture of a child and planting it into a sexual context, or generating a "child person" and putting it into the same context?

Because I could see how the latter may actually reduce the related crimes. The former certainly is a huge problem that should be dealt with, but doesn't that problem already exist due to deepfakes?

    • (Score: 4, Insightful) by turgid on Saturday February 03, @01:54PM (3 children)

      by turgid (4318) Subscriber Badge on Saturday February 03, @01:54PM (#1342954) Journal

      That's a really simplistic way of looking at it. When the images of real children, real people, real adults are used those people are humiliated on a global scale. The Internet never forgets. It's an incredibly cruel form of bullying and exploitation leading to ruined lives up to and including suicide.

      • (Score: 2) by DrkShadow on Monday February 05, @05:34AM

        by DrkShadow (1404) on Monday February 05, @05:34AM (#1343054)

        That's a really simplistic way of looking at it. When the images of real children, real people, real adults are used those people are humiliated on a global scale.

        Perhaps the problem is others who deride those people based on a fictional creation that those people were not actively involved in creating.

        Maybe "other people" are the problem, not "things".

      • (Score: 2) by Opportunist on Monday February 05, @07:49AM

        by Opportunist (5545) on Monday February 05, @07:49AM (#1343069)

        That's why I said there is a difference between using a real image of a real person and a fake one.

Using a real image and mounting it on a fake body to create the impression that this person did something has a very harmful component. Creating a fake image of a fake person who does not exist and doing the same ... well, what are the feelings of pixels and ragdoll models?

      • (Score: 2) by bussdriver on Monday February 05, @08:17AM

        by bussdriver (6876) Subscriber Badge on Monday February 05, @08:17AM (#1343071)

        But they could generate 100% AI images that don't involve anybody whatsoever. So, then nobody is harmed.

Besides, eventually, anybody will be able to make stuff that looks similar or identical to anybody else - and people will adapt by assuming it's all fake; it's not like pervs couldn't imagine way more than AI can yet create... and without 8 fingers or 3 arms or Loab. The unimaginative non-visual people will be the users of such things... and it might be a good idea to flag certain habits to point in a direction when something does happen; such as: the weirdo suspect turns out to be gay, but the one into girls like the victim is the step-brother... that would be useful data, especially for weirdos who are irrationally picked on.

I've yet to see a study that shows pervy images lead to rapists etc. I remember claims that porn couldn't be legalized because it would create rapists all over the place; instead we just found out that rapists had been hiding all over already, with no increase in new ones corresponding to legalization. Maybe the pedos should get AI images (not deep fakes) so they leave the real children alone? The science I've seen is that they are not really different from any other rapist, but they have a biological defect in that children (or infants) are their lower age limit, unlike normal people. ALL of them are also into adults. Fact. It's like other forms of crazy, where they are completely normal but in one aspect way outside of the healthy. Oh, you should think about that -- it's certainly within the definition of insanity; not evil criminals, but sick people who maybe are not safe enough to allow out in public; ever. But in most of the USA they are criminals who are punished and released to act crazy again.

    • (Score: 3, Insightful) by JoeMerchant on Saturday February 03, @02:16PM (10 children)

      by JoeMerchant (3937) on Saturday February 03, @02:16PM (#1342956)

      >the former sure is a huge problem

      In the current context of social mores.

      Really, AI deep fake videos of your eleven year old daughter rapturously engaged in barely possible tentacle porn is just an extension of the old taunt: "Opp and Bobby sittin' in a tree K I S S I N G ..."

We are not far off from a handheld app that lets you select an actor/actress in any video of your choice, take a brief video of anyone to capture their features and even mannerisms, click the substitute button, and get a new 4K 120fps video with your captured subject substituted for the original actor/actress, wearing their clothes, or lack thereof, playing the part. If your video captured any audio, it could substitute the captured voice as well...

      We can continue to be outraged and emotionally distraught, attempt to punish the taunters into obedience, or we can transcend the taunting (essentially: grow up) and erase the power the taunters currently hold over those who are distraught by their taunts.

      It's a big step, it will probably take generations, longer in many cultures. I predict some will get there within 30 years or less.

      --
      🌻🌻 [google.com]
      • (Score: 4, Insightful) by turgid on Saturday February 03, @02:22PM (6 children)

        by turgid (4318) Subscriber Badge on Saturday February 03, @02:22PM (#1342958) Journal

Once again, a dangerously simplistic view: "or we can transcend the taunting (essentially: grow up)." People have had their lives ruined and even committed suicide.

        It's essentially another form of Revenge Porn [thesurvivorstrust.org].

        • (Score: 1, Insightful) by Anonymous Coward on Saturday February 03, @02:34PM (1 child)

          by Anonymous Coward on Saturday February 03, @02:34PM (#1342960)

          Sticks and stones, babe...

          I believe the phrase is, "Toughen the fuck up!" and "Don't believe anything you hear and only half of what you see"

          • (Score: 1, Touché) by Anonymous Coward on Saturday February 03, @02:43PM

            by Anonymous Coward on Saturday February 03, @02:43PM (#1342963)

            "Troll"

Well, there ya go. Modding to show support for the offended snowflakes who want to repeal the 1st Amendment.

        • (Score: 2) by JoeMerchant on Saturday February 03, @06:59PM

          by JoeMerchant (3937) on Saturday February 03, @06:59PM (#1342973)

          Hey, I'm not saying such imagery is harmless today - far from it. Suicides, career terminations, political campaigns, all kinds of bad chaos is likely to result from arguments over what's a deep fake, what's not, and people who feel damaged regardless of whether something was proven fake or not.

But, is the answer to run around "whack-a-mole" style attempting to ban the use, ownership and manufacturing of ~~firearms~~ deep fake software, or, unlike things that fire actual deadly projectiles, can we learn to live with its existence and transcend the current psychological/emotional traumas it causes?

          Last time this topic came up, I recalled a 1970 sitcom episode dealing with basically the exact same issue: https://www.imdb.com/title/tt0720250/ [imdb.com]

          "That Girl" managed to overcome it, 53+ years later can the rest of us learn to be as resilient as her character?

          --
          🌻🌻 [google.com]
        • (Score: 5, Insightful) by bussdriver on Monday February 05, @08:33AM (2 children)

          by bussdriver (6876) Subscriber Badge on Monday February 05, @08:33AM (#1343073)

Revenge porn is real, but fake stuff is not the same. Right now it's NEW, but it will become easily accessible to all. One can easily expect some boy to generate images of some girl he likes, and then his private creations leak or are stolen and it gets out; not unlike the photoshops or drawings of the past. It can embarrass the boy into suicide as well, or do lasting harm to the boy greater than to the girl. It's how the teen processes the experience that decides the damage.

2 years in jail for the teen boys? WTF? You think those boys are going to come out of that OK? We all know it's not really a "corrections" center they will be placed in. More like a college for criminals or a mental-illness factory. Mandatory therapy -- which, for a teen boy who largely only expresses emotions on a primate level, will be quite a difficult experience, with more suffering than a civilized incarceration (if that is even possible in their local gov.)

          Teens test boundaries and also lack the sense of an adult due to lack of brain development so they'll do stuff outside of known boundaries too; foolishness is part of the definition of teenager.

          • (Score: 0) by Anonymous Coward on Monday February 05, @10:28AM

            by Anonymous Coward on Monday February 05, @10:28AM (#1343086)

            Photorealistic creations existed before Photoshop.

            But now, AI makes producing photorealistic motion art do-able to the layperson. With no previous camera images required to initialize the baseline image.

            An artist can now instantiate moving 3D images of anything that can be imagined. Including fantasy sex acts. With nothing more than imagination.

          • (Score: 1, Flamebait) by JoeMerchant on Monday February 05, @05:33PM

            by JoeMerchant (3937) on Monday February 05, @05:33PM (#1343167)

            >It can embarrass the boy into suicide as well or do lasting harm to the boy greater than the girl.

            Yes, it can.

            >It's how the teen processes the experience that decides the damage.

            And how the teen processes the experience is decided by how they were raised, and how their peers, parents, authorities and other influential figures in their life react to the event.

There's no reason a sight, a sound, or a whole two hours of audio-visual presentation need cause physical harm to anyone. That it does is, I believe, a failure of the society that breeds such fragile snowflakes.

            --
            🌻🌻 [google.com]
      • (Score: 2) by HiThere on Saturday February 03, @02:42PM (2 children)

        by HiThere (866) Subscriber Badge on Saturday February 03, @02:42PM (#1342962) Journal

        Asking society to "grow up" is silly. Society is not a person. It changes over time, but not in the sense of "growing up".

        And asking people to not believe things they find attractive also doesn't work very well. Not even when you've got hard (i.e. essentially indisputable) evidence in hand.

        OTOH, censorship also has an extremely bad track record. So we need a different answer...which doesn't mean I have one to offer.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 1, Informative) by Anonymous Coward on Saturday February 03, @02:48PM

          by Anonymous Coward on Saturday February 03, @02:48PM (#1342965)

          Ah, but society is a person, a reflection of true feelings, and it does have to grow up.

          OTOH, censorship also has an extremely bad track record. So we need a different answer...which doesn't mean I have one to offer.

You don't have to; it is already there, but it is being dismissed out of hand because of sheer denial.

        • (Score: 3, Insightful) by JoeMerchant on Saturday February 03, @07:17PM

          by JoeMerchant (3937) on Saturday February 03, @07:17PM (#1342975)

          But society _is_ constantly maturing, learning to accept realities of modern life, with the occasional backsliding into radical conservatism ala Ayatollah Ali Khamenei and similar.

          Several of my ancestors practiced forbidden "mixed marriage" with non-Christian heathen natives and had to basically withdraw from polite society as a result. Here, 160ish years later, such mixed marriages between races are becoming openly accepted (most places) and the places you "can't go" and things you can't do as a mixed couple, or child of a mixed mating, are dwindling into a small subset of society instead of the vast majority of it.

          Audio-visual video presentations have been deceptive since their inception, taken out of context, edited, and outright fabricated. It's just continually getting easier to do so, and people are going to need to internalize the fact that "video proof" isn't - not that it ever was, but there was a brief few decades where most people lacked the resources to fake a video convincingly. That time is rapidly drawing to a close.

Broad release of convincing deep fakes of a herd of llamas taking turns riding Taylor Swift, and similar things, are perhaps the fastest way to educate society about where the tech is at today, and how little they can trust a "certified" video of Joe Biden, or Donald Trump, making a back room deal to sell Hawaii to Vladimir Putin for 20 kilos of cocaine and a dozen hookers.

          --
          🌻🌻 [google.com]
    • (Score: 2, Disagree) by turgid on Saturday February 03, @02:17PM (2 children)

      by turgid (4318) Subscriber Badge on Saturday February 03, @02:17PM (#1342957) Journal

      And the latter doesn't "reduce related crimes." Such things act as a magnet for dangerous lunatics [theguardian.com]

      She had downloaded a special browser on her phone to watch “real” murders and torture on the dark web, and kept detailed notes about serial killers including Richard Ramirez, the “Night Stalker”.

      • (Score: 2) by Opportunist on Monday February 05, @07:47AM (1 child)

        by Opportunist (5545) on Monday February 05, @07:47AM (#1343067)

        Can we try again with an example where watching fake murders convinced someone it would be a good idea? Because that's what this is about.

        • (Score: 0) by Anonymous Coward on Monday February 05, @10:41AM

          by Anonymous Coward on Monday February 05, @10:41AM (#1343090)

          Some 70-odd years ago, we kids commonly played "Cops and Robbers" with cap guns that emitted a sharp noise and released the smell of burning gunpowder when "fired". No projectiles. They were loaded with little spools of red paper tape containing pockets of explosive powder.

          I think every kid in the 50's had at least one. And the holster, cowboy hat, mask, and tin badge.

          Hi-Yo, Silver!

    • (Score: 3, Insightful) by driverless on Monday February 05, @07:47AM

      by driverless (4770) on Monday February 05, @07:47AM (#1343068)

      No, it looks like it's entirely-AI-generated:

      "Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,"

      So who exactly is being heinously exploited? The GPU? The neural network? And the remaining arguments about this imaginary exploitation encouraging pedophiles sound remarkably similar to the ones about video games encouraging violence.

      How about instead of obsessing over imaginary images of nonexistent things, law enforcement go after actual abusers targeting actual real children?

  • (Score: 1, Touché) by Anonymous Coward on Saturday February 03, @02:39PM (1 child)

    by Anonymous Coward on Saturday February 03, @02:39PM (#1342961)

Now any politician caught on camera with a dead hooker in his room, or for that matter, caught speeding on a traffic cam, can say it's a "deep fake".

    • (Score: 2) by driverless on Monday February 05, @07:42AM

      by driverless (4770) on Monday February 05, @07:42AM (#1343065)

      If you do this, remember to wear a prosthetic sixth finger on each hand, preferably with an extra joint as well. That way when you get caught you can point at the photo evidence and say it's clearly AI-generated.

  • (Score: 1, Offtopic) by Username on Saturday February 03, @02:47PM (9 children)

    by Username (4557) on Saturday February 03, @02:47PM (#1342964)

    Who lets a nine year old fly somewhere solo?

    • (Score: 3, Insightful) by janrinok on Saturday February 03, @03:40PM

      by janrinok (52) Subscriber Badge on Saturday February 03, @03:40PM (#1342968) Journal

      How many parents can you fit into a classroom during a lesson? The initial photograph could be entirely innocent; it might even be something that the child has taken themselves and posted online. Photographs get taken all the time. You would have to do some serious helicopter-parenting if you wished to never let them out of your sight.

    • (Score: 3, Informative) by JoeMerchant on Saturday February 03, @07:03PM (5 children)

      by JoeMerchant (3937) on Saturday February 03, @07:03PM (#1342974)

      >Who lets a nine year old fly somewhere solo?

      My parents, in 1975.

      --
      🌻🌻 [google.com]
      • (Score: -1, Redundant) by Anonymous Coward on Sunday February 04, @12:38AM (3 children)

        by Anonymous Coward on Sunday February 04, @12:38AM (#1342986)

        Did you survive?

        • (Score: 2) by JoeMerchant on Sunday February 04, @12:44AM (2 children)

          by JoeMerchant (3937) on Sunday February 04, @12:44AM (#1342987)

          Obviously not. My grandmother met me at the gate in Miami - back then you could just walk in without a ticket.

          --
          🌻🌻 [google.com]
          • (Score: 0) by Anonymous Coward on Monday February 05, @10:53AM (1 child)

            by Anonymous Coward on Monday February 05, @10:53AM (#1343092)

            I remember that well. The terminal was completely open to the public... same as Sears and K-mart. However, one needed a boarding pass for that flight to board the plane. The stewardesses typically took the boarding pass as one entered the plane.

Those sure were simpler times.

            • (Score: 2) by JoeMerchant on Monday February 05, @01:34PM

              by JoeMerchant (3937) on Monday February 05, @01:34PM (#1343118)

And no ID check on the boarding pass or ticket. I "borrowed" a friend's ticket in 1986 in case he couldn't get to the airport; at the last minute he didn't show, so I flew in his place.

              --
              🌻🌻 [google.com]
      • (Score: 0) by Anonymous Coward on Sunday February 04, @03:43PM

        by Anonymous Coward on Sunday February 04, @03:43PM (#1343006)

        My original reading of "fly solo" was that you were getting your private pilot's license at 9 and completed the required solo flight.

        For the USA: https://www.flyingmag.com/what-is-the-right-age-to-start-flight-training/ [flyingmag.com]

        While there is no minimum age to start flight training, you need to be able to reach the rudder pedals.
        [...]
Technically there is no minimum age to start flight training, a fact I discovered as an aviation-stricken 13-year-old who had just taken the stick for the first time on an EAA (Experimental Aircraft Association) Young Eagles ride. I called the local FSDO (Flight Standards District Office) and managed to get a hapless inspector on the phone; he confirmed that one must be 16 years old to solo and 17 to earn one's private pilot certificate in an airplane (14 and 16, respectively, in a glider), but there is no minimum age to begin dual instruction with a flight instructor.

    • (Score: 1) by pTamok on Sunday February 04, @10:48AM (1 child)

      by pTamok (3042) on Sunday February 04, @10:48AM (#1342995)

      Many people. Airlines have protocols for travel by minors unaccompanied by parents/guardians. It's normal.

      https://www.flysas.com/en/fly-with-us/unaccompanied-minors/ [flysas.com]

      Airlines have different age limits. British Airways are particularly unhelpful, and I guess some airlines don't allow it at all.

      I know of plenty of kids under the age of 11 who travel over an hour to school (and the same home) unaccompanied on public transport - but I currently live in what is known by sociologists as a 'high trust' society. It's nice. I have also lived in low-trust societies.

      https://en.wikipedia.org/wiki/High-trust_and_low-trust_societies [wikipedia.org]

      • (Score: 3, Insightful) by JoeMerchant on Monday February 05, @01:08AM

        by JoeMerchant (3937) on Monday February 05, @01:08AM (#1343044)

The fun thing about high-trust and low-trust societies is their low correlation to actual risk. Perception of risk? Absolutely, very high correlation. But actual risk? Everyone being paranoid and always locking the doors isn't why nobody's house is ever broken into. When thieves want to burgle a house, they can generally back a truck up to the front door (while the good hardworking residents are away at work), open it with a sledgehammer - usually in one quick blow that nobody notices - and virtually empty the house of valuables into the truck in 15 minutes or less. My uncle lived in a neighborhood where this was endemic. Police response times to human-monitored alarm company calls were generally 20-30 minutes; the thieves were generally pulling away in 10. The neighbor across the street from me had it happen a couple of years before I moved in; apparently they were in no hurry at her place, spending almost 2 hours sorting their take/leave choices before departing.

        Somebody wants to steal your car? Flatbed towtruck. Alarm? Most are silenced in 20 seconds or less with a pair of diagonal cutters, not that anyone would look twice at a flatbed driving down the street with an alarming car on it.

        Your credit card number? Puhleeze...

        --
        🌻🌻 [google.com]