Google didn't tell Android users much about Android System SafetyCore before it hit their phones, and people are unhappy. Fortunately, you're not stuck with it.
On Nov. 7, 2024, Google released a system update for Android 9 and later that included a new service, Android System SafetyCore. Most of the patches were the usual security fixes, but SafetyCore was new and different. Google said in a developer note that the release was an "Android system component that provides privacy-preserving on-device user protection infrastructure for apps."
The update said nothing else. This information left ordinary users in the dark and, frankly, did little for programmers, either.
After the release, in a listing of new Google Messages security features, while not mentioning SafetyCore by name, Google described the service's functionality: "Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing and then prompts with a 'speed bump' that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares."
Google assured users in the note that: "Sensitive Content Warnings doesn't allow Google access to the contents of your images, nor does Google know that nudity may have been detected."
However, we now know SafetyCore does more than detect nude images. Its built-in machine-learning models can also detect and filter images for other kinds of sensitive content.
Google told ZDNET: "SafetyCore is a new Google system service for Android 9+ devices that provides the on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature."
According to GrapheneOS, a security-oriented Android Open Source Project (AOSP)-based distro: "The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine-learning models that are usable by applications to classify content as spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users."
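As a rough illustration of the on-device pattern GrapheneOS describes, an app hands content to a local classifier and gets back only a label; the content itself is never uploaded. This is a minimal sketch in Python: every name here is hypothetical and has nothing to do with SafetyCore's actual API, and the "model" is a placeholder rule standing in for real on-device inference.

```python
# Hypothetical sketch of on-device classification: content is analyzed
# locally and only a label/score comes back; nothing leaves the device.
from dataclasses import dataclass

@dataclass
class Classification:
    label: str        # e.g. "sensitive", "spam", "none"
    confidence: float

def classify_locally(image_bytes: bytes) -> Classification:
    # Stand-in for an on-device ML model. A real implementation would run
    # inference on the device's CPU/NPU; this trivial rule just flags
    # inputs whose first byte is 0xFF so the flow can be demonstrated.
    suspicious = len(image_bytes) > 0 and image_bytes[0] == 0xFF
    return Classification("sensitive" if suspicious else "none",
                          0.9 if suspicious else 0.1)

def send_image(image_bytes: bytes) -> str:
    result = classify_locally(image_bytes)  # runs entirely on-device
    if result.label != "none" and result.confidence > 0.5:
        # The "speed bump": warn the user before sending, per the
        # Sensitive Content Warnings description above.
        return "speed bump: warn user before sending"
    return "sent"
```

The point of the design is in the second function: the sending app only ever sees the classification result, so a warning can be shown without the image being shared with any service.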
[...] So, if you wish to uninstall or disable SafetyCore, take these steps:
Open Settings: Go to your device's Settings app
Access Apps: Tap on 'Apps' or 'Apps & Notifications'
Show System Apps: Select 'See all apps' and then tap on the three-dot menu in the top-right corner to choose 'Show system apps'
Locate SafetyCore: Scroll through the list or search for 'SafetyCore' to find the app
Uninstall or Disable: Tap on Android System SafetyCore, then select 'Uninstall' if available. If the uninstall option is grayed out, you may only be able to disable it
Manage Permissions: If you choose not to uninstall the service, you can also check and try to revoke any SafetyCore permissions, especially internet access
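For those comfortable with the command line, the steps above can usually be done over adb instead. The package name below is the one commonly reported for SafetyCore, but confirm it on your own device first, since this is an assumption rather than anything Google documents:

```
# Confirm the package name before doing anything else.
adb shell pm list packages | grep safetycore

# Uninstall SafetyCore for the primary user (no root required).
adb shell pm uninstall --user 0 com.google.android.safetycore

# Or disable it instead of uninstalling:
adb shell pm disable-user --user 0 com.google.android.safetycore
```

Note that `pm uninstall --user 0` only removes the app for the current user; the package remains in the system image, which is also why updates can bring it back.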
However, some have reported that SafetyCore reinstalled itself during system updates or through Google Play Services, even after uninstalling the service. If this happens, you'll need to uninstall SafetyCore again, which is annoying.
(Score: 4, Insightful) by bzipitidoo on Tuesday March 04, @01:01AM (5 children)
AI still can't:
1. Drive (a vehicle)
2. Match faces (to databases of a million mug shots)
3. Create original content
And,
4. Discern pr0n.
I think computers will get there, but people want all this stuff yesterday. For free.
(Score: 3, Insightful) by anubi on Tuesday March 04, @05:46AM (2 children)
To me, all this "AI" is not "intelligent" at all.
But it has its uses...it can fish through enormous, disparate databases, performing correlation and similarity matching on inexact data using statistical methods. It can even extrapolate other patterns that will fit into the patterns found in its databases.
It's a fantastic search engine, as well as an extrapolator for generating patterns that match (or closely match) given criteria.
I see them as about as intelligent as a sewing machine or loom. They can do amazing work, perfectly, things I could never do. But there is something that exists in our consciousness that I cannot describe that gives living life forms some unique means of solving problems.
I am convinced AI will be able to fool me. I think most MBA / Marketing / Psychology-Leadership training would be sufficient to "control" most human subordinates.
Maybe, the concept of "Love" is the central factor which distinguishes intelligence.
I consider "Love" to be the most powerful emotion we seem to have, and the root of all the other emotions, which often ends in self-immolation. When I see an AI "wake up," realize what is going on, go into depression from remorse, self-immolate, or become a tyrant, I'd say it's damned close to becoming human.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 0) by Anonymous Coward on Tuesday March 04, @12:18PM (1 child)
> I consider "Love" to be the most powerful emotion we seem to have
Sure, that's why there's so much pr0n on the internet.
(Score: 1, Interesting) by Anonymous Coward on Tuesday March 04, @12:57PM
Can't argue that point. If it wasn't for lust, who in their right mind (minimal effort for maximal return) would bother to reproduce?
Of course, that brings up Dad's responsibilities to support his spawn "after the lovin'."
I thought of another thing that drives us: Curiosity. I often correlate this with people I perceive as intelligent.
I do not correlate intelligence with "financial success," though. However, I do correlate financial success with a lack of compassion (love), as compassionate people are too apt to "give the store away" and produce little if anything of value, while uncompassionate people have a much easier time saying "no" to others, and "others" get lazy and won't do their part unless they have to.
I see this whole thing as a rather cruel simulation: neither greed-based, selfish capitalism nor sharing-caring, common-good socialism seems to work, and the mixture is like oil and water. And I am not good at finding the balance.
(Score: 2) by Tork on Wednesday March 05, @01:33AM (1 child)
(Score: 3, Insightful) by bzipitidoo on Wednesday March 05, @04:46AM
What I know is that Tesla lied about the capabilities of their AI drivers. I do not know how driverless taxis work, but I can make an educated guess. What I suspect is that they are limited to memorized (so to speak) routes. Possibly these routes are equipped with devices that the taxi needs to stay on the route, so that it doesn't have to sense lanes, curves, and the like and can focus on surrounding traffic. If so, it can't take passengers to arbitrary homes in residential neighborhoods, instead going only to bus stops or some other nearby point. Heck, maybe for some destinations they go to strategically located offices where human drivers are waiting, to take over the driving to places the AI can't handle.
There is really no 'I' in current AI. AI drivers of the sort Tesla makes respond to visual stimuli without any understanding of what they are seeing. When faced with a confusing scene, they can't use reason and knowledge to dismiss the impossible or improbable. This is why they make mistakes such as sudden stops. They'll interpret something innocuous as an obstacle and slam on the brakes.