posted by Fnord666 on Tuesday July 02 2019, @06:53AM
from the better-late-than-never? dept.

Submitted via IRC for Bytram

Microsoft explains the lack of Registry backups in Windows 10 - gHacks Tech News

We noticed back in October 2018 that Microsoft's Windows 10 operating system was not creating Registry backups anymore.

The scheduled task that creates the backups was still running, and its run result indicated that the operation completed successfully, but Registry backups were no longer being created.

Previous versions of Windows 10 created these backups and placed them in the C:\Windows\System32\config\RegBack folder. The backups could be used to restore the Windows Registry to an earlier state.

Microsoft published a new support page recently that brings light into the darkness. The company notes that the change is by-design and thus not a bug. The change was implemented in Windows 10 version 1803 and all newer versions of Windows 10 are affected by it.

Microsoft made the change to reduce the size of Windows on the system.

Starting in Windows 10, version 1803, Windows no longer automatically backs up the system registry to the RegBack folder. If you browse to the \Windows\System32\config\RegBack folder in Windows Explorer, you will still see each registry hive, but each file is 0 KB in size.

This change is by design, and is intended to help reduce the overall disk footprint size of Windows. To recover a system with a corrupt registry hive, Microsoft recommends that you use a system restore point.
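
If you want to see this on your own machine, here is a minimal Python sketch that lists the hive backups in the RegBack folder named above and prints their sizes (run it on Windows, from an elevated prompt if you get a permission error):

    # Minimal sketch: list the registry hive backups in RegBack and their sizes.
    # On a system affected by the 1803 change, each file should report 0 bytes.
    from pathlib import Path

    regback = Path(r"C:\Windows\System32\config\RegBack")
    for hive in sorted(regback.iterdir()):
        print(f"{hive.name}: {hive.stat().st_size} bytes")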

The Registry backup option has been disabled but not removed, according to Microsoft. Administrators who would like to restore the functionality may do so by adding a Registry value:

  1. Open the Start menu, type regedit.exe, and select the Registry Editor entry from the list of results.
  2. Navigate to the following key: HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager\
  3. Right-click on Configuration Manager and select New > DWORD (32-bit) Value.
  4. Name it EnablePeriodicBackup.
  5. Double-click on it after creation and set its value to 1.
  6. Restart the PC.

Windows 10 will back up the Registry again from that point on.
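
If you prefer to script the change rather than click through regedit, a minimal sketch using Python's winreg module writes the same EnablePeriodicBackup value described in the steps above (it assumes an elevated Python process and a reboot afterwards):

    # Minimal sketch: set EnablePeriodicBackup = 1 under the Configuration Manager
    # key named in the steps above. Run as administrator and reboot afterwards.
    import winreg

    KEY_PATH = r"System\CurrentControlSet\Control\Session Manager\Configuration Manager"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "EnablePeriodicBackup", 0, winreg.REG_DWORD, 1)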

Windows backs up the registry to the RegBack folder when the computer restarts, and creates a RegIdleBackup task to manage subsequent backups.
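
If you do not want to wait for a restart, the backup task can also be started on demand. A hedged sketch using schtasks from Python (the task path below is an assumption about where RegIdleBackup lives; verify it in Task Scheduler first):

    # Sketch: trigger the RegIdleBackup scheduled task right away.
    # The task path is an assumption; confirm it in Task Scheduler before running.
    import subprocess

    subprocess.run(
        ["schtasks", "/Run", "/TN", r"\Microsoft\Windows\Registry\RegIdleBackup"],
        check=True,
    )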

We have created two Registry files to enable and disable automatic Registry backups on Windows 10. You can download them with a click on the following link: Windows 10 Automatic Registry Backup Script


Original Submission

 
  • (Score: 2) by RS3 on Tuesday July 02 2019, @03:03PM (11 children)

    by RS3 (6367) on Tuesday July 02 2019, @03:03PM (#862408)

    Thanks for that Black Viper site- I had not seen that one yet. Great info and much of it applies to XP too.

    On pretty much any Windows computer under my control, and even some not so much under my control, I go in and turn off the piles of stuff that almost nobody needs or wants. MS would have done better to have those things run only when needed. I used to like inetd in the Linux world: just one running host process / sub-service starter.

    Over the years I've used many cleaner / tuners. Some are great at turning off generally unneeded services. I've forgotten some of them because I do it manually. One is the series "XPSmoker", "7Smoker", "10Smoker". Like with any cleaning / tuner tools, don't just blindly run it- be sure you understand what it's doing before you commit to changes. Another is "Ultimate Windows Tweaker" - gets at all kinds of things you never knew you could control (well, not obvious nor made obvious by MS).

    Oh, also go into "Control Panel -> Administrative Tools -> Task Scheduler" - generally, but specifically in "Task Scheduler Library -> Microsoft -> Windows". You might be stunned at how many things you'll never need or want that will pop up when you least want them. For example, if you've installed an SSD, be sure Windows doesn't try to run disk defragmentation.
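
    If you want to script that instead of clicking through the Task Scheduler UI, here is a rough sketch (the ScheduledDefrag task path is an assumption; check the exact name on your machine before disabling anything):

        # Rough sketch: inspect and disable the automatic defrag task.
        # The task path is an assumption; verify it in Task Scheduler first.
        # Requires an elevated prompt.
        import subprocess

        task = r"\Microsoft\Windows\Defrag\ScheduledDefrag"
        subprocess.run(["schtasks", "/Query", "/TN", task, "/V", "/FO", "LIST"])
        subprocess.run(["schtasks", "/Change", "/TN", task, "/Disable"])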

  • (Score: 0) by Anonymous Coward on Tuesday July 02 2019, @03:34PM

    by Anonymous Coward on Tuesday July 02 2019, @03:34PM (#862416)

    Black Viper has been around for a while, but the recommended settings for Win10 borked up Windows updates. Win10 was stuck at 100% disk usage for about a week, constantly trying to do an update, with no errors or messages saying anything was wrong.

  • (Score: 0, Informative) by Anonymous Coward on Tuesday July 02 2019, @03:49PM (9 children)

    by Anonymous Coward on Tuesday July 02 2019, @03:49PM (#862422)

    I have used that site for years. A word of warning, though: most service descriptions are very vague about what the service does, so you do not know what you are gaining or losing by enabling or disabling it. You can get the system into a very unusable state by disabling too much. But it can be fun to play with.

    Windows doesn't try to run disk defragmentation
    Why? I do a full defrag of my SSD about every 6 months to a year. It is a nice cheap boost in speed. My SSD is rated for over 100 TB in full re-writes (this is common these days); I am at about 20 TB after 5 years of fairly intensive use. I can prove mathematically that fragmentation slows you down. Do I do it all the time? Not so much; once a month is no big deal. The Windows defrag just defragments fragmented files, and it is not very smart. Fragmentation on NTFS used to be semi-hard to cause, but it seems MS has severely relaxed the rules after 8. I have some files that end up with well over 2k fragments sometimes, and it shows.

    It is not the seek time that kills you anymore. It is the SATA command storms it creates and the CPU overhead of putting the pieces back together that kill you. Also, due to the way most SSD flash memory is actually made, you are better off reading as much as you can in one go. That comes down to the page size in the actual flash chips; the SATA/NVMe interface the drive presents does not map 1:1 to it.

    Let's say you have one 8k file fragmented into 2 pieces. If those two pieces live in different flash segments, you will need to issue two commands to get the whole file, and the second piece probably will not be in the drive cache. If the file is contiguous, you may still need 2 commands to get the whole file, but in that case the second 4k is already in cache and does not need to be read off flash again. More than likely the OS would issue one command instead of 2 anyway, since logically the data is not in two places but one chunk. That is why fragmentation still matters. It is not as bad as years ago, but it still chews up time and bandwidth.

    • (Score: 4, Informative) by RS3 on Tuesday July 02 2019, @04:46PM (6 children)

      by RS3 (6367) on Tuesday July 02 2019, @04:46PM (#862449)

      Windows doesn't try to run disk defragmentation
      Why? I do a full defrag of my SSD about every 6 months to a year.

      I had started a much more complete response but don't have time to finish. I was explaining, for those who might not know, the specifics / mechanism of spinning magnetic disks vs. SSD.

      People like you drive me crazy, no pun intended. You're very good at your "logic" and argument, and it sounds 100% plausible. I forget what the logical fallacy is called. You make many great points, including ATA command "storms", cache, etc. For sure NTFS gets horribly fragmented.

      Your plausible-sounding argument is based on a misunderstanding, at best, of the actual facts. The main one being: your defragmenter thinks it's moving data blocks around based on cylinder, head, and sector addresses. Those used to be the actual physical addresses way back in the day. At some point 30 years ago, hard disk electronics started doing address translation, a.k.a. virtualizing. In other words, the cylinder, head, and sector addresses are virtualized by the drive electronics and there isn't squat your defragger can do about it.

      In spinning drives, even with address translation happening, a defragmenter will work as desired, but with an SSD you have NO CONTROL over where the data is.

      Also, regarding "storms", please read up on "ATA Native Command Queuing".

      You should enable TRIM https://searchstorage.techtarget.com/definition/TRIM [techtarget.com] if running Win 7. Supposedly Win 10 will do it for you. XP supposedly won't- I haven't investigated it (no need).
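
      A quick way to check whether TRIM is on, sketched with Python's subprocess (fsutil ships with Windows; "DisableDeleteNotify = 0" in the output means TRIM is enabled):

          # Sketch: query Windows' TRIM (delete notification) setting.
          # "DisableDeleteNotify = 0" means TRIM is enabled; may need an elevated prompt.
          import subprocess

          subprocess.run(["fsutil", "behavior", "query", "DisableDeleteNotify"],
                         check=True)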

      Also, find, download, and run an SSD optimizer. My store-labeled SSD has a Phison controller, so I found and run Phison ToolBox Complete. It will read SMART data, and also run an "optimizer" which does something to help the drive keep track of used spaces, deleted / empty blocks, etc., and maximize performance.

      • (Score: 1, Informative) by Anonymous Coward on Tuesday July 02 2019, @10:42PM (2 children)

        by Anonymous Coward on Tuesday July 02 2019, @10:42PM (#862560)

        If you run Windows 7 or later, the disk defragmenter does not actually defrag SSDs. Instead, it scans the disk for certain allocation errors, then does one of three things (in order by preference): runs the driver-specific optimization routine, issues TRIM commands on locked portions of the volume, or overwrites free areas with all zeros on locked portions of the volume. The first two are guaranteed to have the drive erase the blocks; the third causes many older drives (which don't support TRIM) to do the same, as they just report reads for unused areas as all zero anyway. My guess is that the GP has been doing the optimization procedure already and not realizing it.

        Besides, as you mentioned, almost all drives in the past 20 years use Logical Block Addressing. Sure the disk defragmenter can put drives on to consecutive LBAs, but unless it specifically checks the translation table (if that is even accurate). With SSDs, they are literally just put wherever and the controller just looks it up as necessary.

        • (Score: 0) by Anonymous Coward on Tuesday July 02 2019, @11:39PM (1 child)

          by Anonymous Coward on Tuesday July 02 2019, @11:39PM (#862570)

          Oops, too much editing; I cut off a sentence. It should read:

          Sure the disk defragmenter can put drives on to consecutive LBAs, but, unless it specifically checks the translation table (if that is even accurate), there is no guarantee that they are actually consecutive.

          • (Score: 2) by RS3 on Wednesday July 03 2019, @06:23AM

            by RS3 (6367) on Wednesday July 03 2019, @06:23AM (#862613)

            Awesome post, thank you.

            You probably know that address translation was initially done because drives were getting bigger (more data capacity), had more actual cylinders than the ATA spec allowed, but the spec allowed for far more heads than the actual drive had.

            But also, about the same time, they started doing ZBR - Zone Bit Recording https://en.wikipedia.org/wiki/Zone_bit_recording [wikipedia.org], which meant that the outer tracks, which have significantly more track length than the inner ones, can hold more data and more sectors. And they started reserving spare sectors, so the result, again, was that drives could not adhere to the original ATA spec. Translation (virtualization) got around this nicely, long before LBA, etc.

            Of course, as normal sectors fail, the drive's controller substitutes in the spares, so you get fragmentation that you don't know about, and there's nothing you can do about it.

            As far as I know, there's no published spec for getting access to address translation information, or ZBR, or spare sector allocation.

            MHDD gets some stats from the drive and you can get an idea of how many spare sectors have been mapped.

      • (Score: 1, Informative) by Anonymous Coward on Wednesday July 03 2019, @07:04AM (1 child)

        by Anonymous Coward on Wednesday July 03 2019, @07:04AM (#862619)

        People like you drive me crazy, no pun intended. You're very good at your "logic" and argument, and it sounds 100% plausible.

        I'm not the original AC but defrag for SSDs makes sense as long as SSD sequential read/write benchmarks continue to be significantly faster than random read/write benchmarks. And that's currently true for most popular SSDs on the market.

        The problem is that existing defrag software might not do things like copying an entire fragmented file in one go and then replacing the fragmented file with the new copy, which the SSD has hopefully now laid out as "sequentially" as it can.

        If the defrag software tries to defrag a fragmented file on an SSD by just moving the blocks "together" then it may not work for the reasons you mentioned.

        See also: https://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx [hanselman.com]

        By the way, regarding tests or testers that say there's no difference before and after defragging, I've actually seen some tests where they don't test reading or writing from/to the existing files that got defragged. They use stuff like the usual disk benchmarks which write stuff to the free space and then read from that! Doh, of course there's little difference or it even gets worse...

        • (Score: 2) by RS3 on Wednesday July 03 2019, @03:35PM

          by RS3 (6367) on Wednesday July 03 2019, @03:35PM (#862752)

          Thank you for another awesome post. I'm updating my stance. Of course it makes sense that sequential accesses would be faster than random ones, just like with RAM. Actual numbers would be interesting, though. I.e., how much faster is it after a defrag? If it's only a few percent, I say move on to more important things. If it's 50% faster, then sure, defrag, but the SSD controller manufacturers need to reconsider their designs.

          Like you said, the defragger doesn't know the actual FLASH blocks, unless someone has gotten info on SSD controller innards. And maybe that's possible. I'm not sure what the "optimizer" software is actually doing. It says it's "reclaiming unused blocks". Wha? "Reclaiming"? So maybe it works with the filesystem data and coordinates it with the SSD controller? But did MS finally publish info on NTFS?

          And of course, where does that put SSDs in Linux? A semi-techy friend handed me a laptop he had put Ubuntu on, and suddenly it won't boot. I pulled the SSD, tried to mount it read-only, scanned sectors, and found it's mostly zeros. I suspect a problem with the SSD and Linux drivers. Maybe an errant TRIM algorithm? Is his data still in FLASH, with the controller in some weird state that reports empty blocks even though the data is still there? Maybe a firmware problem? Wish I could pull the FLASH and read it somewhere else. Makes me want to make my own HD with uSD modules in a controller; at least I'd have some chance of data recovery...

          I've been involved in computers for 30 years, since the days of MFM controllers (with no cache) and disks that had actual cylinders, heads, sectors, and defect maps, and that you could actually low-level format. Caching and pre-fetch are good and helpful, but they don't know the filesystem and pull in useless data. I've always thought a better approach would be a smart filesystem that worked directly with the hard disk geometry, had control of the cache controller, and did the pre-fetching, rather than some controller that has no clue what it's doing. Novell did that, and I suspect Oracle and probably others back in the day. I knew someone who wrote software for Data General Nova systems (before my time) who did that, and most of the OSes in the 50s and 60s optimized for the actual physical things. But we can't do that now.

      • (Score: 2) by http on Wednesday July 03 2019, @06:20PM

        by http (1920) on Wednesday July 03 2019, @06:20PM (#862830)

        People like you drive me crazy, no pun intended. You're very good at your "logic" and argument, and it sounds 100% plausible. I forget what the logical fallacy is called. You make many great points,

        It used to be called trolling, before the reporters corrupted it much like they corrupted `hacking'.

        --
        I browse at -1 when I have mod points. It's unsettling.
    • (Score: -1, Flamebait) by Anonymous Coward on Tuesday July 02 2019, @08:45PM (1 child)

      by Anonymous Coward on Tuesday July 02 2019, @08:45PM (#862532)

      God, you're stupid.

      • (Score: 1, Touché) by Anonymous Coward on Tuesday July 02 2019, @10:15PM

        by Anonymous Coward on Tuesday July 02 2019, @10:15PM (#862552)

        Live with it.

              -God