  • (Score: 2) by rob_on_earth on Monday April 21, @06:25PM (29 children)

    by rob_on_earth (5485) on Monday April 21, @06:25PM (#1401016) Homepage
    I love the power Gentoo gives me, the user. I understand it's a fair bit more complicated than Ubuntu or Mint, but why is it not more popular?
  • (Score: 3, Insightful) by Thexalon on Monday April 21, @06:38PM (22 children)

    by Thexalon (636) on Monday April 21, @06:38PM (#1401018)

    I've used Gentoo before. I generally like it. I've also put together my own Linux From Scratch systems before.

    However, the compiling-time downside is significantly annoying.

    --
    "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
    • (Score: 4, Interesting) by bzipitidoo on Tuesday April 22, @01:35PM (18 children)

      by bzipitidoo (4388) on Tuesday April 22, @01:35PM (#1401117) Journal

      > However, the compiling-time downside is significantly annoying.

      This. I am using Gentoo right now. Because of the very long compile times, I have not upgraded from Firefox 137.0.1 to the current Firefox, 137.0.2.

      A further problem is that all that compiling puts a full load on my computer for several hours, and sometimes drives it to overheating and locking up. If that happens in the midst of a compile of a very large package such as Firefox or, even worse, Chromium, I have to start that package over. Takes around 3 hours to compile Chromium. I have even resorted to good old ctrl-s to pause the compile, leaving it paused for an hour, to give the computer time to cool down.
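
      If the machine starts cooking mid-build, plain job-control signals will also pause and resume the whole build without relying on the terminal; a rough sketch, assuming the build was started as a background job in an interactive shell (so it gets its own process group):

          # kick off the long build in the background
          emerge chromium &

          # pause every process in the build's process group to let things cool
          kill -STOP -- -$!

          # ...an hour later, pick up where it left off
          kill -CONT -- -$!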

      I tried Gentoo again, in the hopes that my new, faster PC would make short work of the compiles. Last time I tried Gentoo was 20 years ago, on a Pentium IV, a single core CPU. Then I switched to Arch, and stayed with that until they switched to systemd. Alas, the amount of code to compile has kept pace with hardware. Even with 6 cores, compile times are still most of a day. I hate to think how much time that old Pentium IV would need to compile current Gentoo.

      • (Score: 2, Informative) by DECbot on Tuesday April 22, @01:47PM (1 child)

        by DECbot (832) on Tuesday April 22, @01:47PM (#1401121) Journal

        You could give Artix a try if you like Arch and are willing to give OpenRC a go. It seems to have spawned from Arch's [arch-openrc] and [arch-nosystemd] repositories. Here's the link: Artix Linux [artixlinux.org]

        --
        cats~$ sudo chown -R us /home/base
        • (Score: 2) by Gaaark on Thursday May 01, @04:11PM

          by Gaaark (41) on Thursday May 01, @04:11PM (#1402421) Journal

          Or MX Linux.

          --
          --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
      • (Score: 0) by Anonymous Coward on Tuesday April 22, @04:59PM

        by Anonymous Coward on Tuesday April 22, @04:59PM (#1401142)

        IIRC I stopped using Gentoo when there were multiple updates to glibc and KDE and the kernel all in one week. Cooked my laptop trying to compile it all.

      • (Score: 2) by fab23 on Tuesday April 22, @05:04PM

        by fab23 (6605) Subscriber Badge on Tuesday April 22, @05:04PM (#1401147) Homepage Journal

        Gentoo does now also have binary packages available, but I have not checked whether they provide new Firefox builds quickly enough. On the other hand, you could always use the binary Firefox directly from Mozilla, e.g. at https://download.cdn.mozilla.net/pub/firefox/releases/137.0.2/. [mozilla.net] See e.g. https://download.cdn.mozilla.net/pub/firefox/releases/137.0.2/linux-x86_64/en-US/ [mozilla.net] and there is even something for Debian/Ubuntu users.

        If you prefer not to adjust the URL each time to get it from the above-mentioned download link, then go to https://www.mozilla.org/en-US/firefox/all/desktop-release/ [mozilla.org] or https://www.mozilla.org/en-US/firefox/all/desktop-esr/ [mozilla.org] and click through. For a full overview of the available options, see https://www.mozilla.org/en-US/firefox/all/ [mozilla.org].
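
        If you just want the tarball, it is only a couple of commands; a rough sketch, assuming the Linux builds still ship as .tar.xz (check the directory listing above for the exact file name):

            # fetch and unpack the official binary build into ~/opt/firefox
            mkdir -p ~/opt
            wget https://download.cdn.mozilla.net/pub/firefox/releases/137.0.2/linux-x86_64/en-US/firefox-137.0.2.tar.xz
            tar -C ~/opt -xf firefox-137.0.2.tar.xz

            # run it; the tarball build updates itself in place
            ~/opt/firefox/firefox &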

      • (Score: 3, Informative) by turgid on Tuesday April 22, @07:03PM (13 children)

        by turgid (4318) Subscriber Badge on Tuesday April 22, @07:03PM (#1401175) Journal

        > A further problem is that all that compiling puts a full load on my computer for several hours, and sometimes drives it to overheating and locking up.

        Is this a problem with the heatsink/fan assembly? What CPU is it? Nowadays they tend to be pretty good at throttling the clock to prevent overheating. Maybe it's a power supply problem?

        • (Score: 1, Informative) by Anonymous Coward on Tuesday April 22, @10:00PM (12 children)

          by Anonymous Coward on Tuesday April 22, @10:00PM (#1401194)

          I'm with you on this. There is a good chance they don't have thermal throttling set up. Modern CPUs will self-regulate their boosting down to 100%, but not below that amount unless they have a profile that allows them to underclock under load. That profile isn't the default, so the kernel needs to tell the hardware that it is OK, and most distros and kernels I know of aren't configured to do that. It could also be a power issue, because compiling is known to be extra hard on CPUs and to require more power than a baseline load at the same utilization. That is why having one machine you use only for compiling and another for working (or having one computer that is basically disposable) is standard practice for people working on big projects.
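
          A quick way to check what a given box is actually configured to do (a sketch; exact sysfs paths vary by cpufreq driver):

              # the governor each core is using
              cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

              # whether boost is enabled (acpi-cpufreq layout; amd_pstate and
              # intel_pstate expose this differently)
              cat /sys/devices/system/cpu/cpufreq/boost

              # what the cores are actually running at right now
              grep 'cpu MHz' /proc/cpuinfo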

          • (Score: 3, Interesting) by bzipitidoo on Tuesday April 22, @11:45PM (5 children)

            by bzipitidoo (4388) on Tuesday April 22, @11:45PM (#1401202) Journal

            The computer is a fanless desktop from silentpc.com. Was very expensive, but I wanted the silence. Ryzen 5600G CPU.

            Seems to have no problem with 2 hours of sustained compiling. Even 4 hours is often okay. Longer than that eventually brings it to a boil, so to speak.

            If I have all 6 cores doing compiling, and I fire up some game that engages the integrated 3D accelerated graphics (I don't have a dedicated graphics card, owing to them being extremely expensive at the time I got the PC, during the pandemic), then I can overheat it in perhaps 30 minutes.

            • (Score: 1, Insightful) by Anonymous Coward on Wednesday April 23, @04:25AM (4 children)

              by Anonymous Coward on Wednesday April 23, @04:25AM (#1401217)

              Do you use a profile that allows thermal throttling below the rated speed under load? Which CPU governor do you use? If it is cooking itself after 4 or so hours, then it probably isn't the power supply (unless that is undervolting due to overheating) but the thermal design. The problem for you is that each time the processor overheats past its true maximum junction temperature, that maximum falls by a small but unpredictable amount for a given voltage. So I'd check which governors and thermal controls you are using to help mitigate that if it is a problem for you. The kernel can be told to do all sorts of things, including automatic underclocking and idle looping, to keep temperatures within user constraints. But you have to tell it that you want it to do that first.
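
              To see what the kernel already knows about your temperatures and trip points (a sketch, assuming the usual sysfs thermal layout):

                  # each thermal zone's name and current reading (millidegrees C)
                  paste <(cat /sys/class/thermal/thermal_zone*/type) \
                        <(cat /sys/class/thermal/thermal_zone*/temp)

                  # the temperatures at which throttling or shutdown kick in
                  grep . /sys/class/thermal/thermal_zone*/trip_point_*_temp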

              • (Score: 2) by bzipitidoo on Thursday April 24, @03:25AM (3 children)

                by bzipitidoo (4388) on Thursday April 24, @03:25AM (#1401342) Journal

                I confess I have never looked into this. I have no idea if a CPU governor is being used. But it sure sounds like a good idea. However, a bit of searching for info on this matter brought up a lot of docs to read. Was hoping for a simple, quick solution, along the lines of "echo something > /dev/something".

                • (Score: 1, Informative) by Anonymous Coward on Thursday April 24, @05:32AM (2 children)

                  by Anonymous Coward on Thursday April 24, @05:32AM (#1401350)

                  The easiest way to potentially solve it is to issue the command `cpufreq-set -g powersave` or `cpupower frequency-set -g powersave`, which will cause the CPU to use only the minimum allowed speed regardless of load. Otherwise, you can use that tool to experiment to find a CPU speed that will not overheat. It will slow everything down, but in exchange it all but eliminates the chance of overheating until you next reboot. There are also a number of daemons you can use to control it based on your platform and requirements. Sadly there isn't an easy answer, because what works for one system doesn't work for another. And part of the problem is that, since it appears you have exceeded the maximum temperature before, the overheat protection may no longer be aggressive enough, because the temperature at which the processor will now fail is lower.
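
                  And since you asked for an "echo something" solution: the sysfs equivalent of the above is roughly this, assuming your cpufreq driver exposes the standard files:

                      # set every core's governor to powersave in one shot
                      echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

                      # or cap the maximum frequency instead (value in kHz)
                      echo 2000000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq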

                  And as a frank side note: you'd think a fanless PC manufacturer would have better documentation on how to configure their machines in this manner.

                  • (Score: 0) by Anonymous Coward on Thursday April 24, @04:36PM (1 child)

                    by Anonymous Coward on Thursday April 24, @04:36PM (#1401400)

                    Setting the CPU governor to powersave is easy, but it might not work. During a really long compile the heat will build up, and a fanless system can't clear it out. You can't cool a CPU with hot air.

                    Lowering the CPU thermal throttle temperature will probably help more, which you can do with the ryzenadj tool.
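
                    A minimal sketch (the stock ceiling varies by chip, so check your model's spec first):

                        # tell the SMU to start throttling at 75 C instead of the stock limit
                        sudo ryzenadj --tctl-temp=75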

                    Realistically, though, a fanless system just isn't a great choice for long sustained workloads. For silence, the best approach is an open-loop water cooling setup with a big radiator and fans that can throttle down to silent speeds. That is not really viable for a laptop, but it gives you silence 90% of the time and maximum performance (still not very loud) the other 10% of the time.

                    • (Score: 0) by Anonymous Coward on Thursday April 24, @10:49PM

                      by Anonymous Coward on Thursday April 24, @10:49PM (#1401422)

                      It is a tradeoff. Lowering the limits with ryzenadj vs. the governor should affect the same settings under load. The difference is that powersave is simpler and spares you the tuning and experimentation, at the expense of performance. Coming up with a complete thermal profile would be best. In the end, the solution will probably include a mix of hardware and kernel tuning. Right now, the APU is cooking itself, which means the throttling is already being exceeded. At a minimum the APU is signaling the platform to shut down (either hard or soft), and it is a sign that the maximum junction temp is being exceeded and therefore lowered. That means that the built-in cooling profile is unreliable and that the kernel probably needs to get involved by actively cooling through injected idle loops.
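
                      You can at least see which cooling devices the kernel has registered before deciding what to add; a quick look, assuming the standard sysfs layout:

                          # list cooling devices (processor throttling, fans, idle injection)
                          # and how hard each one is currently working
                          for d in /sys/class/thermal/cooling_device*; do
                            echo "$d: $(cat $d/type), state $(cat $d/cur_state)/$(cat $d/max_state)"
                          done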

          • (Score: 2) by hendrikboom on Friday April 25, @06:44PM (5 children)

            by hendrikboom (1125) on Friday April 25, @06:44PM (#1401528) Homepage Journal

            > There is a good chance they don't have thermal throttling set up. Modern CPUs will self-regulate their boosting down to 100%, but not below that amount unless they have a profile that allows them to underclock under load. That profile isn't the default, so the kernel needs to tell the hardware that it is OK, and most distros and kernels I know of aren't configured to do that.

            > So I'd check which governors and thermal controls you are using to help mitigate that if it is a problem for you. The kernel can be told to do all sorts of things, including automatic underclocking and idle looping, to keep temperatures within user constraints. But you have to tell it that you want it to do that first.

            > Was hoping for a simple, quick solution, along the lines of "echo something > /dev/something".

            > The easiest way to potentially solve it is to issue the command `cpufreq-set -g powersave` or `cpupower frequency-set -g powersave`, which will cause the CPU to use only the minimum allowed speed regardless of load. Otherwise, you can use that tool to experiment to find a CPU speed that will not overheat. It will slow everything down, but in exchange it all but eliminates the chance of overheating until you next reboot. There are also a number of daemons you can use to control it based on your platform and requirements. Sadly there isn't an easy answer, because what works for one system doesn't work for another.

            > Setting the CPU governor to powersave is easy, but it might not work.

            > [...]

            > Lowering the CPU thermal throttle temperature will probably help more, which you can do with the ryzenadj tool.

            > the kernel probably needs to get involved by actively cooling through injected idle loops.

            So is temperature management another black art? Like Linux audio also seems to be?

            How does one do all those things? How does one even find out how to do these things? How does one even find out what can be done?

            • (Score: 2) by turgid on Friday April 25, @07:25PM (1 child)

              by turgid (4318) Subscriber Badge on Friday April 25, @07:25PM (#1401544) Journal

              Settle down for a long night with the Linux kernel configuration menus?

              • (Score: 3, Touché) by Gaaark on Thursday May 01, @04:17PM

                by Gaaark (41) on Thursday May 01, @04:17PM (#1402422) Journal

                Geez, I remember those days: haven't done a kernel config in... a decade and a half? ...longer?

                ...then the compiling...

                The good ol' days, lol.

                --
                --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
            • (Score: 1, Insightful) by Anonymous Coward on Saturday April 26, @01:09AM (2 children)

              by Anonymous Coward on Saturday April 26, @01:09AM (#1401603)

              Is it a bit of a black art? Yes. Sadly, like audio, that is a bit by design, because complexity slowly increased over time without anyone ever doing a clean redesign. Start with the fact that you have multiple hardware manufacturers doing multiple (and often incompatible) things even between their own products. Next are the assemblers that put those components together in different combinations with different designs. Then there are OSes that do different things with the same settings. Additionally, you have users that want different things from identical platforms. Finally, most people don't have to actively do anything because it usually just works, but when it doesn't, you need serious options.

              So how does one learn these things? I'm not really sure. I've had the benefit of being in the industry as these things cropped up; adding a new piece to the picture you've already assembled is easy. It also helps that you really only need to do thermal design when you are designing a platform, because you usually have someone else's work to start with. I think the best way to learn is by looking at an OEM install or other professional design. Or you could look at what a distro like Debian or Fedora does on default hardware. Examine the power management profiles and tables, check their daemon configuration, look at udev rules, and browse the applicable sysfs entries for things like thermal and hwmon. See how they handle it and you can get a picture of what works and how it fits together.
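
              Concretely, on a typical install that means poking around places like these (only a starting point; paths differ per distro):

                  # what the hardware monitoring chips call themselves
                  for h in /sys/class/hwmon/hwmon*; do echo "$h: $(cat $h/name)"; done

                  # distro-shipped rules and services that touch power/thermal
                  grep -ril -e thermal -e power /etc/udev/rules.d /usr/lib/udev/rules.d
                  systemctl list-units | grep -i -e thermal -e power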

              • (Score: 2) by Gaaark on Thursday May 01, @04:24PM (1 child)

                by Gaaark (41) on Thursday May 01, @04:24PM (#1402423) Journal

                > multiple hardware manufacturers doing multiple (and often incompatible) things even between their own products.

                I looked up modem cards one time to see if my card was working: EVERY card manufacturer blinks their lights differently, even within their own products.

                The card is blinking: one light green, the other a steady yellow? I figured it might be receiving but not transmitting... but no: it was fine. Another card? It might mean there was a problem, it might not.

                Steady green or yellow? Blinking green or yellow? Some random combination of the two? Not blinking at all?
                You have to look up EVERY SINGLE CARD's specs to see what is going on.

                F*ck it... it wasn't working, so I replaced it. The new one blinked or not in some combination... dunno... but this one worked, so....

                SHEEEEESH!

                --
                --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
                • (Score: 0) by Anonymous Coward on Sunday May 11, @01:56AM

                  by Anonymous Coward on Sunday May 11, @01:56AM (#1403356)

                  We had a switch where a green LED was normal and red was an error. Except for one hardware version. There, red was normal and green was an error. The next version switched the colors back. The story our support rep told us was that they changed two-color LEDs and no one realized the new part had the opposite pinout. Rather than eat the cost or get into a huge fight, the OEM just changed their documentation for the bad units as a new revision. It was a pain to scan the lights because you had to remember which switch was which revision. Finally, once we figured out how many "red LED" units we had and what our redundancy needs were, we started putting them in specific places and marking those cabinets with red painter's tape so the mental load was lower. We ended up saving a ton of money because they had a hard time selling that revision to other customers.

                  Moral of the story: sometimes double-checking your design can save your company hundreds of thousands of dollars down the road.

    • (Score: 2) by ese002 on Tuesday April 22, @08:35PM (2 children)

      by ese002 (5306) on Tuesday April 22, @08:35PM (#1401184)

      > However, the compiling-time downside is significantly annoying

      Gentoo now has pre-compiled packages for many/most things, though not for every possible build option. It might be virtually everything for typical new installs, but for long-time users like me, at least half of my updates still require compiling. It was just in the last month that I first saw Firefox and Thunderbird update from binaries.
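
      For anyone who wants to try them: enabling the official binhost is only a couple of config lines. Roughly this, per the Gentoo binary package guide (double-check the arch and profile in the sync-uri against your own system):

          # /etc/portage/binrepos.conf/gentoobinhost.conf
          [binhost]
          priority = 9999
          sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64/

          # /etc/portage/make.conf
          FEATURES="${FEATURES} getbinpkg binpkg-request-signature"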

      • (Score: 1) by shrewdsheep on Tuesday April 29, @09:00AM (1 child)

        by shrewdsheep (5215) Subscriber Badge on Tuesday April 29, @09:00AM (#1402028)

        I always thought that pre-compiling plus flexible re-compiling would allow for the best of both worlds. By this I mean that most packages would never be re-compiled at all; packages would only be re-compiled based on run-time profiling. Re-compiling could be scheduled smoothly, e.g. based on CPU load/memory use. Frankly, I'm a bit disappointed that nobody has tried this, given the myriad of distros out there.

        • (Score: 2) by ese002 on Wednesday April 30, @04:31AM

          by ese002 (5306) on Wednesday April 30, @04:31AM (#1402172)

          I'm not sure what automatic profiling would do for you. Most updates don't have anything to do with performance, and most build options don't either. Build options mostly enable/disable features and connect to different libraries. Python version changes cause the most, or at least the most annoying, rebuilds.

  • (Score: 2) by JoeMerchant on Tuesday April 22, @11:58AM

    by JoeMerchant (3937) on Tuesday April 22, @11:58AM (#1401110)

    I used Gentoo from 2004ish through 2007ish. It was the only viable option for a full 64-bit OS at the time.

    I recompiled the OS twice in the first month, taking roughly 24 hours per go. The first recompile was because I learned of a compiler flag that was supposed to make my AMD deployment more efficient. It may have worked, but the truth about a good OS is that it spends most of its time sleeping, so 1% gains are super hard to notice.
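
    For the curious, that sort of tweak lives in /etc/portage/make.conf. Back then it was -march=k8 or similar; nowadays it would look roughly like this:

        # /etc/portage/make.conf -- tune all builds for the local CPU
        COMMON_FLAGS="-O2 -march=native -pipe"
        CFLAGS="${COMMON_FLAGS}"
        CXXFLAGS="${COMMON_FLAGS}"
        MAKEOPTS="-j6"    # parallel build jobs, roughly one per core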

    The last recompile was after a couple of years of learning how it all worked: I applied what I knew, built for another day, and again didn't perceive any real difference in the result.

    I know Gentoo has precompiled emerge options now, but for me the advantages of swimming in the Debian mainstream far outweigh any additional build control I experienced with Gentoo. If you really want to recompile something in Debian that's an option too. See my other response about a recent Debian downside.

    --
    🌻🌻🌻 [google.com]
  • (Score: 0) by Anonymous Coward on Thursday April 24, @08:16AM (2 children)

    by Anonymous Coward on Thursday April 24, @08:16AM (#1401358)

    Gentoo or die

    In some ways it's unfortunate that Gentoo originally pitched its niche as performance, because really the extra 0.5% you get from fiddling with compile options basically doesn't matter at all. What Gentoo gives you is flexibility. You can use systemd or not, pulseaudio or not, glibc or musl, gcc or clang (of course some programs actually need one or the other), X or Wayland, whatever. When the systemd wars happened I barely even noticed.

    I distribute RPi-based embedded systems, and they run Gentoo. This gives me free GPL compliance, even GPLv3, because everything required to comply with the GPL is right there. In theory, the end user could log in, type emerge -e world, and rebuild every GPL program from source. (The real-world practicality of this is another matter; it would take a week.)

    I build the system on an ARM-based AWS host, so I don't even need to cross-compile. Which is good, because a lot of stuff is in Rust now and you can't really cross-compile Rust. It's also possible to do this in QEMU; it's pretty slow, but it does work. Last time I tried, it took a week to build Chromium, but my CPU is slow.

    Although the time required for compiling is mildly annoying, because it's a rolling release this matters a lot less. You upgrade when you want to, and while not every update is painless, generally I just run the update overnight and when I wake up it's done. The package manager is intelligent enough that even when an update goes wrong, it never (well, almost never) leaves the system in a broken/inconsistent state. I have had more RedHat-based systems break on upgrade than Gentoo.

    The only programs I find slow enough to compile to be a pain are GCC, LLVM/Clang, Rust, Chromium, WebKit, and LibreOffice. GCC wouldn't be so bad if it didn't insist on recompiling itself three times. Firefox is a relatively light compile; it takes less than an hour, and I only have a six-core CPU. But there's a binary version if you want.

    I only use binary packages for bootstrapping runtime environments like Java and Rust, where you can't really build them from scratch without a version already installed. It's not really different from starting with a binary C compiler. But there are binary packages for all the big, inconvenient packages.

    • (Score: 0) by Anonymous Coward on Friday April 25, @12:12AM (1 child)

      by Anonymous Coward on Friday April 25, @12:12AM (#1401428)

      0.5% improvement? We get more than that just by changing the instruction set flags to more accurate values. All forms of optimization and tuning combined have resulted in speedups of almost 50% in some cases.

      • (Score: 0) by Anonymous Coward on Friday April 25, @05:13AM

        by Anonymous Coward on Friday April 25, @05:13AM (#1401447)

        Certainly you can get a big improvement by, say, having AVX enabled vs. having it disabled. But what happens in the real world is that binary distros enable everything, and then the code determines at runtime what is available and chooses the appropriate code path. So really what you are getting in most cases is just a size optimization, not a performance optimization. Maybe you can get more speedup in specific programs with -O3 or higher, which isn't safe to use system-wide.

        Gentoo doesn't even allow you to compile glibc any more without all the compatibility cruft. If you build glibc today, it will have compatibility code for kernels going all the way back to 2.6. Ironically, binary distros are now better at this than Gentoo.

  • (Score: 0) by Anonymous Coward on Saturday April 26, @09:31AM

    by Anonymous Coward on Saturday April 26, @09:31AM (#1401624)

    Try to build a package that depends on Qt after you've upgraded _one_ of the Qt packages. You'll hit dependency hell so hard that the only two choices you have are to either remove _all_ of the Qt packages and let whatever needs them pull them back in by rebuilding (it's fine, actually -- any linked files will remain, because Gentoo won't remove them), or to run `emerge -uav --deep world` and spend 24 hours updating the *whole system*.

    Because you wanted to update *two packages*.

    I also run into situations where I can't update a package at all, because if I don't update *everything* that depends on a certain framework (Qt, GTK, wx, ...), then I can't update anything at all. Unless you specify *all* dependencies manually, something will block the upgrade of one package, blocking the required upgrade of a framework, and you're just stuck. Gentoo has become a game of "all or nothing".
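
    When I'm forced into it, the least painful incantation I've found is to move the whole framework in one shot instead of the whole world; a sketch, using qlist from portage-utils:

        # rebuild every installed Qt package in a single emerge so the
        # versions all move together (-I = installed, -C = plain output)
        emerge -1av $(qlist -IC 'dev-qt/*')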

    As far as control and power go, apt has this beat. You can use dpkg --force-depends to do things (and then apt-get install -f), but on Gentoo... there's no force-anything. If the system decides it's not proper, there's no way around it. If you want to break other packages to upgrade your frameworks, too damn bad. All or nothing. You only use one of those things built against an older framework twice a year and don't want to deal with it right now? All or nothing. Uninstall it and forget you ever used it, or update the whole system, right now.

    It's gotten to the point of being practically unusable. I can't do anything at all unless I `emerge -C` five packages first.

  • (Score: 0) by Anonymous Coward on Tuesday April 29, @10:26AM

    by Anonymous Coward on Tuesday April 29, @10:26AM (#1402036)

    > why is it not more popular?

    I'm glad it's not more popular. Imagine if it was popular and many more millions were wasting electricity to compile mostly the same thing.

    And if people aren't going to do that compile stuff they might as well use a different distro. 🤣

    Almost like Bitcoin where "My stuff is valuable because I can mathematically prove I wasted lots of energy on it"...