2020-09-23 12:34:24 UTC --martyb
We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.
It is my great pleasure to announce that SoylentNews has just celebrated four years of service to the community! The very first story on the site appeared on 2014-02-12, and the site went live to everyone on 2014-02-17.
It all started when a story on Slashdot made reference to its "audience", which ticked off quite a number of people. Soon after came a boycott of Slashdot, aka the "Slashcott". While it was in effect, an intrepid few took a years-old, out-of-date, unsupported, open-sourced version of slashcode and somehow got it up to speed to run on much more recent versions of Apache, MySQL, etc. Recurring crashes and outages were the norm. (See last year's anniversary story for many more details!) Further, on July 4th, 2014, our application to become a Public Benefit Corporation was approved; this set the stage for us to be able to accept funding from the community.
By the time you read this, we will have posted 20,980 stories to the site, which have garnered over 639,907 comments!
We could not have done this without all of you. You (the community) submit the stories for the site. You write the comments... and moderate them, too. You make recommendations for improvements to the site. You are SoylentNews.
It has been my privilege and honor to work with a great group of folks who have done the behind-the-scenes skunk-work which has kept this site running. It does bear mentioning that this site is entirely staffed by volunteers. Nobody here has received even a penny's worth of income from the site. Like you, we have home and work responsibilities, but in our spare time we still strive to provide an environment that is conducive to discussions of predominantly tech-related matters.
Having said all that, I must add that income to the site has dropped recently. Having let my subscription lapse in the past, I know how easy that can be. Take a moment to check your subscription status. We have on the order of 100 people who have subscribed in the past, have visited the site in the past month, and whose subscription has expired. If your subscription is up-to-date, please consider either extending it or making a gift subscription (default is to UID 6 - "mcasadevall" aka NCommander). NB the dollar amounts presented are the minimum payments required for that duration -- we'll happily accept larger amounts. =)
If financial contributions are infeasible for you, we always appreciate story submissions. Send us a link, a few paragraphs from the story, and ideally a sentence or two about what you found interesting. If you have any questions, please take a look at the Submission Guidelines.
Of course, the comments are where it's at. Thoughtful, well-reasoned, well-supported comments seem to do best here. Inflammatory histrionics garner attention, and usually down-mods, too. Speaking of which, if you have good Karma and have been registered with the site for at least a month, you are invited to participate in moderation. Unlike other sites that up-mod or down-mod to infinity, we have something more like olympic scoring here. A comment's score can range from -1 to +5. This is how I look at scores: -1 (total waste of your time), 0 (meh), +1 (okay), +2 (good), +3 (quite good), +4 (very good), +5 (don't miss this one!).
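The bounded scoring described above can be captured in a toy sketch. This is purely illustrative; the names below are hypothetical and not taken from rehash itself.

```python
# Labels for each score, straight from the scale described above.
LABELS = {
    -1: "total waste of your time", 0: "meh", 1: "okay", 2: "good",
    3: "quite good", 4: "very good", 5: "don't miss this one!",
}

def moderate(score: int, delta: int) -> int:
    """Apply one up-mod (+1) or down-mod (-1), clamped to the -1..+5 range."""
    return max(-1, min(5, score + delta))
```

Unlike unbounded karma systems, repeated up-mods past +5 (or down-mods past -1) simply have no further effect.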
As you're probably aware, we experienced some unplanned downtime today. It has been claimed that it was entirely the fault of Russian Hackers. They invaded fluorine and kept rehash's database-updating code from updating the database during this last site update. Which is just as well, I suppose, since two of the SQL statements refuse to complete even when run manually. That I'm going to have to chalk up to a misconfigured ndbd on helium and neon.
tl;dr: We'll be fine until we can get those updates into the database, but it is going to mean more downtime this weekend.
Yeah, so life has managed to delay our December Update until February. Things happen. The only change to what's in it is that I wrote up a simple plugin to syndicate content to Twitter, which is very much preferable to our current situation since we're pushing stories from a hacky little script on my desktop at the moment and I'd like to be able to boot Windows 7 for some vidya once in a while.
Downtime should be five minutes or less starting around 2:00AM UTC (an hour and forty minutes from now).
Continuing Linode's efforts to mitigate the Meltdown/Spectre issues, we have learned that the last of our servers has been scheduled for its first reboot. This time it is hydrogen, which, among other things, hosts a copy of our database and is one of our web frontends. The reboot is scheduled for tonight, 2018-01-19 @ 05:00 AM UTC (Midnight EST), or approximately 9 hours from the time this story is posted. Our plan is to move hydrogen's workload over to fluorine to cover the load during the hiatus; we expect no interruption of service on the site.
Please be aware that Linode considers these reboots to be "Phase 1" of their remediation efforts. We will keep you posted when we learn of other phase(s) being scheduled.
We appreciate your understanding and patience as we deal with this situation.
I recently came upon an article on Ars Technica which revealed that Linode (amongst many, many others) learned of these vulnerabilities at the same time as the rest of us -- when the news hit the press. They had no advance notice with which they could have performed mitigation planning of any kind. That has to count as one of their worst days, ever.
Linode (our server provider) is continuing with their server reboots to mitigate Meltdown/Spectre. This time, three of our servers are scheduled to be rebooted at the same time: lithium, sodium, and boron.
From TMB's update to our earlier story System Reboots: beryllium reboot successful; lithium, sodium, and boron soon to come [updated]:
[TMB Note]: Sodium is our currently-configured load balancer and we weren't given enough notice to switch to Magnesium (DNS propagation can take a while), so expect ten minutes or less of site downtime. Or temporarily add 22.214.171.124 to your hosts file if ten minutes is more than you can wait.
This reboot is scheduled for 2018-01-18 at 0900 UTC (0400 EST); that is about 7 hours from the time this story goes 'live'. We anticipate no problems and expect the site to resume operations on its own.
A workaround is to temporarily update your hosts file to include:
Upcoming: We just learned that hydrogen is scheduled for a reboot on 2018-01-19 at 05:00 AM UTC. Since we can get by just fine for a few hours on one web frontend, no service interruption is anticipated.
[Update: Reboot of beryllium was successful and our IRC services were restored without issue. Hat tip to our sysops who made this happen so smoothly! --martyb]
Linode, which hosts our servers, is rolling out fixes for the Meltdown/Spectre bugs. This necessitates a hard reboot of their servers, and that means any guest servers will be down while this happens. beryllium is scheduled for a reboot with a two-hour window starting at 2018-01-17 07:00 AM UTC (02:00 AM EST). The outage should be relatively brief — a matter of just a few minutes.
We expect this will cause our IRC (Internet Relay Chat) service to be unavailable. We do not anticipate any problems, but if things go sideways, I'm sure the community will find a way to let us know via the comments.
Planning ahead, we have learned that lithium, sodium, and boron are all scheduled for a reboot on 2018-01-18 at 09:00 AM UTC.
We appreciate your understanding and patience as we strive to keep the impact to the site to a minimum.
[TMB Note]: Sodium is our currently configured load balancer and we weren't given enough notice to switch to Magnesium (DNS propagation can take a while), so expect ten minutes or less of site downtime. Or temporarily add 126.96.36.199 to your hosts file if ten minutes is more than you can wait.
We recently received notifications that Linode, our hosting provider, will be performing "Emergency Security Maintenance" as a result of the recently disclosed Meltdown and Spectre security issues.
So far, we have been informed of maintenance windows for two of our servers: magnesium and fluorine.
There is a two-hour window for these reboots starting on Friday, January 12th at 10:00 AM UTC. Reboots should take on the order of 10 minutes per server.
The reboot of magnesium should cause no service disruption as it is one of our redundant front-end servers. The same cannot be said for fluorine as TheMightyBuzzard so succinctly summed it up: "slashd and site payments won't work while fluorine's down".
We have not yet received any information as to when our other systems will be rebooted -- we will keep you advised as we learn more.
Another year is almost behind us and I thought it would be useful to take a look at what we have accomplished up to this point.
For those who may be new-ish here, SoylentNews went live on 2014-02-17. Since then, we have:
All of this was provided with absolutely no advertising by a purely volunteer staff!
Please accept my sincere thanks to all of you who have subscribed and helped to keep the site up and running! We could not have done it without your support.
I must also report that we have just over 100 people who have accessed the site in the past month whose subscription has lapsed. It is easy enough to do -- I've let it happen, myself. So, please go to the subscription page to check/renew your subscription. Be aware that the presented amount is the minimum for the selected duration; feel free to increase the amount (hint, hint).
Oh, and I would be remiss in not thanking the staff here for their dedication and perseverance. Linode decided to open a new data center and we had to migrate our servers to the new location. We accomplished this with almost no downtime on the site, and only about a 30-minute hiccup on our IRC (Internet Relay Chat) server.
Because of performance degradation on our servers when loading highly-commented stories, we rolled out a new comment display system early in the year. It had several issues at the outset, but seems to have settled down quite nicely. We appreciate your patience, and constructive feedback reporting issues as they arose. It helped greatly in stomping out those bugs.
We have a bug-fix update to the site in the works... mostly minor things that are waiting on testing for release. We hope to roll those out in the next couple of weeks.
To all of you who have contributed to the site (in other words, to our community): thank you! It has been a privilege to serve you this past year and I look forward to continuing to do so in the year to come. --martyb
So, among the many nifty presents I brought back from my holidays with the family was a case of the black plague. Or possibly a cold. Either way, I don't want to be deploying new code on the production servers while my thinking's impaired*, so we're pushing the site update we'd planned for this weekend back another week**. That's all. Enjoy the last bits of 2017.
**or two. [martyb here. I've not had the time to finish testing the changes and am recovering from overload-mode at work and the start of a cold, as well.]
Seems we've hit one of those rare days where circumstances have conspired to keep all the eds busy enough that the story queue ran dry, so I figured I'd go ahead and tell you lot about the upcoming December site update, just to give them a little less space to fill now that a couple of them have reappeared and started refilling the queue.
It's mostly just a minor bug-fix update: stuff I could get coded quickly that didn't require the extensive testing martyb doesn't have much time for right now, so don't get your hopes too high. Most of the good stuff is currently slated for spring of next year. Here's what we've currently got up on dev for testing, with an expected release date between Christmas and New Year's Eve:
Like I said, not a whole lot there, on account of us not having time to thoroughly test much with the holidays coming up. If you're curious, though, here's the list (using tinyurl due to a Rehash bug that will definitely make the Spring 2018 cut) of what we'd like to get done for this spring.
Them of you of the disposition to celebrate Christmas, have a merry one. Them of you who celebrate otherwise, happy whatever you're celebrating. Them of you who don't celebrate at all, have a happy rest of December.
[Ed note: I nudged the time this story was due to be released by just a few minutes so that it coincided with the beginning of the Winter Solstice in the northern hemisphere... and the Summer Solstice for those of you who are south of the equator. Maybe we can get the next release out at the start of the equinox (2018-03-20 @ 16:15:00 UTC). --martyb]
So, apparently around November 5th we stopped posting to Twitter. We didn't find out until around the end of that month, and when we did, nobody had the time and/or ability to look into why until this past week.
Now, how we get our headlines over to Twitter is overly complicated and, frankly, idiotic. It's done by one of our IRC bots pulling headlines from the RSS feed and posting them on Twitter as @SoylentNews. The bot was written back in 2014 with hand-rolled (as opposed to installed via package manager) Python libraries and hasn't been updated since. This was breakage that absolutely should have been expected: Twitter's penchant for arbitrarily changing their unversioned API means you either keep on top of changes or expect things to break for no apparent reason.
Here's the question: do we even care? We can find someone who's willing to rewrite the bot against a new Twitter library, do it the sane way as either a cron or slashd job, or just say to hell with it, since we only have two hundred or so followers on Twitter anyway. What say you, folks?
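For what it's worth, the "sane way" might look something like the minimal sketch below: a periodic job that fetches the RSS feed, dedupes against what it has already posted, and hands the rest off to whatever Twitter client gets chosen. Everything here is an assumption for illustration: the feed URL, the state-file path, and the posting step (left as a stub) are not from the actual bot's code.

```python
#!/usr/bin/env python3
"""Hypothetical cron-driven RSS-to-Twitter bridge; a sketch only.
The real bot's code, the feed layout, and the Twitter client are assumptions."""
import json
import urllib.request
import xml.etree.ElementTree as ET
from pathlib import Path

FEED_URL = "https://soylentnews.org/index.rss"   # assumed feed location
SEEN_FILE = Path("/var/tmp/sn_tweeted.json")     # dedupe state between runs

def parse_headlines(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title", ""), item.findtext("link", ""))
            for item in root.iter("item")]

def unposted(headlines, seen_links):
    """Keep only headlines whose link has not been tweeted yet."""
    return [(title, link) for title, link in headlines if link not in seen_links]

def run_once():
    """One cron invocation: fetch the feed, post anything new, save state."""
    seen = set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()
    with urllib.request.urlopen(FEED_URL) as resp:
        feed_xml = resp.read().decode("utf-8")
    for title, link in unposted(parse_headlines(feed_xml), seen):
        # Stand-in for a call into whatever maintained Twitter library
        # gets picked; hand-rolling the API is what broke last time.
        print(f"would tweet: {title} {link}")
        seen.add(link)
    SEEN_FILE.write_text(json.dumps(sorted(seen)))
```

Scheduling `run_once()` from cron (say, every fifteen minutes) removes the dependency on a long-lived IRC bot entirely, and the state file keeps a restart from re-tweeting the whole feed.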
[TMB Note]: Twitter's who-to-follow algorithms really impressed me this morning when I logged in to manually post this story. How did they know we were all huge @JustinBieber and @BarackObama fans?
[Update]: We're again annoying Twitter users by spreading relative intelligence across their platform of choice. Credit goes to Crash for wisely pointing out that we don't have to code everything ourselves.