We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
Although lots of different tasks are required to keep a site such as this operating, interesting, and relevant, the most visible are the stories that we publish on the front page. Some stories can be edited in a matter of a few minutes; others take rather longer. Among the editors, we celebrate when we reach certain milestones. The first published story is a milestone of sorts but, more importantly, the 100th, 250th, 500th, 1000th, and so on are each more important milestones to an editor, and each marks a significant contribution in effort.
Today we have an editor who is the first to reach his 5000th published story on SoylentNews: martyb (aka Bytram on IRC) has been with the site from the very beginning, but has not limited his contribution to editing. He is also the site's one-man QA team and has spent many hours testing software and investigating many of the bugs that pop up from time to time.
Like all of us, he has to manage his own life and work too, and it is impossible to calculate the hours he has expended to keep SoylentNews worthy of being a place that many of us call 'home'.
So let me invite you to join me in thanking martyb for his contribution to this site. Congratulations martyb, and here's to the next 5000!
martyb here. It feels awkward receiving such praise so publicly. Yes, at many times it has been a labor of love. As most of you are aware, all staff work on a purely volunteer basis -- nobody has been paid for their work on the site. So it is with great pleasure that I can attest how privileged I feel to work with such an outstanding team... there is no way I could have achieved this milestone alone!
Along with editing and posting stories, we editors strive to second each story before it goes live. Many a mistake of mine has been caught and fixed by my fellow editors. I cannot thank them enough for saving my bacon on far too many occasions!
It has, indeed, been a team effort to keep the stories coming. A quick look at the Authors page shows that several editors are approaching milestones as well! janrinok (our Editor-in-Chief) just attained 3700 stories; cmn32480 is not far from 3000 stories; Fnord666, though arriving later on the scene, is on the cusp of reaching 2000 stories. Coolhand, at 1121 stories, pops in from time to time and continues to help keep the story queue filled. Not to be left out, mrpg joined us comparatively recently, and yet has nearly reached 500 stories.
I would also like to take this opportunity to call out FatPhil who answered our call for editors not too long ago, as well as our newest editors chromas and fyngyrz -- please join me in welcoming them to our editorial staff!
A special thanks goes to takyon who has not only posted over 900 stories, but has also single-handedly provided well over 3700 well-written story submissions, which help make our lives as editors so much easier.
A special shoutout to janrinok and LaminatorX; there was a long stretch where they were the entire editorial team, and when the submissions queue went dry, each would rustle up stories from across the web and submit them for the other to push out onto the site. I'd hate to imagine what would have happened without their steadfast effort and perseverance.
Lastly, a sincere thanks goes to the SoylentNews community. Your story submissions are tremendously important -- not just in giving us something to post on the site, but also in providing insight into what topics are suitable for the site, as well. Please keep those story submissions and comments coming!
Okay, that wasn't the last thing. We seem to be running behind prior periods in subscriptions to the site. We need the money to pay for the servers, domain name renewal, taxes, etc. Again, none of the staff receive any payment of any kind for our efforts -- we are all volunteers. Please take a moment to go to our Subscription Page. Even Anonymous Cowards can, via a gift subscription, make a contribution. If you are logged in and looking to start, renew, or extend your subscription, be certain that you select the correct radio button for renewal (i.e. NOT a gift subscription). Click through the FAQ link there to see the benefits to you for subscribing. Also, the amounts shown are minimums for the duration shown -- please consider changing to a larger amount to further help the site continue as a going concern.
Over the past week we've had at least three occurrences of this particular bug crop up. It's currently already fixed but I thought I'd fill you lot in just in case it got you too and you haven't noticed yet.
On the subscription page there are two radio buttons if you're logged in. One is to subscribe for yourself and one is to give a gift subscription. For some reason they were both set unchecked. If you didn't check one, your subscription would go to NCommander's non-admin account, mcasadevall. It beats the complete hell out of me why this would be the default, but it is.
If you've purchased a subscription recently please check that you got credit for it. If you didn't please let us know either here or via email.
It is my great pleasure to announce that SoylentNews has just celebrated four years of service to the community! The very first story on the site appeared on 2014-02-12, and the site actually went live to everyone on 2014-02-17.
It all started when a story on Slashdot made reference to its "audience", which ticked off quite a number of people. Soon after came a boycott of Slashdot — aka the "Slashcott". While this was in effect, an intrepid few people managed to take a years-old, out-of-date, unsupported, open-sourced version of slashcode and somehow get it up to speed to run on much more recent versions of Apache, MySQL, etc. Recurring crashes and outages were the norm. (See last year's anniversary story for many more details!) Further, on July 4th, 2014, our application to become a Public Benefit Corporation was approved — this set the stage for us to be able to accept funding from the community.
By the time you read this, we will have posted 20,980 stories to the site, on which over 639,907 comments have been made!
We could not have done this without all of you. You (the community) submit the stories for the site. You write the comments... and moderate them, too. You made recommendations for improvements to the site. You are SoylentNews.
It has been my privilege and honor to work with a great group of folks who have done the behind-the-scenes skunk-work which has kept this site running. It does bear mentioning that this site is entirely staffed by volunteers. Nobody here has received even a penny's worth of income from the site. Like you, we have home and work responsibilities, but in our spare time we still strive to provide an environment that is conducive to discussions of predominantly tech-related matters.
Having said all that, I must add that income to the site has dropped recently. Having let my subscription lapse in the past, I know how easy that can be. Take a moment to check your subscription status. We have on the order of 100 people who have subscribed in the past, have visited the site in the past month, and whose subscription has expired. If your subscription is up-to-date, please consider either extending it or making a gift subscription (default is to UID 6 - "mcasadevall" aka NCommander). NB the dollar amounts presented are the minimum payments required for that duration -- we'll happily accept larger amounts. =)
If financial contributions are infeasible for you, we always appreciate story submissions. Submit a link, a few paragraphs from the story, and ideally a sentence or two about what you found interesting and send it to us. Any questions, please take a look at the Submission Guidelines.
Of course, the comments are where it's at. Thoughtful, well-reasoned, well-supported comments seem to do best here. Inflammatory histrionics garner attention, and usually down-mods, too. Speaking of which, if you have good Karma and have been registered with the site for at least a month, you are invited to participate in moderation. Unlike other sites that up-mod or down-mod to infinity, we have something more like olympic scoring here. A comment score can vary from -1 to +5. This is how I look at scores: -1 (total waste of your time), 0 (meh), +1 (okay), +2 (good), +3 (quite good), +4 (very good), +5 (don't miss this one!).
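For the curious, the bounded scoring described above can be sketched in a few lines of Python. This is an illustrative sketch only — the function and constant names are invented here, not taken from the actual rehash code; only the -1 to +5 range comes from the description above.

```python
# Sketch of SoylentNews-style bounded comment scoring (names are illustrative).
MIN_SCORE, MAX_SCORE = -1, 5

def apply_moderation(score: int, delta: int) -> int:
    """Apply an up-mod (+1) or down-mod (-1), clamped to the site's range."""
    return max(MIN_SCORE, min(MAX_SCORE, score + delta))

# A comment starts at some base score; mods move it only within bounds.
score = 1
for delta in (+1, +1, +1, +1, +1):  # five up-mods in a row
    score = apply_moderation(score, delta)
print(score)  # capped at 5, unlike sites that up-mod to infinity
```

The clamp is the whole point: no amount of piling on can push a comment past +5 or below -1.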
As you're probably aware, we experienced some unplanned downtime today. It has been claimed it was entirely the fault of Russian Hackers. They invaded fluorine and caused rehash's database-updating code to skip updating the database during this last site update. Which is just as well, I suppose, since two of the SQL statements refuse to complete even when run manually. That I'm going to have to chalk up to a misconfigured ndbd on helium and neon.
tl;dr The long and short of it is, we'll be fine until we can get those updates into the database, but it is going to mean more downtime this weekend.
Yeah, so life has managed to delay our December Update until February. Things happen. The only change to what's in it is that I wrote up a simple plugin to syndicate content to Twitter, which is very much preferable to our current situation since we're pushing stories from a hacky little script on my desktop at the moment and I'd like to be able to boot Windows 7 for some vidya once in a while.
Downtime should be five minutes or less starting around 2:00AM UTC (an hour and forty minutes from now).
Continuing Linode's efforts to mitigate the Meltdown/Spectre issues, we have learned that the last of our servers has been scheduled for its first reboot. This time it is hydrogen, which, among other things, hosts a copy of our database and is one of our web frontends. The reboot is scheduled for tonight, 2018-01-19 @ 05:00 AM UTC (midnight EST), or approximately 9 hours from the time this story is posted. Our plan is to move hydrogen's workload over to fluorine to cover the load during the hiatus, and we expect no interruption of service on the site.
Please be aware that Linode considers these reboots to be "Phase 1" of their remediation efforts. We will keep you posted when we learn of other phase(s) being scheduled.
We appreciate your understanding and patience as we deal with this situation.
I recently came upon an article on Ars Technica which revealed that Linode (amongst many, many others) learned of these vulnerabilities at the same time as the rest of us -- when the news hit the press. They had no advance notice with which they could have performed mitigation planning of any kind. That has to count as one of their worst days, ever.
Linode (our server provider) is continuing with their server reboots to mitigate Meltdown/Spectre. This time, three of our servers are scheduled to be rebooted at the same time: lithium, sodium, and boron.
From TMB's update to our earlier story System Reboots: beryllium reboot successful; lithium, sodium, and boron soon to come [updated]:
[TMB Note]: Sodium is our currently-configured load balancer and we weren't given enough notice to switch to Magnesium (DNS propagation can take a while), so expect ten minutes or less of site downtime. Or temporarily add 18.104.22.168 to your hosts file if ten minutes is more than you can wait.
This reboot is scheduled for: 2018-01-18 at 0900 UTC (0400 EST). That is about 7 hours from the time this story goes 'live'. We anticipate no problems, and expect the site to resume operations on its own.
A workaround is to temporarily update your hosts file to include:
18.104.22.168 soylentnews.org
Upcoming: We just learned that hydrogen is scheduled for a reboot on 2018-01-19 at 05:00 AM UTC. Since we can get by just fine for a few hours on one web frontend, no service interruption is anticipated.
[Update: Reboot of beryllium was successful and our IRC services were restored without issue. Hat tip to our sysops who made this happen so smoothly! --martyb]
Linode, which hosts our servers, is rolling out fixes for the Meltdown/Spectre bugs. This necessitates a hard reboot of their servers, and that means any guest servers will be down while this happens. beryllium is scheduled for a reboot with a two-hour window starting at 2018-01-17 07:00 AM UTC (02:00 AM EST). The outage should be relatively brief — a matter of just a few minutes.
We expect this will cause our IRC (Internet Relay Chat) service to be unavailable. We do not anticipate any problems, but if things go sideways, I'm sure the community will find a way to let us know via the comments.
Planning ahead, we have learned that lithium, sodium, and boron are all scheduled for a reboot on 2018-01-18 at 09:00 AM UTC.
We appreciate your understanding and patience as we strive to keep the impact to the site to a minimum.
[TMB Note]: Sodium is our currently configured load balancer and we weren't given enough notice to switch to Magnesium (DNS propagation can take a while), so expect ten minutes or less of site downtime. Or temporarily add 22.214.171.124 to your hosts file if ten minutes is more than you can wait.
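For those unfamiliar with the hosts-file workaround TMB describes, the entry would look something like this (using the IP from the note above; the file's location differs by operating system):

```
# Linux/macOS: /etc/hosts    Windows: C:\Windows\System32\drivers\etc\hosts
22.214.171.124   soylentnews.org
```

Remember to remove the line again afterward, or your machine will keep going straight to that one server even after DNS changes.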
We recently received notifications that Linode, our hosting provider, will be performing "Emergency Security Maintenance" as a result of the recently disclosed Meltdown and Spectre security issues.
So far, we have been informed of maintenance windows for two of our servers: magnesium and fluorine.
There is a two-hour window for these reboots starting on Friday, January 12th at 10:00 AM UTC. Reboots should take on the order of 10 minutes per server.
The reboot of magnesium should cause no service disruption as it is one of our redundant front-end servers. The same cannot be said for fluorine as TheMightyBuzzard so succinctly summed it up: "slashd and site payments won't work while fluorine's down".
We have not yet received any information as to when our other systems will be rebooted -- we will keep you advised as we learn more.
Another year is almost behind us and I thought it would be useful to take a look at what we have accomplished up to this point.
For those who may be new-ish here, SoylentNews went live on 2014-02-17. Since then, we have:
All of this was provided with absolutely no advertising by a purely volunteer staff!
Please accept my sincere thanks to all of you who have subscribed and helped to keep the site up and running! We could not have done it without your support.
I must also report that we have just over 100 people who have accessed the site in the past month whose subscription has lapsed. It is easy enough to do -- I've let it happen, myself. So, please go to the subscription page to check/renew your subscription. Be aware that the displayed amount is the minimum for the selected duration; feel free to increase the amount (hint, hint).
Oh, and I would be remiss in not thanking the staff here for their dedication and perseverance. Linode decided to open a new data center and we had to migrate our servers to the new location. We accomplished this with almost no downtime on the site, and only about a 30-minute hiccup on our IRC (Internet Relay Chat) server.
Because of performance degradation on our servers when loading highly-commented stories, we rolled out a new comment display system early in the year. It had several issues at the outset, but seems to have settled down quite nicely. We appreciate your patience, and constructive feedback reporting issues as they arose. It helped greatly in stomping out those bugs.
We have a bug-fix update to the site in the works... mostly minor things that are waiting on testing for release. We hope to roll those out in the next couple of weeks.
To all of you who have contributed to the site, in other words: to our community, thank-you! It has been a privilege to serve you this past year and I look forward to continuing to do so in the year to come. --martyb
So, among the many nifty presents I brought back from my holidays with the family was a case of the black plague. Or possibly a cold. Either way, I don't want to be deploying new code on the production servers while my thinking's impaired*, so we're pushing the site update we'd planned for this weekend back another week**. That's all. Enjoy the last bits of 2017.
**or two. [martyb here. I've not had the time to finish testing the changes and am recovering from overload-mode at work and the start of a cold, as well.]
Seems we've hit one of those rare days where circumstances have conspired to keep all the eds busy enough that the story queue ran dry, so I figured I'd go ahead and tell you lot about the upcoming December site update just so they have a little less time to fill now that a couple of them have appeared and started refilling the queue.
It's mostly just a minor bug-fix update, stuff I could get coded quickly and didn't require extensive testing that martyb doesn't have much time for right now, so don't get your hopes too high. Most of the good stuff is currently slated for spring of next year. Here's what we've currently got up on dev for testing with an expected release date of between Christmas and New Years Eve:
Like I said, not a whole lot there on account of us not having time to thoroughly test much what with the holidays coming up. Here's the list (using tinyurl due to a Rehash bug that will definitely make the Spring 2018 cut) of what we'd like to get done for this spring if you're curious though.
Them of you of the disposition to celebrate Christmas, have a merry one. Them of you who celebrate otherwise, happy whatever you're celebrating. Them of you who don't celebrate at all, have a happy rest of December.
[Ed note: I nudged the time this story was due to be released by just a few minutes so that it coincided with the beginning of the Winter Solstice in the northern hemisphere... and the Summer Solstice for those of you who are south of the equator. Maybe we can get the next release out at the start of the equinox (2018-03-20 @ 16:15:00 UTC). --martyb]
So, apparently around November 5th we stopped posting to Twitter. We didn't find out until around the end of that month, and when we did, nobody had the time and/or ability to look into why until this past week.
Now how we get our headlines over to Twitter is overly complicated and, frankly, idiotic. It's done by one of our IRC bots pulling headlines from the RSS feed and posting them on Twitter as @SoylentNews. The bot was written back in 2014 with hand-rolled (as opposed to installed via package manager) Python libraries and hasn't been updated since. This was breakage that should absolutely have been expected to happen. Twitter's penchant for arbitrarily changing their unversioned API means you either keep on top of changes or expect things to break for no apparent reason.
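The saner cron-style rewrite mentioned below could be sketched roughly as follows. This is purely a hypothetical illustration, not the actual bot: the function names and sample feed are invented here, and the Twitter call is stubbed out since real posting requires API credentials and a maintained client library.

```python
# Hypothetical sketch of the RSS-to-Twitter flow, written as a standalone
# script (suitable for cron) rather than an IRC bot with hand-rolled libraries.
import xml.etree.ElementTree as ET

def headlines_from_rss(rss_xml: str) -> list[str]:
    """Extract item titles from an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", default="").strip()
            for item in root.iter("item")]

def post_to_twitter(headline: str) -> None:
    # Placeholder: a real implementation would call a maintained Twitter
    # client here, and would track already-posted items to avoid duplicates.
    print(f"would tweet: {headline}")

# A minimal sample feed, standing in for the site's real RSS output.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>SoylentNews</title>
  <item><title>First headline</title></item>
  <item><title>Second headline</title></item>
</channel></rss>"""

for h in headlines_from_rss(SAMPLE_FEED):
    post_to_twitter(h)
```

The key difference from the current setup is that all parsing uses the standard library, so there are no hand-rolled dependencies to rot; only the Twitter client itself would need keeping up to date.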
Here's the question: do we even care? We can either find someone who's willing to rewrite the bot to a new Twitter library, do it the sane way as either a cron or slashd job, or just say to hell with it since we only have two hundred or so followers on Twitter anyway. What say you, folks?
[TMB Note]: Twitter's who-to-follow algorithms really impressed me this morning when I logged in to manually post this story. How did they know we were all huge @JustinBieber and @BarackObama fans?
[Update]: We're again annoying Twitter users by spreading relative intelligence across their platform of choice. Credit goes to Crash for wisely pointing out that we don't have to code everything ourselves.
We've discovered over the weekend that soylentnews.org was failing to resolve with some DNSSEC-enabled resolvers. After debugging and manually checking our setup, the problem appears to be an issue with the Linode DNS servers when accessed over IPv6. As such, some users may experience slow resolution times due to these DNS issues. I have filed a ticket with Linode about this, and will keep you guys up to date.
73 de NCommander
Nearly two months ago, we received notice from Linode (which hosts the servers for SoylentNews) that they would be migrating our servers to a new data center in Dallas, TX. Our systems would gradually be scheduled for migration. We could either accept their scheduled date/time or trigger a manual migration. In theory, this should be a no-worry activity as we have redundancy on almost all of our servers and processes. But in practice, that is not always the case. Rather than take our chances, we were proactive and manually performed migrations as they became possible.
We had a couple of hiccups with one server, but with NCommander, TMB, and PJ on hand (among others), we were able to get that one straightened out with only limited impact to the site. We also lost access to our IRC server for about 20 minutes when that server was migrated.
So, with that backdrop, I'm pleased to announce that we completed the migration of our last Linode (hydrogen) to the new data center in Dallas this morning! Shoutout to TheMightyBuzzard for tweaking our load balancer to facilitate the migration, and for being on hand had things gone sideways.