We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
Continuing Linode's efforts to mitigate the Meltdown/Spectre issues, we have learned that the last of our servers has been scheduled for its first reboot. This time it is hydrogen which, among other things, hosts a copy of our database and is one of our web frontends. The reboot is scheduled for tonight, 2018-01-19 @ 05:00 AM UTC (midnight EST), or approximately 9 hours from the time this story is posted. Our plan is to move hydrogen's workload over to fluorine during the hiatus, so we expect no interruption of service on the site.
Please be aware that Linode considers these reboots to be "Phase 1" of their remediation efforts. We will keep you posted when we learn of other phase(s) being scheduled.
We appreciate your understanding and patience as we deal with this situation.
I recently came upon an article on Ars Technica which revealed that Linode (amongst many, many others) learned of these vulnerabilities at the same time as the rest of us -- when the news hit the press. They had no advance notice with which they could have performed mitigation planning of any kind. That has to count as one of their worst days, ever.
Linode (our server provider) is continuing with their server reboots to mitigate Meltdown/Spectre. This time, three of our servers are scheduled to be rebooted at the same time: lithium, sodium, and boron.
From TMB's update to our earlier story System Reboots: beryllium reboot successful; lithium, sodium, and boron soon to come [updated]:
[TMB Note]: Sodium is our currently-configured load balancer and we weren't given enough notice to switch to Magnesium (DNS propagation can take a while), so expect ten minutes or less of site downtime. Or temporarily add 126.96.36.199 to your hosts file if ten minutes is more than you can wait.
This reboot is scheduled for: 2018-01-18 at 0900 UTC (0400 EST). That is about 7 hours from the time this story goes 'live'. We anticipate no problems and expect the site to resume operations on its own.
A workaround is to temporarily update your hosts file to include:
Upcoming: We just learned that hydrogen is scheduled for a reboot on 2018-01-19 at 05:00 AM UTC. Since we can get by just fine for a few hours on one web frontend, no service interruption is anticipated.
[Update: Reboot of beryllium was successful and our IRC services were restored without issue. Hat tip to our sysops who made this happen so smoothly! --martyb]
Linode, which hosts our servers, is rolling out fixes for the Meltdown/Spectre bugs. This necessitates a hard reboot of their servers, and that means any guest servers will be down while this happens. beryllium is scheduled for a reboot with a two-hour window starting at 2018-01-17 07:00 AM UTC (02:00 AM EST). The outage should be relatively brief — a matter of just a few minutes.
We expect this will cause our IRC (Internet Relay Chat) service to be unavailable. We do not anticipate any problems, but if things go sideways, I'm sure the community will find a way to let us know via the comments.
Planning ahead, we have learned that lithium, sodium, and boron are all scheduled for a reboot on 2018-01-18 at 09:00 AM UTC.
We appreciate your understanding and patience as we strive to keep the impact to the site to a minimum.
[TMB Note]: Sodium is our currently configured load balancer and we weren't given enough notice to switch to Magnesium (DNS propagation can take a while), so expect ten minutes or less of site downtime. Or temporarily add 188.8.131.52 to your hosts file if ten minutes is more than you can wait.
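For readers who have not edited a hosts file before: it is a plain-text mapping of IP addresses to hostnames (on Linux/macOS the file is /etc/hosts; on Windows it is C:\Windows\System32\drivers\etc\hosts). Using the address above, the temporary entry would look like:

```
188.8.131.52    soylentnews.org
```

Remember to remove the line once the switchover is done, or your browser will keep bypassing normal DNS long after things have settled.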
We recently received notifications that Linode, our hosting provider, will be performing "Emergency Security Maintenance" as a result of the recently disclosed Meltdown and Spectre security issues.
So far, we have been informed of maintenance windows for two of our servers: magnesium and fluorine.
There is a two-hour window for these reboots starting on Friday, January 12th at 10:00 AM UTC. Reboots should take on the order of 10 minutes per server.
The reboot of magnesium should cause no service disruption as it is one of our redundant front-end servers. The same cannot be said for fluorine as TheMightyBuzzard so succinctly summed it up: "slashd and site payments won't work while fluorine's down".
We have not yet received any information as to when our other systems will be rebooted -- we will keep you advised as we learn more.
Another year is almost behind us and I thought it would be useful to take a look at what we have accomplished up to this point.
For those who may be new-ish here, SoylentNews went live on 2014-02-17. Since then, we have:
All of this was provided with absolutely no advertising by a purely volunteer staff!
Please accept my sincere thanks to all of you who have subscribed and helped to keep the site up and running! We could not have done it without your support.
I must also report that we have just over 100 people who have accessed the site in the past month whose subscriptions have lapsed. It is easy enough to do -- I've let it happen, myself. So, please go to the subscription page to check/renew your subscription. Be aware that the preferred amount is the minimum for the selected duration; feel free to increase the amount (hint hint).
Oh, and I would be remiss in not thanking the staff here for their dedication and perseverance. Linode decided to open a new data center and we had to migrate our servers to the new location. We accomplished this with almost no downtime on the site, and only about a 30-minute hiccup on our IRC (Internet Relay Chat) server.
Because of performance degradation on our servers when loading highly-commented stories, we rolled out a new comment display system early in the year. It had several issues at the outset, but seems to have settled down quite nicely. We appreciate your patience and your constructive feedback in reporting issues as they arose; it helped greatly in stomping out those bugs.
We have a bug-fix update to the site in the works... mostly minor things that are waiting on testing for release. We hope to roll those out in the next couple of weeks.
To all of you who have contributed to the site, in other words: to our community, thank-you! It has been a privilege to serve you this past year and I look forward to continuing to do so in the year to come. --martyb
So, among the many nifty presents I brought back from my holidays with the family was a case of the black plague. Or possibly a cold. Either way, I don't want to be deploying new code on the production servers while my thinking's impaired*, so we're pushing the site update we'd planned for this weekend back another week**. That's all. Enjoy the last bits of 2017.
**or two. [martyb here. I've not had the time to finish testing the changes and am recovering from overload-mode at work and the start of a cold, as well.]
Seems we've hit one of those rare days where circumstances have conspired to keep all the eds busy enough that the story queue ran dry, so I figured I'd go ahead and tell you lot about the upcoming December site update just so they have a little less time to fill now that a couple of them have appeared and started refilling the queue.
It's mostly just a minor bug-fix update: stuff I could get coded quickly and that didn't require the extensive testing martyb doesn't have much time for right now, so don't get your hopes too high. Most of the good stuff is currently slated for spring of next year. Here's what we've currently got up on dev for testing, with an expected release date between Christmas and New Year's Eve:
Like I said, not a whole lot there on account of us not having time to thoroughly test much what with the holidays coming up. Here's the list (using tinyurl due to a Rehash bug that will definitely make the Spring 2018 cut) of what we'd like to get done for this spring if you're curious though.
Them of you of the disposition to celebrate Christmas, have a merry one. Them of you who celebrate otherwise, happy whatever you're celebrating. Them of you who don't celebrate at all, have a happy rest of December.
[Ed note: I nudged the time this story was due to be released by just a few minutes so that it coincided with the beginning of the Winter Solstice in the northern hemisphere... and the Summer Solstice for those of you who are south of the equator. Maybe we can get the next release out at the start of the equinox (2018-03-20 @ 16:15:00 UTC). --martyb]
So, apparently around November 5th we stopped posting to Twitter. We didn't find out until around the end of that month, and when we did, nobody had the time and/or ability to look into why until this past week.
Now how we get our headlines over to Twitter is overly complicated and, frankly, idiotic. It's done by one of our IRC bots pulling headlines from the RSS feed and posting them on Twitter as @SoylentNews. The bot was written back in 2014 with hand-rolled (as opposed to installed via package manager) Python libraries and hasn't been updated since. This was breakage that should absolutely have been expected to happen. Twitter's penchant for arbitrarily changing their unversioned API means you either keep on top of changes or expect things to break for no apparent reason.
Here's the question: do we even care? We can either find someone who's willing to rewrite the bot to a new Twitter library, do it the sane way as either a cron or slashd job, or just say to hell with it since we only have two hundred or so followers on Twitter anyway. What say you, folks?
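For the curious, the "sane way" boils down to a periodic job that pulls the RSS feed, diffs it against what has already been posted, and tweets anything new. Here is a minimal sketch of that approach; the `post_tweet` callable is a hypothetical stand-in for whatever Twitter library gets chosen, and only the feed-parsing and dedup logic is shown for real:

```python
import xml.etree.ElementTree as ET

def new_headlines(rss_xml, seen_guids):
    """Parse an RSS feed and return (guid, title, link) for items not yet posted."""
    root = ET.fromstring(rss_xml)
    fresh = []
    for item in root.iter("item"):
        # Prefer the item's GUID; fall back to its link for dedup purposes.
        guid = item.findtext("guid") or item.findtext("link")
        if guid and guid not in seen_guids:
            fresh.append((guid, item.findtext("title"), item.findtext("link")))
    return fresh

def run_once(fetch_feed, post_tweet, seen_guids):
    """One cron pass: fetch the feed, tweet anything new, record what was posted."""
    for guid, title, link in new_headlines(fetch_feed(), seen_guids):
        post_tweet(f"{title} {link}")
        seen_guids.add(guid)
```

Persisting `seen_guids` to disk between runs and wiring `post_tweet` to an actual Twitter client is all that's left; the point is that a cron or slashd job like this has no long-lived state to rot, unlike the IRC bot.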
[TMB Note]: Twitter's who-to-follow algorithms really impressed me this morning when I logged in to manually post this story. How did they know we were all huge @JustinBieber and @BarackObama fans?
[Update]: We're again annoying Twitter users by spreading relative intelligence across their platform of choice. Credit goes to Crash for wisely pointing out that we don't have to code everything ourselves.
We've discovered over the weekend that soylentnews.org was failing to resolve with some DNSSEC enabled resolvers. After debugging and manually checking our setup, the problem appears to be occurring due to an issue with the Linode DNS servers when accessed over IPv6. As such, some users may experience slow waiting times due to these DNS issues. I have filed a ticket with Linode about this, and will keep you guys up to date.
73 de NCommander
Nearly two months ago, we received notice from Linode (which hosts the servers for SoylentNews) that they would be migrating our servers to a new data center in Dallas, TX. Our systems would gradually be scheduled for migration. We could either accept their scheduled date/time or trigger a manual migration. In theory, this should be a no-worry activity as we have redundancy on almost all of our servers and processes. But in practice, that is not always the case. Rather than take our chances, we were proactive and manually performed migrations as they became possible.
We had a couple of hiccups with one server, but with NCommander, TMB, and PJ on hand (among others), we were able to get that one straightened out with only limited impact to the site. We also lost access to our IRC server for about 20 minutes when that server was migrated.
So, with that backdrop, I'm pleased to announce that we completed the migration of our last Linode (hydrogen) to the new data center in Dallas this morning! Shoutout to TheMightyBuzzard for tweaking our load balancer to facilitate the migration, and for being on hand had things gone sideways.
As part of Linode's migration of servers to a new Data Center in Dallas, two of our servers were scheduled for migration at 10pm EDT on September 29, 2017. NCommander happened to be around when I sent out a reminder I'd received from Linode, so he 'hit the button' at 9:30pm tonight (Sept. 28) and did a manual migration ahead of time.
Unless you were on our IRC server (Internet Relay Chat) at the time, you probably didn't even notice... and even then, it was unavailable for only about 15-20 minutes. Redundancy for the win!
That leaves us with a single server, sodium, to migrate. It is currently scheduled for migration on Tuesday, October 3, 2017 at 10:00pm EDT. Since sodium is one of two front-end proxies for us (the other is magnesium which has already been migrated), I expect we'll be able to perform that migration without any site interruption.
Separately, and in parallel, we are slowly moving our servers from Ubuntu 14.04 LTS to Gentoo.
To the community, thank you for your patience as we work our way through this process. And, for those of you who may have been with us from the outset, and when up-time was measured in hours, please join me in congratulating the team for their dedication and hard work which has facilitated such an uneventful migration!
Just a quick heads up to the SN community. As we previously announced, Linode is migrating customers to a new data center. We already did the first stage of migration with most of the production servers two weeks ago. Now we're working our way through the remainder of the servers. As of this writing, we've migrated both webservers, both DB servers, our development server, and the fallback load balancer.
Tonight at approximately midnight EDT, we're going to migrate beryllium, which hosts our IRC server, wiki, and mail server, and boron, which is our redundant KDC/internal DNS server. During this process, IRC and email from SoylentNews will be unavailable. The site itself will stay up during this process.
After this migration, we'll only have our primary load balancer to migrate, which we will likely do over the weekend. Thank you all for your understanding.
[Update 1]: Fluorine (the web front end) has been back in the rotation since last night and we'll be checking on and bringing up Neon (the db node) tonight. Cross your fingers because if we can't get Neon up and happy by Friday 10:00 PM EDT[*], we'll have to temporarily down the site and copy the db over to our dev server to even keep the site online until we can get a db node back up.
[Update 2]: NC: I successfully CPRed neon, and was able to bring the DB cluster back into sync. I've stopped helium's database services so we're running on neon only now, and getting ready to migrate it after installing updates and such. With luck nothing blows up.
[Update 3]: Nothing blew up. All should be copacetic except for needing to update Neon tomorrow sometime.
* That's the deadline they've given us to move Helium (our other db node) over to the Dallas 2 facility, or they'll do it automatically themselves.
As most of you are already aware, Linode is our web hosting provider. A recent email from them informed us:
We recently announced our new Dallas 2 facility. Over the coming months, we'll be migrating all Linodes to this new, state-of-the-art facility. We're reaching out to let you know your Linode has been entered into a migration queue to move from Dallas 1 to Dallas 2.
We were informed in a separate email that the neon and helium servers were scheduled for an automatic migration. Manual migration was possible, if preferred. That's no big deal as we have redundancy on those servers. The site should continue functioning without a hiccup.
About an hour ago, we received another email saying that fluorine (one of our two web front ends) was also scheduled for migration. That one is a bit more interesting as that server also runs ipnd [1] and slashd [2] — daemons for which we have no redundancy.
Well, NCommander, TheMightyBuzzard and I happened to be on IRC at the same time as the fluorine migration notice arrived. No time like the present! So fluorine has been migrated. While we were at it, why not migrate neon, too? About 10 minutes later, that was completed as well. We discussed whether to migrate helium too, but decided to hold off.
We did not anticipate any problems... but we found some pages loading slowly and occasionally got 403 and 503 errors. Communication between the data centers is slower than what we had within a single data center. Thanks to redundancy, the site keeps running even without every server in rotation, but it would definitely be best not to run in this configuration indefinitely.
The current state of the world? "one web frontend and one db node are shitting themselves. we're limping along on one of each but with backups in case of emergency." and... "fluorine is technically up but not in the rotation for serving up pages. it's just doing slashd and ipnd."
Hat tip to NCommander and TheMightyBuzzard -- I really enjoy watching these guys in action -- they know their stuff and we are truly fortunate to have them volunteer on SoylentNews.
[1] Instant Payments Notification Daemon
[2] The daemon that makes it all work
Welcome, new trolls! We're pleased as punch to have you aboard. Unfortunately, as you may have noticed, our moderators are unable to give you the moderations you've been working so hard for. Since we can't really do much about people not moderating more, we're going to be giving out more points so that the ones who do can give you the attention you so desperately crave.
Moderators: Starting a little after midnight UTC tonight, everyone will be getting ten points a day instead of five. The threshold for a mod-bomb, however, is going to remain at five. This change is not so you can pursue an agenda against registered users more effectively but so we can collectively handle the rather large uptick in anonymous trolling recently while still being able to have points remaining for upmodding quality comments. This is not an invitation to go wild downmodding; it's helping you to be able to stick to the "concentrate more on upmodding than downmodding" bit of the guidelines.
Also, this is not a heavily thought-out or permanent change. It is a quick, dirty adjustment that will be reviewed, tweaked, and likely changed before year's end. Questions? Comments?
This is a meta post concerning SoylentNews' background, finances, operations, staffing, and story scheduling, with a conclusion. If this is not your cup-of-coffee++ (or tea, etc.), then please ignore this story — another will appear shortly.
In February of 2014, a group of ticked-off Slashdot users got together, said "Fuck Beta!", and launched an alternative web site focused on the community. It started with an out-of-date and unmaintained open source version of slashcode which was promptly forked and renamed 'Rehash'. We incorporated as a Public Benefit Corporation. We experienced site outages, questions of leadership, and faced predictions of failure. Thanks to persistence, dedication, many late nights (and some very early mornings), we persevered and are still here today.
SoylentNews is a place for people to engage in discussions about topics of interest to the community. Not all topics are of interest to everyone, of course. In large part it is up to the community to submit stories — the large majority of these do get accepted to the main page. This is all the more important during the "silly season" — summer in the northern hemisphere — when many people are on vacation and fewer scholarly articles are published.
We are still an all-volunteer organization. Nobody here has made a profit off this site. In fact, SoylentNews is still in debt to the founders who put up the funds required to get us up and running. I am happy to report that we have finally made enough progress that some payback to the founders may be possible.
Here are the unaudited numbers from site subscriptions for the first half of our fiscal year (2017-01-01 through 2017-06-30):
Base goal: $3000
Stretch goal: $2000
Subscription count: 133
Gross subscription income : $3795
Net subscription income: $3645 (estimated - after payment processor fees)
Net over goal: $645
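As a quick sanity check, the figures above reconcile; note the $150 fee figure is implied by the difference between gross and net rather than stated explicitly in the source:

```python
gross = 3795        # gross subscription income, USD
fees = 150          # payment processor fees (implied: gross minus reported net)
base_goal = 3000

net = gross - fees
assert net == 3645              # matches the reported net subscription income
assert net - base_goal == 645   # matches the reported "Net over goal"
```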
So, thanks to all you Soylentils who have donated, we have a surplus at the moment. The ultimate decision is up to the Board of Directors, but the current sense is that we should build a prudent reserve of some months' operating expenses before paying back the founders. In light of the foregoing, we are aiming for the same fundraising goals for the second half of the year... $3,000 base and $2,000 stretch goals. More in line with business norms, however, these are now being presented in the "Site News" box as quarterly goals: $1,500 base and $1,000 stretch goals, respectively.
We've been forthright and upfront right from the start and it is our continued commitment to keep you informed of any issues in the site's operations.
To wit, we recently received a notice from our web-hosting provider, Linode, that one of our servers had been reported as having been added to a spam-blocking list. Staff immediately responded and found a misconfiguration in our link-shortening service. (It was only supposed to shorten links originating on Soylentnews.org, but was accepting links for other domains, as well.) A dump of the database was taken, non-SN sites were purged, the shortening service was updated to correctly implement the restriction to only shorten links from soylentnews.org, and Linode was informed of these actions.
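The fix amounts to an allowlist check before accepting a URL for shortening. The actual service's code isn't shown here, so this is only an illustrative sketch with a hypothetical `is_shortenable()` helper:

```python
from urllib.parse import urlparse

# Illustrative allowlist; the real service presumably covers all SN hostnames.
ALLOWED_HOSTS = {"soylentnews.org", "www.soylentnews.org"}

def is_shortenable(url):
    """Accept a URL for shortening only if it points at an allowed domain."""
    host = (urlparse(url).hostname or "").lower()
    return host in ALLOWED_HOSTS
```

With a check like this in front of the shortener, links for arbitrary domains are rejected instead of silently turning the service into an open redirector for spammers.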
We also recently experienced a problem with our slashd daemon which, among many other tasks, hands out moderation points each night. This fell over on us for a couple of nights, leading to our handing out mod points manually to all users. This seems to have been rectified — please let us know if you see a recurrence.
Lastly, one of the senior editorial staff has been on hiatus to deal with major illnesses in his family. His dedicated efforts in helping them have brought ill health upon himself, as well. I ask you to keep janrinok and his family in your thoughts and, if you are of a mind to do so, in your prayers.
There have been discussions in the past as to how we should best handle circumstances when there is a dearth of acceptable stories in the queue. Do we post something marginal just to fill the time, or should we hold out and publish only when we have enough suitable material? Past efforts and comments have suggested the majority prefer we avoid posting stories just to fill time slots. In short: quality over quantity. Further, staff cannot work 24/7/365 without a break. We all need a break sometimes, and summer is a good time to take one. In other words, we have been running with reduced staffing for the past couple of months and will continue to do so for the next few months as well.
The result? Over the past month or so, we have experimented with further spacing out stories on holidays (Independence Day in the USA) and on weekends. Instead of the usual cadence of a story appearing every 90 minutes or so, we have tried slowing to posting a story every 2 hours or even every 2.5 hours.
My perception is that this has worked okay. At least I have not noticed any complaints in the comments. It could well be that I have missed something, too. So I put this question to the community: How has the story spacing been working out?
Please keep those story submissions coming, please continue to subscribe (you can offer more than the minimum suggested amount), and — most importantly — please keep reading and commenting! Discussion is