from the hoping-not-to-break-the-site-again-in-2015 dept.
There are two things we've always strived to do well here: (1) listen to community feedback and adjust our plans and processes based on that input, and (2) communicate changes to the community. We're working our way through a rather painful upgrade from slash to rehash, in which we've gone through four point releases just to get mostly back to where we started. A lot of folks questioned the necessity of such a large-scale change, and while we posted changelogs summarizing each release, I'd like to provide a broad picture of what went wrong and how we're going to fix it.
Dissecting The Rehash Upgrade
- Necessity of the Upgrade
- Improving Documentation
- Database Upgrade Framework
- Unit Testing the Site
- In Closing...
Check past the break for more.
Necessity of the Upgrade
Rehash was by far the largest and most invasive upgrade the site has seen, requiring modifications to nearly every component. To understand what went wrong, you need the full context of the rehash upgrade, so I apologize in advance if this is a bit wordy; much of this information appeared in previous upgrade posts and comments, but I want to put it all in one canonical spot for ease of reference.
For those unfamiliar with mod_perl, the original Slash codebase was written against mod_perl 1, which in turn was tied to Apache 1.3 — which, by the time we launched this site, had been unsupported for years. It was known from day one that if we were serious about both continuing with the legacy Slash codebase and keeping SoylentNews itself going, this upgrade was going to have to happen. Not if, but when.
As a stopgap, I wrote and applied AppArmor profiles to try and mitigate any potential damage, but we were in the web equivalent of running Windows XP: forever vulnerable to zero-day exploits. With that stopgap in place, our first priorities were to improve site usability, sort out moderation (which of course is always a work in progress), and continue the endless tweaks and changes we've made since go-live. During this time, multiple members of the dev team (myself included) tried to do a quick and dirty port, with no success. The API changes in mod_perl 2 were extensive; besides the obvious API calls, many of the data structures changed, and even environment flags were different. In practice, every place the codebase touched $r (the mod_perl request object) had to be modified, and in other places logic had to be reworked to handle changes in behavior, such as this rather notable example of how form variables are handled. Finally, after an intensive weekend of hacking and pain, I managed to get index.pl to render properly under Apache 2, but it became clear that due to very limited backwards compatibility the port was all-or-nothing; there was no way to do the upgrade piecemeal.
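To give a flavor of what that meant, here is a rough sketch of the same trivial handler written both ways. This is illustrative only, not an excerpt from the rehash diff, but it's the kind of change that had to be repeated everywhere the code touched $r:

    # mod_perl 1 / Apache 1.3 style:
    #   use Apache;
    #   use Apache::Constants qw(OK);
    #   sub handler {
    #       my $r = Apache->request;
    #       $r->content_type('text/html');
    #       $r->send_http_header;
    #       $r->header_out('X-Example' => 'rehash');
    #       $r->print("hello\n");
    #       return OK;
    #   }

    # The equivalent under mod_perl 2 / Apache 2.x:
    use Apache2::RequestRec ();              # $r->content_type and friends
    use Apache2::RequestIO ();               # $r->print
    use APR::Table ();                       # headers_out is now an APR::Table
    use Apache2::Const -compile => qw(OK);   # constants moved and must be compiled

    sub handler {
        my $r = shift;                                  # no more Apache->request
        $r->content_type('text/html');                  # send_http_header() is gone
        $r->headers_out->set('X-Example' => 'rehash');  # header_out() is gone
        $r->print("hello\n");
        return Apache2::Const::OK;
    }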
Furthermore, over time we'd noticed other problematic aspects. As coded, Slash used MySQL replication for performance, but had no support for multi-master or failover. This was compounded by the fact that if the database went down, even momentarily, the entire stack would hang; apache and slashd had to be manually kill -9'd and restarted for the site to be usable. It was further complicated by the fact that, in practice, MySQL replication leaves a lot to be desired; there's no consistency check to confirm that the master and slave hold 1:1 data. Unless the entire frontend was shut down and master->slave replication verified, it was trivial to lose data due to an ill-timed shutdown or crash (and as time has shown, our host, Linode, sometimes has to restart our nodes to apply security updates to their infrastructure). In practice, this meant that failing over the master was a slow and error-prone process, and after failover, replication had to be manually re-established in the reverse direction to bring the former master up to date, then failed back by hand.
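For the curious, "manually re-established in the reverse direction" meant roughly the following dance, sketched here with made-up host names, credentials, and binlog coordinates (the real ones had to be read off the new master by hand):

    use strict;
    use warnings;
    use DBI;

    # Connect to the *former* master (db1) after db2 has been promoted,
    # and point it at db2 as a slave until it catches back up.
    my $dbh = DBI->connect('dbi:mysql:host=db1', 'repl', 'secret',
                           { RaiseError => 1 });

    $dbh->do(q{
        CHANGE MASTER TO
            MASTER_HOST     = 'db2',
            MASTER_USER     = 'repl',
            MASTER_PASSWORD = 'secret',
            MASTER_LOG_FILE = 'mysql-bin.000042',
            MASTER_LOG_POS  = 107
    });
    $dbh->do('START SLAVE');

    # ...then poll SHOW SLAVE STATUS until Seconds_Behind_Master hits zero,
    # and eventually repeat the whole dance the other way round to fail back.
    my $status = $dbh->selectrow_hashref('SHOW SLAVE STATUS');
    printf "behind master by %s seconds\n",
           $status->{Seconds_Behind_Master} // 'n/a';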
While MySQL 5.6 implemented GTID-based replication to remove some of the pain, it still wasn't a workable solution for us. Although it is possible to get multi-master replication in vanilla MySQL, it would require serious tweaks to how AUTO_INCREMENT works in the codebase (see the footnote after the list below), and it violates what little ACID compliance MySQL has. As an aside, I found out that the other site uses this form of multi-master replication. For any discussion-based website, the database is the most mission-critical piece of infrastructure. Thus, rehash's genesis had two explicit goals attached to it:
- Update The Underlying Software Stack
- Find A Solution To The MySQL Redundancy Problem
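A footnote on the AUTO_INCREMENT point above: the usual way to fake multi-master in vanilla MySQL is to interleave the id space across the masters, which is exactly the kind of thing the codebase would need auditing for, since anything that assumes dense, sequential ids breaks. A sketch, with placeholder hosts and credentials:

    use strict;
    use warnings;
    use DBI;

    # Node A of a two-master pair hands out odd ids, node B even ones, so
    # inserts on the two masters can never collide on an AUTO_INCREMENT key.
    my $node_a = DBI->connect('dbi:mysql:host=db1', 'root', 'secret',
                              { RaiseError => 1 });
    $node_a->do('SET GLOBAL auto_increment_increment = 2');
    $node_a->do('SET GLOBAL auto_increment_offset    = 1');   # db2 would use 2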
The MySQL redundancy problem proved the thornier of the two, and short of porting the site wholesale to another DB engine, the only solution I could find that would keep our database ACID compliant was MySQL Cluster. In a cluster configuration, the data itself is stored by a backend daemon known as ndbd, and instances of mysqld act as frontends to the underlying NDB datastore, which in turn keeps everything consistent. Unfortunately, MySQL Cluster is not 100% compatible with vanilla MySQL: FULLTEXT indexes aren't supported under cluster, and some types of queries involving JOINs take considerably longer to execute if they cannot be parallelized.
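To make the trade-off concrete, here's roughly what moving a table into the cluster looks like, and where the search engine fell over (sketched with a made-up table name and DSN):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=soylent_dev', 'slash', 'secret',
                           { RaiseError => 1 });

    # Moving a table into the cluster is just a storage-engine change; the data
    # then lives in ndbd, and every mysqld frontend sees the same copy of it.
    $dbh->do('ALTER TABLE example_comments ENGINE=NDBCLUSTER');

    # ...but this is where the search engine broke: FULLTEXT indexes are a
    # MyISAM/InnoDB feature and simply aren't supported by NDB, so a statement
    # like the following fails once the table is under cluster.
    # $dbh->do('CREATE FULLTEXT INDEX ft_comment ON example_comments (comment)');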
As an experiment, I initialized a new cluster instance and moved the development database onto it to see if the idea was even practical. Surprisingly, with the exception of the site search engine, everything appeared at first glance to be more or less functional under cluster. This gave us the basis for the site upgrade.
One limitation of our dev environment is that we do not have the infrastructure to properly load test changes before deployment. We knew there were going to be bugs and issues with the site upgrade, but we were also getting to the point that if we didn't deploy rehash soon, there was a good chance we wouldn't do it at all. I subscribe to the notion of release early, release often. We believed the site would be mostly functional post-upgrade and that any issues encountered would be relatively minor. Unfortunately, we were wrong, and it took four site upgrades to get back to normality, which entailed rewriting many queries, debugging live on production, and a fair amount of luck. All things considered, not a great situation to be in.
Because of this, I want to work out a fundamental plan to prevent a repeat of such a painful upgrade, and to keep the site from destabilizing even if we make additional large-scale changes.
Improving Documentation
On the most basic level, good documentation goes a long way towards keeping things both maintainable and usable. Unfortunately, a large part of the technical documentation on the site is over 14 years old. As such, I've made an effort to go through and bring the PODs up to date, including, but not limited to, a revised README, updated INSTALL instructions, notes on some of the quirkier parts of the site, and so forth. While I don't expect a huge uptake of people running rehash for their personal sites, making it easy to run a development instance will hopefully increase the amount of drive-by contribution. As of right now, I've implemented a "make build-environment" target which automatically downloads and installs local copies of Apache, Perl, and mod_perl, plus all the CPAN modules required by rehash. This makes it easier both to update the site and to get security fixes from upstream rolled in.
With luck, we can get to the point where someone can simply read the docs and have a full understanding of how rehash fits together; as always, understanding is the first step towards succeeding at anything.
Database Upgrade Framework
One thing that came out of the aftermath of the rehash upgrade is that the underlying schema and configuration tables on production and dev have drifted apart. The reason is fairly obvious: our method of doing database upgrades is crude at best. rehash has no automated way of updating the database; instead, the queries to be executed are written to sql/mysql/upgrades and run by hand during a site upgrade, while the initial schema is updated separately for new installs. The practical end result is that the installation scripts, the dev database, and production all have slightly different layouts due to human error. Wherever possible, we should limit the amount of manual effort required to manage and administer SoylentNews. If anyone knows of a good pre-existing framework we can use for database upgrades, I'm all ears. Otherwise, I'll be looking at building one from scratch and integrating it into our development cycle.
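To make that concrete, what I have in mind is something along the lines of the sketch below; the schema_migrations table, the DSN, and the naive statement splitting are placeholders for illustration, and the real thing would also need to handle failures and Perl-based data migrations:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=soylent', 'slash', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    # Record which upgrade scripts have already been applied, so dev and
    # production can no longer silently drift apart.
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS schema_migrations (
            filename   VARCHAR(255) NOT NULL PRIMARY KEY,
            applied_at DATETIME     NOT NULL
        )
    });

    my %applied = map { $_->[0] => 1 }
        @{ $dbh->selectall_arrayref('SELECT filename FROM schema_migrations') };

    for my $file (sort glob 'sql/mysql/upgrades/*.sql') {
        next if $applied{$file};
        my $sql = do { local $/; open my $fh, '<', $file or die "$file: $!"; <$fh> };
        $dbh->do($_) for grep { /\S/ } split /;\s*\n/, $sql;   # naive splitter
        $dbh->do('INSERT INTO schema_migrations VALUES (?, NOW())', undef, $file);
        print "applied $file\n";
    }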
Unit Testing the Site
For anyone who has worked on a large project before, unit testing can be a developer's best friend. It lets you know that your API is doing what you want and behaving as expected. In normal circumstances, unit testing a web application ranges from difficult to impossible, because much of the logic isn't exposed in a way that makes testing easy, requiring tools like Windmill to simulate page inputs and outputs. Based on previous projects, I'd normally say that represents more effort than it's worth, since you frequently have to update the tests even for minor UI changes. In our case, we have a more realistic option. A quirk of rehash's heritage is that approximately 95% of it lives in global Perl modules that are installed either in the site_perl directory or in the plugins/ directory. Because of this, rehash strongly adheres to the Model-View-Controller design.
That gives us a clear and (partially) documented API to code against, which lets us write simple tests that check the returned data structures instead of trying to parse HTML to decide whether something is good or bad. Such a test suite would have made porting the site to mod_perl 2 much simpler, and it will come in handy if we ever change database engines or operating system platforms. I've therefore made it a high priority to at least get the core libraries covered by unit tests, to ensure consistent behavior through any updates we make. This will be a considerable amount of effort, but I strongly suspect it will reduce our QA workload and make our upgrades close to a non-event.
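As a very first cut, even something as small as the sketch below (run with prove) would have caught the worst of the mod_perl 2 breakage early. The commented-out assertions are illustrative placeholders, not necessarily the real API:

    # t/00-libraries.t
    use strict;
    use warnings;
    use Test::More;

    # Step one: the core libraries still compile at all under whatever stack
    # (Perl, mod_perl, DB driver) we are currently targeting.
    use_ok($_) for qw(Slash Slash::Utility Slash::DB);

    # Step two, the interesting part: call library functions directly and
    # assert on the data structures they return, rather than parsing HTML.
    # For example, something along these lines (placeholder names/signatures):
    #
    #   my $db = getObject('Slash::DB');
    #   isa_ok($db, 'Slash::DB');
    #   is(ref $db->getStory($some_sid), 'HASH', 'stories come back as hashrefs');

    done_testing();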
In Closing...
The rehash upgrade was a wake-up call that we need to improve our processes and methodology, as well as automate aspects of the upgrade process. Even though we're all volunteers, operating on a best-effort basis, destabilizing the site for a week is not something I personally consider acceptable, and I accept full responsibility, since I was the one who both pushed for the upgrade and deployed it to production. As a community, you've been incredibly tolerant, but I have no desire to test your collective patience. Accordingly, our next development cycle will be very low key as we build the systems outlined in this document and further tighten up and polish rehash. To all: keep being awesome, and we'll do our best to meet your expectations.
~ NCommander
(Score: 5, Informative) by Anonymous Coward on Friday June 19 2015, @01:00PM
Keep up the good work; minor disruptions of the site are nothing.
(Score: 3, Touché) by NCommander on Friday June 19 2015, @01:15PM
I appreciate the sentiment, but I'm something of a perfectionist. Even if I wasn't, having the site 500ing on a regular basis for a week was not my idea of a successful upgrade. What's done is done, though, and I can only move forward and fix it.
Still always moving
(Score: 5, Insightful) by WizardFusion on Friday June 19 2015, @01:38PM
I too am a perfectionist, but sometimes you just have to say "fuck it". It is impossible to be perfect 100% of the time - I am close :)
For me, I only use the site when I'm at work with a spare 10 minutes here or there. I didn't see any real issues with the upgrade; what I did see was only small cosmetic stuff.
Upgrades are hard. Testing for them is also hard. You are all volunteers and we do appreciate all you guys (and gals) do for this site. Don't beat yourself up about it, learn from your mistakes.
"There are no mistakes, just happy little accidents."
(Score: 4, Informative) by edIII on Friday June 19 2015, @08:02PM
I feel the same way, but my issues with the site are quite rare. All I've ever noticed is occasionally a "missed connection" when posting my comment, but the back button always takes me right back and I've never even lost my comment.
Other than that small hiccup, I've not noticed much of anything to complain about. So yeah, try to run it correctly and passionately, but also know that most of your users are happy. At least I think so.
Technically, lunchtime is at any moment. It's just a wave function.
(Score: 4, Insightful) by kbahey on Friday June 19 2015, @10:43PM
I fully agree. Don't be too hard on yourselves. The site has been superb. Some glitches every now and then are no big deal.
Thank you for the transparency. It's valuable, especially when Slashdot is removing features (the comment link) and replacing them with social media stuff, despite the users' uproar.
2bits.com, Inc: Drupal, WordPress, and LAMP performance tuning [2bits.com].
(Score: 4, Interesting) by goodie on Friday June 19 2015, @01:15PM
My suggestion is part human, part automation. The reason is that DBs are not built into binaries. But then again, it's often the same in web dev: if you don't commit a file, 95% of the site might work until you click a link that needed that resource... The approach proposed here has been used for more than 15 years where I used to work. It's simple, reliable, and only requires a little discipline.
There is however one golden rule to this: under no circumstances are you allowed to modify and re-commit a script, unless it has a syntax error or something. For example: you added a column to Table A 10 minutes ago and committed a file; now you add another column to Table A, so you commit a new file. You DO NOT RE-COMMIT THE ORIGINAL FILE. This is because, in the meantime, other people, build systems etc. may have pulled your code. Perhaps with git this is less of an issue, but with CVS and SVN it was a policy we put in place due to infrequent branching.
- For the human part: Every change in the DB should be committed as a sequential sql file. 001, 002, etc. Basically to get a current version of the DB, you run a series of scripts. Yes this may take longer than having a single "clean db" script for every release but then again, SN is in constant evolution and devs use past backups. Now if somebody forgets to commit their file, obviously there is a problem. But the rule is that you don't modify your DB using client apps. You write SQL files and when you're happy with it, you commit it. Whatever source control app you use will show you an uncommitted file. If you still forget to commit it, then there isn't much to do.
- For the automation part: In your DB, have a system table that has a list of scripts that have already been executed (001, 002...) with dates, user info etc for basic auditing purposes. During an upgrade, write a little script that does the following: backup; run every script in the repo that is newer than the last script in the maintenance table. In case of an error, restore or send mail etc. After that, you could even have a compare script to see whether the prod resembles a dev state that is known to be good.
Interestingly, this approach is very simple but many companies we dealt with were still trying to figure out how to package DB upgrades like it's supposed to be magical or something... The nice thing with this approach is that your DB scripts are tagged and versioned much like your source code. If it ends up representing a large number of SQL files, at a certain version, you may restart at 001 and add up all the scripts into 1 large SQL script. But to be honest, I'd doubt that this would be an issue for SN.
(Score: 2) by NCommander on Friday June 19 2015, @01:25PM
This is pretty close to what I was thinking of implementing. The one slight headache is that we might need to execute Perl to migrate stuff if we change a data format (Paul has been chomping at the bit to redesign aspects of the database). I just wish MySQL would let you roll back an ALTER TABLE in a transaction.
Still always moving
(Score: 1, Redundant) by tibman on Friday June 19 2015, @02:36PM
I've yet to use a database that allowed transactional schema changes : /
SN won't survive on lurkers alone. Write comments.
(Score: 2) by NCommander on Friday June 19 2015, @03:47PM
And people think I'm mad when I want to port the site to PostgreSQL.
Still always moving
(Score: 2) by goodie on Friday June 19 2015, @06:46PM
Mmhhh, I may be wrong here, but DDL is auto-commit; there is no rollback possible in pretty much any DBMS. But look at it this way: instead of a rollback you just do a restore, debug, and redo the whole DB upgrade process. Ideally you would have an environment where you could test a full upgrade beforehand.
(Score: 2) by tibman on Friday June 19 2015, @07:01PM
Restoring the DB is heavy handed but almost every "it failed" arrow on the deployment flowchart points to it. Unfortunately.
I'm sure someone could chime in on why very few databases allow it. But since user tables are just records in a master table it seems strange that you can't put a transaction on the data that represents your table changes.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by goodie on Friday June 19 2015, @07:34PM
That would be my advice too. Partial rollbacks etc. are a pain to debug. When a problem arises you probably want to restore, so that eventually you can have a process that is 100% successful. But if you have an environment where you can do a dump of prod, upgrade it, and run unit tests, that may already improve your chances of not having any issues during the real deal.
(Score: 3, Informative) by choose another one on Friday June 19 2015, @07:38PM
SQL Server will roll back DDL within a transaction - this confuses some people, see e.g.: http://www.sqlservercentral.com/Forums/Topic1071141-392-1.aspx [sqlservercentral.com]
It is Snapshot Isolation that causes problems (if you have it turned on), because metadata is not versioned, so you can't have one process reading one version of a table and another (in an as-yet-uncommitted transaction) reading a different version. So you can't use some DDL within transactions under snapshot isolation - that is documented somewhere.
Oracle supports it too according to this page: https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis [postgresql.org]
(Score: 2) by goodie on Friday June 19 2015, @08:20PM
I'm gonna have to try this on my mssql setup at home and specify the isolation level.
More generally, I think the idea here is that the upgrade should either work 100% or fail entirely and result in a restore. If you have something that fails halfway through and causes a rollback on that transaction, you still have to roll back the other 49% of the stuff, which is the equivalent of a restore. You either want it fully functional or back to square one. I've seen enough stuff upgraded halfway that had to be debugged by hand... it costs so much time and effort to debug...
(Score: 2) by choose another one on Saturday June 20 2015, @01:14PM
We generally went with "any script that is not transactional must be re-entrant" - i.e. if something goes wrong you can (fix it and) try again to complete the upgrade. Mostly we aimed for transactional.
But in the end, a restore is always the final rollback process for a failed upgrade.
(Score: 1, Insightful) by Anonymous Coward on Saturday June 20 2015, @02:26AM
Mmhhh, I may be wrong here, but DDL is auto-commit; there is no rollback possible in pretty much any DBMS.
DDL can be rolled back on just about every commercial quality database. This means PostgreSQL can roll back table changes, but MySQL cannot.
MySQL cannot roll back DDL due to a structural design decision / defect (depending on how you look at it). Per table, you can choose a storage engine; this is a MySQL-specific "feature". Most other databases have just one storage technology. That means every table join in MySQL is equivalent to communicating across databases in other database software. This is why foreign keys suck in MySQL, and why DDL rollback is not possible. In MySQL, DDL has to be visible across the different storage engines, and anything that is not MVCC [postgresql.org] cannot roll back a table change, so table changes have no way back. This is a MySQL-specific hell, and it makes DDL upgrades unnecessarily risky.
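A quick way to see the difference for yourself (sketch; the DSNs and the table are placeholders):

    use strict;
    use warnings;
    use DBI;

    # Against PostgreSQL, DDL takes part in the transaction like anything else:
    my $pg = DBI->connect('dbi:Pg:dbname=test', '', '', { RaiseError => 1 });
    $pg->begin_work;
    $pg->do('ALTER TABLE example_comments ADD COLUMN scratch INTEGER');
    $pg->rollback;    # the column never existed as far as anyone else can tell

    # Against MySQL, the same sequence leaves the column behind: ALTER TABLE
    # performs an implicit commit before and after it runs, so the rollback()
    # is effectively a no-op for the schema change.
    my $my = DBI->connect('dbi:mysql:database=test', 'user', 'secret',
                          { RaiseError => 1 });
    $my->begin_work;
    $my->do('ALTER TABLE example_comments ADD COLUMN scratch INTEGER');
    $my->rollback;    # too late -- the ALTER was already committed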
(Score: 2) by goodie on Saturday June 20 2015, @12:13PM
Cool, I was under the impression that it was not possible (and that's after years of doing MSSQL work, so I feel kinda ashamed here, thanks for that ;). Back in 2000 I could have sworn this was not doable. The isolation level must be selected properly, though, like for other types of transactions anyway. And the default/custom settings in tools like SSMS must then be selected accordingly too. But thanks for the tip, I feel a little less stupid now :).
I think, though, that the main point of the DB upgrade (and it certainly is the way I've experienced things in the past) is that overall, if something fails halfway through, you just want to restart the process, not try to revert or pick up where it failed. This is especially true if the upgrade is somewhat complex (e.g., lots of changes). Same goes with source code: if the upgrade fails, you want to redo everything and not try to figure out which files worked out and which ones did not.
The other reason is that depending on your data files and logging options you may see your log grow substantially during the upgrade. Doing a rollback to restart the process will just take a long time for nothing. At that point, restore is a better option.
(Score: 0) by Anonymous Coward on Friday June 19 2015, @09:05PM
It seems like a lot of people deploying virtualized servers forget about tools like LVM. It's still useful, even inside virtual machines. I don't deal with databases much, but using LVM snapshots inside VMs has saved me some headaches.
If, during DB upgrades, the site can function with a single DB master running with no slaves, why not have the DB master use a layer of LVM for its storage? Then the upgrade might become:
Stop all DB replication and slaves
Temporarily stop the master DB (just so all its file system data is consistent)
Snapshot the DB file system
Restart the master DB. So far this process should only take seconds.
Perform all DB upgrades/updates
If any piece fails:
- informational logs can be copied off for review
- stop DB
- roll back the snapshot
- restart DB
- die
If everything succeeds, the slaves and replication are re-enabled
Eventually the snapshot is deleted
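In LVM terms that's roughly the following; it's only a sketch, the volume group/LV names and service commands are made up, and the snapshot size depends on how much the upgrade churns:

    use strict;
    use warnings;

    # Take the snapshot while mysqld is briefly stopped so the on-disk files
    # are consistent.
    system('systemctl', 'stop', 'mysql') == 0 or die 'stop failed';
    system('lvcreate', '--snapshot', '--size', '10G',
           '--name', 'mysql_pre_upgrade', '/dev/vg0/mysql') == 0
        or die 'snapshot failed';
    system('systemctl', 'start', 'mysql') == 0 or die 'start failed';

    # ...run the DB upgrade scripts here...

    # On failure: stop mysqld, merge the snapshot back into the origin volume
    # (the merge completes when the volume is next activated), and restart:
    #   system('systemctl', 'stop', 'mysql');
    #   system('lvconvert', '--merge', '/dev/vg0/mysql_pre_upgrade');
    #   system('systemctl', 'start', 'mysql');

    # On success, just drop the snapshot:
    system('lvremove', '-f', '/dev/vg0/mysql_pre_upgrade');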
Like I said, I'm a database newbie. But I have used such a process on other types of servers, and it's worked for me.
(Score: 0) by Anonymous Coward on Saturday June 20 2015, @02:16AM
So you are telling us that you have only used MySQL?
(Score: 2) by tibman on Saturday June 20 2015, @05:04AM
MSSQL
SN won't survive on lurkers alone. Write comments.
(Score: 0) by Anonymous Coward on Saturday June 20 2015, @06:49AM
It looks like MS SQL supports DDL rollback, but DDL changes leak into other sessions:
http://stackoverflow.com/questions/1043598/is-it-possible-to-run-multiple-ddl-statements-inside-a-transaction-within-sql-s [stackoverflow.com]
http://stackoverflow.com/questions/7823964/how-to-enable-transaction-for-ddl-on-sql-server [stackoverflow.com]
But you have to be extra mindful of configuration and transaction boundaries to get the useful behavior of rolling back on an error.
(Score: 2) by tibman on Saturday June 20 2015, @08:06AM
Yeah, it is fiddly. Not even usable with some configurations. Being unable to use 'alter table' is a killer.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by Lemming on Tuesday June 23 2015, @07:13AM
Have you looked at DbMaintain [dbmaintain.org]? It's from the Java world, but it might be usable in other contexts. It has Ant and Maven integration, but it can also be used directly from the command line.
(Score: 2) by Runaway1956 on Friday June 19 2015, @01:53PM
Learning from our mistakes? When the wife asks, "Does this outfit make me look fat?" I still answer honestly. One day that cast iron skillet will be the LAST thing I see in this life.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2) by CoolHand on Friday June 19 2015, @02:14PM
Anyone who is capable of getting themselves made President should on no account be allowed to do the job-Douglas Adams
(Score: 1, Touché) by Anonymous Coward on Friday June 19 2015, @02:28PM
Does this outfit make me look fat?
That is someone asking for reassurance of their decision. The proper answer is 'yes you look good in that'. They will then interpret that however they like and go change into what they really wanted to wear.
My wife asks that question when she is experimenting with a 'look'.
To a guy it seems like a stupid question. You either do or do not. To a woman it is 'please verify my experiment here'.
(Score: 2) by mcgrew on Friday June 19 2015, @04:56PM
That is someone asking for reassurance of their decision
Bullshit, it's a bitch looking for an argument who doesn't have a real reason for one. It's a question with no good answer. If she asks that and you're not married, RUN LIKE HELL. Only women who love angry arguments ask that question. Only fools answer it.
Poe's Law [nooze.org] has nothing to do with Edgar Allen Poetry
(Score: 2) by maxwell demon on Friday June 19 2015, @05:45PM
You answer the question if the outfit makes her look fat with "yes" and add that she looks good that way? Uh oh! :-)
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by isostatic on Friday June 19 2015, @09:36PM
You answer the question if the outfit makes her look fat with "yes" and add that she looks good that way? Uh oh! :-)
OP is Ethiopian
(Score: 3, Insightful) by DeathMonkey on Friday June 19 2015, @06:01PM
To a guy it seems like a stupid question. You either do or do not. To a woman it is 'please verify my experiment here'.
Am I the only one here who actually wants his wife to look hot when we go out?
When asked if I like an outfit:
If I like it I answer Yes. If I don't I answer No.
Seems a lot easier that way...
(Score: 3, Insightful) by isostatic on Friday June 19 2015, @09:38PM
Am I the only one here who actually wants his wife to look hot when we go out?
My wife always looks hot. Doesn't matter if she's in the cavegirl outfit from our honeymoon, or if she has babysick in her hair.
(Score: 1) by DutchUncle on Friday June 19 2015, @06:01PM
I finally came up with "That outfit doesn't flatter you", with some detail about how a horizontal element is too high/low on her figure. It helps, of course, if another option she tried on (thinking of purchases here) *did* flatter her, and it's a matter of which is preferable.
(Score: 2) by Anne Nonymous on Friday June 19 2015, @06:02PM
> "Does this outfit make me look fat?"
Oh baby, I just want to peel that dress off of you and drop it on the bedroom floor.
(Score: 2) by choose another one on Friday June 19 2015, @07:42PM
> Oh baby, I just want to peel that dress off of you and drop it on the bedroom floor.
With the lights off...
(Score: 5, Insightful) by mtrycz on Friday June 19 2015, @02:24PM
I sent a donation
In capitalist America, ads view YOU!
(Score: 2) by pkrasimirov on Friday June 19 2015, @02:50PM
Why was this marked as troll?
(Score: 2, Informative) by Anonymous Coward on Friday June 19 2015, @02:26PM
there's no consistency to confirm that both the master and slave have 1:1 data. Unless the entire frontend was shutdown, and master->slave replication was verified,
Uh oh! You better not let any SJWs read your "racist" use of master/slave terms. And no, I shit you not, I'm not joking [djangoproject.com]:
The docs and some tests contain references to a master/slave db configuration.
While this terminology has been used for a long time, those terms may carry racially charged meanings to users.
(Score: 1) by mechanicjay on Friday June 19 2015, @03:13PM
Thanks for that. I'd read about this last year and forgotten. I need to keep this one on my short list of "things wrong with world".
My VMS box beat up your Windows box.
(Score: 0) by Anonymous Coward on Friday June 19 2015, @03:18PM
Yeah, it takes having quite a comfortable life to care about stupid shit like a DB being called "master" or a "slave".
(Score: 0) by Anonymous Coward on Friday June 19 2015, @04:30PM
Just as it also takes quite a comfortable life to care about someone else caring about it.
(Score: 0) by Anonymous Coward on Friday June 19 2015, @05:09PM
Sure, but I'm not some white liberal whining about being oppressed because a computer is being called a slave.
(Score: 1, Insightful) by Anonymous Coward on Friday June 19 2015, @05:55PM
> Sure, but I'm not some white liberal whining about being oppressed because a computer is being called a slave.
Right, you are just a busy-body whining about something that by your own words does not affect you.
(Score: 0) by Anonymous Coward on Friday June 19 2015, @06:14PM
Sure. I already admitted as much. Doesn't change the fact that I will continue to make fun of white liberals who whine that a computer was called a "slave" as if that was some sort of world-changing cause.
(Score: 0) by Anonymous Coward on Friday June 19 2015, @06:41PM
> as if that was some sort of world-changing cause.
Because that's what happened.
(Score: 0) by Anonymous Coward on Friday June 19 2015, @06:54PM
Of course it isn't what happened. They whined about something that only a couple of internet SJWs would ever care about. Not a single black person was offended by a DB being called a "slave" and another one its "master".
(Score: 2) by mcgrew on Friday June 19 2015, @05:00PM
A co-worker was scolded for mentioning reverse Polish Notation once. Ignorance knows no bounds; fuck the thin skinned morons.
Poe's Law [nooze.org] has nothing to do with Edgar Allen Poetry
(Score: 0) by Anonymous Coward on Saturday June 20 2015, @02:28AM
OMG, being Polish myself I rather enjoyed that chapter in CS280. What a bunch of morons on this planet... Reminds me of when my girlfriend was talking on the phone with me from work and we were talking about our cats, and she said "They are such fatties", referring to their appetites. A fat woman overheard her and confronted her about calling people "fatties." The palm was firmly planted in my forehead that day.
(Score: 0) by Anonymous Coward on Saturday June 20 2015, @06:13AM
A purchasing department in Los Angeles County sent out a letter to vendors decrying the master/slave thing.
'Master'/'Slave' Computer Labels Unacceptable [snopes.com]
Several other examples of political correctness run amok on that page.
-- gewg_
(Score: 2, Insightful) by Anonymous Coward on Friday June 19 2015, @03:30PM
django project unchained
(Score: 0) by Anonymous Coward on Friday June 19 2015, @07:28PM
Hm. I was happy about making the transition from /. Then I realized that half of this thread is an idiot or two defending fabricated indignation about a hypothetical group of people. Grrrrr conserv-a-rage!
(Score: 2) by DECbot on Friday June 19 2015, @10:44PM
Hmm... If master/slave is out, can we use terms like pitcher/catcher or top/bottom?
cats~$ sudo chown -R us /home/base
(Score: 3, Insightful) by Anonymous Coward on Saturday June 20 2015, @01:35AM
I have said it before that: [soylentnews.org]
Most sites that use MySQL successfully either heavily patch it, do not mind a small amount of hard to quantify data going missing, or use it as a dumb key value store
The PostgreSQL community does not put up with shit software, and you will notice that multi-master is not in their core database layer. This is because doing it right is difficult and involves application-specific trade-offs (one size does not fit all). A simple one-size-fits-all approach will simply lead to data loss.
This page lists some multi master solutions:
https://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling [postgresql.org]
You will notice that every single one of them requires some kind of trade off. If you think that there is not a trade off, you just have not found it yet (this includes solutions for other database products/projects).
But the best solution may be one that you design for yourself. A complete multi-master solution requires some kind of "multi-master accommodation" within your application's code, and sometimes the best multi-master solution is one that you implement yourself on top of a non-multi-master database.
Like the Internet, you do not get reliability and scalability without smart end points and dumb, completely redundant middle paths. This means you have smart web nodes and smart database nodes. Everything has redundancy, and the web nodes can connect to different database nodes without the permission (but maybe the guidance) of a single coordinator. Within a layer, nothing is shared among the redundant parts (do not use one SAN for all nodes, regardless of its reliability claims; reliability code can have bugs too). This means there is no shared fail-over manager that everything gets routed through. Every layer should support multiple paths. Doing this incorrectly offers a false sense of comfort: you think you are more protected, when in fact you could easily have added more points of failure that are less understood and more difficult to diagnose.
(Score: 2) by goodie on Saturday June 20 2015, @12:25PM
+1 Agree with this. I'm fairly certain this is the same AC who's been posting around this thread :).
When you deal with writing rather than reading, you have, at some point, to send the data somewhere and have it copied/replicated onto other machines automatically. Whether that means 2PC (which can slow down writes), log shipping, etc., you need to consider the trade-offs and decide what best serves you. Something with a hot spare that can be queried for reads is not necessarily a bad idea. If you want true multi-master replication where writes can be sent to either machine, you have to be willing to make some changes to your code as well, as certain data types don't work well, especially for identifiers etc.
Anyway, very interesting conversation here, loving it :)