irc logging bot

Posted by crutchy on Wednesday March 05 2014, @11:09AM (#132)
1 Comment
Code

had a go at scripting a little quick & dirty irc bot for soylent

requires sic (http://tools.suckless.org/sic)
if you're using debian: sudo apt-get install sic

#!/bin/bash

chan="#test"
log="test.log"
pipe="log-pipe"

trap "rm -f $pipe" EXIT

if [[ -f "$log" ]]; then
    rm -f "$log"    # start with a fresh log each run
fi

if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi

substr="End of /MOTD command"
joined=""

sic -h "irc.sylnt.us" -n "log-bot" <> "$pipe" | while read -r line; do
    if [[ -n "$line" ]]; then
        echo "$line" >> "$log"    # append everything the server sends
    fi
    # once the end of the MOTD arrives we're fully connected, so join (once)
    if [[ -z "$joined" ]] && [[ -z "${line##*$substr*}" ]]; then
        joined="1"
        echo ":j $chan" > "$pipe"
    fi
done

exit 0

also posted on the wiki @ http://wiki.soylentnews.org/wiki/index.php/User:Crutchy#IRC_logging_bot

DRAFT: The Moderation Talk

Posted by NCommander on Tuesday March 04 2014, @04:14AM (#125)
17 Comments
Answers

NOTE: This is just a draft copy of my post, likely still incomplete. Once edited and reviewed, I'll post to the main index.

Ok, so first, I want to apologize that this is a few days late. Due to real life insanity (involving, but not limited to, 30 hours of flying, horrible jetlag, and seasickness), I wasn't able to get this discussion started when I promised, so please accept my deepest apologies. Anyway, here's the moderation discussion, as promised. I've made it clear multiple times that the current algo is something of a temporary hack. I've been reading the comments in my journal, and on the articles where we've discussed it in depth.

Before we begin, there are a couple of things I'd like to go into before we get into rewriting the algorithm. A lot of people have suggested alternative moderation systems (i.e., something Reddit-like, or a tag-based system) instead of trying to "fix" slash's system. While I don't inherently object to replacing moderation wholesale, it would require someone to actually implement a new system, get it set up somewhere, let people review it, and then perhaps roll it out to the site. As the saying goes, talk is cheap. I'm personally not going to replace what I see as a "good enough" system without the community deciding that they want it, and that requires that said system exists to be evaluated. If someone is seriously interested in pursuing this, I invite them to drop by #dev and discuss it 1:1.

*big exhale*

Right, now that we've got that out of the way, I'd like to address what I've seen as the biggest concerns about moderation. I recommend that people read my writeup about the current system before diving in, as I will be referring to that post considerably.

I've got some pretty graphs here that show how points are being spread through the system, and that, for the most part, moderation is working as advertised.

*FIXME, put graphs here*

Point expiration: Oh boy, people really have let me know about this one. I've written a fair bit about this, but to sum up: modpoints with a short half-life *are* a good thing. On Soylent, we post upwards of 10-20 articles a day, and once an article is no longer in the "top 10", so to speak, the number of new comments essentially drops into the single digits. With a smaller userbase, we need lots of mod points in circulation to make the system work, and even then, generally half to three-quarters of all modpoints expire without being used.

*graph to points expiration table*

That's not to say that the current four hour period isn't short. My largest concern at the moment is that any large increase in the mod point expiration time has something of a cascading effect. At any given moment, we have a specific number of slots for people who can be moderators, and if someone doesn't bother to moderate at all, that slot is effectively taken until the points go "POOF". I'm tentatively willing to increase the duration to six hours, to relieve some of this pressure, and then see how moderation spreads are affected. Any large-scale increase in the expiration time, however, means making more of the userbase eligible to moderate at a given time. I'm open to thoughts on this one.
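To put rough numbers on that cascade (entirely invented, just to show the tradeoff), here's a quick shell sketch of how often a "dead" slot recycles at each expiration setting:

```shell
#!/bin/sh
# back-of-envelope on the slot cascade (numbers invented): a slot held
# by someone who never moderates only frees up when their points expire
expiry=4       # current expiration, in hours
proposed=6     # tentative new expiration, in hours
echo "dead-slot recycles per day at ${expiry}h: $(( 24 / expiry ))"
echo "dead-slot recycles per day at ${proposed}h: $(( 24 / proposed ))"
```

A longer expiry means each wasted slot turns over less often, which is why the pool of eligible moderators has to grow to compensate.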

slashdev

Posted by crutchy on Sunday March 02 2014, @12:00PM (#114)
3 Comments
Code

After a minor problem with virtualbox (f*ck you nvidia) I got the slashdev virtual machine going. If you're running a 32-bit host OS (as I am), you can probably still run the 64-bit slashdev VM. You just need to make sure your CPU supports it (Intel VT-x or AMD-V) and that it's enabled in your BIOS (usually disabled by default). GIYF.

When you're importing the vm, gotta make sure you don't hit the checkbox that reassigns mac addresses on network interfaces, cos eth0 won't show up in ifconfig and you won't have internet access.

After a quick flick through the bash history I realised that sudo works with the "slash" user.

sudo apt-get update
sudo apt-get upgrade

sudo apt-get install gnome

*hides* (cli is awesome, but on its own is claustrophobic for me)

log in under the GNOME Classic session (the default ubuntu session fails to log in, not that i mind)

Epiphany works as a web browser, but I prefer firefox/iceweasel:

sudo apt-get install iceweasel

Can also use synaptic with same password as slash user.

To start apache (compiled per slashcode install instructions, not from repositories), open a terminal:

./apache/bin/apachectl start

Full command is (just for the curious):

/srv/slashdev/apache/bin/apachectl start

Start the slashd (slash daemon) - gleaned from bash history:

sudo /etc/init.d/slash start

Close slashd terminal window (will continue to run in background).

Open Firefox:
http://localhost:1337/

Apache public directory:
/srv/slashdev/slash/themes/slashcode/htdocs/
It contains mostly links to files in the /srv/slashdev/slash/ directory.

It was nice of NCommander to make the slash user home directory as /srv/slashdev... thanks for that

Tried to register a new user but it didn't seem to work. Looked like maybe the MTA wasn't configured. I use exim4 normally on my debian boxen (it removes postfix):

sudo apt-get install exim4
sudo dpkg-reconfigure exim4-config

During configuration, mostly self-explanatory (select defaults for all except make sure to select option "internet site; mail is sent and received directly using SMTP"). Tested password retrieval with exim4 ok. As per usual check your junk folder in hotmail etc.

Sagasu is an awesome search tool:

sudo apt-get install sagasu

After install, you'll find it under Application -> Accessories
Change your file pattern to *.pl or whatever (can just use * if you want), select "/srv/slashdev/slash" as your search directory, uncheck match case, enter a search string such as "sub displayComments" and click Search.
Couldn't find sub createEnvironment though (it's called at the bottom of a lot of perl files). Anyone got any ideas?
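One hunch: the *.pl file pattern skips the *.pm module files, which is where most Slash subs (likely including createEnvironment) are declared. grep will search the lot. The scratch tree below just makes the snippet runnable anywhere; on the VM, set SRC=/srv/slashdev/slash instead:

```shell
#!/bin/sh
# search every file under a slash tree for a sub declaration.
# scratch tree is only so this example runs anywhere; on the VM use
# SRC=/srv/slashdev/slash
SRC="${SRC:-/tmp/slash-grep-demo}"
mkdir -p "$SRC/Slash"
printf 'sub createEnvironment {\n}\n' > "$SRC/Slash/Environment.pm"
grep -rln "sub createEnvironment" "$SRC"
```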

Also recommend installing mysql-workbench.

If anyone finds anything wrong with any of this stuff please let me know.

edit: the other reason why i prefer to install gnome is cos gedit is a great little development tool.

edit: thanks heaps to paulej72 for the git advice. here's the script provided by paulej (i just added the git pull, as also mentioned by paulej):

#!/bin/sh

cd /srv/slashdev/slashcode
git pull
make USER=slash GROUP=slash SLASH_PREFIX=/srv/slashdev/slash install

rm -rf /srv/slashdev/slash/site/slashdev/htdocs/*.css

/srv/slashdev/slash/bin/symlink-tool -U
/srv/slashdev/slash/bin/template-tool -U

/srv/slashdev/apache/bin/apachectl restart

Note: This produced a couple of errors for me. Don't run this under sudo cos the script has a hissy fit (I had to do a "sudo chown slash:slash -R ./slashcode" to recover).
Also, I use this command to execute the script:

bash ./Desktop/deployslash.sh > ./Desktop/deployslash.log

more so that I can have a squiz at what happened if it goes pear shaped.

9-mar-14
paulej72: If you hand install to /srv/slashdev/slash/themes/slashcode/templates/dispComment;misc;default you need to run /srv/slashdev/slash/bin/template-tool -U to update the templates in the database. You should also restart apache when touching the templates.

perl code doc project

Posted by crutchy on Sunday February 23 2014, @12:44PM (#82)
0 Comments
Code

work in progress

a minor difficulty i'm having with wrapping my head around slashcode is figuring out where functions are declared. i can use a search tool like sagasu, but i've done something similar to this for php so i thought it would be a fun perl project.

objective: parse code files in a directory tree and output page with linked index of files and functions

doc.pl

#!/usr/bin/perl
use strict;
use warnings;

print "Content-Type: text/html\n\n";

##########################
sub doc__main {
    print "<!DOCTYPE HTML>\n";
    print "<html>\n";
    print "<head>\n";
    print "<title>Slashcode Doc</title>\n";
    print "<meta name=\"description\" content=\"\">\n";
    print "<meta name=\"keywords\" content=\"\">\n";
    print "<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\">\n";
    print "</head>\n";
    print "<body>\n";
    print "<p>blah</p>\n";
    print "</body>\n";
    print "</html>\n";
}

##########################
sub doc__functionTree {
    my($structure, $allDeclaredFunctions, $allFunctions, $allFiles) = @_;
}

##########################
sub doc__recurse {
    my($structure, $allDeclaredFunctions, $allFunctions, $allFiles, $allTreeItems, $caption, $type, $level, $id) = @_;
}

##########################
sub doc__aboutFile {
    my($structure, $allFunctions, $allFiles, $fileName) = @_;
}

##########################
sub doc__aboutFunction {
    my($structure, $allFunctions, $allFiles, $functionName) = @_;
}

##########################
sub doc__linkFile {
    my($allFiles, $fileName) = @_;
}

##########################
sub doc__linkFunction {
    my($allFunctions, $functionName) = @_;
}

##########################
sub doc__allFiles {
    my($structure) = @_;
}

##########################
sub doc__allFunctions {
    my($structure) = @_;
}

##########################
sub doc__declaredFunctions {
    my($structure) = @_;
}

##########################
sub doc__loadStructure {
}

##########################
sub doc__parseFile {
    my($structure, $fileName) = @_;
}

##########################
doc__main();
1;
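While doc.pl is still just stubs, grep can produce a rough version of the index I'm after: one "file:line:sub name" entry per declaration. The scratch files keep the snippet self-contained; for real use, point SRC at /srv/slashdev/slash:

```shell
#!/bin/sh
# stopgap index of sub declarations while doc.pl takes shape.
# scratch files make the example runnable anywhere; on the VM use
# SRC=/srv/slashdev/slash
SRC="${SRC:-/tmp/doc-demo}"
mkdir -p "$SRC"
printf 'sub foo {\n}\nsub bar {\n}\n' > "$SRC/demo.pm"
# list file:line:name for every sub declared at the start of a line
grep -rn '^sub [A-Za-z_]' "$SRC" --include='*.pl' --include='*.pm' | sed 's/ *{.*//'
```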

perl

Posted by crutchy on Saturday February 22 2014, @07:24AM (#72)
1 Comment
Code

I'm a perl noob. Hopefully if I do some journal writing on my experience it will help keep me motivated.

Got some sort of perl server configuration going. Google not very helpful since most guides are for mod_perl pre 2.0 and the apache foundation docs are gibberish to me (maybe I'm just stupid).

Anyway, here's a conf that I kinda butchered up based on a bunch of different sources:

<VirtualHost *:80>
  ServerName slash
  DocumentRoot /var/www/slash/
  Redirect 404 /favicon.ico
    <Directory />
        Order Deny,Allow
        Deny from all
        Options None
        AllowOverride None
    </Directory>
    <Directory /var/www/slash/>
        SetHandler perl-script
        PerlResponseHandler ModPerl::Registry
        PerlOptions +ParseHeaders
        Options +ExecCGI
        Order Allow,Deny
        Allow from all
    </Directory>
  LogLevel warn
  ErrorLog  /var/www/log/slash/error.log
  CustomLog /var/www/log/slash/access.log combined
</VirtualHost>

By the way, this is for Debian Squeeze.

My first hello world script was also a bit more of an adventure than expected. Most tutorials leave the header out of examples.

/var/www/slash/test.pl

#!/usr/bin/perl
use strict;
use warnings;

print "Content-Type: text/html\n\n";
print "Hello world.\n";

I could (probably should) have used a text/plain mime header, but it worked nonetheless.
Also I can apparently use the following to add a path to @INC

use lib "/var/www/slash/Slash";

I downloaded the soylent/slashcode master branch from https://github.com/SoylentNews/slashcode/archive/master.zip so that I could have a squiz and see if I could be of any help with debugging etc, but although I can read some of it, I need to go to perl school before I can contribute.

My bread and butter programming languages are Delphi and PHP.

This explains a lot about the beginning of slashcode functions that aren't familiar to me:

http://stackoverflow.com/questions/17151441/perl-function-declaration
Perl does not have type signatures or formal parameters, unlike other languages like C:

// C code
int add(int, int);

int sum = add(1, 2);

int add(int x, int y) {
  return x + y;
}

Instead, the arguments are just passed as a flat list. Any type validation happens inside your code; you'll have to write this manually. You have to unpack the arglist into named variables yourself. And you don't usually predeclare your subroutines:
my $sum = add(1, 2);

sub add {
  my ($x, $y) = @_; # unpack arguments
  return $x + $y;
}

Is it possible to do pass by reference in Perl?
http://www.perlmonks.org/?node_id=6758

Subroutines:
http://perldoc.perl.org/perlsub.html

SNQ*: Squadron of Circus Chickens or Barrel of Rabid Geese?

Posted by Yog-Yogguth on Friday February 21 2014, @08:02PM (#66)
0 Comments
Answers

Betteridge be damned: which is better?

I think I'll keep the CC Squad as protection and use the BRG as a grenade! I also have a spare Barrel of Uber Robots but I'm not sure what their stats are.

* SNQ is short for "Soylent News Question"

http relay

Posted by crutchy on Thursday February 20 2014, @09:58AM (#58)
3 Comments
Code

Lately I've been working on a little tool to allow remote access to some intranet applications I've been working on. Would be interesting to see what others here thought about the concept.
The applications are normally only accessible on a LAN, with the usual NAT router to the internet.
The aim is to be able to access the applications from the internet without port forwarding in the router.
I've heard of things like BOSH (http://en.wikipedia.org/wiki/BOSH) but haven't found much in the way of specifics and I'm not sure if it does what I want.
The general idea I've been working on is to use a publicly accessible host as a relay between the client (connected to the internet) and the application server (connected to a LAN).
This is kinda how it works at the moment:
To allow remote access, a workstation on the LAN must have a browser open to a URL that uses iframe RPC to periodically poll the relay server. I've set this interval to 3 seconds, which seems OK for testing purposes (it would need to be reduced for production). Every 3 seconds the LAN server sends an HTTP request (using php's fsockopen/fwrite/fgets/fclose) and the relay server responds with a list of remote client requests. Most of these responses are empty unless a remote client has requested something.
From the remote client perspective, if a user opens their browser to a URL on the relay server, they would normally be presented with some kind of authentication process (I've neglected that for testing purposes) and then they would be able to click a link to access an application that would normally be restricted to the LAN. When they click that link, the relay server creates an empty request file. To respond to the LAN server with a list of requests, the relay server reads the filenames from a directory and constructs the requests list based on files with a certain filename convention (for testing I'm just using "request__0.0.0.0_blah", where 0.0.0.0 is the IP address of the remote client and blah is the raw-url-encoded request, with special chars replaced with % codes).
So one job of the relay server is to maintain a list of remote client request files (including deleting them when the requests have been fulfilled). It would probably be best to use a simple mysql table for this, but for testing I've just used a simple text file in a location that can be written to by apache.
After saving the request, the relay server script instance initiated by the remote client doesn't die, but loops until the request file isn't empty. So while the following is going on, this instance is just looping (although it has a timeout of 5 secs).
After a remote client requests an application from the relay server, and the LAN client asynchronously fetches the pending remote requests from it (hence the need for a file or database), the LAN server (through the LAN client iframe and a bit of js) constructs an HTTP request and sends it to the application server (for testing purposes the RPC stub sends the request to its own server, which is processed by the application through a dispatch handler). The application response is returned by an fgets call, processed to rewrite hyperlinks and img sources etc to suit the relay server instead of the LAN server (still working on this bit), and then posted back to the relay server as another request containing the application page content.
The relay server then takes the page content and saves it to a text file.
The relay server script instance mentioned earlier, which is busy looping away, is checking for the existence of this page content in the request file. I tried doing this check with a call to php's filesize function, but it didn't seem to work (maybe something to do with the write and the size check being asynchronous, but I don't know); reading the file with file_get_contents and checking whether the content length is greater than zero did work (though not very efficiently, I'll admit).
So if the LAN server HTTP request to the relay server containing the application page content gets written to the remote client request file on the relay server, the remote client process on the relay server will read it and output it to the remote client.
If the application page content is output, or the content checking loop times out, the request file is deleted.
Except for link/img targets everything works in testing; I can request a page and it renders on the remote client browser as it would on the LAN (minus images).
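To make the lifecycle concrete, here's a toy shell walk-through of the request-file convention described above (the spool path and filename are illustrative; the real thing is php):

```shell
#!/bin/sh
# toy walk-through of the relay's request-file lifecycle
# (spool path and filename are invented for illustration)
SPOOL="${SPOOL:-/tmp/relay-spool}"
REQ="$SPOOL/request__10.0.0.5_%2Fapp%2Fpage"
mkdir -p "$SPOOL"

: > "$REQ"                            # 1. remote client clicks: empty request file
ls "$SPOOL" | grep '^request__'       # 2. LAN poll lists pending requests by name
printf '<html>page</html>' > "$REQ"   # 3. LAN server posts the fetched page content
[ -s "$REQ" ] && echo "ready"         # 4. looping relay instance sees non-empty file
rm -f "$REQ"                          # 5. fulfilled: delete the request file
```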

Does anyone have any thoughts on this?
The code is fairly simple and short; there's a single routine on the relay server with about 150-odd lines of very sparse code, and there's a single routine on the LAN server with about 100 lines of code (will grow a bit when I get the link/img replacement and get/post param forwarding working, but not much). The application that generates the page content being relayed is thousands of lines of code but I've kept the remote stuff separate.
I'm pretty sure there are dedicated appliances that do this kind of stuff, but does anyone have any experience with them?
There are no doubt other ways to skin this cat, but I'm interested in security, simplicity and of course cost. Aspects I liked about this approach were that I didn't have to punch a hole in the router, and that the process is controllable and monitorable from the client within the LAN (every poll outputs a request status summary).
Would be interesting to find out if you think the idea is good or shit, or if there are aspects that could be improved (no doubt there are plenty). Feel free to comment or not.

Thanks to all those who made SoylentNews a reality!

edit: the setup in this case is a little different from the usual dmz/port forwarding case in that there aren't any ports exposed in the LAN router; i get through because the relay server only ever responds to outbound requests originating from the LAN server. there aren't ever any outbound requests originating from the relay server directly

Home!

Posted by Yog-Yogguth on Thursday February 20 2014, @05:05AM (#56)
0 Comments
/dev/random

So lovely to be back!!! Yes that's how it feels isn't it?

Had two UIDs on that other site, one relatively ancient forgotten one and one mostly unused as I joined the AC horde :)

But now... now home has been rebuilt! Awesome. Way back then I don't think I truly appreciated what was available --and I'm probably not the only one this applies to-- but now that we have lost it we have gained more so this time I'll try to make better use of it.

Not that I'll be prolific or anything like that but I'll scamper about once in a while *crams stuff into 255 char bio*.

How Mod Points Work Today

Posted by NCommander on Tuesday February 18 2014, @07:04AM (#36)
45 Comments
Code

So, given my last journal, here's a writeup on how they work today. For the most part, my original story on this topic still holds, but a fair bit has changed between then and now, and I never went much into the thought process behind how it was devised.

In contrast to the original system, the current one wants to keep a specific number of moderation points always in circulation, with the concept that mod points are a constantly moving and fluid item. Moderation simply doesn't work if there isn't enough of the damn things, and having too many wasn't a problem at all (Overrated exists for a reason).

The original idea was that we should dynamically generate our pool of modpoints based on our activity levels, so the original implementation of this script took the comment counts for the last 24 hours, with the basic notion that every comment should have the potential to be moderated at least once. This number was multiplied by two, and provided our baseline moderation count. Since we based our mod point count on a 24h window, mod points were set to expire every 24 hours instead of every 72. At this point, I hadn't realized the fundamental problem with the slashcode moderation system; my thoughts were "need lots of mod points" and "this is incredibly complex, I can do better". That realization came as I was stripping the old one out of slash.

As part of this, I also changed the eligibility requirements for moderation. Instead of handing out points based on a token count, I wanted only users who were active to get mod points. Drive-by moderation by lurkers wasn't something worth retaining, even though I suspect it makes up the bulk of Slashdot moderations.

I also wanted to avoid the problem of "moderator burnout": users getting mod points too frequently and just being turned off from moderation. I know that happened to me on slashdot, and to others who ignored modpoints (or chose to become ineligible). As such, I wanted a cooldown on how frequently someone can get modpoints.

That being said, I didn't want everyone and their mother being moderators all at once, so I decided that 30% of all active users (defined, at the time, as anyone active within the last 24 hours) with neutral or better karma would be eligible for modpoints.

Version 1 was fairly simple. It took the comment count for the last 24 hours and multiplied it by 2; this is the minimum number of modpoints that exist at all times. Take all users who were active in the activity_period, take mod_points_to_issue/(eligible_moderators*.3), and hand out those points equally. As a failsafe, the system hands out ten mod points minimum (the underlying thought being that I don't want people to get just one or two modpoints; more is better, so let's take Slashdot's 5 and multiply it by 2).
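To make the version 1 math concrete, here's a quick shell sketch (all the numbers are invented, and the variable names are mine, not the script's):

```shell
#!/bin/sh
# version 1 math as I read it (numbers and names invented for illustration)
comments_24h=800
min_points=$(( comments_24h * 2 ))        # every comment moderatable, times two
active_users=1000
eligible=$(( active_users * 30 / 100 ))   # the 30% slice of active users
per_user=$(( min_points / eligible ))     # split the pool evenly
[ "$per_user" -lt 10 ] && per_user=10     # failsafe: ten points minimum
echo "$per_user points each to $eligible moderators"
```

With these numbers the even split works out to 5 points each, so the ten-point failsafe kicks in.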

And for the most part it worked. When we were in closed alpha on Thursday, we opened the test site to 100 users to try and test it under something resembling real-world conditions. And, for the most part, it worked, because everyone was very highly active. You might see the mistake with that logic when applied to a production site.

Come go-live, user counts surge through the roof, active users are flowing in (can't believe we hit 1k users in a single day), and the moderation script starts handing out modpoints in the thousands. At one point, there were close to 2000 modpoints in circulation at any given time.

For that moment, moderation was working well. Then users started going offline and calling it a day, or worse, getting modpoints as they signed off and not seeing them until they signed back in. The script was happy, 30% of users were moderators, but there were a lot of +1s. When I looked at the database, most people who had modpoints hadn't been signed in for hours.

Suddenly, in a flash of inspiration, I saw the mistake. Slashdot could get away with handing out modpoints with no activity check because even with 80% of their userbase eligible to moderate, most people would be inactive at any given time. With our 30%, there simply weren't enough modpoints in the hands of active users.

So, in an attempt to salvage the situation, I made a critical adjustment to how the damn thing works. The activity period for users was separated into a new variable and dropped to 1 hour (then five minutes, so any logged-in user has a chance), and process_moderation had its crontab shortened to five minutes (it used to run hourly).

To keep modpoints constantly in circulation, expiration time was dropped to four hours, so only people who are active RIGHT NOW are moderators, especially since our editor team had posted 20 articles that day already. Whenever a user loses their points (via expiration or using them all), their slot is freed up, and a new user immediately gets modpoints.

That change in logic underpins version 2 of this script. Now the minimum count is what we hand out, except in the very rare case that we need more modpoints in circulation, in which case the active users start getting more and more (up to a cap of 50, after which it spills past the 30% of users). For the most part it seems to be working, and comment moderation scores are generally going up, but it may still require further tweaking to work well. I'm generally not seeing as many +3-5s as I'd like, but it's a whole hell of a lot better than it used to be.
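Here's the version 2 hand-out logic as I understand it, sketched in shell (numbers invented; the cap-and-spill behaviour is my reading of the description above):

```shell
#!/bin/sh
# version 2 sketch (invented numbers): hand out the minimum, and if
# circulation needs more, raise the per-user grant up to a cap
needed=2000        # points that must be in circulation
moderators=300     # the usual 30% slice of active users
cap=50             # per-user ceiling
per_user=$(( (needed + moderators - 1) / moderators ))   # ceiling division
if [ "$per_user" -gt "$cap" ]; then
    per_user=$cap
    moderators=$(( (needed + cap - 1) / cap ))  # spill past the 30% slice
fi
echo "$moderators moderators with $per_user points each"
```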

I'm open to any thoughts, criticisms, or whacky ideas relating to how mod points are being dished out. Let me hear them below.

How Mod Points Worked In Stock Slash

Posted by NCommander on Tuesday February 18 2014, @06:26AM (#35)
3 Comments
Code

So for the curious (or the morbid), I thought I'd do a bit of a writeup on how modpoints worked in stock slashcode. To my knowledge, this is how they work on slashdot.org today, and on all other Slash sites. That being said, caveat emptor: I'm not QUITE sure I understood the code correctly, and I'm writing this from memory, but if enough people want it, I'll fish the old code out of git and paste it here.

In stock slashcode, every user has something called a "token" count, which represents their chances at getting modpoints. The best way to think of tokens is as chances at winning a raffle. Keep this in mind, as it will become relevant in short order. Tokens are (theoretically) generated from various clicks in the site UI, and are granted off some serious voodoo involving magic numbers and other such insanity.

My best understanding is that tokens are only issued after a specific random number of clicks is hit, and are later pulled out of the access log by the process_moderation slashd script. But more on that later. The logic that does this is fairly uncommented perl spread across several perl modules, so it's rather hard to keep track of.

Tokens convert to modpoints at a strict ratio (if I remember correctly, it's 8 tokens to one mod point, so you need at least 40 tokens to be eligible to receive modpoints, since stock slash only hands out modpoints in increments of five).
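The arithmetic that implies (my reading, not the actual slash code) looks like this:

```shell
#!/bin/sh
# token-to-modpoint arithmetic implied above (my reading, not slash's code)
tokens=43
ratio=8                               # tokens per mod point
points=$(( tokens / ratio / 5 * 5 ))  # points only come in increments of five
threshold=$(( ratio * 5 ))            # minimum tokens that yield anything
echo "$tokens tokens -> $points points (threshold: $threshold tokens)"
```

So 43 tokens gets you a single block of 5 points, and anything under 40 tokens gets you nothing at all.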

Having tokens is not, however, enough to make you eligible for mod points; it only represents your chances of getting them. When process_moderation kicked off, it would essentially dump the entire user table for users that had tokens, were willing to moderate, were not banned from moderation, and were within the oldest 80% of accounts. This is where metamoderation comes into play. (Note: this was true when metamod existed; firehose replaced it, and I have no idea how the logic (if at all) has been changed to handle that.)

For users that had been metamodded, the metamod results acted as a weight, either increasing or decreasing their chances of getting modpoints. For moderations judged good, you got additional chances in the raffle, and the reverse decreased them. It also appears that your individual metamods were (somehow) taken into account, but I haven't quite pieced together the logic of that. As the metamod module is broken, I never looked to see how it works in practice.

Now, none of this promises you'll actually GET modpoints. As I said, it's a raffle. And it's a raffle that includes accounts that have been inactive but still have tokens. At this point, random users are chosen to get modpoints, which converts their tokens to modpoints. If you get picked more than once, you get another increment of 5.

So far, so good? Right now, you might be asking what the problem with that is. Aside from being perhaps a bit long-winded, there seems to be nothing wrong. The problem isn't that the algorithm is implemented incorrectly; it's that the design is fundamentally broken.

If you want a hint, I recommend checking out http://slashdot.jp or http://barrapunto.com/ (which are the only other slash sites still on the net that I know of), and look for +5 comments. Take your time, I'll wait.

The problem comes from what ISN'T in the algorithm: it takes no account of how many modpoints MUST be in circulation. I had the advantage of being a frequent poster on macslash.org while it was still around. In the years I was active on that site, I can count the number of +5 comments I saw on one hand. +4s were almost just as rare.

For a comment from a normal user to get to +5, it needs to be voted up four times, and that means four separate people who 1. have modpoints, 2. want to use them, and 3. want to use them on THAT comment.

That's a lot of freaking ifs. While this site was still in closed testing, the stock modpoint algorithm ran from Monday to Friday, until I ripped it out and replaced it with my version. In that entire time, it issued a grand whopping total of 10 modpoints (5 to the dummy account I use for account testing; I don't remember where the other 5 went). At that point, we were getting about 20 comments per article.

In short, the stock modpoint method is not merely broken in its details; it is fundamentally broken, and it only works on Slashdot because their userbase is large enough that it works out of dumb luck. Even then, I question that, as a lot of good comments never seem to get to +2 or +3, let alone the higher tiers. This is what prompted the rewrite, which I'll document in my next journal.