After a minor problem with virtualbox (f*ck you nvidia) I got the slashdev virtual machine going. If you're running a 32-bit host OS (as I do), you can probably still run the 64-bit slashdev VM. You just need to make sure your CPU supports it (Intel VT-x or AMD-V) and that it's enabled in your BIOS (usually disabled by default). GIYF.
When you're importing the VM, make sure you don't tick the checkbox that reassigns MAC addresses on network interfaces, cos otherwise eth0 won't show up in ifconfig and you won't have internet access.
After a quick flick through the bash history I realised that sudo works with the "slash" user.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install gnome
*hides* (cli is awesome, but on its own is claustrophobic for me)
Log in under the GNOME Classic session (the default Ubuntu session fails to log in, not that I mind).
Epiphany works as a web browser, but I prefer firefox/iceweasel:
sudo apt-get install iceweasel
You can also use Synaptic with the same password as the slash user.
To start apache (compiled per slashcode install instructions, not from repositories), open a terminal:
./apache/bin/apachectl start
Full command is (just for the curious):
/srv/slashdev/apache/bin/apachectl start
Start the slashd (slash daemon) - gleaned from bash history:
sudo /etc/init.d/slash start
Close slashd terminal window (will continue to run in background).
Open Firefox:
http://localhost:1337/
Apache public directory:
/srv/slashdev/slash/themes/slashcode/htdocs/
It contains mostly links to files in the /srv/slashdev/slash/ directory.
It was nice of NCommander to make the slash user's home directory /srv/slashdev... thanks for that.
Tried to register a new user but it didn't seem to work; it looked like the MTA wasn't configured. I normally use exim4 on my Debian boxen (installing it removes postfix):
sudo apt-get install exim4
sudo dpkg-reconfigure exim4-config
The configuration is mostly self-explanatory (accept the defaults for everything, except make sure to select the option "internet site; mail is sent and received directly using SMTP"). Tested password retrieval with exim4 OK. As usual, check your junk folder in Hotmail etc.
Sagasu is an awesome search tool:
sudo apt-get install sagasu
After install, you'll find it under Applications -> Accessories.
Change your file pattern to *.pl or whatever (can just use * if you want), select "/srv/slashdev/slash" as your search directory, uncheck match case, enter a search string such as "sub displayComments" and click Search.
Couldn't find sub createEnvironment though (it's called at the bottom of a lot of the perl files). Anyone got any ideas?
Also recommend installing mysql-workbench.
If anyone finds anything wrong with any of this stuff please let me know.
edit: the other reason why I prefer to install GNOME is cos gedit is a great little development tool.
edit: thanks heaps to paulej72 for the git advice. Here's the script provided by paulej72 (I just added the git pull, as also mentioned by paulej72):
#!/bin/sh
cd /srv/slashdev/slashcode
git pull
make USER=slash GROUP=slash SLASH_PREFIX=/srv/slashdev/slash install
rm -rf /srv/slashdev/slash/site/slashdev/htdocs/*.css
/srv/slashdev/slash/bin/symlink-tool -U
/srv/slashdev/slash/bin/template-tool -U
/srv/slashdev/apache/bin/apachectl restart
Note: This produced a couple of errors for me. Don't run this under sudo cos the script has a hissy fit (I had to do a "sudo chown slash:slash -R ./slashcode" to recover).
Also, I use this command to execute the script:
bash ./Desktop/deployslash.sh > ./Desktop/deployslash.log
more so that I can have a squiz at what happened if it goes pear shaped.
9-mar-14
paulej72: If you hand install to /srv/slashdev/slash/themes/slashcode/templates/dispComment;misc;default you need to run /srv/slashdev/slash/bin/template-tool -U to update the templates in the database. Should also restart apache when touching the templates.
For comparison, Slashdot serves an estimated 15 million pageviews per month.
The pageview rate is also climbing - we passed the 2 million mark somewhere around our 9th day online. We'll soon need a higher service tier.
The site's estimated value grew from $43 (Tue) to $639 (Fri) to $2000 (Tue - today). Woot!
It's been a wild ride!
The sys team is building the infrastructure to support a mainstream site. We purchased 3 more linodes (full year, for a 10% savings), which are being provisioned for development, test, and production. The dev team is preparing a turn-key slashcode package that developers can run locally, and we should start to see bug fixes appear in the live site in the next couple of days, possibly by this Friday (Feb 28).
The style team has a long list of planned improvements, and the content groups have been feeding us a steady supply of delicious article summaries, spirited debate (IRC, Forums), plans and roadmaps (Wiki, status posts), with contributions from many other groups. We have our own customer relations person!
I promised that the project would be community driven, and we are largely that. Each overlord has agreed to run their department by community consensus, only making executive decisions when there is no general agreement, or if there is a global overriding concern.
This is working well. For the majority of cases consensus is clear and feels "clearly the right decision". For a split consensus, both choices seem equally good so it doesn't matter which one we choose.
The overlords have authority to make decisions in their area, which means people can get involved with areas that interest them without wading through everything. If you would like to participate, come join us!
Global issues will be decided by community vote. Notable votes coming up will be 1) Choosing the permanent name, 2) Choosing a business model, and 3) Choosing revenue streams. I have researched these and have notes and observations to set before the community as a starting point for discussion.
That's my next step: setting down the notes for discussion, some background information (such as projected expenses), and orchestrating the voting process. Once the business/financial models have been chosen we can start building a proper business.
It looks like we've got ourselves a winner!
SoylentNews is growing much faster than expected, so I need to put the project under a corporate veil to protect my personal assets (mostly my house).
I could google for a lawyer in my area, but I'd rather give some business to one of our users.
If you are a lawyer familiar with business/corporate issues (including non-profit) in Southern NH (especially near Milford) and would like some new business please contact me.
John (at) SoylentNews (dot) org
(Note: This is a stop-gap measure, done only for short-term protection. We'll still choose the business and financial models by community consent, I just don't want to be sued before that can happen. Also, this is a pre-emptive move on my part, as yet we have no legal problems.)
work in progress
A minor difficulty I'm having with wrapping my head around slashcode is figuring out where functions are declared. I can use a search tool like sagasu, but I've done something similar for PHP so I thought it would be a fun Perl project.
objective: parse code files in a directory tree and output page with linked index of files and functions
doc.pl
#!/usr/bin/perl
print "Content-Type: text/html\n\n";
use strict;
use warnings;

##########################
sub doc__main {
    print "<!DOCTYPE HTML>\n";
    print "<html>\n";
    print "<head>\n";
    print "<title>Slashcode Doc</title>\n";
    print "<meta name=\"description\" content=\"\">\n";
    print "<meta name=\"keywords\" content=\"\">\n";
    print "<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\">\n";
    print "</head>\n";
    print "<body>\n";
    print "<p>blah</p>\n";
    print "</body>\n";
    print "</html>\n";
}

##########################
sub doc__functionTree {
    my($structure, $allDeclaredFunctions, $allFunctions, $allFiles) = @_;
}

##########################
sub doc__recurse {
    my($structure, $allDeclaredFunctions, $allFunctions, $allFiles, $allTreeItems, $caption, $type, $level, $id) = @_;
}

##########################
sub doc__aboutFile {
    my($structure, $allFunctions, $allFiles, $fileName) = @_;
}

##########################
sub doc__aboutFunction {
    my($structure, $allFunctions, $allFiles, $functionName) = @_;
}

##########################
sub doc__linkFile {
    my($allFiles, $fileName) = @_;
}

##########################
sub doc__linkFunction {
    my($allFunctions, $functionName) = @_;
}

##########################
sub doc__allFiles {
    my($structure) = @_;
}

##########################
sub doc__allFunctions {
    my($structure) = @_;
}

##########################
sub doc__declaredFunctions {
    my($structure) = @_;
}

##########################
sub doc__loadStructure {
}

##########################
sub doc__parseFile {
    my($structure, $fileName) = @_;
}

##########################
doc__main();
1;
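None of the subs do anything yet; that's just the shape of the thing. As a rough idea of where I'm heading, doc__parseFile will probably end up looking something like the sketch below: read a file line by line, look for lines that declare a sub, and record the name, file and line number. This is a first guess, not working code, and the hash layout is just what's in my head so far:

sub doc__parseFile {
    # Rough draft: record each "sub foo" declaration found in $fileName,
    # keyed by sub name, as { file => ..., line => ... }.
    my($structure, $fileName) = @_;
    open(my $fh, '<', $fileName) or return;
    while (my $line = <$fh>) {
        if ($line =~ /^\s*sub\s+(\w+)/) {
            $structure->{$1} = { file => $fileName, line => $. };
        }
    }
    close($fh);
}

The idea is that doc__loadStructure would then walk the directory tree and call doc__parseFile on each .pl/.pm file it finds.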
Won't you join us?
irc://irc.soylentnews.org/Soylent
Some have asked why we run our own servers instead of using a public one such as freenode.net. We did this to have control of the TOS, copyright, DMCA, and other legal issues. I like freenode (and their TOS) a lot, but we're building a community and we should make our own choices.
We've got a bot named Bender that monitors the newsfeed and posts announcements whenever a new article comes up.
Bender also posts the headlines to our twitter account, so feel free to follow us there for timely announcements. (Nineteen followers today - woot!)
And FrogSmash, our overlord of graphic arts, is distinguishing our various communication channels slightly so that, for example, bookmarks have differently colored icons to make them more easily identifiable by eye. He's set up a test twitter account to try out new skins - check it out and send him some feedback.
Landon, our overlord of IRC, set all this up. He even set us up a link-shortener sylnt.us domain for the twitter account: that rocks! So send him some love if you see him on IRC - he's doing a bang-up job!
I'm a perl noob. Hopefully if I do some journal writing on my experience it will help keep me motivated.
Got some sort of perl server configuration going. Google wasn't very helpful, since most guides are for mod_perl pre 2.0 and the Apache Foundation docs are gibberish to me (maybe I'm just stupid).
Anyway, here's a conf that I kinda butchered up based on a bunch of different sources:
<VirtualHost *:80>
ServerName slash
DocumentRoot /var/www/slash/
Redirect 404 /favicon.ico
<Directory />
Order Deny,Allow
Deny from all
Options None
AllowOverride None
</Directory>
<Directory /var/www/slash/>
SetHandler perl-script
PerlResponseHandler ModPerl::Registry
PerlOptions +ParseHeaders
Options +ExecCGI
Order Allow,Deny
Allow from all
</Directory>
LogLevel warn
ErrorLog /var/www/log/slash/error.log
CustomLog /var/www/log/slash/access.log combined
</VirtualHost>
By the way, this is for Debian Squeeze.
My first hello world script was also a bit more of an adventure than expected. Most tutorials leave the Content-Type header out of their examples.
/var/www/slash/test.pl
#!/usr/bin/perl
print "Content-Type: text/html\n\n";
use strict;
use warnings;
print "Hello world.\n";
I could (probably should) have used a text/plain mime header, but it worked nonetheless.
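For completeness, the text/plain version would only differ in the header line (the browser just won't try to render any HTML in the output):

#!/usr/bin/perl
use strict;
use warnings;

# Same script, plain text content type instead of HTML.
print "Content-Type: text/plain\n\n";
print "Hello world.\n";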
Also I can apparently use the following to add a path to @INC
use lib "/var/www/slash/Slash";
I downloaded the soylent/slashcode master branch from https://github.com/SoylentNews/slashcode/archive/master.zip so that I could have a squiz and see if I could be of any help with debugging etc, but although I can read some of it, I need to go to perl school before I can contribute.
My bread and butter programming languages are Delphi and PHP.
This explains a lot about the stuff at the beginning of slashcode functions that wasn't familiar to me:
http://stackoverflow.com/questions/17151441/perl-function-declaration
Perl does not have type signatures or formal parameters, unlike other languages like C:

// C code
int add(int, int);

int sum = add(1, 2);

int add(int x, int y) {
    return x + y;
}

Instead, the arguments are just passed as a flat list. Any type validation happens inside your code; you'll have to write this manually. You have to unpack the arglist into named variables yourself. And you don't usually predeclare your subroutines:

my $sum = add(1, 2);

sub add {
    my ($x, $y) = @_; # unpack arguments
    return $x + $y;
}
Is it possible to do pass by reference in Perl?
http://www.perlmonks.org/?node_id=6758
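From what I can gather, the short answer is yes: you pass a reference (a scalar that points at the real variable) and dereference it inside the sub. A little test of my own, not anything from slashcode:

#!/usr/bin/perl
use strict;
use warnings;

# Pass an array by reference so the sub can modify the caller's array.
sub add_item {
    my ($list_ref, $item) = @_;   # $list_ref is a reference to the caller's array
    push @{$list_ref}, $item;     # dereference it to push onto the original
}

my @stories = ("First post");
add_item(\@stories, "Another story");   # \@stories creates the reference

print "$_\n" for @stories;   # both items print: the caller's array was changed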
Subroutines:
http://perldoc.perl.org/perlsub.html
Lately I've been building a little tool to allow remote access to some intranet applications I maintain. It would be interesting to see what others here think of the concept.
The applications are normally only accessible on a LAN, with the usual NAT router to the internet.
The aim is to be able to access the applications from the internet without port forwarding in the router.
I've heard of things like BOSH (http://en.wikipedia.org/wiki/BOSH) but haven't found much in the way of specifics and I'm not sure if it does what I want.
The general idea I've been working on is to use a publicly accessible host as a relay between the client (connected to the internet) and the application server (connected to a LAN).
This is kinda how it works at the moment:
To allow remote access, a workstation on the LAN must have a browser open to a URL that uses iframe RPC to poll the relay server periodically. I've set the interval to 3 seconds, which seems OK for testing purposes (it would need to be reduced for production). Every 3 seconds the LAN server sends an HTTP request (using PHP's fsockopen/fwrite/fgets/fclose) and the relay server responds with a list of remote client requests. Most of these responses are empty unless a remote client has requested something.
From the remote client's perspective: if a user opens their browser to a URL on the relay server, they would normally be presented with some kind of authentication process (I've skipped that for testing purposes) and then they can click a link to access an application that would normally be restricted to the LAN. When they click that link, the relay server creates an empty request file. To respond to the LAN server with a list of requests, the relay server reads the filenames from a directory and constructs the request list from files following a certain naming convention (for testing I'm just using "request__0.0.0.0_blah", where 0.0.0.0 is the IP address of the remote client and blah is the raw URL-encoded request, with special chars replaced by % codes).
So one job of the relay server is to maintain a list of remote client request files (including deleting them when the requests have been fulfilled). It would probably be best to use a simple mysql table for this, but for testing I've just used a simple text file in a location that can be written to by apache.
After saving the request, the relay server script instance initiated by the remote client doesn't die, but loops until the request file isn't empty. So while the following is going on, this instance is just looping (although it has a timeout of 5 secs).
Once a remote client has requested an application from the relay server, and the LAN client has polled the relay server for outstanding requests (asynchronously, hence the need for a file or database), the LAN server (through the LAN client iframe and a bit of JS) constructs an HTTP request and sends it to the application server (for testing purposes the RPC stub sends the request to its own server, where it is processed by the application through a dispatch handler). The application response comes back via an fgets call and is processed to rewrite hyperlinks, img sources etc. to suit the relay server instead of the LAN server (still working on this bit), and the LAN server then posts another request to the relay server containing the application page content.
The relay server then takes the page content and saves it to a text file.
The relay server script instance mentioned earlier, which is busy looping away, keeps checking whether the page content has arrived in the request file. I tried doing this check with a call to PHP's filesize function, but it didn't seem to work (maybe something to do with the writing and size-checking happening asynchronously, but I don't know). Reading the file with file_get_contents and checking whether the content length is greater than zero seemed to work (though not very efficiently, I'll admit).
So once the LAN server's HTTP request containing the application page content has been written to the remote client's request file on the relay server, the remote client's process on the relay server will read it and output it to the remote client.
If the application page content is output, or the content checking loop times out, the request file is deleted.
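To make that loop a bit more concrete, here's a very rough sketch of the relay-side wait-and-deliver logic. It's in Perl rather than PHP (since that's what I've been playing with lately), and names like $request_file are placeholders, so treat it as an illustration of the idea rather than the actual code:

#!/usr/bin/perl
use strict;
use warnings;

# Placeholder path; the real name is built from the client IP and the
# URL-encoded request, e.g. "request__0.0.0.0_blah".
my $request_file = "/var/relay/requests/request__0.0.0.0_blah";

my $timeout  = 5;      # give up after 5 seconds, like the PHP version
my $interval = 0.25;   # how often to re-check the file
my $elapsed  = 0;
my $content  = "";

while ($elapsed < $timeout) {
    if (open my $fh, '<', $request_file) {
        local $/;                     # slurp the whole file in one read
        $content = <$fh> // "";
        close $fh;
        last if length $content;      # LAN server has delivered the page
    }
    select(undef, undef, undef, $interval);   # sub-second sleep
    $elapsed += $interval;
}

print $content if length $content;    # hand the page back to the remote client
unlink $request_file;                 # fulfilled or timed out, so clean up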
Except for link/img targets everything works in testing; I can request a page and it renders on the remote client browser as it would on the LAN (minus images).
Does anyone have any thoughts on this?
The code is fairly simple and short; there's a single routine on the relay server with about 150-odd lines of very sparse code, and there's a single routine on the LAN server with about 100 lines of code (will grow a bit when I get the link/img replacement and get/post param forwarding working, but not much). The application that generates the page content being relayed is thousands of lines of code but I've kept the remote stuff separate.
I'm pretty sure there are dedicated appliances that do this kind of stuff, but does anyone have any experience with them?
There are no doubt other ways to skin this cat, but I'm interested in security, simplicity and of course cost. Aspects that I liked about this approach were that I didn't have to punch a hole in the router and that the process is controllable and monitorable from the client within the LAN (every poll outputs a request status summary).
Would be interesting to find out if you think the idea is good or shit, or if there are aspects that could be improved (no doubt there are plenty). Feel free to comment or not.
Thanks to all those who made SoylentNews a reality!
edit: the setup in this case is a little different from the usual DMZ/port forwarding case in that there aren't any ports exposed in the LAN router. I get through because the relay server only ever responds to outbound requests originating from the LAN server; there are never any requests originating from the relay server directly.
To everyone who contributed to first rollout, thank you! It was an amazing effort, and we couldn't have done it without you.
I've set down some notes and status, with an overview of where I see the project heading in the next few weeks. As always, we can stop and discuss if the community feels we should be moving in a different direction.
Thus begins the status:
Some have noticed that we don't have a structure or plan for development. This was *on purpose* for the duration of first release. I wanted to stay out of the developers' way and avoid anything that wasn't directly related to the rollout.
We were wildly successful, and can now proceed at a more leisurely pace. I have always intended to do development the right way - a strong foundation of tools, with people to oversee and coordinate the effort between people and other groups.
We have some overlords in place, but finding them is exceedingly difficult since I only know people from E-mail and for less than a week. For the near term, I'm the overlord of development, and I'm looking for someone to fill that spot. (And sys and style)
For this upcoming week I've told sys to take a break. Do minor bug fixes at a leisurely pace if they feel bored, but I want people who are relaxed and refreshed. I don't want to lose people, and this has already happened.
The people who made the rollout happen are in the sys group. It's tiny (about a dozen people) and is concerned with system and server issues: bandwidth tiers, Varnish, Sphinx, linode accounts, registrars, load balancers, and so forth.
There's a much bigger group "dev" which is all the people who want to help develop code. There's some overlap, and of course everyone in sys has been doing dev for the past week. Anyone in sys is welcome to do dev work at any time, someone in dev who wants to do sys work has to be vetted.
A third group is "content": story editors, graphics, the wiki, forums, IRC and related. I'm hoping we'll have a more rich and varied landscape of content than just the news feed; for example, Landon (the overlord of IRC) wants to try weekly IRC chats with notable people, and Cactus (of the wiki) suggested the wiki could have entries for interesting discussions which are largely settled, but which keep cropping up.
A fourth group is "style": how the site is presented. CSS, layout, usability, ergonomics, advice on functionality.
The fifth group, "business", has not officially started (I have a total of three, count them, three, volunteers). This will be business-related topics such as marketing, legal, finance, [business] governance, and so on.
So for the near term, for a week or so, the editors are serving us delicious and interesting stories, while our users are getting comfortable with the system.
There's been some concern about decisions made during first rollout. I promised that we would operate by community consensus, and I have to be looking at the big picture anyway. We can't afford to alienate anyone, especially in these first stages.
In light of this:
1) We will use Git/GitHub for source control, since this seems to be consensus.
2) NCommander has proposed a hierarchy of development servers (three, in addition to the production server IIRC) for development, testing and experimentation. We'll go with this because it's a good system that works for other projects (i.e. the structure has been vetted) and it covers or surpasses suggestions from the community.
3) We will revisit the bug tracking system. Dev thinks it may be appropriate to have different systems with different intent (different systems, not different projects within one system). Dev should sort this out in the next 2 weeks or so; I'll make an executive choice if there is no general agreement. Start discussing!
4) The next dev effort will be version 2.0 of the newsfeed. Whether this is a rewrite or fixup will be based on community input. I've looked at the perl code and NCommander's assessment that "it's not too bad" is entirely accurate for large swaths. I'm naming the effort v2.0.
As usual, if you have concerns feel free to E-mail and we'll talk.
R. Barrabas
For some time I have been working on a JavaScript library for handling timezones. The name is tzdata-javascript.
To use it, you would first load the library like this:
<script type="text/javascript" src="http://tzdata-javascript.org/tzdata-javascript.js">
</script>
(SoylentNews seems to break the URL for the .js file in a weird place, but I'm sure you can figure out what it should be...)
And then use it like this:
<script type="text/javascript">
// Load the timezones you want to use:
var la=new tzdata_javascript.zoneinfo("America/Los_Angeles");
var cph=new tzdata_javascript.zoneinfo("Europe/Copenhagen");
var hk=new tzdata_javascript.zoneinfo("Asia/Hong_Kong");
// Find the timestamp (in ms since the Epoch) you want to convert to local time:
var now=new Date().valueOf();
// Call the strftime() function of each of the timezones you loaded earlier:
alert(
"The time in Los Angeles is : "+la.strftime("%+",now)+"\n"+
"The time in Copenhagen is : "+cph.strftime("%+",now)+"\n"+
"The time in Hong Kong is : "+hk.strftime("%+",now)+"\n"
);
</script>
And that's about it... :-)
The library's website has some more demos and examples.