Legislation key to US President Barack Obama's trade agenda has passed a key hurdle in the Senate, just two weeks after it appeared to have failed.
The bill known as the Trade Promotion Authority (TPA) or, more commonly, Fast Track, makes it easier for presidents to negotiate trade deals.
Supporters see it as critical to the success of a 12-nation trade deal known as the Trans-Pacific Partnership (TPP).
The bill is expected to pass a final vote in the Senate on Wednesday.
Tuesday's 60-37 vote - just barely meeting the required 60-vote threshold - is the result of the combined efforts of the White House and many congressional Republicans to push the bill through Congress, despite the opposition of many Democrats.
This is primarily a tech news site, and it's generally good to avoid political news, but the TPP is a huge trade deal, negotiated in secret, that will have large ramifications for the world economy, which affects us all, and that also has large implications for the accountability of major world governments to their citizens.
(Score: 3, Insightful) by VortexCortex on Wednesday June 24 2015, @06:38PM
You think Congress isn't exerting itself? This in an age where global corporations that stand to benefit from the TPP have bought Congress, turning the nation into an oligarchy or plutocracy? [economyincrisis.org] (PDF [cambridge.org])
It's not Congress that needs more power, it's the people themselves. Though we're unlikely to get it, because unlike the corporations and plutocrats who buy off politicians, our cut is taken in taxes. What I'd like to see is programs proposed in Congress allocated funding according to what the population wants. Give Congress, perhaps, 25% of the funding pool, since sometimes they really do know more about what the public should fund, and leave the rest up to the public as to where their tax dollars get spent.

Of course this will shift the balance of power to those who control the media -- which is full of propaganda and falsehoods due to a 2012 amendment to the Smith Mundt Act [ wikipedia.org (Warning: Unicode in URL) ] that allows propaganda to be used against the US citizenry. Previously the propaganda had to be at least somewhat believable from the perspective of the government, but now no holds are barred. This is why some of us are currently fighting for more journalistic integrity... We can't fix anything so long as the 4th estate is against us. [youtube.com]

One way to help prevent news from being corrupted by conflicts of interest is to forgo corporate sponsorship and rely on community support, and new information outlets are starting to take off using this business model. This doesn't guarantee corruption is absent, but competition then incentivizes other outlets to point out falsehoods rather than stand in solidarity as propagandists, the way the mainstream media does. In both instances, budget allocation and media funding, the commonality is giving more influence to citizens directly rather than to gatekeepers of information.
It's folly to blame the Presidency for the failings of Congress when both are essentially bought-and-paid-for sock puppets of the same elites. In other words: it's not that Congress needs to take back presidential powers for itself, it's that the government needs less power in relation to its citizenry at this stage. If we're ever successful in swinging the pendulum toward less government power (a first, but necessary to prevent the cybernetic death of the USA), it will eventually swing too far, and more governmental power will become favorable again. Without allowing the pendulum to swing, the clock will stop for this nation.
(Score: 3, Interesting) by buswolley on Wednesday June 24 2015, @06:56PM
No problem this big comes from a single source. The decreasing role of congress is but one.
There is also a problem with Congress itself. For example, the House needs to go back to its roots and re-establish the traditional ratio between the represented and their representative (~25,000:1; http://www.thirty-thousand.org/) [thirty-thousand.org], which is now at least ~800,000:1. In the former case I might regularly run into my rep at the coffee shop, but in the present case? We are no longer represented.
subicular junctures
(Score: 3, Interesting) by VortexCortex on Wednesday June 24 2015, @07:55PM
(Warning: Unicode in URL)
Interesting. There are ways of spoofing a URL to make it appear as something it's not via alternate glyphs. In this case it's the "en dash" symbol that's triggering the flag, which I suppose could cause people to mistake an en dash for the ordinary dash in a domain name: - vs – (U+002D vs U+2013 respectively).
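The two codepoints are easy to tell apart programmatically even when the rendered glyphs are not; a quick check in Python:

```python
# Hyphen-minus (U+002D) and en dash (U+2013) can look alike in a URL bar,
# but their codepoints and Unicode names are unambiguous.
import unicodedata

for ch in ["-", "\u2013"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+002D  HYPHEN-MINUS
# U+2013  EN DASH
```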
Granted, wickedpedians could simply use U+002D to avoid such issues. However, what I posted, "https://en.wikipedia.org/wiki/Smith%E2%80%93Mundt_Act", does not contain Unicode; it contains ASCII, namely URL-escaped UTF-8-encoded Unicode characters, which some browsers display as Unicode glyphs in the URL bar.
Now, let's experiment. Here I encode http://soylentnews.org/ as URL escaped UTF-8 encoded Unicode characters: "https://%73%6F%79%6C%65%6E%74%6E%65%77%73%2E%6F%72%67/" homepage [https]
The lack of a warning demonstrates that the warning is misleading: it doesn't detect Unicode as such, only higher Unicode codepoints, perhaps those that don't overlap with ASCII. Now let's place a Unicode NAK ("Negative Acknowledgement") within the domain name and see whether it, being both an ASCII and a Unicode code for NAK, is acknowledged as "Unicode" being in the URL: "https://%73%6F%79%6C%65%6E%74%6E%65%77%73%2E%6F%72%67/%15" homepage/NAK []
I put it to you that with the adoption of UTF-8 in URLs such a warning is moot, as more frequent false positives will cause the dangerous links to be clicked anyway despite the filter warning about them. As more URLs adopt such encodings the signal-to-noise ratio will drop, and the filter will be unable to reliably detect the signal. Yet another instance of bubble gum and duct tape in the guts of our web. Finally, I'd like to point out that had one discovered an exploit by making such posts, one could be jailed under the Computer Fraud and Abuse Act...
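A filter along those lines can be sketched in Python (the function name is just an illustration): percent-decode first, then flag codepoints above U+007F. Note that the NAK control character (U+0015) still slips through, since it sits inside the ASCII range:

```python
# Sketch of a saner filter: percent-decode the URL, then flag any resulting
# codepoint above U+007F, instead of keying off the raw %XX escapes.
from urllib.parse import unquote

def non_ascii_codepoints(url: str) -> list[str]:
    decoded = unquote(url)  # collapses %XX escapes (UTF-8 by default)
    return [f"U+{ord(c):04X}" for c in decoded if ord(c) > 0x7F]

# The all-ASCII-escapes example above decodes cleanly:
print(non_ascii_codepoints("https://%73%6F%79%6C%65%6E%74%6E%65%77%73%2E%6F%72%67/"))  # []
# The Smith-Mundt link decodes to an en dash, which gets flagged:
print(non_ascii_codepoints("https://en.wikipedia.org/wiki/Smith%E2%80%93Mundt_Act"))  # ['U+2013']
# The NAK (%15) decodes to U+0015, within ASCII, so it is not flagged:
print(non_ascii_codepoints("https://soylentnews.org/%15"))  # []
```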
To that end I'd like to point out that there's a glitch in the source code that converts inline plaintext URLs into links. I won't give you the XSS exploit, which I, obviously, DID NOT exploit... on a live server, anyway... but you might want to look into why a URL that looks like this:
http:<b>//mal.net/</b>
becomes the following:
//mal.net" rel="url2html-24961">http://mal.net
Which is to say: look into what happens when an HTML element is placed after "http:". I must now conclude the experiments, since the exertion of even the slightest mental force by a curious individual will crack almost any website's security; web security is so bad that it's frequently cracked by accident... This is primarily not the fault of website coders but of the complex parsing headaches caused by the flawed original assumption of the Architects of the Web that sites would have no state (and thus no user-generated input to sanitize).
P.S. My my, just look at the interesting [domain.name]s after those URL-escaped links above...
(Score: 2) by Non Sequor on Thursday June 25 2015, @02:04AM
There's a pretty major problem with using any large character set with ambiguous glyphs in an application where the characters form identifiers intended to be unambiguously readable by both humans and machines. Honestly, it seems like the real solution is to break Unicode into language-specific (not necessarily disjoint) subsets and forbid identifiers that mix and match from multiple subsets. Allowing such identifiers will surely lead to pestilence and crop failures.
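A rough sketch of that single-subset rule in Python: the standard library exposes no script property directly, so approximating a character's script by the first word of its Unicode name (LATIN, CYRILLIC, GREEK, ...) is a crude but workable stand-in:

```python
# Crude mixed-script detector: approximate each letter's script via the
# first word of its Unicode name, and reject labels spanning more than one.
import unicodedata

def scripts(label: str) -> set[str]:
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

def mixes_scripts(label: str) -> bool:
    return len(scripts(label)) > 1

print(mixes_scripts("example"))        # False: all LATIN
print(mixes_scripts("examp\u0430le"))  # True: LATIN + CYRILLIC ('\u0430' is Cyrillic 'a')
```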
Write your congressman. Tell him he sucks.
(Score: 2) by VortexCortex on Thursday June 25 2015, @05:20AM
Good points, but some of this has been addressed.
A novel solution I've experimented with is to render the Unicode string, then run it through an OCR program that favors output of the "expected" characters, typically via multi-character recognition, which is sort of like on-the-fly language detection. Compare the raw input URL chars to the OCR'd output URL chars, and if any char doesn't match, it's likely an impostor URL using Unicode to mask itself. However, this is not a 100% foolproof solution. Without language detection it can generate false positives, as there are valid uses of Unicode in URLs and domain names. For instance, Unicode can be included in domain names via Punycode's ToASCII(), specified by Internationalized Domain Names. [wikipedia.org] (RFC3492 [ietf.org], required by RFC5890 [ietf.org], etc.) The Punycode function is complex because it attempts to avoid common look-alike characters. Rehash should be using this to sanity-check domain names, and not worry about the Unicode included after the domain name, as that's not important. Punycode is yet another encoding, used instead of the escaped UTF-8 of URL encoding, mostly because of the 63-character limit on domain-name labels, but also to remain somewhat human-readable (naming it Punnycode would be too punny).
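For what it's worth, Python ships a built-in "idna" codec (IDNA 2003) that applies ToASCII per label, so the Punycode round-trip can be observed directly (the domain here is the standard illustrative example, not a real site):

```python
# IDNA round-trip: the idna codec Punycode-encodes each non-ASCII label,
# marking it with the "xn--" prefix.
domain = "b\u00fccher.example"          # 'bücher.example'
ascii_form = domain.encode("idna")
print(ascii_form)                       # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))        # 'bücher.example' (round-trips)
```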
However, the complexity of Unicode encodings in URLs has little to do with the failure to sanitize the anchor tag's title attribute and the following "[domain.name]" text via something like the Perl CGI module's escapeHTML() prior to output; without that step, soylentnews.org is vulnerable to HTML and/or JavaScript injection...
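In Python, the counterpart to CGI's escapeHTML() is html.escape, which would neutralize markup injected into an attribute value (the payload below is just an illustration):

```python
# Escaping attribute-breaking characters before output: &, <, > and quotes
# all become entities, so injected markup stays inert text.
import html

malicious = '"><script>alert(1)</script>'
safe = html.escape(malicious, quote=True)
print(safe)  # &quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;
```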
(Score: 2) by TheLink on Thursday June 25 2015, @08:19AM
More than a decade ago I proposed an HTML tag that would disable active/fancy content inside it (the opening and closing tags need a matching random string, so injected content can't forge the closing tag).
That way even if the browser makers or HTML bunch come up with new features, the new stuff would still be disabled.
It's ridiculous to have a car with only "GO" pedals and not a single "STOP" or brake pedal, where to stop the car you are required to make sure that none of the "GO" pedals are pressed.
Apparently in recent years Mozilla has come up with something called CSP, Content Security Policy ( https://developer.mozilla.org/en-US/docs/Web/Security/CSP [mozilla.org] ), which supposedly helps do this sort of thing and is better than my original suggestion.
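As a sense of what that brake pedal looks like, a restrictive policy is a single response header; with something like the following, inline and third-party scripts are refused even if injected markup reaches the page (the exact directives here are an illustration, not a recommendation):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'
```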
FWIW if they had implemented my suggestion, many of those XSS worms wouldn't have worked.
And by now people could say "add another brake pedal option" to stop such stuff - since the concept of brake pedals would no longer be novel in the browser/HTML arena.