Google may turn its Bard generative AI chatbot into a widget on Pixel smartphones and tablets, according to a 9to5Google code-dive report. Bard is publicly available, but only through a web portal. An Android-accessible version, even one limited to Pixel devices, could help the company nab more of the market currently dominated by OpenAI and Microsoft through ChatGPT and the ChatGPT-powered Bing:
Though Google hasn't spread Bard beyond its initial entry point, other tools fueled by the same LaMDA large language model (LLM) have become more available. Generative AI text tools for Gmail, Docs, and other parts of Google's software suite now offer some version of the technology. Now it appears Google wants to make Bard a widget on the main Pixel screen.
[...] A built-in widget for Bard might be a way for Google to accelerate the adoption of its generative AI since Pixel devices and the Android OS are part of its ecosystem. Mobile apps with ChatGPT in some form, such as SoundHound and ParagraphAI, can't do what a first-party tool could.
Originally spotted on The Eponymous Pickle.
NASA's Lucy spacecraft adjusts course for asteroid flyby in November:
On May 9, NASA's Lucy spacecraft carried out a trajectory correction maneuver to set the spacecraft on course for its close encounter with the small main belt asteroid Dinkinesh. The maneuver changed the velocity of the spacecraft by only about 7.7 mph (3.4 m/s).
Even though the spacecraft is currently traveling at approximately 43,000 mph (19.4 km/s), this small nudge is enough to move the spacecraft nearly 40,000 miles (65,000 km) closer to the asteroid during the planned encounter on Nov. 1, 2023. The spacecraft will fly a mere 265 miles (425 km) from the small, sub-kilometer (roughly half-mile-wide) asteroid, while traveling at a relative speed of 10,000 mph (4.5 km/s).
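As a sanity check on the unit conversions quoted above, here is a minimal Python sketch. The conversion constant is my own addition; the input figures come from the quoted text:

```python
# Rough sanity check of the speed figures quoted in the Lucy story.
MPH_PER_MPS = 2.23694  # miles per hour in one metre per second

def mps_to_mph(mps: float) -> float:
    """Convert metres per second to miles per hour."""
    return mps * MPH_PER_MPS

# 3.4 m/s -> ~7.6 mph (the article's 7.7 mph is presumably rounded
# from a more precise delta-v value)
print(f"{mps_to_mph(3.4):.1f} mph")
# 19.4 km/s -> ~43,400 mph ("approximately 43,000 mph")
print(f"{mps_to_mph(19_400):,.0f} mph")
# 4.5 km/s -> ~10,100 mph ("10,000 mph")
print(f"{mps_to_mph(4_500):,.0f} mph")
```

The quoted mph and metric figures agree to within the article's rounding.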
The Lucy team will continue to monitor the spacecraft's trajectory and will have further opportunities to fine-tune the flight path if needed.
The Lucy team is also continuing to analyze the data collected from its spring instrument calibration campaign and make other preparations for the mission's first asteroid encounter. This encounter will provide a valuable test of the spacecraft's systems and procedures to make sure that everything operates as expected during the mission's high-speed asteroid encounters.
If you've ever had to write a program which interfaces directly with hardware — perhaps while writing a program for an MCU or embedded system or a kernel driver — you may have noticed a few common patterns in register map behaviour and design. I'm not sure anyone has ever really collected them together, so I decided to make a list of all the ones I can think of.
This is a post I have suspected I would have to write since late December last year.
By now you will know that SoylentNews.org is closing down on 30 June, but things have not been standing still behind the scenes since we first became aware of NCommander's decision at the end of last week. In fact, it has been a very busy weekend.
A small group of existing staff are looking at alternative possibilities for a 'replacement' site to keep the flow of stories going and allow discussions to continue. This is a big task, especially with only 38 days remaining in which to try to achieve it. Several possibilities spring to mind, Pipedot for example. I have reached out to Bryan but have not yet received a response. However, things are not as straightforward as they seem. The pipecode is written in PHP 5, which some of you will realise is no longer supported. We do not want to become dependent on old software which cannot be maintained into the future; that lesson has been taken on board and reinforced by NCommander's explanation regarding his decision announced today. There are other options, but at the moment it is still a search for what is available today that also appears maintainable into the future.
But the first thing we need to know is "Is there still sufficient interest in having a discussion site such as ours?" Do you, the community, still want to have your daily dose of stories and the ability to exchange views with many others on this site? Are there any community members who would be willing to join us in trying to establish such a site? Your views are crucial to everything that we do over the coming days and weeks. So please let us know what you think about whether a site is still required with all the alternative technology available today that simply didn't exist 9 years ago. What form should a new site take? What changes to how we operate are essential for you to continue to remain interested in the future site?
Of course, it cannot be a mirror image of what we have today - which many will see as a good thing! But I hope that we would be able to transfer existing accounts, usernames and passwords directly to any new site that we create. We would also have to start with a relatively simple site and build on that over time.
At the end of the day we would have to restart the voluntary subscriptions but not immediately. We can raise some funds to see us get established without the requirement of a financial commitment from the community. Subscriptions were always sufficient in the past and I don't see why that would not be the case in the future too. The fact that we currently have enough to keep this site going until next year bears witness to that. We have also found that we can significantly reduce our running costs based on our current community rather than being ready for a major stream of new members which never materialises. I have no grandiose ideas of becoming a huge site employing our own journalists but just a community that enjoys the discussions as we have been doing for several years. Nevertheless, we would also be trying to build on our existing community which is beginning to happen on this site now that things have settled down.
So don't hold back - let us know what you think.
This is the post I never thought I would have to make. I am also writing this post on behalf of SoylentNews PBC, the legal owner of SoylentNews, and not as a member of the staff or the community.
SoylentNews is going to shut down operations on June 30th.
This wasn't an easy decision to come to, and it's ultimately the culmination of a lot of factors, some of which were in my control and some of which weren't. A large part boils down to critical site maintenance not being properly performed for a very long time. Paying back the mountain of technical debt we've built up would require relaunching the site from scratch.
I'll discuss this more in depth below, but I can't personally justify the time any more, especially due to the negative impact that SN is having on my personal life.
Before we shut down, at least for the foreseeable future, I'm going to outline the situation as I see it, my own personal responsibility, and what happens next.
Let's start with the technical nitty gritty. SoylentNews was, in November 2022, at the point where it was about to have a fatal database crash. The database cluster was wedged in an invalid state. Backups weren't properly being done. As it was, we lost several days of postings after a hard crash. On top of that, we had multiple public facing machines running outdated versions of CentOS and Ubuntu running net facing services.
There are two distinct problems here, both of which have to be addressed.
The first is the site itself, and specifically what we can or can't do with it. At this point, rehash, the backend that runs SoylentNews, is nearly 30 years old, and it was written in what can be generously described as angry and especially esoteric Perl. Perl was already going extinct when we launched in 2014, and at this point is mostly relegated to legacy backend code which is slowly but surely disappearing.
Complicating matters is that rehash is specifically tied into Apache 2.2 via mod_perl, a version that is well past end of life and significantly out of date. While the website is well sandboxed and battle hardened, running obsolete code on the public Internet is not a smart move especially when combined with many of the other factors listed below.
Just to keep up with patched software, we would either need to port the site to Apache 2.4 and a recent version of mod_perl, or break the Apache dependency with an alternative like FastCGI. This would also require updating the base version of Perl 5 to something more recent and hoping all the necessary CPAN modules have either been updated or can be reasonably replaced, which is doubtful at best.
As the person who actually did the base port of rehash from Apache 1.3 to Apache 2, this is a massive project regardless of which way we would go. This would require a full rebuild of the /srv/soylentnews.org directory which alone could easily take weeks or months of work. That doesn't take into account all the other bits of infrastructure and software that would need to be reworked, rebuilt, or replaced.
When we migrated to rehash, we had more staff who could QA the site, and quite a bit went wrong even in that comparatively simple migration.
As we are now?
I don't see how it's possible anymore.
After everything that has played out, I'm having trouble working up sufficient motivation to work on the site and bring it up to a serviceable state when combined with the amount of friction I've experienced just getting us here.
I had hoped to hire outside help or at least raise enough through livestreaming other SN related work to offset the costs. At this point though I believe that is a lost cause as well. If this was the only major problem, it would be bad enough. Unfortunately, it's just the tip of a very large and very ugly iceberg.
The deeper problem is that everything else has bitrotted over time.
SN's backend is something of a jigsaw puzzle which is documented in one of three places: on the site, on the internal technical wiki (which is currently down), and on the old public facing wiki which is also down. None of that documentation was or is consistent with the actual state of reality, and quite a few parts, like the MySQL cluster, were somewhat esoteric.
In practice, if you want to know how anything was plugged into anything, it was a matter of pulling cables and figuring out what broke. It also doesn't help that the backend is notoriously noisy. That makes it hard to sort out real errors from the chaff. This was a large part of why the Zoo plugin, which does the sidebars, was broken for most of December. It also didn't help that we had three different OSes (Ubuntu, CentOS, Gentoo) which complicated system administration.
Furthermore, there have been major disagreements among the sysops about actually doing any major upgrades. Someone would complain that we should do something. There would be a lot of arguments about it. In most cases, nothing got done. Because of this ongoing friction, it became increasingly common for no one to install updates. This is why we never upgraded from Ubuntu 14.04 to 16.04 back in 2016. I eventually said we should just go to Gentoo, since there was a widespread belief that upgrading the distribution would break everything. This suggestion ultimately ended up with only our development machine on Gentoo, and that too was woefully out of date.
When I finally checked in in November 2022, after two years, the site had finally reached a breaking point. I talked with some people in #chillax, and I got a state of affairs from mechanicjay, and I decided to do what should have been done long ago.
I didn't ask for permission. I didn't wait for people to answer DMs. I just did it because we had done this go around one too many times in the past.
I will let the community decide if I was justified or not in doing so.
A lot of this involved installing over a decade of upgrades. Setting up and configuring firewalls and removing unneeded services. Backing up and decommissioning old boxes. Given the extended period of time without updates, you can imagine that I have at least some concern about the number of potentially vulnerable backend services that were exposed to the Internet.
I found no evidence of breach, but given the period of time, and general lack of maintenance, I am at best uneasy.
I could have done better.
In the end, I finally installed almost a decade of upgrades in December of 2022, but that only postponed the inevitable. I also trimmed the number of machines and services in an effort to be at least slightly more secure on the Internet. However, ultimately, without a way to bring in new users, SN is slowly going to attrition itself to death.
Some might argue that I simply let it be, or should have let it be in November, but I really did hope that I could pull it out of this death spiral. Over the last few months, it has become clear that the only way work is going to be done is if I do it or if a miracle happens.
The problem is: as part-owner, where do you go from here?
There's also the matter of liability.
Ultimately speaking, if something happened with SN, Matt and I would be jointly responsible since our names are on the legal documents. I tried to find someone to take my place, and failed. I am legally attached to something that is barely being maintained, and frankly, I can't carry this cross any further.
My Role In This Outcome
I guess a lot of this comes down to my personal responsibility. While SN is at its heart a community project, it is also a business, one for which I have served as president for its entire life. I really had no idea what I was doing when we started, and this had long-term effects on SN as a whole. Part of this was that we only had subscriptions as a revenue stream.
Without a more solid revenue stream, the PBC was essentially hostage to the small trickle of money from subscriptions. In a volunteer organization, it's a matter of "who shows up to do the work" dictating the direction of the site.
In the early days this wasn't a problem, I had plenty of free time, and people were often willing to help. That's largely how the site got ported to Apache 2, and why we were able to stay up for more than a year. Meanwhile, solving UTF-8 support was one of TMB's and MartyB's projects. As the early enthusiasm died off and staff began to leave, essential tasks were becoming less and less likely to be done.
That ultimately created a negative feedback loop in which technical debt continued to pile up.
SoylentNews also doesn't have a growing community, partially because we have very few inbound links and are fairly low in search results. In our early days, folks followed us from Slashdot, and some viral posts on places like Reddit and Hacker News did help to build the community, but this has largely evaporated.
Growth of some sort is important because communities have a natural attrition rate. People leave, die, or otherwise go inactive. Year over year, the community has shrunk, primarily because we don't bring in a lot of new blood.
Furthermore, the Internet as a whole has changed. When we started, GamerGate was yet to happen. The world couldn't even imagine the rise of the Trump presidency. In theory, the moderation system should have been able to handle disinformation, but the mod system requires a certain critical mass to work. Slashdot's mod system could only work as it does on a large community, and we found at least one critical flaw with its base assumption:
People rarely if ever downvote.
This, combined with ineffective anti-spam measures, meant that it was relatively easy to game the system, and bad actors outweighed the good. My perception was that SN's signal-to-noise ratio was tilting further toward noise year over year, and there were many conversations about this, which ultimately went nowhere. For me, personally, it finally came to a head with COVID. The amount of medical misinformation and similar disinformation got to the point that I felt we needed to drastically overhaul the site.
This led to some very bitter arguments.
Ultimately, I was overruled, and I attempted to resign after bitter arguments in the staff channel. My resignation was written, but ultimately never posted, and I left on bad terms with the staff at the time. Consequently, I remained President of the PBC. At that time, I requested Matt remove me from the position, but we never formalized this, primarily because there was no one to replace me. It should also be noted that we were missing a secretary and unable to find a replacement after mrcoolbp withdrew due to personal life reasons.
I could have, and perhaps should have, forced the issue then, but I could still remember how the domain was hijacked in our early days, and didn't want this to be a case of sour grapes. I also had a reasonable belief that SN would still be maintained by the active staff. I turned my attention towards my other endeavors such as my YouTube channel and tried to put it behind me.
Two years passed.
That was not the end of the infighting, which ultimately led to TMB leaving in 2021. The site was by then running with the bare minimum of maintenance that mechanicjay, supported by audioguy, could give it, with no hope of a long-term solution in sight. Had I not checked in and decided to do emergency maintenance, the odds are that it would have been a matter of weeks or months before a severe system crash irreparably corrupted the database.
As it was, we had two hard crashes that lost weeks of posts. There were no functioning backups that I could find.
I did two emergency rounds of maintenance that saw the backend database replaced with a standard vanilla MySQL instance and drastically downsized the number of machines, cutting the monthly bill by more than half.
However, it's become clear to me that this was too little, too late.
Many of the issues that were present when I resigned were still here. At the end of the day, I found myself caught between my responsibility towards my site and my own frustrations with what it had become. This, combined with a personal disaster in my life starting in December, meant that I had very little time for SN.
This was also combined with the dawning realization of how difficult it would be to get new sysops and devs to replace myself and those who had left. While I was willing to put at least some of the legwork in, no one really wanted to sit down and help with the business side of things. It felt like everyone else had decided we should all hum loudly. While we had some volunteers for sysops, my lack of time, combined with the relatively arcane nature of our backend, meant mostly nothing got done.
I honestly don't know if there was one specific misstep that led to this outcome. However, the need for sites like Slashdot and SoylentNews was already passing when we launched. Slashdot is a shell of itself, and most of the role of news aggregator is taken up with sites like reddit and HackerNews. The need for something like SN has largely disappeared.
That means for SN to exist, it has to exist for itself, and well, that's the rub of it. SN stopped being maintained while I was absent. It wasn't being maintained well even before that point. It's not going to be maintained now, simply because I can't justify the time and effort anymore and no one else is putting time or effort into it either.
Suggestions like running ads to try and pay for some of the maintenance costs have either been rejected or at least treated with enough skepticism to make me doubtful they would help.
Finally, I'm tired of fighting over every single issue which in the end leads to nothing being done and everyone just walking away unhappy.
What Would Have Been Needed To Save SN?
As before, I'll break this into two sections, the technical and the non-technical. To summarize, it essentially required people to take responsibility and pledge to fix it, as well as relieve me of my position at the PBC.
Technically speaking, we'd need to be able to refresh the site infrastructure as well as the site's backend dependencies. You're essentially dealing with a legacy Linux install that has been upgraded from Ubuntu 12.04 to Ubuntu 22.04 and that at least a dozen sysops have worked on.
To reduce site admin burden, we'd probably end up migrating email and most services besides IRC and the website to third-party hosting providers. This would have solved many of the email and registration issues that have plagued the site since GMail made their spam barriers extremely hostile to externally hosted SMTP mail.
We would also need a development environment that properly tracked with production to allow changes to be done incrementally and rolled back, something that was a continuous problem throughout every major site upgrade. This would let us test each aspect of the overhaul and deploy it piecemeal instead of having the site be broken for weeks or months as happened with the much smaller November upgrade.
Ideally, we would use an automation deployment solution such as GitHub Actions which would make sure the machine state was always in sync with the build files, and allow for easy and rapid deployment of backend patches and security updates.
With all this done, site maintenance could easily be done en masse across all machines, without risk of the site breaking in new and arcane ways.
I did talk to Matt about the possibility of either fundraising or selling stock in an effort to finance it.
I also made multiple efforts to find someone willing to seriously take over the site and the PBC. There were a few email discussions that ultimately went nowhere.
What it boils down to is that to do anything with the site, I would have to put in legwork that, after everything that has been said and done, I am no longer willing to do nor is anyone truly stepping in to try and take over for me.
It doesn't help that nearly every single thing I've laid out here was shot down by at least one other member of staff while at the same time no realistic alternatives were worked on or even proposed.
What Happens Now?
At this point, we need to get the expenses of the PBC to zero. We have about $1,500 USD in the bank, most of which will go to handle our shutdown fees. I want to give a window for people to exchange contact info and write goodbyes. Subscriptions will be disabled on the site by the time this post goes live. SN doesn't have a robust infrastructure to process refunds, and TMB wrote most of the code involving that. I am discussing with Matt what our options here are, but in the worst-case scenario, any leftover will be donated to the EFF.
A final backup of the VMs and site database will be taken, and soylentnews.org will be redirected to a static page. Everything representing the site will be archived and taken offline. I'm going to hold the domain name and backups in trust in the hope that circumstances in the future may allow the site to return in some form.
I wish I did not need to say such a thing might happen, but all things must end.
Until we meet again, ~ NCommander
The Moon is not made of green cheese after all:
A thorough investigation has found that the inner core of the Moon is, in fact, a solid ball with a density similar to that of iron. This, researchers hope, will help settle a long debate about whether the Moon's inner heart is solid or molten, and lead to a more accurate understanding of the Moon's history – and, by extension, that of the Solar System.
"Our results," writes a team led by astronomer Arthur Briaud of the French National Centre for Scientific Research in France, "question the evolution of the Moon magnetic field thanks to its demonstration of the existence of the inner core and support a global mantle overturn scenario that brings substantial insights on the timeline of the lunar bombardment in the first billion years of the Solar System."
[...] To figure it out once and for all, Briaud and his colleagues collected data from space missions and lunar laser ranging experiments to compile a profile of various lunar characteristics. These include the degree of its deformation by its gravitational interaction with Earth, the variation in its distance from Earth, and its density.
[...] And they found that the lunar core is very similar to that of Earth – with an outer fluid layer and a solid inner core. According to their modeling, the outer core has a radius of about 362 kilometers (225 miles), and the inner core has a radius of about 258 kilometers (160 miles). That's about 15 percent of the entire radius of the Moon.
The inner core, the team found, also has a density of about 7,822 kilograms per cubic meter. That's very close to the density of iron.
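The quoted proportions are easy to verify. Here is a quick Python sketch; the mean lunar radius and the reference density of iron are my own assumed values, not figures from the article:

```python
# Check the article's "about 15 percent" and "very close to iron" claims.
MOON_RADIUS_KM = 1737.4      # mean lunar radius (assumed reference value)
IRON_DENSITY = 7874          # kg/m^3, pure iron at standard conditions (assumed)

inner_core_radius_km = 258   # from the study's modeling
inner_core_density = 7822    # kg/m^3, from the study

# ~14.9% of the Moon's radius, matching "about 15 percent"
fraction = inner_core_radius_km / MOON_RADIUS_KM
print(f"inner core is {fraction:.1%} of the Moon's radius")

# Within about 1% of pure iron's density
rel_diff = abs(inner_core_density - IRON_DENSITY) / IRON_DENSITY
print(f"density is within {rel_diff:.1%} of iron")
```

Under these assumed reference values, both of the article's rounded claims check out.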
[...] We know that not long after it formed, the Moon had a powerful magnetic field, which started to decline about 3.2 billion years ago. Such a magnetic field is generated by motion and convection in the core, so what the lunar core is made of is deeply relevant to how and why the magnetic field disappeared.
Briaud, A., Ganino, C., Fienga, A. et al. The lunar solid inner core and the mantle overturn. Nature (2023). https://doi.org/10.1038/s41586-023-05935-7
Intel Publishes "X86-S" Specification For 64-bit Only Architecture
Intel quietly released a new whitepaper and specification for their proposal of "X86-S", a 64-bit-only x86 architecture. If their plans work out, in the years ahead we could see a revised, 64-bit-only x86 architecture.
Entitled "Envisioning a Simplified Intel Architecture", the whitepaper lays out Intel engineers' case for a 64-bit mode-only architecture. Intel is still said to be investigating the 64-bit mode-only architecture, which they also refer to as "x86S". Intel is hoping to solicit industry feedback while they continue to explore a 64-bit mode-only ISA.
[...] Under this proposal, those wanting to run legacy 32-bit operating systems would have to rely on virtualization. To further clarify, 32-bit x86 user-space software would continue to work on modern 64-bit operating systems with X86-S.
Also at Tom's Hardware.
Almost 20 years ago, Senator Ted Stevens was widely mocked and ridiculed for referring to the Internet as a series of tubes, even though he led the Senate Commerce Committee, which was responsible for regulating it. And just a few years ago, members of Congress were mocked for their lack of understanding of Facebook's business model when Mark Zuckerberg testified about the Cambridge Analytica scandal.
Fast forward to this week, when the Senate Judiciary Committee held one of the most productive hearings in Congress in many years, taking up the challenge of how to regulate the emerging AI revolution. This time around, the senators were well-prepared, knowledgeable and engaged. Over at ACM, Marc Rotenberg, a former Staff Counsel for the Senate Judiciary Committee has a good assessment of the meeting that notes the highlights and warning signs:
It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, "This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past."
Congress has struggled in recent years because of increasing polarization. That makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for non-partisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.
[...] When asked about solutions for privacy, the witnesses tended toward proposals, such as opt-outs and policy notices, that will do little to curb the misuse of AI systems. The key to effective legislation will be to allocate rights and responsibilities for AI developers and users. This allocation will necessarily be asymmetric as those who are designing the big models are far more able to control outcomes and minimize risk than those who will be subject to the outputs. That is why regulation must start where the control is most concentrated. A good model for AI policy is the Universal Guidelines for AI, widely endorsed by AI experts and scientific associations.
[...] The news media is still captivated by tech CEOs. Much of the post-hearing reporting focused on Altman's recommendation to Congress. That is not how democratic institutions operate. Industry support for effective legislation will be welcomed by Congress, but industry does not get the final say. There are still too many closed-door meetings with tech CEOs. Congress must be wary of adopting legislation favored by current industry leaders. There should be more public hearings and opportunities for meaningful public comment on the nation's AI strategy.
TFA also includes arguments on whether we even need legislation and observations on the risk of repeating past mistakes, among other points.
The UK government is quietly expanding and developing a controversial surveillance technology that could be capable of logging and storing the web histories of millions of people:
Official reports and spending documents show that in the past year, UK police have deemed the testing of a system that can collect people's "internet connection records" a success, and have started work to potentially introduce the system nationally. If implemented, it could hand law enforcement a powerful surveillance tool.
Critics say the system is highly intrusive, and that officials have a history of not properly protecting people's data. Much of the technology and its operation is shrouded in secrecy, with bodies refusing to answer questions about the systems.
At the end of 2016, the UK government passed the Investigatory Powers Act, which introduced sweeping reforms to the country's surveillance and hacking powers. The law added rules around what law enforcement and intelligence agencies can do and access, but it was widely criticized for its impact on people's privacy, earning it the name the "Snooper's Charter."
Particularly controversial was the creation of so-called internet connection records (ICRs). Under the law, internet providers and phone companies can be ordered—with a senior judge approving the decision—to store people's browsing histories for 12 months.
[...] Little is known about the development and use of ICRs. When the Investigatory Powers Act was passed, internet companies said it would take them years to build the systems needed to collect and store ICRs. However, some of those pieces may now be falling into place. In February, the Home Office, a government department that oversees security and policing in the UK, published a mandatory review of the operation of the Investigatory Powers Act so far.
The review says the UK's National Crime Agency (NCA) has tested the "operational, functional, and technical aspects" of ICRs and found a "significant operational benefit" of collecting the records. A small trial that "focused" on websites that provided illegal images of children found 120 people who had been accessing these websites. It found that "only four" of these people had been known to law enforcement based on an "intelligence check."
WIRED first reported the existence of the ICR trial in March 2021, when there were even fewer details about the test. It is still unclear which telecom companies were involved. The Home Office's February report is the first official indication that the trial was useful to law enforcement, and could help lay the groundwork for expanding the system across the UK. The Home Office review also states its trial found that "ICRs appear to be currently out of reach for some potentially key investigations," raising the possibility that the law may be changed in the future.
[...] The Home Office FOIA response also refused to provide details of an internal review into ICRs, citing national security and law enforcement grounds. A Home Office spokesperson said the UK has "one of the most robust and transparent oversight regimes for the protection of personal data and privacy anywhere in the world" and confirmed that trials of ICRs are ongoing.
[...] The possible expansion of ICR collection in the UK comes as governments and law enforcement agencies globally try to gain access to increasing amounts of data, particularly as technology advances. Multiple nations are pushing to create encryption backdoors, potentially allowing access to people's private messages and communications. In the US, a storm is brewing about the FBI's use of Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows it to intercept the communications of overseas targets.
Haidar of Privacy International says that creating powers to collect more of people's data doesn't result in "more security" for people. "Building the data retention capabilities of companies and a vast range of government agencies doesn't mean that intelligence operations will be enhanced," Haidar says. "In fact, we argue that it makes us less secure as this data becomes vulnerable to being misused or abused."
Industry insiders are warning that hundreds of pot shops could go out of business this year:
California's pot industry could be on the verge of an "extinction event," with pot shops going out of business as they miss tax payments and sink under millions of dollars of debt.
Debt problems have plagued the industry for years — a 2022 report estimated that the industry was collectively sitting on over $600 million in debt — but a change in tax law that took effect this year has stakeholders worried the mounting debt bubble will finally become fatal. A San Francisco politician introduced a bill in the state legislature this year that would crack down on pot businesses that don't pay their debts.
State law recently shifted the burden for paying cannabis excise taxes from distributors to retailers, with the first tax payments due May 1. Retailers have historically had the most trouble paying their bills, and it appears that many shops lack the cash to pay their state excise taxes, according to new state tax data obtained by SFGATE.
Over 13% of California's retailers, or 265 pot shops, failed to make any tax payment by the May 1 deadline, according to the California Department of Tax and Fee Administration. Those businesses are now facing a 50% penalty on the taxes they owe, which could be a death blow to many shops.
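As a rough sketch of how such a flat penalty compounds an unpaid bill (hypothetical figures; real CDTFA assessments may also add interest and other charges):

```python
def amount_owed_after_penalty(unpaid_tax: float, penalty_rate: float = 0.50) -> float:
    """Unpaid excise tax plus a flat late penalty (simplified sketch)."""
    return unpaid_tax * (1 + penalty_rate)

# Hypothetical: a shop that missed a $40,000 excise payment now owes $60,000.
print(amount_owed_after_penalty(40_000))  # 60000.0
```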
[...] The entire cannabis supply chain has faced a chronic debt problem: Farmers report never getting paid for thousands of dollars in product, distributors say retailers don't pay them and have started blacklisting some shops, and even the federal government is getting stiffed. An analysis done last fall by Green Market Report found that 10 of the largest pot companies in the country owed over $500 million combined in unpaid taxes.
Related: How State Cannabis Legalization Became a Boon for Corruption
Iowa State researchers in psychology and engineering found women experience cybersickness with virtual reality headsets more often than men:
Psychology professor Jonathan Kelly studies human-computer interaction, spatial cognition and virtual reality. He says gender discrepancies in cybersickness may not seem that important when it's related to video games and other forms of entertainment.
"But it's still a problem, and when VR gets to the point where it's a bigger part of job training or education in a classroom, it's even more important to make sure people can access this technology. If not, a lot of people are going to get left out, and there could be a backlash," says Kelly.
Like motion sickness, cybersickness can occur when there's a mismatch between visual motion and body motion. Symptoms, including nausea, dizziness, headaches and eye fatigue, usually resolve quickly after removing the headset. But in severe cases, they sometimes last for hours.
[...] As part of a larger study on adaptation to cybersickness, the ISU researchers recruited 150 participants to play up to 20 minutes of a VR game with a headset. The participants were new to VR and could stop if they felt too sick to continue. The researchers found women ended the game early twice as often as men and reported a sickness intensity that was 40% higher.
[...] For the second paper, the researchers explored whether the distance between an individual's pupils could help explain the gender difference in cybersickness. VR headsets have an adjustable lens set-up to accommodate different users, but some people fall outside the range. The researchers found women participants on average had smaller distances between their pupils than men, but it did not predict whether they would get cybersick during the game.
What seemed to matter more was whether they had previous experience with motion sickness or screen sickness (e.g., feeling sick in movie theaters or while playing a video game).
"Women reported experiencing more motion sickness and screen-based sickness than men, and this increased susceptibility is part of the reason that women experience more cybersickness," says Kelly.
J. W. Kelly, S. B. Gilbert, M. C. Dorneich and K. A. Costabile, "Gender differences in cybersickness: Clarifying confusion and identifying paths forward," 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Shanghai, China, 2023, pp. 283-288, doi: 10.1109/VRW58643.2023.00067
T. A. Doty, J. W. Kelly, M. C. Dorneich and S. B. Gilbert, "Does interpupillary distance (IPD) relate to immediate cybersickness?," 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Shanghai, China, 2023, pp. 661-662, doi: 10.1109/VRW58643.2023.00173
New Futurama episodes are hitting Hulu in July:
One of the greatest cartoons of the modern era is making a return after a 10-year hiatus, and you won't have to wait long to start enjoying brand-new episodes. The first new episode of season eight will premiere on Hulu on July 24, with subsequent episodes to follow on Mondays.
Hulu ordered a 20-episode run of Futurama in February of last year that will see many of the original show's voice actors and crew return, including executive producers Matt Groening (The Simpsons) and David X. Cohen (Beavis and Butt-Head). Billy West, Katey Sagal, Maurice LaMarche, Tress MacNeille, Lauren Tom, Phil LaMarr, and David Herman are all back, as is John DiMaggio, who voices Bender.
Futurama premiered on Fox in the spring of 1999 and ran on the network for five seasons before getting canceled. The show returned in 2010 for a two-season run on Comedy Central, with the final episode of that deal airing on September 4, 2013. The order with Hulu will mark the program's third platform, or fourth if you count the direct-to-DVD movies.
Season eight will initially consist of 10 episodes. It is unclear if the remaining 10 episodes from the original order will arrive as a second half of season eight or a new season entirely.
According to Hulu's description, new viewers will be able to pick up the series from here while Futurama diehards will be rewarded with payoffs to longstanding mysteries. Highlights are said to include developments in the relationship of Fry and Leela, the contents of Nibbler's litter box, the whereabouts of Kif and Amy's tadpoles, and the history of evil Robot Santa.
A language model trained on the fringes of the dark web... for science:
We're still early in the snowball effect unleashed by the release of Large Language Models (LLMs) like ChatGPT into the wild. Paired with the open-sourcing of other GPT (Generative Pre-Trained Transformer) models, the number of applications employing AI is exploding. And as we know, ChatGPT itself can be used to create highly advanced malware.
As time passes, the number of applied LLMs will only grow, each specializing in its own area, trained on carefully curated data for a specific purpose. And one such application just dropped, one that was trained on data from the dark web itself. DarkBERT, as its South Korean creators called it, has arrived — follow that link for the release paper, which gives an overall introduction to the dark web itself.
DarkBERT is based on the RoBERTa architecture, an AI approach developed back in 2019. It has seen a renaissance of sorts, with researchers discovering it had more performance to give than was extracted from it in 2019: the model was significantly undertrained when released, far below its maximum efficiency.
Originally spotted on The Eponymous Pickle.
Related: People are Already Trying to Get ChatGPT to Write Malware
Study finds 90% of Australian teachers can't afford to live where they teach:
The teaching profession is already struggling with shortages and a lack of new candidates in a situation widely regarded as a crisis. Now, research warns that teachers are being priced out of housing near their schools, with many areas even too expensive for educators at the top of the pay scale.
The study, published recently in The Australian Educational Researcher, analyzed quarterly house sales and rental reports in New South Wales (NSW) and found more than 90% of teaching positions across the state—around 50,000 full-time roles—are located in Local Government Areas (LGAs) where housing is unaffordable on a teacher's salary.
The situation is particularly dire for new teachers. There are 675 schools—nearly 23,000 full-time teaching positions—where the median rent for a one-bedroom place is unaffordable on a graduate teacher's salary.
Housing is considered unaffordable if a person spends more than 30% of their income on housing costs—sometimes called being in housing stress. Those in housing stress may not have enough money remaining to cover the cost of food, clothing, and other essentials.
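The 30% affordability threshold above reduces to a simple check, sketched here with hypothetical figures (the salary and rent numbers are illustrative assumptions, not from the study):

```python
def in_housing_stress(annual_income: float, annual_housing_cost: float,
                      threshold: float = 0.30) -> bool:
    """Return True if housing costs exceed the affordability threshold."""
    return annual_housing_cost > threshold * annual_income

# Hypothetical example: a graduate teacher earning $75,000/year
# facing $600/week rent (52 weeks = $31,200/year).
annual_rent = 600 * 52
print(in_housing_stress(75_000, annual_rent))  # True: rent is ~42% of income
```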
But affordability isn't just an issue for early career teachers. For experienced educators at the top of the pay scale, 70 schools—about 2,000 full-time roles—are in an LGA where a single-bedroom dwelling is also unaffordable.
"The study shows the last time a first-year teacher salary could comfortably afford the rent for a one-bedroom dwelling was around a decade ago," says Professor Scott Eacott, the author of the study and Deputy Director of the Gonski Institute for Education at UNSW Arts, Design & Architecture.
"Fundamentally, there's been an increasing gap between salary and the costs of housing that the standard pay rise isn't covering, and it's pushing teachers further away from their workplaces or out of the profession entirely.
"The issue is not just limited to teachers, but all essential workers who are increasingly finding it difficult to find affordable places to live within a reasonable distance of their workplaces."
"The school system is struggling to find enough teachers as it is," Prof. Eacott says. "If teachers can't afford to live near or within reasonable commuting distance of their schools, we can only expect those shortfalls to continue to grow."
[...] Prof. Eacott says part of the challenge is that no single government department or the private sector is ultimately responsible for housing essential workers. While more investment from superannuation funds in essential worker housing developments is welcome, it won't be enough to address the issue at scale.
"The simple answer is we do need to be paying teachers more. But that may not necessarily solve supply problems," Prof. Eacott says. "For example, it is just incredibly difficult right now for teachers to find a place to rent given record low vacancy rates.
"It's also important that we're not confining teachers to just teacher apartments, but creating pathways to home ownership."
[...] "We rely so much on our teachers, so it's only fair we take steps towards providing them and other essential workers with affordable and secure housing options," Prof. Eacott says.
Eacott, Scott. The systemic implications of housing affordability for the teacher shortage: the case of New South Wales, Australia [open], The Australian Educational Researcher (DOI: 10.1007/s13384-023-00621-z)
Deep-sea researchers have used two submersibles to make the first full, three-dimensional scan of the wreck of the sunken passenger ship Titanic, including much of the 3-mile-long debris field. This is a major step forward in evidence-based analysis of a wreck now more than a hundred years old.
The new scan was "devoid of that," said Titanic analyst Parks Stephenson, adding, "It is completely based on data and not human interpretation and that is why we are now seeing it in its larger context for the first time ever."
Atlantic Productions said "one major area of deterioration" had already been observed in the officers' quarters, which include the room of Captain Edward John Smith, where "the iconic captain's bathtub has now disappeared from view."
"Now we're getting objective, so we can get really serious with the science of understanding the wreck," Stephenson said.
He added that he was "absolutely convinced," that the photogrammetry model would now be used "not just for Titanic, but for all underwater exploration," because it "ushers in a new phase of exploration and analysis."
Much of the wreck lies in two main pieces, far apart from each other, at a depth of about 4,000 meters. Around 700,000 images were taken and stitched together to create the model.
(2022) Researchers Discover Wreck of Ship that Tried to Warn the Titanic
(2022) OceanGate Ramps Up the Research for its Second Deep-sea Expedition to the Titanic
(2020) An Aurora that Lit Up the Sky Over the Titanic Might Explain Why It Sank
(2020) US Court Grants Permission to Recover Marconi Telegraph from Titanic's Wreckage [Updated]
(2018) Finding the Titanic with ROVs and Navy Funding