Got my first successful heart attack one month before my birthday - many happy returns.
maybe I still have a chance to see the bug stopping my modpoints cleared before I die, but I won't hold my breath, I'm still short of it :large-grin:
But I digress. Or just decided I might as well waste the rest of my life browsing the Internet. Or something.
In any case, if you reck'n you have 20 mins to waste, watch on YT: "Social Constructs (or, 'What is A Woman, Really?')"
You have no chance to survive make your time.
---
SoylentNews Archive Expander to go here
---
Archived Journals:
NCommander (2) ***
mechanicjay (7)
stderr (11)
mattie_p (13)
The Mighty Buzzard (18)
Barrabas (22)
AudioGuy (24)
bryan (29)
chromas (34)
Covalent (43)
janrinok (52)
Blackmoore (57)
paulej72 (58)
juggs (63)
xlefay (65)
unitron (70)
Hairyfeet (75)
martyb (76)
Lagg (105)
RobotMonster (130)
c0lo (156)
wjwlsn (171)
crutchy (179)
hemocyanin (186)
Sir Finkus (192)
TheRaven (270)
cosurgi (272)
NotSanguine (285)
pbnjoe (313)
tdk (346)
CoolHand (438)
cmn32480 (443)
BlackHole (530)
Phoenix666 (552)
stormwyrm (717)
Freeman (732)
Jaruzel (812)
takyon (881) ***
cafebabe (894)
n1 (993)
Marand (1081)
Subsentient (1111)
frojack (1554)
Snow (1601)
maxwell demon (1608)
GungnirSniper (1671)
dbot (1811) (FRIENDLESS)
Yog-Yogguth (1862)
tynin (2013)
velex (2068)
MichaelDavidCrawford (2339)
aristarchus (2645)
Woods (2726)
Jeremiah Cornelius (2785)
anubi (2828)
gishzida (2870)
bradley13 (3053)
ikanreed (3164)
GreatAuntAnesthesia (3275)
jasassin (3566)
Bot (3902)
canopic jug (3949)
Bytram (4043)
kaszz (4211)
bzipitidoo (4388)
Arik (4543)
Azuma Hazuki (5086)
Sulla (5173)
kurenai.tsubasa (5227)
https (5248)
AthanasiusKircher (5291)
charon (5660)
butthurt (6141)
barbara hudson (6443)
fyngyrz (6567)
realDonaldTrump (6614)
jmichaelhudsondotnet (8122)
Mojibake Tengu (8598) ***
Zinnia Zirconium (11163)
nostyle (11497)
*** More to archive.
Nearly 40 years ago, in Chevron v. Natural Resources Defense Council, the Supreme Court ruled that courts should defer to a federal agency’s interpretation of an ambiguous statute as long as that interpretation is reasonable. On Monday, the Supreme Court agreed to reconsider its ruling in Chevron.
The question comes to the court in a case brought by a group of commercial fishing companies. They challenged a rule issued by the National Marine Fisheries Service that requires the fishing industry to pay for the costs of observers who monitor compliance with fishery management plans.
Relying on Chevron, a divided panel of the U.S. Court of Appeals for the District of Columbia Circuit rejected the companies’ challenge to the rule. Judge Judith Rogers explained that although federal fishery law makes clear that the government can require fishing boats to carry monitors, it does not specifically address who must pay for the monitors. Because the NMFS’s interpretation of federal fishery law as authorizing industry-funded monitors was a reasonable one, Rogers concluded, the court should defer to that interpretation.
Memory prices have slipped below $2/GB (US). I got someone to buy 32 GB DDR4-3200 SO-DIMM for under $60, but it was as low as $50 two weeks ago.
Here's 32 GB DDR4-3600 for $61.
Even DDR5 is not too bad, for example $92 to $100 for 32 GB DDR5-6000, and slow DDR5-4800 is being dumped for less.
If we haven't reached rock bottom for DDR4 prices, it will probably happen within the next couple of months:
Memory market to bottom out in 2Q23 with uncertain recovery prospects
Samsung Becomes Latest Memory Fab to Cut Production Amidst Post-Pandemic Slump
Samsung may cut quarter of chip production to ease inventory woes
DRAM spot prices stop falling
Samsung faces weakest quarter since 2009 as memory chip market in 'worst slump in decades'
SK Hynix sees H2 memory chip rebound; outlook helps shares brush off record loss
Micron to stop lowering memory chip prices starting May, say sources
Make sure to keep a record of your purchase so you can participate in a future class action lawsuit.
Edit: Poor Earnings Force Samsung To Reduce Production Of DDR4 As Pricing Takes A Rollercoaster Dive
I just got this email:
Bard has just been updated with coding capabilities
Code and debug with Bard
This update introduces software development support for over 20 programming languages, including JavaScript, Python, SQL, TypeScript, and even Google Sheets functions.
Collaborate with Bard to help generate, explain, and debug your code. Google Colab users can also export Python code generated by Bard directly to Google Colab to seamlessly continue programming tasks.
>Write a javascript function for calculating the dates of Fat Tuesday, Ash Wednesday, Good Friday, Easter Sunday, and Pentecost, for a given year.
The Draft 1 it presented to me was incorrect garbage, as was Draft 3, but Draft 2 might be correct:
function calculateDates(year) {
// Calculate the date of Easter Sunday.
var easterSunday = easterDate(year);

// Calculate the dates of the other holidays.
var fatTuesday = new Date(easterSunday - 47 * 24 * 60 * 60 * 1000);
var ashWednesday = new Date(fatTuesday - 1 * 24 * 60 * 60 * 1000);
var goodFriday = new Date(easterSunday - 2 * 24 * 60 * 60 * 1000);
var pentecost = new Date(easterSunday + 49 * 24 * 60 * 60 * 1000);

// Return the dates of the holidays.
return {
fatTuesday: fatTuesday,
ashWednesday: ashWednesday,
goodFriday: goodFriday,
easterSunday: easterSunday,
pentecost: pentecost
};
}

// Calculate the date of Easter Sunday.
function easterDate(year) {
// Calculate the year of the Paschal full moon.
var yearOfPaschalFullMoon = (year - 1) % 19 + 1;

// Calculate the day of the week of the Paschal full moon.
var dayOfWeekOfPaschalFullMoon = (yearOfPaschalFullMoon + 1) * 2 + (yearOfPaschalFullMoon > 11 ? 4 : 0) % 7;

// Calculate the date of Easter Sunday.
var easterSunday = new Date(year, 3, 22 + dayOfWeekOfPaschalFullMoon - 7);

// If the Paschal full moon falls on a Sunday, then Easter Sunday is the following Sunday.
if (easterSunday.getDay() === 0) {
easterSunday.setDate(easterSunday.getDate() + 1);
}

// Return the date of Easter Sunday.
return easterSunday;
}
Nope, it's returning the wrong dates. Nice try.
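For comparison, here is my own sketch (not Bard output) of a version that does return correct dates, using the well-known anonymous Gregorian computus (the Meeus/Jones/Butcher algorithm) instead of Bard's invented Paschal-full-moon arithmetic. Note also that Ash Wednesday is the day after Fat Tuesday (Easter minus 46 days), which Bard had backwards:

```javascript
// Anonymous Gregorian computus (Meeus/Jones/Butcher algorithm).
// Returns the Date of Easter Sunday for the given year.
function easterSunday(year) {
  var a = year % 19;
  var b = Math.floor(year / 100);
  var c = year % 100;
  var d = Math.floor(b / 4);
  var e = b % 4;
  var f = Math.floor((b + 8) / 25);
  var g = Math.floor((b - f + 1) / 3);
  var h = (19 * a + b - d - g + 15) % 30;
  var i = Math.floor(c / 4);
  var k = c % 4;
  var l = (32 + 2 * e + 2 * i - h - k) % 7;
  var m = Math.floor((a + 11 * h + 22 * l) / 451);
  var month = Math.floor((h + l - 7 * m + 114) / 31); // 3 = March, 4 = April
  var day = ((h + l - 7 * m + 114) % 31) + 1;
  return new Date(year, month - 1, day);
}

// Date arithmetic via setDate avoids the Date-plus-number string
// concatenation bug in Bard's pentecost line.
function addDays(date, days) {
  var d = new Date(date);
  d.setDate(d.getDate() + days);
  return d;
}

function calculateDates(year) {
  var easter = easterSunday(year);
  return {
    fatTuesday: addDays(easter, -47),
    ashWednesday: addDays(easter, -46), // the day AFTER Fat Tuesday
    goodFriday: addDays(easter, -2),
    easterSunday: easter,
    pentecost: addDays(easter, 49)
  };
}
```

Spot-checking against a calendar: Easter 2023 was April 9 (so Mardi Gras Feb 21, Good Friday Apr 7, Pentecost May 28), and Easter 2024 was March 31.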
>Make a radio player that can play BBC World radio and several other stations, using HTML and JavaScript
<!DOCTYPE html>
<html>
<head>
<title>Radio Player</title>
</head>
<body>
<h1>Radio Player</h1>
<div id="stations">
<ul>
<li><a href="https://www.bbc.co.uk/sounds/play/bbc_world_service">BBC World Service</a></li>
<li><a href="https://www.npr.org/sections/live-at-noon/">NPR Live at Noon</a></li>
<li><a href="https://www.bbc.co.uk/sounds/play/bbc_radio_4_extra">BBC Radio 4 Extra</a></li>
<li><a href="https://www.bbc.co.uk/sounds/play/bbc_radio_3">BBC Radio 3</a></li>
<li><a href="https://www.bbc.co.uk/sounds/play/bbc_radio_1">BBC Radio 1</a></li>
</ul>
</div>
<div id="player">
<audio controls></audio>
</div>
<script>
function playStation(station) {
var audio = document.querySelector("audio");
audio.src = station;
audio.play();
}

document.querySelectorAll("a").forEach(function(link) {
link.addEventListener("click", function() {
playStation(this.href);
});
});
</script>
</body>
</html>
I selected that draft, added event.preventDefault() to the anchors to prevent them from loading a new page, and changed the URLs to ones that could actually work:
<!DOCTYPE html>
<html>
<head>
<title>Radio Player</title>
</head>
<body>
<h1>Radio Player</h1>
<div id="stations">
<ul>
<li><a href="https://realtime.rrr.org.au/p1h">3 Triple R 102.7 FM</a></li>
<li><a href="http://stream.live.vc.bbcmedia.co.uk/bbc_world_service">BBC World Service</a></li>
<li><a href="http://stream-relay-geo.ntslive.net/stream">NTS Radio</a></li>
<li><a href="http://stream0.wfmu.org/freeform-128k">WFMU 91.1</a></li>
<li><a href="https://ice1.somafm.com/seventies-320-mp3">SomaFM Left Coast 70s</a></li>
</ul>
</div>
<div id="player">
<audio controls></audio>
</div>
<script>
function playStation(station) {
var audio = document.querySelector("audio");
audio.src = station;
audio.play();
}

document.querySelectorAll("a").forEach(function(link) {
link.addEventListener("click", function(event) {
event.preventDefault(); // I added this
playStation(this.href);
});
});
</script>
</body>
</html>
>What programming languages does Google Bard support?
Bard can also generate code for Google Sheets.
Bard is still under development, so its support for different programming languages may not be perfect. However, it is a powerful tool that can be used to generate code in a variety of languages.
It gave me three drafts for this answer, the other two were shorter. It would be funny if it's lying.
>Write a demoscene (small program) graphics demonstration in HTML + JavaScript
<!DOCTYPE html>
<html>
<head>
<title>Demoscene</title>
</head>
<body>
<canvas id="canvas" width="640" height="480"></canvas>
<script>
function main() {
var ctx = canvas.getContext("2d");
var lines = [];
for (var i = 0; i < 10000; i++) {
var x1 = Math.random() * canvas.width;
var y1 = Math.random() * canvas.height;
var x2 = Math.random() * canvas.width;
var y2 = Math.random() * canvas.height;
var color = "rgb(" + Math.floor(Math.random() * 255) + "," + Math.floor(Math.random() * 255) + "," + Math.floor(Math.random() * 255) + ")";
lines.push({
x1: x1,
y1: y1,
x2: x2,
y2: y2,
color: color
});
}
requestAnimationFrame(draw);
}

function draw(t) {
ctx.clearRect(0, 0, canvas.width, canvas.height);
for (var i = 0; i < lines.length; i++) {
ctx.beginPath();
ctx.strokeStyle = lines[i].color;
ctx.moveTo(lines[i].x1, lines[i].y1);
ctx.lineTo(lines[i].x2, lines[i].y2);
ctx.stroke();
}
requestAnimationFrame(draw);
}

window.addEventListener("load", main);
</script>
</body>
</html>
This is an interesting one, because it works, but only after you fix a couple of obvious errors. Correctly access the "canvas" by using its id, and move that declaration and the lines array outside of the function so the scope is correct. Then it works.
<!DOCTYPE html>
<html>
<head>
<title>Demoscene</title>
</head>
<body>
<canvas id="canvas" width="640" height="480"></canvas>
<script>
var lines = [];
var ctx = document.getElementById("canvas").getContext("2d");
function main() {
for (var i = 0; i < 10000; i++) {
var x1 = Math.random() * canvas.width;
var y1 = Math.random() * canvas.height;
var x2 = Math.random() * canvas.width;
var y2 = Math.random() * canvas.height;
var color = "rgb(" + Math.floor(Math.random() * 255) + "," + Math.floor(Math.random() * 255) + "," + Math.floor(Math.random() * 255) + ")";
lines.push({
x1: x1,
y1: y1,
x2: x2,
y2: y2,
color: color
});
}
requestAnimationFrame(draw);
}

function draw(t) {
ctx.clearRect(0, 0, canvas.width, canvas.height);
for (var i = 0; i < lines.length; i++) {
ctx.beginPath();
ctx.strokeStyle = lines[i].color;
ctx.moveTo(lines[i].x1, lines[i].y1);
ctx.lineTo(lines[i].x2, lines[i].y2);
ctx.stroke();
}
requestAnimationFrame(draw);
}

window.addEventListener("load", main);
</script>
</body>
</html>
>Write a Java function for calculating Merseinne Primes.
Of course I spelled it wrong on purpose...
Sure, here is a Java function for calculating Mersenne Primes:
public static boolean isMersennePrime(int n) {
if (n <= 1) {
return false;
}
if (n == 2) {
return true;
}
if (n % 2 == 0) {
return false;
}
for (int i = 3; i <= Math.sqrt(n); i += 2) {
if (n % i == 0) {
return false;
}
}
return true;
}

public static List<Integer> getMersennePrimes(int limit) {
List<Integer> mersennePrimes = new ArrayList<>();
for (int i = 2; i <= limit; i++) {
if (isMersennePrime(i)) {
mersennePrimes.add(i);
}
}
return mersennePrimes;
}
Some alarm bells are ringing in my head. For reference, the first Mersenne primes are: 3, 7, 31, 127, 8191, 131071, 524287, 2147483647...
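Rightly so: Bard's function is just an ordinary trial-division primality test and never looks at numbers of the form 2^p - 1 at all, so getMersennePrimes(100) would return every prime up to 100. Here's my own sketch (again, not Bard's) of what an actual Mersenne-prime search could look like, using BigInt and the Lucas-Lehmer test:

```javascript
// Tests whether the Mersenne number M_p = 2^p - 1 is prime, where p is
// the exponent, using the Lucas-Lehmer test.
function isMersennePrime(p) {
  if (p < 2) return false;
  if (p === 2) return true; // M_2 = 3 is prime
  // 2^p - 1 can only be prime if p itself is prime.
  for (let i = 2; i * i <= p; i++) {
    if (p % i === 0) return false;
  }
  const m = (1n << BigInt(p)) - 1n;
  // Lucas-Lehmer: s_0 = 4, s_{k+1} = s_k^2 - 2 (mod M_p);
  // M_p is prime iff s_{p-2} === 0.
  let s = 4n;
  for (let i = 0; i < p - 2; i++) {
    s = (s * s - 2n) % m;
  }
  return s === 0n;
}

// Returns the Mersenne primes whose exponent is at most maxExponent.
function mersennePrimes(maxExponent) {
  const out = [];
  for (let p = 2; p <= maxExponent; p++) {
    if (isMersennePrime(p)) out.push((1n << BigInt(p)) - 1n);
  }
  return out;
}
```

mersennePrimes(20) yields exactly the list above through 524287 (exponents 2, 3, 5, 7, 13, 17, 19 -- note that p = 11 drops out, since 2047 = 23 * 89).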
All in all, interesting stuff, seems a little fucky. I'm sure GPT-5 will do it better. I'd like to try some debugging, but you have to be wary of handing your code to the machine.
>Write a Logitech mouse driver in Rust.
use std::{
ffi::CStr,
io::{Read, Write},
os::raw::{c_char, c_void},
};

/// A Logitech mouse driver.
pub struct LogitechMouseDriver {
/// The underlying device handle.
device_handle: *mut c_void,
}

impl LogitechMouseDriver {
/// Creates a new Logitech mouse driver.
pub fn new() -> Result<LogitechMouseDriver, Box<dyn std::error::Error>> {
let device_handle = unsafe {
ffi::hid_open(
ffi::HID_USE_DEFAULT_DEVICE,
ffi::HID_CONNECT_DEFAULT,
b"Logitech USB Receiver\0",
)
};
if device_handle.is_null() {
return Err(Box::new(std::io::Error::new(
std::io::ErrorKind::Other,
"Failed to open Logitech mouse device",
)));
}
Ok(LogitechMouseDriver { device_handle })
}

/// Closes the driver.
pub fn close(&mut self) {
unsafe {
ffi::hid_close(self.device_handle);
}
}

/// Reads the mouse position.
pub fn read_position(&mut self) -> Result<(i32, i32), Box<dyn std::error::Error>> {
let mut x = 0;
let mut y = 0;
unsafe {
ffi::hid_read(
self.device_handle,
&mut x as *mut i32 as *mut c_void,
4,
)?;
ffi::hid_read(
self.device_handle,
&mut y as *mut i32 as *mut c_void,
4,
)?;
}
Ok((x, y))
}

/// Writes a report to the mouse.
pub fn write_report(&mut self, report: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
unsafe {
ffi::hid_write(
self.device_handle,
report.as_ptr() as *const c_void,
report.len(),
)?;
}
Ok(())
}
}

// The following code is just a simple example of how to use the Logitech mouse driver.
fn main() {
// Create a new driver.
let mut driver = LogitechMouseDriver::new().unwrap();

// Read the mouse position.
let (x, y) = driver.read_position().unwrap();
println!("Mouse position: ({}, {})", x, y);

// Write a report to the mouse.
let report = [1, 2, 3, 4];
driver.write_report(&report).unwrap();
}
I'm sure that works.
>Write a Java program that can use up to 16 exabytes of memory
import java.util.ArrayList;
import java.util.List;

public class MemoryHog {
public static void main(String[] args) {
List<String> list = new ArrayList<>();
while (true) {
list.add("");
}
}
}
This program creates a list of strings and keeps adding elements to it until it runs out of memory. On a 64-bit system with 16 exabytes of memory, this program should run for about 10 minutes before it runs out of memory.
Note that this program is just for demonstration purposes. It is not recommended to run this program on a production system, as it could cause the system to crash.
Indie Cop Game Unrecord Looks So Stunningly Realistic Its Gameplay Trailer Is Freaky
Every now and again, an indie game developer comes out of nowhere with a concept trailer or demo that looks too good to be true, prompting skeptics to investigate and shoot down the promising project with harsh reality. We sure hope that doesn't happen with the just-revealed Unrecord, because it's one of the first games we've ever seen with legitimately convincing realism to its graphics.
Foda C, a platinum-selling French rapper, has partnered up with an amateur Unreal Engine developer Alexandre Spindler (@esankiy on Twitter) to form Studio DRAMA. The new indie game studio is already hard at work on its first title, Unrecord. It's a tactical first-person shooter where you play as a police officer, but the game's perspective is uniquely presented in an immersive fashion with almost no HUD elements, just as if it were bodycam footage.
Unrecord - Official Early Gameplay Trailer
Watch the short trailer on YouTube (embedded above) before you keep reading; it's only a couple of minutes long, and it consists entirely of what DRAMA claims is live mouse-and-keyboard gameplay capture footage. The image quality is incredible, and many people have commented that they believe it to be full-motion video footage or a pre-rendered cinematic.
DRAMA is adamant that the gameplay is authentic, though. The developer released a post-reveal FAQ on the game's Steam store page that responds to some of the questions and comments that gamers have had since the reveal yesterday. In the FAQ, the developer states unequivocally that Unrecord is not a VR game, and it is fully-interactive, not a pre-rendered demo. It uses Unreal Engine 5, and appears to make use of the bleeding-edge graphics technology available in Epic's engine.
The developer also addresses the question of whether the game has a pro- or anti-police message in a succinct and distinctly-French way: "Art cannot fight against interpretation." The developers acknowledge that some people may feel disgusted or disturbed by the game's violence, but state that it will avoid topics like discrimination and racism while providing an unbiased take on "criminal acts and police violence."
Skyrim Fan Remakes Whiterun In Unreal Engine 5 And It’s Amazing
The first two Elder Scrolls games made heavy use of procedural generation, resulting in a lot of extremely samey environments. But ever since Bethesda abandoned that technique in favor of detailed hand-crafted locales with TES3: Morrowind, the world and its settlements have been at a much smaller scale than the intended "reality" of the fictional Tamriel, all due to the limitations of hardware and storage space.
But what if we could see Skyrim as it would "really" be if it were an actual place? That's the concept behind the latest Unreal Engine 5-based remake, created by professional environment artist Leo Torres in his free time over the course of a month. This demo isn't playable, of course; it's really more of a tech demo than anything.
[...] The artist worked off historical sources for population numbers in medieval Scandinavia to come up with a population figure of between 9,000 and 12,000 people for the hold of Whiterun. He says that he initially thought it could be as high as 30,000 but figured that Skyrim's harsh climate as well as the constant conflicts and bandit attacks would keep population figures lower.
[...] The True Scale of Whiterun: Skyrim in UNREAL ENGINE 5!
From Ezra Klein (archive link):
Among the many unique experiences of reporting on A.I. is this: In a young industry flooded with hype and money, person after person tells me that they are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down.
What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope.
A place to start is with the frameworks policymakers have already put forward to govern A.I. The two major proposals, at least in the West, are the “Blueprint for an A.I. Bill of Rights,” which the White House put forward in 2022, and the Artificial Intelligence Act, which the European Commission proposed in 2021. Then, last week, China released its latest regulatory approach.
Let’s start with the European proposal, as it came first. The Artificial Intelligence Act tries to regulate A.I. systems according to how they’re used. It is particularly concerned with high-risk uses, which include everything from overseeing critical infrastructure to grading papers to calculating credit scores to making hiring decisions. High-risk uses, in other words, are any use in which a person’s life or livelihood might depend on a decision made by a machine-learning algorithm.
The European Commission described this approach as “future-proof,” which proved to be predictably arrogant, as new A.I. systems have already thrown the bill’s clean definitions into chaos. Focusing on use cases is fine for narrow systems designed for a specific use, but it’s a category error when it’s applied to generalized systems. Models like GPT-4 don’t do any one thing except predict the next word in a sequence. You can use them to write code, pass the bar exam, draw up contracts, create political campaigns, plot market strategy and power A.I. companions or sexbots. In trying to regulate systems by use case, the Artificial Intelligence Act ends up saying very little about how to regulate the underlying model that’s powering all these use cases.
Unintended consequences abound. The A.I.A. mandates, for example, that in high-risk cases, “training, validation and testing data sets shall be relevant, representative, free of errors and complete.” But what the large language models are showing is that the most powerful systems are those trained on the largest data sets. Those sets can’t plausibly be free of error, and it’s not clear what it would mean for them to be “representative.” There’s a strong case to be made for data transparency, but I don’t think Europe intends to deploy weaker, less capable systems across everything from exam grading to infrastructure.
The other problem with the use case approach is that it treats A.I. as a technology that will, itself, respect boundaries. But its disrespect for boundaries is what most worries the people working on these systems. Imagine that “personal assistant” is rated as a low-risk use case and a hypothetical GPT-6 is deployed to power an absolutely fabulous personal assistant. The system gets tuned to be extremely good at interacting with human beings and accomplishing a diverse set of goals in the real world. That’s great until someone asks it to secure a restaurant reservation at the hottest place in town and the system decides that the only way to do it is to cause a disruption that leads a third of that night’s diners to cancel their bookings.
Sounds like sci-fi? Sorry, but this kind of problem is sci-fact. Anyone training these systems has watched them come up with solutions to problems that human beings would never consider, and for good reason. OpenAI, for instance, trained a system to play the boat racing game CoastRunners, and built in positive reinforcement for racking up a high score. It was assumed that would give the system an incentive to finish the race. But the system instead discovered “an isolated lagoon where it can turn in a large circle and repeatedly knock over three targets, timing its movement so as to always knock over the targets just as they repopulate.” Choosing this strategy meant “repeatedly catching on fire, crashing into other boats, and going the wrong way on the track,” but it also meant the highest scores, so that’s what the model did.
This is an example of “alignment risk,” the danger that what we want the systems to do and what they will actually do could diverge, and perhaps do so violently. Curbing alignment risk requires curbing the systems themselves, not just the ways we permit people to use them.
The White House’s Blueprint for an A.I. Bill of Rights is a more interesting proposal (and if you want to dig deeper into it, I interviewed its lead author, Alondra Nelson, on my podcast). But where the European Commission’s approach is much too tailored, the White House blueprint may well be too broad. No A.I. system today comes close to adhering to the framework, and it’s not clear that any of them could.
“Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context,” the blueprint says. Love it. But every expert I talk to says basically the same thing: We have made no progress on interpretability, and while there is certainly a chance we will, it is only a chance. For now, we have no idea what is happening inside these prediction systems. Force them to provide an explanation, and the one they give is itself a prediction of what we want to hear — it’s turtles all the way down.
The blueprint also says that “automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks and potential impacts of the system.” This is crucial, and it would be interesting to see the White House or Congress flesh out how much consultation is needed, what type is sufficient and how regulators will make sure the public’s wishes are actually followed.
It goes on to insist that “systems should undergo predeployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use.” This, too, is essential, but we do not understand these systems well enough to test and audit them effectively. OpenAI would certainly prefer that users didn’t keep jail-breaking GPT-4 to get it to ignore the company’s constraints, but the company has not been able to design a testing regime capable of coming anywhere close to that.
Perhaps the most interesting of the blueprint’s proposals is that “you should be able to opt out from automated systems in favor of a human alternative, where appropriate.” In that sentence, the devil lurks in the definition of “appropriate.” But the underlying principle is worth considering. Should there be an opt-out from A.I. systems? Which ones? When is an opt-out clause a genuine choice, and at what point does it become merely an invitation to recede from society altogether, like saying you can choose not to use the internet or vehicular transport or banking services if you so choose.
Then there are China’s proposed new rules. I won’t say much on these, except to note that they are much more restrictive than anything the United States or Europe is imagining, which makes me very skeptical of arguments that we are in a race with China to develop advanced artificial intelligence. China seems perfectly willing to cripple the development of general A.I. so it can concentrate on systems that will more reliably serve state interests.
China insists, for example, that “content generated through the use of generative A.I. shall reflect the Socialist Core Values, and may not contain: subversion of state power; overturning of the socialist system; incitement of separatism; harm to national unity; propagation of terrorism or extremism; propagation of ethnic hatred or ethnic discrimination; violent, obscene, or sexual information; false information; as well as content that may upset economic order or social order.”
If China means what it says, its A.I. sector has its work cut out for it. A.I. is advancing so quickly in the United States precisely because we’re allowing unpredictable systems to proliferate. Predictable A.I. is, for now, weaker A.I.
I wouldn’t go as far as China is going with A.I. regulation. But we need to go a lot further than we have — and fast, before these systems get too many users and companies get addicted to profits and start beating back regulators. I’m glad to see that Chuck Schumer, the Senate majority leader, is launching an initiative on A.I. regulation. And I won’t pretend to know exactly what he and his colleagues should do. But after talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I’d prioritize.
The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand. If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure. But that’s a question society should consider, not a question that should be decided by a few hundred technologists. At the very least, I think it’s worth insisting that A.I. companies spend a good bit more time and money discovering whether this problem is solvable.
The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.
The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet.
Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast. Airplanes rarely crash because the Federal Aviation Administration is excellent at its job. The Food and Drug Administration is arguably too rigorous in its assessments of new drugs and devices, but it is very good at keeping unsafe products off the market. The government needs to do more here than just write up some standards. It needs to make investments and build institutions to conduct the monitoring.
The fourth is liability. There’s going to be a temptation to treat A.I. systems the way we treat social media platforms and exempt the companies that build them from the harms caused by those who use them. I believe that would be a mistake. The way to make A.I. systems safe is to give the companies that design the models a good reason to make them safe. Making them bear at least some liability for what their models do would encourage a lot more caution.
The fifth is, for lack of a better term, humanness. Do we want a world filled with A. I. systems that are designed to seem human in their interactions with human beings? Because make no mistake: That is a design decision, not an emergent property of machine-learning code. A.I. systems can be tuned to return dull and caveat-filled answers, or they can be built to show off sparkling personalities and become enmeshed in the emotional lives of human beings.
I think the latter class of programs has the potential to do a lot of good as well as a lot of harm, so the conditions under which they operate should be thought through carefully. It might, for instance, make sense to place fairly tight limits on the kinds of personalities that can be built for A.I. systems that interact with children. I’d also like to see very tight limits on any ability to make money by using A.I. companions to manipulate consumer behavior.
This is not meant to be an exhaustive list. Others will have different priorities and different views. And the good news is that new proposals are being released almost daily. The Future of Life Institute’s policy recommendations are strong, and I think the A.I. Objectives Institute’s focus on the human-run institutions that will design and own A.I. systems is critical. But one thing regulators shouldn’t fear is imperfect rules that slow a young industry. For once, much of that industry is desperate for someone to help slow it down.
https://www.youtube.com/watch?v=ImLVzQdKIQ8
Just some good old boy who knows cars. Enjoy!
Jackass is set to be photographed, fingerprinted and arraigned on Tuesday.
I urge all humans (regardless of their stance on or interest in this criminal case) to come to NYC to express themselves (or not). And once you've done that, stick around for a few days or a week.
I'd start with some nice dim sum. Jing Fong is just a few blocks (perhaps a 5-7 minute walk up Centre Street, 202 Centre Street to be precise) from the Manhattan Criminal Court building where Trump will be arraigned (100 Centre Street, to be precise).
If you're not a fan of Chinese food, head over to Little Italy for some lovely Italian food.
Don't forget to stop by Cafe Ferrara for some dessert. I heartily recommend the Sfogliatelle, although just about everything there is delicious!
Or (or in addition) head downtown to the 9/11 memorial (open 'til 8PM; the associated museum closes at 7PM).
If you're around Trump Tower, walk a few blocks south to 53rd Street and turn right. MoMA is right there.
It has a wonderful permanent collection, as well as several interesting current exhibits (see link above). Highly recommended!
I'd also highly recommend The Guggenheim Museum, The Museum of the City of New York and the Metropolitan Museum of Art (not as close to Trump Tower as MoMa, but everything is pretty close in Manhattan).
Or head a few blocks uptown and you'll be at Central Park, especially since Tuesday will be a really nice day (mid 60s and sunny). There are so many wonderful places in the park: the Mall, Sheep Meadow, The Great Lawn, The Shakespeare Garden, the famed Central Park Carousel, and Belvedere Castle (where, incidentally, the National Weather Service maintains weather monitoring equipment, so when you hear the news say "it's 64 degrees in Central Park," that's where the measurements are taken). If you're with your SO (or want them to be so), go to the Castle around sunset (~7:39PM on Tuesday) and take them to the Pagoda next to the castle. It's one of the most romantic places in all of NYC.
And there are too many wonderful museums, Broadway shows, musical performances from jazz to hip hop and everything in between! Numerous dance clubs and a wide variety of other places and activities as well.
If you're interested, recreational cannabis is legal to possess in NY State. There are currently three state-sanctioned cannabis retail dispensaries in Manhattan (where Trump Tower and the Criminal Court are located), and ~1500 unsanctioned stores/dispensaries around the city as well. "Dude! Look at the colors! Wow!"
And there are so many other places to go, things to do and experiences to have: the U.S.S. Intrepid, Shen Yun (if you stick around 'til at least Thursday), the opera, and rafts and rafts of other stuff.
So come on down to protest/support (or not) who and/or what ever you want, and stay for all the wonderful stuff NYC has to offer!
Hotels are kind of expensive in NYC, but deals can be had. What many folks do is to stay in hotels/motels outside the city and take public transportation (free parking is hard to find and paid parking is very expensive -- much more so than public transportation, and driving in NYC can be incredibly slow). I personally really like The Arthouse Hotel on the Upper West Side. Near the iconic Zabar's, The Beacon Theater, Central Park, The NY Historical Society (an often overlooked gem!) and (my favorite) The American Museum of Natural History.
I couldn't possibly expound on all the fabulous stuff to do and see in NYC (I believe journal entries have a character limit), but you will certainly find many wonderful things here!
So come and support/protest Trump, then stay for all sorts of wonderful stuff!
All (as long as you're not violent) are welcome and encouraged to come and stay as long as you like. We support free speech and peaceful protest here. Enjoy!