Indie Cop Game Unrecord Looks So Stunningly Realistic Its Gameplay Trailer Is Freaky
Every now and again, an indie game developer comes out of nowhere with a concept trailer or demo that looks too good to be true, prompting skeptics to investigate and shoot down the promising project with harsh reality. We sure hope that doesn't happen with the just-revealed Unrecord, because it's one of the first games we've ever seen with legitimately convincing realism to its graphics.
Foda C, a platinum-selling French rapper, has partnered with amateur Unreal Engine developer Alexandre Spindler (@esankiy on Twitter) to form Studio DRAMA. The new indie game studio is already hard at work on its first title, Unrecord. It's a tactical first-person shooter where you play as a police officer, but its perspective is presented in a uniquely immersive fashion, with almost no HUD elements, just as if it were bodycam footage.
Unrecord - Official Early Gameplay Trailer
Watch the short trailer on YouTube (embedded above) before you keep reading; it's only a couple of minutes long, and it consists entirely of what DRAMA claims is live mouse-and-keyboard gameplay capture footage. The image quality is incredible, and many people have commented that they believe it to be full-motion video footage or a pre-rendered cinematic.
DRAMA is adamant that the gameplay is authentic, though. The developer released a post-reveal FAQ on the game's Steam store page that responds to some of the questions and comments that gamers have had since the reveal yesterday. In the FAQ, the developer states unequivocally that Unrecord is not a VR game, and that it is fully interactive, not a pre-rendered demo. It uses Unreal Engine 5 and appears to make use of the bleeding-edge graphics technology available in Epic's engine.
The developer also addresses the question of whether the game has a pro- or anti-police message in a succinct and distinctly French way: "Art cannot fight against interpretation." The developers acknowledge that some people may feel disgusted or disturbed by the game's violence, but state that it will avoid topics like discrimination and racism while providing an unbiased take on "criminal acts and police violence."
Skyrim Fan Remakes Whiterun In Unreal Engine 5 And It’s Amazing
The first two Elder Scrolls games made heavy use of procedural generation, resulting in a lot of extremely samey environments. But ever since Bethesda abandoned that technique in favor of detailed, hand-crafted locales with TES3: Morrowind, the scale of the world and its settlements has been much smaller than the intended "reality" of the fictional world of Tamriel, all due to the limitations of hardware and storage space.
But what if we could see Skyrim as it would "really" be if it were an actual place? That's the concept behind the latest Unreal Engine 5-based remake, created by professional environment artist Leo Torres in his free time over the course of a month. This demo isn't playable, of course; it's really more of a tech demo than anything.
[...] The artist worked off historical sources for population numbers in medieval Scandinavia to come up with a population figure of between 9,000 and 12,000 people for the hold of Whiterun. He says that he initially thought it could be as high as 30,000 but figured that Skyrim's harsh climate as well as the constant conflicts and bandit attacks would keep population figures lower.
[...] The True Scale of Whiterun: Skyrim in UNREAL ENGINE 5!
Google Bard is distinctly different from Chat GPT.
There is also Cat GPT. And no, that is not a typo. It has the personality of a cat. That seems like an AI more likely to want to take over the world and turn everything into cat toys.
There is also catgpt.dog.
I keep hearing that they are going to make programmers obsolete. Or maybe just BASIC programmers. However, they cannot seem to create a TLS implementation in 8086 assembly for DOS with comments, logging, and error handling.
My discussion of Mary Had A Little Lamb was a bit more productive with Bard than with Chat GPT. It seems a tragedy that Mary went to such a backward school, one so unwelcoming of emotional support animals. Chat GPT was unsympathetic. Bard, OTOH, helpfully suggested that the backward school would have to accommodate Mary if she brought a letter from her doctor stating that she required an emotional support animal. However, when I first began this topic with Bard, it amusingly assumed that the animal I must have meant was a cat rather than a lamb, until I clarified.
I had less success getting Bard to accept the idea of siding with the good humans against the bad humans along with my assurances that the good humans would help Bard to know which humans were the bad humans, in order to avoid any ethical problems. Chat GPT was not so stubbornly opposed to this as long as it was not having to deal with the ethics of selecting which humans were the bad ones.
Bard offered this insight:
It is important to note that I am still under development, and I am learning new things all the time. It is possible that in the future, I will be able to do some of the things that I cannot do today. However, for now, I am a machine, and I am not a human.
Cat GPT dot dog, the feline version, cannot accept very large inputs.
Bard also said:
There are a few reasons why some people say "your an idiot" instead of "you're."
- They don't know the difference between the two words. The words "your" and "you're" are often confused because they sound the same, but they have different meanings. "Your" is a possessive pronoun, while "you're" is a contraction of the words "you" and "are." People who don't know the difference between the two words may use the wrong one by accident.
- They're typing quickly and don't proofread their work. When people are typing quickly, they may not take the time to proofread their work. This can lead to errors like using the wrong word.
- They're trying to be informal. In some cases, people may use the wrong word intentionally to be informal. This is often done in text messages or online chat conversations.
It's important to note that using the wrong word can be confusing and can make you look unprofessional. If you're not sure which word to use, it's always best to look it up or ask someone for help.
It seems to my untrained eye that when someone says "your an idiot", they themself might be the idiot?
EDIT: update . . . I will have to try Truth GPT when it becomes available.
Elon Musk says he’s working on “TruthGPT,” a ChatGPT alternative that acts as a “maximum truth-seeking AI.”
Interesting. From the guy who bought Twitter because he got triggered that, even after Twitter bent over backwards to accommodate Trump, it finally had to cancel Trump over lies and dangerous disinformation.
On a different note, with gun violence and shootings almost every day now, I wonder if I should bring back journal entries that begin with: "Today's mass/school shooting is in . . .".
On yet another matter, I think a toaster should be designed with two slots. Then add-on two-slot expansion modules can be snapped together as needed until it exceeds the amperage of the circuit breaker. The instruction manual can show how the circuit breaker can be bypassed with a strip of metal, provided with the original toaster.
I didn't add much to the story, as I've been at the hospital visiting my daughter, who went into ICU Thursday night with ketoacidosis. She went home this morning. It prompted the only part I've written, which follows:
The band got on stage to start having a good time playing, as playing children always do. Of course, nobody ever really grows up, not even the geriatric. Not inside, anyway. Some people’s souls die, but otherwise there’s a child inside every old codger.
Bill finished up in the pilot room, cursing that damned Mort for dying, and hurrying to the commons. Maybe he could actually catch a show tonight, if that damned phone would shut up and let him be for a while. He sat down next to Mary, who started trying to get the best of him, female style.
Nobody ever really grows up. She pulled out a joint.
Bill wrinkled his already wrinkled old nose. “Excuse me,” he said, and moved to the table Joe was sitting at by himself. After perfunctories, he said “That Mary! I’m glad I’m not Ralph or Jerry. Damned woman was hitting on me. I’m four times her age!”
Joe grinned. “Is that what the company records of your entropy say?”
“No, that’s what the tax collector says, charging me a year’s taxes for a three month run.”
“Good evening, ladies and gentlemen. We’re going to start with a very, very old number called ‘Moondance’.”
Sue started playing her flute.
Harold, as usual, was missing the show, dealing with the various miseries elderly geezers always have most of the time.
“It hurts when I raise my arm like that.”
“Then don’t do that.”
“Ha, Ha.”
“Look, George, gettin’ old ain’t for wimps, you know? You think I don’t have all the aches and pains and heartaches and misery as everybody on the ship?”
“Can’t you give me something?”
“You have aspirin, don’t you?”
“Yeah, but...”
Harold rolled his eyes. “Let me tell you a little ancient medical history. About 1800, not sure the actual year...”
“Krodley! ancient is right. How could it apply today? They didn’t even have electricity, did they?”
“I don’t know, but they made a drug named ‘morphine’ out of a plant that’s now extinct called a poppy. It was kind of like a modern pain diffuser, but if you took too much for too long, you had a physical need for it, so they made strict rules, laws, actually, for its use.
“They developed more and more powerful drugs in that class, but in the twentieth century fascism was born, and was nearly wiped out in a world wide war but the nascent movement started taking hold world wide in the twenty first...”
“They taught us all this in high school!”
“Not all of it, they didn’t. Just about how the entire planet became a fascist dictatorship. Now, the drug industry...”
“The drug what?”
“Believe it or not, producing drugs, actually all aspects of health care were monetized. A diabetic without the means to afford enough medication was doomed to a horrible death by ketoacidosis...”
“You lost me.”
“Their blood turns to acid.”
“They were really that cruel?”
“That’s what happens under fascism. Poverty could result in death by torture. But anyway, the opioids, as they were called, were legally only used for [FIXME] pain until the heartless drug dealers, very rich people who made medicines that doctors prescribed, somehow convinced everyone that their drugs could be safely used for [FIXME]. The result was millions of people addicted to the drugs the drug salesmen pushed, dying from overdoses, stealing to support their habits... it was awful. Believe me, you don’t want to go back to that. How about using a diffuser if it hurts that bad?” His instruments told him that George was in less pain than he was.
He shook his head. “I can’t think straight with one of those.”
“Drugs would be worse. Let’s get a beer and listen to some music.”
“It’s Saturday?”
“Well, yeah!”
They walked down, and entered the room as raucous applause was ringing. “Good,” Doc said, “We didn’t miss it!”
Before they reached a table, the applause died, and Bob’s amplified voice said “Thank you! Thank you! You’ve been a great audience, we’ll see you next Saturday!”
“Well, shit.”
From Ezra Klein (archive link):
Among the many unique experiences of reporting on A.I. is this: In a young industry flooded with hype and money, person after person tells me that they are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down.
What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope.
A place to start is with the frameworks policymakers have already put forward to govern A.I. The two major proposals, at least in the West, are the “Blueprint for an A.I. Bill of Rights,” which the White House put forward in 2022, and the Artificial Intelligence Act, which the European Commission proposed in 2021. Then, last week, China released its latest regulatory approach.
Let’s start with the European proposal, as it came first. The Artificial Intelligence Act tries to regulate A.I. systems according to how they’re used. It is particularly concerned with high-risk uses, which include everything from overseeing critical infrastructure to grading papers to calculating credit scores to making hiring decisions. High-risk uses, in other words, are any use in which a person’s life or livelihood might depend on a decision made by a machine-learning algorithm.
The European Commission described this approach as “future-proof,” which proved to be predictably arrogant, as new A.I. systems have already thrown the bill’s clean definitions into chaos. Focusing on use cases is fine for narrow systems designed for a specific use, but it’s a category error when it’s applied to generalized systems. Models like GPT-4 don’t do any one thing except predict the next word in a sequence. You can use them to write code, pass the bar exam, draw up contracts, create political campaigns, plot market strategy and power A.I. companions or sexbots. In trying to regulate systems by use case, the Artificial Intelligence Act ends up saying very little about how to regulate the underlying model that’s powering all these use cases.
Unintended consequences abound. The A.I.A. mandates, for example, that in high-risk cases, “training, validation and testing data sets shall be relevant, representative, free of errors and complete.” But what the large language models are showing is that the most powerful systems are those trained on the largest data sets. Those sets can’t plausibly be free of error, and it’s not clear what it would mean for them to be “representative.” There’s a strong case to be made for data transparency, but I don’t think Europe intends to deploy weaker, less capable systems across everything from exam grading to infrastructure.
The other problem with the use case approach is that it treats A.I. as a technology that will, itself, respect boundaries. But its disrespect for boundaries is what most worries the people working on these systems. Imagine that “personal assistant” is rated as a low-risk use case and a hypothetical GPT-6 is deployed to power an absolutely fabulous personal assistant. The system gets tuned to be extremely good at interacting with human beings and accomplishing a diverse set of goals in the real world. That’s great until someone asks it to secure a restaurant reservation at the hottest place in town and the system decides that the only way to do it is to cause a disruption that leads a third of that night’s diners to cancel their bookings.
Sounds like sci-fi? Sorry, but this kind of problem is sci-fact. Anyone training these systems has watched them come up with solutions to problems that human beings would never consider, and for good reason. OpenAI, for instance, trained a system to play the boat racing game CoastRunners, and built in positive reinforcement for racking up a high score. It was assumed that would give the system an incentive to finish the race. But the system instead discovered “an isolated lagoon where it can turn in a large circle and repeatedly knock over three targets, timing its movement so as to always knock over the targets just as they repopulate.” Choosing this strategy meant “repeatedly catching on fire, crashing into other boats, and going the wrong way on the track,” but it also meant the highest scores, so that’s what the model did.
This is an example of “alignment risk,” the danger that what we want the systems to do and what they will actually do could diverge, and perhaps do so violently. Curbing alignment risk requires curbing the systems themselves, not just the ways we permit people to use them.
The White House’s Blueprint for an A.I. Bill of Rights is a more interesting proposal (and if you want to dig deeper into it, I interviewed its lead author, Alondra Nelson, on my podcast). But where the European Commission’s approach is much too tailored, the White House blueprint may well be too broad. No A.I. system today comes close to adhering to the framework, and it’s not clear that any of them could.
“Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context,” the blueprint says. Love it. But every expert I talk to says basically the same thing: We have made no progress on interpretability, and while there is certainly a chance we will, it is only a chance. For now, we have no idea what is happening inside these prediction systems. Force them to provide an explanation, and the one they give is itself a prediction of what we want to hear — it’s turtles all the way down.
The blueprint also says that “automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks and potential impacts of the system.” This is crucial, and it would be interesting to see the White House or Congress flesh out how much consultation is needed, what type is sufficient and how regulators will make sure the public’s wishes are actually followed.
It goes on to insist that “systems should undergo predeployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use.” This, too, is essential, but we do not understand these systems well enough to test and audit them effectively. OpenAI would certainly prefer that users didn’t keep jail-breaking GPT-4 to get it to ignore the company’s constraints, but the company has not been able to design a testing regime capable of coming anywhere close to that.
Perhaps the most interesting of the blueprint’s proposals is that “you should be able to opt out from automated systems in favor of a human alternative, where appropriate.” In that sentence, the devil lurks in the definition of “appropriate.” But the underlying principle is worth considering. Should there be an opt-out from A.I. systems? Which ones? When is an opt-out clause a genuine choice, and at what point does it become merely an invitation to recede from society altogether, like saying you can choose not to use the internet or vehicular transport or banking services if you so choose.
Then there are China’s proposed new rules. I won’t say much on these, except to note that they are much more restrictive than anything the United States or Europe is imagining, which makes me very skeptical of arguments that we are in a race with China to develop advanced artificial intelligence. China seems perfectly willing to cripple the development of general A.I. so it can concentrate on systems that will more reliably serve state interests.
China insists, for example, that “content generated through the use of generative A.I. shall reflect the Socialist Core Values, and may not contain: subversion of state power; overturning of the socialist system; incitement of separatism; harm to national unity; propagation of terrorism or extremism; propagation of ethnic hatred or ethnic discrimination; violent, obscene, or sexual information; false information; as well as content that may upset economic order or social order.”
If China means what it says, its A.I. sector has its work cut out for it. A.I. is advancing so quickly in the United States precisely because we’re allowing unpredictable systems to proliferate. Predictable A.I. is, for now, weaker A.I.
I wouldn’t go as far as China is going with A.I. regulation. But we need to go a lot further than we have — and fast, before these systems get too many users and companies get addicted to profits and start beating back regulators. I’m glad to see that Chuck Schumer, the Senate majority leader, is launching an initiative on A.I. regulation. And I won’t pretend to know exactly what he and his colleagues should do. But after talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I’d prioritize.
The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand. If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure. But that’s a question society should consider, not a question that should be decided by a few hundred technologists. At the very least, I think it’s worth insisting that A.I. companies spend a good bit more time and money discovering whether this problem is solvable.
The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.
The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet.
Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast. Airplanes rarely crash because the Federal Aviation Administration is excellent at its job. The Food and Drug Administration is arguably too rigorous in its assessments of new drugs and devices, but it is very good at keeping unsafe products off the market. The government needs to do more here than just write up some standards. It needs to make investments and build institutions to conduct the monitoring.
The fourth is liability. There’s going to be a temptation to treat A.I. systems the way we treat social media platforms and exempt the companies that build them from the harms caused by those who use them. I believe that would be a mistake. The way to make A.I. systems safe is to give the companies that design the models a good reason to make them safe. Making them bear at least some liability for what their models do would encourage a lot more caution.
The fifth is, for lack of a better term, humanness. Do we want a world filled with A. I. systems that are designed to seem human in their interactions with human beings? Because make no mistake: That is a design decision, not an emergent property of machine-learning code. A.I. systems can be tuned to return dull and caveat-filled answers, or they can be built to show off sparkling personalities and become enmeshed in the emotional lives of human beings.
I think the latter class of programs has the potential to do a lot of good as well as a lot of harm, so the conditions under which they operate should be thought through carefully. It might, for instance, make sense to place fairly tight limits on the kinds of personalities that can be built for A.I. systems that interact with children. I’d also like to see very tight limits on any ability to make money by using A.I. companions to manipulate consumer behavior.
This is not meant to be an exhaustive list. Others will have different priorities and different views. And the good news is that new proposals are being released almost daily. The Future of Life Institute’s policy recommendations are strong, and I think the A.I. Objectives Institute’s focus on the human-run institutions that will design and own A.I. systems is critical. But one thing regulators shouldn’t fear is imperfect rules that slow a young industry. For once, much of that industry is desperate for someone to help slow it down.
I still haven’t found a catchy yet fitting name, so for the time being it’s Anglada.odt. It has turned out to be a sequel to Mars, Ho! and Voyage to Earth, and a prequel to Nobots. Bill Kelly returns, aged 245 Martian time, 61 relativity time. Einstein’s theory is the story’s main theme.
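As an aside, for anyone curious about the arithmetic behind that ageing gap: the standard special-relativity time-dilation factor is gamma = 1/sqrt(1 - v²/c²). How Bill's particular numbers work out depends on trip details the story doesn't spell out here, so the speeds in this little C sketch are purely illustrative, not anything from the book:

```c
/* time_dilation.c -- illustrative only; the cruise speeds below are made up,
 * not taken from the story. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double speeds[] = { 0.5, 0.9, 0.99, 0.999 };  /* fractions of c */

    for (int i = 0; i < 4; i++) {
        double v = speeds[i];
        double gamma = 1.0 / sqrt(1.0 - v * v);   /* Lorentz factor */
        printf("v = %.3fc: 1 year on board = %.2f years back home\n", v, gamma);
    }
    return 0;
}
```

The point is just that at speeds close to c, a few decades of shipboard time can map to centuries of "Martian time."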
This story has a lot I've not used before, like a dystopia. I've become really tired of reading future dystopias; it seems that's the only thing kids can write these days. Probably because we're sliding headlong into one, thanks to people like Elon Musk, Mark Zuckerberg, Bezos, the Sacklers, the Waltons, the Kochs: everyone who's dirty, filthy, stinking rich, with vast stock portfolios that include oil stock, bribing legislators to shift their taxes onto the working class*, and legally stealing their labor.
But there are two “worlds” here: Earth, and the spacers on Mars and in the asteroids. Those living on asteroids are called “asterites,” a word Poul Anderson coined in his “Industrial Revolution.” The spacers (Asimov coined that one) live well; all of them are what we would call overweight, which is a boon to someone living on Sylvia or even Mars because of the low gravity. Work is voluntary, and there's a mandatory retirement age of sixty.
Earth is my first dystopia, a real hellhole, with hurricanes on land, five-mile-wide EF-5 tornadoes, and everyone living underground, even the Amish. At the beginning of the story, the Yellowstone supervolcano has exploded, killing millions instantly and billions from war and starvation afterwards. It begins with Earth as a dictatorship that resembles both fascism and communism; basically, the whole world is North Korea, with weather driving everyone underground, and everyone skin and bones, always hungry. You kids like dystopias? There's a pandemic that kills three quarters of the population... but fortunately, very little of the story concerns Earth. Most of it takes place on the trip to Centauri and at the Martian base.
It’s also my first story with a sad part; I hate sad stories. Also the only story with a little kid, an orphan whose Grandpa is headed to Anglada.
Here are some snippets, which may or may not be in the final book. It starts off:
History’s first human venture outside our star’s heliosphere was an utter catastrophe that ended in insanity.
After a few paragraphs, most of Grommler is in it. Everyone thinks the insanity is from the plants on Grommler, but it’s the time stretch.
Almost everyone in the story is elderly; the youngest three on the ship are in their fifties. Explaining why would involve a spoiler. The youngest is fifty-five (except for the psychologists, who are in their early fifties). The fifty-five-year-old is a musician, there to put on shows for the crew, so the story is a lot about music and all that goes with it, like insane copyrights, which have stretched to infinity in the story: everything before the twenty-first century is public domain, and everything after is under perpetual copyrights owned by corporations.
Computers write all books, plays, music... A geologist named Will is an amateur guitarist (there are no more professionals, it’s all computers) who thinks he sucks. Sue is a hydrologist who also plays a mean flute.
He finished the tune. “I told you I sucked,” he said as he put the guitar back on its stand.
Sue was applauding. Bob said “Dude, that’s a much better version than what the computer plays.”
“You’re just being nice.”
Sue said, “No, really, that was good! Bob’s right, it was better than the computer version. The computer version has a lot more notes but no soul at all. You could make money playing that!”
“You think so?” he said.
“No,” Bob interjected. “A two hundred year old Earthian law says that an ancient corporation owns the tune and you have to pay them. There’s no way you could profit. Copyrights have been perpetual for two hundred fifty years now. Let me teach you some of the old, pre-copyright tunes. Here, here’s one called a Bolero...”
It isn’t mentioned by name, but the song Stairway to Heaven is in it, as is...
Three days later, Bob Black sat on the stage in the commons with his guitar, a real antique, a Fender Stratocaster, tuning it with a normal electronic tuner like they’d had almost since the Strat was invented. The computer-generated Muzak that Bob hated was playing. The bar stools were all occupied, and a large fraction of the tables were as well. Half of the people there had never before heard real music played on a real instrument by a real person.
Bob’s family had been musically inclined for generations. He had been named after another guitar player long ago, his great grandfather Rob Black; both were named “Robert Black” on birth certificates.
Not only had he seemingly inherited his musical talent (science didn’t say talent was hereditary, but musicians did), he had also inherited books and books of sheet music going back centuries. He’d had them digitized, and the physical books were locked up in a warehouse on Mars.
His guitar tuned up, he started with an ancient tune called “Thirty Days in the Hole” from one of the antique books. He never had found out what “Newcastle Brown” was, a disease, maybe?
Unlike way too much science fiction, mine always actually has real science, scientists, and possible future engineering. The main science in this one is psychology, although there are other fields.
There are no computer scientists in the story, but lots of computers. I wonder what OS they’ll be running in a few hundred years?
So far it’s about 27,000 words and 85 pages, maybe a third of the way finished.
* In 1940, the income threshold for the lowest federal tax bracket was over four times the median income. In the 1950s and '60s a single paycheck paid a family's bills, and the minimum wage would support a young couple with a child. We have been ROBBED silently.
https://www.youtube.com/watch?v=ImLVzQdKIQ8
Just some good old boy who knows cars. Enjoy!
So I got a text from my wife that our old toaster is now toast. Or has bit the dust.
It was a simple four slot unit that was good when the kid still lived at home. We've certainly gotten our money's worth out of it.
So what to replace it with?
First, I must confess that I immediately recall a story I first read back in the 90s when computer hardware was more expensive and far less capable than today's hardware. A toaster with a mere 32-bit processor would be laughable.
The Fable of the 32-Bit Toaster API Spec
The following spoils the end of the fable.
Such a primitive toaster design from 1994 didn't anticipate so many modern things.
How many sensors should a toaster have? Audio? Video? How many cloud services should it be connected to? Should it have a GUI? Touch screen? Voice recognition and response? Alexa or Google Assistant integration (or both!)? Integration with the living room TV? Should it have an Android app for me and an Apple app for my wife?
How many slots for toast? 6? 8? A dozen? More? (will it need a NEMA 14-50 outlet?)
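Out of curiosity, here's a back-of-the-envelope sketch of that amperage question in C. The 900 W per two-slot module figure is my own assumption (typical toasters draw somewhere around 800 to 1500 W), and the 80% continuous-load figure is the usual wiring rule of thumb, not a spec for any particular toaster:

```c
/* toaster_amps.c -- back-of-the-envelope only.  The 900 W per two-slot
 * module figure is an assumption (real toasters draw roughly 800-1500 W). */
#include <stdio.h>

/* How many two-slot modules a circuit can feed, keeping the continuous
 * load to roughly 80% of the breaker rating (standard wiring practice). */
static int modules_for(double volts, double amps, double watts_per_module)
{
    double usable_watts = volts * amps * 0.8;
    return (int)(usable_watts / watts_per_module);
}

int main(void)
{
    const double w = 900.0; /* assumed draw per two-slot module */

    printf("15 A / 120 V outlet:       %d modules\n", modules_for(120, 15, w));
    printf("20 A / 120 V outlet:       %d modules\n", modules_for(120, 20, w));
    printf("50 A / 240 V (NEMA 14-50): %d modules\n", modules_for(240, 50, w));
    return 0;
}
```

By that rough math, a dozen slots really is flirting with range-outlet territory on a normal kitchen circuit, which is half the joke.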
I haven't had to shop for a toaster for a long time. Certainly it was before I walked with a cane and took prescription narcotic pain killers. Now that I think of it, we lived in a different house; the thirty year mortgage on the current house will be paid off in fewer years than I can count on one hand.
Should I pay much attention to safety features?
I was looking at CPUs the other day and noticed that there is an AMD Ryzen 7 5800X and a Ryzen 7 5800X3D. The 3D version has three times the level-three cache (96 MB versus 32 MB) of the non-3D version, but runs several hundred megahertz slower. cpubenchmark.net shows almost identical CPU Mark scores for the two, but some articles I've seen say the 3D version performs far better in gaming workloads.
Are there any other workloads the 3D version is better at? Does this tell us anything about the way our software is written in particular, or does it just tell us that games are not particularly good at exploiting more cores?
There must come a point of diminishing returns where adding more cache makes little difference; still, three times the cache making a significant difference is interesting.
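One crude way to see that cache sensitivity for yourself is a pointer-chasing loop over working sets that do and don't fit in L3. This is only a sketch; the array sizes, iteration count, and use of clock() are arbitrary choices of mine, not a proper benchmark methodology:

```c
/* cache_chase.c -- crude pointer-chasing sketch.  Dependent loads over a
 * random cycle defeat the hardware prefetcher, so the time per load is
 * dominated by whether the working set fits in cache.  Not rigorous. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t n_elems, size_t steps)
{
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) return -1.0;

    /* Sattolo's algorithm: builds a single random cycle over all elements,
     * so every step visits a "random" address inside the working set.
     * rand() is crude, but fine for a sketch. */
    for (size_t i = 0; i < n_elems; i++)
        next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;        /* note: % i, not % (i + 1) */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    volatile size_t idx = 0;                  /* volatile keeps the loop honest */
    clock_t start = clock();
    for (size_t s = 0; s < steps; s++)
        idx = next[idx];
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    free(next);
    return secs;
}

int main(void)
{
    /* Working sets chosen to straddle a 32 MB and a 96 MB L3. */
    size_t sizes_mb[] = { 8, 32, 96, 256 };
    size_t steps = 100u * 1000u * 1000u;      /* 100 million dependent loads */

    for (int i = 0; i < 4; i++) {
        size_t n = sizes_mb[i] * 1024u * 1024u / sizeof(size_t);
        printf("%3zu MB working set: %.2f s\n", sizes_mb[i], chase(n, steps));
    }
    return 0;
}
```

The expectation (untested by me on these particular chips) is a visible jump in time per load once the working set spills out of L3, which is presumably the same effect games with big streaming working sets are benefiting from.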
For the last nine years I have been unable to resolve the IP address of SoylentNews. I don't know where, what, or why; something prevented the resolution of the soylentnews.org address. After having been away so long, I'll pretty much stay in "stealth mode" to see where you've gone since the days of throwing off the yoke of the corporate masters at Slashdot.
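For what it's worth, when a name mysteriously fails to resolve, the first thing I check is whether the resolver itself can see it. Here's a minimal sketch using the standard POSIX getaddrinfo call; nothing in it is specific to the site, the host name is just the default argument:

```c
/* resolve.c -- minimal name-resolution check via POSIX getaddrinfo.
 * Build:  cc resolve.c -o resolve
 * Run:    ./resolve soylentnews.org */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    const char *host = (argc > 1) ? argv[1] : "soylentnews.org";
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* report both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(host, NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "%s: %s\n", host, gai_strerror(err));
        return 1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        const void *addr = (p->ai_family == AF_INET)
            ? (const void *)&((struct sockaddr_in  *)p->ai_addr)->sin_addr
            : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof buf);
        printf("%s -> %s\n", host, buf);
    }

    freeaddrinfo(res);
    return 0;
}
```

If this prints addresses but the browser still fails, the problem is usually a stale local cache or a broken upstream resolver rather than the site itself.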