

posted by martyb on Monday May 18 2020, @10:56PM   Printer-friendly

Apple Issued Warrant by FBI to Provide Access to iCloud Account for Sen. Richard Burr for Stock Sales Investigation

The FBI recently obtained iCloud data access from Apple for Senator Richard Burr, as part of an ongoing investigation regarding stock sales.

Richard Burr is under investigation for selling his stock portfolio while he was receiving updates from government health officials regarding the coronavirus pandemic. His stock sales came just a week before the sharp decline in the stock market, and he had been heavily invested in businesses that suffered the most from the pandemic.

[...] Burr sold between $628,000 and $1.72 million worth of stocks. He was not the only senator to do so, as a few others are also under investigation. His brother-in-law also sold shares worth between $97,000 and $280,000 on the same day as Burr's sell-off.

It is against the law for lawmakers to make trading decisions based on classified intelligence briefings that they receive due to their position in the government.

Also at 9to5Mac.

Previously: US Rep Chris Collins Resigns Ahead of Insider Trading Plea Involving Australian Biotech Company
This Website Tracks Which Shares US Senators Are Unloading Mid-Pandemic


Original Submission

posted by cmn32480 on Monday May 18 2020, @08:35PM   Printer-friendly
from the take-the-money-and-run dept.

Crypto-Mining Campaign Hits European Supercomputers:

Several supercomputers across Europe were taken offline last week after being targeted in what appears to be a crypto-mining campaign.

In a notice on Saturday, the Swiss National Supercomputing Centre (CSCS) revealed that it too has been hit, along with other “HPC [High Performance Computing] and academic data centres of Europe and around the world.”

CSCS said it detected malicious activity related to these attacks and it has decided to suspend external access until the issue is addressed.

[...] While CSCS’ notice says that the background of the attack is currently unclear, the European Grid Infrastructure (EGI) security team issued an alert claiming that the purpose of the attack is cryptocurrency mining.

EGI mentions two security incidents “that may or may not be correlated,” which impact academic data centers, revealing that the attackers are using compromised SSH credentials to jump from one victim to another.

As part of the assaults, compromised hosts are being used as Monero (XMR) mining hosts, as XMR-proxy and SOCKS-proxy hosts, and as tunnel hosts (for SSH tunneling), EGI’s team explains.

[...] The attacks targeted multiple victims in Germany, including the Jülich Supercomputing Centre (JSC)-maintained JURECA, JUDAC, and JUWELS, the HPC systems at Leibniz Supercomputing Centre (LRZ), the Taurus supercomputer at the Technical University in Dresden, and five HPC clusters coordinated by bwHPC, among others.


Original Submission

posted by cmn32480 on Monday May 18 2020, @06:17PM   Printer-friendly
from the commencing-site-bidding-war dept.

Report: Tesla plans to build a new car factory in Texas:

Elon Musk's recent clashes with officials in Alameda County, home of Tesla's Fremont factory, may have given him a heightened sense of urgency to find Tesla's next US factory. On Friday, several news outlets reported that Tesla was narrowing in on a new location to build the Model Y crossover and Tesla's forthcoming Cybertruck.

The reports started with Electrek, a pro-Tesla site whose co-founder Fred Lambert has good connections inside the company. Just before 3pm Eastern time, Lambert reported that Tesla had settled on Austin, Texas as the site of its next factory.

"We are told that the decision for the site is not set in stone since Tesla was apparently given a few options in the greater Austin area," Lambert wrote. "But Musk is said to want to start construction extremely soon and aims to have Model Y vehicles coming out of the plant by the end of the year."

That would be a remarkably short amount of time for any car company to build a new factory from scratch. Last year, it took Tesla almost a year to build its Shanghai factory—and that was considered unusually fast.

[...] Hours after Electrek's story ran, three news organizations—TechCrunch, CNBC, and the Associated Press—all published stories stating that Tesla was still considering Tulsa, Oklahoma.

"A final decision has not been made, but Austin and Tulsa are among the finalists," TechCrunch's Kirsten Korosec writes, citing "multiple sources."

[...] Both Texas and Oklahoma have right-to-work laws that allow employees to opt out of paying union dues. These laws could help Tesla discourage workers in its new factory from forming a union. California's laws are more friendly to union organizing.

Elon Musk's SpaceX already has two locations in Texas: the Boca Chica site is rapidly ramping up development of Starship, and the McGregor location houses its rocket-testing facility.

Recently:
(2020-05-14) Elon Musk's Boring Company Finishes Digging Las Vegas Tunnels
(2020-05-12) Musk Dares County Officials to Arrest Him as He Reopens Fremont Factory
(2020-05-02) Elon Musk Tweet Wipes $14bn Off Tesla's Value
(2020-04-15) Tesla's Robotaxi Fleet Will be 'Functionally Ready' in 2020, Musk Says
(2020-04-04) Tesla Beats Expectations with Strong First-Quarter Delivery Numbers


Original Submission

posted by martyb on Monday May 18 2020, @04:22PM   Printer-friendly
from the reminder-all-phones-have-backdoors dept.

'Mandrake' Android Spyware Remained Undetected for 4 Years:

Security researchers at Bitdefender have identified a highly sophisticated Android spyware platform that managed to remain undetected for four years.

Dubbed Mandrake, the platform targets only specific devices, as its operators are keen on remaining undetected for as long as possible. Thus, the malware avoids infecting devices in countries that would bring no benefit to the attackers.

Over the past four years, the platform has received numerous updates, with new features being constantly added, and obsolete ones being removed. Under continuous development, the malware framework is highly complex, Bitdefender’s security researchers say.

Mandrake provides attackers with complete control over an infected device, allowing them to turn down the volume, block calls and messages, steal credentials, exfiltrate data, transfer money, record the screen, and blackmail the victim.

“Considering the complexity of the spying platform, we assume that every attack is targeted individually, executed with surgical precision and manual rather than automated. Weaponization would take place after a period of total monitoring of the device and victim,” Bitdefender explains.

Mandrake looks like an advanced espionage platform, but the security researchers believe the campaign is rather financially motivated. During their investigation, they observed phishing attacks targeting an Australian investment trading app, crypto-wallet apps, the Amazon shopping application, banking software, payment apps, an Australian pension fund app, and Gmail.

[...] Seven malicious applications delivering Mandrake were identified in Google Play alone, namely Abfix, CoinCast, SnapTune Vid, Currency XE Converter, Office Scanner, Horoskope and Car News, each of them having hundreds of thousands of downloads.

[...] The malware excludes about 90 countries from infection and does not run on devices with no SIM or with SIM cards issued by certain operators, including Verizon and China Mobile Communications Corporation (CMCC).


Original Submission

posted by NCommander on Monday May 18 2020, @02:00PM   Printer-friendly
from the from-16-to-32-to-64-complete-with-braindamage dept.

For those who've been long-time readers of SoylentNews, it's not exactly a secret that I have a personal interest in retro computing and documenting the history and evolution of the Personal Computer. About three years ago, I ran a series of articles about restoring Xenix 2.2.3c, and I'm far overdue on writing a new one. For those who do programming work of any sort, you'll also be familiar with "Hello World", the first program most, if not all, programmers write in their careers.

A sample hello world program might look like the following:

#include <stdio.h>

int main() {
    printf("Hello world\n");
    return 0;
}

Recently, I was inspired to investigate the original HELLO.C for Windows 1.0, a 125-line behemoth that was talked about in hushed tones. To that end, I recorded a video on YouTube that provides a look into the world of programming for Windows 1.0, and then tests the backward compatibility of Windows through to Windows 10.

Hello World Titlecard

For those less inclined to watch a video, my write-up of the experience is past the fold, and an annotated version of the file is available on GitHub.

Bring Out Your Dinosaurs - DOS 3.3

Before we even get into the topic of HELLO.C though, there's a fair bit to be said about these ancient versions of Windows. Windows 1.0, like all pre-95 versions, required DOS to be pre-installed. One quirk however with this specific version of Windows is that it blows up when run on anything later than DOS 3.3. Part of this is due to an internal version check which can be worked around with SETVER. However, even if this version check is bypassed, there are supposedly known issues with running COMMAND.COM. To reduce the number of potential headaches, I decided to simply install PC-DOS 3.3, and give Windows what it wants.

You might notice I didn't say Microsoft DOS 3.3. The reason is that DOS didn't exist as a standalone product at the time. Instead, system builders would license the DOS OEM Adaptation Kit and create their own DOS such as Compaq DOS 3.3. Given that PC-DOS was built for IBM's own line of PCs, it's generally considered the most "generic" version of the pre-DOS 5.0 versions, and this version was chosen for our base. However, due to its age, it has some quirks that would disappear with the later and more common DOS versions.

PC DOS 3.3 loaded just fine in VirtualBox and — with the single 720 KiB floppy being bootable — immediately dropped me to a command prompt. Likewise, FDISK and FORMAT were available to partition the hard drive for installation. Each individual partition is limited, however, to 32 MiB. Even at the time, this was somewhat constrained and Compaq DOS was the first (to the best of my knowledge) to remove this limitation. Running FORMAT C: /S created a bootable drive, but something oft-forgotten was that IBM actually provided an installation utility known as SELECT.

SELECT's obscurity primarily lies in its non-obvious name and usage, not in any necessity: it isn't actually needed to install DOS, as it's sufficient to simply copy the files to the hard disk. However, SELECT does create CONFIG.SYS and AUTOEXEC.BAT, so it's handy to use. Compared to later DOS setup programs, SELECT requires a relatively arcane invocation, with the target installation folder, keyboard layout, and country code entered as arguments, and it simply errors out if these are incorrect. Once the correct runes are typed, SELECT formats the target drive, copies DOS, and finishes the installation.

DOS Select

Without much fanfare, the first hurdle was crossed, and we're off to installing Windows.

Windows 1.0 Installation/Mouse Woes

With DOS installed, it was on to Windows. Compared to the minimalist SELECT command, Windows 1.0 comes with a dedicated installer and a simple text-based interface. This bit of polish was likely due to the fact that most users would be expected to install Windows themselves instead of having it pre-installed.

Windows 1 SETUP

Another interesting quirk was that Windows could be installed to a second floppy disk due to the rarity of hard drives of the era, something that we would see later with Microsoft C 4.0. Installation went (mostly) smoothly, although it took me two tries to get a working install due to a typo. Typing WIN brought me to the rather spartan interface of Windows 1.0.

DOS EXECUTIVE

Although functional, what was missing was mouse support. Due to its age, Windows predates the mouse as a standard piece of equipment and predates the PS/2 mouse protocol; only serial and bus mice were supported out of the box. There are two ways to solve this problem:

The first, which is what I used, involves copying MOUSE.DRV from Windows 2.0 to the Windows 1.0 installation media, and then reinstalling, selecting the "Microsoft Mouse" option from the menu. Re-installation is required because WIN.COM is statically linked as part of installation with only the necessary drivers included; there is no option to change settings afterward. The SDK documentation details the static linking process, and how to run Windows in "slow mode" for driver development, but the end result is the same. If you want to reconfigure, you need to re-install.

The second option, which I was unaware of until after producing my video, is to use the PS/2 release of Windows 1.0. Like DOS of the era, Windows was licensed to OEMs who could adapt it to their individual hardware. IBM did in fact do so for their then-new PS/2 line of computers, adding PS/2 mouse support. Despite being built for the PS/2 line, this version of Windows is known to run on AT-compatible machines.

Regardless, the second hurdle had been passed, and I had a working mouse. This made exploring Windows 1.0 much easier.

The Windows 1.0 Experience

If you're interested in trying Windows 1.0, I'd recommend heading over to PCjs.org and using their browser-based emulator to play with it, as it already has working mouse support and doesn't require acquiring 35-year-old software. Likewise, there are numerous write-ups about this version, but I'd be remiss if I didn't spend at least a little time talking about it, at least at a technical level.

Compared to even the slightly later Windows 2.0, Windows 1.0 is much closer to DOSSHELL than to any other version of Windows, and is essentially a graphical bolt-on to DOS, although through deep magic it is capable of cooperative multitasking. This was done entirely with software trickery, as Windows pre-dates the 80286 and ran on the original 8086. COMMAND.COM could be run as a text-based application; however, most DOS applications would launch a full-screen session and take control of the UI.

This is likely why Windows 1.0 has issues on later versions of DOS as it's likely taking control of internal structures within DOS to perform borderline magic on a processor that had no concept of memory protection.

Another oddity is that this version of Windows doesn't actually have "windows" per se. Instead, applications are tiled, with only dialog boxes appearing as free-floating windows. Overlapping windows would appear in 2.0, but it's clear from the API that they were at least planned at some point. Most notably, the CreateWindow() function call has arguments for x and y coordinates.

My best guess is that Microsoft wished to avoid the wrath of Apple, which had gone on a legal warpath against any company that too closely copied the UI of the then-new Apple Macintosh. Compared to later versions, there are also almost no included applications. The most notable applications that were included are NOTEPAD, PAINT, WRITE, and CARDFILE.

WRITE

CARDFILE

While NOTEPAD is essentially unchanged from its modern version, WRITE could best be considered a stripped-down version of Word, and would remain a mainstay until Windows 95, where it was replaced by WordPad. CARDFILE, likewise, was a digital Rolodex; it remained part of the default install until Windows 3.1, and remained on the CD-ROM for 95, 98, and ME before disappearing entirely.


PAINT, on the other hand, is entirely different from the Paintbrush application that would become a mainstay. Specifically, it's limited to monochrome graphics, and files are saved in MSP format. Part of this is due to limitations of the Windows API of the era: for drawing bitmaps to the screen, Windows provided Device Independent Bitmaps, or DIBs. These had no concept of a palette and were limited to the 8 colors that Windows uses as part of the EGA palette. Color support appears to have been a late addition to Windows, and seemingly wasn't fully realized until Windows 3.0.

Paintbrush (and the later and confusingly-named Paint) was actually a third party application created by ZSoft which had DOS and Windows 1.0 versions. ZSoft Paintbrush was very similar to what shipped in Windows 3.0 and used a bit of technical trickery to take advantage of the full EGA palette.

PAINTBRUSH

With that quick look completed, let's go back to actually getting to HELLO.C, and that involved getting the SDK installed.

The Windows SDK and Microsoft C 4.0

Getting the Windows SDK set up is something of an experience. Most of Microsoft's documentation for this era has been lost, but the OS/2 Museum has scanned copies of some of the reference binders, and the second disk in the SDK has both a README file and an installation batch file that together contain most of the necessary information.

Unlike later SDK versions, it was the responsibility of the programmer to provide a compiler. Officially, Microsoft supported the following tools:

  • Microsoft Macro Assembler (MASM) 4
  • Microsoft C 4.0 (not to be confused with MSC++4, or Visual C++)
  • Microsoft Pascal 3.3

Unofficially (and unconfirmed), there were versions of Borland C that could also be used, although this was untested and appears not to have been documented beyond some notes on USENET. More interestingly, all of the above tools were compilers for DOS and didn't have any specific support for Windows. Instead, a replacement linker was shipped in the SDK that could create Windows 1.0 "NE" New Executables, an executable format that would also be used on early OS/2 before being replaced by Portable Executables (PE) and Linear Executables (LX) respectively.

For the purposes of compiling HELLO.C, Microsoft C 4.0 was installed. Like Windows, MSC could be run from floppy disk, albeit with a lot of disk swapping. No installer is provided; instead, the surviving PDFs have several pages of COPY commands combined with edits to AUTOEXEC.BAT and CONFIG.SYS for hard drive installation. It was also at this point that I installed SLED, a full-screen editor, as DOS 3.3 only shipped with EDLIN; EDIT wouldn't appear until DOS 5.0.

After much disk feeding and some troubleshooting, I managed to compile a quick and dirty Hello World program for DOS. One other interesting quirk of MSC 4.0 was it did not include a standalone assembler; MASM was a separate retail product at the time. With the compiler sorted, it was time for the SDK.

Fortunately, an installation script is provided. Like SELECT, it required listing out a bunch of folders, but otherwise was simple enough to use. For reasons that probably only made sense in 1985, both the script and the README file were on Disk 2, not Disk 1. This was confirmed not to be a labeling error, as the script immediately asks for Disk 1 to be inserted.

SDK Installation

The install script copies files from four of the seven disks before returning to a command line. Disk 5 contains the debug build of Windows, which is roughly equivalent to a checked build of modern Windows. Disks 6 and 7 have sample code, including HELLO.C.

With the final hurdle passed, it wasn't too hard to get a compiled HELLO.EXE.

HELLO compilation


Dissecting HELLO.C

I'm going to go through these at a high level; my annotated hello.c goes into much more detail on all of these points.

General Notes

Now that we can build it, it's time to take a look at what actually makes up the nuts and bolts of a 16-bit Windows application. The first major difference, simply due to age, is that HELLO.C uses K&R C, as it pre-dates the ANSI C standard. It's also clear that certain conventions weren't commonplace yet: for example, windows.h lacks inclusion guards.

NEAR and FAR pointers

long FAR PASCAL HelloWndProc(HWND, unsigned, WORD, LONG);

Oh boy, the bane of anyone coding in real mode: near and far pointers are a "feature" that many would simply like to forget. The difference is seemingly simple: a near pointer is nearly identical to a standard pointer in C, except that it refers to memory within a known segment, while a far pointer is a pointer that includes the segment selector. Clear, right?

Yeah, I didn't think so. To actually understand what these are, we need to segue into the 8086's 20-bit memory map. Internally, the 8086 was a 16-bit processor, and thus could directly address 2^16 bytes of memory at a time, or 64 kilobytes in total. Various tricks were used to break the 16-bit memory barrier, such as bank switching or, in the case of the 8086, segmentation.

Instead of making all 20-bits directly accessible, memory pointers are divided into a selector which forms the base of a given pointer, and an offset from that base, allowing the full address space to be mapped. In effect, the 8086 gave four independent windows into system memory through the use of the Code Segment (CS), Data Segment (DS), Stack Segment (SS), and the Extra Segment (ES).

Near pointers thus are used in cases where data or a function call is in the same segment and only contain the offset; they're functionally identical to normal C pointers within a given segment. Far pointers include both segment and offset, and the 8086 had special opcodes for using these. Of note is the far call, which automatically pushed and popped the code segment for jumping between locations in memory. This will be relevant later.

HelloWndProc is a forward declaration for the Hello window callback, a standard feature of Windows programming. Callback functions always had to be declared FAR, as Windows would need to load the correct segment when jumping into application code from the task manager. Windows 1.0 and 2.0, in addition, had other rules we'll look at below.

WinMain Declaration:

int PASCAL WinMain( hInstance, hPrevInstance, lpszCmdLine, cmdShow )
HANDLE hInstance, hPrevInstance;
LPSTR lpszCmdLine;
int cmdShow;

PASCAL Calling Convention

Windows API functions are all declared with the PASCAL calling convention, also known as STDCALL on modern Windows. Under normal circumstances, the C programming language uses a nominal calling convention (known as CDECL) which primarily relates to how the stack is cleaned up after a function call. In CDECL-declared functions, it's the responsibility of the calling function to clean the stack. This is necessary for variadic functions (i.e., functions that take a variable number of arguments) to work, as the callee won't know how many arguments were pushed onto the stack.

The downside to CDECL is that it requires additional prologue and epilogue instructions for each and every function call, thereby slowing down execution and increasing disk space requirements. Conversely, the PASCAL calling convention leaves cleanup to be performed by the called function, which usually needs only a single opcode to clean the stack at function end. It was likely due to execution speed and disk space concerns that Windows standardized on this convention (and in fact still uses it on 32-bit Windows).

hPrevInstance

if (!hPrevInstance) {
    /* Call initialization procedure if this is the first instance */
    if (!HelloInit( hInstance ))
        return FALSE;
} else {
    /* Copy data from previous instance */
    GetInstanceData( hPrevInstance, (PSTR)szAppName, 10 );
    GetInstanceData( hPrevInstance, (PSTR)szAbout, 10 );
    GetInstanceData( hPrevInstance, (PSTR)szMessage, 15 );
    GetInstanceData( hPrevInstance, (PSTR)&MessageLength, sizeof(int) );
}

hPrevInstance has been a vestigial organ in modern Windows for decades. It's set to NULL on program start, and has no purpose in Win32. Of course, that doesn't mean it was always meaningless. Applications on 16-bit Windows existed in a general soup of shared address space. Furthermore, Windows didn't immediately reclaim memory that was marked unused. Applications thus could have pieces of themselves remain resident beyond the lifespan of the application.

hPrevInstance was a pointer to these previous instances. If an application still happened to have its resources registered to the Windows Resource Manager, it could reclaim them instead of having to load them fresh from disk. hPrevInstance was set to NULL if no previous instance was loaded, thereby instructing the application to reload everything it needs. Resources are registered with a global key so trying to register the same resource twice would lead to an initialization failure.

I've also gotten the impression that resources could be shared across applications although I haven't explicitly confirmed this.

Local/Global Memory Allocations

NOTE: Mostly cribbed from Raymond Chen's blog, a great read on why Windows works the way it does.

pHelloClass = (PWNDCLASS)LocalAlloc( LPTR, sizeof(WNDCLASS) );
LocalFree( (HANDLE)pHelloClass );

Another concept that's essentially gone is that memory allocations were classified as either local to an application or global. Due to the segmented architecture, applications have multiple heaps: a local heap that is initialized with the program and lives in the local data segment, and a global heap that requires a far pointer to access.

Every executable and DLL got its own local heap, but global heaps could be shared across process boundaries and, as best I can tell, weren't automatically deallocated when a process ended. HEAPWALK could be used to see who allocated what and to find leaks in the address space. It could also be combined with SHAKER, which rearranged blocks of memory in an attempt to shake loose bugs. These are similar to more modern-day tools like Valgrind on Linux, or Microsoft's application testing tools.

HEAPWALK and SHAKER side by side

MakeProcInstance

lpprocAbout = MakeProcInstance( (FARPROC)About, hInstance );

Oh boy, this is a real stinker, and an entirely unnecessary one at that. MakeProcInstance didn't even make it to Windows 3.1, and its entire existence is down to Microsoft forgetting details of their own operating environment. To explain, we're going to need to dig a bit deeper into segmented-mode programming.

MakeProcInstance's purpose was to register a function suitable as a callback. Only functions that have been marked with MPI or declared as an EXPORT in the module file can be safely called across process boundaries. The reason for this is that Windows needs to register the Code Segment and Data Segment to a global store to make function calls safely. Remember, each application had its own local heap which lived in its own selector in DS.

In real mode, doing a CALL FAR to jump to a far pointer automatically pushed and popped the code segment as needed, but the data segment was left unchanged. As such, a mechanism was required to store the additional information needed to find the local heap. So far, this sounds relatively reasonable.

The problem is that 16-bit Windows has this as an invariant: DS = SS ...

If you're a real-mode programmer, it might already be clear where I'm going with this. The Stack Segment selector denotes where in memory the stack lives, and SS also got pushed to the stack during a function call across process boundaries, along with the previous SP. You might begin to see why MakeProcInstance is entirely unnecessary.

Instead of needing a global registration system for function calls, an application could just look at the stack base pointer (BP) and retrieve the previous SS from there. Since SS = DS, the previous data segment was in fact already saved, and no registration is required; only a change to how Windows handles function prologs and epilogs. This was actually discovered by a third party, and a tool called FixDS was released by Michael Geary that rewrote function code to do exactly what I just described. Microsoft eventually incorporated his fix directly into Windows, and MakeProcInstance disappeared as a necessity.

Other Oddities

From Raymond Chen's blog and other sources, one interesting aspect of 16-bit Windows was it was actually designed with the possibility that applications would have their own address space, and there was talk that Windows would be ported to run on top of XENIX, Microsoft's UNIX-based operating system. It's unclear if OS/2's Presentation Manager shared code with 16-bit Windows although several design aspects and API names were closely linked together.

From the design of 16-bit Windows and playing with it, what's clear is that this was actually future-proofing for protected mode on the 80286, sometimes known as segmented protection mode. In the 286's protected mode, the memory address space was still segmented into 64-kilobyte windows; the primary difference was that segment selectors became logical instead of physical addresses.

Had the 80286 actually succeeded, 32-bit Windows would have been essentially identical to 16-bit Windows due to how this processor worked. In truth, separate address spaces would have to wait for the 80386 and Windows NT to see the light of day, and this potential ability was never used. The 80386 both removed the 64-kilobyte limit and introduced a flat address space through paging which brought the x86 processor more inline with other architectures.

Backwards Compatibility on Windows 3.1

While Microsoft's backward compatibility is a thing of legend, in truth, it didn't actually start existing until Windows 3.1 and later. Since Windows 1.0 and 2.0 applications ran in real mode, they could directly manipulate the hardware and perform operations that would crash under Protected Mode.

Microsoft originally released Windows/286 and Windows/386 to add support for the 80286 and 80386, functionality that would be merged together in Windows 3.0 as Standard Mode and 386 Enhanced Mode, along with legacy "Real Mode" support. Due to running parts of the operating system in Protected Mode, many of the tricks applications could perform would cause a General Protection Fault and simply fail. This wasn't seen as a problem, as early versions of Windows were not popular, and Microsoft actually dropped support for 1.x and 2.x applications in Windows 95.

Windows for Workgroups was installed in a fresh virtual machine, and HELLO.EXE, plus two more example applications, CARDFILE and FONTTEST, were copied over with it. Upon loading, Windows did not disappoint, throwing up a compatibility warning right at the get-go.

Windows 3.1 Compatibility Warning

Accepting the warning showed that all three applications ran fine, albeit at a broken resolution due to 0,0 being passed into CreateWindow().

HELLO on Windows 3.1

However, there's a bit more to explore here. The Windows 3.1 SDK included a utility known as MARK. MARK was used, as the name suggests, to mark legacy applications as being OK to run under Protected Mode. It also could enable the use of TrueType fonts, a feature introduced back in Windows 3.0.

MARKING

The effect is clear: HELLO.EXE now renders in a TrueType font. The reason TrueType fonts are not enabled by default can be seen in FONTTEST, where the system typeface now overruns several dialog fields.

TrueType HELLO

The question now was, can we go further?

35 Years Later ...

As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same was not true of Windows NT, on which modern versions of Windows are based. Running 16-bit applications is complicated, however, by the fact that NTVDM is not available on 64-bit installations. As such, a fresh copy of 32-bit Windows 10 was installed.

Some pain was suffered convincing Windows that I didn't want to use a Microsoft account to sign in. After inserting the same floppy disk used in the previous test, I double-clicked HELLO, and the Feature Installer popped up asking to install NTVDM. After letting NTVDM install, a second attempt showed that, yes, it is possible to run Windows 1.x applications on Windows 10.

HELLO on Windows 10

FONTTEST also worked without issue, although the TrueType fonts from Windows 3.1 had disappeared. CARDFILE loaded but immediately died with an initialization error. I did try debugging the issue and found WinDbg at least has partial support for working with these ancient binaries, although the story of why CARDFILE dies will have to wait for another day.

windbg

In Closing ...

I do hope you enjoyed this look at ancient Windows and HELLO.C. I'm happy to answer questions, and the next topic I'm likely to cover is a more in-depth look at the differences between Windows 3.1 and Windows for Workgroups, combined with a demonstration of how networking worked in those versions.

Any feedback on either the article, or the video is welcome to help me improve my content in the future.

Until next time,

73 de NCommander

posted by Fnord666 on Monday May 18 2020, @12:17PM   Printer-friendly
from the flaws-of-the-month-club dept.

Microsoft Addresses 111 Bugs for May Patch Tuesday:

Microsoft has released fixes for 111 security vulnerabilities in its May Patch Tuesday update, including 16 critical bugs and 96 that are rated important.

Unlike other recent monthly updates from the computing giant this year, none of the flaws are publicly known or under active attack at the time of release.

Along with the expected cache of operating system, browser, Office and SharePoint updates, Microsoft has also released updates for .NET Framework, .NET Core, Visual Studio, Power BI, Windows Defender, and Microsoft Dynamics.

The majority of the fixes are important-rated elevation-of-privilege (EoP) bugs. There are a total of 56 of these types of fixes in Microsoft's May release, primarily impacting various Windows components. This class of vulnerabilities is used by attackers once they've managed to gain initial access to a system, in order to execute code on their target systems with elevated privileges.

[...] Other bugs of note include two remote code execution (RCE) flaws in Microsoft Color Management (CVE-2020-1117) and Windows Media Foundation (CVE-2020-1126), which could both be exploited by tricking a user via social engineering techniques into opening a malicious email attachment or visiting a website that contains the exploit code.

[...] The critical flaws also include updates for Chakra Core, Internet Explorer and EdgeHTML, while SharePoint has four critical bugs, continuing its dominance in that category from last month.

"Most of the critical vulnerabilities are resolved by the OS and browser updates, but there are four critical vulnerabilities in SharePoint and one in Visual Studio," Todd Schell, senior product manager, security, for Ivanti said via email.

[...] Administrators should also pay attention to a handful of other issues in the trove of patches, such as two for VBScript (CVE-2020-1060 and CVE-2020-1058).

When exploited, both could allow an attacker to gain the same rights as the current user.

[...] There's also an interesting denial-of-service vulnerability (CVE-2020-1118) in Microsoft Windows Transport Layer Security. It allows a remote, unauthenticated attacker to force an abnormal system reboot, resulting in a denial-of-service condition.

"A NULL pointer dereference vulnerability exists in the Windows implementation of the Diffie-Hellman protocol," explained Childs. "An attacker can exploit this vulnerability by sending a malicious Client Key Exchange message during a TLS handshake. The vulnerability affects both TLS clients and TLS servers, so just about any system could be shut down by an attacker. Either way, successful exploitation will cause the lsass.exe process to terminate."

[...] Microsoft has been on a bug-fixing roll lately; this month marks three months in a row that Microsoft has released patches for more than 110 CVEs.

"We'll see if they maintain that pace throughout the year," said Childs.


Original Submission

posted by Fnord666 on Monday May 18 2020, @10:08AM   Printer-friendly

COVID-19 Has Blown Away the Myth About 'First' and 'Third' World Competence:

One of the planet's – and Africa's – deepest prejudices is being demolished by the way countries handle COVID-19.

For as long as any of us remember, everyone "knew" that "First World" countries – in effect, Western Europe and North America – were much better at providing their citizens with a good life than the poor and incapable states of the "Third World". "First World" has become shorthand for competence, sophistication and the highest political and economic standards.

[...] So we should have expected the state-of-the-art health systems of the "First World", spurred on by their aware and empowered citizens, to handle COVID-19 with relative ease, leaving the rest of the planet to endure the horror of buckling health systems and mass graves.

We have seen precisely the opposite.

[...] [Britain and the US] have ignored the threat. When they were forced to act, they sent mixed signals to citizens which encouraged many to act in ways which spread the infection. Neither did anything like the testing needed to control the virus. Both failed to equip their hospitals and health workers with the equipment they needed, triggering many avoidable deaths.

The failure was political. The US is the only rich country with no national health system. An attempt by former president Barack Obama to extend affordable care was watered down by right-wing resistance, then further gutted by the current president and his party. Britain's much-loved National Health Service has been weakened by spending cuts. Both governments failed to fight the virus in time because they had other priorities.

And yet, in Britain, the government's popularity ratings are sky high and it is expected to win the next election comfortably. The US president is behind in the polls but the contest is close enough to make his re-election a real possibility. Can there be anything more typically "Third World" than citizens supporting a government whose actions cost thousands of lives?


Original Submission

posted by Fnord666 on Monday May 18 2020, @07:59AM   Printer-friendly
from the you-or-someone-like-you dept.

Hank Investigates: Incorrectly Charged for EZPass Tolls:

Cynthia's red four-door sits in her Concord driveway. Exactly where it's been for weeks.

[...] "We were following the governor's order and we were not leaving," Cynthia said.

So when Cynthia got her April EZ Pass bill she was baffled. It said her car went through tolls in New York, a COVID hot spot.

[...] She was billed for 60 different tolls with charges totaling more than 600 dollars.

"It said we were on the Bronx, Whitestone Bridge, the Throgs Neck Bridge, and the RFK Bridge in New York City.

I'd never been on those bridges," Cynthia said.

Not a chance her car was in New York. She says she spent hours on the phone with EZ Pass trying to get the errors fixed.

[...] What happened? We found Cynthia's toll trouble is because of the way Massachusetts issues license plates—and a glitch in the EZ Pass system.

The problem is Massachusetts, one of the 17 states connected in the system, uses the same numbers on different types of plates. For example, there could be Mass passenger 1234, but also commercial 1234, Cape and Island 1234, Red Sox, Purple Heart, and more.

When a special plate like that gets an electronic toll, cameras snap a photo of it, and then it’s looked up in the EZ Pass shared system so the car can be charged.

But we found those files do not provide “plate type” information! So if commercial 1234, for instance, goes through, passenger 1234 could get the bill.

How in the world did anyone think that giving the same license plate number to multiple vehicles was a good idea?


Original Submission

posted by Fnord666 on Monday May 18 2020, @05:50AM   Printer-friendly
from the all-your-gifs-are-belong-to-us dept.

All your reaction GIFs now belong to Facebook, as it buys Giphy for $400M:

Seven years ago, Facebook claimed not to support the 21st century's new favorite communication tool, the animated GIF. Oh, how times have changed: Today, Facebook's newest acquisition is one of the Internet's most popular GIF hosting sites.

Facebook is making Giphy part of the Instagram team, the company said today. Axios, which was first to report the transaction, said the deal was valued at about $400 million.

According to Facebook, about half of Giphy's current traffic already comes from Facebook products, especially Instagram. That's perhaps unsurprising, given that Facebook's big three apps—WhatsApp, Instagram, and flagship Facebook—have literally billions of daily users among them.

[...] What the announcements did not mention, however, is that making Giphy a Facebook company can give Facebook access to all the data generated by those searches and API calls from other platforms. And using acquisitions to gather data on competitors is exactly the sort of behavior Facebook is under investigation for right now.


Original Submission

posted by Fnord666 on Monday May 18 2020, @03:41AM   Printer-friendly
from the duck-season-drone-season dept.

Man charged for shooting down drone:

Travis Duane Winters, 34, of Butterfield, was charged with criminal damage to property and reckless discharge of a weapon within city limits Monday in Watonwan County District Court.

A sheriff’s deputy was called Friday to a disturbance at Butterfield Foods. A man said he was flying over the food production company to capture images of the chickens that were being “slaughtered” because of the pandemic.

The suspect admitted to using a shotgun to shoot down the drone — which was valued at $1,900.

I have no idea what you're talking about officer. It must have just crashed. Did that nice gentleman have an FAA permit to fly that drone? He could have hurt someone crashing his drone on our property like that.

Previously:
(2016-01-09) Update: Dad Who Shot Down Drone is Getting Sued. Who Owns the Skies?
(2015-10-28) Update: Dad Who Shot "Snooping Vid Drone" Out of the Sky is Cleared of Charges
(2015-08-02) Man Arrested for Shooting Down Drone Flying Over His Property
(2015-06-29) Man Shoots Down Neighbor's Hexacopter


Original Submission

posted by Fnord666 on Monday May 18 2020, @01:32AM   Printer-friendly
from the just-like-it-sounds dept.

From iOS to SQL: The world's most incorrectly pronounced tech terms:

A lot of people pronounce common tech terms wrong, from iOS to SQL to Qi. It's understandable: Some of the proper or official pronunciations of these terms are counterintuitive at best. Still, we think it's time to clear the air on a few of them.

To that end, we're starting a discussion and inviting you to share your examples with us. Next week, we'll look into a bunch of them and publish a pronunciation guide.

[...] Below are a handful that have come up within the Ars [Technica] staff. Again, dear readers, feel free to discuss and debate, and to introduce some others of your own. For some of these and other terms suggested, we'll follow up with an article making the case for some correct (or, at least official) pronunciations versus incorrect ones, sourced as best as we can.

  • [...]iOS and beOS
  • [...]OS X and iPhone X
  • [...]SQL and MySQL
  • [...]Linux
  • [...]Qi
  • [...]Huawei

Original Submission

posted by martyb on Sunday May 17 2020, @11:23PM   Printer-friendly
from the is-there-anyon-out-there dept.

Researchers led by Gwendal Fève, a physicist at Sorbonne University in Paris, have discovered the first experimental evidence that certain quasiparticles are 'anyons', members of a third kingdom of particles that are neither fermions nor bosons.

Every last particle in the universe — from a cosmic ray to a quark — is either a fermion or a boson. These categories divide the building blocks of nature into two distinct kingdoms.

While quasiparticles demonstrating the fractional quantum Hall effect and displaying a fraction of the charge of a single electron had been observed before, this research is the first to demonstrate that they match predicted anyon behavior.

In 1984, a seminal two-page paper by [Frank A. Wilczek], Daniel Arovas and John Robert Schrieffer showed that these quasiparticles had to be anyons. But scientists had never observed anyon-like behavior in these quasiparticles. That is, they had been unable to prove that anyons are unlike either fermions or bosons, neither bunching together nor totally repelling one another.

That's what the new study does. In 2016, three physicists described an experimental setup that resembles a tiny particle collider in two dimensions. Fève and his colleagues built something similar and used it to smash anyons together. By measuring the fluctuations of the currents in the collider, they were able to show that the behavior of the anyons corresponds exactly with theoretical predictions.

"Everything fits with the theory so uniquely, there are no questions," said Dmitri Feldman, a physicist at Brown University who was not involved in the recent work. "That's very unusual for this field, in my experience."

Journal Reference:
H. Bartolomei, M. Kumar, R. Bisognin, et al. Fractional statistics in anyon collisions [$], Science (DOI: 10.1126/science.aaz5601)


Original Submission

posted by martyb on Sunday May 17 2020, @09:02PM   Printer-friendly
from the the-answer,-my-friend,-is-blowin'-in-the-wind dept.

Arthur T Knackerbracket has found the following story:

After decades of research, meteorologists still have questions about how hurricanes develop. Now, Florida State University researchers have found that even the smallest changes in atmospheric conditions could trigger a hurricane, information that will help scientists understand the processes that lead to these devastating storms.

"The whole motivation for this paper was that we still don't have that universal theoretical understanding of exactly how tropical cyclones form, and to really be able to forecast that storm-by-storm, it would help us to have that more solidly taken care of," said Jacob Carstens, a doctoral student in the Department of Earth, Ocean and Atmospheric Science.

The research by Carstens and Assistant Professor Allison Wing has been published in the Journal of Advances in Modeling Earth Systems.

[...] The simulations started with mostly uniform conditions spread across the imaginary box where the model played out. Then, researchers added a tiny amount of random temperature fluctuations to kickstart the model and observed how the simulated clouds evolved.

Despite the random start to the simulation, the clouds didn't stay randomly arranged. They formed into clusters as the water vapor, thermal radiation and other factors interacted. As the clusters circulated through the simulated atmosphere, the researchers tracked when they formed hurricanes. They repeated the model at simulated latitudes between 0.1 degrees and 20 degrees north, representative of areas such as parts of western Africa, northern South America and the Caribbean. That range includes the latitudes where tropical cyclones typically form, along with latitudes very close to the equator where their formation is rare and less studied.

The scientists found that every simulation in latitudes between 10 and 20 degrees produced a major hurricane, even from the stable conditions under which they began the simulation. These came a few days after a vortex first emerged well above the surface and affected its surrounding environment.

Journal Reference
Jacob D. Carstens, Allison A. Wing. Tropical Cyclogenesis From Self‐Aggregated Convection in Numerical Simulations of Rotating Radiative‐Convective Equilibrium [open], Journal of Advances in Modeling Earth Systems (DOI: 10.1029/2019MS002020)

-- submitted from IRC


Original Submission

posted by martyb on Sunday May 17 2020, @06:41PM   Printer-friendly
from the following-the-yellow-brick-road dept.

'The Wonderful Wizard of Oz' Turns 120:

Playwright, chicken farmer and children's book author L. Frank Baum published "The Wonderful Wizard of Oz" 120 years ago Sunday. The book would sell out its first run of 10,000 copies in eight months and go on to sell a total of 3 million copies before it fell into the public domain in 1956.

Baum would try his hand at other children's books but returned to his Oz characters time and time again, adapting them for a stage production in 1902 that ran for a while on Broadway and toured the country. Baum would write a total of 14 Oz novels, but his biggest success – a 1939 movie version – would come long after his death.

Baum's intent was to create a fairy tale along the lines of the Brothers Grimm and Hans Christian Andersen. Baum also admired the character of Alice in Lewis Carroll's work and chose a similar young girl to be his fictional hero.

[...] A portion of the success of the book has been attributed to Baum's illustrator, W.W. Denslow, with whom he worked closely on the project. Denslow, in fact, was given partial ownership of the copyright of the book. This caused problems later when Denslow and Baum had a falling out while working on the 1902 stage adaptation.

The most popular adaptation of Baum's first Oz book was the 1939 movie starring Judy Garland.

Wikipedia has many more details on the story and the film.

[Aside: I had heard only the land of Oz was filmed in Technicolor because it was so much more costly than black and white. I've been unable to corroborate. Are there any Soylentils here who can confirm or deny it? --Ed.]


Original Submission

posted by martyb on Sunday May 17 2020, @04:17PM   Printer-friendly
from the getting-closer dept.

From the latest blog post of Derek Lowe :

One of the big (and so far unanswered) questions about the coronavirus epidemic is what kind of immunity people have after becoming infected. This is important for the idea of “re-infection” (is it even possible?) and of course for vaccine development. We’re getting more and more information in this area, though, and this new paper is a good example. A team from the La Jolla Institute for Immunology, UNC, UCSD, and Mt. Sinai (NY) reports details about the T cells of people who have recovered from the virus.

[...] So overall, this paper makes the prospects for a vaccine look good: there is indeed a robust response by the adaptive immune system, to several coronavirus proteins. And vaccine developers will want to think about adding in some of the other antigens mentioned in this paper, in addition to the Spike antigens that have been the focus thus far. It seems fair to say, though, that the first wave of vaccines will likely be Spike-o-centric, and later vaccines might have these other antigens included in the mix. But it also seems that Spike-protein-targeted vaccines should be pretty effective, so that’s good. The other good news is that this team looked for the signs of an antibody-dependent-enhancement response, which would be bad news, and did not find evidence of it in the recovering patients (I didn’t go into these details, but wanted to mention that finding, which is quite reassuring). And it also looks like the prospects for (reasonably) lasting immunity after infection (or after vaccination) are good. This, from what I can see, is just the sort of response that you’d want to see for that to be the case. Clinical data will be the real decider on that, but there’s no reason so far to think that a person won’t have such immunity if they fit this profile.

Onward from here, then – there will be more studies like this coming, but this is a good, solid look into the human immunology of this outbreak. And so far, so good.

Be sure to read the article if you’ve been wondering what your thymus has done for you lately.

Journal Reference
Alba Grifoni, Daniela Weiskopf. Targets of T cell responses to SARS-CoV-2 coronavirus in humans with COVID-19 disease and unexposed individuals, Cell (DOI: 10.1016/j.cell.2020.05.015)


Original Submission
