posted by cmn32480 on Wednesday May 23 2018, @06:47PM
from the your-computer-is-not-a-fast-PDP-11 dept.
Very interesting article in ACM Queue by David Chisnall.
In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades.
"C is Not a Low-Level Language" | 65 comments
(Score: 5, Interesting) by Anonymous Coward on Wednesday May 23 2018, @06:57PM (44 children)
C was never a low-level language... it abstracts away machine code to facilitate productivity and maintainability. Even then you are making a request to some form of kernel; you are not even performing memory management, you are using (or abusing) what is provided. The compiler is key here, and equally what the framework exposes. The article is conflating a language with where the end program is being deployed... It's easy to say C isn't low level when it is being used on a modern OS where there are already layers upon layers of abstraction... Can the same be said when you write in C targeting a DSP or a microcontroller, flashing the object code onto the built-in flash?
It's all so easy to write a C application when it is being executed on the Linux or Windows kernel. Try doing it for a resource-limited SoC where you do have to load the registers correctly. That said, I could (and have) done that in assembler for the target chip; likewise I could do it in Python...
I write processors in VHDL and then essentially run mini opcodes... sometimes I even use Python and MyHDL to generate VHDL to synthesize to an Igloo2... A language is just a means to express an algorithm within the constraints of its lexicon, the compiler/synthesizer, and the target framework.
(Score: 5, Insightful) by Snotnose on Wednesday May 23 2018, @07:08PM (14 children)
C used to be a low-level language; now it's not. Not because of the language, but because of the hardware it runs on. Back in the 80s it was routine to compile your C code, figure out the slow parts, look at the assembler, and rewrite the slow parts in assembler. Hell, I remember embedding 8086 instructions into C that gcc happily (well, grumpily, but it would do it) assembled and integrated into my C code.
I still remember the first time I couldn't hand-code a routine to run faster than C. It was a fax machine driven by an NS32016. I had x milliseconds to read each row of pixels while scanning the document, and I couldn't quite do it. Not even in assembly. I don't remember the final fix, be it hardware or software, but I spent a good 6 weeks on that.
The inventor of auto-correct has died. The funnel will be held tomato.
(Score: 5, Insightful) by vux984 on Wednesday May 23 2018, @08:03PM (6 children)
"C used to be a low level language, and now it's not."
I think that accuses C of failing somehow. And that's not the case.
Even assembler and raw machine language are not low-level languages by the metric used here. And C is a touch above those. The problem isn't that C is higher level than it was, because it's still just as close to assembly and machine language as it has always been.
No, the issue is simply that CPUs are more complex. Even if I were writing my hello-world user program in raw assembler, I wouldn't have to explicitly load values from main memory into the cache. The CPU does it for me. MOV EAX,[address] is as low as it gets -- I can hand-assemble it to binary if you really like, but it's still not my problem to sort out whether [address] is cached or not.
C is no further from the bare metal than it ever was, but 'the bare metal' is a lot more complicated and functional in its own right now. There is no direct programmer control over a lot of what it does. It's an interesting question to ponder whether or not there should be.
(Score: 5, Insightful) by jmorris on Wednesday May 23 2018, @09:28PM (5 children)
The article is repeating a classic mistake. We have been here before. Let's make the CPU expose really low-level details, and since the compiler and language know what they are actually trying to do, they can generate better code to utilize all these raw CPU bits. That thinking led to the wreck known as Itanic.
It failed because its designers failed to realize that the strength of C, x86, POSIX, and Win32 is the binding contract across time that each provides. Yes, you can build a highly optimized CPU core, expose all its compromises and optimizations, and run really fast in $current_year's silicon. Add on the shiniest research-OS ideas. And if you are making a PlayStation you might sell enough units that developers and tool makers will invest the effort to extract the potential for some games that have the shelf life of produce. And if you are truly fortunate they will extract that maximum performance before the hardware is obsolete. Then ten years go by, the silicon world has changed entirely, your architecture is hopelessly obsolete, legacy code won't build well (if at all) on new hardware, and you are basically left with emulation. But nobody is likely to port mainstream software to such a platform. Ask Intel and HP: they bet big and lost with Itanium when they built it and nobody came.
The one real problem the article exposes is cache transparency. That needs fixing. Put a few GB of HBM on the CPU package, scale back cache, and then let the OS explicitly handle the NUMA issues if there is off-chip RAM. Explicitly handling cache at the end-program level is simply asking for a trainwreck, as all of that tech changes over time.
The other problem is bloat. CPUs have to cheat outrageously to keep up with the increasing inability of programmers to write efficient programs in any language. Netscape Navigator used to run well in 8MB; now Firefox can use up 8GB and want more. Does it do a thousand times as much? It does not. Full "office" suites with TrueType, embedded graphics, DDE/OLE and such ran on machines with that same 8MB. Modern ones do some more things, but again, do they really do hundreds of times as much? They certainly consume hundreds of times the memory. Which drives the ever-increasing demand for faster chips and corner-cutting.
(Score: 1, Insightful) by Anonymous Coward on Wednesday May 23 2018, @10:24PM
> Does it do a thousand times as much? It does not.
It's arguable whether modern browsers do more than NN (they certainly support more), but modern web pages and apps certainly do more.
(Score: 2) by meustrus on Wednesday May 23 2018, @10:42PM (1 child)
Hardware vs. software performance has always been a bit of a chicken-and-egg problem. You can't just say that CPUs get better at giving performance to the lazy, because a lot of software was built based on that level of performance.
You can idolize the programmers of yore if you want to, but the fact is that they wrote more efficient code because they had to. No programmer starts out building everything right. We all start by making something work, and only after it doesn't work fast enough do we ever go back and try to make it faster. The same goes for memory efficiency, avoiding I/O latency, maintainability, and any other metrics you can come up with for what makes "good" code.
It's the same with SSDs. The performance boost from replacing a spinning platter with an SSD has grown over time, because all software these days is developed on machines with them. The programmer does not experience the high latency of spinning disk I/O, so lots of software these days ships with synchronous file system access.
It's a self-perpetuating cycle. And it just happens to benefit the hardware manufacturer, who gets to keep selling new chips that are better at running the code that people started writing for the last set of chips.
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
(Score: 2) by Wootery on Thursday May 24 2018, @01:02PM
Of course we can. 'Built based on that level of performance' doesn't mean we can't compare the functionality-to-hardware-capability ratio and conclude that it's plummeted over the years.
'High-performance' applications like the Unreal Engine or scientific modelling succeed in making good use of modern hardware. Desktop operating systems and word processors, on the other hand, do much the same as they did 20 years ago, but with vastly higher hardware requirements.
Well, kinda. I'm more inclined to credit competition in the hardware markets. If AMD and ARM imploded tomorrow, you think Intel would keep working hard on improving their products?
(Score: 5, Informative) by letssee on Thursday May 24 2018, @08:49AM
I was with you until the whining over bloat.
Yes, Firefox does 1000x more than Netscape (memory-wise, anyway). Just look at the data size of a complete website from the nineties versus one of today. Easily a factor-of-1000 increase.
(Score: 2) by Freeman on Thursday May 24 2018, @03:45PM
No, they don't provide 1000x more functionality, but your resolution sure is higher. Eye candy has driven the PC market just about as much as anything.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 5, Insightful) by crafoo on Wednesday May 23 2018, @08:13PM (2 children)
CPUs are not low-level devices anymore. They run microcode and have a large amount of functionality that is either undocumented, incorrectly documented, or just completely hidden from the owner. Writing applications for CPU microcode is not low-level programming. That really has nothing to do with which language you use to do it.
(Score: 2, Interesting) by pTamok on Thursday May 24 2018, @10:28AM
The more things change, the more they stay the same.
I used to know a technical genius who specialised in VAX/VMS programming for oil exploration companies. His job was to write programs that analysed the geophysical data coming back from the geological survey teams - basically, huge amounts of data listening to the echoes after precise explosions (or at least, loud noises). Time was money, so he was paid to optimise the programming, so he not only programmed in VAX assembler but also reprogrammed the microcode of the CPUs to get better performance for these highly specific tasks. The DEC VAX 11/780 loaded its microcode from a floppy, so it was entirely possible to modify it, and indeed there was even a Pascal compiler that targeted the 11/780's microcode [dtic.mil] as its output. DEC provided support tools for people to be able to do this: "User microprogramming". See references here: https://people.cs.clemson.edu/~mark/uprog.html [clemson.edu]
(Score: 0) by Anonymous Coward on Friday May 25 2018, @03:08PM
How is writing microcode any different than writing programs against a control matrix?
(Score: 2) by JoeMerchant on Wednesday May 23 2018, @09:48PM
We used a couple of 6811 C compilers back in the 90s: a French company called Cosmic which produced pretty good 6811 code, and some godforsaken port of a Z80 compiler that also output 6811 instructions but often resulted in 10x the code size and 1/10th the speed, or worse. Same code would compile and run on both compilers, but with Cosmic I usually couldn't improve the assembly code - not often enough to worry about, anyway. That other compiler should have been booted on day one, but it can be hard to separate developers from their preferred tools - especially when the developer was your boss.
Ukraine is still not part of Russia. https://en.interfax.com.ua/news/general/878601.html Glory to Ukraine 🌻
(Score: 2) by sjames on Thursday May 24 2018, @01:33AM (2 children)
C was always considered a mid-level language. Higher level than assembly, but lower level than FORTRAN.
These days, with out-of-order and speculative execution, even asm isn't as low-level as it used to be.
(Score: 1) by anubi on Thursday May 24 2018, @07:55AM (1 child)
Yeah... I always saw C as like a super macro assembler... with nearly everything done by macros neatly defined in standard libraries.
My Borland Turbo C would let me do inline assembly if I had to... and that was really efficient when I wrote device drivers, when I had to do a lot of bit-fiddling.
It was kinda like mortar. Where Fortran and Cobol were more like bricks.
I might write the primitives to a tape transport or display driver in assembly.
Or write the primitives to a database engine in C.
But I will take Fortran or Cobol any day to build the program that interfaces to us.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by sjames on Sunday May 27 2018, @06:03PM
I thought of C in much the same way. Back in the days before the optimizers got sophisticated, I think it was fairly apt.
I do think C is over-used these days. There is no reason to be using a mid-level language for UI.
(Score: 3, Funny) by Anonymous Coward on Wednesday May 23 2018, @07:09PM (10 children)
excuse me dude but this man is in the IEEE and has published a paper in it
just cause you did some jquery for ur sister's blog doesn't make u smart
(Score: 3, Insightful) by Anonymous Coward on Wednesday May 23 2018, @07:15PM (9 children)
Appeal to authority much? Even the smartest of us can be wrong.
(Score: 5, Touché) by tangomargarine on Wednesday May 23 2018, @07:43PM (8 children)
Yes, but being an IEEE member has requirements. A random hobo online who can't even be bothered to make an account on this site, shouting at me, needs to show me some qualifications before I take his word over the IEEE's.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 4, Interesting) by VLM on Wednesday May 23 2018, @07:46PM (5 children)
Wait, what? I dunno if this makes them look like idiots, or me, or I suppose both, but they spammed me mercilessly for decades to join. It's about the same as the ACM.
Getting spam from IEEE is not exactly like having the Nobel Committee contact you.
From memory I think they asked you to have a degree in EE, on the honor system.
(Score: 2) by tangomargarine on Wednesday May 23 2018, @07:52PM (3 children)
Either way, we're contrasting this with somebody who won't even make an account here on SoylentNews.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 1, Touché) by Anonymous Coward on Thursday May 24 2018, @03:45AM (1 child)
Some of us have been anonymous (well, as much as possible) since USENET days, and we're not going to stop this side of the grave.
(Score: 2) by Wootery on Thursday May 24 2018, @01:05PM
You might have been doing it for decades, but that doesn't convince me that it makes any sense.
(Score: 2) by letssee on Thursday May 24 2018, @08:51AM
Which is the smart thing to do in this day & age.
Which reminds me I really should shred *all* my online 'social' 'media' accounts.
(Score: 2) by urza9814 on Wednesday May 23 2018, @07:53PM
I was asked to join many times when I was in college pursuing a computer science degree. I think they were spamming the entire College of Engineering...
(Score: 3, Insightful) by MichaelDavidCrawford on Wednesday May 23 2018, @08:21PM
And therefore you owe not just me but US a sincere apology
--- Michael David "Random Hobo" Crawford
Yes I Have No Bananas. [gofundme.com]
(Score: 0) by Anonymous Coward on Wednesday May 23 2018, @09:33PM
So what's your argument? 'X is correct because Y says so' will now and forever be a fallacy, regardless of whether or not Y is a relevant authority figure. It's better to respond to the argument directly than to waste time with this nonsense.
(Score: 3, Insightful) by VLM on Wednesday May 23 2018, @07:43PM (3 children)
C on a PDP-11 is essentially structured assembly language, kinda like IBM mainframes had HLASM, the high-level assembler. So you have macro assemblers, and the next level of abstraction up is something like C.
The problem is that modern processors have baroque compatibility hardware to emulate stuff from the 70s, plus unfortunately leaky, unpredictable speculative-execution units that you can occasionally trick, and the next thing you know it's all Meltdown and Spectre.
I guess a good Spectre/Meltdown SN automobile analogy would be: the engine computer in closed-loop mode sniffs the exhaust for unburned oxygen and wiggles the fuel-injector programming every couple of seconds at idle to "perfect" the mixture. Supposedly the algorithm and source code are all top-secret, confidential trade secrets, but it turns out that if you mess with meteorological conditions just right while taking very close notes on the computer's behavior, you can reverse-engineer all this private, secret stuff, even though casually and simplistically it's supposedly impossible to crack the engine computer's code. In a simplistic way, yeah, 10 seconds of watching a car idle won't give me the complete parameters of its engine computer's algorithm, but let someone mess with it for enough hours and they'll have all the data dumped and neatly formatted in a report.
It's in the same spirit as hacking a crypto implementation in a CPU by monitoring the hell out of its current draw at high resolution as it executes the algorithm; unless the implementor is really careful, the current-consumption "noise" leaks all kinds of secret data, sorta TEMPEST-like. There's a similar timing attack on some poorly implemented crypto. Isn't crypto a huge PITA to get really, really correct, without leaking timing or power information?
And rolling it even further back, to the 60s: people noticed you could hear AM radio interference change based on code execution, which is funny if you're executing random code to play "Daisy" on your $10M minicomputer, but not so funny when people realized you could listen to crypto routines.
Spectre/meltdown is in some ways VERY old going back to the oldest days of computing.
(Score: 4, Interesting) by Dr Spin on Wednesday May 23 2018, @07:48PM (2 children)
C on a PDP-11 is essentially structured assembly language
True, but the funny bit is that the PDP-11 was designed to be a hardware Fortran machine!
Warning: Opening your mouth may invalidate your brain!
(Score: 5, Informative) by mechanicjay on Wednesday May 23 2018, @08:57PM
Gonna need a source for that claim, because it doesn't ring true to me.
My VMS box beat up your Windows box.
(Score: 1, Interesting) by Anonymous Coward on Wednesday May 23 2018, @09:30PM
Are you sure you haven't confused it with the Burroughs B1700?
(Score: 4, Interesting) by Arik on Wednesday May 23 2018, @08:32PM (3 children)
Or I guess I should say congratulations you beat me to it.
When I got my first computer, it came equipped to be programmed in a high-level language (BASIC) and also, through it, in a low-level language (binary, or hex transparently converted to binary). Years later, when I got a PC, I learned a different high-level language: assembly. And then even later, when I finally took a programming class, C and Pascal were the paradigmatic examples of high-level languages. I thought it was funny, as it appeared to me that languages had gotten *less* "high level" over time - BASIC certainly seems like a higher-level language than C or Pascal.
Of course, since that point in time the reverse has been the case; relatively higher-level languages seem to be the trend instead. But it's all relative, because only one end of the scale is fixed. There is an absolute *lower* limit - nothing is lower than binary. But there's no upper limit. There's no clear line between extreme high-level languages and simply using the computer. So what we think of as "high" in this context is very subjective.
If laughter is the best medicine, who are the best doctors?
(Score: 2) by c0lo on Wednesday May 23 2018, @10:46PM (2 children)
Ah, PEEK and POKE [wikipedia.org] programming, those were the days. (grin)
(Score: 2) by Arik on Thursday May 24 2018, @01:01AM (1 child)
If laughter is the best medicine, who are the best doctors?
(Score: 0) by Anonymous Coward on Thursday May 24 2018, @04:39AM
I still have a copy of the Beagle Bros Peeks, Pokes and Pointers chart around here somewhere...
(Score: 2) by HiThere on Wednesday May 23 2018, @08:43PM (9 children)
Sure it was, or pretty nearly. Byte once had an article about a bunch of M6800 assembler macros that implemented well over 90% of C.
Now it's true you could do a lot with that assembler that was quite difficult to do with C, but that's a rather different argument.
(Score: 2) by Wootery on Thursday May 24 2018, @01:11PM (8 children)
Register allocation, static scoping, and static type-checking in a macro system? And C's precedence and type-promotion rules? And all the rest?
Sounds like quite a macro system.
(Score: 2) by DannyB on Thursday May 24 2018, @03:57PM
I remember looking at some assembly language once, in the 1980s. I think it was 68000, but my memory grows weaker. I was astonished at how much you could do with the macro system. The macro system was a programming language in itself.
Interesting to study. But I didn't want to make a career of it. I just needed to accomplish a few specific things.
Can't large language models be put in charge of resolving ethical issues related to the use of AI?
(Score: 2) by HiThere on Thursday May 24 2018, @05:47PM (6 children)
C doesn't handle specifying register allocation. That's something that is only optionally paid attention to (and usually ignored). Etc.
Also, I didn't say it handled all of C (circa 1980's), just over 90% of it. I don't have a copy of the article, so I can't specify just what it handled, and what it didn't. I'm not sure it handled floating point.
FWIW, I also never actually tested the provided code. I was coding in Fortran on a mainframe and didn't have access to an M6800 machine, so it was more a "that's really interesting".
(Score: 2) by Wootery on Friday May 25 2018, @09:21AM (5 children)
I don't follow. C has the concept of local variables. Assembly languages don't. The process of mapping variables onto the register file and memory of the target machine is called register allocation. It's a considerable algorithmic challenge; indeed, solving the problem optimally is NP-complete.
If you've somehow implemented a C compiler in an assembly macro system, that means you've implemented register-allocation, no?
(Score: 2) by HiThere on Friday May 25 2018, @05:27PM (4 children)
You are assuming features common to compilers that create efficient code, but not specified by the language. And I've used compilers in the past that didn't handle that at all well. (Actually, of course, I thought you were referring to the register allocation declaration rather than local variables, but it's still true.)
FWIW, I used a C compiler on an Apple ][ (6502) and a subset compiler on an i8088. These were "around" the same time as the Byte article.
Also, claiming that assembler languages don't have local variables is a mistake. Some of them do. Most (all?) of them don't protect local variables against external modification, but then neither did (do?) a lot of C compilers. I think that's usually an OS protection and doesn't work within the program, though admittedly in C it's hard to find the address of a stack variable, and on some machines they don't usually *have* an address, being stored in registers. Again, this is implementation-dependent and not part of the C language. (At least not the older standard; I don't know the more recent ones. The last time I looked in detail was around 1990 or possibly a bit earlier.)
But it's also true that the M6800 had a powerful macro assembler, which was the actual point of the Byte article. (Well, that and how well the M6800 instruction set was adapted to programming.)
(Score: 2) by Wootery on Saturday May 26 2018, @02:40PM (3 children)
Well, I'm stating that C has variables, and I'm assuming the target machine doesn't. If those assumptions hold, that's a big divide to cross, even sub-optimally. Without a seriously powerful macro system -- way beyond a typical assembly language -- I don't see how you'd do it.
Sure. When a compiler is said to generate poor code, poor register-allocation is probably a big part of it.
Well, some 'high-level assembler' languages, perhaps, but at that point it's a stretch to call it an assembly language. Show me a hardware infinite register machine, and sure, its assembly language could be said to have variables. (I'm a little disappointed that a quick Google turned up nothing on that front. Figured someone would have tried it.)
What would it mean to 'protect against external modification'?
Aside: here [cornell.edu] is a (freely available) paper exploring the idea of a register-allocation assembly macro, which would presumably exist as a special macro-language facility, not as a macro defined in the ordinary way. Rather thin on what an example usage might look like, though.
(Score: 2) by HiThere on Saturday May 26 2018, @05:26PM
You are assuming that register allocation is part of being a low-level language. This is only true on certain CPUs. Many I've programmed on only HAD two registers, and their use was essentially fixed. The 6502 could treat the entire zero page of memory as a set of registers.
I'm sorry I can't be more specific (it's been multiple decades since I did any assembly language programming), but low-level languages don't necessarily need to allocate registers. That depends on the architecture of the CPU. It also depends on various other features of the opcode set. If registers aren't a highly constrained resource, and memory can also be addressed in other ways, it can make sense not to specify.
Now, if you wanted to claim that assembler is lower level than C, I'd agree without question. And microcode is lower yet... if it's present. The IBM 7094 didn't have microcode, and I'm not sure much before 1980 did, but with chips you can't be sure without grinding them apart under a microscope. Still, I never even heard of microcode until after 1970. (I'm not sure how long after.)
C allows you to suggest that variables be allocated to registers. The compiler is free to ignore your suggestion, but the fact that you can suggest that kind of hardware assignment is a low-level feature. If it had to pay attention, that would limit the number of CPU types it could run on. If you happen to know the address of a hardware port, C lets you write to that port. I once wrote a printer driver in C. It wasn't a complete one, but it was needed for a special case (driving a dot-matrix printer off a remote terminal's secondary port) that the standard drivers wouldn't handle. That's a pretty low-level activity.
(Score: 2) by HiThere on Saturday May 26 2018, @05:28PM (1 child)
What would it mean to 'protect against external modification'?
It would mean something like a C++ private variable.
(Score: 2) by Wootery on Saturday May 26 2018, @07:14PM
With some non-portable extensions of C, you can request/insist that the compiler use a specific register for a variable. That is certainly a low-level feature, yes.
You've misunderstood the intent of C++ private variables. They're about helping the programmer write good object-oriented code. They don't protect you against hostile code with access to your process. [itcsolutions.eu]
In other languages/programming environments, things might be different, but C++ provides no such language features, and has no such security model.
(Score: 5, Insightful) by Lester on Wednesday May 23 2018, @07:39PM (2 children)
Yes, it was a low-level language for the PDP-11. And it is still a low-level language for POSIX.
The author of the article says that many problems of current processors come from trying to make C programmers believe that it is a low-level language. I'll fix it for him: many problems of current processors come from trying to make C programmers believe that they are running on a real processor, not a low-level emulator.
That is like saying that Windows is not an operating system because it is able to run on a virtual machine.
C is a low-level language running on a simplified processor.
(Score: 3, Informative) by crafoo on Wednesday May 23 2018, @08:17PM
Thank you. You said this much better than I did. It was really quite an impressive trick, taking control of the hardware away from the owner without many even noticing what was going on. But they did it, and here we are.
(Score: 2) by qzm on Wednesday May 23 2018, @08:44PM
What has happened has nothing to do with C, high- or low-level. They obviously have no actual experience of the situation.
The problem is caused by the desire for high performance and backwards compatibility.
Absolutely nothing directly to do with C. There is nothing in C that would require any of this.
Mostly they appear to be working on the premise that C is old, and the 8086 architecture is old, so C must control that architecture, which is just stupid.
C could just as easily target the internal physical architecture if it were exposed.
I'm sure they have some pretty language they think is magically better, but this cart they are pushing has no wheels.
(Score: 5, Insightful) by BsAtHome on Wednesday May 23 2018, @07:42PM (3 children)
The article's premise is geared towards current high-performance CPUs for servers and desktops (IA, ARM, Power, etc). All these CPUs are internally completely different from the visual programmer's architectural view.
However, if you go to the embedded world, C maps much better to the architecture. For example, the AVR architecture is well suited for C. Once you move up the chain you will see a greater and greater mismatch; the ARM Cortex-M series is somewhere in between on the scale.
Some architectures map badly to any high-level programming language. If you look at the PIC architecture, you need to code in assembly to do a good job (even though it has some higher-level instructions for mapping to higher-level languages). OTOH, sometimes you simply don't care, when timing or code size is of no issue.
Anyway, you should always use the Right Language(TM) for the task at hand; C is just one of the many languages you can choose from. That is an art only real programmers understand.
(Score: 0) by Anonymous Coward on Wednesday May 23 2018, @07:59PM (2 children)
Why does the word "visual" keep getting used with regard to programming? I just don't understand that. Is it because of the Microsoft product?
(Score: 2) by BsAtHome on Wednesday May 23 2018, @08:09PM
Visual in the abstract context "to visualize; a -mental- image of how something works or is put together".
There, now you may go mental. I see blue.
(Score: 2) by realDonaldTrump on Thursday May 24 2018, @04:41PM
Because so much cyber is done by men. Almost all of it. And men are very visually oriented. (o)(o)
(Score: 1, Insightful) by Anonymous Coward on Wednesday May 23 2018, @08:21PM (1 child)
Seems to be splitting hairs over "high" or "low" here. It's somewhere in between.
It started off as portable assembler: sort of high-level, but geared toward low-level operation.
It has since been used for generalized applications in place of other high-level languages. C often becomes the choice because many other high-level languages abstract away enough that programmers sometimes run into something they can't do within the language, while in C almost anything is possible.
So it is really a sort of mid-level programming language: used mostly for high-level stuff, but it still stinks to hell of low-level crap.
(Score: 2) by HiThere on Wednesday May 23 2018, @08:53PM
Well, I never programmed on the PDP, but to me C seems close to assembler level on an appropriate architecture. I do recall that Lifeboat C for the i8088 didn't implement the full language because it would take too much RAM, and they did a lot of the instructions with assembler macros. That's pretty low level.
And how would you rate fig-forth? At least the first version I encountered was implemented via a bootstrap module implemented in assembler, and even the later modules could easily incorporate assembler code. I don't know if that's still true, it doesn't look as if they publish a version for a modern processor. That's what tends to happen when your code is tied to assembler.
(Score: 0) by Anonymous Coward on Wednesday May 23 2018, @08:47PM
These dipshits often write inflammatory articles just to get press, but I think this one is just because they're so far up their own holes they don't realize C is actually pretty low on the spectrum. It's by no means a high-level language, lacking most facilities from that category, and to hear them tell it, only bitcode is low level.
(Score: 4, Insightful) by Anonymous Coward on Wednesday May 23 2018, @09:09PM
When CPUs emulate an "x86-compatible" CISC architecture by running undocumented microcode on undocumented RISC cores, you cannot have any "low-level language", by definition. It takes a special breed of idiot to try to shift blame for bugs in their emulators from Intel and AMD to whoever/whatever makes client software for them.
Why not go and blame languages, compilers, and programmers for not implementing workarounds for all the undiscovered Microsoft zerodays while they're at it? Same kind of logic.
(Score: 2) by RamiK on Wednesday May 23 2018, @09:19PM (2 children)
Mill just got their fat pointers patents approved: https://patents.justia.com/assignee/mill-computing-inc [justia.com]
Well, if it's any consolation, you can still justify parallel languages if they fail to solve micro-threading... But even if they miss the FPGA release this year, turfs should work well enough to do away with capabilities even on more conventional machines, so the article isn't really going anywhere.
Btw, when C++ introduced atomic operations and broke the memory model, no one cared. So it can be argued that as long as user land doesn't need substantial rewrites, new hardware requiring a new kernel will succeed commercially. Similarly, the way GPUs do a 180 every few years and data centers adapt further proves that as long as the required porting is localized user-land optimization, hardware changes won't necessarily lead to C being dropped.
But hey, I still advocate Go and friends. But only because I think C sucks and we can and should do better.
(Score: 2) by Wootery on Thursday May 24 2018, @01:22PM (1 child)
The Mill guys are still going, huh.
I see no mention of 'fat pointers' there. What are they?
(Score: 2) by RamiK on Thursday May 24 2018, @06:04PM
Pointers with a bit extra around the waist :D So yeah, it's just a general term for carrying more than an address. It could be a range, resulting in a bounded pointer... It could be ownership, resulting in a capability pointer... It could carry meta bits signifying an explicit data type... Typically it's a mix of the above.
The Mill patents in question cover their variation (turfs): roughly, how it's implemented at the CPU and MMU levels and how they've circumvented the problems the article raises as reasons to abandon the C memory model.
(Score: 0) by Anonymous Coward on Wednesday May 23 2018, @09:22PM
ASM is also not low level.
(Score: 1, Insightful) by Anonymous Coward on Wednesday May 23 2018, @11:33PM
I can't agree with the starting point or conclusion of this article.
C is not a high or low level language.
It is a language which supports both sort-of-high-level and definitely low-level programming.
You can write big programs and get the full performance of the machine in selected parts.
You can talk to hardware at the bit and register level.
None of this requires a big, complex compiler if the programmer is able to provide carefully crafted code for the few parts that need to be fast.
(Unrolled, lots of opportunities for speculative and parallel operations, and easy mapping into assembly code.)
Given this, the same code will run well on multiple ISAs.
I know this to have been true since at least the 90's, and it's only getting better since then.
C does assume a sequential model, but if you provide enough work without sequential dependencies, gcc has (since the Alpha) had no problem keeping a bunch of functional units working in parallel. Vector processors may be a simpler way to build hardware that does this; vector units are available in many common processors, but history has shown them to be a niche thing.
These exploits are possible because there is too much shared logic between the protection levels. Some secure facilities paint physical things different colors for different levels. Perhaps the processor, or at least the simulator, needs to do this with logical labels to sort this out (black code seeing red data is an error; red code writing to black memory makes black data; etc.). Didn't Burroughs or Multics already do this at runtime?
(Score: 2) by DrkShadow on Wednesday May 23 2018, @11:54PM
Ensure that you read the comments for this article. Some of them are well thought out and pointed. Especially:
This article is about lost speed due to lack of parallelization. Suppose that's not your driving factor -- maybe you need elliptic-curve cryptography, or lots of simple, cheaply written programs. Parallelization helps a great deal, from web servers serving many clients to big-data supercomputers doing a great deal at once. It isn't everything, though, and Intel seems to have identified the "else" as the majority -- whether from legacy code, presumed paradigms, or natural thought processes.
(Score: 2) by DannyB on Thursday May 24 2018, @04:02PM
As the Minbari ambassador Delenn says, "It's all a matter of perspective."
Those who use Prolog, or Mini-Kanren or similar, think Haskell is a low level language. Or Haskell users might view Lisp as a low level language.
C is most definitely a low-level language by quite a few different reasonable definitions. I can agree that from a certain POV one might not view C as low level. But for the vast majority of everyday productive work, C is probably viewed as a low-level language.
(Score: 2) by dbe on Thursday May 24 2018, @06:00PM
If you look at an architecture like the ARM Cortex-M0 (found in some Arduinos), the pipeline is still a reasonable 2 stages (http://microchipdeveloper.com/32arm:m0-pipeline) and the memory is still flat (no cache), single-threaded.
So even for a reasonably recent microcontroller, C could still be considered a low-level language by the author's own arguments.
Now try to even understand the behavior of a small hand-coded SSE/SIMD assembly loop performing some vector computation on any modern CPU: it's nearly impossible to know why some instruction orders are faster than others, or how to optimize cache usage to avoid cache misses and the other "freebies" that come with current CPU monsters...
(Score: 3, Interesting) by Wootery on Friday May 25 2018, @09:27AM
Dr. David Chisnall? He walks among us on SoylentNews!
His username is left as an Internet-sleuthing exercise for the reader.