The Register has a story about a Python compiler called Codon that turns Python code into native machine code without the usual runtime performance hit:
Python is one of the most popular programming languages, yet it's generally not the first choice when speed is required.
"Typical speedups over Python are on the order of 10-100x or more, on a single thread," the Codon repo declares. "Codon's performance is typically on par with (and sometimes better than) that of C/C++."
"Unlike other performance-oriented Python implementations (such as PyPy or Numba), Codon is built from the ground up as a standalone system that compiles ahead-of-time to a static executable and is not tied to an existing Python runtime (e.g., CPython or RPython) for execution," the paper says. "As a result, Codon can achieve better performance and overcome runtime-specific issues such as the global interpreter lock."
C++ Weekly - Ep 366 - C++ vs Compiled Python (Codon) benchmarks the same algorithm in Codon-compiled Python (8.4 seconds) and C++ (0.09 seconds). The video also points out the following:
We need Python code that works with Codon. It takes some porting; we have to give types. It is a lot like C++ in this regard.
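For context, the typed Python that Codon wants looks much like ordinary annotated Python. A minimal, hypothetical sketch (the function and example are ours, not from the video; plain CPython also runs it unchanged):

```python
# Ordinary Python type annotations; Codon uses them to compile
# ahead-of-time to native code, while CPython simply ignores them.
def fib(n: int) -> int:
    a: int = 0
    b: int = 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(40))  # 102334155
```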
(Score: 4, Funny) by Barenflimski on Tuesday March 14, @03:19AM (3 children)
I hope the software on my car is running C++, cuz I like to go fast!
But in all seriousness, the robot I made, which is coded in Python to push the stick that trips the dominoes that knock a tennis ball that falls in a bucket to pull a lever that flips on my lights, doesn't need to execute 10x-100x faster. Python also has more libraries for actuators that are easy to work with.
I'd even argue that without physical access, you aren't going to buffer overflow my robot.
I do love memcpy() though....
(Score: 4, Interesting) by JoeMerchant on Tuesday March 14, @01:46PM (1 child)
It's no surprise to me at all.
As long as the code you are writing isn't doing anything much, Python is fine.
If there's ever anything resembling "heavy lifting" in a Python project, you're gonna need to put that in a module - usually C or C++, Fortran works too, or... you're gonna have a veeeeery sloooooooow program.
I use Python for my Pi Pico projects, because the dev environment is much easier to work with than the Pico SDK for C, plus: my Pico projects aren't doing much in the way of actual computation.
Otherwise, I really despise systems of programs written in collections of languages with assemblages of compilers, interpreters, and other toolchain when you could do the job with a single language and a single toolchain. Makefiles should be dead simple, when the makefile is more complex than your code, you're doing it wrong.
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 0) by Anonymous Coward on Wednesday March 15, @09:12AM
From what I see, most stuff isn't heavy lifting. If you're trying to do an FPS shooter or 1Gbps/10Gbps servers that push the limits of the hardware, then sure, use stuff like C or C++.
Nowadays with GHz CPUs perl can do 10 million string concats in less than 0.8 seconds in a VM. That's fast enough for many use cases.
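For comparison, a rough CPython sketch of the same kind of micro-benchmark (timings are machine-dependent; the numbers are illustrative, not a claim about Perl):

```python
import time

# Build a 10-million-character string, the CPython-idiomatic way:
# collect pieces in a list, then join once at the end.
n = 10_000_000
start = time.perf_counter()
parts = []
for _ in range(n):
    parts.append("x")
s = "".join(parts)
elapsed = time.perf_counter() - start
print(f"{n} appends + join took {elapsed:.2f}s; result length {len(s)}")
```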
For work, I wrote a DHCP server in Perl more than a decade ago, no buffer overflow[1] vulnerabilities in my code. Was more than fast enough back then, should be even faster on current hardware. In fact it was even faster than the ISC DHCP server for some of the things we needed the DHCP server to do.
[1] It's in Perl so by definition if there are any buffer overflows it would be a bug in other people's code (e.g. the perl developers) not mine.
(Score: 3, Interesting) by istartedi on Tuesday March 14, @07:09PM
The right tool for the job. I took a robotics lab course when I was in school. We had to do a simple pick-and-place task, and there was actually a circuit to build that would switch something on and start it. We had all been taught about switching transistors, so that was the first thought, but getting the circuit design correct actually turned out to be a hassle. Those of us who had dabbled in electronics outside of school (about half the class) decided to use a relay. I'm given to understand that few relays can switch much faster than 1 kHz--orders of magnitude slower than the transistor. The circuit was dirt simple though, worked the first time, and was reliable enough for a lab where we just had to have it working for one day to pass. That dirt-slow mechanical relay was the right tool for the job.
(Score: 5, Insightful) by Beryllium Sphere (r) on Tuesday March 14, @03:37AM (1 child)
>"Codon's performance is typically on par with (and sometimes better than) that of C/C++."
What was atypical about the benchmark that made it run two orders of magnitude slower? That's a larger difference than I'd expect between two native code generators.
(Score: 4, Informative) by GloomMower on Tuesday March 14, @12:39PM
The summary was not good. If you watch the video, he simply moved a lookup-table variable so it wasn't being created on every loop iteration, and the time went down to 3.5 seconds.
The C++ version already had the LUT outside the loop. Does it have other optimizations? I don't think it was really a good direct comparison; obviously you've got to optimize each one for what you're using. I'm also not sure whether Codon does some startup bootstrapping. Comparing it to something that takes less than a second in C++ maybe isn't fair if there is any bootstrapping.
Looks like you can mix regular Python and Codon using some decorators, so it can be useful in some places. scipy.weave lets you embed C/C++ in a Python function, so this looks like it could be a nice middle ground.
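A sketch of what that mixing might look like. Exaloop's docs describe a `codon.jit` decorator for compiling individual functions from within CPython; treat the exact import and behavior as an assumption, and note the no-op fallback below, which lets the sketch run even without Codon installed:

```python
# Hypothetical sketch: compile one hot function with Codon's JIT
# decorator, keeping the rest of the program in ordinary CPython.
try:
    from codon import jit
except ImportError:
    # Fallback stand-in so the sketch runs where Codon isn't installed.
    def jit(fn=None, **kwargs):
        if fn is None:
            return lambda f: f
        return fn

@jit
def hot_loop(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

print(hot_loop(1000))  # sum of squares below 1000: 332833500
```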
(Score: 5, Insightful) by Rosco P. Coltrane on Tuesday March 14, @04:52AM (15 children)
There's running speed - which is what C delivers - and there's quick prototyping and ultra-fast software delivery and maintenance, for when that matters more.
I'm in the second boat where I work: I can spew out a super-complicated piece of software for a test or a piece of test equipment I'm working on in a matter of hours in Python - even minutes sometimes - when it would take me 10x as long to code the same thing in a compiled, low-level language. My employer is perfectly okay with buying a big expensive computer with a giant CPU to do mundane things at tepid speeds, because it's cheaper than paying me to fuck around with C, or than not performing the tests in time.
Python isn't the solution to every problem, but it's often the right solution to a different kind of speed problem.
(Score: 4, Interesting) by Thexalon on Tuesday March 14, @10:58AM (8 children)
Also, they're apparently comparing it to C++, not C. C++ has a major downside that really affects development time: compile time, which is significantly worse for C++ than for C, and obviously worse than for languages that don't need to be compiled at all.
In an interpreted language like Python, your typical development cycle is "write, test, repeat as needed". In a compiled language like C++, that same cycle has an extra step: "write, wait for compiling, test, repeat as needed". And if that compiling takes an average of, say, 5 minutes, which it very easily could for a non-toy-sized program, you've changed the number of times you can go through that cycle per hour from something like 40 down to around 10.
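The arithmetic behind those figures, assuming roughly 1.5 minutes of writing and testing per iteration (a number chosen here purely for illustration):

```python
def cycles_per_hour(write_test_min: float, compile_min: float) -> float:
    # One cycle = coding/testing time plus any compile wait.
    return 60 / (write_test_min + compile_min)

print(cycles_per_hour(1.5, 0.0))            # interpreted: 40.0 cycles/hour
print(round(cycles_per_hour(1.5, 5.0), 1))  # with a 5-minute compile: 9.2
```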
And that of course assumes that your programmers are completely 100% diligent about focusing on their work while waiting for compiling to finish. Which I know I definitely am, and would never even consider starting office-chair jousting with my coworkers.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 2) by Rosco P. Coltrane on Tuesday March 14, @01:49PM (1 child)
That's what unit tests are for. If any change you need to test takes your program several minutes or hours to compile, it's not architected properly. The final big compile should only be a final sanity check, as your units should have passed their tests well before that.
(Score: 2) by Thexalon on Tuesday March 14, @02:38PM
Unit tests are good. Heck, TDD, where you write the unit tests first, is great too. But they don't solve absolutely everything: over the years I've gotten plenty of tasks involving code I wasn't responsible for writing, where there was good unit test coverage and still a bug that needed to be addressed. That meant identifying the gap in the automated testing, which involved a lot of manual write-test cycles until I could sort out what went wrong.
Those don't get you away from the fact that a compile step takes time and acts as a brake on how fast you can do things.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 3, Interesting) by Immerman on Tuesday March 14, @03:28PM (2 children)
I never really understood the argument about compile speeds. I've done some decent-sized projects - nothing *huge*, but pretty big - and when making changes you normally only have to compile the .cpp file(s) you've actually changed; all the other object files are already built and just waiting to be linked. It rarely takes more than a fraction of a minute, and you find all your syntax errors before the program even begins to run, rather than discovering them weeks or months later when a rarely used corner case finally tries to execute broken code.
Now, if you start changing interfaces or inline code (header files), then yeah, that can take a long time to compile if they're used in lots of places... but on a big project it's not often that you change one module in a way that affects how it presents itself to the rest of the project.
Meanwhile, precompiled headers can dramatically reduce the (usually already pretty short) compile times... unless you do something stupid like including one big "optimized" header across the entire project, which forces the entire project to be recompiled whenever any header file changes. Such "optimizing" headers can make compiling a lot faster if used appropriately, but they really only make sense for relatively stable header collections, or for collections used within a relatively limited context (e.g. only within one module).
The old C-style compile performance rules remain - don't include headers unless you actually need them.
(Score: 2) by Thexalon on Tuesday March 14, @04:53PM
My admittedly limited experience is that C++ compiling isn't a huge deal... until you hit a problem that doesn't show up when you only recompile one module. The kind of thing where the underlying issue turns out to be (for example) somebody screwing up pointer math because they forgot a sizeof(), so a variable or data structure gets changed in a way it wasn't supposed to. The exact placement of things on the stack and heap then determines how that error does or doesn't affect other parts of the system, and since the linker didn't do the same optimizations when you were just building one module, you need to rebuild everything to see the problem.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 3, Insightful) by turgid on Tuesday March 14, @07:57PM
The Pascal family of languages had this problem solved over 40 years ago. And yes, they got OOP too. C++ didn't invent it.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by turgid on Tuesday March 14, @07:53PM (1 child)
I think you need to modularise your code and only test the bits that change first then leave testing the whole thing until you're pretty sure you've done a good job with the bit you've changed.
I had a colleague once who, for a laugh, deliberately went mad with C++ templates. He was on a 32-bit machine, and it went out to lunch for tens of minutes compiling the templates. He used to go out to lunch while it was compiling, too.
One day he came back from lunch and the machine had run out of virtual memory trying to compile his templates.
Moral of the story: (a) C++ is ludicrous and you should avoid it or (b) C++ is ludicrous and you should use it very, very carefully. Life's too short to wait all day for compilers. We are in the third decade of the 21st century with multi-gigahertz 64-bit multi-core computers with tens of gigabytes of memory. We are not compiling on 6MHz 16-bit CPUs with 64k segments.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 0) by Anonymous Coward on Wednesday March 15, @03:10AM
No, the moral of the story is you purposely abused the template system, then blamed the language. Moron.
(Score: 0) by Anonymous Coward on Wednesday March 15, @03:43PM
Utter nonsense.
(Score: 4, Insightful) by DannyB on Tuesday March 14, @02:49PM (5 children)
Something similar which I notice (as a Java developer) time and time again when language and performance are discussed.
People arguing over every last CPU cycle and byte consistently underestimate the huge cost of developing and maintaining large software projects. Put differently, people underestimate the value of higher-level languages that abstract you away from the hardware and closer to your actual problem, making it easier to reason about the problem instead of focusing on hardware issues and sharp edges where one stray pointer access can bring down the entire system.
This same principle applies to Python. Once upon a time, computers were extremely expensive and developers were cheap. A computer similarly powerful to an early Raspberry Pi cost MILLIONS of dollars. Developers cost maybe $30,000 per year in mid 1970s dollars.
Today it is the opposite, but people arguing for too low-level a language are blind to this. Developers are very expensive. Computers are dirt cheap and getting cheaper. For the cost of one month of a developer with benefits, you can throw quite a few extra gigabytes of memory into a production server. Or a few extra CPU cores.
Ideally I want it all. A high level language that provides very good performance. But if I have to choose, my choice based on pure economics will be for a higher level language. Abstraction and developer effort are higher priorities than saving every possible cpu cycle and byte of memory.
OPTIMIZE FOR DOLLARS not for cpu cycles and bytes.
My dream would be a high level language with a compiler for every possible platform that produces a stand alone executable that is efficient. If I can't have it all, then I will make compromises on computer performance because that saves the most money. I can make up for performance and memory by using more/better hardware. For quite a few years now "throw more hardware at it" has been a cost effective solution. If conditions change businesses will respond.
Note: I fully understand that C has stood the test of time and is absolutely perfect for writing low-level code close to the hardware, or where extreme performance is needed, never mind the development and maintenance cost.
A language is too low level when it forces you to focus on the irrelevant. (like managing memory)
If there were one perfect programming language for all uses then we would all be using it already.
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Tuesday March 14, @09:00PM (1 child)
No we wouldn't because there are always masochist/sadists who enjoy things like forced whitespace structuring etc.
(Score: 2) by DannyB on Wednesday March 15, @02:03PM
I disagree, because they would all be Darwin Award recipients after doing that. Those who didn't measure up to a Darwin Award would be victims of the great extinction in the coming world wars of tabs vs. spaces.
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Wednesday March 15, @03:47PM (1 child)
Ah, the ol' "I can write sh*tty code because computers are faster now" argument.
(Score: 3, Insightful) by DannyB on Wednesday March 15, @06:37PM
We can write higher quality code while saving money because we don't have to focus on the irrelevant details of the hardware.
How often should I have my memory checked? I used to know but...
(Score: 2) by Beryllium Sphere (r) on Wednesday March 15, @05:44PM
Which I could believe describes the 90% case.
The exceptions might be things like spacecraft, where the rad-hard chips are generations old and subject to a cruelly tight power budget.
(Score: 5, Informative) by Rosco P. Coltrane on Tuesday March 14, @05:02AM (1 child)
That's because it is. Codon is $$$ software sold by a company called Exaloop [exaloop.io].
From this page:
Nothing wrong with for-pay software or hybrid open-source license, but worth mentioning if you wonder how an article about Python's execution speed winds up on the Reg...
(Score: 2) by GloomMower on Tuesday March 14, @12:43PM
After 3 years, it looks like each release goes to the Apache license. So older versions will become free over time.
(Score: 4, Interesting) by turgid on Tuesday March 14, @07:47AM (8 children)
Python is about 1000 times faster to write than C++, doesn't have so many memory corruption problems, and is easier to read than C++. While the C++ people are still arguing with each other about which language features to use and how to design the code, then arguing with the compiler and debugging the memory bugs, the Python code is already in production doing useful work. Note that it is possible to write bad Python too.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 4, Touché) by DannyB on Tuesday March 14, @02:52PM (7 children)
It is possible to write bad code in any language.
Similarly, it is possible to write good code in any language. I suppose that is even true for Perl. Leonardo could probably have created amazing art using human excrement, but he knew better.
How often should I have my memory checked? I used to know but...
(Score: 2) by turgid on Wednesday March 15, @08:56PM (6 children)
Which is worse: C++ or Python? Or INTERCAL?
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by DannyB on Wednesday March 15, @09:31PM (5 children)
I guess INTERCAL would be the worst.
C++ and Python are both widely used and have stood the test of time.
While I've actually heard of INTERCAL, I would still say, I haven't heard of INTERCAL.
How often should I have my memory checked? I used to know but...
(Score: 2) by turgid on Wednesday March 15, @09:47PM (4 children)
It has a COME FROM [wikipedia.org] statement.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 3, Funny) by istartedi on Wednesday March 15, @10:09PM (3 children)
I'm still waiting for a language with the appropriate counterpart to call/cc: call-with-voltage-continuation.
(Score: 3, Insightful) by DannyB on Thursday March 16, @01:51PM (2 children)
It is too sad that more languages don't have call/cc. Their resistance to this feature is too high.
How often should I have my memory checked? I used to know but...
(Score: 3, Interesting) by istartedi on Thursday March 16, @05:00PM (1 child)
I'm not the least bit sad. Every time I look into it, my head hurts. There also seems to be some valid criticism of it as a control structure, mostly from a performance standpoint. I know there's a "considered harmful" essay for it out there somewhere, but I couldn't find it in short order. I haven't really looked into delimited continuations [okmij.org], which apparently trade the generality of call/cc for better performance.
(Score: 2) by DannyB on Thursday March 16, @09:38PM
I've not used it beyond playing with it. It seems that call/cc is less a control structure itself than a tool for experimenting [washington.edu] with novel control flow mechanisms: trying out new kinds of control flow that are not part of the Scheme language, like the dynamic-wind and exceptions examples.
Here is a more convincing argument in favor of call/cc: continuations are like GOTO but worse!
When I first saw call/cc in Scheme, I was jealous that Common Lisp didn't have it. But perhaps it is best that it doesn't. [stackoverflow.com]
How often should I have my memory checked? I used to know but...
(Score: 3, Interesting) by mth on Tuesday March 14, @09:16AM
While Codon will look familiar to Python programmers, you can't expect existing code bases to just work, as there are some important differences [exaloop.io]. For example, its integers are 64-bit rather than automatically switching to big integers, and its strings don't support Unicode. Collections containing multiple element types are also not supported yet.
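The integer difference is easy to illustrate: CPython promotes to arbitrary precision automatically, while a 64-bit int would wrap around instead. A small sketch simulating two's-complement wraparound with masking (illustrative only; how Codon itself handles overflow is not something this sketch verifies):

```python
# CPython ints grow without bound; per the Codon docs its default
# int type is 64-bit. Simulate what a signed 64-bit value would
# hold for the same number:
def wrap_i64(x: int) -> int:
    x &= (1 << 64) - 1                          # keep the low 64 bits
    return x - (1 << 64) if x >= (1 << 63) else x  # reinterpret as signed

big = 2 ** 63                 # fine in CPython: 9223372036854775808
print(big)
print(wrap_i64(big))          # in a 64-bit int: -9223372036854775808
```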
There are alternatives that require fewer changes to the source code, like PyPy [pypy.org], a JIT compiler that has very good compatibility with CPython, and mypyc [readthedocs.io], a static compiler for Python extensions that generates C code, which has some restrictions but nowhere near as impactful as Codon's. Neither of these produces a standalone executable, but in practice neither will Codon unless you're willing to check and adapt your entire program.
(Score: 5, Interesting) by janrinok on Tuesday March 14, @12:28PM
How many times do we hear the phrase 'Pick the correct tool for the job'? I have lots of Python code helping me manage this site - if you think the site's UI is sometimes a little shaky, you ought to see what we have to use to manage the accounts, submissions and stories. I sometimes need a specific utility that isn't available in slash code, and it rarely takes me more than an hour or so to have something up and running.
So my software does a lot of reading from and writing to the database over the internet, and for that SQLAlchemy (Python) makes my life so much easier. Being an editor, the computer probably spends more time waiting for me to make the next key press, or for the requested data to arrive, than doing anything else. I don't care how fast it can 'wait'; the code is not the bottleneck. I am the bottleneck.
On top of that, the Python standard library is huge and in most cases the building blocks for what I need to do are already in the standard library and installed on my computer.
Other comments on this story have said much the same thing in different ways. If you need the speed of C - use C. If you need the extra bells and whistles of C++, then use C++. If you want to get the job done quickly and with the minimum of fuss, it is certainly worth considering standard Python. I don't need another Python variant which cannot even use the majority of the standard library because it isn't, well, 'standard'.
(Score: 2, Interesting) by DadaDoofy on Tuesday March 14, @02:37PM (2 children)
All you Python users are DESTROYING THE PLANET!!! (wink)
(Score: 3, Funny) by inertnet on Tuesday March 14, @09:08PM
But at least it happens neatly organized.
(Score: 3, Funny) by DannyB on Thursday March 16, @02:00PM
Whenever a statement begins with:
All you ${ethnicity | race | computer-language | disability | etc} people . . .
You can safely discard the rest of the statement as being false.
For example: All you blind people cannot see very well.
Thus I can safely conclude that blind people have excellent vision.
At least the planetary destruction caused by Python is only affecting Earth at the present time.
How often should I have my memory checked? I used to know but...
(Score: 2) by mcgrew on Tuesday March 14, @07:40PM (1 child)
C++ can't hold a candle to hand-assembled machine code.
Carbon, The only element in the known universe to ever gain sentience
(Score: 1, Interesting) by Anonymous Coward on Wednesday March 15, @03:15AM
Yes, it can. All you idiots who think your hand-rolled assembly is faster than a good optimizer on a large scale project are IDIOTS.
(Score: 2) by Subsentient on Thursday March 16, @04:44AM
... CPython is finally seriously considering getting rid of the global interpreter lock.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 1) by jman on Thursday March 16, @11:11AM
His scripting language has certainly lived up to that regard. I still write Bash scripts out of habit, but really should be using Python for any new code.
Now, others have posted about how compiled languages are by nature faster than interpreted ones.
That's all well and good, but in any "my car's faster than yours" discussion one must also factor in that the underlying hardware upon which all that code runs is much quicker these days.
Python was "good enough" for Guido back when he first thought of writing it, and these days it is even more capable (shout-out to PEP 636; I am really enjoying that it finally has a 'match'/'case' statement, which is so much more powerful than the equivalent in other languages. I can see a wee bit of room for syntax improvement, but 'match' is new to the language and, for a first release, really well thought out).
Now that many, many humans are using that interpretive shell, of course there will be those who complain it's not as fast as something compiled.
So, what is "fast enough"? The some-years-old-now i9 in the daily driver is way beyond the old v20 chip used to build my first PC back in the 80's, and whether it's Pandas, Maria, or a straight CSV read, with the Snake's aid it can suck in data like you won't believe. (Of course, having more memory helps the CPU as well. With the v20 I had the standard 1M, of which 640K was readily usable. These days it's 128G, but only because that's all my consumer MB will hold.)
Not writing drivers, or processes that demand microsecond response times or need to handle untold scads of concurrent requests from different users, I'll stick with interpretive for now, knowing that if any project came up that actually needed the extra "oomph", I'd either have to figure out how to compile the already-written code, or (ugh) refactor it all in a true compiled language.
At least in my case, the benefit of the interpretive language that is Python, combined with just how blazingly fast computer hardware is these days, outweighs any imaginary pain endured from the language itself being "slow".