
posted by n1 on Monday June 05 2017, @10:15AM   Printer-friendly
from the git-gud dept.

The Open Source Survey asked a broad array of questions. One that caught my eye was about problems people encounter when working with, or contributing to, open source projects. An incredible 93 percent of people reported being frustrated with “incomplete or confusing documentation”.

That’s hardly a surprise. There are a lot of projects on GitHub with the sparsest of descriptions and scant instruction on how to use them. If you aren’t clever enough to figure it out for yourself, tough.

[...] According to the GitHub Open Source Survey, 60 percent of contributors rarely or never contribute to documentation. And that’s fine.

Documenting software is extremely difficult. People go to university to learn to become technical writers, spending thousands of dollars, and several years of their life. It’s not really reasonable to expect every developer to know how to do it, and do it well.

2017 Open Source Survey

-- submitted from IRC


Original Submission

 
  • (Score: 3, Informative) by kaszz on Monday June 05 2017, @11:27AM (27 children)

    by kaszz (4211) on Monday June 05 2017, @11:27AM (#520671) Journal

    Crappy documentation has been around for decades. Nothing new ;-)

    But one thing I often miss: a lot of (free) software authors won't even spend four sentences telling you what their program is for.

    Like "ls small program to list files in a directory with lots of options."

  • (Score: 3, Touché) by turgid on Monday June 05 2017, @12:27PM (5 children)

    by turgid (4318) Subscriber Badge on Monday June 05 2017, @12:27PM (#520682) Journal
    • (Score: 2) by kaszz on Monday June 05 2017, @12:56PM (4 children)

      by kaszz (4211) on Monday June 05 2017, @12:56PM (#520699) Journal

      Not all programs have man pages. That's when you get into the hmm territory.

      • (Score: 2) by turgid on Monday June 05 2017, @02:33PM (3 children)

        by turgid (4318) Subscriber Badge on Monday June 05 2017, @02:33PM (#520753) Journal

        None of my programs have man pages. That would be another boring and stupid markup language to learn and more stuff to keep up to date :-) And if you write a man page some poor soul might get the mistaken impression that your code actually works and can be used...

        • (Score: 0) by Anonymous Coward on Monday June 05 2017, @07:15PM (1 child)

          by Anonymous Coward on Monday June 05 2017, @07:15PM (#520916)

          FWIW, if you write documentation in almost any format, your build scripts can use pandoc to convert it to man pages.

          • (Score: 2) by kaszz on Monday June 05 2017, @11:43PM

            by kaszz (4211) on Monday June 05 2017, @11:43PM (#521048) Journal

            That is time spent learning the idiosyncrasies of those build scripts and pandoc that could have been spent writing code.

        • (Score: 2) by kaszz on Monday June 05 2017, @11:39PM

          by kaszz (4211) on Monday June 05 2017, @11:39PM (#521047) Journal

          Make a simple text file. It solves a lot for very little effort.

  • (Score: 2) by VLM on Monday June 05 2017, @01:16PM (6 children)

    by VLM (445) Subscriber Badge on Monday June 05 2017, @01:16PM (#520709)

    I'm not necessarily disagreeing, but it's hardly limited to FOSS.

    If you are unfamiliar with VMware's products, let's try a little game: you have five minutes to figure out what the vRealize product does. I'm not accepting manager bullshit like "improve efficiency, performance and availability", which is the most informative line of bullshit bingo that can be cut and pasted from the entire marketing page. I'm asking what it does: emit a close analogy, describe a workflow, or similar.

    Good luck. I'd give less than 50:50 odds the question can be answered that fast. There's a lot of marketing BS to wade through.

    I use it, and I'm not even entirely sure what the answer is, so this is a non-trivial question.

    I've seen this in more enterprise-ish FOSS also. Stupid-as-hell codewords. Let's look at OpenStack. Is Glance the object store? Naw dog, it's the DBaaS. Sorry, DBaaS is Trove, try again. CLOUDKITTY? Seriously, that's your project name? WTF WTF WTF WTF abort abort, just write a huge check to VMware and at least you'll eventually figure it out. What's the OpenStack update manager project name, let me guess: "wipeToBareMetalAndInstallESXi". Stupid subproject names.

    I've used both, and because OpenStack isn't working so hard to nickel-and-dime you, the integration is stronger and much smoother, but the coolest marketing features of VMware don't (yet) exist in OpenStack. Ironically, OpenStack is much less confusing because they're not nickel-and-diming you: there is no way to set up ten licensing tiers of something "super lame and simple" like in VMware.

    • (Score: 2) by choose another one on Monday June 05 2017, @03:52PM (5 children)

      by choose another one (515) Subscriber Badge on Monday June 05 2017, @03:52PM (#520804)

      > I'm not necessarily disagreeing, but it's hardly limited to FOSS.

      No it's not, but at least with commercial software there is potentially some money in the budget to pay for docs. There is also an incentive, up to a point: better docs mean fewer support calls, which means less cost and more profit (assuming annual support contracts rather than pay-per-issue, and assuming the docs aren't so good that no one buys support).

      With FOSS, since the product is typically free, there is no money to pay someone to write docs. The programmer doesn't do it for free, since there is no "itch to scratch" (they already know how it works), and there is no potential revenue benefit; in fact there's a potential loss, as people are then less likely to come to the programmer for support (or enhancements). Similarly, a documentation writer is not going to do it for free, because they don't know enough unless they buy the programmer's time to explain it.

      Of course then there is big-commercial-FOSS like OpenStack, which is somewhere in between, with an actual team and budget that may cover docs, but no revenue incentive for them.

      • (Score: 3, Interesting) by VLM on Monday June 05 2017, @05:03PM (4 children)

        by VLM (445) Subscriber Badge on Monday June 05 2017, @05:03PM (#520838)

        better docs

        That brings us back to what does better mean. For a laugh try my "vRealize challenge" and try to decode the marketing-speak in less than five minutes.

        I'd say in excess of 95% of the VMware website is useless. It's even funnier in that the target is moving so fast that ESX 6.5 can't really be used with 5.0 docs. A practical example: the legacy Flash web UI is the only option for entering license keys into vCenter with 6.5. You can't enter license keys using the HTML5 web interface, and the instructions are INFURIATING, as anything older than six months on the VMware website or in "helpful" blogger posts doesn't acknowledge this. But if you're running 6.5 and can, in freaking 2017, find a way to run a web browser with legacy old-fashioned Flash, and then use the legacy Flash web interface, then and only then can you enter licensing key information into vCenter. Otherwise the HTML5 UI is visually identical, except that option is missing. Hilarious, huh? The entire system is like that, top to bottom, across the whole field. Nothing but the equivalent of "in-jokes" as far as the eye can see.

        I've had some hilarious adventures converting standard switches to distributed switches; very herculean efforts if you're playing "stupid VLAN games". From memory, you can only configure or use distributed switches from the vCenter/vSphere Flash web UI, but you can only add and remove standard switches from the individual ESXi host web interface. So yeah, there are three web interfaces for "the same thing": the ESX host UI, the legacy Flash UI for vSphere, and the vCenter HTML5. And some tasks can only be done in one or two of the three UIs. Hilarious. And the way it's documented is: you figure it out the hard way. Which is why VCP cert people, etc., earn such big bucks.

        In fact, if you're playing stupid VLAN games, you can't install the second half of vCenter without some interference to the ESX host it's being installed upon: it'll insist on dropping onto the (probably standard) switch on VLAN 0 regardless of what VLAN your mgmt network actually uses. And then there's tribal knowledge that's only semi-documented (well, maybe it's in the release notes now), like configuring more than one DNS server crashing the vCenter installation (WTF?).

        Also, sometimes vSAN needs a dead chicken waved over it. vSAN, why are you marking a disk as unhealthy for no discernible reason, why why why...

        vSAN is also hilarious in that AFAIK the only way to create a new vSAN is as part of installing a new vCenter appliance. I know the HTML5 web UI can't create a cluster using vSAN (I think?); maybe there's some way from an already-installed vCenter using the legacy Flash web UI?

        The joy of scratch partitions and log folders is legendarily hilarious. Just don't even go there.

        You can guess what I've been playing with for the last two weeks in my spare time. VMware is aggravating and ungodly expensive, but fun when it works.

        there is no money to pay someone to write docs, the programmer doesn't do it for free since there is no "itch to scratch"

        I have seen this in decline with the rise of unit testing, and even halfway attempts at unit testing. It's a pretty short step from there to "cut and paste this test from the unit tests into the docs file, call it a verified and tested example, and we're done here." I've seen worse.

        I've also seen programmers who "hate writing docs", but what they really hate is writing corporate-ized docs. Real docs, not so bad, most of the time.

        I've been programming since '81, and as I've become a better programmer I've gotten better at writing docs. It might be a condemnation of the entire field that when everyone's a noob, everyone writes docs like a noob, but as the field matures the professionals separate out and the docs just aren't that bad. If you can't express yourself in Her Majesty's English, then you sure as hell can't express yourself in Clojure or SQL.

        • (Score: 2) by choose another one on Monday June 05 2017, @06:27PM (1 child)

          by choose another one (515) Subscriber Badge on Monday June 05 2017, @06:27PM (#520887)

          That brings us back to what does better mean.

          Yeah, maybe if I had said something like "more useful" that would have been better... :-)

          For a laugh try my "vRealize challenge" and try to decode the marketing-speak in less than five minutes.

          Decoded: the answer reduces to "it's marketing BS".

          Actually it looks like it's a sort of management console for creating, managing and monitoring "everything", from bare metal (obviously it can't create bare metal, but it probably just gives an undocumented error number if you try) through virtual stuff to cloudy shite (and hybrids thereof). It therefore gives you one unholy mess of a tool to do everything badly. It won't give you all the options you'd have by doing things directly with the individual tools, yet it is almost certainly a cobbled-together collection of individual tools itself, and they probably don't look alike, function alike or indeed have any useful commonality. On top of that it throws in templates and things so you can do more stuff badly and with even less flexibility: VMware's idea of what you want rather than what you actually wanted. Then the UI(s) are probably so bad that it needs APIs/orchestration (i.e. scripting) to do anything complex, and by the time you've learnt that, you might as well have just scripted the underlying stuff directly.

          How did I do? Took longer to write that description than to read marketing :-)

          I have seen this in decline with the rise of unit testing

          Me too; in fact it is more or less baked into the definition of TDD: the test is the spec for what the unit does, and is therefore the documentation. At least the docs can't get out of date, I suppose.
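
          As a minimal sketch of the test-as-documentation idea (slugify and everything around it is hypothetical, not from any real project):

            (require '[clojure.string :as str]
                     '[clojure.test :refer [deftest is run-tests]])

            ;; Hypothetical unit under test.
            (defn slugify
              "Lowercase a title and replace whitespace runs with hyphens."
              [title]
              (str/replace (str/lower-case title) #"\s+" "-"))

            ;; The test doubles as documentation: it shows the call shape and
            ;; the expected result, and unlike prose it fails loudly when it rots.
            (deftest slugify-spec
              (is (= "hello-world" (slugify "Hello World"))))

            (run-tests) ; run the tests in the current namespace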

          • (Score: 2) by VLM on Monday June 05 2017, @07:49PM

            by VLM (445) Subscriber Badge on Monday June 05 2017, @07:49PM (#520936)

            Actually it looks like it's a sort of management console for creating, managing and monitoring "everything", from bare metal (obviously it can't create bare metal, but it probably just gives an undocumented error number if you try) through virtual stuff to cloudy shite (and hybrids thereof). It therefore gives you one unholy mess of a tool to do everything badly. It won't give you all the options you'd have by doing things directly with the individual tools, yet it is almost certainly a cobbled-together collection of individual tools itself, and they probably don't look alike, function alike or indeed have any useful commonality. On top of that it throws in templates and things so you can do more stuff badly and with even less flexibility: VMware's idea of what you want rather than what you actually wanted.

            Ah, that's just vSphere. Other than having multiple almost-identical web UIs, vSphere isn't all that bad. The cobbling is more related to licensing. You can buy a system without disaster recovery automation, so aspects of restoring a dead host that are baked into the cake in OpenStack have to be kinda grafted onto vSphere and can't be the default. From memory, you turn on DRS migration at the cluster level, whereas on OpenStack there are no pay options, so it's kinda enabled by default, AFAIK.

            Another weird example: OpenStack has "neutron", the network mismanager, with everything baked in, and you can't really do VMware-style "standard ethernet switches" with OpenStack, AFAIK, because OpenStack only offers distributed switching. But you can buy ESX hosts without vSphere to manage distributed ethernet switches, so being licenseable, distributed switching has to be bolted on. If you're doing VirtualBox or just screwing around on Linux, VMware "standard ethernet switches" are just Linux bridge networking, and what you configure has no effect on the config of other hosts. Distributed switches are more than just automation; there's some weird shared uplink routing stuff going on that I haven't explored.

            Or in summary: ironically, because VMware isn't free, there are purchase options for networking that make networking more complicated on VMware than on OpenStack, where everything's free. So why would anyone ever use anything but distributed ethernet switches on simpler OpenStack?

            Then the UI(s) are probably so bad that it needs APIs/orchestration (i.e. scripting) to do anything complex, and by the time you've learnt that, you might as well have just scripted the underlying stuff directly.

            Yeah, that's pretty much it. I'm a bit fuzzy on it myself. There is an API to mess with VMware stuff. Maybe that API is part of vRealize, LOL.

        • (Score: 2) by kaszz on Monday June 05 2017, @11:50PM (1 child)

          by kaszz (4211) on Monday June 05 2017, @11:50PM (#521053) Journal

          Without proper docs it's possible to charge more for support calls...

          • (Score: 2) by VLM on Tuesday June 06 2017, @03:48PM

            by VLM (445) Subscriber Badge on Tuesday June 06 2017, @03:48PM (#521377)

            In theory true, but even if generally true, there's a spectrum of it. Cisco, for example, used to have legendarily good docs online, AND I was personally involved at a company where Cisco certs meant a price deduction on the huge bill, a deduction big enough that paying me to shitpost and play video games all day would still have been a net profit to the company. Of course, rather than shitpost and play video games, I got stuck providing BGP support to our customers, which I was very good at, although it got boring after the 50th time some guy tried to send us a 0/0 route or wanted to advertise his previous ISP's subnet, LOL. "Well see, I was tryin' to redistribute my RIP routes into BGP." "Excuse me sir, WTF are you doing, STFU and configure it the way I tell you to" (although I spent about four hours per customer, or so it seemed, doing it very politely; the preceding one-liner summarizes it remarkably well). I also felt like a hostage rescue negotiator some days: "I see my routes are dampened because of flapping, what does that mean, I guess I'll go reboot my router a couple times till it starts working again." "Nooooooooo, nooooo, don't jump, your router has a lot to live for, nooooooo."

  • (Score: 3, Funny) by DannyB on Monday June 05 2017, @03:44PM (12 children)

    by DannyB (5839) Subscriber Badge on Monday June 05 2017, @03:44PM (#520797) Journal

    Crappy documentation has been around for decades. Nothing new ;-)

    Good documentation has been around for decades too.

    But it was three feet thick. And bolted to a table. Documentation could not physically be moved to another table, let alone taken from the computer room. You had to memorize three feet of documentation. There was no GUI. It was uphill both ways. Hey you kids, get off my lawn!

    --
    People today are educated enough to repeat what they are taught but not to question what they are taught.
    • (Score: 2) by VLM on Monday June 05 2017, @05:20PM (11 children)

      by VLM (445) Subscriber Badge on Monday June 05 2017, @05:20PM (#520853)

      Like you I miss the pre-internet days.

      I think you're mixing your memes. I had momentary access to VMS and a DEC wall; a DEC wall was about 3 feet of color-coded binders, and supposedly it all made sense after you memorized all three feet. All I remember was that the CLI for VMS was DCL, the DEC command language or something, and it was a fabulous CLI with respect to consistency and online help. I had much longer-term access to SunOS (pre-Solaris Unix) and HP-UX, and for one of them the machine room did have a users' manual that boiled down to a printout of all the man pages in the form of a 4-inch-thick phonebook, on a stand bolted to the machine room desk console. Supposedly that manual cost $1000, or so they told us, probably to stop us from ripping out pages.

      What I miss about the pre-internet days was that when IBM or DEC shipped you a set of binders, that honestly was all you needed to know. You didn't have to google up 200 sources and figure out which are clickbait and which are real.

      IBM had a modern 90s CMOS mainframe, I forget the name, but I remember reading an entire ATM training manual of a couple hundred pages in an emulated 3270 terminal on roughly a 386 in about '95 or '96. I've never experienced anything quite like it since. I've seen the occasional "here's a PDF, best of luck" as docs, but IBM had amazingly detailed manuals that all worked together.

      As for retro stuff, DEC's PDP-8 book series of manuals was simply amazing; I've never seen anything like it since from a single source. Imagine all of 80s home computer tech times 100 or so, all single-sourced. Great manual series.

      Some other retro DEC stuff that was decent was the TOPS-10 manual series. Supposedly TOPS-20 was a better OS with inferior docs. TOPS-10 was a cool OS.

      • (Score: 2) by DannyB on Monday June 05 2017, @05:53PM (10 children)

        by DannyB (5839) Subscriber Badge on Monday June 05 2017, @05:53PM (#520865) Journal

        Back when I was a snot-nosed 20-year-old kid, I was able to brain-download manuals. Much later, in about 1986 (as a Mac developer), I discovered Lisp. There was this XLisp I could download from CompuServe. Later I spent $495 (in 1989 dollars) on Macintosh Common Lisp. Used it until about 1993. Loved it. Still have every line of code I wrote. (Now I like Clojure.) I had bought my own copies of CLtL and CLtL2 [cmu.edu] and brain-downloaded those. They were well-thumbed.

        For the youngsters: CompuServe is something from a different millennium. Before web sites. Before dial-up internet. Before Usenet was popular. Even before AOL. Back when modems were 2400 bits/second, as God intended.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
        • (Score: 2) by VLM on Monday June 05 2017, @06:19PM

          by VLM (445) Subscriber Badge on Monday June 05 2017, @06:19PM (#520884)

          Back when modems were 2400 bits/second as God intended.

          Noobs. My dad and grandfather had TRS-80s with 300 baud modems. Eventually my dad upgraded to a TRS-80 4P with an internal 1200 baud modem, which was pretty fast. Radio Shack used to include little carbon-paper "free 5 hours of CompuServe" accounts with all manner of products.

          Believe it or not, I was BBSing in '87 or '88 with a 1200 baud modem I got from the "free" bin at a local ham radio fest (back in those days, radio fests had a lot of computers). I believe at Christmas of '89 I got a 2400 baud modem; doubling my speed was quite nice. That served until I blew like $500 in '92 or '93 and got a 14.4k, and around then internet access was becoming a thing, so I got a SLIP account with a dedicated static address. Pretty cool.

        • (Score: 2) by kaszz on Tuesday June 06 2017, @12:06AM (8 children)

          by kaszz (4211) on Tuesday June 06 2017, @12:06AM (#521056) Journal

          Which of the Lisps (and other functional languages) is worthwhile, btw? As in being powerful enough to do whatever one might wish without overcomplicating matters?
            * Erlang
            * Common Lisp
            * Clojure
            * Haskell

          CompuServe was like a BBS powered by mainframes connected worldwide, right?
          Usenet... killed by the eternal September and by lawyers peddling green cards. A lot of the big hosts handling it went away, and others could not pick it up because of the sheer volume and processing involved. But now capacity is cheap, so maybe a revival is possible?

          • (Score: 2) by hendrikboom on Tuesday June 06 2017, @02:08AM (6 children)

            by hendrikboom (1125) Subscriber Badge on Tuesday June 06 2017, @02:08AM (#521109) Homepage Journal

            There are two interesting Lisps; both of them are Scheme implementations.

            (1) Racket: Racket's specialty is multilingual programming. You can define new syntaxes and new semantics (i.e. new languages) and compose a program from modules written in those languages. The feature seems to cover everything from minor tweaks to completely different languages. For example, you can write modules in Algol 60 or in Scribble. Scribble is a notation for Scheme that looks and feels like a text formatting markup language.

            And Racket has excellent tutorials and documentation and a supportive mailing list, to return this discussion to the original subject.

            (2) Gambit. Gambit is a Scheme implementation that compiles directly to C (or C++), or you can just use the interpreter. Its virtue is that you can actually introduce new features and specify just what C code is to be generated from them. As a bonus you get a rather flexible scripting language. And you get to use low-level C-isms when you want.

            -- hendrik

            • (Score: 2) by kaszz on Tuesday June 06 2017, @03:20AM (5 children)

              by kaszz (4211) on Tuesday June 06 2017, @03:20AM (#521137) Journal

              Do any of these Lisps make efficient use of the multiple cores that are common now but were not when those languages came into being? They should be able to exploit that even better than many other languages, simply by their nature.

              • (Score: 2) by DannyB on Tuesday June 06 2017, @02:17PM (1 child)

                by DannyB (5839) Subscriber Badge on Tuesday June 06 2017, @02:17PM (#521328) Journal

                I would agree that Racket and Gambit are interesting, if you are interested in Scheme. There is also Chez Scheme, which was commercial but then became open source.
                https://en.wikipedia.org/wiki/Chez_Scheme [wikipedia.org]
                http://www.scheme.com/ [scheme.com]
                https://github.com/cisco/ChezScheme [github.com]

                Incremental native-code compilation to produce native binaries for the PowerPC, SPARC, IA-32, and x86-64 processor architectures. Chez Scheme supports R6RS.

                Chez Scheme has a windowing and graphics package called the Scheme Widget Library, and is supported by the portable SLIB library.

                I like Clojure (clojure.org). Clojure is a modern Lisp, and it has reach: Clojure runs on the JVM and .NET, while ClojureScript compiles to JavaScript and runs in the browser or on other JavaScript implementations.

                A momentary sidetrack on the JVM (Java Virtual Machine) . . .
                The JVM is the runtime engine that runs Java bytecode, which is emitted by the Java compiler, the Clojure compiler, and other language compilers (Kotlin, Scala, Groovy, Jython, JRuby, etc.).
                The JVM is interesting because it is an industrial-strength runtime platform. A decade and a half of research has gone into the JVM's JIT compilers, C1 and C2, and its multiple GCs. When you run the JVM you can select "server" mode or "client" mode (i.e., tune it for running on a server or on a workstation). You have a choice of GC algorithms, and gobs of GC tuning parameters. If you simply give it plenty of RAM, the JVM is very fast. The GCs run in parallel on multiple CPU cores, so if you throw plenty of cores at it, you may never see any GC pause times.

                You can get multiple implementations of the JVM: Oracle's, or Azul's Zing, which is a free binary build (with source) based on the open-source OpenJDK. (Oracle's Java is also based on OpenJDK.) If you're on IBM mainframe hardware, IBM provides a Java runtime. A Java runtime is available on the Raspberry Pi. Languages that compile to the JVM can be used for Android development (which compiles JVM bytecode into Dalvik bytecode to run on Android).

                Java (or rather the JVM and its various languages) is popular in enterprise software. Java is used for high-frequency trading (yes, really).

                If you need a JVM that runs with hundreds of GB of RAM and up to 768 CPU cores, look at Azul Zing on Linux.

                We now return to Clojure . . .

                Without being grandiose, Clojure runs on an ordinary desktop java runtime, even on a humble Raspberry Pi.

                Clojure is functional programming.

                All data in Clojure is immutable. That means you can't alter anything in a data structure. If I have an array of 50 million elements and I change element 1,253,507, I get back a new array with that element changed; the original array is unaltered. Yet the performance guarantee is "close to" what you expect of arrays for indexed access. How is this magic implemented behind the scenes? With 32-way trees. When you changed element 1,253,507, a certain leaf node in the tree was changed. That new node, along with new nodes all the way up to a new root node, becomes the new array. The new tree shares structure with the old array's tree, so only a few tree nodes are recreated, at a cost of O(log32 n). So it's close to direct indexed performance, for the huge benefit of immutable data structures. This means there is no such thing as (SETF (CADR x) 'foo), which would be surgically modifying a data structure. There are similar operations that appear to do this, but they reconstruct the affected portions of the data structure, using the underlying (invisible to you) implementation of immutability.
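
                A minimal sketch of that at the REPL (a tiny vector for brevity; the same guarantee holds at 50 million elements):

                  ;; "Changing" an element of a persistent vector returns a new
                  ;; vector; the original is untouched, and the two share almost
                  ;; all of their 32-way tree structure internally.
                  (def v  (vec (range 10)))      ; [0 1 2 3 4 5 6 7 8 9]
                  (def v2 (assoc v 3 :changed))  ; copies only the root-to-leaf path
                  (nth v 3)   ;=> 3
                  (nth v2 3)  ;=> :changed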

                All variables are final. (Meaning, once a variable is assigned a value it cannot be changed to a different value.)

                This may sound restricting, but Clojure definitely has the right constructs to give it a functional style that mitigates what you might think of as restrictions. The end result is better quality code. You can see Haskell inspiration.

                Clojure has built-in types for lists, vectors (its arrays), sets and hash maps. Lists are what you already know. Vectors are what you expect of arrays: you can alter any element by direct index, far more cheaply than in a list, but with the guarantee that the original is immutable (you get back a new vector with the single element altered). Sets are what you expect: like an array or hash map, except an item can only occur once in the set. Hash maps are key-value stores like in many modern languages, but backed by an implementation that uses different backing for different sizes. A small hash map of a dozen items will not use hashing; internally it is merely an array that is searched for the key. At some invisible threshold the underlying implementation becomes a real hash map. Clojure has lots of hidden implementation optimizations like this for short lists, vectors, etc., and implementations of functions specialized for certain common parameters.

                Clojure has excellent Java interoperability (when used on the JVM). Think of this like other Lisps having an FFI to C. Clojure and Java (or other JVM language) code is fully interoperable. Clojure data structures can be passed to Java, and even manipulated or created in Java, by importing the Clojure classes to access the Clojure data structure APIs. Java objects can be passed to Clojure and accessed with dot notation to reach their methods and fields. You can write JVM classes in Clojure for compatibility with other Java code. For example, if a Java library needs to be passed a callback, it is easy to write that callback in Clojure without writing any Java.
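
                A small interop sketch (the classes are standard library; the particular choices are just illustrative):

                  ;; Calling Java with dot notation: constructor, then instance method.
                  (def sb (StringBuilder. "docs"))  ; new StringBuilder("docs")
                  (.append sb " matter")            ; sb.append(" matter")
                  (str sb)                          ;=> "docs matter"

                  ;; Passing Clojure code where Java expects a callback object:
                  ;; reify implements the interface inline.
                  (def task (reify Runnable
                              (run [_] (println "called back from Java"))))
                  (doto (Thread. task) (.start))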

                Because it runs on the JVM, Clojure has access to the EMBARRASSINGLY large riches of the Java libraries. Code to do anything under the sun, including access to the GPU. And there are Clojure libraries for working with the GPU.

                Clojure has an excellent concurrency story. I won't go into it all here, but you can write high-level code that will run on all your CPU cores. In Clojure, map is like MAPCAR in CL. Instead of map, you can use pmap to process the elements in parallel: if I have a list of ten million items and I need to run function F on all of them, I can use (pmap F items) and watch all my CPU cores light up.
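
                A sketch of the difference (the timings are illustrative and depend on your core count):

                  ;; A deliberately slow function so the parallelism is visible.
                  (defn slow-inc [n] (Thread/sleep 100) (inc n))

                  ;; map is sequential; pmap runs slow-inc on several elements at
                  ;; once. Both are lazy, so doall forces the work inside (time ...).
                  (time (doall (map  slow-inc (range 32))))  ; roughly 32 x 100 ms
                  (time (doall (pmap slow-inc (range 32))))  ; a fraction of that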

                Clojure has a nice literal syntax for vectors, sets and hash maps. Thus you can render arbitrarily complex data structures to text, and read them back into internal data again. Think JSON, but much better.
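
                For example, a nested structure round-trips through plain text (a minimal sketch using the built-in EDN reader):

                  (require '[clojure.edn :as edn])

                  ;; Literal syntax: [] vector, #{} set, {} map, :keywords as keys.
                  (def config {:hosts ["alpha" "beta"] :ports #{80 443} :env :prod})

                  (def text (pr-str config))         ; render to a string
                  (= config (edn/read-string text))  ;=> true, reads back as equal data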

                You'd be amazed at what people do in Clojure because of Java interop: playing with MIDI and synthesizers, even software synthesizers (see Overtone), or using Clojure with OpenCV (computer vision).

                Let me throw out another interesting one: Pixie Lisp.

                It compiles to native code and is built with the RPython toolchain, which gives it a good C FFI. There were early implementations on the Raspberry Pi with direct Lisp access to WiringPi, which provides access to the hardware GPIO pins (digital / analog / SPI / I2C / PWM input and output pins).

                Pixie Lisp is also Clojure-inspired.

                Finally, let me mention Shen, a sufficiently advanced Lisp (as in, indistinguishable from magic). It is like having Haskell and Prolog baked into the Lisp itself.

                Hope that helps.

                --
                People today are educated enough to repeat what they are taught but not to question what they are taught.
                • (Score: 2) by DannyB on Tuesday June 06 2017, @02:19PM

                  by DannyB (5839) Subscriber Badge on Tuesday June 06 2017, @02:19PM (#521329) Journal

                  Duh. Where I wrote "Azul's Zing, which is a free binary build", I meant Azul Zulu.

                  Zing is the one that runs on hundreds of GB of ram with up to 768 cpu cores.

                  --
                  People today are educated enough to repeat what they are taught but not to question what they are taught.
              • (Score: 2) by VLM on Tuesday June 06 2017, @03:59PM (1 child)

                by VLM (445) Subscriber Badge on Tuesday June 06 2017, @03:59PM (#521385)

                There are no obvious factual errors in Danny's reply to the best of my knowledge at this time.

                A very short answer to your very specific question, in a limited sense, is "yes".

                A longer answer to that specific question including plenty of low level implementation examples is at:

                http://www.braveclojure.com/concurrency/ [braveclojure.com]

                Clojure for the Brave and True is a bit of an acquired taste, but I figured it would be more fun than a link into clojuredocs.org. ClojureDocs is best used if you already know you want a future and merely forgot some detail about futures.

                It's not a magic or magic-ish language like Erlang; you kinda have to intentionally parallelize stuff in Clojure for it to run multi-core. But it's not hard either, and there's plenty of support.

                It works.
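
                The future mentioned above is about as small as explicit parallelism gets in Clojure; a minimal sketch:

                  ;; future runs its body on a pooled thread immediately;
                  ;; deref (the @ reader macro) blocks until the value is ready.
                  (def answer (future (reduce + (range 10000000))))
                  (println "main thread keeps going while that sums...")
                  @answer ;=> 49999995000000
                  ;; In a standalone script, call (shutdown-agents) at the end so
                  ;; the pool's idle threads don't keep the JVM alive for a minute.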

                • (Score: 2) by DannyB on Tuesday June 06 2017, @04:39PM

                  by DannyB (5839) Subscriber Badge on Tuesday June 06 2017, @04:39PM (#521409) Journal

                  This [objectcomputing.com] is an intro that I found useful as an intro to Clojure, a long time ago.

                  I think I remember an amusing YouTube video where Rich Hickey mentions running some cool core.async program on Azul's custom hardware with hundreds of cpu cores.

                  I am excited to try Pixie Lisp on a Pi. So many projects to tinker with. So little time. Ugh.

                  --
                  People today are educated enough to repeat what they are taught but not to question what they are taught.
              • (Score: 2) by hendrikboom on Saturday June 10 2017, @01:22AM

                by hendrikboom (1125) Subscriber Badge on Saturday June 10 2017, @01:22AM (#523345) Homepage Journal

                I believe Gambit has a multicore implementation in beta testing, including multicore garbage collection, but not garbage collection running in parallel with program execution.

                Racket may well be working on something similar, but I don't remember the details.

                -- hendrik

          • (Score: 0) by Anonymous Coward on Tuesday June 06 2017, @10:04AM

            by Anonymous Coward on Tuesday June 06 2017, @10:04AM (#521242)

            Might be, minus the binaries.

            What killed Usenet wasn't the volume per se, it was the BINARIES being pushed to newsgroups, even non-binary newsgroups once ISPs started filtering the binary ones. There was a chart up a few years back showing Usenet daily volume from the 80s to the early-mid '00s. The content went from a few gigabytes a day, to tens of gigabytes a day, to a few hundred gigabytes a day, and by the '00s it was into terabytes a day. Yes, terabytes. In an era when drives were still sub-terabyte, and backbone capacities were probably below... OC48(?), with OC3 being reasonably high end for non-corporate local ISPs.

            THAT, combined with spam, is what killed Usenet. It could be re-established today, but fundamental changes should be made to the protocol: to help synchronize mirroring of chains of posts, to allow modifications rather than reposts for changed posts, and to better peer new blocks of messages. (NNTP was point-to-point, which is responsible for a lot of the traffic overhead. It was okay in the UUCP era, but nowadays something bittorrent-esque for spreading publishes across multiple fast hosts could help immensely.)

  • (Score: 2) by Wootery on Tuesday June 06 2017, @04:14PM

    by Wootery (2341) on Tuesday June 06 2017, @04:14PM (#521394)

    Surprised no-one seems to have mentioned this, but I suspect a major reason for bad documentation of FOSS is boredom.

    To be sure, plenty of FOSS is written by paid developers, but lots of it is written 'for fun', and writing documentation isn't something many programmers enjoy doing.

    I don't see any reason to jump to the conclusion that FOSS developers can't write documentation.